http://www.evernote.com/shard/s48/sh/780e7534-cee4-4278-87f7-449e81d2acfc/23124af97669edefaca70c4f6e719426
http://www.evernote.com/shard/s48/sh/c5653239-108b-4515-a19e-d69fa3ad92c1/2476cb2c13785e02a391d05a7daf7507
http://jacobian.org/writing/web-scale/
http://thebuild.com/blog/2010/10/27/things-i-do-not-understand-web-scale/
<<showtoc>>
In the notes below I discuss "scale out vs scale up" and "speed vs bandwidth":
** T3 CPUs thread:core ratio http://bit.ly/2g5bdPA
** Mainframe (MIPS) to Sparc sizing http://bit.ly/2g5dwSB
** DB2 Mainframe to Oracle Sizing http://bit.ly/2g53JMm
You can read the URLs above for more details, but here are the essential parts of the discussions.
! The "scale out (x86) vs scale up (T5)" boils down to two factors
!! Factor 1) bandwidth vs speed
<<<
[img[ http://i.imgur.com/oP70UP8.png ]]
The T5 offers more aggregate bandwidth capacity but slower CPU speed than the x86 CPUs of Exadata. But with Exadata you need more compute nodes to match the bandwidth capacity of the T5.
If you compare the LIO/sec performance of:
* SPARC T5 - 8 threads pinned to a core, with a per-core SPECint_rate of 29
vs
* Xeon E5 (X4-2 in this example) - 2 threads pinned to a core, with a per-core SPECint_rate of 39
you'll see the following curve. The 2 threads on Xeon give you a higher LIOs/sec value than the first 2 threads of SPARC simply because of the speed difference.
[img[ http://i.imgur.com/MjMn10y.png ]]
> Y-axis: **Logical IOs/sec** X-axis: **CPU thread**
But when you saturate the entire platform, the SPARC, given that it has a lot of "slower" threads, can in effect consolidate more LIO workload, but at the price of per-LIO speed. Meaning: an OLTP SQL executing at .2ms per execute on Xeon will be much slower on SPARC. That's why I prefer to scale out using Xeon machines (with faster CPUs) rather than scale up with SPARC. But it depends on whether the application can take advantage of RAC (a RAC-aware app).
[img[ http://i.imgur.com/wDoXUjp.png ]]
> Y-axis: **Logical IOs/sec** X-axis: **CPU thread**
But the X4-8 is pretty promising if they need a scale-up kind of solution; it is faster than the T5, M5, and M6, with a per-core speed of 38 vs 30 (see comparison here [http://bit.ly/2fOzM06](http://bit.ly/2fOzM06)),
and the X4-8 (https://twitter.com/karlarao/status/435882623500423168) is pretty much the same speed as the compute nodes of the X4-2, so you also get the linear scaling they claim for the T5, M5, and M6, but with a much faster CPU.
<<<
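To make the per-core numbers above concrete, here is a minimal sketch of how the per-core values are derived; the figures are taken from the SPEC listing further down this page:
{{{
# per-core SPECint_rate2006 = published result / total core count
specs = {
    'SPARC T5-8 (Apr-13)':       (3750, 128),  # (result, cores) -> ~29.3 per core
    'Sun Server X3-2 / E5-2690': (705, 16),    #                 -> ~44.1 per core
}
for name, (result, cores) in specs.items():
    print(f'{name}: {result / cores:.1f} per core')
}}}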
!! Factor 2) the compatibility of the apps to either T5 (scale-up) or x86 (scale-out)
<<<
If the app is an old-school, mainframe-specific program, then they might want to stick with the zEC12.
<<<
! I like this reply by Alex Fatkulin about Xeon vs SPARC
https://twitter.com/alexfatkulin
<<<
I think Xeon is a lot more versatile platform when it comes to the types of workloads it can handle.
A very strong point about Xeon is that it's a lot more forgiving of the type of workload you want to run. It can be a race car or it can be a heavy-duty truck. A SPARC is a bulldozer and that's the only thing it is and the only thing it can be. You might find yourself in the wrong vehicle in the middle of the highway ;-)
SPARC is like a bulldozer: it can move at a constant speed no matter what's in front of it, but that doesn't change the fact that it moves slowly.
Some years ago I was involved with a company doing aircraft maintenance software which bought a bunch of SPARC servers thinking that a lot of threads is cool - otherwise why would SUN call these CoolThreads? The problem was that they had a lot of very complex logic sensitive to single-threaded performance which wasn't designed to run in parallel. The end result was that SPARC could not complete a maintenance cycle within its window. For those unfamiliar with the subject, the only thing worth mentioning is that it kept planes grounded. So it was a case where the software couldn't take any advantage of the high throughput offered by the SPARC platform, while SPARC couldn't offer high single-threaded performance. Guess what these SPARC servers got replaced with. Now granted this was all before T5, but the fact of the matter is T5 continues to lag significantly behind the latest Intel generations in core IPC.
<<<
! Discussions on thread:core ratio with Frits Hoogland
https://twitter.com/fritshoogland
AFAIK, these are the heavily threaded ones (thread:core ratio of 8:1). When I calculate the CPU time per second from AWR, any CPU time above the number of threads roughly means queue time, right?
<<<
<- Yes, and that should show up as CPU Wait on the Oracle side of things
<<<
With the calculated CPU time (i.e. queue time subtracted), I get the number of active threads. However, in my book only one of the eight threads can truly execute; the others are visible as running on CPU, but in reality are waiting to truly execute on the core.
<<<
<- this will manifest as diminishing returns on workload level LIO/sec performance (imagine the LIO load profile section in AWR) as you saturate more threads.. imagine a line like this https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=diminishing%20returns%20curve
<<<
This should mean that when CPU threads are busier than the core can handle, you get increased time on CPU, which in reality is only waiting time - a CPU thread waiting/stalling to run.
<<<
<- you’ll first see the diminishing returns on LIO/sec performance, and then the CPU wait afterwards when the line reaches a plateau and it gets worse and worse
<<<
Can you confirm this is how that works? Or otherwise please correct me where I am making a mistake.
<<<
<- From my tests investigating thread performance, I’ve noticed there are two types of LIO workload: the short ones and the long ones. Kevin showed this at Oaktable World before and even at RMOUG way back; I think he calls them big and small. The idea is, the short ones tend to share pretty well with other threads on, let’s say, the same core. Meaning the core gets busy and the threads are time-slicing quickly enough that the net effect is pretty good LIO/sec performance. This also assumes all threads are being utilized evenly, and still the diminishing returns apply as you saturate more and more threads. On the other hand, the long ones tend to hog the time slice, which results in overall lower LIO/sec performance. This behavior is better explained in this wiki entry [[cpu centric benchmark comparisons]] or here http://bit.ly/1xOJrEu
And these two types can mix in a given workload.
<<<
This also means that when going to a (recent) Xeon, performance will get a huge boost, because the CPU time will decrease (significantly) since the thread:core ratio is much lower (2:1).
This means it’s not only the SPECint ratio difference that makes the system faster, but also the removal of the excess stalling on CPU.
<<<
<- Yes it’s possible that it’s a contribution of all those factors. But I think the boost is mainly driven by the speed (newer CPU), or you can also say the threads per core are much faster on Xeon than on SPARC.
If you compare the LIO/sec performance of:
* SPARC T5 - 8 threads pinned to a core, with a per-core SPECint_rate of 29
vs
* Xeon E5 - 2 threads pinned to a core, with a per-core SPECint_rate of 44
you’ll see the following curve. The 2 threads on Xeon give you a higher LIOs/sec value than the first 2 threads of SPARC simply because of the speed difference.
[img[ http://i.imgur.com/MjMn10y.png ]]
Y-axis: **Logical IOs/sec** X-axis: **CPU thread**
But when you saturate the entire platform, the SPARC, given that it has a lot of “slower” threads, can in effect consolidate more LIO workload, but at the price of per-LIO speed. Meaning: an OLTP SQL executing at .2ms per execute on Xeon will be much slower on SPARC. That’s why I prefer to scale out using Xeon machines (with faster CPUs) rather than scale up with SPARC. But it depends on whether the application can take advantage of RAC (a RAC-aware app).
[img[ http://i.imgur.com/wDoXUjp.png ]]
Y-axis: **Logical IOs/sec** X-axis: **CPU thread**
But the X4-8 is pretty promising if they need a scale-up kind of solution; it is faster than the T5, M5, and M6 (see wiki [[M6, M5, T5]]), with a per-core speed of 38 vs 30.
X4-8: https://twitter.com/karlarao/status/435882623500423168
And the X4-8 is pretty much the same speed as the compute nodes of the X4-2, so you also get the linear scaling they claim for the T5, M5, and M6, but with a much faster CPU.
SPECint_rate2006 reference
Below are the columns of the raw listing (header row):
Result/# Cores, # Cores, # Chips, # Cores Per Chip, # Threads Per Core, Baseline, Result, Hardware Vendor, System, Published
$ less spec.txt | sort -rnk1 | grep -i sparc | grep -i oracle
30.5625, 16, 1, 16, 8, 441, 489, Oracle Corporation, SPARC T5-1B, Oct-13
@@29.2969, 128, 8, 16, 8, 3490, 3750, Oracle Corporation, SPARC T5-8, Apr-13@@
29.1875, 16, 1, 16, 8, 436, 467, Oracle Corporation, SPARC T5-1B, Apr-13
18.6, 2, 1, 2, 2, 33.7, 37.2, Oracle Corporation, SPARC Enterprise M3000, Apr-11
14.05, 4, 1, 4, 2, 50.3, 56.2, Oracle Corporation, SPARC Enterprise M3000, Apr-11
13.7812, 64, 16, 4, 2, 806, 882, Oracle Corporation, SPARC Enterprise M8000, Dec-10
13.4375, 128, 32, 4, 2, 1570, 1720, Oracle Corporation, SPARC Enterprise M9000, Dec-10
12.3047, 256, 64, 4, 2, 2850, 3150, Oracle Corporation, SPARC Enterprise M9000, Dec-10
11.1875, 16, 4, 4, 2, 158, 179, Oracle Corporation, SPARC Enterprise M4000, Dec-10
11, 32, 8, 4, 2, 313, 352, Oracle Corporation, SPARC Enterprise M5000, Dec-10
@@10.4688, 32, 2, 16, 8, 309, 335, Oracle Corporation, SPARC T3-2, Feb-11
10.4062, 64, 4, 16, 8, 614, 666, Oracle Corporation, SPARC T3-4, Feb-11
10.375, 16, 1, 16, 8, 153, 166, Oracle Corporation, SPARC T3-1, Jan-11@@
X3-2 spec
$ cat spec.txt | grep -i intel | grep -i "E5-26" | grep -i sun | sort -rnk1
@@44.0625, 16, 2, 8, 2, 632, 705, Oracle Corporation, Sun Blade X6270 M3 (Intel Xeon E5-2690 2.9GHz)@@
44.0625, 16, 2, 8, 2, 632, 705, Oracle Corporation, Sun Blade X3-2B (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 630, 705, Oracle Corporation, Sun Server X3-2L (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 630, 705, Oracle Corporation, Sun Fire X4270 M3 (Intel Xeon E5-2690 2.9GHz)
43.875, 16, 2, 8, 2, 628, 702, Oracle Corporation, Sun Server X3-2 (Intel Xeon E5-2690 2.9GHz)
43.875, 16, 2, 8, 2, 628, 702, Oracle Corporation, Sun Fire X4170 M3 (Intel Xeon E5-2690 2.9GHz)
<<<
[img(70%,70%)[https://i.imgur.com/XsBOAey.jpg]]
[img(70%,70%)[https://i.imgur.com/xMCK0Ug.png]]
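To tie the thread:core exchange above together, here is a minimal sketch of the queue-time arithmetic as I read it; the AWR numbers here are hypothetical:
{{{
# AWR-derived "CPU time per second" cannot truly exceed the thread count;
# anything above it is roughly queue time (threads stalled behind the cores)
cpu_threads = 128            # hypothetical box: total CPU threads visible to the OS
awr_cpu_per_sec = 150.0      # hypothetical CPU time per second derived from AWR

queue_time_per_sec = max(0.0, awr_cpu_per_sec - cpu_threads)
active_threads = min(awr_cpu_per_sec, cpu_threads)
print(f'queue time/sec: {queue_time_per_sec:.1f}, active threads: {active_threads:.1f}')
}}}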
<<<
* ''Introduction'' (06:20)
** Welcome! Thank you for learning the Data Warehouse concepts with me! (Preview, 06:20)
* ''Brief about the Data Warehouse'' (21:13)
** Is Data Warehouse still relevant in the age of Big Data? (Preview, 04:25)
** Why do we need a Data Warehouse? (Preview, 05:26)
** What is a Data Warehouse? (Preview, 05:42)
** Characteristics of a Data Warehouse (Preview, 05:40)
* ''Business Intelligence'' (23:37)
** What is Business Intelligence? (05:37)
** Business Intelligence - Extended Explanation (03:34)
** Uses of Business Intelligence (08:02)
** Tools used for (in) Business Intelligence (06:24)
* ''Data Warehouse Architectures'' (32:12)
** Enterprise Architecture or Centralized Architecture (Preview, 04:46)
** Federated Architecture (03:05)
** Multi-Tiered Architecture (03:13)
** Components of a Data Warehouse (03:57)
** Purpose of a Staging Area in Data Warehouse Architecture - Part 1 (04:49)
** Purpose of a Staging Area in Data Warehouse Architecture - Part 2 (03:41)
** Advantages of Traditional Warehouses (02:33)
** Limitations of Traditional Data Warehouses (06:08)
* ''ODS - Operational Data Store'' (14:13)
** What is ODS? (02:26)
** Define ODS (07:40)
** Differences between ODS, DWH, OLTP, OLAP, DSS (04:07)
* ''OLAP'' (28:15)
** OLAP Overview (05:17)
** OLTP Vs OLAP - Part 1 (04:05)
** OLTP Vs OLAP - Part 2 (05:31)
** OLAP Architecture - MOLAP (05:56)
** ROLAP (03:35)
** HOLAP (02:20)
** DOLAP (01:31)
* ''Data Mart'' (13:52)
** What is a Data Mart? (01:40)
** Fundamental Difference between DWH and DM (00:40)
** Advantages of a Data Mart (02:46)
** Characteristics of a Data Mart (03:37)
** Disadvantages of a Data Mart (03:01)
** Mistakes and Misconceptions of a Data Mart (02:08)
* ''Metadata'' (19:30)
** Overview of Metadata (01:50)
** Benefits of Metadata (01:47)
** Types of Metadata (05:38)
** Projects on Metadata (05:28)
** Best Practices for Metadata Setup (01:36)
** Summary (03:11)
* ''Data Modeling'' (05:53)
** What is Data Modeling? (02:11)
** Data Modeling Techniques (03:42)
* ''Entity Relational Data Model'' (35:46)
** ER (Entity Relation) Data Model (03:37)
** ER Data Model - What is an Entity? (02:01)
** ER Data Model - Types of Entities - Part 1 (03:57)
** ER Data Model - Types of Entities - Part 2 (01:49)
** ER Data Model - Attributes (01:54)
** ER Data Model - Types of Attributes (03:59)
** ER Data Model - Entity-Set and Keys (02:42)
** ER Data Model - Identifier (01:53)
** ER Data Model - Relationship (01:15)
** ER Data Model - Notation (02:34)
** ER Data Model - Logical Data Model (01:30)
** ER Data Model - Moving from Logical Data Model to Physical Data Model (02:14)
** ER Data Model - Differences between CDM, LDM and PDM (03:06)
** ER Data Model - Disadvantages (03:15)
* ''Dimensional Model'' (01:24:32)
** What is Dimension Modelling? (04:38)
** Benefits of Dimensional Modelling (01:52)
** What is a Dimension? (02:36)
** What is a Fact? (02:00)
** Additive Facts (01:45)
** Semi-Additive Facts (02:23)
** Non-Additive Facts (01:26)
** Factless Facts (02:26)
** What is a Surrogate Key? (03:45)
** Star Schema (04:54)
** Snowflake Schema (03:22)
** Galaxy Schema or Fact Constellation Schema (02:25)
** Differences between Star Schema and Snowflake Schema (04:55)
** Conformed Dimension (06:17)
** Junk Dimension (03:12)
** Degenerate Dimension (03:36)
** Slowly Changing Dimensions - Intro and Example Creation (05:35)
** Slowly Changing Dimensions - Type 1, 2 and 3 (12:14)
** Slowly Changing Dimensions - Summary (03:05)
** Step-by-step approach to set up the Dimensional Model using a retail case study (06:44)
** ER Model Vs Dimensional Model (05:22)
* ''DWH Indexes'' (10:59)
** What is an Index? (02:04)
** Bitmap Index (03:46)
** B-Tree Index (01:49)
** Bitmap Index Vs B-Tree Index (03:20)
* ''Data Integration and ETL'' (13:20)
** What is Data Integration? (06:49)
** What is ETL? (03:49)
** Common Questions and Summary (02:42)
* ''ETL Vs ELT'' (13:45)
** ETL - Explained (06:03)
** ELT - Explained (05:24)
** ETL Vs ELT (02:18)
* ''ETL - Extraction, Transformation & Loading'' (12:48)
** Build Vs Buy (05:10)
** ETL Tools for Data Warehouses (01:56)
** Extraction Methods in Data Warehouses (05:42)
* ''Typical Roles in a DWH Project'' (44:18)
** Project Sponsor (03:24)
** Project Manager (01:46)
** Functional Analyst or Business Analyst (02:53)
** SME - Subject Matter Expert (04:17)
** DW BI Architect (03:07)
** Data Modeler (08:59)
** DWH Tech Admin (01:20)
** ETL Developers (01:56)
** BI OLAP Developers (01:29)
** ETL Testers/QA Group (01:58)
** DB UNIX Network Admins (00:56)
** Data Architect, Data Warehouse Architect, BI Architect and Solution Architect (09:57)
** Final Note about the Roles (02:16)
* ''DW/BI/ETL Implementation Approach'' (39:48)
** Different phases in the DW/BI/ETL Implementation Approach (01:51)
** Knowledge Capture Sessions (03:34)
** Requirements (07:21)
** Architecture phases (04:48)
** Data Model/Database (01:35)
** ETL Phase (02:43)
** Data Access Phase (02:10)
** Data Access Types - Selection (01:37)
** Data Access Types - Drilling Down (00:58)
** Data Access Types - Exception Reporting (00:36)
** Data Access Types - Calculations (01:26)
** Data Access Types - Graphics and Visualization (00:58)
** Data Access Types - Data Entry Options (02:04)
** Data Access Types - Customization (01:00)
** Data Access Types - Web-Based Reporting (00:56)
** Data Access Types - Broadcasting (01:04)
** Deploy (01:42)
** Iterative Approach (03:25)
* ''Retired Lectures'' (02:23)
** ETL Vs ELT (02:23)
* ''Bonus Section'' (01:37)
** Links to other courses (01:37)
<<<
good compilation of oracle hints http://www.hellodba.com/Download/OracleSQLHints.pdf
12c new SQL hints
http://www.hellodba.com/reader.php?ID=220&lang=EN
search for "hints"
http://www.hellodba.com/index.php?class=DOC&lang=EN
.
<<showtoc>>
! Greg Wooledge wiki
http://mywiki.wooledge.org/FullBashGuide
http://mywiki.wooledge.org/BashGuide/Practices
https://mywiki.wooledge.org/BashPitfalls
http://mywiki.wooledge.org/BashWeaknesses
http://mywiki.wooledge.org/BashFAQ
http://www.tldp.org/LDP/abs/html/abs-guide.html
! essential documentation
http://www.tldp.org/LDP/abs/html/abs-guide.html
https://github.com/DingGuodong/bashstyle <- some style guide
http://superuser.com/questions/414965/when-to-use-bash-and-when-to-use-perl-python-ruby/415134
https://www.shellcheck.net/
http://www.gnu.org/software/bash/manual/bash.html
https://wiki.bash-hackers.org/
https://www.in-ulm.de/~mascheck/
http://www.grymoire.com/Unix/Quote.html
http://www.shelldorado.com/
! video courses
https://www.pluralsight.com/courses/bash-shell-scripting
https://www.pluralsight.com/courses/red-hat-enterprise-linux-shell-scripting-fundamentals
https://www.safaribooksonline.com/library/view/bash-scripting-fundamentals/9780134541730/
https://www.safaribooksonline.com/library/view/advanced-bash-scripting/9780134586229/
! /usr/bin/env or /bin/env
<<<
it's better to use
#!/usr/bin/env bash
In most cases, using /usr/bin/env bash will be better than /bin/bash;
If you are running in a multi-user environment and security is a big concern, forget about /usr/bin/env (or anything that uses the $PATH, actually);
If you need an extra argument to your interpreter and you care about portability, /usr/bin/env may also give you some headaches.
<<<
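A quick way to see which interpreter env resolves is a tiny script like the sketch below (the shebang works with any interpreter on $PATH; python3 used here for illustration):
{{{
#!/usr/bin/env python3
# env searches $PATH for "python3", so the same shebang works whether the
# interpreter lives in /usr/bin, /usr/local/bin, or a virtualenv
import sys
print(sys.executable)   # path of the interpreter that env actually resolved
}}}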
https://www.google.com/search?q=%2Fusr%2Fbin%2Fenv+or+%2Fbin%2Fenv&oq=usr%2Fbin+or+%2Fbin&aqs=chrome.4.69i57j69i58j0l4.11969j1j1&sourceid=chrome&ie=UTF-8
https://stackoverflow.com/questions/5549044/whats-the-difference-of-using-usr-bin-env-or-bin-env-in-shebang
https://unix.stackexchange.com/questions/29608/why-is-it-better-to-use-usr-bin-env-name-instead-of-path-to-name-as-my
https://www.brianstorti.com/rethinking-your-shebang/
! batch
http://steve-jansen.github.io/guides/windows-batch-scripting/part-10-advanced-tricks.html
''batch file a-z'' http://ss64.com/nt/
''batch file categorized'' http://ss64.com/nt/commands.html
! loop
http://ss64.com/nt/for.html
http://ss64.com/nt/for_cmd.html
http://stackoverflow.com/questions/1355791/how-do-you-loop-in-a-windows-batch-file
http://stackoverflow.com/questions/1103994/how-to-run-multiple-bat-files-within-a-bat-file
.
<<showtoc>>
! info
!! getting started
!! sample code
! data types and variables
!! data type
!! specific data types/values
!! variable assignment/scope
!! comparison operators
! data structures
!! data containers/structures
!! vector
!! matrix
!! data frame or pandas
!! list
!! sets
! control structures
!! control workflow
!! if else
!! error handling
!! unit testing / TDD
! loops
!! loops workflow
!! for loop
!! while loop
! advanced concepts
!! functions
!! OOP
! other functions methods procedures
! language specific operations
!! data workflow
!! directory operations
!! package management
!! importing data
!! cleaning data
!! data manipulation
!! visualization
! scripting
!! scripting workflow
!! run a script
!! print multiple var
!! input data
! xxxxxxxxxxxxxxxxxxxxxxxx
! xxx Data Engineering
! xxxxxxxxxxxxxxxxxxxxxxxx
! workflow
! installation and upgrade
! commands
! performance and troubleshooting
!! sizing and capacity planning
!! benchmark
! high availability
! security
! xxxxxxxxxxxxxxxxxxxxxxxx
.
! 2021
<<<
Kumaran's courses are the best out there to get you up to speed w/ design patterns and technology components for modern data architecture. Love the format: short, sweet, practical, and direct to the point.
https://www.linkedin.com/learning/architecting-big-data-applications-real-time-application-engineering/sm-analyze-the-problem
https://www.linkedin.com/learning/architecting-big-data-applications-batch-mode-application-engineering/welcome
https://www.linkedin.com/learning/stream-processing-design-patterns-with-kafka-streams/stream-processing-with-kafka
https://www.linkedin.com/learning/stream-processing-design-patterns-with-spark/streaming-with-spark
https://www.linkedin.com/learning/applied-ai-for-it-operations/artificial-intelligence-and-its-many-uses
https://www.linkedin.com/learning/applied-ai-for-human-resources/artificial-intelligence-and-human-resources
https://www.linkedin.com/in/kumaran-ponnambalam-961a344/?trk=lil_instructor
<<<
<<<
design and architecture
https://www.pluralsight.com/courses/google-dataflow-architecting-serverless-big-data-solutions
https://www.pluralsight.com/courses/google-cloud-platform-leveraging-architectural-design-patterns
https://www.pluralsight.com/courses/google-cloud-functions-architecting-event-driven-serverless-solutions
https://www.pluralsight.com/courses/google-dataproc-architecting-big-data-solutions
https://www.pluralsight.com/courses/google-machine-learning-apis-designing-implementing-solutions
https://www.pluralsight.com/courses/google-bigquery-architecting-data-warehousing-solutions
https://www.pluralsight.com/courses/google-cloud-automl-designing-implementing-solutions
https://www.linkedin.com/learning/search?keywords=apache%20beam
https://www.linkedin.com/learning/data-science-on-google-cloud-platform-building-data-pipelines/what-goes-into-a-data-pipeline <— good summary
https://www.linkedin.com/learning/google-cloud-platform-for-enterprise-essential-training/enterprise-ready-gcp
https://www.linkedin.com/learning/architecting-big-data-applications-batch-mode-application-engineering/dw-lay-out-the-architecture <— good 5 use cases
https://www.linkedin.com/learning/data-science-on-google-cloud-platform-architecting-solutions/architecting-data-science <— good 4 use cases
https://www.linkedin.com/learning/data-science-on-google-cloud-platform-designing-data-warehouses/why-data-warehouses-are-important
https://www.linkedin.com/learning/architecting-big-data-applications-real-time-application-engineering/sm-analyze-the-problem <— good 4 use cases
<<<
!! Distributed systems in one lesson
https://learning.oreilly.com/videos/distributed-systems-in/9781491924914?autoplay=false
!! ML system
ML system design https://us.teamblind.com/s/5HGmH4Wd
Machine Learning Systems: Designs that scale https://learning.oreilly.com/library/view/machine-learning-systems/9781617293337/kindle_split_024.html
!! kafka
https://www.youtube.com/results?search_query=kafka+system+design
!! messaging service
https://www.datanami.com/2019/05/28/assessing-your-options-for-real-time-message-buses/
https://www.udemy.com/courses/search/?src=ukw&q=message+queueing+
https://www.udemy.com/rabbitmq-messaging-with-java-spring-boot-and-spring-mvc/
https://bytes.com/topic/python/answers/437385-queueing-python-ala-jms
!! instagram
http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html
https://github.com/CodeThat/Algorithm-Implementations
https://www.algoexpert.io/purchase coupon "devon"
https://www.algoexpert.io/questions/Nth%20Fibonacci
<<showtoc>>
! practice courses and resources
<<<
https://www.udemy.com/aws-emr-and-spark-2-using-scala/learn/v4/t/lecture/9366830?start=0
https://www.udemy.com/python-and-spark-setup-development-environment/
https://www.udemy.com/linux-fundamentals-for-it-professionals/learn/v4/overview
https://www.udemy.com/fundamentals-of-programming-using-python-3/learn/v4/overview
https://www.udemy.com/python-for-data-science-and-machine-learning-bootcamp/learn/v4/t/lecture/5774370?start=0
https://www.udemy.com/data-science-and-machine-learning-bootcamp-with-r/learn/v4/content
https://www.udemy.com/machinelearning/learn/v4/t/lecture/5935024?start=0
https://www.udemy.com/data-science-and-machine-learning-with-python-hands-on/learn/v4/t/lecture/4020676?start=0
Installing TensorFlow and H2O in R https://learning.oreilly.com/learning-paths/learning-path-r/9781789340839/9781788838771-video1_4
Using the H2O Deep Learning Framework https://learning.oreilly.com/videos/learning-path-r/9781788298742/9781788298742-video1_29
Interpretable AI - Not just for regulators - Patrick Hall (H2O.ai | George Washington University), Sri Satish (H2O.ai) h2o.ai https://learning.oreilly.com/videos/strata-data-conference/9781491976326/9781491976326-video316338
https://weidongzhou.wordpress.com/tag/big-data/
Practical Machine Learning with H2O https://learning.oreilly.com/library/view/practical-machine-learning/9781491964590/
https://www.udemy.com/complete-deep-learning-in-r-with-keras-others/
<<<
<<showtoc>>
! info
!! getting started
!! sample code
! data types and variables
!! data type
!! specific data types/values
!! variable assignment/scope
!!! delete variables
https://stackoverflow.com/questions/26545051/is-there-a-way-to-delete-created-variables-functions-etc-from-the-memory-of-th?rq=1
https://stackoverflow.com/questions/3543833/how-do-i-clear-all-variables-in-the-middle-of-a-python-script
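For reference, the short version of those answers (a minimal sketch):
{{{
x = 42
del x            # removes the name x from the current namespace

# in an interactive session you can wipe user-defined globals:
for name in [n for n in dir() if not n.startswith('_')]:
    del globals()[name]
}}}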
!! comparison operators
! data structures
!! data containers/structures
!! R vector / python tuple or list
<<<
https://stackoverflow.com/questions/252703/difference-between-append-vs-extend-list-methods-in-python/28119966
<<<
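The one-line distinction from that thread:
{{{
a = [1, 2]
a.append([3, 4])   # appends one element  -> [1, 2, [3, 4]]

b = [1, 2]
b.extend([3, 4])   # concatenates         -> [1, 2, 3, 4]
}}}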
!! R matrix / python numpy
!! R data frame / python pandas
!!! save dataframe to parquet file
https://stackoverflow.com/questions/41066582/python-save-pandas-data-frame-to-parquet-file
{{{
# pip install fastparquet
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})   # example frame (assumed)
df.to_parquet('myfile.parquet', engine='fastparquet', compression='UNCOMPRESSED')

# load into BigQuery -- this created the table columns as BYTES
# bq load --location=US --source_format=PARQUET tink.enc_parquet2 myfile.parquet
}}}
!! R list / python dictionary
<<<
https://stackoverflow.com/questions/1024847/add-new-keys-to-a-dictionary
https://stackoverflow.com/questions/1867861/dictionaries-how-to-keep-keys-values-in-same-order-as-declared
<<<
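In short (a minimal sketch of both linked answers):
{{{
d = {'a': 1}
d['b'] = 2                    # add a single key
d.update({'c': 3, 'd': 4})    # add several keys at once
# since Python 3.7, dicts keep keys in insertion order (the second question)
print(d)                      # {'a': 1, 'b': 2, 'c': 3, 'd': 4}
}}}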
!! python sets
!! python list comprehension, nested list/dictionary
! control structures
!! control workflow
!! if else
<<<
https://stackoverflow.com/questions/2493404/complex-if-statement-in-python
<<<
!! error handling
!! unit testing / TDD
! loops
!! loops workflow
!! for loop
!! while loop
! advanced concepts
!! functions
!!! functional programming
Reactive Programming in Python https://learning.oreilly.com/videos/reactive-programming-in/9781786460332/9781786460332-video1_3?autoplay=false
!! OOP
!!! static vs class method
https://stackoverflow.com/questions/136097/what-is-the-difference-between-staticmethod-and-classmethod
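The gist of the accepted answer, as a sketch:
{{{
class C:
    @staticmethod
    def s(x):          # no implicit first argument; just a function in the class
        return x

    @classmethod
    def c(cls, x):     # receives the class itself, so it follows subclassing
        return cls.__name__, x

print(C.s(1), C.c(2))  # 1 ('C', 2)
}}}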
!!! what is pass
https://stackoverflow.com/questions/13886168/how-to-use-the-pass-statement
!!! iterating through instance object attributes
https://www.saltycrane.com/blog/2008/09/how-iterate-over-instance-objects-data-attributes-python/
https://stackoverflow.com/questions/739882/iterating-over-object-instances-of-a-given-class-in-python
https://stackoverflow.com/questions/44196243/iterate-over-list-of-class-objects-pythonic-way
https://stackoverflow.com/questions/42581286/iterate-over-an-instance-objects-attributes-in-python
https://stackoverflow.com/questions/21598872/how-to-create-multiple-class-objects-with-a-loop-in-python
https://stackoverflow.com/questions/25150955/python-iterating-through-object-attributes
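The common recipe from those threads (sketch):
{{{
class Point:
    def __init__(self):
        self.x, self.y = 1, 2

p = Point()
for name, value in vars(p).items():   # vars(p) is p.__dict__
    print(name, value)                # x 1 / y 2
}}}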
!! args kwargs
<<<
https://stackoverflow.com/questions/8977594/in-python-what-determines-the-order-while-iterating-through-kwargs/41634018
https://stackoverflow.com/questions/26748097/using-an-ordereddict-in-kwargs
<<<
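Since PEP 468 (Python 3.6), **kwargs preserves the call-site order, which is what those threads are about; a sketch:
{{{
def show(**kwargs):
    for key, value in kwargs.items():
        print(key, value)     # iterates in the order the caller passed them (3.6+)

show(first=1, second=2, third=3)
}}}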
! other functions methods procedures
! language specific operations
!! data workflow
!! directory operations
!! package management
!!! use-import-module-or-from-module-import
https://stackoverflow.com/questions/710551/use-import-module-or-from-module-import
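The trade-off in two lines (sketch):
{{{
import os.path              # explicit namespace: os.path.join(...) is traceable
from os.path import join    # shorter call sites: join(...), at the cost of clarity

print(os.path.join('a', 'b'), join('a', 'b'))
}}}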
!! importing data
!! cleaning data
!! data manipulation
!! visualization
!!! python and highcharts
<<<
https://www.highcharts.com/blog/products/highmaps/226-get-your-data-ready-for-charts-with-python/
https://github.com/kyper-data/python-highcharts
Flask Web Development in Python - 6 - js Plugin - Highcharts example https://www.youtube.com/watch?v=9Ic79kOBj_M
<<<
! scripting
!! scripting workflow
!! run a script
!! print multiple var
!! input data
!! check if list exist
<<<
https://stackoverflow.com/questions/11556234/how-to-check-if-a-list-exists-in-python
<<<
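The idiom from that answer (sketch):
{{{
try:
    my_list            # raises NameError if the name was never assigned
except NameError:
    my_list = []       # define it on first use
my_list.append(1)
}}}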
!! command line args parser
<<<
Building cmd line using click https://www.youtube.com/watch?v=6OY1xFYJVxQ
https://medium.com/@collectiveacuity/argparse-vs-click-227f53f023dc
https://realpython.com/comparing-python-command-line-parsing-libraries-argparse-docopt-click/
https://stackoverflow.com/questions/3217673/why-use-argparse-rather-than-optparse
https://ttboj.wordpress.com/2010/02/03/getopt-vs-optparse-vs-argparse/
https://pymotw.com/2/optparse/
https://docs.python.org/2/howto/argparse.html
https://leancrew.com/all-this/2015/06/better-option-parsing-in-python-maybe/
https://www.quora.com/What-are-the-advantages-of-using-argparse-over-optparse-or-vice-versa
<<<
<<<
http://www.annasyme.com/docs/python_structure.html <- GOOD
good basics https://medium.com/code-85/how-to-pass-command-line-values-to-a-python-script-1e3e7b244c89 <- GOOD
https://towardsdatascience.com/a-simple-guide-to-command-line-arguments-with-argparse-6824c30ab1c3
https://martin-thoma.com/how-to-parse-command-line-arguments-in-python/
logging https://gist.github.com/olooney/8155400
https://pymotw.com/2/argparse/
https://gist.github.com/BurkovBA/947ae7406a3b22b32c81904da9d9797e
https://zetcode.com/python/argparse/
https://gist.github.com/abalter/605773b34a68bb370bf84007ee55a130
https://github.com/nhoffman/argparse-bash
https://python.plainenglish.io/parse-args-in-bash-scripts-d50669be6a61
https://stackoverflow.com/questions/14340822/pass-bash-argument-to-python-script
{{{
#!/bin/sh
# forward all shell arguments to the python script
python script.py "$@"
}}}
https://stackoverflow.com/questions/4256107/running-bash-commands-in-python
{{{
import subprocess

# the original one-liner ended with "> test.nt"; shell redirection does not
# work with Popen(command.split()), so capture stdout and write the file here
bashCommand = "cwm --rdf test.rdf --ntriples"
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
with open("test.nt", "wb") as f:
    f.write(output)
}}}
https://stackabuse.com/executing-shell-commands-with-python/
https://stackoverflow.com/questions/34836382/python-3-subprocessing-a-python-script-that-uses-argparse
https://medium.com/code-85/how-to-pass-command-line-values-to-a-python-script-1e3e7b244c89
<<<
{{{
Every option has some values like:

dest: You will access the value of the option with this variable.
help: This text gets displayed when someone uses --help.
default: If the command line argument was not specified, it will get this
         default value.
action: Actions tell the parser what to do when it encounters an option on
        the command line. action defaults to store. These actions are available:
    store: take the next argument (or the remainder of the current argument),
           ensure that it is of the correct type, and store it to your chosen
           destination dest.
    store_true: store True in dest if this flag was set.
    store_false: store False in dest if this flag was set.
    store_const: store a constant value.
    append: append this option's argument to a list.
    count: increment a counter by one.
    callback: call a specified function (optparse only).
nargs: ArgumentParser objects usually associate a single command-line argument
       with a single action to be taken. The nargs keyword argument associates
       a different number of command-line arguments with a single action.
required: Mark a command line argument as non-optional (required).
choices: Some command-line arguments should be selected from a restricted set
         of values. These can be handled by passing a container object as the
         choices keyword argument to add_argument(). When the command line is
         parsed, argument values will be checked, and an error message will be
         displayed if the argument was not one of the acceptable values.
type: Use this, if the argument is of another type (e.g. int or float).

argparse automatically generates a help text. So if you call
python myScript.py --help you will get something like this:

usage: ikjMultiplication.py [-h] [-i FILE]

ikjMatrix multiplication

optional arguments:
  -h, --help  show this help message and exit
  -i FILE     input file with two matrices
}}}
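Putting a few of those options together in a runnable sketch (the description and -i FILE argument mirror the ikjMultiplication example above; the -v and -n flags are made up for illustration):
{{{
import argparse

parser = argparse.ArgumentParser(description='ikjMatrix multiplication')
parser.add_argument('-i', dest='filename', metavar='FILE',
                    help='input file with two matrices')
parser.add_argument('-v', '--verbose', action='store_true', default=False,
                    help='print status messages')
parser.add_argument('-n', dest='n', type=int, default=1,
                    help='repeat the multiplication n times')
args = parser.parse_args()
print(args.filename, args.verbose, args.n)
}}}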
! xx
! forecasting
!! times series
timeseries techniques https://www.safaribooksonline.com/library/view/practical-data-analysis/9781783551668/ch07.html
http://www.johnwittenauer.net/a-simple-time-series-analysis-of-the-sp-500-index/
time series python statsmodels http://conference.scipy.org/scipy2011/slides/mckinney_time_series.pdf
Do not smooth times series, you hockey puck http://wmbriggs.com/post/195/
practical Data Analysis Cookbook https://github.com/drabastomek/practicalDataAnalysisCookbook
! underscore in python
watch the two videos below:
{{{
What's the meaning of underscores (_ & __) in Python variable names
Python Tutorial: if __name__ == '__main__'
}}}
https://www.youtube.com/watch?v=ALZmCy2u0jQ
https://www.youtube.com/watch?v=sugvnHA7ElY
{{{
Difference between _, __ and __xx__ in Python
http://igorsobreira.com/2010/09/16/difference-between-one-underline-and-two-underlines-in-python.html
http://stackoverflow.com/questions/8689964/why-do-some-functions-have-underscores-before-and-after-the-function-name
http://programmers.stackexchange.com/questions/229804/usage-of-while-declaring-any-variables-or-class-member-in-python
}}}
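The conventions the videos and links cover, in one sketch:
{{{
class Demo:
    def __init__(self):
        self._internal = 1    # single leading underscore: "internal use" hint
        self.__mangled = 2    # double leading underscore: mangled to _Demo__mangled

    def __repr__(self):       # dunder: special method the interpreter calls
        return 'Demo()'

if __name__ == '__main__':    # True only when run directly, not when imported
    d = Demo()
    print(d._internal, d._Demo__mangled, repr(d))
}}}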
! sqldf / pandasql
http://blog.yhat.com/posts/pandasql-intro.html
pandasql: Make python speak SQL https://community.alteryx.com/t5/Data-Science-Blog/pandasql-Make-python-speak-SQL/ba-p/138435
https://statcompute.wordpress.com/2016/10/17/flavors-of-sql-on-pandas-dataframe/
https://www.r-bloggers.com/turning-data-into-awesome-with-sqldf-and-pandasql/
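A minimal pandasql sketch (assumes pip install pandasql; the frame here is made up):
{{{
import pandas as pd
from pandasql import sqldf

df = pd.DataFrame({'name': ['a', 'b', 'c'], 'value': [1, 2, 3]})
print(sqldf("SELECT name, value FROM df WHERE value > 1", locals()))
}}}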
! gui ide
https://www.yhat.com/products/rodeo
! for loops
https://data36.com/python-for-loops-explained-data-science-basics-5/
! PYTHONPATH
https://stackoverflow.com/questions/19917492/how-to-use-pythonpath
<<<
You're confusing PATH and PYTHONPATH. You need to do this:
export PYTHONPATH=$PYTHONPATH:/home/randy/lib/python
PYTHONPATH is used by the python interpreter to determine which modules to load.
PATH is used by the shell to determine which executables to run.
<<<
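You can see the effect from inside the interpreter: PYTHONPATH entries show up in sys.path, the module search path.
{{{
import os, sys

print(os.environ.get('PYTHONPATH'))   # what the shell exported (or None)
print(sys.path)                       # where the interpreter looks for modules
}}}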
! python compatibility
!! pycon talk - start here
Brett Cannon - How to make your code Python 2/3 compatible - PyCon 2015 https://www.youtube.com/watch?v=KPzDX5TX5HE
https://www.youtube.com/results?search_query=python-modernize
!! performance between 2 and 3
https://chairnerd.seatgeek.com/migrating-to-python-3/
!! coding differences between 2 and 3
[[..python 2 to 3]]
https://wiki.python.org/moin/Python2orPython3
!! 2to3 - tool to automatically convert code
https://docs.python.org/2/library/2to3.html
Python 2to3 - Convert your Python 2 to Python 3 automatically https://www.youtube.com/watch?v=8qxKYnAsNuU
Make Python 2 Programs Compatible with Python 3 Automatically https://www.youtube.com/watch?v=M6wkCIdfI8U
https://stackoverflow.com/questions/40020178/what-python-linter-can-i-use-to-spot-python-2-3-compatibility-issues
!! futurize and modernize
{{{
# this will work in python 2
from __future__ import print_function
print('hello world')
}}}
https://python-future.org/faq.html
https://www.youtube.com/results?search_query=python+futurize
python-future vs 2to3 https://www.google.com/search?q=python-future+vs+2to3&oq=python-future+vs+2to3&aqs=chrome..69i57.3384j0j4&sourceid=chrome&ie=UTF-8
Moving from Python 2 to Python 3 http://ptgmedia.pearsoncmg.com/imprint_downloads/informit/promotions/python/python2python3.pdf <-- good stuff
Python How to use from __future__ import print_function https://www.youtube.com/watch?v=lLpp2cbUWX0 <-- good stuff
http://python-future.org/quickstart.html#to-convert-existing-python-2-code <- futurize
https://www.youtube.com/results?search_query=future__+import
http://python3porting.com/noconv.html
https://www.reddit.com/r/Python/comments/45vok2/why_did_python_3_change_the_print_syntax/
!! six
https://pypi.org/project/six/
! tricks
!! count frequency of words
http://stackoverflow.com/questions/30202011/how-can-i-count-comma-separated-values-in-one-column-of-my-panda-table
https://www.google.com/search?q=R+word+count&oq=R+word+count&aqs=chrome..69i57j0l5.2673j0j1&sourceid=chrome&ie=UTF-8#q=r+count+frequency+of+numbers&*
http://stackoverflow.com/questions/8920145/count-the-number-of-words-in-a-string-in-r
http://r.789695.n4.nabble.com/How-to-count-the-number-of-occurence-td1661733.html
http://stackoverflow.com/questions/1923273/counting-the-number-of-elements-with-the-values-of-x-in-a-vector
https://www.quora.com/How-do-I-generate-frequency-counts-of-categorical-variables-eg-total-number-of-0s-and-total-number-of-1s-from-each-column-within-a-dataset-in-RStudio
http://stackoverflow.com/questions/1296646/how-to-sort-a-dataframe-by-columns
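The pandas version of the frequency count (same idea as the R table() example further down this page; data shortened):
{{{
import pandas as pd

numbers = pd.Series([33, 30, 14, 1, 6, 19, 34, 17, 14, 15, 24, 21, 24, 34, 6])
print(numbers.value_counts())   # frequency of each value, most common first
}}}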
! data structures / containers
!! pickle
https://stackoverflow.com/questions/11641493/how-to-cpickle-dump-and-load-separate-dictionaries-to-the-same-file
Serializing Data Using the pickle and cPickle Modules https://learning.oreilly.com/library/view/python-cookbook/0596001673/ch08s03.html
Reading a pickle file (PANDAS Python Data Frame) in R https://stackoverflow.com/questions/35121192/reading-a-pickle-file-pandas-python-data-frame-in-r
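The pattern from the first link, sketched: repeated dump() calls append to the same file, and load() reads the objects back in order.
{{{
import pickle

a, b = {'x': 1}, {'y': 2}
with open('data.pkl', 'wb') as f:
    pickle.dump(a, f)
    pickle.dump(b, f)

with open('data.pkl', 'rb') as f:
    a2 = pickle.load(f)    # first object dumped
    b2 = pickle.load(f)    # second object dumped
print(a2, b2)
}}}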
! scheduler - celery
https://www.youtube.com/results?search_query=python+scheduler+async+every+minute+background
https://www.udemy.com/using-python-with-oracle-db/learn/lecture/5330818#overview
https://stackoverflow.com/questions/22715086/scheduling-python-script-to-run-every-hour-accurately
https://stackoverflow.com/questions/2223157/how-to-execute-a-function-asynchronously-every-60-seconds-in-python
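The simplest of the approaches in those threads: a timer that reschedules itself (a sketch; for real deployments the links above cover cron/celery):
{{{
import threading

def job():
    print('doing work')
    threading.Timer(60, job).start()   # re-arm: run again in 60 seconds

job()   # note: drifts slightly; each timer starts when the previous run begins
}}}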
! learning materials
https://linuxacademy.com/linux/training/learningpath/name/scripting-automation-for-sysadmins
https://acloud.guru/learn/automating-aws-with-python
<<showtoc>>
! Upgrading R
* first, fix the permissions of the R folder by giving yourself "full control" http://stackoverflow.com/questions/5059692/unable-to-update-r-packages-in-default-library-on-windows-7
* download the new rstudio https://www.rstudio.com/products/rstudio/download/
* follow the steps mentioned here http://stackoverflow.com/questions/13656699/update-r-using-rstudio and here http://www.r-statistics.com/2013/03/updating-r-from-r-on-windows-using-the-installr-package/ basically you'll have to execute the following:
{{{
# installing/loading the package:
if(!require(installr)) {
install.packages("installr"); require(installr)} #load / install+load installr
# using the package:
updateR() # this will start the updating process of your R installation. It will check for newer versions, and if one is available, will guide you through the decisions you'd need to make.
}}}
! Clone R
https://github.com/MangoTheCat/pkgsnap
! rstudio
preview version https://www.rstudio.com/products/rstudio/download/preview/
! documentation
{{{
> library(RDocumentation)
Do you want to automatically load RDocumentation when you start R? [y|n] y
Congratulations!
R will now use RDocumentation to display your help files.
If you're offline, R will just display your local documentation.
To avoid automatically loading the RDocumentation package, use disable_autoload().
If you don't want the ? and help functionality to show RDocumentation pages, use disable_override().
Attaching package: ‘RDocumentation’
The following objects are masked from ‘package:utils’:
?, help, help.search
}}}
! favorite packages
!! summary
!!! visualization
* ggfortify (autoplot) - easy plotting of data.. just execute autoplot
<<<
http://www.sthda.com/english/wiki/ggfortify-extension-to-ggplot2-to-handle-some-popular-packages-r-software-and-data-visualization
http://rpubs.com/sinhrks/basics
http://rpubs.com/sinhrks/plot_lm
<<<
!!! time series
* quantstart time series - https://www.quantstart.com/articles#time-series-analysis
* xts - convert to time series object
** as.xts()
!!! quant
* quantmod http://www.quantmod.com/gallery/
* quantstrat and blotter
http://masterr.org/r/how-to-install-quantstrat/
http://www.r-bloggers.com/nuts-and-bolts-of-quantstrat-part-i/
http://www.programmingr.com/content/installing-quantstrat-r-forge-and-source/
using quantstrat to evaluate intraday trading strategies http://www.rinfinance.com/agenda/2013/workshop/Humme+Peterson.pdf
* highfrequency package
https://cran.r-project.org/web/packages/highfrequency/highfrequency.pdf , http://feb.kuleuven.be/public/n09022/research.htm
http://highfrequency.herokuapp.com/
* quantlib http://quantlib.org/index.shtml
!!!! quant topics
* quant data
https://www.onetick.com/
interactivebrokers api http://www.r-bloggers.com/how-to-save-high-frequency-data-in-mongodb/ , http://www.r-bloggers.com/i-see-high-frequency-data/
* quant portals
quanstart - learning materials (books, scripts) https://www.quantstart.com/faq
http://www.rfortraders.com/
http://www.quantlego.com/welcome/
https://www.quantstart.com/articles/Quantitative-Finance-Reading-List
http://datalab.lu/
http://carlofan.wix.com/data-science-chews
* quant books
https://www.amazon.com/Quantitative-Trading-Understanding-Mathematical-Computational/dp/1137354070?ie=UTF8&camp=1789&creative=9325&creativeASIN=1137354070&linkCode=as2&linkId=KJAPF3TMVPQHWD4H&redirect=true&ref_=as_li_qf_sp_asin_il_tl&tag=boucom-20
https://www.quantstart.com/successful-algorithmic-trading-ebook
https://www.quantstart.com/advanced-algorithmic-trading-ebook
https://www.quantstart.com/cpp-for-quantitative-finance-ebook
* quant career
https://www.quantstart.com/articles/Can-You-Still-Become-a-Quant-in-Your-Thirties
http://www.dlsu.edu.ph/academics/graduate-studies/cob/master-sci-fin-eng.asp
* quant strategies
trend following strategy http://www.followingthetrend.com/2014/03/improving-the-free-trend-following-trading-rules/
connorsRSI http://www.qmatix.com/ConnorsRSI-Pullbacks-Guidebook.pdf
* quant options trade
http://www.businessinsider.com/the-story-of-the-first-ever-options-trade-in-recorded-history-2012-3
* quant portfolio optimization
http://www.rinfinance.com/RinFinance2009/presentations/yollin_slides.pdf
http://zoonek.free.fr/blosxom/R/2012-06-01_Optimization.html
* quant time series databases
https://kx.com/benchmarks.php
http://www.paradigm4.com/
* PerformanceAnalytics-package
http://braverock.com/brian/R/PerformanceAnalytics/html/PerformanceAnalytics-package.html
!!! TDD, testing
http://www.agiledata.org/essays/tdd.html
http://r-pkgs.had.co.nz/tests.html
* testthat
!!! performance
* rtools
https://github.com/stan-dev/rstan/wiki/Install-Rtools-for-Windows
* Rcpp
* RInside
* speed up loop in R
http://stackoverflow.com/questions/2908822/speed-up-the-loop-operation-in-r
http://www.r-bloggers.com/faster-for-loops-in-r/
http://biostat.mc.vanderbilt.edu/wiki/pub/Main/SvetlanaEdenRFiles/handouts.pdf
http://www.r-bloggers.com/faster-higher-stonger-a-guide-to-speeding-up-r-code-for-busy-people/
!!! reporting
* knitr
!!! database programming
http://blog.aguskurniawan.net/
! favorite functions
* cut
** turns continuous variables into factors http://www.r-bloggers.com/r-function-of-the-day-cut/
! .Rprofile
http://www.r-bloggers.com/fun-with-rprofile-and-customizing-r-startup/
http://stackoverflow.com/questions/13633876/getting-rprofile-to-load-at-startup
http://www.dummies.com/how-to/content/how-to-install-and-configure-rstudio.html
! require vs library
http://stackoverflow.com/questions/5595512/what-is-the-difference-between-require-and-library
http://yihui.name/en/2014/07/library-vs-require/
https://github.com/rstudio/shiny#installation
! R java issue fix
{{{
check the environment
> Sys.getenv()
ALLUSERSPROFILE C:\ProgramData
APPDATA C:\Users\karl\AppData\Roaming
CommonProgramFiles C:\Program Files\Common Files
CommonProgramFiles(x86)
C:\Program Files (x86)\Common Files
CommonProgramW6432 C:\Program Files\Common Files
COMPUTERNAME KARL-REMOTE
ComSpec C:\Windows\system32\cmd.exe
DISPLAY :0
FP_NO_HOST_CHECK NO
GFORTRAN_STDERR_UNIT -1
GFORTRAN_STDOUT_UNIT -1
HADOOP_HOME C:\tmp\hadoop
HOME C:/Users/karl/Documents
HOMEDRIVE C:
HOMEPATH \Users\karl
JAVA_HOME C:/Program Files/Java/jdk1.8.0_25/bin
LOCALAPPDATA C:\Users\karl\AppData\Local
LOGONSERVER \\KARL-REMOTE
NUMBER_OF_PROCESSORS 4
OS Windows_NT
PATH C:\Program
Files\R\R-3.3.1\bin\x64;C:\ProgramData\Oracle\Java\javapath;C:\oracle\product\11.1.0\db_1\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program
Files (x86)\Common
Files\SYSTEM\MSMAPI\1033;C:\Python33;C:\Python33\Scripts;C:\Program
Files (x86)\QuickTime\QTSystem\;C:\Program Files
(x86)\nodejs\;C:\Users\karl\AppData\Roaming\npm;C:\Users\karl\AppData\Local\atom\bin;C:\Users\karl\AppData\Local\Pandoc\;C:\Program
Files\Java\jdk1.8.0_25\jre\bin\server;C:\Program
Files\Java\jdk1.8.0_25\bin
PATHEXT .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
PROCESSOR_ARCHITECTURE AMD64
PROCESSOR_IDENTIFIER Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
PROCESSOR_LEVEL 6
PROCESSOR_REVISION 3a09
ProgramData C:\ProgramData
ProgramFiles C:\Program Files
ProgramFiles(x86) C:\Program Files (x86)
ProgramW6432 C:\Program Files
PSModulePath C:\Windows\system32\WindowsPowerShell\v1.0\Modules\
PUBLIC C:\Users\Public
R_ARCH /x64
R_COMPILED_BY gcc 4.9.3
R_DOC_DIR C:/PROGRA~1/R/R-33~1.1/doc
R_HOME C:/PROGRA~1/R/R-33~1.1
R_LIBS_USER C:/Users/karl/Documents/R/win-library/3.3
R_USER C:/Users/karl/Documents
RMARKDOWN_MATHJAX_PATH C:/Program Files/RStudio/resources/mathjax-23
RS_LOCAL_PEER \\.\pipe\33860-rsession
RS_RPOSTBACK_PATH C:/Program Files/RStudio/bin/rpostback
RS_SHARED_SECRET 63341846741
RSTUDIO 1
RSTUDIO_MSYS_SSH C:/Program Files/RStudio/bin/msys-ssh-1000-18
RSTUDIO_PANDOC C:/Program Files/RStudio/bin/pandoc
RSTUDIO_SESSION_PORT 33860
RSTUDIO_USER_IDENTITY karl
RSTUDIO_WINUTILS C:/Program Files/RStudio/bin/winutils
SESSIONNAME Console
SystemDrive C:
SystemRoot C:\Windows
TEMP C:\Users\karl\AppData\Local\Temp
TMP C:\Users\karl\AppData\Local\Temp
USERDOMAIN karl-remote
USERNAME karl
USERPROFILE C:\Users\karl
windir C:\Windows
check java version
> system("java -version")
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) Client VM (build 25.91-b14, mixed mode)
add the java directories to PATH; critical here is the directory of jvm.dll
C:\Program Files\Java\jdk1.8.0_25\jre\bin\server;C:\Program Files\Java\jdk1.8.0_25\bin
set JAVA_HOME
Sys.setenv(JAVA_HOME="C:/Program Files/Java/jdk1.8.0_25/bin")
library(rJava)
library(XLConnect)
}}}
! remove duplicate records
http://www.cookbook-r.com/Manipulating_data/Finding_and_removing_duplicate_records/
http://www.dummies.com/how-to/content/how-to-remove-duplicate-data-in-r.html
! get R memory usage
http://stackoverflow.com/questions/1358003/tricks-to-manage-the-available-memory-in-an-r-session
{{{
# improved list of objects
.ls.objects <- function (pos = 1, pattern, order.by,
decreasing=FALSE, head=FALSE, n=5) {
napply <- function(names, fn) sapply(names, function(x)
fn(get(x, pos = pos)))
names <- ls(pos = pos, pattern = pattern)
obj.class <- napply(names, function(x) as.character(class(x))[1])
obj.mode <- napply(names, mode)
obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
obj.prettysize <- napply(names, function(x) {
capture.output(format(utils::object.size(x), units = "auto")) })
obj.size <- napply(names, object.size)
obj.dim <- t(napply(names, function(x)
as.numeric(dim(x))[1:2]))
vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
obj.dim[vec, 1] <- napply(names, length)[vec]
out <- data.frame(obj.type, obj.size, obj.prettysize, obj.dim)
names(out) <- c("Type", "Size", "PrettySize", "Rows", "Columns")
if (!missing(order.by))
out <- out[order(out[[order.by]], decreasing=decreasing), ]
if (head)
out <- head(out, n)
out
}
# shorthand
lsos <- function(..., n=10) {
.ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}
lsos()
}}}
! dplyr join functions cheat sheet
https://stat545-ubc.github.io/bit001_dplyr-cheatsheet.html
! loess
http://flowingdata.com/2010/03/29/how-to-make-a-scatterplot-with-a-smooth-fitted-line/
! gather vs melt
http://stackoverflow.com/questions/26536251/comparing-gather-tidyr-to-melt-reshape2
! tidyr vs reshape2
http://rpubs.com/paul4forest/reshape2tidyrdplyr
! bootstrapping
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=bootstrapping%20in%20r
! forecasting
Forecasting time series using R by Prof Rob J Hyndman at Melbourne R Users https://www.youtube.com/watch?v=1Lh1HlBUf8k
forecasting principles and practice http://robjhyndman.com/uwafiles/fpp-notes.pdf
http://www.statistics.com/forecasting-analytics#fees
!! melbourne talk
http://robjhyndman.com/seminars/melbournerug/
http://robjhyndman.com/talks/MelbourneRUG.pdf
http://robjhyndman.com/talks/MelbourneRUGexamples.R
!! time series data
https://forecasters.org/resources/time-series-data/m3-competition/
https://forecasters.org/resources/time-series-data/
http://www.forecastingprinciples.com/index.php?option=com_content&view=article&id=8&Itemid=18
https://datamarket.com/data/list/?q=provider%3atsdl
!! prediction competitions
http://robjhyndman.com/hyndsight/prediction-competitions/
!! forecasting books
Forecasting: principles and practice https://www.otexts.org/book/fpp
!! automated forecasting examples
http://www.dxbydt.com/munge-automate-forecast/ , https://github.com/djshahbydt/Munge-Automate-Forecast.../blob/master/Munge%2C%20Automate%20%26%20Forecast...
http://www.dxbydt.com/wp-content/uploads/2015/11/data.csv
https://github.com/pmaier1971/AutomatedForecastingWithShiny/blob/master/server.R
!! forecasting UI examples
https://pmaier1971.shinyapps.io/AutomatedForecastingWithShiny/ <- check the overview and economic forecasting tabs
http://www.ae.be/blog-en/combining-the-power-of-r-and-d3-js/ , http://vanhumbeecka.github.io/R-and-D3/plotly.html R and D3 binding
https://nxsheet.com/sheets/56d0a87264e47ee60a95f652
!! forecasting and shiny
https://aneesha.shinyapps.io/ShinyTimeseriesForecasting/
https://medium.com/@aneesha/timeseries-forecasting-with-the-forecast-r-package-and-shiny-6fa04c64196#.r9nllan82
http://www.datasciencecentral.com/profiles/blogs/time-series-forecasting-and-internet-of-things-iot-in-grain
!! forecasting time series reading materials
http://a-little-book-of-r-for-time-series.readthedocs.io/en/latest/index.html
http://a-little-book-of-r-for-time-series.readthedocs.io/en/latest/src/timeseries.html
understanding time series data https://www.safaribooksonline.com/library/view/practical-data-analysis/9781783551668/ch07s03.html
https://www.quantstart.com/articles#time-series-analysis
!! acf pacf, arima arma
http://www.forecastingbook.com/resources/online-tutorials/acf-and-random-walk-in-xlminer
autocorrelation in bearing performance https://www.youtube.com/watch?v=oVQCS9Om_w4
autocorrelation function in time series analysis https://www.youtube.com/watch?v=pax02Q0aJO8
Detecting AR & MA using ACF and PACF plots https://www.youtube.com/watch?v=-vSzKfqcTDg
time series theory https://www.youtube.com/playlist?list=PLUgZaFoyJafhfcggaNzmZt_OdJq32-iFW
R Programming LiveLessons (Video Training): Fundamentals to Advanced https://www.safaribooksonline.com/library/view/r-programming-livelessons/9780133578867/
understanding time series data https://www.safaribooksonline.com/library/view/practical-data-analysis/9781783551668/ch07s03.html
ARMA (no differencing), ARIMA (with differencing) https://www.quora.com/Whats-the-difference-between-ARMA-ARIMA-and-ARIMAX-in-laymans-terms
https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average
https://en.wikipedia.org/wiki/Autoregressive%E2%80%93moving-average_model
ARIMA models https://www.otexts.org/fpp/8
Stationarity and differencing https://www.otexts.org/fpp/8/1
https://www.quora.com/What-are-the-differences-between-econometrics-quantitative-finance-mathematical-finance-computational-finance-and-financial-engineering
!! time series forecasting model compare
http://stats.stackexchange.com/questions/140163/timeseries-analysis-procedure-and-methods-using-r
!! cross validation
http://stats.stackexchange.com/questions/140163/timeseries-analysis-procedure-and-methods-using-r
http://robjhyndman.com/hyndsight/crossvalidation/
http://robjhyndman.com/hyndsight/tscvexample/
Evaluating forecast accuracy https://www.otexts.org/fpp/2/5/
http://moderntoolmaking.blogspot.com/2011/11/functional-and-parallel-time-series.html
http://moderntoolmaking.blogspot.com/search/label/cross-validation
http://moderntoolmaking.blogspot.com/search/label/forecasting
cross validation and train/test split - Selecting the best model in scikit-learn using cross-validation https://www.youtube.com/watch?v=6dbrR-WymjI
! dplyr vs data.table
http://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly/27840349#27840349
http://www.r-bloggers.com/working-with-large-datasets-with-dplyr-and-data-table/
http://www.r-statistics.com/2013/09/a-speed-test-comparison-of-plyr-data-table-and-dplyr/
! shiny
reproducible research with R and shiny https://www.safaribooksonline.com/library/view/strata-hadoop/9781491927960/part24.html
http://rmarkdown.rstudio.com/
https://rstudio.github.io/packrat/
https://www.shinyapps.io
https://gist.github.com/SachaEpskamp/5796467 A general shiny app to import and export data to R. Note that this can be used as a starting point for any app that requires data to be loaded into Shiny.
https://www.youtube.com/watch?v=HPZSunrSo5M R Shiny app tutorial # 15 - how to use fileInput to upload CSV or Text file
!! shiny time series
http://markedmondson.me/my-google-analytics-time-series-shiny-app-alpha
https://gist.github.com/MarkEdmondson1234/3190fb967f3cbc2eeae2
http://blog.rstudio.org/2015/04/14/interactive-time-series-with-dygraphs/
http://stackoverflow.com/questions/28049248/create-time-series-graph-in-shiny-from-user-inputs
!! courses/tutorials
http://shiny.rstudio.com/
http://shiny.rstudio.com/tutorial/
http://shiny.rstudio.com/articles/
http://shiny.rstudio.com/gallery/
http://shiny.rstudio.com/articles/shinyapps.html
http://shiny.rstudio.com/reference/shiny/latest/ <- function references
https://www.safaribooksonline.com/library/view/introduction-to-shiny/9781491959558/
https://www.safaribooksonline.com/library/view/web-application-development/9781782174349/
http://deanattali.com/blog/building-shiny-apps-tutorial/
https://github.com/rstudio/IntroToShiny
!! showcase/gallery/examples
https://www.rstudio.com/products/shiny/shiny-user-showcase/
https://github.com/rstudio/shiny-examples
!! persistent data/storage in shiny
http://deanattali.com/blog/shiny-persistent-data-storage/
http://daattali.com/shiny/persistent-data-storage/
https://github.com/daattali/shiny-server/tree/master/persistent-data-storage
!! google form with shiny app
http://deanattali.com/2015/06/14/mimicking-google-form-shiny/
!! real time monitoring of R package downloads
https://gallery.shinyapps.io/087-crandash/
https://github.com/Athospd/semantix_closeness_centrality
!! R pivot table
http://www.magesblog.com/2015/03/pivot-tables-with-r.html
http://www.joyofdata.de/blog/pivoting-data-r-excel-style/
http://stackoverflow.com/questions/33214397/download-rpivottable-ouput-in-shiny
https://www.rforexcelusers.com/make-pivottable-in-r/
https://github.com/smartinsightsfromdata/rpivotTable/blob/master/R/rpivotTable.R
https://github.com/joyofdata/r-big-pivot
!! setup shiny server
https://www.digitalocean.com/community/tutorials/how-to-set-up-shiny-server-on-ubuntu-14-04
http://deanattali.com/2015/05/09/setup-rstudio-shiny-server-digital-ocean/
http://www.r-bloggers.com/how-to-get-your-very-own-rstudio-server-and-shiny-server-with-digitalocean/
http://johndharrison.blogspot.com/2014/03/rstudioshiny-server-on-digital-ocean.html
http://www.r-bloggers.com/deploying-your-very-own-shiny-server/
http://matthewlincoln.net/2015/08/31/setup-rstudio-and-shiny-servers-on-digital-ocean.html
!! nearPoints, brushedPoints
http://shiny.rstudio.com/articles/selecting-rows-of-data.html
http://shiny.rstudio.com/reference/shiny/latest/brushedPoints.html
http://stackoverflow.com/questions/31445367/r-shiny-datatableoutput-not-displaying-brushed-points
http://stackoverflow.com/questions/34642851/shiny-ggplot-with-interactive-x-and-y-does-not-pass-information-to-brush
http://stackoverflow.com/questions/29965979/data-object-not-found-when-deploying-shiny-app
https://github.com/BillPetti/Scheduling-Shiny-App
!! deploy app
library(rsconnect)
rsconnect::deployApp('E:/GitHub/code_ninja/r/shiny/karlshiny')
!! shiny and d3
http://stackoverflow.com/questions/26650561/binding-javascript-d3-js-to-shiny
http://www.r-bloggers.com/d3-and-r-interacting-through-shiny/
https://github.com/timelyportfolio/shiny-d3-plot
https://github.com/vega/vega/wiki/Vega-and-D3
http://vega.github.io/
! data frame vs data table
http://stackoverflow.com/questions/13618488/what-you-can-do-with-data-frame-that-you-cant-in-data-table
http://stackoverflow.com/questions/18001120/what-is-the-practical-difference-between-data-frame-and-data-table-in-r
! stat functions
stat_summary dot plot - ggplot2 dot plot : Quick start guide - R software and data visualization http://www.sthda.com/english/wiki/print.php?id=180
! ggplot2
ggplot2 essentials http://www.sthda.com/english/wiki/ggplot2-essentials
Be Awesome in ggplot2: A Practical Guide to be Highly Effective - R software and data visualization http://www.sthda.com/english/wiki/be-awesome-in-ggplot2-a-practical-guide-to-be-highly-effective-r-software-and-data-visualization
Beautiful plotting in R: A ggplot2 cheatsheet http://zevross.com/blog/2014/08/04/beautiful-plotting-in-r-a-ggplot2-cheatsheet-3/
!! real time viz
http://stackoverflow.com/questions/11365857/real-time-auto-updating-incremental-plot-in-r
http://stackoverflow.com/questions/27205610/real-time-auto-incrementing-ggplot-in-r
! ggvis
ggvis vs ggplot2 http://ggvis.rstudio.com/ggplot2.html
ggvis basics http://ggvis.rstudio.com/ggvis-basics.html#layers
Properties and scales http://ggvis.rstudio.com/properties-scales.html
ggvis cookbook http://ggvis.rstudio.com/cookbook.html
https://www.cheatography.com/shanly3011/cheat-sheets/data-visualization-in-r-ggvis-continued/
http://stats.stackexchange.com/questions/117078/for-plotting-with-r-should-i-learn-ggplot2-or-ggvis
! Execute R inside Oracle
https://blogs.oracle.com/R/entry/invoking_r_scripts_via_oracle
https://blogs.oracle.com/R/entry/oraah_enabling_high_performance_r
https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1
http://sheepsqueezers.com/media/documentation/oracle/ore-trng4-embeddedrscripts-1501638.pdf
Oracle R Enterprise Hands-on Lab http://static1.1.sqspcdn.com/static/f/552253/24257177/1390505576063/BIWA_14_Presentation_3.pdf?token=LqmhB3tJhuDeN0eYOXaGlm04BlI%3D
http://www.peakindicators.com/blog/the-advantages-of-ore-over-traditional-r
COUPLING DATABASES AND ADVANCED ANALYTICAL TOOLS (R) http://it4bi.univ-tours.fr/it4bi/medias/pdfs/2014_Master_Thesis/IT4BI_2014_submission_4.pdf
R Interface for Embedded R Execution http://docs.oracle.com/cd/E67822_01/OREUG/GUID-3227A0D4-C5FE-49C9-A28C-8448705ADBCF.htm#OREUG495
automated trading strategies with R http://www.oracle.com/assets/media/automatedtradingstrategies-2188856.pdf?ssSourceSiteId=otnen
Is it possible to run a SAS or R script from PL/SQL? http://stackoverflow.com/questions/4043629/is-it-possible-to-run-a-sas-or-r-script-from-pl-sql
statistical analysis with oracle http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/40cfa537-62b8-2f10-a78d-d320a2ab7205?overridelayout=true
! Turn your R code into a web API
https://github.com/trestletech/plumber
! errors
!! dplyr “Select” - Error: found duplicated column name
http://stackoverflow.com/questions/28549045/dplyr-select-error-found-duplicated-column-name
! spark
[[sparklyr]]
! R cookbook - Winston C.
http://www.cookbook-r.com/
! references
http://www.amazon.com/The-Art-Programming-Statistical-Software/dp/1593273843/ref=tmm_pap_title_0?ie=UTF8&qid=1392504776&sr=8-1
http://www.amazon.com/R-Graphics-Cookbook-Winston-Chang/dp/1449316956/ref=tmm_pap_title_0?ie=UTF8&qid=1392504949&sr=8-2
http://had.co.nz/
http://adv-r.had.co.nz/ <- advanced guide
http://adv-r.had.co.nz/Style.html <- style guide
https://www.youtube.com/user/rdpeng/playlists
https://cran.r-project.org/doc/contrib/Short-refcard.pdf
discovering statistics using R http://library.mpib-berlin.mpg.de/toc/z2012_1351.pdf
! rpubs favorites
http://rpubs.com/karlarao
Interpreting coefficients from interaction (Part 1) http://rpubs.com/hughes/15353
! tricks
!! count frequency of words
{{{
numbers <- c(33, 30, 14, 1 , 6, 19, 34, 17, 14, 15, 24 , 21, 24, 34, 6, 24, 34, 6, 29, 5, 19 , 4, 3, 19, 4, 14, 20, 34)
library(dplyr); arrange(as.data.frame(table(numbers)), Freq)  # sort by frequency to match the output below
numbers Freq
1 1
3 1
5 1
15 1
17 1
20 1
21 1
29 1
30 1
33 1
4 2
6 3
14 3
19 3
24 3
34 4
}}}
!! How to print text and variables in a single line in r
https://stackoverflow.com/questions/32241806/how-to-print-text-and-variables-in-a-single-line-in-r/32242334
! data structures
!! see R DATA FORMAT
[[R data format]]
! XLConnect
!! XLConnect strftime
https://stackoverflow.com/questions/21312173/how-can-i-retrive-the-time-only-with-xlconnect
!! 1899-Dec-31
http://www.cpearson.com/excel/datetime.htm
!! R import big xlsx
https://stackoverflow.com/questions/19147884/importing-a-big-xlsx-file-into-r/31029292#31029292
!! R write CSV
http://rprogramming.net/write-csv-in-r/
!! R java memory
https://stackoverflow.com/questions/34624002/r-error-java-lang-outofmemoryerror-java-heap-space
https://stackoverflow.com/questions/11766981/xlconnect-r-use-of-jvm-memory
!! R commandargs
https://www.rdocumentation.org/packages/R.utils/versions/2.8.0/topics/commandArgs
! scraping HTML (XML package)
http://bradleyboehmke.github.io/2015/12/scraping-html-tables.html
.
<<<
Before h2o there’s a GUI data mining tool called rattle
quick intro http://r4stats.com/articles/software-reviews/rattle/
detailed course https://www.udemy.com/data-mining-with-rattle/
https://www.kdnuggets.com/2017/02/top-r-packages-machine-learning.html
I’d definitely try h2o with the same data set https://www.h2o.ai/try-driverless-ai/
it also helps having Tableau for easy validation of the raw data, and sqldf https://www.r-bloggers.com/make-r-speak-sql-with-sqldf/ (also available in python - from pandasql import sqldf)
and of course R studio, pycharm, and sql developer
another GUI tool is Exploratory, made by an ex-Oracle guy (from the Oracle Visual Analyzer team)
https://exploratory.io/features
<<<
<<showtoc>>
http://insightdataengineering.com/blog/The-Data-Engineering-Ecosystem-An-Interactive-Map.html
https://blog.insightdatascience.com/the-new-data-engineering-ecosystem-trends-and-rising-stars-414a1609d4a0#.c03g5b1nc
https://github.com/InsightDataScience/data-engineering-ecosystem/wiki/Data-Engineering-Ecosystem
https://github.com/InsightDataScience/data-engineering-ecosystem
! the ecosystem
!! v3
http://xyz.insightdataengineering.com/blog/pipeline_map/
[img(50%,50%)[ https://i.imgur.com/xeo0SP4.png ]]
!! v2
http://xyz.insightdataengineering.com/blog/pipeline_map_v2.html
[img(90%,90%)[ http://i.imgur.com/gn9E7Jf.png ]]
!! v1
http://xyz.insightdataengineering.com/blog/pipeline_map_v1.html
[img(90%,90%)[ https://lh3.googleusercontent.com/-iD9v8Iho_7g/VZVvf0mK1PI/AAAAAAAACmU/VlovJ-JP2cI/s2048/20150702_DataEngineeringEcosystem.png ]]
! hadoop architecture use case
[img[ https://lh3.googleusercontent.com/-QsRM3czDMkg/Vhfg7pTmFrI/AAAAAAAACzU/4BEa8SfK_KU/s800-Ic42/IMG_8542.JPG ]]
! others
https://trello.com/b/rbpEfMld/data-science
! also check
!! microservices patterns
[img(100%,100%)[ https://i.imgur.com/7p8kBwI.png]]
https://microservices.io/patterns/index.html
<<showtoc>>
! The players:
!! Cloud computing
!!! AWS
!!! Azure
!!! Google Cloud
!!! Digital Ocean
!! Infrastructure as code
!!! Chef http://www.getchef.com/chef/ , http://puppetlabs.com/puppet/puppet-enterprise
!!! Puppet
!!! Ansible
!!! Saltstack
!!! terraform
!!! cfengine
!! Build and Test using continuous integration
!!! jenkins https://jenkins-ci.org/
!! Containerization
!!! docker, kubernetes
! ''reviews''
http://www.infoworld.com/d/data-center/puppet-or-chef-the-configuration-management-dilemma-215279
! Comparison of open-source configuration management software
http://en.wikipedia.org/wiki/Comparison_of_open_source_configuration_management_software
! List of build automation software
http://en.wikipedia.org/wiki/List_of_build_automation_software
! nice viz from heroku website
[img[ http://i.imgur.com/4kTs7TE.png ]]
[img[ http://i.imgur.com/PHCK74x.png ]]
! from hashicorp
[img(70%,70%)[ http://i.imgur.com/yj0bKNF.png ]]
[img(70%,70%)[ http://i.imgur.com/zrSw8ge.png ]]
http://thenewstack.io/devops-landscape-2015-the-race-to-the-management-layer/
https://gist.github.com/diegopacheco/8f3a03a0869578221ecf
https://becominghuman.ai/cheat-sheets-for-ai-neural-networks-machine-learning-deep-learning-big-data-678c51b4b463
https://medium.com/machine-learning-in-practice/cheat-sheet-of-machine-learning-and-python-and-math-cheat-sheets-a4afe4e791b6
https://ml-cheatsheet.readthedocs.io/en/latest/
https://technology.amis.nl/2017/05/06/the-hello-world-of-machine-learning-with-python-pandas-jupyter-doing-iris-classification-based-on-quintessential-set-of-flower-data/
<<showtoc>>
! DVC - data version control
MLOps Data Versioning and DataOps with Dmitry Petrov of DVC.org
https://www.meetup.com/pl-PL/bristech/events/271251921/
https://www.eventbrite.com/e/dc-thurs-dvc-w-dmitry-petrov-tickets-120036389071?ref=enivtefor001&invite=MjAwNjgyNjMva2FybGFyYW9AZ21haWwuY29tLzA%3D%0A&utm_source=eb_email&utm_medium=email&utm_campaign=inviteformalv2&utm_term=eventpage
Using Python With Oracle Database 11g
http://www.oracle.com/technetwork/articles/dsl/python-091105.html
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/OOW11/python_db/python_db.htm
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/oow10/python_db/python_db.htm
http://cx-oracle.sourceforge.net/
http://www.python.org/dev/peps/pep-0249/
http://www.amazon.com/gp/product/1887902996
http://wiki.python.org/moin/BeginnersGuide
''python for ipad'' http://www.tuaw.com/2012/11/19/python-3-2-lets-you-write-python-on-the-iphone/
''python environment''
http://blog.andrewhays.net/love-your-terminal
http://ozkatz.github.com/improving-your-python-productivity.html
http://showmedo.com/videotutorials/python
The Ultimate Python Programming Course http://goo.gl/vvpWE, https://www.udemy.com/the-ultimate-python-programming-course/
Python 3 Essential Training http://www.lynda.com/Python-3-tutorials/essential-training/62226-2.html
{{{
parameters:
p_owner
p_tabname
p_partname
p_granularity
p_est_percent
p_method_opt
p_degree
}}}
{{{
CREATE OR REPLACE PROCEDURE alloc_app_perf.table_stats
(
p_owner IN varchar2,
p_tabname IN varchar2,
p_partname IN varchar2 default NULL,
p_granularity IN varchar2 default 'GLOBAL AND PARTITION',
p_est_percent IN varchar2 default 'DBMS_STATS.AUTO_SAMPLE_SIZE',
p_method_opt IN varchar2 default 'FOR ALL COLUMNS SIZE AUTO',
p_degree IN varchar2 default 8
)
IS
action varchar2(128);
v_mode varchar2(30);
cmd varchar2(2000);
BEGIN
action := 'Analyzing the table ' || p_tabname;
IF p_partname IS NOT NULL THEN
action := action||', partition '||p_partname;
v_mode := p_granularity;
cmd := '
BEGIN
DBMS_STATS.GATHER_TABLE_STATS('||
'ownname=>'''||p_owner||''',tabname=>'''||p_tabname||
''',partname=>'''||p_partname||''',granularity=>'''||v_mode||
''',estimate_percent=>'||p_est_percent||',method_opt=>'''||p_method_opt||''',cascade=>TRUE,degree=>'||p_degree||');
END;';
execute immediate cmd;
ELSE
v_mode := 'DEFAULT';
cmd := '
BEGIN
DBMS_STATS.GATHER_TABLE_STATS('||
'ownname=>'''||p_owner||''',tabname=>'''||p_tabname||
''',estimate_percent=>'||p_est_percent||',method_opt=>'''||p_method_opt||''',cascade=>TRUE,degree=>'||p_degree||');
END;';
execute immediate cmd;
END IF;
END;
/
}}}
{{{
grant analyze any to alloc_app_perf;
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'CLASS_SALES')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'CLASS_SALES',p_est_percent=>'dbms_stats.auto_sample_size')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'CLASS_SALES',p_est_percent=>'1')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'DBA_OBJECTS')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'DBA_OBJECTS',p_est_percent=>'1')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'DBA_OBJECTS',p_est_percent=>'100')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'DBA_OBJECTS',p_est_percent=>'dbms_stats.auto_sample_size')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'DEMO_SKEW')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'DEMO_SKEW',p_method_opt=>'for all columns size skewonly')
}}}
! generate stats commands
{{{
set lines 500
set pages 0
select 'exec alloc_app_perf.table_stats(p_owner=>'''||owner||''',p_tabname=>'''||table_name||''',p_degree=>''16'');'
from dba_tables where table_name in
('ALGO_INPUT_FOR_REVIEW'
,'ALLOCATED_NOTSHIPPED_INVS'
,'ALLOC_BATCH_LINE_ITEMS'
,'ALLOC_BATCH_VOLUMEGRADE_CHANGE'
,'ALLOC_SKU0_MASTER'
,'ALLOC_SKU_MASTER'
,'ALLOC_STORES'
,'EOM_NEED_UNITS'
,'EOM_UNIT_TARGETS'
,'INVENTORY_CLASSES'
,'SIM_CLASSES'
,'STAGING_STORES'
,'STORE_ON_ORDERS'
,'STORE_WAREHOUSE_DETAILS'
,'VOLUME_GRADE_CONSTRAINTS')
/
}}}
https://github.com/DingGuodong/LinuxBashShellScriptForOps
http://www.bashoneliners.com/
https://github.com/learnbyexample/scripting_course
https://github.com/learnbyexample/Linux_command_line/blob/master/Shell_Scripting.md
https://github.com/learnbyexample/Linux_command_line/blob/master/Text_Processing.md
https://medium.com/capital-one-developers/bashing-the-bash-replacing-shell-scripts-with-python-d8d201bc0989
https://www.linuxjournal.com/content/python-scripts-replacement-bash-utility-scripts
https://www.educba.com/bash-shell-programming-with-python/
https://www.linuxquestions.org/questions/linux-software-2/need-help-converting-bash-script-to-python-4175605267/
https://stackoverflow.com/questions/2839810/converting-a-bash-script-to-python-small-script
https://tails.boum.org/blueprint/Port_shell_scripts_to_Python/
https://www.dreamincode.net/forums/topic/399713-convert-a-shell-script-to-python/
https://grasswiki.osgeo.org/wiki/Converting_Bash_scripts_to_Python
! tools
https://zwischenzugs.com/2016/08/29/bash-to-python-converter/
https://github.com/tomerfiliba/plumbum
https://hub.docker.com/r/imiell/bash2py/
https://github.com/ianmiell/bash2py
http://www.swag.uwaterloo.ca/bash2py/index.html , https://ieeexplore.ieee.org/document/7081866/?reload=true
<<showtoc>>
! gcp security
* https://www.udemy.com/course/introduction-to-google-cloud-security-features/learn/lecture/14562410#overview
! bigquery
* Practical Google BigQuery for those who already know SQL https://www.udemy.com/course/practical-google-bigquery-for-those-who-already-know-sql/
* https://www.udemy.com/course/google-bigquery-for-marketers-and-agencies/
! cloud composer (airflow)
* playlist - Apache Airflow Tutorials - https://www.youtube.com/watch?v=AHMm1wfGuHE&list=PLYizQ5FvN6pvIOcOd6dFZu3lQqc6zBGp2
* Apache Airflow using Google Cloud Composer https://www.udemy.com/course/apache-airflow-using-google-cloud-composer-introduction/
! dataflow (apache beam)
* https://www.udemy.com/course/streaming-analytics-on-google-cloud-platform/learn/lecture/7996614#announcements
* https://www.udemy.com/course/apache-beam-a-hands-on-course-to-build-big-data-pipelines/learn/lecture/16220774#announcements
! end to end
* https://www.udemy.com/course/data-engineering-on-google-cloud-platform/
* https://www.udemy.com/course/talend-open-studio-for-big-data-using-gcp-bigquery/
! SQL
https://www.udemy.com/course/oracle-analytic-functions-in-depth/
https://www.udemy.com/course/oracle-plsql-is-my-game-exam-1z0-144/
! python
https://www.udemy.com/course/python-oops-beginners/learn/lecture/7359360#overview
https://www.udemy.com/course/python-object-oriented-programming-oop/learn/lecture/16917860#overview
https://www.udemy.com/course/python-sql-tableau-integrating-python-sql-and-tableau/learn/lecture/13205790#overview
https://www.youtube.com/c/Coreyms/videos
https://www.youtube.com/c/realpython/videos
! java
https://www.udemy.com/course/java-for-absolute-beginners/learn/lecture/14217184#overview
<<showtoc>>
! PL/SQL User's Guide and Reference Release - Sample PL/SQL Programs
https://docs.oracle.com/cd/A97630_01/appdev.920/a96624/a_samps.htm
! steve's videos - the plsql channel
http://tutorials.plsqlchannel.com/public/index.php
https://learning.oreilly.com/search/?query=Steven%20Feuerstein&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&is_academic_institution_account=false&sort=relevance&page=0
https://www.youtube.com/channel/UCpJpLMRm452kVcie3RpINPw/playlists
http://stevenfeuersteinonplsql.blogspot.com/2015/03/27-hours-of-free-plsql-video-training.html
practically perfect plsql playlist https://apexapps.oracle.com/pls/apex/f?p=44785:141:0::NO::P141_PAGE_ID,P141_SECTION_ID:168,1208
https://www.oracle.com/database/technologies/appdev/plsql.html
! style guide
http://oracle.readthedocs.org/en/latest/sql/basics/style-guide.html
http://www.williamrobertson.net/documents/plsqlcodingstandards.html
! plsql the good parts
https://github.com/mortenbra/plsql-the-good-parts
http://mortenbra.github.io/plsql-the-good-parts/
! bulk collect and forall
https://venzi.wordpress.com/2007/09/27/bulk-collect-forall-vs-cursor-for-loop/
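The gist of the pattern in that link: fetch rows in batches with BULK COLLECT LIMIT, then do the DML in batches with FORALL, so you get one context switch per batch instead of per row. A minimal sketch (big_table and its columns are made up for illustration):
{{{
DECLARE
  CURSOR c IS SELECT id FROM big_table;            -- hypothetical table
  TYPE t_ids IS TABLE OF big_table.id%TYPE;
  l_ids t_ids;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_ids LIMIT 1000;    -- fetch in batches of 1000
    EXIT WHEN l_ids.COUNT = 0;
    FORALL i IN 1 .. l_ids.COUNT                   -- one context switch per batch
      UPDATE big_table SET processed = 'Y' WHERE id = l_ids(i);
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/
}}}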
! mvc pl/sql
http://www.dba-oracle.com/oracle_news/2004_10_27_MVC_development_using_plsql.htm
https://github.com/osalvador/dbax
http://it.toolbox.com/blogs/jjflash-oracle-journal/mvc-for-plsql-and-the-apex-listener-42688
http://jj-blogger.blogspot.com/2006/05/plsql-and-faces.html
https://www.rittmanmead.com/blog/2004/09/john-flack-on-mvc-development-using-plsql/
http://www.liberidu.com/blog/2016/11/02/how-you-should-or-shouldnt-design-program-for-a-performing-database-environment/
! references/books
http://stevenfeuersteinonplsql.blogspot.com/2014/05/resources-for-new-plsql-developers.html
https://www.safaribooksonline.com/library/view/beginning-plsql-from/9781590598825/
https://www.safaribooksonline.com/library/view/beginning-oracle-plsql/9781484207376/
https://www.safaribooksonline.com/library/view/oracle-and-plsql/9781430232070/
https://www.safaribooksonline.com/library/view/oracle-plsql-for/9780764599576/
https://www.safaribooksonline.com/library/view/oracle-plsql-for/0596005873/
! wiki
http://www.java2s.com/Tutorials/Database/Oracle_PL_SQL_Tutorial/index.htm
https://gerardnico.com/wiki/plsql/plsql
! implicit cursor attribute
https://www.ibm.com/support/knowledgecenter/en/SS6NHC/com.ibm.swg.im.dashdb.apdv.plsql.doc/doc/c0053881.html
https://www.ibm.com/support/knowledgecenter/SS6NHC/com.ibm.swg.im.dashdb.apdv.plsql.doc/doc/c0053879.html
https://www.ibm.com/support/knowledgecenter/SS6NHC/com.ibm.swg.im.dashdb.apdv.plsql.doc/doc/c0053878.html
https://www.ibm.com/support/knowledgecenter/SS6NHC/com.ibm.swg.im.dashdb.apdv.plsql.doc/doc/c0053607.html
PL/SQL Language Elements https://docs.oracle.com/cd/B28359_01/appdev.111/b28370/langelems.htm#LNPLS013
Cursor Attribute https://docs.oracle.com/cd/B28359_01/appdev.111/b28370/cursor_attribute.htm#LNPLS01311
https://markhoxey.wordpress.com/2012/12/11/referencing-implicit-cursor-attributes-sql/
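As a quick illustration of the implicit cursor attributes covered in the links above, SQL%ROWCOUNT and SQL%NOTFOUND right after a DML statement (emp_copy is a made-up table):
{{{
-- set serveroutput on
BEGIN
  UPDATE emp_copy SET sal = sal * 1.1 WHERE deptno = 10;   -- hypothetical table
  DBMS_OUTPUT.PUT_LINE('rows updated: ' || SQL%ROWCOUNT);  -- rows touched by the last DML
  IF SQL%NOTFOUND THEN                                     -- TRUE when the DML touched no rows
    DBMS_OUTPUT.PUT_LINE('no rows matched');
  END IF;
END;
/
}}}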
! plsql profiling
https://www.thatjeffsmith.com/archive/2019/02/sql-developer-the-pl-sql-hierarchical-profiler/
{{{
Script to produce HTML report with top consumers out of PL/SQL Profiler DBMS_PROFILER data (Doc ID 243755.1)
PURPOSE
To use the PL/SQL Profiler please refer to DBMS_PROFILER documentation as per Oracle® Database PL/SQL Packages and Types Reference for your specific release and platform.
Once you have executed the PL/SQL Profiler for a piece of your application, you can use script profiler.sql provided in this document. This profiler.sql script produces a nice HTML report with the top time consumers as per your execution of the PL/SQL Profiler.
TROUBLESHOOTING STEPS
Familiarize yourself with the PL/SQL Profiler documented in the "Oracle® Database PL/SQL Packages and Types Reference" under DBMS_PROFILER.
If needed, create the PL/SQL Profiler Tables under your application schema: @?/rdbms/admin/proftab.sql
If needed, install the DBMS_PROFILER API, connected as SYS: @?/rdbms/admin/profload.sql
Start PL/SQL Profiler in your application: EXEC DBMS_PROFILER.START_PROFILER('optional comment');
Execute your transaction to be profiled. Calls to PL/SQL Libraries are expected.
Stop PL/SQL Profiler: EXEC DBMS_PROFILER.STOP_PROFILER;
Connect as your application user, execute script profiler.sql provided in this document: @profiler.sql
Provide to profiler.sql the "runid" out of a displayed list.
Review HTML report generated by profiler.sql.
}}}
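For a quick look without the HTML report, you can also query the profiler tables that proftab.sql creates directly. A sketch of a top-consumers query (assumes the PLSQL_PROFILER_* tables are in the current schema; TOTAL_TIME is stored in nanoseconds, and FETCH FIRST needs 12c+):
{{{
-- top 20 lines by elapsed time for a given profiler run
SELECT u.unit_owner, u.unit_name, u.unit_type,
       d.line#, d.total_occur,
       ROUND(d.total_time/1e9, 3) total_secs
FROM   plsql_profiler_units u,
       plsql_profiler_data  d
WHERE  u.runid       = d.runid
AND    u.unit_number = d.unit_number
AND    u.runid       = &runid
ORDER  BY d.total_time DESC
FETCH FIRST 20 ROWS ONLY;
}}}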
! plsql collections
Collections in Oracle PLSQL https://www.youtube.com/watch?v=DvA-amyao7s
!! accessing varray
Accessing elements in a VARRAY column which is in a type https://community.oracle.com/thread/3961996
https://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_10002.htm#i2071643
! pl/sql design patterns
<<<
https://technology.amis.nl/2006/03/10/design-patterns-in-plsql-the-template-pattern/
https://technology.amis.nl/2006/03/11/design-patterns-in-plsql-interface-injection-for-even-looser-coupling/
https://technology.amis.nl/2006/04/02/design-patterns-in-plsql-implementing-the-observer-pattern/
https://blog.serpland.com/tag/design-patterns
https://blog.serpland.com/oracle/design-patterns-in-plsql-oracle
https://peterhrasko.wordpress.com/2017/09/16/oop-design-patterns-in-plsql/
<<<
! plsql dynamic sql
!! EXECUTE IMMEDIATE with multiple lines of columns to insert
https://stackoverflow.com/questions/14401631/execute-immediate-with-multiple-lines-of-columns-to-insert
https://stackoverflow.com/questions/9090072/insert-a-multiline-string-in-oracle-with-sqlplus
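The usual approach from those threads: build the whole statement as one string (the q'[...]' alternative quoting keeps multi-line text readable and avoids doubling up quotes) and pass the values as binds. A small sketch with a made-up table:
{{{
DECLARE
  l_sql VARCHAR2(4000);
BEGIN
  l_sql := q'[INSERT INTO audit_log (id, msg, created)
              VALUES (:1, :2, SYSDATE)]';    -- hypothetical table; multi-line literal via q'[]'
  EXECUTE IMMEDIATE l_sql USING 101, 'it''s a multi-line insert';
END;
/
}}}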
! plsql cursor within the cursor
https://www.techonthenet.com/oracle/questions/cursor2.php
! end
https://startupsventurecapital.com/essential-cheat-sheets-for-machine-learning-and-deep-learning-researchers-efb6a8ebd2e5
Data Mining from a process perspective
(from the book Data Mining for Business Analytics - Concepts, Techniques, and Applications)
[img(80%,80%)[http://i.imgur.com/tJ4TVCX.png]]
[img(80%,80%)[http://i.imgur.com/rmLhSnV.png]]
Machine Learning Summarized in One Picture
http://www.datasciencecentral.com/profiles/blogs/machine-learning-summarized-in-one-picture
[img(80%,80%)[http://i.imgur.com/oA0LjyF.png]]
Data Science Summarized in One Picture
http://www.datasciencecentral.com/profiles/blogs/data-science-summarized-in-one-picture
https://www.linkedin.com/pulse/business-intelligence-data-science-fuzzy-borders-rubens-zimbres
[img(80%,80%)[http://i.imgur.com/1SnVfqV.png]]
Python for Big Data in One Picture
http://www.datasciencecentral.com/profiles/blogs/python-for-big-data-in-one-picture
https://www.r-bloggers.com/python-r-vs-spss-sas/
[img(80%,80%)[http://i.imgur.com/5kPV76P.jpg]]
R for Big Data in One Picture
http://www.datasciencecentral.com/profiles/blogs/r-for-big-data-in-one-picture
[img(80%,80%)[http://i.imgur.com/abDq0ow.jpg]]
! top data science packages
[img(100%,100%)[ https://i.imgur.com/tj3ryoK.png]]
https://www.coriers.com/comparison-of-top-data-science-libraries-for-python-r-and-scala-infographic/
! ML modelling in R cheat sheet
[img(100%,100%)[ https://i.imgur.com/GPiepGw.jpg]]
https://github.com/rstudio/cheatsheets/raw/master/Machine%20Learning%20Modelling%20in%20R.pdf
https://www.r-bloggers.com/machine-learning-modelling-in-r-cheat-sheet/
! ML workflow
[img(100%,100%)[ https://i.imgur.com/TuhIB7T.png ]]
.
also see [[database/data movement methods]]
<<showtoc>>
! RMAN
[img(50%,50%)[ http://i.imgur.com/eLK7RRk.png ]]
!! backup and restore
backup and restore from physical standby http://gavinsoorma.com/2012/04/performing-a-database-clone-using-a-data-guard-physical-standby-database/
Using RMAN Incremental Backups to Refresh Standby Database http://oracleinaction.com/using-rman-incremental-backups-refresh-standby-database/
https://jarneil.wordpress.com/2008/06/03/applying-an-incremental-backup-to-a-physical-standby/
!! active duplication
create standby database using rman active duplicate https://www.pythian.com/blog/creating-a-physical-standby/
https://oracle-base.com/articles/11g/duplicate-database-using-rman-11gr2#active_database_duplication
https://oracle-base.com/articles/12c/recovery-manager-rman-database-duplication-enhancements-12cr1
!! backup-based duplication
duplicate database without connecting to target http://oracleinaction.com/duplicate-db-no-db-conn/
https://www.safaribooksonline.com/library/view/rman-recipes-for/9781430248361/9781430248361_Ch15.xhtml
https://www.safaribooksonline.com/library/view/oracle-database-12c/9780071847445/ch10.html#ch10lev15
http://oracledbasagar.blogspot.com/2011/11/cloning-on-different-server-using-rman.html
!! restartable duplicate
11gr2 DataGuard: Restarting DUPLICATE After a Failure https://blogs.oracle.com/XPSONHA/entry/11gr2_dataguard_restarting_dup
! dNFS + CloneDB
CloneDB uses the RMAN backup piece (an image copy) as the backing storage for the thin clone; see the sketch after the links below.
1210656.1: “Clone your dNFS Production Database for Testing.”
How to Accelerate Test and Development Through Rapid Cloning of Production Databases and Operating Environments http://www.oracle.com/technetwork/server-storage/hardware-solutions/o13-022-rapid-cloning-db-1919816.pdf
https://oracle-base.com/articles/11g/clonedb-11gr2
http://datavirtualizer.com/database-thin-cloning-clonedb-oracle/
Clonedb: The quick and easy cloning solution you never knew you had https://www.youtube.com/watch?v=YBVj1DkUG54
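From the links above, the mechanics of CloneDB boil down to pointing the clone's datafiles at the backup copy via dbms_dnfs. A rough sketch (paths are made up; in practice the clone.pl script from MOS 1210656.1 generates these steps for you):
{{{
-- init.ora of the clone instance
clonedb=TRUE

-- after CREATE CONTROLFILE on the clone, repoint each datafile at the backup image copy
BEGIN
  dbms_dnfs.clonedb_renamefile('/nfs/backup/users01.dbf',   -- backing store (image copy)
                               '/nfs/clone/users01.dbf');   -- sparse file for the clone
END;
/
ALTER DATABASE OPEN RESETLOGS;
}}}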
! oem12c snapclone , snap clone
http://datavirtualizer.com/em-12c-snap-clone/
snap clone https://www.safaribooksonline.com/library/view/building-database-clouds/9780134309781/ch08.html#ch08
https://dbakevlar.com/2013/09/em-12c-snap-clone/
DB Snap Clone on Exadata https://www.youtube.com/watch?v=nvEmP6Z65Bg
! Thin provisioning of PDBs using “Snapshot Copy” (using ACFS snapshot or ZFS)
!! ACFS snapshot
https://www.youtube.com/watch?v=jwgD2sg8cyM
https://www.youtube.com/results?search_query=acfs+snapshot
How To Manually Create An ACFS Snapshot (Doc ID 1347365.1)
12.2 Oracle ACFS Snapshot Enhancements (Doc ID 2200299.1)
! exadata sparse clones
https://www.doag.org/formes/pubfiles/10819226/2018-Infra-Peter_Brink-Exadata_Snapshot_Clones-Praesentation.pdf
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/sagug/exadata-storage-server-snapshots.html#GUID-78F67DD0-93C8-4944-A8F0-900D910A06A0
https://learning.oreilly.com/library/view/Oracle+Database+Exadata+Cloud+Service:+A+Beginner's+Guide/9781260120882/ch3.xhtml#page_83
How to Calculate the Physical Size and Virtual Size for Sparse GridDisks in Exadata Sparse Diskgroups (ORA-15041) (Doc ID 2473412.1)
! summary matrix
https://www.oracle.com/technetwork/database/exadata/learnmore/exadata-database-copy-twp-2543083.pdf
https://blogs.oracle.com/exadata/exadata-snapshots-part1
[img(100%,100%)[ https://i.imgur.com/C9dQNwr.png]]
! flexclone (netapp)
snap best practices http://www.netapp.com/us/media/tr-3761.pdf
! delphix
Instant Cloning: Boosting Application Development http://www.nocoug.org/download/2014-02/NoCOUG_201402_delphix.pdf
! references
https://www.safaribooksonline.com/search/?query=RMAN%20duplicate&highlight=true&is_academic_institution_account=false&extended_publisher_data=true&include_orioles=true&source=user&include_courses=true&sort=relevance&page=2
Oracle Database 12c Oracle RMAN Backup & Recovery https://www.safaribooksonline.com/library/view/oracle-database-12c/9780071847445/
{{{
10 Duplication: Cloning the Target Database
RMAN Duplication: A Primer
Why Use RMAN Duplication?
Different Types of RMAN Duplication
The Duplication Architecture
Duplication: Location Considerations
Duplication to the Same Server: An Overview
Duplication to the Same Server, Different ORACLE_HOME
Duplication to a Remote Server: An Overview
Duplication and the Network
RMAN Workshop: Build a Password File
Duplication to the Same Server
RMAN Workshop: Duplication to the Same Server Using Disk Backups
Using Tape Backups
Duplication to a Remote Server
RMAN Workshop: Duplication to a Remote Server Using Disk Backups
Using Tape Backups for Remote Server Duplication
Targetless Duplication in 12c
Incomplete Duplication: Using the DBNEWID Utility
New RMAN Cloning Features for 12c
Using Compression
Duplicating Large Tablespaces
Summary
Duplication to a Single-Node System
RMAN Workshop: Duplicating a RAC Database to a Single-Node Database
Case #9: Completing a Failed Duplication Manually
Case #10: Using RMAN Duplication to Create a Historical Subset of the Target Database
}}}
Building Database Clouds in Oracle 12c https://www.safaribooksonline.com/library/view/building-database-clouds/9780134309781/ch08.html#ch08
{{{
Chapter 8. Cloning Databases in Enterprise Manager 12c
Full Clones
Snap Clones
Summary
}}}
Oracle Database 11g—Underground Advice for Database Administrators https://www.safaribooksonline.com/library/view/oracle-database-11gunderground/9781849680004/ch06s09.html
{{{
RMAN cloning and standbys—physical, snapshot, or logical
}}}
Oracle Database Problem Solving and Troubleshooting Handbook https://www.safaribooksonline.com/library/view/oracle-database-problem/9780134429267/ch14.html
{{{
14. Strategies for Migrating Data Quickly between Databases
}}}
Oracle RMAN Database Duplication https://www.safaribooksonline.com/library/view/oracle-rman-database/9781484211120/9781484211137_Ch01.xhtml
{{{
CHAPTER 1 Introduction
}}}
RMAN Recipes for Oracle Database 12c: A Problem-Solution Approach, Second Edition https://www.safaribooksonline.com/library/view/rman-recipes-for/9781430248361/9781430248361_Ch15.xhtml
{{{
15-1. Renaming Database Files in a Duplicate Database
15-2. Specifying Alternative Names for OMF or ASM File Systems
15-3. Creating a Duplicate Database from RMAN Backups
15-4. Duplicating a Database Without Using RMAN Backups
15-5. Specifying Options for Network-based Active Database Duplication
15-6. Duplicating a Database with Several Directories
15-7. Duplicating a Database to a Past Point in Time
15-8. Skipping Tablespaces During Database Duplication
15-9. Duplicating a Database with a Specific Backup Tag
15-10. Resynchronizing a Duplicate Database
15-11. Duplicating Pluggable Databases and Container Databases
15-12. Transporting Tablespaces on the Same Operating System Platform
15-13. Performing a Cross-Platform Tablespace Transport by Converting Files on the Source Host
15-14. Performing a Cross-Platform Tablespace Transport by Converting Files on the Destination Host
15-15. Transporting a Database by Converting Files on the Source Database Platform
15-16. Transporting Tablespaces to a Different Platform Using RMAN Backup Sets
15-17. Transporting a Database to a Different Platform Using RMAN Backup Sets
}}}
https://15445.courses.cs.cmu.edu/fall2019/
also see [[database cloning methods , rman duplicate]]
<<showtoc>>
! Transactional Migration Methods
[img(30%,30%)[http://i.imgur.com/IXfxJlZ.png]]
! Nontransactional Migration Methods
[img(30%,30%)[http://i.imgur.com/cB7is6q.png]]
[img(30%,30%)[http://i.imgur.com/09X0J6j.png]]
! Piecemeal Migration Methods / Manual Migration Methods
[img(30%,30%)[http://i.imgur.com/zPZlA2V.png]]
[img(30%,30%)[http://i.imgur.com/cnyrkW4.png]]
! Replication techniques
[img(30%,30%)[http://i.imgur.com/oi93qRg.png]]
[img(30%,30%)[http://i.imgur.com/ka9Tm52.png]]
! references
Oracle Database Problem Solving and Troubleshooting Handbook https://www.safaribooksonline.com/library/view/oracle-database-problem/9780134429267/ch14.html
{{{
14. Strategies for Migrating Data Quickly between Databases
}}}
Oracle RMAN Database Duplication https://www.safaribooksonline.com/library/view/oracle-rman-database/9781484211120/9781484211137_Ch01.xhtml
{{{
CHAPTER 1 Introduction
}}}
! yahoo oath
https://www.google.com/search?q=oath+hadoop+platform&oq=oath+hadoop+platform&aqs=chrome..69i57.5613j1j1&sourceid=chrome&ie=UTF-8
https://www.google.com/search?biw=1194&bih=747&ei=4ux9W7m3Nurs_QbNupXQCg&q=yahoo+hadoop+platform+oath&oq=yahoo+hadoop+platform+oath&gs_l=psy-ab.3...12449.17634.0.20887.26.26.0.0.0.0.108.2024.24j2.26.0..2..0...1.1.64.psy-ab..0.23.1791...0j0i131k1j0i67k1j0i131i67k1j0i3k1j0i22i30k1j0i22i10i30k1j33i21k1j33i160k1j33i22i29i30k1.0.rnoY_HHMiAw
also see [[JL five-hints]] for examples on using hints to manipulate the priority of query block/table
.
based on http://ptgmedia.pearsoncmg.com/imprint_downloads/informit/promotions/python/python2python3.pdf
! .
[img(80%,80%)[https://i.imgur.com/VFFUDki.png]]
! .
[img(80%,80%)[https://i.imgur.com/aMNeCYv.png]]
! .
[img(80%,80%)[https://i.imgur.com/21KlHuY.png]]
! .
[img(80%,80%)[https://i.imgur.com/11GXxTf.png]]
https://community.oracle.com/docs/DOC-1005069 <- arup, good stuff
https://blogs.oracle.com/developers/updates-to-python-php-and-c-drivers-for-oracle-database
https://blog.dbi-services.com/oracle-locks-identifiying-blocking-sessions/
{{{
when w.wait_event_text like 'enq: TM%' then
' mode '||decode(w.p1 ,1414332418,'Row-S' ,1414332419,'Row-X' ,1414332420,'Share' ,1414332421,'Share RX' ,1414332422,'eXclusive')
||( select ' on '||object_type||' "'||owner||'"."'||object_name||'" ' from all_objects where object_id=w.p2 )
}}}
https://jonathanlewis.wordpress.com/2010/06/21/locks/
{{{
This list is specifically about the lock modes for a TM lock:
Value Name(s) Table method (TM lock)
0 No lock n/a
1 Null lock (NL) Used during some parallel DML operations (e.g. update) by
the pX slaves while the QC is holding an exclusive lock.
2 Sub-share (SS) Until 9.2.0.5/6 "select for update"
Row-share (RS) Since 9.2.0.1/2 used at opposite end of RI during DML until 11.1
Lock table in row share mode
Lock table in share update mode
3 Sub-exclusive(SX) Update (also "select for update" from 9.2.0.5/6)
Row-exclusive(RX) Lock table in row exclusive mode
Since 11.1 used at opposite end of RI during DML
4 Share (S) Lock table in share mode
Can appear during parallel DML with id2 = 1, in the PX slave sessions
Common symptom of "foreign key locking" (missing index) problem
Note that bitmap indexes on the child DON'T address the locking problem
5 share sub exclusive (SSX) Lock table in share row exclusive mode
share row exclusive (SRX) Less common symptom of "foreign key locking" but likely to be more
frequent if the FK constraint is defined with "on delete cascade."
6 Exclusive (X) Lock table in exclusive mode
create index -- duration and timing depend on options used
insert /*+ append */
}}}
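To see who is currently holding or waiting on TM locks in the modes listed above, something like this against v$lock works (for TM locks, ID1 holds the object_id):
{{{
SELECT s.sid, s.username, o.owner, o.object_name,
       DECODE(l.lmode, 0,'None', 1,'Null', 2,'Row-S (SS)', 3,'Row-X (SX)',
                       4,'Share (S)', 5,'S/Row-X (SSX)', 6,'Exclusive (X)') mode_held,
       l.request
FROM   v$lock l, v$session s, dba_objects o
WHERE  l.type = 'TM'
AND    l.sid  = s.sid
AND    l.id1  = o.object_id;
}}}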
{{{
select * from dba_tables where owner = 'KARLARAO' order by last_analyzed desc;
BEGIN
DBMS_STATS.GATHER_SCHEMA_STATS('KARLARAO',
options=>'GATHER',
estimate_percent=>dbms_stats.auto_sample_size,
degree=>dbms_stats.auto_degree,
cascade=>TRUE,
no_invalidate=> FALSE);
END;
/
SELECT DBMS_STATS.GET_PREFS('AUTOSTATS_TARGET') AS autostats_target,
DBMS_STATS.GET_PREFS('CASCADE') AS cascade,
DBMS_STATS.GET_PREFS('DEGREE') AS degree,
DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT') AS estimate_percent,
DBMS_STATS.GET_PREFS('METHOD_OPT') AS method_opt,
DBMS_STATS.GET_PREFS('NO_INVALIDATE') AS no_invalidate,
DBMS_STATS.GET_PREFS('GRANULARITY') AS granularity,
DBMS_STATS.GET_PREFS('PUBLISH') AS publish,
DBMS_STATS.GET_PREFS('INCREMENTAL') AS incremental,
DBMS_STATS.GET_PREFS('STALE_PERCENT') AS stale_percent
FROM dual;
}}}
<<showtoc>>
Here are some profile/baseline steps that can be done
! To create a profile from a good plan you can do either of the two below:
{{{
Take note that if the predicate has literals you need to specify force_matching=TRUE so that the literals will be treated as binds
Create a profile by copying the plan_hash_value from a different SQL_ID (let's say you rewrote the SQL and you want to inject that new plan into the old SQL_ID)
https://raw.githubusercontent.com/karlarao/scripts/master/performance/create_sql_profile-goodbad.sql
dwbs001s1(sys): @create_sql_profile-goodbad.sql
Enter value for goodsql_id: 22s34g2djar10
Enter value for goodchild_no (0): <HIT ENTER>
Enter value for badsql_id: 00fnpu38hz98x
Enter value for badchild_no (0): <HIT ENTER>
Enter value for profile_name (PROF_sqlid_planhash): <HIT ENTER>
Enter value for category (DEFAULT): <HIT ENTER>
Enter value for force_matching (FALSE): <HIT ENTER>
Enter value for plan_hash_value: <HIT ENTER>
SQL Profile PROF_00fnpu38hz98x_ created.
Create a profile by copying the plan_hash_value from the same SQL (let’s say the previous good plan_hash_value exist, and you want the SQL_ID to use that)
https://raw.githubusercontent.com/karlarao/scripts/master/performance/copy_plan_hash_value.sql
HCMPRD1> @copy_plan_hash_value.sql
Enter value for plan_hash_value to generate profile from (X0X0X0X0): 3609883731 <-- this is the good plan
Enter value for sql_id to attach profile to (X0X0X0X0): c7tadymffd34z
Enter value for child_no to attach profile to (0):
Enter value for category (DEFAULT):
Enter value for force_matching (false):
PL/SQL procedure successfully completed.
}}}
! After stabilizing the SQL to an acceptable response time, you can create a SQL baseline on the SQL_ID with one or more good plan_hash_values
Example below
{{{
SQL with multiple Execution Plans
· The following SQLs, especially SQL_ID 93c0q2r788x6c (bad PHV 369685592) and 8txzdvns1jzxm (bad PHV 866924405), would benefit from a SQL Plan Baseline to exclude the bad PHVs from executing and just use the good ones
  (see report section "3d.297. SQL with multiple Execution Plans (DBA_HIST_SQLSTAT)")
This can be done by following the example below:
· In the example, SQL_ID 93c0q2r788x6c adds Plan Hash Values 1948592153 and 2849155601 to its SQL Plan Baseline so that the optimizer will only choose between those two plans
-- create the baseline
DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '93c0q2r788x6c',plan_hash_value=>'1948592153', fixed =>'YES', enabled=>'YES');
END;
/
-- add the other plan
DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '93c0q2r788x6c',plan_hash_value=>'2849155601', fixed =>'YES', enabled=>'YES');
END;
/
-- verify
set lines 200
set verify off
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(sql_handle=>'&sql_handle', format=>'basic'));
--######################################################################
if PHV 2849155601 is not in the cursor cache then DBMS_SPM.LOAD_PLANS_FROM_SQLSET has to be used
--########################################################################
exec dbms_sqltune.create_sqlset(sqlset_name => '93c0q2r788x6c_sqlset_test',description => 'sqlset descriptions');
declare
baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
open baseline_ref_cur for
select VALUE(p) from table(
DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(&begin_snap_id, &end_snap_id,'sql_id='||CHR(39)||'&sql_id'||CHR(39)||' and plan_hash_value=2849155601',NULL,NULL,NULL,NULL,NULL,NULL,'ALL')) p;
DBMS_SQLTUNE.LOAD_SQLSET('93c0q2r788x6c_sqlset_test', baseline_ref_cur);
end;
/
SELECT NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET where name='93c0q2r788x6c_sqlset_test';
select * from table(dbms_xplan.display_sqlset('93c0q2r788x6c_sqlset_test','&sql_id'));
select sql_handle, plan_name, origin, enabled, accepted, fixed, module from dba_sql_plan_baselines;
set serveroutput on
declare
my_int pls_integer;
begin
my_int := dbms_spm.load_plans_from_sqlset (
sqlset_name => '93c0q2r788x6c_sqlset_test',
    basic_filter => 'sql_id=''93c0q2r788x6c''',
sqlset_owner => 'SYS',
fixed => 'YES',
enabled => 'YES');
DBMS_OUTPUT.PUT_line(my_int);
end;
/
select sql_handle, plan_name, origin, enabled, accepted, fixed, module from dba_sql_plan_baselines;
-- make sure the additional PHV is ACCEPTED and FIXED
SET SERVEROUTPUT ON
DECLARE
l_plans_altered PLS_INTEGER;
BEGIN
l_plans_altered := DBMS_SPM.alter_sql_plan_baseline(
sql_handle => 'SQL_c244ec33ef56024a',
plan_name => 'SQL_PLAN_c4j7c6grpc0kaf8003e90',
attribute_name => 'ACCEPTED',
attribute_value => 'YES');
DBMS_OUTPUT.put_line('Plans Altered: ' || l_plans_altered);
END;
/
set serveroutput on
DECLARE
l_plans_altered PLS_INTEGER;
BEGIN
l_plans_altered := DBMS_SPM.alter_sql_plan_baseline(
sql_handle => 'SQL_c244ec33ef56024a',
plan_name => 'SQL_PLAN_c4j7c6grpc0kaf8003e90',
attribute_name => 'FIXED',
attribute_value => 'YES');
DBMS_OUTPUT.put_line('Plans Altered: ' || l_plans_altered);
END;
/
}}}
! You can verify that the SQL_ID is picking up the good plan by using dplan or dplanx
<<<
https://raw.githubusercontent.com/karlarao/scripts/master/performance/dplan.sql
rac-aware https://raw.githubusercontent.com/karlarao/scripts/master/performance/dplanx.sql
<<<
! Also read up on how to migrate SQL baselines across databases, because you need those baselines propagated to all your environments.
<<<
There's also a tool inside SQLTXPLAIN (search for it in MOS) called coe_xfr_sql_profile https://raw.githubusercontent.com/karlarao/scripts/master/performance/coe_xfr_sql_profile_12c.sql
You run it against a SQL_ID and PLAN_HASH_VALUE and it generates a sql file; when you run that file on another environment it creates a sql profile for that SQL_ID and PLAN_HASH_VALUE combination (see the sketch after this quote).
So it becomes a backup of that SQL's performance and another way of migrating or backing up profiles across environments.
In summary, if you have full control over the code, rewriting or adding hints (but not too many) so the SQL behaves optimally is what I recommend. This way the fix gets pushed to your code base, is tracked in your git/version control repo, and is propagated across environments.
You can also create a baseline on top of the rewrite or hints, but make sure it is maintained across environments.
<<<
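A rough sketch of the coe_xfr_sql_profile workflow (the prompts and the generated file name vary by version; the values here are illustrative):
{{{
-- on the source database, as a DBA, using the good plan identified earlier
SQL> @coe_xfr_sql_profile.sql
Parameter 1 - SQL_ID: 93c0q2r788x6c
Parameter 2 - PLAN_HASH_VALUE: 1948592153
-- generates a script named like coe_xfr_sql_profile_93c0q2r788x6c_1948592153.sql

-- on the target database, as a DBA
SQL> @coe_xfr_sql_profile_93c0q2r788x6c_1948592153.sql
-- creates the SQL Profile for that SQL_ID / PLAN_HASH_VALUE combination
}}}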
.
http://kerryosborne.oracle-guy.com/2009/07/how-to-attach-a-sql-profile-to-a-different-statement/
HOWTO: bad plan to good plan switch http://www.evernote.com/shard/s48/sh/308af73e-47bc-4598-ab31-77ab74cbbed9/7acc32b91ebb64639116d3931a4e9935
{{{
15:07:41 HCMPRD1> @copy_plan_hash_value.sql
Enter value for plan_hash_value to generate profile from (X0X0X0X0): 3609883731 <-- this is the good plan
Enter value for sql_id to attach profile to (X0X0X0X0): c7tadymffd34z
Enter value for child_no to attach profile to (0):
Enter value for category (DEFAULT):
Enter value for force_matching (false):
PL/SQL procedure successfully completed.
}}}
Database Development guide -> 2 Connection Strategies for Database Applications
https://docs.oracle.com/en/database/oracle/oracle-database/19/adfns/connection_strategies.html#GUID-90D1249D-38B8-47BF-9829-BA0146BD814A
https://docs.oracle.com/database/122/ADFNS/connection_strategies.htm#ADFNS-GUID-90D1249D-38B8-47BF-9829-BA0146BD814A
<<showtoc>>
! redo apply
!! Without Real Time Apply (RTA) on standby database
{{{
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
}}}
!! With Real Time Apply (RTA)
If you configured your standby redo logs, you can start real-time apply using the following command:
{{{
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
}}}
!! Stopping Redo Apply on standby database
To stop Redo Apply in the foreground, issue the following SQL statement.
{{{
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
}}}
! monitor redo apply
!! Last sequence received and applied
You can use this (important) SQL to check whether your physical standby is in sync with the primary:
{{{
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
(SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE
ARCH.THREAD# = APPL.THREAD#;
}}}
{{{
-- redo transport services
ALTER SESSION SET NLS_DATE_FORMAT ='DD-MON-RR HH24:MI:SS';
SELECT INST_ID, SEQUENCE#, APPLIED, FIRST_TIME, NEXT_TIME FROM GV$ARCHIVED_LOG ORDER BY 2,1,4;
ALTER SYSTEM SWITCH LOGFILE;
SELECT INST_ID, SEQUENCE#, APPLIED, FIRST_TIME, NEXT_TIME FROM GV$ARCHIVED_LOG ORDER BY 2,1,4;
}}}
!! on standby, get data guard stats
{{{
set linesize 120
col START_TIME format a20
col ITEM format a20
SELECT TO_CHAR(START_TIME, 'DD-MON-RR HH24:MI:SS') START_TIME, ITEM , SOFAR, UNITS
FROM V$RECOVERY_PROGRESS
WHERE ITEM IN ('Active Apply Rate', 'Average Apply Rate', 'Redo Applied');
}}}
!! on standby, retrieve the transport lag and the apply lag
{{{
-- Transport lag represents the data that will be lost in case of disaster
col NAME for a13
col VALUE for a13
col UNIT for a30
set LINES 132
SELECT NAME, VALUE, UNIT, TIME_COMPUTED
FROM V$DATAGUARD_STATS WHERE NAME IN ('transport lag', 'apply lag');
}}}
!! Standby database process status
{{{
select distinct process, status, thread#, sequence#, block#, blocks from v$managed_standby ;
}}}
If using real time apply
{{{
select TYPE, ITEM, to_char(TIMESTAMP, 'DD-MON-YYYY HH24:MI:SS') from v$recovery_progress where ITEM='Last Applied Redo';
or
select recovery_mode from v$archive_dest_status where dest_id=1;
}}}
! others
{{{
select * from v$managed_standby;
select * from v$log;
select * from v$standby_log;
select * from DBA_REGISTERED_ARCHIVED_LOG
select * from V$ARCHIVE
select * from V$PROXY_ARCHIVEDLOG
select * from V$ARCHIVED_LOG
select * from V$ARCHIVE_GAP
select * from V$ARCHIVE_PROCESSES;
select * from V$ARCHIVE_DEST;
select * from V$ARCHIVE_DEST_STATUS
select * from V$PROXY_ARCHIVELOG_DETAILS
select * from V$BACKUP_ARCHIVELOG_DETAILS
select * from V$BACKUP_ARCHIVELOG_SUMMARY
select * from V$PROXY_ARCHIVELOG_SUMMARY
----------------------------
-- MONITOR RECOVERY
----------------------------
-- Monitoring the Process Activities
-- The V$MANAGED_STANDBY view on the standby database site shows you the activities performed by both redo transport and Redo Apply processes in a Data Guard environment. The CLIENT_P column in the output of the following query identifies the corresponding primary database process.
SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;
-- Determining the Progress of Redo Apply
-- The V$ARCHIVE_DEST_STATUS view on either a primary or standby database site provides you information such as the online redo log files that were archived, the archived redo log files that are applied, and the log sequence numbers of each. The following query output shows the standby database is two archived redo log files behind in applying the redo data received from the primary database. To determine if real-time apply is enabled, query the RECOVERY_MODE column of the V$ARCHIVE_DEST_STATUS view. It will contain the value MANAGED REAL TIME APPLY when real-time apply is enabled
SELECT ARCHIVED_THREAD#, ARCHIVED_SEQ#, APPLIED_THREAD#, APPLIED_SEQ#, RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS;
-- Determining the Location and Creator of the Archived Redo Log Files
-- the location of the archived redo log, which process created the archived redo log, redo log sequence number of each archived redo log file, when each log file was archived, and whether or not the archived redo log file was applied
set lines 300
col name format a80
alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
SELECT NAME, CREATOR, SEQUENCE#, APPLIED, COMPLETION_TIME FROM V$ARCHIVED_LOG where applied = 'NO' order by 3;
-- Viewing Database Incarnations Before and After OPEN RESETLOGS
SELECT RESETLOGS_ID,THREAD#,SEQUENCE#,STATUS,ARCHIVED FROM V$ARCHIVED_LOG ORDER BY RESETLOGS_ID,SEQUENCE# ;
SELECT INCARNATION#, RESETLOGS_ID, STATUS FROM V$DATABASE_INCARNATION;
-- Viewing the Archived Redo Log History
-- The V$LOG_HISTORY on the standby site shows you a complete history of the archived redo log, including information such as the time of the first entry, the lowest SCN in the log, the highest SCN in the log, and the sequence numbers for the archived redo log files.
SELECT FIRST_TIME, FIRST_CHANGE#, NEXT_CHANGE#, SEQUENCE# FROM V$LOG_HISTORY;
-- Determining Which Log Files Were Applied to the Standby Database
select max(sequence#), applied, thread# from v$archived_log group by applied, thread# order by 1;
-- Determining Which Log Files Were Not Received by the Standby Site
--SELECT LOCAL.THREAD#, LOCAL.SEQUENCE#, local.applied FROM
--(SELECT THREAD#, SEQUENCE#, applied FROM V$ARCHIVED_LOG WHERE DEST_ID=1) LOCAL
--WHERE LOCAL.SEQUENCE# NOT IN
--(SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
--THREAD# = LOCAL.THREAD#);
------------------------------------------------------------------------------------
-- Monitoring Log Apply Services on Physical Standby Databases
------------------------------------------------------------------------------------
-- Accessing the V$DATABASE View
-- Issue the following query to show information about the protection mode, the protection level, the role of the database, and switchover status:
SELECT DATABASE_ROLE, DB_UNIQUE_NAME INSTANCE, OPEN_MODE, PROTECTION_MODE, PROTECTION_LEVEL, SWITCHOVER_STATUS FROM V$DATABASE;
-- Issue the following query to show information about fast-start failover:
SELECT FS_FAILOVER_STATUS FSFO_STATUS, FS_FAILOVER_CURRENT_TARGET TARGET_STANDBY, FS_FAILOVER_THRESHOLD THRESHOLD, FS_FAILOVER_OBSERVER_PRESENT OBS_PRES FROM V$DATABASE;
-- Accessing the V$MANAGED_STANDBY Fixed View
-- Query the physical standby database to monitor Redo Apply and redo transport services activity at the standby site.. The previous query output shows that an RFS process completed archiving a redo log file with sequence number 947. The output also shows that Redo Apply is actively applying an archived redo log file with the sequence number 946. The recovery operation is currently recovering block number 10 of the 72-block archived redo log file.
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
-- Accessing the V$ARCHIVE_DEST_STATUS Fixed View
-- To determine if real-time apply is enabled, query the RECOVERY_MODE column of the V$ARCHIVE_DEST_STATUS view. It will contain the value MANAGED REAL TIME APPLY when real-time apply is enabled
SELECT ARCHIVED_THREAD#, ARCHIVED_SEQ#, APPLIED_THREAD#, APPLIED_SEQ#, RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS;
-- Accessing the V$ARCHIVED_LOG Fixed View
-- The V$ARCHIVED_LOG fixed view on the physical standby database shows all the archived redo log files received from the primary database. This view is only useful after the standby site starts receiving redo data; before that time, the view is populated by old archived redo log records generated from the primary control file.
SELECT REGISTRAR, CREATOR, THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE# FROM V$ARCHIVED_LOG;
-- Accessing the V$LOG_HISTORY Fixed View
-- Query the V$LOG_HISTORY fixed view on the physical standby database to show all the archived redo log files that were applied
SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE# FROM V$LOG_HISTORY;
-- Accessing the V$DATAGUARD_STATUS Fixed View
-- The V$DATAGUARD_STATUS fixed view displays events that would typically be triggered by any message to the alert log or server process trace files.
SELECT MESSAGE FROM V$DATAGUARD_STATUS;
}}}
! references
https://www.oracle-scripts.net/dataguard-management/
[[Coderepo]]
[[Awk]], [[grep]], [[sed]], [[sort, uniq]]
[[BashShell]]
[[PowerShell]]
[[Perl]]
[[Python]]
[[PL/SQL]]
[[x R - Datacamp]] [[R maxym]]
[[HTML5]]
[[Javascript]] [[node.js]]
[[GoLang]]
[[Java]]
[[Machine Learning]]
viz and reporting in [[Tableau]]
[[noSQL]]
<<showtoc>>
! learning path
https://training.looker.com/looker-development-foundations <- enroll here first ("Getting Started with LookML"); you will be redirected to -> https://learn.looker.com/projects/learn_intro/documents/home.md
https://training.looker.com/looker-development-foundations/334816
! watch videos
!! Lynda
https://www.linkedin.com/learning/looker-first-look/welcome
!! Business User Video Tutorials
https://docs.looker.com/video-library/exploring-data
!! Developer Video Tutorials
https://docs.looker.com/video-library/data-modeling
! official doc
https://docs.looker.com/
!! release notes
https://docs.looker.com/relnotes/intro
!! development
!!! what is LookML
https://docs.looker.com/data-modeling/learning-lookml/what-is-lookml
!!! Steps to Learning LookML
https://docs.looker.com/data-modeling/learning-lookml
!!! Retrieve and Chart Data
https://docs.looker.com/exploring-data/retrieve-chart-intro
!! admin
!!! Clustering
https://docs.looker.com/setup-and-management/tutorials/clustering
! comparison vs Tableau
https://looker.com/compare/looker-vs-tableau "a trusted data model"
https://webanalyticshub.com/tableau-looker-domo/
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/90553425-5bb73780-e162-11ea-9b74-038d47f3341f.png]]
https://www.itbusinessedge.com/business-intelligence/looker-vs.-tableau.html
https://www.quora.com/To-anyone-that-has-used-Looker-how-would-you-compare-it-to-Tableau-in-terms-of-price-capabilities
! x
! LOD vs Subtotals, row totals, and Table Calculations
https://discourse.looker.com/t/tableau-lod-equivalent-custom-dimension/18641
https://help.looker.com/hc/en-us/articles/360023635234-Subtotals-with-Table-Calculations
https://docs.looker.com/exploring-data/visualizing-query-results/table-next-options
https://help.tableau.com/current/pro/desktop/en-us/calculations_calculatedfields_lod_fixed.htm
<<showtoc>>
! h2o
https://www.h2o.ai/products/h2o-driverless-ai/
! metaverse
http://www.matelabs.in/#home
http://docs.mateverse.com/user-guide/getting-started/
HOWTO: create a manual SQL Profile https://www.evernote.com/shard/s48/sh/f1bda7e9-2ced-4794-8c5e-32b1beac567b/96cd95cebb8f3cad0329833d7aa4a328
http://kerryosborne.oracle-guy.com/2010/07/sqlt-coe_xfr_sql_profilesql/
http://kerryosborne.oracle-guy.com/2010/11/how-to-lock-sql-profiles-generated-by-sql-tuning-advisor/
http://bryangrenn.blogspot.com/2010/12/sql-profiles.html
Oracle Analytics Cloud: Augmented Analytics at Scale https://www.oracle.com/business-analytics/comparison-chart.html
https://docs.oracle.com/en/middleware/bi/analytics-server/whats-different-oas/index.html#OASWD-GUID-C907A4B0-FAFD-4F54-905C-D6FCA519C262
https://www.linkedin.com/pulse/should-i-run-performance-test-part-ci-pipeline-aniket-gadre/ <- ANSWER IS YES!
<<<
I like Wilson's answer
Aniket, the assumptions underlying your statements makes sense ... if this was still 1995. This is exactly what I cover in my PerfGuild talk April 8th https://automationguild.com/performance 1) Perf tests just to determine "Pass or Fail" is out of fashion now because performance testers should be more valuable than "traffic cops writing tickets", and provide tools and advice to developers. Why? So that performance issues are identified early, while that code is still fresh in the mind of developers rather than in production when changes are too expensive to change. 2) monitoring systems can be brought up automatically in the pipeline. Ask your APM vendor to show you how. 3) in my experience, many performance issues are evident within 10 minutes if you can ramp up quickly enough. 4) same. 5) the point of CI/CD is to provide coverage of potential risks. Companies pay us the big bucks for us to predict issues, not to be reactive chumps. 6) Please get back in your time machine and join us in the 21st century. There are cloud environments now which spin up servers for a short time. 6 again) memory leaks are not the only reason for perf tests. Perf tests are now done to tune configurations since companies are now paying for every cycle used rather than having a fixed number of machines. It's time to upgrade your assumptions.
<<<
references
https://alexanderpodelko.com/docs/Continuous_Performance_Testing_CMG17.pdf
https://www.qlik.com/us/products/qlikview/personal-edition
https://community.qlik.com/thread/36516
https://www.qlik.com/us/solutions/developers
http://branch.qlik.com/#!/project
https://app.pluralsight.com/library/courses/qlikview-analyzing-data/table-of-contents
{{{
forecast step by step:
eyeball the data
raw data
data exploration
periodicity
ndiff (how much we should difference)
decomposition - determine the series components (trend, seasonality etc.)
x = decompose(AirPassengers, "additive")
mymodel = x$trend + x$seasonal; plot(mymodel) # just the trend and seasonal data
mymodel2 = AirPassengers - x$seasonal ; plot(mymodel2) # orig data minus the seasonal data
seasonplot
process data
create xts object
create a ts object from xts (coredata, index, frequency/periodicity)
partition data train,validation sets
graph it
tsoutliers (outlier detection) , anomaly detection (AnomalyDetection package)
log scale data
add trend line (moving average (centered - ma and trailing - rollmean) and simple exponential smoothing (ets))
performance evaluation
Type of seasonality assessed graphically (decompose - additive,etc.)
detrend and seasonal adjustment (smoothing/deseasonalizing)
lag-1 diff graph
forecast residual graph
forecast error graph
acf/pacf (Acf, tsdisplay)
raw data
forecast residual
lag-1 diff
autocorrelation
fUnitRoots::adfTest() - time series data is non-stationary (p value above 0.05)
tsdisplay(diff(data_ts, lag=1)) - ACF displays there's no autocorrelation going on (no significant lags out of the 95% confidence interval, the blue line)
accuracy
cross validation https://github.com/karlarao/forecast_examples/tree/master/cross_validation/cvts_tscvexample_investigation
forecast of training
forecast of training + validation + future (steps ahead)
forecast result
display prediction intervals (forecast quantile)
display the actual and forecasted series
displaying the forecast errors
distribution of forecast errors
}}}
https://saplumira.com/
https://www.quora.com/Which-data-visualization-tool-is-better-SAP-Lumira-or-Tableau
https://www.prokarma.com/blog/2014/08/20/look-sap-lumira-and-lumira-cloud
https://blogs.sap.com/2014/09/12/a-lumira-extension-to-acquire-twitter-data/
https://www.sap.com/developer/tutorials/lumira-initial-data-acquisition.html
http://visualbi.com/blogs/sap-lumira-discovery/connect-sap-hana-bw-universe-sap-lumira-discovery/
<<showtoc>>
! standards for business communications
[img(40%,40%)[ https://i.imgur.com/Y6Ekegn.png]]
! download and documentation
Alternate download site across versions https://licensing.tableausoftware.com/esdalt/
Release notes across versions http://www.tableausoftware.com/support/releases?signin=650fb8c2841d145bc3236999b96fd7ab
Official doc http://www.tableausoftware.com/community/support/documentation-old
knowledgebase http://kb.tableausoftware.com/
manuals http://www.tableausoftware.com/support/manuals
http://www.tableausoftware.com/new-features/6.0
http://www.tableausoftware.com/new-features/7.0
http://www.tableausoftware.com/new-features/8.0
http://www.tableausoftware.com/fast-pace-innovation <-- timeline across versions
''Tableau - Think Data Thursday Video Library'' http://community.tableausoftware.com/community/groups/tdt-video-library
''Tableau Style Guide'' https://github.com/davidski/dataviz/blob/master/Tableau%20Style%20Guide.md
''Software Development Lifecycle With Tableau'' https://github.com/russch/tableau-sdlc-sample
''How to share data with a statistician'' https://github.com/davidski/datasharing
! license
https://customer-portal.tableau.com/s/
upgrading tableau desktop http://kb.tableausoftware.com/articles/knowledgebase/upgrading-tableau-desktop
offline activation http://kb.tableausoftware.com/articles/knowledgebase/offline-activation
renewal cost for desktop and personal http://www.triadtechpartners.com/wp-content/uploads/Tableau-GSA-Price-List-April-2013.pdf
renewal FAQ http://www.tableausoftware.com/support/customer-success
eula http://mkt.tableausoftware.com/files/eula.pdf
! viz types
* treemap http://www.tableausoftware.com/new-features/new-view-types
* bubble chart
* word cloud
! connectors
''Oracle Driver''
there’s an Oracle Driver so you can connect directly to a database http://downloads.tableausoftware.com/drivers/oracle/desktop/tableau7.0-oracle-driver.msi
http://www.tableausoftware.com/support/drivers
http://kb.tableausoftware.com/articles/knowledgebase/oracle-connection-errors
! HOWTOs
http://www.tableausoftware.com/learn/training <-- LOTS OF GOOD STUFF!!!
http://community.tableausoftware.com/message/242749#242749 <-- Johan's Ideas Collections
''parameters'' http://www.youtube.com/watch?v=wvF7gAV82_c
''calculated fields'' http://www.youtube.com/watch?v=FpppiLBdtGc, http://www.tableausoftware.com/table-calculations. http://kb.tableausoftware.com/articles/knowledgebase/combining-date-and-time-single-field
''scatter plots'' http://www.youtube.com/watch?v=RYMlIY4nT9k, http://downloads.tableausoftware.com/quickstart/feature-guides/trend_lines.pdf
''getting the r2'',''trendlines'' http://kb.tableausoftware.com/articles/knowledgebase/statistics-finding-correlation, http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/trendlines_model.html
''forecasting'' http://tombrownonbi.blogspot.com/2010/07/simple-forecasting-using-tableau.html, resolving forecast errors http://onlinehelp.tableausoftware.com/current/pro/online/en-us/forecast_resolve_errors.html
tableau forecast model - Holt-Winters exponential smoothing
http://onlinehelp.tableausoftware.com/v8.1/pro/online/en-us/help.html#forecast_describe.html
Method for Creating Multipass Aggregations Using Tableau Server <-- doing various statistical methods in tableau
http://community.tableausoftware.com/message/181143#181143
Monte Carlo in Tableau
http://drawingwithnumbers.artisart.org/basic-monte-carlo-simulations-in-tableau/
''dashboards'' http://community.tableausoftware.com/thread/109753?start=0&tstart=0, http://tableaulove.tumblr.com/post/27627548817/another-method-to-update-data-from-inside-tableau, http://ryrobes.com/tableau/tableau-phpgrid-an-almost-instant-gratification-data-entry-tool/
''dashboard size'' http://kb.tableausoftware.com/articles/knowledgebase/fixed-size-dashboard
''dashboard multiple sources'' http://kb.tableausoftware.com/articles/knowledgebase/multiple-sources-one-worksheet
''reference line weekend highlight , reference line weekend in tableau'' https://community.tableau.com/thread/123456 (shading in weekends), http://www.evolytics.com/blog/tableau-hack-how-to-highlight-a-dimension/ , https://discussions.apple.com/thread/1919024?tstart=0 , https://3danim8.wordpress.com/2013/11/18/using-tableau-buckets-to-compare-weekday-to-weekend-data/ , http://onlinehelp.tableau.com/current/pro/desktop/en-us/actions_highlight_advanced.html , https://community.tableau.com/thread/120260 (How can I add weekend reference lines)
''reference line'', ''reference band'' http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/reflines_addlines.html, http://vizwiz.blogspot.com/2012/09/tableau-tip-adding-moving-reference.html, http://onlinehelp.tableausoftware.com/v6.1/public/online/en-us/i1000860.html, http://kb.tableausoftware.com/articles/knowledgebase/independent-field-reference-line, http://community.tableausoftware.com/thread/127009?start=0&tstart=0, http://community.tableausoftware.com/thread/121369
''custom reference line - based on a measure field''
http://community.tableausoftware.com/message/275150 <-- drag calculated field on the marks area
''dynamic reference line''
http://community.tableausoftware.com/thread/124998, http://community.tableausoftware.com/thread/105433, http://www.interworks.com/blogs/iwbiteam/2012/04/09/adding-different-reference-lines-tableau
''percentile on reference line''
https://community.tableau.com/thread/108974
''dynamic parameter''
http://drawingwithnumbers.artisart.org/creating-a-dynamic-parameter-with-a-tableau-data-blend/
''thresholds'' Multiple thresholds for different cells on one worksheet http://community.tableausoftware.com/thread/122285
''email and alerting'' http://www.metricinsights.com/data-driven-alerting-and-email-notifications-for-tableau/, http://community.tableausoftware.com/thread/124411
''templates'' http://kb.tableausoftware.com/articles/knowledgebase/replacing-data-source, http://www.tableausoftware.com/public/templates/schools, http://wannabedatarockstar.blogspot.com/2013/06/create-default-tableau-template.html, http://wannabedatarockstar.blogspot.co.uk/2013/04/colour-me-right.html
''click to filter'' http://kb.tableausoftware.com/articles/knowledgebase/combining-sheet-links-and-dashboards
''tableau worksheet actions'' http://community.tableausoftware.com/thread/138785
''date functions and calculations'' http://onlinehelp.tableausoftware.com/current/pro/online/en-us/functions_functions_date.html, http://pharma-bi.com/2011/04/fiscal-period-calculations-in-tableau-2/
''date dimension'' http://blog.inspari.dk/2013/08/27/making-the-date-dimension-ready-for-tableau/
''Date Range filter and Default date filter''
google search https://www.google.com/search?q=tableau+date+range+filter&oq=tableau+date+range+&aqs=chrome.2.69i57j0l5.9028j0j7&sourceid=chrome&es_sm=119&ie=UTF-8
Creating a Filter for Start and End Dates Using Parameters http://kb.tableausoftware.com/articles/howto/creating-a-filter-for-start-and-end-dates-parameters
Tableau Tip: Showing all dates on a date filter after a Server refresh http://vizwiz.blogspot.com/2014/01/tableau-tip-showing-all-dates-on-date.html
Tableau Tip: Default a date filter to the last N days http://vizwiz.blogspot.com/2013/09/tableau-tip-default-date-filter-to-last.html
''hide NULL values'' http://reports4u.co.uk/tableau-hide-null-values/, http://reports4u.co.uk/tableau-hide-values-quick-filter/, http://kb.tableausoftware.com/articles/knowledgebase/replacing-null-literalsclass, http://kb.tableausoftware.com/articles/knowledgebase/null-values <-- good stuff
''logical functions - if then else, case when then'' http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/functions_functions_logical.html, http://kb.tableausoftware.com/articles/knowledgebase/understanding-logical-calculations, http://onlinehelp.tableausoftware.com/v6.1/public/online/en-us/id2611b7e2-acb6-467e-9f69-402bba5f9617.html
''tableau working with sets''
https://www.tableausoftware.com/public/blog/2013/03/powerful-new-tools
http://onlinehelp.tableausoftware.com/v6.1/public/online/en-us/i1201140.html
http://community.tableausoftware.com/thread/136845 <-- good example on filters
https://www.tableausoftware.com/learn/tutorials/on-demand/sets?signin=a8f73d84a4b046aec26bc955854a381b <-- GOOD STUFF video tutorial
IOPS SIORS - Combining several measures in one dimension - http://tableau-ext.hosted.jivesoftware.com/thread/137680
''tableau groups''
http://vizwiz.blogspot.com/2013/05/tableau-tip-creating-primary-group-from.html
http://www.tableausoftware.com/learn/tutorials/on-demand/grouping?signin=f98f9fd64dcac0e7f2dc574bca03b68c <-- VIDEO tutorial
''Random Number generation in tableau''
http://community.tableausoftware.com/docs/DOC-1474
''Calendar view viz''
http://thevizioneer.blogspot.com/2014/04/day-1-how-to-make-calendar-in-tableau.html
http://vizwiz.blogspot.com/2012/05/creating-interactive-monthly-calendar.html
http://vizwiz.blogspot.com/2012/05/how-common-is-your-birthday-find-out.html
''Custom SQL''
http://kb.tableausoftware.com/articles/knowledgebase/customizing-odbc-connections
http://tableaulove.tumblr.com/post/20781994395/tableau-performance-multiple-tables-or-custom-sql
http://bensullins.com/leveraging-your-tableau-server-to-create-large-data-extracts/
http://tableaulove.tumblr.com/post/18945358848/how-to-publish-an-unpopulated-tableau-extract
http://onlinehelp.tableausoftware.com/v8.1/pro/online/en-us/customsql.html
http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/customsql.html
Using Raw SQL Functions http://kb.tableausoftware.com/articles/knowledgebase/raw-sql
http://community.tableausoftware.com/thread/131017
''Geolocation''
http://tableaulove.tumblr.com/post/82299898419/ip-based-geo-location-in-tableau-new-now-with-more
http://dataremixed.com/2014/08/from-gps-to-viz-hiking-washingtons-trails/
https://public.tableausoftware.com/profile/timothyvermeiren#!/vizhome/TimothyAllRuns/Dashboard
''tableau - import custom geocoding data - world map''
https://community.tableau.com/thread/200454
https://www.youtube.com/watch?v=nVrCH-PWM10
https://www.youtube.com/watch?v=IDyMMPiNVGw
https://onlinehelp.tableau.com/current/pro/online/mac/en-us/custom_geocoding.html
https://onlinehelp.tableau.com/current/pro/online/mac/en-us/maps_customgeocode_importing.html
''tableau perf analyzer''
http://www.interworks.com/services/business-intelligence/tableau-performance-analyzer
''tableau and python''
http://bensullins.com/bit-ly-data-to-csv-for-import-to-tableau/
''Visualize and Understand Tableau Functions''
https://public.tableausoftware.com/profile/tyler3281#!/vizhome/EVERYONEWILLUSEME/MainScreen
''tableau workbook on github''
http://blog.pluralsight.com/how-to-store-your-tableau-server-workbooks-on-github
''tableau radar chart / spider graph''
https://wikis.utexas.edu/display/tableau/How+to+create+a+Radar+Chart
''maps animation''
http://www.tableausoftware.com/public/blog/2014/08/capturing-animation-tableau-maps-2574?elq=d12cbf266b1342e68ea20105369371cf
''if in list'' http://community.tableausoftware.com/ideas/1870, http://community.tableausoftware.com/ideas/1500
<<<
{{{
IF
trim([ENV])='x07d' OR
trim([ENV])='x07p'
THEN 'AML'
ELSE 'OTHER' END
IF
TRIM([ENV]) = 'x07d' THEN 'AML' ELSEIF
TRIM([ENV]) = 'x07p' THEN 'AML'
ELSE 'OTHER' END
IF [Processor AMD] THEN 'AMD'
ELSEIF [Processor Intel] THEN 'INTEL'
ELSEIF [Processor IBM Power] THEN 'IBM Power'
ELSEIF [Processor SPARC] THEN 'SPARC'
ELSE 'Other' END
IF contains('x11p,x08p,x28p',trim([ENV]))=true THEN 'PROD'
ELSEIF contains('x29u,x10u,x01u',trim([ENV]))=true THEN 'UAT'
ELSEIF contains('x06d,x07d,x12d',trim([ENV]))=true THEN 'DEV'
ELSEIF contains('x06t,x14t,x19t',trim([ENV]))=true THEN 'TEST'
ELSE 'OTHER' END
[Snap Id] = (150106) or
[Snap Id] = (150107) or
[Snap Id] = (150440) or
[Snap Id] = (150441)
}}}
<<<
''calculated field filter'' http://stackoverflow.com/questions/30753330/tableau-using-calculated-fields-for-filtering-dimensions, http://breaking-bi.blogspot.com/2013/03/creating-table-calculations-on-values.html
<<<
{{{
DRW
SUM(IF contains('CD_IO_RQ_R_LG_SEC-CD,CD_IO_RQ_R_SM_SEC-CD,CD_IO_RQ_W_LG_SEC-CD,CD_IO_RQ_W_SM_SEC-CD',trim([Metric]))=true THEN 1 END) > 0
CD_IO_RQ_R_LG_SEC-CD,0.21
CD_IO_RQ_R_SM_SEC-CD,0.62
CD_IO_RQ_W_LG_SEC-CD,2.14
CD_IO_RQ_W_SM_SEC-CD,5.69
}}}
<<<
''What is the difference between Tableau Server and Tableau Server Worker?'' http://community.tableausoftware.com/thread/109121
''tableau vs spotfire vs qlikview'' http://community.tableausoftware.com/thread/116055, https://apandre.wordpress.com/2013/09/13/tableau-8-1-vs-qlikview-11-2-vs-spotfire-5-5/ , http://butleranalytics.com/spotfire-tableau-and-qlikview-in-a-nutshell/ , https://www.trustradius.com/compare-products/tableau-desktop-vs-tibco-spotfire
''twbx for sending workbooks'' http://kb.tableausoftware.com/articles/knowledgebase/sending-packaged-workbook
''YOY moving average'' http://daveandrade.com/2015/01/25/tableau-table-calcs-how-to-calculate-a-year-over-year-4-week-moving-average/
''json'' http://community.tableau.com/ideas/1276
''tableau reverse engineering'' http://www.theinformationlab.co.uk/2015/01/22/learning-tableau-reverse-engineering/
''filter partial highlight'' https://community.tableau.com/thread/143761 , http://breaking-bi.blogspot.com/2014/03/partial-highlighting-on-charts-in.html
''Window functions'' https://community.tableau.com/thread/144402, http://kb.tableau.com/articles/knowledgebase/functional-differences-olap-relational, http://www.lunametrics.com/blog/2015/09/17/yoy-bar-charts-in-tableau/, http://breaking-bi.blogspot.com/2013/03/working-with-window-calculations-and.html, https://www.interworks.com/blog/tmccullough/2014/09/29/5-tableau-table-calculation-functions-you-need-know, http://breaking-bi.blogspot.com/2013/04/using-lookup-function-in-tableau.html
{{{
LOOKUP(sum([Net]),-1)
}}}
''Count only the numbers that are positive, and get the percentage''
{{{
(COUNT(IF [Diff] >= 0 THEN [Diff] END) / COUNT([Diff]))*100
}}}
''Add category for stock Position Type''
{{{
IF contains('Buy,Sell',trim([Type 1]))=true THEN 'Long'
ELSEIF contains('Buy to Cover,Sell Short',trim([Type 1]))=true THEN 'Short'
ELSE 'OTHER' END
}}}
''updated processor group filter''
{{{
IF contains(lower(trim([Processor])),'amd')=true THEN 'AMD'
ELSEIF contains(lower(trim([Processor])),'intel')=true THEN 'INTEL'
ELSEIF contains(lower(trim([Processor])),'power')=true THEN 'IBM'
ELSEIF contains(lower(trim([Processor])),'sparc')=true THEN 'SPARC'
ELSE 'OTHER' END
}}}
''storage cell dimension for x2 and x3 cells on the same diskgroup - useful for destage IOs''
{{{
IF
trim([Cellname])='192.168.10.9' OR
trim([Cellname])='192.168.10.10' OR
trim([Cellname])='192.168.10.11' OR
trim([Cellname])='192.168.10.12' OR
trim([Cellname])='192.168.10.13' OR
trim([Cellname])='192.168.10.14' OR
trim([Cellname])='192.168.10.15' OR
trim([Cellname])='192.168.10.16' OR
trim([Cellname])='192.168.10.17' OR
trim([Cellname])='192.168.10.18' OR
trim([Cellname])='192.168.10.19' OR
trim([Cellname])='192.168.10.20' OR
trim([Cellname])='192.168.10.21' OR
trim([Cellname])='192.168.10.22'
THEN 'x2'
elseif
trim([Cellname])='192.168.10.38' OR
trim([Cellname])='192.168.10.39' OR
trim([Cellname])='192.168.10.40' OR
trim([Cellname])='192.168.10.41' OR
trim([Cellname])='192.168.10.42' OR
trim([Cellname])='192.168.10.43' OR
trim([Cellname])='192.168.10.44' OR
trim([Cellname])='192.168.10.45' OR
trim([Cellname])='192.168.10.46' OR
trim([Cellname])='192.168.10.47' OR
trim([Cellname])='192.168.10.48' OR
trim([Cellname])='192.168.10.49' OR
trim([Cellname])='192.168.10.50' OR
trim([Cellname])='192.168.10.51'
THEN 'x3'
else
'other'
end
}}}
! highlight SQL
{{{
IF contains(lower(trim([Sql Id])),'069k4ppu1n1nc')=true THEN [Sql Id]
ELSE 'OTHER' END
}}}
{{{
IF contains(lower(trim([Sql Id])),'069k4ppu1n1nc')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'0vzyv2wsr2apz')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'0xsz99mn2nuvc')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'0zwcr39tvssxj')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'1d9qrkfvh78bt')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'1zbk54du40dnu')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'2xjwy1jvu31xu')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'2ywv61bm22pw7')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'3fn33utt2ptns')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'3knptw3bxf1c9')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'3qvc497pz6hvp')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'3tpznswf2f7ak')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'4v775zu1p3b3f')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'51u31qah6z8d9')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'59wrat188thgf')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'5f2t4rq7xkfav')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'61r81qmqpt1bs')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'6cwh5bz0d0jkv')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'6fyy4v8c85cmk')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'6k6g6725pwjpw')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'7x0psn00ac54g')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'7xmjvrazhyntv')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'82psz0nhm68wf')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'885mt394synz4')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'af4vzj7jyv5mz')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'azkbmbyxahmh2')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'cws1kfprz7u8f')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'d6h43fh3d9p7g')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'dwgtzzmc509zf')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'f3frkf6tvkjwn')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'gd55mjru6w8x')=true THEN [Sql Id]
ELSE 'OTHER' END
}}}
!! complex calculated field logic
IF statement with multiple value condition https://community.tableau.com/thread/119254
Specific rows to column using calculated fields https://community.tableau.com/thread/266616
Tableau Tip: Return the value of a single dimension member https://www.thedataschool.co.uk/naledi-hollbruegge/tableau-tip-tuesday-getting-just-one-variable-field/
https://www.google.com/search?q=tableau+calculated+field+nested+if&oq=tableau+calculated+field+nested+&aqs=chrome.0.0j69i57j0.6974j0j1&sourceid=chrome&ie=UTF-8
Nesting IF/CASE question https://community.tableau.com/thread/254997 , https://community.tableau.com/thread/254997?start=15&tstart=0
{{{
IF [Metric Name] = 'RSRC_MGR_CPU_WAIT_TIME_PCT' THEN
IF [Value] > 100 THEN [Value]*.2 END
END
IF [Metric Name] = 'RSRC_MGR_CPU_WAIT_TIME_PCT' AND [Value] > 100 THEN
IF [Value] > 100 THEN 120
ELSE [Value] END
ELSEIF [Metric Name] = 'Host CPU Utilization (%)' THEN [Value]
END
IF [Metric Name] = 'RSRC_MGR_CPU_WAIT_TIME_PCT' AND [Value] > 100 THEN [Value]*.2
END
}}}
! ASH elapsed time , DATEDIFF
{{{
DATEDIFF('second', MIN([TMS]), MAX([TM]))
}}}
https://www.google.com/search?ei=Yr8LXdCgGu2N_Qau-6_QCQ&q=tableau+lod+time+calculation+max+first+column+minus+value+second+column&oq=tableau+lod+time+calculation+max+first+column+minus+value+second+column&gs_l=psy-ab.3...40963.51043..51405...0.0..0.311.4393.27j1j7j1......0....1..gws-wiz.......0i71j35i304i39j33i10j33i160j33i299.cvq9wrnC6uY
LOD expression for 'difference from overall average' https://community.tableau.com/thread/227662
https://www.theinformationlab.co.uk/2017/01/27/calculate-datediff-one-column-tableau/
https://onlinehelp.tableau.com/current/pro/desktop/en-us/functions_functions_date.htm
!! DATEADD , add hours
{{{
DATEADD('hour', 3, #2004-04-15#)
}}}
https://community.tableau.com/thread/157714
! LOD expression
LOD expression to calculate average of a sum of values per user https://community.tableau.com/thread/224293
! OBIEE workload separation
{{{
IF contains(lower(trim([Module])),'BIP')=true THEN 'BIP'
ELSEIF contains(lower(trim([Module])),'ODI')=true THEN 'ODI'
ELSEIF contains(lower(trim([Module])),'nqs')=true THEN 'nqsserver'
ELSE 'OTHER' END
}}}
! get underlying SQL
https://community.tableau.com/thread/170370
http://kb.tableau.com/articles/howto/viewing-underlying-sql-queries-desktop
{{{
C:\Users\karl\Documents\My Tableau Repository\Logs
}}}
! automating tableau reports
Tableau-generated PDF imports to Inkscape with text missing https://community.tableau.com/thread/118822
Automate pdf generation using Tableau Desktop https://community.tableau.com/thread/137724
Tableau Scripting Engine https://community.tableau.com/ideas/1694
tableau community https://community.tableau.com/message/199249#199249
http://powertoolsfortableau.com/tools/portals-for-tableau
https://dwuconsulting.com/tools/software-videos/automating-tableau-pdf <- GOOD STUFF
https://www.autoitscript.com/site/autoit/
http://stackoverflow.com/questions/17212676/vba-automation-of-tableau-workbooks-using-cmd
http://www.graphgiraffe.net/blog/tableau-tutorial-automated-pdf-report-creation-with-tableau-desktop
! remove last 2 characters
https://community.tableau.com/docs/DOC-1391
https://community.tableau.com/message/325328
! measure names, measure values
Combining several measures in one dimension https://community.tableau.com/thread/137680
Tableau Tips and Tricks: Measure Names and Measure Values https://www.youtube.com/watch?v=m0DGW_WYKtA
http://kb.tableau.com/articles/knowledgebase/measure-names-and-measure-values-explained
! initial SQL filter
When the user opens the workbook, parameter1 and parameter2 prompt for a date range, and that date range takes effect for all the sheets in that workbook
https://onlinehelp.tableau.com/current/pro/desktop/en-us/connect_basic_initialsql.html
https://www.tableau.com/about/blog/2016/2/introducing-initial-sql-parameters-tableau-93-50213
https://tableauandbehold.com/2016/03/09/using-initial-sql-for/
! add total to stacked bar chart
https://www.credera.com/blog/business-intelligence/tableau-workaround-part-3-add-total-labels-to-stacked-bar-chart/
! tableau roles
https://onlinehelp.tableau.com/current/server/en-us/users_site_roles.htm
https://www.google.com/search?q=tableau+publisher+vs+interactor&oq=tableau+publisher+vs+inter&aqs=chrome.0.0j69i57.5424j0j1&sourceid=chrome&ie=UTF-8
! How to verify the version of Tableau workbook
https://community.tableau.com/thread/257592
https://community.powertoolsfortableau.com/t/how-to-find-the-tableau-version-of-a-workbook/176
<<<
Add “.zip” to the end of the TWBX file. For example, “my workbook.twbx” would become “my workbook.twbx.zip”
<<<
! Embedded database credentials in Tableau
https://www.google.com/search?q=tableau+data+source+embed+credentials&oq=tableau+data+source+embed+credentials&aqs=chrome..69i57.5805j0j1&sourceid=chrome&ie=UTF-8
https://onlinehelp.tableau.com/current/pro/desktop/en-us/publishing_sharing_authentication.htm
https://onlinehelp.tableau.com/current/server/en-us/impers_SQL.htm
http://help.metricinsights.com/m/61790/l/278359-embed-database-credentials-in-tableau
https://howto.mt.gov/Portals/19/Documents/Publishing%20in%20Tableau.pdf
https://howto.mt.gov/Portals/19/Documents/Tableau%20Server%20Authentication.pdf
https://howto.mt.gov/tableau#610017167-tableau-desktop
! tableau dynamic reference line
https://kb.tableau.com/articles/howto/adding-separate-dynamic-reference-lines-for-each-dimension-member
Add a reference line based on a calculated field https://community.tableau.com/thread/216051
! Add Separate Dynamic Reference Lines For Each Dimension Member in Tableau
How to Add Separate Dynamic Reference Lines For Each Dimension Member in Tableau https://www.youtube.com/watch?v=_3ASdjKFsAM
! dynamic axis range
Dynamic Axis Range - Fixing One End (or both, or have it dynamic) https://community.tableau.com/docs/DOC-6215
http://drawingwithnumbers.artisart.org/creating-a-dynamic-range-parameter-in-tableau/
https://www.reddit.com/r/tableau/comments/77dx2v/hello_heroes_of_tableau_is_it_possible_to/
Set Axis Range Based On Calculated Field https://community.tableau.com/thread/243009
https://community.tableau.com/docs/DOC-6215 <- good stuff
https://public.tableau.com/profile/simon.r5129#!/vizhome/DynamicAxisRange-onlyPlotwithin1SD/Usethistorestrictoutliers
! window_stdev
https://playfairdata.com/how-to-do-anomaly-detection-in-tableau/
https://www.linkedin.com/pulse/standard-deviation-tableau-sumeet-bedekar/
! visualizing survey data
https://www.datarevelations.com/visualizing-survey-data
! export tableau to powerpoint
https://onlinehelp.tableau.com/current/pro/desktop/en-us/save_export_image.htm
https://www.clearlyandsimply.com/clearly_and_simply/2012/05/embed-tableau-visualizations-in-powerpoint.html
https://www.google.com/search?q=tableau+on+powerpoint&oq=tableau+on+powe&aqs=chrome.0.0j69i57j0l4.3757j0j1&sourceid=chrome&ie=UTF-8
! T test
https://dabblingwithdata.wordpress.com/2015/09/18/kruskal-wallis-significance-testing-with-tableau-and-r/
http://breaking-bi.blogspot.com/2013/03/conducting-2-sample-z-test-in-tableau.html
T-test using Tableau for proportion & means https://community.tableau.com/thread/258064
Calculating T-Test (or any other statistical tests) parameters in Tableau https://community.tableau.com/thread/251371
T test https://www.google.com/search?q=t+test+columns+in+tableau&oq=t+test+columns+in+tableau&aqs=chrome..69i57j69i64l3.7841j0j1&sourceid=chrome&ie=UTF-8
t-test of two independent means https://community.tableau.com/docs/DOC-1428
https://www.google.com/search?q=t+test+in+tableau&oq=t+test+in+tableau&aqs=chrome..69i57j0l3j69i64l2.2239j0j1&sourceid=chrome&ie=UTF-8
! pivot data source
Tableau in Two Minutes - How to Pivot Data in the Data Source https://www.youtube.com/watch?v=fvRVJ7d7NFI
Combine 3 Date Fields in the same Axis https://community.tableau.com/thread/206580
tableau combine 3 dates in one axis https://www.google.com/search?q=tableau+combine+3+dates+in+one+axis&oq=tableau+combine+3+dates+in+one+axis&aqs=chrome..69i57.22106j0j4&sourceid=chrome&ie=UTF-8
! people
http://www.penguinanalytics.co , http://www.penguinanalytics.co/Datasets/ , https://public.tableau.com/profile/john.alexander.cook#!/
! Pareto Chart Reference line 20pct
Pareto Chart Reference line 20pct https://community.tableau.com/thread/228448
add y axis reference line pareto chart tableau
https://www.google.com/search?q=add+y+axis+reference+line+pareto+chart+tableau&oq=add+y+axis+reference+line+pareto+chart+tableau&aqs=chrome..69i57.6862j0j1&sourceid=chrome&ie=UTF-8
create pareto chart https://www.youtube.com/watch?v=pptICtCPSVg
! math based bin
Count Number of Occurances of a Value https://community.tableau.com/message/303878#303878 <- good stuff
http://vizdiff.blogspot.com/2015/07/create-bins-via-math-formula.html
To create custom bins or buckets for Sales https://community.tableau.com/thread/229116
tableau manual create bucket of 1 https://www.google.com/search?q=tableau+manual+create+bucket+of+1&oq=tableau+manual+create+bucket+of+1&aqs=chrome..69i57.7065j1j1&sourceid=chrome&ie=UTF-8
Calculated Field - Number Generator https://community.tableau.com/thread/205361
tableau create a sequence of numbers calculater field https://www.google.com/search?q=tableau+create+a+sequence+of+numbers+calculater+field&oq=tableau+create+a+sequence+of+numbers+calculater+field&aqs=chrome..69i57.11948j0j1&sourceid=chrome&ie=UTF-8
tableau calculated field create sequence https://www.google.com/search?q=tableau+calculated+field+create+sequence&oq=tableau+calculated+field+create+sequence&aqs=chrome..69i57.7054j0j1&sourceid=chrome&ie=UTF-8
Grouping bins greater than 'x' https://community.tableau.com/thread/220806
creating bins https://www.youtube.com/watch?v=ZFdqXVNST24
creating bins2 https://www.youtube.com/watch?v=VwDPBWuHu3Q
! constant reference line on continuous data tableau
constant reference line on continuous data tableau https://www.google.com/search?ei=6rSmXPysDaWt_Qb5nrlI&q=constant+reference+line+on+continuous+data+tableau&oq=constant+reference+line+on+continuous+data+tableau&gs_l=psy-ab.3...14576.15058..15169...0.0..0.83.382.5......0....1..gws-wiz.......0i71j35i304i39.3OhD28SQVL0
! Add reference line in an axis made by Dimension
Add reference line in an axis made by Dimension https://community.tableau.com/thread/223274
Placing the Reference Line https://community.tableau.com/thread/260253
Highlight bin in Histogram https://community.tableau.com/thread/287638
tableau highlight bin https://www.google.com/search?q=tableau+highligh+bin&oq=tableau+highligh+bin&aqs=chrome..69i57.3251j0j1&sourceid=chrome&ie=UTF-8
! slope graph
https://www.tableau.com/about/blog/2016/9/how-add-vertical-lines-slope-graphs-multiple-measures-59632
! Reference Bands based on calculation
Reference Bands based on calculation https://community.tableau.com/thread/258490
! coding CASE statement easily
http://vizdiff.blogspot.com/2015/07/coding-case-statement-made-easy.html
! Floor and Ceiling Functions
Floor and Ceiling Functions https://community.tableau.com/docs/DOC-1354
https://www.google.com/search?q=tableau+floor+function&oq=tableau+floor+f&aqs=chrome.0.0j69i57.4668j0j1&sourceid=chrome&ie=UTF-8
! reference band based on calculation
reference band based on calculation https://community.tableau.com/thread/258490
tableau reference band on dimension in title https://www.google.com/search?ei=1rWmXInnLe61ggeNzI_wBg&q=tableau+reference+band+on+dimension+in+title&oq=tableau+reference+band+on+dimension&gs_l=psy-ab.1.0.33i22i29i30.16534.18647..20459...0.0..0.154.931.6j3......0....1..gws-wiz.......0i71j0i22i30j0i22i10i30.BFQ4RmLGzpA
! reference band, highlight weekends (check screenshot)
{{{
IF DATEPART('weekday',[Date])=6 or DATEPART('weekday',[Date])=7 THEN 0 END
}}}
https://www.evolytics.com/blog/tableau-hack-how-to-highlight-a-dimension/
adding reference line to discrete variable https://community.tableau.com/thread/193986
tableau Reference Line Discrete Headers https://www.google.com/search?q=tableau+Reference+Line+Discrete+Headers&oq=tableau+Reference+Line+Discrete+Headers&aqs=chrome..69i57j69i60.7445j0j1&sourceid=chrome&ie=UTF-8
https://kb.tableau.com/articles/issue/add-reference-line-to-discrete-field
Shading in weekends https://community.tableau.com/thread/123456
! Dashboard actions - Highlight bin from different source (see screenshot)
Highlight bin from different source https://community.tableau.com/thread/157710
! conditional format individual rows
Tableau: Advanced Conditional Formatting - format text columns differently https://www.youtube.com/watch?v=w2nlT_TBUzU , https://www.youtube.com/watch?v=7H7Dy0G0y04
https://www.evolytics.com/blog/tableau-hack-conditionally-format-individual-rows-columns/
http://www.vizwiz.com/2016/06/tableau-tip-tuesday-how-to.html
is there a way to highlight or bold certain rows ( the entire row) in tableau the way you can in excel https://community.tableau.com/thread/122382
Color Coding by column instead of the entire row https://community.tableau.com/thread/115822
! tableau can't compare numerical bin and integer values
https://www.google.com/search?ei=8p-mXJuVFNCs5wKU0KDgAg&q=tableau+can%27t+compare+numerical+bin+and+integer+values&oq=tableau+can%27t+compare+integer+and+numeric+values&gs_l=psy-ab.1.0.0i8i7i30.12190.18434..20595...0.0..0.110.1319.13j2......0....1..gws-wiz.......0i71j33i10.xCVj6fYB9lU
! A secondary axis chart: How to add a secondary axis in Tableau
A secondary axis chart: How to add a secondary axis in Tableau https://www.youtube.com/watch?v=8yNPCgL7OtI
! tableau server
https://www.udemy.com/administering-tableau-server-10-with-real-time-scenarios/
! tableau performance tuning
Enhancing Tableau Data Queries https://www.youtube.com/watch?v=STfTQ55QE9s&index=19&list=LLmp7QJNLQvBQcvdltLTkiYQ&t=0s
! perf tool - tableau log viewer
https://github.com/tableau/tableau-log-viewer https://github.com/tableau/tableau-log-viewer/releases
! tableau dashboard actions
Tableau Actions Give Your Dashboards Superpowers https://www.youtube.com/watch?v=r8SNKmzsW6c
! tabpy/R
Data science applications with TabPy/R https://www.youtube.com/watch?v=nRtOMTnBz_Y&feature=youtu.be
! time dimension example
(with example workbook) US Holiday Date Flags 2010-2020, to share https://community.tableau.com/thread/246992
http://radacad.com/do-you-need-a-date-dimension
http://radacad.com/custom-functions-made-easy-in-power-bi-desktop
oracle generate date dimension https://sonra.io/2009/02/24/lets-create-a-date-dimension-with-pure-oracle-sql/
! prophet forecasting
(with example workbook) using prophet to forecast https://community.tableau.com/thread/285800
example usage https://community.tableau.com/servlet/JiveServlet/download/855640-292880/Data%20Science%20Applications.twbx
! tableau database writeback
Writeback to reporting database in Tableau 8 - hack or feature https://community.tableau.com/thread/122102
https://www.tableau.com/ja-jp/about/blog/2016/10/tableau-getdata-api-60539
Tableau writeback https://www.reddit.com/r/tableau/comments/6quhg4/tableau_writeback/
Updating Data in Your Database with Tableau https://www.youtube.com/watch?v=UWI_ub1Xuwg
K4 Analytics: How to extend Tableau with write-back and leverage your Excel models https://www.youtube.com/watch?v=5PlzdA19TUw
https://www.clearlyandsimply.com/clearly_and_simply/2016/06/writing-and-reading-tableau-views-to-and-from-databases-and-text-files-part-2.html
Tableau Write Back to Database https://community.tableau.com/thread/284428
Can we write-back to the database https://community.tableau.com/thread/279806
https://www.computerweekly.com/blog/CW-Developer-Network/Tableau-widens-developer-play-what-is-a-writeback-extension
Tableau's Extension API - Write Back https://www.youtube.com/watch?v=Jiazp_zQ0jY
https://tableaufans.com/extension-api/tableau-extension-api-write-back-updated-source-code-for-tableau-2018-2/
Another method to update data from inside tableau http://tableaulove.tumblr.com/post/27627548817/another-method-to-update-data-from-inside-tableau
https://biztory.com/2017/10/09/interactive-commenting-solution-tableau-server/
adding information to charts for data which is not in the data source file https://community.tableau.com/thread/139439
! Commenting On Data Points In Dashboard
https://interworks.com/blog/jlyons/2018/10/01/portals-for-tableau-101-inline-commenting-on-dashboards/
https://www.theinformationlab.co.uk/2016/04/13/dashboards-reports-dynamic-comments-tableau/
Commenting On Data Points In Dashboard https://community.tableau.com/docs/DOC-8867
https://community.tableau.com/thread/157149?start=15&tstart=0
! tableau display database data as annotation
Populate Annotation with Calculated Field https://community.tableau.com/thread/156259
Annotations / Comments / data writeback https://community.tableau.com/ideas/1261
https://stackoverflow.com/questions/40533735/generating-dynamic-displayed-annotations-in-tableau-dashboard
! order of operations
https://www.theinformationlab.co.uk/2013/01/28/5-things-i-wish-i-knew-about-tableau-when-i-started/
! adding timestamp on charts
https://www.thedataschool.co.uk/robbin-vernooij/time-stamping-your-data-in-tableau-and-tableau-prep/
! tableau threshold data alerts
data driven alerts https://www.youtube.com/watch?v=vp3u4D7ao8w
https://www.google.com/search?q=tableau+threshold+alerts&oq=tableau+threshold+alerts&aqs=chrome..69i57.5575j0j1&sourceid=chrome&ie=UTF-8
! tableau pdf automation
<<<
Automation of creating PDF workbooks and delivery via email https://community.tableau.com/thread/120031
Tabcmd and Batch Scripts to automate PDF generation https://www.youtube.com/watch?v=ajB7CDcoyDU
https://www.thedataschool.co.uk/philip-mannering/idiots-guide-controlling-tableau-command-line-using-tabcmd/
Print to PDF using pages shelf in Tableau Desktop https://community.tableau.com/thread/238654
https://www.quora.com/How-do-I-automate-reports-using-the-Tableau-software
https://onlinehelp.tableau.com/current/pro/desktop/en-us/printing.htm
Can TabCMD be used to automatically schedule reports to a file share https://community.tableau.com/thread/176497
how to use tabcmd in tableau desktop,and what are the commands for downloading pdf file and txbx file https://community.tableau.com/thread/154051
we can achieve the batch export to PDF using tabcmd, and we can also pass in parameters, https://www.thedataschool.co.uk/philip-mannering/idiots-guide-controlling-tableau-command-line-using-tabcmd/ , a batch file can be scheduled on your laptop or on the server itself. The PDF files related to EXP1 will then be spooled to a directory and can be merged (cpu, io, mem, etc.) into one file using another tool, all handled in the batch script (see the sketch after this section)
example workflow of automating pdf https://www.youtube.com/watch?v=ajB7CDcoyDU
<<<
! tableau open source
https://tableau.github.io/
! tableau javascript api (embedded analytics - js api, REST API, SSO, and mobile)
Tableau JavaScript API | The most delicious ingredient for your custom applications https://www.youtube.com/watch?v=Oda_T5PMwt0
official doc https://onlinehelp.tableau.com/current/api/js_api/en-us/JavaScriptAPI/js_api.htm
Tableau JavaScript API: Getting Started https://www.youtube.com/watch?v=pCstUYalMEU
! tableau hyper api (hyper files as data frame - enables CRUD on files)
Hyper API: Automating Data Connectivity to Solve Real Business Problems https://www.youtube.com/watch?v=-FrMCmknI0Y
https://help.tableau.com/current/api/hyper_api/en-us/docs/hyper_api_whatsnew.html
* https://github.com/tableau/hyper-api-samples
* https://github.com/Bartman0/tableau-incremental-refresh/blob/main/tableau-incremental-refresh.py
* https://github.com/manish-aspiring-DS/Tableau-Hyper-Files
! tableau other developer tools
https://www.tableau.com/developer/tools
https://www.tableau.com/support/help
<<<
Tableau Connector SDK
Tableau Embedded Analytics Playbook
Tableau Extensions API
Tableau Hyper API
Tableau JavaScript API
Tableau Metadata API
Tableau Python Server (TabPY)
Tableau REST API
Tableau Webhooks
Web Data Connector SDK
<<<
! alexa tableau integration
https://github.com/jinuik?tab=repositories
http://bihappyblog.com/2016/06/11/voice-controlled-tableau-dashboard/
https://www.talater.com/annyang/
Tableau and Google Assistant / Siri https://community.tableau.com/thread/267634
Tableau Assistant - Alexa https://www.youtube.com/watch?v=V8TJBj0msIQ
https://www.tableau.com/about/blog/2017/3/hacking-alexa-and-other-tableau-api-tricks-67108
Alexa as a Tableau Assistant https://www.youtube.com/watch?v=zqGK2LYtx-U
Tableau 16 Hackathon - Voice Assisted Analytics https://www.youtube.com/watch?v=5Uul3Qy8YVE
alexa with tableau https://community.tableau.com/thread/256681
Integrating Tableau with Alexa https://community.tableau.com/thread/264965
https://twitter.com/tableau/status/967885701164527621
! tableau data source refresh schedule
https://www.youtube.com/results?search_query=tableau+data+source
https://www.youtube.com/results?search_query=tableau+data+source+refresh
https://www.youtube.com/watch?v=FuDX1u9QSb8 Tableau - Do it Yourself Tutorial - Refresh Extracts using Command line - DIY -33-of-50
!! tableau sync client, tableau bridge
Tableau - Do it Yourself Tutorial - Refresh Extracts using Command line - DIY -33-of-50 https://www.youtube.com/watch?v=FuDX1u9QSb8&list=PLklSCDzsQHdkjiTHqqCaU8tdA70AlSnPs&index=24&t=0s
https://onlinehelp.tableau.com/current/pro/desktop/en-gb/extracting_push.htm
https://www.google.com/search?q=tableay+sync+client&oq=tableay+sync+client&aqs=chrome..69i57j0l5.2789j0j0&sourceid=chrome&ie=UTF-8
https://www.tableau.com/about/blog/2015/5/online-sync-client-38549
https://onlinehelp.tableau.com/current/online/en-us/qs_refresh_local_data.htm
https://kb.tableau.com/articles/issue/error-this-file-was-created-by-a-newer-version-of-tableau-using-online-sync-client
! tableau outlier detection, standard deviation
https://public.tableau.com/views/HandlingDataOutliers/OutlierHandling?%3Aembed=y&%3AshowVizHome=no&%3Adisplay_count=y&%3Adisplay_static_image=y
Outliers based on Standard Deviation https://community.tableau.com/thread/195904
! tableau awk split delimiter
{{{
SPLIT([Name],':',2 )
}}}
https://community.tableau.com/thread/177520
! custom color palette
How to use same color palette for different visualizations https://community.tableau.com/thread/248482
https://onlinehelp.tableau.com/current/pro/desktop/en-us/formatting_worksheet.htm
https://www.tableauexpert.co.in/2015/11/how-to-create-custom-color-palette-in.html
! Labels overlapping
Labels overlapping https://community.tableau.com/thread/208870
mark labels https://community.tableau.com/thread/212775
How to avoid overlapping of labels in dual axis charts https://community.tableau.com/thread/236099
! real time graphs, kafka streaming
streaming data https://community.tableau.com/thread/125081
https://rockset.com/blog/using-tableau-for-live-dashboards-on-event-data/
https://rockset.com/blog/tableau-kafka-real-time-sql-dashboard-on-streaming-data/
Real Time streaming data from Kafka https://community.tableau.com/ideas/8913
<<<
Since Kafka is a streaming data source, it would not make sense to connect Kafka directly to Tableau. But you can use Tableau's Rockset JDBC connector to build live Tableau dashboards on streaming event data with:
1. Low data latency (new data shows up in seconds)
2. Fast SQL queries (including JOINs with other data sources)
3. Support for high QPS, interactive queries & drill downs
<<<
! tableau data source - custom SQL pivot
https://help.tableau.com/current/pro/desktop/en-us/pivot.htm
Combine multiple dimensions / pivot multiple columns https://community.tableau.com/thread/189601
https://www.google.com/search?q=tableau+pivot+dimension+columns&oq=tableau+pivot+dimension+columns&aqs=chrome..69i57j33.11202j0j1&sourceid=chrome&ie=UTF-8
https://www.google.com/search?q=tableau+pivot+column+not+working&oq=tableau+pivot+column+not+&aqs=chrome.2.69i57j33l6.7094j1j1&sourceid=chrome&ie=UTF-8
! circle graph jitter - spacing scatter plot, overlapping circles
Overlapping marks on scatter plot https://community.tableau.com/thread/283671
https://www.google.com/search?q=tableau+jitter+on+circle+chart&oq=tableau+jitter+on+circle+chart&aqs=chrome..69i57j33.6029j0j1&sourceid=chrome&ie=UTF-8
! tableau timeline graph
https://playfairdata.com/how-to-make-a-tableau-timeline-when-events-overlap/
https://playfairdata.com/how-to-make-a-timeline-in-tableau/
https://www.google.com/search?q=tableau+visualize+start+end+times+time+series&oq=tableau+visualize+start+end+times+time+series&aqs=chrome..69i57.8797j1j1&sourceid=chrome&ie=UTF-8
https://www.google.com/search?q=tableau+time+series+start+and+end+timestamp&oq=tableau+time+series+start+and+end+timestamp&aqs=chrome..69i57.12272j0j1&sourceid=chrome&ie=UTF-8
! Changing the File Path for Extracts
https://kb.tableau.com/articles/howto/changing-the-file-path-for-extracts
{{{
Answer
If you have a .twbx file, first convert it to a .twb file using one of the following methods:
* In Tableau Desktop, open the packaged workbook (.twbx file), then select File > Save As. Under Save as type, select Tableau Workbook (*.twb).
* In Windows Explorer, right-click the .twbx file, then select Unpackage.
Then repoint the extract:
1. In Tableau Desktop, open the .twb file. Click the sheet tab and select Data > <data source name> > Extract > Remove.
2. Select Just remove the extract, then click OK.
3. Select Data > <data source name> > Extract Data, then click Extract.
4. Select the desired location, then click Save.
}}}
! Difference Between Extract Filters and Data Source Filters
https://community.tableau.com/docs/DOC-8721
{{{
Extract Filter:
As the name implies, extract filters filter out data while creating the extract.
Example: say we have a database with data for different countries:
USA – 5000 rows
Canada – 2000 rows
India – 10000 rows
Australia – 1500 rows
If we apply an extract filter to bring in only the data for USA (Country=USA), Tableau creates the extract (.tde) just for the country USA and ignores the data for all other countries.
The size of the extract is always proportionate to the extract filters.
# of rows in the extract: 5000 rows (country USA)
Data Source Filters:
In Tableau 9.0.4, applying data source filters doesn't change the volume of data or the size of the extract. Instead, data source filters are applied to the background query when we use any of the dimensions or measures in a visualization.
Example:
If we apply a data source filter to bring in only the data for USA (Country=USA), Tableau creates the extract (.tde) with the full volume of data for all countries (not only USA); there is no relationship between data source filters and the size of the extract.
# of rows in the extract: 18,500 (all countries)
However, there is no difference in the way we use the dimensions and measures with either extract in a visualization. Both work as expected and show the data only for USA.
}}}
! Trim a string up to a special character
https://community.tableau.com/thread/134857
{{{
RIGHT([String], LEN([String]) - FIND([String], ":"))
}}}
! startswith
Creating two filters using the first letter (Starts with) https://community.tableau.com/thread/235823
! tableau topn within category - INDEX
How to find the top N within a category in Tableau
https://www.youtube.com/watch?v=z0R9OsDl-10
https://kb.tableau.com/articles/howto/finding-the-top-n-within-a-category
! Videos
Tableau TCC12 Session: Facebook http://www.ustream.tv/recorded/26807227
''Tableau Server/Desktop videos''
http://www.lynda.com/Tableau-tutorials/Up-Running-Tableau/165439-2.html
http://beta.pluralsight.com/search/?searchTerm=tableau
http://pluralsight.com/training/Authors/Details/ben-sullins
http://www.livefyre.com/profile/21361843/ ben sullins comments/questions
! people
tableauing dangerously https://cmtoomey.github.io/blog/page5/
! TC conference papers and materials (2016 to 2018)
https://www.dropbox.com/sh/lztdogubf20498e/AAAPptLIxaAPLdBGmwUtMVJba?dl=0
.
! fork this
https://github.com/karlarao/karlaraowiki
! how to run two versions of mozilla (need to create a new profile)
{{{
"C:\Program Files (x86)\MozillaFirefox4RC2\firefox.exe" -P "karlarao" -no-remote
}}}
https://blogs.oracle.com/datawarehousing/getting-started-with-autonomous-data-warehouse-part-1-oracle-moviestream
DML without limits, now in BigQuery https://cloud.google.com/blog/products/data-analytics/dml-without-limits-now-in-bigquery
Dremel: Interactive Analysis of Web-Scale Datasets (with cost based optimizer)
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36632.pdf
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91346837-69e30480-e7af-11ea-9e18-452a3c1c8a28.png]]
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91346835-68b1d780-e7af-11ea-9186-7a12eb70ddf1.png]]
https://github.com/googleapis/python-bigquery/tree/master/benchmark
https://github.com/googleapis/python-bigquery/tree/master/samples
https://cloud.google.com/bigquery/docs/release-notes
<<showtoc>>
! one SQL
vi test.py
{{{
from google.cloud import bigquery
# Construct a BigQuery client object.
client = bigquery.Client()
query = """
with cte as (
SELECT /* cte_query */ b.*
FROM `example-prod-284123.dataset01.table01` a
inner join `example-dev-284123.dataset01.table01` b
on a.col1 = b.col1
)
SELECT /* now3 main_query */ b.*
FROM `example-prod-284123.dataset01.table01` a
inner join `example-dev-284123.dataset01.table01` b
on a.col1 = b.col1
inner join cte
on a.col1 = cte.col1
inner join (SELECT /* subquery */ b.*
FROM `example-prod-284123.dataset01.table01` a
inner join `example-dev-284123.dataset01.table01` b
on a.col1 = b.col1) sq
on a.col1 = sq.col1
"""
query_job = client.query(query)  # Make an API request (runs asynchronously).
query_job.result()               # Wait for the query to finish.
print("done")
# To inspect the result rows:
# for row in query_job:
#     print(dict(row))
}}}
! two SQLs - SELECT and DDL
{{{
cat test3-createtbl.py
from google.cloud import bigquery
# Construct a BigQuery client object.
client = bigquery.Client()
query = """
with cte as (
SELECT /* cte_query */ b.*
FROM `example-prod-284123.dataset01.table01` a
inner join `example-dev-284123.dataset01.table01` b
on a.col1 = b.col1
)
SELECT /* now3 main_query */ b.*
FROM `example-prod-284123.dataset01.table01` a
inner join `example-dev-284123.dataset01.table01` b
on a.col1 = b.col1
inner join cte
on a.col1 = cte.col1
inner join (SELECT /* subquery */ b.*
FROM `example-prod-284123.dataset01.table01` a
inner join `example-dev-284123.dataset01.table01` b
on a.col1 = b.col1) sq
on a.col1 = sq.col1
;
create table `example-dev-284123.dataset01.table03`
as select * from `example-dev-284123.dataset01.table01`;
"""
query_job = client.query(query)  # Make an API request (multi-statement script).
query_job.result()               # Wait for both statements to complete.
}}}
.
https://cloud.google.com/products/calculator/
https://cloudpricingcalculator.appspot.com/static/data/pricelist.json
https://cloud.google.com/storage/pricing
http://calculator.s3.amazonaws.com/index.html
<<showtoc>>
! node and pl/sql
http://www.slideshare.net/lucasjellema/oracle-databasecentric-apis-on-the-cloud-using-plsql-and-nodejs <-- GOOD STUFF
https://technology.amis.nl/2016/04/01/running-node-oracledb-the-oracle-database-driver-for-node-js-in-the-pre-built-vm-for-database-development/ <- GOOD STUFF
https://github.com/lucasjellema/sig-nodejs-amis-2016 <- GOOD STUFF
https://www.npmjs.com/search?q=plsql
https://www.npmjs.com/package/node_plsql
https://www.npmjs.com/package/oracledb
http://www.slideshare.net/lucenerevolution/sir-en-final
https://blogs.oracle.com/opal/entry/introducing_node_oracledb_a_node
https://github.com/oracle/node-oracledb
https://github.com/oracle/node-oracledb/blob/master/doc/api.md#plsqlexecution
http://stackoverflow.com/questions/36009085/how-to-execute-stored-procedure-query-in-node-oracledb-if-we-are-not-aware-of
http://www.slideshare.net/lucasjellema/oracle-databasecentric-apis-on-the-cloud-using-plsql-and-nodejs
http://dgielis.blogspot.com/2015/01/setting-up-node-and-oracle-database.html
http://pythonhackers.com/p/doberkofler/node_plsql
http://lauren.pomogi-mne.com/how-to-run-oracle-user-defined-functions-using-nodejs-stack-overflow-1061567616/
! node websocket , socket.io
websocket https://www.youtube.com/watch?v=9L-_cNQaizM
Real-time Data with Node.js, Socket.IO, and Oracle Database https://www.youtube.com/watch?v=-mMIxikhi6M
http://krisrice.blogspot.com/2014/06/publish-data-over-rest-with-nodejs.html
An Intro to JavaScript Web Apps on Oracle Database http://nyoug.org/wp-content/uploads/2015/04/McGhan_JavaScript.pdf
! dan mcghan's relational to json
I'm looking up books/references for "end to end app from data model to nodejs" and I came across that video - VTS: Relational to JSON with Node.js https://www.youtube.com/watch?v=hFoeVZ4UpBs
https://blogs.oracle.com/newgendbaccess/entry/in_praise_of_dan_mcghan
https://jsao.io/2015/07/relational-to-json-in-oracle-database/ , http://www.slideshare.net/CJavaPeru/relational-to-json-with-node-dan-mc-ghanls
http://www.garysguide.com/events/lxnry6t/NodeJS-Microservices-NW-js-Mapping-Relational-to-JSON
https://blog.risingstack.com/nodejs-at-scale-npm-best-practices/
An Intro to JavaScript Web Apps on Oracle Database http://nyoug.org/wp-content/uploads/2015/04/McGhan_JavaScript.pdf
! path to nodejs
https://github.com/gilcrest/OracleMacOSXElCapitanSetup4Node
http://drumtechy.blogspot.com/2015/03/my-path-to-nodejs-and-oracle-glory.html
http://drumtechy.blogspot.com/2015/03/my-path-to-nodejs-and-oracle-glory_14.html
http://drumtechy.blogspot.com/2015/03/my-path-to-nodejs-and-oracle-glory_16.html
<<showtoc>>
! ksun-oracle
!! book - oracle database performance tuning - studies practices research
https://drive.google.com/file/d/1VijpHBG1I7Wi2mMPj91kSmHH_J-CZKr_/edit
!! Cache Buffer Chains Latch Contention Case Study-2: Reverse Primary Key Index
http://ksun-oracle.blogspot.com/2020/02/cache-buffer-chains-latch-contention_25.html
{{{
wget https://raw.githubusercontent.com/karlarao/scripts/master/performance/create_hint_sqlprofile.sql
profile fix offload initial max SQL from hours to 3secs
We have the following SQL that ran long in DWTST, which we fixed through a SQL profile (from 50 mins to 3 secs). We expect this to run even longer in PROD due to the larger table size.
SELECT MIN("LOAD_DATE") FROM "DIM"."ENS_CSM_SUMMARY_DT_GLT"
I've attached the script to implement the fix. Please follow the steps below:
1) On prod host
cd /db_backup_denx3/p1/gluent/karl
2) Connect / as sysdba
3) Execute the script as follows
@create_hint_sqlprofile
Enter value for sql_id: dg7zj0q9qa2gf
Enter value for profile_name (PROFILE_sqlid_MANUAL): <just hit ENTER here>
Enter value for category (DEFAULT): <just hit ENTER here>
Enter value for force_matching (false): <just hit ENTER here>
Enter value for hint_text: NO_PARALLEL
Profile PROFILE_dg7zj0q9qa2gf_MANUAL created.
This makes the query run serially through a profile hint, which is the fix for this issue.
After the profile is created, please cancel the job and restart it.
}}}
Field Guide to Hadoop https://www.safaribooksonline.com/library/view/field-guide-to/9781491947920/
<<<
* scd
* cdc
* streaming
<<<
Data lake ingestion strategies - Practical Enterprise Data Lake Insights: Handle Data-Driven Challenges in an Enterprise Big Data Lake https://learning.oreilly.com/library/view/practical-enterprise-data/9781484235225/html/454145_1_En_2_Chapter.xhtml
Information Integration and Exchange - Enterprise Information Management in Practice: Managing Data and Leveraging Profits in Today’s Complex Business Environment https://learning.oreilly.com/library/view/enterprise-information-management/9781484212189/9781484212196_Ch05.xhtml
Data Warehouse Patterns - SQL Server Integration Services Design Patterns, Second Edition https://learning.oreilly.com/library/view/sql-server-integration/9781484200827/9781484200834_Ch11.xhtml
Change Data Capture techniques - SAP Data Services 4.x Cookbook https://learning.oreilly.com/library/view/sap-data-services/9781782176565/ch09s02.html
https://www.youtube.com/results?search_query=scd+vs+cdc
https://communities.sas.com/t5/SAS-Data-Management/SCD-Type-2-Loader-vs-Change-Data-Capture/td-p/136421
https://www.google.com/search?q=why+CDC+vs+SCD&ei=0C9wXNDrM-Wmgge64pbYCg&start=10&sa=N&ved=0ahUKEwjQk_Ks7c_gAhVlk-AKHTqxBasQ8tMDCLQB&biw=1439&bih=798
https://network.informatica.com/message/75171#75171
https://it.toolbox.com/question/slowly-changing-dimension-vs-change-data-capture-053110
https://network.informatica.com/thread/40299
https://archive.sap.com/discussions/thread/2140880
<<<
CDC is Change Data Capture -
CDC methods let you extract and load only the new or changed records from the source, rather than loading the entire record set from the source. This is also called a delta or incremental load.
SCD Type 2 (Slowly Changing Dimension Type 2) -
This lets you store/preserve the history of changed records for selected dimensions. The transaction/source table mostly holds only the current value; SCD Type 2 is used where the history of a dimension is required for analysis (a small sketch follows below).
<<<
https://books.google.com/books?id=83pgjociDWsC&pg=RA5-PT9&lpg=RA5-PT9&dq=scd+and+cdc&source=bl&ots=Ipp7HAYCFX&sig=ACfU3U2C8CiaSxa_urF19q5IhQ8DOXLbIQ&hl=en&sa=X&ved=2ahUKEwiXt9WC7M_gAhXqRt8KHdWuBnsQ6AEwCXoECAoQAQ#v=onepage&q=scd%20and%20cdc&f=false
https://community.talend.com/t5/Design-and-Development/Difference-between-CDC-and-SCD/td-p/111312
! Ideas for Event Sourcing in Oracle
https://medium.com/@FranckPachot/ideas-for-event-sourcing-in-oracle-d4e016e90af6
<<<
With more and more organizations moving to the cloud, there is a growing demand to feed data from on-premise Oracle/DB2/SQL Server databases to various platforms in the cloud. CDC captures changes as they happen, in real time, and pushes them to target platforms such as Kafka, Event Hubs, and data lakes. There are many ways to perform CDC, and many CDC products are available in the market. In this session, we will discuss the CDC options available and introduce a few key CDC products, such as Oracle GoldenGate, Attunity, and Striim.
<<<
..
<<<
QUESTION:
Is there a way to ignore hints AND profiles through one single parameter?
Like "**** you all, hints and profiles, I hate you!"
Or is the only way to do this to set _optimizer_ignore_hints and disable/drop all profiles?
ANSWER:
For profiles: ALTER SESSION SET SQLTUNE_CATEGORY = 'IGNOREMENOW';
For baselines: ALTER SESSION SET OPTIMIZER_USE_SQL_PLAN_BASELINES=false
These are the knobs to turn everything off :) We need them because Gluent doesn't like having a USE_NL hint on offloaded tables; it errors with KUP-04108: unable to reread file.
In case the developers have to deal with 1000+ SQLs, we know how to attack this with these knobs.
OTHER WAYS OF DISABLING:
IGNORE_OPTIM_EMBEDDED_HINTS <- hint that disables the statement's other embedded hints
{{{
select /*+ index(DEPT) ignore_optim_embedded_hints */ * from SCOTT.DEPT;
}}}
optimizer_ignore_hints <- database-wide, or per session e.g. through a logon trigger (see the sketch below)
{{{
alter session set optimizer_ignore_hints=true;
alter session set optimizer_ignore_parallel_hints=true;
}}}
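And a minimal sketch of the trigger route for optimizer_ignore_hints (APP_USER is a hypothetical schema name):
{{{
-- force the session-level setting for one application user at logon
create or replace trigger trg_ignore_hints
after logon on database
begin
  if sys_context('USERENV','SESSION_USER') = 'APP_USER' then
    execute immediate 'alter session set optimizer_ignore_hints = true';
  end if;
end;
/
}}}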
<<<
https://onlinexperiences.com/scripts/Server.nxp?LASCmd=AI:4;F:QS!10100&ShowUUID=958AB2AD-BBE8-4F30-82C9-338C87B7D6C6&ShowKey=73520&AffiliateData=DSCGR#xsid=a62e_5IW
https://www.youtube.com/results?search_query=How+to+Use+Time+Series+Data+to+Forecast+at+Scale
Mahan Hosseinzadeh- Prophet at scale to tune & forecast time series at Spotify https://www.youtube.com/watch?v=fegS34ItKcI
Joe Jevnik - A Worked Example of Using Neural Networks for Time Series Prediction https://www.youtube.com/watch?v=hAlGqT3Xpus
Real-time anomaly detection system for time series at scale https://www.youtube.com/watch?v=oVXySPH7MjQ
Two Effective Algorithms for Time Series Forecasting https://www.youtube.com/watch?v=VYpAodcdFfA
Nathaniel Cook - Forecasting Time Series Data at scale with the TICK stack https://www.youtube.com/watch?v=raEyZEryC0k
How to Use Time Series Data to Forecast at Scale| DZone.com Webinar https://www.youtube.com/watch?v=KoLR7baZYec
Forecasting at Scale: How and Why We Developed Prophet for Forecasting at Facebook https://www.youtube.com/watch?v=pOYAXv15r3A
https://cloud.google.com/blog/products/databases/alloydb-for-postgresql-columnar-engine
.
https://community.hortonworks.com/articles/58458/installing-docker-version-of-sandbox-on-mac.html <-- follow this @@docker@@ howto!
https://hortonworks.com/tutorial/learning-the-ropes-of-the-hortonworks-sandbox/
HORTONWORKS SANDBOX SANDBOX DEPLOYMENT AND INSTALL GUIDE Deploying Hortonworks Sandbox on @@Docker@@ https://hortonworks.com/tutorial/sandbox-deployment-and-install-guide/section/3/#for-mac
https://community.hortonworks.com/questions/57757/hdp-25-sandbox-not-starting.html <-- issue on sandbox not starting
https://www.quora.com/To-start-learning-and-playing-with-Hadoop-which-one-should-I-prefer-Cloudera-QuickStart-VM-Hortonworks-Sandbox-or-MapR-Sandbox
.
<<showtoc>>
! cat files
https://stackoverflow.com/questions/19778137/why-is-there-no-hadoop-fs-head-shell-command
{{{
hadoop fs -cat /path/to/file | head
hadoop fs -cat /path/to/file | tail
}}}
! create home directory
{{{
[root@node1 ~]# su - hdfs
[hdfs@node1 ~]$ hadoop fs -mkdir /user/vagrant
[hdfs@node1 ~]$ hadoop fs -chown vagrant:vagrant /user/vagrant
[hdfs@node1 ~]$ hadoop fs -ls /user
Found 5 items
drwxr-xr-x - admin hdfs 0 2019-01-06 02:11 /user/admin
drwxrwx--- - ambari-qa hdfs 0 2019-01-06 01:31 /user/ambari-qa
drwxr-xr-x - hcat hdfs 0 2019-01-06 01:44 /user/hcat
drwxr-xr-x - hive hdfs 0 2019-01-06 02:06 /user/hive
drwxr-xr-x - vagrant vagrant 0 2019-01-08 06:26 /user/vagrant
}}}
! copy file
{{{
[vagrant@node1 data]$ du -sm salaries.csv
16 salaries.csv
[vagrant@node1 data]$ hadoop fs -put salaries.csv
[vagrant@node1 data]$ hadoop fs -ls
Found 1 items
-rw-r--r-- 3 vagrant vagrant 16257213 2019-01-08 06:27 salaries.csv
}}}
! copy file with different block size
* this splits the 16MB file into 1MB blocks spread across the data nodes, each replicated 3x
{{{
[vagrant@node1 data]$ hadoop fs -D dfs.blocksize=1m -put salaries.csv salaries2.csv
}}}
! check file status
{{{
[vagrant@node1 data]$ hdfs fsck /user/vagrant/salaries.csv
Connecting to namenode via http://node1.example.com:50070/fsck?ugi=vagrant&path=%2Fuser%2Fvagrant%2Fsalaries.csv
FSCK started by vagrant (auth:SIMPLE) from /192.168.199.2 for path /user/vagrant/salaries.csv at Tue Jan 08 06:29:06 UTC 2019
.
/user/vagrant/salaries.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741876_1056. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Status: HEALTHY
Total size: 16257213 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 1 (avg. block size 16257213 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 1 (100.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.0
Corrupt blocks: 0
Missing replicas: 1 (33.333332 %)
Number of data-nodes: 2
Number of racks: 1
FSCK ended at Tue Jan 08 06:29:06 UTC 2019 in 4 milliseconds
The filesystem under path '/user/vagrant/salaries.csv' is HEALTHY
-- FILE WITH DIFFERENT BLOCK SIZE
[vagrant@node1 data]$ hdfs fsck /user/vagrant/salaries2.csv
Connecting to namenode via http://node1.example.com:50070/fsck?ugi=vagrant&path=%2Fuser%2Fvagrant%2Fsalaries2.csv
FSCK started by vagrant (auth:SIMPLE) from /192.168.199.2 for path /user/vagrant/salaries2.csv at Tue Jan 08 06:31:11 UTC 2019
.
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741877_1057. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741878_1058. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741879_1059. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741880_1060. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741881_1061. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741882_1062. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741883_1063. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741884_1064. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741885_1065. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741886_1066. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741887_1067. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741888_1068. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741889_1069. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741890_1070. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741891_1071. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
/user/vagrant/salaries2.csv: Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741892_1072. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Status: HEALTHY
Total size: 16257213 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 16 (avg. block size 1016075 B)
Minimally replicated blocks: 16 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 16 (100.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.0
Corrupt blocks: 0
Missing replicas: 16 (33.333332 %)
Number of data-nodes: 2
Number of racks: 1
FSCK ended at Tue Jan 08 06:31:11 UTC 2019 in 1 milliseconds
The filesystem under path '/user/vagrant/salaries2.csv' is HEALTHY
}}}
! get file locations and blocks
{{{
[vagrant@node1 data]$ hdfs fsck /user/vagrant/salaries.csv -files -locations -blocks
Connecting to namenode via http://node1.example.com:50070/fsck?ugi=vagrant&files=1&locations=1&blocks=1&path=%2Fuser%2Fvagrant%2Fsalaries.csv
FSCK started by vagrant (auth:SIMPLE) from /192.168.199.2 for path /user/vagrant/salaries.csv at Tue Jan 08 06:33:18 UTC 2019
/user/vagrant/salaries.csv 16257213 bytes, 1 block(s): Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741876_1056. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
0. BP-534825236-192.168.199.2-1546738263299:blk_1073741876_1056 len=16257213 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
Status: HEALTHY
Total size: 16257213 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 1 (avg. block size 16257213 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 1 (100.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.0
Corrupt blocks: 0
Missing replicas: 1 (33.333332 %)
Number of data-nodes: 2
Number of racks: 1
FSCK ended at Tue Jan 08 06:33:18 UTC 2019 in 1 milliseconds
The filesystem under path '/user/vagrant/salaries.csv' is HEALTHY
[vagrant@node1 data]$
[vagrant@node1 data]$
[vagrant@node1 data]$
[vagrant@node1 data]$
[vagrant@node1 data]$
[vagrant@node1 data]$
[vagrant@node1 data]$ hdfs fsck /user/vagrant/salaries2.csv -files -locations -blocks
Connecting to namenode via http://node1.example.com:50070/fsck?ugi=vagrant&files=1&locations=1&blocks=1&path=%2Fuser%2Fvagrant%2Fsalaries2.csv
FSCK started by vagrant (auth:SIMPLE) from /192.168.199.2 for path /user/vagrant/salaries2.csv at Tue Jan 08 06:36:04 UTC 2019
/user/vagrant/salaries2.csv 16257213 bytes, 16 block(s): Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741877_1057. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741878_1058. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741879_1059. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741880_1060. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741881_1061. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741882_1062. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741883_1063. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741884_1064. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741885_1065. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741886_1066. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741887_1067. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741888_1068. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741889_1069. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741890_1070. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741891_1071. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741892_1072. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
0. BP-534825236-192.168.199.2-1546738263299:blk_1073741877_1057 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
1. BP-534825236-192.168.199.2-1546738263299:blk_1073741878_1058 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
2. BP-534825236-192.168.199.2-1546738263299:blk_1073741879_1059 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
3. BP-534825236-192.168.199.2-1546738263299:blk_1073741880_1060 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK], DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK]]
4. BP-534825236-192.168.199.2-1546738263299:blk_1073741881_1061 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
5. BP-534825236-192.168.199.2-1546738263299:blk_1073741882_1062 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK], DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK]]
6. BP-534825236-192.168.199.2-1546738263299:blk_1073741883_1063 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
7. BP-534825236-192.168.199.2-1546738263299:blk_1073741884_1064 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
8. BP-534825236-192.168.199.2-1546738263299:blk_1073741885_1065 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
9. BP-534825236-192.168.199.2-1546738263299:blk_1073741886_1066 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
10. BP-534825236-192.168.199.2-1546738263299:blk_1073741887_1067 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
11. BP-534825236-192.168.199.2-1546738263299:blk_1073741888_1068 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
12. BP-534825236-192.168.199.2-1546738263299:blk_1073741889_1069 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
13. BP-534825236-192.168.199.2-1546738263299:blk_1073741890_1070 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
14. BP-534825236-192.168.199.2-1546738263299:blk_1073741891_1071 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
15. BP-534825236-192.168.199.2-1546738263299:blk_1073741892_1072 len=528573 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
Status: HEALTHY
Total size: 16257213 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 16 (avg. block size 1016075 B)
Minimally replicated blocks: 16 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 16 (100.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.0
Corrupt blocks: 0
Missing replicas: 16 (33.333332 %)
Number of data-nodes: 2
Number of racks: 1
FSCK ended at Tue Jan 08 06:36:04 UTC 2019 in 1 milliseconds
The filesystem under path '/user/vagrant/salaries2.csv' is HEALTHY
}}}
! read raw file in data node filesystem
* check for the blk_<id>
{{{
[root@node1 ~]# find /hadoop/hdfs/ -name "blk_1073741876" -print
/hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741876
[root@node1 ~]# find /hadoop/hdfs/ -name "blk_1073741878" -print
/hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741878
[root@node1 ~]# less /hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741878
}}}
! explore files using ambari files view and NameNode UI
Ambari files view http://127.0.0.1:8080/#/main/view/FILES/auto_files_instance
Quicklinks NameNode UI http://192.168.199.2:50070/explorer.html#/
.
http://hortonworks.com/wp-content/uploads/2016/05/Hortonworks.CheatSheet.SQLtoHive.pdf
! show all config parameters
{{{
set;
}}}
! connect on beeline
{{{
beeline> !connect jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connecting to jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
Enter password for jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
Connected to: Apache Hive (version 1.2.1000.2.5.0.0-1245)
Driver: Hive JDBC (version 1.2.1000.2.5.0.0-1245)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://sandbox.hortonworks.com:2181/>
0: jdbc:hive2://sandbox.hortonworks.com:2181/> show databases;
+----------------+--+
| database_name |
+----------------+--+
| default |
| foodmart |
| hr |
| xademo |
+----------------+--+
4 rows selected (0.14 seconds)
}}}
https://cwiki.apache.org/confluence/display/Hive/LanguageManual
https://cwiki.apache.org/confluence/display/Hive/Home#Home-HiveDocumentation
http://hive.apache.org/
<<showtoc>>
! WITH AS
https://cwiki.apache.org/confluence/display/Hive/Common+Table+Expression
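A minimal CTE sketch (sales is a hypothetical table):
{{{
WITH q1 AS (
  SELECT customer_id, SUM(amount) AS total
  FROM sales
  GROUP BY customer_id
)
SELECT * FROM q1 WHERE total > 100;
}}}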
! UNION ALL
https://stackoverflow.com/questions/16181684/combine-many-tables-in-hive-using-union-all
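A minimal sketch, using the subquery form that also works on older Hive versions (table names are hypothetical):
{{{
SELECT t.id, t.amount
FROM (
  SELECT id, amount FROM sales_2018
  UNION ALL
  SELECT id, amount FROM sales_2019
) t;
}}}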
! CASE function
http://www.folkstalk.com/2011/11/conditional-functions-in-hive.html , https://stackoverflow.com/questions/41023835/case-statements-in-hive , https://community.modeanalytics.com/sql/tutorial/sql-case/
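A quick sketch of both the simple and the searched CASE forms (emp is a hypothetical table):
{{{
SELECT ename,
       CASE deptno WHEN 10 THEN 'ACCOUNTING'
                   WHEN 20 THEN 'RESEARCH'
                   ELSE 'OTHER' END AS dept_name,
       CASE WHEN sal >= 3000 THEN 'HIGH' ELSE 'LOW' END AS sal_band
FROM emp;
}}}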
! hive JOINS
https://www.tutorialspoint.com/hive/hiveql_joins.htm
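A minimal join sketch (customers/orders are hypothetical tables):
{{{
-- inner join: only customers that have orders
SELECT c.id, c.name, o.amount
FROM customers c
JOIN orders o ON c.id = o.customer_id;

-- left outer join: all customers, NULL amount when no order
SELECT c.id, c.name, o.amount
FROM customers c
LEFT OUTER JOIN orders o ON c.id = o.customer_id;
}}}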
! DDL
{{{
show create table
}}}
! spool to CSV
{{{
hive -e 'set hive.cli.print.header=true; select * from test1' > file.csv
hive -e "use default;set hive.cli.print.header=true;select * from test1;" | sed 's/[\t]/,/g' > /tmp/test.csv
INSERT OVERWRITE LOCAL DIRECTORY '/path/to/hive/csv' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' SELECT * FROM hivetablename;
}}}
! spool to pipe delimited
https://stackoverflow.com/questions/44333450/hive-e-with-delimiter?rq=1
https://stackoverflow.com/questions/30224875/exporting-hive-table-to-csv-in-hdfs
{{{
[raj_ops@sandbox ~]$ hive -e "use default;set hive.cli.print.header=true;select * from hr.departments;" | sed 's/[\t]/|/g' > testdata.csv
Logging initialized using configuration in file:/etc/hive/2.5.0.0-1245/0/hive-log4j.properties
OK
Time taken: 3.496 seconds
OK
Time taken: 1.311 seconds, Fetched: 30 row(s)
[raj_ops@sandbox ~]$
[raj_ops@sandbox ~]$
[raj_ops@sandbox ~]$ cat testdata.csv
departments.department_id|departments.department_name|departments.manager_id|departments.location_id
10|Administration|200|1700
20|Marketing|201|1800
110|Accounting|205|1700
120|Treasury|NULL|1700
130|Corporate Tax|NULL|1700
140|Control And Credit|NULL|1700
150|Shareholder Services|NULL|1700
160|Benefits|NULL|1700
170|Manufacturing|NULL|1700
180|Construction|NULL|1700
190|Contracting|NULL|1700
200|Operations|NULL|1700
30|Purchasing|114|1700
210|IT Support|NULL|1700
220|NOC|NULL|1700
230|IT Helpdesk|NULL|1700
240|Government Sales|NULL|1700
250|Retail Sales|NULL|1700
260|Recruiting|NULL|1700
270|Payroll|NULL|1700
10|Administration|200|1700
50|Shipping|121|1500
50|Shipping|121|1500
40|Human Resources|203|2400
50|Shipping|121|1500
60|IT|103|1400
70|Public Relations|204|2700
80|Sales|145|2500
90|Executive|100|1700
100|Finance|108|1700
}}}
! alter table CSV header property skip header
{{{
hadoop fs -copyFromLocal mts_main_v1.csv /sdxx/derived/restou/dc_master_target_summary_v1
alter table dc_master_target_summary_v1 set TBLPROPERTIES ("skip.header.line.count"="1");
}}}
! run query
{{{
[raj_ops@sandbox ~]$ hive -e 'select * from foodmart.customer limit 2'
}}}
! run script
{{{
[raj_ops@sandbox ~]$ hive -f test.sql
[raj_ops@sandbox ~]$ cat test.sql
select * from foodmart.customer limit 2;
}}}
<<showtoc>>
! certification matrix
https://supportmatrix.hortonworks.com/
! ambari and hdp versions
ambari_and_hdp_versions.md https://gist.github.com/karlarao/ba6bbc1c0049de1fc1404b5d8dc56c4d
<<<
* ambari 2.4.3.0
* hdp 2.2 to 2.5.3.0
----------
* ambari 2.5.2.0
* hdp 2.3 to 2.6.3.0
----------
* ambari 2.6.2.2
* hdp 2.4 to 2.6.5
<<<
! ways to install
!! own machine
!!! manual install
* using apt-get and yum for ambari-server/ambari-agent
* and then manual provisioning of the cluster through ambari UI
!!! unattended install using vagrant
* using automation tools to install ambari-server/ambari-agent
* using blueprints to push a setup and configuration to the cluster
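A hedged sketch of the blueprint route through the Ambari REST API (hostname, file names, and the cluster/blueprint names are all assumptions):
{{{
# register the blueprint, then ask Ambari to build a cluster from it
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d @my_blueprint.json http://ambari-server:8080/api/v1/blueprints/my_blueprint
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d @my_cluster_template.json http://ambari-server:8080/api/v1/clusters/my_cluster
}}}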
!! on cloud
!!! manual or unattended install
!!! using hortonworks cloudbreak (uses docker to provision to cloud)
https://hortonworks.com/open-source/cloudbreak/#section_1
https://cwiki.apache.org/confluence/display/AMBARI/Blueprints
! installation docs
https://docs.hortonworks.com/
https://hortonworks.com/products/data-platforms/hdp/
!! ambari doc
https://docs.hortonworks.com/HDPDocuments/Ambari/Ambari-2.7.3.0/index.html
!! cluster planning
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/cluster-planning/content/partitioning.html
! cluster resources
!!! ambari-server
!!! ambari-agent (node 2-3)
!!! NameNode
!!! ResourceManager
!!! Zookeeper (node 2-3)
!!! DataNode (node 2-3)
!!! NodeManager (node 2-3)
! gluent installation
https://docs.gluent.com/goe/install_and_upgrade.html
.
http://hadooptutorial.info/impala-commands-cheat-sheet/
<<showtoc>>
! ember.js
!! list of apps written in ember
http://iheanyi.com/journal/2015/03/24/a-list-of-open-source-emberjs-projects/
http://stackoverflow.com/questions/10830072/recommended-example-applications-written-in-ember-js
!! a very cool restaurant app
http://www.pluralsight.com/courses/fire-up-emberjs
!! analytics tuts app
https://code.tutsplus.com/courses/end-to-end-analytics/lessons/getting-started
<<<
very nice app that shows the restaurant tables and the items ordered at each table
shows the table, items, and item details w/ totals
nice howto of ember in action
<<<
! backbone.js
!! simple app backbone rectangle app
http://www.pluralsight.com/courses/backbone-fundamentals
<<<
very nice explanation so far!
<<<
!! blogroll app (MBEN stack)
https://www.youtube.com/watch?v=a-ijUKVIJSw&list=PLX2HoWE32I8OCnumQmc9lcjnHIjAamIy6&index=4
another server example https://www.youtube.com/watch?v=uykzCfu1RiQ
https://www.youtube.com/watch?v=kHV7gOHvNdk&list=PLX2HoWE32I8Nkzw2TqcifObuhgJZz8a0U
!!! git repo
https://github.com/michaelcheng429/backbone_tutorial_blogroll_app/tree/part1-clientside-code
https://github.com/michaelcheng429/backbone_tutorial_blogroll_app
!! db administration app
Application Building Patterns with Backbone.js - http://www.pluralsight.com/courses/playing-with-backbonejs
<<<
a full db administration app
uses node!
<<<
!! backbone todo
https://app.pluralsight.com/library/courses/choosing-javascript-framework/exercise-files
<<<
frontend masters - a todo mvc example
<<<
!! another app todo list
Backbone.JS In-Depth and Intro to Testing with Mocha and Sinon - https://app.pluralsight.com/library/courses/backbone-js-in-depth-testing-mocha-sinon/table-of-contents
<<<
frontend masters class
another todo list app
<<<
!! music player
http://www.pluralsight.com/courses/backbonejs
! angular
!! video publishing site w/ login (MEAN stack)
http://www.pluralsight.com/courses/building-angularjs-nodejs-apps-mean
<<<
video publishing site built on mean stack
great example of authentication and authorization
<<<
! handlebars.js
!! just a simple demo page about employee address book details
http://www.lynda.com/sdk/Web-Interaction-Design-tutorials/JavaScript-Templating/156166-2.html
<<<
clear explanations maybe because it's a simple app, I like this one!
very nice simple howto on different templating engines
the guy used vivaldi browser and aptana for minimal setup
<<<
<<<
jquery, mustache.js, handlebars.js, dust
<<<
!! dog or not app
http://www.pluralsight.com/courses/handlebars-javascript-templating
<<<
this is a handlebars centric course
cute webapp that shows a photo where you would identify if it is a dog or not
this app shows how filtering, pagination, and scoring is done
<<<
<<<
bower, handlebars.js, gulp
<<<
! nodejs
!! oracle to_json and node oracle driver voting on hr schema
http://www.slideshare.net/lucasjellema/oracle-databasecentric-apis-on-the-cloud-using-plsql-and-nodejs
https://github.com/pavadeli/oowsession2016-app
https://github.com/pavadeli/oowsession2016
!! node-oracledb at amis lucas
https://github.com/lucasjellema/sig-nodejs-amis-2016
!! dino-date - showcase Oracle DB features on multiple programming languages (node, python, ruby, etc.)
<<<
DinoDate is "a fun site for finding your perfect Dino partner". It is a learning platform to showcase Oracle Database features using examples in multiple programming languages. https://community.oracle.com/docs/DOC-998357
Blaine Carter https://www.youtube.com/channel/UCnyo1hKeJ4GOsppGVRX6Y4A
http://learncodeshare.net/2016/04/08/dinodate-a-demonstration-platform-for-oracle-database/
way back 2009 http://feuerstein28.rssing.com/browser.php?indx=30498827&item=34
<<<
https://sakthismysqlblog.wordpress.com/2019/08/02/mysql-8-internal-architecture/
.
{{{
There are MySQL functions you can use, like this one that resolves the user:
SELECT USER();
This will return something like root@localhost so you get the host and the user.
To get the current database run this statement:
SELECT DATABASE();
}}}
! create new superuser
{{{
create USER 'karlarao'@'%' IDENTIFIED BY 'karlarao';
GRANT ALL PRIVILEGES ON *.* TO 'karlarao'@'%' WITH GRANT OPTION;
SELECT CURRENT_USER();
SELECT DATABASE();
status;
}}}
https://community.oracle.com/tech/apps-infra/categories/database-ideas-ideas
<<<
0) It would be ideal if you could create a separate database for your tests
1) If you just want a quick IO test, then run calibrate_io (see the sketch after this note), followed by iperf2/netperf for the network
http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_resmgr.htm#ARPLS67598
2) If you have a bit of time, then run Orion
You can quickly do a -run dss or -run oltp
But you can also explore the attached oriontoolkit.zip
3) If you have a lot of time, then run SLOB and test the large IOs
For an OLTP test
http://kevinclosson.net/2012/02/06/introducing-slob-the-silly-little-oracle-benchmark/
If it's DW, then you need to test the large IOs. See the attached IOsaturationtoolkit-v2.tar.bz2
http://karlarao.tiddlyspot.com/#[[cpu%20-%20SillyLittleBenchmark%20-%20SLOB]]
Kyle also has a good reference on making use of FIO https://github.com/khailey/fio_scripts/blob/master/README.md
<<<
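A minimal calibrate_io sketch (run as a privileged user; the num_physical_disks/max_latency values are assumptions you should adjust to your storage):
{{{
SET SERVEROUTPUT ON
DECLARE
  l_iops    PLS_INTEGER;
  l_mbps    PLS_INTEGER;
  l_latency PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 12,   -- assumption: number of spindles/devices
    max_latency        => 10,   -- max tolerated latency in ms
    max_iops           => l_iops,
    max_mbps           => l_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops='||l_iops||' max_mbps='||l_mbps||' latency='||l_latency);
END;
/
}}}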
https://wikibon.com/oracle-mysql-database-service-heatwave-vaporizes-aws-redshift-aqua-snowflake-azure-synapse-gcp-bq/
https://www.oracle.com/mysql/heatwave/
! competitors
a similar product is TiDB
https://pingcap.com/blog/how-we-build-an-htap-database-that-simplifies-your-data-platform
https://medium.com/swlh/making-an-htap-database-a-reality-what-i-learned-from-pingcaps-vldb-paper-6d249c930a11
! name of current db
https://dba.stackexchange.com/questions/58312/how-to-get-the-name-of-the-current-database-from-within-postgresql
{{{
SELECT current_database();
}}}
! list current user
https://www.postgresql.org/message-id/52C315B8.2040006@gmail.com
{{{
select current_user;
}}}
.
<<showtoc>>
! 202008
INTRO
A Tour of PostgreSQL https://www.pluralsight.com/courses/tekpub-postgres
PostgreSQL Playbook for Developer DBAs https://www.pluralsight.com/courses/postgresql-playbook
https://www.linkedin.com/learning/postgresql-essential-training/using-the-exercise-files
https://app.pluralsight.com/library/courses/meet-postgresql/table-of-contents
PERFORMANCE
Play by Play: Database Tuning https://www.pluralsight.com/courses/play-by-play-rob-sullivan
PGPLSQL
https://www.pluralsight.com/courses/postgresql-advanced-server-programming
https://www.pluralsight.com/courses/posgresql-functions-playbook
https://www.pluralsight.com/courses/capturing-logic-custom-functions-postgresql
https://www.pluralsight.com/courses/programming-postgresql
JSON
https://www.pluralsight.com/courses/postgresql-document-database
! courses
https://www.pluralsight.com/courses/tekpub-postgres
https://www.udemy.com/beginners-guide-to-postgresql/learn/lecture/82719#overview
https://www.udemy.com/learn-database-design-using-postgresql/learn/lecture/1594438#overview
https://www.udemy.com/learn-partitioning-in-postgresql-from-scratch/learn/lecture/5639644#overview
pl/pgsql
https://app.pluralsight.com/profile/author/pinal-dave
https://app.pluralsight.com/library/courses/postgresql-advanced-server-programming/table-of-contents
https://www.udemy.com/course/learn-partitioning-in-postgresql-from-scratch/
https://www.udemy.com/course/the-complete-python-postgresql-developer-course/
https://www.udemy.com/course/learn-database-design-using-postgresql/
https://www.udemy.com/course/postgresql-permissionsprivilegesadvanced-review/
https://www.udemy.com/course/ordbms-with-postgresql-essential-administration-training/
https://www.udemy.com/course/ultimate-expert-guide-mastering-postgresql-administration/
https://www.udemy.com/course/beginners-guide-to-postgresql/
https://www.udemy.com/course/postgresql-encryptiondata-at-rest-ssl-security/
https://www.udemy.com/course/postgresql-backupreplication-restore/
https://www.youtube.com/results?search_query=postgresql+replication+step+by+step
https://www.youtube.com/results?search_query=postgresql+performance+tuning
! books
https://learning.oreilly.com/library/view/postgresql-up-and/9781491963401/
https://learning.oreilly.com/library/view/postgresql-for-data/9781783288601/ <- for data architects
https://learning.oreilly.com/library/view/postgresql-high-availability/9781787125537/cover.xhtml
https://learning.oreilly.com/library/view/postgresql-replication-/9781783550609/
https://learning.oreilly.com/library/view/postgresql-10-high/9781788474481/
https://learning.oreilly.com/library/view/postgresql-high-performance/9781785284335/
https://learning.oreilly.com/library/view/postgresql-96-high/9781784392970/
https://learning.oreilly.com/library/view/postgresql-90-high/9781849510301/
https://learning.oreilly.com/library/view/postgresql-high-availability/9781787125537/
https://learning.oreilly.com/library/view/postgresql-administration-cookbook/9781785883187/
https://learning.oreilly.com/library/view/mastering-postgresql-96/9781783555352/
https://learning.oreilly.com/library/view/postgresql-11-server/9781789342222/
https://learning.oreilly.com/library/view/postgresql-development-essentials/9781783989003/
https://learning.oreilly.com/library/view/beginning-postgresql-on/9781484234471/
https://learning.oreilly.com/library/view/postgresql-9-administration/9781849519069/
https://learning.oreilly.com/library/view/troubleshooting-postgresql/9781783555314/
https://learning.oreilly.com/library/view/postgresql-server-programming/9781783980581/
https://learning.oreilly.com/library/view/postgresql-developers-guide/9781783989027/
https://learning.oreilly.com/library/view/practical-postgresql/9781449309770/
https://learning.oreilly.com/library/view/professional-website-performance/9781118551721/
https://learning.oreilly.com/search/?query=postgresql%20performance&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&include_collections=true&include_notebooks=true&is_academic_institution_account=false&source=user&sort=relevance&facet_json=true&page=10
.
<<showtoc>>
! download
https://postgresapp.com/
! configure
{{{
sudo mkdir -p /etc/paths.d &&
echo /Applications/Postgres.app/Contents/Versions/latest/bin | sudo tee /etc/paths.d/postgresapp
$ pwd
/Applications/Postgres.app/Contents/Versions/latest/bin
$ ls
clusterdb gdalbuildvrt invgeod pg_dump pg_waldump
createdb gdaldem invproj pg_dumpall pgbench
createuser gdalenhance nad2bin pg_isready pgsql2shp
cs2cs gdalinfo nearblack pg_receivewal postgres
dropdb gdallocationinfo ogr2ogr pg_recvlogical postmaster
dropuser gdalmanage ogrinfo pg_resetwal proj
ecpg gdalserver ogrtindex pg_restore psql
gdal-config gdalsrsinfo oid2name pg_rewind raster2pgsql
gdal_contour gdaltindex pg_archivecleanup pg_standby reindexdb
gdal_grid gdaltransform pg_basebackup pg_test_fsync shp2pgsql
gdal_rasterize gdalwarp pg_config pg_test_timing testepsg
gdal_translate geod pg_controldata pg_upgrade vacuumdb
gdaladdo initdb pg_ctl pg_verify_checksums vacuumlo
# data directory
/Users/kristofferson.a.arao/Library/Application Support/Postgres/var-11
# postgresql.conf
find . | grep postgresql.conf
./Library/Application Support/Postgres/var-11/postgresql.conf
}}}
[img(80%,80%)[https://i.imgur.com/q4haN6t.png]]
it's also possible to create databases on other PostgreSQL versions
[img(80%,80%)[https://i.imgur.com/IA6y1mh.png]]
also the same as [[get tuning advisor hints]]
https://blog.dbi-services.com/oracle-sql-profiles-check-what-they-do-before-accepting-them-blindly/
{{{
set serveroutput on echo off
declare
-- input variables
input_task_owner dba_advisor_tasks.owner%type:='SYS';
input_task_name dba_advisor_tasks.task_name%type:='dbiInSite';
input_show_outline boolean:=false;
-- local variables
task_id dba_advisor_tasks.task_id%type;
outline_data xmltype;
benefit number;
begin
for o in ( select * from dba_advisor_objects where owner=input_task_owner and task_name=input_task_name and type='SQL')
loop
-- get the profile hints (opt_estimate)
dbms_output.put_line('--- PROFILE HINTS from '||o.task_name||' ('||o.object_id||') statement '||o.attr1||':');
dbms_output.put_line('/*+');
for r in (
select hint,benefit from (
select case when attr5 like 'OPT_ESTIMATE%' then cast(attr5 as varchar2(4000)) when attr1 like 'OPT_ESTIMATE%' then attr1 end hint,benefit
from dba_advisor_recommendations t join dba_advisor_rationale r using (task_id,rec_id)
where t.owner=o.owner and t.task_name = o.task_name and r.object_id=o.object_id and t.type='SQL PROFILE'
--and r.message='This attribute adjusts optimizer estimates.'
) order by to_number(regexp_replace(hint,'^.*=([0-9.]+)[^0-9].*$','\1'))
) loop
dbms_output.put_line(' '||r.hint); benefit:=to_number(r.benefit)/100;
end loop;
dbms_output.put_line('*/');
-- get the outline hints
begin
select outline_data into outline_data from (
select case when other_xml is not null then extract(xmltype(other_xml),'/*/outline_data/hint') end outline_data
from dba_advisor_tasks t join dba_sqltune_plans p using (task_id)
where t.owner=o.owner and t.task_name = o.task_name and p.object_id=o.object_id and t.advisor_name='SQL Tuning Advisor' --11gonly-- and execution_type='TUNE SQL'
and p.attribute='Using SQL profile'
) where outline_data is not null;
exception when no_data_found then null;
end;
exit when not input_show_outline;
dbms_output.put_line('--- OUTLINE HINTS from '||o.task_name||' ('||o.object_id||') statement '||o.attr1||':');
dbms_output.put_line('/*+');
for r in (
select (extractvalue(value(d), '/hint')) hint from table(xmlsequence(extract( outline_data , '/'))) d
) loop
dbms_output.put_line(' '||r.hint);
end loop;
dbms_output.put_line('*/');
dbms_output.put_line('--- Benefit: '||to_char(to_number(benefit),'FM99.99')||'%');
end loop;
dbms_output.put_line('');
end;
/
}}}
What is the benefit of using google cloud pub/sub service in a streaming pipeline https://stackoverflow.com/questions/60919717/what-is-the-benefit-of-using-google-cloud-pub-sub-service-in-a-streaming-pipelin/60920217#60920217
<<<
Dataflow will need a source to get the data from. If you are using a streaming pipeline you can use different options as a source, and each of them will have its own characteristics that may fit your scenario.
With Pub/Sub you can easily publish events to a topic, using a client library or directly via the API, and it will guarantee at-least-once delivery of each message.
When you connect it with a Dataflow streaming pipeline, you get a resilient architecture (Pub/Sub will keep redelivering a message until Dataflow acknowledges that it has processed it) and near real-time processing. In addition, Dataflow can use Pub/Sub metrics to scale up or down depending on the number of messages in the backlog.
Finally, the Dataflow runner uses an optimized version of the PubSubIO connector which provides additional features. I suggest checking the documentation that describes some of these features.
<<<
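As a rough sketch of the publish side with the gcloud CLI (topic/subscription names and the payload are assumptions):
{{{
# create a topic, a pull subscription for the pipeline, and publish a test event
gcloud pubsub topics create orders-in
gcloud pubsub subscriptions create orders-in-df --topic=orders-in
gcloud pubsub topics publish orders-in --message='{"order_id": 1, "amount": 9.99}'
}}}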
* https://raw.githubusercontent.com/karlarao/scripts/master/security/sechealthcheck.sql
* esec360
* DBSAT https://blogs.oracle.com/cloudsecurity/announcing-oracle-database-security-assessment-tool-dbsat-22
** https://www.oracle.com/a/ocom/docs/corporate/cyber-resilience-ds.pdf
** https://go.oracle.com/LP=38340
** concepts https://docs.oracle.com/en/database/oracle/security-assessment-tool/2.2/satug/index.html#UGSAT-GUID-C7E917BB-EDAC-4123-900A-D4F2E561BFE9
** https://www.oracle.com/technetwork/database/security/dbsat/dbsat-ds-jan2018-4219315.pdf
** https://www.oracle.com/technetwork/database/security/dbsat/dbsat-public-faq-4219329.pdf
** https://www.oracle.com/technetwork/database/security/dbsat/dbsec-dbsat-public-4219331.pdf
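A hedged sketch of a DBSAT run (connect string and file names are assumptions):
{{{
# collect findings from the target database, then generate the report
./dbsat collect system@orcl dbsat_out
./dbsat report dbsat_out
}}}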
https://status.snowflake.com/
https://community.snowflake.com/s/topic/0TO0Z000000Unu5WAC/releases
https://docs.snowflake.com/en/release-notes/2021-01.html?_ga=2.227732125.1483243957.1613593318-1423095178.1586365212
https://www.snowflake.com/blog/new-snowflake-features-released-in-january-2021/
https://spark.apache.org/news/index.html
https://spark.apache.org/releases/spark-release-3-0-0.html
https://spark.apache.org/releases/spark-release-3-0-2.html
.
https://www.udemy.com/course/oracle-12c-sql-tuning/
https://www.udemy.com/course/sql-performance-tuning-masterclass/
https://www.udemy.com/course/sql-tuning/
<<showtoc>>
! manual way (this is recommended)
{{{
11:11:09 KARLARAO@cdb1> @spm_demo_query.sql
ALL_DISTINCT SKEW
------------ ----------
3 3
P_SQLID
-------------
a5jq5khm9w64n
Enter value for p_sqlid: a5jq5khm9w64n
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3
Plan hash value: 246648590
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 8 (100)| |
|* 1 | TABLE ACCESS FULL| SKEW | 909 | 6363 | 8 (13)| 00:00:01 |
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("SKEW"=3)
18 rows selected.
11:11:43 KARLARAO@cdb1> @spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:
no rows selected
-- create the baseline
DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'a5jq5khm9w64n',plan_hash_value=>'246648590', fixed =>'YES', enabled=>'YES');
END;
/
11:12:52 KARLARAO@cdb1> DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'a5jq5khm9w64n',plan_hash_value=>'246648590', fixed =>'YES', enabled=>'YES');
END;
/11:18:16 2 11:18:16 3 11:18:16 4 11:18:16 5 11:18:16 6
PL/SQL procedure successfully completed.
11:18:20 KARLARAO@cdb1> @spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:
PARSING_ CREATED PLAN_NAME SQL_HANDLE SQL_TEXT OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/23/20 11:18:18   SQL_PLAN_fahs3brrwbxcm950a48a8           SQL_e543035defc5f593      select * from skew where skew=3                  8 YES YES YES YES MANUAL-LOAD
--##############################################################################################################################
created an index but it doesn't get used by the SQL_ID
what needs to be done: create a new SQL_ID with the new plan hash value, then add that PHV to the old SQL_ID's baseline
--##############################################################################################################################
11:20:38 KARLARAO@cdb1> @spm_demo_createindex.sql
Index created.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
11:21:27 KARLARAO@cdb1> @spm_demo_fudgestats.sql
PL/SQL procedure successfully completed.
11:21:36 KARLARAO@cdb1> @spm_demo_query.sql
ALL_DISTINCT SKEW
------------ ----------
3 3
P_SQLID
-------------
a5jq5khm9w64n
Enter value for p_sqlid: a5jq5khm9w64n
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3
Plan hash value: 246648590
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
|* 1 | TABLE ACCESS FULL| SKEW | 1 | 1 | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("SKEW"=3)
Note
-----
- SQL plan baseline SQL_PLAN_fahs3brrwbxcm950a48a8 used for this statement
22 rows selected.
11:21:46 KARLARAO@cdb1> @spm_demo_query.sql
ALL_DISTINCT SKEW
------------ ----------
3 3
P_SQLID
-------------
a5jq5khm9w64n
Enter value for p_sqlid: a5jq5khm9w64n
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3
Plan hash value: 246648590
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
|* 1 | TABLE ACCESS FULL| SKEW | 1 | 1 | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("SKEW"=3)
Note
-----
- SQL plan baseline SQL_PLAN_fahs3brrwbxcm950a48a8 used for this statement
22 rows selected.
11:21:59 KARLARAO@cdb1> @spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:
PARSING_ CREATED PLAN_NAME SQL_HANDLE SQL_TEXT OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/23/20 11:18:18   SQL_PLAN_fahs3brrwbxcm950a48a8           SQL_e543035defc5f593      select * from skew where skew=3                  8 YES YES YES YES MANUAL-LOAD
--## regather stats
exec dbms_stats.gather_index_stats(user,'SKEW_IDX', no_invalidate => false);
exec dbms_stats.gather_table_stats(user,'SKEW', no_invalidate => false);
--## index was picked up
11:28:22 KARLARAO@cdb1> @spm_demo_query2.sql
ALL_DISTINCT SKEW
------------ ----------
3 3
P_SQLID
-------------
693ccxff9a8ku
Enter value for p_sqlid: 693ccxff9a8ku
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 693ccxff9a8ku, child number 0
-------------------------------------
select /* new */ * from skew where skew=3
Plan hash value: 1949605896
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| SKEW | 1 | 7 | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|* 2 | INDEX RANGE SCAN | SKEW_IDX | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SKEW"=3)
19 rows selected.
--## use coe_xfr_sql_profile.sql (coe.sql) to force the index plan onto the OLD SQL_ID
-- edit the generated sql file so the SQL text matches the OLD SQL_ID's text
SQL>set lines 300
SQL>set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);SQL>
ALL_DISTINCT SKEW
------------ ----------
3 3
SQL>
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3
Plan hash value: 1949605896
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| SKEW | 1 | 7 | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|* 2 | INDEX RANGE SCAN | SKEW_IDX | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SKEW"=3)
Note
-----
- SQL profile coe_693ccxff9a8ku_1949605896 used for this statement
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
23 rows selected.
SQL>@spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:
PARSING_ CREATED PLAN_NAME SQL_HANDLE SQL_TEXT OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/23/20 11:18:18   SQL_PLAN_fahs3brrwbxcm950a48a8           SQL_e543035defc5f593      select * from skew where skew=3                  8 YES YES YES YES MANUAL-LOAD
-- add the other plan
-- you can even use a different SQL_ID; what matters is that the SQL text hashes to the same EXACT_MATCHING_SIGNATURE, so the plan is tied to the existing SQL_HANDLE as a new PLAN_NAME
DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'a5jq5khm9w64n',plan_hash_value=>'1949605896', fixed =>'YES', enabled=>'YES');
END;
/
-- SQL HANDLE is the same, there's a new PLAN NAME
SQL>@spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:
PARSING_ CREATED PLAN_NAME SQL_HANDLE SQL_TEXT OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/23/20 11:18:18   SQL_PLAN_fahs3brrwbxcm950a48a8           SQL_e543035defc5f593      select * from skew where skew=3                  8 YES YES YES YES MANUAL-LOAD
KARLARAO 03/23/20 11:41:32   SQL_PLAN_fahs3brrwbxcm08e93fe4           SQL_e543035defc5f593      select * from skew where skew=3                  2 YES YES YES YES MANUAL-LOAD
-- verify
set lines 300
set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);
Connected.
11:42:26 KARLARAO@cdb1>
set lines 300
set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);11:42:27 KARLARAO@cdb1> 11:42:27 KARLARAO@cdb1> 11:42:27 KARLARAO@cdb1>
ALL_DISTINCT SKEW
------------ ----------
3 3
11:42:27 KARLARAO@cdb1>
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3
Plan hash value: 1949605896
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| SKEW | 1 | 7 | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|* 2 | INDEX RANGE SCAN | SKEW_IDX | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SKEW"=3)
Note
-----
- SQL profile coe_693ccxff9a8ku_1949605896 used for this statement
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- SQL plan baseline SQL_PLAN_fahs3brrwbxcm08e93fe4 used for this statement
24 rows selected.
--## drop sql profile and verify baseline
exec dbms_sqltune.drop_sql_profile(name => '&profile_name');
11:46:10 KARLARAO@cdb1> set lines 300
set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);11:46:11 KARLARAO@cdb1> 11:46:11 KARLARAO@cdb1>
ALL_DISTINCT SKEW
------------ ----------
3 3
11:46:11 KARLARAO@cdb1>
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3
Plan hash value: 1949605896
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| SKEW | 1 | 7 | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|* 2 | INDEX RANGE SCAN | SKEW_IDX | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SKEW"=3)
Note
-----
- SQL plan baseline SQL_PLAN_fahs3brrwbxcm08e93fe4 used for this statement
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
23 rows selected.
--## here you'll see two plans
11:46:34 KARLARAO@cdb1> @spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:
PARSING_ CREATED PLAN_NAME SQL_HANDLE SQL_TEXT OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/23/20 11:18:18 SQL_PLAN_fahs3brrwbxcm950a48a8 SQL_e543035defc5f593 select * from skew where skew=3 8 YES YES YES YES MANUAL-LOAD
KARLARAO 03/23/20 11:41:32 SQL_PLAN_fahs3brrwbxcm08e93fe4 SQL_e543035defc5f593 select * from skew where skew=3 2 YES YES YES YES MANUAL-LOAD
11:46:38 KARLARAO@cdb1>
11:46:38 KARLARAO@cdb1> @spm_plans
Enter value for sql_handle: SQL_e543035defc5f593
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------
SQL handle: SQL_e543035defc5f593
SQL text: select * from skew where skew=3
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_fahs3brrwbxcm08e93fe4 Plan id: 149503972
Enabled: YES Fixed: YES Accepted: YES Origin: MANUAL-LOAD
Plan rows: From dictionary
--------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1949605896
--------------------------------------------------------
| Id | Operation | Name |
--------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| SKEW |
| 2 | INDEX RANGE SCAN | SKEW_IDX |
--------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_fahs3brrwbxcm950a48a8 Plan id: 2500479144
Enabled: YES Fixed: YES Accepted: YES Origin: MANUAL-LOAD
Plan rows: From dictionary
--------------------------------------------------------------------------------
Plan hash value: 246648590
----------------------------------
| Id | Operation | Name |
----------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL| SKEW |
----------------------------------
36 rows selected.
--## let's try to disable that baseline
set verify off
declare
myplan pls_integer;
begin
myplan:=DBMS_SPM.ALTER_SQL_PLAN_BASELINE (sql_handle => '&sql_handle',plan_name => '&plan_name',attribute_name => 'ENABLED', attribute_value => '&YES_OR_NO');
end;
/
set verify off
declare
myplan pls_integer;
begin
myplan:=DBMS_SPM.ALTER_SQL_PLAN_BASELINE (sql_handle => '&sql_handle',plan_name => '&plan_name',attribute_name => 'ENABLED', attribute_value => '&YES_OR_NO');
end;
/
Enter value for sql_handle: SQL_e543035defc5f593
Enter value for plan_name: SQL_PLAN_fahs3brrwbxcm08e93fe4
Enter value for yes_or_no: no
PL/SQL procedure successfully completed.
@spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:
PARSING_ CREATED PLAN_NAME SQL_HANDLE SQL_TEXT OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/23/20 11:18:18 SQL_PLAN_fahs3brrwbxcm950a48a8 SQL_e543035defc5f593 select * from skew where skew=3 8 YES YES YES YES MANUAL-LOAD
KARLARAO 03/23/20 11:41:32 SQL_PLAN_fahs3brrwbxcm08e93fe4 SQL_e543035defc5f593 select * from skew where skew=3 2 NO YES YES YES MANUAL-LOAD
--## after disabling the index-plan baseline, the full scan baseline was used
11:50:04 KARLARAO@cdb1> set lines 300
set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);
ALL_DISTINCT SKEW
------------ ----------
3 3
11:50:31 KARLARAO@cdb1>
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID a5jq5khm9w64n, child number 1
-------------------------------------
select * from skew where skew=3
Plan hash value: 246648590
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 8 (100)| |
|* 1 | TABLE ACCESS FULL| SKEW | 1 | 7 | 8 (13)| 00:00:01 |
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("SKEW"=3)
Note
-----
- SQL plan baseline SQL_PLAN_fahs3brrwbxcm950a48a8 used for this statement
22 rows selected.
--## let's disable the remaining baseline
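--## (the disable command itself wasn't captured in this session; it is the same
--## ALTER_SQL_PLAN_BASELINE block shown earlier, pointed at the remaining plan, e.g.)
set verify off
declare
myplan pls_integer;
begin
myplan:=DBMS_SPM.ALTER_SQL_PLAN_BASELINE (sql_handle => 'SQL_e543035defc5f593',plan_name => 'SQL_PLAN_fahs3brrwbxcm950a48a8',attribute_name => 'ENABLED', attribute_value => 'NO');
end;
/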
--## with both baselines disabled, the optimizer is free to pick the index plan on cost alone (note: no baseline note in the output below)
set lines 300
set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);
11:52:49 KARLARAO@cdb1> set lines 300
set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);
ALL_DISTINCT SKEW
------------ ----------
3 3
11:53:17 KARLARAO@cdb1>
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3
Plan hash value: 1949605896
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| SKEW | 1 | 7 | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|* 2 | INDEX RANGE SCAN | SKEW_IDX | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SKEW"=3)
19 rows selected.
-- code to drop the individual baselines
set verify off
DECLARE
plans_dropped PLS_INTEGER;
BEGIN
plans_dropped := DBMS_SPM.drop_sql_plan_baseline (
sql_handle => '&sql_handle',
plan_name => '&plan_name');
DBMS_OUTPUT.put_line(plans_dropped);
END;
/
}}}
! automatic pickup of plans using evolve
{{{
20:02:22 KARLARAO@cdb1> @spm_demo_query.sql
ALL_DISTINCT SKEW
------------ ----------
3 3
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID a5jq5khm9w64n, child number 1
-------------------------------------
select * from skew where skew=3
Plan hash value: 246648590
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 8 (100)| |
|* 1 | TABLE ACCESS FULL| SKEW | 1 | 7 | 8 (13)| 00:00:01 |
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("SKEW"=3)
Note
-----
- SQL plan baseline SQL_PLAN_fahs3brrwbxcm950a48a8 used for this statement
22 rows selected.
20:02:31 KARLARAO@cdb1> @spm_baselines.sql
Enter value for sql_text:
Enter value for exact_matching_signature:
PARSING_ CREATED PLAN_NAME SQL_HANDLE SQL_TEXT OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/22/20 19:58:58 SQL_PLAN_fahs3brrwbxcm950a48a8 SQL_e543035defc5f593 select * from skew where skew=3 2 YES YES NO YES AUTO-CAPTURE
KARLARAO 03/22/20 20:01:58 SQL_PLAN_fahs3brrwbxcm08e93fe4 SQL_e543035defc5f593 select * from skew where skew=3 2 YES NO NO YES AUTO-CAPTURE
20:02:42 KARLARAO@cdb1> @spm_plans
Enter value for sql_handle: SQL_e543035defc5f593
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------
SQL handle: SQL_e543035defc5f593
SQL text: select * from skew where skew=3
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_fahs3brrwbxcm08e93fe4 Plan id: 149503972
Enabled: YES Fixed: NO Accepted: NO Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1949605896
--------------------------------------------------------
| Id | Operation | Name |
--------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| SKEW |
| 2 | INDEX RANGE SCAN | SKEW_IDX |
--------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_fahs3brrwbxcm950a48a8 Plan id: 2500479144
Enabled: YES Fixed: NO Accepted: YES Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------
Plan hash value: 246648590
----------------------------------
| Id | Operation | Name |
----------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL| SKEW |
----------------------------------
36 rows selected.
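-- @spm_evolve.sql isn't reproduced in these notes; a minimal sketch of what such a
-- script typically wraps (the 12c DBMS_SPM evolve task API; the &sql_handle
-- substitution mirrors the prompt below, and the verify/commit prompts would map
-- to task parameters):
var tname varchar2(64)
var ename varchar2(64)
begin
  -- create and execute an evolve task for the non-accepted plans of this SQL handle
  :tname := dbms_spm.create_evolve_task(sql_handle => '&sql_handle');
  :ename := dbms_spm.execute_evolve_task(task_name => :tname);
end;
/
-- print the verification report (the GENERAL INFORMATION / FINDINGS output below)
set long 1000000 pagesize 0
select dbms_spm.report_evolve_task(task_name => :tname, execution_name => :ename) from dual;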
10:48:47 KARLARAO@cdb1> @spm_evolve.sql
Enter value for sql_handle: SQL_e543035defc5f593
Enter value for verify: yes
Enter value for commit: yes
GENERAL INFORMATION SECTION
---------------------------------------------------------------------------------------------
Task Information:
---------------------------------------------
Task Name : TASK_33661
Task Owner : KARLARAO
Execution Name : EXEC_36257
Execution Type : SPM EVOLVE
Scope : COMPREHENSIVE
Status : COMPLETED
Started : 03/23/2020 10:48:59
Finished : 03/23/2020 10:48:59
Last Updated : 03/23/2020 10:48:59
Global Time Limit : 2147483646
Per-Plan Time Limit : UNUSED
Number of Errors : 0
---------------------------------------------------------------------------------------------
SUMMARY SECTION
---------------------------------------------------------------------------------------------
Number of plans processed : 1
Number of findings : 2
Number of recommendations : 1
Number of errors : 0
---------------------------------------------------------------------------------------------
DETAILS SECTION
---------------------------------------------------------------------------------------------
Object ID : 2
Test Plan Name : SQL_PLAN_fahs3brrwbxcm08e93fe4
Base Plan Name : SQL_PLAN_fahs3brrwbxcm950a48a8
SQL Handle : SQL_e543035defc5f593
Parsing Schema : KARLARAO
Test Plan Creator : KARLARAO
SQL Text : select * from skew where skew=3
Execution Statistics:
-----------------------------
Base Plan Test Plan
---------------------------- ----------------------------
Elapsed Time (s): .000044 .000003
CPU Time (s): .000019 0
Buffer Gets: 2 0
Optimizer Cost: 8 2
Disk Reads: 0 0
Direct Writes: 0 0
Rows Processed: 0 0
Executions: 10 10
FINDINGS SECTION
---------------------------------------------------------------------------------------------
Findings (2):
-----------------------------
1. The plan was verified in 0.11000 seconds. It passed the benefit criterion
because its verified performance was 6.67303 times better than that of the
baseline plan.
2. The plan was automatically accepted.
Recommendation:
-----------------------------
Consider accepting the plan.
EXPLAIN PLANS SECTION
---------------------------------------------------------------------------------------------
Baseline Plan
-----------------------------
Plan Id : 42237
Plan Hash Value : 2500479144
---------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Time |
---------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 8 | 00:00:01 |
| * 1 | TABLE ACCESS FULL | SKEW | 1 | 7 | 8 | 00:00:01 |
---------------------------------------------------------------------
Predicate Information (identified by operation id):
------------------------------------------
* 1 - filter("SKEW"=3)
Test Plan
-----------------------------
Plan Id : 42238
Plan Hash Value : 149503972
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 2 | 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED | SKEW | 1 | 7 | 2 | 00:00:01 |
| * 2 | INDEX RANGE SCAN | SKEW_IDX | 1 | | 1 | 00:00:01 |
-------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
------------------------------------------
* 2 - access("SKEW"=3)
---------------------------------------------------------------------------------------------
PL/SQL procedure successfully completed.
10:52:32 KARLARAO@cdb1> @spm_plans
Enter value for sql_handle: SQL_e543035defc5f593
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------
SQL handle: SQL_e543035defc5f593
SQL text: select * from skew where skew=3
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_fahs3brrwbxcm08e93fe4 Plan id: 149503972
Enabled: YES Fixed: NO Accepted: YES Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1949605896
--------------------------------------------------------
| Id | Operation | Name |
--------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| SKEW |
| 2 | INDEX RANGE SCAN | SKEW_IDX |
--------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_fahs3brrwbxcm950a48a8 Plan id: 2500479144
Enabled: YES Fixed: NO Accepted: YES Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------
Plan hash value: 246648590
----------------------------------
| Id | Operation | Name |
----------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL| SKEW |
----------------------------------
36 rows selected.
10:53:09 KARLARAO@cdb1> @spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:
PARSING_ CREATED PLAN_NAME SQL_HANDLE SQL_TEXT OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/22/20 19:58:58 SQL_PLAN_fahs3brrwbxcm950a48a8 SQL_e543035defc5f593 select * from skew where skew=3 2 YES YES NO YES AUTO-CAPTURE
KARLARAO 03/22/20 20:01:58 SQL_PLAN_fahs3brrwbxcm08e93fe4 SQL_e543035defc5f593 select * from skew where skew=3 2 YES YES NO YES AUTO-CAPTURE
10:56:04 KARLARAO@cdb1> set serveroutput off
10:56:14 KARLARAO@cdb1> select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);
ALL_DISTINCT SKEW
------------ ----------
3 3
10:56:16 KARLARAO@cdb1>
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3
Plan hash value: 1949605896
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| SKEW | 1 | 7 | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|* 2 | INDEX RANGE SCAN | SKEW_IDX | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SKEW"=3)
Note
-----
- SQL plan baseline SQL_PLAN_fahs3brrwbxcm08e93fe4 used for this statement
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
23 rows selected.
}}}
<<showtoc>>
! from this guy https://medium.com/@benstr/meteorjs-vs-angularjs-aint-a-thing-3559b74d52cc
<<<
So bro, answer me… what should I learn?
First - Javascript, jQuery & maybe Node.
Second, it depends on your end goals…
… Wanna work for Facebook? Learn React, Flux, PHP, etc.
… Wanna work for Google? Learn Angular, Dart, Polymer, Python, etc.
… Wanna work for a 2 to 4 year old startup? Learn M-E-A-N
… Wanna work for a 5 to 10 year old startup? Learn Angular & Ruby on Rails
… Wanna create a new startup and impress everyone with how fast you add new features? Learn Meteor (& add whatever UI framework you want)
One last note for beginners. When building a web app you are going to deal with a lot of components (servers, databases, frameworks, pre-processors, packages, testing, …). To manage all this we created automated builders like Grunt & Gulp. After all, making web apps is serious and complicated business… or did we just make it complicated so it seems serious??
If you'd rather not bother with all that complicated build stuff, then choose Meteor; it does it all auto-magically.
<<<
! angular-meteor
http://angular-meteor.com/manifest
! bootstrap
http://stackoverflow.com/questions/14546709/what-is-bootstrap
http://getbootstrap.com/getting-started/
a good example is this http://joshcanfindit.com/
! discussion forums
http://www.quora.com/JavaScript-Frameworks/AngularJS-Meteor-Backbone-Express-or-plain-NodeJs-When-to-use-each-one
http://www.quora.com/Should-I-learn-Angular-js-or-Meteor
! ember.js
!! https://www.emberscreencasts.com/
!! http://www.letscodejavascript.com/
!! vs backbone.js
http://smus.com/backbone-and-ember/
backbone-ember-back-and-forth-transcript.txt https://gist.github.com/jashkenas/1732351
!! vs rails
https://www.airpair.com/ember.js/posts/top-mistakes-ember-rails
http://aokolish.me/blog/2014/11/16/8-reasons-i-won't-be-choosing-ember.js-for-my-next-app/
!! transactions, CRUD
http://bigbinary.com/videos/learn-ember-js/crud-application-in-ember-js <-- video!
http://blog.trackets.com/2013/02/02/using-transactions-in-ember-data.html
http://blog.trackets.com/2013/01/27/ember-data-in-depth.html
http://embersherpa.com/articles/crud-example-app-without-ember-data/
http://discuss.emberjs.com/t/beginner-guidance-on-building-crud/4990
http://stackoverflow.com/questions/18691644/crud-operations-using-ember-model
!! sample real time web app
http://www.codeproject.com/Articles/511031/A-sample-real-time-web-application-using-Ember-js
!! HTMLBars
http://www.lynda.com/Emberjs-tutorials/About-HTMLBars/178116/191855-4.html
!! ember and d3.js
https://corner.squareup.com/2012/04/building-analytics.html
! backbone.js
http://stackoverflow.com/questions/16284724/what-does-var-app-app-do
<<<
If app is already defined, then it does nothing. If app is not defined, then it's equivalent to var app = {};
<<<
https://www.quora.com/What-are-the-pros-of-using-Handlebars-template-over-Underscore-js
https://engineering.linkedin.com/frontend/client-side-templating-throwdown-mustache-handlebars-dustjs-and-more
http://www.pluralsight.com/courses/choosing-javascript-framework
http://www.pluralsight.com/search/?searchTerm=backbone.js
I didn't really like backbone at all. It was a pain. https://news.ycombinator.com/item?id=4427556
!! backbone.js and d3.js
Sam Selikoff - Using D3 with Backbone, Angular and Ember https://www.youtube.com/watch?v=ca3pQWc2-Xs <-- good stuff
https://github.com/samselikoff/talks/tree/master/4-apr2014-using-d3-backbone-angular-ember <-- good stuff
Backbone and D3 in a large, complex app https://groups.google.com/forum/#!topic/d3-js/3gmyzPOXNBM
D3 with Backbone / D3 with Angular / D3 with Ember http://stackoverflow.com/questions/17050921/d3-with-backbone-d3-with-angular-d3-with-ember
!! react.js as a view - Integrating React With Backbone
http://www.slideshare.net/RyanRoemer/backbonejs-with-react-views-server-rendering-virtual-dom-and-more <-- good stuff
http://timecounts.github.io/backbone-react-redux/#61
http://www.thomasboyt.com/2013/12/17/using-reactjs-as-a-backbone-view.html
https://blog.engineyard.com/2015/integrating-react-with-backbone
http://joelburget.com/backbone-to-react/
https://blog.mayflower.de/3937-Backbone-React.html
http://clayallsopp.com/posts/from-backbone-to-react/
http://leoasis.github.io/posts/2014/03/22/from_backbone_views_to_react/
! react.js
http://www.pluralsight.com/search/?searchTerm=react.js
!! react as a view in ember
http://discuss.emberjs.com/t/can-reactjs-be-used-as-a-view-within-emberjs/3470
!! react and d3.js
http://nicolashery.com/integrating-d3js-visualizations-in-a-react-app/
https://www.codementor.io/reactjs/tutorial/3-steps-scalable-data-visualization-react-js-d3-js
http://10consulting.com/2014/02/19/d3-plus-reactjs-for-charting/
!! react vs ember
Choosing Ember over React in 2016 https://blog.instant2fa.com/choosing-ember-over-react-in-2016-41a2e7fd341#.1712iqvw8
https://grantnorwood.com/why-i-chose-ember-over-react/
Check this React vs. Ember presentation by Alex Matchneer, a lot of good points on uni-directional flow. http://bit.ly/2fk0Ybe
http://www.creativebloq.com/web-design/react-goes-head-head-emberjs-31514361
http://www.slideshare.net/mraible/comparing-hot-javascript-frameworks-angularjs-emberjs-and-reactjs-springone-2gx-2015
! RoR
https://www.quora.com/Which-is-superior-between-Node-js-vs-RoR-vs-Go
http://www.hostingadvice.com/blog/nodejs-vs-golang/
https://www.codementor.io/learn-programming/ruby-on-rails-vs-node-js-backend-language-for-beginners
https://hackhands.com/use-ruby-rails-node-js-next-projectstartup/
https://www.quora.com/Which-server-side-programming-language-is-the-best-for-a-starting-programmer-Perl-PHP-Python-Ruby-JavaScript-Node-Scala-Java-Go-ASP-NET-or-ColdFusion
https://www.quora.com/Which-is-the-best-option-for-a-Ruby-on-Rails-developer-AngularJS-or-Ember-js
! references
https://en.wikipedia.org/wiki/Comparison_of_JavaScript_frameworks
http://www.cyberciti.biz/tips/what-is-devshm-and-its-practical-usage.html
http://superuser.com/questions/45342/when-should-i-use-dev-shm-and-when-should-i-use-tmp
http://download.oracle.com/docs/cd/B28359_01/server.111/b32009/appi_vlm.htm
Tanel mentioned he used it as persistent storage while doing a migration on one database: it needed very fast writes, so he put the redo logs on /dev/shm. This is dangerous because if the server crashes you have to do a restore/recover; data residing in /dev/shm is not persistent across an OS reboot.
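For reference, a quick way to check and resize the tmpfs mount (the 8g size here is just an example):
{{{
# check current size and usage of /dev/shm
df -h /dev/shm
# resize it online (reverts on reboot unless also set in /etc/fstab)
mount -o remount,size=8g /dev/shm
# persistent version: add a size option to the tmpfs line in /etc/fstab
# tmpfs  /dev/shm  tmpfs  defaults,size=8g  0 0
}}}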
<<showtoc>>
! install software on node2
{{{
[root@node2 scripts]# ./install_krb5.sh
}}}
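The install_krb5.sh script isn't reproduced here; a rough sketch of the usual MIT KDC setup steps such a script performs (the realm matches the EXAMPLE.COM realm used below, everything else is an assumption):
{{{
# install the KDC server and client packages
yum -y install krb5-server krb5-libs krb5-workstation
# set the realm and KDC host in /etc/krb5.conf and /var/kerberos/krb5kdc/kdc.conf
# (EXAMPLE.COM, KDC on node2), then create the KDC database with a stash file
kdb5_util create -s
# grant the admin principal full rights in /var/kerberos/krb5kdc/kadm5.acl:
#   */admin@EXAMPLE.COM *
# create the admin principal used in the login test below
kadmin.local -q "addprinc admin/admin"
# start and enable the KDC and admin services
systemctl enable --now krb5kdc kadmin
}}}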
! test login
{{{
[root@node2 scripts]# kinit admin/admin
Password for admin/admin@EXAMPLE.COM:
[root@node2 scripts]#
[root@node2 scripts]# klist
Ticket cache: KEYRING:persistent:0:0
Default principal: admin/admin@EXAMPLE.COM
Valid starting Expires Service principal
01/08/2019 17:47:41 01/09/2019 17:47:41 krbtgt/EXAMPLE.COM@EXAMPLE.COM
[root@node2 scripts]#
}}}
! configure kerberos in ambari
[img(90%,90%)[https://i.imgur.com/GTkXrtL.png]]
[img(90%,90%)[https://i.imgur.com/u4opvr0.png]]
[img(90%,90%)[https://i.imgur.com/VbQmRUQ.png]]
[img(90%,90%)[https://i.imgur.com/oHHdXC1.png]]
[img(90%,90%)[https://i.imgur.com/MlbhMgw.png]]
[img(90%,90%)[https://i.imgur.com/7i7JRTT.png]]
[img(90%,90%)[https://i.imgur.com/o7RhsLh.png]]
[img(90%,90%)[https://i.imgur.com/yx6ORX2.png]]
* restart ambari server
[img(90%,90%)[https://i.imgur.com/dBtX7my.png]]
* manually restart other services
[img(90%,90%)[https://i.imgur.com/GiWRr6d.png]]
! test kerberos from hdfs
* it errors because only the admin principal has been configured with credentials so far
{{{
[vagrant@node1 data]$ hadoop fs -ls
19/01/08 18:15:59 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "node1.example.com/192.168.199.2"; destination host is: "node1.example.com":8020;
}}}
! from KDC host, add principals for other users
* kerberos can be linked to an existing Active Directory through a TRUST (which needs to be configured separately), so AD users are automatically recognized
* here we are adding a principal for the vagrant user
{{{
-- summary commands
sudo su -
klist
kinit admin/admin
kadmin.local -q "addprinc vagrant"
-- detail
[root@node2 scripts]# klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
[root@node2 scripts]#
[root@node2 scripts]#
[root@node2 scripts]# kinit admin/admin
Password for admin/admin@EXAMPLE.COM:
[root@node2 scripts]#
[root@node2 scripts]#
[root@node2 scripts]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin/admin@EXAMPLE.COM
Valid starting Expires Service principal
01/08/2019 18:34:19 01/09/2019 18:34:19 krbtgt/EXAMPLE.COM@EXAMPLE.COM
[root@node2 scripts]#
[root@node2 scripts]#
[root@node2 scripts]# kadmin.local -q "addprinc vagrant"
Authenticating as principal admin/admin@EXAMPLE.COM with password.
WARNING: no policy specified for vagrant@EXAMPLE.COM; defaulting to no policy
Enter password for principal "vagrant@EXAMPLE.COM": <USE THE VAGRANT USER PASSWORD>
Re-enter password for principal "vagrant@EXAMPLE.COM": <USE THE VAGRANT USER PASSWORD>
Principal "vagrant@EXAMPLE.COM" created.
}}}
! test new user principal
{{{
[vagrant@node1 ~]$ kinit
Password for vagrant@EXAMPLE.COM:
[vagrant@node1 ~]$
[vagrant@node1 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: vagrant@EXAMPLE.COM
Valid starting Expires Service principal
01/08/2019 18:35:52 01/09/2019 18:35:52 krbtgt/EXAMPLE.COM@EXAMPLE.COM
[vagrant@node1 ~]$
[vagrant@node1 ~]$ hadoop fs -ls
Found 3 items
-rw-r--r-- 3 vagrant vagrant 16257213 2019-01-08 06:27 salaries.csv
-rw-r--r-- 3 vagrant vagrant 16257213 2019-01-08 06:31 salaries2.csv
drwxr-xr-x - vagrant vagrant 0 2019-01-08 06:58 test
}}}
! list principals
{{{
kadmin.local -q "list_principals"
[root@node2 scripts]# kadmin.local -q "list_principals"
Authenticating as principal admin/admin@EXAMPLE.COM with password.
HTTP/node1.example.com@EXAMPLE.COM
HTTP/node2.example.com@EXAMPLE.COM
HTTP/node3.example.com@EXAMPLE.COM
K/M@EXAMPLE.COM
activity_analyzer/node1.example.com@EXAMPLE.COM
activity_explorer/node1.example.com@EXAMPLE.COM
admin/admin@EXAMPLE.COM
ambari-qa-hadoop@EXAMPLE.COM
ambari-server-hadoop@EXAMPLE.COM
amshbase/node1.example.com@EXAMPLE.COM
amszk/node1.example.com@EXAMPLE.COM
dn/node1.example.com@EXAMPLE.COM
dn/node2.example.com@EXAMPLE.COM
dn/node3.example.com@EXAMPLE.COM
hdfs-hadoop@EXAMPLE.COM
hive/node1.example.com@EXAMPLE.COM
hive/node2.example.com@EXAMPLE.COM
hive/node3.example.com@EXAMPLE.COM
jhs/node2.example.com@EXAMPLE.COM
kadmin/admin@EXAMPLE.COM
kadmin/changepw@EXAMPLE.COM
kadmin/node2.example.com@EXAMPLE.COM
keyadmin@EXAMPLE.COM
kiprop/node2.example.com@EXAMPLE.COM
krbtgt/EXAMPLE.COM@EXAMPLE.COM
nm/node1.example.com@EXAMPLE.COM
nm/node2.example.com@EXAMPLE.COM
nm/node3.example.com@EXAMPLE.COM
nn/node1.example.com@EXAMPLE.COM
nn/node2.example.com@EXAMPLE.COM
nn@EXAMPLE.COM
ranger@EXAMPLE.COM
rangeradmin/node2.example.com@EXAMPLE.COM
rangerlookup/node2.example.com@EXAMPLE.COM
rangertagsync/node1.example.com@EXAMPLE.COM
rangertagsync/node2.example.com@EXAMPLE.COM
rangerusersync/node2.example.com@EXAMPLE.COM
rm/node2.example.com@EXAMPLE.COM
vagrant@EXAMPLE.COM
yarn/node2.example.com@EXAMPLE.COM
zookeeper/node1.example.com@EXAMPLE.COM
zookeeper/node2.example.com@EXAMPLE.COM
zookeeper/node3.example.com@EXAMPLE.COM
}}}
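Related: a principal's keys can also be exported to a keytab for password-less kinit (the path here is hypothetical; -norandkey, only available in kadmin.local, keeps the existing password valid):
{{{
# export the vagrant principal's keys to a keytab
kadmin.local -q "ktadd -norandkey -k /etc/security/keytabs/vagrant.keytab vagrant"
# inspect the keytab and use it to authenticate without a password
klist -kt /etc/security/keytabs/vagrant.keytab
kinit -kt /etc/security/keytabs/vagrant.keytab vagrant
}}}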
! other references
https://community.pivotal.io/s/article/Kerberos-Cheat-Sheet
.
<<showtoc>>
! SPNEGO
!! search for "auth" in hdfs advanced config
* make sure all settings are configured as follows
[img(90%,90%)[ https://i.imgur.com/wAtMrPd.png ]]
!! test using curl
{{{
[root@node2 scripts]# curl -u : --negotiate http://node1.example.com:50070/webhdfs/v1/?op=LISTSTATUS
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16392,"group":"hadoop","length":0,"modificationTime":1546970946566,"owner":"yarn","pathSuffix":"app-logs","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16418,"group":"hdfs","length":0,"modificationTime":1546739119560,"owner":"hdfs","pathSuffix":"apps","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16389,"group":"hadoop","length":0,"modificationTime":1546738288975,"owner":"yarn","pathSuffix":"ats","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16399,"group":"hdfs","length":0,"modificationTime":1546738301288,"owner":"hdfs","pathSuffix":"hdp","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16395,"group":"hdfs","length":0,"modificationTime":1546738294255,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16397,"group":"hadoop","length":0,"modificationTime":1546738323395,"owner":"mapred","pathSuffix":"mr-history","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":6,"fileId":16386,"group":"hdfs","length":0,"modificationTime":1546971003969,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":5,"fileId":16387,"group":"hdfs","length":0,"modificationTime":1546928769061,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}
[root@node2 scripts]#
[root@node2 scripts]#
[root@node2 scripts]# kdestroy
[root@node2 scripts]#
[root@node2 scripts]# curl -u : --negotiate http://node1.example.com:50070/webhdfs/v1/?op=LISTSTATUS
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /webhdfs/v1/. Reason:
<pre> Authentication required</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
</body>
</html>
[root@node2 scripts]# curl -u : --negotiate http://node1.example.com:50070/webhdfs/v1/?op=LISTSTATUS -vvvvv
* About to connect() to node1.example.com port 50070 (#0)
* Trying 192.168.199.2...
* Connected to node1.example.com (192.168.199.2) port 50070 (#0)
> GET /webhdfs/v1/?op=LISTSTATUS HTTP/1.1
> User-Agent: curl/7.29.0
> Host: node1.example.com:50070
> Accept: */*
>
< HTTP/1.1 401 Authentication required
< Cache-Control: must-revalidate,no-cache,no-store
< Date: Tue, 08 Jan 2019 19:49:42 GMT
< Pragma: no-cache
< Date: Tue, 08 Jan 2019 19:49:42 GMT
< Pragma: no-cache
< Content-Type: text/html; charset=iso-8859-1
< X-FRAME-OPTIONS: SAMEORIGIN
* gss_init_sec_context() failed: : No Kerberos credentials available (default cache: /tmp/krb5cc_0)
< WWW-Authenticate: Negotiate
< Set-Cookie: hadoop.auth=; Path=/; HttpOnly
< Content-Length: 1404
< Server: Jetty(6.1.26.hwx)
<
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /webhdfs/v1/. Reason:
<pre> Authentication required</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
</body>
</html>
* Connection #0 to host node1.example.com left intact
[root@node2 scripts]# curl -u : --negotiate http://node1.example.com:50070/webhdfs/v1/?op=LISTSTATUS -vvvvv
* About to connect() to node1.example.com port 50070 (#0)
* Trying 192.168.199.2...
* Connected to node1.example.com (192.168.199.2) port 50070 (#0)
> GET /webhdfs/v1/?op=LISTSTATUS HTTP/1.1
> User-Agent: curl/7.29.0
> Host: node1.example.com:50070
> Accept: */*
>
< HTTP/1.1 401 Authentication required
< Cache-Control: must-revalidate,no-cache,no-store
< Date: Tue, 08 Jan 2019 19:50:46 GMT
< Pragma: no-cache
< Date: Tue, 08 Jan 2019 19:50:46 GMT
< Pragma: no-cache
< Content-Type: text/html; charset=iso-8859-1
< X-FRAME-OPTIONS: SAMEORIGIN
< WWW-Authenticate: Negotiate
< Set-Cookie: hadoop.auth=; Path=/; HttpOnly
< Content-Length: 1404
< Server: Jetty(6.1.26.hwx)
<
* Ignoring the response-body
* Connection #0 to host node1.example.com left intact
* Issue another request to this URL: 'http://node1.example.com:50070/webhdfs/v1/?op=LISTSTATUS'
* Found bundle for host node1.example.com: 0x1762e90
* Re-using existing connection! (#0) with host node1.example.com
* Connected to node1.example.com (192.168.199.2) port 50070 (#0)
* Server auth using GSS-Negotiate with user ''
> GET /webhdfs/v1/?op=LISTSTATUS HTTP/1.1
> Authorization: Negotiate YIICZQYJKoZIhvcSAQICAQBuggJUMIICUKADAgEFoQMCAQ6iBwMFACAAAACjggFhYYIBXTCCAVmgAwIBBaENGwtFWEFNUExFLkNPTaIkMCKgAwIBA6EbMBkbBEhUVFAbEW5vZGUxLmV4YW1wbGUuY29to4IBGzCCARegAwIBEqEDAgEBooIBCQSCAQXLZTgGMbj4xkzKM2CMLYH5zCAciK7lFaCnUvhul79oo/Id5YP2e8lW96h69TZHjp227eHfO1oKgyX1NJqvzDp6QJ5cOGo6QXKNfmx3dEkKPJgsg09w6FcvDaWflhclfH/pN4OKCBoo23IkcR8uv+FmAwKlhT0eA5a0yV9zeoGstRSAPrBA+t63xdBf8hZB9RtAI6ISLDI329OZkblKnTbwBesh7naY8hJtNNqPiLS2n5dd+KsG+cSnSD1EwOytBsnsVN0gRVg6718N95M70Da7DV64bPhaEfWimIfjOX+zaNOJCbpiIzwe34Oeo8MAimZvhahdIWFM/wUFy19FeTIZBtGE/lykgdUwgdKgAwIBEqKBygSBx5uFXt9DLbTQn8FDDz007/VG0EDw7J4o+erYUSejz6ylv4ueEFXo83xGK0I5Nag4DD3RtHXB44jdLmiRmW+Vx0zAck+M/0MqNg3X5xD4p0RKFicVklJw17FLMprpLHeWg1jcsKpCyHdNt8KQeB4modt2DY8okBCyJSMS3snCPt2mDLM0Erfd/MiHYOW2038mUSIPxv8vuEJYUv9zchJ6XAjMWCGA7UqvS5mU49jAsWyXhfTi4sIFWbNm4ftmS4o7d6eCPIvuqcQ=
> User-Agent: curl/7.29.0
> Host: node1.example.com:50070
> Accept: */*
>
< HTTP/1.1 200 OK
< Cache-Control: no-cache
< Expires: Tue, 08 Jan 2019 19:50:46 GMT
< Date: Tue, 08 Jan 2019 19:50:46 GMT
< Pragma: no-cache
< Expires: Tue, 08 Jan 2019 19:50:46 GMT
< Date: Tue, 08 Jan 2019 19:50:46 GMT
< Pragma: no-cache
< Content-Type: application/json
< X-FRAME-OPTIONS: SAMEORIGIN
< WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARChwZbpr515XQ6+c68a4ZMAPjEGIHhnQJjRn8yt4jQ9qe3DHOozQIWOkQyj6nexCoqhKPWKbc4YG0cMZ/ZcCOnA4g5
< Set-Cookie: hadoop.auth="u=admin&p=admin/admin@EXAMPLE.COM&t=kerberos&e=1547013046868&s=nx4sCU8jegk52hkosxLZaWgouLk="; Path=/; HttpOnly
< Transfer-Encoding: chunked
< Server: Jetty(6.1.26.hwx)
<
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16392,"group":"hadoop","length":0,"modificationTime":1546970946566,"owner":"yarn","pathSuffix":"app-logs","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16418,"group":"hdfs","length":0,"modificationTime":1546739119560,"owner":"hdfs","pathSuffix":"apps","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16389,"group":"hadoop","length":0,"modificationTime":1546738288975,"owner":"yarn","pathSuffix":"ats","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16399,"group":"hdfs","length":0,"modificationTime":1546738301288,"owner":"hdfs","pathSuffix":"hdp","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16395,"group":"hdfs","length":0,"modificationTime":1546738294255,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16397,"group":"hadoop","length":0,"modificationTime":1546738323395,"owner":"mapred","pathSuffix":"mr-history","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":6,"fileId":16386,"group":"hdfs","length":0,"modificationTime":1546971003969,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":5,"fileId":16387,"group":"hdfs","length":0,"modificationTime":1546928769061,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}
* Closing connection 0
}}}
! Knox
* another way to secure authentication is to use Knox as a gateway, see [[apache sentry vs ranger vs knox]]
[img(90%,90%)[https://i.imgur.com/5TdfGUh.png]]
[img(90%,90%)[https://i.imgur.com/BPUJVlB.jpg]]
<<showtoc>>
! architecture
[img(90%,90%)[https://i.imgur.com/W50LYmu.png]]
[img(90%,90%)[https://i.imgur.com/vmGmoYO.png]]
! installation
* On node1 ambari server, configure the connector
{{{
yum -y install mysql-connector-java
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
}}}
* On ambari, add service
[img(90%,90%)[https://i.imgur.com/VAyNRPj.png]]
* On node2, configure the mysql root account with /usr/bin/mysql_secure_installation
* Keep in mind the "Disallow root login remotely?" prompt (the answer should be n, so root can still connect remotely during the Ranger setup)
{{{
[root@node2 scripts]# /usr/bin/mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] n
... skipping.
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] n
... skipping.
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] n
... skipping.
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
}}}
* Configure ranger user and create ranger database
{{{
mysql -u root -proot
CREATE USER 'ranger'@'localhost' IDENTIFIED BY 'ranger';
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'localhost';
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'node2.example.com';
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'%';
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'localhost' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'node2.example.com' IDENTIFIED BY 'ranger' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'localhost' IDENTIFIED BY 'ranger' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'%' IDENTIFIED BY 'ranger' WITH GRANT OPTION;
FLUSH PRIVILEGES;
system mysql -u ranger -pranger
SELECT CURRENT_USER();
create database ranger;
}}}
* Check users and passwords
{{{
[root@node2 admin]# mysql -u root -proot
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 976
Server version: 5.5.60-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;
+---------------+-------------------+-------------------------------------------+
| User | Host | Password |
+---------------+-------------------+-------------------------------------------+
| root | localhost | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
| root | node2.example.com | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
| root | 127.0.0.1 | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
| root | ::1 | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
| rangerinstall | % | *BA6F33B6015522D04D1B2CD0774983FEE64526DD |
| hive | % | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| rangeradmin | % | *93E5B68E67576EF3867192792A3FA17A35376774 |
| rangeradmin | localhost | *93E5B68E67576EF3867192792A3FA17A35376774 |
| rangeradmin | node2.example.com | *93E5B68E67576EF3867192792A3FA17A35376774 |
| ranger | % | *84BB87F6BF7F61703B24CE1C9AA9C0E3F2286900 |
| ranger | localhost | *84BB87F6BF7F61703B24CE1C9AA9C0E3F2286900 |
| ranger | node2.example.com | *84BB87F6BF7F61703B24CE1C9AA9C0E3F2286900 |
+---------------+-------------------+-------------------------------------------+
12 rows in set (0.06 sec)
}}}
* I don't think this is necessary, but I also added a principal for the ranger user
{{{
[vagrant@node2 ~]$ sudo su -
Last login: Tue Jan 8 17:45:59 UTC 2019 on pts/0
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]# kinit admin/admin
Password for admin/admin@EXAMPLE.COM:
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]# kadmin.local -q "addprinc ranger"
Authenticating as principal admin/admin@EXAMPLE.COM with password.
WARNING: no policy specified for ranger@EXAMPLE.COM; defaulting to no policy
Enter password for principal "ranger@EXAMPLE.COM":
Re-enter password for principal "ranger@EXAMPLE.COM":
Principal "ranger@EXAMPLE.COM" created.
}}}
[img(50%,50%)[https://i.imgur.com/KOWWdq0.png]]
* All Ranger components, KDC host, and mysql are on node2. The ambari-server is on node1
[img(90%,90%)[https://i.imgur.com/K4q5JGG.png]]
[img(90%,90%)[https://i.imgur.com/ZS98uJV.png]]
[img(90%,90%)[https://i.imgur.com/1vaHE8X.png]]
[img(90%,90%)[https://i.imgur.com/Ft6otqX.png]]
* Plugins will be installed after Ranger installation
[img(90%,90%)[https://i.imgur.com/qYZBkZX.png]]
* Uncheck previously configured properties, Click OK
[img(90%,90%)[https://i.imgur.com/9WX7NtI.png]]
[img(90%,90%)[https://i.imgur.com/94XHb8O.png]]
[img(90%,90%)[https://i.imgur.com/gvlLrPg.png]]
[img(90%,90%)[https://i.imgur.com/TEL3NTo.png]]
[img(90%,90%)[https://i.imgur.com/anjL7Pb.png]]
[img(90%,90%)[https://i.imgur.com/eW8REAC.png]]
* Go to http://192.168.199.3:6080 , then login as admin/admin
[img(50%,50%)[https://i.imgur.com/5dp5nUb.png]]
[img(90%,90%)[https://i.imgur.com/ycJQYPL.png]]
! install plugins
* Go to Ranger - Configs - Ranger Plugin, Select HDFS and Hive plugins and click Save
[img(90%,90%)[https://i.imgur.com/8n22vUY.png]]
* Click OK
[img(90%,90%)[https://i.imgur.com/DoR5lOF.png]]
[img(90%,90%)[https://i.imgur.com/spaodiJ.png]]
* Stop and Start all services
[img(90%,90%)[https://i.imgur.com/ccTY5GM.png]]
[img(40%,40%)[https://i.imgur.com/cY2AKHn.png]]
[img(90%,90%)[https://i.imgur.com/WINcwWk.png]]
! errors
!! ranger admin process is failing with connection failed
{{{
Connection failed to http://node2.example.com:6080/login.jsp
(Execution of 'curl --location-trusted -k --negotiate -u : -b /var/lib/ambari-agent/tmp/cookies/70a1480a-b71c-4152-815d-8a171bd0b85e
-c /var/lib/ambari-agent/tmp/cookies/70a1480a-b71c-4152-815d-8a171bd0b85e -w '%{http_code}' http://node2.example.com:6080/login.jsp
--connect-timeout 5 --max-time 7 -o /dev/null 1>/tmp/tmp5YAC3n 2>/tmp/tmpSeQnlb' returned 28.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:06 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:07 --:--:-- 0
curl: (28) Operation timed out after 7022 milliseconds with 0 out of -1 bytes received
000)
}}}
!!! troubleshooting and fix
* check the directory /var/log/ranger/admin
* read the catalina.out file
{{{
[root@node2 admin]# cat catalina.out
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000eab00000, 357564416, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 357564416 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/hdp/2.6.5.1050-37/ranger-admin/ews/hs_err_pid1286.log
}}}
* add 4GB swapfile
{{{
dd if=/dev/zero of=/opt/swapfile bs=1024k count=4096
chmod 0600 /opt/swapfile
mkswap /opt/swapfile
# persist the swapfile across reboots via /etc/fstab
echo '/opt/swapfile swap swap defaults 0 0' >> /etc/fstab
swapon -a
[root@node2 scripts]# free -h
total used free shared buff/cache available
Mem: 3.7G 3.3G 222M 9.9M 215M 190M
Swap: 5.0G 2.0G 3.0G
}}}
! other references
Hadoop Certification - HDPCA - Install and Configure Ranger https://www.youtube.com/watch?v=2zeVvnw_bZs&t=1s
.
<<showtoc>>
[img(40%,40%)[ https://i.imgur.com/zuT36IY.png ]]
! background info - Ranger KMS (Key Management System)
[img(50%,50%)[https://i.imgur.com/khqEFuj.png]]
[img(50%,50%)[https://i.imgur.com/EQJ97Vc.png]]
[img(50%,50%)[https://i.imgur.com/g5fUv8w.png]]
[img(50%,50%)[https://i.imgur.com/xfCR6dm.png]]
! install and configure
01
[img(90%,90%)[https://i.imgur.com/gKjnTZf.png]]
02
[img(90%,90%)[https://i.imgur.com/vGsbzJH.png]]
03
[img(90%,90%)[https://i.imgur.com/tvts1Up.png]]
04
[img(90%,90%)[https://i.imgur.com/HLf1SZk.png]]
05
* On the Advanced config -> "custom kms-site" add the keyadmin proxy user settings (last three lines) for kerberos authentication
[img(90%,90%)[https://i.imgur.com/tfCKduW.png]]
06
[img(90%,90%)[https://i.imgur.com/qa1a6JQ.png]]
07
[img(90%,90%)[https://i.imgur.com/nC83kAh.png]]
08
[img(90%,90%)[https://i.imgur.com/77MW6NV.png]]
09
[img(90%,90%)[https://i.imgur.com/j15oGwA.png]]
10
[img(90%,90%)[https://i.imgur.com/dZUEb8p.png]]
11
[img(90%,90%)[https://i.imgur.com/kgo36fo.png]]
12
[img(90%,90%)[https://i.imgur.com/IGfqToA.png]]
13
[img(90%,90%)[https://i.imgur.com/oFBvaKY.png]]
14
* Edit the hadoop_kms, add EXAMPLE.COM
[img(90%,90%)[https://i.imgur.com/oKyKOw9.png]]
15
[img(90%,90%)[https://i.imgur.com/iUXg2v6.png]]
16
[img(90%,90%)[https://i.imgur.com/FoJSoOT.png]]
17
* Go to key manager, add a new key mykey01 to be used to create an encryption zone
[img(90%,90%)[https://i.imgur.com/Z5JbTjh.png]]
18
[img(90%,90%)[https://i.imgur.com/DiGvYeD.png]]
19
[img(90%,90%)[https://i.imgur.com/8Kl7KVf.png]]
20
[img(90%,90%)[https://i.imgur.com/jISoHhM.png]]
21
* Go back to Access Manager, create a new policy and use the created key mykey01 and grant users to it
[img(90%,90%)[https://i.imgur.com/JBnnxVs.png]]
22
[img(90%,90%)[https://i.imgur.com/0iBlArv.png]]
23
[img(90%,90%)[https://i.imgur.com/Vq5bAWV.png]]
24
[img(90%,90%)[https://i.imgur.com/6znPj2p.png]]
! create hdfs encryption zone (/encrypted folder only accessible by user vagrant)
!! Keytabs are stored in /etc/security/keytabs/
* these are binary files that can be used for kerberos authentication
{{{
ls /etc/security/keytabs/
dn.service.keytab jhs.service.keytab rangeradmin.service.keytab rangertagsync.service.keytab smokeuser.headless.keytab zk.service.keytab
hdfs.headless.keytab nm.service.keytab rangerkms.service.keytab rangerusersync.service.keytab spnego.service.keytab
hive.service.keytab nn.service.keytab rangerlookup.service.keytab rm.service.keytab yarn.service.keytab
less /etc/security/keytabs/hdfs.headless.keytab
"/etc/security/keytabs/hdfs.headless.keytab" may be a binary file. See it anyway?
}}}
!! To list the principals in the keytab
{{{
[root@node2 ~]# klist -kt /etc/security/keytabs/hdfs.headless.keytab
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
1 01/08/2019 18:00:32 hdfs-hadoop@EXAMPLE.COM
1 01/08/2019 18:00:32 hdfs-hadoop@EXAMPLE.COM
1 01/08/2019 18:00:32 hdfs-hadoop@EXAMPLE.COM
1 01/08/2019 18:00:32 hdfs-hadoop@EXAMPLE.COM
1 01/08/2019 18:00:32 hdfs-hadoop@EXAMPLE.COM
}}}
!! Switching principal, from admin/admin to hdfs-hadoop
{{{
[root@node2 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin/admin@EXAMPLE.COM
Valid starting Expires Service principal
01/09/2019 01:53:15 01/10/2019 01:53:15 krbtgt/EXAMPLE.COM@EXAMPLE.COM
[root@node2 ~]#
[root@node2 ~]# kdestroy
[root@node2 ~]# klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
[root@node2 ~]# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hadoop
[root@node2 ~]#
[root@node2 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-hadoop@EXAMPLE.COM
Valid starting Expires Service principal
01/10/2019 00:53:19 01/11/2019 00:53:19 krbtgt/EXAMPLE.COM@EXAMPLE.COM
}}}
!! listing the encryption keys
{{{
# "kinit -kt" is similar to using kinit but you'll NOT have to input a password because a keytab file is used
# user hdfs-hadoop errors with not allowed to do 'GET_KEYS'
[root@node2 ~]# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hadoop
[root@node2 ~]# hadoop key list -metadata
Cannot list keys for KeyProvider: KMSClientProvider[http://node2.example.com:9292/kms/v1/]: org.apache.hadoop.security.authorize.AuthorizationException: User:hdfs not allowed to do 'GET_KEYS'
# keyadmin is the most powerful user, has access to all keys and can view encrypted data. so protect this user
# even kinit admin/admin will not be able to have access to the keys
[root@node2 ~]# kinit keyadmin
Password for keyadmin@EXAMPLE.COM:
[root@node2 ~]# hadoop key list -metadata
Listing keys for KeyProvider: KMSClientProvider[http://node2.example.com:9292/kms/v1/]
mykey01 : cipher: AES/CTR/NoPadding, length: 128, description: , created: Thu Jan 10 00:43:26 UTC 2019, version: 1, attributes: [key.acl.name=mykey01]
}}}
!! create the new directory "encrypted" using hdfs-hadoop principal
{{{
[root@node2 ~]# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hadoop
[root@node2 ~]# hadoop fs -ls
Found 1 items
drwxr-xr-x - hdfs hdfs 0 2019-01-09 17:14 .hiveJars
[root@node2 ~]# hadoop fs -ls /
Found 9 items
drwxrwxrwx - yarn hadoop 0 2019-01-09 17:48 /app-logs
drwxr-xr-x - hdfs hdfs 0 2019-01-06 01:45 /apps
drwxr-xr-x - yarn hadoop 0 2019-01-06 01:31 /ats
drwxr-xr-x - hdfs hdfs 0 2019-01-06 01:31 /hdp
drwxr-xr-x - mapred hdfs 0 2019-01-06 01:31 /mapred
drwxrwxrwx - mapred hadoop 0 2019-01-06 01:32 /mr-history
drwxr-xr-x - hdfs hdfs 0 2019-01-09 08:49 /ranger
drwxrwxrwx - hdfs hdfs 0 2019-01-08 18:10 /tmp
drwxr-xr-x - hdfs hdfs 0 2019-01-09 17:14 /user
[root@node2 ~]# hadoop fs -mkdir /encrypted
[root@node2 ~]# hadoop fs -ls /
Found 10 items
drwxrwxrwx - yarn hadoop 0 2019-01-09 17:48 /app-logs
drwxr-xr-x - hdfs hdfs 0 2019-01-06 01:45 /apps
drwxr-xr-x - yarn hadoop 0 2019-01-06 01:31 /ats
drwxr-xr-x - hdfs hdfs 0 2019-01-10 00:59 /encrypted
drwxr-xr-x - hdfs hdfs 0 2019-01-06 01:31 /hdp
drwxr-xr-x - mapred hdfs 0 2019-01-06 01:31 /mapred
drwxrwxrwx - mapred hadoop 0 2019-01-06 01:32 /mr-history
drwxr-xr-x - hdfs hdfs 0 2019-01-09 08:49 /ranger
drwxrwxrwx - hdfs hdfs 0 2019-01-08 18:10 /tmp
drwxr-xr-x - hdfs hdfs 0 2019-01-09 17:14 /user
[root@node2 ~]# hadoop fs -chown vagrant:vagrant /encrypted
[root@node2 ~]# hadoop fs -ls /
Found 10 items
drwxrwxrwx - yarn hadoop 0 2019-01-09 17:48 /app-logs
drwxr-xr-x - hdfs hdfs 0 2019-01-06 01:45 /apps
drwxr-xr-x - yarn hadoop 0 2019-01-06 01:31 /ats
drwxr-xr-x - vagrant vagrant 0 2019-01-10 00:59 /encrypted
drwxr-xr-x - hdfs hdfs 0 2019-01-06 01:31 /hdp
drwxr-xr-x - mapred hdfs 0 2019-01-06 01:31 /mapred
drwxrwxrwx - mapred hadoop 0 2019-01-06 01:32 /mr-history
drwxr-xr-x - hdfs hdfs 0 2019-01-09 08:49 /ranger
drwxrwxrwx - hdfs hdfs 0 2019-01-08 18:10 /tmp
drwxr-xr-x - hdfs hdfs 0 2019-01-09 17:14 /user
}}}
!! create the encryption zone on the "encrypted" folder using mykey01
{{{
[root@node2 ~]# hdfs crypto -createZone -keyName mykey01 -path /encrypted
Added encryption zone /encrypted
[root@node2 ~]# hadoop fs -ls /
Found 10 items
drwxrwxrwx - yarn hadoop 0 2019-01-09 17:48 /app-logs
drwxr-xr-x - hdfs hdfs 0 2019-01-06 01:45 /apps
drwxr-xr-x - yarn hadoop 0 2019-01-06 01:31 /ats
drwxr-xr-x - vagrant vagrant 0 2019-01-10 01:01 /encrypted
drwxr-xr-x - hdfs hdfs 0 2019-01-06 01:31 /hdp
drwxr-xr-x - mapred hdfs 0 2019-01-06 01:31 /mapred
drwxrwxrwx - mapred hadoop 0 2019-01-06 01:32 /mr-history
drwxr-xr-x - hdfs hdfs 0 2019-01-09 08:49 /ranger
drwxrwxrwx - hdfs hdfs 0 2019-01-08 18:10 /tmp
drwxr-xr-x - hdfs hdfs 0 2019-01-09 17:14 /user
}}}
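To double-check the zone registration you can also list all encryption zones (a minimal sketch; -listZones needs HDFS superuser privileges, hence the hdfs-hadoop keytab):
{{{
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hadoop
hdfs crypto -listZones   # /encrypted should show up paired with mykey01
}}}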
!! put a file in the encryption zone and read it
{{{
[vagrant@node2 ~]$ kinit
Password for vagrant@EXAMPLE.COM:
[vagrant@node2 ~]$ hadoop fs -ls /encrypted
Found 1 items
drwxrwxrwt - hdfs vagrant 0 2019-01-10 01:01 /encrypted/.Trash
[vagrant@node2 ~]$ hadoop fs -put /vagrant/data/constitution.txt /encrypted/constitution.txt
[vagrant@node2 ~]$ hadoop fs -cat /encrypted/constitution.txt | head
We the People of the United States, in Order to form a more perfect Union,
establish Justice, insure domestic Tranquility, provide for the common
defence, promote the general Welfare, and secure the Blessings of Liberty to
ourselves and our Posterity, do ordain and establish this Constitution for the
United States of America.
}}}
!! get the block locations of the encrypted file in the encryption zone
{{{
[vagrant@node2 ~]$ hadoop fsck /encrypted/constitution.txt -files -blocks -locations
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Connecting to namenode via http://node1.example.com:50070/fsck?ugi=vagrant&files=1&blocks=1&locations=1&path=%2Fencrypted%2Fconstitution.txt
FSCK started by vagrant (auth:KERBEROS_SSL) from /192.168.199.3 for path /encrypted/constitution.txt at Thu Jan 10 01:12:38 UTC 2019
/encrypted/constitution.txt 44841 bytes, 1 block(s): OK
0. BP-534825236-192.168.199.2-1546738263299:blk_1073741963_1146 len=44841 repl=3 [DatanodeInfoWithStorage[192.168.199.3:1019,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK], DatanodeInfoWithStorage[192.168.199.4:1019,DS-a66628de-4daa-433f-9aa2-d3a8c400d5c5,DISK], DatanodeInfoWithStorage[192.168.199.2:1019,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK]]
Status: HEALTHY
Total size: 44841 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 1 (avg. block size 44841 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 3
Number of racks: 1
FSCK ended at Thu Jan 10 01:12:38 UTC 2019 in 12 milliseconds
The filesystem under path '/encrypted/constitution.txt' is HEALTHY
}}}
!! check that the file inside the encryption zone is really encrypted at rest
{{{
[vagrant@node2 ~]$ sudo su -
Last login: Thu Jan 10 01:03:59 UTC 2019 on pts/2
[root@node2 ~]#
[root@node2 ~]# find /hadoop/hdfs/data/ -iname "blk_1073741963"
/hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741963
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]# head /hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741963
t?ʌF??7h?0?Ɇ??Y???+5e????{??j?,=?(?>?>4e)??l?0?cfC۟??V???<5s?T??Y?Z?.?n9??,
}}}
!! copy the file from the encryption zone to an outside folder
{{{
[vagrant@node2 ~]$ hadoop fs -cp /encrypted/constitution.txt /tmp/constitution_copied.txt
# now log in just as the regular hdfs user, and you can read the file even without keys, meaning the copy is no longer encrypted once it's outside the encryption zone
[hdfs@node2 ~]$ hadoop fs -cat /tmp/constitution_copied.txt | head
We the People of the United States, in Order to form a more perfect Union,
establish Justice, insure domestic Tranquility, provide for the common
defence, promote the general Welfare, and secure the Blessings of Liberty to
ourselves and our Posterity, do ordain and establish this Constitution for the
United States of America.
[hdfs@node2 ~]$ hadoop fsck /tmp/constitution_copied.txt -files -blocks -locations
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Connecting to namenode via http://node1.example.com:50070/fsck?ugi=hdfs&files=1&blocks=1&locations=1&path=%2Ftmp%2Fconstitution_copied.txt
FSCK started by hdfs (auth:KERBEROS_SSL) from /192.168.199.3 for path /tmp/constitution_copied.txt at Thu Jan 10 01:18:08 UTC 2019
/tmp/constitution_copied.txt 44841 bytes, 1 block(s): OK
0. BP-534825236-192.168.199.2-1546738263299:blk_1073741964_1147 len=44841 repl=3 [DatanodeInfoWithStorage[192.168.199.3:1019,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK], DatanodeInfoWithStorage[192.168.199.2:1019,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.4:1019,DS-a66628de-4daa-433f-9aa2-d3a8c400d5c5,DISK]]
Status: HEALTHY
Total size: 44841 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 1 (avg. block size 44841 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 3
Number of racks: 1
FSCK ended at Thu Jan 10 01:18:08 UTC 2019 in 1 milliseconds
The filesystem under path '/tmp/constitution_copied.txt' is HEALTHY
[vagrant@node2 ~]$ sudo su -
[root@node2 ~]# find /hadoop/hdfs/data/ -iname "blk_1073741964"
/hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741964
[root@node2 ~]# head /hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741964
We the People of the United States, in Order to form a more perfect Union,
establish Justice, insure domestic Tranquility, provide for the common
defence, promote the general Welfare, and secure the Blessings of Liberty to
ourselves and our Posterity, do ordain and establish this Constitution for the
United States of America.
}}}
!! hdfs user can create subdirectories under the encryption zone but can't create files
{{{
[hdfs@node2 ~]$ hadoop fs -mkdir /encrypted/subdir
[hdfs@node2 ~]$ hadoop fs -put /vagrant/data/constitution.txt /encrypted/subdir/constitution2.txt
put: User:hdfs not allowed to do 'DECRYPT_EEK' on 'mykey01'
19/01/10 01:10:56 ERROR hdfs.DFSClient: Failed to close inode 24384
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /encrypted/subdir/constitution2.txt._COPYING_ (inode 24384): File does not exist. Holder DFSClient_NONMAPREDUCE_1910722416_1 does not have any open files.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3697)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3785)
}}}
!! keyadmin can read any encrypted file in any encryption zone
{{{
[hdfs@node2 ~]$ kinit keyadmin@EXAMPLE.COM
Password for keyadmin@EXAMPLE.COM:
[hdfs@node2 ~]$ hdfs fs -cat /encrypted/constitution.txt | head
Error: Could not find or load main class fs
[hdfs@node2 ~]$ hadoop fs -cat /encrypted/constitution.txt | head
We the People of the United States, in Order to form a more perfect Union,
establish Justice, insure domestic Tranquility, provide for the common
defence, promote the general Welfare, and secure the Blessings of Liberty to
ourselves and our Posterity, do ordain and establish this Constitution for the
United States of America.
# admin/admin will not be able to read any encrypted file
[hdfs@node2 ~]$ kinit admin/admin
Password for admin/admin@EXAMPLE.COM:
[hdfs@node2 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1006
Default principal: admin/admin@EXAMPLE.COM
Valid starting Expires Service principal
01/10/2019 01:30:08 01/11/2019 01:30:08 krbtgt/EXAMPLE.COM@EXAMPLE.COM
[hdfs@node2 ~]$ hadoop fs -cat /encrypted/constitution.txt | head
cat: User:admin not allowed to do 'DECRYPT_EEK' on 'mykey01'
}}}
! troubleshooting
!! kms install properties file
<<<
/usr/hdp/current/ranger-kms/install.properties
<<<
!! ranger kms install error "unable to connect to DB"
!!! error message
{{{
stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/package/scripts/kms_server.py", line 121, in <module>
KmsServer().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/package/scripts/kms_server.py", line 48, in install
self.configure(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 120, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/package/scripts/kms_server.py", line 90, in configure
kms()
File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/package/scripts/kms.py", line 183, in kms
Execute(db_connection_check_command, path='/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin', tries=5, try_sleep=10, environment=env_dict)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/lib/jvm/jre//bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://node2:3306/rangerkms' rangerkms [PROTECTED] com.mysql.jdbc.Driver' returned 1. ERROR: Unable to connect to the DB. Please check DB connection properties.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
}}}
!!! fix
* On Advanced config -> "Custom kms-site", add the keyadmin proxy user settings (the last three lines in the screenshot) so Kerberos authentication to the KMS works
[img(90%,90%)[ https://i.imgur.com/tUpTVUL.png ]]
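For reference, the three proxy user properties in that screenshot follow the standard hadoop.kms.proxyuser.* convention and would look something like this (a sketch; the wildcard values are permissive, so tighten users/hosts in a real cluster):
{{{
hadoop.kms.proxyuser.keyadmin.groups=*
hadoop.kms.proxyuser.keyadmin.hosts=*
hadoop.kms.proxyuser.keyadmin.users=*
}}}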
http://learnxinyminutes.com/docs/javascript/
https://developer.mozilla.org/en-US/docs/Web/JavaScript/A_re-introduction_to_JavaScript
https://en.wikipedia.org/wiki/Oracle_Exadata#Hardware_Configurations
Vagrant + Docker
http://en.wikipedia.org/wiki/Vagrant_%28software%29
http://www.slideshare.net/3dgiordano/vagrant-docker
http://www.quora.com/What-is-the-difference-between-Docker-and-Vagrant-When-should-you-use-each-one
http://www.scriptrock.com/articles/docker-vs-vagrant
https://www.vagrantup.com/downloads.html
{{{
vagrant up node1 node2 node3
vagrant suspend
vagrant destroy
vagrant global-status
}}}
https://stackoverflow.com/questions/10953070/how-to-debug-vagrant-cannot-forward-the-specified-ports-on-this-vm-message
password for geerlingguy/centos7 https://github.com/geerlingguy/drupal-vm/issues/1203
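Quick checks when "vagrant up" dies with the "cannot forward the specified ports" message (a sketch; --debug and auto_correct are standard Vagrant features, the port numbers are just examples):
{{{
# full debug logging shows which host port collides
vagrant up --debug 2>&1 | tee vagrant-debug.log
# or let Vagrant auto-pick a free host port in the Vagrantfile:
#   config.vm.network "forwarded_port", guest: 80, host: 8080, auto_correct: true
}}}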
https://docs.oracle.com/database/121/ADMIN/cdb_create.htm#ADMIN13514
<<<
A CDB contains the following files:
* One control file
* One active online redo log for a single-instance CDB, or one active online redo log for each instance of an Oracle RAC CDB
* One set of temp files
** There is one default temporary tablespace for the root and for each PDB.
* One active undo tablespace for a single-instance CDB, or one active undo tablespace for each instance of an Oracle RAC CDB
* Sets of system data files
** The primary physical difference between a CDB and a non-CDB is in the non-undo data files. A non-CDB has only one set of system data files. In contrast, a CDB includes one set of system data files for each container in the CDB, including a set of system data files for each PDB. In addition, a CDB has one set of user-created data files for each container.
* Sets of user-created data files
** Each PDB has its own set of non-system data files. These data files contain the user-defined schemas and database objects for the PDB.
For backup and recovery of a CDB, Recovery Manager (RMAN) is recommended. PDB point-in-time recovery (PDB PITR) must be performed with RMAN. By default, RMAN turns on control file autobackup for a CDB. It is strongly recommended that control file autobackup is enabled for a CDB, to ensure that PDB PITR can undo data file additions or deletions.
<<<
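The per-container layout above is easy to verify from SQL*Plus (a minimal sketch; v$containers and cdb_data_files are the standard 12c dictionary views):
{{{
sqlplus / as sysdba <<'EOF'
-- one row per container: CDB$ROOT, PDB$SEED, and each PDB
select con_id, name, open_mode from v$containers;
-- SYSTEM/SYSAUX data files exist once per container, matching the note above
select con_id, tablespace_name, file_name from cdb_data_files
where tablespace_name in ('SYSTEM','SYSAUX') order by con_id;
EOF
}}}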
https://leetcode.com/problems/two-sum/
{{{
Given an array of integers, return indices of the two numbers such that they add up to a specific target.
You may assume that each input would have exactly one solution, and you may not use the same element twice.
Example:
Given nums = [2, 7, 11, 15], target = 9,
Because nums[0] + nums[1] = 2 + 7 = 9,
return [0, 1].
}}}
{{{
# brute force O(n^2) version, kept for reference:
# class Solution:
#     def twoSum(self, nums, target):
#         for i in range(len(nums)):
#             for j in range(i + 1, len(nums)):
#                 if nums[i] + nums[j] == target:
#                     return (i, j)

# hash map O(n) version: remember the complement (target - num) of every
# number seen so far; the first number that matches a stored complement
# closes the pair
class Solution:
    def twoSum(self, nums, target):
        if len(nums) <= 1:
            return False
        kv_hmap = dict()
        for i in range(len(nums)):
            num = nums[i]
            key = target - num        # the value that would complete the pair
            if num in kv_hmap:        # num completes an earlier complement
                return [kv_hmap[num], i]
            kv_hmap[key] = i

# e.g. Solution().twoSum([2, 7, 11, 15], 9) returns [0, 1]
}}}
turbo mode is disabled
{{{
<!-- Turbo Mode -->
<!-- Description: Turbo Mode. -->
<!-- Possible Values: "Disabled", "Enabled" -->
<Turbo_Mode>Disabled</Turbo_Mode>
}}}
! cpu_topology script
{{{
[root@enkx3cel01 ~]# sh cpu_topology
Product Name: SUN FIRE X4270 M3
Product Name: ASSY,MOTHERBOARD,2U
model name : Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz
processors (OS CPU count) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
physical id (processor socket) 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1
siblings (logical CPUs/socket) 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
core id (# assigned to a core) 0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5
cpu cores (physical cores/socket) 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
}}}
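The script itself isn't pasted here, but a minimal reconstruction of what a cpu_topology-style script does is below (an assumption: the real script may differ, this just combines dmidecode product info with the per-thread fields from /proc/cpuinfo):
{{{
#!/bin/sh
# platform identification (the two "Product Name" lines above)
dmidecode | grep "Product Name"
# CPU model (first occurrence is enough, all cores report the same)
grep -m1 "model name" /proc/cpuinfo
# one column per logical CPU for each topology field
echo "processors  (OS CPU count)          $(grep '^processor'  /proc/cpuinfo | awk -F: '{print $2}' | tr '\n' ' ')"
echo "physical id (processor socket)      $(grep 'physical id' /proc/cpuinfo | awk -F: '{print $2}' | tr '\n' ' ')"
echo "siblings    (logical CPUs/socket)   $(grep 'siblings'    /proc/cpuinfo | awk -F: '{print $2}' | tr '\n' ' ')"
echo "core id     (# assigned to a core)  $(grep 'core id'     /proc/cpuinfo | awk -F: '{print $2}' | tr '\n' ' ')"
echo "cpu cores   (physical cores/socket) $(grep 'cpu cores'   /proc/cpuinfo | awk -F: '{print $2}' | tr '\n' ' ')"
}}}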
! intel cpu topology tool
{{{
[root@enkx3cel01 cpu-topology]# ./cpu_topology64.out
Advisory to Users on system topology enumeration
This utility is for demonstration purpose only. It assumes the hardware topology
configuration within a coherent domain does not change during the life of an OS
session. If an OS support advanced features that can change hardware topology
configurations, more sophisticated adaptation may be necessary to account for
the hardware configuration change that might have added and reduced the number
of logical processors being managed by the OS.
User should also`be aware that the system topology enumeration algorithm is
based on the assumption that CPUID instruction will return raw data reflecting
the native hardware configuration. When an application runs inside a virtual
machine hosted by a Virtual Machine Monitor (VMM), any CPUID instructions
issued by an app (or a guest OS) are trapped by the VMM and it is the VMM's
responsibility and decision to emulate/supply CPUID return data to the virtual
machines. When deploying topology enumeration code based on querying CPUID
inside a VM environment, the user must consult with the VMM vendor on how an VMM
will emulate CPUID instruction relating to topology enumeration.
Software visible enumeration in the system:
Number of logical processors visible to the OS: 24
Number of logical processors visible to this process: 24
Number of processor cores visible to this process: 12
Number of physical packages visible to this process: 2
Hierarchical counts by levels of processor topology:
# of cores in package 0 visible to this process: 6 .
# of logical processors in Core 0 visible to this process: 2 .
# of logical processors in Core 1 visible to this process: 2 .
# of logical processors in Core 2 visible to this process: 2 .
# of logical processors in Core 3 visible to this process: 2 .
# of logical processors in Core 4 visible to this process: 2 .
# of logical processors in Core 5 visible to this process: 2 .
# of cores in package 1 visible to this process: 6 .
# of logical processors in Core 0 visible to this process: 2 .
# of logical processors in Core 1 visible to this process: 2 .
# of logical processors in Core 2 visible to this process: 2 .
# of logical processors in Core 3 visible to this process: 2 .
# of logical processors in Core 4 visible to this process: 2 .
# of logical processors in Core 5 visible to this process: 2 .
Affinity masks per SMT thread, per core, per package:
Individual:
P:0, C:0, T:0 --> 1
P:0, C:0, T:1 --> 1z3
Core-aggregated:
P:0, C:0 --> 1001
Individual:
P:0, C:1, T:0 --> 2
P:0, C:1, T:1 --> 2z3
Core-aggregated:
P:0, C:1 --> 2002
Individual:
P:0, C:2, T:0 --> 4
P:0, C:2, T:1 --> 4z3
Core-aggregated:
P:0, C:2 --> 4004
Individual:
P:0, C:3, T:0 --> 8
P:0, C:3, T:1 --> 8z3
Core-aggregated:
P:0, C:3 --> 8008
Individual:
P:0, C:4, T:0 --> 10
P:0, C:4, T:1 --> 1z4
Core-aggregated:
P:0, C:4 --> 10010
Individual:
P:0, C:5, T:0 --> 20
P:0, C:5, T:1 --> 2z4
Core-aggregated:
P:0, C:5 --> 20020
Pkg-aggregated:
P:0 --> 3f03f
Individual:
P:1, C:0, T:0 --> 40
P:1, C:0, T:1 --> 4z4
Core-aggregated:
P:1, C:0 --> 40040
Individual:
P:1, C:1, T:0 --> 80
P:1, C:1, T:1 --> 8z4
Core-aggregated:
P:1, C:1 --> 80080
Individual:
P:1, C:2, T:0 --> 100
P:1, C:2, T:1 --> 1z5
Core-aggregated:
P:1, C:2 --> 100100
Individual:
P:1, C:3, T:0 --> 200
P:1, C:3, T:1 --> 2z5
Core-aggregated:
P:1, C:3 --> 200200
Individual:
P:1, C:4, T:0 --> 400
P:1, C:4, T:1 --> 4z5
Core-aggregated:
P:1, C:4 --> 400400
Individual:
P:1, C:5, T:0 --> 800
P:1, C:5, T:1 --> 8z5
Core-aggregated:
P:1, C:5 --> 800800
Pkg-aggregated:
P:1 --> fc0fc0
APIC ID listings from affinity masks
OS cpu 0, Affinity mask 00000001 - apic id 0
OS cpu 1, Affinity mask 00000002 - apic id 2
OS cpu 2, Affinity mask 00000004 - apic id 4
OS cpu 3, Affinity mask 00000008 - apic id 6
OS cpu 4, Affinity mask 00000010 - apic id 8
OS cpu 5, Affinity mask 00000020 - apic id a
OS cpu 6, Affinity mask 00000040 - apic id 20
OS cpu 7, Affinity mask 00000080 - apic id 22
OS cpu 8, Affinity mask 00000100 - apic id 24
OS cpu 9, Affinity mask 00000200 - apic id 26
OS cpu 10, Affinity mask 00000400 - apic id 28
OS cpu 11, Affinity mask 00000800 - apic id 2a
OS cpu 12, Affinity mask 00001000 - apic id 1
OS cpu 13, Affinity mask 00002000 - apic id 3
OS cpu 14, Affinity mask 00004000 - apic id 5
OS cpu 15, Affinity mask 00008000 - apic id 7
OS cpu 16, Affinity mask 00010000 - apic id 9
OS cpu 17, Affinity mask 00020000 - apic id b
OS cpu 18, Affinity mask 00040000 - apic id 21
OS cpu 19, Affinity mask 00080000 - apic id 23
OS cpu 20, Affinity mask 00100000 - apic id 25
OS cpu 21, Affinity mask 00200000 - apic id 27
OS cpu 22, Affinity mask 00400000 - apic id 29
OS cpu 23, Affinity mask 00800000 - apic id 2b
Package 0 Cache and Thread details
Box Description:
Cache is cache level designator
Size is cache size
OScpu# is cpu # as seen by OS
Core is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
where # is number of zeroes (so '8z5' is '0x800000')
L1D is Level 1 Data cache, size(KBytes)= 32, Cores/cache= 2, Caches/package= 6
L1I is Level 1 Instruction cache, size(KBytes)= 32, Cores/cache= 2, Caches/package= 6
L2 is Level 2 Unified cache, size(KBytes)= 256, Cores/cache= 2, Caches/package= 6
L3 is Level 3 Unified cache, size(KBytes)= 15360, Cores/cache= 12, Caches/package= 1
+-------------+-------------+-------------+-------------+-------------+-------------+
Cache | L1D | L1D | L1D | L1D | L1D | L1D |
Size | 32K | 32K | 32K | 32K | 32K | 32K |
OScpu#| 0 12| 1 13| 2 14| 3 15| 4 16| 5 17|
Core | c0_t0 c0_t1| c1_t0 c1_t1| c2_t0 c2_t1| c3_t0 c3_t1| c4_t0 c4_t1| c5_t0 c5_t1|
AffMsk| 1 1z3| 2 2z3| 4 4z3| 8 8z3| 10 1z4| 20 2z4|
CmbMsk| 1001 | 2002 | 4004 | 8008 | 10010 | 20020 |
+-------------+-------------+-------------+-------------+-------------+-------------+
Cache | L1I | L1I | L1I | L1I | L1I | L1I |
Size | 32K | 32K | 32K | 32K | 32K | 32K |
+-------------+-------------+-------------+-------------+-------------+-------------+
Cache | L2 | L2 | L2 | L2 | L2 | L2 |
Size | 256K | 256K | 256K | 256K | 256K | 256K |
+-------------+-------------+-------------+-------------+-------------+-------------+
Cache | L3 |
Size | 15M |
CmbMsk| 3f03f |
+-----------------------------------------------------------------------------------+
Combined socket AffinityMask= 0x3f03f
Package 1 Cache and Thread details
Box Description:
Cache is cache level designator
Size is cache size
OScpu# is cpu # as seen by OS
Core is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
where # is number of zeroes (so '8z5' is '0x800000')
+-------------+-------------+-------------+-------------+-------------+-------------+
Cache | L1D | L1D | L1D | L1D | L1D | L1D |
Size | 32K | 32K | 32K | 32K | 32K | 32K |
OScpu#| 6 18| 7 19| 8 20| 9 21| 10 22| 11 23|
Core | c0_t0 c0_t1| c1_t0 c1_t1| c2_t0 c2_t1| c3_t0 c3_t1| c4_t0 c4_t1| c5_t0 c5_t1|
AffMsk| 40 4z4| 80 8z4| 100 1z5| 200 2z5| 400 4z5| 800 8z5|
CmbMsk| 40040 | 80080 |100100 |200200 |400400 |800800 |
+-------------+-------------+-------------+-------------+-------------+-------------+
Cache | L1I | L1I | L1I | L1I | L1I | L1I |
Size | 32K | 32K | 32K | 32K | 32K | 32K |
+-------------+-------------+-------------+-------------+-------------+-------------+
Cache | L2 | L2 | L2 | L2 | L2 | L2 |
Size | 256K | 256K | 256K | 256K | 256K | 256K |
+-------------+-------------+-------------+-------------+-------------+-------------+
Cache | L3 |
Size | 15M |
CmbMsk|fc0fc0 |
+-----------------------------------------------------------------------------------+
}}}
! intel turbostat
{{{
[root@enkx3cel01 ~]# ./turbostat
pkg core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
4.22 2.00 2.00 95.78 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 0 0 3.85 2.00 2.00 96.15 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 0 12 2.74 2.00 2.00 97.26 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 1 1 24.62 2.00 2.00 75.38 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 1 13 26.93 2.00 2.00 73.07 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 2 2 2.68 2.00 2.00 97.32 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 2 14 3.15 2.00 2.00 96.85 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 3 3 2.10 2.00 2.00 97.90 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 3 15 1.44 2.00 2.00 98.56 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 4 4 2.66 2.00 2.00 97.34 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 4 16 1.99 2.00 2.00 98.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 5 5 1.88 2.00 2.00 98.12 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 5 17 2.34 2.00 2.00 97.66 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 0 6 3.10 2.00 2.00 96.90 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 0 18 2.28 2.00 2.00 97.72 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 1 7 2.73 2.00 2.00 97.27 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 1 19 2.28 2.00 2.00 97.72 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 2 8 1.94 2.00 2.00 98.06 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 2 20 1.41 2.00 2.00 98.59 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 3 9 2.45 2.00 2.00 97.55 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 3 21 2.26 2.00 2.00 97.74 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 4 10 1.41 2.00 2.00 98.59 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 4 22 1.48 2.00 2.00 98.52 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 5 11 1.59 2.00 2.00 98.41 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 5 23 1.87 2.00 2.00 98.13 0.00 0.00 0.00 0.00 0.00 0.00 0.00
}}}
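Reading the turbostat columns: %c0 is the percentage of time the CPU was busy retiring instructions, GHz is the average clock while in %c0, and TSC is the base clock, so with turbo disabled (as in the BIOS excerpt above) GHz should never exceed TSC. turbostat can also sample over a single window by wrapping a command (a small usage sketch):
{{{
# collect counters only for the duration of the wrapped command (a 10s window here)
./turbostat sleep 10
}}}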
! cpu_topology script
{{{
[root@enkx3db01 cpu-topology]# sh ~root/cpu_topology
Product Name: SUN FIRE X4170 M3
Product Name: ASSY,MOTHERBOARD,1U
model name : Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
processors (OS CPU count) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
physical id (processor socket) 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
siblings (logical CPUs/socket) 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16
core id (# assigned to a core) 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7
cpu cores (physical cores/socket) 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
}}}
! intel cpu topology tool
{{{
[root@enkx3db01 cpu-topology]# ./cpu_topology64.out
Advisory to Users on system topology enumeration
This utility is for demonstration purpose only. It assumes the hardware topology
configuration within a coherent domain does not change during the life of an OS
session. If an OS support advanced features that can change hardware topology
configurations, more sophisticated adaptation may be necessary to account for
the hardware configuration change that might have added and reduced the number
of logical processors being managed by the OS.
User should also`be aware that the system topology enumeration algorithm is
based on the assumption that CPUID instruction will return raw data reflecting
the native hardware configuration. When an application runs inside a virtual
machine hosted by a Virtual Machine Monitor (VMM), any CPUID instructions
issued by an app (or a guest OS) are trapped by the VMM and it is the VMM's
responsibility and decision to emulate/supply CPUID return data to the virtual
machines. When deploying topology enumeration code based on querying CPUID
inside a VM environment, the user must consult with the VMM vendor on how an VMM
will emulate CPUID instruction relating to topology enumeration.
Software visible enumeration in the system:
Number of logical processors visible to the OS: 32
Number of logical processors visible to this process: 32
Number of processor cores visible to this process: 16
Number of physical packages visible to this process: 2
Hierarchical counts by levels of processor topology:
# of cores in package 0 visible to this process: 8 .
# of logical processors in Core 0 visible to this process: 2 .
# of logical processors in Core 1 visible to this process: 2 .
# of logical processors in Core 2 visible to this process: 2 .
# of logical processors in Core 3 visible to this process: 2 .
# of logical processors in Core 4 visible to this process: 2 .
# of logical processors in Core 5 visible to this process: 2 .
# of logical processors in Core 6 visible to this process: 2 .
# of logical processors in Core 7 visible to this process: 2 .
# of cores in package 1 visible to this process: 8 .
# of logical processors in Core 0 visible to this process: 2 .
# of logical processors in Core 1 visible to this process: 2 .
# of logical processors in Core 2 visible to this process: 2 .
# of logical processors in Core 3 visible to this process: 2 .
# of logical processors in Core 4 visible to this process: 2 .
# of logical processors in Core 5 visible to this process: 2 .
# of logical processors in Core 6 visible to this process: 2 .
# of logical processors in Core 7 visible to this process: 2 .
Affinity masks per SMT thread, per core, per package:
Individual:
P:0, C:0, T:0 --> 1
P:0, C:0, T:1 --> 1z4
Core-aggregated:
P:0, C:0 --> 10001
Individual:
P:0, C:1, T:0 --> 2
P:0, C:1, T:1 --> 2z4
Core-aggregated:
P:0, C:1 --> 20002
Individual:
P:0, C:2, T:0 --> 4
P:0, C:2, T:1 --> 4z4
Core-aggregated:
P:0, C:2 --> 40004
Individual:
P:0, C:3, T:0 --> 8
P:0, C:3, T:1 --> 8z4
Core-aggregated:
P:0, C:3 --> 80008
Individual:
P:0, C:4, T:0 --> 10
P:0, C:4, T:1 --> 1z5
Core-aggregated:
P:0, C:4 --> 100010
Individual:
P:0, C:5, T:0 --> 20
P:0, C:5, T:1 --> 2z5
Core-aggregated:
P:0, C:5 --> 200020
Individual:
P:0, C:6, T:0 --> 40
P:0, C:6, T:1 --> 4z5
Core-aggregated:
P:0, C:6 --> 400040
Individual:
P:0, C:7, T:0 --> 80
P:0, C:7, T:1 --> 8z5
Core-aggregated:
P:0, C:7 --> 800080
Pkg-aggregated:
P:0 --> ff00ff
Individual:
P:1, C:0, T:0 --> 100
P:1, C:0, T:1 --> 1z6
Core-aggregated:
P:1, C:0 --> 1000100
Individual:
P:1, C:1, T:0 --> 200
P:1, C:1, T:1 --> 2z6
Core-aggregated:
P:1, C:1 --> 2000200
Individual:
P:1, C:2, T:0 --> 400
P:1, C:2, T:1 --> 4z6
Core-aggregated:
P:1, C:2 --> 4000400
Individual:
P:1, C:3, T:0 --> 800
P:1, C:3, T:1 --> 8z6
Core-aggregated:
P:1, C:3 --> 8000800
Individual:
P:1, C:4, T:0 --> 1z3
P:1, C:4, T:1 --> 1z7
Core-aggregated:
P:1, C:4 --> 10001z3
Individual:
P:1, C:5, T:0 --> 2z3
P:1, C:5, T:1 --> 2z7
Core-aggregated:
P:1, C:5 --> 20002z3
Individual:
P:1, C:6, T:0 --> 4z3
P:1, C:6, T:1 --> 4z7
Core-aggregated:
P:1, C:6 --> 40004z3
Individual:
P:1, C:7, T:0 --> 8z3
P:1, C:7, T:1 --> 8z7
Core-aggregated:
P:1, C:7 --> 80008z3
Pkg-aggregated:
P:1 --> ff00ff00
APIC ID listings from affinity masks
OS cpu 0, Affinity mask 0000000001 - apic id 0
OS cpu 1, Affinity mask 0000000002 - apic id 2
OS cpu 2, Affinity mask 0000000004 - apic id 4
OS cpu 3, Affinity mask 0000000008 - apic id 6
OS cpu 4, Affinity mask 0000000010 - apic id 8
OS cpu 5, Affinity mask 0000000020 - apic id a
OS cpu 6, Affinity mask 0000000040 - apic id c
OS cpu 7, Affinity mask 0000000080 - apic id e
OS cpu 8, Affinity mask 0000000100 - apic id 20
OS cpu 9, Affinity mask 0000000200 - apic id 22
OS cpu 10, Affinity mask 0000000400 - apic id 24
OS cpu 11, Affinity mask 0000000800 - apic id 26
OS cpu 12, Affinity mask 0000001000 - apic id 28
OS cpu 13, Affinity mask 0000002000 - apic id 2a
OS cpu 14, Affinity mask 0000004000 - apic id 2c
OS cpu 15, Affinity mask 0000008000 - apic id 2e
OS cpu 16, Affinity mask 0000010000 - apic id 1
OS cpu 17, Affinity mask 0000020000 - apic id 3
OS cpu 18, Affinity mask 0000040000 - apic id 5
OS cpu 19, Affinity mask 0000080000 - apic id 7
OS cpu 20, Affinity mask 0000100000 - apic id 9
OS cpu 21, Affinity mask 0000200000 - apic id b
OS cpu 22, Affinity mask 0000400000 - apic id d
OS cpu 23, Affinity mask 0000800000 - apic id f
OS cpu 24, Affinity mask 0001000000 - apic id 21
OS cpu 25, Affinity mask 0002000000 - apic id 23
OS cpu 26, Affinity mask 0004000000 - apic id 25
OS cpu 27, Affinity mask 0008000000 - apic id 27
OS cpu 28, Affinity mask 0010000000 - apic id 29
OS cpu 29, Affinity mask 0020000000 - apic id 2b
OS cpu 30, Affinity mask 0040000000 - apic id 2d
OS cpu 31, Affinity mask 0080000000 - apic id 2f
Package 0 Cache and Thread details
Box Description:
Cache is cache level designator
Size is cache size
OScpu# is cpu # as seen by OS
Core is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
where # is number of zeroes (so '8z5' is '0x800000')
L1D is Level 1 Data cache, size(KBytes)= 32, Cores/cache= 2, Caches/package= 8
L1I is Level 1 Instruction cache, size(KBytes)= 32, Cores/cache= 2, Caches/package= 8
L2 is Level 2 Unified cache, size(KBytes)= 256, Cores/cache= 2, Caches/package= 8
L3 is Level 3 Unified cache, size(KBytes)= 20480, Cores/cache= 16, Caches/package= 1
+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+
Cache | L1D | L1D | L1D | L1D | L1D | L1D | L1D | L1D |
Size | 32K | 32K | 32K | 32K | 32K | 32K | 32K | 32K |
OScpu#| 0 16| 1 17| 2 18| 3 19| 4 20| 5 21| 6 22| 7 23|
Core | c0_t0 c0_t1| c1_t0 c1_t1| c2_t0 c2_t1| c3_t0 c3_t1| c4_t0 c4_t1| c5_t0 c5_t1| c6_t0 c6_t1| c7_t0 c7_t1|
AffMsk| 1 1z4| 2 2z4| 4 4z4| 8 8z4| 10 1z5| 20 2z5| 40 4z5| 80 8z5|
CmbMsk| 10001 | 20002 | 40004 | 80008 | 100010 | 200020 | 400040 | 800080 |
+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+
Cache | L1I | L1I | L1I | L1I | L1I | L1I | L1I | L1I |
Size | 32K | 32K | 32K | 32K | 32K | 32K | 32K | 32K |
+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+
Cache | L2 | L2 | L2 | L2 | L2 | L2 | L2 | L2 |
Size | 256K | 256K | 256K | 256K | 256K | 256K | 256K | 256K |
+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+
Cache | L3 |
Size | 20M |
CmbMsk| ff00ff |
+-----------------------------------------------------------------------------------------------------------------------------------------------+
Combined socket AffinityMask= 0xff00ff
Package 1 Cache and Thread details
Box Description:
Cache is cache level designator
Size is cache size
OScpu# is cpu # as seen by OS
Core is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
where # is number of zeroes (so '8z5' is '0x800000')
+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+
Cache | L1D | L1D | L1D | L1D | L1D | L1D | L1D | L1D |
Size | 32K | 32K | 32K | 32K | 32K | 32K | 32K | 32K |
OScpu#| 8 24| 9 25| 10 26| 11 27| 12 28| 13 29| 14 30| 15 31|
Core | c0_t0 c0_t1| c1_t0 c1_t1| c2_t0 c2_t1| c3_t0 c3_t1| c4_t0 c4_t1| c5_t0 c5_t1| c6_t0 c6_t1| c7_t0 c7_t1|
AffMsk| 100 1z6| 200 2z6| 400 4z6| 800 8z6| 1z3 1z7| 2z3 2z7| 4z3 4z7| 8z3 8z7|
CmbMsk| 1000100 | 2000200 | 4000400 | 8000800 | 10001z3 | 20002z3 | 40004z3 | 80008z3 |
+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+
Cache | L1I | L1I | L1I | L1I | L1I | L1I | L1I | L1I |
Size | 32K | 32K | 32K | 32K | 32K | 32K | 32K | 32K |
+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+
Cache | L2 | L2 | L2 | L2 | L2 | L2 | L2 | L2 |
Size | 256K | 256K | 256K | 256K | 256K | 256K | 256K | 256K |
+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+
Cache | L3 |
Size | 20M |
CmbMsk|ff00ff00 |
+-----------------------------------------------------------------------------------------------------------------------------------------------+
}}}
! intel turbostat
{{{
[root@enkx3db01 ~]# ./turbostat
pkg core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
0.73 1.99 2.89 99.27 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 0 0 1.71 1.86 2.89 98.29 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 0 16 0.82 1.88 2.89 99.18 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 1 1 3.66 1.60 2.89 96.34 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 1 17 3.34 1.97 2.89 96.66 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 2 2 0.20 2.12 2.89 99.80 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 2 18 0.32 2.68 2.89 99.68 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 3 3 0.43 2.28 2.89 99.57 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 3 19 0.32 1.47 2.89 99.68 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 4 4 0.14 2.61 2.89 99.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 4 20 0.14 1.90 2.89 99.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 5 5 0.09 1.98 2.89 99.91 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 5 21 0.18 1.80 2.89 99.82 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 6 6 0.14 1.94 2.89 99.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 6 22 0.03 2.12 2.89 99.97 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 7 7 0.02 2.28 2.89 99.98 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 7 23 0.02 2.02 2.89 99.98 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 0 8 3.49 2.37 2.89 96.51 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 0 24 1.30 2.48 2.89 98.70 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 1 9 0.85 2.39 2.89 99.15 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 1 25 0.54 2.66 2.89 99.46 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 2 10 0.49 1.92 2.89 99.51 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 2 26 0.23 2.17 2.89 99.77 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 3 11 0.24 2.18 2.89 99.76 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 3 27 0.57 1.65 2.89 99.43 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 4 12 0.22 2.30 2.89 99.78 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 4 28 0.28 2.10 2.89 99.72 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 5 13 0.44 1.79 2.89 99.56 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 5 29 0.10 2.02 2.89 99.90 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 6 14 0.05 2.46 2.89 99.95 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 6 30 0.06 2.44 2.89 99.94 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 7 15 2.24 1.44 2.89 97.76 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 7 31 0.70 2.23 2.89 99.30 0.00 0.00 0.00 0.00 0.00 0.00 0.00
}}}
turbo mode is disabled
{{{
<!-- Turbo Mode -->
<!-- Description: Turbo Mode. -->
<!-- Possible Values: "Disabled", "Enabled" -->
<Turbo_Mode>Disabled</Turbo_Mode>
}}}
! cpu_topology script
{{{
[root@enkx3db02 cpu-topology]# sh ~root/cpu_topology
Product Name: SUN FIRE X4170 M3
Product Name: ASSY,MOTHERBOARD,1U
model name : Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
processors (OS CPU count) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
physical id (processor socket) 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
siblings (logical CPUs/socket) 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
core id (# assigned to a core) 0 1 6 7 0 1 6 7 0 1 6 7 0 1 6 7
cpu cores (physical cores/socket) 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
}}}
! intel cpu topology tool
{{{
[root@enkx3db02 cpu-topology]# ./cpu_topology64.out
Advisory to Users on system topology enumeration
This utility is for demonstration purpose only. It assumes the hardware topology
configuration within a coherent domain does not change during the life of an OS
session. If an OS support advanced features that can change hardware topology
configurations, more sophisticated adaptation may be necessary to account for
the hardware configuration change that might have added and reduced the number
of logical processors being managed by the OS.
User should also`be aware that the system topology enumeration algorithm is
based on the assumption that CPUID instruction will return raw data reflecting
the native hardware configuration. When an application runs inside a virtual
machine hosted by a Virtual Machine Monitor (VMM), any CPUID instructions
issued by an app (or a guest OS) are trapped by the VMM and it is the VMM's
responsibility and decision to emulate/supply CPUID return data to the virtual
machines. When deploying topology enumeration code based on querying CPUID
inside a VM environment, the user must consult with the VMM vendor on how an VMM
will emulate CPUID instruction relating to topology enumeration.
Software visible enumeration in the system:
Number of logical processors visible to the OS: 16
Number of logical processors visible to this process: 16
Number of processor cores visible to this process: 8
Number of physical packages visible to this process: 2
Hierarchical counts by levels of processor topology:
# of cores in package 0 visible to this process: 4 .
# of logical processors in Core 0 visible to this process: 2 .
# of logical processors in Core 1 visible to this process: 2 .
# of logical processors in Core 2 visible to this process: 2 .
# of logical processors in Core 3 visible to this process: 2 .
# of cores in package 1 visible to this process: 4 .
# of logical processors in Core 0 visible to this process: 2 .
# of logical processors in Core 1 visible to this process: 2 .
# of logical processors in Core 2 visible to this process: 2 .
# of logical processors in Core 3 visible to this process: 2 .
Affinity masks per SMT thread, per core, per package:
Individual:
P:0, C:0, T:0 --> 1
P:0, C:0, T:1 --> 100
Core-aggregated:
P:0, C:0 --> 101
Individual:
P:0, C:1, T:0 --> 2
P:0, C:1, T:1 --> 200
Core-aggregated:
P:0, C:1 --> 202
Individual:
P:0, C:2, T:0 --> 4
P:0, C:2, T:1 --> 400
Core-aggregated:
P:0, C:2 --> 404
Individual:
P:0, C:3, T:0 --> 8
P:0, C:3, T:1 --> 800
Core-aggregated:
P:0, C:3 --> 808
Pkg-aggregated:
P:0 --> f0f
Individual:
P:1, C:0, T:0 --> 10
P:1, C:0, T:1 --> 1z3
Core-aggregated:
P:1, C:0 --> 1010
Individual:
P:1, C:1, T:0 --> 20
P:1, C:1, T:1 --> 2z3
Core-aggregated:
P:1, C:1 --> 2020
Individual:
P:1, C:2, T:0 --> 40
P:1, C:2, T:1 --> 4z3
Core-aggregated:
P:1, C:2 --> 4040
Individual:
P:1, C:3, T:0 --> 80
P:1, C:3, T:1 --> 8z3
Core-aggregated:
P:1, C:3 --> 8080
Pkg-aggregated:
P:1 --> f0f0
APIC ID listings from affinity masks
OS cpu 0, Affinity mask 000001 - apic id 0
OS cpu 1, Affinity mask 000002 - apic id 2
OS cpu 2, Affinity mask 000004 - apic id c
OS cpu 3, Affinity mask 000008 - apic id e
OS cpu 4, Affinity mask 000010 - apic id 20
OS cpu 5, Affinity mask 000020 - apic id 22
OS cpu 6, Affinity mask 000040 - apic id 2c
OS cpu 7, Affinity mask 000080 - apic id 2e
OS cpu 8, Affinity mask 000100 - apic id 1
OS cpu 9, Affinity mask 000200 - apic id 3
OS cpu 10, Affinity mask 000400 - apic id d
OS cpu 11, Affinity mask 000800 - apic id f
OS cpu 12, Affinity mask 001000 - apic id 21
OS cpu 13, Affinity mask 002000 - apic id 23
OS cpu 14, Affinity mask 004000 - apic id 2d
OS cpu 15, Affinity mask 008000 - apic id 2f
Package 0 Cache and Thread details
Box Description:
Cache is cache level designator
Size is cache size
OScpu# is cpu # as seen by OS
Core is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
where # is number of zeroes (so '8z5' is '0x800000')
L1D is Level 1 Data cache, size(KBytes)= 32, Cores/cache= 2, Caches/package= 4
L1I is Level 1 Instruction cache, size(KBytes)= 32, Cores/cache= 2, Caches/package= 4
L2 is Level 2 Unified cache, size(KBytes)= 256, Cores/cache= 2, Caches/package= 4
L3 is Level 3 Unified cache, size(KBytes)= 20480, Cores/cache= 8, Caches/package= 1
+-----------+-----------+-----------+-----------+
Cache | L1D | L1D | L1D | L1D |
Size | 32K | 32K | 32K | 32K |
OScpu#| 0 8| 1 9| 2 10| 3 11|
Core |c0_t0 c0_t1|c1_t0 c1_t1|c2_t0 c2_t1|c3_t0 c3_t1|
AffMsk| 1 100| 2 200| 4 400| 8 800|
CmbMsk| 101 | 202 | 404 | 808 |
+-----------+-----------+-----------+-----------+
Cache | L1I | L1I | L1I | L1I |
Size | 32K | 32K | 32K | 32K |
+-----------+-----------+-----------+-----------+
Cache | L2 | L2 | L2 | L2 |
Size | 256K | 256K | 256K | 256K |
+-----------+-----------+-----------+-----------+
Cache | L3 |
Size | 20M |
CmbMsk| f0f |
+-----------------------------------------------+
Combined socket AffinityMask= 0xf0f
Package 1 Cache and Thread details
Box Description:
Cache is cache level designator
Size is cache size
OScpu# is cpu # as seen by OS
Core is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
where # is number of zeroes (so '8z5' is '0x800000')
+-----------+-----------+-----------+-----------+
Cache | L1D | L1D | L1D | L1D |
Size | 32K | 32K | 32K | 32K |
OScpu#| 4 12| 5 13| 6 14| 7 15|
Core |c0_t0 c0_t1|c1_t0 c1_t1|c2_t0 c2_t1|c3_t0 c3_t1|
AffMsk| 10 1z3| 20 2z3| 40 4z3| 80 8z3|
CmbMsk| 1010 | 2020 | 4040 | 8080 |
+-----------+-----------+-----------+-----------+
Cache | L1I | L1I | L1I | L1I |
Size | 32K | 32K | 32K | 32K |
+-----------+-----------+-----------+-----------+
Cache | L2 | L2 | L2 | L2 |
Size | 256K | 256K | 256K | 256K |
+-----------+-----------+-----------+-----------+
Cache | L3 |
Size | 20M |
CmbMsk| f0f0 |
+-----------------------------------------------+
}}}
! intel turbostat
{{{
[root@enkx3db02 ~]# ./turbostat
pkg core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
2.05 2.42 2.89 97.95 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 0 0 3.19 1.93 2.89 96.81 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 0 8 2.09 1.93 2.89 97.91 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 1 1 4.14 2.22 2.89 95.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 1 9 10.10 2.66 2.89 89.90 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 6 2 0.89 1.98 2.89 99.11 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 6 10 5.12 2.79 2.89 94.88 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 7 3 0.40 2.26 2.89 99.60 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 7 11 0.46 2.33 2.89 99.54 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 0 4 1.86 2.07 2.89 98.14 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 0 12 0.53 2.33 2.89 99.47 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 1 5 0.57 2.45 2.89 99.43 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 1 13 0.95 2.55 2.89 99.05 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 6 6 0.58 1.62 2.89 99.42 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 6 14 1.04 2.68 2.89 98.96 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 7 7 0.31 2.18 2.89 99.69 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 7 15 0.58 2.75 2.89 99.42 0.00 0.00 0.00 0.00 0.00 0.00 0.00
}}}
http://rnm1978.wordpress.com/2011/02/02/instrumenting-obiee-for-tracing-oracle-db-calls/
http://rnm1978.wordpress.com/2010/01/26/identify-your-users-by-setting-client-id-in-oracle/
http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
http://method-r.com/software/mrtools
http://method-r.com/component/content/article/115 <-- mrls
http://method-r.com/component/content/article/116 <-- mrnl
http://method-r.com/component/content/article/117 <-- mrskew
http://appsdba.com/docs/orcl_event_6340.html <-- trace file event timeline
http://www.appsdba.com/blog/?category_name=oracle-dba&paged=2
http://www.appsdba.com/blog/?p=109 <-- trace file execution tree
http://appsdba.com/utilities_resource.htm
http://www.juliandyke.com/Diagnostics/Trace/EnablingTrace.html
http://www.rittmanmead.com/2005/04/tracing-parallel-execution/
http://www.antognini.ch/2012/08/event-10046-full-list-of-levels/
http://www.sagecomputing.com.au/papers_presentations/lostwithoutatrace.pdf <- good stuff, with sample codes
http://www.oracle-base.com/articles/8i/DBMS_APPLICATION_INFO.php <- DBMS_APPLICATION_INFO : For Code Instrumentation
http://www.oracle-base.com/articles/misc/DBMS_SESSION.php <- DBMS_SESSION : Managing Sessions From a Connection Pool in Oracle Databases
http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
http://www.petefinnigan.com/ramblings/how_to_set_trace.htm
http://psoug.org/reference/dbms_monitor.html
http://psoug.org/reference/dbms_applic_info.html
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:49818662859946
How to: Trace the SQL executed by SYSMAN Using a Trigger [ID 400937.1]
{{{
CREATE OR REPLACE TRIGGER logontrig AFTER logon ON DATABASE
begin
  if ora_login_user = 'SYSMAN' then
    execute immediate 'alter session set tracefile_identifier = '||'SYSMAN';
    execute immediate 'Alter session set events ''10046 trace name context forever, level 12''';
  end if;
end;
/
}}}
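Once the SYSMAN traces are collected, remember to remove the trigger so every SYSMAN logon doesn't keep tracing (a small cleanup sketch):
{{{
DROP TRIGGER logontrig;
}}}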
Capture 10046 Traces Upon User Login (without using a trigger) [ID 371678.1]
http://dbmentors.blogspot.com/2011/09/using-dbmsmonitor.html
http://docs.oracle.com/cd/B28359_01/network.111/b28531/app_context.htm <- application context
https://method-r.fogbugz.com/default.asp?method-r.11.139.2 <- hotsos ILO
http://www.databasejournal.com/features/oracle/article.php/3435431/Oracle-Session-Tracing-Part-I.htm <- Oracle Session Tracing Part I
''per module''
{{{
exec DBMS_MONITOR.serv_mod_act_trace_enable (service_name => 'FSTSTAH', module_name => 'EX_APPROVAL');
exec DBMS_MONITOR.serv_mod_act_trace_disable (service_name => 'FSTSTAH', module_name => 'EX_APPROVAL');
trcsess output=client.trc module=EX_APPROVAL *.trc
./orasrp --aggregate=no --binds=0 --recognize-idle-events=no --sys=no client.trc fsprd.html
tkprof client.trc client.tkprof sort=exeela
}}}
''grep tkprof SQLs''
{{{
less client.tkprof-webapp | grep -B3 -A30 "SELECT L2.TREE_NODE_NUM" | egrep "SQL ID|total" | less
SQL ID: 9gxa3r2v0mkzp Plan Hash: 751140913
total 24 3.65 3.65 0 9103 0 4294
SQL ID: 9zssps0292n9m Plan Hash: 2156210208
total 17 2.64 2.64 0 206748 0 2901
SQL ID: 034a6u0h7psb1 Plan Hash: 2156210208
total 3 0.18 0.18 0 8929 0 4
SQL ID: 2yr2m4xfb14z0 Plan Hash: 4136997945
total 3 0.18 0.18 0 9102 0 3
SQL ID: 0rurft7y2paks Plan Hash: 3656446192
total 14 3.62 3.62 0 9102 0 2391
SQL ID: 99ugjzcz1j1r4 Plan Hash: 2156210208
total 24 2.62 2.62 0 206749 0 4337
SQL ID: 5fgb0cvhqy8w2 Plan Hash: 2156210208
total 28 3.26 3.26 0 215957 0 5077
SQL ID: amrb5fkaysu2r Plan Hash: 2156210208
total 3 0.14 0.14 0 11367 0 3
SQL ID: 3d6u5vjh1y5ny Plan Hash: 2156210208
total 20 3.26 3.27 0 215956 0 3450
}}}
{{{
select service_name, module from v$session where module = 'EX_APPROVAL'
SERVICE_NAME MODULE
---------------------------------------------------------------- ----------------------------------------------------------------
FSPRDOL EX_APPROVAL
FSPRDOL EX_APPROVAL
FSPRDOL EX_APPROVAL
FSPRDOL EX_APPROVAL
FSPRDOL EX_APPROVAL
FSPRDOL EX_APPROVAL
FSPRDOL EX_APPROVAL
FSPRDOL EX_APPROVAL
FSPRDOL EX_APPROVAL
9 rows selected.
SYS@fsprd2> SELECT * FROM DBA_ENABLED_TRACES ;
SYS@fsprd2>
SYS@fsprd2> /
no rows selected
SYS@fsprd2>
SYS@fsprd2>
SYS@fsprd2> exec DBMS_MONITOR.serv_mod_act_trace_enable (service_name => 'FSPRDOL', module_name => 'EX_APPROVAL');
PL/SQL procedure successfully completed.
SELECT
TRACE_TYPE,
PRIMARY_ID,
QUALIFIER_ID1,
waits,
binds
FROM DBA_ENABLED_TRACES;
TRACE_TYPE PRIMARY_ID QUALIFIER_ID1 WAITS BINDS
--------------------- ---------------------------------------------------------------- ------------------------------------------------ ----- -----
SERVICE_MODULE FSPRDOL EX_APPROVAL TRUE FALSE
--To disable
exec DBMS_MONITOR.serv_mod_act_trace_disable (service_name => 'FSPRDOL', module_name => 'EX_APPROVAL');
}}}
<<showtoc>>
! 10046 and 10053
* when this is used, the 10046 and 10053 data are contained in one trace file
* you can parse this using tv10053.exe but not with lab128 (v10053.exe)
* you may have to regenerate separate 10046 and 10053 traces for a less noisy session call graph on the 10046 report
* if you want a separate run of 10046 and 10053, then remove the 10053 trace from the testcase file and use DBMS_SQLDIAG.DUMP_TRACE at the end of the SQL execution as shown here [[10053]]
{{{
+++10046_10053++++
sqlplus <app user>/<pwd>
alter session set timed_statistics = true;
alter session set statistics_level=ALL;
alter session set max_dump_file_size=UNLIMITED;
alter session set tracefile_identifier='10046_10053';
alter session set events '10046 trace name context forever, level 12';
alter session set events '10053 trace name context forever, level 1';
>>>here run the query
--run dummy query to close cursor
select 1 from dual;
exit;
Find trc with suffix "10046_10053" in <diag> directory and upload it to the SR.
To find all trace files for the current instance >>>>> SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';
select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));
}}}
! time series short_stack
[[genstack loop, short_stack loop, time series short_stack]]
! perf and flamegraph
[[Flamegraph using SQL]]
<<showtoc>>
11g
http://structureddata.org/2011/08/18/creating-optimizer-trace-files/?utm_source=rss&utm_medium=rss&utm_campaign=creating-optimizer-trace-files
Examining the Oracle Database 10053 Trace Event Dump File
http://www.databasejournal.com/features/oracle/article.php/3894901/article.htm
Don Seiler
http://seilerwerks.wordpress.com/2007/08/17/dr-statslove-or-how-i-learned-to-stop-guessing-and-love-the-10053-trace/
! new way
{{{
-- execute the SQL here
-- put this at the end of the testcase file
BEGIN
DBMS_SQLDIAG.DUMP_TRACE (
p_sql_id => 'd4cdk8w5sazzq',
p_child_number=> 0,
p_component => 'Compiler',
p_file_id => 'TESTCASE_COLUMN_GROUP_C0');
END;
/
BEGIN
DBMS_SQLDIAG.DUMP_TRACE (
p_sql_id => 'd4cdk8w5sazzq',
p_child_number=> 1,
p_component => 'Compiler',
p_file_id => 'TESTCASE_COLUMN_GROUP_C1');
END;
/
select value from v$diag_info where name = 'Default Trace File';
select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));
rm /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/*TESTCASE*
ls -ltr /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/*TESTCASE*
$ ls -ltr /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/*TESTCASE*
-rw-r-----. 1 oracle oinstall 326371 Aug 17 10:49 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20180_TESTCASE_COLUMN_GROUP_C0.trm
-rw-r-----. 1 oracle oinstall 760584 Aug 17 10:49 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20180_TESTCASE_COLUMN_GROUP_C0.trc
-rw-r-----. 1 oracle oinstall 323253 Aug 17 10:49 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20180_TESTCASE_COLUMN_GROUP_C1.trm
-rw-r-----. 1 oracle oinstall 751171 Aug 17 10:49 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20180_TESTCASE_COLUMN_GROUP_C1.trc
-rw-r-----. 1 oracle oinstall 318794 Aug 17 10:50 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20274_TESTCASE_NO_COLUMN_GROUP_C0.trm
-rw-r-----. 1 oracle oinstall 745873 Aug 17 10:50 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20274_TESTCASE_NO_COLUMN_GROUP_C0.trc
-rw-r-----. 1 oracle oinstall 318767 Aug 17 10:50 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20274_TESTCASE_NO_COLUMN_GROUP_C1.trm
-rw-r-----. 1 oracle oinstall 745875 Aug 17 10:50 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20274_TESTCASE_NO_COLUMN_GROUP_C1.trc
}}}
! generic
{{{
99g0fgyrhb4n7
BEGIN
DBMS_SQLDIAG.DUMP_TRACE (
p_sql_id => 'bmd4dk0p4r0pc',
p_child_number=> 0,
p_component => 'Compiler',
p_file_id => 'bmd4dk0p4r0pc');
END;
/
select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));
mv /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_24762_TC_NOLOB_PEEKED.trc .
cat orclcdb_ora_19285_TCPEEKED.trc | grep -hE "^DP|^AP"
mv /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_25322_TC_NOLOB_ACTUAL.trc .
cat orclcdb_ora_25322_TC_NOLOB_ACTUAL.trc | grep -hE "^DP|^AP"
cat /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_3776_bmd4dk0p4r0pc.trc | grep -hE "^DP|^AP"
}}}
! your own session
{{{
trace the session
ALTER SESSION SET TRACEFILE_IDENTIFIER='LIO_TRACE';
ALTER SESSION SET EVENTS '10200 TRACE NAME CONTEXT FOREVER, LEVEL 1';
Then take the occurrence of the LIO reasons
$ less emrep_ora_9946_WATCH_CONSISTENT.trc | grep "started for block" | awk '{print $1} ' | sort | uniq -c
324 ktrget2():
44 ktrgtc2():
I found this too which more on tracking the objects
http://hoopercharles.wordpress.com/2011/01/24/watching-consistent-gets-10200-trace-file-parser/
}}}
! another session
{{{
1) create the files ss.sql and getlio.awk (see below)
2) get the sid and serial# and trace file name
SELECT s.sid,
s.serial#,
s.server,
lower(
CASE
WHEN s.server IN ('DEDICATED','SHARED') THEN
i.instance_name || '_' ||
nvl(pp.server_name, nvl(ss.name, 'ora')) || '_' ||
p.spid || '.trc'
ELSE NULL
END
) AS trace_file_name
FROM v$instance i,
v$session s,
v$process p,
v$px_process pp,
v$shared_server ss
WHERE s.paddr = p.addr
AND s.sid = pp.sid (+)
AND s.paddr = ss.paddr(+)
AND s.type = 'USER'
ORDER BY s.sid;
3) to start trace, set the 10200 event level 1
exec sys.dbms_system.set_ev(200 , 11667, 10200, 1, '');
4) monitor the file size
while : ; do du -sm dw_ora_18177.trc ; echo "--" ; sleep 2 ; done
5) execute ss.sql on the sid for 5 times
6) to stop trace, set the 10200 event level 0
exec sys.dbms_system.set_ev(200 , 11667, 10200, 0, '');
7) process the trace file and the oradebug output
-- get the top objects
awk -v trcfile=dw_ora_18177.trc -f getlio.awk
-- get the function names
less dw_ora_18177.trc | grep "started for block" | awk '{print $1} ' | sort | uniq -c
8) SQL to get the object names
SELECT
OBJECT_NAME,
DATA_OBJECT_ID,
TO_CHAR(DATA_OBJECT_ID, 'XXXXX') HEX_DATA_OBJECT_ID
FROM
DBA_OBJECTS
WHERE
DATA_OBJECT_ID IN(
TO_NUMBER('15ced', 'XXXXX'))
/
OBJECT_NAME DATA_OBJECT_ID HEX_DA
-------------------------------------------------------------------------------------------------------------------------------- -------------- ------
OBJ$ 18 12
Summary obj for file: dw_ora_18177.trc
---------------------------------
0x00000012 2781466
2781466 ktrget2():
#### ss.sql and getlio.awk scripts below
cat ss.sql
oradebug setospid &spid
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
$ cat getlio.awk
BEGIN {
FS ="[ \t<>:]+"
print "Details for file: " trcfile
print "---------------------------------"
while( getline < trcfile != EOF ){
if ( $0 ~ /started for block/ ) {
rdba[$6]+=1
obj[$8]+=1
both[$6","$8]+=1
#print $6 " " rdba[$6] ", " $8 " " obj[$8]
}
}
close (trcfile)
print ""
print ""
print "Summary rdba and obj for file: " trcfile
print "---------------------------------"
for ( var in both) {
#print var " " both[var]
}
print ""
print "Summary obj for file: " trcfile
print "---------------------------------"
for ( var in obj ) {
print var " " obj[var]
}
}
}}}
https://leetcode.com/problems/customers-who-bought-all-products/
{{{
1045. Customers Who Bought All Products
Medium
SQL Schema
Table: Customer
+-------------+---------+
| Column Name | Type |
+-------------+---------+
| customer_id | int |
| product_key | int |
+-------------+---------+
product_key is a foreign key to Product table.
Table: Product
+-------------+---------+
| Column Name | Type |
+-------------+---------+
| product_key | int |
+-------------+---------+
product_key is the primary key column for this table.
Write an SQL query for a report that provides the customer ids from the Customer table that bought all the products in the Product table.
For example:
Customer table:
+-------------+-------------+
| customer_id | product_key |
+-------------+-------------+
| 1 | 5 |
| 2 | 6 |
| 3 | 5 |
| 3 | 6 |
| 1 | 6 |
+-------------+-------------+
Product table:
+-------------+
| product_key |
+-------------+
| 5 |
| 6 |
+-------------+
Result table:
+-------------+
| customer_id |
+-------------+
| 1 |
| 3 |
+-------------+
The customers who bought all the products (5 and 6) are customers with id 1 and 3.
Accepted
4,086
Submissions
6,109
}}}
{{{
-- note: the sum(distinct) trick works only because product_key is a positive FK into
-- product; the count(distinct) form further below is the robust one
select customer_id
from customer
group by customer_id
having sum(distinct product_key) = (select sum(distinct product_key) from product);
-- select a.customer_id
-- from customer a, product b
-- where a.product_key = b.product_key;
select
customer_id
from customer
group by customer_id
having count(distinct product_key) in (select count(*) from product);
}}}
http://www.freelists.org/post/oracle-l/SQL-High-version-count-because-of-too-many-varchar2-columns,12
http://t31808.db-oracle-general.databasetalk.us/sql-high-version-count-because-of-too-many-varchar2-columns-t31808.html
SQLs With Bind Variable Has Very High Version Count (Doc ID 258742.1)
{{{
event="10503 trace name context forever, level "
For eg., if the maximum length of a bind variable in the application is 128, then
event="10503 trace name context forever, level 128"
The EVENT 10503 was added as a result of BUG:2450264
This fix introduces the EVENT 10503 which enables users to specify a character bind buffer length.
Depending on the length used, the character binds in the child cursor can all be created
using the same bind length;
skipping bind graduation and keeping the child chain relatively small.
This helps to alleviate a potential cursor-sharing problem related to graduated binds.
The level of the event is the bind length to use, in bytes.
It is relevant for binds of types:
Character (but NOT ANSI Fixed CHAR (type 96 == DTYAFC))
Raw
Long Raw
Long
* There really is no limit for the EVENT 10503 but for the above datatypes.
For non-PL/SQL calls, the maximum bind buffer size is 4001 (bytes). For PL/SQL,
the maximum bind buffer size is 32K.
* Specifying a buffer length which is greater than the pre-set maximum will cause the
pre-set maximum to be used. To go back to using the pre-set lengths, specify '0' for the buffer
length.
Test the patch and event in development environment before implementing in the production environment.
}}}
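A minimal session-level sketch of the same event (level 128 = the bind buffer length to use), plus a quick check for high version counts; v$sqlarea is standard, and BIND_LENGTH_UPGRADEABLE in v$sql_shared_cursor exists on recent versions (11.2.0.3+):
{{{
alter session set events '10503 trace name context forever, level 128';
-- find cursors suffering from high version counts
select sql_id, version_count, executions
from v$sqlarea
where version_count > 20
order by version_count desc;
-- children created because of bind graduation
select sql_id, count(*)
from v$sql_shared_cursor
where bind_length_upgradeable = 'Y'
group by sql_id;
}}}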
! tuning
http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf
http://www.oracle.com/technetwork/server-storage/vm/ovm3-10gbe-perf-1900032.pdf
http://dak1n1.com/blog/7-performance-tuning-intel-10gbe
-- this will hog your server's memory in no time
{{{
select count(*) from dual connect by 1=1;
}}}
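If you just need a row generator, bounding the CONNECT BY avoids the runaway memory use:
{{{
select count(*) from dual connect by level <= 1e6;
}}}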
http://www.pythian.com/news/26003/rdbms-online-patching/
''Online Patching is a new feature introduced in 11.1.0.6; it is delivered starting with RDBMS 11.2.0.2.0.''
http://goo.gl/2U3H3
http://apex.oracle.com/pls/apex/f?p=44785:24:0:::24:P24_CONTENT_ID,P24_PREV_PAGE:4679,1
RDBMS Online Patching Aka Hot Patching [ID 761111.1]
''Quick guide to package ORA- errors with ADRCI'' http://www.evernote.com/shard/s48/sh/e6086cd4-ab4e-4065-b145-323cfa545f80/a831bef2f6480f43c96bb23749df2710
http://goo.gl/mNnaD
''quick step by step'' https://support.oracle.com/CSP/main/article?cmd=show&type=ATT&id=443529.1:Steps&inline=1
How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors (Doc ID 232963.1)
To change the ADR base
<<<
ADR base = "/u01/app/oracle/product/11.2.0.3/dbhome_1/log"
adrci>
adrci>
''adrci> set base /u01/app/oracle''
adrci>
adrci> show home
ADR Homes:
diag/asm/+asm/+ASM4
diag/tnslsnr/pd01db04/listener
diag/tnslsnr/pd01db04/listener_fsprd
diag/tnslsnr/pd01db04/listener_temp
diag/tnslsnr/pd01db04/listener_mtaprd11
diag/tnslsnr/pd01db04/listener_scan2
diag/tnslsnr/pd01db04/listener_mvwprd
diag/tnslsnr/pd01db04/stat
diag/rdbms/dbm/dbm4
diag/rdbms/dbfsprd/DBFSPRD4
diag/rdbms/mtaprd11/mtaprd112
diag/rdbms/fsprd/fsprd2
diag/rdbms/fsqacdc/fsqa2
diag/rdbms/fsprddal/fsprd2
diag/rdbms/mtaprd11dal/mtaprd112
diag/rdbms/mvwprd/mvwprd2
diag/rdbms/mvwprddal/mvwprd2
diag/clients/user_oracle/host_783020838_80
diag/clients/user_oracle/host_783020838_11
<<<
{{{
Use ADRCI or SWB steps to create IPS packages
ADRCI
1. Start ADRCI
# adrci
2. Show the existing ADR homes
adrci> show home
3. Set the ADR home
adrci> set home <adr home>
4. Show all problems
adrci> show problem
5. Show all incidents
adrci> show incident
6. Package the diagnostic information for an incident
adrci> ips pack incident <incident id>
SWB
1. Log in to Enterprise Manager
2. Click the 'Support Workbench' link
3. Select the 'All Active' problems view
4. Click the 'Problem ID' to view the corresponding incidents
5. Select the appropriate incident
6. Click 'Quick Package'
7. Enter the package name and description, and choose whether to upload to Oracle Support
8. Review the package information
9. Select 'Immediately' to create the package, then click 'Submit'
For more information, please read the following notes.
Note 422893.1 - 11g Understanding Automatic Diagnostic Repository.
Note 1091653.1 - "11g Quick Steps - How to create an IPS package using Support Workbench" [Video]
Note 443529.1 - 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video]
}}}
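A concrete ADRCI packaging sketch (the incident id and target directory are illustrative):
{{{
adrci> set home diag/rdbms/orclcdb/orclcdb
adrci> show incident
adrci> ips pack incident 12345 in /tmp
}}}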
! purge
http://www.runshell.com/2013/01/oracle-how-to-purge-old-trace-and-dump.html
11g : Active Database Duplication
Doc ID: Note:568034.1
-- DATABASE REPLAY
Oracle Database Replay Client Provisioning - Platform Download Matrix
Doc ID: 815567.1
How To Find Database Replay Divergence Details [ID 1388309.1]
Oracle Database 11g: Interactive Quick Reference http://goo.gl/rQejT
{{{
New Products Installed in 11g:
------------------------------
1) Oracle APEX
**- Installed by default
2) Oracle Warehouse Builder
**- Installed by default
3) Oracle Configuration Manager
- Offered, not installed by default
two options:
connected mode
disconnected mode
4) SQL Developer
- Installed by default with template-based database installations
- It is also installed with database client
5) Database Vault
- Installed by default (OPTIONAL component - custom installation)
Changes in Install Options:
---------------------------
1) Oracle Configuration Manager
- Starting 11g, Integrated with OUI (OPTIONAL component)
2) Oracle Data Mining
- Selected on Enterprise Edition Installation type
3) Oracle Database Vault
- Starting 11g, Integrated with OUI (OPTIONAL component - custom installation)
4) Oracle HTTP Server
- Starting 11g, Available on separate media
5) Oracle Ultra Search
- Starting 11g, Integrated with the Oracle Database
6) Oracle XML DB
- Starting 11g, Installed by default
New Parameters:
---------------
MEMORY_TARGET
DIAGNOSTIC_DEST
New in ASM:
-----------
Automatic Storage Management Fast Mirror Resync
see: Oracle Database Storage Administrator's Guide
SYSASM privilege
OSASM group
New Directories:
----------------
ADR_base/diag <-- automatic diagnostic repository
Deprecated Components:
----------------------
iSQL*Plus
Oracle Workflow
Oracle Data Mining Scoring Engine
Oracle Enterprise Manager Java Console
Overview of Installation:
-------------------------
CSS (Cluster Synchronization Services) does the synchronization between ASM and database instance
for RAC, resides on Clusterware Home
for Single Node-Single System, resides on home directory of ASM instance
Automatic Storage Management
can be used starting 10.1.0.3 or later
also, if you are 11.1 then you could use ASM from 10.1
Database Management Options:
either you use:
1) Enterprise Manager Grid Control
Oracle Management Repository & Service --> Install Management Agent on each computer
2) Local Database Control
Upgrading the database using RHEL 2.1 OS
www.oracle.com/technology/tech/linux/pdf/rhel_23_upgrade.pdf
Preinstallation:
----------------
1) Logging In to the System as root
2) Checking the Hardware Requirements
**NEW-parameters:
memory_max_target
memory_target
3) Checking the Software Requirements
# Operating System Requirements
# Kernel Requirements
# Package Requirements
rpm -qa | grep -i "binutils"
rpm -qa | grep -i "compat-libstdc++"
rpm -qa | grep -i "elfutils-libelf"
rpm -qa | grep -i "elfutils-libelf-devel"
rpm -qa | grep -i "glibc"
rpm -qa | grep -i "glibc-common"
rpm -qa | grep -i "glibc-devel"
rpm -qa | grep -i "gcc"
rpm -qa | grep -i "gcc-c++"
rpm -qa | grep -i "libaio"
rpm -qa | grep -i "libaio-devel"
rpm -qa | grep -i "libgcc"
rpm -qa | grep -i "libstdc++"
rpm -qa | grep -i "libstdc++-devel"
rpm -qa | grep -i "make"
rpm -qa | grep -i "sysstat"
rpm -qa | grep -i "unixODBC"
rpm -qa | grep -i "unixODBC-devel"
NOT DISCOVERED:
rpm -qa | grep -i "elfutils-libelf-devel"
dep: elfutils-libelf-devel-static-0.125-3.el5.i386.rpm
rpm -qa | grep -i "libaio-devel"
rpm -qa | grep -i "sysstat"
rpm -qa | grep -i "unixODBC"
rpm -qa | grep -i "unixODBC-devel"
# Compiler Requirements
# Additional Software Requirements
4) Preinstallation Requirements for Oracle Configuration Manager
5) Checking the Network Setup
# Configuring Name Resolution
# Installing on DHCP Computers
# Installing on Multihomed Computers
# Installing on Computers with Multiple Aliases
# Installing on Non-Networked Computers
6) Creating Required Operating System Groups and Users
**NEW-group:
OSASM group...which has a usual name of "ASMADMIN"
this group is for ASM storage administrators
groupadd oinstall
groupadd dba
groupadd oper
groupadd asmadmin
useradd -g oinstall -G dba,oper,asmadmin oracle
7) Configuring Kernel Parameters
in /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 268435456
fs.file-max = 102552
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
to increase shell limits:
in /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
in /etc/pam.d/login
session required /lib/security/pam_limits.so
session required pam_limits.so
in /etc/profile
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
8) Identifying Required Software Directories
9) Identifying or Creating an Oracle Base Directory
root@localhost ~]# mkdir -p /u01/app
[root@localhost ~]# chown -R oracle:oinstall /u01/app
[root@localhost ~]# chmod -R 775 /u01/app
10) Choosing a Storage Option for Oracle Database and Recovery Files
11) Creating Directories for Oracle Database or Recovery Files
[root@localhost oracle]# mkdir flash_recovery_area
[root@localhost oracle]# chown oracle:oinstall flash_recovery_area/
[root@localhost oracle]# chmod 775 flash_recovery_area/
12) Preparing Disk Groups for an Automatic Storage Management Installation
13) Stopping Existing Oracle Processes
14) Configuring the oracle User's Environment
umask 022
export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
export ORACLE_BASE=/u01/app/oracle
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_SID=ora11
PATH=$ORACLE_HOME/bin:$PATH
}}}
Creating an Oracle ACFS File System https://docs.oracle.com/database/121/OSTMG/GUID-4C98CF06-8CCC-45F1-9316-C40FB3EFF268.htm#OSTMG94787
http://www.oracle-base.com/articles/11g/ACFS_11gR2.php
ACFS Technical Overview and Deployment Guide [ID 948187.1] ''<-- ACFS now supports RMAN, DataPump on 11.2.0.3 above... BTW, it does not support archivelogs… You still have to have the FRA diskgroup to put your archivelogs/redo. At least you can have the ACFS as container of backupsets and data pump files''
''update''
11.2.0.3 now supports almost everything
http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmfilesystem.htm#CACJFGCD
Starting with Oracle Automatic Storage Management 11g Release 2 (11.2.0.3), Oracle ACFS supports RMAN backups (BACKUPSET file type), archive logs (ARCHIVELOG file type), and Data Pump dumpsets (DUMPSET file type). Note that Oracle ACFS snapshots are not supported with these files.
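For reference, a minimal ACFS creation sketch (assumes an ASM disk group named DATA and an existing mount point /acfs01; the /dev/asm device name will differ):
{{{
SQL> alter diskgroup DATA add volume acfsvol1 size 10G;
SQL> select volume_name, volume_device from v$asm_volume;
# mkfs -t acfs /dev/asm/acfsvol1-123
# mount -t acfs /dev/asm/acfsvol1-123 /acfs01
}}}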
''update 08/2014''
ACFS supported on Exadata
<<<
Creating ACFS file systems on Exadata storage requires the following:
Oracle Linux
Grid Infrastructure 12.1.0.2
Database files stored in ACFS on Exadata storage are subject to the following guidelines and restrictions:
Supported database versions are 10.2.0.4, 10.2.0.5, 11.2.0.4, and 12.1.
Hybrid Columnar Compression (HCC) support (for 11.2 and 12.1) requires fix for bug 19136936.
Exadata-offload features such as Smart Scan, Storage Indexes, IORM, Network RM, etc. are not supported.
Exadata Smart Flash Cache will cache read operations. Caching of write operations is expected in a later release.
No specialized cache hints are passed from the Database to the Exadata Storage layer, which means the Smart Flash Cache heuristics are based on I/O size, similar to any other block storage caching technology.
Exadata Smart Flash Logging is not supported.
Hardware Assisted Resilient Data (HARD) checks are not performed.
<<<
How To Install/Reinstall Or Deinstall ACFS Modules/Installation Manually? [ID 1371067.1]
http://www.oracle-base.com/articles/11g/DBFS_11gR2.php
http://ronnyegner.wordpress.com/2009/10/08/the-oracle-database-file-system-dbfs/
http://www.pythian.com/news/17849/chopt-utility/
http://perumal.org/enabling-and-disabling-database-options/
http://juliandyke.wordpress.com/2010/10/06/oracle-11-2-0-2-requires-multicasting-on-the-interconnect/
http://dbastreet.com/blog/?p=515
http://blog.ronnyegner-consulting.de/oracle-11g-release-2-install-guide/
{{{
The only difference on databases that have DBV and TDE configured is that when
DBAs try to create a user, it has to go through the dvadmin user. Databases that don't have the
DV schemas created and configured will still behave as is.
Below is a sample of creating a user in a DBV environment
SYS@dbv_1> SYS@dbv_1> select username from dba_users order by 1;
USERNAME
------------------------------
ANONYMOUS
APEX_030200
APEX_PUBLIC_USER
APPQOSSYS
BI
CTXSYS
DBSNMP
DIP
DVADMIN
DVF
DVOWNER
DVSYS
SYS@dbv_1> conn / as sysdba
SYS@dbv_1> create user karlarao identified by karlarao;
create user karlarao identified by karlarao
*
ERROR at line 1:
ORA-01031: insufficient privileges
SYS@dbv_1> conn dvadmin/<password>
Connected.
DVADMIN@dbv_1> create user karlarao identified by karlarao;
User created.
}}}
http://www.dpriver.com/blog/list-of-demos-illustrate-how-to-use-general-sql-parser/oracle-sql-query-rewrite/
{{{
1. (NOT) IN sub-query to (NOT) EXISTS sub-query
2. (NOT) EXISTS sub-query to (NOT) IN sub-query
3. Separate outer joined inline view using UNION ALL or add hint for the inline view
4. IN clause to UNION ALL statement
5. OR clause to UNION ALL statement
6. NVL function to UNION ALL statement
7. Re-write suppressed joined columns in the WHERE clause
8. VIEW expansion
9. NOT EXISTS to NOT IN hash anti-join
10. Make columns suppressed using RTRIM function or ‘+0’
11. Add hint to the statement
12. Co-related sub-query to inline View
}}}
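As an illustration of item 5 (OR clause to UNION ALL) on a hypothetical EMP table; LNNVL keeps the two branches disjoint:
{{{
select * from emp where deptno = 10 or job = 'CLERK';
-- rewritten:
select * from emp where deptno = 10
union all
select * from emp where job = 'CLERK' and lnnvl(deptno = 10);
}}}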
! 2021
Common Coding and Design mistakes (that really mess up performance) https://www.slideshare.net/SageComputing/optmistakesora11dist
https://balazspapp.wordpress.com/2018/04/05/oracle-18c-recover-standby-database-from-service/
https://www.virtual-dba.com/blog/refreshing-physical-standby-using-recover-from-service-on-12c/
https://dbtut.com/index.php/2019/12/27/recover-datbase-using-service-refresh-standby-database-in-oracle-12c/
Restoring and Recovering Files Over the Network (from SERVICE)
https://docs.oracle.com/database/121/BRADV/rcmadvre.htm#BRADV685
Creating a Physical Standby database using RMAN restore database from service (Doc ID 2283978.1)
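A minimal sketch of the 18c one-liner refresh (assumes a TNS alias PRIM pointing to the primary; on 12c use the restore/recover ... from service steps in the links above instead):
{{{
-- on the standby host
RMAN> connect target /
RMAN> recover standby database from service PRIM;
}}}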
http://emarcel.com/upgrade-oracle-database-12c-with-asm-12-1-0-1-to-12-1-0-2/
{{{
LISTENER =
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521))
(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
SID_LIST_LISTENER=
(SID_LIST=
(SID_DESC=
(GLOBAL_DBNAME=orcl)
(SID_NAME=orcl)
(ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1)
)
(SID_DESC=
(GLOBAL_DBNAME=noncdb)
(SID_NAME=noncdb)
(ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1)
)
)
SECURE_REGISTER_LISTENER = (IPC)
}}}
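After editing listener.ora, reload and verify the static services with the standard lsnrctl commands:
{{{
$ lsnrctl reload
$ lsnrctl status
$ lsnrctl services
}}}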
https://martincarstenbach.wordpress.com/2012/06/20/little-things-worth-knowing-static-and-dynamic-listener-registration/
http://kerryosborne.oracle-guy.com/papers/12c_Adaptive_Optimization.pdf
https://oracle-base.com/articles/12c/adaptive-plans-12cr1
https://blog.dbi-services.com/sql-monitoring-12102-shows-adaptive-plans/
https://blog.dbi-services.com/oracle-12c-adaptive-plan-inflexion-point/
https://blogs.oracle.com/letthesunshinein/sql-monitor-now-tells-you-whether-the-execution-plan-was-adaptive-or-not
https://oracle.readthedocs.io/en/latest/sql/plans/adaptive-query-optimization.html#sql-adaptive
<<showtoc>>
! 12.1
!! optimizer_adaptive_features
* In 12.1, adaptive optimization as a whole is controlled by the dynamic parameter optimizer_adaptive_features, which defaults to TRUE. All of the features it controls are enabled when optimizer_features_enable >= 12.1
! 12.2
!! optimizer_adaptive_features has been obsoleted, replaced by two new parameters
!! optimizer_adaptive_plans, defaults to TRUE
* The optimizer_adaptive_plans parameter controls whether the optimizer creates adaptive plans and defaults to TRUE.
* The most commonly seen use of adaptive plans is where different sub-plans that may use different join methods are selected at run time. For example, a nested loops join may be converted to a hash join once execution information has identified that it provides better performance. The plan has been adapted according to the data presented.
!! optimizer_adaptive_statistics, defaults to FALSE
* The optimizer_adaptive_statistics parameter controls whether the optimizer uses adaptive statistics and defaults to FALSE
* The creation of automatic extended statistics is controlled by the table-level statistics preference AUTO_STAT_EXTENSIONS, which defaults to OFF. (AUTO_STAT_EXTENSIONS can be set using DBMS_STATS procedures like SET_TABLE_PREFS and SET_GLOBAL_PREFS.) These defaults have been chosen to place emphasis on achieving stable SQL execution plans
* Setting optimizer_features_enable has no effect on the features controlled by optimizer_adaptive_statistics. The creation of automatic extended statistics is controlled by the table-level statistics preference AUTO_STAT_EXTENSIONS, which defaults to OFF.
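A quick sketch to inspect and set these on 12.2 (parameter names as above; AUTO_STAT_EXTENSIONS goes through DBMS_STATS):
{{{
show parameter optimizer_adaptive
alter system set optimizer_adaptive_plans = true;
alter system set optimizer_adaptive_statistics = false;
exec dbms_stats.set_global_prefs('AUTO_STAT_EXTENSIONS','OFF');
select dbms_stats.get_prefs('AUTO_STAT_EXTENSIONS') from dual;
}}}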
Monitoring Business Applications http://docs.oracle.com/cd/E24628_01/install.121/e24215/bussapps.htm#BEIBBHFH
It's a kind of Service Type that combines information from:
* Systems (PSFT systems, for example),
* Service tests,
* Real User Experience Insight data, and
* Business Transaction Management data.
http://hemantoracledba.blogspot.sg/2013/07/concepts-features-overturned-in-12c.html
Oracle Database 12c Release 1 Information Center (Doc ID 1595421.2)
Release Schedule of Current Database Releases (Doc ID 742060.1)
Master Note For Oracle Database 12c Release 1 (12.1) Database/Client Installation/Upgrade/Migration Standalone Environment (Non-RAC) (Doc ID 1520299.1)
Master Note of Linux OS Requirements for Database Server (Doc ID 851598.1)
Requirements for Installing Oracle Database 12.1 on RHEL5 or OL5 64-bit (x86-64) (Doc ID 1529433.1)
Requirements for Installing Oracle Database 12.1 on RHEL6 or OL6 64-bit (x86-64) (Doc ID 1529864.1)
Exadata 12.1.1.1.0 release and patch (16980054 ) (Doc ID 1571789.1)
http://ermanarslan.blogspot.com/2014/02/rac-listener-configuration-in-oracle.html
<<showtoc>>
<<<
12c, single instance installation featuring Oracle 12.1.0.2.0 on Oracle Linux 6.6.
The system is configured with 8 GB of RAM and 2 virtual CPUs.
The oracle account's password matches its username. The root password is r00t.
The ORACLE_HOME is in /u01/app/oracle/product/12.1.0.2/dbhome_1
<<<
! LAB X: OEM EXPRESS
{{{
0) create a swingbench schema
method a: lights out using swingbench installation
$> ./oewizard -scale 1 -dbap change_on_install -u soe_master -p soe_master -cl -cs //localhost/NCDB -ts SOE -create
SwingBench Wizard
Author : Dominic Giles
Version : 2.5.0.949
Running in Lights Out Mode using config file : oewizard.xml
============================================
| Datagenerator Run Stats |
============================================
Connection Time 0:00:00.004
Data Generation Time 0:00:20.889
DDL Creation Time 0:00:56.606
Total Run Time 0:01:17.503
Rows Inserted per sec 579,546
Data Generated (MB) per sec 47.2
Actual Rows Generated 13,007,340
Post Creation Validation Report
===============================
The schema appears to have been created successfully.
Valid Objects
=============
Valid Tables : 'ORDERS','ORDER_ITEMS','CUSTOMERS','WAREHOUSES','ORDERENTRY_METADATA','INVENTORIES','PRODUCT_INFORMATION','PRODUCT_DESCRIPTIONS','ADDRESSES','CARD_DETAILS'
Valid Indexes : 'PRD_DESC_PK','PROD_NAME_IX','PRODUCT_INFORMATION_PK','PROD_SUPPLIER_IX','PROD_CATEGORY_IX','INVENTORY_PK','INV_PRODUCT_IX','INV_WAREHOUSE_IX','ORDER_PK','ORD_SALES_REP_IX','ORD_CUSTOMER_IX','ORD_ORDER_DATE_IX','ORD_WAREHOUSE_IX','ORDER_ITEMS_PK','ITEM_ORDER_IX','ITEM_PRODUCT_IX','WAREHOUSES_PK','WHS_LOCATION_IX','CUSTOMERS_PK','CUST_EMAIL_IX','CUST_ACCOUNT_MANAGER_IX','CUST_FUNC_LOWER_NAME_IX','ADDRESS_PK','ADDRESS_CUST_IX','CARD_DETAILS_PK','CARDDETAILS_CUST_IX'
Valid Views : 'PRODUCTS','PRODUCT_PRICES'
Valid Sequences : 'CUSTOMER_SEQ','ORDERS_SEQ','ADDRESS_SEQ','LOGON_SEQ','CARD_DETAILS_SEQ'
Valid Code : 'ORDERENTRY'
Schema Created
Method b) exp/imp
FYI - the export information
[enkdb03:oracle:MBACH] /home/oracle/mbach/swingbench/bin
> expdp system/manager directory=oradir logfile=exp_soe_master.txt dumpfile=exp_soe_master.dmp schemas=soe_master
Export: Release 12.1.0.2.0 - Production on Mon Jun 8 05:20:55 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** directory=oradir logfile=exp_soe_master.txt dumpfile=exp_soe_master.dmp schemas=soe_master
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1.219 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "SOE_MASTER"."ORDER_ITEMS" 228.4 MB 4290312 rows
. . exported "SOE_MASTER"."ADDRESSES" 110.4 MB 1500000 rows
. . exported "SOE_MASTER"."CUSTOMERS" 108.0 MB 1000000 rows
. . exported "SOE_MASTER"."ORDERS" 129.1 MB 1429790 rows
. . exported "SOE_MASTER"."INVENTORIES" 15.26 MB 901254 rows
. . exported "SOE_MASTER"."CARD_DETAILS" 63.88 MB 1500000 rows
. . exported "SOE_MASTER"."LOGON" 51.24 MB 2382984 rows
. . exported "SOE_MASTER"."PRODUCT_DESCRIPTIONS" 216.8 KB 1000 rows
. . exported "SOE_MASTER"."PRODUCT_INFORMATION" 188.1 KB 1000 rows
. . exported "SOE_MASTER"."ORDERENTRY_METADATA" 5.617 KB 4 rows
. . exported "SOE_MASTER"."WAREHOUSES" 35.70 KB 1000 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/home/oracle/mbach/oradir/exp_soe_master.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Mon Jun 8 05:23:39 2015 elapsed 0 00:02:39
-> this needs to be imported into NCDB, taken from /u01/software
1) enable OEM express if you haven't already with your database.
Check if enabled:
select dbms_xdb.getHttpPort() from dual;
select dbms_xdb_config.getHttpsPort() from dual;
If none returns a result, set it up
exec dbms_xdb_config.sethttpsport(5500);
2) start charbench on the command line
create a AWR snapshot (exec dbms_workload_repository.create_snapshot)
./charbench -u soe_master -p soe_master -cs //localhost/NCDB -uc 10 -min -10 -max 100 -stats full -rt 0:10 -bs 0:01 -a
create another AWR snapshot (exec dbms_workload_repository.create_snapshot)
3) view the activity with your OEM express
if you need to use port-forwarding:
ssh -L <oem express port>:localhost:<oem express port> oracle@<VM-IP>
Then point your browser to it: https://<VM-IP>:<oem express port>/em
4) explore OEM express
Look at the performance overview page
Review the performance hub and look at the various panes available to you
5) Create an active-html AWR report
Review and admire it
}}}
! LAB X) SQL Monitor reports
{{{
SQL Monitor reports are a very useful performance monitoring and tuning tool. In this lab you will start experimenting with them. In order to do so you need a query. In the first step you'll create one to your own liking based on the SOE schema you imported earlier. Ensure you supply the /*+ monitor */ hint when executing it!
0) run a large query
select /*+ monitor gather_plan_statistics sqlmon001 */
count(*)
from customers c,
addresses a,
orders o,
order_items oi
where o.order_id = oi.order_id
and o.customer_id = c.customer_id
and a.customer_id = c.customer_id
and c.credit_limit =
(select max(credit_limit) from customers);
1) Create a SQL Monitor report from OEM express
Navigate the User Interface and find your monitored query. Take a note of the SQL ID, you will need it in step 3
2) Create a text version of the same SQL report
The graphical monitoring report requires a GUI and once retrieved, also relies on loading data from Oracle's website. In secure environments you may not have access to the Internet. In this step you need to look up the documentation for dbms_sqltune.report_sql_monitor and produce a text version of the report.
select dbms_sqltune.report_sql_monitor('&sqlID') from dual;
Review the reports and have a look around
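(optional) a sketch for spooling the ACTIVE/HTML variant of the same report, assuming the SQL ID from step 1:
set long 1000000 longchunksize 1000000 pagesize 0 linesize 32767 trimspool on
spool sqlmon_active.html
select dbms_sqltune.report_sql_monitor(sql_id => '&sqlID', type => 'ACTIVE') from dual;
spool off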
}}}
! LAB X: OTHER DEVELOPMENT FEATURES
{{{
This is a large-ish lab where you are going to explore various development-related features with the database.
1) Advanced index compression
The first lab will introduce you to index compression. It's based on a table created as a subset of soe.order_items. Copy the following script and execute it in your environment.
SET ECHO ON;
DROP TABLE t1 purge;
CREATE TABLE t1 NOLOGGING AS
SELECT * FROM ORDER_ITEMS WHERE ROWNUM <= 1e6;
CREATE INDEX t1_i1 ON t1 (order_id,line_item_id,product_id);
CREATE INDEX t1_i2 ON t1 (order_id, line_item_id);
CREATE INDEX t1_i3 ON t1 (order_id,line_item_id,product_id,unit_price);
CREATE INDEX t1_i4 ON t1 (order_id);
COL segment_name FOR A5 HEA "INDEX";
SET ECHO OFF;
SPO index.txt;
PRO NO COMPRESS
SELECT segment_name,
blocks
FROM user_segments
WHERE segment_name LIKE 'T1%'
AND segment_type = 'INDEX'
ORDER BY
segment_name;
SPO OFF;
SET ECHO ON;
/*
DROP TABLE t1 purge;
CREATE TABLE t1 NOLOGGING
AS
SELECT * FROM ORDER_ITEMS WHERE ROWNUM <= 1e6;
*/
DROP INDEX t1_i1;
DROP INDEX t1_i2;
DROP INDEX t1_i3;
DROP INDEX t1_i4;
CREATE INDEX t1_i1 ON t1 (order_id,line_item_id,product_id) COMPRESS 2;
CREATE INDEX t1_i2 ON t1 (order_id, line_item_id) COMPRESS 1;
CREATE INDEX t1_i3 ON t1 (order_id,line_item_id,product_id,unit_price) COMPRESS 3;
CREATE INDEX t1_i4 ON t1 (order_id) COMPRESS 1;
SET ECHO OFF;
SPO index.txt APP;
PRO PREFIX COMPRESSION
SELECT segment_name,
blocks
FROM user_segments
WHERE segment_name LIKE 'T1%'
AND segment_type = 'INDEX'
ORDER BY
segment_name;
SPO OFF;
SET ECHO ON;
DROP INDEX t1_i1;
DROP INDEX t1_i2;
DROP INDEX t1_i3;
DROP INDEX t1_i4;
CREATE INDEX t1_i1 ON t1 (order_id,line_item_id,product_id) COMPRESS ADVANCED LOW;
CREATE INDEX t1_i2 ON t1 (order_id, line_item_id) COMPRESS ADVANCED LOW;
CREATE INDEX t1_i3 ON t1 (order_id,line_item_id,product_id,unit_price) COMPRESS ADVANCED LOW;
CREATE INDEX t1_i4 ON t1 (order_id) COMPRESS ADVANCED LOW;
SET ECHO OFF;
SPO index.txt APP;
PRO ADVANCED COMPRESSION
SELECT segment_name,
blocks
FROM user_segments
WHERE segment_name LIKE 'T1%'
AND segment_type = 'INDEX'
ORDER BY
segment_name;
SPO OFF;
SET ECHO ON;
Review file index.txt and have a look at the various compression results.
2) Sequences as default values
In this part of the lab you will create two tables and experiment with sequences as default values for surrogate keys. You will need to create the following:
- table the_old_way: make sure it has an "ID" column as primary key
- create a sequence
- create a trigger that populates the ID if not supplied in the insert command
- insert 100000 rows
One Potential Solution:
Create a sequence to allow the population of the table using default values.
create sequence s cache 10000 noorder;
Create a simple table with an ID column to be used as a primary key, plus a few other columns such as a timestamp and a varchar2 to store information. Next, create a before-insert trigger that sets the ID's value to sequence.nextval, but only if the ID column is not part of the insert statement! The last step is an anonymous PL/SQL block to insert 100000 rows into the table.
create table the_old_way (
id number primary key,
d timestamp not null,
vc varchar2(50) not null
)
/
create or replace trigger the_old_way_bit
before insert on the_old_way for each row
declare
begin
if :new.id is null then
:new.id := s.nextval;
end if;
end;
/
begin
for i in 1..100000 loop
insert into the_old_way (d, vc) values (systimestamp, 'with trigger');
end loop;
end;
/
Note down the time for the execution of the PL/SQL block
Part two of the lab is a test with sequences as default values for the column. Create another table similar to the first one created but this time without the trigger. Ensure that the ID column is used as a primary key and that it has the sequence's next value as its default value. Then insert 100000 and note the time.
drop sequence s;
create sequence s cache 10000 noorder;
create table the_12c_way (
id number default s.nextval primary key,
d timestamp not null,
vc varchar2(50) not null
)
/
begin
for i in 1..100000 loop
insert into the_12c_way (d, vc) values (systimestamp, 'with default');
end loop;
end;
/
Finally create yet another table, but this time with identity columns. Ensure that the identity column is defined in the same way as the sequence you created earlier. Then insert again and note the time.
create table the_12c_way_with_id (
id number generated always as identity (
start with 1 cache 100000),
d timestamp not null,
vc varchar2(50) not null
)
/
begin
for i in 1..100000 loop
insert into the_12c_way_with_id (d, vc) values (systimestamp, 'with identity');
end loop;
end;
/
Before finishing this section review the objects created as part of the identity table's DDL.
col IDENTITY_OPTIONS for a50 wrap
col SEQUENCE_NAME for a30
col COLUMN_NAME for a15
select column_name, generation_type, sequence_name, identity_options from USER_TAB_IDENTITY_COLS;
3) Embed a function in the WITH clause
Create a statement that selects from t1 and uses a function declared in the with-clause of the query to return a truncated date.
with
function silly_little_function (pi_d in date)
return date is
begin
return trunc(pi_d);
end;
select order_id, silly_little_function(dispatch_date)
from t1 where rownum < 11
/
4) Automatic gathering of table statistics
create table t2 as select * from t1 and check the table statistics. Are they current? Why are there table statistics during a CTAS statement?
SQL> create table t2 as select * from t1 sample (50);
Table created.
Elapsed: 00:00:00.73
SQL> select table_name, partitioned, num_rows from tabs where table_name = 'T2';
TABLE_NAME PAR NUM_ROWS
------------------------------ --- ----------
T2 NO 500736
Elapsed: 00:00:00.04
SQL> select count(*) from t2;
COUNT(*)
----------
500736
Elapsed: 00:00:00.10
SQL> select sql_id from v$sql where sql_text = 'create table t2 as select * from t1 sample (50)';
SQL_ID
-------------
0h72ryws535xf
SQL> select * from table(dbms_xplan.display_cursor('0h72ryws535xf',null));
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 0h72ryws535xf, child number 0
-------------------------------------
create table t2 as select * from t1 sample (50)
Plan hash value: 2307360015
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | CREATE TABLE STATEMENT | | | | 2780 (100)| |
| 1 | LOAD AS SELECT | | | | | |
| 2 | OPTIMIZER STATISTICS GATHERING | | 500K| 24M| 2132 (1)| 00:00:01 |
| 3 | TABLE ACCESS STORAGE SAMPLE | T1 | 500K| 24M| 2132 (1)| 00:00:01 |
-----------------------------------------------------------------------------------------
Note
-----
- automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
19 rows selected.
5) Top-N queries and pagination
Top-N queries used to be interesting in Oracle before 12c. In this lab you will appreciate their ease of use.
- list how many rows there are in table t1
select count(*) from t1;
- what are the min and max dispatch dates in the table?
SQL> alter session set nls_date_format='dd.mm.yyyy hh24:mi:ss';
SQL> select min(dispatch_date), max(dispatch_date) from t1;
MIN(DISPATCH_DATE) MAX(DISPATCH_DATE)
------------------- -------------------
01.01.2012 00:00:00 03.05.2012 00:00:00
- Create a query that orders rows in t1 by dispatch date and shows the first 15 rows only
select order_id, dispatch_date, gift_wrap from t1 order by dispatch_date fetch first 15 rows only;
- create a query that orders rows in t1 by dispatch date and shows rows 150 to 155
select order_id, dispatch_date, gift_wrap from t1 order by dispatch_date offset 150 rows fetch next 5 rows only;
- rewrite the last query with the pre-12c syntax and compare results
http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html
select *
from ( select /*+ FIRST_ROWS(n) */
a.*, ROWNUM rnum
from ( select order_id, dispatch_date, gift_wrap from t1 order by dispatch_date ) a
where ROWNUM <= 155 )
where rnum > 150;
- Compare execution times and plans
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID b20zp6jrn2yag, child number 0
-------------------------------------
select * from ( select /*+ FIRST_ROWS(n) */ a.*, ROWNUM rnum
from ( select order_id, dispatch_date, gift_wrap from t1 order by
dispatch_date ) a where ROWNUM <= 155 ) where rnum > 150
Plan hash value: 2771300550
---------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | 8202 (100)| |
|* 1 | VIEW | | 155 | 7285 | | 8202 (1)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | | |
| 3 | VIEW | | 1000K| 32M| | 8202 (1)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY | | 1000K| 19M| 30M| 8202 (1)| 00:00:01 |
| 5 | TABLE ACCESS STORAGE FULL FIRST ROWS| T1 | 1000K| 19M| | 2134 (1)| 00:00:01 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("RNUM">150)
2 - filter(ROWNUM<=155)
4 - filter(ROWNUM<=155)
Note
-----
- automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
SQL> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 0xhpxrmzzbwkp, child number 0
-------------------------------------
select order_id, dispatch_date, gift_wrap from t1 order by
dispatch_date offset 150 rows fetch next 5 rows only
Plan hash value: 2433988517
--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | 8202 (100)| |
|* 1 | VIEW | | 1000K| 53M| | 8202 (1)| 00:00:01 |
|* 2 | WINDOW SORT PUSHED RANK | | 1000K| 19M| 30M| 8202 (1)| 00:00:01 |
| 3 | TABLE ACCESS STORAGE FULL| T1 | 1000K| 19M| | 2134 (1)| 00:00:01 |
--------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber"<=CASE WHEN (150>=0)
THEN 150 ELSE 0 END +5 AND "from$_subquery$_002"."rowlimit_$$_rownumber">150))
2 - filter(ROW_NUMBER() OVER ( ORDER BY "DISPATCH_DATE")<=CASE WHEN (150>=0)
THEN 150 ELSE 0 END +5)
Note
-----
- automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
}}}
! LAB X: PLUGGABLE DATABASES
{{{
Unlike the Database In-Memory option, we can use the lab to experiment with Pluggable Databases.
1) create a CDB
Use dbca in silent mode or any other technique you like to create a CDB with 1 PDB. Specify the data file location to be /u01/oradata and the FRA to go to /u01/fra. It is recommended to use oracle managed files for the database but you are free to chose whichever method you are most comfortable with.
2) log in to the CDB root
Once the CDB is created connect to it as SYSDBA and list all of the PDBs in the database. Where can you find them?
SQL> show pdbs
SQL> select con_id, name, open_mode, total_size from v$pdbs;
SQL> select pdb_id, pdb_name, status, logging, force_logging, force_nologging from dba_pdbs;
3) create a new PDB named MASTER from the seed
- check if you are using OMF
SQL> show parameter db_create_file_dest
SQL> create pluggable database master admin user master_admin identified by secret roles=(dba)
2 default tablespace users datafile size 20m;
SQL> alter pluggable database master open
4) list the MASTER PDBs data files
- from the root
SQL> select name from v$datafile where con_id = (select con_id from v$pdbs where name = 'MASTER');
- from the PDB
SQL> select name, bytes/power(1024,2) m from v$datafile;
--> what is odd here? Compare with DBA_DATA_FILES
SQL> select con_id, name, bytes/power(1024,2) m from v$datafile;
5) get familiar with the new dictionary views
The new architecture introduces new views and columns to existing views. Explore these; focus on the CDB% views and how they differ from the DBA views. Also check how many V$ views have a new column. Can you find evidence for linking packages in the PDB to the root?
SQL> desc cdb_data_files
Name Null? Type
----------------------------------------- -------- ----------------------------
FILE_NAME VARCHAR2(513)
FILE_ID NUMBER
TABLESPACE_NAME VARCHAR2(30)
BYTES NUMBER
BLOCKS NUMBER
STATUS VARCHAR2(9)
RELATIVE_FNO NUMBER
AUTOEXTENSIBLE VARCHAR2(3)
MAXBYTES NUMBER
MAXBLOCKS NUMBER
INCREMENT_BY NUMBER
USER_BYTES NUMBER
USER_BLOCKS NUMBER
ONLINE_STATUS VARCHAR2(7)
CON_ID NUMBER
SQL> desc dba_data_files
Name Null? Type
----------------------------------------- -------- ----------------------------
FILE_NAME VARCHAR2(513)
FILE_ID NUMBER
TABLESPACE_NAME VARCHAR2(30)
BYTES NUMBER
BLOCKS NUMBER
STATUS VARCHAR2(9)
RELATIVE_FNO NUMBER
AUTOEXTENSIBLE VARCHAR2(3)
MAXBYTES NUMBER
MAXBLOCKS NUMBER
INCREMENT_BY NUMBER
USER_BYTES NUMBER
USER_BLOCKS NUMBER
ONLINE_STATUS VARCHAR2(7)
SQL> desc v$datafile
Name Null? Type
----------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------
FILE# NUMBER
CREATION_CHANGE# NUMBER
CREATION_TIME DATE
TS# NUMBER
RFILE# NUMBER
STATUS VARCHAR2(7)
ENABLED VARCHAR2(10)
CHECKPOINT_CHANGE# NUMBER
CHECKPOINT_TIME DATE
UNRECOVERABLE_CHANGE# NUMBER
UNRECOVERABLE_TIME DATE
LAST_CHANGE# NUMBER
LAST_TIME DATE
OFFLINE_CHANGE# NUMBER
ONLINE_CHANGE# NUMBER
ONLINE_TIME DATE
BYTES NUMBER
BLOCKS NUMBER
CREATE_BYTES NUMBER
BLOCK_SIZE NUMBER
NAME VARCHAR2(513)
PLUGGED_IN NUMBER
BLOCK1_OFFSET NUMBER
AUX_NAME VARCHAR2(513)
FIRST_NONLOGGED_SCN NUMBER
FIRST_NONLOGGED_TIME DATE
FOREIGN_DBID NUMBER
FOREIGN_CREATION_CHANGE# NUMBER
FOREIGN_CREATION_TIME DATE
PLUGGED_READONLY VARCHAR2(3)
PLUGIN_CHANGE# NUMBER
PLUGIN_RESETLOGS_CHANGE# NUMBER
PLUGIN_RESETLOGS_TIME DATE
CON_ID NUMBER
SQL> select object_name, object_type, namespace, sharing, oracle_maintained from dba_objects where object_name = 'DBMS_REPCAT_AUTH';
OBJECT_NAME OBJECT_TYPE NAMESPACE SHARING O
-------------------------------- ----------------------- ---------- ------------- -
DBMS_REPCAT_AUTH PACKAGE 1 METADATA LINK Y
DBMS_REPCAT_AUTH PACKAGE BODY 2 METADATA LINK Y
DBMS_REPCAT_AUTH SYNONYM 1 METADATA LINK Y
DBMS_REPCAT_AUTH PACKAGE 1 METADATA LINK Y
DBMS_REPCAT_AUTH PACKAGE BODY 2 METADATA LINK Y
6) switch to the PDB as root
Explore ways to switch to the newly created PDB from the root without logging in again
SQL> alter session set container = MASTER;
SQL> select sys_context('userenv', 'con_name') from dual;
SQL> select sys_context('userenv', 'con_id') from dual;
7) connect to the PDB using Oracle Net
Now try to connect to the PDB through the listener as MASTER_ADMIN. Ensure that you are connected to the correct container! Can you see that the user has the DBA role granted?
$ sqlplus master_admin/secret@localhost/MASTER
SQL> select sys_context('userenv', 'con_name') from dual;
SQL> select sys_context('userenv', 'con_id') from dual;
SQL> select * from session_privs;
8) view the privileges granted to MASTER_ADMIN
Now let's look a bit closer at the privileges granted to MASTER_ADMIN. Find the hierarchy of roles and grants. Which is the primary role granted to the user? How are the roles specified in the create pluggable database command linked to this role?
SQL> select * from dba_role_privs where grantee = user;
GRANTEE GRANTED_ROLE ADM DEL DEF COM
------------------------------ ------------------------------ --- --- --- ---
MASTER_ADMIN PDB_DBA YES NO YES NO
SQL> select * from dba_role_privs where grantee = 'PDB_DBA';
GRANTEE GRANTED_ROLE ADM DEL DEF COM
------------------------------ ------------------------------ --- --- --- ---
PDB_DBA DBA NO NO YES NO
Is it really the DBA role?
SQL> select * from dba_sys_privs where grantee = 'DBA';
9) view the connection to the PDB from the root
You can see anyone connected to the PDBs from the root. In a separate session, connect to the MASTER PDB and try to identify that particular session from the CDB$ROOT.
- connect to the PDB
$ sqlplus master_admin/secret@localhost/MASTER
SQL> exec dbms_application_info.set_client_info('find me!')
- in another session, connect to the root
$ sqlplus / as sysdba
SQL> select username,sid,serial#,client_info,con_id from v$session where con_id = (select con_id from v$pdbs where name = 'MASTER');
USERNAME SID SERIAL# CLIENT_INFO CON_ID
------------------------------ ---------- ---------- -------------------- ----------
MASTER_ADMIN 21 26215 find me! 3
10) limit the maximum size of the PDB to 50M
PDBs are often used for consolidation. When consolidating, users pay for storage. We don't want them to use more than they pay for. Can you think of a way to limit the space available to a PDB? Can you test if that limit is enforced?
- what is the minimum size you can set it to?
SQL> alter pluggable database MASTER storage (maxsize 800M);
- what is the PDB_MASTER's default tablespace?
SQL> select default_tablespace from dba_users where username = user;
DEFAULT_TABLESPACE
------------------------------
USERS
- check if the limit is enforced
SQL> grant unlimited tablespace to master_admin;
SQL> create table t1 nologging as select a.*, rpad(object_name, 200, 'x') large_c from dba_objects a;
(may have to allow users to autoextend)
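-- expected failure (sketch): once the PDB grows past its storage clause, the load should
-- eventually fail with something like ORA-65114: space usage in container is too high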
11) create a PDB from the MASTER
Creating a PDB from the SEED is only one way of creating a PDB. In the next step, create a PDB named PDB1 as a clone of MASTER. But first create a golden image of a database you'd like to use. To do so, create the following accounts in the MASTER PDB:
+ MONITORING
+ BACKUP
+ APPL_USER
Grant whichever privileges you like to grant to them. APPL_USER must have 3 tables in his schema: T1, T2 and T3. While you perform these tasks, tail the alert.log in a different session.
SQL> create user monitoring identified by monitoring;
User created.
SQL> grant select any dictionary to monitoring;
Grant succeeded.
SQL> create user backup identified by backup;
User created.
SQL> grant create session to backup;
Grant succeeded.
SQL> create user appl_user identified by appl_user;
User created.
SQL> alter user appl_user quota unlimited on users;
User altered.
SQL> grant connect , resource to appl_user;
Grant succeeded.
SQL> conn appl_user/appl_user@localhost/MASTER
Connected.
SQL> create table t1 as select * from all_objects ;
Table created.
SQL> select count(*) from t1;
COUNT(*)
----------
73704
SQL> create table t2 as select * from all_objects where rownum < 11 ;
Table created.
SQL> c.t2.t3
1* create table t3 as select * from all_objects where rownum < 11
SQL> r
1* create table t3 as select * from all_objects where rownum < 11
Table created.
SQL> show user
USER is "APPL_USER"
SQL>
- prepare the PDB for cloning
alter pluggable database master close immediate;
alter pluggable database master open read only;
- view the alert log
adrci> set home CDB1
adrci> show alert -tail -f
- clone the PDB
SQL> create pluggable database pdb1 from master;
(are you still tailing the alert.log?)
SQL> alter pluggable database PDB1 open;
+ do you see the users you created? Do they have data in the tables?
SQL> conn appl_user/appl_user@localhost/PDB1
Connected.
SQL> select count(*) from t1;
COUNT(*)
----------
73704
SQL> select count(*) from t2;
COUNT(*)
----------
10
SQL> select count(*) from t3;
COUNT(*)
----------
10
+ perform any further validations you like
12) Create a metadata only clone
Since 12.1.0.2 it is possible to perform a metadata only clone. Try to perform one based on MASTER. Ensure that the tables in the new PDB have no data!
- as SYSDBA
SQL> create pluggable database pdb2 from master no data;
SQL> alter pluggable database pdb2 open;
SQL> conn appl_user/appl_user@localhost/PDB2
Connected.
SQL> select count(*) from t1;
COUNT(*)
----------
0
13) Unplug and plug
In this lab you will unplug a PDB and plug it back in. Usually you'd perform these steps on a different CDB, but due to space constraints it'll be the same one you experiment with. Note that it is crucial to drop the PDB once unplugged. This isn't documented that clearly in the official documentation set, but it is nevertheless required. https://blogs.oracle.com/UPGRADE/entry/recent_news_about_pluggable_databases
The steps to perform are:
a) unplug the PDB
b) review the metadata file
c) check for plug-in-compatibility (a formality in our case but important in real life)
d) drop the PDB _keeping_ data files
e) create the new PDB by plugging it in
All the while you are tailing the alert.log
- unplug the PDB
SQL> alter pluggable database pdb2 close immediate;
SQL> alter pluggable database pdb2 unplug into '/home/oracle/pdb2.xml';
-> keep tailing the alert.log!
- verify the contents of the XML file
[oracle@server3 ~]$ cat /home/oracle/pdb2.xml
<?xml version="1.0" encoding="UTF-8"?>
<PDB>
<xmlversion>1</xmlversion>
<pdbname>PDB2</pdbname>
<cid>5</cid>
<byteorder>1</byteorder>
<vsn>202375680</vsn>
<vsns>
<vsnnum>12.1.0.2.0</vsnnum>
<cdbcompt>12.1.0.2.0</cdbcompt>
<pdbcompt>12.1.0.2.0</pdbcompt>
<vsnlibnum>0.0.0.0.22</vsnlibnum>
<vsnsql>22</vsnsql>
<vsnbsv>8.0.0.0.0</vsnbsv>
</vsns>
<dbid>1858507191</dbid>
<ncdb2pdb>0</ncdb2pdb>
<cdbid>628942599</cdbid>
<guid>18135BAD243A6341E0530C64A8C0B88F</guid>
<uscnbas>1675596</uscnbas>
<uscnwrp>0</uscnwrp>
<rdba>4194824</rdba>
<tablespace>
<name>SYSTEM</name>
<type>0</type>
<tsn>0</tsn>
<status>1</status>
<issft>0</issft>
<file>
<path>/u01/oradata/CDB2/18135BAD243A6341E0530C64A8C0B88F/datafile/o1_mf_system_bqfdbtg0_.dbf</path>
<afn>17</afn>
<rfn>1</rfn>
<createscnbas>1674775</createscnbas>
<createscnwrp>0</createscnwrp>
<status>1</status>
<fileblocks>32000</fileblocks>
<blocksize>8192</blocksize>
<vsn>202375680</vsn>
<fdbid>1858507191</fdbid>
<fcpsw>0</fcpsw>
<fcpsb>1675592</fcpsb>
<frlsw>0</frlsw>
<frlsb>1594143</frlsb>
<frlt>881895559</frlt>
</file>
</tablespace>
<tablespace>
<name>SYSAUX</name>
<type>0</type>
<tsn>1</tsn>
<status>1</status>
<issft>0</issft>
<file>
<path>/u01/oradata/CDB2/18135BAD243A6341E0530C64A8C0B88F/datafile/o1_mf_sysaux_bqfdbtg1_.dbf</path>
<afn>18</afn>
<rfn>4</rfn>
<createscnbas>1674799</createscnbas>
<createscnwrp>0</createscnwrp>
<status>1</status>
<fileblocks>65280</fileblocks>
<blocksize>8192</blocksize>
<vsn>202375680</vsn>
<fdbid>1858507191</fdbid>
<fcpsw>0</fcpsw>
<fcpsb>1675592</fcpsb>
<frlsw>0</frlsw>
<frlsb>1594143</frlsb>
<frlt>881895559</frlt>
</file>
</tablespace>
<tablespace>
<name>TEMP</name>
<type>1</type>
<tsn>2</tsn>
<status>1</status>
<issft>0</issft>
<bmunitsize>128</bmunitsize>
<file>
<path>/u01/oradata/CDB2/18135BAD243A6341E0530C64A8C0B88F/datafile/o1_mf_temp_bqfdbtg1_.dbf</path>
<afn>5</afn>
<rfn>1</rfn>
<createscnbas>1674776</createscnbas>
<createscnwrp>0</createscnwrp>
<status>0</status>
<fileblocks>2560</fileblocks>
<blocksize>8192</blocksize>
<vsn>202375680</vsn>
<autoext>1</autoext>
<maxsize>4194302</maxsize>
<incsize>80</incsize>
</file>
</tablespace>
<tablespace>
<name>USERS</name>
<type>0</type>
<tsn>3</tsn>
<status>1</status>
<issft>0</issft>
<file>
<path>/u01/oradata/CDB2/18135BAD243A6341E0530C64A8C0B88F/datafile/o1_mf_users_bqfdbtg1_.dbf</path>
<afn>19</afn>
<rfn>10</rfn>
<createscnbas>1674802</createscnbas>
<createscnwrp>0</createscnwrp>
<status>1</status>
<fileblocks>2560</fileblocks>
<blocksize>8192</blocksize>
<vsn>202375680</vsn>
<fdbid>1858507191</fdbid>
<fcpsw>0</fcpsw>
<fcpsb>1675592</fcpsb>
<frlsw>0</frlsw>
<frlsb>1594143</frlsb>
<frlt>881895559</frlt>
</file>
</tablespace>
<optional>
<ncdb2pdb>0</ncdb2pdb>
<csid>178</csid>
<ncsid>2000</ncsid>
<options>
<option>APS=12.1.0.2.0</option>
<option>CATALOG=12.1.0.2.0</option>
<option>CATJAVA=12.1.0.2.0</option>
<option>CATPROC=12.1.0.2.0</option>
<option>CONTEXT=12.1.0.2.0</option>
<option>DV=12.1.0.2.0</option>
<option>JAVAVM=12.1.0.2.0</option>
<option>OLS=12.1.0.2.0</option>
<option>ORDIM=12.1.0.2.0</option>
<option>OWM=12.1.0.2.0</option>
<option>SDO=12.1.0.2.0</option>
<option>XDB=12.1.0.2.0</option>
<option>XML=12.1.0.2.0</option>
<option>XOQ=12.1.0.2.0</option>
</options>
<olsoid>0</olsoid>
<dv>0</dv>
<APEX>4.2.5.00.08:1</APEX>
<parameters>
<parameter>processes=300</parameter>
<parameter>nls_language='ENGLISH'</parameter>
<parameter>nls_territory='UNITED KINGDOM'</parameter>
<parameter>sga_target=1073741824</parameter>
<parameter>db_block_size=8192</parameter>
<parameter>compatible='12.1.0.2.0'</parameter>
<parameter>open_cursors=300</parameter>
<parameter>pga_aggregate_target=536870912</parameter>
<parameter>enable_pluggable_database=TRUE</parameter>
</parameters>
<tzvers>
<tzver>primary version:18</tzver>
<tzver>secondary version:0</tzver>
</tzvers>
<walletkey>0</walletkey>
<opatches>
<opatch>19769480</opatch>
<opatch>20299022</opatch>
<opatch>20299023</opatch>
<opatch>20415564</opatch>
</opatches>
<hasclob>1</hasclob>
<awr>
<loadprofile>CPU Usage Per Sec=0.000000</loadprofile>
<loadprofile>DB Block Changes Per Sec=0.000000</loadprofile>
<loadprofile>Database Time Per Sec=0.000000</loadprofile>
<loadprofile>Executions Per Sec=0.000000</loadprofile>
<loadprofile>Hard Parse Count Per Sec=0.000000</loadprofile>
<loadprofile>Logical Reads Per Sec=0.000000</loadprofile>
<loadprofile>Logons Per Sec=0.000000</loadprofile>
<loadprofile>Physical Reads Per Sec=0.000000</loadprofile>
<loadprofile>Physical Writes Per Sec=0.000000</loadprofile>
<loadprofile>Redo Generated Per Sec=0.000000</loadprofile>
<loadprofile>Total Parse Count Per Sec=0.000000</loadprofile>
<loadprofile>User Calls Per Sec=0.000000</loadprofile>
<loadprofile>User Rollbacks Per Sec=0.000000</loadprofile>
<loadprofile>User Transaction Per Sec=0.000000</loadprofile>
</awr>
<hardvsnchk>0</hardvsnchk>
</optional>
</PDB>
- check for compatibility (run "set serveroutput on" first so the result is displayed)
SQL> drop pluggable database pdb2 keep datafiles;
DECLARE
compatible CONSTANT VARCHAR2(3) :=
CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
pdb_descr_file => '/home/oracle/pdb2.xml',
pdb_name => 'PDB2')
WHEN TRUE THEN 'YES'
ELSE 'NO'
END;
BEGIN
DBMS_OUTPUT.PUT_LINE(compatible);
END;
/
- If you get a YES then plug the PDB in
SQL> create pluggable database pdb2 using '/home/oracle/pdb2.xml' nocopy tempfile reuse;
14) drop a PDB
You use the drop pluggable database command to drop the PDB.
SQL> alter pluggable database PDB2 close immediate;
SQL> drop pluggable database PDB2;
- what happens to its data files? Do you get an error? How do you correct the error?
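- A hedged answer sketch: DROP PLUGGABLE DATABASE defaults to KEEP DATAFILES, so the data files remain on disk after the drop; and attempting the drop while the PDB is still open should be rejected, which is why it is closed first. To remove the files as well:
SQL> alter pluggable database PDB2 close;
SQL> drop pluggable database PDB2 including datafiles;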
LAB 5: RMAN and PDBs
1) Connect to the CDB$ROOT as RMAN and run "report schema"
RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB2
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 780 SYSTEM YES /u01/oradata/CDB2/datafile/o1_mf_system_bqf3ktdf_.dbf
3 600 SYSAUX NO /u01/oradata/CDB2/datafile/o1_mf_sysaux_bqf3jpvv_.dbf
4 355 UNDOTBS1 YES /u01/oradata/CDB2/datafile/o1_mf_undotbs1_bqf3lz4q_.dbf
5 250 PDB$SEED:SYSTEM NO /u01/oradata/CDB2/datafile/o1_mf_system_bqf3phmo_.dbf
6 5 USERS NO /u01/oradata/CDB2/datafile/o1_mf_users_bqf3lxrp_.dbf
7 490 PDB$SEED:SYSAUX NO /u01/oradata/CDB2/datafile/o1_mf_sysaux_bqf3ph5z_.dbf
8 250 MASTER:SYSTEM NO /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_system_bqf4tntv_.dbf
9 510 MASTER:SYSAUX NO /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_sysaux_bqf4tnv2_.dbf
10 20 MASTER:USERS NO /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_users_bqf4v3cm_.dbf
14 250 PDB1:SYSTEM NO /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_system_bqfd57b5_.dbf
15 510 PDB1:SYSAUX NO /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_sysaux_bqfd57b7_.dbf
16 20 PDB1:USERS NO /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfd57b8_.dbf
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 60 TEMP 32767 /u01/oradata/CDB2/datafile/o1_mf_temp_bqf3p96h_.tmp
2 20 PDB$SEED:TEMP 32767 /u01/oradata/CDB2/datafile/pdbseed_temp012015-06-09_02-59-59-AM.dbf
3 20 MASTER:TEMP 32767 /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_temp_bqf4tnv3_.dbf
4 20 PDB1:TEMP 32767 /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_temp_bqfd57b8_.dbf
RMAN>
- what do you notice? How is the output different from a non-CDB?
2) Review the configuration settings
Have a look at the RMAN configuration settings. There is one item that is different from non-CDBs. Can you spot it?
RMAN> show all;
RMAN configuration parameters for database with db_unique_name CDB2 are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/snapcf_CDB2.f'; # default
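- If you prefer to diff the settings from SQL rather than eyeballing SHOW ALL, the persistent non-default settings are also exposed in V$RMAN_CONFIGURATION (a sketch; it returns no rows at this point because everything above is still at its default):
SQL> select conf#, name, value from v$rman_configuration order by conf#;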
3) Back up the CDB
It is possible to back up the whole CDB, just the CDB$ROOT, or individual PDBs. In this step you back up the entire CDB; it is always good to have a full backup. If the database is not yet in archivelog mode, change that first, then perform a backup (incremental or full does not matter)
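- For reference, hedged sketches of the finer-grained variants (not needed for this step):
RMAN> backup database root;
RMAN> backup pluggable database MASTER, PDB1;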
RMAN> shutdown immediate
startup mount
database closed
database dismounted
Oracle instance shut down
RMAN>
connected to target database (not started)
Oracle instance started
database mounted
Total System Global Area 1073741824 bytes
Fixed Size 2932632 bytes
Variable Size 377487464 bytes
Database Buffers 687865856 bytes
Redo Buffers 5455872 bytes
RMAN> alter database archivelog;
Statement processed
RMAN> alter database open;
Statement processed
RMAN> configure channel device type disk format '/u01/oraback/CDB2/%U';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u01/oraback/CDB2/%U';
new RMAN configuration parameters are successfully stored
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored
RMAN> backup database plus archivelog;
Starting backup at 09-JUN-15
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=16 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=27 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=18 RECID=1 STAMP=881907051
channel ORA_DISK_1: starting piece 1 at 09-JUN-15
channel ORA_DISK_1: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/01q91lbd_1_1 tag=TAG20150609T061052 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 09-JUN-15
Starting backup at 09-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/oradata/CDB2/datafile/o1_mf_system_bqf3ktdf_.dbf
input datafile file number=00004 name=/u01/oradata/CDB2/datafile/o1_mf_undotbs1_bqf3lz4q_.dbf
channel ORA_DISK_1: starting piece 1 at 09-JUN-15
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00003 name=/u01/oradata/CDB2/datafile/o1_mf_sysaux_bqf3jpvv_.dbf
input datafile file number=00006 name=/u01/oradata/CDB2/datafile/o1_mf_users_bqf3lxrp_.dbf
channel ORA_DISK_2: starting piece 1 at 09-JUN-15
channel ORA_DISK_1: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/02q91lbf_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_sysaux_bqf4tnv2_.dbf
channel ORA_DISK_1: starting piece 1 at 09-JUN-15
channel ORA_DISK_2: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/03q91lbf_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00015 name=/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_sysaux_bqfd57b7_.dbf
channel ORA_DISK_2: starting piece 1 at 09-JUN-15
channel ORA_DISK_1: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/04q91lc8_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/u01/oradata/CDB2/datafile/o1_mf_sysaux_bqf3ph5z_.dbf
channel ORA_DISK_1: starting piece 1 at 09-JUN-15
channel ORA_DISK_2: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/05q91lc9_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:10
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00008 name=/u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_system_bqf4tntv_.dbf
input datafile file number=00010 name=/u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_users_bqf4v3cm_.dbf
channel ORA_DISK_2: starting piece 1 at 09-JUN-15
channel ORA_DISK_1: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/06q91lcj_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00014 name=/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_system_bqfd57b5_.dbf
input datafile file number=00016 name=/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfd57b8_.dbf
channel ORA_DISK_1: starting piece 1 at 09-JUN-15
channel ORA_DISK_2: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/07q91lcj_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/oradata/CDB2/datafile/o1_mf_system_bqf3phmo_.dbf
channel ORA_DISK_2: starting piece 1 at 09-JUN-15
channel ORA_DISK_1: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/08q91lcr_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_2: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/09q91lcr_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:07
Finished backup at 09-JUN-15
Starting backup at 09-JUN-15
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=19 RECID=2 STAMP=881907108
channel ORA_DISK_1: starting piece 1 at 09-JUN-15
channel ORA_DISK_1: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/0aq91ld5_1_1 tag=TAG20150609T061148 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 09-JUN-15
Starting Control File and SPFILE Autobackup at 09-JUN-15
piece handle=/u01/fra/CDB2/autobackup/2015_06_09/o1_mf_s_881907110_bqfgzb3n_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 09-JUN-15
RMAN>
4) Try to cause some trouble and get out unscathed
Assume someone in PDB1 removed an essential file from the database. Time to recover! In this part of the lab you:
a) close PDB1
b) remove a data file
c) perform a full recovery (agreed, it is not strictly needed here, but it makes a good test)
d) open the database without data loss
SQL> set lines 200
SQL> select name from v$datafile where con_id = (select con_id from v$pdbs where name = 'PDB1');
NAME
---------------------------------------------------------------------------------------------------------------------------
/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_system_bqfd57b5_.dbf
/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_sysaux_bqfd57b7_.dbf
/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfd57b8_.dbf
[oracle@server3 ~]$ rm /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfd57b8_.dbf
[oracle@server3 ~]$
SQL> alter pluggable database pdb1 open;
alter pluggable database pdb1 open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 16 - see DBWR trace file
ORA-01110: data file 16:
'/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfd57b8_.dbf'
- perform full recovery
[oracle@server3 ~]$ rman target sys/change_on_install@localhost/PDB1
Recovery Manager: Release 12.1.0.2.0 - Production on Tue Jun 9 06:48:03 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: CDB2 (DBID=628942599, not open)
RMAN> run {
2> restore database;
3> recover database;
4> alter database open;
5> }
Starting restore at 09-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=255 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=269 device type=DISK
skipping datafile 14; already restored to file /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_system_bqfd57b5_.dbf
skipping datafile 15; already restored to file /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_sysaux_bqfd57b7_.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00016 to /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfd57b8_.dbf
channel ORA_DISK_1: reading from backup piece /u01/oraback/CDB2/08q91lcr_1_1
channel ORA_DISK_1: piece handle=/u01/oraback/CDB2/08q91lcr_1_1 tag=TAG20150609T061054
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 09-JUN-15
Starting recover at 09-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2
starting media recovery
media recovery complete, elapsed time: 00:00:00
Finished recover at 09-JUN-15
Statement processed
RMAN>
- check if that worked
[oracle@server3 ~]$ sqlplus appl_user/appl_user@localhost/pdb1
SQL*Plus: Release 12.1.0.2.0 Production on Tue Jun 9 06:50:32 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Tue Jun 09 2015 06:50:12 -04:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> select count(*) from t1;
COUNT(*)
----------
73704
SQL> select tablespace_name from tabs;
TABLESPACE_NAME
------------------------------
USERS
USERS
USERS
SQL>
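- As conceded in c), the full restore is heavier than strictly required; a more targeted sketch restores only the missing file (data file 16 as above) while the rest of the CDB stays open:
RMAN> run {
2> restore datafile 16;
3> recover datafile 16;
4> }
RMAN> alter pluggable database pdb1 open;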
5) Beware of PDB backups when dropping PDBs!
This is an example of behaviour that might be considered a bug but is in fact expected. Assume that you dropped a PDB accidentally, including its data files. How can you get it back?
- create a PDB we don't really care about with a default tablespace named USERS
SQL> create pluggable database I_AM_AN_EX_PARROT admin user martin identified by secret default tablespace users datafile size 10m;
Pluggable database created.
SQL> alter pluggable database I_AM_AN_EX_PARROT open;
Pluggable database altered.
- create a level 0 backup of the CDB and make sure I_AM_AN_EX_PARROT has been backed up. Validate both the PDB backup and the archivelogs.
RMAN> backup incremental level 0 database plus archivelog delete all input;
...
RMAN> list backup of pluggable database I_AM_AN_EX_PARROT;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
26 Incr 0 395.59M DISK 00:00:02 11-JUN-15
BP Key: 26 Status: AVAILABLE Compressed: NO Tag: TAG20150611T060056
Piece Name: /u01/oraback/CDB2/0pq96tij_1_1
List of Datafiles in backup set 26
Container ID: 6, PDB Name: I_AM_AN_EX_PARROT
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
26 0 Incr 2345577 11-JUN-15 /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_sysaux_bqlowk9f_.dbf
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
29 Incr 0 203.38M DISK 00:00:02 11-JUN-15
BP Key: 29 Status: AVAILABLE Compressed: NO Tag: TAG20150611T060056
Piece Name: /u01/oraback/CDB2/0tq96tjc_1_1
List of Datafiles in backup set 29
Container ID: 6, PDB Name: I_AM_AN_EX_PARROT
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
25 0 Incr 2345609 11-JUN-15 /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_system_bqlowk93_.dbf
27 0 Incr 2345609 11-JUN-15 /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_users_bqloxjmr_.dbf
- the backup exists!
RMAN> restore pluggable database I_AM_AN_EX_PARROT validate;
Starting restore at 11-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting validation of datafile backup set
channel ORA_DISK_2: starting validation of datafile backup set
channel ORA_DISK_1: reading from backup piece /u01/oraback/CDB2/0pq96tij_1_1
channel ORA_DISK_2: reading from backup piece /u01/oraback/CDB2/0tq96tjc_1_1
channel ORA_DISK_1: piece handle=/u01/oraback/CDB2/0pq96tij_1_1 tag=TAG20150611T060056
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_2: piece handle=/u01/oraback/CDB2/0tq96tjc_1_1 tag=TAG20150611T060056
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: validation complete, elapsed time: 00:00:01
Finished restore at 11-JUN-15
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;
Starting restore at 11-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_2: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /u01/oraback/CDB2/0gq96tfe_1_1
channel ORA_DISK_2: reading from backup piece /u01/oraback/CDB2/0hq96tff_1_1
channel ORA_DISK_1: piece handle=/u01/oraback/CDB2/0gq96tfe_1_1 tag=TAG20150611T060013
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_2: piece handle=/u01/oraback/CDB2/0hq96tff_1_1 tag=TAG20150611T060013
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: validation complete, elapsed time: 00:00:01
channel ORA_DISK_2: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /u01/oraback/CDB2/0iq96tg9_1_1
channel ORA_DISK_2: reading from backup piece /u01/oraback/CDB2/0vq96tjp_1_1
channel ORA_DISK_1: piece handle=/u01/oraback/CDB2/0iq96tg9_1_1 tag=TAG20150611T060013
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_2: piece handle=/u01/oraback/CDB2/0vq96tjp_1_1 tag=TAG20150611T060233
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: validation complete, elapsed time: 00:00:01
Finished restore at 11-JUN-15
- drop the PDB including data files. We have a backup, so we should be OK even if we make a mistake. Tail the alert.log while executing the steps
RMAN> alter pluggable database I_AM_AN_EX_PARROT close;
Statement processed
RMAN> drop pluggable database I_AM_AN_EX_PARROT including datafiles;
Statement processed
2015-06-11 06:19:47.088000 -04:00
alter pluggable database I_AM_AN_EX_PARROT close
ALTER SYSTEM: Flushing buffer cache inst=0 container=6 local
2015-06-11 06:19:59.242000 -04:00
Pluggable database I_AM_AN_EX_PARROT closed
Completed: alter pluggable database I_AM_AN_EX_PARROT close
2015-06-11 06:20:15.885000 -04:00
drop pluggable database I_AM_AN_EX_PARROT including datafiles
2015-06-11 06:20:20.655000 -04:00
Deleted Oracle managed file /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_users_bqloxjmr_.dbf
Deleted Oracle managed file /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_temp_bqlowk9g_.dbf
Deleted Oracle managed file /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_sysaux_bqlowk9f_.dbf
Deleted Oracle managed file /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_system_bqlowk93_.dbf
Completed: drop pluggable database I_AM_AN_EX_PARROT including datafiles
- oops, that was a mistake! Call from the users: restore the PDB, it is production critical!
RMAN> report schema;
Report of database schema for database with db_unique_name CDB2
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 780 SYSTEM YES /u01/oradata/CDB2/datafile/o1_mf_system_bqf3ktdf_.dbf
3 680 SYSAUX NO /u01/oradata/CDB2/datafile/o1_mf_sysaux_bqf3jpvv_.dbf
4 355 UNDOTBS1 YES /u01/oradata/CDB2/datafile/o1_mf_undotbs1_bqf3lz4q_.dbf
5 250 PDB$SEED:SYSTEM NO /u01/oradata/CDB2/datafile/o1_mf_system_bqf3phmo_.dbf
6 5 USERS NO /u01/oradata/CDB2/datafile/o1_mf_users_bqf3lxrp_.dbf
7 490 PDB$SEED:SYSAUX NO /u01/oradata/CDB2/datafile/o1_mf_sysaux_bqf3ph5z_.dbf
8 250 MASTER:SYSTEM NO /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_system_bqf4tntv_.dbf
9 510 MASTER:SYSAUX NO /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_sysaux_bqf4tnv2_.dbf
10 20 MASTER:USERS NO /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_users_bqf4v3cm_.dbf
14 250 PDB1:SYSTEM NO /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_system_bqfd57b5_.dbf
15 520 PDB1:SYSAUX NO /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_sysaux_bqfd57b7_.dbf
16 20 PDB1:USERS NO /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfk42sq_.dbf
23 260 PDBSBY:SYSTEM NO /u01/oradata/CDB2/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_system_bqfmtn2d_.dbf
24 520 PDBSBY:SYSAUX NO /u01/oradata/CDB2/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_sysaux_bqfmtn2l_.dbf
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 60 TEMP 32767 /u01/oradata/CDB2/datafile/o1_mf_temp_bqf3p96h_.tmp
2 20 PDB$SEED:TEMP 32767 /u01/oradata/CDB2/datafile/pdbseed_temp012015-06-09_02-59-59-AM.dbf
3 20 MASTER:TEMP 32767 /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_temp_bqf4tnv3_.dbf
4 20 PDB1:TEMP 32767 /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_temp_bqfd57b8_.dbf
5 20 PDBSBY:TEMP 32767 /u01/oradata/CDB2/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_temp_bqfmtn2l_.dbf
RMAN> run {
2> restore pluggable database I_AM_AN_EX_PARROT;
3> recover pluggable database I_AM_AN_EX_PARROT;
4> }
Starting restore at 11-JUN-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=280 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=55 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 06/11/2015 06:22:07
RMAN-06813: could not translate pluggable database I_AM_AN_EX_PARROT
- Why? The backup was there a minute ago! Check the controlfile for the PDB backup:
RMAN> list backup of pluggable database I_AM_AN_EX_PARROT;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of list command at 06/11/2015 06:22:52
RMAN-06813: could not translate pluggable database I_AM_AN_EX_PARROT
- And indeed the backup is gone, as well as all the information with it.
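- Dropping the PDB purges its metadata from the controlfile, which is why RMAN can no longer translate the name. The usual way out (a hedged sketch, not part of this lab) is to restore the CDB on an auxiliary instance to a point in time before the drop, then unplug the PDB there and plug it back into the original CDB:
RMAN> duplicate target database to AUXCDB
2> until time "to_date('11-JUN-15 06:19:00','DD-MON-RR HH24:MI:SS')";
- on AUXCDB: alter pluggable database I_AM_AN_EX_PARROT unplug into '/tmp/parrot.xml';
- on CDB2: create pluggable database I_AM_AN_EX_PARROT using '/tmp/parrot.xml';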
}}}
! LAB X: Data Guard
{{{
Data Guard is an essential part of data protection, and CDBs can be protected by Data Guard as well. In this lab you will learn how. It is a bit more involved than the previous labs, which is why the most time is allotted to it. The steps in the lab guide you through what needs to be done; the examples may have to be adapted to your environment.
1) create a physical standby of the CDB you used
- connect to the CDB as root and enable automatic standby_file_management
- make sure that you use a SPFILE
- edit tnsnames.ora in $ORACLE_HOME to include the new standby database
CDBSBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = class<n>)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = CDBSBY)
)
)
- modify $ORACLE_HOME/network/admin/listener.ora and reload the listener. Ensure the names match your environment!
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = CDB2)
(ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
(SID_NAME = CDB2)
)
(SID_DESC =
(GLOBAL_DBNAME = CDB2_DGMGRL)
(ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
(SID_NAME = CDB2)
)
(SID_DESC =
(GLOBAL_DBNAME = CDBSBY)
(ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
(SID_NAME = CDBSBY)
)
(SID_DESC =
(GLOBAL_DBNAME = CDBSBY_DGMGRL)
(ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
(SID_NAME = CDBSBY)
)
)
- use lsnrctl services to ensure the services are registered
- update oratab with the new standby database
CDBSBY:/u01/app/oracle/product/12.1.0.2/dbhome_1:N
- create a minimal pfile for the clone
*.audit_file_dest='/u01/app/oracle/admin/CDBSBY/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.db_block_size=8192
*.db_create_file_dest='/u01/oradata'
*.db_domain=''
*.db_name='CDB2'
*.db_unique_name='CDBSBY'
*.db_recovery_file_dest='/u01/fra'
*.db_recovery_file_dest_size=4560m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=CDBSBYXDB)'
*.enable_pluggable_database=true
*.nls_language='ENGLISH'
*.nls_territory='UNITED KINGDOM'
*.open_cursors=300
*.pga_aggregate_target=512m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1024m
*.standby_file_management='AUTO'
*.undo_tablespace='UNDOTBS1'
- ensure the audit file dest is created
mkdir -vp /u01/app/oracle/admin/CDBSBY/adump
- copy the pwfile to allow remote login
[oracle@server3 dbs]$ cp orapwCDB2 orapwCDBSBY
- duplicate
[oracle@server3 ~]$ rman target sys/password@cdb2 auxiliary sys/password@cdbsby
RMAN> startup clone nomount
....
RMAN> duplicate target database for standby;
- Make sure to note down the control files. Their names are in the RMAN output
executing Memory Script
Starting restore at 09-JUN-15
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u01/fra/CDB2/autobackup/2015_06_09/o1_mf_s_881907110_bqfgzb3n_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/fra/CDB2/autobackup/2015_06_09/o1_mf_s_881907110_bqfgzb3n_.bkp tag=TAG20150609T061150
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/oradata/CDBSBY/controlfile/o1_mf_bqflhy0t_.ctl
output file name=/u01/fra/CDBSBY/controlfile/o1_mf_bqflhydr_.ctl
Finished restore at 09-JUN-15
- In this case they are:
output file name=/u01/oradata/CDBSBY/controlfile/o1_mf_bqflhy0t_.ctl
output file name=/u01/fra/CDBSBY/controlfile/o1_mf_bqflhydr_.ctl
- modify the pfile to include these. You can also use "show parameter control_files".
- create spfile from pfile and restart the standby
2) add the database into the broker configuration
- Enable the broker on primary and standby
SQL> alter system set dg_broker_start = true;
- add the databases to the broker configuration
[oracle@server3 ~]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production
Copyright (c) 2000, 2013, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> CREATE CONFIGURATION twelve as PRIMARY DATABASE IS 'CDB2' CONNECT IDENTIFIER IS 'CDB2';
Configuration "twelve" created with primary database "CDB2"
DGMGRL> add database 'CDBSBY' AS CONNECT IDENTIFIER IS 'CDBSBY';
Database "CDBSBY" added
- create standby redo logs on each database
Check their size in v$log, and create the files on each database. The following should work; you create one more SRL than there are online log groups per thread (there is only one thread in single-instance Oracle)
SQL> begin
2 for i in 1..4 loop
3 execute immediate 'alter database add standby logfile size 52428800';
4 end loop;
5 end;
6 /
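- a quick sanity check of what was just created (a sketch):
SQL> select group#, thread#, bytes/1024/1024 as mb, status from v$standby_log;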
- enable the configuration
DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration
Configuration - twelve
Protection Mode: MaxPerformance
Members:
CDB2 - Primary database
CDBSBY - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 4 seconds ago)
3) create a new PDB on the primary, tail the alert.log to see what's happening on the standby
- be sure to have standby_file_management set to auto
DGMGRL> show database 'CDB2' standbyfilemanagement
StandbyFileManagement = 'AUTO'
DGMGRL> show database 'CDBSBY' standbyfilemanagement
StandbyFileManagement = 'AUTO'
SQL> select name, db_unique_name, database_role from v$database;
NAME DB_UNIQUE_NAME DATABASE_ROLE
--------- ------------------------------ ----------------
CDB2 CDB2 PRIMARY
SQL> create pluggable database PDBSBY admin user PDBSBY_ADMIN identified by secret;
- tail the primary alert.log
2015-06-09 07:34:43.265000 -04:00
create pluggable database PDBSBY admin user PDBSBY_ADMIN identified by *
APEX_040200.WWV_FLOW_ADVISOR_CHECKS (CHECK_STATEMENT) - CLOB populated
2015-06-09 07:35:05.280000 -04:00
****************************************************************
Pluggable Database PDBSBY with pdb id - 5 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Database Characterset for PDBSBY is WE8MSWIN1252
2015-06-09 07:35:06.834000 -04:00
Deleting old file#5 from file$
Deleting old file#7 from file$
Adding new file#23 to file$(old file#5)
Adding new file#24 to file$(old file#7)
2015-06-09 07:35:08.031000 -04:00
Successfully created internal service pdbsby at open
2015-06-09 07:35:12.391000 -04:00
ALTER SYSTEM: Flushing buffer cache inst=0 container=5 local
2015-06-09 07:35:20.225000 -04:00
****************************************************************
Post plug operations are now complete.
Pluggable database PDBSBY with pdb id - 5 is now marked as NEW.
****************************************************************
Completed: create pluggable database PDBSBY admin user PDBSBY_ADMIN identified by *
- tail the standby
2015-06-09 07:34:58.636000 -04:00
Recovery created pluggable database PDBSBY
2015-06-09 07:35:03.499000 -04:00
Recovery copied files for tablespace SYSTEM
Recovery successfully copied file /u01/oradata/CDBSBY/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_system_bqfmtn2d_.dbf from /u01/oradata/CDBSBY/datafile/o1_mf_system_bqflkyry_.dbf
2015-06-09 07:35:05.219000 -04:00
Successfully added datafile 23 to media recovery
Datafile #23: '/u01/oradata/CDBSBY/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_system_bqfmtn2d_.dbf'
2015-06-09 07:35:13.119000 -04:00
Recovery copied files for tablespace SYSAUX
Recovery successfully copied file /u01/oradata/CDBSBY/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_sysaux_bqfmtn2l_.dbf from /u01/oradata/CDBSBY/datafile/o1_mf_sysaux_bqflkpc2_.dbf
2015-06-09 07:35:17.968000 -04:00
Successfully added datafile 24 to media recovery
Datafile #24: '/u01/oradata/CDBSBY/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_sysaux_bqfmtn2l_.dbf'
4) switch over to CDBSBY
Using the broker, connect to CDBSBY as SYSDBA, then verify switchover readiness
[oracle@server3 ~]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production
Copyright (c) 2000, 2013, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdbsby
Password:
Connected as SYSDBA.
DGMGRL> validate database 'CDBSBY';
Database Role: Physical standby database
Primary Database: CDB2
Ready for Switchover: Yes
Ready for Failover: Yes (Primary Running)
Temporary Tablespace File Information:
CDB2 TEMP Files: 5
CDBSBY TEMP Files: 4
Flashback Database Status:
CDB2: Off
CDBSBY: Off
Current Log File Groups Configuration:
Thread # Online Redo Log Groups Standby Redo Log Groups Status
(CDB2) (CDBSBY)
1 3 3 Insufficient SRLs
Future Log File Groups Configuration:
Thread # Online Redo Log Groups Standby Redo Log Groups Status
(CDBSBY) (CDB2)
1 3 0 Insufficient SRLs
Warning: standby redo logs not configured for thread 1 on CDB2
DGMGRL>
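- The "Insufficient SRLs" warning for the future configuration means CDB2 has no standby redo logs yet; they are only used after a role change, but it is cleaner to add them on the primary now (the same sketch as before, run while connected to CDB2):
SQL> begin
2 for i in 1..4 loop
3 execute immediate 'alter database add standby logfile size 52428800';
4 end loop;
5 end;
6 /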
If you see "ready for switchover", do it:
DGMGRL> switchover to 'CDBSBY';
Performing switchover NOW, please wait...
New primary database "CDBSBY" is opening...
Oracle Clusterware is restarting database "CDB2" ...
Switchover succeeded, new primary is "CDBSBY"
DGMGRL>
5) check if you can access PDBSBY
SQL> select name,db_unique_name,database_role from v$database;
NAME DB_UNIQUE_NAME DATABASE_ROLE
--------- ------------------------------ ----------------
CDB2 CDBSBY PRIMARY
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 MASTER READ WRITE NO
4 PDB1 READ WRITE NO
5 PDBSBY READ WRITE NO
SQL> select name from v$datafile
2 /
NAME
---------------------------------------------------------------------------------------------------------
/u01/oradata/CDBSBY/datafile/o1_mf_undotbs1_bqfljffy_.dbf
/u01/oradata/CDBSBY/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_system_bqfmtn2d_.dbf
/u01/oradata/CDBSBY/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_sysaux_bqfmtn2l_.dbf
SQL> select sys_context('userenv', 'con_name') from dual;
SYS_CONTEXT('USERENV','CON_NAME')
----------------------------------------------------------------------------------------------------------
PDBSBY
- keep the standby database! It will be needed later on.
}}}
! LAB X: CDB Resource Manager
{{{
1) Create a CDB resource plan
Consolidation requires the creation of a CDB resource manager plan. Please ensure you have the following PDBs in your CDB:
- MASTER
- PDB1
- PDBSBY
Create a CDB plan for your CDB and set the distribution of CPU shares and utilisation limits as follows:
- MASTER: 1 share, limit 30
- PDB1: 5 shares, limit 100
- PDBSBY: 3 shares, limit 70
There is no need to limit PQ. To keep the lab simple, no PDB plans are needed.
Unfortunately due to a limited number of CPUs we cannot test the plans in action!
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 MASTER READ WRITE NO
4 PDB1 READ WRITE NO
5 PDBSBY READ WRITE NO
- make sure you are in the ROOT
SQL> select sys_context('userenv','con_name') from dual;
declare
v_plan_name varchar2(50) := 'ENKITEC_CDB_PLAN';
begin
dbms_resource_manager.clear_pending_area;
dbms_resource_manager.create_pending_area;
dbms_resource_manager.create_cdb_plan(
plan => v_plan_name,
comment => 'A CDB plan for the 12c class'
);
dbms_resource_manager.create_cdb_plan_directive(
plan => v_plan_name,
pluggable_database => 'MASTER',
shares => 1,
utilization_limit => 30);
dbms_resource_manager.create_cdb_plan_directive(
plan => v_plan_name,
pluggable_database => 'PDB1',
shares => 5,
utilization_limit => 100);
dbms_resource_manager.create_cdb_plan_directive(
plan => v_plan_name,
pluggable_database => 'PDBSBY',
shares => 3,
utilization_limit => 70);
dbms_resource_manager.validate_pending_area;
dbms_resource_manager.submit_pending_area;
end;
/
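- Any PDB without an explicit directive is governed by the default directive (1 share, utilization limit 100). A hedged sketch of changing that default for the same plan:
begin
dbms_resource_manager.clear_pending_area;
dbms_resource_manager.create_pending_area;
dbms_resource_manager.update_cdb_default_directive(
plan => 'ENKITEC_CDB_PLAN',
new_shares => 1,
new_utilization_limit => 50);
dbms_resource_manager.validate_pending_area;
dbms_resource_manager.submit_pending_area;
end;
/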
2) Query the CDB Resource Plan dictionary information
COLUMN PLAN FORMAT A30
COLUMN STATUS FORMAT A10
COLUMN COMMENTS FORMAT A35
SELECT PLAN, STATUS, COMMENTS FROM DBA_CDB_RSRC_PLANS ORDER BY PLAN;
3) Query the CDB Plan directives in the dictionary
COLUMN PLAN HEADING 'Plan' FORMAT A26
COLUMN PLUGGABLE_DATABASE HEADING 'Pluggable|Database' FORMAT A25
COLUMN SHARES HEADING 'Shares' FORMAT 999
COLUMN UTILIZATION_LIMIT HEADING 'Utilization|Limit' FORMAT 999
COLUMN PARALLEL_SERVER_LIMIT HEADING 'Parallel|Server|Limit' FORMAT 999
SELECT PLAN,
PLUGGABLE_DATABASE,
SHARES,
UTILIZATION_LIMIT,
PARALLEL_SERVER_LIMIT
FROM DBA_CDB_RSRC_PLAN_DIRECTIVES
ORDER BY PLAN;
4) Set the CDB resource plan
Set the new plan in the CDB$ROOT
SQL> alter system set RESOURCE_MANAGER_PLAN = 'ENKITEC_CDB_PLAN' scope=both;
System altered.
5) create a DBRM plan for MASTER PDB
Create a new database resource plan for two new consumer groups, LOWPRIO_GROUP and HIGHPRIO_GROUP. The consumer group mappings are to be based on the Oracle user name. Do not forget to add the SYS_GROUP and OTHER_GROUPS.
The CPU entitlements used in the sample solution below are as follows (MGMT_P1 percentages at one level must not sum to more than 100):
- SYS_GROUP - level 1 - 50
- HIGHPRIO_GROUP - level 1 - 30
- LOWPRIO_GROUP - level 1 - 10
- OTHER_GROUPS - level 1 - 5
No other plan directives are needed. You can only have plan directives at level 1 in a PDB... Make sure you are connected to the PDB when executing the commands! Start by creating the users and granting them the connect role. Define the mapping based on the Oracle user, and grant both users the privilege to switch to their consumer groups. In the last step, create the plan and plan directives.
create user LOWPRIO identified by lowprio;
create user HIGHPRIO identified by highprio;
grant connect to LOWPRIO;
grant connect to HIGHPRIO;
begin
dbms_resource_manager.clear_pending_area;
dbms_resource_manager.create_pending_area;
dbms_resource_manager.create_consumer_group('LOWPRIO_GROUP', 'for low priority processing');
dbms_resource_manager.create_consumer_group('HIGHPRIO_GROUP', 'we will starve you');
dbms_resource_manager.validate_pending_area();
dbms_resource_manager.submit_pending_area();
end;
/
begin
dbms_resource_manager.create_pending_area();
dbms_resource_manager.set_consumer_group_mapping(
dbms_resource_manager.oracle_user, 'LOWPRIO', 'LOWPRIO_GROUP');
dbms_resource_manager.set_consumer_group_mapping(
dbms_resource_manager.oracle_user, 'HIGHPRIO', 'HIGHPRIO_GROUP');
dbms_resource_manager.submit_pending_area();
end;
/
begin
dbms_resource_manager_privs.grant_switch_consumer_group('LOWPRIO','LOWPRIO_GROUP', true);
dbms_resource_manager_privs.grant_switch_consumer_group('HIGHPRIO','HIGHPRIO_GROUP', true);
end;
/
BEGIN
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.create_plan(
plan => 'ENKITEC_MASTER_PDB_PLAN',
comment => 'sample DBRM plan for the training classes'
);
dbms_resource_manager.create_plan_directive(
plan => 'ENKITEC_MASTER_PDB_PLAN',
comment => 'sys_group is level 1',
group_or_subplan => 'SYS_GROUP',
mgmt_p1 => 50);
dbms_resource_manager.create_plan_directive(
plan => 'ENKITEC_MASTER_PDB_PLAN',
group_or_subplan => 'HIGHPRIO_GROUP',
comment => 'us before anyone else',
mgmt_p1 => 30
);
-- artificially limit the resources
dbms_resource_manager.create_plan_directive(
plan => 'ENKITEC_MASTER_PDB_PLAN',
group_or_subplan => 'LOWPRIO_GROUP',
comment => 'then the LOWPRIO group',
mgmt_p1 => 10
);
-- finally anyone not in a previous consumer group will be mapped to the
-- OTHER_GROUPS
dbms_resource_manager.create_plan_directive(
plan => 'ENKITEC_MASTER_PDB_PLAN',
group_or_subplan => 'OTHER_GROUPS',
comment => 'all the rest',
mgmt_p1 => 5
);
dbms_resource_manager.validate_pending_area();
dbms_resource_manager.submit_pending_area();
end;
/
6) enable the PDB resource plan (while connected to the MASTER PDB)
alter system set resource_manager_plan = 'ENKITEC_MASTER_PDB_PLAN';
7) Verify the resource manager plans are correct in their respective container
SQL> select sys_context('userenv','con_name') from dual;
SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
CDB$ROOT
SQL> show parameter resource_manager_plan
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
resource_manager_plan string ENKITEC_CDB_PLAN
SQL>
SQL> alter session set container = master;
Session altered.
SQL> select sys_context('userenv','con_name') from dual;
SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
MASTER
SQL> show parameter resource_manager_plan
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
resource_manager_plan string ENKITEC_MASTER_PDB_PLAN
SQL>
8) connect as either highprio or lowprio to the PDB and check if the mapping works
[oracle@server3 ~]$ sqlplus highprio/highprio@localhost/master
SQL*Plus: Release 12.1.0.2.0 Production on Wed Jun 10 05:58:55 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Jun 10 2015 05:57:33 -04:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
- in a different session
SQL> select con_id, resource_consumer_group from v$session where username = 'HIGHPRIO';
CON_ID RESOURCE_CONSUMER_GROUP
---------- --------------------------------
3 HIGHPRIO_GROUP
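- Even without enough CPUs to see throttling, the cumulative resource manager statistics can be checked per consumer group (a sketch):
SQL> select name, consumed_cpu_time, cpu_waits, cpu_wait_time from v$rsrc_consumer_group;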
}}}
! LAB X: generic RMAN enhancements
{{{
In this lab you learn how to perform a table point in time recovery.
1) create a table in a schema of your choice in the non-CDB and populate it with data
SQL> sho user
USER is "MARTIN"
SQL> create table recoverme tablespace users as select * from dba_objects;
Table created
SQL> select table_name from tabs;
TABLE_NAME
--------------------------------------------------------------------------------
RECOVERME
2) get some information about the database; the most useful item is the current SCN. This is the SCN to recover to in the next steps, so take note of it.
SQL> select db_unique_name, database_role, cdb, current_scn from v$database;
DB_UNIQUE_NAME DATABASE_ROLE CDB CURRENT_SCN
------------------------------ ---------------- --- -----------
NCDB PRIMARY NO 1766295
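- an equivalent way to capture the current SCN (a sketch):
SQL> select dbms_flashback.get_system_change_number as scn from dual;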
3) ensure there were rows in the table at this particular SCN
SQL> select count(*) from recoverme;
COUNT(*)
----------
91858
4) truncate the table to simulate something daft
SQL> truncate table recoverme;
table truncated.
5) try to salvage the table without having to revert to a restore
SQL> flashback table recoverme to scn 1766304;
flashback table recoverme to scn 1766304
*
ERROR at line 1:
ORA-08189: cannot flashback the table because row movement is not enabled
SQL> alter table recoverme enable row movement;
Table altered.
SQL> flashback table recoverme to scn 1766304;
flashback table recoverme to scn 1766304
*
ERROR at line 1:
ORA-01466: unable to read data - table definition has changed
6) After this proved unsuccessful, perform a table point-in-time recovery
NB: which other recovery technique could you have tried in step 5?
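- One answer to the NB: with flashback logging enabled, FLASHBACK DATABASE would also rewind past the truncate, at the cost of taking the entire database back in time (a hedged sketch):
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to scn 1766295;
SQL> alter database open resetlogs;
- For now, proceed with the table point-in-time recovery: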
RECOVER TABLE MARTIN.RECOVERME
UNTIL SCN 1766295
AUXILIARY DESTINATION '/u02/oradata/adata/oraback/NCDB/temp'
REMAP TABLE 'MARTIN'.'RECOVERME':'RECOVERME_RESTRD';
....
executing Memory Script
Oracle instance shut down
Performing import of tables...
IMPDP> Master table "SYS"."TSPITR_IMP_onvy_iEei" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_onvy_iEei":
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
IMPDP> . . imported "MARTIN"."RECOVERME_RESTRD" 10.01 MB 91858 rows
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
IMPDP> Job "SYS"."TSPITR_IMP_onvy_iEei" successfully completed at Thu Jun 11 09:42:52 2015 elapsed 0 00:00:15
Import completed
Removing automatic instance
Automatic instance removed
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/NCDB/datafile/o1_mf_temp_bqlldwvd_.tmp deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/ONVY_PITR_NCDB/onlinelog/o1_mf_3_bqllgqf9_.log deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/ONVY_PITR_NCDB/onlinelog/o1_mf_2_bqllgq2t_.log deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/ONVY_PITR_NCDB/onlinelog/o1_mf_1_bqllgpq2_.log deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/ONVY_PITR_NCDB/datafile/o1_mf_users_bqllgnnh_.dbf deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/NCDB/datafile/o1_mf_sysaux_bqllddlj_.dbf deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/NCDB/datafile/o1_mf_undotbs1_bqllddln_.dbf deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/NCDB/datafile/o1_mf_system_bqllddlb_.dbf deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/NCDB/controlfile/o1_mf_bqlld6c7_.ctl deleted
auxiliary instance file tspitr_onvy_78692.dmp deleted
Finished recover at 11.06.2015 09:42:54
7) check if the table was imported OK and has the rows needed.
SQL> conn martin/secret
Connected.
SQL> select table_name from tabs;
TABLE_NAME
--------------------------------------------------------------------------------
RECOVERME
RECOVERME_RESTRD
SQL> select count(*) from RECOVERME_RESTRD;
COUNT(*)
----------
91858
SQL> select count(*) from RECOVERME;
COUNT(*)
----------
0
}}}
! Lab X: Threaded Execution
{{{
In the final lab you will experiment with the changes introduced by threaded execution.
1) Verify your current settings
Start your NCDB if it is not open. Have a look at all the OS process IDs. How many are started?
[oracle@server3 ~]$ ps -ef | grep NCDB
oracle 19728 19658 0 06:27 pts/0 00:00:00 screen -S NCDB
oracle 19729 19728 0 06:27 ? 00:00:00 SCREEN -S NCDB
oracle 19963 1 0 06:28 ? 00:00:00 ora_pmon_NCDB
oracle 19965 1 0 06:28 ? 00:00:00 ora_psp0_NCDB
oracle 19967 1 2 06:28 ? 00:00:11 ora_vktm_NCDB
oracle 19971 1 0 06:28 ? 00:00:00 ora_gen0_NCDB
oracle 19973 1 0 06:28 ? 00:00:00 ora_mman_NCDB
oracle 19977 1 0 06:28 ? 00:00:00 ora_diag_NCDB
oracle 19979 1 0 06:28 ? 00:00:00 ora_dbrm_NCDB
oracle 19981 1 0 06:28 ? 00:00:00 ora_vkrm_NCDB
oracle 19983 1 0 06:28 ? 00:00:00 ora_dia0_NCDB
oracle 19985 1 0 06:28 ? 00:00:02 ora_dbw0_NCDB
oracle 19987 1 0 06:28 ? 00:00:03 ora_lgwr_NCDB
oracle 19989 1 0 06:28 ? 00:00:00 ora_ckpt_NCDB
oracle 19991 1 0 06:28 ? 00:00:00 ora_lg00_NCDB
oracle 19993 1 0 06:28 ? 00:00:00 ora_smon_NCDB
oracle 19995 1 0 06:28 ? 00:00:00 ora_lg01_NCDB
oracle 19997 1 0 06:28 ? 00:00:00 ora_reco_NCDB
oracle 19999 1 0 06:28 ? 00:00:00 ora_lreg_NCDB
oracle 20001 1 0 06:28 ? 00:00:00 ora_pxmn_NCDB
oracle 20003 1 0 06:28 ? 00:00:00 ora_rbal_NCDB
oracle 20005 1 0 06:28 ? 00:00:00 ora_asmb_NCDB
oracle 20007 1 0 06:28 ? 00:00:01 ora_mmon_NCDB
oracle 20009 1 0 06:28 ? 00:00:00 ora_mmnl_NCDB
oracle 20013 1 0 06:28 ? 00:00:00 ora_mark_NCDB
oracle 20015 1 0 06:28 ? 00:00:00 ora_d000_NCDB
oracle 20017 1 0 06:28 ? 00:00:00 ora_s000_NCDB
oracle 20021 1 0 06:28 ? 00:00:00 ora_dmon_NCDB
oracle 20033 1 0 06:28 ? 00:00:00 ora_o000_NCDB
oracle 20039 1 0 06:28 ? 00:00:00 ora_o001_NCDB
oracle 20043 1 0 06:28 ? 00:00:00 ora_rvwr_NCDB
oracle 20045 1 0 06:28 ? 00:00:00 ora_insv_NCDB
oracle 20047 1 0 06:28 ? 00:00:00 ora_nsv1_NCDB
oracle 20049 1 0 06:28 ? 00:00:00 ora_fsfp_NCDB
oracle 20054 1 0 06:28 ? 00:00:00 ora_rsm0_NCDB
oracle 20056 1 0 06:29 ? 00:00:00 ora_tmon_NCDB
oracle 20058 1 0 06:29 ? 00:00:00 ora_arc0_NCDB
oracle 20060 1 0 06:29 ? 00:00:00 ora_arc1_NCDB
oracle 20062 1 0 06:29 ? 00:00:00 ora_arc2_NCDB
oracle 20064 1 0 06:29 ? 00:00:00 ora_arc3_NCDB
oracle 20066 1 0 06:29 ? 00:00:00 ora_o002_NCDB
oracle 20068 1 0 06:29 ? 00:00:00 ora_o003_NCDB
oracle 20074 1 0 06:29 ? 00:00:00 ora_o004_NCDB
oracle 20078 1 0 06:29 ? 00:00:00 ora_tt00_NCDB
oracle 20080 1 0 06:29 ? 00:00:00 ora_tt01_NCDB
oracle 20091 1 0 06:29 ? 00:00:00 ora_p000_NCDB
oracle 20093 1 0 06:29 ? 00:00:00 ora_p001_NCDB
oracle 20095 1 0 06:29 ? 00:00:00 ora_p002_NCDB
oracle 20097 1 0 06:29 ? 00:00:00 ora_p003_NCDB
oracle 20099 1 0 06:29 ? 00:00:00 ora_smco_NCDB
oracle 20101 1 0 06:29 ? 00:00:00 ora_w000_NCDB
oracle 20103 1 0 06:29 ? 00:00:00 ora_w001_NCDB
oracle 20107 1 0 06:29 ? 00:00:00 ora_aqpc_NCDB
oracle 20111 1 0 06:29 ? 00:00:00 ora_p004_NCDB
oracle 20113 1 0 06:29 ? 00:00:00 ora_p005_NCDB
oracle 20115 1 0 06:29 ? 00:00:00 ora_p006_NCDB
oracle 20117 1 0 06:29 ? 00:00:00 ora_p007_NCDB
oracle 20119 1 0 06:29 ? 00:00:00 ora_cjq0_NCDB
oracle 20121 1 0 06:29 ? 00:00:00 ora_qm02_NCDB
oracle 20125 1 0 06:29 ? 00:00:00 ora_q002_NCDB
oracle 20127 1 0 06:29 ? 00:00:00 ora_q003_NCDB
oracle 20131 1 0 06:29 ? 00:00:00 oracleNCDB (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 20305 1 1 06:30 ? 00:00:03 ora_m005_NCDB
oracle 20412 1 0 06:33 ? 00:00:00 ora_m004_NCDB
oracle 20481 1 0 06:34 ? 00:00:00 ora_j000_NCDB
oracle 20483 1 0 06:34 ? 00:00:00 ora_j001_NCDB
oracle 20508 20415 0 06:36 pts/15 00:00:00 grep --color=auto NCDB
[oracle@server3 ~]$ ps -ef | grep NCDB | grep -v grep | wc -l
66
- keep that number in mind
2) switch to threaded_execution
Connect as SYSDBA and check whether the instance is using threaded execution. If not, enable threaded execution and bounce the instance for the parameter to take effect. What do you notice when the database restarts?
SQL> show parameter threaded_
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
threaded_execution boolean FALSE
SQL> alter system set threaded_execution=true scope=spfile;
System altered.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ERROR:
ORA-01017: invalid username/password; logon denied
ORA-01017: invalid username/password; logon denied
SQL>
3) how can you start the database? (Hint: OS authentication is not available under threaded execution, hence the ORA-01017; connect with a password instead.)
SQL> conn sys/change_on_install as sysdba
Connected.
SQL> alter database mount;
Database altered.
SQL> alter database open;
Database altered.
SQL>
4) check the OS processes now: how many are there?
[oracle@server3 ~]$ ps -ef | grep NCDB | egrep -vi "grep|screen" | nl
1 oracle 20858 1 0 06:40 ? 00:00:00 ora_pmon_NCDB
2 oracle 20860 1 0 06:40 ? 00:00:00 ora_psp0_NCDB
3 oracle 20862 1 2 06:40 ? 00:00:05 ora_vktm_NCDB
4 oracle 20866 1 0 06:40 ? 00:00:01 ora_u004_NCDB
5 oracle 20872 1 6 06:40 ? 00:00:15 ora_u005_NCDB
6 oracle 20879 1 0 06:40 ? 00:00:00 ora_dbw0_NCDB
7 oracle 20959 1 0 06:42 ? 00:00:00 oracleNCDB (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
[oracle@server3 ~]$
5) Can you think of a reason why there are so few? Can you make the others appear? Clue: check man ps
[oracle@server3 ~]$ ps -eLf | grep NCDB | egrep -vi "grep|screen" | grep -v grep
oracle 20858 1 20858 0 1 06:40 ? 00:00:00 ora_pmon_NCDB
oracle 20860 1 20860 0 1 06:40 ? 00:00:00 ora_psp0_NCDB
oracle 20862 1 20862 2 1 06:40 ? 00:00:06 ora_vktm_NCDB
oracle 20866 1 20866 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20867 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20868 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20869 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20875 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20880 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20881 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20882 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20883 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20884 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20886 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20888 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20889 0 14 06:40 ? 00:00:00 ora_u004_NCDB
oracle 20866 1 20928 0 14 06:41 ? 00:00:00 ora_u004_NCDB
oracle 20872 1 20872 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20873 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20874 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20876 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20877 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20885 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20887 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20890 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20891 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20892 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20893 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20896 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20897 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20898 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20900 0 45 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20922 0 45 06:41 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20925 0 45 06:41 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20932 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20934 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20935 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20936 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20937 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20938 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20939 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20940 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20941 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20942 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20943 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20944 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20945 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20947 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20948 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20949 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20950 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20951 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20952 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20953 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20954 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20955 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21088 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21089 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21091 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21092 0 45 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21138 0 45 06:44 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21139 0 45 06:44 ? 00:00:00 ora_u005_NCDB
oracle 20879 1 20879 0 1 06:40 ? 00:00:00 ora_dbw0_NCDB
oracle 20959 1 20959 0 1 06:42 ? 00:00:00 oracleNCDB (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
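- the same information is visible from inside the database; V$PROCESS distinguishes processes from threads (a sketch):
SQL> select spid, stid, program, execution_type from v$process order by spid, stid;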
6) create a session using SQL*Net - is the session a process or a thread?
[oracle@server3 ~]$ sqlplus martin/secret@ncdb
SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 11 06:45:57 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Thu Apr 23 2015 04:11:21 -04:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
SQL> select userenv('sid') from dual;
USERENV('SID')
--------------
15
- in another session
SQL> select pid,sosid,spid,stid,execution_type from v$process where addr = (select paddr from v$session where sid = 15);
PID SOSID SPID STID EXECUTION_
---------- ------------------------ ------------------------ ------------------------ ----------
30 21178 21178 21178 PROCESS
- this appears to be a process. Confirm on the OS level:
[oracle@server3 ~]$ ps -eLf | grep 21178 | grep -v grep
oracle 21178 1 21178 0 1 06:45 ? 00:00:00 oracleNCDB (LOCAL=NO)
7) Now enable new sessions to be created as threads. In order to do so you need to modify the listener configuration and add DEDICATED_THROUGH_BROKER_<listener_name> = ON. Then reload the listener
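- a sketch of the change, assuming the default listener name LISTENER:
# append to $ORACLE_HOME/network/admin/listener.ora
DEDICATED_THROUGH_BROKER_LISTENER = ON
[oracle@server3 ~]$ lsnrctl reload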
8) connect again
[oracle@server3 ~]$ sqlplus martin/secret@ncdb
SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 11 06:56:38 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Thu Jun 11 2015 06:45:58 -04:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
SQL> select userenv('sid') from dual;
USERENV('SID')
--------------
31
- verify if it's a process or a thread
SQL> select pid,sosid,spid,stid,execution_type from v$process where addr = (select paddr from v$session where sid = 31);
PID SOSID SPID STID EXECUTION_
---------- ------------------------ ------------------------ ------------------------ ----------
30 20872_21481 20872 21481 THREAD
- it is a thread. Can you see this on the OS too?
[oracle@server3 ~]$ ps -eLf | egrep 21481
oracle 20872 1 21481 0 46 06:56 ? 00:00:00 ora_u005_NCDB
- Note that this is the STID (the thread ID). The SPID is the OS process (ora_u005) that hosts all the threads:
[oracle@server3 ~]$ ps -eLf | egrep 20872
oracle 20872 1 20872 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20873 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20874 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20876 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20877 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20885 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20887 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20890 0 49 06:40 ? 00:00:01 ora_u005_NCDB
oracle 20872 1 20891 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20892 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20893 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20896 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20898 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20900 0 49 06:40 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20922 0 49 06:41 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20925 0 49 06:41 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20932 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20934 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20935 0 49 06:42 ? 00:00:01 ora_u005_NCDB
oracle 20872 1 20936 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20937 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20938 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20939 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20940 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20941 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20942 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20943 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20944 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20945 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20947 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20948 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20949 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20950 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20951 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20952 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20953 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20954 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 20955 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21088 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21089 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21091 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21092 0 49 06:42 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21481 0 49 06:56 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21494 0 49 06:57 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21499 0 49 06:57 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21500 0 49 06:57 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21521 0 49 06:59 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21522 0 49 06:59 ? 00:00:00 ora_u005_NCDB
oracle 20872 1 21536 0 49 07:00 ? 00:00:00 ora_u005_NCDB
8) kill the user session with SID 31 ON THE OS LEVEL. Do not use alter system disconnect session!
[oracle@server3 ~]$ ps -eLf | egrep 21481
oracle 20872 1 21481 0 48 06:56 ? 00:00:00 ora_u005_NCDB
oracle 21578 21236 21578 0 1 07:01 pts/16 00:00:00 grep -E --color=auto 21481
[oracle@server3 ~]$ kill -9 21481
[oracle@server3 ~]$
- what do you see? The kill -9 on the thread ID takes out the entire ora_u005 process (the signal applies to the whole thread group), so every session hosted by that process dies with it. Remember never to do this in production!
}}}
! end
http://facedba.blogspot.com/2017/10/recover-table-from-rman-backup-in.html
<<showtoc>>
! instrumentation for the before and after change
vi memcount.sh
{{{
echo "##### count the threads"
ps -eLf | grep $ORACLE_SID | wc -l
echo "##### count the processes"
ps -ef | grep $ORACLE_SID | wc -l
echo "##### CPU%, MEM%, VSZ, RSS for all users"
ps -A -o pcpu,pmem,vsz,rss | awk '{cpu += $1; mem += $2; vsz += $3; rss += $4} END {print cpu, mem, vsz/1024, rss/1024}'
echo "##### CPU%, MEM%, VSZ, RSS for oracle user"
ps -u $USER -o pcpu,pmem,vsz,rss | awk '{cpu += $1; mem += $2; vsz += $3; rss += $4} END {print cpu, mem, vsz/1024, rss/1024}'
echo "##### system memory"
free -m
echo "##### this sums the %MEM for all users"
ps aux | awk 'NR != 1 {x[$1] += $4} END{ for(z in x) {print z, x[z]"%"}}'
echo "##### this greps the current ORACLE_SID (excluding others) and sums the %MEM"
ps aux | grep $ORACLE_SID | awk 'NR != 1 {x[$1] += $4} END{ for(z in x) {print z, x[z]"%"}}'
}}}
! enable / disable
{{{
-- enable
alter system set threaded_execution=true scope=spfile sid='*';
srvctl stop database -d noncdb
srvctl start database -d noncdb
-- note: with threaded_execution=true, OS authentication (/ as sysdba) no longer works, hence the connect through the listener with a password
sqlplus sys/oracle@enkx4db01.enkitec.com/noncdb.enkitec.com as sysdba

-- disable
alter system set threaded_execution=false scope=spfile sid='*';
srvctl stop database -d noncdb
srvctl start database -d noncdb
sqlplus sys/oracle@enkx4db01.enkitec.com/noncdb.enkitec.com as sysdba

-- verify
show parameter threaded
select sys_context('userenv','sid') from dual;
set lines 300
select s.username, s.sid, s.serial#, s.con_id, p.spid, p.sosid, p.stid, p.execution_type
from v$session s, v$process p
where s.sid = 270
and s.paddr = p.addr
/
select count(spid),spid,execution_type from v$process where background = 1 group by spid, execution_type;
select pname, pid, sosid, spid, stid, execution_type
from v$process where background = 1
order by pname
/
select pname, pid, sosid, spid, stid, execution_type
from v$process
order by pname
/
ps -ef | grep noncdb
ps -eLf | grep noncdb
}}}
! before and after effect
-- BEFORE
{{{
$ sh memcount.sh
##### count the threads
49
##### count the processes
49
##### CPU%, MEM%, VSZ, RSS for all users
42.3 159.7 81873.2 1686.75
##### CPU%, MEM%, VSZ, RSS for oracle user
41 154.1 62122.9 1586.95
##### system memory
total used free shared buffers cached
Mem: 994 978 15 0 1 592
-/+ buffers/cache: 385 609
Swap: 1227 591 636
##### this sums the %MEM for all users
gdm 1.1%
oracle 153.9%
rpc 0%
dbus 0.1%
68 0.1%
rtkit 0%
postfix 0.1%
rpcuser 0%
root 4.2%
##### this greps the current ORACLE_SID (excluding others) and sums the %MEM
oracle 142.3%
}}}
-- AFTER
{{{
$ sh memcount.sh
##### count the threads
55
##### count the processes
7
##### CPU%, MEM%, VSZ, RSS for all users
58.3 92.9 56845.8 1005.93
##### CPU%, MEM%, VSZ, RSS for oracle user
57.1 87.3 37095.5 906.363
##### system memory
total used free shared buffers cached
Mem: 994 965 28 0 1 628
-/+ buffers/cache: 336 658
Swap: 1227 591 636
##### this sums the %MEM for all users
gdm 1.1%
oracle 87.4%
rpc 0%
dbus 0.1%
68 0.1%
rtkit 0%
postfix 0.1%
rpcuser 0%
root 4.2%
##### this greps the current ORACLE_SID (excluding others) and sums the %MEM
oracle 75.7%
}}}
! initial conclusions
* all in all the count of processes dropped from @@49 to 7@@, but what does this mean in terms of resource savings?
This mostly affects memory:
** VSZ (virtual memory size) dropped from 62122.9 MB to 37095.5 MB for the oracle user, a 40% decrease
** RSS (resident set size) dropped from 1586.95 MB to 906.363 MB for the oracle user, a 42% decrease
** %MEM (ratio of the processes' resident set size to the physical memory on the machine) dropped from 142.3% to 75.7%, a 46% decrease
So when you consolidate, the savings from switching to threaded_execution show up as more physical memory headroom for more instances, and even more when you move to the PDB (multi-tenant) architecture.
For CPU, there's really no effect: the CPU requirements of an app stay the same, and only decrease if you 1) tune or 2) move to a faster CPU.
See slides 27-30 of this OOW presentation by Arup http://www.oracle.com/technetwork/oem/app-quality-mgmt/con8788-2088738.pdf
! updates to the initial conclusions
Had a talk with Frits on the memory part of 12c threaded_execution.
I did some research on the performance and here's the result https://twitter.com/karlarao/status/582053491079843840
For the memory, here's the before and after https://twitter.com/karlarao/status/581367258804396032
On the performance side, non-threaded_execution is faster. I'm definitive about that.
For the memory gains: yes, some of the sessions could still end up eating the same (SGA+PGA) memory, but there are still some gains from the background processes, although the results are inconsistent
* on the VM test that I did it showed a ~40% decrease in RSS memory
* but on the Exadata test RSS memory actually increased (from ~27258.4MB to ~42487.8MB)
All in all, I don't like threaded_execution in terms of performance. For memory, it needs a bit more investigation because I'm seeing different results on VM and non-VM environments.
! FINAL: complete view of metrics for CPU speed comparison - elap_exec, lios_elap, us_lio
[img(95%,95%)[ http://i.imgur.com/WpgZHre.png ]]
! a lot of POLL syscalls which is an overhead causing slower overall performance
This blog post validated what I found. It shows that threaded_execution=true does a lot of POLL syscalls, an overhead that causes slower overall performance.
The ora_u00N process was also found to use sockets to communicate with its threads.
http://blog.ora-600.pl/2015/12/17/oracle-12c-internals-of-threaded-execution/
{{{
Oracle instance with threaded_execution=false:
[root@rico ~]# strace -cp 12168
Process 12168 attached
^CProcess 12168 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
0.00 0.000000 0 2 read
0.00 0.000000 0 2 write
0.00 0.000000 0 1 semctl
0.00 0.000000 0 159 getrusage
0.00 0.000000 0 12 times
0.00 0.000000 0 3 semtimedop
------ ----------- ----------- --------- --------- ----------------
100.00 0.000000 179 total
Oracle instance with threaded_execution=true:
[root@rico fd]# strace -cp 12165
Process 12165 attached
^CProcess 12165 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
84.22 0.113706 0 980840 poll
10.37 0.014000 7000 2 read
5.41 0.007310 1218 6 semtimedop
0.00 0.000000 0 2 write
0.00 0.000000 0 1 semctl
0.00 0.000000 0 419 getrusage
0.00 0.000000 0 12 times
------ ----------- ----------- --------- --------- ----------------
100.00 0.135016 981282 total
[root@rico fd]# strace -p 12165 -o /tmp/threaded_exec.out
Process 12165 attached
^CProcess 12165 detached
[root@rico fd]# grep poll /tmp/threaded_exec.out | tail
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
The STID 12165 was assigned to SPID 8107:
SQL> get spid
1 select spid, stid
2 from v$process p, v$session s
3 where p.addr=s.paddr
4* and s.sid=sys_context('userenv','sid')
SQL> /
SPID STID
------------------------ ------------------------
8107 12165
Let’s check the file descriptors for this thread:
[root@rico ~]# cd /proc/8107/task/12165/fd
[root@rico fd]# ls -al | grep 63
lrwx------. 1 oracle oinstall 64 12-17 21:38 63 -> socket:[73968]
[root@rico fd]# lsof | grep 73968
ora_scmn_ 8107 oracle 63u IPv6 73968 0t0 TCP localhost:ncube-lm->localhost:32400 (ESTABLISHED)
[root@rico fd]# ps aux | grep 8107 | grep -v grep
oracle 8107 4.7 29.0 6155520 2901516 ? Ssl 20:01 6:54 ora_u005_orclth
[root@rico fd]#
}}}
also check this tiddler [[12c New Features]]
https://docs.oracle.com/database/121/DWHSG/refresh.htm#DWHSG-GUID-51191C38-D52F-4A4D-B6FF-E631965AD69A
<<<
Types of Out-of-Place Refresh
There are three types of out-of-place refresh:
out-of-place fast refresh
This offers better availability than in-place fast refresh. It also offers better performance when changes affect a large part of the materialized view.
out-of-place PCT refresh
This offers better availability than in-place PCT refresh. There are two different approaches for partitioned and non-partitioned materialized views. If truncation and direct load are not feasible, you should use out-of-place refresh when the changes are relatively large. If truncation and direct load are feasible, in-place refresh is preferable in terms of performance. In terms of availability, out-of-place refresh is always preferable.
out-of-place complete refresh
This offers better availability than in-place complete refresh.
Using the refresh interface in the DBMS_MVIEW package, with method = ? and out_of_place = true, an out-of-place fast refresh is attempted first, then out-of-place PCT refresh, and finally out-of-place complete refresh. An example is the following:
DBMS_MVIEW.REFRESH('CAL_MONTH_SALES_MV', method => '?',
atomic_refresh => FALSE, out_of_place => TRUE);
<<<
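To verify which refresh method actually ran, a quick check against the MV dictionary view (a sketch; assumes you are connected as the MV owner and reuses the MV name from the example above):
{{{
SELECT mview_name, last_refresh_type, last_refresh_date, staleness
FROM   user_mviews
WHERE  mview_name = 'CAL_MONTH_SALES_MV';
}}}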
http://karandba.blogspot.com/2014/10/out-of-place-refresh-option-for.html
http://horia-berca-oracledba.blogspot.com/2013/10/out-of-place-materialized-view-refresh.html
https://community.oracle.com/mosc/discussion/4281580/partition-truncate-with-deferred-invalidation-any-side-effects
https://blog.dbi-services.com/oracle-12cr2-ddl-deferred-invalidation/
truncate table https://docs.oracle.com/database/121/SQLRF/statements_10007.htm#SQLRF01707
https://blogs.oracle.com/optimizer/optimizer-adaptive-features-in-oracle-database-12c-release-2
http://kerryosborne.oracle-guy.com/2013/11/24/12c-adaptive-optimization-part-1/
http://kerryosborne.oracle-guy.com/2013/12/09/12c-adaptive-optimization-part-2-hints/
.
Interesting observation about 15sec Top Activity graph
http://oracleprof.blogspot.com/2010/07/oem-performance-tab-and-active-session.html
https://leetcode.com/problems/combine-two-tables/
{{{
175. Combine Two Tables
Easy
SQL Schema
Table: Person
+-------------+---------+
| Column Name | Type |
+-------------+---------+
| PersonId | int |
| FirstName | varchar |
| LastName | varchar |
+-------------+---------+
PersonId is the primary key column for this table.
Table: Address
+-------------+---------+
| Column Name | Type |
+-------------+---------+
| AddressId | int |
| PersonId | int |
| City | varchar |
| State | varchar |
+-------------+---------+
AddressId is the primary key column for this table.
Write a SQL query for a report that provides the following information for each person in the Person table, regardless if there is an address for each of those people:
FirstName, LastName, City, State
Accepted
201,012
Submissions
365,130
}}}
{{{
/* Write your PL/SQL query statement below */
select a.FirstName, a.LastName, b.City, b.State from
person a, address b
where a.personid = b.personid (+);
}}}
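The same result with ANSI join syntax instead of the Oracle (+) outer join, a sketch:
{{{
SELECT a.FirstName, a.LastName, b.City, b.State
FROM   person a
LEFT JOIN address b ON a.personid = b.personid;
}}}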
https://leetcode.com/problems/second-highest-salary/
{{{
Write a SQL query to get the second highest salary from the Employee table.
+----+--------+
| Id | Salary |
+----+--------+
| 1 | 100 |
| 2 | 200 |
| 3 | 300 |
+----+--------+
For example, given the above Employee table, the query should return 200 as the second highest salary. If there is no second highest salary, then the query should return null.
+---------------------+
| SecondHighestSalary |
+---------------------+
| 200 |
+---------------------+
Accepted
162,933
Submissions
566,565
}}}
{{{
SELECT MAX(salary) AS SecondHighestSalary
FROM employee
WHERE salary NOT IN (
SELECT MAX(salary)
FROM employee
);
--select id, salary from
--(
--select a.id, a.salary,
-- dense_rank() over(order by a.salary desc) drank
--from test a)
--where drank = 2;
}}}
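An alternative sketch that keeps the DENSE_RANK approach but still returns NULL when there is no second salary (a scalar subquery yields NULL when no row qualifies, which is why the commented-out version above falls short):
{{{
SELECT (SELECT DISTINCT salary
        FROM (SELECT salary,
                     DENSE_RANK() OVER (ORDER BY salary DESC) drank
              FROM   employee)
        WHERE  drank = 2) AS SecondHighestSalary
FROM dual;
}}}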
https://leetcode.com/problems/nth-highest-salary/
{{{
177. Nth Highest Salary
Medium
Write a SQL query to get the nth highest salary from the Employee table.
+----+--------+
| Id | Salary |
+----+--------+
| 1 | 100 |
| 2 | 200 |
| 3 | 300 |
+----+--------+
For example, given the above Employee table, the nth highest salary where n = 2 is 200. If there is no nth highest salary, then the query should return null.
+------------------------+
| getNthHighestSalary(2) |
+------------------------+
| 200 |
+------------------------+
Accepted
81,363
Submissions
288,058
}}}
{{{
CREATE or replace FUNCTION getNthHighestSalary(N IN NUMBER) RETURN NUMBER IS
result NUMBER;
BEGIN
/* Write your PL/SQL query statement below */
select nvl(null,salary) salary
into result
from
(
select distinct a.salary,
dense_rank() over(order by a.salary desc) drank
from employee a)
where drank = N;
RETURN result;
END;
/
select getNthHighestSalary(2) from dual;
CREATE or replace FUNCTION getNthHighestSalary2(N IN NUMBER) RETURN NUMBER IS
result NUMBER;
BEGIN
select salary into result
from (select distinct(salary),rank() over (order by salary desc) as r
from test group by salary) where r=N;
return result;
END;
/
select getNthHighestSalary2(2) from dual;
select * from test;
select nvl('x',null) from (
select 1/NULL a from dual);
select 1/nvl(null,1) from dual; -- if not null return 1st , if null return 1
select 1/nvl(0,1) from dual; -- errors if zero
select 1/nullif(nvl( nullif(21,0) ,1),0) from dual;
SELECT NULLIF(0,0) FROM DUAL;
select nvl(null,salary) salary from
(
select a.id, a.salary,
dense_rank() over(order by a.salary desc) drank
from test a)
where drank = 2;
}}}
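On 12c and later the row-limiting clause can replace the DENSE_RANK inline view; a sketch of the query body (assumes it is wrapped in the same function with SELECT ... INTO, plus a NO_DATA_FOUND handler returning NULL when there is no nth salary):
{{{
SELECT DISTINCT salary
FROM   employee
ORDER  BY salary DESC
OFFSET N - 1 ROWS FETCH NEXT 1 ROWS ONLY;
}}}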
https://leetcode.com/problems/rank-scores/
{{{
Write a SQL query to rank scores. If there is a tie between two scores, both should have the same ranking. Note that after a tie, the next ranking number should be the next consecutive integer value. In other words, there should be no "holes" between ranks.
+----+-------+
| Id | Score |
+----+-------+
| 1 | 3.50 |
| 2 | 3.65 |
| 3 | 4.00 |
| 4 | 3.85 |
| 5 | 4.00 |
| 6 | 3.65 |
+----+-------+
For example, given the above Scores table, your query should generate the following report (order by highest score):
+-------+------+
| Score | Rank |
+-------+------+
| 4.00 | 1 |
| 4.00 | 1 |
| 3.85 | 2 |
| 3.65 | 3 |
| 3.65 | 3 |
| 3.50 | 4 |
+-------+------+
Accepted
78,882
Submissions
199,284
}}}
{{{
select a.score score,
dense_rank() over(order by a.score desc) rank
from scores a;
}}}
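The problem is really about DENSE_RANK vs RANK: RANK leaves holes after ties while DENSE_RANK does not. A quick side-by-side sketch:
{{{
SELECT score,
       RANK()       OVER (ORDER BY score DESC) AS rnk,   -- 1,1,3,... (holes)
       DENSE_RANK() OVER (ORDER BY score DESC) AS drnk   -- 1,1,2,... (no holes)
FROM   scores;
}}}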
https://leetcode.com/problems/employees-earning-more-than-their-managers/
{{{
The Employee table holds all employees including their managers. Every employee has an Id, and there is also a column for the manager Id.
+----+-------+--------+-----------+
| Id | Name | Salary | ManagerId |
+----+-------+--------+-----------+
| 1 | Joe | 70000 | 3 |
| 2 | Henry | 80000 | 4 |
| 3 | Sam | 60000 | NULL |
| 4 | Max | 90000 | NULL |
+----+-------+--------+-----------+
Given the Employee table, write a SQL query that finds out employees who earn more than their managers. For the above table, Joe is the only employee who earns more than his manager.
+----------+
| Employee |
+----------+
| Joe |
+----------+
Accepted
134,240
Submissions
261,500
}}}
{{{
/* Write your PL/SQL query statement below */
select b.name as Employee
from Employee a, Employee b
where a.id = b.managerid
and b.salary > a.salary;
select * from employeeslc a, employeeslc b
where a.employee_id = b.manager_id
and b.salary > a.salary;
}}}
https://leetcode.com/problems/department-highest-salary/
{{{
The Employee table holds all employees. Every employee has an Id, a salary, and there is also a column for the department Id.
+----+-------+--------+--------------+
| Id | Name | Salary | DepartmentId |
+----+-------+--------+--------------+
| 1 | Joe | 70000 | 1 |
| 2 | Jim | 90000 | 1 |
| 3 | Henry | 80000 | 2 |
| 4 | Sam | 60000 | 2 |
| 5 | Max | 90000 | 1 |
+----+-------+--------+--------------+
The Department table holds all departments of the company.
+----+----------+
| Id | Name |
+----+----------+
| 1 | IT |
| 2 | Sales |
+----+----------+
Write a SQL query to find employees who have the highest salary in each of the departments. For the above tables, your SQL query should return the following rows (order of rows does not matter).
+------------+----------+--------+
| Department | Employee | Salary |
+------------+----------+--------+
| IT | Max | 90000 |
| IT | Jim | 90000 |
| Sales | Henry | 80000 |
+------------+----------+--------+
Explanation:
Max and Jim both have the highest salary in the IT department and Henry has the highest salary in the Sales department.
Accepted
79,335
Submissions
250,355
}}}
{{{
/* Write your PL/SQL query statement below */
select department, employee, salary from (
select a.departmentid, b.name department, a.name employee, a.salary, dense_rank() over(partition by a.departmentid order by a.salary desc) rank
from employee a, department b
where a.departmentid = b.id
)
where rank = 1;
-- in oracle
select department, employee, salary from (
select a.department_id, b.department_name department, a.first_name employee, a.salary, dense_rank() over(partition by a.department_id order by a.salary desc) rank
from employeeslc a, departmentslc b
where a.department_id = b.department_id(+)
)
where rank = 1;
}}}
https://leetcode.com/problems/department-top-three-salaries/
{{{
The Employee table holds all employees. Every employee has an Id, and there is also a column for the department Id.
+----+-------+--------+--------------+
| Id | Name | Salary | DepartmentId |
+----+-------+--------+--------------+
| 1 | Joe | 85000 | 1 |
| 2 | Henry | 80000 | 2 |
| 3 | Sam | 60000 | 2 |
| 4 | Max | 90000 | 1 |
| 5 | Janet | 69000 | 1 |
| 6 | Randy | 85000 | 1 |
| 7 | Will | 70000 | 1 |
+----+-------+--------+--------------+
The Department table holds all departments of the company.
+----+----------+
| Id | Name |
+----+----------+
| 1 | IT |
| 2 | Sales |
+----+----------+
Write a SQL query to find employees who earn the top three salaries in each of the department. For the above tables, your SQL query should return the following rows (order of rows does not matter).
+------------+----------+--------+
| Department | Employee | Salary |
+------------+----------+--------+
| IT | Max | 90000 |
| IT | Randy | 85000 |
| IT | Joe | 85000 |
| IT | Will | 70000 |
| Sales | Henry | 80000 |
| Sales | Sam | 60000 |
+------------+----------+--------+
Explanation:
In IT department, Max earns the highest salary, both Randy and Joe earn the second highest salary, and Will earns the third highest salary. There are only two employees in the Sales department, Henry earns the highest salary while Sam earns the second highest salary.
Accepted
54,931
Submissions
189,029
}}}
{{{
select department, employee, salary from
(
select b.name department, a.name employee, a.salary salary, dense_rank() over(partition by b.id order by a.salary desc) drank
from employee a, department b
where a.departmentid = b.id) a
where a.drank <= 3;
select * from employees;
select * from departments;
select b.department_name department, a.first_name employee, a.salary salary
from employees a, departments b
where a.department_id = b.department_id;
select department, employee, salary from
(
select b.department_name department, a.first_name employee, a.salary salary, dense_rank() over(partition by b.department_name order by a.salary desc) drank
from employees a, departments b
where a.department_id = b.department_id) a
where a.drank <= 3;
select department, employee, salary from
(
select b.name department, a.name employee, a.salary salary, dense_rank() over(partition by b.id order by a.salary desc) drank
from employee a, department b
where a.departmentid = b.id) a
where a.drank <= 3;
}}}
https://docs.oracle.com/en/database/oracle/oracle-database/18/newft/new-features.html#GUID-04A4834D-848F-44D5-8C34-36237D40F194
https://docs.oracle.com/en/database/oracle/oracle-database/19/newft/new-features.html#GUID-06A15128-1172-48E5-8493-CD670B9E57DC
! issues
!! upgrade
https://jonathanlewis.wordpress.com/2019/04/08/describe-upgrade/
!! 19c RAC limitations / licensing
<<<
We know that a license of Oracle Database Standard Edition (DB SE) includes clustering services with Oracle Real Application Clusters (RAC) as a standard feature. Oracle RAC is not included in the Standard Edition of releases prior to Oracle Database 10g, nor is it an available option with those earlier releases. For Oracle DB SE that is no longer available on the price list, the free RAC feature could be used to cluster up to a maximum of 4 sockets for eligible versions of DB SE.
For customers that are using Oracle DB SE, Oracle has now announced the de-support of RAC with Oracle DB SE 19c. If a customer attempts to upgrade to Oracle DB 19c, they will have 2 options (upgrade paths) to choose from:
OPTION 1: Upgrade to DB EE, on which RAC 19c is an extra-cost, chargeable option (as opposed to a standard feature with DB SE). Here, a customer will upgrade from Oracle RAC Standard Edition (SE) to Oracle RAC Enterprise Edition (EE). Note: If a customer attempts to install a RAC 19c Database using Standard Edition, the Oracle Universal Installer will prevent the installation.
OR
OPTION 2: Convert Oracle RAC Standard Edition to a Single Instance (Non-RAC) Standard Edition
There is another consideration. Most real-life requirements are for business-critical HA, that is Active/Passive. If that is the real requirement then you can also use Oracle Clusterware, which comes included at no charge when you buy Oracle Linux support. If you are using Red Hat you don't have to reinstall the OS; Oracle will just take over supporting your current Red Hat OS and you get to use Clusterware for free. Best part is that Oracle Linux is lower in price than Red Hat, so overall it's a much lower solution cost. Many Oracle customers are choosing this option.
<<<
* Auto STS Capture Task
https://mikedietrichde.com/2020/05/28/do-you-love-unexpected-surprises-sys_auto_sts-in-oracle-19-7-0/
<<<
As far as I can see, the starting point for this is Bug 30001331 - CAPTURE SQL STATEMENTS INTO STS FOR PLAN STABILITY. It directed me to Bug 30260530 - CONTENT INCLUSION OF 30001331 IN DATABASE RU 19.7.0.0.0. So this seems to be present since 19.7.0, and the capture into it happens by default.
<<<
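A quick way to check whether the auto-capture STS exists and how many statements it has collected so far (a sketch against the standard dictionary view):
{{{
SELECT name, owner, created, statement_count
FROM   dba_sqlset
WHERE  name = 'SYS_AUTO_STS';
}}}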
http://www.evernote.com/shard/s48/sh/1a9c1779-94ec-4e5a-a26f-ba92ea08988e/3bb10603e76f4fb346d7df4328882dcd
Also check out this thread at oracle-l for options on 10GbE on V2 http://www.freelists.org/post/oracle-l/Exadata-V2-Compute-Node-10GigE-PCI-card-installation
{{{
create table parallel_t1(c1 int, c2 char(100));
insert into parallel_t1
select level, 'x'
from dual
connect by level <= 8000
;
commit;
alter system set db_file_multiblock_read_count=128;
-- the following are pfile entries (add to the initorcl.ora used in the startup below) to disable prefetch:
*._db_block_prefetch_limit=0
*._db_block_prefetch_quota=0
*._db_file_noncontig_mblock_read_count=0
alter system flush buffer_cache;
-- generate one parallel query
select count(*) from parallel_t1;
16:28:36 SYS@orcl> shutdown abort
ORACLE instance shut down.
16:29:21 SYS@orcl> startup pfile='/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/initorcl.ora'
ORACLE instance started.
Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 348129976 bytes
Database Buffers 100663296 bytes
Redo Buffers 6008832 bytes
Database mounted.
Database opened.
16:29:33 SYS@orcl> alter system flush buffer_cache;
System altered.
16:29:38 SYS@orcl> show parameter db_file_multi
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_file_multiblock_read_count integer 128
16:29:47 SYS@orcl>
16:29:47 SYS@orcl> set lines 300
16:29:51 SYS@orcl> col "Parameter" FOR a40
16:29:51 SYS@orcl> col "Session Value" FOR a20
16:29:51 SYS@orcl> col "Instance Value" FOR a20
16:29:51 SYS@orcl> col "Description" FOR a50
16:29:51 SYS@orcl> SELECT a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value", a.ksppdesc "Description"
16:29:51 2 FROM x$ksppi a, x$ksppcv b, x$ksppsv c
16:29:51 3 WHERE a.indx = b.indx AND a.indx = c.indx
16:29:51 4 AND substr(ksppinm,1,1)='_'
16:29:51 5 AND a.ksppinm like '%&parameter%'
16:29:51 6 /
Enter value for parameter: read_count
Parameter Session Value Instance Value Description
---------------------------------------- -------------------- -------------------- --------------------------------------------------
_db_file_exec_read_count 128 128 multiblock read count for regular clients
_db_file_optimizer_read_count 128 128 multiblock read count for regular clients
_db_file_noncontig_mblock_read_count 0 0 number of noncontiguous db blocks to be prefetched
_sort_multiblock_read_count 2 2 multi-block read count for sort
16:29:54 SYS@orcl>
16:29:54 SYS@orcl> @mystat
628 rows created.
SNAP_DATE_END
-------------------
2014-09-08 16:29:57
SNAP_DATE_BEGIN
-------------------
no rows selected
no rows selected
0 rows deleted.
16:29:57 SYS@orcl> select count(*) from parallel_t1;
COUNT(*)
----------
8000
16:30:03 SYS@orcl> @mystat
628 rows created.
SNAP_DATE_END
-------------------
2014-09-08 16:30:05
SNAP_DATE_BEGIN
-------------------
2014-09-08 16:29:57
Difference Statistics Name
---------------- --------------------------------------------------------------
2 CPU used by this session
4 CPU used when call started
3 DB time
628 HSC Heap Segment Block Changes
10 SQL*Net roundtrips to/from client
80 buffer is not pinned count
3,225 bytes received via SQL*Net from client
2,308 bytes sent via SQL*Net to client
15 calls to get snapshot scn: kcmgss
1 calls to kcmgas
32 calls to kcmgcs
1,097,728 cell physical IO interconnect bytes
4 cluster key scan block gets
4 cluster key scans
672 consistent changes
250 consistent gets
12 consistent gets - examination
250 consistent gets from cache
211 consistent gets from cache (fastpath)
1 cursor authentications
1,307 db block changes
703 db block gets
703 db block gets from cache
10 db block gets from cache (fastpath)
18 enqueue releases
19 enqueue requests
14 execute count
530 file io wait time
149 free buffer requested
5 index fetch by key
2 index scans kdiixs1
218 no work - consistent read gets
42 non-idle wait count
19 opened cursors cumulative
5 parse count (failures)
12 parse count (hard)
19 parse count (total)
1 parse time elapsed
32 physical read IO requests
1,097,728 physical read bytes
32 physical read total IO requests
1,097,728 physical read total bytes
134 physical reads
134 physical reads cache
102 physical reads cache prefetch
56 recursive calls
629 redo entries
88,372 redo size
953 session logical reads
3 shared hash latch upgrades - no wait
3 sorts (memory)
2 sorts (rows)
5 sql area purged
1 table fetch by rowid
211 table scan blocks gotten
13,560 table scan rows gotten
4 table scans (short tables)
42,700 undo change vector size
17 user calls
3 workarea executions - optimal
4 workarea memory allocated
61 rows selected.
SNAP_DATE_BEGIN SNAP_DATE_END
------------------- -------------------
2014-09-08 16:29:57 2014-09-08 16:30:05
1256 rows deleted.
16:30:05 SYS@orcl> set lines 300
16:30:38 SYS@orcl> col "Parameter" FOR a40
16:30:38 SYS@orcl> col "Session Value" FOR a20
16:30:38 SYS@orcl> col "Instance Value" FOR a20
16:30:38 SYS@orcl> col "Description" FOR a50
16:30:38 SYS@orcl> SELECT a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value", a.ksppdesc "Description"
16:30:38 2 FROM x$ksppi a, x$ksppcv b, x$ksppsv c
16:30:38 3 WHERE a.indx = b.indx AND a.indx = c.indx
16:30:38 4 AND substr(ksppinm,1,1)='_'
16:30:38 5 AND a.ksppinm like '%&parameter%'
16:30:38 6 /
Enter value for parameter: prefetch
Parameter Session Value Instance Value Description
---------------------------------------- -------------------- -------------------- --------------------------------------------------
_db_block_prefetch_quota 0 0 Prefetch quota as a percent of cache size
_db_block_prefetch_limit 0 0 Prefetch limit in blocks
}}}
{{{
-- CREATE THE JOB
-- 1min interval -- repeat_interval => 'FREQ=MINUTELY;BYSECOND=0',
-- 2mins interval -- repeat_interval => 'FREQ=MINUTELY;INTERVAL=2;BYSECOND=0',
-- 10secs interval -- repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
BEGIN
SYS.DBMS_SCHEDULER.CREATE_JOB (
job_name => '"SYSTEM"."AWR_1MIN_SNAP"',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN
dbms_workload_repository.create_snapshot;
END;',
number_of_arguments => 0,
start_date => SYSTIMESTAMP,
repeat_interval => 'FREQ=MINUTELY;BYSECOND=0',
end_date => NULL,
job_class => '"SYS"."DEFAULT_JOB_CLASS"',
enabled => FALSE,
auto_drop => FALSE,
comments => 'AWR_1MIN_SNAP',
credential_name => NULL,
destination_name => NULL);
SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(
name => '"SYSTEM"."AWR_1MIN_SNAP"',
attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_OFF);
SYS.DBMS_SCHEDULER.enable(
name => '"SYSTEM"."AWR_1MIN_SNAP"');
END;
/
-- ENABLE JOB
BEGIN
SYS.DBMS_SCHEDULER.enable(
name => '"SYSTEM"."AWR_1MIN_SNAP"');
END;
/
-- RUN JOB
BEGIN
SYS.DBMS_SCHEDULER.run_job('"SYSTEM"."AWR_1MIN_SNAP"');
END;
/
-- DISABLE JOB
BEGIN
SYS.DBMS_SCHEDULER.disable(
name => '"SYSTEM"."AWR_1MIN_SNAP"');
END;
/
-- DROP JOB
BEGIN
SYS.DBMS_SCHEDULER.DROP_JOB(job_name => '"SYSTEM"."AWR_1MIN_SNAP"',
defer => false,
force => true);
END;
/
-- MONITOR JOB
SELECT * FROM DBA_SCHEDULER_JOB_LOG WHERE job_name = 'AWR_1MIN_SNAP';
col JOB_NAME format a15
col START_DATE format a25
col LAST_START_DATE format a25
col NEXT_RUN_DATE format a25
SELECT job_name, enabled, start_date, last_start_date, next_run_date FROM DBA_SCHEDULER_JOBS WHERE job_name = 'AWR_1MIN_SNAP';
-- AWR get recent snapshot
select * from
(SELECT s0.instance_number, s0.snap_id, s0.startup_time,
TO_CHAR(s0.END_INTERVAL_TIME,'YYYY-Mon-DD HH24:MI:SS') snap_start,
TO_CHAR(s1.END_INTERVAL_TIME,'YYYY-Mon-DD HH24:MI:SS') snap_end,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) ela_min
FROM dba_hist_snapshot s0,
dba_hist_snapshot s1
WHERE s1.snap_id = s0.snap_id + 1
ORDER BY snap_id DESC)
where rownum < 11;
}}}
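With 1-minute snapshots AWR grows quickly, so it may be worth adjusting the snapshot settings too; a sketch (both values are in minutes, and the numbers here are assumptions, adjust to taste):
{{{
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 11520,   -- keep 8 days of snapshots
    interval  => 60);     -- built-in interval; the job above snaps more often on top of this
END;
/
}}}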
.
A short video about it, worth watching whenever you get some time, only 12min:
https://www.ansible.com/quick-start-video
https://www.doag.org/formes/pubfiles/7375105/2015-K-INF-Frits_Hoogland-Automating__DBA__tasks_with_Ansible-Praesentation.pdf
https://fritshoogland.wordpress.com/2014/09/14/using-ansible-for-executing-oracle-dba-tasks/
https://learnxinyminutes.com/docs/ansible/
{{{
oracle@localhost.localdomain:/u01/oracle:orcl
$ s1
SQL*Plus: Release 12.1.0.1.0 Production on Tue Dec 16 00:53:22 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
00:53:23 SYS@orcl> select name, cdb, con_id from v$database;
NAME CDB CON_ID
--------- --- ----------
ORCL YES 0
00:53:23 SYS@orcl> select INSTANCE_NAME, STATUS, CON_ID from v$instance;
INSTANCE_NAME STATUS CON_ID
---------------- ------------ ----------
orcl OPEN 0
00:53:39 SYS@orcl> col name format A20
00:54:24 SYS@orcl> select name, con_id from v$services;
NAME CON_ID
-------------------- ----------
pdb1 3
orclXDB 1
orcl 1
SYS$BACKGROUND 1
SYS$USERS 1
00:54:30 SYS@orcl> select CON_ID, NAME, OPEN_MODE from v$pdbs;
CON_ID NAME OPEN_MODE
---------- -------------------- ----------
2 PDB$SEED READ ONLY
3 PDB1 READ WRITE
00:57:49 SYS@orcl> show con_name
CON_NAME
------------------------------
CDB$ROOT
00:58:19 SYS@orcl> show con_id
CON_ID
------------------------------
1
00:58:25 SYS@orcl> SELECT sys_context('userenv','CON_NAME') from dual;
SYS_CONTEXT('USERENV','CON_NAME')
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
CDB$ROOT
00:58:36 SYS@orcl> SELECT sys_context('userenv','CON_ID') from dual;
SYS_CONTEXT('USERENV','CON_ID')
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1
00:58:44 SYS@orcl> col PDB_NAME format a8
00:59:43 SYS@orcl> col CON_ID format 99
00:59:51 SYS@orcl> select PDB_ID, PDB_NAME, DBID, GUID, CON_ID from cdb_pdbs;
PDB_ID PDB_NAME DBID GUID CON_ID
---------- -------- ---------- -------------------------------- ------
2 PDB$SEED 4080030308 F081641BB43F0F7DE045000000000001 1
3 PDB1 3345156736 F0832BAF14721281E045000000000001 1
01:00:07 SYS@orcl> col MEMBER format A40
01:00:42 SYS@orcl> select GROUP#, CON_ID, MEMBER from v$logfile;
GROUP# CON_ID MEMBER
---------- ------ ----------------------------------------
3 0 /u01/app/oracle/oradata/ORCL/onlinelog/o
1_mf_3_9fxn1pmn_.log
3 0 /u01/app/oracle/fast_recovery_area/ORCL/
onlinelog/o1_mf_3_9fxn1por_.log
2 0 /u01/app/oracle/oradata/ORCL/onlinelog/o
1_mf_2_9fxn1lmy_.log
2 0 /u01/app/oracle/fast_recovery_area/ORCL/
onlinelog/o1_mf_2_9fxn1lox_.log
GROUP# CON_ID MEMBER
---------- ------ ----------------------------------------
1 0 /u01/app/oracle/oradata/ORCL/onlinelog/o
1_mf_1_9fxn1dq4_.log
1 0 /u01/app/oracle/fast_recovery_area/ORCL/
onlinelog/o1_mf_1_9fxn1dsx_.log
6 rows selected.
01:00:49 SYS@orcl> col NAME format A60
01:01:28 SYS@orcl> select NAME , CON_ID from v$controlfile;
NAME CON_ID
------------------------------------------------------------ ------
/u01/app/oracle/oradata/ORCL/controlfile/o1_mf_9fxn1csd_.ctl 0
/u01/app/oracle/fast_recovery_area/ORCL/controlfile/o1_mf_9f 0
xn1d0k_.ctl
01:01:35 SYS@orcl> col file_name format A50
01:02:01 SYS@orcl> col tablespace_name format A8
01:02:10 SYS@orcl> col file_id format 9999
01:02:18 SYS@orcl> col con_id format 999
01:02:26 SYS@orcl> select FILE_NAME, TABLESPACE_NAME, FILE_ID, con_id from cdb_data_files order by con_id ;
FILE_NAME TABLESPA FILE_ID CON_ID
-------------------------------------------------- -------- ------- ------
/u01/app/oracle/oradata/ORCL/datafile/o1_mf_system SYSTEM 1 1
_9fxmx6s1_.dbf
/u01/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux SYSAUX 3 1
_9fxmvhl3_.dbf
/u01/app/oracle/oradata/ORCL/datafile/o1_mf_users_ USERS 6 1
9fxn0t8s_.dbf
/u01/app/oracle/oradata/ORCL/datafile/o1_mf_undotb UNDOTBS1 4 1
s1_9fxn0vgg_.dbf
FILE_NAME TABLESPA FILE_ID CON_ID
-------------------------------------------------- -------- ------- ------
/u01/app/oracle/oradata/ORCL/datafile/o1_mf_system SYSTEM 5 2
_9fxn22po_.dbf
/u01/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux SYSAUX 7 2
_9fxn22p3_.dbf
/u01/app/oracle/oradata/ORCL/F0832BAF14721281E0450 USERS 13 3
00000000001/datafile/o1_mf_users_9fxvoh6n_.dbf
/u01/app/oracle/oradata/ORCL/F0832BAF14721281E0450 SYSAUX 12 3
FILE_NAME TABLESPA FILE_ID CON_ID
-------------------------------------------------- -------- ------- ------
00000000001/datafile/o1_mf_sysaux_9fxvnjdl_.dbf
/u01/app/oracle/oradata/ORCL/F0832BAF14721281E0450 APEX_226 14 3
00000000001/datafile/o1_mf_apex_226_9gfgd96o_.dbf 45286309
61551
/u01/app/oracle/oradata/ORCL/F0832BAF14721281E0450 SYSTEM 11 3
00000000001/datafile/o1_mf_system_9fxvnjdq_.dbf
10 rows selected.
01:02:40 SYS@orcl> col file_name format A42
01:03:49 SYS@orcl> select FILE_NAME, TABLESPACE_NAME, FILE_ID from dba_data_files;
FILE_NAME TABLESPA FILE_ID
------------------------------------------ -------- -------
/u01/app/oracle/oradata/ORCL/datafile/o1_m SYSTEM 1
f_system_9fxmx6s1_.dbf
/u01/app/oracle/oradata/ORCL/datafile/o1_m SYSAUX 3
f_sysaux_9fxmvhl3_.dbf
/u01/app/oracle/oradata/ORCL/datafile/o1_m USERS 6
f_users_9fxn0t8s_.dbf
/u01/app/oracle/oradata/ORCL/datafile/o1_m UNDOTBS1 4
f_undotbs1_9fxn0vgg_.dbf
FILE_NAME TABLESPA FILE_ID
------------------------------------------ -------- -------
01:03:56 SYS@orcl> col NAME format A12
01:07:23 SYS@orcl> select FILE#, ts.name, ts.ts#, ts.con_id
01:07:24 2 from v$datafile d, v$tablespace ts
01:07:30 3 where d.ts#=ts.ts#
01:07:39 4 and d.con_id=ts.con_id
01:07:46 5 order by 4,3;
FILE# NAME TS# CON_ID
---------- ------------ ---------- ------
1 SYSTEM 0 1
3 SYSAUX 1 1
4 UNDOTBS1 2 1
6 USERS 4 1
5 SYSTEM 0 2
7 SYSAUX 1 2
11 SYSTEM 0 3
12 SYSAUX 1 3
13 USERS 3 3
14 APEX_2264528 4 3
630961551
FILE# NAME TS# CON_ID
---------- ------------ ---------- ------
10 rows selected.
01:07:52 SYS@orcl> col file_name format A47
01:08:23 SYS@orcl> select FILE_NAME, TABLESPACE_NAME, FILE_ID
01:08:30 2 from cdb_temp_files;
FILE_NAME TABLESPA FILE_ID
----------------------------------------------- -------- -------
/u01/app/oracle/oradata/ORCL/datafile/o1_mf_tem TEMP 1
p_9fxn206l_.tmp
/u01/app/oracle/oradata/ORCL/F0832BAF14721281E0 TEMP 3
45000000000001/datafile/o1_mf_temp_9fxvnznp_.db
f
/u01/app/oracle/oradata/ORCL/datafile/pdbseed_t TEMP 2
emp01.dbf
01:08:36 SYS@orcl> col username format A22
01:09:09 SYS@orcl> select username, common, con_id from cdb_users
01:09:17 2 where username ='SYSTEM';
USERNAME COM CON_ID
---------------------- --- ------
SYSTEM YES 1
SYSTEM YES 3
SYSTEM YES 2
01:09:22 SYS@orcl> select distinct username from cdb_users
01:09:37 2 where common ='YES';
USERNAME
----------------------
SPATIAL_WFS_ADMIN_USR
OUTLN
CTXSYS
SYSBACKUP
APEX_REST_PUBLIC_USER
ORACLE_OCM
APEX_PUBLIC_USER
MDDATA
GSMADMIN_INTERNAL
SYSDG
ORDDATA
USERNAME
----------------------
APEX_040200
DVF
MDSYS
GSMUSER
FLOWS_FILES
AUDSYS
DVSYS
OJVMSYS
APPQOSSYS
SI_INFORMTN_SCHEMA
ANONYMOUS
USERNAME
----------------------
LBACSYS
WMSYS
DIP
SYSKM
XS$NULL
OLAPSYS
SPATIAL_CSW_ADMIN_USR
APEX_LISTENER
SYSTEM
ORDPLUGINS
DBSNMP
USERNAME
----------------------
ORDSYS
XDB
GSMCATUSER
SYS
37 rows selected.
01:09:43 SYS@orcl> select distinct username, con_id from cdb_users
01:10:07 2 where common ='NO';
USERNAME CON_ID
---------------------- ------
HR 3
OE 3
ADMIN 3
PMUSER 3
OBE 3
01:10:26 SYS@orcl> select username, con_id from cdb_users
01:10:51 2 where common ='NO';
USERNAME CON_ID
---------------------- ------
PMUSER 3
HR 3
ADMIN 3
OE 3
OBE 3
01:10:59 SYS@orcl> col role format A30
01:11:34 SYS@orcl> select role, common, con_id from cdb_roles;
ROLE COM CON_ID
------------------------------ --- ------
CONNECT YES 1
RESOURCE YES 1
DBA YES 1
AUDIT_ADMIN YES 1
AUDIT_VIEWER YES 1
SELECT_CATALOG_ROLE YES 1
EXECUTE_CATALOG_ROLE YES 1
DELETE_CATALOG_ROLE YES 1
CAPTURE_ADMIN YES 1
EXP_FULL_DATABASE YES 1
IMP_FULL_DATABASE YES 1
... output snipped ...
ROLE COM CON_ID
------------------------------ --- ------
DV_PATCH_ADMIN YES 2
DV_STREAMS_ADMIN YES 2
DV_GOLDENGATE_ADMIN YES 2
DV_XSTREAM_ADMIN YES 2
DV_GOLDENGATE_REDO_ACCESS YES 2
DV_AUDIT_CLEANUP YES 2
DV_DATAPUMP_NETWORK_LINK YES 2
DV_REALM_RESOURCE YES 2
DV_REALM_OWNER YES 2
251 rows selected.
01:11:40 SYS@orcl> desc sys.system_privilege_map
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
PRIVILEGE NOT NULL NUMBER
NAME NOT NULL VARCHAR2(40)
PROPERTY NOT NULL NUMBER
01:12:22 SYS@orcl> desc sys.table_privilege_map
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
PRIVILEGE NOT NULL NUMBER
NAME NOT NULL VARCHAR2(40)
01:12:30 SYS@orcl> desc CDB_SYS_PRIVS
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
GRANTEE VARCHAR2(128)
PRIVILEGE VARCHAR2(40)
ADMIN_OPTION VARCHAR2(3)
COMMON VARCHAR2(3)
CON_ID NUMBER
01:13:07 SYS@orcl> desc CDB_TAB_PRIVS
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
GRANTEE VARCHAR2(128)
OWNER VARCHAR2(128)
TABLE_NAME VARCHAR2(128)
GRANTOR VARCHAR2(128)
PRIVILEGE VARCHAR2(40)
GRANTABLE VARCHAR2(3)
HIERARCHY VARCHAR2(3)
COMMON VARCHAR2(3)
TYPE VARCHAR2(24)
CON_ID NUMBER
01:13:16 SYS@orcl> col grantee format A10
01:14:02 SYS@orcl> col granted_role format A28
01:14:09 SYS@orcl> select grantee, granted_role, common, con_id
01:14:16 2 from cdb_role_privs
01:14:22 3 where grantee='SYSTEM';
GRANTEE GRANTED_ROLE COM CON_ID
---------- ---------------------------- --- ------
SYSTEM DBA YES 1
SYSTEM AQ_ADMINISTRATOR_ROLE YES 1
SYSTEM DBA YES 2
SYSTEM AQ_ADMINISTRATOR_ROLE YES 2
SYSTEM DBA YES 3
SYSTEM AQ_ADMINISTRATOR_ROLE YES 3
6 rows selected.
01:14:29 SYS@orcl>
}}}
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/release-notes/content/upgrading_parent.html
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/release-notes/content/known_issues.html
-- Oracle Mix - Oracle OpenWorld and Oracle Develop Suggest-a-Session
https://mix.oracle.com/oow10/faq
https://mix.oracle.com/oow10/streams
http://blogs.oracle.com/oracleopenworld/2010/06/missed_the_call_for_papers_dea.html
http://blogs.oracle.com/datawarehousing/2010/06/openworld_suggest-a-session_vo.html
http://structureddata.org/2010/07/13/oracle-openworld-2010-the-oracle-real-world-performance-group/
http://kevinclosson.wordpress.com/2010/08/26/whats-really-happening-at-openworld-2010/
BI
http://www.rittmanmead.com/2010/09/03/rittman-mead-at-oracle-openworld-2010-san-francisco/
OCW 2010 photos by Karl Arao
http://www.flickr.com/photos/kylehailey/sets/72157625025196338/
Oracle Closed World 2010
http://www.flickr.com/photos/kylehailey/sets/72157625018583630/
-- scheduler builder username is karlara0
https://oracleus.wingateweb.com/scheduler/login.jsp
Volunteer geek work at RACSIG 9-10am Wed, Oct5
http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/Events
my notes ... http://www.evernote.com/shard/s48/sh/6591ce43-e00f-4b5c-ad12-b1f1547183a7/2a146737c4bfb7dab7453ba0bcdb4677
''bloggers meetup''
http://blogs.portrix-systems.de/brost/good-morning-san-francisco-5k-partner-fun-run/
http://dbakevlar.com/2011/10/oracle-open-world-2011-followup/
https://connor-mcdonald.com/2019/09/24/all-the-openworld-2019-downloads/
Data Mining for Business Analytics https://learning.oreilly.com/library/view/data-mining-for/9781119549840/
https://www.dataminingbook.com/book/python-edition
https://github.com/gedeck/dmba
Agile Data Science 2.0 https://learning.oreilly.com/library/view/agile-data-science/9781491960103/
https://gumroad.com/d/910c45fe02199287cc2ff23abcfcf821
https://github.com/rjurney/Agile_Data_Code_2
Making use of Smart Scan made the run times faster and CPU utilization lower, so the machine can accommodate more databases.
http://www.evernote.com/shard/s48/sh/b1f43d49-1bcd-4319-b274-19a91cf338ac/f9f554d2d03b3f20db591d5e68392cbf
https://leetcode.com/problems/valid-anagram/
{{{
242. Valid Anagram
Easy
Given two strings s and t , write a function to determine if t is an anagram of s.
Example 1:
Input: s = "anagram", t = "nagaram"
Output: true
Example 2:
Input: s = "rat", t = "car"
Output: false
Note:
You may assume the string contains only lowercase alphabets.
Follow up:
What if the inputs contain unicode characters? How would you adapt your solution to such case?
Accepted
415,494
Submissions
769,852
}}}
{{{
# class Solution:
#     def isAnagram(self, s: str, t: str) -> bool:
class Solution:
    def isAnagram(self, text1, text2):
        # normalize, e.g. text1 = 'Dog ', text2 = 'God '
        text1 = text1.replace(' ', '').lower()
        text2 = text2.replace(' ', '').lower()
        # anagrams sort to the same character sequence
        if sorted(text1) == sorted(text2):
            return True
        else:
            return False
}}}
{{{
Glenn Fawcett
http://glennfawcett.files.wordpress.com/2013/06/ciops_data_x3-2.jpg
---
It wasn’t actually SLOB, but that might be interesting.
I used a mod of my blkhammer populate script to populate a bunch of tables OLTP style to
show how WriteBack is used. As expected, Exadata is real good on
“db file sequential read”… in the sub picosecond range if I am not mistaken :)
---
That was just a simple OLTP style insert test that spawns a bunch of PLSQL. Yes for sure
there were spills to disk... But the benefit was the coalescing of blocks. DBWR is flushing
mostly random blocks, but the write back flash is pretty huge these days. I was seeing average
iosize to disk being around 800k but only about 8k to flash.
}}}
Backup and Recovery Performance and Best Practices for Exadata Cell and Oracle Exadata Database Machine Oracle Database Release 11.2.0.2 and 11.2.0.3
http://www.oracle.com/technetwork/database/features/availability/maa-tech-wp-sundbm-backup-11202-183503.pdf
ODA (Oracle Database Appliance): HowTo Configure Multiple Public Network on GI (Grid Infrastructure) (Doc ID 1501039.1)
Data Guard: Redo Transport Services – How to use a separate network in a RAC environment. (Doc ID 1210153.1)
Data Guard Physical Standby 11.2 RAC Primary to RAC Standby using a second network (Doc ID 1349977.1)
https://blog.gruntwork.io/why-we-use-terraform-and-not-chef-puppet-ansible-saltstack-or-cloudformation-7989dad2865c
https://www.udemy.com/course/learn-devops-infrastructure-automation-with-terraform/learn/lecture/5890850#overview
https://www.udemy.com/course/building-oracle-cloud-infrastructure-using-terraform/
https://www.udemy.com/course/oracle-database-automation-using-ansible/
https://www.udemy.com/course/oracle-database-and-elk-stack-lets-do-data-visualization/
https://www.udemy.com/course/automate-file-processing-in-oracle-db-using-dbms-scheduler/
! short and sweet
https://www.linkedin.com/learning/learning-terraform-2/next-steps
https://www.udemy.com/course/learn-devops-infrastructure-automation-with-terraform/learn/lecture/5886134#overview
! OCI example stack (MuShop app)
https://oracle-quickstart.github.io/oci-cloudnative/introduction/
https://github.com/oracle-quickstart/oci-cloudnative
..
<<showtoc>>
! @@Create a new@@ PDB from the seed PDB
@@quickest way is via DBCA@@ (a SQL*Plus equivalent is sketched after the list below)
DBCA options:
* create a new PDB
* create new PDB from PDB Archive
* create PDB from PDB file set (RMAN backup and PDB XML metadata file)
<<<
1) Copies the data files from PDB$SEED data files
2) Creates tablespaces SYSTEM, SYSAUX
3) Creates a full catalog including metadata pointing to Oracle-supplied objects
4) Creates common users:
> – Superuser SYS
> – SYSTEM
5) Creates a local user (PDBA)
> granted local PDB_DBA role
6) Creates a new default service
7) After PDB creation make sure TNS entry is created
> CONNECT sys/oracle@pdb2 AS SYSDBA
> CONNECT oracle/oracle@pdb2
<<<
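For reference, the same creation can be done in SQL*Plus instead of DBCA; a minimal sketch (PDB name, admin password, and paths are assumptions):
{{{
CREATE PLUGGABLE DATABASE pdb2
  ADMIN USER pdbadmin IDENTIFIED BY oracle
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/ORCL/pdbseed/',
                       '/u01/app/oracle/oradata/ORCL/pdb2/');
ALTER PLUGGABLE DATABASE pdb2 OPEN;
}}}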
! @@Plug a non-CDB@@ in a CDB
options:
* TTS
* full export/import
* TDB (transportable database)
* DBMS_PDB package
* Clone a Remote Non-CDB (you can do it remotely)
* replication (Golden Gate)
<<<
using DBMS_PDB package below (running on the same server):
{{{
Cleanly shutdown the non-CDB and start it in read-only mode.
sqlplus / as sysdba
SHUTDOWN IMMEDIATE;
STARTUP OPEN READ ONLY;
Describe the non-CDB using the DBMS_PDB.DESCRIBE procedure
BEGIN
DBMS_PDB.DESCRIBE(
pdb_descr_file => '/tmp/db12c.xml');
END;
/
Shutdown the non-CDB database.
SHUTDOWN IMMEDIATE;
Connect to an existing CDB and create a new PDB using the file describing the non-CDB database
CREATE PLUGGABLE DATABASE pdb4 USING '/tmp/db12c.xml'
COPY;
ALTER SESSION SET CONTAINER=pdb4;
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
ALTER SESSION SET CONTAINER=pdb4;
ALTER PLUGGABLE DATABASE OPEN;
08:24:03 SYS@cdb21> ALTER SESSION SET CONTAINER=pdb4;
Session altered.
08:24:23 SYS@cdb21>
08:24:24 SYS@cdb21>
08:24:24 SYS@cdb21> select name from v$datafile;
NAME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+DATA/CDB2/DATAFILE/undotbs1.340.868397457
+DATA/CDB2/0C1D0762158FE2B6E053AA08A8C0D1F5/DATAFILE/system.391.868520259
+DATA/CDB2/0C1D0762158FE2B6E053AA08A8C0D1F5/DATAFILE/sysaux.390.868520259
+DATA/CDB2/0C1D0762158FE2B6E053AA08A8C0D1F5/DATAFILE/users.389.868520259
08:24:29 SYS@cdb21> conn / as sysdba
Connected.
08:24:41 SYS@cdb21> select name from v$datafile
08:24:45 2 ;
NAME
-----------------------------------------------------------------------------------------------------------------------------------------------------------
+DATA/CDB2/DATAFILE/system.338.868397411
+DATA/CDB2/DATAFILE/sysaux.337.868397377
+DATA/CDB2/DATAFILE/undotbs1.340.868397457
+DATA/CDB2/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/system.346.868397513
+DATA/CDB2/DATAFILE/users.339.868397457
+DATA/CDB2/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/sysaux.345.868397513
+DATA/CDB2/DATAFILE/undotbs2.348.868397775
+DATA/CDB2/DATAFILE/undotbs3.349.868397775
+DATA/CDB2/DATAFILE/undotbs4.350.868397775
+DATA/CDB2/0C1D0762158FE2B6E053AA08A8C0D1F5/DATAFILE/system.391.868520259
+DATA/CDB2/0C1D0762158FE2B6E053AA08A8C0D1F5/DATAFILE/sysaux.390.868520259
+DATA/CDB2/0C1D0762158FE2B6E053AA08A8C0D1F5/DATAFILE/users.389.868520259
12 rows selected.
08:24:46 SYS@cdb21> select FILE_NAME, TABLESPACE_NAME, FILE_ID, con_id from cdb_data_files order by con_id ;
FILE_NAME
-----------------------------------------------------------------------------------------------------------------------------------------------------------
TABLESPACE_NAME FILE_ID CON_ID
------------------------------ ---------- ----------
+DATA/CDB2/DATAFILE/system.338.868397411
SYSTEM 1 1
+DATA/CDB2/DATAFILE/sysaux.337.868397377
SYSAUX 3 1
+DATA/CDB2/DATAFILE/undotbs1.340.868397457
UNDOTBS1 4 1
+DATA/CDB2/DATAFILE/undotbs4.350.868397775
UNDOTBS4 10 1
+DATA/CDB2/DATAFILE/undotbs2.348.868397775
UNDOTBS2 8 1
+DATA/CDB2/DATAFILE/undotbs3.349.868397775
UNDOTBS3 9 1
+DATA/CDB2/DATAFILE/users.339.868397457
USERS 6 1
7 rows selected.
}}}
<<<
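Before plugging the XML in, compatibility can be checked on the target CDB first; a sketch using the same /tmp/db12c.xml from above:
{{{
SET SERVEROUTPUT ON
DECLARE
  compatible BOOLEAN;
BEGIN
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                  pdb_descr_file => '/tmp/db12c.xml',
                  pdb_name       => 'PDB4');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'YES'
                       ELSE 'NO - check PDB_PLUG_IN_VIOLATIONS' END);
END;
/
}}}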
! @@Clone a PDB@@ from another PDB
@@through SQL*Plus@@
<<<
{{{
This technique copies a source PDB from a CDB and plugs the copy in to a CDB. The source
PDB is in the local CDB.
The steps to clone a PDB within the same CDB are the following:
1. In init.ora, set DB_CREATE_FILE_DEST= 'PDB3dir' (OMF) or
PDB_FILE_NAME_CONVERT= 'PDB1dir', 'PDB3dir' (non OMF).
2. Connect to the root of the CDB as a common user with CREATE PLUGGABLE DATABASE
privilege.
3. Quiesce the source PDB used to clone, using the command ALTER PLUGGABLE
DATABASE pdb1 READ ONLY after closing the PDB using the command ALTER
PLUGGABLE DATABASE CLOSE
4. Use the command CREATE PLUGGABLE DATABASE to clone the PDB pdb3 FROM pdb1.
5. Then open the new pdb3 with the command ALTER PLUGGABLE DATABASE OPEN.
If you do not use OMF, in step 4, use the command CREATE PLUGGABLE DATABASE with the
clause FILE_NAME_CONVERT=(’pdb1dir’,’ pdb3dir’) to define the directory of the
source files to copy from PDB1 and the target directory for the new files of PDB3.
quick step by step
alter session set container=cdb$root;
show con_name
set db_create_file_dest
15:09:04 SYS@orcl> ALTER PLUGGABLE DATABASE pdb2 close;
Pluggable database altered.
15:09:30 SYS@orcl> ALTER PLUGGABLE DATABASE pdb2 open read only;
Pluggable database altered.
15:09:40 SYS@orcl> CREATE PLUGGABLE DATABASE PDB3 FROM PDB2;
Pluggable database created.
15:12:25 SYS@orcl> ALTER PLUGGABLE DATABASE pdb3 open;
Pluggable database altered.
15:12:58 SYS@orcl> select CON_ID, dbid, NAME, OPEN_MODE from v$pdbs;
CON_ID NAME                 OPEN_MODE
------ -------------------- ----------
     2 PDB$SEED             READ ONLY
     3 PDB1                 READ WRITE
     4 PDB2                 READ ONLY
     5 PDB3                 READ WRITE
select FILE_NAME, TABLESPACE_NAME, FILE_ID, con_id from cdb_data_files order by con_id ;
15:13:33 SYS@orcl> show parameter db_create
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest string /u01/app/oracle/oradata
db_create_online_log_dest_1 string
db_create_online_log_dest_2 string
db_create_online_log_dest_3 string
db_create_online_log_dest_4 string
db_create_online_log_dest_5 string
15:15:14 SYS@orcl>
15:15:17 SYS@orcl>
15:15:17 SYS@orcl> show parameter file_name
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert string
log_file_name_convert string
pdb_file_name_convert string
ALTER PLUGGABLE DATABASE pdb2 close;
ALTER PLUGGABLE DATABASE pdb2 open;
CONNECT sys/oracle@pdb3 AS SYSDBA
CONNECT oracle/oracle@pdb3
}}}
<<<
! @@Plug an unplugged PDB@@ into another CDB
@@quickest is DBCA, just do the unplug and plug from the UI@@ (a SQL*Plus equivalent is sketched below)
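For reference, the SQL*Plus equivalent of the DBCA unplug/plug; a minimal sketch (PDB name and XML path are assumptions):
{{{
-- on the source CDB
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;

-- on the target CDB
CREATE PLUGGABLE DATABASE pdb1 USING '/tmp/pdb1.xml' COPY;
ALTER PLUGGABLE DATABASE pdb1 OPEN;
}}}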
! References:
http://oracle-base.com/articles/12c/multitenant-create-and-configure-pluggable-database-12cr1.php
http://kevinclosson.wordpress.com/2012/02/12/how-many-non-exadata-rac-licenses-do-you-need-to-match-exadata-performance/
{{{
kevinclosson
February 14, 2012 at 8:52 pm
Actually, Matt, I see nothing wrong with what the rep said. A single Exadata database grid host can drive a tremendous amount of storage throughput but it can only eat 3.2GB/s since there is but a single 40Gb HCA port active on each host. A single host can drive the storage grid nearly to saturation via Smart Scan... but as soon as the data flowing back to the host approaches 3.2GB/s the Smart Scan will start to throttle. In fact a single session (non-Parallel Query) can drive Smart Scan to well over 10GB/s in a full rack but, in that case, you'd have a single foreground process on a single core of WSM-EP so there wouldn't be sufficient bandwidth to ingest much data... about 250MB/s can flow into a single session performing a Smart Scan. So the hypothetical there would be: Smart Scan is churning through, let's say, 10GB/s and whittling down the payload by about 9.75GB/s through filtration and projection. Those are very close to realistic numbers I've just cited but I haven't measured those sorts of "atomics" in a year so I'm going by memory. Let's say give or take 5% on my numbers.
}}}
http://forums.theregister.co.uk/forum/1/2011/12/12/ibm_vs_oracle_data_centre_optimisation/
{{{
Exadata: 2 Grids, 2 sets of roles.
>The Exadata storage nodes compress database files using a hybrid columnar algorithm so they take up less space and can be searched more quickly. They also run a chunk of the Oracle 11g code, pre-processing SQL queries on this compressed data before passing it off to the full-on 11g database nodes.
Exadata cells do not compress data. Data compression is done at load time (in the direct path) and compression (all varieties not just HCC) is code executed only on the RAC grid CPUS. Exadata users get no CPU help from the 168 cores in the storage grid when it comes to compressing data.
Exadata cells can, however, decompress HCC data (but not the other types of compressed data). I wrote "can" because cells monitor how busy they are and are constantly notified by the RAC servers about their respective CPU utilization. Since decompressing HCC data is murderously CPU-intensive the cells easily go processor-bound. At that time cells switch to "pass-through" mode shipping up to 40% of the HCC blocks to the RAC grid in compressed form. Unfortunately there are more CPUs in the storage grid than the RAC grid. There is a lot of writing on this matter on my blog and in the Expert Oracle Exadata book (Apress).
Also, while there are indeed 40GB DDR Infiniband paths to/from the RAC grid and the storage grid, there is only 3.2GB/s usable bandwidth for application payload between these grids. Therefore, the aggregate maximum data flow between the RAC grid and the cells is 25.6GB/s (3.2x8). There are 8 IB HCAs in either X2 model as well so the figure sticks for both. In the HP Oracle Database Machine days that figure was 12.8GB/s.
With a maximum of 25.6 GB/s for application payload (Oracle's iDB protocol as it is called) one has to quickly do the math to see the mandatory data reduction rate in storage. That is, if only 25.6 GB/s fits through the network between these two grids yet a full rack can scan combined HDD+FLASH at 75 GB/s then you have to write SQL that throws away at least 66% of the data that comes off disk. Now, I'll be the first to point out that 66% payload reduction from cells is common. Indeed, the cells filter (WHERE predicate) and project columns (only the cited and join columns need shipped). However, compression changes all of that.
If scanning HCC data on a full rack Exadata configuration, and that data is compressed at the commonly cited compression ratio of 10:1 then the "effective" scan rate is 750GB/s. Now use the same predicates and cite the same columns and you'll get 66% reduced payload--or 255GB/s that needs to flow over iDB. That's about 10x over-subscription of the available 25.6 GB/s iDB bandwidth. When this occurs, I/O is throttled. That is, if the filtered/projected data produced by the cells is greater than 25.6GB/s then I/O wanes. Don't expect 10x query speedup because the product only has to perform 10% the I/O it would in the non-compressed case (given a HCC compression ratio of 10:1).
That is how the product works. So long as your service levels are met, fine. Just don't expect to see 75GB/s of HCC storage throughput with complex queries because this asymmetrical MPP architecture (Exadata) cannot scale that way (for more info see: http://bit.ly/tFauDA )
}}}
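To sanity-check the arithmetic in the quote above (75 GB/s combined scan rate, 10:1 HCC, 66% payload reduction, 25.6 GB/s aggregate iDB bandwidth), a quick back-of-the-envelope in SQL:
{{{
-- back-of-the-envelope check of the quoted figures
SELECT 75 * 10                               AS effective_scan_gbs,   -- "effective" scan rate at 10:1 HCC
       ROUND(75 * 10 * (1 - 0.66))           AS idb_payload_gbs,      -- ~255 GB/s must flow over iDB after 66% reduction
       ROUND(75 * 10 * (1 - 0.66) / 25.6, 1) AS oversubscription_x    -- ~10x over the 25.6 GB/s iDB limit
FROM dual;
}}}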
http://kevinclosson.wordpress.com/2011/11/23/mark-hurd-knows-cios-i-know-trivia-cios-may-not-care-about-either-hang-on-im-booting-my-cell-phone/#comment-37527
{{{
kevinclosson
November 28, 2011 at 7:09 pm
“I can see the shared nothing vs shared everything point in a CPU + separate storage perspective.”
…actually, I don’t fester about with the shared-disk versus shared nothing as I really don’t think it matters. It’s true that Real Application Clusters requires shared disk but that is not a scalability hindrance–so long as one works out the storage bandwidth requirements–a task that is not all that difficult with modern storage networking options. So long as ample I/O flow is plumbed into RAC it scales DW/BI workloads. It is as simple as that. On the other hand, what doesn’t scale is asymmetry. Asymmetry has never scaled as would be obvious to even the casual observer. As long as all code can run on all CPUs (symmetry) scalability is within reach. What I’m saying is that RAC actually has better scalability characteristics when running with conventional storage than with Exadata! That’s a preposterous statement to the folks who don’t actually know the technology, as well as those who are dishonest about the technology, but obvious to the rest of us. It’s simple computer science. One cannot take the code path of query processing, chop it off at the knees (filtration/projection) and offload that to some arbitrary percentage of your CPU assets and pigeon-hole all the rest of the code to the remaining CPUs and cross fingers.
A query cannot be equally CPU-intensive in all query code all the time. There is natural ebb and tide. If the query plan is at the point of intensive join processing it is not beneficial to have over fifty percent of the CPUs in the rack unable to process join code (as is the case with Exadata).
To address this sort of ebb/tide imbalance Oracle has “released” a “feature” referred to as “passthrough” where Exadata cells stop doing their value-add (filtration and HCC decompression) for up to about 40% of the data flowing off storage when cells get too busy (CPU-wise). At that point they just send unfiltered, compressed data to the RAC grid. The RAC grid, unfortunately, has less CPU cores than the storage grid and has brutally CPU-intensive work of its own to do (table join, sort, agg). “Passthrough” is discussed in the Expert Oracle Exadata (Apress) book.
This passthrough feature does allow water to find its level, as it were. When Exadata falls back to passthrough mode the whole configuration does indeed utilize all CPU and since idle CPU doesn’t do well to increase query processing performance this is a good thing. However, if Exadata cells stop doing the “Secret Sauce” (a.k.a., Offload Processing) when they get busy then why not just build a really large database grid (e.g., with the CPU count of all servers in an Exadata rack) and feed it with conventional storage? That way all CPU power is “in the right place” all the time. Well, the answer to that is clearly RAC licensing. Very few folks can afford to license enough cores to run a large enough RAC grid to make any of this matter. Instead they divert some monies that could go for a bigger database grid into “intelligent storage” and hope for the best.
}}}
http://www.snia.org/sites/default/education/tutorials/2008/fall/networking/DrorGoldenberg-Fabric_Consolidation_InfiniBand.pdf
3.2 GB/s unidirectional
(3.2 GB/s is the measured figure, below the theoretical limit, due to server IO limitations)
http://www.it-einkauf.de/images/PDF/677C777.pdf
{{{
INFINIBAND PHYSICAL-LAYER CHARACTERISTICS
The InfiniBand physical-layer specification supports three data rates, designated 1X, 4X, and 12X, over both copper and fiber optic media.
The base data rate, 1X single data rate (SDR), is clocked at 2.5 Gbps and is transmitted over two pairs of wires—transmit and receive—and
yields an effective data rate of 2 Gbps full duplex (2 Gbps transmit, 2 Gbps receive). The 25 percent difference between data rate and
clock rate is due to 8B/10B line encoding that dictates that for every 8 bits of data transmitted, an additional 2 bits of transmission
overhead is incurred.
}}}
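The encoding arithmetic above is easy to verify: 8 of every 10 bits on the wire carry data, so the 2.5 Gbps signalling rate yields 2 Gbps effective (the clock runs 25 percent faster than the data rate):
{{{
-- 8B/10B line encoding: 8 data bits per 10 transmitted bits => 80% efficiency
SELECT 2.5 * 8/10 AS effective_gbps FROM dual;   -- 1X SDR: 2.5 Gbps signalling -> 2.0 Gbps data
}}}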
infiniband cabling issues
{{{
InfiniBand cable presents a challenge within this environment because the cables are considerably thicker, heavier, and shorter in length
to mitigate the effects of cross-talk and signal attenuation and achieve low bit error rates (BERs). To assure the operational integrity and
performance of the HPC cluster, it is critically important to maintain the correct bend radius, or the integrity of the cable can be
compromised such that the effects of cross-talk introduce unacceptable BERs.
To address these issues, it is essential to thoroughly plan the InfiniBand implementation and provide a good cable management solution
that enables easy expansion and replacement of failed cables and hardware. This is especially important when InfiniBand 12X or DDR
technologies are being deployed because the high transmission rates are less tolerant to poor installation practices.
}}}
http://www.redbooks.ibm.com/abstracts/tips0456.html
A single PCI Express serial link is a dual-simplex connection using two pairs of wires, one pair for transmit and one pair for receive, and can only transmit one bit per cycle. Although this sounds limiting, it can transmit at the extremely high speed of 2.5 Gbps, which equates to a burst mode of 320 MBps on a single connection. These two pairs of wires are called a lane.
{{{
Table: PCI Express maximum transfer rate
Lane width Clock speed Throughput (duplex, bits) Throughput (duplex, bytes) Initial expected uses
x1 2.5 GHz 5 Gbps 400 MBps Slots, Gigabit Ethernet
x2 2.5 GHz 10 Gbps 800 MBps
x4 2.5 GHz 20 Gbps 1.6 GBps Slots, 10 Gigabit Ethernet, SCSI, SAS
x8 2.5 GHz 40 Gbps 3.2 GBps
x16 2.5 GHz 80 Gbps 6.4 GBps Graphics adapters
}}}
http://www.aiotestking.com/juniper/2011/07/when-using-a-40-gbps-switch-fabric-how-much-full-duplex-bandwidth-is-available-to-each-slot/
{{{
When using a 40 Gbps switch fabric, how much full duplex bandwidth is available to each slot?
A.
1.25 Gbps
}}}
Sun Blade 6048 InfiniBand QDR Switched Network Express Module Introduction
http://docs.oracle.com/cd/E19914-01/820-6705-10/chapter1.html
{{{
IB transfer rate (maximum)
40 Gbps (QDR) per 4x IB port for the Sun Blade X6275 server module and 20 Gbps (DDR) per 4x IB port for the Sun Blade X6270 server module. There are two 4x IB ports per server module.
1,536 Gbps aggregate throughput
}}}
''email with Kevin''
<<<
on Exadata the 3.2 is established by the PCI slot the HCA is sitting in. I don't scrutinize QDR IB these days. It would be duplex...would have to look it up.
<<<
wikipedia
<<<
http://en.wikipedia.org/wiki/InfiniBand where it mentioned about "The SDR connection's signalling rate is 2.5 gigabit per second (Gbit/s) in each direction per connection"
<<<
''The flash and HCA cards use PCIe x8''
http://jarneil.wordpress.com/2012/02/02/upgradingdowngrading-exadata-ilom-firmware/
http://husnusensoy.files.wordpress.com/2010/10/oracle-exadata-v2-fast-track.pptx
{{{
IB Switches
3 x 36-port managed switches as opposed to Exadata v1 (2+1).
2 “leaf”
1 “spine” switches
Spine switch is only available for Full Rack because it is for connecting multiple full racks side by side.
A subnet manager running on one switch discovers the topology of the network.
HCA
Each node (RAC & Storage Cell) has a PCIe x8 40 Gbit HCA with two ports
Active-Standby Intracard Bonding.
}}}
F20 PCIe Card
{{{
Not a SATA/SAS SSD drive but an x8 PCIe device providing a SATA/SAS interface.
4 Solid State Flash Disk Modules (FMod) each of 24 GB size
256 MB Cache
SuperCap Power Reserve (EnergyStorageModule) provides write-back operation mode.
ESM should be enabled for optimal write performance
Should be replaced every two years.
Can be monitored using various tools like ILOM
Embedded SAS/SATA configuration will expose 16 (4 cards x 4 FMod) Linux devices.
/dev/sdn
4K sector boundary for Fmods
Each FMod consists of several NAND modules; best performance is reached with multithreading (32+ threads/FMod)
}}}
{{{
How To Avoid ORA-04030/ORA-12500 In 32 Bit Windows Environment
Doc ID: Note:373602.1
How to convert a 32-bit database to 64-bit database on Linux?
Doc ID: Note:341880.1
-- PAE/AWE
Some relief may be obtained by setting the /3GB flag as well as the /PAE flag in Oracle. This at least assures that up to 2 GB of memory is available for the Large Pool,
the Shared Pool, the PGA, and all user threads, after the AWE_WINDOW_SIZE parameter is taken into account. However, Microsoft recommends that the /3GB flag not be set if
the /AWE flag is set. This is due to the fact that the total amount of RAM accessible for ALL purposes is limited to 16 GB if the /3GB flag is set. RAM above 16 GB simply
"disappears" from the view of the OS. For PowerEdge 6850 servers that can support up to 64 GB of RAM, a limitation to only 16 GB of RAM is unacceptable.
As noted previously, the model used for extended memory access under a 32-bit Operating System entails a substantial performance penalty. However, with a 64-bit OS, a flat linear memory model is used, with no need for PAE to access memory above 4 GB. Improved performance will be experienced for database SGA sizes greater than 3 GB, due to elimination of the PAE overhead.
MAXIMUM OF 4 GB OF ADDRESSABLE MEMORY FOR THE 32 BIT ARCHITECTURE. THIS IS A MAXIMUM PER PROCESS. THAT IS, EACH PROCESS MAY ALLOCATE UP TO 4 GB OF MEMORY
2GB for OS
2GB for USER THREADS
1st workaround on 4GB limit:
- To expand the total memory used by Oracle above 2 GB, the /3GB flag may be set in the boot.ini file.
With the /3GB flag set, only 1 GB is used for the OS, and 3 GB is available for all user threads, including the Oracle SGA.
2nd workaround on 4GB limit:
- use PAE; Intel 32-bit processors such as the Xeon support PHYSICAL ADDRESS EXTENSION (PAE) for large memory support
MS Windows 2000 and 2003 support PAE through ADDRESS WINDOWING EXTENSIONS (AWE). PAE/AWE may be enabled by setting the /PAE flag in the boot.ini file.
The "USE_INDIRECT_BUFFERS=TRUE" parameter must also be set in the Oracle initialization file. In addition, the DB_BLOCK_BUFFERS parameter must be used
instead of the DB_CACHE_SIZE parameter in the Oracle initialization file. With this method, Windows 2000 Server and Windows Server 2003 versions can support
up to 8 GB of total memory.
Windows Advanced Server and Data Center versions support up to 64 GB of addressable memory with PAE/AWE.
- One limitation of AWE is that only the Data Buffer component of the SGA may be placed in extended memory. Threads for other
SGA components such as the Shared Pool and the Large Pool, as well as the PGA and all Oracle user sessions must still fit inside
a relatively small memory area. THERE IS AN AWE_WINDOW_SIZE REGISTRY KEY PARAMETER THAT IS USED TO SET THE SIZE OF A KIND OF "SWAP" AREA IN THE SGA. <-- swap area in SGA
This "swap" area is used for mapping data blocks in upper memory to a lower memory location. By default,
this takes an additional 1 GB of low memory. This leaves only 2 GB of memory for everything other than the Buffer cache, assuming
the /3GB flag is set. If the /3GB flag is not set, only 1 GB of memory is available for the non-Buffer Cache components.
- Note that the maximum addressable memory was limited to 16 GB of RAM
This will give you (/3GB is set):
3-4GB for Buffer Cache
1GB for the swap area
2GB for everything other than the Buffer Cache
1GB for OS
This will give you (/3GB is not set):
3-4GB for Buffer Cache
1GB for the swap area
1GB for everything other than the Buffer Cache
2GB for OS
- Performance Tuning Corporation Benchmark:
This will give you (/3GB is set):
11GB for Buffer Cache
.75GB for the swap area (AWE_MEMORY_WINDOW..minimum size that allowed the database to start)
2.25GB for everything other than the Buffer Cache
1GB for OS
This will give you (/3GB is not set):
11GB for Buffer Cache
.75GB for the swap area (AWE_MEMORY_WINDOW..minimum size that allowed the database to start)
1.25GB for everything other than the Buffer Cache
2GB for OS
}}}
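To make the flags above concrete, a hypothetical boot.ini entry and init.ora fragment (paths and values are made up; AWE requires DB_BLOCK_BUFFERS rather than DB_CACHE_SIZE):
{{{
; boot.ini (32-bit Windows) -- enable PAE plus the 3GB user address space
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /PAE /3GB

# init.ora -- enable the AWE-backed buffer cache
USE_INDIRECT_BUFFERS=TRUE
DB_BLOCK_BUFFERS=1048576   # buffer cache sized in blocks, not via DB_CACHE_SIZE
}}}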
Using Large Pages for Oracle on Windows 64-bit (ORA_LPENABLE) http://blog.ronnyegner-consulting.de/2010/10/19/using-large-pages-for-oracle-on-windows-64-bit-ora_lpenable/
http://www.sketchup.com/download
<<<
3D XPoint (cross point) memory, which will be sold under the name Optane
<<<
https://www.intel.com/content/www/us/en/architecture-and-technology/intel-optane-technology.html
https://www.intel.com/content/www/us/en/architecture-and-technology/optane-memory.html
https://www.computerworld.com/article/3154051/data-storage/intel-unveils-its-optane-hyperfast-memory.html
https://www.computerworld.com/article/3082658/data-storage/intel-lets-slip-roadmap-for-optane-ssds-with-1000x-performance.html
http://docs.oracle.com/cd/E11857_01/em.111/e16790/ha_strategy.htm#EMADM9613
http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/ <-- good stuff
http://ubuntuforums.org/showthread.php?t=1854524
http://ubuntuforums.org/showthread.php?t=1685666
http://ubuntuforums.org/showthread.php?t=1768635
also see [[Get BlockSize of OS]]
''Oracle related links''
1.8.1.4 Support 4 KB Sector Disk Drives http://docs.oracle.com/cd/E11882_01/server.112/e22487/chapter1.htm#FEATURENO08747
Planning the Block Size of Redo Log Files http://docs.oracle.com/cd/E11882_01/server.112/e25494/onlineredo002.htm#ADMIN12891
Specifying the Sector Size for Drives http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm#OSTMG10203
Microsoft support policy for 4K sector hard drives in Windows http://support.microsoft.com/kb/2510009
ATA 4 KiB sector issues https://ata.wiki.kernel.org/index.php/ATA_4_KiB_sector_issues
http://en.wikipedia.org/wiki/Advanced_format
http://martincarstenbach.wordpress.com/2013/04/29/4k-sector-size-and-grid-infrastructure-11-2-installation-gotcha/
http://flashdba.com/4k-sector-size/
http://flashdba.com/install-cookbooks/installing-oracle-database-11-2-0-3-single-instance-using-4k-sector-size/
http://flashdba.com/2013/04/12/strange-asm-behaviour-with-4k-devices/
http://flashdba.com/2013/05/08/the-most-important-thing-you-need-to-know-about-flash/
http://www.storagenewsletter.com/news/disk/217-companies-hdd-since-1956
http://www.theregister.co.uk/2013/02/04/ihs_hdd_projections/
Alert: (Fix Is Ready + Additional Steps!) : After SAN Firmware Upgrade, ASM Diskgroups ( Using ASMLIB) Cannot Be Mounted Due To ORA-15085: ASM disk "" has inconsistent sector size. [1500460.1]
Design Tradeoffs for SSD Performance http://research.cs.wisc.edu/adsl/Publications/ssd-usenix08.pdf
Enabling Enterprise Solid State Disks Performance http://repository.cmu.edu/cgi/viewcontent.cgi?article=1732&context=compsci
https://flashdba.com/?s=4kb
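Pulling the 4K-sector links above together, a minimal sketch (diskgroup name and disk path are hypothetical; assumes 11.2+ and 4K-capable devices):
{{{
-- ASM diskgroup created with a 4K sector size
CREATE DISKGROUP dg_4k EXTERNAL REDUNDANCY
  DISK '/dev/mapper/flash1'
  ATTRIBUTE 'sector_size'      = '4096',
            'compatible.asm'   = '11.2',
            'compatible.rdbms' = '11.2';

-- redo log group with a 4K block size (11.2+)
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 1G BLOCKSIZE 4096;
}}}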
http://karlarao.wordpress.com/2009/12/31/50-sql-performance-optimization-scenarios/
{{{
ORACLE SQL Performance Optimization Series (1)
1. The types of ORACLE optimizer
2. Table access methods
3. Shared SQL statements
ORACLE SQL Performance Optimization Series (2)
4. Choose the most efficient table name order (effective only in the rule-based optimizer)
5. Join order in the WHERE clause
6. Avoid using '*' in the SELECT clause
7. Reduce the number of database accesses
ORACLE SQL Performance Optimization Series (3)
8. Use the DECODE function to reduce processing time
9. Consolidate simple, unrelated database accesses
10. Remove duplicate records
11. Replace DELETE with TRUNCATE
12. Use COMMIT as much as possible
ORACLE SQL Performance Optimization Series (4)
13. Counting records
14. Replace HAVING clauses with WHERE clauses
15. Reduce queries against the table
16. Improve SQL efficiency through internal functions
ORACLE SQL Performance Optimization Series (5)
17. Use table aliases (Alias)
18. Replace IN with EXISTS
19. Replace NOT IN with NOT EXISTS
ORACLE SQL Performance Optimization Series (6)
20. Replace EXISTS with table joins
21. Replace DISTINCT with EXISTS
22. Recognize inefficient SQL statements
23. Use the TKPROF tool to profile SQL performance
ORACLE SQL Performance Optimization Series (7)
24. Analyze SQL statements with EXPLAIN PLAN
ORACLE SQL Performance Optimization Series (8)
25. Improve efficiency with indexes
26. Index operations
ORACLE SQL Performance Optimization Series (9)
27. Choosing the driving table
28. Number of equality indexes
29. Equality comparison vs range comparison
30. Ambiguity in index levels
ORACLE SQL Performance Optimization Series (10)
31. Forcing index invalidation
32. Avoid calculations on indexed columns
33. Automatic index selection
34. Avoid using NOT on indexed columns
35. Replace ">" with ">="
ORACLE SQL Performance Optimization Series (11)
36. Replace OR with UNION (for indexed columns)
37. Replace OR with IN
38. Avoid IS NULL and IS NOT NULL on indexed columns
ORACLE SQL Performance Optimization Series (12)
39. Always use the first column of an index
40. ORACLE internal operations
41. Replace UNION with UNION ALL (if possible)
42. Using hints (Hints)
ORACLE SQL Performance Optimization Series (13)
43. Replace ORDER BY with WHERE
44. Avoid changing the type of indexed columns
45. WHERE clauses that need care
ORACLE SQL Performance Optimization Series (14)
46. Multiple scans in joins
47. Make the CBO use the more selective index
48. Avoid resource-intensive operations
49. GROUP BY optimization
50. Use of dates
51. Use explicit cursors (CURSORs)
52. Optimize EXPORT and IMPORT
53. Separate tables and indexes
ORACLE SQL Performance Optimization Series (15)
Is EXISTS / NOT EXISTS always more efficient than IN / NOT IN?
ORACLE SQL Performance Optimization Series (16)
Why are the query results of my view wrong?
ORACLE SQL Performance Optimization Series (17)
Which way of writing pagination SQL is efficient?
ORACLE SQL Performance Optimization Series (18)
Is COUNT(rowid) / COUNT(pk) more efficient?
ORACLE SQL Performance Optimization Series (19)
ORACLE data type implicit conversions
ORACLE SQL Performance Optimization Series (20)
Three issues to pay attention to when using an INDEX
ORACLE Hints (HINT) usage (Part 1) (21)
ORACLE Hints (HINT) usage (Part 2) (22)
Analysis of function-based indexes (Part 1) (23)
Analysis of function-based indexes (Part 2) (24)
How to achieve efficient paging queries (25)
Methods to implement SELECT TOP N in ORACLE (26)
}}}
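As a concrete illustration of tip 18 above, the classic rewrite (a sketch using the demo EMP/DEPT tables):
{{{
-- tip 18: replace IN with a correlated EXISTS
-- before
SELECT e.* FROM emp e
WHERE  e.deptno IN (SELECT d.deptno FROM dept d WHERE d.loc = 'CHICAGO');
-- after
SELECT e.* FROM emp e
WHERE  EXISTS (SELECT 1 FROM dept d
               WHERE  d.deptno = e.deptno AND d.loc = 'CHICAGO');
}}}
Note that a modern CBO usually transforms both forms to the same plan; much of this list is rule-based-optimizer era advice.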
http://highscalability.com/blog/2013/11/25/how-to-make-an-infinitely-scalable-relational-database-manag.html
http://dimitrik.free.fr/blog/archives/2013/11/mysql-performance-over-1m-qps-with-innodb-memcached-plugin-in-mysql-57.html
Average Active Sessions (AAS) is a metric of database load. This value should not go above the CPU count; if it does, it means the database is working very hard or waiting a lot for something.
''The AAS & CPU count is used as a yardstick for a possible performance problem (I suggest reading Kyle's stuff about this):''
{{{
if AAS < 1
-- Database is not blocked
AAS ~= 0
-- Database basically idle
-- Problems are in the APP not DB
AAS < # of CPUs
-- CPU available
-- Database is probably not blocked
-- Are any single sessions 100% active?
AAS > # of CPUs
-- Could have performance problems
AAS >> # of CPUS
-- There is a bottleneck
}}}
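A minimal sketch of that yardstick in SQL (the 1-minute ASH window is arbitrary):
{{{
-- current AAS (last minute of ASH samples) vs CPU count
SELECT (SELECT ROUND(COUNT(*)/60, 1)
          FROM v$active_session_history
         WHERE sample_time > SYSTIMESTAMP - INTERVAL '1' MINUTE) AS aas_1min,
       (SELECT TO_NUMBER(value)
          FROM v$parameter
         WHERE name = 'cpu_count')                               AS cpu_count
  FROM dual;
}}}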
''AAS Formula''
{{{
* AAS is either DB time/elapsed time
* or ASH sample count/number of samples (one sample per second)
* in the case of DBA_HIST_ the count is count*10 since they only write out 1 in 10 samples, e.g. (19751*10)/600 = 329.18
}}}
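Both formulas expressed as SQL (a sketch, assuming a 600-second snap interval and hypothetical snap ids 495/496):
{{{
-- AAS = delta(DB time) / elapsed time, between two AWR snapshots
SELECT (e.value - b.value) / 1e6 / 600 AS aas   -- DB time is stored in microseconds
  FROM dba_hist_sys_time_model b, dba_hist_sys_time_model e
 WHERE b.snap_id = 495 AND e.snap_id = 496
   AND b.dbid = e.dbid AND b.instance_number = e.instance_number
   AND b.stat_name = 'DB time' AND e.stat_name = 'DB time';

-- AAS = sample count / samples, from the persisted ASH (1 in 10 samples kept, hence the *10)
SELECT COUNT(*) * 10 / 600 AS aas
  FROM dba_hist_active_sess_history
 WHERE snap_id = 496;
}}}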
<<showtoc>>
This Tiddler will show you a new interesting metric included in the performance graph of Enterprise Manager 11g.. which is the ''CPU Wait'' or ''CPU + CPU Wait''
a little background..
I did an IO test with the intention of bringing the system down to its knees and characterizing the IO performance at that level of stress. At the time I wanted to know the IO performance of my R&D server http://www.facebook.com/photo.php?pid=5272015&l=d5f2be4166&id=552113028 (on which I intend to run lots of VMs), which has 8GB memory, an IntelCore2Quad Q9500, and 5 x 1TB short stroked disks (on the outer 100GB area). From those disks I built an LVM stripe that produced about 900+ IOPS & 300+ MB/s on my ''Orion'' and ''dbms_resource_manager.calibrate_io'' runs, and I validated those numbers against the database I created by actually running ''256 parallel sessions'' doing SELECT * on a 300GB table http://goo.gl/PYYyH (the same disks are used but as ASM disks on the next 100GB area - short stroked).
I'll start off by showing you how AAS is computed.. then detail how it is graphed, and show you the behavior of AAS on IO bound and CPU bound workloads..
The tools I used for graphing the AAS:
* Enterprise Manager 11g
** both the real time and historical graphs
* ASH Viewer by Alexander Kardapolov http://j.mp/dNidrB
** this tool samples from the ASH itself and graphs it.. so it allows me to check the correctness and compare it with the ''real time'' graph of Enterprise Manager
* MS Excel and awr_topevents.sql
** this tool samples from the DBA_HIST views and graphs it.. so it allows me to check the correctness and compare it with the ''historical'' graph of Enterprise Manager
Let's get started..
! How AAS is computed
AAS is the abstraction of database load and you can get it by the following means...
!!!! 1) From ASH
<<<
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZtyRXwwiOI/AAAAAAAABLA/BYOUYtXO1Vo/AASFromASH.png]]
<<<
!!!! 2) From DBA_HIST_ACTIVE_SESS_HISTORY
* In the case of DBA_HIST_ the ''sample count'' is sample count*10 since they only write out 1 in 10 samples
<<<
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZtyRcp7m_I/AAAAAAAABLI/sLqztbLY3Mw/AASFromDBA_HIST.png]]
<<<
!!!! 3) From the AWR Top Events
* The Top Events section unions the output of ''dba_hist_system_event'' (all the events) and the ''CPU'' from time model (''dba_hist_sys_time_model'') and then filter only the ''top 5'' and do this across the SNAP_IDs
** To get the ''high level AAS'' you have to divide DB Time / Elapsed Time
** To get the ''AAS for the Top Events'', you have to divide the ''time'' (from event or cpu) by ''elapsed time''
* You can see below that we are having ''the same'' AAS numbers compared to the ASH reports
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZtyRdPqm3I/AAAAAAAABLE/o23FMIG1yeQ/AASFromAWRTop.png]]
<<<
! How AAS is being graphed
I have a dedicated blog post on this topic.. http://karlarao.wordpress.com/2010/07/25/graphing-the-aas-with-perfsheet-a-la-enterprise-manager/
So we already know how we get the AAS, and how it is graphed.. ''so what's my issue?''
''Remember I mentioned this on the blog post above.. ?''
<<<
"So what’s the effect? mm… on a high CPU activity period you’ll notice that there will be a higher AAS on the Top Activity Page compared to Performance Page. Simply because ASH samples every second and it does that quickly on every active session (the only way to see CPU usage realtime) while the time model CPU although it updates quicker (5secs I think) than v$sysstat “CPU used by this session” there could still be some lag time and it will still be based on Time Statistics (one of two ways to calculate AAS) which could be affected by averages."
<<<
I'll expound on that with test cases included.. ''see below!''
! AAS behavior on an IO bound load
* This is the graph of an IO bound load using ASH Viewer, this will be similar to the graph you will see on ''real time'' view of the Enterprise Manager 11g
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZ9Cp2Kc8aI/AAAAAAAABN0/1konJAJZMUo/highio-3.png]]
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZt3yMWQUCI/AAAAAAAABLM/8d-I2RqvF3I/AASIObound.png]]
<<<
* This is the graph of the same workload using MS Excel and the script awr_topevents.sql, this will be the similar graph you will see on the ''historical'' view of the Enterprise Manager 11g
<<<
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZ9FJ6cXxRI/AAAAAAAABN4/eWRs8SQd0ws/highio-4.png]]
<<<
As you can see from the images above and the numbers below.. the database is doing a lot of ''direct path read'' and we don't have a high load average. However, when you look at the OS statistics for this IO intensive workload you will see high IO WAIT on the CPU.
Looking at the data below from AWR and ASH.. ''we see no discrepancies''.. now, let's compare this to the workload below where the database server is CPU bound and has a really high load average.
''AAS Data from AWR''
<<<
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZ8-nKFw2pI/AAAAAAAABNk/oozsoEgnmeE/highio-1.png]]
<<<
''AAS Data from ASH''
<<<
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZ8-nFhmP7I/AAAAAAAABNo/x5kIF-HuhnY/highio-2.png]]
<<<
! AAS behavior on a CPU bound load
This is the Enterprise Manager 11g graph of a CPU bound load
<<<
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZt8ZxUeEUI/AAAAAAAABLY/gmclSmutRVg/AASCPUbound.png]]
<<<
This is the ASH Viewer graph of a CPU bound load
* The dark green color you see below (18:30 - 22:00) is actually the ''CPU Wait'' metric that you are seeing on the Enterprise Manager graph above
* The light green color on the end part of the graph (22:00) is the ''Scheduler wait - resmgr: cpu quantum''
* The small hump on the 16:30-17:30 time frame is the IO bound load test case
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZ6emvgui7I/AAAAAAAABNI/fxVzQryIwKc/highcpu-4.png]]
<<<
Below are the data from AWR and ASH of the same time period ''(21:50 - 22:00)''.. see the high level and drill down numbers below
... it seems that if the database server is ''high on CPU/high on runqueue'' or the ''"wait for CPU"'' appears.. then the AAS numbers from the AWR and ASH reports don't match anymore. I would expect ASH to be bigger because it has fine grained samples of 1 second. But as you can see (below)..
* the ASH top events correctly accounted the CPU time ''(95.37 AAS)'' which was tagged as ''CPU + Wait for CPU''
* while the AWR CPU seems to be idle ''(.2 AAS)''.
And what's even more interesting is
* the high level AAS on AWR is ''356.7''
* while on the ASH it is ''329.18''
that's a huge gap! Well that could be because of
* the high DB Time ''(215947.8)'' on AWR
* compared to what Sample Count ASH has ''(197510)''.
Do you have any idea why this is happening? Interesting, right?
''AAS Data from AWR''
<<<
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZ6BdKu23hI/AAAAAAAABMw/Nuwg_qTt6m8/highcpu-1.png]]
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZ6BdrU46FI/AAAAAAAABM4/6Inv_8_Z5dc/highcpu-2.png]]
<<<
''AAS Data from ASH''
<<<
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZ8rp2UTbWI/AAAAAAAABNg/6VBzvJxxApM/highcpu-3.png]]
<<<
''A picture is worth a thousand words...'' - To clearly explain this behavior of ''CPU not properly accounted'' I'll show you the graph of the data samples
__''AWR Top Events with CPU "not properly" accounted''__
<<<
* This is the high level AAS we are getting from the ''DB Time/Elapsed Time'' from the AWR report across SNAP_IDs.. this output comes from the script ''awr_genwl.sql'' (AAS column - http://goo.gl/MUWr) notice that there are AAS numbers as high as 350 and above.. the second occurrence of 350+ is from the SNAP_ID 495-496 mentioned above..
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZ61tG_iQ0I/AAAAAAAABNY/iKAy7j4Y534/highcpu-5.png]]
* Drilling down on the AAS components of that high level AAS, we have to graph the output of ''awr_topevents.sql''... given that this is still the same workload, you see here that only the ''Direct Path Read'' is properly accounted, and when you look at the CPU time it seems to be idle... thus giving a lower AAS than the image above..
* Take note that SNAP_ID 495 the AWR ''CPU'' seems to be idle (.2 AAS) which is what is happening on this image
* Also in the 22:00 period, the database stopped waiting on CPU and started to wait on ''Scheduler''.. and then it matched the high level AAS from the image above again (AAS range of 320).. Interesting, right?
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZ53u_cLWLI/AAAAAAAABMY/9QP2C4S7AUI/highcpu-6.png]]
* We will also see this same behavior in Enterprise Manager 11g when we go to the ''Top Activity page'' and change ''Real Time'' to ''Historical''... see the similarities with the graph from MS Excel? So when you go ''Real Time'' you are actually pulling from ASH.. and when you go ''Historical'' you are just pulling the Top Timed events across SNAP_IDs and graphing them.. but when you have issues like CPU time not being properly accounted you'll see a really different graph, and if you are not careful and don't know what it means you may end up with bad conclusions..
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZ6fz5UzkVI/AAAAAAAABNM/9xL8IukSM4A/highcpu-10.png]]
<<<
__''AWR Top Events with CPU "properly" accounted''__
<<<
* Now, this is really interesting... the graph shown below is from the ''Performance page'' and is also ''Historical'' but produced a different graph from the ''Top Activity page''...
* Why and how did it account for the ''CPU Wait''? Where did it pull the data that the ''Top Activity page'' missed?
* This is an improvement in Enterprise Manager! So I'm curious how this is happening...
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZ6ogMsAp0I/AAAAAAAABNQ/b9dTIxATxoY/highcpu-11.png]]
<<<
__''ASH with CPU "properly" accounted (well.. I say, ALWAYS!)''__
From the graph above & below where the CPU is properly accounted, you see the AAS is consistent at the range of 320..
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZ53uvK1xkI/AAAAAAAABMU/7HThzn4uoEo/highcpu-7.png]]
What makes ASH different is the proper accounting of the ''CPU'' AAS component, unlike the chart coming from awr_topevents.sql (mentioned under AWR Top Events with CPU "not properly" accounted) where there's no CPU accounted at all... this could be a problem with the DBA_HIST_SYS_TIME_MODEL ''DB CPU'' metric: when the database server is high on runqueue and there are already scheduling issues in the OS, the ''ASH is even more reliable'' at accounting for all the CPU time..
Another thing that bothers me is why the ''DB Time'', when applied to the AAS formula, gives a much higher AAS value than the ASH does.. that could also mean that ''the DB Time is another reliable source'' when the database server is high on runqueue..
If this is the case, from a pure AWR perspective... what I would do is take the output of ''awr_genwl.sql''.. then run ''awr_topevents.sql''..
and then if I see that my AAS is high in awr_genwl.sql with a really high "OS Load" and "CPU Utilization", and comparing it with the output of awr_topevents.sql shows a big discrepancy, that would give me an idea that I'm experiencing the same issue mentioned here, and I would investigate further with the ASH data to solidify my conclusions..
If you are curious about the output of the Time model statistics on SNAP_ID 495-496,
the CPU values found there do not help either because they are low..
{{{
DB CPU = 126.70 sec
BG CPU = 4.32 sec
OS CPU (osstat) = 335.71 sec
Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 215,866.2 100.0
DB CPU 126.7 .1
parse time elapsed 62.8 .0
hard parse elapsed time 60.0 .0
PL/SQL execution elapsed time 33.9 .0
hard parse (sharing criteria) elapsed time 9.7 .0
sequence load elapsed time 0.6 .0
PL/SQL compilation elapsed time 0.2 .0
connection management call elapsed time 0.0 .0
repeated bind elapsed time 0.0 .0
hard parse (bind mismatch) elapsed time 0.0 .0
DB time 215,947.9
background elapsed time 1,035.5
background cpu time 4.3
-------------------------------------------------------------
}}}
''Now we move on by splitting the ASH AAS components into their separate areas..''
* the ''CPU''
* and ''USER IO''
see the charts below..
This just shows that there is something about ASH properly accounting the ''CPU + WAIT FOR CPU'' whenever the database server is high on runqueue or OS load average... as well as the ''DB Time''
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZ53wDeLd4I/AAAAAAAABMc/G5lodk6IAqE/highcpu-8.png]]
this is the ''USER IO'' AAS.. same as what is accounted in awr_topevents.sql
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZ53wKTIMVI/AAAAAAAABMg/dAihs-LYGfY/highcpu-9.png]]
So the big question for me is...
How do ASH and the Enterprise Manager performance page account for the "CPU + WAIT FOR CPU"? Even if you drill down on V$ACTIVE_SESSION_HISTORY you will not find this metric. So I'm really interested in where they pull the data from.. :)
''update''
... and then I asked a couple of people, and I had a recent problem at a client site running on Exadata where I was troubleshooting their ETL runs. I was running 10046 for every run and found out that my unaccounted-for time was due to the CPU wait shown in this tiddler. So using Mr. Tools, and given that I was looking at a similar workload.. I had an idea that the unaccounted-for time was the CPU wait. See the write up here http://www.evernote.com/shard/s48/sh/3ccc1e38-b5ef-46f8-bc75-371156ade4b3/69066fa2741f780f93b86af1626a1bcd , and I was right all along ;)
''AAS investigation updates: Answered questions + bits of interesting findings''
http://www.evernote.com/shard/s48/sh/b4ecaaf2-1ceb-43ea-b58e-6f16079a775c/cb2e28e651c3993b325e66cc858c3935
''I've updated the awr_topevents.sql script to show CPU wait to solve the unaccounted DB Time issue'' see the write up on the link below:
awr_topevents_v2.sql - http://www.evernote.com/shard/s48/sh/a64a656f-6511-4026-be97-467dccc82688/de5991c75289f16eee73c26c249a60bf
Thanks to the following people for reading/listening about this research, and for the interesting discussions and ideas around this topic:
- Kyle Hailey, Riyaj Shamsudeen, Dave Abercrombie, Cary Millsap, John Beresniewicz
''Here's the MindMap of the AAS investigation'' http://www.evernote.com/shard/s48/sh/90cdf56f-da52-4dc5-91d0-a9540905baa6/9eb34e881a120f82f2dab0f5424208bf
! update (rmoug 2012 slides on cpu wait)
[img(100%,100%)[https://i.imgur.com/xArySP8.png]]
[img(100%,100%)[https://i.imgur.com/j4FiOwY.png]]
[img(100%,100%)[https://i.imgur.com/hw7ttDe.png]]
[img(100%,100%)[https://i.imgur.com/AQqrZSn.png]]
[img(100%,100%)[https://i.imgur.com/POHSIQ5.png]]
[img(100%,100%)[https://i.imgur.com/GXPupKb.png]]
[img(100%,100%)[https://i.imgur.com/94TlhTh.jpg]]
[img(100%,100%)[https://i.imgur.com/jtWvg7Z.png]]
[img(100%,100%)[https://i.imgur.com/PzDdJGs.png]]
[img(100%,100%)[https://i.imgur.com/SKmcj4K.png]]
[img(100%,100%)[https://i.imgur.com/MHTcbPD.png]]
[img(100%,100%)[https://i.imgur.com/83Jfspe.png]]
.
http://www.evernote.com/shard/s48/sh/a0875f07-26e6-4ec7-ab31-2d946925ef73/6d2fe9d6adc6f716a40ec87e35a0b264
https://blogs.oracle.com/RobertGFreeman/entry/exadata_support_for_acfs_and
''Further Reading:'' @@Brewer@@ (http://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed) and @@Gilbert and Lynch@@ (http://groups.csail.mit.edu/tds/papers/Gilbert/Brewer2.pdf) on the CAP Theorem; @@Vogels@@ (http://queue.acm.org/detail.cfm?id=1466448) on Eventual Consistency, @@Hamilton@@ (http://perspectives.mvdirona.com/2010/02/24/ILoveEventualConsistencyBut.aspx) on its limitations, and @@Bailis and Ghodsi@@ (https://queue.acm.org/detail.cfm?id=2462076) on measuring it and more; and @@Sirer@@ (http://hackingdistributed.com/2013/03/23/consistency-alphabet-soup/) on the multiple meanings of consistency in Computer Science. @@Liveness manifestos@@ (http://cs.nyu.edu/acsys/beyond-safety/liveness.htm) has interesting definition variants for liveness and safety.
! Big Data 4Vs + 1
<<<
Volume - scale at which data is generated
Variety - different forms of data
Velocity - data arrives in continuous stream
Veracity - uncertainty: data is not always accurate
Value - immediacy and hidden relationships
<<<
! ACID
* redo and undo in Oracle provides ACID
<<<
Atomic - they all complete successfully, or not at all
Consistent - integrity rules. ACID consistency is all about database rules.
Isolated - locking
Durable - committed transactions are guaranteed to persist
<<<
* ACID http://docs.oracle.com/database/122/CNCPT/glossary.htm#CNCPT89623 The basic properties of a database transaction that all Oracle Database transactions must obey. ACID is an acronym for atomicity, consistency, isolation, and durability.
* Transaction http://docs.oracle.com/database/122/CNCPT/glossary.htm#GUID-212D8EA1-D704-4D7B-A72D-72001965CE45 Logical unit of work that contains one or more SQL statements. All statements in a transaction commit or roll back together. The use of transactions is one of the most important ways that a database management system differs from a file system.
* Oracle Fusion Middleware Developing JTA Applications for Oracle WebLogic Server - ACID Properties of Transactions http://docs.oracle.com/middleware/12212/wls/WLJTA/gstrx.htm#WLJTA117
http://cacm.acm.org/magazines/2011/6/108651-10-rules-for-scalable-performance-in-simple-operation-datastores/fulltext
http://www.slideshare.net/jkanagaraj/oracle-vs-nosql-the-good-the-bad-and-the-ugly
http://highscalability.com/blog/2009/11/30/why-existing-databases-rac-are-so-breakable.html
Databases in the wild file:///C:/Users/karl/Downloads/Databases%20in%20the%20Wild%20(1).pdf
! CAP theorem
* CAP is a tool to explain trade-offs in distributed systems.
<<<
Consistent: All replicas of the same data will be the same value across a distributed system. CAP consistency promises that every replica of the same logical value, spread across nodes in a distributed system, has the same exact value at all times. Note that this is a logical guarantee, rather than a physical one. Due to the speed of light, it may take some non-zero time to replicate values across a cluster. The cluster can still present a logical view by preventing clients from viewing different values at different nodes.
Available: All live nodes in a distributed system can process operations and respond to queries.
Partition Tolerant: The system is designed to operate in the face of unplanned network connectivity loss between replicas.
<<<
https://en.wikipedia.org/wiki/CAP_theorem
https://dzone.com/articles/better-explaining-cap-theorem
https://cloudplatform.googleblog.com/2017/02/inside-Cloud-Spanner-and-the-CAP-Theorem.html
http://guyharrison.squarespace.com/blog/2010/6/13/consistency-models-in-non-relational-databases.html <- good stuff
http://www.datastax.com/2014/08/comparing-oracle-rac-and-nosql <- good stuff
http://docs.oracle.com/database/121/GSMUG/toc.htm , http://www.oracle.com/technetwork/database/availability/global-data-services-12c-wp-1964780.pdf <- Database Global Data Services Concepts and Administration Guide [[Global Data Services]]
http://www.oracle.com/technetwork/database/options/clustering/overview/backtothefuture-2192291.pdf <- good stuff Back to the Future with Oracle Database 12c
https://blogs.oracle.com/MAA/tags/cap <- two parts good stuff
https://www.percona.com/live/mysql-conference-2013/sites/default/files/slides/aslett%20cap%20theorem.pdf <- very good stuff
<<<
[img(100%,100%)[ http://i.imgur.com/q1QEtGI.png ]]
<<<
http://blog.nahurst.com/visual-guide-to-nosql-systems
<<<
[img(100%,100%)[ http://i.imgur.com/I7jYbVD.png ]]
<<<
http://www.ctodigest.com/2014/distributed-applications/the-distributed-relational-database-shattering-the-cap-theorem/
https://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed
Spanner, TrueTime and the CAP Theorem https://research.google.com/pubs/pub45855.html , https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45855.pdf
https://www.voltdb.com/blog/disambiguating-acid-and-cap <- difference between two Cs (voltdb founder)
https://martin.kleppmann.com/2015/05/11/please-stop-calling-databases-cp-or-ap.html <- nice, a lot of references! author of "Designing Data-Intensive Applications"
<<<
https://aphyr.com/posts/322-call-me-maybe-mongodb-stale-reads
http://blog.thislongrun.com/2015/04/cap-availability-high-availability-and_16.html
https://github.com/jepsen-io/knossos
https://aphyr.com/posts/288-the-network-is-reliable
http://dbmsmusings.blogspot.co.uk/2010/04/problems-with-cap-and-yahoos-little.html
https://codahale.com/you-cant-sacrifice-partition-tolerance/
https://www.somethingsimilar.com/2013/01/14/notes-on-distributed-systems-for-young-bloods/
http://henryr.github.io/cap-faq/
http://henryr.github.io/distributed-systems-readings/
<<<
http://blog.thislongrun.com/2015/03/the-confusing-cap-and-acid-wording.html
https://news.ycombinator.com/item?id=9285751
[img(100%,100%)[http://i.imgur.com/G9vV8Qh.png ]]
http://www.slideshare.net/AerospikeDB/acid-cap-aerospike
Next Generation Databases: NoSQL, NewSQL, and Big Data https://www.safaribooksonline.com/library/view/next-generation-databases/9781484213292/9781484213308_Ch09.xhtml#Sec2
https://www.pluralsight.com/courses/cqrs-theory-practice
https://www.pluralsight.com/blog/software-development/relational-non-relational-databases
https://www.amazon.com/Seven-Concurrency-Models-Weeks-Programmers-ebook/dp/B00MH6EMN6/ref=mt_kindle?_encoding=UTF8&me=
https://en.wikipedia.org/wiki/Michael_Stonebraker#Data_Analysis_.26_Extraction
http://scaledb.blogspot.com/2011/03/cap-theorem-event-horizon.html
! Think twice before dropping ACID and throw your CAP away
https://static.rainfocus.com/oracle/oow19/sess/1552610610060001frc7/PF/AG_%20Think%20twice%20before%20dropping%20ACID%20and%20throw%20your%20CAP%20away%202019_09_16%20-%20oco_1568777054624001jIWT.pdf
! BASE (eventual consistency)
BASE – Basically Available Soft-state Eventually consistent is an acronym used to contrast this approach with the RDBMS ACID transactions described above.
http://www.allthingsdistributed.com/2008/12/eventually_consistent.html <- amazon cto
! NRW notation
NRW notation describes at a high level how a distributed database will trade off consistency, read performance and write performance. NRW stands for:
N: the number of copies of each data item that the database will maintain.
R: the number of copies that the application will access when reading the data item
W: the number of copies of the data item that must be written before the write can complete.
As a rule of thumb, when R + W > N the read and write sets overlap and reads see the latest write; with R + W <= N reads can be stale.
! Database test
!! jepsen
A framework for distributed systems verification, with fault injection
https://github.com/jepsen-io/jepsen
https://www.youtube.com/watch?v=tRc0O9VgzB0
!! sqllogictest
https://github.com/gregrahn/sqllogictest
.
{{{
connect / as sysdba
set serveroutput on
show user;
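-- helper: drop, recreate, and assign a network ACL for a given host/port (11g DBMS_NETWORK_ACL_ADMIN)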
create or replace procedure mailserver_acl(
aacl varchar2,
acomment varchar2,
aprincipal varchar2,
aisgrant boolean,
aprivilege varchar2,
aserver varchar2,
aport number)
is
begin
begin
DBMS_NETWORK_ACL_ADMIN.DROP_ACL(aacl);
dbms_output.put_line('ACL dropped.....');
exception
when others then
dbms_output.put_line('Error dropping ACL: '||aacl);
dbms_output.put_line(sqlerrm);
end;
begin
DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(aacl,acomment,aprincipal,aisgrant,aprivilege);
dbms_output.put_line('ACL created.....');
exception
when others then
dbms_output.put_line('Error creating ACL: '||aacl);
dbms_output.put_line(sqlerrm);
end;
begin
DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(aacl,aserver,aport);
dbms_output.put_line('ACL assigned.....');
exception
when others then
dbms_output.put_line('Error assigning ACL: '||aacl);
dbms_output.put_line(sqlerrm);
end;
commit;
dbms_output.put_line('ACL commited.....');
end;
/
show errors
select acl, host, lower_port, upper_port from dba_network_acls
ACL HOST LOWER_PORT UPPER_PORT
---------------------------------------- ------------------------------ ---------- ----------
/sys/acls/IFSAPP-PLSQLAP-Permission.xml haiapp09.mfg.am.mds. 59080 59080
select acl, principal, privilege, is_grant from dba_network_acl_privileges
ACL PRINCIPAL PRIVILE IS_GR
---------------------------------------- ------------------------------ ------- -----
/sys/acls/IFSAPP-PLSQLAP-Permission.xml IFSAPP connect true
/sys/acls/IFSAPP-PLSQLAP-Permission.xml IFSSYS connect true
begin
mailserver_acl(
'/sys/acls/IFSAPP-PLSQLAP-Permission.xml',
'ACL for used Email Server to connect',
'IFSAPP',
TRUE,
'connect',
'haiapp09.mfg.am.mds.',
59080);
end;
/
begin
DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE('/sys/acls/IFSAPP-PLSQLAP-Permission.xml','IFSSYS',TRUE,'connect');
commit;
end;
/
}}}
{{{
Summary:
> Implement Instance Caging
> Enable Parallel Force Query and Parallel Statement Queuing
> A database trigger has to be created on the Active Data Guard for all databases to enable Parallel Force Query on the session level upon login
> Create a new Resource Management Plan to limit the per session parallelism to 4
> Enable IORM and set to objective of AUTO on the Storage Cells
Commands to implement the recommended changes:
> The numbers 1 and 2 need to be executed on each database of the Active Data Guard environment
> #3 needs to be executed on all the Storage Cells, use the dcli and execute only on the 1st storage cell if passwordless ssh is configured
> #4 needs to be executed on each database (ECC, EWM, GTS, APO) of the Primary site to create the new Resource Management Plan
> #5 needs to be executed on each database of the Active Data Guard environment to activate the Resource Management Plan
The behavior:
instance caging is set to CPU_COUNT of 40 (83% max CPU utilization)
parallel 4 will be set to all users logged in as ENTERPRISE, no need for hints
although the hints override the session settings, the non-ENTERPRISE users will be throttled on the resource management layer to PX of 4 even if hints are set
RM plan has PX limit of 4 for other_groups
We can set a higher limit (let's say 8) for the ENTERPRISE users so they can override the PX 4 to a higher value through hints
this configuration will be done on all 4 databases
Switchover steps - just in case the 4 DBs will switchover to Exadata:
disable the px trigger
alter the resource plan to SAP primary
######################################################################
1) instance caging
alter system set cpu_count=40 scope=both sid='*';
alter system set resource_manager_plan=default_plan;
2) statement queueing and create trigger
alter system set parallel_force_local=false scope=both sid='*';
alter system set parallel_max_servers=128 scope=both sid='*';
alter system set parallel_servers_target=64 scope=both sid='*';
alter system set parallel_min_servers=64 scope=both sid='*';
alter system set "_parallel_statement_queuing"=true scope=both sid='*';
-- alter trigger sys.adg_pxforce_trigger disable;
-- the trigger checks if ENTERPRISE user is logged on, if it's running as PHYSICAL STANDBY, and if it's running on X4DP cluster
CREATE OR REPLACE TRIGGER adg_pxforce_trigger
AFTER LOGON ON database
WHEN (USER in ('ENTERPRISE'))
BEGIN
IF (SYS_CONTEXT('USERENV','DATABASE_ROLE') IN ('PHYSICAL STANDBY'))
AND (UPPER(SUBSTR(SYS_CONTEXT ('USERENV','SERVER_HOST'),1,4)) IN ('X4DP'))
THEN
execute immediate 'alter session force parallel query parallel 4';
END IF;
END;
/
3) IORM AUTO
-- execute on each storage cell
cellcli -e list iormplan detail
cellcli -e alter iormplan objective = auto
cellcli -e alter iormplan active
-- use these commands if passwordless ssh is configured
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective = auto'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan active'
######################################################################
4) RM plan to be created on the primary site
exec DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA;
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PLAN(PLAN => 'px_force', COMMENT => 'force parallel query parallel 4');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'CG_ENTERPRISE', COMMENT => 'CG for ENTERPRISE users');
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN =>'px_force', GROUP_OR_SUBPLAN => 'CG_ENTERPRISE', COMMENT => 'Directive for ENTERPRISE users', PARALLEL_DEGREE_LIMIT_P1 => 4);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN =>'px_force', GROUP_OR_SUBPLAN => 'OTHER_GROUPS', COMMENT => 'Low priority users', PARALLEL_DEGREE_LIMIT_P1 => 4);
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
begin
dbms_resource_manager_privs.grant_switch_consumer_group(grantee_name => 'ENTERPRISE',consumer_group => 'CG_ENTERPRISE', grant_option => FALSE);
end;
/
begin
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP ('ENTERPRISE', 'CG_ENTERPRISE');
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- check config
set wrap off
set head on
set linesize 300
set pagesize 132
col comments format a64
-- show current resource plan
select * from V$RSRC_PLAN;
-- show all resource plans
select PLAN,NUM_PLAN_DIRECTIVES,CPU_METHOD,substr(COMMENTS,1,64) "COMMENTS",STATUS,MANDATORY
from dba_rsrc_plans
order by plan;
-- show consumer groups
select CONSUMER_GROUP,CPU_METHOD,STATUS,MANDATORY,substr(COMMENTS,1,64) "COMMENTS"
from DBA_RSRC_CONSUMER_GROUPS
order by consumer_group;
-- show category
SELECT consumer_group, category
FROM DBA_RSRC_CONSUMER_GROUPS
ORDER BY category;
-- show mappings
col value format a30
select ATTRIBUTE, VALUE, CONSUMER_GROUP, STATUS
from DBA_RSRC_GROUP_MAPPINGS
order by 3;
-- show mapping priority
select * from DBA_RSRC_MAPPING_PRIORITY;
-- show directives
SELECT plan,group_or_subplan,cpu_p1,cpu_p2,cpu_p3, PARALLEL_DEGREE_LIMIT_P1, status
FROM dba_rsrc_plan_directives
order by 1,3 desc,4 desc,5 desc;
-- show grants
select * from DBA_RSRC_CONSUMER_GROUP_PRIVS order by grantee;
select * from DBA_RSRC_MANAGER_SYSTEM_PRIVS order by grantee;
-- show scheduler windows
select window_name, resource_plan, START_DATE, DURATION, WINDOW_PRIORITY, enabled, active from dba_scheduler_windows;
5) enforce on the standby site
connect / as sysdba
--ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'FORCE:px_force';
-- revert
connect / as sysdba
exec DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA;
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'default_plan';
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.DELETE_PLAN_CASCADE ('px_force');
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
}}}
Setting up Swingbench for Oracle Autonomous Data Warehousing (ADW) http://www.dominicgiles.com/blog/files/7fd178b363b32b85ab889edfca6cadb2-170.html
https://www.accenture.com/_acnmedia/pdf-108/accenture-destination-autonomous-oracle-database.pdf
! exploring ADW
https://content.dsp.co.uk/exploring-adw-part-1-uploading-more-than-1mb-to-object-storage
https://content.dsp.co.uk/exploring-autonomous-data-warehouse-loading-data
! how autonomous is ADW
https://indico.cern.ch/event/757894/attachments/1720580/2777513/8b_AutonomousIsDataWarehouse_AntogniniSchnider.pdf
.
<<showtoc>>
! compression not explicitly set
{{{
-- compression not explicitly set --# for some reason this resulted in BASIC compression
set timing on
alter session set optimizer_ignore_hints = false;
create table SD_HECHOS_COBERTURA_PP_HCC_TEST
compress parallel as select /*+ NO_GATHER_OPTIMIZER_STATISTICS full(sd_hechos_cobertura_pp) */ * from sd_hechos_cobertura_pp;
}}}
! compression explicitly set
{{{
-- explicit set compression
set timing on
alter session set optimizer_ignore_hints = false;
create table SD_HECHOS_COBERTURA_PP_HCC_TEST
compress for QUERY HIGH ROW LEVEL LOCKING parallel as select /*+ NO_GATHER_OPTIMIZER_STATISTICS full(sd_hechos_cobertura_pp) */ * from sd_hechos_cobertura_pp;
}}}
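A quick way to confirm what each CTAS actually produced (a minimal check against the table created above):
{{{
-- verify the compression type the CTAS ended up with
SELECT table_name, compression, compress_for
  FROM user_tables
 WHERE table_name = 'SD_HECHOS_COBERTURA_PP_HCC_TEST';
}}}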
https://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/automatic-operations-256569003.html
<<<
! Automatic Operations
Automatic Indexing Enhancements
Automatic Index Optimization
Automatic Materialized Views
Automatic SQL Tuning Set
Automatic Temporary Tablespace Shrink
Automatic Undo Tablespace Shrink
Automatic Zone Maps
Object Activity Tracking System
Sequence Dynamic Cache Resizing
<<<
.
{{{
-- registry
SELECT /*+ NO_MERGE */ /* 1a.15 */
x.*
,c.name con_name
FROM cdb_registry x
LEFT OUTER JOIN v$containers c ON c.con_id = x.con_id
ORDER BY
x.con_id,
x.comp_id;
-- registry history
SELECT /*+ NO_MERGE */ /* 1a.17 */
x.*
,c.name con_name
FROM cdb_registry_history x
LEFT OUTER JOIN v$containers c ON c.con_id = x.con_id
ORDER BY 1
,x.con_id;
-- registry hierarchy
SELECT /*+ NO_MERGE */ /* 1a.18 */
x.*
,c.name con_name
FROM cdb_registry_hierarchy x
LEFT OUTER JOIN v$containers c ON c.con_id = x.con_id
ORDER BY
1, 2, 3;
}}}
{{{
set lines 400 pages 2000
col MESSAGE_TEXT format a200
col ORIGINATING_TIMESTAMP format a40
col MESSAGE_ARGUMENTS format a20
SELECT originating_timestamp,
MESSAGE_TEXT
FROM v$diag_alert_ext
WHERE component_id = 'rdbms'
AND originating_timestamp >= to_date('2021/11/23 21:00', 'yyyy/mm/dd hh24:mi')
AND originating_timestamp <= to_date('2021/11/24 14:00', 'yyyy/mm/dd hh24:mi')
ORDER BY originating_timestamp;
}}}
{{{
SQL> set lines 400 pages 2000
SQL> col MESSAGE_TEXT format a200
SQL> col ORIGINATING_TIMESTAMP format a40
SQL> col MESSAGE_ARGUMENTS format a20
SQL>
SQL> SELECT originating_timestamp,
2 MESSAGE_TEXT
3 FROM v$diag_alert_ext
4 WHERE component_id = 'rdbms'
5 AND originating_timestamp >= to_date('2021/11/23 21:00', 'yyyy/mm/dd hh24:mi')
6 AND originating_timestamp <= to_date('2021/11/24 14:00', 'yyyy/mm/dd hh24:mi')
7 ORDER BY originating_timestamp;
ORIGINATING_TIMESTAMP MESSAGE_TEXT
---------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
24-NOV-21 05.49.29.724000000 AM GMT Space search: ospid:105880 starts dumping trace
}}}
https://github.com/oracle/data-warehouse-etl-offload-samples
Monitor the Performance of Autonomous Data Warehouse
https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/monitor-performance-intro.html#GUID-54CCC1C6-C32E-47F4-8EB6-64CD6EDB5938
! also you can dump the ASH (PDB level) and graph it
* the service account is created on the root CDB
{{{
C##CLOUD$SERVICE
}}}
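Here's a minimal sketch of such a dump (an illustration, not the exact query used): bucket the ASH samples per minute per PDB so the output graphs easily:
{{{
-- ASH samples per minute, per PDB, split by session state
select to_char(trunc(ash.sample_time, 'MI'), 'YYYY-MM-DD HH24:MI') sample_min,
       c.name pdb_name,
       ash.session_state,
       count(*) ash_samples
from   v$active_session_history ash,
       v$containers c
where  ash.con_id = c.con_id
group by trunc(ash.sample_time, 'MI'), c.name, ash.session_state
order by 1, 2, 3;
}}}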
https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/unavailable-oracle-database-features.html#GUID-B6FB5EFC-4828-43F4-BA63-72DA74FFDB87
<<<
Database Features Unavailable in Autonomous Database
Lists the Oracle Database features that are not available in Autonomous Database. Additionally, database features designed for administration are not available.
List of Unavailable Oracle Features
Oracle Real Application Testing (Database Replay)
Oracle Real Application Security Administration Console (RASADM)
Oracle OLAP: Not available in Autonomous Database. See Deprecation of Oracle OLAP for more information.
Oracle R capabilities of Oracle Advanced Analytics
Oracle Industry Data Models
Oracle Database Lifecycle Management Pack
Oracle Data Masking and Subsetting Pack
Oracle Cloud Management Pack for Oracle Database
Oracle Multimedia: Not available in Autonomous Database and deprecated in Oracle Database 18c.
Oracle Sharding
Java in DB
Oracle Workspace Manager
<<<
https://wiki.archlinux.org/index.php/AHCI
http://en.wikipedia.org/wiki/AHCI
http://en.wikipedia.org/wiki/NCQ
Disks from the Perspective of a File System - TCQ,NCQ,4KSectorSize,MRAM http://goo.gl/eWUK7
Power5
Power6 <-- most advanced processor, starting clock is 4GHz
Power7
Hardware Virtualization (LPAR)
1) Standard Partition
4 LPARs, each with its own dedicated resources (processor, memory)
2) Micropartition
4 LPARs can utilize a pool of 8 processors
2 LPARs can utilize 1 processor
Note:
- Dynamic allocation can happen:
CPU: 5 seconds
Memory: 1 minute
http://www.oraclerant.com/?p=8
{{{
# Oracle Database environment variables
umask 022
export ORACLE_BASE='/oracle/app/oracle'
export ORACLE_HOME="${ORACLE_BASE}/product/10.2.0/db_1"
export AIXTHREAD_SCOPE=S
export PATH="${ORACLE_HOME}/OPatch:${ORACLE_HOME}/bin:${PATH}"
# export NLS_LANG=language_territory.characterset
export LIBPATH=$ORACLE_HOME/lib:$LIBPATH
export TNS_ADMIN=$ORACLE_HOME/network/admin
}}}
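AIXTHREAD_SCOPE=S is worth calling out: it sets system-wide contention scope (the 1:1 thread model), which is the setting generally recommended for Oracle on AIX.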
http://www.scribd.com/doc/2153747/AIX-EtherChannel-Load-Balancing-Options
http://gjilevski.wordpress.com/2009/12/13/hardware-solution-for-oracle-rac-11g-private-interconnect-aggregating/
http://www.freelists.org/post/oracle-l/Oracle-10g-R2-RAC-network-configuration
! show system configuration
<<<
* show overall system config
{{{
prtconf
}}}
* to give the highest installed maintenance level
{{{
$ oslevel -r
6100-05
}}}
* to give the known recommended ML
{{{
$ oslevel -rq
Known Recommended Maintenance Levels
------------------------------------
6100-06
6100-05
6100-04
6100-03
6100-02
6100-01
6100-00
}}}
* To show you Service Packs levels as well
{{{
$ oslevel -s
6100-05-03-1036
}}}
* amount of real memory
{{{
lsattr -El sys0 -a realmem
realmem 21757952 Amount of usable physical memory in Kbytes False
}}}
* Displays the system model name. For example, IBM, 9114-275
{{{
uname -M
-- on p6
IBM,8204-E8A
-- on p7
IBM,8205-E6C
}}}
<<<
! get CPU information
* get number of CPUs
{{{
lscfg | grep proc
-- on p6
+ proc0 Processor
+ proc2 Processor
+ proc4 Processor
+ proc6 Processor
+ proc8 Processor
+ proc10 Processor
+ proc12 Processor
+ proc14 Processor
-- on p7
+ proc0 Processor
+ proc4 Processor
}}}
* get CPU speed
{{{
lsattr -El proc0
-- on p6
frequency 4204000000 Processor Speed False
smt_enabled true Processor SMT enabled False
smt_threads 2 Processor SMT threads False
state enable Processor state False
type PowerPC_POWER6 Processor type False
-- on p7
frequency 3550000000 Processor Speed False
smt_enabled true Processor SMT enabled False
smt_threads 4 Processor SMT threads False
state enable Processor state False
type PowerPC_POWER7 Processor type False
}}}
{{{
# lsdev -Cc processor
proc0 Available 00-00 Processor
proc2 Available 00-02 Processor
proc4 Available 00-04 Processor
proc6 Available 00-06 Processor
proc8 Available 00-08 Processor
proc10 Available 00-10 Processor
Which says 6 processors but the following command shows it is only a single 6-way card:
lscfg -vp |grep -ip proc |grep "PROC"
6 WAY PROC CUOD :
The problem seems to revolve around what counts as a CPU these days: a chip, a core, or a single piece of silicon wafer, with whatever resides on it counted as one or many.
IBM deems a core to be a CPU, so they would say your system has 6 processors.
They are all on one card, and they may all be in one MCM / chip or there may be several MCMs / chips on that card, but you have a 6 CPU system there.
lsdev shows 6 processors, so AIX has configured 6 processors.
lscfg shows it is a CUoD 6-processor system, and as AIX has configured all 6, it shows all 6 are activated by a suitable POD code.
The Oracle wiki at orafaq.com shows Oracle licenses Standard Edition by CPU (definition undefined) and Enterprise by core (again undefined).
http://www.orafaq.com/wiki/Oracle_Licensing
Whatever you call a CPU or a core, you have a 6-way / 6-processor system there, and the fact that all 6 may or may not be on one piece of silicon wafer will not make any difference.
#############################################################################
get number of processors, its name, physical location, Lists all processors
odmget -q"PdDvLn LIKE processor/*" CuDv
list specific processor, but it is more about Physical location etc, nothing about single/dual core etc
odmget -q"PdDvLn LIKE processor/* AND name=proc0" CuDv
#############################################################################
I've checked this on LPARs on two servers - p55A and p570 - both servers have 8 CPUs, and it seems the p55A has two 4-core CPUs and the p570 has four 2-core CPUs.
$ lsattr -El sys0 -a modelname
modelname IBM,9133-55A Machine name False
$ lparstat -i|grep ^Active\ Phys
Active Physical CPUs in system : 8
$ lscfg -vp|grep WAY
4-WAY PROC CUOD :
4-WAY PROC CUOD :
$ lscfg -vp|grep proc
proc0 Processor
proc2 Processor
proc4 Processor
proc6 Processor
$
$ lsattr -El sys0 -a modelname
modelname IBM,9117-570 Machine name False
$ lparstat -i|grep ^Active\ Phys
Active Physical CPUs in system : 8
$ lscfg -vp|grep WAY
2-WAY PROC CUOD :
2-WAY PROC CUOD :
2-WAY PROC CUOD :
2-WAY PROC CUOD :
$ lscfg -vp|grep proc
proc0 Processor
proc2 Processor
proc4 Processor
proc6 Processor
$
#############################################################################
p550 with 2 quad-core processors (no LPARs):
/ #>lsattr -El sys0 -a modelname
modelname IBM,9133-55A Machine name False
/ #>lparstat -i|grep Active\ Phys
Active Physical CPUs in system : 8
/ #>lscfg -vp | grep WAY
2-WAY PROC CUOD :
2-WAY PROC CUOD :
/ #>lscfg -vp |grep proc
proc0 Processor
proc2 Processor
proc4 Processor
proc6 Processor
proc8 Processor
proc10 Processor
proc12 Processor
proc14 Processor
And the further detailed lscfg -vp output shows:
2-WAY PROC CUOD :
Record Name.................VINI
Flag Field..................XXPF
Hardware Location Code......U787B.001.DNWC2F7-P1-C9
Customer Card ID Number.....8313
Serial Number...............YL10HA68E008
FRU Number..................10N6469
Part Number.................10N6469
As you can see, the part number is 10N6469, which clearly is a quad-core cpu:
http://www.searchlighttech.com/searchResults.cfm?part=10N6469
#############################################################################
Power5 and Power6 processors are both dual-core, dual-thread.
The next Power7 should have 8 cores, each able to execute 4 threads (coming in 2010), but at a lower frequency (3.2GHz max instead of 5.0GHz on the Power6).
#############################################################################
To get the information about the partition, enter the following command:
lparstat -i
#############################################################################
lparstat -i
lparstat
lscfg | grep proc
lsattr -El proc0
uname -M
lsattr -El sys0 -a realmem
lscfg | grep proc
lsdev -Cc processor
lscfg -vp |grep -ip proc |grep "PROC"
odmget -q"PdDvLn LIKE processor/*" CuDv
odmget -q"PdDvLn LIKE processor/* AND name=proc0" CuDv
odmget -q"PdDvLn LIKE processor/* AND name=proc14" CuDv
lsattr -El sys0 -a modelname
lparstat -i|grep ^Active\ Phys
lscfg -vp|grep WAY
lscfg -vp|grep proc
lsattr -El sys0 -a modelname
lparstat -i|grep Active\ Phys
lscfg -vp | grep WAY
lscfg -vp |grep proc
lscfg -vp
#############################################################################
So the AIX box has 8 physical CPUs… now it’s a bit tricky to get the real CPU% in AIX..
First you have to determine the CPU configuration of the machine
$ prtconf
System Model: IBM,8204-E8A
Machine Serial Number: 10F2441
Processor Type: PowerPC_POWER6
Processor Implementation Mode: POWER 6
Processor Version: PV_6_Compat
Number Of Processors: 8
Processor Clock Speed: 4204 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 2 nad0019aixp21
Memory Size: 21248 MB
Good Memory Size: 21248 MB
Platform Firmware level: Not Available
Firmware Version: IBM,EL350_132
Console Login: enable
Auto Restart: true
Full Core: false
Then, execute lparstat…
• The ent 2.30 is the entitled CPU capacity
• The psize is the number of physical CPUs in the shared pool
• The physc 4.42 means the CPU usage went above the entitled capacity because the partition is “Uncapped”.. so to get the real CPU% divide by the pool size: 4.42/8 = 55% utilization
• The 55% utilization can be read against either the 8 physical CPUs or the 16 logical CPUs… it is just the percentage of the pool used, so I put 60% on the provisioning worksheet
$ lparstat 1 10000
System configuration: type=Shared mode=Uncapped smt=On lcpu=16 mem=21247 psize=8 ent=2.30
%user %sys %wait %idle physc %entc lbusy vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
91.4 7.6 0.8 0.3 3.94 171.1 29.9 4968 1352
92.0 6.9 0.7 0.4 3.76 163.4 26.2 4548 1054
93.1 6.0 0.5 0.3 4.42 192.3 33.2 4606 1316
91.3 7.5 0.7 0.5 3.74 162.6 25.6 5220 1191
93.4 5.7 0.6 0.3 4.07 176.9 28.7 4423 1239
93.1 6.0 0.6 0.4 4.05 176.0 29.4 4709 1164
92.3 6.7 0.6 0.5 3.46 150.2 24.8 4299 718
92.2 6.9 0.6 0.4 3.69 160.6 27.9 4169 973
91.9 7.3 0.5 0.3 4.06 176.5 33.2 4248 1233
}}}
! install IYs
{{{
To list all IYs
# instfix -i | pg
To show the filesets on a given IY
# instfix -avik IY59135
To commit a fileset
# smitty maintain_software
To list the fileset of an executable
# lslpp -w <full path of the executable>
To install an IY
# uncompress <file>
# tar -xvf <file>
# inutoc .
# smitty installp
}}}
! iostat
{{{
> iostat -sl
System configuration: lcpu=4 drives=88 ent=0.20 paths=176 vdisks=8
tty: tin tout avg-cpu: % user % sys % idle % iowait physc % entc
0.3 29.5 64.5 28.6 5.1 1.9 0.9 435.5
System:
Kbps tps Kb_read Kb_wrtn
30969.7 429.9 937381114927 200661442300
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 1.3 61.9 7.6 1479300432 794583660
...
> iostat -st
System configuration: lcpu=4 drives=88 ent=0.20 paths=176 vdisks=8
tty: tin tout avg-cpu: % user % sys % idle % iowait physc % entc
0.3 29.5 64.5 28.6 5.1 1.9 0.9 435.5
System:
Kbps tps Kb_read Kb_wrtn
30969.7 429.9 937381298349 200661442605
}}}
{{{
$ iostat -DRTl 10 100
System configuration: lcpu=16 drives=80 paths=93 vdisks=2
Disks: xfers read write queue time
-------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
%tm bps tps bread bwrtn rps avg min max time fail wps avg min max time fail avg min max avg avg serv
act serv serv serv outs serv serv serv outs time time time wqsz sqsz qfull
hdisk3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk13 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk15 61.5 3.3M 162.0 3.2M 90.2K 158.4 6.4 0.2 60.0 0 0 3.5 3.3 0.7 4.6 0 0 0.5 0.0 15.7 0.0 0.1 53.6 16:05:30
hdisk14 67.3 3.4M 166.2 3.3M 67.7K 162.3 7.2 0.2 71.8 0 0 3.9 2.8 0.8 5.7 0 0 1.0 0.0 36.0 0.0 0.1 63.0 16:05:30
hdisk8 58.9 3.0M 165.2 2.9M 112.8K 160.6 5.6 0.2 57.1 0 0 4.6 3.0 0.6 5.5 0 0 0.4 0.0 18.8 0.0 0.1 43.2 16:05:30
hdisk12 57.6 3.4M 151.3 3.3M 91.8K 147.4 6.0 0.2 54.7 0 0 3.9 3.1 0.6 4.7 0 0 0.5 0.0 23.4 0.0 0.1 43.6 16:05:30
hdisk11 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk10 86.0 2.9M 144.9 2.9M 58.0K 141.4 12.7 0.3 109.3 0 0 3.5 2.8 0.8 5.1 0 0 5.3 0.0 82.6 0.0 0.1 86.2 16:05:30
hdisk9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk16 0.1 402.8 0.1 0.0 402.8 0.0 0.0 0.0 0.0 0 0 0.1 8.8 8.8 8.8 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk18 1.3 391.7K 17.1 0.0 391.7K 0.0 0.0 0.0 0.0 0 0 17.1 1.0 0.5 6.2 0 0 0.0 0.0 0.1 0.0 0.0 0.1 16:05:30
hdisk7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk4 43.7 3.2M 150.8 3.2M 67.7K 147.0 4.0 0.3 27.6 0 0 3.8 2.9 0.7 5.0 0 0 0.3 0.0 19.4 0.0 0.0 26.1 16:05:30
hdisk17 0.3 1.2K 0.3 0.0 1.2K 0.0 0.0 0.0 0.0 0 0 0.3 7.2 5.3 8.2 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk6 67.8 3.0M 151.8 2.9M 45.1K 149.1 7.6 0.2 58.4 0 0 2.8 2.8 0.7 4.6 0 0 0.5 0.0 27.1 0.0 0.1 51.6 16:05:30
hdisk21 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk27 0.4 1.2K 0.3 0.0 1.2K 0.0 0.0 0.0 0.0 0 0 0.3 16.7 7.7 34.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk23 61.3 3.3M 178.8 3.3M 59.6K 175.9 5.8 0.2 63.7 0 0 2.9 2.9 0.8 5.7 0 0 0.8 0.0 61.8 0.0 0.1 57.6 16:05:30
hdisk1 64.5 3.2M 149.7 3.2M 48.3K 146.8 7.0 0.3 45.0 0 0 2.9 2.5 0.9 4.5 0 0 0.7 0.0 46.4 0.0 0.1 42.0 16:05:30
hdisk20 64.8 3.3M 148.6 3.2M 90.2K 145.0 7.1 0.3 52.5 0 0 3.5 2.7 0.9 4.9 0 0 1.0 0.0 41.7 0.0 0.1 49.8 16:05:30
hdisk22 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk28 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk19 42.6 3.5M 162.6 3.4M 68.9K 160.0 3.6 0.2 22.2 0 0 2.7 1.6 0.5 4.3 0 0 0.1 0.0 8.2 0.0 0.0 27.2 16:05:30
Disks: xfers read write queue time
-------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
%tm bps tps bread bwrtn rps avg min max time fail wps avg min max time fail avg min max avg avg serv
act serv serv serv outs serv serv serv outs time time time wqsz sqsz qfull
hdisk0 53.9 3.0M 153.7 3.0M 41.9K 151.1 5.1 0.2 38.4 0 0 2.6 3.0 1.1 4.6 0 0 0.2 0.0 14.7 0.0 0.0 31.7 16:05:30
hdisk26 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk2 63.6 3.2M 144.1 3.2M 64.4K 141.3 7.3 0.2 72.3 0 0 2.8 3.2 0.7 4.5 0 0 0.9 0.0 28.8 0.0 0.1 46.1 16:05:30
hdisk24 56.0 2.9M 139.6 2.8M 77.3K 135.3 6.2 0.2 56.6 0 0 4.3 3.0 1.0 4.7 0 0 0.5 0.0 19.0 0.0 0.1 34.9 16:05:30
hdisk30 65.5 3.3M 156.9 3.2M 70.9K 152.7 7.1 0.3 42.8 0 0 4.2 3.0 0.7 5.6 0 0 0.6 0.0 20.2 0.0 0.1 50.1 16:05:30
hdisk33 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk34 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk37 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk41 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk40 63.5 2.8M 148.2 2.7M 103.1K 143.9 7.0 0.2 42.0 0 0 4.3 2.9 1.0 5.2 0 0 0.8 0.0 19.2 0.0 0.1 49.7 16:05:30
hdisk38 60.6 3.0M 146.1 2.9M 70.9K 142.5 7.0 0.2 64.1 0 0 3.6 2.7 0.8 5.4 0 0 0.8 0.0 24.1 0.0 0.1 45.4 16:05:30
hdisk25 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk35 50.0 4.0M 197.6 3.9M 107.9K 193.2 3.7 0.2 37.7 0 0 4.3 3.0 0.6 5.4 0 0 0.3 0.0 15.2 0.0 0.0 41.9 16:05:30
hdisk32 41.9 3.0M 159.2 3.0M 54.8K 156.0 3.5 0.2 25.7 0 0 3.2 3.4 1.0 4.8 0 0 0.1 0.0 12.6 0.0 0.0 21.7 16:05:30
hdisk36 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk42 79.7 3.0M 159.3 2.9M 83.8K 155.5 10.1 0.2 92.3 0 0 3.8 2.6 0.9 5.3 0 0 2.2 0.0 50.5 0.0 0.1 79.7 16:05:30
hdisk31 3.6 2.1M 52.7 1.7M 391.7K 35.6 0.8 0.2 7.1 0 0 17.1 1.0 0.5 3.4 0 0 0.0 0.0 0.2 0.0 0.0 1.3 16:05:30
hdisk43 42.6 2.9M 144.2 2.8M 64.4K 140.9 4.0 0.2 34.3 0 0 3.2 3.0 1.3 5.4 0 0 0.1 0.0 10.9 0.0 0.0 21.2 16:05:30
hdisk52 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk48 51.2 3.7M 165.5 3.6M 69.3K 161.4 4.6 0.2 31.7 0 0 4.1 3.0 0.6 4.7 0 0 0.3 0.0 12.7 0.0 0.0 35.5 16:05:30
hdisk47 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk44 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk51 50.1 3.7M 187.6 3.6M 90.2K 183.5 3.7 0.2 40.0 0 0 4.1 3.2 1.1 5.0 0 0 0.4 0.0 37.8 0.0 0.0 44.4 16:05:30
hdisk39 0.1 37.7K 3.5 19.3K 18.3K 1.2 0.5 0.3 1.7 0 0 2.4 0.9 0.5 4.6 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
Disks: xfers read write queue time
-------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
%tm bps tps bread bwrtn rps avg min max time fail wps avg min max time fail avg min max avg avg serv
act serv serv serv outs serv serv serv outs time time time wqsz sqsz qfull
hdisk49 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk57 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk45 51.5 3.0M 154.3 3.0M 54.8K 151.5 4.7 0.2 31.6 0 0 2.8 3.2 1.3 5.1 0 0 0.2 0.0 12.5 0.0 0.0 28.1 16:05:30
hdisk50 7.9 2.1M 50.2 1.7M 391.7K 33.0 2.1 0.3 23.3 0 0 17.1 1.5 0.7 18.3 0 0 0.0 0.0 0.5 0.0 0.0 2.8 16:05:30
hdisk55 64.5 3.7M 169.6 3.6M 72.5K 166.0 6.1 0.2 55.9 0 0 3.6 3.4 0.8 5.2 0 0 0.4 0.0 17.6 0.0 0.1 47.0 16:05:30
hdisk54 66.9 3.6M 165.5 3.5M 80.6K 162.3 6.7 0.3 56.3 0 0 3.2 3.0 0.5 5.0 0 0 0.9 0.0 23.7 0.0 0.1 52.7 16:05:30
hdisk53 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk56 81.9 3.2M 142.5 3.1M 83.8K 138.8 11.6 0.3 117.6 0 0 3.6 3.3 1.1 5.3 0 0 1.9 0.0 42.9 0.0 0.1 72.4 16:05:30
hdisk58 82.2 3.6M 168.2 3.6M 77.3K 164.9 9.9 0.2 84.0 0 0 3.2 2.7 0.6 5.2 0 0 1.9 0.0 45.8 0.0 0.1 88.9 16:05:30
hdisk60 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk29 52.5 3.4M 172.4 3.4M 64.4K 170.1 4.3 0.2 51.9 0 0 2.3 2.6 1.0 5.5 0 0 0.2 0.0 12.5 0.0 0.0 37.1 16:05:30
hdisk59 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk61 46.6 3.0M 157.2 2.9M 58.0K 153.7 4.1 0.2 42.8 0 0 3.5 3.5 1.4 5.3 0 0 0.1 0.0 7.8 0.0 0.0 23.1 16:05:30
hdisk63 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk62 65.5 3.0M 152.3 2.9M 74.1K 148.7 7.4 0.3 66.8 0 0 3.6 2.6 0.8 5.4 0 0 1.0 0.0 43.2 0.0 0.1 56.1 16:05:30
hdisk68 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk65 0.3 19.6K 3.0 1.3K 18.3K 0.6 2.1 0.4 6.6 0 0 2.4 1.2 0.6 2.9 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk64 42.9 3.4M 145.4 3.4M 78.9K 141.5 4.1 0.2 25.1 0 0 3.9 3.0 0.7 5.6 0 0 0.3 0.0 14.5 0.0 0.0 23.7 16:05:30
hdisk67 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk46 66.8 3.4M 165.5 3.3M 93.4K 161.8 6.8 0.2 51.6 0 0 3.7 3.1 0.6 5.0 0 0 0.6 0.0 24.2 0.0 0.1 52.1 16:05:30
hdisk71 1.6 411.0K 18.3 19.3K 391.7K 1.2 0.6 0.3 3.2 0 0 17.1 1.1 0.5 3.1 0 0 0.0 0.0 0.1 0.0 0.0 0.1 16:05:30
hdisk70 61.5 2.7M 135.8 2.7M 62.4K 132.2 7.4 0.2 107.1 0 0 3.6 3.1 0.6 4.9 0 0 0.7 0.0 25.7 0.0 0.1 39.2 16:05:30
hdisk74 86.1 3.6M 182.2 3.5M 69.3K 178.9 10.7 0.2 108.8 0 0 3.3 3.2 0.8 5.3 0 0 4.2 0.0 98.7 0.0 0.1 119.1 16:05:30
hdisk72 58.2 2.5M 130.0 2.5M 80.6K 125.7 7.1 0.3 43.8 0 0 4.3 2.9 1.0 5.3 0 0 0.8 0.0 27.0 0.0 0.1 38.6 16:05:30
Disks: xfers read write queue time
-------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
%tm bps tps bread bwrtn rps avg min max time fail wps avg min max time fail avg min max avg avg serv
act serv serv serv outs serv serv serv outs time time time wqsz sqsz qfull
hdisk75 47.3 3.3M 160.7 3.2M 69.3K 157.1 4.0 0.2 30.9 0 0 3.5 3.2 1.2 5.0 0 0 0.2 0.0 12.7 0.0 0.0 27.9 16:05:30
hdisk78 66.2 3.3M 168.3 3.2M 70.9K 165.5 6.7 0.2 48.5 0 0 2.9 3.8 2.0 5.1 0 0 0.9 0.0 31.5 0.0 0.1 56.3 16:05:30
hdisk69 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk77 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk73 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk76 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
hdisk66 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
cd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 16:05:30
}}}
''AIX commands you should not leave home without'' http://www.ibm.com/developerworks/aix/library/au-dutta_cmds.html
''AIX system identification'' http://www.ibm.com/developerworks/aix/library/au-aix-systemid.html
''Determining CPU Speed in AIX'' http://www-01.ibm.com/support/docview.wss?uid=isg3T1000107
CPU monitoring and tuning http://www.ibm.com/developerworks/aix/library/au-aix5_cpu/
Too many Virtual Processors? https://www.ibm.com/developerworks/mydeveloperworks/blogs/AIXDownUnder/entry/too_many_virtual_processors365?lang=en
AIX Virtual Processor Folding is Misunderstood https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/aix_virtual_processor_folding_in_misunderstood110?lang=en
How to find physical CPU socket count for IBM AIX http://www.tek-tips.com/viewthread.cfm?qid=1623771
Single/Dual Core Processor http://www.ibm.com/developerworks/forums/message.jspa?messageID=14270797
http://pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp?topic=%2Fcom.ibm.aix.cmds%2Fdoc%2Faixcmds3%2Flparstat.htm
lparstat command http://www.ibm.com/developerworks/forums/thread.jspa?messageID=14772565
Micropartitioning and Lparstat Output Virtual/Physical http://unix.ittoolbox.com/groups/technical-functional/ibm-aix-l/micropartitioning-and-lparstat-output-virtualphysical-4241112
Capped/Uncapped Partitions http://www.ibmsystemsmag.com/ibmi/trends/linux/See-Linux-Run/Sidebar--Capped-Uncapped-Partitions/
IBM PowerVM Virtualization Introduction and Configuration http://www.redbooks.ibm.com/abstracts/sg247940.html
iostat http://www.wmduszyk.com/wp-content/uploads/2011/01/PE23_Braden_Nasypany.pdf
https://en.wikipedia.org/wiki/Application_lifecycle_management
''12c'' Getting Started with Oracle Application Management Pack (AMP) for Oracle E-Business Suite, Release 12.1.0.1 [ID 1434392.1]
''11g'' Getting Started with Oracle E-Business Suite Plug-in, Release 4.0 [ID 1224313.1]
''10g'' Getting Started with Oracle Application Management Pack and Oracle Application Change Management Pack for Oracle E-Business Suite, Release 3.1 [ID 982302.1]
''Application Management Suite for PeopleSoft (AMS4PSFT)'' http://www.oracle.com/technetwork/oem/app-mgmt/ds-apps-mgmt-suite-psft-166219.pdf
http://download.oracle.com/technology/products/oem/screenwatches/peoplesoft_amp/PeopleSoft_final.html
http://www.psoftsearch.com/managing-peoplesoft-with-application-management-suite/
http://www.oracle.com/technetwork/oem/em12c-screenwatches-512013.html#app_mgmt
https://apex.oracle.com/pls/apex/f?p=44785:24:9222314894074::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:6415,2
<<<
''With AMS we bundled the licenses of AMP and RUEI together in a single SKU. AMP already had multiple features in it, of course.''
<<<
''11g''
http://gasparotto.blogspot.com/2011/04/manage-peoplesoft-with-oem-grid-control.html
http://gasparotto.blogspot.com/2011/04/manage-peoplesoft-with-oem-grid-control_08.html
http://gasparotto.blogspot.com/2011/04/manage-peoplesoft-with-oem-grid-control_09.html
peoplesoft plugin 8.52 install, peoplesoft plugin agent install, http://oraclehowto.wordpress.com/category/oracle-enterprise-manager-11g-plugins/peoplesoft-plugin/
''10g'' http://www.oracle.com/us/products/enterprise-manager/mgmt-pack-for-psft-ds-068946.pdf?ssSourceSiteId=ocomcafr
http://gigaom.com/2012/10/30/meet-arms-two-newest-cores-for-faster-phones-and-greener-servers/
http://gigaom.com/cloud/facebook-amd-hp-and-others-team-up-to-plan-the-arm-data-center-takeover/
''the consortium'' http://www.linaro.org/linux-on-arm
http://www.arm.com/index.php
''ARM and moore's law'' http://www.technologyreview.com/news/507116/moores-law-is-becoming-irrelevant/, http://www.technologyreview.com/news/428481/the-moores-law-moon-shot/
https://sites.google.com/site/embtdbo/wait-event-documentation/ash---active-session-history
ASH patent http://www.google.com/patents?id=cQWbAAAAEBAJ&pg=PA2&source=gbs_selected_pages&cad=3#v=onepage&q&f=false
Practical ASH http://www.scribd.com/rvenrdra/d/44100090-Practical-Advice-on-the-Use-of-Oracle-Database-s-Active-Session-History
magic metric? http://wenku.baidu.com/view/7d07b81b964bcf84b9d57b48.html?from=related
Sifting through the ASHes http://www.oracle.com/technetwork/database/focus-areas/manageability/ppt-active-session-history-129612.pdf
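Top activity over the last day from V$ACTIVE_SESSION_HISTORY (top 10 sessions), resolving the PL/SQL entry object and current object into the calling code, filtered on a package name (&PACKAGE_NAME):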
{{{
col name for a12
col program for a25
col calling_code for a30
col CPU for 9999
col IO for 9999
col TOTAL for 99999
col WAIT for 9999
col user_id for 99999
col sid for 9999
col sql_text format a10
set linesize 300
select /* usercheck */
decode(nvl(to_char(s.sid),-1),-1,'DISCONNECTED','CONNECTED')
"STATUS",
topsession.sid "SID",
topsession.serial#,
u.username "NAME",
topsession.program "PROGRAM",
topsession.sql_plan_hash_value,
topsession.sql_id,
st.sql_text sql_text,
topsession."calling_code",
max(topsession.CPU) "CPU",
max(topsession.WAIT) "WAITING",
max(topsession.IO) "IO",
max(topsession.TOTAL) "TOTAL",
round((s.LAST_CALL_ET/60),2) ELAP_MIN
from (
select *
from (
select
ash.session_id sid,
ash.session_serial# serial#,
ash.user_id user_id,
ash.program,
ash.sql_plan_hash_value,
ash.sql_id,
procs1.object_name || decode(procs1.procedure_name,'','','.')||
procs1.procedure_name ||' '||
decode(procs2.object_name,procs1.object_name,'',
decode(procs2.object_name,'','',' => '||procs2.object_name))
||
decode(procs2.procedure_name,procs1.procedure_name,'',
decode(procs2.procedure_name,'','',null,'','.')||procs2.procedure_name)
"calling_code",
sum(decode(ash.session_state,'ON CPU',1,0)) "CPU",
sum(decode(ash.session_state,'WAITING',1,0)) -
sum(decode(ash.session_state,'WAITING',
decode(wait_class,'User I/O',1, 0 ), 0)) "WAIT" ,
sum(decode(ash.session_state,'WAITING',
decode(wait_class,'User I/O',1, 0 ), 0)) "IO" ,
sum(decode(session_state,'ON CPU',1,1)) "TOTAL"
from
v$active_session_history ash,
all_procedures procs1,
all_procedures procs2
where
ash.PLSQL_ENTRY_OBJECT_ID = procs1.object_id (+) and
ash.PLSQL_ENTRY_SUBPROGRAM_ID = procs1.SUBPROGRAM_ID (+) and
ash.PLSQL_OBJECT_ID = procs2.object_id (+) and
ash.PLSQL_SUBPROGRAM_ID = procs2.SUBPROGRAM_ID (+)
and ash.sample_time > sysdate - 1
group by session_id,user_id,session_serial#,program,sql_id,sql_plan_hash_value,
procs1.object_name, procs1.procedure_name, procs2.object_name, procs2.procedure_name
order by sum(decode(session_state,'ON CPU',1,1)) desc
)
where rownum < 10
) topsession,
v$session s,
(select sql_id, dbid, nvl(b.name, a.command_type) sql_text from dba_hist_sqltext a, audit_actions b where a.command_type = b.action(+)) st,
all_users u
where
u.user_id =topsession.user_id and
/* outer join to v$session because the session might be disconnected */
topsession.sid = s.sid (+) and
topsession.serial# = s.serial# (+) and
st.sql_id(+) = s.sql_id
and topsession."calling_code" like '%&PACKAGE_NAME%'
group by topsession.sid, topsession.serial#,
topsession.user_id, topsession.program, topsession.sql_plan_hash_value, topsession.sql_id,
topsession."calling_code",
s.username, s.sid,s.paddr,u.username, st.sql_text, s.LAST_CALL_ET
order by max(topsession.TOTAL) desc
/
}}}
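The same query against DBA_HIST_ACTIVE_SESS_HISTORY (sysdate - 99, top 50), for when the activity has already aged out of the in-memory ASH buffer: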
{{{
col name for a12
col program for a25
col calling_code for a30
col CPU for 9999
col IO for 9999
col TOTAL for 99999
col WAIT for 9999
col user_id for 99999
col sid for 9999
col sql_text format a10
set linesize 300
select /* usercheck */
decode(nvl(to_char(s.sid),-1),-1,'DISCONNECTED','CONNECTED')
"STATUS",
topsession.sid "SID",
topsession.serial#,
u.username "NAME",
topsession.program "PROGRAM",
topsession.sql_plan_hash_value,
topsession.sql_id,
st.sql_text sql_text,
topsession."calling_code",
max(topsession.CPU) "CPU",
max(topsession.WAIT) "WAITING",
max(topsession.IO) "IO",
max(topsession.TOTAL) "TOTAL",
round((s.LAST_CALL_ET/60),2) ELAP_MIN
from (
select *
from (
select
ash.session_id sid,
ash.session_serial# serial#,
ash.user_id user_id,
ash.program,
ash.sql_plan_hash_value,
ash.sql_id,
procs1.object_name || decode(procs1.procedure_name,'','','.')||
procs1.procedure_name ||' '||
decode(procs2.object_name,procs1.object_name,'',
decode(procs2.object_name,'','',' => '||procs2.object_name))
||
decode(procs2.procedure_name,procs1.procedure_name,'',
decode(procs2.procedure_name,'','',null,'','.')||procs2.procedure_name)
"calling_code",
sum(decode(ash.session_state,'ON CPU',1,0)) "CPU",
sum(decode(ash.session_state,'WAITING',1,0)) -
sum(decode(ash.session_state,'WAITING',
decode(wait_class,'User I/O',1, 0 ), 0)) "WAIT" ,
sum(decode(ash.session_state,'WAITING',
decode(wait_class,'User I/O',1, 0 ), 0)) "IO" ,
sum(decode(session_state,'ON CPU',1,1)) "TOTAL"
from
dba_hist_active_sess_history ash,
all_procedures procs1,
all_procedures procs2
where
ash.PLSQL_ENTRY_OBJECT_ID = procs1.object_id (+) and
ash.PLSQL_ENTRY_SUBPROGRAM_ID = procs1.SUBPROGRAM_ID (+) and
ash.PLSQL_OBJECT_ID = procs2.object_id (+) and
ash.PLSQL_SUBPROGRAM_ID = procs2.SUBPROGRAM_ID (+)
and ash.sample_time > sysdate - 99
group by session_id,user_id,session_serial#,program,sql_id,sql_plan_hash_value,
procs1.object_name, procs1.procedure_name, procs2.object_name, procs2.procedure_name
order by sum(decode(session_state,'ON CPU',1,1)) desc
)
where rownum < 50
) topsession,
v$session s,
(select sql_id, dbid, nvl(b.name, a.command_type) sql_text from dba_hist_sqltext a, audit_actions b where a.command_type = b.action(+)) st,
all_users u
where
u.user_id =topsession.user_id and
/* outer join to v$session because the session might be disconnected */
topsession.sid = s.sid (+) and
topsession.serial# = s.serial# (+) and
st.sql_id(+) = s.sql_id
and topsession."calling_code" like '%&PACKAGE_NAME%'
group by topsession.sid, topsession.serial#,
topsession.user_id, topsession.program, topsession.sql_plan_hash_value, topsession.sql_id,
topsession."calling_code",
s.username, s.sid,s.paddr,u.username, st.sql_text, s.LAST_CALL_ET
order by max(topsession.TOTAL) desc
/
}}}
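A variant back on V$ACTIVE_SESSION_HISTORY, this time filtered on a specific SQL_ID (&SQLID) instead of the calling package: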
{{{
col name for a12
col program for a25
col calling_code for a30
col CPU for 9999
col IO for 9999
col TOTAL for 99999
col WAIT for 9999
col user_id for 99999
col sid for 9999
col sql_text format a10
set linesize 300
select /* usercheck */
decode(nvl(to_char(s.sid),-1),-1,'DISCONNECTED','CONNECTED')
"STATUS",
topsession.sid "SID",
topsession.serial#,
u.username "NAME",
topsession.program "PROGRAM",
topsession.sql_plan_hash_value,
topsession.sql_id,
st.sql_text sql_text,
topsession."calling_code",
max(topsession.CPU) "CPU",
max(topsession.WAIT) "WAITING",
max(topsession.IO) "IO",
max(topsession.TOTAL) "TOTAL",
round((s.LAST_CALL_ET/60),2) ELAP_MIN
from (
select *
from (
select
ash.session_id sid,
ash.session_serial# serial#,
ash.user_id user_id,
ash.program,
ash.sql_plan_hash_value,
ash.sql_id,
procs1.object_name || decode(procs1.procedure_name,'','','.')||
procs1.procedure_name ||' '||
decode(procs2.object_name,procs1.object_name,'',
decode(procs2.object_name,'','',' => '||procs2.object_name))
||
decode(procs2.procedure_name,procs1.procedure_name,'',
decode(procs2.procedure_name,'','',null,'','.')||procs2.procedure_name)
"calling_code",
sum(decode(ash.session_state,'ON CPU',1,0)) "CPU",
sum(decode(ash.session_state,'WAITING',1,0)) -
sum(decode(ash.session_state,'WAITING',
decode(wait_class,'User I/O',1, 0 ), 0)) "WAIT" ,
sum(decode(ash.session_state,'WAITING',
decode(wait_class,'User I/O',1, 0 ), 0)) "IO" ,
sum(decode(session_state,'ON CPU',1,1)) "TOTAL"
from
v$active_session_history ash,
all_procedures procs1,
all_procedures procs2
where
ash.PLSQL_ENTRY_OBJECT_ID = procs1.object_id (+) and
ash.PLSQL_ENTRY_SUBPROGRAM_ID = procs1.SUBPROGRAM_ID (+) and
ash.PLSQL_OBJECT_ID = procs2.object_id (+) and
ash.PLSQL_SUBPROGRAM_ID = procs2.SUBPROGRAM_ID (+)
and ash.sample_time > sysdate - 1
group by session_id,user_id,session_serial#,program,sql_id,sql_plan_hash_value,
procs1.object_name, procs1.procedure_name, procs2.object_name, procs2.procedure_name
order by sum(decode(session_state,'ON CPU',1,1)) desc
)
where rownum < 50
) topsession,
v$session s,
(select sql_id, dbid, nvl(b.name, a.command_type) sql_text from dba_hist_sqltext a, audit_actions b where a.command_type = b.action(+)) st,
all_users u
where
u.user_id =topsession.user_id and
/* outer join to v$session because the session might be disconnected */
topsession.sid = s.sid (+) and
topsession.serial# = s.serial# (+) and
st.sql_id(+) = s.sql_id
and topsession.sql_id = '&SQLID'
group by topsession.sid, topsession.serial#,
topsession.user_id, topsession.program, topsession.sql_plan_hash_value, topsession.sql_id,
topsession."calling_code",
s.username, s.sid,s.paddr,u.username, st.sql_text, s.LAST_CALL_ET
order by max(topsession.TOTAL) desc
/
}}}
{{{
$ cat ashtop
#!/bin/bash
while :; do
sqlplus "/ as sysdba" <<-EOF
@ashtop.sql
EOF
sleep 5
echo
done
}}}
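The wrapper re-runs ashtop.sql every 5 seconds in a fresh sqlplus session. The query below, sampling just the last minute of ASH (sysdate - 1/(60*24)), fits that loop: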
{{{
-- (c) Kyle Hailey 2007, edited by Karl Arao 20091217
col name for a12
col program for a25
col calling_code for a25
col CPU for 9999
col IO for 9999
col TOTAL for 99999
col WAIT for 9999
col user_id for 99999
col sid for 9999
col sql_text format a10
set linesize 300
select /* usercheck */
decode(nvl(to_char(s.sid),-1),-1,'DISCONNECTED','CONNECTED')
"STATUS",
topsession.sid "SID",
topsession.serial#,
u.username "NAME",
topsession.program "PROGRAM",
topsession.sql_plan_hash_value,
topsession.sql_id,
st.sql_text sql_text,
topsession."calling_code",
max(topsession.CPU) "CPU",
max(topsession.WAIT) "WAITING",
max(topsession.IO) "IO",
max(topsession.TOTAL) "TOTAL",
round((s.LAST_CALL_ET/60),2) ELAP_MIN
from (
select *
from (
select
ash.session_id sid,
ash.session_serial# serial#,
ash.user_id user_id,
ash.program,
ash.sql_plan_hash_value,
ash.sql_id,
procs1.object_name || decode(procs1.procedure_name,'','','.')||
procs1.procedure_name ||' '||
decode(procs2.object_name,procs1.object_name,'',
decode(procs2.object_name,'','',' => '||procs2.object_name))
||
decode(procs2.procedure_name,procs1.procedure_name,'',
decode(procs2.procedure_name,'','',null,'','.')||procs2.procedure_name)
"calling_code",
sum(decode(ash.session_state,'ON CPU',1,0)) "CPU",
sum(decode(ash.session_state,'WAITING',1,0)) -
sum(decode(ash.session_state,'WAITING',
decode(wait_class,'User I/O',1, 0 ), 0)) "WAIT" ,
sum(decode(ash.session_state,'WAITING',
decode(wait_class,'User I/O',1, 0 ), 0)) "IO" ,
sum(decode(session_state,'ON CPU',1,1)) "TOTAL"
from
v$active_session_history ash,
all_procedures procs1,
all_procedures procs2
where
ash.PLSQL_ENTRY_OBJECT_ID = procs1.object_id (+) and
ash.PLSQL_ENTRY_SUBPROGRAM_ID = procs1.SUBPROGRAM_ID (+) and
ash.PLSQL_OBJECT_ID = procs2.object_id (+) and
ash.PLSQL_SUBPROGRAM_ID = procs2.SUBPROGRAM_ID (+)
and ash.sample_time > sysdate - 1/(60*24)
group by session_id,user_id,session_serial#,program,sql_id,sql_plan_hash_value,
procs1.object_name, procs1.procedure_name, procs2.object_name, procs2.procedure_name
order by sum(decode(session_state,'ON CPU',1,1)) desc
)
where rownum < 10
) topsession,
v$session s,
(select sql_id, dbid, nvl(b.name, a.command_type) sql_text from dba_hist_sqltext a, audit_actions b where a.command_type = b.action(+)) st,
all_users u
where
u.user_id =topsession.user_id and
/* outer join to v$session because the session might be disconnected */
topsession.sid = s.sid (+) and
topsession.serial# = s.serial# (+) and
st.sql_id(+) = s.sql_id
group by topsession.sid, topsession.serial#,
topsession.user_id, topsession.program, topsession.sql_plan_hash_value, topsession.sql_id,
topsession."calling_code",
s.username, s.sid,s.paddr,u.username, st.sql_text, s.LAST_CALL_ET
order by max(topsession.TOTAL) desc
/
}}}
{{{
grant CREATE SESSION to karlarao;
grant SELECT_CATALOG_ROLE to karlarao;
grant SELECT ANY DICTIONARY to karlarao;
}}}
usage:
{{{
./ash
or
sh ash
}}}
create the file and do ''chmod 755 ash''.. this calls aveactn300.sql
{{{
$ cat ~/dba/bin/ash
#!/bin/bash
while :; do
sqlplus "/ as sysdba" <<-EOF
@/home/oracle/dba/scripts/aveactn300.sql
EOF
sleep 5
echo
done
}}}
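aveactn300.sql buckets the ASH samples into 5-second slots and prints one row per slot: the sample count, the top two events with their average active session contribution, and an ASCII graph of CPU ('+') versus waits ('-') with the cpu_count marked on the scale.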
{{{
$ cat /home/oracle/dba/scripts/aveactn300.sql
-- (c) Kyle Hailey 2007
set lines 500
column f_days new_value v_days
select 1 f_days from dual;
column f_secs new_value v_secs
select 5 f_secs from dual;
--select &seconds f_secs from dual;
column f_bars new_value v_bars
select 5 f_bars from dual;
column aveact format 999.99
column graph format a50
column fpct format 99.99
column spct format 99.99
column tpct format 99.99
column fasl format 999.99
column sasl format 999.99
column first format a40
column second format a40
select to_char(start_time,'DD HH:MI:SS'),
samples,
--total,
--waits,
--cpu,
round(fpct * (total/samples),2) fasl,
decode(fpct,null,null,first) first,
round(spct * (total/samples),2) sasl,
decode(spct,null,null,second) second,
substr(substr(rpad('+',round((cpu*&v_bars)/samples),'+') ||
rpad('-',round((waits*&v_bars)/samples),'-') ||
rpad(' ',p.value * &v_bars,' '),0,(p.value * &v_bars)) ||
p.value ||
substr(rpad('+',round((cpu*&v_bars)/samples),'+') ||
rpad('-',round((waits*&v_bars)/samples),'-') ||
rpad(' ',p.value * &v_bars,' '),(p.value * &v_bars),10) ,0,50)
graph
-- spct,
-- decode(spct,null,null,second) second,
-- tpct,
-- decode(tpct,null,null,third) third
from (
select start_time
, max(samples) samples
, sum(top.total) total
, round(max(decode(top.seq,1,pct,null)),2) fpct
, substr(max(decode(top.seq,1,decode(top.event,'ON CPU','CPU',event),null)),0,25) first
, round(max(decode(top.seq,2,pct,null)),2) spct
, substr(max(decode(top.seq,2,decode(top.event,'ON CPU','CPU',event),null)),0,25) second
, round(max(decode(top.seq,3,pct,null)),2) tpct
, substr(max(decode(top.seq,3,decode(top.event,'ON CPU','CPU',event),null)),0,25) third
, sum(waits) waits
, sum(cpu) cpu
from (
select
to_date(tday||' '||tmod*&v_secs,'YYMMDD SSSSS') start_time
, event
, total
, row_number() over ( partition by id order by total desc ) seq
, ratio_to_report( sum(total)) over ( partition by id ) pct
, max(samples) samples
, sum(decode(event,'ON CPU',total,0)) cpu
, sum(decode(event,'ON CPU',0,total)) waits
from (
select
to_char(sample_time,'YYMMDD') tday
, trunc(to_char(sample_time,'SSSSS')/&v_secs) tmod
, to_char(sample_time,'YYMMDD')||trunc(to_char(sample_time,'SSSSS')/&v_secs) id
, decode(ash.session_state,'ON CPU','ON CPU',ash.event) event
, sum(decode(session_state,'ON CPU',1,decode(session_type,'BACKGROUND',0,1))) total
, (max(sample_id)-min(sample_id)+1) samples
from
v$active_session_history ash
where
sample_time > sysdate - &v_days
group by trunc(to_char(sample_time,'SSSSS')/&v_secs)
, to_char(sample_time,'YYMMDD')
, decode(ash.session_state,'ON CPU','ON CPU',ash.event)
order by
to_char(sample_time,'YYMMDD'),
trunc(to_char(sample_time,'SSSSS')/&v_secs)
) chunks
group by id, tday, tmod, event, total
) top
group by start_time
) aveact,
v$parameter p
where p.name='cpu_count'
order by start_time
/
}}}
I got the job chain info of the IBM Curam batch from the dev team
Here are the details of how the batch works
<<<
IBM Cúram Social Program Management 7.0.10 - 7.0.11
Batch Streaming Architecture
https://www.ibm.com/support/knowledgecenter/SS8S5A_7.0.11/com.ibm.curam.content.doc/BatchPerformanceMechanisms/c_BATCHPER_Architecture1BatchStreamingArchitecture1.html
The Chunker
https://www.ibm.com/support/knowledgecenter/SS8S5A_7.0.11/com.ibm.curam.content.doc/BatchPerformanceMechanisms/c_BATCHPER_Architecture1Chunker1.html
The Stream
https://www.ibm.com/support/knowledgecenter/SS8S5A_7.0.11/com.ibm.curam.content.doc/BatchPerformanceMechanisms/c_BATCHPER_Architecture1Stream1.html
<<<
Here's the SQL to pull data from the SCHEDULER_HISTORY table
{{{
SELECT *
FROM (SELECT Substr(H.job_name, Instr(H.job_name, 'jobId-') + 6, 20)
JOB_ID
,
Substr(H.job_name, Instr(H.job_name, 'job-') + 4,
Instr(H.job_name, '-jobId-') - ( Instr(H.job_name, 'job-') + 4 ))
JOB_NAME,
H.start_time,
H.end_time,
Regexp_substr(H.job_name, '[A-Za-z0-9\-]+',
Instr(H.job_name, '/'))
FUNCTIONAL_AREA,
Nvl(Regexp_substr(H.job_name, 'tier-[0-9]+'), 'N/A')
TIER,
Substr(H.job_name, 1, Instr(H.job_name, '/') - 1)
ORDERED_OR_STANDALONE
FROM scheduler_history H
WHERE ( ( H.start_time BETWEEN :startTime AND :endTime )
OR ( :startTime BETWEEN H.start_time AND H.end_time
OR ( :startTime >= H.start_time
AND H.end_time IS NULL ) ) )
AND H.start_time > To_date(:startTime, 'YYYYMMDD HH24:MI:SS') - 2
AND H.job_name LIKE '%jobId%'
AND H.job_name NOT LIKE '%parallel%'
AND H.job_name NOT LIKE '%snyc%'
AND H.job_name NOT LIKE '%Reporting%'
AND H.job_name NOT LIKE '%Stream%'
ORDER BY 5,
3) sub1
ORDER BY 3 ASC
}}}
This is the dependency diagram of the jobs; I defined levels 0 to 5 to clearly show the sequential dependencies in the data set
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/116051201-53df5980-a646-11eb-8acc-85946c99a655.png]]
Here's the calculated field I used. The Tableau developer needs to complete this as reflected in the diagram above; what I did covers only the jobs executed in the 20210317.xlsx data set
{{{
IF contains(lower(trim([Functional Area])),'daytimebatch')=true then 'level 0'
ELSEIF contains(lower(trim([Functional Area])),'standalone-only')=true then 'level 0'
ELSEIF contains(lower(trim([Functional Area])),'post-start-of-batchjobs')=true then 'level 1'
ELSEIF contains(lower(trim([Functional Area])),'recipientfile')=true then 'level 2'
ELSEIF contains(lower(trim([Functional Area])),'pre-financials')=true then 'level 2'
ELSEIF contains(lower(trim([Functional Area])),'post-financials-reports')=true then 'level 4'
ELSEIF contains(lower(trim([Functional Area])),'post-financials')=true then 'level 4'
ELSEIF contains(lower(trim([Functional Area])),'post-financials2')=true then 'level 4'
ELSEIF contains(lower(trim([Functional Area])),'pre-bulkprint')=true then 'level 4'
ELSEIF contains(lower(trim([Functional Area])),'bulkprint')=true then 'level 5'
ELSEIF contains(lower(trim([Functional Area])),'ebt-2')=true then 'level 5'
ELSEIF contains(lower(trim([Functional Area])),'ebt-response')=true then 'level 5'
ELSEIF contains(lower(trim([Functional Area])),'financials')=true then 'level 3'
ELSEIF contains(lower(trim([Functional Area])),'ebt')=true then 'level 4'
ELSE 'OTHER' END
}}}
Here's the Gantt chart.
From here we can be tactical and systematic about tuning: we can identify the blocking jobs and the longest-running jobs and how they impact the overall batch elapsed time. We can isolate these jobs in the Dynatrace instrumentation, and even run an identified badly performing batch standalone and then profile/tune its top SQLs.
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/116051156-4924c480-a646-11eb-85bf-265efbe56fa4.png ]]
Here's how to create the Gantt chart; note that the Tableau developer needs to tap the SCHEDULER_HISTORY table directly instead of using a data dump from SQL
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/116051200-5346c300-a646-11eb-88b1-6fbab7b01cb0.png ]]
https://www.linkedin.com/pulse/estimating-oltp-execution-latencies-using-ash-john-beresniewicz
{{{
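-- The idea: estimate per-SQL execution latency from ASH alone. Total DB time
-- is SUM(usecs_per_row), and the execution count is approximated from the
-- SQL_EXEC_ID range (sql_exec_id increments per execution). The v$sqlstats
-- average (elapsed_time/executions) is computed alongside as a sanity check.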
WITH
ash_summary
AS
(select
ash.sql_id
,SUM(usecs_per_row) as DBtime_usecs
,1+MAX(sql_exec_id) - MIN(sql_exec_id) as execs
,SUM(usecs_per_row)/(1+MAX(sql_exec_id) - MIN(sql_exec_id))/1000
as avg_latency_msec_ash
,SUM(elapsed_time)/SUM(executions)/1000
as avg_latency_msec_sqlstats
,MAX(substr(sql_text,1,150)) as sqltext
from
v$active_session_history ash
,v$sqlstats sql
where
ash.sql_id is not null
and ash.sql_exec_id is not null
and sql.executions is not null
and sql.executions > 0
and ash.sql_id = sql.sql_id
group by
ash.sql_id
)
select
sql_id
,DBtime_usecs
,execs
,ROUND(avg_latency_msec_ash,6) ash_latency_msec
,ROUND(avg_latency_msec_sqlstats,6) sqlstats_latency_msec
,sqltext
from
ash_summary
order by 3 desc;
}}}
<<showtoc>>
! End to end picture of ORMB and OBIEE performance
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047450-17a9fa00-a642-11eb-83fc-06b6d9c2c482.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047452-17a9fa00-a642-11eb-939f-1ef82583b6c9.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047454-18429080-a642-11eb-9ccf-d81de4442dd3.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047455-18429080-a642-11eb-8b99-93520ddd7c29.png]]
[img[https://user-images.githubusercontent.com/3683046/116047456-18db2700-a642-11eb-8ca5-1c09c3ebc379.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047465-1a0c5400-a642-11eb-8906-e790b70a9195.png]]
[img[https://user-images.githubusercontent.com/3683046/116047466-1a0c5400-a642-11eb-900d-be731beb1eb6.png]]
[img[https://user-images.githubusercontent.com/3683046/116047473-1b3d8100-a642-11eb-9392-c3ef0e6e7334.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047475-1b3d8100-a642-11eb-9a0e-e8c8d24c126f.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047476-1b3d8100-a642-11eb-98db-f14e777cc084.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047479-1bd61780-a642-11eb-85c2-dd7a43bcad28.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047484-1bd61780-a642-11eb-8e24-c93d0ceea5e3.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047485-1c6eae00-a642-11eb-850d-f6f8b1ddd015.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047487-1c6eae00-a642-11eb-811d-099f8aeca152.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047491-1d9fdb00-a642-11eb-8eaf-5856619d69e8.png]]
[img[https://user-images.githubusercontent.com/3683046/116047492-1d9fdb00-a642-11eb-8938-1bc8d59b78e5.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047494-1ed10800-a642-11eb-9ec0-af7f85d9ad0a.png]]
! Logic to separate the workload of OBIEE
Here's what I used on the ASH data to separate the workload of OBIEE.
* BIP reports (front end)
* ODI ETL jobs
* nqsserver (OBIEE processes)
{{{
Tableau calculated field:
IF contains(lower(trim([Module])),'bip')=true THEN 'BIP'
ELSEIF contains(lower(trim([Module])),'odi')=true THEN 'ODI'
ELSEIF contains(lower(trim([Module])),'nqs')=true THEN 'nqsserver'
ELSE 'OTHER' END
}}}
[img[ https://user-images.githubusercontent.com/3683046/116047456-18db2700-a642-11eb-8ca5-1c09c3ebc379.png ]]
Some of the reports are instrumented well enough that the ACTION column shows the report number, but separating the workload by MODULE is the most reliable approach
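For reference, here is a minimal SQL equivalent of that Tableau calculated field, run directly against the ASH data (a sketch, assuming DBA_HIST_ACTIVE_SESS_HISTORY as the source):
{{{
-- classify ASH samples into the three OBIEE workloads by MODULE
select case
         when lower(trim(module)) like '%bip%' then 'BIP'
         when lower(trim(module)) like '%odi%' then 'ODI'
         when lower(trim(module)) like '%nqs%' then 'nqsserver'
         else 'OTHER'
       end workload,
       count(*) ash_samples
from   dba_hist_active_sess_history
group by case
           when lower(trim(module)) like '%bip%' then 'BIP'
           when lower(trim(module)) like '%odi%' then 'ODI'
           when lower(trim(module)) like '%nqs%' then 'nqsserver'
           else 'OTHER'
         end
order by 2 desc;
}}}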
-- from http://www.perfvision.com/statspack/ash.txt
{{{
ASH Report For CDB10/cdb10
DB Name DB Id Instance Inst Num Release RAC Host
CPUs SGA Size Buffer Cache Shared Pool ASH Buffer Size
Top User Events
Top Background Events
Top Event P1/P2/P3 Values
Top Service/Module
Top Client IDs
Top SQL Command Types
Top SQL Statements
Top SQL using literals
Top Sessions
Top Blocking Sessions
Top DB Objects
Top DB Files
Top Latches
Activity Over Time
}}}
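(The full sample output for each of these sections follows below.)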
https://blog.tanelpoder.com/2011/10/24/what-the-heck-is-the-sql-execution-id-sql_exec_id/
-- from http://www.perfvision.com/statspack/ash.txt
{{{
ASH Report For CDB10/cdb10
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
CDB10 1193559071 cdb10 1 10.2.0.1.0 NO tsukuba
CPUs SGA Size Buffer Cache Shared Pool ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
2 440M (100%) 28M (6.4%) 128M (29.1%) 4.0M (0.9%)
Analysis Begin Time: 31-Jul-07 17:52:21
Analysis End Time: 31-Jul-07 18:07:21
Elapsed Time: 15.0 (mins)
Sample Count: 2,647
Average Active Sessions: 2.94
Avg. Active Session per CPU: 1.47
Report Target: None specified
Top User Events DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
db file sequential read User I/O 26.60 0.78
CPU + Wait for CPU CPU 8.88 0.26
db file scattered read User I/O 7.25 0.21
log file sync Commit 5.44 0.16
log buffer space Configuration 4.53 0.13
-------------------------------------------------------------
Top Background Events DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
db file parallel write System I/O 21.61 0.64
log file parallel write System I/O 18.21 0.54
-------------------------------------------------------------
Top Event P1/P2/P3 Values DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
Event % Event P1 Value, P2 Value, P3 Value % Activity
------------------------------ ------- ----------------------------- ----------
Parameter 1 Parameter 2 Parameter 3
-------------------------- -------------------------- --------------------------
db file sequential read 26.97 "201","66953","1" 0.11
file# block# blocks
db file parallel write 21.61 "3","0","2147483647" 3.21
requests interrupt timeout
"2","0","2147483647" 2.49
"5","0","2147483647" 2.42
log file parallel write 18.21 "1","2022","1" 0.68
files blocks requests
db file scattered read 7.37 "201","72065","8" 0.23
file# block# blocks
log file sync 5.48 "4114","0","0" 0.30
buffer# NOT DEFINED NOT DEFINED
-------------------------------------------------------------
Top Service/Module DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
Service Module % Activity Action % Action
-------------- ------------------------ ---------- ------------------ ----------
SYS$USERS UNNAMED 50.70 UNNAMED 50.70
SYS$BACKGROUND UNNAMED 41.56 UNNAMED 41.56
cdb10 OEM.SystemPool 2.64 UNNAMED 1.47
XMLLoader0 1.17
SYS$USERS sqlplus@tsukuba (TNS V1- 1.55 UNNAMED 1.55
cdb10 Lab128 1.36 UNNAMED 1.36
-------------------------------------------------------------
Top Client IDs DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL Command Types DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
-> 'Distinct SQLIDs' is the count of the distinct number of SQLIDs
with the given SQL Command Type found over all the ASH samples
in the analysis period
Distinct Avg Active
SQL Command Type SQLIDs % Activity Sessions
---------------------------------------- ---------- ---------- ----------
INSERT 28 27.81 0.82
SELECT 45 12.73 0.37
UPDATE 11 3.85 0.11
DELETE 4 3.70 0.11
-------------------------------------------------------------
Top SQL Statements DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
SQL ID Planhash % Activity Event % Event
------------- ----------- ---------- ------------------------------ ----------
fd6a0p6333g8z 2993408006 7.59 db file sequential read 3.06
SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY
direct path write temp 1.74
db file scattered read 1.32
298wmz1kxjs1m 4251515144 5.25 CPU + Wait for CPU 2.68
INSERT INTO CM_QOS_PROF SELECT :B1 , R.TOPOLOGYID, :B1 - :B4 , P.NODE_PROFILE_ID
, R.DOCSIFCMTSSERVICEQOSPROFILE FROM CM_SID_RAWDATA R, ( SELECT DISTINCT T.CMID,
P.QOS_PROF_IDX, P.NODE_PROFILE_ID FROM TMP_TOP_SLOW_CM T, CMTS_QOS_PROF P WHERE
T.CMTSID = P.TOPOLOGYID AND P.SECONDID = :B1 ) P WHERE R.BATCHID = :B3 AND R.PR
db file sequential read 1.78
fhawr20n0wy5x 1792062018 3.40 db file sequential read 2.91
INSERT INTO TMP_CALC_HFC_SLOW_CM_TMP SELECT T.CMTSID, T.DOWNID, T.CMID, 0, 0, 0,
T.DOWN_SNR_CNR_A3, T.DOWN_SNR_CNR_A2, T.DOWN_SNR_CNR_A1, T.DOWN_SNR_CNR_A0, R.S
YSUPTIME, R.DOCSIFSIGQUNERROREDS, R.DOCSIFSIGQCORRECTEDS, R.DOCSIFSIGQUNCORRECTA
BLES, R.DOCSIFSIGQSIGNALNOISE, :B3 , L.PREV_SECONDID, L.PREV_DOCSIFSIGQUNERRORED
3a11s4c86wdu5 1366293986 3.21 db file sequential read 1.85
DELETE FROM CM_RAWDATA WHERE BATCHID = 0 AND PROFINDX = :B1
log buffer space 1.06
998t5bbdfm5rm 1914870171 3.21 db file sequential read 1.70
INSERT INTO CM_RAWDATA SELECT PROFINDX, 0 BATCHID, TOPOLOGYID, SAMPLETIME, SYSUP
TIME, DOCSIFCMTSCMSTATUSVALUE, DOCSIFCMTSSERVICEINOCTETS, DOCSIFCMTSSERVICEOUTOC
TETS, DOCSIFCMSTATUSTXPOWER, DOCSIFCMTSCMSTATUSRXPOWER, DOCSIFDOWNCHANNELPOWER,
DOCSIFSIGQUNERROREDS, DOCSIFSIGQCORRECTEDS, DOCSIFSIGQUNCORRECTABLES, DOCSIFSIGQ
-------------------------------------------------------------
Top SQL using literals DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
No data exists for this section of the report.
-------------------------------------------------------------
Top Sessions DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
-> '# Samples Active' shows the number of ASH samples in which the session
was found waiting for that particular event. The percentage shown
in this column is calculated with respect to wall clock time
and not total database activity.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
when the session was waiting for that particular event
-> For sessions running Parallel Queries, this section will NOT aggregate
the PQ slave activity into the session issuing the PQ. Refer to
the 'Top Sessions running PQs' section for such statistics.
Sid, Serial# % Activity Event % Event
--------------- ---------- ------------------------------ ----------
User Program # Samples Active XIDs
-------------------- ------------------------------ ------------------ --------
126, 5 33.59 db file sequential read 18.62
STARGUS 493/900 [ 55%] 4
CPU + Wait for CPU 5.52
146/900 [ 16%] 2
db file scattered read 5.02
133/900 [ 15%] 2
167, 1 21.80 db file parallel write 21.61
SYS oracle@tsukuba (DBW0) 572/900 [ 64%] 0
166, 1 18.47 log file parallel write 18.21
SYS oracle@tsukuba (LGWR) 482/900 [ 54%] 0
133, 763 9.67 db file sequential read 4.80
STARGUS 127/900 [ 14%] 1
direct path write temp 1.74
46/900 [ 5%] 0
db file scattered read 1.32
35/900 [ 4%] 0
152, 618 3.10 db file sequential read 1.10
STARGUS 29/900 [ 3%] 1
-------------------------------------------------------------
Top Blocking Sessions DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
-> Blocking session activity percentages are calculated with respect to
waits on enqueues, latches and "buffer busy" only
-> '% Activity' represents the load on the database caused by
a particular blocking session
-> '# Samples Active' shows the number of ASH samples in which the
blocking session was found active.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
when the blocking session was found active.
Blocking Sid % Activity Event Caused % Event
--------------- ---------- ------------------------------ ----------
User Program # Samples Active XIDs
-------------------- ------------------------------ ------------------ --------
166, 1 5.48 log file sync 5.48
SYS oracle@tsukuba (LGWR) 512/900 [ 57%] 0
-------------------------------------------------------------
Top Sessions running PQs DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
No data exists for this section of the report.
-------------------------------------------------------------
Top DB Objects DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
-> With respect to Application, Cluster, User I/O and buffer busy waits only.
Object ID % Activity Event % Event
--------------- ---------- ------------------------------ ----------
Object Name (Type) Tablespace
----------------------------------------------------- -------------------------
52652 4.08 db file scattered read 4.08
STARGUS.TMP_CALC_HFC_SLOW_CM_TMP (TABLE) SYSTEM
52543 3.32 db file sequential read 3.32
STARGUS.PK_CM_RAWDATA (INDEX) TS_STARGUS
52698 3.21 db file sequential read 2.98
STARGUS.TMP_TOP_SLOW_CM (TABLE) SYSTEM
52542 2.98 db file sequential read 2.98
STARGUS.CM_RAWDATA (TABLE) TS_STARGUS
52699 1.78 db file sequential read 1.78
STARGUS.PK_TMP_TOP_SLOW_CM (INDEX) SYSTEM
-------------------------------------------------------------
Top DB Files DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
-> With respect to Cluster and User I/O events only.
File ID % Activity Event % Event
--------------- ---------- ------------------------------ ----------
File Name Tablespace
----------------------------------------------------- -------------------------
6 23.31 db file sequential read 19.83
/export/home/oracle10/oradata/cdb10/ts_stargus_01.dbf TS_STARGUS
db file scattered read 1.59
direct path write temp 1.59
-------------------------------------------------------------
Top Latches DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
No data exists for this section of the report.
-------------------------------------------------------------
Activity Over Time DB/Inst: CDB10/cdb10 (Jul 31 17:52 to 18:07)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period
Slot Event
Slot Time (Duration) Count Event Count % Event
-------------------- -------- ------------------------------ -------- -------
17:52:21 (1.7 min) 354 log file parallel write 85 3.21
db file sequential read 82 3.10
db file parallel write 65 2.46
17:54:00 (2.0 min) 254 CPU + Wait for CPU 73 2.76
db file sequential read 46 1.74
log file parallel write 44 1.66
17:56:00 (2.0 min) 323 log file parallel write 94 3.55
db file parallel write 85 3.21
db file sequential read 85 3.21
17:58:00 (2.0 min) 385 log file parallel write 109 4.12
db file parallel write 95 3.59
db file sequential read 71 2.68
18:00:00 (2.0 min) 470 db file sequential read 169 6.38
db file parallel write 66 2.49
log file parallel write 61 2.30
18:02:00 (2.0 min) 277 db file sequential read 139 5.25
db file parallel write 58 2.19
CPU + Wait for CPU 39 1.47
18:04:00 (2.0 min) 364 db file parallel write 105 3.97
db file scattered read 90 3.40
db file sequential read 80 3.02
18:06:00 (1.4 min) 220 db file parallel write 67 2.53
db file scattered read 44 1.66
db file sequential read 42 1.59
-------------------------------------------------------------
End of Report
}}}
<<<
Active Session History (ASH) performed an emergency flush. This may mean that ASH is undersized. If emergency flushes are a recurring issue, you may consider increasing ASH size by setting the value of _ASH_SIZE to a sufficiently large value. Currently, ASH size is 16777216 bytes. Both ASH size and the total number of emergency flushes since instance startup can be monitored by running the following query:
select total_size,awr_flush_emergency_count from v$ash_info;
<<<
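If ASH does turn out to be undersized, the change the advisory suggests looks like this (a sketch only; _ASH_SIZE is a hidden parameter, so validate with Oracle Support before touching it):
{{{
-- example only: double ASH from the default 16MB to 32MB
alter system set "_ash_size"=33554432;
-- then watch whether the emergency flush count keeps climbing
select total_size, awr_flush_emergency_count from v$ash_info;
}}}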
''RE: Finding Sessions using AWR Report - ASH'' http://www.evernote.com/shard/s48/sh/733fa2e6-4feb-45cf-ac1a-18a679d9bce5/d6f5a6382d71007a633bc30d0a225db6
When slicing and dicing ASH data, having the correct sample math and granularity matters!
<<showtoc>>
! 1st example - CPU usage across container databases (CDB)
!! second granularity
change to second granularity and apply the formula below
{{{
count(1)
}}}
[img(100%,100%)[https://i.imgur.com/Awjjz6o.png]]
!! minute granularity
change to minute granularity and apply the formula below
{{{
(count(1)*10)/60
}}}
[img(100%,100%)[https://i.imgur.com/KZ1IImy.png]]
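For reference, a minimal sketch of the minute-granularity math, assuming the source is dba_hist_active_sess_history (10-second samples, so each sample stands for 10 seconds of DB time, hence count(1)*10/60 per minute):
{{{
select trunc(sample_time,'MI') sample_minute,
       round(count(*)*10/60, 2) aas
from   dba_hist_active_sess_history
where  sample_time > sysdate - 1
group by trunc(sample_time,'MI')
order by 1;
}}}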
! 2nd example - CPU usage across instances
* This is the consolidated view of CPU and Scheduler wait class of all instances
[img(100%,100%)[https://i.imgur.com/UTWySm3.png]]
* The data is filtered by CPU and Scheduler
[img(40%,40%)[ https://i.imgur.com/nVKkG1P.png]]
* Filtering on the peak July 29 period. If we change to second granularity, you can see that the aggregation is incorrect if the minute-granularity math is still applied
[img(100%,100%)[https://i.imgur.com/qLQa8Zs.png]]
* Changing it back to count(1) with second granularity shows the correct range of AAS CPU usage
[img(100%,100%)[https://i.imgur.com/xPP1kgh.png]]
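And a minimal sketch of the second-granularity version, assuming gv$active_session_history (1-second samples, so the per-second sample count is already the AAS):
{{{
select to_char(sample_time,'MM/DD HH24:MI:SS') sample_second,
       count(*) aas_cpu
from   gv$active_session_history
where  (session_state = 'ON CPU' or wait_class = 'Scheduler')
group by to_char(sample_time,'MM/DD HH24:MI:SS')
order by 1;
}}}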
<<showtoc>>
! ASH granularity, SQL_EXEC_START - peoplesoft job troubleshooting
<<<
* SQL trace would be more granular and definitive on chasing the outlier elap/exec performance (particularly the < 1sec elapsed times)
* SQL Monitoring is another way but with limitations (space, threshold, etc.) https://sqlmaria.com/2017/08/01/getting-the-most-out-of-oracle-sql-monitor/
* ASH is another way but you lose the granularity (especially the < 1sec elapsed times), although the sample_time and sql_exec_start can give you the general wall clock info when a particular SQL started and ended (more on this below)
<<<
!! 1) ASH granularity
An example is SQL_ID 0fhpmaba4znqy, which is executed thousands of times with .000x seconds response time per execute (PHV 2970305186)
{{{
SYS@FMSSTG:PS122STG1 AS SYSDBA> @sql_id
Enter value for sql_id: 0fhpmaba4znqy
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
UPDATE PS_RECV_LOAD_T15 SET RECEIVER_ID = ( SELECT DISTINCT RECEIVER_ID FROM PS_
1 row selected.
BEGINTIM INSTANCE PLANHASH EXECDELTA ROWSDELTA BUFFERGETSDELTA DISKREADSDELTA IOWAITDELTA CPUTIMEDELTA ELAPSEDTIMEDELTA ELAPSEDEXECDELTA SNAP_ID
-------- -------- ----------- ------------ ------------ ---------------- -------------- ------------------ ---------------- ---------------- ---------------- ----------
02-07 15 1 986626382 110 110 235,624,455 7 3,674 1,367,154,117 1,370,452,852 12.458662 5632 W
.
.
02-08 00 1 986626382 188 188 400,969,410 16 8,354 2,292,949,823 2,298,467,546 12.225891 5641 W
02-08 15 1 3862886561 13,946 13,946 42,043,961 949 354,164 322,228,251 332,017,906 .023807 5656
.
.
>>02-11 22 3 2970305186 15,999 15,999 1,055,855 761 371,071 8,703,310 9,964,815 .000286 5691 B
}}}
The ash_elap.sql output using the dba_hist_active_sess_history view shows 1 exec and 0 for avg, min, and max elapsed
{{{
DBA_HIST_ACTIVE_SESSION_HISTORY - ash_elap exec (start to end) avg min max
------------------------------------------------------------------------
SQL_ID SQL_PLAN_HASH_VALUE COUNT(*) AVG MIN MAX
--------------- ------------------------ ---------- ---------- ---------- ----------
0fhpmaba4znqy 2970305186 1 0 0 0
}}}
Then using v$active_session_history, the exec count goes from 1 to 9, showing .56 avg, 0 min, 1 max elapsed
{{{
ACTIVE_SESSION_HISTORY - ash_elap exec avg min max
------------------------------------------------------------------------
0fhpmaba4znqy 2970305186 9 .56 0 1
}}}
SQL tracing this would show far more than 9 execs and lower elapsed time numbers.
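For reference, a minimal sketch of the idea behind ash_elap (not the actual script; see the links in the scripts section below): group the ASH rows by SQL_EXEC_ID and take max(sample_time) - min(sql_exec_start) as the wall clock elapsed per execution. Executions shorter than the 1-second sample interval round down to 0, which is exactly the granularity loss shown above:
{{{
select sql_id, sql_plan_hash_value,
       count(*) execs,
       round(avg(elap),2) avg_elap,
       min(elap) min_elap,
       max(elap) max_elap
from (
  -- one row per sampled execution, elapsed in seconds
  select sql_id, sql_plan_hash_value, sql_exec_id,
         extract(hour   from max(sample_time) - min(sql_exec_start)) * 3600
       + extract(minute from max(sample_time) - min(sql_exec_start)) * 60
       + extract(second from max(sample_time) - min(sql_exec_start)) elap
  from   gv$active_session_history
  where  sql_id = '0fhpmaba4znqy'
  and    sql_exec_id is not null
  group by sql_id, sql_plan_hash_value, sql_exec_id
)
group by sql_id, sql_plan_hash_value;
}}}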
!! 2) ASH SAMPLE_TIME and SQL_EXEC_START visualized
* sql_exec_start can give you the general wall clock info on when a particular SQL started and ended; below is the same Peoplesoft workload. Here we are looking at the job-level view of performance.
* So the main job that they say executes 21K times with millisecond-level per-execute response time actually runs for 2 hours overall. The job is called RSTR_RCVLOAD.
[img(60%,60%)[https://i.imgur.com/eNi3znY.png]]
* Below is the same highlighted 2 hours, only this time the period shown spans 1 month. David Kurtz’s PS360 (https://github.com/davidkurtz/ps360) generated this graph (Process Scheduler Process Map).
[img(100%,100%)[https://i.imgur.com/zNPyygJ.png]]
* As for PHV 2970305186 above (ASH granularity): SQL_ID 0fhpmaba4znqy, compared to the 2-hour end-to-end job run time, is the tiny graph on its own axis (2nd row; it looks like it runs for only a few seconds end to end).
* The others highlighted below it are the rest of the SQL_IDs of RSTR_RCVLOAD
click here for full size image https://i.imgur.com/pJOs8DR.png
[img(100%,100%)[https://i.imgur.com/pJOs8DR.png]]
From the same ASH data, the graph below is the breakdown of the 2-hour time series of RSTR_RCVLOAD above (Process Scheduler Process Map section of PS360).
* The process started 10-FEB-19 10.06.05.000000 PM and ended 10-FEB-19 11.54.55.000000 PM based on sample_time and sql_exec_start.
* The graph is sliced by SQL TYPE and Plan Hash Value, then colored by SQL_ID.
<<<
* The red annotated font marks the Plan Hash Values with multiple SQL_IDs. There are 9 of them that run for at least 30 minutes.
* The black annotated font marks the Plan Hash Values with a single SQL_ID. There are 5 of them.
<<<
click here for full size image https://i.imgur.com/QU47vcy.png
[img(100%,100%)[https://i.imgur.com/QU47vcy.png]]
* All these SQLs are on the CPU event (all green), and the 2 hours execute in a serial manner, using 1 CPU on 1 node.
click here for full size image https://i.imgur.com/foKJJOb.png
[img(100%,100%)[https://i.imgur.com/foKJJOb.png]]
* Below are the red and black plan hash values mentioned above
* The green color is PHV 2970305186 (0fhpmaba4znqy mentioned in ASH granularity), which is also the tiny blip on the time series graph above
[img(40%,40%)[https://i.imgur.com/thYqZcA.png]]
[img(40%,40%)[https://i.imgur.com/zCBgHbq.png]]
* In summary, when we looked at it from the job level, we uncovered more tuning opportunities because we can clearly see which SQLs and plan hash values are eating up the 2-hour end-to-end elapsed time. This workload is a batch job, so this approach works well.
* The wall clock time mattered more than the exact millisecond-per-execute granularity.
! the scripts used - ash dump and ash_elap
* this ash dump script was used to generate the time series breakdown of the 2-hour end-to-end elapsed time
https://raw.githubusercontent.com/karlarao/pull_dump_and_explore_ash/master/ash/0_gvash_to_csv_12c.sql
* ash_elap scripts are used to generate the avg,min,max elapsed/exec
<<<
* ash_elap.sql - get wall clock time, the filter is SQL_ID
** https://raw.githubusercontent.com/karlarao/scripts/master/performance/ash_elap.sql
* ash_elap2.sql - get wall clock time, the filter is “where run_time_sec < &run_time_sec”. So you can just say 0 and it will output all
** https://raw.githubusercontent.com/karlarao/scripts/master/performance/ash_elap2.sql
* ash_elap_user.sql - get wall clock time, the filter is user_id from dba_users. Here you can change the user_id filter to ACTION, MODULE, or PROGRAM
** https://raw.githubusercontent.com/karlarao/scripts/master/performance/ash_elap_user.sql
<<<
If you have multiple MODULEs or PROGRAMs and you want to expose them in the group by, you can do that just like I did below
[img(100%,100%)[https://i.imgur.com/dZHZOGz.png]]
<<<
Then if you want to detail on that SQL_ID, use planx Y <sql_id>
https://raw.githubusercontent.com/karlarao/scripts/master/performance/planx.sql
<<<
.
{{{
https://mail.google.com/mail/u/0/#search/tanel+ash+tpt-oracle/FMfcgxwLtGsWjXhhlwQrvQDJVXGkqPMQ
https://github.com/tanelpoder/tpt-oracle/blob/master/ash/devent_hist.sql
https://raw.githubusercontent.com/tanelpoder/tpt-oracle/master/ash/devent_hist.sql
--parameter1:
direct*
cell*
^(direct|cell|log|db)
--parameter2:
1=1
edit the date filters accordingly
}}}
{{{
If you see the time waited for IOs go up, but you're not trying to do more I/O (same amount of data & workload and exec plans haven't changed), you can report the individual I/O latencies to see if your I/O is just slower this time (due to other activity in the storage subsystem).
You can even estimate wait event counts in different latency buckets using ASH data (more granularity and flexibility compared to AWR).
https://github.com/tanelpoder/tpt-oracle/blob/master/ash/devent_hist.sql
SQL> @ash/devent_hist db.file.*read 1=1 "TIMESTAMP'2020-12-10 00:00:00'" "TIMESTAMP'2020-12-10 23:00:00'"
Wait time Num ASH Estimated Estimated % Event Estimated
Wait Event bucket ms+ Samples Total Waits Total Sec Time Time Graph
---------------------------- --------------- ---------- ----------- ------------ ---------- ------------
db file parallel read < 1 7 31592.4 315.9 8.1 |# |
< 2 6 4044.5 80.9 2.1 | |
< 4 5 1878.6 75.1 1.9 | |
< 8 9 1407.2 112.6 2.9 | |
< 16 19 1572.1 251.5 6.5 |# |
< 32 36 1607.3 514.3 13.2 |# |
< 64 35 809.8 518.3 13.3 |# |
< 128 52 530.8 679.5 17.5 |## |
< 256 44 284.6 728.7 18.7 |## |
< 512 28 88 450.7 11.6 |# |
< 1024 2 3.7 38.1 1 | |
< 4096 1 1 41.0 1.1 | |
< 8192 1 1 81.9 2.1 | |
db file scattered read < 1 4 17209.3 172.1 71.1 |####### |
< 2 1 935.5 18.7 7.7 |# |
< 4 3 1021 40.8 16.9 |## |
< 8 1 131.7 10.5 4.3 | |
db file sequential read < 1 276 1354178.7 13,541.8 7.7 |# |
< 2 221 150962.7 3,019.3 1.7 | |
< 4 515 174345.3 6,973.8 4 | |
< 8 1453 250309.8 20,024.8 11.4 |# |
< 16 1974 181327.4 29,012.4 16.6 |## |
< 32 2302 101718.4 32,549.9 18.6 |## |
< 64 2122 49502.4 31,681.5 18.1 |## |
< 128 1068 12998.8 16,638.4 9.5 |# |
< 256 312 1855.9 4,751.1 2.7 | |
< 512 260 763.7 3,909.9 2.2 | |
< 1024 13 24.7 253.2 .1 | |
< 4096 59 59 2,416.6 1.4 | |
< 8192 127 127 10,403.8 5.9 |# |
This way, any potential latency outliers won't get hidden in averages.
}}}
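The rough idea behind that histogram, as a minimal sketch (this is not Tanel's script; TIME_WAITED is in microseconds and is only fixed up on the last sample of a wait, so zero values are excluded and the bucket counts are estimates at best):
{{{
-- bucket sampled wait latencies into power-of-two millisecond ranges
select event,
       power(2, ceil(log(2, greatest(time_waited/1000, 1)))) wait_bucket_ms,
       count(*) samples
from   gv$active_session_history
where  event like 'db file%read'
and    time_waited > 0
group by event, power(2, ceil(log(2, greatest(time_waited/1000, 1))))
order by 1, 2;
}}}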
I use the following scripts for quick troubleshooting
{{{
sqlmon.sql
snapper.sql
report_sql_monitor_html.sql
report_sql_monitor.sql
find_sql_awr.sql
dplan.sql
dplan_awr.sql
awr_plan_change.sql
px.sql
}}}
http://oracledoug.com/serendipity/index.php?/archives/1614-Network-Events-in-ASH.html
other articles by Doug about ASH
Alternative Pictures Demo
That Pictures demo in full
Time Matters: Throughput vs. Response Time - Part 2
Diagnosing Locking Problems using ASH/LogMiner – The End
Diagnosing Locking Problems using ASH/LogMiner – Part 9
Diagnosing Locking Problems using ASH/LogMiner – Part 8
Diagnosing Locking Problems using ASH/LogMiner – Part 7
Diagnosing Locking Problems using ASH – Part 6
Diagnosing Locking Problems using ASH – Part 5
Diagnosing Locking Problems using ASH – Part 4
http://www.oaktable.net/content/ukoug-2011-ash-outliers
http://oracledoug.com/serendipity/index.php?/archives/1669-UKOUG-2011-Ash-Outliers.html#comments
http://oracledoug.com/ASHoutliers3c.sql
http://oracledoug.com/adaptive_thresholds_faq.pdf
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1525205200346930663 <-- JB and Graham comments
{{{
select sql_id,max(TEMP_SPACE_ALLOCATED)/(1024*1024*1024) gig
from DBA_HIST_ACTIVE_SESS_HISTORY
where
sample_time > sysdate-2 and
TEMP_SPACE_ALLOCATED > (50*1024*1024*1024)
group by sql_id order by sql_id;
}}}
http://www.bobbydurrettdba.com/2012/05/10/finding-query-with-high-temp-space-usage-using-ash-views/
Visualizing Active Session History (ASH) Data With R http://structureddata.org/2011/12/20/visualizing-active-session-history-ash-data-with-r/
also talks about TIME_WAITED being in microseconds; only the last sample of a wait is fixed up, the others will have TIME_WAITED=0
thanks to John Beresniewicz for this info. http://dboptimizer.com/2011/07/20/oracle-time-units-in-v-views/
Dave Abercrombie's research on AAS and ASH
http://aberdave.blogspot.com/search?updated-max=2011-04-02T08:09:00-07:00&max-results=7
http://dboptimizer.com/2011/10/20/tuning-blog-entries/
{{{
ASH
SQL execution times from ASH – using ASH to see SQL execution times and execution time variations
AAS on AWR – my favorite ASH query that shows AAS wait classes as an ascii graph
CPU Wait vs CPU Usage
Simulated ASH 2.1
AWR
Wait Metrics vs v$system_event
Statistic Metrics versus v$sysstat
I/O latency fluctuations
I/O wait histograms
Redo over weeks
AWR mining
Diff’ing AWR reports
Importing AWR repositories
Redo
LGWR redo write times (log file parallel write)
Ratio of Redo bytes to Datablocks writes
Etc
V$ view time units S,CS,MS,US
Parsing 10046 traces
SQL
Display Cursor Explained – what are all those display_cursor options and what exactly is the data
VST – Visual SQL Tuning
VST in DB Optimizer 3.0
VST with 100 Tables !
SQL Joins using sets
Visualizing SQL Queries
VST – product design
View expansion with VST
Outer Joins Graphically
}}}
* ASM Mind Map
http://jarneil.wordpress.com/2008/08/26/the-asm-mind-map/
* v$asm_disk
http://www.rachelp.nl/index_kb.php?menu=articles&actie=show&id=10
http://www.freelists.org/post/oracle-l/ASM-on-SAN,5
http://www.freelists.org/post/oracle-l/ASM-and-EMC-PowerPath
ASM and shared pool sizing - http://www.evernote.com/shard/s48/sh/c3535415-30fd-42fa-885a-85df36616e6e/288c13d20095240c8882594afed99e8b
Bug 11684854 : ASM ORA-4031 IN LARGE POOL FROM CREATE DISKGROUP
14292825: DEFAULT MEMORY PARAMETER VALUES FOR 11.2 ASM INSTANCES LOW
https://twiki.cern.ch/twiki/bin/view/PDBService/ASM_Internals <-- GOOD STUFF
https://twiki.cern.ch/twiki/bin/view/PDBService/HAandPerf
{{{
ASM considerations on SinglePath and MultiPath across versions (OCR,VD,DATA)
In general you gotta have a facility/mechanism for:
* multipathing -> persistent naming -> ASM
on 10gR2, 11gR1 for your OCR and VD you must use the following:
* clustered filesystem (OCFS2) or NFS
* raw devices (RHEL4) or udev (RHEL5)
on 11gR2, for your OCR and VD you must use the following:
* clustered filesystem or NFS
* ASM (mirrored at least 3 disks)
-----------------------
Single Path
-----------------------
If you have ASMlib you will go with this setup
* "ASMlib -> ASM"
If you don't have ASMlib and Powerpath you will go with this setup
* 10gR2 and 11g
* raw devices
* udev -> ASM
* 11gR2
* udev -> ASM
-----------------------
Multi Path
-----------------------
If you have ASMlib and Powerpath you will go with this setup
* 10gR2, 11g, 11gR2
* "powerpath -> ASMlib -> ASM"
If you don't have ASMlib and Powerpath you will go with this setup
* 10gR2
* "dm multipath (dev mapper) -> raw devices -> ASM"
* 11g and 11gR2
* "dm multipath (dev mapper) -> ASM"
you can also be flexible and go with
* "dm multipath (dev mapper) -> ASMlib -> ASM"
-----------------------
Notes
-----------------------
kpartx confuses me..just do this..
- assign and share luns on all nodes.
- fdisk the luns and update partition table on all nodes
- configure multipath
- use /dev/mapper/<mpath_alias>
- create asm storage using above devices
https://forums.oracle.com/forums/thread.jspa?threadID=2288213
}}}
http://www.evernote.com/shard/s48/sh/0012dbf5-6648-4792-84ff-825a363f68d3/a744de57fdb99349388e21cdd9c6059a
http://www.pythian.com/news/1078/oracle-11g-asm-diskgroup-compatibility/
http://www.freelists.org/post/oracle-l/Does-ocssdbin-started-from-11gASM-home-support-diskgroups-mounted-by-10g-ASM-instance,5
{{{
Hi Sanjeev,
I'd like to clear some info first.
1st)... the ocssd.bin
the CSS is created when:
- you use ASM as storage
- when you install Clusterware (RAC, but Clusterware has its separate
home already)
For Oracle Real Application Clusters installations, the CSS daemon
is installed with Oracle Clusterware in a separate Oracle home
directory (also called the Clusterware home directory). For
single-node installations, the CSS daemon is installed in and runs
from the same Oracle home as Oracle Database.
You could identify the Oracle home directory being used to run the CSS daemon:
# cat /etc/oracle/ocr.loc
The output from this command is similar to the following:
[oracle@dbrocaix01 bin]$ cat /etc/oracle/ocr.loc
ocrconfig_loc=/oracle/app/oracle/product/10.2.0/asm_1/cdata/localhost/local.ocr
local_only=TRUE
The ocrconfig_loc parameter specifies the location of the Oracle
Cluster Registry (OCR) used by the CSS daemon. The path up to the
cdata directory is the Oracle home directory where the CSS daemon is
running (/oracle/app/oracle/product/10.2.0/asm_1 in this example). To
confirm you could grep the css deamon and see that it's running on
that home
[oracle@dbrocaix01 bin]$ ps -ef | grep -i css
oracle 4950 1 0 04:23 ? 00:00:00
/oracle/app/oracle/product/10.2.0/asm_1/bin/ocssd.bin
oracle 5806 5609 0 04:26 pts/1 00:00:00 grep -i css
Note:
If the value of the local_only parameter is FALSE, Oracle Clusterware
is installed on this system.
2nd)... ASM and Database compatibility
I'll supply you with some references..
Note 337737.1 Oracle Clusterware - ASM - Database Version Compatibility
Note 363254.1 Applying one-off Oracle Clusterware patches in a mixed
version home environment
and Chapter 4, page 116-120 of Oracle ASM (under the hood & practical
deployment guide) 10g & 11g
In the book it says that there are two types of compatibility settings
between ASM and the RDBMS:
1) instance-level software compatibility settings
- the COMPATIBLE parameter (mine is 10.2.0), this defines what
software features are available to the instance. Setting the
COMPATIBLE parameter in the ASM instance
to 10.1 will not enable you to use 11g ASM new features (variable
extents, etc.)
2) diskgroup-specific settings
- COMPATIBLE.ASM and COMPATIBLE.RDBMS which are persistently stored
in the ASM diskgroup metadata..these compatibility settings are
specific to a diskgroup and control which
attributes are available to the ASM diskgroup and which are
available to the database.
- COMPATIBLE.RDBMS, which defaults to 10.1 in 11g, is the minimum
COMPATIBLE version setting of a database that can mount the
diskgroup.. once you advanced it, it cannot be reversed
- COMPATIBLE.ASM, which controls the persistent format of the on-disk
ASM metadata structures. The ASM compatibility defaults to 10.1 in 11g
and must always be greater than or equal to the RDBMS compatibility
level.. once you advanced it, it cannot be reversed
The combination of the compatibility parameter setting of the
database, the software version of the database, and the RDBMS
compatibility setting of a diskgroup determines whether a database
instance is permitted to mount a given diskgroup. The compatibility
setting also determines which ASM features are available for a
diskgroup.
An ASM instance can support different RDBMS clients with different
compatibility settings, as long as the database COMPATIBLE init.ora
parameter setting of each database instance is greater than or equal
to the RDBMS compatibility of all diskgroups.
You could also read more here...
http://download.oracle.com/docs/cd/B28359_01/server.111/b31107/asmdiskgrps.htm#CHDDIGBJ
So the following info will give us some background on your environment
cat /etc/oracle/ocr.loc
ps -ef | grep -i css
cat /etc/oratab
select name, group_number, value from v$asm_attribute order by 2;
select db_name, status,software_version,compatible_version from v$asm_client;
select name,compatibility, database_compatibility from v$asm_diskgroup;
I hope I did not confuse you with all of this info.
- Karl Arao
http://karlarao.wordpress.com
}}}
http://blog.ronnyegner-consulting.de/2009/10/27/asm-resilvering-or-how-to-recovery-your-asm-in-crash-scenarios/
http://www.ardentperf.com/2010/07/15/asm-mirroring-no-hot-spare-disk/
http://asmsupportguy.blogspot.com/2010/05/how-to-map-asmlib-disk-to-device-name.html
http://uhesse.wordpress.com/2010/12/01/database-migration-to-asm-with-short-downtime/
{{{
backup as copy database format '+DATA';
switch database to copy;
}}}
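After the switch, a quick sanity check (a minimal sketch; control files, online logs, and temp files need their own moves, which the linked articles cover):
{{{
-- confirm which files now live in +DATA
select name   from v$controlfile;
select name   from v$datafile;
select member from v$logfile;
select name   from v$tempfile;
}}}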
''Migrating Databases from non-ASM to ASM and Vice-Versa'' http://www.idevelopment.info/data/Oracle/DBA_tips/Automatic_Storage_Management/ASM_33.shtml
-- ''OCFS to ASM''
''How to Migrate an Existing RAC database to ASM'' http://www.colestock.com/blogs/2008/05/how-to-migrate-existing-rac-database-to.html
http://oss.oracle.com/pipermail/oracleasm-users/2009-June/000094.html
{{{
[root@uscdcmix30 ~]# time dd if=/dev/VgCDCMIX30_App/app_new bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real 0m39.045s
user 0m0.083s
sys 0m6.467s
[root@uscdcmix30 ~]# time dd if=/dev/oracleasm/disks/DGMIX03 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real 1m1.784s
user 0m0.084s
sys 0m14.914s
[root@uscdcmix30 ~]# time dd if=/dev/oracleasm/disks/DGMIX04 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real 1m17.748s
user 0m0.069s
sys 0m13.409s
[root@uscdcmix30 ~]# time dd if=/dev/oracleasm/disks/DGMIX03 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real 1m2.702s
user 0m0.090s
sys 0m16.682s
[root@uscdcmix30 ~]# time dd if=/dev/oracleasm/disks/DGMIX04 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real 1m19.698s
user 0m0.079s
sys 0m16.774s
[root@uscdcmix30 ~]# time dd if=/dev/oracleasm/disks/DGMIX03 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real 1m2.037s
user 0m0.085s
sys 0m14.386s
[root@uscdcmix30 ~]# time dd if=/dev/oracleasm/disks/DGMIX03 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real 1m2.822s
user 0m0.052s
sys 0m11.703s
[root@uscdcmix30 ~]# oracleasm listdisks
DGCRM01
DGCRM02
DGCRM03
DGCRM04
DGCRM05
DGCRM06
DGMIX01
DGMIX02
DGMIX03
DGMIX04
[root@uscdcmix30 ~]# oracleasm deletedisk DGMIX03
Clearing disk header: done
Dropping disk: done
[root@uscdcmix30 ~]# time dd if=/dev/emcpowers1 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real 1m0.955s
user 0m0.044s
sys 0m11.446s
[root@uscdcmix30 ~]# pvcreate /dev/emcpowers1
Physical volume "/dev/emcpowers1" successfully created
[root@uscdcmix30 ~]# vgcreate VgTemp /dev/emcpowers1
/dev/emcpowero: open failed: No such device
/dev/emcpowero1: open failed: No such device
Volume group "VgTemp" successfully created
[root@uscdcmix30 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VgCDCCRM30_App 1 1 0 wz--n- 101.14G 0
VgCDCCRM30_Arch 1 1 0 wz--n- 101.14G 0
VgCDCMIX30_App 1 1 0 wz--n- 100.00G 0
VgTemp 1 0 0 wz--n- 100.00G 100.00G
vg00 1 7 0 wz--n- 136.50G 66.19G
vg01 1 1 0 wz--n- 101.14G 0
vg03 2 1 0 wz--n- 505.74G 101.14G
[root@uscdcmix30 ~]# lvcreate -L 102396 -n TestLV VgTemp
Logical volume "TestLV" created
[root@uscdcmix30 ~]# time dd if=/dev/VgTemp/TestLV bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real 0m34.027s
user 0m0.056s
sys 0m4.698s
}}}
How to create ASM filesystem in Oracle 11gR2
http://translate.google.com/translate?sl=auto&tl=en&u=http://www.dbform.com/html/2010/1255.html
OTN ASM
http://www.oracle.com/technology/tech/linux/asmlib/index.html
http://www.oracle.com/technology/tech/linux/asmlib/raw_migration.html
http://www.oracle.com/technology/tech/linux/asmlib/multipath.html
http://www.oracle.com/technology/tech/linux/asmlib/persistence.html
ASM using ASMLib and Raw Devices
http://www.oracle-base.com/articles/10g/ASMUsingASMLibAndRawDevices.php
Raw devices with release 11: Note ID 754305.1
# However, the Unbreakable Enterprise Kernel is optional, and Oracle Linux continues to include a Red Hat compatible kernel, compiled directly from Red Hat Enterprise Linux source code, for customers who require strict RHEL compatibility. Oracle also recommends the Unbreakable Enterprise Kernel when running third party software and third party hardware.
# Performance improvements
latencytop?
# ASMlib and virtualization modules in the kernel
Updated Kernel Modules
The Unbreakable Enterprise Kernel includes both OCFS2 1.6 as well as Oracle ASMLib, the kernel
driver for Oracle’s Automatic Storage Management feature. There is no need to install separate RPMs
to implement these kernel features. Also, the Unbreakable Enterprise Kernel can be run directly on
bare metal or as a virtual guest on Oracle VM, both in hardware virtualized (HVM) and paravirtualized (PV) mode, as it implements the paravirt_ops instruction set and includes the xen_netfront and
xen_blkfront drivers.
# The Unbreakable Enterprise Kernel itself already includes ocfs2 and oracleasm
Questions:
1) Since it will be a new kernel, what if I have a third party module like EMC Powerpath? I'm sure I'll have to reinstall it once I use the new kernel. But, once reinstalled, will it be certified with EMC (or vice versa)?
2) Also, Oracle says, if you have to maintain compatibility with a third party module, you can use the old vanilla kernel. The question is, since the ASMlib module is already integrated in the Unbreakable Kernel, once I use the non-Unbreakable kernel do they also have the old style RPM (oracleasm-`uname -r` - kernel driver) for getting the ASMlib module?
OR
if it's not supported at all and I'm
ASMLIB has three components.
1. oracleasm-support - user space shell scripts
2. oracleasmlib - user space library (closed source)
3. oracleasm-`uname -r` - kernel driver <-- kernel dependent
###############################################################################################
-- from this thread http://www.freelists.org/post/oracle-l/ASM-and-EMC-PowerPath
! The Storage Report (ASM -> Linux -> EMC)
Below is a sample storage report that you should have; it clearly shows the relationship between the Oracle layer (ASM), Linux, and the SAN storage. This info is very useful for you and the storage engineer, so you know which is which in case of catastrophic problems.
Very useful for storage activities like:
* SAN Migration
* Add/Remove disk
* Powerpath upgrade
* Kernel upgrade
//(Note: The images below might be too big on your current screen resolution; to have a better view, just right click and download the images, or ''double click'' on this page to see the full path of the images.)//
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TaFARe_nqfI/AAAAAAAABOI/jXAshWxpfw8/powerpath1.png]]
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TaFARSJDiLI/AAAAAAAABOE/SoDU7jrddUQ/powerpath2.png]]
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TaFAUat0HcI/AAAAAAAABOM/qe5qoeF3wTw/powerpath3.png]]
! What info do you need to produce the report?
''You need the following:''
* AWR time series output (my scripts http://karlarao.wordpress.com/scripts-resources)
* output of the command ''powermt display dev=all'' (run as root)
* RDA
* SAR (because I just love looking at the performance data)
* sysreport (run as root)
''You have to collect'' this on each server / instance and properly arrange them per folder, so you won't have a hard time documenting the bits of info you need on the Excel sheet.
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TaFK4F_SexI/AAAAAAAABPQ/VjSQm0_uUUM/powerdevices4.png]]
''Below is the drill down on each folder''. The data you'll see is from two separate RAC clusters, each with its own SAN storage. The project I'm working on here is to migrate/consolidate them into a single (newly purchased) SAN storage, so I need to collect all this data to help plan the activity and mitigate the risks/issues. The collection of performance data is also a must, to verify that the IO requirements of the databases can be handled by the new SAN. On this project I have verified that the capacity exceeds the current requirements.
* AWR
<<<
per server
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TaFDmcOnhBI/AAAAAAAABOk/lxo8_tbLqX4/powerdevices5-awr.png]]
> per instance
> [img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TaFKbFENV1I/AAAAAAAABPE/nUCFo_HOjHY/powerdevices5-awr2.png]]
>> awr output on each instance
>> [img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TaFKbZB6nnI/AAAAAAAABPI/8MVhDN5Q_rI/powerdevices5-awr3.png]]
<<<
* powermt display dev=all
<<<
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TaFDmG2XayI/AAAAAAAABOg/0Lo8QoDbm_A/powerdevices6-powermt.png]]
<<<
* RDA
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TaFDmsNLT2I/AAAAAAAABOs/sHa-KUryYFo/powerdevices7-rda.png]]
<<<
* SAR
<<<
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TaFDmcOnhBI/AAAAAAAABOk/lxo8_tbLqX4/powerdevices5-awr.png]]
> sample output
> [img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TaFDmyLyKAI/AAAAAAAABOw/f5UgyqVu09I/powerdevices8-sar.png]]
<<<
* sysreport
<<<
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TaFDm08TuYI/AAAAAAAABO0/bZzSwZX6Vqc/powerdevices8-sysreport.png]]
> sample output
> [img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TaFDnQpFRHI/AAAAAAAABO4/SiDKdZE9kOY/powerdevices8-sysreport2.png]]
<<<
! Putting it all together
On the Excel sheet, you have to fill in the following sections
* From RDA
** ASM Library Information
** ASM Library Disk Information
** Disk Partitions
** Operating System Setup->Operating System Packages
** Operating System Setup->Disk Drives->Disk Mounts
** Oracle Cluster Registry (Cluster -> Cluster Information -> ocrcheck)
* From ''powermt'' command
** Logical Device IDs and names
* From sysreport
** raw devices (possible for OCR and Voting Disk)
** fstab (check for OCFS2 mounts)
* Double check from OS commands
** Voting Disk (''crsctl query css votedisk'')
** ls -l /dev/
** /etc/init.d/oracleasm querydisk <device_name>
''Below are the outputs from the various sources...'' they show you how to map each ''ASM disk'' to a particular ''EMC power device'' (follow the ''RED ARROWS''). You have to do it for all ASM disks, and the method is the same for accounting the ''raw devices'', ''OCFS2'', and ''OCR'' mappings to their respective EMC power devices.
To build the correlated report of the ASM, Linux, and SAN storage, follow the ''BLUE ARROWS''.
With this proper accounting, correlating the ASM, Linux, and EMC storage levels, you will never go wrong, and you have definitive information to share with the EMC Storage Engineer, which they can also ''double check''. That way both the ''DBAs and the Storage guys will be on the same page''.
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TaFvF-b2-tI/AAAAAAAABPk/cdnpkq6yLeg/emcreport10.png]]
Notice above that the ''emcpowerr'' and ''emcpowers'' have no allocations, so what does that mean? can we allocate these devices now? ... mm ''no!'' ... ''stop''...''move back''... ''think''...
I will do the following:
* Run this query to check if it's recognized as ''FOREIGN'' or ''CANDIDATE''
{{{
set lines 400
col name format a20
col label format a20
col path format a20
col redundancy format a20
select a.group_number, a.name, a.header_status, a.mount_status, a.state, a.total_mb, a.free_mb, a.label, path, a.redundancy
from v$asm_disk a
order by 1,2;
GROUP_NUMBER NAME HEADER_STATU STATE TOTAL_MB FREE_MB LABEL PATH REDUNDANCY
------------ -------------------- ------------ -------- ---------- ---------- -------------------- -------------------- --------------------
}}}
* I've taken some precautions in my data gathering by checking the ''fstab'' and ''raw devices config'', and found out that ''there are no pointers to the two devices''..
** I have obsessive-compulsive tendencies, just to make sure that these devices are not used by some service. If these EMC power devices were accidentally used for something else, let's say as a filesystem, Oracle will still allow you to do the ADD/DROP operation on these devices, wiping out all the data on them!
* Another thing I would do is validate with my storage engineer or the in-house DBA whether these disks exist for the purpose of expanding the disk group.
If everything is okay, I can safely say they are candidate disks for expanding the space of my current disk group and go ahead with the activity.
! From Matt Zito (former EMC solutions architect)
<<<
Hey guys,
I haven't gotten this email address straightened out on Oracle-L yet, but I figured I'd drop you a note, and you could forward it on to the list if you cared to.
The doc you read is correct, powerpath will cheerfully work with any of the devices you send IOs to, because the kernel driver intercepts requests for all devices and routes them through itself before dishing them down the appropriate path.
However, setting scandisks to the emcpower has the administrative benefits of making sure the disks don't show up twice. However, even if ASM picks the first of the two disks, it will still be load-balanced successfully.
Thanks,
Matt Zito
(former EMC solutions architect)
<<<
https://blogs.oracle.com/XPSONHA/entry/asr_snmp_on_exadata
Oracle Auto Service Request (ASR) [ID 1185493.1]
''ASR Documentation'' http://www.oracle.com/technetwork/server-storage/asr/documentation/index.html?ssSourceSiteId=ocomen
What DBAs Need to Know - Data Guard 11g ASYNC Redo Transport
http://www.oracle.com/technetwork/database/features/availability/316925-maa-otn-173423.pdf
http://www.oracle.com/technetwork/database/availability/maa-gg-performance-1969630.pdf
http://www.oracle.com/technetwork/database/availability/sync-2437177.pdf
http://www.oracle.com/au/products/database/maa-wp-10gr2-dataguardnetworkbestpr-134557.pdf
''10-min AWR snap interval: 144 samples in a day, 1008 samples in 7 days, 4032 samples in 4 weeks, 52560 samples in 1 year''
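The interval and retention behind that math can be checked, and changed, via the workload repository settings:
{{{
select snap_interval, retention from dba_hist_wr_control;
-- e.g. 10-minute snaps kept for 5 weeks (50400 minutes):
-- exec dbms_workload_repository.modify_snapshot_settings(interval => 10, retention => 50400)
}}}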
''Good chapter on HOW to read AWR reports'' http://filezone.orapub.com/FF_Book/v4Chap9.pdf
{{{
Understand each field of AWR (Doc ID 884046.1)
AWR report is broken into multiple parts.
1)Instance information:-
This provides the instance name, number, snapshot IDs, the total time the report covers, and the database time during this elapsed time.
Elapsed time = end snapshot time - start snapshot time
Database time = work done by the database during this elapsed time (CPU and I/O both add to database time). If this is much less than the elapsed time, the database is relatively idle. Database time does not include time spent by the background processes.
2)Cache Sizes : This shows the size of each SGA region after AMM has changed them. This information
can be compared to the original init.ora parameters at the end of the AWR report.
3)Load Profile: This important section shows key rates expressed in units of per second and per transaction. This is very important for understanding how the instance is behaving, and it has to be compared to a baseline report to understand the expected load on the machine and the delta during bad times.
4)Instance Efficiency Percentages (Target 100%): This section shows how close the vital ratios, like buffer cache hit, library cache hit, parses, etc., are to 100%. These can be taken as indicators but should not be a cause of worry if they are low, as the ratios could be low or high based on database activity and not due to a real performance problem. Hence these are not standalone statistics and should be read only for a high-level view.
5)Shared Pool Statistics: This summarizes changes to the shared pool during the snapshot
period.
6)Top 5 Timed Events: This is the section most relevant for analysis. It shows what % of database time each wait event accounted for. Until 9i, this was the way to back-track the total database time for the report, as there was no database time column in 9i.
7)RAC Statistics: This part is seen only in case of a cluster instance. It provides important indications on the average time taken for block transfers, block receives, and messages, which can point to performance problems in the cluster instead of the database.
8)Wait Class: This depicts which wait class was the area of contention and where we need to focus: network, concurrency, cluster, I/O, application, configuration, etc.
9)Wait Events Statistics Section: This section shows a breakdown of the main wait events in the
database including foreground and background database wait events as well as time model, operating
system, service, and wait classes statistics.
10)Wait Events: This AWR report section provides more detailed wait event information for foreground
user processes which includes Top 5 wait events and many other wait events that occurred during
the snapshot interval.
11)Background Wait Events: This section is relevant to the background process wait events.
12)Time Model Statistics: Time model statistics report how database-processing time is spent. This section contains detailed timing information on particular components participating in database processing. It also gives background process timing, which is not included in database time.
13)Operating System Statistics: This section is important from an OS/server contention point of view. It shows the main external resources including I/O, CPU, memory, and network usage.
14)Service Statistics: The service statistics section gives information about services and their load in terms of CPU seconds, I/O seconds, number of buffer reads, etc.
15)SQL Section: This section displays top SQL, ordered by important SQL execution metrics.
a)SQL Ordered by Elapsed Time: Includes SQL statements that took significant execution
time during processing.
b)SQL Ordered by CPU Time: Includes SQL statements that consumed significant CPU time
during its processing.
c)SQL Ordered by Gets: These SQLs performed a high number of logical reads while
retrieving data.
d)SQL Ordered by Reads: These SQLs performed a high number of physical disk reads while
retrieving data.
e)SQL Ordered by Parse Calls: These SQLs experienced a high number of reparsing operations.
f)SQL Ordered by Sharable Memory: Includes SQL statements cursors which consumed a large
amount of SGA shared pool memory.
g)SQL Ordered by Version Count: These SQLs have a large number of versions in shared pool
for some reason.
16)Instance Activity Stats: This section contains statistical information describing how the database
operated during the snapshot period.
17)I/O Section: This section shows the all-important I/O activity. It provides the time it took to make one I/O, e.g. Av Rd(ms), and I/Os per second, e.g. Av Rd/s. This should be compared to the baseline to see if the rate of I/O has always been like this or there is a divergence now.
18)Advisory Section: This section show details of the advisories for the buffer, shared pool, PGA and
Java pool.
19)Buffer Wait Statistics: This important section shows buffer cache waits statistics.
20)Enqueue Activity: This important section shows how enqueues operate in the database. Enqueues are
special internal structures which provide concurrent access to various database resources.
21)Undo Segment Summary: This section gives a summary about how undo segments are used by the database.
Undo Segment Stats: This section shows detailed history information about undo segment activity.
22)Latch Activity: This section shows details about latch statistics. Latches are a lightweight serialization mechanism used to single-thread access to internal Oracle structures. A latch should be judged by its sleeps: the sleepiest latch is the one under contention, not the latch with the most requests. Hence run through the sleep breakdown part of this section to arrive at the latch under highest contention.
23)Segment Section: This portion is important for guessing in which segment and which segment type the contention could be. Tally this with the top 5 wait events.
Segments by Logical Reads: Includes top segments which experienced high number of
logical reads.
Segments by Physical Reads: Includes top segments which experienced high number of disk
physical reads.
Segments by Buffer Busy Waits: These segments have the largest number of buffer waits
caused by their data blocks.
Segments by Row Lock Waits: Includes segments that had a large number of row locks on
their data.
Segments by ITL Waits: Includes segments that had a large contention for Interested
Transaction List (ITL). The contention for ITL can be reduced by increasing INITRANS storage
parameter of the table.
24)Dictionary Cache Stats: This section exposes details about how the data dictionary cache is
operating.
25)Library Cache Activity: Includes library cache statistics, which are needed in case you see the library cache in the top 5 wait events. You might want to see if reloads/invalidations are causing the contention or there is some other issue with the library cache.
26)SGA Memory Summary: This tells us the difference in the respective pools at the start and end of the report. This could be an indicator for setting a minimum value for each pool when sga_target is being used.
27)init.ora Parameters: This section shows the original init.ora parameters for the instance during
the snapshot period.
There would be more Sections in case of RAC setups to provide details.
}}}
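To generate the report walked through above, the standard Oracle-supplied scripts are:
{{{
@?/rdbms/admin/awrrpt.sql   -- current instance
@?/rdbms/admin/awrrpti.sql  -- pick a specific dbid/instance
}}}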
''A SQL Performance History from AWR''
http://www.toadworld.com/BLOGS/tabid/67/EntryId/125/A-SQL-Performance-History-from-AWR.aspx <-- this could also be graphed using my awr_topsqlx.sql
''miTrend AWR Report / StatsPack Gathering Procedures Instructions'' https://community.emc.com/docs/DOC-13949 <-- EMC's tool with nice PPT and paper; also talks about "burst" periods for IO sizing, RAID-adjusted IOPS, and EFD IOPS
http://pavandba.files.wordpress.com/2009/11/owp_awr_historical_analysis.pdf
{{{
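-- AWR top SQL report: top 5 SQL by elapsed time per snapshot from dba_hist_sqlstat,
-- joined to dba_hist_sqltext; uncomment the filters near the bottom to slice by
-- time window, snap_id, SQL_ID, statement type, etc.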
set arraysize 5000
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
ttitle center 'AWR Top SQL Report' skip 2
set pagesize 50000
set linesize 300
col snap_id format 99999 heading "Snap|ID"
col tm format a15 heading "Snap|Start|Time"
col inst format 90 heading "i|n|s|t|#"
col dur format 990.00 heading "Snap|Dur|(m)"
col sql_id format a15 heading "SQL|ID"
col phv format 99999999999 heading "Plan|Hash|Value"
col module format a20 heading "Module"
col elap format 999990.00 heading "Elapsed|Time|(s)"
col elapexec format 999990.00 heading "Elapsed|Time|per exec|(s)"
col cput format 999990.00 heading "CPU|Time|(s)"
col iowait format 999990.00 heading "IO|Wait|(s)"
col bget format 99999999990 heading "LIO"
col dskr format 99999999990 heading "PIO"
col rowp format 99999999990 heading "Rows"
col exec format 9999990 heading "Exec"
col prsc format 999999990 heading "Parse|Count"
col pxexec format 9999990 heading "PX|Exec"
col pctdbt format 990 heading "DB Time|%"
col aas format 990.00 heading "A|A|S"
col time_rank format 90 heading "Time|Rank"
col sql_text format a40 heading "SQL|Text"
select *
from (
select
sqt.snap_id snap_id,
TO_CHAR(sqt.tm,'MM/DD/YY HH24:MI') tm,
sqt.inst inst,
sqt.dur dur,
sqt.sql_id sql_id,
sqt.phv phv,
to_clob(decode(sqt.module, null, null, sqt.module)) module,
nvl((sqt.elap), to_number(null)) elap,
nvl((sqt.elapexec), to_number(null)) elapexec,
nvl((sqt.cput), to_number(null)) cput,
sqt.iowait iowait,
sqt.bget bget,
sqt.dskr dskr,
sqt.rowp rowp,
sqt.exec exec,
sqt.prsc prsc,
sqt.pxexec pxexec,
sqt.aas aas,
sqt.time_rank time_rank
, nvl(st.sql_text, to_clob('** SQL Text Not Available **')) sql_text -- PUT/REMOVE COMMENT TO HIDE/SHOW THE SQL_TEXT
from (
select snap_id, tm, inst, dur, sql_id, phv, module, elap, elapexec, cput, iowait, bget, dskr, rowp, exec, prsc, pxexec, aas, time_rank
from
(
select
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
e.sql_id sql_id,
e.plan_hash_value phv,
max(e.module) module,
sum(e.elapsed_time_delta)/1000000 elap,
decode((sum(e.executions_delta)), 0, to_number(null), ((sum(e.elapsed_time_delta)) / (sum(e.executions_delta)) / 1000000)) elapexec,
sum(e.cpu_time_delta)/1000000 cput,
sum(e.iowait_delta)/1000000 iowait,
sum(e.buffer_gets_delta) bget,
sum(e.disk_reads_delta) dskr,
sum(e.rows_processed_delta) rowp,
sum(e.executions_delta) exec,
sum(e.parse_calls_delta) prsc,
sum(px_servers_execs_delta) pxexec,
(sum(e.elapsed_time_delta)/1000000) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60) aas,
DENSE_RANK() OVER (
PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sqlstat e
where
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
and e.dbid = s0.dbid
AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
and e.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
and e.snap_id = s0.snap_id + 1
group by
s0.snap_id, s0.END_INTERVAL_TIME, s0.instance_number, e.sql_id, e.plan_hash_value, e.elapsed_time_delta, s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME
)
where
time_rank <= 5 -- GET TOP 5 SQL ACROSS SNAP_IDs... YOU CAN ALTER THIS TO HAVE MORE DATA POINTS
)
sqt,
dba_hist_sqltext st
where st.sql_id(+) = sqt.sql_id
and st.dbid(+) = &_dbid
-- AND TO_CHAR(tm,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= 7
-- AND TO_CHAR(tm,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= 1800
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- AND snap_id in (338,339)
-- AND snap_id >= 335 and snap_id <= 339
-- AND snap_id = 3172
-- and sqt.sql_id = 'dj3n91vxsyaq5'
-- AND lower(st.sql_text) like 'select%'
-- AND lower(st.sql_text) like 'insert%'
-- AND lower(st.sql_text) like 'update%'
-- AND lower(st.sql_text) like 'merge%'
-- AND pxexec > 0
-- AND aas > .5
order by
-- snap_id -- TO GET SQL OUTPUT ACROSS SNAP_IDs SEQUENTIALLY AND ASC
nvl(sqt.elap, -1) desc, sqt.sql_id -- TO GET SQL OUTPUT BY ELAPSED TIME
)
where rownum <= 20
;
}}}
{{{
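-- AWR CPU and IO workload report: per snapshot, shows CPU count and capacity, DB time,
-- DB CPU, background and RMAN CPU, AAS, OS load, and OS busy/usr/sys/iowait percentages,
-- sourced from dba_hist_osstat and dba_hist_sys_time_model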
set arraysize 5000
COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR CPU and IO Workload Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15 heading instname -- instname
col hostname format a30 heading hostname -- hostname
col tm format a17 heading tm -- "tm"
col id format 99999 heading id -- "snapid"
col inst format 90 heading inst -- "inst"
col dur format 999990.00 heading dur -- "dur"
col cpu format 90 heading cpu -- "cpu"
col cap format 9999990.00 heading cap -- "capacity"
col dbt format 999990.00 heading dbt -- "DBTime"
col dbc format 99990.00 heading dbc -- "DBcpu"
col bgc format 99990.00 heading bgc -- "BGcpu"
col rman format 9990.00 heading rman -- "RMANcpu"
col aas format 990.0 heading aas -- "AAS"
col totora format 9999990.00 heading totora -- "TotalOracleCPU"
col busy format 9999990.00 heading busy -- "BusyTime"
col load format 990.00 heading load -- "OSLoad"
col totos format 9999990.00 heading totos -- "TotalOSCPU"
col mem format 999990.00 heading mem -- "PhysicalMemorymb"
col IORs format 9990.000 heading IORs -- "IOPsr"
col IOWs format 9990.000 heading IOWs -- "IOPsw"
col IORedo format 9990.000 heading IORedo -- "IOPsredo"
col IORmbs format 9990.000 heading IORmbs -- "IOrmbs"
col IOWmbs format 9990.000 heading IOWmbs -- "IOwmbs"
col redosizesec format 9990.000 heading redosizesec -- "Redombs"
col logons format 990 heading logons -- "Sess"
col logone format 990 heading logone -- "SessEnd"
col exsraw format 99990.000 heading exsraw -- "Execrawdelta"
col exs format 9990.000 heading exs -- "Execs"
col ucs format 9990.000 heading ucs -- "UserCalls"
col ucoms format 9990.000 heading ucoms -- "Commit"
col urs format 9990.000 heading urs -- "Rollback"
col oracpupct format 990 heading oracpupct -- "OracleCPUPct"
col rmancpupct format 990 heading rmancpupct -- "RMANCPUPct"
col oscpupct format 990 heading oscpupct -- "OSCPUPct"
col oscpuusr format 990 heading oscpuusr -- "USRPct"
col oscpusys format 990 heading oscpusys -- "SYSPct"
col oscpuio format 990 heading oscpuio -- "IOPct"
SELECT * FROM
(
SELECT trim('&_instname') instname,
trim('&_dbid') db_id,
trim('&_hostname') hostname,
s0.snap_id id,
TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
s3t1.value AS cpu,
(round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value cap,
(s5t1.value - s5t0.value) / 1000000 as dbt,
(s6t1.value - s6t0.value) / 1000000 as dbc,
(s7t1.value - s7t0.value) / 1000000 as bgc,
round(DECODE(s8t1.value,null,'null',(s8t1.value - s8t0.value) / 1000000),2) as rman,
((s5t1.value - s5t0.value) / 1000000)/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,
round(((s6t1.value - s6t0.value) / 1000000) + ((s7t1.value - s7t0.value) / 1000000),2) totora,
-- s1t1.value - s1t0.value AS busy, -- this is osstat BUSY_TIME
round(s2t1.value,2) AS load,
(s1t1.value - s1t0.value)/100 AS totos,
((round(((s6t1.value - s6t0.value) / 1000000) + ((s7t1.value - s7t0.value) / 1000000),2)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oracpupct,
((round(DECODE(s8t1.value,null,'null',(s8t1.value - s8t0.value) / 1000000),2)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as rmancpupct,
(((s1t1.value - s1t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpupct,
(((s17t1.value - s17t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpuusr,
(((s18t1.value - s18t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpusys,
(((s19t1.value - s19t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpuio
FROM dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_osstat s1t0, -- BUSY_TIME
dba_hist_osstat s1t1,
dba_hist_osstat s17t0, -- USER_TIME
dba_hist_osstat s17t1,
dba_hist_osstat s18t0, -- SYS_TIME
dba_hist_osstat s18t1,
dba_hist_osstat s19t0, -- IOWAIT_TIME
dba_hist_osstat s19t1,
       dba_hist_osstat s2t1, -- LOAD, osstat just get the end value
       dba_hist_osstat s3t1, -- NUM_CPUS, osstat just get the end value
dba_hist_sys_time_model s5t0,
dba_hist_sys_time_model s5t1,
dba_hist_sys_time_model s6t0,
dba_hist_sys_time_model s6t1,
dba_hist_sys_time_model s7t0,
dba_hist_sys_time_model s7t1,
dba_hist_sys_time_model s8t0,
dba_hist_sys_time_model s8t1
WHERE s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s1t0.dbid = s0.dbid
AND s1t1.dbid = s0.dbid
AND s2t1.dbid = s0.dbid
AND s3t1.dbid = s0.dbid
AND s5t0.dbid = s0.dbid
AND s5t1.dbid = s0.dbid
AND s6t0.dbid = s0.dbid
AND s6t1.dbid = s0.dbid
AND s7t0.dbid = s0.dbid
AND s7t1.dbid = s0.dbid
AND s8t0.dbid = s0.dbid
AND s8t1.dbid = s0.dbid
AND s17t0.dbid = s0.dbid
AND s17t1.dbid = s0.dbid
AND s18t0.dbid = s0.dbid
AND s18t1.dbid = s0.dbid
AND s19t0.dbid = s0.dbid
AND s19t1.dbid = s0.dbid
AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s1t0.instance_number = s0.instance_number
AND s1t1.instance_number = s0.instance_number
AND s2t1.instance_number = s0.instance_number
AND s3t1.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s6t0.instance_number = s0.instance_number
AND s6t1.instance_number = s0.instance_number
AND s7t0.instance_number = s0.instance_number
AND s7t1.instance_number = s0.instance_number
AND s8t0.instance_number = s0.instance_number
AND s8t1.instance_number = s0.instance_number
AND s17t0.instance_number = s0.instance_number
AND s17t1.instance_number = s0.instance_number
AND s18t0.instance_number = s0.instance_number
AND s18t1.instance_number = s0.instance_number
AND s19t0.instance_number = s0.instance_number
AND s19t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s1t0.snap_id = s0.snap_id
AND s1t1.snap_id = s0.snap_id + 1
AND s2t1.snap_id = s0.snap_id + 1
AND s3t1.snap_id = s0.snap_id + 1
AND s5t0.snap_id = s0.snap_id
AND s5t1.snap_id = s0.snap_id + 1
AND s6t0.snap_id = s0.snap_id
AND s6t1.snap_id = s0.snap_id + 1
AND s7t0.snap_id = s0.snap_id
AND s7t1.snap_id = s0.snap_id + 1
AND s8t0.snap_id = s0.snap_id
AND s8t1.snap_id = s0.snap_id + 1
AND s17t0.snap_id = s0.snap_id
AND s17t1.snap_id = s0.snap_id + 1
AND s18t0.snap_id = s0.snap_id
AND s18t1.snap_id = s0.snap_id + 1
AND s19t0.snap_id = s0.snap_id
AND s19t1.snap_id = s0.snap_id + 1
AND s1t0.stat_name = 'BUSY_TIME'
AND s1t1.stat_name = s1t0.stat_name
AND s17t0.stat_name = 'USER_TIME'
AND s17t1.stat_name = s17t0.stat_name
AND s18t0.stat_name = 'SYS_TIME'
AND s18t1.stat_name = s18t0.stat_name
AND s19t0.stat_name = 'IOWAIT_TIME'
AND s19t1.stat_name = s19t0.stat_name
AND s2t1.stat_name = 'LOAD'
AND s3t1.stat_name = 'NUM_CPUS'
AND s5t0.stat_name = 'DB time'
AND s5t1.stat_name = s5t0.stat_name
AND s6t0.stat_name = 'DB CPU'
AND s6t1.stat_name = s6t0.stat_name
AND s7t0.stat_name = 'background cpu time'
AND s7t1.stat_name = s7t0.stat_name
AND s8t0.stat_name = 'RMAN cpu time (backup/restore)'
AND s8t1.stat_name = s8t0.stat_name
)
-- WHERE
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (336)
-- aas > 1
-- oracpupct > 50
-- oscpupct > 50
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
{{{
-- TO VIEW DB INFO
set lines 300
select dbid,instance_number,version,db_name,instance_name, host_name
from dba_hist_database_instance
where instance_number = (select instance_number from v$instance)
and rownum < 2;
-- TO VIEW RETENTION INFORMATION
select * from dba_hist_wr_control;
set lines 300
select b.name, a.DBID,
((TRUNC(SYSDATE) + a.SNAP_INTERVAL - TRUNC(SYSDATE)) * 86400)/60 AS SNAP_INTERVAL_MINS,
((TRUNC(SYSDATE) + a.RETENTION - TRUNC(SYSDATE)) * 86400)/60 AS RETENTION_MINS,
((TRUNC(SYSDATE) + a.RETENTION - TRUNC(SYSDATE)) * 86400)/60/60/24 AS RETENTION_DAYS,
TOPNSQL
from dba_hist_wr_control a, v$database b
where a.dbid = b.dbid;
/*
-- SET RETENTION PERIOD TO 30 DAYS (UNIT IS MINUTES, 43200 = 30*24*60)
execute dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 43200);
-- SET RETENTION PERIOD TO 365 DAYS (UNIT IS MINUTES, 525600 = 365*24*60)
exec dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 525600);
-- Create Snapshot
BEGIN
DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
END;
/
*/
-- AWR get recent snapshot
set lines 300
select * from
(SELECT s0.instance_number, s0.snap_id,
to_char(s0.startup_time,'yyyy-mon-dd hh24:mi:ss') startup_time,
TO_CHAR(s0.END_INTERVAL_TIME,'yyyy-mon-dd hh24:mi:ss') snap_start,
TO_CHAR(s1.END_INTERVAL_TIME,'yyyy-mon-dd hh24:mi:ss') snap_end,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) ela_min
FROM dba_hist_snapshot s0,
dba_hist_snapshot s1
WHERE s1.snap_id = s0.snap_id + 1
ORDER BY snap_id DESC)
where rownum < 11;
-- MIN/MAX for dba_hist tables
select count(*) snap_count from dba_hist_snapshot;
select min(snap_id) min_snap, max(snap_id) max_snap from dba_hist_snapshot;
select to_char(min(end_interval_time),'yyyy-mon-dd hh24:mi:ss') min_date, to_char(max(end_interval_time),'yyyy-mon-dd hh24:mi:ss') max_date from dba_hist_snapshot;
/*
-- STATSPACK get recent snapshot
set lines 300
col what format a30
set numformat 999999999999999
alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
select sysdate from dual;
select instance, what, job, next_date, next_sec from user_jobs;
select * from
(select
s0.instance_number, s0.snap_id snap_id, s0.startup_time,
to_char(s0.snap_time,'YYYY-Mon-DD HH24:MI:SS') snap_start,
to_char(s1.snap_time,'YYYY-Mon-DD HH24:MI:SS') snap_end,
(s1.snap_time-s0.snap_time)*24*60 ela_min,
s0.dbid, s0.snap_level, s0.snapshot_exec_time_s
from stats$snapshot s0,
stats$snapshot s1
where s1.snap_id = s0.snap_id + 1
ORDER BY s0.snap_id DESC)
where rownum < 11;
-- MIN/MAX for statspack tables
col min_dt format a14
col max_dt format a14
col host_name format a12
select
t1.dbid,
t1.instance_number,
t2.version,
t2.db_name,
t2.instance_name,
t2.host_name,
min(to_char(t1.snap_time,'YYYY-Mon-DD HH24')) min_dt,
max(to_char(t1.snap_time,'YYYY-Mon-DD HH24')) max_dt
from stats$snapshot t1,
stats$database_instance t2
where t1.dbid = t2.dbid
and t1.snap_id = t2.snap_id
group by
t1.dbid,
t1.instance_number,
t2.version,
t2.db_name,
t2.instance_name,
t2.host_name
/
*/
/*
AWR reports:
Running Workload Repository Reports Using Enterprise Manager
Running Workload Repository Compare Period Report Using Enterprise Manager
Running Workload Repository Reports Using SQL Scripts
Running Workload Repository Reports Using SQL Scripts
-----------------------------------------------------
You can view AWR reports by running the following SQL scripts:
The awrrpt.sql SQL script (@?/rdbms/admin/awrrpt.sql) generates an HTML or text report that displays statistics for a range of snapshot IDs.
The awrrpti.sql SQL script generates an HTML or text report that displays statistics for a range of snapshot IDs on a specified database and instance.
The awrsqrpt.sql SQL script generates an HTML or text report that displays statistics of a particular SQL statement for a range of snapshot IDs. Run this report to inspect or debug the performance of a SQL statement.
The awrsqrpi.sql SQL script generates an HTML or text report that displays statistics of a particular SQL statement for a range of snapshot IDs on a specified database and instance. Run this report to inspect or debug the performance of a SQL statement on a specific database and instance.
The awrddrpt.sql SQL script generates an HTML or text report that compares detailed performance attributes and configuration settings between two selected time periods.
The awrddrpi.sql SQL script generates an HTML or text report that compares detailed performance attributes and configuration settings between two selected time periods on a specific database and instance.
awrsqrpt.sql -- SQL performance report
*/
}}}
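For example, generating a text report between two snapshots looks like this (the snap ids and report name are illustrative, and the prompts are paraphrased):
{{{
SQL> @?/rdbms/admin/awrrpt.sql
-- Enter value for report_type: text
-- Enter value for num_days: 1
-- Enter value for begin_snap: 335
-- Enter value for end_snap: 336
-- Enter value for report_name: awrrpt_335_336.txt
}}}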
{{{
set arraysize 5000
COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR IO Workload Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15 heading instname -- instname
col hostname format a30 heading hostname -- hostname
col tm format a17 heading tm -- "tm"
col id format 99999 heading id -- "snapid"
col inst format 90 heading inst -- "inst"
col dur format 999990.00 heading dur -- "dur"
col cpu format 90 heading cpu -- "cpu"
col cap format 9999990.00 heading cap -- "capacity"
col dbt format 999990.00 heading dbt -- "DBTime"
col dbc format 99990.00 heading dbc -- "DBcpu"
col bgc format 99990.00 heading bgc -- "BGcpu"
col rman format 9990.00 heading rman -- "RMANcpu"
col aas format 990.0 heading aas -- "AAS"
col totora format 9999990.00 heading totora -- "TotalOracleCPU"
col busy format 9999990.00 heading busy -- "BusyTime"
col load format 990.00 heading load -- "OSLoad"
col totos format 9999990.00 heading totos -- "TotalOSCPU"
col mem format 999990.00 heading mem -- "PhysicalMemorymb"
col IORs format 99990.000 heading IORs -- "IOPsr"
col IOWs format 99990.000 heading IOWs -- "IOPsw"
col IORedo format 99990.000 heading IORedo -- "IOPsredo"
col IORmbs format 99990.000 heading IORmbs -- "IOrmbs"
col IOWmbs format 99990.000 heading IOWmbs -- "IOwmbs"
col redosizesec format 99990.000 heading redosizesec -- "Redombs"
col logons format 990 heading logons -- "Sess"
col logone format 990 heading logone -- "SessEnd"
col exsraw format 99990.000 heading exsraw -- "Execrawdelta"
col exs format 9990.000 heading exs -- "Execs"
col oracpupct format 990 heading oracpupct -- "OracleCPUPct"
col rmancpupct format 990 heading rmancpupct -- "RMANCPUPct"
col oscpupct format 990 heading oscpupct -- "OSCPUPct"
col oscpuusr format 990 heading oscpuusr -- "USRPct"
col oscpusys format 990 heading oscpusys -- "SYSPct"
col oscpuio format 990 heading oscpuio -- "IOPct"
col SIORs format 99990.000 heading SIORs -- "IOPsSingleBlockr"
col MIORs format 99990.000 heading MIORs -- "IOPsMultiBlockr"
col TIORmbs format 99990.000 heading TIORmbs -- "Readmbs"
col SIOWs format 99990.000 heading SIOWs -- "IOPsSingleBlockw"
col MIOWs format 99990.000 heading MIOWs -- "IOPsMultiBlockw"
col TIOWmbs format 99990.000 heading TIOWmbs -- "Writembs"
col TIOR format 99990.000 heading TIOR -- "TotalIOPsr"
col TIOW format 99990.000 heading TIOW -- "TotalIOPsw"
col TIOALL format 99990.000 heading TIOALL -- "TotalIOPsALL"
col ALLRmbs format 99990.000 heading ALLRmbs -- "TotalReadmbs"
col ALLWmbs format 99990.000 heading ALLWmbs -- "TotalWritembs"
col GRANDmbs format 99990.000 heading GRANDmbs -- "TotalmbsALL"
col readratio format 990 heading readratio -- "ReadRatio"
col writeratio format 990 heading writeratio -- "WriteRatio"
col diskiops format 99990.000 heading diskiops -- "HWDiskIOPs"
col numdisks format 99990.000 heading numdisks -- "HWNumofDisks"
col flashcache format 990 heading flashcache -- "FlashCacheHitsPct"
col cellpiob format 99990.000 heading cellpiob -- "CellPIOICmbs"
col cellpiobss format 99990.000 heading cellpiobss -- "CellPIOICSmartScanmbs"
col cellpiobpreoff format 99990.000 heading cellpiobpreoff -- "CellPIOpredoffloadmbs"
col cellpiobsi format 99990.000 heading cellpiobsi -- "CellPIOstorageindexmbs"
col celliouncomb format 99990.000 heading celliouncomb -- "CellIOuncompmbs"
col cellpiobs format 99990.000 heading cellpiobs -- "CellPIOsavedfilecreationmbs"
col cellpiobsrman format 99990.000 heading cellpiobsrman -- "CellPIOsavedRMANfilerestorembs"
SELECT * FROM
(
SELECT trim('&_instname') instname,
trim('&_dbid') db_id,
trim('&_hostname') hostname,
s0.snap_id id,
TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
(((s20t1.value - s20t0.value) - (s21t1.value - s21t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as SIORs,
((s21t1.value - s21t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as MIORs,
(((s22t1.value - s22t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as TIORmbs,
(((s23t1.value - s23t0.value) - (s24t1.value - s24t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as SIOWs,
((s24t1.value - s24t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as MIOWs,
(((s25t1.value - s25t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as TIOWmbs,
((s13t1.value - s13t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as IORedo,
(((s14t1.value - s14t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as redosizesec,
          ((s33t1.value - s33t0.value) / (s20t1.value - s20t0.value))*100 as flashcache, -- flash cache read hits as a % of physical read total IO requests
(((s26t1.value - s26t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiob,
(((s31t1.value - s31t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiobss,
(((s29t1.value - s29t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiobpreoff,
(((s30t1.value - s30t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiobsi,
(((s32t1.value - s32t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as celliouncomb,
(((s27t1.value - s27t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiobs,
(((s28t1.value - s28t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiobsrman
FROM dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sysstat s13t0, -- redo writes, diffed
dba_hist_sysstat s13t1,
dba_hist_sysstat s14t0, -- redo size, diffed
dba_hist_sysstat s14t1,
dba_hist_sysstat s20t0, -- physical read total IO requests, diffed
dba_hist_sysstat s20t1,
dba_hist_sysstat s21t0, -- physical read total multi block requests, diffed
dba_hist_sysstat s21t1,
dba_hist_sysstat s22t0, -- physical read total bytes, diffed
dba_hist_sysstat s22t1,
dba_hist_sysstat s23t0, -- physical write total IO requests, diffed
dba_hist_sysstat s23t1,
dba_hist_sysstat s24t0, -- physical write total multi block requests, diffed
dba_hist_sysstat s24t1,
dba_hist_sysstat s25t0, -- physical write total bytes, diffed
dba_hist_sysstat s25t1,
dba_hist_sysstat s26t0, -- cell physical IO interconnect bytes, diffed, cellpiob
dba_hist_sysstat s26t1,
dba_hist_sysstat s27t0, -- cell physical IO bytes saved during optimized file creation, diffed, cellpiobs
dba_hist_sysstat s27t1,
dba_hist_sysstat s28t0, -- cell physical IO bytes saved during optimized RMAN file restore, diffed, cellpiobsrman
dba_hist_sysstat s28t1,
dba_hist_sysstat s29t0, -- cell physical IO bytes eligible for predicate offload, diffed, cellpiobpreoff
dba_hist_sysstat s29t1,
dba_hist_sysstat s30t0, -- cell physical IO bytes saved by storage index, diffed, cellpiobsi
dba_hist_sysstat s30t1,
dba_hist_sysstat s31t0, -- cell physical IO interconnect bytes returned by smart scan, diffed, cellpiobss
dba_hist_sysstat s31t1,
dba_hist_sysstat s32t0, -- cell IO uncompressed bytes, diffed, celliouncomb
dba_hist_sysstat s32t1,
dba_hist_sysstat s33t0, -- cell flash cache read hits
dba_hist_sysstat s33t1
WHERE s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s13t0.dbid = s0.dbid
AND s13t1.dbid = s0.dbid
AND s14t0.dbid = s0.dbid
AND s14t1.dbid = s0.dbid
AND s20t0.dbid = s0.dbid
AND s20t1.dbid = s0.dbid
AND s21t0.dbid = s0.dbid
AND s21t1.dbid = s0.dbid
AND s22t0.dbid = s0.dbid
AND s22t1.dbid = s0.dbid
AND s23t0.dbid = s0.dbid
AND s23t1.dbid = s0.dbid
AND s24t0.dbid = s0.dbid
AND s24t1.dbid = s0.dbid
AND s25t0.dbid = s0.dbid
AND s25t1.dbid = s0.dbid
AND s26t0.dbid = s0.dbid
AND s26t1.dbid = s0.dbid
AND s27t0.dbid = s0.dbid
AND s27t1.dbid = s0.dbid
AND s28t0.dbid = s0.dbid
AND s28t1.dbid = s0.dbid
AND s29t0.dbid = s0.dbid
AND s29t1.dbid = s0.dbid
AND s30t0.dbid = s0.dbid
AND s30t1.dbid = s0.dbid
AND s31t0.dbid = s0.dbid
AND s31t1.dbid = s0.dbid
AND s32t0.dbid = s0.dbid
AND s32t1.dbid = s0.dbid
AND s33t0.dbid = s0.dbid
AND s33t1.dbid = s0.dbid
AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s13t0.instance_number = s0.instance_number
AND s13t1.instance_number = s0.instance_number
AND s14t0.instance_number = s0.instance_number
AND s14t1.instance_number = s0.instance_number
AND s20t0.instance_number = s0.instance_number
AND s20t1.instance_number = s0.instance_number
AND s21t0.instance_number = s0.instance_number
AND s21t1.instance_number = s0.instance_number
AND s22t0.instance_number = s0.instance_number
AND s22t1.instance_number = s0.instance_number
AND s23t0.instance_number = s0.instance_number
AND s23t1.instance_number = s0.instance_number
AND s24t0.instance_number = s0.instance_number
AND s24t1.instance_number = s0.instance_number
AND s25t0.instance_number = s0.instance_number
AND s25t1.instance_number = s0.instance_number
AND s26t0.instance_number = s0.instance_number
AND s26t1.instance_number = s0.instance_number
AND s27t0.instance_number = s0.instance_number
AND s27t1.instance_number = s0.instance_number
AND s28t0.instance_number = s0.instance_number
AND s28t1.instance_number = s0.instance_number
AND s29t0.instance_number = s0.instance_number
AND s29t1.instance_number = s0.instance_number
AND s30t0.instance_number = s0.instance_number
AND s30t1.instance_number = s0.instance_number
AND s31t0.instance_number = s0.instance_number
AND s31t1.instance_number = s0.instance_number
AND s32t0.instance_number = s0.instance_number
AND s32t1.instance_number = s0.instance_number
AND s33t0.instance_number = s0.instance_number
AND s33t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s13t0.snap_id = s0.snap_id
AND s13t1.snap_id = s0.snap_id + 1
AND s14t0.snap_id = s0.snap_id
AND s14t1.snap_id = s0.snap_id + 1
AND s20t0.snap_id = s0.snap_id
AND s20t1.snap_id = s0.snap_id + 1
AND s21t0.snap_id = s0.snap_id
AND s21t1.snap_id = s0.snap_id + 1
AND s22t0.snap_id = s0.snap_id
AND s22t1.snap_id = s0.snap_id + 1
AND s23t0.snap_id = s0.snap_id
AND s23t1.snap_id = s0.snap_id + 1
AND s24t0.snap_id = s0.snap_id
AND s24t1.snap_id = s0.snap_id + 1
AND s25t0.snap_id = s0.snap_id
AND s25t1.snap_id = s0.snap_id + 1
AND s26t0.snap_id = s0.snap_id
AND s26t1.snap_id = s0.snap_id + 1
AND s27t0.snap_id = s0.snap_id
AND s27t1.snap_id = s0.snap_id + 1
AND s28t0.snap_id = s0.snap_id
AND s28t1.snap_id = s0.snap_id + 1
AND s29t0.snap_id = s0.snap_id
AND s29t1.snap_id = s0.snap_id + 1
AND s30t0.snap_id = s0.snap_id
AND s30t1.snap_id = s0.snap_id + 1
AND s31t0.snap_id = s0.snap_id
AND s31t1.snap_id = s0.snap_id + 1
AND s32t0.snap_id = s0.snap_id
AND s32t1.snap_id = s0.snap_id + 1
AND s33t0.snap_id = s0.snap_id
AND s33t1.snap_id = s0.snap_id + 1
AND s13t0.stat_name = 'redo writes'
AND s13t1.stat_name = s13t0.stat_name
AND s14t0.stat_name = 'redo size'
AND s14t1.stat_name = s14t0.stat_name
AND s20t0.stat_name = 'physical read total IO requests'
AND s20t1.stat_name = s20t0.stat_name
AND s21t0.stat_name = 'physical read total multi block requests'
AND s21t1.stat_name = s21t0.stat_name
AND s22t0.stat_name = 'physical read total bytes'
AND s22t1.stat_name = s22t0.stat_name
AND s23t0.stat_name = 'physical write total IO requests'
AND s23t1.stat_name = s23t0.stat_name
AND s24t0.stat_name = 'physical write total multi block requests'
AND s24t1.stat_name = s24t0.stat_name
AND s25t0.stat_name = 'physical write total bytes'
AND s25t1.stat_name = s25t0.stat_name
AND s26t0.stat_name = 'cell physical IO interconnect bytes'
AND s26t1.stat_name = s26t0.stat_name
AND s27t0.stat_name = 'cell physical IO bytes saved during optimized file creation'
AND s27t1.stat_name = s27t0.stat_name
AND s28t0.stat_name = 'cell physical IO bytes saved during optimized RMAN file restore'
AND s28t1.stat_name = s28t0.stat_name
AND s29t0.stat_name = 'cell physical IO bytes eligible for predicate offload'
AND s29t1.stat_name = s29t0.stat_name
AND s30t0.stat_name = 'cell physical IO bytes saved by storage index'
AND s30t1.stat_name = s30t0.stat_name
AND s31t0.stat_name = 'cell physical IO interconnect bytes returned by smart scan'
AND s31t1.stat_name = s31t0.stat_name
AND s32t0.stat_name = 'cell IO uncompressed bytes'
AND s32t1.stat_name = s32t0.stat_name
AND s33t0.stat_name = 'cell flash cache read hits'
AND s33t1.stat_name = s33t0.stat_name
)
-- WHERE
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (338)
-- aas > 1
-- oscpuio > 50
-- rmancpupct > 0
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
{{{
Network requirements below (fragments from esp_collect: the SUM(...) lines are the inner per-snapshot totals, the ROUND(MAX(...)) lines are the outer peak rates). TX is transmit, RX is receive.
https://github.com/carlos-sierra/esp_collect/blob/master/sql/esp_collect_requirements_awr.sql
SUM(CASE WHEN h.stat_name = 'bytes sent via SQL*Net to client' THEN h.value ELSE 0 END) tx_cl,
SUM(CASE WHEN h.stat_name = 'bytes received via SQL*Net from client' THEN h.value ELSE 0 END) rx_cl,
SUM(CASE WHEN h.stat_name = 'bytes sent via SQL*Net to dblink' THEN h.value ELSE 0 END) tx_dl,
SUM(CASE WHEN h.stat_name = 'bytes received via SQL*Net from dblink' THEN h.value ELSE 0 END) rx_dl
ROUND(MAX((tx_cl + rx_cl + tx_dl + rx_dl) / elapsed_sec)) nw_peak_bytes,
ROUND(MAX((tx_cl + tx_dl) / elapsed_sec)) nw_tx_peak_bytes,
ROUND(MAX((rx_cl + rx_dl) / elapsed_sec)) nw_rx_peak_bytes,
Interconnect fragments below
SUM(CASE WHEN h.stat_name = 'gc cr blocks received' THEN h.value ELSE 0 END) gc_cr_bl_rx,
SUM(CASE WHEN h.stat_name = 'gc current blocks received' THEN h.value ELSE 0 END) gc_cur_bl_rx,
SUM(CASE WHEN h.stat_name = 'gc cr blocks served' THEN h.value ELSE 0 END) gc_cr_bl_serv,
SUM(CASE WHEN h.stat_name = 'gc current blocks served' THEN h.value ELSE 0 END) gc_cur_bl_serv,
SUM(CASE WHEN h.stat_name = 'gcs messages sent' THEN h.value ELSE 0 END) gcs_msg_sent,
SUM(CASE WHEN h.stat_name = 'ges messages sent' THEN h.value ELSE 0 END) ges_msg_sent,
SUM(CASE WHEN d.name = 'gcs msgs received' THEN d.value ELSE 0 END) gcs_msg_rcv,
SUM(CASE WHEN d.name = 'ges msgs received' THEN d.value ELSE 0 END) ges_msg_rcv,
SUM(CASE WHEN p.parameter_name = 'db_block_size' THEN to_number(p.value) ELSE 0 END) block_size
ROUND(MAX((((gc_cr_bl_rx + gc_cur_bl_rx + gc_cr_bl_serv + gc_cur_bl_serv)*block_size)+((gcs_msg_sent + ges_msg_sent + gcs_msg_rcv + ges_msg_rcv)*200)) / elapsed_sec)) ic_peak_bytes, -- blocks*block_size plus ~200 bytes per GCS/GES message, divided by elapsed seconds
}}}
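Below is a minimal runnable sketch of the client-traffic piece above, assuming only the dba_hist_sysstat stat name quoted earlier (this is not part of the original esp_collect script). It diffs consecutive snapshots and reports SQL*Net bytes sent to clients per second:
{{{
-- sketch: SQL*Net bytes sent to client per second, per snapshot interval
select s1.snap_id,
       to_char(s1.end_interval_time,'MM/DD/YY HH24:MI') tm,
       round((e.value - b.value) /
             ((cast(s1.end_interval_time as date) - cast(s0.end_interval_time as date)) * 86400)) tx_cl_bytes_sec
from   dba_hist_snapshot s0,
       dba_hist_snapshot s1,
       dba_hist_sysstat  b,
       dba_hist_sysstat  e
where  s1.dbid = s0.dbid
and    s1.instance_number = s0.instance_number
and    s1.snap_id = s0.snap_id + 1
and    b.dbid = s0.dbid  and b.instance_number = s0.instance_number  and b.snap_id = s0.snap_id
and    e.dbid = s1.dbid  and e.instance_number = s1.instance_number  and e.snap_id = s1.snap_id
and    b.stat_name = 'bytes sent via SQL*Net to client'
and    e.stat_name = b.stat_name
order  by s1.snap_id;
}}}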
{{{
set arraysize 5000
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR Services Statistics Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15
col hostname format a30
col tm format a15 heading tm --"Snap|Start|Time"
col id format 99999 heading id --"Snap|ID"
col inst format 90 heading inst --"i|n|s|t|#"
col dur format 999990.00 heading dur --"Snap|Dur|(m)"
col cpu format 90 heading cpu --"C|P|U"
col cap format 9999990.00 heading cap --"***|Total|CPU|Time|(s)"
col dbt format 999990.00 heading dbt --"DB|Time"
col dbc format 99990.00 heading dbc --"DB|CPU"
col bgc format 99990.00 heading bgc --"Bg|CPU"
col rman format 9990.00 heading rman --"RMAN|CPU"
col aas format 990.0 heading aas --"A|A|S"
col totora format 9999990.00 heading totora --"***|Total|Oracle|CPU|(s)"
col busy format 9999990.00 heading busy --"Busy|Time"
col load format 990.00 heading load --"OS|Load"
col totos format 9999990.00 heading totos --"***|Total|OS|CPU|(s)"
col mem format 999990.00 heading mem --"Physical|Memory|(mb)"
col IORs format 9990.000 heading IORs --"IOPs|r"
col IOWs format 9990.000 heading IOWs --"IOPs|w"
col IORedo format 9990.000 heading IORedo --"IOPs|redo"
col IORmbs format 9990.000 heading IORmbs --"IO r|(mb)/s"
col IOWmbs format 9990.000 heading IOWmbs --"IO w|(mb)/s"
col redosizesec format 9990.000 heading redosizesec --"Redo|(mb)/s"
col logons format 990 heading logons --"Sess"
col logone format 990 heading logone --"Sess|End"
col exsraw format 99990.000 heading exsraw --"Exec|raw|delta"
col exs format 9990.000 heading exs --"Exec|/s"
col oracpupct format 990 heading oracpupct --"Oracle|CPU|%"
col rmancpupct format 990 heading rmancpupct --"RMAN|CPU|%"
col oscpupct format 990 heading oscpupct --"OS|CPU|%"
col oscpuusr format 990 heading oscpuusr --"U|S|R|%"
col oscpusys format 990 heading oscpusys --"S|Y|S|%"
col oscpuio format 990 heading oscpuio --"I|O|%"
col phy_reads format 99999990.00 heading phy_reads --"physical|reads"
col log_reads format 99999990.00 heading log_reads --"logical|reads"
select trim('&_instname') instname, trim('&_dbid') db_id, trim('&_hostname') hostname, snap_id,
TO_CHAR(tm,'MM/DD/YY HH24:MI:SS') tm,
inst,
dur,
service_name,
round(db_time / 1000000, 1) as dbt,
round(db_cpu / 1000000, 1) as dbc,
phy_reads,
log_reads,
aas
from (select
s1.snap_id,
s1.tm,
s1.inst,
s1.dur,
s1.service_name,
sum(decode(s1.stat_name, 'DB time', s1.diff, 0)) db_time,
sum(decode(s1.stat_name, 'DB CPU', s1.diff, 0)) db_cpu,
sum(decode(s1.stat_name, 'physical reads', s1.diff, 0)) phy_reads,
sum(decode(s1.stat_name, 'session logical reads', s1.diff, 0)) log_reads,
round(sum(decode(s1.stat_name, 'DB time', s1.diff, 0))/1000000,1)/60 / s1.dur as aas
from
(select s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
e.service_name service_name,
e.stat_name stat_name,
e.value - b.value diff
from dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_service_stat b,
dba_hist_service_stat e
where
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
and s1.dbid = s0.dbid
and b.dbid = s0.dbid
and e.dbid = s0.dbid
and s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
and s1.instance_number = s0.instance_number
and b.instance_number = s0.instance_number
and e.instance_number = s0.instance_number
and s1.snap_id = s0.snap_id + 1
and b.snap_id = s0.snap_id
and e.snap_id = s0.snap_id + 1
and b.stat_id = e.stat_id
and b.service_name_hash = e.service_name_hash) s1
group by
s1.snap_id, s1.tm, s1.inst, s1.dur, s1.service_name
order by
snap_id asc, aas desc, service_name)
-- where
-- AND TO_CHAR(tm,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= 7
-- AND TO_CHAR(tm,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= 1800
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- snap_id = 338
-- and snap_id >= 335 and snap_id <= 339
-- aas > .5
;
}}}
{{{
trx/sec = [UCOMS] + [URS]
(transactions per second = user commits/sec + user rollbacks/sec, i.e. the ucoms and urs columns in the script below)
}}}
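As a quick sanity check of that formula against AWR, a sketch assuming only the 'user commits' and 'user rollbacks' sysstat names (not part of the original scripts):
{{{
-- transactions/sec for the most recent AWR interval of the current instance:
-- (delta of user commits + delta of user rollbacks) / elapsed seconds
select round(sum(e.value - b.value) /
             max((cast(s1.end_interval_time as date) - cast(s0.end_interval_time as date)) * 86400), 2) trx_per_sec
from   dba_hist_snapshot s0,
       dba_hist_snapshot s1,
       dba_hist_sysstat  b,
       dba_hist_sysstat  e
where  s1.dbid = s0.dbid
and    s1.instance_number = s0.instance_number
and    s0.instance_number = (select instance_number from v$instance)
and    s1.snap_id = s0.snap_id + 1
and    s1.snap_id = (select max(snap_id) from dba_hist_snapshot)
and    b.dbid = s0.dbid  and b.instance_number = s0.instance_number  and b.snap_id = s0.snap_id
and    e.dbid = s1.dbid  and e.instance_number = s1.instance_number  and e.snap_id = s1.snap_id
and    b.stat_name in ('user commits','user rollbacks')
and    e.stat_name = b.stat_name;
}}}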
{{{
set arraysize 5000
COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR CPU and IO Workload Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15 heading instname -- instname
col hostname format a30 heading hostname -- hostname
col tm format a17 heading tm -- "tm"
col id format 99999 heading id -- "snapid"
col inst format 90 heading inst -- "inst"
col dur format 999990.00 heading dur -- "dur"
col cpu format 90 heading cpu -- "cpu"
col cap format 9999990.00 heading cap -- "capacity"
col dbt format 999990.00 heading dbt -- "DBTime"
col dbc format 99990.00 heading dbc -- "DBcpu"
col bgc format 99990.00 heading bgc -- "BGcpu"
col rman format 9990.00 heading rman -- "RMANcpu"
col aas format 990.0 heading aas -- "AAS"
col totora format 9999990.00 heading totora -- "TotalOracleCPU"
col busy format 9999990.00 heading busy -- "BusyTime"
col load format 990.00 heading load -- "OSLoad"
col totos format 9999990.00 heading totos -- "TotalOSCPU"
col mem format 999990.00 heading mem -- "PhysicalMemorymb"
col IORs format 9990.000 heading IORs -- "IOPsr"
col IOWs format 9990.000 heading IOWs -- "IOPsw"
col IORedo format 9990.000 heading IORedo -- "IOPsredo"
col IORmbs format 9990.000 heading IORmbs -- "IOrmbs"
col IOWmbs format 9990.000 heading IOWmbs -- "IOwmbs"
col redosizesec format 9990.000 heading redosizesec -- "Redombs"
col logons format 990 heading logons -- "Sess"
col logone format 990 heading logone -- "SessEnd"
col exsraw format 99990.000 heading exsraw -- "Execrawdelta"
col exs format 9990.000 heading exs -- "Execs"
col ucs format 9990.000 heading ucs -- "UserCalls"
col ucoms format 9990.000 heading ucoms -- "Commit"
col urs format 9990.000 heading urs -- "Rollback"
col lios format 9999990.00 heading lios -- "LIOs"
col oracpupct format 990 heading oracpupct -- "OracleCPUPct"
col rmancpupct format 990 heading rmancpupct -- "RMANCPUPct"
col oscpupct format 990 heading oscpupct -- "OSCPUPct"
col oscpuusr format 990 heading oscpuusr -- "USRPct"
col oscpusys format 990 heading oscpusys -- "SYSPct"
col oscpuio format 990 heading oscpuio -- "IOPct"
SELECT * FROM
(
SELECT trim('&_instname') instname,
trim('&_dbid') db_id,
trim('&_hostname') hostname,
s0.snap_id id,
TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
round(s4t1.value/1024/1024/1024,2) AS memgb,
round(s37t1.value/1024/1024/1024,2) AS sgagb,
round(s36t1.value/1024/1024/1024,2) AS pgagb,
s9t0.value logons,
((s10t1.value - s10t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as exs,
((s40t1.value - s40t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as ucs,
((s38t1.value - s38t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as ucoms,
((s39t1.value - s39t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as urs,
((s41t1.value - s41t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as lios
FROM dba_hist_snapshot s0,
dba_hist_snapshot s1,
       dba_hist_osstat s4t1, -- PHYSICAL_MEMORY_BYTES, osstat just get the end value
(select snap_id, dbid, instance_number, sum(value) value from dba_hist_sga group by snap_id, dbid, instance_number) s37t1, -- total SGA allocated, just get the end value
dba_hist_pgastat s36t1, -- total PGA allocated, just get the end value
dba_hist_sysstat s9t0, -- logons current, sysstat absolute value should not be diffed
dba_hist_sysstat s10t0, -- execute count, diffed
dba_hist_sysstat s10t1,
dba_hist_sysstat s38t0, -- user commits, diffed
dba_hist_sysstat s38t1,
dba_hist_sysstat s39t0, -- user rollbacks, diffed
dba_hist_sysstat s39t1,
dba_hist_sysstat s40t0, -- user calls, diffed
dba_hist_sysstat s40t1,
dba_hist_sysstat s41t0, -- session logical reads, diffed
dba_hist_sysstat s41t1
WHERE s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s4t1.dbid = s0.dbid
AND s9t0.dbid = s0.dbid
AND s10t0.dbid = s0.dbid
AND s10t1.dbid = s0.dbid
AND s36t1.dbid = s0.dbid
AND s37t1.dbid = s0.dbid
AND s38t0.dbid = s0.dbid
AND s38t1.dbid = s0.dbid
AND s39t0.dbid = s0.dbid
AND s39t1.dbid = s0.dbid
AND s40t0.dbid = s0.dbid
AND s40t1.dbid = s0.dbid
AND s41t0.dbid = s0.dbid
AND s41t1.dbid = s0.dbid
AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s4t1.instance_number = s0.instance_number
AND s9t0.instance_number = s0.instance_number
AND s10t0.instance_number = s0.instance_number
AND s10t1.instance_number = s0.instance_number
AND s36t1.instance_number = s0.instance_number
AND s37t1.instance_number = s0.instance_number
AND s38t0.instance_number = s0.instance_number
AND s38t1.instance_number = s0.instance_number
AND s39t0.instance_number = s0.instance_number
AND s39t1.instance_number = s0.instance_number
AND s40t0.instance_number = s0.instance_number
AND s40t1.instance_number = s0.instance_number
AND s41t0.instance_number = s0.instance_number
AND s41t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s4t1.snap_id = s0.snap_id + 1
AND s36t1.snap_id = s0.snap_id + 1
AND s37t1.snap_id = s0.snap_id + 1
AND s9t0.snap_id = s0.snap_id
AND s10t0.snap_id = s0.snap_id
AND s10t1.snap_id = s0.snap_id + 1
AND s38t0.snap_id = s0.snap_id
AND s38t1.snap_id = s0.snap_id + 1
AND s39t0.snap_id = s0.snap_id
AND s39t1.snap_id = s0.snap_id + 1
AND s40t0.snap_id = s0.snap_id
AND s40t1.snap_id = s0.snap_id + 1
AND s41t0.snap_id = s0.snap_id
AND s41t1.snap_id = s0.snap_id + 1
AND s4t1.stat_name = 'PHYSICAL_MEMORY_BYTES'
AND s36t1.name = 'total PGA allocated'
AND s9t0.stat_name = 'logons current'
AND s10t0.stat_name = 'execute count'
AND s10t1.stat_name = s10t0.stat_name
AND s38t0.stat_name = 'user commits'
AND s38t1.stat_name = s38t0.stat_name
AND s39t0.stat_name = 'user rollbacks'
AND s39t1.stat_name = s39t0.stat_name
AND s40t0.stat_name = 'user calls'
AND s40t1.stat_name = s40t0.stat_name
AND s41t0.stat_name = 'session logical reads'
AND s41t1.stat_name = s41t0.stat_name
)
-- WHERE
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (336)
-- aas > 1
-- oracpupct > 50
-- oscpupct > 50
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
{{{
set arraysize 5000
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR Top Events Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15
col hostname format a30
col snap_id format 99999 heading snap_id -- "snapid"
col tm format a17 heading tm -- "tm"
col inst format 90 heading inst -- "inst"
col dur format 999990.00 heading dur -- "dur"
col event format a55 heading event -- "Event"
col event_rank format 90 heading event_rank -- "EventRank"
col waits format 9999999990.00 heading waits -- "Waits"
col time format 9999999990.00 heading time -- "Timesec"
col avgwt format 99990.00 heading avgwt -- "Avgwtms"
col pctdbt format 9990.0 heading pctdbt -- "DBTimepct"
col aas format 990.0 heading aas -- "Aas"
col wait_class format a15 heading wait_class -- "WaitClass"
spool awr_topevents-tableau-&_instname-&_hostname..csv
select trim('&_instname') instname, trim('&_dbid') db_id, trim('&_hostname') hostname, snap_id, tm, inst, dur, event, event_rank, waits, time, avgwt, pctdbt, aas, wait_class
from
(select snap_id, TO_CHAR(tm,'MM/DD/YY HH24:MI:SS') tm, inst, dur, event, waits, time, avgwt, pctdbt, aas, wait_class,
DENSE_RANK() OVER (
PARTITION BY snap_id ORDER BY time DESC) event_rank
from
(
select * from
(select * from
(select
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
e.event_name event,
e.total_waits - nvl(b.total_waits,0) waits,
round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2) time, -- THIS IS EVENT (sec)
round (decode ((e.total_waits - nvl(b.total_waits, 0)), 0, to_number(NULL), ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000) / (e.total_waits - nvl(b.total_waits,0))), 2) avgwt,
((round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt, -- THIS IS EVENT (sec) / DB TIME (sec)
(round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2))/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas, -- THIS IS EVENT (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
e.wait_class wait_class
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_system_event b,
dba_hist_system_event e,
dba_hist_sys_time_model s5t0,
dba_hist_sys_time_model s5t1
where
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
and b.dbid(+) = s0.dbid
and e.dbid = s0.dbid
AND s5t0.dbid = s0.dbid
AND s5t1.dbid = s0.dbid
AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
and b.instance_number(+) = s0.instance_number
and e.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND b.snap_id(+) = s0.snap_id
and e.snap_id = s0.snap_id + 1
AND s5t0.snap_id = s0.snap_id
AND s5t1.snap_id = s0.snap_id + 1
AND s5t0.stat_name = 'DB time'
AND s5t1.stat_name = s5t0.stat_name
and b.event_id = e.event_id
and e.wait_class != 'Idle'
and e.total_waits > nvl(b.total_waits,0)
and e.event_name not in ('smon timer',
'pmon timer',
'dispatcher timer',
'dispatcher listen timer',
'rdbms ipc message')
order by snap_id, time desc, waits desc, event)
union all
select
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
'CPU time',
0,
round ((s6t1.value - s6t0.value) / 1000000, 2) as time, -- THIS IS DB CPU (sec)
0,
((round ((s6t1.value - s6t0.value) / 1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt, -- THIS IS DB CPU (sec) / DB TIME (sec)..TO GET % OF DB CPU ON DB TIME FOR TOP 5 TIMED EVENTS SECTION
(round ((s6t1.value - s6t0.value) / 1000000, 2))/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas, -- THIS IS DB CPU (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
'CPU'
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sys_time_model s6t0,
dba_hist_sys_time_model s6t1,
dba_hist_sys_time_model s5t0,
dba_hist_sys_time_model s5t1
WHERE
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s6t0.dbid = s0.dbid
AND s6t1.dbid = s0.dbid
AND s5t0.dbid = s0.dbid
AND s5t1.dbid = s0.dbid
AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s6t0.instance_number = s0.instance_number
AND s6t1.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s6t0.snap_id = s0.snap_id
AND s6t1.snap_id = s0.snap_id + 1
AND s5t0.snap_id = s0.snap_id
AND s5t1.snap_id = s0.snap_id + 1
AND s6t0.stat_name = 'DB CPU'
AND s6t1.stat_name = s6t0.stat_name
AND s5t0.stat_name = 'DB time'
AND s5t1.stat_name = s5t0.stat_name
union all
(select
dbtime.snap_id,
dbtime.tm,
dbtime.inst,
dbtime.dur,
'CPU wait',
0,
round(dbtime.time - accounted_dbtime.time, 2) time, -- THIS IS UNACCOUNTED FOR DB TIME (sec)
0,
((dbtime.aas - accounted_dbtime.aas)/ NULLIF(nvl(dbtime.aas,0),0))*100 as pctdbt, -- THIS IS UNACCOUNTED FOR DB TIME (sec) / DB TIME (sec)
round(dbtime.aas - accounted_dbtime.aas, 2) aas, -- AAS OF UNACCOUNTED FOR DB TIME
'CPU wait'
from
(select
s0.snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
'DB time',
0,
round ((s5t1.value - s5t0.value) / 1000000, 2) as time, -- THIS IS DB time (sec)
0,
0,
(round ((s5t1.value - s5t0.value) / 1000000, 2))/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,
'DB time'
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sys_time_model s5t0,
dba_hist_sys_time_model s5t1
WHERE
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s5t0.dbid = s0.dbid
AND s5t1.dbid = s0.dbid
AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s5t0.snap_id = s0.snap_id
AND s5t1.snap_id = s0.snap_id + 1
AND s5t0.stat_name = 'DB time'
AND s5t1.stat_name = s5t0.stat_name) dbtime,
(select snap_id, sum(time) time, sum(AAS) aas from
(select * from (select
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
e.event_name event,
e.total_waits - nvl(b.total_waits,0) waits,
round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2) time, -- THIS IS EVENT (sec)
round (decode ((e.total_waits - nvl(b.total_waits, 0)), 0, to_number(NULL), ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000) / (e.total_waits - nvl(b.total_waits,0))), 2) avgwt,
((round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt, -- THIS IS EVENT (sec) / DB TIME (sec)
(round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2))/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas, -- THIS IS EVENT (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
e.wait_class wait_class
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_system_event b,
dba_hist_system_event e,
dba_hist_sys_time_model s5t0,
dba_hist_sys_time_model s5t1
where
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
and b.dbid(+) = s0.dbid
and e.dbid = s0.dbid
AND s5t0.dbid = s0.dbid
AND s5t1.dbid = s0.dbid
AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
and b.instance_number(+) = s0.instance_number
and e.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND b.snap_id(+) = s0.snap_id
and e.snap_id = s0.snap_id + 1
AND s5t0.snap_id = s0.snap_id
AND s5t1.snap_id = s0.snap_id + 1
AND s5t0.stat_name = 'DB time'
AND s5t1.stat_name = s5t0.stat_name
and b.event_id = e.event_id
and e.wait_class != 'Idle'
and e.total_waits > nvl(b.total_waits,0)
and e.event_name not in ('smon timer',
'pmon timer',
'dispatcher timer',
'dispatcher listen timer',
'rdbms ipc message')
order by snap_id, time desc, waits desc, event)
union all
select
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
'CPU time',
0,
round ((s6t1.value - s6t0.value) / 1000000, 2) as time, -- THIS IS DB CPU (sec)
0,
((round ((s6t1.value - s6t0.value) / 1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt, -- THIS IS DB CPU (sec) / DB TIME (sec)..TO GET % OF DB CPU ON DB TIME FOR TOP 5 TIMED EVENTS SECTION
(round ((s6t1.value - s6t0.value) / 1000000, 2))/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas, -- THIS IS DB CPU (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
'CPU'
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sys_time_model s6t0,
dba_hist_sys_time_model s6t1,
dba_hist_sys_time_model s5t0,
dba_hist_sys_time_model s5t1
WHERE
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s6t0.dbid = s0.dbid
AND s6t1.dbid = s0.dbid
AND s5t0.dbid = s0.dbid
AND s5t1.dbid = s0.dbid
AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s6t0.instance_number = s0.instance_number
AND s6t1.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s6t0.snap_id = s0.snap_id
AND s6t1.snap_id = s0.snap_id + 1
AND s5t0.snap_id = s0.snap_id
AND s5t1.snap_id = s0.snap_id + 1
AND s6t0.stat_name = 'DB CPU'
AND s6t1.stat_name = s6t0.stat_name
AND s5t0.stat_name = 'DB time'
AND s5t1.stat_name = s5t0.stat_name
) group by snap_id) accounted_dbtime
where dbtime.snap_id = accounted_dbtime.snap_id
)
)
)
)
WHERE event_rank <= 5
-- AND tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- AND TO_CHAR(tm,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= 7
-- AND TO_CHAR(tm,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= 1800
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- and snap_id = 495
-- and snap_id >= 495 and snap_id <= 496
-- and event = 'db file sequential read'
-- and event like 'CPU%'
-- and avgwt > 5
-- and aas > .5
-- and wait_class = 'CPU'
-- and wait_class like '%I/O%'
-- and event_rank in (1,2,3)
ORDER BY snap_id;
}}}
If you'd like more detail than just the top 5 SQL across snap_ids, comment out the following lines:
<<<
where
time_rank <= 5
<<<
then add filters (on SQL_ID, AAS, etc.) after the line:
<<<
-- where rownum <= 20
<<<
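For instance, to pull the full history of one statement regardless of rank, the edited filter inside the sqt inline view could look like this (the sql_id below is illustrative):
{{{
-- where
-- time_rank <= 5 -- GET TOP 5 SQL ACROSS SNAP_IDs... (commented out)
where sql_id = 'abcd1234efgh5'   -- illustrative sql_id
-- or filter on load instead, e.g.: where aas > .5
}}}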
{{{
set arraysize 5000
COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR Top SQL Report' skip 2
set pagesize 50000
set linesize 550
col snap_id format 99999 heading -- "Snap|ID"
col tm format a15 heading -- "Snap|Start|Time"
col inst format 90 heading -- "i|n|s|t|#"
col dur format 990.00 heading -- "Snap|Dur|(m)"
col sql_id format a15 heading -- "SQL|ID"
col phv format 99999999999 heading -- "Plan|Hash|Value"
col module format a50
col elap format 999990.00 heading -- "Ela|Time|(s)"
col elapexec format 999990.00 heading -- "Ela|Time|per|exec|(s)"
col cput format 999990.00 heading -- "CPU|Time|(s)"
col iowait format 999990.00 heading -- "IO|Wait|(s)"
col appwait format 999990.00 heading -- "App|Wait|(s)"
col concurwait format 999990.00 heading -- "Ccr|Wait|(s)"
col clwait format 999990.00 heading -- "Cluster|Wait|(s)"
col bget format 99999999990 heading -- "LIO"
col dskr format 99999999990 heading -- "PIO"
col dpath format 99999999990 heading -- "Direct|Writes"
col rowp format 99999999990 heading -- "Rows"
col exec format 9999990 heading -- "Exec"
col prsc format 999999990 heading -- "Parse|Count"
col pxexec format 9999990 heading -- "PX|Server|Exec"
col icbytes format 99999990 heading -- "IC|MB"
col offloadbytes format 99999990 heading -- "Offload|MB"
col offloadreturnbytes format 99999990 heading -- "Offload|return|MB"
col flashcachereads format 99999990 heading -- "Flash|Cache|MB"
col uncompbytes format 99999990 heading -- "Uncomp|MB"
col pctdbt format 990 heading -- "DB Time|%"
col aas format 990.00 heading -- "A|A|S"
col time_rank format 90 heading -- "Time|Rank"
col sql_text format a6 heading -- "SQL|Text"
select *
from (
select
trim('&_instname') instname,
trim('&_dbid') db_id,
trim('&_hostname') hostname,
sqt.snap_id snap_id,
TO_CHAR(sqt.tm,'MM/DD/YY HH24:MI:SS') tm,
sqt.inst inst,
sqt.dur dur,
sqt.aas aas,
nvl((sqt.elap), to_number(null)) elap,
nvl((sqt.elapexec), 0) elapexec,
nvl((sqt.cput), to_number(null)) cput,
sqt.iowait iowait,
sqt.appwait appwait,
sqt.concurwait concurwait,
sqt.clwait clwait,
sqt.bget bget,
sqt.dskr dskr,
sqt.dpath dpath,
sqt.rowp rowp,
sqt.exec exec,
sqt.prsc prsc,
sqt.pxexec pxexec,
sqt.icbytes,
sqt.offloadbytes,
sqt.offloadreturnbytes,
sqt.flashcachereads,
sqt.uncompbytes,
sqt.time_rank time_rank,
sqt.sql_id sql_id,
sqt.phv phv,
substr(to_clob(decode(sqt.module, null, null, sqt.module)),1,50) module,
st.sql_text sql_text -- PUT/REMOVE COMMENT TO HIDE/SHOW THE SQL_TEXT
from (
select snap_id, tm, inst, dur, sql_id, phv, module, elap, elapexec, cput, iowait, appwait, concurwait, clwait, bget, dskr, dpath, rowp, exec, prsc, pxexec, icbytes, offloadbytes, offloadreturnbytes, flashcachereads, uncompbytes, aas, time_rank
from
(
select
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
e.sql_id sql_id,
e.plan_hash_value phv,
max(e.module) module,
sum(e.elapsed_time_delta)/1000000 elap,
decode((sum(e.executions_delta)), 0, to_number(null), ((sum(e.elapsed_time_delta)) / (sum(e.executions_delta)) / 1000000)) elapexec,
sum(e.cpu_time_delta)/1000000 cput,
sum(e.iowait_delta)/1000000 iowait,
sum(e.apwait_delta)/1000000 appwait,
sum(e.ccwait_delta)/1000000 concurwait,
sum(e.clwait_delta)/1000000 clwait,
sum(e.buffer_gets_delta) bget,
sum(e.disk_reads_delta) dskr,
sum(e.direct_writes_delta) dpath,
sum(e.rows_processed_delta) rowp,
sum(e.executions_delta) exec,
sum(e.parse_calls_delta) prsc,
sum(e.px_servers_execs_delta) pxexec,
sum(e.io_interconnect_bytes_delta)/1024/1024 icbytes,
sum(e.io_offload_elig_bytes_delta)/1024/1024 offloadbytes,
sum(e.io_offload_return_bytes_delta)/1024/1024 offloadreturnbytes,
(sum(e.optimized_physical_reads_delta)* &_blocksize)/1024/1024 flashcachereads,
sum(e.cell_uncompressed_bytes_delta)/1024/1024 uncompbytes,
(sum(e.elapsed_time_delta)/1000000) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60) aas,
DENSE_RANK() OVER (
PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sqlstat e
where
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
and e.dbid = s0.dbid
AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
and e.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
and e.snap_id = s0.snap_id + 1
group by
s0.snap_id, s0.END_INTERVAL_TIME, s0.instance_number, e.sql_id, e.plan_hash_value, e.elapsed_time_delta, s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME
)
where
time_rank <= 5 -- GET TOP 5 SQL ACROSS SNAP_IDs... YOU CAN ALTER THIS TO HAVE MORE DATA POINTS
)
sqt,
(select sql_id, dbid, nvl(b.name, a.command_type) sql_text from dba_hist_sqltext a, audit_actions b where a.command_type = b.action(+)) st
where st.sql_id(+) = sqt.sql_id
and st.dbid(+) = &_dbid
-- AND TO_CHAR(tm,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= 7
-- AND TO_CHAR(tm,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= 1800
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Date range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- AND snap_id in (338,339)
-- AND snap_id = 338
-- AND snap_id >= 335 and snap_id <= 339
-- AND lower(st.sql_text) like 'select%'
-- AND lower(st.sql_text) like 'insert%'
-- AND lower(st.sql_text) like 'update%'
-- AND lower(st.sql_text) like 'merge%'
-- AND pxexec > 0
-- AND aas > .5
order by
snap_id -- TO GET SQL OUTPUT ACROSS SNAP_IDs SEQUENTIALLY AND ASC
-- nvl(sqt.elap, -1) desc, sqt.sql_id -- TO GET SQL OUTPUT BY ELAPSED TIME
)
-- where rownum <= 20
;
}}}
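Note the "CHANGE THE DBID HERE!" markers: the script derives &_dbid and &_instancenumber from the local v$database/v$instance, which is wrong when you're reporting on an AWR repository imported from another database (see the links below) since the imported snapshots carry the source DBID. In that case hard-code the values before running the query; a minimal sketch (2607950532 is the sample DBID used in the export/import walkthrough further down):
{{{
-- override the locally-derived substitution variables for an imported repository
DEF _dbid = 2607950532
DEF _instancenumber = 1
}}}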
http://gavinsoorma.com/2009/07/exporting-and-importing-awr-snapshot-data/
http://dboptimizer.com/2011/11/08/importing-awr-repositories-from-cloned-databases/ <-- this is to change the DBIDs
https://sites.google.com/site/oraclemonitor/dba_hist_active_sess_history#TOC-Force-importing-a-in-AWR <-- this is to ''FORCE'' import ASH data
How to Export and Import the AWR Repository From One Database to Another (Doc ID 785730.1)
Transporting Automatic Workload Repository Data to Another System https://docs.oracle.com/en/database/oracle/oracle-database/19/tgdba/gathering-database-statistics.html#GUID-F25470A0-C236-46DE-84F7-D68FBE1B0F12
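Under the hood awrextr.sql and awrload.sql just collect these prompts and call DBMS_SWRF_INTERNAL, so the round trip can also be scripted non-interactively. A sketch from memory, so treat the parameter names as assumptions and verify against the awrextr.sql/awrload.sql shipped with your release:
{{{
-- on the source: extract snapshots 235-3333 into the AWR_DATA directory
begin
  dbms_swrf_internal.awr_extract(
    dmpfile => 'awrexp',     -- '.dmp' suffix is appended automatically
    dmpdir  => 'AWR_DATA',
    bid     => 235,
    eid     => 3333);
end;
/

-- on the target: load into a staging schema, then publish into SYS
begin
  dbms_swrf_internal.awr_load(
    schname => 'AWR_STAGE',
    dmpfile => 'awrexp',
    dmpdir  => 'AWR_DATA');
  dbms_swrf_internal.move_to_awr(schname => 'AWR_STAGE');
end;
/
}}}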
{{{
###################################
on the source env
###################################
CREATE DIRECTORY AWR_DATA AS '/oracle/app/oracle/awrdata';
@?/rdbms/admin/awrextr.sql
~~~~~~~~~~~~~
AWR EXTRACT
~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ This script will extract the AWR data for a range of snapshots ~
~ into a dump file. The script will prompt users for the ~
~ following information: ~
~ (1) database id ~
~ (2) snapshot range to extract ~
~ (3) name of directory object ~
~ (4) name of dump file ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Databases in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id DB Name Host
------------ ------------ ------------
* 2607950532 IVRS dbrocaix01.b
ayantel.com
The default database id is the local one: '2607950532'. To use this
database id, press <return> to continue, otherwise enter an alternative.
Enter value for dbid: 2607950532
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 235
Begin Snapshot Id specified: 235
Enter value for end_snap: 3333
Specify the Directory Name
~~~~~~~~~~~~~~~~~~~~~~~~~~
Directory Name Directory Path
------------------------------ -------------------------------------------------
ADMIN_DIR /oracle/app/oracle/product/10.2.0/db_1/md/admin
AWR_DATA /oracle/app/oracle/awrdata
DATA_PUMP_DIR /flash_reco/flash_recovery_area/IVRS/expdp
DATA_PUMP_LOG /home/oracle/logs
SQLT$STAGE /oracle/app/oracle/admin/ivrs/udump
SQLT$UDUMP /oracle/app/oracle/admin/ivrs/udump
WORK_DIR /oracle/app/oracle/product/10.2.0/db_1/work
Choose a Directory Name from the above list (case-sensitive).
Enter value for directory_name: AWR_DATA
Using the dump directory: AWR_DATA
Specify the Name of the Extract Dump File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The prefix for the default dump file name is awrdat_235_3333.
To use this name, press <return> to continue, otherwise enter
an alternative.
Enter value for file_name: awrexp
###################################
on the target env
###################################
CREATE DIRECTORY AWR_DATA AS '/oracle/app/oracle/awrdata';
@?/rdbms/admin/awrload.sql
-- on target before the load
-- MIN/MAX for dba_hist tables
select min(snap_id) min_snap_id, max(snap_id) max_snap_id from dba_hist_snapshot;
select to_char(min(end_interval_time),'yyyy-mon-dd hh24:mi:ss') min_date, to_char(max(end_interval_time),'yyyy-mon-dd hh24:mi:ss') max_date from dba_hist_snapshot;
-- (the multi-line snapshot listing query itself wasn't captured; its output follows)
INSTANCE_NUMBER SNAP_ID STARTUP_TIME SNAP_START SNAP_END ELA_MIN
--------------- ---------- -------------------- -------------------- -------------------- ----------
1 238 2011-jan-27 08:52:09 2011-jan-27 09:30:31 2011-jan-27 09:40:34 10.05
1 237 2011-jan-27 08:52:09 2011-jan-27 09:20:28 2011-jan-27 09:30:31 10.04
1 236 2011-jan-27 08:52:09 2011-jan-27 09:10:26 2011-jan-27 09:20:28 10.04
1 235 2011-jan-27 08:52:09 2011-jan-27 09:03:24 2011-jan-27 09:10:26 7.03
1 234 2009-dec-15 13:41:20 2009-dec-15 14:00:32 2011-jan-27 09:03:24 587222.87
1 233 2009-dec-15 12:08:35 2009-dec-15 13:00:49 2009-dec-15 14:00:32 59.72
1 232 2009-dec-15 12:08:35 2009-dec-15 12:19:42 2009-dec-15 13:00:49 41.12
1 231 2009-dec-15 07:58:35 2009-dec-15 08:09:41 2009-dec-15 12:19:42 250.01
1 230 2009-dec-14 23:35:11 2009-dec-14 23:46:20 2009-dec-15 08:09:41 503.35
1 229 2009-dec-10 11:27:30 2009-dec-11 04:00:38 2009-dec-14 23:46:20 5505.7
10 rows selected.
sys@IVRS> sys@IVRS> sys@IVRS>
MIN_SNAP_ID MAX_SNAP_ID
----------- -----------
213 239
sys@IVRS>
MIN_DATE MAX_DATE
-------------------- --------------------
2009-dec-10 11:38:56 2011-jan-27 09:40:34
~~~~~~~~~~
AWR LOAD
~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ This script will load the AWR data from a dump file. The ~
~ script will prompt users for the following information: ~
~ (1) name of directory object ~
~ (2) name of dump file ~
~ (3) staging schema name to load AWR data into ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Specify the Directory Name
~~~~~~~~~~~~~~~~~~~~~~~~~~
Directory Name Directory Path
------------------------------ -------------------------------------------------
ADMIN_DIR /oracle/app/oracle/product/10.2.0/db_1/md/admin
AWR_DATA /oracle/app/oracle/awrdata
DATA_PUMP_DIR /flash_reco/flash_recovery_area/IVRS/expdp
DATA_PUMP_LOG /home/oracle/logs
SQLT$STAGE /oracle/app/oracle/admin/ivrs/udump
SQLT$UDUMP /oracle/app/oracle/admin/ivrs/udump
WORK_DIR /oracle/app/oracle/product/10.2.0/db_1/work
Choose a Directory Name from the list above (case-sensitive).
Enter value for directory_name: AWR_DATA
Using the dump directory: AWR_DATA
Specify the Name of the Dump File to Load
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Please specify the prefix of the dump file (.dmp) to load:
Enter value for file_name: awrexp
Enter value for schema_name:
Using the staging schema name: AWR_STAGE
Choose the Default tablespace for the AWR_STAGE user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Choose the AWR_STAGE user's default tablespace. This is the
tablespace in which the AWR data will be staged.
TABLESPACE_NAME CONTENTS DEFAULT TABLESPACE
------------------------------ --------- ------------------
CCDATA PERMANENT
CCINDEX PERMANENT
PSE PERMANENT
SOE PERMANENT
SOEINDEX PERMANENT
SYSAUX PERMANENT *
TPCCTAB PERMANENT
TPCHTAB PERMANENT
USERS PERMANENT
Pressing <return> will result in the recommended default
tablespace (identified by *) being used.
Enter value for default_tablespace:
Using tablespace SYSAUX as the default tablespace for the AWR_STAGE
Choose the Temporary tablespace for the AWR_STAGE user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Choose the AWR_STAGE user's temporary tablespace.
TABLESPACE_NAME CONTENTS DEFAULT TEMP TABLESPACE
------------------------------ --------- -----------------------
TEMP TEMPORARY *
Pressing <return> will result in the database's default temporary
tablespace (identified by *) being used.
Enter value for temporary_tablespace:
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 12:46:07
begin
*
ERROR at line 1:
ORA-20105: unable to move AWR data to SYS
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 1760
ORA-20107: not allowed to move AWR data for local dbid
ORA-06512: at line 3
... Dropping AWR_STAGE user
End of AWR Load
}}}
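The ORA-20107 above is the expected failure mode when the dump is loaded back into a database whose DBID matches the dump's source DBID (this walkthrough extracted and loaded on the same IVRS database): MOVE_TO_AWR refuses to publish AWR rows for the local dbid. Load into a database with a different DBID, or see the dboptimizer link above for changing DBIDs. A quick pre-flight check (minimal sketch):
{{{
-- the target's local DBID must differ from the DBID the dump was extracted from
select dbid from v$database;
}}}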
-- from http://www.perfvision.com/statspack/awr.txt
{{{
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
Snap Id Snap Time Sessions Curs/Sess
Cache Sizes
Load Profile
Instance Efficiency Percentages (Target 100%)
Top 5 Timed Events Avg %Total
Time Model Statistics
Wait Class
Wait Events
Background Wait Events
Operating System Statistics
Service Statistics
Service Wait Class Stats
SQL ordered by Elapsed Time
SQL ordered by CPU Time
SQL ordered by Gets
SQL ordered by Reads
SQL ordered by Executions
SQL ordered by Parse Calls
SQL ordered by Sharable Memory
SQL ordered by Version Count
Instance Activity Stats
Instance Activity Stats - Absolute Values
Instance Activity Stats - Thread Activity
Tablespace IO Stats
File IO Stats
Buffer Pool Statistics
Instance Recovery Stats
Buffer Pool Advisory
PGA Aggr Summary
PGA Aggr Target Histogram
PGA Memory Advisory
Shared Pool Advisory
SGA Target Advisory
Streams Pool Advisory
Java Pool Advisory
Buffer Wait Statistics
Enqueue Activity
Undo Segment Summary
Latch Activity
Latch Sleep Breakdown
Latch Miss Sources
Parent Latch Statistics
Segments by Logical Reads
Segments by Physical Reads
Segments by Row Lock Waits
Segments by ITL Waits
Segments by Buffer Busy Waits
Dictionary Cache Stats
Library Cache Activity
Process Memory Summary
SGA Memory Summary
SGA regions Begin Size (Bytes) (if different)
SGA breakdown difference
Streams CPU/IO Usage
Streams Capture
Streams Apply
Buffered Queues
Buffered Subscribers
Rule Set
Resource Limit Stats
init.ora Parameters
}}}
-- the same section list on 10.2.0.3, with the new sections marked inline
{{{
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
Snap Id Snap Time Sessions Curs/Sess
Cache Sizes
Load Profile
Instance Efficiency Percentages (Target 100%)
Top 5 Timed Events Avg wait %Total Call
Time Model Statistics
Wait Class
Wait Events
Background Wait Events
Operating System Statistics
Service Statistics
Service Wait Class Stats
SQL ordered by Elapsed Time
SQL ordered by CPU Time
SQL ordered by Gets
SQL ordered by Reads
SQL ordered by Executions
SQL ordered by Parse Calls
SQL ordered by Sharable Memory
SQL ordered by Version Count
Instance Activity Stats
Instance Activity Stats - Absolute Values
Instance Activity Stats - Thread Activity
Tablespace IO Stats
File IO Stats
Buffer Pool Statistics
Instance Recovery Stats
Buffer Pool Advisory
PGA Aggr Summary
PGA Aggr Target Stats <-- new in 10.2.0.3
PGA Aggr Target Histogram
PGA Memory Advisory
Shared Pool Advisory
SGA Target Advisory
Streams Pool Advisory
Java Pool Advisory
Buffer Wait Statistics
Enqueue Activity
Undo Segment Summary
Undo Segment Stats <-- new in 10.2.0.3
Latch Activity
Latch Sleep Breakdown
Latch Miss Sources
Parent Latch Statistics
Child Latch Statistics <-- new in 10.2.0.3
Segments by Logical Reads
Segments by Physical Reads
Segments by Row Lock Waits
Segments by ITL Waits
Segments by Buffer Busy Waits
Dictionary Cache Stats
Library Cache Activity
Process Memory Summary
SGA Memory Summary
SGA breakdown difference
Streams CPU/IO Usage
Streams Capture
Streams Apply
Buffered Queues
Buffered Subscribers
Rule Set
Resource Limit Stats
init.ora Parameters
}}}
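The section lists above were taken from full text reports like the sample dump below. The same report text can also be generated without the interactive awrrpt.sql prompts via DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT; a minimal sketch using the same NEW_VALUE pattern as the scripts above (snap IDs 122/123 match the sample report that follows):
{{{
column awrdbid new_value _awr_dbid noprint
select dbid awrdbid from v$database;
column awrinst new_value _awr_inst noprint
select instance_number awrinst from v$instance;

select output
from table(dbms_workload_repository.awr_report_text(
       &_awr_dbid, &_awr_inst,
       122,    -- begin snap id
       123));  -- end snap id
}}}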
-- from http://www.perfvision.com/statspack/awrrpt_1_122_123.txt
{{{
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
CDB10 1193559071 cdb10 1 10.2.0.1.0 NO tsukuba
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 122 31-Jul-07 17:00:40 36 24.9
End Snap: 123 31-Jul-07 18:00:56 37 25.0
Elapsed: 60.26 (mins)
DB Time: 89.57 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
---------- ----------
Buffer Cache: 28M 28M Std Block Size: 8K
Shared Pool Size: 128M 128M Log Buffer: 6,256K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 404,585.37 714,975.12
Logical reads: 8,318.76 14,700.74
Block changes: 2,744.42 4,849.89
Physical reads: 111.18 196.48
Physical writes: 48.07 84.96
User calls: 154.96 273.84
Parses: 3.17 5.60
Hard parses: 0.07 0.13
Sorts: 9.07 16.04
Logons: 0.05 0.09
Executes: 150.07 265.20
Transactions: 0.57
% Blocks changed per Read: 32.99 Recursive Call %: 16.44
Rollback per transaction %: 21.11 Rows per Sort: 57.60
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.98
Buffer Hit %: 98.70 In-memory Sort %: 100.00
Library Hit %: 99.94 Soft Parse %: 97.71
Execute to Parse %: 97.89 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 3.60 % Non-Parse CPU: 99.62
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 91.89 91.86
% SQL with executions>1: 75.28 73.08
% Memory for SQL w/exec>1: 73.58 70.06
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
log file parallel write 2,819 2,037 723 37.9 System I/O
db file parallel write 32,625 1,949 60 36.3 System I/O
db file sequential read 268,447 1,761 7 32.8 User I/O
log file sync 1,850 1,117 604 20.8 Commit
log buffer space 1,189 866 728 16.1 Configurat
-------------------------------------------------------------
Time Model Statistics DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Total time in database user-calls (DB Time): 5374.1s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 4,409.2 82.0
DB CPU 488.2 9.1
parse time elapsed 48.5 .9
hard parse elapsed time 45.8 .9
PL/SQL execution elapsed time 24.0 .4
sequence load elapsed time 6.1 .1
connection management call elapsed time 3.6 .1
failed parse elapsed time 0.8 .0
hard parse (sharing criteria) elapsed time 0.1 .0
repeated bind elapsed time 0.0 .0
DB time 5,374.1 N/A
background elapsed time 4,199.3 N/A
background cpu time 76.0 N/A
-------------------------------------------------------------
Wait Class DB/Inst: CDB10/cdb10 Snaps: 122-123
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
-------------------- ---------------- ------ ---------------- ------- ---------
System I/O 63,959 .0 4,080 64 31.3
User I/O 286,652 .0 2,337 8 140.1
Commit 1,850 47.2 1,117 604 0.9
Configuration 4,319 79.1 1,081 250 2.1
Concurrency 211 14.7 64 301 0.1
Application 1,432 .3 29 21 0.7
Network 566,962 .0 20 0 277.1
Other 499 1.2 9 19 0.2
-------------------------------------------------------------
Wait Events DB/Inst: CDB10/cdb10 Snaps: 122-123
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
log file parallel write 2,819 .0 2,037 723 1.4
db file parallel write 32,625 .0 1,949 60 15.9
db file sequential read 268,447 .0 1,761 7 131.2
log file sync 1,850 47.2 1,117 604 0.9
log buffer space 1,189 51.9 866 728 0.6
db file scattered read 16,589 .0 449 27 8.1
log file switch completion 182 35.2 109 597 0.1
control file parallel write 2,134 .0 87 41 1.0
direct path write temp 415 .0 78 188 0.2
log file switch (checkpoint 120 24.2 53 444 0.1
buffer busy waits 155 18.1 49 315 0.1
free buffer waits 2,387 95.0 43 18 1.2
enq: RO - fast object reuse 60 6.7 23 379 0.0
SQL*Net more data to dblink 1,723 .0 19 11 0.8
direct path read temp 350 .0 16 46 0.2
local write wait 164 1.8 15 90 0.1
direct path write 304 .0 13 42 0.1
write complete waits 11 90.9 10 923 0.0
latch: In memory undo latch 5 .0 8 1592 0.0
os thread startup 40 7.5 7 171 0.0
enq: CF - contention 25 .0 7 272 0.0
SQL*Net break/reset to clien 1,372 .0 7 5 0.7
control file sequential read 26,253 .0 5 0 12.8
db file parallel read 149 .0 4 29 0.1
direct path read 233 .0 1 6 0.1
latch: cache buffers lru cha 10 .0 1 132 0.0
latch: object queue header o 2 .0 1 460 0.0
SQL*Net message to client 557,769 .0 1 0 272.6
log file single write 64 .0 1 13 0.0
SQL*Net more data to client 1,806 .0 0 0 0.9
LGWR wait for redo copy 125 4.8 0 1 0.1
rdbms ipc reply 298 .0 0 0 0.1
SQL*Net more data from clien 93 .0 0 1 0.0
latch free 2 .0 0 17 0.0
latch: redo allocation 1 .0 0 21 0.0
latch: shared pool 2 .0 0 10 0.0
log file sequential read 64 .0 0 0 0.0
reliable message 36 .0 0 1 0.0
read by other session 1 .0 0 15 0.0
SQL*Net message to dblink 5,565 .0 0 0 2.7
latch: library cache 4 .0 0 1 0.0
undo segment extension 430 99.3 0 0 0.2
latch: cache buffers chains 4 .0 0 0 0.0
latch: library cache pin 1 .0 0 0 0.0
SQL*Net more data from dblin 6 .0 0 0 0.0
SQL*Net message from client 557,767 .0 51,335 92 272.6
Streams AQ: waiting for time 50 40.0 3,796 75924 0.0
wait for unread message on b 3,588 99.5 3,522 982 1.8
Streams AQ: qmn slave idle w 128 .0 3,520 27498 0.1
Streams AQ: qmn coordinator 275 53.5 3,520 12799 0.1
virtual circuit status 120 100.0 3,503 29191 0.1
Streams AQ: waiting for mess 725 97.7 3,498 4825 0.4
jobq slave wait 1,133 97.5 3,284 2898 0.6
PL/SQL lock timer 977 99.9 2,862 2929 0.5
SQL*Net message from dblink 5,566 .0 540 97 2.7
class slave wait 2 100.0 10 4892 0.0
single-task message 2 .0 0 103 0.0
-------------------------------------------------------------
Background Wait Events DB/Inst: CDB10/cdb10 Snaps: 122-123
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
log file parallel write 2,820 .0 2,037 722 1.4
db file parallel write 32,625 .0 1,949 60 15.9
control file parallel write 2,134 .0 87 41 1.0
direct path write 231 .0 13 55 0.1
db file sequential read 935 .0 12 13 0.5
log buffer space 13 53.8 10 791 0.0
events in waitclass Other 415 1.4 8 19 0.2
os thread startup 40 7.5 7 171 0.0
db file scattered read 115 .0 3 27 0.1
log file sync 3 66.7 2 828 0.0
direct path read 231 .0 1 6 0.1
buffer busy waits 21 .0 1 63 0.0
control file sequential read 2,550 .0 1 0 1.2
log file single write 64 .0 1 13 0.0
log file sequential read 64 .0 0 0 0.0
latch: shared pool 1 .0 0 7 0.0
latch: library cache 2 .0 0 1 0.0
latch: cache buffers chains 1 .0 0 0 0.0
rdbms ipc message 13,865 72.8 27,604 1991 6.8
Streams AQ: waiting for time 50 40.0 3,796 75924 0.0
pmon timer 1,272 98.6 3,526 2772 0.6
Streams AQ: qmn slave idle w 128 .0 3,520 27498 0.1
Streams AQ: qmn coordinator 275 53.5 3,520 12799 0.1
smon timer 178 3.4 3,360 18875 0.1
-------------------------------------------------------------
Operating System Statistics DB/Inst: CDB10/cdb10 Snaps: 122-123
Statistic Total
-------------------------------- --------------------
AVG_BUSY_TIME 204,954
AVG_IDLE_TIME 155,940
AVG_IOWAIT_TIME 0
AVG_SYS_TIME 15,979
AVG_USER_TIME 188,638
BUSY_TIME 410,601
IDLE_TIME 312,370
IOWAIT_TIME 0
SYS_TIME 32,591
USER_TIME 378,010
LOAD 1
OS_CPU_WAIT_TIME 228,200
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 338,665,472
VM_OUT_BYTES 397,410,304
PHYSICAL_MEMORY_BYTES 6,388,301,824
NUM_CPUS 2
-------------------------------------------------------------
Service Statistics DB/Inst: CDB10/cdb10 Snaps: 122-123
-> ordered by DB Time
Physical Logical
Service Name DB Time (s) DB CPU (s) Reads Reads
-------------------------------- ------------ ------------ ---------- ----------
SYS$USERS 4,666.5 429.9 348,141 ##########
cdb10 701.4 58.1 51,046 224,419
SYS$BACKGROUND 0.0 0.0 2,830 18,255
cdb10XDB 0.0 0.0 0 0
-------------------------------------------------------------
Service Wait Class Stats DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
classes: User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in centisecond (100th of a second)
Service Name
----------------------------------------------------------------
User I/O User I/O Concurcy Concurcy Admin Admin Network Network
Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time
--------- --------- --------- --------- --------- --------- --------- ---------
SYS$USERS
271425 210890 65 602 0 0 532492 1979
cdb10
12969 18550 81 4945 0 0 34068 15
SYS$BACKGROUND
2261 4306 65 815 0 0 0 0
-------------------------------------------------------------
SQL ordered by Elapsed Time DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
797 134 1 796.6 14.8 f1qcyh20550cf
Call CALC_QOS_SLOW(:1, :2, :3, :4)
773 58 1 773.2 14.4 fj6gjgsshtxyx
Call CALC_DELETE_OLD_DATA(:1)
354 25 1 354.3 6.6 0cjsxw5ndqdbc
Call CALC_HFC_SLOW(:1, :2, :3, :4)
275 29 1 275.3 5.1 8t8as9usk11qw
Call CALC_TOPOLOGY_SLOW(:1, :2, :3, :4)
202 4 4 50.5 3.8 dr1rkrznhh95b
Call CALC_TOPOLOGY_MEDIUM(:1, :2, :3, :4)
158 16 0 N/A 2.9 10dkqv3kr8xa5
SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY
139 7 1 139.2 2.6 38zhkf4jdyff4
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN ash.collect(3,1200); :mydate := next_date; IF broken THEN :b := 1
; ELSE :b := 0; END IF; END;
137 72 1 136.8 2.5 298wmz1kxjs1m
INSERT INTO CM_QOS_PROF SELECT :B1 , R.TOPOLOGYID, :B1 - :B4 , P.NODE_PROFILE_ID
, R.DOCSIFCMTSSERVICEQOSPROFILE FROM CM_SID_RAWDATA R, ( SELECT DISTINCT T.CMID,
P.QOS_PROF_IDX, P.NODE_PROFILE_ID FROM TMP_TOP_SLOW_CM T, CMTS_QOS_PROF P WHERE
T.CMTSID = P.TOPOLOGYID AND P.SECONDID = :B1 ) P WHERE R.BATCHID = :B3 AND R.PR
130 9 1 130.5 2.4 6n0d6cv6w6krs
DELETE FROM CM_VA WHERE SECONDID <= :B1
130 9 1 130.0 2.4 86m0m9q8fw9bj
DELETE FROM CM_QOS_PROF WHERE SECONDID <= :B1
126 3 1 125.6 2.3 33bpz9dh1w5jk
Module: Lab128
--lab128 select /*+rule*/ owner, segment_name||decode(partition_name,null,nul
l,' ('||partition_name||')') name, segment_type,tablespace_name, extent_id,f
ile_id,block_id, blocks,bytes/1048576 bytes from dba_extents
124 9 1 124.5 2.3 gyqv6h5pft4mj
DELETE FROM CM_BYTES WHERE SECONDID <= :B1
121 2 56 2.2 2.3 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
120 2 4 30.0 2.2 4zjg6w4mwu0wv
INSERT INTO TMP_TOP_MED_DN SELECT M.CMTSID, M.VENDOR_DESC, M.MODEL_DESC, MAC_L.T
OPOLOGYID, DOWN_L.TOPOLOGYID, M.UP_SNR_CNR_A3, M.UP_SNR_CNR_A2, M.UP_SNR_CNR_A1,
M.UP_SNR_CNR_A0, M.MAC_SLOTS_OPEN, M.MAC_SLOTS_USED, M.CMTS_REBOOT, 0 FROM TMP_
TOP_MED_CMTS M, TOPOLOGY_LINK DOWN_L, TOPOLOGY_NODE DOWN_N, TOPOLOGY_LINK MAC_L
119 9 1 119.1 2.2 aywfs0n7wwwhn
DELETE FROM CM_POWER_2 WHERE SECONDID <= :B1
SQL ordered by Elapsed Time DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
117 9 1 117.4 2.2 0fnnktt50m86h
DELETE FROM CM_ERRORS WHERE SECONDID <= :B1
116 1 977 0.1 2.1 5jh6zfmvpu77f
UPDATE ASH.DBIDS@REPO SET ASHSEQ = :B2 WHERE DBID = :B1
108 9 1 107.5 2.0 21jqxqyf80cn8
DELETE FROM CM_POWER_1 WHERE SECONDID <= :B1
107 11 1 107.0 2.0 87gy6mxtk7f3z
DELETE FROM CM_POLL_STATUS WHERE TOPOLOGYID IN ( SELECT DISTINCT TOPOLOGYID FROM
CM_RAWDATA WHERE BATCHID = :B1 )
96 6 1 95.9 1.8 2r6jnnf1hzb4z
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, BITSPERSYM
BOL, TXPOWER_UP FROM CM_POWER_2 power, TOPOLOGY_LINK link, UPSTREAM_CHANNEL chan
nel WHERE power.SECONDID = :1 AND link.TOPOLOGYID = power.TOPOLOGYID AND link.PA
RENTLEN = 1 AND link.STATEID = 1 AND link.LINKTYPEID = 1 AND link.PARENTID = cha
95 1 1 95.1 1.8 1qp1yn30gajjw
SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, M.
TOPOLOGYID UP_ID, T.UP_DESC UP_DESC, T.MAC_ID
MAC_ID, T.CMTS_ID CMTS_ID, M.MAX_PERCENT_UTI
L, M.MAX_PACKETS_PER_SEC, M.AVG_PACKET_SIZE,
94 5 1 93.9 1.7 fxvdq915s3qpt
DELETE FROM TMP_CALC_HFC_SLOW_CM_LAST
87 4 1 86.9 1.6 axyukfdx12pu4
Call CALC_DELETE_SLOW_RAWDATA(:1, :2)
85 9 1 84.6 1.6 998t5bbdfm5rm
INSERT INTO CM_RAWDATA SELECT PROFINDX, 0 BATCHID, TOPOLOGYID, SAMPLETIME, SYSUP
TIME, DOCSIFCMTSCMSTATUSVALUE, DOCSIFCMTSSERVICEINOCTETS, DOCSIFCMTSSERVICEOUTOC
TETS, DOCSIFCMSTATUSTXPOWER, DOCSIFCMTSCMSTATUSRXPOWER, DOCSIFDOWNCHANNELPOWER,
DOCSIFSIGQUNERROREDS, DOCSIFSIGQCORRECTEDS, DOCSIFSIGQUNCORRECTABLES, DOCSIFSIGQ
84 5 1 83.8 1.6 3a11s4c86wdu5
DELETE FROM CM_RAWDATA WHERE BATCHID = 0 AND PROFINDX = :B1
77 22 150,832 0.0 1.4 5zm9acqtd51h7
insert into cm_sid_rawdata (profindx, batchid, topologyid, sid, sampletime, docs
IfCmtsServiceQosProfile) values (:1, :2, :3, :4, :5, :6)
74 9 1 73.6 1.4 3whpusvtv0qq1
INSERT INTO TMP_CALC_QOS_SLOW_CM_TMP SELECT T.CMTSID, T.DOWNID, T.UPID, T.CMID,
GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), R.DO
CSIFCMTSSERVICEINOCTETS, R.DOCSIFCMTSSERVICEOUTOCTETS, S.SID, L.PREV_SECONDID, L
.PREV_IFINOCTETS, L.PREV_IFOUTOCTETS, L.PREV_SID FROM TMP_TOP_SLOW_CM T, CM_RAWD
74 8 1 73.5 1.4 9h99br1t3qq3a
INSERT INTO TMP_CALC_HFC_SLOW_CM_LAST SELECT * FROM TMP_CALC_HFC_SLOW_CM_LAST_TM
P
72 7 1 72.0 1.3 4qunm1qbf8cyk
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, CHANNELWID
TH, RXPOWER_UP, RXPOWER UPSTREAM_AVG_RX FROM CM_POWER_1 power, TOPOLOGY_LINK lin
k, UPSTREAM_CHANNEL channel, UPSTREAM_POWER_1 upstream_rx WHERE power.SECONDID =
:1 and power.SECONDID = upstream_rx.secondid AND link.TOPOLOGYID = power.TOPOLO
SQL ordered by Elapsed Time DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
68 3 1 68.4 1.3 bzmccctnyjb3z
INSERT INTO DOWNSTREAM_ERRORS SELECT T2.SECONDID, T1.DOWNID, ROUND(AVG(T2.SAMPLE
_LENGTH), 0), ROUND(AVG(DECODE(T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRECTABLES
,0,0, T2.UNCORRECTABLES / ( T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRECTABLES )
* 100)) ,2) AVG_CER, ROUND(AVG(DECODE(T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRE
64 7 1 63.6 1.2 fqcwt6uak8x3w
INSERT INTO TMP_CALC_QOS_SLOW_CM_LAST SELECT * FROM TMP_CALC_QOS_SLOW_CM_LAST_TM
P
59 6 1 58.8 1.1 fd6a0p6333g8z
SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY
-------------------------------------------------------------
SQL ordered by CPU Time DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ----------- ------- -------------
134 797 1 133.81 14.8 f1qcyh20550cf
Call CALC_QOS_SLOW(:1, :2, :3, :4)
72 137 1 71.96 2.5 298wmz1kxjs1m
INSERT INTO CM_QOS_PROF SELECT :B1 , R.TOPOLOGYID, :B1 - :B4 , P.NODE_PROFILE_ID
, R.DOCSIFCMTSSERVICEQOSPROFILE FROM CM_SID_RAWDATA R, ( SELECT DISTINCT T.CMID,
P.QOS_PROF_IDX, P.NODE_PROFILE_ID FROM TMP_TOP_SLOW_CM T, CMTS_QOS_PROF P WHERE
T.CMTSID = P.TOPOLOGYID AND P.SECONDID = :B1 ) P WHERE R.BATCHID = :B3 AND R.PR
58 773 1 57.60 14.4 fj6gjgsshtxyx
Call CALC_DELETE_OLD_DATA(:1)
29 275 1 29.25 5.1 8t8as9usk11qw
Call CALC_TOPOLOGY_SLOW(:1, :2, :3, :4)
25 354 1 24.50 6.6 0cjsxw5ndqdbc
Call CALC_HFC_SLOW(:1, :2, :3, :4)
22 77 150,832 0.00 1.4 5zm9acqtd51h7
insert into cm_sid_rawdata (profindx, batchid, topologyid, sid, sampletime, docs
IfCmtsServiceQosProfile) values (:1, :2, :3, :4, :5, :6)
19 52 150,324 0.00 1.0 6xz6vg8q1zygu
insert into cm_rawdata (profindx, batchid, topologyid, sampletime, docsifcmtscms
tatusvalue, docsifcmtsserviceinoctets, docsifcmtsserviceoutoctets, docsifcmtscms
tatusrxpower, cmtscm_unerr, cmtscm_corr, cmtscm_uncorr, cmtscm_snr, cmtscm_timin
goffset) values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)
18 40 150,259 0.00 0.7 c2a2g4fqnm25h
insert into cm_rawdata (profindx, batchid, topologyid, sampletime, sysuptime, do
csifcmstatustxpower, docsifdownchannelpower, docsifsigqunerroreds, docsifsigqcor
recteds, docsifsigquncorrectables, docsifsigqsignalnoise, sysobjectid) values (:
1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12)
16 158 0 N/A 2.9 10dkqv3kr8xa5
SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY
11 107 1 10.68 2.0 87gy6mxtk7f3z
DELETE FROM CM_POLL_STATUS WHERE TOPOLOGYID IN ( SELECT DISTINCT TOPOLOGYID FROM
CM_RAWDATA WHERE BATCHID = :B1 )
9 130 1 9.26 2.4 86m0m9q8fw9bj
DELETE FROM CM_QOS_PROF WHERE SECONDID <= :B1
9 130 1 9.03 2.4 6n0d6cv6w6krs
DELETE FROM CM_VA WHERE SECONDID <= :B1
9 108 1 9.01 2.0 21jqxqyf80cn8
DELETE FROM CM_POWER_1 WHERE SECONDID <= :B1
9 74 1 8.99 1.4 3whpusvtv0qq1
INSERT INTO TMP_CALC_QOS_SLOW_CM_TMP SELECT T.CMTSID, T.DOWNID, T.UPID, T.CMID,
GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), R.DO
CSIFCMTSSERVICEINOCTETS, R.DOCSIFCMTSSERVICEOUTOCTETS, S.SID, L.PREV_SECONDID, L
.PREV_IFINOCTETS, L.PREV_IFOUTOCTETS, L.PREV_SID FROM TMP_TOP_SLOW_CM T, CM_RAWD
9 117 1 8.96 2.2 0fnnktt50m86h
SQL ordered by CPU Time DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ----------- ------- -------------
DELETE FROM CM_ERRORS WHERE SECONDID <= :B1
9 124 1 8.88 2.3 gyqv6h5pft4mj
DELETE FROM CM_BYTES WHERE SECONDID <= :B1
9 119 1 8.87 2.2 aywfs0n7wwwhn
DELETE FROM CM_POWER_2 WHERE SECONDID <= :B1
9 85 1 8.52 1.6 998t5bbdfm5rm
INSERT INTO CM_RAWDATA SELECT PROFINDX, 0 BATCHID, TOPOLOGYID, SAMPLETIME, SYSUP
TIME, DOCSIFCMTSCMSTATUSVALUE, DOCSIFCMTSSERVICEINOCTETS, DOCSIFCMTSSERVICEOUTOC
TETS, DOCSIFCMSTATUSTXPOWER, DOCSIFCMTSCMSTATUSRXPOWER, DOCSIFDOWNCHANNELPOWER,
DOCSIFSIGQUNERROREDS, DOCSIFSIGQCORRECTEDS, DOCSIFSIGQUNCORRECTABLES, DOCSIFSIGQ
8 74 1 7.66 1.4 9h99br1t3qq3a
INSERT INTO TMP_CALC_HFC_SLOW_CM_LAST SELECT * FROM TMP_CALC_HFC_SLOW_CM_LAST_TM
P
7 64 1 7.43 1.2 fqcwt6uak8x3w
INSERT INTO TMP_CALC_QOS_SLOW_CM_LAST SELECT * FROM TMP_CALC_QOS_SLOW_CM_LAST_TM
P
7 139 1 7.13 2.6 38zhkf4jdyff4
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN ash.collect(3,1200); :mydate := next_date; IF broken THEN :b := 1
; ELSE :b := 0; END IF; END;
7 72 1 6.69 1.3 4qunm1qbf8cyk
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, CHANNELWID
TH, RXPOWER_UP, RXPOWER UPSTREAM_AVG_RX FROM CM_POWER_1 power, TOPOLOGY_LINK lin
k, UPSTREAM_CHANNEL channel, UPSTREAM_POWER_1 upstream_rx WHERE power.SECONDID =
:1 and power.SECONDID = upstream_rx.secondid AND link.TOPOLOGYID = power.TOPOLO
6 59 1 6.12 1.1 fd6a0p6333g8z
SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY
6 96 1 5.82 1.8 2r6jnnf1hzb4z
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, BITSPERSYM
BOL, TXPOWER_UP FROM CM_POWER_2 power, TOPOLOGY_LINK link, UPSTREAM_CHANNEL chan
nel WHERE power.SECONDID = :1 AND link.TOPOLOGYID = power.TOPOLOGYID AND link.PA
RENTLEN = 1 AND link.STATEID = 1 AND link.LINKTYPEID = 1 AND link.PARENTID = cha
5 84 1 5.23 1.6 3a11s4c86wdu5
DELETE FROM CM_RAWDATA WHERE BATCHID = 0 AND PROFINDX = :B1
5 94 1 5.19 1.7 fxvdq915s3qpt
DELETE FROM TMP_CALC_HFC_SLOW_CM_LAST
4 202 4 1.11 3.8 dr1rkrznhh95b
Call CALC_TOPOLOGY_MEDIUM(:1, :2, :3, :4)
4 87 1 3.68 1.6 axyukfdx12pu4
Call CALC_DELETE_SLOW_RAWDATA(:1, :2)
3 126 1 2.92 2.3 33bpz9dh1w5jk
Module: Lab128
--lab128 select /*+rule*/ owner, segment_name||decode(partition_name,null,nul
SQL ordered by CPU Time DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ----------- ------- -------------
l,' ('||partition_name||')') name, segment_type,tablespace_name, extent_id,f
ile_id,block_id, blocks,bytes/1048576 bytes from dba_extents
3 68 1 2.66 1.3 bzmccctnyjb3z
INSERT INTO DOWNSTREAM_ERRORS SELECT T2.SECONDID, T1.DOWNID, ROUND(AVG(T2.SAMPLE
_LENGTH), 0), ROUND(AVG(DECODE(T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRECTABLES
,0,0, T2.UNCORRECTABLES / ( T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRECTABLES )
* 100)) ,2) AVG_CER, ROUND(AVG(DECODE(T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRE
2 121 56 0.04 2.3 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
2 120 4 0.42 2.2 4zjg6w4mwu0wv
INSERT INTO TMP_TOP_MED_DN SELECT M.CMTSID, M.VENDOR_DESC, M.MODEL_DESC, MAC_L.T
OPOLOGYID, DOWN_L.TOPOLOGYID, M.UP_SNR_CNR_A3, M.UP_SNR_CNR_A2, M.UP_SNR_CNR_A1,
M.UP_SNR_CNR_A0, M.MAC_SLOTS_OPEN, M.MAC_SLOTS_USED, M.CMTS_REBOOT, 0 FROM TMP_
TOP_MED_CMTS M, TOPOLOGY_LINK DOWN_L, TOPOLOGY_NODE DOWN_N, TOPOLOGY_LINK MAC_L
1 95 1 1.19 1.8 1qp1yn30gajjw
SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, M.
TOPOLOGYID UP_ID, T.UP_DESC UP_DESC, T.MAC_ID
MAC_ID, T.CMTS_ID CMTS_ID, M.MAX_PERCENT_UTI
L, M.MAX_PACKETS_PER_SEC, M.AVG_PACKET_SIZE,
1 116 977 0.00 2.1 5jh6zfmvpu77f
UPDATE ASH.DBIDS@REPO SET ASHSEQ = :B2 WHERE DBID = :B1
-------------------------------------------------------------
SQL ordered by Gets DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets: 30,077,723
-> Captured SQL account for 169.4% of Total
Gets CPU Elapsed
Buffer Gets Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
16,494,914 1 ############ 54.8 133.81 796.60 f1qcyh20550cf
Call CALC_QOS_SLOW(:1, :2, :3, :4)
11,322,501 1 ############ 37.6 71.96 136.75 298wmz1kxjs1m
INSERT INTO CM_QOS_PROF SELECT :B1 , R.TOPOLOGYID, :B1 - :B4 , P.NODE_PROFILE_ID
, R.DOCSIFCMTSSERVICEQOSPROFILE FROM CM_SID_RAWDATA R, ( SELECT DISTINCT T.CMID,
P.QOS_PROF_IDX, P.NODE_PROFILE_ID FROM TMP_TOP_SLOW_CM T, CMTS_QOS_PROF P WHERE
T.CMTSID = P.TOPOLOGYID AND P.SECONDID = :B1 ) P WHERE R.BATCHID = :B3 AND R.PR
3,835,310 1 3,835,310.0 12.8 57.60 773.15 fj6gjgsshtxyx
Call CALC_DELETE_OLD_DATA(:1)
2,140,461 1 2,140,461.0 7.1 24.50 354.27 0cjsxw5ndqdbc
Call CALC_HFC_SLOW(:1, :2, :3, :4)
1,434,233 1 1,434,233.0 4.8 29.25 275.28 8t8as9usk11qw
Call CALC_TOPOLOGY_SLOW(:1, :2, :3, :4)
1,400,037 1 1,400,037.0 4.7 8.99 73.62 3whpusvtv0qq1
INSERT INTO TMP_CALC_QOS_SLOW_CM_TMP SELECT T.CMTSID, T.DOWNID, T.UPID, T.CMID,
GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), R.DO
CSIFCMTSSERVICEINOCTETS, R.DOCSIFCMTSSERVICEOUTOCTETS, S.SID, L.PREV_SECONDID, L
.PREV_IFINOCTETS, L.PREV_IFOUTOCTETS, L.PREV_SID FROM TMP_TOP_SLOW_CM T, CM_RAWD
1,213,966 1 1,213,966.0 4.0 6.05 14.45 553hp60qv7vyh
select errors.TOPOLOGYID, errors.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, CHANNELW
IDTH, BITSPERSYMBOL, SNR_DOWN, RXPOWER_DOWN FROM CM_ERRORS errors, CM_POWER_2 po
wer, TOPOLOGY_LINK link, DOWNSTREAM_CHANNEL channel where errors.SECONDID = powe
r.SECONDID AND errors.SECONDID = :1 AND errors.TOPOLOGYID = power.TOPOLOGYID AND
1,065,052 1 1,065,052.0 3.5 6.69 72.01 4qunm1qbf8cyk
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, CHANNELWID
TH, RXPOWER_UP, RXPOWER UPSTREAM_AVG_RX FROM CM_POWER_1 power, TOPOLOGY_LINK lin
k, UPSTREAM_CHANNEL channel, UPSTREAM_POWER_1 upstream_rx WHERE power.SECONDID =
:1 and power.SECONDID = upstream_rx.secondid AND link.TOPOLOGYID = power.TOPOLO
1,011,784 1 1,011,784.0 3.4 8.52 84.62 998t5bbdfm5rm
INSERT INTO CM_RAWDATA SELECT PROFINDX, 0 BATCHID, TOPOLOGYID, SAMPLETIME, SYSUP
TIME, DOCSIFCMTSCMSTATUSVALUE, DOCSIFCMTSSERVICEINOCTETS, DOCSIFCMTSSERVICEOUTOC
TETS, DOCSIFCMSTATUSTXPOWER, DOCSIFCMTSCMSTATUSRXPOWER, DOCSIFDOWNCHANNELPOWER,
DOCSIFSIGQUNERROREDS, DOCSIFSIGQCORRECTEDS, DOCSIFSIGQUNCORRECTABLES, DOCSIFSIGQ
776,443 1 776,443.0 2.6 7.66 73.54 9h99br1t3qq3a
INSERT INTO TMP_CALC_HFC_SLOW_CM_LAST SELECT * FROM TMP_CALC_HFC_SLOW_CM_LAST_TM
P
762,710 1 762,710.0 2.5 5.82 95.88 2r6jnnf1hzb4z
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, BITSPERSYM
BOL, TXPOWER_UP FROM CM_POWER_2 power, TOPOLOGY_LINK link, UPSTREAM_CHANNEL chan
nel WHERE power.SECONDID = :1 AND link.TOPOLOGYID = power.TOPOLOGYID AND link.PA
RENTLEN = 1 AND link.STATEID = 1 AND link.LINKTYPEID = 1 AND link.PARENTID = cha
724,267 1 724,267.0 2.4 7.43 63.59 fqcwt6uak8x3w
INSERT INTO TMP_CALC_QOS_SLOW_CM_LAST SELECT * FROM TMP_CALC_QOS_SLOW_CM_LAST_TM
P
669,534 1 669,534.0 2.2 6.37 38.97 094vgzny6jvm4
INSERT INTO CM_VA ( SECONDID, TOPOLOGYID, CER, CCER, SNR, STATUSVALUE, TIMINGOFF
SET ) SELECT :B3 , TOPOLOGYID, CASE WHEN (CMTSCM_UNERR_D IS NULL OR CMTSCM_CORR_
D IS NULL OR CMTSCM_UNCORR_D IS NULL) THEN NULL ELSE 100 * CMTSCM_UNCORR_D/TOTAL
SQL ordered by Gets DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets: 30,077,723
-> Captured SQL account for 169.4% of Total
Gets CPU Elapsed
Buffer Gets Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
_D END CER, CASE WHEN (CMTSCM_UNERR_D IS NULL OR CMTSCM_CORR_D IS NULL OR CMTSCM
633,947 150,259 4.2 2.1 18.21 40.04 c2a2g4fqnm25h
insert into cm_rawdata (profindx, batchid, topologyid, sampletime, sysuptime, do
csifcmstatustxpower, docsifdownchannelpower, docsifsigqunerroreds, docsifsigqcor
recteds, docsifsigquncorrectables, docsifsigqsignalnoise, sysobjectid) values (:
1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12)
618,871 150,324 4.1 2.1 18.56 51.78 6xz6vg8q1zygu
insert into cm_rawdata (profindx, batchid, topologyid, sampletime, docsifcmtscms
tatusvalue, docsifcmtsserviceinoctets, docsifcmtsserviceoutoctets, docsifcmtscms
tatusrxpower, cmtscm_unerr, cmtscm_corr, cmtscm_uncorr, cmtscm_snr, cmtscm_timin
goffset) values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)
615,244 1 615,244.0 2.0 9.03 130.46 6n0d6cv6w6krs
DELETE FROM CM_VA WHERE SECONDID <= :B1
615,129 1 615,129.0 2.0 9.26 130.03 86m0m9q8fw9bj
DELETE FROM CM_QOS_PROF WHERE SECONDID <= :B1
614,747 1 614,747.0 2.0 8.96 117.43 0fnnktt50m86h
DELETE FROM CM_ERRORS WHERE SECONDID <= :B1
614,661 1 614,661.0 2.0 8.88 124.47 gyqv6h5pft4mj
DELETE FROM CM_BYTES WHERE SECONDID <= :B1
614,649 1 614,649.0 2.0 10.68 107.01 87gy6mxtk7f3z
DELETE FROM CM_POLL_STATUS WHERE TOPOLOGYID IN ( SELECT DISTINCT TOPOLOGYID FROM
CM_RAWDATA WHERE BATCHID = :B1 )
613,965 1 613,965.0 2.0 8.87 119.15 aywfs0n7wwwhn
DELETE FROM CM_POWER_2 WHERE SECONDID <= :B1
613,256 1 613,256.0 2.0 9.01 107.53 21jqxqyf80cn8
DELETE FROM CM_POWER_1 WHERE SECONDID <= :B1
598,348 150,832 4.0 2.0 22.39 76.71 5zm9acqtd51h7
insert into cm_sid_rawdata (profindx, batchid, topologyid, sid, sampletime, docs
IfCmtsServiceQosProfile) values (:1, :2, :3, :4, :5, :6)
343,903 1 343,903.0 1.1 2.45 11.06 8b7g4s4qa5r1d
INSERT INTO UPSTREAM_POWER_1 SELECT :B4 , T.UPID, :B4 - :B3 , ROUND(AVG(C.DOCSIF
CMTSCMSTATUSRXPOWER), 0) FROM CM_RAWDATA C, TMP_TOP_SLOW_CM T WHERE C.TOPOLOGYID
= T.CMID AND C.BATCHID = :B2 AND C.PROFINDX = :B1 GROUP BY T.UPID
301,471 1 301,471.0 1.0 2.66 68.37 bzmccctnyjb3z
INSERT INTO DOWNSTREAM_ERRORS SELECT T2.SECONDID, T1.DOWNID, ROUND(AVG(T2.SAMPLE
_LENGTH), 0), ROUND(AVG(DECODE(T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRECTABLES
,0,0, T2.UNCORRECTABLES / ( T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRECTABLES )
* 100)) ,2) AVG_CER, ROUND(AVG(DECODE(T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRE
-------------------------------------------------------------
SQL ordered by Reads DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Total Disk Reads: 401,992
-> Captured SQL account for 134.7% of Total
Reads CPU Elapsed
Physical Reads Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
192,597 1 192,597.0 47.9 133.81 796.60 f1qcyh20550cf
Call CALC_QOS_SLOW(:1, :2, :3, :4)
144,969 1 144,969.0 36.1 71.96 136.75 298wmz1kxjs1m
INSERT INTO CM_QOS_PROF SELECT :B1 , R.TOPOLOGYID, :B1 - :B4 , P.NODE_PROFILE_ID
, R.DOCSIFCMTSSERVICEQOSPROFILE FROM CM_SID_RAWDATA R, ( SELECT DISTINCT T.CMID,
P.QOS_PROF_IDX, P.NODE_PROFILE_ID FROM TMP_TOP_SLOW_CM T, CMTS_QOS_PROF P WHERE
T.CMTSID = P.TOPOLOGYID AND P.SECONDID = :B1 ) P WHERE R.BATCHID = :B3 AND R.PR
28,436 4 7,109.0 7.1 4.42 201.93 dr1rkrznhh95b
Call CALC_TOPOLOGY_MEDIUM(:1, :2, :3, :4)
22,352 1 22,352.0 5.6 24.50 354.27 0cjsxw5ndqdbc
Call CALC_HFC_SLOW(:1, :2, :3, :4)
21,907 1 21,907.0 5.4 57.60 773.15 fj6gjgsshtxyx
Call CALC_DELETE_OLD_DATA(:1)
15,834 0 N/A 3.9 15.56 158.02 10dkqv3kr8xa5
SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY
15,050 1 15,050.0 3.7 29.25 275.28 8t8as9usk11qw
Call CALC_TOPOLOGY_SLOW(:1, :2, :3, :4)
13,424 1 13,424.0 3.3 6.12 58.83 fd6a0p6333g8z
SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY
10,667 1 10,667.0 2.7 2.92 125.63 33bpz9dh1w5jk
Module: Lab128
--lab128 select /*+rule*/ owner, segment_name||decode(partition_name,null,nul
l,' ('||partition_name||')') name, segment_type,tablespace_name, extent_id,f
ile_id,block_id, blocks,bytes/1048576 bytes from dba_extents
9,156 4 2,289.0 2.3 1.68 119.84 4zjg6w4mwu0wv
INSERT INTO TMP_TOP_MED_DN SELECT M.CMTSID, M.VENDOR_DESC, M.MODEL_DESC, MAC_L.T
OPOLOGYID, DOWN_L.TOPOLOGYID, M.UP_SNR_CNR_A3, M.UP_SNR_CNR_A2, M.UP_SNR_CNR_A1,
M.UP_SNR_CNR_A0, M.MAC_SLOTS_OPEN, M.MAC_SLOTS_USED, M.CMTS_REBOOT, 0 FROM TMP_
TOP_MED_CMTS M, TOPOLOGY_LINK DOWN_L, TOPOLOGY_NODE DOWN_N, TOPOLOGY_LINK MAC_L
8,700 1 8,700.0 2.2 3.68 86.86 axyukfdx12pu4
Call CALC_DELETE_SLOW_RAWDATA(:1, :2)
6,878 1 6,878.0 1.7 8.99 73.62 3whpusvtv0qq1
INSERT INTO TMP_CALC_QOS_SLOW_CM_TMP SELECT T.CMTSID, T.DOWNID, T.UPID, T.CMID,
GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), R.DO
CSIFCMTSSERVICEINOCTETS, R.DOCSIFCMTSSERVICEOUTOCTETS, S.SID, L.PREV_SECONDID, L
.PREV_IFINOCTETS, L.PREV_IFOUTOCTETS, L.PREV_SID FROM TMP_TOP_SLOW_CM T, CM_RAWD
5,338 1 5,338.0 1.3 6.37 38.97 094vgzny6jvm4
INSERT INTO CM_VA ( SECONDID, TOPOLOGYID, CER, CCER, SNR, STATUSVALUE, TIMINGOFF
SET ) SELECT :B3 , TOPOLOGYID, CASE WHEN (CMTSCM_UNERR_D IS NULL OR CMTSCM_CORR_
D IS NULL OR CMTSCM_UNCORR_D IS NULL) THEN NULL ELSE 100 * CMTSCM_UNCORR_D/TOTAL
_D END CER, CASE WHEN (CMTSCM_UNERR_D IS NULL OR CMTSCM_CORR_D IS NULL OR CMTSCM
SQL ordered by Reads DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Total Disk Reads: 401,992
-> Captured SQL account for 134.7% of Total
Reads CPU Elapsed
Physical Reads Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
4,337 4 1,084.3 1.1 0.36 5.60 46jpzuthyv6wa
Module: Lab128
--lab128 select se.fa_se, uit.ui, uipt.uip, uist.uis, fr_s.fr_se, t.dt from (se
lect /*+ all_rows */ count(*) fa_se from (select ts#,max(length) m from sys.fet$
group by ts#) f, sys.seg$ s where s.ts#=f.ts# and extsize>m) se, (select count(
*) ui from sys.ind$ where bitand(flags,1)=1) uit, (select count(*) uip from sys.
4,197 1 4,197.0 1.0 2.45 11.06 8b7g4s4qa5r1d
INSERT INTO UPSTREAM_POWER_1 SELECT :B4 , T.UPID, :B4 - :B3 , ROUND(AVG(C.DOCSIF
CMTSCMSTATUSRXPOWER), 0) FROM CM_RAWDATA C, TMP_TOP_SLOW_CM T WHERE C.TOPOLOGYID
= T.CMID AND C.BATCHID = :B2 AND C.PROFINDX = :B1 GROUP BY T.UPID
-------------------------------------------------------------
SQL ordered by Executions DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Total Executions: 542,597
-> Captured SQL account for 86.2% of Total
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) SQL Id
------------ --------------- -------------- ---------- ----------- -------------
150,832 150,324 1.0 0.00 0.00 5zm9acqtd51h7
insert into cm_sid_rawdata (profindx, batchid, topologyid, sid, sampletime, docs
IfCmtsServiceQosProfile) values (:1, :2, :3, :4, :5, :6)
150,324 150,324 1.0 0.00 0.00 6xz6vg8q1zygu
insert into cm_rawdata (profindx, batchid, topologyid, sampletime, docsifcmtscms
tatusvalue, docsifcmtsserviceinoctets, docsifcmtsserviceoutoctets, docsifcmtscms
tatusrxpower, cmtscm_unerr, cmtscm_corr, cmtscm_uncorr, cmtscm_snr, cmtscm_timin
goffset) values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)
150,259 150,259 1.0 0.00 0.00 c2a2g4fqnm25h
insert into cm_rawdata (profindx, batchid, topologyid, sampletime, sysuptime, do
csifcmstatustxpower, docsifdownchannelpower, docsifsigqunerroreds, docsifsigqcor
recteds, docsifsigquncorrectables, docsifsigqsignalnoise, sysobjectid) values (:
1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12)
8,128 8,128 1.0 0.00 0.01 12a0nrhpk3hym
UPDATE TOPOLOGY_LINK SET DATETO=sysdate, STATEID=0 WHERE TOPOLOGYID=:1 AND PAREN
TID=:2 AND STATEID=1
977 977 1.0 0.00 0.12 5jh6zfmvpu77f
UPDATE ASH.DBIDS@REPO SET ASHSEQ = :B2 WHERE DBID = :B1
624 624 1.0 0.00 0.00 7h35uxf5uhmm1
select sysdate from dual
624 0 0.0 0.00 0.00 apuw5pk7p77hc
ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED
595 7,140 12.0 0.01 0.01 d5vf5a1ffcskb
Module: Lab128
--lab128 select replace(stat_name,'TICKS','TIME') stat_name,value from v$osstat
where substr(stat_name,1,3) !='AVG'
567 567 1.0 0.00 0.00 bsa0wjtftg3uw
select file# from file$ where ts#=:1
556 556 1.0 0.01 0.02 7gtztzv329wg0
select c.name, u.name from con$ c, cdef$ cd, user$ u where c.con# = cd.con# and
cd.enabled = :1 and c.owner# = u.user#
-------------------------------------------------------------
SQL ordered by Parse Calls DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Total Parse Calls: 11,460
-> Captured SQL account for 56.7% of Total
% Total
Parse Calls Executions Parses SQL Id
------------ ------------ --------- -------------
624 624 5.45 7h35uxf5uhmm1
select sysdate from dual
624 624 5.45 apuw5pk7p77hc
ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED
567 567 4.95 bsa0wjtftg3uw
select file# from file$ where ts#=:1
556 556 4.85 7gtztzv329wg0
select c.name, u.name from con$ c, cdef$ cd, user$ u where c.con# = cd.con# and
cd.enabled = :1 and c.owner# = u.user#
508 150,832 4.43 5zm9acqtd51h7
insert into cm_sid_rawdata (profindx, batchid, topologyid, sid, sampletime, docs
IfCmtsServiceQosProfile) values (:1, :2, :3, :4, :5, :6)
448 448 3.91 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0
411 411 3.59 9qgtwh66xg6nz
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,extsize=:9,e
xtpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL, :13),groups=decod
e(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16, spare1=DECODE(:17,0,NULL,:
17),scanhint=:18 where ts#=:1 and file#=:2 and block#=:3
297 297 2.59 350f5yrnnmshs
lock table sys.mon_mods$ in exclusive mode nowait
297 297 2.59 g00cj285jmgsw
update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd, del
etes = deletes + :del, flags = (decode(bitand(flags, :flag), :flag, flags, flags
+ :flag)), drop_segments = drop_segments + :dropseg, timestamp = :time where ob
j# = :objn
181 181 1.58 6129566gyvx21
Module: OEM.SystemPool
SELECT INSTANTIABLE, supertype_owner, supertype_name, LOCAL_ATTRIBUTES FROM all_
types WHERE type_name = :1 AND owner = :2
144 144 1.26 0k8522rmdzg4k
select privilege# from sysauth$ where (grantee#=:1 or grantee#=1) and privilege#
>0
128 128 1.12 cp8ygp2mr8j6s
select * from TOPOLOGY_NODETYPE where NODETYPEID < 0
117 117 1.02 2b064ybzkwf1y
Module: OEM.SystemPool
BEGIN EMD_NOTIFICATION.QUEUE_READY(:1, :2, :3); END;
117 117 1.02 9p1um1wd886xb
select o.owner#, u.name, o.name, o.namespace, o.obj#, d.d
_timestamp, nvl(d.property,0), o.type#, o.subname, d.d_attrs from dependency$ d
, obj$ o, user$ u where d.p_obj#=:1 and (d.p_timestamp=:2 or d.property=2)
and d.d_obj#=o.obj# and o.owner#=u.user# order by o.obj#
116 116 1.01 9zg6y3ucgy8kb
select n.intcol# from ntab$ n, col$ c where n.obj#=:1 and c.obj#=:1 and c.intco
SQL ordered by Parse Calls DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Total Parse Calls: 11,460
-> Captured SQL account for 56.7% of Total
% Total
Parse Calls Executions Parses SQL Id
------------ ------------ --------- -------------
l#=n.intcol# and bitand(c.property, 32768)!=32768
-------------------------------------------------------------
SQL ordered by Sharable Memory DB/Inst: CDB10/cdb10 Snaps: 122-123
No data exists for this section of the report.
-------------------------------------------------------------
SQL ordered by Version Count DB/Inst: CDB10/cdb10 Snaps: 122-123
No data exists for this section of the report.
-------------------------------------------------------------
Instance Activity Stats DB/Inst: CDB10/cdb10 Snaps: 122-123
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
CPU used by this session 48,802 13.5 23.9
CPU used when call started 49,725 13.8 24.3
CR blocks created 1,548 0.4 0.8
Cached Commit SCN referenced 4,257 1.2 2.1
Commit SCN cached 19 0.0 0.0
DB time 2,051,539 567.4 1,002.7
DBWR checkpoint buffers written 7,052 2.0 3.5
DBWR checkpoints 78 0.0 0.0
DBWR object drop buffers written 352 0.1 0.2
DBWR revisited being-written buf 281 0.1 0.1
DBWR thread checkpoint buffers w 6,008 1.7 2.9
DBWR transaction table writes 169 0.1 0.1
DBWR undo block writes 86,711 24.0 42.4
IMU CR rollbacks 196 0.1 0.1
IMU Flushes 1,921 0.5 0.9
IMU Redo allocation size 4,831,688 1,336.3 2,361.5
IMU commits 1,095 0.3 0.5
IMU contention 51 0.0 0.0
IMU ktichg flush 11 0.0 0.0
IMU pool not allocated 261 0.1 0.1
IMU recursive-transaction flush 5 0.0 0.0
IMU undo allocation size 8,282,272 2,290.7 4,048.0
IMU- failed to get a private str 261 0.1 0.1
PX local messages recv'd 0 0.0 0.0
PX local messages sent 0 0.0 0.0
SMON posted for undo segment shr 8 0.0 0.0
SQL*Net roundtrips to/from clien 557,524 154.2 272.5
SQL*Net roundtrips to/from dblin 5,571 1.5 2.7
active txn count during cleanout 667,416 184.6 326.2
application wait time 2,949 0.8 1.4
auto extends on undo tablespace 0 0.0 0.0
background checkpoints completed 33 0.0 0.0
background checkpoints started 32 0.0 0.0
background timeouts 10,887 3.0 5.3
branch node splits 14 0.0 0.0
buffer is not pinned count 16,308,390 4,510.5 7,970.9
buffer is pinned count 37,217,420 10,293.4 18,190.3
bytes received via SQL*Net from 54,299,124 15,017.8 26,539.2
bytes received via SQL*Net from 702,510 194.3 343.4
bytes sent via SQL*Net to client 59,493,239 16,454.4 29,077.8
bytes sent via SQL*Net to dblink 4,758,313 1,316.0 2,325.7
calls to get snapshot scn: kcmgs 102,555 28.4 50.1
calls to kcmgas 122,772 34.0 60.0
calls to kcmgcs 666,871 184.4 325.9
change write time 93,636 25.9 45.8
cleanout - number of ktugct call 694,894 192.2 339.6
cleanouts and rollbacks - consis 524 0.1 0.3
cleanouts only - consistent read 16,400 4.5 8.0
cluster key scan block gets 62,504 17.3 30.6
cluster key scans 44,624 12.3 21.8
commit batch performed 5 0.0 0.0
commit batch requested 5 0.0 0.0
commit batch/immediate performed 49 0.0 0.0
commit batch/immediate requested 49 0.0 0.0
commit cleanout failures: block 10,148 2.8 5.0
commit cleanout failures: buffer 39 0.0 0.0
commit cleanout failures: callba 93 0.0 0.1
commit cleanout failures: cannot 2 0.0 0.0
commit cleanouts 49,810 13.8 24.4
commit cleanouts successfully co 39,528 10.9 19.3
commit immediate performed 44 0.0 0.0
commit immediate requested 44 0.0 0.0
commit txn count during cleanout 37,416 10.4 18.3
concurrency wait time 6,361 1.8 3.1
consistent changes 375,588 103.9 183.6
consistent gets 19,788,311 5,473.0 9,671.7
consistent gets - examination 15,781,101 4,364.7 7,713.2
consistent gets direct 2 0.0 0.0
consistent gets from cache 19,788,309 5,473.0 9,671.7
current blocks converted for CR 1 0.0 0.0
cursor authentications 60 0.0 0.0
data blocks consistent reads - u 7,046 2.0 3.4
db block changes 9,922,875 2,744.4 4,849.9
db block gets 10,289,412 2,845.8 5,029.0
db block gets direct 3,341 0.9 1.6
db block gets from cache 10,286,071 2,844.9 5,027.4
deferred (CURRENT) block cleanou 10,217 2.8 5.0
dirty buffers inspected 142,881 39.5 69.8
enqueue conversions 13,940 3.9 6.8
enqueue releases 71,947 19.9 35.2
enqueue requests 71,973 19.9 35.2
enqueue timeouts 34 0.0 0.0
enqueue waits 65 0.0 0.0
exchange deadlocks 0 0.0 0.0
execute count 542,597 150.1 265.2
free buffer inspected 536,842 148.5 262.4
free buffer requested 511,414 141.4 250.0
global undo segment hints helped 0 0.0 0.0
global undo segment hints were s 0 0.0 0.0
heap block compress 23,794 6.6 11.6
hot buffers moved to head of LRU 35,300 9.8 17.3
immediate (CR) block cleanout ap 16,924 4.7 8.3
immediate (CURRENT) block cleano 40,644 11.2 19.9
index fast full scans (full) 11 0.0 0.0
index fetch by key 9,609,838 2,657.9 4,696.9
index scans kdiixs1 540,504 149.5 264.2
leaf node 90-10 splits 3,675 1.0 1.8
leaf node splits 7,868 2.2 3.9
lob reads 10 0.0 0.0
lob writes 597 0.2 0.3
lob writes unaligned 597 0.2 0.3
logons cumulative 179 0.1 0.1
messages received 36,800 10.2 18.0
messages sent 36,800 10.2 18.0
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 3,414,669 944.4 1,669.0
opened cursors cumulative 11,030 3.1 5.4
parse count (failures) 11 0.0 0.0
parse count (hard) 263 0.1 0.1
parse count (total) 11,460 3.2 5.6
parse time cpu 184 0.1 0.1
parse time elapsed 5,105 1.4 2.5
physical read IO requests 286,506 79.2 140.0
physical read bytes 3,293,118,464 910,795.7 1,609,539.8
physical read total IO requests 312,883 86.5 152.9
physical read total bytes 3,723,894,784 1,029,937.9 1,820,085.4
physical read total multi block 16,936 4.7 8.3
physical reads 401,992 111.2 196.5
physical reads cache 391,309 108.2 191.3
physical reads cache prefetch 106,160 29.4 51.9
physical reads direct 10,683 3.0 5.2
physical reads direct (lob) 2 0.0 0.0
physical reads direct temporary 10,450 2.9 5.1
physical write IO requests 124,209 34.4 60.7
physical write bytes 1,423,941,632 393,827.3 695,963.7
physical write total IO requests 135,013 37.3 66.0
physical write total bytes 3,039,874,048 840,754.5 1,485,764.4
physical write total multi block 9,946 2.8 4.9
physical writes 173,821 48.1 85.0
physical writes direct 15,138 4.2 7.4
physical writes direct (lob) 3 0.0 0.0
physical writes direct temporary 13,312 3.7 6.5
physical writes from cache 158,683 43.9 77.6
physical writes non checkpoint 171,458 47.4 83.8
pinned buffers inspected 1,327 0.4 0.7
prefetched blocks aged out befor 971 0.3 0.5
process last non-idle time 5,863 1.6 2.9
recovery blocks read 0 0.0 0.0
recursive calls 110,227 30.5 53.9
recursive cpu usage 28,845 8.0 14.1
redo blocks read for recovery 0 0.0 0.0
redo blocks written 2,951,190 816.2 1,442.4
redo buffer allocation retries 4,972 1.4 2.4
redo entries 4,971,193 1,374.9 2,429.7
redo log space requests 1,018 0.3 0.5
redo log space wait time 16,736 4.6 8.2
redo ordering marks 86,212 23.8 42.1
redo size 1,462,839,100 404,585.4 714,975.1
redo synch time 114,641 31.7 56.0
redo synch writes 5,072 1.4 2.5
redo wastage 773,164 213.8 377.9
redo write time 208,649 57.7 102.0
redo writer latching time 9 0.0 0.0
redo writes 2,820 0.8 1.4
rollback changes - undo records 7,908 2.2 3.9
rollbacks only - consistent read 1,010 0.3 0.5
rows fetched via callback 6,732,803 1,862.1 3,290.7
session connect time 0 0.0 0.0
session cursor cache hits 6,009 1.7 2.9
session logical reads 30,077,723 8,318.8 14,700.7
session pga memory 87,991,760 24,336.4 43,006.7
session pga memory max 128,361,936 35,501.8 62,738.0
session uga memory 262,000,976,040 72,463,036.0 #############
session uga memory max 122,117,960 33,774.8 59,686.2
shared hash latch upgrades - no 918,434 254.0 448.9
shared hash latch upgrades - wai 3 0.0 0.0
sorts (disk) 0 0.0 0.0
sorts (memory) 32,808 9.1 16.0
sorts (rows) 1,889,801 522.7 923.7
sql area purged 58 0.0 0.0
summed dirty queue length 2,498,747 691.1 1,221.3
switch current to new buffer 10,984 3.0 5.4
table fetch by rowid 20,173,244 5,579.4 9,859.9
table fetch continued row 9 0.0 0.0
table scan blocks gotten 227,381 62.9 111.1
table scan rows gotten 22,027,503 6,092.3 10,766.1
table scans (cache partitions) 0 0.0 0.0
table scans (long tables) 176 0.1 0.1
table scans (short tables) 5,560 1.5 2.7
total number of times SMON poste 172 0.1 0.1
transaction rollbacks 49 0.0 0.0
transaction tables consistent re 7 0.0 0.0
transaction tables consistent re 254 0.1 0.1
undo change vector size 619,905,088 171,450.5 302,983.9
user I/O wait time 233,992 64.7 114.4
user calls 560,283 155.0 273.8
user commits 1,614 0.5 0.8
user rollbacks 432 0.1 0.2
workarea executions - onepass 4 0.0 0.0
workarea executions - optimal 36,889 10.2 18.0
write clones created in backgrou 169 0.1 0.1
write clones created in foregrou 830 0.2 0.4
-------------------------------------------------------------
Instance Activity Stats - Absolute Values DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Statistics with absolute values (should not be diffed)
Statistic Begin Value End Value
-------------------------------- --------------- ---------------
session cursor cache count 36,864 38,406
opened cursors current 895 925
workarea memory allocated 33,293 34,475
logons current 36 37
-------------------------------------------------------------
Instance Activity Stats - Thread Activity DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Statistics identified by '(derived)' come from sources other than SYSSTAT
Statistic Total per Hour
-------------------------------- ------------------ ---------
log switches (derived) 32 31.86
-------------------------------------------------------------
Tablespace IO Stats DB/Inst: CDB10/cdb10 Snaps: 122-123
-> ordered by IOs (Reads + Writes) desc
Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TS_STARGUS
194,616 54 8.3 1.2 43,074 12 0 0.0
TEMP
73,213 20 5.1 1.4 13,433 4 0 0.0
UNDOTBS1
998 0 34.5 1.0 65,474 18 152 325.0
SYSTEM
9,656 3 12.1 5.1 254 0 2 300.0
SYSAUX
6,768 2 16.5 1.1 1,773 0 2 10.0
PERFSTAT
661 0 35.7 1.0 271 0 0 0.0
EXAMPLE
482 0 13.4 1.0 33 0 0 0.0
USERS
105 0 8.7 1.0 33 0 0 0.0
-------------------------------------------------------------
File IO Stats DB/Inst: CDB10/cdb10 Snaps: 122-123
-> ordered by Tablespace, File
Tablespace Filename
------------------------ ----------------------------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
EXAMPLE /export/home/oracle10/oradata/cdb10/example01.dbf
482 0 13.4 1.0 33 0 0 0.0
PERFSTAT /export/home/oracle10/oradata/cdb10/perfstat01.dbf
661 0 35.7 1.0 271 0 0 0.0
SYSAUX /export/home/oracle10/oradata/cdb10/sysaux01.dbf
6,768 2 16.5 1.1 1,773 0 2 10.0
SYSTEM /export/home/oracle10/oradata/cdb10/system01.dbf
9,656 3 12.1 5.1 254 0 2 300.0
TEMP /export/home/oracle10/oradata/cdb10/temp01.dbf
73,213 20 5.1 1.4 13,433 4 0 N/A
TS_STARGUS /export/home/oracle10/oradata/cdb10/ts_stargus_01.db
194,616 54 8.3 1.2 43,074 12 0 0.0
UNDOTBS1 /export/home/oracle10/oradata/cdb10/undotbs01.dbf
998 0 34.5 1.0 65,474 18 152 325.0
USERS /export/home/oracle10/oradata/cdb10/users01.dbf
105 0 8.7 1.0 33 0 0 0.0
-------------------------------------------------------------
Buffer Pool Statistics DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Writ Buffer
Number of Pool Buffer Physical Physical Buff Comp Busy
P Buffers Hit% Gets Reads Writes Wait Wait Waits
--- ---------- ---- -------------- ------------ ----------- ---- ---- ----------
D 3,465 99 30,072,012 391,303 159,176 #### 8 156
-------------------------------------------------------------
Instance Recovery Stats DB/Inst: CDB10/cdb10 Snaps: 122-123
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- ------------
B 0 20 1046 147434 184320 184320 483666 N/A
E 0 16 764 94387 184320 184320 441470 N/A
-------------------------------------------------------------
Buffer Pool Advisory DB/Inst: CDB10/cdb10 Snap: 123
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate
Est
Phys
Size for Size Buffers for Read Estimated
P Est (M) Factor Estimate Factor Physical Reads
--- -------- ------ ---------------- ------ ------------------
D 4 .1 495 2.6 5,966,703
D 8 .3 990 1.4 3,331,760
D 12 .4 1,485 1.4 3,181,146
D 16 .6 1,980 1.3 3,073,609
D 20 .7 2,475 1.3 2,965,522
D 24 .9 2,970 1.0 2,373,562
D 28 1.0 3,465 1.0 2,334,724
D 32 1.1 3,960 1.0 2,309,994
D 36 1.3 4,455 1.0 2,278,012
D 40 1.4 4,950 1.0 2,253,921
D 44 1.6 5,445 1.0 2,231,246
D 48 1.7 5,940 0.9 2,212,530
D 52 1.9 6,435 0.9 2,184,378
D 56 2.0 6,930 0.9 2,146,358
-------------------------------------------------------------
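-- note: the advisory above flattens out at the current 28M default pool (the
-- estimated read factor stays ~1.0 from 24M up), so growing db_cache_size buys
-- little here. A sketch of the live view behind this section (assumes the
-- advice is turned ON):
select size_for_estimate, buffers_for_estimate,
       estd_physical_read_factor, estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT'
   and block_size = 8192
   and advice_status = 'ON'
 order by size_for_estimate;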
PGA Aggr Summary DB/Inst: CDB10/cdb10 Snaps: 122-123
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ------------------ --------------------------
93.7 3,557 241
-------------------------------------------------------------
PGA Aggr Target Stats DB/Inst: CDB10/cdb10 Snaps: 122-123
-> B: Begin snap E: End snap (rows identified with B or E contain data
   which is absolute i.e. not diffed over the interval)
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
- ---------- ---------- ---------- ---------- ------ ------ ------ ----------
B 200 127 141.6 32.5 23.0 100.0 .0 40,960
E 200 125 144.8 33.7 23.2 100.0 .0 40,960
-------------------------------------------------------------
PGA Aggr Target Histogram DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Optimal Executions are purely in-memory operations
Low High
Optimal Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- -------------- ------------ ------------
2K 4K 33,637 33,637 0 0
64K 128K 25 25 0 0
128K 256K 3 3 0 0
256K 512K 26 26 0 0
512K 1024K 2,273 2,273 0 0
1M 2M 895 895 0 0
4M 8M 10 8 2 0
8M 16M 12 12 0 0
16M 32M 2 2 0 0
64M 128M 2 0 2 0
-------------------------------------------------------------
PGA Memory Advisory DB/Inst: CDB10/cdb10 Snap: 123
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- ---------------- -------- ----------
25 0.1 56,190.1 4,876.0 92.0 353
50 0.3 56,190.1 3,846.0 94.0 203
100 0.5 56,190.1 406.6 99.0 0
150 0.8 56,190.1 278.9 100.0 0
200 1.0 56,190.1 278.9 100.0 0
240 1.2 56,190.1 215.7 100.0 0
280 1.4 56,190.1 215.7 100.0 0
320 1.6 56,190.1 215.7 100.0 0
360 1.8 56,190.1 215.7 100.0 0
400 2.0 56,190.1 215.7 100.0 0
600 3.0 56,190.1 215.7 100.0 0
800 4.0 56,190.1 215.7 100.0 0
1,200 6.0 56,190.1 215.7 100.0 0
1,600 8.0 56,190.1 215.7 100.0 0
-------------------------------------------------------------
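-- note: per the header note above, the floor for pga_aggregate_target is the
-- smallest size with Estd PGA Overalloc Count = 0 -- 100M here -- so the
-- current 200M target already yields ~100% cache hit. A live sketch of the
-- same advisory:
select pga_target_for_estimate/1024/1024 as target_mb,
       pga_target_factor, estd_pga_cache_hit_percentage, estd_overalloc_count
  from v$pga_target_advice
 order by pga_target_for_estimate;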
Shared Pool Advisory DB/Inst: CDB10/cdb10 Snap: 123
-> SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
in the Library Cache, and the physical number of memory objects associated
with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid.
Est LC Est LC Est LC Est LC
Shared SP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
96 .8 19 2,407 ####### 1.0 882 2.0 3,172,239
112 .9 33 3,038 ####### 1.0 538 1.2 3,190,425
128 1.0 47 4,150 ####### 1.0 433 1.0 3,193,792
144 1.1 62 5,909 ####### 1.0 430 1.0 3,194,235
160 1.3 77 7,196 ####### 1.0 427 1.0 3,194,510
176 1.4 92 8,955 ####### 1.0 427 1.0 3,194,594
192 1.5 107 10,579 ####### 1.0 426 1.0 3,194,828
208 1.6 122 12,029 ####### 1.0 426 1.0 3,195,128
224 1.8 137 13,603 ####### 1.0 424 1.0 3,195,555
240 1.9 152 14,744 ####### 1.0 423 1.0 3,195,770
256 2.0 167 15,773 ####### 1.0 423 1.0 3,195,906
-------------------------------------------------------------
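-- note: estimated load time is already at factor 1.0 at the configured 128M
-- shared pool and barely improves beyond it (433s -> 423s at 256M), so there
-- is no pressure to grow it. A live sketch of the same advisory:
select shared_pool_size_for_estimate, shared_pool_size_factor,
       estd_lc_load_time, estd_lc_time_saved, estd_lc_memory_object_hits
  from v$shared_pool_advice
 order by shared_pool_size_for_estimate;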
SGA Target Advisory DB/Inst: CDB10/cdb10 Snap: 123
No data exists for this section of the report.
-------------------------------------------------------------
Streams Pool Advisory DB/Inst: CDB10/cdb10 Snap: 123
No data exists for this section of the report.
-------------------------------------------------------------
Java Pool Advisory DB/Inst: CDB10/cdb10 Snap: 123
No data exists for this section of the report.
-------------------------------------------------------------
Buffer Wait Statistics DB/Inst: CDB10/cdb10 Snaps: 122-123
-> ordered by wait time desc, waits desc
Class Waits Total Wait Time (s) Avg Time (ms)
------------------ ----------- ------------------- --------------
undo header 152 49 325
data block 4 1 155
-------------------------------------------------------------
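-- note: these per-class waits come from v$waitstat; the 152 undo header waits
-- at 325ms each line up with the 152 buffer waits on UNDOTBS1 in the
-- Tablespace IO section above. Sketch:
select class, count, time
  from v$waitstat
 where count > 0
 order by time desc;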
Enqueue Activity DB/Inst: CDB10/cdb10 Snaps: 122-123
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc
Enqueue Type (Request Reason)
------------------------------------------------------------------------------
Requests Succ Gets Failed Gets Waits Wt Time (s) Av Wt Time(ms)
------------ ------------ ----------- ----------- ------------ --------------
RO-Multiple Object Reuse (fast object reuse)
414 414 0 46 23 505.78
CF-Controlfile Transaction
2,004 2,003 1 19 7 366.58
-------------------------------------------------------------
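-- note: the RO (fast object reuse) and CF (controlfile) waits above average
-- roughly 370-500ms per wait; the cumulative counters behind this section are
-- in v$enqueue_stat. Sketch:
select eq_type, total_req#, succ_req#, failed_req#, total_wait#, cum_wait_time
  from v$enqueue_stat
 where total_wait# > 0
 order by cum_wait_time desc;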
Undo Segment Summary DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Num Undo Number of Max Qry Max Tx Min/Max STO/ uS/uR/uU/
TS# Blocks (K) Transactions Len (s) Concurcy TR (mins) OOS eS/eR/eU
---- ---------- --------------- -------- -------- --------- ----- --------------
1 82.3 16,347 253 6 15/15.25 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Undo Segment Stats DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Most recent 35 Undostat rows, ordered by Time desc
Num Undo Number of Max Qry Max Tx Tun Ret STO/ uS/uR/uU/
End Time Blocks Transactions Len (s) Concy (mins) OOS eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- ------------
31-Jul 17:54 17,588 4,451 13 6 15 0/0 0/0/0/0/0/0
31-Jul 17:44 11,302 4,215 0 4 15 0/0 0/0/0/0/0/0
31-Jul 17:34 8,066 1,832 0 4 15 0/0 0/0/0/0/0/0
31-Jul 17:24 17,412 861 90 5 15 0/0 0/0/0/0/0/0
31-Jul 17:14 15,100 892 137 3 15 0/0 0/0/0/0/0/0
31-Jul 17:04 12,857 4,096 253 6 15 0/0 0/0/0/0/0/0
-------------------------------------------------------------
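-- note: with STO/OOS at 0/0 and a max query length of 253s against ~15min of
-- tuned retention, undo is comfortably sized over this interval. The rows
-- above are v$undostat; a sketch of the same, most recent first:
select begin_time, end_time, undoblks, txncount, maxquerylen,
       maxconcurrency, tuned_undoretention, ssolderrcnt, nospaceerrcnt
  from v$undostat
 order by begin_time desc;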
Latch Activity DB/Inst: CDB10/cdb10 Snaps: 122-123
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Name Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
AWR Alerted Metric Eleme 13,936 0.0 N/A 0 0 N/A
Consistent RBA 2,852 0.0 N/A 0 0 N/A
FOB s.o list latch 333 0.0 N/A 0 0 N/A
In memory undo latch 30,230 0.0 0.7 8 4,148 0.0
JS mem alloc latch 3 0.0 N/A 0 0 N/A
JS queue access latch 3 0.0 N/A 0 0 N/A
JS queue state obj latch 25,990 0.0 N/A 0 0 N/A
JS slv state obj latch 115 0.0 N/A 0 0 N/A
KMG MMAN ready and start 1,201 0.0 N/A 0 0 N/A
KTF sga latch 10 0.0 N/A 0 1,006 0.0
KWQMN job cache list lat 116 0.0 N/A 0 0 N/A
KWQP Prop Status 1 0.0 N/A 0 0 N/A
MQL Tracking Latch 0 N/A N/A 0 72 0.0
Memory Management Latch 0 N/A N/A 0 1,201 0.0
OS process 573 0.0 N/A 0 0 N/A
OS process allocation 1,584 0.0 N/A 0 0 N/A
OS process: request allo 235 0.0 N/A 0 0 N/A
PL/SQL warning settings 935 0.0 N/A 0 0 N/A
SQL memory manager latch 2 0.0 N/A 0 1,177 0.0
SQL memory manager worka 92,470 0.0 N/A 0 0 N/A
Shared B-Tree 137 0.0 N/A 0 0 N/A
active checkpoint queue 34,292 0.0 N/A 0 0 N/A
active service list 8,119 0.0 N/A 0 1,272 0.0
archive control 1,017 0.0 N/A 0 0 N/A
begin backup scn array 92 0.0 N/A 0 0 N/A
cache buffer handles 22,997 0.0 N/A 0 0 N/A
cache buffers chains 66,867,303 0.0 0.0 0 665,222 0.0
cache buffers lru chain 1,026,321 0.1 0.0 1 135,882 0.1
cache table scan latch 0 N/A N/A 0 16,587 0.0
channel handle pool latc 570 0.0 N/A 0 0 N/A
channel operations paren 25,362 0.1 0.0 0 0 N/A
checkpoint queue latch 369,985 0.0 0.0 0 140,728 0.0
client/application info 2,329 0.0 N/A 0 0 N/A
commit callback allocati 97 0.0 N/A 0 0 N/A
compile environment latc 7,185 0.0 N/A 0 0 N/A
dictionary lookup 55 0.0 N/A 0 0 N/A
dml lock allocation 21,178 0.0 N/A 0 0 N/A
dummy allocation 357 0.0 N/A 0 0 N/A
enqueue hash chains 157,981 0.0 0.0 0 5,754 0.0
enqueues 97,190 0.0 0.0 0 0 N/A
event group latch 118 0.0 N/A 0 0 N/A
file cache latch 995 0.0 N/A 0 0 N/A
global KZLD latch for me 81 0.0 N/A 0 0 N/A
global tx hash mapping 10,377 0.0 N/A 0 0 N/A
hash table column usage 163 0.0 N/A 0 72,097 0.0
hash table modification 129 0.0 N/A 0 0 N/A
job workq parent latch 0 N/A N/A 0 122 0.0
job_queue_processes para 120 0.0 N/A 0 0 N/A
kks stats 504 0.0 N/A 0 0 N/A
ksuosstats global area 1,435 0.0 N/A 0 0 N/A
ktm global data 194 0.0 N/A 0 0 N/A
kwqbsn:qsga 137 0.0 N/A 0 0 N/A
lgwr LWN SCN 2,855 0.0 0.0 0 0 N/A
library cache 1,239,482 0.0 0.0 0 322 0.0
library cache load lock 90 0.0 N/A 0 7 0.0
library cache lock 55,083 0.0 N/A 0 0 N/A
library cache lock alloc 1,753 0.0 N/A 0 0 N/A
library cache pin 1,158,486 0.0 0.0 0 0 N/A
library cache pin alloca 584 0.0 N/A 0 0 N/A
list of block allocation 1,340 0.0 N/A 0 0 N/A
loader state object free 570 0.0 N/A 0 0 N/A
longop free list parent 1 0.0 N/A 0 1 0.0
message pool operations 568 0.0 N/A 0 0 N/A
messages 99,560 0.0 0.0 0 0 N/A
mostly latch-free SCN 2,865 0.2 0.0 0 0 N/A
multiblock read objects 35,166 0.0 N/A 0 0 N/A
ncodef allocation latch 71 0.0 N/A 0 0 N/A
object queue header heap 7,276 0.0 N/A 0 7,237 0.0
object queue header oper 1,520,984 0.0 0.0 1 0 N/A
object stats modificatio 360 1.1 0.0 0 0 N/A
parallel query alloc buf 472 0.0 N/A 0 0 N/A
parameter list 643 0.0 N/A 0 0 N/A
parameter table allocati 240 0.0 N/A 0 0 N/A
post/wait queue 9,658 0.1 0.0 0 3,466 0.0
process allocation 235 0.0 N/A 0 118 0.0
process group creation 235 0.0 N/A 0 0 N/A
qmn task queue latch 512 0.0 N/A 0 0 N/A
redo allocation 25,223 0.1 0.0 0 4,972,609 0.0
redo copy 0 N/A N/A 0 4,972,708 0.0
redo writing 46,698 0.0 0.0 0 0 N/A
resmgr group change latc 533 0.0 N/A 0 0 N/A
resmgr:actses active lis 950 0.0 N/A 0 0 N/A
resmgr:actses change gro 142 0.0 N/A 0 0 N/A
resmgr:free threads list 353 0.0 N/A 0 0 N/A
resmgr:schema config 597 0.0 N/A 0 0 N/A
row cache objects 231,601 0.0 0.0 0 448 0.0
rules engine aggregate s 17 0.0 N/A 0 0 N/A
rules engine rule set st 134 0.0 N/A 0 0 N/A
sequence cache 4,464 0.0 N/A 0 0 N/A
session allocation 42,421 0.0 N/A 0 0 N/A
session idle bit 1,127,557 0.0 0.0 0 0 N/A
session state list latch 494 0.0 N/A 0 0 N/A
session switching 71 0.0 N/A 0 0 N/A
session timer 1,272 0.0 N/A 0 0 N/A
shared pool 26,428 0.0 0.4 0 0 N/A
simulator hash latch 2,137,589 0.0 N/A 0 0 N/A
simulator lru latch 2,051,579 0.0 0.0 0 46,222 0.1
slave class 2 0.0 N/A 0 0 N/A
slave class create 8 0.0 N/A 0 0 N/A
sort extent pool 4,406 0.1 0.0 0 0 N/A
state object free list 2 0.0 N/A 0 0 N/A
statistics aggregation 140 0.0 N/A 0 0 N/A
temp lob duration state 2 0.0 N/A 0 0 N/A
threshold alerts latch 305 0.0 N/A 0 0 N/A
transaction allocation 875,726 0.0 N/A 0 0 N/A
transaction branch alloc 2,031 0.0 N/A 0 0 N/A
undo global data 804,587 0.0 0.0 0 0 N/A
user lock 444 0.0 N/A 0 0 N/A
-------------------------------------------------------------
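-- note: "Pct Get Miss" is misses/gets on willing-to-wait requests and should
-- sit near 0.0, as it does here even for the 66.8M-get cache buffers chains
-- latch. A sketch flagging anything above a 0.1% miss rate in the cumulative
-- view:
select name, gets, misses, sleeps, round(100*misses/gets, 2) as pct_miss
  from v$latch
 where gets > 0
   and misses/gets > 0.001
 order by misses desc;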
Latch Sleep Breakdown DB/Inst: CDB10/cdb10 Snaps: 122-123
-> ordered by misses desc
Latch Name
----------------------------------------
Get Requests Misses Sleeps Spin Gets Sleep1 Sleep2 Sleep3
-------------- ----------- ----------- ---------- -------- -------- --------
cache buffers chains
66,867,303 1,726 4 1,722 0 0 0
cache buffers lru chain
1,026,321 1,124 10 1,114 0 0 0
simulator lru latch
2,051,579 537 2 535 0 0 0
library cache
1,239,482 149 4 145 0 0 0
object queue header operation
1,520,984 123 2 121 0 0 0
library cache pin
1,158,486 33 1 32 0 0 0
redo allocation
25,223 33 1 32 0 0 0
In memory undo latch
30,230 7 5 2 0 0 0
shared pool
26,428 5 2 3 0 0 0
-------------------------------------------------------------
Latch Miss Sources DB/Inst: CDB10/cdb10 Snaps: 122-123
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
In memory undo latch ktiFlush: child 0 5 4
cache buffers chains kcbgtcr: kslbegin excl 0 4 0
cache buffers chains kcbgtcr: fast path 0 2 1
cache buffers chains kcbchg: kslbegin: call CR 0 1 1
cache buffers chains kcbgtcr: kslbegin shared 0 1 0
cache buffers lru chain kcbzgws_1 0 6 9
cache buffers lru chain kcbzar: KSLNBEGIN 0 2 0
cache buffers lru chain kcbbic2 0 1 1
cache buffers lru chain kcbbwlru 0 1 0
library cache kglhdiv: child 0 1 0
library cache lock kgllkdl: child: no lock ha 0 2 0
library cache pin kglpndl 0 1 1
object queue header oper kcbo_switch_cq 0 1 1
object queue header oper kcbw_link_q 0 1 0
redo allocation kcrfw_redo_gen: redo alloc 0 1 0
shared pool kghalp 0 1 0
shared pool kghfrunp: clatch: nowait 0 1 0
shared pool kghupr1 0 1 0
simulator lru latch kcbs_simulate: simulate se 0 2 1
-------------------------------------------------------------
Parent Latch Statistics DB/Inst: CDB10/cdb10 Snaps: 122-123
No data exists for this section of the report.
-------------------------------------------------------------
Child Latch Statistics DB/Inst: CDB10/cdb10 Snaps: 122-123
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Logical Reads DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Total Logical Reads: 30,077,723
-> Captured Segments account for 88.6% of Total
Tablespace Subobject Obj. Logical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
STARGUS TEMP PK_TMP_TOP_SLOW_CM INDEX 5,264,912 17.50
STARGUS TEMP TMP_TOP_SLOW_CM TABLE 5,244,192 17.44
STARGUS TS_STARGUS PK_CM_RAWDATA INDEX 2,271,232 7.55
STARGUS TS_STARGUS CM_RAWDATA TABLE 1,899,472 6.32
STARGUS TS_STARGUS CM_SID_RAWDATA TABLE 1,440,752 4.79
-------------------------------------------------------------
Segments by Physical Reads DB/Inst: CDB10/cdb10 Snaps: 122-123
-> Total Physical Reads: 401,992
-> Captured Segments account for 67.7% of Total
Tablespace Subobject Obj. Physical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
STARGUS TS_STARGUS PK_CM_SID_RAWDATA INDEX 42,629 10.60
STARGUS TEMP TMP_TOP_SLOW_CM TABLE 38,818 9.66
STARGUS TS_STARGUS CM_SID_RAWDATA TABLE 38,588 9.60
STARGUS TEMP PK_TMP_TOP_SLOW_CM INDEX 31,020 7.72
STARGUS TS_STARGUS TOPOLOGY_LINK TABLE 30,360 7.55
-------------------------------------------------------------
Segments by Row Lock Waits DB/Inst: CDB10/cdb10 Snaps: 122-123
-> % of Capture shows % of row lock waits for each top segment compared
-> with total row lock waits for all segments captured by the Snapshot
Row
Tablespace Subobject Obj. Lock % of
Owner Name Object Name Name Type Waits Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS SYSTEM SMON_SCN_TIME TABLE 4 30.77
SYSMAN SYSAUX MGMT_METRICS_1HOUR_P INDEX 2 15.38
PERFSTAT PERFSTAT STATS$EVENT_HISTOGRA INDEX 2 15.38
PERFSTAT PERFSTAT STATS$LATCH_PK INDEX 2 15.38
SYS SYSAUX WRH$_SERVICE_STAT_PK 559071_106 INDEX 2 15.38
-------------------------------------------------------------
Segments by ITL Waits DB/Inst: CDB10/cdb10 Snaps: 122-123
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Buffer Busy Waits DB/Inst: CDB10/cdb10 Snaps: 122-123
-> % of Capture shows % of Buffer Busy Waits for each top segment compared
-> with total Buffer Busy Waits for all segments captured by the Snapshot
Buffer
Tablespace Subobject Obj. Busy % of
Owner Name Object Name Name Type Waits Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS SYSTEM JOB$ TABLE 2 66.67
SYSMAN SYSAUX MGMT_CURRENT_METRICS INDEX 1 33.33
-------------------------------------------------------------
Dictionary Cache Stats DB/Inst: CDB10/cdb10 Snaps: 122-123
-> "Pct Misses" should be very low (< 2% in most cases)
-> "Final Usage" is the number of cache entries being used
Get Pct Scan Pct Mod Final
Cache Requests Miss Reqs Miss Reqs Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control 67 0.0 0 N/A 2 1
dc_database_links 72 0.0 0 N/A 0 1
dc_files 70 0.0 0 N/A 0 7
dc_global_oids 4,852 0.0 0 N/A 0 16
dc_histogram_data 3,190 0.9 0 N/A 0 1,064
dc_histogram_defs 6,187 0.7 0 N/A 0 1,592
dc_object_ids 7,737 0.9 0 N/A 1 480
dc_objects 1,345 1.9 0 N/A 56 437
dc_profiles 163 0.0 0 N/A 0 2
dc_rollback_segments 677 0.0 0 N/A 0 22
dc_segments 1,839 0.5 0 N/A 411 264
dc_sequences 75 0.0 0 N/A 75 6
dc_tablespace_quotas 890 0.1 0 N/A 0 5
dc_tablespaces 26,615 0.0 0 N/A 0 8
dc_usernames 257 0.4 0 N/A 0 9
dc_users 25,512 0.0 0 N/A 0 44
outstanding_alerts 126 7.1 0 N/A 17 17
-------------------------------------------------------------
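-- note: against the < 2% rule of thumb above, only outstanding_alerts misses
-- at 7.1% here, and that is on a trivial 126 gets. A live sketch from
-- v$rowcache:
select parameter, gets, getmisses, round(100*getmisses/gets, 2) as pct_miss
  from v$rowcache
 where gets > 0
 order by gets desc;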
Library Cache Activity DB/Inst: CDB10/cdb10 Snaps: 122-123
-> "Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 550 0.0 6,440 0.0 0 0
CLUSTER 1 0.0 4 0.0 0 0
INDEX 41 0.0 86 0.0 0 0
SQL AREA 64 71.9 548,251 0.1 194 177
TABLE/PROCEDURE 389 3.9 13,583 0.3 16 0
TRIGGER 34 11.8 520 0.8 0 0
-------------------------------------------------------------
Process Memory Summary DB/Inst: CDB10/cdb10 Snaps: 122-123
-> B: Begin snap E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
-> Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> ordered by Begin/End snapshot, Alloc (MB) desc
Hist
Avg Std Dev Max Max
Alloc Used Alloc Alloc Alloc Alloc Num Num
Category (MB) (MB) (MB) (MB) (MB) (MB) Proc Alloc
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B Other 71.7 N/A 1.9 3.6 22 22 38 38
SQL 39.2 38.0 1.4 6.9 37 46 29 25
Freeable 29.8 .0 1.1 1.5 9 N/A 26 26
PL/SQL .9 .5 .0 .0 0 0 36 36
E Other 74.2 N/A 1.9 3.6 22 22 39 39
SQL 40.2 38.9 1.3 6.7 37 46 30 26
Freeable 29.5 .0 1.1 1.5 9 N/A 26 26
PL/SQL 1.0 .6 .0 .0 0 0 37 37
-------------------------------------------------------------
SGA Memory Summary DB/Inst: CDB10/cdb10 Snaps: 122-123
End Size (Bytes)
SGA regions Begin Size (Bytes) (if different)
------------------------------ ------------------- -------------------
Database Buffers 29,360,128
Fixed Size 1,979,488
Redo Buffers 6,406,144
Variable Size 423,627,680
-------------------
sum 461,373,440
-------------------------------------------------------------
SGA breakdown difference DB/Inst: CDB10/cdb10 Snaps: 122-123
-> ordered by Pool, Name
-> N/A value for Begin MB or End MB indicates the size of that Pool/Name was
insignificant, or zero in that snapshot
Pool Name Begin MB End MB % Diff
------ ------------------------------ -------------- -------------- -------
java free memory 24.0 24.0 0.00
shared ASH buffers 4.0 4.0 0.00
shared CCursor 6.5 6.5 -0.11
shared FileOpenBlock 1.4 1.4 0.00
shared Heap0: KGL 3.8 3.7 -0.62
shared KCB Table Scan Buffer 3.8 3.8 0.00
shared KGLS heap 1.6 1.5 -6.07
shared KQR M PO 1.5 1.3 -9.15
shared KSFD SGA I/O b 3.8 3.8 0.00
shared PCursor 5.2 5.3 0.20
shared PL/SQL MPCODE 3.4 3.4 0.00
shared event statistics per sess 1.5 1.5 0.00
shared free memory 10.4 10.4 0.39
shared kglsim hash table bkts 4.0 4.0 0.00
shared kglsim heap 1.3 1.4 1.72
shared kglsim object batch 2.1 2.1 1.04
shared kks stbkt 1.5 1.5 0.00
shared library cache 9.5 9.6 0.57
shared private strands 2.3 2.3 0.00
shared row cache 7.1 7.1 0.00
shared sql area 27.2 27.4 0.89
buffer_cache 28.0 28.0 0.00
fixed_sga 1.9 1.9 0.00
log_buffer 6.1 6.1 0.00
-------------------------------------------------------------
Streams CPU/IO Usage DB/Inst: CDB10/cdb10 Snaps: 122-123
No data exists for this section of the report.
-------------------------------------------------------------
Streams Capture DB/Inst: CDB10/cdb10 Snaps: 122-123
No data exists for this section of the report.
-------------------------------------------------------------
Streams Apply DB/Inst: CDB10/cdb10 Snaps: 122-123
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Queues DB/Inst: CDB10/cdb10 Snaps: 122-123
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Subscribers DB/Inst: CDB10/cdb10 Snaps: 122-123
No data exists for this section of the report.
-------------------------------------------------------------
Rule Set DB/Inst: CDB10/cdb10 Snaps: 122-123
No data exists for this section of the report.
-------------------------------------------------------------
Resource Limit Stats DB/Inst: CDB10/cdb10 Snap: 123
No data exists for this section of the report.
-------------------------------------------------------------
init.ora Parameters DB/Inst: CDB10/cdb10 Snaps: 122-123
End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
audit_file_dest /export/home/oracle10/admin/cdb10
background_dump_dest /export/home/oracle10/admin/cdb10
compatible 10.2.0.1.0
control_files /export/home/oracle10/oradata/cdb
core_dump_dest /export/home/oracle10/admin/cdb10
db_block_size 8192
db_cache_size 29360128
db_domain
db_file_multiblock_read_count 8
db_name cdb10
db_recovery_file_dest /export/home/oracle10/flash_recov
db_recovery_file_dest_size 2147483648
dispatchers (PROTOCOL=TCP) (SERVICE=cdb10XDB)
job_queue_processes 10
open_cursors 300
pga_aggregate_target 209715200
processes 150
remote_login_passwordfile EXCLUSIVE
sga_max_size 461373440
sga_target 0
shared_pool_size 134217728
undo_management AUTO
undo_tablespace UNDOTBS1
user_dump_dest /export/home/oracle10/admin/cdb10
-------------------------------------------------------------
End of Report
}}}
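Next is an AWR report from a different instance (IVRS, 10.2.0.3). As a side note, these text reports can also be spooled straight out of the workload repository without running awrrpt.sql; a minimal sketch using the DBID, instance number, and snap pair shown in the IVRS header below:
{{{
set long 1000000 pagesize 0
select output
  from table(dbms_workload_repository.awr_report_text(
         2607950532,   -- dbid (from the report header)
         1,            -- instance number
         338,          -- begin snap id
         339));        -- end snap id
}}}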
{{{
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 338 17-Jan-10 06:50:58 31 2.9
End Snap: 339 17-Jan-10 07:01:01 30 2.2
Elapsed: 10.05 (mins)
DB Time: 22.08 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
---------- ----------
Buffer Cache: 200M 196M Std Block Size: 8K
Shared Pool Size: 92M 96M Log Buffer: 2,860K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 25,946.47 6,162.81
Logical reads: 10,033.03 2,383.05
Block changes: 147.02 34.92
Physical reads: 9,390.59 2,230.46
Physical writes: 41.20 9.79
User calls: 19.14 4.55
Parses: 9.87 2.34
Hard parses: 0.69 0.16
Sorts: 3.05 0.72
Logons: 0.52 0.12
Executes: 95.91 22.78
Transactions: 4.21
% Blocks changed per Read: 1.47 Recursive Call %: 90.93
Rollback per transaction %: 0.51 Rows per Sort: ########
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.99
Buffer Hit %: 102.59 In-memory Sort %: 100.00
Library Hit %: 97.85 Soft Parse %: 93.01
Execute to Parse %: 89.71 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 19.56 % Non-Parse CPU: 98.43
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 75.99 78.27
% SQL with executions>1: 68.86 64.10
% Memory for SQL w/exec>1: 65.95 58.03
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
CPU time 436 32.9
db file sequential read 18,506 279 15 21.1 User I/O
PX Deq Credit: send blkd 79,918 177 2 13.4 Other
direct path read 374,300 149 0 11.2 User I/O
log file parallel write 2,299 83 36 6.2 System I/O
-------------------------------------------------------------
Time Model Statistics DB/Inst: IVRS/ivrs Snaps: 338-339
-> Total time in database user-calls (DB Time): 1324.6s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 1,272.2 96.0
DB CPU 435.7 32.9
parse time elapsed 52.3 3.9
hard parse elapsed time 42.5 3.2
Java execution elapsed time 4.0 .3
PL/SQL execution elapsed time 3.3 .2
PL/SQL compilation elapsed time 0.3 .0
connection management call elapsed time 0.1 .0
sequence load elapsed time 0.1 .0
hard parse (sharing criteria) elapsed time 0.1 .0
repeated bind elapsed time 0.1 .0
hard parse (bind mismatch) elapsed time 0.0 .0
DB time 1,324.6 N/A
background elapsed time 314.3 N/A
background cpu time 11.6 N/A
-------------------------------------------------------------
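-- note: sql execute elapsed time is 96% of DB time, so this interval is almost
-- entirely SQL execution rather than parse or PL/SQL overhead. These rows are
-- snapshot deltas of v$sys_time_model; a cumulative sketch (value is in
-- microseconds):
select stat_name, round(value/1e6, 1) as seconds
  from v$sys_time_model
 order by value desc;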
Wait Class DB/Inst: IVRS/ivrs Snaps: 338-339
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
-------------------- ---------------- ------ ---------------- ------- ---------
User I/O 396,180 .0 488 1 156.0
Other 88,652 5.2 259 3 34.9
System I/O 4,903 .0 243 50 1.9
Commit 1,418 1.3 67 48 0.6
Concurrency 29 20.7 2 60 0.0
Configuration 1 .0 0 247 0.0
Network 8,410 .0 0 0 3.3
Application 36 .0 0 0 0.0
-------------------------------------------------------------
Wait Events DB/Inst: IVRS/ivrs Snaps: 338-339
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
db file sequential read 18,506 .0 279 15 7.3
PX Deq Credit: send blkd 79,918 .0 177 2 31.5
direct path read 374,300 .0 149 0 147.4
log file parallel write 2,299 .0 83 36 0.9
db file parallel write 658 .0 79 120 0.3
PX qref latch 6,958 64.7 79 11 2.7
log file sync 1,418 1.3 67 48 0.6
buffer read retry 54 81.5 43 797 0.0
control file parallel write 259 .0 42 163 0.1
log file sequential read 54 .0 27 507 0.0
control file sequential read 1,577 .0 11 7 0.6
db file scattered read 236 .0 9 36 0.1
direct path write temp 1,533 .0 7 4 0.6
direct path read temp 1,533 .0 2 1 0.6
os thread startup 5 20.0 2 321 0.0
PX Deq: Signal ACK 182 26.4 1 8 0.1
change tracking file synchro 11 .0 1 105 0.0
Log archive I/O 54 .0 1 17 0.0
log file switch completion 1 .0 0 247 0.0
PX Deq: Table Q qref 1,422 .0 0 0 0.6
enq: PS - contention 51 .0 0 3 0.0
SQL*Net more data from clien 12 .0 0 12 0.0
PX Deq: Table Q Get Keys 40 .0 0 3 0.0
latch: library cache 9 .0 0 9 0.0
SQL*Net message to client 8,291 .0 0 0 3.3
latch free 7 .0 0 9 0.0
cursor: pin S wait on X 10 50.0 0 5 0.0
latch: cache buffers lru cha 25 .0 0 2 0.0
latch: session allocation 7 .0 0 5 0.0
log file single write 2 .0 0 9 0.0
direct path write 15 .0 0 1 0.0
latch: shared pool 2 .0 0 6 0.0
latch: redo allocation 1 .0 0 10 0.0
read by other session 3 .0 0 3 0.0
SQL*Net break/reset to clien 36 .0 0 0 0.0
SQL*Net more data to client 107 .0 0 0 0.0
latch: object queue header o 3 .0 0 1 0.0
change tracking file synchro 12 .0 0 0 0.0
LGWR wait for redo copy 14 .0 0 0 0.0
latch: cache buffers chains 3 .0 0 0 0.0
enq: BF - allocation content 1 .0 0 0 0.0
PX Idle Wait 1,398 79.0 2,234 1598 0.6
class slave wait 28 21.4 1,114 39769 0.0
PX Deq: Table Q Normal 348,049 .0 682 2 137.1
jobq slave wait 232 94.8 670 2890 0.1
ASM background timer 148 .0 583 3937 0.1
Streams AQ: qmn coordinator 43 51.2 577 13430 0.0
Streams AQ: qmn slave idle w 21 .0 577 27498 0.0
PX Deq: Execution Msg 7,434 2.5 573 77 2.9
SQL*Net message from client 8,291 .0 568 68 3.3
virtual circuit status 19 100.0 557 29296 0.0
PX Deq: Execute Reply 5,871 1.1 508 86 2.3
PX Deq Credit: need buffer 62,922 .0 48 1 24.8
PX Deq: Table Q Sample 1,307 .0 5 4 0.5
KSV master wait 22 .0 0 22 0.0
PX Deq: Parse Reply 201 .0 0 1 0.1
PX Deq: Msg Fragment 234 .0 0 1 0.1
PX Deq: Join ACK 170 .0 0 1 0.1
SGA: MMAN sleep for componen 16 43.8 0 12 0.0
-------------------------------------------------------------
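-- note: the same per-event figures accumulate in v$system_event (time columns
-- are centiseconds on 10g, idle events included). A sketch ordered like the
-- report:
select event, total_waits, time_waited/100 as time_waited_s,
       average_wait*10 as avg_wait_ms
  from v$system_event
 order by time_waited desc;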
Background Wait Events DB/Inst: IVRS/ivrs Snaps: 338-339
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
log file parallel write 2,299 .0 82 36 0.9
db file parallel write 654 .0 78 119 0.3
control file parallel write 259 .0 42 163 0.1
log file sequential read 54 .0 27 507 0.0
control file sequential read 399 .0 11 26 0.2
os thread startup 5 20.0 2 321 0.0
Log archive I/O 54 .0 1 17 0.0
events in waitclass Other 38 .0 1 16 0.0
log file single write 2 .0 0 9 0.0
direct path write 13 .0 0 1 0.0
latch: shared pool 1 .0 0 10 0.0
direct path read 13 .0 0 0 0.0
db file sequential read 543 .0 -1 -2 0.2
rdbms ipc message 4,458 50.1 7,496 1681 1.8
smon timer 39 .0 611 15662 0.0
ASM background timer 148 .0 583 3937 0.1
pmon timer 241 100.0 581 2412 0.1
Streams AQ: qmn coordinator 43 51.2 577 13430 0.0
Streams AQ: qmn slave idle w 21 .0 577 27498 0.0
KSV master wait 22 .0 0 22 0.0
SGA: MMAN sleep for componen 16 43.8 0 12 0.0
-------------------------------------------------------------
Operating System Statistics DB/Inst: IVRS/ivrs Snaps: 338-339
Statistic Total
-------------------------------- --------------------
BUSY_TIME 46,982
IDLE_TIME 9,587
IOWAIT_TIME 5,623
NICE_TIME 172
SYS_TIME 37,041
USER_TIME 9,589
LOAD 4
RSRC_MGR_CPU_WAIT_TIME 0
PHYSICAL_MEMORY_BYTES 50,048
NUM_CPUS 1
-------------------------------------------------------------
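-- note: BUSY_TIME vs IDLE_TIME puts this single CPU at ~83% busy over the
-- interval, with SYS_TIME nearly 4x USER_TIME. These counters come from
-- v$osstat:
select stat_name, value
  from v$osstat
 order by stat_name;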
Service Statistics DB/Inst: IVRS/ivrs Snaps: 338-339
-> ordered by DB Time
Physical Logical
Service Name DB Time (s) DB CPU (s) Reads Reads
-------------------------------- ------------ ------------ ---------- ----------
ivrs.bayantel.com 1,329.2 427.1 5,587,106 5,878,962
SYS$USERS 91.6 13.5 1,357 94,224
SYS$BACKGROUND 0.0 0.0 1,367 19,062
ivrsXDB 0.0 0.0 0 0
-------------------------------------------------------------
Service Wait Class Stats DB/Inst: IVRS/ivrs Snaps: 338-339
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
classes: User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in centisecond (100th of a second)
Service Name
----------------------------------------------------------------
User I/O User I/O Concurcy Concurcy Admin Admin Network Network
Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time
--------- --------- --------- --------- --------- --------- --------- ---------
ivrs.bayantel.com
394179 34576 16 6 0 0 8358 6
SYS$USERS
1120 3538 2 1 0 0 42 14
SYS$BACKGROUND
1310 10821 6 162 0 0 0 0
-------------------------------------------------------------
SQL ordered by Elapsed Time DB/Inst: IVRS/ivrs Snaps: 338-339
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
90 28 1 89.6 6.8 bsdgaykhvy4xr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not
89 28 2 44.5 6.7 bmfc2a2ym0kwr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk
59 6 1 58.5 4.4 081am6psuh26j
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_part
key = l_partkey and p_brand = 'Brand#35' and p_container = 'LG BOX' and l_quanti
ty < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey)
57 22 1 56.9 4.3 6mrh6s1s5g851
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not
50 6 1 50.2 3.8 acgpfd4ysyfxb
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'puff%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem w
here l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '1
49 21 1 49.2 3.7 2n4xg8c3dmd62
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not
46 15 1 45.7 3.4 29rqwcj4cs31u
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 313) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda
40 1 1 40.1 3.0 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
34 14 1 34.1 2.6 cvhgz2zwbk4qf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis
31 14 1 30.8 2.3 7409gxv4spfj2
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis
29 14 1 29.3 2.2 1f0r8shtps3bu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk
26 14 1 25.9 2.0 05jp96tzvutb6
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis
26 8 1 25.5 1.9 6aqpwwba8xvuu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'GERMANY' then volume else 0 end) / sum(vo
lume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_ext
endedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier
, lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_p
24 11 1 23.7 1.8 8sfhj7ua3qfjf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 312) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda
23 11 2 11.6 1.8 814qvp0rkqug4
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 314) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda
23 7 1 23.2 1.8 94wqqbu0ajcvn
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'JAPAN' then volume else 0 end) / sum(volu
me) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_exten
dedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier,
lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_par
23 7 1 23.2 1.8 5xd0ak4417rk0
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'MOZAMBIQUE' then volume else 0 end) / sum
(volume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_
extendedprice * (1 - l_discount) as volume, n2.n_name as nation from part, suppl
ier, lineitem, orders, customer, nation n1, nation n2, region where p_partkey =
22 2 539 0.0 1.7 aw9ttz9acxbc3
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN payment(:p_w_id,:p_d_id,:p_c_w_id,:p_c_d_id,:p_c_id,:byname,:p_h_amount,:p
_c_last,:p_w_street_1,:p_w_street_2,:p_w_city,:p_w_state,:p_w_zip,:p_d_street_1,
:p_d_street_2,:p_d_city,:p_d_state,:p_d_zip,:p_c_first,:p_c_middle,:p_c_street_1
,:p_c_street_2,:p_c_city,:p_c_state,:p_c_zip,:p_c_phone,:p_c_since,:p_c_credit,:
21 4 2 10.6 1.6 2x4gjqru5u1xx
Module: SQL*Plus
SELECT s0.snap_id id, -- TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END
_INTERVAL_TIME) * 60 + EXTRACT(MINUTE FROM s1.
21 4 546 0.0 1.6 16dhat4ta7xs9
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
begin neword(:no_w_id,:no_max_w_id,:no_d_id,:no_c_id,:no_o_ol_cnt,:no_c_discount
,:no_c_last,:no_c_credit,:no_d_tax,:no_w_tax,:no_d_next_o_id,TO_DATE(:timestamp,
'YYYYMMDDHH24MISS')); END;
21 0 1 20.9 1.6 14wnf35dahb7v
SELECT A.ID,A.TYPE FROM SYS.WRI$_ADV_DEFINITIONS A WHERE A.NAME = :B1
17 0 42 0.4 1.3 d4ujh5yqt1fph
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN delivery(:d_w_id,:d_o_carrier_id,TO_DATE(:timestamp,'YYYYMMDDHH24MISS'));
END;
16 2 1 16.5 1.2 1wzqub25cwnjm
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN wksys.wk_job.invoke(21,21); :mydate := next_date; IF broken THEN
:b := 1; ELSE :b := 0; END IF; END;
16 0 420 0.0 1.2 5ps73nuy5f2vj
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
UPDATE ORDER_LINE SET OL_DELIVERY_D = :B4 WHERE OL_O_ID = :B3 AND OL_D_ID = :B2
AND OL_W_ID = :B1
15 1 317 0.0 1.1 4wg725nwpxb1z
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SELECT C_FIRST, C_MIDDLE, C_ID, C_STREET_1, C_STREET_2, C_CITY, C_STATE, C_ZIP,
C_PHONE, C_CREDIT, C_CREDIT_LIM, C_DISCOUNT, C_BALANCE, C_SINCE FROM CUSTOMER WH
ERE C_W_ID = :B3 AND C_D_ID = :B2 AND C_LAST = :B1 ORDER BY C_FIRST
15 6 1 14.5 1.1 fcfjqugcc1zy0
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_count, count(*) as custdist from ( select c_custkey, count(o_orderkey)
as c_count from customer left outer join orders on c_custkey = o_custkey and o_c
omment not like '%express%requests%' group by c_custkey) c_orders group by c_cou
nt order by custdist desc, c_count desc
14 2 9 1.6 1.1 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
14 7 1 13.9 1.0 15dxu5nmuj14a
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
date '1994-08-01' and o_orderdate < date '1994-08-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
l_receiptdate) group by o_orderpriority order by o_orderpriority
14 2 5,442 0.0 1.0 8yvup05pk06ca
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05
, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID
= :B2 AND S_W_ID = :B1
-------------------------------------------------------------
SQL ordered by CPU Time DB/Inst: IVRS/ivrs Snaps: 338-339
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ----------- ------- -------------
28 89 2 14.17 6.7 bmfc2a2ym0kwr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk
28 90 1 28.03 6.8 bsdgaykhvy4xr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not
22 57 1 21.81 4.3 6mrh6s1s5g851
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not
21 49 1 20.85 3.7 2n4xg8c3dmd62
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not
15 46 1 14.85 3.4 29rqwcj4cs31u
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 313) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda
14 34 1 14.34 2.6 cvhgz2zwbk4qf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis
14 31 1 14.08 2.3 7409gxv4spfj2
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis
14 26 1 13.69 2.0 05jp96tzvutb6
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis
14 29 1 13.58 2.2 1f0r8shtps3bu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk
11 24 1 11.47 1.8 8sfhj7ua3qfjf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 312) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda
11 23 2 5.66 1.8 814qvp0rkqug4
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 314) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda
8 26 1 8.14 1.9 6aqpwwba8xvuu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'GERMANY' then volume else 0 end) / sum(vo
lume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_ext
endedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier
, lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_p
7 23 1 6.89 1.8 5xd0ak4417rk0
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'MOZAMBIQUE' then volume else 0 end) / sum
(volume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_
extendedprice * (1 - l_discount) as volume, n2.n_name as nation from part, suppl
ier, lineitem, orders, customer, nation n1, nation n2, region where p_partkey =
7 23 1 6.76 1.8 94wqqbu0ajcvn
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'JAPAN' then volume else 0 end) / sum(volu
me) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_exten
dedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier,
lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_par
7 14 1 6.63 1.0 15dxu5nmuj14a
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
date '1994-08-01' and o_orderdate < date '1994-08-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
l_receiptdate) group by o_orderpriority order by o_orderpriority
6 15 1 6.36 1.1 fcfjqugcc1zy0
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_count, count(*) as custdist from ( select c_custkey, count(o_orderkey)
as c_count from customer left outer join orders on c_custkey = o_custkey and o_c
omment not like '%express%requests%' group by c_custkey) c_orders group by c_cou
nt order by custdist desc, c_count desc
6 50 1 6.02 3.8 acgpfd4ysyfxb
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'puff%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem w
here l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '1
6 59 1 5.87 4.4 081am6psuh26j
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_part
key = l_partkey and p_brand = 'Brand#35' and p_container = 'LG BOX' and l_quanti
ty < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey)
4 21 2 2.15 1.6 2x4gjqru5u1xx
Module: SQL*Plus
SELECT s0.snap_id id, -- TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END
_INTERVAL_TIME) * 60 + EXTRACT(MINUTE FROM s1.
4 21 546 0.01 1.6 16dhat4ta7xs9
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
begin neword(:no_w_id,:no_max_w_id,:no_d_id,:no_c_id,:no_o_ol_cnt,:no_c_discount
,:no_c_last,:no_c_credit,:no_d_tax,:no_w_tax,:no_d_next_o_id,TO_DATE(:timestamp,
'YYYYMMDDHH24MISS')); END;
2 14 9 0.27 1.1 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
2 16 1 2.12 1.2 1wzqub25cwnjm
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN wksys.wk_job.invoke(21,21); :mydate := next_date; IF broken THEN
:b := 1; ELSE :b := 0; END IF; END;
2 14 5,442 0.00 1.0 8yvup05pk06ca
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05
, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID
= :B2 AND S_W_ID = :B1
2 22 539 0.00 1.7 aw9ttz9acxbc3
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN payment(:p_w_id,:p_d_id,:p_c_w_id,:p_c_d_id,:p_c_id,:byname,:p_h_amount,:p
_c_last,:p_w_street_1,:p_w_street_2,:p_w_city,:p_w_state,:p_w_zip,:p_d_street_1,
:p_d_street_2,:p_d_city,:p_d_state,:p_d_zip,:p_c_first,:p_c_middle,:p_c_street_1
,:p_c_street_2,:p_c_city,:p_c_state,:p_c_zip,:p_c_phone,:p_c_since,:p_c_credit,:
1 40 1 1.18 3.0 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
1 15 317 0.00 1.1 4wg725nwpxb1z
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SELECT C_FIRST, C_MIDDLE, C_ID, C_STREET_1, C_STREET_2, C_CITY, C_STATE, C_ZIP,
C_PHONE, C_CREDIT, C_CREDIT_LIM, C_DISCOUNT, C_BALANCE, C_SINCE FROM CUSTOMER WH
ERE C_W_ID = :B3 AND C_D_ID = :B2 AND C_LAST = :B1 ORDER BY C_FIRST
0 17 42 0.01 1.3 d4ujh5yqt1fph
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN delivery(:d_w_id,:d_o_carrier_id,TO_DATE(:timestamp,'YYYYMMDDHH24MISS'));
END;
0 16 420 0.00 1.2 5ps73nuy5f2vj
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
UPDATE ORDER_LINE SET OL_DELIVERY_D = :B4 WHERE OL_O_ID = :B3 AND OL_D_ID = :B2
AND OL_W_ID = :B1
0 21 1 0.12 1.6 14wnf35dahb7v
SELECT A.ID,A.TYPE FROM SYS.WRI$_ADV_DEFINITIONS A WHERE A.NAME = :B1
-------------------------------------------------------------
SQL ordered by Gets DB/Inst: IVRS/ivrs Snaps: 338-339
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets: 6,050,561
-> Captured SQL account for 72.1% of Total
Gets CPU Elapsed
Buffer Gets Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
331,630 1 331,630.0 5.5 21.81 56.88 6mrh6s1s5g851
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not
331,630 1 331,630.0 5.5 28.03 89.61 bsdgaykhvy4xr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not
331,626 1 331,626.0 5.5 20.85 49.20 2n4xg8c3dmd62
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not
294,409 2 147,204.5 4.9 28.34 89.09 bmfc2a2ym0kwr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk
147,206 1 147,206.0 2.4 13.58 29.35 1f0r8shtps3bu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk
132,996 1 132,996.0 2.2 6.89 23.19 5xd0ak4417rk0
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'MOZAMBIQUE' then volume else 0 end) / sum
(volume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_
extendedprice * (1 - l_discount) as volume, n2.n_name as nation from part, suppl
ier, lineitem, orders, customer, nation n1, nation n2, region where p_partkey =
132,996 1 132,996.0 2.2 8.14 25.54 6aqpwwba8xvuu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'GERMANY' then volume else 0 end) / sum(vo
lume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_ext
endedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier
, lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_p
132,996 1 132,996.0 2.2 6.76 23.21 94wqqbu0ajcvn
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'JAPAN' then volume else 0 end) / sum(volu
me) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_exten
dedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier,
lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_par
125,774 1 125,774.0 2.1 6.39 12.83 05burzzbuh660
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
date '1993-05-01' and o_orderdate < date '1993-05-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
l_receiptdate) group by o_orderpriority order by o_orderpriority
125,774 1 125,774.0 2.1 6.12 12.15 05pqvq1019n1t
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
date '1996-05-01' and o_orderdate < date '1996-05-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
l_receiptdate) group by o_orderpriority order by o_orderpriority
125,774 1 125,774.0 2.1 6.63 13.86 15dxu5nmuj14a
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
date '1994-08-01' and o_orderdate < date '1994-08-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
l_receiptdate) group by o_orderpriority order by o_orderpriority
125,774 1 125,774.0 2.1 5.85 11.77 2xf48ymvbjhxv
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority
= '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority
<> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_co
unt from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('MAIL
125,774 1 125,774.0 2.1 5.78 11.67 3yj8qcg6sf32h
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority
= '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority
<> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_co
unt from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('AIR'
125,774 1 125,774.0 2.1 6.01 12.20 c5dr0bxu3s966
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority
= '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority
<> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_co
unt from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('TRUC
114,091 1 114,091.0 1.9 6.32 12.80 bdaz68nhm6jm4
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'linen%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem
where l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '
113,956 1 113,956.0 1.9 6.02 50.23 acgpfd4ysyfxb
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'puff%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem w
here l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '1
113,849 1 113,849.0 1.9 5.46 10.68 cx10bjzjkg410
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'moccasin%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineit
em where l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= dat
106,702 1 106,702.0 1.8 5.18 9.82 3v74jf7w31h8v
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_part
key = l_partkey and p_brand = 'Brand#21' and p_container = 'LG DRUM' and l_quant
ity < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey)
106,702 1 106,702.0 1.8 5.52 10.44 5u88ac3spdu0n
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part whe
re ( p_partkey = l_partkey and p_brand = 'Brand#45' and p_container in ('SM CASE
', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 1 and l_quantity <= 1 + 10 a
nd p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruc
106,702 1 106,702.0 1.8 5.42 12.66 75d32g70ru6f2
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part whe
re ( p_partkey = l_partkey and p_brand = 'Brand#34' and p_container in ('SM CASE
', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 3 and l_quantity <= 3 + 10 a
nd p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruc
106,702 1 106,702.0 1.8 5.36 10.16 by11nan0n3nbb
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part whe
re ( p_partkey = l_partkey and p_brand = 'Brand#23' and p_container in ('SM CASE
', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 1 and l_quantity <= 1 + 10 a
nd p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruc
106,698 1 106,698.0 1.8 5.87 58.52 081am6psuh26j
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_part
key = l_partkey and p_brand = 'Brand#35' and p_container = 'LG BOX' and l_quanti
ty < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey)
102,798 1 102,798.0 1.7 11.47 23.72 8sfhj7ua3qfjf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 312) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda
102,774 1 102,774.0 1.7 14.85 45.70 29rqwcj4cs31u
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 313) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda
102,763 2 51,381.5 1.7 11.32 23.29 814qvp0rkqug4
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 314) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda
102,697 1 102,697.0 1.7 13.69 25.90 05jp96tzvutb6
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis
102,697 1 102,697.0 1.7 14.08 30.77 7409gxv4spfj2
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis
102,697 1 102,697.0 1.7 14.34 34.07 cvhgz2zwbk4qf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis
88,530 546 162.1 1.5 4.02 20.99 16dhat4ta7xs9
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
begin neword(:no_w_id,:no_max_w_id,:no_d_id,:no_c_id,:no_o_ol_cnt,:no_c_discount
,:no_c_last,:no_c_credit,:no_d_tax,:no_w_tax,:no_d_next_o_id,TO_DATE(:timestamp,
'YYYYMMDDHH24MISS')); END;
80,163 2 40,081.5 1.3 4.30 21.15 2x4gjqru5u1xx
Module: SQL*Plus
SELECT s0.snap_id id, -- TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END
_INTERVAL_TIME) * 60 + EXTRACT(MINUTE FROM s1.
69,716 2 34,858.0 1.2 4.26 8.54 ag9jkv5xuz0dz
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select ps_partkey, sum(ps_supplycost * ps_availqty) as value from partsupp, supp
lier, nation where ps_suppkey = s_suppkey and s_nationkey = n_nationkey and n_na
me = 'EGYPT' group by ps_partkey having sum(ps_supplycost * ps_availqty) > ( sel
ect sum(ps_supplycost * ps_availqty) * 0.0001000000 from partsupp, supplier, nat
-------------------------------------------------------------
SQL ordered by Reads DB/Inst: IVRS/ivrs Snaps: 338-339
-> Total Disk Reads: 5,663,126
-> Captured SQL account for 74.4% of Total
Reads CPU Elapsed
Physical Reads Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
330,210 1 330,210.0 5.8 20.85 49.20 2n4xg8c3dmd62
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not
330,210 1 330,210.0 5.8 21.81 56.88 6mrh6s1s5g851
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not
330,210 1 330,210.0 5.8 28.03 89.61 bsdgaykhvy4xr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not
301,265 2 150,632.5 5.3 28.34 89.09 bmfc2a2ym0kwr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk
151,726 1 151,726.0 2.7 13.58 29.35 1f0r8shtps3bu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk
132,179 1 132,179.0 2.3 6.89 23.19 5xd0ak4417rk0
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'MOZAMBIQUE' then volume else 0 end) / sum
(volume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_
extendedprice * (1 - l_discount) as volume, n2.n_name as nation from part, suppl
ier, lineitem, orders, customer, nation n1, nation n2, region where p_partkey =
132,179 1 132,179.0 2.3 8.14 25.54 6aqpwwba8xvuu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'GERMANY' then volume else 0 end) / sum(vo
lume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_ext
endedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier
, lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_p
132,179 1 132,179.0 2.3 6.76 23.21 94wqqbu0ajcvn
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'JAPAN' then volume else 0 end) / sum(volu
me) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_exten
dedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier,
lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_par
125,250 1 125,250.0 2.2 6.39 12.83 05burzzbuh660
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
date '1993-05-01' and o_orderdate < date '1993-05-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
l_receiptdate) group by o_orderpriority order by o_orderpriority
125,250 1 125,250.0 2.2 6.12 12.15 05pqvq1019n1t
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
date '1996-05-01' and o_orderdate < date '1996-05-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
l_receiptdate) group by o_orderpriority order by o_orderpriority
125,250 1 125,250.0 2.2 6.63 13.86 15dxu5nmuj14a
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
date '1994-08-01' and o_orderdate < date '1994-08-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
l_receiptdate) group by o_orderpriority order by o_orderpriority
125,250 1 125,250.0 2.2 5.85 11.77 2xf48ymvbjhxv
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority
= '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority
<> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_co
unt from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('MAIL
125,250 1 125,250.0 2.2 5.78 11.67 3yj8qcg6sf32h
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority
= '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority
<> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_co
unt from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('AIR'
125,250 1 125,250.0 2.2 6.01 12.20 c5dr0bxu3s966
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority
= '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority
<> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_co
unt from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('TRUC
109,730 1 109,730.0 1.9 6.02 50.23 acgpfd4ysyfxb
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'puff%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem w
here l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '1
108,262 1 108,262.0 1.9 5.46 10.68 cx10bjzjkg410
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'moccasin%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineit
em where l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= dat
107,978 1 107,978.0 1.9 6.32 12.80 bdaz68nhm6jm4
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'linen%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem
where l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '
106,241 1 106,241.0 1.9 5.87 58.52 081am6psuh26j
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_part
key = l_partkey and p_brand = 'Brand#35' and p_container = 'LG BOX' and l_quanti
ty < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey)
106,241 1 106,241.0 1.9 5.18 9.82 3v74jf7w31h8v
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_part
key = l_partkey and p_brand = 'Brand#21' and p_container = 'LG DRUM' and l_quant
ity < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey)
106,241 1 106,241.0 1.9 5.52 10.44 5u88ac3spdu0n
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part whe
re ( p_partkey = l_partkey and p_brand = 'Brand#45' and p_container in ('SM CASE
', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 1 and l_quantity <= 1 + 10 a
nd p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruc
106,241 1 106,241.0 1.9 5.42 12.66 75d32g70ru6f2
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part whe
re ( p_partkey = l_partkey and p_brand = 'Brand#34' and p_container in ('SM CASE
', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 3 and l_quantity <= 3 + 10 a
nd p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruc
106,241 1 106,241.0 1.9 5.36 10.16 by11nan0n3nbb
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part whe
re ( p_partkey = l_partkey and p_brand = 'Brand#23' and p_container in ('SM CASE
', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 1 and l_quantity <= 1 + 10 a
nd p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruc
103,748 1 103,748.0 1.8 14.85 45.70 29rqwcj4cs31u
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 313) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda
102,436 2 51,218.0 1.8 11.32 23.29 814qvp0rkqug4
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 314) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda
102,430 1 102,430.0 1.8 11.47 23.72 8sfhj7ua3qfjf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 312) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda
102,405 1 102,405.0 1.8 13.69 25.90 05jp96tzvutb6
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis
102,405 1 102,405.0 1.8 14.08 30.77 7409gxv4spfj2
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis
102,405 1 102,405.0 1.8 14.34 34.07 cvhgz2zwbk4qf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis
66,900 2 33,450.0 1.2 4.26 8.54 ag9jkv5xuz0dz
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select ps_partkey, sum(ps_supplycost * ps_availqty) as value from partsupp, supp
lier, nation where ps_suppkey = s_suppkey and s_nationkey = n_nationkey and n_na
me = 'EGYPT' group by ps_partkey having sum(ps_supplycost * ps_availqty) > ( sel
ect sum(ps_supplycost * ps_availqty) * 0.0001000000 from partsupp, supplier, nat
-------------------------------------------------------------
SQL ordered by Executions DB/Inst: IVRS/ivrs Snaps: 338-339
-> Total Executions: 57,841
-> Captured SQL account for 18.5% of Total
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) SQL Id
------------ --------------- -------------- ---------- ----------- -------------
5,442 5,442 1.0 0.00 0.00 8yvup05pk06ca
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05
, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID
= :B2 AND S_W_ID = :B1
563 563 1.0 0.00 0.00 3c1kubcdjnppq
update sys.col_usage$ set equality_preds = equality_preds + decode(bitan
d(:flag,1),0,0,1), equijoin_preds = equijoin_preds + decode(bitand(:flag
,2),0,0,1), nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,4),0,0
,1), range_preds = range_preds + decode(bitand(:flag,8),0,0,1),
546 546 1.0 0.01 0.04 16dhat4ta7xs9
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
begin neword(:no_w_id,:no_max_w_id,:no_d_id,:no_c_id,:no_o_ol_cnt,:no_c_discount
,:no_c_last,:no_c_credit,:no_d_tax,:no_w_tax,:no_d_next_o_id,TO_DATE(:timestamp,
'YYYYMMDDHH24MISS')); END;
539 539 1.0 0.00 0.04 aw9ttz9acxbc3
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN payment(:p_w_id,:p_d_id,:p_c_w_id,:p_c_d_id,:p_c_id,:byname,:p_h_amount,:p
_c_last,:p_w_street_1,:p_w_street_2,:p_w_city,:p_w_state,:p_w_zip,:p_d_street_1,
:p_d_street_2,:p_d_city,:p_d_state,:p_d_zip,:p_c_first,:p_c_middle,:p_c_street_1
,:p_c_street_2,:p_c_city,:p_c_state,:p_c_zip,:p_c_phone,:p_c_since,:p_c_credit,:
420 4,284 10.2 0.00 0.04 5ps73nuy5f2vj
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
UPDATE ORDER_LINE SET OL_DELIVERY_D = :B4 WHERE OL_O_ID = :B3 AND OL_D_ID = :B2
AND OL_W_ID = :B1
317 2,534 8.0 0.00 0.05 4wg725nwpxb1z
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SELECT C_FIRST, C_MIDDLE, C_ID, C_STREET_1, C_STREET_2, C_CITY, C_STATE, C_ZIP,
C_PHONE, C_CREDIT, C_CREDIT_LIM, C_DISCOUNT, C_BALANCE, C_SINCE FROM CUSTOMER WH
ERE C_W_ID = :B3 AND C_D_ID = :B2 AND C_LAST = :B1 ORDER BY C_FIRST
268 268 1.0 0.00 0.00 2ym6hhaq30r73
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(lis
ts,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0),NVL(scanhint,0) fro
m seg$ where ts#=:1 and file#=:2 and block#=:3
224 203 0.9 0.00 0.02 96g93hntrzjtr
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_
size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, a
vgcln from hist_head$ where obj#=:1 and intcol#=:2
203 0 0.0 0.00 0.00 b2gnxm5z6r51n
lock table sys.col_usage$ in exclusive mode nowait
135 135 1.0 0.00 0.00 3m8smr0v7v1m6
INSERT INTO sys.wri$_adv_message_groups (task_id,id,seq,message#,fac,hdr,lm,nl,p
1,p2,p3,p4,p5) VALUES (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)
-------------------------------------------------------------
SQL ordered by Parse Calls DB/Inst: IVRS/ivrs Snaps: 338-339
-> Total Parse Calls: 5,952
-> Captured SQL account for 61.4% of Total
% Total
Parse Calls Executions Parses SQL Id
------------ ------------ --------- -------------
546 546 9.17 16dhat4ta7xs9
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
begin neword(:no_w_id,:no_max_w_id,:no_d_id,:no_c_id,:no_o_ol_cnt,:no_c_discount
,:no_c_last,:no_c_credit,:no_d_tax,:no_w_tax,:no_d_next_o_id,TO_DATE(:timestamp,
'YYYYMMDDHH24MISS')); END;
539 539 9.06 aw9ttz9acxbc3
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN payment(:p_w_id,:p_d_id,:p_c_w_id,:p_c_d_id,:p_c_id,:byname,:p_h_amount,:p
_c_last,:p_w_street_1,:p_w_street_2,:p_w_city,:p_w_state,:p_w_zip,:p_d_street_1,
:p_d_street_2,:p_d_city,:p_d_state,:p_d_zip,:p_c_first,:p_c_middle,:p_c_street_1
,:p_c_street_2,:p_c_city,:p_c_state,:p_c_zip,:p_c_phone,:p_c_since,:p_c_credit,:
268 268 4.50 2ym6hhaq30r73
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(lis
ts,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0),NVL(scanhint,0) fro
m seg$ where ts#=:1 and file#=:2 and block#=:3
203 563 3.41 3c1kubcdjnppq
update sys.col_usage$ set equality_preds = equality_preds + decode(bitan
d(:flag,1),0,0,1), equijoin_preds = equijoin_preds + decode(bitand(:flag
,2),0,0,1), nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,4),0,0
,1), range_preds = range_preds + decode(bitand(:flag,8),0,0,1),
203 0 3.41 53btfq0dt9bs9
insert into sys.col_usage$ values ( :objn, :coln, decode(bitand(:flag,1),0,0
,1), decode(bitand(:flag,2),0,0,1), decode(bitand(:flag,4),0,0,1), decode(
bitand(:flag,8),0,0,1), decode(bitand(:flag,16),0,0,1), decode(bitand(:flag,
32),0,0,1), :time)
203 203 3.41 b2gnxm5z6r51n
lock table sys.col_usage$ in exclusive mode nowait
135 135 2.27 3m8smr0v7v1m6
INSERT INTO sys.wri$_adv_message_groups (task_id,id,seq,message#,fac,hdr,lm,nl,p
1,p2,p3,p4,p5) VALUES (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)
132 132 2.22 grwydz59pu6mc
select text from view$ where rowid=:1
130 130 2.18 f80h0xb1qvbsk
SELECT sys.wri$_adv_seq_msggroup.nextval FROM dual
125 125 2.10 350f5yrnnmshs
lock table sys.mon_mods$ in exclusive mode nowait
125 125 2.10 g00cj285jmgsw
update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd, del
etes = deletes + :del, flags = (decode(bitand(flags, :flag), :flag, flags, flags
+ :flag)), drop_segments = drop_segments + :dropseg, timestamp = :time where ob
j# = :objn
83 83 1.39 4m7m0t6fjcs5x
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=
:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1
82 82 1.38 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0
70 70 1.18 1dubbbfqnqvh9
SELECT ORA_TQ_BASE$.NEXTVAL FROM DUAL
63 63 1.06 39m4sx9k63ba2
select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#,length,piece from idl_ub2$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
63 63 1.06 c6awqs517jpj0
select /*+ index(idl_char$ i_idl_char1) +*/ piece#,length,piece from idl_char$ w
here obj#=:1 and part=:2 and version=:3 order by piece#
63 63 1.06 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
63 63 1.06 ga9j9xk5cy9s0
select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece from idl_sb4$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
62 62 1.04 5hyh0360hgx2u
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN slev(:st_w_id,:st_d_id,:threshold); END;
-------------------------------------------------------------
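Worth noting in the list above: the TPC-C procedures parse exactly as often as they execute (546/546 for neword, 539/539 for payment), so the wish8.5 client issues a parse call on every execution. These are soft parses (only 416 of 5,952 parses were hard per the instance stats below), but only client-side statement caching would remove them entirely. A quick, assumed check of the server-side cursor cache that softens this pattern:
{{{
-- Assumed quick check, not part of the report: the session cursor cache makes
-- repeated soft parses cheaper, though only the client can stop issuing them.
select name, value from v$parameter where name = 'session_cached_cursors';
}}}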
SQL ordered by Sharable Memory DB/Inst: IVRS/ivrs Snaps: 338-339
No data exists for this section of the report.
-------------------------------------------------------------
SQL ordered by Version Count DB/Inst: IVRS/ivrs Snaps: 338-339
No data exists for this section of the report.
-------------------------------------------------------------
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 338-339
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
CPU used by this session 43,952 72.9 17.3
CPU used when call started 80,217 133.0 31.6
CR blocks created 6 0.0 0.0
Cached Commit SCN referenced 432 0.7 0.2
Commit SCN cached 1 0.0 0.0
DB time 374,210 620.5 147.4
DBWR checkpoint buffers written 3,433 5.7 1.4
DBWR checkpoints 1 0.0 0.0
DBWR transaction table writes 20 0.0 0.0
DBWR undo block writes 917 1.5 0.4
DFO trees parallelized 76 0.1 0.0
IMU CR rollbacks 0 0.0 0.0
IMU Flushes 159 0.3 0.1
IMU Redo allocation size 896,956 1,487.3 353.3
IMU commits 1,881 3.1 0.7
IMU contention 5 0.0 0.0
IMU pool not allocated 488 0.8 0.2
IMU recursive-transaction flush 47 0.1 0.0
IMU undo allocation size 14,837,636 24,603.8 5,843.9
IMU- failed to get a private str 488 0.8 0.2
PX local messages recv'd 427,618 709.1 168.4
PX local messages sent 427,626 709.1 168.4
Parallel operations not downgrad 76 0.1 0.0
SMON posted for undo segment shr 0 0.0 0.0
SQL*Net roundtrips to/from clien 8,221 13.6 3.2
active txn count during cleanout 220 0.4 0.1
application wait time 0 0.0 0.0
background checkpoints completed 1 0.0 0.0
background checkpoints started 1 0.0 0.0
background timeouts 2,201 3.7 0.9
branch node splits 0 0.0 0.0
buffer is not pinned count 185,040 306.8 72.9
buffer is pinned count 153,771 255.0 60.6
bytes received via SQL*Net from 2,339,345 3,879.1 921.4
bytes sent via SQL*Net to client 3,067,072 5,085.8 1,208.0
calls to get snapshot scn: kcmgs 93,092 154.4 36.7
calls to kcmgas 4,066 6.7 1.6
calls to kcmgcs 276 0.5 0.1
change write time 204 0.3 0.1
cleanout - number of ktugct call 278 0.5 0.1
cleanouts and rollbacks - consis 0 0.0 0.0
cleanouts only - consistent read 65 0.1 0.0
cluster key scan block gets 1,999 3.3 0.8
cluster key scans 991 1.6 0.4
commit batch/immediate performed 5 0.0 0.0
commit batch/immediate requested 5 0.0 0.0
commit cleanout failures: block 0 0.0 0.0
commit cleanout failures: buffer 0 0.0 0.0
commit cleanout failures: callba 13 0.0 0.0
commit cleanout failures: cannot 0 0.0 0.0
commit cleanouts 21,493 35.6 8.5
commit cleanouts successfully co 21,480 35.6 8.5
commit immediate performed 5 0.0 0.0
commit immediate requested 5 0.0 0.0
commit txn count during cleanout 254 0.4 0.1
concurrency wait time 168 0.3 0.1
consistent changes 87 0.1 0.0
consistent gets 5,981,829 9,919.1 2,356.0
consistent gets - examination 217,091 360.0 85.5
consistent gets direct 5,802,782 9,622.2 2,285.5
consistent gets from cache 354,637 588.1 139.7
cursor authentications 208 0.3 0.1
data blocks consistent reads - u 4 0.0 0.0
db block changes 88,663 147.0 34.9
db block gets 68,732 114.0 27.1
db block gets direct 6 0.0 0.0
db block gets from cache 68,726 114.0 27.1
deferred (CURRENT) block cleanou 9,569 15.9 3.8
dirty buffers inspected 2,347 3.9 0.9
enqueue conversions 806 1.3 0.3
enqueue releases 21,768 36.1 8.6
enqueue requests 21,963 36.4 8.7
enqueue timeouts 201 0.3 0.1
enqueue waits 51 0.1 0.0
execute count 57,841 95.9 22.8
free buffer inspected 20,379 33.8 8.0
free buffer requested 19,793 32.8 7.8
heap block compress 144 0.2 0.1
hot buffers moved to head of LRU 7,705 12.8 3.0
immediate (CR) block cleanout ap 65 0.1 0.0
immediate (CURRENT) block cleano 2,074 3.4 0.8
index crx upgrade (positioned) 498 0.8 0.2
index fast full scans (direct re 78 0.1 0.0
index fast full scans (full) 15 0.0 0.0
index fast full scans (rowid ran 156 0.3 0.1
index fetch by key 82,020 136.0 32.3
index scans kdiixs1 16,708 27.7 6.6
leaf node 90-10 splits 10 0.0 0.0
leaf node splits 172 0.3 0.1
lob reads 42 0.1 0.0
lob writes 178 0.3 0.1
lob writes unaligned 178 0.3 0.1
logons cumulative 312 0.5 0.1
messages received 2,837 4.7 1.1
messages sent 2,837 4.7 1.1
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 5,895,112 9,775.3 2,321.8
opened cursors cumulative 6,773 11.2 2.7
parse count (failures) 0 0.0 0.0
parse count (hard) 416 0.7 0.2
parse count (total) 5,952 9.9 2.3
parse time cpu 684 1.1 0.3
parse time elapsed 3,497 5.8 1.4
physical read IO requests 397,584 659.3 156.6
physical read bytes 47,825,584,128 79,304,326.1 18,836,386.0
physical read total IO requests 387,784 643.0 152.7
physical read total bytes 47,903,270,912 79,433,146.3 18,866,983.4
physical read total multi block 380,329 630.7 149.8
physical reads 5,663,126 9,390.6 2,230.5
physical reads cache 17,948 29.8 7.1
physical reads cache prefetch 943 1.6 0.4
physical reads direct 5,820,136 9,650.9 2,292.3
physical reads direct (lob) 0 0.0 0.0
physical reads direct temporary 17,299 28.7 6.8
physical reads prefetch warmup 0 0.0 0.0
physical reads retry corrupt 54 0.1 0.0
physical write IO requests 5,731 9.5 2.3
physical write bytes 203,554,816 337,534.4 80,171.3
physical write total IO requests 8,484 14.1 3.3
physical write total bytes 283,976,192 470,889.0 111,845.7
physical write total multi block 5,019 8.3 2.0
physical writes 24,848 41.2 9.8
physical writes direct 17,318 28.7 6.8
physical writes direct (lob) 0 0.0 0.0
physical writes direct temporary 17,299 28.7 6.8
physical writes from cache 7,530 12.5 3.0
physical writes non checkpoint 23,712 39.3 9.3
pinned buffers inspected 6 0.0 0.0
prefetch warmup blocks aged out 0 0.0 0.0
prefetched blocks aged out befor 2 0.0 0.0
process last non-idle time 579 1.0 0.2
queries parallelized 67 0.1 0.0
recursive calls 115,742 191.9 45.6
recursive cpu usage 43,163 71.6 17.0
redo blocks written 32,504 53.9 12.8
redo buffer allocation retries 1 0.0 0.0
redo entries 19,373 32.1 7.6
redo log space requests 1 0.0 0.0
redo log space wait time 25 0.0 0.0
redo size 15,647,384 25,946.5 6,162.8
redo synch time 5,470 9.1 2.2
redo synch writes 2,102 3.5 0.8
redo wastage 473,620 785.4 186.5
redo write time 7,062 11.7 2.8
redo writer latching time 1 0.0 0.0
redo writes 2,253 3.7 0.9
rollback changes - undo records 109 0.2 0.0
rollbacks only - consistent read 4 0.0 0.0
rows fetched via callback 65,515 108.6 25.8
session connect time 0 0.0 0.0
session cursor cache hits 5,056 8.4 2.0
session logical reads 6,050,561 10,033.0 2,383.1
session pga memory 411,760,136 682,780.2 162,174.1
session pga memory max 5,395,314,184 8,946,503.5 2,124,976.1
session uga memory 51,546,743,504 85,474,748.1 20,301,986.4
session uga memory max 1,043,985,628 1,731,135.7 411,179.9
shared hash latch upgrades - no 4,344 7.2 1.7
sorts (disk) 0 0.0 0.0
sorts (memory) 1,839 3.1 0.7
sorts (rows) 25,796,856 42,776.3 10,160.2
sql area evicted 85 0.1 0.0
sql area purged 0 0.0 0.0
summed dirty queue length 13,205 21.9 5.2
switch current to new buffer 858 1.4 0.3
table fetch by rowid 127,462 211.4 50.2
table fetch continued row 199 0.3 0.1
table scan blocks gotten 5,826,518 9,661.5 2,294.8
table scan rows gotten 342,238,570 567,499.6 134,792.7
table scans (direct read) 3,353 5.6 1.3
table scans (long tables) 4,229 7.0 1.7
table scans (rowid ranges) 4,229 7.0 1.7
table scans (short tables) 4,797 8.0 1.9
total number of times SMON poste 39 0.1 0.0
transaction rollbacks 5 0.0 0.0
transaction tables consistent re 2 0.0 0.0
transaction tables consistent re 28 0.1 0.0
undo change vector size 5,810,908 9,635.6 2,288.7
user I/O wait time 42,789 71.0 16.9
user calls 11,544 19.1 4.6
user commits 2,526 4.2 1.0
user rollbacks 13 0.0 0.0
workarea executions - onepass 10 0.0 0.0
workarea executions - optimal 1,805 3.0 0.7
write clones created in foregrou 7 0.0 0.0
-------------------------------------------------------------
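Everything in this table is a straight delta of DBA_HIST_SYSSTAT between the two snapshots, so any derived ratio can be recomputed from it; for example, soft parse % = (5,952 - 416) / 5,952, which is about 93% for this interval, using the two parse counters above. A minimal sketch, assuming snaps 338-339:
{{{
-- Sketch, not from the report: diff cumulative instance statistics across
-- the AWR interval (negative deltas would indicate an instance restart).
select e.stat_name,
       e.value - b.value as delta
  from dba_hist_sysstat b
  join dba_hist_sysstat e
    on  e.dbid = b.dbid
    and e.instance_number = b.instance_number
    and e.stat_name = b.stat_name
 where b.snap_id = 338
   and e.snap_id = 339
   and e.value - b.value <> 0
 order by delta desc;
}}}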
Instance Activity Stats - Absolute Values DB/Inst: IVRS/ivrs Snaps: 338-339
-> Statistics with absolute values (should not be diffed)
Statistic Begin Value End Value
-------------------------------- --------------- ---------------
session cursor cache count 10,364 10,760
opened cursors current 91 67
workarea memory allocated 2,258 2,368
logons current 31 30
-------------------------------------------------------------
Instance Activity Stats - Thread Activity DB/Inst: IVRS/ivrs Snaps: 338-339
-> Statistics identified by '(derived)' come from sources other than SYSSTAT
Statistic Total per Hour
-------------------------------- ------------------ ---------
log switches (derived) 1 5.97
-------------------------------------------------------------
Tablespace IO Stats DB/Inst: IVRS/ivrs Snaps: 338-339
-> ordered by IOs (Reads + Writes) desc
Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TPCHTAB
387,123 642 0.2 14.8 2 0 3 0.0
USERS
5,988 10 14.6 1.0 3,649 6 0 0.0
TEMP
1,534 3 0.0 11.3 1,533 3 0 0.0
SYSTEM
1,211 2 47.0 1.1 41 0 0 0.0
SYSAUX
726 1 100.0 1.3 254 0 0 0.0
UNDOTBS1
9 0 4.4 1.0 354 1 0 0.0
CCDATA
1 0 0.0 1.0 1 0 0 0.0
CCINDEX
1 0 0.0 1.0 1 0 0 0.0
PSE
1 0 0.0 1.0 1 0 0 0.0
SOE
1 0 0.0 1.0 1 0 0 0.0
SOEINDEX
1 0 0.0 1.0 1 0 0 0.0
TPCCTAB
1 0 0.0 1.0 1 0 0 0.0
-------------------------------------------------------------
File IO Stats DB/Inst: IVRS/ivrs Snaps: 338-339
-> ordered by Tablespace, File
Tablespace Filename
------------------------ ----------------------------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
CCDATA +DATA_1/ivrs/datafile/ccdata.dbf
1 0 0.0 1.0 1 0 0 0.0
CCINDEX +DATA_1/ivrs/datafile/ccindex.dbf
1 0 0.0 1.0 1 0 0 0.0
PSE +DATA_1/ivrs/pse.dbf
1 0 0.0 1.0 1 0 0 0.0
SOE +DATA_1/ivrs/datafile/soe.dbf
1 0 0.0 1.0 1 0 0 0.0
SOEINDEX +DATA_1/ivrs/datafile/soeindex.dbf
1 0 0.0 1.0 1 0 0 0.0
SYSAUX +DATA_1/ivrs/datafile/sysaux.258.652821943
726 1 100.0 1.3 254 0 0 0.0
SYSTEM +DATA_1/ivrs/datafile/system.267.652821909
1,176 2 48.3 1.1 40 0 0 0.0
SYSTEM +DATA_1/ivrs/datafile/system_02.dbf
35 0 4.6 1.0 1 0 0 0.0
TEMP +DATA_1/ivrs/tempfile/temp.256.652821953
1,534 3 0.0 11.3 1,533 3 0 N/A
TPCCTAB +DATA_1/ivrs/tpcctab01.dbf
1 0 0.0 1.0 1 0 0 0.0
TPCHTAB +DATA_1/ivrs/datafile/tpch_01.dbf
387,123 642 0.2 14.8 2 0 3 0.0
UNDOTBS1 +DATA_1/ivrs/datafile/undotbs1.257.652821933
9 0 4.4 1.0 354 1 0 0.0
USERS +DATA_1/ivrs/datafile/users.263.652821963
5,739 10 13.0 1.0 3,515 6 0 0.0
USERS +DATA_1/ivrs/datafile/users02.dbf
249 0 50.3 1.0 134 0 0 0.0
-------------------------------------------------------------
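The rows that stand out here are TPCHTAB, with 387,123 reads averaging 0.2 ms at ~15 blocks per read (multiblock scan traffic, evidently being served from a cache layer somewhere below Oracle), against SYSAUX averaging 100 ms per read. Both this section and the tablespace one are deltas of DBA_HIST_FILESTATXS (temp files live in DBA_HIST_TEMPSTATXS); READTIM is in centiseconds, so Av Rd(ms) = 10 x delta(READTIM) / delta(PHYRDS). A sketch, assuming the same snap pair:
{{{
-- Sketch, not from the report: per-file I/O deltas behind File IO Stats.
select e.tsname,
       e.filename,
       e.phyrds - b.phyrds as reads,
       round(10 * (e.readtim - b.readtim) /
             nullif(e.phyrds - b.phyrds, 0), 1) as av_rd_ms,
       round((e.phyblkrd - b.phyblkrd) /
             nullif(e.phyrds - b.phyrds, 0), 1) as av_blks_per_rd
  from dba_hist_filestatxs b
  join dba_hist_filestatxs e
    on  e.dbid = b.dbid
    and e.file# = b.file#
 where b.snap_id = 338
   and e.snap_id = 339
 order by reads desc;
}}}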
Buffer Pool Statistics DB/Inst: IVRS/ivrs Snaps: 338-339
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Writ Buffer
Number of Pool Buffer Physical Physical Buff Comp Busy
P Buffers Hit% Gets Reads Writes Wait Wait Waits
--- ---------- ---- -------------- ------------ ----------- ---- ---- ----------
D 24,184 96 428,087 18,328 7,579 0 0 3
-------------------------------------------------------------
Instance Recovery Stats DB/Inst: IVRS/ivrs Snaps: 338-339
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- ------------
B 0 50 1945 28667 184320 184320 219902 N/A
E 0 45 490 3284 119243 184320 119243 N/A
-------------------------------------------------------------
Buffer Pool Advisory DB/Inst: IVRS/ivrs Snap: 339
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate
Est
Phys
Size for Size Buffers for Read Estimated
P Est (M) Factor Estimate Factor Physical Reads
--- -------- ------ ---------------- ------ ------------------
D 16 .1 1,996 3.5 550,709
D 32 .2 3,992 2.5 392,558
D 48 .2 5,988 1.8 285,114
D 64 .3 7,984 1.5 235,318
D 80 .4 9,980 1.3 209,129
D 96 .5 11,976 1.2 196,161
D 112 .6 13,972 1.2 185,692
D 128 .7 15,968 1.1 178,684
D 144 .7 17,964 1.1 172,352
D 160 .8 19,960 1.1 166,932
D 176 .9 21,956 1.0 162,491
D 192 1.0 23,952 1.0 159,080
D 196 1.0 24,451 1.0 158,303
D 208 1.1 25,948 1.0 156,344
D 224 1.1 27,944 1.0 153,879
D 240 1.2 29,940 1.0 150,890
D 256 1.3 31,936 0.9 141,958
D 272 1.4 33,932 0.9 138,023
D 288 1.5 35,928 0.9 135,507
D 304 1.6 37,924 0.8 133,447
D 320 1.6 39,920 0.8 131,793
-------------------------------------------------------------
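Reading the advisory: the current 192 MB cache (size factor 1.0) is already past the knee of the curve; growing 1.6x to 320 MB only cuts estimated physical reads from ~159k to ~132k, about 17%. The same data is exposed live in V$DB_CACHE_ADVICE, so you can watch it without waiting for a snapshot; a sketch:
{{{
-- Sketch, not from the report: live counterpart of the Buffer Pool Advisory.
select size_for_estimate    as size_mb,
       size_factor,
       buffers_for_estimate as buffers,
       estd_physical_read_factor,
       estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT'
 order by size_for_estimate;
}}}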
PGA Aggr Summary DB/Inst: IVRS/ivrs Snaps: 338-339
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ------------------ --------------------------
88.6 1,083 139
-------------------------------------------------------------
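A quick check of the hit % definition above: it is in-memory work-area traffic over total traffic, i.e. 1,083 MB / (1,083 MB + 139 MB extra read/written) which is about 88.6%, matching the report. The 139 MB of spill corresponds to the 10 one-pass executions visible in the histogram below and in the "workarea executions - onepass" statistic above.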
PGA Aggr Target Stats DB/Inst: IVRS/ivrs Snaps: 338-339
-> B: Begin snap E: End snap (rows identified with B or E contain data
which is absolute i.e. not diffed over the interval)
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
- ---------- ---------- ---------- ---------- ------ ------ ------ ----------
B 103 39 148.9 7.7 5.2 100.0 .0 21,094
E 103 39 154.8 7.2 4.7 100.0 .0 21,094
-------------------------------------------------------------
PGA Aggr Target Histogram DB/Inst: IVRS/ivrs Snaps: 338-339
-> Optimal Executions are purely in-memory operations
Low High
Optimal Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- -------------- ------------ ------------
2K 4K 1,418 1,418 0 0
64K 128K 14 14 0 0
128K 256K 30 30 0 0
256K 512K 12 12 0 0
512K 1024K 140 140 0 0
1M 2M 98 98 0 0
2M 4M 70 70 0 0
4M 8M 18 18 0 0
8M 16M 12 6 6 0
16M 32M 24 20 4 0
-------------------------------------------------------------
PGA Memory Advisory DB/Inst: IVRS/ivrs Snap: 339
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- ---------------- -------- ----------
13 0.1 2,834.0 3,667.8 44.0 146
26 0.3 2,834.0 3,667.8 44.0 146
52 0.5 2,834.0 3,664.7 44.0 145
77 0.8 2,834.0 756.1 79.0 7
103 1.0 2,834.0 194.9 94.0 1
124 1.2 2,834.0 41.8 99.0 0
144 1.4 2,834.0 41.8 99.0 0
165 1.6 2,834.0 0.0 100.0 0
185 1.8 2,834.0 0.0 100.0 0
206 2.0 2,834.0 0.0 100.0 0
309 3.0 2,834.0 0.0 100.0 0
412 4.0 2,834.0 0.0 100.0 0
618 6.0 2,834.0 0.0 100.0 0
824 8.0 2,834.0 0.0 100.0 0
-------------------------------------------------------------
Shared Pool Advisory DB/Inst: IVRS/ivrs Snap: 339
-> SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
in the Library Cache, and the physical number of memory objects associated
with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid.
Est LC Est LC Est LC Est LC
Shared SP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
60 .6 14 1,547 44,365 .8 12,137 28.4 331,775
72 .8 24 2,600 50,039 .9 6,463 15.1 333,416
84 .9 35 3,092 54,519 1.0 1,983 4.6 334,661
96 1.0 45 3,178 56,075 1.0 427 1.0 335,447
108 1.1 56 3,304 56,282 1.0 220 .5 335,896
120 1.3 66 3,529 56,304 1.0 198 .5 336,179
132 1.4 77 3,956 56,314 1.0 188 .4 336,411
144 1.5 88 4,522 56,317 1.0 185 .4 336,631
156 1.6 100 6,389 56,319 1.0 183 .4 336,791
168 1.8 100 6,389 56,319 1.0 183 .4 336,855
180 1.9 100 6,389 56,319 1.0 183 .4 336,865
192 2.0 100 6,389 56,319 1.0 183 .4 336,867
-------------------------------------------------------------
SGA Target Advisory DB/Inst: IVRS/ivrs Snap: 339
SGA Target SGA Size Est DB Est Physical
Size (M) Factor Time (s) Reads
---------- ---------- ------------ ----------------
156 0.5 11,816 283,577
234 0.8 8,316 195,068
312 1.0 7,642 158,193
390 1.3 7,248 137,264
468 1.5 7,098 131,063
546 1.8 7,093 131,063
624 2.0 7,093 131,063
-------------------------------------------------------------
Streams Pool Advisory DB/Inst: IVRS/ivrs Snap: 339
Size for Size Est Spill Est Spill Est Unspill Est Unspill
Est (MB) Factor Count Time (s) Count Time (s)
---------- --------- ----------- ----------- ----------- -----------
4 1.0 0 0 0 0
8 2.0 0 0 0 0
12 3.0 0 0 0 0
16 4.0 0 0 0 0
20 5.0 0 0 0 0
24 6.0 0 0 0 0
28 7.0 0 0 0 0
32 8.0 0 0 0 0
36 9.0 0 0 0 0
40 10.0 0 0 0 0
44 11.0 0 0 0 0
48 12.0 0 0 0 0
52 13.0 0 0 0 0
56 14.0 0 0 0 0
60 15.0 0 0 0 0
64 16.0 0 0 0 0
68 17.0 0 0 0 0
72 18.0 0 0 0 0
76 19.0 0 0 0 0
80 20.0 0 0 0 0
-------------------------------------------------------------
Java Pool Advisory DB/Inst: IVRS/ivrs Snap: 339
Est LC Est LC Est LC Est LC
Java JP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
8 1.0 4 148 78 1.0 427 1.0 163
12 1.5 4 148 78 1.0 427 1.0 163
16 2.0 4 148 78 1.0 427 1.0 163
-------------------------------------------------------------
Buffer Wait Statistics DB/Inst: IVRS/ivrs Snaps: 338-339
-> ordered by wait time desc, waits desc
Class Waits Total Wait Time (s) Avg Time (ms)
------------------ ----------- ------------------- --------------
data block 3 0 0
-------------------------------------------------------------
Enqueue Activity DB/Inst: IVRS/ivrs Snaps: 338-339
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc
Enqueue Type (Request Reason)
------------------------------------------------------------------------------
Requests Succ Gets Failed Gets Waits Wt Time (s) Av Wt Time(ms)
------------ ------------ ----------- ----------- ------------ --------------
PS-PX Process Reservation
1,368 1,168 200 50 0 3.80
BF-BLOOM FILTER (allocation contention)
78 78 0 1 0 .00
-------------------------------------------------------------
Undo Segment Summary DB/Inst: IVRS/ivrs Snaps: 338-339
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Num Undo Number of Max Qry Max Tx Min/Max STO/ uS/uR/uU/
TS# Blocks (K) Transactions Len (s) Concurcy TR (mins) OOS eS/eR/eU
---- ---------- --------------- -------- -------- --------- ----- --------------
1 1.4 4,612 49 4 15/15 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Undo Segment Stats DB/Inst: IVRS/ivrs Snaps: 338-339
-> Most recent 35 Undostat rows, ordered by Time desc
Num Undo Number of Max Qry Max Tx Tun Ret STO/ uS/uR/uU/
End Time Blocks Transactions Len (s) Concy (mins) OOS eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- ------------
17-Jan 07:07 170 198 0 3 15 0/0 0/0/0/0/0/0
17-Jan 06:57 1,200 4,414 49 4 15 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Latch Activity DB/Inst: IVRS/ivrs Snaps: 338-339
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Name Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
ASM allocation 154 0.0 N/A 0 0 N/A
ASM db client latch 358 0.0 N/A 0 0 N/A
ASM map headers 66 0.0 N/A 0 0 N/A
ASM map load waiting lis 11 0.0 N/A 0 0 N/A
ASM map operation freeli 35 0.0 N/A 0 0 N/A
ASM map operation hash t 836,612 0.0 N/A 0 0 N/A
ASM network background l 302 0.0 N/A 0 0 N/A
AWR Alerted Metric Eleme 2,246 0.0 N/A 0 0 N/A
Bloom filter list latch 27 0.0 N/A 0 0 N/A
Consistent RBA 2,298 0.0 N/A 0 0 N/A
FAL request queue 14 0.0 N/A 0 0 N/A
FAL subheap alocation 14 0.0 N/A 0 0 N/A
FIB s.o chain latch 26 0.0 N/A 0 0 N/A
FOB s.o list latch 131 0.0 N/A 0 0 N/A
In memory undo latch 35,563 0.0 N/A 0 2,736 0.0
JOX SGA heap latch 887 0.0 N/A 0 0 N/A
JS queue state obj latch 4,248 0.0 N/A 0 0 N/A
JS slv state obj latch 4 0.0 N/A 0 0 N/A
KFK SGA context latch 301 0.0 N/A 0 0 N/A
KFMD SGA 33 0.0 N/A 0 0 N/A
KMG MMAN ready and start 213 0.0 N/A 0 0 N/A
KMG resize request state 9 0.0 N/A 0 0 N/A
KTF sga latch 2 0.0 N/A 0 157 0.0
KWQP Prop Status 3 0.0 N/A 0 0 N/A
MQL Tracking Latch 0 N/A N/A 0 11 0.0
Memory Management Latch 96 0.0 N/A 0 213 0.0
OS process 51 0.0 N/A 0 0 N/A
OS process allocation 230 0.0 N/A 0 0 N/A
OS process: request allo 17 0.0 N/A 0 0 N/A
PL/SQL warning settings 2,329 0.0 N/A 0 0 N/A
Reserved Space Latch 3 0.0 N/A 0 0 N/A
SGA IO buffer pool latch 128 0.0 N/A 0 164 0.0
SQL memory manager latch 48 0.0 N/A 0 179 0.0
SQL memory manager worka 16,771 0.0 1.0 0 0 N/A
Shared B-Tree 28 0.0 N/A 0 0 N/A
active checkpoint queue 829 0.0 N/A 0 0 N/A
active service list 3,643 0.0 N/A 0 240 0.0
archive control 16 0.0 N/A 0 0 N/A
archive process latch 193 0.0 N/A 0 0 N/A
begin backup scn array 2 0.0 N/A 0 0 N/A
buffer pool 8 0.0 N/A 0 0 N/A
cache buffer handles 34,516 0.0 N/A 0 0 N/A
cache buffers chains 952,571 0.0 1.0 0 26,780 0.0
cache buffers lru chain 39,708 0.1 1.0 0 8,252 0.0
cache table scan latch 0 N/A N/A 0 236 0.0
channel handle pool latc 95 0.0 N/A 0 0 N/A
channel operations paren 2,989 0.0 N/A 0 0 N/A
checkpoint queue latch 16,493 0.0 N/A 0 4,965 0.0
client/application info 1,815 0.0 N/A 0 0 N/A
compile environment latc 4,979 0.0 N/A 0 0 N/A
dml lock allocation 23,341 0.0 1.0 0 0 N/A
dummy allocation 633 0.0 N/A 0 0 N/A
enqueue hash chains 45,427 0.0 N/A 0 30 0.0
enqueues 19,373 0.0 N/A 0 0 N/A
error message lists 540 0.0 N/A 0 0 N/A
event group latch 8 0.0 N/A 0 0 N/A
file cache latch 36 0.0 N/A 0 0 N/A
global KZLD latch for me 4 0.0 N/A 0 0 N/A
hash table column usage 222 0.0 N/A 0 48,959 0.0
hash table modification 6 0.0 N/A 0 0 N/A
Latch Activity DB/Inst: IVRS/ivrs Snaps: 338-339
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Name Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
job workq parent latch 0 N/A N/A 0 24 0.0
job_queue_processes para 21 0.0 N/A 0 0 N/A
kks stats 1,012 0.0 N/A 0 0 N/A
ksuosstats global area 44 0.0 N/A 0 0 N/A
ktm global data 39 0.0 N/A 0 0 N/A
kwqbsn:qsga 27 0.0 N/A 0 0 N/A
lgwr LWN SCN 2,325 0.0 N/A 0 0 N/A
library cache 50,273 0.0 1.1 0 141 0.7
library cache load lock 800 0.0 N/A 0 0 N/A
library cache lock 20,079 0.0 N/A 0 0 N/A
library cache lock alloc 590 0.0 N/A 0 0 N/A
library cache pin 17,505 0.0 N/A 0 0 N/A
library cache pin alloca 188 0.0 N/A 0 0 N/A
list of block allocation 74 0.0 N/A 0 0 N/A
loader state object free 6,844 0.0 N/A 0 0 N/A
logminer context allocat 1 0.0 N/A 0 0 N/A
longop free list parent 2 0.0 N/A 0 2 0.0
message pool operations 78 0.0 N/A 0 0 N/A
messages 12,610 0.0 N/A 0 0 N/A
mostly latch-free SCN 2,325 0.0 N/A 0 0 N/A
msg queue 22 0.0 N/A 0 22 0.0
multiblock read objects 840 0.0 N/A 0 0 N/A
ncodef allocation latch 11 0.0 N/A 0 0 N/A
object queue header heap 1,079 0.0 N/A 0 92 0.0
object queue header oper 65,592 0.0 1.0 0 1,285 0.0
object stats modificatio 2 0.0 N/A 0 0 N/A
parallel query alloc buf 4,364 0.0 N/A 0 0 N/A
parallel query stats 491 0.0 N/A 0 0 N/A
parameter list 107 0.0 N/A 0 0 N/A
parameter table allocati 624 0.0 N/A 0 0 N/A
post/wait queue 3,549 0.0 N/A 0 1,440 0.0
process allocation 85 0.0 N/A 0 8 0.0
process group creation 17 0.0 N/A 0 0 N/A
process queue 2,870 0.0 N/A 0 0 N/A
process queue reference 7,204,772 0.0 1.0 0 585,085 1.2
qmn task queue latch 88 0.0 N/A 0 0 N/A
query server freelists 2,507 0.0 N/A 0 0 N/A
redo allocation 9,876 0.0 1.0 0 19,450 0.0
redo copy 1 0.0 N/A 0 19,449 0.1
redo writing 8,100 0.0 N/A 0 0 N/A
reservation so alloc lat 2 0.0 N/A 0 0 N/A
resmgr group change latc 366 0.0 N/A 0 0 N/A
resmgr:actses active lis 630 0.0 N/A 0 0 N/A
resmgr:actses change gro 312 0.0 N/A 0 0 N/A
resmgr:free threads list 628 0.0 N/A 0 0 N/A
resmgr:schema config 2 0.0 N/A 0 0 N/A
row cache objects 157,344 0.0 N/A 0 425 0.0
rules engine rule set st 200 0.0 N/A 0 0 N/A
segmented array pool 22 0.0 N/A 0 0 N/A
sequence cache 590 0.0 N/A 0 0 N/A
session allocation 78,343 0.0 1.2 0 0 N/A
session idle bit 28,022 0.0 N/A 0 0 N/A
session state list latch 644 0.0 N/A 0 0 N/A
session switching 11 0.0 N/A 0 0 N/A
session timer 240 0.0 N/A 0 0 N/A
shared pool 33,242 0.0 1.0 0 0 N/A
shared pool sim alloc 10 0.0 N/A 0 0 N/A
shared pool simulator 10,083 0.0 N/A 0 0 N/A
simulator hash latch 47,721 0.0 N/A 0 0 N/A
simulator lru latch 26,263 0.0 1.0 0 18,583 0.0
Latch Activity DB/Inst: IVRS/ivrs Snaps: 338-339
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Name Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
slave class 69 0.0 N/A 0 0 N/A
slave class create 55 1.8 1.0 0 0 N/A
sort extent pool 321 0.0 N/A 0 0 N/A
state object free list 2 0.0 N/A 0 0 N/A
statistics aggregation 140 0.0 N/A 0 0 N/A
temp lob duration state 2 0.0 N/A 0 0 N/A
threshold alerts latch 64 0.0 N/A 0 0 N/A
transaction allocation 50 0.0 N/A 0 0 N/A
transaction branch alloc 11 0.0 N/A 0 0 N/A
undo global data 14,398 0.0 N/A 0 0 N/A
user lock 42 0.0 N/A 0 0 N/A
-------------------------------------------------------------
Latch Sleep Breakdown DB/Inst: IVRS/ivrs Snaps: 338-339
-> ordered by misses desc
Latch Name
----------------------------------------
Get Requests Misses Sleeps Spin Gets Sleep1 Sleep2 Sleep3
-------------- ----------- ----------- ---------- -------- -------- --------
cache buffers lru chain
39,708 22 23 0 0 0 0
library cache
50,273 8 9 0 0 0 0
session allocation
78,343 6 7 0 0 0 0
cache buffers chains
952,571 3 3 0 0 0 0
object queue header operation
65,592 2 2 0 0 0 0
process queue reference
7,204,772 2 2 0 0 0 0
shared pool
33,242 2 2 0 0 0 0
simulator lru latch
26,263 2 2 0 0 0 0
SQL memory manager workarea list latch
16,771 1 1 0 0 0 0
dml lock allocation
23,341 1 1 0 0 0 0
redo allocation
9,876 1 1 0 0 0 0
slave class create
55 1 1 0 0 0 0
-------------------------------------------------------------
Latch Miss Sources DB/Inst: IVRS/ivrs Snaps: 338-339
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
SQL memory manager worka qesmmIRegisterWorkArea 0 1 1
cache buffers chains kcbgtcr: kslbegin excl 0 2 3
cache buffers chains kcbgtcr: fast path 0 1 0
cache buffers lru chain kcbzgws_1 0 19 20
cache buffers lru chain kcbw_activate_granule 0 1 0
dml lock allocation ktaiam 0 1 0
library cache kglScanDependency 0 3 0
library cache kgldte: child 0 0 3 6
library cache kgldti: 2child 0 1 0
library cache kglobpn: child: 0 1 1
object queue header oper kcbw_link_q 0 1 0
object queue header oper kcbw_unlink_q 0 1 1
process queue reference kxfpqrsnd 0 2 0
redo allocation kcrfw_redo_gen: redo alloc 0 1 0
session allocation ksuxds: KSUSFCLC not set 0 3 1
session allocation ksursi 0 2 2
session allocation ksucri 0 1 1
session allocation ksuxds: KSUSFCLC set 0 1 0
shared pool kghalo 0 2 0
shared pool kghfrunp: clatch: nowait 0 1 0
simulator lru latch kcbs_simulate: simulate se 0 2 2
slave class create ksvcreate 0 1 0
-------------------------------------------------------------
Parent Latch Statistics DB/Inst: IVRS/ivrs Snaps: 338-339
No data exists for this section of the report.
-------------------------------------------------------------
Child Latch Statistics DB/Inst: IVRS/ivrs Snaps: 338-339
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Logical Reads DB/Inst: IVRS/ivrs Snaps: 338-339
-> Total Logical Reads: 6,050,561
-> Captured Segments account for 101.7% of Total
Tablespace Subobject Obj. Logical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
TPCH TPCHTAB LINEITEM TABLE 4,960,400 81.98
TPCH TPCHTAB ORDERS TABLE 502,768 8.31
TPCH TPCHTAB PARTSUPP TABLE 161,968 2.68
TPCH TPCHTAB PART TABLE 95,984 1.59
TPCC USERS STOCK_I1 INDEX 91,984 1.52
-------------------------------------------------------------
Segments by Physical Reads DB/Inst: IVRS/ivrs Snaps: 338-339
-> Total Physical Reads: 5,663,126
-> Captured Segments account for 101.7% of Total
Tablespace Subobject Obj. Physical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
TPCH TPCHTAB LINEITEM TABLE 4,947,520 87.36
TPCH TPCHTAB ORDERS TABLE 492,387 8.69
TPCH TPCHTAB PARTSUPP TABLE 158,037 2.79
TPCH TPCHTAB PART TABLE 92,064 1.63
TPCH TPCHTAB CUSTOMER TABLE 55,709 .98
-------------------------------------------------------------
Segments by Row Lock Waits DB/Inst: IVRS/ivrs Snaps: 338-339
-> % of Capture shows % of row lock waits for each top segment compared
-> with total row lock waits for all segments captured by the Snapshot
Row
Tablespace Subobject Obj. Lock % of
Owner Name Object Name Name Type Waits Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
TPCC USERS IORDL INDEX 24 75.00
PERFSTAT USERS STATS$EVENT_HISTOGRA INDEX 4 12.50
PERFSTAT USERS STATS$LATCH_PK INDEX 4 12.50
-------------------------------------------------------------
Segments by ITL Waits DB/Inst: IVRS/ivrs Snaps: 338-339
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Buffer Busy Waits DB/Inst: IVRS/ivrs Snaps: 338-339
No data exists for this section of the report.
-------------------------------------------------------------
Dictionary Cache Stats DB/Inst: IVRS/ivrs Snaps: 338-339
-> "Pct Misses" should be very low (< 2% in most cases)
-> "Final Usage" is the number of cache entries being used
Get Pct Scan Pct Mod Final
Cache Requests Miss Reqs Miss Reqs Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control 14 0.0 0 N/A 2 1
dc_global_oids 91 4.4 0 N/A 0 29
dc_histogram_data 4,249 2.0 0 N/A 0 1,281
dc_histogram_defs 9,313 2.4 0 N/A 0 2,713
dc_object_grants 26 7.7 0 N/A 0 45
dc_object_ids 4,946 1.0 0 N/A 0 663
dc_objects 1,968 4.0 0 N/A 3 794
dc_profiles 16 0.0 0 N/A 0 1
dc_rollback_segments 136 0.0 0 N/A 0 16
dc_segments 1,989 2.6 0 N/A 4 479
dc_sequences 84 0.0 0 N/A 84 7
dc_tablespaces 16,511 0.0 0 N/A 0 12
dc_usernames 260 0.0 0 N/A 0 12
dc_users 15,529 0.0 0 N/A 0 57
outstanding_alerts 27 0.0 0 N/A 0 24
-------------------------------------------------------------
Library Cache Activity DB/Inst: IVRS/ivrs Snaps: 338-339
-> "Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
SQL AREA 1,117 6.1 64,285 1.8 294 154
TABLE/PROCEDURE 449 0.4 7,900 4.6 261 0
BODY 148 0.0 1,278 1.7 22 0
TRIGGER 42 0.0 80 13.8 11 0
INDEX 24 0.0 80 6.3 5 0
CLUSTER 18 0.0 59 0.0 0 0
JAVA DATA 1 0.0 0 N/A 0 0
-------------------------------------------------------------
Process Memory Summary DB/Inst: IVRS/ivrs Snaps: 338-339
-> B: Begin snap E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
-> Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> ordered by Begin/End snapshot, Alloc (MB) desc
Hist
Avg Std Dev Max Max
Alloc Used Alloc Alloc Alloc Alloc Num Num
Category (MB) (MB) (MB) (MB) (MB) (MB) Proc Alloc
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B Other 128.6 N/A 3.5 6.1 24 25 37 37
Freeable 9.7 .0 .6 .6 2 N/A 16 16
SQL 3.6 2.9 .2 .3 1 25 22 15
PL/SQL .4 .1 .0 .0 0 0 35 33
E Other 133.6 N/A 3.7 6.1 24 24 36 36
Freeable 12.1 .0 .7 .4 2 N/A 18 18
SQL 2.9 2.6 .1 .3 1 26 22 14
PL/SQL .5 .1 .0 .0 0 0 34 32
JAVA .0 .0 .0 .0 0 2 1 1
-------------------------------------------------------------
SGA Memory Summary DB/Inst: IVRS/ivrs Snaps: 338-339
End Size (Bytes)
SGA regions Begin Size (Bytes) (if different)
------------------------------ ------------------- -------------------
Database Buffers 213,909,504 205,520,896
Fixed Size 1,261,612
Redo Buffers 2,928,640
Variable Size 109,055,956 117,444,564
-------------------
sum 327,155,712
-------------------------------------------------------------
SGA breakdown difference DB/Inst: IVRS/ivrs Snaps: 338-339
-> ordered by Pool, Name
-> N/A value for Begin MB or End MB indicates the size of that Pool/Name was
insignificant, or zero in that snapshot
Pool Name Begin MB End MB % Diff
------ ------------------------------ -------------- -------------- -------
java free memory 2.8 2.7 -3.98
java joxlod exec hp 5.0 5.1 2.23
java joxs heap .2 .2 0.00
large ASM map operations hashta .2 .2 0.00
large CTWR dba buffer .4 .4 0.00
large PX msg pool .2 .2 20.83
large free memory 1.2 1.2 -3.32
large krcc extent chunk 2.0 2.0 0.00
shared ASH buffers 2.0 2.0 0.00
shared CCursor 3.0 3.3 11.37
shared Heap0: KGL 1.7 1.7 2.13
shared KCB Table Scan Buffer 3.8 3.8 0.00
shared KGH: NO ACCESS 12.0 13.9 16.25
shared KGLS heap 2.6 3.4 31.25
shared KQR M PO 2.2 2.1 -3.77
shared KSFD SGA I/O b 3.8 3.8 0.00
shared KTI-UNDO 1.2 1.2 0.00
shared PCursor 2.0 2.0 1.70
shared PL/SQL DIANA N/A 1.1 N/A
shared PL/SQL MPCODE 2.3 2.3 1.07
shared event statistics per sess 1.3 1.3 0.00
shared free memory 22.1 20.9 -5.56
shared kglsim hash table bkts 2.0 2.0 0.00
shared library cache 5.7 5.7 -0.52
shared private strands 1.1 1.1 0.00
shared row cache 3.6 3.6 0.00
shared sql area 9.7 15.0 54.05
stream free memory 4.0 4.0 0.00
buffer_cache 204.0 196.0 -3.92
fixed_sga 1.2 1.2 0.00
log_buffer 2.8 2.8 0.00
-------------------------------------------------------------
Streams CPU/IO Usage DB/Inst: IVRS/ivrs Snaps: 338-339
-> Streams processes ordered by CPU usage
-> CPU and I/O Time in micro seconds
Session Type CPU Time User I/O Time Sys I/O Time
------------------------- -------------- -------------- --------------
QMON Coordinator 31,890 0 0
QMON Slaves 24,062 0 0
-------------------------------------------------------------
Streams Capture DB/Inst: IVRS/ivrs Snaps: 338-339
No data exists for this section of the report.
-------------------------------------------------------------
Streams Apply DB/Inst: IVRS/ivrs Snaps: 338-339
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Queues DB/Inst: IVRS/ivrs Snaps: 338-339
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Subscribers DB/Inst: IVRS/ivrs Snaps: 338-339
No data exists for this section of the report.
-------------------------------------------------------------
Rule Set DB/Inst: IVRS/ivrs Snaps: 338-339
-> Rule Sets ordered by Evaluations
Fast SQL CPU Elapsed
Ruleset Name Evals Evals Execs Time Time
------------------------- -------- -------- -------- -------- --------
SYS.ALERT_QUE_R 0 0 0 0 0
-------------------------------------------------------------
Resource Limit Stats DB/Inst: IVRS/ivrs Snap: 339
No data exists for this section of the report.
-------------------------------------------------------------
init.ora Parameters DB/Inst: IVRS/ivrs Snaps: 338-339
End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
audit_file_dest /oracle/app/oracle/admin/ivrs/adu
audit_sys_operations TRUE
background_dump_dest /oracle/app/oracle/admin/ivrs/bdu
compatible 10.2.0.3.0
control_files +DATA_1/ivrs/control01.ctl, +DATA
core_dump_dest /oracle/app/oracle/admin/ivrs/cdu
db_block_size 8192
db_domain bayantel.com
db_file_multiblock_read_count 16
db_name ivrs
db_recovery_file_dest /flash_reco/flash_recovery_area
db_recovery_file_dest_size 161061273600
dispatchers (PROTOCOL=TCP) (SERVICE=ivrsXDB)
job_queue_processes 10
log_archive_dest_1 LOCATION=USE_DB_RECOVERY_FILE_DES
log_archive_format ivrs_%t_%s_%r.arc
open_cursors 300
os_authent_prefix
os_roles FALSE
pga_aggregate_target 108003328
processes 150
recyclebin OFF
remote_login_passwordfile EXCLUSIVE
remote_os_authent FALSE
remote_os_roles FALSE
sga_target 327155712
spfile +DATA_1/ivrs/spfileivrs.ora
sql92_security TRUE
statistics_level TYPICAL
undo_management AUTO
undo_tablespace UNDOTBS1
user_dump_dest /oracle/app/oracle/admin/ivrs/udu
-------------------------------------------------------------
End of Report
}}}
http://karlarao.wordpress.com/scripts-resources/
<<<
AWR Tableau Toolkit – create your own performance data warehouse and easily characterize the workload, CPU, and IO of an entire cluster (30 instances) with months of perf data in less than 1 hour (updated 20120912)
I no longer update this toolkit. It served as version 1 of a more comprehensive tool called eAdam, which I started with Carlos Sierra (the main developer), Frits Hoogland, and Randy Johnson at Enkitec.
<<<
see [[eAdam]]
''AWR tableau and R toolkit - blueprint'' http://www.evernote.com/shard/s48/sh/e20c905c-694e-4950-8d57-e890a208c76b/189e50f39a739500e6b98b4511751cea
''Workload visualization notes:'' http://www.evernote.com/shard/s48/sh/0918cd46-2cec-494e-9932-eb725712bb68/9d211e0a7876d7dc41c98c0416675965
''check this out for the mind map version'' https://sites.google.com/site/karlarao/home/mindmap/awr-tableau-and-r-toolkit-visualization-examples
''the viz on the tiddlers come from the following data sets:''
<<<
{{{
topevents
sysstat
io workload
cpu workload
services
topsql
General Workload
wait class, wait events, executes/sec
CPU
AAS CPU, load average, NUM_CPUs
IO
latency, IOPS breakdown, MB/s
Memory
SGA , max PGA usage, physical memory
Storage
total storage size, reco size
Services
distribution of workload/modules
Top SQL
modules, SQL type, SQL_ID PIOs/LIOs , PX usage
}}}
<<<
with these data points I can easily characterize the overall behavior of the cluster
{{{
set arraysize 5000
COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR CPU and IO Workload Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15 heading instname -- instname
col hostname format a30 heading hostname -- hostname
col tm format a17 heading tm -- "tm"
col id format 99999 heading id -- "snapid"
col inst format 90 heading inst -- "inst"
col dur format 999990.00 heading dur -- "dur"
col cpu format 90 heading cpu -- "cpu"
col cap format 9999990.00 heading cap -- "capacity"
col dbt format 999990.00 heading dbt -- "DBTime"
col dbc format 99990.00 heading dbc -- "DBcpu"
col bgc format 99990.00 heading bgc -- "BGcpu"
col rman format 9990.00 heading rman -- "RMANcpu"
col aas format 990.0 heading aas -- "AAS"
col totora format 9999990.00 heading totora -- "TotalOracleCPU"
col busy format 9999990.00 heading busy -- "BusyTime"
col load format 990.00 heading load -- "OSLoad"
col totos format 9999990.00 heading totos -- "TotalOSCPU"
col mem format 999990.00 heading mem -- "PhysicalMemorymb"
col IORs format 9990.000 heading IORs -- "IOPsr"
col IOWs format 9990.000 heading IOWs -- "IOPsw"
col IORedo format 9990.000 heading IORedo -- "IOPsredo"
col IORmbs format 9990.000 heading IORmbs -- "IOrmbs"
col IOWmbs format 9990.000 heading IOWmbs -- "IOwmbs"
col redosizesec format 9990.000 heading redosizesec -- "Redombs"
col logons format 990 heading logons -- "Sess"
col logone format 990 heading logone -- "SessEnd"
col exsraw format 99990.000 heading exsraw -- "Execrawdelta"
col exs format 9990.000 heading exs -- "Execs"
col ucs format 9990.000 heading ucs -- "UserCalls"
col ucoms format 9990.000 heading ucoms -- "Commit"
col urs format 9990.000 heading urs -- "Rollback"
col oracpupct format 990 heading oracpupct -- "OracleCPUPct"
col rmancpupct format 990 heading rmancpupct -- "RMANCPUPct"
col oscpupct format 990 heading oscpupct -- "OSCPUPct"
col oscpuusr format 990 heading oscpuusr -- "USRPct"
col oscpusys format 990 heading oscpusys -- "SYSPct"
col oscpuio format 990 heading oscpuio -- "IOPct"
SELECT * FROM
(
SELECT trim('&_instname') instname,
trim('&_dbid') db_id,
trim('&_hostname') hostname,
s0.snap_id id,
TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
s3t1.value AS cpu,
(round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value cap,
(s5t1.value - s5t0.value) / 1000000 as dbt,
(s6t1.value - s6t0.value) / 1000000 as dbc,
(s7t1.value - s7t0.value) / 1000000 as bgc,
round(DECODE(s8t1.value,null,'null',(s8t1.value - s8t0.value) / 1000000),2) as rman,
((s5t1.value - s5t0.value) / 1000000)/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,
round(((s6t1.value - s6t0.value) / 1000000) + ((s7t1.value - s7t0.value) / 1000000),2) totora,
-- s1t1.value - s1t0.value AS busy, -- this is osstat BUSY_TIME
round(s2t1.value,2) AS load,
(s1t1.value - s1t0.value)/100 AS totos,
((round(((s6t1.value - s6t0.value) / 1000000) + ((s7t1.value - s7t0.value) / 1000000),2)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oracpupct,
((round(DECODE(s8t1.value,null,'null',(s8t1.value - s8t0.value) / 1000000),2)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as rmancpupct,
(((s1t1.value - s1t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpupct,
(((s17t1.value - s17t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpuusr,
(((s18t1.value - s18t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpusys,
(((s19t1.value - s19t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpuio
FROM dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_osstat s1t0, -- BUSY_TIME
dba_hist_osstat s1t1,
dba_hist_osstat s17t0, -- USER_TIME
dba_hist_osstat s17t1,
dba_hist_osstat s18t0, -- SYS_TIME
dba_hist_osstat s18t1,
dba_hist_osstat s19t0, -- IOWAIT_TIME
dba_hist_osstat s19t1,
dba_hist_osstat s2t1, -- osstat just get the end value
dba_hist_osstat s3t1, -- osstat just get the end value
dba_hist_sys_time_model s5t0,
dba_hist_sys_time_model s5t1,
dba_hist_sys_time_model s6t0,
dba_hist_sys_time_model s6t1,
dba_hist_sys_time_model s7t0,
dba_hist_sys_time_model s7t1,
dba_hist_sys_time_model s8t0,
dba_hist_sys_time_model s8t1
WHERE s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s1t0.dbid = s0.dbid
AND s1t1.dbid = s0.dbid
AND s2t1.dbid = s0.dbid
AND s3t1.dbid = s0.dbid
AND s5t0.dbid = s0.dbid
AND s5t1.dbid = s0.dbid
AND s6t0.dbid = s0.dbid
AND s6t1.dbid = s0.dbid
AND s7t0.dbid = s0.dbid
AND s7t1.dbid = s0.dbid
AND s8t0.dbid = s0.dbid
AND s8t1.dbid = s0.dbid
AND s17t0.dbid = s0.dbid
AND s17t1.dbid = s0.dbid
AND s18t0.dbid = s0.dbid
AND s18t1.dbid = s0.dbid
AND s19t0.dbid = s0.dbid
AND s19t1.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s1t0.instance_number = s0.instance_number
AND s1t1.instance_number = s0.instance_number
AND s2t1.instance_number = s0.instance_number
AND s3t1.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s6t0.instance_number = s0.instance_number
AND s6t1.instance_number = s0.instance_number
AND s7t0.instance_number = s0.instance_number
AND s7t1.instance_number = s0.instance_number
AND s8t0.instance_number = s0.instance_number
AND s8t1.instance_number = s0.instance_number
AND s17t0.instance_number = s0.instance_number
AND s17t1.instance_number = s0.instance_number
AND s18t0.instance_number = s0.instance_number
AND s18t1.instance_number = s0.instance_number
AND s19t0.instance_number = s0.instance_number
AND s19t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s1t0.snap_id = s0.snap_id
AND s1t1.snap_id = s0.snap_id + 1
AND s2t1.snap_id = s0.snap_id + 1
AND s3t1.snap_id = s0.snap_id + 1
AND s5t0.snap_id = s0.snap_id
AND s5t1.snap_id = s0.snap_id + 1
AND s6t0.snap_id = s0.snap_id
AND s6t1.snap_id = s0.snap_id + 1
AND s7t0.snap_id = s0.snap_id
AND s7t1.snap_id = s0.snap_id + 1
AND s8t0.snap_id = s0.snap_id
AND s8t1.snap_id = s0.snap_id + 1
AND s17t0.snap_id = s0.snap_id
AND s17t1.snap_id = s0.snap_id + 1
AND s18t0.snap_id = s0.snap_id
AND s18t1.snap_id = s0.snap_id + 1
AND s19t0.snap_id = s0.snap_id
AND s19t1.snap_id = s0.snap_id + 1
AND s1t0.stat_name = 'BUSY_TIME'
AND s1t1.stat_name = s1t0.stat_name
AND s17t0.stat_name = 'USER_TIME'
AND s17t1.stat_name = s17t0.stat_name
AND s18t0.stat_name = 'SYS_TIME'
AND s18t1.stat_name = s18t0.stat_name
AND s19t0.stat_name = 'IOWAIT_TIME'
AND s19t1.stat_name = s19t0.stat_name
AND s2t1.stat_name = 'LOAD'
AND s3t1.stat_name = 'NUM_CPUS'
AND s5t0.stat_name = 'DB time'
AND s5t1.stat_name = s5t0.stat_name
AND s6t0.stat_name = 'DB CPU'
AND s6t1.stat_name = s6t0.stat_name
AND s7t0.stat_name = 'background cpu time'
AND s7t1.stat_name = s7t0.stat_name
AND s8t0.stat_name = 'RMAN cpu time (backup/restore)'
AND s8t1.stat_name = s8t0.stat_name
)
-- WHERE
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (336)
-- aas > 1
-- oracpupct > 50
-- oscpupct > 50
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
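The core pattern in the script above (and the ones below) is simple: join each AWR snapshot to the next one, diff the cumulative counters, and divide by the interval length in seconds. Here's a minimal sketch of that pattern on a single counter, using LAG instead of the self-join, just as an illustration of the technique (it assumes contiguous snap_ids and ignores counter resets on instance restart):
{{{
-- minimal sketch: per-second rate of one cumulative counter across snapshots
-- (illustration only; assumes contiguous snap_ids and no instance restarts)
select sn.snap_id,
       to_char(sn.end_interval_time,'MM/DD/YY HH24:MI:SS') tm,
       round((st.value - lag(st.value) over
                (partition by st.dbid, st.instance_number order by st.snap_id))
             / ((cast(sn.end_interval_time as date)
                 - cast(sn.begin_interval_time as date)) * 86400), 3) per_sec
from   dba_hist_snapshot sn,
       dba_hist_sysstat  st
where  st.dbid            = sn.dbid
and    st.instance_number = sn.instance_number
and    st.snap_id         = sn.snap_id
and    st.stat_name       = 'physical read total IO requests'
order  by sn.snap_id;
}}}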
''For Exadata, use the following formulas to create Calculated Fields in Tableau. They already account for ASM Normal redundancy (the * 2 factor); change it to * 3 for High redundancy.'' Also check this blog post for more on the effect of ASM redundancy on performance: http://karlarao.wordpress.com/2012/06/29/the-effect-of-asm-redundancyparity-on-readwrite-iops-slob-test-case-for-exadata-and-non-exa-environments/
{{{
Flash IOPS ((SIORS+MIORS) + ((SIOWS+MIOWS+IOREDO) * 2)) * ([FLASHCACHE] * .01)
Flash RIOPS (SIORS+MIORS) * ([FLASHCACHE] *.01)
Flash WIOPS ((SIOWS+MIOWS+IOREDO) * 2) * ([FLASHCACHE] * .01)
HD IOPS ((SIORS+MIORS) + ((SIOWS+MIOWS+IOREDO) * 2)) * ((100 - [FLASHCACHE]) * .01)
HD RIOPS (SIORS+MIORS) * ((100 - [FLASHCACHE]) * .01)
HD WIOPS (((SIOWS+MIOWS+IOREDO) * 2)) * ((100 - [FLASHCACHE]) * .01)
}}}
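As a quick sanity check of the math (made-up numbers, not from any report here): say SIORS=800, MIORS=200, SIOWS=300, MIOWS=100, IOREDO=100, and FLASHCACHE=90. Reads total 1,000 IOPS and the redundancy-adjusted writes total (500 * 2) = 1,000 IOPS, so the flash/disk split works out to:
{{{
Flash IOPS = (1000 + 1000) * 0.90 = 1800
HD IOPS    = (1000 + 1000) * 0.10 =  200
}}}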
* Note: if you see far more cell flash cache read hits than physical read total IO requests, then the statistic is accumulating both the reads & writes in the same metric. The effect is that the formulas above will produce negative values for the HD IOPS. To fix this, create a calculated field on the FLASHCACHE measure; read more about this behavior here: http://blog.tanelpoder.com/2013/12/04/cell-flash-cache-read-hits-vs-cell-writes-to-flash-cache-statistics-on-exadata/
{{{
name the calculated field FLASHCACHE2, then just do a find/replace on the formulas above
IF [FLASHCACHE] > 100 THEN 100 ELSE [FLASHCACHE] END
}}}
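With the capped measure (swap [FLASHCACHE] for [FLASHCACHE2] in the IOPS formulas), a raw value of, say, 130 becomes 100, so the (100 - [FLASHCACHE]) terms bottom out at 0 instead of driving the HD IOPS negative.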
''Then generate the following graphs''
1) Flash vs HD IOPS
2) Flash vs HD IOPS with read/write breakdown
3) IO throughput read/write MB/s
4) Polynomial Regression (Degree 4)
''For non-Exadata, just use the following formula to get the total IOPS; name the Calculated Field ALL_IOPS''
{{{
(SIORS+MIORS) + (SIOWS+MIOWS+IOREDO)
}}}
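Using the same made-up numbers as above, that's 1,000 + 500 = 1,500 total IOPS; note that no redundancy multiplier is applied in this one.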
Here are some more examples of ''Different views of IO performance with AWR data''; check the full details at this link: http://goo.gl/i660CZ
I also talked about this tiddler at OakTable World 2013: http://www.slideshare.net/karlarao/o-2013-ultimate-exadata-io-monitoring-flash-harddisk-write-back-cache-overhead
SECTION 1: USER IO wait class and cell single block reads latency with curve fitting
SECTION 2: Small IOPS vs Large IOPS
SECTION 3: Flash vs HD IOPS
SECTION 4: Flash vs HD IOPS with read/write breakdown
SECTION 5: IO throughput read/write MB/s
SECTION 6: Drill down on smart scans affecting cell single block latency on 24hour period
{{{
set arraysize 5000
COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR IO Workload Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15 heading instname -- instname
col hostname format a30 heading hostname -- hostname
col tm format a17 heading tm -- "tm"
col id format 99999 heading id -- "snapid"
col inst format 90 heading inst -- "inst"
col dur format 999990.00 heading dur -- "dur"
col cpu format 90 heading cpu -- "cpu"
col cap format 9999990.00 heading cap -- "capacity"
col dbt format 999990.00 heading dbt -- "DBTime"
col dbc format 99990.00 heading dbc -- "DBcpu"
col bgc format 99990.00 heading bgc -- "BGcpu"
col rman format 9990.00 heading rman -- "RMANcpu"
col aas format 990.0 heading aas -- "AAS"
col totora format 9999990.00 heading totora -- "TotalOracleCPU"
col busy format 9999990.00 heading busy -- "BusyTime"
col load format 990.00 heading load -- "OSLoad"
col totos format 9999990.00 heading totos -- "TotalOSCPU"
col mem format 999990.00 heading mem -- "PhysicalMemorymb"
col IORs format 99990.000 heading IORs -- "IOPsr"
col IOWs format 99990.000 heading IOWs -- "IOPsw"
col IORedo format 99990.000 heading IORedo -- "IOPsredo"
col IORmbs format 99990.000 heading IORmbs -- "IOrmbs"
col IOWmbs format 99990.000 heading IOWmbs -- "IOwmbs"
col redosizesec format 99990.000 heading redosizesec -- "Redombs"
col logons format 990 heading logons -- "Sess"
col logone format 990 heading logone -- "SessEnd"
col exsraw format 99990.000 heading exsraw -- "Execrawdelta"
col exs format 9990.000 heading exs -- "Execs"
col oracpupct format 990 heading oracpupct -- "OracleCPUPct"
col rmancpupct format 990 heading rmancpupct -- "RMANCPUPct"
col oscpupct format 990 heading oscpupct -- "OSCPUPct"
col oscpuusr format 990 heading oscpuusr -- "USRPct"
col oscpusys format 990 heading oscpusys -- "SYSPct"
col oscpuio format 990 heading oscpuio -- "IOPct"
col SIORs format 99990.000 heading SIORs -- "IOPsSingleBlockr"
col MIORs format 99990.000 heading MIORs -- "IOPsMultiBlockr"
col TIORmbs format 99990.000 heading TIORmbs -- "Readmbs"
col SIOWs format 99990.000 heading SIOWs -- "IOPsSingleBlockw"
col MIOWs format 99990.000 heading MIOWs -- "IOPsMultiBlockw"
col TIOWmbs format 99990.000 heading TIOWmbs -- "Writembs"
col TIOR format 99990.000 heading TIOR -- "TotalIOPsr"
col TIOW format 99990.000 heading TIOW -- "TotalIOPsw"
col TIOALL format 99990.000 heading TIOALL -- "TotalIOPsALL"
col ALLRmbs format 99990.000 heading ALLRmbs -- "TotalReadmbs"
col ALLWmbs format 99990.000 heading ALLWmbs -- "TotalWritembs"
col GRANDmbs format 99990.000 heading GRANDmbs -- "TotalmbsALL"
col readratio format 990 heading readratio -- "ReadRatio"
col writeratio format 990 heading writeratio -- "WriteRatio"
col diskiops format 99990.000 heading diskiops -- "HWDiskIOPs"
col numdisks format 99990.000 heading numdisks -- "HWNumofDisks"
col flashcache format 990 heading flashcache -- "FlashCacheHitsPct"
col cellpiob format 99990.000 heading cellpiob -- "CellPIOICmbs"
col cellpiobss format 99990.000 heading cellpiobss -- "CellPIOICSmartScanmbs"
col cellpiobpreoff format 99990.000 heading cellpiobpreoff -- "CellPIOpredoffloadmbs"
col cellpiobsi format 99990.000 heading cellpiobsi -- "CellPIOstorageindexmbs"
col celliouncomb format 99990.000 heading celliouncomb -- "CellIOuncompmbs"
col cellpiobs format 99990.000 heading cellpiobs -- "CellPIOsavedfilecreationmbs"
col cellpiobsrman format 99990.000 heading cellpiobsrman -- "CellPIOsavedRMANfilerestorembs"
SELECT * FROM
(
SELECT trim('&_instname') instname,
trim('&_dbid') db_id,
trim('&_hostname') hostname,
s0.snap_id id,
TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
(((s20t1.value - s20t0.value) - (s21t1.value - s21t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as SIORs,
((s21t1.value - s21t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as MIORs,
(((s22t1.value - s22t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as TIORmbs,
(((s23t1.value - s23t0.value) - (s24t1.value - s24t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as SIOWs,
((s24t1.value - s24t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as MIOWs,
(((s25t1.value - s25t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as TIOWmbs,
((s13t1.value - s13t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as IORedo,
(((s14t1.value - s14t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as redosizesec,
((s33t1.value - s33t0.value) / (s20t1.value - s20t0.value))*100 as flashcache,
(((s26t1.value - s26t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiob,
(((s31t1.value - s31t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiobss,
(((s29t1.value - s29t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiobpreoff,
(((s30t1.value - s30t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiobsi,
(((s32t1.value - s32t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as celliouncomb,
(((s27t1.value - s27t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiobs,
(((s28t1.value - s28t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiobsrman
FROM dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sysstat s13t0, -- redo writes, diffed
dba_hist_sysstat s13t1,
dba_hist_sysstat s14t0, -- redo size, diffed
dba_hist_sysstat s14t1,
dba_hist_sysstat s20t0, -- physical read total IO requests, diffed
dba_hist_sysstat s20t1,
dba_hist_sysstat s21t0, -- physical read total multi block requests, diffed
dba_hist_sysstat s21t1,
dba_hist_sysstat s22t0, -- physical read total bytes, diffed
dba_hist_sysstat s22t1,
dba_hist_sysstat s23t0, -- physical write total IO requests, diffed
dba_hist_sysstat s23t1,
dba_hist_sysstat s24t0, -- physical write total multi block requests, diffed
dba_hist_sysstat s24t1,
dba_hist_sysstat s25t0, -- physical write total bytes, diffed
dba_hist_sysstat s25t1,
dba_hist_sysstat s26t0, -- cell physical IO interconnect bytes, diffed, cellpiob
dba_hist_sysstat s26t1,
dba_hist_sysstat s27t0, -- cell physical IO bytes saved during optimized file creation, diffed, cellpiobs
dba_hist_sysstat s27t1,
dba_hist_sysstat s28t0, -- cell physical IO bytes saved during optimized RMAN file restore, diffed, cellpiobsrman
dba_hist_sysstat s28t1,
dba_hist_sysstat s29t0, -- cell physical IO bytes eligible for predicate offload, diffed, cellpiobpreoff
dba_hist_sysstat s29t1,
dba_hist_sysstat s30t0, -- cell physical IO bytes saved by storage index, diffed, cellpiobsi
dba_hist_sysstat s30t1,
dba_hist_sysstat s31t0, -- cell physical IO interconnect bytes returned by smart scan, diffed, cellpiobss
dba_hist_sysstat s31t1,
dba_hist_sysstat s32t0, -- cell IO uncompressed bytes, diffed, celliouncomb
dba_hist_sysstat s32t1,
dba_hist_sysstat s33t0, -- cell flash cache read hits
dba_hist_sysstat s33t1
WHERE s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s13t0.dbid = s0.dbid
AND s13t1.dbid = s0.dbid
AND s14t0.dbid = s0.dbid
AND s14t1.dbid = s0.dbid
AND s20t0.dbid = s0.dbid
AND s20t1.dbid = s0.dbid
AND s21t0.dbid = s0.dbid
AND s21t1.dbid = s0.dbid
AND s22t0.dbid = s0.dbid
AND s22t1.dbid = s0.dbid
AND s23t0.dbid = s0.dbid
AND s23t1.dbid = s0.dbid
AND s24t0.dbid = s0.dbid
AND s24t1.dbid = s0.dbid
AND s25t0.dbid = s0.dbid
AND s25t1.dbid = s0.dbid
AND s26t0.dbid = s0.dbid
AND s26t1.dbid = s0.dbid
AND s27t0.dbid = s0.dbid
AND s27t1.dbid = s0.dbid
AND s28t0.dbid = s0.dbid
AND s28t1.dbid = s0.dbid
AND s29t0.dbid = s0.dbid
AND s29t1.dbid = s0.dbid
AND s30t0.dbid = s0.dbid
AND s30t1.dbid = s0.dbid
AND s31t0.dbid = s0.dbid
AND s31t1.dbid = s0.dbid
AND s32t0.dbid = s0.dbid
AND s32t1.dbid = s0.dbid
AND s33t0.dbid = s0.dbid
AND s33t1.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s13t0.instance_number = s0.instance_number
AND s13t1.instance_number = s0.instance_number
AND s14t0.instance_number = s0.instance_number
AND s14t1.instance_number = s0.instance_number
AND s20t0.instance_number = s0.instance_number
AND s20t1.instance_number = s0.instance_number
AND s21t0.instance_number = s0.instance_number
AND s21t1.instance_number = s0.instance_number
AND s22t0.instance_number = s0.instance_number
AND s22t1.instance_number = s0.instance_number
AND s23t0.instance_number = s0.instance_number
AND s23t1.instance_number = s0.instance_number
AND s24t0.instance_number = s0.instance_number
AND s24t1.instance_number = s0.instance_number
AND s25t0.instance_number = s0.instance_number
AND s25t1.instance_number = s0.instance_number
AND s26t0.instance_number = s0.instance_number
AND s26t1.instance_number = s0.instance_number
AND s27t0.instance_number = s0.instance_number
AND s27t1.instance_number = s0.instance_number
AND s28t0.instance_number = s0.instance_number
AND s28t1.instance_number = s0.instance_number
AND s29t0.instance_number = s0.instance_number
AND s29t1.instance_number = s0.instance_number
AND s30t0.instance_number = s0.instance_number
AND s30t1.instance_number = s0.instance_number
AND s31t0.instance_number = s0.instance_number
AND s31t1.instance_number = s0.instance_number
AND s32t0.instance_number = s0.instance_number
AND s32t1.instance_number = s0.instance_number
AND s33t0.instance_number = s0.instance_number
AND s33t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s13t0.snap_id = s0.snap_id
AND s13t1.snap_id = s0.snap_id + 1
AND s14t0.snap_id = s0.snap_id
AND s14t1.snap_id = s0.snap_id + 1
AND s20t0.snap_id = s0.snap_id
AND s20t1.snap_id = s0.snap_id + 1
AND s21t0.snap_id = s0.snap_id
AND s21t1.snap_id = s0.snap_id + 1
AND s22t0.snap_id = s0.snap_id
AND s22t1.snap_id = s0.snap_id + 1
AND s23t0.snap_id = s0.snap_id
AND s23t1.snap_id = s0.snap_id + 1
AND s24t0.snap_id = s0.snap_id
AND s24t1.snap_id = s0.snap_id + 1
AND s25t0.snap_id = s0.snap_id
AND s25t1.snap_id = s0.snap_id + 1
AND s26t0.snap_id = s0.snap_id
AND s26t1.snap_id = s0.snap_id + 1
AND s27t0.snap_id = s0.snap_id
AND s27t1.snap_id = s0.snap_id + 1
AND s28t0.snap_id = s0.snap_id
AND s28t1.snap_id = s0.snap_id + 1
AND s29t0.snap_id = s0.snap_id
AND s29t1.snap_id = s0.snap_id + 1
AND s30t0.snap_id = s0.snap_id
AND s30t1.snap_id = s0.snap_id + 1
AND s31t0.snap_id = s0.snap_id
AND s31t1.snap_id = s0.snap_id + 1
AND s32t0.snap_id = s0.snap_id
AND s32t1.snap_id = s0.snap_id + 1
AND s33t0.snap_id = s0.snap_id
AND s33t1.snap_id = s0.snap_id + 1
AND s13t0.stat_name = 'redo writes'
AND s13t1.stat_name = s13t0.stat_name
AND s14t0.stat_name = 'redo size'
AND s14t1.stat_name = s14t0.stat_name
AND s20t0.stat_name = 'physical read total IO requests'
AND s20t1.stat_name = s20t0.stat_name
AND s21t0.stat_name = 'physical read total multi block requests'
AND s21t1.stat_name = s21t0.stat_name
AND s22t0.stat_name = 'physical read total bytes'
AND s22t1.stat_name = s22t0.stat_name
AND s23t0.stat_name = 'physical write total IO requests'
AND s23t1.stat_name = s23t0.stat_name
AND s24t0.stat_name = 'physical write total multi block requests'
AND s24t1.stat_name = s24t0.stat_name
AND s25t0.stat_name = 'physical write total bytes'
AND s25t1.stat_name = s25t0.stat_name
AND s26t0.stat_name = 'cell physical IO interconnect bytes'
AND s26t1.stat_name = s26t0.stat_name
AND s27t0.stat_name = 'cell physical IO bytes saved during optimized file creation'
AND s27t1.stat_name = s27t0.stat_name
AND s28t0.stat_name = 'cell physical IO bytes saved during optimized RMAN file restore'
AND s28t1.stat_name = s28t0.stat_name
AND s29t0.stat_name = 'cell physical IO bytes eligible for predicate offload'
AND s29t1.stat_name = s29t0.stat_name
AND s30t0.stat_name = 'cell physical IO bytes saved by storage index'
AND s30t1.stat_name = s30t0.stat_name
AND s31t0.stat_name = 'cell physical IO interconnect bytes returned by smart scan'
AND s31t1.stat_name = s31t0.stat_name
AND s32t0.stat_name = 'cell IO uncompressed bytes'
AND s32t1.stat_name = s32t0.stat_name
AND s33t0.stat_name = 'cell flash cache read hits'
AND s33t1.stat_name = s33t0.stat_name
)
-- WHERE
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (338)
-- aas > 1
-- oscpuio > 50
-- rmancpupct > 0
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
this SQL only pulls the following info for non-Exadata environments:
-- redo writes
-- redo size
-- physical read total IO requests
-- physical read total multi block requests
-- physical write total IO requests
-- physical write total multi block requests
{{{
set arraysize 5000
COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR IO Workload Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15 heading instname -- instname
col hostname format a30 heading hostname -- hostname
col tm format a17 heading tm -- "tm"
col id format 99999 heading id -- "snapid"
col inst format 90 heading inst -- "inst"
col dur format 999990.00 heading dur -- "dur"
col cpu format 90 heading cpu -- "cpu"
col cap format 9999990.00 heading cap -- "capacity"
col dbt format 999990.00 heading dbt -- "DBTime"
col dbc format 99990.00 heading dbc -- "DBcpu"
col bgc format 99990.00 heading bgc -- "BGcpu"
col rman format 9990.00 heading rman -- "RMANcpu"
col aas format 990.0 heading aas -- "AAS"
col totora format 9999990.00 heading totora -- "TotalOracleCPU"
col busy format 9999990.00 heading busy -- "BusyTime"
col load format 990.00 heading load -- "OSLoad"
col totos format 9999990.00 heading totos -- "TotalOSCPU"
col mem format 999990.00 heading mem -- "PhysicalMemorymb"
col IORs format 99990.000 heading IORs -- "IOPsr"
col IOWs format 99990.000 heading IOWs -- "IOPsw"
col IORedo format 99990.000 heading IORedo -- "IOPsredo"
col IORmbs format 99990.000 heading IORmbs -- "IOrmbs"
col IOWmbs format 99990.000 heading IOWmbs -- "IOwmbs"
col redosizesec format 99990.000 heading redosizesec -- "Redombs"
col logons format 990 heading logons -- "Sess"
col logone format 990 heading logone -- "SessEnd"
col exsraw format 99990.000 heading exsraw -- "Execrawdelta"
col exs format 9990.000 heading exs -- "Execs"
col oracpupct format 990 heading oracpupct -- "OracleCPUPct"
col rmancpupct format 990 heading rmancpupct -- "RMANCPUPct"
col oscpupct format 990 heading oscpupct -- "OSCPUPct"
col oscpuusr format 990 heading oscpuusr -- "USRPct"
col oscpusys format 990 heading oscpusys -- "SYSPct"
col oscpuio format 990 heading oscpuio -- "IOPct"
col SIORs format 99990.000 heading SIORs -- "IOPsSingleBlockr"
col MIORs format 99990.000 heading MIORs -- "IOPsMultiBlockr"
col TIORmbs format 99990.000 heading TIORmbs -- "Readmbs"
col SIOWs format 99990.000 heading SIOWs -- "IOPsSingleBlockw"
col MIOWs format 99990.000 heading MIOWs -- "IOPsMultiBlockw"
col TIOWmbs format 99990.000 heading TIOWmbs -- "Writembs"
col TIOR format 99990.000 heading TIOR -- "TotalIOPsr"
col TIOW format 99990.000 heading TIOW -- "TotalIOPsw"
col TIOALL format 99990.000 heading TIOALL -- "TotalIOPsALL"
col ALLRmbs format 99990.000 heading ALLRmbs -- "TotalReadmbs"
col ALLWmbs format 99990.000 heading ALLWmbs -- "TotalWritembs"
col GRANDmbs format 99990.000 heading GRANDmbs -- "TotalmbsALL"
col readratio format 990 heading readratio -- "ReadRatio"
col writeratio format 990 heading writeratio -- "WriteRatio"
col diskiops format 99990.000 heading diskiops -- "HWDiskIOPs"
col numdisks format 99990.000 heading numdisks -- "HWNumofDisks"
col flashcache format 990 heading flashcache -- "FlashCacheHitsPct"
col cellpiob format 99990.000 heading cellpiob -- "CellPIOICmbs"
col cellpiobss format 99990.000 heading cellpiobss -- "CellPIOICSmartScanmbs"
col cellpiobpreoff format 99990.000 heading cellpiobpreoff -- "CellPIOpredoffloadmbs"
col cellpiobsi format 99990.000 heading cellpiobsi -- "CellPIOstorageindexmbs"
col celliouncomb format 99990.000 heading celliouncomb -- "CellIOuncompmbs"
col cellpiobs format 99990.000 heading cellpiobs -- "CellPIOsavedfilecreationmbs"
col cellpiobsrman format 99990.000 heading cellpiobsrman -- "CellPIOsavedRMANfilerestorembs"
SELECT * FROM
(
SELECT trim('&_instname') instname,
trim('&_dbid') db_id,
trim('&_hostname') hostname,
s0.snap_id id,
TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
(((s20t1.value - s20t0.value) - (s21t1.value - s21t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as SIORs,
((s21t1.value - s21t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as MIORs,
(((s23t1.value - s23t0.value) - (s24t1.value - s24t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as SIOWs,
((s24t1.value - s24t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as MIOWs,
((s13t1.value - s13t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as IORedo,
(((s14t1.value - s14t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as redosizesec
FROM dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sysstat s13t0, -- redo writes, diffed
dba_hist_sysstat s13t1,
dba_hist_sysstat s14t0, -- redo size, diffed
dba_hist_sysstat s14t1,
dba_hist_sysstat s20t0, -- physical read total IO requests, diffed
dba_hist_sysstat s20t1,
dba_hist_sysstat s21t0, -- physical read total multi block requests, diffed
dba_hist_sysstat s21t1,
dba_hist_sysstat s23t0, -- physical write total IO requests, diffed
dba_hist_sysstat s23t1,
dba_hist_sysstat s24t0, -- physical write total multi block requests, diffed
dba_hist_sysstat s24t1
WHERE s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s13t0.dbid = s0.dbid
AND s13t1.dbid = s0.dbid
AND s14t0.dbid = s0.dbid
AND s14t1.dbid = s0.dbid
AND s20t0.dbid = s0.dbid
AND s20t1.dbid = s0.dbid
AND s21t0.dbid = s0.dbid
AND s21t1.dbid = s0.dbid
AND s23t0.dbid = s0.dbid
AND s23t1.dbid = s0.dbid
AND s24t0.dbid = s0.dbid
AND s24t1.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s13t0.instance_number = s0.instance_number
AND s13t1.instance_number = s0.instance_number
AND s14t0.instance_number = s0.instance_number
AND s14t1.instance_number = s0.instance_number
AND s20t0.instance_number = s0.instance_number
AND s20t1.instance_number = s0.instance_number
AND s21t0.instance_number = s0.instance_number
AND s21t1.instance_number = s0.instance_number
AND s23t0.instance_number = s0.instance_number
AND s23t1.instance_number = s0.instance_number
AND s24t0.instance_number = s0.instance_number
AND s24t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s13t0.snap_id = s0.snap_id
AND s13t1.snap_id = s0.snap_id + 1
AND s14t0.snap_id = s0.snap_id
AND s14t1.snap_id = s0.snap_id + 1
AND s20t0.snap_id = s0.snap_id
AND s20t1.snap_id = s0.snap_id + 1
AND s21t0.snap_id = s0.snap_id
AND s21t1.snap_id = s0.snap_id + 1
AND s23t0.snap_id = s0.snap_id
AND s23t1.snap_id = s0.snap_id + 1
AND s24t0.snap_id = s0.snap_id
AND s24t1.snap_id = s0.snap_id + 1
AND s13t0.stat_name = 'redo writes'
AND s13t1.stat_name = s13t0.stat_name
AND s14t0.stat_name = 'redo size'
AND s14t1.stat_name = s14t0.stat_name
AND s20t0.stat_name = 'physical read total IO requests'
AND s20t1.stat_name = s20t0.stat_name
AND s21t0.stat_name = 'physical read total multi block requests'
AND s21t1.stat_name = s21t0.stat_name
AND s23t0.stat_name = 'physical write total IO requests'
AND s23t1.stat_name = s23t0.stat_name
AND s24t0.stat_name = 'physical write total multi block requests'
AND s24t1.stat_name = s24t0.stat_name
)
-- WHERE
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (338)
-- aas > 1
-- oscpuio > 50
-- rmancpupct > 0
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
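a quick sanity check (my own addition, not part of the original script): confirm the six sysstat names used by the non-Exadata report above actually exist in your AWR before running it
{{{
select distinct stat_name
from   dba_hist_sysstat
where  stat_name in ('redo writes',
                     'redo size',
                     'physical read total IO requests',
                     'physical read total multi block requests',
                     'physical write total IO requests',
                     'physical write total multi block requests')
order  by stat_name;
}}}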
{{{
set arraysize 5000
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR Services Statistics Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15
col hostname format a30
col tm format a15 heading tm --"Snap|Start|Time"
col id format 99999 heading id --"Snap|ID"
col inst format 90 heading inst --"i|n|s|t|#"
col dur format 999990.00 heading dur --"Snap|Dur|(m)"
col cpu format 90 heading cpu --"C|P|U"
col cap format 9999990.00 heading cap --"***|Total|CPU|Time|(s)"
col dbt format 999990.00 heading dbt --"DB|Time"
col dbc format 99990.00 heading dbc --"DB|CPU"
col bgc format 99990.00 heading bgc --"Bg|CPU"
col rman format 9990.00 heading rman --"RMAN|CPU"
col aas format 990.0 heading aas --"A|A|S"
col totora format 9999990.00 heading totora --"***|Total|Oracle|CPU|(s)"
col busy format 9999990.00 heading busy --"Busy|Time"
col load format 990.00 heading load --"OS|Load"
col totos format 9999990.00 heading totos --"***|Total|OS|CPU|(s)"
col mem format 999990.00 heading mem --"Physical|Memory|(mb)"
col IORs format 9990.000 heading IORs --"IOPs|r"
col IOWs format 9990.000 heading IOWs --"IOPs|w"
col IORedo format 9990.000 heading IORedo --"IOPs|redo"
col IORmbs format 9990.000 heading IORmbs --"IO r|(mb)/s"
col IOWmbs format 9990.000 heading IOWmbs --"IO w|(mb)/s"
col redosizesec format 9990.000 heading redosizesec --"Redo|(mb)/s"
col logons format 990 heading logons --"Sess"
col logone format 990 heading logone --"Sess|End"
col exsraw format 99990.000 heading exsraw --"Exec|raw|delta"
col exs format 9990.000 heading exs --"Exec|/s"
col oracpupct format 990 heading oracpupct --"Oracle|CPU|%"
col rmancpupct format 990 heading rmancpupct --"RMAN|CPU|%"
col oscpupct format 990 heading oscpupct --"OS|CPU|%"
col oscpuusr format 990 heading oscpuusr --"U|S|R|%"
col oscpusys format 990 heading oscpusys --"S|Y|S|%"
col oscpuio format 990 heading oscpuio --"I|O|%"
col phy_reads format 99999990.00 heading phy_reads --"physical|reads"
col log_reads format 99999990.00 heading log_reads --"logical|reads"
select trim('&_instname') instname, trim('&_dbid') db_id, trim('&_hostname') hostname, snap_id,
TO_CHAR(tm,'MM/DD/YY HH24:MI:SS') tm,
inst,
dur,
service_name,
round(db_time / 1000000, 1) as dbt,
round(db_cpu / 1000000, 1) as dbc,
phy_reads,
log_reads,
aas
from (select
s1.snap_id,
s1.tm,
s1.inst,
s1.dur,
s1.service_name,
sum(decode(s1.stat_name, 'DB time', s1.diff, 0)) db_time,
sum(decode(s1.stat_name, 'DB CPU', s1.diff, 0)) db_cpu,
sum(decode(s1.stat_name, 'physical reads', s1.diff, 0)) phy_reads,
sum(decode(s1.stat_name, 'session logical reads', s1.diff, 0)) log_reads,
round(sum(decode(s1.stat_name, 'DB time', s1.diff, 0))/1000000,1)/60 / s1.dur as aas
from
(select s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
e.service_name service_name,
e.stat_name stat_name,
e.value - b.value diff
from dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_service_stat b,
dba_hist_service_stat e
where
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
and s1.dbid = s0.dbid
and b.dbid = s0.dbid
and e.dbid = s0.dbid
--and s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
and s1.instance_number = s0.instance_number
and b.instance_number = s0.instance_number
and e.instance_number = s0.instance_number
and s1.snap_id = s0.snap_id + 1
and b.snap_id = s0.snap_id
and e.snap_id = s0.snap_id + 1
and b.stat_id = e.stat_id
and b.service_name_hash = e.service_name_hash) s1
group by
s1.snap_id, s1.tm, s1.inst, s1.dur, s1.service_name
order by
snap_id asc, aas desc, service_name)
-- where
-- AND TO_CHAR(tm,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= 7
-- AND TO_CHAR(tm,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= 1800
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- snap_id = 338
-- and snap_id >= 335 and snap_id <= 339
-- aas > .5
;
}}}
to derive transactions per second, put this as a calculated field in Tableau:
{{{
TRX = (UCOMS + URS) / DUR
}}}
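one caveat: in the script below, ucoms and urs already come out as per-second rates, so depending on whether your Tableau extract carries the raw deltas or those rates, the extra division by DUR may not be needed. If you'd rather compute it straight in SQL, here's a minimal standalone sketch of mine, using the usual definition of transactions/sec as user commits + user rollbacks over the snap duration:
{{{
-- transactions/sec per snap, straight from dba_hist_sysstat (my own
-- sketch): (user commits delta + user rollbacks delta) / snap seconds
select s0.snap_id id,
       to_char(s0.end_interval_time,'MM/DD/YY HH24:MI:SS') tm,
       round(((c1.value - c0.value) + (r1.value - r0.value))
             / ((cast(s1.end_interval_time as date)
               - cast(s0.end_interval_time as date)) * 86400), 3) trx
from   dba_hist_snapshot s0,
       dba_hist_snapshot s1,
       dba_hist_sysstat  c0,
       dba_hist_sysstat  c1,
       dba_hist_sysstat  r0,
       dba_hist_sysstat  r1
where  s0.dbid = (select dbid from v$database)
and    s1.dbid = s0.dbid
and    s1.instance_number = s0.instance_number
and    s1.snap_id = s0.snap_id + 1
and    c0.dbid = s0.dbid and c0.instance_number = s0.instance_number
and    c0.snap_id = s0.snap_id     and c0.stat_name = 'user commits'
and    c1.dbid = s0.dbid and c1.instance_number = s0.instance_number
and    c1.snap_id = s0.snap_id + 1 and c1.stat_name = c0.stat_name
and    r0.dbid = s0.dbid and r0.instance_number = s0.instance_number
and    r0.snap_id = s0.snap_id     and r0.stat_name = 'user rollbacks'
and    r1.dbid = s0.dbid and r1.instance_number = s0.instance_number
and    r1.snap_id = s0.snap_id + 1 and r1.stat_name = r0.stat_name
order  by id;
}}}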
{{{
set arraysize 5000
COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR CPU and IO Workload Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15 heading instname -- instname
col hostname format a30 heading hostname -- hostname
col tm format a17 heading tm -- "tm"
col id format 99999 heading id -- "snapid"
col inst format 90 heading inst -- "inst"
col dur format 999990.00 heading dur -- "dur"
col cpu format 90 heading cpu -- "cpu"
col cap format 9999990.00 heading cap -- "capacity"
col dbt format 999990.00 heading dbt -- "DBTime"
col dbc format 99990.00 heading dbc -- "DBcpu"
col bgc format 99990.00 heading bgc -- "BGcpu"
col rman format 9990.00 heading rman -- "RMANcpu"
col aas format 990.0 heading aas -- "AAS"
col totora format 9999990.00 heading totora -- "TotalOracleCPU"
col busy format 9999990.00 heading busy -- "BusyTime"
col load format 990.00 heading load -- "OSLoad"
col totos format 9999990.00 heading totos -- "TotalOSCPU"
col mem format 999990.00 heading mem -- "PhysicalMemorymb"
col IORs format 9990.000 heading IORs -- "IOPsr"
col IOWs format 9990.000 heading IOWs -- "IOPsw"
col IORedo format 9990.000 heading IORedo -- "IOPsredo"
col IORmbs format 9990.000 heading IORmbs -- "IOrmbs"
col IOWmbs format 9990.000 heading IOWmbs -- "IOwmbs"
col redosizesec format 9990.000 heading redosizesec -- "Redombs"
col logons format 990 heading logons -- "Sess"
col logone format 990 heading logone -- "SessEnd"
col exsraw format 99990.000 heading exsraw -- "Execrawdelta"
col exs format 9990.000 heading exs -- "Execs"
col ucs format 9990.000 heading ucs -- "UserCalls"
col ucoms format 9990.000 heading ucoms -- "Commit"
col urs format 9990.000 heading urs -- "Rollback"
col lios format 9999990.00 heading lios -- "LIOs"
col oracpupct format 990 heading oracpupct -- "OracleCPUPct"
col rmancpupct format 990 heading rmancpupct -- "RMANCPUPct"
col oscpupct format 990 heading oscpupct -- "OSCPUPct"
col oscpuusr format 990 heading oscpuusr -- "USRPct"
col oscpusys format 990 heading oscpusys -- "SYSPct"
col oscpuio format 990 heading oscpuio -- "IOPct"
SELECT * FROM
(
SELECT trim('&_instname') instname,
trim('&_dbid') db_id,
trim('&_hostname') hostname,
s0.snap_id id,
TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
round(s4t1.value/1024/1024/1024,2) AS memgb,
round(s37t1.value/1024/1024/1024,2) AS sgagb,
round(s36t1.value/1024/1024/1024,2) AS pgagb,
s9t0.value logons,
((s10t1.value - s10t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as exs,
((s40t1.value - s40t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as ucs,
((s38t1.value - s38t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as ucoms,
((s39t1.value - s39t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as urs,
((s41t1.value - s41t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as lios
FROM dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_osstat s4t1, -- osstat just get the end value
(select snap_id, dbid, instance_number, sum(value) value from dba_hist_sga group by snap_id, dbid, instance_number) s37t1, -- total SGA allocated, just get the end value
dba_hist_pgastat s36t1, -- total PGA allocated, just get the end value
dba_hist_sysstat s9t0, -- logons current, sysstat absolute value should not be diffed
dba_hist_sysstat s10t0, -- execute count, diffed
dba_hist_sysstat s10t1,
dba_hist_sysstat s38t0, -- user commits, diffed
dba_hist_sysstat s38t1,
dba_hist_sysstat s39t0, -- user rollbacks, diffed
dba_hist_sysstat s39t1,
dba_hist_sysstat s40t0, -- user calls, diffed
dba_hist_sysstat s40t1,
dba_hist_sysstat s41t0, -- session logical reads, diffed
dba_hist_sysstat s41t1
WHERE s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s4t1.dbid = s0.dbid
AND s9t0.dbid = s0.dbid
AND s10t0.dbid = s0.dbid
AND s10t1.dbid = s0.dbid
AND s36t1.dbid = s0.dbid
AND s37t1.dbid = s0.dbid
AND s38t0.dbid = s0.dbid
AND s38t1.dbid = s0.dbid
AND s39t0.dbid = s0.dbid
AND s39t1.dbid = s0.dbid
AND s40t0.dbid = s0.dbid
AND s40t1.dbid = s0.dbid
AND s41t0.dbid = s0.dbid
AND s41t1.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s4t1.instance_number = s0.instance_number
AND s9t0.instance_number = s0.instance_number
AND s10t0.instance_number = s0.instance_number
AND s10t1.instance_number = s0.instance_number
AND s36t1.instance_number = s0.instance_number
AND s37t1.instance_number = s0.instance_number
AND s38t0.instance_number = s0.instance_number
AND s38t1.instance_number = s0.instance_number
AND s39t0.instance_number = s0.instance_number
AND s39t1.instance_number = s0.instance_number
AND s40t0.instance_number = s0.instance_number
AND s40t1.instance_number = s0.instance_number
AND s41t0.instance_number = s0.instance_number
AND s41t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s4t1.snap_id = s0.snap_id + 1
AND s36t1.snap_id = s0.snap_id + 1
AND s37t1.snap_id = s0.snap_id + 1
AND s9t0.snap_id = s0.snap_id
AND s10t0.snap_id = s0.snap_id
AND s10t1.snap_id = s0.snap_id + 1
AND s38t0.snap_id = s0.snap_id
AND s38t1.snap_id = s0.snap_id + 1
AND s39t0.snap_id = s0.snap_id
AND s39t1.snap_id = s0.snap_id + 1
AND s40t0.snap_id = s0.snap_id
AND s40t1.snap_id = s0.snap_id + 1
AND s41t0.snap_id = s0.snap_id
AND s41t1.snap_id = s0.snap_id + 1
AND s4t1.stat_name = 'PHYSICAL_MEMORY_BYTES'
AND s36t1.name = 'total PGA allocated'
AND s9t0.stat_name = 'logons current'
AND s10t0.stat_name = 'execute count'
AND s10t1.stat_name = s10t0.stat_name
AND s38t0.stat_name = 'user commits'
AND s38t1.stat_name = s38t0.stat_name
AND s39t0.stat_name = 'user rollbacks'
AND s39t1.stat_name = s39t0.stat_name
AND s40t0.stat_name = 'user calls'
AND s40t1.stat_name = s40t0.stat_name
AND s41t0.stat_name = 'session logical reads'
AND s41t1.stat_name = s41t0.stat_name
)
-- WHERE
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (336)
-- aas > 1
-- oracpupct > 50
-- oscpupct > 50
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
the next, lighter variant only pulls the following:
- 'PHYSICAL_MEMORY_BYTES'
- 'total PGA allocated'
- 'logons current'
- 'execute count'
which is all the data needed to characterize the SGA and PGA requirements of the databases, and to characterize the load activity by the "execute count" metric, which pretty much drives the trx/s number. As a rough reference, the 25K exec/s range is a high-load OLTP environment (a quick current-state memory cross-check follows the script below).
{{{
set arraysize 5000
COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR CPU and IO Workload Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15 heading instname -- instname
col hostname format a30 heading hostname -- hostname
col tm format a17 heading tm -- "tm"
col id format 99999 heading id -- "snapid"
col inst format 90 heading inst -- "inst"
col dur format 999990.00 heading dur -- "dur"
col cpu format 90 heading cpu -- "cpu"
col cap format 9999990.00 heading cap -- "capacity"
col dbt format 999990.00 heading dbt -- "DBTime"
col dbc format 99990.00 heading dbc -- "DBcpu"
col bgc format 99990.00 heading bgc -- "BGcpu"
col rman format 9990.00 heading rman -- "RMANcpu"
col aas format 990.0 heading aas -- "AAS"
col totora format 9999990.00 heading totora -- "TotalOracleCPU"
col busy format 9999990.00 heading busy -- "BusyTime"
col load format 990.00 heading load -- "OSLoad"
col totos format 9999990.00 heading totos -- "TotalOSCPU"
col mem format 999990.00 heading mem -- "PhysicalMemorymb"
col IORs format 9990.000 heading IORs -- "IOPsr"
col IOWs format 9990.000 heading IOWs -- "IOPsw"
col IORedo format 9990.000 heading IORedo -- "IOPsredo"
col IORmbs format 9990.000 heading IORmbs -- "IOrmbs"
col IOWmbs format 9990.000 heading IOWmbs -- "IOwmbs"
col redosizesec format 9990.000 heading redosizesec -- "Redombs"
col logons format 990 heading logons -- "Sess"
col logone format 990 heading logone -- "SessEnd"
col exsraw format 99990.000 heading exsraw -- "Execrawdelta"
col exs format 9990.000 heading exs -- "Execs"
col ucs format 9990.000 heading ucs -- "UserCalls"
col ucoms format 9990.000 heading ucoms -- "Commit"
col urs format 9990.000 heading urs -- "Rollback"
col lios format 9999990.00 heading lios -- "LIOs"
col oracpupct format 990 heading oracpupct -- "OracleCPUPct"
col rmancpupct format 990 heading rmancpupct -- "RMANCPUPct"
col oscpupct format 990 heading oscpupct -- "OSCPUPct"
col oscpuusr format 990 heading oscpuusr -- "USRPct"
col oscpusys format 990 heading oscpusys -- "SYSPct"
col oscpuio format 990 heading oscpuio -- "IOPct"
SELECT * FROM
(
SELECT trim('&_instname') instname,
trim('&_dbid') db_id,
trim('&_hostname') hostname,
s0.snap_id id,
TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
round(s4t1.value/1024/1024/1024,2) AS memgb,
round(s37t1.value/1024/1024/1024,2) AS sgagb,
round(s36t1.value/1024/1024/1024,2) AS pgagb,
s9t0.value logons,
((s10t1.value - s10t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as exs
FROM dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_osstat s4t1, -- osstat just get the end value
(select snap_id, dbid, instance_number, sum(value) value from dba_hist_sga group by snap_id, dbid, instance_number) s37t1, -- total SGA allocated, just get the end value
dba_hist_pgastat s36t1, -- total PGA allocated, just get the end value
dba_hist_sysstat s9t0, -- logons current, sysstat absolute value should not be diffed
dba_hist_sysstat s10t0, -- execute count, diffed
dba_hist_sysstat s10t1
WHERE s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s4t1.dbid = s0.dbid
AND s9t0.dbid = s0.dbid
AND s10t0.dbid = s0.dbid
AND s10t1.dbid = s0.dbid
AND s36t1.dbid = s0.dbid
AND s37t1.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s4t1.instance_number = s0.instance_number
AND s9t0.instance_number = s0.instance_number
AND s10t0.instance_number = s0.instance_number
AND s10t1.instance_number = s0.instance_number
AND s36t1.instance_number = s0.instance_number
AND s37t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s4t1.snap_id = s0.snap_id + 1
AND s36t1.snap_id = s0.snap_id + 1
AND s37t1.snap_id = s0.snap_id + 1
AND s9t0.snap_id = s0.snap_id
AND s10t0.snap_id = s0.snap_id
AND s10t1.snap_id = s0.snap_id + 1
AND s4t1.stat_name = 'PHYSICAL_MEMORY_BYTES'
AND s36t1.name = 'total PGA allocated'
AND s9t0.stat_name = 'logons current'
AND s10t0.stat_name = 'execute count'
AND s10t1.stat_name = s10t0.stat_name
)
-- WHERE
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (336)
-- aas > 1
-- oracpupct > 50
-- oscpupct > 50
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
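a quick current-state cross-check (my own addition): what the instance has allocated right now, to eyeball against the AWR-based memgb/sgagb/pgagb columns above
{{{
-- current SGA and PGA allocated, in GB
select (select round(sum(value)/1024/1024/1024, 2) from v$sga) sga_gb,
       (select round(value/1024/1024/1024, 2)
          from v$pgastat where name = 'total PGA allocated') pga_gb
from   dual;
}}}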
{{{
set arraysize 5000
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR Top Events Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15
col hostname format a30
col snap_id format 99999 heading snap_id -- "snapid"
col tm format a17 heading tm -- "tm"
col inst format 90 heading inst -- "inst"
col dur format 999990.00 heading dur -- "dur"
col event format a55 heading event -- "Event"
col event_rank format 90 heading event_rank -- "EventRank"
col waits format 9999999990.00 heading waits -- "Waits"
col time format 9999999990.00 heading time -- "Timesec"
col avgwt format 99990.00 heading avgwt -- "Avgwtms"
col pctdbt format 9990.0 heading pctdbt -- "DBTimepct"
col aas format 990.0 heading aas -- "Aas"
col wait_class format a15 heading wait_class -- "WaitClass"
spool awr_topevents-tableau-&_instname-&_hostname..csv
select trim('&_instname') instname, trim('&_dbid') db_id, trim('&_hostname') hostname, snap_id, tm, inst, dur, event, event_rank, waits, time, avgwt, pctdbt, aas, wait_class
from
(select snap_id, TO_CHAR(tm,'MM/DD/YY HH24:MI:SS') tm, inst, dur, event, waits, time, avgwt, pctdbt, aas, wait_class,
DENSE_RANK() OVER (
PARTITION BY snap_id ORDER BY time DESC) event_rank
from
(
select * from
(select * from
(select
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
e.event_name event,
e.total_waits - nvl(b.total_waits,0) waits,
round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2) time, -- THIS IS EVENT (sec)
round (decode ((e.total_waits - nvl(b.total_waits, 0)), 0, to_number(NULL), ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000) / (e.total_waits - nvl(b.total_waits,0))), 2) avgwt,
((round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt, -- THIS IS EVENT (sec) / DB TIME (sec)
(round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2))/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas, -- THIS IS EVENT (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
e.wait_class wait_class
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_system_event b,
dba_hist_system_event e,
dba_hist_sys_time_model s5t0,
dba_hist_sys_time_model s5t1
where
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
and b.dbid(+) = s0.dbid
and e.dbid = s0.dbid
AND s5t0.dbid = s0.dbid
AND s5t1.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
and b.instance_number(+) = s0.instance_number
and e.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND b.snap_id(+) = s0.snap_id
and e.snap_id = s0.snap_id + 1
AND s5t0.snap_id = s0.snap_id
AND s5t1.snap_id = s0.snap_id + 1
AND s5t0.stat_name = 'DB time'
AND s5t1.stat_name = s5t0.stat_name
and b.event_id = e.event_id
and e.wait_class != 'Idle'
and e.total_waits > nvl(b.total_waits,0)
and e.event_name not in ('smon timer',
'pmon timer',
'dispatcher timer',
'dispatcher listen timer',
'rdbms ipc message')
order by snap_id, time desc, waits desc, event)
union all
select
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
'CPU time',
0,
round ((s6t1.value - s6t0.value) / 1000000, 2) as time, -- THIS IS DB CPU (sec)
0,
((round ((s6t1.value - s6t0.value) / 1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt, -- THIS IS DB CPU (sec) / DB TIME (sec)..TO GET % OF DB CPU ON DB TIME FOR TOP 5 TIMED EVENTS SECTION
(round ((s6t1.value - s6t0.value) / 1000000, 2))/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas, -- THIS IS DB CPU (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
'CPU'
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sys_time_model s6t0,
dba_hist_sys_time_model s6t1,
dba_hist_sys_time_model s5t0,
dba_hist_sys_time_model s5t1
WHERE
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s6t0.dbid = s0.dbid
AND s6t1.dbid = s0.dbid
AND s5t0.dbid = s0.dbid
AND s5t1.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s6t0.instance_number = s0.instance_number
AND s6t1.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s6t0.snap_id = s0.snap_id
AND s6t1.snap_id = s0.snap_id + 1
AND s5t0.snap_id = s0.snap_id
AND s5t1.snap_id = s0.snap_id + 1
AND s6t0.stat_name = 'DB CPU'
AND s6t1.stat_name = s6t0.stat_name
AND s5t0.stat_name = 'DB time'
AND s5t1.stat_name = s5t0.stat_name
union all
(select
dbtime.snap_id,
dbtime.tm,
dbtime.inst,
dbtime.dur,
'CPU wait',
0,
round(dbtime.time - accounted_dbtime.time, 2) time, -- THIS IS UNACCOUNTED FOR DB TIME (sec)
0,
((dbtime.aas - accounted_dbtime.aas)/ NULLIF(nvl(dbtime.aas,0),0))*100 as pctdbt, -- THIS IS UNACCOUNTED FOR DB TIME (sec) / DB TIME (sec)
round(dbtime.aas - accounted_dbtime.aas, 2) aas, -- AAS OF UNACCOUNTED FOR DB TIME
'CPU wait'
from
(select
s0.snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
'DB time',
0,
round ((s5t1.value - s5t0.value) / 1000000, 2) as time, -- THIS IS DB time (sec)
0,
0,
(round ((s5t1.value - s5t0.value) / 1000000, 2))/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,
'DB time'
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sys_time_model s5t0,
dba_hist_sys_time_model s5t1
WHERE
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s5t0.dbid = s0.dbid
AND s5t1.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s5t0.snap_id = s0.snap_id
AND s5t1.snap_id = s0.snap_id + 1
AND s5t0.stat_name = 'DB time'
AND s5t1.stat_name = s5t0.stat_name) dbtime,
(select snap_id, inst, sum(time) time, sum(AAS) aas from
(select * from (select
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
e.event_name event,
e.total_waits - nvl(b.total_waits,0) waits,
round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2) time, -- THIS IS EVENT (sec)
round (decode ((e.total_waits - nvl(b.total_waits, 0)), 0, to_number(NULL), ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000) / (e.total_waits - nvl(b.total_waits,0))), 2) avgwt,
((round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt, -- THIS IS EVENT (sec) / DB TIME (sec)
(round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2))/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas, -- THIS IS EVENT (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
e.wait_class wait_class
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_system_event b,
dba_hist_system_event e,
dba_hist_sys_time_model s5t0,
dba_hist_sys_time_model s5t1
where
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
and b.dbid(+) = s0.dbid
and e.dbid = s0.dbid
AND s5t0.dbid = s0.dbid
AND s5t1.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
and b.instance_number(+) = s0.instance_number
and e.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND b.snap_id(+) = s0.snap_id
and e.snap_id = s0.snap_id + 1
AND s5t0.snap_id = s0.snap_id
AND s5t1.snap_id = s0.snap_id + 1
AND s5t0.stat_name = 'DB time'
AND s5t1.stat_name = s5t0.stat_name
and b.event_id = e.event_id
and e.wait_class != 'Idle'
and e.total_waits > nvl(b.total_waits,0)
and e.event_name not in ('smon timer',
'pmon timer',
'dispatcher timer',
'dispatcher listen timer',
'rdbms ipc message')
order by snap_id, time desc, waits desc, event)
union all
select
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
'CPU time',
0,
round ((s6t1.value - s6t0.value) / 1000000, 2) as time, -- THIS IS DB CPU (sec)
0,
((round ((s6t1.value - s6t0.value) / 1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt, -- THIS IS DB CPU (sec) / DB TIME (sec)..TO GET % OF DB CPU ON DB TIME FOR TOP 5 TIMED EVENTS SECTION
(round ((s6t1.value - s6t0.value) / 1000000, 2))/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas, -- THIS IS DB CPU (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
'CPU'
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sys_time_model s6t0,
dba_hist_sys_time_model s6t1,
dba_hist_sys_time_model s5t0,
dba_hist_sys_time_model s5t1
WHERE
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s6t0.dbid = s0.dbid
AND s6t1.dbid = s0.dbid
AND s5t0.dbid = s0.dbid
AND s5t1.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s6t0.instance_number = s0.instance_number
AND s6t1.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s6t0.snap_id = s0.snap_id
AND s6t1.snap_id = s0.snap_id + 1
AND s5t0.snap_id = s0.snap_id
AND s5t1.snap_id = s0.snap_id + 1
AND s6t0.stat_name = 'DB CPU'
AND s6t1.stat_name = s6t0.stat_name
AND s5t0.stat_name = 'DB time'
AND s5t1.stat_name = s5t0.stat_name
) group by snap_id, inst) accounted_dbtime
where dbtime.snap_id = accounted_dbtime.snap_id
and dbtime.inst = accounted_dbtime.inst
)
)
)
)
WHERE event_rank <= 5
-- AND tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- AND TO_CHAR(tm,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= 7
-- AND TO_CHAR(tm,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= 1800
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- and snap_id = 495
-- and snap_id >= 495 and snap_id <= 496
-- and event = 'db file sequential read'
-- and event like 'CPU%'
-- and avgwt > 5
-- and aas > .5
-- and wait_class = 'CPU'
-- and wait_class like '%I/O%'
-- and event_rank in (1,2,3)
ORDER BY snap_id;
spool off
}}}
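the "CPU wait" rows in the report above are just the DB time residual: DB time minus (DB CPU + non-idle wait time). Here's a rough cumulative-since-startup version of the same arithmetic (my own sketch; the script itself does this properly per snap with AWR deltas, and the numbers won't match exactly since v$system_event also counts background waits):
{{{
-- cpu_wait_sec = db_time_sec - (db_cpu_sec + non-idle wait sec)
select (select value/1000000 from v$sys_time_model
         where stat_name = 'DB time')
     - ((select value/1000000 from v$sys_time_model
          where stat_name = 'DB CPU')
      + (select sum(time_waited_micro)/1000000 from v$system_event
          where wait_class != 'Idle')) cpu_wait_sec
from   dual;
}}}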
{{{
set arraysize 5000
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR Top Segments' skip 2
set pagesize 50000
set linesize 550
SELECT
trim('&_instname') instname,
trim('&_dbid') db_id,
trim('&_hostname') hostname,
snap_id, tm, inst,
owner,
tablespace_name,
dataobj#,
object_name,
subobject_name,
object_type,
physical_rw,
LOGICAL_READS_DELTA,
BUFFER_BUSY_WAITS_DELTA,
DB_BLOCK_CHANGES_DELTA,
PHYSICAL_READS_DELTA,
PHYSICAL_WRITES_DELTA,
PHYSICAL_READS_DIRECT_DELTA,
PHYSICAL_WRITES_DIRECT_DELTA,
ITL_WAITS_DELTA,
ROW_LOCK_WAITS_DELTA,
GC_CR_BLOCKS_SERVED_DELTA,
GC_CU_BLOCKS_SERVED_DELTA,
GC_BUFFER_BUSY_DELTA,
GC_CR_BLOCKS_RECEIVED_DELTA,
GC_CU_BLOCKS_RECEIVED_DELTA,
SPACE_USED_DELTA,
SPACE_ALLOCATED_DELTA,
TABLE_SCANS_DELTA,
CHAIN_ROW_EXCESS_DELTA,
PHYSICAL_READ_REQUESTS_DELTA,
PHYSICAL_WRITE_REQUESTS_DELTA,
OPTIMIZED_PHYSICAL_READS_DELTA,
seg_rank
FROM
(
SELECT
r.snap_id,
TO_CHAR(r.tm,'MM/DD/YY HH24:MI:SS') tm,
r.inst,
n.owner,
n.tablespace_name,
n.dataobj#,
n.object_name,
CASE
WHEN LENGTH(n.subobject_name) < 11
THEN n.subobject_name
ELSE SUBSTR(n.subobject_name,LENGTH(n.subobject_name)-9)
END subobject_name,
n.object_type,
(r.PHYSICAL_READS_DELTA + r.PHYSICAL_WRITES_DELTA) as physical_rw,
r.LOGICAL_READS_DELTA,
r.BUFFER_BUSY_WAITS_DELTA,
r.DB_BLOCK_CHANGES_DELTA,
r.PHYSICAL_READS_DELTA,
r.PHYSICAL_WRITES_DELTA,
r.PHYSICAL_READS_DIRECT_DELTA,
r.PHYSICAL_WRITES_DIRECT_DELTA,
r.ITL_WAITS_DELTA,
r.ROW_LOCK_WAITS_DELTA,
r.GC_CR_BLOCKS_SERVED_DELTA,
r.GC_CU_BLOCKS_SERVED_DELTA,
r.GC_BUFFER_BUSY_DELTA,
r.GC_CR_BLOCKS_RECEIVED_DELTA,
r.GC_CU_BLOCKS_RECEIVED_DELTA,
r.SPACE_USED_DELTA,
r.SPACE_ALLOCATED_DELTA,
r.TABLE_SCANS_DELTA,
r.CHAIN_ROW_EXCESS_DELTA,
r.PHYSICAL_READ_REQUESTS_DELTA,
r.PHYSICAL_WRITE_REQUESTS_DELTA,
r.OPTIMIZED_PHYSICAL_READS_DELTA,
DENSE_RANK() OVER (PARTITION BY r.snap_id ORDER BY r.PHYSICAL_READS_DELTA + r.PHYSICAL_WRITES_DELTA DESC) seg_rank
FROM
dba_hist_seg_stat_obj n,
(
SELECT
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
b.dataobj#,
b.obj#,
b.dbid,
sum(b.LOGICAL_READS_DELTA) LOGICAL_READS_DELTA,
sum(b.BUFFER_BUSY_WAITS_DELTA) BUFFER_BUSY_WAITS_DELTA,
sum(b.DB_BLOCK_CHANGES_DELTA) DB_BLOCK_CHANGES_DELTA,
sum(b.PHYSICAL_READS_DELTA) PHYSICAL_READS_DELTA,
sum(b.PHYSICAL_WRITES_DELTA) PHYSICAL_WRITES_DELTA,
sum(b.PHYSICAL_READS_DIRECT_DELTA) PHYSICAL_READS_DIRECT_DELTA,
sum(b.PHYSICAL_WRITES_DIRECT_DELTA) PHYSICAL_WRITES_DIRECT_DELTA,
sum(b.ITL_WAITS_DELTA) ITL_WAITS_DELTA,
sum(b.ROW_LOCK_WAITS_DELTA) ROW_LOCK_WAITS_DELTA,
sum(b.GC_CR_BLOCKS_SERVED_DELTA) GC_CR_BLOCKS_SERVED_DELTA,
sum(b.GC_CU_BLOCKS_SERVED_DELTA) GC_CU_BLOCKS_SERVED_DELTA,
sum(b.GC_BUFFER_BUSY_DELTA) GC_BUFFER_BUSY_DELTA,
sum(b.GC_CR_BLOCKS_RECEIVED_DELTA) GC_CR_BLOCKS_RECEIVED_DELTA,
sum(b.GC_CU_BLOCKS_RECEIVED_DELTA) GC_CU_BLOCKS_RECEIVED_DELTA,
sum(b.SPACE_USED_DELTA) SPACE_USED_DELTA,
sum(b.SPACE_ALLOCATED_DELTA) SPACE_ALLOCATED_DELTA,
sum(b.TABLE_SCANS_DELTA) TABLE_SCANS_DELTA,
sum(b.CHAIN_ROW_EXCESS_DELTA) CHAIN_ROW_EXCESS_DELTA,
sum(b.PHYSICAL_READ_REQUESTS_DELTA) PHYSICAL_READ_REQUESTS_DELTA,
sum(b.PHYSICAL_WRITE_REQUESTS_DELTA) PHYSICAL_WRITE_REQUESTS_DELTA,
sum(b.OPTIMIZED_PHYSICAL_READS_DELTA) OPTIMIZED_PHYSICAL_READS_DELTA
FROM
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_seg_stat b
WHERE
s0.dbid = &_dbid
AND s1.dbid = s0.dbid
AND b.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber
AND s1.instance_number = s0.instance_number
AND b.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND b.snap_id = s0.snap_id + 1
--AND s0.snap_id = 35547
GROUP BY
s0.snap_id, s0.END_INTERVAL_TIME, s0.instance_number, b.dataobj#, b.obj#, b.dbid
) r
WHERE n.dataobj# = r.dataobj#
AND n.obj# = r.obj#
AND n.dbid = r.dbid
AND r.PHYSICAL_READS_DELTA + r.PHYSICAL_WRITES_DELTA > 0
ORDER BY physical_rw DESC,
object_name,
owner,
subobject_name
)
WHERE
seg_rank <=10
--and snap_id in (35547,35548,35549)
order by inst, snap_id, seg_rank asc;
}}}
''References:''
SPACE_ALLOCATED_DELTA http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:7867887875624
This script is very handy for characterizing the resource-hogging SQLs; once the data is pulled into Tableau it's very easy to sort things out, and you'll be done in a couple of minutes.
<<<
''Per instance''
top elap / exec
top disk reads
top buffer gets
top executes
top app wait
top concurrency wait
top cluster wait
''Per App Module (parsing schema)''
''or by other dimensions (inst, sql_id, module)'' (see the per-module rollup sketch after the script below)
<<<
{{{
set arraysize 5000
COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR Top SQL Report' skip 2
set pagesize 50000
set linesize 550
col snap_id format 99999 heading snap_id -- "Snap|ID"
col tm format a15 heading tm -- "Snap|Start|Time"
col inst format 90 heading inst -- "i|n|s|t|#"
col dur format 990.00 heading dur -- "Snap|Dur|(m)"
col sql_id format a15 heading sql_id -- "SQL|ID"
col phv format 99999999999 heading phv -- "Plan|Hash|Value"
col module format a50
col elap format 999990.00 heading elap -- "Ela|Time|(s)"
col elapexec format 999990.00 heading elapexec -- "Ela|Time|per|exec|(s)"
col cput format 999990.00 heading cput -- "CPU|Time|(s)"
col iowait format 999990.00 heading iowait -- "IO|Wait|(s)"
col appwait format 999990.00 heading appwait -- "App|Wait|(s)"
col concurwait format 999990.00 heading concurwait -- "Ccr|Wait|(s)"
col clwait format 999990.00 heading clwait -- "Cluster|Wait|(s)"
col bget format 99999999990 heading bget -- "LIO"
col dskr format 99999999990 heading dskr -- "PIO"
col dpath format 99999999990 heading dpath -- "Direct|Writes"
col rowp format 99999999990 heading rowp -- "Rows"
col exec format 9999990 heading exec -- "Exec"
col prsc format 999999990 heading prsc -- "Parse|Count"
col pxexec format 9999990 heading pxexec -- "PX|Server|Exec"
col icbytes format 99999990 heading icbytes -- "IC|MB"
col offloadbytes format 99999990 heading offloadbytes -- "Offload|MB"
col offloadreturnbytes format 99999990 heading offloadreturnbytes -- "Offload|return|MB"
col flashcachereads format 99999990 heading flashcachereads -- "Flash|Cache|MB"
col uncompbytes format 99999990 heading uncompbytes -- "Uncomp|MB"
col pctdbt format 990 heading pctdbt -- "DB Time|%"
col aas format 990.00 heading aas -- "A|A|S"
col time_rank format 90 heading time_rank -- "Time|Rank"
col sql_text format a6 heading sql_text -- "SQL|Text"
select *
from (
select
trim('&_instname') instname,
trim('&_dbid') db_id,
trim('&_hostname') hostname,
sqt.snap_id snap_id,
TO_CHAR(sqt.tm,'MM/DD/YY HH24:MI:SS') tm,
sqt.inst inst,
sqt.dur dur,
sqt.aas aas,
nvl((sqt.elap), to_number(null)) elap,
nvl((sqt.elapexec), 0) elapexec,
nvl((sqt.cput), to_number(null)) cput,
sqt.iowait iowait,
sqt.appwait appwait,
sqt.concurwait concurwait,
sqt.clwait clwait,
sqt.bget bget,
sqt.dskr dskr,
sqt.dpath dpath,
sqt.rowp rowp,
sqt.exec exec,
sqt.prsc prsc,
sqt.pxexec pxexec,
sqt.icbytes,
sqt.offloadbytes,
sqt.offloadreturnbytes,
sqt.flashcachereads,
sqt.uncompbytes,
sqt.time_rank time_rank,
sqt.sql_id sql_id,
sqt.phv phv,
sqt.parse_schema parse_schema,
substr(to_clob(decode(sqt.module, null, null, sqt.module)),1,50) module,
st.sql_text sql_text -- PUT/REMOVE COMMENT TO HIDE/SHOW THE SQL_TEXT
from (
select snap_id, tm, inst, dur, sql_id, phv, parse_schema, module, elap, elapexec, cput, iowait, appwait, concurwait, clwait, bget, dskr, dpath, rowp, exec, prsc, pxexec, icbytes, offloadbytes, offloadreturnbytes, flashcachereads, uncompbytes, aas, time_rank
from
(
select
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
e.sql_id sql_id,
e.plan_hash_value phv,
e.parsing_schema_name parse_schema,
max(e.module) module,
sum(e.elapsed_time_delta)/1000000 elap,
decode((sum(e.executions_delta)), 0, to_number(null), ((sum(e.elapsed_time_delta)) / (sum(e.executions_delta)) / 1000000)) elapexec,
sum(e.cpu_time_delta)/1000000 cput,
sum(e.iowait_delta)/1000000 iowait,
sum(e.apwait_delta)/1000000 appwait,
sum(e.ccwait_delta)/1000000 concurwait,
sum(e.clwait_delta)/1000000 clwait,
sum(e.buffer_gets_delta) bget,
sum(e.disk_reads_delta) dskr,
sum(e.direct_writes_delta) dpath,
sum(e.rows_processed_delta) rowp,
sum(e.executions_delta) exec,
sum(e.parse_calls_delta) prsc,
sum(e.px_servers_execs_delta) pxexec,
sum(e.io_interconnect_bytes_delta)/1024/1024 icbytes,
sum(e.io_offload_elig_bytes_delta)/1024/1024 offloadbytes,
sum(e.io_offload_return_bytes_delta)/1024/1024 offloadreturnbytes,
(sum(e.optimized_physical_reads_delta)* &_blocksize)/1024/1024 flashcachereads,
sum(e.cell_uncompressed_bytes_delta)/1024/1024 uncompbytes,
(sum(e.elapsed_time_delta)/1000000) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60) aas,
DENSE_RANK() OVER (
PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
from
dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sqlstat e
where
s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
and e.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
and e.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
and e.snap_id = s0.snap_id + 1
group by
s0.snap_id, s0.END_INTERVAL_TIME, s0.instance_number, e.sql_id, e.plan_hash_value, e.parsing_schema_name, e.elapsed_time_delta, s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME
)
where
time_rank <= 5 -- GET TOP 5 SQL ACROSS SNAP_IDs... YOU CAN ALTER THIS TO HAVE MORE DATA POINTS
)
sqt,
(select sql_id, dbid, nvl(b.name, a.command_type) sql_text from dba_hist_sqltext a, audit_actions b where a.command_type = b.action(+)) st
where st.sql_id(+) = sqt.sql_id
and st.dbid(+) = &_dbid
-- AND TO_CHAR(tm,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= 7
-- AND TO_CHAR(tm,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= 1800
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- AND snap_id in (338,339)
-- AND snap_id = 338
-- AND snap_id >= 335 and snap_id <= 339
-- AND lower(st.sql_text) like 'select%'
-- AND lower(st.sql_text) like 'insert%'
-- AND lower(st.sql_text) like 'update%'
-- AND lower(st.sql_text) like 'merge%'
-- AND pxexec > 0
-- AND aas > .5
order by
snap_id -- TO GET SQL OUTPUT ACROSS SNAP_IDs SEQUENTIALLY AND ASC
-- nvl(sqt.elap, -1) desc, sqt.sql_id -- TO GET SQL OUTPUT BY ELAPSED TIME
)
-- where rownum <= 20
;
}}}
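and here's the per-app-module rollup mentioned above, a minimal sketch of mine: the same dba_hist_sqlstat deltas aggregated by parsing schema and module, ordered by total elapsed time
{{{
-- top resource consumers by parsing schema and module (my own sketch)
select e.parsing_schema_name parse_schema,
       nvl(e.module, 'unknown') module,
       round(sum(e.elapsed_time_delta)/1000000, 2) elap,
       round(sum(e.cpu_time_delta)/1000000, 2) cput,
       sum(e.executions_delta) exec_count,
       sum(e.buffer_gets_delta) bget,
       sum(e.disk_reads_delta) dskr
from   dba_hist_sqlstat e
where  e.dbid = (select dbid from v$database)
group  by e.parsing_schema_name, nvl(e.module, 'unknown')
order  by elap desc;
}}}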
Sample storage forecast here https://www.evernote.com/shard/s48/sh/9594b0d9-cf51-4bea-b0e1-ce68915c0357/a7626bde5789e0964b25d79bbcf1f6ca
Use the first two SQLs below, extracting each to a CSV file as input to Tableau; the "per day" output gives the monthly high-level numbers and the "per snap_id" output gives the detail (a spool sketch for the CSV extraction follows the queries).
{{{
-- per day
SELECT a.month,
used_size_mb ,
used_size_mb - LAG (used_size_mb,1) OVER (PARTITION BY a.name ORDER BY a.name,a.month) inc_used_size_mb
FROM
(SELECT TO_CHAR(sp.begin_interval_time,'MM/DD/YY') month ,
ts.name ,
MAX(ROUND((tsu.tablespace_usedsize* dt.block_size )/(1024*1024),2)) used_size_mb
FROM DBA_HIST_TBSPC_SPACE_USAGE tsu,
v$tablespace ts ,
DBA_HIST_SNAPSHOT sp,
DBA_TABLESPACES dt
WHERE tsu.tablespace_id = ts.ts#
AND tsu.snap_id = sp.snap_id
AND ts.name = dt.tablespace_name
GROUP BY TO_CHAR(sp.begin_interval_time,'MM/DD/YY'),
ts.name
ORDER BY
month
) A;
-- detail per snap_id
select * from
(
SELECT
s0.snap_id id,
TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
ts.ts#,
ts.name ,
MAX(ROUND((tsu0.tablespace_maxsize * dt.block_size )/(1024*1024),2) ) max_size_MB ,
MAX(ROUND((tsu0.tablespace_size * dt.block_size )/(1024*1024),2) ) cur_size_MB ,
MAX(ROUND((tsu0.tablespace_usedsize * dt.block_size )/(1024*1024),2)) used_size_MB,
MAX(ROUND(( (tsu1.tablespace_usedsize - tsu0.tablespace_usedsize) * dt.block_size )/(1024*1024),2)) diff_used_size_MB
FROM
dba_hist_snapshot s0,
dba_hist_snapshot s1,
DBA_HIST_TBSPC_SPACE_USAGE tsu0,
DBA_HIST_TBSPC_SPACE_USAGE tsu1,
v$tablespace ts,
DBA_TABLESPACES dt
WHERE s1.snap_id = s0.snap_id + 1
AND tsu0.snap_id = s0.snap_id
AND tsu1.snap_id = s0.snap_id + 1
AND tsu0.tablespace_id = ts.ts#
AND tsu1.tablespace_id = ts.ts#
AND ts.name = dt.tablespace_name
GROUP BY s0.snap_id, TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS'), ts.ts#, ts.name
)
--where
--tm > to_char(sysdate - 7, 'MM/DD/YY HH24:MI')
--and name = 'SYSAUX'
--and id in (224,225,226)
--and id = 225
ORDER BY id asc;
-- without dba_hist_snapshot
SELECT
sp.snap_id,
TO_CHAR (sp.begin_interval_time,'DD-MM-YYYY') days ,
ts.name ,
MAX(ROUND((tsu.tablespace_maxsize * dt.block_size )/(1024*1024),2) ) max_size_MB ,
MAX(ROUND((tsu.tablespace_size * dt.block_size )/(1024*1024),2) ) cur_size_MB ,
MAX(ROUND((tsu.tablespace_usedsize * dt.block_size )/(1024*1024),2)) used_size_MB
FROM
DBA_HIST_TBSPC_SPACE_USAGE tsu,
v$tablespace ts,
DBA_HIST_SNAPSHOT sp,
DBA_TABLESPACES dt
WHERE tsu.tablespace_id = ts.ts#
AND tsu.snap_id = sp.snap_id
AND ts.name = dt.tablespace_name
--and sp.snap_id = 225
GROUP BY sp.snap_id, TO_CHAR (sp.begin_interval_time,'DD-MM-YYYY'), ts.name
ORDER BY ts.name,
days;
}}}
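one way to extract the two outputs to CSV from SQL*Plus (a sketch; the spool filenames are placeholders of my own choosing):
{{{
set pagesize 50000 linesize 550 colsep ','
spool tablespace_per_day.csv
-- ... run the "per day" query here ...
spool off
spool tablespace_per_snap_id.csv
-- ... run the "per snap_id" query here ...
spool off
}}}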
http://docs.oracle.com/cd/E50790_01/doc/doc.121/e50471/views.htm#SAGUG21166
The 12.1 storage software introduced a bunch of DBA_HIST views:
DBA_HIST_ASM_BAD_DISK
DBA_HIST_ASM_DISKGROUP
DBA_HIST_ASM_DISKGROUP_STAT
DBA_HIST_CELL_CONFIG
DBA_HIST_CELL_CONFIG_DETAIL
DBA_HIST_CELL_DB
DBA_HIST_CELL_DISKTYPE
DBA_HIST_CELL_DISK_NAME
DBA_HIST_CELL_DISK_SUMMARY
DBA_HIST_CELL_IOREASON
DBA_HIST_CELL_IOREASON_NAME
DBA_HIST_CELL_METRIC_DESC
DBA_HIST_CELL_NAME
DBA_HIST_CELL_OPEN_ALERTS
{{{
19:42:37 SYS@dbfs1> desc V$CELL_DB
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
CELL_NAME VARCHAR2(400)
CELL_HASH NUMBER
INCARNATION_NUM NUMBER
TIMESTAMP DATE
SRC_DBNAME VARCHAR2(256)
SRC_DBID NUMBER
METRIC_ID NUMBER
METRIC_NAME VARCHAR2(257)
METRIC_VALUE NUMBER
METRIC_TYPE VARCHAR2(17)
CON_ID NUMBER
19:42:39 SYS@dbfs1> desc dba_hist_cell_db
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
SNAP_ID NOT NULL NUMBER
DBID NOT NULL NUMBER
CELL_HASH NOT NULL NUMBER
INCARNATION_NUM NOT NULL NUMBER
SRC_DBID NOT NULL NUMBER
SRC_DBNAME VARCHAR2(256)
DISK_REQUESTS NUMBER
DISK_BYTES NUMBER
FLASH_REQUESTS NUMBER
FLASH_BYTES NUMBER
CON_DBID NUMBER
CON_ID NUMBER
}}}
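Based on the column list in the desc output above, a minimal sample query sketch against DBA_HIST_CELL_DB (the snap_id/dbid join keys are the usual DBA_HIST convention; verify whether the request/byte counters are cumulative before diffing across snapshots):
{{{
-- per-snapshot disk vs flash I/O per source database
select sp.snap_id,
       to_char(sp.end_interval_time,'MM/DD/YY HH24:MI') tm,
       cd.src_dbname,
       sum(cd.disk_requests)                  disk_reqs,
       sum(cd.flash_requests)                 flash_reqs,
       round(sum(cd.disk_bytes)/1024/1024,2)  disk_mb,
       round(sum(cd.flash_bytes)/1024/1024,2) flash_mb
from   dba_hist_cell_db cd,
       dba_hist_snapshot sp
where  cd.snap_id = sp.snap_id
and    cd.dbid    = sp.dbid
and    sp.instance_number = 1   -- avoid duplicate snapshot rows in RAC
group by sp.snap_id, to_char(sp.end_interval_time,'MM/DD/YY HH24:MI'), cd.src_dbname
order by sp.snap_id, cd.src_dbname;
}}}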
http://timurakhmadeev.wordpress.com/2012/02/15/ruoug-in-saint-petersburg/
http://iusoltsev.wordpress.com/2012/02/12/awr-snapshot-suspend-oracle-11g/
http://arjudba.blogspot.com/2010/08/ora-13516-awr-operation-failed-swrf.html
Mythbusters: do more AWR retention days mean more SYSAUX tablespace usage?
http://goo.gl/jTjsk
{{{
-- TO VIEW RETENTION INFORMATION
select * from dba_hist_wr_control;
set lines 300
select b.name, a.DBID,
((TRUNC(SYSDATE) + a.SNAP_INTERVAL - TRUNC(SYSDATE)) * 86400)/60 AS SNAP_INTERVAL_MINS,
((TRUNC(SYSDATE) + a.RETENTION - TRUNC(SYSDATE)) * 86400)/60 AS RETENTION_MINS,
((TRUNC(SYSDATE) + a.RETENTION - TRUNC(SYSDATE)) * 86400)/60/60/24 AS RETENTION_DAYS,
TOPNSQL
from dba_hist_wr_control a, v$database b
where a.dbid = b.dbid;
/*
-- SET RETENTION PERIOD TO 30 DAYS (UNIT IS MINUTES)
execute dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 43200);
-- SET RETENTION PERIOD TO 6 months (UNIT IS MINUTES)
exec dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 262800);
-- SET RETENTION PERIOD TO 365 DAYS (UNIT IS MINUTES)
exec dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 525600);
}}}
AWR snap interval difference (15mins vs 60mins) and its effect on CPU sizing
''Consolidate 4 instances with different snap intervals - link'' http://www.evernote.com/shard/s48/sh/b7147378-ebb4-4eec-afb5-61222259ce2d/f94d5d98afea81c3ab10af8016775048
Master Note on AWR Warehouse (Doc ID 1907335.1)
Oracle® Database 2 Day + Performance Tuning Guide 12c Release 1 (12.1)
http://docs.oracle.com/database/121/TDPPT/tdppt_awr_warehouse.htm#TDPPT145
Analyze Long-term Performance Across Enterprise Databases Using AWR Warehouse
https://apex.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:9887,29
blogs about it
http://www.dbi-services.com/index.php/blog/entry/oracle-oem-cloud-control-12104-awr-warehouse
http://dbakevlar.com/2014/07/awr-warehouse-status/
http://www.slideshare.net/kellynpotvin/odtug-webinar-awr-warehouse
good stuff - https://www.doag.org/formes/pubfiles/7371313/docs/Konferenz/2015/vortraege/Oracle%20Datenbanken/2015-K-DB-Kellyn_PotVin-Gorman-The_Power_of_the_AWR_Warehouse_and_Beyond-Manuskript.pdf
good stuff - https://jhdba.wordpress.com/2016/01/04/oem-12-awr-warehouse-not-quite-the-finished-product-yet/
good stuff - http://www.slideshare.net/fullscreen/krishna.setwin/con8449-athreya-awr-warehouse
! the workflow
''source''
when awr warehouse is deployed on a target it deploys jobs, and this is an agent push only
the source exports the awr dump as a job to the local filesystem; this doesn't have to be the same directory path as the warehouse
if you have a 12 month retention it will not do it all at once, it exports in 2GB chunks every 3 hours
it exports the oldest snap_id first
''oms''
oms does an agent-to-agent talk when the source is transferring the dump file and then puts it on the "warehouse stage directory"
the oms agent checks the directory every 5 mins and if there's a new file it gets imported to the warehouse
''warehouse''
warehouse has this ETL job and databases are partitioned per DBID
there's a mapping table to map the database to DBID
ideally if you want to put other data on the warehouse, put it in another schema (and use views), because a warehouse upgrade or patch may wipe them out
as long as you have a diag pack license on the source, you are covered for the warehouse license; this is pretty much the same licensing scheme as OMS
<<<
Karl Arao
kellyn, question on awr warehouse, so it is recommended to be on a separate database. and on that separate database it requires a separate diag pack license?
so let's say you have a diag and tuning pack on the source... and then you've got to have diag pack on the target awr warehouse?
thanks!
Kellyn Pot'vin
11:09am
Kellyn Pot'vin
No, your diag pack from source db grants the limited ee license for the awrw
Does that make sense?
Karl Arao
11:09am
Karl Arao
i see
essentially it's free
because source would have diag and tuning pack anyway
Kellyn Pot'vin
11:10am
Kellyn Pot'vin
It took them a while to come up with that, but same for em12c omr
Now, if you add anything, rac or data guard, then you have to license that
Karl Arao
11:11am
Karl Arao
yeap which is also the case on omr
Kellyn Pot'vin
11:12am
Kellyn Pot'vin
Exactly
Karl Arao
11:12am
Karl Arao
now, can we add anymore tables on the awr warehouse
let's say i want that to be my longer retention data for my metric extension as well
Kellyn Pot'vin
11:13am
Kellyn Pot'vin
Yes, but no additional partitioning and it may impact patches/upgrades
Karl Arao
11:13am
Karl Arao
sure
Kellyn Pot'vin
11:14am
Kellyn Pot'vin
I wouldn't do triggers or partitions
Views are cool
Karl Arao
11:16am
Karl Arao
we are going to evaluate this soon, just upgraded to r4
Kellyn Pot'vin
11:16am
Kellyn Pot'vin
Otn will post a new article of advanced awr usage next week from me
Karl Arao
11:16am
Karl Arao
another question,
on my source i have 12months of data
Kellyn Pot'vin
11:17am
Kellyn Pot'vin
And now the exadata team is asking how to incorporate it as part of healthcheck design
Karl Arao
11:17am
Karl Arao
will it ETL that to the warehouse
like 1 shot
Kellyn Pot'vin
11:17am
Kellyn Pot'vin
That is my focus in dev right now
Karl Arao
11:17am
Karl Arao
that's going to be 160GB of data
and with exp warehouse
there's going to be an impact for sure
Kellyn Pot'vin
11:18am
Kellyn Pot'vin
No, it has a throttle and will take 2gb file loads in 3hr intervals, oldest snapshots first
Karl Arao
11:18am
Karl Arao
I'm just curious on the etl
what do you mean 3hours intervals ?
2GB to finish in 3hours
Kellyn Pot'vin
11:19am
Kellyn Pot'vin
Then go back to 24 hr interval auto after any catchup, same on downtime catchup
<<<
! articles
https://hemantoracledba.blogspot.com/2016/12/122-new-features-4-awr-for-pluggable.html
https://blog.dbi-services.com/oracle-12cr2-awr-views-in-multitenant/
http://oracledbpro.blogspot.com/2017/03/awr-differences-between-12c-release-1.html
https://www.google.com/search?q=awr+retention+pdb+cdb&oq=awr+retention+pdb+cdb&aqs=chrome..69i57.4803j0j0&sourceid=chrome&ie=UTF-8
AWR_SNAPSHOT_TIME_OFFSET
! setup
{{{
-- if set on CDB level, it will take effect on all PDBs
-- if set on PDB, it will take effect only on that PDB
select * from cdb_hist_wr_control;
alter session set container = CDB$ROOT;
alter system set awr_pdb_autoflush_enabled=true;
alter system set AWR_SNAPSHOT_TIME_OFFSET=1000000 scope=both;
-- 1000000 is the magic number based on pdb name to avoid flushing at the same time
exec dbms_workload_repository.modify_snapshot_settings(interval => 30, dbid => 4182556862);
select con_id, instance_number, snap_id, begin_interval_time, end_interval_time from cdb_hist_snapshot order by 1,2,3;
}}}
! MOS
AWR Snapshots and Reports from Oracle Multitenant Database (CDB, PDB) (Doc ID 2295998.1)
How to Create an AWR Report at the PDB level on 12.2 or later (Doc ID 2334006.1)
ORA-20200 Error When Generating AWR or ADDM Report as a PDB DBA User From a 12.2.0.1 CDB Database (Doc ID 2267849.1)
Bug 25941188 : ORA-20200 WHEN ADDM REPORT GENERATED FROM A PDB DATABASE USING AWR_PDB OPTION
AWR Report run from a Pluggable Database (PDB) Runs Much Slower than from a Container Database (CDB) on 12c (Doc ID 1995938.1)
How to Modify Statistics collection by MMON for AWR repository (Doc ID 308450.1)
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/AWR_SNAPSHOT_TIME_OFFSET.html#GUID-90CD8379-DCB2-4681-BB90-0A32C6029C4E
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/AWR_PDB_AUTOFLUSH_ENABLED.html#GUID-08FA21BC-8FB1-4C51-BEEA-139C734D17A7
.
<<<
Oracle on RDS
There are Oracle native features that do not work with RDS, such as RAC, Data Guard and RMAN.
Instead of Data Guard, RDS uses replicas, which are basically block copies from the primary to the replica.
Backups via RMAN are not possible. AWS performs storage volume snapshots.
Hot backups are possible only if there is a replica in play. In this case, the backup is taken from the secondary instead of the primary. If there is only a primary, the storage snapshot will cause a temporary I/O suspension; so no hot backup.
No access to sys/system; some normal DBA tasks will need to be done via the AWS API.
No access to the underlying file system.
If they really want Oracle on AWS, I would recommend putting Oracle on EC2; that said, the performance and cost are better on OCI.
Oracle RDS database max size is up to 6TB.
Getting data into RDS is also another challenge – you are limited to datapump and cannot use a lift-and-shift approach as RMAN backup is not supported (see the Data Pump sketch after this block).
<<<
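Since datapump is the main way in, here is a minimal DBMS_DATAPUMP network-link import sketch (SOURCE_LINK and the MYAPP schema are hypothetical; run as the RDS master user, and RDS also supports loading dump files staged through its S3 integration):
{{{
-- import a schema straight over a database link, no dump file on disk
set serveroutput on
DECLARE
  h      NUMBER;
  state  VARCHAR2(30);
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation   => 'IMPORT',
                          job_mode    => 'SCHEMA',
                          remote_link => 'SOURCE_LINK',   -- hypothetical db link to the source
                          job_name    => 'RDS_NET_IMP');
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', q'[IN ('MYAPP')]');  -- hypothetical schema
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
  DBMS_OUTPUT.PUT_LINE('job state: ' || state);
END;
/
}}}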
https://www.google.com/search?q=AWS+S3+merge+SCD&oq=AWS+S3+merge+SCD&aqs=chrome..69i57j33l2.6636j0j0&sourceid=chrome&ie=UTF-8
https://cloudbasic.net/white-papers/data-warehousing-scd/
https://www.google.com/search?q=AWS+S3+SCD+type+2&ei=Oa4CXY2SBc-f_Qb9gqugDg&start=20&sa=N&ved=0ahUKEwjNkrq3oufiAhXPT98KHX3BCuQ4ChDy0wMIdw&biw=1334&bih=798
http://resources.pythian.com/hubfs/Framework-For-Migrate-Your-Data-Warehouse-Google-BigQuery-WhitePaper.pdf
https://stackoverflow.com/questions/52919985/incremental-updates-of-data-in-an-s3-data-lake
https://sonra.io/2009/02/01/one-pass-scd2-load-how-to-load-a-slowly-changing-dimension-type-2-with-one-sql-merge-statement-in-oracle/
! discussion
<<<
Need your help on one of the scenarios.
We are copying simple txt/parquet files from HDFS (Hadoop cluster) to a simple S3 bucket.
The base copy is okay, we are good with that, but after that we just want to copy to S3 the incremental changes that happen on the source (HDFS).
I am going through DataSync and other options in the meantime.
Not looking for any tool option (like Attunity, etc.), looking for some free option.
<<<
<<<
Not sure whether your incremental changes happen on the source base file or on a different new file. Usually on HDFS, you want the incremental changes to happen in new files. Then to get the complete view of the data, you need both the base and the new incremental files together. After some time, you need to merge these two kinds of files into a new base file; this is usually referred to as a compaction operation.
<<<
<<<
I'm thinking of handling the SCD logic in Hadoop (with ACID on) then appending the new data to S3 (see the SCD2 sketch after this discussion)
<<<
<<<
You can also look into Nifi, which guarantees delivery of packets and is an open-source package. Also, it can track the last record processed, but it comes with some challenges of its own. Another option would be to land your daily feeds into daily_feed_tables, process the data to S3 and run the compaction operation as suggested.
<<<
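For the SCD type 2 part, a minimal two-statement sketch in Oracle-style SQL (dim_customer, stg_customer and the attr column are hypothetical; the one-pass MERGE variant is in the sonra.io link above):
{{{
-- 1) close out current rows whose attributes changed in the staging feed
UPDATE dim_customer d
   SET d.valid_to = SYSDATE, d.current_flag = 'N'
 WHERE d.current_flag = 'Y'
   AND EXISTS (SELECT 1 FROM stg_customer s
               WHERE s.customer_id = d.customer_id
                 AND s.attr <> d.attr);

-- 2) insert new versions for the changed rows and for brand-new keys
INSERT INTO dim_customer (customer_id, attr, valid_from, valid_to, current_flag)
SELECT s.customer_id, s.attr, SYSDATE, NULL, 'Y'
  FROM stg_customer s
 WHERE NOT EXISTS (SELECT 1 FROM dim_customer d
                   WHERE d.customer_id = s.customer_id
                     AND d.current_flag = 'Y');
}}}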
https://www.udemy.com/aws-certified-solutions-architect-associate/
https://www.udemy.com/aws-certified-developer-associate/
https://www.udemy.com/aws-certified-sysops-administrator-associate/
https://www.udemy.com/aws-codedeploy/
https://www.udemy.com/get-oracle-flying-in-aws-cloud/
https://www.udemy.com/architecting-amazon-web-services-an-introduction/
! exam
https://aws.amazon.com/certification/
! practice questions
the ones from the LA or cloud guru courses + the aws ones (https://aws.psiexams.com/#/dashboard, about $20) + the free questions (available for sysops, not sa)
<<showtoc>>
! CPU
https://david-codes.hatanian.com/2019/06/09/aws-costs-every-programmer-should-now.html
https://github.com/dhatanian/aws-ec2-costs
! network
<<<
TIL what EC2's "Up to" means. I used to think it simply indicates best effort bandwidth, but apparently there's a hard baseline bottleneck for most EC2 instance types (those with an "up to"). It's significantly smaller than the rating, and it can be reached in just a few minutes.
<<<
https://twitter.com/dvassallo/status/1120171727399448576
Mmm.. It's a long story, just check out this blog post.. http://karlarao.wordpress.com/2010/04/10/my-personal-wiki-karlarao-tiddlyspot-com/ :)
Also check out my Google profile here https://plus.google.com/102472804060828276067/about to know more about my web/social media presence
check here [[.TiddlyWiki]] to get started on setting up and configuring your own wiki
https://www.slideshare.net/Enkitec/presentations
<<<
https://connectedlearning.accenture.com/curator/chanea-heard
Golden Gate Admin https://connectedlearning.accenture.com/learningboard/goldengate-administration
APEX https://connectedlearning.accenture.com/leaning-list-view/16597
ZFS storage appliance https://connectedlearning.accenture.com/learningboard/16600-zfs-storage-appliance
SPARC Supercluster Admin https://connectedlearning.accenture.com/leaning-list-view/16596
Exadata Admin https://connectedlearning.accenture.com/leaning-list-view/12954
Exadata Optimizations https://connectedlearning.accenture.com/leaning-list-view/13051
All AEG https://connectedlearning.accenture.com/learningactivities
SQL Tuning with SQLTXPLAIN https://connectedlearning.accenture.com/leaning-list-view/13097
E4 2015 https://connectedlearning.accenture.com/leaning-list-view/13512
https://mediaexchange.accenture.com/tag/tagid/hadoop
AEG webinars https://connectedlearning.accenture.com/leaning-list-view/110872
media exchange tag "enkitec" https://mediaexchange.accenture.com/tag/tagid/enkitec
<<<
! Oracle Unlimited Learning Subscription
<<<
Your Unlimited Learning Subscription provides you with:
- Unlimited access to all courses in the Oracle University Training-on-Demand (ToD) catalog – over 450 titles of in depth training courses for Database, Applications and Middleware
- Unlimited access to all Oracle University Learning Subscriptions, including the latest in Oracle’s Cloud Solutions, Product Solutions and Industry Solutions
- Unlimited access to all Oracle University Learning Streams for continuous learning around Oracle’s Database, Middleware, EBS and PSFT products
- Access to Public live virtual classroom training sessions offered by Oracle University in the case that a Training on Demand is not available
http://launch.oracle.com/?aglp
Digital Training Learning Portal https://isdportal.oracle.com/pls/portal/tsr_admin.page.main?pageid=33,986&dad=portal&schema=PORTAL&p_k=hCwPEObICeHNWFJNdHCxsnXIyaWOpibldVWGShuxqGCGEmtoGCkVshGgcTdu1191413973
Program Overview http://link.brightcove.com/services/player/bcpid1799411699001?bckey=AQ~~,AAABmsB_z2k~,HvNx0XQhsPxXu5er5IYkstkCq_O9j5dg&bctid=4731151798001
Learning Paths https://isdportal.oracle.com/pls/portal/tsr_admin.page.main?pageid=33,976&dad=portal&schema=PORTAL&p_k=hCwPEObICeHNWFJNdHCxsnXIyaWOpibldVWGShuxqGCGEmtoGCkVshGgcTdu1191413973
<<<
http://www.ardentperf.com/2011/08/19/developer-access-to-10046-trace-files/
http://dioncho.wordpress.com/2009/03/19/another-way-to-use-trace-file/
http://kb.acronis.com/content/2788
http://kb.acronis.com/search/apachesolr_search/true%20image%202012%20slow%20backup?filters=%20type%3Aarticle
http://forum.acronis.com/forum/5399
http://kb.acronis.com/content/2293
''Amanda'' http://www.amanda.org/ <-- but this requires a client agent
http://gjilevski.wordpress.com/2010/03/14/creating-oracle-11g-active-standby-database-from-physical-standby-database/
''Oracle Active Data Guard: What’s Really Under the Hood?'' http://www.oracle.com/technetwork/database/features/availability/s316924-1-175932.pdf
''Read only and vice versa''
http://www.adp-gmbh.ch/ora/data_guard/standby_read_only.html
http://juliandyke.wordpress.com/2010/10/14/oracle-11gr2-active-data-guard/
http://www.oracle-base.com/articles/11g/data-guard-setup-11gr2.php#read_only_active_data_guard
! to be in Active DG; remove the "open read only" step for normal managed recovery
{{{
startup mount
alter database open read only;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE disconnect;
}}}
!''Snapper on Standby Database''
<<<
On the standby site, if the database is open read only with apply, you should be able to run snapper on it or do ASH queries as well
Check out some commands here http://karlarao.tiddlyspot.com/#snapper
And if you want to loop it and leave it running and check the data the next day you can do this http://karlarao.tiddlyspot.com/#snapperloop (sections “snapper loop showing activity across all instances (must use snapper v4)” and “process the snap.txt file as csv input”)
Some commands you can use and things to check are attached as well. But I would start with
@snapper ash 5 1 all@*
Just to see what’s going on during the slow period
<<<
! triggers on ADG
Using Active Data Guard Reporting with Oracle E-Business Suite Release 12.1 and Oracle Database 11g (Doc ID 1070491.1)
<<<
Section 7: Database Triggers
ADG support delivers three schema level database triggers as follows:
Logon and Logoff
These triggers are a key component of the simulation testing. The logon trigger enables the read-only violation trace, whereas the logoff trigger records the actual number of violations. If these triggers are not enabled, the trace errors and V$ data are not recorded, in other words, the simulations are treated as having no errors.
Servererror
The error trigger is only executed if an ORA-16000 is raised, which is read-only violation (the trigger does nothing on the primary). The error count for the concurrent program is incremented only if standby_error_checking has been enabled as described in 4.2 General Options. If the error trigger is not enabled, report failures will not be recorded and failures will not lead to run_on_standby being disabled.
<<<
http://www.toadworld.com/platforms/oracle/b/weblog/archive/2014/04/27/oracle-apps-r12-offloading-reporting-workload-with-active-data-guard.aspx
! resource management on ADG
Configuring Resource Manager for Oracle Active Data Guard (Doc ID 1930540.1)
<<<
Configuring a resource plan on a physical standby database requires the plan to be created on primary database.
I/O Resource Manager helps multiple databases and workloads within the databases share the I/O resources on the Exadata storage. In a data guard environment, IORM can help protect the I/O latency for the redo apply I/Os from the standby database.
Critical I/Os from standby database backgrounds such as Managed Recovery Process (MRP) or Logical Standby Process (LSP) are automatically prioritized by enabling IORM on the Exadata storage. Database resource plans enabled on the standby databases are automatically pushed to the Exadata storage. Enabling IORM enforces database resource plans on the storage cells to minimize the latency for the critical redo-apply I/Os. To enable IORM, set the IORM objective to 'auto' on the Exadata storage cells.
Bug 12601274: Updates to consumer group mappings on the primary database are not reflected on the standby database. This bug is fixed in 11.2.0.4 and 12.1.0.2. On older releases, the updates are only reflected on the standby upon a restart of the standby database.
<<<
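A minimal sketch of setting the 'auto' IORM objective across all cells (the cell_group file is the usual dcli group file, an assumption of your setup):
{{{
# from a db node, against all storage cells
dcli -g cell_group -l celladmin cellcli -e "alter iormplan objective=auto"
# verify
dcli -g cell_group -l celladmin cellcli -e "list iormplan attributes objective"
}}}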
http://www.oracle-base.com/articles/11g/AwrBaselineEnhancements_11gR1.php <-- a good HOWTO
http://neerajbhatia.files.wordpress.com/2010/10/adaptive-thresholds.pdf
http://oracledoug.com/serendipity/index.php?/archives/1496-Adaptive-Thresholds-in-10g-Part-1-Metric-Baselines.html
http://oracledoug.com/serendipity/index.php?/archives/1497-Adaptive-Thresholds-in-10g-Part-2-Time-Grouping.html
http://oracledoug.com/serendipity/index.php?/archives/1498-Adaptive-Thresholds-in-10g-Part-3-Setting-Thresholds.html
http://oracledoug.com/metric_baselines_10g.pdf <-- ''GOOD STUFF''
http://oracledoug.com/adaptive_thresholds_faq.pdf <-- ''GOOD STUFF''
http://www.cmg.org/conference/cmg2007/awards/7122.pdf
http://optimaldba.com/papers/IEDBMgmt.pdf
http://www.oracle-base.com/articles/11g/AwrBaselineEnhancements_11gR1.php
Strategies for Monitoring Large Data Centers with Oracle Enterprise Manager http://gavinsoorma.com/wp-content/uploads/2011/03/monitoring_large_data_centers_with_OEM.pdf
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1525205200346930663
<<<
{{{
You Asked
In version 11g of Oracle database, there is a new feature whereby current performance data (obtained from AWR snapshots) can be compared against an AWR baseline and an alarm triggered if a given metric exceeds a certain threshold. From what I understand, there are 3 types of thresholds : fixed value, percent of maximum and significance level. The first type (fixed value) is very easy to understand - alarms are triggered whenever the metric in question exceeds certain fixed values specified for the warning and critical alerts (without reference to the baseline). The 2nd type (percent of maximum) presumably means that an alert is triggered whenever the current value of the metric exceeds the specified percent of the maximum value of the metric that was observed in the whole baseline period (if I understood this correctly - correct me if I'm wrong).
However, the 3rd type (significance level) is not at all easy to understand. The Oracle documentation is not at all clear on that point, nor could I find any Metalink notes on the subject. I also tried searching the OTN forums, to no avail. Could you please explain, in very simple terms, when exactly an alarm would be triggered if "significance level" is specified for the threshold type, if possible by giving a simple example. There are apparently 4 levels of such thresholds (high, very high, severe and extreme).
and we said...
I asked Graham Wood and John Beresniewicz for their input on this, they are the experts in this particular area
they said:
Graham Wood wrote:
> Sure,
> Copying JB as this is his specialty area, in case I don't get it right. :-)
>
> The basic idea of using significance level thresholds for alerting is that we are trying to detect outliers in the distribution of metric values, rather than setting a simple threshold value.
>
> By looking at the historical metric data from AWR we can identify values for 25th, 50th (median), 75th, 90th, 95th and 99th percentiles. Using a curve fitting algorithm we also extrapolate the 99.9th and 99.99th percentiles. We derive these percentiles based on time grouping, such as day, night, and hour of day.
>
> In the adaptive baselines feature in 11g we allow the user to specify the alert level, which equates to one of these percentile values:
> High 95th percentile
> Very High 99th percentile
> Severe 99.9th percentile
> Extreme 99.99th percentile
>
> Using the AWR history (actually the SYSTEM_MOVING_WINDOW baseline) the database will automatically determine the threshold level for a metric that corresponds to the selected significance level for the current time period.
>
> Setting a significance level of Extreme means that we would only alert on values that we would expect to see once in 10,000 observations (approximately once a year for hourly thresholds).
>
> Cheers, Graham
JB wrote:
Shorter answer:
---------------
The significance level thresholds are intended to produce alert threshold values for key performance metrics that represent the following:
"Automatically set threshold such that values observed above the threshold are statistically unusual (i.e. significant) at the Nth percentile based on actual data observed for this metric over the SYSTEM_MOVING_WINDOW baseline."
The premise here is that systems with relatively stable performance characteristics should show statistical stability in core performance metric values, and when unusual but high-impact performance events occur we expect these will be reflected in highly unusual observations in one or more (normally statistically stable) metrics. The significance level thresholds give users a way to specify alerting in terms of "how unusual" rather than "how much".
Longer (original) reply:
-----------------------------
Hi Tom -
Graham did a pretty good job, but I'll add some stuff.
Fixed thresholds are set explicitly by user, and change only when user unsets or sets a different threshold. They are based entirely on user understanding of the underlying metrics in relation to the underlying application and workload. This is the commonly understood paradigm for detecting performance issues: trigger an alert when metric threshold is crossed. There are numerous issues we perceived with this basic mechanism:
1) "Performance" expectations, and thus alert thresholds, often vary by application, workload, database size, etc. This results in what I call the MxN problem, which is that M metrics over N systems becomes MxN threshold decisions each of which can be very specific (i.e. threshold decisions not transferable.) This is potentially very manually intensive for users with many databases.
2) Workload may vary predictably on system (e.g. online day vs. batch night) and different performance expectations (and thus alert thresholds) may pertain to different workloads, so one threshold for all workloads is inappropriate.
3) Systems evolve over time and thresholds applicable for the system supporting 1,000 users may need to be altered when system supports 10,000 users.
The adaptive thresholds feature tries to address these issues as follows:
A) Thresholds are computed by the system based on a context of prior observations of this metric on this system. System-and-metric-specific thresholds are developed without obliging user to understand the specifics (helps relieve the MxN problem.)
B) Thresholds are periodically recomputed using statistical characterizations of metric values over the SYSTEM_MOVING_WINDOW baseline. Thus the thresholds adapt to slowly evolving workload or demand, as the moving window moves forward.
C) Metric statistics for adaptive thresholds are computed over grouping buckets (which we call "time groups") that can accommodate the common workload periodicities (day/night, weekday/weekend, etc.) Threshold resets can happen as frequently as every hour.
So the net-net is that metric alert thresholds are determined and set automatically by the system using actual metric observations as their basis and using metric-and-system-independent semantics (significance level or pct of max.)
JB
From Tom - Thanks both!
}}}
<<<
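To see what those significance-level thresholds roughly correspond to on your own system, a minimal sketch computing empirical percentiles for one metric from AWR history (the metric name is just an example; the real feature derives these per time group from the SYSTEM_MOVING_WINDOW baseline):
{{{
-- empirical 95th/99th percentiles ~ the "High" and "Very High" levels
SELECT PERCENTILE_DISC(0.95) WITHIN GROUP (ORDER BY value) pctl_95,
       PERCENTILE_DISC(0.99) WITHIN GROUP (ORDER BY value) pctl_99
  FROM dba_hist_sysmetric_history
 WHERE metric_name = 'Host CPU Utilization (%)';
}}}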
''Oracle By Example:''
http://docs.oracle.com/cd/E11882_01/server.112/e16638/autostat.htm#CHDHBGJD
metric baseline http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/10g/r2/metric_baselines.viewlet/metric_baselines_viewlet_swf.html
create baseline http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r2/11gr2_baseline/11gr2_baseline_viewlet_swf.html
OEM system monitoring http://www.oracle.com/webfolder/technetwork/tutorials/obe/em/emgc10gr2/quick_start/system_monitoring/system_monitoring.htm
**Creating the Monitoring Template
**Creating the User-Defined Metrics
**Setting the Metric Baseline
SQL baseline http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/10g/r2/sql_baseline.viewlet/sql_baseline_viewlet_swf.html
''Proactive Database Monitoring'' http://docs.oracle.com/cd/B28359_01/server.111/b28301/montune001.htm
''15 User-Defined Metrics'' http://docs.oracle.com/cd/B16240_01/doc/em.102/e10954/udm2.htm
''3 Cluster Database'' http://docs.oracle.com/cd/B19306_01/em.102/b25986/rac_database.htm
http://oracledoug.com/serendipity/index.php?/archives/1302-Oracle-Workload-Metrics.html
http://oracledoug.com/serendipity/index.php?/archives/1470-Time-Matters-Throughput-vs.-Response-Time.html
http://docs.oracle.com/cd/B14099_19/manage.1012/b16241/Monitoring.htm#sthref333
http://carymillsap.blogspot.com/2008/12/performance-as-service-part-2.html
-- notes and ideas about R2 and adaptive thresholds
[img[ https://lh5.googleusercontent.com/-JNRShrEzpiQ/T4W3xfgGzII/AAAAAAAABiY/VBPKaiA-zus/s800/AdaptiveThresholds.JPG ]]
http://en.wikipedia.org/wiki/Control_chart
http://en.wikipedia.org/wiki/Exponential_smoothing
http://www.sciencedirect.com/science/article/pii/S0169207003001134
http://prodlife.wordpress.com/2013/10/14/control-charts/
http://kerryosborne.oracle-guy.com/2009/06/oracle-11g-adaptive-cursor-sharing-acs/
http://aychin.wordpress.com/2011/04/04/adaptive-cursor-sharing-and-spm/
Adaptive Cursor Sharing: Worked Example [ID 836256.1]
https://blogs.oracle.com/optimizer/explain-adaptive-cursor-sharing-behavior-with-cursorsharing-similar-and-force
https://oracle.readthedocs.io/en/latest/plsql/bind/adaptive-cursor-sharing.html
<<showtoc>>
! 11gR2
http://jarneil.wordpress.com/2010/11/05/11gr2-database-services-and-instance-shutdown/ <-- 11gR2 version..
http://pat98.tistory.com/531 <-- good stuff, well-explained difference between admin-managed and policy-managed services
{{{
srvctl add service -d RACDB -s <SERVICE NAME HERE> -preferred RACDB1,RACDB2 -clbgoal short -rlbgoal SERVICE_TIME
}}}
! Create Service PDB
!! Adding a Service to a PDB in RAC
{{{
srvctl add service -db RAC -service MYSVC -preferred RAC1,RAC2 -tafpolicy BASIC -clbgoal SHORT -rlbgoal SERVICE_TIME -pdb PDB
}}}
https://hemantoracledba.blogspot.com/2017/04/12cr1-rac-posts-9-adding-service-to-pdb.html?m=1
https://docs.oracle.com/database/121/RACAD/GUID-15576271-E204-4ABD-961B-09876762EBF4.htm#RACAD5047
https://github.com/karlarao/OracleScheduledNodeAllocationTAF
https://karlarao.github.io/karlaraowiki/index.html#%5B%5BSRVCTL%20useful%20commands%5D%5D
..
run this to sort the trace files by modification time and spot the most recent occurrence of the error
{{{
find . -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort
}}}
files to look out for
{{{
Agent Log and Trace files
Note: if there are multiple Agents experiencing problems, the files must be uploaded for each Agent.
From $ORACLE_HOME/sysman/log/*.* directory for a single agent.
From $ORACLE_HOME/host/sysman/log/*.* for a RAC agent.
The files are:
emagent.nohup: Agent watchdog log file, Startup errors are recorded in this file.
emagent.log: Main agent log file
emagent.trc: Main agent trace file
emagentfetchlet.log: Log file for Java Fetchlets
emagentfetchlet.trc: Trace file for Java Fetchlets
<OMS_HOME>/sysman/log/emoms.trc
<OMS_HOME>/sysman/log/emoms.log
}}}
output below
{{{
2011-06-28 10:20:11 ./sysman/emd/state/0005.dlt
2011-06-28 10:20:11 ./sysman/emd/state/snapshot
2011-06-28 10:26:00 ./sysman/emd/cputrack/emagent_11747_2011-06-28_10-26-00_cpudiag.trc
2011-06-28 10:26:00 ./sysman/log/emctl.log
2011-06-28 10:28:43 ./sysman/emd/upload/EM_adaptive_thresholds.dat
2011-06-28 10:30:32 ./sysman/emd/state/parse-log-3CBBC0C79ED9B7E65B93EAC0D7457308
2011-06-28 10:30:39 ./sysman/emd/upload/mgmt_db_hdm_metric_helper.dat
2011-06-28 10:30:54 ./sysman/emd/upload/rawdata8.dat
2011-06-28 10:31:05 ./sysman/emd/state/adr/141DB5270B29BDF93743E123C2DF1231.alert.log.xml.state
2011-06-28 10:32:13 ./sysman/emd/state/adr/C12313AF3162E92001DE7952A752106A.alert.log.xml.state
2011-06-28 10:32:37 ./sysman/emd/upload/mgmt_ha_mttr.dat
2011-06-28 10:32:51 ./sysman/emd/upload/rawdata3.dat
2011-06-28 10:33:08 ./sysman/emd/state/adr/5A9DF4683EEF44F8898ABA391E70D194.alert.log.xml.state
2011-06-28 10:33:08 ./sysman/emd/upload/rawdata5.dat
2011-06-28 10:33:54 ./sysman/emd/agntstmp.txt
2011-06-28 10:33:55 ./sysman/emd/upload/rawdata0.dat
2011-06-28 10:34:06 ./sysman/emd/upload/rawdata9.dat
2011-06-28 10:34:09 ./sysman/emd/upload/rawdata2.dat
2011-06-28 10:34:21 ./sysman/emd/upload/rawdata4.dat
2011-06-28 10:34:25 ./sysman/emd/state/3CBBC0C79ED9B7E65B93EAC0D7457308.alerttd01db01.log
2011-06-28 10:34:25 ./sysman/emd/state/progResUtil.log
2011-06-28 10:34:27 ./sysman/emd/upload/rawdata7.dat
2011-06-28 10:34:28 ./sysman/log/emagent.trc
2011-06-28 09:51:42,537 Thread-1118013760 ERROR recvlets.aq: duplicate registration of metric Recovery_Area for target dbm rac_database
2011-06-28 09:51:42,537 Thread-1118013760 ERROR recvlets.aq: Unable to add metric Recovery_Area to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:42,537 Thread-1118013760 ERROR recvlets: Error adding metric Recovery_Area, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:42,538 Thread-1118013760 ERROR recvlets.aq: duplicate registration of metric Snap_Shot_Too_Old for target dbm rac_database
2011-06-28 09:51:42,538 Thread-1118013760 ERROR recvlets.aq: Unable to add metric Snap_Shot_Too_Old to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:42,538 Thread-1118013760 ERROR recvlets: Error adding metric Snap_Shot_Too_Old, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:42,538 Thread-1118013760 ERROR recvlets.aq: duplicate registration of metric WCR for target dbm rac_database
2011-06-28 09:51:42,538 Thread-1118013760 ERROR recvlets.aq: Unable to add metric WCR to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:42,538 Thread-1118013760 ERROR recvlets: Error adding metric WCR, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:42,539 Thread-1118013760 ERROR recvlets.aq: duplicate registration of metric wrc_client for target dbm rac_database
2011-06-28 09:51:42,539 Thread-1118013760 ERROR recvlets.aq: Unable to add metric wrc_client to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:42,539 Thread-1118013760 ERROR recvlets: Error adding metric wrc_client, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:42,540 Thread-1118013760 WARN recvlets.aq: [oracle_database dbm_dbm1] deferred nmevqd_refreshState for dbm rac_database
2011-06-28 09:51:42,540 Thread-1118013760 WARN upload: Upload manager has no Failure script: disabled
2011-06-28 09:51:48,569 Thread-1136912704 WARN collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:51:48,571 Thread-1136912704 WARN collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:51:48,575 Thread-1136912704 ERROR recvlets.aq: duplicate registration of metric problemTbsp for target dbm rac_database
2011-06-28 09:51:48,575 Thread-1136912704 ERROR recvlets.aq: Unable to add metric problemTbsp to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:48,575 Thread-1136912704 ERROR recvlets: Error adding metric problemTbsp, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets.aq: duplicate registration of metric Suspended_Session for target dbm rac_database
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets.aq: Unable to add metric Suspended_Session to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets: Error adding metric Suspended_Session, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets.aq: duplicate registration of metric Recovery_Area for target dbm rac_database
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets.aq: Unable to add metric Recovery_Area to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets: Error adding metric Recovery_Area, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets.aq: duplicate registration of metric Snap_Shot_Too_Old for target dbm rac_database
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets.aq: Unable to add metric Snap_Shot_Too_Old to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets: Error adding metric Snap_Shot_Too_Old, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:48,577 Thread-1136912704 ERROR recvlets.aq: duplicate registration of metric WCR for target dbm rac_database
2011-06-28 09:51:48,577 Thread-1136912704 ERROR recvlets.aq: Unable to add metric WCR to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:48,577 Thread-1136912704 ERROR recvlets: Error adding metric WCR, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:48,577 Thread-1136912704 ERROR recvlets.aq: duplicate registration of metric wrc_client for target dbm rac_database
2011-06-28 09:51:48,577 Thread-1136912704 ERROR recvlets.aq: Unable to add metric wrc_client to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:48,577 Thread-1136912704 ERROR recvlets: Error adding metric wrc_client, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:48,578 Thread-1136912704 WARN recvlets.aq: [oracle_database dbm_dbm1] deferred nmevqd_refreshState for dbm rac_database
2011-06-28 09:51:48,579 Thread-1136912704 WARN upload: Upload manager has no Failure script: disabled
2011-06-28 09:51:48,615 Thread-1136912704 WARN collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:51:48,617 Thread-1136912704 WARN recvlets.aq: [oracle_database dbm_dbm1] deferred nmevqd_refreshState for dbm rac_database
2011-06-28 09:52:03,663 Thread-1130613056 WARN collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:52:03,669 Thread-1130613056 WARN collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:52:03,675 Thread-1130613056 WARN collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:52:03,678 Thread-1130613056 WARN collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:52:03,690 Thread-1130613056 WARN recvlets.aq: [rac_database dbm] deferred nmevqd_refreshState for dbm rac_database
2011-06-28 09:52:03,691 Thread-1130613056 WARN upload: Upload manager has no Failure script: disabled
2011-06-28 09:54:21,234 Thread-1136912704 ERROR vpxoci: ORA-03113: end-of-file on communication channel
2011-06-28 09:59:52,513 Thread-1146362176 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 09:59:52,513 Thread-1146362176 WARN Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 09:59:52,513 Thread-1146362176 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:00:41,308 Thread-1084578112 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:00:41,309 Thread-1084578112 WARN Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:00:41,309 Thread-1084578112 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:00:54,251 Thread-1146362176 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:00:54,251 Thread-1146362176 WARN Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:00:54,252 Thread-1146362176 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:01:11,931 Thread-1121163584 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:01:11,931 Thread-1121163584 WARN Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:01:11,932 Thread-1121163584 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:01:53,036 Thread-1130613056 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:01:53,036 Thread-1130613056 WARN Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:01:53,036 Thread-1130613056 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:34:28,828 Thread-1130613056 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:34:28,828 Thread-1130613056 WARN Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:34:28,828 Thread-1130613056 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:20:11 ./sysman/emd/state/snapshot
2011-06-28 10:26:00 ./sysman/emd/cputrack/emagent_11747_2011-06-28_10-26-00_cpudiag.trc
2011-06-28 10:26:00 ./sysman/log/emctl.log
2011-06-28 10:28:43 ./sysman/emd/upload/EM_adaptive_thresholds.dat
2011-06-28 10:30:32 ./sysman/emd/state/parse-log-3CBBC0C79ED9B7E65B93EAC0D7457308
2011-06-28 10:30:54 ./sysman/emd/upload/rawdata8.dat
2011-06-28 10:32:13 ./sysman/emd/state/adr/C12313AF3162E92001DE7952A752106A.alert.log.xml.state
2011-06-28 10:32:51 ./sysman/emd/upload/rawdata3.dat
2011-06-28 10:33:08 ./sysman/emd/state/adr/5A9DF4683EEF44F8898ABA391E70D194.alert.log.xml.state
2011-06-28 10:34:06 ./sysman/emd/upload/rawdata9.dat
2011-06-28 10:34:25 ./sysman/emd/state/3CBBC0C79ED9B7E65B93EAC0D7457308.alerttd01db01.log
2011-06-28 10:34:25 ./sysman/emd/state/progResUtil.log
2011-06-28 10:34:27 ./sysman/emd/upload/rawdata7.dat
2011-06-28 10:35:17 ./sysman/emd/upload/mgmt_ha_mttr.dat
2011-06-28 10:35:21 ./sysman/emd/upload/rawdata5.dat
2011-06-28 10:35:58 ./sysman/emd/upload/rawdata2.dat
2011-06-28 10:36:05 ./sysman/emd/state/adr/141DB5270B29BDF93743E123C2DF1231.alert.log.xml.state
2011-06-28 10:36:28 ./sysman/emd/upload/mgmt_db_hdm_metric_helper.dat
2011-06-28 10:36:32 ./sysman/emd/upload/rawdata4.dat
2011-06-28 10:36:37 ./sysman/log/emagent.trc
2011-06-28 10:36:51 ./sysman/emd/upload/rawdata0.dat
2011-06-28 10:36:54 ./sysman/emd/agntstmp.txt
[td01db01:oracle:dbm1] /home/oracle
> dcli -l oracle -g dbs_group id oracle
td01db01: uid=500(oracle) gid=500(oinstall) groups=500(oinstall),101(fuse),501(dba)
td01db02: uid=500(oracle) gid=500(oinstall) groups=500(oinstall),101(fuse),501(dba)
td01db03: uid=500(oracle) gid=500(oinstall) groups=500(oinstall),101(fuse),501(dba)
td01db04: uid=500(oracle) gid=500(oinstall) groups=500(oinstall),101(fuse),501(dba)
[td01db01:oracle:dbm1] /home/oracle
>
[td01db01:oracle:dbm1] /home/oracle
> dcli -l oracle -g dbs_group ls -l /u01/app/oracle/product/grid/agent11g/bin/nmo
td01db01: -rwxr-xr-x 1 oracle oinstall 32872 Jun 22 17:02 /u01/app/oracle/product/grid/agent11g/bin/nmo
td01db02: -rws--x--- 1 root oinstall 32872 Jun 22 16:07 /u01/app/oracle/product/grid/agent11g/bin/nmo
td01db03: -rws--x--- 1 root oinstall 32872 Jun 22 16:14 /u01/app/oracle/product/grid/agent11g/bin/nmo
td01db04: -rws--x--- 1 root oinstall 32872 Jun 22 16:14 /u01/app/oracle/product/grid/agent11g/bin/nmo
2011-06-28 10:01:53,036 Thread-1130613056 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:34:28,828 Thread-1130613056 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:34:28,828 Thread-1130613056 WARN Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:34:28,828 Thread-1130613056 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:36:37,862 Thread-1130613056 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:36:37,862 Thread-1130613056 WARN Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:36:37,863 Thread-1130613056 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
-rwxr-xr-x 1 oracle oinstall 5985 Jun 22 16:14 owm | -rwxr-xr-x 1 oracle oinstall 5985 Jun 22 17:02 owm
-rwxr-xr-x 1 oracle oinstall 2994 Jun 22 16:14 orapki | -rwxr-xr-x 1 oracle oinstall 2994 Jun 22 17:02 orapki
-rwxr-xr-x 1 oracle oinstall 2680 Jun 22 16:14 mkstore | -rwxr-xr-x 1 oracle oinstall 2680 Jun 22 17:02 mkstore
-rwxr-xr-x 1 oracle oinstall 2326 Jun 22 16:14 bndlchk | -rwxr-xr-x 1 oracle oinstall 2326 Jun 22 17:02 bndlchk
-rwxr-xr-x 1 oracle oinstall 3602 Jun 22 16:14 umu | -rwxr-xr-x 1 oracle oinstall 3602 Jun 22 17:02 umu
-rwxr-xr-x 1 oracle oinstall 1641 Jun 22 16:14 eusm | -rwxr-xr-x 1 oracle oinstall 1641 Jun 22 17:02 eusm
-rwxr-xr-x 1 oracle oinstall 60783 Jun 22 16:14 chronos_se | -rwxr-xr-x 1 oracle oinstall 60783 Jun 22 17:02 chronos_se
-rwxr-xr-x 1 oracle oinstall 1551 Jun 22 16:14 chronos_se | -rwxr-xr-x 1 oracle oinstall 1551 Jun 22 17:02 chronos_se
-rwxr-x--x 1 oracle oinstall 19217 Jun 22 16:14 tnsping | -rwxr-x--x 1 oracle oinstall 19217 Jun 22 17:02 tnsping
-rwxr-x--x 1 oracle oinstall 418787 Jun 22 16:14 wrc | -rwxr-x--x 1 oracle oinstall 418787 Jun 22 17:02 wrc
-rwxr-x--x 1 oracle oinstall 25297 Jun 22 16:14 adrci | -rwxr-x--x 1 oracle oinstall 25297 Jun 22 17:02 adrci
-rwxr-x--x 1 oracle oinstall 16793110 Jun 22 16:14 rmanO | -rwxr-x--x 1 oracle oinstall 16793110 Jun 22 17:02 rmanO
-rwxr-xr-x 1 oracle oinstall 227069 Jun 22 16:14 ojmxtool | -rwxr-xr-x 1 oracle oinstall 227069 Jun 22 17:02 ojmxtool
-rwxr-xr-x 1 oracle oinstall 26061 Jun 22 16:14 nmupm | -rwxr-xr-x 1 oracle oinstall 26061 Jun 22 17:02 nmupm
-rwxr-xr-x 1 oracle oinstall 84093 Jun 22 16:14 nmei | -rwxr-xr-x 1 oracle oinstall 84093 Jun 22 17:02 nmei
-rwx------ 1 oracle oinstall 112352 Jun 22 16:14 emdctl | -rwx------ 1 oracle oinstall 112352 Jun 22 17:02 emdctl
-rwxr-xr-x 1 oracle oinstall 37130 Jun 22 16:14 emagtmc | -rwxr-xr-x 1 oracle oinstall 54596 Jun 22 17:02 emagtm
-rwxr-xr-x 1 oracle oinstall 54596 Jun 22 16:14 emagtm | -rwx------ 1 oracle oinstall 15461 Jun 22 17:02 emagent
-rwx------ 1 oracle oinstall 15461 Jun 22 16:14 emagent | -rwxr-xr-x 1 oracle oinstall 656 Jun 22 17:02 commonenv.
-rwxr-xr-x 1 oracle oinstall 656 Jun 22 16:14 commonenv. | -rwx------ 1 oracle oinstall 347 Jun 22 17:02 opmnassoci
-rwx------ 1 oracle oinstall 347 Jun 22 16:14 opmnassoci | -rwxr-xr-x 1 oracle oinstall 2934 Jun 22 17:02 onsctl.opm
-rwxr-xr-x 1 oracle oinstall 2934 Jun 22 16:14 onsctl.opm | -rwxr-xr-x 1 oracle oinstall 484287 Jun 22 17:02 nmosudo
-rwxr-xr-x 1 oracle oinstall 484287 Jun 22 16:14 nmosudo | -rwxr-xr-x 1 oracle oinstall 24725 Jun 22 17:02 nmocat
-rwxr-xr-x 1 oracle oinstall 24725 Jun 22 16:14 nmocat | -rwxr-xr-x 1 oracle oinstall 32872 Jun 22 17:02 nmo.0
-rwxr-xr-x 1 oracle oinstall 32872 Jun 22 16:14 nmo.0 | -rwxr-xr-x 1 oracle oinstall 32872 Jun 22 17:02 nmo
-rws--x--- 1 root oinstall 32872 Jun 22 16:14 nmo | -rwxr-xr-x 1 oracle oinstall 58483 Jun 22 17:02 nmhs.0
-rwxr-xr-x 1 oracle oinstall 58483 Jun 22 16:14 nmhs.0 | -rwxr-xr-x 1 oracle oinstall 58483 Jun 22 17:02 nmhs
-rws--x--- 1 root oinstall 58483 Jun 22 16:14 nmhs | -rwxr-xr-x 1 oracle oinstall 22746 Jun 22 17:02 nmb.0
-rwxr-xr-x 1 oracle oinstall 22746 Jun 22 16:14 nmb.0 | -rwxr-xr-x 1 oracle oinstall 22746 Jun 22 17:02 nmb
-rws--x--- 1 root oinstall 22746 Jun 22 16:14 nmb | -rwsr-s--- 1 oracle oinstall 76234 Jun 22 17:02 emtgtctl2
-rwsr-s--- 1 oracle oinstall 76234 Jun 22 16:14 emtgtctl2 | -rwxr-xr-x 1 oracle oinstall 3895446 Jun 22 17:02 emsubagent
-rwxr-xr-x 1 oracle oinstall 3895446 Jun 22 16:14 emsubagent | -rwxr-xr-x 1 oracle oinstall 37130 Jun 22 17:02 emagtmc
-rwxr-xr-x 1 oracle oinstall 3031365 Jun 22 16:14 e2eme | -rwxr-xr-x 1 oracle oinstall 3031365 Jun 22 17:02 e2eme
-rwx------ 1 oracle oinstall 1634 Jun 22 16:14 dmstool | -rwx------ 1 oracle oinstall 1634 Jun 22 17:02 dmstool
-rwxr-xr-x 1 oracle oinstall 2639 Jun 22 16:14 db2gc | -rwxr-xr-x 1 oracle oinstall 2639 Jun 22 17:02 db2gc
-rwxr-xr-x 1 oracle oinstall 5258 Jun 22 16:14 emutil | -rwxr-xr-x 1 oracle oinstall 5258 Jun 22 17:02 emutil
-rwxr-xr-x 1 oracle oinstall 1516 Jun 22 16:14 emtgtctl | -rwxr-xr-x 1 oracle oinstall 1516 Jun 22 17:02 emtgtctl
-rwx------ 1 oracle oinstall 19063 Jun 22 16:14 emctl.pl | -rwx------ 1 oracle oinstall 19063 Jun 22 17:02 emctl.pl
-rwxr--r-- 1 oracle oinstall 14476 Jun 22 16:14 emctl | -rwxr--r-- 1 oracle oinstall 14476 Jun 22 17:02 emctl
-rwxr-xr-x 1 oracle oinstall 641 Jun 22 16:14 commonenv | -rwxr-xr-x 1 oracle oinstall 641 Jun 22 17:02 commonenv
-rwxr-xr-x 1 oracle oinstall 701 Jun 22 16:14 agentca | -rwxr-xr-x 1 oracle oinstall 701 Jun 22 17:02 agentca
-rwxr-x--x 1 oracle oinstall 16792553 Jun 22 16:14 rman | -rwxr-x--x 1 oracle oinstall 16792553 Jun 22 17:03 rman
}}}
Grid Control Target Maintenance: Steps to Diagnose Issues Related to "Agent Unreachable" Status [ID 271126.1]
In Grid Control Receiving Agent Unreachable Notification Emails Very Often After 10.2.0.4 Agent Upgrade [ID 752296.1]
https://blogs.oracle.com/db/entry/oracle_support_master_note_for_10g_grid_control_enterprise_manager_communication_and_upload_issues_d
* Tagging search solution design Advanced edition https://www.slideshare.net/AlexanderTokarev4/tagging-search-solution-design-advanced-edition <- GOOD STUFF
* Faceted search with Oracle InMemory option https://www.slideshare.net/AlexanderTokarev4/faceted-search-with-oracle-inmemory-option
* P9 speed of-light faceted search via oracle in-memory option by alexander tokarev https://www.slideshare.net/AlexanderTokarev4/p9-speed-oflight-faceted-search-via-oracle-inmemory-option-by-alexander-tokarev
Oracle json caveats https://www.slideshare.net/AlexanderTokarev4/oracle-json-caveats
...
http://wikis.sun.com/display/Performance/Aligning+Flash+Modules+for+Optimal+Performance
http://blogs.oracle.com/lisan/entry/io_sizes_and_alignments_with
! console
console.aws.amazon.com
! documentation
https://docs.aws.amazon.com/index.html
http://guyharrison.squarespace.com/blog/2011/6/8/a-first-look-at-oracle-on-amazon-rds.html
High perf IOPS on AWS http://aws.typepad.com/aws/2012/09/new-high-performance-provisioned-iops-amazon-rds.html
service dashboard status http://status.aws.amazon.com/
''a Systematic Look at EC2 I/O'' http://blog.scalyr.com/2012/10/16/a-systematic-look-at-ec2-io/
''EC2 compute units'' http://gevaperry.typepad.com/main/2009/03/figuring-out-the-roi-of-infrastructureasaservice.html, http://stackoverflow.com/questions/4849723/a-question-about-amazon-ec2-compute-units
! official doc
data warehousing guide - 19 SQL for Analysis and Reporting
https://docs.oracle.com/en/database/oracle/oracle-database/19/dwhsg/sql-analysis-reporting-data-warehouses.html#GUID-20EFBF1E-F79D-4E4A-906C-6E496EECA684
https://docs.oracle.com/en/database/oracle/oracle-database/19/dwhsg/sql-analysis-reporting-data-warehouses.html#GUID-D6AC065D-670A-40E8-8DA0-E90A7307CFC2
SQL Language Reference - Analytic Functions
https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/Analytic-Functions.html#GUID-527832F7-63C0-4445-8C16-307FA5084056
https://docs.oracle.com/en/database/oracle/oracle-database/18/sqlrf/Analytic-Functions.html#GUID-527832F7-63C0-4445-8C16-307FA5084056
{{{
Analytic functions are commonly used in data warehousing environments. In the list of analytic functions that follows,
functions followed by an asterisk (*) allow the full syntax, including the windowing_clause.
AVG *
CLUSTER_DETAILS
CLUSTER_DISTANCE
CLUSTER_ID
CLUSTER_PROBABILITY
CLUSTER_SET
CORR *
COUNT *
COVAR_POP *
COVAR_SAMP *
CUME_DIST
DENSE_RANK
FEATURE_DETAILS
FEATURE_ID
FEATURE_SET
FEATURE_VALUE
FIRST
FIRST_VALUE *
LAG
LAST
LAST_VALUE *
LEAD
LISTAGG
MAX *
MIN *
NTH_VALUE *
NTILE
PERCENT_RANK
PERCENTILE_CONT
PERCENTILE_DISC
PREDICTION
PREDICTION_COST
PREDICTION_DETAILS
PREDICTION_PROBABILITY
PREDICTION_SET
RANK
RATIO_TO_REPORT
REGR_ (Linear Regression) Functions *
ROW_NUMBER
STDDEV *
STDDEV_POP *
STDDEV_SAMP *
SUM *
VAR_POP *
VAR_SAMP *
VARIANCE *
}}}
https://oracle-base.com/articles/sql/articles-sql#analytic-functions
https://oracle-base.com/articles/misc/avg-and-median-analytic-functions
https://leetcode.com/problems/median-employee-salary/
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/124168274-ea7b2d00-da72-11eb-9645-8246cfebf122.png ]]
Find Answers Faster
By Jonathan Gennick and Anthony Molinaro
http://www.oracle.com/technology/oramag/oracle/05-mar/o25dba.html
LAG
http://www.appsdba.com/blog/?p=383
CAST function
http://www.oracle.com/technetwork/database/focus-areas/manageability/diag-pack-ow08-131537.pdf
http://psoug.org/reference/cast.html
SQL – RANK, MAX Analytical Functions, DECODE, SIGN
http://hoopercharles.wordpress.com/2009/12/26/sql-–-rank-max-analytical-functions-decode-sign/
RANK - first and last
https://oracle-base.com/articles/misc/rank-dense-rank-first-last-analytic-functions#first_and_last
https://stackoverflow.com/questions/40404497/select-latest-row-for-each-group-from-oracle
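A minimal sketch of the "latest row per group" pattern from the links above, using ROW_NUMBER (the emp table and its columns are hypothetical):
{{{
-- one row per deptno: the most recently hired employee
SELECT *
  FROM (SELECT e.*,
               ROW_NUMBER() OVER (PARTITION BY e.deptno
                                  ORDER BY e.hiredate DESC) rn
          FROM emp e)
 WHERE rn = 1;
}}}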
CONNECT BY - hierarchical queries
https://www.linkedin.com/pulse/step-by-step-guide-creating-sql-hierarchical-queries-bibhas-mitra/
http://www.slideshare.net/hamcdc/sep13-analytics
http://www.odtug.com/p/cm/ld/fid=65&tid=35&sid=972
http://www.amazon.com/Window-Functions-SQL-Jonathan-Gennick-ebook/dp/B006YITKJO/ref=sr_1_2?ie=UTF8&qid=1385753351&sr=8-2&keywords=window+functions+in+sql
http://gennick.com/database/?tag=WindowSS
Analytic Functions in Oracle 8i Srikanth Bellamkonda http://infolab.stanford.edu/infoseminar/archive/SpringY2000/speakers/agupta/paper.pdf
Enhanced subquery optimizations in Oracle http://www.vldb.org/pvldb/2/vldb09-423.pdf
Analytic SQL in 12c http://www.oracle.com/technetwork/database/bi-datawarehousing/wp-in-database-analytics-12c-2132656.pdf
Adaptive and big data scale parallel execution in oracle http://dl.acm.org/citation.cfm?id=2536235
..
https://forums.oracle.com/forums/thread.jspa?threadID=2220970
''analyze table sysadm.PSOPRDEFN validate structure cascade online ; ''
''andrew ng''
publications http://cs.stanford.edu/people/ang/?page_id=414
http://en.wikipedia.org/wiki/Andrew_Ng
http://cs.stanford.edu/people/ang/
http://creiley.wordpress.com/
https://www.coursera.org/course/ml
Oracle Clusterware and Application Failover Management [ID 790189.1]
Application Management http://www.oracle.com/technetwork/oem/app-mgmt/app-mgmt-084358.html
http://onlineappsdba.com/index.php/2010/08/30/time-out-while-waiting-for-a-managed-process-to-stop-http_server/
cman http://arup.blogspot.com/2011/08/setting-up-oracle-connection-manager.html
Database Resident Connection Pool (drcp) http://www.oracle-base.com/articles/11g/database-resident-connection-pool-11gr1.php
[img(50%,50%)[ https://lh6.googleusercontent.com/-TEaGT5fnFH0/UZpDd8TgAaI/AAAAAAAAB7A/EqsT3qE_WLg/w599-h798-no/timfoxconnectionpool.JPG ]]
[img(50%,50%)[ https://lh3.googleusercontent.com/-7PfskV3MC1o/UZpHKolfKeI/AAAAAAAAB7w/wvj7c22xHWk/w458-h610-no/timfoxconnectionpool2.JPG ]]
{{{
http://www.oracle.com/technology/software/products/ias/files/ha-certification.html
How to Obtain Pre-Requisites for Oracle Application Server 10g Installation
Doc ID: Note:433077.1
Oracle Application Server 10g Release 3 (10.1.3) Support Status and Alerts
Doc ID: Note:397022.1
How to Find Certification Details for Oracle Application Server 10g
Doc ID: Note:431578.1
How to Verify 9iAS Release 2 (9.0.2) Components
Doc ID: Note:226187.1
What is a 9iAS (9.0.2) Farm
Doc ID: Note:218038.1
What is a 9iAS (9.0.2) Cluster
Doc ID: Note:218039.1
Steps to Maintain Oracle Application Server 10g Release 2 (10.1.2)
Doc ID: Note:415222.1
Subject: Installing Oracle Application Server 10g with Oracle E-Business Suite Release 11i
Doc ID: Note:233436.1
Oracle Application Server 10g Release 2 (10.1.2) Support Status and Alerts
Doc ID: Note:329361.1
Oracle Application Server 10g Examples for Critical Patch Updates
Doc ID: Note:405972.1
Using Oracle Applications with a Split Configuration Database Tier on Oracle 10g Release 2
Doc ID: Note:369693.1
Using Oracle Applications with a Split Configuration Database Tier on Oracle 10g Release 1
Doc ID: Note:356839.1
How to Obtain Pre-Requisites for Oracle Application Server 10g Installation
Doc ID: Note:433077.1
Oracle Application Server with Oracle E-Business Suite Release 11i FAQ
Doc ID: Note:186981.1
Oracle Server - Export Data Pump and Import DataPump FAQ
Doc ID: Note:556636.1
Oracle E-Business Suite Release 11i Technology Stack Documentation Roadmap
Doc ID: Note:207159.1
10g Release 2 Export/Import Process for Oracle Applications Release 11i
Doc ID: Note:362205.1
Oracle Application Server with Oracle E-Business Suite Release 12 FAQ
Doc ID: Note:415007.1
About Oracle E-Business Suite Applied Technology Family Pack ATG_PF.H
Doc ID: Note:284086.1
Oracle Applications Documentation Resources, Release 12
Doc ID: Note:394692.1
https://metalink.oracle.com/metalink/plsql/f?p=130:14:491566816839019350::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,461709.1,1,1,1,helvetica
Implement, Upgrade and Optimize > Upgrade Guide > Oracle E-Business Suite Upgrade Resource > Oracle E-Business Suite Upgrade Resource Plan
Globalization Guide for Oracle Applications Release 12
Doc ID: Note:393861.1
Oracle Applications Release 12 Technology Stack Documentation Resources
Doc ID: Note:396957.1
Oracle E-Business Suite Release 12 Technology Stack Documentation Roadmap
Doc ID: Note:380482.1
How to Migrate OAS 4.x Applications to 9iAS Release 1 (1.0.2)
Doc ID: Note:122826.1
Disaster Recovery Setup: Middle Tier and Collocated Infrastructure on the Same Server
Doc ID: Note:420824.1
Understanding OracleAS 10g High Availability - A Roadmap
Doc ID: Note:412159.1
What make and version of Cluster Managers are supported by Oracle in an OracleAS Cold Failover Cluster setup?
Doc ID: Note:303161.1
Examples of Building Highly Available, Highly Secure, Scalable OracleAS 10g Solutions
Doc ID: Note:435025.1
Storage Solutions for OracleAS 10g R2 and OracleAS 10g R3
Doc ID: Note:371251.1
9.0.2.0.1 documentation
http://download-uk.oracle.com/docs/cd/B10202_07/index.htm
Oracle9iAS Release 2 (9.0.3) Support Status and Alerts
Doc ID: Note:248328.1
Installation and Connection Issues with 9iAS 1.0.2.2 and 9i
Doc ID: Note:162843.1
9iAS Release 1 and Release 2 Install Options
Doc ID: Note:203509.1
Explanation of 9iAS Release 1 Installation Prompts
Doc ID: Note:158688.1
9iAS 1.0.2.2.2A Installation Hangs at 100% on Windows
Doc ID: Note:180418.1
Installing 9iAS Release 1 (1.0.2.2) and RDBMS 8.1.7 on the Same Windows Server
Doc ID: Note:170756.1
9iAS Release 1 (1.0.2.2) Installation Requirements Checklist for Linux
Doc ID: Note:158856.1
9iAS Release 1 (1.0.2.2) EE Installation Requirements Checklist (Microsoft Windows NT/2000)
Doc ID: Note:158863.1
ALERT: Windows NT/2000 - 9iAS v.1.0.2.2.1 Unsupported on Pentium 4
Doc ID: Note:136038.1
Checking 9iAS Release 1 Installation Requirements
Doc ID: Note:158634.1
Oracle9i Application Server (9iAS) 9.0.3.1 FAQ
Doc ID: Note:251781.1
--########## FORMS
The History and Methods of Running Oracle Forms Over The Web
Doc ID: Note:166640.1
Overview of Oracle Forms and Using the Oracle Forms Builder
Doc ID: Note:358712.1
Note 166640.1 - The History and Methods of Running Oracle Forms Over The Web
Note 2056834.6 - Does Oracle Support the Use of Emulators to Run Oracle Products?
Note 266541.1 - Patching Lifecycle / Strategy of Oracle Developer (Forms and Reports)
Note 299938.1 - Moving Forms Applications From One Platform To Another
Note 340215.1 - Required Support Files (RSF) in Oracle Forms and Reports
Note 68047.1 - Support of Terminal Emulators, Terminal Server ( e.g. Citrix) with Developer Tools
Note 73736.1 - Installing Developer on a LAN - Is This Supported?
Note 74145.1 - Developer Production and Patchset Version Numbers on MS Windows
How to Web Deploy Oracle Forms Using The Static HTML File Method?
Doc ID: Note:232371.1
Are Unix Clients Supported for Deploying Oracle Forms over the Web?
Doc ID: Note:266439.1
Changing the Oracle Password in Oracle Forms
Doc ID: Note:16365.1
Failed To Detect Change Window Password Of Oracle Forms 6
Doc ID: Note:563955.1
--########## JINITIATOR VERSIONS
oracle 9iR1 - 1.1.8.7
oracle10gr2 AS - 1.3.1.22
--########## PORTAL
Overview of the Portal Export-Import Process
Doc ID: Note:306785.1
Note 456456.1 How to Find the Oracle Application Server 10g Upgrade and Compatibility Guide
Note 433077.1 How to Obtain Pre-Requisites for Oracle Application Server 10g Installation
Note 431028.1 Oracle Fusion Middleware Support of IPv6
Note 429995.1 Is it Supported to Run OracleAS Components on Different Operating Systems and Versions?
Note 420210.1 What User Can Be Used to Perform the IAS Patches/Upgrades?
Note 412439.1 Can A Manually Managed Cluster Be Installed Across Windows And Unix/Linux?
Note 394525.1 How to Know If a New Patch is Released ?
Note 400134.1 How to force Oracle Installer to use Virtual Hostname When Installing an OracleAS Instance?
Note 302535.1 Can Oracle AS 10g Release 2 (10.1.2) Be Installed to Upgrade Forms, Reports and Portal 10g (9.0.4)?
Note 317085.1 OracleAS 10g (10.1.2) Installation Requirements for Linux Red Hat 4.0 / Oracle Enterprise Linux
-- 9.0.3
Oracle9iAS Release 2 (9.0.3) Support Status and Alerts
Doc ID: Note:248328.1
Installation and Connection Issues with 9iAS 1.0.2.2 and 9i
Doc ID: Note:162843.1
9iAS Release 1 and Release 2 Install Options
Doc ID: Note:203509.1
Explanation of 9iAS Release 1 Installation Prompts
Doc ID: Note:158688.1
9iAS Release 1 (1.0.2.2) Installation Requirements Checklist for Linux
Doc ID: Note:158856.1
9iAS Release 1 (1.0.2.2) EE Installation Requirements Checklist (Microsoft Windows NT/2000)
Doc ID: Note:158863.1
ALERT: Windows NT/2000 - 9iAS v.1.0.2.2.1 Unsupported on Pentium 4
Doc ID: Note:136038.1
Checking 9iAS Release 1 Installation Requirements
Doc ID: Note:158634.1
Oracle9i Application Server (9iAS) 9.0.3.1 FAQ
Doc ID: Note:251781.1
Unable to Bind to Server Machine After Install of Discoverer 4.1.37
Doc ID: Note:149678.1
-- HTTP SERVER
HTTP Server Intermittently Restarted By OPMN
Doc ID: 469720.1
Linux OS Service 'httpd'
Doc ID: 550870.1
Is There a Way to Increase the Maximum Value of ThreadsperChild on Windows?
Doc ID: 460443.1
Unable to Increase Value of Maxclients Above 256 in httpd.conf File
Doc ID: 149874.1
How Apache Works
Doc ID: 334763.1
OC4J_SECURITY Is Failing To Start After Problems With Database
Doc ID: Note:550631.1
-- TUNING / TROUBLESHOOTING
Troubleshooting Web Deployed Oracle Forms Performance Issues
Doc ID: 363285.1
Configurable Connection Limits in Application Server Components
Doc ID: 289908.1
-- AIX
Does OracleAS 10g Support AIX VIO Logical Partitioning (LPAR)?
Doc ID: Note:470083.1
-- EBUSINESS SUITE
Oracle Application Server with Oracle E-Business Suite Release 11i FAQ
Doc ID: Note:186981.1
}}}
{{{
col dest_name format a30
select inst_id, dest_name, status, error, gap_status from gV$ARCHIVE_DEST_STATUS;
SELECT name, free_mb, total_mb, free_mb/total_mb*100 "%" FROM v$asm_diskgroup;
set lines 100
col name format a60
select name, floor(space_limit / 1024 / 1024) "Size MB", ceil(space_used / 1024 / 1024) "Used MB"
from v$recovery_file_dest
order by name;
}}}
{{{
alter system set db_recovery_file_dest_size=<bigger size>;
archive log all;
crosscheck archivelog all;
list expired archivelog all;
delete expired archivelog all;
OR
delete archivelog all completed before 'sysdate-1';
}}}
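quick check of what is consuming the FRA before and after the cleanup (10g view name; 11.2 also exposes this as v$recovery_area_usage):
{{{
select file_type, percent_space_used, percent_space_reclaimable, number_of_files
from v$flash_recovery_area_usage;
}}}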
----------------------------------------------------
Archivelog Mode On RAC 10G, 11g
----------------------------------------------------
1) In Oracle 10.1, you cannot directly enable archive logging in a RAC database. Instead, you must temporarily convert your RAC database to a single-instance database to issue the command. First, change the CLUSTER_DATABASE parameter in the SPFILE to FALSE:
ALTER SYSTEM SET CLUSTER_DATABASE = FALSE SCOPE = SPFILE;
In Oracle 10.2 and 11g, you can run the ALTER DATABASE SQL statement to change the archiving mode in RAC as long as the database is mounted by the local instance but not open in any instances. You do not need to modify parameter settings to run this statement.
2) Set parameters
If you are using a filesystem do this:
alter system set log_archive_format='orcl_%t_%s_%r.arc' scope=spfile;
alter system set log_archive_dest_1 = 'LOCATION=/u03/flash_recovery_area/ORCL/archivelog' scope=both;
If you are using ASM do this:
alter system set log_archive_format='orcl_%t_%s_%r.arc' scope=spfile;
alter system set db_recovery_file_dest_size=800G scope=both;
alter system set db_recovery_file_dest='+RECOVERY_1' scope=both;
alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
3) shutdown the database
srvctl stop database -d RAC
4) Start a single instance using the following:
srvctl start instance -d RAC -i RAC1 -o mount
5) Enable archiving as follows:
ALTER DATABASE ARCHIVELOG;
6) In Oracle 10.1, Change the CLUSTER_DATABASE parameter in the SPFILE back to TRUE:
ALTER SYSTEM SET CLUSTER_DATABASE = TRUE SCOPE = SPFILE;
7) The next time the database is stopped and started, it will be a RAC database. Use the following command to stop the instance:
srvctl stop instance -d RAC -i RAC1
8) start the database
srvctl start database -d RAC
9) do other stuff:
-- Edit related parameters
alter system set control_file_record_keep_time=14;
alter database enable block change tracking using file '+RECOVERY_1/ORCL/orcl.bct';
-- Configure RMAN settings and related directories
on +RECOVERY_1... mkdir AUTOBACKUP BACKUPSET
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+RECOVERY_1/ORCL/AUTOBACKUP/%d-%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2G FORMAT '+RECOVERY_1/ORCL/BACKUPSET/%d-%T-%U';
CONFIGURE MAXSETSIZE TO UNLIMITED;
CONFIGURE ENCRYPTION FOR DATABASE OFF;
CONFIGURE ENCRYPTION ALGORITHM 'AES128';
CONFIGURE COMPRESSION ALGORITHM 'BZIP2'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+RECOVERY_1/ORCL/sncf_orcl.f';
List of directories on +RECOVERY_1:
Y ARCHIVELOG/
N AUTOBACKUP/
N BACKUPSET/
Y CHANGETRACKING/
Y CONTROLFILE/
----------------------------------------------------
Archivelog Mode On RAC 9i by ORACLE-BASE
----------------------------------------------------
This article highlights the differences between resetting the archive log mode on a single node instance and a Real Application Clusters (RAC).
On a single node instance the archive log mode is reset as follows:
ALTER SYSTEM SET log_archive_start=TRUE SCOPE=spfile;
ALTER SYSTEM SET log_archive_dest_1='location=/u01/oradata/MYSID/archive/' SCOPE=spfile;
ALTER SYSTEM SET log_archive_format='arch_%t_%s.arc' SCOPE=spfile;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ARCHIVE LOG START;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
The ALTER DATABASE ARCHIVELOG command can only be performed if the database is mounted in exclusive mode. This means the whole clustered database must be stopped before the operation can be performed. First we set the relevant archive parameters:
ALTER SYSTEM SET log_archive_start=TRUE SCOPE=spfile;
ALTER SYSTEM SET log_archive_dest_1='location=/u01/oradata/MYSID/archive/' SCOPE=spfile;
ALTER SYSTEM SET log_archive_format='arch_%t_%s.arc' SCOPE=spfile;
Since we need to mount the database in exclusive mode we must also alter the following parameter:
ALTER SYSTEM SET cluster_database=FALSE SCOPE=spfile;
From the command line we can stop the entire cluster using:
srvctl stop database -d MYSID
With the cluster down we can connect to a single node and issue the following commands:
STARTUP MOUNT;
ARCHIVE LOG START;
ALTER DATABASE ARCHIVELOG;
ALTER SYSTEM SET cluster_database=TRUE SCOPE=spfile;
SHUTDOWN IMMEDIATE;
Notice that the CLUSTER_DATABASE parameter has been reset to its original value. Since the datafiles and spfile are shared between all instances, this operation only has to be done from a single node.
From the command line we can now start the cluster again using:
srvctl start database -d MYSID
The current settings place all archive logs in the same directory. This is acceptable since the thread (%t) is part of the archive format, preventing any name conflicts between instances. If node-specific locations are required, the LOG_ARCHIVE_DEST_1 parameter can be repeated for each instance with the relevant SID prefix.
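a minimal sketch of that per-instance variant, assuming instances MYSID1 and MYSID2:
{{{
ALTER SYSTEM SET log_archive_dest_1='location=/u01/oradata/MYSID/archive1/' SCOPE=spfile SID='MYSID1';
ALTER SYSTEM SET log_archive_dest_1='location=/u01/oradata/MYSID/archive2/' SCOPE=spfile SID='MYSID2';
}}}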
Archiver Best Practices
Doc ID: Note:45042.1
http://www.linuxjournal.com/content/arduino-open-hardware-and-ide-combo
Python Meets the Arduino http://www.youtube.com/watch?v=54XwSUC8klI
http://makeprojects.com/Project/Arduino+and+Python%3A+Learn+Serial+Programming/667/1#.UKSnQYc70hU
http://www.arduino.cc/playground/interfacing/python
https://python.sys-con.com/node/2386200
http://designcodelearn.com/blog/2012/12/01/how-to-make-$10m-in-one-night/
https://levels.io/korea-4g/
https://www.arqbackup.com/features/
Amazon Glacier https://aws.amazon.com/glacier/
<<showtoc>>
Logical I/O(consistent get) and Arraysize relation with SQL*PLUS
http://tonguc.wordpress.com/2007/01/04/logical-ioconsistent-get-and-arraysize-relation-with-sqlplus/
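quick way to see the effect yourself (big_table is any reasonably large test table): consistent gets drop as arraysize goes up, because each fetch round trip revisits the current block:
{{{
set autotrace traceonly statistics
set arraysize 15
select * from big_table;   -- note the consistent gets count
set arraysize 500
select * from big_table;   -- fewer round trips, fewer consistent gets
}}}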
{{{
Master Note for Automatic Storage Management (ASM) [ID 1187723.1]
-- HOMEs COMPATIBILITY MATRIX
Note 337737.1 Oracle Clusterware - ASM - Database Version Compatibility
Note 363254.1 Applying one-off Oracle Clusterware patches in a mixed version home environment
-- BEST PRACTICE
ASM Technical Best Practices (Doc ID 265633.1)
-- SETUP
How To Setup ASM on Linux Using ASMLIB Disks, Raw Devices or Block Devices? [ID 580153.1] <-- mentions 10gR2 and 11gR2 configuration
Device Persistence and Oracle Linux ASMLib [ID 394959.1]
MOVING ORACLE_HOME
Doc ID: Note:28433.1
Recover database after disk loss
Doc ID: Note:230829.1
Doing Incomplete Recovery and Moving Redo Logs From Corrupted Disk
Doc ID: Note:77643.1
Cross-Platform Migration Using Rman Convert Database on Destination Host ( Windows 32-bit to Linux 32-bit )
Doc ID: Note:414878.1
How to recover and open the database if the archivelog required for recovery is either missing, lost or corrupted?
Doc ID: Note:465478.1
Recovering From A Lost Control File
Doc ID: Note:1014504.6
ORACLE V6 INSTALLATION PROCEDURES
Doc ID: Note:11196.1
-- TROUBLESHOOTING
How To Gather/Backup ASM Metadata In A Formatted Manner?
Doc ID: 470211.1
Troubleshooting a multi-node ASMLib installation (Doc ID 811457.1)
ASM is Unable to Detect ASMLIB Disks/Devices. (Doc ID 457369.1)
HOW TO MAP ASM FILES WITH ONLINE DATABASE FILES
Doc ID: 552082.1
-- TRACE DEVICES
How to identify exactly which disks on a SAN have been allocated to an ASM Diskgroup (Doc ID 398435.1)
How to map device name to ASMLIB disk (Doc ID 1098682.1)
-- IMBALANCE
Script to Report the Percentage of Imbalance in all Mounted Diskgroups (Doc ID 367445.1)
-- PERFORMANCE
Comparing ASM to Filesystem in benchmarks [ID 1153664.1]
File System's Buffer Cache versus Direct I/O [ID 462072.1]
question regarding "ASM Performance", version 10.2.0 http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2109833600346625821
http://kevinclosson.wordpress.com/2007/02/11/what-performs-better-direct-io-or-direct-io-there-is-no-such-thing-as-a-stupid-question/
http://www.freelists.org/post/oracle-l/filesystemio-options-setting,4
http://www.freelists.org/post/oracle-l/split-block-torn-page-problem,6
ASM Inherently Performs Asynchronous I/O Regardless of filesystemio_options Parameter [ID 751463.1]
--======================
-- ASM
--======================
Problems with ASM in 10gR2
Doc ID: Note:353065.1
Deployment of very large databases (10TB to PB range) with Automatic Storage Management (ASM)
Doc ID: Note:368055.1
ASMIOSTAT Script to collect iostats for ASM disks
Doc ID: Note:437996.1
How to copy a datafile from ASM to a file system not using RMAN
Doc ID: Note:428893.1
How to upgrade ASM instance from 10.1 to 10.2 (Single Instance)
Doc ID: Note:329987.1
Unable to startup ASM instance after OS kernel upgrade
Doc ID: Note:313833.1
How To Extract Datapump File From ASM Diskgroup To Local Filesystem?
Doc ID: Note:566941.1
How To Determinate If An EMCPOWER Partition Is Valid For ASMLIB?
Doc ID: Note:566676.1
HOW TO MAP ASM FILES WITH ONLINE DATABASE FILES
Doc ID: Note:552082.1
How To Add a New Disk(s) to An Existing Diskgroup on RAC (Best Practices).
Doc ID: Note:557348.1
Diagnosing Disk not getting discovered in ASM
Doc ID: Note:311926.1
How To Gather/Backup ASM Metadata In A Formatted Manner?
Doc ID: Note:470211.1
How To Move The Database To Different Diskgroup (Change Diskgroup Redundancy)
Doc ID: Note:438580.1
Tips On Installing and Using ASMLib on Linux
Doc ID: Note:394953.1
RHEL5 and ASMLib
Doc ID: Note:434775.1
Oracle Linux ASMLib README Documentation
Doc ID: Note:454035.1
ASM Using Files Instead of Real Devices on Linux
Doc ID: Note:266028.1
CHECKSUMS DIFFER FOR ASM DATAFILES WHEN COPIED USING XDB/FTP
Doc ID: Note:459819.1
How to rename/move a datafile in the same ASM diskgroup
Doc ID: Note:564993.1
How To Remove An Empty ASM System Directory
Doc ID: Note:444812.1
Database Instance Crashes In Case Of Path Offlined In Multipath Storage
Doc ID: Note:555371.1
How To Change ASM SYS PASSWORD ?
Doc ID: Note:452076.1
ASM Instances Are Not Mounted Consistently
Doc ID: Note:351114.1
How To Delete Archive Log Files Out Of +Asm?
Doc ID: Note:300472.1
ENABLE/DISABLE ARCHIVELOG MODE AND FLASH RECOVERY AREA IN A DATABASE USING ASM
Doc ID: Note:468984.1
Unable To Make Disks Available From Asmlib Using SAN
Doc ID: Note:302020.1
Oracle ASM and Multi-Pathing Technologies
Doc ID: Note:294869.1
How to rename ASM disks?
Doc ID: Note:418542.1
Does Asm Survive Change Of Disc Path?
Doc ID: Note:466231.1
Steps To Migrate/Move a Database From Non-ASM to ASM And Vice-Versa
Doc ID: Note:252219.1
Raw Devices and Cluster Filesystems With Real Application Clusters
Doc ID: Note:183408.1
How To Resize An ASM Disk On Release 10.2.0.X?
Doc ID: Note:470209.1
ASM Fast Mirror Resync - Example To Simulate Transient Disk Failure And Restore Disk
Doc ID: Note:443835.1
----------------------------------------------------------------------------------
Note:294869.1 Oracle ASM and Multi-Pathing Technologies
Note:461079.1 ASM does not discover disk(s) on AIX platform
Note:353761.1 Assigning a PVID To An Existing ASM Disk Corrupts the ASM Disk Header
Note:279353.1 Multiple 10g Oracle Home installation - ASM
Note:265633.1 ASM Technical Best Practices
Note:243245.1 10G New Storage Features and Enhancements
Note:282036.1 Minimum Software Versions and Patches Required to Support Oracle Products on IBM pSeries
Note:249992.1 New Feature on ASM (Automatic Storage Manager)
Note:252219.1 Steps To Migrate Database From Non-ASM to ASM And Vice-Versa
Note:303760.1 ASM & ASMlib Using Files Instead of Real Devices on Linux
Note:266028.1 ASM Using Files Instead of Real Devices on Linux
Note:471877.1 Raw Slice Not Showing Up When Trying To Add In Existing ASM Diskgroup
Note:551205.1 11g ASM New Features Technical White Paper
Note:402526.1 Asm Devices Are Still Held Open After Dismount or Drop
Note:452076.1 How To Change ASM SYS PASSWORD
Note:340277.1 How to connect to ASM instance from a remote client (SQL*NET)
Note:351866.1 How To Reclaim Asm Disk Space
Note:470573.1 How To Delete SPFILE in +ASM DISKGROUP And Recreate in $ORACLE_HOME Directory
Note:458419.1 How to Bind RAW devices to Physical Partitions on Linux to be used by ASM
Note:469082.1 How To Setup ASM (10.2) on Windows Platforms
Note:471055.1 OUI Complains That ASM Is Not Release 2 While Installing 10g Database
Note:390274.1 How to move a datafile from a file system to ASM
Note:460909.1 Asm Can'T See Disks After Upgrade to 10.2.0.3 on Itanium
Note:382669.1 Duplicate database from non ASM to ASM (vise versa) to a different host
Note:413389.1 Asynchronous I/O not reported in /proc/slabinfo KIOCB slabdata
Note:437555.1 Created ASM Stamped Disks But Unable To Create Diskgroup
Note:370355.1 How to upgrade an ASM Instance From 10.2.0 lower version To higher version
Note:452924.1 How to Prepare Storage for ASM
Note:313387.1 HOWTO Which Disks Are Handled by ASMLib Kernel Driver
Note:331661.1 How to Re-configure Asm Disk Group
Note:428893.1 How to copy a datafile from ASM to a file system not using RMAN
Note:416046.1 ASM - Internal Handling of Block Corruptions
Note:340848.1 Performing duplicate database with ASM-OMF-RMAN
Note:342234.1 How to relocate an spfile from one ASM diskgroup to another on a RAC environment
Note:330084.1 Install: How To Migrate Oracle10g R1 ASM Database To 10g R2
Note:209850.1 RAC Survival Kit ORA-29702
Note:467354.1 ASM Crashes When Rebooting a Server With ORA-29702 Error
Note:334726.1 Cannot configure ASM because CSS Does Not Start on AIX 5L
STARTUP
Note:404728.1 Automatic Database Startup Does not Work With ASM through DBSTART.
Note:264235.1 ORA-29701 On Reboot When Instance Uses Automatic Storage Management (ASM)
DBMS_FILE_TRANSFER
Note:330103.1 How to Move Asm Database Files From one Diskgroup To Another
Async IO
Note:432854.1 Asynchronous IO Support on OCFS-OCFS2 and Related Settings filesystemio_options, disk_asynch_io
Note 237299.1 HOW TO CHECK IF ASYNCHRONOUS IO IS WORKING ON LINUX
Windows
Note 331796.1 How to setup ASM on Windows
11g
Note:429098.1 11g ASM New Feature
Note:443835.1 ASM Fast Mirror Resync - Example To Simulate Transient Disk Failure And Restore Disk
Note:445037.1 ASM Fast Rebalance
Note 199457.1 Step-By-Step Installation of RAC on IBM AIX (RS/6000)
Note:240575.1 RAC on Linux Best Practices
Note:245356.1 Oracle9i - AIX5L Installation Tips
Note:29676.1 Making the decision to use raw devices
Note:38281.1 RAID and Oracle - 20 Common Questions and Answers
ASM & ASMlib Using Files Instead of Real Devices on Linux
Doc ID: Note:303760.1
Configuring Oracle ASMLib on Multipath Disks
Doc ID: Note:309815.1
Tips On Installing and Using ASMLib on Linux
Doc ID: Note:394953.1
Raw Devices on Linux
Doc ID: Note:224302.1
-- PERFORMANCE
File System's Buffer Cache versus Direct I/O
Doc ID: Note:462072.1
ASMIOSTAT Script to collect iostats for ASM disks
Doc ID: 437996.1
Note:341782.1 Linux Quick Reference
Note:264736.1 How to Create a Filesystem inside of a Linux File (loop device)
-- 11gR2 BUG DETECT ASM ON OCR
Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
FAQ ASMLIB CONFIGURE,VERIFY, TROUBLESHOOT [ID 359266.1]
http://oraclue.com/2010/11/09/grid-11-2-0-2-install-nightmare/
http://gjilevski.wordpress.com/2010/10/03/fresh-oracle-11-2-0-2-grid-infrastructure-installation-prvf-5150-prvf-5184/
PRVF-5449 : Check of Voting Disk location "ORCL:(ORCL:)" failed [ID 1267569.1]
-- DROP DISK ISSUE, BUG
ORA-15041 V$ASM_DISK Shows HUNG State for Dropped Disks
Doc ID: Note:419014.1
ORA-15041 IN A DISKGROUP ALTHOUGH FREE_MB REPORTS SUFFICIENT SPACE
Doc ID: Note:460155.1
-- DROP/CREATE
How To Add Back An ASM Disk or Failgroup (Normal or High Redundancy) After A Transient Failure Occurred (On Release 10.2. or 10.1)? (Doc ID 946213.1)
-- BUG FIXES ON AIX 64bit 10.2.0.2
Note 433399.1-Could not add datafile due to ORA-01119, ORA-17502 and ORA-15041
1. Apply fix for Patch 4691191.
OR
2. Apply 10.2.0.3.
-- AIX
Subject: ASM does not discover disk(s) on AIX platform
Doc ID: Note:461079.1 Type: PROBLEM
Last Revision Date: 24-JAN-2008 Status: PUBLISHED
-- UPGRADE ASM
How to upgrade ASM instance from 10.1 to 10.2 (Single Instance)
Doc ID: Note:329987.1
How To Upgrade ASM from 10.2 to 11.1 (single Instance configuration / Non-RAC)?
Doc ID: Note:736121.1
How To Upgrade ASM from 10.2 to 11.1 (RAC)?
Doc ID: Note:736127.1
How to upgrade an ASM Instance From 10.2.0 lower version To higher version? from 10.2.0.1 to patchset 10.2.0.2
Doc ID: Note:370355.1
Install: How To Migrate Oracle10g R1 ASM Database To 10g R2
Doc ID: Note:330084.1
Asm Can'T See Disks After Upgrade to 10.2.0.3 on Itanium
Doc ID: Note:460909.1
-- UNINSTALL
How to cleanup ASM installation (RAC and Non-RAC)
Doc ID: Note:311350.1
-- QUERY
ASM Extent Size
Doc ID: Note:465039.1
How To Identify If A Disk/Partition Is Still Used By ASM, Has Been Used by ASM Or Has Not Been Used by ASM (Unix/Linux)?
Doc ID: 603210.1
-- DEBUG
Information to gather when diagnosing ASM space issues
Doc ID: Note:351117.1
How To Gather/Backup ASM Metadata In A Formatted Manner?
Doc ID: Note:470211.1
-- COMPATIBLE.ASM
Bug 7173616 - CREATE DISKGROUP with compatible.asm=10.2 fails (OERI:kfdAllocateAu_00)
Doc ID: Note:7173616.8
-- 11g NEW FEATURE
11g ASM New Feature
Doc ID: Note:429098.1
-- RESIZE
How to resize a physical disk or LUN and an ASM DISKGROUP
Doc ID: 311619.1
-- RAC ASM
How to Convert a Single-Instance ASM to Cluster ASM
Doc ID: 452758.1
-- LABEL
Adding The Label To ASMLIB Disk Using 'oracleasm renamedisk' Command
Doc ID: 280650.1
-- REMOVE INSTANCE
How to remove an ASM instance and its corresponding database(s) on WINDOWS?
Doc ID: 342530.1
-- ADD DISK
How To Add a New Disk(s) to An Existing Diskgroup on RAC (Best Practices). (Doc ID 557348.1)
-- ADD DISK WINDOWS
RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows) [ID 811271.1]
How To Setup ASM (10.2) on Windows Platforms [ID 469082.1]
ORA-17502 and ORA-15081 when creating a datafile on a ASM diskgroup [ID 369898.1]
New Partitions in Windows 2003 RAC Environments Not Visible on Remote Nodes [ID 454607.1]
RAC: Frequently Asked Questions [ID 220970.1]
Oracle Tools Available for Working With RAW Partitions on Windows Platforms [ID 555645.1]
How to Extend A Raw Logical Volume in Windows [ID 555273.1]
OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE), including moving from RAW Devices to Block Devices. [ID 428681.1] <-- helpful
Asmtoolg Generates An Access Violation When Stamping Disks [ID 443635.1]
Disk Is not Discovered in ASM, Diskgroup Creation Fails with Ora-15018 Ora-15031 Ora-15014 [ID 431013.1]
-- REMOVE DISK
How to Dynamically Add and Remove SCSI Devices on Linux
Doc ID: 603868.1
-- RESYNC
ASM 11g New Features - How ASM Disk Resync Works. (Doc ID 466326.1)
-- RENAME DISK
How to rename ASM disks? (Doc ID 418542.1)
Adding The Label To ASMLIB Disk Using 'oracleasm renamedisk' Command (Doc ID 280650.1)
Oracleasm Createdisk Fails: Device '/dev/emcpoweraxx Is Not A Partition [Failed] (Doc ID 469163.1)
New ASMLib / oracleasm Disk Gets "header_status=Unknown" - Cannot be Added to Diskgroup (Doc ID 391136.1)
-- PASSWORD
How To Change ASM SYS PASSWORD ?
Doc ID: 452076.1
-- CSS MISCOUNT
How to Increase CSS Misscount in single instance ASM installations
Doc ID: Note:729878.1
10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout
Doc ID: Note:284752.1
-- CLEAN UP ASM INSTALL, UNINSTALL
How to cleanup ASM installation (RAC and Non-RAC)
Doc ID: 311350.1
-- RECREATE ASM DISKGROUPS
Steps to Re-Create ASM Diskgroups
Doc ID: Note:268481.1
-- DUPLICATE CONTROLFILE
Note 345180.1 - How to duplicate a controlfile when ASM is involved
-- MULTIPLE ASM HOME
Multiple 10g Oracle Home installation - ASM
Doc ID: 279353.1
-- 11g CP command
ASMCMD cp command fails with ORA-15046
Doc ID: 452158.1
ASMCMD - New commands in 11g
Doc ID: 451900.1
Copying File Using ASMCMD Copy Command Failed With ASMCMD-08010
Doc ID: 786364.1
Unable To Copy Directory Using ASMCMD Cp -r Command
Doc ID: 829040.1
Asmcmd CP Command Can Not Copy Files Larger Than 2 GB
Doc ID: 786258.1
-- EXPDP
Creating dumpsets in ASM
Doc ID: 559878.1
How To Extract Datapump File From ASM Diskgroup To Local Filesystem?
Doc ID: 566941.1
-- MIGRATION
How to Prepare Storage for ASM
Doc ID: 452924.1
Exact Steps To Migrate ASM Diskgroups To Another SAN Without Downtime.
Doc ID: 837308.1
Steps To Migrate/Move a Database From Non-ASM to ASM And Vice-Versa
Doc ID: 252219.1
How To Migrate From OCFS To ASM
Doc ID: 579468.1
Install: How To Migrate Oracle10g R1 ASM Database To 10g R2
Doc ID: 330084.1
Migrating Raw Devices to ASMLib on Linux
Doc ID: 394955.1
How To Migrate ASMLIB devices to Block Devices (non-ASMLIB)?
Doc ID: 567508.1
-- FAILOVER
Does Oracle Support Failover Of Asm Based Instance
Doc ID: 762674.1
-- MOVE FILES IN ASM
How to move a datafile from a file system to ASM [ID 390274.1]
How to Copy Archivelog Files From ASM to Filesystem and vice versa [ID 944831.1]
How to transfer backups from ASM to filesystem when restoring to a new host [ID 345134.1]
How To Move Controlfile To ASM [ID 468458.1]
Can RMAN duplex backups to Flash Recovery Area and a Disk location [ID 434222.1]
How to restore archive logs to an alternative location when they already reside on disk [ID 399894.1]
How To Backup Database When Files Are On Raw Devices/File System [ID 469716.1]
RMAN10g: backup copy of database [ID 266980.1]
How To Move The Database To Different Diskgroup (Change Diskgroup Redundancy) [ID 438580.1]
-- 11gR2, Grid Infra
ASM 11.2 Configuration KIT (ASM 11gR2 Installation & Configuration, Deinstallation, Upgrade, ASM Job Role Separation. [ID 1092213.1]
11gR2 Clusterware and Grid Home - What You Need to Know [ID 1053147.1]
Pre 11.2 Database Issues in 11gR2 Grid Infrastructure Environment [ID 948456.1]
Database Creation on 11.2 Grid Infrastructure with Role Separation ( ORA-15025, KFSG-00312, ORA-15081 ) [ID 1084186.1]
-- ACFS - backup and recovery, rman acfs
https://forums.oracle.com/forums/thread.jspa?threadID=2175933
http://download.oracle.com/docs/cd/E11882_01/server.112/e16102/asmfiles.htm#g1030822 <-- supported files on acfs
-- Backing Up an ASM Instance [ID 333257.1]
-- RAW DEVICES
Raw Devices and Cluster Filesystems With Real Application Clusters
Doc ID: 183408.1
-- ASM SEPARATE HOME
DBCA Rejects Asm Password When Creating a New Database
Doc ID: 431312.1
DBCA Is Unable To Connect To +ASM Instance With Error : Invalid Credentials
Doc ID: 277223.1
Diskgroup Mount with Long ASMLib Labels Fails with ORA-15040 ORA-15042
Doc ID: 787082.1
Placeholder for AMDU binaries and using with ASM 10g
Doc ID: 553639.1
How To Migrate ASMLIB devices to Block Devices (non-ASMLIB)?
Doc ID: 567508.1
Bug 5039964 - ASM disks show as provisioned although kfed shows valid disk header
Doc ID: 5039964.8
ORA-15063 When Mounting a Diskgroup After Storage Cloning
Doc ID: 784776.1
ORA-15036 When Starting An ASM Instance
Doc ID: 553319.1
CASE STUDY - WHAT CAUSED ERROR ora-1186 ora-1122 on RAC with ASM
Doc ID: 333816.1
ASM Using Files Instead of Real Devices on Linux
Doc ID: 266028.1
}}}
http://www.oaktable.net/content/auto-dop-and-direct-path-inserts
http://www.pythian.com/news/27867/secrets-of-oracles-automatic-degree-of-parallelism/
http://uhesse.wordpress.com/2011/10/12/auto-dop-differences-of-parallel_degree_policyautolimited/
http://uhesse.wordpress.com/2009/11/24/automatic-dop-in-11gr2/
http://www.rittmanmead.com/2010/01/in-memory-parallel-execution-in-oracle-database-11gr2/
! AUTO DOP
{{{
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
alter system set parallel_degree_policy=AUTO scope=both sid='*';
alter system flush shared_pool;
select 'alter table '||owner||'.'||table_name||' parallel (degree default);' from dba_tables where owner='<app schema>';
}}}
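to verify Auto DOP kicked in, check the Note section of the plan (sketch; big_table is a placeholder):
{{{
explain plan for select count(*) from big_table;
select * from table(dbms_xplan.display(format => 'BASIC +NOTE'));
-- the Note section reports the computed degree of parallelism
}}}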
! AUTO DOP + PX queueing, with no in-mem PX
{{{
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
alter system set parallel_degree_policy=LIMITED scope=both sid='*';
alter system set "_parallel_statement_queuing"=TRUE scope=both sid='*';
}}}
''and some other config variations....''
<<<
!AUTO DOP PATH AND IGNORE HINTS
{{{
1) Calibrate the IO
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
2) Parallel_Degree_policy=limited
3) _parallel_statement_queueing=true
4) alter session set "_optimizer_ignore_hints" = TRUE ;
5) set the table and index to “default” degree
}}}
! NO AUTO DOP PATH AND IGNORE HINTS
{{{
1) Calibrate the IO
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
2) Resource manager directive to limit the PX per session = per session 4
3) alter session set "_optimizer_ignore_hints" = TRUE ;
4) _parallel_statement_queueing=true
}}}
! NO AUTO DOP PATH WITHOUT IGNORING HINTS
{{{
1) Calibrate the IO
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
2) Resource manager directive to limit the PX per session = per session 4
3) _parallel_statement_queueing=true
}}}
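a minimal sketch of the resource manager directive assumed in step 2 above (plan name LIMIT_PX_PLAN is made up; a real plan would carry directives for your own consumer groups, not just OTHER_GROUPS):
{{{
begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_plan(
    plan    => 'LIMIT_PX_PLAN',
    comment => 'cap parallel degree per session');
  dbms_resource_manager.create_plan_directive(
    plan                     => 'LIMIT_PX_PLAN',
    group_or_subplan         => 'OTHER_GROUPS',
    comment                  => 'everyone else',
    parallel_degree_limit_p1 => 4);
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/
-- then activate it:
-- alter system set resource_manager_plan='LIMIT_PX_PLAN';
}}}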
<<<
! Monitoring
<<<
! determine if PX underscore params are set
{{{
select a.ksppinm name, b.ksppstvl value
from x$ksppi a, x$ksppsv b
where a.indx = b.indx
and a.ksppinm in ('_parallel_cluster_cache_pct','_parallel_cluster_cache_policy','_parallel_statement_queuing','_optimizer_ignore_hints')
order by 1,2
/
}}}
! list if SQLs are using in-mem PX
{{{
-- The fourth column (impx) indicates whether the cursor was satisfied using In-Memory PX:
-- if the number of parallel servers is greater than zero but the bytes eligible for
-- predicate offload are zero, it's a good indication that In-Memory PX was in use.
select ss.sql_id,
sum(ss.PX_SERVERS_EXECS_total) px_servers,
decode(sum(ss.io_offload_elig_bytes_total),0,'No','Yes') offloadelig,
decode(sum(ss.io_offload_elig_bytes_total),0,'Yes','No') impx,
sum(ss.io_offload_elig_bytes_total)/1024/1024 offloadbytes,
sum(ss.elapsed_time_total)/1000000/sum(ss.px_servers_execs_total) elps,
dbms_lob.substr(st.sql_text,60,1) st
from dba_hist_sqlstat ss, dba_hist_sqltext st
where ss.px_servers_execs_total > 0
and ss.sql_id=st.sql_id
and upper(st.sql_text) like '%IN-MEMORY PX T1%'
group by ss.sql_id,dbms_lob.substr(st.sql_text,60,1)
order by 5
/
}}}
<<<
! Quick PX test case
{{{
-- check the current DOP setting and size of the test table
select degree,num_rows from dba_tables
where owner='&owner' and table_name='&table_name';
#!/bin/sh
# driver: fire off 5 concurrent sqlplus sessions, each running px_test.sql
for i in 1 2 3 4 5
do
nohup sqlplus oracle/oracle @px_test.sql $i &
done
-- px_test.sql: time a parallel count and spool the elapsed seconds
set serveroutput on size 20000
variable n number
exec :n := dbms_utility.get_time;
spool autodop_&1..lst
select /* queue test 0 */ count(*) from big_table;
begin
dbms_output.put_line
( (round((dbms_utility.get_time - :n)/100,2)) || ' seconds' );
end;
/
spool off
exit
}}}
Related articles:
http://jamesmorle.wordpress.com/2010/06/02/log-file-sync-and-awr-not-good-bedfellows/
http://rnm1978.wordpress.com/2010/09/14/the-danger-of-averages-measuring-io-throughput/
Investigate the metric tables, especially the file I/O metrics, which come in 10-minute deltas.
-- note: average_read_time is in centiseconds; multiply by 10 to get ms
alter session set nls_date_format='dd-mm-yyyy hh24:mi';
select begin_time, end_time, file_id,
physical_reads reads,
nvl(physical_reads,0)/603 rps,
average_read_time*10 atpr,
nvl(physical_block_reads,0) / decode(nvl(physical_reads,0),0,to_number(NULL),physical_reads) bpr,
physical_writes writes,
nvl(physical_writes,0)/603 wps,
average_write_time*10 atpwt,
nvl(physical_block_writes,0)/ decode(nvl(physical_writes,0),0,to_number(NULL),physical_writes) bpw,
physical_reads + physical_writes ios,
nvl((physical_reads + physical_writes),0) / 600 iops
from v$filemetric_history order by 1 asc;
{{{
sys@IVRS> set lines 300
drop table ioms;
create table ioms as select
file#
, nvl(b.phyrds,0) phyrds
, nvl(b.readtim,0) readtim
, nvl(b.phywrts,0) phywrts
, nvl(b.phyblkrd,0) phyblkrd
from v$filestat b;
exec dbms_lock.sleep(seconds => 600);
select
e.file#
, nvl(e.phyrds,0) ephyrds
, nvl(e.readtim,0) ereadtim
, nvl(e.phywrts,0) ephywrts
, nvl(e.phyblkrd,0) ephyblkrd
, e.phyrds - i.phyrds reads
, (e.phyrds - nvl(i.phyrds,0))/ 603 rps
, decode ((e.phyrds - nvl(i.phyrds, 0)), 0, to_number(NULL), ((e.readtim - nvl(i.readtim,0)) / (e.phyrds - nvl(i.phyrds,0)))*10) atpr_ms
, decode ((e.phyrds - nvl(i.phyrds, 0)), 0, to_number(NULL), (e.phyblkrd - nvl(i.phyblkrd,0)) / (e.phyrds - nvl(i.phyrds,0)) ) bpr
, e.phywrts - nvl(i.phywrts,0) writes
, (e.phywrts - nvl(i.phywrts,0))/ 603 wps
, (e.phyrds - nvl(i.phyrds,0)) + (e.phywrts - nvl(i.phywrts,0)) ios,
((e.phyrds - nvl(i.phyrds,0)) + (e.phywrts - nvl(i.phywrts,0))) / 600 iops
from v$filestat e, ioms i
where e.file# = i.file#;

Table dropped.
Table created.
PL/SQL procedure successfully completed.
FILE# EPHYRDS EREADTIM EPHYWRTS EPHYBLKRD READS RPS ATPR_MS BPR WRITES WPS IOS IOPS
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
1 7374 12818 446 10365 1 .001658375 0 1 26 .043117745 27 .045
2 62 144 472 62 0 0 26 .043117745 26 .043333333
3 2990 4699 907 9525 0 0 10 .016583748 10 .016666667
4 8803 4715 1104 37702 9 .014925373 6.66666667 1 78 .129353234 87 .145
5 66 115 9 93 0 0 0 0 0 0
6 5 6 1 5 0 0 0 0 0 0
7 5 1 1 5 0 0 0 0 0 0
8 5 2 1 5 0 0 0 0 0 0
9 5 2 1 5 0 0 0 0 0 0
10 5 2 1 5 0 0 0 0 0 0
11 5 2 1 5 0 0 0 0 0 0
12 5 15 1 5 0 0 0 0 0 0
13 2341 2333 1297 10584 16 .026533997 5.625 1 76 .126036484 92 .153333333
13 rows selected.
}}}
{{{
BEGIN_TIME END_TIME FILE_ID AVERAGE_READ_TIME*10 AVERAGE_WRITE_TIME*10 PHYSICAL_READS PHYSICAL_WRITES PHYSICAL_BLOCK_READS PHYSICAL_BLOCK_WRITES
---------------- ---------------- ---------- -------------------- --------------------- -------------- --------------- -------------------- ---------------------
17-06-2010 01:28 17-06-2010 01:38 12 0 0 0 0 0 0
17-06-2010 01:28 17-06-2010 01:38 11 0 0 0 0 0 0
17-06-2010 01:28 17-06-2010 01:38 10 0 0 0 0 0 0
17-06-2010 01:28 17-06-2010 01:38 9 0 0 0 0 0 0
17-06-2010 01:28 17-06-2010 01:38 8 0 0 0 0 0 0
17-06-2010 01:28 17-06-2010 01:38 7 0 0 0 0 0 0
17-06-2010 01:28 17-06-2010 01:38 13 5.625 0 16 76 16 179
17-06-2010 01:28 17-06-2010 01:38 2 0 0 0 26 0 83
17-06-2010 01:28 17-06-2010 01:38 3 0 0 0 10 0 10
17-06-2010 01:28 17-06-2010 01:38 4 6.66666667 0 9 78 9 93
17-06-2010 01:28 17-06-2010 01:38 5 0 0 0 0 0 0
17-06-2010 01:28 17-06-2010 01:38 6 0 0 0 0 0 0
17-06-2010 01:28 17-06-2010 01:38 1 0 0 1 28 1 30
}}}
{{{
Karl@Karl-LaptopDell /cygdrive/c/Users/Karl/Desktop
$ cat awk.txt
11 12 13
21 22 23
31 32 33
Karl@Karl-LaptopDell /cygdrive/c/Users/Karl/Desktop
$ cat awk.txt | awk 'FNR == 2 {print $2}' <-- output line 2 column 2
22
$ cat awk.txt | awk '$2=="12"||$2=="32" {print $0}' <-- filter rows on column 2 with "12" or "32"
11 12 13
31 32 33
$ cat awr_genwl.txt | awk '{print $0}' <-- output all rows and columns
$ cat awr_genwl.txt | awk '{print $2}' <-- output only column 2
$ cat awr_genwl.txt | awk '$4=="1" {print $0}' <-- filter on column 4 (instance number) with value "1" and output all rows with that value
$ cat awr_genwl.txt | awk '$4=="1" && $12>10 {print $0}' <-- filter on column 4 and 12 (AAS) with values "1" and "AAS greater than 10" and output all rows with that value
}}}
{{{
cat awr_iowlexa.txt | awk '$6>1 {print $1,$2,$3,$6}' | less <-- will show snap_id, tm, aas > 1
cat awr_topsqlexa2.txt | awk '$6>1 {print $1,$6,$27,$28,$29,$30}' | less <-- will show snap_id, sql, aas > 1
}}}
''awk dcli vmstat output''
dcli -l root -g cell_group vmstat 1 > oslogs.txt
{{{
-- add the usr and sys columns
cat osload.txt | grep cx02db01 | awk '{print $14 + $15}' > cx02db01.txt
-- add the usr and sys columns for storage cells
cat osload.txt | egrep "cx02cel01|cx02cel02|cx02cel03" | awk '{print $14 + $15}' > storagecells.txt
-- discover the bad lines in vmstat output
cat snap_724-725_1057-1107.txt | grep cx02db01 | grep -v memory | grep -v buff | perl -p -e "s| | |g" -| perl -p -e "s| | |g" - | perl -p -e "s| | |g" - | awk 'BEGIN {x=1}; {print x++ " " $1 " " $2 " " $3 " " $4 " " $5 " " $6 " " $7 " " $8 " " $9 " " $10 " " $11 " " $12 " " $13 " " $14 " " $15 " " $16;}' | column -t | less
-- gives you bad vmstat lines where cs and us columns are not aligning
cat snap_724-725_1057-1107.txt | grep cx02db01 | grep -v memory | grep -v buff | perl -p -e "s| | |g" -| perl -p -e "s| | |g" - | perl -p -e "s| | |g" - | awk 'BEGIN {x=1}; {print x++ " " $1 " " $2 " " $3 " " $4 " " $5 " " $6 " " $7 " " $8 " " $9 " " $10 " " $11 " " $12 " " $13 " " $14 " " $15 " " $16;}' | column -t | grep '\:[0-9]' | less
-- will show the data bug !!!!
cat oslogs.txt | awk 'BEGIN{buf=""} /[0-9]:[0-9][0-9]:[0-9]/{buf=$0} /cx02db01/{print $0,buf}' | column -t | grep '[a-zA-Z0-9]\{8\}:[0-9]\{2\}' | wc -l
-- prints the usr and sys columns
cat fixed.txt | awk '{print $14, $15, $16}'
-- prints usr values only, dropping blank lines and the 'us' header
cat start.txt | awk '{ print $1 }' | grep . | grep -v us | less
-- discard zero
cat finalout.txt | awk ' $0>0 { print $0 }' | less
}}}
''Final''
{{{
# vmstat.sh
# usage:
# sh vmstat.sh <text file output> <hostname filter>
# sh vmstat.sh oslogs.txt cx02db01
# cleanup
rm $2_datapoints.txt &> /dev/null
# regex stuff
foo=`echo "$2"| wc -c`; count=$((${foo}-1))
regexp=[a-zA-Z0-9]'\'{$count'\'}:[0-9]'\'{2'\'}
cat > $2_execvmstat.sh << EOF
# fix the vmstat data bug
cat $1 | awk 'BEGIN{buf=""} /[0-9]:[0-9][0-9]:[0-9]/{buf=\$0} /$2/{print \$0,buf}' | column -t | grep -v '$regexp' > $2_good.txt
cat $1 | awk 'BEGIN{buf=""} /[0-9]:[0-9][0-9]:[0-9]/{buf=\$0} /$2/{print \$0,buf}' | column -t | grep '$regexp' > $2_bad.txt
cat $2_good.txt | awk '{print \$19, "$2", \$14, \$15, \$16, \$17, \$18, \$14+\$15+\$17+\$18 }' | column -t >> $2_datapoints.txt
cat $2_bad.txt | awk '{print \$18, "$2", \$13, \$14, \$15, \$16, \$17, \$13+\$14+\$16+\$17}' | column -t >> $2_datapoints.txt
# create files for statistical analysis
sort -k1 $2_datapoints.txt | awk '\$8>0 {print \$8}' | sort -n > $2_graph_totcpu.txt
sort -k1 $2_datapoints.txt | awk '\$3+\$4>0 {print \$3+\$4}' | sort -n > $2_graph_usrsys.txt
sort -k1 $2_datapoints.txt | awk '\$6>0 {print \$6}' | sort -n > $2_graph_wa.txt
# show data points above 70pct total cpu
sort -k1 $2_datapoints.txt | awk '\$8>70 {print \$0}' > $2_data_totcpu_gt70pct.txt
# show data points above 70pct usr and sys
sort -k1 $2_datapoints.txt | awk '\$3+\$4>70 {print \$0}' > $2_data_usrsys_gt70pct.txt
# show data points above 0pct wait io
sort -k1 $2_datapoints.txt | awk '\$6>0 {print \$0}' > $2_data_wa_gt0pct.txt
# wc on all output files
wc -l $2_graph_totcpu.txt
wc -l $2_graph_usrsys.txt
wc -l $2_graph_wa.txt
wc -l $2_data_totcpu_gt70pct.txt
wc -l $2_data_usrsys_gt70pct.txt
wc -l $2_data_wa_gt0pct.txt
EOF
sh $2_execvmstat.sh
}}}
http://stackoverflow.com/questions/3600170/how-to-cat-two-files-after-each-other-but-omit-the-last-first-line-respectively
http://www.google.com.ph/search?q=cat+filter+2+lines+before&hl=tl&prmd=ivns&ei=yR7MTZO4E9Sutweo_8XrBw&start=20&sa=N
http://www.ibm.com/developerworks/aix/library/au-badunixhabits.html
http://www.linuxquestions.org/questions/linux-software-2/cat-output-specific-number-of-lines-130360/
http://stackoverflow.com/questions/4643022/awk-and-cat-how-to-ignore-multiple-lines
http://tldp.org/LDP/abs/html/textproc.html <-- GOOD STUFF
http://www.ibm.com/developerworks/linux/library/l-lpic1-v3-103-2/ <-- GOOD STUFF
http://www.google.com.ph/search?sourceid=chrome&ie=UTF-8&q=grep+2+lines+before
http://www.dbforums.com/unix-shell-scripts/1069858-printing-2-2-line-numbers-including-grep-word-line.html
http://www.google.com.ph/search?q=how+to+grep+specific+lines&hl=tl&prmd=ivnsfd&ei=nyDMTemaMcOgtgejksT5Bw&start=10&sa=N <-- GOOD STUFF
http://www.computing.net/answers/unix/grep-to-find-a-specific-line-number/6484.html
http://www.unix.com/shell-programming-scripting/67045-grep-specific-line-file.html <-- grep specific line
http://stackoverflow.com/questions/2914197/how-to-grep-out-specific-line-ranges-of-a-file
http://forums.devshed.com/unix-help-35/displaying-lines-above-grep-ed-line-173427.html
http://vim.1045645.n5.nabble.com/How-to-go-a-file-at-a-specific-line-number-using-the-output-from-grep-td1186768.html
http://www.google.com.ph/search?q=grep+usr+column+vmstat&hl=tl&prmd=ivns&ei=MyjMTdy0OoOutwfjqqDjBw&start=10&sa=N <-- vmstat cs usr
http://www.tuschy.com/nagios/plugins/check_cpu_usage
http://ytrudeau.wordpress.com/2007/11/20/generating-graphs-from-vmstat-output/ <-- GOOD STUFF
http://sourceforge.net/projects/gnuplot/files/gnuplot/4.4.3/ <-- gnuplot
http://www.google.com.ph/search?q=vmstat+aligning&hl=tl&prmd=ivns&ei=qi7MTauxJ8jj0gH4p4n4Bg&start=20&sa=N <-- vmstat aligning
http://stackoverflow.com/questions/3259776/vmstat-and-column <-- GOOD STUFF column -t
http://www.robelle.com/smugbook/regexpr.html <-- GOOD STUFF searching files on linux
http://unix.ittoolbox.com/groups/technical-functional/ibm-aix-l/vmstat-and-ps-ef-columns-not-aligning-786668 <-- GOOD STUFF cs and usr column issue
http://www.issociate.de/board/post/235976/sed_and_newline_(x0a).html
http://www.commandlinefu.com/commands/view/2942/remove-newlines-from-output <-- GOOD STUFF remove newline
http://linux.dsplabs.com.au/rmnl-remove-new-line-characters-tr-awk-perl-sed-c-cpp-bash-python-xargs-ghc-ghci-haskell-sam-ssam-p65/ <-- GOOD STUFF remove new line
http://www.tek-tips.com/viewthread.cfm?qid=1211423&page=1
http://www.computing.net/answers/unix/sed-newline/5640.html
## additional links on the script creation
http://3spoken.wordpress.com/2006/12/10/cpu-steal-time-the-new-statistic/ <-- GOOD STUFF "Steal Time" cpu statistic
http://goo.gl/OPukP <-- put in one line
http://compgroups.net/comp.unix.solaris/Adding-date-time-to-line-in-vmstat <-- GOOD STUFF vmstat with time output
vmstat 2 | while read line; do echo "`date +%T/%m/%d/%y`" "$line" ; done
http://bytes.com/topic/db2/answers/644323-include-date-time-vmstat
http://mishmashmoo.com/blog/?p=65 <-- GOOD STUFF from a performance tester.. reformating vmstat/sar/iostat logs :: loadrunner analysis
http://www.regexbuddy.com/create.html <-- GOOD STUFF regex buddy
http://gskinner.com/RegExr/ <-- regex tool
http://www.fileformat.info/tool/regex.htm <-- regex tool
http://www.txt2re.com/index-perl.php3?s=cx02db01:25&-24&-1 <-- regex tool
http://www.regular-expressions.info/reference.html <-- GOOD STUFF reference for REGEX!!!
http://stackoverflow.com/questions/304864/how-do-i-use-regular-expressions-in-bash-scripts
http://work.lauralemay.com/samples/perl.html <-- GOOD STUFF chapter on regex
http://ask.metafilter.com/80862/how-split-a-string-in-bash <-- split a string in bash
http://www.linuxforums.org/forum/programming-scripting/136269-bash-scripting-can-i-split-word-letter.html <-- split word
http://www.google.com.ph/search?sourceid=chrome&ie=UTF-8&q=grep+%5C%3A%5B0-9%5D <-- search grep \:[0-9]
http://www.unix.com/unix-dummies-questions-answers/128913-finding-files-numbers-file-name.html <-- find file numbers name
http://www.cyberciti.biz/faq/grep-regular-expressions/ <-- GOOD STUFF on regex CYBERCITI
http://www.robelle.com/smugbook/regexpr.html <-- REALLY GOOD STUFF.. got the idea of awk a-zA-Z0-9
http://goo.gl/KV47w <-- search bash word count
http://linux.byexamples.com/archives/57/word-count/#comments <-- GOOD STUFF word count
http://www.softpanorama.org/Scripting/Shellorama/arithmetic_expressions.shtml
http://stackoverflow.com/questions/673016/bash-how-to-do-a-variable-expansion-within-an-arithmetic-expression <-- REALLY GOOD STUFF word count
http://goo.gl/dyYsf <-- search on grep: Invalid content of \{\} variable
http://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg53360.html
http://marc.info/?l=logcheck-devel&m=114076370027762
http://us.generation-nt.com/answer/bug-575204-initscripts-grep-complains-about-invalid-back-reference-umountfs-help-196593881.html
http://www.unix.com/unix-dummies-questions-answers/158405-modifying-shell-script-without-using-editor.html <-- REALLY GOOD STUFF create script w/o editor
http://goo.gl/LLaPG <-- search sed + put space
http://www.unix.com/shell-programming-scripting/41417-add-white-space-end-line-sed.html
http://www.unix.com/shell-programming-scripting/150966-help-sed-insert-space-between-string-form-xxxaxxbcx-without-replacing-pattern.html
http://goo.gl/LeDay <-- search bash sort column
http://www.skorks.com/2010/05/sort-files-like-a-master-with-the-linux-sort-command-bash/ <-- REALLY GOOD STUFF BASH SORTING
http://www.linuxquestions.org/questions/linux-newbie-8/sorting-columns-in-bash-664705/ <-- REALLY GOOD STUFF sort -k1
''References:''
Filter records in a file with sed or awk (UNIX)
http://p2p.wrox.com/other-programming-languages/70727-filter-records-file-sed-awk-unix.html <-- GOOD STUFF
how to get 2 row 2 column
http://studentwebsite.blogspot.com/2010/11/how-to-get-2-row-2-column-using-awk.html <-- GOOD STUFF
Filtering rows for first two instances of a value
http://www.unix.com/shell-programming-scripting/135452-filtering-rows-first-two-instances-value.html
Deleting specific rows in large files having rows greater than 100000
http://www.unix.com/shell-programming-scripting/125807-deleting-specific-rows-large-files-having-rows-greater-than-100000-a.html
awk notes
http://www.i-justblog.com/2009/07/awk-notes.html
awk to select a column from particular line number
http://www.unix.com/shell-programming-scripting/27255-awk-select-column-particular-line-number.html
select row and element in awk
http://stackoverflow.com/questions/1506521/select-row-and-element-in-awk
Filter and migrate data from row to column
http://www.unix.com/shell-programming-scripting/137404-filter-migrate-data-row-column.html
Extracting particular column name values using sed/ awk/ perl
http://stackoverflow.com/questions/1630710/extracting-particular-column-name-values-using-sed-awk-perl
AWK SELECTION
https://www.prodigyone.com/in/doc/docs.php?view=1&nid=317
AWK one liners
http://www.krazyworks.com/useful-awk-one-liners/ <-- AWK NR==
http://www.idevelopment.info/data/Oracle/DBA_tips/LOBs/LOBS_1.shtml
http://www.idevelopment.info/data/Oracle/DBA_tips/LOBs/LOBS_5.shtml
-- quick example
{{{
mkdir -p /home/oracle/oralobfiles
grant create any directory to hr;
DROP TABLE test_lob CASCADE CONSTRAINTS
/
CREATE TABLE test_lob (
id NUMBER(15)
, clob_field CLOB
, blob_field BLOB
, bfile_field BFILE
)
/
CREATE OR REPLACE DIRECTORY
EXAMPLE_LOB_DIR
AS
'/home/oracle/oralobfiles'
/
INSERT INTO test_lob
VALUES ( 1001
, 'Some data for record 1001'
, '48656C6C6F' || UTL_RAW.CAST_TO_RAW(' there!')
, BFILENAME('EXAMPLE_LOB_DIR', 'file1.txt')
);
COMMIT;
col clob format a30
col blob format a30
SELECT
id
, clob_field "Clob"
, UTL_RAW.CAST_TO_VARCHAR2(blob_field) "Blob"
FROM test_lob;
Id Clob Blob
---- ------------------------- -------------
1001 Some data for record 1001 Hello there!
}}}
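reading the BFILE back (assumes file1.txt really exists in /home/oracle/oralobfiles):
{{{
set serveroutput on
declare
  l_bfile bfile;
begin
  select bfile_field into l_bfile from test_lob where id = 1001;
  dbms_lob.fileopen(l_bfile, dbms_lob.file_readonly);
  dbms_output.put_line('file length: ' || dbms_lob.getlength(l_bfile));
  dbms_lob.fileclose(l_bfile);
end;
/
}}}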
Damn, these kids are smart!
https://github.com/awreece <- he has a lot of good stuff
https://github.com/davidgomes
https://blog.memsql.com/bpf-linux-performance/
https://blog.memsql.com/linux-off-cpu-investigation/
http://codearcana.com/posts/2015/12/20/using-off-cpu-flame-graphs-on-linux.html
http://codearcana.com/posts/2013/05/18/achieving-maximum-memory-bandwidth.html
<<showtoc>>
! the backbone.js environment
https://jsfiddle.net/karlarao/uf3njwe8/
{{{
// ######################################################################
// MODELS
// ######################################################################
// ----------------------------------------------------------------------
// extend
var Vehicle = Backbone.Model.extend({ // extend the backbone model
prop1: '1' // default property
});
var v = new Vehicle(); // instantiate new model of Vehicle type
var v2 = new Vehicle();
v.prop1 = 'one'; // assign value
console.log(v);
console.log(v.prop1);
console.log(v2.prop1);
// ----------------------------------------------------------------------
// class properties - 2nd argument as class properties
// 1st argument is where we usually configure our model object
var Vehicle = Backbone.Model.extend({}, // possible to have class properties
{ // by providing a 2nd argument to extend
summary : function () {
return 'Vehicles are for travelling';
}
}
);
Vehicle.summary(); // you can call the function even w/o instantiating a new Vehicle type
// ----------------------------------------------------------------------
// instantiating models
// models are constructor functions, call it with "new"
var model = new Backbone.Model();
// this works
var model = new Backbone.Model({
name : 'Karl',
age : 100
});
console.log(model);
// or use custom types
var Vehicle = Backbone.Model.extend({}); // empty model definition
var ford = new Vehicle();
// initialize w/ a function
var Vehicle = Backbone.Model.extend({
initialize : function () {
console.log('new car created');
}
});
var newCar = new Vehicle();
// ----------------------------------------------------------------------
// inheritance
// models can inherit from other models
var Vehicle = Backbone.Model.extend({}); // Vehicle is a model type that extends backbone.model
var Car = Vehicle.extend({}); // Car is a model type that extends vehicle
// example A and B inheritance
var A = Backbone.Model.extend({
initialize : function () {
console.log('initialize A');
},
asString : function () {
return JSON.stringify(this.toJSON());
}
});
var a = new A({ // create the object a
one : '1',
two : '2'
});
console.log(a.asString()); // test the asString function
var B = A.extend({}); // create a new type B, will extend A
var b = new B({ // create the object b
three : '3'
});
console.log(b.asString()); // test state of b
console.log(typeof b);
console.log(b instanceof B); // true instanceof will test the type of object
console.log(b instanceof A); // true
console.log(b instanceof Backbone.Model); // true
console.log(a instanceof B); // false
// ----------------------------------------------------------------------
// attributes
// model attributes hold your data
// set, get, escape (html escaped), has
var ford = new Vehicle();
ford.set('type', 'car');
console.log(ford);
// many properties at once, append maxSpeed and color
ford.set({
'maxSpeed' : '99',
'color' : 'blue'
});
console.log(ford);
// get
ford.get('type');
ford.get('color');
// let's try this
var Vehicle = Backbone.Model.extend({
dump: function () {
console.log(JSON.stringify(this.toJSON()));
}
});
var v = new Vehicle({
type : 'car'
});
v.dump();
v.set('color','blue');
v.set({
description: "<script>alert('this is injection') </script>",
weight: 1000
});
v.dump();
$('body').append(v.escape('description')); // before returning it, it will html encode it
v.has('type'); // true
// ----------------------------------------------------------------------
// model events
// by wrapping attributes to get and set methods backbone can raise events when their state changes
// model "listen" to "changes"
// "on" method bind to "event" and function to execute
ford.on('change', function () {});
// or listen to a change to a property
// "event" bind to "property name" and function to execute
ford.on('change:color', function () {});
// let's try this
var Vehicle = Backbone.Model.extend({ // backbone model w/ one attribute
color : 'blue'
});
var ford = new Vehicle(); // instantiate
ford.on('change', function () { // bind an event handler to the "change" event by using on
console.log('something has changed'); // the callback
});
ford.set('color','red'); // this will "trigger" the event and should say "something has changed"
console.log(ford.get('color')); // this should be red
ford.on('change:color', function () { // listen event
console.log('color has changed');
});
ford.set('color','orange'); // this should output 'something has changed' and 'color has changed'
// custom model events - possible to define "triggers" to events
ford.on('retired', function() {});
// then trigger method to fire the event
ford.trigger('retired');
// let's try this - example 1
var volcano = _.extend({}, Backbone.Events);
volcano.on('disaster:eruption', function () { // namespace convention separated by :
console.log('duck and go');
});
volcano.trigger('disaster:eruption');
// let's try this - example 2
var volcano = _.extend({}, Backbone.Events);
volcano.on('disaster:eruption', function (options) {
console.log('duck and go - ' + options.plan);
});
volcano.trigger('disaster:eruption', {plan : 'run'});
volcano.off('disaster:eruption'); // this will turn off the event handlers
volcano.trigger('disaster:eruption', {plan : 'run'});
// ----------------------------------------------------------------------
// Model Identity
}}}
http://en.wikipedia.org/wiki/Backplane
''passive backplane'' http://www.webopedia.com/TERM/B/backplane.html
http://electronicstechnician.tpub.com/14091/css/14091_36.htm
''back plane vs mother board'' http://www.freelists.org/post/si-list/back-plane-vs-mother-board,1
<<<
Basically a backplane has nothing but connectors, and maybe passive
terminating networks for transmission lines, on it. The cards that do
the real work plug into the backplane.
A motherboard, such as those used in personal computers (PC's), usually
has a processor, logic, memory, DC-DC converters, etc. along with a
backplane-like section for adapter/daughter cards. I designed a couple
of motherboards for my previous employer. Layout can really be a bear,
because you keep finding yourself blocked by these big connectors whose
locations, orientations, and pinouts have been fixed in advance for
mechanical and electrical compatibility reasons.
<<<
RPO vs RTO
<<<
RPO: Recovery Point Objective
Recovery Point Objective (RPO) describes the interval of time that might pass during a disruption before the quantity of data lost during that period exceeds the Business Continuity Plan’s maximum allowable threshold or “tolerance.”
Example: If the last available good copy of data upon an outage is from 18 hours ago, and the RPO for this business is 20 hours, then we are still within the parameters of the Business Continuity Plan’s RPO. In other words, it answers the question: “Up to what point in time could the Business Process’s recovery proceed tolerably given the volume of data lost during that interval?”
RTO: Recovery Time Objective
The Recovery Time Objective (RTO) is the duration of time and a service level within which a business process must be restored after a disaster in order to avoid unacceptable consequences associated with a break in continuity. In other words, the RTO is the answer to the question: “How much time did it take to recover after notification of business process disruption?“
RPO designates the variable amount of data that will be lost or will have to be re-entered during network downtime. RTO designates the amount of “real time” that can pass before the disruption begins to seriously and unacceptably impede the flow of normal business operations.
There is always a gap between the actuals (RTA/RPA) and objectives introduced by various manual and automated steps to bring the business application up. These actuals can only be exposed by disaster and business disruption rehearsals.
<<<
https://www.druva.com/blog/understanding-rpo-and-rto/
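To make the RPO arithmetic concrete, here is a minimal sketch of the 18-hours-vs-20-hours example above (values are assumptions, not from the Druva post):
{{{
#!/bin/bash
# does the last good backup satisfy the RPO? (hypothetical values)
rpo_hours=20                # maximum tolerable data loss per the BCP
last_backup_age_hours=18    # age of the last good copy at the time of the outage

if [ "$last_backup_age_hours" -le "$rpo_hours" ]; then
    echo "within RPO: ${last_backup_age_hours}h of data loss <= ${rpo_hours}h objective"
else
    echo "RPO breached: ${last_backup_age_hours}h of data loss > ${rpo_hours}h objective"
fi
}}}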
Recovery Manager RMAN Documentation Index
Doc ID: Note:286589.1
RMAN Myths Dispelled: Common RMAN Performance Misconceptions
Doc ID: 134214.1
-- RMAN COMPATIBILITY
RMAN Compatibility Oracle8i 8.1.7.4 - Oracle10g 10.1.0.4
Doc ID: Note:307022.1
RMAN Compatibility Matrix
Doc ID: Note:73431.1
RMAN Standard and Enterprise Edition Compatibility (Doc ID 730193.1)
Answers To FAQ For Restoring Or Duplicating Between Different Versions And Platforms (Doc ID 369644.1)
<<<
It is possible to use the 10.2 RMAN executable to restore a 9.2 database (same for 11.2 to 11.1 or 11.1 to 10.2, etc) even if the restored datafiles will be stored in ASM.
<<<
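As a concrete sketch of the note above (paths and connect strings are assumptions, not from the MOS note):
{{{
#!/bin/bash
# run the restore with the newer rman executable while the target database
# stays on the older version, e.g. a 10.2 rman restoring a 9.2 database
NEW_HOME=/u01/app/oracle/product/10.2.0/db_1   # 10.2 install (assumption)
$NEW_HOME/bin/rman target sys@V92DB catalog rman@RCAT <<'EOF'
restore database;
recover database;
EOF
}}}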
-- SCENARIOS
List of Database Outages
Doc ID: Note:76449.1
Backup and Recovery Scenarios
Doc ID: Note:94114.1
-- BEST PRACTICES
Top 10 Backup and Recovery best practices.
Doc ID: Note:388422.1
Oracle 9i Media Recovery Best Practices
Doc ID: Note:240875.1
Oracle Suggested Strategy & Backup Retention
Doc ID: Note:351455.1
-- SAMPLE SCRIPTS
RMAN Backup Shell Script Example
Doc ID: Note:137181.1
-- NOLOGGING
Note 290161.1 The Gains and Pains of Nologging Operations
-- 32bit to 64bit
RMAN Restoring A 32 bit Database to 64 bit - An Example
Doc ID: Note:467676.1
How I Solved a Problem During a Migration of 32 bit to 64 bit on 10.2.0.2
Doc ID: 452416.1
-- RMAN BUG
Successful backups are not shown in the LIST BACKUP output; not able to restore them either.
Doc ID: 284002.1
-- 9iR2 stuff
RMAN Restore/Recovery When the Recovery Catalog and Controlfile are Lost in 9i (Doc ID 174623.1)
How To Catalog Backups / Archivelogs / Datafile Copies / Controlfile Copies (Doc ID 470463.1)
Create Standby Database using RMAN changing backuppiece location (Doc ID 753902.1)
Rolling a Standby Forward using an RMAN Incremental Backup in 9i (Doc ID 290817.1)
RMAN : Block-Level Media Recovery - Concept & Example (Doc ID 144911.1)
Persistent Controlfile configurations for RMAN in 9i and 10g. (Doc ID 305565.1)
Using RMAN to Restore and Recover a Database When the Repository and Spfile/Init.ora Files Are Also Lost (Doc ID 372996.1)
How To Restore Controlfile From A Backupset Without A Catalog Or Autobackup (Doc ID 403883.1)
https://docs.google.com/viewer?url=http://www.nyoug.org/Presentations/2005/20050929rman.pdf
http://www.orafaq.com/wiki/Oracle_database_Backup_and_Recovery_FAQ#Can_one_restore_RMAN_backups_without_a_CONTROLFILE_and_RECOVERY_CATALOG.3F
-- RMAN PERFORMANCE
Advise On How To Improve Rman Performance
Doc ID: Note:579158.1
RMAN Backup Performance
Doc ID: Note:360443.1
Known RMAN Performance Problems
Doc ID: Note:247611.1
TROUBLESHOOTING GUIDE: Common Performance Tuning Issues
Doc ID: Note:106285.1
RMAN Myths Dispelled: Common RMAN Performance Misconceptions
Doc ID: Note:134214.1
-- FRA, Flash Recovery Area, Fast Recovery Area
Flash Recovery Area - FAQ [ID 833663.1]
-- SHARED DISK ERROR
RAC BACKUP FAILS WITH ORA-00245: CONTROL FILE BACKUP OPERATION FAILED [ID 1268725.1]
-- DUPLICATE CONTROLFILE
Note 345180.1 - How to duplicate a controlfile when ASM is involved
-- RECREATE CONTROLFILE
How to Recover Having Lost Controlfiles and Online Redo Logs
Doc ID: 103176.1
http://www.orafaq.com/wiki/Control_file_recovery
http://www.databasejournal.com/features/oracle/article.php/3738736/Recovering-from-Loss-of-All-Control-Files.htm
Recreating the Controlfile in RAC and OPS
Doc ID: 118931.1
How to Recreate a Controlfile for Locally Managed Tablespaces
Doc ID: 221656.1
How to Recreate a Controlfile
Doc ID: 735106.1
Step By Step Guide On How To Recreate Standby Control File When Datafiles Are On ASM And Using Oracle Managed Files
Doc ID: 734862.1
RECREATE CONTROLFILE, USERS ACCEPT SYS LOSE THEIR SYSDBA/SYSOPER PRIVS
Doc ID: 335971.1
Steps to recreate a Physical Standby Controlfile
Doc ID: 459411.1
http://www.freelists.org/post/oracle-l/Recreate-standby-controlfile-for-DB-that-uses-OMF-and-ASM
Steps to perform for Rolling forward a standby database using RMAN incremental backup when primary and standby are in ASM filesystem (Doc ID 836986.1)
-- DISK LOSS
Recover database after disk loss
Doc ID: Note:230829.1
Disk Lost in External Redundancy FLASH Diskgroup Having Controlfile and Redo Member
Doc ID: Note:387103.1
-- LOST DATAFILE
Note 1060605.6 Recover A Lost Datafile With No Backup
Note 1029252.6 How to resize a datafile
Note 30910.1 Recreating database objects
Note 1013221.6 Recovering from a lost datafile in a ROLLBACK tablespace
Note 198640.1 How to Recover from a Lost Datafile with Different Scenarios
How to 'DROP' a Datafile from a Tablespace
Doc ID: 111316.1
Common Causes and Solutions on ORA-1157 Error Found in Backup & Recovery
Doc ID: 184327.1
How to Recover from a Lost Datafile with Different Scenarios
Doc ID: 198640.1
-- REDO LOG
How To Recover Using The Online Redo Log (Doc ID 186137.1)
Loss Of Online Redo Log And ORA-312 And ORA-313 (Doc ID 117481.1)
-- RESETLOGS
Recovering READONLY tablespace backups made before a RESETLOGS Open
Doc ID: Note:266991.1
-- INCARNATION
RMAN RESTORE fails with RMAN-06023 or ORA-19505 or RMAN-06100 inspite of proper backups (Doc ID 457769.1)
RMAN RESTORE FAILS WITH RMAN-06023 BUT THERE ARE BACKUPS AVAILABLE [ID 965122.1]
RMAN-06023 when Duplicating a Database [ID 108883.1]
Rman Restore Fails With 'RMAN-06023: no backup ...of datafile .. to restore' Although Backup is Available [ID 793401.1]
RMAN-06023 DURING RMAN DUPLICATE [ID 414384.1]
ORA-19909 datafile 1 belongs to an orphan incarnation - http://www.the-playground.de/joomla//index.php?option=com_content&task=view&id=216&Itemid=29
Impact of Partial Recovery and subsequent resetlogs on daily Incrementally Updated Backups [ID 455543.1]
Recovery through resetlogs using User Managed Online Backups [ID 431816.1]
How to duplicate a database to a previous Incarnation [ID 293717.1]
RMAN restore of database fails with ORA-01180: Cannot create datafile 1 [ID 392237.1]
How to Recover Through a Resetlogs Command Using RMAN [ID 237232.1]
RMAN: Point-in-Time Recovery of a Backup From Before Last Resetlogs [ID 1070453.6]
How to recover an older incarnation without a controlfile from that time [ID 284510.1]
RMAN-6054 report during recover database [ID 880536.1]
RMAN-06054 While Recovering a Database in NOARCHIVELOG mode [ID 577939.1]
http://oraware.blogspot.com/2008/05/recovery-with-old-controlfilerecover.html
http://hemantoracledba.blogspot.com/2009/09/rman-can-identify-and-catalog-use.html
http://oracle.ittoolbox.com/groups/technical-functional/oracle-db-l/ora01190-controlfile-or-data-file-1-is-from-before-the-last-resetlogs-870241
-- READ ONLY
RMAN Backup With Skip Read Only Takes More Time
Doc ID: Note:561071.1
-- RESTORE
How To Restore From An Old Backupset Using RMAN?
Doc ID: 209214.1
RMAN : Consistent Backup, Restore and Recovery using RMAN
Doc ID: 162855.1
RMAN: Restoring an RMAN Backup to Another Node
Doc ID: Note:73974.1
-- RESTORE HIGHER PATCHSET
Restoring a database to a higher patchset
Doc ID: 558408.1
Oracle Database Upgrade Path Reference List
Doc ID: Note:730365.1
Database Server Upgrade/Downgrade Compatibility Matrix
Doc ID: Note:551141.1
-- CATALOG
RMAN: How to Query the RMAN Recovery Catalog
Doc ID: 98342.1
RMAN Troubleshooting Catalog Performance Issues
Doc ID: Note:748257.1
How To Catalog Multiple Archivelogs in Unix and Windows
Doc ID: Note:404515.1
-- FLASH RECOVERY AREA
Flash Recovery area - Space management Warning & Alerts
Doc ID: Note:305812.1
ENABLE/DISABLE ARCHIVELOG MODE AND FLASH RECOVERY AREA IN A DATABASE USING ASM
Doc ID: 468984.1
How To Delete Archive Log Files Out Of +Asm?
Doc ID: 300472.1
How do you prevent extra archivelog files from being created in the flash recovery area?
Doc ID: Note:353106.1
-- ORA-1157
Common Causes and Solutions on ORA-1157 Error Found in Backup & Recovery
Doc ID: Note:184327.1
-- Ora-19660
Restore Validate Database Always Fails Ora-19660
Doc ID: 353614.1
OERR: ORA 19660 some files in the backup set could not be verified
Doc ID: 49356.1
Corrupted Blocks Found During Restore of Backup with RMAN and TIVOLI ORA-19612
Doc ID: 181080.1
-- 8i RMAN
Note 50875.1 Getting Started with Server-Managed Recovery (SMR) and RMAN 8.0-8i
RMAN 8.0 to 8i - Getting Started
Doc ID: Note:120084.1
How To Show Rman Configuration Parameters on Oracle 8.1.7 ?
Doc ID: Note:725922.1
Maintaining V8.0 and V8.1 RMAN Repository
Doc ID: Note:125303.1
RMAN: How to Recover a Database from a Total Failure Using RMAN 8i
Doc ID: Note:121227.1
How To Use RMAN to Backup Archive Logs
Doc ID: Note:237407.1
-- INCREMENTAL, CUMULATIVE
How To Determine If A RMAN Backup Is Differential Or Cumulative
Doc ID: Note:356349.1
Does RMAN Oracle10g Db support Incremental Level 2 backups?
Doc ID: Note:733535.1
Incrementally Updated Backup In 10G
Doc ID: Note:303861.1
RMAN versus EXPORT Incremental backups
Doc ID: Note:123146.1
Merged Incremental Strategy creates backups larger than expected
Doc ID: Note:413265.1
Merged Incremental Backup Strategies
Doc ID: 745798.1
-- RETENTION POLICY
Rman backup retention policy
Doc ID: Note:462978.1
How to ensure that backup metadata is retained in the controlfile when setting a retention policy and an RMAN catalog is NOT used.
Doc ID: Note:461125.1
RMAN Delete Obsolete Command Deletes Archivelog Backups Inside Retention Policy
Doc ID: Note:734323.1
-- OBSOLETE
Delete Obsolete Does Not Delete Obsolete Backups
Doc ID: Note:314217.1
-- BACKUP OPTIMIZATION
RMAN 9i: Backup Optimization
Doc ID: Note:142962.1
-- LIST, REPORT
LIST and REPORT Commands in RMAN
Doc ID: Note:114284.1
-- FORMAT
What are the various % format code used during RMAN backups
Doc ID: Note:553927.1
-- CONTROL_FILE_RECORD_KEEP_TIME
Setting CONTROL_FILE_RECORD_KEEP_TIME For Incrementally Updated Backups
Doc ID: Note:728471.1
-- TAPE, MEDIA LIBRARY, SBT_LIBRARY=oracle.disksbt
RMAN Tape Simulation - virtual tape
http://www.appsdba.com/blog/?p=205
http://groups.google.com/group/oracle_dba_experts/browse_thread/thread/6990d83752256e20?pli=1
RMAN and Specific Media Managers Environment Variables.
Doc ID: Note:312737.1
Does Unused Block Compression Works With Tape ?
Doc ID: 565237.1
RMAN 10gR2 Tape vs Disk Backup Performance When Database is 99% Empty
Doc ID: 428344.1
How to Configure RMAN to Work with Netbackup for Oracle
Doc ID: Note:162355.1
-- COMPRESSION
A Complete Understanding of RMAN Compression
Doc ID: 563427.1
-- MEMORY CORRUPTION
FAQ Memory Corruption [ID 429380.1]
-- BLOCK CORRUPTIONS
Handling Oracle Block Corruptions in Oracle7/8/8i/9i/10g
Doc ID: Note:28814.1
CAUSES OF BLOCK CORRUPTIONS
Doc ID: 77589.1
BLOCK CORRUPTIONS ON ORACLE AND UNIX
Doc ID: 77587.1
TECH: Database Block Checking Features
Doc ID: 32969.1
DBMS_REPAIR example
Doc ID: Note:68013.1
FAQ: Physical Corruption
Doc ID: Note:403747.1
V$Database_Block_Corruption Does not clear after Block Recover Command
Doc ID: Note:422889.1
How to Format Corrupted Block Not Part of Any Segment
Doc ID: Note:336133.1
V$DATABASE_BLOCK_CORRUPTION Shows a File Which Does not Exist
Doc ID: Note:298137.1
RMAN 9i: Block-Level Media Recovery - Concept & Example
Doc ID: 144911.1
Does Block Recovery use Incremental Backups?? -- BLOCKRECOVER command will ONLY use archivelog backups to complete its recovery
Doc ID: 727706.1
HOW TO PERFORM BLOCK MEDIA RECOVERY (BMR) WHEN BACKUPS ARE NOT TAKEN BY RMAN.
Doc ID: 342972.1
How to Find All the Corrupted Objects in Your Database.
Doc ID: 472231.1
RMAN Does not Report a Corrupt Block if it is not Part of Any Segment
Doc ID: 463821.1
Note 336133.1 - How to Format Corrupted Block Not Part of Any Segment.
Note 269028.1 - DBV Reports Corruption Even After Drop/Recreate Object
Note 209691.1 - V$BACKUP_CORRUPTION Contains Information About Corrupt Blocks
How to Check Archivelogs for Corruption using RMAN
Doc ID: 377146.1
Warnings : Recovery is repairing media corrupt block
Doc ID: 213311.1
Is it possible to use RMAN Block Media Recovery to recover LOGICALLY corrupt blocks? <-- NO
Doc ID: 391120.1
TECH: Database Block Checking Features <-- with 11g
Doc ID: 32969.1
DBVerify Reports Blocks as 'influx - most likely media corrupt'
Doc ID: 468995.1
Meaning of the message "Block found already corrupt" when running dbverify
Doc ID: 139425.1
BLOCK CORRUPTIONS ON ORACLE AND UNIX
Doc ID: 77587.1
How To Check For Corrupt Or Invalid Archived Log Files
Doc ID: 177559.1
CORRUPT BLOCK INFO NOT REPORTED TO ALERT.LOG
Doc ID: 114357.1
Best Practices for Avoiding and Detecting Corruption
Doc ID: 428570.1
-----
Block Corruption FAQ
Doc ID: 47955.1
Physical and Logical Block Corruptions. All you wanted to know about it.
Doc ID: 840978.1
ORA-1578 Main Reference Index for Solutions
Doc ID: 830997.1
How to identify the corrupt Object reported by ORA-1578 / RMAN / DBVERIFY
Doc ID: 819533.1
Frequently Encountered Corruption Errors, Diagnostics and Resolution - Reference
Doc ID: 463479.1
Data Recovery Advisor -Reference Guide.
Doc ID: 466682.1
Extracting Data from a Corrupt Table using ROWID Range Scans in Oracle8 and higher
Doc ID: 61685.1
Some Statements Referencing a Table with WHERE Clause Fails with ORA-01578
Doc ID: 146851.1
Extracting Data from a Corrupt Table using SKIP_CORRUPT_BLOCKS or Event 10231
Doc ID: 33405.1
How to identify all the Corrupted Objects in the Database reported by RMAN
Doc ID: 472231.1
ORA-1578 / ORA-26040 Corrupt blocks by NOLOGGING - Error explanation and solution
Doc ID: 794505.1
ORA-1578 ORA-26040 in a LOB segment - Script to solve the errors
Doc ID: 293515.1
OERR: ORA-1578 "ORACLE data block corrupted (file # %s, block # %s)"
Doc ID: 18976.1
Diagnosing and Resolving 1578 reported on a Local Index of a Partitioned table
Doc ID: 432923.1
HOW TO TROUBLESHOOT AND RESOLVE an ORA-1110
Doc ID: 434013.1
Cannot Reuse a Corrupt Block in Flashback Mode, ORA-1578
Doc ID: 729433.1
ORA-01578, ORA-0122, ORA-01204: On Startup
Doc ID: 1041424.6
ORA-01578 After Recovering Database Running In NOARCHIVELOG Mode
Doc ID: 122266.1
Identify the corruption extension using RMAN/DBV/ANALYZE etc
Doc ID: 836658.1
"hcheck.sql" script to check for known problems in Oracle8i, Oracle9i, Oracle10g and Oracle 11g
Doc ID: 136697.1
Introduction to the "H*" Helper Scripts
Doc ID: 101466.1
"hout.sql" script to install the "hOut" helper package
Doc ID: 101468.1
ASM - Internal Handling of Block Corruptions
Doc ID: 416046.1
BLOCK CORRUPTIONS ON ORACLE AND UNIX
Doc ID: 77587.1
Introduction to the Corruption Category
Doc ID: 68117.1
Note 33405.1 Extracting Data from a Corrupt Table using SKIP_CORRUPT_BLOCKS or Event 10231
Note 34371.1 Extracting Data from a Corrupt Table using ROWID or Index Scans in Oracle7
Note 61685.1 Extracting Data from a Corrupt Table using ROWID Range Scans in Oracle8/8i
Note 1029883.6 Extracting Data from a Corrupt Table using SALVAGE Scripts / Programs
Note 97357.1 SALVAGE.PC - Oracle8i Pro*C Code to Extract Data from a Corrupt Table
Note 2077307.6 SALVAGE.PC - Oracle7 Pro*C Code to Extract Data from a Corrupt Table
Note 2064553.4 SALVAGE.SQL - PL/SQL Code to Extract Data from a Corrupt Table
ORA-1578, ORA-1110, ORA-26040 on Standby Database Using Index Subpartitions
Doc ID: 431435.1
FAQ: Physical Corruption
Doc ID: 403747.1
Note 250968.1 Block Corruption Error Messages in Alert Log File
How we identified and fixed the workflow tables corruption errors after the database restore
Doc ID: 736033.1
ORA-01578 'ORACLE data block corrupted' When Attempting to Drop a Materialized View
Doc ID: 454955.1
Cloned Olap Database Gets ORA-01578 Nologging
Doc ID: 374036.1
ORA-01578: AGAINST A NEW DATAFILE
Doc ID: 1068001.6
Query of Table Using Index Fails With ORA-01578
Doc ID: 153888.1
Extracting Datafile Blocks From ASM
Doc ID: 294727.1
ORA-1578: Oracle Data Block Corrupted (File # 148, Block # 237913)
Doc ID: 103845.1
Data Corruption fixes in Red Hat AS 2.1 e.24 kernel
Doc ID: 241820.1
TECH: Database Block Checking Features
Doc ID: 32969.1
Analyze Table Validate Structure Cascade Online Is Slow
Doc ID: 434857.1
ANALYZE INDEX VALIDATE STRUCTURE ONLINE DOES NOT POPULATE INDEX_STATS
Doc ID: 283974.1
Meaning of the message "Block found already corrupt" when running dbverify
Doc ID: 139425.1
RMAN Does not Report a Corrupt Block if it is not Part of Any Segment
Doc ID: 463821.1
How to Format Corrupted Block Not Part of Any Segment
Doc ID: 336133.1
DBV Reports Corruption Even After Drop/Recreate Object
Doc ID: 269028.1
TFTS: Converting DBA's (Database Addresses) to File # and Block #
Doc ID: 113005.1
Bug 7329252 - ORA-8102/ORA-1499/OERI[kdsgrp1] Index corruption after rebuild index ONLINE
Doc ID: 7329252.8
ORA-600 [qertbfetchbyrowid]
Doc ID: 300637.1
ORA-600 [qertbfetchbyuserrowid]
Doc ID: 809259.1
ORA-600 [kdsgrp1]
Doc ID: 285586.1
ORA-1499. Table/Index row count mismatch
Doc ID: 563070.1
-- BLOCK CORRUPTION PREVENTION
How To Use RMAN To Check For Logical & Physical Database Corruption
Doc ID: 283053.1
How to check for physical and logical database corruption using "backup validate check logical database" command for database on a non-archivelog mode
Doc ID: 466875.1
How To Check (Validate) If RMAN Backup(s) Are Good
Doc ID: 338607.1
SCHEMA VALIDATION UTILITY
Doc ID: 286619.1
11g New Feature V$Database_block_corruption Enhancements and Rman Validate Command
Doc ID: 471716.1
How to Check/Validate That RMAN Backups Are Good
Doc ID: 466221.1
Which Blocks Will RMAN Check For Corruption Or Include In A Backupset?
Doc ID: 561010.1
Best Practices for Avoiding and Detecting Corruption
Doc ID: 428570.1
v$DATABASE_BLOCK_CORRUPTION Reports Corruption Even After Tablespace is Dropped
Doc ID: 454431.1
V$DATABASE_BLOCK_CORRUPTION Shows a File Which Does not Exist
Doc ID: 298137.1
Performing a Test Backup (VALIDATE BACKUP) Using RMAN
Doc ID: 121109.1
-- DBV
DBVERIFY - Database file Verification Utility (7.3.2 - 10.2)
Doc ID: Note:35512.1
DBVERIFY enhancement - How to scan an object/segment
Doc ID: Note:139962.1
Extract rows from a CORRUPT table creating ROWID from DBA_EXTENTS
Doc ID: Note:422547.1
ORA-8103 Diagnostics and Solution
Doc ID: Note:268302.1
Init.ora Parameter "DB_BLOCK_CHECKING" Reference Note
Doc ID: Note:68483.1
ORA-00600 [510] and ORA-1578 Reported with DB_BLOCK_CHECKING Set to True
Doc ID: Note:456439.1
New Parameter DB_ULTRA_SAFE introduce In 11g
Doc ID: Note:465130.1
ORA-600s and possible corruptions using the RAC TCPIP Interconnect.
Doc ID: Note:244940.1
TECH: Database Block Checking Features
Doc ID: Note:32969.1
[8.1.5] (14) Initialization Parameters
Doc ID: Note:68895.1
Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data ?
Doc ID: Note:552424.1
-- RMAN ERRORS
RMAN-20020 Error after Registering Database Twice in a Session
Doc ID: Note:102776.1
Main Index of Common Causes for ORA-19511
Doc ID: 227517.1
-- STUCK RECOVERY
ORA-600 [3020] "Stuck Recovery"
Doc ID: Note:30866.1
Resolving ORA-600[3020] Raised During Recovery
Doc ID: Note:361172.1
Resolving ORA-00600 [3020] Against A Data Guard Database.
Doc ID: Note:470220.1
Stuck recovery of database ORA-00600[3020]
Doc ID: Note:283269.1
Trial Recovery
Doc ID: Note:283262.1
Bug 4594917 - Write IO error can cause incorrect file header checkpoint information
Doc ID: Note:4594917.8
ORA-00313 During RMAN Recovery
Doc ID: Note:437319.1
RMAN Tablespace Recovery Fails With ORA-00283 RMAN-11003 ORA-01579
Doc ID: Note:419692.1
RMAN Recovery Until Time Failed When Redo-Logs Missed - ORA-00313, ORA-00312 AND ORA-27037
Doc ID: Note:550077.1
RMAN-11003 and ORA-01153 When Doing Recovery through RMAN
Doc ID: Note:264113.1
ORA-600 [kccocx_01] Reported During Primary Database Shutdown
Doc ID: Note:466571.1
ORA-1122, ORA-1110, ORA-120X
Doc ID: Note:1011557.6
OERR: ORA 1205 not a datafile - type number in header is
Doc ID: Note:18777.1
Rman/Nsr Restore Fails. Attempt to Recover Results in ora-01205
Doc ID: Note:260150.1
-- CLONE / DUPLICATE
Database Cloning Process in case of Shutdown Abort
Doc ID: 428623.1
How to clone/duplicate a database with added datafile with no backup.
Doc ID: Note:292947.1
Answers To FAQ For Restoring Or Duplicating Between Different Versions And Platforms
Doc ID: Note:369644.1
RMAN Duplicate Database From RAC ASM To RAC ASM
Doc ID: Note:461479.1
Subject: RMAN-06023 DURING RMAN DUPLICATE
Doc ID: Note:414384.1
How To Make A Copy Of An Open Database For Duplication To A Different Machine
Doc ID: 224274.1
How to Make a Copy of a Database on the Same Unix Machine
Doc ID: 18070.1
Duplicate Database Without Connecting To Target And Without Using RMAN
Doc ID: 732625.1
Performing duplicate database with ASM/OMF/RMAN [ID 340848.1]
Article on How to do Rman Duplicate on ASM/RAC/OMF/Single Instance
Doc ID: 840647.1
Creating a physical standby from ASM primary
Doc ID: 787793.1
RMAN Duplicate Database From RAC ASM To RAC ASM
Doc ID: 461479.1
How To Create A Production (Full or Partial) Duplicate On The Same Host
Doc ID: 388424.1
-- DUPLICATE ERRORS
Rman Duplicate fails with ORA-19870 ORA-19587 ORA-17507
Doc ID: 469132.1
ORA-19870 Control File Not Found When Creating Standby Database With RMAN
Doc ID: 430621.1
ORA-19505 ORA-27037 FAILED TO IDENTIFY FILE
Doc ID: 444610.1
Database Instance Will Not Mount. Ora-19808
Doc ID: 391828.1
RMAN-06136 On Duplicate Database for Standby with OMF and ASM
Doc ID: 341591.1
-- RMAN RAC backup
RMAN: RAC Backup and Recovery using RMAN
Doc ID: Note:243760.1
HowTo Restore RMAN Disk backups of RAC Database to Single Instance On Another Node
Doc ID: 415579.1
-- MISSING ARCHIVELOG
NT: Online Backups
Doc ID: 41946.1
Which System Privileges are required for a User to Perform Backup Operator Tasks
Doc ID: 180019.1
Scripts To Perform Dynamic Hot/Online Backups
Doc ID: 152111.1
EVENT: 10231 "skip corrupted blocks on _table_scans_"
Doc ID: 21205.1
RECOVER A DATAFILE WITH MISSING ARCHIVELOGS
Doc ID: 418476.1
How to recover and open the database if the archivelog required for recovery is either missing, lost or corrupted? <--- FUJI
Doc ID: 465478.1
Incomplete Recover Fails with ORA-01194, ORA-01110 and Warning "Recovering from Fuzzy File".
Doc ID: 165671.1
Fuzzy File Warning When Recovering From Cold Backup
Doc ID: 103100.1
-- MISSING
RECREATE MISSING TABLESPACE AND DATAFILE
Doc ID: Note:2072805.6
DATAFILES ARE MISSING AFTER DATABASE IS OPEN IN RESETLOGS
Doc ID: Note:420730.1
-- CROSS PLATFORM
Migration of Oracle Instances Across OS Platforms
Doc ID: Note:733205.1
How To Use RMAN CONVERT DATABASE for Cross Platform Migration
Doc ID: Note:413586.1
Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions
Doc ID: Note:553337.1
10g : Transportable Tablespaces Across Different Platforms
Doc ID: Note:243304.1
-- TAG
How to use RMAN TAG name with different attributes or variables.
Doc ID: 580283.1
How to use Substitution Variables in RMAN commands
Doc ID: 427229.1
-- DATA GUARD ROLL FORWARD
Rolling a Standby Forward using an RMAN Incremental Backup in 10g
Doc ID: 290814.1
-- BACKUP ON RAW DEVICE
How To Backup Database When Files Are On Raw Devices/File System
Doc ID: 469716.1
-- BACKUP COPY OF DATABASE
RMAN10g: backup copy of database
Doc ID: 266980.1
-- ORACLE SECURE BACKUP
OSB Cloud Module - FAQ (Doc ID 740226.1)
How To Determine The Free Space On A Tape? (Doc ID 415026.1)
-- REDO LOG
How To Multiplex Redo Logs So That One Copy Will Be In FRA ?
Doc ID: 833553.1
-- TSPITR
Limitations of RMAN TSPITR
Doc ID: 304305.1
What Checks Oracle Does during Tablespace Point-In-Time Recovery (TSPITR)
Doc ID: 153981.1
Perform Tablespace Point-In-Time Recovery Using Transportable Tablespace
Doc ID: 100698.1
TSPITR:How to check dependency of the objects and identifying objects that will be lost after TSPITR
Doc ID: 304308.1
How to Recover a Drop Tablespace with RMAN
Doc ID: 455865.1
RMAN: Tablespace Point In Time Recovery (TSPITR) Procedure.
Doc ID: 109979.1
Automatic TSPITR in 10G RMAN -A walk Through
Doc ID: 335851.1
-- TRANSPORTABLE TABLESPACE
Transportable Tablespaces -- An Example to setup and use
Doc ID: 77523.1
-- RMAN TEMPFILES
Recovery Manager and Tempfiles
Doc ID: 305993.1
In that case use RMAN to take the backup to the filesystem. Here is an example; note that RMAN will not copy the online redo log members, so if this backup ever needs to be restored the database will have to be opened with RESETLOGS:
rman nocatalog target /
shutdown immediate
startup mount
backup as copy database format '/oracle/bkp/Df_%U';
copy current controlfile to '/oracle/bkp/%d_controlfile.ctl';
backup spfile format '/oracle/bkp/%d_spfile.ora';
shutdown immediate;
mkdir /u04/oradata/RAC/backup/
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u04/oradata/RAC/backup/%F';
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u04/oradata/RAC/backup/snapcf_RAC.f';
BACKUP FORMAT '/u04/oradata/RAC/backup/%d_D_%T_%u_s%s_p%p' DATABASE;
-- BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/u04/oradata/RAC/backup/%d_C_%U'; -- if creating standby database
BACKUP CURRENT CONTROLFILE FORMAT '/u04/oradata/RAC/backup/%d_C_%U';
SQL "ALTER SYSTEM ARCHIVE LOG CURRENT";
BACKUP FILESPERSET 10 ARCHIVELOG ALL FORMAT '/u04/oradata/RAC/backup/%d_A_%T_%u_s%s_p%p';
or we could copy the backupset using backup backupset...
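A minimal sketch of that last idea (destination path is an assumption): BACKUP BACKUPSET re-reads the existing backupsets and writes copies of them to another location.
{{{
rman target / <<'EOF'
# copy every existing backupset to a second destination
BACKUP BACKUPSET ALL FORMAT '/u05/rman_copy/%U';
EOF
}}}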
http://www.freelists.org/post/oracle-l/Experiencesthoughts-about-hardware-recommendations <-- this is the BIG question
http://structureddata.org/2009/12/22/the-core-performance-fundamentals-of-oracle-data-warehousing-balanced-hardware-configuration/
http://dsstos.blogspot.com/2009/09/download-link-for-storage-design-for.html
{{{
The Core Performance Fundamentals Of Oracle Data Warehousing – Balanced Hardware Configuration
http://structureddata.org/?p=716
Balanced Hardware Configuration
http://download.oracle.com/docs/cd/E11882_01/server.112/e10578/tdpdw_system.htm#CFHFJEDD
General Performance and I/O Topics
http://kevinclosson.wordpress.com/kevin-closson-index/general-performance-and-io-topics/
Oracle Real Application Clusters: Sizing and Capacity Planning Then and Now
http://www.oracleracsig.org/pls/apex/Z?p_url=RAC_SIG.download_my_file?p_file=1001042&p_id=1001042&p_cat=documents&p_user=KARAO&p_company=994323795175833
RAC Performance Experts Reveal All http://www.scribd.com/doc/6850001/RAC-Performance-Experts-Reveal-All
“Storage Design for Datawarehousing”
http://dsstos.blogspot.com/2009/09/download-link-for-storage-design-for.html
Oracle Database Capacity Planning
http://dsstos.blogspot.com/2008/08/oracle-database-capacity-planning.html
Simple Userland tools on Unix to help analyze application impact as a non-root user – Storage Subsystem
http://dsstos.blogspot.com/2008/07/simple-userland-tools-on-unix-to-help.html
}}}
''Docs'' http://wiki.bash-hackers.org/doku.php , ''FAQ'' http://mywiki.wooledge.org/BashFAQ
Sorting data by dates, numbers and much much more
http://prefetch.net/blog/index.php/2010/06/24/sorting-data-by-dates-numbers-and-much-much-more/
{{{
This is crazy useful, and I didn’t realize sort could be used to sort by date. I put this to use today, when I had to sort a slew of data that looked similar to this:
Jun 10 05:17:47 some_data_string
May 20 05:17:48 some_data_string2
Jun 17 05:17:49 some_data_string0
I was able to sort first by the month, and then by the day of the month (-k1M sorts field 1 as a month name, -k2n sorts field 2 numerically):
$ awk '{printf "%-3s %-2s %-8s %-50s\n", $1, $2, $3, $4 }' data | sort -k1M -k2n
May 20 05:17:48 some_data_string2
Jun 10 05:17:47 some_data_string
Jun 17 05:17:49 some_data_string0
}}}
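The same idea extended (the input file name "data" is an assumption): add the timestamp as a third key so rows from the same day stay ordered too.
{{{
# -k1M = first field as month name, -k2n = second field as numeric day,
# -k3  = third field as tie-breaker (HH:MM:SS sorts correctly as plain text)
sort -k1M -k2n -k3 data
}}}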
http://www.linuxconfig.org/Bash_scripting_Tutorial
http://www.oracle.com/technetwork/articles/servers-storage-dev/kornshell-1523970.html
* jmeter http://jakarta.apache.org/jmeter/
* httperf http://httperf.comlore.com/
* misc stuff http://www.idsia.ch/~andrea/sim/simvis.html
* geekbench http://browse.geekbench.ca/
http://blogs.netapp.com/virtualization/
SPEC - Standard Performance Evaluation Corporation http://www.spec.org/
spec sfs http://queue.acm.org/blogposting.cfm?id=11445
SPEC FAQ http://www.spec.org/spec/faq/
Ideas International - Benchmark Gateway
http://www.ideasinternational.com/benchmark/ben010.aspx
comp.benchmarks FAQ
http://pages.cs.wisc.edu/~thomas/comp.benchmarks.FAQ.html
PDS: The Performance Database Server
http://performance.netlib.org/performance/html/PDStop.html
Iozone Filesystem Benchmark
http://www.iozone.org/
How to measure I/O Performance on Linux (Doc ID 1931009.1)
File Format Benchmark Avro JSON ORC and Parquet
https://www.youtube.com/watch?v=tB28rPTvRiI
Hadoop Tutorial for Beginners - 32 Hive Storage File Formats: Sequence, RC, ORC, Avro, Parquet
https://www.youtube.com/watch?v=UXhyENkYokw
https://www.youtube.com/results?search_query=parquet+vs+orc
''What is big data?'' http://radar.oreilly.com/2012/01/what-is-big-data.html
http://www.slideshare.net/ksankar/the-art-of-big-data
''What is data science?'' http://radar.oreilly.com/2010/06/what-is-data-science.html
nutanix guy https://sites.google.com/site/mohitaron/research
''Big Data Videos''
http://www.zdnet.com/big-data-projects-is-the-hardware-infrastructure-overlooked-7000005940/
http://www.livestream.com/fbtechtalks/video?clipId=pla_a3d62538-1238-4202-a3be-e257cd866bb9
<<<
If you're a database guy you'll love this 2-hour video. Facebook engineers discuss performance, server provisioning, automatic server rebuilds, backup & recovery, online schema changes, sharding, HBase and Hadoop. The Q&A at the end is also interesting; at 1:28:46 Mark Callaghan answers why they chose MySQL over commercial databases that already have the features their engineers are hacking. Good stuff!
<<<
Index Is Not Used If Defined On a CHAR Column That Is TDE Encrypted And WHERE Clause Uses Binds (Doc ID 1470350.1)
{{{
The premises of this issue are as follows:
1. create an encrypted column of datatype CHAR and encryption NO SALT.
2. create an index on this encrypted column.
3. run a query that uses a WHERE clause with bind variables on the encrypted column
The query would access the table using Full Table Scan access path.
The issue does not reproduce if using another datatype for the encrypted column or if using literals instead of bind variables.
A succinct example is given below:
conn / as sysdba
drop user test cascade;
create user test identified by xxxxx;
grant dba to test;
conn test/xxxxx
create table tbl1
(
col1 char(20) encrypt no salt,
col2 number
);
create index tbl1_col1_ix on tbl1(COL1);
begin
for i in 1..10000 loop
insert into tbl1 values('col1'||i,i);
commit;
end loop;
end;
/
execute dbms_stats.gather_schema_stats('TEST');
conn test/xxxxx
variable v_col1 char(19);
execute :v_col1:='col110';
--Then generate either the 10046 or 10053 and check the resulting trace:
alter session set events='10053 trace name context forever, level 1';
alter session set events='10046 trace name context forever, level 8';
select t1.*
from tbl1 t1
where t1.col1=:v_col1;
============
Plan Table
============
-------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 25 | |
| 1 | TABLE ACCESS FULL | TBL1 | 100 | 2500 | 25 | 00:00:01 |
-------------------------------------+-----------------------------------+
Predicate Information:
----------------------
1 - filter(INTERNAL_FUNCTION("T1"."COL1")=:V_COL1)
select t1.*
from tbl1 t1
where t1.col1='col110';
============
Plan Table
============
---------------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
---------------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 1 | |
| 1 | TABLE ACCESS BY INDEX ROWID | TBL1 | 1 | 102 | 1 | 00:00:01 |
| 2 | INDEX RANGE SCAN | TBL1_COL1_IX| 1 | | 1 | 00:00:01 |
---------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("T1"."COL1"='COL110')
}}}
<<<
CAUSE
This issue has been investigated in bug:
Bug 13926287 - INDEXES ON TDE CHAR COLUMNS ARE NOT USED WITH CHAR BIND VARIABLES
The same problem has been investigated in: 17162592, 16197787, 14639274, 9672564.
This issue concerns the use of CHAR binds and columns with the decryptor to encryptor transformation.
The decryptor to encryptor transformation transforms a predicate of the form: decrypt(col) = expr to col = encrypt(expr). This transformation is not done if the literal (expr) size is longer than encrypted column size.
Equally, if a bind variable is used and the bind buffer length is longer than the encrypted length, the transformation will not occur.
If the column and the bind variables are both of type CHAR then these are both blank padded to the full extent of their current size.
With bind variable length >2000 , the above mentioned transformation does not take place, since encryption increases the length of the value. If the bind length is at the largest size and is encrypted then there is nowhere to go, hence this transformation cannot occur.
These restrictions can only be overcome by changing the used datatypes or encryption types or by manually enforcing the literals/bind variable length.
There will be no Oracle software patch addressing these limitations.
<<<
<<<
SOLUTION
Workarounds
* Instead of encrypting a column, place the whole table in an encrypted tablespace. This means that the decryption is not required in the predicate.
* Use a VARCHAR2 bind variable - this may not be valid in all cases
* Add a substr() function to the bind so that it stays within the maximum allowed limit (even with the added encryption length). Suggestion: SUBSTR(:B1,1,3900)
Enhancement request:
Bug 14236789 - INDEX USAGE ON TDE CHAR COLUMNS
has been raised to address this issue in the future Oracle releases.
<<<
{{{
SUBSTR(:B1,1,3900)
}}}
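A minimal sketch of the VARCHAR2-bind workaround against the tbl1 test case above (the rpad() is my addition to handle the CHAR blank-padding caveat the note warns about; it is not from the MOS note itself):
{{{
sqlplus test/xxxxx <<'EOF'
-- bind as VARCHAR2 instead of CHAR so the decrypt-to-encrypt transformation can occur;
-- rpad() blank-pads the value to the CHAR(20) column width so the comparison still matches
variable v_col1 varchar2(20)
execute :v_col1 := rpad('col110', 20)
select t1.* from tbl1 t1 where t1.col1 = :v_col1;
EOF
}}}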
! tanel nonshared
https://github.com/PoderC/vconf2021/blob/main/slides/02-Cursor-Sharing.pdf
https://github.com/PoderC/vconf2021/tree/main/scripts/cursor_reuse
! other references
https://jonathanlewis.wordpress.com/2007/01/05/bind-variables/
https://topic.alibabacloud.com/a/a-good-memory-is-better-than-a-rotten-pen-oracle-font-colorredsqlfont-optimization-2_1_46_30060534.html
https://blog.toadworld.com/why-my-execution-plan-has-not-been-shared-part-iii
https://hourim.wordpress.com/?s=bind+variable
https://www.slideshare.net/MohamedHouri
http://kerryosborne.oracle-guy.com/2009/03/bind-variable-peeking-drives-me-nuts/
http://www.pythian.com/news/867/stabilize-oracle-10gs-bind-peeking-behaviour-by-cutting-histograms/
https://oracle.readthedocs.io/en/latest/plsql/bind/bind-peeking.html
http://psoug.org/reference/bindvars.html
http://surachartopun.com/2008/12/todateoctmon-ora-01843-not-valid-month.html
http://www-03.ibm.com/systems/bladecenter/resources/benchmarks/whitepapers/
http://husnusensoy.wordpress.com/2008/07/28/readonly-tablespace-vs-block-change-tracking-file/
Data Loss on BCT
http://sai-oracle.blogspot.com/2010/09/beware-of-data-loss-in-bct-based-rman.html
<<<
"Reliability of BCT:
On 11.2.0.1 standby, I've seen managed standby recovery failing to start until BCT is reset at least while running the above tests. It doesn't seem like matured enough to be used on the physical standby. I'm working with Oracle support to get all these issues fixed.
As of 11.2.0.1, I don't recommend using BCT on the standby for running RMAN backups. I think it is pretty safe to use it on the primary database."
<<<
ORACLE 10G BLOCK CHANGE TRACKING INSIDE OUT (Doc ID 1528510.1)
{{{
You can enable change tracking with the following statement:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
Alternatively, you can specify location of block change tracking file:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/DB1/bct.ora';
-- or point it at an ASM diskgroup: ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+MYDG';
To disable:
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
View V$BLOCK_CHANGE_TRACKING can be queried to find out the status of change tracking in
the database.
}}}
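The status view mentioned above is worth querying after enabling; a minimal sketch:
{{{
sqlplus / as sysdba <<'EOF'
-- STATUS is ENABLED or DISABLED; FILENAME and BYTES are null while disabled
select status, filename, bytes from v$block_change_tracking;
EOF
}}}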
http://dsstos.blogspot.com/2009/07/map-disk-block-devices-on-linux-host.html
{{{
2014/12/04: <a href="http://karlarao.wordpress.com/2014/12/04/my-timetaskgoalhabit-ttgh-management/">my Time/Task/Goal/Habit (TTGH) management</a>
2013/09/22: <a href="http://karlarao.wordpress.com/2013/09/22/oow-and-oaktable-world-2013/">OOW and OakTable World 2013</a>
2013/05/23: <a href="http://karlarao.wordpress.com/2013/05/23/speaking-at-e4-2013-and-some-exadata-patents-good-stuff/">Speaking at E4 2013! … and some Exadata Patents good stuff</a>
2013/02/05: <a href="http://karlarao.wordpress.com/2013/02/05/rmoug-ioug-collaborate-kscope-and-e4-2013/">RMOUG, IOUG Collaborate, KSCOPE, and E4 2013</a>
2012/10/16: <a href="http://karlarao.wordpress.com/2012/10/16/oracle-big-data-appliance-first-boot/">Oracle Big Data Appliance First Boot</a>
2012/09/27: <a href="http://karlarao.wordpress.com/2012/09/27/oaktable-world-2012/">OakTable World 2012</a>
2012/06/29: <a href="http://karlarao.wordpress.com/2012/06/29/speaking-at-e4/">Speaking at E4!</a>
2012/06/29: <a href="http://karlarao.wordpress.com/2012/06/29/the-effect-of-asm-redundancyparity-on-readwrite-iops-slob-test-case-for-exadata-and-non-exa-environments/">The effect of ASM redundancy/parity on read/write IOPS – SLOB test case! for Exadata and non-Exa environments</a>
2012/05/14: <a href="http://karlarao.wordpress.com/2012/05/14/iosaturationtoolkit-v2-with-iorm-and-awesome-text-graph">IOsaturationtoolkit-v2 with IORM and AWESOME text graph</a>
2012/03/24: <a href="http://karlarao.wordpress.com/2012/03/24/fast-analytics-of-awr-top-events/">Fast Analytics of AWR Top Events</a>
2012/02/13: <a href="http://karlarao.wordpress.com/2012/02/13/rmoug-2012-training-days/">RMOUG 2012 training days</a>
2012/02/11: <a href="http://karlarao.wordpress.com/2012/02/11/sqltxplain-quick-tips-and-tricks-and-db-optimizer-vst/">SQLTXPLAIN quick tips and tricks and DB Optimizer VST</a>
2011/12/31: <a href="http://karlarao.wordpress.com/2011/12/31/easy-and-fast-environment-framework/">Easy and fast environment framework</a>
2011/12/06: <a href="http://karlarao.wordpress.com/2011/12/06/mining-emgc-notification-alerts">Mining EMGC Notification Alerts</a>
2011/09/21: <a href="http://karlarao.wordpress.com/2011/09/21/oracle-database-appliance-oda-installation-configuration/">Oracle Database Appliance (ODA) Installation / Configuration</a>
2011/07/18: <a href="http://karlarao.wordpress.com/2011/07/18/virtathon-mining-the-awr/">VirtaThon – Mining the AWR</a>
2011/07/14: <a href="http://karlarao.wordpress.com/2011/07/14/enkitec-university-exadata-courses-for-developers-and-dbas/">Enkitec University – Exadata Courses for Developers and DBAs</a>
2011/05/17: <a href="http://karlarao.wordpress.com/2011/05/17/nocoug-journal-ask-the-oracle-aces-why-is-my-database-slow/">NoCOUG Journal – Ask the Oracle ACEs – Why is my database slow?</a>
2011/03/23: <a href="http://karlarao.wordpress.com/2011/03/23/oracle-by-example-portal-now-shows-12g/">Oracle by Example portal now shows 12g</a>
2011/03/11: <a href="http://karlarao.wordpress.com/2011/03/11/hotsos-2011-mining-the-awr-repository-for-capacity-planning-visualization-and-other-real-world-stuff">Hotsos 2011 – Mining the AWR Repository for Capacity Planning, Visualization, and other Real World Stuff</a>
2011/01/30: <a href="http://karlarao.wordpress.com/2011/01/30/migrating-your-vms-from-vmware-to-virtualbox-on-a-netbook">Migrating your VMs from VMware to VirtualBox (on a Netbook)</a>
2010/12/21: <a href="http://karlarao.wordpress.com/2010/12/21/wheeew-i-am-now-a-redhat-certified-engineer">Wheeew, I am now a RedHat Certified Engineer!</a>
2010/11/07: <a href="http://karlarao.wordpress.com/2010/11/07/ill-be-speaking-at-hotsos-2011">I’ll be speaking at HOTSOS 2011!</a>
2010/10/07: <a href="http://karlarao.wordpress.com/2010/10/07/after-oow-my-laptop-broke-down-data-rescue-scenario">After OOW, my laptop broke down – data rescue scenario</a>
2010/09/24: <a href="http://karlarao.wordpress.com/2010/09/24/oracle-closed-world-and-unconference-presentations">Oracle Closed World and Unconference Presentations</a>
2010/09/20: <a href="http://karlarao.wordpress.com/2010/09/20/oow-2010-the-highlights">OOW 2010 - the highlights</a>
2010/09/12: <a href="http://karlarao.wordpress.com/2010/09/12/oow-2010-my-schedule">OOW 2010 - my schedule</a>
2010/08/31: <a href="http://karlarao.wordpress.com/2010/08/31/statistically-summarize-oracle-performance-data">Statistically summarize Oracle Performance data</a>
2010/07/27: <a href="http://karlarao.wordpress.com/2010/07/27/guesstimations">Guesstimations</a>
2010/07/25: <a href="http://karlarao.wordpress.com/2010/07/25/graphing-the-aas-with-perfsheet-a-la-enterprise-manager">Graphing the AAS with Perfsheet a la Enterprise Manager</a>
2010/07/05: <a href="http://karlarao.wordpress.com/2010/07/05/oracle-datafile-io-latency-part-1">Oracle datafile IO latency - Part 1</a>
2010/06/28: <a href="http://karlarao.wordpress.com/2010/06/28/the-not-a-problem-problem-and-other-related-stuff">The “Not a Problem” Problem and other related stuff</a>
2010/06/18: <a href="http://karlarao.wordpress.com/2010/06/18/oracle-mix-oow-2010-suggest-a-session">Oracle Mix - OOW 2010 Suggest-A-Session</a>
2010/05/30: <a href="http://karlarao.wordpress.com/2010/05/30/seeing-exadata-in-action">Seeing Exadata in action</a>
2010/04/10: <a href="http://karlarao.wordpress.com/2010/04/10/my-personal-wiki-karlarao-tiddlyspot-com">My Personal Wiki - karlarao.tiddlyspot.com</a>
2010/03/27: <a href="http://karlarao.wordpress.com/2010/03/27/ideas-build-off-ideas-making-use-of-social-networking-sites">“Ideas build off ideas”… making use of Social Networking sites</a>
2010/02/04: <a href="http://karlarao.wordpress.com/2010/02/04/devcon-luzon-2010">DEVCON Luzon 2010</a>
2010/02/01: <a href="http://karlarao.wordpress.com/2010/02/01/craig-shallahamer-is-now-blogging">Craig Shallahamer is now blogging!</a>
2010/01/31: <a href="http://karlarao.wordpress.com/2010/01/31/workload-characterization-using-dba_hist-tables-and-ksar">Workload characterization using DBA_HIST tables and kSar</a>
2009/12/31: <a href="http://karlarao.wordpress.com/2009/12/31/50-sql-performance-optimization-scenarios/">50+ SQL Performance Optimization scenarios</a>
2009/11/21: <a href="http://karlarao.wordpress.com/2009/11/21/rac-system-load-testing-and-test-plan/">RAC system load testing and test plan</a>
2009/11/03: <a href="http://karlarao.wordpress.com/2009/11/03/rhev-red-hat-enterprise-virtualization-is-out/">RHEV (Red Hat Enterprise Virtualization) is out!!!</a>
2009/08/15: <a href="http://karlarao.wordpress.com/2009/08/15/knowing-the-trend-of-deadlock-occurrences-from-the-alert-log">Knowing the trend of Deadlock occurrences from the Alert Log</a>
2009/07/30: <a href="http://karlarao.wordpress.com/2009/07/30/lucky-to-find-it">Lucky to find it..</a>
2009/06/07: <a href="http://karlarao.wordpress.com/2009/06/07/diagnosing-and-resolving-gc-block-lost">Diagnosing and Resolving “gc block lost”</a>
2009/05/08: <a href="http://karlarao.wordpress.com/2009/05/08/yast-on-oel">Yast on OEL</a>
2009/05/08: <a href="http://karlarao.wordpress.com/2009/05/08/understanding-the-scn">Understanding the SCN</a>
2009/04/20: <a href="http://karlarao.wordpress.com/2009/04/20/advanced-oracle-troubleshooting-by-tanel-poder-in-singapore">Advanced Oracle Troubleshooting by Tanel Poder in Singapore</a>
2009/04/06: <a href="http://karlarao.wordpress.com/2009/04/06/os-thread-startup">OS Thread Startup</a>
2009/04/04: <a href="http://karlarao.wordpress.com/2009/04/04/single-instance-and-rac-kernel-os-upgrade">Single Instance and RAC Kernel/OS upgrade</a>
2009/02/27: <a href="http://karlarao.wordpress.com/2009/02/27/security-forecasting-oracle-performance-and-some-stuff-to-post-soon">Security, Forecasting Oracle Performance and Some stuff to post… soon…</a>
2009/01/03: <a href="http://karlarao.wordpress.com/2009/01/03/migrate-from-windows-xp-64bit-to-ubuntu-intrepid-ibex-810-64bit">Migrate from Windows XP 64bit to Ubuntu Intrepid Ibex 8.10 64bit</a>
2008/11/07: <a href="http://karlarao.wordpress.com/2008/11/07/oraclevalidatedinstallationonoel45">Oracle-Validated RPM on OEL 4.5</a>
<h1>By Category</h1>
<h3><span style="text-decoration:underline;">Performance/Troubleshooting</span></h3>
<ul>
<li>Capacity Planning
<ul>
<li><a href="http://karlarao.wordpress.com/2010/01/31/workload-characterization-using-dba_hist-tables-and-ksar">Workload characterization using DBA_HIST tables and kSar</a></li>
<li><a href="http://karlarao.wordpress.com/2010/08/31/statistically-summarize-oracle-performance-data">Statistically summarize Oracle Performance data</a></li>
</ul>
</li>
<li>Database Tuning
<ul>
<li><a href="http://karlarao.wordpress.com/2010/07/05/oracle-datafile-io-latency-part-1">Oracle datafile IO latency - Part 1</a></li>
</ul>
</li>
<li>Hardware and Operating System
<ul>
<li>Exadata
<ul>
<li><a href="http://karlarao.wordpress.com/2010/05/30/seeing-exadata-in-action">Seeing Exadata in action</a></li>
<li><a href="http://karlarao.wordpress.com/2012/05/14/iosaturationtoolkit-v2-with-iorm-and-awesome-text-graph">IOsaturationtoolkit-v2 with IORM and AWESOME text graph</a></li>
<li><a href="http://karlarao.wordpress.com/2012/06/29/the-effect-of-asm-redundancyparity-on-readwrite-iops-slob-test-case-for-exadata-and-non-exa-environments/">The effect of ASM redundancy/parity on read/write IOPS – SLOB test case! for Exadata and non-Exa environments</a></li>
</ul>
</li>
<li>Oracle Database Appliance
<ul>
<li><a href="http://karlarao.wordpress.com/2011/09/21/oracle-database-appliance-oda-installation-configuration/">Oracle Database Appliance (ODA) Installation / Configuration</a></li>
</ul>
</li>
<li>Oracle Big Data Appliance
<ul>
<li><a href="http://karlarao.wordpress.com/2012/10/16/oracle-big-data-appliance-first-boot/">Oracle Big Data Appliance First Boot</a></li>
</ul>
</li>
<li>VirtualBox
<ul>
<li><a href="http://karlarao.wordpress.com/2011/01/30/migrating-your-vms-from-vmware-to-virtualbox-on-a-netbook">Migrating your VMs from VMware to VirtualBox (on a Netbook)</a></li>
</ul>
</li>
</ul>
</li>
<li>SQL Tuning
<ul>
<li><a href="http://karlarao.wordpress.com/2009/12/31/50-sql-performance-optimization-scenarios/">50+ SQL Performance Optimization scenarios</a></li>
<li><a href="http://karlarao.wordpress.com/2012/02/11/sqltxplain-quick-tips-and-tricks-and-db-optimizer-vst/">SQLTXPLAIN quick tips and tricks and DB Optimizer VST</a> </li>
</ul>
</li>
<li>Troubleshooting & Internals
<ul>
<li>Wait Events
<ul>
<li><a href="http://karlarao.wordpress.com/2009/04/06/os-thread-startup">OS Thread Startup</a></li>
<li><a href="http://karlarao.wordpress.com/2009/06/07/diagnosing-and-resolving-gc-block-lost">Diagnosing and Resolving “gc block lost”</a></li>
</ul>
</li>
<li>Deadlock
<ul>
<li><a href="http://karlarao.wordpress.com/2009/08/15/knowing-the-trend-of-deadlock-occurrences-from-the-alert-log">Knowing the trend of Deadlock occurrences from the Alert Log</a></li>
</ul>
</li>
<li>Systematic Approach and Method
<ul>
<li><a href="http://karlarao.wordpress.com/2010/06/28/the-not-a-problem-problem-and-other-related-stuff">The “Not a Problem” Problem and other related stuff</a></li>
<li><a href="http://karlarao.wordpress.com/2010/07/25/graphing-the-aas-with-perfsheet-a-la-enterprise-manager">Graphing the AAS with Perfsheet a la Enterprise Manager</a></li>
<li><a href="http://karlarao.wordpress.com/2010/07/27/guesstimations">Guesstimations</a></li>
<li><a href="http://karlarao.wordpress.com/2011/05/17/nocoug-journal-ask-the-oracle-aces-why-is-my-database-slow/">NoCOUG Journal – Ask the Oracle ACEs – Why is my database slow?</a></li>
<li><a href="http://karlarao.wordpress.com/2012/03/24/fast-analytics-of-awr-top-events/">Fast Analytics of AWR Top Events</a></li>
</ul>
</li>
</ul>
</li>
</ul>
<h3><span style="text-decoration:underline;">RAC</span></h3>
<ul>
<li>Upgrade
<ul>
<li><a href="http://karlarao.wordpress.com/2009/04/04/single-instance-and-rac-kernel-os-upgrade">Single Instance and RAC Kernel/OS upgrade</a></li>
</ul>
</li>
<li>Benchmark and Testing
<ul>
<li><a href="http://karlarao.wordpress.com/2009/11/21/rac-system-load-testing-and-test-plan/">RAC system load testing and test plan</a></li>
</ul>
</li>
<li>Performance
<ul>
<li><a href="http://karlarao.wordpress.com/2009/06/07/diagnosing-and-resolving-gc-block-lost">Diagnosing and Resolving “gc block lost”</a></li>
</ul>
</li>
</ul>
<h3><span style="text-decoration:underline;">Enterprise Manager</span></h3>
<ul>
<li>EM troubleshooting
<ul>
<li><a href="http://karlarao.wordpress.com/2011/12/06/mining-emgc-notification-alerts">Mining EMGC Notification Alerts</a></li>
</ul>
</li>
</ul>
<h3><span style="text-decoration:underline;">Linux</span></h3>
<ul>
<li>RedHat
<ul>
<li>RHEV
<ul>
<li><a href="http://karlarao.wordpress.com/2009/11/03/rhev-red-hat-enterprise-virtualization-is-out/">RHEV (Red Hat Enterprise Virtualization) is out!!!</a></li>
</ul>
</li>
<li>RHCE
<ul>
<li><a href="http://karlarao.wordpress.com/2010/12/21/wheeew-i-am-now-a-redhat-certified-engineer">Wheeew, I am now a RedHat Certified Engineer!</a></li>
</ul>
</li>
</ul>
</li>
<li>OEL
<ul>
<li><a href="http://karlarao.wordpress.com/2008/11/07/oraclevalidatedinstallationonoel45">Oracle-Validated RPM on OEL 4.5</a></li>
<li><a href="http://karlarao.wordpress.com/2009/05/08/yast-on-oel">Yast on OEL</a></li>
</ul>
</li>
<li>Ubuntu
<ul>
<li><a href="http://karlarao.wordpress.com/2009/01/03/migrate-from-windows-xp-64bit-to-ubuntu-intrepid-ibex-810-64bit">Migrate from Windows XP 64bit to Ubuntu Intrepid Ibex 8.10 64bit</a></li>
</ul>
</li>
<li>Fedora
<ul>
<li><a href="http://karlarao.wordpress.com/2010/10/07/after-oow-my-laptop-broke-down-data-rescue-scenario">After OOW, my laptop broke down – data rescue scenario</a></li>
</ul>
</li>
</ul>
<h3><span style="text-decoration:underline;">Reviews</span></h3>
<ul>
<li><a href="http://karlarao.wordpress.com/2009/02/27/security-forecasting-oracle-performance-and-some-stuff-to-post-soon">Security, Forecasting Oracle Performance and Some stuff to post… soon…</a></li>
<li><a href="http://karlarao.wordpress.com/2009/04/20/advanced-oracle-troubleshooting-by-tanel-poder-in-singapore">Advanced Oracle Troubleshooting by Tanel Poder in Singapore</a></li>
<li><a href="http://karlarao.wordpress.com/2009/07/30/lucky-to-find-it">Lucky to find it..</a></li>
<li><a href="http://karlarao.wordpress.com/2010/02/01/craig-shallahamer-is-now-blogging">Craig Shallahamer is now blogging!</a></li>
</ul>
<h3><span style="text-decoration:underline;">Backup and Recovery</span></h3>
<ul>
<li><a href="http://karlarao.wordpress.com/2009/05/08/understanding-the-scn">Understanding the SCN</a></li>
</ul>
<h3><span style="text-decoration:underline;">Community</span></h3>
<ul>
<li><a href="http://karlarao.wordpress.com/2010/02/04/devcon-luzon-2010">DEVCON Luzon 2010</a></li>
<li><a href="http://karlarao.wordpress.com/2010/03/27/ideas-build-off-ideas-making-use-of-social-networking-sites">“Ideas build off ideas”… making use of Social Networking sites</a></li>
<li><a href="http://karlarao.wordpress.com/2010/04/10/my-personal-wiki-karlarao-tiddlyspot-com">My Personal Wiki - karlarao.tiddlyspot.com</a></li>
<li><a href="http://karlarao.wordpress.com/2010/06/18/oracle-mix-oow-2010-suggest-a-session">Oracle Mix - OOW 2010 Suggest-A-Session</a></li>
<li><a href="http://karlarao.wordpress.com/2010/09/12/oow-2010-my-schedule">OOW 2010 - my schedule</a></li>
<li><a href="http://karlarao.wordpress.com/2010/09/20/oow-2010-the-highlights">OOW 2010 - the highlights</a></li>
<li><a href="http://karlarao.wordpress.com/2010/09/24/oracle-closed-world-and-unconference-presentations">Oracle Closed World and Unconference Presentations</a></li>
<li><a href="http://karlarao.wordpress.com/2010/11/07/ill-be-speaking-at-hotsos-2011">I’ll be speaking at HOTSOS 2011!</a></li>
<li><a href="http://karlarao.wordpress.com/2011/03/11/hotsos-2011-mining-the-awr-repository-for-capacity-planning-visualization-and-other-real-world-stuff">Hotsos 2011 – Mining the AWR Repository for Capacity Planning, Visualization, and other Real World Stuff</a></li>
<li><a href="http://karlarao.wordpress.com/2011/03/23/oracle-by-example-portal-now-shows-12g/">Oracle by Example portal now shows 12g</a></li>
<li><a href="http://karlarao.wordpress.com/2011/07/14/enkitec-university-exadata-courses-for-developers-and-dbas/">Enkitec University – Exadata Courses for Developers and DBAs</a></li>
<li><a href="http://karlarao.wordpress.com/2011/07/18/virtathon-mining-the-awr/">VirtaThon – Mining the AWR</a></li>
<li><a href="http://karlarao.wordpress.com/2011/12/31/easy-and-fast-environment-framework/">Easy and fast environment framework</a></li>
<li><a href="http://karlarao.wordpress.com/2012/02/13/rmoug-2012-training-days/">RMOUG 2012 training days</a></li>
<li><a href="http://karlarao.wordpress.com/2012/06/29/speaking-at-e4/">Speaking at E4!</a></li>
<li><a href="http://karlarao.wordpress.com/2012/09/27/oaktable-world-2012/">OakTable World 2012</a></li>
<li><a href="http://karlarao.wordpress.com/2013/02/05/rmoug-ioug-collaborate-kscope-and-e4-2013/">RMOUG, IOUG Collaborate, KSCOPE, and E4 2013</a></li>
<li><a href="http://karlarao.wordpress.com/2013/05/23/speaking-at-e4-2013-and-some-exadata-patents-good-stuff/">Speaking at E4 2013! … and some Exadata Patents good stuff</a></li>
<li><a href="http://karlarao.wordpress.com/2013/09/22/oow-and-oaktable-world-2013/">OOW and OakTable World 2013</a></li>
<li><a href="http://karlarao.wordpress.com/2014/12/04/my-timetaskgoalhabit-ttgh-management/">my Time/Task/Goal/Habit (TTGH) management</a></li>
</ul>
}}}
http://thecomingstorm.us/smf/index.php?topic=323.0
http://golanzakai.blogspot.com/2012/01/openvswitch-with-virtualbox.html
http://www.evernote.com/shard/s48/sh/f5866bf1-97c9-46b1-8830-205d7fa4cde6/ba217e6d8c137d6e30917e1bf375a519
http://www.brendangregg.com/dtrace.html
http://www.brendangregg.com/DTrace/dtrace_oneliners.txt
the paper
http://queue.acm.org/detail.cfm?id=2413037
here's the video of the USE method
http://dtrace.org/blogs/brendan/2012/09/21/fisl13-the-use-method/
Rappler's Mood Navigator
https://www.evernote.com/shard/s48/sh/2a8e6b17-e499-49cb-a1b7-2944be0eb88e/967899d7670f3e470380b8206bb184c5
POWERLINK - buffer io error
http://knowledgebase.emc.com/emcice/documentDisplay.do;jsessionid=E5086F44F54525E1C3E2930AD5ABB7D9?docType=1006&clusterName=DefaultCluster&resultType=5002&groupId=1&page=&docProp=$solution_id&docPropValue=emc187631&passedTitle=null
http://knowledgebase.emc.com/emcice/documentDisplay.do?docType=1006&clusterName=DefaultCluster&resultType=5002&groupId=1&page=&docProp=$solution_id&docPropValue=emc199974&passedTitle=null
http://knowledgebase.emc.com/emcice/documentDisplay.do?docType=1006&clusterName=DefaultCluster&resultType=5002&groupId=1&page=&docProp=$solution_id&docPropValue=emc157139&passedTitle=null
http://knowledgebase.emc.com/emcice/documentDisplay.do?docType=1006&clusterName=DefaultCluster&resultType=5002&groupId=1&page=&docProp=$solution_id&docPropValue=emc203991&passedTitle=null
{{{
"Linux host devices log I/O errors during server reboot"
ID: emc157139
URL:
http://knowledgebase.emc.com/emcice/documentDisplay.do?docType=1006&clusterName=DefaultCluster&resultType=5002&groupId=1&page=&docProp=$solution_id&docPropValue=emc157139&passedTitle=null
Knowledgebase Solution
Environment: OS: Red Hat Linux
Environment: Product: CLARiiON CX-series
Environment: Product: CLARiiON CX3-series
Environment: EMC SW: PowerPath
Problem: Linux host devices log I/O errors during server reboot.
Problem: Dmesg log or messages log have:
Buffer I/O error on device sdm, logical block 2
Buffer I/O error on device sdm, logical block 3
Buffer I/O error on device sdm, logical block 4
Buffer I/O error on device sdm, logical block 5
Device sdm not ready.
end_request: I/O error, dev sdm, sector 16
Buffer I/O error on device sdm, logical block 2
Device sdm not ready.
end_request: I/O error, dev sdm, sector 128
Problem: Output of powermt display dev=all shows:
Pseudo name=emcpowera
CLARiiON ID=CK200063301081 [SG2]
Logical device ID=600601604EE419004A308F0C5AD0DB11 [LUN 61]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
0 qla2xxx sdm SP B0 active alive 0 0
Change: Server rebooted for maintenance
Root Cause: The devices logging the I/O error are assigned to SP-B, but currently "owned" by SP-A. The CLARiiON array is an active-passive array so this is normal behavior when multiple paths are utilized.
Fix: These messages logged at boot up may be ignored.
}}}
-- HASH GROUP BY
_GBY_HASH_AGGREGATION_ENABLED=FALSE
_UNNEST_SUBQUERY = FALSE
Per Metalink, even in patch set 10.2.0.4 PeopleSoft has a workaround for the bug that uses the hidden parameters above
Wrong Results Possible on 10.2 When New "HASH GROUP BY" Feature is Used
Doc ID: Note:387958.1
Bug 4604970 - Wrong results with 'hash group by' aggregation enabled
Doc ID: Note:4604970.8
ORA-00600 [32695] [hash aggregation can't be done] During Insert.
Doc ID: Note:729447.1
Bug 6471770 - OERI [32695] [hash aggregation can't be done] from Hash GROUP BY
Doc ID: Note:6471770.8
10.2.0.3 Patch Set - List of Bug Fixes by Problem Type [ID 391116.1]
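Here's a minimal sketch of applying that workaround at the session level (underscore parameters are hidden, so they must be quoted; set them system-wide only under Oracle Support's guidance):
{{{
-- disable the 10.2 hash group by feature (workaround for bug 4604970)
alter session set "_gby_hash_aggregation_enabled" = false;
-- the PeopleSoft workaround also disables subquery unnesting
alter session set "_unnest_subquery" = false;
}}}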
-- running out of OS kernel I/O resources
WARNING:1 Oracle process running out of OS kernelI/O resources
Doc ID: 748607.1
Bug 6687381 - "WARNING: Oracle process running out of OS kernel I/O resources" messages
Doc ID: 6687381.8
-- speedup delete
http://www.devx.com/dbzone/10MinuteSolution/22191/1954
http://www.dba-oracle.com/t_delete_performance_speed.htm
http://dbaforums.org/oracle/index.php?showtopic=534
https://forums.oracle.com/forums/thread.jspa?threadID=987536
http://www.mail-archive.com/oracle-l@fatcity.com/msg15356.html
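One pattern that keeps coming up in these threads (a sketch with hypothetical table/column names, not taken from any single link above): when deleting most of a big table, CTAS the survivors and swap instead of deleting row by row:
{{{
-- copy only the rows you want to keep
create table t_keep nologging as
  select * from t where purge_flag = 'N';
-- recreate indexes, constraints, and grants on t_keep, then swap:
drop table t;
rename t_keep to t;
}}}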
* Oracle Utilities Customer Care and Billing (CC&B) - utilities companies like DLC
* Oracle Utilities (CC&B, MDM, ...) - Develop, Deploy and Debug with Eclipse and the SDK
** https://www.youtube.com/watch?v=fuPzFCBEEWg
! batch
Oracle Utilities Customer Care And Billing Batch Operations and Configuration Guide https://docs.oracle.com/cd/E18733_01/pdf/E18372_01.pdf
Cloudera Data Platform — the industry’s first enterprise data cloud
https://www.cloudera.com/campaign/try-cdp-public-cloud.html
! before - 6K seconds
{{{
SELECT COUNT (DISTINCT AIR.STORE_ID) STR_COUNT_STYLE
FROM
ALGO_INPUT_FOR_REVIEW AIR,
ALLOC_BATCH_LI_DETAILS LI_DETAILS
WHERE
COALESCE (AIR.ALLOCATED_UNIT_QTY,
AIR.SUGGESTED_ALLOCATED_QTY) > 0 AND
AIR.BATCH_ID = LI_DETAILS.BATCH_ID AND
AIR.BATCH_LINE_NO = LI_DETAILS.BATCH_LINE_NO AND
AIR.ITEM_ID = LI_DETAILS.ITEM_ID AND
AIR.IMMINENT_RELEASE = 'Y' AND
AIR.BATCH_ID = 1426 AND
ALLOCATION_SEQ_NO = (SELECT MAX (AIR2.ALLOCATION_SEQ_NO)
FROM ALGO_INPUT_FOR_REVIEW AIR2
WHERE AIR2.BATCH_ID = AIR.BATCH_ID)
}}}
! after - below 1 sec
{{{
WITH max_seq AS
(SELECT /*+ MATERIALIZE */
batch_id,
MAX(allocation_seq_no) max_allocation_seq_no
FROM algo_input_for_review
WHERE batch_id = :b_batch_id
GROUP BY batch_id),
algo_slice AS
(SELECT /*+ MATERIALIZE */
aifr.*
FROM algo_input_for_review aifr
WHERE (aifr.batch_id, aifr.allocation_seq_no) IN (SELECT batch_id, max_allocation_seq_no FROM max_seq)
AND aifr.imminent_release = 'Y'
)
SELECT COUNT (DISTINCT air.store_id) str_count_style
FROM algo_slice air,
alloc_batch_li_details li_details
WHERE COALESCE (air.allocated_unit_qty, air.suggested_allocated_qty) > 0
AND air.batch_id = li_details.batch_id
AND air.batch_line_no = li_details.batch_line_no
AND air.item_id = li_details.item_id;
}}}
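The rewrite likely wins because the original correlated subquery recomputes MAX(allocation_seq_no) against ALGO_INPUT_FOR_REVIEW for every candidate row, while the WITH version materializes that max once and then slices the table by (batch_id, allocation_seq_no) before the join. Note it also swaps the literal 1426 for the :b_batch_id bind.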
Lawrence To - COE List of database outages
https://docs.google.com/fileview?id=0B5H46jS7ZPdJNGUxNmNiYWQtZGYxZC00OWFhLWEzMmMtYThlYTlhNjQzNjU3&hl=en
Lawrence To - COE Outage Prevention, Detection, And Repair
https://docs.google.com/fileview?id=0B5H46jS7ZPdJNjIzMDNlZjQtYjgyZi00M2M4LWE4OTUtNDFkMDUwYzQ2MjA4&hl=en
http://hackingexpose.blogspot.com/2012/05/oracle-wont-patch-four-year-old-zero.html
http://www.freelists.org/post/oracle-l/Oracle-Security-Alert-for-CVE20121675-10g-extended-support,3
https://blogs.oracle.com/security/entry/security_alert_for_cve_2012
http://asanga-pradeep.blogspot.com/2012/05/using-class-of-secure-transport-cost-to.html
http://seclists.org/fulldisclosure/2012/Apr/343
http://seclists.org/fulldisclosure/2012/Apr/204
Using Class of Secure Transport (COST) to Restrict Instance Registration in Oracle RAC (Doc ID 1340831.1)
2s8c16t - 2 sockets, 8 cores, 16 threads
1s4c8t - 1 socket, 4 cores, 8 threads
-list of xeon CPUs
http://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors
CPU CPI
Intel Xeon Phi Coprocessor High Performance Programming http://goo.gl/Ycri3u
Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors, Part 2: Understanding and Using Hardware Events http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-2-understanding
Simultaneous Multi-Threading - CPI http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.genprogc/doc/genprogc/smt.htm
HOWTO processor numbers http://www.intel.com/content/www/us/en/processors/processor-numbers.html
E7 family http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-e7-family-performance-model-numbers-paper.pdf
SKUs bin example http://www.anandtech.com/show/5475/intel-releases-seven-sandy-bridge-cpus
''Intel price list''
Intel® Xeon® Processor 5500 Series price list http://ark.intel.com/products/series/39565/Intel-Xeon-Processor-5500-Series
''AMD price list''
http://www.amd.com/us/products/pricing/Pages/server-opteron.aspx
{{{
Exadata updating to Xeon E5 (Fall 2012)..(X3-2,X3-2L,X2-4)
QPI replaces FSB
memory is still on a parallel bus, which doesn't mingle well with the serial bus
Intel 2-level memory; Nehalem is the death of the FSB
}}}
http://en.wikipedia.org/wiki/Uncore
<<<
The uncore is a term used by Intel to describe the functions of a microprocessor that are not in the Core, but which are essential for Core performance.[1] The Core contains the components of the processor involved in executing instructions, including the ALU, FPU, L1 and L2 cache. Uncore functions include QPI controllers, L3 cache, snoop agent pipeline, on-die memory controller, and Thunderbolt controller.[2] Other bus controllers such as PCI Express and SPI are part of the chipset.[3]
The Intel Uncore design stems from its origin as the Northbridge. The design of the Intel Uncore reorganizes the functions critical to the Core, making them physically closer to the Core on-die, thereby reducing their access latency. Functions from the Northbridge which are less essential for the Core, such as PCI Express or the Power Control Unit (PCU), are not integrated into the Uncore -- they remain as part of the Chipset.[4]
''Specifically, the micro-architecture of the Intel Uncore is broken down into a number of modular units. The main Uncore interface to the Core is the Cache Box (CBox), which interfaces with the Last Level Cache (LLC) and is responsible for managing cache coherency. Multiple internal and external QPI links are managed by Physical Layer units, referred to as PBox. Connections between the PBox, CBox, and one or more iMC's (MBox) are managed by System Config Controller (UBox) and a Router (RBox). [5]''
Removal of serial bus controllers from the Intel Uncore further enables increased performance by allowing the Uncore clock (UCLK) to run at a base of 2.66 GHz, with upwards overclocking limits in excess of 3.44 GHz.[6] This increased clock rate allows the Core to access critical functions (such as the iMC) with significantly less latency (typically reducing Core access to DRAM by 10ns or more).
<<<
''CPU Core'' - contains the components of the processor involved in executing instructions
* ALU
* FPU
* L1 and L2 cache
''CPU Uncore''
* QPI controllers
* L3 cache
* snoop agent pipeline
* on-die memory controller
* Thunderbolt
''Chipset'' http://en.wikipedia.org/wiki/Chipset
* PCI Express
* SPI
http://www.evernote.com/shard/s48/sh/3ca3db4e-6cc9-4139-9548-716d22a9ec32/ab43be72457b9ff412efd509f58ca1e6
The Xeon E5520: Popular for VMWare http://h30507.www3.hp.com/t5/Eye-on-Blades-Blog-Trends-in/The-Xeon-E5520-Popular-for-VMWare/ba-p/79934#.Uvmtn0JdWig
https://software.intel.com/en-us/videos/what-is-persistent-memory-persistent-memory-programming-series
http://www.pcgamer.com/rumor-intel-may-release-3d-xpoint-system-memory-in-2018/
https://itpeernetwork.intel.com/new-breakthrough-persistent-memory-first-public-demo/
<<<
this could come as a new switch/parameter on new Exadata versions to enable the use of the hardware feature, just like what they did before with HW flash compression on the flash devices. Or it could come enabled by default.
But for sure this is a big boost in performance; look at that microsecond difference in latency.
<<<
<<<
it's my belief that the new memory cache in Exadata is actually a development based on 3D XPoint, i.e. memory that can be written to and persists.
<<<
<<<
3D XPoint (cross point) memory, which will be sold under the name Optane
<<<
<<<
Looks like the application/DB vendor should make code changes to take advantage of the new memory layer. Without code changes that use the persistent memory structures within the kernel code of the DB software (like what was done in SQL Server 2016 for very fast transaction/redo log writes), it looks like we cannot easily add this super fast layer.
So, I think the question is the adoption of this new persistent memory layer within the software.
Some examples I see are
https://channel9.msdn.com/Shows/Data-Exposed/SQL-Server-2016-and-Windows-Server-2016-SCM--FAST ( check details from 8:00 Min)
https://software.intel.com/en-us/videos/a-c-example-persistent-memory-programming-series
<<<
{{{
top - 12:14:35 up 10 days, 10:42, 24 users, load average: 20.15, 19.97, 19.14
Tasks: 351 total, 1 running, 350 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.3%us, 27.7%sy, 1.7%ni, 40.7%id, 27.1%wa, 0.2%hi, 0.2%si, 0.0%st
Mem: 16344352k total, 6098504k used, 10245848k free, 1912k buffers
Swap: 20021240k total, 988764k used, 19032476k free, 83860k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12442 root 15 0 1135m 4704 3068 S 36.1 0.0 162:59.73 /usr/lib/virtualbox/VirtualBox --comment x_x3 --startvm 94756484-d2d5-4bdb
12413 root 15 0 1196m 5004 3196 S 30.3 0.0 67:55.23 /usr/lib/virtualbox/VirtualBox --comment x_x2 --startvm cc54fb4c-170b-430a
12384 root 15 0 1195m 7660 3248 S 26.1 0.0 162:04.52 /usr/lib/virtualbox/VirtualBox --comment x_x1 --startvm e266cad2-403f-4d98
3972 root 15 0 60376 4588 1524 S 7.7 0.0 583:04.20 /usr/bin/ssh -x -oForwardAgent no -oPermitLocalCommand no -oClearAllForwardings
1053 root 15 0 1526m 5496 3048 S 4.9 0.0 386:27.97 /usr/lib/virtualbox/VirtualBox --comment windows7 --startvm 3da776bd-1d5e-4eec-
3971 root 18 0 54300 1020 848 D 1.6 0.0 49:15.64 scp -rpv 20111015-backup 192.168.0.100 /DataVolume/shares/Public/Backup
12226 root 15 0 251m 9876 2268 S 0.6 0.1 686:00.02 /usr/lib/nspluginwrapper/npviewer.bin --plugin /usr/lib/mozilla/plugins/libflas
12786 root 15 0 1476m 4624 3072 S 0.6 0.0 6:38.29 /usr/lib/virtualbox/VirtualBox --comment x_db1 --startvm 1c3b929d-bdbd-40da-8
12947 root 15 0 1478m 4360 2904 S 0.5 0.0 7:36.47 /usr/lib/virtualbox/VirtualBox --comment x_db2 --startvm f3e1060d-28f5-4a72-8
4620 root 15 0 76600 5500 1252 S 0.2 0.0 78:13.20 Xvnc :1 -desktop desktopserver.localdomain:1 (root) -httpd /usr/share/vnc/class
4729 root 18 0 543m 13m 3668 D 0.2 0.1 9:41.13 nautilus --no-default-window --sm-client-id default3
5808 root 15 0 348m 2576 1324 S 0.2 0.0 39:54.52 /usr/lib/virtualbox/VBoxSVC --auto-shutdown
5754 oracle 16 0 260m 1220 988 S 0.1 0.0 0:25.12 gnome-terminal
5800 root 15 0 112m 856 756 S 0.1 0.0 18:51.71 /usr/lib/virtualbox/VBoxXPCOMIPCD
5899 root 15 0 303m 6264 1860 S 0.1 0.0 15:42.25 gnome-terminal
13941 root 15 0 12892 1216 768 R 0.1 0.0 0:05.06 top -c
29960 root 16 0 109m 6716 1176 S 0.1 0.0 82:39.70 /usr/bin/perl -w /usr/bin/collectl --all -o T -o D
30089 root 15 0 109m 4336 1072 S 0.1 0.0 33:21.60 /usr/bin/perl -w /usr/bin/collectl -sD --verbose -o T -o D
1 root 15 0 10368 88 60 S 0.0 0.0 0:04.05 init [5]
2 root RT -5 0 0 0 S 0.0 0.0 0:00.02 [migration/0]
3 root 34 19 0 0 0 S 0.0 0.0 11:11.08 [ksoftirqd/0]
4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 [watchdog/0]
5 root RT -5 0 0 0 S 0.0 0.0 0:00.35 [migration/1]
6 root 34 19 0 0 0 S 0.0 0.0 0:00.62 [ksoftirqd/1]
7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 [watchdog/1]
8 root RT -5 0 0 0 S 0.0 0.0 0:01.26 [migration/2]
9 root 34 19 0 0 0 S 0.0 0.0 0:00.72 [ksoftirqd/2]
10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 [watchdog/2]
11 root RT -5 0 0 0 S 0.0 0.0 0:04.89 [migration/3]
12 root 34 19 0 0 0 S 0.0 0.0 0:00.63 [ksoftirqd/3]
13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 [watchdog/3]
14 root RT -5 0 0 0 S 0.0 0.0 0:02.66 [migration/4]
15 root 34 19 0 0 0 S 0.0 0.0 0:41.68 [ksoftirqd/4]
16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 [watchdog/4]
17 root RT -5 0 0 0 S 0.0 0.0 0:01.27 [migration/5]
18 root 34 19 0 0 0 S 0.0 0.0 0:12.36 [ksoftirqd/5]
19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 [watchdog/5]
root@192.168.0.101's password:
Last login: Fri Oct 21 10:43:53 2011 from desktopserver.localdomain
[root@desktopserver ~]# vmstat 1 100000 | while read line; do echo "`date +%T`" "$line" ; done
12:13:36 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
12:13:36 r b swpd free buff cache si so bi bo in cs us sy id wa st
12:13:36 4 3 988016 10329664 1916 70672 4 1 502 322 5 4 2 4 91 2 0
12:13:36 4 3 988016 10316972 1964 79892 4592 0 15488 4 4689 19386 4 28 47 22 0
12:13:37 5 3 987760 10309344 2000 82368 5500 0 16196 52 4555 16869 4 28 48 20 0
12:13:38 3 3 987636 10314420 1776 62736 3456 0 57144 8 2967 13880 3 28 51 17 0
12:13:44 3 5 987636 10424500 1416 18448 5032 0 35004 3060 5930 24333 4 32 42 22 0
12:13:47 2 27 987820 10446888 1508 17316 4632 14944 14096 16088 6657 13410 3 21 31 46 0
12:13:47 0 34 988132 10471332 1540 14196 3460 10840 13916 10892 4853 9797 2 15 47 37 0
12:13:47 2 31 988132 10458716 1616 21952 8076 1768 17948 1768 3684 9470 2 6 62 30 0
12:13:47 3 27 988372 10466144 1588 15752 6540 2916 10832 2968 2154 6603 1 21 57 21 0
12:13:47 1 27 988372 10466100 1588 16508 9128 16 18148 20 1970 6116 2 23 56 18 0
12:13:47 4 25 988368 10440804 1636 28680 13864 0 27728 32 3783 10942 2 24 53 21 0
12:13:47 3 19 988368 10410040 1784 44260 14644 0 33052 0 5570 18623 3 20 45 32 0
12:13:48 4 6 988328 10381236 1848 56596 16680 0 29676 0 4648 23356 4 23 39 34 0
12:13:49 3 6 988328 10356104 1872 67584 14004 0 27300 40 4966 23598 4 30 42 24 0
12:13:50 4 5 987876 10332532 1928 77784 13932 0 25780 0 4908 19443 4 28 47 21 0
12:13:51 5 6 987876 10336848 1760 68392 9140 0 82780 0 3112 13723 3 29 45 23 0
12:13:52 5 5 987876 10356808 1752 47148 5464 0 18388 1440 4893 17567 5 29 48 19 0
12:13:53 4 4 987820 10353080 1880 45368 6124 0 18932 0 4608 17808 4 30 48 18 0
12:13:54 6 2 987696 10344684 1900 56192 4452 0 15092 44 4694 22556 4 29 48 18 0
12:14:01 3 23 988248 10454908 1472 14144 5692 15532 26980 23396 10868 28611 4 22 33 41 0
12:14:02 2 36 989052 10478864 1552 11596 6976 14452 15120 14924 4457 9650 2 25 41 32 0
12:14:02 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
12:14:02 r b swpd free buff cache si so bi bo in cs us sy id wa st
12:14:02 2 34 989076 10468140 1536 16912 16380 608 42292 696 7441 19332 3 19 48 30 0
12:14:02 3 33 989076 10449392 1620 23468 13628 0 21540 0 2065 5908 1 17 49 32 0
12:14:02 1 22 989072 10427880 1736 28984 16260 0 24148 40 2129 7560 1 11 44 45 0
12:14:02 4 19 989072 10409308 1764 34976 13268 0 20720 0 2642 11492 2 16 54 28 0
12:14:02 4 10 989072 10316360 1884 54384 13684 0 35704 0 4805 22982 4 27 43 26 0
12:14:03 3 5 989072 10362300 1776 57220 12468 0 84288 16 4556 16173 4 30 44 23 0
12:14:04 3 6 988756 10359512 1824 43856 17520 0 34060 340 5176 23298 4 26 46 24 0
12:14:05 4 7 988748 10345436 1832 55780 8660 0 21276 0 4631 19355 4 28 38 30 0
12:14:06 4 5 988700 10331388 1844 62976 12920 0 25012 0 5087 19602 4 29 49 18 0
12:14:07 5 3 988520 10325812 1900 69584 4964 0 16312 4 4738 17187 4 32 45 19 0
12:14:08 3 4 988520 10368964 1792 38452 5004 0 17408 2020 5246 18092 5 29 50 16 0
12:14:16 1 37 988864 10450680 1448 14028 3684 13940 18928 17804 8262 20373 3 24 29 44 0
12:14:16 0 37 989068 10476748 1500 10940 7156 14072 13328 14112 4870 6545 0 4 56 39 0
12:14:16 0 49 989080 10404760 1556 23028 17332 3652 41824 3652 4791 9224 1 2 64 33 0
12:14:16 1 37 989076 10409020 1680 24412 12204 0 32676 16 2168 5833 1 14 49 35 0
12:14:16 2 24 989076 10381452 1784 42528 14900 0 35860 0 2040 6524 1 18 43 38 0
12:14:16 2 20 989060 10370832 1788 76880 14364 0 59880 24 3810 13617 3 11 54 33 0
12:14:16 3 8 989060 10358380 1848 78412 14528 0 32416 0 4995 28265 3 21 41 34 0
12:14:17 5 5 989040 10358372 1916 72364 6144 0 18348 0 4715 16760 4 29 39 27 0
12:14:18 4 7 989004 10354776 1980 61908 13700 0 26612 204 4820 16973 4 29 43 24 0
12:14:19 3 4 988660 10345956 1992 60504 12496 0 27176 108 5166 21841 4 32 46 18 0
on /vbox... not blocked
------------------------------------------------------------------------------------------------------------------------------
[root@desktopserver ~]# vmstat 1 100000 | while read line; do echo "`date +%T`" "$line" ; done
12:28:04 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
12:28:04 r b swpd free buff cache si so bi bo in cs us sy id wa st
12:28:04 5 5 967424 10157364 1728 125884 4 1 505 322 6 1 2 4 91 2 0
12:28:05 5 5 967424 10113688 1816 167580 36 0 39196 20 11806 58927 12 48 25 15 0
12:28:06 6 3 967424 10101692 1764 178888 104 0 33756 92 10547 55163 11 44 26 19 0
12:28:07 6 2 967424 9993612 1896 284384 12 0 120744 0 8168 36067 8 38 33 21 0
12:28:08 4 3 967424 9989976 1836 289252 36 0 28224 2084 9152 46527 9 41 29 20 0
12:28:09 4 3 967424 10186920 1592 94984 20 0 21336 96740 8117 44754 8 39 38 16 0
12:28:10 3 5 967424 10080256 1572 140400 236 0 48252 24100 11487 57187 12 43 28 17 0
12:28:11 6 3 967424 9993084 1772 285304 28 0 108504 68 7980 34287 8 37 40 15 0
12:28:12 6 2 967424 9986940 1776 290368 64 0 38116 0 11581 56771 11 44 31 13 0
12:28:13 5 3 967424 9895152 1828 342476 48 0 60348 0 10379 45136 10 42 30 19 0
12:28:15 5 4 967424 9838808 1952 436660 284 0 94468 0 8581 41748 9 37 38 16 0
12:28:15 6 5 967424 10095720 1840 181592 80 0 34988 124936 11126 53636 11 44 25 20 0
12:28:16 4 2 967424 10077976 1612 165864 48 0 60840 32 9095 46720 9 40 30 22 0
12:28:17 4 3 967424 10053744 1720 226140 84 0 69404 0 11419 54791 11 44 27 18 0
12:28:18 6 1 967424 10165980 1652 114008 0 0 72700 60800 12525 59365 12 46 29 13 0
12:28:19 6 5 967424 10079536 1552 201004 12 0 91700 0 8773 41036 9 37 36 18 0
12:28:20 5 5 967424 10010352 1640 267340 0 0 36484 0 11164 49569 11 41 35 13 0
12:28:21 5 2 967424 9946732 1720 278668 0 0 81480 40 10572 49893 10 43 34 13 0
12:28:22 6 3 967424 9941164 1812 336472 92 0 73448 820 8063 35725 8 34 38 20 0
12:28:23 7 2 967424 10185156 1708 95096 0 0 37880 116020 11820 56886 11 45 30 14 0
12:28:24 4 4 967424 10158376 1532 122232 16 0 29000 14540 9389 47092 9 41 33 17 0
12:28:25 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
12:28:25 r b swpd free buff cache si so bi bo in cs us sy id wa st
12:28:25 9 3 967424 9995728 1720 282168 0 0 109168 0 5062 24050 5 32 50 14 0
12:28:26 6 3 967424 9975200 1764 302068 8 0 35616 0 10937 48511 11 43 32 15 0
12:28:27 6 3 967424 9898512 1844 376736 44 0 37016 40 11216 47423 11 42 33 14 0
12:28:28 5 2 967424 9777288 1944 464060 28 0 89372 32 7675 44210 9 38 39 14 0
12:28:29 7 4 967424 10187020 1508 93940 52 0 49944 188124 7343 41070 9 40 32 19 0
12:28:30 6 2 967424 10167236 1580 114732 36 0 40932 64 12227 56547 12 39 35 13 0
12:28:31 7 3 967424 10081364 1664 200636 56 0 91096 0 8624 34942 9 34 40 17 0
12:28:32 8 2 967424 10022828 1788 231756 20 0 40728 36 10431 51118 11 39 33 17 0
12:28:33 8 3 967424 10182900 1696 97704 0 0 58244 1756 10384 56330 10 45 31 14 0
12:28:34 2 4 967424 10162252 1548 117156 20 0 83572 83360 7049 33628 6 37 39 17 0
12:28:35 3 3 967424 10191104 1544 89604 100 0 2944 31372 2563 13785 3 28 48 21 0
12:28:36 11 4 967424 10173104 1600 108220 36 0 33268 0 10110 52401 10 41 34 14 0
12:28:37 4 3 967424 10116640 1648 134780 0 0 40816 40 11156 54719 11 44 32 13 0
12:28:38 5 4 967424 10185980 1640 95592 12 0 116940 24 8661 35493 8 36 38 18 0
12:28:39 6 2 967424 10133704 1520 145244 0 0 38928 232 11766 61009 12 45 27 16 0
12:28:40 8 3 967424 10188060 1544 92580 68 0 24604 76548 8546 41263 9 41 32 19 0
12:28:41 8 3 967424 10126928 1512 155672 12 0 119696 0 8124 36208 8 39 39 15 0
12:28:42 4 3 967424 10183592 1492 97764 0 0 43424 48 12891 52804 12 48 29 10 0
12:28:43 5 3 967424 10160868 1512 118668 20 0 34336 10932 10721 54931 10 46 31 13 0
12:28:44 8 2 967424 9993748 1708 283296 60 0 115504 0 6847 32959 6 36 39 19 0
12:28:45 7 3 967424 9973180 1752 304788 0 0 39256 12 11889 48368 12 43 32 12 0
12:28:46 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
12:28:46 r b swpd free buff cache si so bi bo in cs us sy id wa st
12:28:46 8 2 967424 9871024 1792 342204 44 0 45900 36 12519 51742 12 43 32 13 0
12:28:47 8 0 967268 9869416 1900 407064 136 0 70860 0 4509 25553 8 34 41 16 0
top - 12:29:22 up 10 days, 10:57, 24 users, load average: 7.40, 10.08, 14.93
Tasks: 353 total, 2 running, 351 sleeping, 0 stopped, 0 zombie
Cpu(s): 8.4%us, 39.8%sy, 1.9%ni, 33.2%id, 15.1%wa, 0.6%hi, 0.9%si, 0.0%st
Mem: 16344352k total, 6264212k used, 10080140k free, 1468k buffers
Swap: 20021240k total, 967256k used, 19053984k free, 170788k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12786 root 15 0 1495m 31m 3424 S 84.1 0.2 9:43.24 /usr/lib/virtualbox/VirtualBox --comment x_db1 --startvm 1c3b929d-bdbd-40da-8
12413 root 15 0 1196m 5904 3352 S 81.7 0.0 77:33.67 /usr/lib/virtualbox/VirtualBox --comment x_x2 --startvm cc54fb4c-170b-430a
12442 root 15 0 1136m 6028 3324 S 81.1 0.0 172:35.79 /usr/lib/virtualbox/VirtualBox --comment x_x3 --startvm 94756484-d2d5-4bdb
12384 root 15 0 1195m 8712 3360 S 68.9 0.1 172:34.70 /usr/lib/virtualbox/VirtualBox --comment x_x1 --startvm e266cad2-403f-4d98
14504 root 15 0 60440 7376 2540 S 43.8 0.0 0:33.65 /usr/bin/ssh -x -oForwardAgent no -oPermitLocalCommand no -oClearAllForwardings
3972 root 15 0 60376 4596 1524 R 15.8 0.0 584:54.46 /usr/bin/ssh -x -oForwardAgent no -oPermitLocalCommand no -oClearAllForwardings
1053 root 15 0 1527m 12m 3500 S 12.2 0.1 387:51.13 /usr/lib/virtualbox/VirtualBox --comment windows7 --startvm 3da776bd-1d5e-4eec-
14503 root 18 0 53884 1904 1452 D 9.2 0.0 0:06.67 scp 1122.tar.bz2 oracle@db1 ~oracle
3971 root 18 0 54300 1056 864 D 3.6 0.0 49:38.29 scp -rpv 20111015-backup 192.168.0.100 /DataVolume/shares/Public/Backup
12226 root 16 0 251m 75m 2408 S 3.6 0.5 686:16.31 /usr/lib/nspluginwrapper/npviewer.bin --plugin /usr/lib/mozilla/plugins/libflas
486 root 10 -5 0 0 0 D 1.3 0.0 25:52.02 [kswapd0]
12947 root 15 0 1478m 9692 3372 S 1.3 0.1 7:47.68 /usr/lib/virtualbox/VirtualBox --comment x_db2 --startvm f3e1060d-28f5-4a72-8
4620 root 15 0 73236 12m 2380 S 1.0 0.1 78:17.59 Xvnc :1 -desktop desktopserver.localdomain:1 (root) -httpd /usr/share/vnc/class
14428 root 18 0 109m 17m 1964 S 0.7 0.1 0:01.79 /usr/bin/perl -w /usr/bin/collectl --all -o T -o D
8 root RT -5 0 0 0 S 0.3 0.0 0:01.33 [migration/2]
20 root RT -5 0 0 0 S 0.3 0.0 0:01.00 [migration/6]
180 root 10 -5 0 0 0 S 0.3 0.0 0:05.36 [kblockd/0]
5014 root 15 0 348m 1216 956 S 0.3 0.0 2:08.20 /usr/libexec/mixer_applet2 --oaf-activate-iid=OAFIID:GNOME_MixerApplet_Factory
14559 root 15 0 12892 1320 824 R 0.3 0.0 0:00.03 top -c
1 root 15 0 10368 88 60 S 0.0 0.0 0:04.08 init [5]
2 root RT -5 0 0 0 S 0.0 0.0 0:00.02 [migration/0]
3 root 34 19 0 0 0 S 0.0 0.0 11:11.08 [ksoftirqd/0]
4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 [watchdog/0]
5 root RT -5 0 0 0 S 0.0 0.0 0:00.44 [migration/1]
6 root 34 19 0 0 0 S 0.0 0.0 0:00.62 [ksoftirqd/1]
7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 [watchdog/1]
[root@desktopserver stage]# scp 1122.tar.bz2 oracle@db1:~oracle
oracle@db1's password:
1122.tar.bz2 68% 2116MB 23.4MB/s 00:40 ETA
}}}
! Setup
<<<
1) download the cputoolkit at http://karlarao.wordpress.com/scripts-resources/
2) untar, then modify the orion_3_fts.sh under the aas30 folder
{{{
oracle@desktopserver.local:/home/oracle/dba/benchmark/cputoolkit/aas30:dw
$ ls -ltr
total 16
-rwxr-xr-x 1 oracle dba 315 Sep 27 22:32 saturate
-rwxr-xr-x 1 oracle dba 159 Sep 27 23:10 orion_3_ftsall.sh
-rwxr-xr-x 1 oracle dba 236 Sep 27 23:10 orion_3_ftsallmulti.sh
-rwxr-xr-x 1 oracle dba 976 Nov 26 15:47 orion_3_fts.sh
oracle@desktopserver.local:/home/oracle/dba/benchmark/cputoolkit/aas30:dw
$ cat orion_3_fts.sh
# This is the main script
export DATE=$(date +%Y%m%d%H%M%S%N)
sqlplus -s /NOLOG <<! &
connect / as sysdba
declare
rcount number;
begin
-- 3 iterations of (heavy LIO query + 60s sleep); bump the loop count for a longer run, e.g. 10 loops for roughly 10 minutes of workload
for j in 1..3 loop
-- lotslios by Tanel Poder
select /*+ cputoolkit ordered
use_nl(b) use_nl(c) use_nl(d)
full(a) full(b) full(c) full(d) */
count(*)
into rcount
from
sys.obj$ a,
sys.obj$ b,
sys.obj$ c,
sys.obj$ d
where
a.owner# = b.owner#
and b.owner# = c.owner#
and c.owner# = d.owner#
and rownum <= 10000000;
dbms_lock.sleep(60);
end loop;
end;
/
exit;
!
}}}
3) run the workload
{{{
oracle@desktopserver.local:/home/oracle/dba/benchmark/cputoolkit/aas30:dw
$ ./saturate 16 dw
}}}
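If I'm reading the toolkit right, the two arguments are the number of looping sessions to fork and the target instance/connect identifier, so this spawns 16 concurrent copies of the FTS workload against dw.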
<<<
! Instrumentation
<<<
this will show you pretty much 8+ CPUs being used
{{{
spool snapper.txt
@snapper out 1 120 "select sid from v$session where status = 'ACTIVE'"
spool off
less snapper.txt | grep -B6 "CPU"
}}}
Of course, before every run do a begin snap, then run the test case, then do an end snap... then compare the output to snapper.
You'll see that snapper is able to catch the fly-by CPU load.
{{{
exec dbms_workload_repository.create_snapshot;
execute statspack.snap;
@?/rdbms/admin/awrrpt
@?/rdbms/admin/spreport
}}}
<<<
awr_topevents_v2.sql - added "CPU wait" (new in 11g) to include "unaccounted DB Time" on high run queue workloads http://goo.gl/trwKp, http://twitpic.com/89hp4p
this script was pretty useful because you don't see this CPU wait on the usual AWR reports
''What could possibly cause cpu wait?''
From my performance work and the workloads I've seen, here are the possible reasons so far:
! if you are asking for more CPU work than the number of CPUs
{{{
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
12 0 1054352 377816 767340 1990420 0 0 532 260 15924 11174 92 8 0 0 0
13 2 1054352 373904 767400 1990432 0 0 524 0 21159 10178 93 7 0 0 0
12 1 1054352 373284 767544 1990476 0 0 768 78 17628 11605 92 7 0 0 0
12 1 1054352 373904 767552 1990480 0 0 736 80 16470 12939 95 4 0 0 0
14 1 1054352 372532 767756 1990408 0 0 876 0 17323 13067 92 7 0 0 0
12 1 1054352 324776 767768 2017136 0 0 26957 206 24215 12566 95 5 0 0 0
14 0 1054352 320924 767788 2017168 0 0 796 136 21818 12009 94 5 0 0 0
14 1 1054352 324900 767944 2017180 0 0 836 40 17699 12674 95 5 0 0 0
}}}
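Note the run queue ("r") column holding at 12-14 while us+sy is pegged near 100% and wa sits at 0: this is pure CPU starvation, not IO wait.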
! from the AAS investigation: hundreds of users forked at the same time doing select * from a table, without enough CPU to service that surge of processes, causing a high "b" (blocked on IO) column in vmstat
{{{
$ vmstat 1 1000
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 277 1729904 467564 603852 5965768 0 0 1165 67 0 0 5 22 71 1 0
2 275 1729904 465944 603856 5965772 0 0 521776 2676 15712 17116 6 5 15 74 0
3 274 1729904 495324 603868 5965400 0 0 543848 48 10366 47365 14 5 1 80 0
2 275 1729904 478732 603880 5966036 0 0 616776 248 10361 40782 13 5 0 82 0
3 276 1729904 473764 603880 5966296 0 0 538416 816 10809 16695 7 3 0 90 0
0 276 1729904 473136 603880 5966300 0 0 620120 16 15006 15223 13 3 10 74 0
1 275 1729904 485808 603880 5966300 0 0 552696 0 8953 16632 5 3 12 80 0
0 275 1729904 486204 603880 5966308 0 0 536784 52 11397 15096 5 3 1 90 0
3 274 1729904 492916 603880 5966312 0 0 556352 56 10594 15988 7 5 2 86 0
}}}
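Here the signature is the opposite: the blocked ("b") column shows ~275 processes with wa at 74-90%, while "r" stays low because the sessions are stuck waiting on IO rather than runnable.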
! PGA usage maxes out the memory, kswapd kicks in, and the server starts to swap like crazy, causing high CPU wait IO
<<<
{{{
top - 12:58:20 up 132 days, 42 min, 2 users, load average: 13.68, 10.22, 9.07
Tasks: 995 total, 42 running, 919 sleeping, 0 stopped, 34 zombie
Cpu(s): 48.5%us, 28.4%sy, 0.0%ni, 10.5%id, 11.2%wa, 0.0%hi, 1.3%si, 0.0%st
Mem: 98848968k total, 98407164k used, 441804k free, 852k buffers
Swap: 25165816k total, 2455968k used, 22709848k free, 383132k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13483 oracle 25 0 12.9g 509m 43m R 80.1 0.5 214:25.43 oraclemtaprd111 (LOCAL=NO)
24308 oracle 25 0 13.4g 1.0g 97m R 77.1 1.1 15:58.80 oraclemtaprd111 (LOCAL=NO)
16227 oracle 25 0 13.4g 1.0g 95m R 74.1 1.1 1312:47 oraclemtaprd111 (LOCAL=NO)
1401 root 11 -5 0 0 0 R 67.8 0.0 113:21.15 [kswapd0]
--
top - 12:59:48 up 132 days, 44 min, 2 users, load average: 116.16, 43.81, 20.96
Tasks: 985 total, 73 running, 879 sleeping, 0 stopped, 33 zombie
Cpu(s): 8.6%us, 90.1%sy, 0.0%ni, 0.6%id, 0.7%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 98848968k total, 98407396k used, 441572k free, 2248k buffers
Swap: 25165816k total, 2645544k used, 22520272k free, 370780k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
32349 oracle 18 0 9797m 1.1g 33m S 493.3 1.2 0:36.76 oraclebiprd2 (LOCAL=NO)
29495 oracle 15 0 216m 26m 11m S 466.6 0.0 3:01.86 /u01/app/11.2.0/grid/bin/diskmon.bin -d -f
32726 oracle 16 0 8788m 169m 37m R 447.6 0.2 0:24.86 oraclebiprd2 (LOCAL=NO)
32338 oracle 18 0 9525m 905m 42m R 407.0 0.9 0:33.20 oraclebiprd2 (LOCAL=NO)
--
top - 12:59:54 up 132 days, 44 min, 2 users, load average: 107.27, 44.31, 21.37
Tasks: 991 total, 16 running, 942 sleeping, 0 stopped, 33 zombie
Cpu(s): 30.3%us, 3.6%sy, 0.0%ni, 14.3%id, 51.6%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 98848968k total, 98167188k used, 681780k free, 5264k buffers
Swap: 25165816k total, 2745440k used, 22420376k free, 369676k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1401 root 10 -5 0 0 0 S 77.8 0.0 114:36.03 [kswapd0] <-- KSWAPD kicked in
19163 oracle 15 0 2152m 72m 16m S 74.9 0.1 9:42.45 /u01/app/11.2.0/grid/bin/oraagent.bin
3394 oracle 15 0 436m 23m 14m S 33.8 0.0 12:23.44 /u01/app/11.2.0/grid/bin/oraagent.bin
2171 root 16 0 349m 28m 12m S 28.6 0.0 1:50.29 /u01/app/11.2.0/grid/bin/orarootagent.bin
> vmstat 1 5000
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 69 9893340 442000 9032 346328 1160 2760 1264 2904 2083 13656 10 0 34 56 0
3 67 9894700 443524 9036 345768 1204 3236 1268 3300 1930 13332 7 0 47 46 0
1 72 9895936 446228 9052 346484 1052 3156 1220 3648 1819 13674 5 0 47 48 0
2 74 9897156 448732 9064 346616 1724 3432 2128 3436 1936 14598 7 0 44 49 0
3 73 9897724 446904 9068 347580 1524 2468 1636 2480 1730 13363 6 0 32 61 0
7 65 9898208 448312 9080 347472 1328 1944 1660 1952 2496 14019 16 0 32 52 0
8 61 9898500 444836 9092 347904 2128 2004 2464 2208 3381 16093 29 1 23 47 0
1 79 9899372 441588 9104 348048 1236 2684 1424 3300 2774 14103 23 0 24 53 0
13 54 9909828 551780 9224 349588 36124 63296 37608 64800 126067 443473 18 0 23 59 0
16 40 9910136 536048 9260 350004 4208 988 5076 2044 5434 17055 51 3 10 36 0
}}}
<<<
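The tell-tale signs above: kswapd near the top of top, the load average exploding from ~13 to ~116, and sustained nonzero si/so columns in vmstat once PGA pushed the box past physical memory.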
''wait io on CPUIDs'' https://www.evernote.com/shard/s48/sh/0da5e22a-6a80-4a82-86a9-581d9203ed9c/8f5b5f4f63b789a9c3d4dc6a618128d0
http://www.ludovicocaldara.net/dba/how-to-collect-oracle-application-server-performance-data-with-dms-and-rrdtool/
http://allthingsmdw.blogspot.com/2012/02/analyzing-thread-dumps-in-middleware.html
Capacity Planning for LAMP
http://www.scribd.com/doc/43281/Slides-from-Capacity-Planning-for-LAMP-talk-at-MySQL-Conf-2007
http://perfwork.wordpress.com/2010/03/20/cpu-utilization-on-ec2/
IO tuning
http://communities.vmware.com/thread/268869
http://vpivot.com/2010/05/04/storage-io-control/
http://communities.vmware.com/docs/DOC-5490
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008205
http://book.soundonair.ru/hall2/ch06lev1sec1.html <-- COOL LVM Striping!!!
http://tldp.org/HOWTO/LVM-HOWTO/recipethreescsistripe.html
http://linux.derkeiler.com/Newsgroups/comp.os.linux.misc/2010-01/msg00325.html
http://www.goodwebpractices.com/other/wordpress-vs-joomla-vs-drupal.html
http://www.pcpro.co.uk/blogs/2011/02/02/joomla-1-6-vs-drupal-7-0/
http://www.pcpro.co.uk/reviews/software/364549/drupal-7
http://www.alledia.com/blog/general-cms-issues/joomla-and-drupal-which-one-is-right-for-you/ <-- nice comparison
Start/Stop CRS
http://www.dbaexpert.com/blog/2007/09/start-and-stop-crs/
https://forums.oracle.com/forums/thread.jspa?messageID=9817219 <-- installation!
http://www.crisp.demon.co.uk/blog/2011-06.html <-- his blog about his dtrace port
http://crtags.blogspot.com/ <-- the download page
{{{
cd /reco/installers/rpms/dtrace-20110718
make all
make load
build/dtrace -n 'syscall:::entry { @[execname] = count(); }'
build/dtrace -n 'syscall:::entry /execname == "VirtualBox"/ { @[probefunc] = count(); }'
[root@desktopserver dtrace-20110718]# build/dtrace -n 'syscall:::entry { @[execname] = count(); }'
dtrace: description 'syscall:::entry ' matched 633 probes
^C
hpssd.py 1
VBoxNetDHCP 2
mapping-daemon 2
nmbd 3
init 4
gnome-panel 6
httpd 6
ntpd 8
tnslsnr 8
gpm 14
pam-panel-icon 16
perl 17
sshd 17
avahi-daemon 22
metacity 30
iscsid 31
nautilus 38
automount 40
ocssd.bin 42
gam_server 45
gdm-rh-security 55
gnome-screensav 66
emagent 67
gnome-power-man 75
tail 82
gnome-settings- 86
evmd.bin 100
mixer_applet2 143
escd 165
gnome-terminal 205
cssdagent 221
gconfd-2 277
pcscd 372
wnck-applet 382
TeamViewer.exe 392
pam_timestamp_c 406
wineserver 412
collectl 525
dtrace 616
ohasd.bin 1063
vncviewer 1244
oraagent.bin 2046
VBoxXPCOMIPCD 2081
Xvnc 3348
VBoxSVC 8058
oracle 12601
java 39786
firefox 74345
npviewer.bin 204925
VirtualBox 415025
}}}
http://www.evernote.com/shard/s48/sh/1ccb0466-79b7-4090-9a5d-9371358ac54d/b8434e3e3b3130ce72422b9ae067e7b9
<<showtoc>>
! references
https://css-tricks.com/the-difference-between-id-and-class/
http://stackoverflow.com/questions/12889362/difference-between-id-and-class-in-css-and-when-to-use-it
Some noteworthy tweets ... blog summary here http://www.oraclenerd.com/2011/03/fun-with-tuning.html
<<<
{{{
@DBAKevlar Shouldn't need that or any tricks. Defaults of CREATE TABLESPACE should work just fine. /cc @oraclenerd
@oraclenerd Likely because the writer slave set is slower than the reader slave set. Readers want to send more data, writers not ready.
Issue of a Balance HW config or too much PX? RT @GregRahn: @oraclenerd Likely because the writer slave set is slower than the reader slave set. Readers want to send more data, writers not ready
-- OR not proper PX
@GregRahn can you dumb that down for me? slow disks? slow part of disks?
@oraclenerd Seems likely that the disk writes are the slow side of the execution. The read side probably faster. Got SQL Monitor report?
@GregRahn I have the Real Time SQL Monitoring report from SQL Dev. Didn't configure EM or anything else
observing @tomroachoracle run sar reports on my VM
@oraclenerd That should work. Email me that
OH: "Solutions are only useful when the problem is well understood"
@GregRahn could you improve @oraclenerd s parallel query?
@martinberx Indeed. Mr @oraclenerd did not have PARALLEL in the CTAS, only on the SELECT side. Many readers, 1 writer. He's much wiser now.
@GregRahn @oraclenerd top wait events (AWR): DB CPU (82%) - direct path read (15%) - direct path write (2%) can we avoid CPU work somehow?
replacing NULL with constant (-999) in DWH like env to avoid outer joins. Your ideas?
@martinberx Can use NOCOMPRESS. Better option - use more CPU cores. /cc @oraclenerd
@GregRahn good idea! trade CPU vs. IO @oraclenerd has to decide if he wants faster CTAS or query afterwards.
@martinberx If lots are null, you'll skew num_rows/NDV by using a constant instead. Histogram for col?
}}}
<<<
! likely because the writer slave set is slower than the reader slave set: readers want to send more data, writers are not ready
{{{
"Perhaps you have a parallel hint on the select but not on the table, like this"
CREATE TABLE claim
COMPRESS BASIC
NOLOGGING
PARALLEL 8
AS
SELECT /*+ PARALLEL( c, 8 ) */
date_of_service,
date_of_payment,
claim_count,
units,
amount,
...
...
}}}
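In a CTAS the hint on the SELECT side only parallelizes the read; it's the PARALLEL clause on the CREATE side that makes the load (writer slave set) parallel, which is why the corrected statement above carries PARALLEL 8 on both sides.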
Connect Time Failover & Transparent Application Failover for Data Guard
http://uhesse.wordpress.com/2009/08/19/connect-time-failover-transparent-application-failover-for-data-guard/
DataGuard Startup Service trigger
http://blog.dbvisit.com/the-power-of-oracle-services-with-standby-databases/
1) Read on the PDF here, that talks about capacity planning and sizing. It includes a tool and a sample scenario https://github.com/karlarao/sizing_worksheet
2) You can view my presentations and papers here http://www.slideshare.net/karlarao/presentations, http://www.slideshare.net/karlarao/documents
3) Some of my tools https://karlarao.wordpress.com/scripts-resources/, https://github.com/karlarao
4) And go through the entries under the following topics of my wiki
http://karlarao.tiddlyspot.com/#OraclePerformance
http://karlarao.tiddlyspot.com/#Benchmark
http://karlarao.tiddlyspot.com/#%5B%5BCapacity%20Planning%5D%5D
http://karlarao.tiddlyspot.com/#%5B%5BHardware%20and%20OS%5D%5D
http://karlarao.tiddlyspot.com/#PerformanceTools
http://karlarao.tiddlyspot.com/#%5B%5BTroubleshooting%20%26%20Internals%5D%5D
http://karlarao.tiddlyspot.com/#CloudComputing
https://github.com/karlarao/forecast_examples
I recommend you read the books:
• FOP http://www.amazon.com/Forecasting-Oracle-Performance-Craig-Shallahamer/dp/1590598024/ref=sr_1_1?ie=UTF8&qid=1435948281&sr=8-1&keywords=forecasting+oracle+performance&pebp=1435948282498&perid=16H7PDMSDZYF4PJ4FWET
• OPF http://www.amazon.com/Oracle-Performance-Firefighting-Craig-Shallahamer/dp/0984102302/ref=sr_1_3?ie=UTF8&qid=1435948281&sr=8-3&keywords=forecasting+oracle+performance
• TAOS (headroom section) http://www.amazon.com/Art-Scalability-Architecture-Organizations-Enterprise/dp/0134032802/ref=sr_1_1?ie=UTF8&qid=1435948300&sr=8-1&keywords=the+art+of+scalability
• TPOCSA (capacity planning section) http://www.amazon.com/Practice-Cloud-System-Administration-Distributed/dp/032194318X/ref=sr_1_1?ie=UTF8&qid=1435948307&sr=8-1&keywords=tom+limoncelli
• "Cloud Capacity Management" http://www.apress.com/9781430249238 it goes through the end to end capacity service model which is ideal for a big shop (with a lot of hierarchy and bureaucracy) or if you are thinking about implementing database as a service in a large scale. some points are not very detailed but it goes through all the relevant terms/topics. we have done a similar thing in the past for a fortune 100 bank but focuses only on oracle services
Also join:
• GCAP google groups https://groups.google.com/forum/#!forum/guerrilla-capacity-planning
I’ve been meaning to put together a 1 day workshop about sizing, cap planning, and RM.
At enkitec I’ve done about 80+ sizing engagements so I’ve got a ton of data to show and experiences to share. Hopefully by the end of this year I’ll be able to complete that workshop.
! other
http://cyborginstitute.org/projects/administration/database-scaling/ , http://cyborginstitute.org/projects/administration/
book: Operating Systems: Concurrent and Distributed Software Design http://search.safaribooksonline.com/0-321-11789-1
! chargeback , cost
cloud capacity management https://learning.oreilly.com/library/view/cloud-capacity-management/9781430249238/9781430249238_Ch13.xhtml#Sec1
hybrid cloud management https://learning.oreilly.com/library/view/hybrid-cloud-management/9781785283574/ch08s04.html
cloud data centers cost modeling https://learning.oreilly.com/library/view/cloud-data-centers/9780128014134/xhtml/chp016.xhtml
cloud computing billing and chargeback https://learning.oreilly.com/library/view/cloud-computing-automating/9780132604000/ch09.html
cloud native architectures https://learning.oreilly.com/library/view/cloud-native-architectures/9781787280540/2840e1ee-fdf4-437d-84bf-22d2cda7892b.xhtml
oem chargeback https://www.oracle.com/enterprise-manager/downloads/chargeback-capacity-planning-downloads.html
http://www.oracle.com/us/products/engineered-systems/iaas/engineered-systems-iaas-ds-1897230.pdf
{{{
SELECT * from table(
select dbms_sqltune.extract_binds(bind_data) from v$sql
where sql_id = '&sql_id'
and child_number = &child_no)
/
select a.sql_id, a.name, a.value_string
from dba_hist_sqlbind a, dba_hist_snapshot b
where a.snap_id between b.snap_id - 1 and b.snap_id
and b.begin_interval_time <= to_date('&DATE_RUNNING', 'DD-MON-YYYY HH24:MI:SS')
and b.end_interval_time >= to_date('&DATE_RUNNING', 'DD-MON-YYYY HH24:MI:SS')
and sql_id = '&SQL_ID'
/
}}}
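The first query pulls the captured binds for a cursor still in the shared pool (the collection returned by extract_binds includes name, position, and value_string); the second mines AWR via dba_hist_sqlbind for binds captured around a given point in time.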
How do I know if the cardinality estimates in a plan are accurate?
http://blogs.oracle.com/optimizer/entry/how_do_i_know_if
http://blogs.oracle.com/optimizer/entry/cardinality_feedback
http://kerryosborne.oracle-guy.com/2011/07/cardinality-feedback/
http://kerryosborne.oracle-guy.com/2011/01/sql-profiles-disable-automatic-dynamic-sampling/
http://blogs.oracle.com/optimizer/entry/how_do_i_know_if
Martin -- Thanks for the question regarding "What does Buffer Sort mean", version 10.2 http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:3123216800346274434
9.2.0.4 buffer (sort) http://www.freelists.org/post/oracle-l/9204-buffer-sort
Buffer Sort explanation http://www.freelists.org/post/oracle-l/Buffer-Sort-explanation, http://www.orafaq.com/maillist/oracle-l/2005/08/07/0420.htm
Buffer Sorts http://jonathanlewis.wordpress.com/2006/12/17/buffer-sorts/
Buffer Sorts – 2 http://jonathanlewis.wordpress.com/2007/01/12/buffer-sorts-2/
Cartesian Merge Join http://jonathanlewis.wordpress.com/2006/12/13/cartesian-merge-join/
Optimizer Selects the Merge Join Cartesian Despite the Hints [ID 457058.1] alter session set "_optimizer_mjc_enabled"=false ;
Scalar Subquery and Complex View Merging Disabled http://dioncho.wordpress.com/2009/04/17/scalar-subquery-and-complex-view-merging-disabled/
ZS -- Thanks for the question regarding "Why a Merge Join Cartesian?", version 8.1.7 http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4105951726381
<<showtoc>>
https://db-engines.com/en/system/Cassandra%3BOracle
! video learning
* https://www.linkedin.com/learning/cassandra-data-modeling-essential-training/cassandra-and-relational-databases
* Cassandra for Developers https://app.pluralsight.com/library/courses/cassandra-developers/table-of-contents
* https://www.udemy.com/courses/search/?src=ukw&q=cassandra
FREE https://learning.oreilly.com/videos/cassandra-administration/9781782164203?autoplay=false
https://www.udemy.com/course/from-0-to-1-the-cassandra-distributed-database/
https://www.udemy.com/course/cassandra-administration/
https://learning.oreilly.com/videos/mastering-cassandra-essentials/9781491994122?autoplay=false
https://learning.oreilly.com/videos/distributed-systems-in/9781491924914?autoplay=false
! references
https://www.google.com/search?q=cassandra+vs+mongodb&oq=cassandra+vs+m&aqs=chrome.1.69i57j0l7.3644j0j1&sourceid=chrome&ie=UTF-8
cassandra vs mongo vs redis https://db-engines.com/en/system/Cassandra%3BMongoDB%3BRedis
https://www.educba.com/cassandra-vs-redis/
! cassandra for architects
!! Berglund and McCullough on Mastering Cassandra for Architects
https://learning.oreilly.com/videos/berglund-and-mccullough/9781449327378/9781449327378-video153237?autoplay=false
!! Design a monitoring or analytics service like Datadog or SignalFx
https://leetcode.com/discuss/interview-question/system-design/287678/Design-a-monitoring-or-analytics-service-like-Datadog-or-SignalFx
<<<
What are the total events and the number of users? The storage requirement depends on total events * number of users.
It's a write-heavy service.
Data persists for 6 months.
Can use Cassandra for storing logs.
Row key = event + client id, column key = timestamp, value = the number of times this event happens during this timestamp.
365/2 * 86400 = 15,768,000 seconds, so if we need to store per-second counts we need 15,768,000 columns for each key. For the last 6 months we can store at minute granularity as well, which is 365/2 * 24 * 60 = 262,800 minutes. We can also use hourly buckets for older data, such as 3 months ago, to save space.
The database will be sharded by event + client id.
<<<
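A sketch of that data model in CQL (hypothetical names; the post's "column key = timestamp" maps to a clustering column in CQL terms):
{{{
-- counters per (event, client) partition, bucketed by minute
CREATE TABLE event_counts (
  event_name text,
  client_id  text,
  bucket_ts  timestamp,
  hits       counter,
  PRIMARY KEY ((event_name, client_id), bucket_ts)
);

-- each incoming event increments its minute bucket
UPDATE event_counts SET hits = hits + 1
 WHERE event_name = 'page_view'
   AND client_id  = 'client42'
   AND bucket_ts  = '2019-09-13 00:01:00';
}}}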
!! Cassandra for mission critical data
https://www.slideshare.net/semLiveEnv/cassandra-for-mission-critical-data
!! NoSQL & HBase overview
https://www.slideshare.net/VenkataNagaRavi/hbase-overview-41046280
! cassandra use cases
https://blog.pythian.com/cassandra-use-cases/
{{{
Ideal Cassandra Use Cases
It turns out that Cassandra is really very good for some applications.
The ideal Cassandra application has the following characteristics:
Writes exceed reads by a large margin.
Data is rarely updated and when updates are made they are idempotent.
Read Access is by a known primary key.
Data can be partitioned via a key that allows the database to be spread evenly across multiple nodes.
There is no need for joins or aggregates.
Some of my favorite examples of good use cases for Cassandra are:
Transaction logging: Purchases, test scores, movies watched and movie latest location.
Storing time series data (as long as you do your own aggregates).
Tracking pretty much anything including order status, packages etc.
Storing health tracker data.
Weather service history.
Internet of things status and event history.
Telematics: IOT for cars and trucks.
Email envelopes—not the contents.
}}}
!! migrate oracle to cassandra (NETFLIX)
Global Netflix - Replacing Datacenter Oracle with Global Apache Cassandra on AWS
http://www.hpts.ws/papers/2011/sessions_2011/GlobalNetflixHPTS.pdf
!! hadoop on cassandra - datastax
https://stackoverflow.com/questions/14827693/hadoop-on-cassandra-database
<<<
If you are interested in marrying Hadoop and Cassandra, the first link should be DataStax, a company built around this concept: http://www.datastax.com/ They built and support Hadoop with HDFS replaced by Cassandra. To the best of my understanding they do have data locality: http://blog.octo.com/en/introduction-to-datastax-brisk-an-hadoop-and-cassandra-distribution/
<<<
!! hadoop vs cassandra
Hadoop vs. Cassandra https://www.youtube.com/watch?v=ZzFCfH8e3QA
! cassandra flask python
https://www.google.com/search?sxsrf=ACYBGNRzZb4Is9CEjMPzBaMsrc7CkGWPHA%3A1568334590979&ei=_uJ6XZKeO9Dy5gKp9oSQDA&q=cassandra+flask+python&oq=cassandra+flask&gs_l=psy-ab.3.1.0j0i22i30l5j0i22i10i30.5510.5925..7882...0.3..0.82.381.5......0....1..gws-wiz.......0i71.CY3xBlZ92os
http://rmehan.com/2016/04/18/using-cassandra-with-flask/
Collect pageviews with Flask and Cassandra https://mmas.github.io/pageviews-flask-cassandra
! cassandra sample application code
https://learning.oreilly.com/library/view/cassandra-the-definitive/9781449399764/ch04.html
.
.
cgroups: this feature is new in RHEL 6
Documentation http://linux.oracle.com/documentation/EL6/Red_Hat_Enterprise_Linux-6-Resource_Management_Guide-en-US.pdf
How I Used CGroups to Manage System Resources In Oracle Linux 6 http://www.oracle.com/technetwork/articles/servers-storage-admin/resource-controllers-linux-1506602.html
IO https://fritshoogland.wordpress.com/2012/12/15/throttling-io-with-linux/
CPU http://manchev.org/2014/03/processor-group-integration-in-oracle-database-12c/
Using PROCESSOR_GROUP_NAME to bind a database instance to CPUs or NUMA nodes on Linux (Doc ID 1585184.1)
Using PROCESSOR_GROUP_NAME to bind a database instance to CPUs or NUMA nodes on Solaris (Doc ID 1928328.1)
Modern Linux Servers with cgroups - Brandon Philips, CoreOS https://www.youtube.com/watch?v=ZD7HDrtkZoI
Resource allocation using cgroups https://www.youtube.com/watch?v=JN2Ei7zn2S0
OEL 6 doc http://docs.oracle.com/cd/E37670_01/E37355/html/index.html, https://docs.oracle.com/cd/E37670_01/E37355/html/ol_getset_param_cgroups.html, http://docs.oracle.com/cd/E37670_01/E37355/html/ol_use_cases_cgroups.html
How I Used CGroups to Manage System Resources http://www.oracle.com/technetwork/articles/servers-storage-admin/resource-controllers-linux-1506602.html
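A minimal sketch of the 12c PROCESSOR_GROUP_NAME piece, assuming a cgroup named oradb has already been created on the host (e.g. via cgconfig following the RHEL 6 guide above):
{{{
-- bind the instance to the pre-created cgroup; takes effect on restart
alter system set processor_group_name = 'oradb' scope = spfile;
shutdown immediate
startup
}}}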
to get rid of ORA-28003: password verification for the specified password failed
{{{
ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION NULL;
alter profile DEFAULT limit PASSWORD_REUSE_MAX 6 PASSWORD_REUSE_TIME unlimited;
ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;
}}}
to change profile for a specific user
{{{
select username, account_status, PROFILE from dba_users;
ALTER PROFILE MONITORING_USER LIMIT PASSWORD_VERIFY_FUNCTION NULL;
alter profile MONITORING_USER limit PASSWORD_REUSE_MAX 6 PASSWORD_REUSE_TIME unlimited;
alter user HCMREADONLY identified by noentry;
ALTER PROFILE MONITORING_USER LIMIT PASSWORD_VERIFY_FUNCTION VERIFY_FUNCTION;
}}}
to get the old password and put it back after
{{{
create user TEST identified by TEST;
grant create session to TEST;
select username, password from dba_users where username = 'TEST';
select username, password from dba_users where username = 'TEST';
USERNAME PASSWORD
------------------------------ ------------------------------
TEST 7A0F2B316C212D67
alter user TEST identified by TEST2;
Alter user TEST identified by values 'OLD HASH VALUE ';
Alter user TEST identified by values '7A0F2B316C212D67';
}}}
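The trick is that "identified by values" installs the stored hash directly, so after testing with a temporary password you can put the original password back without ever knowing it.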
! expired and locked
{{{
select username, account_status from dba_users;
select 'ALTER USER ' || username || ' ACCOUNT UNLOCK;' from dba_users where account_status like '%LOCKED%';
set heading off
set echo off
set long 9999999
select dbms_metadata.get_ddl('USER', username) || ';' usercreate
from dba_users where username = 'SYSMAN';
If you are using mixed case passwords (sec_case_sensitive_logon = TRUE), then you can do this
(note the 11g case-sensitive hash lives in sys.user$.spare4, not in dba_users):
select 'alter user '||u.username||' identified by values '||chr(39)||s.spare4||chr(39)||';'
  from dba_users u, sys.user$ s
 where s.name = u.username and u.account_status like '%EXPIRED%';
If you're not using mixed case passwords (sec_case_sensitive_logon = FALSE), then do
(in 11g dba_users no longer exposes the hash, so pull it from sys.user$.password):
select 'alter user '||u.username||' identified by values '||chr(39)||s.password||chr(39)||';'
  from dba_users u, sys.user$ s
 where s.name = u.username and u.account_status like '%EXPIRED%';
select 'ALTER USER ' || username || ' identified by oracle1;' from dba_users where account_status like '%EXPIRED%';
http://laurentschneider.com/wordpress/2008/03/alter-user-identified-by-values-in-11g.html
http://coskan.wordpress.com/2009/03/11/alter-user-identified-by-values-on-11g-without-using-sysuser/
-- for sysman do this starting 10204
emctl setpasswd dbconsole
}}}
! example
{{{
21:06:35 SYS@cdb1> SET LONG 999999
21:06:45 SYS@cdb1> select dbms_metadata.get_ddl('USER','ALLOC_APP_USER') from dual;
DBMS_METADATA.GET_DDL('USER','ALLOC_APP_USER')
--------------------------------------------------------------------------------
CREATE USER "ALLOC_APP_USER" IDENTIFIED BY VALUES 'S:135E0A81F4B08AD2EE81B3A0
E4B28DB3A08983E40524264C4764EDCEE856;H:9BCF43B8002C09A03D7C5B0C80D35B86;T:42BE3A
2EC61307ED9FF704DF80951D76986F47D43EA835799836001399562899F95FB95B171C9D58A16E45
F7459ADEE74901C7B4A9A9AEFDD92FD03278F3038B0EE0A03F14A3C7520FABCC386FA6A72A;E0E79
5741BD02FB0'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP2"
21:06:49 SYS@cdb1>
21:06:51 SYS@cdb1> conn system/xxx
Connected.
21:07:15 SYSTEM@cdb1>
21:07:16 SYSTEM@cdb1>
21:07:16 SYSTEM@cdb1> alter user alloc_app_user identified by karlarao;
User altered.
21:07:37 SYSTEM@cdb1> conn alloc_app_user/karlarao
Connected.
21:07:45 ALLOC_APP_USER@cdb1>
21:07:46 ALLOC_APP_USER@cdb1>
21:07:46 ALLOC_APP_USER@cdb1> conn system/xxx
Connected.
21:07:51 SYSTEM@cdb1>
21:07:51 SYSTEM@cdb1>
21:07:51 SYSTEM@cdb1> alter user alloc_app_user identified by values 'S:135E0A81F4B08AD2EE81B3A0E4B28DB3A08983E40524264C4764EDCEE856;H:9BCF43B8002C09A03D7C5B0C80D35B86;T:42BE3A2EC61307ED9FF704DF80951D76986F47D43EA835799836001399562899F95FB95B171C9D58A16E45F7459ADEE74901C7B4A9A9AEFDD92FD03278F3038B0EE0A03F14A3C7520FABCC386FA6A72A;E0E795741BD02FB0';
User altered.
21:08:46 SYSTEM@cdb1> conn alloc_app_user/karlarao
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
21:08:55 @>
21:08:56 @> conn alloc_app_user/testalloc
Connected.
21:09:07 ALLOC_APP_USER@cdb1>
}}}
http://dbakevlar.blogspot.com/2010/08/simple-reporting-without-materialized.html
http://avdeo.com/2010/11/01/converting-migerating-database-character-set/
http://www.oracle-base.com/articles/9i/character-semantics-and-globalization-9i.php
https://forums.oracle.com/forums/thread.jspa?messageID=2371685
Modify NLS_LENGTH_SEMANTICS online http://gasparotto.blogspot.com/2009/03/modify-nlslengthsemantics-online.html
! 2022
https://www.kibeha.dk/2018/05/corrupting-characters-how-to-get.html
https://blogs.oracle.com/timesten/post/why-databasecharacterset-matters
''Chargeback Administration'' http://download.oracle.com/docs/cd/E24628_01/doc.121/e25179/chargeback_cloud_admin.htm#sthref232
''demo'' http://www.youtube.com/user/OracleLearning#start=0:00;end=6:18;autoreplay=false;showoptions=false <-- resources are managed like VM resources, VMWare has a similar tool
http://www.oracle.com/technetwork/oem/cloud-mgmt/wp-em12c-chargeback-final-1585483.pdf
@@
Roadmap of Oracle Database Patchset Releases (Doc ID 1360790.1)
Release Schedule of Current Database Releases (Doc ID 742060.1)
Note 207303.1 Client Server Interoperability Support
Note 161818.1 RDBMS Releases Support Status Summary
Oracle Clusterware (CRS/GI) - ASM - Database Version Compatibility (Doc ID 337737.1)
Support of Linux and Oracle Products on Linux (Doc ID 266043.1)
ACFS Support On OS Platforms (Certification Matrix). (Doc ID 1369107.1)
Master Note For Database and Client Certification (Doc ID 1298096.1)
@@
On What Unix/Linux OS are Oracle ODBC Drivers Available ?
Doc ID: Note:396635.1
Subject: Oracle - Compatibility Matrices and Release Information
Doc ID: Note:139580.1
Subject: Statement of Direction - JDBC Driver Support within Oracle Application Server
Doc ID: Note:365120.1
Subject: Oracle Database Server and Networking Patches for Microsoft Platforms
Doc ID: Note:161549.1
Subject: Oracle Database Extensions for .Net support statement for 64-bit Windows
Doc ID: Note:414947.1
Subject: Oracle Database Server support Matrix for Windows XP / 2003 64-Bit (Itanium)
Doc ID: Note:236183.1
Subject: Oracle Database Server support Matrix for Windows XP / 2003 32-Bit
Doc ID: Note:161546.1
Oracle Database Server product support Matrix for Windows 2000
Doc ID: Note:77627.1
INTEL: Oracle Database Server Support Matrix for Windows NT
Doc ID: Note:45997.1
Oracle Database Server support Matrix for Windows XP / 2003 64-Bit (x64)
Doc ID: Note:343737.1
Are Unix Clients Supported for Deploying Oracle Forms over the Web?
Doc ID: Note:266439.1
Tru64 UNIX Statement of Direction for Oracle
Doc ID: Note:264137.1
Is Oracle10g Instant Client Certified With Oracle 9i or Oracle 8i Databases
Doc ID: Note:273972.1
ODBC and Oracle10g Supportability
Doc ID: Note:273215.1
Starting With Oracle JDBC Drivers
Doc ID: Note:401934.1
JDBC Features - classes12.jar , oracle.jdbc.driver, and OracleConnectionCacheImpl
Doc ID: Note:335754.1
ORA-12170 When Connecting Directly or Via Dblink From 10g To 8i
Doc ID: Note:363105.1
Which Oracle Client versions will connect to and work against which version of the Oracle Database?
Doc ID: Note:172179.1
How To Determine The C/C++ And COBOL Compiler Version / Release on LINUX/UNIX
Doc ID: Note:549826.1
Precompiler FAQ's About Migration / Upgrade
Doc ID: Note:377161.1
How To Upgrade The Oracle Database Client Software?
Doc ID: Note:428732.1
Certified Compilers
Doc ID: Note:43208.1
-- AIX
Note.273051.1 - How to configure Reports with IBM-DB2 Database using Pluggable Data Source
Note.239558.1 - How to Set Up Reports 9i Connecting to DB2 with JDBC using Merant Drivers
Note.246787.1 - How to Configure JDBC-ODBC Bridge for Reports 9i?
-- JDBC
Example: Identifying Connection String Problems in JDBC Driver
Doc ID: Note:94091.1
http://srackham.wordpress.com/cloning-and-copying-virtualbox-virtual-machines/
<<<
easiest is https://www.youtube.com/watch?v=lbVi2yJOiZo on the current copied/renamed vdi
{{{
VBoxManage internalcommands sethduuid "/Volumes/vm/VirtualBox VMs/tableauserver/tableauserver.vdi"
}}}
<<<
or clone from vbox ui
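If you'd rather stay on the command line, cloning also stamps a fresh UUID for you (a sketch; the source path is from the command above, the target path is made up):
{{{
# clonehd copies the disk and assigns a new UUID, so no sethduuid step is needed
VBoxManage clonehd "/Volumes/vm/VirtualBox VMs/tableauserver/tableauserver.vdi" \
                   "/Volumes/vm/VirtualBox VMs/tableauserver/tableauserver-clone.vdi"
}}}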
/***
|Name:|CloseOnCancelPlugin|
|Description:|Closes the tiddler if you click new tiddler then cancel. Default behaviour is to leave it open|
|Version:|3.0.1 ($Rev: 3861 $)|
|Date:|$Date: 2008-03-08 10:53:09 +1000 (Sat, 08 Mar 2008) $|
|Source:|http://mptw.tiddlyspot.com/#CloseOnCancelPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
***/
//{{{
merge(config.commands.cancelTiddler,{
	handler_mptw_orig_closeUnsaved: config.commands.cancelTiddler.handler,
	handler: function(event,src,title) {
		this.handler_mptw_orig_closeUnsaved(event,src,title);
		if (!store.tiddlerExists(title) && !store.isShadowTiddler(title))
			story.closeTiddler(title,true);
		return false;
	}
});
//}}}
http://en.wikipedia.org/wiki/Cloud_computing
http://johnmathon.wordpress.com/2014/02/11/a-simple-guide-to-cloud-computing-iaas-paas-saas-baas-dbaas-ipaas-idaas-apimaas/
! The Art of Scalability
[img[ https://lh3.googleusercontent.com/l_WZ2l_67mz-u1ouW0jsOnkiHP9cPbWfkXAcTiE8Yss=w2048-h2048-no ]]
<<<
This came from the book called.. “The Art of Scalability” by Marty Abbott (http://akfpartners.com/about/marty-abbott) and Michael Fisher (http://akfpartners.com/about/michael-fisher). Both are ex-military West Point grads who went on to work a lot on web-scale infrastructures (PayPal, eBay, etc.)
They formed this company called “AKF partners” which wrote 3 awesome books
· The Art of Scalability
http://akfpartners.com/books/the-art-of-scalability, TOC here http://my.safaribooksonline.com/book/operating-systems-and-server-administration/9780137031436
· Scalability Rules
http://akfpartners.com/books/scalability-rules, TOC here http://my.safaribooksonline.com/book/operating-systems-and-server-administration/9780132614016
· The Power of Customer Misbehavior
http://akfpartners.com/books/the-power-of-customer-misbehavior, book preview here http://www.youtube.com/watch?v=w4twalWnfUg
and these are their clients http://akfpartners.com/clients
If you're an architect, engineer, or manager building or doing a cloud service model (IaaS, PaaS, SaaS, BaaS, DBaaS, iPaaS, IDaaS, APIMaaS), the 3 books mentioned above are awesome. Although you might actually just focus on 1st and 2nd, because the 3rd is about viral growth of products.
I think the book concepts are very suited for the “Exadata as a Service” or “DBaaS” service model.. it starts with Staffing, then Processes (incidents, escalations, headroom, perf testing, etc), then architecture, then challenges.
Check out the table of contents (TOC) links, you’ll love it.
<<<
''Pre-req readables: Introducing Cluster Health Monitor (IPD/OS) (Doc ID 736752.1)''
! On the Database Server side
''Oracle recommends not installing the UI on the servers.''
''The OS Tool consists of three daemons: ologgerd, oproxyd and osysmond''
ologgerd - master daemon
osysmond - the collector on each node
oproxyd - public interface for external clients (like oclumon and crfgui)
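A quick sanity check that all three daemons are up on a node (plain shell, nothing CHM-specific beyond the process names above):
{{{
# the [b]racket trick keeps grep from matching itself
ps -ef | egrep '[o]loggerd|[o]sysmond|[o]proxyd'
}}}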
__''Installation''__
1) Download CHM here
Oracle Cluster Health Monitor - http://goo.gl/UZqS5
2)
On all nodes ''(as root)''
{{{
useradd -d /opt/crfuser -s /bin/sh -g oinstall crfuser
echo "crfuser" | passwd --stdin crfuser
}}}
{{{
Create the following directories...
<directory>/oracrf <--- the install directory
<directory>/oracrf_installer <--- put the installer here
<directory>/oracrf_gui <--- the GUI client goes here
<directory>/oracrf_dump <--- this is where you will dump the diagnostic data
chown -R crfuser:root <directory>/oracrf*
}}}
as per the README, ideally it should be at /usr/lib/oracrf or C:\Program Files\oracrf
On all nodes ''(as crfuser)''
{{{
add /usr/lib/oracrf/bin to the PATH in .bash_profile
vi .bash_profile
   export PATH=$PATH:/usr/lib/oracrf/bin
source .bash_profile
}}}
3) Setup passwordless ssh for the user created
''On all the nodes in the cluster'' create the RSA and DSA key pairs
{{{
1) su - crfuser
2)
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cd ~/.ssh
3)
$ /usr/bin/ssh-keygen -t rsa
<then just hit ENTER all the way>
4)
$ /usr/bin/ssh-keygen -t dsa
<then just hit ENTER all the way>
5)
Repeat the above steps for each Oracle RAC node in the cluster.
}}}
''On the first node of the cluster'' Create an authorized key file on one of the nodes.
An authorized key file is nothing more than a single file that contains a copy of everyone's (every node's)
RSA and DSA public key. Once the authorized key file contains all of the public keys,
it is then distributed to all other nodes in the RAC cluster.
{{{
1)
$ cd ~/.ssh
$ ls -l *.pub
2)
Use SSH to copy the content of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub public key from each
Oracle RAC node in the cluster to the authorized key file just created (~/.ssh/authorized_keys). This will be done from the first node
$ ssh vmlinux1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh vmlinux1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh vmlinux2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh vmlinux2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
3)
Copy the ~/.ssh/authorized_keys to the other nodes
$ scp -p ~/.ssh/authorized_keys vmlinux2:.ssh/authorized_keys
4)
Enable the ssh-agent <--------------------------------------------- this is no longer needed
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
5)
Verify that passwordless ssh works from each node
ssh vmlinux1 date; ssh vmlinux2 date
}}}
4) If you have a previous install of this tool, delete it from all nodes. ''(as root)''
{{{
a. Disable the tool
"/etc/init.d/init.crfd disable"
"stopcrf" from a command prompt on Windows.
b. Uninstall
"/usr/lib/oracrf/install/crfinst.pl -d" on Linux
"perl C:\programm files\oracrf\install\crfinst.pl -d" on Windows
c. Make sure all BDB databases are deleted from all nodes.
d. Manually delete the install home if it still exists.
}}}
5) On the master node, Login as ''crfuser'' on Linux. Login as admin user on Windows.
Unzip the crfpack.zip file.
{{{
mv crfpack.zip <directory>/oracrf_installer
cd <directory>/oracrf_installer
unzip crfpack.zip
}}}
For the BDB directory:
The location should be a path on a volume with at least 5GB of space available
per node, writable by a privileged user only. It cannot be on the root
filesystem in Linux. The location is required to be the same on all
hosts; if that cannot be done, specify a different location
during the finalize (-f) operation on each host, following the above
size requirements. The path MUST NOT be on shared disk; if a shared
BDB path is provided to multiple hosts, BDB corruption will happen.
6) as ''crfuser'', run crfinst.pl on the <directory>/oracrf_installer/install directory
this will copy the installer to the other nodes
{{{
$ ./crfinst.pl -i node1,node2,node3 -b <directory>/oracrf -m node1
}}}
7) as ''root'', once step 6 finishes, it will instruct you to run the crfinst.pl script
with -f and -b <bdb location> on each node to finalize the install on that node.
{{{
/home/oracle/oracrf_installer/install/crfinst.pl -f -b <directory>/oracrf
}}}
Don't be confused when it says.. "Installation completed successfully at /usr/lib/oracrf..."
the /usr/lib/oracrf directory just contains the installation binaries, which consume around 120MB,
and the BDB files will still be put in the <directory>/oracrf directory
8) Enable the tool on all nodes ''(as root)''
{{{
# /etc/init.d/init.crfd enable, on Linux
> runcrf, on Windows
}}}
__''Using the tool''__
1) Start the daemons on all nodes ''(as root)'' (the install does not enable/run the daemons by default)
{{{
# /etc/init.d/init.crfd enable, on Linux
> runcrf, on Windows
}}}
2) Run the GUI
-g : standalone UI installation on the current node. Oracle recommends not installing the UI on the servers; you can use this option to install the UI-only client on a separate machine outside the cluster.
-d : specifies hours (<hh>), minutes (<mm>) and seconds (<ss>) in the past from the current time to start the GUI from, e.g. crfgui -d "05:10:00" starts the GUI and displays information from the database which is 5 hours and 10 minutes in the past from the current time.
{{{
$ crfgui <-- to invoke on local node
$ crfgui -m <nodename> <-- from a client
$ crfgui -r 5 -m <nodename> <-- to change the refresh rate to 5, default is 1
$ crfgui -d "<hh>:<mm>:<ss>" -m <nodename> <-- Invoking the GUI with '-d' option starts it in historical mode.
}}}
3) __''The oclumon''__ - a command-line tool included in the package
{{{
$ oclumon -h
$ oclumon dumpnodeview -v -allnodes -last "00:30:00" <-- which will dump all stats for all nodes for last 30 minutes from the current time (includes process & device)
$ oclumon dumpnodeview -allnodes -s "2008-11-12 12:30:00" -e "2008-11-12 13:30:00" <-- which will dump stats for all nodes from 12:30 to 13:30 on Nov 12th, 2008
$ oclumon dumpnodeview -allnodes <-- To find the timezone on the servers in the cluster
$ oclumon dumpnodeview -v -n mynode -last "00:10:00" <-- will dump all stats for 'mynode' for last 10 minutes
$ oclumon dumpnodeview -v -allnodes -alert -last "00:30:00" <-- To use oclumon to query for alerts only, use the '-alert' option which will dump all records for all
nodes for last 30 minutes, which contains at least one alert.
}}}
{{{
Some useful attributes that can be passed to oclumon are
1. Showobjects
/usr/lib/oracrf/bin/oclumon showobjects -n stadn59 -time "2008-06-03 16:10:00"
2. Dumpnodeview
/usr/lib/oracrf/bin/oclumon dumpnodeview -n halinux4
3. Showgaps - The output of that command can be used to see if OSwatcher was not scheduled. This generally means some
problem with CPU scheduling or very high load on the node. Generally Cluster Health Monitor should always
be scheduled since it is running as RT process.
/usr/lib/oracrf/bin/oclumon showgaps -n celx32oe40d \
-s "2009-07-09 02:40:00" -e "2009-07-09 03:59:00"
Number of gaps found = 0
4. Showtrail
$/usr/lib/oracrf/bin/oclumon showtrail -n celx32oe40d -diskid \
sde qlen totalwaittime -s "2009-07-09 03:40:00" \
-e "2009-07-09 03:50:00" -c "red" "yellow" "green"
Parameter=QUEUE LENGTH
2009-07-09 03:40:00 TO 2009-07-09 03:41:31 GREEN
2009-07-09 03:41:31 TO 2009-07-09 03:45:21 GREEN
2009-07-09 03:45:21 TO 2009-07-09 03:49:18 GREEN
2009-07-09 03:49:18 TO 2009-07-09 03:50:00 GREEN
Parameter=TOTAL WAIT TIME
$/usr/lib/oracrf/bin/oclumon showtrail -n celx32oe40d -sys cpuqlen \
-s "2009-07-09 03:40:00" -e "2009-07-09 03:50:00" \
-c "red" "yellow" "green"
Parameter=CPU QUEUELENGTH
2009-07-09 03:40:00 TO 2009-07-09 03:41:31 GREEN
2009-07-09 03:41:31 TO 2009-07-09 03:45:21 GREEN
2009-07-09 03:45:21 TO 2009-07-09 03:49:18 GREEN
2009-07-09 03:49:18 TO 2009-07-09 03:50:00 GREEN
-- times for which the nicid eth1 has problems
./oclumon showtrail -n halinux4 -nicid eth1 effectivebw errors -c "red" "yellow" "orange" "green"
The above command shows the times for which the nicid eth1 has problems. The output is color-coded:
green means good, yellow means not good but not exactly bad, and red means problems.
Similarly we can use the showtrail option to show CPU load
./oclumon showtrail -n halinux4 -sys usagepc cpuqlen cpunumprocess openfds numrt numofiosps lowmem memfree -c "red" "yellow"
From the above output we can see that lowmem is in red all the time. Now we can get the details of that lowmem usage using
./oclumon dumpnodeview -n halinux4 -s "2008-11-24 20:26:55" -e "2008-11-24 20:30:21"
}}}
__''Other Utilities''__
ologdbg: This utility provides a debug mode loggerd daemon
__''The Metrics''__
1) CPU
If a process consumes all of one CPU on a 4-CPU system, the value reported
is 100% for this process, and CPU usage is aggregated system-wide.
2) Data Sample retention
How much history of OS metrics is kept in Berkeley DB?
By default the database retains the node views from all the nodes for the last
24 hours in a circular manner. However this limit can be increased to 72 hours
by using oclumon command : 'oclumon manage -bdb resize 259200'.
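For example (the size argument is in seconds, so 72 hours is 72 x 3600 = 259200):
{{{
# grow the repository from the default 24h of node views to 72h
oclumon manage -bdb resize 259200
}}}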
3) Process priority
What does the PRIORITY of a process mean?
The Linux nice values range from -20 to 19. There is a static priority and there
is a nice value; we report the dynamic nice value only. We report a positive
priority in the range 0-39 for non-RealTime processes. Processes in the RT class
are reported to have priorities from 41 to 139. This way a consistent "high
number means high priority" value is reported across platforms. The math
used is (19 - nice_val) for non-RT and (40 + rtprio) for RT processes, where
nice_val and rtprio are the corresponding fields in /proc/<pid>/stat. This
is consistent with the Unix utility 'ps'. Also note that the Unix utility 'top'
reports priority and nice as two different values, which differ from
what IPD-OS reports.
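To peek at the raw fields this math is based on (a generic sketch, not an IPD-OS tool; fields 18 and 19 of /proc/<pid>/stat are priority and nice, and the awk field numbers shift if the comm field contains spaces):
{{{
# e.g. for a non-RT shell at nice 0, IPD-OS would report 19 - 0 = 19
awk '{print "priority:", $18, " nice:", $19}' /proc/$$/stat
}}}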
4) Disk devices
Some disk devices are missing from the device view
This can happen for two reasons:
* We only collect and show the top 127 devices (ranked by wait time on the
disk) in the output. OCR/VOTING/ASM/SWAP devices are pinned forever.
So, the missing device may have just fallen off of this list if you
have more than 127 devices (luns).
* The disks were added after the Cluster Health Monitor was started. In this
case, just restart the Cluster Health Monitor stack. Future versions of
Cluster Health Monitor will be able to handle this case without restart.
! Data Collection
For Oracle 11.2 RAC installations use the diagcollection script that comes with Cluster Health Monitor:
{{{
/usr/lib/oracrf/bin/diagcollection.pl --collect --ipd
}}}
For other versions run
{{{
/usr/lib/oracrf/bin/oclumon dumpnodeview -allnodes -v -last "23:59:59" > <your-directory>/<your-filename>
}}}
Make sure <your-directory> has more than 2GB of space to create the file <your-filename>.
Zip or compress <your-filename> before uploading to the Service Request.
Also update the SR with the information when (date and time) you have observed a specific issue.
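Putting it together (a sketch; the dump file name and location here are made up):
{{{
# dump the last ~24h of node views, then compress before attaching to the SR
/usr/lib/oracrf/bin/oclumon dumpnodeview -allnodes -v -last "23:59:59" > /tmp/chm_dump.txt
gzip -9 /tmp/chm_dump.txt
}}}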
! On the Client side
The tool can be used by customers to monitor their nodes online or offline. Generally, when working with Oracle Support, the data is viewed offline.
Online mode can be used to detect problems live on the customer environment. The data can be viewed using the Cluster Health Monitor utility /usr/lib/oracrf/bin/crfgui. The GUI is not installed on the server nodes but can be installed on any other client using
{{{
-- Create the following directories...
<directory>/oracrf_gui <--- the install directory
<directory>/oracrf_installer <--- put the installer here
chown -R crfuser:root <directory>/oracrf*
-- GUI installation
crfinst.pl -g <Install_dir>
}}}
''1.'' For example, to look at the load on a node you can run:
{{{
/usr/lib/oracrf/bin/crfgui.sh -m <Nodename>
}}}
The default refresh rate for this GUI is 1 second. To change the refresh rate to 5 seconds, execute
{{{
/usr/lib/oracrf/bin/crfgui.sh -n <Node_to_be_monitored> -r 5
}}}
''2.'' Another attribute that can be passed to the tool is -d. This is used to view the data in the past from the current time. So if there was a node reboot 4 hours ago and you need to look at the data about 10 minutes before the reboot, you would pass -d "04:10:00"
{{{
/usr/lib/oracrf/bin/crfgui.sh -d "04:10:05"
}}}
All the above usage scenarios require GUI access to the nodes.
! Mining the dumps
{{{
[karao@karl Downloads]$ less dump_20110103.txt | grep topcpu | less
[karao@karl Downloads]$ less dump_20110103.txt | grep "#cpu" | less
[karao@karl Downloads]$ less dump_20110103.txt | grep "type:" | less
[karao@karl Downloads]$ less dump_20110103.txt | grep "spent too much time" | less
[karao@karl Downloads]$ less dump_20110103.txt | grep "eth" | less
[karao@karl Downloads]$ less dump_20110103.txt | grep "OCR" | less
}}}
! Installation troubleshooting
''Log file location ''
/usr/lib/oracrf/log/hostname/crfmond/crfmond.log
''Config file location''
/usr/lib/oracrf/admin/crfnhostname.ora
''You can do strace''
{{{
/etc/init.d/init.crfd stop
/etc/init.d/init.crfd disable
/etc/init.d/init.crfd enable
strace -fo /tmp/crf_start.out /etc/init.d/init.crfd start
}}}
then upload the generated crf_start.out file.
''The typical config file''
[root@racnode1 ~]# cat /usr/lib/oracrf/admin/crfracnode1.ora
HOSTS=racnode2,racnode1
CRFHOME=/usr/lib/oracrf
MYNAME=racnode1
BDBLOC=/u01/oracrf
USERNAME=crfuser
MASTERPUB=192.168.203.12
MASTER=racnode2
REPLICA=racnode1
DEAD=
ACTIVE=racnode2,racnode1
[root@racnode2 ~]# cat /usr/lib/oracrf/admin/crfracnode2.ora
HOSTS=racnode2,racnode1
CRFHOME=/usr/lib/oracrf
MYNAME=racnode2
BDBLOC=/u01/oracrf
USERNAME=crfuser
DEAD=
MASTERPUB=192.168.203.12
MASTER=racnode2
STATE=mutated
ACTIVE=racnode2,racnode1
REPLICA=racnode1
http://www.debian-administration.org/articles/551
https://www.shellcheck.net/
https://fiddles.io/ - there's a fiddle for that
http://www.smashingapps.com/2014/05/19/10-code-playgrounds-for-developers.html
jsFiddle
Test your JavaScript, CSS, HTML or CoffeeScript online with JSFiddle code editor.
LiveGap Editor
Free Online Html Editor with Syntax highlighting, live preview, code folding, fullscreen mode, themes, matching tags, auto completion, finding tags, frameWork and closing tags.
Codepen
CodePen is an HTML, CSS, and JavaScript code editor in your browser with instant previews of the code you see and write.
Cssdesk
Google Code Playground
The AJAX Code Playground is an educational tool to show code examples for various Google Javascript APIs.
jsbin
HTML, CSS, JavaScript playground that you can host on your server.
Editr
Ideone
Ideone is something more than a pastebin; it’s an online compiler and debugging tool which allows to compile and run code online in more than 40 programming languages.
Sqlfiddle
Application for testing and sharing SQL queries.
Chopapp
A little app from ZURB that lets people slice up bad code and share their feedback to help put it back together.
Gistboxapp
GistBox is the best interface to GitHub Gists. Organize your snippets with labels. Edit your code. Search by description. All in one speedy app.
D3-Playground
mongo , no-sql databases
https://mongoplayground.net/
bash playground
https://repl.it/repls/WorthyAbandonedDaemon
python
https://pyfiddle.io/
http://pythonfiddle.com/
! jupyter notebooks online
https://colab.research.google.com/notebooks/intro.ipynb <- run beam code for free, free GPUs
https://paiza.cloud/containers <- fast response time
https://notebooks.azure.com <- way slow but it works
cocalc.com <- meeh
CODING PRACTICE https://exercism.io/my/tracks/python
! cloud IDE
https://www.codeinwp.com/blog/best-cloud-ide/
<<<
If you just need to execute and share snippets of code, you should try JSFiddle or CodePen.
If you would like to create notebooks with a combination of Markdown and code outputs, you can give Azure Notebooks or Observable a try.
If you want an alternative to a local development environment, you should try out Google Cloud Shell.
If you would like a complete end-to-end solution, you should try Codeanywhere, Codenvy or Repl.it.
<<<
.
https://www.sonarlint.org/features/
https://www.castsoftware.com/products/code-analysis-tools
http://www.codecademy.com/en/tracks/python
! twitter globe sentiment analysis
http://challengepost.com/software/twitter-stream-globe
https://github.com/twitterdev/twitter-stream-globe
made by this guy http://joncipriano.com/#home
another platform you can use https://github.com/dataarts/webgl-globe
<<<
programming language framework
> data types
> conditional statements
> loops
> functions
> classes
<<<
''stories''
http://www.quora.com/What-do-full-time-software-developers-think-of-Codecademy-and-Code-School#
180 websites in 180 days http://jenniferdewalt.com/
http://irisclasson.com/2012/07/13/my-first-year-of-programming-july-11-2011-july-12-2012/
build your first iOS app http://www.lynda.com/articles/photographer-build-an-app, http://mikewong.me/how-to-build-your-first-ios-app/
http://rileyh.com/how-i-learned-to-code-in-under-10-months/
http://kodeaweso.me/is-full-stack-development-possible-in-windows/
knowledge to practice http://www.vit.vic.edu.au/prt/pages/3-applying-knowledge-to-practice-41.aspx
Theory and Research-based Principles of Learning http://www.cmu.edu/teaching/principles/learning.html
How do I learn to code? http://www.quora.com/How-do-I-learn-to-code-1/answer/Andrei-Soare?srid=Xff&share=1 , https://www.talentbuddy.co/blog/seven-villains-you-have-to-crush-when-learning-to-code/
write code every fucking day http://kaidez.com/write-code-every-f--king-day/
before you learn to code ask yourself why http://blog.underdog.io/post/129654418712/before-you-learn-to-code-ask-yourself-why
http://rob.conery.io/2015/10/06/how-to-learn-a-new-programming-language-while-maintaining-your-day-job-and-still-being-there-for-your-family/
https://medium.freecodecamp.com/being-a-developer-after-40-3c5dd112210c#.q2bocwagw
http://www.crashlearner.com/learn-to-code/
https://www.techinasia.com/talk/learn-to-learn-like-a-developer
(Programming|Computer) Language or Code https://gerardnico.com/wiki/language/start
Write Like A Programmer https://qntm.org/write
Programming Languages Don't Matter to Programmers https://github.com/t3rmin4t0r/notes/wiki/Language-Choice-and-Project-lifetimes
Things I wish I knew when I started Programming https://www.youtube.com/watch?v=GAgegNHVXxE <- this is the dude
* getting real - 37 signals - the smarter, faster, easier way to build a successful web app
* the phoenix project
* the power of customer behavior
* tao te programming
* R data structures and algorithms
* data modeling by example series
* the styles of database development
* oracle sql perf tuning and optimization
* pro active record https://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=pro+active+record
* Refactoring Databases: Evolutionary Database Design (2007)
* Refactoring to patterns (2008)
* @@Applied Rapid Development Techniques for Database Engineers file:///C:/Users/karl/Downloads/applied-rapid-development-techniques-for-database-engineers.pdf@@
<<<
nice writeup by Dominic on the overall workflow of database development. he brought up cool ideas like:
* social database development
* enabling audit trail for DDL to monitor progress of developers and changes across environments
<<<
! schools/camps
http://www.zappable.com/2012/11/chart-for-learning-a-programming-langauge/
http://www.codeacademy.com/
http://www.ocwconsortium.org/ <-- was originally popularized by MIT’s 2002 move to put its course materials online
https://www.coursera.org/ <-- find a wealth of computer science courses from schools not participating in the OCW program
https://www.khanacademy.org/cs <-- includes science, economics, and yes, computer science.
Dash by General Assembly
udacity
codeschool
learnstreet
thinkful
http://venturebeat.com/2014/05/10/before-you-quit-your-job-to-become-a-developer-go-down-this-6-point-checklist/
http://venturebeat.com/2013/10/31/the-7-best-ways-to-learn-how-to-code/
! ''hadoop, big data''
{{{
Interesting Indian company. There are not a lot of players in the area of (online) big data education; Lynda.com, where I'm subscribed, does not have big data courses.
And this one is estimated to make $3M in FY14
http://www.edureka.in/company
http://www.edureka.in/company#media
http://www.edureka.in/hadoop-admin#CourseCurriculum
http://www.edureka.in/big-data-and-hadoop#CourseCurriculum
http://www.edureka.in/data-science#CourseCurriculum
}}}
! web dev
http://www.codengage.io/
{{{
Part 1 - Ruby and Object-Oriented Design
Part 2 - The Web, SQL, and Databases
Part 3 - ActiveRecord, Basic Rails, and Forms
Part 4 - Authentication and Advanced Rails
Part 5 - Javascript, jQuery, and AJAX
Part 6 - Opus Project, Useful Gems, and APIs
}}}
''HTML -> CSS -> JQuery -> Javascript programming'' path
{{{
https://www.khanacademy.org/cs
html and CSS http://www.codecademy.com/tracks/web
html dog - html,css,javascript http://htmldog.com/guides/
fundamentals of OOP and javascript http://codecombat.com/
ruby on rails http://railsforzombies.org/
http://www.w3fools.com/
https://www.codeschool.com
Dash https://dash.generalassemb.ly/ which is interactive..make a CSS robot! You can also check http://skillcrush.com/
}}}
see other paths here [[immersive code camps]]
! data
''Data Analysis Learning Path''
http://www.mysliderule.com/learning-paths/data-analysis/learn/
http://www.businessinsider.com/free-online-courses-for-professionals-2014-7
https://www.datacamp.com/ <-- R tutorials, some are paid
! vendor dev communities
https://developer.microsoft.com/en-us/collective/learning/courses?utm_campaign=DC19&utm_source=Instagram&utm_medium=Social&utm_content=CC36_videocard&utm_term=Grow
! meetup.com
* get all previous meetups https://webapps.stackexchange.com/questions/47707/how-to-get-all-meetups-ive-been-to
http://www.rackspace.com/cloud/blog/2011/05/17/infographic-evolution-of-computer-languages/
https://mremoteng.atlassian.net/wiki/display/MR/List+of+Free+Tools+for+Open+Source+Projects
http://www.headfirstlabs.com/books/hfda/
http://www.headfirstlabs.com/books/hfhtml/
http://www.headfirstlabs.com/books/hfhtml5prog/
http://www.headfirstlabs.com/books/hfjs/
http://www.headfirstlabs.com/books/hfjquery/
HF C http://shop.oreilly.com/product/0636920015482.do
HF jQuery http://shop.oreilly.com/product/0636920012740.do
HF mobile web http://shop.oreilly.com/product/0636920018100.do
HF iPhone dev http://shop.oreilly.com/product/9780596803551.do
http://venturebeat.com/2012/09/17/why-everyone-should-code/
''Long term trends on programming language'' http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
''Measuring programming popularity'' http://en.wikipedia.org/wiki/Measuring_programming_language_popularity
''10,000 hours'' http://norvig.com/21-days.html
http://venturebeat.com/2013/08/06/tynker-code-kids/
http://www.impactlab.net/2014/02/25/23-developer-skills-that-will-keep-you-employed-forever/
@@Try R http://tryr.codeschool.com/levels/1/challenges/1@@
file:///Volumes/T5_2TB/system/Users/kristofferson.a.arao/Dropbox2/Box%20Sync/bin/codeninja_comparison/codeninja_comparison.html (open with firefox)
https://github.com/andreis/interview
Cold failover for a single instance RAC database https://blogs.oracle.com/XPSONHA/entry/cold_failover_for_a_single_ins
Name: MptwSmoke
Background: #fff
Foreground: #000
PrimaryPale: #F5F5F5
PrimaryLight: #5C84A8
PrimaryMid: #111
PrimaryDark: #000
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
http://blogs.oracle.com/clive/entry/colour_dtrace
http://blogs.oracle.com/vlad/entry/coloring_dtrace_output
http://blogs.oracle.com/ahl/entry/open_sourcing_the_javaone_keynote
{{{
alter table credit_rating modify (person_id encrypt);
-- if you plan to create indexes on an encrypted column, you must create it with NO SALT
-- see if the columns in question are part of a foreign key relationship.
ALTER TABLE orders MODIFY (credit_card_number ENCRYPT NO SALT);
-- rekey the master key
alter system set key identified by "e3car61";
-- rekey the column keys without changing the encryption algorithm:
ALTER TABLE employee REKEY;
CREATE TABLE test_lob (
id NUMBER(15)
, clob_field CLOB
, blob_field BLOB
, bfile_field BFILE
)
/
alter table test_lob modify (clob_field encrypt no salt);
-- error on 11gR1
04:33:36 HR@db01> alter table test_lob modify (clob_field encrypt no salt);
alter table test_lob modify (clob_field encrypt no salt)
*
ERROR at line 1:
ORA-43854: use of a BASICFILE LOB where a SECUREFILE LOB was expected
-- error on 11gR2
00:06:54 HR@dbv_1> alter table test_lob modify (clob_field encrypt no salt);
alter table test_lob modify (clob_field encrypt no salt)
*
ERROR at line 1:
ORA-43856: Unsupported LOB type for SECUREFILE LOB operation
-- table should be altered to securefile first.. then encrypt
CREATE TABLE test1 (doc CLOB ENCRYPT USING 'AES128')
LOB(doc) STORE AS SECUREFILE
(CACHE NOLOGGING );
this of course can be done with online redef http://gjilevski.com/2011/05/11/migration-to-securefiles-using-online-table-redefinition-in-oracle-11gr2/
http://www.oracle-base.com/articles/11g/secure-files-11gr1.php#migration_to_securefiles
see tiddler about dbms_redef
}}}
! migration to securefiles
{{{
-- query table info
col column_name format a30
select table_name, column_name, securefile, encrypt from user_lobs;
TABLE_NAME COLUMN_NAME SEC
------------------------------ ------------------------------ ---
TEST_LOB CLOB_FIELD NO
TEST_LOB BLOB_FIELD NO
col clob format a30
col blob format a30
SELECT
id
, clob_field "Clob"
, UTL_RAW.CAST_TO_VARCHAR2(blob_field) "Blob"
FROM hr.test_lob;
-- create interim table
CREATE TABLE hr.test_lob_tmp (
id NUMBER(15)
, clob_field CLOB
, blob_field BLOB
, bfile_field BFILE
)
LOB(clob_field) STORE AS SECUREFILE (CACHE)
/
alter table hr.test_lob_tmp modify (clob_field encrypt no salt);
-- after encrypt and migration to securefiles
select table_name, column_name, securefile, encrypt from user_lobs;
TABLE_NAME COLUMN_NAME SEC ENCR
------------------------------ ------------------------------ --- ----
TEST_LOB CLOB_FIELD NO NONE
TEST_LOB BLOB_FIELD NO NONE
TEST_LOB_TMP CLOB_FIELD YES YES
TEST_LOB_TMP BLOB_FIELD NO NONE
-- do the redefinition
begin
execute immediate 'ALTER SESSION ENABLE PARALLEL DML';
execute immediate 'ALTER SESSION FORCE PARALLEL DML PARALLEL 4';
execute immediate 'ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4';
dbms_redefinition.start_redef_table
(
uname => 'HR',
orig_table => 'TEST_LOB',
int_table => 'TEST_LOB_TMP',
options_flag => dbms_redefinition.CONS_USE_ROWID
);
end start_redef;
/
ERROR at line 1:
ORA-12088: cannot online redefine table "HR"."TEST_LOB" with unsupported datatype
ORA-06512: at "SYS.DBMS_REDEFINITION", line 52
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1631
ORA-06512: at line 5
Do not attempt to online redefine a table containing a LONG column, an ADT column, or a FILE column. <-- of course!
}}}
! migration to securefiles.. 2nd take.. without the bfile
{{{
mkdir -p /home/oracle/oralobfiles
grant create any directory to hr;
DROP TABLE test_lob CASCADE CONSTRAINTS
/
CREATE TABLE test_lob (
id NUMBER(15)
, clob_field CLOB
, blob_field BLOB
)
/
CREATE OR REPLACE DIRECTORY
EXAMPLE_LOB_DIR
AS
'/home/oracle/oralobfiles'
/
INSERT INTO test_lob
VALUES ( 1001
, 'Some data for record 1001'
, '48656C6C6F' || UTL_RAW.CAST_TO_RAW(' there!')
);
COMMIT;
col clob format a30
col blob format a30
SELECT
id
, clob_field "Clob"
, UTL_RAW.CAST_TO_VARCHAR2(blob_field) "Blob"
FROM test_lob;
######
-- create interim table
CREATE TABLE hr.test_lob_tmp (
id NUMBER(15)
, clob_field CLOB
, blob_field BLOB
)
LOB(clob_field) STORE AS SECUREFILE (CACHE)
/
alter table hr.test_lob_tmp modify (clob_field encrypt no salt);
-- after encrypt and migration to securefiles
select table_name, column_name, securefile, encrypt from user_lobs;
TABLE_NAME COLUMN_NAME SEC ENCR
------------------------------ ------------------------------ --- ----
TEST_LOB CLOB_FIELD NO NONE
TEST_LOB BLOB_FIELD NO NONE
TEST_LOB_TMP CLOB_FIELD YES YES
TEST_LOB_TMP BLOB_FIELD NO NONE
-- do the redefinition
begin
dbms_redefinition.start_redef_table
(
uname => 'HR',
orig_table => 'TEST_LOB',
int_table => 'TEST_LOB_TMP',
options_flag => dbms_redefinition.CONS_USE_ROWID
);
end start_redef;
/
begin
dbms_redefinition.sync_interim_table(
uname => 'HR',
orig_table => 'TEST_LOB',int_table => 'TEST_LOB_TMP');
end;
/
begin
dbms_redefinition.finish_redef_table
(
uname => 'HR',
orig_table => 'TEST_LOB',
int_table => 'TEST_LOB_TMP'
);
end;
/
select table_name, column_name, securefile, encrypt from user_lobs;
TABLE_NAME COLUMN_NAME SEC ENCR
------------------------------ ------------------------------ --- ----
TEST_LOB_TMP CLOB_FIELD NO NONE
TEST_LOB_TMP BLOB_FIELD NO NONE
TEST_LOB CLOB_FIELD YES YES <-- it works!!
TEST_LOB BLOB_FIELD NO NONE
13:38:55 HR@db01> desc test_lob
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------- --------------------------------------------------------------------------------------------------------------------
ID NUMBER(15)
CLOB_FIELD CLOB ENCRYPT
BLOB_FIELD BLOB
}}}
http://documentation.commvault.com/dell/release_7_0_0/books_online_1/english_us/features/third_party_command_line/third_party_command_line.htm
http://documentation.commvault.com/dell/release_7_0_0/books_online_1/english_us/features/cli/rman_scripts.htm
http://www.streamreader.org/serverfault/questions/140055/commvault-oracle-rman-restore-to-new-host <-- SAMPLE COMMAND
http://www.orafaq.com/wiki/Oracle_database_Backup_and_Recovery_FAQ
<<<
IaaS - cloud provider
DBaaS - managed database, you don't need to maintain the underlying OS
PaaS - OS is managed for you, just deploy the app/software package
MLaaS - machine learning as a service
<<<
http://husnusensoy.wordpress.com/2008/02/01/using-oracle-table-compression/
Restrictions
http://oracle-randolf.blogspot.com/2010/07/compression-restrictions.html
''SOA 11G Database Growth Management Strategy'' http://www.oracle.com/technetwork/database/features/availability/soa11gstrategy-1508335.pdf
{{{
=CONCATENATE(G4,"-",C4)
}}}
concatenate percent
http://answers.yahoo.com/question/index?qid=20080605090839AA6Dnxk
http://www.wikihow.com/Apply-Conditional-Formatting-in-Excel
http://www.podcast.tv/video-episodes/excel-2011-conditional-formatting-12937144.html
http://www.cyberciti.biz/hardware/5-linux-unix-commands-for-connecting-to-the-serial-console/
Find out information about your serial ports
{{{
$ dmesg | egrep --color 'serial|ttyS'
$ setserial -g /dev/ttyS[0123]
}}}
{{{
#1 cu command
#2 screen command
#3 minicom command
#4 putty command
#5 tip command
}}}
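Typical invocations (a sketch; device and baud rate depend on your setup):
{{{
# screen: attach to the first serial port at 115200 baud (Ctrl-a k to quit)
screen /dev/ttyS0 115200
# cu: same port at 9600 baud (type ~. to disconnect)
cu -l /dev/ttyS0 -s 9600
}}}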
What is Connection Management Call Elapsed Wait and How to Improve It (Doc ID 1936329.1)
Best Practices for Application Performance, Scalability, and Availability
https://support.oracle.com/epmos/main/downloadattachmentprocessor?attachid=1936329.1%3ABESTPRACTICE&action=inline
https://support.oracle.com/epmos/main/downloadattachmentprocessor?attachid=1380043.1%3A2014_JONES-20141002&action=inline
https://wiki.apache.org/hadoop/ConnectionRefused
https://stackoverflow.com/questions/28661285/hadoop-cluster-setup-java-net-connectexception-connection-refused
https://apple.stackexchange.com/questions/153589/trying-to-get-hadoop-to-work-connection-refused-in-hadoop-and-in-telnet
<<<
APPLIES TO:
Enterprise Manager for Oracle Database - Version 12.1.0.4.0 and later
Information in this document applies to any platform.
GOAL
This document describes the restrictions on multi-level resource plan creation in the 12.1.0.4 DB plugin.
SOLUTION
we were going to discourage users from using multi-level resource plans for 2 reasons:
(1) Most customers misinterpret how these multi-level plans work. Therefore, their multi-level plans do not work as they expect.
(2) Multi-level plans are not supported for PDBs or CDBs.
By default, SYS_GROUP is a consumer group that contains user sessions logged in as SYS. Resource Manager will control the CPU usage of sessions in SYS_GROUP. These SYS sessions include job scheduler slaves and automated maintenance tasks. However, any background work, such as LMS or PMON or DBWR or LGWR, is not managed in this consumer group. These backgrounds use very little CPU and are hence not managed by Resource Manager. The advantage of using Resource Manager is that these critical background processes do not have to compete with a heavy load of foreground processes to be scheduled by the O/S.
We have no plans for desupporting multi-level resource plans. However, we have decided on the following:
- Resource Plans for a PDB are required to be single-level, are limited to 8 consumer groups, and cannot contain subplans.
- Enterprise Manager does not support the creation of new multi-level resource plans. However, it will continue to support editing of existing multi-level resource plans. In addition, the PL/SQL interface can be used to create multi-level resource plans.
- We are actively encouraging customers not to use multi-level plans. The misconception shown in the “Common Mistakes” slide deck seems to be very pervasive and we feel that the single-level plans are sufficiently powerful for most customers.
The “Resource Manager – 12c” slide deck contains an overview of all the Resource Manager features, as of 12.1.0.1.
The “Resource Manager – Common Mistakes” slide deck contains various subtle “gotchas” with Resource Manager. There are a few slides on multi-level resource plans.
<<<
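Given that advice, a single-level plan is easy to put together with the PL/SQL interface the note mentions. A minimal sketch (the plan name, group names, and percentages are made up):
{{{
sqlplus -s / as sysdba <<'EOF'
-- single-level plan: every directive allocates at level 1 (mgmt_p1)
begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_consumer_group('OLTP_GROUP',  'oltp sessions');
  dbms_resource_manager.create_consumer_group('BATCH_GROUP', 'batch sessions');
  dbms_resource_manager.create_plan('DAYTIME_PLAN', 'single-level demo plan');
  dbms_resource_manager.create_plan_directive('DAYTIME_PLAN', 'OLTP_GROUP',
      'oltp gets most of the cpu', mgmt_p1 => 60);
  dbms_resource_manager.create_plan_directive('DAYTIME_PLAN', 'BATCH_GROUP',
      'batch', mgmt_p1 => 30);
  -- every plan must have a directive for OTHER_GROUPS
  dbms_resource_manager.create_plan_directive('DAYTIME_PLAN', 'OTHER_GROUPS',
      'everything else', mgmt_p1 => 10);
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/
EOF
}}}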
Using Consolidation Planner http://download.oracle.com/docs/cd/E24628_01/doc.121/e25179/consolid_plan.htm
Database as a Service using Oracle Enterprise Manager 12c Cookbook http://www.oracle.com/technetwork/oem/cloud-mgmt/em12c-dbaas-cookbook-1432364.pdf
''SPEC CPU2006'' http://www.spec.org/auto/cpu2006/Docs/result-fields.html
http://www.amd.com/us/products/server/benchmarks/Pages/specint-rate-base2006-four-socket.aspx
http://www.spec.org/cpu2006/results/res2010q1/cpu2006-20091218-09300.html
http://download.oracle.com/docs/cd/E24628_01/license.121/e24474/appendix_a.htm#BGBBAEDE <-- on the official doc
''This research is still in progress.. there will be more updates in the next few days.''
! Some things to validate/investigate here:
* are we doing the same thing on the CPUSPECRate? see what I'm doing here [[cpu - SPECint_rate2006]] vs the consolidation planner here [[em12c SPEC computation]]
** basically yes, but what I don't like about the em12c approach is getting the AVG(SPEC_RATE) across the diff hardware platforms with different config..although this will still serve the purpose of having a single currency system where you can compare how fast A is to Z. But normally what I would do is find the closest hardware for my source and get that SPEC number.. but here it's doing an AVG on the filtered samples
** the SPEC rate that the consolidation planner using is based on the ''SPEC Base number''.. and what I'm doing is ''Peak/Enabled Cores'' to get the ''SPECint_rate2006/core''
<<<
Here's the logic behind the SPEC search.. this is still a pretty cool stuff, but I would start on the hardware platform first. The thing is there's no way from the em12c side to get the server make and model from the MGMT_ECM_HW, MGMT$HW_CPU_DETAILS, and MGMT$OS_HW_SUMMARY views so there's really no way to start the search with the hardware platform BUT the consolidation planner allows you to override the SPEC values. Plus this tool is generic that you can use it on a non-database server.. so for them they need to come up with standard ways to derive things to productize it. And while I'm doing my investigation I came across the tables being used and the EMCT_* tables are tied not only to consolidation planner but also to the chargeback plugin.
{{{
-- Match with CPU Vendor
-- CPU Vendor not found, return AVG of current match
-- CPU Vendor matched, Now match with Cores
-- Cores not found, return AVG of current match + closest Cores match
-- CPU Vendor, Cores matched, Now match with CPU Family
-- Family not found, return AVG of current match
-- CPU Vendor, Family, Cores matched, Now match with Speed
-- Speed not found, return AVG of current match + closest Speed match
-- CPU Vendor, Cores, Family, Speed matched, Now match with Threads
-- No threads found, return AVG of current match + closest threads match
-- CPU Vendor, Cores, Family, Speed, Threads matched, Now match with Chips
-- Chips not found, return AVG of current match + closest chips match
-- CPU Vendor, Cores, Family, Speed, Threads, Chips matched, Now match with 1st Cache MB
-- 1st Cache MB not found, return AVG of current match + closest 1st Cache match
-- CPU Vendor, Cores, Family, Speed, Threads, Chipsi, 1st Cache matched, Now match with Memory GB
-- Memory GB not found, return AVG of current match + closest Memory GB match
-- CPU Vendor, Family, Cores, Speed, Threads, Chips, 1st Cache, Memory matched, Now match with System Vendor
-- System Vendor not found, return AVG of current match
}}}
The data points used by Consolidation Planner is here https://www.dropbox.com/s/41hjihib5xyz0lp/em12c_spec.csv
<<<
* comparison of the rollups with the AWR data
* can you do a stacked viz across 30+ databases?
** they did a pretty cool treemap (with different colors every 10% of utilization increase up to 100%) of what the resource load on the destination server would be across 31 days
* on consolidation planner the IO collection part is just an average across 30 days (but the range can be adjusted), the thing here is if you are consolidating 30+ databases you have to stack the data points across the time series and get their peaks and check any possible IO workload contentions.. that's where you know on which databases you'll be implementing IORM.
** here they're just getting the AVG IOPS and at the end, if you have a bunch of servers, they just add the averages together, come up with a final number, and account it against the destination server's capacity. Take note that it just gets the IOPS with no account of MB/s (see the sketch after this list)
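A minimal sketch of that stacking idea against the same SYSMAN views the planner reads (hourly granularity; the repository credentials here are placeholders):
{{{
sqlplus -s sysman/<password>@emrep <<'EOF'
-- stack per-host IOPS on a shared time axis and look at the combined peaks,
-- instead of summing 30-day averages the way the planner does
select collection_time,
       round(sum(avg_value), 2) stacked_avg_iops,
       round(sum(max_value), 2) stacked_max_iops
from   gc_metric_values_hourly
where  metric_column_name = 'totiosmade'
group  by collection_time
order  by collection_time;
EOF
}}}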
! Things consolidation planner can/cannot do
* give you the end utilization of the consolidated servers
** the problem here is, on a multi node environment it is also critical to see the utilization of each server when you have overlapping instances provisioned across different nodes
* scenario module is the "what if" thing where you'll feed a bunch of servers to consolidate then you'll be able to see if they fit on a particular platform let's say half rack exadata
** on prov worksheet, I can do scenarios where I would know what the end utilization of the rest of the servers if I shutdown one of the nodes.
* the cool 31 days utilization treemap
** I can do this on each resource by doing a stacked graph in Tableau on a time dimension of AWR data.. what's also nice about that is I can tell which instance I should watch out for (peaks & high resource usage)
! The SQL used by Consolidation Planner plugin
{{{
set colsep ',' lines 4000
SELECT g.target_guid,
MAX(g.target_name) ServerName ,
ROUND(emct_target.get_spec_rate(g.target_guid),2) CPUSPECRate,
MAX(DECODE(b.item_id,8014,b.value,NULL)) cpuuserspecrate,
MAX(ROUND((c.mem/1024),2)) MemoryGB,
MAX(c.disk) DiskStorageGB,
MAX(DECODE(a.metric_column_name,'cpuUtil',a.metric,NULL)) cpuutil,
MAX(DECODE(a.metric_column_name,'memUsedPct',a.metric,NULL)) memutil,
MAX(DECODE(a.metric_column_name,'totpercntused',a.metric,NULL)) diskutil,
MAX(d.vendor_name) CpuVendor,
MAX(d.impl) CpuName,
MAX(d.freq_in_mhz) FreqInMhz,
MAX(DECODE(b.item_id,8063,b.value,NULL)) userdiskiocps,
MAX(DECODE(b.item_id,8062,b.value,NULL)) userdiskiombps,
MAX(DECODE(b.item_id,8061,b.value,NULL)) usernetworkiombps,
MAX(DECODE(a.metric_column_name,'totiosmade',a.metric,NULL)) diskiocpsvalue,
MAX(DECODE(a.metric_column_name,'totiosmade',a.max_metric,NULL)) diskiocpsmaxvalue,
NULL AS diskiombpsvalue,
MAX(DECODE(a.metric_column_name,'totalNetworkThroughPutRate',a.metric,NULL)) networkiombpsvalue,
MAX(DECODE(a.metric_column_name,'totalNetworkThroughPutRate',a.max_metric,NULL)) networkiombpsmaxvalue,
MAX(g.target_type) Type,
MAX(c.os_summary) os_summary
FROM
(SELECT entity_guid,
metric_column_name,
ROUND(AVG(avg_value),2) AS metric,
ROUND(MAX(max_value),2) AS max_metric
FROM gc_metric_values_daily
WHERE entity_type ='host'
AND (entity_guid in (null) and 1=0)
AND metric_group_name in ('Load', 'DiskActivitySummary', 'TotalDiskUsage', 'NetworkSummary')
AND metric_column_name in ('cpuUtil','memUsedPct', 'totiosmade', 'totpercntused', 'totalNetworkThroughPutRate')
AND collection_time > (sysdate - 30)
GROUP BY entity_guid, metric_column_name
) a,
emct$latest_user_attrs b,
mgmt$os_hw_summary c ,
mgmt$hw_cpu_details d,
gc$target g
WHERE a.entity_guid(+) =g.target_guid
AND b.original_target_guid(+)=g.target_guid
AND c.target_guid(+) =g.target_guid
AND d.target_guid(+) =g.target_guid
AND g.target_type ='host'
GROUP BY g.target_guid
ORDER BY 2;
TARGET_GUID ,SERVERNAME ,CPUSPECRATE,CPUUSERSPECRATE, MEMORYGB,DISKSTORAGEGB, CPUUTIL, MEMUTIL, DISKUTIL,CPUVENDOR ,CPUNAME , FREQINMHZ,USERDISKIOCPS,USERDISKIOMBPS,USERNETWORKIOMBPS,DISKIOCPSVALUE,DISKIOCPSMAXVALUE,D,NETWORKIOMBPSVALUE,NETWORKIOMBPSMAXVALUE,TYPE ,OS_SUMMARY
--------------------------------,-----------------------------------------------------------------------------------------------------------------------------------,-----------,---------------,----------,-------------,----------,----------,----------,--------------------------------------------------------------------------------------------------------------------------------,--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------,----------,-------------,--------------,-----------------,--------------,-----------------,-,------------------,---------------------,----------------------------------------------------------------,----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
0C474BF51B89823AFE1040B6ADC7147C,desktopserver.local , -121, , 15.61, 4773.3, , , ,GenuineIntel ,Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz , 3401, 100000, , 125, , , , , ,host ,Oracle Linux Server release 5.7 2.6.32 200.13.1.el5uek(64-bit)
0EE088EC2D56D4DF9A747BBE24DFB7D8,emgc12c.local , -10.48, , 3.87, 2033.27, , , ,GenuineIntel ,Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz , 3401, , , , , , , , ,host ,Oracle Linux Server release 5.7 2.6.32 100.0.19.el5(64-bit)
05:25:50 SYS@emrep12c> select * from emct$latest_user_attrs;
DATA_SOURCE_ID,ORIGINAL_TARGET_GUID , ITEM_ID,APP_TYPE ,CAT_TARGET_GUID ,STRING_VALUE , VALUE,UPDATED_BY ,START_DAT,END_DATE
--------------,--------------------------------,----------,----------------,--------------------------------,--------------------------------------,----------,----------------------------------------------------------------------------------------------------------------------------------------------------------------,---------,---------
2,0C474BF51B89823AFE1040B6ADC7147C, 4001,cat_common_lib ,0C474BF51B89823AFE1040B6ADC7147C,Estimated , 121,cat.target ,19-OCT-12,
2,0C474BF51B89823AFE1040B6ADC7147C, 4003,cat_cpa_lib ,0C474BF51B89823AFE1040B6ADC7147C,Estimated , 121,cat.target ,19-OCT-12,
2,0C474BF51B89823AFE1040B6ADC7147C, 8061,cpa ,0C474BF51B89823AFE1040B6ADC7147C, , 125, ,19-OCT-12,
2,0C474BF51B89823AFE1040B6ADC7147C, 8063,cpa ,0C474BF51B89823AFE1040B6ADC7147C, , 100000, ,19-OCT-12,
-- bwahaha it's a package!
select ROUND(emct_target.get_spec_rate(g.target_guid),2) CPUSPECRate from gc$target g where rownum < 11;
CPUSPECRATE
-----------
-224.31
-121
-224.31
-10.48
-224.31
-224.31
-224.31
-224.31
-224.31
-224.31
10 rows selected.
col ServerName format a20
SELECT entity_guid,
MAX(g.target_name) ServerName,
metric_column_name,
ROUND(AVG(avg_value),2) AS metric,
ROUND(MAX(max_value),2) AS max_metric
FROM gc_metric_values_daily a, gc$target g
where a.entity_guid(+) =g.target_guid
and metric_column_name in ('cpuUtil','memUsedPct','totpercntused','totiosmade','totalNetworkThroughPutRate')
group by entity_guid,metric_column_name
order by 2,3;
ENTITY_GUID ,SERVERNAME ,METRIC_COLUMN_NAME , METRIC,MAX_METRIC
--------------------------------,--------------------,----------------------------------------------------------------,----------,----------
0C474BF51B89823AFE1040B6ADC7147C,desktopserver.local ,cpuUtil , 11.2, 16.6
0C474BF51B89823AFE1040B6ADC7147C,desktopserver.local ,memUsedPct , 28.93, 37.04
0C474BF51B89823AFE1040B6ADC7147C,desktopserver.local ,totalNetworkThroughPutRate , 0, .02
0C474BF51B89823AFE1040B6ADC7147C,desktopserver.local ,totiosmade , 813.13, 2725.41
0C474BF51B89823AFE1040B6ADC7147C,desktopserver.local ,totpercntused , 38.27, 38.28
0EE088EC2D56D4DF9A747BBE24DFB7D8,emgc12c.local ,cpuUtil , 17.19, 90.86
0EE088EC2D56D4DF9A747BBE24DFB7D8,emgc12c.local ,memUsedPct , 70.19, 73.36
0EE088EC2D56D4DF9A747BBE24DFB7D8,emgc12c.local ,totalNetworkThroughPutRate , .01, .31
0EE088EC2D56D4DF9A747BBE24DFB7D8,emgc12c.local ,totiosmade , 67.41, 473.91
0EE088EC2D56D4DF9A747BBE24DFB7D8,emgc12c.local ,totpercntused , 7.14, 7.19
10 rows selected.
06:09:16 SYS@emrep12c> select owner, object_name, object_type from dba_objects where object_name = 'GC_METRIC_VALUES_DAILY';
OWNER ,OBJECT_NAME ,OBJECT_TYPE
------------------------------,--------------------------------------------------------------------------------------------------------------------------------,-------------------
SYSMAN ,GC_METRIC_VALUES_DAILY ,VIEW
SYSMAN_RO ,GC_METRIC_VALUES_DAILY ,SYNONYM
col ServerName format a20
SELECT entity_guid,
TO_CHAR(COLLECTION_TIME,'MM/DD/YY HH24:MI:SS'),
metric_column_name,
ROUND(avg_value,2) AS metric,
ROUND(max_value,2) AS max_metric
FROM gc_metric_values_daily a
where metric_column_name in ('totiosmade')
order by 2,3;
ENTITY_GUID ,TO_CHAR(COLLECTIO,METRIC_COLUMN_NAME , METRIC,MAX_METRIC
--------------------------------,-----------------,----------------------------------------------------------------,----------,----------
0EE088EC2D56D4DF9A747BBE24DFB7D8,10/17/12 00:00:00,totiosmade , 53.28, 385.44
0EE088EC2D56D4DF9A747BBE24DFB7D8,10/18/12 00:00:00,totiosmade , 81.55, 473.91
0C474BF51B89823AFE1040B6ADC7147C,10/18/12 00:00:00,totiosmade , 813.13, 2725.41
gc_metric_values_daily
gc_metric_values_hourly
gc_metric_values_latest
ENTITY_TYPE NOT NULL VARCHAR2(64)
ENTITY_NAME NOT NULL VARCHAR2(256)
ENTITY_GUID NOT NULL RAW(16)
PARENT_ME_TYPE VARCHAR2(64)
PARENT_ME_NAME VARCHAR2(256)
PARENT_ME_GUID NOT NULL RAW(16)
TYPE_META_VER NOT NULL VARCHAR2(8)
TIMEZONE_REGION VARCHAR2(64)
METRIC_GROUP_NAME NOT NULL VARCHAR2(64)
METRIC_COLUMN_NAME NOT NULL VARCHAR2(64)
COLUMN_TYPE NOT NULL NUMBER(1)
COLUMN_INDEX NOT NULL NUMBER(3)
DATA_COLUMN_TYPE NOT NULL NUMBER(2)
METRIC_GROUP_ID NOT NULL NUMBER(38)
METRIC_GROUP_GUID NOT NULL RAW(16)
METRIC_GROUP_LABEL VARCHAR2(64)
METRIC_GROUP_LABEL_NLSID VARCHAR2(64)
METRIC_COLUMN_ID NOT NULL NUMBER(38)
METRIC_COLUMN_GUID NOT NULL RAW(16)
METRIC_COLUMN_LABEL VARCHAR2(64)
METRIC_COLUMN_LABEL_NLSID VARCHAR2(64)
DESCRIPTION VARCHAR2(128)
SHORT_NAME VARCHAR2(40)
UNIT VARCHAR2(32)
IS_FOR_SUMMARY NUMBER
IS_STATEFUL NUMBER
IS_TRANSPOSED NOT NULL NUMBER(1)
NON_THRESHOLDED_ALERTS NUMBER
METRIC_TYPE NOT NULL NUMBER(1)
USAGE_TYPE NOT NULL NUMBER(1)
METRIC_KEY_ID NOT NULL NUMBER(38)
NUM_KEYS NOT NULL NUMBER(1)
METRIC_KEY_VALUE VARCHAR2(256)
KEY_PART_1 NOT NULL VARCHAR2(256)
KEY_PART_2 NOT NULL VARCHAR2(256)
KEY_PART_3 NOT NULL VARCHAR2(256)
KEY_PART_4 NOT NULL VARCHAR2(256)
KEY_PART_5 NOT NULL VARCHAR2(256)
KEY_PART_6 NOT NULL VARCHAR2(256)
KEY_PART_7 NOT NULL VARCHAR2(256)
COLLECTION_TIME NOT NULL DATE
COLLECTION_TIME_UTC DATE
COUNT_OF_COLLECTIONS NOT NULL NUMBER(38)
AVG_VALUE NUMBER
MIN_VALUE NUMBER
MAX_VALUE NUMBER
STDDEV_VALUE NUMBER
AVG_VALUES_VARRAY NOT NULL SYSMAN.EM_METRIC_VALUE_ARRAY
MIN_VALUES_VARRAY NOT NULL SYSMAN.EM_METRIC_VALUE_ARRAY
MAX_VALUES_VARRAY NOT NULL SYSMAN.EM_METRIC_VALUE_ARRAY
STDDEV_VALUES_VARRAY NOT NULL SYSMAN.EM_METRIC_VALUE_ARRAY
}}}
Where to find MAXxxxxxx control file parameters in Data Dictionary
Doc ID: Note:104933.1
Nutanix, Cisco HyperFlex, and Dell/EMC VxRail, which is pretty much vSAN
https://www.reddit.com/r/sysadmin/comments/8oq5oi/nutanix_vs_vmware_vsan/
http://www.scribd.com/doc/19212001/To-convert-a-rac-node-using-asm-to-single-instance-node
''How to Convert a Single-Instance ASM to Cluster ASM [ID 452758.1]'' http://space.itpub.net/11134237/viewspace-687810
http://oracleinstance.blogspot.com/2010/07/converting-single-instance-to-rac.html
conference submission guidelines https://blogs.oracle.com/datawarehousing/entry/open_world_2015_call_for
Although this is very easy, handy notes are still helpful
http://oracle.ittoolbox.com/documents/how-to-copy-an-oracle-database-to-another-machine-18603
http://www.pgts.com.au/pgtsj/pgtsj0211b.html
http://www.adp-gmbh.ch/ora/admin/creatingdbmanually.html
intro to coreos
http://youtu.be/l4oaIW37tU4
SmartOS (ZFS, DTrace, Zones and KVM) vs CoreOS (kernel+containers(docker/lxc - full isolation, docker/nspawn - little isolation))
https://www.youtube.com/watch?v=TtseOQoGJtk
CoreOS howto at digitalocean, based on Gentoo (Portage package manager)
https://www.digitalocean.com/community/tutorial_series/getting-started-with-coreos-2
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-coreos-cluster-on-digitalocean
https://coreos.com/blog/digital-ocean-supports-coreos/
http://0pointer.net/blog/projects/stateless.html
STACKX User Guide
Doc ID: Note:362791.1
HOW TO HANDLE CORE DUMPS ON UNIX
Doc ID: Note:1007808.6
Segmentation Fault and Core Dump During Execution
Doc ID: Note:1012079.6
SOLARIS: SGA size, sgabeg attach address and Sun architectures
Doc ID: Note:61896.1
How To Debug a Core File
Doc ID: Note:559167.1
CoreUtils for Windows
http://gnuwin32.sourceforge.net/packages/coreutils.htm
Doc ID: 465714.1 "Count of Targets Not Uploading Data" Metric not Clearing Even if the Cause is Gone
{{{
1) Create a new DBFS file system for APAC Cutover.
2) Whereas the current dbfs file system is /dbfs/work,
The new file system would be mounted under /dbfs as /dbfs/apac
3) The new file system would be created initially with a max-size of 3TB.
4) The file for the new file system would be created on +RECO,
where there is currently about 49TB of usable space on PD01.
With the new 3TB dbfs file system, that would leave about 40TB.
5) This new file system would be temporary just for APAC cutover.
#################
To configure option #2 above, follow these steps:
Optionally create a second DBFS repository database.
Create a new tablespace and a DBFS repository owner account (database user) for the new DBFS filesystem as shown in step 4 above.
Create the new filesystem using the procedure shown in step 5 above, substituting the proper values for the tablespace name and desired filesystem name.
If using a wallet, you must create a separate TNS_ADMIN directory and a separate wallet. Be sure to use the proper ORACLE_HOME, ORACLE_SID, username and password when setting up those components.
Ensure you use the latest mount-dbfs.sh script attached to this note. Updates were made on 7-Oct-2010 to support multiple filesystems. If you are using previous versions of this script, download the new version and after applying the necessary configuration modifications in it, replace your current version.
To have Clusterware manage a second filesystem mount, use a second copy of the mount-dbfs.sh script. Rename it to a unique file name like mount-dbfs2.sh and place it in the proper directory as shown in step 16 above. Once mount-dbfs2.sh has been properly modified with proper configuration information, a second Clusterware resource (with a unique name) should be created. The procedure for this is outlined in step 17 above.
#################
###############
INSTALL
###############
create bigfile tablespace apac_tbs datafile '+DATA' size 500M autoextend on next 100M maxsize 1000M NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO ;
create user apac identified by welcome
default tablespace apac_tbs
temporary tablespace temp;
grant create session,
create table,
create procedure,
dbfs_role
to apac;
alter user apac quota unlimited on apac_tbs;
cd $ORACLE_HOME/rdbms/admin
sqlplus apac/welcome
@dbfs_create_filesystem_advanced.sql apac_tbs apac nocompress nodeduplicate noencrypt non-partition
-- create the file
/home/oracle/dba/bin/mount-dbfs-apac.sh
$ scp mount-dbfs-apac.sh td01db02:/home/oracle/dba/bin/
mount-dbfs-apac.sh 100% 8058 7.9KB/s 00:00
$ scp mount-dbfs-apac.sh td01db03:/home/oracle/dba/bin/
mount-dbfs-apac.sh 100% 8058 7.9KB/s 00:00
$ scp mount-dbfs-apac.sh td01db04:/home/oracle/dba/bin/
mount-dbfs-apac.sh 100% 8058 7.9KB/s 00:00
dcli -l root -g dbs_group mkdir /dbfs2
dcli -l root -g dbs_group chown oracle:oinstall /dbfs2
ACTION_SCRIPT=/home/oracle/dba/bin/mount-dbfs-apac.sh
RESNAME=ora.apac.filesystem
DBNAME=dbfs
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/11.2.0.3/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
-type local_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
SCRIPT_TIMEOUT=300"
crsctl start res ora.apac.filesystem
crsctl stop res ora.apac.filesystem
###############
CLEANUP
###############
crsctl stop res ora.apac.filesystem
sqlplus apac/welcome
@$ORACLE_HOME/rdbms/admin/dbfs_drop_filesystem.sql apac
crsctl delete resource ora.apac.filesystem -f
crsstat | grep -i files
select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' immediate;'
from v$session s
where s.username = 'APAC';
drop user apac cascade;
drop tablespace apac_tbs including contents and datafiles;
}}}
http://www.oracle.com/technetwork/articles/servers-storage-admin/howto-create-zones-ops-center-1737990.html
{{{
-- create user
create user alloc_app_perf identified by testalloc;
-- user sql
alter user "alloc_app_perf" default tablespace "bas_data" temporary tablespace "temp" account unlock ;
-- quotas
alter user "alloc_app_perf" quota unlimited on bas_data;
-- roles
grant alloc_app_r to alloc_app_perf;
grant select_catalog_role to alloc_app_perf;
grant resource to alloc_app_perf;
grant select any dictionary to alloc_app_perf;
grant advisor to alloc_app_perf;
grant create job to alloc_app_perf;
grant oem_monitor to alloc_app_perf;
grant administer any sql tuning set to alloc_app_perf;
grant administer sql management object to alloc_app_perf;
grant create any sql_profile to alloc_app_perf;
grant drop any sql_profile to alloc_app_perf;
grant alter any sql_profile to alloc_app_perf;
-- execute
grant execute on dbms_monitor to alloc_app_perf;
grant execute on dbms_application_info to alloc_app_perf;
grant execute on dbms_workload_repository to alloc_app_perf;
grant execute on dbms_xplan to alloc_app_perf;
grant execute on dbms_sqltune to alloc_app_perf;
grant execute on sys.dbms_lock to alloc_app_perf;
}}}
also create [[kill session procedure]]
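A minimal sketch of what that procedure could look like, generalizing the ALTER SYSTEM DISCONNECT SESSION pattern used in the cleanup section above (hypothetical name; the procedure owner needs direct grants on V$SESSION and ALTER SYSTEM):
{{{
create or replace procedure kill_user_sessions(p_username in varchar2) as
begin
  -- disconnect every session of the given user
  for r in (select sid, serial# from v$session where username = upper(p_username)) loop
    execute immediate 'alter system disconnect session '''||r.sid||','||r.serial#||''' immediate';
  end loop;
end;
/
}}}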
http://structureddata.org/2011/09/25/critical-skills-for-performance-work/
http://www.integrigy.com/oracle-security-blog/archive/2010/10/14/oracle-cpu-oct-2010-monster
XTTS Migrating a Mission Critical 40 TB Oracle E-Business Suite from HP Superdomes to Cisco Unified Computing System
http://www.cisco.com/c/en/us/solutions/collateral/servers-unified-computing/ucs-5100-series-blade-server-chassis/Whitepaper_c11-707249.html
HOWTO: Oracle Cross-Platform Migration with Minimal Downtime
http://www.pythian.com/news/3653/howto-oracle-cross-platform-migration-with-minimal-downtime/
Using Transportable Tablespace In Oracle Database 10g
http://avdeo.com/2009/12/22/using-transportable-tablespace-in-oracle-database-10g/
Migrating an Oracle database Solaris to Linux
http://blog.nominet.org.uk/tech/2006/01/18/migrating-an-oracle-database-solaris-to-linux/
Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backups [ID 1389592.1]
Platform Migration Using Transportable Database Oracle Database 11g and 10g Release 2
http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-platformmigrationtdb-131164.pdf
-- XTTS + RMAN
12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)
11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
https://dba.stackexchange.com/questions/137762/explain-plan-gives-different-results-depending-on-schema
https://magnusjohanssontuning.wordpress.com/2012/08/01/cursor-not-shared-for-different-users/
https://hourim.wordpress.com/2015/04/06/bind_equiv_failure-or-when-you-will-regret-using-adaptive-cursor-sharing/
https://blog.toadworld.com/why-my-execution-plan-has-not-been-shared-part-i
http://oracleinaction.com/parent-child-curosr/
http://www.jcon.no/oracle/?p=1032 11gR2: “Unlucky” combination of a new feature, a fix, application design and code
https://gavinsoorma.com/2012/09/a-look-at-parsing-and-sharing-of-cursors/
https://www.google.com/search?ei=eO0DXbzZLvKe_QbWyJzACA&q=oracle+same+sql+plan+hash+value+across+different+schemas&oq=oracle+same+sql+plan+hash+value+across+different+schemas&gs_l=psy-ab.3...13632.16295..16476...0.0..0.135.1457.13j3......0....1..gws-wiz.......0i71j35i304i39.nH4q-OfkkJg
<<showtoc>>
! pre-req
d3vienno https://www.youtube.com/playlist?list=PL6il2r9i3BqH9PmbOf5wA5E1wOG3FT22p
http://www.pluralsight.com/courses/d3js-data-visualization-fundamentals
http://www.pluralsight.com/courses/interactive-data-visualization-d3js
http://www.lynda.com/D3js-tutorials/Data-Visualization-D3js/162449-2.html
d3.js
Visualizing Oracle Data - ApEx and Beyond http://ba6.us/book/export/html/268, http://ba6.us/d3js_application_express_basic_dynamic_action
Mike Bostock http://bost.ocks.org/mike/
https://github.com/mbostock/d3/wiki/Gallery
http://mbostock.github.com/d3/tutorial/bar-1.html
http://mbostock.github.com/d3/tutorial/bar-2.html
AJAX retrieval using Javascript Object Notation (JSON) http://anthonyrayner.blogspot.com/2007/06/ajax-retrieval-using-javascript-object.html
http://dboptimizer.com/2012/01/22/ash-visualizations-r-ggplot2-gephi-jit-highcharts-excel-svg/
Videos:
http://css.dzone.com/articles/d3js-way-way-more-just-another
''Video tutorials:''
http://www.youtube.com/user/d3Vienno/videos
http://www.quora.com/D3-JavaScript-library/Whats-the-best-way-for-someone-with-no-background-to-learn-D3-js
r to d3 https://github.com/hadley/r2d3
http://www.r-bloggers.com/basics-of-javascript-and-d3-for-r-users/
''Articles''
javascript viz without d3 http://dry.ly/data-visualization-with-javascript-without-d3
json - data
html - structure
JS+D3 - layout
CSS - pretty
web development toolkit:
Chrome Developer Tools
readables:
mbostock.github.com/d3/api
book: JavaScript: The Good Parts by Douglas Crockford
browse: https://developer.mozilla.org/en/SVG
watch: vimeo.com/29458354
clone: GraphDB https://github.com/sones/sones
clone: Cube http://square.github.com/cube
clone: d3py https://github.com/mikedewar/D3py
http://code.hazzens.com/d3tut/lesson_0.html
books:
Interactive Data Visualization for the Web An Introduction to Designing with D3 http://shop.oreilly.com/product/0636920026938.do
http://www.slideshare.net/arnicas/interactive-data-visualization-with-d3js
! other libraries
dc.js - Dimensional Charting Javascript Library https://dc-js.github.io/dc.js/
https://community.hortonworks.com/articles/56636/hive-understanding-concurrent-sessions-queue-alloc.html
https://hortonworks.com/blog/introducing-tez-sessions/
https://stackoverflow.com/questions/25521363/apache-tez-architecture-explanation?rq=1
http://yaping123.wordpress.com/2008/09/02/db-link/
http://marcel.vandewaters.nl/oracle/database-oracle/creating-database-links-for-another-schema
{{{
select username, profile from dba_users where username in ('HCMREADONLY');
ALTER PROFILE APPLICATION_USER LIMIT PASSWORD_VERIFY_FUNCTION NULL;
alter user HCMREADONLY identified by HCMREADONLY;
ALTER PROFILE APPLICATION_USER LIMIT PASSWORD_VERIFY_FUNCTION VERIFY_FUNCTION;
~oracle/rac11gr2_mon.pl -h "/u01/app/11.2.0.3/grid" -d HCM2UAT
oradcli cat /u01/app/oracle/product/11.2.0.3/dbhome_1/network/admin/tnsnames.ora | grep -i HCM2UAT
conn sysadm/<password>
select * from global_name@ROHCM2UAT;
set linesize 121
col owner format a15
col db_link format a45
col username format a15
col password format a15
col host format a15
SELECT owner, db_link, username, host, created FROM dba_db_links;
col name format a20
select NAME,USERID,PASSWORD,PASSWORDX from link$;
}}}
{{{
SQL> CREATE DATABASE LINK systemoracle CONNECT TO system IDENTIFIED BY oracle USING 'dw';
Database link created.
SQL> select sysdate from dual@systemoracle;
SYSDATE
-----------------
20120410 15:07:57
SQL>
SQL> select * from v$instance@systemoracle;
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
--------------- ---------------- ---------------------------------------------------------------- ----------------- ----------------- ------------ --- ---------- ------- --------------- ---------- --- ----------------- ------------------ --------- ---
1 dw desktopserver.local 11.2.0.3.0 20120405 22:54:29 OPEN NO 1 STOPPED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO
SQL>
SQL> drop database link systemoracle;
Database link dropped.
set heading off
set echo off
set long 9999999
select dbms_metadata.get_ddl('USER', username) || ';' usercreate
from dba_users where username = 'SYSTEM';
06C70D7478FCFC00B4DBF384D2AF15886964CF872A2960378E4570ECFC0F1790089FF8275365309F74A257102E0041F7ADF4F15CFB6E87C2D7E0595E23E519939EF992402796F5850657B52496C109A164F090970A852CF163010DCC91750381FD832C59F63DBC990D88777E91E61D77DAEA09D347BE9E4C4D2C003FB53E243
ALTER USER "SYSTEM" IDENTIFIED BY VALUES 'S:24BC4E96EFE7E21595038D261C75CFAAFC8BF2CF89C4EB867CA80C8C2850;2D594E86F93B17A1'
TEMPORARY TABLESPACE "TEMP";
CREATE DATABASE LINK "SYSTEMORACLE.LOCAL"
CONNECT TO "SYSTEM" IDENTIFIED BY VALUES '06C70D7478FCFC00B4DBF384D2AF15886964CF872A2960378E4570ECFC0F1790089FF8275365309F74A257102E0041F7ADF4F15CFB6E87C2D7E0595E23E519939EF992402796F5850657B52496C109A164F090970A852CF163010DCC91750381FD832C59F63DBC990D88777E91E61D77DAEA09D347BE9E4C4D2C003FB53E243E'
USING 'dw';
set heading off
set echo off
set long 9999999
select dbms_metadata.get_ddl('DB_LINK', DB_LINK) || ';' dblinkcreate
from dba_db_links;
-- if you change the password you'll get this
12:57:32 SYS@dw> select sysdate from dual@systemoracle;
select sysdate from dual@systemoracle
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from SYSTEMORACLE
-- new password
ALTER USER "SYSTEM" IDENTIFIED BY VALUES 'S:5039460190FA01698510988435D8B7E678432D4B4A0E4C5BF7C19D2BD7F4;DC391A4F3C7CC080'
TEMPORARY TABLESPACE "TEMP";
-- this will not work!
CREATE DATABASE LINK "SYSTEMORACLE.LOCAL"
CONNECT TO "SYSTEM" IDENTIFIED BY VALUES 'S:5039460190FA01698510988435D8B7E678432D4B4A0E4C5BF7C19D2BD7F4;DC391A4F3C7CC080'
USING 'dw';
-- the real fix is to put back the password
13:02:07 SYS@dw> select sysdate from dual@systemoracle;
select sysdate from dual@systemoracle
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from SYSTEMORACLE
13:02:33 SYS@dw>
13:02:34 SYS@dw> alter user system identified by oracle;
User altered.
13:02:45 SYS@dw> select sysdate from dual@systemoracle;
SYSDATE
-----------------
20120910 13:02:50
}}}
! 2019
https://edn.embarcadero.com/
https://www.embarcadero.com/support
https://supportforms.embarcadero.com <- increase count
https://supportforms.embarcadero.com/product/
https://members.embarcadero.com/login.aspx <- download page
https://cc.embarcadero.com/Default.aspx <- documentation videos
https://community.idera.com/developer-tools/ <- URLs
''Documentation''
http://docs.embarcadero.com/products/db_optimizer/
''new feature''
3.0 http://docs.embarcadero.com/products/db_optimizer/3.0/ReadMe.htm
3.5 http://docs.embarcadero.com/products/db_optimizer/3.5/ReadMe.htm
''wiki''
http://docwiki.embarcadero.com/DBOptimizer/en/Main_Page
''DB Optimizer 3.0''
http://docs.embarcadero.com/products/db_optimizer/3.0/DBOptimizerQuickStartGuide.pdf
http://docs.embarcadero.com/products/db_optimizer/3.0/DBOptimizerUserGuide.pdf
http://docs.embarcadero.com/products/db_optimizer/3.0/ReadMe.htm
''Example usage - DB Optimizer example - 3mins to 10secs''
https://www.evernote.com/shard/s48/sh/070796b4-673e-418f-9ff9-d362ae9941dd/9636928fbcf370e0dcf9fb940cc5a9c8 <-- after reading this check out the [[SQLT-tc (test case builder)]] tiddler on how to generate VST with SQLTXPLAIN
''Pricing''
''$1500'' http://store.embarcadero.com/store/embt/en_US/DisplayCategoryProductListPage/categoryID.52346400
''the $99 good deal'' http://www.freelists.org/post/oracle-l/Special-Offer-for-readers-of-OracleL
<<showtoc>>
!! intro
https://thomaswdinsmore.com/2017/02/01/year-in-sql-engines/
http://coding-geek.com/how-databases-work/
!! RDBMSGenealogy
https://hpi.de/fileadmin/user_upload/fachgebiete/naumann/projekte/RDBMSGenealogy/RDBMS_Genealogy_V6.pdf
!! ACID, CAP theorem, BASE, NRW notation, BigData 4Vs + 1
[[ACID, CAP theorem, BASE, NRW notation, BigData 4Vs + 1]]
!! quick references
!!! Comparing Database Types: How Database Types Evolved to Meet Different Needs
https://www.prisma.io/blog/comparison-of-database-models-1iz9u29nwn37
!!! 7 databases in 7 weeks
2nd ed https://learning.oreilly.com/library/view/seven-databases-in/9781680505962/
1st ed https://learning.oreilly.com/library/view/seven-databases-in/9781941222829/
!!! Seven NoSQL Databases in a Week
https://learning.oreilly.com/library/view/seven-nosql-databases/9781787288867/
.
! DB2 Mainframe to Oracle Sizing
see discussions here https://www.evernote.com/l/ADBBWhmaiVJIpIL6imCf_Fi1OozqN9Usq08
modern day DBA vs Developer https://web.devopstopologies.com/index.html
https://www.teamblind.com/article/is-data-engineering-under-rated-5jtnitKv
http://tonyhasler.wordpress.com/2011/12/ FORCE_MATCH for Stored Outlines and/or SQL Baselines????? – follow up
How to use the Sql Tuning Advisor. [ID 262687.1]
<<<
SQL tuning information views, such as DBA_SQLTUNE_STATISTICS, DBA_SQLTUNE_BINDS,
and DBA_SQLTUNE_PLANS, can also be queried to get this information.
Note: it is possible for the SQL Tuning Advisor to return no recommendations for
a particular SQL statement e.g. in cases where the plan is already optimal or the
Automatic Tuning Optimization mode cannot find a better plan.
<<<
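For quick reference, a minimal sketch of running the advisor against one cursor (the sql_id is a placeholder; the task name is arbitrary):
{{{
declare
  tname varchar2(64);
begin
  tname := dbms_sqltune.create_tuning_task(sql_id => '&sql_id', task_name => 'tune_demo');
  dbms_sqltune.execute_tuning_task(task_name => 'tune_demo');
end;
/
-- then pull the findings/recommendations (may be empty, as the note says)
set long 999999 longchunksize 999999 linesize 200
select dbms_sqltune.report_tuning_task('tune_demo') from dual;
}}}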
* you can’t change the encryption; all tablespaces need to be encrypted
* even if you change encrypt_new_tablespaces to DDL, it will still complain when you create an unencrypted tablespace (see the test below)
{{{
22:06:00 SYS@kacdb> desc dba_tablespaces
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
TABLESPACE_NAME NOT NULL VARCHAR2(30)
BLOCK_SIZE NOT NULL NUMBER
INITIAL_EXTENT NUMBER
NEXT_EXTENT NUMBER
MIN_EXTENTS NOT NULL NUMBER
MAX_EXTENTS NUMBER
MAX_SIZE NUMBER
PCT_INCREASE NUMBER
MIN_EXTLEN NUMBER
STATUS VARCHAR2(9)
CONTENTS VARCHAR2(21)
LOGGING VARCHAR2(9)
FORCE_LOGGING VARCHAR2(3)
EXTENT_MANAGEMENT VARCHAR2(10)
ALLOCATION_TYPE VARCHAR2(9)
PLUGGED_IN VARCHAR2(3)
SEGMENT_SPACE_MANAGEMENT VARCHAR2(6)
DEF_TAB_COMPRESSION VARCHAR2(8)
RETENTION VARCHAR2(11)
BIGFILE VARCHAR2(3)
PREDICATE_EVALUATION VARCHAR2(7)
ENCRYPTED VARCHAR2(3)
COMPRESS_FOR VARCHAR2(30)
DEF_INMEMORY VARCHAR2(8)
DEF_INMEMORY_PRIORITY VARCHAR2(8)
DEF_INMEMORY_DISTRIBUTE VARCHAR2(15)
DEF_INMEMORY_COMPRESSION VARCHAR2(17)
DEF_INMEMORY_DUPLICATE VARCHAR2(13)
SHARED VARCHAR2(13)
DEF_INDEX_COMPRESSION VARCHAR2(8)
INDEX_COMPRESS_FOR VARCHAR2(13)
DEF_CELLMEMORY VARCHAR2(14)
DEF_INMEMORY_SERVICE VARCHAR2(12)
DEF_INMEMORY_SERVICE_NAME VARCHAR2(1000)
LOST_WRITE_PROTECT VARCHAR2(7)
CHUNK_TABLESPACE VARCHAR2(1)
22:06:06 SYS@kacdb> select tablespace_name, encrypted, compress_for from dba_tablespaces;
TABLESPACE_NAME ENC COMPRESS_FOR
------------------------------ --- ------------------------------
SYSTEM NO
SYSAUX NO
UNDOTBS1 NO
TEMP NO
USERS YES
22:06:22 SYS@kacdb> select name from v$datafile;
NAME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/o1_mf_system_k22cl5v6_.dbf
/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/o1_mf_sysaux_k22clf46_.dbf
/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/o1_mf_undotbs1_k22cj072_.dbf
/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/o1_mf_users_k22cqyrg_.dbf
22:06:41 SYS@kacdb> create tablespace ts1 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts1.dbf' size 1M;
Tablespace created.
22:07:08 SYS@kacdb> select tablespace_name, encrypted, compress_for from dba_tablespaces;
TABLESPACE_NAME ENC COMPRESS_FOR
------------------------------ --- ------------------------------
SYSTEM NO
SYSAUX NO
UNDOTBS1 NO
TEMP NO
USERS YES
TS1 YES
6 rows selected.
22:07:14 SYS@kacdb> alter tablespace ts1 encryption online decrypt;
alter tablespace ts1 encryption online decrypt
*
ERROR at line 1:
ORA-28427: cannot create, import or restore unencrypted tablespace: TS1 in Oracle Cloud
22:13:58 SYS@kacdb> create tablespace ts2 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts2.dbf' decrypt size 1M;
create tablespace ts2 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts2.dbf' decrypt size 1M
*
ERROR at line 1:
ORA-02180: invalid option for CREATE TABLESPACE
22:15:45 SYS@kacdb> create tablespace ts2 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts2.dbf' size 1M decrypt;
Tablespace created.
22:15:59 SYS@kacdb> select tablespace_name, encrypted, compress_for from dba_tablespaces;
TABLESPACE_NAME ENC COMPRESS_FOR
------------------------------ --- ------------------------------
SYSTEM NO
SYSAUX NO
UNDOTBS1 NO
TEMP NO
USERS YES
TS1 YES
TS2 YES
7 rows selected.
22:16:05 SYS@kacdb> show parameter encrypt
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces string ALWAYS
22:16:26 SYS@kacdb>
22:17:32 SYS@kacdb> alter system set encrypt_new_tablespaces='DDL';
System altered.
22:17:42 SYS@kacdb>
22:17:43 SYS@kacdb> show parameter encrypt
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces string DDL
22:17:47 SYS@kacdb> create tablespace ts3 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts3.dbf' size 1M;
create tablespace ts3 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts3.dbf' size 1M
*
ERROR at line 1:
ORA-28427: cannot create, import or restore unencrypted tablespace: TS3 in Oracle Cloud
22:18:06 SYS@kacdb> show parameter encrypt
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces string DDL
22:18:37 SYS@kacdb> alter system set encrypt_new_tablespaces='ALWAYS';
System altered.
22:18:46 SYS@kacdb> create tablespace ts3 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts3.dbf' size 1M;
Tablespace created.
22:18:53 SYS@kacdb> select tablespace_name, encrypted, compress_for from dba_tablespaces;
TABLESPACE_NAME ENC COMPRESS_FOR
------------------------------ --- ------------------------------
SYSTEM NO
SYSAUX NO
UNDOTBS1 NO
TEMP NO
USERS YES
TS1 YES
TS2 YES
TS3 YES
8 rows selected.
}}}
df -k on DBFS has a bug, which could be due to discrepancies between the FUSE and SecureFiles LOB behavior.
To get the real total space:
* ''expired_bytes + unexpired_bytes + the size on df -k'' should give you the rough total space; then subtract the ''du -sm'' output on the /dbfs directory from that number
* as a worked example from the sample output below: df -k reports 258G total while expired space is about 554G, so the real capacity is roughly 258 + 554 = 812G
* so if df says you only have 256GB of space but the calculation shows you actually have 800GB, then creating a big 400GB file will actually succeed
How DBFS Reclaims Free Space After Files Are Deleted [ID 1438356.1]
Bug 12662040 : SECUREFILE LOB SEGMENT KEEPS GROWING IN CASE OF PLENTY OF FREE SPACE
! run the following:
{{{
col segment_name format a30
select segment_name, tablespace_name, segment_type, round(bytes/1024/1024/1024,2) dbfs_segment
from dba_segments where owner='DBFS' and segment_type = 'LOBSEGMENT';
-- search for the lob segment
set serveroutput on
declare
v_segment_size_blocks number;
v_segment_size_bytes number;
v_used_blocks number;
v_used_bytes number;
v_expired_blocks number;
v_expired_bytes number;
v_unexpired_blocks number;
v_unexpired_bytes number;
begin
dbms_space.space_usage ('DBFS', '&LOBSEGMENT', 'LOB',
v_segment_size_blocks, v_segment_size_bytes,
v_used_blocks, v_used_bytes, v_expired_blocks, v_expired_bytes,
v_unexpired_blocks, v_unexpired_bytes );
dbms_output.put_line('Expired Blocks = '||v_expired_blocks);
dbms_output.put_line('Expired GB = '|| round(v_expired_bytes/1024/1024/1024,2) );
dbms_output.put_line('UNExpired Blocks = '||v_unexpired_blocks);
dbms_output.put_line('UNExpired GB = '|| round(v_unexpired_bytes/1024/1024/1024,2) );
end;
/
! echo "df output: `df -m /dbfs | grep dbfs | awk '{print $2/1024}'`"
! echo "du output: `du -sm /dbfs | awk '{print $1/1024}'`"
run this to check lob fragmentation http://www.idevelopment.info/data/Oracle/DBA_scripts/LOBs/lob_fragmentation_user.sql
}}}
! sample output
{{{
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
30G 17G 12G 60% /
/dev/sda1 124M 74M 44M 63% /boot
/dev/mapper/VGExaDb-LVDbOra1
148G 78G 65G 55% /u01
tmpfs 81G 0 81G 0% /dev/shm
dbfs-dbfs@:/ 258G 212G 47G 83% /dbfs
11:42:42 SYS@DBFS1> @unexpired
SEGMENT_NAME SEGMENT_TYPE BYTES/1024/1024
--------------------------------------------------------------------------------- ------------------ ---------------
T_WORK TABLE .1875
IG_SFS$_FST_42745 INDEX .0625
SYS_IL0000117281C00007$$ LOBINDEX .0625
LOB_SFS$_FST_42745 LOBSEGMENT 771964.125
IP_SFS$_FST_42745 INDEX .0625
IPG_SFS$_FST_42745 INDEX .0625
6 rows selected.
Expired Blocks = 72615478
Expired Bytes = 554.0121307373046875
UNExpired Blocks = 0
UNExpired Bytes = 0
PL/SQL procedure successfully completed.
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
30G 17G 12G 60% /
/dev/sda1 124M 74M 44M 63% /boot
/dev/mapper/VGExaDb-LVDbOra1
148G 78G 65G 55% /u01
tmpfs 81G 0 81G 0% /dev/shm
dbfs-dbfs@:/ 258G 212G 47G 83% /dbfs
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
30G 17G 12G 60% /
/dev/sda1 124M 74M 44M 63% /boot
/dev/mapper/VGExaDb-LVDbOra1
148G 78G 65G 55% /u01
tmpfs 81G 0 81G 0% /dev/shm
dbfs-dbfs@:/ 261G 215G 47G 83% /dbfs
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
30G 17G 12G 60% /
/dev/sda1 124M 74M 44M 63% /boot
/dev/mapper/VGExaDb-LVDbOra1
148G 78G 65G 55% /u01
tmpfs 81G 0 81G 0% /dev/shm
dbfs-dbfs@:/ 262G 216G 47G 83% /dbfs
}}}
''The fix!'' the new dbfsfree script..
{{{
[pd01db01:oracle:dbm1] /home/oracle
> dbfsfree
Size Used Avail Used% Mounted on
Kilobytes 809,274,136 624,320,136 184,954,000 77.15 /dbfs
Megabytes 790,306 609,687 180,619 77.15 /dbfs
Giggabytes 771 595 176 77.15 /dbfs
[pd01db01:oracle:dbm1] /home/oracle
> df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
ext3 30G 22G 6.8G 76% /
/dev/sda1 ext3 124M 36M 82M 31% /boot
/dev/mapper/VGExaDb-LVDbOra1
ext3 99G 62G 33G 66% /u01
tmpfs tmpfs 81G 39M 81G 1% /dev/shm
dbfs-dbfs@:/ fuse 684G 596G 89G 88% /dbfs
}}}
http://www.evernote.com/shard/s48/sh/0545726e-b46b-4953-ad5e-f1d04fb38b1d/86dd3d261da9ab35c09d765157c1ac33
> what's your take on non-indexed FK constraints? is it safe to not have them?
is enq: TM contention an issue? If yes, then non-indexed FKs are likely the first cause. If not, then it might be a little less of an issue but still worth mentioning IMHO
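A minimal sketch for finding them (this only checks the leading FK column against the leading index column, which is a simplification; &owner is a placeholder):
{{{
select c.owner, c.table_name, c.constraint_name
from   dba_constraints c
where  c.constraint_type = 'R'
and    c.owner = '&owner'
and    not exists
       (select 1
        from   dba_cons_columns cc, dba_ind_columns ic
        where  cc.owner = c.owner
        and    cc.constraint_name = c.constraint_name
        and    cc.position = 1
        and    ic.table_owner = cc.owner
        and    ic.table_name  = cc.table_name
        and    ic.column_name = cc.column_name
        and    ic.column_position = 1);
}}}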
> do we recommend that redundant indexes be dropped on health checks?
usually yes (an alternative is to mark them invisible for a while, and if no regression is experienced after some time, drop them)
> do we disable the "auto optimizer stats collection"? let's say they already have app-specific stats gathering in place
*IF* the app's way of gathering stats is solid, then you can keep the automatic job just for Oracle objects (the dictionary, for example).
Alternatively, the client can lock stats (and use FORCE=>TRUE in their custom jobs) so the automatic job will only collect those that are not locked.
> how do we evaluate the "Sequences prone to contention" finding?
is enq: SQ contention an issue? If yes, then sequences are a big concern. If no, then we usually just mention to increase the cache size and NOT use ORDER in RAC (because it requires sync across nodes, so it's a big overhead)
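A minimal sketch of the usual remediation (MY_SEQ is a hypothetical sequence name; the cache value is illustrative):
{{{
alter sequence my_seq cache 1000 noorder;
-- verify the settings
select sequence_owner, sequence_name, cache_size, order_flag
from   dba_sequences
where  sequence_name = 'MY_SEQ';
}}}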
Master Note for Query Rewrite (Doc ID 1215173.1)
Using DBMS_ADVANCED_REWRITE When Binds Are Present (Avoiding ORA-30353) (Doc ID 392214.1)
Master Note for Materialized View (MVIEW) (Doc ID 1353040.1)
Using Execution Plan to Verify Materialized View Query Rewrite (Doc ID 245635.1)
Advanced Query Rewrite https://docs.oracle.com/cd/B28359_01/server.111/b28313/qradv.htm , http://pages.di.unipi.it/ghelli/didattica/bdldoc/B19306_01/server.102/b14223/qradv.htm
DBMS_ADVANCED_REWRITE https://docs.oracle.com/database/121/ARPLS/d_advrwr.htm#BEGIN
Oracle OLAP 11g and 12c: How to ensure use of Cube Materialized Views/Query Rewrite (Doc ID 577293.1)
Improving Performance using Query Rewrite in Oracle Database 10g http://www.oracle.com/technetwork/middleware/bi-foundation/twp-bi-dw-improve-perf-using-query--133436.pdf
How To Use DBMS_MVIEW.EXPLAIN_REWRITE and EXPLAIN_MVIEW To Diagnose Query Rewrite and Fast Refresh Problems (Doc ID 149815.1)
Manual Diagnosis & Troubleshooting for Query Rewrite Problems (Doc ID 236486.1) <- good stuff
https://gavinsoorma.com/2011/06/using-dbms_advanced_rewrite-with-an-hint-to-change-the-execution-plan/
Oracle by example http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/10g/r2/prod/bidw/mv/mv_otn.htm
https://gerardnico.com/db/oracle/query_rewriting
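A minimal DBMS_ADVANCED_REWRITE sketch to go with the links above (table and MV names are hypothetical; needs EXECUTE on the package):
{{{
begin
  sys.dbms_advanced_rewrite.declare_rewrite_equivalence(
    name             => 'demo_rw',
    source_stmt      => 'select deptno, sum(sal) from scott.emp group by deptno',
    destination_stmt => 'select deptno, total_sal from scott.dept_sal_mv',
    validate         => false,
    rewrite_mode     => 'text_match');
end;
/
-- with validate => false the rewrite only kicks in when query_rewrite_enabled=true
-- and query_rewrite_integrity=trusted (or stale_tolerated)
}}}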
https://blog.go-faster.co.uk/2016/09/dbmsapplicationinfo.html
http://www.java2s.com/Tutorial/Oracle/0601__System-Packages/ReadOriginalValuesandDisplay.htm
https://gist.githubusercontent.com/richardpascual/b8674881dac0280f606d/raw/c2537c5e1a4a8d93128632a803d02946b8a0fcbb/oracle-plsql-exception-handling.sql
{{{
-- This is an example framework of how to implement tracking of package/procedure/function plsql
-- procedural code execution (Oracle) with a built-in package: DBMS_APPLICATION_INFO; the intent
-- is to set this syntax layout up so that developers can have a flexible, customizable syntax
-- to accomplish this task.
-- Look for my project here on Github, the ORA-EXCEPTION-HANDLER. (richardpascual)
CREATE or REPLACE PROCEDURE PROCESS_TWEET_LOG is
c_client_info constant V$SESSION.CLIENT_INFO%TYPE := 'DEV-DATABASE, OPS408 SCHEMA';
c_module_name constant V$SQLAREA.MODULE%TYPE := 'PROCESS_TWEET_LOG';
l_action V$SQLAREA.ACTION%TYPE := null;
BEGIN
DBMS_APPLICATION_INFO.SET_CLIENT_INFO (client_info => c_client_info);
DBMS_APPLICATION_INFO.SET_MODULE ( module_name => c_module_name, action_name => l_action );
-- Initialize Twitterizer
l_action:= 'LoadTweetLog';
DBMS_APPLICATION_INFO.SET_ACTION (action_name => l_action);
-- ... begin Tweet Log Loading Process here
-- <more PL/SQL code here>
-- Count Tweets by seven demographic dimensions
l_action:= 'CountBySeven';
DBMS_APPLICATION_INFO.SET_ACTION (action_name => l_action);
-- ... begin Tweet Log Counting Process here
-- <more PL/SQL code here>
EXCEPTION
WHEN OTHERS THEN
err_pkg.handle; -- this is the singular, exception call from the ORA_EXCEPTION_HANDLER project.
END;
}}}
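While the instrumented procedure runs, the module/action set above can be watched from another session:
{{{
-- MODULE matches the c_module_name value set in the example procedure
select sid, serial#, client_info, module, action
from   v$session
where  module = 'PROCESS_TWEET_LOG';
}}}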
<<<
Or use DBMS_PDB package to construct an XML file describing the non-CDB data files to
plug the non-CDB into the CDB as a PDB. This method presupposes that the non-CDB is
an Oracle 12c database
----------
Using the DBMS_PDB package is the easiest option.
If the DBMS_PDB package is not used, then using export/import is usually simpler than using
GoldenGate replication, but export/import might require more down time during the switch from
the non-CDB to the PDB.
If you choose to use export/import, and you are moving a whole non-CDB into the CDB, then
transportable databases (TDB) is usually the best option. If you choose to export and import
part of a non-CDB into a CDB, then transportable tablespaces (TTS) is the best option.
----------
DBMS_PDB step by step
The technique with DBMS_PDB package creates an unplugged PDB from an Oracle database
12c non-CDB. The unplugged PDB can then be plugged in to a CDB as a new PDB. Running
the DBMS_PDB.DESCRIBE procedure on the non-CDB generates an XML file that describes
the future PDB. You can plug in the unplugged PDB in the same way that you can plug in any
unplugged PDB, using the XML file and the non-CDB data files. The steps are the following:
1. Connect to non-CDB ORCL, first ensure that the non-CDB ORCL is in a transactionally-consistent state, and place it in read-only mode.
2. Execute the DBMS_PDB.DESCRIBE procedure, providing the file name that will be
generated. The XML file contains the list of data files to be plugged.
The XML file and the data files described in the XML file comprise an unplugged PDB.
3. Connect to the target CDB to plug the unplugged ORCL as PDB2.
4. Before plugging in the unplugged PDB, make sure it can be plugged into a CDB using the
DBMS_PDB.CHECK_PLUG_COMPATIBILITY procedure. Execute the CREATE PLUGGABLE
DATABASE statement with the new clause USING 'XMLfile'. The list of data files
from ORCL is read from the XML file to locate and name the data files of PDB2.
5. Run the ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql script to delete
unnecessary metadata from PDB SYSTEM tablespace. This script must be run before the
PDB can be opened for the first time. This script is required for plugging non-CDBs only.
6. Open PDB2 to verify that the application tables are in PDB2.
<<<
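A minimal sketch of those six steps (the XML file path and PDB name are placeholders):
{{{
-- on the non-CDB (steps 1-2)
shutdown immediate
startup open read only
exec dbms_pdb.describe(pdb_descr_file => '/tmp/orcl.xml');
-- on the target CDB (steps 3-6)
set serveroutput on
declare
  compatible boolean;
begin
  compatible := dbms_pdb.check_plug_compatibility(
                  pdb_descr_file => '/tmp/orcl.xml', pdb_name => 'PDB2');
  dbms_output.put_line(case when compatible then 'YES' else 'NO' end);
end;
/
create pluggable database pdb2 using '/tmp/orcl.xml' nocopy tempfile reuse;
alter session set container = pdb2;
@?/rdbms/admin/noncdb_to_pdb.sql
alter pluggable database pdb2 open;
}}}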
http://externaltable.blogspot.com/2014/04/a-closer-look-at-calibrateio.html
Thread: Answers to "Why are my jobs not running?"
http://forums.oracle.com/forums/thread.jspa?threadID=646581
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldev/r30/DBMSScheduler/DBMSScheduler.htm
http://www.oracle-base.com/articles/misc/SqlDeveloper31SchedulerSupport.php
-- other scheduling software
http://en.wikipedia.org/wiki/CA_Workload_Automation_AE
23 Managing Automatic System Tasks Using the Maintenance Window http://docs.oracle.com/cd/B19306_01/server.102/b14231/tasks.htm
CREATE_WINDOW (new 11g overload) http://psoug.org/reference/dbms_scheduler.html
Oracle Scheduling Resource Manager Plan http://www.dba-oracle.com/job_scheduling/resource_manager_plan.htm, http://www.dba-oracle.com/job_scheduling/windows.htm
Examples of Using the Scheduler http://docs.oracle.com/cd/B28359_01/server.111/b28310/schedadmin006.htm
Configuring Oracle Scheduler - Task 2B: Creating Windows http://docs.oracle.com/cd/B28359_01/server.111/b28310/schedadmin001.htm
http://www.oracle-base.com/articles/10g/Scheduler10g.php
http://www.oracle-base.com/articles/11g/SchedulerEnhancements_11gR2.php
{{{
-- DBMS_UTILITY.GET_TIME returns time in hundredths of a second
SQL> var start_time number;
SQL> exec :start_time:=DBMS_UTILITY.GET_TIME ;
PL/SQL procedure successfully completed.
SQL> var end_time number;
SQL> exec :end_time:=DBMS_UTILITY.GET_TIME ;
PL/SQL procedure successfully completed.
SQL> select (:end_time-:start_time)/100 diff_in_sec from dual;
SQL> select (:end_time-:start_time)*10 diff_in_ms from dual;
}}}
Procedure for renaming a database - Non-ASM - DBNEWID
http://www.evernote.com/shard/s48/sh/f00030b2-988c-4d9c-b4db-35dfd1bb6593/12702f51c6046d00cb2bff74c190c7e4
Init.ora Parameter "DB_WRITERS" [Port Specific] Reference Note
Doc ID: Note:35268.1
Top 8 init.ora Parameters Affecting Performance
Doc ID: Note:100709.1
DB_WRITER_PROCESSES or DBWR_IO_SLAVES?
Doc ID: Note:97291.1
Database Writer and Buffer Management
Doc ID: Note:91062.1
TROUBLESHOOTING GUIDE: Common Performance Tuning Issues
Doc ID: Note:106285.1
Systemwide Tuning using UTLESTAT Reports in Oracle7/8
Doc ID: Note:62161.1
DBWR in Oracle8i
Doc ID: Note:105518.1
DEC ALPHA: RAW DISK AND ASYNC_IO
Doc ID: Note:1029511.6
Understanding and Tuning Buffer Cache and DBWR
Doc ID: Note:62172.1
Asynchronous I/O and Multiple Database Writers
Doc ID: Note:69560.1
VIEW: "V$LOGFILE" Reference Note
Doc ID: Note:43746.1
Init.ora Parameter "DB_WRITERS" [Port Specific] Reference Note
Doc ID: Note:35268.1
CRITICAL BUGS LIST FOR V7.3.2.XX
Doc ID: Note:1023229.6
How to Resize a Datafile
Doc ID: Note:1029252.6
How to Resolve ORA-03297 When Resizing a Datafile by Finding the Table Highwatermark
Doc ID: Note:130866.1
Oracle8 and Oracle8i Database Limits
Doc ID: Note:114019.1
Oracle9i Database Limits
Doc ID: Note:217143.1
Database and File Size Limits in 10G release 2
Doc ID: Note:336186.1
Init.ora Parameter "DB_WRITERS" [Port Specific] Reference Note
Doc ID: Note:35268.1
ORA-00346: REDO LOG FILE HAS STATUS 'STALE'
Doc ID: Note:1014824.6
Archiver Best Practices
Doc ID: Note:45042.1
Shutdown Immediate Hangs
Doc ID: Note:179192.1
http://sarojkd.tripod.com/B001.html
http://www.riddle.ru/mirrors/oracledocs/server/sad73/ch505.html
https://blogs.oracle.com/oem/entry/database_as_a_service_on
!!!! THIS TIDDLER IS ONGOING...
I've done a couple of tests lately on my Windows laptop (Intel i5) and also on a 13" MacBook Air.
To summarize the screenshots that you'll see below, it's divided into four test cases:
''1) The effect of DD to /dev/null''
* /dev/null is a special file that acts like a black hole. This test shows that you must use this facility with caution when doing your IO tests, or else you may end up with super bloated numbers. One common error or misuse you may encounter is doing DD from /dev/zero straight to /dev/null.. see more in the screenshots below..
''2) DD Write, Read, and Read Write''
This shows how you can properly do Write, Read, and Read Write tests using DD
* Write - if=/dev/zero of=testfile.txt
* Read - if=testfile.txt of=/dev/null
* Read Write - if=testfile.txt of=testfile2.txt
time dd bs=16384 if=/Users/gaja/Data/Downloads/Software/"Rosetta Stone Version 3 Update.dmg" of=/dev/null
sync; time dd bs=1048576 count=4096 if=/dev/zero of=/tmp/testfile12.txt; sync;
''3) IOMeter tests''
I was never successful in doing a pure read operation using DD. To have a read-only test I had to use IOMeter.
''4) Actual MacBook Air test - part2''
Having my tests above in mind, I was able to get hold of a MacBook Air and did some tests on various block sizes.
So here it goes...
!
! The Effect of DD
!!!! So fast
<<<
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZbUNMaLDNI/AAAAAAAABJ4/nlslGxySL34/s800/so%20fast.png]]
<<<
!!!! First Run
<<<
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZY0yHjByUI/AAAAAAAABJQ/4mdJxRQ175c/s800/test10-the%20effect%20of%20dd.png]]
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZY2iptzAPI/AAAAAAAABJY/JYoZJBuASuc/s800/test10-the%20effect%20of%20dd-after%20cancel.png]]
<<<
!!!! Another Run
<<<
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZY1wan6kFI/AAAAAAAABJU/Omh7uM1RaXs/s800/test11-the%20effect%20of%20dd%2016k%20bs.png]]
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZY4vB6IWOI/AAAAAAAABJc/etlhojzy-1U/s800/test11-the%20effect%20of%20dd%2016k%20bs-after%20cancel.png]]
<<<
!
! DD Write, Read, and Read Write
!!!! DD write
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZY8lKpct2I/AAAAAAAABJg/NVy9RbQLDhM/s800/x2.png]]
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZY8lGdoyBI/AAAAAAAABJk/Qz_gQ7huEyw/s800/x3.png]]
<<<
!!!! DD read
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZb_NiIu0eI/AAAAAAAABJ8/jh0RDLxjU8o/s800/ddread.png]]
<<<
!!!! DD read write
<<<
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZY8lImV9CI/AAAAAAAABJo/oz8C7xwXgVE/s800/x5.png]]
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZY8nLguFzI/AAAAAAAABJs/FYIpx3O6aRg/s800/x6.png]]
<<<
!
! IOMeter tests
!!!! Read
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZY9nijeNlI/AAAAAAAABJ0/2I8e8KR-JX0/s800/test19-dynamo1M%2050outstanding%20all%20read-sequential.png]]
<<<
!!!! Write
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZY9nRUTX5I/AAAAAAAABJw/1NWZeou2f7c/s800/test18-dynamo1M%2050outstanding%20all%20write-sequential.png]]
<<<
!
! Actual Macbook Air test "13
1MB http://db.tt/uVWYLkt
736 IOPS peak
188.6 MB/s peak
512K http://db.tt/sHsg8RV
645 IOPS peak
182.9 MB/s peak
16K http://db.tt/Z8zf6SO
631 IOPS peak
167.5 MB/s peak
8K http://db.tt/8UCAOOV
147 IOPS peak
145 MB/s peak
!
! References
Apple's 2010 MacBook Air (11 & 13 inch) Thoroughly Reviewed
http://www.anandtech.com/Show/Index/3991?cPage=13&all=False&sort=0&page=4&slug=apples-2010-macbook-air-11-13inch-reviewed <— GOOD STUFF
Support and Q&A for Solid-State Drives
http://blogs.msdn.com/b/e7/archive/2009/05/05/support-and-q-a-for-solid-state-drives-and.aspx <— GOOD STUFF
http://www.anandtech.com/show/2738 <— GOOD STUFF REVIEW + TRIM + NICE EXPLANATIONS
http://www.usenix.org/event/usenix08/tech/full_papers/agrawal/agrawal_html/index.html <— GOOD STUFF PAPER
https://www.oracle.com/technetwork/testcontent/o26performance-096310.html
{{{
Faster Batch Processing
By Mark Rittman Oracle ACE
LOG ERRORS handles errors quickly and simplifies batch loading.
When you need to load millions of rows of data into a table, the most efficient way is usually to use an INSERT, UPDATE, or MERGE statement to process your data in bulk. Similarly, if you want to delete thousands of rows, using a DELETE statement is usually faster than using procedural code. But what if the data you intend to load contains values that might cause an integrity or check constraint to be violated, or what if some values are too big for the column they are to be loaded into?
You may well have loaded 999,999 rows into your table, but that last row, which violates a check constraint, causes the whole statement to fail and roll back. In situations such as this, you have to use an alternative approach to loading your data.
For example, if your data is held in a file, you can use SQL*Loader to automatically handle data that raises an error, but then you have to put together a control file, run SQL*Loader from the command line, and check the output file and the bad datafile to detect any errors.
If, however, your data is held in a table or another object, you can write a procedure or an anonymous block to process your data row by row, loading the valid rows and using exception handling to process those rows that raise an error. You might even use BULK COLLECT and FORALL to handle data in your PL/SQL routine more efficiently, but even with these improvements, handling your data in this manner is still much slower than performing a bulk load by using a direct-path INSERT DML statement.
Until now, you could take advantage of the set-based performance of INSERT, UPDATE, MERGE, and DELETE statements only if you knew that your data was free from errors; in all other circumstances, you needed to resort to slower alternatives. All of this changes with the release of Oracle Database 10g Release 2, which introduces a new SQL feature called DML error logging.
Efficient Error Handling
DML error logging enables you to write INSERT, UPDATE, MERGE, or DELETE statements that automatically deal with certain constraint violations. With this new feature, you use the new LOG ERRORS clause in your DML statement and Oracle Database automatically handles exceptions, writing erroneous data and details of the error message to an error logging table you've created.
Before you can use the LOG ERRORS clause, you need to create an error logging table, either manually with DDL or automatically with the CREATE_ERROR_LOG procedure in the DBMS_ERRLOG package, whose specification is shown in Listing 1.
Code Listing 1: DBMS_ERRLOG.CREATE_ERROR_LOG parameters
DBMS_ERRLOG.CREATE_ERROR_LOG (
dml_table_name IN VARCHAR2,
err_log_table_name IN VARCHAR2 := NULL,
err_log_table_owner IN VARCHAR2 := NULL,
err_log_table_space IN VARCHAR2 := NULL,
skip_unsupported IN BOOLEAN := FALSE);
All the parameters except DML_TABLE_NAME are optional, and if the optional details are omitted, the name of the error logging table will be ERR$_ together with the first 25 characters of the DML_TABLE_NAME. The SKIP_UNSUPPORTED parameter, if set to TRUE, instructs the error logging clause to skip over LONG, LOB, and object type columns that are not supported and omit them from the error logging table.
With the error logging table created, you can add the error logging clause to most DML statements, using the following syntax:
LOG ERRORS [INTO [schema.]table]
[ (simple_expression) ]
[ REJECT LIMIT {integer|UNLIMITED} ]
The INTO clause is optional; if you omit it, the error logging clause will put errors into a table with the same name format used by the CREATE_ERROR_LOG procedure. SIMPLE_EXPRESSION is any expression that would evaluate to a character string and is used for tagging rows in the error table to indicate the process that caused the error, the time of the data load, and so on. REJECT LIMIT can be set to any integer or UNLIMITED and specifies the number of errors that can occur before the statement fails. This value is optional, but if it is omitted, the default value is 0, which effectively disables the error logging feature.
The following types of errors are handled by the error logging clause:
Column values that are too large
Constraint violations (NOT NULL, unique, referential, and check constraints), except in certain circumstances detailed below
Errors raised during trigger execution
Errors resulting from type conversion between a column in a subquery and the corresponding column of the table
Partition mapping errors
The following conditions cause the statement to fail and roll back without invoking the error logging capability:
Violated deferred constraints
Out-of-space errors
Any direct-path INSERT operation (INSERT or MERGE) that raises a unique constraint or index violation
Any UPDATE operation (UPDATE or MERGE) that raises a unique constraint or index violation
To show how the error logging clause works in practice, consider the following scenario, in which data needs to be loaded in batch from one table to another:
You have heard of the new error logging feature in Oracle Database 10g Release 2 and want to compare this new approach with your previous method of writing a PL/SQL package. To do this, you will use data held in the SH sample schema to try out each approach.
Using DML Error Logging
In this example, you will use the data in the SALES table in the SH sample schema, together with values from a sequence, to create a source table for the error logging test. This example assumes that the test schema is called ERRLOG_TEST and that it has the SELECT object privilege for the SH.SALES table. Create the source data and a target table called SALES_TARGET, based on the definition of the SALES_SRC table, and add a check constraint to the AMOUNT_SOLD column to allow only values greater than 0. Listing 2 shows the DDL for creating the source and target tables.
Code Listing 2: Creating the SALES_SRC and SALES_TARGET tables
SQL> CREATE SEQUENCE sales_id_seq;
Sequence created.
SQL> CREATE TABLE sales_src
2 AS
3 SELECT sales_id_seq.nextval AS "SALES_ID"
4 , cust_id
5 , prod_id
6 , channel_id
7 , time_id
8 , promo_id
9 , amount_sold
10 , quantity_sold
11 FROM sh.sales
12 ;
Table created.
SQL> SELECT count(*)
2 , min(sales_id)
3 , max(sales_id)
4 FROM sales_src
5 ;
COUNT(*) MIN(SALES_ID) MAX(SALES_ID)
------ -------- --------
918843 1 918843
SQL> CREATE TABLE sales_target
2 AS
3 SELECT *
4 FROM sales_src
5 WHERE 1=0
6 ;
Table created.
SQL> ALTER TABLE sales_target
2 ADD CONSTRAINT amount_sold_chk
3 CHECK (amount_sold > 0)
4 ENABLE
5 VALIDATE
6 ;
Table altered.
Note from the descriptions of the tables in Listing 2 that the SALES_TARGET and SALES_SRC tables have automatically inherited the NOT NULL constraints that were present on the SH.SALES table because you created these tables by using a CREATE TABLE ... AS SELECT statement that copies across these column properties when you are creating a table.
You now introduce some errors into your source data, so that you can subsequently test the error logging feature. Note that because one of the errors you want to test for is a NOT NULL constraint violation on the PROMO_ID column, you need to remove this constraint from the SALES_SRC table before adding null values. The following shows the SQL used to create the data errors.
SQL> ALTER TABLE sales_src
2 MODIFY promo_id NULL
3 ;
Table altered.
SQL> UPDATE sales_src
2 SET promo_id = null
3 WHERE sales_id BETWEEN 5000 and 5005
4 ;
6 rows updated.
SQL> UPDATE sales_src
2 SET amount_sold = 0
3 WHERE sales_id IN (1000,2000,3000)
4 ;
3 rows updated.
SQL> COMMIT;
Commit complete.
Now that your source and target tables are prepared, you can use the DBMS_ERRLOG.CREATE_ERROR_LOG procedure to create the error logging table. Supply the name of the table on which the error logging table is based; the procedure will use default values for the rest of the parameters. Listing 3 shows the creation and description of the error logging table.
Code Listing 3: Creating the err$_sales_target error logging table
SQL> BEGIN
2 DBMS_ERRLOG.CREATE_ERROR_LOG('SALES_TARGET');
3 END;
4 /
PL/SQL procedure successfully completed.
SQL> DESCRIBE err$_sales_target;
Name Null? Type
------------------- ---- -------------
ORA_ERR_NUMBER$ NUMBER
ORA_ERR_MESG$ VARCHAR2(2000)
ORA_ERR_ROWID$ ROWID
ORA_ERR_OPTYP$ VARCHAR2(2)
ORA_ERR_TAG$ VARCHAR2(2000)
SALES_ID VARCHAR2(4000)
CUST_ID VARCHAR2(4000)
PROD_ID VARCHAR2(4000)
CHANNEL_ID VARCHAR2(4000)
TIME_ID VARCHAR2(4000)
PROMO_ID VARCHAR2(4000)
AMOUNT_SOLD VARCHAR2(4000)
QUANTITY_SOLD VARCHAR2(4000)
Note that the CREATE_ERROR_LOG procedure creates five ORA_ERR_% columns, to hold the error number, error message, ROWID, operation type, and tag you will supply when using the error logging clause. Datatypes have been automatically chosen for the table columns that will allow you to store numbers and characters.
The first approach is to load data into the SALES_TARGET table by using a direct-path INSERT statement. This is normally the most efficient way to load data into a table while still making the DML recoverable, but in the past, this INSERT would have failed, because the check constraints on the SALES_TARGET table would have been violated. Listing 4 shows this INSERT and the check constraint violation.
Code Listing 4: Violating the check constraint with direct-path INSERT
SQL> SET SERVEROUTPUT ON
SQL> SET LINESIZE 150
SQL> SET TIMING ON
SQL> ALTER SESSION SET SQL_TRACE = TRUE;
Session altered.
Elapsed: 00:00:00.04
SQL> INSERT /*+ APPEND */
2 INTO sales_target
3 SELECT *
4 FROM sales_src
5 ;
INSERT /*+ APPEND */
*
ERROR at line 1:
ORA-02290: check constraint (ERRLOG_TEST.AMOUNT_SOLD_CHK) violated
Elapsed: 00:00:00.15
If you add the new LOG ERRORS clause to the INSERT statement, however, the statement will complete successfully and save any rows that violate the table constraints to the error logging table, as shown in Listing 5.
Code Listing 5: Violating the constraints and logging the errors with LOG ERRORS
SQL> INSERT /*+ APPEND */
2 INTO sales_target
3 SELECT *
4 FROM sales_src
5 LOG ERRORS
6 REJECT LIMIT UNLIMITED
7 ;
918834 rows created.
Elapsed: 00:00:05.75
SQL> SELECT count(*)
2 FROM err$_sales_target
3 ;
COUNT(*)
-----
9
Elapsed: 00:00:00.06
SQL> COLUMN ora_err_mesg$ FORMAT A50
SQL> SELECT ora_err_number$
2 , ora_err_mesg$
3 FROM err$_sales_target
4 ;
ORA_ERR_NUMBER$ ORA_ERR_MESG$
--------------- ------------------------------
2290 ORA-02290: check constraint (ERRLOG_TEST.AMOUNT_
SOLD_CHK) violated
2290 ORA-02290: check constraint (ERRLOG_TEST.AMOUNT_
SOLD_CHK) violated
2290 ORA-02290: check constraint (ERRLOG_TEST.AMOUNT_
SOLD_CHK) violated
1400 ORA-01400: cannot insert NULL into ("ERRLOG_TEST".
"SALES_TARGET"."PROMO_ID")
1400 ORA-01400: cannot insert NULL into ("ERRLOG_TEST".
"SALES_TARGET"."PROMO_ID")
1400 ORA-01400: cannot insert NULL into ("ERRLOG_TEST".
"SALES_TARGET"."PROMO_ID")
1400 ORA-01400: cannot insert NULL into ("ERRLOG_TEST".
"SALES_TARGET"."PROMO_ID")
1400 ORA-01400: cannot insert NULL into ("ERRLOG_TEST".
"SALES_TARGET"."PROMO_ID")
1400 ORA-01400: cannot insert NULL into ("ERRLOG_TEST".
"SALES_TARGET"."PROMO_ID")
9 rows selected.
Elapsed: 00:00:00.28
Listing 5 shows that when this INSERT statement uses direct path to insert rows above the table high-water mark, the process takes 5.75 seconds and adds nine rows to the error logging table. Try the same statement again, this time with a conventional-path INSERT, as shown in Listing 6.
Code Listing 6: Violating the check and NOT NULL constraints with conventional-path INSERT
SQL> TRUNCATE TABLE sales_target;
Table truncated.
Elapsed: 00:00:06.07
SQL> TRUNCATE TABLE err$_sales_target;
Table truncated.
Elapsed: 00:00:00.25
SQL> INSERT INTO sales_target
2 SELECT *
3 FROM sales_src
4 LOG ERRORS
5 REJECT LIMIT UNLIMITED
6 ;
918834 rows created.
Elapsed: 00:00:30.65
As you might expect, the results in Listing 6 show that the direct-path load is much faster than the conventional-path load, because the former writes directly to disk whereas the latter writes to the buffer cache. The LOG ERRORS clause also causes kernel device table (KDT) buffering to be disabled when you're performing a conventional-path INSERT. One reason you might want to nevertheless use a conventional-path INSERT with error logging is that direct-path loads will fail when a unique constraint or index violation occurs, whereas a conventional-path load will log these errors to the error logging table and then continue. Oracle Database will also ignore the /*+ APPEND */ hint when the table you are inserting into contains foreign key constraints, because you cannot have these enabled when working in direct-path mode.
Now compare these direct- and conventional-path loading timings with the timing for using a PL/SQL anonymous block. You know that the traditional way of declaring a cursor against the source table—reading it row by row, inserting the contents into the target table, and dealing with exceptions as they occur—will be slow, but the column by Tom Kyte in the September/October 2003 issue of Oracle Magazine ("On HTML DB, Bulking Up, and Speeding") shows how BULK COLLECT, FORALL, and SAVE EXCEPTIONS could be used to process dirty data in a more efficient manner. How does Kyte's 2003 approach compare with using DML error logging? A version of Kyte's approach that, like the LOG ERRORS clause, writes error messages to an error logging table is shown in Listing 7.
Code Listing 7: PL/SQL anonymous block doing row-by-row INSERT
SQL> CREATE TABLE sales_target_errors
2 (sql_err_mesg varchar2(4000))
3 /
Table created.
Elapsed: 00:00:00.28
SQL> DECLARE
2 TYPE array IS TABLE OF sales_target%ROWTYPE
3 INDEX BY BINARY_INTEGER;
4 sales_src_arr ARRAY;
5 errors NUMBER;
6 error_mesg VARCHAR2(255);
7 bulk_error EXCEPTION;
8 l_cnt NUMBER := 0;
9 PRAGMA exception_init
10 (bulk_error, -24381);
11 CURSOR c IS
12 SELECT *
13 FROM sales_src;
14 BEGIN
15 OPEN c;
16 LOOP
17 FETCH c
18 BULK COLLECT
19 INTO sales_src_arr
20 LIMIT 100;
21 BEGIN
22 FORALL i IN 1 .. sales_src_arr.count
23 SAVE EXCEPTIONS
24 INSERT INTO sales_target VALUES sales_src_arr(i);
25 EXCEPTION
26 WHEN bulk_error THEN
27 errors :=
28 SQL%BULK_EXCEPTIONS.COUNT;
29 l_cnt := l_cnt + errors;
30 FOR i IN 1..errors LOOP
31 error_mesg := SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
32 INSERT INTO sales_target_errors
33 VALUES (error_mesg);
34 END LOOP;
35 END;
36 EXIT WHEN c%NOTFOUND;
37
38 END LOOP;
39 CLOSE c;
40 DBMS_OUTPUT.PUT_LINE
41 ( l_cnt || ' total errors' );
42 END;
43 /
9 total errors
PL/SQL procedure successfully completed.
Elapsed: 00:00:10.46
SQL> alter session set sql_trace = false;
Session altered.
Elapsed: 00:00:00.03
SQL> select * from sales_target_errors;
SQL_ERR_MESG
---------------------------------
ORA-02290: check constraint (.) violated
ORA-02290: check constraint (.) violated
ORA-02290: check constraint (.) violated
ORA-01400: cannot insert NULL into ()
ORA-01400: cannot insert NULL into ()
ORA-01400: cannot insert NULL into ()
ORA-01400: cannot insert NULL into ()
ORA-01400: cannot insert NULL into ()
ORA-01400: cannot insert NULL into ()
9 rows selected.
Elapsed: 00:00:00.21
Processing your data with this method takes 10.46 seconds, longer than the 5.75 seconds when using DML error logging and a direct-path INSERT but quicker than using a conventional-path INSERT. The results are conclusive: If you use DML error logging and you can insert your data with direct path, your batches can load an order of magnitude faster than if you processed your data row by row, using PL/SQL, even if you take advantage of features such as BULK COLLECT, FORALL, and SAVE EXCEPTIONS.
Finally, use TKPROF to format the SQL trace file you generated during your testing and check the explain plan and statistics for the direct-path insertion, shown in Listing 8. Note that the insertions into the error logging table are carried out after the INSERT has taken place and that these rows will stay in the error logging table even if the main statement fails and rolls back.
Code Listing 8: Using TKPROF to look at direct-path INSERT statistics
INSERT /*+ APPEND */
INTO sales_target
SELECT *
FROM sales_src
LOG ERRORS
REJECT LIMIT UNLIMITED
call count cpu elapsed disk query current rows
--- --- ---- ---- ---- ---- ---- ----
Parse 1 0.01 0.10 0 0 0 0
Execute 1 2.84 5.52 3460 5226 6659 918834
Fetch 0 0.00 0.00 0 0 0 0
--- --- ---- ---- ---- ---- ---- ----
total 2 2.85 5.62 3460 5226 6659 918834
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 99
Rows Row Source Operation
------- ---------------------------------------------------
1 LOAD AS SELECT (cr=5907 pr=3462 pw=5066 time=5539104 us)
918843 ERROR LOGGING (cr=5094 pr=3460 pw=0 time=92811603 us)
918843 TABLE ACCESS FULL SALES_SRC (cr=5075 pr=3458 pw=0 time=16547710 us)
***************************************************************************
INSERT INTO ERR$_SALES_TARGET (ORA_ERR_NUMBER$, ORA_ERR_MESG$,
ORA_ERR_ROWID$, ORA_ERR_OPTYP$, ORA_ERR_TAG$, SALES_ID, PROD_ID,
CUST_ID, CHANNEL_ID, TIME_ID, PROMO_ID, AMOUNT_SOLD, QUANTITY_SOLD)
VALUES
(:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)
call count cpu elapsed disk query current rows
--- --- ---- ---- ---- ---- ---- ----
Parse 1 0.00 0.00 0 0 0 0
Execute 9 0.00 0.01 2 4 39 9
Fetch 0 0.00 0.00 0 0 0 0
--- --- ---- ---- ---- ---- ---- ----
total 10 0.00 0.01 2 4 39 9
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 99 (recursive depth: 1)
Next, locate the part of the formatted trace file that represents the PL/SQL approach and note how the execution of the anonymous block is split into four parts: (1) the anonymous block is parsed, (2) the source data is bulk-collected into an array, (3) the array is unloaded into the target table, and (4) the exceptions are written to the error logging table. Listing 9 shows that, together, these steps take more than twice as long to execute as a direct-path INSERT statement with DML error logging yet involve more coding and store less information about the rows that returned errors.
Code Listing 9: Using TKPROF to look at PL/SQL INSERT statistics
DECLARE
TYPE array IS TABLE OF sales_target%ROWTYPE
INDEX BY BINARY_INTEGER;
sales_src_arr ARRAY;
errors NUMBER;
error_mesg VARCHAR2(255);
bulk_error EXCEPTION;
l_cnt NUMBER := 0;
PRAGMA exception_init
(bulk_error, -24381);
CURSOR c IS
SELECT *
FROM sales_src;
BEGIN
OPEN c;
LOOP
FETCH c
BULK COLLECT
INTO sales_src_arr
LIMIT 100;
BEGIN
FORALL i IN 1 .. sales_src_arr.count
SAVE EXCEPTIONS
INSERT INTO sales_target VALUES sales_src_arr(i);
EXCEPTION
WHEN bulk_error THEN
errors :=
SQL%BULK_EXCEPTIONS.COUNT;
l_cnt := l_cnt + errors;
FOR i IN 1..errors LOOP
error_mesg := SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
INSERT INTO sales_target_errors
VALUES (error_mesg);
END LOOP;
END;
EXIT WHEN c%NOTFOUND;
END LOOP;
CLOSE c;
DBMS_OUTPUT.PUT_LINE
( l_cnt || ' total errors' );
END;
call count cpu elapsed disk query current rows
--- --- ---- ---- ---- ---- ---- ----
Parse 1 0.03 0.02 0 0 0 0
Execute 1 1.14 2.71 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
--- --- ---- ---- ---- ---- ---- ----
total 2 1.17 2.73 0 0 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 99
********************************************************************
SELECT *
FROM
SALES_SRC
call count cpu elapsed disk query current rows
--- --- ---- ---- ---- ---- ---- ----
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 9189 3.60 3.23 0 14219 0 918843
--- --- ---- ---- ---- ---- ---- ----
total 9191 3.60 3.23 0 14219 0 918843
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 99 (recursive depth: 1)
Rows Row Source Operation
------- ---------------------------------------------------
918843 TABLE ACCESS FULL SALES_SRC (cr=14219 pr=0 pw=0 time=33083496 us)
**************************************************************************
INSERT INTO SALES_TARGET
VALUES
(:B1 ,:B2 ,:B3 ,:B4 ,:B5 ,:B6 ,:B7 ,:B8 )
call count cpu elapsed disk query current rows
--- --- ---- ---- ---- ---- ---- ----
Parse 1 0.00 0.00 0 0 0 0
Execute 9189 4.39 4.30 2 6886 54411 918834
Fetch 0 0.00 0.00 0 0 0 0
--- --- ---- ---- ---- ---- ---- ----
total 9190 4.39 4.30 2 6886 54411 918834
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 99 (recursive depth: 1)
************************************************************************
INSERT INTO SALES_TARGET_ERRORS
VALUES (:B1 )
call count cpu elapsed disk query current rows
--- --- ---- ---- ---- ---- ---- ----
Parse 1 0.00 0.00 0 0 0 0
Execute 9 0.00 0.01 2 4 30 9
Fetch 0 0.00 0.00 0 0 0 0
--- --- ---- ---- ---- ---- ---- ----
total 10 0.00 0.01 2 4 30 9
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 99 (recursive depth: 1)
*************** Leftover from Listing 2 ***************
SQL> DESC sales_src
Name Null? Type
------------------- ---- -------------
SALES_ID NUMBER
CUST_ID NOT NULL NUMBER
PROD_ID NOT NULL NUMBER
CHANNEL_ID NOT NULL NUMBER
TIME_ID NOT NULL DATE
PROMO_ID NOT NULL NUMBER
AMOUNT_SOLD NOT NULL NUMBER(10,2)
QUANTITY_SOLD NOT NULL NUMBER(10,2)
SQL> DESC sales_target
Name Null? Type
------------------- ---- -------------
SALES_ID NUMBER
CUST_ID NOT NULL NUMBER
PROD_ID NOT NULL NUMBER
CHANNEL_ID NOT NULL NUMBER
TIME_ID NOT NULL DATE
PROMO_ID NOT NULL NUMBER
AMOUNT_SOLD NOT NULL NUMBER(10,2)
QUANTITY_SOLD NOT NULL NUMBER(10,2)
Conclusion
In the past, if you wanted to load data into a table and gracefully deal with constraint violations or other DML errors, you either had to use a utility such as SQL*Loader or write a PL/SQL procedure that processed each row on a row-by-row basis. The new DML error logging feature in Oracle Database 10g Release 2 enables you to add a new LOG ERRORS clause to most DML statements that allows the operation to continue, writing errors to an error logging table. By using the new DML error logging feature, you can load your batches faster, have errors handled automatically, and do away with the need for custom-written error handling routines in your data loading process.
}}}
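For quick reference, here is the whole DML error logging pattern in one place: a minimal sketch assuming the sales_src/sales_target tables from the article above (the tag string is arbitrary and just stamps the error rows):
{{{
-- create ERR$_SALES_TARGET with DBMS_ERRLOG (one-time setup)
EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('SALES_TARGET');

-- direct-path load; bad rows land in the error table instead of failing the statement
INSERT /*+ APPEND */ INTO sales_target
SELECT * FROM sales_src
LOG ERRORS INTO err$_sales_target ('batch_01')
REJECT LIMIT UNLIMITED;

-- inspect the rejects
SELECT ora_err_number$, ora_err_mesg$ FROM err$_sales_target;
}}}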
http://www.linux-magazine.com/Online/Features/Will-DNF-Replace-Yum
https://fedoraproject.org/wiki/Features/DNF#Owner
https://github.com/rpm-software-management/dnf/wiki
http://dnf.readthedocs.org/en/latest/cli_vs_yum.html
https://en.wikipedia.org/wiki/DNF_(software)
http://www.liquidweb.com/kb/dnf-dandified-yum-command-examples-install-remove-upgrade-and-downgrade/
http://www.maketecheasier.com/dnf-package-manager/
https://anup07.wordpress.com/tag/dandified-yum/
https://blogs.oracle.com/XPSONHA/entry/using_dnfs_for_test_purposes
http://oracleprof.blogspot.com/2011/11/dnfs-configuration-and-hybrid-column.html
http://www.pythian.com/news/34425/oracle-direct-nfs-how-to-start/
Direct NFS vs Kernel NFS http://glennfawcett.wordpress.com/2009/12/14/direct-nfs-vs-kernel-nfs-bake-off-with-oracle-11g-and-solaris-and-the-winner-is/
! references
Mount Options for Oracle files when used with NFS on NAS devices (Doc ID 359515.1)
Step by Step - Configure Direct NFS Client (DNFS) on Linux (11g) (Doc ID 762374.1)
How To Setup DNFS (Direct NFS) On Oracle Release 11.2 (Doc ID 1452614.1)
Direct NFS: FAQ (Doc ID 954425.1)
Configuring Oracle Exadata Backup https://docs.oracle.com/cd/E28223_01/html/E27586/configappl.html
https://taliphakanozturken.wordpress.com/2013/01/22/what-is-oracle-direct-nfs-how-to-enable-it/
!! dnfs
http://blog.oracle48.nl/direct-nfs-configuring-and-network-considerations-in-practise/
http://www.dba86.com/docs/oracle/12.2/CWWIN/creating-an-oranfstab-file-for-direct-nfs-client.htm
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ladbi/enabling-and-disabling-direct-nfs-client-control-of-nfs.html#GUID-27DDB55B-F79E-4F40-8228-5D94456E620B
dnfs_workshop_ebernal.pdf
Step by Step - Configure Direct NFS Client (DNFS) on Linux (Doc ID 762374.1)
This Note Covers Some Frequently Asked Questions Related to Direct NFS (Doc ID 1496040.1)
How To Setup DNFS (Direct NFS) On Oracle Release 11.2 (Doc ID 1452614.1)
Direct NFS monitoring and v$views (Doc ID 1495739.1)
mondnfs.sql
mondnfs_pre11204.sql
TESTCASE Step by Step - Configure Direct NFS Client (DNFS) on Windows (Doc ID 1468114.1)
Collecting The Required Information For Support To Troubleshot DNFS (Direct NFS) Issues (11.1, 11.2 & 12c). (Doc ID 1464567.1)
https://kb.netapp.com/app/answers/answer_view/a_id/1001816/~/best-practices-to-configure-a-dnfs-client-
https://www.oracle.com/technetwork/server-storage/sun-unified-storage/documentation/oracle11gr2-zfssa-bestprac-2255303.pdf
https://blog.pythian.com/oracle-direct-nfs-how-to-start/
How to Disable DNFS in Oracle (Doc ID 2247243.1)
Oracle DNS configuration for SCAN
http://www.oracle-base.com/articles/linux/DnsConfigurationForSCAN.php
Configuring a small DNS server for SCAN
http://blog.ronnyegner-consulting.de/2009/10/15/configuring-a-small-dns-server-for-scan/
LinuxHomeNetworking - DNS
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch18_:_Configuring_DNS
http://www.oracle.com/technetwork/database/oracledrcp11g-1-133381.pdf
Master Note: Overview of Database Resident Connection Pooling (DRCP) (Doc ID 1501987.1)
Is Database Resident Connection Pooling (DRCP) Supported with JDBC-THIN / JDBC-OCI ? (Doc ID 1087381.1)
How To Setup and Trace Database Resident Connection Pooling (DRCP) (Doc ID 567854.1)
How to tune Database Resident Connection Pooling(DRCP) for scalability (Doc ID 1391004.1)
Connecting to an already started session (Doc ID 1524070.1)
Managing Processes http://docs.oracle.com/cd/E11882_01/server.112/e25494/manproc.htm#ADMIN11000 <-- HOWTO
Example 9-6 Database Resident Connection Pooling Application http://docs.oracle.com/cd/E11882_01/appdev.112/e10646/oci09adv.htm#LNOCI18203
When to Use Connection Pooling, Session Pooling, or Neither http://docs.oracle.com/cd/E11882_01/appdev.112/e10646/oci09adv.htm#LNOCI16652
Database Resident Connection Pooling and LOGON/LOGOFF Triggers http://docs.oracle.com/cd/E11882_01/server.112/e25494/manproc.htm#ADMIN13400
Example 9-7 Connect String to Use for a Deployment in Dedicated Server Mode with DRCP Not Enabled http://docs.oracle.com/cd/E11882_01/appdev.112/e10646/oci09adv.htm#LNOCI18204
http://progeeking.com/2013/10/15/database-resident-connection-pooling-drcp/
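A minimal sketch of turning DRCP on and pointing clients at it (pool settings are illustrative; the connect descriptors assume a service named orcl on dbhost):
{{{
-- as SYSDBA: start and size the one default pool
EXEC DBMS_CONNECTION_POOL.START_POOL;
EXEC DBMS_CONNECTION_POOL.CONFIGURE_POOL('SYS_DEFAULT_CONNECTION_POOL', minsize => 10, maxsize => 100, inactivity_timeout => 300);

-- clients request a pooled server in the connect string:
--   easy connect:  sqlplus scott/tiger@//dbhost:1521/orcl:POOLED
--   TNS:           (CONNECT_DATA=(SERVICE_NAME=orcl)(SERVER=POOLED))
}}}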
! DRCP with JDBC
supported starting 12c
12c New Features http://docs.oracle.com/database/121/NEWFT/chapter12101.htm#NEWFT182
! my thoughts on this
<<<
On one of our databases here there are multiple app schemas (around 6), each tied to a JVM program, and each with its own connection pool sized to around 250 max. The problem with this setup is that there is no "central pool" of connection pools, so there will be times when one schema is overloading the box versus the others. If we bring the pools down to a lower value (30-50 min/max), so that app users queue for an available connection on the app side and not on the DB side, we have to do it across the board on all schemas; doing it on just one or two apps doesn't help, because the other applications are still loading the database. Lowering the value for everyone would make things faster, but with so many development teams involved it is politically not easy to convince them, especially if it's a bank. So unless you have a parallel environment (clone) where you can mimic the load with the proposed changes, you may not see much progress on this effort. This is really a tricky issue to control.
We explored a couple of things:
1) DRCP was a promising option for us, but for our JDBC apps it is only supported starting with 12c (12c New Features http://docs.oracle.com/database/121/NEWFT/chapter12101.htm#NEWFT182), and the feature has limitations of its own: DRCP only has one (default) pool, there are restrictions around features such as ASO, etc.
2) Connection Rate Limiter on the listener side. There's a white paper http://www.oracle.com/technetwork/database/enterprise-edition/oraclenetservices-connectionratelim-133050.pdf, and it appears to queue incoming connections rather than kill them; see the listener.ora sketch below. A demo is here http://jhdba.wordpress.com/2010/09/02/using-the-connection_rate-parameter-to-stop-dos-attacks/
<<<
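From the white paper above, the listener-side rate limit boils down to two listener.ora entries; a sketch from memory of that paper (listener named LISTENER; host is a placeholder):
{{{
# cap new connections per second across the rate-limited endpoints
CONNECTION_RATE_LISTENER = 10

LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521)(RATE_LIMIT = yes))
  )
}}}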
http://orainternals.wordpress.com/2010/03/25/rac-object-remastering-dynamic-remastering/
How DRM works in RAC cluster http://goo.gl/FZPZI
11.2 RAC ==> Dynamic Resource Mastering(DRM) (Reconfiguration) , LMS / LMD / LMON / GCS/ GES concepts explained http://goo.gl/dVZ1P
http://www.ads1st.com/rac-dynamic-resource-management.html
http://www.ora-solutions.net/web/2009/05/12/your-experience-with-rac-dynamic-remastering-drm-in-10gr2/
oracle racsig paper http://goo.gl/PrHNW
-- GOOD INTRO ON DTRACE
http://groups.google.com/group/comp.unix.solaris/msg/73d6407711b38014%3Fdq%3D%26start%3D50%26hl%3Den%26lr%3D%26ie%3DUTF-8?pli=1
-- Kyle - Getting Started with DTrace
http://dboptimizer.com/2011/12/13/getting-started-with-dtrace/
How to use DTrace and mdb to Interpret vmstat Statistics (Doc ID 1009494.1)
-- SYSTEM PRIVS PREREQ
http://blogs.oracle.com/yunpu/entry/giving_a_user_privileges_to
-- DTRACE ON MAC
http://www.mactech.com/articles/mactech/Vol.23/23.11/ExploringLeopardwithDTrace/index.html
''top 10 commands on mac'' http://dtrace.org/blogs/brendan/2011/10/10/top-10-dtrace-scripts-for-mac-os-x/
-- LOCKSTAT
A Primer On Lockstat [ID 1005868.1]
-- MEMORY LEAK
http://blogs.oracle.com/openomics/entry/investigating_memory_leaks_with_dtrace
-- FAST DUMP
How to Use the Oracle Solaris Fast Crash Dump Feature [ID 1128738.1]
-- CLOUD ANALYTICS
http://www.ustream.tv/recorded/12123446
https://blogs.oracle.com/brendan/entry/dtrace_cheatsheet
https://blogs.oracle.com/brendan/resource/DTrace-cheatsheet.pdf
MDB cheatsheet https://blogs.oracle.com/jwadams/entry/an_mdb_1_cheat_sheet
-- DTrace TCP
http://blogs.oracle.com/amaguire/entry/dtracing_tcp_congestion_control
https://blogs.oracle.com/wim/entry/trying_out_dtrace
https://blogs.oracle.com/OTNGarage/entry/how_to_get_started_using
''Create the test case script'' - this script generates a sustained md5sum load, which is a CPU-centric workload
{{{
root@solaris:/home/oracle# dd if=/dev/urandom of=testfile count=20 bs=1024k
root@solaris:/home/oracle# cat md5.sh
#!/bin/sh
i=0
while [ 1 ]
do
md5sum testfile
i=`expr $i + 1`
echo "Iteration: $i"
done
}}}
''Execute the script''
{{{
root@solaris:/home/oracle# sh md5.sh
sample output:
...
root@solaris:/home/oracle# sh md5.sh
a5238634023667d128026bbc3d77c1cd testfile
Iteration: 1
a5238634023667d128026bbc3d77c1cd testfile
Iteration: 2
a5238634023667d128026bbc3d77c1cd testfile
Iteration: 3
a5238634023667d128026bbc3d77c1cd testfile
Iteration: 4
a5238634023667d128026bbc3d77c1cd testfile
Iteration: 5
a5238634023667d128026bbc3d77c1cd testfile
Iteration: 6
a5238634023667d128026bbc3d77c1cd testfile
Iteration: 7
a5238634023667d128026bbc3d77c1cd testfile
Iteration: 8
a5238634023667d128026bbc3d77c1cd testfile
Iteration: 9
a5238634023667d128026bbc3d77c1cd testfile
Iteration: 10
...
}}}
''Profile the session''
{{{
### TOP
root@solaris:/home/oracle# top -c
last pid: 20528; load avg: 0.84, 0.87, 0.78; up 0+00:35:49 14:13:39
102 processes: 100 sleeping, 1 running, 1 on cpu
CPU states: 0.0% idle, 74.0% user, 26.0% kernel, 0.0% iowait, 0.0% swap
Kernel: 526 ctxsw, 6184 trap, 315 intr, 5714 syscall, 25 fork, 5143 flt
Memory: 1024M phys mem, 53M free mem, 977M total swap, 976M free swap
PID USERNAME NLWP PRI NICE SIZE RES STATE TIME CPU COMMAND
742 oracle 3 59 0 61M 43M sleep 0:24 2.45% /usr/bin/Xorg :0 -nolisten tcp -br -auth /tmp/gdm-auth-cookies-rQaOBb/auth-f
1009 oracle 2 59 0 89M 19M sleep 0:19 2.36% gnome-terminal
19296 root 1 10 0 8948K 2312K sleep 0:01 2.20% sh md5.sh
954 oracle 20 59 0 71M 51M sleep 0:18 0.86% /usr/bin/java -client -jar /usr/share/vpanels/vpanels-client.jar sysmon
1992 root 1 59 0 7544K 1668K sleep 0:07 0.61% mpstat 1 100000
20372 root 1 59 0 3920K 2260K cpu 0:00 0.28% top -c
348 root 1 59 0 3668K 2180K sleep 0:00 0.02% /usr/lib/hal/hald-addon-acpi
### PRSTAT
PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWPID
25076 root 63 10 0.3 0.0 0.0 0.0 0.0 26 0 394 13K 0 md5sum/1
19296 root 0.5 2.0 0.0 0.0 0.0 0.0 84 13 114 76 2K 72 bash/1
1009 oracle 0.4 0.6 0.0 0.0 0.0 0.0 95 3.6 264 2 1K 0 gnome-termin/1
5 root 0.0 0.6 0.0 0.0 0.0 0.0 99 0.2 112 104 0 0 zpool-rpool/13
954 oracle 0.1 0.3 0.0 0.0 0.0 97 0.0 2.7 202 1 303 0 java/20
24399 root 0.0 0.4 0.0 0.0 0.0 0.0 100 0.0 44 2 525 0 prstat/1
742 oracle 0.1 0.3 0.0 0.0 0.0 0.0 99 0.1 95 0 785 0 Xorg/1
978 oracle 0.1 0.2 0.0 0.0 0.0 0.0 99 0.6 120 0 510 0 xscreensaver/1
24899 root 0.0 0.2 0.0 0.0 0.0 0.0 100 0.0 1 1 364 0 top/1
954 oracle 0.1 0.1 0.0 0.0 0.0 100 0.0 0.1 101 0 202 0 java/19
954 oracle 0.0 0.1 0.0 0.0 0.0 0.0 98 2.4 82 0 82 0 java/9
23106 oracle 0.1 0.0 0.0 0.0 0.0 0.0 100 0.0 30 0 167 0 xscreensaver/1
555 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 43 0 258 0 nscd/17
5 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.2 22 3 0 0 zpool-rpool/25
5 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.2 21 1 0 0 zpool-rpool/22
11204 oracle 0.0 0.0 0.0 0.0 0.0 0.0 100 0.1 1 3 23 0 sshd/1
958 oracle 0.0 0.0 0.0 0.0 0.0 100 0.0 0.2 10 0 20 0 mixer_applet/3
954 oracle 0.0 0.0 0.0 0.0 0.0 0.0 100 0.2 9 0 27 0 java/12
5 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.2 22 1 0 0 zpool-rpool/26
5 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.2 22 1 0 0 zpool-rpool/20
5 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.2 21 0 0 0 zpool-rpool/24
423 root 0.0 0.0 0.0 0.0 0.0 0.0 99 1.4 5 0 25 5 ntpd/1
5 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 2 1 0 0 zpool-rpool/2
510 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.3 9 0 9 0 VBoxService/7
5 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.2 20 0 0 0 zpool-rpool/23
5 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.2 20 0 0 0 zpool-rpool/19
979 oracle 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 5 0 10 0 updatemanage/1
5 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.2 20 0 0 0 zpool-rpool/21
954 oracle 0.0 0.0 0.0 0.0 0.0 100 0.0 0.1 5 0 5 0 java/3
134 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 3 0 10 0 dhcpagent/1
97 root 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 2 0 2 0 nwamd/1
969 oracle 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 1 0 3 0 gnome-power-/1
855 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.4 1 0 10 0 sendmail/1
510 root 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 1 0 2 0 VBoxService/6
510 root 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 1 0 2 0 VBoxService/5
510 root 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 1 0 3 0 VBoxService/3
255 root 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 2 0 4 0 devfsadm/3
1002 oracle 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 1 0 1 0 rad/3
984 oracle 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 1 0 2 0 nwam-manager/1
954 oracle 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 1 0 1 0 java/10
655 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 1 0 1 0 fmd/2
918 oracle 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 1 0 4 0 ssh-agent/1
555 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 1 0 1 0 nscd/31
Total: 105 processes, 469 lwps, load averages: 1.33, 1.33, 1.20
### VMSTAT
root@solaris:/home/oracle# vmstat 1 1000
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 -- -- in sy cs us sy id
0 0 0 1037596 187496 221 2144 0 0 3 0 306 17 -0 0 0 307 3362 688 29 17 54
0 0 0 915384 54608 481 4851 0 0 0 0 0 24 0 0 0 520 5772 773 72 28 0
0 0 0 915568 54832 520 5252 0 0 0 0 0 0 0 0 0 280 6024 458 74 26 0
0 0 0 915304 54588 522 5252 0 0 0 0 0 0 0 0 0 283 6025 457 74 26 0
2 0 0 915304 54612 521 5253 0 0 0 0 0 0 0 0 0 279 6068 465 75 25 0
0 0 0 915264 54584 520 5258 0 0 0 0 0 0 0 0 0 292 6039 464 74 26 0
0 0 0 915260 54580 487 4866 0 0 0 0 0 29 0 0 0 555 5730 792 72 28 0
1 0 0 915228 54592 520 5253 0 0 0 0 0 0 0 0 0 276 6092 449 74 26 0
0 0 0 915228 54612 522 5252 0 0 0 0 0 0 0 0 0 281 6010 469 74 26 0
### MPSTAT
root@solaris:/home/oracle# mpstat 1 10000
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 2175 0 0 307 108 687 65 0 7 0 3387 29 17 0 53
0 5304 0 0 286 84 480 113 0 0 0 6068 75 25 0 0
0 5226 0 0 289 91 474 120 0 0 0 5943 74 26 0 0
0 5346 0 0 294 92 480 113 0 0 0 6089 74 26 0 0
0 4829 0 0 550 351 832 189 0 76 0 5634 71 29 0 0
0 5279 0 0 285 85 480 116 0 0 0 6022 74 26 0 0
0 5278 0 0 286 86 478 130 0 0 0 5949 75 25 0 0
0 5278 0 0 280 84 463 110 0 0 0 6007 74 26 0 0
0 5331 0 0 283 86 464 112 0 0 0 6071 74 26 0 0
0 4893 0 0 454 253 689 158 0 43 0 5849 73 27 0 0
0 5257 0 0 276 83 463 120 0 0 0 6019 74 26 0 0
0 5278 0 0 279 83 461 107 0 0 0 6010 74 26 0 0
0 5227 0 0 279 85 454 110 0 0 0 5960 74 26 0 0
0 5292 0 0 282 87 468 110 0 1 0 6031 74 26 0 0
0 4926 0 0 425 226 616 137 0 28 0 5854 74 26 0 0
### IOSTAT
root@solaris:/home/oracle# iostat -xcd 1 100000
extended device statistics cpu
device r/s w/s kr/s kw/s wait actv svc_t %w %b us sy wt id
cmdk0 2.6 15.8 94.9 81.8 0.0 0.0 0.8 0 1 46 21 0 33
sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
extended device statistics cpu
device r/s w/s kr/s kw/s wait actv svc_t %w %b us sy wt id
cmdk0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 74 26 0 0
sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
extended device statistics cpu
device r/s w/s kr/s kw/s wait actv svc_t %w %b us sy wt id
cmdk0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 75 25 0 0
sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
extended device statistics cpu
device r/s w/s kr/s kw/s wait actv svc_t %w %b us sy wt id
cmdk0 0.0 156.9 0.0 772.1 0.0 0.1 0.4 1 4 69 31 0 0
sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
}}}
''DTrace!!!''
* we want to know what is causing those system calls, so this one-liner counts system calls by process name; the top process here is ''md5sum''
{{{
dtrace -n 'syscall:::entry { @[execname] = count(); }'
root@solaris:/home/oracle# dtrace -n 'syscall:::entry { @[execname] = count(); }'
dtrace: description 'syscall:::entry ' matched 233 probes
^C
fmd 1
in.routed 1
inetd 1
netcfgd 1
nwamd 1
svc.configd 1
iiimd 2
rad 2
utmpd 2
nwam-manager 4
ssh-agent 4
devfsadm 8
xscreensaver 10
gnome-power-mana 12
sshd 16
updatemanagernot 16
dhcpagent 18
hald 20
sendmail 22
VBoxService 23
hald-addon-acpi 26
mixer_applet2 30
ntpd 40
java 965
Xorg 1797
mpstat 2115
dtrace 2357
gnome-terminal 3225
expr 5820
bash 7683
md5sum 21214
}}}
* matching the syscall probe only when the execname matches our investigation target, ''md5sum'', and counting the syscall name
{{{
dtrace -n 'syscall:::entry /execname == "md5sum"/ { @[probefunc] = count(); }'
root@solaris:/home/oracle# dtrace -n 'syscall:::entry /execname == "md5sum"/ { @[probefunc] = count(); }'
dtrace: description 'syscall:::entry ' matched 233 probes
^C
llseek 111
rexit 111
write 111
getpid 112
getrlimit 112
ioctl 112
open64 112
sysi86 112
systeminfo 112
setcontext 224
sysconfig 224
mmapobj 336
fstat64 448
open 448
memcntl 560
resolvepath 560
stat64 560
close 669
brk 672
mmap 784
read 17961
}}}
* what is calling ''read'' by using the ustack() DTrace action
{{{
dtrace -n 'syscall::read:entry /execname == "md5sum"/ { @[ustack()] = count();}'
root@solaris:/home/oracle# dtrace -n 'syscall::read:entry /execname == "md5sum"/ { @[ustack()] = count();}'
dtrace: description 'syscall::read:entry ' matched 1 probe
^C
0xfeef25b5
0xfeebb91c
0xfeec00b0
0x80554f2
0x805304d
0x805382f
0x8052a7d
161
0xfeef25b5
0xfeebb91c
0xfeec00b0
0x80554f2
0x805304d
0x805382f
0x8052a7d
161
}}}
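The same predicate can also feed other DTrace aggregations; for example, a sketch that shows the distribution of requested read sizes (arg2 is the byte-count argument of the read syscall):
{{{
dtrace -n 'syscall::read:entry /execname == "md5sum"/ { @["read size (bytes)"] = quantize(arg2); }'
}}}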
* show top process and syscall
{{{
dtrace -n 'syscall:::entry { @num[execname,probefunc] = count(); }'
root@solaris:/home/oracle# dtrace -n 'syscall:::entry { @num[execname,probefunc] = count(); }'
dtrace: description 'syscall:::entry ' matched 233 probes
^C
dtrace fstat 1
dtrace lwp_sigmask 1
dtrace mmap 1
dtrace schedctl 1
dtrace setcontext 1
dtrace sigpending 1
dtrace write 1
fmd pollsys 1
gnome-power-mana clock_gettime 1
gnome-power-mana write 1
inetd lwp_park 1
netcfgd lwp_park 1
ntpd getpid 1
ntpd pollsys 1
nwam-manager ioctl 1
nwam-manager pollsys 1
sendmail pollsys 1
ssh-agent getpid 1
ssh-agent pollsys 1
top pollsys 1
top sysconfig 1
top write 1
xscreensaver write 1
VBoxService ioctl 2
VBoxService lwp_park 2
devfsadm gtime 2
devfsadm lwp_park 2
dhcpagent pollsys 2
gnome-power-mana ioctl 2
gnome-power-mana read 2
gnome-terminal fcntl 2
rad lwp_park 2
sendmail lwp_sigmask 2
ssh-agent gtime 2
sshd read 2
sshd write 2
top close 2
top getdents 2
top getuid 2
top lseek 2
top uadmin 2
top zone 2
xscreensaver gtime 2
xscreensaver ioctl 2
xscreensaver read 2
Xorg writev 3
dtrace sysconfig 3
gnome-power-mana pollsys 3
sendmail pset 3
xscreensaver pollsys 3
dhcpagent lwp_sigmask 4
dtrace sigaction 4
sendmail gtime 4
sshd pollsys 4
top open 4
dtrace lwp_park 5
ntpd setcontext 5
ntpd sigsuspend 5
updatemanagernot ioctl 5
dtrace brk 6
top ioctl 6
updatemanagernot pollsys 6
sshd lwp_sigmask 8
VBoxService nanosleep 10
mixer_applet2 ioctl 10
mixer_applet2 lwp_park 10
Xorg pollsys 12
ntpd lwp_sigmask 15
java ioctl 18
top gtime 20
Xorg setitimer 24
Xorg read 25
Xorg clock_gettime 48
bash fcntl 66
bash pipe 66
bash write 66
expr getpid 66
expr getrlimit 66
expr ioctl 66
expr rexit 66
expr sysi86 66
expr systeminfo 66
expr write 66
md5sum getpid 66
md5sum getrlimit 66
md5sum ioctl 66
md5sum llseek 66
md5sum open64 66
md5sum rexit 66
md5sum sysi86 66
md5sum systeminfo 66
md5sum write 66
bash brk 67
java pollsys 92
top fstat 110
bash exece 132
bash forksys 132
bash lwp_self 132
bash schedctl 132
expr fstat64 132
expr setcontext 132
expr sysconfig 132
md5sum setcontext 132
md5sum sysconfig 132
bash setcontext 137
gnome-terminal clock_gettime 140
bash read 182
java lwp_cond_signal 198
md5sum mmapobj 198
bash waitsys 203
top pread 214
dtrace p_online 256
bash getpid 264
bash stat64 264
expr brk 264
expr mmapobj 264
md5sum fstat64 264
md5sum open 264
gnome-terminal write 274
java lwp_cond_wait 302
gnome-terminal ioctl 316
gnome-terminal pollsys 317
gnome-terminal read 322
expr open 330
md5sum memcntl 330
md5sum resolvepath 330
md5sum stat64 330
bash close 396
expr close 396
expr memcntl 396
md5sum brk 396
md5sum close 396
expr mmap 462
expr resolvepath 462
md5sum mmap 462
gnome-terminal lseek 497
expr stat64 528
dtrace ioctl 1299
bash sigaction 1452
bash lwp_sigmask 1523
md5sum read 10669
root@solaris:/home/oracle#
}}}
* Tanel has a script called ''dstackprof'' that you can use against a single process; in the samples below you will notice that the shell is mostly looping, forking children, and reading their output
{{{
root@solaris:/home/oracle# sh dstackprof.sh 19296
DStackProf v1.02 by Tanel Poder ( http://www.tanelpoder.com )
Sampling pid 19296 for 5 seconds with stack depth of 100 frames...
10 samples with stack below
__________________
libc.so.1`__close
bash`command_substitute
bash`0x80a08ba
bash`0x8097666
bash`expand_string_assignment
bash`0x8097378
bash`0x8096b7a
bash`do_word_assignment
bash`0x80a210d
bash`expand_words
bash`0x80813d5
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start
10 samples with stack below
__________________
libc.so.1`__sigaction
bash`set_signal_handler
bash`0x808f250
bash`wait_for
bash`command_substitute
bash`0x80a08ba
bash`0x8097666
bash`expand_string_assignment
bash`0x8097378
bash`0x8096b7a
bash`do_word_assignment
bash`0x80a210d
bash`expand_words
bash`0x80813d5
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start
10 samples with stack below
__________________
libc.so.1`__waitid
libc.so.1`waitpid
bash`0x80905f7
bash`0x8090554
libc.so.1`__sighndlr
libc.so.1`call_user_handler
libc.so.1`sigacthandler
libc.so.1`__read
bash`zread
bash`0x809a281
bash`command_substitute
bash`0x80a08ba
bash`0x8097666
bash`expand_string_assignment
bash`0x8097378
bash`0x8096b7a
bash`do_word_assignment
bash`0x80a210d
bash`expand_words
bash`0x80813d5
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start
10 samples with stack below
__________________
libc.so.1`__write
libc.so.1`_xflsbuf
libc.so.1`_flsbuf
libc.so.1`putc
libc.so.1`putchar
bash`echo_builtin
bash`0x8081dbf
bash`0x80827f0
bash`0x808192d
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start
10 samples with stack below
__________________
libc.so.1`lmutex_lock
libc.so.1`continue_fork
libc.so.1`forkx
libc.so.1`fork
bash`make_child
bash`command_substitute
bash`0x80a08ba
bash`0x8097666
bash`expand_string_assignment
bash`0x8097378
bash`0x8096b7a
bash`do_word_assignment
bash`0x80a210d
bash`expand_words
bash`0x80813d5
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start
10 samples with stack below
__________________
libc.so.1`mutex_unlock
libc.so.1`stdio_unlocks
libc.so.1`libc_parent_atfork
libc.so.1`_postfork_parent_handler
libc.so.1`forkx
libc.so.1`fork
bash`make_child
bash`command_substitute
bash`0x80a08ba
bash`0x8097666
bash`expand_string_assignment
bash`0x8097378
bash`0x8096b7a
bash`do_word_assignment
bash`0x80a210d
bash`expand_words
bash`0x80813d5
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start
10 samples with stack below
__________________
libc.so.1`syscall
libc.so.1`thr_sigsetmask
libc.so.1`sigprocmask
bash`0x8092043
bash`reap_dead_jobs
bash`0x80806aa
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start
10 samples with stack below
__________________
libc.so.1`syscall
libc.so.1`thr_sigsetmask
libc.so.1`sigprocmask
bash`stop_pipeline
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start
10 samples with stack below
__________________
libc.so.1`syscall
libc.so.1`thr_sigsetmask
libc.so.1`sigprocmask
bash`wait_for
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start
10 samples with stack below
__________________
unix`do_splx
genunix`disp_lock_exit
genunix`post_syscall
genunix`syscall_exit
unix`0xfffffffffb800ea9
10 samples with stack below
__________________
unix`splr
genunix`thread_lock
genunix`post_syscall
genunix`syscall_exit
unix`0xfffffffffb800ea9
10 samples with stack below
__________________
unix`splr
unix`lock_set_spl
genunix`disp_lock_enter
unix`disp
unix`swtch
unix`preempt
genunix`post_syscall
genunix`syscall_exit
unix`0xfffffffffb800ea9
10 samples with stack below
__________________
unix`tsc_read
genunix`gethrtime_unscaled
genunix`new_mstate
genunix`stop
genunix`pre_syscall
genunix`syscall_entry
unix`sys_syscall32
10 samples with stack below
__________________
unix`tsc_read
genunix`gethrtime_unscaled
unix`swtch
genunix`stop
genunix`pre_syscall
genunix`syscall_entry
unix`sys_syscall32
10 samples with stack below
__________________
unix`tsc_read
genunix`gethrtime
genunix`getproc
genunix`cfork
genunix`forksys
unix`sys_syscall32
11 samples with stack below
__________________
libc.so.1`lmutex_lock
libc.so.1`continue_fork
libc.so.1`forkx
libc.so.1`fork
bash`make_child
bash`0x8082a19
bash`0x8081a81
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start
11 samples with stack below
__________________
unix`hat_kpm_page2va
unix`ppcopy
genunix`anon_private
genunix`segvn_faultpage
genunix`segvn_fault
genunix`as_fault
unix`pagefault
unix`trap
unix`0xfffffffffb8001d6
20 samples with stack below
__________________
unix`tsc_read
genunix`gethrtime_unscaled
unix`page_get_freelist
unix`page_create_va
genunix`swap_getapage
genunix`swap_getpage
genunix`fop_getpage
genunix`anon_private
genunix`segvn_faultpage
genunix`segvn_fault
genunix`as_fault
unix`pagefault
unix`trap
unix`0xfffffffffb8001d6
40 samples with stack below
__________________
unix`tsc_read
genunix`gethrtime_unscaled
genunix`syscall_mstate
unix`0xfffffffffb800eb8
50 samples with stack below
__________________
libc.so.1`__forkx
libc.so.1`fork
bash`make_child
bash`command_substitute
bash`0x80a08ba
bash`0x8097666
bash`expand_string_assignment
bash`0x8097378
bash`0x8096b7a
bash`do_word_assignment
bash`0x80a210d
bash`expand_words
bash`0x80813d5
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start
302 Total samples captured
}}}
* show the open files for process 19296
{{{
root@solaris:/home/oracle# pfiles 19296
19296: sh md5.sh
Current rlimit: 256 file descriptors
0: S_IFCHR mode:0620 dev:551,0 ino:444541655 uid:54321 gid:7 rdev:243,1
O_RDWR
/dev/pts/1
offset:2823322
1: S_IFCHR mode:0620 dev:551,0 ino:444541655 uid:54321 gid:7 rdev:243,1
O_RDWR
/dev/pts/1
offset:2823322
2: S_IFCHR mode:0620 dev:551,0 ino:444541655 uid:54321 gid:7 rdev:243,1
O_RDWR
/dev/pts/1
offset:2823322
255: S_IFREG mode:0755 dev:174,65544 ino:204 uid:0 gid:0 size:100
O_RDONLY|O_LARGEFILE FD_CLOEXEC
/home/oracle/md5.sh
offset:100
}}}
http://www.solarisinternals.com/wiki/index.php/CPU/Processor
http://blog.tanelpoder.com/2008/09/02/oracle-hidden-costs-revealed-part2-using-dtrace-to-find-why-writes-in-system-tablespace-are-slower-than-in-others/
''pulsar''
http://bensullins.com/data-warehousing-pulsar-method-introduction/
''Active Data Guard'' http://www.oracle.com/au/products/database/data-guard-hol-176005.html
this is better http://gavinsoorma.com/wp-content/uploads/2011/03/active_data_guard_hands_on_lab.pdf
<<<
Enabling Active Data Guard
Reading real-time data from an Active Data Guard Standby Database
Automatically managing potential apply lag using Active Data Guard query SLAs
Writing data when using an Active Data Guard standby.
Using schema redirection with Active Data Guard
Using Active Data Guard automatic block repair to detect and repair corrupt blocks on either the primary or standby, transparent to applications and users
The Hands-On Lab requires a system with Oracle Database (11.2 or 12.1) installed, a database called SFO created from the seed databases, and a physical standby created with the name NYC. You can use any names you wish, but the examples in the handout use SFO and NYC. For complete instructions please refer to the "Setup and Configuration" section in the handout at the link above.
<<<
''Data Guard'' http://www.oracle.com/au/products/database/data-guard-hol-basic-427660.html
<<<
Creating a Physical Standby Database
Verifying that Redo Transport has been configured correctly.
Configuring the Data Guard Broker
Changing the Transport mode using Broker Properties
Changing the Protection mode to Maximum Availability
Performing a switchover from the Primary to the Standby
Enabling Flashback database
Performing a Manual Failover from the Primary to the standby
Enabling and using Fast-Start Failover
The Hands-On Lab requires a system with Oracle Database (11.2 or 12.1) installed and a database called SFO created from the seed databases. You can use any name you wish, but the examples in the handout use SFO as the Primary database name.
The Hands-On Lab is provided as-is. We believe the documentation is sufficient for DBA's to be successful following this lab. We ask that you read the documentation carefully in order to avoid problems. However, if you have difficulties accessing the Hands-On Lab, or if you believe there is an error in the documentation that makes it impossible to complete the lab successfully, please send email to larry.carpenter@oracle.com and Larry will do his best to assist you.
<<<
Set the Network Configuration and Highest Network Redo Rates http://docs.oracle.com/cd/E11882_01/server.112/e10803/config_dg.htm#HABPT4898
Data Guard Transport Considerations on Oracle Database Machine (Exadata) (Doc ID 960510.1)
also see [[coderepo data warehouse, data model]]
<<showtoc>>
! data model example index - databaseanswers
http://www.databaseanswers.org/data_models/index.htm
http://www.databaseanswers.org/index.htm
http://www.databaseanswers.org/site_map.htm
http://www.databaseanswers.org/tutorials.htm
Levels/Kinds of Data Model
[img[ http://i.imgur.com/L3H2ecK.png ]]
! Relational
http://www.sqlfail.com/2015/02/27/database-design-resources-my-reading-list/
<<<
http://database-programmer.blogspot.com/2008/09/comprehensive-table-of-contents.html
http://fastanimals.com/melissa/WhitePapers/NormalizationDenormalizationWhitePaper.pdf
https://mwidlake.wordpress.com/tag/index-organized-tables/
https://iggyfernandez.wordpress.com/2013/07/28/no-to-sql-and-no-to-nosql/
https://blogs.oracle.com/datawarehousing/entry/optimizing_queries_with_attribute_clustering
http://use-the-index-luke.com/
https://richardfoote.wordpress.com/
<<<
Re-engineering Your Database Using Oracle SQL Developer Data Modeler 3.0
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldevdm/r30/updatedb/updatedb.htm
Quest: Data Modeling for the Database Developer, Designer & Admin http://www.youtube.com/watch?v=gPCUAcbbQ-Q
A Layman’s Approach to Relational Database Normalization http://oracledba.ezpowell.com/oracle/papers/Normalization.htm
denormalization http://oracledba.ezpowell.com/oracle/papers/Denormalization.htm
The Very Basics of Data Warehouse Design http://oracledba.ezpowell.com/oracle/papers/TheVeryBasicsOfDataWarehouseDesign.htm
All About Indexes in Oracle http://oracledba.ezpowell.com/oracle/papers/AllAboutIndexes.htm
Why an Object Database and not a Relational Database? http://oracledba.ezpowell.com/odbms/ObjectsVSRelational.html
https://gerardnico.com/wiki/data_modeling/data_modeling
https://gerardnico.com/wiki/dw/start#what_s_a_data_warehouse
https://gerardnico.com/wiki/dit/dit
https://gerardnico.com/wiki/data_quality/data_quality
http://www.vtc.com/products/Data-Modeling-tutorials.htm
{{{
01 Welcome
0101 Welcome
0102 Prerequisites for this Course
0103 About this Course
0104 Where to Find Documentation
0105 Samples and Example Data Models Part pt. 1
0106 Samples and Example Data Models Part pt. 2
0107 A Relational Database Modeling Tool
0108 ERWin: Changing Physical Structure
0109 ERWin: Generating Scripts
02 The History of Data Modeling
0201 What is a Data Model?
0202 Types of Data Models
0203 The Evolution of Data Modeling
0204 File Systems
0205 Hierarchical Databases
0206 Network Databases
0207 Relational Databases
0208 Object Databases
0209 Object-Relational Databases
0210 The History of the Relational Database
03 Tools for Data Modeling
0301 Entity Relationship Diagrams
0302 Using ERWin Part pt. 1
0303 Using ERWin Part pt. 2
0304 Using ERWin Part pt. 3
0305 Modeling in Microsoft Access
0306 The Parts of an Object Data Model
0307 Basic UML for Object Databases
0308 What is a Class Diagram?
0309 Building Class Structures
0310 Other UML Diagrams
04 Introducing Data Modeling
0401 The Relational Data Model
0402 The Object Data Model
0403 The Object-Relational Data Model
0404 Data Warehouse Data Modeling
0405 Client-Server Versus OLTP Databases
0406 Available Database Engines
05 Relational Data Modeling
0501 What is Normalization?
0502 Normalization Made Simple
0503 Relational Terms and Jargon
0504 1st Normal Form
0505 Demonstrating 1st Normal Form
0506 2nd Normal Form
0507 Demonstrating 2nd Normal Form
0508 3rd Normal Form
0509 Demonstrating 3rd Normal Form
0510 4th and 5th Normal Forms
0511 Primary/Foreign Keys/Referential Integrity
0512 The Traditional Relational Database Model
0513 Surrogate Keys and the Relational Model
0514 Denormalization pt. 1
0515 Denormalization pt. 2
06 Object Data Modeling
0601 The Object-Relational Database Model
0602 Relational Versus Object Models
0603 What is the Object Data Model?
0604 What is a Class?
0605 Again - a Class and an Object
0606 What is an Attribute?
0607 What is a Method?
0608 The Simplicity of Objects
0609 What is Inheritance?
0610 What is Multiple Inheritance?
0611 Some Specifics of the Object Data Model
07 Data Warehouse Data Modeling
0701 The Origin of Data Warehouses
0702 Why the Relational Model Fails
0703 The Dimensional Data Model Part pt. 1
0704 The Dimensional Data Model Part pt. 2
0705 Star Schemas and Snowflake Schemas
0706 Data Warehouse Model Design Basics
08 Getting Data from a Database
0801 What is Structured Query Language (SQL)?
0802 The Roots of SQL
0803 Queries
0804 Changing Data
0805 Changing Metadata
0806 What is ODQL?
09 Tuning a Relational Data Model
0901 Normalization Versus Denormalization
0902 Referential Integrity Part pt. 1
0903 Referential Integrity Part pt. 2
0904 Alternate Keys
0905 What is an Index?
0906 Indexing Considerations
0907 Too Many Indexes
0908 Composite Indexing
0909 Which Columns to Index?
0910 Index Types
0911 Match Indexes to SQL Code
0912 Types of Indexing in Detail pt. 1
0913 Types of Indexing in Detail pt. 2
0914 Where Index Types Apply
0915 Undoing Normalization
0916 What to Look For?
0917 Undoing Normal Forms
0918 Some Good and Bad Tricks Part pt. 1
0919 Some Good and Bad Tricks Part pt. 2
10 Tuning a Data Warehouse Data Model
1001 Denormalization
1002 Star Versus Snowflake Schemas
1003 Dimensional Hierarchies
1004 Specialized Data Warehouse Toys
11 Other Tricks
1101 RAID Arrays and Striping
1102 Standby Databases
1103 Replication
1104 Clustering
12 Wrapping it Up
1201 Some Available Database Engines
1202 The Future: Relational or Object?
1203 What You Have Learned
13 Credits
1301 About the Author
}}}
! NoSQL
Making the Shift from Relational to NoSQL http://www.couchbase.com/sites/default/files/uploads/all/whitepapers/Couchbase_Whitepaper_Transitioning_Relational_to_NoSQL.pdf
https://www.quora.com/How-is-relational-data-stored-in-a-NoSQL-database
https://www.quora.com/NoSQL-What-are-the-best-practices-to-convert-sql-based-relational-data-model-into-no-sql-model
http://blog.cloudthat.com/migration-from-relational-database-to-nosql-database/
!! books
NoSQL and SQL Data Modeling: Bringing Together Data, Semantics, and Software https://www.safaribooksonline.com/library/view/nosql-and-sql/9781634621113/
Next Generation Databases: NoSQL, NewSQL, and Big Data https://www.safaribooksonline.com/library/view/next-generation-databases/9781484213292/
An Overview of NoSQL Databases https://www.safaribooksonline.com/library/view/an-overview-of/9781634621649/
NoSQL for Mere Mortals https://www.safaribooksonline.com/library/view/nosql-for-mere/9780134029894/
Oracle NoSQL Database: Real-Time Big Data Management for the Enterprise https://www.safaribooksonline.com/library/view/oracle-nosql-database/9780071816533/
Data Modeling Explanation and Purpose https://www.safaribooksonline.com/library/view/data-modeling-explanation/9781634621632/97816346216321.html?autoStart=True
Data Modeling for MongoDB https://www.safaribooksonline.com/library/view/data-modeling-for/9781935504702/
Data Modeling Made Simple: A Practical Guide for Business and IT Professionals https://www.safaribooksonline.com/library/view/data-modeling-made/9780977140060/
! online tools
https://www.draw.io/
https://www.quora.com/Are-there-any-online-tools-for-database-model-design
https://www.modelio.org/about-modelio/license.html
https://kentgraziano.com/2012/02/20/the-best-free-data-modeling-tool-ever/
https://www.vertabelo.com/
https://en.wikipedia.org/wiki/Comparison_of_database_tools
https://en.wikipedia.org/wiki/Comparison_of_data_modeling_tools
https://www.simple-talk.com/sql/database-administration/five-online-database-modelling-services/
! data model books
Oracle SQL Developer Data Modeler for Database Design Mastery (Oracle Press) 1st Edition, Kindle Edition https://www.safaribooksonline.com/library/view/oracle-sql-developer/9780071850100/
https://www.amazon.com/Oracle-Developer-Modeler-Database-Mastery-ebook/dp/B00VMMR9EA/
! topics
!! composite primary key
https://www.youtube.com/watch?v=5yifu5JwYxE Oracle SQL Tutorial 20 - How to Create Composite Primary Keys, more here https://www.youtube.com/playlist?list=PL_c9BZzLwBRJ8f9-pSPbxSSG6lNgxQ4m9
https://weblogs.sqlteam.com/jeffs/2007/08/23/composite_primary_keys/
Is a composite primary key a good idea https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:580828234131
https://dba.stackexchange.com/questions/101635/modeling-optional-foreign-key-from-composite-primary-key-field-oracle-data-mode
How to define a composite primary key https://asktom.oracle.com/pls/asktom/f%3Fp%3D100:11:0::::P11_QUESTION_ID:136812348065
https://searchoracle.techtarget.com/answer/Can-I-put-two-primary-keys-in-one-table
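A minimal sketch of a composite primary key (table and column names are made up for illustration):
{{{
CREATE TABLE order_items (
  order_id NUMBER NOT NULL,   -- parent order
  line_no  NUMBER NOT NULL,   -- line within the order
  prod_id  NUMBER NOT NULL,
  qty      NUMBER NOT NULL,
  CONSTRAINT order_items_pk PRIMARY KEY (order_id, line_no)  -- two columns, one key
);
}}}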
!! self join
referred by example https://www.youtube.com/watch?v=W0p8KP0o8g4
manager_id example https://www.youtube.com/watch?v=G4vO83UUzek
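The manager_id pattern in one query; a sketch assuming the classic SCOTT.EMP demo table (EMPNO, ENAME, MGR):
{{{
-- each employee next to their manager; LEFT JOIN keeps the president (MGR is null)
SELECT e.ename AS employee, m.ename AS manager
FROM   emp e
LEFT JOIN emp m ON m.empno = e.mgr;
}}}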
!! information engineering notation
https://www.omg.org/retail-depository/arts-odm-73/data_modeling_methodology_and_.htm
!! data warehouse
!!! time dimension
https://www.youtube.com/results?search_query=data+warehouse+hourly+time+dimension
https://stackoverflow.com/questions/2507289/time-and-date-dimension-in-data-warehouse
https://blog.jamesbayley.com/2013/01/04/how-to-create-a-calendar-dimension-with-hourly-grain/
http://oracleolap.blogspot.com/2010/05/time-dimensions-with-hourly-time.html
https://www.google.com/search?q=hourly+time+dimension&oq=hourly+time+dimension&aqs=chrome..69i57.5208j1j4&sourceid=chrome&ie=UTF-8
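A sketch of generating a day-grain time dimension with a row generator (start date, row count, and column names are illustrative; for hourly grain, cross join a 24-row generator):
{{{
CREATE TABLE dim_date AS
SELECT d                   AS date_key,
       TO_CHAR(d, 'YYYY')  AS yr,
       TO_CHAR(d, 'Q')     AS qtr,
       TO_CHAR(d, 'MM')    AS mth,
       TO_CHAR(d, 'DY')    AS day_name
FROM  (SELECT DATE '2020-01-01' + LEVEL - 1 AS d
       FROM dual CONNECT BY LEVEL <= 366);   -- one row per day
}}}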
!!! hierarchy
https://gerardnico.com/olap/dimensional_modeling/hierarchy
region hierarchy https://www.google.com/search?q=data+warehouse+region+hierarchy&oq=data+warehouse+region+hierarchy&aqs=chrome..69i57j69i60l2j69i64.348j0j4&sourceid=chrome&ie=UTF-8
Dimensional Modelling Design Patterns: Beyond Basics https://www.youtube.com/watch?v=ppIoWzeFTrk
http://www.databaseanswers.org/data_models/corporate_hierarchy/index.htm
http://www.databaseanswers.org/data_models/user_defined_hierarchies/index.htm
http://www.databaseanswers.org/data_models/hierarchies/index.htm
http://www.databaseanswers.org/data_models/recipes_recursive/index.htm
https://smartbridge.com/remodeling-recursive-hierarchy-tables-for-business-intelligence/ <- nice
https://www.informationweek.com/software/information-management/kimball-university-five-alternatives-for-better-employee-dimension-modeling/d/d-id/1082326
https://dwbi1.wordpress.com/2017/10/18/hierarchy-with-multiple-parents/
time dimension hierarchy https://www.nuwavesolutions.com/simple-hierarchical-dimensions-html/
https://www.nuwavesolutions.com/ragged_hierarchical_dimensions/ <- nice
https://www.google.com/search?q=data+warehouse+data+model+Hierarchical+Queries&oq=data+warehouse+data+model+Hierarchical+Queries&aqs=chrome..69i57j69i64.11811j0j1&sourceid=chrome&ie=UTF-8
https://www.linkedin.com/pulse/step-by-step-guide-creating-sql-hierarchical-queries-bibhas-mitra/ <-- good stuff
https://oracle-base.com/articles/misc/hierarchical-queries
Find parent record as well as child record https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9526387800346066619
https://stackoverflow.com/questions/11559612/how-to-get-the-final-parent-id-column-in-oracle-connect-by-sql
https://stackoverflow.com/questions/935098/database-structure-for-tree-data-structure
https://dba.stackexchange.com/questions/46127/recursive-self-joins
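The basic Oracle hierarchical query shape the links above build on; a sketch over the same SCOTT.EMP-style table:
{{{
-- walk the tree top-down, indenting by depth
SELECT LPAD(' ', 2 * (LEVEL - 1)) || ename AS org_chart
FROM   emp
START WITH mgr IS NULL            -- root: the president
CONNECT BY PRIOR empno = mgr;     -- child rows point at parent via MGR
}}}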
!!! star schema
!!!! convert relational to star schema
https://www.google.com/search?q=convert+relational+to+star+schema&oq=convert+relational+to+star+schema&aqs=chrome..69i57j69i60l2j0l2.3422j0j1&sourceid=chrome&ie=UTF-8
https://dba.stackexchange.com/questions/144826/star-schema-from-relational-database
http://cci.drexel.edu/faculty/song/courses/info%20607/tutorial_WESST/EXAMPLE.HTM
http://blog.bguiz.com/2010/03/28/how-to-transform-an-operational-database-into-a-data-warehouse/
https://www.youtube.com/results?search_query=convert+relational+to+warehouse+schema
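The target shape of the conversion; a minimal star sketch with one fact and one dimension (names illustrative):
{{{
CREATE TABLE dim_product (
  prod_key  NUMBER PRIMARY KEY,   -- surrogate key
  prod_name VARCHAR2(100),
  category  VARCHAR2(50)
);

CREATE TABLE fact_sales (
  date_key DATE   NOT NULL,
  prod_key NUMBER NOT NULL REFERENCES dim_product,  -- FK to the dimension
  amount   NUMBER,
  quantity NUMBER
);
}}}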
!! EAV model (popular on nosql)
https://mikesmithers.wordpress.com/2013/12/22/the-anti-pattern-eavil-database-design/
Entity–attribute–value model
<<<
In the EAV data model, each attribute-value pair is a fact describing an entity, and a row in an EAV table stores a single fact. EAV tables are often described as "long and skinny": "long" refers to the number of rows, "skinny" to the few columns.
<<<
https://www.google.com/search?q=eav+pattern&oq=EAV+pattern&aqs=chrome.0.0l7.1934j0j1&sourceid=chrome&ie=UTF-8
https://www.slideshare.net/stepanyuk/implementation-of-eav-pattern-for-activerecord-models-13263311
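A minimal EAV sketch of the "long and skinny" shape described above (names are made up; the pivot at the end shows why querying EAV gets painful):
{{{
CREATE TABLE entity_attr_value (
  entity_id  NUMBER       NOT NULL,  -- the thing being described
  attr_name  VARCHAR2(30) NOT NULL,  -- which fact about it
  attr_value VARCHAR2(4000),         -- the fact, stored as text
  CONSTRAINT eav_pk PRIMARY KEY (entity_id, attr_name)
);

INSERT INTO entity_attr_value VALUES (100, 'COLOR',  'red');   -- one row per fact
INSERT INTO entity_attr_value VALUES (100, 'WEIGHT', '12.5');

-- pivot attributes back into columns
SELECT entity_id,
       MAX(CASE WHEN attr_name = 'COLOR'  THEN attr_value END) AS color,
       MAX(CASE WHEN attr_name = 'WEIGHT' THEN attr_value END) AS weight
FROM   entity_attr_value
GROUP BY entity_id;
}}}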
Installing and Using Standby Statspack in 11g (Doc ID 454848.1)
http://mujiang.blogspot.com/2010/03/setup-statspack-to-monitor-standby.html
{{{
col value format a30
select * from v$dataguard_stats;
}}}
http://jarneil.wordpress.com/2010/11/16/monitoring-the-progress-of-an-11gr2-standby-database/
https://sites.google.com/site/oraclepracticals/oracle-admin/oracle-data-guard-1/oracle-data-guard-tips
http://jhdba.wordpress.com/tag/vdataguard_stats/
{{{
* Data Guard Protection Modes
short and sweet: http://jarneil.wordpress.com/2007/10/25/dataguard-protection-levels/
8i, 9i, 10g: http://www.oracle.com/technology/deploy/availability/htdocs/DataGuardRedoShipping.htm
* Data Guard Mind Map
http://jarneil.wordpress.com/2008/10/12/the-dataguard-mind-map/
11g
DEPRECATED:
- no more standby_archive_dest
----------------------------------------------------------------------------------------
10g
Log Transport Services
- the default is ARCn
- can only be ARCn SYNC (default)
- for Log Writer Process (LGWR) ... the default is LGWR SYNC, could also be LGWR ASYNC (see REAL-TIME APPLY)
- If using LGWR, In 10.1 the LGWR sends data to small buffer in the SGA and LNS transports it to the standby site
- If using LGWR, In 10.2 the LGWR LNS background process reads directly from the redo log and transports the redo to the standby site
- You can change between asynchronous and synchronous log transportation dynamically. However, any changes to the configuration parameters will not take effect until the next log switch operation on the primary database
- default for VALID_FOR (start 10.1) attribute format is VALID_FOR=(redo_log_type,database_role) for role transition... default is (ALL_LOGFILES,ALL_ROLES)
- default for LOG_ARCHIVE_CONFIG.. SEND RECEIVE
- REOPEN.. the default is 300
- LOG_ARCHIVE_DEST_n... the default is OPTIONAL
- AFFIRM (for SYNC only).. the default is NOAFFIRM
- REAL-TIME APPLY, In Oracle 10.1 and above, you can configure the standby database to be updated synchronously, as redo is written to the standby redo log
To activate (using LGWR ASYNC on Maximum Performance): alter database recover managed standby database using current logfile disconnect;
- STANDBY REDO LOGS, Doc ID 219344.1 Usage, Benefits and Limitations of Standby Redo Logs (SRL)
DIFFERENCE IN THE LOG APPLY SERVICES WHEN USING STANDBY REDO LOGS
In case you do not have Standby Redo Logs, an Archived Redo Log is created
by the RFS process and when it has completed, this Archived Redo Log is applied
to the Standby Database by the MRP (Managed Recovery Process) or the Logical
Apply in Oracle 10g when using Logical Standby. An open (not fully written)
ArchiveLog file cannot be applied on the Standby Database and will not be used
in a Failover situation. This causes a certain data loss.
If you have Standby Redo Logs, the RFS process will write into the Standby Redo
Log as mentioned above and when a log switch occurs, the Archiver Process of the
Standby Database will archive this Standby Redo Log to an Archived Redo Log,
while the MRP process applies the information to the Standby Database. In a
Failover situation, you will also have access to the information already
written in the Standby Redo Logs, so the information will not be lost.
Starting with Oracle 10g you have also the Option to use Real-Time Apply with
Physical and Logical Standby Apply. When using Real-Time Apply we directly apply
Redo Data from Standby RedoLogs. Real-Time Apply is also not able to apply Redo
from partial filled ArchiveLogs if there are no Standby RedoLogs. So Standby
RedoLogs are mandatory for Real-Time Apply.
- DB_UNIQUE_NAME, In 10.1
- LOG_ARCHIVE_CONFIG, In 10.1
DEPRECATED:
- no more LOG_ARCHIVE_START
- no more REMOTE_ARCHIVE_ENABLE, conflicts with LOG_ARCHIVE_CONFIG..
----------------------------------------------------------------------------------------
9i
- SWITCHOVER, In Oracle 9.0.1 and above, you can perform a switchover operation such that the primary database becomes a new standby database, and the old standby database becomes the new primary database. A successful switchover operation
should never result in any data loss, irrespective of the physical standby configuration.
- LGWR PROCESS, In 9.0.1 above, LGWR can also transport redo to standby database
- STANDBY REDO LOGS, In 9.0.1 above, standby redo logs can be created. Requires LGWR.
Doc ID 150584.1 Data Guard 9i Setup with Guaranteed Protection Mode
----------------------------------------------------------------------------------------
8i
- READ ONLY MODE, In Oracle 8.1.5 and above, you can cancel managed recovery on the standby database and open the database in read-only mode for reporting purposes
----------------------------------------------------------------------------------------
7.3
- FAILOVER, Since Oracle 7.3, performing a failover operation from the primary database to the standby database has been possible. A failover operation may result in data loss,
depending on the configuration of the log archive destinations on the primary database.
}}}
Build Data Guard using any of the following methods (see the RMAN sketch after this list):
* from backupset
* Active Duplicate
* recover database from service <primary_service> (new in 12.1)
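A sketch of the 12.1 "from service" method (connect string and service name are placeholders):
{{{
$ rman target sys@stby_tns

# pull datafiles straight from the primary over the network, then recover
RMAN> RESTORE DATABASE FROM SERVICE primary_svc;
RMAN> RECOVER DATABASE FROM SERVICE primary_svc;
}}}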
http://gavinsoorma.com/2009/06/trigger-to-use-with-data-guard-to-change-service-name/
{{{
CREATE OR REPLACE TRIGGER manage_OCIservice
after startup on database
DECLARE
role VARCHAR(30);
BEGIN
SELECT DATABASE_ROLE INTO role FROM V$DATABASE;
IF role = 'PRIMARY' THEN
DBMS_SERVICE.START_SERVICE('apex_dg');
ELSE
DBMS_SERVICE.STOP_SERVICE('apex_dg');
END IF;
END;
/
}}}
<<<
To have an end-to-end view of Data Guard transport and apply performance, here's how I would troubleshoot it (with the scripts/tools in the table below):
1) get redo MB/s
• This is the redo generation
2) get the bandwidth link
• The bandwidth capacity
3) get the transport lag
• This metric will tell if there's a problem with the transport of the logs between the sites
• It's possible for redo to be generated at faster rates than what can be accommodated by the network
4) get apply lag
• This metric will tell if the managed recovery process is having a hard time reading the redo stream and applying it to the standby DB
• This is the difference between the SCN at the primary site and the SCN already applied at the standby site
5) get the IO breakdown/IO cell metrics
• Will tell if there is an IO capacity issue
• I would also get the cell metrics of primary just to compare
6) Primary and Standby DB wait events
• This will tell any obvious events causing the bottleneck
• On standby site AWR data is the same as primary. So we need to use ASH here because it’s in-memory.
7) Archive per hour/day
• output of archiving_per_day.sql to get the hourly/daily redo generation
8) Run the attached scripts from the following MOS notes
• Data Guard Physical Standby - Configuration Health Check (Doc ID 1581388.1)
• Script to Collect Data Guard Physical and Active Standby Diagnostic Information for Version 10g and above (Including RAC) (Doc ID 1577406.1)
• Monitoring a Data Guard Configuration (Doc ID 2064281.1)
<<<
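For steps 3 and 4, a quick first check on the standby: v$dataguard_stats (10.2 and above) reports both lags directly:
{{{
-- run on the standby
set lines 200
col name format a20
col value format a20
select name, value, time_computed
from v$dataguard_stats
where name in ('transport lag','apply lag');
}}}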
[img[ http://i.imgur.com/qhKPFxi.png ]]
also run archiving_per_day.sql on Primary
All the scripts can be downloaded here https://github.com/karlarao/scripts/tree/83c4681e796ccff8eb2001ba7d05d1eff4a543e6/data_guard
! references
https://docs.oracle.com/cd/E18283_01/server.112/e17110/dynviews_1103.htm
http://emrebaransel.blogspot.com/2013/07/data-guard-queries.html
http://blog.yannickjaquier.com/oracle/data-guard-apply-lag-gap-troubleshooting.html
http://yong321.freeshell.org/oranotes/DataGuardMonitoringScripts.txt
Presentation “Minimal Downtime Oracle 11g Upgrade” at DOAG Conference 2010
http://goo.gl/ZTQVD
''The netem Commands''
the examples below demonstrate a 10Mbps network transferring a file to another server.. theoretically you get 1.25MB/s.. if you want to play around with different WAN configs here's the list http://en.wikipedia.org/wiki/List_of_device_bandwidths#Wide_area_networks, see the stats of my tests below:
{{{
tc qdisc show                                                                           <-- to show
tc qdisc add dev eth0 root handle 1: tbf rate 10000kbit burst 10000kbit latency 10ms    <-- to set bandwidth
tc qdisc add dev eth0 parent 1: handle 10: netem delay 10ms                             <-- to set delay
tc qdisc change dev eth0 parent 1: handle 10: netem delay 100ms                         <-- to change delay
tc qdisc del dev eth0 root                                                              <-- to remove
}}}
-- no tweaks
{{{
[oracle@dg10g2 flash_recovery_area]$ du -sm dg10g.tar
192 dg10g.tar
[oracle@dg10g2 flash_recovery_area]$
[oracle@dg10g2 flash_recovery_area]$
[oracle@dg10g2 flash_recovery_area]$ scp dg10g.tar oracle@192.168.203.41:/u02/flash_recovery_area/
The authenticity of host '192.168.203.41 (192.168.203.41)' can't be established.
RSA key fingerprint is f2:ed:e1:43:a6:62:ee:b1:d0:70:39:cc:28:fb:9d:e8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.203.41' (RSA) to the list of known hosts.
oracle@192.168.203.41's password:
dg10g.tar 100% 192MB 27.4MB/s 00:07
[oracle@dg10g2 flash_recovery_area]$
[oracle@dg10g2 flash_recovery_area]$ ping 192.168.203.41
PING 192.168.203.41 (192.168.203.41) 56(84) bytes of data.
64 bytes from 192.168.203.41: icmp_seq=0 ttl=64 time=1.23 ms
64 bytes from 192.168.203.41: icmp_seq=1 ttl=64 time=0.198 ms
64 bytes from 192.168.203.41: icmp_seq=2 ttl=64 time=1.22 ms
64 bytes from 192.168.203.41: icmp_seq=3 ttl=64 time=0.311 ms
64 bytes from 192.168.203.41: icmp_seq=4 ttl=64 time=1.97 ms
--- 192.168.203.41 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.198/0.989/1.977/0.660 ms, pipe 2
}}}
-- configured with 156.25 KB/s with 100ms latency (too slow so I cancelled it)
tc qdisc add dev eth0 root handle 1: tbf rate 1250kbit burst 1250kbit latency 10ms
{{{
[oracle@dg10g1 flash_recovery_area]$ ls -ltr
drwxr-xr-x 5 oracle oinstall 4096 Oct 20 09:37 dg10g
-rw-r--r-- 1 oracle oinstall 207912960 Oct 21 11:45 flash_recovery_area.tar
[oracle@dg10g1 flash_recovery_area]$ date
Thu Oct 21 11:48:53 PHT 2010
[oracle@dg10g1 flash_recovery_area]$ scp flash_recovery_area.tar oracle@192.168.203.40:/u02/flash_recovery_area/
oracle@192.168.203.40's password:
flash_recovery_area.tar 65% 129MB 145.7KB/s 08:07 ETAKilled by signal 2.
[oracle@dg10g1 flash_recovery_area]$
[oracle@dg10g1 flash_recovery_area]$
[oracle@dg10g1 flash_recovery_area]$
[oracle@dg10g1 flash_recovery_area]$ ls
dg10g flash_recovery_area.tar
[oracle@dg10g1 flash_recovery_area]$ date
Thu Oct 21 12:04:23 PHT 2010
}}}
-- configured with 10Mbps (1.25MB/s) with 100ms latency
tc qdisc change dev eth0 root handle 1: tbf rate 10000kbit burst 10000kbit latency 10ms
[root@dg10g1 ~]# tc qdisc show
qdisc tbf 1: dev eth0 rate 10Mbit burst 1250Kb lat 9.8ms
qdisc netem 10: dev eth0 parent 1: limit 1000 delay 100.0ms
{{{
[oracle@dg10g1 flash_recovery_area]$ ping 192.168.203.40
PING 192.168.203.40 (192.168.203.40) 56(84) bytes of data.
64 bytes from 192.168.203.40: icmp_seq=0 ttl=64 time=201 ms
64 bytes from 192.168.203.40: icmp_seq=1 ttl=64 time=101 ms
64 bytes from 192.168.203.40: icmp_seq=2 ttl=64 time=100 ms
64 bytes from 192.168.203.40: icmp_seq=3 ttl=64 time=100 ms
64 bytes from 192.168.203.40: icmp_seq=4 ttl=64 time=100 ms
--- 192.168.203.40 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 100.480/120.839/201.008/40.085 ms, pipe 2
[oracle@dg10g1 flash_recovery_area]$ date
Thu Oct 21 12:14:12 PHT 2010
[oracle@dg10g1 flash_recovery_area]$ scp flash_recovery_area.tar oracle@192.168.203.40:/u02/flash_recovery_area/
oracle@192.168.203.40's password:
flash_recovery_area.tar 100% 198MB 620.9KB/s 05:27
[oracle@dg10g1 flash_recovery_area]$ date
Thu Oct 21 12:20:58 PHT 2010
}}}
-- configured with 10Mbps (1.25MB/s) with 10ms latency
tc qdisc change dev eth0 parent 1: handle 10: netem delay 10ms
[root@dg10g1 ~]# tc qdisc show
qdisc tbf 1: dev eth0 rate 10Mbit burst 1250Kb lat 9.8ms
qdisc netem 10: dev eth0 parent 1: limit 1000 delay 10.0ms
{{{
[oracle@dg10g1 flash_recovery_area]$ ping 192.168.203.40
PING 192.168.203.40 (192.168.203.40) 56(84) bytes of data.
64 bytes from 192.168.203.40: icmp_seq=0 ttl=64 time=20.4 ms
64 bytes from 192.168.203.40: icmp_seq=1 ttl=64 time=9.58 ms
64 bytes from 192.168.203.40: icmp_seq=2 ttl=64 time=10.1 ms
64 bytes from 192.168.203.40: icmp_seq=3 ttl=64 time=10.1 ms
64 bytes from 192.168.203.40: icmp_seq=4 ttl=64 time=10.1 ms
--- 192.168.203.40 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 9.586/12.093/20.428/4.174 ms, pipe 2
[oracle@dg10g1 flash_recovery_area]$
[oracle@dg10g1 flash_recovery_area]$ date ; scp flash_recovery_area.tar oracle@192.168.203.40:/u02/flash_recovery_area/ ; date
Thu Oct 21 12:26:22 PHT 2010
oracle@192.168.203.40's password:
flash_recovery_area.tar 100% 198MB 1.2MB/s 02:53
Thu Oct 21 12:29:19 PHT 2010
}}}
-- configured with 10Mbps (1.25MB/s) with 1ms latency
tc qdisc change dev eth0 parent 1: handle 10: netem delay 1ms
[root@dg10g1 ~]# tc qdisc show
qdisc tbf 1: dev eth0 rate 10Mbit burst 1250Kb lat 9.8ms
qdisc netem 10: dev eth0 parent 1: limit 1000 delay 999us
{{{
[root@dg10g1 ~]# ping 192.168.203.40
PING 192.168.203.40 (192.168.203.40) 56(84) bytes of data.
64 bytes from 192.168.203.40: icmp_seq=0 ttl=64 time=1.06 ms
64 bytes from 192.168.203.40: icmp_seq=1 ttl=64 time=1.20 ms
64 bytes from 192.168.203.40: icmp_seq=2 ttl=64 time=1.16 ms
64 bytes from 192.168.203.40: icmp_seq=3 ttl=64 time=1.71 ms
64 bytes from 192.168.203.40: icmp_seq=4 ttl=64 time=1.15 ms
64 bytes from 192.168.203.40: icmp_seq=5 ttl=64 time=1.55 ms
64 bytes from 192.168.203.40: icmp_seq=6 ttl=64 time=1.17 ms
64 bytes from 192.168.203.40: icmp_seq=7 ttl=64 time=1.37 ms
64 bytes from 192.168.203.40: icmp_seq=8 ttl=64 time=1.15 ms
[oracle@dg10g1 flash_recovery_area]$ date ; scp flash_recovery_area.tar oracle@192.168.203.40:/u02/flash_recovery_area/ ; date
Thu Oct 21 12:35:04 PHT 2010
oracle@192.168.203.40's password:
flash_recovery_area.tar 100% 198MB 1.2MB/s 02:53
Thu Oct 21 12:38:00 PHT 2010
}}}
-- configured with 10Mbps (1.25MB/s) with 1ms latency (root tbf latency lowered to 1ms as well)
tc qdisc change dev eth0 root handle 1: tbf rate 10000kbit burst 10000kbit latency 1ms
[root@dg10g1 ~]# tc qdisc show
qdisc tbf 1: dev eth0 rate 10Mbit burst 1250Kb lat 978us
qdisc netem 10: dev eth0 parent 1: limit 1000 delay 999us
{{{
[root@dg10g1 ~]# ping 192.168.203.40
PING 192.168.203.40 (192.168.203.40) 56(84) bytes of data.
64 bytes from 192.168.203.40: icmp_seq=0 ttl=64 time=1.05 ms
64 bytes from 192.168.203.40: icmp_seq=1 ttl=64 time=2.22 ms
64 bytes from 192.168.203.40: icmp_seq=2 ttl=64 time=1.14 ms
64 bytes from 192.168.203.40: icmp_seq=3 ttl=64 time=1.23 ms
64 bytes from 192.168.203.40: icmp_seq=4 ttl=64 time=2.46 ms
64 bytes from 192.168.203.40: icmp_seq=5 ttl=64 time=1.76 ms
64 bytes from 192.168.203.40: icmp_seq=6 ttl=64 time=2.81 ms
64 bytes from 192.168.203.40: icmp_seq=7 ttl=64 time=2.98 ms
64 bytes from 192.168.203.40: icmp_seq=8 ttl=64 time=2.98 ms
[oracle@dg10g1 flash_recovery_area]$ date ; scp flash_recovery_area.tar oracle@192.168.203.40:/u02/flash_recovery_area/ ; date
Thu Oct 21 12:40:21 PHT 2010
oracle@192.168.203.40's password:
flash_recovery_area.tar 100% 198MB 1.2MB/s 02:53
Thu Oct 21 12:43:16 PHT 2010
}}}
References:
http://www.linuxfoundation.org/collaborate/workgroups/networking/netem
http://fedoraforum.org/forum/showthread.php?t=243272
http://henrydu.com/blog/how-to/simulate-a-slow-link-by-linux-bridge-123.html
http://mywiki.ncsa.uiuc.edu/wiki/Tips_and_Tricks#How_to_Simulate_a_Slow_Network
Peoplesoft MAA
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/availability/maa-peoplesoft-bestpractices-134154.pdf
Data Guard Implications of NOLOGGING operations from PeopleTools 8.48
http://blog.psftdba.com/2007/06/stuff-changes.html
PeopleSoft for the Oracle DBA
https://docs.google.com/viewer?url=http://www.atloaug.org/presentations/PeopleSoftDBARiley200504.ppt&pli=1
Reducing PeopleSoft Downtime Using a Local Standby Database
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/availability/maa-peoplesoft-local-standby-128609.pdf
Batch Processing in Disaster Recovery Configurations
https://docs.google.com/viewer?url=http://www.hitachi.co.jp/Prod/comp/soft1/oracle/pdf/OBtecinfo-08-008.pdf <-- uses tc's (iproute rpm) Token Bucket Filter (TBF) to limit output
A whitepaper on workload based performance management for PeopleSoft and DB2 on z/OS
https://docs.google.com/viewer?url=http://www.hewittandlarsen.com/_documents/WLM/WLM%2520for%2520PS.pdf
Securing Sensitive Data in PeopleSoft Applications
https://docs.google.com/viewer?url=http://www.ingrian.com/resources/sol_briefs/peoplesoft-sb.pdf
My PeopleSoft Disaster Recovery Adventure
http://www.erpassociates.com/peoplesoft-corner-weblog/peoplesoft/my-peoplesoft-disaster-recovery-adventure.html
Excessive redo
http://tech.groups.yahoo.com/group/psftdba/message/4030
http://tech.groups.yahoo.com/group/psftdba/message/4273
http://gasparotto.blogspot.com/2010/06/goldengate-database-for-peoplesoft.html
http://www.freelists.org/post/oracle-l/PeopleSoft-and-Logical-Standby
http://www.pythian.com/news/17127/redo-transport-compression/
http://el-caro.blogspot.com/2006/11/archivelog-compression.html
http://download.oracle.com/docs/cd/B14099_19/core.1012/b14003/sshpinfo.htm
http://www.oracle.com/technetwork/database/features/availability/dataguardnetwork-092224.html
http://jarneil.wordpress.com/2007/11/21/protecting-oracle-redo-transport/
Implementing SSH port forwarding with Data Guard Doc ID: Note:225633.1
http://sdt.sumida.com.cn:8080/cs/blogs/wicky/archive/2006/10/30/448.aspx
Redo compression
Redo Transport Compression in a Data Guard Environment [ID 729551.1]
Enabling Encryption for Data Guard Redo Transport [ID 749947.1]
MAA - Data Guard Redo Transport and Network Best Practices [ID 387174.1]
Oracle 10g R2 and 11g R1 Database Feature Support Summary [ID 778861.1]
Changing the network used by the Data Guard Broker for redo transport [ID 730361.1]
Oracle Data Guard and SSH [ID 751528.1] <-- the announcement
Troubleshooting 9i Data Guard Network Issues [ID 241925.1]
Manual Standby Database under Oracle Standard Edition
http://goo.gl/TvMO7
-- CERTIFICATION, PRE-REQ
Certification and Prerequisites for Oracle DataGuard
Doc ID: Note:234508.1
-- FAQ
Data Guard Knowledge Browser Product Page [ID 267955.1]
11gR1 Dataguard Content
Doc ID: 798974.1
10gR2 Dataguard Content
Doc ID: 739396.1
-- MIXED ENVIRONMENT
Data Guard Support for Heterogeneous Primary and Standby Systems in Same Data Guard Configuration
Doc ID: 413484.1
Role Transitions for Data Guard Configurations Using Mixed Oracle Binaries
Doc ID: 414043.1
http://www.freelists.org/post/oracle-l/Hetergenous-Dataguard
-- ARCHIVELOG MAINTENANCE
Maintenance Of Archivelogs On Standby Databases [ID 464668.1]
RMAN Best Practices - Log Maintenance, RMAN Configuration Best Practices Setup Backup Management Policies http://www.oracle.com/technetwork/database/features/availability/298772-132349.pdf
Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]
http://martincarstenbach.wordpress.com/2009/10/08/archivelog-retention-policy-changes-in-rman-11g/
RMAN backups in Max Performance/Max Availability Data Guard Environment [ID 331924.1]
-- MAINTENANCE
Using RMAN Effectively In A Dataguard Environment. [ID 848716.1]
-- RAC DATA GUARD
MAA - Creating a Single Instance Physical Standby for a RAC Primary [ID 387339.1]
MAA - Creating a RAC Physical Standby for a RAC Primary [ID 380449.1]
MAA - Creating a RAC Logical Standby for a RAC Primary 10gr2 [ID 387261.1]
Usage, Benefits and Limitations of Standby Redo Logs (SRL)
Doc ID: Note:219344.1
Setup and maintenance of Data Guard Broker using DGMGRL
Doc ID: Note:201669.1
9i Data Guard FAQ
Doc ID: Note:233509.1
Migrating to RAC using Data Guard
Doc ID: Note:273015.1
Data Guard 9i Creating a Logical Standby Database
Doc ID: Note:186150.1
Reinstating a Logical Standby Using Backups Instead of Flashback Database
Doc ID: Note:416314.1
WAITEVENT: "log file sync" Reference Note
Doc ID: Note:34592.1
Standby Redo Logs are not Created when Creating a 9i Data Guard DB with RMAN
Doc ID: Note:185076.1
Oracle10g: Data Guard Switchover and Failover Best Practices
Doc ID: Note:387266.1
Script to Collect Data Guard Physical Standby Diagnostic Information
Doc ID: Note:241438.1
Script to Collect Data Guard Primary Site Diagnostic Information
Doc ID: Note:241374.1
Creating a 9i Data Guard Database with RMAN (Recovery Manager)
Doc ID: Note:183570.1
Upgrading to 10g with a Physical Standby in Place
Doc ID: Note:278521.1
Script to Collect Data Guard Logical Standby Table Information
Doc ID: Note:269954.1
Comparitive Study between Oracle Streams and Oracle Data Guard
Doc ID: Note:300223.1
Creating a 10g Data Guard Physical Standby on Linux
Doc ID: Note:248382.1
9i Data Guard Primary Site and Network Configuration Best Practices
Doc ID: Note:240874.1
The Gains and Pains of Nologging Operations
Doc ID: Note:290161.1
How I make a standby database with Oracle Database Standard Edition
Doc ID: Note:432514.1
Data Guard Gap Detection and Resolution
Doc ID: Note:232649.1
Steps To Setup Replication Using Oracle Streams
Doc ID: Note:224255.1
How To Setup Schema Level Streams Replication
Doc ID: Note:301431.1
Installing and Using Standby Statspack in 11gR1
Doc ID: Note:454848.1
Recovering After Loss of Redo Logs
Doc ID: Note:392582.1
Hardware Assisted Resilient Data H.A.R.D
Doc ID: Note:227671.1
A Study of Non-Partitioned NOLOGGING DML/DDL on Primary/Standby Data Dictionary
Doc ID: Note:150694.1
Extracting Data from Redo Logs Is Not A Supported Interface
Doc ID: Note:97080.1
-- RMAN - create physical standby
Step By Step Guide To Create Physical Standby Database Using RMAN
Doc ID: Note:469493.1
Creating a Data Guard Database with RMAN (Recovery Manager) using Duplicate Command
Doc ID: Note:183570.1
Creating a Standby Database using RMAN (Recovery Manager)
Doc ID: Note:118409.1
Steps To Create Physical Standby Database
Doc ID: Note:736863.1
-- SWITCHOVER, FAILOVER
Oracle10g: Data Guard Switchover and Failover Best Practices
Doc ID: Note:387266.1
Are Virtual IPs required for Data Guard?
http://blog.trivadis.com/blogs/yannneuhaus/archive/2008/02/06/are-virtual-ips-required-for-data-guard.aspx
Steps to workaround issue described in Alert 308698.1
Doc ID: 368276.1
-- CASCADED STANDBY DATABASES
Cascaded Standby Databases
Doc ID: Note:409013.1
-- LOG APPLY
Applied Archived Logs Not Getting Updated on the Standby Database
Doc ID: Note:197032.1
-- RESIZE DATAFILE
Standby Database Behavior when a Datafile is Resized on the Primary Database
Doc ID: Note:123883.1
-- UPGRADE WITH DATA GUARD
Upgrading to 10g with a Physical Standby in Place
Doc ID: Note:278521.1
Upgrading to 10g with a Logical Standby in Place
Doc ID: Note:278108.1
Upgrading Oracle Applications 11i Database to 10g with Physical Standby in Place [ID 340859.1]
-- PATCH, PATCHSET
Note 187242.1 - applying a "patch or patch set" to Data Guard systems
Applying Patchset with a 10g Physical Standby in Place (Doc ID 278641.1)
-- NETWORK PERFORMANCE
Network Bandwidth Implications of Oracle Data Guard
http://www.oracle.com/technology/deploy/availability/htdocs/dataguardnetwork.htm
High ARCH wait on SENDREQ wait events found in statspack report.
Doc ID: Note:418709.1
Refining Remote Archival Over a Slow Network with the ARCH Process
Doc ID: Note:260040.1
Troubleshooting 9i Data Guard Network Issues
Doc ID: Note:241925.1
-- REDO TRANSPORT
Redo Corruption Errors During Redo Transport
Doc ID: 386417.1
-- LOGICAL STANDBY
Creating a Logical Standby with Minimal Production Downtime
Doc ID: 278371.1
-- CLONE PHYSICAL STANDBY, RMAN PHYSICAL STANDBY
How I Created a Test Database with the RMAN Backup of the Physical Standby Database
Doc ID: 428014.1
How to create a non ASM physical standby from an ASM primary [ID 790327.1]
-- DUPLICATE
Creating a Data Guard Database with RMAN (Recovery Manager) using Duplicate Command [ID 183570.1]
-- MINIMAL DOWNTIME
How I Create a Physical Standby Database for a 24/7 Shop
Doc ID: 580004.1
-- STARTUP
Data Guard 9i Data Guard Remote Process Startup Failed
Doc ID: Note:204848.1
-- DATA GUARD 8i
Data Guard 8i Setting up SSH using SSH-AGENT
Doc ID: Note:136377.1
How to Create a Oracle 8i Standby Database
Doc ID: Note:70233.1
Data Guard 8i Setup and Implementation
Doc ID: Note:132991.1
-- CREATE DATA GUARD CONFIGURATION
Creating a configuration using Data Guard Manager
Doc ID: Note:214071.1
Creating a Data Guard Configuration
Doc ID: Note:180031.1
Creating a Standby Database on a new host [ID 374069.1]
-- ROLLING FORWARD
Rolling a Standby Forward using an RMAN Incremental Backup in 10g
Doc ID: 290814.1
How To Calculate The Required Network Bandwidth Transfer Of Archivelogs In Dataguard Environments
Required bandwidth = ((Redo rate bytes per sec. / 0.7) * 8) / 1,000,000 = bandwidth in Mbps (the 0.7 factor allows ~30% network overhead)
Note that if your primary database is a RAC database, you must run the Statspack snapshot on every RAC instance. Then, for each Statspack snapshot, sum the "Redo Size Per Second" values of all instances to obtain the net peak redo generation rate for the primary database. Remember that each RAC node generates its own redo and independently sends it to the standby database, which is why the per-node rates are summed.
Doc ID: 736755.1
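A worked example of the formula, using a made-up peak redo rate of 1 MB/s:
{{{
Peak redo rate     = 1,048,576 bytes/sec
Required bandwidth = ((1,048,576 / 0.7) * 8) / 1,000,000
                   = (1,497,966 * 8) / 1,000,000
                   = ~11.98 Mbps
}}}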
Creating physical standby using RMAN duplicate without shutting down the primary
Doc ID: 789370.1
Effect of changing DBID using NID of Primary database when Physical standby in place - ORA-16012
Doc ID: 829095.1
Note 219344.1 - Usage, Benefits and Limitations of Standby Redo Logs (SRL)
TRANSPORT: Data Guard Protection Modes
Doc ID: 239100.1
Will a Standby Database in Read Only Mode Apply Archived Log Files?
Doc ID: 136830.1
Note 330103.1 Ext/Mod How to Move Asm Database Files From one Diskgroup To Another
Moving Files Between Asm Disk Groups For Rac Primary/Standby Configuration
Doc ID: 601643.1
How to Rename a Datafile in Primary Database When in Dataguard Configuration
Doc ID: 733796.1
Hybrid Configurations using Data Guard and Remote-Mirroring
Doc ID: 804623.1
http://www.oracle.com/technology/deploy/availability/htdocs/DataGuardRemoteMirroring.html
http://www.oracle.com/technology/deploy/availability/htdocs/dataguardprotection.html
What is the Database_role in Previous Version Equivalency for 9.2.X And 10g V$Database view
Doc ID: 313130.1
Is using Transportable Tablespaces method supported in DataGuard?
Doc ID: 471293.1
How to transport a Tablespace to Databases in a Physical Standby Configuration
Doc ID: 467752.1
Note 343424.1 - Creating a 10gr2 Data Guard Physical Standby database with Real-Time apply
Note 388431.1 - Creating a Duplicate Database on a New Host.
Monitoring Physical Standby Progress
Doc ID: 243709.1
Redo Corruption Errors During Redo Transport
Doc ID: 386417.1
Certification and Prerequisites for Oracle DataGuard
Doc ID: 234508.1
Special Considerations About Physical Standby Databases
Doc ID: 236659.1
V$ARCHIVED_LOG.APPLIED is Not Consistent With Standby Progress
Doc ID: 263994.1
How to Use Standby Database in Read-Only Mode and Managed Recovery Mode at the Same Time
Doc ID: 177859.1
Redo Transport Compression in a Data Guard Environment
Doc ID: 729551.1
Data Guard and Network Disconnects
Doc ID: 255959.1
Oracle Data Guard and SSH
Doc ID: 751528.1
Developer and DBA Tips to Optimize SQL Apply
Doc ID: 603361.1
Broker and SQL*Plus
Doc ID: 744396.1
Refining Remote Archival Over a Slow Network with the ARCH Process
Doc ID: 260040.1
How To Open Physical Standby For Read Write Testing and Flashback
Doc ID: 805438.1
Exporting Transportable Tablespace Fails from a Read-only Standby Database
Doc ID: 252866.1
What Does Database in Limbo Mean When Seen in the Alert File?
Doc ID: 165676.1
Standby Database Has Datafile In Recover Status
Doc ID: 270043.1
Oracle Label Security Packages affect Data Guard usage of Switchover and connections to Primary Database
Doc ID: 265192.1
Rman Backups On Standby Having Impact On Dataguard Max_availability Mode
Doc ID: 259946.1
Dataguard-Automate Removal Of Archives Once Applied Against Physical Standby
Doc ID: 260874.1
Alter Database Create Datafile
Doc ID: 2103994.6
Is my Standby Database Working ?
Doc ID: 136776.1
-- ORA-1031, HEARTBEAT FAILED TO CONNECT TO STANDBY
Transport : Remote Archival to Standby Site Fails with ORA-01031
Doc ID: 353976.1
ORA-1031 for Remote Archive Destination on Primary
Doc ID: 733793.1
-- ORA-16191 -PRIMARY LOG SHIPPING CLIENT NOT LOGGED ON STANDBY
Changing SYS password of PRIMARY database when STANDBY in place to avoid ORA-16191
Doc ID: 806703.1
DATA GUARD TRANSPORT: ORA-01017 AND ORA-16191 WHEN SEC_CASE_SENSITIVE_LOGON=FALSE
Doc ID: 815664.1
DATA GUARD LOG SHIPPING FAILS WITH ERROR ORA-16191 IN 11G
Doc ID: 462219.1
-- ORA-1017 & ORA-2063, DATABASE LINK
Database Link from 10g to 11g fails with ORA-1017 & ORA-2063
Doc ID: 473716.1
ORA-1017 : Invalid Username/Password; Logon Denied. When Attempting to Change An Expired Password.
Doc ID: 742961.1
-- EBUSINESS SUITE R12
Case Study : Configuring Standby Database(Dataguard) on R12 using RMAN Hot Backup
Doc ID: 753241.1
-- REDO LOG REPOSITORY / PSEUDO STANDBY
Data Guard Archived Redo Log Repository Example
Doc ID: 434164.1
-- RMAN ON STANDBY
Our Experience in Creating a clone database from RMAN backup of a physical standby database without using a recovery catalog
Doc ID: 467525.1
-- FLASHBACK
How To Flashback Primary Database In Standby Configuration [ID 728374.1]
-- STANDBY REDO LOGS
Usage, Benefits and Limitations of Standby Redo Logs (SRL)
Doc ID: 219344.1
Data Guard 9i Setup with Guaranteed Protection Mode <-- not yet read.. but good stuff
Doc ID: 150584.1
Online Redo Logs on Physical Standby <-- add, drop, drop standby logfile
Doc ID: 740675.1
-- DATA GUARD CONTROLFILE
CORRUPTION IN SNAPSHOT CONTROLFILE
Doc ID: 268719.1
Steps to recreate a Physical Standby Controlfile
Doc ID: 459411.1
Step By Step Guide On How To Recreate Standby Control File When Datafiles Are On ASM And Using Oracle Managed Files
Doc ID: 734862.1
-- DATA GUARD TROUBLESHOOTING
Dataguard Information gathering to upload with the Service Requests
Doc ID: 814417.1
10gR2 Dataguard Content <-- ALL ABOUT ADMINISTRATION OF DATA GUARD
Doc ID: 739396.1
Script to Collect Data Guard Logical Standby Table Information
Doc ID: 269954.1
Creating a 10gr2 Data Guard Physical Standby database with Real-Time apply
Doc ID: 343424.1
How to Add/Drop/Resize Redo Log with Physical Standby in place.
Doc ID: 473442.1
Online Redo Logs on Physical Standby
Doc ID: 740675.1
-- DATA GUARD REMOVE
How to Remove Standby Configuration from Primary Database
Doc ID: 733794.1
-- BROKER
Setup and maintenance of Data Guard Broker using DGMGRL
Doc ID: 201669.1
Creating a configuration using Data Guard Manager
Doc ID: 214071.1
10g DGMGRL CLI Configuration
Doc ID: 260112.1
Data Guard Broker and SQL*Plus
Doc ID: 783445.1
Data Guard Switchover Not Completed Successfully <-- 9i issue
Doc ID: 308158.1
-- BROKER BUG
Broker shutdown can lead to ora-600 [kjcvg04] in RAC ENV.
Doc ID: 840627.1
-- FAILSAFE
How to Use Oracle Failsafe With Oracle Data Guard for RDBMS versions 10g
Doc ID: 373204.1
-- FAST START FAILOVER
IMPLEMENTING FAST-START FAILOVER IN 10GR2 DATAGUARD BROKER ENVIRONMENT
Doc ID: 359555.1
-- DATA GUARD BEST PRACTICE
Oracle10g: Data Guard Switchover and Failover Best Practices
Doc ID: 387266.1
Data Guard Broker High Availability
Doc ID: 275977.1
''From "Oracle Data Guard 11g Handbook"''
<<<
If, however, you have chosen Maximum Availability or Maximum Protection mode, then that
latency is going to have a big effect on your production throughput. Several calculations can be
used to determine latency, most of which try to include the latency introduced by the various
hardware devices at each end. But since the devices used in the industry all differ, it is difficult to
determine how long the network has to be to maintain a 1 millisecond (ms) RTT. A good rule of
thumb (in a perfect world) is that a 1 ms RTT is about 33 miles (or 53 km). This means that if you
want to keep your production impact down to the 4 percent range, you will need to keep the
latency down to 10ms, or 300 miles (in a perfect world, of course). You will have to examine, test,
and evaluate your network to see if it actually matches up to these numbers. Remember that
latency depends on the size of the packet, so don’t just ping with 56 bytes, because the redo you
are generating is a lot bigger than that..
<<<
''Rule Of Thumb... taken from "Oracle Data Guard 11g Handbook"''
<<<
1 mile = 1.609km
normal "ping" command = 56bytes
In a perfect world ===> ''1ms (ping RTT) = 33miles = 53km (53.1km)''
If you want to keep the production impact to ''4%''...then keep the latency down to ''10ms or 300miles''
<<<
''Tests taken from "Oracle Data Guard 11g Handbook"'':
<<<
Output from a ping going from Texas to New Hampshire (''about 1990 miles'') at night, when nothing else is going on using ''56 bytes'' and ''64,000 bytes''
''==> @56bytes ping''
ping -c 10 <hostname>
ping average = 49.122
= 1990/49.122
= ''1ms = 40miles''
''==> @64000bytes ping''
ping -c 10 -s 64000 <hostname>
ping average = 66.82
= 1990/66.82
= ''1ms = 29.7miles'' but in the book it is 27miles
The small packet is getting about 40 miles to the millisecond,
but the larger packet is getting around only 27 miles per millisecond. Still not bad and right around
our guess of about 33 miles to the millisecond. So given this network, you could potentially go
270 miles and keep it within the 4 percent range, depending on the redo generation rate and the
bandwidth, which are not shown here. Of course, you would want to use a more reliable and
detailed tool to determine your network latency—something like traceroute.
These examples are just that, examples. A lot of things affect your ability to ship redo across the
network. As we have shown, these include the overhead caused by network acknowledgments,
network latency, and other factors. All of these will be unique to your workload and need to
be tested.
<<<
For Batch jobs
“Batch Processing in Disaster Recovery Configurations - Best Practices for Oracle Data Guard” (http://goo.gl/hHhK)
From Frits
<<<
You mentioned a 3.5T tablespace. If your storage connection is 1GbE (as an example) and you are able to use the entire bandwidth, restoring that tablespace should take at least:
3.5 TB * 1024 = 3,584 GB * 1024 = 3,670,016 MB
1 Gigabit / 8 = 125 MB/s
3,670,016 / 125 = 29,360 seconds to transport / 60 = 489 minutes / 60 = 8.15 hours
<<<
http://gjilevski.wordpress.com/2010/07/24/managing-data-guard-11g-r2-with-oem-11g/
State of the Art in Database Replication
https://docs.google.com/viewer?url=http://gorda.di.uminho.pt/library/wp1/GORDA-D1.1-V1.2-p.pdf
Improving Performance in Replicated Databases through Relaxed Coherency
https://docs.google.com/viewer?url=http://reference.kfupm.edu.sa/content/i/m/improving_performance_in_replicated_data_60451.pdf
ETL Microservices using Kafka for Fast Big Data - DataTorrent AppFactory https://www.youtube.com/watch?v=4r11a65wY28
http://www.idevelopment.info/data/Oracle/DBA_tips/LOBs/
DDL commands for LOBs: http://www.idevelopment.info/data/Oracle/DBA_tips/LOBs/LOBS_2.shtml
-- ''LONG''
How to overcome a few restrictions of LONG data type [ID 205288.1]
How to Copy Data from a Table with a LONG Column into an Existing Table [ID 119489.1]
http://www.orafaq.com/wiki/SQL*Plus_FAQ
http://www.orafaq.net/wiki/LONG_RAW
http://www.orafaq.net/wiki/LONG
http://arjudba.blogspot.com/2008/07/char-varchar2-long-etc-datatype-limits.html
http://arjudba.blogspot.com/2008/06/how-to-convert-long-data-type-to-lob.html <-- see sketch below
http://www.orafaq.com/forum/t/119648/0/
http://articles.techrepublic.com.com/5100-10878_11-6177742.html# <----- nice explanation
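A hedged sketch of the two common LONG-to-LOB approaches the links above discuss (table/column names are made up):
{{{
-- in-place conversion (9i+): the LONG column becomes a CLOB
ALTER TABLE old_docs MODIFY (doc_text CLOB);

-- or copy into a new table with TO_LOB (valid only inside INSERT ... SELECT)
CREATE TABLE new_docs (id NUMBER, doc_text CLOB);
INSERT INTO new_docs SELECT id, TO_LOB(doc_text) FROM old_docs;
}}}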
-- ''BLOB''
Summary Note Index for BasicFiles(LOB's/BLOB's/CLOB's/NCLOB's,BFILES) and SecureFiles [ID 198160.1]
Export and Import of Table with LOB Columns (like CLOB and BLOB) has Slow Performance [ID 281461.1]
Troubleshooting Guide (TSG) - Large Objects (LOBs) [ID 846562.1]
LOBS - Storage, Redo and Performance Issues [ID 66431.1]
ORA-01555 And Other Errors while Exporting Table With LOBs, How To Detect Lob Corruption. [ID 452341.1]
LOBs and ORA-01555 troubleshooting [ID 846079.1]
How to determine the actual size of the LOB segments and how to free the deleted/unused space above/below the HWM [ID 386341.1]
How to move LOB Data to Another Tablespace [ID 130814.1]
-- ''NOT NULL INTERVAL DAY(5) TO SECOND(1)''
to convert to seconds http://www.dbforums.com/oracle/1044035-converting-interval-day-second-integer.html
to convert to days,hours,mins http://community.qlikview.com/thread/38211
example
{{{
-- TO VIEW RETENTION INFORMATION
set lines 300
col snap_interval format a30
col retention format a30
select DBID, SNAP_INTERVAL,
EXTRACT(DAY FROM SNAP_INTERVAL) ||
' days, ' || EXTRACT (HOUR FROM SNAP_INTERVAL) ||
' hours, ' || EXTRACT (MINUTE FROM SNAP_INTERVAL) ||
' minutes' as snap_interval
,
((TRUNC(SYSDATE) + SNAP_INTERVAL - TRUNC(SYSDATE)) * 86400)/60 AS SNAP_INTERVAL_MINS
,
RETENTION,
((TRUNC(SYSDATE) + RETENTION - TRUNC(SYSDATE)) * 86400)/60 AS RETENTION_MINS
,TOPNSQL from dba_hist_wr_control
where dbid in (select dbid from v$database);
}}}
''Timestamp data type''
{{{
DATE and TIMESTAMP Datatypes
http://www.databasejournal.com/features/oracle/article.php/2234501/A-Comparison-of-Oracles-DATE-and-TIMESTAMP-Datatypes.htm
http://psoug.org/reference/timestamp.html
}}}
[[cloud data warehouse, cloud dw, cloud datawarehouse]]
Data Warehouse page
http://www.oracle.com/us/solutions/datawarehousing/index.html
Database focus areas
http://www.oracle.com/technetwork/database/focus-areas/index.html
Parallelism and Scalability for Data Warehousing
http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/dbbi-tech-info-sca-090608.html
DW and BI page - Oracle Database for Business Intelligence and Data Warehousing
http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/index.html
Data Warehousing - Best Practices page
http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/dbbi-tech-info-best-prac-092320.html
''Best Practices for Data Warehousing on the Oracle Database Machine X2-2 [ID 1297112.1]''
Best practices for a Data Warehouse on Oracle Database 11g http://www.uet.vnu.edu.vn/~thuyhq/Courses_PDF/$twp_dw_best_practies_11g11_2008_09.pdf
http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-dw-best-practies-11g11-2008-09-132076.pdf
2 day DW guide http://docs.oracle.com/cd/B28359_01/server.111/b28314.pdf
DATA WAREHOUSING BIG DATA "made" EASY https://www.youtube.com/watch?v=DeExbclijPg
1keydata tutorial http://www.1keydata.com/datawarehousing/datawarehouse.html
PX masterclass https://www.slideshare.net/iarsov/parallel-execution-with-oracle-database-12c-masterclass
http://docs.oracle.com/cd/B28359_01/server.111/b28314/tdpdw_bandr.htm
''Data Warehouse Best Practices''
<<<
http://blogs.oracle.com/datawarehousing/2010/05/data_warehouse_best_practices.html
http://structureddata.org/2011/06/15/real-world-performance-videos-on-youtube-oltp/ <-- VIDEO
http://structureddata.org/2011/06/15/real-world-performance-videos-on-youtube-data-warehousing/ <-- VIDEO
http://www.oracle.com/technology/products/bi/db/11g/dbbi_tech_info_best_prac.html
<<<
''Parallelism and Scalability for Data Warehousing''
<<<
http://www.oracle.com/technology/products/bi/db/11g/dbbi_tech_info_sca.html
<<<
''Whitepapers''
{{{
http://www.oracle.com/technology/products/bi/db/11g/pdf/twp_dw_best_practies_11g11_2008_09.pdf
http://www.oracle.com/technology/products/bi/db/11g/pdf/twp_bidw_parallel_execution_11gr1.pdf
}}}
Dion Cho
{{{
http://dioncho.wordpress.com/2009/01/23/misunderstanding-on-top-sqls-of-awr-repository/
http://dioncho.wordpress.com/2009/02/20/how-was-my-parallel-query-executed-last-night-awr/
http://dioncho.wordpress.com/2009/02/16/the-most-poweful-way-to-monitor-parallel-execution-vpq_tqstat/
http://dioncho.wordpress.com/2009/03/12/automating-tkprof-on-parallel-slaves/
Following is a small test case to demonstrate how Oracle captures the top SQLs.
-- create objects
create table parallel_t1(c1 int, c2 char(100));
insert into parallel_t1
select level, 'x'
from dual
connect by level <= 1000000
;
commit;
-- generate one parallel query
select /*+ parallel(parallel_t1 4) */ count(*) from parallel_t1;
or
-- generate many TOP sqls. here we generate 100 top sqls which do a full scan on table parallel_t1
set heading off
set timing off
set feedback off
spool select2.sql
select 'select /*+ top_sql_' || mod(level,100) || ' */ count(*) from parallel_t1;'
from dual
connect by level <= 10000;
spool off
ed select2
-- check the select2.sql
-- Now we capture the SQLs
exec dbms_workload_repository.create_snapshot;
@select2
exec dbms_workload_repository.create_snapshot;
-- AWR Report would show that more than 30 top sqls are captured
@?/rdbms/admin/awrrpt
}}}
Jonathan Lewis
{{{
http://jonathanlewis.wordpress.com/2010/01/03/pseudo-parallel/
http://jonathanlewis.wordpress.com/2008/11/05/px-buffer/
http://jonathanlewis.wordpress.com/2007/06/25/qb_name/
http://jonathanlewis.wordpress.com/2007/05/29/autoallocate-and-px/
http://jonathanlewis.wordpress.com/2007/03/14/how-parallel/
http://jonathanlewis.wordpress.com/2007/02/19/parallelism-and-cbo/
http://jonathanlewis.wordpress.com/2007/01/11/rescresp/
http://jonathanlewis.wordpress.com/2006/12/28/parallel-execution/
}}}
Doug
{{{
http://oracledoug.com/serendipity/index.php?/archives/774-Direct-Path-Reads.html
}}}
Greg Rahn
{{{
http://structureddata.org/category/oracle/parallel-execution/
}}}
Riyaj Shamsudeen
{{{
RAC, parallel query and udpsnoop
http://orainternals.wordpress.com/2009/06/20/rac-parallel-query-and-udpsnoop/
}}}
Sheeri Cabral
{{{
Data Warehousing Best Practices: Comparing Oracle to MySQL
http://www.pythian.com/news/15157/data-warehousing-best-practices-comparing-oracle-to-mysql-part-1-introduction-and-power/
http://www.pythian.com/news/15167/data-warehousing-best-practices-comparing-oracle-to-mysql-part-2-partitioning/
}}}
-- Oracle Optimized Warehouse
Oracle Exadata Best Practices (Doc ID 757552.1)
Oracle Optimized Warehouse for HP (Doc ID 779222.1)
HP Oracle Exadata Performance Best Practices (Doc ID 759429.1)
Oracle Sun Database Machine Setup/Configuration Best Practices (Doc ID 1067527.1)
Oracle Sun Database Machine Performance Best Practices (Doc ID 1067520.1)
Oracle Sun Database Machine Application Best Practices for Data Warehousing (Doc ID 1094934.1)
HP Exadata Setup/Configuration Best Practices (Doc ID 757553.1)
http://www.emc.com/collateral/hardware/white-papers/h6015-oracle-data-warehouse-sizing-dmx-4-dell-wp.pdf
-- PARALLELISM
Tips to Reduce Waits for "PX DEQ CREDIT SEND BLKD" at Database Level (Doc ID 738464.1)
Parallel Direct Load Insert DML (Doc ID 146631.1)
Using Parallel Execution (Doc ID 203238.1)
Parallel Capabilities of Oracle Data Pump (Doc ID 365459.1)
How to Refresh a Materialized View in Parallel (Doc ID 577870.1)
FAQ's about Parallel/Noparallel Hints. (Doc ID 263153.1)
SQL statements that run in parallel with NO_PARALLEL hints (Doc ID 267330.1)
-- PX SETUP
Where to find Information about Parallel Execution in the Oracle Documentation (Doc ID 184417.1)
Fundamentals of the Large Pool (Doc ID 62140.1)
Health Check Alert: parallel_execution_message_size is not set greater than or equal to the recommended value (Doc ID 957436.1)
Disable Parallel Execution on Session/System Level (Doc ID 235400.1)
-- PARALLELISM ISSUES
Why didn't my parallel query use the expected number of slaves? (Doc ID 199272.1)
Note:196938.1 "Why did my query go parallel?"
-- PARALLELISM SCRIPT
Report for the Degree of Parallelism on Tables and Indexes (Doc ID 270837.1)
Old and new Syntax for setting Degree of Parallelism (Doc ID 260845.1)
Script to map Senderid in PX Wait Event to an Oracle Process (Doc ID 304317.1)
Procedure PqStat to monitor Current PX Queries (Doc ID 240762.1)
Script to map Parallel Execution Server to User Session (Doc ID 344196.1)
Script to map parallel query coordinators to slaves (Doc ID 202219.1)
Script to monitor PX limits from Resource Manager for active sessions (Doc ID 240877.1)
Script to monitor parallel queries (Doc ID 457857.1) <-------------- GOOD STUFF
-- PARALLELISM AND MEMORY
PX Slaves take sometimes a lot of memory (Doc ID 240883.1)
Parallel Execution the Large/Shared Pool and ORA-4031 (Doc ID 238680.1)
-- PX & TRIGGER
Can a PX Be Triggered by an User or an Event Can Trigger the PX (Doc ID 960694.1)
-- PARALLELISM WAIT EVENTS
Parallel Query Wait Events (Doc ID 191103.1)
Statspack Report has PX (Parallel Query) Idle Events shown in Top Waits (Doc ID 353603.1)
WAITEVENT: "PX Deq Credit: send blkd" (Doc ID 271767.1)
WAITEVENT: "PX Deq: Execute Reply" (Doc ID 270916.1)
WAITEVENT: "PX Deq: Execution Msg" Reference Note (Doc ID 69067.1)
WAITEVENT: "PX Deq: Table Q Normal" (Doc ID 270921.1)
WAITEVENT: "PX Deq Credit: need buffer" (Doc ID 253912.1)
Wait Event 'PX qref latch' (Doc ID 240145.1)
WAITEVENT: "PX Deq: Join ACK" (Doc ID 250960.1)
WAITEVENT: "PX Deq: Signal ACK" (Doc ID 257594.1)
WAITEVENT: "PX Deq: Parse Reply" (Doc ID 257596.1)
WAITEVENT: "PX Deq: reap credit" (Doc ID 250947.1)
WAITEVENT: "PX Deq: Msg Fragment" (Doc ID 254760.1)
WAITEVENT: "PX Idle Wait" (Doc ID 257595.1)
WAITEVENT: "PX server shutdown" (Doc ID 250357.1)
WAITEVENT: "PX create server" (Doc ID 69106.1)
-- 10046 TRACE ON PX
Tracing PX session with a 10046 event or sql_trace (Doc ID 242374.1)
Tracing Parallel Execution with _px_trace. Part I (Doc ID 444164.1)
-- PX ERRORS
OERR: ORA-12853 insufficient memory for PX buffers: current %sK, max needed %s (Doc ID 287751.1)
Bug 6981690 - Cursor not shared when running PX query on mounted RAC system (Doc ID 6981690.8)
Bug 4336528 - PQ may be slower than expected (timeouts on "PX Deq: Signal ACK") (Doc ID 4336528.8)
Bug 5023410 - QC can wait on "PX Deq: Join ACK" when slave is available (Doc ID 5023410.8)
Bug 5030215 - Excessive waits on PX Deq Signal ACK when RAC enabled (Doc ID 5030215.8)
Error With Create Session When Invoking PX (Doc ID 782073.1)
Creating Session Failed Within PX (Doc ID 781437.1)
5 minute Delay Observed In Message Processing after RAC reconfiguration (Doc ID 458898.1)
-- KILL PX
The simplest Solution to kill a PX Session at OS Level (Doc ID 738618.1)
-- WEBINARS
Selected Webcasts in the Oracle Data Warehouse Global Leaders Webcast Series (Doc ID 1306350.1)
{{{
parallel_automatic_tuning=false <--- currently set to TRUE which is a deprecated parameter in 10g
parallel_max_servers=64 <--- the current value is just too high, caused by parallel_automatic_tuning
parallel_adaptive_multi_user=false <--- best practice recommends to set this to false to have predictable performance
db_file_multiblock_read_count=64 <--- 1024KB (1MB max IO size) / 16KB blocksize = 64
parallel_execution_message_size=16384 <--- best practice recommends this value (16KB)
}}}
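A hedged sketch of applying those recommendations (values taken from the list above; the static parameters need an instance restart):
{{{
ALTER SYSTEM SET parallel_automatic_tuning=false SCOPE=SPFILE;        -- static, deprecated in 10g
ALTER SYSTEM SET parallel_execution_message_size=16384 SCOPE=SPFILE;  -- static
ALTER SYSTEM SET parallel_max_servers=64 SCOPE=BOTH;
ALTER SYSTEM SET parallel_adaptive_multi_user=false SCOPE=BOTH;
ALTER SYSTEM SET db_file_multiblock_read_count=64 SCOPE=BOTH;
}}}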
http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd
{{{
Christo Kutrovsky
http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd,13
Note that only if you have parallel_automatic_tuning=true then the
buffers are allocated from LARGE_POOL, otherwise (the default) they
come from the shared pool, which may be an issue when you try to
allocate 64kb chunks.
}}}
{{{
Craig Shallahamer
http://shallahamer-orapub.blogspot.com/2010/04/finding-parallelization-sweet-spot-part.html
http://shallahamer-orapub.blogspot.com/2010/04/parallelization-vs-duration-part-2.html
http://shallahamer-orapub.blogspot.com/2010/04/parallelism-introduces-limits-part-3.html
}}}
{{{
Christian Antognini
http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd,22
> alter session force parallel ddl parallel 32;
This should not be necessary. Parallel DDL is enabled by default...
You can check that with the following query:
select pddl_status
from v$session
where sid = sys_context('userenv','sid')
}}}
{{{
PX Deq Credit: send blkd - wait for what?
http://www.asktherealtom.ch/?p=8
PX Deq Credit: send blkd caused by IDE (SQL Developer, Toad, PL/SQL Developer)
http://iamsys.wordpress.com/2010/03/24/px-deq-credit-send-blkd-caused-by-ide-sql-developer-toad-plsql-developer/
http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd
How can I associate the parallel query slaves with the session that's running the query?
http://www.jlcomp.demon.co.uk/faq/pq_proc.html
}}}
{{{
What event are the consumer slaves waiting on?
set linesize 150
col "Wait Event" format a30
select s.sql_id,
px.INST_ID "Inst",
px.SERVER_GROUP "Group",
px.SERVER_SET "Set",
px.DEGREE "Degree",
px.REQ_DEGREE "Req Degree",
w.event "Wait Event"
from GV$SESSION s, GV$PX_SESSION px, GV$PROCESS p, GV$SESSION_WAIT w
where s.sid (+) = px.sid and
s.inst_id (+) = px.inst_id and
s.sid = w.sid (+) and
s.inst_id = w.inst_id (+) and
s.paddr = p.addr (+) and
s.inst_id = p.inst_id (+)
ORDER BY decode(px.QCINST_ID, NULL, px.INST_ID, px.QCINST_ID),
px.QCSID,
decode(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP),
px.SERVER_SET,
px.INST_ID;
}}}
Installing Database Vault in a Data Guard Environment
Doc ID: 754065.1
http://docs.oracle.com/cd/E11882_01/server.112/e23090/dba.htm
http://www.oracle.com/technetwork/database/security/twp-oracle-database-vault-sap-2009-128981.pdf
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r1/prod/security/datavault/datavault2.htm Restricting Command Execution Using Oracle Database Vault
[img[picturename| https://lh6.googleusercontent.com/-5ZeHtRGSEKI/TeVlmgna5XI/AAAAAAAABSo/N4hMYIhLLkc/s800/IMG_4070.JPG]]
Securing Linux Servers - https://www.pluralsight.com/courses/securing-linux-servers
Series: Project Lockdown - A phased approach to securing your database infrastructure
http://www.oracle.com/technetwork/articles/index-087388.html
http://blog.red-database-security.com/2010/09/10/update-of-project-lockdown-released/
http://www.cyberciti.biz/tips/tips-to-protect-linux-servers-physical-console-access.html
-- DoD files
http://www.disa.mil/About/Search-Results?q=oracle&col=iase&s=Search
http://iase.disa.mil/stigs/app_security/database/oracle.html
http://iase.disa.mil/stigs/app_security/database/general.html
http://www.cvedetails.com/vulnerability-list/vendor_id-93/product_id-13824/Oracle-Database-11g.html
http://www.cisecurity.org/
{{{
Pre-req reading materials
Read Chapters 14, 15, and 10 of this book (in that particular order!!!): Beginning_11g_Admin_From_Novice_to_Professional.pdf
to understand why we need to do database health checks and to get an idea of our value to our clients
Alignment to the IT Service Management
There are 10 components of ITSM and these are as follows:
Service Level Management
Financial Management
Service Continuity Management
Capacity Management
Availability Management
Incident Management
Problem Management
Change Management
Configuration Management
Release Management
For simplicity and aligning it to the health check tasks the 10 components are categorized as follows:
Performance and Availability
    Service Level Management
    Capacity Management
    Availability Management
Backup and Recovery
    Service Continuity Management
Incident/Problem Management
    Incident Management
    Problem Management
Configuration Management
    Financial Management
    Change Management
    Configuration Management
    Release Management
The Health Check Checklist
Gather information on the environment
Database Maintenance
    Backups
        Check the backup log
    Log file maintenance (see TrimLogs)
        Trim the alert log
        Trim the backup log
        Trim/delete files at the user dump directories
        Trim listener log file
        Trim sqlnet log file
    Configuration Management
        Check installed Oracle software
        Gather RDA
        Check the DBA_FEATURE_USAGE_STATISTICS
    Statistics
    Archive & Purge
    Rebuilding
    Auditing
    User Management
    Capacity Management
    Patching
Database Monitoring
    Database Availability
        Check the alert log (see GetAlertLog)
        Check the backup log
        Check the archive mode
        Check nologging tables
        Check the control files
        Check Redo log files and sizes
        Check database parameters
            SGA size
            Undo management
            Memory management
    Database Changes
        Check changes on the database parameters
        Check on recent DDLs (if possible)
    Security
        Check the audit logs
    Space and Growth
        Check local and dictionary managed tablespace
        Check tablespace usage <-- sample query below
        Check tablespace quotas
        Check temporary tablespace
        Check tablespace fragmentation
        Check datafiles with autoextend
        Check segment growth or top segments
    Workload and Capacity
        Check the AAS
        Check the CPU, IO, memory, network workload
        Check the top timed events
    Performance
        Check the top SQLs
        Check unstable SQLs
    Database Objects
        Check objects unable to extend
        Check objects reaching max extents
        Check sequences reaching max value
        Check row migration and chaining
        Check invalid objects
        Check table statistics
        Check index statistics
        Check rollback segments (for 8i below)
        Check resource contention (locks, enqueue)
Analysis
    Documentation and recommendation of action plans
    Validation of action plans
    Execution of action plans
}}}
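As a sample of the "Check tablespace usage" item above, one hedged version of the classic query (simple version; it ignores autoextend headroom):
{{{
select df.tablespace_name,
       round(df.bytes/1024/1024) alloc_mb,
       round(nvl(fs.bytes,0)/1024/1024) free_mb,
       round((df.bytes - nvl(fs.bytes,0)) * 100 / df.bytes, 1) pct_used
from   (select tablespace_name, sum(bytes) bytes from dba_data_files group by tablespace_name) df,
       (select tablespace_name, sum(bytes) bytes from dba_free_space group by tablespace_name) fs
where  df.tablespace_name = fs.tablespace_name(+)
order  by pct_used desc;
}}}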
See also [[PerformanceTuningReport]] for possible report/analysis formats
Top DBA Shell Scripts for Monitoring the Database
http://communities.bmc.com/communities/docs/DOC-9942#tablespace
''Interesting scripts on this grid infra directory''
{{{
oracle@enkdb01.enkitec.com:/home/oracle/dba/etc:dbm1
$ locate "/pluggable/unix"
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/alert_log_file_size_analyzer.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/bdump_dest_trace_analyzer.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_default_gateway.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_disk_asynch_io_linking.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_e1000.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_jumbo_frames.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_network_packet_reassembly.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_network_param.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_non_routable_network_interconnect.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_rp_filter.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_tcp_packet_retransmit.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_vip_restart_attempt.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_vmm.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/checkcorefile.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/checkhugepage.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/checkmemlock.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/checkportavail.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/checkramfs.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/checksshd.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/common_include.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/core_dump_dest_analyzer.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/css_diagwait.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/css_disk_timeout.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/css_misscount.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/css_reboot_time.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/getNICSpeed.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/hangcheck_margin.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/hangcheck_reboot.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/hangcheck_tick.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/hangchecktimer.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/listener_naming_convention.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/ora_00600_errors_analyzer.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/ora_07445_errors_analyzer.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/shutdown_hwclock_sync.sh
}}}
Bug No. 1828368
SYS.LINK$ CONTAINS UNENCRYPTED PASSWORDS OF REMOTE LOGIN
Duplicate table over db link
http://laurentschneider.com/wordpress/2011/09/duplicate-table-over-database-link.html
Tuning query with database link using USE_NL hint http://msutic.blogspot.com/2012/03/tuning-distributed-query-using-usenl.html
Summary Of Bugs Which Could Cause Deadlock [ID 554616.1]
http://hemantoracledba.blogspot.com/2010/09/deadlocks.html
http://hoopercharles.wordpress.com/2010/01/07/deadlock-on-oracle-11g-but-not-on-10g/#comment-1793
http://markjbobak.wordpress.com/2008/06/09/11g-is-more-deadlock-sensitive-than-10g/
http://getfirebug.com/
http://jsonlint.com/
debugging book http://www.amazon.com/Debugging-David-J-Agans-ebook/dp/B002H5GSZ2/ref=tmm_kin_swatch_0
http://programmers.stackexchange.com/questions/93302/spending-too-much-time-debugging
Debugging with RStudio
https://support.rstudio.com/hc/en-us/articles/205612627-Debugging-with-RStudio
Document 1484775.1 Database Control To Be Desupported in DB Releases after 11.2
Document 1392280.1 Desupport of Oracle Cluster File System (OCFS) on Windows with Oracle DB 12
Document 1175293.1 Obsolescence Notice: Oracle COM Automation
Document 1175303.1 Obsolescence Notice: Oracle Objects for OLE
Document 1175297.1 Obsolescence Notice: Oracle Counters for Windows Performance Monitor
Document 1418321.1 CSSCAN and CSALTER To Be Desupported After DB 11.2
Document 1169017.1 Deprecating the cursor_sharing = 'SIMILAR' setting
Document 1469466.1: Deprecation of Oracle Net Connection Pooling feature in Oracle Database 11g Release 2
1) Mount the WD 3TB on linux server with virtual box installed
2) Install extension pack
http://www.oracle.com/technetwork/server-storage/virtualbox/downloads/index.html#extpack
-rwxr-xr-x 1 root root 9566803 Oct 17 11:43 Oracle_VM_VirtualBox_Extension_Pack-4.1.4-74291.vbox-extpack
[root@desktopserver installers]# VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.4-74291.vbox-extpack
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Successfully installed "Oracle VM VirtualBox Extension Pack".
3) then mount on windows 7
http://blogs.oracle.com/wim/entry/oracle_vm_virtualbox_40_extens
http://www.cyberciti.biz/tips/fdisk-unable-to-create-partition-greater-2tb.html
http://plone.lucidsolutions.co.nz/linux/io/adding-a-xfs-filesystem-to-centos-5
http://blog.cloutier-vilhuber.net/?p=246
http://blogs.oracle.com/wim/entry/playing_with_btrfs
https://www.udemy.com/learn-devops-continuously-deliver-better-software/learn/v4/content
https://www.udemy.com/learn-devops-scaling-apps-on-premise-and-in-the-cloud/learn/v4/content
-- MICROSOFT
ODP.NET example code using password management with C#
Doc ID: Note:226759.1
http://www.dialogs.com/en/GetDialogs.html
http://www.dialogs.com/en/Downloads.html
http://www.dialogs.com/en/Manual.html
https://www.dialogs.com/en/cuf_req_thankyou.html
http://kdiff3.sourceforge.net/ <-- just like on linux
http://stackoverflow.com/questions/12625/best-diff-tool
http://intermediatesql.com/oracle/what-is-the-difference-between-sql-profile-and-spm-baseline/
http://yong321.freeshell.org/oranotes/DirectIO.txt <-- ''good stuff'' - linux, solaris, tru64
{{{
$ uname -a
SunOS countfleet 5.6 Generic_105181-31 sun4u sparc SUNW,Ultra-2
$ mount | grep ^/f[12] #/f2 has DIO turned on
/f1 on /dev/dsk/c0t1d0s0 setuid/read/write/largefiles on Wed Jan 15 16:17:29 2003
/f2 on /dev/dsk/c0t1d0s1 forcedirectio/setuid/read/write/largefiles on Wed Jan 15 16:17:29 2003
$ grep maxphys /etc/system
set maxphys = 1048576
Database 9.0.1.3
create tablespace test datafile '/f1/oradata/tiny/test.dbf' size 400m extent management local
uniform size 32k;
Three times it took 35,36,36 seconds, respectively. The same command except for f1 changed to f2
took 25,27,26 seconds, respectively, about 9 seconds faster. /f1 is regular UFS and /f2 is DIO UFS.
When the tablespace is being created on /f1, truss is run against the shadow process and the second
run shows:
$ truss -c -p 9704
^Csyscall seconds calls errors
read .00 1
write .00 3
open .00 2
close .00 10
time .00 2
lseek .00 2
times .03 282
semsys .00 31
ioctl .00 3 3
fdsync .00 1
fcntl .01 14
poll .01 146
sigprocmask .00 56
context .00 14
fstatvfs .00 3
writev .00 2
getrlimit .00 3
setitimer .00 28
lwp_create .00 2
lwp_self .00 1
lwp_cond_wai .03 427
lwp_cond_sig .15 427
kaio 5.49 469 430 <-- More kernelized IO time
stat64 .00 3 1
fstat64 .00 3
pread64 .00 32
pwrite64 .35 432 <-- Each pwrite() call takes 350/432 = 0.8 ms
open64 .00 6
---- --- ---
sys totals: 6.07 2405 434
usr time: 1.71
elapsed: 36.74
When the tablespace is created on /f2,
$ truss -c -p 9704
^Csyscall seconds calls errors
read .00 1
write .00 3
open .00 2
close .00 10
time .00 2
lseek .00 2
times .02 282
semsys .00 31
ioctl .00 3 3
fdsync .00 1
fcntl .00 14
poll .01 146
sigprocmask .00 56
context .00 14
fstatvfs .00 3
writev .00 2
getrlimit .00 3
setitimer .00 28
lwp_cond_wai .00 430
lwp_cond_sig .03 430
kaio .50 462 430 <-- Much less kernelized IO time
stat64 .00 3 1
fstat64 .00 3
pread64 .01 32
pwrite64 .00 432 <-- pwrite calls take practically no time.
open64 .00 6
---- --- ---
sys totals: .57 2401 434
usr time: 1.94
elapsed: 27.72
During the first run, the result on /f1 is even worse. But for a good benchmark, I usually ignore the
first run.
}}}
http://www.pythian.com/news/22727/how-to-confirm-direct-io-is-getting-used-on-solaris/
''on Linux''
{{{
Now in Linux it becomes very easy. You just need to read /proc/slabinfo:
cat /proc/slabinfo | grep kio
In the SLAB allocator there are three different caches involved. The kioctx and kiocb are async I/O data structures defined in the aio.h header file. If they show non-zero values, that means async I/O is enabled.
}}}
''on Solaris''
{{{
truss -f -t open,ioctl -u ':directio' sqlplus "/ as sysdba"
27819/1: open("/ora02/oradata/MYDB/undotbs101.dbf", O_RDWR|O_DSYNC) = 13
27819/1@1: -> libc:directio(0x100, 0x1, 0x0, 0x0, 0xfefefefeffffffff, 0xfefefefeff726574)
27819/1: ioctl(256, _ION('f', 76, 0), 0x00000001) = 0
27819/1@1: <- libc:directio() = 0
27819/1: open("/ora02/oradata/MYDB/system01.dbf", O_RDWR|O_DSYNC) = 13
27819/1@1: -> libc:directio(0x101, 0x1, 0x0, 0x0, 0xfefefefeffffffff, 0xfefefefeff726574)
27819/1: ioctl(257, _ION('f', 76, 0), 0x00000001) = 0
27819/1@1: <- libc:directio() = 0
Table created.
SQL> drop table test;
Table dropped.
See the line "ioctl(256, _ION('f', 76, 0), 0x00000001)" above.
The 3rd parameter to the ioctl() call decides the use of direct I/O.
It is 0 for directio off and 1 for directio on, and it's ON in the case of this database, i.e. the undo and system datafiles are opened with directio.
}}}
http://blogs.oracle.com/apatoki/entry/ensuring_that_directio_is_active
http://www.solarisinternals.com/si/tools/directiostat/index.php <-- ''directiostat tool''
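On the database side, the knob that pairs with these OS-level checks is the filesystemio_options parameter (a minimal sketch; it is not dynamic, so it needs a bounce):
{{{
-- SETALL asks for both direct I/O and async I/O where the filesystem supports it
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE=SPFILE;
-- verify after restart
SHOW PARAMETER filesystemio_options
}}}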
VxFS DirectIO
http://mailman.eng.auburn.edu/pipermail/veritas-vx/2006-February/025477.html
When direct I/O attacks! - A sample of VxFS mount options
{{{
$ mount | grep u02
/u02 on /dev/vx/dsk/oradg/oradgvol01 read/write/setuid/mincache=direct/convosync=direct/delaylog/largefiles/ioerror=mwdisable/dev=3bd4ff0 on Mon Dec 5 22:21:31 2005
}}}
http://blogs.sybase.com/dwein/?p=326
http://www.freelists.org/post/oracle-l/direct-reads-and-writes-on-Solaris,4
http://orafaq.com/node/27
Setting mincache=direct and convosync=direct for VxFS on Solaris 10 - http://www.symantec.com/connect/forums/setting-mincachedirect-and-convosyncdirect-vxfs-solaris-10
What are the differences between the direct, dsync, and unbuffered settings for the Veritas File System mount options mincache and convosync, and how do those options affect I/O? - http://www.symantec.com/business/support/index?page=content&id=TECH49211
Pros and Cons of Using Direct I/O for Databases [ID 1005087.1]
Oracle Import Takes Longer When Using Buffered VxFS Then Using Unbuffered VxFS [ID 1018755.1]
Performance impact of file system when mounted as Buffered and Unbuffered option [ID 151719.1]
http://antognini.ch/2010/09/parallel-full-table-scans-do-not-always-perform-direct-reads/
http://oracle-randolf.blogspot.com/2011/10/auto-dop-and-direct-path-inserts.html
http://blog.tanelpoder.com/2012/09/03/optimizer-statistics-driven-direct-path-read-decision-for-full-table-scans-_direct_read_decision_statistics_driven/
http://www.pythian.com/news/27867/secrets-of-oracles-automatic-degree-of-parallelism/
http://dioncho.wordpress.com/2009/07/21/disabling-direct-path-read-for-the-serial-full-table-scan-11g/
How Parallel Execution Works http://docs.oracle.com/cd/E11882_01/server.112/e25523/parallel002.htm
http://uhesse.com/2009/11/24/automatic-dop-in-11gr2/
http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-parallel-execution-fundamentals-133639.pdf
also see [[_small_table_threshold]]
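From the Dion Cho link above, the 11g serial direct path read decision can be switched back to buffered reads per session (a sketch of the event he describes):
{{{
-- event 10949 disables the 11g adaptive serial direct path read for full table scans
ALTER SESSION SET events '10949 trace name context forever, level 1';
}}}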
! 2020 nigel
https://github.com/oracle/oracle-db-examples/tree/master/optimizer/direct_path
<<<
SQL scripts to compare direct path load in Oracle Database 11g Release 2 with Oracle Database 12c (12.1.0.2 and above). They are primarily intended to demonstrate the new Hybrid TSM/HWMB load strategy in 12c - comparing this to the TSM strategy available in 11g. See the 11g and 12c "tsm_v_tsmhwmb.sql" scripts and their associated spool file "tsm_v_tsmhwmb.lst" to see the difference in behavior between these two database versions. In particular, note the reduced number of table extents created in the 12c example compared to 11g by examining the "tsm_v_tsmhwmb.lst" files.
The 12c directory contains a comprehensive set of examples demonstrating how the SQL execution plan is decorated with the chosen load strategy.
The 11g directory contains a couple of examples for comparative purposes.
<<<
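A quick way to see the load strategy decoration mentioned above for yourself (a sketch; t_source/t_target are hypothetical names, 12c and above):
{{{
-- set serveroutput off first so the INSERT stays the last cursor in the session
INSERT /*+ APPEND */ INTO t_target SELECT * FROM t_source;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);
-- in 12c the LOAD AS SELECT plan line is decorated with the chosen strategy,
-- e.g. LOAD AS SELECT (HYBRID TSM/HWMB)
}}}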
http://www.windowsnetworking.com/articles_tutorials/authenticating-linux-active-directory.html
''Centrify'' http://www.cerberis.com/images/produits/livreblanc/Active%20Directory%20Solutions%20for%20Red%20Hat%20Enterprise%20Linux.pdf, http://www.centrify.com/express/comparing-free-active-directory-integration-tools.asp
http://en.wikipedia.org/wiki/Active_Directory
https://wiki.archlinux.org/index.php/Active_Directory_Integration
http://en.gentoo-wiki.com/wiki/Active_Directory_with_Samba_and_Winbind
http://en.gentoo-wiki.com/wiki/Active_Directory_Authentication_using_LDAP
http://serverfault.com/questions/23632/how-to-use-active-directory-to-authenticate-linux-users
http://serverfault.com/questions/12454/linux-clients-on-a-windows-domains
http://serverfault.com/questions/15626/how-practical-is-to-authenticate-a-linux-server-against-ad
http://wiki.samba.org/index.php/Samba_&_Active_Directory
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch31_:_Centralized_Logins_Using_LDAP_and_RADIUS
http://helpdeskgeek.com/how-to/windows-2003-active-directory-setupdcpromo/
How to Disable Automatic Statistics Collection in 11g [ID 1056968.1]
http://www.oracle-base.com/articles/11g/AutomatedDatabaseMaintenanceTaskManagement_11gR1.php
http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_job.htm#i1000521
''to check''
{{{
SQL> col status format a20
SQL> r
1* select client_name,status from Dba_Autotask_Client
CLIENT_NAME STATUS
---------------------------------------------------------------- --------------------
auto optimizer stats collection ENABLED
auto space advisor ENABLED
sql tuning advisor ENABLED
}}}
''-- disable a specific job or auto job''
{{{
EXEC DBMS_AUTO_TASK_ADMIN.DISABLE('auto optimizer stats collection', NULL, NULL);
exec dbms_scheduler.disable('gather_stats_job');
exec dbms_scheduler.disable( 'SYS.BSLN_MAINTAIN_STATS_JOB' );
EXEC DBMS_JOB.BROKEN(62,TRUE);
}}}
''-- disable all maintenance jobs altogether''
{{{
EXEC DBMS_AUTO_TASK_ADMIN.disable;
EXEC DBMS_AUTO_TASK_ADMIN.enable;
}}}
''-- to address the maintenance window on your newly created resource_plan''
* you can just do a single-level plan, then add the ORA$AUTOTASK_SUB_PLAN and ORA$DIAGNOSTICS consumer groups and switch the maintenance windows from DEFAULT_MAINTENANCE_PLAN to your newly created resource plan. By doing this, each time the window opens, all the jobs will conform to the new resource plan and its allocations (a sketch follows the FORCE example below)
Enabling Oracle Database Resource Manager and Switching Plans http://docs.oracle.com/cd/B28359_01/server.111/b28310/dbrm005.htm#ADMIN11890
{{{
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'FORCE:mydb_plan';
}}}
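A minimal sketch of that single-level plan (APP_GROUP and the percentages are placeholders; mydb_plan matches the FORCE example above):
{{{
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('APP_GROUP', 'application sessions');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN('MYDB_PLAN', 'single-level plan incl. maintenance groups');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'MYDB_PLAN', group_or_subplan => 'APP_GROUP',
    comment => 'application work', mgmt_p1 => 70);
  -- the two consumer groups the maintenance windows run under
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'MYDB_PLAN', group_or_subplan => 'ORA$AUTOTASK_SUB_PLAN',
    comment => 'autotask jobs', mgmt_p1 => 20);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'MYDB_PLAN', group_or_subplan => 'ORA$DIAGNOSTICS',
    comment => 'diagnostics', mgmt_p1 => 5);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'MYDB_PLAN', group_or_subplan => 'OTHER_GROUPS',
    comment => 'everything else', mgmt_p1 => 5);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- repoint a maintenance window at the new plan instead of DEFAULT_MAINTENANCE_PLAN
EXEC DBMS_SCHEDULER.SET_ATTRIBUTE('SYS.MONDAY_WINDOW', 'RESOURCE_PLAN', 'MYDB_PLAN');
}}}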
Windows http://docs.oracle.com/cd/B28359_01/server.111/b28310/schedover007.htm#i1106396
Configuring Resource Allocations for Automated Maintenance Tasks http://docs.oracle.com/cd/B28359_01/server.111/b28310/tasks005.htm#BABHEFEH
Automated Database Maintenance Task Management http://www.oracle-base.com/articles/11g/automated-database-maintenance-task-management-11gr1.php
{{{
There's a parameter in 11.2 which you can use to force the PX executions to be local to a node:
PARALLEL_FORCE_LOCAL
If you are on 10gR2, you can set a hint:
Select /*+ PARALLEL(TAB, DEGREE, INSTANCES) */
Or set it at the table level:
ALTER TABLE NODETEST1 PARALLEL(DEGREE 4 INSTANCES 2)
ALTER SESSION DISABLE PARALLEL DML|DDL|QUERY
SELECT /*+ NOPARALLEL(hr_emp) */ last_name FROM hr.employees hr_emp;
}}}
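The 11.2 parameter itself (a quick sketch):
{{{
-- constrain PX slaves to the node where the query coordinator runs
ALTER SYSTEM SET parallel_force_local = TRUE;
}}}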
http://rogunix.com/docs/Reversing&Exploiting/The.IDA.Pro.Book.2nd.Edition.Jun.2011.pdf
https://www.hex-rays.com/index.shtml
The compiler, assembler, linker, loader and process address space tutorial - hacking the process of building programs using C language: notes and illustrations
http://www.tenouk.com/ModuleW.html
<<showtoc>>
! prereq
* the user should not have the ADMINISTER DATABASE TRIGGER priv
* the user should not own the trigger
* the lockout does not apply to SYS and SYSTEM
! the trigger to prevent users from logging in
{{{
-- EXECUTE THIS SCRIPT AS ALLOC_APP_PERF
create or replace trigger alloc_app_perf.revoke_alloc_app_user
after logon on database
begin
-- not allow app schema
if sys_context('USERENV','SESSION_USER') in ('ALLOC_APP_USER')
-- not allow users outside of this server (change the server name accordingly)
and UPPER(SYS_CONTEXT ('USERENV','HOST')) not in ('KARLDEVFEDORA')
then
raise_application_error(-20001,'<<< NIGHTLY BATCH RUNNING. PLEASE COME BACK LATER. >>>');
end if;
end;
/
}}}
! the kill procedure executed in UC4
{{{
-- EXECUTE THIS SCRIPT AS SYSDBA
grant alter system to system;
grant select on sys.gv_$session to system;
create or replace procedure system.uc4_kill_all_alloc_app_user
as
BEGIN
FOR c IN (
SELECT sid, serial#, inst_id
FROM sys.gv_$session
WHERE USERNAME = 'ALLOC_APP_USER'
AND upper(MACHINE) NOT IN (select upper(sys_context ('userenv','HOST')) from dual)
AND STATUS <> 'KILLED'
)
LOOP
EXECUTE IMMEDIATE 'alter system kill session ''' || c.sid || ', ' || c.serial# || ', @' || c.inst_id || ''' immediate';
END LOOP;
END;
/
grant execute on system.uc4_kill_all_alloc_app_user to alloc_app_perf;
}}}
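The UC4 job step then boils down to a single call (run as alloc_app_perf, which was granted EXECUTE on the procedure above):
{{{
EXEC system.uc4_kill_all_alloc_app_user;
}}}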
! the user that will execute the kill should have the following privs
{{{
-- quotas
alter user "alloc_app_perf" quota unlimited on bas_data;
-- roles, privs
grant alloc_app_r to alloc_app_perf;
grant select_catalog_role to alloc_app_perf;
grant resource to alloc_app_perf;
grant select any dictionary to alloc_app_perf;
grant advisor to alloc_app_perf;
grant create job to alloc_app_perf;
grant oem_monitor to alloc_app_perf;
grant administer any sql tuning set to alloc_app_perf;
grant administer sql management object to alloc_app_perf;
grant create any sql_profile to alloc_app_perf;
grant drop any sql_profile to alloc_app_perf;
grant alter any sql_profile to alloc_app_perf;
grant create any trigger to alloc_app_perf;
grant alter any trigger to alloc_app_perf;
grant administer database trigger to alloc_app_perf with admin option;
-- execute
grant execute on dbms_monitor to alloc_app_perf;
grant execute on dbms_application_info to alloc_app_perf;
grant execute on dbms_workload_repository to alloc_app_perf;
grant execute on dbms_xplan to alloc_app_perf;
grant execute on dbms_sqltune to alloc_app_perf;
grant execute on sys.dbms_lock to alloc_app_perf;
}}}
''references''
https://serverfault.com/questions/58856/disconnecting-an-oracle-session-from-a-logon-trigger
How do i prevent end users from connecting to the database other than my application https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:561622956788
Raise_application_error procedure in AFTER LOGON trigger https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3236035522926
http://oracle.ittoolbox.com/groups/technical-functional/oracle-db-l/after-logon-trigger-not-killing-the-user-session-5108880
https://www.freelists.org/post/oracle-l/Disconnecting-session-from-an-on-logon-trigger,3
https://stackoverflow.com/questions/55342/how-can-i-kill-all-sessions-connecting-to-my-oracle-database
http://kerryosborne.oracle-guy.com/2012/03/displaying-sql-baseline-plans/
https://12factor.net/
http://www.ec2instances.info/
Docker repo
https://registry.hub.docker.com/search?q=oracle&searchfield=
http://sve.to/2010/10/11/cannot-drop-the-first-disk-group-in-asm-11-2/
11gR2 (11.2.0.1) ORA-15027: active use of diskgroup precludes its dismount (With no database clients connected) [ID 1082876.1]
{{{
Dismounting DiskGroup DATA failed with the following message:
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DATA" precludes its dismount
}}}
http://drupal.org/download
http://drupal.org/project/themes?solrsort=sis_project_release_usage%20desc
http://drupal.org/start
http://drupal.org/search/apachesolr_multisitesearch/blog%20aggregator <-- AGGREGATOR
http://groups.drupal.org/node/21325 <-- VIEWS
http://alexanderanokhin.wordpress.com/2012/03/19/dtrace-lio-new-features/
http://www.jlcomp.demon.co.uk/faq/duplicates.html
http://oracletoday.blogspot.com/2005/08/magic-exceptions-into.html
http://www.java2s.com/Code/Oracle/PL-SQL/handleexceptionofduplicatevalueonindex.htm
http://www.unix.com/programming/176214-eliminate-duplicate-rows-sqlloader.html
http://database.itags.org/oracle/243273/
http://boardreader.com/thread/How_to_avoid_Duplicate_Insertion_without_l8ddXffgc.html
http://homepages.inf.ed.ac.uk/wenfei/tdd/reading/cleaning.pdf
http://docs.oracle.com/cd/B31104_02/books/EIMAdm/EIMAdm_UsageScen16.html
http://momendba.blogspot.com/2008/06/hi-there-was-interesting-post-on-otn.html
http://www.freelists.org/post/oracle-l/Is-it-a-good-idea-to-have-primary-key-on-DW-table
http://www.etl-tools.com/loading-data-into-oracle.html
http://www.justskins.com/forums/eliminate-duplicates-using-sqlldr-148572.html
http://www.akadia.com/services/ora_exchange_partition.html
http://www.dbforums.com/oracle/1008995-avoid-duplicate-rows-error-sqlldr.html
http://database.itags.org/oracle/19023/
http://www.club-oracle.com/forums/how-to-avoid-duplicate-rows-from-being-inserted-in-table-t2101/
http://www.dbforums.com/oracle/979143-performance-issue-using-sql-loader.html
! row_number PARTITION BY
https://www.sqlservercentral.com/articles/eliminating-duplicate-rows-using-the-partition-by-clause
{{{
select a.Emp_Name, a.Company, a.Join_Date, a.Resigned_Date, a.RowNumber
from
(select Emp_Name
,Company
,Join_Date
,Resigned_Date
,ROW_NUMBER() over (partition by Emp_Name, Company, Join_Date
,Resigned_Date
order by Emp_Name, Company, Join_Date
,Resigned_Date) RowNumber
from Emp_Details) a
where a.RowNumber > 1
}}}
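The same pattern can delete the duplicates directly in Oracle by carrying ROWID through the inline view (a sketch using the table/columns from the article above):
{{{
DELETE FROM Emp_Details
WHERE ROWID IN (
  SELECT rid
  FROM (SELECT ROWID rid,
               ROW_NUMBER() OVER (PARTITION BY Emp_Name, Company, Join_Date, Resigned_Date
                                  ORDER BY Emp_Name) rn
        FROM Emp_Details)
  WHERE rn > 1);
}}}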
http://forums.untangle.com/openvpn/14806-dyndns-openvpn.html
http://dyn.com/dns/dyndns-free/
-- DynamicSampling
http://blogs.oracle.com/optimizer/2010/08/dynamic_sampling_and_its_impact_on_the_optimizer.html
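For reference, the per-query knob the post above discusses (a sketch; table name hypothetical):
{{{
-- levels 0-10 trade sampling cost for estimate quality; 4 is a common starting point
SELECT /*+ dynamic_sampling(t 4) */ COUNT(*) FROM some_table t;
}}}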
-- CursorSharing
http://db-optimizer.blogspot.com/2010/06/cursorsharing-picture-is-worth-1000.html
http://blogs.oracle.com/mt/mt-search.cgi?blog_id=3361&tag=cursor%20sharing&limit=20
http://blogs.oracle.com/optimizer/2009/05/whydo_i_have_hundreds_of_child_cursors_when_cursor_sharing_is_set_to_similar_in_10g.html
Formated V$SQL_SHARED_CURSOR Report by SQLID or Hash Value (Doc ID 438755.1)
Unsafe Literals or Peeked Bind Variables (Doc ID 377847.1)
Adaptive Cursor Sharing in 11G (Doc ID 836256.1)
-- HighVersionCount
High SQL version count and low executions from ADDM Report!!
http://forums.oracle.com/forums/thread.jspa?threadID=548770
Library Cache : Causes of Multiple Version Count for an SQL http://viveklsharma.wordpress.com/2009/09/12/ql/
http://viveklsharma.wordpress.com/2009/09/24/library-cache-latch-contention-due-to-multiple-version-count-day-2-of-aioug/
High Version Count with CURSOR_SHARING = SIMILAR or FORCE (Doc ID 261020.1)
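A quick first look at why the version count is high (a sketch; a 'Y' in a column explains why a new child cursor was created):
{{{
SELECT sql_id, child_number, bind_mismatch, optimizer_mode_mismatch, language_mismatch
FROM   v$sql_shared_cursor
WHERE  sql_id = '&sql_id';
}}}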
-- PLAN_HASH_VALUE
http://oracle-randolf.blogspot.com/2009/07/planhashvalue-how-equal-and-stable-are.html
Thread: SQL with multiple plan hash value http://forums.oracle.com/forums/thread.jspa?threadID=897302
SQL PLAN_HASH_VALUE Changes for the Same SQL Statement http://hoopercharles.wordpress.com/2009/12/01/sql-plan_hash_value-changes-for-the-same-sql-statement/
-- LibraryCacheLatch
Higher Library Cache Latch contention in 10g than 9i (Doc ID 463860.1)
Understanding and Tuning the Shared Pool and Tuning Library Cache Latch Contention (Doc ID 62143.1)
Solutions for possible AWR Library Cache Latch Contention Issues in Oracle 10g (Doc ID 296765.1)
-- COE
TESTING SQL PERFORMANCE IMPACT OF AN ORACLE 9i TO ORACLE DATABASE 10g RELEASE 2 UPGRADE WITH SQL PERFORMANCE ANALYZER (Doc ID 562899.1)
Case Study: The Mysterious Performance Drop (Doc ID 369427.1)
http://office.microsoft.com/en-us/excel-help/using-named-ranges-to-create-dynamic-charts-in-excel-HA001109801.aspx
http://www.exceluser.com/explore/dynname1.htm
http://dmoffat.wordpress.com/2011/05/19/dynamic-range-names-and-charts-in-excel-2010/
http://www.eggheadcafe.com/software/aspnet/30309917/newbie-needs-translation-of-andy-popes-code.aspx
http://www.ozgrid.com/forum/showthread.php?t=56215&page=1
http://peltiertech.com/Excel/Charts/Dynamics.html
http://peltiertech.com/Excel/Charts/DynamicChartLinks.html
http://www.tushar-mehta.com/excel/newsgroups/dynamic_charts/index.html#BasicRange
http://www.tushar-mehta.com/excel/newsgroups/dynamic_charts/images/snapshot014.jpg
http://www.mrexcel.com/forum/showthread.php?p=1299121
http://www.eggheadcafe.com/software/aspnet/35201025/help-to-pick-constant-color-to-a-value-in-a-pie-chart.aspx
http://peltiertech.com/WordPress/vba-conditional-formatting-of-charts-by-category-label/
http://peltiertech.com/WordPress/using-colors-in-excel-charts/
http://peltiertech.com/WordPress/vba-conditional-formatting-of-charts-by-value/
http://peltiertech.com/WordPress/vba-conditional-formatting-of-charts-by-series-name/
Installing Oracle Apps 11i
http://avdeo.com/2010/11/01/installing-oracle-apps-11i/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+advait+(IN+ORACLE+MILIEU+...)
Virtualizing Oracle E-Business Suite through Oracle VM
http://kyuoracleblog.wordpress.com/2012/08/13/virtualizing-oracle-e-business-suite-through-oracle-vm/
Oracle E-Business Suite Release 11i with 9i RAC: Installation and Configuration using AutoConfig
Doc ID: Note:279956.1
ALERT: Oracle 10g Release 2 (10.2) Support Status and Alerts
Doc ID: Note:316900.1
Oracle Applications Release 11i with Oracle 10g Release 2 (10.2.0)
Doc ID: Note:362203.1
Configuring Oracle Applications Release 11i with Oracle10g Release 2 Real Application Clusters and Automatic Storage Management
Doc ID: Note:362135.1
Oracle E-Business Suite Release 11i Technology Stack Documentation Roadmap
Doc ID: Note:207159.1
Patching Best Practices and Reducing Downtime
Doc ID: Note:225165.1
MAA Roadmap for the E-Business Suite
Doc ID: Note:403347.1
Oracle E-Business Suite Recommended Performance Patches
Doc ID: Note:244040.1
http://onlineappsdba.com
Upgrading Oracle Application 11i to E-Business Suite R12
http://advait.wordpress.com/2008/03/04/upgrading-oracle-application-11i-to-e-business-suite-r12/
Chapter 5. Patching - Part 1 by Elke Phelps and Paul Jackson
From Oracle Applications DBA Field Guide, Berkeley, Apress, March 2006.
http://www.dbazine.com/oracle/or-articles/phelps1
Oracle E-Business Suite Patching - Best Practices
http://www.appshosting.com/pub_doc/patching.html
Types Of application Patch
http://oracleebusinesssuite.wordpress.com/2007/05/28/types-of-application-patch/
http://patchsets12.blogspot.com/
E-Business Suite Applications 11i on RAC/ASM
http://www.ardentperf.com/2007/04/18/e-business-suite-applications-11i-on-racasm/
RAC Listener Best Practices
http://www.ardentperf.com/2007/02/28/rac-listener-best-practices/#comment-1412
http://www.integrigy.com/security-resources/whitepapers/Integrigy_Oracle_Listener_TNS_Security.pdf
--------------------------------
Upgrade Oracle Database to 10.2.0.2 : SOA Suite Install Part II
http://onlineappsdba.com/index.php/2007/06/16/upgrade-oracle-database-to-10202-soa-suite-install-part-ii/
Good Metalink Notes or Documentation on Apps 11i/R12/12i Patching
http://onlineappsdba.com/index.php/2008/05/28/good-metalink-notes-or-documentation-on-apps-11ir1212i-patching/
http://teachmeoracle.com/healthcheck02.html
Practical Interview Question for Oracle Apps 11i DBA
http://onlineappsdba.com/index.php/2007/12/08/practical-interview-question-for-oracle-apps-11i-dba/
Oracle Apps 11i with Database 10g R2 10.2.0.2
http://onlineappsdba.com/index.php/2006/08/28/oracle-apps-11i-with-database-10g-r2-10202/
-- INSTALL
Oracle E-Business Suite 11i and Database FAQ
Doc ID: 285267.1
Unbreakable Linux Enviroment check before R12 install
Doc ID: 421409.1
RCONFIG : Frequently Asked Questions
Doc ID: 387046.1
Using Oracle E-Business Suite Release 12 with a Database Tier Only Platform on Oracle 10g Release 2
Doc ID: 456197.1 Type: WHITE PAPER
-- ORACLE VM / VIRTUALIZATION
Using Oracle VM with Oracle E-Business Suite Release 11i or Release 12
(Doc ID 465915.1)
Certified Software on Oracle VM (Doc ID 464754.1)
Hardware Vendor Virtualization Technologies on non x86/x86-64 Architectures and Oracle E-Business Suite (Doc ID 794016.1)
-- CONCURRENT MANAGER
A Script We Use to Monitor Concurrent Jobs and Sessions that Hang (Doc ID 444611.1)
-- TUNING
http://blogs.oracle.com/stevenChan/2007/05/performance_tuning_the_apps_da.html
Troubleshooting Oracle Applications Performance Issues
Doc ID: Note:169935.1
coe_stats.sql - Automates CBO Stats Gathering using FND_STATS and Table sizes
Doc ID: Note:156968.1
bde_last_analyzed.sql - Verifies CBO Statistics
Doc ID: Note:163208.1
Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046
Doc ID: Note:224270.1
Diagnostic Scripts: Data Collection Performance Management
Doc ID: Note:183401.1
Tuning performance on eBusiness suite
Doc ID: Note:744143.1
Does Gather Schema Statistics collect statistics for indexes?
Doc ID: Note:170647.1
Which Method To Gather Statistics When On DB 10g
Doc ID: Note:427878.1
Script to Automate Gathering Stats on Applications 11.5 Using FND_STATS
Doc ID: Note:190177.1
Gather Schema Statistics program hangs or fails with ORA-54 errors
Doc ID: Note:331017.1
Purging Strategy for eBusiness Suite 11i
Doc ID: Note:732713.1
Gather Schema Statistics with LASTRUN Option does not Clean FND_STATS_HIST Table
Doc ID: Note:745442.1
How to get a Trace for And Begin to Analyze a Performance Issue
Doc ID: Note:117129.1
How to Troubleshoot Performance Issues
Doc ID: Note:232419.1
How Often Should Gather Schema Statistics Program be Run?
Doc ID: Note:168136.1
Using the FND_STATS Package for Gathering Statistics and 100% of Sample Data is Returned
Doc ID: Note:197386.1
A Holistic Approach to Performance Tuning Oracle Applications Systems
Doc ID: Note:69565.1
APS Performance TIPS
Doc ID: Note:209996.1
GATHERING STATS FOR APPS 11i IN PARALLEL TAKES A LONG TIME
Doc ID: Note:603144.1
ways to calculate: 419728.1
histogram: 429002.1
How To Gather Statistics On Oracle Applications 11.5.10(and above) - Concurrent Process,Temp Tables, Manually
Doc ID: 419728.1
How To Gather Statistics For Oracle Applications Prior to 11.5.10
Doc ID: 122371.1
How to collect histograms in Apps Ebusiness Suite using FND_STATS
Doc ID: 429002.1
11i: Setup of the Oracle 8i Cost-Based Optimizer (CBO)
Doc ID: 101379.1
Gathering Statistics for the Cost Based Optimizer (Pre 10g)
Doc ID: 114671.1
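For reference, the FND_STATS call these notes keep coming back to (a sketch; schema name hypothetical, and in practice you would run it via the Gather Schema Statistics concurrent program):
{{{
-- EBS wrapper around dbms_stats: schema name, estimate percent
EXEC FND_STATS.GATHER_SCHEMA_STATISTICS('MRP', 10);
}}}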
-- TRACE APPS
Note 296559.1 Tracing FAQ: Common Tracing Techniques within the Oracle Applications 11i
Note 100964.1 - Troubleshooting Performance Issues Relating to the Database and Core/MFG MRP
Note 117129.1 - How to get a Trace for And Begin to Analyze a Performance Issue
Note 130182.1 - HOW TO TRACE FROM FORM, REPORT, PROGRAM AND OTHERS IN ORACLE APPLICATIONS
Note 142898.1 - How To Use Tkprof and Trace With Applications
Note 161474.1 - Oracle Applications Remote Diagnostics Agent (APPS_RDA)
Note 179848.1 - bde_system_event_10046.sql - SQL Trace any transaction with Event 10046 8.1-9.2
Note 224270.1 - Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046
Note 245974.1 - FAQ - How to Use Debug Tools and Scripts for the APS Suite
Note 279132.1 - set_FND_INIT_SQL.sql - Tracing sessions, Forms and Concurrent Request, for SINGLE Applications User (Binds+Waits)
Note 301372.1 - How to Generate a SQLTrace Including Binds and Waits for a Concurrent Program for 11.5.10 and R12
Note 76338.1 - Tracing Tips for Oracle Applications
A practical guide in Troubleshooting Oracle ERP Applications Performance
Issues can be found on Metalink under Note 169935.1
Trace 11i Bind Variables - Profile Option: Initialization SQL Statement - Custom
Doc ID: 170223.1
set_FND_INIT_SQL.sql - Tracing sessions, Forms and Concurrent Request, for SINGLE Applications User (Binds+Waits)
Doc ID: 279132.1
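The underlying event all the tracing notes above build on (sketch):
{{{
-- level 12 = binds + waits
ALTER SESSION SET events '10046 trace name context forever, level 12';
-- ... reproduce the issue ...
ALTER SESSION SET events '10046 trace name context off';
}}}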
-- PLAN STABILITY
Best Practices for automatic statistics collection on Oracle 10g
Doc ID: 377152.1
Restoring table statistics in 10G onwards
Doc ID: 452011.1
Oracle Database Stats History Using dbms_stats.restore_table_stats
Doc ID: 281793.1
Statistics Best Practices: How to Backup and Restore Statistics
Doc ID: 464939.1
Tips for avoiding upgrade related query problems
Doc ID: 167086.1
Recording Explain Plans before an upgrade to 10g or 11g
Doc ID: 466350.1
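The restore flow from the notes above boils down to (a sketch; owner/table names hypothetical):
{{{
-- how far back does the stats history go?
SELECT DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY FROM dual;
-- put the table stats back as they were a day ago
EXEC DBMS_STATS.RESTORE_TABLE_STATS('SCOTT', 'EMP', SYSTIMESTAMP - INTERVAL '1' DAY);
}}}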
-- DBMS_STATS
SIZE Clause in METHOD_OPT Parameter of DBMS_STATS Package
Doc ID: 338926.1
Recommendations for Gathering Optimizer Statistics on 10g
Doc ID: 605439.1
Recommendations for Gathering Optimizer Statistics on 11g
Doc ID: 749227.1
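Example of the SIZE clause in METHOD_OPT (a sketch; table name hypothetical):
{{{
-- SIZE AUTO lets Oracle decide histogram buckets based on data skew and column usage
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMP', method_opt => 'FOR ALL COLUMNS SIZE AUTO');
}}}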
-- UPGRADE - MIGRATE
Consolidated Reference List For Migration / Upgrade Service Requests
Doc ID: 762540.1
-- PERFORMANCE SCENARIO
A Holistic Approach to Performance Tuning Oracle Applications Systems
Doc ID: 69565.1
When Conventional Thinking Fails: A Performance Case Study in Order Management Workflow customization
Doc ID: 431619.1
Create Service Request Performance Issue
Doc ID: 303150.1
EBPERF FAQ - Collecting Statistics with Oracle Apps 11i
Doc ID: 368252.1
-- CBO
Managing CBO Stats during an upgrade to 10g or 11g
Doc ID: 465787.1
-- APPLICATION SERVER
Oracle Application Server with Oracle E-Business Suite Release 11i FAQ
Doc ID: Note:186981.1
-- DEBUG
FAQ - How to Use Debug Tools and Scripts for the APS Suite
Doc ID: 245974.1
Debugging Platform Migration Issues in Oracle Applications 11i
Doc ID: 567703.1
-- CLONE
FAQ: Cloning Oracle Applications Release 11i
Doc ID: 216664.1
http://onlineappsdba.com/index.php/2008/02/07/cloning-in-oracle-apps-11i/
-- PLATFORM MIGRATION
Platform Migration with Oracle Applications Release 12
Doc ID: 438086.1
Migrating to Linux with Oracle Applications Release 11i
Doc ID: 238276.1
Oracle Applications R12 Migration from Solaris to Linux Platform
http://smartoracle.blogspot.com/2008/12/oracle-applications-r12-migration-from.html
http://forums.oracle.com/forums/thread.jspa?threadID=481742&start=0&tstart=0
Thread: 11i migration from solaris to linux
http://www.dbspecialists.com/files/presentations/cloning.html
-- INTEROPERABILITY
Interoperability Notes Oracle Applications Release 10.7 with Release 8.1.7
Doc ID: 148901.1
Interoperability Notes Oracle Applications Release 11.0 with Release 8.1.7
Doc ID: 148902.1
-- X86-64 SUPPORT
Frequently Asked Questions: Oracle E-Business Suite Support on x86-64
Doc ID: 343917.1
-- ITANIUM SUPPORT
Frequently Asked Questions: Oracle E-Business Suite Support on Itanium
Doc ID: 311717.1
-- DATABASE VAULT
Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 10.2.0.4
Doc ID: 428503.1 Type: WHITE PAPER
-- EXPORT IMPORT
Export/Import Process for Oracle E-Business Suite Release 12 using 10gR2
Doc ID: 454616.1
9i Export/Import Process for Oracle Applications Release 11i
Doc ID: 230627.1
-- RAC
Oracle E-Business Suite Release 11i with 9i RAC: Installation and Configuration using AutoConfig
Doc ID: 279956.1 Type: WHITE PAPER
-- DATA GUARD
Case Study : Configuring Standby Database(Dataguard) on R12 using RMAN Hot Backup
Doc ID: 753241.1
-- NETWORK
Oracle E-Business Suite Network Utilities: Best Practices
Doc ID: Note:556738.1
Installation
Note: 452120.1 - How to locate the log files and troubleshoot RapidWiz for R12
Note: 329985.1 - How to locate the Rapid Wizard Installation log files for Oracle Applications 11.5.8 and higher
Note: 362135.1 - Configuring Oracle Applications Release 11i with Oracle10g Release 2 Real Application Clusters and Automatic Storage Management
Note: 312731.1 - Configuring Oracle Applications Release 11i with 10g RAC and 10g ASM
Note: 216550.1 - Oracle Applications Release 11i with Oracle9i Release 2 (9.2.0)
Note: 279956.1 - Oracle E-Business Suite Release 11i with 9i RAC: Installation and Configuration using AutoConfig
Note: 294932.1 - Recommendations to Install Oracle Applications 11i
Note: 403339.1 - Oracle 10gR2 Database Preparation Guidelines for an E-Business Suite Release 12.0.4 Upgrade
Note: 455398.1 - Using Oracle 11g Release 1 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 11i
Note: 402311.1 - Oracle Applications Installation and Upgrade Notes Release 12 (12.0.4) for Microsoft Windows
Note: 405565.1 - Oracle Applications Release 12 Installation Guidelines
AD Utilities
Note: 178722.1 - How to Generate a Specific Form Through AD utility ADADMIN
Note: 109667.1 - What is AD Administration on APPS 11.0.x ?
Note: 112327.1 - How Does ADADMIN Know Which Forms Files To Regenerate?
Note: 136342.1 - How To Apply a Patch in a Multi-Server Environment
Note: 109666.1 - Release 10.7 to 11.0.3 : What is adpatch ?
Note: 152306.1 - How to Restart Failed AutoInstall Job
Note: 356878.1 - How to relink an Applications Installation of Release 11i and Release 12
Note: 218089.1 - Autoconfig FAQ
Note: 125922.1 - How To Find Oracle Application File Versions
Cloning
Note: 419475.1 - Removing Credentials from a Cloned EBS Production Database
Note: 398619.1 - Clone Oracle Applications 11i using Oracle Application Manager (OAM Clone)
Note: 230672.1 - Cloning Oracle Applications Release 11i with Rapid Clone
Note: 406982.1 - Cloning Oracle Applications Release 12 with Rapid Clone
Note: 364565.1 - Troubleshooting RapidClone issues with Oracle Applications 11i
Note: 603104.1 - Troubleshooting RapidClone issues with Oracle Applications R12
Note: 435550.1 - R12 Login issue on target after cloning
Note: 559518.1 - Cloning Oracle E-Business Suite Release 12 RAC-Enabled Systems with Rapid Clone
Note: 216664.1 - FAQ: Cloning Oracle Applications Release 11i
Patching
Note: 225165.1 - Patching Best Practices and Reducing Downtime
Note: 62418.1 - PATCHING/PATCHSET FREQUENTLY ASKED QUESTIONS
Note: 181665.1 - Release 11i Adpatch Basics
Note: 443761.1 - How to check if a certain Patch was applied to Oracle Applications instance?
Note: 231701.1 - How to Find Patching History (10.7, 11.0, 11i)
Note: 60766.1 - 11.0.x : Patch Installation Frequently Asked Questions
Note: 459156.1 - Oracle Applications Patching FAQ for Release 12
Note: 130608.1 - AdPatch Basics
Note: 60766.1 - Patch Installation FAQ (Part 1)
Upgrade
Note: 461709.1 - Oracle E-Business Suite Upgrade Guide - Plan
Note: 293166.1 - Previous Versions of e-Business 11i Upgrade Assistant FAQ
Note: 224875.1 - Installation, Patching & Upgrade Frequently Asked Questions (FAQ’s)
Note: 224814.1 - Installation, Patching & Upgrade Current Issues
Note: 225088.1 - Installation, Patching & Upgrade Patches Guide
Note: 225813.1 - Installation, Patching & Upgrade Setup and Usage Guide
Note: 224816.1 - Installation, Patching & Upgrade Troubleshooting Guide
Note: 216550.1 - Oracle Applications Release 11i with Oracle9i Release 2 (9.2.0)
Note: 362203.1 - Oracle Applications Release 11i with Oracle 10g Release 2 (10.2.0)
Note: 423056.1 - Oracle Applications Release 11i with Oracle 10g Release 2 (10.2.0.2)
Note: 726982.1 - Oracle Applications Release 11i with Oracle 10g Release 2 (10.2.0.3)
Note: 452783.1 - Oracle Applications Release 11i with Oracle 11g Release 1 (11.1.0)
Note: 406652.1 - Upgrading Oracle Applications 11i DB to DB 10gR2 with Physical Standby in Place
Note: 316365.1 - Oracle Applications Release 11.5.10.2 Maintenance Pack Installation Instructions
Note: 418161.1 - Best Practices for Upgrading Oracle E-Business Suite
Printer
Note: 297522.1 - How to investigate printing issues and work towards its resolution ?
Note: 110406.1 - Check Printing Frequently Asked Questions
Note: 264118.1 - Pasta Pasta Printing Setup Test
Note: 200359.1 - Oracle Application Object Library Printer Setup Test
Note: 234606.1 - Oracle Application Object Library Printer Initialization String Setup Test
Note: 1014599.102 - Subject: How to Test Printer Initialization Strings in Unix
Performance
Note: 390137.1 - FAQ for Collections Performance
Note: 216205.1 - Database Initialization Parameters for Oracle Applications Release 11i
Note: 169935.1 - Troubleshooting Oracle Applications Performance Issues
Note: 171647.1 - Tracing Oracle Applications using Event 10046
Note: 153507.1 - Oracle Applications and StatsPack
Note: 356501.1 - How to Setup Pasta Quickly and Effectively
Note: 333504.1 - How To Print Concurrent Requests in PDF Format
Note: 356972.1 - 11i How to troubleshoot issues with printers
Working with Support: Collaborate (OAUG) 2009 Conference Notes
Doc ID: 820449.1
Tom's Handy SQL for the Oracle Applications
Doc ID: 731190.1
Others
Note: 189367.1 - Best Practices for Securing the E-Business Suite
Note: 403537.1 - Best Practices For Securing Oracle E-Business Suite Release 12
Note: 454616.1 - Export/Import Process for Oracle E-Business Suite Release 12 using 10gR2
Note: 394692.1 - Oracle Applications Documentation Resources, Release 12
Note: 370274.1 - New Features in Oracle Application 11i
Note: 130183.1 - How to Get Log Files from Various Programs for Oracle Applications
Note: 285267.1 - Oracle E-Business Suite 11i and Database FAQ
Note: 453137.1 - Oracle Workflow Best Practices Release 12 and Release 11i
Note: 398942.1 - FNDCPASS Utility New Feature ALLORACLE
Note: 187735.1 - Workflow FAQ - All Versions
-- AUTOCONFIG
Running Autoconfig on RAC instance, Failed with ORA-12504: TNS:listener was not given the SID in CONNECT_DATA
Doc ID: 577396.1
Troubleshooting Autoconfig issues with Oracle Applications RAC Databases
Doc ID: 756050.1
http://www.tomshardware.com/forum/221285-30-memory
https://db-engines.com/en/system/Elasticsearch%3BGoogle+BigQuery%3BSphinx
https://stackoverflow.com/questions/11264868/does-google-bigquery-support-full-text-search
https://medium.com/google-cloud/bigquery-performance-tips-searching-for-text-8x-faster-f9314927b8d2 <- nice
https://medium.com/inside-bizzabo/creating-an-elasticsearch-to-bigquery-data-pipeline-afe7c3f97369
https://www.youtube.com/results?search_query=elasticsearch+fulltext+bigquery
https://www.youtube.com/watch?v=WwN-vq67vBk
https://www.youtube.com/watch?v=H8f59-vxvn4 <- nice
https://www.google.com/search?client=firefox-b-1-d&q=elasticsearch+etl
https://discuss.elastic.co/t/etl-tool-for-elasticsearch/113803/14 <- logstash to etl
https://qbox.io/blog/integrating-elasticsearch-into-node-js-application <- nodejs
https://www.alooma.com/integrations/elasticsearch
! 2020
<<<
Haven't really played with the whole ELK or TIG stack but I'm watching and I have references and bought courses :p I just don't have time to catch up
There are two peeps who did it with Oracle in mind: See blog posts by Bertrand and Robin
Bertrand
https://bdrouvot.wordpress.com/2016/03/05/graphing-oracle-performance-metrics-with-telegraf-influxdb-and-grafana/
ELK:
Logstash to collect the information the way we want to.
Elasticsearch as an analytics engine.
Kibana to visualize the data.
TIG:
telegraf: to collect the Exadata metrics
InfluxDB: to store the time-series Exadata metrics
grafana: to visualise the Exadata metrics
Robin
https://www.elastic.co/blog/visualising-oracle-performance-data-with-the-elastic-stack
Here are some courses:
I like this one because of different ways of integration
https://www.udemy.com/course/grafana-graphite-and-statsd-visualize-metrics/
Integration with DataSources
Integration of Grafana with MySQL
Integration of Grafana with SQL Server (version 5.3 and above)
Integration of Grafana with Elasticsearch
Integration of Grafana with AWS Cloudwatch
Integration of Grafana with InfluxDB
Then there's also this thing called prometheus
https://prometheus.io/
https://www.udemy.com/course/monitoring-and-alerting-with-prometheus/
I think ELK and TIG are geared towards real time dashboarding. And these tools compete with New Relic, Splunk, Dynatrace and a lot more that are probably new in the market.
https://devconnected.com/how-to-setup-telegraf-influxdb-and-grafana-on-linux/
<<<
http://newappsdba.blogspot.com/2009/11/setting-em-blackouts-from-gui-and.html
http://dbakevlar.com/2012/01/getting-the-most-out-of-enterprise-manager-and-notifications/
How to Troubleshoot Process Control (start, stop, check status) the 10g Oracle Management Service(OMS) Component in 10g Enterprise Manager Grid Control [ID 730308.1]
Grid Control Performance: How to Troubleshoot OMS Crash / Restart Issues? [ID 964469.1]
11.1.0.1 emctl start oms gives the error message Unexpected error occurred. Check error and log files [ID 1331527.1]
Troubleshooting Why EM Express is not Working (Doc ID 1604062.1)
NOTE:1601454.1 - EM Express 12c Database Administration Page FAQ
http://www.oracle.com/technetwork/database/manageability/emx-intro-1970113.html
! https
{{{
select dbms_xdb_config.gethttpsport() from dual;
exec DBMS_XDB_CONFIG.SETHTTPSPORT(5500);
https://localhost:5500/em/
}}}
{{{
How Reset the SYSMAN password in OEM 12c
-- Stop all the OMS:
$OMS_HOME/bin/emctl stop oms
-- Modify the SYSMAN password:
$OMS_HOME/bin/emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd <sys pwd> -new_pwd <new sysman pwd>
-- Re-start all the OMS:
$OMS_HOME/bin/emctl stop oms -all
$OMS_HOME/bin/emctl start oms
}}}
http://oraclepoint.com/oralife/2011/10/11/difference-between-oracle-enterprise-manager-10g-and-11g/
''installers''
{{{
Oracle Enterprise Manager Grid Control for Linux x86-64
http://download.oracle.com/otn/linux/oem/1110/GridControl_11.1.0.1.0_Linux_x86-64_1of3.zip
http://download.oracle.com/otn/linux/oem/1110/GridControl_11.1.0.1.0_Linux_x86-64_2of3.zip
http://download.oracle.com/otn/linux/oem/1110/GridControl_11.1.0.1.0_Linux_x86-64_3of3.zip
Agent Software for 64-bit Platforms
http://download.oracle.com/otn/linux/oem/1110/Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip
Oracle WebLogic Server 11gR1 (10.3.2) - Package Installer
http://download.oracle.com/otn/nt/middleware/11g/wls/wls1032_generic.jar
}}}
http://www.oracle.com/technetwork/oem/grid-control/downloads/index.html
http://www.oracle.com/technetwork/middleware/ias/downloads/wls-main-097127.html
http://www.oracle.com/technetwork/oem/grid-control/downloads/linuxx8664soft-085949.html
http://www.oracle.com/technetwork/oem/grid-control/downloads/agentsoft-090381.html
http://www.oracle-base.com/articles/11g/GridControl11gR1InstallationOnOEL5.php
http://ocpdba.wordpress.com/2010/05/28/enterprise-manager-11g-installation/
http://gavinsoorma.com/2010/04/11g-enterprise-manager-grid-control-installation-overview/
http://ivan.kartik.sk/oracle/install_ora11gR1_elinux.html
http://www.masterschema.com/2010/04/install-enterprise-manager-grid-control-11g-release-1/
http://blogs.griddba.com/2010/05/enterprise-manger-grid-control-11g.html
Also check out the [[EnterpriseManagerMetalink]]
http://oemgc.files.wordpress.com/2012/10/em12c-monitoring-best-practices.pdf
How to Deploy Oracle Management Agent 12c http://www.gokhanatil.com/2011/10/how-to-deploy-oracle-management-agent.html
Em12c:Silent Oracle Management agent Installation http://askdba.org/weblog/2012/02/em12c-silent-oracle-management-agent-installation
EM12c:Automated discovery of Targets http://askdba.org/weblog/2012/02/em12c-automated-discovery-of-targets/
Rapid deployment of Enterprise Manager Cloud Control 12c (12.1) Agent http://goo.gl/vqrtK
Auto Discovery of Targets in EM12c http://oemgc.wordpress.com/2012/02/01/auto-discovery-of-targets-in-em12c/ <-- this will discover targets from an IP range
''Official Doc''
Installing Oracle Management Agent 12.1.0.2 http://docs.oracle.com/cd/E24628_01/install.121/e22624/install_agent.htm#CACJEFJI
Installing Oracle Management Agent 12.1.0.1 http://docs.oracle.com/html/E22624_12/install_agent.htm#CACJEFJI
Download additional agent 12.1.0.2 software using Self Update http://docs.oracle.com/cd/E24628_01/doc.121/e24473/self_update.htm#BEHGDJGE
Applying bundle patches on Exadata using Enterprise Manager Grid Control https://blogs.oracle.com/XPSONHA/entry/applying_bundle_patches_on_exadata
http://www.oracle.com/technetwork/oem/exa-mgmt/em12c-exadata-discovery-cookbook-1662643.pdf
<<<
Introduction
Before You Begin
Exadata discovery prerequisite check script
Launching Discovery
Installing the Agents on the Compute Nodes
Running Guided Discovery
Post Discovery Setups
KVM
Discovering the Cluster and Oracle Databases
Conclusion
<<<
http://download.oracle.com/docs/cd/E24628_01/em.121/e25160/oracle_exadata.htm#BABFDHBG
http://blogs.oracle.com/XPSONHA/entry/racle_enterprise_manager_cloud_control
http://www.pythian.com/news/33261/oem12c-discovery-of-exadata-cluster/
12cr4 http://docs.oracle.com/cd/E24628_01/doc.121/e27442/toc.htm
http://www.pythian.com/news/38901/setup-exadata-for-cloud-control-12-1-0-2/
http://www.oracle.com/technetwork/oem/em12c-screenwatches-512013.html
Failover capability for plugins Exadata & EMGC Rapid deployment https://blogs.oracle.com/XPSONHA/entry/failover_capability_for_plugins_exadata
Set OEM 12c Self Update to Offline mode
https://blogs.oracle.com/VDIpier/entry/set_oem_12c_self_update
! Cloud Control Install
<<<
1) 11.2 RDBMS OS Prereqs
see [[11gR1 Install]]
2) Install RDBMS software
{{{
-- disable AMM first then set the following
ALTER SYSTEM SET pga_aggregate_target=1G SCOPE=SPFILE;
ALTER SYSTEM SET shared_pool_size=600M SCOPE=SPFILE;
ALTER SYSTEM SET job_queue_processes=20 SCOPE=SPFILE;
ALTER SYSTEM SET log_buffer=10485760 SCOPE=SPFILE;
ALTER SYSTEM SET open_cursors=300 SCOPE=SPFILE;
ALTER SYSTEM SET processes=1000 SCOPE=SPFILE;
ALTER SYSTEM SET session_cached_cursors=200 SCOPE=SPFILE;
ALTER SYSTEM SET sga_target=2G SCOPE=SPFILE;
EXEC dbms_auto_task_admin.disable('auto optimizer stats collection',null,null);
}}}
3) Deconfigure 11.2 DB control
{{{
oracle@emgc12c.local:/u01/installers/cloudcontrol:emrep12c
$ $ORACLE_HOME/bin/emca -deconfig dbcontrol db -repos drop
}}}
4) Install Cloud Control
5) Deploy stop/start scripts
{{{
oracle@emgc12c.local:/home/oracle/bin:emrep12c
$ cat start_grid.sh
export ORACLE_SID=emrep12c
export ORAENV_ASK=NO
. oraenv
lsnrctl start
sqlplus / as sysdba << EOF
startup
EOF
cd /u01/middleware/oms/bin
./emctl start oms
cd /u01/agent/agent_inst/bin
./emctl start agent
cd /u01/agent/agent_inst/bin
./emctl stop agent
cd /u01/middleware/oms/bin
./emctl stop oms -all
cd /u01/middleware/oms/bin
./emctl start oms
cd /u01/agent/agent_inst/bin
./emctl start agent
}}}
{{{
oracle@emgc12c.local:/home/oracle/bin:emrep12c
$ cat stop_grid.sh
cd /u01/agent/agent_inst/bin
./emctl stop agent
cd /u01/middleware/oms/bin
./emctl stop oms -all
export ORACLE_SID=emrep12c
export ORAENV_ASK=NO
. oraenv
sqlplus / as sysdba << EOF
shutdown immediate
EOF
lsnrctl stop
}}}
<<<
! Install Agent
<<<
<<<
Enterprise Manager Agent Downloads Page http://www.oracle.com/technetwork/oem/grid-control/downloads/agentsoft-090381.html
Enterprise Manager Agent 12.1.0.1 and 12.1.0.2 Binaries
You can get the 12.1.0.1 / 12.1.0.2 agent binaries by using the Self Update feature. Refer to the Agent deployment section of the Advanced Install guide for more details.
For information on using the Self Update feature, refer to the Oracle Enterprise Manager Cloud Control Administrator's Guide.
<<<
1) Discover the local emrep database http://oemgc.wordpress.com/2012/01/09/discover-em12c-repository-database-after-installation/
2) Install agent using AgentDeploy from the OMS
edit the /etc/sudoers file on the target for the post install scripts (you can ignore this and run after install) http://www.gokhanatil.com/2011/10/how-to-deploy-oracle-management-agent.html
{{{
#Defaults requiretty <-- comment this
# Defaults !visiblepw <-- comment this
Defaults visiblepw
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
oracle ALL=(ALL) ALL
}}}
3) Activate ASH analytics by deploying the Database Management PL/SQL Packages on target databases
<<<
Also see [[EM12c Agent]]
! Activate other Plug-ins (requires OMS shutdown/restart and will disconnect all targets)
! Errors
Exception: OperationFailedException: Below host metric patches are not applied to OMS.[13426571]
Re: where can i download agent 12c for all platform? https://forums.oracle.com/forums/thread.jspa?threadID=2315005
SEVERE: OUI-10053: Unable to generate temporary script, Unable to continue install <-- corrupted inventory.xml file, debug opatch issue by ''export OPATCH_DEBUG=TRUE'', do a ''locate inventory.xml'' to get the backup of the inventory.xml
http://www.gokhanatil.com/2012/03/emcli-session-expired-error-and-fqdn.html <-- on manual agent install when getting the zip software on the OMS
! References
Release Schedule of Current Enterprise Manager Releases and Patch Sets (10g, 11g, 12c) [ID 793512.1]
How to Install Enterprise Manager Cloud Control 12.1.0.1 (12c) on Linux [ID 1359176.1]
EM 12c R2: How to Install Enterprise Manager Cloud Control 12.1.0.2 using GUI Mode [ID 1488154.1]
http://www.gokhanatil.com/2011/10/how-to-install-oracle-enterprise-manager-cloud-control-12c.html
http://docs.oracle.com/cd/E24628_01/install.121/e22624/preinstall_req_hw.htm#BACDDAAC
http://docs.oracle.com/cd/E24628_01/install.121/e22624/preinstall_req_packages.htm#CHDEHHCA
http://www.oracle.com/technetwork/oem/em12c-screenwatches-512013.html <-- includes agent install
http://blogs.oracle.com/VDIpier/entry/installing_oem_12c
http://www.dbspecialists.com/blog/database-monitoring/install-and-configure-oracle-enterprise-manager-cloud-control-12c/ <-- using manual agent install
EM 12c: How to Install EM 12c Agent using Silent Install Method with Response File [ID 1360083.1]
12c Cloud Control: How to Install Cloud Agent on Oracle RAC Nodes? [ID 1377434.1] <-- In 12c, there is no option to install a 'cluster Agent' as in the earlier versions
EM 12c: How to Install Enterprise Manager 12.1.0.1 Using Silent Method [ID 1361643.1]
How To De-Install the Enterprise Manager 12c Cloud Control [ID 1363418.1]
How to De-install the Enterprise Manager Cloud Control 12c Agent [ID 1368088.1]
FAQ: Enterprise Manager Agent 12c Availability / Certification / Install / Upgrade Frequently Asked Questions [ID 1488133.1]
Note 1369575.1 EM 12c: Acquiring or Updating the Enterprise Manager Cloud Control 12.1.0.1 Management Agent Software Using the Self Update Feature
Note 406906.1 Understanding Enterprise Manager Certification in My Oracle Support
EM 12c: Troubleshooting 12c Management Agent Installation issues [ID 1396675.1]
oem12cR1 http://www.oracle-base.com/articles/12c/cloud-control-12cr1-installation-on-oracle-linux-5-and-6.php
oem12cR2 http://www.oracle-base.com/articles/12c/cloud-control-12cr2-installation-on-oracle-linux-5-and-6.php
-- display all devices
powermt display dev=all
''EMC VNX'' http://rogerluethy.wordpress.com/2011/01/18/emc-vnx-whats-in-the-box/
''EMC Symmetrix'' http://en.wikipedia.org/wiki/EMC_Symmetrix
''EMC xtremIO'' (the competitor to Nutanix) https://twitter.com/kevinclosson/status/420971534195232768
''Kevin's readables notes'' http://kevinclosson.wordpress.com/2012/01/30/emc-oracle-related-reading-material-of-interest/#comments
https://blogs.oracle.com/pankaj/entry/emcli_setup
http://archives.devshed.com/forums/linux-97/using-lvm-with-san-1988109.html
http://archives.devshed.com/forums/linux-97/using-lvm-with-emc-powerpath-1845854.html
http://lists.us.dell.com/pipermail/linux-poweredge/2006-October/028086.html
http://archives.free.net.ph/message/20060609.164110.a24b2220.en.html
http://www.mail-archive.com/centos@centos.org/msg19136.html
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510/DM_Multipath/multipath_logical_volumes.html
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Cluster_Logical_Volume_Manager/lvm_filters.html <-- You can control which devices LVM scans by setting up filters in the lvm.conf configuration file
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510/Cluster_Logical_Volume_Manager/lvmconf_file.html
http://kbase.redhat.com/faq/docs/DOC-1573
Support Info
http://www.emc.com/support-training/support/maintenance-tech-support/options/index.htm
Powerlink:
emc193050 "vgcreate against emcpower device fails on Linux server."
emc46848 "Duplicate PVIDS on multiple disks"
emc118890 "How to create a Linux Sistina LVM2 logical volume"
emc118561 "Sistina LVM2 is reporting duplicate PV on RHEL"
emc120281 "How to set up a Linux host to use emcpower devices in LVM"
emc93760 "Where can I find Linux Solutions?"
http://www.pythian.com/news/14721/environment-variables-in-grid-control-user-defined-metrics/
* emctl start agent
* emctl stop agent
* emctl status agent
* emctl upload agent
* emctl resetTZ agent
<<<
if having OMS: AGENT_TZ_MISMATCH errors
<<<
* exec mgmt_admin.cleanup_agent('pd02db02.us.cbre.net:3872'); ''<-- this cleans up any info of that host, for De-commissioned Host''
<<<
Right After Install, the Grid Control Agent Generates ERROR-Agent is blocked. Blocked reason is: Agent is out-of-sync with repository [ID 1307816.1]
<<<
*/u01/app/oracle/product/12.1.0.4/middleware/oms/bin/@@emctl status oms -details@@ ''<-- to get status when deploying plugin''
''OEM Dashboard and Groups''
emctl status agent
emctl config agent listtargets
on repository first do this
{{{
exec mgmt_admin.cleanup_agent('<hostname>:3872'); <-- this cleans up any info of that host, for De-commissioned Host
}}}
{{{
oracle@desktopserver:/app/oracle/product/agent11g/oui/bin:AGENT
$ . ~oracle/.karlenv
<HOME_LIST>
<HOME NAME="OraDb10g_asmhome" LOC="/app/oracle/product/10.2.0/asm" TYPE="O" IDX="1"/>
<HOME NAME="OraDb10g_home2" LOC="/app/oracle/product/10.2.0/db" TYPE="O" IDX="2"/>
<HOME NAME="agent11g1" LOC="/app/oracle/product/agent11g" TYPE="O" IDX="3"/>
<HOME NAME="OraDb10g_asm_10205_home" LOC="/app/oracle/product/10.2.0.5/asm" TYPE="O" IDX="4"/>
<HOME NAME="OraDb10g_db_10205_home" LOC="/app/oracle/product/10.2.0.5/db" TYPE="O" IDX="5"/>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/app/oracle/product/11.2.0.3/grid" TYPE="O" IDX="6"/>
<HOME NAME="OraDb11g_home1" LOC="/app/oracle/product/11.2.0.3/db" TYPE="O" IDX="7"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
1- epm10prd
2- cog10prd
3- statprd
4- AGENT
5- +ASM
Select the Oracle SID with given number [1]:
oracle@desktopserver:/app/oracle/product/agent11g/oui/bin:
$ ./runInstaller -deinstall ORACLE_HOME=/app/oracle/product/agent11g "REMOVE_HOMES={/app/oracle/product/agent11g}" -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 30047 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-11-28_04-07-20PM. Please wait ...oracle@desktopserver:/app/oracle/product/agent11g/oui/bin:
$ Oracle Universal Installer, Version 11.1.0.8.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.
Starting deinstall
Deinstall in progress (Wednesday, November 28, 2012 4:07:31 PM CST)
Configuration assistant "Agent Deinstall Assistant" succeeded
Configuration assistant "Oracle Configuration Manager Deinstall" succeeded
............................................................... 100% Done.
Deinstall successful
End of install phases.(Wednesday, November 28, 2012 4:08:13 PM CST)
End of deinstallations
Please check '/app/oraInventory/logs/silentInstall2012-11-28_04-07-20PM.log' for more details.
oracle@desktopserver:/app/oracle/product/agent11g/oui/bin:epm10prd
$ . ~oracle/.karlenv
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
<HOME_LIST>
<HOME NAME="OraDb10g_asmhome" LOC="/app/oracle/product/10.2.0/asm" TYPE="O" IDX="1"/>
<HOME NAME="OraDb10g_home2" LOC="/app/oracle/product/10.2.0/db" TYPE="O" IDX="2"/>
<HOME NAME="OraDb10g_asm_10205_home" LOC="/app/oracle/product/10.2.0.5/asm" TYPE="O" IDX="4"/>
<HOME NAME="OraDb10g_db_10205_home" LOC="/app/oracle/product/10.2.0.5/db" TYPE="O" IDX="5"/>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/app/oracle/product/11.2.0.3/grid" TYPE="O" IDX="6"/>
<HOME NAME="OraDb11g_home1" LOC="/app/oracle/product/11.2.0.3/db" TYPE="O" IDX="7"/>
<HOME NAME="agent11g1" LOC="/app/oracle/product/agent11g" TYPE="O" IDX="3" REMOVED="T"/>
</HOME_LIST>
1- epm10prd
2- cog10prd
3- statprd
4- AGENT <-- it's still there!!!
5- +ASM
Select the Oracle SID with given number [1]:
next, manually remove it from /etc/oratab and /app/oraInventory/ContentsXML/inventory.xml
}}}
http://www.evernote.com/shard/s48/sh/0c1c4419-cc71-43d1-b833-3158554a16dd/4202762f0bd31d3becafa02b760ae6fa
Creating a view only user in Enterprise Manager grid control http://dbastreet.com/blog/?p=395
http://boomslaang.wordpress.com/2008/05/27/securing-oracle-agents/
Right After Install, the Grid Control Agent Generates ERROR-Agent is blocked. Blocked reason is: Agent is out-of-sync with repository [ID 1307816.1] <-- this fixed it
Communication: Agent to OMS Communication Fails if the Agent is 'Blocked' in the 10.2.0.5 Grid Console [ID 799618.1]
11.1 Agent Upload is Failing With "ERROR-Agent is blocked. Blocked reason is: Agent is out-of-sync with repository" [ID 1362430.1]
* ESCOM error while pressing enter, enter the following as root to change the behavior
xmodmap -e 'keycode 104 = Return'
Oracle Identity Management Certification
http://www.oracle.com/technology/software/products/ias/files/idm_certification_101401.html#BABFFCJA
eSSO: Overview And Troubleshooting Of OIM Integration With Provisioning Gateway
Doc ID: Note:550639.1
ESSO - debugging terminal emulator templates
Doc ID: Note:445012.1
How to Upgrade eSSO
Doc ID: Note:471825.1
eSSO: Credentials Might Get Corrupted
Doc ID: Note:563523.1
Installation and Configuration of the ESSO-LM with Oracle Database
Doc ID: Note:456062.1
ESSO - Putty autologin to Unix server
Doc ID: Note:412967.1
eSSO: Overview And Troubleshooting Provisioning Gateway
Doc ID: Note:549189.1
eSSO: How To Integrate an Application Having Windows Based Login and Web Based Password Change
Doc ID: Note:470492.1
Does Oracle Single Sign-On have any Means to Provide Two Factor Authentication?
Doc ID: Note:559094.1
Installing eSSO Login Manager On Windows Vista Fails If User Is Not Administrator
Doc ID: Note:469501.1
Failed To Detect Change Window Password Of Oracle Forms 6
Doc ID: Note:563955.1
ESSO - Logon Manager Agent - enabling traces for intercepted windows
Doc ID: Note:412995.1
https://statsbot.co/blog/etl-vs-elt/
<<<
ETL vs ELT: running transformations in a data warehouse
What exactly happens when we switch “L” and “T”? With new, fast data warehouses some of the transformation can be done at query time. But there are still a lot of cases where it would take quite a long time to perform huge calculations. So instead of doing these transformations at query time you can perform them in the warehouse, but in the background, after loading data.
<<<
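As a sketch of the ELT pattern described above (hypothetical raw_orders table, loaded as-is by the "L" step, then transformed inside the warehouse):
{{{
-- "L": raw data is already loaded into raw_orders untransformed
-- "T": run the heavy transformation inside the warehouse, in the background, after the load
create table orders_daily_agg as
select trunc(order_date) order_day,
       customer_id,
       sum(amount)       total_amount
from   raw_orders
group by trunc(order_date), customer_id;
}}}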
<<showtoc>>
! home
!! gen1 home
https://docs.oracle.com/en/cloud/paas/exadata-cloud/
!! gen2 home
https://www.oracle.com/database/exadata-cloud-customer.html
! documentation
!! gen1 doc
Administering Oracle Database Exadata Cloud at Customer (Gen 1/OCI-C)
https://docs.oracle.com/en/cloud/cloud-at-customer/exadata-cloud-at-customer/exacc/service-instances.html#GUID-B34563D6-9581-4390-AE6E-3D2304E829EE
!! gen2 doc
https://docs.oracle.com/en-us/iaas/exadata/doc/ecc-exadata-cloud-at-customer-overview.html
! datasheet
!! gen1 ds
(until x7) https://www.oracle.com/technetwork/database/exadata/exacc-x7-ds-4126773.pdf
!! gen2 ds
(x8 onwards) https://www.oracle.com/a/ocom/docs/engineered-systems/exadata/gen2-exacc-commercial-faqs.pdf
https://www.oracle.com/a/ocom/docs/engineered-systems/exadata/gen2-exacc-ds.pdf
! Architecture_Diagrams
Understanding the Exadata Cloud at Customer Technical Architecture (Gen 1/OCI-C) https://www.oracle.com/webfolder/technetwork/tutorials/Architecture_Diagrams/ecc_arch/ecc_arch.html
Understanding the Exadata Cloud Service Technical Architecture (Gen 1/OCI-C) https://www.oracle.com/webfolder/technetwork/tutorials/Architecture_Diagrams/ecs_arch/ecs_arch.html#
! other references
!! gen2
https://wikibon.com/oracle-ups-its-game-with-gen-2-cloud-at-customer/
https://www.ejgallego.com/2018/10/oracle-cloud-gen-2/
https://blog.dbi-services.com/oracle-open-world-2018-cloudgen2/larrykeynote008/
https://www.oracle.com/a/ocom/docs/constellation-on-gen-2-exacc-fr.pdf
! EXACLI
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmmn/exacli.html#GUID-6BF1E4F5-A63E-4A30-886A-2F3DB8A2830F
<<showtoc>>
oracle automated patching
https://www.google.com/search?q=oracle+automated+patching&oq=oracle+automated+patching&aqs=chrome..69i57j0l2j69i64l3.3858j0j1&sourceid=chrome&ie=UTF-8
https://www.oracle.com/technical-resources/articles/enterprise-manager/havewala-patching-oem12c.html
https://www.doag.org/formes/pubfiles/9627420/2017-DB-Nicolas_Jardot-Automate_Patching_for_Oracle_Database_in_your_Private_Cloud-Praesentation.pdf
oracle dbaascli patch oracle cloud on-prem
https://www.google.com/search?sxsrf=ACYBGNT09LMBA3N-RkJoTvWrRClLS9bTKw%3A1568043638172&ei=dnJ2Xd6CCtHU5gLhpIPgCA&q=oracle+dbaascli+patch+oracle+cloud+on-prem&oq=oracle+dbaascli+patch+oracle+cloud+on-prem&gs_l=psy-ab.3..33i160l3.8173.10244..10526...0.2..0.123.834.3j5......0....1..gws-wiz.......0i71j33i299.o-i2Ht-IvTQ&ved=0ahUKEwjela7gicTkAhVRqlkKHWHSAIwQ4dUDCAs&uact=5
https://gokhanatil.com/2016/12/how-to-patch-oracle-database-on-the-oracle-cloud.html
https://docs.oracle.com/en/cloud/paas/database-dbaas-cloud/csdbi/patch-hybrid-dr-deployment.html
patching on-premise oracle exadata
https://www.google.com/search?q=patching+on-premise+oracle+exadata&oq=patching+on-premise+oracle+exadata&aqs=chrome..69i57j33.10698j0j4&sourceid=chrome&ie=UTF-8
! exadata cloud
Patching Exadata Cloud Service https://docs.oracle.com/en/cloud/paas/exadata-cloud/csexa/patch.html
https://docs.oracle.com/en/cloud/paas/exadata-cloud/csexa/typical-workflow-using-service.html
! exadata cloud at customer
Patching Exadata Cloud at Customer https://docs.oracle.com/en/cloud/cloud-at-customer/exadata-cloud-at-customer/exacc/patch.html , https://docs.oracle.com/en/cloud/cloud-at-customer/index.html
https://www.oracle.com/database/exadata-cloud-service.html
<<showtoc>>
! sqltext_to_signature example use
{{{
-- sqltext_to_signature example use
-- sqltext_to_signature is a function, so assign it to a bind variable or call it in SQL
variable sig number
exec :sig := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE('karlarao');
-- 1 is force matching
select dbms_sqltune.sqltext_to_signature('karlarao',1) from dual;
2777083410832069452
}}}
https://docs.oracle.com/database/121/ARPLS/d_sqltun.htm#ARPLS68464
! the two example SQLs
{{{
select * from karlarao.skew where skew=3; --6fvyp18cvnzwa 375614277642158684 -- exact matching 4404474968209701751 -- force matching 1949605896 PHV
select * from karlarao.SKEW Where skew=3; --1myj38m1m3g2u 375614277642158684 -- exact matching 4404474968209701751 -- force matching 1949605896 PHV
set serveroutput on
VARIABLE sql1 VARCHAR2(100)
VARIABLE sql2 VARCHAR2(100)
BEGIN
:sql1 := q'[select * from karlarao.skew where skew=3]';
:sql2 := q'[select * from karlarao.SKEW Where skew=3]';
END;
/
col signature format 999999999999999999999999
SELECT :sql1 sql_text, dbms_sqltune.sqltext_to_signature(:sql1,0) signature FROM dual
UNION ALL
SELECT :sql2 sql_text, dbms_sqltune.sqltext_to_signature(:sql2,0) signature FROM dual;
}}}
! EXACT_MATCHING_SIGNATURE vs FORCE_MATCHING_SIGNATURE vs dbms_sqltune.sqltext_to_signature across views
{{{
select * from v$sql where sql_id in ('6fvyp18cvnzwa','1myj38m1m3g2u');
select * from gv$sqlstats where sql_id in ('6fvyp18cvnzwa','1myj38m1m3g2u'); -- same as v$sql but planx uses this
--EXACT_MATCHING_SIGNATURE (dbms_sqltune.sqltext_to_signature) - Signature calculated on the normalized SQL text. The normalization includes the removal of white space and the uppercasing of all non-literal strings.
--FORCE_MATCHING_SIGNATURE - Signature used when the CURSOR_SHARING parameter is set to FORCE
select * from dba_sql_plan_baselines; -- only signature 375614277642158684 which is the EXACT_MATCHING_SIGNATURE
-- SQL_05367332076d025c SQL_HANDLE , SQL_PLAN_0admm683qu0kw08e93fe4 PLAN_NAME
select * from dba_sql_profiles; -- here signature means 4404474968209701751 FORCE_MATCHING_SIGNATURE, with FORCE_MATCHING=yes/no
select name, force_matching, signature, created from dba_sql_profiles where signature in (select force_matching_signature from dba_hist_sqlstat where sql_id = '6fvyp18cvnzwa');
select * from dba_hist_sqlstat where sql_id in ('6fvyp18cvnzwa'); --4404474968209701751 only contains FORCE_MATCHING_SIGNATURE
-- generate explain plans of SQL handle
SELECT sql_handle FROM dba_sql_plan_baselines WHERE signature = 375614277642158684 AND ROWNUM = 1; --SQL_05367332076d025c
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE('SQL_05367332076d025c'));
}}}
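To see the two signature types side by side, a quick sketch (hypothetical literals; 2nd argument 0 = exact, 1 = force matching, same convention as above):
{{{
-- same statement with different literals:
-- the exact signatures differ, the force matching signatures are equal
select dbms_sqltune.sqltext_to_signature('select * from karlarao.skew where skew=3', 0) exact_sig,
       dbms_sqltune.sqltext_to_signature('select * from karlarao.skew where skew=3', 1) force_sig
from dual
union all
select dbms_sqltune.sqltext_to_signature('select * from karlarao.skew where skew=99', 0),
       dbms_sqltune.sqltext_to_signature('select * from karlarao.skew where skew=99', 1)
from dual;
}}}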
! how it works - detailed example using SQL Profile and SPM
{{{
--6fvyp18cvnzwa
-- start with full table scan plan
ALTER SESSION SET OPTIMIZER_USE_SQL_PLAN_BASELINES=false;
select * from karlarao.skew where skew=3;
DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '6fvyp18cvnzwa',plan_hash_value=>'246648590', fixed =>'YES', enabled=>'YES');
END;
/
create index karlarao.skew_idx on skew(skew);
exec dbms_stats.gather_index_stats(user,'SKEW_IDX', no_invalidate => false);
exec dbms_stats.gather_table_stats(user,'SKEW', no_invalidate => false);
--1myj38m1m3g2u
-- to parse and pickup the new index and create a new PHV
ALTER SESSION SET OPTIMIZER_USE_SQL_PLAN_BASELINES=false;
select * from karlarao.SKEW Where skew=3;
-- even with different SQL_ID, what matters is the text matches the EXACT_MATCHING_SIGNATURE 375614277642158684 to be tied to SQL_HANDLE SQL_05367332076d025c as a new SQL PLAN_NAME
DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '1myj38m1m3g2u',plan_hash_value=>'1949605896', fixed =>'YES', enabled=>'YES');
END;
/
SQL handle: SQL_05367332076d025c
SQL text: select * from karlarao.skew where skew=3
Plan name: SQL_PLAN_0admm683qu0kw08e93fe4
Plan hash value: 1949605896
Plan name: SQL_PLAN_0admm683qu0kw950a48a8
Plan hash value: 246648590
02:19:18 KARLARAO@cdb1> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE('SQL_05367332076d025c'));
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------
SQL handle: SQL_05367332076d025c
SQL text: select * from karlarao.skew where skew=3
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_0admm683qu0kw08e93fe4 Plan id: 149503972
Enabled: YES Fixed: YES Accepted: YES Origin: MANUAL-LOAD
Plan rows: From dictionary
--------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1949605896
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| SKEW | 1 | 7 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | SKEW_IDX | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SKEW"=3)
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_0admm683qu0kw950a48a8 Plan id: 2500479144
Enabled: YES Fixed: YES Accepted: YES Origin: MANUAL-LOAD
Plan rows: From dictionary
--------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 246648590
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 8 (100)| |
|* 1 | TABLE ACCESS FULL| SKEW | 1 | 7 | 8 (13)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 - filter("SKEW"=3)
46 rows selected.
-- another way to add a new plan to an existing baseline is to create a test SQL and add hints to get the desired plan
-- once created, the explain plan will show that both the profile and the baseline are used, although the old plan is still in effect
-- the SQL profile's plan will only be in effect once its PHV is added to the SPM baseline
select /* new */ * from karlarao.SKEW Where skew=3;
02:47:23 KARLARAO@cdb1> @copy_plan_hash_value.sql
Enter value for plan_hash_value to generate profile from (X0X0X0X0): 1949605896
Enter value for sql_id to attach profile to (X0X0X0X0): 6fvyp18cvnzwa
Enter value for child_no to attach profile to (0):
Enter value for category (DEFAULT):
Enter value for force_matching (false): true
old 18: plan_hash_value = '&&plan_hash_value_from'
new 18: plan_hash_value = '1949605896'
old 32: sql_id = '&&sql_id_to'
new 32: sql_id = '6fvyp18cvnzwa'
old 33: and child_number = &&child_no_to;
new 33: and child_number = 0;
old 37: dbms_output.put_line ('SQL_ID ' || '&&sql_id_to' || ' not found in v$sql');
new 37: dbms_output.put_line ('SQL_ID ' || '6fvyp18cvnzwa' || ' not found in v$sql');
old 49: sql_id = '&&sql_id_to';
new 49: sql_id = '6fvyp18cvnzwa';
old 53: dbms_output.put_line ('SQL_ID ' || '&&sql_id_to' || ' not found in dba_hist_sqltext');
new 53: dbms_output.put_line ('SQL_ID ' || '6fvyp18cvnzwa' || ' not found in dba_hist_sqltext');
old 60: name => 'SP_'||'&&sql_id_to'||'_'||'&&plan_hash_value_from',
new 60: name => 'SP_'||'6fvyp18cvnzwa'||'_'||'1949605896',
old 61: category => '&&category',
new 61: category => 'DEFAULT',
old 63: force_match => &&force_matching
new 63: force_match => true
PL/SQL procedure successfully completed.
DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '6fvyp18cvnzwa',plan_hash_value=>'1949605896', fixed =>'YES', enabled=>'YES');
END;
/
375614277642158684 SQL_05367332076d025c select * from karlarao.skew where skew=3 SQL_PLAN_0admm683qu0kw08e93fe4 HR MANUAL-LOAD KARLARAO
375614277642158684 SQL_05367332076d025c select * from karlarao.skew where skew=3 SQL_PLAN_0admm683qu0kw950a48a8 SYS MANUAL-LOAD SYS
}}}
http://www.dbspecialists.com/files/presentations/semijoins.html
-- MATRIX
Export/Import DataPump: The Minimum Requirements to Use Export DataPump and Import DataPump (System Privileges)
Doc ID: Note:351598.1
Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions
Doc ID: Note:553337.1
Oracle Server - Export and Import FAQ
Doc ID: 175624.1
Oracle Server - Export Data Pump and Import DataPump FAQ (Doc ID 556636.1)
Compatibility Matrix for Export And Import Between Different Oracle Versions
Doc ID: 132904.1
Compatibility and New Features when Transporting Tablespaces with Export and Import
Doc ID: 291024.1
How to Gather the Header Information and the Content of an Export Dumpfile ?
Doc ID: 462488.1
Exporting to Tape on Unix System
Doc ID: Note:30428.1
How to Estimate Export File Size Without Creating Dump File
Doc ID: Note:106465.1
Exporting on Unix Systems
Doc ID: Note:1018477.6
Exporting/Importing From Multiple Tapes
Doc ID: Note:2035.1
Exporting to Tape Fails with Errors EXP-00002 and EXP-00000
Doc ID: Note:160764.1
Large File Issues (2Gb+) when Using Export (EXP-2 EXP-15), Import (IMP-2 IMP-21), or SQL*Loader
Doc ID: Note:30528.1
Export Using the Parameter VOLSIZE
Doc ID: Note:90620.1
Parameter FILESIZE - Make Export Write to Multiple Export Files
Doc ID: Note:290810.1
Compatibility Matrix for Export And Import Between Different Oracle Versions
Doc ID: Note:132904.1
How To Copy Database Schemas To A New Database With Same Login Password ?
Doc ID: Note:336012.1
How to Capture Table Constraints onto a SQL Script
Doc ID: Note:1016836.6
Using DBMS_METADATA To Get The DDL For Objects
Doc ID: Note:188838.1
-- DATA PUMP
Oracle DataPump Quick Start
Doc ID: Note:413965.1
DataPump Export/Import Generate Messages "The Value (30) Of Maxtrans Parameter Ignored" in Alert Log
Doc ID: Note:455021.1
How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ?
Doc ID: Note:336014.1
-- CANCEL, STOP, RESTART
How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ?
Doc ID: Note:336014.1
HOW TO CLEANUP ROWS IN DBA_DATAPUMP_JOBS FOR STOPPED EXP/IMP JOBS WHEN DUMPFILE IS NOT THERE OR CORRUPTED
Doc ID: Note:294618.1
-- 32bit 64bit
Note: 277650.1 - How to Use Export and Import when Transferring Data Across Platforms or Across 32-bit and 64-bit Servers
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=277650.1
Note: 553337.1 - Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=553337.1
Note: 132904.1 - Compatibility Matrix for Export And Import Between Different Oracle Versions
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=132904.1
-- EXP IMP PERFORMANCE
http://www.oracle.com/technology/products/database/utilities/htdocs/datapump_faq.html
http://www.freelists.org/post/oracle-l/import-tuning
http://www.dba-oracle.com/oracle_tips_load_speed.htm
IMPORT / EXPORT UTILITY RUNNING EXTREMELY SLOW
Doc ID: 1012699.102
Tuning Considerations When Import Is Slow
Doc ID: 93763.1
Parallel Capabilities of Oracle Data Pump
Doc ID: 365459.1
Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data ?
Doc ID: 552424.1
-- DDL
Unix Script: IMPSHOW2SQL - Extracting SQL from an EXPORT file
Doc ID: 29765.1
How to Gather the Header Information and the Content of an Export Dumpfile ?
Doc ID: 462488.1
-- MIGRATION
How to Perform a Full Database Export Import during Upgrade, Migrate, Copy, or Move of a Database
Doc ID: 286775.1
-- EXPDP ON ASM
Creating dumpsets in ASM
Doc ID: 559878.1
How To Extract Datapump File From ASM Diskgroup To Local Filesystem?
Doc ID: 566941.1
<<showtoc>>
! ash elap - shark fin
* the idea behind the shark fin viz is simple: the biggest shark = the worst SQL
tableau calculated field
{{{
DATEDIFF('second',MIN([TMS]),max([TM]))
The above calculates the diff of the two columns below to form a shark fin; bigger fins mean a longer-running query. Then you can use the ASH dimensions to slice and dice the data (see the extract sketch after this block)
min(SQL_EXEC_START), max(SAMPLE_TIME)
}}}
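A sketch of the ASH extract that produces those two columns (one row per execution, same pattern the scripts below use):
{{{
-- one row per SQL execution; Tableau's DATEDIFF of TMS and TM draws the fin
select sql_id,
       sql_exec_id,
       min(sql_exec_start) tms,   -- [TMS]
       max(sample_time)    tm     -- [TM]
from   dba_hist_active_sess_history
where  sql_exec_start is not null
group by sql_id, sql_exec_id;
}}}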
[img(100%,100%)[https://i.imgur.com/XqWgFxM.jpg]]
[img(100%,100%)[https://i.imgur.com/OzlGpU4.png]]
! ash elap scripts (script version of shark fin using SQL_EXEC_START and SAMPLE_TIME)
!! ash_elap_topsql.sql (top n by elapsed)
https://github.com/karlarao/scripts/blob/master/performance/ash_elap_topsql.sql
{{{
00:00:33 SYS@cdb1> @ash_elap_topsql
SQL_ID SQL_TYPE MODULE PARSING_SCHEMA DISTINCT_PHV EXEC_COUNT ELAP_AVG ELAP_MIN ELAP_MAX PCT_CPU PCT_WAIT PCT_IO MAX_TEMP_MB MAX_PGA_MB MAX_READ_MB MAX_WRITE_MB MAX_RIOPS MAX_WIOPS
------------- ---------------------------- -------------------------------------------------- -------------------- ------------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------- ---------- ----------- ------------ ---------- ----------
2j1z0b4ptkqtb PL/SQL EXECUTE sqlplus@karldevfedora (TNS V1-V3) SYS 1 1 7.07 7.07 7.07 100 0 0 1 8.16 7.2 682
b9p45hkcx0pwh SELECT SYS 1 1 5.54 5.54 5.54 0 0 100 13.3 13.4 .08 675 83
cnphq355f5rah PL/SQL EXECUTE DBMS_SCHEDULER SYS 1 1 4.35 4.35 4.35 100 0 0 1 14.23 4.93 2.26 281 5
4u5zq7r9y690a SELECT DBMS_SCHEDULER SYS 2 2 2.89 2.78 3 100 0 0 44.17 40.93 .13 5157 7
acc988uzvjmmt DELETE MMON_SLAVE SYS 2 2 2.32 1.92 2.73 50 0 50 1.42 35.33 1255
6ajkhukk78nsr PL/SQL EXECUTE MMON_SLAVE SYS 14 14 1.56 .5 2.63 100 0 0 2 26.73 1.38 100
4d4gpy6vwqcyw SELECT SQL Developer HR 2 2 1.58 .66 2.49 100 0 0 2 12.09 2.24 172
3wrrjm9qtr2my SELECT MMON_SLAVE SYS 4 4 1.79 1.14 2.26 100 0 0 30.23 33.95 320
0w26sk6t6gq98 SELECT MMON_SLAVE SYS 1 1 2.12 2.12 2.12 100 0 0 1 4.36 2 249
6ajkhukk78nsr PL/SQL EXECUTE SQL Developer HR 1 1 2.12 2.12 2.12 100 0 0 2 11.53 30.8 .27 1577 18
d2tvgg49y2ap6 SELECT MMON_SLAVE SYS 1 1 1.98 1.98 1.98 100 0 0 1 22.79 2.08 217
SQL_ID SQL_TYPE MODULE PARSING_SCHEMA DISTINCT_PHV EXEC_COUNT ELAP_AVG ELAP_MIN ELAP_MAX PCT_CPU PCT_WAIT PCT_IO MAX_TEMP_MB MAX_PGA_MB MAX_READ_MB MAX_WRITE_MB MAX_RIOPS MAX_WIOPS
------------- ---------------------------- -------------------------------------------------- -------------------- ------------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------- ---------- ----------- ------------ ---------- ----------
fuws5bqghb2qh SELECT MMON_SLAVE SYS 8 8 1.17 .27 1.96 100 0 0 7.29 .11 7
2n1wa7zpt48cg SELECT MMON_SLAVE SYS 1 1 1.91 1.91 1.91 100 0 0 1 52.23 16.51 777
2afh4r7z1rfv6 INSERT MMON_SLAVE SYS 1 1 1.85 1.85 1.85 100 0 0 11.17 .43 35
1k5c5twx2xr01 INSERT DBMS_SCHEDULER SYS 1 1 1.85 1.85 1.85 0 100 0 1.17
3xjw1ncw5vh27 SELECT DBMS_SCHEDULER SYS 4 4 1.28 .41 1.84 100 0 0 12.73 1.46 139
a6ygk0r9s5xuj SELECT MMON_SLAVE SYS 8 8 1.03 .3 1.83 75 13 13 8.98 .27 20
bkfnnm1unwz2b SELECT SQL Developer HR 1 1 1.83 1.83 1.83 100 0 0 25 26.23 189.17 .31 4607 8
15wvjr16nbyf9 SELECT MMON_SLAVE SYS 1 1 1.76 1.76 1.76 100 0 0 1 25.23 4.63 363
3vg8wn9rtb8r6 SELECT DBMS_SCHEDULER SYS 1 1 1.73 1.73 1.73 100 0 0 2 27.73 34.94 .14 2437 6
c9umxngkc3byq SELECT MMON_SLAVE SYS 3 3 1.26 .89 1.73 100 0 0 .86
gbb40ccatx69g SELECT sqlplus@karldevfedora (TNS V1-V3) SYS 1 1 1.72 1.72 1.72 100 0 0 31.11 1.13 89
}}}
!! ash_elap_hist.sql (by elap > seconds filter)
https://github.com/karlarao/scripts/blob/master/performance/ash_elap_hist.sql
{{{
23:57:26 SYS@cdb1> @ash_elap_hist
DBA_HIST_ACTIVE_SESS_HISTORY - ash_elap by exec (recent)
~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for run_time_sec: 5
old 26: where run_time_sec > &run_time_sec
new 26: where run_time_sec > 5
SQL_ID SQL_EXEC_ID SQL_PLAN_HASH_VALUE SQL_EXEC_START RUN_TIME_TIMESTAMP RUN_TIME_SEC
------------- ----------- ------------------- ------------------------------ ------------------------------ ------------
2j1z0b4ptkqtb 16777216 0 12-APR-21 12.19.03.000000 AM +000000000 00:00:07.071 7.071
b9p45hkcx0pwh 16777216 0 12-APR-21 10.58.07.000000 PM +000000000 00:00:05.541 5.541
DBA_HIST_ACTIVE_SESS_HISTORY - ash_elap exec avg min max
~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for sql_id: 2j1z0b4ptkqtb
SQL_PLAN_HASH_VALUE COUNT(*) AVG MIN MAX
------------------- ---------- ---------- ---------- ----------
0 1 7.07 7.07 7.07
1 7.07 7.07 7.07
}}}
!! ash_elap_hist_user.sql (by parsing_schema)
https://github.com/karlarao/scripts/blob/master/performance/ash_elap_hist_user.sql
{{{
23:58:26 SYS@cdb1> @ash_elap_hist_user
DBA_HIST_ACTIVE_SESS_HISTORY - ash_elap by exec (recent)
~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for user: HR
old 21: and user_id in (select user_id from dba_users where upper(username) like upper('&&user'))
new 21: and user_id in (select user_id from dba_users where upper(username) like upper('HR'))
SQL_ID SQL_EXEC_ID SQL_PLAN_HASH_VALUE SQL_EXEC_START RUN_TIME_TIMESTAMP RUN_TIME_SEC
------------- ----------- ------------------- ------------------------------ ------------------------------ ------------
510myug0qnp5j 16777216 0 06-APR-21 09.25.23.000000 PM +000000000 00:00:00.331 .331
bkfnnm1unwz2b 16777216 2479479715 07-APR-21 01.19.13.000000 AM +000000000 00:00:01.826 1.826
6ajkhukk78nsr 16777218 0 11-APR-21 11.53.05.000000 PM +000000000 00:00:02.124 2.124
5h7w8ykwtb2xt 16777433 4166561850 11-APR-21 11.53.36.000000 PM +000000000 00:00:01.166 1.166
4d4gpy6vwqcyw 16777220 1820166347 12-APR-21 02.03.27.000000 AM +000000000 00:00:02.494 2.494
4d4gpy6vwqcyw 16777223 1820166347 12-APR-21 02.51.13.000000 AM +000000000 00:00:00.663 .663
6 rows selected.
DBA_HIST_ACTIVE_SESS_HISTORY - ash_elap exec avg min max
~~~~~~~~~~~~~~~~~~~~~~~~~
SQL_PLAN_HASH_VALUE COUNT(*) AVG MIN MAX
------------------- ---------- ---------- ---------- ----------
2479479715 1 1.83 1.83 1.83
1820166347 2 1.58 .66 2.49
4166561850 1 1.17 1.17 1.17
0 2 1.23 .33 2.12
6 1.43 .33 2.49
}}}
! ash_elap as part of planx.sql (sqldb360)
https://github.com/karlarao/scripts/blob/master/performance/planx.sql
https://github.com/karlarao/sqldb360/blob/master/sql/planx.sql (same as above)
* I customized the planx.sql to include the ash_elap scripts to have a better view of recent elapsed time and performance of the SQL_ID
** temp,pga,read,write,riops,wiops by SQL_EXEC_ID
** PHV avg,min,max
{{{
@planx Y brfw9gfks2d37
}}}
{{{
SQL_ID: brfw9gfks2d37
SIGNATURE: 9196040709030148699
SIGNATUREF: 9196040709030148699
DB NAME: CDB1
INSTANCE NAME: cdb1
PDB NAME: cdb1
select min(hire_date), max(hire_date) from hr.employees
GV$SQL_PLAN_STATISTICS_ALL LAST (ordered by inst_id and child_number)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Inst: 1 Child: 0 Plan hash value: 1756381138
---------------------------------------------------------------------------------
| Id | Operation | Name | E-Rows |E-Bytes| Cost (%CPU)| E-Time |
---------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | TABLE ACCESS FULL| EMPLOYEES | 107 | 856 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------------
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
1 - SEL$1
2 - SEL$1 / EMPLOYEES@SEL$1
Outline Data
-------------
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
DB_VERSION('12.1.0.2')
OPT_PARAM('_optimizer_use_feedback' 'false')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "EMPLOYEES"@"SEL$1")
END_OUTLINE_DATA
*/
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) MAX("HIRE_DATE")[7], MIN("HIRE_DATE")[7]
2 - (rowset=200) "HIRE_DATE"[DATE,7]
Note
-----
- Warning: basic plan statistics not available. These are only collected when:
* hint 'gather_plan_statistics' is used for the statement or
* parameter 'statistics_level' is set to 'ALL', at session or system level
GV$ACTIVE_SESSION_HISTORY - ash_elap by exec (recent)
~~~~~~~~~~~~~~~~~~~~~~~~~
SOURCE SQL_ID SQL_EXEC_ID SQL_PLAN_HASH_VALUE SQL_EXEC_START RUN_TIME_TIMESTAMP RUN_TIME_SEC TEMP_MB PGA_MB READ_MB WRITE_MB RIOPS WIOPS
-------- ------------- ----------- ------------------- ------------------------------ ------------------------------ ------------ ---------- ---------- ---------- ---------- ---------- ----------
realtime brfw9gfks2d37 16777225 1756381138 12-APR-21 09.54.58.000000 PM +000000000 00:00:00.483 .483 0 1.49 .2 0 19 0
realtime brfw9gfks2d37 16777285 1756381138 12-APR-21 09.55.08.000000 PM +000000000 00:00:00.493 .493 0 1.49 .2 0 19 0
GV$ACTIVE_SESSION_HISTORY - ash_elap exec avg min max
~~~~~~~~~~~~~~~~~~~~~~~~~
SOURCE SQL_PLAN_HASH_VALUE COUNT(*) AVG MIN MAX
-------- ------------------- ---------- ---------- ---------- ----------
realtime 1756381138 2 .49 .48 .49
2 .49 .48 .49
SOURCE SQL_PLAN_HASH_VALUE COUNT(*) AVG MIN MAX
---------- ------------------- ---------- ---------- ---------- ----------
0
GV$ACTIVE_SESSION_HISTORY
~~~~~~~~~~~~~~~~~~~~~~~~~
SOURCE SAMPLES PERCENT TIMED_EVENT
-------- ---------------- -------- ----------------------------------------------------------------------
realtime 30 100.0 ON CPU
GV$ACTIVE_SESSION_HISTORY - by inst_id
~~~~~~~~~~~~~~~~~~~~~~~~~
SOURCE SAMPLES PERCENT INST_ID TIMED_EVENT
-------- ---------------- -------- ---------- ----------------------------------------------------------------------
realtime 30 100.0 1 ON CPU
GV$ACTIVE_SESSION_HISTORY
~~~~~~~~~~~~~~~~~~~~~~~~~
SAMPLES PERCENT PLAN_HASH_VALUE LINE_ID OPERATION TIMED_EVENT
---------------- -------- --------------- -------- -------------------------------------------------- ----------------------------------------------------------------------
13 43.3 1756381138 0 ON CPU
12 40.0 1475428744 0 ON CPU
2 6.7 0 0 ON CPU
1 3.3 1756381138 0 SELECT STATEMENT ON CPU
1 3.3 1756381138 1 SORT AGGREGATE ON CPU
1 3.3 1756381138 2 TABLE ACCESS FULL ON CPU
GV$ACTIVE_SESSION_HISTORY
~~~~~~~~~~~~~~~~~~~~~~~~~
SAMPLES PERCENT PLAN_HASH_VALUE LINE_ID OPERATION CURRENT_OBJECT TIMED_EVENT
---------------- -------- --------------- -------- -------------------------------------------------- ------------------------------------------------------------ ----------------------------------------------------------------------
13 43.3 1756381138 0 SERIAL -1 ON CPU
12 40.0 1475428744 0 SERIAL -1 ON CPU
2 6.7 0 0 SERIAL -1 ON CPU
1 3.3 1756381138 2 TABLE ACCESS FULL SERIAL -1 ON CPU
1 3.3 1756381138 0 SELECT STATEMENT SERIAL -1 ON CPU
1 3.3 1756381138 1 SORT AGGREGATE SERIAL -1 ON CPU
GV$ACTIVE_SESSION_HISTORY - px distribution
~~~~~~~~~~~~~~~~~~~~~~~~~
SQL_EXEC_START SQL_EXEC_ID SQL_PLAN_HASH_VALUE SQL_PLAN_LINE_ID DOP PROGRAM COUNT(*)
------------------------------ ----------- ------------------- ---------------- ---------- ------------------------------------------------------------ ----------
12-APR-21 09.54.58.000000 PM 16777225 1756381138 1 SERIAL sqlplus@karldevfedora (TNS V1-V3) 1
12-APR-21 09.55.08.000000 PM 16777285 1756381138 2 SERIAL sqlplus@karldevfedora (TNS V1-V3) 1
}}}
! ash elap explained
!! elapsed time (wall clock) using ASH
http://dboptimizer.com/2011/05/04/sql-execution-times-from-ash/
http://dboptimizer.com/2011/05/06/sql-timings-for-ash-ii/
http://dboptimizer.com/2011/05/06/sql-ash-timings-iii/
<<<
* this is a pretty awesome way of characterizing the response times of SQLs. Another way of doing this is through 10046 trace and the Mr. Tools utilities, and there are so many things you can do with both tools. Another thing I'm interested in (although not related to this tiddler) is getting the IO size distribution from the 10046 trace alongside the data coming from ASH, which is basically pulled from the p1,p2,p3 values of the IO events.
<<<
{{{
[oracle@oel5-11g bin]$ cat ash_test.sh
export DATE=$(date +%Y%m%d%H%M%S%N)
sqlplus "/ as sysdba" <<EOF
set timing on
set echo on
spool all_nodes_full_table_scan_$DATE.log
select /* ash_elapsed */ * from
(select owner, object_name from karltest
where owner = 'SYSTEM'
and object_type = 'TABLE'
union
select owner, object_name from karltest
where owner = 'SYSTEM'
and object_type = 'INDEX')
order by object_name
/
spool off
exit
EOF
[oracle@oel5-11g bin]$
[oracle@oel5-11g bin]$ cat loadtest.sh
(( n=0 ))
while (( n<$1 ));do
(( n=n+1 ))
sh ash_test.sh &
done
}}}
{{{
[oracle@oel5-11g bin]$ ls -ltr
total 1468
-rwxr-xr-x 1 oracle oinstall 107 Apr 23 08:21 startdb.sh
-rwxr-xr-x 1 oracle oinstall 118 Apr 23 08:21 stopdb.sh
-rw-r--r-- 1 oracle oinstall 127675 May 5 18:12 all_nodes_full_table_scan_20110505181225583938000.log
-rw-r--r-- 1 oracle oinstall 127675 May 5 18:17 all_nodes_full_table_scan_20110505181508275739000.log
-rw-r--r-- 1 oracle oinstall 127675 May 5 18:17 all_nodes_full_table_scan_20110505181508273773000.log
-rw-r--r-- 1 oracle oinstall 127675 May 5 18:17 all_nodes_full_table_scan_20110505181508273060000.log
-rw-r--r-- 1 oracle oinstall 127675 May 5 18:17 all_nodes_full_table_scan_20110505181508269189000.log
-rw-r--r-- 1 oracle oinstall 127675 May 5 18:17 all_nodes_full_table_scan_20110505181508265790000.log
-rw-r--r-- 1 oracle oinstall 127675 May 5 18:17 all_nodes_full_table_scan_20110505181508262532000.log
-rw-r--r-- 1 oracle oinstall 127675 May 5 18:17 all_nodes_full_table_scan_20110505181508259253000.log
-rw-r--r-- 1 oracle oinstall 127675 May 5 18:17 all_nodes_full_table_scan_20110505181508256596000.log
-rw-r--r-- 1 oracle oinstall 127675 May 5 18:17 all_nodes_full_table_scan_20110505181508251337000.log
-rw-r--r-- 1 oracle oinstall 127675 May 5 18:17 all_nodes_full_table_scan_20110505181508245849000.log
-rw-r--r-- 1 oracle oinstall 64 May 5 19:23 loadtest.sh
-rw-r--r-- 1 oracle oinstall 397 May 5 19:23 ash_test.sh
}}}
{{{
[oracle@oel5-11g bin]$ cat *log | grep Elapsed
Elapsed: 00:00:15.00
Elapsed: 00:02:00.41
Elapsed: 00:02:00.10
Elapsed: 00:02:00.03
Elapsed: 00:02:00.15
Elapsed: 00:02:00.32
Elapsed: 00:02:00.08
Elapsed: 00:02:00.20
Elapsed: 00:01:59.99
Elapsed: 00:02:00.31
Elapsed: 00:02:00.11
}}}
{{{
SELECT /* example */ substr(sql_text, 1, 80) sql_text,
sql_id,
hash_value, address, child_number, plan_hash_value, FIRST_LOAD_TIME
FROM v$sql
WHERE
--sql_id = '6wps6tju5b8tq'
-- hash_value = 1481129178
sql_text LIKE '%ash_elapsed%'
AND sql_text NOT LIKE '%example%'
order by first_load_time;
SQL_TEXT SQL_ID HASH_VALUE ADDRESS CHILD_NUMBER PLAN_HASH_VALUE FIRST_LOAD_TIME
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------- ---------- ---------------- ------------ --------------- ----------------------------------------------------------------------------
select /* ash_elapsed */ * from (select owner, object_name from karltest where o gy6j5kg641saa 3426804042 000000006C523480 0 1959977140 2011-05-05/18:12:25
}}}
{{{
select sql_id,
run_time run_time_timestamp,
(EXTRACT(HOUR FROM run_time) * 3600
+ EXTRACT(MINUTE FROM run_time) * 60
+ EXTRACT(SECOND FROM run_time)) run_time_sec
from (
select
sql_id,
max(sample_time - sql_exec_start) run_time
from
dba_hist_active_sess_history
where
sql_exec_start is not null
group by sql_id,SQL_EXEC_ID
order by sql_id
)
-- where rownum < 100
where sql_id = 'gy6j5kg641saa'
order by sql_id, run_time desc
/
SQL_ID RUN_TIME_TIMESTAMP RUN_TIME_SEC
------------- --------------------------------------------------------------------------- ------------
gy6j5kg641saa +000000000 00:01:54.575 114.575
gy6j5kg641saa +000000000 00:01:54.575 114.575
gy6j5kg641saa +000000000 00:01:54.575 114.575
gy6j5kg641saa +000000000 00:01:54.575 114.575
gy6j5kg641saa +000000000 00:01:54.575 114.575
gy6j5kg641saa +000000000 00:01:54.575 114.575
gy6j5kg641saa +000000000 00:01:54.575 114.575
gy6j5kg641saa +000000000 00:01:54.575 114.575
gy6j5kg641saa +000000000 00:01:53.575 113.575
gy6j5kg641saa +000000000 00:01:53.575 113.575
gy6j5kg641saa +000000000 00:00:11.052 11.052
11 rows selected.
}}}
{{{
select sql_id,
count(*),
round(avg(EXTRACT(HOUR FROM run_time) * 3600
+ EXTRACT(MINUTE FROM run_time) * 60
+ EXTRACT(SECOND FROM run_time)),2) avg ,
round(min(EXTRACT(HOUR FROM run_time) * 3600
+ EXTRACT(MINUTE FROM run_time) * 60
+ EXTRACT(SECOND FROM run_time)),2) min ,
round(max(EXTRACT(HOUR FROM run_time) * 3600
+ EXTRACT(MINUTE FROM run_time) * 60
+ EXTRACT(SECOND FROM run_time)),2) max
from (
select
sql_id,
max(sample_time - sql_exec_start) run_time
from
dba_hist_active_sess_history
where
sql_exec_start is not null
and sql_id = 'gy6j5kg641saa'
group by sql_id,SQL_EXEC_ID
order by sql_id
)
-- where rownum < 100
group by sql_id
order by avg desc
/
SQL_ID COUNT(*) AVG MIN MAX
------------- ---------- -------- ---------- ----------
gy6j5kg641saa 11 104.980 11.05 114.58
}}}
-- Also verify the data points and avg min max in Excel
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TcKKM6OwQNI/AAAAAAAABQQ/6AunDw4VDvI/avgminmax.png]]
{{{
SQL> select count(*) from karltest;
COUNT(*)
----------
2215968
SQL> insert into karltest select * from dba_objects;
69249 rows created.
Elapsed: 00:00:00.86
SQL> commit;
Commit complete.
SQL> select count(*) from karltest;
COUNT(*)
----------
69249
[oracle@oel5-11g bin]$ cat *log | grep Elapsed
Elapsed: 00:00:00.67
Elapsed: 00:00:00.35
Elapsed: 00:00:01.16
Elapsed: 00:00:00.33
Elapsed: 00:00:00.35
Elapsed: 00:00:00.31
Elapsed: 00:00:00.31
Elapsed: 00:00:01.32
Elapsed: 00:00:00.34
Elapsed: 00:00:00.31
SELECT /* example */ substr(sql_text, 1, 80) sql_text,
sql_id,
hash_value, address, child_number, plan_hash_value, FIRST_LOAD_TIME
FROM v$sql
WHERE
--sql_id = '6wps6tju5b8tq'
-- hash_value = 1481129178
sql_text LIKE '%ash_elapsed2%'
AND sql_text NOT LIKE '%example%'
order by first_load_time;
SQL> SELECT /* example */ substr(sql_text, 1, 80) sql_text,
2 sql_id,
3 hash_value, address, child_number, plan_hash_value, FIRST_LOAD_TIME
4 FROM v$sql
5 WHERE
6 --sql_id = '6wps6tju5b8tq'
7 -- hash_value = 1481129178
8 sql_text LIKE '%ash_elapsed2%'
9 AND sql_text NOT LIKE '%example%'
order by first_load_time; 10
SQL_TEXT SQL_ID HASH_VALUE ADDRESS CHILD_NUMBER PLAN_HASH_VALUE FIRST_LOAD_TIME
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------- ---------- ---------------- ------------ --------------- ----------------------------------------------------------------------------
select /* ash_elapsed2 */ * from (select owner, object_name from karltest where 4bkcftyvj2j6p 3071362261 000000006C776858 0 1959977140 2011-05-05/19:59:58
SQL> BEGIN
DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
END;
/ 2 3 4
PL/SQL procedure successfully completed.
SQL> select sql_id,
2 run_time run_time_timestamp,
3 (EXTRACT(HOUR FROM run_time) * 3600
+ EXTRACT(MINUTE FROM run_time) * 60
4 5 + EXTRACT(SECOND FROM run_time)) run_time_sec
6 from (
7 select
8 sql_id,
9 max(sample_time - sql_exec_start) run_time
10 from
11 dba_hist_active_sess_history
12 where
13 sql_exec_start is not null
14 group by sql_id,SQL_EXEC_ID
order by sql_id
15 16 )
-- where rownum < 100
17 18 where sql_id = '4bkcftyvj2j6p'
19 order by sql_id, run_time desc
/ 20
no rows selected
}}}
!! Making use of STDDEV on elapsed time
This gets the avg, min, max, and stddev of elapsed time for a specific time window; you can then drill down further with a join on dba_hist_active_sess_history using particular filters (module, user, etc.)
{{{
-- CREATE A TEMP TABLE THAT SHOWS AVG,MIN,MAX,STDDEV RESPONSE TIME OF SQLS
define begin='03/08/2012 14:40'
define end='03/08/2012 14:45'
SYS@fsprd2> create table karl_sql_id2 as
select sql_id,
2 3 count(*) count,
4 round(avg(EXTRACT(HOUR FROM run_time) * 3600
5 + EXTRACT(MINUTE FROM run_time) * 60
6 + EXTRACT(SECOND FROM run_time)),2) avg ,
7 round(min(EXTRACT(HOUR FROM run_time) * 3600
8 + EXTRACT(MINUTE FROM run_time) * 60
9 + EXTRACT(SECOND FROM run_time)),2) min ,
10 round(max(EXTRACT(HOUR FROM run_time) * 3600
11 + EXTRACT(MINUTE FROM run_time) * 60
12 + EXTRACT(SECOND FROM run_time)),2) max,
13 round(stddev(EXTRACT(HOUR FROM run_time) * 3600
14 + EXTRACT(MINUTE FROM run_time) * 60
15 + EXTRACT(SECOND FROM run_time)),2) stddev
16 from (
17 select
18 sql_id,
19 max(sample_time - sql_exec_start) run_time
20 from
21 dba_hist_active_sess_history
22 where
23 sql_exec_start is not null
24 and sample_time
25 between to_date('&begin', 'MM/DD/YY HH24:MI:SS')
26 and to_date('&end', 'MM/DD/YY HH24:MI:SS')
27 group by sql_id,SQL_EXEC_ID
28 order by sql_id
29 )
30 group by sql_id
31 order by avg desc
32 /
Table created.
define _start_time='03/08/2012 14:40'
define _end_time='03/08/2012 14:45'
SYS@fsprd2> select * from karl_sql_id2
where sql_id in
2 3 (select sql_id from
4 dba_hist_active_sess_history
5 where sample_time
6 between to_date('&_start_time', 'MM/DD/YY HH24:MI')
7 and to_date('&_end_time', 'MM/DD/YY HH24:MI')
8 and lower(module) like 'ex_%')
9 order by stddev asc;
SQL_ID COUNT AVG MIN MAX STDDEV
------------- ---------- ---------- ---------- ---------- ----------
aadkvg74cknvc 1 .8 .8 .8 0
c96tdmv2wu0mb 1 .81 .81 .81 0
03zk40yazk2cj 1 .81 .81 .81 0
89s2kmgjcyg08 1 1.96 1.96 1.96 0
cb5gq5xu04sbb 3 2.6 1.92 3.93 1.15
991y15af5jxx9 5 2.07 .96 5.93 2.16
c2fn0swka653f 6 18.94 9.99 28.99 7.28
7 rows selected.
}}}
First I've set up my own mail server (my DNS, NTP, and Samba are also hosted in the same VM)... [[R&D Mail Server]] http://www.evernote.com/shard/s48/sh/799368fe-07f0-4ebf-8a92-8b295e9bcf0d/61f0bb8e887507684925fad01d3f9245
Setup Email Notification
http://www.evernote.com/shard/s48/sh/a0869438-b44d-4b39-a280-c138dc21ac84/48be976fcc4fc894e8713d261cfc644a
tablespacealerts and repvfy install
http://www.evernote.com/shard/s48/sh/9568bb0c-c65b-482f-903b-b4b792e5f927/4745645ebf375d8abc950ca3f059dc3a
tablespacealerts-fixdbtimezone (I don't think you have to deal with this)
http://www.evernote.com/shard/s48/sh/9520da28-d89d-4b63-adbd-04b0cb4d819e/cfaa06dbdf41046a0597694180d66c43
''related notes''
RAC Metrics: Unable to get E-mail Notification for some metrics against Cluster Databases (Doc ID 403886.1)
! capture
{{{
Kindly send the 3 files to the customer and run them on each of the databases that will be consolidated. The readme.txt shows the steps
$ ls
esp_collect-master.zip readme.txt run_awr-quickextract-master.zip
Please execute the following for each database as sysdba; for RAC just run the scripts on the 1st node
-- edb360-master
----------------------------------------
$ unzip edb360-master.zip
$ cd edb360-master
$ sqlplus / as sysdba
$ @edb360.sql T 31
-- esp_collect-master
----------------------------------------
$ unzip esp_collect-master.zip
$ cd esp_collect-master
$ sqlplus / as sysdba
$ @sql/resources_requirements.sql
$ @sql/esp_collect_requirements.sql
$ mv res_requirements_<hostname>.txt res_requirements_<hostname>_<databasename>.txt
$ mv esp_requirements_<hostname>.csv esp_requirements_<hostname>_<databasename>.csv
$ cat /proc/cpuinfo | grep -i name | sort | uniq >> cpuinfo_model_name.txt
$ zip esp_output.zip res_requirements_*.txt esp_requirements_*.csv cpuinfo_model_name.txt
-- run_awr-quickextract
----------------------------------------
$ unzip run_awr-quickextract-master.zip
$ cd run_awr-quickextract-master
$ sqlplus / as sysdba
$ @run_all.sql
$ zip run_awr_output.zip *tar
}}}
! quick howto
{{{
0) pull the esp
1) concatenate the est files
2) create the client
3) load the esp file, associate to the file to the client
4) check the summary
5) admin -> client -> click on pencil
6) check on host tab -> edit the specint
7) plan is implicitly created
8) click on report
9) click config per plan
stack
edit the params
}}}
http://en.wikipedia.org/wiki/The_Open_Group_Architecture_Framework
1) Setup Yum and install the following rpms
yum install curl compat-libstdc++-33 glibc nspluginwrapper
2) Download the flash player RPM
http://get.adobe.com/flashplayer/
rpm -ivh flash-plugin.rpm
3) Close Firefox and restart it
http://www.flashconf.com/how-to/how-to-install-flash-player-on-centosredhat-linux/
! on 6.5
http://www.sysads.co.uk/2014/01/install-adobe-flash-player-11-2-centosrhel-6-5/
-- FAQ
Enterprise Manager Database Console FAQ (Doc ID 863631.1)
Master Note for Grid Control 11.1.0.1.0 Installation and Upgrade [ID 1067438.1] <-- MASTER NOTE
Oracle Support Master Note for 10g Grid Control OMS Performance Issues (Doc ID 1161003.1)
http://blogs.oracle.com/db/2010/09/oracle_support_master_note_for_10g_grid_control_oms_performance_issues_doc_id_11610031_1.html
-- INSTALLATION 10gR2
Doc ID: 763351.1 Documentation Reference for Grid Control 10.2.0.5.0 Installation and Upgrade
Note 412431.1 - Oracle Enterprise Manager 10g Grid Control Certification Checker
Note 464674.1 - Checklist for EM 10g Grid Control 10.2.x to 10.2.0.4/10.2.0.5 OMS and Repository Upgrades
Note 784963.1 - How to Install Grid Control 10.2.0.5.0 on Enterprise Linux 5 Using the Existing Database (11g) Option
Note 793870.1 - How to Install Grid Control 10.2.0.5.0 on Enterprise Linux 4 Using the Existing Database (11g) Option
Note 604520.1 - How to Install Grid Control 10.2.0.4.0 with an Existing (10.2.X.X/11.1.0.6) Database using the Software-only Option
Doc ID: 467677.1 How to Install Grid Control 10.2.0.4.0 to use an 11g Database for the Repository
Doc ID: 780836.1 How to Install Grid Control 10.2.0.5.0 on Enterprise Linux 5 Using the New Database Option <-- got from Jeff Hunter
-- INSTALLATION 11g
Enterprise Manager Grid Control and Database Control Certification with 11g R2 Database [ID 1266977.1]
11g Grid Control: 11.2.0.1 Database Containing Grid Control Repository Generates Core Dump with ORA-07445 Error [ID 1305569.1]
Checklist for EM 10g Grid Control 10.2.0.4/10.2.0.5 to 11.1.0.1.0 OMS and Repository Upgrades [ID 1073166.1]
Grid Control 11g: How to Install 11.1.0.1.0 on OEL5.3 x86_64 with a 11.1.0.7.0 Repository Database [ID 1064495.1]
Grid Control 11g install fails at OMS configuration stage - Wrong Weblogic Server version used. [ID 1135493.1]
http://kkempf.wordpress.com/2010/05/08/em-11g-grid-control-install/
http://www.ora-solutions.net/papers/HowTo_Installation_GridControl_11g_RHEL5.pdf
http://gavinsoorma.com/2010/10/11g-grid-control-installation-tips-and-solutions/
http://www.emarcel.com/myblog/44-oraclearticles/136-installingoem11gr1
http://www.oracle-wiki.net/startdocsgridcontrollinuxagentinstall11gmanual
http://www.oracle-wiki.net/startdocsgridcontrollinuxagentinstall11g
http://www.oracle-wiki.net/startdocsgridcontrolpostimplementation11g#toc9
http://www.oracle-wiki.net/startdocshowtobuildgridcontrol11101
http://download.oracle.com/docs/cd/E11857_01/install.111/e16847/install_agent_on_clstr.htm#CHDHEBFE <-- official doc
https://forums.oracle.com/forums/thread.jspa?threadID=2244102
http://www.gokhanatil.com/2011/08/how-to-deploy-em-grid-control-11g-agent.html <-- on windows
Installing Enterprise Manager Grid Control Fails with Error 'OUI-10133 Invalid staging area' [ID 443513.1] <-- staging
11g Grid Control: Details of the Directory Structure and Commonly Used Locations in a 11g OMS Installation [ID 1276554.1] <-- the detailed directory structure
-- OEM MAA
MAA home page http://www.oracle.com/technetwork/database/features/availability/em-maa-155389.html
http://docs.oracle.com/cd/E11857_01/em.111/e16790/part3.htm#sthref1164 <-- four levels of HA
http://docs.oracle.com/cd/E11857_01/em.111/e16790/ha_single_resource.htm#CHDEHBEG <-- single resource config
http://docs.oracle.com/cd/E11857_01/em.111/e16790/ha_multi_resource.htm#BABDAJEE <-- multiple resource config
http://blogs.oracle.com/db/entry/oracle_support_master_note_for_configuring_10g_grid_control_components_for_high_availability <-- collection of MOS notes for OEM HA
Enterprise Manager Community: Four Stages to MAA in Grid Control [ID 985082.1]
How To Configure Enterprise Manager for High Availability [ID 330072.1]
-- TROUBLESHOOTING
Files Needed for Troubleshooting an EM 10G Service Request if an RDA is not Available [ID 405755.1]
How to Run the RDA against a Grid Control Installation [ID 1057051.1]
Files to Upload for an Enterprise Manager Grid Control 10g Service Request [ID 377124.1]
-- CONSOLE, WEBSITE
Differences Between Oracle Enterprise Manager Console and Oracle Enterprise Manager Web Site
Doc ID: Note:222667.1
-- DATABASE CONTROL
278100.1 drop recreate dbconsole
Master Note for Enterprise Manager Configuration Assistant (EMCA) in Single Instance Database Environment [ID 1099271.1]
-- GRID CONTROL
Comparison Between the Database Healthcheck and Database Response Metrics
Doc ID: Note:469227.1
Overview Comparison of EM 9i to EM10g Features
Doc ID: Note:277066.1
EM 10gR2 GRID Control Release Notes (10.2.0.1.0)
Doc ID: Note:356236.1
OCM: Software Configuration Manager (SCM formerly known as MCP): FAQ and Troubleshooting for Oracle Configuration Manager (OCM)
Doc ID: Note:369619.1
Enterprise Manager Grid Control 10g (10.1.0) Frequently Asked Questions
Doc ID: Note:273579.1
Differences Between Oracle Enterprise Manager Console and Oracle Enterprise Manager Web Site
Doc ID: Note:222667.1
Where Are The Tuning Pack Advisors For A 9i DB Within 10g Gc Control?
Doc ID: Note:299729.1
Grid Control Reports FAQ
Doc ID: Note:460894.1
How do you display performance data for a period greater than 31 days in Enterprise Manager
Doc ID: Note:363880.1
Enterprise Manager DST Quick Fix Guide
Doc ID: Note:418792.1
What can you patch using Grid Control?
Doc ID: Note:457979.1
How To Discover RAC Listeners Started On VIPs In Grid Control
Doc ID: Note:461420.1
EM2GO (Enterprise Manager Grid Control 10G) Frequently Asked Questions
Doc ID: Note:400193.1
How To Access Advisor Central for 9i Target Databases in Grid Control
Doc ID: Note:332971.1
Frequently Asked Questions (FAQ) for EM Tuning Pack 9i
Doc ID: Note:169548.1
Where Are The Tuning Pack Advisors For A 9i DB Within 10g Gc Control?
Doc ID: Note:299729.1
Frequently Asked Questions (FAQ) for the EM Diagnostics Pack 9i
Doc ID: Note:169551.1
-- ISSUES/BUGS
Note 387212.1 - How to Locate the Installation Logs for Grid Control 10.2.0.x
10.2 Grid Agent Can Break RAID Mirroring and Cause Hard Disk To Go Offline
Doc ID: 454647.1
Known Issues: When Installing Grid Control Using Existing Database Which Is Configured With ASM
Doc ID: 738445.1
Doc ID: 787872.1 Grid Control 10.2.0.5.0 Known Issues
Files to Upload for an Enterprise Manager Grid Control 10g Service Request
Doc ID: 377124.1
Database Control Status Of Db Instance Is Unmounted (Doc ID 550712.1)
Problem: Database Status Unavailable in Grid Control with Metric Collection Error (Doc ID 340158.1)
Database Control Showing Database Status as Currently Unavailable. Connect via sqlplus is successful. (Doc ID 315299.1)
Grid Control shows Database Status as Unmounted on the db Homepage, but the Database is actually Open (Doc ID 1094524.1)
PROBLEM: Top Activity Page Fails With Error "Java.Sql.Sqlexception: Unknown Host Specified" In Grid Control 11.1 [ID 1183783.1] <-- issue we had on exadata
-- METRICS
How to - Disable the Host Storage Metric on Multiple Hosts using an Enterprise Manager Job
Doc ID: 560905.1
Troubleshooting guide to remove old warning and critical alerts from grid console
Doc ID: 806052.1
Note 748630.1 - How to clear an Alert in Enterprise Manager Grid Control
Warning Alerts Still Reported for Metrics That Have Been Disabled
Doc ID: 744115.1
Understanding Oracle 10G - Server Generated Alerts
Doc ID: 266970.1
-- ''EMAIL NOTIFICATIONS''
Problem - RDBMS metrics, e.g.Tablespace Full(%), not clearing in Grid Control even though they are no longer present in dba_outstanding_alerts [ID 455222.1]
Understanding Oracle 10G - Server Generated Alerts [ID 266970.1]
How to set up dbconsole to send email notifications for a metric alert (eg. tablespace full) [ID 1266924.1]
Configuring Email Notification Method in EM - Steps and Troubleshooting [ID 429426.1]
New Features for Notifications in 10.2.0.5 Enterprise Manager Grid Control [ID 813399.1]
How to Add/Update Email Addresses and Configure a Notification Schedule in Grid Control ? [ID 438150.1]
How to Test an SMTP Mail Gateway From a Command Line Interface [ID 74269.1]
Grid Control and SMTP Authentication [ID 429836.1]
What are Short and Long Email Formats in Enterprise Manager Notifications? [ID 429292.1]
How To Configure Notification Rules in Enterprise Manager Grid Control? [ID 429422.1]
Configuring SNMP Trap Notification Method in EM - Steps and Troubleshooting [ID 434886.1]
Configuring Notifications for Job Executions in Enterprise Manager [ID 414409.1]
What are the Packs required for using the Notifications feature from Grid Control? [ID 552788.1]
How to Enable 'Repeat Notifications' from Enterprise Manager Grid Control? [ID 464847.1]
How to Troubleshoot Notifications That Are Hung / Stuck and Not Being Sent from EM 10g [ID 285093.1]
http://download.oracle.com/docs/cd/B16240_01/doc/em.102/b25986/oracle_database.htm#sthref1176
EMDIAG REPVFY Kit - Download, Install/De-Install and Upgrade [ID 421499.1]
All Email Notifications Arrive With 1 Hour Delay In Grid control [ID 413718.1]
EMDIAG Master Index [ID 421053.1]
EMDIAG REPVFY Kit - How to Use the Repository Diagnostics [ID 421563.1]
EMDIAG REPVFY Kit - Download, Install/De-Install and Upgrade [ID 421499.1]
EMDIAG REPVFY Kit - Environment Variables [ID 421586.1]
EMDIAG REPVFY Kit - How to Configure the 'repvfy.cfg' File [ID 421600.1]
change dbtimzone https://forums.oracle.com/forums/thread.jspa?threadID=572038
Timestamps & time zones - Frequently Asked Questions [ID 340512.1]
http://practicaloracle.blogspot.com/2007/10/oracle-enterprise-manager-10g.html
Note: 271367.1 - Oracle Alert Alert Check Setup Test
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=271367.1
Note: 577392.1 - How To Check Oracle Alert Setup?
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=577392.1
Note: 75030.1 - Troubleshooting Oracle Alert on NT
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=75030.1
Note: 152687.1 - How to Troubleshoot E-mail and Alerts
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=152687.1
-- ENTERPRISE MANAGER
How to: Grant Access to non-DBA users to the Database Performance Tab
Doc ID: 455191.1
-- DISCOVERY
How to Troubleshoot EM Discovery Problems Caused by the Intelligent Agent Setup
Doc ID: Note:166935.1
-- TUTORIAL
Centrally Managing Your Enterprise Environment With EM 10g Grid Control - Oracle by Example Lesson
Doc ID: 277090.1
-- LINUX PACK
Un-Install / Rollback of RPM's on Linux OS from Enterprise Manager
Doc ID: 436535.1
Patching Linux Hosts through Deployment Procedure from Enterprise Manager
Doc ID: 436485.1
-- MIGRATION, DB2GC (starting 10.2.0.3)
Migrate targets from DB Control to Grid Control - db2gc
Doc ID: 605578.1
How To Move the Grid Control Repository Using an Inconsistent (Hot) Database Backup
Doc ID: 602955.1
-----------------------------------------------------------------------------------------------------------------------------------------------
Oracle Created Database Users: Password, Usage and Files References
Doc ID: Note:160861.1
How to change the password of the 10g database user sysman
Doc ID: Note:259379.1
How to change the password of the 10g database user dbsnmp
Doc ID: Note:259387.1
How to Start the Central Management Agent on an AS Instance Host
Doc ID: Note:297727.1
Understanding Network Address Translation in EM 10g Grid Control
Doc ID: Note:299595.1
How To Find Agents with time-skew problems
Doc ID: Note:359524.1
10gR2 - Where is the Management Services & Repository Monitoring Page in Grid Control
Doc ID: Note:356795.1
Problem: Performance: Agent High CPU Consumption
Doc ID: Note:361612.1
How To Perform Periodic Maintenance and Improve Performance of Grid Control Repository
Doc ID: Note:387957.1
How to Start and Stop Enterprise Manager Components
Doc ID: Note:298991.1
How to Start and Stop Enterprise Manager Components
Doc ID: Note:298991.1
How to: Add The Domain Name Of The Host To Name Of The Agent
Doc ID: Note:295949.1
Understanding the Enterprise Manager Management Agent 10g 'emd.properties' File
Doc ID: Note:235290.1
Understanding Oracle Enterprise Manager 10g Agent Resource Consumption
Doc ID: Note:375509.1
Understanding the Enterprise Manager 10g Grid Control Management Agent
Doc ID: Note:234872.1
How To Install A Grid Management Agent On 10g Rac Cluster And On Single Node
Doc ID: Note:309635.1
How do you display performance data for a period greater than 31 days in Enterprise Manager
Doc ID: Note:363880.1
How to Restrict access for EM Database Control only from Specific Hosts / IPs
Doc ID: Note:438493.1
How Do You Configure An Agent After Hostname Change?
Doc ID: Note:423565.1
Problem: Config: Why Does The Em-Application.Log Grow So Large?
Doc ID: Note:403525.1
Files Needed for Troubleshooting an EM 10G Service Request if an RDA is not Available
Doc ID: Note:405755.1
Files to Upload for an Enterprise Manager Grid Control 10g Service Request
Doc ID: Note:377124.1
HOW TO: give a user only read only access of Enterprise Manager Database Control
Doc ID: Note:465520.1
The dbconsole fails to start after a change in the hostname.
Doc ID: Note:467598.1
How to: Configure the DB Console to Use Dedicated Server Processes
Doc ID: Note:432972.1
Basic Troubleshooting Guide For Grid Control Oracle Management Server (OMS) Midtier
Doc ID: Note:550395.1
How to Point an Agent to a different Grid Control OMS and Repository?
Doc ID: Note:413228.1
How to Log and Trace the EM 10g Management Agents
Doc ID: Note:229624.1
How to Install The Downloadable Central Management Agent in EM 10g Grid Control
Doc ID: Note:235287.1
How To Discover An AS Instance In EM 10g Grid Control
Doc ID: Note:297721.1
How To Rename A Database Target in Grid Control
Doc ID: Note:295014.1
EM 10g Target Discovery White Paper
Doc ID: Note:239224.1
How to Cleanly De-Install the EM 10g Agent on Windows and Unix
Doc ID: Note:438158.1
How To Discover a Standalone Webcache Installation In EM 10g Grid Control
Doc ID: Note:297734.1
Problem: Database Upgraded, Now Database Home Page In Grid Control Still Shows Old Oracle Home
Doc ID: Note:290731.1
Is it Possible to Manage a Standalone OC4J Target using Grid Control?
Doc ID: Note:414635.1
How To Add a New OC4J Target To The Grid Control
Doc ID: Note:290261.1
How To Find the Target_name Of a 10G Database Or Other Grid Target
Doc ID: Note:371643.1
How to Remove a Target From The EM 10g Grid Control Console
Doc ID: Note:271691.1
Problem: App Server Not Being Discovered By Grid Agent
Doc ID: Note:454600.1
Problem: Not All Duplicate Database Target Names Can Be Discovered In Grid Control
Doc ID: Note:443520.1
How To Manually Add A Target (Host) To Grid Control 10g
Doc ID: Note:279975.1
Howto: How to remove a deleted agent from the GRID Control repository database?
Doc ID: Note:454081.1
How to Troubleshoot Grid Control Provisioning and Deployment Setup Issues.
Doc ID: Note:466798.1
----- GRID CONTROL UPGRADE -------------------------------
Problem: Listener Referring Old Oracle Home After Upgrading From 10.1.0.4 to 10.2.0.1
Doc ID: Note:423439.1
Problem: 10.1.0.4.0 Upgrade: Additional Management Service Patching Needs First Oms To Be Stopped
Doc ID: Note:377303.1
How to Obtain Patch 4329444 for Upgrading Grid Control Repository to 10.2.0.3.0 / 10.2.0.4 on Windows
Doc ID: Note:456928.1
How To Find RDBMS patchsets on Metalink
Doc ID: Note:438049.1
How To Find and Download The Latest Patchset and Associated Patch Number For Oracle Database Release
Doc ID: Note:330374.1
How to be notified for all ORA- Errors recorded in the alert.log file
Doc ID: Note:405396.1
Different Upgrade Methods For Upgrading Your Database
Doc ID: Note:419550.1
Procedure To Upgrade The Database From 8.1.7.4.0 In AIX 4.3.3 64-bit To 10.2.0.X.0 On AIX 5L 64-bit
Doc ID: Note:413968.1
How to upgrade database control from 10gR1 to 10gR2 using emca upgrade
Doc ID: Note:465518.1
Does The RMAN Catalog Need To Be Downgraded When The Database Is Downgraded?
Doc ID: Note:558364.1
Complete checklist for manual upgrades of Oracle databases from any version to any version on any platform (documents only from 7.3.x>>8.0.x>>8.1.x>>9.0.x>>9.2.x>>10.1.x>>10.2.x>>11.1.x)
Doc ID: Note:421191.1
Key RDBMS Install Differences in 11gR1
Doc ID: Note:431768.1
Complete Checklist for Manual Upgrades to 10gR2
Doc ID: Note:316889.1
COMPATIBLE Initialization Parameter While Upgrading To 10gR2
Doc ID: Note:413186.1
RMAN Compatibility Matrix
Doc ID: Note:73431.1
How to upgrade a 10.1.0.5.0 Repository Database for Grid Control to a 10.2.0.2.0 Repository Database
Doc ID: Note:399520.1
Steps to upgrade 10.2.0.2.0 (or) higher Repository Database for EM Grid Control to 11.1.0.6.0
Doc ID: Note:467586.1
EM2GO (Enterprise Manager Grid Control 10G) Frequently Asked Questions
Doc ID: Note:400193.1
Quick Link to EM 10g Grid Control Installation Documentation
Doc ID: Note:414700.1
Installation Checklist for EM 10g Grid Control 10.1.x.x to 10.2.0.1 OMS and Repository Upgrades
Doc ID: Note:401592.1
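Related to the COMPATIBLE note above (Doc 413186.1), a quick sanity check I find handy before any manual upgrade -- a minimal sketch assuming local sqlplus access as SYSDBA; remember that once COMPATIBLE is raised it cannot be lowered back without a restore:
{{{
# check the current COMPATIBLE setting before raising it
sqlplus -s / as sysdba <<'EOF'
SHOW PARAMETER compatible;
SELECT value FROM v$parameter WHERE name = 'compatible';
EOF
}}}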
-- RHEL 4.2
Prerequisites and Install Information for EM 10g Grid Control Components on Red Hat EL 4.0 Update 2 Platforms
Doc ID: Note:343364.1
-- SYSMAN and DBSNMP PASSWORD
Enterprise Manager Database Console FAQ
Doc ID: 863631.1
Oracle Created Database Users: Password, Usage and Files References
Doc ID: 160861.1
How To Change The DBSNMP Password For Multiple Databases Using Batch Mode EMCLI
Doc ID: 377357.1
Problem: dbsnmp password change for RAC database only updates one agent targets.xml
Doc ID: 368925.1
Problem: Modifying The SYS, SYSMAN, and DBSNMP Passwords Using EMCLI Fails
Doc ID: 369946.1
Dbsnmp Password Not Accepted
Doc ID: 337260.1
How to change the DBSNMP passwords for a target database in Grid Console?
Doc ID: 748668.1
Problem: The DBSNMP Account Becomes Locked And Database Shows A Status Of Down With A Metric Collection Error Of 'Ora-28000'
Doc ID: 352585.1
Security Risk on DBSNMP User Password Due to catsnmp.sql Launched by catproc.sql
Doc ID: 206870.1
How to Change the Monitoring Credentials for Database Targets in EM 10g
Doc ID: 271627.1
How to change the password of the 10g database user sysman <-- used for SR
Doc ID: 259379.1
How to Change the Password of the 10g Database User Dbsnmp <-- used for SR
Doc ID: 259387.1
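Tying the DBSNMP notes above together, a minimal sketch (hypothetical password shown) for the common ORA-28000 case where the account gets locked: reset it on the target database, then update the monitoring credentials in Grid Control to match, per Doc 271627.1:
{{{
# unlock and reset DBSNMP on the target database
sqlplus -s / as sysdba <<'EOF'
ALTER USER dbsnmp ACCOUNT UNLOCK;
ALTER USER dbsnmp IDENTIFIED BY "NewPassw0rd";
EOF
# then set the same password as the monitoring credential in Grid Control
# (target's Monitoring Configuration page), otherwise the agent keeps
# failing logins and re-locks the account
}}}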
http://www.andrewcmaxwell.com/2009/11/100-different-evernote-uses/
''Evernote''
How are attachments stored on my local machine? https://www.evernote.com/shard/s2/note/4cab39c8-f700-4570-881d-bfd5dff2cf0f/ensupport/faq#b=c88dd0ac-32c1-4bc5-b3f4-50612072e0ad&n=4cab39c8-f700-4570-881d-bfd5dff2cf0f
http://forensicartifacts.com/2011/06/evernote-note-storage/
File Locations
On Windows 7: C:\Users\\AppData\Local\Evernote\Evernote\Database\.exb
http://stackoverflow.com/questions/4471725/how-to-open-a-evernote-file-extension-exb
<<<
here are the features that I like:
# I heavily take notes on paper/mind maps/etc., then take a photo of it (iPhone) and send it to my Evernote email
# the text on the photos is searchable -- CTRL-F across all notes/photos for a search string, and it even recognizes my handwriting
# you can password-protect notes
# I can embed Word/Excel/PDF files in each note
# I can get a shareable link and post it on my tiddlywiki
# there's a clip feature you can use for webpages and emails
and I'm sure there's more stuff
so for my note-taking purposes Evernote alone would not suffice; I also use tiddlywiki http://karlarao.tiddlyspot.com/#About to organize the notes in an ala-mind-map manner.. and both of them are in the cloud. If my laptop breaks down now, I can pull the tiddlywiki, save it in a folder, and just re-sync my Evernote as well as my Dropbox folder.. and I'll be productive again.
<<<
! annoying - How to Disable Automatic Spell Check in Evernote on a Mac
{{{
To do this, simply:
Open Evernote
Open a file within your Notebooks (anything you can type on)
Find a typed word, and right-click on it
Go to Spelling and Grammar in the menu that pops up
In the sub-menu, uncheck "Correct Spelling Automatically"
Repeat step 3, then go to Substitutions and uncheck "Text Replacement" if necessary
}}}
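If the per-note menu route doesn't stick, a system-wide alternative (my assumption, not from Evernote's docs) is to switch off automatic spelling correction in the macOS global domain:
{{{
# disable automatic spelling correction for all apps via the global default
defaults write NSGlobalDomain NSAutomaticSpellingCorrectionEnabled -bool false
# restart Evernote afterwards for the change to take effect
}}}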
! add fonts - Courier New
http://www.christopher-mayo.com/?p=1512
! evernote limits
http://www.christopher-mayo.com/?p=169
https://www.google.com/search?q=oracle+exa+cs
https://www.oracle.com/technetwork/database/exadata/exadataservice-ds-2574134.pdf
! cons
<<<
* every database is a CDB and is created with a separate Oracle home
* a consolidated environment is created as a PDB
<<<
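Since every database comes up as a CDB, a quick way to confirm the layout on a given instance -- a minimal sketch using only standard dictionary views and sqlplus commands:
{{{
# confirm the CDB/PDB layout on an instance
sqlplus -s / as sysdba <<'EOF'
SELECT name, cdb FROM v$database;   -- CDB = YES on a container database
SHOW PDBS;                          -- lists the PDBs and their open mode
EOF
}}}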
{{{
--V2
> cat /etc/fstab
/dev/md5 / ext3 defaults,usrquota,grpquota 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/md2 swap swap defaults 0 0
/dev/md7 /opt/oracle ext3 defaults,nodev 1 1
/dev/md4 /boot ext3 defaults,nodev 1 1
/dev/md11 /var/log/oracle ext3 defaults,nodev 1 1
[enkcel01:root] /root
>
[enkcel01:root] /root
> df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/md5 ext3 9.9G 5.8G 3.6G 62% /
tmpfs tmpfs 12G 0 12G 0% /dev/shm
/dev/md7 ext3 2.0G 684M 1.3G 36% /opt/oracle
/dev/md4 ext3 116M 42M 69M 38% /boot
/dev/md11 ext3 2.3G 149M 2.1G 7% /var/log/oracle
[enkcel01:root] /root
>
[enkcel01:root] /root
> mount
/dev/md5 on / type ext3 (rw,usrquota,grpquota)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md7 on /opt/oracle type ext3 (rw,nodev)
/dev/md4 on /boot type ext3 (rw,nodev)
/dev/md11 on /var/log/oracle type ext3 (rw,nodev)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
> cat /proc/mdstat
Personalities : [raid1]
md4 : active raid1 sdb1[1] sda1[0]
120384 blocks [2/2] [UU]
md5 : active raid1 sdb5[1] sda5[0]
10482304 blocks [2/2] [UU]
md6 : active raid1 sdb6[1] sda6[0]
10482304 blocks [2/2] [UU]
md7 : active raid1 sdb7[1] sda7[0]
2096384 blocks [2/2] [UU]
md8 : active raid1 sdb8[1] sda8[0]
2096384 blocks [2/2] [UU]
md2 : active raid1 sdb9[1] sda9[0]
2096384 blocks [2/2] [UU]
md11 : active raid1 sdb11[1] sda11[0]
2433728 blocks [2/2] [UU]
md1 : active raid1 sdb10[1] sda10[0]
714752 blocks [2/2] [UU]
unused devices: <none>
[enkcel01:root] /root
>
[enkcel01:root] /root
> mdadm --misc --detail /dev/md4
/dev/md4:
Version : 0.90
Creation Time : Sat May 15 13:47:10 2010
Raid Level : raid1
Array Size : 120384 (117.58 MiB 123.27 MB)
Used Dev Size : 120384 (117.58 MiB 123.27 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 4
Persistence : Superblock is persistent
Update Time : Sun Oct 16 04:22:03 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : d529a7ad:ed5936bb:b0502716:e8114570
Events : 0.88
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
[enkcel01:root] /root
> mdadm --misc --detail /dev/md5
/dev/md5:
Version : 0.90
Creation Time : Sat May 15 13:47:19 2010
Raid Level : raid1
Array Size : 10482304 (10.00 GiB 10.73 GB)
Used Dev Size : 10482304 (10.00 GiB 10.73 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 5
Persistence : Superblock is persistent
Update Time : Sun Oct 23 03:47:34 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 11ba27c1:6d6fa21d:8fa278dc:2cb77a67
Events : 0.70
Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
1 8 21 1 active sync /dev/sdb5
[enkcel01:root] /root
> mdadm --misc --detail /dev/md6
/dev/md6:
Version : 0.90
Creation Time : Sat May 15 13:47:34 2010
Raid Level : raid1
Array Size : 10482304 (10.00 GiB 10.73 GB)
Used Dev Size : 10482304 (10.00 GiB 10.73 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 6
Persistence : Superblock is persistent
Update Time : Sun Oct 16 04:28:03 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : b9e70f92:9f86d4fd:e0cf405d:df6b60ef
Events : 0.26
Number Major Minor RaidDevice State
0 8 6 0 active sync /dev/sda6
1 8 22 1 active sync /dev/sdb6
[enkcel01:root] /root
> mdadm --misc --detail /dev/md7
/dev/md7:
Version : 0.90
Creation Time : Sat May 15 13:48:06 2010
Raid Level : raid1
Array Size : 2096384 (2047.59 MiB 2146.70 MB)
Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 7
Persistence : Superblock is persistent
Update Time : Sun Oct 23 03:47:28 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 056ad8df:b649ca96:cc7d1691:c2e85879
Events : 0.90
Number Major Minor RaidDevice State
0 8 7 0 active sync /dev/sda7
1 8 23 1 active sync /dev/sdb7
[enkcel01:root] /root
> mdadm --misc --detail /dev/md8
/dev/md8:
Version : 0.90
Creation Time : Sat May 15 13:48:55 2010
Raid Level : raid1
Array Size : 2096384 (2047.59 MiB 2146.70 MB)
Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 8
Persistence : Superblock is persistent
Update Time : Sun Oct 16 04:23:44 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 9650421a:fd228e8e:e2e291ce:f8970923
Events : 0.84
Number Major Minor RaidDevice State
0 8 8 0 active sync /dev/sda8
1 8 24 1 active sync /dev/sdb8
[enkcel01:root] /root
> mdadm --misc --detail /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Sat May 15 13:46:43 2010
Raid Level : raid1
Array Size : 2096384 (2047.59 MiB 2146.70 MB)
Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Sun Oct 16 04:23:00 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 14811b0c:f3cf3622:03f81e8a:89b2d031
Events : 0.78
Number Major Minor RaidDevice State
0 8 9 0 active sync /dev/sda9
1 8 25 1 active sync /dev/sdb9
[enkcel01:root] /root
> mdadm --misc --detail /dev/md11
/dev/md11:
Version : 0.90
Creation Time : Wed Sep 8 13:18:37 2010
Raid Level : raid1
Array Size : 2433728 (2.32 GiB 2.49 GB)
Used Dev Size : 2433728 (2.32 GiB 2.49 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 11
Persistence : Superblock is persistent
Update Time : Sun Oct 23 03:47:52 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 55c92014:d351004a:539a77be:6c759d6e
Events : 0.94
Number Major Minor RaidDevice State
0 8 11 0 active sync /dev/sda11
1 8 27 1 active sync /dev/sdb11
[enkcel01:root] /root
> mdadm --misc --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Sat May 15 13:46:44 2010
Raid Level : raid1
Array Size : 714752 (698.12 MiB 731.91 MB)
Used Dev Size : 714752 (698.12 MiB 731.91 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Sun Oct 16 04:22:18 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : bdfabe50:2c42a387:120614c4:2f682052
Events : 0.78
Number Major Minor RaidDevice State
0 8 10 0 active sync /dev/sda10
1 8 26 1 active sync /dev/sdb10
> parted /dev/sda print
Model: LSI MR9261-8i (scsi)
Disk /dev/sda: 1999GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 123MB 123MB primary ext3 boot, raid
2 123MB 132MB 8225kB primary ext2
3 132MB 1968GB 1968GB primary
4 1968GB 1999GB 31.1GB extended lba
5 1968GB 1979GB 10.7GB logical ext3 raid
6 1979GB 1989GB 10.7GB logical ext3 raid
7 1989GB 1991GB 2147MB logical ext3 raid
8 1991GB 1994GB 2147MB logical ext3 raid
9 1994GB 1996GB 2147MB logical linux-swap raid
10 1996GB 1997GB 732MB logical raid
11 1997GB 1999GB 2492MB logical ext3 raid
Information: Don't forget to update /etc/fstab, if necessary.
[enkcel01:root] /root
> parted /dev/sdb print
Model: LSI MR9261-8i (scsi)
Disk /dev/sdb: 1999GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 123MB 123MB primary ext3 boot, raid
2 123MB 132MB 8225kB primary ext2
3 132MB 1968GB 1968GB primary
4 1968GB 1999GB 31.1GB extended lba
5 1968GB 1979GB 10.7GB logical ext3 raid
6 1979GB 1989GB 10.7GB logical ext3 raid
7 1989GB 1991GB 2147MB logical ext3 raid
8 1991GB 1994GB 2147MB logical ext3 raid
9 1994GB 1996GB 2147MB logical linux-swap raid
10 1996GB 1997GB 732MB logical raid
11 1997GB 1999GB 2492MB logical ext3 raid
Information: Don't forget to update /etc/fstab, if necessary.
> fdisk -l
Disk /dev/sda: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 15 120456 fd Linux raid autodetect
/dev/sda2 16 16 8032+ 83 Linux
/dev/sda3 17 239246 1921614975 83 Linux
/dev/sda4 239247 243031 30403012+ f W95 Ext'd (LBA)
/dev/sda5 239247 240551 10482381 fd Linux raid autodetect
/dev/sda6 240552 241856 10482381 fd Linux raid autodetect
/dev/sda7 241857 242117 2096451 fd Linux raid autodetect
/dev/sda8 242118 242378 2096451 fd Linux raid autodetect
/dev/sda9 242379 242639 2096451 fd Linux raid autodetect
/dev/sda10 242640 242728 714861 fd Linux raid autodetect
/dev/sda11 242729 243031 2433816 fd Linux raid autodetect
Disk /dev/sdb: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 15 120456 fd Linux raid autodetect
/dev/sdb2 16 16 8032+ 83 Linux
/dev/sdb3 17 239246 1921614975 83 Linux
/dev/sdb4 239247 243031 30403012+ f W95 Ext'd (LBA)
/dev/sdb5 239247 240551 10482381 fd Linux raid autodetect
/dev/sdb6 240552 241856 10482381 fd Linux raid autodetect
/dev/sdb7 241857 242117 2096451 fd Linux raid autodetect
/dev/sdb8 242118 242378 2096451 fd Linux raid autodetect
/dev/sdb9 242379 242639 2096451 fd Linux raid autodetect
/dev/sdb10 242640 242728 714861 fd Linux raid autodetect
/dev/sdb11 242729 243031 2433816 fd Linux raid autodetect
Disk /dev/sdc: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/sdf: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sdg: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdg doesn't contain a valid partition table
Disk /dev/sdh: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdh doesn't contain a valid partition table
Disk /dev/sdi: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdi doesn't contain a valid partition table
Disk /dev/sdj: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdj doesn't contain a valid partition table
Disk /dev/sdk: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdk doesn't contain a valid partition table
Disk /dev/sdl: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdl doesn't contain a valid partition table
Disk /dev/sdm: 4009 MB, 4009754624 bytes
124 heads, 62 sectors/track, 1018 cylinders
Units = cylinders of 7688 * 512 = 3936256 bytes
Device Boot Start End Blocks Id System
/dev/sdm1 1 1017 3909317 83 Linux
Disk /dev/md1: 731 MB, 731906048 bytes
2 heads, 4 sectors/track, 178688 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md11: 2492 MB, 2492137472 bytes
2 heads, 4 sectors/track, 608432 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md11 doesn't contain a valid partition table
Disk /dev/md2: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md2 doesn't contain a valid partition table
Disk /dev/md8: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md8 doesn't contain a valid partition table
Disk /dev/md7: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md7 doesn't contain a valid partition table
Disk /dev/md6: 10.7 GB, 10733879296 bytes
2 heads, 4 sectors/track, 2620576 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md6 doesn't contain a valid partition table
Disk /dev/md5: 10.7 GB, 10733879296 bytes
2 heads, 4 sectors/track, 2620576 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md5 doesn't contain a valid partition table
Disk /dev/md4: 123 MB, 123273216 bytes
2 heads, 4 sectors/track, 30096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md4 doesn't contain a valid partition table
Disk /dev/sdn: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdn doesn't contain a valid partition table
Disk /dev/sdo: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdo doesn't contain a valid partition table
Disk /dev/sdp: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdp doesn't contain a valid partition table
Disk /dev/sdq: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdq doesn't contain a valid partition table
Disk /dev/sdr: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdr doesn't contain a valid partition table
Disk /dev/sds: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sds doesn't contain a valid partition table
Disk /dev/sdt: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdt doesn't contain a valid partition table
Disk /dev/sdu: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdu doesn't contain a valid partition table
Disk /dev/sdv: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdv doesn't contain a valid partition table
Disk /dev/sdw: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdw doesn't contain a valid partition table
Disk /dev/sdx: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdx doesn't contain a valid partition table
Disk /dev/sdy: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdy doesn't contain a valid partition table
Disk /dev/sdz: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdz doesn't contain a valid partition table
Disk /dev/sdaa: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdaa doesn't contain a valid partition table
Disk /dev/sdab: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdab doesn't contain a valid partition table
Disk /dev/sdac: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdac doesn't contain a valid partition table
--V2
CellCLI> list physicaldisk attributes all
35:0 23 HardDisk 35 0 0 false 0_0 "HITACHI H7220AA30SUN2.0T" JKAOA28A 2010-05-15T21:10:45-05:00 sata JK11D1YAJB8GGZ 1862.6559999994934G 0 normal
35:1 24 HardDisk 35 0 0 false 0_1 "HITACHI H7220AA30SUN2.0T" JKAOA28A 2010-05-15T21:10:46-05:00 sata JK11D1YAJB4V0Z 1862.6559999994934G 1 normal
35:2 25 HardDisk 35 0 0 false 0_2 "HITACHI H7220AA30SUN2.0T" JKAOA28A 2010-05-15T21:10:47-05:00 sata JK11D1YAJAZMMZ 1862.6559999994934G 2 normal
35:3 26 HardDisk 35 0 0 false 0_3 "HITACHI H7220AA30SUN2.0T" JKAOA28A 2010-05-15T21:10:49-05:00 sata JK11D1YAJ7JX2Z 1862.6559999994934G 3 normal
35:4 27 HardDisk 35 0 0 false 0_4 "HITACHI H7220AA30SUN2.0T" JKAOA28A 2010-05-15T21:10:50-05:00 sata JK11D1YAJ60R8Z 1862.6559999994934G 4 normal
35:5 28 HardDisk 35 0 0 false 0_5 "HITACHI H7220AA30SUN2.0T" JKAOA28A 2010-05-15T21:10:51-05:00 sata JK11D1YAJB4J8Z 1862.6559999994934G 5 normal
35:6 29 HardDisk 35 0 0 false 0_6 "HITACHI H7220AA30SUN2.0T" JKAOA28A 2010-05-15T21:10:52-05:00 sata JK11D1YAJ7JXGZ 1862.6559999994934G 6 normal
35:7 30 HardDisk 35 0 0 false 0_7 "HITACHI H7220AA30SUN2.0T" JKAOA28A 2010-05-15T21:10:54-05:00 sata JK11D1YAJB4E5Z 1862.6559999994934G 7 normal
35:8 31 HardDisk 35 4 0 false 0_8 "HITACHI H7220AA30SUN2.0T" JKAOA28A 2010-05-15T21:10:55-05:00 sata JK11D1YAJ8TY3Z 1862.6559999994934G 8 normal
35:9 32 HardDisk 35 0 0 false 0_9 "HITACHI H7220AA30SUN2.0T" JKAOA28A 2010-05-15T21:10:56-05:00 sata JK11D1YAJ8TXKZ 1862.6559999994934G 9 normal
35:10 33 HardDisk 35 0 0 false 0_10 "HITACHI H7220AA30SUN2.0T" JKAOA28A 2010-05-15T21:10:58-05:00 sata JK11D1YAJ8TYLZ 1862.6559999994934G 10 normal
35:11 34 HardDisk 35 0 0 false 0_11 "HITACHI H7220AA30SUN2.0T" JKAOA28A 2010-05-15T21:10:59-05:00 sata JK11D1YAJAZNKZ 1862.6559999994934G 11 normal
FLASH_1_0 FlashDisk 0 0 0 0 0 0 1_0 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02JC3 22.8880615234375G 0 "PCI Slot: 1; FDOM: 0" normal
FLASH_1_1 FlashDisk 0 0 0 0 0 0 1_1 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02JYG 22.8880615234375G 0 "PCI Slot: 1; FDOM: 1" normal
FLASH_1_2 FlashDisk 0 0 0 0 0 0 1_2 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02JV9 22.8880615234375G 0 "PCI Slot: 1; FDOM: 2" normal
FLASH_1_3 FlashDisk 0 0 0 0 0 0 1_3 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02J93 22.8880615234375G 0 "PCI Slot: 1; FDOM: 3" normal
FLASH_2_0 FlashDisk 0 0 0 0 0 0 2_0 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02JFK 22.8880615234375G 0 "PCI Slot: 2; FDOM: 0" normal
FLASH_2_1 FlashDisk 0 0 0 0 0 0 2_1 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02JFL 22.8880615234375G 0 "PCI Slot: 2; FDOM: 1" normal
FLASH_2_2 FlashDisk 0 0 0 0 0 0 2_2 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02JF7 22.8880615234375G 0 "PCI Slot: 2; FDOM: 2" normal
FLASH_2_3 FlashDisk 0 0 0 0 0 0 2_3 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02JF8 22.8880615234375G 0 "PCI Slot: 2; FDOM: 3" normal
FLASH_4_0 FlashDisk 0 0 0 0 0 0 4_0 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02HP5 22.8880615234375G 0 "PCI Slot: 4; FDOM: 0" normal
FLASH_4_1 FlashDisk 0 0 0 0 0 0 4_1 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02HNN 22.8880615234375G 0 "PCI Slot: 4; FDOM: 1" normal
FLASH_4_2 FlashDisk 0 0 0 0 0 0 4_2 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02HP2 22.8880615234375G 0 "PCI Slot: 4; FDOM: 2" normal
FLASH_4_3 FlashDisk 0 0 0 0 0 0 4_3 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02HP4 22.8880615234375G 0 "PCI Slot: 4; FDOM: 3" normal
FLASH_5_0 FlashDisk 0 0 0 0 0 0 5_0 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02JUD 22.8880615234375G 0 "PCI Slot: 5; FDOM: 0" normal
FLASH_5_1 FlashDisk 0 0 0 0 0 0 5_1 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02JVF 22.8880615234375G 0 "PCI Slot: 5; FDOM: 1" normal
FLASH_5_2 FlashDisk 0 0 0 0 0 0 5_2 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02JAP 22.8880615234375G 0 "PCI Slot: 5; FDOM: 2" normal
FLASH_5_3 FlashDisk 0 0 0 0 0 0 5_3 "MARVELL SD88SA02" D20Y 2011-05-06T12:00:49-05:00 sas 1014M02JVH 22.8880615234375G 0 "PCI Slot: 5; FDOM: 3" normal
CellCLI>
CellCLI> list lun attributes all
0_0 CD_00_cell01 /dev/sda HardDisk 0_0 TRUE FALSE 1861.712890625G 0_0 35:0 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_1 CD_01_cell01 /dev/sdb HardDisk 0_1 TRUE FALSE 1861.712890625G 0_1 35:1 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_2 CD_02_cell01 /dev/sdc HardDisk 0_2 FALSE FALSE 1861.712890625G 0_2 35:2 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_3 CD_03_cell01 /dev/sdd HardDisk 0_3 FALSE FALSE 1861.712890625G 0_3 35:3 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_4 CD_04_cell01 /dev/sde HardDisk 0_4 FALSE FALSE 1861.712890625G 0_4 35:4 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_5 CD_05_cell01 /dev/sdf HardDisk 0_5 FALSE FALSE 1861.712890625G 0_5 35:5 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_6 CD_06_cell01 /dev/sdg HardDisk 0_6 FALSE FALSE 1861.712890625G 0_6 35:6 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_7 CD_07_cell01 /dev/sdh HardDisk 0_7 FALSE FALSE 1861.712890625G 0_7 35:7 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_8 CD_08_cell01 /dev/sdi HardDisk 0_8 FALSE FALSE 1861.712890625G 0_8 35:8 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_9 CD_09_cell01 /dev/sdj HardDisk 0_9 FALSE FALSE 1861.712890625G 0_9 35:9 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_10 CD_10_cell01 /dev/sdk HardDisk 0_10 FALSE FALSE 1861.712890625G 0_10 35:10 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_11 CD_11_cell01 /dev/sdl HardDisk 0_11 FALSE FALSE 1861.712890625G 0_11 35:11 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
1_0 FD_00_enkcel01 /dev/sds FlashDisk 1_0 FALSE FALSE 22.8880615234375G 100.0 FLASH_1_0 normal
1_1 FD_01_enkcel01 /dev/sdr FlashDisk 1_1 FALSE FALSE 22.8880615234375G 100.0 FLASH_1_1 normal
1_2 FD_02_enkcel01 /dev/sdt FlashDisk 1_2 FALSE FALSE 22.8880615234375G 100.0 FLASH_1_2 normal
1_3 FD_03_enkcel01 /dev/sdu FlashDisk 1_3 FALSE FALSE 22.8880615234375G 100.0 FLASH_1_3 normal
2_0 FD_04_enkcel01 /dev/sdz FlashDisk 2_0 FALSE FALSE 22.8880615234375G 99.9 FLASH_2_0 normal
2_1 FD_05_enkcel01 /dev/sdaa FlashDisk 2_1 FALSE FALSE 22.8880615234375G 100.0 FLASH_2_1 normal
2_2 FD_06_enkcel01 /dev/sdab FlashDisk 2_2 FALSE FALSE 22.8880615234375G 100.0 FLASH_2_2 normal
2_3 FD_07_enkcel01 /dev/sdac FlashDisk 2_3 FALSE FALSE 22.8880615234375G 100.0 FLASH_2_3 normal
4_0 FD_08_enkcel01 /dev/sdn FlashDisk 4_0 FALSE FALSE 22.8880615234375G 100.0 FLASH_4_0 normal
4_1 FD_09_enkcel01 /dev/sdo FlashDisk 4_1 FALSE FALSE 22.8880615234375G 100.0 FLASH_4_1 normal
4_2 FD_10_enkcel01 /dev/sdp FlashDisk 4_2 FALSE FALSE 22.8880615234375G 100.0 FLASH_4_2 normal
4_3 FD_11_enkcel01 /dev/sdq FlashDisk 4_3 FALSE FALSE 22.8880615234375G 100.0 FLASH_4_3 normal
5_0 FD_12_enkcel01 /dev/sdv FlashDisk 5_0 FALSE FALSE 22.8880615234375G 100.0 FLASH_5_0 normal
5_1 FD_13_enkcel01 /dev/sdw FlashDisk 5_1 FALSE FALSE 22.8880615234375G 100.0 FLASH_5_1 normal
5_2 FD_14_enkcel01 /dev/sdx FlashDisk 5_2 FALSE FALSE 22.8880615234375G 100.0 FLASH_5_2 normal
5_3 FD_15_enkcel01 /dev/sdy FlashDisk 5_3 FALSE FALSE 22.8880615234375G 100.0 FLASH_5_3 normal
CellCLI>
CellCLI> list celldisk attributes all
CD_00_cell01 2010-05-28T13:09:11-05:00 /dev/sda /dev/sda3 HardDisk 0 0 00000128-e01a-793d-0000-000000000000 none 0_0 0 1832.59375G normal
CD_01_cell01 2010-05-28T13:09:15-05:00 /dev/sdb /dev/sdb3 HardDisk 0 0 00000128-e01a-8c16-0000-000000000000 none 0_1 0 1832.59375G normal
CD_02_cell01 2010-05-28T13:09:16-05:00 /dev/sdc /dev/sdc HardDisk 0 0 00000128-e01a-8e29-0000-000000000000 none 0_2 0 1861.703125G normal
CD_03_cell01 2010-05-28T13:09:16-05:00 /dev/sdd /dev/sdd HardDisk 0 0 00000128-e01a-904a-0000-000000000000 none 0_3 0 1861.703125G normal
CD_04_cell01 2010-05-28T13:09:17-05:00 /dev/sde /dev/sde HardDisk 0 0 00000128-e01a-9274-0000-000000000000 none 0_4 0 1861.703125G normal
CD_05_cell01 2010-05-28T13:09:18-05:00 /dev/sdf /dev/sdf HardDisk 0 1122.8125G ((offset=738.890625G,size=1122.8125G)) 00000128-e01a-948e-0000-000000000000 none 0_5 0 1861.703125G normal
CD_06_cell01 2010-05-28T13:09:18-05:00 /dev/sdg /dev/sdg HardDisk 0 0 00000128-e01a-96a9-0000-000000000000 none 0_6 0 1861.703125G normal
CD_07_cell01 2010-05-28T13:09:19-05:00 /dev/sdh /dev/sdh HardDisk 0 0 00000128-e01a-98ce-0000-000000000000 none 0_7 0 1861.703125G normal
CD_08_cell01 2010-05-28T13:09:19-05:00 /dev/sdi /dev/sdi HardDisk 0 0 00000128-e01a-9aec-0000-000000000000 none 0_8 0 1861.703125G normal
CD_09_cell01 2010-05-28T13:09:20-05:00 /dev/sdj /dev/sdj HardDisk 0 0 00000128-e01a-9cfe-0000-000000000000 none 0_9 0 1861.703125G normal
CD_10_cell01 2010-05-28T13:09:20-05:00 /dev/sdk /dev/sdk HardDisk 0 0 00000128-e01a-9f1b-0000-000000000000 none 0_10 0 1861.703125G normal
CD_11_cell01 2010-05-28T13:09:21-05:00 /dev/sdl /dev/sdl HardDisk 0 0 00000128-e01a-a13e-0000-000000000000 none 0_11 0 1861.703125G normal
FD_00_enkcel01 2011-09-22T20:51:10-05:00 /dev/sds /dev/sds FlashDisk 0 0 b8638e68-b436-48ab-9790-a53b6f188b53 none 1_0 22.875G normal
FD_01_enkcel01 2011-09-22T20:51:11-05:00 /dev/sdr /dev/sdr FlashDisk 0 0 7485b0c0-b6ef-4e8f-b4cb-ded2734dc424 none 1_1 22.875G normal
FD_02_enkcel01 2011-09-22T20:51:12-05:00 /dev/sdt /dev/sdt FlashDisk 0 0 2f0dee7e-3f0d-49af-9f10-865952a6362d none 1_2 22.875G normal
FD_03_enkcel01 2011-09-22T20:51:13-05:00 /dev/sdu /dev/sdu FlashDisk 0 0 9a7586dd-4fad-431b-8459-4c8a3504ce51 none 1_3 22.875G normal
FD_04_enkcel01 2011-09-22T20:51:14-05:00 /dev/sdz /dev/sdz FlashDisk 0 0 65acb88c-b5b4-4768-a029-04de9238442f none 2_0 22.875G normal
FD_05_enkcel01 2011-09-22T20:51:15-05:00 /dev/sdaa /dev/sdaa FlashDisk 0 0 f99d5e54-063f-423a-ad21-bb97fded6534 none 2_1 22.875G normal
FD_06_enkcel01 2011-09-22T20:51:15-05:00 /dev/sdab /dev/sdab FlashDisk 0 0 6d1af809-5f61-47cb-bdb5-3eceeb4804b4 none 2_2 22.875G normal
FD_07_enkcel01 2011-09-22T20:51:16-05:00 /dev/sdac /dev/sdac FlashDisk 0 0 d2c7735a-f646-4632-a063-bf9ce4093e10 none 2_3 22.875G normal
FD_08_enkcel01 2011-09-22T20:51:17-05:00 /dev/sdn /dev/sdn FlashDisk 0 0 ab088c83-e6bf-47e2-98e4-a45d67873a5b none 4_0 22.875G normal
FD_09_enkcel01 2011-09-22T20:51:18-05:00 /dev/sdo /dev/sdo FlashDisk 0 0 7ba2b17a-bcb2-4084-ba88-c5d7415b18fb none 4_1 22.875G normal
FD_10_enkcel01 2011-09-22T20:51:19-05:00 /dev/sdp /dev/sdp FlashDisk 0 0 b429e31e-cf38-412f-9c82-44a2d9ae346e none 4_2 22.875G normal
FD_11_enkcel01 2011-09-22T20:51:20-05:00 /dev/sdq /dev/sdq FlashDisk 0 0 fd8af61f-1a16-4a97-b82c-d81f2031cf9a none 4_3 22.875G normal
FD_12_enkcel01 2011-09-22T20:51:21-05:00 /dev/sdv /dev/sdv FlashDisk 0 0 8a6fa836-61b8-4718-b93f-bc22a5566182 none 5_0 22.875G normal
FD_13_enkcel01 2011-09-22T20:51:22-05:00 /dev/sdw /dev/sdw FlashDisk 0 0 1748c6a9-c24d-4324-bd85-5d5e9cbadcaf none 5_1 22.875G normal
FD_14_enkcel01 2011-09-22T20:51:22-05:00 /dev/sdx /dev/sdx FlashDisk 0 0 98200a21-a687-4afb-9cd0-6911be8c5be5 none 5_2 22.875G normal
FD_15_enkcel01 2011-09-22T20:51:23-05:00 /dev/sdy /dev/sdy FlashDisk 0 0 793fba33-3d8f-425d-b261-42fbfa71bfcb none 5_3 22.875G normal
CellCLI> list griddisk attributes all
AC10G_CD_05_cell01 AC10G AC10G_CD_05_CELL01 CD_05_cell01 2011-10-11T14:34:52-05:00 HardDisk 0 ac5e9ec2-0269-45e0-b7fe-ea8c0974c6b1 708.890625G 30G active
DATA_CD_00_cell01 DATA DATA_CD_00_CELL01 CD_00_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a070-0000-000000000000 32M 1282.8125G active
DATA_CD_01_cell01 DATA DATA_CD_01_CELL01 CD_01_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a09e-0000-000000000000 32M 1282.8125G active
DATA_CD_02_cell01 DATA DATA_CD_02_CELL01 CD_02_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a0d2-0000-000000000000 32M 1282.8125G active
DATA_CD_03_cell01 DATA DATA_CD_03_CELL01 CD_03_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a0f0-0000-000000000000 32M 1282.8125G active
DATA_CD_04_cell01 DATA DATA_CD_04_CELL01 CD_04_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a10e-0000-000000000000 32M 1282.8125G active
DATA_CD_06_cell01 DATA DATA_CD_06_CELL01 CD_06_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a159-0000-000000000000 32M 1282.8125G active
DATA_CD_07_cell01 DATA DATA_CD_07_CELL01 CD_07_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a176-0000-000000000000 32M 1282.8125G active
DATA_CD_08_cell01 DATA DATA_CD_08_CELL01 CD_08_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a193-0000-000000000000 32M 1282.8125G active
DATA_CD_09_cell01 DATA DATA_CD_09_CELL01 CD_09_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a1a9-0000-000000000000 32M 1282.8125G active
DATA_CD_10_cell01 DATA DATA_CD_10_CELL01 CD_10_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a1c2-0000-000000000000 32M 1282.8125G active
DATA_CD_11_cell01 DATA DATA_CD_11_CELL01 CD_11_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a1e0-0000-000000000000 32M 1282.8125G active
RECO_CD_00_cell01 RECO RECO_CD_00_CELL01 CD_00_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a656-0000-000000000000 1741.328125G 91.265625G active
RECO_CD_01_cell01 RECO RECO_CD_01_CELL01 CD_01_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a65b-0000-000000000000 1741.328125G 91.265625G active
RECO_CD_02_cell01 RECO RECO_CD_02_CELL01 CD_02_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a65f-0000-000000000000 1741.328125G 120.375G active
RECO_CD_03_cell01 RECO RECO_CD_03_CELL01 CD_03_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a664-0000-000000000000 1741.328125G 120.375G active
RECO_CD_04_cell01 RECO RECO_CD_04_CELL01 CD_04_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a668-0000-000000000000 1741.328125G 120.375G active
RECO_CD_06_cell01 RECO RECO_CD_06_CELL01 CD_06_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a672-0000-000000000000 1741.328125G 120.375G active
RECO_CD_07_cell01 RECO RECO_CD_07_CELL01 CD_07_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a676-0000-000000000000 1741.328125G 120.375G active
RECO_CD_08_cell01 RECO RECO_CD_08_CELL01 CD_08_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a67b-0000-000000000000 1741.328125G 120.375G active
RECO_CD_09_cell01 RECO RECO_CD_09_CELL01 CD_09_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a680-0000-000000000000 1741.328125G 120.375G active
RECO_CD_10_cell01 RECO RECO_CD_10_CELL01 CD_10_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a685-0000-000000000000 1741.328125G 120.375G active
RECO_CD_11_cell01 RECO RECO_CD_11_CELL01 CD_11_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a689-0000-000000000000 1741.328125G 120.375G active
SCRATCH_CD_05_cell01 SCRATCH SCRATCH_CD_05_CELL01 CD_05_cell01 2010-12-24T11:11:03-06:00 HardDisk 0 9fd44ab2-a674-40ba-aa4f-fb32d380c573 32M 578.84375G active
SMITHERS_CD_05_cell01 SMITHERS SMITHERS_CD_05_CELL01 CD_05_cell01 2011-02-16T13:38:19-06:00 HardDisk 0 ee413b30-fe57-47a3-b1ad-815fa25b471c 578.890625G 100G active
STAGE_CD_00_cell01 STAGE STAGE_CD_00_CELL01 CD_00_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a267-0000-000000000000 1282.859375G 458.140625G active
STAGE_CD_01_cell01 STAGE STAGE_CD_01_CELL01 CD_01_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a26c-0000-000000000000 1282.859375G 458.140625G active
STAGE_CD_02_cell01 STAGE STAGE_CD_02_CELL01 CD_02_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a271-0000-000000000000 1282.859375G 458.140625G active
STAGE_CD_03_cell01 STAGE STAGE_CD_03_CELL01 CD_03_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a277-0000-000000000000 1282.859375G 458.140625G active
STAGE_CD_04_cell01 STAGE STAGE_CD_04_CELL01 CD_04_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a27d-0000-000000000000 1282.859375G 458.140625G active
STAGE_CD_06_cell01 STAGE STAGE_CD_06_CELL01 CD_06_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a288-0000-000000000000 1282.859375G 458.140625G active
STAGE_CD_07_cell01 STAGE STAGE_CD_07_CELL01 CD_07_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a28d-0000-000000000000 1282.859375G 458.140625G active
STAGE_CD_08_cell01 STAGE STAGE_CD_08_CELL01 CD_08_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a293-0000-000000000000 1282.859375G 458.140625G active
STAGE_CD_09_cell01 STAGE STAGE_CD_09_CELL01 CD_09_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a299-0000-000000000000 1282.859375G 458.140625G active
STAGE_CD_10_cell01 STAGE STAGE_CD_10_CELL01 CD_10_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a29e-0000-000000000000 1282.859375G 458.140625G active
STAGE_CD_11_cell01 STAGE STAGE_CD_11_CELL01 CD_11_cell01 2010-06-14T17:41:12-05:00 HardDisk 0 00000129-389f-a2a4-0000-000000000000 1282.859375G 458.140625G active
SWING_CD_05_cell01 TENJEE SWING_CD_05_CELL01 CD_05_cell01 2011-02-21T14:36:03-06:00 HardDisk 0 aaf8a3bc-7f81-45f2-b091-5bf73c93d972 678.890625G 30G active
SYSTEM_CD_00_cell01 SYSTEM SYSTEM_CD_00_CELL01 CD_00_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a45f-0000-000000000000 1741G 336M active
SYSTEM_CD_01_cell01 SYSTEM SYSTEM_CD_01_CELL01 CD_01_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a464-0000-000000000000 1741G 336M active
SYSTEM_CD_02_cell01 SYSTEM SYSTEM_CD_02_CELL01 CD_02_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a468-0000-000000000000 1741G 336M active
SYSTEM_CD_03_cell01 SYSTEM SYSTEM_CD_03_CELL01 CD_03_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a46c-0000-000000000000 1741G 336M active
SYSTEM_CD_04_cell01 SYSTEM SYSTEM_CD_04_CELL01 CD_04_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a470-0000-000000000000 1741G 336M active
SYSTEM_CD_06_cell01 SYSTEM SYSTEM_CD_06_CELL01 CD_06_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a479-0000-000000000000 1741G 336M active
SYSTEM_CD_07_cell01 SYSTEM SYSTEM_CD_07_CELL01 CD_07_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a47e-0000-000000000000 1741G 336M active
SYSTEM_CD_08_cell01 SYSTEM SYSTEM_CD_08_CELL01 CD_08_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a482-0000-000000000000 1741G 336M active
SYSTEM_CD_09_cell01 SYSTEM SYSTEM_CD_09_CELL01 CD_09_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a486-0000-000000000000 1741G 336M active
SYSTEM_CD_10_cell01 SYSTEM SYSTEM_CD_10_CELL01 CD_10_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a48b-0000-000000000000 1741G 336M active
SYSTEM_CD_11_cell01 SYSTEM SYSTEM_CD_11_CELL01 CD_11_cell01 2010-06-14T17:41:13-05:00 HardDisk 0 00000129-389f-a48f-0000-000000000000 1741G 336M active
}}}
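What the V2 listing above shows is that the cell OS lives on software RAID1 mirrors carved from the first two disks (the sda/sdb partitions), while sda3/sdb3 and the remaining ten disks are given whole to cell disks. A quick health check for those mirrors -- a minimal sketch, run as root on the cell, using the same mdadm invocation as above:
{{{
# verify every md mirror is clean and both members are in sync
grep -A1 '^md' /proc/mdstat          # look for [2/2] [UU] on every array
for md in /dev/md[0-9]*; do
  mdadm --misc --detail $md | egrep 'State :|Failed'
done
}}}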
{{{
--X2
[root@enkcel04 ~]# cat /etc/fstab
/dev/md5 / ext3 defaults,usrquota,grpquota 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/md2 swap swap defaults 0 0
/dev/md7 /opt/oracle ext3 defaults,nodev 1 1
/dev/md4 /boot ext3 defaults,nodev 1 1
/dev/md11 /var/log/oracle ext3 defaults,nodev 1 1
[root@enkcel04 ~]#
[root@enkcel04 ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/md5 ext3 9.9G 3.4G 6.1G 36% /
tmpfs tmpfs 12G 0 12G 0% /dev/shm
/dev/md7 ext3 2.0G 626M 1.3G 33% /opt/oracle
/dev/md4 ext3 116M 37M 74M 34% /boot
/dev/md11 ext3 2.3G 181M 2.0G 9% /var/log/oracle
[root@enkcel04 ~]#
[root@enkcel04 ~]# mount
/dev/md5 on / type ext3 (rw,usrquota,grpquota)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md7 on /opt/oracle type ext3 (rw,nodev)
/dev/md4 on /boot type ext3 (rw,nodev)
/dev/md11 on /var/log/oracle type ext3 (rw,nodev)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
[root@enkcel04 ~]# cat /proc/mdstat
Personalities : [raid1]
md4 : active raid1 sdb1[1] sda1[0]
120384 blocks [2/2] [UU]
md5 : active raid1 sdb5[1] sda5[0]
10482304 blocks [2/2] [UU]
md6 : active raid1 sdb6[1] sda6[0]
10482304 blocks [2/2] [UU]
md7 : active raid1 sdb7[1] sda7[0]
2096384 blocks [2/2] [UU]
md8 : active raid1 sdb8[1] sda8[0]
2096384 blocks [2/2] [UU]
md1 : active raid1 sdb10[1] sda10[0]
714752 blocks [2/2] [UU]
md11 : active raid1 sdb11[1] sda11[0]
2433728 blocks [2/2] [UU]
md2 : active raid1 sdb9[1] sda9[0]
2096384 blocks [2/2] [UU]
unused devices: <none>
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md4
/dev/md4:
Version : 0.90
Creation Time : Sat Mar 12 13:39:08 2011
Raid Level : raid1
Array Size : 120384 (117.58 MiB 123.27 MB)
Used Dev Size : 120384 (117.58 MiB 123.27 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 4
Persistence : Superblock is persistent
Update Time : Tue Aug 2 10:22:53 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 04a2efb0:05de7468:211366dd:d50b2c00
Events : 0.4
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md5
/dev/md5:
Version : 0.90
Creation Time : Sat Mar 12 13:39:15 2011
Raid Level : raid1
Array Size : 10482304 (10.00 GiB 10.73 GB)
Used Dev Size : 10482304 (10.00 GiB 10.73 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 5
Persistence : Superblock is persistent
Update Time : Sun Oct 23 03:49:07 2011
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : a9bee465:3ab8337f:8f1ef237:01bcbae0
Events : 0.5
Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
1 8 21 1 active sync /dev/sdb5
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md6
/dev/md6:
Version : 0.90
Creation Time : Sat Mar 12 13:39:17 2011
Raid Level : raid1
Array Size : 10482304 (10.00 GiB 10.73 GB)
Used Dev Size : 10482304 (10.00 GiB 10.73 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 6
Persistence : Superblock is persistent
Update Time : Thu Oct 13 10:31:39 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 94214ee2:ddb3cdb2:e6b53739:6b6e01df
Events : 0.4
Number Major Minor RaidDevice State
0 8 6 0 active sync /dev/sda6
1 8 22 1 active sync /dev/sdb6
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md7
/dev/md7:
Version : 0.90
Creation Time : Sat Mar 12 13:39:18 2011
Raid Level : raid1
Array Size : 2096384 (2047.59 MiB 2146.70 MB)
Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 7
Persistence : Superblock is persistent
Update Time : Sun Oct 23 03:49:15 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : b95729e3:38d23a1f:22ee3182:4f2abebd
Events : 0.6
Number Major Minor RaidDevice State
0 8 7 0 active sync /dev/sda7
1 8 23 1 active sync /dev/sdb7
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md8
/dev/md8:
Version : 0.90
Creation Time : Sat Mar 12 13:39:20 2011
Raid Level : raid1
Array Size : 2096384 (2047.59 MiB 2146.70 MB)
Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 8
Persistence : Superblock is persistent
Update Time : Thu May 5 17:32:35 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : d0d2b027:1731b4d8:8bd77b3a:4588996c
Events : 0.6
Number Major Minor RaidDevice State
0 8 8 0 active sync /dev/sda8
1 8 24 1 active sync /dev/sdb8
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Sat Mar 12 13:39:00 2011
Raid Level : raid1
Array Size : 714752 (698.12 MiB 731.91 MB)
Used Dev Size : 714752 (698.12 MiB 731.91 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Sat Mar 12 13:45:50 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 7929eb8f:e993f335:b6b1f5d6:0e21a218
Events : 0.2
Number Major Minor RaidDevice State
0 8 10 0 active sync /dev/sda10
1 8 26 1 active sync /dev/sdb10
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md11
/dev/md11:
Version : 0.90
Creation Time : Sat Mar 12 13:39:21 2011
Raid Level : raid1
Array Size : 2433728 (2.32 GiB 2.49 GB)
Used Dev Size : 2433728 (2.32 GiB 2.49 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 11
Persistence : Superblock is persistent
Update Time : Sun Oct 23 03:49:44 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : f6ce4c8e:98ed1e26:47116a89:70babf94
Events : 0.6
Number Major Minor RaidDevice State
0 8 11 0 active sync /dev/sda11
1 8 27 1 active sync /dev/sdb11
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Sat Mar 12 13:39:00 2011
Raid Level : raid1
Array Size : 2096384 (2047.59 MiB 2146.70 MB)
Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Sat Mar 12 13:56:24 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 1582ff78:e1a1a2ae:ef5f5c1f:60d86130
Events : 0.6
Number Major Minor RaidDevice State
0 8 9 0 active sync /dev/sda9
1 8 25 1 active sync /dev/sdb9
[root@enkcel04 ~]# parted /dev/sda print
Model: LSI MR9261-8i (scsi)
Disk /dev/sda: 1999GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 123MB 123MB primary ext3 boot, raid
2 123MB 132MB 8225kB primary ext2
3 132MB 1968GB 1968GB primary
4 1968GB 1999GB 31.1GB extended lba
5 1968GB 1979GB 10.7GB logical ext3 raid
6 1979GB 1989GB 10.7GB logical ext3 raid
7 1989GB 1991GB 2147MB logical ext3 raid
8 1991GB 1994GB 2147MB logical ext3 raid
9 1994GB 1996GB 2147MB logical linux-swap raid
10 1996GB 1997GB 732MB logical raid
11 1997GB 1999GB 2492MB logical ext3 raid
Information: Don't forget to update /etc/fstab, if necessary.
[root@enkcel04 ~]#
[root@enkcel04 ~]# parted /dev/sdb print
Model: LSI MR9261-8i (scsi)
Disk /dev/sdb: 1999GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 123MB 123MB primary ext3 boot, raid
2 123MB 132MB 8225kB primary ext2
3 132MB 1968GB 1968GB primary
4 1968GB 1999GB 31.1GB extended lba
5 1968GB 1979GB 10.7GB logical ext3 raid
6 1979GB 1989GB 10.7GB logical ext3 raid
7 1989GB 1991GB 2147MB logical ext3 raid
8 1991GB 1994GB 2147MB logical ext3 raid
9 1994GB 1996GB 2147MB logical linux-swap raid
10 1996GB 1997GB 732MB logical raid
11 1997GB 1999GB 2492MB logical ext3 raid
Information: Don't forget to update /etc/fstab, if necessary.
[root@enkcel04 ~]# fdisk -l
Disk /dev/sda: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 15 120456 fd Linux raid autodetect
/dev/sda2 16 16 8032+ 83 Linux
/dev/sda3 17 239246 1921614975 83 Linux
/dev/sda4 239247 243031 30403012+ f W95 Ext'd (LBA)
/dev/sda5 239247 240551 10482381 fd Linux raid autodetect
/dev/sda6 240552 241856 10482381 fd Linux raid autodetect
/dev/sda7 241857 242117 2096451 fd Linux raid autodetect
/dev/sda8 242118 242378 2096451 fd Linux raid autodetect
/dev/sda9 242379 242639 2096451 fd Linux raid autodetect
/dev/sda10 242640 242728 714861 fd Linux raid autodetect
/dev/sda11 242729 243031 2433816 fd Linux raid autodetect
Disk /dev/sdb: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 15 120456 fd Linux raid autodetect
/dev/sdb2 16 16 8032+ 83 Linux
/dev/sdb3 17 239246 1921614975 83 Linux
/dev/sdb4 239247 243031 30403012+ f W95 Ext'd (LBA)
/dev/sdb5 239247 240551 10482381 fd Linux raid autodetect
/dev/sdb6 240552 241856 10482381 fd Linux raid autodetect
/dev/sdb7 241857 242117 2096451 fd Linux raid autodetect
/dev/sdb8 242118 242378 2096451 fd Linux raid autodetect
/dev/sdb9 242379 242639 2096451 fd Linux raid autodetect
/dev/sdb10 242640 242728 714861 fd Linux raid autodetect
/dev/sdb11 242729 243031 2433816 fd Linux raid autodetect
Disk /dev/sdc: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/sdf: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sdg: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdg doesn't contain a valid partition table
Disk /dev/sdh: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdh doesn't contain a valid partition table
Disk /dev/sdi: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdi doesn't contain a valid partition table
Disk /dev/sdj: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdj doesn't contain a valid partition table
Disk /dev/sdk: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdk doesn't contain a valid partition table
Disk /dev/sdl: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdl doesn't contain a valid partition table
Disk /dev/sdm: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdm doesn't contain a valid partition table
Disk /dev/sdn: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdn doesn't contain a valid partition table
Disk /dev/sdo: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdo doesn't contain a valid partition table
Disk /dev/sdp: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdp doesn't contain a valid partition table
Disk /dev/sdq: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdq doesn't contain a valid partition table
Disk /dev/sdr: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdr doesn't contain a valid partition table
Disk /dev/sds: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sds doesn't contain a valid partition table
Disk /dev/sdt: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdt doesn't contain a valid partition table
Disk /dev/sdu: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdu doesn't contain a valid partition table
Disk /dev/sdv: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdv doesn't contain a valid partition table
Disk /dev/sdw: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdw doesn't contain a valid partition table
Disk /dev/sdx: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdx doesn't contain a valid partition table
Disk /dev/sdy: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdy doesn't contain a valid partition table
Disk /dev/sdz: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdz doesn't contain a valid partition table
Disk /dev/sdaa: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdaa doesn't contain a valid partition table
Disk /dev/sdab: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdab doesn't contain a valid partition table
Disk /dev/sdac: 4009 MB, 4009754624 bytes
126 heads, 22 sectors/track, 2825 cylinders
Units = cylinders of 2772 * 512 = 1419264 bytes
Device Boot Start End Blocks Id System
/dev/sdac1 1 2824 3914053 83 Linux
Disk /dev/md2: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md2 doesn't contain a valid partition table
Disk /dev/md11: 2492 MB, 2492137472 bytes
2 heads, 4 sectors/track, 608432 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md11 doesn't contain a valid partition table
Disk /dev/md1: 731 MB, 731906048 bytes
2 heads, 4 sectors/track, 178688 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md8: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md8 doesn't contain a valid partition table
Disk /dev/md7: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md7 doesn't contain a valid partition table
Disk /dev/md6: 10.7 GB, 10733879296 bytes
2 heads, 4 sectors/track, 2620576 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md6 doesn't contain a valid partition table
Disk /dev/md5: 10.7 GB, 10733879296 bytes
2 heads, 4 sectors/track, 2620576 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md5 doesn't contain a valid partition table
Disk /dev/md4: 123 MB, 123273216 bytes
2 heads, 4 sectors/track, 30096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md4 doesn't contain a valid partition table
--X2
CellCLI> list physicaldisk attributes all
20:0 19 HardDisk 20 0 0 false 0_0 "SEAGATE ST32000SSSUN2.0T" 0514 2011-03-12T14:02:16-06:00 sata L3E5ZF 1862.6559999994934G 0 normal
20:1 18 HardDisk 20 0 0 false 0_1 "SEAGATE ST32000SSSUN2.0T" 0514 2011-03-12T14:02:21-06:00 sata L3E2MM 1862.6559999994934G 1 normal
20:2 17 HardDisk 20 0 0 false 0_2 "SEAGATE ST32000SSSUN2.0T" 0514 2011-03-12T14:02:26-06:00 sata L3GX6J 1862.6559999994934G 2 normal
20:3 16 HardDisk 20 0 0 false 0_3 "SEAGATE ST32000SSSUN2.0T" 0514 2011-03-12T14:02:31-06:00 sata L3G8QX 1862.6559999994934G 3 normal
20:4 15 HardDisk 20 0 0 false 0_4 "SEAGATE ST32000SSSUN2.0T" 0514 2011-03-12T14:02:37-06:00 sata L2CG8S 1862.6559999994934G 4 normal
20:5 14 HardDisk 20 0 0 false 0_5 "SEAGATE ST32000SSSUN2.0T" 0514 2011-03-12T14:02:42-06:00 sata L3H3TS 1862.6559999994934G 5 normal
20:6 13 HardDisk 20 0 0 false 0_6 "SEAGATE ST32000SSSUN2.0T" 0514 2011-03-12T14:02:47-06:00 sata L3GYH3 1862.6559999994934G 6 normal
20:7 12 HardDisk 20 0 0 false 0_7 "SEAGATE ST32000SSSUN2.0T" 0514 2011-03-12T14:02:52-06:00 sata L3G73C 1862.6559999994934G 7 normal
20:8 11 HardDisk 20 0 0 false 0_8 "SEAGATE ST32000SSSUN2.0T" 0514 2011-03-12T14:02:57-06:00 sata L3H3TJ 1862.6559999994934G 8 normal
20:9 10 HardDisk 20 0 0 false 0_9 "SEAGATE ST32000SSSUN2.0T" 0514 2011-03-12T14:03:02-06:00 sata L3GXVK 1862.6559999994934G 9 normal
20:10 9 HardDisk 20 0 0 false 0_10 "SEAGATE ST32000SSSUN2.0T" 0514 2011-03-12T14:03:08-06:00 sata L3G8N6 1862.6559999994934G 10 normal
20:11 8 HardDisk 20 0 0 false 0_11 "SEAGATE ST32000SSSUN2.0T" 0514 2011-03-12T14:03:13-06:00 sata L3HLN6 1862.6559999994934G 11 normal
[1:0:0:0] FlashDisk 4_0 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f2e2aFMOD0 22.8880615234375G "PCI Slot: 4; FDOM: 0" normal
[1:0:1:0] FlashDisk 4_1 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f2e2aFMOD1 22.8880615234375G "PCI Slot: 4; FDOM: 1" normal
[1:0:2:0] FlashDisk 4_2 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f2e2aFMOD2 22.8880615234375G "PCI Slot: 4; FDOM: 2" normal
[1:0:3:0] FlashDisk 4_3 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f2e2aFMOD3 22.8880615234375G "PCI Slot: 4; FDOM: 3" normal
[2:0:0:0] FlashDisk 1_0 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f27f0FMOD0 22.8880615234375G "PCI Slot: 1; FDOM: 0" normal
[2:0:1:0] FlashDisk 1_1 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f27f0FMOD1 22.8880615234375G "PCI Slot: 1; FDOM: 1" normal
[2:0:2:0] FlashDisk 1_2 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f27f0FMOD2 22.8880615234375G "PCI Slot: 1; FDOM: 2" normal
[2:0:3:0] FlashDisk 1_3 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f27f0FMOD3 22.8880615234375G "PCI Slot: 1; FDOM: 3" normal
[3:0:0:0] FlashDisk 5_0 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f2eb4FMOD0 22.8880615234375G "PCI Slot: 5; FDOM: 0" normal
[3:0:1:0] FlashDisk 5_1 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f2eb4FMOD1 22.8880615234375G "PCI Slot: 5; FDOM: 1" normal
[3:0:2:0] FlashDisk 5_2 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f2eb4FMOD2 22.8880615234375G "PCI Slot: 5; FDOM: 2" normal
[3:0:3:0] FlashDisk 5_3 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f2eb4FMOD3 22.8880615234375G "PCI Slot: 5; FDOM: 3" normal
[4:0:0:0] FlashDisk 2_0 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f2de6FMOD0 22.8880615234375G "PCI Slot: 2; FDOM: 0" normal
[4:0:1:0] FlashDisk 2_1 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f2de6FMOD1 22.8880615234375G "PCI Slot: 2; FDOM: 1" normal
[4:0:2:0] FlashDisk 2_2 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f2de6FMOD2 22.8880615234375G "PCI Slot: 2; FDOM: 2" normal
[4:0:3:0] FlashDisk 2_3 "MARVELL SD88SA02" D20Y 2011-03-12T14:03:13-06:00 sas 5080020000f2de6FMOD3 22.8880615234375G "PCI Slot: 2; FDOM: 3" normal
CellCLI> list lun attributes all
0_0 CD_00_enkcel04 /dev/sda HardDisk 0_0 TRUE FALSE 1861.712890625G 0_0 20:0 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_1 CD_01_enkcel04 /dev/sdb HardDisk 0_1 TRUE FALSE 1861.712890625G 0_1 20:1 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_2 CD_02_enkcel04 /dev/sdc HardDisk 0_2 FALSE FALSE 1861.712890625G 0_2 20:2 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_3 CD_03_enkcel04 /dev/sdd HardDisk 0_3 FALSE FALSE 1861.712890625G 0_3 20:3 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_4 CD_04_enkcel04 /dev/sde HardDisk 0_4 FALSE FALSE 1861.712890625G 0_4 20:4 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_5 CD_05_enkcel04 /dev/sdf HardDisk 0_5 FALSE FALSE 1861.712890625G 0_5 20:5 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_6 CD_06_enkcel04 /dev/sdg HardDisk 0_6 FALSE FALSE 1861.712890625G 0_6 20:6 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_7 CD_07_enkcel04 /dev/sdh HardDisk 0_7 FALSE FALSE 1861.712890625G 0_7 20:7 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_8 CD_08_enkcel04 /dev/sdi HardDisk 0_8 FALSE FALSE 1861.712890625G 0_8 20:8 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_9 CD_09_enkcel04 /dev/sdj HardDisk 0_9 FALSE FALSE 1861.712890625G 0_9 20:9 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_10 CD_10_enkcel04 /dev/sdk HardDisk 0_10 FALSE FALSE 1861.712890625G 0_10 20:10 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
0_11 CD_11_enkcel04 /dev/sdl HardDisk 0_11 FALSE FALSE 1861.712890625G 0_11 20:11 0 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU" normal
1_0 FD_00_enkcel04 /dev/sdq FlashDisk 1_0 FALSE FALSE 22.8880615234375G 100.0 [2:0:0:0] normal
1_1 FD_01_enkcel04 /dev/sdr FlashDisk 1_1 FALSE FALSE 22.8880615234375G 100.0 [2:0:1:0] normal
1_2 FD_02_enkcel04 /dev/sds FlashDisk 1_2 FALSE FALSE 22.8880615234375G 100.0 [2:0:2:0] normal
1_3 FD_03_enkcel04 /dev/sdt FlashDisk 1_3 FALSE FALSE 22.8880615234375G 100.0 [2:0:3:0] normal
2_0 FD_04_enkcel04 /dev/sdy FlashDisk 2_0 FALSE FALSE 22.8880615234375G 100.0 [4:0:0:0] normal
2_1 FD_05_enkcel04 /dev/sdz FlashDisk 2_1 FALSE FALSE 22.8880615234375G 100.0 [4:0:1:0] normal
2_2 FD_06_enkcel04 /dev/sdaa FlashDisk 2_2 FALSE FALSE 22.8880615234375G 100.0 [4:0:2:0] normal
2_3 FD_07_enkcel04 /dev/sdab FlashDisk 2_3 FALSE FALSE 22.8880615234375G 100.0 [4:0:3:0] normal
4_0 FD_08_enkcel04 /dev/sdm FlashDisk 4_0 FALSE FALSE 22.8880615234375G 100.0 [1:0:0:0] normal
4_1 FD_09_enkcel04 /dev/sdn FlashDisk 4_1 FALSE FALSE 22.8880615234375G 100.0 [1:0:1:0] normal
4_2 FD_10_enkcel04 /dev/sdo FlashDisk 4_2 FALSE FALSE 22.8880615234375G 100.0 [1:0:2:0] normal
4_3 FD_11_enkcel04 /dev/sdp FlashDisk 4_3 FALSE FALSE 22.8880615234375G 100.0 [1:0:3:0] normal
5_0 FD_12_enkcel04 /dev/sdu FlashDisk 5_0 FALSE FALSE 22.8880615234375G 100.0 [3:0:0:0] normal
5_1 FD_13_enkcel04 /dev/sdv FlashDisk 5_1 FALSE FALSE 22.8880615234375G 100.0 [3:0:1:0] normal
5_2 FD_14_enkcel04 /dev/sdw FlashDisk 5_2 FALSE FALSE 22.8880615234375G 100.0 [3:0:2:0] normal
5_3 FD_15_enkcel04 /dev/sdx FlashDisk 5_3 FALSE FALSE 22.8880615234375G 100.0 [3:0:3:0] normal
CellCLI> list celldisk attributes all
CD_00_enkcel04 2011-03-29T14:05:51-05:00 /dev/sda /dev/sda3 HardDisk 0 0 7ebe749b-5f94-427c-a636-d793f691f795 none 0_0 0 1832.59375G normal
CD_01_enkcel04 2011-03-29T14:05:55-05:00 /dev/sdb /dev/sdb3 HardDisk 0 0 ec5ca5d0-25a2-4f16-b8da-7ca87106f09b none 0_1 0 1832.59375G normal
CD_02_enkcel04 2011-03-29T14:05:56-05:00 /dev/sdc /dev/sdc HardDisk 0 0 81d59e7b-795c-4c68-8151-3d1a1574cbd2 none 0_2 0 1861.703125G normal
CD_03_enkcel04 2011-03-29T14:05:56-05:00 /dev/sdd /dev/sdd HardDisk 0 0 27f3a507-cb13-43b3-ad87-a54d57984013 none 0_3 0 1861.703125G normal
CD_04_enkcel04 2011-03-29T14:05:57-05:00 /dev/sde /dev/sde HardDisk 0 0 3732d8ee-1cc4-4acd-a39d-4467668a2211 none 0_4 0 1861.703125G normal
CD_05_enkcel04 2011-03-29T14:05:58-05:00 /dev/sdf /dev/sdf HardDisk 0 0 601e610b-ec1a-4b8a-8ef9-0faa6d9c754a none 0_5 0 1861.703125G normal
CD_06_enkcel04 2011-03-29T14:05:58-05:00 /dev/sdg /dev/sdg HardDisk 0 0 bf306119-c111-4538-b10e-d8279db6835a none 0_6 0 1861.703125G normal
CD_07_enkcel04 2011-03-29T14:05:59-05:00 /dev/sdh /dev/sdh HardDisk 0 0 67d280a4-dce7-4139-9a19-2ff7b2d5aa45 none 0_7 0 1861.703125G normal
CD_08_enkcel04 2011-03-29T14:06:00-05:00 /dev/sdi /dev/sdi HardDisk 0 0 e348a4a5-cc49-448d-9b82-4dac64dddf8a none 0_8 0 1861.703125G normal
CD_09_enkcel04 2011-03-29T14:06:01-05:00 /dev/sdj /dev/sdj HardDisk 0 0 ce155b98-d8c8-454d-8273-a8feb66546d9 none 0_9 0 1861.703125G normal
CD_10_enkcel04 2011-03-29T14:06:01-05:00 /dev/sdk /dev/sdk HardDisk 0 0 e4c88e9d-5d9d-4825-889e-0bce857bd85c none 0_10 0 1861.703125G normal
CD_11_enkcel04 2011-03-29T14:06:02-05:00 /dev/sdl /dev/sdl HardDisk 0 0 3c5a73a8-7a04-4213-a7c8-8b2d0f63de7f none 0_11 0 1861.703125G normal
FD_00_enkcel04 2011-03-25T14:05:33-05:00 /dev/sdq /dev/sdq FlashDisk 0 0 b3cf6d51-17ee-4269-a597-4af2d1e1f1ad none 1_0 22.875G normal
FD_01_enkcel04 2011-03-25T14:05:34-05:00 /dev/sdr /dev/sdr FlashDisk 0 0 3ca528d8-de3b-4fa8-919a-7ef45f131a51 none 1_1 22.875G normal
FD_02_enkcel04 2011-03-25T14:05:35-05:00 /dev/sds /dev/sds FlashDisk 0 0 fb19081d-685e-4b48-867a-5b09529fd786 none 1_2 22.875G normal
FD_03_enkcel04 2011-03-25T14:05:35-05:00 /dev/sdt /dev/sdt FlashDisk 0 0 33c049fe-0f90-4b25-afa7-e41c5db4bb8d none 1_3 22.875G normal
FD_04_enkcel04 2011-03-25T14:05:36-05:00 /dev/sdy /dev/sdy FlashDisk 0 0 0153e6d7-5116-4740-8b02-7b74d4b38aec none 2_0 22.875G normal
FD_05_enkcel04 2011-03-25T14:05:37-05:00 /dev/sdz /dev/sdz FlashDisk 0 0 8b5452b1-5fb0-48e0-8887-416760f08301 none 2_1 22.875G normal
FD_06_enkcel04 2011-03-25T14:05:38-05:00 /dev/sdaa /dev/sdaa FlashDisk 0 0 2771ec81-04f3-4935-a5ac-d06f46c0fbe0 none 2_2 22.875G normal
FD_07_enkcel04 2011-03-25T14:05:38-05:00 /dev/sdab /dev/sdab FlashDisk 0 0 8aaaf99f-736a-4e01-80eb-88efebd4dcb3 none 2_3 22.875G normal
FD_08_enkcel04 2011-03-25T14:05:39-05:00 /dev/sdm /dev/sdm FlashDisk 0 0 25f72e72-a962-4b9a-92c5-b8666e83a118 none 4_0 22.875G normal
FD_09_enkcel04 2011-03-25T14:05:40-05:00 /dev/sdn /dev/sdn FlashDisk 0 0 c023fe18-e077-498f-99fa-1dd61cd83cb1 none 4_1 22.875G normal
FD_10_enkcel04 2011-03-25T14:05:40-05:00 /dev/sdo /dev/sdo FlashDisk 0 0 388d006b-4c26-427a-9bd2-6b2ada755f3d none 4_2 22.875G normal
FD_11_enkcel04 2011-03-25T14:05:41-05:00 /dev/sdp /dev/sdp FlashDisk 0 0 c1a2f418-85d5-4fe2-bc67-9225e48c5184 none 4_3 22.875G normal
FD_12_enkcel04 2011-03-25T14:05:42-05:00 /dev/sdu /dev/sdu FlashDisk 0 0 039b1477-16ee-4d1e-aac9-b8e6ceefd6de none 5_0 22.875G normal
FD_13_enkcel04 2011-03-25T14:05:43-05:00 /dev/sdv /dev/sdv FlashDisk 0 0 0bd3d890-36cc-4e66-b404-c16af237d6b5 none 5_1 22.875G normal
FD_14_enkcel04 2011-03-25T14:05:43-05:00 /dev/sdw /dev/sdw FlashDisk 0 0 ee31e0ca-1ff9-4ea8-9a61-d1fe9cf66a85 none 5_2 22.875G normal
FD_15_enkcel04 2011-03-25T14:05:44-05:00 /dev/sdx /dev/sdx FlashDisk 0 0 0a808b2f-ea08-48f0-abfc-8d08cffa7d72 none 5_3 22.875G normal
CellCLI> list griddisk attributes all
DATA_CD_00_enkcel04 CD_00_enkcel04 2011-03-29T14:07:35-05:00 HardDisk 0 cb535b02-e9bf-41d7-8e22-93009fff14fd 32M 1356G active
DATA_CD_01_enkcel04 CD_01_enkcel04 2011-03-29T14:07:35-05:00 HardDisk 0 c691998e-f6c3-4337-b35a-9f94076c996c 32M 1356G active
DATA_CD_02_enkcel04 CD_02_enkcel04 2011-03-29T14:07:35-05:00 HardDisk 0 57d84ced-040b-4446-96e7-b72d72c05534 32M 1356G active
DATA_CD_03_enkcel04 CD_03_enkcel04 2011-03-29T14:07:35-05:00 HardDisk 0 9420aaaf-71e5-4d82-94ff-fc4c0a73537a 32M 1356G active
DATA_CD_04_enkcel04 CD_04_enkcel04 2011-03-29T14:07:35-05:00 HardDisk 0 dbf36cae-e9e6-4cea-9cc8-3d04b97d91c7 32M 1356G active
DATA_CD_05_enkcel04 CD_05_enkcel04 2011-03-29T14:07:35-05:00 HardDisk 0 e94f2844-3055-4c12-af18-890e173b134d 32M 1356G active
DATA_CD_06_enkcel04 CD_06_enkcel04 2011-03-29T14:07:35-05:00 HardDisk 0 fe5db412-b695-493b-b3a2-6121cf5957ae 32M 1356G active
DATA_CD_07_enkcel04 CD_07_enkcel04 2011-03-29T14:07:35-05:00 HardDisk 0 9452bb5e-c11f-4fa6-9323-9afad0d1f164 32M 1356G active
DATA_CD_08_enkcel04 CD_08_enkcel04 2011-03-29T14:07:35-05:00 HardDisk 0 90655419-101c-4429-ac46-63eb4438692c 32M 1356G active
DATA_CD_09_enkcel04 CD_09_enkcel04 2011-03-29T14:07:35-05:00 HardDisk 0 4d642e65-5b3b-4f7b-818d-2503e4bf3982 32M 1356G active
DATA_CD_10_enkcel04 CD_10_enkcel04 2011-03-29T14:07:35-05:00 HardDisk 0 54768dd2-c63f-4d84-bfad-bd7d1e964ee6 32M 1356G active
DATA_CD_11_enkcel04 CD_11_enkcel04 2011-03-29T14:07:35-05:00 HardDisk 0 97aa7662-a126-44d6-b472-37c8d1ec7292 32M 1356G active
DBFS_DG_CD_02_enkcel04 CD_02_enkcel04 2011-03-29T14:06:46-05:00 HardDisk 0 de151b87-1eb2-48ae-976a-5e746d5a8580 1832.59375G 29.109375G active
DBFS_DG_CD_03_enkcel04 CD_03_enkcel04 2011-03-29T14:06:47-05:00 HardDisk 0 130e30fd-fba3-4edf-9870-c6b0a7241044 1832.59375G 29.109375G active
DBFS_DG_CD_04_enkcel04 CD_04_enkcel04 2011-03-29T14:06:48-05:00 HardDisk 0 935a39ea-9e4d-4979-83ff-b6fed9ecce48 1832.59375G 29.109375G active
DBFS_DG_CD_05_enkcel04 CD_05_enkcel04 2011-03-29T14:06:49-05:00 HardDisk 0 7da87467-7329-4f32-8667-73c22b8f2e05 1832.59375G 29.109375G active
DBFS_DG_CD_06_enkcel04 CD_06_enkcel04 2011-03-29T14:06:50-05:00 HardDisk 0 edc12d6b-66c2-4648-8605-162337e3c2cc 1832.59375G 29.109375G active
DBFS_DG_CD_07_enkcel04 CD_07_enkcel04 2011-03-29T14:06:50-05:00 HardDisk 0 b60a2162-ed3c-47df-9fd0-68868dc1df86 1832.59375G 29.109375G active
DBFS_DG_CD_08_enkcel04 CD_08_enkcel04 2011-03-29T14:06:51-05:00 HardDisk 0 035a0024-663b-4ac5-be35-5027c790c241 1832.59375G 29.109375G active
DBFS_DG_CD_09_enkcel04 CD_09_enkcel04 2011-03-29T14:06:52-05:00 HardDisk 0 c64080a5-22c8-46fa-81df-6175ce2a1066 1832.59375G 29.109375G active
DBFS_DG_CD_10_enkcel04 CD_10_enkcel04 2011-03-29T14:06:53-05:00 HardDisk 0 f0f34182-4751-4011-8496-d25a74192b09 1832.59375G 29.109375G active
DBFS_DG_CD_11_enkcel04 CD_11_enkcel04 2011-03-29T14:06:54-05:00 HardDisk 0 4e6c5015-d93b-4dab-a6d1-c09c850e542d 1832.59375G 29.109375G active
RECO_CD_00_enkcel04 CD_00_enkcel04 2011-03-29T14:07:40-05:00 HardDisk 0 0da3ed9b-35e1-40e1-801c-08a9d7a614bd 1356.046875G 476.546875G active
RECO_CD_01_enkcel04 CD_01_enkcel04 2011-03-29T14:07:40-05:00 HardDisk 0 229eac42-ee11-4752-96d0-1953f412e383 1356.046875G 476.546875G active
RECO_CD_02_enkcel04 CD_02_enkcel04 2011-03-29T14:07:40-05:00 HardDisk 0 3094f748-517c-4950-bf09-b7aeece47790 1356.046875G 476.546875G active
RECO_CD_03_enkcel04 CD_03_enkcel04 2011-03-29T14:07:40-05:00 HardDisk 0 d8340700-fe52-4afa-b837-17419bc4bfbf 1356.046875G 476.546875G active
RECO_CD_04_enkcel04 CD_04_enkcel04 2011-03-29T14:07:40-05:00 HardDisk 0 418020ea-e3df-418a-ad70-90bd09c1ec1b 1356.046875G 476.546875G active
RECO_CD_05_enkcel04 CD_05_enkcel04 2011-03-29T14:07:40-05:00 HardDisk 0 5ad78a48-ff99-4268-ae7e-fa50f909e9b2 1356.046875G 476.546875G active
RECO_CD_06_enkcel04 CD_06_enkcel04 2011-03-29T14:07:40-05:00 HardDisk 0 fa03466f-329d-4c31-9a61-ba2ceb6e67c1 1356.046875G 476.546875G active
RECO_CD_07_enkcel04 CD_07_enkcel04 2011-03-29T14:07:40-05:00 HardDisk 0 d6f247ed-6c97-4216-8c21-2f4fd92d58af 1356.046875G 476.546875G active
RECO_CD_08_enkcel04 CD_08_enkcel04 2011-03-29T14:07:40-05:00 HardDisk 0 42494e34-2e5a-4b17-a7bf-bcf23c5b18a1 1356.046875G 476.546875G active
RECO_CD_09_enkcel04 CD_09_enkcel04 2011-03-29T14:07:40-05:00 HardDisk 0 ca8fb645-f3c2-4dca-9224-d9181d23bb0f 1356.046875G 476.546875G active
RECO_CD_10_enkcel04 CD_10_enkcel04 2011-03-29T14:07:40-05:00 HardDisk 0 e13d011a-6fed-477f-a3c3-3792beee3184 1356.046875G 476.546875G active
RECO_CD_11_enkcel04 CD_11_enkcel04 2011-03-29T14:07:40-05:00 HardDisk 0 3cde9c29-3119-44e9-a9d9-bd3f03ca2829 1356.046875G 476.546875G active
}}}
http://nnawaz.blogspot.com/2019/07/how-to-check-active-enabled-physical.html
{{{
[root@prod_node1 ~]# dbmcli
DBMCLI: Release - Production on Wed Jul 24 00:06:08 GMT-00:00 2019 Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.
DBMCLI> LIST DBSERVER attributes coreCount
16/48
[root@prod_node2 ~]# dbmcli
DBMCLI: Release - Production on Wed Jul 24 00:32:34 GMT 2019 Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.
DBMCLI> LIST DBSERVER attributes coreCount
16/48
[root@prod_node3 ~]# dbmcli
DBMCLI: Release - Production on Wed Jul 24 00:35:20 GMT 2019 Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.
DBMCLI> LIST DBSERVER attributes coreCount
16/48
SQL> show parameter cpu_count
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 32
}}}
Note: coreCount 16/48 means 16 of the 48 physical cores are enabled (capacity-on-demand); with two hyper-threads per core, the database then sees cpu_count = 32.
https://community.oracle.com/tech/apps-infra/discussion/4277611/exadata-data-active-core-count-and-license
http://www.freelists.org/post/oracle-l/Performance-metrics,3
''-- "Oracle Exadata Database Machine Best Practices Series"''
Oracle E-Business Suite on Exadata http://goo.gl/2Yc4d 1133355.1 1110648.1 741818.1 557738.1 1055938.1
Oracle Siebel on Exadata http://goo.gl/3R6Iy 1187674.1 744769.1
Oracle Peoplesoft on Exadata http://goo.gl/Sg1yX 744769.1
Oracle Exadata and OLTP Applications http://goo.gl/sDCKF
* 757552.1 Exadata Best Practices
* 1269706.1 OLTP Best Practices
* 888828.1
Using Resource Manager on Exadata http://goo.gl/db5cx
* 1207483.1 CPU Resource Manager - Example: How to control CPU Resources using the Resource Manager [ID 471265.1]
* 1208064.1 Instance Caging
* 1208104.1 max_utilization_limit
* 1208133.1 Managing Runaway Queries
Migrating to Oracle Exadata http://goo.gl/DCxpg
* 785351.1 - Upgrade Companion
* 1055938.1 - Database Machine using Data Guard
* 413484.1 - Data Guard Heterogeneous Support
* 737460.1 - Changing Storage Characteristics on Logical Standby
* 1054431.1 - DBFS
* 888828.1 - Latest Exadata Software
Using DBFS on Exadata http://goo.gl/oOFs1
* 1191144.1 - Configuring a database for DBFS on Exadata
* 1054431.1 - Configuring DBFS on Exadata
Monitoring Oracle Exadata http://goo.gl/vYzpD
* 1110675.1 - Manageability Best Practices
* ASR installation guide at OTN
Oracle Exadata Backup and Recovery http://goo.gl/FBrIa
Oracle MAA and Oracle Exadata http://goo.gl/Q1a8d
* 888828.1 - Exadata recommended software
* 1262380.1 - Exadata testing and patching practices
* 757552.1 - Hub of MAA and Exadata best practices
* 1070954.1 - Exadata MAA HealthCheck (every 3months)
* 1110675.1 - Exadata Monitoring
* ASR (OTN)
* 565535.1 - Flashback MOS
* Data Guard
* 1206603.1
* 960510.1
* 951152.1
* 1265700.1 - Data Guard Standby-First Patch Apply
* Patching
* 1262380.1
* 757552.1 - Hub of MAA and Exadata Best Practices
* Storage Grid High Redundancy and file placement (OTN)
Troubleshooting Oracle Exadata http://goo.gl/USRIX
* 1274324.1 - Exadata X2-2 Diagnosability & Troubleshooting Best Practices
* 1283341.1 - Exadata Hardware Alert: All logical drives are in writethrough caching mode
Patching and Upgrading Oracle Exadata http://goo.gl/B2ztC
* metalink notes 888828.1 (11.2) 835032.1 (11.1) 1262380.1 1265998.1 1265700.1
Oracle Exadata Health Check http://goo.gl/Pyw4k
* metalink notes 1070954.1 757552.1 888828.1 835032.1
https://www.evernote.com/shard/s48/sh/b59aa9c0-4df9-44ac-b81f-8b23ae4ce7ea/4b8573572afe79a983fb6979c443cf09
* related tiddlers
[[cpu - SPECint_rate2006]]
[[cpu core comparison]]
[[Exadata CPU comparative sizing]]
http://www.oracle.com/technetwork/articles/oem/exadata-commands-intro-402431.html
http://arup.blogspot.com/p/collection-of-some-of-my-very-popular.html
http://www.proligence.com/pres/nyoug/2012/nyoug_mar13_exadata_article.pdf
http://www.centroid.com/knowledgebase/blog/exadata-initial-installation-validation
http://www.unixarena.com/2014/11/exadata-storage-cell-commands-cheat-sheet.html
http://www.oracle.com/technetwork/oem/exadata-management/em12c-exadata-lcm-webcast-1721225.pdf
Guide to create a performance monitoring dashboard report for DB Machine targets discovered by Enterprise Manager Cloud Control 12c (Doc ID 1458346.1)
note: rep user is the os user
http://docs.oracle.com/cd/E24628_01/doc.121/e27442/ch4_post_discovery.htm#EMXIG298
https://www.dropbox.com/sh/l8rrab8u8fli850/sXdr8PmWhG
https://www.dropbox.com/home/Documents/KnowledgeFiles/Books/Oracle/Exadata/DataSheets
Oracle System Options http://www.oracle.com/technetwork/documentation/oracle-system-options-190050.html#solid
https://twitter.com/karlarao/status/375289300360765440
http://www.evernote.com/shard/s48/sh/320a6b86-5203-499b-823c-577e9b641188/ec46229148b6b09478dbce95c27bc00b
sort this http://dbastreet.com/blog/?page_id=603
* the EM plugins for the ''db nodes'' are just like monitoring a regular database server..
* for the cells, you just have to set up the OMS server so that it can passwordlessly login to the ''cellmonitor'' account on the cell servers, and that's it. the OMS just executes SSH commands and runs cellcli commands on the cells to collect data points that will be stored on the OMS server for graphing
* it actually executes a command similar to this ''ssh -l cellmonitor cell1 cellcli -e 'list cell detail' ''
* and cellmonitor only has access to cellcli (a collection-loop sketch is shown after the passwordless SSH setup steps below)
{{{
[celladmin@cell1 ~]$ ssh -l root cell1 ls -ltra ~cellmonitor/
root@cell1's password:
total 48
-rw-r--r-- 1 cellmonitor cellmonitor 658 Jul 20 14:14 .zshrc
-r--r--r-- 1 root root 49 Jul 20 14:14 .profile
drwxr-xr-x 4 cellmonitor cellmonitor 4096 Jul 20 14:14 .mozilla
drwxr-xr-x 3 cellmonitor cellmonitor 4096 Jul 20 14:14 .kde
-rw-r--r-- 1 cellmonitor cellmonitor 515 Jul 20 14:14 .emacs
-r-xr-xr-x 1 root root 1760 Jul 20 14:14 cellcli
-r--r--r-- 1 root cellmonitor 162 Jul 20 14:14 .bashrc
-r--r--r-- 1 root cellmonitor 214 Jul 20 14:14 .bash_profile
drwxr-xr-x 4 root root 4096 Jul 20 14:14 ..
drwx------ 4 cellmonitor cellmonitor 4096 Aug 18 12:45 .
-rw------- 1 cellmonitor cellmonitor 263 Aug 18 12:55 .bash_history
}}}
* see why (the cellmonitor account runs in a restricted shell)
{{{
[cellmonitor@cell1 ~]$ ls -ltr
-rbash: ls: command not found
[cellmonitor@cell1 ~]$
[cellmonitor@cell1 ~]$ which
-rbash: /usr/bin/which: restricted: cannot specify `/' in command names
[cellmonitor@cell1 ~]$
[cellmonitor@cell1 ~]$ cellcli
CellCLI: Release 11.2.2.2.0 - Production on Thu Aug 18 13:47:58 CDT 2011
Copyright (c) 2007, 2009, Oracle. All rights reserved.
Cell Efficiency Ratio: 22M
CellCLI>
}}}
* follow the steps below to set up passwordless SSH
{{{
## PASSWORDLESS SSH ORACLE TO CELLADMIN
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cd ~/.ssh
/usr/bin/ssh-keygen -t dsa
<then just hit ENTER all the way>
Repeat the above steps for each node in the cluster
cd ~/.ssh
ls -l *.pub
ssh db1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh celladmin@cell1 cat ~celladmin/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh celladmin@cell2 cat ~celladmin/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh celladmin@cell3 cat ~celladmin/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp -p ~/.ssh/authorized_keys celladmin@cell1:.ssh/authorized_keys
scp -p ~/.ssh/authorized_keys celladmin@cell2:.ssh/authorized_keys
scp -p ~/.ssh/authorized_keys celladmin@cell3:.ssh/authorized_keys
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
ssh -l oracle db1 date;ssh -l celladmin cell1 date;ssh -l celladmin cell2 date;ssh -l celladmin cell3 date
Thu Aug 18 13:32:14 CDT 2011
Thu Aug 18 13:32:07 CDT 2011
Thu Aug 18 13:32:07 CDT 2011
Thu Aug 18 13:32:04 CDT 2011
## PASSWORDLESS SSH ORACLE TO CELLMONITOR
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cd ~/.ssh
/usr/bin/ssh-keygen -t dsa
<then just hit ENTER all the way>
Repeat the above steps for each node in the cluster
cd ~/.ssh
ls -l *.pub
scp id_dsa.pub celladmin@cell1:~
ssh -l root cell1 mkdir -p ~cellmonitor/.ssh
ssh -l root cell1 chmod 700 ~cellmonitor/.ssh
ssh -l root cell1 touch ~cellmonitor/.ssh/authorized_keys
ssh -l root cell1 chown -R cellmonitor:cellmonitor ~cellmonitor/.ssh
ssh -l root cell1 ls -ltra ~cellmonitor
ssh -l root cell1 "cat ~celladmin/id_dsa.pub >> ~cellmonitor/.ssh/authorized_keys"
ssh -l root cell1 rm ~celladmin/id_dsa.pub
Repeat the above steps for each node in the cluster
cd ~/.ssh
ls -l *.pub
scp id_dsa.pub celladmin@cell2:~
ssh -l root cell2 mkdir -p ~cellmonitor/.ssh
ssh -l root cell2 chmod 700 ~cellmonitor/.ssh
ssh -l root cell2 touch ~cellmonitor/.ssh/authorized_keys
ssh -l root cell2 chown -R cellmonitor:cellmonitor ~cellmonitor/.ssh
ssh -l root cell2 ls -ltra ~cellmonitor
ssh -l root cell2 "cat ~celladmin/id_dsa.pub >> ~cellmonitor/.ssh/authorized_keys"
ssh -l root cell2 rm ~celladmin/id_dsa.pub
cd ~/.ssh
ls -l *.pub
scp id_dsa.pub celladmin@cell3:~
ssh -l root cell3 mkdir -p ~cellmonitor/.ssh
ssh -l root cell3 chmod 700 ~cellmonitor/.ssh
ssh -l root cell3 touch ~cellmonitor/.ssh/authorized_keys
ssh -l root cell3 chown -R cellmonitor:cellmonitor ~cellmonitor/.ssh
ssh -l root cell3 ls -ltra ~cellmonitor
ssh -l root cell3 "cat ~celladmin/id_dsa.pub >> ~cellmonitor/.ssh/authorized_keys"
ssh -l root cell3 rm ~celladmin/id_dsa.pub
login on db1.. and execute the following commands
ssh -l cellmonitor cell1 cellcli -e 'list cell detail'
ssh -l cellmonitor cell2 cellcli -e 'list cell detail'
ssh -l cellmonitor cell3 cellcli -e 'list cell detail'
}}}
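Once passwordless SSH to cellmonitor is working, the same command can be wrapped in a small collection loop and scheduled from the monitoring host. A minimal sketch (the cell names and output path are assumptions):
{{{
#!/bin/bash
# poll each cell through the restricted cellmonitor account and
# append the timestamped cellcli output to a per-cell log file
for cell in cell1 cell2 cell3; do
  echo "==== $(date) ====" >> /tmp/${cell}_cell_detail.log
  ssh -l cellmonitor $cell cellcli -e 'list cell detail' >> /tmp/${cell}_cell_detail.log
done
}}}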
* TO ADD ROOT DB1 ON PASSWORDLESS SSH
{{{
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cd ~/.ssh
/usr/bin/ssh-keygen -t dsa
ssh db1 cat ~root/.ssh/id_dsa.pub >> ~root/.ssh/authorized_keys
ssh -l root cell1 cat ~root/.ssh/authorized_keys >> ~root/.ssh/authorized_keys
scp -p authorized_keys cell1:~root/.ssh/authorized_keys
scp -p authorized_keys cell2:~root/.ssh/authorized_keys
scp -p authorized_keys cell3:~root/.ssh/authorized_keys
ssh db1 date;ssh cell1 date;ssh cell2 date;ssh cell3 date
}}}
* TO ADD ROOT DB1 and DB2 ON PASSWORDLESS SSH
{{{
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cd ~/.ssh
/usr/bin/ssh-keygen -t dsa
ssh -l root db1 cat ~root/.ssh/id_dsa.pub >> ~root/.ssh/authorized_keys
ssh -l root db2 cat ~root/.ssh/id_dsa.pub >> ~root/.ssh/authorized_keys
ssh -l root cell1 cat ~root/.ssh/authorized_keys >> ~root/.ssh/authorized_keys
scp -p authorized_keys db2:~root/.ssh/authorized_keys
scp -p authorized_keys cell1:~root/.ssh/authorized_keys
scp -p authorized_keys cell2:~root/.ssh/authorized_keys
scp -p authorized_keys cell3:~root/.ssh/authorized_keys
ssh db1 date; ssh db2 date;ssh cell1 date;ssh cell2 date;ssh cell3 date
}}}
-- Passwordless SSH
{{{
To do this, first create an SSH keypair on the Grid Control server (one time only):
ssh-keygen -t dsa -f id_dsa
mv id_dsa.pub id_dsa ~oracle/.ssh/
cd ~oracle/.ssh/
Next, perform each of these steps for every storage cell:
-- Passwordless SSH to cellmonitor
scp id_dsa.pub celladmin@cell1:~
ssh -l root cell1 "mkdir ~cellmonitor/.ssh; chmod 700 ~cellmonitor/.ssh; cat ~celladmin/id_dsa.pub >> ~cellmonitor/.ssh/authorized_keys; chown -Rf cellmonitor:cellmonitor ~cellmonitor/.ssh"
ssh -l cellmonitor cell1 cellcli -e 'list cell detail'
-- Passwordless SSH to celladmin
scp id_dsa.pub celladmin@cell1:~
ssh -l root cell1 "mkdir ~celladmin/.ssh; chmod 700 ~celladmin/.ssh; cat ~celladmin/id_dsa.pub >> ~celladmin/.ssh/authorized_keys; chown -Rf celladmin:celladmin ~celladmin/.ssh"
ssh -l celladmin cell1 cellcli -e 'list cell detail'
After all of these steps have been completed, the Exadata Storage Management Plug-In can be installed and deployed.
}}}
''Agent Failover''
http://blogs.oracle.com/XPSONHA/entry/failover_capability_for_plugins_exadata
http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/exadata/exadatav2/38_DBM_EM_Plugin_HA/38_dbm_em_plugin_ha_viewlet_swf.html
''Monitoring Exadata database machine with Oracle Enterprise Manager 11g'' http://dbastreet.com/blog/?p=674
''“Plugging” in the Database Machine'' http://dbatrain.wordpress.com/2011/06/
''Oracle Enterprise Manager Grid Control Exadata Monitoring plug-in bundle'' http://www.oracle.com/technetwork/oem/grid-control/downloads/devlic-188770.html <-- download link
PDU Threshold Settings for Oracle Exadata Database Machine using Enterprise Manager [ID 1299851.1]
* Install and Configure the Agent and the Plugins
Follow MOS Note 1110675.1 to install the agents and configure the exadata cell plugin
Oracle Exadata Avocent MergePoint Unity Switch http://download.oracle.com/docs/cd/E11857_01/install.111/e20086/toc.htm
Oracle Exadata Cisco Switch http://download.oracle.com/docs/cd/E11857_01/install.111/e20084/toc.htm
Oracle Exadata ILOM http://download.oracle.com/docs/cd/E11857_01/install.111/e20083/toc.htm
Oracle Exadata Infiniband Switch http://download.oracle.com/docs/cd/E11857_01/install.111/e20085/toc.htm
Oracle Exadata Power Distribution Unit http://download.oracle.com/docs/cd/E11857_01/install.111/e20087/toc.htm
Oracle Exadata Storage Server http://download.oracle.com/docs/cd/E11857_01/install.111/e14591/toc.htm
* Additional tutorials with screenshots on configuring the plugins can be found below
Monitor Exadata Database Machine: Agent Installation and Configuration http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5504,2
Monitor Exadata Database Machine: Configuring ASM and Database Targets http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5505,2
Monitor Exadata Database Machine: Configuring the Exadata Storage Server Plug-in http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5506,2
Monitor Exadata Database Machine: Configuring the ILOM Plug-in http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5507,2
Monitor Exadata Database Machine: Configuring the InfiniBand Switch Plug-in http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5508,2
Monitor Exadata Database Machine: Configuring the Cisco Ethernet Switch Plug-in http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5509,2
Monitor Exadata Database Machine: Configuring the Avocent KVM Switch Plug-in http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5510,2
Monitor Exadata Database Machine: Configuring User Defined Metrics for Additional Network Monitoring http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5511,2
Monitor Exadata Database Machine: Configuring Plug-ins for High Availability http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5512,2
Monitor Exadata Database Machine: Creating a Dashboard for Database Machine http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5513,2
''Exadata Plugin names''
oracle_cell oracle_cell_11.2.2.3.jar
cisco_switch cisco_switch.jar
kvm kvm.jar
oracle_x2ib oracle_x2_ib.jar
oracle_x2cn.jar
oracle_exadata_hc.jar
pdu.jar
Exadata X5-2: Extreme Flash and Elastic Configurations https://www.youtube.com/watch?v=xfnGIiFoSAE
https://docs.oracle.com/cd/E50790_01/doc/doc.121/e51953/app_whatsnew.htm#CEGEAGDH
How to Replace an Exadata X5-2 Storage Server NVMe drive (Doc ID 2003727.1)
Oracle® Exadata Storage Server X5-2 Extreme Flash Service Manual https://docs.oracle.com/cd/E41033_01/html/E55031/z4000419165586.html#scrolltoc
http://www.evernote.com/shard/s48/sh/1eb5b0c7-11c9-439c-a24f-4b8f8f6f3fae/f8eee4a52c650d87ec993039237237bb
https://blogs.oracle.com/AlejandroVargas/entry/exadata_parameter_auto_manage_exadata
Troubleshooting guide for Underperforming FlashDisks [ID 1348938.1]
http://www.oracle.com/us/products/servers-storage/storage/flash-storage/f20-data-sheet-403555.pdf
http://www.oracle.com/us/products/servers-storage/storage/flash-storage/f40-data-sheet-1733796.pdf
http://www.oracle.com/us/products/servers-storage/storage/flash-storage/f80-ds-2043658.pdf
http://www.oracle.com/technetwork/database/exadata/exadata-smart-flash-cache-366203.pdf
http://pages.cs.wisc.edu/~jignesh/publ/SmartSSD-slides.pdf
http://www.evernote.com/shard/s48/sh/bdaba4a6-f2f3-4a0f-bff0-d7daacc9252b/f29b87c951fbf58f175ffaf87a3a899e
explained to a customer the correlation of instance IO vs cellmetrics by (CG,DB - flash vs hard disk)
http://www.evernote.com/l/ADB0VbIOPs1Leb4s79Np5GvKmHPER93wW0g/
http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r1/exadata_perf/exadata_perf_viewlet_swf.html
''IO bi = KBread/s''
[img[picturename| https://lh4.googleusercontent.com/-r0rWQPyALcM/Tdsg7C9s_OI/AAAAAAAABRw/PdBeB9HPkxQ/throughput.png]]
[img[picturename| https://lh3.googleusercontent.com/-XlSllE-5cXY/Tdsg61gILdI/AAAAAAAABRo/y03hHMUpP8Y/throughput2.png]]
[img[picturename| https://lh5.googleusercontent.com/-gyDVbldZCFE/Tdsg7PdN3MI/AAAAAAAABRs/ZibKFYUiK7I/throughput3.png]]
[img[picturename| https://lh5.googleusercontent.com/-UNqgMcCmEtM/Tdsg7U04VGI/AAAAAAAABR4/gBTshCeU4x0/throughput4.png]]
[img[picturename| https://lh4.googleusercontent.com/-HpaTu09g_cA/Tdsg7aaL20I/AAAAAAAABR0/POykxlhuLUs/throughput5.png]]
[img[picturename| https://lh4.googleusercontent.com/-UC5gv5s3Icg/Tdsg7b8jQUI/AAAAAAAABR8/cd3OScz11Nw/throughput6.png]]
[img[picturename| https://lh3.googleusercontent.com/-RHkHb0v2Hwg/Tdsg7tt1LMI/AAAAAAAABSA/4mEraclNL8w/throughput7.png]]
[img[picturename| https://lh6.googleusercontent.com/-TX-cRRXCIZQ/Tdsg7saUh7I/AAAAAAAABSI/7Q0jptO8wIo/throughput8.png]]
[img[picturename| https://lh4.googleusercontent.com/-sDLBaNYUbng/Tdsg77vdqeI/AAAAAAAABSE/pQrSAsIeocY/throughput9.png]]
[img[picturename| https://lh5.googleusercontent.com/-SXinl7d3gA8/Tdsg75hmG7I/AAAAAAAABSM/w1_Je-hvv5Y/throughput10.png]]
[img[picturename| https://lh6.googleusercontent.com/-F81qnfIBUw0/Tdsg8BbGvuI/AAAAAAAABSQ/YcxLcF6rswA/throughput11.png]]
http://www.evernote.com/shard/s48/sh/af2c6e95-ebc3-4a03-9a54-ac1d36b82970/e7c1df5ecd2cb878f0c86e2fac019b79
surprising to know that the infiniband switches are running on centos, the whole update process is just an rpm update
http://www.evernote.com/shard/s48/sh/fed1e421-7b10-4d19-92d0-c2538a3f3c7c/0862beef70fc133490e8ae4ffeac8a42
<<showtoc>>
''collectl -sX'' https://lists.sdsc.edu/pipermail/npaci-rocks-discussion/2009-April/038950.html
http://collectl.sourceforge.net/Infiniband.html
http://collectl-utils.sourceforge.net/colmux.html
http://collectl-utils.sourceforge.net/
search "exadata infiniband bidirectional"
http://www.infosysblogs.com/oracle/2011/05/oracle_exadata_and_datawarehou.html
http://www.hpcuserforum.com/presentations/April2009Roanoke/MellanoxTechnologies.ppt
http://en.wikipedia.org/wiki/InfiniBand
http://www.oreillynet.com/pub/a/network/2002/02/04/windows.html
http://www.oracle.com/technetwork/database/exadata/dbmachine-x2-2-datasheet-175280.pdf
http://gigaom.com/cloud/infiniband-back-from-the-dead/
https://blogs.oracle.com/miker/entry/how_to_monitor_the_bandwidth
http://docs.oracle.com/cd/E23824_01/html/821-1459/gjwwf.html
http://www.scribd.com/doc/232417505/20/Infiniband-Network-Monitoring
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Testing_Early_InfiniBand_RDMA_operation.html
''watch E4 presentation of KJ on infiniband''
! 2020
<<<
1-Are there any dba_hist or v$ views that show process usage of RDMA over IB?
here are the relevant DBA_HIST views...
In 11g
Interconnect Ping Latency Stats - DBA_HIST_INTERCONNECT_PINGS
In 11gR2
Interconnect Throughput by Client - DBA_HIST_IC_CLIENT_STATS
Interconnect Device Statistics - DBA_HIST_IC_DEVICE_STATS, DBA_HIST_CLUSTER_INTERCONNECTS
2-how can we monitor the performance of rdma, or when a server process reaches into remote node memory?
The only way to do this is to use OS networking tools
https://weidongzhou.wordpress.com/2013/08/11/tools-to-check-out-network-traffic-on-exadata/
https://husnusensoy.wordpress.com/2009/08/28/full-coverage-in-infiniband-monitoring-with-oswatcher-3-0-part-1/
3-on Exadata X5 and X6 systems, are both ib0 and ib1 supposed to be working together, or is one just a backup for the other?
I think since the X4 the InfiniBand has been active-active
<<<
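For question 1, a first look at the interconnect throughput history could be a query like this (a sketch against DBA_HIST_IC_CLIENT_STATS; join to DBA_HIST_SNAPSHOT if you need snapshot timestamps):
{{{
-- interconnect bytes sent/received per client per AWR snapshot
select snap_id, instance_number, name, bytes_sent, bytes_received
from dba_hist_ic_client_stats
order by snap_id, instance_number, name;
}}}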
<<<
So what do these system metrics mean?
txn cache remote copy
txn cache remote copy misses
txn cache remote etc...
<-- I don't know
Also how can we tell if a session is utilizing rdma from a remote node? <-- probably do a pstack and see if similar rdma function calls exist (see the pstack sketch after this note) Bug 24326846 - RMAN channel using rdma dNFS may hang (Doc ID 24326846.8) , but then if this is exadata it is implied that you are using rdma
Have you encountered any issues related to ib/rds and MTU size in exadata and 18c? <-- usually exachk would be able to flag MTU issues; if there are any findings for that specific version then we change the values for that customer. If there are perf issues and the perf data points to some MTU/SDU config issue, then that's when we do the resizing
John Clarke has some useful infiniband commands here as well
https://learning.oreilly.com/library/view/oracle-exadata-recipes/9781430249146/9781430249146_Ch13.xhtml
other resources
https://www.slideshare.net/khailey/collaborate-nfs-kylefinal?next_slideshow=1
https://docs.oracle.com/cd/E23824_01/html/821-1459/gjwwf.html
OSB - Using RDS / RDMA over InfiniBand (Doc ID 1510603.1)
<<<
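For the pstack suggestion above, a quick check might look like this sketch (the PID is a placeholder for an Oracle server process; skgxp is Oracle's IPC layer):
{{{
# dump the process stack and grep for RDS/RDMA/Oracle IPC frames
pstack 12345 | grep -iE 'rds|rdma|skgxp'
}}}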
http://www.evernote.com/shard/s48/sh/0987f447-b24a-4a40-9f0a-2f7e19ad6bf0/f8bd7d1a1f948d9c162cd6ee88d8c8f4
http://www.evernote.com/shard/s48/sh/0ce1cfde-99b9-4e82-8e92-7be7dc5e60f9/02ae66088cc1509e580cab382d25a0f8
DR for Exalogic and Exadata + Oracle GoldenGate on Exadata https://blogs.oracle.com/XPSONHA/entry/dr_for_exalogic_and_exadata
''Oracle Sun Database Machine X2-2/X2-8 Backup and Recovery Best Practices'' [ID 1274202.1]
''Backup and Recovery Performance and Best Practices for Exadata Cell and Oracle Exadata Database Machine'' http://www.oracle.com/technetwork/database/features/availability/maa-tech-wp-sundbm-backup-11202-183503.pdf
''Oracle Data Guard: Disaster Recovery for Oracle Exadata Database Machine'' http://www.oracle.com/technetwork/database/features/availability/maa-wp-dr-dbm-130065.pdf
''Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration'' ID 1302539.1
http://vimeo.com/62754145 Exadata Maximum Availability Tests
Monitoring exadata health and resource usage white paper http://bit.ly/160dJrn
http://www.pythian.com/news/29333/exadata-memory-expansion-kit/
Exadata MAA Best Practices Migrating Oracle Databases
http://www.oracle.com/au/products/database/xmigration-11-133466.pdf
''Exadata FAQ''
http://www.oracle.com/technology/products/bi/db/exadata/exadata-faq.html
''My Experiences''
http://karlarao.wordpress.com/2010/05/30/seeing-exadata-in-action/
''Exadata Links''
http://tech.e2sn.com/oracle/exadata/links
http://tech.e2sn.com/oracle/exadata/articles
A grand tour of Oracle Exadata
http://www.pythian.com/expertise/oracle/exadata
http://www.pythian.com/news/13569/exadata-part-1/
http://www.pythian.com/news/13967/exadata-part2/
http://www.pythian.com/news/15673/exadata-part3/
http://www.pythian.com/news/15425/making-the-most-of-exadata/
http://www.pythian.com/news/15531/designing-for-exadata-maximizing-storage-indexes-use/
http://dbastreet.com/blog/?page_id=603 <-- good collection of links
''Exadata Comparisons''
Comparing Exadata and Netezza TwinFin
http://www.business-intelligence-quotient.com/?p=1030
''Exadata adhoc reviews''
''* A nice comment by Tanel Poder'' http://www.linkedin.com/groupItem?view=&srchtype=discussedNews&gid=3156190&item=32433184&type=member&trk=EML_anet_qa_ttle-dnhOon0JumNFomgJt7dBpSBA
<<<
-- Question by Ron Batra
I was wondering if people had any experiences to share regarding RAC on ExaData..?
-- Reply by Tanel Poder (http://tech.e2sn.com/team/tanel-poder)
Do you want good ones or bad ones? ;-)
As it's a general question, the answer will be quite general, too:
The "bad" thing is that RAC is still RAC on Exadata too. So, especially if you plan to use it for OLTP environments, there are things to consider.
Even the low-latency infiniband interconnect doesn't eliminate interconnect (and scheduling) latency and global cache wait events when you run write-write OLTP workload on the same dataset in multiple different instances. You should make sure (using services) that any serious write-write activity happens within the same physical server. But oh wait, Exadata v1 and v2 both consist of small 8-core DB nodes, so with serious OLTP workload it may not be possible to fit all the write-write activity into one 8-core node at all. So, got to be careful when planning heavy OLTP into Exadata. It's doable but needs more planning & testing if your workload is going to be significant. The new Exadata x2-8 would be better for heavy OLTP workloads as a single rack has only 2 physical DB layer servers (each with 64 cores) in it, so it'd be much easier to direct all write-write workload into one physical server.
For (a properly designed) DW workload with mostly no concurrent write-write activity on the same dataset, you shouldn't have GC bottleneck problem. However the DW should ideally be designed for (parallel) direct path full table scans (with proper partitioning design for partition pruning).
So, when you migrate your old reporting application to exadata (and it doesn't use good partitioning, indexes used everywhere and no parallel execution is used) then you might not end up getting much out of the smart scans. Or when the ETL job is a tight (PL/SQL) loop, performing single row fetches and inserts, then you won't get anywhere near the "promised" Exadata data load speeds etc.
What else... If anyone (even from Oracle) says, you don't need any indexes in Exadata, don't believe them. I have a client who didn't use any indexes even before they moved to Exadata (their schema was explicitly designed for partition pruning, full partition scans and "brute-force" hash joins). They were very happy when they moved to Exadata, because this is the kind of workload which allows smart scans to kick in.
Another client's applications relied on indexes in their old environments. They followed someone's (apparently from Oracle) recommendation to drop all indexes (to save storage space) and the performance on exadata sucked. This is because their schema & application was not optimized for such brute force processing. They started adding indexes back to get the performance back to acceptable levels.
Another surprise from the default Exadata configuration was related to the automatic parallel execution configuration. Some queries ended up allocating 512 slaves across the whole Exadata rack. The only way to limit this was to use resource manager (and this is what I always use). All the other magic automatic features failed in some circumstances (I'll blog about it some day).
Btw, don't hope to ever see these promised 5TB/hour load times in real life. In real life you probably want to use compression to save space in the limited Exadata storage and compression is done in the database nodes only (while the cells may be completely idle), so your real life load rate with compression is going to be much lower, depending on the compression options you use (well that's the same for all other vendors too, but marketing usually doesn't tell you that).
Phew, this wasn't just specific to RAC on Exadata, but just some experiences I've had to deal with. Exadata doesn't make everything always faster - out of the box. But if your application/schema design is right, then it will rock!
<<<
''* Kevin Closson interview'' http://www.pythian.com/news/1267/interview-kevin-closson-on-the-oracle-exadata-storage-server/
''Exadata Patches''
http://www.pythian.com/news/15477/exadata-bp5-patching-issues/
Potential data loss issue on Exadata http://goo.gl/8c1t3
''Exadata Presentations''
http://husnusensoy.wordpress.com/2010/10/16/exadata-v2-fast-track-session-slides-in-rac-sig-turkey/
Cool product on predictive performance management - BEZVision for Databases http://goo.gl/aQRvK + ExadataV2 presentation http://goo.gl/Zf6Pw
Tanel Poder - Performance stories from Exadata Migrations http://goo.gl/hQPdq
''Exadata Features''
* Hybrid Columnar Compression
http://blogs.oracle.com/databaseinsider/2010/11/exadata_hybrid_columnar_compre.html
http://oracle-randolf.blogspot.com/2010/10/112-new-features-subtle-restrictions.html
* Cell offload
http://dbatrain.wordpress.com/2009/06/23/measuring-exadata-offloads-efficiency/
http://dbatrain.wordpress.com/2010/11/05/dbms-for-dbas-offloads-are-for-you-too/
* Smart Scan
http://www.pythian.com/news/18077/exadata-smart-scans-and-the-flash-cache/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+PythianGroupBlog+(Pythian+Group+Blog)
''Exadata v2 InfiniBand Network 880 Gb/sec aggregate throughput''
{{{
Each machine has a 40Gb/sec Infiniband network card (two HCA ports bonded together, but it's only 40 Gb/sec per machine).
Exadata v2 have 8 DB Machine and 14 Storage servers and total is 22 servers.
So 22 X 40 Gb/sec is 880 Gb/sec.
}}}
-- Enables storage predicates to be shown in the SQL execution plans of your session even if you do not have Exadata
alter session set CELL_OFFLOAD_PLAN_DISPLAY = ALWAYS;
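For example, on a non-Exadata test database the storage predicates can be displayed like this (a minimal sketch; the table and predicate are hypothetical):
{{{
alter session set cell_offload_plan_display = always;
explain plan for select * from sales where amount_sold > 1000;
select * from table(dbms_xplan.display);
-- look for the storage(...) entries under "Predicate Information"
}}}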
You can use the following V$ views and corresponding statistics to monitor Exadata cells’ activity from a database instance:
* V$CELL view provides identifying information extracted from the cellip.ora file.
* V$BACKUP_DATAFILE view contains various columns relevant to Exadata Cell during RMAN incremental backups. The BLOCKS_SKIPPED_IN_CELL column is a count of the number of blocks that were read and filtered at the Exadata Cell to optimize the RMAN incremental backup.
* You can query the V$SYSSTAT view for key statistics that can be used to compute Exadata Cell effectiveness:
<<<
physical IO disk bytes - Total amount of I/O bytes processed with physical disks (includes when processing was offloaded to the cell and when processing was not offloaded)
cell physical IO interconnect bytes - Number of I/O bytes exchanged over the interconnection (between the database host and cells)
cell physical IO bytes eligible for predicate offload - Total number of I/O bytes processed with physical disks when processing was offloaded to the cell
''The following statistics show the Exadata Cell benefit due to optimized file creation and optimized RMAN file restore operations:''
cell physical IO bytes saved during optimized file creation - Number of bytes of I/O saved by the database host by offloading the file creation operation to cells
cell physical IO bytes saved during optimized rman file restore - Number of bytes of I/O saved by the database host by offloading the RMAN file restore operation to cells
<<<
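The cell statistics above can be pulled system-wide with a simple query (a sketch; what actually matters is the delta between two samples):
{{{
-- all cell physical IO statistics, including the offload
-- and savings counters listed above
select name, value
from v$sysstat
where name like 'cell physical IO%';
}}}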
* Wait Events
<<<
cell single block physical read - Same as db file sequential read for a cell
cell multiblock physical read - Same as db file scattered read for a cell
cell smart table scan - DB waiting for table scans to complete
cell smart index scan - DB waiting for index or IOT fast full scans
cell smart file creation - waiting for file creation completion
cell smart incremental backup - waiting for incremental backup completion
cell smart restore from backup - waiting for file initialization completion for restore
cell statistics gather
<<<
The following query displays the cell path and disk name corresponding to cell wait events; the same drill down is also possible via ASH (see the sketch after the query)
{{{
SELECT w.event, c.cell_path, d.name, w.p3
FROM V$SESSION_WAIT w, V$EVENT_NAME e, V$ASM_DISK d, V$CELL c
WHERE e.name LIKE 'cell%'
AND e.wait_class_id = w.wait_class_id
AND w.p1 = c.cell_hashval
AND w.p2 = d.hash_value;
}}}
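And for the ASH drill-down mentioned above, a starting point could be (a sketch; add a sample_time window as needed):
{{{
select event, p1, p2, count(*) samples
from v$active_session_history
where event like 'cell%'
group by event, p1, p2
order by samples desc;
}}}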
* To assess offload processing efficiency, this query calculates the percentage of I/O that was filtered by offloading to Exadata (a value near 100% means nearly all eligible bytes were filtered in the cells).
{{{
SQL> select 100 - 100*s1.value/s2.value io_filtering_percentage
       from v$mystat s1
          , v$mystat s2
          , v$statname n1
          , v$statname n2
      where s1.statistic# = n1.statistic#
        and s2.statistic# = n2.statistic#
        and n1.name = 'cell physical IO interconnect bytes'
        and n2.name = 'cell physical IO bytes eligible for predicate offload';
IO_FILTERING_PERCENTAGE
-----------------------
99.9872062
}}}
* It is also possible to use SQL Performance Analyzer to assess offload processing. You can use the tcellsim.sql script located in $ORACLE_HOME/rdbms/admin for that purpose. The comparison uses the IO_INTERCONNECT_BYTES statistic.
http://www.evernote.com/shard/s48/sh/b9a4437d-9444-4748-b4c4-6d0a84113fc2/ab3682c18ba5e3fe08478378ea3b5804
''Advisor Webcast Archived Recordings [ID 740964.1]''
Database https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=740964.1#data
OEM https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=740964.1#em
Exadata https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=740964.1#exadata
http://apex.oracle.com/pls/apex/f?p=44785:2:3562636332635165:FORCE_QUERY::2,CIR,RIR:P2_TAGS:Exadata
{{{
Exadata Smart Flash Log Self-Study Module Tutorial 24-Nov-11 26 mins
Exadata Smart Flash Log Demonstration Video 21-Nov-11 9 mins
Using Exadata Smart Scan Self-Study Module Tutorial 09-Nov-11 45 mins
Using Exadata Smart Scan Demonstration Demo 08-Nov-11 11 mins
Oracle Enterprise Manager 12c: Manage Oracle Exadata with Oracle Enterprise Manager Video 02-Nov-11 4 mins
Oracle Enterprise Manager 12c: Monitor an Exadata Environment Video 02-Oct-11 9 mins
Part 1 - Load the Data Video 01-Jun-11 14 mins
Part 2 - Gather Optimizer Statistics on the Data Video 01-Jun-11 8 mins
Part 3 - Validate and Transform the Data Video 01-Jun-11 10 mins
Part 4 - Query the Data Video 01-Jun-11 11 mins
Oracle Real World Performance Video Series - Migrate a 1TB Datawarehouse in 20 Minutes Video 01-Jun-11 40 mins
Administer Exadata Database Machine: Exadata Storage Server Patch Rollback Demo 23-May-11
Administer Exadata Database Machine: Exadata Storage Server Rolling Patch Application Demo 23-May-11
Exadata Database Machine: Using Quality of Service Management Demo 23-May-11
Exadata Database Machine: Configuring Quality of Service Management Demo 23-May-11
Monitor Exadata Database Machine: Agent Installation and Configuration Demo 23-May-11
Monitor Exadata Database Machine: Configuring ASM and Database Targets Demo 23-May-11
Monitor Exadata Database Machine: Configuring the Exadata Storage Server Plug-in Demo 23-May-11
Monitor Exadata Database Machine: Configuring the ILOM Plug-in Demo 23-May-11
Monitor Exadata Database Machine: Configuring the InfiniBand Switch Plug-in Demo 23-May-11
Monitor Exadata Database Machine: Configuring the Cisco Ethernet Switch Plug-in Demo 23-May-11
Monitor Exadata Database Machine: Configuring the Avocent KVM Switch Plug-in Demo 23-May-11
Monitor Exadata Database Machine: Configuring User Defined Metrics for Additional Network Monitoring Demo 23-May-11
Monitor Exadata Database Machine: Configuring Plug-ins for High Availability Demo 23-May-11
Monitor Exadata Database Machine: Creating a Dashboard for Database Machine Demo 23-May-11
Monitor Exadata Database Machine: Monitoring Exadata Storage Servers using Enterprise Manager Grid Control and the System Monitoring Plug-in for Exadata Storage Server Demo 23-May-11
Monitor Exadata Database Machine: Managing Exadata Storage Server Alerts and Checking for Undelivered Alerts Demo 23-May-11
Monitor Exadata Database Machine: Exadata Storage Server Monitoring and Management using Integrated Lights Out Manager (ILOM) Demo 23-May-11
Monitor Exadata Database Machine: Monitoring the Database Machine InfiniBand network Demo 23-May-11
Monitor Exadata Database Machine: Monitoring the Cisco Catalyst Ethernet switch and the Avocent MergePoint Unity KVM using Grid Control Demo 23-May-11
Monitor Exadata Database Machine: Using HealthCheck Demo 23-May-11
Monitor Exadata Database Machine: Using DiagTools Demo 23-May-11
Monitor Exadata Database Machine: Using ADRCI on an Exadata Storage Cell Demo 23-May-11
Monitor Exadata Database Machine Demo 23-May-11
Oracle Exadata Database Machine Best Practices Series Tutorial 29-Mar-11
Managing Parallel Processing with the Database Resource Manager Demo 19-Nov-10 60 mins
Exadata and Database Machine Version 2 Series - 1 of 25: Introduction to Smart Scan Demo 19-Sep-10 10 mins
Exadata and Database Machine Version 2 Series - 2 of 25: Introduction to Exadata Hybrid Columnar Compression Demo 19-Sep-10 10 mins
Exadata and Database Machine Version 2 Series - 3 of 25: Introduction to Exadata Smart Flash Cache Demo 19-Sep-10 12 mins
Exadata and Database Machine Version 2 Series - 4 of 25: Exadata Process Introduction Demo 19-Sep-10 6 mins
Exadata and Database Machine Version 2 Series - 5 of 25: Hierarchy of Exadata Storage Objects Demo 19-Sep-10 8 mins
Exadata and Database Machine Version 2 Series - 6 of 25: Creating Interleaved Grid Disks Demo 19-Sep-10 8 mins
Exadata and Database Machine Version 2 Series - 7 of 25: Examining Exadata Smart Flash Cache Demo 19-Sep-10 8 mins
Exadata and Database Machine Version 2 Series - 8 of 25: Exadata Cell Configuration Demo 19-Sep-10 6 mins
Exadata and Database Machine Version 2 Series - 9 of 25: Exadata Storage Provisioning Demo 19-Sep-10 7 mins
Exadata and Database Machine Version 2 Series - 10 of 25: Consuming Exadata Grid Disks Using ASM Demo 19-Sep-10 10 mins
Exadata and Database Machine Version 2 Series - 11 of 25: Exadata Cell User Accounts Demo 19-Sep-10 5 mins
Exadata and Database Machine Version 2 Series - 12 of 25: Monitoring Exadata Using Metrics, Alerts and Active Requests Demo 19-Sep-10 10 mins
Exadata and Database Machine Version 2 Series - 13 of 25: Monitoring Exadata From Within Oracle Database Demo 19-Sep-10 10 mins
Exadata and Database Machine Version 2 Series - 14 of 25: Exadata High Availability Demo 19-Sep-10 10 mins
Exadata and Database Machine Version 2 Series - 15 of 25: Intradatabase I/O Resource Management Demo 19-Sep-10 10 mins
Exadata and Database Machine Version 2 Series - 16 of 25: Interdatabase I/O Resource Management Demo 19-Sep-10 12 mins
Exadata and Database Machine Version 2 Series - 17 of 25: Configuring Flash-Based Disk Groups Demo 19-Sep-10 16 mins
Exadata and Database Machine Version 2 Series - 18 of 25: Examining Exadata Hybrid Columnar Compression Demo 19-Sep-10 14 mins
Exadata and Database Machine Version 2 Series - 19 of 25: Index Elimination with Exadata Demo 19-Sep-10 8 mins
Exadata and Database Machine Version 2 Series - 20 of 25: Database Machine Configuration Example using Configuration Worksheet Demo 19-Sep-10 14 mins
Exadata and Database Machine Version 2 Series - 21 of 25: Migrating to Database Machine Using Transportable Tablespaces Demo 19-Sep-10 14 mins
Exadata and Database Machine Version 2 Series - 22 of 25: Bulk Data Loading with Database Machine Demo 19-Sep-10 20 mins
Exadata and Database Machine Version 2 Series - 23 of 25: Backup Optimization Using RMAN and Exadata Demo 19-Sep-10 15 mins
Exadata and Database Machine Version 2 Series - 24 of 25: Recovery Optimization Using RMAN and Exadata Demo 19-Sep-10 12 mins
Exadata and Database Machine Version 2 Series - 25 of 25: Using the distributed command line utility (dcli) Demo 19-Sep-10 14 mins
Using Exadata Smart Scan Video 19-Aug-10 4 mins
Storage Index in Exadata Demo 01-Mar-10
Hybrid Columnar Compression Demo 01-Oct-09 22 mins
Smart Flash Cache Architecture Demo 01-Oct-09 8 mins
Cell First Boot Demo 01-Sep-09 5 mins
Cell Configuration Demo 01-Sep-09 10 mins
Smart Scan Scale Out Example Demo 01-Sep-09 10 mins
Smart Flash Cache Monitoring Demo 01-Sep-09 25 mins
The Magic of Exadata Demo 01-Jul-07
Configuring DCLI Demo 01-Jul-07 5 mins
Installing and Configuring Enterprise Manager Exadata Plug-in (Part 1) Demo 01-Jul-07 24 mins
Installing and Configuring Enterprise Manager Exadata Plug-in (Part 2) Demo 01-Jul-07 30 mins
Exadata Cell First Boot Initialization Demo 01-Jul-07 12 mins
Exadata Calibrate and Cell/Grid Disks Configuration Demo 01-Jul-07 12 mins
IORM and Exadata Demo 01-Jul-07 40 mins
Possible Execution Plans with Exadata Offloading Demo 01-Jul-07
Real Performance Tests with Exadata Demo 01-Jul-07 42 mins
Exadata Automatic Reconnect Demo 01-Jul-07 12 mins
Exadata Cell Failure Scenario Demo 01-Jul-07 10 mins
}}}
with screenshots
http://netsoftmate.blogspot.com/2017/01/discover-exadata-database-machine-in.html
https://netsoftmate.com/discover-exadata-database-machine-in/
Enterprise Manager Oracle Exadata Database Machine Getting Started Guide
https://docs.oracle.com/cd/E63000_01/EMXIG/ch4_post_discovery.htm#EMXIG143
http://kevinclosson.wordpress.com/2012/02/27/modern-servers-are-better-than-you-think-for-oracle-database-part-i-what-problems-actually-need-fixed/
http://sqlblog.com/blogs/joe_chang/archive/2011/11/29/intel-server-strategy-shift-with-sandy-bridge-en-ep.aspx
http://kevinclosson.wordpress.com/2012/05/02/oracles-timeline-copious-benchmarks-and-internal-deployments-prove-exadata-is-the-worlds-first-best-oltp-machine/#comments
http://kevinclosson.wordpress.com/2011/11/01/flash-is-fast-provisioning-flash-for-oracle-database-redo-logging-emc-f-a-s-t-is-flash-and-fast-but-leaves-redo-where-it-belongs/
http://glennfawcett.wordpress.com/2011/05/10/exadata-drives-exceed-the-laws-of-physics-asm-with-intelligent-placement-improves-iops/
''Conversations with Kevin about OLTP and IOPS - FW: Fwd: IOPs from your scripts - Exadata - link'' - http://www.evernote.com/shard/s48/sh/c270db94-a167-4913-8676-024a7e2cdefa/9146389f651cc09202e1182d2c883b2c
On the cell nodes /usr/share/doc/oracle/Exadata/doc
In the edelivery zip file p18084575_121111_Linux-x86-64.zip, go to the directory
/Users/karl/Downloads/software/database/iso_exadata_121111/V46534-01.zip Folder/dl180/boot/cellbits/doclib.zip
''112240''
Most of the things that were removed were put into the storage server owner's guide (multi-rack cabling is now an appendix; site planning has been broken out into relevant chapters of the owner's guide), etc.
<<<
''* Release Notes''
[[ e15589.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e15589.pdf ]] <- Oracle® Exadata Storage Server Hardware Read This First 11g Release 2 ##
[[ e13875.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e13875.pdf ]] <- Oracle Exadata Database Machine Release Notes 11g Release 2 ##
[[ e13862.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e13862.pdf ]] <- Oracle® Exadata Storage Server Software Release Notes 11g Release 2 ##
[[ e13106.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e13106.pdf ]] <- Oracle® Enterprise Manager Release Notes for System Monitoring Plug-In for Oracle Exadata Storage Server ##
''* Site/Hardware Readiness''
[[ e17431.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e17431.pdf ]] <- Sun Oracle Database Machine Site Planning Guide
[[ e16099.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e16099.pdf ]] <- Oracle® Exadata Database Machine Configuration Worksheets 11g Release 2 ##
[[ e10594.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e10594.pdf ]] <- Oracle® Database Licensing Information 11g Release 2 ###
''* Installation''
[[ e17432.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e17432.pdf ]] <- Sun Oracle Database Machine Installation Guide
[[ e13874.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e13874.pdf ]] <- Oracle® Exadata Database Machine Owner's Guide 11g Release 2 ##
[[ install.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\install.pdf ]] <- Oracle Exadata Quick-Installation Guide
[[ e14591.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e14591.pdf ]] <- Oracle® Enterprise Manager System Monitoring Plug-In Installation Guide for Oracle Exadata Storage Server ##
''* Administration'' 112240
[[ e13861.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e13861.pdf ]] <- Oracle® Exadata Storage Server Software User's Guide 11g Release 2 ##
''* Cabling/Monitoring'' 112240
[[ e17435.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e17435.pdf ]] <- Sun Oracle Database Machine Multi-Rack Cabling Guide
[[ e13105.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e13105.pdf ]] <- Oracle® Enterprise Manager System Monitoring Plug-In Metric Reference Manual for Oracle Exadata Storage Server ##
<<<
''112232''
<<<
''* Release Notes''
[[ e15589.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e15589.pdf ]] <- Oracle® Exadata Storage Server Hardware Read This First 11g Release 2
[[ e13875.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e13875.pdf ]] <- Oracle Exadata Database Machine Release Notes 11g Release 2
[[ e13862.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e13862.pdf ]] <- Oracle® Exadata Storage Server Software Release Notes 11g Release 2
[[ e13106.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e13106.pdf ]] <- Oracle® Enterprise Manager Release Notes for System Monitoring Plug-In for Oracle Exadata Storage Server
''* Site/Hardware Readiness''
[[ e17431.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e17431.pdf ]] <- Sun Oracle Database Machine Site Planning Guide
[[ e16099.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e16099.pdf ]] <- Oracle® Exadata Database Machine Configuration Worksheets 11g Release 2
[[ e10594.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e10594.pdf ]] <- Oracle® Database Licensing Information 11g Release 2
''* Installation''
[[ e17432.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e17432.pdf ]] <- Sun Oracle Database Machine Installation Guide
[[ e13874.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e13874.pdf ]] <- Oracle® Exadata Database Machine Owner's Guide 11g Release 2
[[ install.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\install.pdf ]] <- Oracle Exadata Quick-Installation Guide
[[ e14591.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e14591.pdf ]] <- Oracle® Enterprise Manager System Monitoring Plug-In Installation Guide for Oracle Exadata Storage Server
''* Administration'' 112232
[[ e13861.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e13861.pdf ]] <- Oracle® Exadata Storage Server Software User's Guide 11g Release 2
''* Cabling/Monitoring'' 112232
[[ e17435.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e17435.pdf ]] <- Sun Oracle Database Machine Multi-Rack Cabling Guide
[[ e13105.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e13105.pdf ]] <- Oracle® Enterprise Manager System Monitoring Plug-In Metric Reference Manual for Oracle Exadata Storage Server
<<<
''112220''
<<<
''* Release Notes''
[[e15589.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e15589.pdf ]] <- Oracle® Exadata Storage Server Hardware Read This First 11g Release 2
[[e13875.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e13875.pdf ]] <- Oracle Exadata Database Machine Release Notes 11g Release 2
[[e13862.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e13862.pdf ]] <- Oracle® Exadata Storage Server Software Release Notes 11g Release 2
[[e13106.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e13106.pdf ]] <- Oracle® Enterprise Manager Release Notes for System Monitoring Plug-In for Oracle Exadata Storage Server
''* Site/Hardware Readiness'' 112220
[[e17431.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e17431.pdf ]] <- Sun Oracle Database Machine Site Planning Guide
[[e16099.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e16099.pdf ]] <- Oracle® Exadata Database Machine Configuration Worksheets 11g Release 2
[[e10594.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e10594.pdf ]] <- Oracle® Database Licensing Information 11g Release 2
''* Installation'' 112220
[[e17432.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e17432.pdf ]] <- Sun Oracle Database Machine Installation Guide
[[e13874.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e13874.pdf ]] <- Oracle® Exadata Database Machine Owner's Guide 11g Release 2
[[install.pdf| C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\install.pdf ]] <- Oracle Exadata Quick-Installation Guide
[[e14591.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e14591.pdf ]] <- Oracle® Enterprise Manager System Monitoring Plug-In Installation Guide for Oracle Exadata Storage Server
''* Administration'' 112220
[[e13861.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e13861.pdf ]] <- Oracle® Exadata Storage Server Software User's Guide 11g Release 2
''* Cabling/Monitoring'' 112220
[[e17435.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e17435.pdf ]] <- Sun Oracle Database Machine Multi-Rack Cabling Guide
[[e13105.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e13105.pdf ]] <- Oracle® Enterprise Manager System Monitoring Plug-In Metric Reference Manual for Oracle Exadata Storage Server
<<<
A nice diagram of the whole HW installation process
http://www.evernote.com/shard/s48/sh/3b3b70a8-b28e-48b7-bc99-141e8ca1b5ba/851bf62bc26c4de0e13b18e2f7b9a592
''The blueprint''
http://www.facebook.com/photo.php?pid=7017079&l=72efd9ea41&id=552113028
''Treemap version''
http://www.facebook.com/photo.php?pid=6973769&l=9b4b053f64&id=552113028
http://www.facebook.com/photo.php?pid=7076816&l=beea222cd0&id=552113028
''Failure scenario''
http://www.facebook.com/photo.php?pid=7118589&l=cd58bfb8e4&id=552113028
''The Provisioning Worksheet''
http://www.facebook.com/photo.php?pid=7163444&l=9e30e54cea&id=552113028
some other notes, speeds and feeds, etc. http://www.evernote.com/shard/s48/sh/a8c75ac7-9019-43cc-8ada-fad80681a63a/fdf513512c3bef27d4ac00c1912a8b13
-- ''Papers''
http://www.linkedin.com/groupItem?view=&srchtype=discussedNews&gid=918317&item=63941267&type=member&trk=eml-anet_dig-b_pd-ttl-cn&ut=0pKCK5WPN524Y1 <-- kerry explains how we do it
<<<
Kerry Osborne
We've worked on a number of consolidation projects. The first step is always an analysis of the DBs that need to be migrated. This is not significantly different than a consolidation on to a non-Exadata platform. This step includes gathering a bunch of raw data including current memory usage (both SGA and sessions), type of CPUs (so a calculation can compare to the relative speed of the CPUs on Exadata), number of CPUs and utilization at peak, storage usage, projected growth, etc… One key early step is to determine which (if any) databases can be combined into a single database. Determining which can live together is just standard analysis of whether they can play nicely together (downtime windows, backup requirements, version constraints should match up fairly closely). This is usually done with databases that are not considered super critical from a performance standpoint, by the way. We're working on a project now that started with 90 instances going onto a half rack. In this case there is only one large system and a whole bunch of very small systems, so combining instances was a key part of the plan.
Once the mapping of source to destination instances has been determined then we work on defining the requirements for each new instance on Exadata. This includes HA considerations (RAC or not). This is where a little bit of art enters the picture. Since Exadata is capable of offloading work to the storage tier, some estimation as to how well each individual system will be able to take advantage of Exadata optimizations should be a part of the process. Systems that can offload a lot of work don't need as much CPU on the compute nodes, for example, as on the original platforms.
The next step is to take those requirements and lay the instances out across the compute nodes and storage cells. Since we've done several of these projects, we've built some tools to help automate the process, including visualizing how resources are divided amongst the instances. This allows us to easily play with "what if" scenarios to see what happens if you lose a node in a RAC cluster, for example.
Also, you might want to consider using Instance Caging and DBRM/IORM to limit resource usage (see the instance caging sketch after this quote). This will help avoid the situation where users of the first few systems that get migrated become disappointed when the system slows down as more and more systems are migrated onto the platform.
One final thing you might want to consider is that you can carve the storage up into independent clusters as well. We call this a "Split Config". If for example you want to make sure that work on your test environment is relatively isolated from your production environment, you can create two separate clusters each with their own compute nodes and their own storage cells inside a single rack. You'll still be sharing the IB network, but the rest will be separated. This can also provide a way to test patches on part of the system (dev/test for example), without affecting the production cluster. It's not as good as having a separate rack, but it's better than not having anywhere to vet a patch before applying it to production.
For sizing, you won't have to go through as much detail, but you should consider the same issues and do some basic calculations. In practice, most of the sizing decisions we've observed have been dominated by storage requirements including projected growth over whatever time the business is intending to amortize the purchase. This includes throughput as well as volume considerations.
Hope that helps.
!
Kevin,
Yes, the process I described is certainly more involved than a one day exercise. It really depends on how accurate you want to be, but there is a fair amount of leg work that should be done to be confident about your sizing and capacity planning. The larger ones we've worked on have been a few weeks (2-4) depending on the number of environments. For the most part it is something that can be done by any experienced Oracle person. I would expect that someone that doesn't have experience with Exadata will tend to overestimate memory and CPU requirements on the DB tier based on current usage, but I could be wrong about that. OLTP systems won't see much reduction in those requirements by the way, while DW type systems will.
On the issue of index usage, you should definitely allow time for testing to prove to the business whether dropping some will be beneficial or not. In some cases they will be absolutely necessary (OLTP oriented workloads). In others they will not. In my opinion, many systems are over indexed, and the process of moving to Exadata provides a good excuse to evaluate them and get rid of some that are not necessary. This is a hard sell in many shops, so Exadata can actually be a political help in some of these situations. As far as your comment about sizing and indexes, if you think that the business will not allow you to make changes to the application (including index usage), you should probably do your POC / POV with the app as it exists today. We do commonly find that a little bit of tweaking can pay huge dividends though. So again, I would highly recommend allowing time for testing prior to doing a production cut over.
<<<
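Kerry's Instance Caging suggestion above is easy to sketch. As a minimal example (values are illustrative, not a recommendation): caging is just capping cpu_count while a resource manager plan is active.
{{{
-- minimal instance caging sketch (illustrative values)
-- caging only takes effect while a resource manager plan is enabled
alter system set resource_manager_plan = 'DEFAULT_PLAN' scope=both sid='*';
alter system set cpu_count = 8 scope=both sid='*';  -- cap this instance at 8 CPUs
}}}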
''Oracle Exadata Database Machine Consolidation: Segregating Databases and Roles'' http://www.oracle.com/technetwork/database/focus-areas/availability/maa-exadata-consolidated-roles-459605.pdf
''Database Instance Caging: A Simple Approach to Server Consolidation'' http://www.oracle.com/technetwork/database/focus-areas/performance/instance-caging-wp-166854.pdf
''Boris - Capacity Management for Oracle Database Machine Exadata v2'' https://docs.google.com/viewer?url=http://www.nocoug.org/download/2010-05/DB_Machine_5_17_2010.pdf&pli=1
''Performance Stories from Exadata Migrations'' http://www.slideshare.net/tanelp/tanel-poder-performance-stories-from-exadata-migrations
''Workload Management for Operational Data Warehousing'' http://blogs.oracle.com/datawarehousing/entry/workload_management_for_operat
''Workload Management – Statement Queuing'' http://blogs.oracle.com/datawarehousing/entry/workload_management_statement
''Workload Management – A Simple (but real) Example'' http://blogs.oracle.com/datawarehousing/entry/workload_management_a_simple_b
''A fair bite of the CPU pie? Monitoring & Testing Oracle Resource Manager'' http://rnm1978.wordpress.com/2010/09/10/a-fair-bite-of-the-cpu-pie-monitoring-testing-oracle-resource-manager/
''Parallel Execution and workload management for an Operational DW environment'' http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/twp-bidw-parallel-execution-130766.pdf
http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/index.html
! Other cool stuff the prov worksheet can do:
''HOWTO: Update the worksheet from an existing environment'' http://www.evernote.com/shard/s48/sh/5c981ef3-f504-4c20-8f19-34ba57f4d0d6/c404ce1e552ce3d7440a2911573dde3e
''provisioning email to tanel'' RE: refreshing dev/testing databases from exadata - v2 - link - http://www.evernote.com/shard/s48/sh/2e1ca2e0-7bb5-4829-b18a-4bb8ac3d003e/54d7fc3956cd6cb1c5761b17c2055c6b
''free -m, hugepages, free memory'' http://www.evernote.com/shard/s48/sh/efec6f4e-da2a-464f-87d4-69a79d5339f0/f848c602817940e5015df8f6fae5437e
''diff on prov worksheet, configuration changes, instance mapping changes'' http://www.evernote.com/shard/s48/sh/47a62c47-c05c-4ac2-839c-17f6e6d2cae5/70b4c2021804eb4e86016e782dca6b73
this guy talks about workload placement
https://www.linkedin.com/pulse/workload-placement-optimizing-capacity-prashant-wali
http://www.evernote.com/shard/s48/sh/0151d8f8-e00e-4aed-8e9a-9266e3a43e36/13be76ca387aa5d2130edba30672d9ff
Changing IP addresses on Exadata Database Machine [ID 1317159.1]
<<<
https://twitter.com/GavinAtHQ/status/1532075662684524545
Recently announced #Exadata System Software 22.1 introduces an exciting new monitoring capability, @OracleExadata
Real-Time Insight. Check out the deep dive over on the Exadata PM blog site
<<<
https://blogs.oracle.com/exadata/post/exadata-real-time-insight
What's New In Exadata 22.1 - link https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmso/new-features-exadata-system-software-release-22.html#GUID-C0643E3C-ED50-45DB-8248-1B1A1D6C9F9A
Using Real-Time Insight - link https://docs.oracle.com/en/engineered-systems/exadata-database-machine/sagug/exadata-storage-server-monitoring.html#GUID-8448C324-784E-44F5-9D44-9CB5C697E436
Alter metricdefinition - cellcli link, dbmcli link https://docs.oracle.com/en/engineered-systems/exadata-database-machine/sagug/exadata-storage-server-cellcli.html#GUID-1D67C9CD-1077-43C5-9056-62EF4E42B3F0
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmmn/exadata-dbmcli.html#GUID-1D67C9CD-1077-43C5-9056-62EF4E42B3F0
Code https://github.com/oracle-samples/oracle-db-examples/tree/main/exadata
http://www.evernote.com/shard/s48/sh/ce6b1dc4-1166-4135-ab97-4f5726c40680/3fb775712c4a6ce2ee128dece9deb5fc
http://www.evernote.com/shard/s48/sh/1eb5b0c7-11c9-439c-a24f-4b8f8f6f3fae/f8eee4a52c650d87ec993039237237bb
{{{
dcli -g ~/cell_group -l celladmin 'cellcli -e list flashlog detail'
}}}
{{{
Exadata Smart Flash Log - video demo --> http://j.mp/svbfrR
http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/exadata/exadatav2/Exadata_Smart_Flash_Log/player.html
}}}
http://guyharrison.squarespace.com/blog/2011/12/6/using-ssd-for-redo-on-exadata-pt-2.html
''enable the esfl''
http://minersoracleblog.wordpress.com/2013/03/19/improving-log-file-sync-times-with-exadata-smart-flash-logs/
http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/exadata/exadatav2/Exadata_Smart_Flash_Log/data/presentation.xml
There’s a white paper about mixed disks
http://www.oracle.com/technetwork/database/availability/maa-exadata-upgrade-asm-2339100.pdf
Scenario 2: Add 4 TB storage servers to 3 TB storage servers and expand existing disk groups
Understanding ASM Capacity and Reservation of Free Space in Exadata (Doc ID 1551288.1) <- contains a cool PL/SQL script
http://prutser.wordpress.com/2013/01/03/demystifying-asm-required_mirror_free_mb-and-usable_file_mb/
https://aprakash.wordpress.com/2014/09/17/asm-diskgroup-shows-usable_file_mb-value-in-negative/
<<<
This statement is correct:
"If I have 1GB worth of data in my DB I should be using 2GB for Normal Redundancy and 3 GB for High Redundancy."
but then you also have to account for the "required mirror free" space, which is needed to tolerate the loss of a failure group.
So this is the output of the script I sent you; it already accounts for the redundancy level you are on. Just look at the columns with "REAL" in them. On your statement above, the 4869.56 is used and that already accounts for the normal redundancy.. you said you have 4605 GB (incl TEMP) so that's just about right. Now add the 2538 required mirror free, which totals 7407.56, and if you subtract that total space requirement from the capacity (7614 - 7407.56) you'll get 206.44
{{{
REQUIRED USABLE
RAW REAL REAL REAL MIRROR_FREE FILE
STATE TYPE TOTAL_GB TOTAL_GB USED_GB FREE_GB GB GB PCT_USED PCT_FREE NAME
-------- ------ ---------- ---------- ---------- ---------- ----------- ---------- -------- -------- ----------
CONNECTE NORMAL 15228 7614 4869.56 2744.44 2538 206.44 64 36 DATA_AEX1
CONNECTE NORMAL 3804.75 1902.38 1192.5 709.87 634.13 75.75 63 37 RECO_AEX1
MOUNTED NORMAL 873.75 436.88 1.23 435.64 145.63 290.02 0 100 DBFS_DG
---------- ---------- ---------- ---------- ----------- ----------
sum 19906.5 9953.26 6063.29 3889.95 3317.76 572.21
}}}
I hope that clears up the confusion on the space usage.
I'm also referencing a very good blog post that discusses required_mirror_free_mb and usable_file_mb
http://prutser.wordpress.com/2013/01/03/demystifying-asm-required_mirror_free_mb-and-usable_file_mb/
<<<
{{{
-- WITH REDUNDANCY
set colsep ','
set lines 600
col state format a9
col dgname format a15
col sector format 999990
col block format 999990
col label format a25
col path format a40
col redundancy format a25
col pct_used format 990
col pct_free format 990
col voting format a6
BREAK ON REPORT
COMPUTE SUM OF raw_gb ON REPORT
COMPUTE SUM OF usable_total_gb ON REPORT
COMPUTE SUM OF usable_used_gb ON REPORT
COMPUTE SUM OF usable_free_gb ON REPORT
COMPUTE SUM OF required_mirror_free_gb ON REPORT
COMPUTE SUM OF usable_file_gb ON REPORT
COL name NEW_V _hostname NOPRINT
select lower(host_name) name from v$instance;
select
trim('&_hostname') hostname,
name as dgname,
state,
type,
sector_size sector,
block_size block,
allocation_unit_size au,
round(total_mb/1024,2) raw_gb,
round((DECODE(TYPE, 'HIGH', 0.3333 * total_mb, 'NORMAL', .5 * total_mb, total_mb))/1024,2) usable_total_gb,
round((DECODE(TYPE, 'HIGH', 0.3333 * (total_mb - free_mb), 'NORMAL', .5 * (total_mb - free_mb), (total_mb - free_mb)))/1024,2) usable_used_gb,
round((DECODE(TYPE, 'HIGH', 0.3333 * free_mb, 'NORMAL', .5 * free_mb, free_mb))/1024,2) usable_free_gb,
round((DECODE(TYPE, 'HIGH', 0.3333 * required_mirror_free_mb, 'NORMAL', .5 * required_mirror_free_mb, required_mirror_free_mb))/1024,2) required_mirror_free_gb,
round(usable_file_mb/1024,2) usable_file_gb,
round((total_mb - free_mb)/total_mb,2)*100 as "PCT_USED",
round(free_mb/total_mb,2)*100 as "PCT_FREE",
offline_disks,
voting_files voting
from v$asm_diskgroup
where total_mb != 0
order by 1;
}}}
{{{
-- count of datafiles for each disk group
select count(*), name
from
(select regexp_substr(name, '[^/]+', 1, 1) name from v$datafile
union all
select regexp_substr(name, '[^/]+', 1, 1) name from v$tempfile)
group by name
order by 1 desc;
478 +DATA
241 +DATAHC
213 +DATAEF
1 +DBFS
-- count datafile vs tempfile
col name format a30
select count(*), name || ' - Datafile' name
from
(select regexp_substr(name, '[^/]+', 1, 1) name from v$datafile)
group by name
union all
select count(*), name || ' - Tempfile' name
from
(select regexp_substr(name, '[^/]+', 1, 1) name from v$tempfile)
group by name
order by 1 desc, 2 asc;
COUNT(*) NAME
---------- ------------------------------
39 +DATA - Datafile
4 +DATA - Tempfile
4 +DATA2 - Tempfile
}}}
{{{
Exadata Internals - Data Processing and I/O Flow
Measuring Exadata - Troubleshooting at the Database Layer
Cell Metrics in V$SESSTAT - A storage cell is not a black box to the database session!
Exadata Snapper - Measure I/O Reduction and Offloading Efficiency
Measuring Exadata - Storage Cell Layer
Flash Cache
Write-back Flash Cache
Flash Logging
Parallel Execution, Partitioning and Bloom Filters on Exadata
Hybrid Columnar Compression
Data Loading, DML
####################################################
1st
####################################################
-- dba registry history
@reg
-- cell config
@exadata/cellver
mpstat -P ALL
@desc SALES_ARCHIVE_HIGH_BIG
@sn 1 1 1364
@snapper all 5 1 1364
####################################################
internals
####################################################
strace -cp 31676
* the -c is for system calls
* select count(*) from dba_source;
* then do a CTRL-C
-- on linux non-exadata, to get the FD being read
strace -p 31676
ls -l /proc/31676/fd/ <-- then look for the device
iostat -xmd
-- io translation
@asmdg
@asmls data
@sgastat asm "ASM extent pointer array" ... maps the physical to logical block mapping
after the ASM metadata is cached, the database process itself will do the IO..
* ASM is a disk address translation layer, and the DB processes do the actual IO
* after the metadata is cached you don't have to talk that much to ASM.. you mostly talk to it again when you allocate a datafile..
* ASM does the mirroring
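-- sketch: the file-extent-to-disk mapping ASM hands out can be eyeballed in X$KFFXP
-- on the ASM instance (undocumented fixed table; column names assumed from 11.2)
select group_kffxp dg#, disk_kffxp disk#, count(*) extents
from x$kffxp
where number_kffxp = 256     -- an ASM file number, illustrative only
group by group_kffxp, disk_kffxp
order by 1, 2;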
-- MPP layer
* cells don't talk to each other
* unlike RAC they don't synchronize any data between them
* it's the database layer who orchestrates what cells do independently
* ASM only reads the primary allocation unit..
* storage cells are shared nothing
* the compute nodes see everything, which makes it a shared-everything layer
@1:31 start working.. each cell.. advantage of this is each cell will do the work independently, also discusses 1MB block prefetch
the more cells you have the more workers and the faster the retrieval will go
-- IO request flow, levels of caching & buffering
@1:41 explains the disk, controller caching and flash
* flash cards don't have battery.. they have super capacitor
* cellsrv on critical path of all IO request
-- cell thread history
@2:22:58
strings cellsrv > /tmp/cellsrv_strings.txt
@desc v$cell_thread_history
@sqlid <sqlid> %
@sqlidx <sqlid> %
@awr/gen_awr_report
open <the html file>
@cellver.sql
@cth
-- on v2
select count(*)
from tanel.sales_archive_high
where prod_id+cust_id+channel_id+promo_id+quantity_sold+amount_sold < 10;
set heading off
set echo off
set long 9999999
select dbms_metadata.get_ddl('TABLE','SALES_ARCHIVE_HIGH','TANEL') from dual;
CREATE TABLE "TANEL"."SALES_ARCHIVE_HIGH"
( "PROD_ID" NUMBER,
"CUST_ID" NUMBER,
"TIME_ID" DATE,
"CHANNEL_ID" NUMBER,
"PROMO_ID" NUMBER,
"QUANTITY_SOLD" NUMBER(10,2),
"AMOUNT_SOLD" NUMBER(10,2)
) SEGMENT CREATION IMMEDIATE
PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
COMPRESS FOR ARCHIVE HIGH LOGGING
STORAGE(INITIAL 16777216 NEXT 16777216 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TANEL_BIGFILE"
-- examining cell storage disks
iostat -xmd 5 | egrep "Device|^sd[a-l] "
** tanel has charbench with 50 users doing roughly 300 TPS
while : ; do iostat -xmd 5 | egrep "Device|^sd[a-l] " ; echo "--" ; sleep 5; done | while read line ; do echo "`date +%T`" "$line" ; done
select /*+ PARALLEL(8) */ count(*) from tanel.t4;
-- a simple test case to prove large and small IO size
cell_offload_processing=false
_serial_direct_read=always
_db_file_exec_read_count=128    <- parameter that sets how many blocks to read per multiblock IO, starting in 10.2... 128 blocks of 8KB each = 1MB
select count(*) from tanel.sales;
@mys "cell flash cache read hits"
_db_file_exec_read_count=17     <- 136KB, large IO <- not aligned to the extent size, so it ends up reading just a couple of blocks at the end of extents... reads go 17, 17, 17, then 5 blocks remain before the extent boundary, and those 5 blocks become small IOs (<128KB)
_db_file_exec_read_count=16     <- 128KB, large IO
_db_file_exec_read_count=15     <- 120KB, small IO
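-- sketch: watch the single block vs multiblock read mix while playing with the parameter
-- (standard v$sysstat statistic names)
select name, value from v$sysstat
where name in ('physical read total IO requests',
               'physical read total multi block requests');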
-- io reasons
v$iostat_function_detail
alter cell events = 'immediate cellsrv_dump(ioreasons,0)'
####################################################
networking
####################################################
rds-ping
rds-stress
rds-info
ibdump  <-- download this tool from the Mellanox website to dump InfiniBand traffic, similar to tcpdump
cellcli -e 'list metriccurrent N_MB_SENT_SEC'       <- doesn't show the zero-copy traffic
cellcli -e 'list metriccurrent N_HCA_MB_TRANS_SEC'  <- shows the zero-copy statistics, i.e. the total traffic that went through your InfiniBand card... netstat and tcpdump don't show all the low level traffic
####################################################
measuring exadata
####################################################
SELECT tablespace_name,status,contents
,logging,predicate_evaluation,compress_for
FROM dba_tablespaces;
-- table
select avg(line) from tanel.t4 where owner like 'S%'; <-- "storage" on the row source means oracle is using cell storage aware codepath
-- mview
select count(*) from tanel.mv1 where owner like 'S%';
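-- sketch: confirm the STORAGE-aware row source of the statement just executed
select * from table(dbms_xplan.display_cursor(null, null, 'BASIC'));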
col name format a50
col PARAMETER1 format a10
col PARAMETER2 format a10
col PARAMETER3 format a10
SELECT name,wait_class,parameter1,parameter2,parameter3 from v$event_name where name like 'cell%';
SELECT name,wait_class,parameter1,parameter2,parameter3 from v$event_name where name like '%flash%' and name not like '%flashback%';
NAME WAIT_CLASS PARAMETER1 PARAMETER2 PARAMETER3
-------------------------------------------------- ---------------------------------------------------------------- ---------- ---------- ----------
cell smart table scan User I/O cellhash#
cell smart index scan User I/O cellhash#
cell statistics gather User I/O cellhash#
cell smart incremental backup System I/O cellhash#
cell smart file creation User I/O cellhash#
cell smart restore from backup System I/O cellhash#
cell single block physical read User I/O cellhash# diskhash# bytes
cell multiblock physical read User I/O cellhash# diskhash# bytes
cell list of blocks physical read User I/O cellhash# diskhash# blocks
cell manager opening cell System I/O cellhash#
cell manager closing cell System I/O cellhash#
cell manager discovering disks System I/O cellhash#
cell worker idle Idle
cell smart flash unkeep Other cellhash#
cell worker online completion Other cellhash#
cell worker retry Other cellhash#
cell manager cancel work request Other
17 rows selected.
NAME WAIT_CLASS PARAMETER1 PARAMETER2 PARAMETER3
-------------------------------------------------- ---------------------------------------------------------------- ---------- ---------- ----------
write complete waits: flash cache Configuration file# block#
db flash cache single block physical read User I/O
db flash cache multiblock physical read User I/O
db flash cache write User I/O
db flash cache invalidate wait Concurrency
db flash cache dynamic disabling wait Administrative
cell smart flash unkeep Other cellhash#
7 rows selected.
@cellio.sql
@xpa
-- X$KCBBES - breakdown of DBWR buffer write reasons and priorities
-- measures how the "Direct Path Read Ckpt buffers written" metric is insignificant compared to other CKPT activity
@kcbbs
select name, value from v$sysstat where name like 'cell%' and value > 0;
#########################
cell metrics in sesstat
#########################
alter session set current_schema=tanel;
select count(*) from sales where amount_sold > 3;
19:26:12 SYS@DEMO1> select table_name from dba_tables where owner = 'TANEL' order by 1 asc;
TABLE_NAME
------------------------------
BIG
BLAH
CUSTOMERS_WITH_RAW
CUSTOMERS_WITH_RAW_HEX
DBC1
EX_SESSION
EX_SESSTAT
EX_SNAPSHOT
FLASH_WRITE_TEST4
FLASH_WRITE_TEST5
NETWORK_DUMP
OOW1
SALES
SALES2
SALES3
SALES_ARCHIVE_HIGH
SALES_ARCHIVE_HIGH_BIG
SALES_C
SALES_CL
SALES_COMPRESSED_OLTP
SALES_FLASH_CACHED
SALES_FLASH_CACHED2
SALES_FLASH_CACHED3
SALES_HACK
SALES_M
SALES_ORDERED
SALES_Q
SALES_QUERY_HIGH
SALES_QUERY_LOW
SALES_U
SALES_UPD_VS_SEL
SMALL_FLASH_TEST
SUMMARY
SUMMARY2
T1
T2
T3
T4
T5
T9
TANEL_DW
TANEL_TMP
TBLAH2
TEST_MERGE
TF
TF_SMALL
TMP
TMP1
TTT
T_BP1
T_BP2
T_BP3
T_BP4
T_BP5
T_CHAINED_TEST
T_CHAR
T_GC
T_GROUP_SEPARATOR
T_INS
T_SEQ_TMP
T_TMP
T_V
UKOUG_EXA
X
@sys "<search string for sysstat value>"
SELECT sql_id, physical_read_bytes
FROM V$SQLSTATS
WHERE io_cell_offload_eligible_bytes = 0 ORDER BY physical_read_bytes DESC
@xls
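-- sketch: offload efficiency per sql_id from v$sqlstats
-- (io_* columns are populated on Exadata; percentage is approximate)
select sql_id,
       round(io_cell_offload_eligible_bytes/1024/1024) eligible_mb,
       round(io_interconnect_bytes/1024/1024) interconnect_mb,
       round((1 - io_interconnect_bytes
                  / nullif(io_cell_offload_eligible_bytes,0))*100,1) offload_pct
from v$sqlstats
where io_cell_offload_eligible_bytes > 0
order by io_cell_offload_eligible_bytes desc;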
#########################
exadata snapper
#########################
SELECT * FROM TABLE(exasnap.display_sid(123));
SELECT * FROM TABLE(exasnap.display_snap(90, 91, 'BASIC'));
https://cloudcontrol.enkitec.com:7801/em
-- demo: big select on high OLTP
select /*+ monitor noparallel */ sum(length(d)) from sales_c;
@xpa <sid>
select /*+ monitor noparallel */ sum(length(d)) from sales_c;
select * from table(exasnap.display(2141, 5, '%')); <-- monitor for 5 secs, and output all metrics
select * from table(exasnap.display('2141@4', 5, '%')); <-- monitor for 5 secs, and output all metrics, on sid 2141 on RAC node 4
-- demo: big update on high OLTP @2:27:20 -- you should see the txn layer drop on number vs cache + data layers
-- exec while true loop update t set a=-a; commit; end loop; <-- not this
update sales_u set quantity_sold = quantity_sold + quantity_sold + 1 where prod_id = 123;
@trans sid=<sid> <-- USED_UREC shows the undo rows for that transaction, if it has indexes then every index update is one undo record
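-- sketch: what @trans pulls, straight from v$transaction
select xidusn, xidslot, xidsqn, used_ublk, used_urec
from v$transaction;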
var begin_snap number
var end_snap number
exec :begin_snap := exasnap.begin_snap;
select /*+ monitor noparallel */ sum(quantity_sold) from sales_u;
exec :end_snap := exasnap.end_snap;
select * from table(exasnap.display_snap(:begin_snap, :end_snap, p_detail=>'%'));
-- but how about the IOPS for OLTP sessions?
#########################
storage cell layer
#########################
@ash/event_hist cell.*read
@ash/event_hist log.file
iostat -xmd on storage cells!
@exadata/cellio
@exadata/cellio sysdata-1/24/60 sysdate
ls -ltr *txt
cellsrvstat_metrics.txt
exadata_cleanouts.txt
metricdefinition_detail.txt
-- to troubleshoot high disk latency
cellcli -e list metriccurrent CD_IO_TM_R_SM_RQ; <-- output is in microseconds.. so divide by 1000
@exadata/exadisktopo2
iostat -xmd 5 | egrep "Device|sd[a-z] |^$"
iostat -xmd 5 | egrep "Device|^sd[a-l] "
select table_name, cell_flash_cache from dba_tables where owner = 'SOE';
@exadata/default_flash_cache_for_user.sql SOE
cellsrvstat -interval=5 -count=2 <-- statspack for storage cells, STORAGE CELLS ALSO HAVE SGA!
-- but it doesn't really behave like the DB SGA.. it just has buffers for sending things over the network before writing to disk, plus a bunch of metadata (storage index, flash cache, etc.)...
cellsrvstat -help
cellsrvstat -stat=exec_ntwork,exec_ntreswait,exec_ntmutexwait,exec_ntnetwait -interval=5 -count=999
*** HASH JOIN can filter based on the bitmap calculation of the driving table...
-- oswatcher and cellservstat
/opt/oracle.oswatcher/osw/archive/oswcellsrvstat
# ./oswextract.sh "Number of latency threshold warnings for redo log writes" enkcel03.enkitec.com_cellsrvstat_11.05.25.*.dat.bz2
@exadata/cellver
when is the next battery learn cycle time?
LIST CELL ATTRIBUTES bbuLearnCycleTime
/opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -a0
-- check the top user io SQL
@ashtop username,sqlid "wait_class='User I/O'" sysdate-1/24/60 sysdate
@sqlid <sql_id> %
@xpia <sql_id> % <-- sql monitor report
@xpd <SID>
-- top cell cpu consuming sql from cellcli (undocumented, where the top sql for storage cells is being pulled)
list topcpu 2, 16,6000 detail;
-- top sqls across the cells
@ashtop cell_name,wait_state,sqlid 1=1 sysdate-1/24/60 sysdate
#########################
flash cache
#########################
-- if you specify KEEP on one partition then it will "NOT" be honored
@tabpart TEST_MERGE %
select cell_flash_cache from dba_tables where table_name = 'TEST_MERGE';
select partition_name, cell_flash_cache from dba_tab_partitions where table_name = 'TEST_MERGE';
alter table test_merge modify partition P_20130103 storage (cell_flash_cache keep);
select partition_name, cell_flash_cache from dba_tab_partitions where table_name = 'TEST_MERGE';
select * from table(exasnap.display_sid(7, 10, '%'));
@desc test_merge
@descxx test_merge <-- show num_distinct, density, etc.
select count(*) from test_merge where item_idnt = 1;
select count(*) from test_merge where item_idnt*loc_idnt = 1; <-- you'll not get any help from the storage index here because it only knows about individual columns
and not expressions.. but the scan will still be offloaded!
alter table test_merge storage (cell_flash_cache keep); <-- put the table and all partitions in flash cache
select count(*) from test_merge where item_idnt*loc_idnt = 1; <-- on the first exec there will be no flash IO hits because the blocks are only now being put in the cache,
the 2nd exec will be read from flash!
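-- sketch: quantify storage index and smart scan savings (cumulative sysstat values)
select name, round(value/1024/1024) mb from v$sysstat
where name in ('cell physical IO bytes saved by storage index',
               'cell physical IO interconnect bytes returned by smart scan');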
-- flash cache metrics
"the physical read IO request" should be the same with "cell flash cache read hits"
select sql_id, executions, physical_read_requests, optimized_physical_read_requests from v$sql where sql_id = '<sql_id>'; <-- from v$sql, optimized_physical_read_requests could either be flash cache hits or storage index meaning you avoided going to the spinning disk and made use of flash
AWR report has this "UnOptimized top sql" which does a minus of "physical_read_requests - optimized_physical_read_requests"
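-- sketch of the same "unoptimized" math at the SQL level
select sql_id, executions,
       physical_read_requests,
       optimized_physical_read_requests,
       physical_read_requests - optimized_physical_read_requests unoptimized_reqs
from v$sql
where physical_read_requests > 0
order by unoptimized_reqs desc;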
-- requirements
@reg
-- write back flash cache protection against failures
* ASM mirroring done at a higher level
- by default large IOs do not get written to the cache... so a direct path load doesn't go to the write-back cache, it goes directly to disk.. because for a many-MB data load the sequential write speed of disk is good enough, and terabytes upon terabytes would end up on disk anyway.
- it decides on the caching based on the size of the IO.. just like the read cache
- tanel said, by default IO gets cached based on the size of the IO.. large IOs go straight to disk because it's a direct path load (sequential write speed of disk is good enough) <-- but what if the table is on KEEP ?
- and this also means that your DBWR will benefit from flash
-- a simple write test case
@snapper4 all 10 1 dbwr
alter system checkpoint;
* A write I/O gets sent to 2 – 3 separate cells
* Depending on ASM disk redundancy
Thus it will be mirrored in multiple cells
"cell flash cache read hits" <-- around 1015... there's not metric like "write hits", both read & write are both accumulated under the read metric
"physical read requests optimized" <-- also 1015.. means you avoided the spinning disks
"physical writes total IO request" <-- around 918
<-- so if we are going to take into account the mirroring (let's say normal redundancy) it's going to be "physical writes total IO request" x 2
which is 1836... so here you hit the flash 55% of the time (1015/1836).. write hits to flash!
-- DBWR doesn't have to read anyway.. DBWR only take blocks from buffer cache and writes them that's why you only see "physical writes total IO request"
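-- sketch of the write-hit arithmetic above, from v$sysstat
-- (assumes normal redundancy => each DB write becomes ~2 cell writes;
--  note the read-hits counter accumulates write hits too, per the note above)
select round(
         (select value from v$sysstat where name = 'cell flash cache read hits')
         / nullif((select value from v$sysstat
                   where name = 'physical write total IO requests') * 2, 0) * 100, 1
       ) approx_flash_write_hit_pct
from dual;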
-- write back cache behavior vs the other storage arrays
so if you just keep on writing and let's say you only have 1TB worth of writes and 5TB of flash, then you may end up caching everything.. oracle doesn't have to immediately destage it from cache to disk at all... flash media is persistent and mirrored (thanks to ASM).. it's not just a cache, it's sort of an extension of storage as well, which can be destaged to disk if there's a space issue in flash
#########################
flash logging
#########################
-- test case
must be done with smart scans going on too, otherwise the LSI cache is not overwhelmed and disk will still win
flash logging speeds up LGWR writes to reduce commit latency.. LGWR can complete the write faster because it doesn't have to wait for the slow disk to acknowledge it.
The cells' LSI RAID cards do have a write cache (battery backed), but...
• it's just 512MB of cache per cell, shared between all IO reasons
• there's a lot of other disk I/O going on: reads, loads, TEMP IO, etc
• if the cache is full (as disks can't keep up) you'll end up waiting for disk
• 100ms+, 1-2 second commit times if disks are busy & with long IO queues
smart flash logging IO flow:
* when the LGWR writes, it sends the write IO request to 2 or 3 cells depending on mirroring
* inside the cells, the cell knows that this is the write coming from the LGWR.. and internally it will issue IO on both the "disk and flash"
* whatever IO complete first it will return an acknowledgement "IO done" back
-- flash housekeeping issue
but every now and then flash devices have internal housekeeping happening.. you may see a hiccup of a few hundred ms while internal flash housekeeping runs; this is a known problem with flash.. usually it's fast but every now and then you get a short spike in latency. And that's when the disk would win! so normally you enable flash logging so your commits run faster and are not affected by all the smart scans hammering the disks.. but every now and then, during a flash housekeeping hiccup, the disk will win.. so hopefully you will not wait half a second for the commit to complete.
Also if you have IORM enabled it can prioritize IO: it knows that LGWR is more important than other IO.. so it actually sends out this request first.. but of course the problem might be that at the OS level iostat already shows hundreds of IOs queued.
-- smart flash logging after crash
@1:36:52 after the crash, and if the IOs were not yet written to disk then at startup those IOs will be applied from flash to disk..
flash log size is 64MB x 16 = 1GB <-- it caches log writes, it doesn't really keep the redo log
-- Smart Flash Logging Metrics: DB <-- you wait on "log file sync" when you commit, "log file parallel write" when you do large updates/deletes.. LGWR independently writes to disk even if you don't commit
@ash/shortmon log
@ash/shortmon "log file|cell.*read"
-- Smart Flash Logging Metrics: Cell
cellcli -e "LIST METRICHISTORY WHERE name LIKE 'FL_.*' AND collectionTime > '"`date --date \ 
 '1 day ago' "+%Y-%m-%dT%H:%M:%S%:z"`"'" | ./exastat FL_DISK_FIRST FL_FLASH_FIRST
-- saturated flash disks
* even if you have flash logs and IORM enabled, if the flash disks are very busy then it may still take time for the IO to complete because of long queues (avgqu-sz) on the flash disks.. essentially it still has to honor the long OS IO queue
* also, on the DISK_FIRST, FLASH_FIRST.. you could be seeing DISK_FIRST having more numbers because the IOs to disk are being helped by the controller cache and only when it spikes on latency that it hits the flash
./exastat <-- LIST METRICHISTORY convenience script.. parses the text file output or a pipe, then outputs just the columns you specify
-- IORM and flash
* you can specify that this database may not use flash cache at all..
* starting 11.2.3.2 IORM is always enabled.. the "list iormplan detail;" output is BASIC instead of OFF
* IORM allows high priority IO (redo writes, controlfile writes, etc.) to be submitted first to the OS queues before any IO
#########################
PX and bloom filters
#########################
* line 18, "table access storage full" and SYS_OP_BLOOM_FILTER... bloom filter is just a bitmap which describes what kind of data you have in the driving table
* hash join joins table A and table B.. we send in the bloom filter on table B
* on table A we compute a bitmap which tells us what kind of values we have on the join column, then we ship this bitmap to table B... when we start the smart scan on B we send in the bloom filter as well, so while scanning B we already know when a particular value cannot be in A anyway
* it can do early filtering based on the join on the storage cell
* table A is driving table
* bloom filter is also used for partition elimination
* nested loop joins don't use bloom filters
* early filtering based on bloom filter and WHERE condition
-- simple example
select username, mod(ora_hash(username),8) bloom_bit from dba_users where rownum <= 10 order by bloom_bit; <-- bit value from 0-7
@pd bloom%size
@hint join%filter
@jf <SID>
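-- sketch: join/bloom filter effectiveness for active PX sessions
-- (v$sql_join_filter; columns as documented in 11.2)
select qc_session_id, sql_plan_hash_value, length, bits_set, filtered, probed, active
from v$sql_join_filter;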
}}}
https://www.oracle.com/technetwork/database/availability/exadata-ovm-2795225.pdf
http://blog.umairmansoob.com/wp-content/uploads/2016/08/Exadata-Deployment-Life-Cycle-By-Umair-Mansoob.pdf
''MindMap: Exadata Workload Characterization'' http://www.evernote.com/shard/s48/sh/1a5fae96-fea1-42eb-8436-f1f27c98dc5a/286eb93d18749c845808749fb3590418
<<<
One Exadata XT Storage Server will include twelve 14 TB SAS disk drives with 168 TB total raw disk capacity. To achieve a lower cost, Flash is not included, and storage software is optional.
This lower-cost addition to the Exadata Storage Server lineup delivers Exadata class benefits:
• Efficient – The XT server offers the same high capacity as the HC Storage server, including Hybrid Columnar Compression
• Simple – The XT server adds capacity to Exadata while remaining transparent to applications, transparent to SQL, and retains the same operational model
• Secure – The XT server enables customers to extend the same security model and encryption used for online data to low-use data, because it is integrated within the same Exadata
• Fast and Scalable – Unlike other low-access data storage solutions, the XT server is integrated to the Exadata fabric, for fast access and easy scale-out
• Compatible – The XT server is just another flavor of Exadata Storage server – you can just add XT servers to any Exadata rack
<<<
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmso/whats-new-oracle-exadata-database-machine-19.2.html
https://www.oracle.com/a/ocom/docs/engineered-systems/exadata/exadata-x8-2-ds.pdf
<<<
A new storage configuration is available starting with Oracle Exadata Database Machine X8-2. The XT model does not have Flash drives, only 14 TB hard drives with HCC compression. This is a lower cost storage option, with only one CPU, less memory, and with SQL Offload capability turned off by default. If used without the SQL Offload feature, then you are not required to purchase the Exadata Storage license for the servers.
<<<
Also, the price list requires a minimum of two XT servers per rack
https://www.oracle.com/assets/exadata-pricelist-070598.pdf
[23] Minimum two Exadata Storage Server Extended (XT) required per rack. No mandatory Exadata Storage Software license required.
! the use case
<<<
I haven't been able to find documentation (haven't looked too hard) on how the XT servers are used, but I would expect that they have to be in their own ASM diskgroup. That diskgroup would then have the cell.smart_scan_capable attribute set to false, nullifying the ability to produce smart scans.
I think Oracle is looking for a way to provide "cheaper" storage for stale data that doesn't involve running a big data appliance, Gluent, or any other external solution. The one good thing that you get out of the XT servers is that everything still sits in ASM, rather than over NFS from a ZFSSA. You could take the very stale data, compress it with HCC, and then just let it sit in those separate diskgroups in case anybody wants it. Not the most elegant thing, but I think that's the idea.
Another idea is to take 2, 3, 4 of those XT storage servers and build out a giant RECO diskgroup to hold RMAN backups if you don’t have any other solution.
<<<
http://drsalbertspijkers.blogspot.com/2019/08/oracle-exadata-hardware-x8-2-and-x8-8.html
https://technology.amis.nl/2019/04/20/newly-released-oracle-exadata-x8-2-bigger-disks-for-saving-money-expanding-capacity/
https://emilianofusaglia.net/2019/10/06/exadata-x8m-architectural-changes/?utm_campaign=58cf92e3d4dbac245c04c47c&utm_content=5d99be432e38dc00012249b4&utm_medium=smarpshare&utm_source=linkedin
https://www.oracle.com/sa/a/ocom/docs/engineered-systems/exadata/exadata-x8m-2-ds.pdf
https://www.oracle.com/technetwork/database/exadata/exadata-x8-2-ds-5444350.pdf
! RDMA
https://zcopy.wordpress.com/2010/10/08/quick-concepts-part-1-%e2%80%93-introduction-to-rdma/
! NUMA
https://www.morganslibrary.org/reference/numa.html
! RoCE
<<<
What is RDMA over Converged Ethernet (RoCE)? https://www.youtube.com/watch?v=dLw5bA5ziwU
https://www.electronicdesign.com/industrial-automation/11-myths-about-rdma-over-converged-ethernet-roce
http://www.mellanox.com/related-docs/whitepapers/WP_RoCE_vs_iWARP.pdf
https://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet
<<<
! Intel Optane
<<<
Intel Optane DC Persistent Memory Fills the Gap between DRAM and SSDs https://www.youtube.com/watch?v=f9pIXw1ndRI
<<<
! extending exadata x8m
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmmr/preparing-to-extend.html#GUID-EF7BA63C-D3CC-4FDC-8524-2709E4F85ED7
https://twitter.com/karlarao/status/1174436375682326531
<<<
for companies w/ on-premises multi-fullrack config let's say two X7-8. they can only extend their #Exadata cluster w/ the old X8-8 and not w/ the new X8M-8
@OracleExadata
, is this correct? just curious about the HW upgrade path for big multi-rack environments
<<<
<<<
up to X8-2 it is Xen. Then starting with X8M-2 it is KVM; at the same time the backend networking changed from IB to RoCE
our X3-2 was re-imaged from physical to Xen at some point
as far as i know if you want to change from physical to virtual or vice versa, you have to re-image
<<<
https://dl.dropboxusercontent.com/u/66720567/Exa_backup_recovery.pdf
http://www.evernote.com/shard/s48/sh/2f784775-a9c0-408d-9c8d-a03c4b82f37e/d1a0b87b148ef71ecf5ea300d1e952b9
{{{
Database Server:
root/welcome1
oracle/welcome1
grid/welcome1
grub/sos1Exadata
Exadata Storage Servers:
root/welcome1
celladmin/welcome1
cellmonitor/welcome1
InfiniBand switches:
root/welcome1
nm2user/changeme
Ethernet switches:
admin/welcome1
Power distribution units (PDUs):
admin/welcome1
root/welcome1
Database server ILOMs:
root/welcome1
Exadata Storage Server ILOMs:
root/welcome1
InfiniBand ILOMs:
ilom-admin/ilom-admin
ilom-operator/ilom-operator
Keyboard, video, mouse (KVM):
admin/welcome1
}}}
{{{
@bryangrenn try this select * from gv$cell; in mr. tools I do this mrskew --name='smart.*scan' --group='$p1' *trc
@bryangrenn and v$asm_disk.hash_value could be your diskhash# in exadata waits
}}}
https://technicalsanctuary.wordpress.com/2014/06/06/creating-an-infiniband-listener-on-supercluster/
http://ermanarslan.blogspot.com/2013/10/oracle-exadata-infiniband-ofed.html
http://vijaydumpa.blogspot.com/2012/05/configure-infiniband-listener-on.html
also check [[1GbE to 10GbE upgrade]]
http://allthingsoracle.com/method-for-huge-diagnostic-information-in-exadata/
Location of Different Logfiles in Exadata Environment [ID 1326382.1]
{{{
Location of Different Logfiles in Exadata Environment
On the cell nodes
================
1. Cell alert.log file
/opt/oracle/cell11.2.1.2.1_LINUX.X64_100131/log/diag/asm/cell/<node name>/trace/alert.log.
or
if the CELLTRACE parameter is set just do cd $CELLTRACE
2. MS logfile
/opt/oracle/cell11.2.1.2.1_LINUX.X64_100131/log/diag/asm/cell/<node name>/trace/ms-odl.log.
or
if the CELLTRACE parameter is set just do cd $CELLTRACE
3. OS watcher output data
/opt/oracle.oswatcher/osw/archive/
To get OS watcher data of specific date :
cd /opt/oracle.oswatcher/osw/archive
find . -name '*11.04.11*' -print -exec zip /tmp/osw_`hostname`.zip {} \;
4. Os message logfile
/var/log/messages
5. VM Core files
/var/crash/
6. SunDiag output files.
/tmp/sundiag_.tar.bz2
7. Imaging issues related logfiles:
/var/log/cellos
8. Disk controller firmware logs:
/opt/MegaRAID/MegaCli/Megacli64 -fwtermlog -dsply -a0
On the Database nodes
=====================
1. Database alert.log
$ORACLE_BASE/diag/rdbms/{sid}/{sid}/trace/alert_{sid}.log
2. ASM alert.log
/diag/asm/+asm/+ASM2/trace
3. Clusterware CRS alert.log
$GRID_HOME/log/<node name>
4. Diskmon logfiles
$GRID_HOME/log/<node name>/diskmon
5. OS Watcher output files
/opt/oracle.oswatcher/osw/archive/
6. Os message logfile
/var/log/messages
7. VM Core files for Linux
/var/crash/ or /var/log/oracle/crashfiles
8. Imaging/patching issues related logfiles:
/var/log/cellos
9. Disk controller firmware logs:
/opt/MegaRAID/MegaCli/Megacli64 -fwtermlog -dsply -a0
}}}
''how exadata is manufactured'' http://vimeo.com/46778003
check here https://www.evernote.com/shard/s48/sh/3b53d0f2-8bdd-47f1-928e-9d3a93750c07/6629a92562f2269417bf595cf2081dfe
Method for Huge Diagnostic Information in Exadata
http://allthingsoracle.com/method-for-huge-diagnostic-information-in-exadata/
* https://fritshoogland.wordpress.com/2013/10/21/exadata-and-the-passthrough-or-pushback-mode/
* https://www.oracle.com/webfolder/community/engineered_systems/4108865.html AWR shows 100% passthru reasons as "cell num smart IO sessions using passthru mode due to cellsrv"
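To check whether smart scans are silently falling back to passthrough, the sysstat counters mentioned in the AWR note above can be queried directly; a minimal sketch (statistic names as they appear on Exadata-aware database versions):
{{{
select name, value
from v$sysstat
where name like 'cell num smart IO sessions using passthru%'
and value > 0;
}}}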
Steps to shut down or reboot an Exadata storage cell without affecting ASM (Doc ID 1188080.1)
https://oracleracdba1.wordpress.com/2013/08/14/steps-to-shut-down-or-reboot-an-exadata-storage-cell-without-affecting-asm/
https://baioradba.wordpress.com/2012/02/03/steps-to-power-down-or-reboot-a-cell-without-affecting-asm/
http://www.oracle.com/us/products/database/exadata-vs-ibm-1870172.pdf
https://www.evernote.com/shard/s48/sh/7de6a930-08b6-47cf-812e-cab2b2a83b5b/ed7e27628608f801b8ba48d553e7c82e
What I did here is compile the "What's New?" sections of the official doc into groups of components, versioned by hardware and software.
This way I can easily track the improvements in the storage software. So if you are on an older release you pretty much know which software features you are missing, which makes it easier to justify testing that patch level.
The URLs below are the placeholders of the documents and I'll keep updating them moving forward. See my tweet here https://twitter.com/karlarao/status/558611482368573441 to get an idea of how these files look.
Check out the files below:
@@ ''Spreadsheet'' - ''Exadata-FeaturesAcrossVersions'' - https://db.tt/BZly5L13 @@
''MindMap version'' - 12cExaNewFeat.mm https://db.tt/xAwgzk6N
''Note:'' BTW the docs of the new release (12.1.2.1.0) is at patch #10386736 which is not really obvious if you look at 888828.1 note
IMG_4319.JPG - X4270 cell server
IMG_4325.JPG - X4170 db server
IMG_4330.JPG - SAS2 10K RPM 300GB
INTERNAL Exadata Database Machine Hardware Training and Knowledge [ID 1360358.1]
Oracle Sun Database Machine X2-2/X2-8 Diagnosability and Troubleshooting Best Practices [ID 1274324.1]
Oracle System Options http://www.oracle.com/technetwork/documentation/oracle-system-options-190050.html#solid
https://twitter.com/karlarao/status/375289300360765440
Information Center: Troubleshooting Oracle Exadata Database Machine [ID 1346612.2]
Exadata V2 Starter Kit [ID 1244344.1]
Master Note for Oracle Database Machine and Exadata Storage Server [ID 1187674.1]
http://blogs.oracle.com/db/2011/01/oracle_database_machine_and_exadata_storage_server.html
Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Versions [ID 888828.1] <-- ALERTS ON NEW PATCH BUNDLES
Oracle Exadata Storage Server Software 11g Release 2 (11.2.1) Patch Set 2 (11.2.1.2.0) [ID 888834.1] <-- UPGRADING THE EXADATA
Oracle Database Machine Monitoring Best Practices [ID 1110675.1]
OS Watcher User Guide [ID 301137.1] <-- Version 3.0.1 now supports Exadata
''Webinars''
Selected Webcasts in the Oracle Data Warehouse Global Leaders Webcast Series [ID 1306350.1]
''Exadata Best Practices''
Oracle Exadata Best Practices [ID 757552.1]
Engineered Systems Welcome Center [ID 1392174.1]
INTERNAL Master Note for Exadata Database Machine Hardware Support [ID 1354631.1]
Oracle Sun Database Machine X2-2 Diagnosability and Troubleshooting Best Practices (Doc ID 1274324.1)
Oracle Sun Database Machine Setup/Configuration Best Practices (Doc ID 1274318.1)
''TROUBLESHOOT INFORMATION CENTER''
TROUBLESHOOT INFORMATION CENTER: Exadata Database Machine - Storage Cell Issues (cellcli,celldisks,griddisks,processes rs,ms,cellsrv) and Offload Processing Issues (Doc ID 1531832.2)
''Exadata Maintenance''
Oracle Database Machine HealthCheck [ID 1070954.1]
Oracle Auto Service Request (Doc ID 1185493.1)
Oracle Database Machine and Exadata Storage Server Information Center (Doc ID 1306791.1)
''Resize /u01 on compute node''
Doc ID 1357457.1 How to Expand Exadata Compute Node File Systems ()
Doc ID 1359297.1 Unable To Resize filesystem on Exadata ()
tune2fs -l /dev/mapper/VGExaDb-LVDbOra1 | grep -i features
Filesystem features: has_journal filetype needs_recovery sparse_super large_file
The feature needed is: resize_inode
Without that feature the filesystem cannot be resized
''Exadata shutdown procedure''
Steps to shut down or reboot an Exadata storage cell without affecting ASM: [ID 1188080.1]
Steps To Shutdown/Startup The Exadata & RDBMS Services and Cell/Compute Nodes On An Exadata Configuration. [ID 1093890.1]
''Exadata Enterprise Manager''
Enterprise Manager for Oracle Exadata Database Machine (Doc ID 1308449.1)
''Exadata versions''
12c - Exadata - Exadata 12.1.1.1.0 release and patch (16980054 ) (Doc ID 1571789.1)
''Exadata Patching''
Oracle Support Lifecycle Advisors [ID 250.1] <-- new! it has a demo video on patching db and cell nodes
Patching & Maintenance Advisor: Database (DB) Oracle Database 11.2.0.x [ID 331.1]
Exadata Critical Issues [ID 1270094.1]
Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Versions [ID 888828.1]
Exadata Patching Overview and Patch Testing Guidelines [ID 1262380.1]
Exadata Critical Issues [ID 1270094.1] <-- MUST READ
List of Critical Patches Required For Oracle 11.2 DBFS and DBFS Client [ID 1150157.1]
Oracle Software Patching with OPLAN [ID 1306814.1]
Patch Oracle Exadata Database Machine via Oracle Enterprise Manager 11gR1 (11.1.0.1) [ID 1265998.1]
Oracle Patch Assurance - Data Guard Standby-First Patch Apply [ID 1265700.1]
Patch 12577723: EXADATA 11.2.2.3.2 (MOS NOTE 1323958.1)
Exadata 11.2.2.3.2 release and patch (12577723 ) for Exadata 11.1.3.3, 11.2.1.2.x, 11.2.2.2.x, 11.2.2.3.1 [ID 1323958.1]
Quarterly CPU vs patch bundle, patch collisions http://dbaforums.org/oracle/index.php?showtopic=18588
-- ''Major Release upgrade''
11.2.0.1 to 11.2.0.2 Database Upgrade on Exadata Database Machine [ID 1315926.1]
* Upgrade Advisor: Database (DB) Exadata from 11.2.0.1 to 11.2.0.2 [ID 336.1] https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=336.1#evaluate
* Advisor Webcast Archives - 2011 [ID 1400762.1] https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1400762.1#OraSSEXA
** Exadata Patching Cell Server Demo - 11.2.2.3.2 - https://oracleaw.webex.com/oracleaw/lsr.php?AT=pb&SP=EC&rID=64442222&rKey=3150b16a21d3dc69
** Exadata Patching Database Server Demo https://oracleaw.webex.com/oracleaw/lsr.php?AT=pb&SP=EC&rID=64455552&rKey=6b4faa15ff3cc250
** Exadata Patching Strategy https://oracleaw.webex.com/oracleaw/lsr.php?AT=pb&SP=EC&rID=63488107&rKey=422b763d21527597
11.2.0.1/11.2.0.2 to 11.2.0.3 Database Upgrade on Exadata Database Machine [ID 1373255.1]
* for 11201 BPs there's a separate patch for GI and DB... yet each patch patches both homes... silly!
* the BPs should be staged on all DB nodes
* the DB & Grid patch sets are staged only on DB node 1 and will be pushed to all the DB nodes
* the CELL image software is staged only on DB node 1 and will be pushed to all the cells (see the dcli sketch below)
* the CELL EX patch (critical issues) is staged only on DB node 1 and will be pushed to all the cells... its fixes are cumulative in the latest cell software version
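The push from DB node 1 to the cells is typically just dcli; a minimal sketch (the cell_group file and the patch zip name are assumptions, derived from patch 12577723 above):
{{{
# copy the staged cell image from DB node 1 to every cell listed in cell_group
dcli -g cell_group -l root -f p12577723_112232_Linux-x86-64.zip -d /tmp

# verify it landed on all cells
dcli -g cell_group -l root "ls -l /tmp/p12577723*"
}}}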
''x2-8''
Exadata 11.2.2.2.0 release and patch (10356485) for Exadata 11.1.3.3.1, 11.2.1.2.3, 11.2.1.2.4, 11.2.1.2.6, 11.2.1.3.1, 11.2.2.1.0, 11.2.2.1.1 [ID 1270634.1] <-- mentions of UEK
''DB BP''
BP8 https://updates.oracle.com/Orion/Services/download?type=readme&aru=13789775
''Cell SW''
''11.2.2.3.2 patch 12577723 and My Oracle Support note 1323958.1'' https://updates.oracle.com/Orion/Services/download?type=readme&aru=13852123
''Exadata onecommand''
Ntpd Does not Use Defined NTP Server [ID 1178614.1]
''Exadata networking''
Changing IP addresses on Exadata Database Machine [ID 1317159.1]
Configuring Exadata Database Server Routing [ID 1306154.1]
How to Change Interconnect/Public Network (Interface or Subnet) in Oracle Clusterware [ID 283684.1]
How to update the IP address of the SCAN VIP resources (ora.scan.vip) [ID 952903.1]
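The SCAN VIP change in 952903.1 reduces to updating DNS and refreshing the cluster resources; a sketch (the SCAN name is a placeholder):
{{{
# as root, after DNS has been updated for the SCAN name
srvctl modify scan -n exa-scan.example.com

# refresh the SCAN listeners to match the new VIPs, then verify
srvctl modify scan_listener -u
srvctl config scan
}}}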
''Exadata Bare Metal''
Bare Metal Restore Procedure for Compute Nodes on an Exadata Environment [ID 1084360.1]
''Exadata bugs''
-- Exadata grid disks going offline.
<<<
The bug below is the software-specific bug and it has now been closed:
Bug 12431721 - UNEXPECTED STATUS OF GRIDDISK DEVICE STATUS IS NOT 'ACTIVE'
- a fix was provided via setting _cell_io_hang_time = 30 on all cells
The fix to extend the IO hang timeout is merged into the next release, 11.2.2.3.2.
The root cause of the disks going offline in an unknown state is still being investigated as a hardware bug; LSI is believed to be the cause of the unknown disk state.
<<<
Bug 10180307 - Dbrm dbms_resource_manager.calibrate_io reports very high values for max_pmbps (Doc ID 10180307.8) <-- Automatic Degree of Parallelism in 11.2.0.2 (Doc ID 1269321.1)
memlock setting http://translate.google.com/translate?sl=auto&tl=en&u=http://www.oracledatabase12g.com/archives/warning-even-exadata-has-a-wrong-memlock-setting.html
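The memlock fix is a limits.conf entry sized just above the HugePages allocation; a sketch with placeholder values:
{{{
# /etc/security/limits.conf -- values are placeholders (KB),
# size memlock slightly above the HugePages allocation
oracle   soft   memlock   237568000
oracle   hard   memlock   237568000
}}}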
Flashcache missing, in status critical after multiple "Flash disk removed" alerts [ID 1383267.1]
__''Exadata HW failure''__
-- STORAGE CELLS - FAILED DISK
How To Gather/Backup ASM Metadata In A Formatted Manner? [ID 470211.1]
Script to Report the Percentage of Imbalance in all Mounted Diskgroups [ID 367445.1]
Oracle Exadata Diagnostic Information required for Disk Failures (Doc ID 761868.1)
Things to Check in ASM When Replacing an ONLINE disk from Exadata Storage Cell [ID 1326611.1]
Steps to manually create cell/grid disks on Exadata V2 if auto-create fails during disk replacement [ID 1281395.1]
High Redundancy Disk Groups in an Exadata Environment [ID 1339373.1]
{{{
1) Upload sundiag output from the Exadata storage server having disk problems.
# /opt/oracle.SupportTools/sundiag.sh
Oracle Exadata Diagnostic Information required for Disk Failures (Doc ID 761868.1)
1.1) Serial numbers for system components
# /opt/oracle.SupportTools/CheckHWnFWProfile -S
1.2) Using cellcli provide the following
# cellcli -e "list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"
1.3) # /usr/sbin/sosreport [needs the file created in /tmp (Linux)]
1.4) Please upload an ILOM snapshot. Follow this note: Diagnostic information for ILOM, ILO, LO100 issues (Doc ID 1062544.1) - How To Create a Snapshot With the ILOM Web Interface
}}}
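Before pulling any disk it is handy to run the griddisk check across every cell at once; a dcli sketch (cell_group is an assumed group file listing the cells):
{{{
dcli -g cell_group -l root "cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"
}}}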
-- COMPUTE NODE - FAILED DISK
Dedicated and Global Hot Spares for Exadata Compute Nodes in 11.2.2.3.2 (Doc ID 1339647.1)
Removing HotSpare Flag on replaced disk in Exadata storage cell [ID 1300310.1]
Marking a replaced disk as Hot Spare in Exadata Compute Node [ID 1289684.1]
''Compute Node / DB node''
How to Expand Exadata Compute Node File Systems (Doc ID 1357457.1)
''Exadata Migration''
Migrating an Oracle E-Business Suite Database to Oracle Exadata Database Machine [ID 1133355.1]
''Exadata DBFS''
Configuring DBFS on Oracle Database Machine (Doc ID 1054431.1)
Configuring a Database for DBFS on Oracle Database Machine (Doc ID 1191144.1)
''MegaCli''
http://www.myoraclesupports.com/content/oracle-sun-database-machine-diagnosability-and-troubleshooting-best-practices
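The MegaCli invocations that come up most often on Exadata boxes; a sketch (the enclosure:slot value is an example):
{{{
# adapter, virtual drive and physical drive summaries
/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL

# mark a replaced compute node disk as a global hot spare (252:5 is an example)
/opt/MegaRAID/MegaCli/MegaCli64 -PDHSP -Set -PhysDrv[252:5] -a0
}}}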
''Exadata Resource Management''
Tool for Gathering I/O Resource Manager Metrics: metric_iorm.pl (Doc ID 1337265.1)
Scripts and Tips for Monitoring CPU Resource Manager (Doc ID 1338988.1)
Configuring Resource Manager for Mixed Workloads in a Database (Doc ID 1358709.1)
''Exadata 3rd party software or tools on compute nodes''
Installing Third Party Monitoring Tools in Exadata Environment [ID 1157343.1]
''ASM redundancy''
Understanding ASM Capacity and Reservation of Free Space in Exadata (Doc ID 1551288.1)
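The reservation math in 1551288.1 can be eyeballed straight from v$asm_diskgroup; usable_file_mb already nets out required_mirror_free_mb, and a negative value means a full cell failure can no longer be tolerated:
{{{
select name, type, total_mb, free_mb,
       required_mirror_free_mb, usable_file_mb
  from v$asm_diskgroup;
}}}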
''MindMap - EMGC Monitoring'' http://www.evernote.com/shard/s48/sh/67300d1c-00c0-4d25-b113-a644eb3ba58a/33abcafa4dda6538de9c19f65930d022
''Oracle Database Machine Monitoring Best Practices [ID 1110675.1]'' -> deployment documents are here https://www.dropbox.com/s/95qv1ejspkrzavf
<<<
fo_ext.sql
emudm_netif_state.sh
emudm_ibconnect.sh
Sun_Oracle_Database_Machine_Monitoring_v120.pdf
OEM_Exadata_Dashboard_Deployment_v104.pdf
OEM_Exadata_Dashboard_Prerequisites_and_Overview_v100.pdf
<<<
''Patch Requirements for Setting up Monitoring and Administration for Exadata [ID 1323298.1]'' <-- take note of this first
http://www.oracle.com/technetwork/oem/grid-control/downloads/devlic-188770.html <-- ''exadata plugin bundle link''
http://www.oracle.com/technetwork/oem/grid-control/downloads/exadata-plugin-194085.html <-- ''exadata plugin link''
http://www.oracle.com/technetwork/oem/extensions/index.html <-- ''extensions exchange link''
''em11.1''
A script to deploy the agents and the plugins to the compute nodes is available as patch 11852882
A script to create a Grid Control 11 environment from scratch is available as patch 11852869
''em12c''
The script to deploy the agents to the compute nodes is available as patch 12960596
The script to create a Cloud Control 12c environment from scratch is available as patch 12960610
The documentation for Exadata target discovery is located in the Cloud Control Administration Guide (chapter 28)
''ASR''
http://www.oracle.com/technetwork/server-storage/asr/documentation/exadata-asr-quick-install-330086.pdf
''MIBs''
How to Obtain MIBs for Exadata Database Machine Components [ID 1315086.1]
''agent failover'' http://blogs.oracle.com/XPSONHA/entry/failover_capability_for_plugins_exadata
check the white paper here [[MAA - Exadata Health and Resource Usage Monitoring]]
! check this
[[check patch level]]
''Software Updates, Best Practices and Notes'' https://blogs.oracle.com/XPSONHA/entry/software_updates_best_practices_and
Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Versions ''[ID 888828.1]''
https://support.oracle.com/epmos/faces/ui/km/DocContentDisplay.jspx?id=888828.1
''How to determine BP Level?'' https://forums.oracle.com/forums/thread.jspa?threadID=2224966
{{{
opatch lsinv -bugs_fixed | egrep -i 'bp|exadata|bundle'
/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'bp|exadata|bundle'
OR
registry$history or dba_registry_history
col action format a10
col namespace format a10
col action_time format a30
col version format a10
col comments format a30
select * from dba_registry_history;
and then go to
MOS 888828.1 --> Patch Release History for Exadata Database Machine Components --> Exadata Storage Server software patches
}}}
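If the full select is too noisy, a narrower cut of the same view (standard columns only) sorts the bundle history chronologically:
{{{
select action_time, action, namespace, version, comments
  from dba_registry_history
 order by action_time;
}}}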
<<<
I think you want this doc: 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2 Grid Infrastructure and Database Upgrade on Exadata Database Machine running Oracle Linux (Doc ID 1681467.1).
And all of the patches' readme files have detailed installation instructions. Read every one of them.
Working out all of the dependencies is the most complicated part. There's a long string of dependencies that's going to come into play for you:
1) Grid Infrastructure 12.1.0.x requires Exadata storage software (ESS) 12.1.1.1.1 (you can use ESS 11.2.3.3.1, but you will lose some Exadata features).
2) ESS 12.1.1.1.1 requires Oracle Linux (OL) 5.5 (kernel 2.6.18-194) or later, so you'll likely have to upgrade the OS on the database nodes.
3) ESS updates are full OS images, so you'll get an OL6 upgrade on your storage servers with the update.
4) Because of #3, you should update your database servers to OL6 in step #2 rather than the minimum 5.5, so that everything is on OL6.
Some more links you'll want:
The starting point for Exadata patching is Information Center: Upgrading Oracle Exadata Database Machine (1364356.2).
There's a patching overview at Exadata Patching Overview and Patch Testing Guidelines.
To update the database server OS, follow Document 1284070.1.
Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)
11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2 Grid Infrastructure and Database Upgrade on Exadata Database Machine running Oracle Linux (Doc ID 1681467.1)
Exadata Patching Overview and Patch Testing Guidelines (Doc ID 1262380.1)
Updating key software components on database hosts to match those on the cells (Doc ID 1284070.1)
<<<
{{{
Boris Erlikhman http://goo.gl/2LvXU
smart scan http://goo.gl/chy2s
flash cache http://goo.gl/YlCA7
smart flash log http://goo.gl/TwyRx
write back cache http://goo.gl/2WCmw
Roger Macnicol http://goo.gl/oxxu7
hcc http://goo.gl/9ptFe, http://goo.gl/3IOSi
Sue Lee http://goo.gl/6WCFw, http://goo.gl/bI0pd
iorm http://goo.gl/BHIc1
}}}
''phydisk, lun, celldisk, griddisk mapping''
{{{
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:0">
<Attribute NAME="deviceId" VALUE="23"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJB8GGZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975845146"></Attribute>
<Attribute NAME="errMediaCount" VALUE="53"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="0"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJB8GGZ"></Attribute>
<Attribute NAME="name" VALUE="35:0"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_00_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sda3"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-793d-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sda"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_0"></Attribute>
<Attribute NAME="name" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070151040"></Attribute>
<Attribute NAME="size" VALUE="1832.59375G"></Attribute>
</Target>
CellCLI> list physicaldisk 35:0 detail
name: 35:0
deviceId: 23
diskType: HardDisk
enclosureDeviceId: 35
errMediaCount: 53
errOtherCount: 0
foreignState: false
luns: 0_0
makeModel: "HITACHI H7220AA30SUN2.0T"
physicalFirmware: JKAOA28A
physicalInsertTime: 2010-05-15T21:10:45-05:00
physicalInterface: sata
physicalSerial: JK11D1YAJB8GGZ
physicalSize: 1862.6559999994934G
slotNumber: 0
status: normal
CellCLI> list lun 0_0 detail
name: 0_0
cellDisk: CD_00_cell01
deviceName: /dev/sda
diskType: HardDisk
id: 0_0
isSystemLun: TRUE
lunAutoCreate: FALSE
lunSize: 1861.712890625G
lunUID: 0_0
physicalDrives: 35:0
raidLevel: 0
lunWriteCacheMode: "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"
status: normal
CellCLI> list celldisk where name = CD_00_cell01 detail
name: CD_00_cell01
comment:
creationTime: 2010-05-28T13:09:11-05:00
deviceName: /dev/sda
devicePartition: /dev/sda3
diskType: HardDisk
errorCount: 0
freeSpace: 0
id: 00000128-e01a-793d-0000-000000000000
interleaving: none
lun: 0_0
raidLevel: 0
size: 1832.59375G
status: normal
CellCLI> list griddisk where name = DATA_CD_00_cell01 detail
name: DATA_CD_00_cell01
availableTo:
cellDisk: CD_00_cell01
comment:
creationTime: 2010-06-14T17:41:12-05:00
diskType: HardDisk
errorCount: 0
id: 00000129-389f-a070-0000-000000000000
offset: 32M
size: 1282.8125G
status: active
CellCLI> list griddisk where name = RECO_CD_00_cell01 detail
name: RECO_CD_00_cell01
availableTo:
cellDisk: CD_00_cell01
comment:
creationTime: 2010-06-14T17:41:13-05:00
diskType: HardDisk
errorCount: 0
id: 00000129-389f-a656-0000-000000000000
offset: 1741.328125G
size: 91.265625G
status: active
CellCLI> list griddisk where name = STAGE_CD_00_cell01 detail
name: STAGE_CD_00_cell01
availableTo:
cellDisk: CD_00_cell01
comment:
creationTime: 2010-06-14T17:41:12-05:00
diskType: HardDisk
errorCount: 0
id: 00000129-389f-a267-0000-000000000000
offset: 1282.859375G
size: 458.140625G
status: active
CellCLI> list griddisk where name = SYSTEM_CD_00_cell01 detail
name: SYSTEM_CD_00_cell01
availableTo:
cellDisk: CD_00_cell01
comment:
creationTime: 2010-06-14T17:41:13-05:00
diskType: HardDisk
errorCount: 0
id: 00000129-389f-a45f-0000-000000000000
offset: 1741G
size: 336M
status: active
<Target TYPE="oracle.ossmgmt.ms.core.MSCell" NAME="enkcel01">
<Target TYPE="oracle.ossmgmt.ms.core.MSIDBPlan" NAME="enkcel01_IORMPLAN">
---
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:0">
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_0">
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_00_cell01">
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_00_cell01">
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCache" NAME="enkcel01_FLASHCACHE">
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="35fff6cd-001e-4ebf-8a48-a53b36b22fbf">
$ cat enkcel01-collectl.txt | grep -i "target type" | grep CD_00_cell01
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_00_cell01">
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_00_cell01">
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_00_cell01">
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_00_cell01">
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_00_cell01">
list celldisk where name = CD_00_cell01 detail
list griddisk where name = SYSTEM_CD_00_cell01 detail
}}}
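For a quicker pass over the same chain, the attribute-list form of the commands above walks physicaldisk -> LUN -> celldisk -> griddisk in four lines:
{{{
cellcli -e "list physicaldisk attributes name,luns,physicalSerial,status"
cellcli -e "list lun attributes name,cellDisk,deviceName,isSystemLun"
cellcli -e "list celldisk attributes name,lun,devicePartition,size"
cellcli -e "list griddisk attributes name,cellDisk,offset,size,status"
}}}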
''/opt/oracle/cell/cellsrv/deploy/config/cell_disk_config.xml config file''
{{{
/opt/oracle/cell/cellsrv/deploy/config/cell_disk_config.xml
<?xml version="1.0" encoding="UTF-8"?>
<Targets version="0.0">
<Target TYPE="oracle.ossmgmt.ms.core.MSCell" NAME="enkcel01">
<Attribute NAME="interconnect1" VALUE="bondib0"></Attribute>
<Attribute NAME="hwRetentionDays" VALUE="0"></Attribute>
<Attribute NAME="metricHistoryDays" VALUE="14"></Attribute>
<Attribute NAME="locatorLEDStatus" VALUE="off"></Attribute>
<Attribute NAME="bbuLastLearnCycleTime" VALUE="1310886021911"></Attribute>
<Attribute NAME="smtpFrom" VALUE="Enkitec Exadata"></Attribute>
<Attribute NAME="bbuLearnCycleTime" VALUE="1318834800000"></Attribute>
<Attribute NAME="snmpSubscriber" VALUE="((host=server,port=3872,community=public))"></Attribute>
<Attribute NAME="smtpServer" VALUE="server"></Attribute>
<Attribute NAME="sellastcollection" VALUE="1312830889000"></Attribute>
<Attribute NAME="cellVersion" VALUE="OSS_11.2.0.3.0_LINUX.X64_110520"></Attribute>
<Attribute NAME="management_ip" VALUE="0.0.0.0"></Attribute>
<Attribute NAME="id" VALUE="1017XFG056"></Attribute>
<Attribute NAME="notificationMethod" VALUE="mail,snmp"></Attribute>
<Attribute NAME="notificationPolicy" VALUE="critical"></Attribute>
<Attribute NAME="adrLastMineTime" VALUE="1313845212042"></Attribute>
<Attribute NAME="makeModel" VALUE="SUN MICROSYSTEMS SUN FIRE X4275 SERVER SATA"></Attribute>
<Attribute NAME="OEHistory" VALUE="3112.791028881073 5706.6555216653005 4995.632752835751 4996.394891858101 5121.992709875107 3480.762350344658 4270.716751503945 5062.652316808701 4987.846175163984 5702.463975906372 5222.782039854262 5016.283513784409 5752.083408117294 5852.781406164169 5710.337441308157 5712.052912848337 4999.097516179085 5714.517756598337 5431.95593547821 6157.329520089285 5004.541987478733 5720.458814076015 5722.174355370657 5723.889757156372 "></Attribute>
<Attribute NAME="smtpFromAddr" VALUE="x@server"></Attribute>
<Attribute NAME="realmName" VALUE="enkitec_realm"></Attribute>
<Attribute NAME="iormBoost" VALUE="0.0"></Attribute>
<Attribute NAME="offloadEfficiency" VALUE="5213.871233422416"></Attribute>
<Attribute NAME="name" VALUE="enkcel01"></Attribute>
<Attribute NAME="smtpToAddr" VALUE="x@server"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSIDBPlan" NAME="enkcel01_IORMPLAN">
<Attribute NAME="objective" VALUE="high_throughput"></Attribute>
<Attribute NAME="catPlan"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="dbPlan"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_00_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sda3"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-793d-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sda"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_0"></Attribute>
<Attribute NAME="name" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070151040"></Attribute>
<Attribute NAME="size" VALUE="1832.59375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_01_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdb3"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-8c16-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdb"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_1"></Attribute>
<Attribute NAME="name" VALUE="CD_01_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070155868"></Attribute>
<Attribute NAME="size" VALUE="1832.59375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_02_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdc"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-8e29-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdc"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_2"></Attribute>
<Attribute NAME="name" VALUE="CD_02_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070156404"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_03_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdd"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-904a-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdd"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_3"></Attribute>
<Attribute NAME="name" VALUE="CD_03_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070156954"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_04_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sde"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-9274-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sde"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_4"></Attribute>
<Attribute NAME="name" VALUE="CD_04_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070157500"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_05_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdf"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-948e-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="1152.8125G"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdf"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_5"></Attribute>
<Attribute NAME="name" VALUE="CD_05_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070158041"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_06_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdg"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-96a9-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdg"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_6"></Attribute>
<Attribute NAME="name" VALUE="CD_06_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070158585"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_07_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdh"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-98ce-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdh"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_7"></Attribute>
<Attribute NAME="name" VALUE="CD_07_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070159129"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_08_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdi"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-9aec-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdi"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_8"></Attribute>
<Attribute NAME="name" VALUE="CD_08_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070159672"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_09_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdj"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-9cfe-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdj"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_9"></Attribute>
<Attribute NAME="name" VALUE="CD_09_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070160199"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_10_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdk"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-9f1b-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdk"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_10"></Attribute>
<Attribute NAME="name" VALUE="CD_10_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070160741"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_11_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdl"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-a13e-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdl"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_11"></Attribute>
<Attribute NAME="name" VALUE="CD_11_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070161295"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_00_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdr"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-a3b6-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdr"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="1_0"></Attribute>
<Attribute NAME="name" VALUE="FD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070161933"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_00_enkcel01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdaa"></Attribute>
<Attribute NAME="id" VALUE="1b0ee672-a892-4f58-9dd5-04f9f6aee3e9"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdaa"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="2_1"></Attribute>
<Attribute NAME="name" VALUE="FD_00_enkcel01"></Attribute>
<Attribute NAME="creationTime" VALUE="1313091948052"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_01_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sds"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-a633-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sds"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="1_1"></Attribute>
<Attribute NAME="name" VALUE="FD_01_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070162567"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_02_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdt"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-a8b1-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdt"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="1_2"></Attribute>
<Attribute NAME="name" VALUE="FD_02_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070163206"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_03_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdu"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-ab2d-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdu"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="1_3"></Attribute>
<Attribute NAME="name" VALUE="FD_03_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070163842"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_04_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdz"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-ada7-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdz"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="2_0"></Attribute>
<Attribute NAME="name" VALUE="FD_04_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070164476"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_06_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdab"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-b297-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdab"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="2_2"></Attribute>
<Attribute NAME="name" VALUE="FD_06_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070165741"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_07_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdac"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-b512-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdac"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="2_3"></Attribute>
<Attribute NAME="name" VALUE="FD_07_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070166377"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_08_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdn"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-b78f-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdn"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="4_0"></Attribute>
<Attribute NAME="name" VALUE="FD_08_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070167015"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_09_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdo"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-ba0e-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdo"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="4_1"></Attribute>
<Attribute NAME="name" VALUE="FD_09_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070167653"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_10_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdp"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-bc8b-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdp"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="4_2"></Attribute>
<Attribute NAME="name" VALUE="FD_10_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070168288"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_11_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdq"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-bf0a-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdq"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="4_3"></Attribute>
<Attribute NAME="name" VALUE="FD_11_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070168926"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_12_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdv"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-c182-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdv"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="5_0"></Attribute>
<Attribute NAME="name" VALUE="FD_12_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070169561"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_13_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdw"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-c3fe-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdw"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="5_1"></Attribute>
<Attribute NAME="name" VALUE="FD_13_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070170198"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_14_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdx"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-c677-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdx"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="5_2"></Attribute>
<Attribute NAME="name" VALUE="FD_14_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070170828"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_15_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdy"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-c8ef-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdy"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="5_3"></Attribute>
<Attribute NAME="name" VALUE="FD_15_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070171459"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_00_cell01">
<Attribute NAME="cellDisk" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a070-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272349"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_01_cell01">
<Attribute NAME="cellDisk" VALUE="CD_01_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a09e-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_01_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272400"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_02_cell01">
<Attribute NAME="cellDisk" VALUE="CD_02_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a0d2-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_02_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272431"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_03_cell01">
<Attribute NAME="cellDisk" VALUE="CD_03_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a0f0-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_03_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272461"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_04_cell01">
<Attribute NAME="cellDisk" VALUE="CD_04_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a10e-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_04_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272503"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_06_cell01">
<Attribute NAME="cellDisk" VALUE="CD_06_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a159-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_06_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272565"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_07_cell01">
<Attribute NAME="cellDisk" VALUE="CD_07_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a176-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_07_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272594"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_08_cell01">
<Attribute NAME="cellDisk" VALUE="CD_08_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a193-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_08_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272616"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_09_cell01">
<Attribute NAME="cellDisk" VALUE="CD_09_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a1a9-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_09_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272640"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_10_cell01">
<Attribute NAME="cellDisk" VALUE="CD_10_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a1c2-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_10_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272671"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_11_cell01">
<Attribute NAME="cellDisk" VALUE="CD_11_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a1e0-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_11_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272700"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_00_cell01">
<Attribute NAME="cellDisk" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a656-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273818"></Attribute>
<Attribute NAME="size" VALUE="91.265625G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_01_cell01">
<Attribute NAME="cellDisk" VALUE="CD_01_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a65b-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_01_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273822"></Attribute>
<Attribute NAME="size" VALUE="91.265625G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_02_cell01">
<Attribute NAME="cellDisk" VALUE="CD_02_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a65f-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_02_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273827"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_03_cell01">
<Attribute NAME="cellDisk" VALUE="CD_03_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a664-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_03_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273831"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_04_cell01">
<Attribute NAME="cellDisk" VALUE="CD_04_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a668-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_04_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273836"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_06_cell01">
<Attribute NAME="cellDisk" VALUE="CD_06_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a672-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_06_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273845"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_07_cell01">
<Attribute NAME="cellDisk" VALUE="CD_07_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a676-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_07_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273850"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_08_cell01">
<Attribute NAME="cellDisk" VALUE="CD_08_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a67b-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_08_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273855"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_09_cell01">
<Attribute NAME="cellDisk" VALUE="CD_09_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a680-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_09_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273860"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_10_cell01">
<Attribute NAME="cellDisk" VALUE="CD_10_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a685-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_10_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273864"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_11_cell01">
<Attribute NAME="cellDisk" VALUE="CD_11_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a689-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_11_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273869"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SCRATCH_CD_05_cell01">
<Attribute NAME="cellDisk" VALUE="CD_05_cell01"></Attribute>
<Attribute NAME="id" VALUE="9fd44ab2-a674-40ba-aa4f-fb32d380c573"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SCRATCH_CD_05_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1293210663053"></Attribute>
<Attribute NAME="size" VALUE="578.84375G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SMITHERS_CD_05_cell01">
<Attribute NAME="cellDisk" VALUE="CD_05_cell01"></Attribute>
<Attribute NAME="id" VALUE="ee413b30-fe57-47a3-b1ad-815fa25b471c"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SMITHERS_CD_05_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1297885099027"></Attribute>
<Attribute NAME="size" VALUE="100G"></Attribute>
<Attribute NAME="offset" VALUE="578.890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_00_cell01">
<Attribute NAME="cellDisk" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a267-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272811"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_01_cell01">
<Attribute NAME="cellDisk" VALUE="CD_01_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a26c-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_01_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272816"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_02_cell01">
<Attribute NAME="cellDisk" VALUE="CD_02_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a271-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_02_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272822"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_03_cell01">
<Attribute NAME="cellDisk" VALUE="CD_03_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a277-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_03_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272828"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_04_cell01">
<Attribute NAME="cellDisk" VALUE="CD_04_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a27d-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_04_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272833"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_06_cell01">
<Attribute NAME="cellDisk" VALUE="CD_06_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a288-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_06_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272844"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_07_cell01">
<Attribute NAME="cellDisk" VALUE="CD_07_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a28d-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_07_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272850"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_08_cell01">
<Attribute NAME="cellDisk" VALUE="CD_08_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a293-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_08_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272856"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_09_cell01">
<Attribute NAME="cellDisk" VALUE="CD_09_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a299-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_09_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272861"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_10_cell01">
<Attribute NAME="cellDisk" VALUE="CD_10_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a29e-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_10_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272867"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_11_cell01">
<Attribute NAME="cellDisk" VALUE="CD_11_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a2a4-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_11_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272872"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SWING_CD_05_cell01">
<Attribute NAME="cellDisk" VALUE="CD_05_cell01"></Attribute>
<Attribute NAME="id" VALUE="aaf8a3bc-7f81-45f2-b091-5bf73c93d972"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SWING_CD_05_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1298320563479"></Attribute>
<Attribute NAME="size" VALUE="30G"></Attribute>
<Attribute NAME="offset" VALUE="678.890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_00_cell01">
<Attribute NAME="cellDisk" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a45f-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273315"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_01_cell01">
<Attribute NAME="cellDisk" VALUE="CD_01_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a464-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_01_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273318"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_02_cell01">
<Attribute NAME="cellDisk" VALUE="CD_02_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a468-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_02_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273323"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_03_cell01">
<Attribute NAME="cellDisk" VALUE="CD_03_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a46c-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_03_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273327"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_04_cell01">
<Attribute NAME="cellDisk" VALUE="CD_04_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a470-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_04_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273332"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_06_cell01">
<Attribute NAME="cellDisk" VALUE="CD_06_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a479-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_06_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273341"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_07_cell01">
<Attribute NAME="cellDisk" VALUE="CD_07_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a47e-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_07_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273345"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_08_cell01">
<Attribute NAME="cellDisk" VALUE="CD_08_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a482-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_08_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273349"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_09_cell01">
<Attribute NAME="cellDisk" VALUE="CD_09_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a486-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_09_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273354"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_10_cell01">
<Attribute NAME="cellDisk" VALUE="CD_10_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a48b-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_10_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273358"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_11_cell01">
<Attribute NAME="cellDisk" VALUE="CD_11_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a48f-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_11_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273363"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
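The grid disk (MSNetDisk) targets above all repeat the same flat `Attribute` pattern, so the dump is easy to turn into a readable layout table offline. Here's a minimal sketch, assuming the dump is saved as `ms_config.xml` (hypothetical filename) and has, or is wrapped in, a single root element — the raw fragment here has none:
{{{
# Minimal sketch: summarize the MSNetDisk (grid disk) targets above.
# Assumes the dump is saved as ms_config.xml (hypothetical name) and is
# wrapped in a single root element -- the raw fragment has none.
import xml.etree.ElementTree as ET

NETDISK = "oracle.ossmgmt.ms.core.MSNetDisk"

root = ET.parse("ms_config.xml").getroot()
for target in root.iter("Target"):
    if target.get("TYPE") != NETDISK:
        continue
    # Each Target is a flat list of <Attribute NAME=... VALUE=...> children;
    # empty attributes (comment, availableTo, resizeError) carry no VALUE.
    attrs = {a.get("NAME"): a.get("VALUE", "") for a in target.findall("Attribute")}
    print(f'{attrs["name"]:<24} cellDisk={attrs["cellDisk"]:<14} '
          f'size={attrs["size"]:<12} offset={attrs["offset"]}')
}}}
On a live cell the equivalent view should come straight from CellCLI, e.g. `cellcli -e list griddisk attributes name,cellDisk,size,offset`; the parse is only useful for snapshots like this one.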
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:0">
<Attribute NAME="deviceId" VALUE="23"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJB8GGZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975845146"></Attribute>
<Attribute NAME="errMediaCount" VALUE="61"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="0"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJB8GGZ"></Attribute>
<Attribute NAME="name" VALUE="35:0"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:1">
<Attribute NAME="deviceId" VALUE="24"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJB4V0Z"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975846476"></Attribute>
<Attribute NAME="errMediaCount" VALUE="8"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="1"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJB4V0Z"></Attribute>
<Attribute NAME="name" VALUE="35:1"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:2">
<Attribute NAME="deviceId" VALUE="25"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJAZMMZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975847789"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="2"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJAZMMZ"></Attribute>
<Attribute NAME="name" VALUE="35:2"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:3">
<Attribute NAME="deviceId" VALUE="26"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJ7JX2Z"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975849109"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="3"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJ7JX2Z"></Attribute>
<Attribute NAME="name" VALUE="35:3"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:4">
<Attribute NAME="deviceId" VALUE="27"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJ60R8Z"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975850399"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="4"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJ60R8Z"></Attribute>
<Attribute NAME="name" VALUE="35:4"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:5">
<Attribute NAME="deviceId" VALUE="28"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJB4J8Z"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975851693"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="5"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJB4J8Z"></Attribute>
<Attribute NAME="name" VALUE="35:5"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:6">
<Attribute NAME="deviceId" VALUE="29"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJ7JXGZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975852946"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="6"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJ7JXGZ"></Attribute>
<Attribute NAME="name" VALUE="35:6"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:7">
<Attribute NAME="deviceId" VALUE="30"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJB4E5Z"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975854177"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="7"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJB4E5Z"></Attribute>
<Attribute NAME="name" VALUE="35:7"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:8">
<Attribute NAME="deviceId" VALUE="31"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJ8TY3Z"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975855496"></Attribute>
<Attribute NAME="errMediaCount" VALUE="506"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="8"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJ8TY3Z"></Attribute>
<Attribute NAME="name" VALUE="35:8"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:9">
<Attribute NAME="deviceId" VALUE="32"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJ8TXKZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975856931"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="9"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJ8TXKZ"></Attribute>
<Attribute NAME="name" VALUE="35:9"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:10">
<Attribute NAME="deviceId" VALUE="33"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJ8TYLZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975858176"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="10"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJ8TYLZ"></Attribute>
<Attribute NAME="name" VALUE="35:10"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:11">
<Attribute NAME="deviceId" VALUE="34"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJAZNKZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975859476"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="11"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJAZNKZ"></Attribute>
<Attribute NAME="name" VALUE="35:11"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_1_0">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JC3"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249971"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 1; FDOM: 0"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JC3"></Attribute>
<Attribute NAME="name" VALUE="FLASH_1_0"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_1_1">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JYG"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249972"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 1; FDOM: 1"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JYG"></Attribute>
<Attribute NAME="name" VALUE="FLASH_1_1"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_1_2">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JV9"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249972"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 1; FDOM: 2"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JV9"></Attribute>
<Attribute NAME="name" VALUE="FLASH_1_2"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_1_3">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02J93"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249972"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 1; FDOM: 3"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02J93"></Attribute>
<Attribute NAME="name" VALUE="FLASH_1_3"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_2_0">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JFK"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249972"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 2; FDOM: 0"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JFK"></Attribute>
<Attribute NAME="name" VALUE="FLASH_2_0"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_2_1">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JFL"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249973"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 2; FDOM: 1"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JFL"></Attribute>
<Attribute NAME="name" VALUE="FLASH_2_1"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_2_2">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JF7"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249973"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 2; FDOM: 2"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JF7"></Attribute>
<Attribute NAME="name" VALUE="FLASH_2_2"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_2_3">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JF8"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249973"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 2; FDOM: 3"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JF8"></Attribute>
<Attribute NAME="name" VALUE="FLASH_2_3"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_4_0">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02HP5"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249973"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 4; FDOM: 0"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02HP5"></Attribute>
<Attribute NAME="name" VALUE="FLASH_4_0"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_4_1">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02HNN"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249973"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 4; FDOM: 1"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02HNN"></Attribute>
<Attribute NAME="name" VALUE="FLASH_4_1"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_4_2">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02HP2"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249974"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 4; FDOM: 2"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02HP2"></Attribute>
<Attribute NAME="name" VALUE="FLASH_4_2"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_4_3">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02HP4"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249974"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 4; FDOM: 3"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02HP4"></Attribute>
<Attribute NAME="name" VALUE="FLASH_4_3"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_5_0">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JUD"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249974"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 5; FDOM: 0"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JUD"></Attribute>
<Attribute NAME="name" VALUE="FLASH_5_0"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_5_1">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JVF"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249975"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 5; FDOM: 1"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JVF"></Attribute>
<Attribute NAME="name" VALUE="FLASH_5_1"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_5_2">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JAP"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249975"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 5; FDOM: 2"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JAP"></Attribute>
<Attribute NAME="name" VALUE="FLASH_5_2"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_5_3">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JVH"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249975"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 5; FDOM: 3"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JVH"></Attribute>
<Attribute NAME="name" VALUE="FLASH_5_3"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
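The physical-disk targets are the ones worth scanning for error counters: in this snapshot three hard disks already report a nonzero errMediaCount (61 on slot 0, 8 on slot 1, and 506 on slot 8) even though status is still "normal". A small filter over the same parse makes them easy to spot — again a sketch against the same hypothetical `ms_config.xml`:
{{{
# Sketch: flag physical disks whose media error counter is nonzero.
# In the snapshot above, slots 0, 1 and 8 report errMediaCount of 61, 8
# and 506 while status is still "normal" -- worth keeping an eye on.
import xml.etree.ElementTree as ET

PHYSDISK = "oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl"

root = ET.parse("ms_config.xml").getroot()  # hypothetical filename
for target in root.iter("Target"):
    if target.get("TYPE") != PHYSDISK:
        continue
    attrs = {a.get("NAME"): a.get("VALUE", "") for a in target.findall("Attribute")}
    # errMediaCount may be absent or empty on some target types; treat as 0.
    if int(attrs.get("errMediaCount") or 0) > 0:
        print(f'{attrs["name"]:<10} slot={attrs["slotNumber"]:<18} '
              f'serial={attrs["physicalSerial"]:<16} '
              f'errMediaCount={attrs["errMediaCount"]} status={attrs["status"]}')
}}}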
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_0">
<Attribute NAME="cellDisk" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_0"></Attribute>
<Attribute NAME="id" VALUE="0_0"></Attribute>
<Attribute NAME="isSystemLun" VALUE="TRUE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJB8GGZ"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sda"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_0"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_1">
<Attribute NAME="cellDisk" VALUE="CD_01_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_1"></Attribute>
<Attribute NAME="id" VALUE="0_1"></Attribute>
<Attribute NAME="isSystemLun" VALUE="TRUE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJB4V0Z"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdb"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_1"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_2">
<Attribute NAME="cellDisk" VALUE="CD_02_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_2"></Attribute>
<Attribute NAME="id" VALUE="0_2"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJAZMMZ"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdc"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_2"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_3">
<Attribute NAME="cellDisk" VALUE="CD_03_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_3"></Attribute>
<Attribute NAME="id" VALUE="0_3"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJ7JX2Z"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdd"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_3"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_4">
<Attribute NAME="cellDisk" VALUE="CD_04_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_4"></Attribute>
<Attribute NAME="id" VALUE="0_4"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJ60R8Z"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sde"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_4"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_5">
<Attribute NAME="cellDisk" VALUE="CD_05_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_5"></Attribute>
<Attribute NAME="id" VALUE="0_5"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJB4J8Z"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdf"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_5"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_6">
<Attribute NAME="cellDisk" VALUE="CD_06_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_6"></Attribute>
<Attribute NAME="id" VALUE="0_6"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJ7JXGZ"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdg"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_6"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_7">
<Attribute NAME="cellDisk" VALUE="CD_07_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_7"></Attribute>
<Attribute NAME="id" VALUE="0_7"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJB4E5Z"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdh"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_7"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_8">
<Attribute NAME="cellDisk" VALUE="CD_08_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_8"></Attribute>
<Attribute NAME="id" VALUE="0_8"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJ8TY3Z"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdi"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_8"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_9">
<Attribute NAME="cellDisk" VALUE="CD_09_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_9"></Attribute>
<Attribute NAME="id" VALUE="0_9"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJ8TXKZ"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdj"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_9"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_10">
<Attribute NAME="cellDisk" VALUE="CD_10_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_10"></Attribute>
<Attribute NAME="id" VALUE="0_10"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJ8TYLZ"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdk"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_10"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_11">
<Attribute NAME="cellDisk" VALUE="CD_11_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_11"></Attribute>
<Attribute NAME="id" VALUE="0_11"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJAZNKZ"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdl"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_11"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="1_0">
<Attribute NAME="physicalDrives" VALUE="1014M02JC3"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_00_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdr"></Attribute>
<Attribute NAME="id" VALUE="1_0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="1_0"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="1_1">
<Attribute NAME="physicalDrives" VALUE="1014M02JYG"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_01_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sds"></Attribute>
<Attribute NAME="id" VALUE="1_1"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="1_1"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="1_2">
<Attribute NAME="physicalDrives" VALUE="1014M02JV9"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_02_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdt"></Attribute>
<Attribute NAME="id" VALUE="1_2"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="1_2"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="1_3">
<Attribute NAME="physicalDrives" VALUE="1014M02J93"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_03_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdu"></Attribute>
<Attribute NAME="id" VALUE="1_3"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="1_3"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="2_0">
<Attribute NAME="physicalDrives" VALUE="1014M02JFK"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_04_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdz"></Attribute>
<Attribute NAME="id" VALUE="2_0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="2_0"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="2_1">
<Attribute NAME="physicalDrives" VALUE="1014M02JFL"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_00_enkcel01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdaa"></Attribute>
<Attribute NAME="id" VALUE="2_1"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="2_1"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="2_2">
<Attribute NAME="physicalDrives" VALUE="1014M02JF7"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_06_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdab"></Attribute>
<Attribute NAME="id" VALUE="2_2"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="2_2"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="2_3">
<Attribute NAME="physicalDrives" VALUE="1014M02JF8"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_07_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdac"></Attribute>
<Attribute NAME="id" VALUE="2_3"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="2_3"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="4_0">
<Attribute NAME="physicalDrives" VALUE="1014M02HP5"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_08_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdn"></Attribute>
<Attribute NAME="id" VALUE="4_0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="4_0"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="4_1">
<Attribute NAME="physicalDrives" VALUE="1014M02HNN"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_09_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdo"></Attribute>
<Attribute NAME="id" VALUE="4_1"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="4_1"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="4_2">
<Attribute NAME="physicalDrives" VALUE="1014M02HP2"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_10_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdp"></Attribute>
<Attribute NAME="id" VALUE="4_2"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="4_2"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="4_3">
<Attribute NAME="physicalDrives" VALUE="1014M02HP4"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_11_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdq"></Attribute>
<Attribute NAME="id" VALUE="4_3"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="4_3"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="5_0">
<Attribute NAME="physicalDrives" VALUE="1014M02JUD"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_12_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdv"></Attribute>
<Attribute NAME="id" VALUE="5_0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="5_0"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="5_1">
<Attribute NAME="physicalDrives" VALUE="1014M02JVF"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_13_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdw"></Attribute>
<Attribute NAME="id" VALUE="5_1"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="5_1"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="5_2">
<Attribute NAME="physicalDrives" VALUE="1014M02JAP"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_14_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdx"></Attribute>
<Attribute NAME="id" VALUE="5_2"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="5_2"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="5_3">
<Attribute NAME="physicalDrives" VALUE="1014M02JVH"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_15_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdy"></Attribute>
<Attribute NAME="id" VALUE="5_3"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="5_3"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCache" NAME="enkcel01_FLASHCACHE">
<Attribute NAME="cellDisk" VALUE="FD_10_cell01,FD_02_cell01,FD_06_cell01,FD_01_cell01,FD_12_cell01,FD_03_cell01,FD_15_cell01,FD_04_cell01,FD_09_cell01,FD_14_cell01,FD_00_enkcel01,FD_11_cell01,FD_08_cell01,FD_00_cell01,FD_07_cell01,FD_13_cell01"></Attribute>
<Attribute NAME="degradedCelldisks"></Attribute>
<Attribute NAME="effectiveCacheSize" VALUE="365.25G"></Attribute>
<Attribute NAME="id" VALUE="8347628f-365d-436b-8dc0-30162514ae6a"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="name" VALUE="enkcel01_FLASHCACHE"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="365.25G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="35fff6cd-001e-4ebf-8a48-a53b36b22fbf">
<Attribute NAME="cellDisk" VALUE="FD_10_cell01"></Attribute>
<Attribute NAME="id" VALUE="35fff6cd-001e-4ebf-8a48-a53b36b22fbf"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="35fff6cd-001e-4ebf-8a48-a53b36b22fbf"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="914968cf-bfdf-48e8-98f7-5159af6347cd">
<Attribute NAME="cellDisk" VALUE="FD_02_cell01"></Attribute>
<Attribute NAME="id" VALUE="914968cf-bfdf-48e8-98f7-5159af6347cd"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="914968cf-bfdf-48e8-98f7-5159af6347cd"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="9c7cf975-3291-4fa5-8527-7991e4e8d868">
<Attribute NAME="cellDisk" VALUE="FD_06_cell01"></Attribute>
<Attribute NAME="id" VALUE="9c7cf975-3291-4fa5-8527-7991e4e8d868"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="9c7cf975-3291-4fa5-8527-7991e4e8d868"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="db895100-a9d4-427c-960a-940a43bcda6d">
<Attribute NAME="cellDisk" VALUE="FD_01_cell01"></Attribute>
<Attribute NAME="id" VALUE="db895100-a9d4-427c-960a-940a43bcda6d"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="db895100-a9d4-427c-960a-940a43bcda6d"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="a86c5ab5-9b93-49cf-832b-125893ac23ee">
<Attribute NAME="cellDisk" VALUE="FD_12_cell01"></Attribute>
<Attribute NAME="id" VALUE="a86c5ab5-9b93-49cf-832b-125893ac23ee"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="a86c5ab5-9b93-49cf-832b-125893ac23ee"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="15d1c631-58fd-47d0-aa08-70328d97e07a">
<Attribute NAME="cellDisk" VALUE="FD_03_cell01"></Attribute>
<Attribute NAME="id" VALUE="15d1c631-58fd-47d0-aa08-70328d97e07a"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="15d1c631-58fd-47d0-aa08-70328d97e07a"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="d0a06d79-d65d-485a-b5a9-d8db55a07a4b">
<Attribute NAME="cellDisk" VALUE="FD_15_cell01"></Attribute>
<Attribute NAME="id" VALUE="d0a06d79-d65d-485a-b5a9-d8db55a07a4b"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="d0a06d79-d65d-485a-b5a9-d8db55a07a4b"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="e8542b03-e2e8-4cc6-8bdc-1baf88da17cf">
<Attribute NAME="cellDisk" VALUE="FD_04_cell01"></Attribute>
<Attribute NAME="id" VALUE="e8542b03-e2e8-4cc6-8bdc-1baf88da17cf"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="e8542b03-e2e8-4cc6-8bdc-1baf88da17cf"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="ef3893ef-d779-4a8e-b738-f7c7a85a7a65">
<Attribute NAME="cellDisk" VALUE="FD_09_cell01"></Attribute>
<Attribute NAME="id" VALUE="ef3893ef-d779-4a8e-b738-f7c7a85a7a65"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="ef3893ef-d779-4a8e-b738-f7c7a85a7a65"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="f01bf8e0-3e59-4c2d-bdc5-f83b230c72b4">
<Attribute NAME="cellDisk" VALUE="FD_14_cell01"></Attribute>
<Attribute NAME="id" VALUE="f01bf8e0-3e59-4c2d-bdc5-f83b230c72b4"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="f01bf8e0-3e59-4c2d-bdc5-f83b230c72b4"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="124ef0dc-15b6-4f35-914d-8e7af9c2ff7c">
<Attribute NAME="cellDisk" VALUE="FD_00_enkcel01"></Attribute>
<Attribute NAME="id" VALUE="124ef0dc-15b6-4f35-914d-8e7af9c2ff7c"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="124ef0dc-15b6-4f35-914d-8e7af9c2ff7c"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="bfa93ed1-0965-4b54-a0a2-3d9625fa345d">
<Attribute NAME="cellDisk" VALUE="FD_11_cell01"></Attribute>
<Attribute NAME="id" VALUE="bfa93ed1-0965-4b54-a0a2-3d9625fa345d"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="bfa93ed1-0965-4b54-a0a2-3d9625fa345d"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="ccd6ae62-5676-4e86-aa62-126a9a5d8876">
<Attribute NAME="cellDisk" VALUE="FD_08_cell01"></Attribute>
<Attribute NAME="id" VALUE="ccd6ae62-5676-4e86-aa62-126a9a5d8876"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="ccd6ae62-5676-4e86-aa62-126a9a5d8876"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="afabcda1-bb4d-4c46-96e0-e3f8245ab1e9">
<Attribute NAME="cellDisk" VALUE="FD_00_cell01"></Attribute>
<Attribute NAME="id" VALUE="afabcda1-bb4d-4c46-96e0-e3f8245ab1e9"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="afabcda1-bb4d-4c46-96e0-e3f8245ab1e9"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="14ac943a-5589-4d86-bb93-530b1c7b809f">
<Attribute NAME="cellDisk" VALUE="FD_07_cell01"></Attribute>
<Attribute NAME="id" VALUE="14ac943a-5589-4d86-bb93-530b1c7b809f"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="14ac943a-5589-4d86-bb93-530b1c7b809f"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="39555b4d-2503-48fc-a4bb-509924cd3ddd">
<Attribute NAME="cellDisk" VALUE="FD_13_cell01"></Attribute>
<Attribute NAME="id" VALUE="39555b4d-2503-48fc-a4bb-509924cd3ddd"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="39555b4d-2503-48fc-a4bb-509924cd3ddd"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
</Targets>
}}}
http://blogs.oracle.com/ATeamExalogicCAF/entry/exalogic_networking_part_1
! Exalogic OBE Series:
''Oracle Exalogic: Storage Appliance'' http://apex.oracle.com/pls/apex/f?p=44785:24:2875967671743702::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5110,29
http://msdn.microsoft.com/en-us/library/ff700515.aspx
http://betterandfasterdecisions.com/2011/01/10/improving-calculation-performance-in-excelfinal/
http://betterandfasterdecisions.com/2011/01/07/improving-calculation-performance-in-excel/
http://betterandfasterdecisions.com/2011/01/08/improving-calculation-performance-in-excelpart-2/
http://betterandfasterdecisions.com/2011/01/09/improving-calculation-performance-in-excelpart-3/
http://social.msdn.microsoft.com/Forums/en/exceldev/thread/b7c63f9d-e373-4455-a793-f58707353032
http://www.databison.com/index.php/excel-slow-to-respond-avoiding-mistakes-that-make-excel-slow-down-to-a-crawl/
''Scripts''
http://www.expertoracleexadata.com/scripts/
''Errata''
http://www.expertoracleexadata.com/errata/
http://www.apress.com/9781430233923
http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/twp-explain-the-explain-plan-052011-393674.pdf
{{{
In order to determine if you are looking at a good execution plan or not, you need to understand how
the Optimizer determined the plan in the first place. You should also be able to look at the execution
plan and assess if the Optimizer has made any mistake in its estimations or calculations, leading to a
suboptimal plan. The components to assess are:
• Cardinality – Estimate of the number of rows coming out of each of the operations.
• Access method – The way in which the data is being accessed, via either a table scan or index access.
• Join method – The method (e.g., hash, sort-merge, etc.) used to join tables with each other.
• Join type – The type of join (e.g., outer, anti, semi, etc.).
• Join order – The order in which the tables are joined to each other.
• Partition pruning – Are only the necessary partitions being accessed to answer the query?
• Parallel Execution – In case of parallel execution, is each operation in the plan being
conducted in parallel? Is the right data redistribution method being used?
}}}
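One way to check the cardinality component in practice is to compare the optimizer's estimated rows (E-Rows) against the actual rows (A-Rows) per plan step. A minimal sketch; the table names are illustrative:
{{{
-- execute once with rowsource statistics collection enabled
select /*+ gather_plan_statistics */ count(*)
  from emp e, dept d
 where e.deptno = d.deptno;

-- then dump the plan of the last cursor, showing E-Rows vs A-Rows side by side
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
}}}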
How to understand connect by explain plans [ID 729201.1]
''watch this first''
Assign a Macro to a Button, Check box, or any object in Microsoft Excel http://www.youtube.com/watch?v=XmOk1QW6T0g&feature=relmfu
Insert Macros into an Excel Workbook or File and Delete Macros from Excel http://www.youtube.com/watch?v=8pfdm7xs3QE
http://www.ozgrid.com/forum/showthread.php?t=76720
{{{
Sub testexport()
' export Macro: copy a range into a new workbook and save it as CSV
    Range("A3:A5").Select
    Selection.Copy
    Workbooks.Add
    ActiveSheet.Paste
    ActiveWorkbook.SaveAs Filename:= _
        "C:\Documents and Settings\Simon\My Documents\Book2.csv" _
        , FileFormat:=xlCSV, CreateBackup:=False
    Application.DisplayAlerts = False   ' suppress the "keep in CSV format?" prompt
    ActiveWorkbook.Close
    Application.DisplayAlerts = True
End Sub
}}}
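If you need the exported range to be variable rather than the hard-coded A3:A5, the MrExcel thread below covers selecting a variable range before the save-as-CSV step.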
another source
http://www.mrexcel.com/forum/showthread.php?18262-Select-variable-range-then-save-as-csv-Macro!
http://www.pcreview.co.uk/forums/excel-2007warning-following-fetures-cannot-saved-macro-free-workbook-t3037442.html
http://awads.net/wp/2011/05/17/shell-script-output-to-oracle-database-via-external-table/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+EddieAwadsFeed+%28Eddie+Awad%27s+blog%29
''create external table using SQL Developer'' http://sueharper.blogspot.com/2006/08/i-didnt-know-you-could-do-that.html
''a cool code to do external tables on the DB side'' http://mikesmithers.wordpress.com/2011/08/26/oracle-external-tables-or-what-i-did-on-my-holidays/
''Executing operating system commmands from PL/SQL'' http://www.oracle.com/technetwork/database/enterprise-edition/calling-shell-commands-from-plsql-1-1-129519.pdf
''Shell Script Output to Oracle Database Via External Table'' http://awads.net/wp/2011/05/17/shell-script-output-to-oracle-database-via-external-table/
''Calling OS Commands from Plsql'' https://forums.oracle.com/forums/thread.jspa?threadID=369320
''Execute operating system commands from PL/SQL'' http://hany4u.blogspot.com/2008/12/execute-operating-system-commands-from.html
you can't use wildcard on external tables http://www.freelists.org/post/oracle-l/10g-External-Table-Location-Parameter,3
http://jiri.wordpress.com/2010/03/29/oracle-external-tables-by-examples-part-4-column_transforms-clause-load-clob-blob-or-any-constant-using-external-tables/
''Performant and scalable data loading withOracle Database 11g'' http://www.scribd.com/doc/61785526/26/Accessing-remote-data-staging-files-using-Oracle-external-tables
http://decipherinfosys.wordpress.com/2007/04/28/writing-data-to-a-text-file-from-oracle/
http://decipherinfosys.wordpress.com/2007/04/17/using-external-tables-in-oracle-to-load-up-data/
/***
|Name:|ExtendTagButtonPlugin|
|Description:|Adds a New tiddler button in the tag drop down|
|Version:|3.2 ($Rev: 3861 $)|
|Date:|$Date: 2008-03-08 10:53:09 +1000 (Sat, 08 Mar 2008) $|
|Source:|http://mptw.tiddlyspot.com/#ExtendTagButtonPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License|http://mptw.tiddlyspot.com/#TheBSDLicense|
***/
//{{{
window.onClickTag_mptw_orig = window.onClickTag;
window.onClickTag = function(e) {
window.onClickTag_mptw_orig.apply(this,arguments);
var tag = this.getAttribute("tag");
var title = this.getAttribute("tiddler");
// Thanks Saq, you're a genius :)
var popup = Popup.stack[Popup.stack.length-1].popup;
createTiddlyElement(createTiddlyElement(popup,"li",null,"listBreak"),"div");
wikify("<<newTiddler label:'New tiddler' tag:'"+tag+"'>>",createTiddlyElement(popup,"li"));
return false;
}
//}}}
http://oracle-randolf.blogspot.com/2011/12/extended-displaycursor-with-rowsource.html
''download link'' http://www.sqltools-plusplus.org:7676/media/xplan_extended_display_cursor.sql
maria colgan https://blogs.oracle.com/optimizer/extended-statistics
http://blogs.oracle.com/optimizer/entry/extended_statistics
http://jonathanlewis.wordpress.com/2012/03/09/index-upgrades/#comments
http://structureddata.org/2007/10/31/oracle-11g-extended-statistics/
HOWTO Using Extended Statistics to Optimize Multi-Column Relationships and Function-Based Statistics
https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r1/prod/perform/multistats/multicolstats.htm
Nigel Bayliss
Why do I have SQL statement plans that change for the worse? https://blogs.oracle.com/optimizer/sql-plans-change-for-worse
Use Extended Statistics For Better SQL Execution Plans https://blogs.oracle.com/optimizer/extended-statistics-better-plans
! howto
{{{
--create column group
exec DBMS_STATS.GATHER_TABLE_STATS('TC69649','PK_SUBMISSIONENTRY', method_opt=>'for all columns size auto for columns (entry_type, name) size 254');
exec DBMS_STATS.GATHER_TABLE_STATS('TC69649','PK_SUBMISSION', method_opt=>'for all columns size auto for columns (syncdeleted, ignored, submission_state, submission_type, interface_id) size 254');
--to delete the column group
exec dbms_stats.drop_extended_stats('TC69649',tabname => 'PK_SUBMISSIONENTRY',extension => '(entry_type, name)');
exec dbms_stats.drop_extended_stats('TC69649',tabname => 'PK_SUBMISSION',extension => '(syncdeleted, ignored, submission_state, submission_type, interface_id)');
}}}
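To confirm the column groups were actually created (and to see the system-generated extension names), a minimal sketch using the same schema/table names as above:
{{{
select extension_name, extension
  from dba_stat_extensions
 where owner = 'TC69649'
   and table_name in ('PK_SUBMISSIONENTRY','PK_SUBMISSION');
}}}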
http://tonguc.wordpress.com/2008/03/11/a-little-more-on-external-tables/
http://tonguc.wordpress.com/2007/08/09/unload-data-with-external-tables-and-data-pump/
http://prsync.com/oracle/owb-gr-ndash-bulk-file-loading---more-faster-easier-12451/
http://www.oracle-developer.net/display.php?id=512
http://tkyte.blogspot.com/2006/08/interesting-data-set.html
http://sueharper.blogspot.com/2006/08/i-didnt-know-you-could-do-that.html <-- SQL Developer demo, oh crap!
http://www.oracle-developer.net/display.php?id=204 <-- GOOD STUFF, this is much easier!
On hardware and ETL
http://glennfawcett.wordpress.com/2010/06/08/open-storage-s7000-with-exadata-a-good-fit-etlelt-operations/
http://viralpatel.net/blogs/oracle-xmltable-tutorial/ <-- xml random data
http://www.mistersoft.org/freelancing/getafreelancer/2009/10/Javascript-Oracle-SQL-Visual-Basic-XML-Extract-xml-from-Oracle-Db-then-reload-in-another-Oracle-DB-nbsp-519463.html
http://www.cosort.com/products/FACT
http://www.access-programmers.co.uk/forums/showthread.php?t=162752
http://www.codeguru.com/forum/showthread.php?t=466326
http://www.attunity.com/forums/data-access/running-multiple-data-extract-sql-jcl-1233.html
http://www.oracle.com/technology/pub/articles/jain-xmldb.html
http://www.scribd.com/doc/238504/Load-XML-to-Oracle-Database
http://docs.fedoraproject.org/en-US/Fedora/16/html/Release_Notes/sect-Release_Notes-Changes_for_Sysadmin.html
http://fedoraproject.org/wiki/Releases/16/FeatureList
http://fedoraproject.org/wiki/Features/XenPvopsDom0
http://blog.xen.org/index.php/2011/05/13/xen-support-upstreamed-to-qemu/
<<<
EMC calls it FAST
Hitachi calls it Dynamic Tiering
Dell Compellent has "Storage Center" which has a feature called "Dynamic Block Architecture"
<<<
EMC Workload Profile Assessment for Oracle AWR Report / StatsPack Gathering Procedures Instructions https://community.emc.com/docs/DOC-13949
New Assessment Available: Oracle AWR/Statspack Assessment https://community.emc.com/docs/DOC-14008
White Paper: EMC Tiered Storage for Oracle Database 11g — Data Warehouse Enabled by EMC Symmetrix VMAX with FAST and EMC Ionix ControlCenter StorageScope — A Detailed Review https://community.emc.com/docs/DOC-14191
EMC Tiered Storage for Oracle Database 11g — Data Warehouse Enabled by EMC Symmetrix VMAX with FAST and EMC Ionix ControlCenter StorageScope https://community.emc.com/docs/DOC-11047
Demo Station 3: Maximize Oracle Database Performance https://community.emc.com/docs/DOC-11912
Service Overview: EMC Database Performance Tiering Assessment https://community.emc.com/docs/DOC-14012
Maximize Operational Efficiency for Oracle RAC Environments with EMC Symmetrix FAST VP (Automated Tiering) https://community.emc.com/docs/DOC-11138
http://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet
CNA
<<<
Computers connect to FCoE with Converged Network Adapters (CNAs), which contain both Fibre Channel Host Bus Adapter (HBA) and Ethernet Network Interface Card (NIC) functionality on the same adapter card. CNAs have one or more physical Ethernet ports. FCoE encapsulation can be done in software with a conventional Ethernet network interface card; however, FCoE CNAs offload (from the CPU) the low-level frame processing and SCSI protocol functions traditionally performed by Fibre Channel host bus adapters.
<<<
Kyle also has some good reference on making use of FIO https://github.com/khailey/fio_scripts/blob/master/README.md
https://github.com/khailey/fio_scripts/blob/master/README.md
https://sites.google.com/site/oraclemonitor/i-o-graphics#TOC-Percentile-Latency
explanation of the graphs https://plus.google.com/photos/105986002174480058008/albums/5773661884246310993
''RSS to Groups''
http://www.facebook.com/topic.php?uid=4915599711&topic=4658#topic_top
http://www.youtube.com/watch?v=HgGxgX9KFfc
timeline
https://www.facebook.com/about/timeline
facebook download ALL info https://www.facebook.com/help/?page=116481065103985
! graph search
https://www.facebook.com/find-friends/browser/
https://www.sitepoint.com/facebook-graph-search/
https://www.labnol.org/internet/facebook-graph-search-commands/28542/
http://graph.tips/
https://www.google.com/search?q=oracle+Failed+Logon+Delay&ei=O9uJW9OVIKGQggfhl5KwBw&start=0&sa=N&biw=1389&bih=764
https://www.dba-resources.com/oracle/finding-the-origin-of-failed-login-attempts/
{{{
1. Using database auditing (if already enabled)
Caveat: This is the simplest method to determine the source of failed login attempts, provided that auditing is already enabled on your database, as the information has (probably) already been captured. However, if auditing is not enabled, then enabling it will require that the database be restarted, in which case this option is no longer the simplest!
Firstly, check to see whether auditing is enabled and set to "DB" (meaning the audit trail is written to a database table).
show parameter audit_trail
If not set, then you will need to enable auditing, restart the database and then enable auditing of unsuccessful logins as follows:
audit session whenever not successful;
The audit records for unsuccessful logon attempts can then be found as follows:
col ntimestamp# for a30 heading "Timestamp"
col userid for a20 heading "Username"
col userhost for a15 heading "Machine"
col spare1 for a15 heading "OS User"
col comment$text for a80 heading "Details" wrap
select ntimestamp#, userid, userhost, spare1, comment$text from sys.aud$ where returncode=1017 order by 1;
Sample output:
Timestamp Username Machine OS User
------------------------------ -------------------- --------------- ---------------
Details
--------------------------------------------------------------------------------
08-DEC-14 12.39.42.945635 PM APPUSER unix_app_001 orafrms
Authenticated by: DATABASE; Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.218.
64.44)(PORT=42293))
08-DEC-14 12.42.10.170957 PM APPUSER unix_app_001 orafrms
Authenticated by: DATABASE; Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.218.
64.44)(PORT=48541))
Note: the USERHOST column is only populated with the client host machine name as of 10g; in earlier versions this was the numeric instance ID for the Oracle instance from which the user is accessing the database in a RAC environment.
2. Use a trigger to capture additional information
The following trigger code can be used to gather additional information about unsuccessful login attempts and write it to the database alert log. It is recommended to integrate this code into an existing trigger if you already have one for this triggering event.
CREATE OR REPLACE TRIGGER logon_denied_write_alertlog AFTER SERVERERROR ON DATABASE
DECLARE
  l_message varchar2(2000);
BEGIN
  -- ORA-1017: invalid username/password; logon denied
  IF (IS_SERVERERROR(1017)) THEN
    select 'Failed login attempt to the "'|| sys_context('USERENV','AUTHENTICATED_IDENTITY') ||'" schema'
        || ' using ' || sys_context('USERENV','AUTHENTICATION_TYPE') ||' authentication'
        || ' at ' || to_char(logon_time,'dd-MON-yy hh24:mi:ss')
        || ' from ' || osuser ||'@'|| machine ||' ['|| nvl(sys_context('USERENV','IP_ADDRESS'),'Unknown IP') ||']'
        || ' via the "'|| program ||'" program.'
      into l_message
      from sys.v_$session
     where sid = to_number(substr(dbms_session.unique_session_id,1,4),'xxxx')
       and serial# = to_number(substr(dbms_session.unique_session_id,5,4),'xxxx');
    -- write to alert log
    sys.dbms_system.ksdwrt(2, l_message);
  END IF;
END;
/
Some sample output from the alert.log looks like:
Tue Jan 06 09:45:36 2015
Failed login attempt to the "appuser" schema using DATABASE authentication at 06-JAN-15 09:45:35 from orafrms@unix_app_001 [10.218.64.44] via the "frmweb@unix_app_001 (TNS V1-V3)" program.
3. Setting an event to generate trace files on unsuccessful login.
You can instruct the database to write a trace file whenever an unsuccessful login attempt is made by setting the following event (the example below will only set the event until the next time the database is restarted. Update your pfile or spfile accordingly if you want this to be permanent).
alter system set events '1017 trace name errorstack level 10';
Trace files will be generated in user_dump_dest whenever someone attempts to login using an invalid username / password. As the trace is requested at level 10 it will include a section labeled PROCESS STATE that includes trace information such as :
O/S info: user:orafrms, term: pts/15, ospid: 29959, machine:unix_app_001
program: frmweb@unix_app_001 (TNS V1-V3)
application name: frmweb@unix_app_001 (TNS V1-V3), hash value=0
last wait for 'SQL*Net message from client' blocking sess=0x0 seq=2 wait_time=5570 seconds since wait started=0
In this case it was a 'frmweb' client running as OS user 'orafrms' that started the client session. The section "Call Stack Trace" may aid support in further diagnosing the issue.
Note: If the OS user or program is 'oracle' the connection may originate from a Database Link.
4. Using SQL*Net tracing to gather information
A SQL*Net trace can provide even more detail about the connection attempt, but use this only if none of the above methods identify the origin of the failed login: with tracing enabled it is hard to find what you are looking for, and it can potentially consume large amounts of disk space.
To enable SQL*Net tracing create or edit the server side sqlnet.ora file and add the following parameters:
# server side sqlnet trace parameters
trace_level_server = 16
trace_file_server=server
trace_directory_server = <any directory on a volume with enough freespace>
}}}
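Once the origin is found it is usually worth backing these settings out again; a minimal sketch reversing methods 1 and 3 above:
{{{
-- stop auditing unsuccessful logons (reverses method 1)
noaudit session whenever not successful;

-- turn the ORA-1017 errorstack dump back off (reverses method 3)
alter system set events '1017 trace name errorstack off';
}}}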
Pull disk test case http://www.evernote.com/shard/s48/sh/bff156b7-9898-4010-9346-f16ba106354b/32ec97f26d92eb07b8b5974f4a4093ff
{{{
-- identify a specific disk in the storage cells.. will make the amber light blink
/opt/MegaRAID/MegaCli/MegaCli64 -pdlocate -physdrv '[35:5]' -a0
/opt/MegaRAID/MegaCli/MegaCli64 -pdlocate -stop -physdrv '[35:5]' -a0
}}}
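To find the [enclosure:slot] address to pass to -physdrv in the first place, listing all physical drives should work (a sketch, assuming the same MegaCli64 path as above):
{{{
# list enclosure/slot and firmware state for every physical drive
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | egrep "Enclosure Device ID|Slot Number|Firmware state"
}}}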
ITI - causing harddisk failure http://www.evernote.com/shard/s48/sh/4d0b5df6-8e55-4a61-9b16-b0995ec5511c/3349abb1342f5cfb996a6a35a246b561
Why gridisks are not automatically created after replacing a diskdrive in a cellserver? http://jeyaseelan-m.blogspot.com/2011/07/why-gridisks-are-not-automatically.html
Replacing a failed Exadata Storage Server System Drive http://jarneil.wordpress.com/2011/10/21/replacing-a-failed-exadata-storage-server-system-drive/
http://www.evernote.com/shard/s48/sh/ee3819be-be40-4c6b-a2b3-7b415d63e1ba/b6755dc12c817f384b807449821037b2
http://iggyfernandez.wordpress.com/2011/07/04/take-that-exadata-fast-index-creation-using-noparallel/
http://www.rittmanmead.com/files/oow2010_bryson_fault_tolerance.pdf
http://www.rittmanmead.com/2010/02/data-warehouse-fault-tolerance-an-introduction/
http://www.rittmanmead.com/2010/02/data-warehouse-fault-tolerance-part-1-resuming/
http://www.rittmanmead.com/2010/02/data-warehouse-fault-tolerance-part-2-restarting/
http://www.rittmanmead.com/2010/02/data-warehouse-fault-tolerance-part-3-restoring/
fedora dvd iso http://mirrors.rit.edu/fedora/fedora/linux/releases/
http://blogs.oracle.com/kirkMcgowan/2007/06/who_are_the_rac_pack.html
http://blogs.oracle.com/kirkMcgowan/2007/06/whos_afraid_of_the_big_bad_rac.html
http://blogs.oracle.com/kirkMcgowan/2007/08/fencing_yet_again.html
<<<
Fencing - yet again
By kirk.mcgowan on August 9, 2007 12:12 AM
Sheesh. It is amazing to me how this topic continues to spin. Clearly people just like to speculate, and I suppose a little controversy can serve to energize, but this topic seems to have taken on a life of its own. The real question in my mind is why do you care? Fencing is a core functionality of the cluster infrastructure. You can't control it, or influence it in any way. It has to be there in some form, or bad things will happen (corruptions being one of them). And if the particular fencing implementation in Oracle clusterware was fundamentally flawed, it would have been exposed long ago over the course of the 5+ years of existence, and the thousands of deployments.
So any discussion of fencing and the Oracle implementation is purely theoretical, and largely academic, since it has more than proven itself. OK, I enjoy lively academic or theoretical technical debate, particularly over a beer or 2, but not at the expense of ignoring reality. So let's pull apart the discussions I've seen, and address them point by point. Note that this discussion is focused solely on Oracle Clusterware used in conjunction with RAC.
Oracle Clusterware uses the Stonith algorithm. This is only partially true. Oracle's fencing mechanism is based on the Stonith algorithm. However, there is no general design rule of how that algorithm should be implemented. Strict use of the algorithm is complicated, or perhaps even prevented, by the fact that there is no API on many platforms for doing a remote power-off reset of the system. So the current implementation is in fact a suicide, as opposed to an execution. As system/OS vendors make such APIs available, Oracle will be able to make use of them.
Suicide is not reliable because you are expecting an already unhealthy system to respond to some other directive. Sure. There are corner cases where this is a possibility, but these have proven to be very rare, they have been fixed when they appeared, and the real underlying concern, which is exposure to data corruption, is non-existent (see next point). This issue is actually related to the FUD we often see about some cluster managers running in kernel mode vs user space where Oracle Clusterware runs. Well... if the OS kernel is misbehaving, then it doesn't really matter where the clusterware runs - bad things are going to happen. (We've seen this occur in several situations.) If someone makes a programming error in the clusterware code and it is running in kernel mode, then the OS kernel is exposed. (This is theoretical since Oracle clusterware does not run in kernel mode, but it's not like this hasn't happened before in other environments where user/application code is allowed to run in kernel space.) And lastly, if running in user space, and other user space programs misbehave, then the obvious concern is the sensitivity the cluster has to that misbehaving application - like not being able to get CPU time to communicate in a timely manner. We have certainly seen this kind of scenario many times, but in general it is easily mitigated by renicing or increasing the priority of the key background communication processes. Bottom line is that suicide has proven sufficiently reliable. Any claim to the contrary is pure speculation.
Because suicide is unreliable, you are exposed to data corruptions. Not true. Either in theory, or in practice. It's no secret RAC does unbuffered IO (bypasses the OS cache), and any IO done in a RAC environment is in complete coordination with the other nodes in the cluster. Cache fusion assures this. And this holds true in a split brain condition. If RAC can't coordinate the write with the other nodes as a result of interconnect failure, then that write is put on hold until communication is restored, or until the eviction protocol is invoked.
This is obviously over simplified, but frankly, so are the criticisms in this area. The challenge to any non-believer is the following: find me a repeatable test case where interconnect failure, and the resulting fencing algorithm implemented in Oracle clusterware, results in database corruption. If you are successful, I will:
1. Fall off my chair in disbelief
2. Write "They were right, I was wrong" 1000 times in my blog, and apologize profusely to anyone who may have taken offence to the claims made in this posting.
3. File a bug, and get the damn thing fixed.
Now that I think about it, it would probably be prudent to reverse 2. and 3. Note however, that in the off chance you are successful, it is a bug, and will be fixed as such. As opposed to a fundamental architectural flaw.
So let's put this one to bed. Next topic.
<<<
http://msdn.microsoft.com/en-us/library/aa365247(v=vs.85).aspx
http://www.computing.net/answers/windows-xp/file-name-or-extension-is-too-long/183526.html
-- a workaround for this on Windows 7 is to map the folder you want to copy as a network drive, then create a short directory such as c:\x and do the copy!
http://blog.unmaskparasites.com/2009/09/01/beware-filezilla-doesnt-protect-your-ftp-passwords/
http://www.makeuseof.com/tag/how-to-recover-passwords-from-asterisk-characters/
http://www.groovypost.com/howto/retrieve-recover-filezilla-ftp-passwords/
http://arjudba.blogspot.com/2008/05/how-to-discover-find-dbid.html
http://oraclepoint.com/oralife/2010/10/21/how-to-find-the-date-when-a-database-object-role-was-created/
http://dboptimizer.com/?p=694
http://en.wikipedia.org/wiki/IEEE_1394
Unfortunately, the program in question is pulling back 3 million rows (all the rows from a view) and then doing some processing, so we are generating a lot of network traffic. This is one old program that will take some effort to change.
We got a boost in performance by disabling SQL*Net Inspection on the firewall (see below).
We are also tuning the SDU sizes in the sqlnet files to help performance as you suggested below.
From their admin about their firewall:
===============================
A few notes from what we learned overnight.
1. Looks like removing the SQL inspection (sometimes called a fixup) helped our performance
a. About a 75% gain. (See note at bottom for more about SQL inspection)
{{{
SQL*Net Inspection
SQL*Net inspection is enabled by default.
The SQL*Net protocol consists of different packet types that the ASA handles to make the data stream appear consistent to the Oracle applications on either side of the ASA.
The default port assignment for SQL*Net is 1521. This is the value used by Oracle for SQL*Net, but this value does not agree with IANA port assignments for Structured Query Language (SQL). Use the class-map command to apply SQL*Net inspection to a range of port numbers.
Note: Disable SQL*Net inspection when SQL data transfer occurs on the same port as the SQL control TCP port 1521. The security appliance acts as a proxy when SQL*Net inspection is enabled and reduces the client window size from 65000 to about 16000, causing data transfer issues.
}}}
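On the SDU tuning mentioned above, a minimal sketch of the two places it can be set (host/service names are placeholders; SDU is negotiated, so set it on both the client and server sides):
{{{
# sqlnet.ora - raise the default SDU for all connections
DEFAULT_SDU_SIZE=32767

# or per connect descriptor in tnsnames.ora
ORCL =
  (DESCRIPTION =
    (SDU = 32767)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
}}}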
http://arup.blogspot.com/2010/06/build-simple-firewall-for-databases.html
{{{
select /* usercheck */ s.sid sid, s.serial# serial#, lpad(p.spid,7) unix_pid
from gv$process p, gv$session s
where p.addr=s.paddr
and s.username is not null
and (s.inst_id, s.sid) in (select inst_id, sid from gv$mystat where rownum < 2);
-- make sure that you have this set as ROOT
-- [root@localhost ~]# echo -1 > /proc/sys/kernel/perf_event_paranoid
$ cat flamegraph.sql
set lines 200
with d as
(
select '&procid' spid,
'&&prefix._perf_graph.data' newfilename,
'&&prefix._perf_graph.data-folded' folded_filename,
'&&prefix._flamegraph.svg' flamegraph_filename,
'&&prefix.' tarname
from dual
)
SELECT
'perf record -g -p ' || spid || chr(10) ||
'mv perf.data ' || newfilename || chr(10) ||
'perf script -i ' || newfilename || ' | ./stackcollapse-perf.pl > ' || folded_filename || chr(10) ||
'cat ' || folded_filename || '| ./flamegraph.pl > ' || flamegraph_filename || chr(10) ||
'tar -cjvpf ' || tarname || '_perf_data.tar.bz2 ' || tarname || '*'
as commands
from d
;
COMMANDS
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
perf record -g -p 12345
mv perf.data testcase1_perf_graph.data
perf script -i testcase1_perf_graph.data | ./stackcollapse-perf.pl > testcase1_perf_graph.data-folded
cat testcase1_perf_graph.data-folded| ./flamegraph.pl > testcase1_flamegraph.svg
tar -cjvpf testcase1_perf_data.tar.bz2 testcase1*
}}}
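The stackcollapse-perf.pl and flamegraph.pl scripts referenced by the generated commands come from Brendan Gregg's FlameGraph repo; to stage them in the working directory:
{{{
git clone https://github.com/brendangregg/FlameGraph
cp FlameGraph/stackcollapse-perf.pl FlameGraph/flamegraph.pl .
}}}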
! references
http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html
!! windows
https://randomascii.wordpress.com/2013/03/26/summarizing-xperf-cpu-usage-with-flame-graphs/
!! tanel
https://blog.tanelpoder.com/posts/visualizing-sql-plan-execution-time-with-flamegraphs/
!! other examples
https://tableau.github.io/Logshark/docs/logshark_art#flamecharts
http://www.flashconf.com/how-to/how-to-install-flash-player-on-centosredhat-linux/
Master Note For Oracle Flashback Technologies [ID 1138253.1]
Flashback Database Best Practices & Performance
Doc ID: Note:565535.1
What Do All 10g Flashback Features Rely on and what are their Limitations ?
Doc ID: Note:435998.1
Restrictions on Flashback Table Feature
Doc ID: 270535.1
Creating a 10gr2 Data Guard Physical Standby database with Real-Time apply [ID 343424.1]
11gR1 Data Guard Portal [ID 798974.1]
Master Note for Data Guard [ID 1101938.1]
How To Open Physical Standby For Read Write Testing and Flashback [ID 805438.1]
Step by Step Guide on How To Reinstate Failed Primary Database into Physical Standby [ID 738642.1]
Using RMAN Effectively In A Dataguard Environment. [ID 848716.1]
Reinstating a Physical Standby Using Backups Instead of Flashback [ID 416310.1]
Oracle11g Data Guard: Database Rolling Upgrade Shell Script [ID 949322.1]
Steps to perform for Rolling forward a standby database using RMAN incremental backup when primary and standby are in ASM filesystem [ID 836986.1]
''Flashback and nologging''
http://docs.oracle.com/cd/B28359_01/backup.111/b28273/rcmsynta023.htm
http://www.pythian.com/news/4884/questions-you-always-wanted-to-ask-about-flashback-database/
http://rnm1978.wordpress.com/2011/06/28/oracle-11g-how-to-force-a-sql_id-to-use-a-plan_hash_value-using-sql-baselines/
By Tanel Poder:
cat /tmp/x | awk '{ printf "%s", $0 ; if (NR % 3 == 0) print } END { print }'
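This joins every 3 input lines into one output line; a quick demo:
{{{
$ printf "a\nb\nc\nd\ne\n" > /tmp/x
$ cat /tmp/x | awk '{ printf "%s", $0 ; if (NR % 3 == 0) print } END { print }'
abc
de
}}}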
Getting Started With Forms 9i - Hints and Tips
Doc ID: Note:237191.1
Troubleshooting Web Deployed Oracle Forms Performance Issues
Doc ID: Note:363285.1
-- NETWORK PERFORMANCE
Bandwith Per User Session For Oracle Form Base Web Deployment In Oracle9ias
Doc ID: Note:287237.1
How to Find Out How Much Network Traffic is Created by Web Deployed Forms?
Doc ID: Note:109597.1
Few Basic Techniques to Improve Performance of Forms.
Doc ID: Note:221529.1
-- MIGRATE TO 9i/10g
Migrating to Oracle Forms 9i / 10g - Forms Upgrade Center
Doc ID: Note:234540.1
-- CORRUPTION
Recovering Corrupted Forms
Doc ID: 161430.1
''Good stuff topics I need to catch up on..''
oaktable - linux filesystems https://mail.google.com/mail/u/0/#inbox/14a779368d7427ed
oracle -l Memory operations on Sun/Oracle M class servers vs T class servers https://mail.google.com/mail/u/0/#inbox/14a54d821527fd6d
oaktable - CPU wait https://mail.google.com/mail/u/0/#inbox/14ae89d485847182
counting memory stall on Sandy Bridge https://software.intel.com/en-us/forums/topic/514733
oaktable - CPU overhead in multiple instance RAC databases https://mail.google.com/mail/u/0/#inbox/14af37881a9831f2
oaktable - Has anyone seem current-ish AMD Opterons lately https://mail.google.com/mail/u/0/#inbox/14afef9391d6dd76
oracle-l - Exadata + OMCS https://mail.google.com/mail/u/0/#inbox/14ae51e818976ce7
oracle-l - how do you manage your project list https://mail.google.com/mail/u/0/#inbox/14a12a1121b829a1
http://jonathanlewis.wordpress.com/2010/07/13/fragmentation-1/
http://jonathanlewis.wordpress.com/2010/07/16/fragmentation-2/
http://jonathanlewis.wordpress.com/2010/07/19/fragmentation-3/
http://jonathanlewis.wordpress.com/2010/07/22/fragmentation-4/
http://joshodgers.com/storage/fusionio-iodrive2-virtual-machine-performance-benchmarking-part-1/
http://longwhiteclouds.com/2012/08/17/io-blazing-datastore-performance-with-fusion-io/
<<showtoc>>
! video courses
!! design and architecture
https://www.pluralsight.com/courses/google-dataflow-architecting-serverless-big-data-solutions
https://www.pluralsight.com/courses/google-cloud-platform-leveraging-architectural-design-patterns
https://www.pluralsight.com/courses/google-cloud-functions-architecting-event-driven-serverless-solutions
https://www.pluralsight.com/courses/google-dataproc-architecting-big-data-solutions
https://www.pluralsight.com/courses/google-machine-learning-apis-designing-implementing-solutions
https://www.pluralsight.com/courses/google-bigquery-architecting-data-warehousing-solutions
https://www.pluralsight.com/courses/google-cloud-automl-designing-implementing-solutions
https://www.linkedin.com/learning/search?keywords=apache%20beam
https://www.linkedin.com/learning/data-science-on-google-cloud-platform-building-data-pipelines/what-goes-into-a-data-pipeline <-- good summary
https://www.linkedin.com/learning/google-cloud-platform-for-enterprise-essential-training/enterprise-ready-gcp
https://www.linkedin.com/learning/architecting-big-data-applications-batch-mode-application-engineering/dw-lay-out-the-architecture <-- good 5 use cases
https://www.linkedin.com/learning/data-science-on-google-cloud-platform-architecting-solutions/architecting-data-science <-- good 4 use cases
https://www.linkedin.com/learning/data-science-on-google-cloud-platform-designing-data-warehouses/why-data-warehouses-are-important
https://www.linkedin.com/learning/architecting-big-data-applications-real-time-application-engineering/sm-analyze-the-problem <-- good 4 use cases
!! detailed tech
https://www.pluralsight.com/courses/google-cloud-platform-firestore-leveraging-realtime-database-solutions
https://www.pluralsight.com/courses/google-cloud-sql-creating-administering-instances
!! authors
https://www.pluralsight.com/authors/janani-ravi
https://www.linkedin.com/learning/instructors/kumaran-ponnambalam <-- nice architecture courses
! ### GCP architecture references
https://cloud.google.com/docs/tutorials#architecture
! ### architecture - Smart analytics reference patterns
https://cloud.google.com/solutions/smart-analytics/reference-patterns/overview
! ### GCP solutions by industry
https://cloud.google.com/solutions/migrating-oracle-to-cloud-spanner
! ### migration patterns
!! multiple source systems to bigquery
from pythian whitepapers
https://resources.pythian.com/hubfs/Framework-For-Migrate-Your-Data-Warehouse-Google-BigQuery-WhitePaper.pdf
https://resources.pythian.com/hubfs/White-Papers/Migrate-Teradata-to-Google-BigQuery.pdf
[img(70%,70%)[ https://i.imgur.com/IVdnYBz.png]]
!! from on-prem hadoop to dataproc and bigquery
<<<
You use a Hadoop cluster both for serving analytics and for processing and transforming data. The data is currently stored on HDFS in Parquet format. The data processing jobs run for 6 hours each night. Analytics users can access the system 24 hours a day. Phase 1 is to quickly migrate the entire Hadoop environment without a major re-architecture. Phase 2 will include migrating to BigQuery for analytics and to Cloud Dataflow for data processing. You want to make the future migration to BigQuery and Cloud Dataflow easier by following Google-recommended practices and managed services. What should you do?
A. Lift and shift Hadoop/HDFS to Cloud Dataproc.
B. Lift and shift Hadoop/HDFS to Compute Engine.
C. Create a single Cloud Dataproc cluster to support both analytics and data processing, and point it at a Cloud Storage bucket that contains the Parquet files that were previously stored on HDFS.
D. Create separate Cloud Dataproc clusters to support analytics and data processing, and point both at the same Cloud Storage bucket that contains the Parquet files that were previously stored on HDFS.
Feedback
A is not correct because it is not recommended to attach persistent HDFS to Cloud Dataproc clusters in GCP. (see references link)
B Is not correct because they want to leverage managed services which would mean Cloud Dataproc.
C is not correct because it is recommended that Cloud Dataproc clusters be job specific.
D Is correct because it leverages a managed service (Cloud Dataproc), the data is stored on GCS in Parquet format which can easily be loaded into BigQuery in the future and the Cloud Dataproc clusters are job specific.
<<<
https://cloud.google.com/solutions/migration/hadoop/hadoop-gcp-migration-jobs
! ### performance comparison
Cloud Data Warehouse Benchmark: Redshift, Snowflake, Azure, Presto and BigQuery https://fivetran.com/blog/warehouse-benchmark
! ### data and file storage
Understanding Data and File Storage https://cloud.google.com/appengine/docs/standard/java/storage
https://cloud.google.com/products/storage
!! ### cloud Spanner (globally scalable transactional RDBMS)
https://cloud.google.com/spanner/
!! ### cloud SQL (managed RDBMS)
!! ### cloud datastore (NOSQL single region - storing structured application data that are mutable - think User entity, Blog post, etc)
!!! cloud firestore datastore (NEWER managed schemaless NoSQL)
Google Cloud Storage vs Google Cloud DataStore https://groups.google.com/forum/#!topic/gcd-discuss/SajfBn79LVw
<<<
Google Cloud Storage is for storing immutable blob objects (think images, and static files).
Google Cloud Datastore is for storing structured application data that are mutable (think User entity, Blog post, etc).
<<<
<<<
Another difference that is worth mentioning is that Google Cloud Storage supports Multi-Regional buckets that synchronize data across regions automatically,
while Google Cloud Datastore is stored within a single region. So if you want to store your data across multiple regions, for example, Cloud Storage is your way to go.
<<<
Firestore in Datastore mode documentation https://cloud.google.com/datastore/docs
https://stackoverflow.com/questions/46549766/whats-the-difference-between-cloud-firestore-and-the-firebase-realtime-database
<<<
It is an improved version.
The Firebase Realtime Database was enough for basic applications, but it was not powerful enough to handle complex requirements. That is why Cloud Firestore was introduced. Here are some major changes:
* The basic file structure is improved.
* Offline support for the web client.
* Supports more advanced querying.
* Write and transaction operations are atomic.
* Reliability and performance improvements.
* Scaling will be automatic.
* Will be more secure.
Pricing
In Cloud Firestore, rates have been lowered even though it charges primarily for operations performed in your database, along with bandwidth and storage. You can set a daily spending limit too. See the linked answer for the complete details about billing.
Future plans of Google
When they discovered the flaws with the Realtime Database, they created another product rather than improving the old one. Even though there are no reliable details revealing their current standing on the Realtime Database, it is time to start thinking that it is likely to be abandoned.
<<<
!!! Firebase Realtime DB (OLDER - JSON-based NoSQL DB)
https://firebase.google.com/docs/database/rtdb-vs-firestore
What's the Difference Between Cloud Firestore & Firebase Realtime Database https://www.youtube.com/watch?v=KeIx-mArUck
!! ### cloud storage (S3-like object store; persistent alternative to HDFS; supports Multi-Regional buckets - storing immutable blob objects - think images and static files)
https://medium.com/@jana.avula/google-cloud-dataproc-hdfs-vs-google-cloud-storage-for-dataproc-data-processing-jobs-5800de2ecfa
cloud storage documentation https://cloud.google.com/storage/docs
!! ### bigtable (hbase)
https://www.guru99.com/hbase-shell-general-commands.html
https://www.google.com/search?biw=1675&bih=985&sxsrf=ACYBGNRSdd9JLZO0oq181zQHdenKslJd_Q%3A1571860356250&ei=hK-wXdfrDpGzggf89YFQ&q=bigtable++cassandra&oq=bigtable++cassandra&gs_l=psy-ab.3..0i7i30l2j0j0i7i30j0i7i5i30j0i5i30j0i8i30l4.31693.31693..31906...0.3..0.79.79.1......0....1..gws-wiz.......0i71.3eXHaAqlSRY&ved=0ahUKEwjXva6RlLPlAhWRmeAKHfx6AAo4ChDh1QMICw&uact=5
https://db-engines.com/en/system/Cassandra%3BGoogle+Cloud+Bigtable%3BHBase
https://www.google.com/search?q=cassandra+vs+bigtable&oq=cassandra+vs+bi&aqs=chrome.0.0j69i57j0l3j69i60.3802j0j4&sourceid=chrome&ie=UTF-8
https://stackshare.io/stackups/cassandra-vs-google-cloud-bigtable
https://stackoverflow.com/questions/41579281/db-benchmarks-cassandra-vs-bigtable-vs-hadoops
Moving from Cassandra to Auto-Scaling Bigtable at Spotify (Cloud Next '19) https://www.youtube.com/watch?v=Hfd3VZOYXNU
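for a quick hands-on feel next to the HBase shell commands above, the cbt CLI covers the same basics (project/instance/table names are placeholders):
{{{
cbt -project my-project -instance my-instance createtable mytable
cbt -project my-project -instance my-instance createfamily mytable cf1
cbt -project my-project -instance my-instance set mytable row1 cf1:col1=value1
cbt -project my-project -instance my-instance read mytable
cbt -project my-project -instance my-instance ls
}}}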
!! ### google pubsub (real-time kafka)
!!! pubsub architecture
https://cloud.google.com/pubsub/architecture
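the basic publish/subscribe flow can be exercised end to end from the CLI (topic/subscription names made up):
{{{
gcloud pubsub topics create demo-topic
gcloud pubsub subscriptions create demo-sub --topic=demo-topic
gcloud pubsub topics publish demo-topic --message="hello"
gcloud pubsub subscriptions pull demo-sub --auto-ack --limit=10
}}}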
!!! windowing basics
https://beam.apache.org/documentation/programming-guide/#windowing
! ### cloud dataproc (ephemeral hadoop processing nodes - spark, hive jobs)
https://www.g2.com/compare/amazon-emr-vs-google-cloud-dataproc
!! cloud storage connector
https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage
! ### big query (hive SQL engine)
!! data transfer service
https://cloud.google.com/bigquery/docs/dts
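a hedged example of creating a GCS-to-BigQuery transfer config from the CLI (dataset, bucket, and params are placeholders; the exact params JSON depends on the data source):
{{{
bq mk --transfer_config \
  --data_source=google_cloud_storage \
  --display_name="nightly gcs load" \
  --target_dataset=mydataset \
  --params='{"data_path_template":"gs://my-bucket/*.csv","destination_table_name_template":"staging_table","file_format":"CSV"}'
}}}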
!! query plan
https://cloud.google.com/bigquery/query-plan-explanation
https://medium.com/google-cloud/visualising-bigquery-41bf6833b98
!! clustered tables
https://cloud.google.com/bigquery/docs/creating-clustered-tables
!! bq command line
https://cloud.google.com/bigquery/docs/bq-command-line-tool
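a small bq sketch tying the last two subsections together - creating a partitioned + clustered table via DDL (dataset/table/columns invented):
{{{
bq query --use_legacy_sql=false '
CREATE TABLE mydataset.orders_clustered
PARTITION BY DATE(order_ts)
CLUSTER BY customer_id AS
SELECT * FROM mydataset.orders'

bq show --format=prettyjson mydataset.orders_clustered
}}}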
!! imported data may not match byte-for-byte
https://cloud.google.com/bigquery/docs/loading-data#loading_encoded_data
!! external data source (also known as a federated data source)
https://cloud.google.com/bigquery/external-data-sources
<<<
BigQuery offers support for querying data directly from:
Bigtable
Cloud Storage
Google Drive
<<<
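for example, a GCS-backed external (federated) table can be defined with bq mkdef / bq mk (all names invented):
{{{
bq mkdef --autodetect --source_format=CSV "gs://my-bucket/exports/*.csv" > /tmp/ext_def.json
bq mk --table --external_table_definition=/tmp/ext_def.json mydataset.ext_sales
bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM mydataset.ext_sales'
}}}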
https://supermetrics.com/?utm_source=adwords&utm_medium=cpc&utm_campaign=supermetrics-brand-us&utm_adgroup=brand-exact&utm_category=search-brand&utm_term=supermetrics&location&gclid=EAIaIQobChMIjPib2Jeu5QIVEYTICh0IdAyIEAAYASAAEgJbCPD_BwE
! ### data fusion (based on CDAP, works like Talend) GUI, no programming (creates spark pipeline, runs on dataproc cluster)
https://cloud.google.com/data-fusion/pricing
https://cloud.google.com/data-fusion/
https://cloud.google.com/data-fusion/docs/tutorials/reusable-pipeline
! ### data prep (wrangling by Trifacta) (creates beam pipeline, runs on dataflow)
https://cloud.google.com/dataprep/
https://cloud.google.com/dataprep/docs/quickstarts/quickstart-dataprep
https://www.stitchdata.com/vs/google-cloud-dataprep/google-cloud-data-fusion/
https://stackoverflow.com/questions/58175386/can-google-data-fusion-make-the-same-data-cleaning-than-dataprep
<<<
Data Fusion and Dataprep can perform the same things; however, their execution is different.
Data Fusion creates a Spark pipeline and runs it on a Dataproc cluster.
Dataprep creates a Beam pipeline and runs it on Dataflow.
IMO, Data Fusion is more designed for data ingestion from one source to another, with few transformations. Dataprep is more designed for data preparation (as its name implies): data cleaning, new column creation, splitting columns. Dataprep also provides insight into the data to help you with your recipes.
In addition, Beam is part of TensorFlow Extended, and your data engineering pipeline will be more consistent if you use a tool compliant with Beam.
That's why I would recommend Dataprep instead of Data Fusion.
<<<
! ### dataflow (apache beam batch and streaming) w/ programming
https://cloud.google.com/dataflow/docs/guides/stopping-a-pipeline
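in CLI terms (JOB_ID and region are placeholders): drain finishes buffered/in-flight data, cancel stops immediately:
{{{
gcloud dataflow jobs list --region=us-central1 --status=active
gcloud dataflow jobs drain JOB_ID --region=us-central1    # graceful stop
gcloud dataflow jobs cancel JOB_ID --region=us-central1   # immediate stop
}}}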
!! dataflow commands reference
https://cloud.google.com/dataflow/docs/reference/sql/operators
! ### cloud composer (airflow)
https://cloud.google.com/composer/docs/how-to/using/writing-dags
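deploying a DAG is just copying the .py file into the environment's GCS dags folder; a sketch with made-up names:
{{{
gcloud composer environments storage dags import \
  --environment=my-composer-env --location=us-central1 --source=my_dag.py
gcloud composer environments describe my-composer-env --location=us-central1
}}}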
! ### machine learning
drfib.me/ML - a handful of machine learning courses
https://docs.google.com/presentation/d/1gVf_6SL0JkI9fPQ-2pZYr4QYIiNtIXXhFeyEwU4tTIE/present?slide=id.g4c173eec31_0_0
!! bigquery ML
https://cloud.google.com/bigquery-ml/pricing
<<<
3 categories
* Storage
* Queries (analysis)
* BigQuery ML CREATE MODEL queries
<<<
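a minimal CREATE MODEL example run through bq (dataset, model, and training columns are invented; logistic_reg is one of the supported model types):
{{{
bq query --use_legacy_sql=false '
CREATE OR REPLACE MODEL mydataset.churn_model
OPTIONS(model_type="logistic_reg", input_label_cols=["churned"]) AS
SELECT churned, tenure_months, monthly_spend FROM mydataset.customers'

bq query --use_legacy_sql=false 'SELECT * FROM ML.EVALUATE(MODEL mydataset.churn_model)'
}}}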
https://towardsdatascience.com/machine-learning-with-sql-ae46b1fe78a9
!! google AutoML
https://www.statworx.com/at/blog/a-performance-benchmark-of-different-automl-frameworks/
https://medium.com/@brianray_7981/google-clouds-automl-first-look-cb7d29e06377
! ### data studio (Tableau)
https://www.holistics.io/blog/google-data-studio-pricing-and-in-depth-reviews/
! whitepapers
https://resources.pythian.com/hubfs/Framework-For-Migrate-Your-Data-Warehouse-Google-BigQuery-WhitePaper.pdf
https://resources.pythian.com/hubfs/White-Papers/Migrate-Teradata-to-Google-BigQuery.pdf
! end to end example
https://medium.com/@imrenagi/how-i-should-have-orchestrated-my-etl-pipeline-better-with-cloud-dataflow-template-f140b958f544
https://medium.com/@samueljboficial/bigdata-project-using-google-ecosystem-with-storage-function-composer-dataflow-and-bigquery-6ed8b5d42f9f
https://towardsdatascience.com/no-code-data-pipelines-a-first-impression-of-cloud-data-fusion-2b6f117a3ce8
! exam
https://towardsdatascience.com/passing-the-google-cloud-professional-data-engineer-certification-87da9908b333
https://linuxacademy.com/cp/modules/view/id/208
https://cloud.google.com/certification/practice-exam/data-engineer
.
<<showtoc>>
also see [[GCP exercices, GCP community]]
! dev tools
! setup and iam
! storage
! compute
! networking
! big data pipelines
! ci/cd tools
! ai and ml
! sample data
! gcp essential urls
<<showtoc>>
! main exercises portals
!! codelabs
https://codelabs.developers.google.com/
https://codelabs.developers.google.com/codelabs/real-time-csv-cdf-bq/index.html?index=..%2F..index#0
!! qwiklabs
qwiklabs.com
https://googlepluralsight.qwiklabs.com
https://googlepluralsight.qwiklabs.com/focuses/8196573?parent=lti_session
!! tutorials
https://cloud.google.com/community/tutorials/ssh-port-forwarding-set-up-load-testing-on-compute-engine
https://cloud.google.com/community/tutorials/bigquery-from-dbeaver-odbc
https://cloud.google.com/community/tutorials/write
https://cloud.google.com/docs/tutorials
https://github.com/GoogleCloudPlatform/community/tree/master/tutorials
! others
https://www.qwiklabs.com/focuses/6376?locale=en&parent=catalog
https://www.qwiklabs.com/focuses/3719?parent=catalog
https://www.qwiklabs.com/focuses/925?parent=catalog
https://www.qwiklabs.com/focuses/3392?parent=catalog
https://www.qwiklabs.com/focuses/3460?parent=catalog
https://www.qwiklabs.com/focuses/1159?parent=catalog
https://cloud.google.com/vpc-service-controls
https://cloud.google.com/vpc-service-controls/docs/quickstart
<<showtoc>>
! course workshop
{{{
GCP streaming analytics
==============
Day 1
Intro
History: Why MapReduce to Flume, Describing a pipeline declaratively - DAG
Basic Windows, Timestamps, Triggers
Exploring the SDK Part I
>>SDKs & Runners Basics
>>Building a pipeline building blocks
>>Utility functions
Dataflow Runner
>>Overview
>>Autoscaling
Sources and Sinks
Example Pipelines Part I
Day 2
Schemas & SQL
>>Schemas
>>Beam / Dataflow SQL
Example Pipelines Part II
Execution Model
>>Bundles & DoFn Lifecycle
>>Autoscaling part II & Dynamic Work Rebalancing
>>Advanced Watermarks
SDK Part II - State and Timers API
Pipeline Productionization Part I
>>Monitoring
>>Performance
>>Troubleshooting & Debugging
Pipeline Productionization Part II
>>Templates
Pipeline Productionization Part III
>>Testing & CI / CD
}}}
! example use case - reference architecture traveloka
Stream analytics on GCP: How Traveloka’s multi-cloud, fully-managed data stack keeps the focus on revolutionizing human mobility
https://learning.oreilly.com/videos/strata-data-conference/9781491976326/9781491976326-video316623
[img(70%,70%)[https://i.imgur.com/eb6pcqJ.png]]
[img(70%,70%)[https://i.imgur.com/VbMoU8N.png]]
[img(70%,70%)[https://i.imgur.com/81oyBZM.png]]
[img(70%,70%)[https://i.imgur.com/A8JN4dq.png]]
[img(70%,70%)[https://i.imgur.com/FWgxEjB.png]]
[img(70%,70%)[https://i.imgur.com/BSxIXst.png]]
[img(70%,70%)[https://i.imgur.com/hCh49UL.png]]
[img(70%,70%)[https://i.imgur.com/DfIMJ5j.png]]
6 THINGS DATABASE ADMINISTRATORS SHOULD KNOW ABOUT GDPR
http://info.enterprisedb.com/rs/069-ALB-339/images/GDPR%20for%20DBA_EDB%20Tech%20Guide.pdf?aliId=93035477
{{{
For each Oracle RAC database homes and the GI home that are being patched, run the following commands as the home owner to extract the OPatch utility.
unzip <OPATCH-ZIP> -d <ORACLE_HOME>
<ORACLE_HOME>/OPatch/opatch version
-----------------
As the Grid home owner execute:
%<ORACLE_HOME>/OPatch/ocm/bin/emocmrsp
-----------------
%<ORACLE_HOME>/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>
-----------------
As the Oracle RAC database home owner execute:
%<ORACLE_HOME>/bin/emctl stop dbconsole
-----------------
The OPatch utility automates the patch application for the Oracle Grid Infrastructure (GI) home and the Oracle RAC database homes. It operates by querying existing configurations and automating the steps required for patching each Oracle RAC database home of the same version and the GI home.
The utility must be executed by an operating system (OS) user with root privileges (usually the user root), and it must be executed on each node in the cluster if the GI home or Oracle RAC database home is on non-shared storage. The utility should not be run in parallel on the cluster nodes.
Depending on the command line options specified, one invocation of OPatch can patch the GI home, one or more Oracle RAC database homes, or both the GI and Oracle RAC database homes of the same Oracle release version. You can also roll back the patch with the same selectivity.
Add the directory containing the opatch to the $PATH environment variable.
For example:
export PATH=$PATH:<GI_HOME path>/OPatch
To patch GI home and all Oracle RAC database homes of the same version:
#opatch auto <UNZIPPED_PATCH_LOCATION>
To patch only the GI home:
#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME>
To patch one or more Oracle RAC database homes:
#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <path to RAC database1 home>, <path to RAC database2 home>
To roll back the patch from the GI home and each Oracle RAC database home:
#opatch auto <UNZIPPED_PATCH_LOCATION> -rollback
To roll back the patch from the GI home:
#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <path to GI home> -rollback
To roll back the patch from the Oracle RAC database home:
#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <path to RAC database home> -rollback
-----------------
2.6 Patch Post-Installation Instructions for Databases Created or Upgraded after Installation of PSU 11.2.0.2.3 in the Oracle Home
These instructions are for a database that is created or upgraded after the installation of PSU 11.2.0.2.3.
You must execute the steps in Section 2.5.2, "Loading Modified SQL Files into the Database" for any new database only if it was created by any of the following methods:
Using DBCA (Database Configuration Assistant) to select a sample database (General, Data Warehouse, Transaction Processing)
Using a script that was created by DBCA that creates a database from a sample database
There are no actions required for databases that have been upgraded.
}}}
Loving GIMP for this LOMOfied photo :)
Check the original photo here http://www.facebook.com/photo.php?pid=4737724&l=6a5d70369b&id=552113028
To LOMOfy go here http://blog.grzadka.info/2010/07/02/lomografia-w-gimp/
BTW, the author (Samuel Albrecht) of the GIMP plugin emailed me with the batch mode (elsamuko-lomo-batch.scm).. go here for details http://sites.google.com/site/elsamuko/gimp/lomo
now you can run it on all your digital photos as
{{{
gimp -i -b '(elsamuko-lomo-batch "*.JPG" 1.5 10 10 0.8 5 1 3 128 2 FALSE FALSE TRUE FALSE 0 0 115)' -b '(gimp-quit 0)'
}}}
the 10th input value is the "color effect", see below:
0 - neutral
1 - old red
2 - xpro green
3 - blue
4 - intense red
5 - movie
6 - vintage-look
7 - LAB
8 - light blue
9 - redscale
10 - retro bw
11 - paynes
12 - sepia
Enjoy!
What are the GPL and LGPL and how do they differ? https://www.youtube.com/watch?v=JlIrSMzF8T4
I found a bunch of references, see below. From what I've read so far, GPU database products/technologies rely heavily on being columnar oriented, and GPUs are used mainly as accelerators. MapD, for example, is a columnar database that uses GPUs as a primary cache - it's pretty much like the flash cache but SIMD-aware.
In Oracle, we have Oracle Database In-Memory, although it uses the CPU SIMD. Basically, any program that has to use GPU capabilities has to use the functions/APIs of the hardware. So let's say in the future Oracle introduces some kind of GPU accelerator, then Oracle would have to make use of or program against the NVIDIA APIs. So it's really not straightforward stuff - see the GPU and R integration: you need to install a package and make use of the functions under that package to take advantage of the GPU hardware, although I thought R would take advantage of the GPU right away because of its innate vector operations.
Oracle Database In-Memory uses CPU SIMD vector processing. SIMD instructions are available on both CPU and GPU. Oracle TimesTen on the other hand is completely different from Oracle DB In-Memory and doesn't use SIMD.
https://blogs.oracle.com/In-Memory/entry/getting_started_with_oracle_database2
http://blog.tanelpoder.com/2014/10/05/oracle-in-memory-column-store-internals-part-1-which-simd-extensions-are-getting-used/
https://community.oracle.com/thread/3687203?start=0&tstart=0
http://www.nextplatform.com/2015/10/26/sgi-targets-oracle-in-memory-on-big-iron/
http://stackoverflow.com/questions/27333815/cpu-simd-vs-gpu-simd
https://www.quora.com/Whats-the-difference-between-a-CPU-and-a-GPU
http://superuser.com/questions/308771/why-are-we-still-using-cpus-instead-of-gpus
http://stackoverflow.com/questions/7690230/in-depth-analysis-of-the-difference-between-the-cpu-and-gpu
the book "Computer Architecture: A Quantitative Approach" (5th ed) talks about instruction level, data level, and thread level parallelism. Data level parallelism is the one related to SIMD/GPU/vector operations
https://www.dropbox.com/sh/shu0r3rvfodtdnz/AAAogcKfP_cE83UdTfl2avwsa?dl=0
GPU topics
https://www.quora.com/Are-there-any-available-material-or-good-tutorials-for-GPGPU-GPU-computing-using-CUDA-and-applied-to-database-query-acceleration
https://www.quora.com/How-does-the-performance-of-GPU-databases-like-MapD-and-Sqream-and-GPUdb-and-BlazingDB-compare-to-Spark-SQL-and-columnar-databases
Exploring High Performance SQL Databases with Graphics Processing Units http://hgpu.org/?p=11557
Do GPU optimized databases threaten the hegemony of Oracle, Splunk and Hadoop? http://diginomica.com/2016/04/11/do-gpu-optimized-databases-threaten-the-hegemony-of-oracle-splunk-and-hadoop/ , ycombinator discussion https://news.ycombinator.com/item?id=11476141
http://blog.accelereyes.com/blog/2009/01/22/data-parallelism-vs-task-parallelism/
Papers
Parallelism in Database Operations https://www.cs.helsinki.fi/webfm_send/1002/kalle_final.pdf
<<<
From this example alone we can see that these operations require the data elements to be placed strategically. As the SQL search query has criteria on certain data elements, and if the database system applies some of the strategies described here, the data layout must be columnar for the system to be the most efficient. If the layout is not columnar, the system needs to transform the dataset into a columnar shape if SIMD operations are to be used.
<<<
Fast Computation of Database Operations Using Graphics Processors http://gamma.cs.unc.edu/DB/
Rethinking SIMD Vectorization for In-Memory Databases http://www.cs.columbia.edu/~orestis/sigmod15.pdf
Scaling database performance on GPUs https://ir.nctu.edu.tw/bitstream/11536/16582/1/000307276000006.pdf
Accelerating SQL Database Operations on a GPU with CUDA https://www.cs.virginia.edu/~skadron/Papers/bakkum_sqlite_gpgpu10.pdf
GPU database products/technologies
PGStorm/Pgopencl https://wiki.postgresql.org/wiki/PGStrom , https://wiki.postgresql.org/images/6/65/Pgopencl.pdf
sqream http://sqream.com/solutions/products/sqream-db/ , http://sqream.com/where-are-the-gpu-based-sql-databases/
Alenka https://github.com/antonmks/Alenka , https://www.reddit.com/r/programming/comments/oxq6a/a_database_engine_that_runs_on_a_gpu_outperforms/
MapD https://moodle.technion.ac.il/pluginfile.php/568218/mod_resource/content/1/mapd_overview.pdf , https://devblogs.nvidia.com/parallelforall/mapd-massive-throughput-database-queries-llvm-gpus/
CUDADB http://www.contrib.andrew.cmu.edu/~tchitten/418/writeup.pdf
gpudb,kinetica https://www.linkedin.com/pulse/we-just-turned-your-oracle-12c-environment-overpriced-wes-showfety?trk=prof-post , https://www.datanami.com/2016/07/25/gpu-powered-analytics-improves-mail-delivery-usps/
GPU and R
http://blog.revolutionanalytics.com/2015/01/parallel-programming-with-gpus-and-r.html
https://www.r-bloggers.com/r-gpu-programming-for-all-with-gpur/
http://www.r-tutor.com/gpu-computing
https://devblogs.nvidia.com/parallelforall/accelerate-r-applications-cuda/
http://datascience.stackexchange.com/questions/9945/r-machine-learning-on-gpu
https://www.kaggle.com/forums/f/15/kaggle-forum/t/19178/gpu-computing-in-r
https://www.researchgate.net/post/How_do_a_choose_the_best_GPU_for_parellel_processing_in_R_and_PhotoScan
http://www.afterthedeadline.com
https://www.shoeboxed.com/
http://xpenser.com/docs/features/
http://www.alananna.co.uk/blog/2016/fenix-3-back-up-and-restore/
https://forums.garmin.com/showthread.php?80345-FENIX-2-SETTINGS-DATA-PAGES-is-there-a-way-to-backup-or-a-file-to-easy-edit
-- WINDOWS
{{{
C:\Windows\system32>fsutil fsinfo ntfsinfo c:
NTFS Volume Serial Number : 0xe278db3378db0567
Version : 3.1
Number Sectors : 0x000000001d039fff
Total Clusters : 0x0000000003a073ff
Free Clusters : 0x000000000059f08d
Total Reserved : 0x0000000000000870
Bytes Per Sector : 512
Bytes Per Cluster : 4096
Bytes Per FileRecord Segment : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 0x00000000141c0000
Mft Start Lcn : 0x00000000000c0000
Mft2 Start Lcn : 0x0000000000000002
Mft Zone Start : 0x000000000263a720
Mft Zone End : 0x0000000002643c40
RM Identifier: BA5F6457-522B-11E0-B977-D967961022A3
C:\Windows\system32>
}}}
http://arjudba.blogspot.com/2008/07/how-to-determine-os-block-size-for.html
! Linux
{{{
-- blocksize of the filesystem
[root@desktopserver ~]# blockdev --getbsz /dev/sda1
1024
-- blocksize of the device
[root@desktopserver ~]# blockdev --getbsz /dev/sda
4096
[root@desktopserver ~]# blockdev --getbsz /dev/sdb
4096
-- physical sector size
[root@desktopserver ~]# cat /sys/block/sda/queue/physical_block_size
512
-- logical sector size
[root@desktopserver ~]# cat /sys/block/sda/queue/logical_block_size
512
dumpe2fs /dev/sda1 | grep -i 'Block size'
dumpe2fs 1.41.12 (17-May-2010)
Block size: 1024
[root@desktopserver ~]# dumpe2fs /dev/sda | grep -i 'Block size'
dumpe2fs 1.41.12 (17-May-2010)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sda
}}}
more blockdev
{{{
[root@desktopserver ~]# blockdev --getbsz /dev/sda1 <-- Print blocksize in bytes
1024
[root@desktopserver ~]# blockdev --getsize /dev/sda1 <-- Print device size in sectors (BLKGETSIZE). Deprecated in favor of the --getsz option.
614400
[root@desktopserver ~]# blockdev --getsize64 /dev/sda1 <-- Print device size in bytes (BLKGETSIZE64)
314572800
[root@desktopserver ~]# blockdev --getsz /dev/sda1 <-- Get size in 512-byte sectors (BLKGETSIZE64 / 512).
614400
[root@desktopserver ~]# blockdev --getss /dev/sda1 <-- Print sectorsize in bytes - usually 512.
512
[root@desktopserver ~]#
[root@desktopserver ~]#
[root@desktopserver ~]# blockdev --getbsz /dev/sda
4096
[root@desktopserver ~]# blockdev --getsize /dev/sda
1953525168
[root@desktopserver ~]# blockdev --getsize64 /dev/sda
1000204886016
[root@desktopserver ~]# blockdev --getsz /dev/sda
1953525168
[root@desktopserver ~]# blockdev --getss /dev/sda
512
}}}
http://www.linuxnix.com/2011/07/find-block-size-linux.html
http://prefetch.net/blog/index.php/2009/09/12/why-partition-x-does-now-end-on-cylinder-boundary-warnings-dont-matter/
{{{
$ du -sm * | sort -rnk1
911 sysaux01.dbf
701 system01.dbf
301 undotbs01.dbf
52 temp02.dbf
51 redo03.log
51 redo02.log
51 redo01.log
10 control01.ctl
9 users01.dbf
oracle@karldevfedora:/u01/app/oracle/oradata/cdb1:cdb1
$ du -sm
2130 .
-- 2187.875
select
( select sum(bytes)/1024/1024 data_size from dba_data_files ) +
( select nvl(sum(bytes),0)/1024/1024 temp_size from dba_temp_files ) +
( select sum(bytes)/1024/1024 redo_size from sys.v_$log ) +
( select sum(BLOCK_SIZE*FILE_SIZE_BLKS)/1024/1024 controlfile_size from v$controlfile) "Size in MB"
from
dual
/
-- 2187.875
SELECT a.data_size + b.temp_size + c.redo_size + d.controlfile_size
"total_size in MB"
FROM (SELECT SUM (bytes) / 1024 / 1024 data_size FROM dba_data_files) a,
(SELECT NVL (SUM (bytes), 0) / 1024 / 1024 temp_size
FROM dba_temp_files) b,
(SELECT SUM (bytes) / 1024 / 1024 redo_size FROM sys.v_$log) c,
(SELECT SUM (BLOCK_SIZE * FILE_SIZE_BLKS) / 1024 / 1024
controlfile_size
FROM v$controlfile) d
/
-- Database Size Used space Free space
-- -------------------- -------------------- --------------------
-- 2169 MB 1640 MB 529 MB
col "Database Size" format a20
col "Free space" format a20
col "Used space" format a20
select round(sum(used.bytes) / 1024 / 1024 ) || ' MB' "Database Size"
, round(sum(used.bytes) / 1024 / 1024 ) -
round(free.p / 1024 / 1024 ) || ' MB' "Used space"
, round(free.p / 1024 / 1024 ) || ' MB' "Free space"
from (select bytes
from v$datafile
union all
select bytes
from v$tempfile
union all
select bytes
from v$log) used
, (select sum(bytes) as p
from dba_free_space) free
group by free.p
/
}}}
{{{
# Go to the bdump directory to run these shell commands

# Date and errors in alert.log
cat alert_+ASM.log | \
awk 'BEGIN{buf=""}
/[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
/ORA-/{print buf,$0}' > ORA-errors-$(date +%Y%m%d%H%M).txt

# Use the following script to easily find the trace files on the alert log. Just run it on the bdump directory
cat alert_prod1.log | \
awk 'BEGIN{buf=""}
/[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
/.trc/{print buf,$0}'

# Use the following script to easily find the ORA- errors and trace files on the alert log. Just run it on the bdump directory
cat alert_prod1.log | \
awk 'BEGIN{buf=""}
/[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
/.trc|ORA-/{print buf,$0}'

# Date of startups in the alert.log
cat alert_prod1.log | \
awk 'BEGIN{buf=""}
/[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
/Starting ORACLE/{print buf,$0}' > StartupTime-$(date +%Y%m%d%H%M).txt

# Date of startups in the RDA alert.log
cat RDA_LOG_alert_log.txt | \
awk 'BEGIN{buf=""}
/[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
/Starting ORACLE/{print buf,$0}' > StartupTime-$(date +%Y%m%d%H%M).txt

########################################################################
# create a file called getalert
# run it as ./getalert <node name>
export node=$1
cat alert_"$node".log | \
awk 'BEGIN{buf=""}
/[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
/.trc|ORA-/{print buf,$0}' > alert_"$node"_ORA-TRC_$(date +%Y%m%d%H%M).log
cat alert_"$node".log | \
awk 'BEGIN{buf=""}
/[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
/Starting ORACLE/{print buf,$0}' > alert_"$node"_startup_$(date +%Y%m%d%H%M).log
}}}
{{{
historical awr_storagesize_summary-tableau-cdb1-karldevfedora.csv
from MOS 1551288.1
------ DISK and CELL Failure Diskgroup Space Reserve Requirements ------
This procedure determines how much space you need to survive a DISK or CELL failure. It also shows the usable space
available when reserving space for disk or cell failure.
Please see MOS note 1551288.1 for more information.
. . .
Description of Derived Values:
One Cell Required Mirror Free MB : Required Mirror Free MB to permit successful rebalance after losing largest CELL regardless of redundancy type
Disk Required Mirror Free MB : Space needed to rebalance after loss of single or double disk failure (for normal or high redundancy)
Disk Usable File MB : Usable space available after reserving space for disk failure and accounting for mirroring
Cell Usable File MB : Usable space available after reserving space for SINGLE cell failure and accounting for mirroring
. . .
ASM Version: 11.2.0.4
. . .
----------------------------------------------------------------------------------------------------------------------------------------------------
| | | | | | | |Cell Req'd |Disk Req'd | | | | | |
| |DG |Num |Disk Size |DG Total |DG Used |DG Free |Mirror Free |Mirror Free |Disk Usable |Cell Usable | | |PCT |
|DG Name |Type |Disks|MB |MB |MB |MB |MB |MB |File MB |File MB |DFC |CFC |Util |
----------------------------------------------------------------------------------------------------------------------------------------------------
|DATA_SOCPP|NORMAL | 84| 2,260,992| 189,923,328| 156,156,856| 33,766,472| 29,845,094| 2,370,683| 15,697,895| 1,960,689|PASS|PASS| 82.2%|
|DBFS_DG |NORMAL | 70| 34,608| 2,422,560| 14,692| 2,407,868| 380,688| 184,664| 1,111,602| 1,013,590|PASS|PASS| .6%|
|RECO_SOCPP|NORMAL | 84| 565,360| 47,490,240| 37,766,648| 9,723,592| 7,462,752| 633,084| 4,545,254| 1,130,420|PASS|PASS| 79.5%|
----------------------------------------------------------------------------------------------------------------------------------------------------
. . .
Script completed.
-- 1e.77. Database Size on Disk (GV$DATABASE)
WITH
sizes AS (
SELECT /*+ MATERIALIZE NO_MERGE */ /* 1e.77 */
'Data' file_type,
SUM(bytes) bytes
FROM v$datafile
UNION ALL
SELECT 'Temp' file_type,
SUM(bytes) bytes
FROM v$tempfile
UNION ALL
SELECT 'Log' file_type,
SUM(bytes) * MAX(members) bytes
FROM v$log
UNION ALL
SELECT 'Control' file_type,
SUM(block_size * file_size_blks) bytes
FROM v$controlfile
),
dbsize AS (
SELECT /*+ MATERIALIZE NO_MERGE */ /* 1e.77 */
'Total' file_type,
SUM(bytes) bytes
FROM sizes
)
SELECT d.dbid,
d.name db_name,
s.file_type,
s.bytes,
ROUND(s.bytes/POWER(10,9),3) gb,
CASE
WHEN s.bytes > POWER(10,15) THEN ROUND(s.bytes/POWER(10,15),3)||' P'
WHEN s.bytes > POWER(10,12) THEN ROUND(s.bytes/POWER(10,12),3)||' T'
WHEN s.bytes > POWER(10,9) THEN ROUND(s.bytes/POWER(10,9),3)||' G'
WHEN s.bytes > POWER(10,6) THEN ROUND(s.bytes/POWER(10,6),3)||' M'
WHEN s.bytes > POWER(10,3) THEN ROUND(s.bytes/POWER(10,3),3)||' K'
WHEN s.bytes > 0 THEN s.bytes||' B' END display
FROM v$database d,
sizes s
UNION ALL
SELECT d.dbid,
d.name db_name,
s.file_type,
s.bytes,
ROUND(s.bytes/POWER(10,9),3) gb,
CASE
WHEN s.bytes > POWER(10,15) THEN ROUND(s.bytes/POWER(10,15),3)||' P'
WHEN s.bytes > POWER(10,12) THEN ROUND(s.bytes/POWER(10,12),3)||' T'
WHEN s.bytes > POWER(10,9) THEN ROUND(s.bytes/POWER(10,9),3)||' G'
WHEN s.bytes > POWER(10,6) THEN ROUND(s.bytes/POWER(10,6),3)||' M'
WHEN s.bytes > POWER(10,3) THEN ROUND(s.bytes/POWER(10,3),3)||' K'
WHEN s.bytes > 0 THEN s.bytes||' B' END display
FROM v$database d,
dbsize s;
-- 2b.201. Data Files Usage (DBA_DATA_FILES)
WITH
alloc AS (
SELECT /*+ MATERIALIZE NO_MERGE */ /* 2b.201 */
tablespace_name,
COUNT(*) datafiles,
ROUND(SUM(bytes)/POWER(10,9)) gb
FROM dba_data_files
GROUP BY
tablespace_name
),
free AS (
SELECT /*+ MATERIALIZE NO_MERGE */ /* 2b.201 */
tablespace_name,
ROUND(SUM(bytes)/POWER(10,9)) gb
FROM dba_free_space
GROUP BY
tablespace_name
),
tablespaces AS (
SELECT /*+ MATERIALIZE NO_MERGE */ /* 2b.201 */
a.tablespace_name,
a.datafiles,
a.gb alloc_gb,
(a.gb - f.gb) used_gb,
f.gb free_gb
FROM alloc a, free f
WHERE a.tablespace_name = f.tablespace_name
ORDER BY
a.tablespace_name
),
total AS (
SELECT /*+ MATERIALIZE NO_MERGE */ /* 2b.201 */
SUM(alloc_gb) alloc_gb,
SUM(used_gb) used_gb,
SUM(free_gb) free_gb
FROM tablespaces
)
SELECT v.tablespace_name,
v.datafiles,
v.alloc_gb,
v.used_gb,
CASE WHEN v.alloc_gb > 0 THEN
LPAD(TRIM(TO_CHAR(ROUND(100 * v.used_gb / v.alloc_gb, 1), '990.0')), 8)
END pct_used,
v.free_gb,
CASE WHEN v.alloc_gb > 0 THEN
LPAD(TRIM(TO_CHAR(ROUND(100 * v.free_gb / v.alloc_gb, 1), '990.0')), 8)
END pct_free
FROM (
SELECT tablespace_name,
datafiles,
alloc_gb,
used_gb,
free_gb
FROM tablespaces
UNION ALL
SELECT 'Total' tablespace_name,
TO_NUMBER(NULL) datafiles,
alloc_gb,
used_gb,
free_gb
FROM total
) v;
-- 2b.207. Largest 200 Objects (DBA_SEGMENTS)
WITH schema_object AS (
SELECT /*+ MATERIALIZE NO_MERGE */ /* 2b.207 */
segment_type,
owner,
segment_name,
tablespace_name,
COUNT(*) segments,
SUM(extents) extents,
SUM(blocks) blocks,
SUM(bytes) bytes
FROM dba_segments
WHERE 'Y' = 'Y'
GROUP BY
segment_type,
owner,
segment_name,
tablespace_name
), totals AS (
SELECT /*+ MATERIALIZE NO_MERGE */ /* 2b.207 */
SUM(segments) segments,
SUM(extents) extents,
SUM(blocks) blocks,
SUM(bytes) bytes
FROM schema_object
), top_200_pre AS (
SELECT /*+ MATERIALIZE NO_MERGE */ /* 2b.207 */
ROWNUM rank, v1.*
FROM (
SELECT so.segment_type,
so.owner,
so.segment_name,
so.tablespace_name,
so.segments,
so.extents,
so.blocks,
so.bytes,
ROUND((so.segments / t.segments) * 100, 3) segments_perc,
ROUND((so.extents / t.extents) * 100, 3) extents_perc,
ROUND((so.blocks / t.blocks) * 100, 3) blocks_perc,
ROUND((so.bytes / t.bytes) * 100, 3) bytes_perc
FROM schema_object so,
totals t
ORDER BY
bytes_perc DESC NULLS LAST
) v1
WHERE ROWNUM < 201
), top_200 AS (
SELECT p.*,
(SELECT object_id
FROM dba_objects o
WHERE o.object_type = p.segment_type
AND o.owner = p.owner
AND o.object_name = p.segment_name
AND o.object_type NOT LIKE '%PARTITION%') object_id,
(SELECT data_object_id
FROM dba_objects o
WHERE o.object_type = p.segment_type
AND o.owner = p.owner
AND o.object_name = p.segment_name
AND o.object_type NOT LIKE '%PARTITION%') data_object_id,
(SELECT SUM(p2.bytes_perc) FROM top_200_pre p2 WHERE p2.rank <= p.rank) bytes_perc_cum
FROM top_200_pre p
), top_200_totals AS (
SELECT /*+ MATERIALIZE NO_MERGE */ /* 2b.207 */
SUM(segments) segments,
SUM(extents) extents,
SUM(blocks) blocks,
SUM(bytes) bytes,
SUM(segments_perc) segments_perc,
SUM(extents_perc) extents_perc,
SUM(blocks_perc) blocks_perc,
SUM(bytes_perc) bytes_perc
FROM top_200
), top_100_totals AS (
SELECT /*+ MATERIALIZE NO_MERGE */ /* 2b.207 */
SUM(segments) segments,
SUM(extents) extents,
SUM(blocks) blocks,
SUM(bytes) bytes,
SUM(segments_perc) segments_perc,
SUM(extents_perc) extents_perc,
SUM(blocks_perc) blocks_perc,
SUM(bytes_perc) bytes_perc
FROM top_200
WHERE rank < 101
), top_20_totals AS (
SELECT /*+ MATERIALIZE NO_MERGE */ /* 2b.207 */
SUM(segments) segments,
SUM(extents) extents,
SUM(blocks) blocks,
SUM(bytes) bytes,
SUM(segments_perc) segments_perc,
SUM(extents_perc) extents_perc,
SUM(blocks_perc) blocks_perc,
SUM(bytes_perc) bytes_perc
FROM top_200
WHERE rank < 21
)
SELECT v.rank,
v.segment_type,
v.owner,
v.segment_name,
v.object_id,
v.data_object_id,
v.tablespace_name,
CASE
WHEN v.segment_type LIKE 'INDEX%' THEN
(SELECT i.table_name
FROM dba_indexes i
WHERE i.owner = v.owner AND i.index_name = v.segment_name)
WHEN v.segment_type LIKE 'LOB%' THEN
(SELECT l.table_name
FROM dba_lobs l
WHERE l.owner = v.owner AND l.segment_name = v.segment_name)
END table_name,
v.segments,
v.extents,
v.blocks,
v.bytes,
ROUND(v.bytes / POWER(10,9), 3) gb,
LPAD(TO_CHAR(v.segments_perc, '990.000'), 7) segments_perc,
LPAD(TO_CHAR(v.extents_perc, '990.000'), 7) extents_perc,
LPAD(TO_CHAR(v.blocks_perc, '990.000'), 7) blocks_perc,
LPAD(TO_CHAR(v.bytes_perc, '990.000'), 7) bytes_perc,
LPAD(TO_CHAR(v.bytes_perc_cum, '990.000'), 7) perc_cum
FROM (
SELECT d.rank,
d.segment_type,
d.owner,
d.segment_name,
d.object_id,
d.data_object_id,
d.tablespace_name,
d.segments,
d.extents,
d.blocks,
d.bytes,
d.segments_perc,
d.extents_perc,
d.blocks_perc,
d.bytes_perc,
d.bytes_perc_cum
FROM top_200 d
UNION ALL
SELECT TO_NUMBER(NULL) rank,
NULL segment_type,
NULL owner,
NULL segment_name,
TO_NUMBER(NULL),
TO_NUMBER(NULL),
'TOP 20' tablespace_name,
st.segments,
st.extents,
st.blocks,
st.bytes,
st.segments_perc,
st.extents_perc,
st.blocks_perc,
st.bytes_perc,
TO_NUMBER(NULL) bytes_perc_cum
FROM top_20_totals st
UNION ALL
SELECT TO_NUMBER(NULL) rank,
NULL segment_type,
NULL owner,
NULL segment_name,
TO_NUMBER(NULL),
TO_NUMBER(NULL),
'TOP 100' tablespace_name,
st.segments,
st.extents,
st.blocks,
st.bytes,
st.segments_perc,
st.extents_perc,
st.blocks_perc,
st.bytes_perc,
TO_NUMBER(NULL) bytes_perc_cum
FROM top_100_totals st
UNION ALL
SELECT TO_NUMBER(NULL) rank,
NULL segment_type,
NULL owner,
NULL segment_name,
TO_NUMBER(NULL),
TO_NUMBER(NULL),
'TOP 200' tablespace_name,
st.segments,
st.extents,
st.blocks,
st.bytes,
st.segments_perc,
st.extents_perc,
st.blocks_perc,
st.bytes_perc,
TO_NUMBER(NULL) bytes_perc_cum
FROM top_200_totals st
UNION ALL
SELECT TO_NUMBER(NULL) rank,
NULL segment_type,
NULL owner,
NULL segment_name,
TO_NUMBER(NULL),
TO_NUMBER(NULL),
'TOTAL' tablespace_name,
t.segments,
t.extents,
t.blocks,
t.bytes,
100 segments_perc,
100 extents_perc,
100 blocks_perc,
100 bytes_perc,
TO_NUMBER(NULL) bytes_perc_cum
FROM totals t) v;
}}}
..
''command here''
{{{
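# list all ORACLE_HOMEs registered in the central inventory (the sed strips the 14-character "inventory_loc=" prefix)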
cat `cat /etc/oraInst.loc | grep -i inventory | sed 's/..............\(.*\)/\1/'`/ContentsXML/inventory.xml | grep HOME
}}}
$ cat `cat /etc/oraInst.loc | grep -i inventory | sed 's/..............\(.*\)/\1/'`/ContentsXML/inventory.xml | grep HOME
<HOME_LIST>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="1"/>
<HOME NAME="oms11g1" LOC="/u01/app/oracle/product/middleware/oms11g" TYPE="O" IDX="2"/>
<HOME NAME="agent11g1" LOC="/u01/app/oracle/product/middleware/agent11g" TYPE="O" IDX="3"/>
<HOME NAME="common11g1" LOC="/u01/app/oracle/product/middleware/oracle_common" TYPE="O" IDX="4"/>
<HOME NAME="webtier11g1" LOC="/u01/app/oracle/product/middleware/Oracle_WT" TYPE="O" IDX="5"/>
</HOME_LIST>
$ vi gethome.sh
oracle@emgc11g:/home/oracle:emrep
$ chmod 755 gethome.sh
oracle@emgc11g:/home/oracle:emrep
$ ''sh gethome.sh''
<HOME_LIST>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="1"/>
<HOME NAME="oms11g1" LOC="/u01/app/oracle/product/middleware/oms11g" TYPE="O" IDX="2"/>
<HOME NAME="agent11g1" LOC="/u01/app/oracle/product/middleware/agent11g" TYPE="O" IDX="3"/>
<HOME NAME="common11g1" LOC="/u01/app/oracle/product/middleware/oracle_common" TYPE="O" IDX="4"/>
<HOME NAME="webtier11g1" LOC="/u01/app/oracle/product/middleware/Oracle_WT" TYPE="O" IDX="5"/>
</HOME_LIST>
''find home''
{{{
#!/bin/bash
# A little helper script for finding ORACLE_HOMEs for all running instances in a Linux server
# by Tanel Poder (http://blog.tanelpoder.com)
printf "%6s %-20s %-80s\n" "PID" "NAME" "ORACLE_HOME"
pgrep -lf _pmon_ |
while read pid pname y ; do
printf "%6s %-20s %-80s\n" $pid $pname `ls -l /proc/$pid/exe | awk -F'>' '{ print $2 }' | sed 's/bin\/oracle$//' | sort | uniq`
done
}}}
''find home from sqlplus''
{{{
var OH varchar2(200);
EXEC dbms_system.get_env('ORACLE_HOME', :OH) ;
PRINT OH
}}}
{{{
# logon storms by hour
fgrep "30-OCT-2010" listener.log | fgrep "establish" | \
awk '{ print $1 " " $2 }' | awk -F: '{ print $1 }' | \
sort | uniq -c
# logon storms by minute
fgrep "30-OCT-2010 22:" listener.log | fgrep "establish" | \
awk '{ print $1 " " $2 }' | awk -F: '{ print $1 ":" $2 }' | \
sort | uniq -c
}}}
* CPU-Z
* System Information for Windows - Gabriel Topala
* WinDirStat
{{{
-- SHOW FREE
SET LINESIZE 300
SET PAGESIZE 9999
SET VERIFY OFF
COLUMN status FORMAT a9 HEADING 'Status'
COLUMN name FORMAT a25 HEADING 'Tablespace Name'
COLUMN type FORMAT a12 HEADING 'TS Type'
COLUMN extent_mgt FORMAT a10 HEADING 'Ext. Mgt.'
COLUMN segment_mgt FORMAT a9 HEADING 'Seg. Mgt.'
COLUMN pct_free FORMAT 999.99 HEADING "% Free"
COLUMN gbytes FORMAT 99,999,999 HEADING "Total GBytes"
COLUMN used FORMAT 99,999,999 HEADING "Used Gbytes"
COLUMN free FORMAT 99,999,999 HEADING "Free Gbytes"
BREAK ON REPORT
COMPUTE SUM OF gbytes ON REPORT
COMPUTE SUM OF free ON REPORT
COMPUTE SUM OF used ON REPORT
SELECT d.status status, d.bigfile, d.tablespace_name name, d.contents type, d.extent_management extent_mgt, d.segment_space_management segment_mgt, df.tssize gbytes, (df.tssize - fs.free) used, fs.free free
FROM
dba_tablespaces d,
(SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024) tssize FROM dba_data_files GROUP BY tablespace_name) df,
(SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024) free FROM dba_free_space GROUP BY tablespace_name) fs
WHERE
d.tablespace_name = df.tablespace_name(+)
AND d.tablespace_name = fs.tablespace_name(+)
AND NOT (d.extent_management like 'LOCAL' AND d.contents like 'TEMPORARY')
UNION ALL
SELECT d.status status, d.bigfile, d.tablespace_name name, d.contents type, d.extent_management extent_mgt, d.segment_space_management segment_mgt, df.tssize gbytes, (df.tssize - fs.free) used, fs.free free
FROM
dba_tablespaces d,
(select tablespace_name, sum(bytes)/1024/1024/1024 tssize from dba_temp_files group by tablespace_name) df,
(select tablespace_name, sum(bytes_cached)/1024/1024/1024 free from v$temp_extent_pool group by tablespace_name) fs
WHERE
d.tablespace_name = df.tablespace_name(+)
AND d.tablespace_name = fs.tablespace_name(+)
AND d.extent_management like 'LOCAL' AND d.contents like 'TEMPORARY'
ORDER BY 9;
CLEAR COLUMNS BREAKS COMPUTES
-- SHOW FREE SPACE IN DATAFILES
SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY OFF
COLUMN tablespace FORMAT a18 HEADING 'Tablespace Name'
COLUMN filename FORMAT a50 HEADING 'Filename'
COLUMN filesize FORMAT 99,999,999,999 HEADING 'File Size'
COLUMN used FORMAT 99,999,999,999 HEADING 'Used (in MB)'
COLUMN pct_used FORMAT 999 HEADING 'Pct. Used'
BREAK ON report
COMPUTE SUM OF filesize ON report
COMPUTE SUM OF used ON report
COMPUTE AVG OF pct_used ON report
SELECT /*+ ordered */
d.tablespace_name tablespace
, d.file_name filename
, d.file_id file_id
, d.bytes/1024/1024 filesize
, NVL((d.bytes - s.bytes)/1024/1024, d.bytes/1024/1024) used
, TRUNC(((NVL((d.bytes - s.bytes) , d.bytes)) / d.bytes) * 100) pct_used
FROM
sys.dba_data_files d
, v$datafile v
, ( select file_id, SUM(bytes) bytes
from sys.dba_free_space
GROUP BY file_id) s
WHERE
(s.file_id (+)= d.file_id)
AND (d.file_name = v.name)
UNION
SELECT
d.tablespace_name tablespace
, d.file_name filename
, d.file_id file_id
, d.bytes/1024/1024 filesize
, NVL(t.bytes_cached/1024/1024, 0) used
, TRUNC((t.bytes_cached / d.bytes) * 100) pct_used
FROM
sys.dba_temp_files d
, v$temp_extent_pool t
, v$tempfile v
WHERE
(t.file_id (+)= d.file_id)
AND (d.file_id = v.file#)
ORDER BY 1;
-- SHOW AUTOEXTEND TABLESPACES (9i,10G SqlPlus)
set lines 300
col file_name format a65
select
c.file#, a.tablespace_name as "TS", a.file_name, a.bytes/1024/1024 as "A.SIZE", a.increment_by * c.block_size/1024/1024 as "A.INCREMENT_BY", a.maxbytes/1024/1024 as "A.MAX"
from
dba_data_files a, dba_tablespaces b, v$datafile c
where
a.tablespace_name = b.tablespace_name
and a.file_name = c.name
and a.tablespace_name in (select tablespace_name from dba_tablespaces)
and a.autoextensible = 'YES'
union all
select
c.file#, a.tablespace_name as "TS", a.file_name, a.bytes/1024/1024 as "A.SIZE", a.increment_by * c.block_size/1024/1024 as "A.INCREMENT_BY", a.maxbytes/1024/1024 as "A.MAX"
from
dba_temp_files a, dba_tablespaces b, v$tempfile c
where
a.tablespace_name = b.tablespace_name
and a.file_name = c.name
and a.tablespace_name in (select tablespace_name from dba_tablespaces)
and a.autoextensible = 'YES';
}}}
{{{
WITH d AS
(SELECT TO_CHAR(startup_time,'MM/DD/YYYY HH24:MI:SS') startup_time,startup_time startup_time2,
TO_CHAR(lag(startup_time) over ( partition BY dbid, instance_number order by startup_time ),'MM/DD/YYYY HH24:MI:SS') last_startup
FROM dba_hist_database_instance
order by startup_time2 desc
)
SELECT startup_time,
last_startup,
ROUND(
CASE
WHEN last_startup IS NULL
THEN 0
ELSE (TO_DATE(startup_time,'MM/DD/YYYY HH24:MI:SS') - TO_DATE(last_startup,'MM/DD/YYYY HH24:MI:SS'))
END,0) days
FROM d;
}}}
https://community.oracle.com/thread/2391416?tstart=0
<<showtoc>>
! watch this first
http://www.git-tower.com/learn/git/videos
! read this first
http://nvie.com/posts/a-successful-git-branching-model/
! other videos
Short and Sweet: Advanced Git Command-Line Mastery https://www.udemy.com/course/draft/742846/
Short and Sweet: Next-Level Git and GitHub - Get Productive https://www.udemy.com/course/draft/531846/
https://www.udemy.com/course/short-and-sweet-get-started-with-git-and-github-right-now/
https://www.udemy.com/course/the-complete-github-course-for-developers/
! test accounts
emberdev1
emberdev2
emberdevgroup
! official documentation, videos, and help
https://guides.github.com/activities/hello-world/
https://guides.github.com/
https://help.github.com/articles/setting-up-teams/
https://www.youtube.com/githubguides
https://help.github.com/
https://guides.github.com/features/mastering-markdown/
! version control format
http://git-scm.com/book/en/v2/Git-Basics-Tagging
Semantic Versioning 2.0.0 http://semver.org/
''Awesome github walkthrough - video series'' http://308tube.com/youtube/github/
https://github.com/karlarao
http://git-scm.com/download/win
http://www.javaworld.com/javaworld/jw-08-2012/120830-osjp-github.html?page=1
! HOWTO - general workflow
[img[ https://lh5.googleusercontent.com/-9Sx7XCA7jQ8/UyKDQRA3ngI/AAAAAAAACJA/fzmAgpCi6xU/w2048-h2048-no/github_basic_workflow.png ]]
! Basic commands and getting started
<<<
''Git Data Flow''
{{{
1) Current Working Directory <-- git init <project>
2) Index (cache) <-- git add .
3) Local Repository <-- git commit -m "<comment>"
4) Remote Repository
}}}
<<<
<<<
''Client side setup''
{{{
http://git-scm.com/downloads <-- download here
git config --global user.name "karlarao"
git config --global user.email "karlarao@gmail.com"
}}}
<<<
<<<
''Common commands''
{{{
git init awrscripts <-- or you can just cd on "awrscripts" folder and execute "git init"
git status
git add . <-- add all the files under the master folder to the staging area
git add <filename> <-- add just a file
git rm --cached <filename> <-- remove a file from the staging area
git commit -m "initial commit" <-- to commit changes (w/ comment), and save a snapshot of the local repository
* note that when you modify a file, you have to do "git add ." again first.. else it will say no changes added to commit
git log <-- show summary of commits
vi README.md <-- markdown format readme file, header should start with #
git diff
git add .
git diff --cached <-- get the differences in the staging area, because you've already executed the "add"..
## shortcuts
git commit -a -m "short commit" <-- combination of add and commit
git log --oneline <-- shorter summary
git status -s <-- shorter show changes
}}}
''Exclude file''
https://coderwall.com/p/n1d-na/excluding-files-from-git-locally
<<<
! Integration with Github.com
<<<
''Github.com setup''
{{{
go to github.com and create a new repository
on your PC go to C:\Users\Karl
open git bash and type in ssh-keygen below
ssh-keygen.exe -t rsa -C "karlarao@gmail.com" <-- this will create RSA on C:\Users\Karl directory
copy the contents of id_rsa.pub under C:\Users\karl\.ssh directory
go to github.com -> Account Settings -> SSH Keys -> Add SSH Key
ssh -T git@github.com <-- to test the authentication
}}}
''Github.com integrate and push''
{{{
go to repositories folder -> on SSH tab -> copy the key
git remote add origin <repo ssh key from website>
git remote add origin git@github.com:karlarao/awrscripts.git
git push origin master
}}}
''Github.com integrate with GUI''
{{{
download the GUI here http://windows.github.com/
login and configure, at the end just hit skip
go to tools -> options -> change the default storage directory to the local git directory C:\Dropbox\CodeNinja\GitHub
click Scan For Repositories -> click Add -> click Update
click Publish -> click Sync
}}}
''for existing repos, you can do a clone''
{{{
git clone git@github.com:<name>/<repo>
}}}
<<<
! github pages
sync the repo to github pages
{{{
cd ~ubuntu/telegram/
git config --global user.name "karlarao"
git config --global user.email "karlarao@gmail.com"
git add .
git status # to see what changes are going to be commited
git commit -m "."
git remote add origin git@github.com:karlarao/telegram.git
git push origin master
# git branch gh-pages # this is one time
git checkout gh-pages # go to the gh-pages branch
git rebase master # bring gh-pages up to date with master
git push origin gh-pages # commit the changes
git checkout master # return to the master branch
access the page at http://karlarao.github.io/<repo>/
}}}
https://github.com/blog/2289-publishing-with-github-pages-now-as-easy-as-1-2-3
!! github pages for blogging
https://howchoo.com/g/yzg0yjdmntl/how-to-blog-in-markdown-using-github-and-jekyll-now
https://blog.iarsov.com/general/blog-migrated-to-github-pages/
! render HTML file in github without git pages
http://stackoverflow.com/questions/8446218/how-to-see-an-html-page-on-github-as-a-normal-rendered-html-page-to-see-preview
! track a zipfile based script repo, useful for blogs or sites
{{{
add this on your crontab
# refresh git with the <script> scripts
0 4 * * * /home/karl/bin/git_<script>.sh
$ cat /home/karl/bin/git_<script>.sh
cd ~/github/<script directory>
rm <script>.zip
wget http://<site>.com/files/<script>.zip
unzip -o <script>.zip -d ~/github/<script directory>
git config --global user.name "karlarao"
git config --global user.email "karlarao@gmail.com"
git add .
git commit -m "."
#git remote add origin git@github.com:karlarao/<script>.git
git push origin master
then make sure to favorite the repo to get emails!
}}}
! Branch, Merge, Clone, Fork
{{{
Branching <-- allows you to create a separate working copy of your code
Merging <-- merge branches together
Cloning <-- other developers can get a copy of your code from a remote repo
Forking <-- make use of someone's code as starting point of a new project
-- 1st developer created a branch r2_index
git branch <-- show branches
git branch r2_index <-- create a branch name "r2_index"
git checkout r2_index <-- to switch to the "r2_index" branch
git checkout <the branch you want to go to> * make sure to close all files before switching to another branch
-- 2nd developer on another machine created r2_misc
git clone <ssh link> <-- to clone a project
git branch r2_misc
git checkout r2_misc
git push origin <branch name> <-- to update the remote repo
-- bug fix on master
git checkout master
git push origin master
-- merge to combine the changes from 1st developer to the master project
* conflict may happen due to changes at the same spot for both branches
git branch r2_index
git merge master
* conflict looks like the following:
<<<<<<< HEAD
1)
=======
TOC:
1) one
2) two
3) three
>>>>>>> master
git push origin r2_index
-- pull, synchronizes the local repo with the remote repo
* remember, PUSH to send up GitHub, PULL to sync with GitHub
git pull origin master
}}}
! Delete files on git permanently
http://stackoverflow.com/questions/1983346/deleting-files-using-git-github <-- good stuff
http://dalibornasevic.com/posts/2-permanently-remove-files-and-folders-from-a-git-repository
https://www.kernel.org/pub/software/scm/git/docs/git-filter-branch.html
{{{
cd /Users/karl/Dropbox/CodeNinja/GitHub/tmp
git init
git status
git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch *' --prune-empty --tag-name-filter cat -- --all
git commit -m "."
git push origin master --force
}}}
! delete history
https://rtyley.github.io/bfg-repo-cleaner/
https://help.github.com/articles/remove-sensitive-data/
stackoverflow.com/questions/37219/how-do-you-remove-a-specific-revision-in-the-git-history
https://hellocoding.wordpress.com/2015/01/19/delete-all-commit-history-github/
http://samwize.com/2014/01/15/how-to-remove-a-commit-that-is-already-pushed-to-github/
! Deleting a repository
https://help.github.com/articles/deleting-a-repository
! rebase
rebase https://www.youtube.com/watch?v=SxzjZtJwOgo
! forking
forking https://www.youtube.com/watch?v=5oJHRbqEofs
<<<
Some notes on forking:
* Let's say you get assigned as a collaborator on a private repo called REPO1
* If REPO1 gets forked as a private REPO2 by another guy, you'll instantly also be part of that REPO2
* When the original creator deletes you as a collaborator on REPO1 you will no longer see anything from REPO1, but you will still have access to REPO2
<<<
! pull request
pull request (contributing to the fork) https://www.youtube.com/watch?v=d5wpJ5VimSU
! team collaboration
team https://www.youtube.com/watch?v=61WbzS9XMwk
! git on dropbox conflicts
http://edinburghhacklab.com/2012/11/when-git-on-dropbox-conflicts-no-problem/
http://stackoverflow.com/questions/12773488/git-fatal-reference-has-invalid-format-refs-heads-master
! other references
gitflow http://nvie.com/posts/a-successful-git-branching-model/
''master zip''
http://stackoverflow.com/questions/8808164/set-the-name-of-a-zip-downloadable-from-github-or-other-ways-to-enroll-google-tr
http://stackoverflow.com/questions/7106012/download-a-single-folder-or-directory-from-a-github-repo
http://alblue.bandlem.com/2011/09/git-tip-of-week-git-archive.html
http://gitready.com/intermediate/2009/01/29/exporting-your-repository.html
http://manpages.ubuntu.com/manpages/intrepid/man1/git-archive.1.html
http://stackoverflow.com/questions/8377081/github-api-download-zip-or-tarball-link
''uploading binary files (zip)''
https://help.github.com/articles/distributing-large-binaries/
https://help.github.com/articles/about-releases/
https://help.github.com/articles/creating-releases/
https://gigaom.com/2013/07/09/oops-github-did-it-again-relaunches-binary-uploads-after-scuttling-them/
https://github.com/blog/1547-release-your-software
''live demos''
http://solutionoptimist.com/2013/12/28/awesome-github-tricks/
''Git hook to send email notification on repo changes''
http://stackoverflow.com/questions/552360/git-hook-to-send-email-notification-on-repo-changes
''git rebase'' http://git-scm.com/docs/git-rebase
''git gist'' https://gist.github.com/
''gitignore.io'' https://www.gitignore.io/ <- Create useful .gitignore files for your project, there's also a webstorm plugin for this
! git merge upstream
https://www.google.com/search?q=git+merge+upstream+tower+2&oq=git+merge+upstream+tower+2&aqs=chrome..69i57j69i64.9674j0j1&sourceid=chrome&ie=UTF-8
! git - contributing to a repo or project
!! fork and pull model (ideal for large projects)
here you create a branch
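a minimal sketch of the fork and pull flow (repo names/URLs are placeholders):
{{{
# fork on github.com first, then:
git clone git@github.com:yourname/project.git
cd project
git remote add upstream git@github.com:original-owner/project.git
git checkout -b my-feature                        # work on a branch, not master
# ...edit, git add, git commit...
git push origin my-feature                        # then open a pull request on GitHub
git fetch upstream && git merge upstream/master   # keep the fork in sync
}}}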
!! direct clone model / collaboration model (ideal for small projects)
here you work on master branch
cb
pb
pbg
pull
! git revert / fix a damaging commit
create branch
head
revert
commit
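expanding those four terse steps into runnable commands (the SHA is a placeholder):
{{{
git checkout -b hotfix-revert    # create branch
git log --oneline -5             # find the damaging commit at/near HEAD
git revert <bad-commit-sha>      # revert writes a new inverse commit
git push origin hotfix-revert    # commit/push, then merge back via PR
}}}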
! Equivalent of “svn checkout” for git
http://stackoverflow.com/questions/18900774/equivalent-of-svn-checkout-for-git
http://stackoverflow.com/questions/15595778/github-what-does-checkout-do
! HOWTO rename a repo
<<<
what happens when I rename a git repo? do you have any advice on properly renaming a git repo?
<<<
<<<
I have pretty good news for you -- since 2013, GitHub automatically redirects people from the old repo to the new repo. (There are some caveats, so read on.) Please read the Stack Overflow answer that starts with "since May 2013" on this page:
http://stackoverflow.com/questions/5751585/how-do-i-rename-a-repository-on-github
Here is the GitHub announcement of the redirect feature:
https://github.com/blog/1508-repository-redirects-are-here
https://help.github.com/articles/renaming-a-repository/
Note, it is still recommended that you update your local repository to specifically point at the new repository. You can do that with a simple:
git remote set-url origin https://github.com/YourGitHubUserName/NewRepositoryName.git
(where you replace YourGitHubUserName and NewRepositoryName with the appropriate information)
Ideally, your collaborators will also update their local repositories, but it's not an immediate requirement unless you are going to re-use the old repository name (which will break redirects and I don't recommend doing that).
<<<
! github acquisition
https://blog.github.com/2018-06-04-github-microsoft/
https://news.microsoft.com/2018/06/04/microsoft-to-acquire-github-for-7-5-billion/
https://blogs.microsoft.com/blog/2018/06/04/microsoft-github-empowering-developers/
https://thenextweb.com/dd/2018/06/04/microsoft-buying-github-doesnt-scare-me/
! awesome github clients
!! gitsome
https://github.com/donnemartin/gitsome
<<<
* needs python 3.5.0
example query commands:
{{{
gh feed karlarao -p
gh search-repos "created:>=2017-01-01 user:karlarao"
}}}
installation:
{{{
brew install pyenv
xcode-select --install
pyenv install 3.5.0
pyenv local 3.5.0
PATH="~/.pyenv/versions/3.5.0/bin:${PATH}"
pip3 install gitsome
# then configure
gh configure
}}}
add on .bash_profile
{{{
cat .bash_profile
PATH="~/.pyenv/versions/3.5.0/bin:${PATH}"
}}}
<<<
references
https://apple.stackexchange.com/questions/237430/how-to-install-specific-version-of-python-on-os-x
https://www.chrisjmendez.com/2017/08/03/installing-multiple-versions-of-python-on-your-mac-using-homebrew/
http://mattseymour.net/blog/2016/03/brew-installing-specific-python-version/
https://github.com/pyenv/pyenv
!! gist
https://github.com/defunkt/gist
<<<
example query commands:
{{{
gist -l
}}}
installation:
{{{
brew install gist
gist --login
}}}
walking the json tree:
example file https://api.github.com/gists/7460546580cc3969547029aca27c5fe6
https://stackoverflow.com/questions/28983131/is-there-any-way-to-retrieve-the-name-for-a-gist-that-github-displays
https://dev.to/m1guelpf/3-ways-to-get-data-from-github-gists-9bg
https://sendgrid.com/blog/gist-please/
https://stackoverflow.com/questions/49382979/how-to-loop-a-json-keys-result-from-bash-script
https://github.com/ingydotnet/json-bash
https://starkandwayne.com/blog/bash-for-loop-over-json-array-using-jq/
https://github.com/stedolan/jq
https://stackoverflow.com/questions/48384217/get-secret-gists-using-github-graphql
{{{
sample='[{"name":"foo"},{"name":"bar"}]'
for row in $(echo "${sample}" | jq -r '.[] | @base64'); do
_jq() {
echo ${row} | base64 --decode | jq -r ${1}
}
echo $(_jq '.name')
done
foo
bar
}}}
https://stackoverflow.com/questions/33655700/github-api-fetch-issues-with-exceeds-rate-limit-prematurely
<<<
! github flagged account
https://webapps.stackexchange.com/questions/105956/my-github-account-has-been-suddenly-flagged-and-hidden-from-public-view-how
https://github.community/t5/How-to-use-Git-and-GitHub/My-account-is-flagged/td-p/221
https://stackoverflow.com/questions/41344476/github-account-disabled-for-posting-gists
http://a-habakiri.hateblo.jp/entry/20161208accountflagged
Oracle Global Data Services (GDS) MAA Best Practices http://www.oracle.com/technetwork/database/availability/maa-globaldataservices-3413211.pdf
Intelligent Workload Management across Database Replicas https://zenodo.org/record/45065/files/SummerStudentReport-RitikaNevatia.pdf
-- hierarchy
nls_database
nls_instance
nls_session
environment
-- nls_length_semantics
NLS_LENGTH_SEMANTICS enables you to create CHAR and VARCHAR2 columns using either byte or character length semantics. Existing columns are not affected.
NCHAR, NVARCHAR2, CLOB, and NCLOB columns are always character-based. You may be required to use byte semantics in order to maintain compatibility with existing applications.
NLS_LENGTH_SEMANTICS does not apply to tables in SYS and SYSTEM. The data dictionary always uses byte semantics.
http://oracle.ittoolbox.com/groups/technical-functional/oracle-db-installs-l/need-to-change-nls_length_semantics-from-byte-to-char-on-production-systems-1168275
http://www.oracle-base.com/articles/9i/CharacterSemanticsAndGlobalization9i.php
http://decipherinfosys.wordpress.com/2007/02/19/nls_length_semantics/
The National Character Set ( NLS_NCHAR_CHARACTERSET ) in Oracle 9i, 10g and 11g (Doc ID 276914.1)
Unicode Character Sets In The Oracle Database (Doc ID 260893.1)
AL32UTF8 / UTF8 (Unicode) Database Character Set Implications (Doc ID 788156.1)
Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode) (Doc ID 260192.1)
Complete Checklist for Manual Upgrades to 11gR2 (Doc ID 837570.1)
Complete Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9iR2 (9.2.0) (Doc ID 159657.1)
Problems connecting to AL32UTF8 databases from older versions (8i and lower) (Doc ID 237593.1)
NLS considerations in Import/Export - Frequently Asked Questions (Doc ID 227332.1)
-- TIME
Dates & Calendars - Frequently Asked Questions
Doc ID: 227334.1
Time related columns can get ahead of SYSDATE
Doc ID: Note:268967.1
Impact of changes to daylight saving time (DST) rules on the Oracle database
Doc ID: Note:357056.1
What are the effects of changing the system clock on an Oracle Server instance?
Doc ID: Note:77370.1
Y2K FAQ - Server Products
Doc ID: Note:69388.1
-- DST
http://www.pythian.com/news/18111/have-your-scheduler-jobs-changed-run-times-since-dst/
-- UTC
Time Zones in MySQL http://www.youtube.com/watch?v=RDgGzaZIpbk
-- GNOME3 FALLBACK MODE
http://forums.fedoraforum.org/showthread.php?t=263491
https://www.virtualbox.org/wiki/Downloads
* install kernel-headers, kernel-devel
* install vbox guest additions
* install vbox extension pack
* enable 3d on vbox
* reboot
http://go-database-sql.org/references/
http://goprouser.freeforums.org/how-do-you-carry-your-gopro-t362-20.html
users guide http://gopro.com/wp-content/uploads/2011/03/HD-HERO-UM-ENG-110110.pdf
goalsontrack http://www.youtube.com/watch?v=Rmb0OxMw95I, http://www.goalsontrack.com/
http://lifetick.com/
http://lifehacker.com/5873909/five-best-goal-tracking-services
https://www.mindbloom.com/lifegame
* lifestyle
* career
* creativity
* spirituality
* health
* relationships
* finances
SMART goal
http://mobileoffice.about.com/od/glossary/g/smart-goals-definition.htm
<<<
Definition: SMART is an acronym used as a mnemonic to make sure goals or objectives are actionable and achievable. Project managers use the criteria spelled out in SMART to evaluate goals, but SMART can also be used by individuals for personal development or personal productivity.
What Does SMART Mean?
There are many variations to the SMART definition; the letters can alternately signify:
S - specific, significant, simple
M - measurable, meaningful, manageable
A - achievable, actionable, appropriate, aligned
R - relevant, rewarding, realistic, results-oriented
T - timely, tangible, trackable
Alternate Spellings: S.M.A.R.T.
Examples:
A general goal may be to "make more money" but a SMART goal would define the who, what, where, when, and why of the objective: e.g., "Make $500 more a month by freelance writing for online blogs 3 hours a week"
<<<
SMART goal worksheet http://www.goalsontrack.com/resources/SMART_Goal_Worksheet_(PDF).pdf
Task flow worksheet http://www.goalsontrack.com/resources/Task_Flow_Worksheet_(PDF).pdf
12 month success planner http://www.goalsontrack.com/resources/TSP-12MonthPlanner.pdf
<<showtoc>>
! golden gate health check
<<<
* Latest GoldenGate/Database (OGG/RDBMS) Patch recommendations (Doc ID 2193391.1)
* Oracle GoldenGate Performance Data Gathering (Doc ID 1488668.1)
* https://www.ateam-oracle.com/loren-penton
* https://www.oracle.com/technetwork/database/availability/maa-gg-performance-1969630.pdf
* https://www.oracle.com/a/tech/docs/maa-goldengate-hub.pdf
* SRDC: Oracle GoldenGate Integrated Extract and Replicat Performance Diagnostic Collector (Doc ID 2262988.1)
* Master Note for Streams Recommended Configuration (Doc ID 418755.1)
MOS Note:1298562.1:
Oracle GoldenGate database Complete Database Profile check script for Oracle DB (All Schemas) Classic Extract
MOS Note: 1296168.1
Oracle GoldenGate database Schema Profile check script for Oracle DB
MOS Note: 1448324.1
GoldenGate Integrated Capture and Integrated Replicat Healthcheck Script
<<<
[img(100%,100%)[ https://i.imgur.com/TZiyftQ.jpg ]]
[img(70%,70%)[https://i.imgur.com/avZx13h.png]]
[img(70%,70%)[https://i.imgur.com/qmNlMaH.jpg]]
[img(70%,70%)[https://i.imgur.com/qeu3wUA.jpg]]
[img(70%,70%)[https://i.imgur.com/hGon9iR.jpg]]
[img(70%,70%)[https://i.imgur.com/evzj3DP.jpg]]
[img(70%,70%)[https://i.imgur.com/dx5xSt0.png]]
[img(70%,70%)[https://i.imgur.com/AoOiyMi.png]]
[img(70%,70%)[https://i.imgur.com/JYlRjnt.jpg]]
! Enkitec materials
https://connectedlearning.accenture.com/curator/chanea-heard
https://connectedlearning.accenture.com/learningboard/goldengate-administration
! Oracle youtube materials
Oracle GoldenGate 12c Overview https://www.youtube.com/watch?v=GdjuiWPPmVs
Oracle GoldenGate 12c New Features https://www.youtube.com/watch?v=ABle015pRXY&list=PLgvgXKR2fhHBa592Btv5qT3tdqKF5ifuF
Oracle Golden Gate: The Essentials of Data Replication https://www.youtube.com/watch?v=d-YAouQ1g0Y
Oracle GoldenGate Tutorials for Beginners https://www.youtube.com/watch?v=qQJvc1DyLIw
Oracle GoldenGate Deep Dive Hands on Lab - Part 1 https://www.youtube.com/watch?v=5Yp6bvGeP2s
Oracle GoldenGate Deep Dive Hands on Lab - Part 2 https://www.youtube.com/watch?v=bOnGgnjXdNo
Oracle GoldenGate Deep Dive Hands on Lab - Part 3 https://www.youtube.com/watch?v=86QK9NXEKks
Oracle GoldenGate Integrated Extract and Replicat demo https://www.youtube.com/watch?v=dF5RfCeClIo
https://www.youtube.com/user/oraclegoldengate/videos
Oracle GoldenGate 12c: Fundamentals for Oracle http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=609&get_params=dc:D84357,clang:EN
Oracle Vbox VM http://www.oracle.com/technetwork/middleware/data-integrator/odi-demo-2032565.html , http://www.oracle.com/technetwork/middleware/data-integrator/downloads/odi-12c-getstart-vm-install-guide-2401840.pdf
! other youtube materials
Oracle Goldengate Installation and Configuration https://www.youtube.com/watch?v=XnyqS6_IVMQ
Oracle GoldenGate 12c Installation/Walkthrough on VirtualBox https://www.youtube.com/watch?v=c4NxBTnJYvo
11 part series - MS SQL Server to Oracle GG https://www.youtube.com/playlist?list=PLbkU_gVPZ7OTgLRLABah9kdrJ07Tml8E4
6 part series - GoldenGate Tutorial goldengate installation on linux https://www.youtube.com/watch?v=lb3UKpgCA1U&list=PLZSKX9aay1XvIQjy0lWJ5RSn0iuCWGrnL
! golden gate waits
Integrated Replicat Stuck With REPL Capture/Apply: flow control (Doc ID 2354344.1)
[img(50%,50%)[ https://lh3.googleusercontent.com/-kIc50LO2TF4/UYqTs_DcWxI/AAAAAAAAB5s/aplJ3QweTYI/w757-h568-no/GoldenGateUseCases.jpg ]]
http://38.114.158.111/
http://forums.oracle.com/forums/thread.jspa?messageID=4306770
http://cglendenningoracle.blogspot.com/2010/02/streams-vs-golden-gate.html
GoldenGate Quick Start Tutorials
http://gavinsoorma.com/oracle-goldengate-veridata-web/
Oracle Active Data Guard and Oracle GoldenGate
http://www.oracle.com/technetwork/database/features/availability/dataguardgoldengate-096557.html
Oracle GoldenGate high availability using Oracle Clusterware
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/middleware/goldengate/overview/ha-goldengate-whitepaper-128197.pdf
Zero-Downtime Database Upgrades Using Oracle GoldenGate
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/middleware/goldengate/overview/ggzerodowntimedatabaseupgrades-174928.pdf
Oracle GoldenGate 11g: Real-Time Access to Real-Time Information
https://docs.google.com/viewer?url=http://www.oracle.com/us/products/middleware/data-integration/goldengate11g-realtime-wp-168153.pdf%3FssSourceSiteId%3Dotnen
Golden Gate Scripting
http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/goldengate/11g/ogg_automate/index.html
Oracle GoldenGate Tutorial 10- performing a zero downtime cross platform migration and 11g database upgrade
http://gavinsoorma.wordpress.com/2010/03/08/oracle-goldengate-tutorial-10-performing-a-zero-downtime-cross-platform-migration-and-11g-database-upgrade/
! console
http://console.cloud.google.com/
https://stackoverflow.com/questions/22697049/what-is-the-difference-between-google-app-engine-and-google-compute-engine
! official documentation
https://cloud.google.com/products/
https://cloud.google.com/compute/docs/how-to
! cloud sdk
https://cloud.google.com/sdk/downloads
https://github.com/GoogleCloudPlatform
! GCP vs AWS comparison
https://cloud.google.com/docs/compare/aws
https://www.coursera.org/learn/gcp-fundamentals-aws
! hadoop and gcp
https://www.lynda.com/Hadoop-tutorials/Hadoop-Google-Cloud-Platform/516574/593166-4.html
! GCP networking
!! edge network
https://peering.google.com/#/
! online courses
http://bit.ly/2Al1rUP
https://www.coursera.org/specializations/gcp-architecture
https://www.coursera.org/learn/gcp-fundamentals-aws#pricing <- Google Cloud Platform Fundamentals for AWS Professionals
https://www.udemy.com/courses/search/?q=google%20compute%20engine&src=ukw
https://www.lynda.com/Cloud-tutorials/Google-Cloud-Compute-Engine-Essential-Training/181244-2.html
https://www.pluralsight.com/search?q=google%20compute
https://www.pluralsight.com/authors/lynn-langit
https://www.safaribooksonline.com/search/?query=%22Google%20Compute%20Engine%22&extended_publisher_data=true&highlight=true&is_academic_institution_account=false&source=user&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&formats=video&sort=relevance
!! Lynn-Langit
https://www.lynda.com/Google-Cloud-Platform-tutorials/Google-Cloud-Platform-Essential-Training/540539-2.html
https://www.lynda.com/Google-Cloud-tutorials/Google-Cloud-Spanner-First-Look/597023-2.html
!! Joseph Lowery
app engine https://www.lynda.com/Developer-Cloud-Computing-tutorials/Google-App-Engine-Essential-Training/194134-2.html
compute engine https://www.lynda.com/Cloud-tutorials/Google-Cloud-Compute-Engine-Essential-Training/181244-2.html
https://www.lynda.com/Cloud-tutorials/Google-Cloud-Storage-Data-Essential-Training/181243-2.html
!! James Wilson
cloud functions https://app.pluralsight.com/library/courses/google-cloud-functions-getting-started/table-of-contents
http://net.tutsplus.com/tutorials/other/easy-graphs-with-google-chart-tools/ <-- very detailed tutorial
http://code.google.com/apis/chart/image/docs/chart_playground.html <-- chart playground
http://code.google.com/apis/ajax/playground/?type=visualization#tree_map <-- treemap code playground
http://cran.r-project.org/web/packages/googleVis/vignettes/googleVis.pdf <-- Using the Google Visualisation API with R: googleVis-0.2.13 Package Vignette
http://psychopyko.com/tutorial/how-to-use-google-charts/
http://www.a2ztechguide.com/2011/11/example-on-how-to-use-google-chart-api.html
http://forums.msexchange.org/m_1800499864/mpage_1/key_exchsvr/tm.htm#1800499871
Embedding a Google Chart within an email http://groups.google.com/group/google-chart-api/browse_thread/thread/0ca54c8281952005
http://www.bencurtis.com/2011/02/quick-tip-sending-google-chart-links-via-email/
http://googleappsdeveloper.blogspot.com/2011/09/visualize-your-data-charts-in-google.html
http://www.ibm.com/developerworks/data/library/techarticle/dm-1111googlechart/index.html?ca=drs-
http://www.2webvideo.com/blog/data-visualization-tutorial-using-google-chart-tools
http://www.guidingtech.com/7221/create-charts-graphs-google-image-chart-editor/
http://blog.ouseful.info/2009/02/17/creating-your-own-results-charts-for-surveys-created-with-google-forms/
http://awads.net/wp/2006/04/17/orana-powered-by-google-and-feedburner/
google chrome linux
http://superuser.com/questions/52428/where-does-google-chrome-for-linux-store-user-specific-data
http://www.google.com/support/forum/p/Chrome/thread?tid=328b2114587dd5ee&hl=en
http://www.google.com/support/forum/p/Chrome/thread?tid=08e9aa36ad5159cb&hl=en <-- profile
http://www.google.ru/support/forum/p/Chrome/thread?tid=6a3d820ca818336b&hl=en <-- transfer settings
http://www.google.com/support/forum/p/Chrome/thread?tid=328b2114587dd5ee&hl=en <-- sync
google chrome windows
http://www.google.com.ph/support/forum/p/Chrome/thread?tid=34397b8ff6a48a99&hl=en <-- windows
http://www.walkernews.net/2010/09/13/how-to-backup-and-restore-google-chrome-bookmark-history-plugin-and-theme/
sync
http://www.google.com/support/chrome/bin/answer.py?answer=185277
manual uninstall
http://support.google.com/chrome/bin/answer.py?hl=en&answer=111899
shockwave issue http://www.howtogeek.com/103292/how-to-fix-shockwave-flash-crashes-in-google-chrome/
http://superuser.com/questions/772092/making-google-chrome-35-work-on-centos-6-5
{{{
The Google Chrome team no longer officially supports CentOS 6. That doesn't mean it won't work, however. Richard Lloyd has put together a script that does everything necessary to get it running:
wget http://chrome.richardlloyd.org.uk/install_chrome.sh
sudo bash install_chrome.sh
}}}
''Wiki'' http://en.wikipedia.org/wiki/Greenplum
''Datasheets''
http://www.greenplum.com/sites/default/files/h7419.5-greenplum-dca-ds.pdf
http://www.greenplum.com/sites/default/files/h8995-greenplum-database-ds.pdf
http://finland.emc.com/collateral/campaign/global/forums/greenplum-emc-driving-the-future.pdf
http://goo.gl/Baa60
''Architecture document'' http://goo.gl/lGwQ1
this is how onecommand creates grid disks
http://www.evernote.com/shard/s48/sh/65b7e258-543e-4d79-a855-78458a82b830/4f043b0a2dbc947dc603b93718974910
http://www.pythian.com/news/16103/how-to-gns-process-log-level-for-diagnostic-purposes-11g-r2-rac-scan-gns/
http://coskan.wordpress.com/2010/09/11/dbca-could-not-startup-the-asm-instance-configured-on-this-node-error-for-lower-versions-with-11gr2-gi/
https://blogs.oracle.com/fatbloke/entry/growing_your_virtualbox_virtual_disk
http://www.perfdynamics.com/Manifesto/gcaprules.html
http://karlarao.wordpress.com/2010/07/27/guesstimations
http://blog.oracle-ninja.com/2015/07/haip-and-exadata/
11gR2 Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip [ID 1210883.1]
http://blogs.oracle.com/AlejandroVargas/resource/HAIP-CHM.pdf <-- alejandro's introduction
https://forums.oracle.com/forums/thread.jspa?threadID=2220975
http://oraxperts.com/wordpress/highly-available-ip-redundant-private-ip-in-oracle-grid-infrastructure-11g-release-2-11-2-0-2-or-above/
http://www.oracleangels.com/2011/05/public-virtual-private-scan-haip-in-rac.html
Hardware Assisted Resilient Data H.A.R.D
Doc ID: Note:227671.1
http://www.oracle.com/technology/deploy/availability/htdocs/vendors_hard.html
http://www.oracle.com/technology/deploy/availability/htdocs/HARD.html
http://www.oracle.com/technology/deploy/availability/htdocs/hardf.html
http://www.oracle.com/corporate/press/1067828.html
http://www.dba-oracle.com/real_application_clusters_rac_grid/hard.html
''8Gb/s Fibre Channel HBAs — All the Facts''
http://www.emulex-oracle.com/artifacts/f1f78fdb-7501-4baf-84ea-fe0dfb8e62ec/elx_wp_all_hba_8Gb_next_gen.pdf
https://twiki.cern.ch/twiki/bin/view/PDBService/OrionTests
<<<
* Sequential IO performance is almost inevitably the HBA speed, that is typically 400 MB per sec, or 800 MB when multipathing is used.
* Maximum MBPS typically saturates to the HBA speed. For a single ported 4Gbps HBA you will see something less than 400 MBPS. If the HBA is dual ported and you are using multipathing the number should be close to 800 MBPS
<<<
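Those ceilings are just the FC line rate minus encoding overhead; a rough sanity check in Python (the 4.25 Gbaud line rate for 4Gb FC and the 8b/10b encoding are my assumptions, not from the notes above):
{{{
# back-of-the-envelope check of the ~400/~800 MBPS HBA ceilings quoted above
line_rate_gbaud = 4.25                      # 4Gb FC nominal line rate (assumption)
bytes_per_sec = line_rate_gbaud * 1e9 / 10  # 8b/10b coding: 10 line bits per data byte
print(bytes_per_sec / 1e6)                  # ~425 MB/s raw per port -> "something less than 400 MBPS" after protocol overhead
print(2 * bytes_per_sec / 1e6)              # ~850 MB/s dual-ported with multipathing (~800 MBPS observed)
}}}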
https://docs.oracle.com/cd/E18283_01/appdev.112/e16760/d_compress.htm# <-- official doc
https://oracle-base.com/articles/11g/dbms_compression-11gr2
https://oracle-base.com/articles/12c/dbms_compression-enhancements-12cr1
http://uhesse.com/2011/09/12/dbms_compression-example/
https://jonathanlewis.wordpress.com/2011/10/04/hcc/
https://antognini.ch/2010/05/how-good-are-the-values-returned-by-dbms_compression-get_compression_ratio/ <-- good stuff
https://oraganism.wordpress.com/2013/01/10/compression-advisory-dbms_compression/ <-- security grants
https://hortonworks.com/services/training/certification/exam-objectives/#hdpcd
https://learn.hortonworks.com/hdp-certified-developer-hdpcd2019-exam
https://2xbbhjxc6wk3v21p62t8n4d4-wpengine.netdna-ssl.com/wp-content/uploads/2018/12/DEV-331_Apache_Hive_Advanced_SQL_DS.pdf
{{{
cd c:\Dropbox\Python
c:\Python32\python.exe
import pyreadline as readline
import readline
import rlcompleter
readline.parse_and_bind("tab: complete")
-- index of modules
http://localhost:7464/
}}}
! Autocompletion Windows
{{{
c:\Python32>cd Scripts
c:\Python32\Scripts>easy_install.exe pyreadline
c:\Python32\Scripts>cd ..
c:\Python32>python.exe
Python 3.2 (r32:88445, Feb 20 2011, 21:29:02) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import pyreadline as readline
>>> import readline
>>> import rlcompleter
>>> readline.parse_and_bind("tab: complete")
>>> print(var)
test
}}}
http://www-01.ibm.com/support/docview.wss?uid=swg21425643
http://www.littletechtips.com/2012/03/how-to-enable-tab-completion-in-python.html
http://stackoverflow.com/questions/6024952/readline-functionality-on-windows-with-python-2-7 <-- good stuff
http://www.python.org/ftp/python/contrib-09-Dec-1999/Misc/readline-python-win32.README
''Easy_install doc'' http://packages.python.org/distribute/easy_install.html
http://blog.sadphaeton.com/2009/01/20/python-development-windows-part-1installing-python.html <-- good stuff
http://blog.sadphaeton.com/2009/01/20/python-development-windows-part-2-installing-easyinstallcould-be-easier.html
http://www.varunpant.com/posts/how-to-setup-easy_install-on-windows
''setuptools (old version 3<)'' http://pypi.python.org/pypi/setuptools#files
''autocompletion linux'' http://www.youtube.com/watch?v=zUHFu8OlDZg
''distribute (new version 3>)'' http://stackoverflow.com/questions/7558518/will-setuptools-work-with-python-3-2-x
http://regebro.wordpress.com/2009/02/01/setuptools-and-easy_install-for-python-3/
http://pypi.python.org/pypi/distribute
! Ch1 - starting to code
functions
* print
* int
* input
code branches (aka path)
* branch condition (true or false)
* if/else branches
* Python uses indents to connect paths (nested if/else)
IDLE tidbits
* make use of : on if/else
* it automatically indents.. and indents matter for the code path
* when you TAB, it automatically converts it to 4 spaces
{{{
# simple if/else (assumes gas and money were set earlier)
if gas > 10:
    print("trip is good to go!")
else:
    if money > 100:
        print("you should buy food")
    else:
        print("withdraw from atm and buy food")
print("lets go!")
}}}
Loop
* if the loop condition is true, then a loop will run a given piece of code, until it becomes false
* Did you notice that you had to set the value of the answer variable to something sensible before you started the loop? This is important, because if the answer variable doesn’t already have the value no, the loop condition would have been false and the code in the loop body would never have run at all.
{{{
# simple loop
answer = "no"
while answer == "no":
    answer = input("Are we there? ")
print("We're there!")
}}}
''# code template: a simple loop game''
{{{
from random import randint
secret = randint(1, 10)
print("Welcome!")
guess = 0
while guess != secret:
    g = input("Guess the number:")
    guess = int(g)
    if guess == secret:
        print("You win!")
    else:
        if guess > secret:
            print("Too high!")
        else:
            print("Too low!")
        print("You lose!")
print("Game over!")
}}}
! Ch2 - textual data
The computer keeps track of individual characters by using ''two pieces of information:''
1) the ''start'' of the string and the offset of an individual character.
2) The ''offset'' is how far the individual character is from the start of the string.
* The first character in a string has an offset of 0.. and so on.. this offset is also called ''index''
* The offset value is always 1 less than the position
''substring''
s[138:147]
s[a:b]
a is the index of the first character
b is the index after the last character - the slice runs up to, but not including, index b
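A tiny interactive sketch of those rules (the string is made up):
{{{
s = "Head First"
print(s[0])     # 'H' -- the first character is at offset (index) 0
print(s[5:10])  # 'First' -- from index 5 up to, but not including, index 10
}}}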
''function''
print(msg.upper())
''library and function''
page = urllib.request.urlopen("http://...")
library name.function name
{{{
# simple search code
import urllib.request
import time
price = 99.99
while price > 4.74:
    time.sleep(900)
    page = urllib.request.urlopen("http://www.beans-r-us.biz/prices-loyalty.html")
    text = page.read().decode("utf8")
    index = text.find(">$")
    position = int(index)
    price = float(text[position+2:position+6])
print("Buy!")
print(price)
}}}
<<<
''built-in string methods''
text.endswith(".jpg")
* Return the value True if the string has the given substring at the end.
text.upper():
* Return a copy of the string converted to uppercase.
text.lower():
* Return a copy of the string converted to lowercase.
text.replace("tomorrow", "Tuesday"):
* Return a copy of the string with all occurrences of one substring replaced by another.
text.strip():
* Return a copy of the string with the leading and trailing whitespace removed.
text.find("python"):
* Return the first index value when the given substring is found.
text.startswith("<HTML>")
* Return the value True if the string has the given substring at the beginning.
<<<
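A quick sketch exercising a few of these methods (the sample text is made up):
{{{
text = "  Buy beans tomorrow.jpg  "
print(text.strip())                        # leading/trailing whitespace removed
print(text.strip().endswith(".jpg"))       # True
print(text.upper())                        # copy converted to uppercase
print(text.replace("tomorrow", "Tuesday")) # copy with the substring replaced
print(text.find("beans"))                  # 6 -- index of the first match
}}}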
<<<
''some of the functions provided by Python’s built-in time library''
time.clock()
* The current time in seconds, given as a floating point number.
time.daylight()
* This returns 0 if you are not currently in Daylight Savings Time.
time.gmtime()
* Tells you current UTC date and time (not affected by the timezone).
time.localtime()
* Tells you the current local time (is affected by your timezone).
time.sleep(secs)
* Don’t do anything for the specified number of seconds.
time.time()
* Tells you the number of seconds since January 1st, 1970.
time.timezone()
* Tells you the number of hours difference between your timezone and the UTC timezone (London).
<<<
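A minimal sketch of the time library (note: timezone and daylight are actually attributes, not functions, despite how they are listed above):
{{{
import time

print(time.gmtime())          # current UTC date/time, not affected by timezone
print(time.localtime())       # current local time, affected by your timezone
print(time.time())            # seconds since January 1st, 1970
print(time.timezone / 3600)   # hours west of UTC (an attribute, not a call)
time.sleep(2)                 # do nothing for 2 seconds
}}}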
! Ch3 - Functions
* A function is a boxed-up piece of reusable code.
* In Python, use the ''def'' keyword to define a new function
{{{
# a simple smoothie function
def make_smoothie():
    juice = input("What juice would you like? ")
    fruit = input("OK - and how about the fruit? ")
    print("Thanks. Let's go!")
    print("Crushing the ice...")
    print("Blending the " + fruit)
    print("Now adding in the " + juice + " juice")
    print("Finished! There's your " + fruit + " and " + juice + " smoothie!")

print("Welcome to smoothie-matic 2.0")
another = "Y"
while another == "Y":
    make_smoothie()
    another = input("How about another(Y/N)? ")
}}}
* If you use the ''return()'' command within a function, you can send a data value back to the calling code.
* The value assigned to “price" is 5.51. The assignment happens after the code in the function executes
* Well... sort of. The print() command is designed to display (or output) a message, typically on screen. The return() command is designed to allow you to arrange for a function you write to provide a value to your program. Recall the use of randint() in Chapter 1: a random number between two values was returned to your code. So, obviously, when providing your code with a random number, the randint() function uses return() and not print(). In fact, if randint() used print() instead of return(), it would be pretty useless as a reusable function.
Q: Does return() always come at the end of the function?
A: Usually, but this is not a requirement, either. The return() can appear anywhere within a function and, when it is executed, control returns to the calling code from that point in the function. It is perfectly reasonable, for instance, to have multiple uses of return() within a function, perhaps embedded with if statements which then provide a way to control which return() is invoked when.
Q: Can return() send more than one result back to the caller?
A: Yes, it can. return() can provide a list of results to the calling code. But, let's not get ahead of ourselves, because lists are not covered until the next chapter. And there's a little bit more to learn about using return() first, so let's read on and get back to work.
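A made-up sketch of the multiple-return idea:
{{{
# multiple return() calls guarded by if statements (made-up example)
def grade(score):
    if score >= 90:
        return "A"   # control can return to the caller from here...
    if score >= 75:
        return "B"   # ...or here...
    return "C"       # ...or fall through to here

result = grade(82)   # the value comes back to the calling code
print(result)        # printing is a separate, display-only step
}}}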
{{{
# send to twitter function
import urllib.request
import urllib.parse

def send_to_twitter():
    msg = "I am a message that will be sent to Twitter"
    password_manager = urllib.request.HTTPPasswordMgr()
    password_manager.add_password("Twitter API",
        "http://twitter.com/statuses", "...", "...")
    http_handler = urllib.request.HTTPBasicAuthHandler(password_manager)
    page_opener = urllib.request.build_opener(http_handler)
    urllib.request.install_opener(page_opener)
    params = urllib.parse.urlencode( {'status': msg} )
    resp = urllib.request.urlopen("http://twitter.com/statuses/update.json", params)
    resp.read()
}}}
* Use parameters to avoid duplicating functions
* Just like it’s a bad idea to use copy’n’paste for repeated usages of code, it’s also a bad idea to create multiple copies of a function with only minor differences between them.
* A parameter is a value that you send into your function.
* The parameter’s value works just like a variable within the function, ''except for the fact that its initial value is set outside the function code''
To use a parameter in Python, simply put a variable name between the parentheses that come after the definition of the function name and before the colon.
Then within the function itself, simply use the variable like you would any other
{{{
# sample function parameter
def shout_out(the_name):
    return("Congratulations " + the_name + "!")
# use it as follows
print(shout_out('Wanda'))
msg = shout_out('Graham, John, Michael, Eric, and Terry by 2')
print(shout_out('Monty'))
}}}
* check out the use of ''msg'' parameter on the function and also on the price watch code
* also ''password'' variable is defined globally
{{{
# sample send to twitter code
import urllib.request
import time
password="C8H10N4O2"
def send_to_twitter(msg):
    password_manager = urllib.request.HTTPPasswordMgr()
    password_manager.add_password("Twitter API",
        "http://twitter.com/statuses", "starbuzzceo", password)
    http_handler = urllib.request.HTTPBasicAuthHandler(password_manager)
    page_opener = urllib.request.build_opener(http_handler)
    urllib.request.install_opener(page_opener)
    params = urllib.parse.urlencode( {'status': msg} )
    resp = urllib.request.urlopen("http://twitter.com/statuses/update.json", params)
    resp.read()

def get_price():
    page = urllib.request.urlopen("http://www.beans-r-us.biz/prices.html")
    text = page.read().decode("utf8")
    where = text.find('>$')
    start_of_price = where + 2
    end_of_price = start_of_price + 4
    return float(text[start_of_price:end_of_price])

price_now = input("Do you want to see the price now (Y/N)? ")
if price_now == "Y":
    send_to_twitter(get_price())
else:
    price = 99.99
    while price > 4.74:
        time.sleep(900)
        price = get_price()
    send_to_twitter("Buy!")
}}}
* The rest of the program can't see the ''local variable'' from another function
* Programming languages record variables using a section of memory called the stack. It works like a notepad.
* When you call a function, Python starts to record any new variables created in the function's code on a new sheet of paper on the stack
* This new sheet of paper on the stack is called a new stack frame. Stack frames record all of the new variables that are created within a function. These are known as local variables.
The variables that were created before the function was called are still there if the function needs them; they are on the previous stack frame.
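A made-up sketch of a local variable shadowing a global one, which is exactly the stack-frame behavior described above:
{{{
# the function's count lives on its own stack frame (a local variable);
# the count created before the call is untouched
count = 10

def bump():
    count = 99       # new variable on the function's stack frame
    print(count)     # 99

bump()
print(count)         # still 10 -- the caller's variable was never modified
}}}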
Twitter Basic vs OAuth authentication
http://www.linuxjournal.com/content/twittering-command-line <-- OLD STYLE basic authentication removed June 2010
http://jeffmiller.github.com/2010/05/31/twitter-from-the-command-line-in-python-using-oauth <-- NEW STYLE
http://forums.oreilly.com/topic/20756-sending-messages-to-twitter/page__st__20
http://dev.twitter.com/pages/oauth_faq
http://dev.twitter.com/pages/basic_to_oauth
-- some issues I encountered
http://answers.yahoo.com/question/index?qid=20090504211017AAQexjf
!!!! Step by step HOWTO - send tweets on the command line (all code is Python 3)
just go to this page and follow the guide posted by ''Core_500'' no need to install tweepy
{{{
# oauth1.py
import tweepy
CONSUMER_KEY = 'lGfFmQHYEdGGp2TAE6P0A'
CONSUMER_SECRET = 'iD7OfMrCEWY7X6mQ85QrEhMA2jGtqPmvIoR0mU2gg'
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth_url = auth.get_authorization_url()
print ('Please authorize:' + auth_url)
verifier = input('PIN: ').strip()
auth.get_access_token(verifier)
print ("ACCESS_KEY = '%s'" % auth.access_token.key)
print ("ACCESS_SECRET = '%s'" % auth.access_token.secret)
}}}
{{{
# oauth2.py
import sys
import tweepy
CONSUMER_KEY = 'lGfFmQHYEdGGp2TAE6P0A'
CONSUMER_SECRET = 'iD7OfMrCEWY7X6mQ85QrEhMA2jGtqPmvIoR0mU2gg'
ACCESS_KEY = '277601098-oVnCXceKKih6B37huPNfxNJsM6q6xvhtZQTdLci8'
ACCESS_SECRET = 'JRzzK88I3oNEEj4FDknVAoJSzC6AhBqkarbkKv59UM'
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
api.update_status(sys.argv[1])
}}}
{{{
# putting it all together
import sys
import tweepy
import urllib.request
import time
def send_to_twitter(msg):
    CONSUMER_KEY = 'lGfFmQHYEdGGp2TAE6P0A'
    CONSUMER_SECRET = 'iD7OfMrCEWY7X6mQ85QrEhMA2jGtqPmvIoR0mU2gg'
    ACCESS_KEY = '277601098-oVnCXceKKih6B37huPNfxNJsM6q6xvhtZQTdLci8'
    ACCESS_SECRET = 'JRzzK88I3oNEEj4FDknVAoJSzC6AhBqkarbkKv59UM'
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
    api = tweepy.API(auth)
    api.update_status(msg)

def get_price():
    page = urllib.request.urlopen("http://www.beans-r-us.biz/prices.html")
    text = page.read().decode("utf8")
    where = text.find('>$')
    start_of_price = where + 2
    end_of_price = start_of_price + 4
    return float(text[start_of_price:end_of_price])

price_now = input("Do you want to see the price now (Y/N)? ")
if price_now == "Y":
    send_to_twitter(get_price())
else:
    price = 99.99
    while price > 4.74:
        time.sleep(900)
        price = get_price()   # fetch a fresh price each cycle so the loop can end
    send_to_twitter("Buy!")
}}}
-- use it!
C:\Dropbox\Python>oauth2.py "my 5 tweet"
! Ch4 - Data in Files and Arrays
!!! read data in files
{{{
result_f = open("results.txt") <-- open it!
...
result_f.close() <-- close it!
}}}
!!! the ''for loop shredder''
* The entire file is fed into the for loop shredder...
* Note: unlike a real shredder, the for loop shredder (TM) doesn't destroy your data -- it just chops it into lines.
* ...which breaks it up into one-line-at-a-time chunks (which are themselves strings).
* Each time the body of the for loop runs, a variable is set to a string containing the current line of text in the file. This is referred to as ''iterating'' through the data in the file
{{{
result_f = open("results.txt")
for each_line in result_f:
    print(each_line)
result_f.close()
}}}
!!! ''Split'' each line as you read it
* Python strings have a built-in split() method.
* Split into ''separate variables''
rock_band = "Al Carl Mike Brian"
{{{
highest_score = 0
result_f = open("results.txt")
for line in result_f:
    (name, score) = line.split()
    if float(score) > highest_score:
        highest_score = float(score)
result_f.close()
print("The highest score was:")
print(highest_score)
}}}
Using a programming feature called ''multiple assignment'', you can take the result from the cut performed by split() and assign it to a collection of variables
(rhythm, lead, vocals, bass) = rock_band.split()
!!! ''Sorting'' is easier in memory
* Keep the data in files on the disk
* Keep the data in memory
!!! Sometimes, you need to deal with a whole bundle of data, all at once. To do that, most languages give you the ''array''.
* Think of an array as a data train. Each car in the train is called an array element and can store a single piece of data. If you want to store a number in one element and a string in another, you can.
* Even though an array contains a whole bunch of data items, the array itself is a single variable, which just so happens to contain a collection of data. Once your data is in an array, you can treat the array just like any other variable.
* For example, in Python most programmers think array when they are actually using a Python list. For our purposes, think of Python lists and arrays as essentially the same thing.
{{{
my_words = ["Dudes", "and"]
print(my_words[0])
Dudes
print(my_words[1])
and
}}}
* But what if you need to add some extra information to an array? You can use ''append''
* you can start with ''zero values'' in your array and just do ''append''
{{{
my_words.append("Bettys")
print(my_words[2])
Bettys
}}}
<<<
''some of the methods that come built into every array''
count()
* Tells you how many times a value is in the array
extend()
* Adds a list of items to an array
index()
* Looks for an item and returns its index value
insert()
* Adds an item at any index location
pop()
* Removes and returns the last array item
remove()
* Removes the first occurrence of a given value from the array
reverse()
* Reverses the order of the array
sort()
* Sorts the array into a specified order (low to high)
<<<
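A quick sketch running most of these methods against a made-up scores array:
{{{
scores = [8.45, 9.12, 8.45]
print(scores.count(8.45))  # 2 -- how many times the value appears
print(scores.index(9.12))  # 1 -- index of the first match
scores.insert(0, 7.33)     # add an item at any index location
scores.sort()              # low to high
scores.reverse()           # flip the order: now high to low
print(scores.pop())        # removes and returns the last item (7.33)
scores.remove(8.45)        # removes the first occurrence of the value
print(scores)              # [9.12, 8.45]
}}}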
!!! ''Sort'' the array before displaying the results
It was very simple to sort an array of data using just two lines of code. But it turns out you can do even better than that if you use an option with the sort() method. Instead of using these two lines:
''scores.sort()
scores.reverse()''
you could have used just one, which gives the same result: ''scores.sort(reverse = True)''
!!! putting it all together
{{{
scores = []
result_f = open("results.txt")
for line in result_f:
    (name, score) = line.split()
    scores.append(float(score))
result_f.close()
scores.sort(reverse=True)
print("The highest score was:")
print(scores[0])
print(scores[1])
print(scores[2])
}}}
! Ch5 - Hashes and Databases
Data Structure: A standard method of organizing a collection of data items in your computer's memory. You've already met one of the classic data structures: ''the array''
<<<
''data structure names''
Array
* A variable with multiple indexed slots for holding data
Linked list
* A variable that creates a chain of data where one data item points to another data item, which itself points to another data item, and another, and so on and so forth
Queue
* A variable that allows data to enter at one end of a collection and leave at the other end, supporting a first-in, first-out mechanism
Hash
* A variable that has exactly two columns and (potentially) many rows of data
* Known in the Python world as a “dictionary.”
Set
* A variable that contains a collection of unique data items
Multi-dimensional array
* A variable that contains data arranged as a matrix of multiple dimensions (but typically, only two)
<<<
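For reference, my own gloss of how those names map onto Python built-ins (using deque as the queue stand-in is my choice, not the book's):
{{{
from collections import deque

array_like = [1, 2, 3]            # array -> Python list (indexed slots)
hash_like  = {"9.12": "Santiago"} # hash -> Python dictionary (two "columns")
set_like   = {1, 2, 3}            # set -> unique data items only
queue_like = deque([1, 2, 3])     # queue -> first-in, first-out
queue_like.append(4)              # data enters at one end...
print(queue_like.popleft())       # ...and leaves at the other (prints 1)
matrix     = [[1, 2], [3, 4]]     # multi-dimensional array -> nested lists
}}}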
!!! Associate a key with a value using a ''hash''
* Start with an empty hash, curly brackets
{{{
scores = {}
}}}
* After splitting out the name and the score, use the value of “score" as the key of the hash and the value of “name" as the value.
{{{
for line in result_f:
    (name, score) = line.split()
    scores[score] = name
}}}
* Use a ''for loop'' to process/print the contents of the hash
{{{
# not sorted
for each_score in scores.keys():
    print('Surfer ' + scores[each_score] + ' scored ' + each_score)
}}}
* Python hashes don't have a sort() method, so you must use the built-in ''sorted()'' function
* Now that you are sorting the keys of the hash (which represent the surfer’s scores), it should be clear why the scores were used as the key when adding data into the hash: you need to sort the scores, not the surfer names, so the scores need to be on the left side of the hash (because that’s what the built-in sorted() function works with).
{{{
# sorted using function sorted()
for each_score in sorted(scores.keys(), reverse = True):
    print('Surfer ' + scores[each_score] + ' scored ' + each_score)
}}}
!!! Iterate hash data with ''for''
There are two methods to iterate hash data
1) using ''keys()'' method
{{{
for each_score in scores.keys():
    print('Surfer ' + scores[each_score] + ' scored ' + each_score)
}}}
2) using ''items()'' method, returns each key-value pair
{{{
for score, surfer in scores.items():
    print(surfer + ' had a score of ' + str(score))
}}}
{{{
21:07:29 SYS@cdb1> show parameter log_archive_start
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_start boolean FALSE
21:10:26 SYS@cdb1> startup mount
ORACLE instance started.
Total System Global Area 734003200 bytes
Fixed Size 2928728 bytes
Variable Size 633343912 bytes
Database Buffers 92274688 bytes
Redo Buffers 5455872 bytes
Database mounted.
21:10:40 SYS@cdb1>
21:10:56 SYS@cdb1>
21:10:57 SYS@cdb1> alter database archivelog;
Database altered.
21:11:02 SYS@cdb1> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 656
Next log sequence to archive 658
Current log sequence 658
21:11:11 SYS@cdb1> alter database open
21:11:17 2 ;
Database altered.
21:11:24 SYS@cdb1> show parameter log_archive_start
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_start boolean FALSE
alter system switch logfile;
alter system switch logfile;
21:17:27 SYS@cdb1>
1 select snap_id, archived from dba_hist_log
2* order by snap_id asc
SNAP_ID ARC
---------- ---
2785 NO
2785 NO
... output snipped ...
2919 NO
2919 NO
2919 NO
405 rows selected.
21:18:22 SYS@cdb1> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
21:18:36 SYS@cdb1> select snap_id, archived from dba_hist_log order by snap_id asc;
SNAP_ID ARC
---------- ---
2785 NO
2785 NO
... output snipped ...
2920 NO
2920 YES
SNAP_ID ARC
---------- ---
2920 YES
408 rows selected.
}}}
<<showtoc>>
<<<
Alright, here's something that's working. This script/command/process can be fired from just the Global Zone and will output the data for each running instance on every non-Global Zone.
We don't have to log in on each zone, then su - to oracle, and set the environment for every database. This is useful for resource accounting and general monitoring.
You can also modify the scripts to pull anything you want from the instances and format it in such a way that it's easily grep'able. For example, you can put "Zone :" and "Instance :" in front of every output line so you can easily grep the final text file. The advantage of this zlogin method (zlogin is how we log in on every zone) over using dcli is that it's native and we don't have to mess with SSH keys on every zone.
Moving forward I’ll put all the scripts under /root/dba/scripts/ for Global Zones and under /export/home/oracle/dba/scripts/ for non-Global zones
Sample output below:
<<<
{{{
################################################
Zone : ssc1s1vm04
Oracle Corporation SunOS 5.11 11.3 August 2016
Instance : +ASM1
Instance : dbm041
################################################
Zone : ssc1s1vm05
Oracle Corporation SunOS 5.11 11.3 August 2016
Instance : +ASM1
Instance : dbm051
}}}
! Here’s the step by step:
!! # Login on Global Zone and create oracle/dba directory under /export/home/oracle on every zone
{{{
for i in `zoneadm list|grep -v global`; do echo "################################################
$i"; mkdir -p /zoneHome/$i/root/export/home/oracle/dba/scripts; done
for i in `zoneadm list|grep -v global`; do echo "################################################
$i"; ls -ld /zoneHome/$i/root/export/home/oracle/dba; done
for i in `zoneadm list|grep -v global`; do echo "################################################
$i"; chown -R 1001:1001 /zoneHome/$i/root/export/home/oracle/dba; done
}}}
!! # Copy script files from global to non-global
{{{
mkdir -p dba/scripts
}}}
{{{
for i in `zoneadm list|grep -v global`; do echo "################################################
$i"; cp /root/dba/scripts/get_* /zoneHome/$i/root/export/home/oracle/dba/scripts/; done
for i in `zoneadm list|grep -v global`; do echo "################################################
$i"; chown -R 1001:1001 /zoneHome/$i/root/export/home/oracle/dba; done
for i in `zoneadm list|grep -v global`; do echo "################################################
$i"; chmod -R 755 /zoneHome/$i/root/export/home/oracle/dba; done
for i in `zoneadm list|grep -v global`; do echo "################################################
$i"; ls -l /zoneHome/$i/root/export/home/oracle/dba/scripts; done
}}}
!! # Execute shell for every zone and output to file.txt
{{{
for i in `zoneadm list|grep -v global`; do echo "################################################
$i"; zlogin -l oracle $i /export/home/oracle/dba/scripts/get_inst; done > file.txt ; cat file.txt
for i in `zoneadm list|grep -v global`; do echo "################################################
$i"; zlogin -l oracle $i /export/home/oracle/dba/scripts/get_asm_size; done > file.txt ; cat file.txt
}}}
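If you'd rather collect the output in a structure than in file.txt, the same loop can be driven from Python; a minimal sketch assuming it runs as root on a Solaris Global Zone where zoneadm/zlogin and the get_inst script below exist:
{{{
# run get_inst on every non-global zone via zlogin and print stdout per zone
import subprocess

zones = subprocess.run(["zoneadm", "list"], capture_output=True,
                       text=True).stdout.split()
for zone in zones:
    if zone == "global":
        continue
    out = subprocess.run(["zlogin", "-l", "oracle", zone,
                          "/export/home/oracle/dba/scripts/get_inst"],
                         capture_output=True, text=True).stdout
    print("################################################")
    print("Zone :", zone)
    print(out)
}}}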
!! # Example scripts (create under /root/dba/scripts/ of Global Zone)
!!! get_inst
{{{
#!/bin/bash
# get_inst script
db=`ps -ef | grep pmon | grep -v grep | cut -f3 -d_`
for i in $db ; do
export ORATAB=/var/opt/oracle/oratab
export ORACLE_SID=$i
export ORAINST=`ps -ef | grep pmon | grep -v grep | cut -f3 -d_ | grep -i $ORACLE_SID | sed 's/.$//' `
export ORACLE_HOME=`egrep -i ":Y|:N" $ORATAB | grep -v ^# | grep $ORAINST | cut -d":" -f2 | grep -v "\#" | grep -v "\*"`
$ORACLE_HOME/bin/sqlplus -s /nolog <<EOF
connect / as sysdba
set echo off
set heading off
select instance_name from v\$instance;
EOF
done
}}}
!!! get_asm_size
{{{
#!/bin/bash
# get_asm_size script
db=`ps -ef | grep pmon | grep -v grep | grep -i asm | cut -f3 -d_`
for i in $db ; do
export ORATAB=/var/opt/oracle/oratab
export ORACLE_SID=$i
export ORAINST=`ps -ef | grep pmon | grep -v grep | cut -f3 -d_ | grep -i $ORACLE_SID | sed 's/.$//' `
export ORACLE_HOME=`egrep -i ":Y|:N" $ORATAB | grep -v ^# | grep $ORAINST | cut -d":" -f2 | grep -v "\#" | grep -v "\*"`
$ORACLE_HOME/bin/sqlplus -s /nolog <<EOF
connect / as sysdba
set colsep ','
set lines 600
col state format a9
col dgname format a15
col sector format 999990
col block format 999990
col label format a25
col path format a40
col redundancy format a25
col pct_used format 990
col pct_free format 990
col voting format a6
BREAK ON REPORT
COMPUTE SUM OF raw_gb ON REPORT
COMPUTE SUM OF usable_total_gb ON REPORT
COMPUTE SUM OF usable_used_gb ON REPORT
COMPUTE SUM OF usable_free_gb ON REPORT
COMPUTE SUM OF required_mirror_free_gb ON REPORT
COMPUTE SUM OF usable_file_gb ON REPORT
COL name NEW_V _hostname NOPRINT
select lower(host_name) name from v\$instance;
select
trim('&_hostname') hostname,
name as dgname,
state,
type,
sector_size sector,
block_size block,
allocation_unit_size au,
round(total_mb/1024,2) raw_gb,
round((DECODE(TYPE, 'HIGH', 0.3333 * total_mb, 'NORMAL', .5 * total_mb, total_mb))/1024,2) usable_total_gb,
round((DECODE(TYPE, 'HIGH', 0.3333 * (total_mb - free_mb), 'NORMAL', .5 * (total_mb - free_mb), (total_mb - free_mb)))/1024,2) usable_used_gb,
round((DECODE(TYPE, 'HIGH', 0.3333 * free_mb, 'NORMAL', .5 * free_mb, free_mb))/1024,2) usable_free_gb,
round((DECODE(TYPE, 'HIGH', 0.3333 * required_mirror_free_mb, 'NORMAL', .5 * required_mirror_free_mb, required_mirror_free_mb))/1024,2) required_mirror_free_gb,
round(usable_file_mb/1024,2) usable_file_gb,
round((total_mb - free_mb)/total_mb,2)*100 as "PCT_USED",
round(free_mb/total_mb,2)*100 as "PCT_FREE",
offline_disks,
voting_files voting
from v\$asm_diskgroup
where total_mb != 0
order by 1;
EOF
done
}}}
!!! get_zone
{{{
dcli -l root -c er2s1app01,er2s1app02,er2s2app01,er2s2app02 zoneadm list -civ
}}}
!! # Example output (get_asm_size)
{{{
root@ssc1s1db01:~/dba/scripts# cat file.txt
################################################
ssc1s1vm01
Oracle Corporation SunOS 5.11 11.3 August 2016
old 2: trim('&_hostname') hostname,
new 2: trim('ssc1s1vm01') hostname,
HOSTNAME ,DGNAME ,STATE ,TYPE , SECTOR, BLOCK, AU, RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm01,DBFSBWDR ,MOUNTED ,HIGH , 512, 4096, 4194304, 1528, 509.28, 5.67, 503.61, 5.33, 498.33, 1, 99, 0,Y
ssc1s1vm01,RECOBWDR ,MOUNTED ,NORMAL, 512, 4096, 4194304, 8213, 4106.5, 145.06, 3961.44, 21.5, 3939.94, 4, 96, 0,N
ssc1s1vm01,DATABWDR ,MOUNTED ,NORMAL, 512, 4096, 4194304, 32661, 16330.5, 164.71, 16165.79, 85.5, 16080.29, 1, 99, 0,N
, , , , , , ,----------,---------------,--------------,--------------,-----------------------,--------------, , , ,
sum , , , , , , , 42402, 20946.28, 315.44, 20630.84, 112.33, 20518.56, , , ,
################################################
ssc1s1vm02
Oracle Corporation SunOS 5.11 11.3 August 2016
old 2: trim('&_hostname') hostname,
new 2: trim('ssc1s1vm02') hostname,
HOSTNAME ,DGNAME ,STATE ,TYPE , SECTOR, BLOCK, AU, RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm02,RECODEV ,MOUNTED ,NORMAL, 512, 4096, 4194304, 6303, 3151.5, 200.83, 2950.67, 16.5, 2934.17, 6, 94, 0,N
ssc1s1vm02,DBFSDEV ,MOUNTED ,HIGH , 512, 4096, 4194304, 1528, 509.28, 6.69, 502.59, 5.33, 497.3, 1, 99, 0,Y
ssc1s1vm02,DATADEV ,MOUNTED ,NORMAL, 512, 4096, 4194304, 14325, 7162.5, 367.28, 6795.22, 37.5, 6757.72, 5, 95, 0,N
, , , , , , ,----------,---------------,--------------,--------------,-----------------------,--------------, , , ,
sum , , , , , , , 22156, 10823.28, 574.8, 10248.48, 59.33, 10189.19, , , ,
################################################
ssc1s1vm03
Oracle Corporation SunOS 5.11 11.3 August 2016
old 2: trim('&_hostname') hostname,
new 2: trim('ssc1s1vm03') hostname,
HOSTNAME ,DGNAME ,STATE ,TYPE , SECTOR, BLOCK, AU, RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm03,DATASBX ,MOUNTED ,NORMAL, 512, 4096, 4194304, 20437, 10218.5, 166.21, 10052.29, 53.5, 9998.79, 2, 98, 0,N
ssc1s1vm03,DBFSSBX ,MOUNTED ,HIGH , 512, 4096, 4194304, 1528, 509.28, 5.67, 503.61, 5.33, 498.33, 1, 99, 0,Y
ssc1s1vm03,RECOSBX ,MOUNTED ,NORMAL, 512, 4096, 4194304, 8213, 4106.5, 145.11, 3961.39, 21.5, 3939.89, 4, 96, 0,N
, , , , , , ,----------,---------------,--------------,--------------,-----------------------,--------------, , , ,
sum , , , , , , , 30178, 14834.28, 316.99, 14517.29, 80.33, 14437.01, , , ,
################################################
ssc1s1vm04
Oracle Corporation SunOS 5.11 11.3 August 2016
old 2: trim('&_hostname') hostname,
new 2: trim('ssc1s1vm04') hostname,
HOSTNAME ,DGNAME ,STATE ,TYPE , SECTOR, BLOCK, AU, RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm04,DATAQA ,MOUNTED ,NORMAL, 512, 4096, 4194304, 108106, 54053, 168.17, 53884.83, 283, 53601.83, 0, 100, 0,N
ssc1s1vm04,DBFSQA ,MOUNTED ,HIGH , 512, 4096, 4194304, 1528, 509.28, 5.56, 503.72, 5.33, 498.44, 1, 99, 0,Y
ssc1s1vm04,RECOQA ,MOUNTED ,NORMAL, 512, 4096, 4194304, 34762, 17381, 153.69, 17227.31, 91, 17136.31, 1, 99, 0,N
, , , , , , ,----------,---------------,--------------,--------------,-----------------------,--------------, , , ,
sum , , , , , , , 144396, 71943.28, 327.42, 71615.86, 379.33, 71236.58, , , ,
################################################
ssc1s1vm05
Oracle Corporation SunOS 5.11 11.3 August 2016
old 2: trim('&_hostname') hostname,
new 2: trim('ssc1s1vm05') hostname,
HOSTNAME ,DGNAME ,STATE ,TYPE , SECTOR, BLOCK, AU, RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm05,DATAECCDR ,MOUNTED ,NORMAL, 512, 4096, 4194304, 61311, 30655.5, 99.79, 30555.71, 160.5, 30395.21, 0, 100, 0,N
ssc1s1vm05,DBFSECCDR ,MOUNTED ,HIGH , 512, 4096, 4194304, 1528, 509.28, 5.57, 503.71, 5.33, 498.43, 1, 99, 0,Y
ssc1s1vm05,RECOECCDR ,MOUNTED ,NORMAL, 512, 4096, 4194304, 20437, 10218.5, 83.4, 10135.1, 53.5, 10081.6, 1, 99, 0,N
, , , , , , ,----------,---------------,--------------,--------------,-----------------------,--------------, , , ,
sum , , , , , , , 83276, 41383.28, 188.76, 41194.52, 219.33, 40975.24, , , ,
################################################
ssc1s1vm06
Oracle Corporation SunOS 5.11 11.3 August 2016
old 2: trim('&_hostname') hostname,
new 2: trim('ssc1s1vm06') hostname,
HOSTNAME ,DGNAME ,STATE ,TYPE , SECTOR, BLOCK, AU, RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm06,DATAPODR ,MOUNTED ,NORMAL, 512, 4096, 4194304, 10314, 5157, 100.34, 5056.66, 27, 5029.66, 2, 98, 0,N
ssc1s1vm06,RECOPODR ,MOUNTED ,NORMAL, 512, 4096, 4194304, 4202, 2101, 100.19, 2000.81, 11, 1989.81, 5, 95, 0,N
ssc1s1vm06,DBFSPODR ,MOUNTED ,HIGH , 512, 4096, 4194304, 1528, 509.28, 5.56, 503.72, 5.33, 498.44, 1, 99, 0,Y
, , , , , , ,----------,---------------,--------------,--------------,-----------------------,--------------, , , ,
sum , , , , , , , 16044, 7767.28, 206.09, 7561.19, 43.33, 7517.91, , , ,
################################################
ssc1s1vm07
Oracle Corporation SunOS 5.11 11.3 August 2016
old 2: trim('&_hostname') hostname,
new 2: trim('ssc1s1vm07') hostname,
HOSTNAME ,DGNAME ,STATE ,TYPE , SECTOR, BLOCK, AU, RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm07,DATAECCSTG ,MOUNTED ,NORMAL, 512, 4096, 4194304, 61311, 30655.5, 160.73, 30494.77, 160.5, 30334.27, 1, 99, 0,N
ssc1s1vm07,RECOECCSTG ,MOUNTED ,NORMAL, 512, 4096, 4194304, 20437, 10218.5, 60.07, 10158.43, 53.5, 10104.93, 1, 99, 0,N
ssc1s1vm07,DBFSECCSTG ,MOUNTED ,HIGH , 512, 4096, 4194304, 1528, 509.28, 5.56, 503.72, 5.33, 498.44, 1, 99, 0,Y
, , , , , , ,----------,---------------,--------------,--------------,-----------------------,--------------, , , ,
sum , , , , , , , 83276, 41383.28, 226.36, 41156.92, 219.33, 40937.64, , , ,
################################################
ssc1s1vm08
Oracle Corporation SunOS 5.11 11.3 August 2016
old 2: trim('&_hostname') hostname,
new 2: trim('ssc1s1vm08') hostname,
HOSTNAME ,DGNAME ,STATE ,TYPE , SECTOR, BLOCK, AU, RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm08,DATAPOSTG ,MOUNTED ,NORMAL, 512, 4096, 4194304, 10314, 5157, 100.51, 5056.49, 27, 5029.49, 2, 98, 0,N
ssc1s1vm08,DBFSPOSTG ,MOUNTED ,HIGH , 512, 4096, 4194304, 1528, 509.28, 5.57, 503.71, 5.33, 498.43, 1, 99, 0,Y
ssc1s1vm08,RECOPOSTG ,MOUNTED ,NORMAL, 512, 4096, 4194304, 4202, 2101, 99.38, 2001.62, 11, 1990.62, 5, 95, 0,N
, , , , , , ,----------,---------------,--------------,--------------,-----------------------,--------------, , , ,
sum , , , , , , , 16044, 7767.28, 205.46, 7561.82, 43.33, 7518.54, , , ,
root@ssc1s1db01:~/dba/scripts#
}}}
! Tableau calculated fields
{{{
DGTYPE
IF contains(lower(trim([Dgname])),'dbfs')=true THEN 'DBFS'
ELSEIF contains(lower(trim([Dgname])),'reco')=true THEN 'RECO'
ELSEIF contains(lower(trim([Dgname])),'data')=true THEN 'DATA'
ELSE 'OTHER' END
DATA CENTER
IF contains(lower(trim([Ldom])),'er1')=true THEN 'ER1'
ELSEIF contains(lower(trim([Ldom])),'er2')=true THEN 'ER2'
ELSE 'OTHER' END
CHASSIS
IF contains(lower(trim([Ldom])),'er1p1')=true THEN 'er1p1'
ELSEIF contains(lower(trim([Ldom])),'er1p2')=true THEN 'er1p2'
ELSEIF contains(lower(trim([Ldom])),'er2s1')=true THEN 'er2s1'
ELSEIF contains(lower(trim([Ldom])),'er2s2')=true THEN 'er2s2'
ELSE 'OTHER' END
}}}
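The DGTYPE field is plain substring matching, so the rules are easy to sanity-check outside Tableau; a throwaway Python sketch using diskgroup names from the output above:
{{{
# same classification rules as the DGTYPE calculated field
def dgtype(dgname):
    name = dgname.strip().lower()
    if "dbfs" in name:
        return "DBFS"
    if "reco" in name:
        return "RECO"
    if "data" in name:
        return "DATA"
    return "OTHER"

for dg in ["DBFSBWDR", "RECOQA", "DATAECCDR", "GRID"]:
    print(dg, "->", dgtype(dg))
}}}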
! Visualization
!! Here’s the high level storage usage/allocation by DATA,RECO,and DBFS disk groups
[img(80%,80%)[ http://i.imgur.com/IrMJZGK.png ]]
!! Here’s the breakdown of that by Zone
[img(80%,80%)[ http://i.imgur.com/sTYOyGL.png ]]
!! Another view of the breakdown by zone
[img(80%,80%)[ http://i.imgur.com/S7lmHde.png ]]
! the final workbook
https://public.tableau.com/profile/karlarao#!/vizhome/SPARCSuperclusterLDOM-ZoneASMStorageMapping/LDOM-ZoneASMStorageMapping
<<showtoc>>
HOWTO: Resource Manager and IORM by Cluster Service
http://goo.gl/I1mjd
* This HOWTO shows the following:
** how to make use of Cluster Services to map users on resource manager
** limit the PX slaves per user
** cancel SQLs running longer than 15secs (just for testing purposes)
** limit the backup operations
** activate the IORM intradatabase plan
Also, at the end of this guide is an INTERDATABASE IORM PLAN.
The FYIs section at the bottom contains some of my observations during the test cases.
! INTRADATABASE IORM PLAN
{{{
-- Create the cluster service
-- #############################################
--Create a service for Reporting sessions
srvctl add service -d dbm -s DBM_REPORTING -r dbm1,dbm2
-- srvctl add service -d dbm -s DBM_REPORTING -r dbm1
srvctl start service -d dbm -s DBM_REPORTING
srvctl stop service -d dbm -s DBM_REPORTING
srvctl remove service -d dbm -s DBM_REPORTING
--Create a service for ETL sessions
srvctl add service -d dbm -s DBM_ETL -r dbm1,dbm2
-- srvctl add service -d dbm -s DBM_ETL -r dbm1
srvctl start service -d dbm -s DBM_ETL
srvctl stop service -d dbm -s DBM_ETL
srvctl remove service -d dbm -s DBM_ETL
-- check service status
srvctl status service -d dbm
-- Create Resource Groups
-- #############################################
BEGIN
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.create_consumer_group(
consumer_group => 'REPORTING',
comment => 'Consumer group for REPORTS');
dbms_resource_manager.create_consumer_group(
consumer_group => 'ETL',
comment => 'Consumer group for ETL');
dbms_resource_manager.create_consumer_group(
consumer_group => 'MAINT',
comment => 'Consumer group for maintenance jobs');
dbms_resource_manager.validate_pending_area();
dbms_resource_manager.submit_pending_area();
END;
/
-- Create Consumer Group Mapping Rules
-- #############################################
BEGIN
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.set_consumer_group_mapping(
attribute => dbms_resource_manager.service_name,
value => 'DBM_REPORTING',
consumer_group => 'REPORTING');
dbms_resource_manager.set_consumer_group_mapping(
attribute => dbms_resource_manager.service_name,
value => 'DBM_ETL',
consumer_group => 'ETL');
dbms_resource_manager.set_consumer_group_mapping(
attribute => dbms_resource_manager.oracle_function,
value => 'BACKUP',
consumer_group => 'MAINT');
dbms_resource_manager.set_consumer_group_mapping(
attribute => dbms_resource_manager.oracle_function,
value => 'COPY',
consumer_group => 'MAINT');
dbms_resource_manager.validate_pending_area();
dbms_resource_manager.submit_pending_area();
END;
/
-- Resource Group Mapping Priorities
#############################################
BEGIN
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.set_consumer_group_mapping_pri(
explicit => 1,
service_name => 2,
oracle_user => 3,
client_program => 4,
service_module_action => 5,
service_module => 6,
module_name_action => 7,
module_name => 8,
client_os_user => 9,
client_machine => 10 );
dbms_resource_manager.validate_pending_area();
dbms_resource_manager.submit_pending_area();
END;
/
-- Create the Resource Plan and Plan Directives
-- * DAYTIME for reports
-- * NIGHTTIME for ETL jobs
#############################################
-- create DAYTIME plan
BEGIN
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.create_plan(
plan => 'DAYTIME',
comment => 'Resource plan for normal business hours');
dbms_resource_manager.create_plan_directive(
plan => 'DAYTIME',
group_or_subplan => 'REPORTING',
comment => 'High priority for users/applications',
mgmt_p1 => 70,
PARALLEL_DEGREE_LIMIT_P1 => 4);
dbms_resource_manager.create_plan_directive(
plan => 'DAYTIME',
group_or_subplan => 'ETL',
comment => 'Medium priority for ETL processing',
mgmt_p2 => 50);
dbms_resource_manager.create_plan_directive(
plan => 'DAYTIME',
group_or_subplan => 'MAINT',
comment => 'Low priority for daytime maintenance',
mgmt_p3 => 50);
dbms_resource_manager.create_plan_directive(
plan => 'DAYTIME',
group_or_subplan => 'OTHER_GROUPS',
comment => 'All other groups not explicitly named in this plan',
mgmt_p3 => 50);
dbms_resource_manager.validate_pending_area();
dbms_resource_manager.submit_pending_area();
END;
/
-- create NIGHTTIME plan
BEGIN
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.create_plan(
plan => 'NIGHTTIME',
comment => 'Resource plan for ETL hours');
dbms_resource_manager.create_plan_directive(
plan => 'NIGHTTIME',
group_or_subplan => 'ETL',
comment => 'High priority for ETL processing',
mgmt_p1 => 70);
dbms_resource_manager.create_plan_directive(
plan => 'NIGHTTIME',
group_or_subplan => 'REPORTING',
comment => 'Medium priority for users/applications',
mgmt_p2 => 50,
PARALLEL_DEGREE_LIMIT_P1 => 4,
SWITCH_GROUP=>'CANCEL_SQL',
SWITCH_TIME=>15,
SWITCH_ESTIMATE=>false
);
dbms_resource_manager.create_plan_directive(
plan => 'NIGHTTIME',
group_or_subplan => 'MAINT',
comment => 'Low priority for daytime maintenance',
mgmt_p3 => 50);
dbms_resource_manager.create_plan_directive(
plan => 'NIGHTTIME',
group_or_subplan => 'OTHER_GROUPS',
comment => 'All other groups not explicitly named in this plan',
mgmt_p3 => 50);
dbms_resource_manager.validate_pending_area();
dbms_resource_manager.submit_pending_area();
END;
/
-- Grant the consumer group to the users
-- if you do not do this, the user's RESOURCE_CONSUMER_GROUP will show as OTHER_GROUPS
#############################################
BEGIN
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
dbms_resource_manager_privs.grant_switch_consumer_group ('oracle','REPORTING',FALSE);
dbms_resource_manager_privs.grant_switch_consumer_group ('oracle','ETL',FALSE);
dbms_resource_manager_privs.grant_switch_consumer_group ('oracle','MAINT',FALSE);
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- Activate the Resource Plan
#############################################
ALTER SYSTEM SET resource_manager_plan='NIGHTTIME' SCOPE=BOTH SID='*';
ALTER SYSTEM SET resource_manager_plan='DAYTIME' SCOPE=BOTH SID='*';
-- to deactivate
ALTER SYSTEM SET resource_manager_plan='' SCOPE=BOTH SID='*';
-- You can also enable the resource plan with the FORCE Option to avoid the Scheduler window to activate a different plan during the job execution.
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'FORCE:DAYTIME';
or
-- The window starts at 11:00 PM (hour 23) and runs through 7:00 AM (480 minutes).
BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE(
Name => '"SYS"."WEEKNIGHT_WINDOW"',
Attribute => 'RESOURCE_PLAN',
Value => 'NIGHTTIME');
DBMS_SCHEDULER.SET_ATTRIBUTE(
name => '"SYS"."WEEKNIGHT_WINDOW"',
attribute => 'REPEAT_INTERVAL',
value => 'FREQ=WEEKLY;BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN;BYHOUR=23;BYMINUTE=00;BYSECOND=0');
DBMS_SCHEDULER.SET_ATTRIBUTE(
name=>'"SYS"."WEEKNIGHT_WINDOW"',
attribute=>'DURATION',
value=>numtodsinterval(480, 'minute'));
DBMS_SCHEDULER.ENABLE(name=>'"SYS"."WEEKNIGHT_WINDOW"');
END;
/
-- The window starts at 7:00 AM (hour 7) and runs until 11:00 PM (960 minutes)
BEGIN
DBMS_SCHEDULER.CREATE_WINDOW(
window_name => '"WEEKDAY_WINDOW"',
resource_plan => 'DAYTIME',
start_date => systimestamp at time zone '-6:00',
duration => numtodsinterval(960, 'minute'),
repeat_interval => 'FREQ=WEEKLY;BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN;BYHOUR=7;BYMINUTE=0;BYSECOND=0',
end_date => null,
window_priority => 'HIGH',
comments => 'Weekday window. Sets the active resource plan to DAYTIME');
DBMS_SCHEDULER.ENABLE(name=>'"SYS"."WEEKDAY_WINDOW"');
END;
/
-- Activate IORM on Exadata
#############################################
-- In each storage cell...
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective = auto'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan catPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan active'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
-- Revert/Delete
#############################################
BEGIN
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.DELETE_PLAN (PLAN => 'NIGHTTIME');
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
BEGIN
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.DELETE_CONSUMER_GROUP(CONSUMER_GROUP => 'REPORTS');
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
BEGIN
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.DELETE_CONSUMER_GROUP(CONSUMER_GROUP => 'ETL');
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- Check Resource Manager configuration
#############################################
set wrap off
set head on
set linesize 300
set pagesize 132
col comments format a64
-- show current resource plan
select * from V$RSRC_PLAN;
-- show all resource plans
select PLAN,NUM_PLAN_DIRECTIVES,CPU_METHOD,substr(COMMENTS,1,64) "COMMENTS",STATUS,MANDATORY
from dba_rsrc_plans
order by plan;
-- show consumer groups
select CONSUMER_GROUP,CPU_METHOD,STATUS,MANDATORY,substr(COMMENTS,1,64) "COMMENTS"
from DBA_RSRC_CONSUMER_GROUPS
where CONSUMER_GROUP in ('REPORTING','ETL','MAINT')
order by consumer_group;
-- show category
SELECT consumer_group, category
FROM DBA_RSRC_CONSUMER_GROUPS
ORDER BY category;
-- show mappings
col value format a30
select ATTRIBUTE, VALUE, CONSUMER_GROUP, STATUS
from DBA_RSRC_GROUP_MAPPINGS
where CONSUMER_GROUP in ('REPORTING','ETL','MAINT')
order by 3;
-- show mapping priority
select * from DBA_RSRC_MAPPING_PRIORITY;
-- show directives
SELECT plan,group_or_subplan,cpu_p1,cpu_p2,cpu_p3, PARALLEL_DEGREE_LIMIT_P1, status
FROM dba_rsrc_plan_directives
where plan in ('DAYTIME','NIGHTTIME')
order by 1,3 desc,4 desc,5 desc;
-- show grants
select * from DBA_RSRC_CONSUMER_GROUP_PRIVS order by grantee;
select * from DBA_RSRC_MANAGER_SYSTEM_PRIVS order by grantee;
-- show scheduler windows
select window_name, resource_plan, START_DATE, DURATION, WINDOW_PRIORITY, enabled, active from dba_scheduler_windows;
-- Useful monitoring SQLs
#############################################
## Check the service name used by each session
select inst_id, username, SERVICE_NAME, RESOURCE_CONSUMER_GROUP, count(*)
from gv$session
where SERVICE_NAME <> 'SYS$BACKGROUND'
group by inst_id, username, SERVICE_NAME, RESOURCE_CONSUMER_GROUP order by 2,3,1;
## List the Active Resource Consumer Groups since instance startup
select INST_ID, NAME, ACTIVE_SESSIONS, EXECUTION_WAITERS, REQUESTS, CPU_WAIT_TIME, CPU_WAITS, CONSUMED_CPU_TIME, YIELDS, QUEUE_LENGTH, ACTIVE_SESSION_LIMIT_HIT
from gV$RSRC_CONSUMER_GROUP
-- where name in ('SYS_GROUP','BATCH','OLTP','OTHER_GROUPS')
order by 2,1;
## Session level details
SET pagesize 50
SET linesize 155
SET wrap off
COLUMN name format a11 head "Consumer|Group"
COLUMN sid format 9999
COLUMN username format a16
COLUMN CONSUMED_CPU_TIME head "Consumed|CPU time|(s)" format 999999.9
COLUMN IO_SERVICE_TIME head "I/O time|(s)" format 999999.9
COLUMN CPU_WAIT_TIME head "CPU Wait|Time (s)" FOR 99999
COLUMN CPU_WAITS head "CPU|Waits" format 99999
COLUMN YIELDS head "Yields" format 99999
COLUMN state format a10
COLUMN osuser format a8
COLUMN machine format a16
COLUMN PROGRAM format a12
SELECT
rcg.name
, rsi.sid
, s.username
, rsi.state
, rsi.YIELDS
, rsi.CPU_WAIT_TIME / 1000 AS CPU_WAIT_TIME
, rsi.CPU_WAITS
, rsi.CONSUMED_CPU_TIME / 1000 AS CONSUMED_CPU_TIME
, rsi.IO_SERVICE_TIME /1000 AS IO_SERVICE_TIME
, s.osuser
, s.program
, s.machine
, sw.event
FROM V$RSRC_SESSION_INFO rsi INNER JOIN v$rsrc_consumer_group rcg
ON rsi.CURRENT_CONSUMER_GROUP_ID = rcg.id
INNER JOIN v$session s ON rsi.sid=s.sid
INNER JOIN v$session_wait sw ON s.sid = sw.sid
WHERE rcg.id !=0 -- _ORACLE_BACKGROUND_GROUP_
and (sw.event != 'SQL*Net message from client' or rsi.state='RUNNING')
ORDER BY rcg.name, s.username,rsi.cpu_wait_time + rsi.IO_SERVICE_TIME + rsi.CONSUMED_CPU_TIME ASC, rsi.state, sw.event, s.username, rcg.name,s.machine,s.osuser
/
## By consumer group - time series
set linesize 160
set pagesize 60
set colsep ' '
column total head "Total Available|CPU Seconds" format 99990
column consumed head "Used|Oracle Seconds" format 99990.9
column consumer_group_name head "Consumer|Group Name" format a25 wrap off
column "throttled" head "Oracle Throttled|Time (s)" format 99990.9
column cpu_utilization head "% of Host CPU" format 99990.9
break on time skip 2 page
select to_char(begin_time, 'YYYY-DD-MM HH24:MI:SS') time,
consumer_group_name,
60 * (select value from v$osstat where stat_name = 'NUM_CPUS') as total,
cpu_consumed_time / 1000 as consumed,
cpu_consumed_time / (select value from v$parameter where name = 'cpu_count') / 600 as cpu_utilization,
cpu_wait_time / 1000 as throttled,
IO_MEGABYTES
from v$rsrcmgrmetric_history
order by begin_time,consumer_group_name
/
## High level
set linesize 160
set pagesize 50
set colsep ' '
column "Total Available CPU Seconds" head "Total Available|CPU Seconds" format 99990
column "Used Oracle Seconds" head "Used Oracle|Seconds" format 99990.9
column "Used Host CPU %" head "Used Host|CPU %" format 99990.9
column "Idle Host CPU %" head "Idle Host|CPU %" format 99990.9
column "Total Used Seconds" head "Total Used|Seconds" format 99990.9
column "Idle Seconds" head "Idle|Seconds" format 99990.9
column "Non-Oracle Seconds Used" head "Non-Oracle|Seconds Used" format 99990.9
column "Oracle CPU %" head "Oracle|CPU %" format 99990.9
column "Non-Oracle CPU %" head "Non-Oracle|CPU %" format 99990.9
column "throttled" head "Oracle Throttled|Time (s)" format 99990.9
select to_char(rm.BEGIN_TIME,'YYYY-MM-DD HH24:MI:SS') as BEGIN_TIME
,60 * (select value from v$osstat where stat_name = 'NUM_CPUS') as "Total Available CPU Seconds"
,sum(rm.cpu_consumed_time) / 1000 as "Used Oracle Seconds"
,min(s.value) as "Used Host CPU %"
,(60 * (select value from v$osstat where stat_name = 'NUM_CPUS')) * (min(s.value) / 100) as "Total Used Seconds"
,((100 - min(s.value)) / 100) * (60 * (select value from v$osstat where stat_name = 'NUM_CPUS')) as "Idle Seconds"
,((60 * (select value from v$osstat where stat_name = 'NUM_CPUS')) * (min(s.value) / 100)) - sum(rm.cpu_consumed_time) / 1000 as "Non-Oracle Seconds Used"
,100 - min(s.value) as "Idle Host CPU %"
,((((60 * (select value from v$osstat where stat_name = 'NUM_CPUS')) * (min(s.value) / 100)) - sum(rm.cpu_consumed_time) / 1000) / (60 * (select value from v$osstat where stat_name = 'NUM_CPUS')))*100 as "Non-Oracle CPU %"
,(((sum(rm.cpu_consumed_time) / 1000) / (60 * (select value from v$osstat where stat_name = 'NUM_CPUS'))) * 100) as "Oracle CPU %"
, sum(rm.cpu_wait_time) / 1000 as throttled
from gv$rsrcmgrmetric_history rm
inner join
gV$SYSMETRIC_HISTORY s
on rm.begin_time = s.begin_time
where s.metric_id = 2057
and s.group_id = 2
group by rm.begin_time,s.begin_time
order by rm.begin_time
/
-- PROS/CONS
#############################################
* When you implement a category plan, you specify percentage allocations at a higher level than the database resource manager, and that allocation
is generic across the databases. So if you want to dynamically alter things you have to do it in both DBRM and the IORM plans
* If you want to dynamically change the allocation, just go with the intradatabase plan so you can easily alter the "resource_manager_plan" according to
a scheduler window or just by switching the resource plan
-- FYIs
#############################################
* on resource_manager_cpu_allocation parameter
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16 <-- set to do instance caging
parallel_threads_per_cpu integer 1
resource_manager_cpu_allocation integer 32 <-- this parameter is deprecated, DON'T ALTER IT; if it is set together with cpu_count it takes precedence, see warning below
alter system set cpu_count=32 scope=both sid='dbm1';
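-- note: instance caging only takes effect while a resource plan is active, e.g.:
-- alter system set resource_manager_plan='DAYTIME' scope=both sid='*';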
WARNING: resource_manager_cpu_allocation was introduced in 11.1.0.6 and deprecated right away in 11.1.0.7,
...BUT... if you have it set together with the cpu_count parameter then it takes precedence.
Let's say you set cpu_count to 3 and resource_manager_cpu_allocation to 16 on a system with 32 CPUs,
and a workload is burning all 32 CPUs... what will happen is the server will show as 50% utilized (16 CPUs burned) because
resource_manager_cpu_allocation is set to 16!
* one more reason for deprecating resource_manager_cpu_allocation is that cpu_count already has dependencies on other things like the parallel settings, so it's one less parameter to worry about
* for IORM, on a 70/30 percentage plan directive scheme on the DAYTIME & NIGHTTIME plans.. the percentage allocation only takes effect at the saturation point.. if only
one consumer group is active then that group can get 100% of the IO
-- 70/30 NIGHTTIME taking effect on idle CPU, with only 1 session
BENCHMARK ,INSTNAME ,START ,END , ELAPSED, MBs
----------,---------------,-----------------,-----------------,----------,-------
benchmark ,dbm1 ,05/02/13 05:57:46,05/02/13 05:57:58, 12, 2877
BENCHMARK ,INSTNAME ,START ,END , ELAPSED, MBs
----------,---------------,-----------------,-----------------,----------,-------
benchmark ,dbm1 ,05/02/13 05:57:48,05/02/13 05:58:07, 19, 1817
-- 70/30 NIGHTTIME taking effect on idle CPU, but with more sessions doing IOs
-- etl
BENCHMARK ,INSTNAME ,START ,END , ELAPSED, MBs
----------,---------------,-----------------,-----------------,----------,-------
benchmark ,dbm1 ,05/02/13 06:02:58,05/02/13 06:03:17, 18, 1918
BENCHMARK ,INSTNAME ,START ,END , ELAPSED, MBs
----------,---------------,-----------------,-----------------,----------,-------
benchmark ,dbm1 ,05/02/13 06:02:58,05/02/13 06:03:17, 19, 1817
-- reporting
BENCHMARK ,INSTNAME ,START ,END , ELAPSED, MBs
----------,---------------,-----------------,-----------------,----------,-------
benchmark ,dbm1 ,05/02/13 06:03:00,05/02/13 06:03:30, 30, 1151
BENCHMARK ,INSTNAME ,START ,END , ELAPSED, MBs
----------,---------------,-----------------,-----------------,----------,-------
benchmark ,dbm1 ,05/02/13 06:03:00,05/02/13 06:03:30, 30, 1151
-- 70/30 NIGHTTIME taking effect on 100% CPU utilization and 32 AAS "resmgr:cpu quantum"
BENCHMARK ,INSTNAME ,START ,END , ELAPSED, MBs
----------,---------------,-----------------,-----------------,----------,-------
benchmark ,dbm1 ,05/02/13 05:52:07,05/02/13 05:52:22, 14, 2466
BENCHMARK ,INSTNAME ,START ,END , ELAPSED, MBs
----------,---------------,-----------------,-----------------,----------,-------
benchmark ,dbm1 ,05/02/13 05:52:16,05/02/13 05:55:05, 167, 207
* for PX, if you set the PARALLEL_DEGREE_LIMIT_P1=4 then a session will be flagged with a "Req. DOP" of 32 but it will really have an "Actual DOP" of 4
TIME CONSUMER_GROUP_NAME CPU Seconds Oracle Seconds % of Host CPU Time (s) IO_MEGABYTES
------------------- ------------------------------ --------------- -------------- ------------- ---------------- ------------
2013-02-05 05:45:36 ETL 1920 1521.4 79.2 1319.5 68511
MAINT 1920 0.0 0.0 0.0 0
OTHER_GROUPS 1920 10.9 0.6 156.6 0
REPORTING 1920 389.3 20.3 1630.1 1952
_ORACLE_BACKGROUND_GROUP_ 1920 0.0 0.0 0.0 20
TIME CONSUMER_GROUP_NAME CPU Seconds Oracle Seconds % of Host CPU Time (s) IO_MEGABYTES
------------------- ------------------------------ --------------- -------------- ------------- ---------------- ------------
2013-02-05 05:46:37 ETL 1920 1589.9 82.8 343.5 0
MAINT 1920 0.0 0.0 0.0 0
OTHER_GROUPS 1920 10.8 0.6 149.0 0
REPORTING 1920 322.3 16.8 2074.0 15016
_ORACLE_BACKGROUND_GROUP_ 1920 0.0 0.0 0.0 20
* for PX, if you set the PARALLEL_DEGREE_LIMIT_P1=4 and if you have a GROUP BY on the SQL then that part of the operation will be another 4 PX slaves
Username QC/Slave Group SlaveSet SID Slave INS STATE WAIT_EVENT QC SID QC INS Req. DOP Actual DOP SQL_ID
------------ -------- ------ -------- ------ --------- -------- ------------------------------ ------ ------ -------- ---------- -------------
ORACLE QC 684 1 WAIT PX Deq: Execute Reply 684 7bb5hpfv8jd4a
- p028 (Slave) 1 1 3010 1 WAIT PX Deq: Execution Msg 684 1 16 4 7bb5hpfv8jd4a
- p004 (Slave) 1 1 2721 1 WAIT PX Deq: Execution Msg 684 1 16 4 7bb5hpfv8jd4a
- p012 (Slave) 1 1 391 1 WAIT PX Deq: Execution Msg 684 1 16 4 7bb5hpfv8jd4a
- p020 (Slave) 1 1 1559 1 WAIT PX Deq: Execution Msg 684 1 16 4 7bb5hpfv8jd4a
- p060 (Slave) 1 2 3013 1 WAIT cell smart table scan 684 1 16 4 7bb5hpfv8jd4a
- p036 (Slave) 1 2 685 1 WAIT cell smart table scan 684 1 16 4 7bb5hpfv8jd4a
- p044 (Slave) 1 2 1459 1 WAIT cell smart table scan 684 1 16 4 7bb5hpfv8jd4a
- p052 (Slave) 1 2 2237 1 WAIT cell smart table scan 684 1 16 4 7bb5hpfv8jd4a
* for CANCEL_SQL, if you are currently on the DAYTIME plan and there's already a long-running SQL, and you switch to NIGHTTIME (which has the CANCEL_SQL directive),
the SWITCH_TIME of 15secs takes effect upon activation. So a SQL that's already been running for 1000secs will be canceled at 1015secs if you switch to the NIGHTTIME
plan at 1000secs
* for the MAINT consumer group, where we have a percentage allocation for RMAN backups, it only kicks in once you execute the backup command "backup incremental level 0 database;"
and if you run reports and ETL while the backup is running.. the percentage allocation for the rest of the consumer groups takes effect.. below, ETL still got higher
IO priority than reporting and backups on the NIGHTTIME resource plan
INST_ID USERNAME SERVICE_NAME RESOURCE_CONSUMER_GROUP COUNT(*)
---------- ------------------------------ ---------------------------------------------------------------- -------------------------------- ----------
1 SYS SYS$USERS MAINT 1 <-- the RMAN session
1 SYS SYS$USERS OTHER_GROUPS 8
2 SYS SYS$USERS OTHER_GROUPS 3
TIME CONSUMER_GROUP_NAME CPU Seconds Oracle Seconds % of Host CPU Time (s) IO_MEGABYTES
------------------- ------------------------------ --------------- -------------- ------------- ---------------- ------------
2013-02-05 07:01:37 ETL 1920 0.0 0.0 0.0 0
MAINT 1920 1.1 0.1 0.0 3172 <-- RMAN
OTHER_GROUPS 1920 1.1 0.1 0.0 17
REPORTING 1920 0.0 0.0 0.0 0
_ORACLE_BACKGROUND_GROUP_ 1920 0.0 0.0 0.0 25
Total Available Used Oracle Throttled
TIME CONSUMER_GROUP_NAME CPU Seconds Oracle Seconds % of Host CPU Time (s) IO_MEGABYTES
------------------- ------------------------------ --------------- -------------- ------------- ---------------- ------------
2013-02-05 07:02:37 ETL 1920 6.2 0.3 5.1 4873
MAINT 1920 15.5 0.8 0.0 65644 <-- RMAN with reports and ETL
OTHER_GROUPS 1920 0.0 0.0 0.0 0
REPORTING 1920 1.6 0.1 0.9 3964
_ORACLE_BACKGROUND_GROUP_ 1920 0.0 0.0 0.0 20
-- etl
BENCHMARK ,INSTNAME ,START ,END , ELAPSED, MBs
----------,---------------,-----------------,-----------------,----------,-------
benchmark ,dbm1 ,05/02/13 07:03:32,05/02/13 07:03:57, 25, 1381
BENCHMARK ,INSTNAME ,START ,END , ELAPSED, MBs
----------,---------------,-----------------,-----------------,----------,-------
benchmark ,dbm1 ,05/02/13 07:03:32,05/02/13 07:03:57, 25, 1381
-- reporting
BENCHMARK ,INSTNAME ,START ,END , ELAPSED, MBs
----------,---------------,-----------------,-----------------,----------,-------
benchmark ,dbm1 ,05/02/13 07:03:34,05/02/13 07:04:10, 36, 959
BENCHMARK ,INSTNAME ,START ,END , ELAPSED, MBs
----------,---------------,-----------------,-----------------,----------,-------
benchmark ,dbm1 ,05/02/13 07:03:34,05/02/13 07:04:10, 36, 959
}}}
! INTERDATABASE IORM PLAN
{{{
-- INTERDATABASE IORM PLAN
#############################################
* do a "show parameter db_unique_name" on each of the databases; this is the name you'll put on the IORM plan
# main commands
alter iormplan dbPlan=( -
(name=dbm, level=1, allocation=60), -
(name=exadb, level=1, allocation=40), -
(name=other, level=2, allocation=100));
alter iormplan active
list iormplan detail
list iormplan attributes objective
alter iormplan objective = auto
# list
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
# implement
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\( \(name=dbm, level=1, allocation=60\), \(name=exadb, level=1, allocation=40\), \(name=other, level=2, allocation=100\)\);'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan active'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective = low_latency'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
# revert
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan catPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan inactive'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
}}}
''FYI:''
* IMPLICITLY CAPTURED baselines are ACCEPTED only if they are the FIRST baselines for the statement
* If a SQL baseline already exists and the same SQL generates a new plan for any reason, a new SPM baseline will be created, but with NOT ACCEPTED status. That means it WILL NOT BE USED unless we do something to explicitly enable it.
* accepted baseline, but at runtime Oracle is still using the worse plan
look at the v$sql.SQL_PLAN_BASELINE column for your query's cursor to see which baseline (if any) is actually used (it will be empty if none is), or run dbms_xplan on your cursor and look at the "Note" section. Depending on the results you can dig deeper, e.g. by dumping a 10053 trace and seeing why the baseline was (not) chosen.
* index add
If you have a baseline on your query that uses a full table scan and you then add an index, Oracle will not automatically switch to using that index, even though a new plan with the index will likely be generated and be vastly more efficient. You have to do something to enable the new plan: evolve it, simply force it, or drop the baseline.
* index drop
Oracle tries to reproduce the plan from the baseline during parse. If it cannot (i.e. the index is not there), it cannot use that plan, even if it is ACCEPTED, and needs to either try other ACCEPTED plans from the baseline or parse a new one.
The plan with the index remains ACCEPTED, I believe (did not check on that), but will not be used.
Keep in mind that you can have several ACCEPTED plans for the same statement and Oracle can choose between them during parse. ACCEPTED does not mean BEST, nor does it mean THE ONLY ONE that can be used. Rather it means: it was approved at some point (which you can do manually, btw)
* plan flips on ACCEPTED plans
it can also populate baselines with several ACCEPTED plans for the same sql, meaning that your sql will execute sometimes with plan A and sometimes with plan B. These plan flips can be disastrous if your users are relying on stable execution of that sql
* alter object or indexes, add column
both Profiles and Baselines do not care about the contents of the object or even the structure of the object they are "attached" to. The only thing that seems to matter is the TEXT OF THE QUERY (by the way, upper/lower case or a different number of "spaces" are irrelevant; see the signature demo just before the Build HOWTO below).
* auto capture using logon trigger
another thing to consider is that you could turn baseline capture on at a session level for those sessions if you can identify them e.g. using logon trigger.
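For example, a quick way to see that the match is text-based but case/whitespace-insensitive is to compare exact-matching signatures (a minimal sketch; the SQL text is hypothetical):
{{{
-- both calls return the same signature despite different case/spacing
SELECT dbms_sqltune.sqltext_to_signature('select * from t where n=45') sig_a,
       dbms_sqltune.sqltext_to_signature('SELECT * FROM t   WHERE n=45') sig_b
FROM dual;
}}}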
{{{
Build HOWTO:
* auto capture
show parameter optimizer_capture_sql_plan_baselines
alter system set optimizer_capture_sql_plan_baselines=TRUE; -- to turn on auto capture system wide
ALTER SESSION SET optimizer_capture_sql_plan_baselines=TRUE; -- to turn on auto capture on session level
Logon Trigger on auto capture:
DROP TRIGGER SYS.SESSION_OPTIMIZATIONS;
CREATE OR REPLACE TRIGGER SYS.session_optimizations after logon on database
begin
if (user in ('HR')) then
execute immediate('ALTER SESSION SET optimizer_capture_sql_plan_baselines=TRUE');
end if;
end;
/
* manual capture from cursor cache - individual SQL
-- Then, let's build the baseline
var nRet NUMBER
EXEC :nRet := dbms_spm.load_plans_from_cursor_cache('1z5x9vpqr5t95');
-- And finally, let's double check that the baseline has really been built
SET linesize 180
colu sql_text format A30
SELECT plan_name, sql_text, optimizer_cost, accepted
FROM dba_sql_plan_baselines
WHERE to_char(sql_text) LIKE 'SELECT * FROM t WHERE n=45'
ORDER BY signature, optimizer_cost
/
PLAN_NAME SQL_TEXT OPTIMIZER_COST ACC
------------------------------ ------------------------------ -------------- ---
SQL_PLAN_01yu884fpund494ecae5c SELECT * FROM t WHERE n=45 68764 YES
-- manual load a SQL and plan hash value
var v_num number;
exec :v_num:=dbms_spm.load_plans_from_cursor_cache(sql_id => 'duk2ypk5fz9g6',plan_hash_value => 1357081020 );
* manual capture from cursor cache - Parsing Schema
other methods:
> The entire schema
> Particular module/action
> All similar SQLs
> SQL tuning sets (through dbms_spm.load_plans_from_sqlset function)
DECLARE
nRet NUMBER;
BEGIN
nRet := dbms_spm.load_plans_from_cursor_cache(
attribute_name => 'PARSING_SCHEMA_NAME',
attribute_value => 'HR'
);
END;
/
* forcing an execution plan - fake baselines
http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/
declare
m_clob clob;
begin
select
sql_fulltext
into
m_clob
from
v$sql
where
sql_id = '&m_sql_id_1'
and child_number = &m_child_number_1
;
dbms_output.put_line(m_clob);
dbms_output.put_line(
dbms_spm.load_plans_from_cursor_cache(
sql_id => '&m_sql_id_2',
plan_hash_value => &m_plan_hash_value_2,
sql_text => m_clob,
fixed => 'NO',
enabled => 'YES'
)
);
end;
/
One more thing worth pointing out: if the unhinted statement uses bind variables, the new hinted statement has to use them as well.
declare
stm varchar2(4000);
a1 varchar2(128) := '999999';
TYPE CurTyp IS REF CURSOR;
tmpcursor CurTyp;
begin
stm:='select /*HINT */ * from t1 where id = :1';
open tmpcursor for stm using a1;
end;
/
* migrate (dump) baseline from one database to another
1) create the staging table in the source database,
exec DBMS_SPM.CREATE_STGTAB_BASELINE('STAGE_SPM');
2) pack SQL baselines into the staging table,
exec :n:=DBMS_SPM.PACK_STGTAB_BASELINE('STAGE_SPM');
SET long 1000000
SET longchunksize 30
colu sql_text format a30
colu optimizer_cost format 999,999 heading 'Cost'
colu buffer_gets format 999,999 heading 'Gets'
SELECT sql_text, OPTIMIZER_COST, CPU_TIME, BUFFER_GETS, COMP_DATA FROM STAGE_SPM;
3) copy the staging table to the target database,
4) and unpack baselines from the staging table into the SQL Management Base
exec :n:=DBMS_SPM.UNPACK_STGTAB_BASELINE('STAGE_SPM');
For Profiles:
EXEC dbms_sqltune.create_stgtab_sqlprof('profile_stg');
EXEC dbms_sqltune.pack_stgtab_sqlprof(staging_table_name => 'profile_stg');
For SPM Baselines:
var n NUMBER
EXEC dbms_spm.create_stgtab_baseline('baseline_stg');
EXEC :n := dbms_spm.pack_stgtab_baseline('baseline_stg');
* evolve
SQL> SELECT sql_handle FROM dba_sql_plan_baselines
WHERE plan_name='SQL_PLAN_4wm24mwmr8n9z0efda8a7';
SQL_HANDLE
------------------------------
SYS_SQL_4e4c449f2774513f
SQL> SET long 1000000
SQL> SET longchunksize 180
SELECT dbms_spm.evolve_sql_plan_baseline('SQL_bb77a3e93c0ea7f3') FROM dual;
* disable
declare
myplan pls_integer;
begin
myplan:=DBMS_SPM.ALTER_SQL_PLAN_BASELINE (sql_handle => '&sql_handle',plan_name => '&plan_name',attribute_name => 'ENABLED', attribute_value => 'NO');
end;
/
* drop
DECLARE
plans_dropped PLS_INTEGER;
BEGIN
plans_dropped := DBMS_SPM.drop_sql_plan_baseline (
sql_handle => 'SYS_SQL_51dcc66dae94c669',
plan_name => 'SQL_PLAN_53r66dqr99jm98a727c3d');
DBMS_OUTPUT.put_line(plans_dropped);
END;
/
* configure
DBMS_SPM.CONFIGURE('plan_retention_weeks', <number of weeks to retain unused plans before they are purged>);
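-- e.g. (the value 5 is illustrative): keep unused plans for 5 weeks before purging
exec DBMS_SPM.CONFIGURE('plan_retention_weeks', 5);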
Scripts: SQL PLAN MANAGEMENT [ID 456518.1]
@find_sql_using_baseline
-- find SPM baseline by SQL_ID
col parsing_schema format a8
col created format a10
SELECT parsing_schema_name parsing_schema, created, plan_name, sql_handle, sql_text, optimizer_cost, accepted, enabled, origin
FROM dba_sql_plan_baselines
WHERE signature IN (SELECT exact_matching_signature FROM v$sql WHERE sql_id='&SQL_ID')
/
-- find sql using baseline
SELECT b.sql_handle, b.plan_name, s.child_number,
s.plan_hash_value, s.executions
FROM v$sql s, dba_sql_plan_baselines b
WHERE s.exact_matching_signature = b.signature(+)
AND s.sql_plan_baseline = b.plan_name(+)
AND s.sql_id='&SQL_ID'
/
@baselines
@baseline_hints
@create_baseline
@create_baseline_awr
col parsing_schema format a8
col created format a20
col sql_handle format a25
col sql_text format a40
SELECT parsing_schema_name parsing_schema, TO_CHAR(created,'MM/DD/YY HH24:MI:SS') created, plan_name, sql_handle, substr(sql_text,1,35) sql_text, optimizer_cost, accepted, enabled, origin
FROM dba_sql_plan_baselines order by 2 asc;
set lines 200
select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','typical'))
/
}}}
Complete HOWTO is here https://www.evernote.com/l/ADCcW786eL1Ei5Z-dd3-CzTRw9ddUXyNuS8
LMAX - How to Do 100K TPS at Less than 1ms Latency http://www.infoq.com/presentations/LMAX
https://en.wikipedia.org/wiki/Hybrid_transactional/analytical_processing
https://www.kdnuggets.com/2016/11/evaluating-htap-databases-machine-learning-applications.html
http://lists.w3.org/Archives/Public/public-coremob/2012Sep/0021.html
http://engineering.linkedin.com/linkedin-ipad-5-techniques-smooth-infinite-scrolling-html5
http://www.html5rocks.com/en/
http://www.hackintosh.com/
http://lifehacker.com/348653/install-os-x-on-your-hackintosh-pc-no-hacking-required
http://www.sysprobs.com/hackintosh-10-6-7-snow-leopard-on-virtualbox-4-working-sound
http://www.sysprobs.com/install-mac-snow-leopard-1063-oracle-virtualbox-32-apple-intel-pc
http://geeknizer.com/install-snow-leopard-virtualbox/
http://www.youtube.com/watch?v=PLL_qOLpqs4
http://lifehacker.com/5841604/the-always-up+to+date-guide-to-building-a-hackintosh
-- on final cut pro
http://www.disturbingnewtrend.blogspot.com/
http://www.insanelymac.com/forum/index.php?showtopic=69855
-- virtual box preinstalled
http://isohunt.com/torrent_details/261669825/mac+os+x+snow+leopard+hazard?tab=summary
-- vmware preinstalled
http://isohunt.com/torrent_details/326417697/Mac+OS+X+Snow+Leopard+10.6.8+VMware+Image+Ultimate+Build?tab=summary
-- osx lion
https://www.virtualbox.org/wiki/Mac%20OS%20X%20build%20instructions
http://www.sysprobs.com/guide-mac-os-x-10-7-lion-on-virtualbox-with-windows-7-and-intel-pc
http://www.sysprobs.com/create-bootable-lion-os-installer-image-vmware-windows-intel-based-computers
http://www.sysprobs.com/working-method-install-mac-107-lion-vmware-windows-7-intel-pc
http://www.youtube.com/watch?v=-fxz7jVI9kQ
http://ewangi.info/275/how-to-install-mac-os-x-lion-in-vmware-or-virtualbox-on-pc/
<<showtoc>>
! ''Comparing Hadoop Appliances''
http://www.pythian.com/news/29955/comparing-hadoop-appliances/
http://www.cloudera.com/blog/2010/08/hadoophbase-capacity-planning/
! ''Hadoop VM''
https://ccp.cloudera.com/display/SUPPORT/Cloudera%27s+Hadoop+Demo+VM
https://ccp.cloudera.com/display/SUPPORT/Cloudera%27s+Hadoop+Demo+VM+for+CDH4
https://ccp.cloudera.com/display/SUPPORT/Hadoop+Tutorial
! ''12TB/hour data load''
High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database http://www.oracle.com/technetwork/bdc/hadoop-loader/connectors-hdfs-wp-1674035.pdf
! ''Hadoop Applications''
http://blog.revolutionanalytics.com/2010/12/how-orbitz-uses-hadoop-and-r-to-optimize-hotel-search.html
! ''Hadoop Tools''
http://toadforcloud.com/pageloader.jspa?sbinPageName=hadoop.html&sbinPageTitle=Quest%20Solutions%20for%20Hadoop
! ''Guy Harrison Articles''
http://guyharrison.squarespace.com/blog/tag/hadoop
http://guyharrison.squarespace.com/blog/tag/r
http://guyharrison.squarespace.com/blog/tag/cassandra
http://guyharrison.squarespace.com/blog/tag/hive
http://guyharrison.squarespace.com/blog/tag/mongodb
http://guyharrison.squarespace.com/blog/tag/nosql
http://guyharrison.squarespace.com/blog/tag/pig
http://guyharrison.squarespace.com/blog/tag/python
http://guyharrison.squarespace.com/blog/tag/sqoop
! ''Hadoop developer course - follow up''
http://www.cloudera.com/content/cloudera/en/resources/library/training/cloudera-essentials-for-apache-hadoop-the-motivation-for-hadoop.html
! ''Free large data sets''
http://stackoverflow.com/questions/2674421/free-large-datasets-to-experiment-with-hadoop
! Hadoop2
Introduction to MapReduce with Hadoop on Linux http://www.linuxjournal.com/content/introduction-mapreduce-hadoop-linux
Hadoop2 http://hortonworks.com/blog/apache-hadoop-2-is-ga/
! ''Hadoop Tutorials''
http://www.cloudera.com/content/cloudera/en/resources/library.html?category=cloudera-resources:using-cloudera/tutorials&p=1
http://hortonworks.com/tutorials/
! end
https://github.com/t3rmin4t0r/notes/wiki/Hadoop-Tuning-notes
{{{
# Timeouts, slowness and issues as you scale your query?
Scaling down data usually brings down the bi-partite traffic (i.e total # of mappers X total # of reducers) and total amount of shuffle load produced by a faster engine.
Tez is faster than MRv2 and that pushes the standard configured linux machine much harder & runs at a higher network utilization.
There's no easy way to detect the following issues from any hadoop logs.
Hortonworks has a learning automation engine to constantly measure & recommend settings for your cluster as it grows - [Hortonworks SmartSense](http://hortonworks.com/info/hortonworks-smartsense/)
## Known bad hardware + kernels
Centos 6.x kernel bugs on Intel 10Gbps drivers
GRO/LRO - https://access.redhat.com/solutions/20278
## Known problem kernel features
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
Centos 7 with auto cgroup turned on.
$ cat /proc/self/autogroup
and your system loadavg is > the number printed by this script
https://gist.github.com/t3rmin4t0r/605cefddd32c427b7dc0
## Known JVM issues
If you run Java in server mode, inside a LAN, the following issue is killing your DNS server
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6247501
/etc/init.d/nscd restart <-- dns fix
## Basic tuning for 10Gbps + >3 nodes
here's pretty much everything that Rajesh & I have collected over 3 years.
(fix ethX params)
```
sysctl -w "net.core.somaxconn=16384"
sysctl -w "net.core.netdev_max_backlog=20000" <— backlog before dropping packets
sysctl –w "net.core.rmem_max = 134217728"<— Max amount of read/write buffers that can be set via setSockOpt/client side
sysctl –w "net.core.wmem_max = 134217728"
sysctl –w "net.core.rmem_default = 524288" <— Default read/write buffers set by kernel
sysctl –w "net.core.wmem_default = 524288"
sysctl -w "net.ipv4.tcp_rmem="4096 65536 134217728" <— min/start/max. even if 30K connections are there in the node, 30K * 64KB ~ 2 GB?. Should be fine in machine with large RAM.
sysctl -w "net.ipv4.tcp_wmem="4096 65536 134217728"
sysctl -w "net.ipv4.ip_local_port_range ="4096 61000"
sysctl -w "net.ipv4.conf.ethX.forwarding=0" <— Change ethX to relevant nic
sysctl -w "net.ipv4.tcp_mtu_probing=1"
sysctl -w "net.ipv4.tcp_fin_timeout=4"
sysctl -w "net.ipv4.conf.lo.forwarding=0"
sysctl -w "vm.dirty_background_ratio=80"
sysctl -w "vm.dirty_ratio=80"
sysctl -w "vm.swappiness=0"
```
Now these aren't really performance options, those are a different discussion (DSack, jumbo frames, slow_start_after_idle), so email me if you've bought some fancy hardware :)
}}}
<<showtoc>>
https://www.udemy.com/home/my-courses/learning/?instructor_filter=14145628
! Learn Big Data: The Hadoop Ecosystem Masterclass
https://www.udemy.com/learn-big-data-the-hadoop-ecosystem-masterclass/learn/v4/content
! Learn DevOps: Scaling apps On-Premise and in the Cloud
https://www.udemy.com/learn-devops-scaling-apps-on-premise-and-in-the-cloud/learn/v4/content
! Learn Devops: Continuously Deliver Better Software
https://www.udemy.com/learn-devops-continuously-deliver-better-software/learn/v4/content
http://perfdynamics.blogspot.com/2013/04/harmonic-averaging-of-monitored-rate.html
http://www.huffingtonpost.com/colm-mulcahy/mean-questions-with-harmonious-answers_b_2469351.html
http://www.pugetsystems.com/labs/articles/Z87-H87-H81-Q87-Q85-B85-What-is-the-difference-473/
http://www.evernote.com/shard/s48/sh/52071d59-e00d-4bc3-8d47-481d382e150f/06fe80eca9566349595203a1255afb2c
http://www.evernote.com/shard/s48/sh/781cbf9a-4ef0-4b97-a9fa-87d8b65c8e52/2b5ff04e9561ad1450e630db09893f5f
http://www.holovaty.com/writing/aws-notes/
Heroku (cloud platform as a service (PaaS))
http://stackoverflow.com/questions/11008787/what-exactly-is-heroku
/***
|Name:|HideWhenPlugin|
|Description:|Allows conditional inclusion/exclusion in templates|
|Version:|3.1 ($Rev: 3919 $)|
|Date:|$Date: 2008-03-13 02:03:12 +1000 (Thu, 13 Mar 2008) $|
|Source:|http://mptw.tiddlyspot.com/#HideWhenPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
For use in ViewTemplate and EditTemplate. Example usage:
{{{<div macro="showWhenTagged Task">[[TaskToolbar]]</div>}}}
{{{<div macro="showWhen tiddler.modifier == 'BartSimpson'"><img src="bart.gif"/></div>}}}
***/
//{{{
window.hideWhenLastTest = false;
window.removeElementWhen = function(test,place) {
window.hideWhenLastTest = test;
if (test) {
removeChildren(place);
place.parentNode.removeChild(place);
}
};
merge(config.macros,{
hideWhen: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( eval(paramString), place);
}},
showWhen: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( !eval(paramString), place);
}},
hideWhenTagged: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( tiddler.tags.containsAll(params), place);
}},
showWhenTagged: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( !tiddler.tags.containsAll(params), place);
}},
hideWhenTaggedAny: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( tiddler.tags.containsAny(params), place);
}},
showWhenTaggedAny: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( !tiddler.tags.containsAny(params), place);
}},
hideWhenTaggedAll: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( tiddler.tags.containsAll(params), place);
}},
showWhenTaggedAll: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( !tiddler.tags.containsAll(params), place);
}},
hideWhenExists: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( store.tiddlerExists(params[0]) || store.isShadowTiddler(params[0]), place);
}},
showWhenExists: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( !(store.tiddlerExists(params[0]) || store.isShadowTiddler(params[0])), place);
}},
hideWhenTitleIs: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( tiddler.title == params[0], place);
}},
showWhenTitleIs: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( tiddler.title != params[0], place);
}},
'else': { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
removeElementWhen( !window.hideWhenLastTest, place);
}}
});
//}}}
How to Use the Solaris Truss Command to Trace and Understand System Call Flow and Operation [ID 1010771.1] <-- good stuff
Case Study: Using DTrace and truss in the Solaris 10 OS http://www.oracle.com/technetwork/systems/articles/dtrace-truss-jsp-140760.html
How to Analyze High CPU Utilization In Solaris [ID 1008930.1] <-- lockstat, kstat, dtrace
How to use DTrace and mdb to Interpret vmstat Statistics [ID 1009494.1]
-- sys time kernel profiling
http://dtracebook.com/index.php/Kernel#lockstat_Provider
http://wikis.sun.com/display/DTrace/lockstat+Provider
http://blogs.technet.com/b/markrussinovich/archive/2008/04/07/3031251.aspx
http://helgeklein.com/blog/2010/01/how-to-analyze-kernel-performance-bottlenecks-and-find-that-atis-catalyst-drivers-cause-50-cpu-utilization/
http://prefetch.net/blog/index.php/2010/03/08/breaking-down-system-time-usage-in-the-solaris-kernel/ <-- Breaking down system time usage in the Solaris kernel
http://orainternals.wordpress.com/2008/10/31/performance-issue-high-kernel-mode-cpu-usage/ , http://www.orainternals.com/investigations/high_cpu_usage_shmdt.pdf, http://www.pythian.com/news/1324/oracle-performance-issue-high-kernel-mode-cpu-usage/ <-- ''riyaj high sys''
http://www.oracledatabase12g.com/archives/resolving-high-cpu-usage-on-oracle-servers.html <-- oracle metalink sys high
http://www.freelists.org/post/oracle-l/Solaris-CPU-Consumption,3
http://www.solarisinternals.com/wiki/index.php/CPU/Processor <-- ''good drill down examples - filebench''
AAA Pipeline Consumes 100% CPU [ID 1083994.1]
http://www.princeton.edu/~unix/Solaris/troubleshoot/process.html <-- LWP
http://web.archiveorange.com/archive/v/ejz8xZLNsakZx7OAzhCz <-- high sys cpu time, any way to use dtrace to do troubleshooting?
http://opensolaris.org/jive/thread.jspa?threadID=103737 <-- Thread: DBWR write performance
-- ''lockstat''
http://dtracebook.com/index.php/Kernel#lockstat_Provider
http://wikis.sun.com/display/DTrace/lockstat+Provider
How to Analyze High CPU Utilization In Solaris [ID 1008930.1] <-- lockstat, kstat, dtrace
A Primer On Lockstat [ID 1005868.1]
https://blogs.oracle.com/sistare/entry/measuring_lock_spin_utilization
-- ''stack trace''
https://blogs.oracle.com/sistare/entry/lies_damned_lies_and_stack
-- ''mdb''
https://blogs.oracle.com/sistare/entry/wicked_fast_memstat
https://blogs.oracle.com/optimizer/entry/how_does_the_method_opt
* use METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY' - for the initial histogram creation
* then do a METHOD_OPT=>'FOR ALL COLUMNS SIZE REPEAT' - for subsequent runs (see the sketch below)
** also consider the "sample size"
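A minimal sketch of that two-step approach (table name T is hypothetical):
{{{
-- initial gather: build histograms only on columns with skewed data
exec dbms_stats.gather_table_stats(user, 'T', method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY');
-- subsequent gathers: re-create only the histograms that already exist
exec dbms_stats.gather_table_stats(user, 'T', method_opt => 'FOR ALL COLUMNS SIZE REPEAT');
}}}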
statistics gathering - locking table stats... good for a static table, but could be bad if the low and high values stop being representative
http://translate.google.com/translate?sl=auto&tl=en&u=http://www.dbform.com/html/2010/1200.html
http://neerajbhatia.wordpress.com/2010/11/12/everything-you-want-to-know-about-oracle-histograms-part-1/
http://neerajbhatia.files.wordpress.com/2010/11/everything-you-want-to-know-about-oracle-histograms-part-1.pdf
http://structureddata.org/2008/10/14/dbms_stats-method_opt-and-for-all-indexed-columns/
''starting 10g onwards''
- cursors in the shared pool are not invalidated immediately after a stats gather; invalidation is spread out over a window of up to 5 hours (rolling invalidation)... so specify ''NO_INVALIDATE=>FALSE'' to make new stats take effect instantly (see the sketch below)
- the auto gathering of histograms based on column usage in "where" clauses started
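A minimal sketch (table name T is hypothetical):
{{{
-- bypass rolling invalidation so dependent cursors are invalidated immediately
exec dbms_stats.gather_table_stats(user, 'T', no_invalidate => FALSE);
}}}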
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=59690156
http://www.hplsql.org/doc
http://www.hplsql.org/features
! course
* udemy - Talend For Big Data Integration Course : Beginner to Expert - https://www.udemy.com/talend-for-big-data/learn/v4/overview
https://www.youtube.com/results?search_query=talend+hadoop
Talend for Big Data https://www.safaribooksonline.com/library/view/talend-for-big/9781782169499/#toc
http://www.talend.com/products/big-data/big-data-open-studio/
Talend Open Studio for Big Data for Dummies https://info.talend.com/en_bd_bd_dummies.html?type=productspage&_ga=2.77959124.1360921166.1516402754-1190038700.1516402754&_gac=1.205267492.1516402754.EAIaIQobChMItc3Rt5Dl2AIVm4izCh2-XQhcEAAYASAAEgKRn_D_BwE
https://github.com/t3rmin4t0r/notes/wiki/Hive:-Production-Realities-(WIP)
{{{
In the last 2 years, I have heard a lot of customer requirements that are downright ordinary.
Below is a list of these ordinary requirements that are often unvoiced, because they are taken for granted.
I'm a performance engineer - but here are some things that outweigh performance when it comes to really running it in production.
### #1 Handle a concurrent ETL pipeline into the same data warehouse
* ETL in new partitions, while a query is running against old partitions
* Most queries run against fresh partitions within minutes of insertions
* ETL can be de-prioritized against the rest of the workload
In real production clusters, you don't load up ~1Tb of data once and query the exact same data-set. Nearly every few hours at least, you get new data to insert.
Also Infra teams frown on needing 2 clusters for ETL/query.
This is by-far the most common reason to use Hive - ETL+JDBC/ODBC in one go.
### #2 ETL needs to be complete once a partition is in place
* Metadata updates like Statistics should be automatic
* New partitions are visible to existing clients
* Adding a partition shouldn't need a full-pass for metadata collection
If your query engine needs some sort of stats, you should collect them during ETL - in production, that also counts as part of the ETL workload. You cannot wave away that cost in your system from the actual insertion pipeline -
Hive does the right thing there, with `hive.stats.autogather` and `hive.stats.dbclass=fs`.
Even more relevantly, any query speedups gained by artificially updating statistics by hand, or by changing cluster settings to run stats generation, are alright for demos, but in reality with an ETL firehose that doesn't work - the Hive implementation extrapolates from existing statistics when a query spans cold and fresh data.
### #3 ETL+Query is what matters
* If you can satisfy #1 and #2, then real life performance can be measured
* Most people care about the freshness of data
Taking 2+ hours of cluster downtime to load, with manual intervention & stats tuning cannot be required for a ~10s query.
That can indeed be useful to demonstrate speed, but the reality ends up taking a huge bite out of those approaches.
### #4 Failure tolerance
* Automatic failure tolerance is a must for large scale systems
* Node failures should not affect running queries
* At the very least, they shouldn't affect the next query
In this context, Hive's ACID impl is built for same minute delivery (#3) into a current partition, with repeatable read during retries (#4).
Several SQL solutions competing with hive are out of the picture already, but there's more to Hive that really helps me not lose sleep if I had a pager.
### #5 Cluster growth - can you add new machines/remove old machines
* Machines always need maintenance (otherwise they'd have taken over - re: my nick)
* This is a direct counter-part of #4
* If you can fail-over running queries, you can pull worker machines out
* Adding new machines is more of a soft requirement - otherwise you can't put them back
Hive leaves this up to the well proven YARN implementation to do this.
In this context, blacklisting broken machines is the least the system should do - but old/new machine swaps imply that they are functional, but in need of decommission.
### #6 Cluster growth - bigger machines or more machines?
* For production, you should be able to upgrade a cluster simply by adding more machines (assuming #5, then see #10)
* If your execution engine is limited by a single node's RAM, then this obviously fails
* Usually new purchases are also faster machines (more RAM, more disk)
* So your query platform cannot assume identical machines in all query plans
* Does it need to fixate on the lowest-common h/w config or can it slice/dice work
Your scale of testing can't depend on whether you have 20 x 384Gb vs 60 x 128Gb machines. Sure there's more network traffic, but you can't complain about running out of memory when you scale horizontally with same aggregate RAM.
Hive uses containers of fixed size which is acceptable to YARN, so they can be redistributed across a heterogenous cluster.
### #7 Cluster age - now you have old machines and new machines
* Old machines tend to have worse hardware (barring bad firmware on new)
* They tend to bit-rot, throw errors and lose data that was written to disk
* We can't have any "remove seatbelt" speedups then (disk data + checksums)
* Tasks need retries when errors are detected and basic failure tolerance
Both Mapreduce and Tez have explicit checksums for the data that moves between machines. So does HDFS.
Because of this Hive is relatively safe from this bit-rot, but this is a performance penalty to keep your data safe.
### #8 Isolation between queries - so you have a bad query
* Need a query kill mechanism with an inbuilt cleanup
* Killing a query shouldn't need a cluster restart
* Immediately after a bad query is killed, the cluster should return to usable state
YARN does this for Hive, so that it is no different from the cleanup routines for any other MapReduce application.
And since that feature has been robust and well-tested, there will not be any orphan tasks left behind.
### #9 Rolling upgrades/Fast restarts
* Upgrades don't need downtime - HiveServer2 is actually a client instance
* Starting a new one is almost immediate - no state is stored/reloaded, so restart is not dependent on hcatalog size or cluster count
* The server can't fail because there are too many partitions or tables during a restart cycle
This is particularly important in scenarios where a HiveServer2 needs a restart (like to load a new SSL cert).
The production workloads can't wait 10 minutes while it restarts or even fails because there are too many partitions in the cluster (Mithun from Y! has a post talking about nearly ~100k partitions being added per day).
### #10 "Gone Viral" scenario - ~10x data on one day
* This is the true test of nearly everything in play here
* Most ETL systems will handle it slower, but complete successfully
* Adding more machines in a hurry to temporary increase capacity/throughput
This happens way more often than anyone predicts.
The biggest problem is the opportunity cost - at peak load is when the PMs really want to see how the experiments are doing. Not the day to find out that it went over the RAM in the cluster and that the dashboards are empty because queries are failing.
}}}
tableau forecast model - Holt-Winters exponential smoothing
http://onlinehelp.tableausoftware.com/v8.1/pro/online/en-us/help.html#forecast_describe.html
google search - exponential smoothing
https://www.google.com/search?q=exponential+smoothing&oq=exponential+smoothing&aqs=chrome..69i57j69i60j69i65l3j69i60.3507j0j7&sourceid=chrome&es_sm=119&ie=UTF-8
http://en.wikipedia.org/wiki/Exponential_smoothing
exponential growth functions
https://www.khanacademy.org/math/algebra2/exponential_and_logarithmic_func/exp_growth_decay/v/exponential-growth-functions
simple exponential smoothing
http://freevideolectures.com/Course/3096/Operations-and-Supply-Chain-Management/2
google search - Holt-Winters exponential smoothing model
https://www.google.com/search?q=Holt-Winters+exponential+smoothing+model&oq=Holt-Winters+exponential+smoothing+model&aqs=chrome..69i57.1391j0j7&sourceid=chrome&es_sm=119&ie=UTF-8
The Holt-Winters Approach to Exponential Smoothing: 50 Years Old and Going Strong - Paul Goodwin
http://www.forecasters.org/pdfs/foresight/free/Issue19_goodwin.pdf
Time series Forecasting using Holt-Winters Exponential Smoothing <-- good stuff, with good overview of different smoothing models
http://www.it.iitb.ac.in/~praj/acads/seminar/04329008_ExponentialSmoothing.pdf
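For reference, the standard textbook recurrences (in LaTeX; x_t is the series, alpha/beta/gamma the smoothing parameters, m the season length):
{{{
% simple exponential smoothing
S_t = \alpha x_t + (1-\alpha) S_{t-1}
% additive Holt-Winters: level, trend, seasonal, and h-step forecast
\ell_t = \alpha (x_t - s_{t-m}) + (1-\alpha)(\ell_{t-1} + b_{t-1})
b_t = \beta (\ell_t - \ell_{t-1}) + (1-\beta) b_{t-1}
s_t = \gamma (x_t - \ell_t) + (1-\gamma) s_{t-m}
\hat{x}_{t+h} = \ell_t + h b_t + s_{t+h-m}
}}}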
! some more
Mod-02 Lec-04 Forecasting -- Winter's model, causal models, Goodness of forecast, Aggregate Planning
http://www.youtube.com/watch?v=MbNmIZNy3qI
Excel - Time Series Forecasting - Part 1 of 3
http://www.youtube.com/watch?v=gHdYEZA50KE
Applied regression analysis
http://blog.minitab.com/blog/adventures-in-statistics/applied-regression-analysis-how-to-present-and-use-the-results-to-avoid-costly-mistakes-part-2
http://www.r-tutor.com/
https://www.quora.com/search?q=holt+winter
google search "mean absolute error"
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=mean%20abosolute%20error
http://www.amazon.com/s/ref=sr_pg_1?rh=i%3Aaps%2Ck%3Amean+absolute+error&keywords=mean+absolute+error&ie=UTF8&qid=1401658352
http://www.amazon.com/Predictive-Analytics-Dummies-Business-Personal/dp/1118728963/ref=sr_1_15?ie=UTF8&qid=1401658288&sr=8-15&keywords=mean+absolute+error
Mean Absolute Deviation/Error (MAD or MAE)
http://www.vanguardsw.com/101/mean-absolute-deviation-mad-mean-absolute-error-mae.htm
Predictive Analytical Modelling
http://community.tableausoftware.com/thread/112660
Forecasting Help (nonlinear and trends and exponential smoothing)
http://community.tableausoftware.com/thread/131081 <-- I categorize a "Good" forecast as one with a mean absolute scaled error (MASE) of less than 0.4
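For context, MASE scales the forecast's mean absolute error by the in-sample MAE of the naive one-step forecast, so values below 1 beat the naive method (standard definition, in LaTeX):
{{{
\mathrm{MASE} = \frac{\tfrac{1}{n}\sum_{t=1}^{n} |e_t|}{\tfrac{1}{n-1}\sum_{t=2}^{n} |x_t - x_{t-1}|}
}}}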
Scott Tennican
http://community.tableausoftware.com/people/scotttennican0 <-- the developer of "Foreast" in tableau
http://community.tableausoftware.com/people/scotttennican0/content?filterID=participated
Using R forecasting packages from Tableau
http://boraberan.wordpress.com/2014/01/19/using-r-forecasting-packages-from-tableau/ <-- Program Manager at Tableau Software focus on statistics
R integration, object of different length than original data
http://community.tableausoftware.com/thread/137551
running sum of forecast - holt winters
http://community.tableausoftware.com/thread/137167
Exponential smoothing or Forecasting in tableau - guys questioning the accuracy of the forecast feature
http://community.tableausoftware.com/thread/140495
https://plus.google.com/+KennethBlack/posts <-- this guy investigated on the trend models in tableau, and he works for this company http://blog.qualproinc.com/blog-qualpro-mvt/ctl/all-posts/
Additional Insight and Clarification of #Tableau Exponential Trend Models
http://3danim8.wordpress.com/2013/10/18/additional-insight-and-clarification-of-tableau-exponential-trend-models/
A Help Guide for Better Understanding all of #Tableau Trend Models
http://3danim8.wordpress.com/2013/10/15/a-help-guide-for-better-understanding-all-of-tableau-trend-models/
How to Better Understand and Use Linear Trend Models in #Tableau
http://3danim8.wordpress.com/2013/09/11/how-to-better-understand-and-use-linear-trend-models-in-tableau/
How to use a trick in #Tableau for adjusting a scatter plot trend line
http://3danim8.wordpress.com/2013/08/30/how-to-use-a-trick-in-tableau-for-adjusting-a-scatter-plot-trend-line/
Using #Tableau to Create Dashboards For Tracking Salesman Performance
http://3danim8.wordpress.com/2013/07/02/using-tableau-to-create-dashboards-for-tracking-salesman-performance/
Tableau, Correlations and Scatter Plots
http://3danim8.wordpress.com/2013/06/11/tableau-correlations-and-scatter-plots/
Qualpro company
http://blog.qualproinc.com/blog-qualpro-mvt/ctl/all-posts/
http://blog.qualproinc.com/blog-qualpro-mvt/bid/315478/How-to-Use-Tableau-Turning-Complexity-into-Simplicity
Holt-Winters forecast using ggplot2
http://www.r-bloggers.com/holt-winters-forecast-using-ggplot2/
{{{
. QAS Agent Uninstall Commands
PACKAGE     COMMAND
RPM         # rpm -e vasclnt
DEB         # dpkg -r vasclnt
Solaris     # pkgrm vasclnt
HP-UX       # swremove vasclnt
AIX         # installp -u vasclnt
Mac OS X    # '/<mount>/Uninstall.app/Contents/MacOS/Uninstall' --console --force vasclnt
Hey, in the meantime what I have done in /etc/sudo.conf is disable the Quest modules and re-enable the old ones. This works in Linux; not sure if the same file exists in Solaris. It will break QAS but it can be enabled later.
I tried just uninstalling the package before and the PAM plugin was still enabled, even after reboot. Restoring the previous /etc/sudo.conf works (it is saved with some prefix, with the QAS PAM module disabled).
}}}
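A minimal sketch of that /etc/sudo.conf edit (the Quest plugin path here is an illustrative assumption, not the real QAS path):
{{{
# /etc/sudo.conf -- comment out the Quest policy plugin, restore the stock one
# Plugin sudoers_policy /opt/quest/lib/sudo_vas.so    <-- QAS plugin, disabled (path illustrative)
Plugin sudoers_policy sudoers.so                      # stock sudoers plugin
}}}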
How to work with NULL
https://livesql.oracle.com/apex/livesql/file/content_NNUGN6Z352RH87FHF1GKWWJIP.html
http://kevinclosson.wordpress.com/2010/09/28/configuring-linux-hugepages-for-oracle-database-is-just-too-difficult-part-i/
http://kevinclosson.wordpress.com/2010/10/21/configuring-linux-hugepages-for-oracle-database-is-just-too-difficult-isn%e2%80%99t-it-part-%e2%80%93-ii/
http://kevinclosson.wordpress.com/2010/11/18/configuring-linux-hugepages-for-oracle-database-is-just-too-difficult-isn%E2%80%99t-it-part-%E2%80%93-iii-do-you-really-want-to-configure-the-absolute-minimum-hugepages/
http://martincarstenbach.wordpress.com/2013/05/13/more-on-use_large_pages-in-linux-and-11-2-0-3/
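The gist of those posts is that hand-sizing vm.nr_hugepages is error-prone. A minimal sketch of the usual estimation loop (Linux; walks the running shared memory segments, same idea as the hugepages_settings.sh script those posts discuss):
{{{
# estimate vm.nr_hugepages from the shared memory segments currently allocated
HPG_SZ=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')   # hugepage size in KB
NUM=0
for SEG_BYTES in $(ipcs -m | awk '$5 ~ /^[0-9]+$/ {print $5}'); do
  NUM=$((NUM + SEG_BYTES / (HPG_SZ * 1024) + 1))
done
echo "recommended vm.nr_hugepages = $NUM"
}}}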
http://software.intel.com/en-us/articles/intel-performance-counter-monitor/
http://software.intel.com/en-us/articles/performance-insights-to-intel-hyper-threading-technology/
http://software.intel.com/en-us/articles/intel-hyper-threading-technology-analysis-of-the-ht-effects-on-a-server-transactional-workload
http://software.intel.com/en-us/articles/hyper-threading-be-sure-you-know-how-to-correctly-measure-your-servers-end-user-response-time-1
http://software.intel.com/en-us/articles/intel-64-architecture-processor-topology-enumeration
http://cache-www.intel.com/cd/00/00/01/77/17705_htt_user_guide.pdf <-- Intel Hyper-Threading Technology Technical User's Guide
http://www.evernote.com/shard/s48/sh/6d8994bc-2eb6-4d8c-8880-c5af7a12fbe5/84d73ea45c6bf62779fb9092d4ae3648 <-- Intel Hyper-Threading Technology Technical User's Guide, with annotation
http://herbsutter.com/welcome-to-the-jungle/ <-- cool stuff
http://herbsutter.com/2012/11/30/256-cores-by-2013/
http://plumbr.eu/blog/how-many-threads-do-i-need <-- java thread
http://sg.answers.yahoo.com/question/index?qid=20101013191827AAswM81 <-- clock speed
http://en.wikipedia.org/wiki/Out-of-order_execution
http://en.wikipedia.org/wiki/P6_(microarchitecture)
http://highscalability.com/blog/2013/6/6/paper-memory-barriers-a-hardware-view-for-software-hackers.html <-- hardware view for software hackers
http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2010.07.23a.pdf
''How to Maximise CPU Performance for the Oracle Database on Linux'' https://communities.intel.com/community/itpeernetwork/datastack/blog/2013/08/05/how-to-maximise-cpu-performance-for-the-oracle-database-on-linux
http://afatkulin.blogspot.ca/2013/11/hyperloglog-in-oracle.html
http://blog.aggregateknowledge.com/2012/10/25/sketch-of-the-day-hyperloglog-cornerstone-of-a-big-data-infrastructure/
https://github.com/t3rmin4t0r/notes/wiki/I-Like-Tez,-DevOps-Edition-(WIP)
<<<
I work on Tez, so it would be hard to not like Tez. There's a reason for it too, whenever Tez does something I don't like, I can put my back into it and shove Tez towards that straight & narrow path.
Just before Hortonworks, I was part of the ZCloud division in Zynga - the casual disregard of devs towards operations has hurt my sleep cycle and general peace of mind. I know they're chasing features, but whenever someone puts in a change that takes actual work to roll back, I cringe. And I like how Tez doesn't make the same mistakes here.
First of all, you don't install "Tez" on a cluster. The cluster runs YARN, which means two very important things.
There is no "installing Tez" on your 350 nodes and waiting for it to start up. You throw a few jars into an HDFS directory and write tez-site.xml on exactly one machine pointing to that HDFS path.
This means several important things for a professional deployment of the platform. There's no real pains about rolling upgrades, because there is nothing to restart - all existing queries use the old version, all new queries will automatically use the new version. This is particularly relevant for a 24 hour round-the-clock data insertion pipeline, but perhaps not for a BI centric service where you can bounce it pretty quickly after emailing a few people.
Letting you run different versions of Tez at the same time is very different from how MR used to behave. Personally on a day to day basis, this helps me a lot to share a multi-tenant dev environment & the overall quality of my work - I test everything I write on a big cluster, without worrying about whether I'll nuke anyone else's Tez builds.
Next up, I like how Tez handles failure. You can lose connectivity to half your cluster and the tasks will keep running, perhaps a bit slowly. YARN takes care of bad nodes, cases where the nodes are having disk failures or any such hiccup in the cluster that is normal when you're maxing out 400+ nodes all day long. And coming from the MR school of thought, the task failure scenario is pretty much easily covered with re-execution mechanisms.
There's something important to be covered here with failure. For any task attempt that accidentally kills a container (like a bad UDF with a memory leak) there is no real data loss for any previous data, because the data already committed in a task is not served out of a container at all. The NodeManager serves all the data across the cluster with its own secure shuffle handlers. As long as the NodeManager is running, you could kill the existing containers on that node and hand off that capacity to another task.
This is very important for busy clusters, because as the aphorism goes "The difference between time and space is that you can re-use space". I guess the same applies to a container holding onto an in-memory structure, waiting for its data to be pulled off to another task.
And any hadoop-2 installation already has node manager alerts/restarts already coded in without needing any new devops work to bring back errant nodes back online.
This brings me to the next bit of error tolerance in the system - the ApplicationMaster. The old problem with hadoop-1.x was that the JobTracker was a somewhat single point of failure for any job. With YARN, that went away entirely with the ApplicationMaster being coded particularly for a task type.
Now most applications do not want to write up all the bits and bobs required to run their own ApplicationMaster. Something like Hive could've built its own ApplicationMaster (rather we could've built it as part of our perf effort) - after all Storm did, HBase did and so did Giraph.
The vision of Tez is that there's a possible generalization for the problem. Just like MR was a simple distribution mechanism for a bi-partite graph which spawned a huge variety of tools, there exists a way to express more complex graphs in a generic way, building a new assembly language for data driven applications.
Make no mistake, Tez is an assembly language at its very core. It is raw and expressive but is an expert's tool, meant to be wielded by compiler developers catering to a tool userland. Pig and Hive already have compilers into this new backend. Cascading and then Scalding will add some API fun to the mix, but the framework sits below all those and consolidates everyone's efforts into a common rich baseline for performance. And there's a secret hidden away MapReduce compiler for Tez as well, which gets ignored often.
A generalization is fine, but it is often a limitation as well - nearly every tool listed above want to write small parts of the scheduling mechanisms which allows for custom data routing and connecting up task outputs to task inputs manually (like a bucketed map-join). Tez is meant to be a good generalization to build each application's custom components on top of, but without actually writing any of the complex YARN code required to have error tolerance, rack/host locality and recovery from AM crashes. The VertexManager plugin API is one classic example of how an application can now interfere with how a DAG is scheduled and how its individual tasks are managed.
And last of all, I like how Tez is not self centered, it works towards global utilization ratio on a cluster, not just its own latency figures. It can be built to elastically respond to queue/cluster pressures from other tasks running on the cluster.
People are doing Tez a disfavour by comparing it to frameworks which rely on keeping slaves running to not just execute CPU tasks but to hold onto temporary storage as well. On a production cluster, getting 4 fewer containers than you asked for will not stall Tez, because of the way it uses the Shuffle mechanism as a temporary data store between DAG vertexes - it is designed to be all-or-best-effort, instead of waiting for the perfect moment to run your entire query. A single stalling reducer doesn't require any of the other JVMs to stay resident and wait. This isn't a problem for a daemon based multi-tenant cluster, because if there is another job for that cluster it will execute, but for a hadoop ecosystem cluster system built on YARN, this means that your cluster utilization takes a nose-dive due to the inability to acquire or release cluster resources incrementally/elastically during your actual data operation.
Between the frameworks I've played with, that is the real differentiating feature of Tez - Tez does not require containers to be kept running to do anything, just the AM running in the idle periods between different queries. You can hold onto containers, but it is an optimization, not a requirement during idle periods for the session.
I might not exactly be a fan of the user-friendliness of this assembly language layer for hadoop, but the flexibility of this more than compensates.
<<<
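For reference, the tez-site.xml mentioned above boils down to one property pointing at the uploaded jars (a sketch; the HDFS path is illustrative):
{{{
<!-- tez-site.xml: where the Tez jars/tarball live on HDFS -->
<configuration>
  <property>
    <name>tez.lib.uris</name>
    <value>hdfs:///apps/tez/tez.tar.gz</value>
  </property>
</configuration>
}}}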
https://www.evernote.com/shard/s48/sh/ec01b659-2271-453f-b8b4-32e0c29ac848/2c1730994c0352208b373694f16de4e1
IBM Cúram Social Program Management 7.0.10 - 7.0.11
Batch Streaming Architecture
https://www.ibm.com/support/knowledgecenter/SS8S5A_7.0.11/com.ibm.curam.content.doc/BatchPerformanceMechanisms/c_BATCHPER_Architecture1BatchStreamingArchitecture1.html
The Chunker
https://www.ibm.com/support/knowledgecenter/SS8S5A_7.0.11/com.ibm.curam.content.doc/BatchPerformanceMechanisms/c_BATCHPER_Architecture1Chunker1.html
The Stream
https://www.ibm.com/support/knowledgecenter/SS8S5A_7.0.11/com.ibm.curam.content.doc/BatchPerformanceMechanisms/c_BATCHPER_Architecture1Stream1.html
! Verifying I/O bandwidth
{{{
-bash-3.00% id
uid=1000(oracle) gid=10000(dba) groups=1001(oinstall)
-bash-3.00% hostname
r09n01.pbm.ihost.com
-bash-3.00% pwd
/bench1/orion
-bash-3.00% ./orion.pl -t dss -f params/dss_params.txt -d 120 -n verification_test1
Checking and processing input arguments..
Workload type is : DSS
Input parameter file is : params/dss_params.txt
Run duration is : 120 (seconds)
Processed all input arguments..
Number of nodes is : 2
Degree of parallelism is 320 on Node r09n01
Degree of parallelism is 320 on Node r09n02
Starting iostat on node : r09n01
Starting Orion on node : r09n01
Starting iostat on node : r09n02
Starting Orion on node : r09n02
ORION: Oracle IO Numbers - Version 11.1.0.4.0
Test will take approximately 3 minutes
Larger caches may take longer
ORION: Oracle IO Numbers - Version 11.1.0.4.0
Test will take approximately 3 minutes
Larger caches may take longer
Copying results to results/verification_test1
From Node r09n01
From Node r09n02
Results from node r09n01
Maximum Large MBPS=1339.17 @ Small=0 and Large=320
Results from node r09n02
Maximum Large MBPS=1342.04 @ Small=0 and Large=320
-bash-3.00$
}}}
The aggregate bandwidth should exceed 1400 MBPS for one node, 2600 MBPS for two nodes, 3700 MBPS for three nodes, and 4900 MBPS for four nodes. The preceding test achieved 2681 MBPS for two nodes and passes the I/O bandwidth verification test.
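A quick way to double-check the aggregate is to sum the per-node maxima straight off the result logs (a sketch; the results directory follows the -n test name used above):
{{{
# 1339.17 + 1342.04 = 2681.21 MBPS for this run
grep -h "Maximum Large MBPS" results/verification_test1/* | \
  awk -F'[=@ ]+' '{sum += $4} END {printf "Aggregate MBPS: %.2f\n", sum}'
}}}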
! Verifying database integrity
The second verification test is to load 100 GB of test data into the TS_DATA tablespace and then run a full table scan against the data. This test verifies the database installation, and ensures that a SQL query can run to completion and that a full table scan can achieve I/O performance similar to the ORION results. Successful completion of the load and query constitutes passing this database integrity test.
{{{
Here is the set of scripts to run the full table scan test.
Please do the following:
1. Copy each script below into an executable shell script.
2. Execute the scripts in the order they are presented here:
   a. Table_creation.sh
   b. Data_grow.sh
   c. Full_table_scan.sh
3. Compare the MB/sec result from this test to the number achieved with
   ORION. If the numbers are comparable then the test has been successful.
#################### Table_creation.sh ################################
# This script creates the user oracle and the table owitest.
sqlplus /nolog<<EOF
connect / as sysdba
drop user oracle cascade;
grant DBA to oracle identified by oracle;
alter user oracle default tablespace ts_data;
alter user oracle temporary tablespace temp;
connect oracle/oracle
create table owitest parallel nologging as select * from sys.dba_extents;
commit;
exit
EOF
#################### Data_grow.sh ####################################
# This script grows the data in the owitest table to over 100GB
(( n=0 ))
while (( n<20 ));do
(( n=n+1 ))
sqlplus -s /NOLOG <<! &
connect oracle/oracle;
set timing on
set time on
alter session enable parallel dml;
insert /*+ APPEND */ into owitest select * from owitest;
commit;
exit;
!
wait
done
wait
#################### Full_table_scan.sh ################################
# This script runs the full table scan against owitest and reports MB/sec.
sqlplus -s /NOLOG <<! &
connect oracle/oracle;
set timing on
set echo on
spool all_nodes_full_table_scan.log
col time1 new_value time1
col time2 new_value time2
select to_char(sysdate, 'SSSSS') time1 from dual;
Select count(*) from owitest;
select to_char(sysdate, 'SSSSS') time2 from dual;
select (sum(s.bytes)/1024/1024)/(&&time2 - &&time1) MB_PER_SEC
from sys.dba_segments s
where segment_name='OWITEST';
undef time1
undef time2
spool off
exit;
!
}}}
! setup_ssh.sh script
{{{
#! /bin/ksh
#
# set up passwordless ssh for the oracle user on this node
HOSTNAME=r09n01
HOME=/home/oracle
#
cd $HOME
mkdir $HOME/.ssh
chmod 700 $HOME/.ssh
touch $HOME/.ssh/authorized_keys
chmod 600 $HOME/.ssh/authorized_keys
cd $HOME/.ssh
#
# generate RSA and DSA key pairs (accept the defaults, empty passphrase)
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
#
# append both public keys to authorized_keys
ssh $HOSTNAME cat $HOME/.ssh/id_rsa.pub >>authorized_keys
ssh $HOSTNAME cat $HOME/.ssh/id_dsa.pub >>authorized_keys
}}}
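The script only sets up the local node; for the two-node run the combined authorized_keys also has to reach the second host (a sketch; hostname r09n02 taken from the Orion output above):
{{{
# push the combined key file to the second node and lock down permissions
scp $HOME/.ssh/authorized_keys r09n02:$HOME/.ssh/authorized_keys
ssh r09n02 chmod 600 $HOME/.ssh/authorized_keys
}}}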
! ORION dss_params.txt file
{{{
./orion.pl -t dss -f params/dss_params.txt -d 120 -n verification_test1
# DSS workload parameter file (keywords are case-insensitive, values are case-sensitive)
# disk device or LUN path=number of spindles (one line per device).
/dev/rhdisk12=5
/dev/rhdisk13=5
/dev/rhdisk14=5
/dev/rhdisk15=5
/dev/rhdisk16=5
/dev/rhdisk17=5
/dev/rhdisk18=5
/dev/rhdisk19=5
/dev/rhdisk26=5
/dev/rhdisk27=5
/dev/rhdisk28=5
/dev/rhdisk29=5
/dev/rhdisk30=5
/dev/rhdisk31=5
/dev/rhdisk32=5
/dev/rhdisk33=5
/dev/rhdisk40=5
/dev/rhdisk41=5
/dev/rhdisk42=5
/dev/rhdisk43=5
/dev/rhdisk44=5
/dev/rhdisk45=5
/dev/rhdisk46=5
/dev/rhdisk47=5
/dev/rhdisk53=5
/dev/rhdisk54=5
/dev/rhdisk55=5
/dev/rhdisk56=5
/dev/rhdisk57=5
/dev/rhdisk58=5
/dev/rhdisk59=5
/dev/rhdisk60=5
#
# default large random IO size, should be specified in bytes
dss_io_size=1048576
num_nodes=2
node_names=r09n01, r09n02
dop_per_node=320, 320
orion_location=/bench1/orion/bin/orion
}}}
http://forums.cnet.com/7723-6121_102-392293/confused-primary-vs-secondary-master-vs-slave/
http://wiki.linuxquestions.org/wiki/IDE_master/slave
Identity Management 10.1.4.0 Product Cheat Sheet
Doc ID: Note:389468.1
MAA Best Practices - Oracle Fusion Middleware
https://www.oracle.com/database/technologies/high-availability/fusion-middleware-maa.html
10.4.2 Creating a Database Service for Oracle Internet Directory
https://docs.oracle.com/cd/E52734_01/core/IMEDG/db_repos.htm#IMEDG30626
Fusion Middleware Enterprise Deployment Guide for Oracle Identity and Access Management
https://docs.oracle.com/cd/E52734_01/core/IMEDG/toc.htm
https://www.oracletutorial.com/oracle-basics/oracle-insert-all/
https://oracle-base.com/articles/9i/multitable-inserts
example code
{{{
INSERT /*+ APPEND */ ALL
/* FIRST there is a match ie the standard join returns rows */
WHEN (indicator is not null and indicator_sell is not null) THEN
INTO matched
(CODE
,SOURCE_ID
,SOURCE_ACCOUNT
,PROGRAM_ID
,TRANSACTION_DATE
,TRANSACTION_NUMBER
,VALUE
,FUNCTION
,INDICATOR_BUY
,COMMENTS_BUY
,INDICATOR_SELL
,COMMENTS_SELL)
VALUES
(CODE
,SOURCE_ID
,SOURCE_ACCOUNT
,PROGRAM_ID
,TRANSACTION_DATE
,TRANSACTION_NUMBER
,VALUE
,FUNCTION
,INDICATOR
,COMMENTS
,INDICATOR_SELL
,COMMENTS_SELL)
/* SELL row not buy */
WHEN (indicator is null and indicator_sell is not null) THEN
into postmatch_sell(CODE
,SOURCE_ID
,SOURCE_ACCOUNT
,PROGRAM_ID
,TRANSACTION_DATE
,TRANSACTION_NUMBER
,VALUE
,FUNCTION
,INDICATOR
,COMMENTS
)
VALUES
(CODE
,SOURCE_ID
,SOURCE_ACCOUNT
,PROGRAM_ID
,TRANSACTION_DATE
,TRANSACTION_NUMBER
,VALUE
,FUNCTION
,INDICATOR_SELL
,COMMENTS_SELL)
/* Buy row but not sell */
WHEN (indicator is not null and indicator_sell is null) THEN
into postmatch_buy(CODE
,SOURCE_ID
,SOURCE_ACCOUNT
,PROGRAM_ID
,TRANSACTION_DATE
,TRANSACTION_NUMBER
,VALUE
,FUNCTION
,INDICATOR
,COMMENTS
)
VALUES
(CODE
,SOURCE_ID
,SOURCE_ACCOUNT
,PROGRAM_ID
,TRANSACTION_DATE
,TRANSACTION_NUMBER
,VALUE
,FUNCTION
,INDICATOR
,COMMENTS)
select /*+ MONITOR */ /* Match Processing */
buy.*,sell.indicator as indicator_sell,sell.comments as comments_sell
from temp_prematch_buy buy
FULL OUTER JOIN temp_prematch_sell sell
ON
(buy.CODE = sell.CODE
AND buy.SOURCE_ID = sell.SOURCE_ID
AND buy.SOURCE_ACCOUNT = sell.SOURCE_ACCOUNT
AND buy.PROGRAM_ID = sell.PROGRAM_ID
AND buy.TRANSACTION_DATE = sell.TRANSACTION_DATE
AND buy.TRANSACTION_NUMBER = sell.TRANSACTION_NUMBER
AND buy.VALUE = sell.VALUE
AND buy.FUNCTION = sell.FUNCTION
)
}}}
http://kevinclosson.wordpress.com/2009/04/28/how-to-produce-raw-spreadsheet-ready-physical-io-data-with-plsql-good-for-exadata-good-for-traditional-storage
{{{
set serveroutput on format wrapped size 1000000
create or replace directory mytmp as '/tmp';
DECLARE
n number;
m number;
gb number := 1024 * 1024 * 1024;
mb number := 1024 * 1024 ;
bpio number; -- 43 physical IO disk bytes
apio number;
disp_pio number(8,0);
bptrb number; -- 39 physical read total bytes
aptrb number;
disp_trb number(8,0);
bptwb number; -- 42 physical write total bytes
aptwb number;
disp_twb number(8,0);
x number := 1;
y number := 0;
fd1 UTL_FILE.FILE_TYPE;
BEGIN
fd1 := UTL_FILE.FOPEN('MYTMP', 'mon.log', 'w');
LOOP
bpio := 0;
apio := 0;
select sum(value) into bpio from gv$sysstat where statistic# = '43';
select sum(value) into bptwb from gv$sysstat where statistic# = '42';
select sum(value) into bptrb from gv$sysstat where statistic# = '39';
n := DBMS_UTILITY.GET_TIME;
DBMS_LOCK.SLEEP(5);
select sum(value) into apio from gv$sysstat where statistic# = '43';
select sum(value) into aptwb from gv$sysstat where statistic# = '42';
select sum(value) into aptrb from gv$sysstat where statistic# = '39';
m := DBMS_UTILITY.GET_TIME - n ;
disp_pio := ( (apio - bpio) / ( m / 100 )) / mb ;
disp_trb := ( (aptrb - bptrb) / ( m / 100 )) / mb ;
disp_twb := ( (aptwb - bptwb) / ( m / 100 )) / mb ;
UTL_FILE.PUT_LINE(fd1, TO_CHAR(SYSDATE,'HH24:MI:SS') || '|' || disp_pio || '|' || disp_trb || '|' || disp_twb || '|');
UTL_FILE.FFLUSH(fd1);
x := x + 1;
EXIT WHEN x > 720; -- optional bound (~1 hour at 5s samples); without it the FCLOSE below never runs
END LOOP;
UTL_FILE.FCLOSE(fd1);
END;
/
}}}
http://www.evernote.com/shard/s48/sh/2250c1af-3a88-482a-aac9-09902a243abf/9a0e5ea8f2bfa65c07fcdff8d8c06519
awr_iowl on sizing IOPS
http://www.evernote.com/shard/s48/sh/a9635aa5-8b78-4355-909f-2503e3a35a94/c0b0763b76a8bf532b597d8bdc08a2e9
http://www.evernote.com/shard/s48/sh/b5340200-f965-4cdd-9b2d-9cc49b8897e1/d366b016ea580e371fea56e44229eea1
What is the suggested I/O scheduler to improve disk performance when using Red Hat Enterprise Linux with virtualization?
http://kbase.redhat.com/faq/docs/DOC-5428
Thread: I/O scheduler in Oracle Linux 5.7
https://forums.oracle.com/forums/thread.jspa?threadID=2263820&tstart=0
http://www.thomas-krenn.com/en/oss/linux-io-stack-diagram.html
http://www.thomas-krenn.com/en/oss/linux-io-stack-diagram/linux-io-stack-diagram_v1.0.png
http://www.evernote.com/shard/s48/sh/dcc1cd2b-a858-424a-a95d-2a667e78eec1/f6a9c6e48d9ac153a9e1326cef1eac5a
http://www.iometer.org/doc/downloads.html
Guides
http://kb.fusionio.com/KB/a41/using-iometer-to-verify-iodrive-performance-on-windows.aspx
http://kb.fusionio.com/KB/a40/verifying-windows-system-performance.aspx
http://greg.porter.name/wiki/HowTo:iometer
http://blog.fosketts.net/2010/03/19/microsoft-intel-starwind-iscsi/
Useful
http://communities.vmware.com/docs/DOC-3961
http://old.nabble.com/understanding-disk-target-%22maximum-disk-size%22-td14341532.html
''smartscanloop'' http://www.evernote.com/shard/s48/sh/5985b021-4f70-4a1f-9578-0f719f8580da/d4aee8ac98395a9b8385eb7ccfb1a6f7
Tool for Gathering I/O Resource Manager Metrics: metric_iorm.pl [ID 1337265.1]
Guy's tool http://guyharrison.squarespace.com/blog/2011/7/31/a-perl-utility-to-improve-exadata-cellcli-statistics.html?lastPage=true#comment15512073
Configuring Exadata I/O Resource Manager for Common Scenarios [ID 1363188.1]
http://www.youtube.com/user/OPITZCONSULTINGpl
http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r1/exadata_iorm2/exadata_iorm2_viewlet_swf.html
wise words from twitter
{{{
@kevinclosson @orcldoug @timurakhmadeev I haven't seen a client yet who'd prefer inconsistent great performance to consistent decent perf.
}}}
! ''IORM test cases''
http://www.evernote.com/shard/s48/sh/300076b7-cdd5-48b9-89af-60acd3130058/972308e8a6a7233a7dec53cd301dfebd
''-- set test case environment''
{{{
Memory Component (GB):
db_cache_size         7.00
java_pool_size        0.06
large_pool_size       0.91
memory_max_target     0.00
memory_target         0.00
pga_aggregate_target 10.00
sga_max_size         12.16
sga_target            0.00
shared_pool_size      4.00
alter system set sga_max_size=10G scope=spfile sid='ACTEST1';
alter system set sga_target=10G scope=spfile sid='ACTEST1';
alter system set pga_aggregate_target=7G scope=spfile sid='ACTEST1';
alter system set db_cache_size=7G scope=spfile sid='ACTEST1';
alter system set shared_pool_size=2G scope=spfile sid='ACTEST1';
alter system set java_pool_size=200M scope=spfile sid='ACTEST1';
alter system set large_pool_size=200M scope=spfile sid='ACTEST1';
grant EXECUTE ON DBMS_LOCK to oracle;
grant SELECT ON V_$SESSION to oracle;
grant SELECT ON V_$STATNAME to oracle;
grant SELECT ON V_$SYSSTAT to oracle;
grant SELECT ON V_$LATCH to oracle;
grant SELECT ON V_$TIMER to oracle;
grant SELECT ON V_$SQL to oracle;
grant CREATE TYPE to oracle;
grant CREATE TABLE to oracle;
grant CREATE VIEW to oracle;
grant CREATE PROCEDURE to oracle;
select sum(bytes)/1024/1024 from dba_segments where segment_name = 'OWITEST';
-- 21420.625 MB
-- 34503.375 MB dba_objects
select count(*) from oracle.owitest;
-- 261652480 rows
-- 327680000 rows dba_objects
}}}
''-- IORM commands - NOLIMIT''
{{{
test10-multi-10exadb-iorm-nolimit
sh orion_3_ftsallmulti.sh 10 exadb1
cat *log | egrep "dbm|exadb" | sort -rnk5
# main commands
alter iormplan dbPlan=( -
(name=dbm, level=1, allocation=60), -
(name=exadb, level=1, allocation=40), -
(name=other, level=2, allocation=100));
alter iormplan active
list iormplan detail
list iormplan attributes objective
alter iormplan objective = low_latency
# list
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
# implement
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\( \(name=dbm, level=1, allocation=60\), \(name=exadb, level=1, allocation=40\), \(name=other, level=2, allocation=100\)\);'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan active'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective = low_latency'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
# revert
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan catPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan inactive'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
}}}
''-- IORM commands - LIMIT''
{{{
test-10-multi-4dbm-6exadb-iorm-limit
sh saturate 4 dbm1 6 exadb1
cat *log | egrep "dbm|exadb" | sort -rnk5
# main commands
alter iormplan dbPlan=( -
(name=dbm, level=1, allocation=60, limit=60), -
(name=exadb, level=1, allocation=40, limit=40), -
(name=other, level=2, allocation=100));
alter iormplan active
list iormplan detail
list iormplan attributes objective
alter iormplan objective = low_latency
# list
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
# implement
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\( \(name=dbm, level=1, allocation=60, limit=60\), \(name=exadb, level=1, allocation=40, limit=40\), \(name=other, level=2, allocation=100\)\);'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan active'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective = low_latency'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
# revert
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan catPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan inactive'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
}}}
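While either plan is active, the per-database IORM metrics can be watched straight off the cells (a sketch; the metric name pattern is an assumption, see [[list metric history / current]]):
{{{
# current per-database IO request rates as seen by IORM on every cell
dcli -g ~/cell_group -l root "cellcli -e list metriccurrent where name like \'DB_IO_RQ.*\'"
}}}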
''iosaturation toolkit - simple output sort''
{{{
# sort smartscan MB/s
less smartscanloop.txt | sort -nk5 | tail
}}}
''iosaturation toolkit - advanced output sort''
{{{
# sort AAS
cat smartscanloop.txt | sort -nk5 | tail
# sort latency
cat smartscanloop.txt | sort -nk6 | tail
# sort smart scans returned
cat smartscanloop.txt | sort -nk7 | tail
# sort interconnect
cat smartscanloop.txt | sort -nk8 | tail
# sort smartscan MB/s
cat smartscanloop.txt | sort -nk9 | tail
}}}
<<<
You can make use of the cell_iops.sh here http://karlarao.wordpress.com/scripts-resources/ to get a characterization of the IOs across the databases at the cell level. This only has to be executed on one of the cells, and it gives per-minute detail.
Whenever I need to characterize the IO profile of the databases for IORM config I would:
> pull the IO numbers from AWR
> pull the awr top events from AWR (this will tell if the DB is IO or CPU bound)
> get all these numbers in a consolidated view
> then from there depending on priority, criticality, workload type I would decide what makes sense for all of them (percentage allocation and IORM objective)
Whenever I need to evaluate an existing config with IORM already in place I would:
> pull the IO numbers from AWR
> pull the awr top events and look at the IO latency numbers (IO wait_class)
> pull the cell_iops.sh output on the workload periods where I'm seeing some high activity
> get all these numbers in a consolidated view
> get the different views of IO performance from AWR on all the databases http://goo.gl/YNUCEE
> validate the IO capacity of both Flash and Hard disk from the workload numbers of both AWR and CELL data
> for per consumer group I would use the "Useful monitoring SQLs" here http://goo.gl/I1mjd
> and if that's not enough then I would even do more fine-grained latency & IO monitoring using snapper
On the cell_iops.sh I have yet to add the latency by database and consumer group as well as the IORM_MODE, but the methods I've listed above work very well.
-Karl
<<<
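A sketch of the "pull the IO numbers from AWR" step described above (values in dba_hist_sysstat are cumulative, so rates come from diffing consecutive snapshots; AWR/Diagnostics Pack licensing assumed):
{{{
sqlplus -s / as sysdba <<'EOF'
-- cumulative read/write GB per snapshot; diff consecutive snap_ids for MB/s rates
select s.snap_id, s.end_interval_time,
       round(sum(decode(st.stat_name,'physical read total bytes',st.value))/power(1024,3),1) read_gb,
       round(sum(decode(st.stat_name,'physical write total bytes',st.value))/power(1024,3),1) write_gb
  from dba_hist_sysstat st, dba_hist_snapshot s
 where st.snap_id = s.snap_id
   and st.instance_number = s.instance_number
   and st.stat_name in ('physical read total bytes','physical write total bytes')
 group by s.snap_id, s.end_interval_time
 order by s.snap_id;
EOF
}}}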
https://community.oracle.com/thread/2613363
see also [[list metric history / current]]
http://www.slideshare.net/Enkitec/io-resource-management-on-exadata, https://dl.dropbox.com/u/92079964/IORM%20Planning%20Calculator.xlsx
ios dev - iTunes U - Fall 2013 semester from Paul Hegarty (Stanford)
I/O Performance Tuning Tools for Oracle Database 11gR2
http://www.dbasupport.com/oracle/ora11g/Oracle-Database-11gR2-IO-Tuning02.shtml
''MindMap IOsaturationtoolkit-v2 - IORM instrumentation blueprint'' http://www.evernote.com/shard/s48/sh/d1422308-0127-4c2f-97c3-561c59c9ef80/a93392f3d15097a258333a623da07481
http://www.natecarlson.com/2010/09/10/configuring-ipmp-on-nexentastor-3/
IP network multipathing http://en.wikipedia.org/wiki/IP_network_multipathing
Subject: Using IPv6 with Oracle E-Business Suite Releases 11i and 12
Doc ID: Note:567015.1
Subject: Oracle Fusion Middleware Support of IPv6
Doc ID: Note:431028.1
Subject: Does Oracle Application Server 10g R2 Version 10.1.2.0.2 Support IPv6?
Doc ID: Note:338011.1
Subject: Does Oracle 10g / 10gR2 support IPv6 ?
Doc ID: Note:362956.1
Subject: Oracle E-Business Suite R12 Configuration in a DMZ
Doc ID: Note:380490.1
Subject: E-Business Suite Recommended Set Up for Client/Server Products
Doc ID: Note:277535.1
Subject: Oracle Application Server Installer Incorrectly Parses IP6V Entries in /etc/inet/ipnodes on Solaris 10
Doc ID: Note:438323.1
Subject: Oracle Application Server 10g (10.1.3) Requirements for Linux (OEL 5.0 and RHEL 5.0)
Doc ID: Note:465159.1
Subject: Using AutoConfig to Manage System Configurations in Oracle E-Business Suite Release 12
Doc ID: Note:387859.1
Subject: Using AutoConfig to Manage System Configurations with Oracle Applications 11i
Doc ID: Note:165195.1
ISM or DISM Misconfiguration can Slow Down Oracle Database Performance (Doc ID 1472108.1)
When Will DISM Start On Oracle Database? (Doc ID 778777.1)
http://lifehacker.com/5691489/how-can-i-find-out-if-my-isp-is-limiting-my-download-speed
http://arup.blogspot.com/2011/01/more-on-interested-transaction-lists.html
{{{
There are two basic alternatives to solve the ITL wait problem:
(1) INITRANS
(2) Less Space for Data
select snap_id, ITL_WAITS_TOTAL, ITL_WAITS_DELTA from DBA_HIST_SEG_STAT;
select ini_trans from dba_tables;
}}}
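A sketch of turning that DBA_HIST_SEG_STAT query into a top-N list by segment (the obj#/dataobj# join to DBA_HIST_SEG_STAT_OBJ is the usual pattern):
{{{
sqlplus -s / as sysdba <<'EOF'
-- top 10 segments by ITL waits across all AWR snapshots
select * from (
  select o.owner, o.object_name, sum(s.itl_waits_delta) itl_waits
    from dba_hist_seg_stat s, dba_hist_seg_stat_obj o
   where s.obj# = o.obj# and s.dataobj# = o.dataobj#
   group by o.owner, o.object_name
   order by 3 desc)
where rownum <= 10;
EOF
}}}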
http://www.antognini.ch/2011/04/itl-waits-changes-in-recent-releases/
http://www.antognini.ch/2011/06/itl-waits-changes-in-recent-releases-script/
http://www.antognini.ch/2013/05/itl-deadlocks-script/
http://neeraj-dba.blogspot.com/2012/05/interested-transaction-list-itl-in.html
http://avdeo.com/2008/06/16/interested-transaction-list-itl/ <-- interesting explanation of the block and ITLs, and how they tie to the undo transaction table and segments
{{{
Oracle Data block is divided into 3 major portions.
> Oracle Fixed size header
> Oracle Variable size header
> Oracle Data content space
}}}
! mos Troubleshooting waits for 'enq: TX - allocate ITL entry' (Doc ID 1472175.1)
{{{
SYMPTOMS
Observe high waits for event enq: TX - allocate ITL entry
Top 5 Timed Foreground Events
Event                         Waits    Time(s)  Avg wait (ms)  % DB time  Wait Class
enq: TX - allocate ITL entry  1,200    3,129    2607           85.22      Configuration
DB CPU                                 323                     8.79
gc buffer busy acquire        17,261   50       3              1.37       Cluster
gc cr block 2-way             143,108  48       0              1.32       Cluster
gc current block busy         10,631   46       4              1.24       Cluster
CAUSE
By default the INITRANS value for a table is 1 and for an index is 2. When too many concurrent DML transactions are competing for the same data block we observe this wait event - "enq: TX - allocate ITL entry".
Once the table or index is reorganized by altering the INITRANS or PCTFREE parameter, it helps to reduce "enq: TX - allocate ITL entry" wait events.
As per AWR report below are the tables which reported this wait event
Segments by ITL Waits
* % of Capture shows % of ITL waits for each top segment compared
* with total ITL waits for all segments captured by the Snapshot
Owner  Tablespace Name  Object Name  Subobject Name  Obj. Type        ITL Waits  % of Capture
PIN    BRM_TABLES       SERVICE_T                    TABLE            188        84.30
PIN    BRM_TABLES       BILLINFO_T   P_R_06202012    TABLE PARTITION  35         15.70
To know more details, In the AWR report, search for the section "Segments by ITL Waits" .
SOLUTION
To reduce "enq: TX - allocate ITL entry" wait events, we need to follow the steps below.
A)
1) Depending on the amount of transactions in the table we need to alter the value of INITRANS.
alter table <table name> INITRANS 50;
2) Then re-organize the table using move (alter table <table_name> move;)
3) Then rebuild all the indexes of this table as below
alter index <index_name> rebuild INITRANS 50;
If the issue is not resolved by the above steps, please try by increasing PCTFREE
B)
1) Spreading rows into more number of blocks will also helps to reduce this wait event.
alter table <table name> PCTFREE 40;
2) Then re-organize the table using move (alter table service_T move;)
3) Rebuild index
alter index index_name rebuild PCTFREE 40;
OR You can combine steps A and B as below
1) Set INITRANS to 50 pct_free to 40
alter table <table_name> PCTFREE 40 INITRANS 50;
2) Then re-organize the table using move (alter table <table_name> move;)
3) Then rebuild all the indexes of the table as below
alter index <index_name> rebuild PCTFREE 40 INITRANS 50;
NOTE:
The table/index can be altered to set the new value for INITRANS. But the altered value takes effect for new blocks only. Basically you need to rebuild the objects so that the blocks are initialized again.
For an index this means the index needs to be rebuild or recreated.
For a table this can be achieved through:
exp/imp
alter table move
dbms_redefinition
}}}
IT Systems Management [ID 280.1]
IT Risk Management Advisor: Oracle [ID 318.1]
My Oracle Support Health Check Catalog [ID 868955.1]
https://www.quora.com/search?q=openstack+vs+enterprise+architecture
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=openstack+enterprise+architecture
https://www.google.com/search?q=openstack&espv=2&rlz=1C5CHFA_enUS696US696&biw=1276&bih=703&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiN5YezpvvPAhWC5iYKHcJDCDcQ_AUICCgD#imgrc=gqzdQ8G0MtutgM%3A
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=itil%20vs%20enterprise%20architecture
COBIT vs ITIL vs TOGAF https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=itil%20vs%20enterprise%20architecture
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=ITSM+openstack
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=database+as+a+service+ITIL&start=10
https://www.openstack.org/assets/path-to-cloud/OpenStack-6x9Booklet-online.pdf
https://www.linkedin.com/pulse/itil-vs-ea-matthew-kern
http://blogs.vmware.com/accelerate/2015/07/itsm-relevance-in-the-cloud.html
ITIL Open Source Solution Stack http://events.linuxfoundation.org/sites/events/files/slides/ITIL-v1.1.pdf
http://www.slideshare.net/rajib_kundu/itil-relation-with-dba
DBAs and the ITIL Framework http://www.sqlservercentral.com/articles/ITIL/131734/
http://www.tesora.com/database-as-a-service/
http://www.oracle.com/technetwork/oem/cloud-mgmt/con3028-dbaasatboeing-2332603.pdf
https://wiki.openstack.org/wiki/Trove , Oracle databases and OpenStack Trove https://www.youtube.com/watch?v=C7iNPv4LNB0
http://www.allenhayden.com/cgi/getdoc.pl?file=doc111.pdf
Identifying Resource Intensive SQL in a production environment - Virag Saksena
/***
|Name|ImageSizePlugin|
|Source|http://www.TiddlyTools.com/#ImageSizePlugin|
|Version|1.2.2|
|Author|Eric Shulman|
|License|http://www.TiddlyTools.com/#LegalStatements|
|~CoreVersion|2.1|
|Type|plugin|
|Description|adds support for resizing images|
This plugin adds optional syntax to scale an image to a specified width and height and/or interactively resize the image with the mouse.
!!!!!Usage
<<<
The extended image syntax is:
{{{
[img(w+,h+)[...][...]]
}}}
where ''(w,h)'' indicates the desired width and height (in CSS units, e.g., px, em, cm, in, or %). Use ''auto'' (or a blank value) for either dimension to scale that dimension proportionally (i.e., maintain the aspect ratio). You can also calculate a CSS value 'on-the-fly' by using a //javascript expression// enclosed between """{{""" and """}}""". Appending a plus sign (+) to a dimension enables interactive resizing in that dimension (by dragging the mouse inside the image). Use ~SHIFT-click to show the full-sized (un-scaled) image. Use ~CTRL-click to restore the starting size (either scaled or full-sized).
<<<
!!!!!Examples
<<<
{{{
[img(100px+,75px+)[images/meow2.jpg]]
}}}
[img(100px+,75px+)[images/meow2.jpg]]
{{{
[<img(34%+,+)[images/meow.gif]]
[<img(21% ,+)[images/meow.gif]]
[<img(13%+, )[images/meow.gif]]
[<img( 8%+, )[images/meow.gif]]
[<img( 5% , )[images/meow.gif]]
[<img( 3% , )[images/meow.gif]]
[<img( 2% , )[images/meow.gif]]
[img( 1%+,+)[images/meow.gif]]
}}}
[<img(34%+,+)[images/meow.gif]]
[<img(21% ,+)[images/meow.gif]]
[<img(13%+, )[images/meow.gif]]
[<img( 8%+, )[images/meow.gif]]
[<img( 5% , )[images/meow.gif]]
[<img( 3% , )[images/meow.gif]]
[<img( 2% , )[images/meow.gif]]
[img( 1%+,+)[images/meow.gif]]
{{tagClear{
}}}
<<<
!!!!!Revisions
<<<
2010.07.24 [1.2.2] moved tip/dragtip text to config.formatterHelpers.imageSize object to enable customization
2009.02.24 [1.2.1] cleanup width/height regexp, use '+' suffix for resizing
2009.02.22 [1.2.0] added stretchable images
2008.01.19 [1.1.0] added evaluated width/height values
2008.01.18 [1.0.1] regexp for "(width,height)" now passes all CSS values to browser for validation
2008.01.17 [1.0.0] initial release
<<<
!!!!!Code
***/
//{{{
version.extensions.ImageSizePlugin= {major: 1, minor: 2, revision: 2, date: new Date(2010,7,24)};
//}}}
//{{{
var f=config.formatters[config.formatters.findByField("name","image")];
f.match="\\[[<>]?[Ii][Mm][Gg](?:\\([^,]*,[^\\)]*\\))?\\[";
f.lookaheadRegExp=/\[([<]?)(>?)[Ii][Mm][Gg](?:\(([^,]*),([^\)]*)\))?\[(?:([^\|\]]+)\|)?([^\[\]\|]+)\](?:\[([^\]]*)\])?\]/mg;
f.handler=function(w) {
this.lookaheadRegExp.lastIndex = w.matchStart;
var lookaheadMatch = this.lookaheadRegExp.exec(w.source)
if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
var floatLeft=lookaheadMatch[1];
var floatRight=lookaheadMatch[2];
var width=lookaheadMatch[3];
var height=lookaheadMatch[4];
var tooltip=lookaheadMatch[5];
var src=lookaheadMatch[6];
var link=lookaheadMatch[7];
// Simple bracketted link
var e = w.output;
if(link) { // LINKED IMAGE
if (config.formatterHelpers.isExternalLink(link)) {
if (config.macros.attach && config.macros.attach.isAttachment(link)) {
// see [[AttachFilePluginFormatters]]
e = createExternalLink(w.output,link);
e.href=config.macros.attach.getAttachment(link);
e.title = config.macros.attach.linkTooltip + link;
} else
e = createExternalLink(w.output,link);
} else
e = createTiddlyLink(w.output,link,false,null,w.isStatic);
addClass(e,"imageLink");
}
var img = createTiddlyElement(e,"img");
if(floatLeft) img.align="left"; else if(floatRight) img.align="right";
if(width||height) {
var x=width.trim(); var y=height.trim();
var stretchW=(x.substr(x.length-1,1)=='+'); if (stretchW) x=x.substr(0,x.length-1);
var stretchH=(y.substr(y.length-1,1)=='+'); if (stretchH) y=y.substr(0,y.length-1);
if (x.substr(0,2)=="{{")
{ try{x=eval(x.substr(2,x.length-4))} catch(e){displayMessage(e.description||e.toString())} }
if (y.substr(0,2)=="{{")
{ try{y=eval(y.substr(2,y.length-4))} catch(e){displayMessage(e.description||e.toString())} }
img.style.width=x.trim(); img.style.height=y.trim();
config.formatterHelpers.addStretchHandlers(img,stretchW,stretchH);
}
if(tooltip) img.title = tooltip;
// GET IMAGE SOURCE
if (config.macros.attach && config.macros.attach.isAttachment(src))
src=config.macros.attach.getAttachment(src); // see [[AttachFilePluginFormatters]]
else if (config.formatterHelpers.resolvePath) { // see [[ImagePathPlugin]]
if (config.browser.isIE || config.browser.isSafari) {
img.onerror=(function(){
this.src=config.formatterHelpers.resolvePath(this.src,false);
return false;
});
} else
src=config.formatterHelpers.resolvePath(src,true);
}
img.src=src;
w.nextMatch = this.lookaheadRegExp.lastIndex;
}
}
config.formatterHelpers.imageSize={
tip: 'SHIFT-CLICK=show full size, CTRL-CLICK=restore initial size',
dragtip: 'DRAG=stretch/shrink, '
}
config.formatterHelpers.addStretchHandlers=function(e,stretchW,stretchH) {
e.title=((stretchW||stretchH)?this.imageSize.dragtip:'')+this.imageSize.tip;
e.statusMsg='width=%0, height=%1';
e.style.cursor='move';
e.originalW=e.style.width;
e.originalH=e.style.height;
e.minW=Math.max(e.offsetWidth/20,10);
e.minH=Math.max(e.offsetHeight/20,10);
e.stretchW=stretchW;
e.stretchH=stretchH;
e.onmousedown=function(ev) { var ev=ev||window.event;
this.sizing=true;
this.startX=!config.browser.isIE?ev.pageX:(ev.clientX+findScrollX());
this.startY=!config.browser.isIE?ev.pageY:(ev.clientY+findScrollY());
this.startW=this.offsetWidth;
this.startH=this.offsetHeight;
return false;
};
e.onmousemove=function(ev) { var ev=ev||window.event;
if (this.sizing) {
var s=this.style;
var currX=!config.browser.isIE?ev.pageX:(ev.clientX+findScrollX());
var currY=!config.browser.isIE?ev.pageY:(ev.clientY+findScrollY());
var newW=(currX-this.offsetLeft)/(this.startX-this.offsetLeft)*this.startW;
var newH=(currY-this.offsetTop )/(this.startY-this.offsetTop )*this.startH;
if (this.stretchW) s.width =Math.floor(Math.max(newW,this.minW))+'px';
if (this.stretchH) s.height=Math.floor(Math.max(newH,this.minH))+'px';
clearMessage(); displayMessage(this.statusMsg.format([s.width,s.height]));
}
return false;
};
e.onmouseup=function(ev) { var ev=ev||window.event;
if (ev.shiftKey) { this.style.width=this.style.height=''; }
if (ev.ctrlKey) { this.style.width=this.originalW; this.style.height=this.originalH; }
this.sizing=false;
clearMessage();
return false;
};
e.onmouseout=function(ev) { var ev=ev||window.event;
this.sizing=false;
clearMessage();
return false;
};
}
//}}}
http://www.oracle.com/technetwork/database/database-technologies/timesten/overview/ds-imdb-cache-1470955.pdf?ssSourceSiteId=ocomen
Using Oracle In-Memory Database Cache to Accelerate the Oracle Database
http://www.oracle.com/technetwork/database/database-technologies/performance/wp-imdb-cache-130299.pdf?ssSourceSiteId=ocomen
@@a columnar, compressed, in-memory cache of your on-disk data@@
http://www.oracle.com/us/corporate/press/2020717
http://www.oracle.com/us/corporate/features/database-in-memory-option/index.html
http://www.oracle.com/us/products/database/options/database-in-memory/overview/index.html
http://oracle-base.com/articles/12c/in-memory-column-store-12cr1.php
http://www.scaleabilities.co.uk/2014/07/25/oracles-in-memory-database-the-true-cost-of-licensing/
Oracle Database 12c In-Memory videos https://www.youtube.com/playlist?list=PLKCk3OyNwIzu4veZ1FFe32ZsvFHGlT4gZ
@@Oracle by Example: Oracle 12c In-Memory series https://apexapps.oracle.com/pls/apex/f?p=44785:24:106572632124906::::P24_CONTENT_ID,P24_PREV_PAGE:10152,24@@
@@official documentation http://www.oracle.com/technetwork/database/in-memory/documentation/index.html @@
! others
using the in-memory column store http://docs.oracle.com/database/121/ADMIN/memory.htm#ADMIN14257
about the in-memory column store http://docs.oracle.com/database/121/TGDBA/tune_sga.htm#TGDBA95379
glossary http://docs.oracle.com/database/121/CNCPT/glossary.htm#CNCPT89131
concepts guide - in-memory column store http://docs.oracle.com/database/121/CNCPT/memory.htm#CNCPT89659
Oracle Database 12c In-Memory Option http://www.oracle.com/us/solutions/sap/nl23-db12c-imo-en-2209396.pdf
http://www.oracle-base.com/articles/12c/in-memory-column-store-12cr1.php
! and others
http://www.oracle.com/us/solutions/sap/nl23-db12c-imo-en-2209396.pdf
http://www.oracle.com/technetwork/database/in-memory/overview/twp-oracle-database-in-memory-2245633.html
http://www.oracle.com/technetwork/database/options/database-in-memory-ds-2210927.pdf
https://search.oracle.com/search/search?search_p_main_operator=all&group=Blogs&q=engineered%20weblog:In-Memory
http://blog.tanelpoder.com/2014/06/10/our-take-on-the-oracle-database-12c-in-memory-option/
http://www.oracle.com/technetwork/database/in-memory/documentation/index.html
http://docs.oracle.com/database/121/CNCPT/memory.htm#CNCPT89659
http://docs.oracle.com/database/121/ADMIN/memory.htm#ADMIN14257
http://www.rittmanmead.com/2014/08/taking-a-look-at-the-oracle-database-12c-in-memory-option/
http://www.oracle.com/us/products/database/options/database-in-memory/overview/index.html
http://www.oracle.com/technetwork/database/options/dbim-vs-sap-hana-2215625.pdf?ssSourceSiteId=ocomen
http://www.oracle.com/technetwork/database/bi-datawarehousing/data-warehousing-wp-12c-1896097.pdf
http://www.oracle.com/us/solutions/sap/nl23-db12c-imo-en-2209396.pdf
https://docs.oracle.com/database/121/NEWFT/chapter12102.htm#BGBEGFAF
! disable in-memory option
{{{
ALTER SYSTEM SET INMEMORY_FORCE=OFF SCOPE=both sid='*';
ALTER SYSTEM SET INMEMORY_QUERY=DISABLE SCOPE=both sid='*';
ALTER SYSTEM RESET INMEMORY_SIZE SCOPE=SPFILE sid='*';
SHUTDOWN IMMEDIATE;
STARTUP;
}}}
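A quick sanity check after the restart (a sketch; v$im_segments exists from 12.1.0.2 on):
{{{
sqlplus -s / as sysdba <<'EOF'
show parameter inmemory
-- should return 0 once the column store is off
select count(*) from v$im_segments;
EOF
}}}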
RMAN puzzle: database reincarnation is not in sync with catalog https://blogs.oracle.com/gverma/entry/rman_puzzle_database_reincarna
Randolf Geist on 11g Incremental Statistics
http://www.oaktable.net/content/randolf-geist-11g-incremental-statistics
https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics
https://blogs.oracle.com/optimizer/incremental-statistics-maintenance-what-statistics-will-be-gathered-after-dml-occurs-on-the-table <- comments by maria
http://oracledoug.com/serendipity/index.php?/archives/1596-Statistics-on-Partitioned-Tables-Part-6a-COPY_TABLE_STATS.html
https://blogs.oracle.com/optimizer/efficient-statistics-maintenance-for-partitioned-tables-using-incremental-statistics-part-1
! other speed up options
!! concurrent = true
http://www.dbspecialists.com/blog/uncategorized/index-usage-monitoring-and-keeping-the-horses-out-front/
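A sketch of switching it on as a global preference (11.2.0.2+ takes TRUE/FALSE; 12c changes the valid values to MANUAL/AUTOMATIC/ALL/OFF):
{{{
sqlplus -s / as sysdba <<'EOF'
-- needs job_queue_processes > 0; an active Resource Manager plan is also recommended
exec dbms_stats.set_global_prefs('CONCURRENT','TRUE');
select dbms_stats.get_prefs('CONCURRENT') from dual;
EOF
}}}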
bde_rebuild.sql - Validates and rebuilds indexes occupying more space than needed
Doc ID: 182699.1
Script to capture INDEX_STAT Information
Doc ID: 35492.1
How Does the Index Block Splitting Mechanism Work for B*tree Indexes?
Doc ID: 183612.1
Note 30405.1 How Btree Indexes Are Maintained
Script to List Percentage Utilization of Index Tablespace
Doc ID: 1039284.6
Full Coverage in Infiniband Monitoring with OSWatcher 3.0: IB Monitoring
http://husnusensoy.wordpress.com/tag/infiniband/
<<<
Infiniband bonding is somewhat similar to classical network bonding (or aggregation) with some behavioral differences. The major difference is that the Infiniband network bonding interface runs in active/passive mode over Infiniband HCAs. No trunking is allowed as is possible with a classical Ethernet network. So if you have two 20 GBit interfaces you will have 20 Gbit theoretical throughput in an active IB network even though you have two (or more) interfaces. This can easily be seen in the output of ifconfig: while the ib0 interface has send/receive statistics, there is almost no traffic running over the ib2 interface.
In case of a failure (or it can be done manually) bonding interface will detect the failure in the active component and will failover to the passive one and you will see some informative warning message in the /var/log/messages file just like in Ethernet bonding.
<<<
<<<
''In a successful RAC configuration failover duration should be less than any CRS or watchdog timeout value.'' That's because for a period of time no interconnect traffic (heartbeats, or cache fusion) will be available. So if this failover duration is too long, due to host CPU utilization, a problem in HCA firmware, a configuration problem at the IB switch, or any other problem, the clusterware or some watchdog will assume the node should be evicted from the cluster to protect cluster integrity.
<<<
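A quick way to see the active/passive state and failover described above (a sketch; the bonded interface name varies, e.g. bond0 or bondib0):
{{{
# show bonding mode, which IB slave is currently active, and per-slave link status
egrep "Bonding Mode|Currently Active Slave|Slave Interface|MII Status" /proc/net/bonding/bond0
}}}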
https://blogs.oracle.com/networking/entry/infiniband_vocabulary
http://www.spinics.net/lists/linux-rdma/msg07546.html
http://www.mail-archive.com/general@lists.openfabrics.org/msg08014.html
http://people.redhat.com/dledford/infiniband_get_started.html
http://www.mail-archive.com/general@lists.openfabrics.org/msg08014.html
/***
|Name|InlineJavascriptPlugin|
|Source|http://www.TiddlyTools.com/#InlineJavascriptPlugin|
|Documentation|http://www.TiddlyTools.com/#InlineJavascriptPluginInfo|
|Version|1.9.5|
|Author|Eric Shulman|
|License|http://www.TiddlyTools.com/#LegalStatements|
|~CoreVersion|2.1|
|Type|plugin|
|Description|Insert Javascript executable code directly into your tiddler content.|
''Call directly into TW core utility routines, define new functions, calculate values, add dynamically-generated TiddlyWiki-formatted output'' into tiddler content, or perform any other programmatic actions each time the tiddler is rendered.
!!!!!Documentation
>see [[InlineJavascriptPluginInfo]]
!!!!!Revisions
<<<
2009.04.11 [1.9.5] pass current tiddler object into wrapper code so it can be referenced from within 'onclick' scripts
2009.02.26 [1.9.4] in $(), handle leading '#' on ID for compatibility with JQuery syntax
|please see [[InlineJavascriptPluginInfo]] for additional revision details|
2005.11.08 [1.0.0] initial release
<<<
!!!!!Code
***/
//{{{
version.extensions.InlineJavascriptPlugin= {major: 1, minor: 9, revision: 5, date: new Date(2009,4,11)};
config.formatters.push( {
name: "inlineJavascript",
match: "\\<script",
lookahead: "\\<script(?: src=\\\"((?:.|\\n)*?)\\\")?(?: label=\\\"((?:.|\\n)*?)\\\")?(?: title=\\\"((?:.|\\n)*?)\\\")?(?: key=\\\"((?:.|\\n)*?)\\\")?( show)?\\>((?:.|\\n)*?)\\</script\\>",
handler: function(w) {
var lookaheadRegExp = new RegExp(this.lookahead,"mg");
lookaheadRegExp.lastIndex = w.matchStart;
var lookaheadMatch = lookaheadRegExp.exec(w.source)
if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
var src=lookaheadMatch[1];
var label=lookaheadMatch[2];
var tip=lookaheadMatch[3];
var key=lookaheadMatch[4];
var show=lookaheadMatch[5];
var code=lookaheadMatch[6];
if (src) { // external script library
var script = document.createElement("script"); script.src = src;
document.body.appendChild(script); document.body.removeChild(script);
}
if (code) { // inline code
if (show) // display source in tiddler
wikify("{{{\n"+lookaheadMatch[0]+"\n}}}\n",w.output);
if (label) { // create 'onclick' command link
var link=createTiddlyElement(w.output,"a",null,"tiddlyLinkExisting",wikifyPlainText(label));
var fixup=code.replace(/document.write\s*\(/gi,'place.bufferedHTML+=(');
link.code="function _out(place,tiddler){"+fixup+"\n};_out(this,this.tiddler);"
link.tiddler=w.tiddler;
link.onclick=function(){
this.bufferedHTML="";
try{ var r=eval(this.code);
if(this.bufferedHTML.length || (typeof(r)==="string")&&r.length)
var s=this.parentNode.insertBefore(document.createElement("span"),this.nextSibling);
if(this.bufferedHTML.length)
s.innerHTML=this.bufferedHTML;
if((typeof(r)==="string")&&r.length) {
wikify(r,s,null,this.tiddler);
return false;
} else return r!==undefined?r:false;
} catch(e){alert(e.description||e.toString());return false;}
};
link.setAttribute("title",tip||"");
var URIcode='javascript:void(eval(decodeURIComponent(%22(function(){try{';
URIcode+=encodeURIComponent(encodeURIComponent(code.replace(/\n/g,' ')));
URIcode+='}catch(e){alert(e.description||e.toString())}})()%22)))';
link.setAttribute("href",URIcode);
link.style.cursor="pointer";
if (key) link.accessKey=key.substr(0,1); // single character only
}
else { // run script immediately
var fixup=code.replace(/document.write\s*\(/gi,'place.innerHTML+=(');
var c="function _out(place,tiddler){"+fixup+"\n};_out(w.output,w.tiddler);";
try { var out=eval(c); }
catch(e) { out=e.description?e.description:e.toString(); }
if (out && out.length) wikify(out,w.output,w.highlightRegExp,w.tiddler);
}
}
w.nextMatch = lookaheadMatch.index + lookaheadMatch[0].length;
}
}
} )
//}}}
// // Backward-compatibility for TW2.1.x and earlier
//{{{
if (typeof(wikifyPlainText)=="undefined") window.wikifyPlainText=function(text,limit,tiddler) {
if(limit > 0) text = text.substr(0,limit);
var wikifier = new Wikifier(text,formatter,null,tiddler);
return wikifier.wikifyPlain();
}
//}}}
// // GLOBAL FUNCTION: $(...) -- 'shorthand' convenience syntax for document.getElementById()
//{{{
if (typeof($)=='undefined') { function $(id) { return document.getElementById(id.replace(/^#/,'')); } }
//}}}
--------------------------------------------------------------
WHEN INSTALLING ORACLE, GO TO THESE SITES AND METALINK NOTES
--------------------------------------------------------------
# Note 466757.1 Critical Patch Update January 2008 Availability Information for Oracle Server and Middleware Products
- this is where you check the CPUs that you'll download
# Note 466759.1 Known Issues for Oracle Database Critical Patch Update
- this document lists the known issues for Oracle Database Critical Patch Update dated January 2008 (CPUJan2008).
These known issues are in addition to the issues listed in the individual CPUJan2008 READMEs.
# Note 394486.1 Risk Matrix Glossary -- terms and definitions for Critical Patch Update risk matrices
- this explains the columns found on the CPU vulnerability matrix and explains the Common Vulnerability Scoring Standard (CVSS)
# Note 394487.1 Use of Common Vulnerability Scoring System (CVSS) by Oracle
- explains the CVSS
# Note 455294.1 Oracle E-Business Suite Critical Patch Update Note October 2007
- when you're patching e-Business suite, go to this note
# Note 438314.1 Critical Patch Update - Introduction to Database n-Apply CPUs
- merge apply
# Oracle® Database on AIX®, HP-UX®, Linux®, Mac OS® X, Solaris®, Tru64 Unix® Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.1)
Doc ID: Note:169706.1
# ALERT: Oracle 10g Release 2 (10.2) Support Status and Alerts
Doc ID: Note:316900.1
# Upgrade Companion
--------------------------------------------------
SEPARATE ASM ORACLE_HOME AND ORACLE ORACLE_HOME
--------------------------------------------------
separating the ASM ORACLE_HOME from the database ORACLE_HOME was introduced in 10gR2; this also includes a separate CLUSTERWARE home
so you'll have three (3) ORACLE_HOMEs if you're configuring a RAC environment
ORACLE_HOME
ASM_HOME
CRS_HOME
-----------------------------------
CLUSTER SYNCHRONIZATION SERVICES
-----------------------------------
ORACLE_HOME
ASM_HOME
If you're using ASM and want to remove the Oracle software ORACLE_HOME, make sure that CSS is not running from that ORACLE_HOME.
If it is, reconfigure the CSS daemon to run from another home (ASM_HOME). By default, if you create a separate HOME
for ASM, then CSS will be created there.
CSS is created when:
1) you use ASM as storage
2) when you install Clusterware (RAC, but Clusterware has its separate home already)
For Oracle Real Application Clusters installations, the CSS daemon is installed with Oracle Clusterware in a separate Oracle home
directory (also called the Clusterware home directory). For single-node installations, the CSS daemon is installed in and runs from
the same Oracle home as Oracle Database.
If you plan to have more than one Oracle Database 10g installation on a single system and you want to use Automatic Storage Management
for database file storage, then Oracle recommends that you run the CSS daemon and the Automatic Storage Management instance from the
same Oracle home directory and use different Oracle home directories for the database instances.
Oracle� Database Installation Guide 10g Release 2 (10.2) for Linux x86 --> 6 Removing Oracle Software
Enter the following command to identify the Oracle home directory being used to run the CSS daemon:
# more /etc/oracle/ocr.loc
The output from this command is similar to the following:
ocrconfig_loc=/u01/app/oracle/product/10.2.0/db_1/cdata/localhost/local.ocr
local_only=TRUE
The ocrconfig_loc parameter specifies the location of the Oracle Cluster Registry (OCR) used by the CSS daemon. The path up to the cdata directory
is the Oracle home directory where the CSS daemon is running (/u01/app/oracle/product/10.2.0/db_1 in this example).
Note:
If the value of the local_only parameter is FALSE, Oracle Clusterware is installed on this system.
as ROOT
Set the ORACLE_HOME environment variable to specify the path to this Oracle home directory:
Bourne, Bash, or Korn shell:
# ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_2;
# export ORACLE_HOME
Enter the following command to reconfigure the CSS daemon to run from this Oracle home:
# $ORACLE_HOME/bin/localconfig reset $ORACLE_HOME
This command stops the Oracle CSS daemon, reconfigures it in the new Oracle home, and then restarts it.
When the system boots, the CSS daemon starts automatically from the new Oracle home.
Then edit /etc/oratab..
+ASM:/u01/app/oracle/product/10.2.0/db_2:N
-----------------------------------
MEMORY FAQs
-----------------------------------
32bit linux
shmmax max value is up to 4GB, but on 32-bit Oracle Database REGARDLESS OF PLATFORM the SGA is 1.7GB max
32bit windows
memory for 32-bit Windows is up to 2GB max, but the SGA is up to 1.7GB max (REGARDLESS OF PLATFORM)
-----------------------------------
Enterprise Manager Grid Control
-----------------------------------
# ORACLE_HOME
You can install this release more than once on the same system, as long as each installation is done in a separate Oracle home directory.
# Management Agent
Ensure that the Management Agent Oracle home does not contain any other Oracle software installation.
For Management Agent deployments, make sure that /tmp directory has 1300 MB of disk space available on the target machine.
Before you begin the installation of a Management Agent, ensure that the target host where you want to install the Management Agent has the appropriate users and operating system groups created. For information about creating required users and operating system groups, see Chapter1, "Creating Required Operating System Groups and Users". Also ensure that the target host has the group name as well as the group id created. Otherwise, the installation will fail.
You can install the Management Agent in 7 ways:
Agent Deploy Application
(installation types)
Fresh Installation of the Management Agent
Installation Using a Shared Agent Home
NOTE:
NFS agent deployment is not supported on a cluster. If you want the agent to monitor a cluster and Oracle RAC, you must use the agent deployment with the cluster option, and not the NFS (network file system) deployment method.
NOTE:
Do not attempt to view the prerequisite check status while the prerequisite checks are still in progress. If you do so while the checks are still in progress, the application will display an error.
Ensure that you do not specify duplicate entries of the host list. If there are duplicate host entries in this list, the application hangs. Also ensure that you use the same host names for which the SSH has been set.
The important parameters for Agent Installation are -b, -c, -n, -z and optionally -i, -p, -t, -d.
An unsecure agent cannot upload data to the secure Management Service. Oracle also recommends for security reasons that you change the Management Service password specified here after the installation is complete.
/etc/sudoers
After the installation and configuration phase, the Agent Deploy application checks for the existence of the Central Inventory (located at /etc/oraInst.loc). If this is the first Oracle product installation, Agent Deploy executes the following scripts:
1.orainstRoot.sh - UNIX Machines only: This creates oraInst.loc that contains the central inventory.
2. root.sh - UNIX Machines only: This runs all the scripts that must be executed as root.
If this is not the first Oracle product installation, Agent Deploy executes only the root.sh script.
nfsagentinstall Script
Sharing the Agent Oracle Home Using the nfsagentinstall Script
The agent Oracle home cannot be installed in an Oracle Cluster Shared File System (OCFS) drive, but is supported on an NAS (Network Attached Storage) drive.
You can perform only one nfsagent installation per host. Multiple nfsagent installations on the same host will fail.
When you are performing an NFS Agent installation, the operating system (and version) of the target machine where the NFS Agent needs to be installed should be the same as the operating system (and version) of the machine where the master agent is located. If the target machine has a different operating system, then the NFS Agent installation will fail. For example, if the master agent is on Red Hat Linux Version 4, then the NFS agent can be installed only on those machines that run Red Hat Linux Version 4. If you try to install on Red Hat Linux Version 3 or a different operating system for that matter, then the NFS installation will fail.
NOTE:
For NFS Agent installation from 10.2.0.3.0 master agents, the NFS agents will be started automatically after rebooting the machine.
For NFS Agent installation from 10.2.0.3.0 master agents, agentca script for rediscovery of targets present in the <statedir>/bin directory can be used to rediscover targets on that host.
agentDownload Script
Use the agentDownload script to perform an agent installation on a cluster environment
For Enterprise Manager 10g R2, the <version> value in the preceding syntax will be 10.2.0.2.0
NOTE:
If the Management Service is using a load balancer, you must modify the s_omsHost and s_omsPort values in the <OMS_HOME>/sysman/agent_download/<version>/agentdownload.rsp file to reflect the load balancer host and port before using the agentDownload script.
The base directory for the agent installation must be specified using the -b option. For example, if you specified the parent directory to be agent_download (/scratch/agent_download), then the command to be specified is:
-b /scratch/agent_download
The agent Oracle home (agent10g) is created as a subdirectory under this parent directory.
The agent that you are installing is not secure by default. If you want to secure the agent, you must specify the password using the AGENT_INSTALL_PASSWORD environment variable, or by executing the following command after the installation is complete:
<Agent_Home>/bin/emctl secure agent
For Enterprise Manager 10.2.0.3.0, if the agent_download.rsp file does not contain the encrypted registration password or the AGENT_INSTALL_PASSWORD environment variable is not set, the agentDownload script in UNIX will prompt for the Agent Registration password which is used for securing the agent. Provide the password to secure the agent. If you do not want to secure the agent, continue running the agentDownload script by pressing Enter.
The root.sh script must be run as root; otherwise, the Enterprise Manager job system will not be accessible to the user. The job system is required for some Enterprise Manager features, such as hardware and software configuration tasks and configuring managed database targets.
This script uses the -ignoresysPrereqs flag to bypass prerequisite check messages for operating system-specific patches during installation; prerequisite checks are still performed and saved to the installer logs. While this makes the Management Agent easier to deploy, check the logs to make sure the target machines on which you are installing Management Agents are properly configured for successful installation.
Cluster Agent Installation
Management Agent Cloning
Interactive Installation Using Oracle Universal Installer
Silent Installation
If you are deploying the Management Agent in an environment having multiple Management Service installations that are using a load balancer, you should not access the Agent Deploy application using this load balancer. Oracle recommends that you access the Management Service directly.
you'll have issues with OMS running behind a load balancer; there are some configurations to do
The default port value for 10.2 Management Agent is 3872.
The default port for Grid Control is 4889. This should be available after you install the Management Service.
# PATCHING
For 10.2.0.1, the OMS installation not only installs an OMS, but also automatically installs a Management Agent. However, when you upgrade that OMS to 10.2.0.4.0 using the Patch Set, the Patch Set does not upgrade any of the associated Management Agents. To upgrade the Management Agents, you have to manually apply the Patch Set on each of the Management Agent homes, as they are separate Oracle Homes.
# POST INSTALL
Agent Reconfiguration and Rediscovery
Note:
You must specify either the -f or -d option when executing this script. Using one of these two options is mandatory.
Caution:
Do not use the agentca -f option to reconfigure any upgraded agent (standalone and RAC).
-----------------------------------
ROOT.SH
-----------------------------------
Logging In As Root During Installation (UNIX Only)?
At least once during installation, the installer prompts you to log in as the root user and run a script. You must log in as root because the script edits files in the /etc directory.
The installer prompts you to run the root.sh script in a separate window. This script creates files in the local bin directory (/usr/local/bin, by default).
On IBM AIX and HP-UX platforms, the script creates the files in the /var/opt directory.
-----------------------------------
ASMLIB and raw devices
-----------------------------------
running /etc/init.d/oracleasm configure will populate the /etc/sysconfig/oracleasm file
# the ASM disks must be owned by DBA and not by OINSTALL; the oraInventory owner should not have access to the disks, the DBA should
# ASM
raw/raw[67]:oracle:dba:0660
# OCR
raw/raw[12]:root:oinstall:0640
# Voting Disks
raw/raw[3-5]:crs:oinstall:0640 <-- this is usually user ORACLE, in this scenario the owner of the clusterware software is owned by CRS so he has to own the Voting Disks
-----------------------------------
CLONING HOME
-----------------------------------
The cloning process works by copying all of the files from the source Oracle home to the destination Oracle home. Thus, any files used by the source instance that are located outside the source Oracle home's directory structure are not copied to the destination location.
The size of the binaries at the source and the destination may differ because these are relinked as part of the clone operation and the operating system patch levels may also differ between these two locations. Additionally, the number of files in the cloned home would increase because several files copied from the source, specifically those being instantiated, are backed up as part of the clone operation.
OUI Cloning is more beneficial than using the tarball approach because cloning configures the Central Inventory and the Oracle home inventory in the cloned home. Cloning also makes the home manageable and allows the paths in the cloned home and the target home to be different.
The cloning process uses the OUI cloning functionality. This operation is driven by a set of scripts and add-ons that are included in the respective Oracle software.
The cloning process has two phases:
1) Source Preparation Phase
- $ORACLE_HOME/clone/bin/prepare_clone.pl needs to be executed only for the Application Server Cloning. Database and CRS Oracle home Cloning does not need this
- archive the home, exclude the following:
*.log, *.dbf, listener.ora, sqlnet.ora, and tnsnames.ora
- Also ensure that you do not archive the following folders:
$ORACLE_HOME/<Hostname>_<SID>
$ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_<Hostname>_<SID>
Create ExcludeFileList.txt:
[oracle@dg10g2 10.2.0]$ find db_1 -iname '*.log' > ExcludeFileList.txt
[oracle@dg10g2 10.2.0]$ find db_1 -iname '*.dbf' >> ExcludeFileList.txt
[oracle@dg10g2 10.2.0]$ find db_1 -iname listener.ora >> ExcludeFileList.txt
[oracle@dg10g2 10.2.0]$ find db_1 -iname sqlnet.ora >> ExcludeFileList.txt
[oracle@dg10g2 10.2.0]$ find db_1 -iname tnsnames.ora >> ExcludeFileList.txt
[oracle@dg10g2 10.2.0]$ echo "db_1/dg10g2.us.oracle.com_orcl" >> ExcludeFileList.txt
[oracle@dg10g2 10.2.0]$ echo "db_1/oc4j/j2ee/OC4J_DBConsole_dg10g2.us.oracle.com_orcl" >> ExcludeFileList.txt
TAR home:
nohup tar -X ExcludeFileList.txt -cjvpf db_1.tar.bz2 db_1 &
2) Cloning Phase
- 10gR1 run: $ORACLE_HOME/oui/bin/runInstaller.sh ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_2 ORACLE_HOME_NAME=asm_home1 -clone
- 10gR2 run: perl <Oracle_Home>/clone/bin/clone.pl ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_2 ORACLE_HOME_NAME=asm_home1
3) Check log files
The cloning script runs multiple tools, each of which may generate its own log files. However, the following log files that OUI and the cloning scripts generate, are the key log files of interest for diagnostic purposes:
<Central_Inventory>/logs/cloneActions<timestamp>.log: Contains a detailed log of the actions that occur during the OUI part of the cloning.
<Central_Inventory>/logs/oraInstall<timestamp>.err: Contains information about errors that occur when OUI is running.
<Central_Inventory>/logs/oraInstall<timestamp>.out: Contains other miscellaneous messages generated by OUI.
$ORACLE_HOME/clone/logs/clone<timestamp>.log: Contains a detailed log of the actions that occur during the pre-cloning and cloning operations.
$ORACLE_HOME/clone/logs/error<timestamp>.log: Contains information about errors that occur during the pre-cloning and cloning operations.
To find the location of the Oracle inventory directory: On all UNIX computers except Linux and IBM AIX, look in /var/opt/oracle/oraInst.loc. On IBM AIX and Linux-based systems, look in the /etc/oraInst.loc file.
On Windows system computers, the location can be obtained from the Windows Registry key: HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\INST_LOC.
After the clone.pl script finishes running, refer to these log files to obtain more information about the cloning process.
-- Reference for 11.2 home cloning http://blogs.oracle.com/AlejandroVargas/2010/11/oracle_rdbms_home_install_usin.html
-----------------------------------
Windows Install
-----------------------------------
1) Install Loopback Adapter
2) Configure Listener (port number must be different if installing multiple softwares)
3) Create Database
Scenarios:
==========
1) When already have an existing database with EM, then dropped the database..
It drops everything including the services, except the LISTENER and iSQLPLUS service.
Then, when I create again, it creates the database and EM with 5500 port number.
2) Noticed that when I remove this from TNSNAMES.ORA, EM fails.
*** This is because, when you configure your LISTENER on a different port number (1522),
it will set the parameter LOCAL_LISTENER=LISTENER_ORA10 and put an entry in TNSNAMES.ORA...
ORA10 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sqlnbcn-014.corp.sqlwizard.com)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ora10.ph.oracle.com)
)
)
LISTENER_ORA10 =
(ADDRESS = (PROTOCOL = TCP)(HOST = sqlnbcn-014.corp.sqlwizard.com)(PORT = 1522)) <-- THIS!!!
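In other words, the LISTENER_ORA10 alias is what LOCAL_LISTENER resolves through. A minimal sketch of doing the same thing by hand (alias name taken from above):
{{{
-- point the instance at the non-default listener port, then register
alter system set local_listener='LISTENER_ORA10' scope=both;
alter system register;
}}}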
<<showtoc>>
Oracle® Database on AIX®, HP-UX®, Linux®, Mac OS® X, Solaris®, Tru64 Unix® Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.1)
Doc ID: Note:169706.1
-- also located at Installations folder
-- 11R2 Changes
11gR2 Install (Non-RAC): Understanding New Changes With All New 11.2 Installer [ID 884232.1]
11gR2 Clusterware and Grid Home - What You Need to Know [ID 1053147.1]
Requirements for Installing Oracle 11gR2 RDBMS on RHEL (and OEL) 5 on AMD64/EM64T [ID 880989.1]
-- CPU, PSU, SPU - Oracle Critical Patch Update Terminology Update
http://www.integrigy.com/oracle-security-blog/cpu-psu-spu-oracle-critical-patch-update-terminology-update
New Patch Nomenclature for Oracle Products [ID 1430923.1]
MOS Note:1962125.1
Overview of Database Patch Delivery Methods <- 20150207
-- PATCHES
Good practices applying patches and patchsets
Doc ID: Note:176311.1
Oracle Recommended Patches -- Oracle Database [ID 756671.1]
Recommended Patch Bundles Note 756388.1
Generic Support Status Notes (strongly recommended to keep an eye on notes below)
* For 11.1.0 Note id 454507.1
* For 10.2.0 Note id 316900.1
* For 10.1.0 Note id 263719.1
* For 9.2 Note id 189908.1
-- PATCH SET
Release Schedule of Current Database Patch Sets
Doc ID: 742060.1
rolling back a patchset (new functionality provided with 9.2.0.7 and 10.2)
How to rollback a patchset
Doc ID: Note:334598.1
How To Find RDBMS patchsets on Metalink
Doc ID: 438049.1
MOS Note 1189783.1 – Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2
-- 11.2 PATCH SET
MOS Note 1189783.1 – Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2
How to deinstall "old" SW after 11.2.0.2 has been applied?
http://blogs.oracle.com/UPGRADE/2010/10/how_to_deinstall_old_sw_after.html
-- PATCH SET UPDATES
Intro to Patch Set Updates (PSU)
Doc ID: 854428.1
Patch Set Updates - One-off Patch Conflict Resolution [ID 1061295.1]
-- CPU
Reference List of Critical Patch Update Availability Documents For Oracle Database and Fusion Middleware Product
Doc ID: 783141.1
http://www.freelists.org/post/oracle-l/patch-source,4
How To Find The Description/Details Of The Bugs Fixed By A Patch Using Opatch?
Doc ID: 750350.1
http://www.oracle.com/technology/deploy/security/cpu/cpufaq.htm
Critical Patch Update - Introduction to Database n-Apply CPUs
Doc ID: 438314.1
http://blogs.oracle.com/security/2007/07/17/#a62
http://www.integrigy.com/security-resources/whitepapers/IOUG_Oracle_Critical_Patch_Updates_Unwrapped.pdf
Security Alerts and Critical Patch Updates- Frequently Asked Questions
Doc ID: 360470.1
OPatch - New features
Doc ID: 749368.1
How To Find The Description/Details Of The Bugs Fixed By A Patch Using Opatch?
Doc ID: 750350.1
10.2.0.4 Patch Set - List of Bug Fixes by Problem Type
Doc ID: 401436.1
Critical Patch Update April 2009 Database Known Issues
Doc ID: 786803.1
-- PATCHES WINDOWS
Oracle Database Server and Networking Patches for Microsoft Platforms
Doc ID: 161549.1
-- ROLLING PATCH
Oracle Clusterware (formerly CRS) Rolling Upgrades
Doc ID: Note:338706.1
Rolling Patch - OPatch Support for RAC [ID 244241.1]
-- ORAINVENTORY
How To Move The Central Inventory To Another Location
Doc ID: Note:299260.1
-- ORACLE_HOME
MOVING ORACLE_HOME
Doc ID: Note:28433.1
Can You Rename/Change The Oracle Home Directory After Installation ?
Doc ID: Note:423285.1
-- OUI
Overview of the Oracle Universal Installer
Doc ID: Note:74182.1
-- OUI DEBUG
How to Diagnose Oracle Installer Errors On Unix About Permissions or Lack of Space?
Doc ID: 401317.1
ERROR STARTING RUNINSTALLER /tmp/...../jre/lib/PA_RISC2.0/libmawt.sl: Not enough space
Doc ID: 308199.1
-- DBA_REGISTRY
Information On Installed Database Components and Schemas
Doc ID: Note:472937.1
How to remove the OLAP Catalog and OLAP APIs from the database
Doc ID: Note:224746.1
How to Uninstall OLAP Options from ORACLE_HOME?
Doc ID: Note:331808.1
How To Remove or De-activate OLAP After Migrating From 9i To Standard Edition 10g
Doc ID: Note:467643.1
Database Status Check Before, During And After Migrations And Upgrades
Doc ID: Note:437794.1
What to do if you run an upgrade or migration with invalid objects and no backup
Doc ID: Note:453642.1
Packages and Types Invalid in Dba_registry
Doc ID: Note:457861.1
DBA_REGISTRY Shows Components Of A New Database Are At The Base Level, Even Though A Patchset Is Installed
Doc ID: Note:339614.1
DBA_REGISTRY is invalid
Doc ID: Note:393319.1
How to see what options are installed
Doc ID: Note:473542.1
RAC Option Invalid After Migration
Doc ID: Note:312071.1
DBA_REGISTRY Shows Status of Loaded After Migration to 9.2
Doc ID: Note:252090.1
How to Diagnose Invalid or Missing Data Dictionary (SYS) Objects
Doc ID: Note:554520.1
Oracle9.2 New Feature: Migration Infrastructure Improvements
Doc ID: Note:177382.1
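A quick sanity check that goes with the notes above (a minimal sketch against DBA_REGISTRY):
{{{
-- list each component with its version and VALID/INVALID/LOADED status
select comp_id, comp_name, version, status
  from dba_registry
 order by comp_id;
}}}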
-- DBA_REGISTRY, after wordsize change 10.2.0.4
How to check if Intermedia Audio/Image/Video is Installed Correctly?
Doc ID: 221337.1
Manual upgrade of the 10.2.x JVM fails with ORA-3113 and ORA-7445
Doc ID: 459060.1
Jserver Java Virtual Machine Become Invalid After Catpatch.Sql
Doc ID: 312140.1
How to Reload the JVM in 10.1.0.X and 10.2.0.X
Doc ID: 276554.1
Script to Check the Status of the JVM within the Database
Doc ID: 456949.1
How to Tell if Java Virtual Machine Has Been Installed Correctly
Doc ID: 102717.1
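Roughly what the scripted JVM checks above boil down to (a sketch):
{{{
-- a healthy JVM has thousands of VALID JAVA CLASS objects owned by SYS
select owner, object_type, status, count(*)
  from dba_objects
 where object_type like 'JAVA%'
 group by owner, object_type, status
 order by owner, object_type;
}}}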
-- RHEL 5
Requirements For Installing Oracle10gR2 On RHEL 5/OEL 5 (x86_64)
Doc ID: 421308.1
-- RHEL4
Requirements for Installing Oracle 10gR2 RDBMS on RHEL 4 on AMD64/EM64T
Doc ID: Note:339510.1
-- LINUX ITANIUM
montecito bug
http://k-freedom.spaces.live.com/blog/cns!CF84914AA1F284FD!167.entry
How To Install Oracle RDBMS Software On Itanium Servers With Montecito Processors
Doc ID: Note:400227.1
-- http://www.ora-solutions.net/web/blog/
Requirements for Installing Oracle 10gR2 RDBMS on RHEL 5 on Linux Itanium (ia64)
Doc ID: Note:748378.1
Recently, I had to install 10gR2 on Linux Itanium (Montecito CPUs) and found out that the Java version that ships with the binaries does not work on this platform. Therefore you have to download Patch 5390722 and perform the following steps for RAC installation:
1. Install Patch 5390722: Install JDK into new 10.2 CRS Home, then install JRE into new 10.2 CRS Home.
2. Take a tar backup of the CRS Home containing these two components. You will need it.
3. Install 10.2.0.1 Clusterware by running from 10.2.0.1 binaries: ./runInstaller -jreLoc $CRS_HOME/jre/1.4.2
4. Install Patch 5390722 with the option CLUSTER_NODES={"node1", "node2", ...}: Install JDK into new 10.2 RDBMS Home, then install JRE into new 10.2 RDBMS
5. Install 10.2.0.1 RDBMS Binaries into the new 10.2 RDBMS: ./runInstaller -jreLoc $ORACLE_HOME/jre/1.4.2
6. If you want to install the 10.2.0.4 patchset, you will have to follow these steps:
for CRS: ./runInstaller -jreLoc $ORA_CRS_HOME/jdk/jre
for RDBMS: ./runInstaller -jreLoc $ORACLE_HOME/jdk/jre
7. After that, you have to repair the JRE because the 10.2.0.4 patchset has overwritten the patched JRE with the defective versions. (7448301)
% cd $ORACLE_HOME/jre
% rm -rf 1.4.2
% tar -xvf $ORACLE_HOME/jre/1.4.2-5390722.tar
Sources:
* Note: 404248.1 - How To Install Oracle CRS And RAC Software On Itanium Servers With Montecito Processors
* Note: 400227.1 - How To Install Oracle RDBMS Software On Itanium Servers With Montecito Processors
* Bug 7448301 - Linux Itanium: 10.2.0.4 Patchset for Linux Itanium (Montecito) has wrong Java runtime
Support of Linux and Oracle Products on Linux (Doc ID 266043.1)
How To Install Oracle RDBMS Software On Itanium Servers With Montecito Processors (Doc ID 400227.1)
Requirements for Installing Oracle 10gR2 RDBMS on RHEL 5 on Linux Itanium (ia64) (Doc ID 748378.1)
Frequently Asked Questions: Oracle E-Business Suite Support on Itanium (Doc ID 311717.1)
How To Identify A Server Which Has Intel® Montecito Processors Installed (Doc ID 401332.1)
Oracle® Database on Unix AIX®,HP-UX®,Linux®,Mac OS® X,Solaris®,Tru64 Unix® Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2) (Doc ID 169706.1)
Installing Oracle Data Integrator On Intel Itanium (64-bit) Hardware (Doc ID 451928.1)
-- DATABASE VAULT
Note 726568.1 How to Install Database Vault Patches on top of 11.1.0.6
How to Install Database Vault Patches on top of 10.2.0.4
Doc ID: 731466.1
How to Install Database Vault Patches on top of 9.2.0.8.1 and 10.2.0.3
Doc ID: 445092.1
-- CRS, ASM, RDBMS HOMES COMPATIBILITY
Note 337737.1 Oracle Clusterware - ASM - Database Version Compatibility
Note 363254.1 Applying one-off Oracle Clusterware patches in a mixed version home environment
-- DEBUG
How to Diagnose Oracle Installer Errors On Unix About Permissions or Lack of Space?
Doc ID: 401317.1
-- CLONE
Cloning A Database Home And Changing The User/Group That Owns It
Doc ID: 558478.1
An Example Of How To Clone An Existing Oracle9i Release 2 (9.2.0.x) RDBMS Installation Using OUI
Doc ID: 559863.1
Cloning An Existing Oracle9i Release 2 (9.2.0.x) RDBMS Installation Using OUI
Doc ID: 559299.1
How To Clone An Existing RDBMS Installation Using EMGC
Doc ID: 549268.1
While Cloning Oracle9i Release 2 (9.2.0.x), OUI Fails With "Exception in thread "main" java.lang.NoClassDefFoundError: oracle/sysman/oii/oiic/OiicInstaller"
Doc ID: 559859.1
Cloning with -ignoreSysPrereqs on OS versions certified after initial release
Doc ID: 443376.1
Cloning An Existing Oracle9i Release 2 (9.2.0.x) RDBMS Installation Using OUI
Doc ID: 559299.1
-- MD5
How To Determine md5 and SHA-1 Check-sum in AIX?
Doc ID: 427591.1
-- UNINSTALL - WINDOWS
WIN: Manually Removing all Oracle Components on Microsoft Windows Platforms
Doc ID: 124353.1
-- REINSTALL
How to Reinstall ASM or DB HOME on One RAC Node From the Install Media. [ID 864614.1]
-- CASE SENSITIVENESS
ORACLE_SID, TNS Alias,Password File and others Case Sensitiveness
Doc ID: 225097.1
-- OPEN FILES
Can't ssh into the system with specific user account: Connection reset by peer (Doc ID 788064.1)
Check the processes run by user 'oracle':
[oracle@rac2 ~]$ ps -u oracle|wc -l
489
Check the files opened by user 'oracle':
[oracle@rac ~]$ /usr/sbin/lsof -u oracle | wc -l
62490
! DBCA troubleshooting
Master Note: Troubleshooting Database Configuration Assistant (DBCA) (Doc ID 1510457.1)
Master Note: Troubleshooting Database Configuration Assistant (DBCA)_1510457.1 http://blog.itpub.net/17252115/viewspace-1158370/
DBCA/DBUA APPEARS TO HANG AFTER CLICKING FINISH BUTTON (Doc ID 727290.1)
Tracing the Database Configuration Assistant (DBCA) (Doc ID 188134.1)
{{{
-DTRACING.ENABLED=true -DTRACING.LEVEL=2
}}}
dbca setting Fatal Error: ORA-01501 https://www.google.com/search?q=dbca+setting+Fatal+Error%3A+ORA-01501
Oracle DBCA hangs at 2% https://xcoolwinds.wordpress.com/2013/06/06/oracle-nh/
DBCA ksvrdp
DBCA errors when cluster_interconnects is set (Doc ID 1373591.1)
asm Received signal #18, SIGCLD
11.2.0.4 Patch Set - List of Bug Fixes by Problem Type (Doc ID 1562142.1)
SYSDBA Connection Fails With ORA-12547 (Doc ID 1447317.1)
ASMCMD KSTAT_IOC_READ
Shutdown Normal or Immediate Hang Waiting for MMON process (Doc ID 1183213.1)
ASMCMD Is Not Working Due To LIBCLNTSH.SO.11.1 Is Missing Or Corrupted. (Doc ID 1407913.1)
asmcmd slow and high cpu (Doc ID 2217709.1)
How To Trace ASMCMD on Unix (Doc ID 824354.1)
asm enq: FA - access file
ASM Instance Is Hanging On 'ENQ: FA - ACCESS FILE' (Doc ID 1371297.1)
ASM KFN Operation
Onnn (ASM Connection Pool Processes) Present Memory Leaks Over The Time In 11.2.0.X.0 or 12.1.0.1 RAC/Standalone Database Instances. (Doc ID 1639119.1)
asm GCS lock cvt S
Bug 11710422 - Queries against V$ASM_FILE slow - waiting on "GCS lock open S" events (Doc ID 11710422.8)
Bug 6934636 - Hang possible for foreground processes in ASM instance (Doc ID 6934636.8)
Parallel file allocation slowness on ASM even after applying patch 13253549 (Doc ID 1916340.1)
kfk: async disk IO LGWR
Rebalance hang:: waiting for kfk: async disk IO on other node (Doc ID 1556836.1)
ASM log write(even)
ORA-27601 raised in ASM I/O path due to wrong value inside cellinit.ora under /etc/oracle/cell/network-config (Doc ID 2135801.1)
ASM log write(odd) log write(even) - A closer Look inside Oracle ASM - Luca Canali - CERN
Instance Caging is available from Oracle Database 11g Release 2 onwards.
Database Instance Caging: A Simple Approach to Server Consolidation http://www.oracle.com/technetwork/database/focus-areas/performance/instance-caging-wp-166854.pdf
http://ioug.itconvergence.com/pls/apex/DWBISIG.download_my_file?p_file=2617
Configuring and Monitoring Instance Caging [ID 1362445.1]
CPU count consideration for Oracle Parameter setting when using Hyper-Threading Technology [ID 289870.1]
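Instance caging itself is just two parameters (a minimal sketch; DEFAULT_PLAN is the built-in plan):
{{{
-- enable a resource manager plan, then cap the instance at 4 CPUs
alter system set resource_manager_plan='DEFAULT_PLAN' scope=both;
alter system set cpu_count=4 scope=both;
}}}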
/***
|Name:|InstantTimestampPlugin|
|Description:|A handy way to insert timestamps in your tiddler content|
|Version:|1.0.10 ($Rev: 3646 $)|
|Date:|$Date: 2008-02-27 02:34:38 +1000 (Wed, 27 Feb 2008) $|
|Source:|http://mptw.tiddlyspot.com/#InstantTimestampPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
!!Usage
If you enter {ts} in your tiddler content (without the spaces) it will be replaced with a timestamp when you save the tiddler. Full list of formats:
* {ts} or {t} -> timestamp
* {ds} or {d} -> datestamp
* !ts or !t at start of line -> !!timestamp
* !ds or !d at start of line -> !!datestamp
(I added the extra ! since that's how I like it. Remove it from translations below if required)
!!Notes
* Change the timeFormat and dateFormat below to suit your preference.
* See also http://mptw2.tiddlyspot.com/#AutoCorrectPlugin
* You could invent other translations and add them to the translations array below.
***/
//{{{
config.InstantTimestamp = {
// adjust to suit
timeFormat: 'DD/0MM/YY 0hh:0mm',
dateFormat: 'DD/0MM/YY',
translations: [
[/^!ts?$/img, "'!!{{ts{'+now.formatString(config.InstantTimestamp.timeFormat)+'}}}'"],
[/^!ds?$/img, "'!!{{ds{'+now.formatString(config.InstantTimestamp.dateFormat)+'}}}'"],
// thanks Adapted Cat
[/\{ts?\}(?!\}\})/ig,"'{{ts{'+now.formatString(config.InstantTimestamp.timeFormat)+'}}}'"],
[/\{ds?\}(?!\}\})/ig,"'{{ds{'+now.formatString(config.InstantTimestamp.dateFormat)+'}}}'"]
],
excludeTags: [
"noAutoCorrect",
"noTimestamp",
"html",
"CSS",
"css",
"systemConfig",
"systemConfigDisabled",
"zsystemConfig",
"Plugins",
"Plugin",
"plugins",
"plugin",
"javascript",
"code",
"systemTheme",
"systemPalette"
],
excludeTiddlers: [
"StyleSheet",
"StyleSheetLayout",
"StyleSheetColors",
"StyleSheetPrint"
// more?
]
};
TiddlyWiki.prototype.saveTiddler_mptw_instanttimestamp = TiddlyWiki.prototype.saveTiddler;
TiddlyWiki.prototype.saveTiddler = function(title,newTitle,newBody,modifier,modified,tags,fields,clearChangeCount,created) {
tags = tags ? tags : []; // just in case tags is null
tags = (typeof(tags) == "string") ? tags.readBracketedList() : tags;
var conf = config.InstantTimestamp;
if ( !tags.containsAny(conf.excludeTags) && !conf.excludeTiddlers.contains(newTitle) ) {
var now = new Date();
var trans = conf.translations;
for (var i=0;i<trans.length;i++) {
newBody = newBody.replace(trans[i][0], eval(trans[i][1]));
}
}
// TODO: use apply() instead of naming all args?
return this.saveTiddler_mptw_instanttimestamp(title,newTitle,newBody,modifier,modified,tags,fields,clearChangeCount,created);
}
// you can override these in StyleSheet
setStylesheet(".ts,.ds { font-style:italic; }","instantTimestampStyles");
//}}}
https://github.com/intel-analytics/BigDL
https://bigdl-project.github.io/0.5.0/#presentations/
https://bigdl-project.github.io/0.5.0/#ScalaUserGuide/examples/
https://github.com/intel-analytics/BigDL-Tutorials
https://github.com/intel-analytics/BigDL-Tutorials/blob/master/notebooks/neural_networks/linear_regression.ipynb
https://github.com/intel-analytics/BigDL-Tutorials/blob/master/notebooks/neural_networks/lstm.ipynb
https://github.com/intel-analytics/BigDL-Tutorials/blob/master/notebooks/spark_basics/DataFrame.ipynb
https://github.com/intel-analytics/BigDL-Tutorials/blob/master/notebooks/spark_basics/spark_sql.ipynb
https://github.com/intel-analytics/BigDL/blob/branch-0.1/pyspark/example/tutorial/simple_text_classification/text_classfication.ipynb
bigdl vs h2o https://www.google.com/search?q=bigdl+vs+h2o
https://www.infoworld.com/article/3158162/artificial-intelligence/intels-bigdl-deep-learning-framework-snubs-gpus-for-cpus.html
https://mapr.com/blog/tensorflow-mxnet-caffe-h2o-which-ml-best/
-tick tock model
http://en.wikipedia.org/wiki/Intel_Tick-Tock
http://www.intel.com/content/www/us/en/silicon-innovations/intel-tick-tock-model-general.html
<<<
"Tick-Tock" is a model adopted by chip manufacturer Intel Corporation since 2007 to follow every microarchitectural change with a die shrink of the process technology. Every "tick" is a shrinking of process technology of the previous microarchitecture and every "tock" is a new microarchitecture.[1] Every year, there is expected to be one tick or tock.[1]
<<<
http://en.wikipedia.org/wiki/List_of_Intel_CPU_microarchitectures
http://www.extremetech.com/computing/116561-the-death-of-cpu-scaling-from-one-core-to-many-and-why-were-still-stuck
Identify Data Dictionary Inconsistency
Doc ID: 456468.1
X tables
http://www.adp-gmbh.ch/ora/misc/x.html
http://www.stormloader.com/yonghuang/computer/x$table.html
The names for the x$ tables can be queried with
select kqftanam from x$kqfta;
How To Give Grant Select On X$ Objects In Oracle 10g?
Doc ID: Note:453076.1
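You can't grant directly on the X$ fixed tables (ORA-02030); the usual workaround per the note above is a SYS-owned view (a sketch, the view name is made up):
{{{
-- as SYS: wrap the fixed table in a view, then grant on the view
create or replace view x_kqfta as select * from x$kqfta;
grant select on x_kqfta to dba;
}}}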
Script to Extract SQL Statements for all V$ Views
Doc ID: Note:132793.1
-- chinese
http://translate.google.com/translate?sl=auto&tl=en&u=http://www.oracledatabase12g.com/archives/oracle-internal-research.html
How an Oracle block# is mapped to a file offset (in bytes or OS blocks) [ID 761734.1]
-- OCI
Howto Trace Clientside Applications on OCI Level On Windows [ID 749498.1]
How to Perform Client-Side Tracing of Programmatic Interfaces on Windows Platforms [ID 216912.1]
alter index idx_empid invisible; <-- make the index invisible
select /*+ index(employee idx_empid) */ * from employee where empid = 1001; <-- even with a hint, the optimizer will not use an invisible index
alter session set optimizer_use_invisible_indexes = true; <-- with this set, the optimizer is aware of invisible indexes and may use them
http://viralpatel.net/blogs/2010/06/invisible-indexes-in-oracle-11g.html
http://oracletoday.blogspot.com/2007/08/invisible-indexes-in-11g.
http://www.orafaq.com/forum/t/159978/0/
http://avdeo.com/2011/03/23/virual-index-and-invisible-index/ <-- virtual and invisible indexes
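End to end it looks like this (a minimal sketch, assuming an employee(empid) table):
{{{
create index idx_empid on employee(empid) invisible;   -- maintained by DML, but ignored by the optimizer
select * from employee where empid = 1001;             -- full scan, the index is not considered
alter session set optimizer_use_invisible_indexes = true;
select * from employee where empid = 1001;             -- now the optimizer may pick idx_empid
alter index idx_empid visible;                         -- promote it once you're happy with the plan
}}}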
! CMAN package conflict
<<<
If you have installed the Cluster RPM group then you will hit an RPM conflict on CMAN.. the workaround is to remove the CMAN package
<<<
! FENCE AGENTS error
<<<
since I got the unsigned fence-agents RPM I have to disable the gpg-check on the yum repo
<<<
! VDS service and LIBVIRTD issue
NOTE: Do this before adding the host if you are going to place RHEVM on the same host
{{{
THE HOST IS UNRESPONSIVE AND I HAVE TO RESTART/START THE VDS SERVICE TOGETHER WITH LIBVIRTD
[root@iceman ~]# chkconfig --list | egrep -i "libvirt|vds"
libvirtd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
vdsmd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]# service libvirtd status
libvirtd (pid 6745) is running...
[root@iceman ~]#
[root@iceman ~]# service vdsmd status
Using /usr/share/vdsm/vdsm
VDS daemon server is running
AFTER RESTART
[root@iceman ~]# chkconfig --list | egrep -i "libvirt|vds"
libvirtd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
vdsmd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]# service libvirtd status
libvirtd (pid 5387) is running...
[root@iceman ~]#
[root@iceman ~]# service vdsmd status
Using /usr/share/vdsm/vdsm
VDS daemon is not running
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]# date
Sat Nov 7 16:34:30 PHT 2009
NOW START THE VDSM AND LIBVIRTD
[root@iceman ~]# service vdsmd stop
Using /usr/share/vdsm/vdsm
Shutting down vdsm daemon:
vdsm: not running [FAILED]
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]# service libvirtd stop
Stopping libvirtd daemon: [ OK ]
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]# service vdsmd stop
Using /usr/share/vdsm/vdsm
Shutting down vdsm daemon:
vdsm: not running [FAILED]
[root@iceman ~]#
[root@iceman ~]# service vdsmd start
Using /usr/share/vdsm/vdsm
Starting up vdsm daemon:
vdsm start [ OK ]
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]# service libvirtd start
Starting libvirtd daemon: [ OK ]
FOUND OUT THAT IT FAILS TO CONNECT TO THE DB
CHANGE THE FOLLOWING BEFORE RESTART
[root@iceman ~]# chkconfig --list | egrep -i "libvirt|vds"
libvirtd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
vdsmd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]# chkconfig --level 2345 libvirtd on
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]# chkconfig --list | egrep -i "libvirt|vds"
libvirtd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
vdsmd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
AFTER RESTART STILL VDSMD NOT RUNNING
[root@iceman ~]# service libvirtd status
libvirtd (pid 5393) is running...
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]# service vdsmd status
Using /usr/share/vdsm/vdsm
VDS daemon is not running
[root@iceman ~]#
[root@iceman ~]#
[root@iceman ~]# date
Sat Nov 7 17:01:18 PHT 2009
YEAH VDS IS NOT REALLY STARTING MAYBE BECAUSE RHEVM IS ON THE SAME SERVER AND IT DOES NOT DETECT THE HOST
[root@iceman vdsm]# ls -ltr
total 4752
drwxr-xr-x 2 vdsm kvm 4096 Oct 1 23:43 backup
-rw-rw---- 1 vdsm kvm 0 Nov 3 13:00 metadata.log
-rw-rw---- 1 vdsm kvm 4848393 Nov 7 16:56 vdsm.log.bak
[root@iceman vdsm]#
[root@iceman vdsm]#
[root@iceman vdsm]# cd backup/
[root@iceman backup]# ls
[root@iceman backup]# cd ..
[root@iceman vdsm]# ls
backup metadata.log vdsm.log.bak
[root@iceman vdsm]# cat metadata.log
[root@iceman vdsm]#
[root@iceman vdsm]#
[root@iceman vdsm]# service vdsmd status
Using /usr/share/vdsm/vdsm
VDS daemon is not running
AS A WORKAROUND I ADDED THIS LINE
[root@iceman ~]# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
service libvirtd stop
service vdsmd stop
service vdsmd start
service libvirtd start
}}}
! Mounting NFS RPC host error
<<<
still has to be researched
<<<
! VirtIO
<<<
On Linux, when creating a new virtual disk, if you choose VirtIO the device name will be /dev/vda
On Windows, there are specific drivers to use the VirtIO.. see KBASE links
<<<
! When using vmware and KVM together
{{{
[karao@karl ~]$ cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
# temporarily removes the kvm module
/etc/init.d/libvirtd stop
modprobe -r kvm_intel
modprobe -r kvm
}}}
<<<
The latest Itanium is the Montvale.. http://en.wikipedia.org/wiki/List_of_Intel_Itanium_microprocessors#Montvale_.2890_nm.29
From Oracle's certification matrix, it is still the Montecito. Although I saw one benchmark where Montvale was used (http://www.intel.com/performance/server/itanium/summary.htm)
If they want to verify if Montvale is supported they can file an SR for that. Below is the certification for 10gR2 (both single instance & RAC)
10gR2 64-bit Linux Itanium Red Hat Enterprise 5 Certified
10gR2 64-bit Linux Itanium Red Hat Enterprise 4 Certified
10gR2 64-bit Linux Itanium SLES-9 Certified
10gR2 RAC Linux Itanium Red Hat Enterprise 4 Certified
10gR2 RAC Linux Itanium Red Hat Enterprise 3 Certified
10gR2 RAC Linux Itanium SLES-8 Certified
10gR2 RAC Linux Itanium SLES-9 Certified
10gR2 RAC Linux Itanium Red Hat Enterprise 2.1 Certified
If they are in the process of evaluation, I would still go for the multicore Xeon (Nehalem). If they've not heard the news that Red Hat will not support Itanium on RHEL 6, better read this http://www.theregister.co.uk/2009/12/18/redhat_rhel6_itanium_dead/
Below are more articles regarding Itanium on the Oracle Support site:
Support of Linux and Oracle Products on Linux (Doc ID 266043.1)
How To Install Oracle RDBMS Software On Itanium Servers With Montecito Processors (Doc ID 400227.1)
Requirements for Installing Oracle 10gR2 RDBMS on RHEL 5 on Linux Itanium (ia64) (Doc ID 748378.1)
Frequently Asked Questions: Oracle E-Business Suite Support on Itanium (Doc ID 311717.1)
How To Identify A Server Which Has Intel® Montecito Processors Installed (Doc ID 401332.1)
Oracle® Database on Unix AIX®,HP-UX®,Linux®,Mac OS® X,Solaris®,Tru64 Unix® Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2) (Doc ID 169706.1)
Installing Oracle Data Integrator On Intel Itanium (64-bit) Hardware (Doc ID 451928.1)
<<<
the following are things to try:
1) Oracle Net Listener Connection Rate Limiter
http://www.oracle.com/technetwork/database/enterprise-edition/oraclenetservices-connectionratelim-133050.pdf
> setup another listener
> connect swingbench to that new listener
2) DRCP on JDBC
> setup DRCP on 11.2 database
> run swingbench using JDBC connection
Example: Identifying Connection String Problems in JDBC Driver
Doc ID: Note:94091.1
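For item 2 above, starting the pool is one call as SYSDBA and the client opts in through the connect string (a sketch; the service name is a placeholder):
{{{
-- start the database resident connection pool
exec dbms_connection_pool.start_pool;
-- clients then ask for a pooled server in the connect descriptor:
-- (CONNECT_DATA=(SERVICE_NAME=orcl)(SERVER=POOLED))
}}}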
https://jonathanlewis.wordpress.com/2015/12/03/five-hints/
https://www.doag.org/formes/pubfiles/7502432/2015-K-DB-Jonathan_Lewis-Five_Hints_for_Optimising_SQL-Praesentation.pdf
{{{
Merge / no_merge — Whether to use complex view merging
Push_pred / no_push_pred — What to do with join predicates to non-merged views
Unnest / no_unnest — Whether or not to unnest subqueries
Push_subq / no_push_subq — When to handle a subquery that has not been unnested
Driving_site — Where to execute a distributed query
}}}
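For example, no_merge and push_pred together control how a join into a view runs (a sketch, table and view names made up):
{{{
select /*+ no_merge(v) push_pred(v) */ t.id, v.total
  from t,
       (select cust_id, sum(amt) total from orders group by cust_id) v
 where t.id = v.cust_id(+);
-- no_merge(v): keep the aggregate view as its own query block
-- push_pred(v): push the join predicate t.id = v.cust_id into the view
}}}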
.
http://jsonviewer.stack.hu/
http://www.jcon.no/oracle/?p=1942
<<<
Part 1: Install/setup Oracle database (in docker)
Part 2: Installing Java (JDK), Eclipse and Maven
Part 3: Git, Oracle schemas and your first Java application
Part 4: Your first JDBC Application
Part 5: Spring-Boot, JdbcTemplate & DB migration (Using FlywayDB)
Part 6: Spring-boot, JPA and Hibernate (this)
<<<
http://stackoverflow.com/questions/647116/how-to-decompile-a-whole-jar-file
http://stackoverflow.com/questions/31353/is-jad-the-best-java-decompiler
http://www.youtube.com/watch?v=mcWuYbn4NBg
on rhel 4
{{{
-- INSTALL JAVA FROM SUN
1) install rpm /usr/java/<version>
2) make symbolic link
ln -s /usr/java/j2sdk1.4.2_16 /usr/java/jdk
3) "which java"
4) go to /etc/profile.d
5) edit "java.sh"
[root@sqlnbcn-004 profile.d]# cat java.sh
export JAVA_HOME='/usr/java/jdk'
export PATH="${JAVA_HOME}/bin:${PATH}"
}}}
on rhel5
{{{
alternatives --config java
alternatives --install link name path priority
alternatives --install /usr/bin/java java /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java 2
alternatives --config java
java -version
[root@desktopserver ~]# java -version
java version "1.4.2"
gij (GNU libgcj) version 4.1.2 20080704 (Red Hat 4.1.2-51)
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
[root@desktopserver ~]# alternatives --install /usr/bin/java java /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java 2
[root@desktopserver ~]# alternatives --config java
There are 2 programs which provide 'java'.
Selection Command
-----------------------------------------------
*+ 1 /usr/lib/jvm/jre-1.4.2-gcj/bin/java
2 /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java
Enter to keep the current selection[+], or type selection number: 2
[root@desktopserver ~]# alternatives --config java
There are 2 programs which provide 'java'.
Selection Command
-----------------------------------------------
* 1 /usr/lib/jvm/jre-1.4.2-gcj/bin/java
+ 2 /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java
Enter to keep the current selection[+], or type selection number: ^C
[root@desktopserver ~]# java -version
java version "1.5.0_30"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_30-b03)
Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_30-b03, mixed mode)
}}}
-- USE JAVA SHIPPED WITH ORACLE SOFTWARE
$ cd $ORACLE_HOME/jre/1.4.2/bin
$ setenv PATH $ORACLE_HOME/jre/1.4.2/bin:$PATH
-- install guides fedora
http://fedorasolved.org/browser-solutions/java-i386
http://www.mjmwired.net/resources/mjm-fedora-f12.html#java
http://oliver.net.au/?p=92
jenkins CI/CD and Github in One Hour Video Course
https://learning.oreilly.com/videos/jenkins-ci-cd-and/50106VIDEOPAIML/50106VIDEOPAIML-c1_s1
http://oraclepoint.com/oralife/2012/02/16/how-to-set-up-the-job-scheduling-via-sudo-on-oem/
https://blogs.oracle.com/optimizer/optimizer-transformation:-join-predicate-pushdown
<<<
The decision whether to push down join predicates into a view is determined by evaluating the costs of the outer query with and without the join predicate pushdown transformation under Oracle's cost-based query transformation framework.
The join predicate pushdown transformation applies to both non-mergeable views and mergeable views and to pre-defined and inline views as well as to views generated internally by the optimizer during various transformations. The following shows the types of views on which join predicate pushdown is currently supported.
UNION ALL/UNION view
Outer-joined view
Anti-joined view
Semi-joined view
DISTINCT view
GROUP-BY view
<<<
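The typical shape this transformation targets (a sketch, names made up) is an outer-joined UNION ALL view, where pushing t.id into each branch lets both branches use an index on id:
{{{
select t.name, v.val
  from t,
       (select id, val from sales_2019
        union all
        select id, val from sales_2020) v
 where v.id(+) = t.id;
}}}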
! left join and left outer join are the same
https://stackoverflow.com/questions/5706437/whats-the-difference-between-inner-join-left-join-right-join-and-full-join
https://www.quora.com/What-is-the-difference-between-left-join-and-left-outer-join-in-sql
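a quick sketch of the equivalence, using the classic emp/dept tables:
{{{
-- these two statements are identical; OUTER is optional noise
select e.ename, d.dname from emp e left join dept d on e.deptno = d.deptno;
select e.ename, d.dname from emp e left outer join dept d on e.deptno = d.deptno;
-- both return every emp row, with dept columns null where there is no match
}}}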
http://jonathanlewis.wordpress.com/2010/08/02/joins/
http://jonathanlewis.wordpress.com/2010/08/09/joins-nlj/
http://jonathanlewis.wordpress.com/2010/08/10/joins-hj/
http://jonathanlewis.wordpress.com/2010/08/15/joins-mj/
-- Optimizing two table join - video! TROUG
http://jonathanlewis.wordpress.com/2011/06/23/video/
SQL Joins Graphically
http://db-optimizer.blogspot.com/2010/09/sql-joins-graphically.html based on http://www.codeproject.com/KB/database/Visual_SQL_Joins.aspx?msg=2919602
http://db-optimizer.blogspot.com/2009/06/sql-joins.html based on http://blog.mclaughlinsoftware.com/oracle-sql-programming/basic-sql-join-semantics/
http://www.gplivna.eu/papers/sql_join_types.htm
http://www.oaktable.net/content/sql-joins-visualized-surprising-way
https://stevestedman.com/2015/05/tsql-join-types-poster-version-4/
https://www.techonthenet.com/oracle/joins.php <-- GOOD STUFF
http://searchoracle.techtarget.com/answer/Alternative-to-LEFT-OUTER-JOIN
http://docwiki.embarcadero.com/DBOptimizer/en/Subquery_Diagramming
http://blog.mclaughlinsoftware.com/oracle-sql-programming/basic-sql-join-semantics/
! visualized
[img(100%,100%)[https://i.imgur.com/MsLpVJ2.png]]
[img(100%,100%)[https://i.imgur.com/uDbi422.png]]
[img(100%,100%)[https://i.imgur.com/LGlvRD3.png]]
.
http://www.joomla.org/
http://docs.joomla.org/Main_Page
http://www.cloudaccess.net/joomla-training-video-series-beyond-the-basics.html <-- GOOD STUFF tutorials
http://docs.joomla.org/Can_you_remove_the_%22Powered_by_Joomla!%22_message%3F <-- remove unnecessary stuff
http://docs.joomla.org/Changing_the_site_favicon
http://forums.digitalpoint.com/showthread.php?t=526998 <-- AGGREGATOR
http://3dwebdesign.org/view-document-details/16-joomla-rss-feed-aggregator.html
http://www.associatedcontent.com/article/420973/mastering_joomla_how_to_get_rss_news.html
http://goo.gl/4w1lf
http://extensions.joomla.org/extensions/image/14087
http://3dwebdesign.org/en/rss-feed-aggregators-comparison.html
http://3dwebdesign.org/en/joomla-extensions/wordpress-aggregator-lite.html
http://3dwebdesign.org/en/wordpress-aggregators/wordpress-aggregator-platinum
http://blog.scottlowe.org/2012/08/21/working-with-kvm-guests/
https://networkbuilders.intel.com/
Network Function Virtualization Packet Processing Performance of Virtualized Platforms with Linux* and Intel® Architecture® https://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf
https://confluence.atlassian.com/display/AGILE/Tutorial+-+Tracking+a+Kanban+Team
http://en.wikipedia.org/wiki/Pomodoro_Technique
http://www.businessinsider.com/productivity-hacks-from-startup-execs-2014-5
http://www.quora.com/Productivity/As-a-startup-CEO-what-is-your-favorite-productivity-hack/answer/Paul-A-Klipp?srid=n2Fg&share=1
https://kano.me/app
help.kano.me
kano.me/world
kano.me/shop
youtube/teamkano
* Optimizing Oracle Performance - Chapter 7.1.1 The sys call Transition
* understanding.the.linux.kernel http://oreilly.com/catalog/linuxkernel/chapter/ch10.html
{{{
Be aware that a preempted process is not suspended, since it remains in the TASK_RUNNING state; it simply no longer uses the CPU.
Some real-time operating systems feature preemptive kernels, which means that a process running in Kernel Mode can be interrupted after any instruction, just as it can in User Mode. The Linux kernel is not preemptive, which means that a process can be preempted only while running in User Mode; nonpreemptive kernel design is much simpler, since most synchronization problems involving the kernel data structures are easily avoided (see the section "Nonpreemptability of Processes in Kernel Mode" in Chapter 11, Kernel Synchronization).
}}}
Understanding User and Kernel Mode http://www.codinghorror.com/blog/2008/01/understanding-user-and-kernel-mode.html
http://kevinclosson.wordpress.com/2012/04/16/critical-analysis-meets-exadata/
''Exadata Critical Analysis Part I'' http://www.youtube.com/watch?v=K3lXkIuBJqk&feature=youtu.be
''Exadata Critical Analysis Part II'' http://www.youtube.com/watch?v=0ii5xV9sicM&feature=youtu.be
''Q&A'' http://kevinclosson.wordpress.com/criticalthinking/
''Exadata Deep Dive Part 1'' http://www.youtube.com/watch?v=dw-PnKDrcDE
[[Platform Topics for DBAs]]
http://blog.tanelpoder.com/2010/02/17/how-to-cancel-a-query-running-in-another-session/
http://oracle-randolf.blogspot.com/2011/11/how-to-cancel-query-running-in-another.html
! new
{{{
set serveroutput on
BEGIN
FOR c IN (
SELECT username, machine, osuser, sid, serial#, inst_id
FROM sys.gv_$session
WHERE sql_id = '549wyn38pr0hd'
)
LOOP
EXECUTE IMMEDIATE 'alter system kill session ''' || c.sid || ', ' || c.serial# || ', @' || c.inst_id || ''' immediate';
dbms_output.put_line('Kill session : ''' || c.username || ', ' || c.machine || ', ' || c.osuser || ', ' || c.sid || ', ' || c.serial# || ', @' || c.inst_id || ''' ');
END LOOP;
END;
/
}}}
{{{
spool TERMINATE_SESSIONS.SQL
select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' post_transaction;'
from v$process p, v$session s, v$sqlarea sa
where p.addr=s.paddr
and s.username is not null
and s.sql_address=sa.address(+)
and s.sql_hash_value=sa.hash_value(+)
and s.sql_id = '158gjtpj0vzkc'
--and sa.sql_text NOT LIKE '%usercheck%'
--and lower(sa.sql_text) LIKE '%cputoolkit%'
order by status desc;
spool off
set echo on
set feedback on
}}}
<<showtoc>>
! ebook paperwhite convert/transfer
https://calibre-ebook.com/download
http://www.howtogeek.com/69481/how-to-convert-pdf-files-for-easy-ebook-reading/
http://tidbits.com/article/16691
How to send large files to @free.kindle.com to get converted https://www.amazon.com/forum/kindle?_encoding=UTF8&cdForum=Fx1D7SY3BVSESG&cdThread=Tx1EQT6ICAB7D7A
https://transfer.pcloud.com/
Previous Announcements from New in the Knowledge Base
Doc ID: Note:370936.1
Kubernetes Microservices https://learning.oreilly.com/videos/kubernetes-microservices/10000DIHV201804/?autoplay=false
! tutorial for oracle
https://www.devart.com/dotconnect/oracle/articles/tutorial_linq.html
<<showtoc>>
! what is this?
there's an intermittent slowness on the SQLLDR process coming from any of the 150 app servers
the SQLLDR process was just spinning on CPU; the ASH data just shows "ON CPU" and that's it
so what we did was profile a good and a bad (long-running/slow) session with snapper and pstack and compare the numbers
{{{
---------------------------------------------------------------------------------------------------------------
ActSes %Thread | INST | SQL_ID | SQL_CHILD | EVENT | WAIT_CLASS
---------------------------------------------------------------------------------------------------------------
1.00 (100%) | 1 | 9vgb48rzqvqqz | 0 | ON CPU | ON CPU
}}}
From the pstack comparison of the good and the bad, KDZH is the EHCC function. So the underlying table is EHCC-compressed, and we already have a bug open on SQLLDR and EHCC.
And these are the other related bugs/issues:
Bug 14690273 : SQLLDR INSERTS VERY SLOW/HANGS WITH ADVANCED COMPRESSION (EXADATA)
ORA-04030 A Direct Load Into Many Partitions With Huge Allocation Request From "KLLCQGF:KLLSLTBA" (Doc ID 1578849.1)
so the session is just spinning on CPU , but under the covers it seems to be waiting on that compression function
and look how low the numbers are in general for the slow run, given that we sampled this for 5 minutes and the good one for just a few seconds (10K-range numbers on ENQG vs the good)
possibly the slow run (for whatever reason) is just holding the other enqueues back, and so TM/TX just show up
TX is not even the cause here; it's more "whatever is left" from the "big chunk of the stuff is stuck" (potentially because of KDZ - compression)
! the commands
{{{
@snapper all,gather=a 5 1 (<instance>, <pid>) <- create a sql file with 60 lines of this snapper command and spool to a text file, that's a 5-minute sample
pstack <ospid> <- as root user
}}}
! session - good profile
[img(90%,90%)[ https://raw.githubusercontent.com/karlarao/blog/82d44c4578e610044eef25f62b6c25d4ffb181b7/images/20160427_lios/good.png ]]
!! good pstack
{{{
#0 0x000000000950707d in kcbgtcr () kcb cache manages Oracle's buffer cache operation as well as operations used by capabilities such as direct load, hash clusters, etc.
#1 0x000000000957a178 in ktrget3 () txn/lcltx ktr - kernel transaction read consistency
#2 0x000000000957981e in ktrget2 () txn/lcltx ktr - kernel transaction read consistency
#3 0x00000000094d5bc7 in kdst_fetch () kds kdt kdu ram/data operations on data such as retrieving a row and updating existing row data
#4 0x0000000000cb87ec in kdstfRRRRRRRRRRRkmP () kds kdt kdu ram/data operations on data such as retrieving a row and updating existing row data
#5 0x00000000094bd0f4 in kdsttgr () kds kdt kdu ram/data operations on data such as retrieving a row and updating existing row data
#6 0x000000000976f979 in qertbFetch () sqlexec/rowsrc row source operators
#7 0x000000000269f3f3 in qergsFetch () sqlexec/rowsrc row source operators
#8 0x0000000009615052 in opifch2 ()
#9 0x000000000961457e in opifch ()
#10 0x000000000961b68f in opiodr ()
#11 0x00000000096fbdd7 in rpidrus ()
#12 0x000000000986e3d8 in skgmstack ()
#13 0x00000000096fd8c8 in rpiswu2 ()
#14 0x00000000096fceeb in rpidrv ()
#15 0x00000000096ff420 in rpifch ()
#16 0x00000000010eed56 in ktsi_is_dmts ()
#17 0x0000000000c2e2b0 in kdbl_is_dmts ()
#18 0x0000000000c2bc8a in kdblfpl ()
#19 0x0000000000c0b629 in kdblfl ()
#20 0x000000000203ab01 in klafin ()
#21 0x0000000001cde467 in kpodpfin ()
#22 0x0000000001cdc35b in kpodpmop ()
#23 0x000000000961b68f in opiodr ()
#24 0x000000000980a6af in ttcpip ()
#25 0x000000000196d78e in opitsk ()
#26 0x00000000019722b5 in opiino ()
#27 0x000000000961b68f in opiodr ()
#28 0x00000000026ecb43 in opirip ()
#29 0x000000000196984d in opidrv ()
#30 0x0000000001f56827 in sou2o ()
#31 0x0000000000a2a236 in opimai_real ()
#32 0x0000000001f5cb45 in ssthrdmain ()
#33 0x0000000000a2a12d in main ()
}}}
! session - bad profile
[img(90%,90%)[ https://raw.githubusercontent.com/karlarao/blog/82d44c4578e610044eef25f62b6c25d4ffb181b7/images/20160427_lios/bad.png ]]
!! bad pstack
{{{
#0 0x0000000002d29459 in kdzca_cval_init ()
#1 0x0000000002d05d4a in kdzcompress ()
#2 0x0000000002d05c12 in kdzcompress_target_size ()
#3 0x0000000000cb994d in kdzhcl () ehcc related
#4 0x0000000000c10818 in kdblsync () kdbl kdc kdd ram/data support for direct load operation, cluster space management and deleting rows
#5 0x0000000000c0e851 in kdblcmtt () kdbl kdc kdd ram/data support for direct load operation, cluster space management and deleting rows
#6 0x000000000203a814 in kladsv () kla klc klcli klx tools/sqlldr support for direct path sql loader operation
#7 0x0000000001cdc3f8 in kpodpmop () kpoal8 kpoaq kpob kpodny kpodp kpods kpokgt kpolob kpolon kpon progint/kpo support for programmatic operations
#8 0x000000000961b68f in opiodr ()
#9 0x000000000980a6af in ttcpip ()
#10 0x000000000196d78e in opitsk ()
#11 0x00000000019722b5 in opiino ()
#12 0x000000000961b68f in opiodr ()
#13 0x00000000026ecb43 in opirip ()
#14 0x000000000196984d in opidrv ()
#15 0x0000000001f56827 in sou2o ()
#16 0x0000000000a2a236 in opimai_real ()
#17 0x0000000001f5cb45 in ssthrdmain ()
#18 0x0000000000a2a12d in main ()
}}}
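If a single pstack isn't conclusive, sampling it in a loop (poor man's profiler) shows which top frames the process keeps landing on. A sketch, run as root, assuming the target ospid is exported as OSPID:
{{{
# take 30 stack samples a second apart and count the hottest top frames
for i in $(seq 1 30); do
  pstack $OSPID | head -3
  sleep 1
done | sort | uniq -c | sort -rn | head
}}}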
! non-viz way, just do a grep/sort on raw data
{{{
$ cat snapper_all_bad_5min.txt | grep ENQG | sort -n -k9
-1 @1, , ENQG, TX - Transaction , 8064, 1.53k, , , , ,
-1 @1, , ENQG, TX - Transaction , 8109, 1.52k, , , , ,
-1 @1, , ENQG, TX - Transaction , 8447, 1.46k, , , , ,
-1 @1, , ENQG, TX - Transaction , 8716, 1.65k, , , , ,
-1 @1, , ENQG, TX - Transaction , 9051, 1.59k, , , , ,
-1 @1, , ENQG, TX - Transaction , 9196, 1.52k, , , , ,
-1 @1, , ENQG, TX - Transaction , 9382, 1.76k, , , , ,
-1 @1, , ENQG, TX - Transaction , 9450, 1.79k, , , , ,
-1 @1, , ENQG, TX - Transaction , 9940, 1.74k, , , , ,
-1 @1, , ENQG, TX - Transaction , 11031, 1.85k, , , , ,
}}}
! references
Tanel’s blog on the gather=a option: http://blog.tanelpoder.com/2009/11/19/finding-the-reasons-for-excessive-logical-ios/
{{{
ALTER TABLE .. MODIFY LOB (..)(CACHE);
alter table mytable modify lob (mycolumn) (cache) ;
}}}
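To verify which LOBs are actually CACHE/NOCACHE before and after the ALTER, something like this works (a sketch; the schema name is a placeholder):
{{{
sqlplus -s / as sysdba <<'EOF'
select table_name, column_name, cache, logging
from dba_lobs
where owner = 'SCOTT';  -- placeholder schema
EOF
}}}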
https://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
LOB performance guidelines http://www.oracle.com/technetwork/articles/lob-performance-guidelines-128437.pdf
http://support.esri.com/fr/knowledgebase/techarticles/detail/35521
{{{
High fsync() times to VRTSvxfs Files can be reduced using Solaris VMODSORT Feature [ID 842718.1]
Symptoms
When RDBMS processes perform cached writes to files (i.e. writes which are not issued by DBWR)
such as to a LOB object which is
stored out-of-line (e.g. because the LOB column length exceeds 3964 bytes)
and for which "STORE AS ( NOCACHE )" option has not been used
then increased processing times can be experienced which are due to longer fsync() call times to flush the dirty pages to disk.
Changes
Performing (datapump) imports or writes to LOB segments and
1. running "truss -faedDl -p " for the shadow or background process doing the writes
shows long times spent in fsync() call.
Example:
create table lobtab(n number not null, c clob);
-- insert.sql
declare
mylob varchar2(4000);
begin
for i in 1..10 loop
mylob := RPAD('X', 3999, 'Z');
insert into lobtab values (i , rawtohex(mylob));
end loop;
end;
/
truss -faedDl sqlplus user/passwd @insert
shows 10 fsync() calls being executed possibly having high elapsed times:
25829/1: 1.3725 0.0121 fdsync(257, FSYNC) = 0
25829/1: 1.4062 0.0011 fdsync(257, FSYNC) = 0
25829/1: 1.4112 0.0008 fdsync(257, FSYNC) = 0
25829/1: 1.4164 0.0010 fdsync(257, FSYNC) = 0
25829/1: 1.4213 0.0008 fdsync(257, FSYNC) = 0
25829/1: 1.4508 0.0008 fdsync(257, FSYNC) = 0
25829/1: 1.4766 0.0207 fdsync(257, FSYNC) = 0
25829/1: 1.4821 0.0006 fdsync(257, FSYNC) = 0
25829/1: 1.4931 0.0063 fdsync(257, FSYNC) = 0
25829/1: 1.4985 0.0007 fdsync(257, FSYNC) = 0
25829/1: 1.5406 0.0002 fdsync(257, FSYNC) = 0
2. Solaris lockstat command showing frequent hold events for fsync internal functions:
Example:
Adaptive mutex hold: 432933 events in 7.742 seconds (55922 events/sec)
------------------------------------------------------------------------
Count indv cuml rcnt nsec Lock Hottest Caller
15052 48% 48% 0.00 385437 vph_mutex[32784] pvn_vplist_dirty+0x368
nsec ------ Time Distribution ------ count Stack
8192 |@@@ 1634 vx_putpage_dirty+0xf0
16384 | 187 vx_do_putpage+0xac
32768 | 10 vx_fsync+0x2a4
65536 |@@@@@@@@@@@@@@@@@@@@@@ 12884 fop_fsync+0x14
131072 | 255 fdsync+0x20
262144 | 30 syscall_trap+0xac
3. AWR report would show increased CPU activity (SYS_TIME is unusual high in Operating System Statistics section).
Cause
The official Sun document explaining this issue is former Solaris Alert # 201248 and new
"My Oracle Support" Doc Id 1000932.1
From a related Sun document:
Sun introduced a page ordering vnode optimization in Solaris 9
and 10. The optimization includes a new vnode flag, VMODSORT,
which, when turned on, indicates that the Virtual Memory (VM)
should maintain the v_pages list in an order depending on if
a page is modified or unmodified.
Veritas File System (VxFS) can now take advantage of that flag,
which can result in significant performance improvements on
operations that depend on flushing, such as fsync.
This optimization requires the fixes for Sun BugID's 6393251 and
6538758 which are included in Solaris kernel patches listed below.
Symantec information about VMODSORT can be found in the Veritas 5.0 MP1RP2 Patch README:
https://sort.symantec.com/patch/detail/276
Solution
The problem is resolved by applying Solaris patches and enabling the VMODSORT
feature in /etc/system:
1. apply patches as per Sun document (please always refer to
the Sun alert for the most current recommended version of patches):
SPARC Platform
VxFS 4.1 (for Solaris 9) patches 122300-11 and 123828-04 or later
VxFS 5.0 (for Solaris 9) patches 122300-11 and 125761-02 or later
VxFS 4.1 (for Solaris 10) patches 127111-01 and 123829-04 or later
VxFS 5.0 (for Solaris 10) patches 127111-01 and 125762-02
x86 Platform
VxFS 5.0 (for Solaris 10) patches 127112-01 and 125847-01 or later
2. enable vmodsort in /etc/system and reboot server
i.e. add line to /etc/system after vxfs forceload:
set vxfs:vx_vmodsort=1 * enable vxfs vmodsort
Please be aware that enabling VxFS VMODSORT functionality without
the correct OS kernel patches can result in data corruption.
References
http://sunsolve.sun.com/search/document.do?assetkey=1-66-201248-1
}}}
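A quick sanity check that the tunable actually made it in and that the vxfs module is loaded (a sketch, Solaris commands):
{{{
# confirm the /etc/system entry and that vxfs is present
grep vx_vmodsort /etc/system
modinfo | grep -i vxfs
}}}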
http://neerajbhatia.wordpress.com/2011/10/07/capacity-planning-and-performance-management-on-ibm-powervm-virtualized-environment/
also check on this youtube video http://www.youtube.com/watch?v=WphGQx-N98U PowerBasics What is a Virtual Processor? and Shared Processor
{{{
Some possible actions in case of threshold violations are investigating the individual partitions contributing to the server's utilization, workload management if possible, or as a last resort stopping/migrating the least critical partition. Workload behavior of partitions is very important and configuration needs to be done in such a way that not many partitions compete for the processor resources at the same time.
One gauge of a system's health is CPU run queue length. The run-queue length represents the number of processes that are currently running or waiting (queued) to run. Setting thresholds for run queue length is tricky in a partitioned environment because uncapped partitions can potentially consume more than their entitlement, up to the number of virtual processors. SMT introduces further complexity as it enables parallel execution: 2 simultaneous threads on Power5 and Power6 and 4 on Power7 environments.
To summarize – entitlement should be defined in such a way that it represents "nearly right" capacity requirements for a partition. Thus on average each partition's entitled capacity utilization would be close to 100 percent and there will be a balance between capacity donors and borrowers in the system. While reviewing a partition's utilization it's important to know that any capacity used beyond entitled capacity isn't guaranteed (as it might be some other partition's entitlement). Therefore, if a partition's entitled CPU utilization is beyond 100 percent, it might be forced back down to 100 percent if another partition requires that borrowed capacity. Processing units also decide the number of partitions that can run on a system. As the total processing units of all partitions running on a system cannot be more than the number of physical processors, by assigning smaller processing units you can maximize the number of partitions on a system.
- Have separate shared-processor pools for production partitions. But the scope of this solution is limited as the multiple shared-processor pools capability is only available on Power6 and Power7 based systems.
- Configure the non-production partitions as capped. Capped partitions are restricted from consuming additional processor cycles beyond their entitled capacity.
- A more flexible way is to configure the non-production partitions as uncapped and keep their uncapped weight to a minimum. The number of virtual processors should be set to the maximum physical CPUs which you think a partition should consume. This will effectively cap the partition at its number of virtual processors. The benefit of this approach is that non-production partitions can get additional resources up to their virtual processors but at the same time will remain harmless to production partitions with higher uncapped weights.
- Determine the purpose and nature of the applications to be run on the partition, like a web server supporting an online web store or the batch database of a banking system.
- Understand the business workload profile.
- Identify any seasonal or periodic trends and their impact on the workload profile.
- Understand the busiest hour in the working day, the busiest day in the week, the busiest month of the year.
- Calculate the processing requirements necessary to support the workload profiles.
It is always better to measure and forecast the capacity in business metric terms because that's what the business understands, and the same units are used by the business to perceive the performance and throughput and to forecast the business demand. We will call our business metrics metric1 and metric2.
Clearly the current value of entitled capacity of 2.0 processing units is not going to support additional workload. So based on this analysis, we should increase the entitled CPUs to 4 and, to keep some margin for unexpected workload, set the virtual processors to 5 or 6. Another option worth considering for reducing the pressure on additional processing capacity is to shift the metric2 workload by a few hours, if possible. It will reduce the chances of running two business processes at the same time and the resulting CPU spikes. Such workload management options should be more important from the business perspective than their technical implications. I have simplified the illustration a lot but the principle of capacity planning would be the same.
}}}
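A back-of-the-envelope version of the entitlement math above (the 2.0/3.5 figures are hypothetical):
{{{
# partition entitled 2.0 processing units, currently consuming 3.5 physical CPUs
ENT=2.0; PHYSC=3.5
echo "scale=1; $PHYSC*100/$ENT" | bc   # 175.0 -> running at 175% of entitlement, 1.5 CPUs are borrowed
}}}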
http://www.ibm.com/developerworks/wikis/display/WikiPtype/CPU+frequency+monitoring+using+lparstat
''A Comparison of Virtualization Features of HP-UX, Solaris & AIX'' http://www.osnews.com/comments/20393
''A comparison of virtualization features of HP-UX, Solaris and AIX'' http://www.ibm.com/developerworks/aix/library/au-aixvirtualization/?ca=dgr-jw30CompareFeatures&S_TACT=105AGX59&S_cmp=GRsitejw30
https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/it_s_good_when_it_goes_wrong_and_i_am_on_holiday_nmon_question_peaks291?lang=en
<<<
"Shared CPU, Uncapped LPAR utilisation number do not look right nor does the average of the logical CPUs?"
Correct. They are very misleading.
I have been pointing this out for 5+ years.
For these types of LPARs, you need to monitor the physical CPU use.
The problem is the utilisation numbers (User+System) get to roughly 95% as you get to the Entitlement and stay at just below 100% as you use double, quadruple or higher numbers of physical CPU. They do not show you how much CPU time you are using above Entitlement.
Plus you can't average the logical CPUs (these are the SMT threads) to get the machine average because they are time-sharing the physical CPUs.
Also for Dedicated CPU LPARs all the Shared Processor stats don't mean anything, so they are not collected and there is no LPAR Tab in the nmon Analyser.
Lesson: POWER systems are function-rich with advanced features, which means we can't use 1990s stats to understand them.
<<<
<<<
There are two main critical LPARs on the heavily over-committed machine - By this I mean that if you add up the LPAR Entitlements of a machine they have to add up to at most the number of physical CPUs in the shared pool. But they have most LPARs Uncapped with the Virtual CPU (spreading factor) number much higher than the Entitlement. Normally, I don't recommend this for performance, as the LPAR has to compete for CPU cycles above the Entitlement. In this case, the two main LPARs have an Entitlement of 6 to 10 CPUs but a Virtual CPU of 40. Now the bad news, these two LPARs are busy at the same time - they are doing a database unload in one and a load of the same data in the other LPAR. If I tell you the machine has 64 physical CPUs, you can immediately see the problem. Both LPARs can't get 40 CPUs at the same time (we can't run 80 Virtual CPUs flat-out on 64 physical CPUs) and that does not include the other LPARs also running.
<<<
''vmstat physical cpu''
http://aix4admins.blogspot.com/2011/09/vmstat-t-5-3-shows-3-statistics-in-5.html
{{{
To measure cpu utilization measure us+sy together (and compare it to physc):
- if us+sy is always greater than 80%, then CPU is approaching its limits (but check physc as well and in "sar -P ALL" for each lcpu)
- if us+sy = 100% -> possible CPU bottleneck, but in an uncapped shared lpar check physc as well.
- if sy is high, your appl. is issuing many system calls to the kernel and asking the kernel to work. It measures how heavily the appl. is using kernel services.
- if sy is higher than us, this means your system is spending less time on real work (not good)
Don't forget to compare these values with outputs where each logical CPU can be seen (like "sar -P ALL 1 5")
Some examples where the physical consumption of a CPU should also be looked at when SMT is on:
- us+sy=16%, but physc=0.56: I see 16% of a CPU utilized, but actually half of a physical CPU (0.56) is used.
- if us+sy=100 and physc=0.45 we have to look at both. If someone says 100% is used, then 100% of what? 100% of half of a CPU (physc=0.45) is used.
- %usr+%sys=83% for lcpu 0 (output from command sar). It looks like a high number at first sight, but if you check physc, you can see only 0.01 physical core has been used, and the entitled capacity is 0.20, so this 83% is actually very little CPU consumption.
}}}
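Putting that last sar example into numbers (a quick bc sketch using the figures quoted above; %usr+%sys is relative to the physical capacity the lcpu actually consumed):
{{{
# lcpu shows us+sy=83% but physc=0.01 and entitlement=0.20
US_SY=83; PHYSC=0.01; ENT=0.20
echo "scale=4; ($US_SY/100)*$PHYSC" | bc   # .0083 of a core actually busy
echo "scale=2; $US_SY*$PHYSC/$ENT" | bc    # 4.15 -> only ~4% of the entitled capacity
}}}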
my LVM config conversation with Rhojel Echano showing how I configured the devices and the idea/reasoning behind it, also showing the partition table and layout
https://www.evernote.com/shard/s48/sh/fd84183b-293b-45b1-8d89-3fc13e945506/16f222922fe85eeed19aaa722bf1ff42
remember the beginning of the disk is at the outer edge (faster), so /dev/sdb1 is at the outer edge and the next partitions go inwards (slower)
http://techreport.com/forums/viewtopic.php?f=5&t=3843
[img(95%,95%)[ https://lh4.googleusercontent.com/-hzWcpuQsKmw/UjN31J0Zz_I/AAAAAAAACBo/AilxCoeE0w4/w1185-h450-no/desktopserverdisklayout.png ]]
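You can see the outer-vs-inner zone difference with plain sequential reads via dd (a sketch; /dev/sdb and the skip offset are examples sized for a ~1TB disk, and these are reads only):
{{{
# outer edge (start of disk)
dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct
# inner zone (~900GB in, near the end of a 1TB disk)
dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct skip=900000
}}}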
! OEL 6 (current)
{{{
# SWAP
pvcreate /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
vgcreate vgswap /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
lvcreate -n lvswap -i 8 -I 4096 vgswap -l 5112
mkswap /dev/vgswap/lvswap
/dev/vgswap/lvswap swap swap defaults 0 0 <-- add this in fstab
swapon -va
cat /proc/swaps
# Oracle
pvcreate /dev/sda5 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2
vgcreate vgoracle /dev/sda5 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2
lvcreate -n lvoracle -i 8 -I 4096 vgoracle -l 15368
mkfs.ext3 /dev/vgoracle/lvoracle
# VBOX
pvcreate /dev/sda6 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3
vgcreate vgvbox /dev/sda6 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3
lvcreate -n lvvbox -i 8 -I 4096 vgvbox -l 624648
mkfs.ext3 /dev/vgvbox/lvvbox
# ASM <-- ASM disks
pvcreate /dev/sda7 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5 /dev/sdg5 /dev/sdh5
# RECO
pvcreate /dev/sda8 /dev/sdb6 /dev/sdc6 /dev/sdd6 /dev/sde6 /dev/sdf6 /dev/sdg6 /dev/sdh6
vgcreate vgreco /dev/sda8 /dev/sdb6 /dev/sdc6 /dev/sdd6 /dev/sde6 /dev/sdf6 /dev/sdg6 /dev/sdh6
lvcreate -n lvreco -i 8 -I 4096 vgreco -l 370104
mkfs.ext3 /dev/vgreco/lvreco
[root@desktopserver dev]# lvdisplay | egrep "LV Name|Size"
LV Name lvreco
LV Size 1.41 TiB
LV Name lvvbox
LV Size 2.38 TiB
LV Name lvoracle
LV Size 60.03 GiB
LV Name lvswap
LV Size 19.97 GiB
[root@desktopserver dev]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 ext4 22G 15G 5.9G 72% /
tmpfs tmpfs 7.9G 76K 7.9G 1% /dev/shm
/dev/sda1 ext4 291M 55M 221M 20% /boot
/dev/mapper/vgoracle-lvoracle
ext3 60G 180M 56G 1% /u01
/dev/mapper/vgvbox-lvvbox
ext3 2.4T 200M 2.3T 1% /vbox
/dev/mapper/vgreco-lvreco
ext3 1.4T 198M 1.4T 1% /reco
#### UDEV!!!
-- oel6
[root@desktopserver ~]# scsi_id -g -u -d /dev/sda7
35000c50038257afa
[root@desktopserver ~]# scsi_id -g -u -d /dev/sdb5
35000c50038276171
[root@desktopserver ~]# scsi_id -g -u -d /dev/sdc5
350014ee2b2d7f017
[root@desktopserver ~]# scsi_id -g -u -d /dev/sdd5
350014ee2082d419c
[root@desktopserver ~]# scsi_id -g -u -d /dev/sde5
35000c500382b0b28
[root@desktopserver ~]# scsi_id -g -u -d /dev/sdf5
35000c50038274bcb
[root@desktopserver ~]# scsi_id -g -u -d /dev/sdg5
35000c50038270d54
[root@desktopserver ~]# scsi_id -g -u -d /dev/sdh5
35000c50038278abf
If you are using a subpartition of the device (for short stroking on the fast area of the disk), it's better to filter it with the device name and the major:minor of the subpartition
* edit the scsi_id.config file
[root@desktopserver ~]# vi /etc/scsi_id.config
# add this line
options=-g
* create the UDEV rules
vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sda7", SYSFS{dev}=="8:7" , NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdb5", SYSFS{dev}=="8:21" , NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdc5", SYSFS{dev}=="8:37" , NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdd5", SYSFS{dev}=="8:53" , NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sde5", SYSFS{dev}=="8:69" , NAME="asm-disk5", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdf5", SYSFS{dev}=="8:85" , NAME="asm-disk6", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdg5", SYSFS{dev}=="8:101", NAME="asm-disk7", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdh5", SYSFS{dev}=="8:117", NAME="asm-disk8", OWNER="oracle", GROUP="dba", MODE="0660"
* test the UDEV rules
-- oel6
udevadm test /block/sda/sda7
udevadm test /block/sdb/sdb5
udevadm test /block/sdc/sdc5
udevadm test /block/sdd/sdd5
udevadm test /block/sde/sde5
udevadm test /block/sdf/sdf5
udevadm test /block/sdg/sdg5
udevadm test /block/sdh/sdh5
* activate the rules
# #OL6
udevadm control --reload-rules
# #OL5 and OL6
/sbin/start_udev
[root@desktopserver dev]# ls -ltr /dev/asm*
brw-rw---- 1 oracle root 8, 53 Sep 12 17:05 /dev/asm-disk4
brw-rw---- 1 oracle root 8, 37 Sep 12 17:05 /dev/asm-disk3
brw-rw---- 1 oracle root 8, 69 Sep 12 17:05 /dev/asm-disk5
brw-rw---- 1 oracle root 8, 85 Sep 12 17:05 /dev/asm-disk6
brw-rw---- 1 oracle root 8, 101 Sep 12 17:05 /dev/asm-disk7
brw-rw---- 1 oracle root 8, 21 Sep 12 17:05 /dev/asm-disk2
brw-rw---- 1 oracle root 8, 117 Sep 12 17:05 /dev/asm-disk8
brw-rw---- 1 oracle root 8, 7 Sep 12 17:06 /dev/asm-disk1
[root@desktopserver ~]# ls /dev
adsp disk loop4 parport2 ram3 sda4 sdc4 sde6 sdh1 shm tty16 tty30 tty45 tty6 usbdev1.2 vcs vgoracle
asm-disk1 dm-0 loop5 parport3 ram4 sda5 sdc5 sdf sdh2 snapshot tty17 tty31 tty46 tty60 usbdev1.2_ep00 vcs2 vgreco
asm-disk2 dsp loop6 port ram5 sda6 sdc6 sdf1 sdh3 snd tty18 tty32 tty47 tty61 usbdev1.2_ep81 vcs3 vgswap
asm-disk3 fd loop7 ppp ram6 sda7 sdd sdf2 sdh4 stderr tty19 tty33 tty48 tty62 usbdev1.3 vcs4 vgvbox
asm-disk4 full MAKEDEV ptmx ram7 sda8 sdd1 sdf3 sdh5 stdin tty2 tty34 tty49 tty63 usbdev2.1 vcs5 VolGroup00
asm-disk5 fuse mapper pts ram8 sdb sdd2 sdf4 sdh6 stdout tty20 tty35 tty5 tty7 usbdev2.1_ep00 vcs6 X0R
asm-disk6 gpmctl mcelog ram ram9 sdb1 sdd3 sdf5 sequencer systty tty21 tty36 tty50 tty8 usbdev2.1_ep81 vcs7 zero
asm-disk7 hpet md0 ram0 ramdisk sdb2 sdd4 sdf6 sequencer2 tty tty22 tty37 tty51 tty9 usbdev2.2 vcs8
asm-disk8 initctl mem ram1 random sdb3 sdd5 sdg sg0 tty0 tty23 tty38 tty52 ttyS0 usbdev2.2_ep00 vcsa
audio input mixer ram10 rawctl sdb4 sdd6 sdg1 sg1 tty1 tty24 tty39 tty53 ttyS1 usbdev2.2_ep81 vcsa2
autofs kmsg net ram11 root sdb5 sde sdg2 sg2 tty10 tty25 tty4 tty54 ttyS2 usbdev2.3 vcsa3
bus log null ram12 rtc sdb6 sde1 sdg3 sg3 tty11 tty26 tty40 tty55 ttyS3 usbdev2.3_ep00 vcsa4
console loop0 nvram ram13 sda sdc sde2 sdg4 sg4 tty12 tty27 tty41 tty56 urandom usbdev2.3_ep02 vcsa5
core loop1 oldmem ram14 sda1 sdc1 sde3 sdg5 sg5 tty13 tty28 tty42 tty57 usbdev1.1 vboxdrv vcsa6
cpu loop2 parport0 ram15 sda2 sdc2 sde4 sdg6 sg6 tty14 tty29 tty43 tty58 usbdev1.1_ep00 vboxnetctl vcsa7
device-mapper loop3 parport1 ram2 sda3 sdc3 sde5 sdh sg7 tty15 tty3 tty44 tty59 usbdev1.1_ep81 vboxusb vcsa8
# the subpartitions (no output, the udev took it)
[root@desktopserver rules.d]# ls -l /dev/sd*7
[root@desktopserver rules.d]# ls -l /dev/sd*5 | grep -v sda
* the asm_diskstring would be '/dev/asm-disk*'
}}}
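And to point ASM at the udev names (a sketch; run from the ASM/grid environment, then restart the instance for the spfile change to take effect):
{{{
sqlplus / as sysasm <<'EOF'
alter system set asm_diskstring='/dev/asm-disk*' scope=spfile;
EOF
}}}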
! OEL 5 (before the disk failure)
{{{
# SWAP
pvcreate /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
vgcreate vgswap /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
lvcreate -n lvswap -i 8 -I 4096 vgswap -l 4888
mkswap /dev/vgswap/lvswap
/dev/vgswap/lvswap swap swap defaults 0 0 <-- add this in fstab
swapon -va
cat /proc/swaps
# Oracle
pvcreate /dev/sda5 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2
vgcreate vgoracle /dev/sda5 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2
lvcreate -n lvoracle -i 8 -I 4096 vgoracle -l 14664
mkfs.ext3 /dev/vgoracle/lvoracle
# VBOX
pvcreate /dev/sda6 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3
vgcreate vgvbox /dev/sda6 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3
lvcreate -n lvvbox -i 8 -I 4096 vgvbox -l 625008
mkfs.ext3 /dev/vgvbox/lvvbox
# ASM <-- ASM disks
pvcreate /dev/sda7 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5 /dev/sdg5 /dev/sdh5
this is what I used for the udev rules, see here for more details -->
udev ASM - single path - https://www.evernote.com/shard/s48/sh/485425bc-a16f-4446-aebd-988342e3c30e/edc860d713dd4a66ff57cbc920b4a69c
$ cat 99-oracle-asmdevices.rules
KERNEL=="sda7", SYSFS{dev}=="8:7" , NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdb5", SYSFS{dev}=="8:21" , NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdc5", SYSFS{dev}=="8:37" , NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdd5", SYSFS{dev}=="8:53" , NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sde5", SYSFS{dev}=="8:69" , NAME="asm-disk5", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdf5", SYSFS{dev}=="8:85" , NAME="asm-disk6", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdg5", SYSFS{dev}=="8:101", NAME="asm-disk7", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdh5", SYSFS{dev}=="8:117", NAME="asm-disk8", OWNER="oracle", GROUP="dba", MODE="0660"
# RECO
pvcreate /dev/sda8 /dev/sdb6 /dev/sdc6 /dev/sdd6 /dev/sde6 /dev/sdf6 /dev/sdg6 /dev/sdh6
vgcreate vgreco /dev/sda8 /dev/sdb6 /dev/sdc6 /dev/sdd6 /dev/sde6 /dev/sdf6 /dev/sdg6 /dev/sdh6
lvcreate -n lvreco -i 8 -I 4096 vgreco -l 596672
[root@localhost orion]# lvdisplay | egrep "LV Name|Size"
LV Name /dev/vgreco/lvreco
LV Size 2.28 TB
LV Name /dev/vgvbox/lvvbox
LV Size 2.38 TB
LV Name /dev/vgoracle/lvoracle
LV Size 57.28 GB
LV Name /dev/vgswap/lvswap
LV Size 19.09 GB
LV Name /dev/VolGroup00/lvroot
LV Size 20.00 GB
[root@localhost ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-lvroot
ext3 20G 14G 4.7G 75% /
/dev/sda1 ext3 244M 24M 208M 11% /boot
tmpfs tmpfs 7.8G 0 7.8G 0% /dev/shm
/dev/mapper/vgoracle-lvoracle
ext3 57G 180M 54G 1% /u01
/dev/mapper/vgvbox-lvvbox
ext3 2.4T 200M 2.3T 1% /vbox
/dev/mapper/vgreco-lvreco
ext3 2.3T 201M 2.2T 1% /reco
}}}
http://book.soundonair.ru/hall2/ch06lev1sec1.html Got the cool trick here 6.1 LVM Striping (RAID 0)
''Distributed Logical Volume Trick''
{{{
NOTE: the one-extent-at-a-time lvextend loop below rewrites the VG metadata thousands of times, so you need a large PV metadata area (hence the big --metadatasize) and the archive files under /etc/lvm will grow a lot
pvcreate --metadatasize 1000000K /dev/sdb1
pvcreate --metadatasize 1000000K /dev/sdc1
pvcreate --metadatasize 1000000K /dev/sdd1
pvcreate --metadatasize 1000000K /dev/sde1
vgcreate vgshortstroke /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
lvcreate -n shortstroke -l 1 vgshortstroke
vgdisplay
PV1=/dev/sdb1
PV2=/dev/sdc1
PV3=/dev/sdd1
PV4=/dev/sde1
SIZE=145512 <-- from vgdisplay output
COUNT=1
while [ $COUNT -le $SIZE ]
do
lvextend -l $COUNT /dev/vgshortstroke/shortstroke $PV1
let COUNT=COUNT+1
lvextend -l $COUNT /dev/vgshortstroke/shortstroke $PV2
let COUNT=COUNT+1
lvextend -l $COUNT /dev/vgshortstroke/shortstroke $PV3
let COUNT=COUNT+1
lvextend -l $COUNT /dev/vgshortstroke/shortstroke $PV4
let COUNT=COUNT+1
done
lvdisplay -vm /dev/vgshortstroke/shortstroke | less
}}}
''LVM kilobyte-striping''
{{{
"lvcreate -i 3 -I 8 -L 100M vg00" tries to create a striped logical volume with 3 stripes, a stripesize of 8KB and a size
of 100MB in the volume group named vg00. The logical volume name will be chosen by lvcreate.
}}}
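To sanity-check the resulting striped layout (a sketch; the -n option is added here only so the LV name is predictable):
{{{
lvcreate -i 3 -I 8 -L 100M -n lvstripe8k vg00
lvs -o lv_name,stripes,stripe_size vg00
lvdisplay -m /dev/vg00/lvstripe8k
}}}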
started 3:26PM
end 3:48PM 11GB
rate of 171MB/minute, whoa this is way too slow..
but this volume is the same performance as 4 raw short-stroked disks (partitions) :)
Orion run here
{{{
ORION VERSION 11.1.0.7.0
Commandline:
-run simple -testname mytest -num_disks 4
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8
Total Data Points: 29
Name: /dev/vgshortstroke/shortstroke Size: 13514047488
1 FILEs found.
Maximum Large MBPS=232.00 @ Small=0 and Large=8
Maximum Small IOPS=942 @ Small=20 and Large=0
Minimum Small Latency=6.61 @ Small=1 and Large=0
}}}
Other experiments ongoing..
Here's the HD used
Barracuda 7200 SATA 3Gb/s (375MB/s) interface 1TB Hard Drive
http://www.seagate.com/ww/v/index.jsp?vgnextoid=20b92d0ca8dce110VgnVCM100000f5ee0a0aRCRD#tTabContentOverview
the regular kernel gives lower MB/s on sequential reads/writes
http://www.evernote.com/shard/s48/sh/36636b46-995a-4812-bd07-e88fa0dfd191/d36f37565243025e7b5792f496dc5a37
! 2020
http://sethmiller.org/it/oracleasmlib-not-necessary/
https://titanwolf.org/Network/Articles/Article?AID=76740ebe-e81a-4f0f-8c23-ab482de97ba9#gsc.tab=0
https://community.oracle.com/mosc/discussion/2937970/asmlib-uek-kernel
https://oracle-base.com/blog/2012/03/16/oracle-linux-5-8-and-udev-issues/
https://blogs.oracle.com/wim/asmlib
Oracleasm Kernel Driver for the 64-bit (x86_64) Red Hat Compatible Kernel for Oracle Linux 6 (Doc ID 1578579.1)
<<<
ASMLib is a support library for the Automatic Storage Management feature used in Oracle Databases running on the Oracle Linux Unbreakable Enterprise Kernel (UEK) and RedHat Compatible Kernel (RHCK). Oracle ASMLib is included in the UEK kernel, but must be installed as a separate package for RHCK. This document provides a set of steps on how to get the oracleasm kernel driver for Oracle Linux 6 RHCK and also how to validate the driver was provided by Oracle and not another vendor.
<<<
!! to asmlib or not asmlib
<<<
In terms of performance, I think the kernel matters more than the choice between asmlib and udev
On my previous benchmark (this was on my R&D server way back on OEL5), when I used the UEK kernel it gave me higher MB/s on sequential reads/writes for both LVM and ASM
see the numbers here
http://www.evernote.com/shard/s48/sh/36636b46-995a-4812-bd07-e88fa0dfd191/d36f37565243025e7b5792f496dc5a37
<<<
<<<
Running RedHat or Oracle Linux? If it's RedHat, I'd be 100% udev. Doesn't require a separate package, and easier to set up via config files. Even if it's Oracle Linux, I'd still go the udev route for those same reasons.
I don't think there's a difference in performance by using ASMlib, either good or bad.
<<<
<<<
This has been discussed a lot of times before.
To shortcut to my preference: udev.
When ASMLib was still current (AFD, asm filter driver is the new, current version), and oracle was actively supporting it, I did ask wim coeckaerts what the hell the actual performance features were, because I couldn’t see it, nor measure it. It turns out it’s pooling file descriptors (you cannot get a huge performance boost from that).
There are two other advantages of asmlib:
It’s a kernel module which scans the headers of the block devices that are visible to the kernel, and provides the ASM devices as asmlib devices, based on the information in the device header, not requiring any unique device data to make it an ASM device. In one situation, when using oracle cloud V1 (OCI for me is still the oracle c interface 😊), the block devices did not provide any unique information, and thus UDEV could not be used. The linux kernel names devices as they become visible to the kernel, which can differ between reboots, so you should never use the /dev/sdb naming (for SCSI devices, but equally for other native kernel namings).
ASMLib scans IO going to asmlib devices, and will reject non-oracle IO.
However, the inner working of asmlib is absolutely and totally undocumented. Also, when the way IO is done changes in the oracle engine, you will see other system calls (yes, really, despite how surreal that sounds). This means that if it all of a sudden doesn’t work, there is literally nobody that can help you. Or alternatively described: you are left to oracle support. The only human being who wrote something about the inner working is James Morle.
Udev isn’t that well documented, but there are several blogs (including mine) that describe how to configure and troubleshoot it. So that means it’s not a black box, it is possible to investigate issues. I don’t like that you have to change the udev scripts between OL6 and OL7 (OL8 seems not to require a change), but still, once you know how to troubleshoot it, it’s doable.
<<<
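For the udev route without relying on unstable /dev/sdX names, an ID-based rule is the usual pattern. A sketch reusing one of the scsi_id values from the R&D box above (note the scsi_id path is /sbin/scsi_id on OL5/6 but /usr/lib/udev/scsi_id on OL7):
{{{
# /etc/udev/rules.d/99-oracle-asmdevices.rules (OL6-style, matches by disk serial instead of kernel name)
KERNEL=="sd?5", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="35000c50038276171", SYMLINK+="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
}}}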
''HOWTO'' http://www.math.umbc.edu/~rouben/beamer/
http://wifo.eecs.berkeley.edu/wiki/doku.php/latex:latex_resources
http://superuser.com/questions/221624/latex-vs-powerpoint-for-presentations
https://bitbucket.org/rivanvx/beamer/overview
http://readingsml.blogspot.com/2009/11/keynote-vs-powerpoint-vs-beamer.html
http://www.johndcook.com/blog/2008/07/24/latex-and-powerpoint-presentations/
http://www.johndcook.com/blog/2008/07/24/including-images-in-latex-files/
http://sourceforge.net/projects/latex-beamer/
http://sourceforge.net/projects/latex-beamer/forums/forum/319190
/***
|Name:|LessBackupsPlugin|
|Description:|Intelligently limit the number of backup files you create|
|Version:|3.0.1 ($Rev: 2320 $)|
|Date:|$Date: 2007-06-18 22:37:46 +1000 (Mon, 18 Jun 2007) $|
|Source:|http://mptw.tiddlyspot.com/#LessBackupsPlugin|
|Author:|Simon Baird|
|Email:|simon.baird@gmail.com|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
!!Description
You end up with just one backup per year, one per month, one per weekday, one per hour, one per minute, and one per second, so the total number won't exceed about 200 or so. It can be reduced by commenting out the seconds/minutes/hours lines in the modes array
!!Notes
Works in IE and Firefox only. Algorithm by Daniel Baird. IE specific code by by Saq Imtiaz.
***/
//{{{
var MINS = 60 * 1000;
var HOURS = 60 * MINS;
var DAYS = 24 * HOURS;
if (!config.lessBackups) {
config.lessBackups = {
// comment out the ones you don't want or set config.lessBackups.modes in your 'tweaks' plugin
modes: [
["YYYY", 365*DAYS], // one per year for ever
["MMM", 31*DAYS], // one per month
["ddd", 7*DAYS], // one per weekday
//["d0DD", 1*DAYS], // one per day of month
["h0hh", 24*HOURS], // one per hour
["m0mm", 1*HOURS], // one per minute
["s0ss", 1*MINS], // one per second
["latest",0] // always keep last version. (leave this).
]
};
}
window.getSpecialBackupPath = function(backupPath) {
var now = new Date();
var modes = config.lessBackups.modes;
for (var i=0;i<modes.length;i++) {
// the filename we will try
var specialBackupPath = backupPath.replace(/(\.)([0-9]+\.[0-9]+)(\.html)$/,
'$1'+now.formatString(modes[i][0]).toLowerCase()+'$3')
// open the file
try {
if (config.browser.isIE) {
var fsobject = new ActiveXObject("Scripting.FileSystemObject")
var fileExists = fsobject.FileExists(specialBackupPath);
if (fileExists) {
var fileObject = fsobject.GetFile(specialBackupPath);
var modDate = new Date(fileObject.DateLastModified).valueOf();
}
}
else {
netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
var file = Components.classes["@mozilla.org/file/local;1"].createInstance(Components.interfaces.nsILocalFile);
file.initWithPath(specialBackupPath);
var fileExists = file.exists();
if (fileExists) {
var modDate = file.lastModifiedTime;
}
}
}
catch(e) {
// give up
return backupPath;
}
// expiry is used to tell if it's an 'old' one. Eg, if the month is June and there is a
// June file on disk that's more than an month old then it must be stale so overwrite
// note that "latest" should be always written because the expiration period is zero (see above)
var expiry = new Date(modDate + modes[i][1]);
if (!fileExists || now > expiry)
return specialBackupPath;
}
}
// hijack the core function
window.getBackupPath_mptw_orig = window.getBackupPath;
window.getBackupPath = function(localPath) {
return getSpecialBackupPath(getBackupPath_mptw_orig(localPath));
}
//}}}
http://orainternals.wordpress.com/2009/06/02/library-cache-lock-and-library-cache-pin-waits/
http://dioncho.wordpress.com/2009/05/15/releasing-library-cache-pin/
http://oracle-study-notes.blogspot.com/2009/05/resolving-library-cache-lock-issue.html
Library Cache Pin/Lock Pile Up hangs the application [ID 287059.1]
HOW TO FIND THE SESSION HOLDING A LIBRARY CACHE LOCK [ID 122793.1]
Database Hangs with Library Cache Lock and Pin Waits [ID 338367.1]
How to Find the Blocker of the 'library cache pin' in a RAC environment? [ID 780514.1]
How to analyze ORA-04021 or ORA-4020 errors? [ID 169139.1]
WAITEVENT: "library cache pin" Reference Note [ID 34579.1]
http://oracleprof.blogspot.com/2010/07/process-hung-on-library-cache-lock.html
http://logicalread.solarwinds.com/oracle-library-cache-pin-wait-event-mc01/#.VtnJcvkrLwc
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-library-cache#TOC-latch:-library-cache-lock-
WAITEVENT: "library cache lock" Reference Note (Doc ID 34578.1)
SRDC - How to Collect Standard Information for an Issue Where 'library cache lock' Waits Are the Primary Waiters on the Database (Doc ID 1904807.1)
Truncate - Causes Invalidations in the LIBRARY CACHE (Doc ID 123214.1)
'library cache lock' Waits: Causes and Solutions (Doc ID 1952395.1)
Troubleshooting Library Cache: Lock, Pin and Load Lock (Doc ID 444560.1)
How to Find which Session is Holding a Particular Library Cache Lock (Doc ID 122793.1)
http://www.oraclemusings.com/?p=103
https://oracle-base.com/blog/2013/12/11/oracle-license-audit/
As a DBA you have to know the licensing schemes of Oracle..
http://www.orafaq.com/wiki/Oracle_Licensing
http://download.oracle.com/docs/cd/E11882_01/license.112/e10594/toc.htm
http://www.oracle.com/corporate/pricing/sig.html
https://docs.google.com/viewer?url=http://www.oracle.com/corporate/pricing/application_licensing_table.pdf
http://www.oracle.com/corporate/pricing/askrightquestions.html
http://www.liferay.com/home
http://blog.scottlowe.org/2012/10/26/link-aggregation-and-vlan-trunking-with-brocade-fastiron-switches/
http://www.linuxjournal.com/content/containers%E2%80%94not-virtual-machines%E2%80%94are-future-cloud?page=0,1&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A%20linuxjournalcom%20%28Linux%20Journal%20-%20The%20Original%20Magazine%20of%20the%20Linux%20Community%29
* containers are built on namespaces and cgroups
* namespaces provide isolation similar to hypervisors
* cgroups provide resource limiting and accounting
* these tools can be mixed to create hybrids
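Both building blocks are easy to poke at by hand (a sketch; run as root, and the cgroup v1 memory-controller paths are assumed):
{{{
# namespaces: a shell in its own PID namespace with a private /proc
unshare --fork --pid --mount-proc /bin/bash -c 'ps -ef'   # only a couple of processes visible
# cgroups: limit a shell to 256MB via a memory cgroup
mkdir /sys/fs/cgroup/memory/demo
echo $((256*1024*1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/demo/tasks
}}}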
http://lxr.free-electrons.com/source/mm/compaction.c?v=2.6.35
http://ltp.sourceforge.net/tooltable.php
{{{
Linux Test Tools
The purpose of this Linux Test Tools Table is to provide the open-source community with a comprehensive list of tools commonly used for testing the various components of Linux.
My hope is that the community will embrace and contribute to this list making it a valuable addition to the Linux Test Project.
Please feel free to send additions, updates or suggestions to Jeff Martin. Last update:07/12/06
Cluster
HINT allows fair comparisons over extreme variations in computer architecture, absolute performance, storage capacity, and precision. It's listed as a Past Project with a link to http://hint.byu.edu but I have not been able to find where it is being maintained. If you know, please drop me a note.
Code Coverage Analysis
gcov Code analysis tool for profiling code and determining: 1) how often each line of code executes, 2) what lines of code are actually executed, 3) how much computing time each section of code uses
lcov LCOV is an extension of GCOV, a GNU tool which provides information about what parts of a program are actually executed (i.e. "covered") while running a particular test case. The extension provides HTML output and support for large projects.
Database
DOTS Database Opensource Test Suite
dbgrinder perl script to inflict stress on a mysql server
OSDL Database Testsuite OSDL Database Testsuite
Debug
Dynamic Probes Dynamic Probes is a generic and pervasive debugging facility.
Kernel Debug (KDB) KDB is an interactive debugger built into the Linux kernel. It allows the user to examine kernel memory, disassembled code and registers.
Linux Kernel Crash Dump LKCD project is designed to help detect, save and examine system crashes and crash info.
Linux Trace Toolkit (LTT) The Linux Trace Toolkit is a fully-featured tracing system for the Linux kernel.
Defect Tracking
Bugzilla allows individuals or groups of developers to keep track of outstanding bugs in their product effectively
Desktop/GUI Libraries
Android open source testing tool for GUI programs
ldtp GNU/Linux Desktop Testing Project
Event Logging
included tests Various tests are included in the tarball
Filesystems
Bonnie Bonnie++ is a test suite which performs several hard drive / filesystem tests.
dbench Filesystem benchmark that generates good filesystem load
fs_inode Part of the LTP: This test creates several subdirectories and files off of two parent directories and removes directories and files as part of the test.
fs_maim Part of the LTP: a set of scripts to test and stress filesystem and storage management utilities
IOZone Filesystem benchmark tool (read, write, re-read, re-write, read backwards, read strided, fread, fwrite, random read, pread, aio_read, aio_write)
lftest Part of the LTP: lftest is a tool/test designed to create large files and lseek from the beginning of the file to the end of the file after each block write. This test verifies large file support and can be used to generate large files for other filesystem tests. Files up to 2TB have been created using this tool. This test is VERY picky about glibc version.
LTP The Linux Test Project is a collection of tools for testing the Linux kernel and related features.
PostMark Filesystem benchmark that simulates load generated by enterprise applications such as email, news and web-based commerce.
stress puts the system under a specified amount of load
mongo set of the programs to test linux filesystems for performance and functionality
fsx File system exerciser from Apple. The test is most effective if you let it run for a minute or two, so that it overlaps the periodic sync that most Unix systems do.
xdd Storage I/O Performance Characterization tool that runs on most UNIX-like systems and Windows. Has been around since 1992 and is in use at various government labs.
Harnesses
Cerberus The Cerberus Test Control System (CTCS) is a free (freedom) test suite for use by developers and others to test hardware. It generates good filesystem stress in the process.
STAF The Software Testing Automation Framework (STAF) is an open source framework designed to improve the level of reuse and automation in test cases and test environments.
I/O & Storage
tiobench Portable, robust, fully-threaded I/O benchmark program
xdd Storage I/O Performance Characterization tool that runs on most UNIX-like systems and Windows. Has been around since 1992 and is in use at various government labs.
Kernel System Calls
crashme a tool for testing the robustness of an operating environment using a technique of "Random Input" response analysis
LTP The Linux Test Project is a collection of tools for testing the Linux kernel and related features.
Network
Connectathon NFS Testsuite This testsuite tests the NFS Protocol
ISIC ISIC is a suite of utilities to exercise the stability of an IP Stack and its component stacks
LTP The Linux Test Project has a collection of tools for testing the network components of the Linux kernel.
netperf Netperf is a benchmark that can be used to measure the performance of many different types of networking.
NetPIPE Variable time benchmark, i.e. it measures network performance using variable sized communication transfers
TAHI Provides interoperability and conformance tests for IPv6
VolanoMark A java chatroom benchmark/stress
UNH IPv6 Tests there are several IPv6 tests on this site
Iperf for measuring TCP and UDP bandwidth performance
Network Security
Kerberos Test suite These tests are for testing Kerberos clients (kinit,klist and kdestroy) and Kerberized Applications, ftp and telnet.
Other
cpuburn This program was designed by Robert Redelmeier to heavily load CPU chips.
Performance
contest test system responsiveness by running kernel compilation under a number of different load conditions
glibench/clibench benchmarking tool to check your computer CPU and hard disk performance
lmbench Suite of simple, portable benchmarks
AIM Benchmark Performance benchmark
unixbench Performance benchmark based on the early BYTE UNIX Benchmarks "retired" since about 1997, but still used by some testers
Scalability
dbench Used for dcache scalability testing
Chat Used for file_struct scalability testing
httperf Used for dcache scalability testing
Scheduler
LTP The Linux Test Project is a collection of tools for testing the Linux kernel and related features. sched_stress and process_stress
VolanoMark A java chatroom benchmark/stress VolanoMark has been used to stress the scheduler.
SCSI Hardening
Bonnie Bonnie is a test suite which performs several hard drive and filesystem tests.
LTP The Linux Test Project is a collection of tools for testing the Linux kernel and related features. disktest
dt dt (Data Test) is a generic data test program used to verify proper operation of peripherals, file systems, device drivers, or any data stream supported by the operating system
Security
Nessus remote security scanner
Standards
LSB Test suites used for LSB compliance testing
Stream Controlled Transmission Protocol
LTP The Linux Test Project is a collection of tools for testing the Linux kernel and related features.
System Management
sblim The "SBLIM Reference Implementation (SRI)" is a component of the SBLIM project. Its purposes are (among others): (1) easily set up, run and test systems management scenarios based on CIM/CIMOM technology (2) test CIM Providers (on local and/or remote Linux machines)
Threads
LTP The Linux Test Project is a collection of tools for testing the Linux kernel and related features.
VSTHlite Tests for compliance with IEEE POSIX 1003.1c extensions (pthreads).
USB
usbstress Sent to us by the folks at Linux-usb.org
Version Control
cvs the dominant open-source network-transparent version control system
BitKeeper BK/Pro is a scalable configuration management system, supporting globally distributed development, disconnected operation, compressed repositories, change sets, and repositories as branches. Read the licensing info
Subversion
VMM
vmregress regression, testing and benchmark tool
LTP The Linux Test Project is a collection of tools for testing the Linux kernel and related features.
memtest86 A thorough real-mode memory tester
stress puts the system under a specified amount of load
memtest86+ fork / enhanced version of the memtest86
memtester Utility to test for faulty memory subsystem
Web Server
Hammerhead Hammerhead is a web server stress tool that can simulate multiple connections and users.
httperf httperf is a popular web server benchmark tool for measuring web server performance
siege Siege is an http regression testing and benchmarking utility.
PagePoker for loadtesting and benchmarking web servers
}}}
just make use of this tool, and download an ubuntu live DVD
http://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/
Centrify - Linux AD authentication
http://goo.gl/R1hRL
{{{
.bashprofile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
unset USERNAME
### PARAMETERS FOR ORACLE DATABASE 10G
umask 022
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export ORACLE_BASE=/u01/app/oracle
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_SID=orcl
PATH=$ORACLE_HOME/bin:$PATH
}}}
{{{
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
export LD_ASSUME_KERNEL=2.4.1
# Oracle Environment
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=/u01/oracle/product/9.2.0
export ORACLE_SID=PETDB1
export ORACLE_TERM=xterm
export TNS_ADMIN=$ORACLE_HOME/network/admin
# Optional Oracle Environment
export NLS_LANG=AMERICAN
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
export LD_LIBRARY_PATH
# Set shell search path
PATH=$PATH:/sbin:$ORACLE_HOME/bin
# Display Environment
# DISPLAY=127.0.0.1:0.0 <-- local display alternative (the next line would override it)
DISPLAY=192.9.200.7:0.0
export DISPLAY
# Oracle CLASSPATH Environment
# CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
# CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib
# export CLASSPATH
export PATH
unset USERNAME
}}}
http://www.techrepublic.com/blog/10things/10-outstanding-linux-backup-utilities/895
{{{
http://ubuntu-rescue-remix.org/node/6
http://ubuntuforums.org/showthread.php?s=bb3a288a58fdd087cca4367677b2544a&t=417761&page=2
http://www.cgsecurity.org/wiki/TestDisk
http://www.cgsecurity.org/wiki/PhotoRec
http://www.linux-ntfs.org/doku.php?id=ntfs-en
http://www.linux-ntfs.org/doku.php?id=howto:hexedityourway
http://www.student.dtu.dk/~s042078/magicrescue/manpage.html
http://www.cgsecurity.org/wiki/Intel_Partition_Table
https://answers.launchpad.net/ubuntu/+question/2178
http://www.cgsecurity.org/wiki/HowToHelp
http://www.cgsecurity.org/wiki/After_Using_PhotoRec
}}}
http://dolavim.us/blog/archives/2007/11/linux-kernel-lo.html
''you can't have lockstat on rhel5''
http://dag.wieers.com/blog/rpm-packaging-news-lockstat-and-httpreplicator
https://forums.oracle.com/forums/thread.jspa?messageID=4535884
http://dolavim.us/blog/2007/11/06/linux-kernel-lock-profiling-with-lockstat/
Oracle® Database on AIX®, HP-UX®, Linux®, Mac OS® X, Solaris®, Tru64 Unix® Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.1)
Doc ID: Note:169706.1
Linux Quick Reference
Doc ID: Note:341782.1
Things to Know About Linux
Doc ID: Note:265262.1
Server Architecture on UNIX and NT
Doc ID: Note:48681.1
Unix Commands on Different OS's
Doc ID: 293561.1
Oracle's 9i Platform Strategy Advisory
Doc ID: Note:149914.1
-- INSTALLATION
Defining a "default RPMs" installation of the RHEL OS
Doc ID: Note:376183.1
-- SUPPORT
Support of Linux and Oracle Products on Linux
Doc ID: Note:266043.1
Linux Kernel Support - Policy on Tainted Kernels
Doc ID: Note:284823.1
Unbreakable Linux Support Policies For Virtualization And Emulation
Doc ID: Note:417770.1
-- MIGRATION FROM 32 to 64
How to convert a 32-bit database to 64-bit database on Linux?
Doc ID: Note:341880.1
How to Determine Whether the OS is 32-bit or 64-bit
Doc ID: 421453.1
How to Determine a Linux OS and the OS Association of Staged and Installed Oracle Products
Doc ID: 752155.1
-- MIGRATION/UPGRADE OF OS VERSION
Preserving Your Oracle Database 10g Environment
when Upgrading from Red Hat Enterprise Linux 2.1
AS to Red Hat Enterprise Linux 3
-- located as Oracle on Linux directory
Is Relinking Of Oracle (Relink All) Required After Patching OS?
Doc ID: 395605.1
When is a relink required after an AIX OS upgrade --- YES
Doc ID: 726811.1
Upgrading RHEL 3 To RHEL 4 With Oracle Database
Doc ID: 416005.1
How to Relink Oracle Database Software on UNIX
Doc ID: 131321.1
-- ITANIUM SERVER ISSUE
Messages In Console: Oracle(9581): Floating-Point Assist Fault At Ip -- for itanium servers
Doc ID: Note:279456.1
What's up with those "floating-point assist fault" messages? - Linux on Itanium®
http://h21007.www2.hp.com/portal/site/dspp/menuitem.863c3e4cbcdc3f3515b49c108973a801/?ciid=62080055abe021100055abe02110275d6e10RCRD
Bug No. 3777000
FLOATING-POINT ASSIST FAULT(FPSWA) CAUSES POOR PERFORMANCE
Bug No. 3796598
KERNEL: ORACLE(570): FLOATING-POINT ASSIST FAULT AT IP CAUSES CONNECTION PROBLEM
Bug No. 3437795
RMAN BACKUP HANGS INSTANCE IN RAC, 'DATAFILECOPY HEADER VALIDATION FAILURE'
Oracle RDBMS and RedHat Linux AS on a Box with AMD Processor
Doc ID: Note:227904.1
What about this floating-point assist fault?
--------------------------------------------
When one does computation involving floats, the result may not always be turned into a normalized representation; these numbers are called "denormals". They can be thought of as really tiny numbers (almost zero). The IEEE754 standard
handles these cases, but the Floating-Point Unit does not always do so. There are two ways to deal with this problem:
- Silently ignore it (maybe by turning the number into zero)
- Inform the user that the result is a denormal and let him do what he wants with it (= we ask the user and his software to assist the FPU).
The Intel Itanium does not fully support IEEE denormals and requires software assistance to handle them. Without further information, the ia64 GNU/Linux kernel triggers a fault when denormals are computed. This is the "floating-point
software assist" fault (FPSWA) in the kernel messages. It is the user's task to clearly design his program to prevent such cases. <===== (this sentence implies Oracle code)
-- SERVICES
Linux OS Service 'xendomains'
Doc ID: Note:558719.1
-- OCFS1
Installing and setting up ocfs on Linux - Basic Guide
Doc ID: 220178.1
Step-By-Step Upgrade of Oracle Cluster File System (OCFS v1) on Linux
Doc ID: Note:251578.1
Linux OCFS - Best Practices
Doc ID: 237997.1
Automatic Storage Management (ASM) and Oracle Cluster File System (OCFS) in Oracle10g
Doc ID: 255359.1
OCFS mount point does not mount for the first time
Doc ID: 302206.1
Update on OCFS for Linux
Doc ID: 252331.1
-- OCFS1 DEBUG
OCFS Most Common Defects / Bugs
Doc ID: 430451.1
-- OCFS1 ON WINDOWS
Raw Devices and Cluster Filesystems With Real Application Clusters <-- windows 2k3
Doc ID: 183408.1
Installing CRS on Windows 2008 Fails When Checking OCFS and Orafence Driver's Signatures
Doc ID: 762193.1
OCFS for EM64T SMP not available on OSS website.
Doc ID: 315734.1
WINDOWS 64-BIT: OCFS Drives Formatted Under 10.2.0.1/10.2.0.2/10.2.0.3 May Need Reformatting
Doc ID: 749006.1
How to Add Another OCFS Drive for RAC on Windows
Doc ID: 229060.1
How to Change a Drive Letter Associated with an OCFS Drive on Windows
Doc ID: 338852.1
How to Use More Than 26 Drives With OCFS on Windows
Doc ID: 357698.1
WIN RAC: How to Remove a Failed OCFS Install
Doc ID: 230290.1
OCFS: Blue Screen After A Reboot of a Node
Doc ID: 372986.1
WIN: Does Oracle Cluster File System (OCFS) Support Access from Mapped Drives?
Doc ID: 225550.1
OCFS Most Common Defects / Bugs
Doc ID: 430451.1
Cannot Resize Datafile on OCFS Even If There is Sufficient Free Space
Doc ID: 338080.1
can not delete the file physically From Ocfs after dropping tablespace
Doc ID: 284775.1
Where Can I Find Ocfs For Windows Documentation
Doc ID: 269855.1
How Do We Find Out The Version Of Ocfs That'S Installed?
Doc ID: 302503.1
DBCA Failure on OCFS
Doc ID: 234700.1
New Partitions in Windows 2003 RAC Environments Not Visible on Remote Nodes
Doc ID: 454607.1
-- OCFS1 ADD NODE
How to add a new node to the existing OCFS setup on Windows
Doc ID: 316410.1
-- OCFS2
OCFS2: Considerations and requirements for working with BCV/cloned volumes
Doc ID: Note:567604.1
Linux OCFS2 - Best Practices
Doc ID: Note:603080.1
OCFS2: Supportability as a general purpose filesystem
Doc ID: Note:421640.1
Common reasons for OCFS2 Kernel Panic or Reboot Issues
Doc ID: Note:434255.1
OCFS2 User's Guide for Release 1.4
Doc ID: Note:736223.1
OCFS2 Version 1.4 New Features
Doc ID: Note:736230.1
OCFS2 - FREQUENTLY ASKED QUESTIONS
Doc ID: Note:391771.1
A Reference Guide for Upgrading OCFS2
Doc ID: Note:603246.1
Supportability of OCFS2 on certified and non-certified Linux distributions
Doc ID: Note:566819.1
OCFS2: Supportability as a general purpose filesystem
Doc ID: Note:421640.1
How to resize an OCFS2 filesystem
Doc ID: Note:445082.1
How to find the current OCFS or OCFS2 version for Linux
Doc ID: Note:238278.1
Problem Using Labels On OCFS2
Doc ID: 579153.1
-- OCFS/2 BLOCK SIZE
How to Query the blocksize of OCFS or OCFS2 Filesystem
Doc ID: 469404.1
-- OCFS2 SAN
OCFS2 and SAN Interactions
Doc ID: 603038.1
Host-Based Mirroring and OCFS2
Doc ID: 413195.1
-- OCFS2 SETUP, NETWORK, TIMEOUT
OCFS2 Fencing, Network, and Disk Heartbeat Timeout Configuration
Doc ID: 457423.1
Some Symptoms of OCFS2 Not Functioning when SELinux is Enabled
Doc ID: 432740.1
Using Bonded Network Device Can Cause OCFS2 to Detect Network Outage
Doc ID: 423183.1
Common reasons for OCFS2 o2net Idle Timeout
Doc ID: 734085.1
How to Use "tcpdump" to Log OCFS2 Interconnect (o2net) Messages
Doc ID: 789010.1
Heartbeat/Voting/Quorum Related Timeout Configuration for Linux, OCFS2, RAC Stack to avoid unnecessary node fencing, panic and reboot
Doc ID: 395878.1
Common reasons for OCFS2 Kernel Panic or Reboot Issues
Doc ID: 434255.1
http://oss.oracle.com/pipermail/ocfs2-users/2007-January/001159.html
http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_faq.html#TIMEOUT
http://www.mail-archive.com/ocfs2-users@oss.oracle.com/msg00426.html <-- using tcpdump
http://www.mail-archive.com/ocfs2-users@oss.oracle.com/msg00409.html
-- OCFS2 DEBUG
Script to gather OCFS2 diagnostic information
Doc ID: 391292.1
OCFS2: df and du commands display different results
Doc ID: 558824.1
OCFS2 Performance: Measurement, Diagnosis and Tuning
Doc ID: 727866.1
Troubleshooting a multi-node OCFS2 installation
Doc ID: 806645.1
Trouble Mounting OCFS File System after changing Network Card
Doc ID: 298889.1
-- X
Enterprise Linux: Common GUI / X-Window Issues
Doc ID: Note:418963.1
How to configure, manage and secure user access to the Linux X server
Doc ID: Note:459029.1
-- KERNEL
Linux: Tainted Kernels, Definitions, Checking and Diagnosing
Doc ID: Note:395353.1
-- ORACLE VALIDATED
Linux OS Installation with Reduced Set of Packages for Running Oracle Database Server
Doc ID: Note:728346.1
Linux OS Installation with Reduced Set of Packages for Running Oracle Database Server without ULN/RHN
Doc ID: Note:579101.1
Defining a "default RPMs" installation of the Oracle Enterprise Linux (OEL) OS
Doc ID: Note:401167.1
Defining a "default RPMs" installation of the RHEL OS
Doc ID: Note:376183.1
Defining a "default RPMs" installation of the SLES OS
Doc ID: Note:386391.1
The 'oracle-validated' RPM Package for Installation Prerequisities
Doc ID: Note:437743.1
-- RELINK
How to Relink Oracle Database Software on UNIX
Doc ID: Note:131321.1
-- ASYNC IO
Kernel Parameter "aio-max-size" does not exist in RHEL4 / EL4 / RHEL5 /EL5
Doc ID: Note:549075.1
"Warning: OS async I/O limit 128 is lower than recovery batch 1024" in Alert log
Doc ID: Note:471846.1
Asynchronous I/O (aio) on RedHat Advanced Server 2.1 and RedHat Enterprise Linux 3
Doc ID: Note:225751.1
-- ORACLE VM
Oracle VM and External Storage Systems
Doc ID: Note:558041.1
Steps to Create Test RAC Setup On Oracle VM
Doc ID: Note:742603.1
-- SHUTDOWN ABORT HANG
Shutdown Abort Hangs
Doc ID: Note:161234.1
-- MEMORY
Oracle Background Processes Memory Consumption
Doc ID: 77547.1
Monitoring Memory Use
Doc ID: Note:2060096.6
TECH: Unix Virtual Memory, Paging & Swapping explained
Doc ID: Note:17094.1
UNIX: Determining the Size of an Oracle Process
Doc ID: Note:174555.1
How to Check the Environment Variables for an Oracle Process
Doc ID: Note:373303.1
How to Configure RHEL/OEL 4/5 32-bit for Very Large Memory with ramfs and HugePages
Doc ID: Note:317141.1
HugePages on Linux: What It Is... and What It Is Not...
Doc ID: Note:361323.1
Linux IA64 example of allocating 48GB SGA using hugepages
Doc ID: Note:397568.1
Shell Script to Calculate Values Recommended HugePages / HugeTLB Configuration
Doc ID: Note:401749.1
Linux: How to Check Current Shared Memory, Semaphore Values
Doc ID: Note:226209.1
Maximum SHMMAX values for Linux x86 and x86-64
Doc ID: Note:567506.1
TECH: Unix Semaphores and Shared Memory Explained
Doc ID: Note:15566.1
SHARED MEMORY REQUIREMENTS ON UNIX
Doc ID: Note:1011658.6
Linux Big SGA, Large Memory, VLM - White Paper
Doc ID: Note:260152.1
OS Configuration for large SGA
Doc ID: Note:225220.1
Configuring 2.7Gb SGA in RHEL by Relocating the SGA Attach Address
Doc ID: Note:329378.1
How To Set SHMMAX On SOLARIS 10 From CLI
Doc ID: Note:372972.1
How Important It Is To Set shmsys:shminfo_shmmax Above 4 GB
Doc ID: Note:467960.1
DETERMINING WHICH INSTANCE OWNS WHICH SHARED MEMORY & SEMAPHORE SEGMENTS
Doc ID: Note:68281.1
Operating System Tuning Issues on Unix
Doc ID: Note:1012819.6
How to Configure RHEL 3.0 32-bit for Very Large Memory and HugePages
Doc ID: Note:317055.1
ORA-824, ORA-1078 When Enabling PAE on VLM on 10g When Sga_Target Parameter is Set
Doc ID: Note:286093.1
UNIX VIRTUAL MEMORY: UNDERSTANDING AND MEASURING MEMORY USAGE
Doc ID: Note:1012017.6
HOW TO INVESTIGATE THE USE OF SHARED MEMORY SEGMENTS AND SEMAPHORES AT A UNIX LEVEL?
Doc ID: Note:1007971.6
How To Identify Shared Memory Segments for Each Instance <-- dump it..
Doc ID: Note:1021010.6
-- SHARED MEMORY / SEMAPHORES
TECH: Calculating Oracle's SEMAPHORE Requirements
Doc ID: 15654.1
TECH: Unix Semaphores and Shared Memory Explained
Doc ID: 15566.1
Linux Big SGA, Large Memory, VLM - White Paper
Doc ID: Note:260152.1
Modifying Kernel Parameters on RHEL, SLES, and Oracle Enterprise Linux using sysctl
Doc ID: Note:390279.1
Linux: How to Check Current Shared Memory, Semaphore Values
Doc ID: Note:226209.1
How to permanently set kernel parameters on Linux
Doc ID: Note:242529.1
Configuring 2.7Gb SGA in RHEL by Relocating the SGA Attach Address
Doc ID: Note:329378.1
Linux IA64 example of allocating 48GB SGA using hugepages
Doc ID: Note:397568.1
How to Configure RHEL 3.0 32-bit for Very Large Memory and HugePages
Doc ID: Note:317055.1
-- HUGE PAGES, VLM
Configuring RHEL 3 and Oracle 9iR2 32-bit with Hugetlb and Remap_file_pages
Doc ID: Note:262004.1
Database Buffer Cache is not Loaded into Shared Memory when using VLM
Doc ID: Note:454465.1
OS Configuration for large SGA
Doc ID: Note:225220.1
Increasing Usable Address Space for Oracle on 32-bit Linux
Doc ID: Note:200266.1
How to Configure RHAS 2.1 32-bit for Very Large Memory (VLM) with shmfs and bigpages
Doc ID: Note:211424.1
ORA-27123: 3.6 GB SGA size on Red Hat 3.0
Doc ID: Note:273544.1
Red Hat Release 3.0; Advantages for Oracle
Doc ID: Note:259772.1
HugePages on Linux: What It Is... and What It Is Not...
Doc ID: Note:361323.1
Oracle Database Server and the Operating System Memory Limitations
Doc ID: Note:269495.1
-- REMOVE DISK
How to Dynamically Add and Remove SCSI Devices on Linux
Doc ID: 603868.1
-- DEVICE PERSISTENCE
How to set device persistence for RAC Oracle on Linux
Doc ID: 729613.1
-- DEBUG
How to generate and analyze the core files on linux
Doc ID: 278173.1
-- MDADM
Doc ID 759260.1 How to Configure Oracle Enterprise Linux to be Highly Available Using RAID1
Doc ID 343092.1 How to setup Linux md devices for CRS and ASM
-- SCSI
How to Dynamically Add and Remove SCSI Devices on Linux
Doc ID: 603868.1
Note 357472.1 - Configuring device-mapper for CRS/ASM
Note 414897.1 - How to Setup UDEV Rules for RAC OCR & Voting devices on SLES10, RHEL5, OEL5
Note 456239.1 - Understanding Device-mapper in Linux 2.6 Kernel
Note 465001.1 - Configuring raw devices (singlepath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5
Note 564580.1 - Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5
Note 605828.1 - Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0) on RHEL5/OEL5
udev(8) man page
mount(8) man page
-- X25-M - FLASH STORAGE
http://guyharrison.squarespace.com/blog/2009/11/24/using-the-oracle-11gr2-database-flash-cache.html
http://tholis.webnode.com/news/hardware-adventures/
http://www.hardwarezone.com/articles/view.php?cid=10&id=2990
http://www.hardwarezone.com/articles/view.php?cid=10&id=2697&pg=2
http://www.tipidpc.com/viewitem.php?iid=4580739
http://www.everyjoe.com/thegadgetblog/160gb-intel-x25-m-ssd-for-sale/
http://computerworld.com.ph/intel-releases-windows-7-ssd-optimization-toolbox/
http://www.villman.com/Product-Detail/Intel_80GB_SSD_X25-M
http://www.anandtech.com/cpuchipsets/Intel/showdoc.aspx?i=3403&cp=4
http://www.youtube.com/watch?v=-rCC9y1u-8c
-- SWAP
Swap Space on RedHat Advanced Server
Doc ID: Note:225451.1
-- CUSTOM SHUTDOWN / STARTUP
How to Automate Startup/Shutdown of Oracle Database on Linux
Doc ID: Note:222813.1
Customizing System Startup in RedHat Linux
Doc ID: Note:126146.1
-- HUGEMEM KERNEL
-- as per RHCE notes, hugemem is no longer available in RHEL5; with the x86-64 kernel the address-space limit is already very high
https://blogs.oracle.com/gverma/entry/common_incorrect_beliefs_about_1
https://blogs.oracle.com/gverma/entry/redhat_linux_kernels_and_proce_1
Mind the Gap http://static.usenix.org/event/hotos11/tech/final_files/Mogul.pdf
http://perfdynamics.blogspot.com/2012/08/littles-law-and-io-performance.html
fusion io
http://www.theregister.co.uk/2012/01/06/fusion_billion_iops/
http://www.theregister.co.uk/2011/10/04/fusion_io_gen_2/
A Locking Mechanism in Oracle 10g for Web Applications
http://husnusensoy.wordpress.com/2007/07/28/a-locking-mechanism-in-oracle-10g-for-web-applications/
http://www.evernote.com/shard/s48/sh/194a9a05-18ce-4a9b-9cae-1fa9f230d94a/6fc7a6c6bde37e5910b3c8464ed17df4
-- LOG MINER
Truncate Statement is not Detected by Log Miner -- not true on 10gR2
Doc ID: 168738.1
Capture is Slow to Mine Redo Containing a Significant Number of DDL Operations.
Doc ID: 564772.1
Can not delete Archive Log used in a CONTINUOUS_MINE mode Logminer Session
Doc ID: 763700.1
Log Miner Generating Huge Amount Of Undo
Doc ID: 353780.1
How to Recover from a Truncate Command
Doc ID: 117055.1
Doc ID: 223543.1 How to Recover From a DROP / TRUNCATE / DELETE TABLE with RMAN
Doc ID: 141194.1 How to Recover from a Truncate Command on the Wrong Table
Avoiding the truncate during a complete snapshot / materialized view refresh
Doc ID: 1029824.6
http://www.antognini.ch/2012/03/analysing-row-lock-contention-with-logminer/
http://www.nocoug.org/download/2008-05/LogMiner4.pdf
http://docs.oracle.com/cd/B12037_01/server.101/b10825/logminer.htm
https://oraclespin.wordpress.com/category/general-dba/log-miner/
http://oracle-randolf.blogspot.de/2011/07/logical-io-evolution-part-1-baseline.html
http://oracle-randolf.blogspot.com/2011/07/logical-io-evolution-part-2-9i-10g.html
http://alexanderanokhin.wordpress.com/2012/07/26/buffer-is-pinned-count/
http://alexanderanokhin.wordpress.com/tools/digger/
''LIO reasons''
http://blog.tanelpoder.com/2009/11/19/finding-the-reasons-for-excessive-logical-ios/
http://www.jlcomp.demon.co.uk/buffer_usage.html
http://hoopercharles.wordpress.com/2011/01/24/watching-consistent-gets-10200-trace-file-parser/
http://oracle-randolf.blogspot.com/2011/05/assm-bug-reprise-part-1.html
http://oracle-randolf.blogspot.com/2011/05/assm-bug-reprise-part-2.html
http://structureddata.org/2008/09/08/understanding-performance/
-- simulate a logical corruption
http://goo.gl/bhXgh
{{{
create or replace trigger sys.etl_logon
after logon
on database
begin
  if user = 'CCMETL' then
    execute immediate 'alter session set "_serial_direct_read" = ''ALWAYS''';
  else
    null;
  end if;
end;
}}}
''for SAP active data guard (execute on primary)''
{{{
CREATE OR REPLACE TRIGGER adg_pxforce_trigger
AFTER LOGON ON database
WHEN (USER in ('ENTERPRISE'))
BEGIN
IF (SYS_CONTEXT('USERENV','DATABASE_ROLE') IN ('PHYSICAL STANDBY')) -- check if standby
AND (UPPER(SUBSTR(SYS_CONTEXT ('USERENV','SERVER_HOST'),1,4)) IN ('X4DP')) -- check if the ADG cluster
THEN
execute immediate 'alter session force parallel query parallel 4';
END IF;
END;
/
}}}
use [[SYS_CONTEXT]] to instrument
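A minimal sketch for eyeballing the USERENV attributes the ADG trigger above branches on (the "/ as sysdba" connect is just a placeholder, connect however you normally do):
{{{
sqlplus -s / as sysdba <<'EOF'
SELECT SYS_CONTEXT('USERENV','DATABASE_ROLE') AS db_role,
       SYS_CONTEXT('USERENV','SERVER_HOST')   AS server_host,
       SYS_CONTEXT('USERENV','SESSION_USER')  AS sess_user
FROM dual;
EOF
}}}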
http://www.oracle.com/us/products/servers-storage/storage/storage-software/031855.htm
http://wiki.lustre.org/index.php/Main_Page
http://lists.lustre.org/pipermail/lustre-announce/attachments/20100414/34394870/attachment-0001.pdf
http://lists.lustre.org/pipermail/lustre-discuss/2011-June/015655.html
! M6, M5, T5
M6 is just the same speed as M5 and T5; compared to M5, the M6 just has double the number of cores (12 vs 6 cores).
So you can’t really say “M6 is XX% faster”, but with more cores you can say that on M6 you can put/consolidate more workload.
Also, SPECint_rate2006 doesn’t have any M5 or M6 submissions, so the speed/core is around 29 to 30 across the T5 flavors:
3750/128=29.296875
-- below is the final column header (the first column, Result/# Cores, is derived from the raw Result and # Cores values)
Result/# Cores, # Cores, # Chips, # Cores Per Chip, # Threads Per Core, Baseline, Result, Hardware Vendor, System, Published
$ less spec.txt | sort -rnk1 | grep -i sparc | grep -i oracle
30.5625, 16, 1, 16, 8, 441, 489, Oracle Corporation, SPARC T5-1B, Oct-13
29.2969, 128, 8, 16, 8, 3490, 3750, Oracle Corporation, SPARC T5-8, Apr-13
29.1875, 16, 1, 16, 8, 436, 467, Oracle Corporation, SPARC T5-1B, Apr-13
18.6, 2, 1, 2, 2, 33.7, 37.2, Oracle Corporation, SPARC Enterprise M3000, Apr-11
14.05, 4, 1, 4, 2, 50.3, 56.2, Oracle Corporation, SPARC Enterprise M3000, Apr-11
13.7812, 64, 16, 4, 2, 806, 882, Oracle Corporation, SPARC Enterprise M8000, Dec-10
13.4375, 128, 32, 4, 2, 1570, 1720, Oracle Corporation, SPARC Enterprise M9000, Dec-10
12.3047, 256, 64, 4, 2, 2850, 3150, Oracle Corporation, SPARC Enterprise M9000, Dec-10
11.1875, 16, 4, 4, 2, 158, 179, Oracle Corporation, SPARC Enterprise M4000, Dec-10
11, 32, 8, 4, 2, 313, 352, Oracle Corporation, SPARC Enterprise M5000, Dec-10
10.4688, 32, 2, 16, 8, 309, 335, Oracle Corporation, SPARC T3-2, Feb-11
10.4062, 64, 4, 16, 8, 614, 666, Oracle Corporation, SPARC T3-4, Feb-11
10.375, 16, 1, 16, 8, 153, 166, Oracle Corporation, SPARC T3-1, Jan-11
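The first column above is derived. Here's a sketch of producing it from the raw SPEC dump, assuming a hypothetical spec_raw.txt whose columns match the header above minus the derived first one:
{{{
# prepend result-per-core (e.g. 3750/128 = 29.2969) and sort descending
awk -F',' 'tolower($0) ~ /oracle/ && tolower($0) ~ /sparc/ {
  printf "%.4f,%s\n", $6/$1, $0
}' spec_raw.txt | sort -t',' -rnk1
}}}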
References below:
http://www.oracle.com/us/corporate/features/sparc-m6/index.html
By leveraging a common set of technologies across product lines, the price/performance metric of a SPARC M6-32 server with 32 processors is similar to Oracle's SPARC T5 server with 2, 4 or 8 processors.
http://www.oracle.com/us/products/servers-storage/servers/sparc/oracle-sparc/m6-32/overview/index.html
Unlike competitive large servers the SPARC M6-32 has the same price/performance as entry-level servers meaning no price premium for the benefits of a large server.
Near-linear pricing delivers the same price/performance as Oracle's smaller T-series servers and provides large-scale server benefits without the pricing premium of big servers
http://en.wikipedia.org/wiki/SPARC
http://www.oracle.com/technetwork/server-storage/sun-sparc-enterprise/documentation/o13-066-sparc-m6-32-architecture-2016053.pdf
! compared to X4-8
But the X4-8 is faster than T5/M5/M6, with a per-core speed of 38 vs 30
X4-8
https://twitter.com/karlarao/status/435882623500423168
and the X4-8 is pretty much the same speed as the compute nodes of the X4-2, so you also get that linear scaling they claim for T5/M5/M6, but with a much faster CPU
also here’s my “T5-8 vs IBM P780 SPECint_rate2006” comparison http://goo.gl/xj7o8
! compared to IBM
[[T5-8 vs IBM P780 SPECint_rate2006]]
https://blogs.oracle.com/EMMAA/entry/exadata_health_and_resource_usage1
<<<
A newly updated version of the Exadata Health and Resource Usage monitoring has been released! This white paper documents an end-to-end approach to health and resource utilization monitoring for Oracle Exadata Environments. The document has been substantially modified to help Exadata administrators easily follow the troubleshooting methodology defined. Other additions include:
Exadata 12.1.0.6 plugin for Enterprise Manager new features
Enterprise Manager 12.1.0.4 updates
Updates to Include X4 environment
Download the white paper as the link below:
http://www.oracle.com/technetwork/database/availability/exadata-health-resource-usage-2021227.pdf
<<<
https://blogs.oracle.com/EMMAA/entry/exadata_health_and_resource_usage
<<<
MAA has recently published a new whitepaper documenting an end-to-end approach to health and resource utilization monitoring for Oracle Exadata Environments. In addition to technical details, a troubleshooting methodology is explored that allows administrators to quickly identify and correct issues in an expeditious manner.
The document takes a "rule out" approach, in that components of the system are verified as performing correctly to eliminate their role in the incident. There are five areas of concentration in the overall system diagnosis:
1. Steps to take before problems occur that can assist in troubleshooting
2. Changes made to the system
3. Quick analysis
4. Baseline comparison
5. Advanced diagnostics
http://www.oracle.com/technetwork/database/availability/exadata-health-resource-usage-2021227.pdf
<<<
''homepage'' http://www.oracle.com/technetwork/database/features/availability/maa-090890.html
''MAA Best Practices - Oracle Database '' http://www.oracle.com/technetwork/database/features/availability/oracle-database-maa-best-practices-155386.html
''High Availability Customer Case Studies, Presentations, Profiles, Analyst Reports, and Press Releases'' http://www.oracle.com/technetwork/database/features/availability/ha-casestudies-098033.html
''High Availability Demonstrations'' http://www.oracle.com/technetwork/database/features/availability/demonstrations-092317.html
''MAA Articles'' http://www.oracle.com/technetwork/database/features/availability/ha-articles-099205.html
! oracle cloud
this oaktable thread (https://mail.google.com/mail/u/0/#inbox/15bef235f979050a) got me curious about "Maximum Cloud Availability Architecture", and I found this http://www.oracle.com/technetwork/database/features/availability/oracle-cloud-maa-3046100.html
Oracle Private Database Cloud using Cloud Control 13c https://www.udemy.com/oracle-private-database-cloud/learn/v4/overview
! AWS reference architecture oracle database availability
Get Oracle Flying in AWS Cloud https://www.udemy.com/get-oracle-flying-in-aws-cloud/learn/v4/content
Best Practices for Running Oracle Database on Amazon Web Services https://d0.awsstatic.com/whitepapers/best-practices-for-running-oracle-database-on-aws.pdf
Oracle Database on the AWS Cloud https://s3.amazonaws.com/quickstart-reference/oracle/database/latest/doc/oracle-database-on-the-aws-cloud.pdf
Advanced Architectures for Oracle Database on Amazon EC2 https://d0.awsstatic.com/enterprise-marketing/Oracle/AWSAdvancedArchitecturesforOracleDBonEC2.pdf
! Azure
Cloud Design Patterns for Azure: Availability and Resilience https://www.pluralsight.com/courses/azure-design-patterns-availability-resilience
<<<
Do any of you know how to check if the E5-2650 v2 is an MCM chip, or even better, do you know of an official Intel list of MCM chips? The reason I ask is that it has an impact on Oracle Standard Edition licenses (SE) & (SEO)
https://communities.intel.com/message/239585#239585
https://communities.intel.com/message/239195#239195 -- " I regret to inform you that Intel does not have a list of MCM processors available on the Intel web site."
http://unix.ittoolbox.com/groups/technical-functional/ibm-aix-l/need-help-in-understanding-the-cpu-cores-concept-on-the-pseries-machines-4345233
http://oracleoptimization.com/2010/03/15/multi-chip-modules/
https://community.oracle.com/thread/925590?start=0&tstart=0
https://neerajbhatia.wordpress.com/2011/01/17/understanding-oracle-database-licensing-policies/
http://research.engineering.wustl.edu/~songtian/pdf/intel-haswell.pdf <-- desktop
http://en.wikipedia.org/wiki/Broadwell_%28microarchitecture%29 <-- desktop/mobile
http://www.fudzilla.com/home/item/26786-intel-migrates-to-desktop-multi-chip-module-mcm-with-14nm-broadwell <-- desktop
amd http://www.internetnews.com/hardware/article.php/3745836/Why+AMD+Went+the+MultiChip+Module+Route.htm <-- amd mcm
intel forums
https://communities.intel.com/search.jspa?q=multi+chip+module
http://help.howproblemsolution.com/777220/is-it-intel-xeon-e5-2609-processors-is-mcm-multi-chip-module
https://communities.intel.com/message/188146
https://communities.intel.com/thread/48897
https://communities.intel.com/message/230883#230883
https://communities.intel.com/message/252954
https://communities.intel.com/message/259243#259243
https://communities.intel.com/message/239195#239195
<<<
<<showtoc>>
! high level
[img(60%,60%)[ https://i.imgur.com/1amolkO.png ]]
* http://senthilmkumar-utilities.blogspot.com/2013/11/oracle-utilties-meter-data-management.html
https://www.google.com/search?q=master+data+management+in+data+lake&oq=master+data+management+in+data+lake&aqs=chrome..69i57.4474j0j7&sourceid=chrome&ie=UTF-8
https://www.udemy.com/courses/search/?src=ukw&q=master%20data%20management
https://www.udemy.com/master-data-management/
https://www.udemy.com/informatica-master-data-management-hub-tool/
https://www.udemy.com/user/sandip-mohite/
https://www.udemy.com/overview-of-informatica-data-director-idd/
https://learning.oreilly.com/library/view/master-data-management/9781118085684/
https://learning.oreilly.com/library/view/master-data-management/9780123742254/
https://learning.oreilly.com/library/view/building-a-scalable/9780128026489/B978012802510900009X/B978012802510900009X.xhtml
https://www.youtube.com/results?search_query=master+data+management+repository+data+lake
The Big Picture of Metadata Management for Data Governance & Enterprise Architecture https://www.youtube.com/watch?v=Zg9BNGV_DAg
What is a Data Lake https://www.youtube.com/watch?v=LxcH6z8TFpI
Big Data & MDM https://www.youtube.com/watch?v=67d8QIg9k9s
Informatica Big Data Management with Intelligent Data Lake Deep Dive and Demo https://www.youtube.com/watch?v=FUXP4nI92l8
Ten Best Practices for Master Data Management and Data Governance https://www.youtube.com/watch?v=kFok_3SPmKw
How to Use Azure Data Catalog https://www.youtube.com/watch?v=Ei7UynF_S_s
Enterprise Data Architecture Strategy - Build a Meta Data Repository https://www.youtube.com/watch?v=HAt22r_KNJI
DW vs MDM https://www.youtube.com/watch?v=XF4p2gZNLvQ
Data Lake VS Data Warehouse https://www.youtube.com/watch?v=AwbKwcw7bgg
https://www.youtube.com/user/Intricity101/featured
https://www.google.com/search?sxsrf=ACYBGNTQmHf9SsK0FvLtntLkoT6nXfZh5g%3A1564161164855&ei=jDQ7XbPiM7Kvggexm42IDw&q=oracle+master+data+management+install&oq=oracle+master+data+management+install&gs_l=psy-ab.3..33i22i29i30l2.14153.16006..16299...0.0..0.117.696.7j1......0....1..gws-wiz.......0i71j0j0i22i30.Lb8tNhjwOw0&ved=0ahUKEwiz2Oi0itPjAhWyl-AKHbFNA_EQ4dUDCAo&uact=5
http://www.oracle.com/us/products/applications/master-data-management/mdm-overview-1954202.pdf
https://www.google.com/search?q=rules+engine+master+data+management&oq=rules+engine+master+data+management&aqs=chrome..69i57j33.5062j0j7&sourceid=chrome&ie=UTF-8
http://www.oracle.com/us/products/applications/master-data-management/018874.pdf
! MDM services
https://www.intricity.com/data-management-health-checks/
https://www.intricity.com/category/videos/
https://www.youtube.com/user/Intricity101
https://aws.amazon.com/mp/scenarios/bi/mdm/
https://learning.oreilly.com/search/?query=master%20data%20management%20tools&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&include_collections=false&include_notebooks=false&is_academic_institution_account=false&sort=relevance&facet_json=true&page=0
https://learning.oreilly.com/library/view/cloud-data-design/9781484236154/A448498_1_En_5_Chapter.html
https://learning.oreilly.com/library/view/a-practical-guide/0738438022/8084ch02.xhtml
! MDM tools used
looker https://looker.com/product/new-features
collibra https://www.collibra.com/ , https://www.youtube.com/results?search_query=collibra , https://www.youtube.com/watch?v=ncLqaBYa0NE
informatica https://www.informatica.com/products/big-data/enterprise-data-catalog.html#fbid=TwzPerm1Zph
ibm https://www.ibm.com/us-en/marketplace/ibm-infosphere-master-data-management
pimcore https://pimcore.com/en/lp/mdm?gclid=Cj0KCQjwyerpBRD9ARIsAH-ITn9vykyfOcPeDQSwRrxPYrAkEC6CQXnSSEex2_3v-BiPEsnnc7Recm0aAhKMEALw_wcB
cloudera navigator https://www.cloudera.com/products/product-components/cloudera-navigator.html
https://atlas.apache.org/#/Architecture
!! mother of all MDM
http://metaintegration.net/
<<<
Informatica https://youtu.be/l50H3nLfyng?t=244
Collibra https://www.collibra.com/
Oracle has a powerful metadata harvester https://www.oracle.com/a/tech/docs/omm-12213-help-userguide.pdf
Centurylink uses Cloudera Navigator to augment their custom developed MDM
And, I just found out that under the hood everyone (Oracle, Informatica and others) uses this company’s metadata harvester tool http://metaintegration.net/Company/ (you’ll see on the section “Vendors embedding Meta Integration components in their software”)
<<<
http://hemantoracledba.blogspot.com/2010/08/adding-datafile-that-had-been-excluded.html
! aws sagemaker vs azure ml vs google ml
https://www.altexsoft.com/blog/datascience/comparing-machine-learning-as-a-service-amazon-microsoft-azure-google-cloud-ai-ibm-watson/
https://towardsdatascience.com/aws-sagemaker-vs-azure-machine-learning-3ac0172495da
limitations:
* https://sqlmaria.com/2017/08/01/getting-the-most-out-of-oracle-sql-monitor/
* https://blogs.oracle.com/optimizer/using-sql-patch-to-add-hints-to-a-packaged-application
script
https://carlos-sierra.net/2014/06/19/skipping-acs-ramp-up-using-a-sql-patch/
https://carlos-sierra.net/2016/02/29/sql-monitoring-without-monitor-hint/
{{{
----------------------------------------------------------------------------------------
--
-- File name: sqlpch.sql
--
-- Purpose: Create Diagnostics SQL Patch for one SQL_ID
--
-- Author: Carlos Sierra
--
-- Version: 2013/12/28
--
-- Usage: This script inputs two parameters. Parameter 1 the SQL_ID and Parameter 2
-- the set of Hints for the SQL Patch (default to GATHER_PLAN_STATISTICS
-- MONITOR BIND_AWARE).
--
-- Example: @sqlpch.sql f995z9antmhxn BIND_AWARE
--
-- Notes: Developed and tested on 11.2.0.3 and 12.0.1.0
--
---------------------------------------------------------------------------------------
SPO sqlpch.txt;
DEF def_hint_text = 'GATHER_PLAN_STATISTICS MONITOR BIND_AWARE';
SET DEF ON TERM OFF ECHO ON FEED OFF VER OFF HEA ON LIN 2000 PAGES 100 LONG 8000000 LONGC 800000 TRIMS ON TI OFF TIMI OFF SERVEROUT ON SIZE 1000000 NUMF "" SQLP SQL>;
SET SERVEROUT ON SIZE UNL;
COL hint_text NEW_V hint_text FOR A300;
SET TERM ON ECHO OFF;
PRO
PRO Parameter 1:
PRO SQL_ID (required)
PRO
DEF sql_id_1 = '&1';
PRO
PRO Parameter 2:
PRO HINT_TEXT (default: &&def_hint_text.)
PRO
DEF hint_text_2 = '&2';
PRO
PRO Values passed:
PRO ~~~~~~~~~~~~~
PRO SQL_ID : "&&sql_id_1."
PRO HINT_TEXT: "&&hint_text_2." (default: "&&def_hint_text.")
PRO
SET TERM OFF ECHO ON;
SELECT TRIM(NVL(REPLACE('&&hint_text_2.', '"', ''''''), '&&def_hint_text.')) hint_text FROM dual;
WHENEVER SQLERROR EXIT SQL.SQLCODE;
-- trim sql_id parameter
COL sql_id NEW_V sql_id FOR A30;
SELECT TRIM('&&sql_id_1.') sql_id FROM DUAL;
VAR sql_text CLOB;
VAR sql_text2 CLOB;
EXEC :sql_text := NULL;
EXEC :sql_text2 := NULL;
-- get sql_text from memory
DECLARE
l_sql_text VARCHAR2(32767);
BEGIN -- 10g see bug 5017909
FOR i IN (SELECT DISTINCT piece, sql_text
FROM gv$sqltext_with_newlines
WHERE sql_id = TRIM('&&sql_id.')
ORDER BY 1, 2)
LOOP
IF :sql_text IS NULL THEN
DBMS_LOB.CREATETEMPORARY(:sql_text, TRUE);
DBMS_LOB.OPEN(:sql_text, DBMS_LOB.LOB_READWRITE);
END IF;
l_sql_text := REPLACE(i.sql_text, CHR(00), ' '); -- removes NUL characters
DBMS_LOB.WRITEAPPEND(:sql_text, LENGTH(l_sql_text), l_sql_text);
END LOOP;
-- if found in memory then sql_text is not null
IF :sql_text IS NOT NULL THEN
DBMS_LOB.CLOSE(:sql_text);
END IF;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('getting sql_text from memory: '||SQLERRM);
:sql_text := NULL;
END;
/
SELECT :sql_text FROM DUAL;
-- get sql_text from awr
DECLARE
l_sql_text VARCHAR2(32767);
l_clob_size NUMBER;
l_offset NUMBER;
BEGIN
IF :sql_text IS NULL OR NVL(DBMS_LOB.GETLENGTH(:sql_text), 0) = 0 THEN
SELECT sql_text
INTO :sql_text2
FROM dba_hist_sqltext
WHERE sql_id = TRIM('&&sql_id.')
AND sql_text IS NOT NULL
AND ROWNUM = 1;
END IF;
-- if found in awr then sql_text2 is not null
IF :sql_text2 IS NOT NULL THEN
l_clob_size := NVL(DBMS_LOB.GETLENGTH(:sql_text2), 0);
l_offset := 1;
DBMS_LOB.CREATETEMPORARY(:sql_text, TRUE);
DBMS_LOB.OPEN(:sql_text, DBMS_LOB.LOB_READWRITE);
-- store in clob as 64 character pieces
WHILE l_offset < l_clob_size
LOOP
IF l_clob_size - l_offset > 64 THEN
l_sql_text := REPLACE(DBMS_LOB.SUBSTR(:sql_text2, 64, l_offset), CHR(00), ' ');
ELSE -- last piece
l_sql_text := REPLACE(DBMS_LOB.SUBSTR(:sql_text2, l_clob_size - l_offset + 1, l_offset), CHR(00), ' ');
END IF;
DBMS_LOB.WRITEAPPEND(:sql_text, LENGTH(l_sql_text), l_sql_text);
l_offset := l_offset + 64;
END LOOP;
DBMS_LOB.CLOSE(:sql_text);
END IF;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('getting sql_text from awr: '||SQLERRM);
:sql_text := NULL;
END;
/
SELECT :sql_text2 FROM DUAL;
SELECT :sql_text FROM DUAL;
-- validate sql_text
BEGIN
IF :sql_text IS NULL THEN
RAISE_APPLICATION_ERROR(-20100, 'SQL_TEXT for SQL_ID &&sql_id. was not found in memory (gv$sqltext_with_newlines) or AWR (dba_hist_sqltext).');
END IF;
END;
/
PRO generate SQL Patch for SQL "&&sql_id." with CBO Hints "&&hint_text."
SELECT loaded_versions, invalidations, address, hash_value
FROM v$sqlarea WHERE sql_id = '&&sql_id.' ORDER BY 1;
SELECT child_number, plan_hash_value, executions, is_shareable
FROM v$sql WHERE sql_id = '&&sql_id.' ORDER BY 1, 2;
-- drop prior SQL Patch
WHENEVER SQLERROR CONTINUE;
PRO ignore errors
EXEC DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'sqlpch_&&sql_id.');
WHENEVER SQLERROR EXIT SQL.SQLCODE;
-- create SQL Patch
PRO you have to connect as SYS
BEGIN
SYS.DBMS_SQLDIAG_INTERNAL.I_CREATE_PATCH (
sql_text => :sql_text,
hint_text => '&&hint_text.',
name => 'sqlpch_&&sql_id.',
category => 'DEFAULT',
description => '/*+ &&hint_text. */'
);
END;
/
-- flush cursor from shared_pool
PRO *** before flush ***
SELECT inst_id, loaded_versions, invalidations, address, hash_value
FROM gv$sqlarea WHERE sql_id = '&&sql_id.' ORDER BY 1;
SELECT inst_id, child_number, plan_hash_value, executions, is_shareable
FROM gv$sql WHERE sql_id = '&&sql_id.' ORDER BY 1, 2;
PRO *** flushing &&sql_id. ***
BEGIN
FOR i IN (SELECT address, hash_value
FROM gv$sqlarea WHERE sql_id = '&&sql_id.')
LOOP
DBMS_OUTPUT.PUT_LINE(i.address||','||i.hash_value);
BEGIN
SYS.DBMS_SHARED_POOL.PURGE (
name => i.address||','||i.hash_value,
flag => 'C'
);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(SQLERRM);
END;
END LOOP;
END;
/
PRO *** after flush ***
SELECT inst_id, loaded_versions, invalidations, address, hash_value
FROM gv$sqlarea WHERE sql_id = '&&sql_id.' ORDER BY 1;
SELECT inst_id, child_number, plan_hash_value, executions, is_shareable
FROM gv$sql WHERE sql_id = '&&sql_id.' ORDER BY 1, 2;
WHENEVER SQLERROR CONTINUE;
SET DEF ON TERM ON ECHO OFF FEED 6 VER ON HEA ON LIN 80 PAGES 14 LONG 80 LONGC 80 TRIMS OFF TI OFF TIMI OFF SERVEROUT OFF NUMF "" SQLP SQL>;
SET SERVEROUT OFF;
PRO
PRO SQL Patch "sqlpch_&&sql_id." will be used on next parse.
PRO To drop SQL Patch on this SQL:
PRO EXEC DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'sqlpch_&&sql_id.');
PRO
UNDEFINE 1 2 sql_id_1 sql_id hint_text_2 hint_text
CL COL
PRO
PRO sqlpch completed.
SPO OFF;
}}}
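To verify the patch after the script runs, dba_sql_patches can be queried (a sketch; the name follows the sqlpch_&&sql_id. convention above, and once the cursor is re-parsed the Note section of DBMS_XPLAN output should mention the patch):
{{{
sqlplus -s / as sysdba <<'EOF'
SELECT name, status, created
FROM   dba_sql_patches
WHERE  name LIKE 'sqlpch_%'
ORDER  BY created;
EOF
}}}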
"DB CPU" / "CPU + Wait for CPU" / "CPU time" Reference Note (Doc ID 1965757.1)
http://gigaom.com/2012/11/12/mram-takes-another-step-closer-to-the-real-world/
! transfer outlook to new computer
https://www.stellarinfo.com/blog/transfer-outlook-data-to-new-computer/
! manual archive
http://office.microsoft.com/en-us/outlook-help/archive-a-folder-manually-HA001121610.aspx
https://support.office.com/en-us/article/archive-items-manually-ecf54f37-14d7-4ee3-a830-46a5c33274f6
! turn off auto archive
<<<
To archive only when you want, turn off AutoArchive.
Click File > Options > Advanced.
Under AutoArchive, click AutoArchive Settings.
Uncheck the Run AutoArchive every n days box.
<<<
http://office.microsoft.com/en-us/powerpoint-help/view-your-speaker-notes-privately-while-delivering-a-presentation-on-multiple-monitors-HA010067383.aspx
http://www.labnol.org/software/see-speaker-notes-during-presentation/17927/
http://techmonks.net/using-the-presenter-view-in-microsoft-powerpoint/
! ink and erase
{{{
CTRL-P for "pen"
press E for "erase"
}}}
http://office.microsoft.com/en-us/project-help/setting-working-times-and-days-off-by-using-project-calendars-HA001020995.aspx
http://forums.techarena.in/microsoft-project/1264277.htm
The silly IO test
http://www.facebook.com/photo.php?pid=6096927&l=5082945abc&id=552113028
<<<
A simple IO test on a Macbook Air 11" 2GB memory 64GB SSD..
the peak write IOPS is just too high for this small lightweight laptop..
For a clearer image http://lh3.ggpht.com/_F2x5WXOJ6Q8/TTwvCC02nWI/AAAAAAAABBU/WBP3z81nifM/SillyTest.jpg
Also you can compare the performance numbers of 4 disk spindles.. short stroked or not.. here http://karlarao.tiddlyspot.com/#OrionTestCases
<<<
Gaja buys MacAir with 1 CPU with 2 cores (2.18Ghz), 4GB of RAM and 256GB of flash storage
http://www.facebook.com/Leo4Evr/posts/10150123422217659
<<<
Karl Arao Hi Gaja.. I would be interested to see the output of this silly IO test on your new Mac http://goo.gl/lZdUw ;)
---------------------------------------------------------------------------------------------------------------------
Gaja Krishna Vaidyanatha @Karl - Silly Test...indeed...one of the reasons for all of the memory being consumed is the excessive growth of the filesystem buffer cache (FSBC)/Page Cache, due to an increased amount of I/O load on the system. That in turn causes the paging/swapping daemon to be overactive, thus inflating the CPU consumption on the machine. It is a classic case of buffered I/O killing your system. Realistically, this test is more about creating a severe I/O bottleneck instead of measuring IOPS and transfer rates.
A true IOPS test will entail doing just direct I/O and bypassing the FSBC/Page cache. One way to simulate that(if the FSBC/Page cache cannot be bypassed) is to do a dd of a large file that has never been read before. Reboot the system and repeat as often as needed. I just did a few of those and got approx 10,000 - 20,000 IOPS (depending on the bs size) with a transfer rate of approx 200MB/sec. The dd bs (block size) that I tried were 8K and 16K. The numbers are good enough for me :)))
Gaja Krishna Vaidyanatha One more thing...if you run "dd" a very small blocksize (default), it will generate more overhead due to the large number of I/O requests, potentially spending more time under "%sys" instead of "%usr"
---------------------------------------------------------------------------------------------------------------------
Karl Arao That is awesome and the numbers are just impressive... I want to have one! :)
Yes, I made that IO test with the intention of bringing the system down to its knees and characterizing the IO performance on that level of stress. That time I want to know if a laptop on SSD will out number the IO performance of my R&D server http://goo.gl/eLVo2 (running lots of VMs) having 8GB memory, IntelCore2Quad Q9500 & 5 1TB short stroked disk (on 100GB area) on an LVM stripe that's about 900+ IOPS & 300+ MB/s on my Orion and dbms_resource_manager.calibrate_io runs and actually running 250 parallel sessions doing SELECT * on a 300GB table http://goo.gl/PYYyH (the same disks but as ASM disks on the next 100GB area - short stroked).
Also prior to running that IO test on a MacAir I ran the same DD command on my old laptop w/ 4GB memory, IntelCore2 T8100 & 160GB SATA.. just two DDs will instantly go IO WAIT% of 60% going to 90% and load average shooting up and will be completely unresponsive. I was monitoring the General workload, CPU, & Disk using COLLECTL and I can see that the disk is being hammered with lots of 4K blocks IO size & I'm really having high Qlen,Wait, & 100% Disk Utilization. And I have to restart the laptop to be usable again.
So on the IO test on MacAir, that's my first time seeing the GUI perf monitor which I noticed there's no IO WAIT% on the metrics and see SYS% shoot up as I invoke more DDs (I don't know if that's really the nature of machines on SSDs). And surprisingly after 60 DDs I can still move my mouse and the system is still responsive. Cool!
On the test you did that is interesting and you had really nice insights on your reply, I'd also like to try that sometime ;) Can you mail me the exact commands for the test case? karlarao@gmail.com
BTW for an R&D machine that weighs 1+kg, 10K IOPS, 200MB/s that's not bad!
<<<
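A sketch of the dd probe Gaja describes above (BSD/macOS dd syntax; file name and sizes are placeholders, and reboot between read runs since macOS has no O_DIRECT to bypass the page cache):
{{{
dd if=/dev/zero of=bigfile bs=1m count=2048     # write a 2GB file once
time dd if=bigfile of=/dev/null bs=8k           # cold read at 8K
time dd if=bigfile of=/dev/null bs=16k          # repeat at 16K after another reboot
}}}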
<<showtoc>>
! annoyances
Top Mac OS X annoyances and how to fix them http://www.voipsec.eu/?p=740
path finder and dropbox integration http://blip.tv/appshrink/os-x-tips-and-tweaks-how-to-enable-dropbox-contextual-menu-items-in-pathfinder-6110422
how to lock your mac http://www.howtogeek.com/howto/32810/how-to-lock-your-mac-os-x-display-when-youre-away/
http://gadgetwise.blogs.nytimes.com/2011/05/02/qa-changing-the-functions-of-a-macs-f-keys/
http://osxdaily.com/2010/09/06/change-your-mac-hostname-via-terminal/
http://apple.stackexchange.com/questions/66611/how-to-change-computer-name-so-terminal-displays-it-in-mac-os-x-mountain-lion
http://www.cultofmac.com/108120/how-to-change-the-scrolling-direction-in-lion-os-x-tips/
http://support.apple.com/kb/ht2490
http://superuser.com/questions/322983/how-to-let-ctrl-page-down-switch-tabs-inside-vim-in-terminal-app
http://askubuntu.com/questions/105224/ctrl-page-down-ctrl-page-up
http://www.danrodney.com/mac/
http://www.mac-forums.com/forums/switcher-hangout/121984-easy-way-show-desktop.html
http://www.silvermac.com/2010/show-desktop-on-mac/
alt-enter on excel http://dropline.net/2009/02/adding-new-lines-to-cells-in-excel-for-the-mac/
damn you autocorrect http://osxdaily.com/2011/07/28/turn-off-auto-correct-in-mac-os-x-lion/
windows key https://forums.virtualbox.org/viewtopic.php?f=1&t=17641
sublime text column selection https://www.sublimetext.com/docs/3/column_selection.html
! software
homebrew http://brew.sh/ (to install wget, parallel)
uninstall http://lifehacker.com/5828738/the-best-app-uninstaller-for-mac
ntfs mounts http://macntfs-3g.blogspot.com/, http://www.tuxera.com/products/tuxera-ntfs-for-mac/
filesystem space analyzer http://www.derlien.com/downloads/index.html
jedit, textwrangler https://groups.google.com/forum/?fromgroups=#!topic/textwrangler/nb3Nw1GC4Fo
teamviewer
dropbox
evernote
tiddlywiki
show desktop
virtualbox
ms office for mac
skype
sqldeveloper
picasa
fx photo studio pro
camtasia
little snapper
mpeg streamclip
mucommander
chicken vnc
crossover
flashplayer
Firefox 4.0 RC 2 , do this after the install https://discussions.apple.com/message/21335991#21335991
filezilla
appcleaner
http://www.ragingmenace.com/software/menumeters/index.html#sshot
kdiff
http://manytricks.com/timesink/ alternative to manictime
http://www.macupdate.com/app/mac/28171/ichm
http://i-funbox.com/ifunboxmac/ copying from iphone to mac
terminator http://software.jessies.org/terminator/#downloads, https://drive.google.com/folderview?id=0BzZNCgKvEkQYZDBNTm1HWThOaEU&usp=drive_web#list
http://www.freemacware.com/jellyfissh/ <-- can save passwords
nmap http://nmap.org/download.html#macosx
http://adium.im/ <-- instant messenger
https://itunes.apple.com/us/app/battery-time/id547105832 <-- battery time
ithoughtsx https://itunes.apple.com/us/app/ithoughtsx/id720669838?mt=12, http://toketaware.com/howto/
snagit, http://feedback.techsmith.com/techsmith/topics/snagit_file_back_up?page=1#reply_8579747
path finder
ntfs-3g tuxera ntfs
http://support.agilebits.com/kb/syncing/how-to-move-your-1password-data-file-between-pc-and-mac
http://www.donationcoder.com/Software/Mouser/screenshotcaptor/, http://download.cnet.com/Screenshot-Captor/3000-20432_4-10433616.html <-- long screenshots
http://thepdf.com/unlock-pdf.html <-- unlock PDF restrictions
[[licecap - gif]] - gif creator
istat menus
cleanmymac3
pending:
http://lifehacker.com/5880540/the-best-screen-capture-tool-for-mac-os-x
http://mac.appstorm.net/roundups/utilities-roundups/10-screen-recording-tools-for-mac/
oracle and mac
http://tjmoracle.tumblr.com/post/26025230295/os-x-software-for-oracle-developers
http://blog.enkitec.com/2011/08/get-oracle-instant-client-working-on-mac-os-x-lion/
! macport and fink
http://macosx.com/forums/mac-os-x-system-mac-software/306582-yum-apt-get.html
http://sparkyspider.blogspot.com/2010/03/apt-get-install-yum-install-on-mac-os-x.html
http://www.macports.org/index.php
https://developer.apple.com/xcode/
http://forums.macrumors.com/showthread.php?t=720035
http://scottlab.ucsc.edu/~wgscott/xtal/wiki/index.php/Main_Page
! hibernate
http://www.youtube.com/watch?feature=fvwp&v=XA0MnnEFmDQ&NR=1
http://forums.macrumors.com/showthread.php?t=1491002
http://apple.stackexchange.com/questions/26842/is-there-a-way-to-hibernate-in-mac
http://www.macworld.com/article/1053471/sleepmode.html
http://deepsleep.free.fr/deepsleep.pdf
http://www.geekguides.co.uk/104/how-to-enable-hibernate-mode-on-a-mac/
http://blog.kaputtendorf.de/2007/08/17/hibernation-tool-for-mac-os/
http://www.garron.me/mac/macbook-hibernate-sleep-deep-standby.html
http://etherealmind.com/osx-hibernate-mode/
http://www.jinx.de/SmartSleep.html
https://itunes.apple.com/au/app/smartsleep/id407721554?mt=12
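Most of the links above boil down to pmset's hibernatemode setting; a quick sketch:
{{{
pmset -g | grep hibernatemode      # check the current sleep mode
sudo pmset -a hibernatemode 25     # force true hibernation (RAM written to disk, power off)
}}}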
! presentations
https://georgecoghill.wordpress.com/2012/08/12/highlight-draw-on-your-mac-screen/
http://lifehacker.com/304418/rock-your-presentation-with-the-right-tools-and-apps
http://lifehacker.com/281921/call-out-anything-on-your-screen-with-highlight?tag=softwarefeaturedmacdownload
http://lifehacker.com/255361/mac-tip--zoom-into-any-area-on-the-screen?tag=softwaremacosx
http://lifehacker.com/191126/download-of-the-day--doodim?tag=softwaredownloads
http://forums.macrumors.com/showthread.php?t=1195196
http://www.dummies.com/how-to/content/erase-pen-and-highlighter-drawings-on-your-powerpo.html
! mount iso
http://osxdaily.com/2008/04/22/easily-mount-an-iso-in-mac-os-x/
{{{
using disk utility
or
hdiutil mount sample.iso
}}}
! create iso compatible on windows
http://www.makeuseof.com/tag/how-to-create-windows-compatible-iso-disc-images-in-mac-os-x/
{{{
* use disk utility to create the CDR file
* then, enter this line of code to transform the .cdr to an ISO file:
hdiutil makehybrid -iso -joliet -o [filename].iso [filename].cdr
}}}
or just do this all in command line
{{{
hdiutil makehybrid -iso -joliet -o tmp.iso tmp -ov
}}}
! burn DVD
http://www.youtube.com/watch?v=5x7jpIoFixc
! burn ISO linux installer
http://switchingtolinux.blogspot.com/2007/07/burning-ubuntu-iso-in-mac-os-x.html
! migrate boot device to SSD
http://www.youtube.com/watch?v=Zda6pGH8_1Q
! programming editors
I got Sublime Text 2 and TextWrangler
http://smyck.net/2011/10/02/text-editors-for-programmers-on-the-mac/
http://sixrevisions.com/web-development/the-15-most-popular-text-editors-for-developers/
http://mac.appstorm.net/roundups/office-roundups/top-10-mac-text-editors/
http://meandmark.com/blog/2010/01/getting-started-with-mac-programming/
sublime text tutorial http://www.youtube.com/watch?v=TZ-bgcJ6fQo
! install fonts
http://www.youtube.com/watch?v=3AIR7_ch9No
! juniper vpn
http://wheatoncollege.edu/technology/started/networks-wheaton/juniper-vpn-instructions/juniper-vpn-instructions-for-macintosh/
! compare folders
http://www.macworld.com/article/1167853/use_visualdiffer_to_compare_the_contents_of_folders_and_files.html
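For a quick pass before reaching for VisualDiffer, the built-in diff works too (paths are placeholders):
{{{
diff -rq /path/to/A /path/to/B
}}}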
! SecureCrt
http://www.vandyke.com/support/tips/backupsessions.html
https://www.vandyke.com/products/securecrt/faq/025.html
http://www.vandyke.com/download/securecrt/5.2/index.html, http://www.itpub.net/forum.php?mod=viewthread&tid=739092
{{{
1) Install SecureCRT on windows and point it to C:\Dropbox\Putty\SecureCRT\Config location
2) Install SecureCRT on mac and copy all Sessions file to windows
Karl-MacBook:Sessions karl$ pwd
/Users/karl/Library/Application Support/VanDyke/SecureCRT/Config/Sessions
Karl-MacBook:Sessions karl$ cp -rpv * /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions/
Default.ini -> /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions/Default.ini
__FolderData__.ini -> /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions/__FolderData__.ini
v2 -> /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions/v2
v2/__FolderData__.ini -> /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions/v2/__FolderData__.ini
v2/enkdb01.ini -> /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions/v2/enkdb01.ini
3) Create symbolic link on Sessions folder to Dropbox
cd /Users/karl/Library/Application Support/VanDyke/SecureCRT/Config/
rm -rf Sessions/
ln -s /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions Sessions
}}}
! outlook on vbox with zimbra connector
Get the connector from this site
https://mail.physics.ucla.edu/downloads/ZimbraConnectorOLK_7.0.1.6307_x86.msi
and the outlook SP3 here, the connector requires SP3.. so you have to install this first..
http://www.microsoft.com/en-us/download/details.aspx?id=27838
then set up a shared folder across the mac and VM called /Users/karl/Dropbox/tmp which is also selectively synced by Dropbox on the windows VM
! safeboot, rescue mode
http://www.macworld.com/article/2018853/when-good-macs-go-bad-steps-to-take-when-your-mac-wont-start-up.html
! format hard disk for time machine - MAC and NTFS
http://www.youtube.com/watch?v=hdDSpIkv-4o
! enable SSH to localhost
http://bluishcoder.co.nz/articles/mac-ssh.html
http://superuser.com/questions/555810/how-do-i-ssh-login-into-my-mac-as-root
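Short version (sketch): turn on Remote Login from the terminal, then test:
{{{
sudo systemsetup -setremotelogin on
ssh localhost
}}}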
! SSD migration
http://www.amazon.com/Samsung-Electronics-EVO-Series-2-5-Inch-MZ-7TE250BW/dp/B00E3W1726/ref=pd_bxgy_pc_img_y
http://www.amazon.com/Doubler-Converter-Solution-selected-SuperDrive/dp/B00724W0N2
http://www.youtube.com/watch?v=YWUKAUlxrkg
google search https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=mac%20os%20x%20migrate%20to%20ssd
! restart audio/sound service without reboot
http://apple.stackexchange.com/questions/16842/restarting-sound-service
{{{
sudo kill -9 `ps ax|grep 'coreaudio[a-z]' | awk '{print $1}'`
sudo kextunload /System/Library/Extensions/AppleHDA.kext
sudo kextload /System/Library/Extensions/AppleHDA.kext
}}}
! command tab doesn't work, dock not responding
http://superuser.com/questions/7715/cmd-tab-suddenly-stopped-working-and-my-dock-is-unresponsive-what-do-i-do
{{{
killall -9 Dock
}}}
! killstuff
{{{
Karl-MacBook:~ root# cat killstuff.sh
kill -9 `ps -ef | grep -i "macos/iphoto" | grep -v grep | awk '{print $2}'`
kill -9 `ps -ef | grep -i "macos/itunes" | grep -v grep | awk '{print $2}'`
kill -9 `ps -ef | grep -i "GoogleSoftwareUpdate" | grep -v grep | awk '{print $2}'`
}}}
! vnc, screensharing
http://www.davidtheexpert.com/post.php?id=5
open safari and type
{{{
vnc://192.168.1.9
}}}
! verify, repair hard disk
http://www.macissues.com/2014/03/22/how-to-verify-and-repair-your-hard-disk-in-os-x/
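Command-line equivalent (sketch; repairing the boot volume itself needs Recovery mode, and the external volume name is a placeholder):
{{{
diskutil verifyVolume /
diskutil repairVolume /Volumes/vm
}}}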
! dot_clean ._ underscore files
https://coderwall.com/p/yf7yjq/clean-up-osx-dotfiles
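Example (the volume path is a placeholder):
{{{
# merge the AppleDouble ._ files back into their data forks and delete them
dot_clean -m /Volumes/vm
}}}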
! get CPU information
{{{
-- 15 inch
AMAC02P37MYG3QC:~ kristofferson.a.arao$ system_profiler SPHardwareDataType
Hardware:
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro11,2
Processor Name: Intel Core i7
Processor Speed: 2.2 GHz
Number of Processors: 1
Total Number of Cores: 4
L2 Cache (per Core): 256 KB
L3 Cache: 6 MB
Memory: 16 GB
Boot ROM Version: MBP112.0138.B16
SMC Version (system): 2.18f15
Serial Number (system): C02P37MYG3QC
Hardware UUID: F0DCC410-9E8A-5D77-98E3-C7767EB0CF8F
-- 13 inch
Karl-MacBook:~ karl$ system_profiler SPHardwareDataType
Hardware:
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro9,2
Processor Name: Intel Core i7
Processor Speed: 2.9 GHz
Number of Processors: 1
Total Number of Cores: 2
L2 Cache (per Core): 256 KB
L3 Cache: 4 MB
Memory: 16 GB
Boot ROM Version: MBP91.00D3.B0C
SMC Version (system): 2.2f44
Serial Number (system): C1MK911MDV31
Hardware UUID: 78B26406-BF78-531D-BCFB-1C3289BD44A5
Sudden Motion Sensor:
State: Enabled
}}}
http://fortysomethinggeek.blogspot.com/2012/11/getting-cpu-info-from-command-line-in.html
<<<
sysctl -n machdep.cpu.brand_string
<<<
{{{
Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
}}}
<<<
sysctl -a | grep machdep.cpu
<<<
{{{
machdep.cpu.max_basic: 13
machdep.cpu.max_ext: 2147483656
machdep.cpu.vendor: GenuineIntel
machdep.cpu.brand_string: Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
machdep.cpu.family: 6
machdep.cpu.model: 70
machdep.cpu.extmodel: 4
machdep.cpu.extfamily: 0
machdep.cpu.stepping: 1
machdep.cpu.feature_bits: 9221959987971750911
machdep.cpu.leaf7_feature_bits: 10155
machdep.cpu.extfeature_bits: 142473169152
machdep.cpu.signature: 263777
machdep.cpu.brand: 0
machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C
machdep.cpu.leaf7_features: SMEP ERMS RDWRFSGS TSC_THREAD_OFFSET BMI1 AVX2 BMI2 INVPCID FPU_CSDS
machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT RDTSCP TSCI
machdep.cpu.logical_per_package: 16
machdep.cpu.cores_per_package: 8
machdep.cpu.microcode_version: 15
machdep.cpu.processor_flag: 5
machdep.cpu.mwait.linesize_min: 64
machdep.cpu.mwait.linesize_max: 64
machdep.cpu.mwait.extensions: 3
machdep.cpu.mwait.sub_Cstates: 270624
machdep.cpu.thermal.sensor: 1
machdep.cpu.thermal.dynamic_acceleration: 1
machdep.cpu.thermal.invariant_APIC_timer: 1
machdep.cpu.thermal.thresholds: 2
machdep.cpu.thermal.ACNT_MCNT: 1
machdep.cpu.thermal.core_power_limits: 1
machdep.cpu.thermal.fine_grain_clock_mod: 1
machdep.cpu.thermal.package_thermal_intr: 1
machdep.cpu.thermal.hardware_feedback: 0
machdep.cpu.thermal.energy_policy: 1
machdep.cpu.xsave.extended_state: 7 832 832 0
machdep.cpu.xsave.extended_state1: 1 0 0 0
machdep.cpu.arch_perf.version: 3
machdep.cpu.arch_perf.number: 4
machdep.cpu.arch_perf.width: 48
machdep.cpu.arch_perf.events_number: 7
machdep.cpu.arch_perf.events: 0
machdep.cpu.arch_perf.fixed_number: 3
machdep.cpu.arch_perf.fixed_width: 48
machdep.cpu.cache.linesize: 64
machdep.cpu.cache.L2_associativity: 8
machdep.cpu.cache.size: 256
machdep.cpu.tlb.inst.large: 8
machdep.cpu.tlb.data.small: 64
machdep.cpu.tlb.data.small_level1: 64
machdep.cpu.tlb.shared: 1024
machdep.cpu.address_bits.physical: 39
machdep.cpu.address_bits.virtual: 48
machdep.cpu.core_count: 4
machdep.cpu.thread_count: 8
machdep.cpu.tsc_ccc.numerator: 0
machdep.cpu.tsc_ccc.denominator: 0
}}}
{{{
sysctl -a | grep machdep.cpu | grep core_count
machdep.cpu.core_count: 4
sysctl -a | grep machdep.cpu | grep thread_count
machdep.cpu.thread_count: 8
}}}
! get memory information
{{{
hostinfo | grep memory
}}}
! 850 EVO mSATA vs 2.5
http://www.samsung.com/global/business/semiconductor/minisite/SSD/global/html/ssd850evo/specifications.html
http://www.legitreviews.com/samsung-evo-850-msata-m2-ssd-review_160540/7
http://www.storagereview.com/samsung_ssd_850_evo_ssd_review
http://www.storagereview.com/samsung_850_evo_msata_ssd_review
! Caffeine - keep your mac awake
http://apple.stackexchange.com/questions/76107/how-can-i-keep-my-mac-awake-and-locked
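The built-in caffeinate CLI does the same job (sketch; the script name is hypothetical):
{{{
caffeinate -d -i ./long_running_job.sh   # block display and idle sleep while it runs
}}}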
! teamviewer reset id
http://changeteamviewerid.blogspot.com/2012/10/get-new-teamviewer-id-on-windows.html
! install gnu parallel
homebrew http://brew.sh/ (to install wget, parallel)
https://www.0xcb0.com/2011/10/19/running-parallel-bash-tasks-on-os-x/
https://darknightelf.wordpress.com/2015/01/01/gnu-parallel-on-osx/
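Sketch of the install plus a trivial smoke test:
{{{
brew install parallel
ls *.log | parallel -j4 gzip {}   # gzip all .log files four at a time
}}}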
! pdftotext
"brew install poppler"
! ._ in dropbox SSD
https://www.dropboxforum.com/t5/Installation-and-desktop-app/Dot-underscore-files-appeared-after-moving-Dropbox-location-on/td-p/107034/page/2
! ms word blank images
http://www.worldstart.com/seeing-blank-boxes-instead-of-pasted-pictures-in-ms-word/ "show picture placeholders" settings
! dropbox conflicted copy
https://www.dropbox.com/en/help/36
https://www.dropbox.com/en/help/7674
https://ttboj.wordpress.com/2014/09/30/fixing-dropbox-conflicted-copy-problems/
https://gist.github.com/purpleidea/0ed86f735807759d455c
https://www.dropboxforum.com/t5/Installation-and-desktop-app/How-do-you-avoid-to-create-conflicted-copies-when-you-make-some/td-p/45234
https://www.engadget.com/2013/02/20/finding-dropbox-conflicted-copy-files-automatically/
! migrate dropbox to new drive from 1TB to 2TB
<<<
* here the vm drive is 1TB
* format new ssd drive named it as vm2, encrypted and password protected (GUID)
* use carbon copy cloner to clone 1TB to 2TB
* rename vm2 to vm
* also rename the old drive
all paths stay the same!
<<<
! How Dropbox handles downgrades
https://news.ycombinator.com/item?id=16445751
! macbook recovery
http://www.toptenreviews.com/software/backup-recovery/best-mac-hard-drive-recovery/
https://www.youtube.com/watch?v=PKUlMHkCXUk
https://www.prosofteng.com/data-rescue-recovery-software/
https://discussions.apple.com/thread/2653403?start=0&tstart=0
http://www.tomshardware.com/answers/id-2625068/accidentally-formatted-full-1tb-hard-drive-disk-manager-blank-chance-recovering-data.html
! closed lid no sleep
http://superuser.com/questions/38840/is-there-a-way-to-close-the-lid-on-a-macbook-without-putting-it-sleep
http://lifehacker.com/5934158/nosleep-for-mac-prevents-your-macbook-from-sleeping-when-you-close-the-lid
! disk encryption
http://www.imore.com/encrypted-disk-images-dropbox-protect-sensitive-files
http://lifehacker.com/5794486/how-to-add-a-second-layer-of-encryption-to-dropbox
http://apple.stackexchange.com/questions/42257/how-can-i-mount-an-encrypted-disk-from-the-command-line
{{{
diskutil cs list
diskutil list
diskutil cs unlockVolume 110B820A-52A5-4E3D-AB1C-FCF0263DC5A6
diskutil mount /dev/disk3
diskutil list | grep "vm" | awk '{print $NF}' | sed -e 's/s[0-9].*$//'
diskutil eject disk4
}}}
! disk unlock encrypted
External Hard Disc won't mount - Partition Map error. Can't repair with Disk Utility https://discussions.apple.com/thread/6775802?start=0&tstart=0
https://derflounder.wordpress.com/2011/11/23/using-the-command-line-to-unlock-or-decrypt-your-filevault-2-encrypted-boot-drive/
! Spinning Down Unmounted Disks in OS X
https://anders.com/cms/405/Mac/OS.X/Hard.Disk/Spindown
External Hard Drive - Does it Need To Spin Down? Is "Safely Remove" Enough https://ubuntuforums.org/showthread.php?t=2235050
https://www.reddit.com/r/techsupport/comments/2s6xnu/external_hd_not_spinning_down_after_unmounting/
! ulimit - too many open files
http://superuser.com/questions/433746/is-there-a-fix-for-the-too-many-open-files-in-system-error-on-os-x-10-7-1
{{{
seems like there is an entirely different method for changing the open files limit for each version of OS X!
For OS X Sierra (10.12.X) you need to:
1. In Library/LaunchDaemons create a file named limit.maxfiles.plist and paste the following in (feel free to change the two numbers (which are the soft and hard limits, respectively):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>limit.maxfiles</string>
<key>ProgramArguments</key>
<array>
<string>launchctl</string>
<string>limit</string>
<string>maxfiles</string>
<string>64000</string>
<string>524288</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>ServiceIPC</key>
<false/>
</dict>
</plist>
2. Change the owner of your new file:
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
3. Load these new settings:
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
4. Finally, check that the limits are correct:
launchctl limit maxfiles
}}}
! macos sierra bugs
!! Not Allowing Identified Developer App Downloads
https://support.apple.com/kb/PH25088?locale=en_US
Mac OS Sierra: Install apps from unidentified developers (anywhere) https://www.youtube.com/watch?v=AA9jQFxn9Pw
https://www.tekrevue.com/tip/gatekeeper-macos-sierra/
{{{
sudo spctl --master-disable
}}}
!! OS X kernel asynchronous I/O limits
http://www.firewing1.com/blog/2013/08/02/setting-os-x-kernel-asynchronous-io-limits-avoid-virtualbox-crashing-os-x
{{{
sudo sysctl -w kern.aiomax=10240 kern.aioprocmax=10240 kern.aiothreads=16
}}}
!! Remove Symantec software for Mac OS using RemoveSymantecMacFiles
https://discussions.apple.com/message/31506414#31506414 MacBookPro13,3 2016 w/ touchbar frequent crash (Sierra 10.12.3)
https://support.symantec.com/en_US/article.TECH103489.html ftp://ftp.symantec.com/misc/tools/mactools/RemoveSymantecMacFiles.zip
Mac Malware Guide : How does Mac OS X protect me? http://www.thesafemac.com/mmg-builtin/
!! read mac crash reports
https://www.cnet.com/news/tutorial-an-introduction-to-reading-mac-os-x-crash-reports/
!! mds_Stores some times used 100% of CPU
https://discussions.apple.com/thread/5779822?tstart=0
! how to run a script in mac on startup
http://stackoverflow.com/questions/6442364/running-script-upon-login-mac <- use automator
http://stackoverflow.com/questions/6442364/running-script-upon-login-mac/13372744#13372744
http://www.developernotes.com/archive/2011/04/06/169.aspx
! uninstall spotify
https://community.spotify.com/t5/Desktop-Linux-Windows-Web-Player/Spotify-constantly-crashes-on-my-MAC/td-p/15283
! disable internal keyboard
https://pqrs.org/osx/karabiner/
http://www.mackungfu.org/cat-proofing-a-macbook-keyboard
! macbook double key press fix
https://github.com/aahung/Unshaky
https://unshaky.nestederror.com/?ref=producthunt
https://www.wsj.com/graphics/apple-still-hasnt-fixed-its-macbook-keyboard-problem/?ns=prod/accounts-wsj
https://www.reddit.com/r/macbook/comments/9n8hgi/my_experience_with_macbook_pro_2018_keyboard/
! ms paint in macbook
https://paintbrush.sourceforge.io/downloads/
! mount from read only to RW , external drive
{{{
sudo mount -u -o rw,noowners /Volumes/vm
}}}
https://apple.stackexchange.com/questions/92979/how-to-remount-an-internal-drive-as-read-write-in-mountain-lion
! zsh as the default shell on your Mac
https://support.apple.com/en-us/HT208050
{{{
To silence this warning, you can add this command to ~/.bash_profile or ~/.profile:
export BASH_SILENCE_DEPRECATION_WARNING=1
}}}
! end
! Mainframe (MIPS) to Sparc sizing
see discussions here https://www.evernote.com/l/ADCaWiqj_VxB9anL0h3PAbDiv8fzv4G48pU
! stromasys
https://stromasys.atlassian.net/wiki/spaces/KBP/pages/17039488/CHARON+Linux+server+-+Connection+to+guest+console+blocked+by+firewall
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:16370675423662:
<<<
- We have a third party application we use for selecting specific
values from several tables. The character values stored in a field could be in any of the following formats:
"Bell", "bell" , "BELL" , "beLL" etc..
1) Is there a way to force (change) values (either to UPPER or Lower case) during a DML other than using triggers?
if not,
2) Is there a way to force a select query (other than using functions UPPER or LOWER) to return all the values regardless of the case that a user enters?
<<<
! nls_comp and nls_sort at the system level or logon trigger
{{{
1) no
2) in 10g, yes.
ops$tkyte@ORA10G> create table t ( data varchar2(20) );
Table created.
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> insert into t values ( 'Hello' );
1 row created.
ops$tkyte@ORA10G> insert into t values ( 'HeLlO' );
1 row created.
ops$tkyte@ORA10G> insert into t values ( 'HELLO' );
1 row created.
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> create index t_idx on
2 t( nlssort( data, 'NLS_SORT=BINARY_CI' ) );
Index created.
ops$tkyte@ORA10G> pause
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> variable x varchar2(25)
ops$tkyte@ORA10G> exec :x := 'hello';
PL/SQL procedure successfully completed.
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> select * from t where data = :x;
no rows selected
ops$tkyte@ORA10G> pause
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> alter session set nls_comp=ansi;
Session altered.
ops$tkyte@ORA10G> alter session set nls_sort=binary_ci;
Session altered.
ops$tkyte@ORA10G> select * from t where data = :x;
DATA
--------------------
Hello
HeLlO
HELLO
ops$tkyte@ORA10G> pause
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> set autotrace on
ops$tkyte@ORA10G> select /*+ first_rows */ * from t where data = :x;
DATA
--------------------
Hello
HeLlO
HELLO
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=2 Card=1 Bytes=12)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (TABLE) (Cost=2 Card=1 Bytes=12)
2 1 INDEX (RANGE SCAN) OF 'T_IDX' (INDEX) (Cost=1 Card=1)
}}}
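The demo above sets nls_comp/nls_sort per session; the title of this note also mentions doing it at the system level or via a logon trigger. A minimal sketch of the logon-trigger variant, reusing the same values as the demo (an assumption — it affects every new session, so test first; creating it also needs the ADMINISTER DATABASE TRIGGER privilege):
{{{
-- sketch: apply the case-insensitive settings to every new session
create or replace trigger nls_ci_logon
after logon on database
begin
  execute immediate 'alter session set nls_comp=ansi';
  execute immediate 'alter session set nls_sort=binary_ci';
end;
/
}}}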
''Management Repository Views'' http://docs.oracle.com/cd/B16240_01/doc/em.102/b40007/views.htm#BACCEIBI
http://nyoug.org/Presentations/2011/March/Iotzov_OEM_Repository.pdf
{{{
MGMT$METRIC_CURRENT
MGMT$METRIC_DETAILS
MGMT$METRIC_HOURLY
MGMT$METRIC_DAILY
-- Reference Data
MGMT$TARGET_TYPE
MGMT$GROUP_DERIVED_MEMBERSHIPS
}}}
http://www.oracledbasupport.co.uk/querying-grid-repository-tables/
{{{
Modify these retention policies by updating the mgmt_parameters table in the OMR.
Table Name           Retention Parameter      Retention Days
MGMT_METRICS_RAW     mgmt_raw_keep_window     7
MGMT_METRICS_1HOUR   mgmt_hour_keep_window    31
MGMT_METRICS_1DAY    mgmt_day_keep_window     365
}}}
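A minimal sketch of what such an update might look like, assuming the table lives in the SYSMAN schema and the value is stored as a string in parameter_value (verify the column names against your OMR version before running anything):
{{{
-- sketch: extend raw metric retention from 7 to 14 days in the OMR
update sysman.mgmt_parameters
   set parameter_value = '14'
 where parameter_name = 'mgmt_raw_keep_window';
commit;
}}}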
10gR2 MGMT$METRIC_DAILY http://docs.oracle.com/cd/B19306_01/em.102/b16246/views.htm#sthref444
12cR2 MGMT$METRIC_DAILY http://docs.oracle.com/cd/E24628_01/doc.121/e25161/views.htm#sthref1824
12cR2 GC$METRIC_VALUES_DAILY http://docs.oracle.com/cd/E24628_01/doc.121/e25161/views.htm#BABFCJBD <-- new in 12c
gc_metric_values_daily/hourly are capacity planning gold mine views in #em12c #upallnightinvestigatingem12c http://goo.gl/YgMox <-- coolstuff!
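Since MGMT$METRIC_DAILY is called out above as a capacity planning gold mine, here's a sketch of a daily CPU trend query mirroring the hourly one further down; the target_name and the Load/cpuUtil metric are just the examples used elsewhere in these notes, and the column names assume the daily view matches the hourly one:
{{{
select to_char(rollup_timestamp,'MM/DD/YY') day,
       target_name, sample_count, average, maximum
from mgmt$metric_daily
where metric_label = 'Load'
and metric_column = 'cpuUtil'
and target_name = 'desktopserver.local'
order by rollup_timestamp;
}}}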
! purging before 13c
http://learnwithmedba.blogspot.com/2013/01/increasemodifypurging-retention-for.html
! purging 13c
https://support.oracle.com/knowledge/Enterprise%20Management/2251910_1.html
https://gokhanatil.com/2016/08/how-to-modify-the-retention-time-for-metric-data-in-em13c.html
! OEM sampling
{{{
The collection is every 15 minutes, I made use of the following:
$ cat metric_current.sql
col tm format a20
col target_name format a20
col metric_label format a20
col metric_column format a20
col value format a20
select TO_CHAR(collection_timestamp,'MM/DD/YY HH24:MI:SS') tm, target_name, metric_label, metric_column, value
from mgmt$metric_current
where metric_label = 'Load'
and metric_column = 'cpuUtil'
and target_name = 'desktopserver.local';
oracle@desktopserver.local:/home/oracle/dba/karao/scripts:emrep
$ cat metric_currentloop
#!/bin/bash
while :; do
sqlplus "/ as sysdba" <<! &
spool metric_current.txt append
set lines 300
@metric_current.sql
exit
!
sleep 10
echo
done
while : ; do cat metric_current.txt | awk '{print $2}' | sort | uniq ; echo "---"; sleep 2; done
--------------------
15:11:59
15:26:59
15:41:59
15:56:59
}}}
{{{
col tm format a20
col target_name format a20
col metric_label format a20
col metric_column format a20
col value format a20
select TO_CHAR(collection_timestamp,'MM/DD/YY HH24:MI:SS') tm, target_name, metric_label, metric_column, value
from mgmt$metric_current
where metric_label = 'Load'
and metric_column = 'cpuUtil'
and target_name = 'desktopserver.local';
select * from (
select TO_CHAR(rollup_timestamp,'MM/DD/YY HH24:MI:SS') tm, target_name, metric_label, metric_column, sample_count, average
from mgmt$metric_hourly
where metric_label = 'Load'
and metric_column = 'cpuUtil'
and target_name = 'desktopserver.local'
order by tm desc
)
where rownum < 11;
}}}
! other references
http://www.slideshare.net/MaazAnjum/maaz-anjum-ioug-em12c-capacity-planning-with-oem-metrics
http://www.rmoug.org/wp-content/uploads/News-Letters/fall13web.pdf
http://www.oracle.com/webfolder/technetwork/tutorials/obe/em/em12c/metric_extensions/Metric_Extensions.html
http://docs.oracle.com/cd/E24628_01/doc.121/e24473/metric_extension.htm#EMADM10033
http://www.oracledbasupport.co.uk/querying-grid-repository-tables/
http://blog.dbi-services.com/query-the-enterprise-manager-collected-metrics/
http://www.nyoug.org/Presentations/2011/March/Iotzov_OEM_Repository.pdf
http://www.slideshare.net/Datavail/optimizing-alert-monitoring-with-oracle-enterprise-manager?next_slideshow=1
! manic time on mac
http://www.manictime.com/Support/Help/v15/2/how-do-i-transfer-data-to-another-computer
!! install the server
https://www.manictime.com/Teams/How-To-Install-linux-mac
http://localhost:8080/#/personal/day-view?dayDate=8-21-2018&groupBy=2&userIds=0
!! then install the mac client
https://www.manictime.com/mac/download
https://blogs.oracle.com/optimizer/entry/how_does_sql_plan_management
<<<
A signature is a unique SQL identifier generated from the normalized SQL text (uncased and with whitespaces removed). This is the same technique used by SQL profiles and SQL patches. This means, if you issue identical SQL statements from two different schemas they would resolve to the same SQL plan baseline.
<<<
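You can observe that normalization directly with DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE; a quick sketch (the literal SQL texts here are just illustrative):
{{{
-- the two texts differ only in case and whitespace, so the signatures match
select dbms_sqltune.sqltext_to_signature('select * from emp') s1,
       dbms_sqltune.sqltext_to_signature('SELECT   *   FROM   EMP') s2
from dual;
}}}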
{{{
http://goo.gl/qxTw0
http://www.techimo.com/forum/technical-support/220605-new-hard-drive-not-detected-bios.html
http://www.sevenforums.com/hardware-devices/140312-hdd-sata3-working-win7-but-not-detected-bios.html
--crappy SIIG controller
http://goo.gl/3CwQc
http://www.newegg.com/Product/Product.aspx?Item=N82E16816150028
--sata3 backwards compatible
http://www.techpowerup.com/forums/showthread.php?t=125631
http://www.tomshardware.com/forum/262341-32-sata-hard-disk-pluged-sata-port
http://en.wikipedia.org/wiki/Serial_ATA#Backward_and_forward_compatibility
-- find disk on SATA slot, find disk SATA speed
http://serverfault.com/questions/194506/find-out-if-disk-is-ide-or-sata
http://hardforum.com/showthread.php?t=1619242
http://www.cyberciti.biz/tips/how-fast-is-linux-sata-hard-disk.html
http://forums.gentoo.org/viewtopic-t-883181-start-0.html <-- GOOD STUFF
http://ubuntuforums.org/archive/index.php/t-1635904.html
http://www.spinics.net/lists/raid/msg32885.html
http://www.linux-archive.org/centos/316405-how-map-ata-numbers-dev-sd-numbers.html
http://www.issociate.de/board/post/507665/Possible_HDD_error,_how_do_I_find_which_HDD_it_is?.html
http://serverfault.com/questions/5336/how-do-i-make-linux-recognize-a-new-sata-dev-sda-drive-i-hot-swapped-in-without
http://forums.fedoraforum.org/showthread.php?t=230618 <-- GOOD STUFF
-- WRITE FPDMA QUEUED
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/550559
http://us.generation-nt.com/answer/problem-reproduceable-storage-errors-high-io-load-help-203628822.html?page=2
http://www.linuxonlinehelp.de/?tag=write-fpdma-queued
http://ubuntuforums.org/archive/index.php/t-903198.html
http://web.archiveorange.com/archive/v/PqQ0RyEKwX1raCQooTfa
http://forums.gentoo.org/viewtopic-p-4104334.html
http://lime-technology.com/wiki/index.php?title=The_Analysis_of_Drive_Issues
http://forums.fedoraforum.org/archive/index.php/t-220438.html <-- MAKES SENSE.. ERRORS ON NCQ
http://www.linuxquestions.org/questions/linux-hardware-18/sata-link-down-on-non-existing-sata-channel-694937/
http://ubuntuforums.org/archive/index.php/t-1037819.html
http://www.spinics.net/lists/linux-ide/msg41261.html
https://forums.openfiler.com/viewtopic.php?id=3551
}}}
http://www.java2s.com/Tutorial/Oracle/0160__View/creatematerializedviewempdeptbuildimmediaterefreshondemandenablequeryrewrite.htm
https://gerardnico.com/db/oracle/methodology_for_designing_and_building_the_materialized_views
https://gerardnico.com/db/oracle/materialized_view
https://gerardnico.com/db/oracle/pre_compute_operations
https://gerardnico.com/dit/owb/materialized_view
https://gerardnico.com/db/oracle/partition/materialized_view
http://kimballgroup.forumotion.net/t127-appropriate-use-of-materialized-views
MATERIALIZED VIEW REFRESH: Locking, Performance, Monitoring
Doc ID: Note:258252.1
-- REPLICATION
Troubleshooting Guide: Replication Propagation
Doc ID: Note:1035874.6
-- DIAGNOSIS
ORA-00917 While Using using DBMS_MVIEW.EXPLAIN_REWRITE
Doc ID: 471056.1
ORA-12899 When Executing DBMS_MVIEW.EXPLAIN_REWRITE
Doc ID: 469448.1
Snapshot Refresh Fails with ORA-2055 and ORA-7445
Doc ID: 141086.1
Privileges To Refresh A Snapshot Or Materialized View
Doc ID: 1027174.6
Materialized View Refresh Fails With ORA-942: table or view does not exist
Doc ID: 236652.1
How To Use DBMS_MVIEW.EXPLAIN_REWRITE and EXPLAIN_MVIEW To Diagnose Query Rewrite Problems
Doc ID: 149815.1
-- BLOG
http://avdeo.com/2012/10/14/materialized-views-concepts-discussion-series-1/
http://avdeo.com/2012/10/16/materialized-views-concepts-discussion-series-2/
http://avdeo.com/2012/10/24/materialized-view-concepts-discussion-series-3/
• Linear Algebra = https://www.khanacademy.org/math/linear-algebra
• Differential Calculus = https://www.khanacademy.org/math/differential-calculus
• Integral Calculus = https://www.khanacademy.org/math/integral-calculus
• Probability and Statistics = https://www.khanacademy.org/math/probability
! math notation
https://www.adelaide.edu.au/mathslearning/seminars/MathsNotation2013.pdf
https://www.adelaide.edu.au/mathslearning/seminars/mathsnotation.html
https://www.mathsisfun.com/sets/symbols.html
http://abstractmath.org/MM/MMTOC.htm
http://www.abstractmath.org/Word%20Press/?p=9471
https://www.safaribooksonline.com/library/view/introduction-to-abstract/9781118311738/
https://www.safaribooksonline.com/library/view/introduction-to-abstract/9781118347898/
https://www.safaribooksonline.com/library/view/technical-math-for/9780470598740/
! discrete math
https://www.lynda.com/Programming-Foundations-tutorials/Basics-discrete-mathematics/411376/475394-4.html
! udemy
''you only need to replace a physical disk on Exadata if it's a predictive failure''
''References:''
LSI KnowledgeBase http://kb.lsi.com/KnowledgebaseArticle16516.aspx
http://www.fatmin.com/2011/10/lsi-megacli-check-for-failed-raid-controller-battery.html
http://windowsmasher.wordpress.com/2011/08/13/using-megacli-to-monitor-openfiler/
http://timjacobs.blogspot.com/2008/05/installing-lsi-logic-raid-monitoring.html
http://lists.us.dell.com/pipermail/linux-poweredge/2010-December/043835.html
http://www.watters.ws/mediawiki/index.php/RAID_controller_commands
http://artipc10.vub.ac.be/wordpress/2011/09/12/megacli-useful-commands/
http://thornelaboratories.net/documentation/2013/02/01/megacli64-command-usage-cheat-sheet.html
https://wiki.xkyle.com/MegaCLI
{{{
check the virtual drive info
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -a0
[root@enkcel04 ~]# dcli -l root -g cell_group /opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL | egrep "Degraded|Failed Disks"
enkcel04: Degraded : 0
enkcel04: Failed Disks : 0
enkcel05: Degraded : 0
enkcel05: Failed Disks : 0
enkcel06: Degraded : 0
enkcel06: Failed Disks : 0
enkcel07: Degraded : 0
enkcel07: Failed Disks : 0
[root@enkcel04 ~]# dcli -l root -g cell_group /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -LALL -aALL | egrep "Virtual|State"
enkcel04: Adapter 0 -- Virtual Drive Information:
enkcel04: Virtual Drive: 0 (Target Id: 0)
enkcel04: State : Optimal
enkcel04: Virtual Drive: 1 (Target Id: 1)
enkcel04: State : Optimal
enkcel04: Virtual Drive: 2 (Target Id: 2)
enkcel04: State : Optimal
enkcel04: Virtual Drive: 3 (Target Id: 3)
enkcel04: State : Optimal
enkcel04: Virtual Drive: 4 (Target Id: 4)
enkcel04: State : Optimal
enkcel04: Virtual Drive: 5 (Target Id: 5)
enkcel04: State : Optimal
enkcel04: Virtual Drive: 6 (Target Id: 6)
enkcel04: State : Optimal
enkcel04: Virtual Drive: 7 (Target Id: 7)
enkcel04: State : Optimal
enkcel04: Virtual Drive: 8 (Target Id: 8)
enkcel04: State : Optimal
enkcel04: Virtual Drive: 9 (Target Id: 9)
enkcel04: State : Optimal
enkcel04: Virtual Drive: 10 (Target Id: 10)
enkcel04: State : Optimal
enkcel04: Virtual Drive: 11 (Target Id: 11)
enkcel04: State : Optimal
enkcel05: Adapter 0 -- Virtual Drive Information:
enkcel05: Virtual Drive: 0 (Target Id: 0)
enkcel05: State : Optimal
enkcel05: Virtual Drive: 1 (Target Id: 1)
enkcel05: State : Optimal
enkcel05: Virtual Drive: 2 (Target Id: 2)
enkcel05: State : Optimal
enkcel05: Virtual Drive: 3 (Target Id: 3)
enkcel05: State : Optimal
enkcel05: Virtual Drive: 4 (Target Id: 4)
enkcel05: State : Optimal
enkcel05: Virtual Drive: 5 (Target Id: 5)
enkcel05: State : Optimal
enkcel05: Virtual Drive: 6 (Target Id: 6)
enkcel05: State : Optimal
enkcel05: Virtual Drive: 7 (Target Id: 7)
enkcel05: State : Optimal
enkcel05: Virtual Drive: 8 (Target Id: 8)
enkcel05: State : Optimal
enkcel05: Virtual Drive: 9 (Target Id: 9)
enkcel05: State : Optimal
enkcel05: Virtual Drive: 10 (Target Id: 10)
enkcel05: State : Optimal
enkcel05: Virtual Drive: 11 (Target Id: 11)
enkcel05: State : Optimal
enkcel06: Adapter 0 -- Virtual Drive Information:
enkcel06: Virtual Drive: 0 (Target Id: 0)
enkcel06: State : Optimal
enkcel06: Virtual Drive: 1 (Target Id: 1)
enkcel06: State : Optimal
enkcel06: Virtual Drive: 2 (Target Id: 2)
enkcel06: State : Optimal
enkcel06: Virtual Drive: 3 (Target Id: 3)
enkcel06: State : Optimal
enkcel06: Virtual Drive: 4 (Target Id: 4)
enkcel06: State : Optimal
enkcel06: Virtual Drive: 5 (Target Id: 5)
enkcel06: State : Optimal
enkcel06: Virtual Drive: 6 (Target Id: 6)
enkcel06: State : Optimal
enkcel06: Virtual Drive: 7 (Target Id: 7)
enkcel06: State : Optimal
enkcel06: Virtual Drive: 8 (Target Id: 8)
enkcel06: State : Optimal
enkcel06: Virtual Drive: 9 (Target Id: 9)
enkcel06: State : Optimal
enkcel06: Virtual Drive: 10 (Target Id: 10)
enkcel06: State : Optimal
enkcel06: Virtual Drive: 11 (Target Id: 11)
enkcel06: State : Optimal
enkcel07: Adapter 0 -- Virtual Drive Information:
enkcel07: Virtual Drive: 0 (Target Id: 0)
enkcel07: State : Optimal
enkcel07: Virtual Drive: 1 (Target Id: 1)
enkcel07: State : Optimal
enkcel07: Virtual Drive: 2 (Target Id: 2)
enkcel07: State : Optimal
enkcel07: Virtual Drive: 3 (Target Id: 3)
enkcel07: State : Optimal
enkcel07: Virtual Drive: 4 (Target Id: 4)
enkcel07: State : Optimal
enkcel07: Virtual Drive: 5 (Target Id: 5)
enkcel07: State : Optimal
enkcel07: Virtual Drive: 6 (Target Id: 6)
enkcel07: State : Optimal
enkcel07: Virtual Drive: 7 (Target Id: 7)
enkcel07: State : Optimal
enkcel07: Virtual Drive: 8 (Target Id: 8)
enkcel07: State : Optimal
enkcel07: Virtual Drive: 9 (Target Id: 9)
enkcel07: State : Optimal
enkcel07: Virtual Drive: 10 (Target Id: 10)
enkcel07: State : Optimal
enkcel07: Virtual Drive: 11 (Target Id: 11)
enkcel07: State : Optimal
[root@enkcel04 ~]# dcli -l root -g cell_group /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | egrep "Slot Number|Device Id|Count"
enkcel04: Slot Number: 0
enkcel04: Device Id: 19
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 1
enkcel04: Device Id: 18
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 2
enkcel04: Device Id: 17
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 3
enkcel04: Device Id: 16
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 4
enkcel04: Device Id: 15
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 5
enkcel04: Device Id: 14
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 6
enkcel04: Device Id: 13
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 7
enkcel04: Device Id: 12
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 8
enkcel04: Device Id: 11
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 9
enkcel04: Device Id: 10
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 10
enkcel04: Device Id: 9
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 11
enkcel04: Device Id: 8
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel05: Slot Number: 0
enkcel05: Device Id: 19
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 1
enkcel05: Device Id: 18
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 2
enkcel05: Device Id: 17
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 3
enkcel05: Device Id: 16
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 4
enkcel05: Device Id: 15
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 5
enkcel05: Device Id: 22
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 6
enkcel05: Device Id: 13
enkcel05: Media Error Count: 146
enkcel05: Other Error Count: 1
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 7
enkcel05: Device Id: 12
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 8
enkcel05: Device Id: 11
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 9
enkcel05: Device Id: 10
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 10
enkcel05: Device Id: 9
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 11
enkcel05: Device Id: 8
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel06: Slot Number: 0
enkcel06: Device Id: 19
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 1
enkcel06: Device Id: 18
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 2
enkcel06: Device Id: 17
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 1
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 3
enkcel06: Device Id: 16
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 4
enkcel06: Device Id: 15
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 5
enkcel06: Device Id: 14
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 6
enkcel06: Device Id: 13
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 7
enkcel06: Device Id: 12
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 8
enkcel06: Device Id: 11
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 9
enkcel06: Device Id: 10
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 10
enkcel06: Device Id: 9
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 11
enkcel06: Device Id: 8
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel07: Slot Number: 0
enkcel07: Device Id: 34
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 1
enkcel07: Device Id: 33
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 2
enkcel07: Device Id: 32
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 1
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 3
enkcel07: Device Id: 31
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 4
enkcel07: Device Id: 30
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 5
enkcel07: Device Id: 29
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 6
enkcel07: Device Id: 28
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 7
enkcel07: Device Id: 27
enkcel07: Media Error Count: 1
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 8
enkcel07: Device Id: 26
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 9
enkcel07: Device Id: 25
enkcel07: Media Error Count: 14
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 10
enkcel07: Device Id: 24
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 11
enkcel07: Device Id: 23
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
}}}
''The memory experts''
http://www.crucial.com/index.aspx
''You Probably Don't Need More DIMMs'' http://h30507.www3.hp.com/t5/Eye-on-Blades-Blog-Trends-in/You-Probably-Don-t-Need-More-DIMMs/ba-p/81647#.UvvgJUJdWig
http://img339.imageshack.us/i/hynix2gb.jpg/sr=1
Here are the details you need to know when buying/upgrading memory for your machine
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZwsfhjwZNI/AAAAAAAABLo/oHZs5EC7HRI/physicalmemory.png]]
SDRAM vs DIMM
http://forums.techguy.org/hardware/161660-sdram-vs-dimm.html
1333 just runs 1066?
http://forum.notebookreview.com/dell-latitude-vostro-precision/475324-e6410-owners-thread-149.html
http://forums.anandtech.com/showthread.php?t=2141483
http://www.computerhope.com/issues/ch001376.htm#1
http://www.liberidu.com/blog/?p=2343&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+Bloggralikecom+(blog.gralike.com)
https://blogs.oracle.com/UPGRADE/
Migration Solutions Directory
http://www.oracle.com/technology/tech/migration/mti/index.html
http://www.oracle.com/technology/tech/migration/maps/index.html
Migration Workbench
Application Migration Assistant
Oracle Database Migration Verifier
* Migration Technology Center on OTN is the main entry point on OTN for all your migration requirements.
http://www.oracle.com/technology/tech/migration/index.html
* Migration Solutions Directory on OTN provides a quick and easy way to search for the migration solutions that are best suited to your particular migration.
http://www.oracle.com/technology/tech/migration/mti/index.html
* Migration Maps provide a set of step-by-step instructions to guide you through the recommended process for the migration of an existing third-party database to Oracle.
http://www.oracle.com/technology/tech/migration/maps/index.html
* Oracle Migration Knowledge Base offers a collection of technical articles to help you resolve any migration issue.
http://www.oracle.com/technology/tech/migration/kb/index.html
* Discussion Forums on OTN, monitored by developers
o Oracle Migration Workbench Forum
http://forums.oracle.com/forums/forum.jspa?forumID=1
o Application Migration Assistant Forum
http://forums.oracle.com/forums/forum.jspa?forumID=182
* Relational Migration Center of Excellence Introduction
http://www.oracle.com/technology/tech/migration/isv/mig_services.html
Migration Services
------------------
The PTS group has successfully completed over 1,000 partner migrations from Sybase, Informix, Microsoft, IBM DB2, Mumps, and other databases to Oracle-based solutions. PTS has also successfully completed over 300 partner migrations from BEA Weblogic, IBM Websphere, JBoss, and Sun iPlanet to Oracle Application Server 10g. PTS migration engagements typically last from a few days to a couple of weeks depending on the size, type, and complexity of the project. Technical resources such as on-site support, phone support, and e-mail support are all available.
When you have your solution running on Oracle, PTS also provides hands-on architecture and design reviews, database and middle-tier benchmarks, performance and tuning, Java/J2EE coding, RAC validations, proofs of concept, project planning and monitoring, product deployment, and new Oracle product release implementation.
---------------------------------------------------------------------------
Customer Information
Migration Reasons and Goals
Database Information
Operational Procedures and Requirements
Other Special Logic/Subsystems to Migrate
Documentation
----
Migration Lifecycle:
1) Evaluation
2) Assessment
3) Migration
4) Testing
5) Optimization
6) Customer Acceptance
7) Production
8) Project Support
Information and Guide for Finding Migration Information
Doc ID: Note:468083.1
Consolidated Reference List For Migration / Upgrade Service Requests
Doc ID: 762540.1
-- UPGRADE PLANNER
What is the Upgrade Planner and how do I use it? [ID 1277424.1]
http://supportweb.siebel.com/crmondemand/videos/Customer_Support/UITraining/MOS2010/upgradeplanner_introduction/upgradeplanner_introduction.htm
http://supportweb.siebel.com/crmondemand/videos/Customer_Support/UITraining/MOS2010/upgradeplanner_advanced/upgradeplanner_advanced.htm
-- UPGRADE ADVISOR
Upgrade Advisor: Database [ID 251.1]
Oracle Support Upgrade Advisors [ID 250.1]
Upgrade Advisor: OracleAS 10g Forms/Reports Services to FMW 11g [ID 252.1]
Oracle Support Lifecycle Advisors [ID 250.1] <-- ''new!''
-- UPGRADE COMPANION
Oracle 11gR2 Upgrade Companion
Doc ID: 785351.1
10g Upgrade Companion
Doc ID: Note:466181.1
Oracle 11g Upgrade Companion
Doc ID: Note:601807.1
Compatibility Matrix for Export And Import Between Different Oracle Versions
Doc ID: 132904.1
-- UPGRADE METHODS
Different Upgrade Methods For Upgrading Your Database
Doc ID: Note:419550.1
How to Perform a Full Database Export Import during Upgrade, Migrate, Copy, or Move of a Database
Doc ID: 286775.1
-- UPGRADE WITH DATA GUARD
Upgrading to 10g with a Physical Standby in Place
Doc ID: Note:278521.1
Upgrading to 10g with a Logical Standby in Place
Doc ID: Note:278108.1
-- 7 to 8/8i
Note 122926.1 What Happens Inside Oracle when Migrating from 7 to 8/8i
-- CATCPU
Do I Need To Run catcpu.sql After Upgrading A Database?
Doc ID: Note:461082.1
-- UTLRP, UTLIRP, UTLIP
Difference between UTLRP.SQL - UTLIRP.SQL - UTLIP.SQL?
Doc ID: Note:272322.1
-- CONVERT SE TO EE
How to Convert Database from Standard to Enterprise Edition ?
Doc ID: Note:117048.1
How to convert a RAC database from Standard Edition (SE) to Enterprise Edition (EE)?
Doc ID: Note:451981.1
-- CONVERT EE TO SE
Converting from Enterprise Edition to Standard Edition
Doc ID: Note:139642.1
-- ISSUES on converting from SE to EE
Unable To Recompile Invalid Objects with UTLRP Script After Upgrading From 9i To 10g
Doc ID: Note:465050.1
ORA-07445 [zllcini] or ORA-04045 in a Database with OLS Set to FALSE
Doc ID: Note:233110.1
Queries Against Tables Protected by OLS Are Erroring Out
Doc ID: Note:577569.1
While compiling, Ora-04063: Package Body 'Lbacsys.Lbac_events' Has Errors
Doc ID: Note:359649.1
-- UPGRADE CHECKLIST
Complete Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9iR2 (9.2.0)
Doc ID: Note:159657.1
Upgrading Directly to a 9.2.0 Patch Set
Doc ID: Note:214887.1
-- MIGRATION FROM 32 to 64
Memory Requirements of Databases Migrated from 32-bit to 64-bit
Doc ID: 209766.1
How to convert a 32-bit database to 64-bit database on Linux?
Doc ID: Note:341880.1
Failure to Create new Control File Migrating From 32-Bit 10gr1 To 64-Bit 10gr2
Doc ID: Note:458401.1
How I Solved a Problem During a Migration of 32 bit to 64 bit on 10.2.0.2
Doc ID: Note:452416.1
How To Change The Platform From Linux X86 To Linux IA64 Itanium (RH or Suse)
Doc ID: Note:316358.1
How to Migrate Oracle 10.2 32bit to 10.2 64bit on Microsoft Windows
Doc ID: Note:403522.1
http://dba.5341.com/msg/66637.html
http://seilerwerks.wordpress.com/2007/03/06/fixing-a-32-to-64-bit-migration-with-utlirpsql/
http://www.oraclealchemist.com/2007/12/
http://www.miraclelinux.com/english/case/index.html
http://pat98.tistory.com/tag/oracle%2064bit
FULL EXPORT FAILS AFTER 32BIT TO 64BIT CONVERSION with ORA-7445
Doc ID: Note:559777.1
Changing between 32-bit and 64-bit Word Sizes
Doc ID: Note:62290.1
How To Verify the Word Size(32bit vs 64bit) of Oracle and UNIX Operating Systems
Doc ID: Note:168604.1
AIX - 32bit vs 64bit
Doc ID: Note:225551.1
Upgrading OLAP from 32 to 64 bits
Doc ID: Note:352306.1
Can you restore RMAN backups taken on 32-bit Oracle with 64-bit Oracle?
Doc ID: Note:430278.1
Got Ora-600 [17069] While Migrating To 64bit From 32bit DB On 64bit Solaris.
Doc ID: Note:434458.1
ORA-25153: Temporary Tablespace is Empty during 32-Bit To 64-Bit 9iR2 on Linux Conversion
Doc ID: Note:602849.1
How To Change Oracle 11g Wordsize from 32-bit to 64-bit.
Doc ID: Note:548978.1
RMAN Restoring A 32 bit Database to 64 bit - An Example
Doc ID: 467676.1
How to Upgrade a Database from 32 Bit Oracle to 64 Bit Oracle
Doc ID: 164997.1
http://gavinsoorma.com/2012/10/performing-a-32-bit-to-64-bit-migration-using-the-transportable-database-rman-feature/
-- DBA_REGISTRY, after wordsize change 10.2.0.4
How to check if Intermedia Audio/Image/Video is Installed Correctly?
Doc ID: 221337.1
Manual upgrade of the 10.2.x JVM fails with ORA-3113 and ORA-7445
Doc ID: 459060.1
Jserver Java Virtual Machine Become Invalid After Catpatch.Sql
Doc ID: 312140.1
How to Reload the JVM in 10.1.0.X and 10.2.0.X
Doc ID: 276554.1
Script to Check the Status of the JVM within the Database
Doc ID: 456949.1
How to Tell if Java Virtual Machine Has Been Installed Correctly
Doc ID: 102717.1
-- CROSS PLATFORM
Answers To FAQ For Restoring Or Duplicating Between Different Versions And Platforms
Doc ID: 369644.1
Migration of Oracle Database Instances Across OS Platforms
Doc ID: 733205.1
How to Use Export and Import when Transferring Data Across Platforms or Across 32-bit and 64-bit Servers
Doc ID: 277650.1
How to Perform a Full Database Export Import during Upgrade, Migrate, Copy, or Move of a Database
Doc ID: 286775.1
-- ITANIUM to X86-64
How To Migrate a Database From Linux Itanium 64-bit To Linux x86-64 (AMD64/EM64T)
Doc ID: Note:550042.1
-- HP-UX PA-RISC to ITANIUM
427712.1 pa-risc to itanium
-- DOWNGRADE
How to Downgrade from Oracle RDBMS 10gR2?
Doc ID: Note:398372.1
Complete Checklist For Downgrading The Database From 11g To Lower Releases
Doc ID: 443890.1
-- PARAMETERS
What is 'STARTUP MIGRATE'?
Doc ID: Note:252273.1
Difference Between Deprecated and Obsolete Parameters
Doc ID: Note:342875.1
-- PATCHING
Clarity On Database Patchset 10.2.0.3.0 Apply, Where The README Has References To Oracle Database Vault Option
Doc ID: Note:405042.1
How to rollback a patchset
Doc ID: Note:334598.1
Restoring a database to a higher patchset
Doc ID: 558408.1
-- MINIMAL DOWNTIME
My Experience in Moving a 1 Terabyte Database Across Platforms With Minimal Downtime
Doc ID: 431096.1
How I Create a Physical Standby Database for a 24/7 Shop
Doc ID: 580004.1
-- FORMS MIGRATE TO 9i/10g
FRM-10256: User is not authorized to run Oracle Forms Menu
Cause:
The Forms menu relies on the FRM50_ENABLED_ROLES view for menu security; this view is owned by SYSTEM. Only the application schemas were imported into the 10.2.0.4 database, so this view was never created.
Solution:
I found Metalink Note 28933.1, labeled “Implementing and Troubleshooting Menu Security in Forms”, which suggested running FRMSEC.SQL to create the view. The script can be found in the D:\$oracle_home$\tools\dbtab\forms directory of a Developer Suite installation on a client desktop. I executed FRMSEC.SQL on the 10.2.0.4 database as the SYSTEM user and granted the FRM50_ENABLED_ROLES view to PUBLIC.
Migrating to Oracle Forms 9i / 10g - Forms Upgrade Center
Doc ID: Note:234540.1
MindMap - Plan Stability http://www.evernote.com/shard/s48/sh/727c84ca-a25e-4ffa-89f9-4d1e96c471c4/dcad83781f8a07f8983e26fbb8c066a3
<<showtoc>>
! my mindmap workflow
> freemind as a default tool for mindmap creation
> if I'm reading from my iPad I mindmap using iThoughtsX
> if I need a better search functionality inside my mindmap notes to find the specific leaf node I would use the iThoughtsX for mac
> if I need to layout my TODOs in a timeline manner for Goal setting, then I would use xmind
! software you need
''FREE online MindMap tool''
http://mind42.com/mindmaps
''iThoughtsX (Mac and IOS - iphone and ipad)'' <- I just like the search feature of this software, that's it
http://toketaware.com/ithoughtsx-faq/
''MindMap offline viewer''
http://freemind.sourceforge.net/wiki/index.php/Download
''xmind''
https://www.xmind.net/ <- for a mindmap + timeline view (gantt chart) + office integration
! freemind
freemind spacing between nodes ''vgap'' http://sourceforge.net/p/freemind/discussion/22102/thread/6d7a8d0b, ''drag'' http://sourceforge.net/p/freemind/discussion/22102/thread/6d7a8d0b, http://www.linuxquestions.org/questions/attachment.php?attachmentid=7061&d=1305983839, http://www.linuxquestions.org/questions/attachment.php?attachmentid=7074&d=1306148002
! My Mind Maps
''This is a bit outdated; I moved all of them to a cloud directory where I can view them across my laptop, iPhone, and iPad. The iThoughtsX + Dropbox integration lets me do the syncing on mobile, and I still use FreeMind as my main software for mind mapping on my laptop.''
https://sites.google.com/site/karlarao/home/mindmap
<<<
!!! Capacity Planning
Mining the AWR repository for Capacity Planning, Visualization, & other real world stuff https://sites.google.com/site/karlarao/mindmap/mining-the-awr-repository
Prov worksheet vs Consolidation Planner https://sites.google.com/site/karlarao/home/mindmap/apx
Prov worksheet https://sites.google.com/site/karlarao/home/mindmap/provworksheet
Capacity Planning paper https://sites.google.com/site/karlarao/home/mindmap/capacity-planning-paper
Threads vs Cores https://sites.google.com/site/karlarao/home/mindmap/cpuvsthread
Exadata Consolidation Success Story https://sites.google.com/site/karlarao/home/mindmap/exadata-consolidation-success-story
OaktableWorld12 https://sites.google.com/site/karlarao/home/mindmap/oaktableworld12
!!! Visualization
AWR Tableau and R visualization examples https://sites.google.com/site/karlarao/home/mindmap/awr-tableau-and-r-toolkit-visualization-examples
!!! Exadata
write-back flash cache https://sites.google.com/site/karlarao/home/mindmap/write-back-flash-cache
!!! Speaking
E412 https://sites.google.com/site/karlarao/home/mindmap/e4_12
IOUG13 https://sites.google.com/site/karlarao/home/mindmap/ioug13
E413 https://sites.google.com/site/karlarao/home/mindmap/e4_13
kscope13-css https://sites.google.com/site/karlarao/home/mindmap/kscope13-css
!!! Code ninja
Python https://sites.google.com/site/karlarao/home/mindmap/python
!!! SQL
Plan Stability http://www.evernote.com/shard/s48/sh/727c84ca-a25e-4ffa-89f9-4d1e96c471c4/dcad83781f8a07f8983e26fbb8c066a3
<<<
! mindmap as timeline view
http://hubaisms.com/tag/timeline/
http://vismap.blogspot.com/2009/02/from-mind-map-to-timeline-in-one-click.html
http://www.matchware.com/en/products/mindview/education/storyboarding.htm
http://www.pcworld.com/article/2029529/review-mindview-5-makes-mind-maps-first-class-citizens-in-the-office-ecosystem.html
http://www.techrepublic.com/blog/tech-decision-maker/brainstorm-project-solutions-with-mindview-mind-mapping-software/
http://www.techrepublic.com/blog/tech-decision-maker/build-milestone-charts-faster-with-mindview-3-business-software/
! Create a Mind Map File from a Directory Structure
https://gist.github.com/karlarao/c14413ba48e84f4de4dac84a297da1f6
https://leftbraintinkering.blogspot.com/2014/09/linux-create-mind-map-freemind-from.html?showComment=1479152008083
{{{
## the XML output is 5 directories deep and filtering any folder name with "tmp" in it
tree -d -L 5 -X -I tmp /Users/karl/Dropbox/CodeNinja/GitHub | sed 's/directory/node/g'| sed 's/name/TEXT/g' | sed 's/tree/map/g' | sed '$d' | sed '$d' | sed '$d'| sed "1d" | sed 's/report/\/map/g' | sed 's/<map>/<map version="1.0.1">/g' > /Users/karl/Dropbox/CodeNinja/GitHub/Gitmap.mm
-- tree output
Karl-MacBook:example karl$ tree -d -L 5 -X
<?xml version="1.0" encoding="UTF-8"?>
<tree>
<directory name=".">
<directory name="root_folder">
<directory name="folder1">
<directory name="subfolder1">
</directory>
</directory>
<directory name="folder2">
</directory>
</directory>
</directory>
<report>
<directories>4</directories>
</report>
</tree>
-- what I'd like it to be
Karl-MacBook:example karl$ cat root_folder.mm
<map version="1.0.1">
<!-- To view this file, download free mind mapping software FreeMind from http://freemind.sourceforge.net -->
<node CREATED="1479134130850" ID="ID_83643410" MODIFIED="1479134208336" TEXT="X">
<node CREATED="1479134193879" ID="ID_281547913" MODIFIED="1479149770801" POSITION="right" TEXT="root_folder">
<node CREATED="1479134139495" ID="ID_1307804355" MODIFIED="1479134141733" TEXT="folder1">
<node CREATED="1479149779339" ID="ID_12170653" MODIFIED="1479149782231" TEXT="subfolder1"/>
</node>
<node CREATED="1479134143525" ID="ID_880690660" MODIFIED="1479134146575" TEXT="folder2"/>
</node>
</node>
</map>
-- XML to mindmap
Karl-MacBook:example karl$ tree -d -L 5 -X | sed 's/directory/node/g'| sed 's/name/TEXT/g' | sed 's/tree/map/g' | sed '$d' | sed '$d' | sed '$d'| sed "1d" | sed 's/report/\/map/g' | sed 's/<map>/<map version="1.0.1">/g'
<map version="1.0.1">
<node TEXT=".">
<node TEXT="root_folder">
<node TEXT="folder1">
<node TEXT="subfolder1">
</node>
</node>
<node TEXT="folder2">
</node>
</node>
</node>
</map>
-- final script
./Gitmap.sh
tree -d -L 5 -X -I tmp /Users/karl/Dropbox/CodeNinja/GitHub | sed 's/directory/node/g'| sed 's/name/TEXT/g' | sed 's/tree/map/g' | sed '$d' | sed '$d' | sed '$d'| sed "1d" | sed 's/report/\/map/g' | sed 's/<map>/<map version="1.0.1">/g' > /Users/karl/Dropbox/CodeNinja/GitHub/Gitmap.mm
}}}
! references
https://en.wikipedia.org/wiki/List_of_concept-_and_mind-mapping_software
http://mashable.com/2013/09/25/mind-mapping-tools/
! end
http://karlarao.wordpress.com/2011/12/06/mining-emgc-notification-alerts/
* nice video on ''configuring rules'' http://www.oracle.com/webfolder/technetwork/tutorials/demos/em/gc/r10205/notifications/notifications_viewlet_swf.html
* tutorial on ''creating notification methods (os or snmp)'' and ''mapping it to notification rules'' http://www.oracle.com/webfolder/technetwork/tutorials/obe/em/emgc10gr2/quick_start/notification/notification.htm
* configure ''preferences'' http://www.oracle.com/webfolder/technetwork/tutorials/obe/em/emgc10gr2/quick_start/preferred_credentials/preferred_credentials.htm
http://arup.blogspot.com/2010/05/mining-listener-logs.html
https://docs.google.com/viewer?url=http://www.proligence.com/MiningListenerLogPart1.pdf&pli=1
https://docs.google.com/viewer?url=http://www.proligence.com/MiningListenerLogPart2.pdf&pli=1
https://docs.google.com/viewer?url=http://www.proligence.com/MiningListenerLogPart3.pdf&pli=1
''Some prereq readables''
* Metrics and DBA_HIST tables https://docs.google.com/viewer?url=http://www.perfvision.com/ftp/emea_2010_may/04_NEW_features.ppt
* http://dioncho.wordpress.com/2009/01/23/misunderstanding-on-top-sqls-of-awr-repository/
* http://www.freelists.org/post/oracle-l/SQLs-run-in-any-period,6
* http://www.freelists.org/post/oracle-l/Missing-SQL-in-DBA-HIST-SQLSTAT
* ''Slide 14 of the OOW presentation S317114 What Else Can I Do with System and Session Performance Data'' http://asktom.oracle.com/pls/apex/z?p_url=ASKTOM%2Edownload_file%3Fp_file%3D3400036420700662395&p_cat=oow_2010.zip&p_company=822925097021874 the presentation says "Remember they are snapshots, not movies.. DBA_HIST_SQLTEXT will not be 100% complete for example - especially if you have a poorly written application"
* http://kerryosborne.oracle-guy.com/2009/04/hidden-sql-why-cant-i-find-my-sql-text/
* ''Slide 45-59'' http://www.slideshare.net/karlarao/unconference-mining-the-awr-repository-for-capacity-planning-visualization-other-real-world-stuff the presentation shows the correlation of SQLs to the workload of the server in a time series manner.. that is, when tuned will have big impact on the workload reduction
* Andy Rivenes www.appsdba.com/papers/Oracle_Workload_Measurement.pdf good read about interval based monitoring
* http://oracledoug.com/serendipity/index.php?/archives/1402-MMON-Sampling-ASH-Data.html
How To Interpret DBA_HIST_SQLSTAT [ID 471053.1]
http://shallahamer-orapub.blogspot.com/2011/01/when-is-vsqlstats-refreshed.html
''Object and SQLs used for the test case (by Dion Cho):''
{{{
-- create objects
create table t1(c1 int, c2 char(100));
insert into t1
select level, 'x'
from dual
connect by level <= 10000
;
commit;
}}}
{{{
set heading off
set timing off
set feedback off
spool select2.sql
select 'select /*+ top_sql_' || mod(level,10000) || ' */ count(*) from t1;'
from dual
connect by level <= 10000;
spool off
}}}
''Executed as follows:''
{{{
exec dbms_workload_repository.create_snapshot;
@select2
exec dbms_workload_repository.create_snapshot;
}}}
''The first test was done on SNAP_ID 1329; you'll notice the 33% Oracle CPU and 269.110 exec/s.
The 2nd test was on SNAP_ID 1332, with 43% Oracle CPU and 315.847 exec/s.
There was also a shutdown during the SNAP_ID 1330-1331 period because I increased the SGA_MAX_SIZE from 300M to 700M.
The increased SGA helped, as more SQLs appeared in the awr_topsql.sql output:
SNAP_ID 1329 -- 300M SGA -- 54 sqls
SNAP_ID 1332 -- 700M SGA -- 106 sqls''
{{{
AWR CPU and IO Workload Report
i *** *** ***
n Total Total Total U S
Snap s Snap C CPU A Oracle OS Physical Oracle RMAN OS S Y I
Snap Start t Dur P Time DB DB Bg RMAN A CPU OS CPU Memory IOPs IOPs IOPs IO r IO w Redo Exec CPU CPU CPU R S O
ID Time # (m) U (s) Time CPU CPU CPU S (s) Load (s) (mb) r w redo (mb)/s (mb)/s (mb)/s Sess /s % % % % % %
------ --------------- --- ---------- --- ----------- ---------- --------- --------- -------- ----- ----------- ------- ----------- ---------- --------- --------- --------- --------- --------- --------- ---- --------- ------ ---- ---- ---- ---- ----
1328 10/10/18 21:36 1 1.23 1 73.80 4.41 4.19 0.33 0.00 0.1 4.52 0.46 8.51 0.05 0.257 2.019 0.217 0.004 0.026 0.006 21 2.073 6 0 12 5 5 2
1329 10/10/18 21:37 1 1.46 1 87.60 29.04 28.73 0.43 0.00 0.3 29.16 0.97 43.35 0.03 0.342 1.906 0.148 0.004 0.019 0.006 21 269.110 33 0 49 21 27 2
1330 10/10/18 21:38 1 7.02 1 421.20 6.45 4.37 2.14 0.00 0.0 6.51 0.15 35.12 0.02 0.306 0.715 0.674 0.004 0.010 0.004 21 19.418 2 0 8 3 4 2
1331 10/10/18 21:45 1 12.77 1 766.20 -417.20 -100.73 -33.41 0.00 -0.5 -134.13 0.39 -663.97 0.07 -7.794 -2.536 -1.541 -0.230 -0.033 -0.014 22 -60.715 -18 0 -87 -14 -59 -20
1332 10/10/18 21:58 1 1.26 1 75.60 36.63 32.37 0.30 0.00 0.5 32.67 0.85 46.88 0.04 6.548 0.463 0.185 0.100 0.010 0.008 25 315.847 43 0 62 28 33 3
}}}
''Drilling down on the SNAP_ID 1332 awr_topsql.sql output (below), these are the filter options:''
AND snap_id = 1332
AND lower(st.sql_text) like '%top_sql%'
''and it just shows 26 rows''
''notice the top_sql_7199 below.. its time rank is 9 and its elapsed time is 0.04sec; notice that the output is ordered by Elapsed Time, which is really what matters
and on top_sql_7199 half of that time was on CPU, .02sec (approximate; this may differ when you actually trace the SQL)
also notice the "SQL Text" column (far right).. across the 10K executions the lowest range captured starts at top_sql_6xxx and the highest at top_sql_9xxx... this can happen as SQLs go in and out of the shared pool, aged out or cycled, or simply because my shared pool is too small..
and then as per the official doc
//"DBA_HIST_SQLSTAT displays historical information about SQL statistics. This view captures the top SQL statements based on a set of criteria and captures the statistics information from V$SQL. The total value is the value of the statistics since instance startup. The delta value is the value of the statistics from the BEGIN_INTERVAL_TIME to the END_INTERVAL_TIME in the DBA_HIST_SNAPSHOT view."//
so this correlates with Tom Kyte's statement that they are based on snapshots... but only the "top SQL statements based on a set of criteria" are captured in this view (hmm, have to check if there are top_sql cursors in v$sql under no load that are not captured in dba_hist_sqlstat; a quick way to check is sketched below)
"Remember they are snapshots, not movies.. DBA_HIST_SQLTEXT will not be 100% complete for example - especially if you have a poorly written application"
Note that these are not the only SQLs running on this SNAP period... you'll see on the next sections that on this snap period (1332) there are 90 rows selected
(You can have a better output by double clicking this whole page and copy paste it on a textpad)
''
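As for that open question (whether top_sql cursors can sit in v$sql without ever being captured into dba_hist_sqlstat), a minimal sketch of the check; the top_sql filter is just the hint text used in this test case:
{{{
-- cursors currently in the shared pool that AWR never captured
select sql_id from v$sql
where lower(sql_text) like '%top_sql%'
minus
select sql_id from dba_hist_sqlstat;
}}}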
{{{
AWR Top SQL Report
i
n Elapsed
Snap s Snap Plan Elapsed Time CPU A
Snap Start t Dur SQL Hash Time per exec Time Cluster Parse PX A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) Wait LIO PIO Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------------------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ -------- ---------- -------- ------- ---- ----------------------------------------
1332 10/10/18 21:58 1 1.26 6yd53x1zjqts9 3724264953 sqlplus@dbrocaix01.b 0.04 0.04 0.02 0 223 0 1 1 1 0 0.00 9 select /*+ top_sql_7199 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 bpxnmunkcywzg 3724264953 sqlplus@dbrocaix01.b 0.03 0.03 0.00 0 223 0 1 1 1 0 0.00 12 select /*+ top_sql_8170 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 7fa2r0xkfbs6b 3724264953 sqlplus@dbrocaix01.b 0.02 0.02 0.02 0 223 0 1 1 1 0 0.00 27 select /*+ top_sql_8314 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 f71p3w4xx1pfc 3724264953 sqlplus@dbrocaix01.b 0.02 0.02 0.02 0 223 0 1 1 1 0 0.00 33 select /*+ top_sql_8286 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 f3wcc30napt5a 3724264953 sqlplus@dbrocaix01.b 0.02 0.02 0.02 0 223 0 1 1 1 0 0.00 36 select /*+ top_sql_7198 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 ghvnum1dfm05q 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 58 select /*+ top_sql_9331 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 2ta3r31t0z08a 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 60 select /*+ top_sql_7523 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 59kybrhwdk040 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 61 select /*+ top_sql_9853 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 9wf93m8rau04d 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 62 select /*+ top_sql_8652 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 fuhanmqynt02p 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 63 select /*+ top_sql_9743 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 1dzkrjdvjt03n 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 65 select /*+ top_sql_8498 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 0s5uzug7cr029 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 66 select /*+ top_sql_8896 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 gq6kp76f1307x 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 67 select /*+ top_sql_8114 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 bfa3qt29jg07b 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 68 select /*+ top_sql_9608 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 9nk1jwamsy02n 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 69 select /*+ top_sql_9724 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 2sry32gac2079 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 70 select /*+ top_sql_7316 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 atp84rb53u072 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 71 select /*+ top_sql_9091 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 1wb6wx2nb8093 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 73 select /*+ top_sql_9446 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 3czfc573u505f 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 74 select /*+ top_sql_9702 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 c31xpspd8n08k 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 75 select /*+ top_sql_8045 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 3k07s1fhv6043 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 77 select /*+ top_sql_9321 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 0qh6dbs79n06s 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 78 select /*+ top_sql_9052 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 9xt7tfmzut065 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 79 select /*+ top_sql_9429 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 28hu85p69d047 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 80 select /*+ top_sql_8978 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 4w2jxfhrfh037 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 81 select /*+ top_sql_7464 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 5kzjxrqgqv03x 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 83 select /*+ top_sql_6849 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
26 rows selected.
}}}
''Below is the dba_hist_sqltext filtered output; it only shows the '%top_sql%' SQLs
and has only 41 rows.
As per the official doc, //"the DBA_HIST_SQLTEXT displays the text of SQL statements belonging to shared SQL cursors captured in the Workload Repository. This view captures information from V$SQL and is used with the DBA_HIST_SQLSTAT view."//
And also when you do
''
select count(*) from dba_hist_sqltext — this view does not have SNAP_ID.. and the total row count is 3243
''for the script awr_topsqlx.sql I outer join dba_hist_sqltext with dba_hist_snapshot and dba_hist_sqlstat''
where st.sql_id(+) = sqt.sql_id
and st.dbid(+) = &_dbid
''to get the sql_text information on the SELECT portion''
, nvl(substr(st.sql_text,1,6), to_clob('** SQL Text Not Available **')) sql_text
''
in later sections, you'll see that even if I remove dba_hist_sqltext and dba_hist_snapshot from the join I still get the same number of SQLs (90 rows) for a specific SNAP_ID
''
(You can have a better output by double clicking this whole page and copy paste it on a textpad)
{{{
sys@IVRS> select * from dba_hist_sqltext where lower(sql_text) like '%top_sql%'
2 /
AWR Top SQL Report
SQL SQL
ID Text COMMAND_TYPE
--------------- ---------------------------------------- ------------
93s9k7wvfs05m select snap_interval, retention,most_rec 3
ent_snap_time, most_recent_snap_id, stat
us_flag, most_recent_purge_time, most_re
cent_split_id, most_recent_split_time, m
rct_snap_time_num, mrct_purge_time_num,
snapint_num, retention_num, swrf_version
, registration_status, mrct_baseline_id,
topnsql from wrm$_wr_control where dbid
= :dbid
7k5ymabz2vkgu update wrm$_wr_control set snap_inter 6
val = :bind1, snapint_num = :bind2, rete
ntion = :bind3, retention_num = :bi
nd4, most_recent_snap_id = :bind5,
most_recent_snap_time = :bind6, mrct_sna
p_time_num = :bind7, status_flag =
:bind8, most_recent_purge_time = :bind9,
mrct_purge_time_num = :bind10,
most_recent_split_id = :bind11, most_r
ecent_split_time = :bind12, swrf_ve
rsion = :bind13, registration_status = :
bind14, mrct_baseline_id = :bind15,
topnsql = :bind16 where dbid = :dbid
f83wtgbnb9usa select 'select /*+ top_sql_' || mod(leve 3
l,100) || ' */ count(*) from t1;'
from dual
connect by level <= 10000
89tw99zyhrcbz select 'select /*+ top_sql_' || mod(leve 3
l,10000) || ' */ count(*) from t1;'
from dual
connect by level <= 10000
1wb6wx2nb8093 select /*+ top_sql_9446 */ count(*) from 3
t1
2v0d7sukxs097 select /*+ top_sql_8504 */ count(*) from 3
t1
gw8hg6m5ur0ac select /*+ top_sql_9717 */ count(*) from 3
t1
7g2ssk0p2a0ap select /*+ top_sql_8764 */ count(*) from 3
t1
d8dw2zx62c0by select /*+ top_sql_9869 */ count(*) from 3
t1
6ucxssb64u0c3 select /*+ top_sql_9803 */ count(*) from 3
t1
8drg8cmfhj0c9 select /*+ top_sql_9336 */ count(*) from 3
t1
db1tvtsu2u0d2 select /*+ top_sql_8531 */ count(*) from 3
t1
b66tsa9sxa0dh select /*+ top_sql_9444 */ count(*) from 3
t1
bq0yu0jjry0fg select /*+ top_sql_8799 */ count(*) from 3
t1
0s5uzug7cr029 select /*+ top_sql_8896 */ count(*) from 3
t1
9nk1jwamsy02n select /*+ top_sql_9724 */ count(*) from 3
t1
fuhanmqynt02p select /*+ top_sql_9743 */ count(*) from 3
t1
4w2jxfhrfh037 select /*+ top_sql_7464 */ count(*) from 3
t1
1dzkrjdvjt03n select /*+ top_sql_8498 */ count(*) from 3
t1
5kzjxrqgqv03x select /*+ top_sql_6849 */ count(*) from 3
t1
59kybrhwdk040 select /*+ top_sql_9853 */ count(*) from 3
t1
3k07s1fhv6043 select /*+ top_sql_9321 */ count(*) from 3
t1
28hu85p69d047 select /*+ top_sql_8978 */ count(*) from 3
t1
9wf93m8rau04d select /*+ top_sql_8652 */ count(*) from 3
t1
3czfc573u505f select /*+ top_sql_9702 */ count(*) from 3
t1
ghvnum1dfm05q select /*+ top_sql_9331 */ count(*) from 3
t1
3qw7025q1tcf3 select /*+ top_sql_8865 */ count(*) from 3
t1
423v9vytv8064 select /*+ top_sql_6733 */ count(*) from 3
t1
9xt7tfmzut065 select /*+ top_sql_9429 */ count(*) from 3
t1
0qh6dbs79n06s select /*+ top_sql_9052 */ count(*) from 3
t1
atp84rb53u072 select /*+ top_sql_9091 */ count(*) from 3
t1
2sry32gac2079 select /*+ top_sql_7316 */ count(*) from 3
t1
bfa3qt29jg07b select /*+ top_sql_9608 */ count(*) from 3
t1
gq6kp76f1307x select /*+ top_sql_8114 */ count(*) from 3
t1
2ta3r31t0z08a select /*+ top_sql_7523 */ count(*) from 3
t1
c31xpspd8n08k select /*+ top_sql_8045 */ count(*) from 3
t1
f3wcc30napt5a select /*+ top_sql_7198 */ count(*) from 3
t1
7fa2r0xkfbs6b select /*+ top_sql_8314 */ count(*) from 3
t1
6yd53x1zjqts9 select /*+ top_sql_7199 */ count(*) from 3
t1
f71p3w4xx1pfc select /*+ top_sql_8286 */ count(*) from 3
t1
bpxnmunkcywzg select /*+ top_sql_8170 */ count(*) from 3
t1
41 rows selected.
}}}
(You can get a better view of this output by double-clicking this whole page and copy-pasting it into a text editor)
{{{
AWR Top SQL Report
i
n Elapsed
Snap s Snap Plan Elapsed Time CPU A
Snap Start t Dur SQL Hash Time per exec Time Cluster Parse PX A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) Wait LIO PIO Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------------------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ -------- ---------- -------- ------- ---- ----------------------------------------
1332 10/10/18 21:58 1 1.26 404qh4yx36y1v 2586623307 9.25 0.00 9.16 0 660002 134 10000 10000 10000 0 0.12 1 SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS I
GNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB
) opt_param('parallel_execution_enabled'
, 'false') NO_PARALLEL_INDEX(SAMPLESUB)
NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C
2),0) FROM (SELECT /*+ NO_PARALLEL("T1")
FULL("T1") NO_PARALLEL_INDEX("T1") */ 1
AS C1, 1 AS C2 FROM "T1" SAMPLE BLOCK (
41.447368 , 1) SEED (1) "T1") SAMPLESUB
1332 10/10/18 21:58 1 1.26 bunssq950snhf 2694099131 0.80 0.80 0.80 0 146 0 8 1 1 0 0.01 2 insert into wrh$_sga_target_advice (sn
ap_id, dbid, instance_number, SGA_SIZ
E, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_P
HYSICAL_READS) select :snap_id, :dbi
d, :instance_number, SGA_SIZE, SGA_SI
ZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_R
EADS from v$sga_target_advice
1332 10/10/18 21:58 1 1.26 7vgmvmy8vvb9s 43914496 0.08 0.08 0.08 0 168 0 1 1 1 0 0.00 3 insert into wrh$_tempstatxs (snap_id,
dbid, instance_number, file#, creation_c
hange#, phyrds, phywrts, singleblkrds
, readtim, writetim, singleblkrdtim, phy
blkrd, phyblkwrt, wait_count, time)
select :snap_id, :dbid, :instance_num
ber, tf.tfnum, to_number(tf.tfcrc_scn
) creation_change#, ts.kcftiopyr, ts.
kcftiopyw, ts.kcftiosbr, ts.kcftioprt, t
s.kcftiopwt, ts.kcftiosbt, ts.kcftiop
br, ts.kcftiopbw, fw.count, fw.time fro
m x$kcftio ts, x$kcctf tf, x$kcbfwait
fw where tf.tfdup != 0 and tf.tf
num = ts.kcftiofno and fw.indx+1 = (
ts.kcftiofno + :db_files)
1332 10/10/18 21:58 1 1.26 6hwjmjgrpsuaa 2721822575 0.05 0.05 0.02 0 196 5 57 1 1 0 0.00 4 insert into wrh$_enqueue_stat (snap_id
, dbid, instance_number, eq_type, req_re
ason, total_req#, total_wait#, succ_r
eq#, failed_req#, cum_wait_time, even
t#) select :snap_id, :dbid, :instanc
e_number, eq_type, req_reason, total_
req#, total_wait#, succ_req#, failed_req
#, cum_wait_time, event# from v$e
nqueue_statistics where total_req# !
= 0 order by eq_type, req_reason
1332 10/10/18 21:58 1 1.26 84qubbrsr0kfn 3385247542 0.04 0.04 0.04 0 372 0 388 1 1 0 0.00 5 insert into wrh$_latch (snap_id, dbid,
instance_number, latch_hash, level#, ge
ts, misses, sleeps, immediate_gets, i
mmediate_misses, spin_gets, sleep1, s
leep2, sleep3, sleep4, wait_time) selec
t :snap_id, :dbid, :instance_number,
hash, level#, gets, misses, sleeps, i
mmediate_gets, immediate_misses, spin_ge
ts, sleep1, sleep2, sleep3, sleep4, w
ait_time from v$latch order by h
ash
1332 10/10/18 21:58 1 1.26 db78fxqxwxt7r 3312420081 0.04 0.00 0.03 0 1163 3 5135 379 20 0 0.00 6 select /*+ rule */ bucket, endpoint, col
#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucke
t
1332 10/10/18 21:58 1 1.26 96g93hntrzjtr 2239883476 0.04 0.00 0.04 0 3517 0 736 1346 20 0 0.00 7 select /*+ rule */ bucket_cnt, row_cnt,
cache_cnt, null_cnt, timestamp#, sample_
size, minimum, maximum, distcnt, lowval,
hival, density, col#, spare1, spare2, a
vgcln from hist_head$ where obj#=:1 and
intcol#=:2
1332 10/10/18 21:58 1 1.26 130dvvr5s8bgn 1160622595 0.04 0.00 0.04 0 1105 0 198 18 18 0 0.00 8 select obj#, dataobj#, part#, hiboundlen
, hiboundval, ts#, file#, block#, pctfre
e$, pctused$, initrans, maxtrans, flags,
analyzetime, samplesize, rowcnt, blkcnt
, empcnt, avgspc, chncnt, avgrln, length
(bhiboundval), bhiboundval from tabpart$
where bo# = :1 order by part#
1332 10/10/18 21:58 1 1.26 6yd53x1zjqts9 3724264953 sqlplus@dbrocaix01.b 0.04 0.04 0.02 0 223 0 1 1 1 0 0.00 9 select /*+ top_sql_7199 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 70utgu2587mhs 1395584798 0.04 0.04 0.01 0 173 10 4 1 1 0 0.00 10 insert into wrh$_java_pool_advice (s
nap_id, dbid, instance_number, java
_pool_size_for_estimate, java_pool_size_
factor, estd_lc_size, estd_lc_memor
y_objects, estd_lc_time_saved, estd
_lc_time_saved_factor, estd_lc_load_time
, estd_lc_load_time_factor, estd_lc
_memory_object_hits) select :snap_
id, :dbid, :instance_number, java_p
ool_size_for_estimate, java_pool_size_fa
ctor, estd_lc_size, estd_lc_memory_
objects, estd_lc_time_saved, estd_l
c_time_saved_factor, estd_lc_load_time,
estd_lc_load_time_factor, estd_lc_m
emory_object_hits from v$java_pool_advi
ce
1332 10/10/18 21:58 1 1.26 c3zymn7x3k6wy 3446064519 0.03 0.00 0.03 0 1035 0 209 19 19 0 0.00 11 select obj#, dataobj#, part#, hiboundlen
, hiboundval, flags, ts#, file#, block#,
pctfree$, initrans, maxtrans, analyzeti
me, samplesize, rowcnt, blevel, leafcnt,
distkey, lblkkey, dblkkey, clufac, pctt
hres$, length(bhiboundval), bhiboundval
from indpart$ where bo# = :1 order by pa
rt#
1332 10/10/18 21:58 1 1.26 bpxnmunkcywzg 3724264953 sqlplus@dbrocaix01.b 0.03 0.03 0.00 0 223 0 1 1 1 0 0.00 12 select /*+ top_sql_8170 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 3252fkazwq930 3220283061 0.03 0.03 0.02 0 34 0 0 1 1 0 0.00 13 UPDATE WRH$_SERVICE_NAME SET snap_id = :
lah_snap_id WHERE dbid = :dbid AND (
SERVICE_NAME_HASH) IN (SELECT NUM1_KEWRA
TTR FROM X$KEWRATTRSTALE)
1332 10/10/18 21:58 1 1.26 fdxrh8tzyw0yw 2786456350 0.03 0.03 0.03 0 38 0 0 1 1 0 0.00 14 SELECT snap_id , SERVICE_NAME_HASH FROM
(SELECT /*+ ordered use_nl(t2) index(t
2) */ t2.snap_id , t1.NAME_HASH SERVICE
_NAME_HASH FROM V$SERVICES t1, WRH$_SERV
ICE_NAME t2 WHERE t2.dbid(+) = :db
id AND t2.SERVICE_NAME_HASH(+) = t1.NAM
E_HASH) WHERE nvl(snap_id, 0) < :snap_id
1332 10/10/18 21:58 1 1.26 7k6zct1sya530 2444078832 0.03 0.03 0.03 0 152 0 0 1 1 0 0.00 15 insert into WRH$_STREAMS_APPLY_SUM (s
nap_id, dbid, instance_number, apply_nam
e, startup_time, reader_total_messag
es_dequeued, reader_lag, coord_total
_received, coord_total_applied, coord_to
tal_rollbacks, coord_total_wait_deps
, coord_total_wait_cmts, coord_lwm_lag,
server_total_messages_applied, serve
r_elapsed_dequeue_time, server_elaps
ed_apply_time) select * from (select
:snap_id, :dbid, :instance_number, ac.a
pply_name, ac.startup_time, a
r.total_messages_dequeued, ar
.dequeue_time - ar.dequeued_message_crea
te_time, ac.total_received, a
c.total_applied, ac.total_rollbacks,
ac.total_wait_deps, ac.total_wai
t_commits, ac.lwm_time - ac.l
wm_message_create_time, al.to
tal_messages_applied, al.elapsed_dequeue
_time, al.elapsed_apply_time
from v$streams_apply_coordinator a
c, v$streams_apply_reader ar,
(select apply_name,
sum(total_messages_applied) t
otal_messages_applied,
sum(elapsed_dequeue_time) elapsed_dequ
eue_time, sum(elapsed
_apply_time) elapsed_apply_time
from v$streams_apply_server
group by apply_name) al wher
e al.apply_name=ac.apply_name and
ar.apply_name=ac.apply_name
order by ac.total_applied desc) where
rownum <= 25
1332 10/10/18 21:58 1 1.26 7qjhf5dzmazsr 751380177 0.03 0.03 0.01 0 143 7 1 1 1 0 0.00 16 SELECT snap_id , OBJ#, DATAOBJ# FROM (
SELECT /*+ ordered use_nl(t2) index(t2)
*/ t2.snap_id , t1.OBJN_KEWRSEG OBJ#, t
1.OBJD_KEWRSEG DATAOBJ# FROM X$KEWRTSEG
STAT t1, WRH$_SEG_STAT_OBJ t2 WHERE
t2.dbid(+) = :dbid AND t2.OBJ#(+) = t
1.OBJN_KEWRSEG AND t2.DATAOBJ#(+) = t1.O
BJD_KEWRSEG) WHERE nvl(snap_id, 0) < :sn
ap_id
1332 10/10/18 21:58 1 1.26 32wqka2zwvu65 875704766 0.03 0.03 0.03 0 557 0 264 1 1 0 0.00 17 insert into wrh$_parameter (snap_id, d
bid, instance_number, parameter_hash, va
lue, isdefault, ismodified) select
:snap_id, :dbid, :instance_number, i.k
sppihash hash, sv.ksppstvl, sv.ksppst
df, decode(bitand(sv.ksppstvf,7), 1, 'MO
DIFIED', 'FALSE') from x$ksppi i, x$ksp
psv sv where i.indx = sv.indx and ((
(i.ksppinm not like '#_%' escape '#') or
(sv.ksppstdf = 'FALSE') or
(bitand(sv.ksppstvf,5) > 0)) or
(i.ksppinm like '#_#_%' escape '#'
)) order by hash
1332 10/10/18 21:58 1 1.26 53saa2zkr6wc3 1514015273 0.03 0.00 0.03 0 2192 0 633 463 15 0 0.00 18 select intcol#,nvl(pos#,0),col#,nvl(spar
e1,0) from ccol$ where con#=:1
1332 10/10/18 21:58 1 1.26 4qju99hqmn81x 4055547183 0.02 0.02 0.02 0 591 0 4 1 1 0 0.00 19 INSERT INTO WRH$_ACTIVE_SESSION_HISTORY
( snap_id, dbid, instance_number, sample
_id, sample_time, session_id, session
_serial#, user_id, sql_id, sql_child_
number, sql_plan_hash_value, force_ma
tching_signature, service_hash, sessi
on_type, sql_opcode, plsql_entry_obje
ct_id, plsql_entry_subprogram_id, pls
ql_object_id, plsql_subprogram_id, bl
ocking_session, blocking_session_serial#
, qc_session_id, qc_instance_id, x
id, current_obj#, current_file#, curr
ent_block#, event_id, seq#, p1, p2
, p3, wait_time, time_waited, program
, module, action, client_id ) (SELECT :
snap_id, :dbid, :instance_number, a.samp
le_id, a.sample_time, a.session
_id, a.session_serial#, a.user_id,
a.sql_id, a.sql_child_number,
a.sql_plan_hash_value, a.force_matchi
ng_signature, a.service_hash, a
.session_type, a.sql_opcode, a.
plsql_entry_object_id, a.plsql_entry_sub
program_id, a.plsql_object_id,
a.plsql_subprogram_id, a.blocki
ng_session, a.blocking_session_
serial#, a.qc_session_id, a.qc_instance_
id, a.xid, a.current_o
bj#, a.current_file#, a.current_block#,
a.event_id, a.seq#, a.
p1, a.p2, a.p3, a.wait_time, a.time_wait
ed, substrb(a.program, 1, 64),
a.module, a.action, a.client_id FROM
x$ash a, (SELECT h.sample_addr
, h.sample_id FROM x$kewash
h WHERE ( (h.s
ample_id >= :begin_flushing) and
(h.sample_id < :latest_sampl
e_id) ) and (MOD(h.sample_id
, :disk_filter_ratio) = 0) ) s
hdr WHERE shdr.sample_addr = a.sample_
addr and shdr.sample_id = a.sample
_id)
1332 10/10/18 21:58 1 1.26 32whwm2babwpt 183139296 0.02 0.02 0.02 0 420 0 0 1 1 0 0.00 20 insert into wrh$_seg_stat_obj (
snap_id , dbid , ts#
, obj# , dataobj#
, owner , object_name
, subobject_name , partiti
on_type , object_type
, tablespace_name) select :lah_snap_
id , :dbid , ss1.tsn_k
ewrseg , ss1.objn_kewrseg
, ss1.objd_kewrseg , ss1.ow
nername_kewrseg , ss1.objname_k
ewrseg , ss1.subobjname_kewrseg
, decode(po.parttype, 1, 'RANG
E', 2, 'HASH',
3, 'SYSTEM', 4, 'LIST',
NULL, 'NONE', 'U
NKNOWN') , decode(ss1.objtype_k
ewrseg, 0, 'NEXT OBJECT',
1, 'INDEX', 2, 'TABLE', 3, 'CLUSTER'
, 4, 'VIEW', 5,
'SYNONYM', 6, 'SEQUENCE',
7, 'PROCEDURE', 8, 'FUNCTION', 9
, 'PACKAGE', 11, 'PACKA
GE BODY', 12, 'TRIGGER',
13, 'TYPE', 14, 'TYPE BODY',
19, 'T
ABLE PARTITION',
20, 'INDEX PARTITION', 2
1, 'LOB', 22
, 'LIBRARY', 23, 'DIRECTORY', 24, 'QUEUE
', 28, 'JAVA SOURCE', 2
9, 'JAVA CLASS',
30, 'JAVA RESOURCE', 32, 'INDEXTYPE',
33, 'OPERATOR',
34, 'TABLE SUBPARTITION',
35, 'INDEX SUBPARTITION',
40, 'LOB PAR
TITION', 41, 'LOB SUBPARTITION',
42, 'MATERIALIZED VIEW',
43, 'DIMENSION',
44, 'CONTEXT', 47, 'RE
SOURCE PLAN', 48, 'CON
SUMER GROUP',
51, 'SUBSCRIPTION', 52, 'LOCATION'
, 55, 'XML SCHEMA', 56
, 'JAVA DATA', 57, 'S
ECURITY PROFILE', 'UND
EFINED') , ss1.tsname_kewrse
g from x$kewrattrnew at,
x$kewrtsegstat ss1, (selec
t tp.obj#, pob.parttype fr
om sys.tabpart$ tp, sys.partobj$ pob
where tp.bo# = pob.obj#
union all select
ip.obj#, pob.parttype fro
m sys.indpart$ ip, sys.partobj$ pob
where ip.bo# = pob.obj#)
po where at.num1_kewrattr = ss1.ob
jn_kewrseg and at.num2_kewrattr
= ss1.objd_kewrseg and at.num1_ke
wrattr = po.obj#(+) and (ss1.obj
type_kewrseg not in
(1 /* INDEX - handled below */,
10 /* NON-EXISTEN
T */) or (ss1.objtype_kewrse
g = 1 and 1
= (select 1 from ind$ i
where i.obj# = ss1.objn_k
ewrseg
and i.type# in
(1, 2, 3,
4, 6, 7, 9)))) and ss1.objname_
kewrseg != '_NEXT_OBJECT'
and ss1.objname_kewrseg != '_defa
ult_auditing_options_'
1332 10/10/18 21:58 1 1.26 fktqvw2wjxdxc 2042248707 0.02 0.02 0.02 0 293 0 13 1 1 0 0.00 21 insert into wrh$_filestatxs (snap_id,
dbid, instance_number, file#, creation_c
hange#, phyrds, phywrts, singleblkrds
, readtim, writetim, singleblkrdtim, phy
blkrd, phyblkwrt, wait_count, time)
select :snap_id, :dbid, :instance_num
ber, df.file#, (df.crscnbas + (df.crs
cnwrp * power(2,32))) creation_change#,
fs.kcfiopyr, fs.kcfiopyw, fs.kcfiosbr
, fs.kcfioprt, fs.kcfiopwt, fs.kcfios
bt, fs.kcfiopbr, fs.kcfiopbw, fw.count,
fw.time from x$kcfio fs, file$ df, x
$kcbfwait fw where fw.indx+1 = fs.k
cfiofno and df.file# = fs.kcfiofno
and df.status$ = 2
1332 10/10/18 21:58 1 1.26 2ym6hhaq30r73 3755742892 0.02 0.00 0.02 0 1428 0 476 476 476 0 0.00 22 select type#,blocks,extents,minexts,maxe
xts,extsize,extpct,user#,iniexts,NVL(lis
ts,65535),NVL(groups,65535),cachehint,hw
mincr, NVL(spare1,0),NVL(scanhint,0) fro
m seg$ where ts#=:1 and file#=:2 and blo
ck#=:3
1332 10/10/18 21:58 1 1.26 71y370j6428cb 3717298615 0.02 0.02 0.02 0 146 0 1 1 1 0 0.00 23 insert into wrh$_thread (snap_id, db
id, instance_number, thread#, threa
d_instance_number, status, open_tim
e, current_group#, sequence#) select
:snap_id, :dbid, :instance_number,
t.thread#, i.instance_number, t.statu
s, t.open_time, t.current_group#, t
.sequence# from v$thread t, v$instance
i where i.thread#(+) = t.thread#
1332 10/10/18 21:58 1 1.26 f9nzhpn9854xz 2614576983 0.02 0.02 0.02 0 499 5 57 1 1 0 0.00 24 insert into wrh$_seg_stat (snap_id, db
id, instance_number, ts#, obj#, dataobj#
, logical_reads_total, logical_reads_
delta, buffer_busy_waits_total, buffer_b
usy_waits_delta, db_block_changes_tot
al, db_block_changes_delta, physical_rea
ds_total, physical_reads_delta, physi
cal_writes_total, physical_writes_delta,
physical_reads_direct_total, physica
l_reads_direct_delta, physical_writes
_direct_total, physical_writes_direct_de
lta, itl_waits_total, itl_waits_delta
, row_lock_waits_total, row_lock_wait
s_delta, gc_buffer_busy_total, gc_buf
fer_busy_delta, gc_cr_blocks_received
_total, gc_cr_blocks_received_delta,
gc_cu_blocks_received_total, gc_cu_block
s_received_delta, space_used_total, s
pace_used_delta, space_allocated_tota
l, space_allocated_delta, table_scans
_total, table_scans_delta, chain_row_
excess_total, chain_row_excess_delta) s
elect :snap_id, :dbid, :instance_number,
tsn_kewrseg, objn_kewrseg, objd_kewr
seg, log_rds_kewrseg, log_rds_dl_kewr
seg, buf_busy_wts_kewrseg, buf_busy_w
ts_dl_kewrseg, db_blk_chgs_kewrseg, d
b_blk_chgs_dl_kewrseg, phy_rds_kewrse
g, phy_rds_dl_kewrseg, phy_wrts_kewrs
eg, phy_wrts_dl_kewrseg, phy_rds_drt_
kewrseg, phy_rds_drt_dl_kewrseg, phy_
wrts_drt_kewrseg, phy_wrts_drt_dl_kewrse
g, itl_wts_kewrseg, itl_wts_dl_kewrse
g, row_lck_wts_kewrseg, row_lck_wts_d
l_kewrseg, gc_buf_busy_kewrseg, gc_bu
f_busy_dl_kewrseg, gc_cr_blks_rcv_kew
rseg, gc_cr_blks_rcv_dl_kewrseg, gc_c
u_blks_rcv_kewrseg, gc_cu_blks_rcv_dl_ke
wrseg, space_used_kewrseg, space_used
_dl_kewrseg, space_alloc_kewrseg, spa
ce_alloc_dl_kewrseg, tbl_scns_kewrseg
, tbl_scns_dl_kewrseg, chn_exc_kewrse
g, chn_exc_dl_kewrseg from X$KEWRTSEGST
AT order by objn_kewrseg, objd_kewrseg
1332 10/10/18 21:58 1 1.26 bqnn4c3gjtmgu 592198678 0.02 0.02 0.02 0 129 0 23 1 1 0 0.00 25 insert into wrh$_bg_event_summary (sna
p_id, dbid, instance_number, event_id
, total_waits, total_timeouts, time_w
aited_micro) select /*+ ordered use_nl(
e) */ :snap_id, :dbid, :instance_numb
er, e.event_id, sum(e.total_waits),
sum(e.total_timeouts), sum(e.time_wait
ed_micro) from v$session bgsids, v$s
ession_event e where bgsids.type = '
BACKGROUND' and bgsids.sid = e.sid
group by e.event_id
1332 10/10/18 21:58 1 1.26 39m4sx9k63ba2 323350262 0.02 0.00 0.01 0 138 1 42 12 12 0 0.00 26 select /*+ index(idl_ub2$ i_idl_ub21) +*
/ piece#,length,piece from idl_ub2$ wher
e obj#=:1 and part=:2 and version=:3 ord
er by piece#
1332 10/10/18 21:58 1 1.26 7fa2r0xkfbs6b 3724264953 sqlplus@dbrocaix01.b 0.02 0.02 0.02 0 223 0 1 1 1 0 0.00 27 select /*+ top_sql_8314 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 1uk5m5qbzj1vt 0 sqlplus@dbrocaix01.b 0.02 0.02 0 155 0 0 0 1 0 0.00 28 BEGIN dbms_workload_repository.create_sn
xxxxxxx.com (TNS V1- apshot; END;
V3)
1332 10/10/18 21:58 1 1.26 cp3gpd7z878w8 1950636251 0.02 0.02 0.02 0 288 0 25 1 1 0 0.00 29 insert into wrh$_sgastat (snap_id, dbi
d, instance_number, pool, name, bytes)
select :snap_id, :dbid, :instance_num
ber, pool, name, bytes from (selec
t pool, name, bytes, 100*(by
tes) / (sum(bytes) over (partition by po
ol)) part_pct from v$sgastat) w
here part_pct >= 1 or pool is null
or name = 'free memory' order by
name, pool
1332 10/10/18 21:58 1 1.26 dsd2yqyggtc59 3648994037 0.02 0.02 0 5 0 0 0 1 0 0.00 30 select SERVICE_ID, NAME, NAME_HASH, NETW
ORK_NAME, CREATION_DATE, CREATION_DATE_H
ASH, GOAL, DTP, AQ_HA_NOTIFICATION, CLB
_GOAL from GV$SERVICES where inst_id =
USERENV('Instance')
1332 10/10/18 21:58 1 1.26 bu95jup1jp5t3 2436512634 0.02 0.02 0.02 0 338 3 21 1 1 0 0.00 31 insert into wrh$_db_cache_advice
(snap_id, dbid, instance_number,
bpid, buffers_for_estimate, name, block
_size, advice_status, size_for_e
stimate, size_factor, physical_r
eads, base_physical_reads, actual_physic
al_reads) select :snap_id, :dbid, :ins
tance_number, a.bpid, a.nbufs,
b.bp_name, a.blksz, decode(a.st
atus, 2, 'ON', 'OFF'), a.poolsz
, round((a.poolsz / a.actual_poolsz), 4)
, a.preads, a.base_preads, a.ac
tual_preads from x$kcbsc a, x$kcbwbp
d b where a.bpid = b.bp_id
1332 10/10/18 21:58 1 1.26 350myuyx0t1d6 1838802114 0.02 0.02 0.02 0 299 0 11 1 1 0 0.00 32 insert into wrh$_tablespace_stat (sna
p_id, dbid, instance_number, ts#, tsname
, contents, status, segment_space_ma
nagement, extent_management, is_back
up) select :snap_id, :dbid, :instanc
e_number, ts.ts#, ts.name as tsname,
decode(ts.contents$, 0, (decode(bitan
d(ts.flags, 16), 16, 'UNDO', '
PERMANENT')), 1, 'TEMPORARY')
as contents, decode(ts.online$, 1, '
ONLINE', 2, 'OFFLINE', 4, 'REA
D ONLY', 'UNDEFINED') as st
atus, decode(bitand(ts.flags,32), 32,
'AUTO', 'MANUAL') as segspace_mgmt, d
ecode(ts.bitmapped, 0, 'DICTIONARY', 'LO
CAL') as extent_management, (case w
hen b.active_count > 0 then 'TR
UE' else 'FALSE' end) as is
_backup from sys.ts$ ts, (select
dfile.ts#, sum( case when
bkup.status = 'ACTIVE'
then 1 else 0 end ) as active_cou
nt from v$backup bkup, file$ dfi
le where bkup.file# = dfile.file
# and dfile.status$ = 2
group by dfile.ts#) b where ts.online
$ != 3 and bitand(ts.flags, 2048) !=
2048 and ts.ts# = b.ts#
1332 10/10/18 21:58 1 1.26 f71p3w4xx1pfc 3724264953 sqlplus@dbrocaix01.b 0.02 0.02 0.02 0 223 0 1 1 1 0 0.00 33 select /*+ top_sql_8286 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 c6awqs517jpj0 1780865333 0.02 0.00 0.00 0 36 1 6 12 12 0 0.00 34 select /*+ index(idl_char$ i_idl_char1)
+*/ piece#,length,piece from idl_char$ w
here obj#=:1 and part=:2 and version=:3
order by piece#
1332 10/10/18 21:58 1 1.26 agpd044zj368m 3821145811 0.02 0.02 0.02 0 284 10 45 1 1 0 0.00 35 insert into wrh$_system_event (snap_id
, dbid, instance_number, event_id, total
_waits, total_timeouts, time_waited_m
icro) select :snap_id, :dbid, :insta
nce_number, event_id, total_waits, to
tal_timeouts, time_waited_micro from
v$system_event order by event_id
1332 10/10/18 21:58 1 1.26 f3wcc30napt5a 3724264953 sqlplus@dbrocaix01.b 0.02 0.02 0.02 0 223 0 1 1 1 0 0.00 36 select /*+ top_sql_7198 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 71k5024zn7c9a 3286887626 0.02 0.02 0.02 0 295 0 6 1 1 0 0.00 37 insert into wrh$_latch_misses_summary
(snap_id, dbid, instance_number, parent_
name, where_in_code, nwfail_count, sl
eep_count, wtr_slp_count) select :sn
ap_id, :dbid, :instance_number, parent_n
ame, "WHERE", sum(nwfail_count), sum(
sleep_count), sum(wtr_slp_count) from
v$latch_misses where sleep_count >
0 group by parent_name, "WHERE" or
der by parent_name, "WHERE"
1332 10/10/18 21:58 1 1.26 83taa7kaw59c1 3765558045 0.02 0.00 0.02 0 220 0 913 69 21 0 0.00 38 select name,intcol#,segcol#,type#,length
,nvl(precision#,0),decode(type#,2,nvl(sc
ale,-127/*MAXSB1MINAL*/),178,scale,179,s
cale,180,scale,181,scale,182,scale,183,s
cale,231,scale,0),null$,fixedstorage,nvl
(deflength,0),default$,rowid,col#,proper
ty, nvl(charsetid,0),nvl(charsetform,0),
spare1,spare2,nvl(spare3,0) from col$ wh
ere obj#=:1 order by intcol#
1332 10/10/18 21:58 1 1.26 cvn54b7yz0s8u 2334475966 0.02 0.00 0.00 0 62 7 20 12 12 0 0.00 39 select /*+ index(idl_ub1$ i_idl_ub11) +*
/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 ord
er by piece#
1332 10/10/18 21:58 1 1.26 66gs90fyynks7 1662736584 0.02 0.02 0.02 0 202 0 1 1 1 0 0.00 40 insert into wrh$_instance_recovery (snap
_id, dbid, instance_number, recovery_est
imated_ios, actual_redo_blks, target_red
o_blks, log_file_size_redo_blks, log_chk
pt_timeout_redo_blks, log_chkpt_interval
_redo_blks, fast_start_io_target_redo_bl
ks, target_mttr, estimated_mttr, ckpt_bl
ock_writes, optimal_logfile_size, estd_c
luster_available_time, writes_mttr, writ
es_logfile_size, writes_log_checkpoint_s
ettings, writes_other_settings, writes_a
utotune, writes_full_thread_ckpt) select
:snap_id, :dbid, :instance_number, reco
very_estimated_ios, actual_redo_blks, ta
rget_redo_blks, log_file_size_redo_blks,
log_chkpt_timeout_redo_blks, log_chkpt_
interval_redo_blks, fast_start_io_target
_redo_blks, target_mttr, estimated_mttr,
ckpt_block_writes, optimal_logfile_size
, estd_cluster_available_time, writes_mt
tr, writes_logfile_size, writes_log_chec
kpoint_settings, writes_other_settings,
writes_autotune, writes_full_thread_ckpt
from v$instance_recovery
1332 10/10/18 21:58 1 1.26 5ngzsfstg8tmy 3317232865 0.01 0.00 0.01 0 321 0 107 107 19 0 0.00 41 select o.owner#,o.name,o.namespace,o.rem
oteowner,o.linkname,o.subname,o.dataobj#
,o.flags from obj$ o where o.obj#=:1
1332 10/10/18 21:58 1 1.26 7ng34ruy5awxq 306576078 0.01 0.00 0.01 0 566 0 78 68 18 0 0.00 42 select i.obj#,i.ts#,i.file#,i.block#,i.i
ntcols,i.type#,i.flags,i.property,i.pctf
ree$,i.initrans,i.maxtrans,i.blevel,i.le
afcnt,i.distkey,i.lblkkey,i.dblkkey,i.cl
ufac,i.cols,i.analyzetime,i.samplesize,i
.dataobj#,nvl(i.degree,1),nvl(i.instance
s,1),i.rowcnt,mod(i.pctthres$,256),i.ind
method#,i.trunccnt,nvl(c.unicols,0),nvl(
c.deferrable#+c.valid#,0),nvl(i.spare1,i
.intcols),i.spare4,i.spare2,i.spare6,dec
ode(i.pctthres$,null,null,mod(trunc(i.pc
tthres$/256),256)),ist.cachedblk,ist.cac
hehit,ist.logicalread from ind$ i, ind_s
tats$ ist, (select enabled, min(cols) un
icols,min(to_number(bitand(defer,1))) de
ferrable#,min(to_number(bitand(defer,4))
) valid# from cdef$ where obj#=:1 and en
abled > 1 group by enabled) c where i.ob
j#=c.enabled(+) and i.obj# = ist.obj#(+)
and i.bo#=:1 order by i.obj#
1332 10/10/18 21:58 1 1.26 79uvsz1g1c168 187762771 0.01 0.01 0.01 0 216 0 1 1 1 0 0.00 43 insert into wrh$_buffer_pool_statistics
(snap_id, dbid, instance_number, id, n
ame, block_size, set_msize, cnum_repl
, cnum_write, cnum_set, buf_got, sum_wri
te, sum_scan, free_buffer_wait, write
_complete_wait, buffer_busy_wait, fre
e_buffer_inspected, dirty_buffers_inspec
ted, db_block_change, db_block_gets,
consistent_gets, physical_reads, physica
l_writes) select :snap_id, :dbid, :i
nstance_number, id, name, block_size, se
t_msize, cnum_repl, cnum_write, cnum_
set, buf_got, sum_write, sum_scan, fr
ee_buffer_wait, write_complete_wait, buf
fer_busy_wait, free_buffer_inspected,
dirty_buffers_inspected, db_block_chang
e, db_block_gets, consistent_gets, ph
ysical_reads, physical_writes from v
$buffer_pool_statistics
1332 10/10/18 21:58 1 1.26 b0cxc52zmwaxs 3771206753 0.01 0.01 0.01 0 187 0 2 1 1 0 0.00 44 insert into wrh$_sess_time_stats (sna
p_id, dbid, instance_number, session_typ
e, min_logon_time, sum_cpu_time, sum
_sys_io_wait, sum_user_io_wait) select :
snap_id, :dbid, :instance_number, type,
min(logon_time) min_logon_time,
sum(cpu_time) cpu_time, sum(s
ys_io_wait) sys_io_wait, sum(user_io_
wait) user_io_wait from (select sid, se
rial#, max(type) type,
max(logon_time) logon_time,
max(cpu_time) cpu_time, s
um(case when kslcsclsname = 'System I/O'
then kslcstim else 0
end) as sys_io_wait, sum(case w
hen kslcsclsname ='User I/O'
then kslcstim else 0 end) as user
_io_wait from (select /*+ ordered
*/ allsids.sid sid, allsids.
serial# serial#, max(type)
type, max(logon_time) l
ogon_time, sum(kewsval) c
pu_time from (select type,
allsids.sid, sess.ksuseser as serial#,
sess.ksuseltm as logon_time from
(select /*+ ordered index(p) */
s.indx as sid, decode(l.ro
le, 'reader', 'Logminer Reader',
'preparer','Logminer
Preparer', 'bui
lder', 'Logminer Builder') as type
from x$logmnr_process l, x$ksupr p, x$ks
use s where l.role in ('reader','pr
eparer','builder') and l.pid = p.
indx and bitand(p.ksspaflg,1)!=0
and p.ksuprpid = s.ksusepid un
ion all select sid_knst as sid,
decode(type_knst, 8,'STREAMS Captur
e', 7,'STREA
MS Apply Reader',
2,'STREAMS Apply Server',
1,'STREAMS Apply Coo
rdinator') as type from x$knstcap
where type_knst in (8,7,2,1) unio
n all select indx as sid, (case when
ksusepnm like '%(q00%)'
then 'QMON Slaves'
else 'QMON Coordina
tor' end) as type from x$ksuse
where ksusepnm like '%(q00%)' o
r ksusepnm like '%(QMNC)' union all
select kwqpssid as sid, 'Propagation S
ender' as type from x$kwqps unio
n all select kwqpdsid as sid, 'Propag
ation Receiver' as type from x$kwqp
d) allsids, x$ksuse sess where bitand(
sess.ksspaflg,1) != 0 and bitand(ses
s.ksuseflg,1) != 0 and allsids.sid =
sess.indx) allsids, x$kewsse
sv sesv, x$kewssmap map
where allsids.sid = sesv.ksusenum
and sesv.kewsnum = map.soffst
and map.aggid = 1 and
(map.stype = 2 or map.stype = 3)
and map.sname in ('DB CPU', 'backgro
und cpu time') group by sid, seria
l#) allaggr, x$kslcs allio where
allaggr.sid = allio.kslcssid(+) and
allio.kslcsclsname in ('System I/O',
'User I/O') group by allaggr.sid, alla
ggr.serial#) group by type
1332 10/10/18 21:58 1 1.26 1tn90bbpyjshq 722989617 0.01 0.01 0.01 0 87 0 0 1 1 0 0.00 45 UPDATE wrh$_tempfile tfh SET (snap_id,
filename, tsname) = (SELECT :lah_sn
ap_id, tf.name name, ts.name tsname
FROM v$tempfile tf, ts$ ts WHERE
tf.ts# = ts.ts# AND tfh.file# =
tf.file# AND tfh.creation_chang
e# = tf.creation_change#) WHERE (file#,
creation_change#) IN (SELECT tf.
tfnum, to_number(tf.tfcrc_scn) creation_
change# FROM x$kcctf tf
WHERE tf.tfdup != 0) AND dbid
= :dbid AND snap_id < :snap_id
1332 10/10/18 21:58 1 1.26 a73wbv1yu8x5c 2570921597 0.01 0.00 0.01 0 680 0 463 71 5 0 0.00 46 select con#,type#,condlength,intcols,rob
j#,rcon#,match#,refact,nvl(enabled,0),ro
wid,cols,nvl(defer,0),mtime,nvl(spare1,0
) from cdef$ where obj#=:1
1332 10/10/18 21:58 1 1.26 6c06mfv01xt2h 2399945022 0.01 0.01 0.01 0 201 1 1 1 1 0 0.00 47 update wrh$_seg_stat_obj sso set (ind
ex_type, base_obj#, base_object_name, ba
se_object_owner) = (selec
t decode(ind.type#,
1, 'NORMAL'||
decode(bitand(ind.property, 4), 0, '',
4, '/REV'), 2, 'BIT
MAP', 3, 'CLUSTER', 4, 'IOT - TOP',
5, 'IOT - NESTED', 6,
'SECONDARY', 7, 'ANSI',
8, 'LOB', 9, 'DOMAIN') as index_ty
pe, base_obj.obj# as base
_obj#, base_obj.name as b
ase_object_name, base_own
er.name as base_object_owner fro
m sys.ind$ ind, sys.us
er$ base_owner, sys.obj$
base_obj where ind.obj# =
sso.obj# and ind.dataobj# = s
so.dataobj# and ind.bo#
= base_obj.obj# and base_obj.
owner# = base_owner.user#) where sso.d
bid = :dbid and (obj#, dataob
j#) in (select objn_kewrseg, obj
d_kewrseg from x$kewrtseg
stat ss1 where objtype_kewrseg = 1)
and sso.snap_id = :lah_snap_id a
nd sso.object_type = 'INDEX'
1332 10/10/18 21:58 1 1.26 45jb7msfn4x4m 669385525 0.01 0.01 0 5 0 0 0 1 0 0.00 48 select SADDR , SID , SERIAL# , AUDSID ,
PADDR , USER# , USERNAME , COMMAND , OW
NERID, TADDR , LOCKWAIT , STATUS , SERVE
R , SCHEMA# , SCHEMANAME ,OSUSER , PROCE
SS , MACHINE , TERMINAL , PROGRAM , TYPE
, SQL_ADDRESS , SQL_HASH_VALUE, SQL_ID,
SQL_CHILD_NUMBER , PREV_SQL_ADDR , PREV
_HASH_VALUE , PREV_SQL_ID, PREV_CHILD_NU
MBER , PLSQL_ENTRY_OBJECT_ID, PLSQL_ENTR
Y_SUBPROGRAM_ID, PLSQL_OBJECT_ID, PLSQL_
SUBPROGRAM_ID, MODULE , MODULE_HASH , AC
TION , ACTION_HASH , CLIENT_INFO , FIXED
_TABLE_SEQUENCE , ROW_WAIT_OBJ# , ROW_WA
IT_FILE# , ROW_WAIT_BLOCK# , ROW_WAIT_RO
W# , LOGON_TIME , LAST_CALL_ET , PDML_EN
ABLED , FAILOVER_TYPE , FAILOVER_METHOD
, FAILED_OVER, RESOURCE_CONSUMER_GROUP,
PDML_STATUS, PDDL_STATUS, PQ_STATUS, CUR
RENT_QUEUE_DURATION, CLIENT_IDENTIFIER,
BLOCKING_SESSION_STATUS, BLOCKING_INSTAN
CE,BLOCKING_SESSION,SEQ#, EVENT#,EVENT,P
1TEXT,P1,P1RAW,P2TEXT,P2,P2RAW, P3TEXT,P
3,P3RAW,WAIT_CLASS_ID, WAIT_CLASS#,WAIT_
CLASS,WAIT_TIME, SECONDS_IN_WAIT,STATE,S
ERVICE_NAME, SQL_TRACE, SQL_TRACE_WAITS,
SQL_TRACE_BINDS from GV$SESSION where i
nst_id = USERENV('Instance')
1332 10/10/18 21:58 1 1.26 asvzxj61dc5vs 3028786551 0.01 0.00 0.01 0 325 0 75 125 125 0 0.00 49 select timestamp, flags from fixed_obj$
where obj#=:1
1332 10/10/18 21:58 1 1.26 04xtrk7uyhknh 2853959010 0.01 0.00 0.01 0 125 1 41 42 22 0 0.00 50 select obj#,type#,ctime,mtime,stime,stat
us,dataobj#,flags,oid$, spare1, spare2 f
rom obj$ where owner#=:1 and name=:2 and
namespace=:3 and remoteowner is null an
d linkname is null and subname is null
1332 10/10/18 21:58 1 1.26 6769wyy3yf66f 299250003 0.01 0.00 0.01 0 704 0 274 78 20 0 0.00 51 select pos#,intcol#,col#,spare1,bo#,spar
e2 from icol$ where obj#=:1
1332 10/10/18 21:58 1 1.26 1gu8t96d0bdmu 3526770254 0.01 0.00 0.01 0 242 1 59 59 20 0 0.00 52 select t.ts#,t.file#,t.block#,nvl(t.bobj
#,0),nvl(t.tab#,0),t.intcols,nvl(t.cluco
ls,0),t.audit$,t.flags,t.pctfree$,t.pctu
sed$,t.initrans,t.maxtrans,t.rowcnt,t.bl
kcnt,t.empcnt,t.avgspc,t.chncnt,t.avgrln
,t.analyzetime,t.samplesize,t.cols,t.pro
perty,nvl(t.degree,1),nvl(t.instances,1)
,t.avgspc_flb,t.flbcnt,t.kernelcols,nvl(
t.trigflag, 0),nvl(t.spare1,0),nvl(t.spa
re2,0),t.spare4,t.spare6,ts.cachedblk,ts
.cachehit,ts.logicalread from tab$ t, ta
b_stats$ ts where t.obj#= :1 and t.obj#
= ts.obj# (+)
1332 10/10/18 21:58 1 1.26 88brhumsyg325 146261960 0.01 0.01 0 6 0 0 0 1 0 0.00 53 select d.inst_id,d.kslldadr,la.latch#,d.
kslldlvl,d.kslldnam,d.kslldhsh, l
a.gets,la.misses, la.sleeps,la.im
mediate_gets,la.immediate_misses,la.wait
ers_woken, la.waits_holding_latch
,la.spin_gets,la.sleep1,la.sleep2,
la.sleep3,la.sleep4,la.sleep5,la.sleep
6,la.sleep7,la.sleep8,la.sleep9,
la.sleep10, la.sleep11, la.wait_time fr
om x$kslld d, (select kslltnum latch#
, sum(kslltwgt) gets,sum(kslltwff
) misses,sum(kslltwsl) sleeps, su
m(kslltngt) immediate_gets,sum(kslltnfa)
immediate_misses, sum(kslltwkc)
waiters_woken,sum(kslltwth) waits_holdin
g_latch, sum(ksllthst0) spin_gets
,sum(ksllthst1) sleep1,sum(ksllthst2) sl
eep2, sum(ksllthst3) sleep3,sum(k
sllthst4) sleep4,sum(ksllthst5) sleep5,
sum(ksllthst6) sleep6,sum(ksllths
t7) sleep7,sum(ksllthst8) sleep8,
sum(ksllthst9) sleep9,sum(ksllthst10) s
leep10,sum(ksllthst11) sleep11, s
um(kslltwtt) wait_time from x$ksllt g
roup by kslltnum) la where la.latch# =
d.indx
1332 10/10/18 21:58 1 1.26 7rx9z1ddww1j2 2439216106 0.00 0.00 0 5 0 0 0 1 0 0.00 54 select SID, SERIAL#, APPLY#, APPLY_NAME,
SERVER_ID, STATE, XIDUSN, XIDSLT, XIDSQN
, COMMITSCN,DEP_XIDUSN, DEP_XIDSLT, DEP_
XIDSQN, DEP_COMMITSCN, MESSAGE_SEQUENCE,
TOTAL_ASSIGNED, TOTAL_ADMIN, TOTAL_ROLLB
ACKS,TOTAL_MESSAGES_APPLIED, APPLY_TIME,
APPLIED_MESSAGE_NUMBER, APPLIED_MESSAGE
_CREATE_TIME,ELAPSED_DEQUEUE_TIME, ELAPS
ED_APPLY_TIME from GV$STREAMS_APPLY_SERV
ER where INST_ID = USERENV('Instance')
1332 10/10/18 21:58 1 1.26 6aq34nj2zb2n7 2874733959 0.00 0.00 0.00 0 130 0 0 65 20 0 0.00 55 select col#, grantee#, privilege#,max(mo
d(nvl(option$,0),2)) from objauth$ where
obj#=:1 and col# is not null group by p
rivilege#, col#, grantee# order by col#,
grantee#
1332 10/10/18 21:58 1 1.26 17k8dh7vntd3w 669385525 0.00 0.00 0 3 0 0 0 1 0 0.00 56 select s.inst_id,s.addr,s.indx,s.ksusese
r,s.ksuudses,s.ksusepro,s.ksuudlui,s.ksu
udlna,s.ksuudoct,s.ksusesow, decode(s.ks
usetrn,hextoraw('00'),null,s.ksusetrn),d
ecode(s.ksqpswat,hextoraw('00'),null,s.k
sqpswat),decode(bitand(s.ksuseidl,11),1,
'ACTIVE',0,decode(bitand(s.ksuseflg,4096
),0,'INACTIVE','CACHED'),2,'SNIPED',3,'S
NIPED', 'KILLED'),decode(s.ksspatyp,1,'D
EDICATED',2,'SHARED',3,'PSEUDO','NONE'),
s.ksuudsid,s.ksuudsna,s.ksuseunm,s.ksu
sepid,s.ksusemnm,s.ksusetid,s.ksusepnm,
decode(bitand(s.ksuseflg,19),17,'BACKGRO
UND',1,'USER',2,'RECURSIVE','?'), s.ksus
esql, s.ksusesqh, s.ksusesqi, decode(s.k
susesch, 65535, to_number(null), s.ksuse
sch), s.ksusepsq, s.ksusepha, s.ksuseps
i, decode(s.ksusepch, 65535, to_number(
null), s.ksusepch), decode(s.ksusepeo,0
,to_number(null),s.ksusepeo), decode(s.
ksusepeo,0,to_number(null),s.ksusepes),
decode(s.ksusepco,0,to_number(null),s.k
susepco), decode(s.ksusepco,0,to_number
(null),s.ksusepcs), s.ksuseapp, s.ksuse
aph, s.ksuseact, s.ksuseach, s.ksusecli,
s.ksusefix, s.ksuseobj, s.ksusefil, s.k
suseblk, s.ksuseslt, s.ksuseltm, s.ksuse
ctm,decode(bitand(s.ksusepxopt, 12),0,'N
O','YES'),decode(s.ksuseft, 2,'SESSION',
4,'SELECT',8,'TRANSACTIONAL','NONE'),de
code(s.ksusefm,1,'BASIC',2,'PRECONNECT',
4,'PREPARSE','NONE'),decode(s.ksusefs, 1
, 'YES', 'NO'),s.ksusegrp,decode(bitand(
s.ksusepxopt,4),4,'ENABLED',decode(bitan
d(s.ksusepxopt,8),8,'FORCED','DISABLED')
),decode(bitand(s.ksusepxopt,2),2,'FORCE
D',decode(bitand(s.ksusepxopt,1),1,'DISA
BLED','ENABLED')),decode(bitand(s.ksusep
xopt,32),32,'FORCED',decode(bitand(s.ksu
sepxopt,16),16,'DISABLED','ENABLED')),
s.ksusecqd, s.ksuseclid, decode(s.ksuseb
locker,4294967295,'UNKNOWN', 4294967294
, 'UNKNOWN',4294967293,'UNKNOWN',4294967
292,'NO HOLDER', 4294967291,'NOT IN WAI
T','VALID'),decode(s.ksuseblocker, 42949
67295,to_number(null),4294967294,to_numb
er(null), 4294967293,to_number(null), 42
94967292,to_number(null),4294967291, to
_number(null),bitand(s.ksuseblocker, 214
7418112)/65536),decode(s.ksuseblocker, 4
294967295,to_number(null),4294967294,to_
number(null), 4294967293,to_number(null)
, 4294967292,to_number(null),4294967291,
to_number(null),bitand(s.ksuseblocker,
65535)),s.ksuseseq, s.ksuseopc,e.ksledn
am, e.ksledp1, s.ksusep1,s.ksusep1r,e.ks
ledp2, s.ksusep2,s.ksusep2r,e.ksledp3,s.
ksusep3,s.ksusep3r,e.ksledclassid, e.ks
ledclass#, e.ksledclass, decode(s.ksuset
im,0,0,-1,-1,-2,-2, decode(round(s.ksuse
tim/10000),0,-1,round(s.ksusetim/10000))
), s.ksusewtm,decode(s.ksusetim, 0, 'WAI
TING', -2, 'WAITED UNKNOWN TIME', -1, '
WAITED SHORT TIME', decode(round(s.ksu
setim/10000),0,'WAITED SHORT TIME','WAIT
ED KNOWN TIME')),s.ksusesvc, decode(bita
nd(s.ksuseflg2,32),32,'ENABLED','DISABLE
D'),decode(bitand(s.ksuseflg2,64),64,'TR
UE','FALSE'),decode(bitand(s.ksuseflg2,1
28),128,'TRUE','FALSE')from x$ksuse s, x
$ksled e where bitand(s.ksspaflg,1)!=0 a
nd bitand(s.ksuseflg,1)!=0 and s.ksuseop
c=e.indx
1332 10/10/18 21:58 1 1.26 7tc5u8t3mmzgf 2144485289 0.00 0.00 0.00 0 180 0 0 180 17 0 0.00 57 select cachedblk, cachehit, logicalread
from tab_stats$ where obj#=:1
1332 10/10/18 21:58 1 1.26 ghvnum1dfm05q 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 58 select /*+ top_sql_9331 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 cqgv56fmuj63x 1310495014 0.00 0.00 0.00 0 156 1 39 22 22 0 0.00 59 select owner#,name,namespace,remoteowner
,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,d_attrs from dependency$ d
, obj$ o where d_obj#=:1 and p_obj#=obj#
(+) order by order#
1332 10/10/18 21:58 1 1.26 2ta3r31t0z08a 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 60 select /*+ top_sql_7523 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 59kybrhwdk040 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 61 select /*+ top_sql_9853 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 9wf93m8rau04d 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 62 select /*+ top_sql_8652 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 fuhanmqynt02p 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 63 select /*+ top_sql_9743 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 2q93zsrvbdw48 2874733959 0.00 0.00 0.00 0 136 0 6 65 20 0 0.00 64 select grantee#,privilege#,nvl(col#,0),m
ax(mod(nvl(option$,0),2))from objauth$ w
here obj#=:1 group by grantee#,privilege
#,nvl(col#,0) order by grantee#
1332 10/10/18 21:58 1 1.26 1dzkrjdvjt03n 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 65 select /*+ top_sql_8498 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 0s5uzug7cr029 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 66 select /*+ top_sql_8896 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 gq6kp76f1307x 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 67 select /*+ top_sql_8114 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 bfa3qt29jg07b 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 68 select /*+ top_sql_9608 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 9nk1jwamsy02n 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 69 select /*+ top_sql_9724 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 2sry32gac2079 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 70 select /*+ top_sql_7316 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 atp84rb53u072 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 71 select /*+ top_sql_9091 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 f8pavn1bvsj7t 1224215794 0.00 0.00 0.00 0 144 0 1 71 15 0 0.00 72 select con#,obj#,rcon#,enabled,nvl(defer
,0) from cdef$ where robj#=:1
1332 10/10/18 21:58 1 1.26 1wb6wx2nb8093 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 73 select /*+ top_sql_9446 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 3czfc573u505f 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 74 select /*+ top_sql_9702 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 c31xpspd8n08k 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 75 select /*+ top_sql_8045 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 cbdfcfcp1pgtp 142600749 0.00 0.00 0.00 0 74 0 74 37 37 0 0.00 76 select intcol#, col# , type#, spare1, se
gcol#, charsetform from partcol$ where
obj# = :1 order by pos#
1332 10/10/18 21:58 1 1.26 3k07s1fhv6043 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 77 select /*+ top_sql_9321 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 0qh6dbs79n06s 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 78 select /*+ top_sql_9052 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 9xt7tfmzut065 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 79 select /*+ top_sql_9429 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 28hu85p69d047 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 80 select /*+ top_sql_8978 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 4w2jxfhrfh037 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 81 select /*+ top_sql_7464 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 5x83v19wj302c 2439216106 0.00 0.00 0 3 0 0 0 1 0 0.00 82 select inst_id,sid_knst,serial_knst,appl
ynum_knstasl, applyname_knstasl,slavid_k
nstasl,decode(state_knstasl,0,'IDLE',1,'
POLL SHUTDOWN',2,'RECORD LOW-WATERMARK',
3,'ADD PARTITION',4,'DROP PARTITION',5,'
EXECUTE TRANSACTION',6,'WAIT COMMIT',7,'
WAIT DEPENDENCY',8,'GET TRANSACTIONS',9,
'WAIT FOR NEXT CHUNK',12,'ROLLBACK TRANS
ACTION',13,'TRANSACTION CLEANUP',14,'REQ
UEST UA SESSION',15,'INITIALIZING'), xid
_usn_knstasl,xid_slt_knstasl,xid_sqn_kns
tasl,cscn_knstasl,depxid_usn_knstasl,dep
xid_slt_knstasl,depxid_sqn_knstasl,depcs
cn_knstasl,msg_num_knstasl,total_assigne
d_knstasl,total_admin_knstasl,total_roll
backs_knstasl,total_msg_knstasl, last_ap
ply_time_knstasl, last_apply_msg_num_kns
tasl,last_apply_msg_time_knstasl,elapsed
_dequeue_time_knstasl, elapsed_apply_tim
e_knstasl from x$knstasl x where type_kn
st=2 and exists (select 1 from v$session
s where s.sid=x.sid_knst and s.serial#=
x.serial_knst)
1332 10/10/18 21:58 1 1.26 5kzjxrqgqv03x 3724264953 sqlplus@dbrocaix01.b 0.00 0.00 0.00 0 223 0 1 1 1 0 0.00 83 select /*+ top_sql_6849 */ count(*) from
xxxxxxx.com (TNS V1- t1
V3)
1332 10/10/18 21:58 1 1.26 8hd36umbhpgsz 3362549386 0.00 0.00 0.00 0 74 0 37 37 37 0 0.00 84 select parttype, partcnt, partkeycols, f
lags, defts#, defpctfree, defpctused, de
finitrans, defmaxtrans, deftiniexts, def
extsize, defminexts, defmaxexts, defextp
ct, deflists, defgroups, deflogging, spa
re1, mod(spare2, 256) subparttype, mod(t
runc(spare2/256), 256) subpartkeycols, m
od(trunc(spare2/65536), 65536) defsubpar
tcnt, mod(trunc(spare2/4294967296), 256)
defhscflags from partobj$ where obj# =
:1
1332 10/10/18 21:58 1 1.26 ga9j9xk5cy9s0 1516415349 0.00 0.00 0.00 0 55 0 18 12 12 0 0.00 85 select /*+ index(idl_sb4$ i_idl_sb41) +*
/ piece#,length,piece from idl_sb4$ wher
e obj#=:1 and part=:2 and version=:3 ord
er by piece#
1332 10/10/18 21:58 1 1.26 1fkh93md0802n 2485227045 0.00 0.00 0 5 0 0 0 1 0 0.00 86 select LOW_OPTIMAL_SIZE, HIG
H_OPTIMAL_SIZE, OPTIMAL_EXECUT
IONS, ONEPASS_EXECUTIONS,
MULTIPASSES_EXECUTIONS,
TOTAL_EXECUTIONS from GV$SQL_WORKAR
EA_HISTOGRAM where INST_ID = USERENV
('Instance')
1332 10/10/18 21:58 1 1.26 8swypbbr0m372 893970548 0.00 0.00 0.00 0 106 0 31 22 22 0 0.00 87 select order#,columns,types from access$
where d_obj#=:1
1332 10/10/18 21:58 1 1.26 dpvv2ua0tfjcv 467914355 0.00 0.00 0.00 0 19 0 0 19 18 0 0.00 88 select cachedblk, cachehit, logicalread
from ind_stats$ where obj#=:1
1332 10/10/18 21:58 1 1.26 6qz82dptj0qr7 2819763574 0.00 0.00 0.00 0 16 0 4 5 5 0 0.00 89 select l.col#, l.intcol#, l.lobj#, l.ind
#, l.ts#, l.file#, l.block#, l.chunk, l.
pctversion$, l.flags, l.property, l.rete
ntion, l.freepools from lob$ l where l.o
bj# = :1 order by l.intcol# asc
1332 10/10/18 21:58 1 1.26 b1wc53ddd6h3p 1637390370 0.00 0.00 0.00 0 9 0 3 3 3 0 0.00 90 select audit$,options from procedure$ wh
ere obj#=:1
90 rows selected.
}}}
''Even when not joined with dba_hist_sqltext, it still shows 90 rows''
{{{
select snap_id, sql_id, module, elap, cput, exec, time_rank
from
(
select s0.snap_id,
e.sql_id,
max(e.module) module,
sum(e.elapsed_time_delta)/1000000 elap,
sum(e.cpu_time_delta)/1000000 cput,
sum(e.executions_delta) exec,
DENSE_RANK() OVER (
PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
from
dba_hist_snapshot s0,
dba_hist_sqlstat e
where e.dbid = s0.dbid
and e.instance_number = s0.instance_number
and e.snap_id = s0.snap_id + 1
group by s0.snap_id, e.sql_id, e.elapsed_time_delta
)
where
-- time_rank <= 5 and
snap_id in (1332)
SNAP_ID SQL_ID MODULE ELAP CPUT EXEC TIME_RANK
---------- ------------- ---------------------------------------------------------------- ---------- ---------- ---------- ----------
1332 404qh4yx36y1v 9.254373 9.155145 10000 1
1332 bunssq950snhf .801489 .801489 1 2
1332 7vgmvmy8vvb9s .083412 .083412 1 3
1332 6hwjmjgrpsuaa .050253 .015212 1 4
1332 84qubbrsr0kfn .044464 .044464 1 5
1332 db78fxqxwxt7r .04239 .031295 379 6
1332 96g93hntrzjtr .040821 .040821 1346 7
1332 130dvvr5s8bgn .04013 .04013 18 8
1332 6yd53x1zjqts9 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .039 .016832 1 9
1332 70utgu2587mhs .035026 .012265 1 10
1332 c3zymn7x3k6wy .033542 .033542 19 11
1332 bpxnmunkcywzg sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .033223 .003645 1 12
1332 3252fkazwq930 .033124 .022902 1 13
1332 fdxrh8tzyw0yw .030767 .028199 1 14
1332 7k6zct1sya530 .028502 .028502 1 15
1332 7qjhf5dzmazsr .028234 .006275 1 16
1332 32wqka2zwvu65 .025672 .025672 1 17
1332 53saa2zkr6wc3 .025226 .025226 463 18
1332 4qju99hqmn81x .024763 .024763 1 19
1332 32whwm2babwpt .022436 .022436 1 20
1332 fktqvw2wjxdxc .022079 .022079 1 21
1332 2ym6hhaq30r73 .02171 .02171 476 22
1332 71y370j6428cb .021454 .017777 1 23
1332 f9nzhpn9854xz .021191 .020515 1 24
1332 bqnn4c3gjtmgu .02018 .02018 1 25
1332 39m4sx9k63ba2 .019843 .008332 12 26
1332 7fa2r0xkfbs6b sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .019497 .015929 1 27
1332 1uk5m5qbzj1vt sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .019044 .019044 0 28
1332 cp3gpd7z878w8 .018802 .018802 1 29
1332 dsd2yqyggtc59 .018707 .016998 0 30
1332 bu95jup1jp5t3 .018591 .018301 1 31
1332 350myuyx0t1d6 .01829 .017399 1 32
1332 f71p3w4xx1pfc sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .01743 .016771 1 33
1332 c6awqs517jpj0 .01715 .004729 12 34
1332 agpd044zj368m .0166 .016206 1 35
1332 f3wcc30napt5a sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .016506 .016506 1 36
1332 71k5024zn7c9a .016167 .016167 1 37
1332 83taa7kaw59c1 .016105 .016105 69 38
1332 cvn54b7yz0s8u .015882 .004263 12 39
1332 66gs90fyynks7 .015312 .015312 1 40
1332 5ngzsfstg8tmy .013302 .013302 107 41
1332 7ng34ruy5awxq .013015 .013015 68 42
1332 79uvsz1g1c168 .012924 .012924 1 43
1332 b0cxc52zmwaxs .01172 .011716 1 44
1332 1tn90bbpyjshq .010358 .010358 1 45
1332 a73wbv1yu8x5c .009111 .009111 71 46
1332 6c06mfv01xt2h .008496 .008496 1 47
SNAP_ID SQL_ID MODULE ELAP CPUT EXEC TIME_RANK
---------- ------------- ---------------------------------------------------------------- ---------- ---------- ---------- ----------
1332 45jb7msfn4x4m .007906 .007906 0 48
1332 asvzxj61dc5vs .007837 .007837 125 49
1332 04xtrk7uyhknh .00661 .00661 42 50
1332 6769wyy3yf66f .006371 .006371 78 51
1332 1gu8t96d0bdmu .006356 .006356 59 52
1332 88brhumsyg325 .005116 .005116 0 53
1332 7rx9z1ddww1j2 .004431 .004431 0 54
1332 6aq34nj2zb2n7 .004392 .004392 65 55
1332 17k8dh7vntd3w .003737 .003737 0 56
1332 7tc5u8t3mmzgf .003626 .003626 180 57
1332 ghvnum1dfm05q sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .003133 .003133 1 58
1332 cqgv56fmuj63x .003087 .003087 22 59
1332 2ta3r31t0z08a sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002808 .002808 1 60
1332 59kybrhwdk040 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002756 .002756 1 61
1332 9wf93m8rau04d sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002671 .002671 1 62
1332 fuhanmqynt02p sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002665 .002665 1 63
1332 2q93zsrvbdw48 .002652 .002652 65 64
1332 1dzkrjdvjt03n sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002596 .002596 1 65
1332 0s5uzug7cr029 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002508 .002508 1 66
1332 gq6kp76f1307x sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002491 .002491 1 67
1332 bfa3qt29jg07b sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002475 .002475 1 68
1332 9nk1jwamsy02n sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002465 .002465 1 69
1332 2sry32gac2079 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002461 .002461 1 70
1332 atp84rb53u072 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002449 .002449 1 71
1332 f8pavn1bvsj7t .002441 .002441 71 72
1332 1wb6wx2nb8093 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .00243 .00243 1 73
1332 3czfc573u505f sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002369 .002369 1 74
1332 c31xpspd8n08k sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002352 .002352 1 75
1332 cbdfcfcp1pgtp .002347 .002347 37 76
1332 3k07s1fhv6043 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002305 .002305 1 77
1332 0qh6dbs79n06s sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002299 .002299 1 78
1332 9xt7tfmzut065 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002274 .002274 1 79
1332 28hu85p69d047 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002269 .002269 1 80
1332 4w2jxfhrfh037 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002238 .002238 1 81
1332 5x83v19wj302c .002196 .002196 0 82
1332 5kzjxrqgqv03x sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3) .002108 .002108 1 83
1332 8hd36umbhpgsz .002057 .002057 37 84
1332 ga9j9xk5cy9s0 .002001 .002001 12 85
1332 1fkh93md0802n .001568 .001568 0 86
1332 8swypbbr0m372 .001303 .001303 22 87
1332 dpvv2ua0tfjcv .000683 .000683 19 88
1332 6qz82dptj0qr7 .000329 .000329 5 89
1332 b1wc53ddd6h3p .000242 .000242 3 90
90 rows selected.
}}}
''Even if you query dba_hist_sqlstat alone, it will still return 90 rows''
{{{
select count(*) from dba_hist_sqlstat where snap_id = 1333 -- returns 90
-- NOTE: the values from snap 1333 are actually what you see in the AWR report for 1332-1333, so it's just getting the end value..
-- I need a way to show them under 1332, that's why I have to do the SQL trick e.snap_id = s0.snap_id + 1
select count(*) -- also returns 90
from
(
select s0.snap_id,
e.sql_id,
max(e.module) module,
sum(e.elapsed_time_delta)/1000000 elap,
sum(e.cpu_time_delta)/1000000 cput,
sum(e.executions_delta) exec,
DENSE_RANK() OVER (
PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
from
dba_hist_snapshot s0,
dba_hist_sqlstat e
where e.dbid = s0.dbid
and e.instance_number = s0.instance_number
and e.snap_id = s0.snap_id + 1
group by s0.snap_id, e.sql_id, e.elapsed_time_delta
)
where
-- time_rank <= 5 and
snap_id in (1332)
select * from dba_hist_sqlstat where snap_id = 1333 order by elapsed_time_delta desc -- will show SQL_ID 404qh4yx36y1v, bunssq950snhf, 7vgmvmy8vvb9s, 6hwjmjgrpsuaa, 84qubbrsr0kfn as top five
select snap_id, sql_id, module, elap, cput, exec, time_rank -- will show SQL_ID 404qh4yx36y1v, bunssq950snhf, 7vgmvmy8vvb9s, 6hwjmjgrpsuaa, 84qubbrsr0kfn as top five
from
(
select s0.snap_id,
e.sql_id,
max(e.module) module,
sum(e.elapsed_time_delta)/1000000 elap,
sum(e.cpu_time_delta)/1000000 cput,
sum(e.executions_delta) exec,
DENSE_RANK() OVER (
PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
from
dba_hist_snapshot s0,
dba_hist_sqlstat e
where e.dbid = s0.dbid
and e.instance_number = s0.instance_number
and e.snap_id = s0.snap_id + 1
group by s0.snap_id, e.sql_id, e.elapsed_time_delta
)
where
-- time_rank <= 5 and
snap_id in (1332)
select sql_id from dba_hist_sqlstat where snap_id = 1333 -- will return zero
minus
select sql_id
from
(
select s0.snap_id,
e.sql_id,
max(e.module) module,
sum(e.elapsed_time_delta)/1000000 elap,
sum(e.cpu_time_delta)/1000000 cput,
sum(e.executions_delta) exec,
DENSE_RANK() OVER (
PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
from
dba_hist_snapshot s0,
dba_hist_sqlstat e
where e.dbid = s0.dbid
and e.instance_number = s0.instance_number
and e.snap_id = s0.snap_id + 1
group by s0.snap_id, e.sql_id, e.elapsed_time_delta
)
where
-- time_rank <= 5 and
snap_id in (1332)
}}}
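To double check the snap_id + 1 trick, the interval times in dba_hist_snapshot (standard columns of that view) show which window the snap 1333 deltas actually cover:
{{{
-- the deltas stored under snap_id 1333 describe the interval that ENDS at snap 1333,
-- i.e. the 1332 -> 1333 window of the AWR report
select snap_id, begin_interval_time, end_interval_time
from   dba_hist_snapshot
where  snap_id in (1332, 1333)
order  by snap_id;
}}}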
''Some other queries I used''
select count(*) from DBA_HIST_SQLSTAT where snap_id = 1349
select count(*) from dba_hist_sqltext where snap_id = 1349
-- 50 rows appeared
select * from dba_hist_sqltext where lower(sql_text) like '% top_sql_%'
-- starts at 5505 and ends at 9999, with 8KB sharable_mem per cursor (that is when dynamic sampling is 0)
select * from v$sql where lower(sql_text) like '%top_sql%' order by sql_text
-- starts at 5505 and ends at 9999 (that is when dynamic sampling is 0)
select * from v$sqlstats where sql_text like '%top_sql%' order by sql_text
-- starts at 5505 and ends at 9999 (that is when dynamic sampling is 0)
select * from v$sqlarea where sql_text like '%top_sql%' order by sql_text
-- starts at 2890 and ends at 9999 when dynamic sampling is 0; ''if dynamic sampling is 2, it starts at 8266 and ends at 9999''
select * from v$sqltext where sql_text like '%top_sql%' order by 6 -- starts at 8266 and ends at 0
''Also, on the row count of the 10K executions:''
select count(*) from v$sql -- 5853
select count(*) from v$sqlstats -- 5916
select count(*) from v$sqlarea -- 5839
select count(*) from dba_hist_sqltext -- this view does not have SNAP_ID.. but the total row count is 3243
http://alternativeto.net/software/balsamiq-mockups/
<<showtoc>>
! workflow
! installation and upgrade
! commands
! performance and troubleshooting
!! sizing and capacity planning
!! benchmark
!! modeling
! high availability
! security
! time series mongodb
https://www.mongodb.com/blog/post/time-series-data-and-mongodb-part-2-schema-design-best-practices
https://medium.com/oracledevs/build-a-go-lang-based-rest-api-on-top-of-cassandra-3ac5d9316852
https://www3.nd.edu/~dial/publications/xian2018restful.pdf
! xxxxxxxxxxxxxxxxxxxxxxxx
! installation
!! on 14.04 ubuntu
https://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
https://www.howtoforge.com/tutorial/install-mongodb-on-ubuntu-14.04/ <-- good stuff
! references
http://rsmith.co/2012/11/05/mongodb-gotchas-and-how-to-avoid-them/
picking a data store
http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/
! SQL to MongoDB Mapping Chart
http://stackoverflow.com/questions/11507995/sql-view-in-mongodb
https://docs.mongodb.com/manual/reference/sql-comparison/
https://docs.mongodb.com/manual/reference/sql-aggregation-comparison/
http://www.evernote.com/shard/s48/sh/b4ed1850-abe5-4c21-8871-3a6d4584a456/f142132af40954b6b385f531738a468a
http://www.evernote.com/shard/s48/sh/4e630767-1a54-44b0-a885-7a8ba2bb3afe/938bc5de1fe3dcc7bb2249fd42927684
How to mount an LVM partition on another system
http://www.techbytes.ca/techbyte118.html <-- first article i saw
http://tldp.org/HOWTO/LVM-HOWTO/recipemovevgtonewsys.html <-- lvm howto
http://forgetmenotes.blogspot.com/2009/06/how-to-mount-lvm-partition.html
http://www.thegibson.org/blog/archives/467 <-- "WARNING: Duplicate VG name"
http://forums.fedoraforum.org/archive/index.php/t-183575.html
http://www.linuxquestions.org/questions/linux-general-1/how-to-rename-a-vol-group-433993/ <-- rename VG
http://forums13.itrc.hp.com/service/forums/questionanswer.do?admit=109447627+1286430324270+28353475&threadId=1133855
http://www.gossamer-threads.com/lists/gentoo/user/215444
http://www.linuxquestions.org/questions/linux-general-1/lvm-stop-functioning-after-unmounting-usr-660010/
http://evuraan.blogspot.com/2005/05/sbinlvmstatic-in-rhel40-systems.html <-- lvm.static
ubuntu
http://www.linuxquestions.org/questions/fedora-35/how-can-i-mount-lvm-partition-in-ubuntu-569507/
http://www.linux-sxs.org/storage/fedora2ubuntu.html
http://www.brandonhutchinson.com/Mounting_a_Linux_LVM_volume.html
http://linux.byexamples.com/archives/321/fstab-with-uuid/
http://ubuntuforums.org/showthread.php?t=283131 <-- great detail
http://www.g-loaded.eu/2009/01/04/always-use-a-block-device-label-or-its-uuid-in-fstab/
How To Setup LUN Persistence in non-Multipathing environment [ID 1076299.1]
How to Configure Oracle Enterprise Linux to be Highly Available Using RAID1 [ID 759260.1]
http://husnusensoy.wordpress.com/2008/06/13/moving-any-file-between-asm-diskgroups-1/
{{{
mkdir backup
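# how this works: find -printf emits one shell command per file (%h = directory,
# %f = filename, %TY%Tm%Td = last-modified date as YYYYMMDD) and pipes them to sh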
# UP TO PER DAY TIME FRAME
find -type f -name '*' -printf "mkdir -p backup/%TY%Tm%Td\n" | sort | uniq | sh
find -type f -name '*' -printf "mv %h/%f backup/%TY%Tm%Td/%f\n" | sh
# UP TO PER HOUR TIME FRAME
find -type f -name '*' -printf "mkdir -p backup/%TY%Tm%Td%TH\n" | sort | uniq | sh
find -type f -name '*' -printf "mv %h/%f backup/%TY%Tm%Td%TH/%f\n" | sh
# UP TO PER MINUTE TIME FRAME
find -type f -name '*' -printf "mkdir -p backup/%TY%Tm%Td%TH%TM\n" | sort | uniq | sh
find -type f -name '*' -printf "mv %h/%f backup/%TY%Tm%Td%TH%TM/%f\n" | sh
# UP TO PER MINUTE AND FILTER HCMPRD6 FILE
find -type f -name '*' -printf "mkdir -p backup/%TY%Tm%Td%TH%TM\n" | grep 2012031912 | sh
find -type f -name '*' -printf "mv %h/%f backup/%TY%Tm%Td%TH%TM/%f\n" | grep 2012031912 | sh
}}}
http://www.unix.com/unix-dummies-questions-answers/144957-move-file-based-time-stamp.html
! clean up of aud files
{{{
cd /u01/app/oracle/admin/dw/adump/
rm -rf * <-- will error with "Argument list too long"
[root@desktopserver adump]# ls -1 | wc -l
9602
# UP TO PER DAY TIME FRAME
find -type f -name '*' -printf "mkdir -p backup/%TY%Tm%Td\n" | sort | uniq | sh
find -type f -name '*' -printf "mv %h/%f backup/%TY%Tm%Td/%f\n" | sh
[root@desktopserver adump]# ls -ltr
total 4
drwxr-xr-x 31 root root 4096 Mar 2 21:49 backup
[root@desktopserver adump]# cd backup/
[root@desktopserver backup]# ls
20120311 20120420 20120427 20120521 20120603 20120613 20120928 20121204 20121210 20130124
20120410 20120421 20120429 20120525 20120604 20120920 20121018 20121205 20121212 20130127
20120419 20120424 20120504 20120529 20120605 20120927 20121019 20121206 20121213
}}}
or do this
{{{
find . -name '*aud' | xargs rm
find . -name '*trc' | xargs rm
find . -name '*trm' | xargs rm
find . -name '*xml' | xargs rm
}}}
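The plain find | xargs rm above breaks on filenames with spaces or newlines; a safer sketch with GNU find (and, for audit/trace files on 11g+, adrci can purge by age instead):
{{{
find . -name '*.aud' -print0 | xargs -0 rm -f   # NUL-delimited, space-safe
find . -name '*.trc' -delete                    # -delete avoids spawning rm at all
# or let the database clean up: purge trace files older than 7 days (10080 minutes)
adrci exec="purge -age 10080 -type trace"
}}}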
{{{
reports <-- source dir
csvfiles <-- target dir
mkdir csvfiles
find reports -name '*.txt' | while read file; do
cp "$file" "csvfiles/$(tr / _ <<< "$file")"
done
}}}
on tarfiles
{{{
mkdir tarfiles
mkdir tarfilesconsolidated
find tarfiles -name '*.tar' | while read file; do
cp "$file" "tarfilesconsolidated/$(tr / _ <<< "$file")"
done
for i in *.tar; do tar xf $i; done
gunzip -vf *.gz
}}}
then consolidate the textfiles
{{{
mkdir awr_topevents
mv *awr_topevents-tableau-* awr_topevents
cat awr_topevents/*csv > awr_topevents.txt
mkdir awr_services
mv *awr_services-tableau-* awr_services
cat awr_services/*csv > awr_services.txt
mkdir awr_cpuwl
mv *awr_cpuwl-tableau-* awr_cpuwl
cat awr_cpuwl/*csv > awr_cpuwl.txt
mkdir awr_sysstat
mv *awr_sysstat-tableau-* awr_sysstat
cat awr_sysstat/*csv > awr_sysstat.txt
mkdir awr_topsqlx
mv *awr_topsqlx-tableau-* awr_topsqlx
cat awr_topsqlx/*csv > awr_topsqlx.txt
mkdir awr_iowl
mv *awr_iowl-tableau-* awr_iowl
cat awr_iowl/*csv > awr_iowl.txt
mkdir awr_storagesize_summary
mv *awr_storagesize_summary-tableau-* awr_storagesize_summary
cat awr_storagesize_summary/*csv > awr_storagesize_summary.txt
mkdir awr_storagesize_detail
mv *awr_storagesize_detail-tableau-* awr_storagesize_detail
cat awr_storagesize_detail/*csv > awr_storagesize_detail.txt
}}}
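The repeated mkdir/mv/cat above can be collapsed into one loop over the category names (same behavior, just less typing):
{{{
for t in topevents services cpuwl sysstat topsqlx iowl \
         storagesize_summary storagesize_detail; do
  mkdir -p awr_$t
  mv *awr_${t}-tableau-* awr_$t
  cat awr_$t/*csv > awr_$t.txt
done
}}}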
shortcut can also be like this
{{{
find . -name '*awr_sysstat*.txt' | while read file; do
cat "$file" >> awr_sysstat.txt
done
}}}
http://stackoverflow.com/questions/14345714/recursively-moving-all-files-of-a-specific-type-into-a-target-directory-in-bash
http://www.usn-it.de/index.php/2007/03/09/how-to-move-or-add-a-controlfile-when-asm-is-involved/
How To Move SQL Profiles From One Database To Another Database (Doc ID 457531.1)
! how to run two versions of mozilla (need to create a new profile)
{{{
"C:\Program Files (x86)\MozillaFirefox4RC2\firefox.exe" -P "karlarao" -no-remote
}}}
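To create the separate profile in the first place, launch the Profile Manager (same install path as above):
{{{
"C:\Program Files (x86)\MozillaFirefox4RC2\firefox.exe" -ProfileManager -no-remote
}}}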
! addons
! run 2nd instance of firefox (older version)
https://support.mozilla.org/en-US/questions/974208
http://kb.mozillazine.org/Opening_a_new_instance_of_Firefox_with_another_profile
https://developer.mozilla.org/en-US/Firefox/Multiple_profiles
Name: MptwBlack
Background: #000
Foreground: #fff
PrimaryPale: #333
PrimaryLight: #555
PrimaryMid: #888
PrimaryDark: #aaa
SecondaryPale: #111
SecondaryLight: #222
SecondaryMid: #555
SecondaryDark: #888
TertiaryPale: #222
TertiaryLight: #666
TertiaryMid: #888
TertiaryDark: #aaa
Error: #300
This is in progress. Help appreciated.
Name: MptwBlue
Background: #fff
Foreground: #000
PrimaryPale: #cdf
PrimaryLight: #57c
PrimaryMid: #114
PrimaryDark: #012
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
/***
|Name:|MptwConfigPlugin|
|Description:|Miscellaneous tweaks used by MPTW|
|Version:|1.0 ($Rev: 3646 $)|
|Date:|$Date: 2008-02-27 02:34:38 +1000 (Wed, 27 Feb 2008) $|
|Source:|http://mptw.tiddlyspot.com/#MptwConfigPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#MptwConfigPlugin|
!!Note: instead of editing this you should put overrides in MptwUserConfigPlugin
***/
//{{{
var originalReadOnly = readOnly;
var originalShowBackstage = showBackstage;
config.options.chkHttpReadOnly = false; // means web visitors can experiment with your site by clicking edit
readOnly = false; // needed because the above doesn't work any more post 2.1 (??)
showBackstage = true; // show backstage for same reason
config.options.chkInsertTabs = true; // tab inserts a tab when editing a tiddler
config.views.wikified.defaultText = ""; // don't need message when a tiddler doesn't exist
config.views.editor.defaultText = ""; // don't need message when creating a new tiddler
config.options.chkSaveBackups = true; // do save backups
config.options.txtBackupFolder = 'twbackup'; // put backups in a backups folder
config.options.chkAutoSave = (window.location.protocol == "file:"); // do autosave if we're in local file
config.mptwVersion = "2.5.3";
config.macros.mptwVersion={handler:function(place){wikify(config.mptwVersion,place);}};
if (config.options.txtTheme == '')
config.options.txtTheme = 'MptwTheme';
// add to default GettingStarted
config.shadowTiddlers.GettingStarted += "\n\nSee also [[MPTW]].";
// add select theme and palette controls in default OptionsPanel
config.shadowTiddlers.OptionsPanel = config.shadowTiddlers.OptionsPanel.replace(/(\n\-\-\-\-\nAlso see AdvancedOptions)/, "{{select{<<selectTheme>>\n<<selectPalette>>}}}$1");
// these are used by ViewTemplate
config.mptwDateFormat = 'DD/MM/YY';
config.mptwJournalFormat = 'Journal DD/MM/YY';
//}}}
Name: MptwGreen
Background: #fff
Foreground: #000
PrimaryPale: #9b9
PrimaryLight: #385
PrimaryMid: #031
PrimaryDark: #020
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
Name: MptwRed
Background: #fff
Foreground: #000
PrimaryPale: #eaa
PrimaryLight: #c55
PrimaryMid: #711
PrimaryDark: #500
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
|Name|MptwRounded|
|Description|Mptw Theme with some rounded corners (Firefox only)|
|ViewTemplate|MptwTheme##ViewTemplate|
|EditTemplate|MptwTheme##EditTemplate|
|PageTemplate|MptwTheme##PageTemplate|
|StyleSheet|##StyleSheet|
!StyleSheet
/*{{{*/
[[MptwTheme##StyleSheet]]
.tiddler,
.sliderPanel,
.button,
.tiddlyLink,
.tabContents
{ -moz-border-radius: 1em; }
.tab {
-moz-border-radius-topleft: 0.5em;
-moz-border-radius-topright: 0.5em;
}
#topMenu {
-moz-border-radius-bottomleft: 2em;
-moz-border-radius-bottomright: 2em;
}
/*}}}*/
Name: MptwSmoke
Background: #fff
Foreground: #000
PrimaryPale: #F5F5F5
PrimaryLight: #5C84A8
PrimaryMid: #111
PrimaryDark: #000
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
|Name|MptwStandard|
|Description|Mptw Theme with the default TiddlyWiki PageLayout and Styles|
|ViewTemplate|MptwTheme##ViewTemplate|
|EditTemplate|MptwTheme##EditTemplate|
Name: MptwTeal
Background: #fff
Foreground: #000
PrimaryPale: #B5D1DF
PrimaryLight: #618FA9
PrimaryMid: #1a3844
PrimaryDark: #000
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #f8f8f8
TertiaryLight: #bbb
TertiaryMid: #999
TertiaryDark: #888
Error: #f88
|Name|MptwTheme|
|Description|Mptw Theme including custom PageLayout|
|PageTemplate|##PageTemplate|
|ViewTemplate|##ViewTemplate|
|EditTemplate|##EditTemplate|
|StyleSheet|##StyleSheet|
http://mptw.tiddlyspot.com/#MptwTheme ($Rev: 1829 $)
!PageTemplate
<!--{{{-->
<div class='header' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
<div class='headerShadow'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
<div class='headerForeground'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
</div>
<!-- horizontal MainMenu -->
<div id='topMenu' refresh='content' tiddler='MainMenu'></div>
<!-- original MainMenu menu -->
<!-- <div id='mainMenu' refresh='content' tiddler='MainMenu'></div> -->
<div id='sidebar'>
<div id='sidebarOptions' refresh='content' tiddler='SideBarOptions'></div>
<div id='sidebarTabs' refresh='content' force='true' tiddler='SideBarTabs'></div>
</div>
<div id='displayArea'>
<div id='messageArea'></div>
<div id='tiddlerDisplay'></div>
</div>
<!--}}}-->
!ViewTemplate
<!--{{{-->
[[MptwTheme##ViewTemplateToolbar]]
<div class="tagglyTagged" macro="tags"></div>
<div class='titleContainer'>
<span class='title' macro='view title'></span>
<span macro="miniTag"></span>
</div>
<div class='subtitle'>
(updated <span macro='view modified date {{config.mptwDateFormat?config.mptwDateFormat:"MM/0DD/YY"}}'></span>
by <span macro='view modifier link'></span>)
<!--
(<span macro='message views.wikified.createdPrompt'></span>
<span macro='view created date {{config.mptwDateFormat?config.mptwDateFormat:"MM/0DD/YY"}}'></span>)
-->
</div>
<div macro="showWhen tiddler.tags.containsAny(['css','html','pre','systemConfig']) && !tiddler.text.match('{{'+'{')">
<div class='viewer'><pre macro='view text'></pre></div>
</div>
<div macro="else">
<div class='viewer' macro='view text wikified'></div>
</div>
<div class="tagglyTagging" macro="tagglyTagging"></div>
<!--}}}-->
!ViewTemplateToolbar
<!--{{{-->
<div class='toolbar'>
<span macro="showWhenTagged systemConfig">
<span macro="toggleTag systemConfigDisable . '[[disable|systemConfigDisable]]'"></span>
</span>
<span macro="showWhenTagged systemTheme"><span macro="applyTheme"></span></span>
<span macro="showWhenTagged systemPalette"><span macro="applyPalette"></span></span>
<span macro="showWhen tiddler.tags.contains('css') || tiddler.title == 'StyleSheet'"><span macro="refreshAll"></span></span>
<span style="padding:1em;"></span>
<span macro='toolbar closeTiddler closeOthers +editTiddler deleteTiddler > fields syncing permalink references jump'></span> <span macro='newHere label:"new here"'></span>
<span macro='newJournalHere {{config.mptwJournalFormat?config.mptwJournalFormat:"MM/0DD/YY"}}'></span>
</div>
<!--}}}-->
!EditTemplate
<!--{{{-->
<div class="toolbar" macro="toolbar +saveTiddler saveCloseTiddler closeOthers -cancelTiddler cancelCloseTiddler deleteTiddler"></div>
<div class="title" macro="view title"></div>
<div class="editLabel">Title</div><div class="editor" macro="edit title"></div>
<div macro='annotations'></div>
<div class="editLabel">Content</div><div class="editor" macro="edit text"></div>
<div class="editLabel">Tags</div><div class="editor" macro="edit tags"></div>
<div class="editorFooter"><span macro="message views.editor.tagPrompt"></span><span macro="tagChooser"></span></div>
<!--}}}-->
!StyleSheet
/*{{{*/
/* a contrasting background so I can see where one tiddler ends and the other begins */
body {
background: [[ColorPalette::TertiaryLight]];
}
/* sexy colours and font for the header */
.headerForeground {
color: [[ColorPalette::PrimaryPale]];
}
.headerShadow, .headerShadow a {
color: [[ColorPalette::PrimaryMid]];
}
/* separate the top menu parts */
.headerForeground, .headerShadow {
padding: 1em 1em 0;
}
.headerForeground, .headerShadow {
font-family: 'Trebuchet MS', sans-serif;
font-weight:bold;
}
.headerForeground .siteSubtitle {
color: [[ColorPalette::PrimaryLight]];
}
.headerShadow .siteSubtitle {
color: [[ColorPalette::PrimaryMid]];
}
/* make shadow go and down right instead of up and left */
.headerShadow {
left: 1px;
top: 1px;
}
/* prefer monospace for editing */
.editor textarea, .editor input {
font-family: 'Consolas', monospace;
background-color:[[ColorPalette::TertiaryPale]];
}
/* sexy tiddler titles */
.title {
font-size: 250%;
color: [[ColorPalette::PrimaryLight]];
font-family: 'Trebuchet MS', sans-serif;
}
/* more subtle tiddler subtitle */
.subtitle {
padding:0px;
margin:0px;
padding-left:1em;
font-size: 90%;
color: [[ColorPalette::TertiaryMid]];
}
.subtitle .tiddlyLink {
color: [[ColorPalette::TertiaryMid]];
}
/* a little bit of extra whitespace */
.viewer {
padding-bottom:3px;
}
/* don't want any background color for headings */
h1,h2,h3,h4,h5,h6 {
background-color: transparent;
color: [[ColorPalette::Foreground]];
}
/* give tiddlers 3d style border and explicit background */
.tiddler {
background: [[ColorPalette::Background]];
border-right: 2px [[ColorPalette::TertiaryMid]] solid;
border-bottom: 2px [[ColorPalette::TertiaryMid]] solid;
margin-bottom: 1em;
padding:1em 2em 2em 1.5em;
}
/* make options slider look nicer */
#sidebarOptions .sliderPanel {
border:solid 1px [[ColorPalette::PrimaryLight]];
}
/* the borders look wrong with the body background */
#sidebar .button {
border-style: none;
}
/* this means you can put line breaks in SidebarOptions for readability */
#sidebarOptions br {
display:none;
}
/* undo the above in OptionsPanel */
#sidebarOptions .sliderPanel br {
display:inline;
}
/* horizontal main menu stuff */
#displayArea {
margin: 1em 15.7em 0em 1em; /* use the freed up space */
}
#topMenu br {
display: none;
}
#topMenu {
background: [[ColorPalette::PrimaryMid]];
color:[[ColorPalette::PrimaryPale]];
}
#topMenu {
padding:2px;
}
#topMenu .button, #topMenu .tiddlyLink, #topMenu a {
margin-left: 0.5em;
margin-right: 0.5em;
padding-left: 3px;
padding-right: 3px;
color: [[ColorPalette::PrimaryPale]];
font-size: 115%;
}
#topMenu .button:hover, #topMenu .tiddlyLink:hover {
background: [[ColorPalette::PrimaryDark]];
}
/* make 2.2 act like 2.1 with the invisible buttons */
.toolbar {
visibility:hidden;
}
.selected .toolbar {
visibility:visible;
}
/* experimental. this is a little borked in IE7 with the button
* borders but worth it I think for the extra screen realestate */
.toolbar { float:right; }
/* fix for TaggerPlugin. from sb56637. improved by FND */
.popup li .tagger a {
display:inline;
}
/* makes theme selector look a little better */
#sidebarOptions .sliderPanel .select .button {
padding:0.5em;
display:block;
}
#sidebarOptions .sliderPanel .select br {
display:none;
}
/* make it print a little cleaner */
@media print {
#topMenu {
display: none ! important;
}
/* not sure if we need all the importants */
.tiddler {
border-style: none ! important;
margin:0px ! important;
padding:0px ! important;
padding-bottom:2em ! important;
}
.tagglyTagging .button, .tagglyTagging .hidebutton {
display: none ! important;
}
.headerShadow {
visibility: hidden ! important;
}
.tagglyTagged .quickopentag, .tagged .quickopentag {
border-style: none ! important;
}
.quickopentag a.button, .miniTag {
display: none ! important;
}
}
/* get user styles specified in StyleSheet */
[[StyleSheet]]
/*}}}*/
|Name|MptwTrim|
|Description|Mptw Theme with a reduced header to increase useful space|
|ViewTemplate|MptwTheme##ViewTemplate|
|EditTemplate|MptwTheme##EditTemplate|
|StyleSheet|MptwTheme##StyleSheet|
|PageTemplate|##PageTemplate|
!PageTemplate
<!--{{{-->
<!-- horizontal MainMenu -->
<div id='topMenu' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
<span refresh='content' tiddler='SiteTitle' style="padding-left:1em;font-weight:bold;"></span>:
<span refresh='content' tiddler='MainMenu'></span>
</div>
<div id='sidebar'>
<div id='sidebarOptions'>
<div refresh='content' tiddler='SideBarOptions'></div>
<div style="margin-left:0.1em;"
macro='slider chkTabSliderPanel SideBarTabs {{"tabs \u00bb"}} "Show Timeline, All, Tags, etc"'></div>
</div>
</div>
<div id='displayArea'>
<div id='messageArea'></div>
<div id='tiddlerDisplay'></div>
</div>
For upgrading, see [[ImportTiddlers]].
URL: http://mptw.tiddlyspot.com/upgrade.html
/***
|Description:|A place to put your config tweaks so they aren't overwritten when you upgrade MPTW|
See http://www.tiddlywiki.org/wiki/Configuration_Options for other options you can set. In some cases where there are clashes with other plugins it might help to rename this to zzMptwUserConfigPlugin so it gets executed last.
***/
//{{{
// example: set your preferred date format
//config.mptwDateFormat = 'MM/0DD/YY';
//config.mptwJournalFormat = 'Journal MM/0DD/YY';
// example: set the theme you want to start with
//config.options.txtTheme = 'MptwRoundTheme';
// example: switch off autosave, switch on backups and set a backup folder
//config.options.chkSaveBackups = true;
//config.options.chkAutoSave = false;
//config.options.txtBackupFolder = 'backups';
// uncomment to disable 'new means new' functionality for the new journal macro
//config.newMeansNewForJournalsToo = false;
//}}}
''-- software download''
http://method-r.com/downloads
''-- changelog''
http://method-r.com/component/content/article/157
''product home page''
http://method-r.com/software/mrtools
''-- useful commands''
{{{
-- show which sqlid consumes the most R across all your trace files
mrskew *.trc --group='$sqlid'
-- show which files have the most R for the sqlid(s) that the first query identified as interesting.
mrskew *.trc --where='$sqlid eq "96g93hntrzjtr"' --group='$file'
-- show whether there's skew in the individual execution times of EXEC calls, reporting accounted-for time
mrskew *.trc --where='$sqlid eq "4c8mrs99xp26b"' --group='"$file $line"' --name=EXEC
-- or all calls.. It's possible that none of your executions bears any resemblance to the 218.10-second average response time per execution that AWR is reporting. It could be that one execution is responsible for almost all the response time, and the others are near zero. With mrskew, you'll know.
mrskew *.trc --where='$sqlid eq "4c8mrs99xp26b"' --group='"$file $line"'
-- count EXEC calls with mrskew using this... useful if you want to reconcile the average per execution with the AWR data; this is how you determine your denominator
mrskew *.trc --name=EXEC --where='$sqlid eq "4c8mrs99xp26b"'
-- use --select='$dur' and see the total response time attributable to your sqlid. This figure should match what AWR is telling you
mrskew *.trc --select='$dur' --where='$sqlid eq "4c8mrs99xp26b"'
-- with --select='$uaf', you'll be able to see how much of that response time for the given sqlid is unaccounted for by the trace data
mrskew *.trc --select='$uaf' --where='$sqlid eq "4c8mrs99xp26b"'
-- show whether there's skew in the individual execution times of EXEC calls, reporting the total duration ($dur)
mrskew *.trc --where='$sqlid eq "4c8mrs99xp26b"' --group='"$file $line"' --name=EXEC --select='$dur'
-- show whether there's skew in the individual execution times of EXEC calls, reporting the total unaccounted-for time ($uaf)
mrskew *.trc --where='$sqlid eq "4c8mrs99xp26b"' --group='"$file $line"' --name=EXEC --select='$uaf'
-- command below would be similar to the "Profile by Subroutine" of the Method R profiler
mrskew *.trc --select='$dur'
-- below shows the total UAF
mrskew *.trc --select='$uaf'
-- drill down on the SQL that has the most unaccounted for time
mrskew *.trc --select='$dur' --group='$sqlid'
mrskew *.trc --select='$uaf' --group='$sqlid'
-- give the latency numbers of smart scan stats
mrskew --name='smart.*scan' --ebucket *trc
-- group by storage server; shows the statistical distribution of calls and time spent
mrskew --name='smart.*scan' --group='$p1' *trc
-- group by module and account
mrskew *.trc --where='$mod eq "xxx" and $act eq "yyy"' --group='"$file $line"' --name=EXEC --select='$dur'
-- you have to compare an integer with $tim, so convert the human-readable timestamp to a tim with mrtim and use that in the comparison with $tim
$ mrtim '2011-05-10 05:00:00.000'
1305021600000000
$ mrtim '2011-05-10 05:15:00.000'
1305022500000000
$ mrskew *.trc --group='$sqlid' --where='1305021600000000 <= $tim and $tim <= 1305022500000000'
--
mrskew ODEV11_ora_14370.trc --group='"$sqlid $name"' --where='$tim == 1305035383.707178'
mrskew *.trc --group='$sqlid' --where='(1305035000.000000 <= $tim) and ($tim <= 1305035900.000000)'
}}}
* mrskew v1 doesn't recognize $sqlid, but you could use $hv in v1 to get the same kind of result.
* DURATION column is in seconds
* mrskew reports total elapsed times for the group clause that you use. It's not dividing by anything. That's one of the principal design criteria for this skew analysis tool.
* accounted-for (by c and ela) time
* unaccounted-for time (that is, the difference between $e and ($c + sum($ela for children of the calls))) ... In other words, I think that roughly 2/3 of your response time for that statement is being consumed by processes (42 of them) that want CPU time but have been preempted and can't get it.
* The AWR habit of reporting averages (such as response time divided by execution count) actually hides important phenomena that mrskew can help you find
''other examples...''
''mrls''
http://method-r.com/component/content/article/124#examples
''mrtim''
http://method-r.com/component/content/article/162#examples
''mrskew''
http://method-r.com/component/content/article/126#examples
''mrcallrm''
http://method-r.com/component/content/article/164#examples
''mrtimfix''
http://method-r.com/component/content/article/163#examples
''package requirements''
{{{
Other than Linux x86, there are no requirements that I'm aware of. We don't distribute it as an RPM, and I'm not aware of any requirements because of the way we compile the tools.
I can let you know which RPMs we have installed on our build machine, but there's a good chance your RPMs are newer.
Here are the shared libraries required by the most recent release:
$ ldd mrls
libnsl.so.1 => /lib/libnsl.so.1 (0x0083d000)
libdl.so.2 => /lib/libdl.so.2 (0x005c7000)
libm.so.6 => /lib/tls/libm.so.6 (0x005cd000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x00422000)
libutil.so.1 => /lib/libutil.so.1 (0x00a07000)
libpthread.so.0 => /lib/tls/libpthread.so.0 (0x006fd000)
libc.so.6 => /lib/tls/libc.so.6 (0x00496000)
/lib/ld-linux.so.2 (0x0047c000)
$ ldd mrnl
libnsl.so.1 => /lib/libnsl.so.1 (0x0083d000)
libdl.so.2 => /lib/libdl.so.2 (0x005c7000)
libm.so.6 => /lib/tls/libm.so.6 (0x005cd000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x00422000)
libutil.so.1 => /lib/libutil.so.1 (0x00a07000)
libpthread.so.0 => /lib/tls/libpthread.so.0 (0x006fd000)
libc.so.6 => /lib/tls/libc.so.6 (0x00496000)
/lib/ld-linux.so.2 (0x0047c000)
$ ldd mrskew
libnsl.so.1 => /lib/libnsl.so.1 (0x0083d000)
libdl.so.2 => /lib/libdl.so.2 (0x005c7000)
libm.so.6 => /lib/tls/libm.so.6 (0x005cd000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x00422000)
libutil.so.1 => /lib/libutil.so.1 (0x00a07000)
libpthread.so.0 => /lib/tls/libpthread.so.0 (0x006fd000)
libc.so.6 => /lib/tls/libc.so.6 (0x00111000)
/lib/ld-linux.so.2 (0x0047c000)
}}}
http://kbase.redhat.com/faq/docs/DOC-7715
http://www.evernote.com/shard/s48/sh/374cdb18-97d3-421d-85b6-0be1d270cc77/fcde8c4f5ca369745cfd3d6de07379e9
<<<
* Xen kernel does not differentiate between multi-core, multi-processor or hyperthreading processors. Each "processor", regardless of type, is treated as a unique, single-core processor under Xen.
* The ''physical id'' value is a number assigned to each processor socket. The number of unique physical id values on a system tells you the number of CPU sockets that are in use. All logical processors (cores or hyperthreaded images) contained within the same physical processor will share the same physical id value.
* The ''siblings'' value tells you how many logical processors are provided by each physical processor.
* The ''core id'' values are numbers assigned to each physical processor core. Systems with hyperthreading will see duplications in this value as each hyperthreaded image is part of a physical core. Under Red Hat Enterprise Linux 5, these numbers are an index within a particular CPU socket so duplications will also occur in multi-socket systems. Under Red Hat Enterprise Linux 4, which uses APIC IDs to assign core id values, these numbers are not reused between sockets so any duplications seen will be due solely to hyperthreading.
* The ''cpu cores'' value tells you how many physical cores are provided by each physical processor.
''Indications of HT enabled:''
* If the siblings and cpu cores values match, the processors do not support hyperthreading (or hyperthreading is turned off in the BIOS).
* If siblings is twice the value of cpu cores, the processors support hyperthreading and it is in use by the system.
* Duplication of the core id values is also indicative of hyperthreading.
* It is worth noting that the presence of the "ht" flag in the cpuflags section of /proc/cpuinfo does not necessarily indicate that a system has hyperthreading capabilities. That flag indicates that the processor is capable of reporting the number of siblings it has, not that it specifically has the hyperthreading feature.
<<<
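A minimal sketch of the siblings-vs-cores check described above (bash and awk assumed; reads only the first processor entry in /proc/cpuinfo):
{{{
s=$(awk -F': ' '/^siblings/ {print $2; exit}' /proc/cpuinfo)
c=$(awk -F': ' '/^cpu cores/ {print $2; exit}' /proc/cpuinfo)
if [ "$s" -eq $(( c * 2 )) ]; then
  echo "HT enabled: $c cores, $s logical CPUs per socket"
elif [ "$s" -eq "$c" ]; then
  echo "HT off or not supported"
else
  echo "siblings=$s cpu cores=$c -- check manually"
fi
}}}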
http://kevinclosson.wordpress.com/2009/04/22/linux-thinks-its-a-cpu-but-what-is-it-really-mapping-xeon-5500-nehalem-processor-threads-to-linux-os-cpus/
''the script''
''for solaris use this'' https://blogs.oracle.com/sistare/entry/cpu_to_core_mapping
{{{
[root@desktopserver ~]# cat cpu
cat /proc/cpuinfo | grep -i "model name" | uniq
function filter(){
sed 's/^.*://g' | xargs echo
}
echo "processor " `grep processor /proc/cpuinfo | filter`
echo "physical id (processor socket) " `grep 'physical id' /proc/cpuinfo | filter`
echo "siblings (logical cores/socket) " `grep siblings /proc/cpuinfo | filter`
echo "core id " `grep 'core id' /proc/cpuinfo | filter`
echo "cpu cores (physical cores/socket)" `grep 'cpu cores' /proc/cpuinfo | filter`
[root@desktopserver ~]# ./cpu
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
processor 0 1 2 3 4 5 6 7
physical id (processor socket) 0 0 0 0 0 0 0 0
siblings (logical cores/socket) 8 8 8 8 8 8 8 8
core id 0 1 2 3 0 1 2 3
cpu cores (physical cores/socket) 4 4 4 4 4 4 4 4
----------------- ----------------- ----------------- -----------------
Socket0 OScpu#| 0 4| 1 5| 2 6| 3 7|
Core |S0_c0_t0 S0_c0_t1|S0_c1_t0 S0_c1_t1|S0_c2_t0 S0_c2_t1|S0_c3_t0 S0_c3_t1|
----------------- ----------------- ----------------- -----------------
}}}
{{{
Intel Nehalem E5540 2s8c16t
[enkdb01:root] /home/oracle/dba/benchmark/cpu_topology
> sh cpu_topology
model name : Intel(R) Xeon(R) CPU E5540 @ 2.53GHz
processors (OS CPU count) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
physical id (processor socket) 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
siblings (logical CPUs/socket) 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
core id (# assigned to a core) 0 1 2 3 0 1 2 3 0 1 2 3 0 1 2 3
cpu cores (physical cores/socket) 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
----------------- ----------------- ----------------- -----------------
Socket0 OScpu#| 0 8| 1 9| 2 10| 3 11|
Core |S0_c0_t0 S0_c0_t1|S0_c1_t0 S0_c1_t1|S0_c2_t0 S0_c2_t1|S0_c3_t0 S0_c3_t1|
----------------- ----------------- ----------------- -----------------
Socket1 OScpu#| 4 12| 5 13| 6 14| 7 15|
Core |S1_c0_t0 S1_c0_t1|S1_c1_t0 S1_c1_t1|S1_c2_t0 S1_c2_t1|S1_c3_t0 S1_c3_t1|
----------------- ----------------- ----------------- -----------------
[enkdb01:root] /home/oracle/dba/benchmark/cpu_topology
> ./turbostat
pkg core CPU %c0 GHz TSC %c1 %c3 %c6 %pc3 %pc6
13.36 2.30 2.53 21.21 16.29 49.15 0.00 0.00
0 0 0 40.71 1.63 2.53 51.78 7.52 0.00 0.00 0.00
0 0 8 14.97 1.63 2.53 77.51 7.52 0.00 0.00 0.00
0 1 1 7.47 1.62 2.53 16.16 13.55 62.81 0.00 0.00
0 1 9 8.10 1.75 2.53 15.53 13.55 62.81 0.00 0.00
0 2 2 7.30 1.62 2.53 15.34 10.80 66.56 0.00 0.00
0 2 10 7.35 1.88 2.53 15.29 10.80 66.56 0.00 0.00
0 3 3 2.28 1.65 2.53 5.73 10.53 81.46 0.00 0.00
0 3 11 3.91 1.92 2.53 4.10 10.53 81.46 0.00 0.00
1 0 4 99.79 2.79 2.53 0.21 0.00 0.00 0.00 0.00
1 0 12 3.07 2.77 2.53 96.93 0.00 0.00 0.00 0.00
1 1 5 5.31 2.75 2.53 8.19 24.35 62.14 0.00 0.00
1 1 13 3.67 2.75 2.53 9.83 24.35 62.14 0.00 0.00
1 2 6 1.92 2.73 2.53 4.65 40.08 53.35 0.00 0.00
1 2 14 2.14 2.73 2.53 4.43 40.08 53.35 0.00 0.00
1 3 7 2.97 2.74 2.53 6.72 23.45 66.85 0.00 0.00
1 3 15 2.78 2.74 2.53 6.91 23.45 66.85 0.00 0.00
Linux OS CPU Package Locale
0 S0_c0_t0
1 S0_c1_t0
2 S0_c2_t0
3 S0_c3_t0
4 S1_c0_t0
5 S1_c1_t0
6 S1_c2_t0
7 S1_c3_t0
8 S0_c0_t1
9 S0_c1_t1
10 S0_c2_t1
11 S0_c3_t1
12 S1_c0_t1
13 S1_c1_t1
14 S1_c2_t1
15 S1_c3_t1
}}}
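The OS CPU to core mapping above is handy for affinity experiments, e.g. pinning a test to both hardware threads of socket 0 / core 0 (OS CPUs 0 and 8 on this E5540 box; the workload binary is just a placeholder):
{{{
taskset -c 0,8 ./lio_test     # run pinned to S0_c0_t0 and S0_c0_t1
taskset -cp 0,8 <pid>         # or retarget a running process
}}}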
! ''output on exadata v2 - db node & storage cell''
Intel® Xeon® Processor E5540 (8M Cache, 2.53 GHz, 5.86 GT/s Intel® QPI)
http://ark.intel.com/Product.aspx?id=37104
{{{
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
0 1 2 3 0 1 2 3 0 1 2 3 0 1 2 3
4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
processor : 15
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU E5540 @ 2.53GHz
stepping : 5
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 1
siblings : 8
core id : 3
cpu cores : 4
apicid : 23
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx rdtscp lm constant_tsc ida nonstop_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips : 5054.02
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]
}}}
! ''output on exadata x2 - db node''
Intel® Xeon® Processor X5670 (12M Cache, 2.93 GHz, 6.40 GT/s Intel® QPI)
http://ark.intel.com/Product.aspx?id=47920
{{{
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1
12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10
6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
processor : 23
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
stepping : 2
cpu MHz : 2926.096
cache size : 12288 KB
physical id : 1
siblings : 12
core id : 10
cpu cores : 6
apicid : 53
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx pdpe1gb rdtscp lm constant_tsc ida nonstop_tsc arat pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips : 5852.00
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]
}}}
! ''output on exadata x2 - storage cell''
Intel® Xeon® Processor L5640 (12M Cache, 2.26 GHz, 5.86 GT/s Intel® QPI)
http://ark.intel.com/Product.aspx?id=47926
{{{
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1
12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10
6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
processor : 23
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU L5640 @ 2.27GHz
stepping : 2
cpu MHz : 2261.060
cache size : 12288 KB
physical id : 1
siblings : 12
core id : 10
cpu cores : 6
apicid : 53
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx pdpe1gb rdtscp lm constant_tsc ida nonstop_tsc arat pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips : 4522.01
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]
}}}
! ''x3-8 and x2-8''
{{{
8,80cores,160threads
$ sh cpu_topology
model name : Intel(R) Xeon(R) CPU E7- 8870 @ 2.40GHz
processor 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159
physical id (processor socket) 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 5 6 6 6 6 6 6 6 6 6 6 7 7 7 7 7 7 7 7 7 7 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 5 6 6 6 6 6 6 6 6 6 6 7 7 7 7 7 7 7 7 7 7
siblings (logical cores/socket) 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
core id 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25
cpu cores (physical cores/socket) 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
}}}
! ''output on ODA''
Intel® Xeon® Processor X5675 (12M Cache, 3.06 GHz, 6.40 GT/s Intel® QPI)
http://ark.intel.com/products/52577/Intel-Xeon-Processor-X5675-(12M-Cache-3_06-GHz-6_40-GTs-Intel-QPI)
{{{
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1
12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10
6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
processor : 23
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU X5675 @ 3.07GHz
stepping : 2
cpu MHz : 3059.102
cache size : 12288 KB
physical id : 1
siblings : 12
core id : 10
cpu cores : 6
apicid : 53
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx pdpe1gb rdtscp lm constant_tsc ida nonstop_tsc arat pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips : 6118.00
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]
}}}
''Orapub - core-vs-threadcpu-utilization''
http://shallahamer-orapub.blogspot.com/2011/04/core-vs-threadcpu-utilization-part-1.html
http://shallahamer-orapub.blogspot.com/2011/05/cores-vs-threads-util-differencespart-2.html
http://shallahamer-orapub.blogspot.com/2011/05/cores-vs-threads-util-differencepart2b.html
http://content.dell.com/us/en/enterprise/d/large-business/thread-cores-which-you-need.aspx, http://itexpertvoice.com/home/threads-or-cores-which-do-you-need/
https://plus.google.com/117773751083866603675/posts/HrEbMPTeVxp <-- greg rahn threads vs cores
http://openlab.web.cern.ch/sites/openlab.web.cern.ch/files/technical_documents/Evaluation_of_the_4_socket_Intel_Sandy_Bridge-EP_server_processor.pdf
CPU count consideration for Oracle Parameter setting when using Hyper-Threading Technology [ID 289870.1]
How Memory Allocation Affects Performance in Multithreaded Programs
by Rickey C. Weisner, March 2012
http://www.oracle.com/technetwork/articles/servers-storage-dev/mem-alloc-1557798.html
Application Scaling on CMT and Multicore Systems http://developers.sun.com/solaris/articles/scale_cmt.html
Tutorial: DTrace by Example http://developers.sun.com/solaris/articles/dtrace_tutorial.html
facebook https://code.launchpad.net/mysqlatfacebook
profiler http://poormansprofiler.org/
data warehouse in mysql http://mysql.rjweb.org/doc.php/datawarehouse
https://dba.stackexchange.com/questions/75550/is-data-warehousing-possible-in-mysql-and-postgressql
https://blog.panoply.io/mysql-as-a-data-warehouse-is-it-really-your-best-option
https://www.zdnet.com/article/oracle-takes-a-new-twist-on-mysql-adding-data-warehousing-to-the-cloud-service/
https://cloudwars.co/oracle/oracle-unleashes-heatwave-mysql-thumps-amazon-redshift-aurora/
mysql heatwave whitepaper https://www.oracle.com/a/ocom/docs/mysql-database-service-technical-paper.pdf
https://github.com/oracle/heatwave-tpch
Getting Started to MySQL HeatWave for Analytics https://www.youtube.com/watch?v=Xk6ZeO-tHz8
https://juliandontcheff.wordpress.com/2021/06/07/heatwave-mysql-db-systems-in-oci/
https://medium.com/oracledevs/connect-tableau-to-oracle-mysql-database-service-powered-by-heatwave-5d18bb4a1b5c
https://gitlab.oracle.k8scloud.site/devops_admin/mysql-heatwave-workshop
https://druid.apache.org/docs/latest/ingestion/schema-design.html
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/adding-druid/content/druid_ingest.html
https://metatron.app/2019/05/24/what-to-keep-in-mind-when-ingesting-a-data-source-to-druid-from-metatron-discovery/
https://dzone.com/articles/ultra-fast-olap-analytics-with-apache-hive-and-dru
hive druid integration https://gist.github.com/rajkrrsingh/f01475f4bfa4a33240134561171f378f
https://stackoverflow.com/questions/58693625/need-to-load-data-from-hadoop-to-druid-after-applying-transformations-if-i-use
https://stackoverflow.com/questions/51106037/upload-data-to-druid-incrementally
https://druid.apache.org/docs/latest/querying/sql.html
https://druid.apache.org/docs/latest/querying/joins.html
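The Druid SQL docs above boil down to POSTing JSON at the SQL endpoint; a quick sketch (router host/port and the wikipedia datasource are the tutorial defaults, adjust to your cluster):
{{{
curl -s -XPOST http://localhost:8888/druid/v2/sql \
  -H 'Content-Type: application/json' \
  -d '{"query":"SELECT channel, COUNT(*) AS cnt FROM wikipedia GROUP BY channel ORDER BY cnt DESC LIMIT 5"}'
}}}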
BI ENGINE
https://cloud.google.com/bi-engine/docs/optimized-sql#unsupported-features
..
<<showtoc>>
! merge
http://blog.mclaughlinsoftware.com/2009/05/25/mysql-merge-gone-awry/
http://www.xaprb.com/blog/2006/06/17/3-ways-to-write-upsert-and-merge-queries-in-mysql/
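The most common of the upsert patterns from the links above, runnable from the shell (table and columns are illustrative):
{{{
mysql -e "INSERT INTO t (id, val) VALUES (1, 'x')
          ON DUPLICATE KEY UPDATE val = VALUES(val);" testdb
}}}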
http://www.mysqlperformanceblog.com/2012/09/18/the-math-of-automated-failover/
http://techblog.netflix.com/2011/04/lessons-netflix-learned-from-aws-outage.html
-- CLUSTERING
http://blogs.oracle.com/mysql/2011/01/managing_database_clusters_-_a_whole_lot_simpler.html
-- PERFORMANCE
https://blogs.oracle.com/MySQL/entry/mysql_cluster_performance_best_practices
High Performance MySQL
http://oreilly.com/catalog/9780596003067
http://mysql-dba-journey.blogspot.com/search/label/MySQL%20for%20Oracle%20DBAs
http://www.pythian.com/news/13369/notes-on-learning-mysql-as-an-oracle-dba/
http://ronaldbradford.com/mysql-oracle-dba/
http://www.ardentperf.com/2010/09/08/mysterious-oracle-net-errors/
connor's presentation on statistics
http://www.evernote.com/shard/s48/sh/dde62582-24a5-42dd-b401-7352f5caff87/38efb9575a9f6cfe8457ad20308bb3c8
NLJ batching, introduced in 11g:
http://dioncho.wordpress.com/2010/08/16/batching-nlj-optimization-and-ordering/
http://jeffreylui.wordpress.com/2011/02/21/thoughts-on-nlj_batching/
https://learning.oreilly.com/search/?query=natural%20language%20processing&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_playlists=true&include_collections=true&include_notebooks=true&is_academic_institution_account=false&source=user&sort=relevance&facet_json=true&page=0&include_facets=false&include_scenarios=true&include_sandboxes=true&json_facets=true
* Applied Natural Language Processing with Python : Implementing Machine Learning and Deep Learning Algorithms for Natural Language Processing https://learning.oreilly.com/library/view/applied-natural-language/9781484237335/
* Practical Natural Language Processing https://learning.oreilly.com/library/view/practical-natural-language/9781492054047/
* Natural Language Processing with Spark NLP https://learning.oreilly.com/library/view/natural-language-processing/9781492047759/
* Chapter 19. Productionizing NLP Applications https://learning.oreilly.com/library/view/natural-language-processing/9781492047759/ch19.html#productionizing_nlp_applications
https://towardsdatascience.com/deploying-a-machine-learning-model-as-a-rest-api-4a03b865c166
* Oracle Coherence
Sizing Oracle Coherence Applications
http://soainfrastructure.blogspot.com/2010/08/sizing-oracle-coherence-applications.html
* EBusiness Suite
A Primer on Hardware Sizing for Oracle E-Business Suite
http://blogs.oracle.com/stevenChan/2010/08/ebs_sizing_primer.html
http://www.oracle.com/apps_benchmark/html/white-papers-e-business.html
http://blogs.oracle.com/stevenChan/2010/02/oracle_e-business_suite_platform_smorgasbord.html
http://blogs.oracle.com/stevenChan/2010/04/ebs_1211_tsk.html
http://blogs.oracle.com/stevenChan/2009/11/ebs_tuning_oow09.html
http://blogs.oracle.com/stevenChan/2008/10/case_study_redux_oracles_own_ebs12_upgrade.html
http://blogs.oracle.com/stevenChan/2007/11/analyzing_memory_vs_performanc.html
* From Martin Widlake's
http://mwidlake.wordpress.com/2010/11/05/how-big-is-a-person/
http://mwidlake.wordpress.com/2010/11/11/database-sizing-%E2%80%93-how-much-disk-do-i-need-the-easy-way/
http://mwidlake.wordpress.com/2009/09/27/big-discs-are-bad/
http://www.pythian.com/news/170/750g-disks-are-bahd-for-dbs-a-call-to-arms/
Workload Management for Operational Data Warehousing
http://blogs.oracle.com/datawarehousing/2010/09/workload_management_for_operat.html
Workload Management – Statement Queuing
http://blogs.oracle.com/datawarehousing/2010/09/workload_management_statement.html
Workload Management – A Simple (but real) Example
http://blogs.oracle.com/datawarehousing/2010/10/workload_management_a_simple_b.html
A fair bite of the CPU pie? Monitoring & Testing Oracle Resource Manager
http://rnm1978.wordpress.com/2010/09/10/a-fair-bite-of-the-cpu-pie-monitoring-testing-oracle-resource-manager/
Performance Tips
http://blogs.oracle.com/rtd/2010/11/performance_tips.html
Database Instance Caging: A Simple Approach to Server Consolidation http://www.oracle.com/technetwork/database/focus-areas/performance/instance-caging-wp-166854.pdf
Workload Management for Operational Data Warehousing http://blogs.oracle.com/datawarehousing/entry/workload_management_for_operat
Workload Management – Statement Queuing http://blogs.oracle.com/datawarehousing/entry/workload_management_statement
Workload Management – A Simple (but real) Example http://blogs.oracle.com/datawarehousing/entry/workload_management_a_simple_b
A fair bite of the CPU pie? Monitoring & Testing Oracle Resource Manager http://rnm1978.wordpress.com/2010/09/10/a-fair-bite-of-the-cpu-pie-monitoring-testing-oracle-resource-manager/
Parallel Execution and workload management for an Operational DW environment http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/twp-bidw-parallel-execution-130766.pdf
http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/index.html
-- Exadata specific
http://www.linkedin.com/groupItem?view=&srchtype=discussedNews&gid=918317&item=63941267&type=member&trk=eml-anet_dig-b_pd-ttl-cn&ut=0pKCK5WPN524Y1 <-- kerry explains how we do it
Oracle Exadata Database Machine Consolidation: Segregating Databases and Roles http://www.oracle.com/technetwork/database/focus-areas/availability/maa-exadata-consolidated-roles-459605.pdf
Boris - Capacity Management for Oracle Database Machine Exadata v2 https://docs.google.com/viewer?url=http://www.nocoug.org/download/2010-05/DB_Machine_5_17_2010.pdf&pli=1
Performance Stories from Exadata Migrations http://www.slideshare.net/tanelp/tanel-poder-performance-stories-from-exadata-migrations
http://www.brennan.id.au/04-Network_Configuration.html
http://www.redhat.com/magazine/010aug05/departments/tips_tricks/
http://gigaom.com/2013/06/15/how-to-prevent-the-nsa-from-reading-your-email/
https://communities.intel.com/community/itpeernetwork/datastack/blog/2015/03/19/nvm-express-technology-goes-viral-from-data-center-to-client-to-fabrics
''2015'' Under the Hood: Unlocking SSD Performance with NVM Express (NVMe) Technology https://www.youtube.com/watch?v=I7Cic0Rb7D0
''2013'' Under the Hood: Data Center Storage - PCI Express SSDs with NVM Express (NVMe) https://www.youtube.com/watch?v=ACyTonhxXd8
Intel SSD DC P3700 800GB Review - Ludicrous Speed for the Masses! https://www.youtube.com/watch?v=NL_jzPCrdog
http://www.nvmexpress.org/index.php/download_file/view/18/1/
http://en.wikipedia.org/wiki/Nagios
http://www.rittmanmead.com/2012/09/advanced-monitoring-of-obiee-with-nagios/
! net services best practices
Batch Processing in Disaster Recovery Configurations http://www.hitachi.co.jp/Prod/comp/soft1/oracle/pdf/OBtecinfo-08-008.pdf
Oracle Net Services - Best Practices for Database Performance and Scalability https://www.doag.org/formes/pubfiles/2261823/169-2010-K-DB-Mensah-Net_Services.pdf
https://www.doag.org/formes/pubfiles/2261824/169-2010-K-DB-Mensah-Net_Services_Best-PRAESENTATION.pdf
! terms
<<<
* OS tuning - network kernel params and settings - TCP Buffer Sizes
* jumbo frames (for GigE networks) - setting mtu (https://www.youtube.com/watch?v=bCWPdYKPnO4&t=8s)
* buffer size
* sdu (the SDU value should be a multiple of the MTU) - The relation between MTU (Maximum Transmission Unit) and SDU (Session Data Unit) (Doc ID 274483.1)
* client load balance and failover
* server load balance advisory
* bdp - bandwidth delay product. Oracle recommends setting RECV_BUF_SIZE and SEND_BUF_SIZE to three times the BDP value in order to fully use the network bandwidth over the TCP protocol
also check [[DataGuardNetworkBandwidth]]
<<<
! references
How to Set SEND_BUF_SIZE And RECV_BUF_SIZE On Thin JDBC Clients (Doc ID 2037434.1)
Network Tuning Best Practices for Oracle Streams Propagation (Doc ID 1377929.1)
Oracle® Database Net Services Administrator's Guide 11g Release 2 (11.2) https://docs.oracle.com/cd/E18283_01/network.112/e10836/performance.htm
CHAPTER 13 - Migrating to Exadata https://learning.oreilly.com/library/view/expert-oracle-exadata/9781430262428/9781430262411_Ch13.xhtml
How to Change MTU Size in Exadata Environment (Doc ID 1586212.1)
Do Changes To MTU Settings On Exadata, Necessitate Similar Changes On Exalytics? (Doc ID 1928262.1)
Oracle Exadata Database Machine Performance Best Practices (Doc ID 1274475.1)
Recommendation for the Real Application Cluster Interconnect and Jumbo Frames (Doc ID 341788.1)
How to test and verify network support of Jumbo Frames (Doc ID 2423930.1)
Setting SEND_BUF_SIZE and RECV_BUF_SIZE of Agent Managed Listeners on Oracle RAC or Grid Infrastructure Standalone Server (Doc ID 2048018.1)
How to improve performance of impdp over a network_link https://www.oracle.com/webfolder/community/oracle_database/3616011.html
Required MTU Size for an Exadata Machine https://www.oracle.com/webfolder/community/engineered_systems/3685442.html
OraRac11g Enable Jumbo Frames demo https://www.youtube.com/watch?v=bCWPdYKPnO4&t=8s
How to Determine SDU Value Being Negotiated Between Client and Server (Doc ID 304235.1)
.
.
UDP Versus TCP/IP: An Overview
Doc ID: Note:1080335.6
How to Configure Linux OS Ethernet TCP/IP Networking
Doc ID: 132044.1
ORA-12154 While Attempting to Connect to New Database Via SQL*Net
Doc ID: Note:464505.1
TROUBLESHOOTING GUIDE: TNS-12154 TNS:could not resolve service name
Doc ID: Note:114085.1
OERR: ORA 12154 "TNS:could not resolve service name"
Doc ID: Note:21321.1
-- TROUBLESHOOTING
Network Products and Error Stack Components
Doc ID: 39662.1
-- TNSPING
Comparison of Oracle's tnsping to TCP/IP's ping [ID 146264.1]
-- FIREWALL
Oracle Connections and Firewalls (Doc ID 125021.1)
SQL*NET PACKET STRUCTURE: NS PACKET HEADER (Doc ID 1007807.6)
Resolving Problems with Connection Idle Timeout With Firewall (Doc ID 257650.1)
-- NETWORK PERFORMANCE
Oracle Net Performance Tuning (Doc ID 67983.1)
Troubleshooting 9i Data Guard Network Issues
Doc ID: Note:241925.1
Oracle Net Performance Tuning
Doc ID: Note:67983.1
How can I automatically detect slow connections?
Doc ID: Note:305299.1
Network Performance Troubleshooting - SQL*NET And CORE/MFG
Doc ID: Note:101007.1
Bandwith Per User Session For Oracle Form Base Web Deployment In Oracle9ias
Doc ID: Note:287237.1
How to Find Out How Much Network Traffic is Created by Web Deployed Forms?
Doc ID: Note:109597.1
Few Basic Techniques to Improve Performance of Forms.
Doc ID: Note:221529.1
Troubleshooting Web Deployed Oracle Forms Performance Issues
Doc ID: Note:363285.1
High ARCH wait on SENDREQ wait events found in statspack report.
Doc ID: Note:418709.1
Refining Remote Archival Over a Slow Network with the ARCH Process
Doc ID: Note:260040.1
Poor Performance When Using CLOBS and Oracle Net
Doc ID: 398380.1
-- ARRAYSIZE
SET LONG, ARRAYSIZE, AND MAXDATA SYSTEM VARIABLES to display LONG columns
Doc ID: 2062061.6
Relationship of Longs/Arraysize/LongChunk when using Oracle Reports?
Doc ID: 10747.1
-- SDU, MTU
The relation between MTU (Maximum Transmission Unit), SDU (Session Data Unit) and TDU (Transmission Data Unit)
Doc ID: 274483.1
1) Note 67983.1 "Oracle Net Performance Tuning"
2) Note 125021.1 "SQL*Net Packet Sizes (SDU & TDU Parameters)"
Bug 1113588 - New SQLNET.ORA parameter DEFAULT_SDU_SIZE
Doc ID: 1113588.8
Net8 Assistant places SDU parameter incorrectly
Doc ID: Note:99220.1
Recommendation for the Real Application Cluster Interconnect and Jumbo Frames
Doc ID: 341788.1
Asm Does Not Start After Relinking With RDS/Infiniband
Doc ID: 741720.1
304235.1 How to configure and verify that SDU Setting Are Being Read
76412.1 Network Performance Considerations in Designing Client/Server Applications
99715.1 When to modify, when not to modify the Session data unit (SDU)
160738.1 How To Configure the Size of TCP/IP Packets
How to set MTU (Maximum Transmission Unit) size for interfaces (network interfaces). (Doc ID 1017799.1)
How to configure Jumbo Frames on 10-Gigabit Ethernet (Doc ID 1002594.1)
-- BANDWIDTH DELAY PRODUCT
http://forums.oracle.com/forums/thread.jspa?threadID=629524
Please find below some info on how to calculate the BDP, hope this would help
Note:
TCP/IP buffers data into send and receive buffers while sending and receiving to or from lower and upper layer protocols. The sizes of these buffers affect network performance, as they influence flow control decisions.
The RECV_BUF_SIZE and SEND_BUF_SIZE parameters specify the sizes of the socket receive and send buffers, respectively, associated with Oracle Net connections.
Please note that some operating systems have parameters that set the maximum size for all send and receive socket buffers. You must ensure that these values have been adjusted to allow Oracle Net to use a larger socket buffer size.
Oracle recommends to set RECV_BUF_SIZE and SEND_BUF_SIZE three time the BDP’s value (Bandwidth delay product) in order to fully use network bandwidth over TCP protocol.
How to calculate RECV_BUF_SIZE and SEND_BUF_SIZE: find the details below.
Bandwidth = 10 Mbps = 10,000,000 bits/s
Assume RTT = 10 ms = 10/1000 (0.01 s) (RTT obtained by pinging the server)
BDP = 10 Mbps * 10 ms (0.01 s) -> 10,000,000 * 0.01 = 100,000 bits (note: I took the worst RTT value of 10 ms)
BDP = 100,000 / 8 = 12,500 bytes
The optimal send and receive socket buffer sizes are calculated as follows:
Socket buffer size (RECV_BUF_SIZE and SEND_BUF_SIZE) = 3 * BDP = 3 * 12,500 = 37,500 bytes
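The same arithmetic as a one-liner, emitting the two Oracle Net parameters (the 10 Mbps / 10 ms figures are just the example values above):
{{{
awk 'BEGIN { bw=10000000; rtt=0.01; buf=int(3 * bw * rtt / 8);
             printf "RECV_BUF_SIZE=%d\nSEND_BUF_SIZE=%d\n", buf, buf }'
# prints RECV_BUF_SIZE=37500 and SEND_BUF_SIZE=37500 for sqlnet.ora/listener.ora
}}}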
-- BUFFER OVERFLOW
BUFFER OVERFLOW ERROR WHEN RUNNING QUERY
Doc ID: 1020381.6
SQL*Plus: 'BUFFER OVERFLOW' Explained
Doc ID: 2171.1
-- TIMEOUT
VMS: How to Lower Connect Retry Limit and/or Connect Timeout in SQL*Net
Doc ID: 1077706.6
-- LISTENER
TNS Listener Crashes Intermittantly with No Error Message
Doc ID: 237887.1
Dynamic Registration and TNS_ADMIN
Doc ID: 181129.1
How to Diagnose Slow TNS Listener / Connection Performance
Doc ID: 557416.1
Connections To 11g TNS Listener are Slow.
Doc ID: 561429.1
-- SERVICE
Issues Affecting Automatic Service Registration
Doc ID: 235562.1
-- PPP
Point-to-Point Protocol Internals
Doc ID: 47936.1
-- EMAIL
Oracle Email Basics
Doc ID: Note:217140.1
-- DEBUG
Troubleshooting Oracle Net
Doc ID: 779226.1
Note 69642.1 - UNIX: Checklist for Resolving Connect AS SYSDBA Issues
How to Perform a SQL*Net Loopback on Unix
Doc ID: 1004599.6
Finding the source of failed login attempts.
Doc ID: 352389.1
Taking Systemstate Dumps when You cannot Connect to Oracle
Doc ID: 121779.1
How To Track Dead Connection Detection(DCD) Mechanism Without Enabling Any Client/Server Network Tracing
Doc ID: 438923.1
-- ADVANCED NETWORKING OPTION
Setup and Testing Advanced Networking Option
Doc ID: 1068871.6
Oracle Advanced Security SSL Troubleshooting Guide
Doc ID: 166492.1
-- KERBEROS
Kerberos: High Level Introduction and Flow
Doc ID: 294136.1
-- 11g /etc/hosts
11g Network Layer Does Not Use /etc/hosts on UNIX
Doc ID: 803838.1
-- INBOUND_CONNECT_TIMEOUT
Description of Parameter SQLNET.INBOUND_CONNECT_TIMEOUT
Doc ID: 274303.1
ORA - 12170 Occured While Connecting to RAC DB using NAT external IP address
Doc ID: 453544.1
How I Resolved ORA-03135: connection lost contact
Doc ID: 465572.1
-- LISTENER
How to Create Multiple Oracle Listeners and Multiple Listener Addresses
Doc ID: 232010.1
How to Create Additional TNS listeners and Load Balance Connections Between them
Doc ID: 557946.1
How to Disable AutoRegistration of an Instance with the Listener
Doc ID: 140571.1
-- LISTENER - AUDIT VAULT
How To Change The Port of The Listener Configured for the AV Database ?
Doc ID: 753577.1
-- MTS, SHARED SERVER
How MTS and DNS are related, MTS_DISPATCHER and ORA-12545
Doc ID: 131658.1
-- LISTENER TRACING
How to Enable Oracle SQLNet Client , Server , Listener , Kerberos and External procedure Tracing from Net Manager
Doc ID: 395525.1
How to Match Oracle Net Client and Server Trace Files
Doc ID: 374116.1
Using and Disabling the Automatic Diagnostic Repository (ADR) with Oracle Net for 11g
Doc ID: 454927.1
Examining Oracle Net, Net8, SQL*Net Trace Files
Doc ID: 156485.1
{{{
NOTE: you need the boot.iso to do the network install
########## PREPARE THE REPOSITORY (for FTP install) ##########
NOTE: in VSFTPD, the directory root for this service is
/var/ftp/pub; you have to create the directory under it
1)
# mkdir -pv install/centos/4/{os,updates}/i386
2)
contents of the installation CD (RHEL4):
base
- contains key images required and must be in source tree, below are the contents of base
-r--r--r-- 1 oracle root 718621 Apr 17 04:31 comps.xml
-r--r--r-- 1 oracle root 15118336 Apr 17 04:43 netstg2.img
-r--r--r-- 1 oracle root 14835712 Apr 17 04:43 hdstg2.img
-r--r--r-- 1 oracle root 69660672 Apr 17 04:44 stage2.img
-r--r--r-- 1 oracle root 22358872 Apr 17 04:46 hdlist2
-r--r--r-- 1 oracle root 8716184 Apr 17 04:46 hdlist
-r--r--r-- 1 oracle root 9525755 Apr 17 04:54 comps.rpm
-r--r--r-- 1 oracle root 1546 Apr 17 05:00 TRANS.TBL
RPMS
SRPMS
- contains source RPMS
images
- create different type of boot disks
- boot.iso <-- create boot cdrom for network install
- diskboot.img <-- for devices larger than a floppy
- pxeboot <-- installed on the DHCP server
release notes
- copy all the release notes
3)
for RHEL4 and 5, you could just copy all the contents of the CD
# cp RELEASE-NOTES-* /install
4)
for http:
# cp -av /media/cdrecorder/RedHat/ /install
below will be the final contents of the directory
dr-xr-xr-x 2 oracle root 4096 Apr 17 04:54 base
dr-xr-xr-x 3 oracle root 94208 Apr 17 04:46 RPMS
-r--r--r-- 1 oracle root 432 Apr 17 05:00 TRANS.TBL
for ftp:
cp -a --reply=yes /mnt/discx/RedHat /var/ftp/pub
cp -a --reply=yes /mnt/discx/images /var/ftp/pub
cp -a /mnt/discx/* /var/ftp/pub/docs
5) eject and insert disk2
6)
# cp -av /media/cdrecorder/RedHat/ /install
########## HTTPD (apache) ##########
NOTE: in HTTPD (RHEL) the directory root is /var/www/html; the config is in /etc/httpd/conf/httpd.conf
in SUSE the directory root is /srv/www/htdocs; the config is in /etc/apache2/default-server.conf
1) edit the httpd.conf look for "alias"
2) add the following lines
<-- ALIAS: any request that's made to our server
gets redirected to a location on the hard drive,
because the document root is in a different location,
so you have to redirect the files..
WEBSPACE MAPPING to FILESYSTEM MAPPING
Alias /install "/var/ftp/pub/install"
<Directory "/var/ftp/pub/install">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory> <-- if you don't specify this you won't see the tree
OR.... this could be another directory outside of /var/ftp/pub/install... see below:
[root@oel4 ~]# vi /etc/httpd/conf/httpd.conf
# ADD THE LINE BELOW ON THE ALIAS PART
Alias /oel4.6 "/oracle/installers/oel/4.6/os/x86"
<Directory "/oracle/installers/oel/4.6/os/x86">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
[root@oel4 ~]# service httpd restart
Stopping httpd: [FAILED]
Starting httpd: [ OK ]
and have it as a YUM repository
[root@racnode1 yum.repos.d]# mv ULN-Base.repo ULN-Base.repo.bak
[root@racnode1 yum.repos.d]# vi oel46.repo
# ADD THE FOLLOWING LINES
[OEL4.6]
name=Enterprise-$releasever - Media
baseurl=http://192.168.203.24/oel4.6/Enterprise/RPMS
gpgcheck=1
gpgkey=http://192.168.203.24/oel4.6/RPM-GPG-KEY-oracle
3) restart the service, now you have installers ready for FTP and HTTP install
########## NFS install ##########
For NFS, export the directory by adding an entry to /etc/exports to export to a specific system:
/location/of/disk/space client.ip.address(ro,no_root_squash)
To export to all machines (not appropriate for all NFS systems), add:
/location/of/disk/space *(ro,no_root_squash)
# service nfs reload
}}}
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/3/html/Installation_and_Configuration_Guide/Disabling_Network_Manager.html
http://www.softpanorama.org/Net/Linux_networking/RHEL_networking/disabling_network_manager_in_rhel6.shtml
http://xmodulo.com/2014/02/disable-network-manager-linux.html
http://serverfault.com/questions/429014/what-is-the-relation-between-networkmanager-and-network-service-in-fedora-rhel-c
http://blog.beausanders.org/blog7/?q=node/19
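The links above are about handing interfaces back to the classic network service; a minimal RHEL 6-style sketch (eth0 is a placeholder interface name):
{{{
# stop NetworkManager and keep it from starting at boot
service NetworkManager stop
chkconfig NetworkManager off
# enable the classic network service instead
chkconfig network on
service network start
# per interface, mark it as not NM-controlled
echo "NM_CONTROLLED=no" >> /etc/sysconfig/network-scripts/ifcfg-eth0
}}}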
https://apex.oracle.com/database-features/
https://twitter.com/dominic_giles/status/1169161999026184193
[img(100%,100%)[ https://i.imgur.com/csARb3I.png]]
.
<<<
Oracle has announced the latest generation of Sun Fire M3 servers… The new servers will have Intel E5-based CPUs, which means higher speeds and more memory. The X4170M3 server (Exadata compute nodes) will support up to 512GB of RAM in a single 1U server, along with 4 onboard 10GbE NICs. Could certainly make the next generation of Exadata even more interesting.
http://www.oracle.com/us/products/servers-storage/servers/x86/overview/index.html
<<<
<<<
Yes, it will be pretty exciting..
See the comparison of the X2 CPU (X5670) against the benchmark of Oracle on Ebiz with the new E5 CPU (http://goo.gl/vGTrg)
Comparison here http://ark.intel.com/compare/64596,47920
<<<
/***
|Name:|NewHerePlugin|
|Description:|Creates the new here and new journal macros|
|Version:|3.0 ($Rev: 3861 $)|
|Date:|$Date: 2008-03-08 10:53:09 +1000 (Sat, 08 Mar 2008) $|
|Source:|http://mptw.tiddlyspot.com/#NewHerePlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License|http://mptw.tiddlyspot.com/#TheBSDLicense|
***/
//{{{
merge(config.macros, {
newHere: {
handler: function(place,macroName,params,wikifier,paramString,tiddler) {
wikify("<<newTiddler "+paramString+" tag:[["+tiddler.title+"]]>>",place,null,tiddler);
}
},
newJournalHere: {
handler: function(place,macroName,params,wikifier,paramString,tiddler) {
wikify("<<newJournal "+paramString+" tag:[["+tiddler.title+"]]>>",place,null,tiddler);
}
}
});
//}}}
/***
|Name:|NewMeansNewPlugin|
|Description:|If 'New Tiddler' already exists then create 'New Tiddler (1)' and so on|
|Version:|1.1.1 ($Rev: 2263 $)|
|Date:|$Date: 2007-06-13 04:22:32 +1000 (Wed, 13 Jun 2007) $|
|Source:|http://mptw.tiddlyspot.com/empty.html#NewMeansNewPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License|http://mptw.tiddlyspot.com/#TheBSDLicense|
!!Note: I think this should be in the core
***/
//{{{
// change this or set config.newMeansNewForJournalsToo in MptwUserConfigPlugin
if (config.newMeansNewForJournalsToo == undefined) config.newMeansNewForJournalsToo = true;
String.prototype.getNextFreeName = function() {
var numberRegExp = / \(([0-9]+)\)$/;
var match = numberRegExp.exec(this);
if (match) {
var num = parseInt(match[1]) + 1;
return this.replace(numberRegExp," ("+num+")");
}
else {
return this + " (1)";
}
}
config.macros.newTiddler.checkForUnsaved = function(newName) {
var r = false;
story.forEachTiddler(function(title,element) {
if (title == newName)
r = true;
});
return r;
}
config.macros.newTiddler.getName = function(newName) {
while (store.getTiddler(newName) || config.macros.newTiddler.checkForUnsaved(newName))
newName = newName.getNextFreeName();
return newName;
}
config.macros.newTiddler.onClickNewTiddler = function()
{
var title = this.getAttribute("newTitle");
if(this.getAttribute("isJournal") == "true") {
title = new Date().formatString(title.trim());
}
// ---- these three lines should be the only difference between this and the core onClickNewTiddler
if (config.newMeansNewForJournalsToo || this.getAttribute("isJournal") != "true")
title = config.macros.newTiddler.getName(title);
var params = this.getAttribute("params");
var tags = params ? params.split("|") : [];
var focus = this.getAttribute("newFocus");
var template = this.getAttribute("newTemplate");
var customFields = this.getAttribute("customFields");
if(!customFields && !store.isShadowTiddler(title))
customFields = String.encodeHashMap(config.defaultCustomFields);
story.displayTiddler(null,title,template,false,null,null);
var tiddlerElem = story.getTiddler(title);
if(customFields)
story.addCustomFields(tiddlerElem,customFields);
var text = this.getAttribute("newText");
if(typeof text == "string")
story.getTiddlerField(title,"text").value = text.format([title]);
for(var t=0;t<tags.length;t++)
story.setTiddlerTag(title,tags[t],+1);
story.focusTiddler(title,focus);
return false;
};
//}}}
http://venturebeat.com/2012/06/18/nginx-the-web-server-tech-youve-never-heard-of-that-powers-netflix-facebook-wordpress-and-more/
http://tengine.taobao.org/
http://www.cpearson.com/excel/noblanks.aspx
http://chandoo.org/wp/2010/01/26/delete-blank-rows-excel/
=IFERROR(INDEX(CpuCoreBlank,SMALL((IF(LEN(CpuCoreBlank),ROW(INDIRECT("1:"&ROWS(CpuCoreBlank))))),ROW(A1)),1),"")
<<showtoc>>
https://github.com/mitchellh/vagrant-google/issues/234
{{{
I experienced this "NoMethodError in run_instance" issue and fixed it by adding the "Compute Admin" role to my service account. By the way, this is just my test environment, so use a subset of this role on prod.
Initially I was having this error when accessing the API. I agree that there should be a more descriptive error message on permission issues.
gcurl https://compute.googleapis.com/compute/v1/projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard
{
"error": {
"code": 403,
"message": "Required 'compute.diskTypes.get' permission for 'projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard'",
"errors": [
{
"message": "Required 'compute.diskTypes.get' permission for 'projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard'",
"domain": "global",
"reason": "forbidden"
}
]
}
}
}}}
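A sketch of the role grant described above, using the project and service account names from the outputs in this note (as noted, roles/compute.admin is broad, so use a narrower subset on prod):
{{{
gcloud projects add-iam-policy-binding ansible-swarm \
  --member="serviceAccount:example-dev-svc@example-dev-284123.iam.gserviceaccount.com" \
  --role="roles/compute.admin"
}}}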
! the detailed error
!! error msg
{{{
kristofferson.a.arao@karldevgcp:~/vagrant-gcp$ vagrant up --provider=google
Bringing machine 'default' up with 'google' provider...
==> default: Checking if box 'google/gce' version '0.1.0' is up to date...
==> default: Launching an instance with the following settings...
==> default: -- Name: develmach
==> default: -- Project: example-dev-284123
==> default: -- Type: n1-standard-2
==> default: -- Disk type: pd-standard
==> default: -- Disk size: 10 GB
==> default: -- Disk name:
==> default: -- Image:
==> default: -- Image family: ubuntu-os-cloud
==> default: -- Instance Group:
==> default: -- Zone: us-east1-b
==> default: -- Network: default
==> default: -- Network Project: example-dev-284123
==> default: -- Metadata: '{}'
==> default: -- Labels: '{}'
==> default: -- Network tags: '[]'
==> default: -- IP Forward:
==> default: -- Use private IP: false
==> default: -- External IP:
==> default: -- Network IP:
==> default: -- Preemptible: false
==> default: -- Auto Restart: true
==> default: -- On Maintenance: MIGRATE
==> default: -- Autodelete Disk: true
==> default: -- Additional Disks:[]
Traceback (most recent call last):
35: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:82:in `block (2 levels) in run'
34: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/machine.rb:194:in `action'
33: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/machine.rb:194:in `call'
32: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/environment.rb:614:in `lock'
31: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/machine.rb:208:in `block in action'
30: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/machine.rb:239:in `action_raw'
29: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `run'
28: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:19:in `busy'
27: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `block in run'
26: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builder.rb:116:in `call'
25: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
24: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builtin/handle_box.rb:56:in `call'
23: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
22: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
21: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
20: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builtin/box_check_outdated.rb:84:in `call'
19: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
18: from /home/kristofferson.a.arao/.vagrant.d/gems/2.5.5/gems/vagrant-google-2.5.0/lib/vagrant-google/action/connect_google.rb:45:in `call'
17: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
16: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builtin/call.rb:53:in `call'
15: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `run'
14: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:19:in `busy'
13: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `block in run'
12: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builder.rb:116:in `call'
11: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
10: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
9: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
8: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builtin/provision.rb:80:in `call'
7: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
6: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builtin/synced_folders.rb:87:in `call'
5: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
4: from /home/kristofferson.a.arao/.vagrant.d/gems/2.5.5/gems/vagrant-google-2.5.0/lib/vagrant-google/action/warn_networks.rb:28:in `call'
3: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
2: from /home/kristofferson.a.arao/.vagrant.d/gems/2.5.5/gems/vagrant-google-2.5.0/lib/vagrant-google/action/warn_ssh_keys.rb:28:in `call'
1: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
/home/kristofferson.a.arao/.vagrant.d/gems/2.5.5/gems/vagrant-google-2.5.0/lib/vagrant-google/action/run_instance.rb:106:in `call': undefined method `self_link' for nil:NilClass (NoMethodError)
kristofferson.a.arao@karldevgcp:~/vagrant-gcp$ ls -ltr
total 8
-rw-r--r-- 1 kristofferson.a.arao kristofferson.a.arao 2339 Sep 13 18:40 example-dev-284123-1c4cf8cf3f8c.json
-rw-r--r-- 1 kristofferson.a.arao kristofferson.a.arao 983 Sep 13 19:09 Vagrantfile
kristofferson.a.arao@karldevgcp:~/vagrant-gcp$ gcloud auth activate-service-account --key-file=example-dev-284123-1c4cf8cf3f8c.json
Activated service account credentials for: [example-dev-svc@example-dev-284123.iam.gserviceaccount.com]
kristofferson.a.arao@karldevgcp:~/vagrant-gcp$ alias gcurl='curl -H "Authorization: Bearer $(gcloud auth print-access-token)"'
kristofferson.a.arao@karldevgcp:~/vagrant-gcp$ gcurl https://compute.googleapis.com/compute/v1/projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard
{
"error": {
"code": 403,
"message": "Required 'compute.diskTypes.get' permission for 'projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard'",
"errors": [
{
"message": "Required 'compute.diskTypes.get' permission for 'projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard'",
"domain": "global",
"reason": "forbidden"
}
]
}
}
}}}
Videos about RAC performance tuning and Under the Hoods of Cache Fusion, GES, GCS and GRD
http://oraclenz.com/2010/07/26/nzoug-and-laouc-june-and-july-webinars-recording/
see also RACMetalink
see LinkedIn
http://www.linkedin.com/groupItem?view=&gid=2922607&type=member&item=16757466&qid=6465cd0e-4d67-4e50-a293-55f69c318507&goback=.gmp_2922607
{{{
Node Evictions on RAC - what to do and what to collect
We have worked with customers who have had node evictions and have been asked to determine the root cause. First, node evictions in RAC are part of a mechanism to prevent nodes from corrupting the data when they get into a hung state or are no longer healthy enough to continue as part of the cluster and start to degrade performance of the cluster as a whole. Oracle uses its Clusterware, part of the GI (Grid Infrastructure) stack, to decide whether the nodes are healthy, using a voting disk and a heartbeat mechanism across nodes. If either of these is missing or does not make it in time, Oracle initiates a voting cycle to decide which portion of the subcluster survives, and the remaining nodes continue as if nothing happened. There are a couple of basic things to collect (I'll supplement this discussion with My Oracle Support notes later):
- /var/log/messages file from all nodes
- all the clusterware logs (diagcollection.pl does this for you)
- If this is an instance eviction then the logs from bdump and udump destinations of the database
- There is a daemon called oprocd which is no longer present in 11gR2; in previous releases it exists when there is no vendor clusterware. Its logs are in /etc/oracle/oprocd; they tell us whether the Clusterware or oprocd rebooted the node
- Most systems tend to have crash dumps, which show which process took out the node and can help determine what went wrong. The current process list and the active processes on the runqueue also tell you more.
}}}
RACHELP - node eviction
http://www.rachelp.nl/index_kb.php?menu=articles&actie=show&id=25
{{{
Date 2008-12-06 09:42:43
Component CRS
Title What can cause a Node Eviction ?
Version 10.1.0 - 11.1.0.7
Problem
Node evictions can occur in a cluster environment; the main question is why the eviction occurred. Below I try to make that part easier.
Solution
There are 4 possible causes why a node eviction can occur:
* Kernel hang / extreme load on the system (OPROCD and/or HANGCHECK TIMER)
* Heartbeat lost on the interconnect
* Heartbeat lost on the voting disk
* OCLSMON detects a CSSD hang.
The title says cause, but a node eviction is a symptom of another problem, not the cause. Always keep this in mind when investigating why a node eviction occurred.
Kernel hang detection depends on the operating system used. For Windows or Linux this is done with the Hangcheck Timer; on other Unix environments OPROCD is started. From Oracle 10.2.0.4 and higher OPROCD is also active on Linux (still install the hangcheck timer). To validate whether the Hangcheck Timer caused the node eviction, check the OS logfiles; for OPROCD, check the OPROCD logfile.
Another possible node eviction can be triggered by OCLSMON, starting with the 10.2.0.3 patchset. This Clusterware process validates whether there is an issue with CSSD; when there is, it kills the CSSD daemon, which leads to the eviction. When this occurs, check the oclsmon logfile and contact Oracle Support. In this note we don't focus on these parts, but on heartbeat loss.
Below are two examples of a heartbeat-lost symptom. The OCSSD background process takes care of the heartbeats. In the cssd.log file you can find detailed information about the node eviction; in case of an eviction check the cssd.log files on all the nodes in your cluster environment, but start with the evicted node. The information logged can change between patchsets and Oracle releases.
Node eviction due to Interconnect lost symptom.
Oracle 11g
[ CSSD]2008-11-20 10:59:36.510 [1220598112] >TRACE: clssnmCheckDskSleepTime: Node 3, dbq0223,
dead, last DHB (1227175136, 73583764) after NHB (1227175121, 73568724), but LATS - current (39090) >
DTO (27000)
[ CSSD]2008-11-20 10:59:36.512 [1147169120] >TRACE: clssnmReadDskHeartbeat: node 1, dbq0123,
has a disk HB, but no network HB, DHB has rcfg 122475875, wrtcnt, 164452, LATS 58728604, lastSeqNo
164452, timestamp 1227175122/73251784
[ CSSD]2008-11-20 10:59:37.513 [1199618400] >WARNING: clssnmPollingThread: node dbq0227 (5) at
90% heartbeat fatal, eviction in 1.660 seconds
[ CSSD]2008-11-20 10:59:37.513 [1220598112] >TRACE: clssnmSendSync: syncSeqNo(122475875)
[ CSSD]2008-11-20 10:59:37.513 [1220598112] >TRACE: clssnm_print_syncacklist: syncacklist (4)
Oracle 10g
[ CSSD]2006-10-18 23:49:06.199 [3600] >TRACE: clssnmCheckDskInfo: Checking disk info...
[ CSSD]2006-10-18 23:49:06.199 [3600] >TRACE: clssnmCheckDskInfo: node(2) timeout(172) state_network(0) state_disk(3) missCount(30)
[ CSSD]2006-10-18 23:49:06.226 [1] >USER: NMEVENT_SUSPEND [00][00][00][06]
[ CSSD]2006-10-18 23:49:07.028 [1030] >TRACE: clssnmReadDskHeartbeat: node(2) is down. rcfg(23) wrtcnt(634353) LATS(2345204583) Disk lastSeqNo(634353)
[ CSSD]2006-10-18 23:49:07.199 [3600] >TRACE: clssnmCheckDskInfo: node(2) disk HB found, network state 0, disk state(3) missCount(31)
[ CSSD]2006-10-18 23:49:08.032 [1030] >TRACE: clssnmReadDskHeartbeat: node(2) is down. rcfg(23) wrtcnt(634354) LATS(2345205587) Disk lastSeqNo(634354)
[ CSSD]2006-10-18 23:49:08.199 [3600] >TRACE: clssnmCheckDskInfo: node(2) disk HB found, network state 0, disk state(3) missCount(32)
[ CSSD]2006-10-18 23:49:09.199 [3600] >TRACE: clssnmCheckDskInfo: node(2) timeout(1167) state_network(0) state_disk(3) missCount(33)
[ CSSD]2006-10-18 23:49:10.199 [3600] >TRACE: clssnmCheckDskInfo: node(2) timeout(2167) state_network(0) state_disk(3) missCount(33)
…….
[ CSSD]2006-10-18 23:49:18.571 [3086] >WARNING: clssnmPollingThread: state(0) clusterState(2) exit
[ CSSD]2006-10-18 23:49:18.572 [1287] >ERROR: clssnmvDiskKillCheck: Evicted by node 1, sync 23, stamp -1949751541,
[ CSSD]2006-10-18 23:49:18.698 [3600] >TRACE: 0x110013a80 00 00 00 00 00 00 00 00 - 00 00 00 00 00 00 00 00
Here we see that the disk kill check is reported by node 1 and this node is evicted.
The disk kill check is done using a poison packet through the voting disk, as the interconnect is lost.
Possible action: check the availability of the adapters, large network load/port scans, and the OS logfiles for reported errors related to the interconnect.
Node eviction due to Voting disk lost symptom.
Below an example where we lose the heartbeat to the voting disk.
[ CSSD]2006-10-11 00:35:33.658 [1801] >TRACE: clssnmHandleSync: Acknowledging sync: src[1] srcName[alligator] seq[9] sync[15]
[ CSSD]2006-10-11 00:35:36.956 [1801] >TRACE: clssnmHandleSync: diskTimeout set to (27000)ms
[ CSSD]2006-10-11 00:35:36.957 [1801] >WARNING: CLSSNMCTX_NODEDB_UNLOCK: lock held for 3300 ms
[ CSSD]2006-10-11 00:35:36.956 [1544] >TRACE: clssnmDiskPMT: stale disk (32490 ms) (0//dev/rora_vote_raw)
[ CSSD]2006-10-11 00:35:36.966 [1544] >ERROR: clssnmDiskPMT: 1 of 1 voting disks unavailable (0/0/1)
[ CSSD]2006-10-11 00:35:37.043 [2058] >TRACE: clssgmClientConnectMsg: Connect from con(112a8a9f0) proc(112a8f9d0) pid(480150) proto(10:2:1:1)
[ CSSD]2006-10-11 00:35:37.960 [3343] >TRACE: clscsendx: (11145a3f0) Physical connection (111459b30) not active
[ CSSD]2006-10-11 00:35:37.051 [1] >USER: NMEVENT_SUSPEND [00][00][00]06]
Possible action: check the availability of the disk subsystem and the OS logfiles for reported errors related to the voting disk
Trace the heartbeat: if needed you can enable a higher level of tracing to debug the heartbeat part. This can be done using the commands below; level 5 enables the extra tracing and level 0 disables it again. Please keep in mind that this can make your cssd.log grow fast (4 lines added every second).
crsctl debug log css CSSD:5
crsctl debug log css CSSD:0
NOTICE: a node eviction is a symptom of another problem!
}}}
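A quick first pass on the evicted node's CSSD logs, a minimal sketch assuming the CRS_HOME layout listed below (the message strings are taken from the examples above):
{{{
grep -iE "heartbeat fatal|clssnmPollingThread|clssnmDiskPMT|clssnmvDiskKillCheck" \
  /u01/app/oracle/product/crs/log/*/cssd/ocssd.log | tail -50
}}}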
''The Clusterware logs''
{{{
My CRS_HOME on my test environment is at /u01/app/oracle/product/crs
-- alert log
/u01/app/oracle/product/crs/log/racnode1/alertracnode1.log
-- CSS log
/u01/app/oracle/product/crs/log/racnode1/cssd/cssdOUT.log
/u01/app/oracle/product/crs/log/racnode1/cssd/ocssd.log
/u01/app/oracle/product/crs/log/racnode1/cssd/racnode1.pid
-- CRSD log
/u01/app/oracle/product/crs/log/racnode1/crsd/crsd.log
-- RACG log
/u01/app/oracle/product/crs/log/racnode1/racg/ora.racnode1.ons.log
-- CRS EVM log
/u01/app/oracle/product/crs/evm/log/racnode1_evmdaemon.log
/u01/app/oracle/product/crs/evm/log/racnode1_evmlogger.log
/u01/app/oracle/product/crs/log/racnode1/evmd/evmd.log
/u01/app/oracle/product/crs/log/racnode1/evmd/evmdOUT.log
-- client log
/u01/app/oracle/product/crs/log/racnode1/client/clsc.log
/u01/app/oracle/product/crs/log/racnode1/client/ocr_15504_3.log
/u01/app/oracle/product/crs/log/racnode1/client/oifcfg.log
-- oprocd logs
/etc/oracle/oprocd
}}}
''Things to check on node eviction (Karl's notes)'' (see also ClusterHealthMonitor and RDA-RemoteDiagnosticAgent and GetAlertLog)
{{{
- Execute GetAlertLog script, do this every end of the day or if you see any signs of a node eviction (on all nodes as oracle)
- Execute the AWR scripts, unzip the awrscripts.zip and execute the run_all.sql, zip the output files (on all nodes as oracle)
- Do a ClusterHealthMonitor dump (just on node1 as crfuser)
/usr/lib/oracrf/bin/oclumon dumpnodeview -allnodes -v -last "23:59:59" > <your-directory>/<your-filename>
- Execute multinode RDA-RemoteDiagnosticAgent (just on node1 as oracle)
ssh-agent $SHELL
ssh-add
./rda.sh -vX Remote setup_cluster
./rda.sh -vX Remote list
./rda.sh -v -e REMOTE_TRACE=1
- Do a zip of directory /etc/oracle/oprocd (on all nodes as oracle)
- Do a zip of directory /var/log/sa (on all nodes as oracle)
- Do a zip of /var/log/messages file (on all nodes as root)
- Execute $ORA_CRS_HOME/bin/diagcollection.pl --collect (on all nodes as root, see Doc ID 330358.1)
}}}
[img[picturename| https://lh5.googleusercontent.com/-8CphyN6W-aE/TOuwimLbTYI/AAAAAAAAA9Y/JCqgr-y3Acg/s2048/RacNodeEviction.gif]]
Nologging in the E-Business Suite
Doc ID: Note:216211.1
Force_logging in Physical Standby Environment
Doc ID: Note:367560.1
Force Logging Feature in Oracle Database
Doc ID: Note:174951.1
Changing Storage Definition in a Logical Standby Database
Doc ID: Note:737460.1
The Gains and Pains of Nologging Operations
Doc ID: Note:290161.1
A Study of Non-Partitioned NOLOGGING DML/DDL on Primary/Standby Data Dictionary
Doc ID: Note:150694.1
Using Oracle7 UNRECOVERABLE and Oracle8 NOLOGGING Option
Doc ID: Note:147474.1
https://taliphakanozturken.wordpress.com/tag/alter-table-logging/
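Related to the notes above: the usual safeguard against NOLOGGING operations silently corrupting a standby is FORCE LOGGING; a minimal sketch:
{{{
-- check whether force logging is already on
select force_logging from v$database;
-- enable it database-wide (NOLOGGING operations will then still generate redo)
alter database force logging;
-- or per tablespace
alter tablespace users force logging;
}}}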
http://www.ehow.com/how_6915411_import-non-csv-file-excel.html
http://office.microsoft.com/en-us/excel-help/import-or-export-text-files-HP010099725.aspx
http://karlarao.wordpress.com/2010/06/28/the-not-a-problem-problem-and-other-related-stuff
http://www.samsalek.net/?p=2506
[img(30%,30%)[ http://www.samsalek.net/wp-content/uploads/2011/04/samsalek.net_notetakingv2.jpg ]]
http://highscalability.com/numbers-everyone-should-know
http://www.geekologie.com/2010/06/how-big-is-a-yottabyte-spoiler.php
http://highscalability.com/blog/2012/9/11/how-big-is-a-petabyte-exabyte-zettabyte-or-a-yottabyte.html
bytes to yottabytes visualized http://thumbnails.visually.netdna-cdn.com/bytes-sized_51c8d615a7b04.png
http://stevenpoitras.com/the-nutanix-bible/
Securing Your Application with OAuth and Passport
https://www.pluralsight.com/courses/oauth-passport-securing-application
* Exadata and Database Machine Version 2 Series - 1 of 25: Introduction to Smart Scan Demo 19-Sep-10 10 mins http://goo.gl/AA48J
<<<
{{{
-- start
set timing on
select a.name, b.value/1024/1024 MB from v$sysstat a,
v$mystat b
where a.statistic# = b.statistic#
and (a.name in
('physical read total bytes',
'physical write total bytes',
'cell IO uncompressed bytes')
or a.name like 'cell phy%');
-- do a non smart scan
select /*+ OPT_PARAM('cell_offload_processing' 'false') */
count(*) from sales
where time_id between '01-JAN-2003' and '31-DEC-2003'
and amount_sold = 1;
-- end
set timing on
select a.name, b.value/1024/1024 MB from v$sysstat a,
v$mystat b
where a.statistic# = b.statistic#
and (a.name in
('physical read total bytes',
'physical write total bytes',
'cell IO uncompressed bytes')
or a.name like 'cell phy%');
-- new session
connect sh/sh
-- start
set timing on
select a.name, b.value/1024/1024 MB from v$sysstat a,
v$mystat b
where a.statistic# = b.statistic#
and (a.name in
('physical read total bytes',
'physical write total bytes',
'cell IO uncompressed bytes')
or a.name like 'cell phy%');
-- do the smart scan
select count(*) from sales
where time_id between '01-JAN-2003' and '31-DEC-2003'
and amount_sold = 1;
-- end
select a.name, b.value/1024/1024 MB from v$sysstat a,
v$mystat b
where a.statistic# = b.statistic#
and (a.name in
('physical read total bytes',
'physical write total bytes',
'cell IO uncompressed bytes')
or a.name like 'cell phy%');
}}}
<<<
* Exadata and Database Machine Version 2 Series - 2 of 25: Introduction to Exadata Hybrid Columnar Compression Demo 19-Sep-10 10 mins http://goo.gl/jBKSM
<<<
{{{
select table_name, compression, compress_for
from user_tables
where table_name like '<table_name>';
-- ensure direct path read is done
alter session force parallel query;
alter session force parallel ddl;
alter session force parallel dml;
create table mycust_query compress for query high
parallel 16 as select * from mycustomers;
create table mycust_archive compress for archive high
parallel 16 as select * from mycustomers;
select table_name, compression, compress_for
from user_tables
where table_name like '<table_name>';
select segment_name, sum(bytes)/1024/1024
from user_segments;
}}}
<<<
* Exadata and Database Machine Version 2 Series - 3 of 25: Introduction to Exadata Smart Flash Cache Demo 19-Sep-10 12 mins http://goo.gl/4UBic
<<<
{{{
-- start
select a.name, b.value/1024/1024 MB from v$sysstat a,
v$mystat b
where a.statistic# = b.statistic#
and (a.name like '%flash cache read hits'
or a.name like 'cell phy%'
or a.name like 'physical read tot%'
or a.name like 'physical read req%');
-- ensure IO is satisfied using Exadata storage
alter system flush buffer_cache;
-- performs 10000 record lookups, typical OLTP load
set serveroutput on
set timing on
declare
a number;
s number := 0;
begin
for n in 1 .. 10000 loop
select cust_credit_limit into a from customers
where cust_id=n*5000;
s := s+a;
end loop;
dbms_output.put_line('Transaction total = '||s);
end;
/
-- end
select a.name, b.value/1024/1024 MB from v$sysstat a,
v$mystat b
where a.statistic# = b.statistic#
and (a.name like '%flash cache read hits'
or a.name like 'cell phy%'
or a.name like 'physical read tot%'
or a.name like 'physical read req%');
connect sh/sh
-- ensure IO is satisfied using Exadata storage
alter system flush buffer_cache;
-- then re execute the loop, you'll see better performance!
}}}
<<<
* Exadata and Database Machine Version 2 Series - 4 of 25: Exadata Process Introduction Demo 19-Sep-10 6 mins http://goo.gl/qQ6dk
<<<
{{{
connect <celladmin>
-- show processes associated with Exadata restart server (RS)
ps -ef | grep cellrs
-- show Management Server
-- the parent process of MS is RS
ps -ef | grep ms.err
-- the main CELLSRV process
-- the parent process of CELLSRV is RS
ps -ef | grep "/cellsrv "
-- the OSWatcher.. output files located at /opt/oracle.oswatcher
ps -ef | grep OSWatcher
cellcli
list cell detail <-- displays attributes of the cell
}}}
<<<
* Exadata and Database Machine Version 2 Series - 5 of 25: Hierarchy of Exadata Storage Objects Demo 19-Sep-10 8 mins http://goo.gl/KYoyV
<<<
{{{
connect <celladmin>
# LUN
cellcli
list lun <-- list all the LUN on a cell
<-- 12 disk based LUNs, 16 flash based LUNs
list lun where disktype = harddisk <-- only show disk based LUNs
list lun 0_0 detail <-- detailed attributes of the LUN
<-- isSystemLun=TRUE means it's part of system disk, around 29GB reserved for OS, cell SW
# PHYSICAL DISK
list physicaldisk 20:10 detail <-- detailed attributes of physical disk, associated with a LUN
# CELL DISK - a higher level storage abstraction, each cell disk is based on a LUN
list celldisk CD_10_exa9cel01 detail <-- detailed attributes
# GRID DISK
list griddisk where celldisk = CD_10_exa9cel01 detail
<-- a grid disk defines an area of storage on a cell disk
<-- grid disk are consumed by ASM and used as storage for ASM disk groups
<-- each cell disk can contain a number of grid disks
<-- grid disk are visible as disks inside ASM
select name,path,state,total_mb from v$asm_disk
where name like '%_CD_10_EXA9CEL01';
<-- path to the disk has the form o/<cell IP address>/<grid disk name>
select d.name disk, dg.name diskgroup
from v$asm_disk d, v$asm_diskgroup dg
where dg.group_number = d.group_number
and d.name like '%_CD_10_EXA9CEL01';
<-- grid disk to disk group mapping
}}}
<<<
* Exadata and Database Machine Version 2 Series - 6 of 25: Creating Interleaved Grid Disks Demo 19-Sep-10 8 mins http://goo.gl/FrHes
<<<
{{{
cellcli
list lun where celldisk = null <-- list all empty LUNs
-- interleaving option is specified in cell disk
create celldisk interleaving_test lun=0_11, INTERLEAVING='normal_redundancy'
list celldisk interleaving_test detail
create griddisk data1_interleaving_test celldisk=interleaving_test, size=200G
create griddisk data2_interleaving_test celldisk=interleaving_test
list griddisk where celldisk=interleaving_test detail
drop griddisk data1_interleaving_test
drop griddisk data2_interleaving_test
drop celldisk interleaving_test
<-- you cannot create non-interleaved grid disk on a cell disk that has the
INTERLEAVING='normal_redundancy' attribute
}}}
<<<
* Exadata and Database Machine Version 2 Series - 7 of 25: Examining Exadata Smart Flash Cache Demo 19-Sep-10 8 mins http://goo.gl/TC41l
<<<
{{{
cellcli
list celldisk where disktype=flashdisk
list flashcache detail <-- by default all flash-based disk are configured as Exadata Smart Flash Cache
list flashcachecontent detail <-- shows info about the data inside flash cache, can help assess cache efficiency for specific db objects
list flashcachecontent where objectnumber=74576 and tablespacenumber=7 and dbuniquename=ST01 detail <-- show info on specific db object
}}}
<<<
* Exadata and Database Machine Version 2 Series - 8 of 25: Exadata Cell Configuration Demo 19-Sep-10 6 mins http://goo.gl/yy2uh
<<<
{{{
list cell detail
> temperatureReading - current metrics
> notificationMethod - metrics that can be changed
> notificationPolicy
alter cell smtpToAddr='admin1@example.com, admin2@example.com' <-- set the adjustable cell attributes
alter cell validate mail <-- sends a test email
alter cell validate configuration <-- to do a complete internal check of the cell config settings
}}}
<<<
* Exadata and Database Machine Version 2 Series - 9 of 25: Exadata Storage Provisioning Demo 19-Sep-10 7 mins http://goo.gl/BiK0w
<<<
{{{
list lun where diskType = hardDisk and cellDisk = null <-- will show all disk based LUNs that do not contain cell disks!
typically cell disks and grid disks are created on each hard disk so that
data can be spread evenly across the cell
list celldisk where freeSpace != 0 <-- show unallocated free space on cell disks
create celldisk all harddisk interleaving='normal_redundancy' <-- the command creates cell disks on all the available hard disks.. the hard disks that dont already
contain cell disks. the new cell disks are configured in preparation for interleaved grid disks
list celldisk where freeSpace != 0 <-- will show the newly created cell disks
create griddisk all harddisk prefix=st01data2, size=280G <-- this command creates two sets of interleaved disks on the recently created cell disks, others will be skipped if
they dont have the required space
create griddisk all harddisk prefix=st02data2 <--
list griddisk attributes name, size, ASMModeStatus <-- list of all the grid disks, UNUSED means not yet consumed by ASM
}}}
<<<
* Exadata and Database Machine Version 2 Series - 10 of 25: Consuming Exadata Grid Disks Using ASM Demo 19-Sep-10 10 mins http://goo.gl/Bmr7D
<<<
{{{
select name, header_status, path from v$asm_disk
where path like 'o/%/st01%'
and header_status = 'CANDIDATE'; <-- shows the list of CANDIDATE grid disks, the grid disk format is
o/<cell IP address>/<grid disk name> .. the IP represents the storage cell
alter diskgroup st01data add disk 'o/*/st01data2_CD_11_exa9cel01'; <-- adds grid disk to ASM disk group
alter diskgroup st01data drop disk st01data2_CD_11_exa9cel01 rebalance power 11 wait; <-- drops the disk
create diskgroup st01data2 normal redundancy
disk 'o/*/st01data2*'
attribute 'compatible.rdbms' = '11.2.0.0.0',
'compatible.asm' = '11.2.0.0.0',
'cell.smart_scan_capable' = 'TRUE',
'au_size' = '4M'; <-- creates disk group with the recommended disk group attributes!!! you'll also notice that grid disk are automatically
grouped into separate failure groups
}}}
<<<
* Exadata and Database Machine Version 2 Series - 11 of 25: Exadata Cell User Accounts Demo 19-Sep-10 5 mins http://goo.gl/P5Dfi
<<<
{{{
cellmonitor <-- able to monitor Exadata using LIST
celladmin <-- can create, modify, drop exadata cell objects
root <-- can only execute the CALIBRATE command
}}}
<<<
* Exadata and Database Machine Version 2 Series - 12 of 25: Monitoring Exadata Using Metrics, Alerts and Active Requests Demo 19-Sep-10 10 mins http://goo.gl/34Puy
<<<
{{{
list metricdefinition <-- metrics are recorded observations of important run-time properties or internal instrumentation
of the storage cell or its components (cell disks, grid disks)
list metricdefinition detail <-- provides more comprehensive info about all the metrics
list metricdefinition where name like 'CL_.*' detail <-- add a WHERE condition to view specific metrics
list metriccurrent <-- shows the most current metric observations
list metriccurrent where objecttype = 'CELL' <-- add WHERE to show subset of metrics
list metriccurrent where alertState != normal <-- shows metrics in abnormal state
list metriccurrent cl_temp <-- shows specific metric, shows current temperature measured inside the Exadata server
list metriccurrent <-- shows the space utilization of the cell OS and exadata software binaries
list metrichistory where alertState != normal <-- historical alerts, default retention is 7days. This command will determine if there where any
abnormal state on the past 7days!
list metrichistory where cl_temp memory <-- list historical, but the data which are still held in memory
list alerthistory <-- shows all the alerts maintained in the alert repository
drop alerthistory all <-- clear out unwanted alerts, this command clears the entire alert history
list threshold <-- list the defined threshold on exadata cell, default is none defined
list alertdefinition <-- list all available sources of the alerts on the cell
create threshold cl_fsut."/" comparison='>', warning=48 <-- creates threshold on the cell filesystem
list threshold detail <-- shows the definition of threshold
dd if=/dev/zero of=/tmp/file.out bs=1024 count=950000 <-- creates a big file
list alerthistory <-- check the alert generated!!!
list alerthistory detail
alter alerthistory 1_1 examinedby='st01' <-- modify the alert to indicate that you have examined it!
rm /tmp/file.out
list metriccurrent cl_fsut
list alerthistory
list alerthistory <-- will show the begin and end of the alert condition
alter session force parallel dml;
update customers set cust_credit=0.9*cust_credit_limit
where cust_id < 2000000;
list activerequest detail <-- view of IO requests that are currently being processed by a cell..
shows reason for IO, size of IO, grid disk accessed, TBS number, obj number, SQLID
}}}
<<<
* Exadata and Database Machine Version 2 Series - 13 of 25: Monitoring Exadata From Within Oracle Database Demo 19-Sep-10 10 mins http://goo.gl/RNNqC
<<<
{{{
explain plan for
select avg(cust_credit_limit)
from customers where cust_credit_limit < 10000; <-- you can identify smart scan is used by looking at execution plan
select * from table(dbms_xplan.display);
select sql_text, physical_read_bytes, physical_write_bytes, io_interconnect_bytes, io_cell_offload_eligible_bytes, io_cell_uncompressed_bytes,
io_cell_offload_returned_bytes, optimized_phy_read_requests
from v$sql where sql_text like 'select avg%'; <-- you can determine the effectiveness of smart scan for a query by evaluating the ratio between
IO_CELL_OFFLOAD_ELIGIBLE_BYTES and IO_CELL_OFFLOAD_RETURNED_BYTES (see the ratio query sketch after this demo). IOs optimized by the use of storage index or
exadata smart flash cache are counted under OPTIMIZED_PHY_READ_REQUESTS
select statistic_name, value
from v$segment_statistics
where owner='SH' and object_name='CUSTOMERS'
and statistic_name = 'optimized physical reads'; <-- shows number of IO requests optimized by exadata
"cell session smart scan efficiency" <-- sysstat value , the higher value.. better
select w.event, c.cell_path, d.name, w.p3
from v$session_wait w, v$event_name e, v$asm_disk d, v$cell c
where e.name like 'cell%'
and e.wait_class_id = w.wait_class_id
and w.p1 = c.cell_hashval
and w.p2 = d.hash_value; <-- shows WAITS related to Exadata IOs
}}}
<<<
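The ratio query sketch referenced in demo 13 above, run against V$SQL (the :sqlid bind is a placeholder):
{{{
select sql_id,
       io_cell_offload_eligible_bytes/1024/1024 eligible_mb,
       io_cell_offload_returned_bytes/1024/1024 returned_mb,
       round(100*(1 - io_cell_offload_returned_bytes
                      / nullif(io_cell_offload_eligible_bytes,0)),2) offload_pct
from v$sql
where sql_id = :sqlid;
}}}
A value close to 100% means almost none of the eligible bytes had to be shipped back over the interconnect.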
* Exadata and Database Machine Version 2 Series - 14 of 25: Exadata High Availability Demo 19-Sep-10 10 mins http://goo.gl/JrnN3
<<<
{{{
-- long running query
ps -ef | grep "/cellsrv "
kill cellsrv
ps -ef | grep "/cellsrv " <-- will create a new process
list alerthistory
alter cell restart services all
ps -ef | grep "/cellsrv " <-- will create a new process
-- long running query not interrupted and completed
}}}
<<<
* Exadata and Database Machine Version 2 Series - 15 of 25: Intradatabase I/O Resource Management Demo 19-Sep-10 10 mins http://goo.gl/aqx2J
<<<
{{{
create user fred identified by fred account unlock;
create user dave identified by dave account unlock;
grant connect to fred, dave;
grant select any table to fred, dave;
-- then connect as fred and dave on separate windows and execute this
select count(*) from sh.sales where amount_sold=1; <-- with no intradatabase resource plan, both queries by users will have no effect
-- now as SYSDBA create a database resource plan, specified 80/20 split between two consumer groups HI and LO
begin
dbms_resource_manager.create_simple_plan(
simple_plan => 'my_plan',
consumer_group1 => 'HI', group1_percent => 80,
consumer_group2 => 'LO', group2_percent => 20);
end;
/
begin
dbms_resource_manager.create_pending_area();
dbms_resource_manager_privs.grant_switch_consumer_group(
grantee_name => 'FRED',
consumer_group => 'HI',
grant_option => true);
dbms_resource_manager_privs.grant_switch_consumer_group(
grantee_name => 'DAVE',
consumer_group => 'LO',
grant_option => true);
dbms_resource_manager.set_consumer_group_mapping(
dbms_resource_manager.oracle_user,'FRED','HI');
dbms_resource_manager.set_consumer_group_mapping(
dbms_resource_manager.oracle_user,'DAVE','LO');
dbms_resource_manager.submit_pending_area();
end;
/
alter system set resource_manager_plan = 'my_plan'; <-- the newly created db resource mgt plan is enabled!!! when you set the plan in the database
the plan is automatically propagated to the exadata cells to enable intradatabase io resource
management. for this to work you must have an active iormplan on your exadata cells even if its a null plan.
select * from dba_rsrc_consumer_group_privs; <-- confirm consumer group associations
select count(*) from sh.sales where amount_sold=1; <-- reexecute for both users will have elapsed time change
}}}
<<<
* Exadata and Database Machine Version 2 Series - 16 of 25: Interdatabase I/O Resource Management Demo 19-Sep-10 12 mins http://goo.gl/jZptS
<<<
{{{
create bigfile tablespace test
datafile '+STO1DATA2' size 40g; <-- on both databases
list metriccurrent CD_IO_BY_W_LG_SEC where metricobjectname like 'CD.*' <-- shows large write throughput
alter iormplan dbplan=((name=ST01, level=1, allocation=100), (name=other, level=2, allocation=100))
alter iormplan active
list iormplan detail
create bigfile tablespace test
datafile '+STO1DATA2' size 40g; <-- on both databases
}}}
<<<
* Exadata and Database Machine Version 2 Series - 17 of 25: Configuring Flash-Based Disk Groups Demo 19-Sep-10 16 mins http://goo.gl/9Ve8c
<<<
{{{
list flashcache detail <-- each exadata server contains 384GB of high performance flash memory. by default all flash memory
is configured as exadata smart flash cache.
drop flashcache <-- drop flash cache
list celldisk attributes name,freeSpace,size where diskType=FlashDisk <-- after dropping, each flash based cell disk shows that all the usable space is free
create flashcache all size=100g <-- now a smaller than default, smart flash cache is configured spread across cell disk 6.25GB x 16 = 100
create griddisk all flashdisk prefix=st01flash, size=8G <-- will create flash based grid disk on the 100GB just created.. the same command used to create disk based grid disk
except the FLASHDISK keyword
create griddisk all flashdisk prefix=st02flash <-- this will create flash based grid disk on all of the remaining free space on the flash based cell disks
list griddisk attributes name,size,ASMModeStatus where disktype=flashdisk <-- this will list the newly created flash based grid disks! ready to be consumed by ASM
select path, header_status from v$asm_disk
where path like 'o/%/st01flash%'; <-- will list flash based grid disks.. from viewpoint of ASM flash and disk based grid disks are the same
create diskgroup st01flash normal redundancy
disk 'o/*/st01flash*'
attribute 'compatible.rdbms' = '11.2.0.0.0',
'compatible.asm' = '11.2.0.0.0',
'cell.smart_scan_capable' = 'TRUE',
'au_size' = '4M'; <-- this will create a flash based disk group!!! this will also be automatically grouped into separate failure groups
drop diskgroup st01flash; <-- drops the disk group
drop griddisk all prefix=st01flash
drop griddisk all prefix=st02flash
drop flashcache
create flashcache all <-- the default smart flash cache is configured on the cell
}}}
<<<
* Exadata and Database Machine Version 2 Series - 18 of 25: Examining Exadata Hybrid Columnar Compression Demo 19-Sep-10 14 mins http://goo.gl/ppP3q
<<<
{{{
set serveroutput on
set timing on
declare
b_cmp number;
b_ucmp number;
r_cmp number;
r_ucmp number;
cmp_ratio number(6,2);
cmp_type varchar2(1024);
begin
dbms_compression.get_compression_ratio('SH','SH','MYCUSTOMERS',NULL,DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH,b_cmp,b_ucmp,r_cmp,r_ucmp,cmp_ratio,cmp_type);
dbms_output.put_line('Table: MYCUSTOMERS');
dbms_output.put_line('Compression Ratio: '||cmp_ratio);
dbms_output.put_line('Compression Type: '||cmp_type);
dbms_compression.get_compression_ratio('SH','SH','MYCUSTOMERS',NULL,DBMS_COMPRESSION.COMP_FOR_ARCHIVE_HIGH,b_cmp,b_ucmp,r_cmp,r_ucmp,cmp_ratio,cmp_type);
dbms_output.put_line('Table: MYCUSTOMERS');
dbms_output.put_line('Compression Ratio: '||cmp_ratio);
dbms_output.put_line('Compression Type: '||cmp_type);
end;
/ <-- will show you the compression advisor rates!!! 5.3 and 6.6 respectively
select segment_name, sum(bytes)/1024/1024
from user_segments
where segment_name like 'MYCUST%'
group by segment_name; <-- will show the ratio
MYCUST_QUERY 1673 <-- 5.3 RATIO (8850/1673)
MYCUSTOMERS 8850
MYCUST_ARCHIVE 1301 <-- 6.7 RATIO
-- ensure direct path read is done
alter session force parallel query;
alter session force parallel ddl;
alter session force parallel dml;
insert /*+ APPEND */ into mycustomers
select * from seed_data; <-- 00:00:01.22 at 1000000 rows NORMAL
<-- 00:00:00.89 at 1000000 rows QUERY COMPRESSION.. performance is offset by less IO operations, suited for DW environments large data loads
<-- 00:00:03.57 at 1000000 rows ARCHIVE.. slower, uses more costly algorithm for high compression. suited for archiving
select avg(cust_credit_limit) from mycustomers; <-- 00.00.10.92 NORMAL
cell physical IO interconnect bytes 939MB <-- data returned by smart scan
cell physical IO interconnect bytes returned by smart scan 939MB
cell physical IO bytes eligible for predicate offload 8892MB <-- offloaded to exadata
<-- 00.00.02.03 QUERY.. IO reduction results better query performance
cell physical IO interconnect bytes 266MB <-- data returned by smart scan
cell physical IO interconnect bytes returned by smart scan 266MB
cell physical IO bytes eligible for predicate offload 1667MB <-- offloaded to exadata
<-- 00.00.01.86 ARCHIVE
cell physical IO interconnect bytes 239MB <-- data returned by smart scan
cell physical IO interconnect bytes returned by smart scan 239MB
cell physical IO bytes eligible for predicate offload 1297MB <-- offloaded to exadata
}}}
<<<
* Exadata and Database Machine Version 2 Series - 19 of 25: Index Elimination with Exadata Demo 19-Sep-10 8 mins http://goo.gl/T0SFq
<<<
{{{
shows how to make an index invisible so that you can test the effect on your queries without actually dropping the index
set timing on
set autotrace on explain
select avg(cust_credit_limit) from customers
where cust_id between 2000000 and 2500000; <-- test query, 15.09 seconds elapsed.. shows index range scan
alter index customers_pk invisible; <-- makes index invisible, and not used by optimizer for queries
select status from user_constraints
where constraint_name = 'CUSTOMERS_PK'; <-- ENABLED and associated with PK constraint,
note that even though invisible the associated constraint is still ENABLED
select avg(cust_credit_limit) from customers
where cust_id between 2000000 and 2500000; <-- with invisible index, 23.99 seconds elapsed, and uses SMART SCAN!
alter index customers_pk visible; <-- makes it visible
}}}
<<<
* Exadata and Database Machine Version 2 Series - 20 of 25: Database Machine Configuration Example using Configuration Worksheet Demo 19-Sep-10 14 mins http://goo.gl/cXgKu
<<<
{{{
.
}}}
<<<
* Exadata and Database Machine Version 2 Series - 21 of 25: Migrating to Database Machine Using Transportable Tablespaces Demo 19-Sep-10 14 mins http://goo.gl/otDOF
<<<
{{{
this demo shows how to use RMAN in conjunction with TTS to migrate data from a big endian platform to exadata
-- TTS dumps and metadata are created.. in a real-world scenario, you must dump the files to a DBFS!!!
-- The EXADATA is LITTLE ENDIAN!!!
select d.platform_name, endian_format
from v$transportable_platform tp, v$database d
where tp.platform_name = d.platform_name;
RMAN> convert datafile '/home/st01/TTS/soe_TTS_AIX.dbf'
to platform="Linux x86 64-bit"
from platform="AIX-Based Systems (64-bit)"
parallelism=1
format '+ST01DATA'; <-- this converts from big to little endian and loads the converted file into ASM
-- For TTS to work, the same schema must pre-exist in the destination database
create user soe identified by soe account unlock;
grant connect,resource to soe;
create directory tts as '/home/st01/TTS'; <-- creates a directory object that houses the TTS files
impdp system dumpfile=expSOE_TTS.dmp directory=tts logfile=imp_SOE.log transport_datafiles='+ST01DATA/st01/datafile/soe.268.727217185' <-- Data Pump to import TTS metadata
alter tablespace soe read write;
}}}
<<<
* Exadata and Database Machine Version 2 Series - 22 of 25: Bulk Data Loading with Database Machine Demo 19-Sep-10 20 mins http://goo.gl/KFWyu
<<<
{{{
-- configure DBFS!!! best practice is put it on a separate database
create bigfile tablespace dbfs datafile '+ST01DATA' size 10G;
grant create session, create table, create procedure, dbfs_role to dbfs; <-- should be installed in a dedicated schema
mkdir DBFS <-- this will be the filesystem mount point
cd $ORACLE_HOME/rdbms/admin
sqlplus dbfs/dbfs
@dbfs_create_filesystem_advanced.sql dbfs st01dbfs nocompress nodeduplicate noencrypt non-partition <-- this creates
the database objects for the dbfs store
1st - Tablespace where DBFS store is created
2nd - name of the DBFS store
3,4,5,6 - whether or not to enable the various features
typically it is recommended to leave the advanced features
DISABLED for a DBFS store that is used to stage data files
for BULK DATA loading
echo dbfs > passwd.txt
nohup $ORACLE_HOME/bin/dbfs_client dbfs@st01 -o allow_other,direct_io /home/st01/DBFS < passwd.txt & <-- dbfs_client has a mount interface that utilizes the FUSE kernel module
to implement a file system mount.
dbfs_client receives standard file system calls from FUSE and translates them
into calls to the DBFS PL/SQL API
ps -ef | grep dbfs_client
df -k
cp CSV/customers.csv DBFS/st01dbfs/ <-- transfer files to staging area
cd DBFS/st01dbfs/
ls -l
head customers.csv
sqlplus "/ as sysdba"
create directory staging as '/home/st01/DBFS/st01dbfs';
grant read, write on directory staging to sh; <-- create directory object which references to the DBFS
connect sh/sh
create table ext_customers
(
customer_id number(12),
cust_first_name varchar2(30),
cust_last_name varchar2(30),
nls_language varchar2(3),
nls_territory varchar2(30),
credit_limit number(9,2),
cust_email varchar2(100),
account_mgr_id number(6)
)
organization external
(
type oracle_loader
default directory staging
access parameters
(
records delimited by newline
badfile staging:'custxt%a_%p.bad'
logfile staging:'custxt%a_%p.log'
fields terminated by ',' optionally enclosed by '"'
missing field values are null
(
customer_id, cust_first_name, cust_last_name, nls_language,
nls_territory, credit_limit, cust_email, account_mgr_id
)
)
location ('customers.csv')
)
parallel
reject limit unlimited;
select count(*) from ext_customers; <-- query the external table, it is queried in parallel!!
create table loaded_customers
as select * from ext_customers; <-- actual data loading!!!
fusermount -u /home/st01/DBFS <-- to unmount!!!
df -k
ps -ef | grep dbfs_client
}}}
<<<
* Exadata and Database Machine Version 2 Series - 23 of 25: Backup Optimization Using RMAN and Exadata Demo 19-Sep-10 15 mins http://goo.gl/q5Dz8
<<<
{{{
alter database enable block change tracking;
configure device type disk parallelism 2;
backup as backupset incremental level 0 tablespace sh; <-- full backup of the SH tablespace
list backup; <-- 90GB 00:04:17 elapsed
select a.name, sum(b.value/1024/1024) MB
from v$sysstat a, v$sesstat b, v$session c
where a.statistic# = b.statistic#
and b.sid = c.sid
and upper(c.program) like 'RMAN%'
and (a.name in
('physical read total bytes',
'physical write total bytes',
'cell IO uncompressed bytes')
or a.name like 'cell phy%')
group by a.name; <-- at Level 0 no offloading
-- do a massive update on the table
backup as backupset incremental level 1 tablespace sh;
list backup; <-- 944KB
select a.name, sum(b.value/1024/1024) MB
from v$sysstat a, v$sesstat b, v$session c
where a.statistic# = b.statistic#
and b.sid = c.sid
and upper(c.program) like 'RMAN%'
and (a.name in
('physical read total bytes',
'physical write total bytes',
'cell IO uncompressed bytes')
or a.name like 'cell phy%')
group by a.name; <-- at Level 1 very significant offloading!!! BCT also helped instead of reading 90GB, read only 488MB
Smart scan also kicked in to optimize RMAN reads, so that instead of returning 454 MB of data to RMAN
for further processing.. only 12.5MB was returned
cell physical IO bytes eligible for predicate offload 454MB
cell physical IO interconnect bytes 51MB
cell physical IO interconnect bytes returned by smart scan .89MB
physical write total bytes 12.49MB
physical read total bytes 487MB
select file#, incremental_level, datafile_blocks, blocks, blocks_read, blocks_skipped_in_cell
from v$backup_datafile; <-- BLOCKS_SKIPPED_IN_CELL is another good metric for backup optimization!
}}}
<<<
* Exadata and Database Machine Version 2 Series - 24 of 25: Recovery Optimization Using RMAN and Exadata Demo 19-Sep-10 12 mins http://goo.gl/TOl2o
<<<
{{{
rm sh.dbf
restore tablespace sh;
select a.name, sum(b.value/1024/1024) MB
from v$sysstat a, v$sesstat b, v$session c
where a.statistic# = b.statistic#
and b.sid = c.sid
and upper(c.program) like 'RMAN%'
and (a.name in
('physical read total bytes',
'physical write total bytes',
'cell IO uncompressed bytes')
or a.name like 'cell phy%')
group by a.name; <-- restore... cell physical IO bytes saved during optimized RMAN file restore 1753MB
when RMAN restores a file, any blocks in the file that have not been altered since the
file was first formatted can be re-created by Exadata. This optimization removes the need
to transport empty formatted blocks across the storage network. Rather, RMAN is able to instruct
Exadata to conduct the IO on its behalf in the same way that optimized file creation is performed.
cell physical IO bytes eligible for predicate offload 1753MB
cell physical IO interconnect bytes 398395MB
cell physical IO interconnect bytes returned by smart scan 0MB
cell physical IO bytes saved during optimized RMAN file restore 1753MB
physical write total bytes 154479MB
physical read total bytes 92939.36MB
}}}
<<<
* Exadata and Database Machine Version 2 Series - 25 of 25: Using the distributed command line utility (dcli) Demo 19-Sep-10 14 mins http://goo.gl/3vAUN
<<<
{{{
-- configure environment
ORACLE_SID=ST01
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
cat << END > mycells
exa9cel01
exa9cel02
END
ssh-keygen -t dsa <-- if doing dcli 1st time, then create ssh key
dcli -g mycells -k <-- establish SSH equivalence
-- usage
dcli -g mycells cellcli -e list cell <-- list cells, BASIC CELLCLI COMMANDS
dcli -g mycells df -k <-- list filesystems, OS COMMANDS
dcli -g mycells cellcli -e list iormplan <-- can also be used for configuration changes across servers
dcli -g mycells cellcli -e alter iormplan active
dcli -g mycells cellcli -e alter iormplan inactive
dcli -g mycells "cellcli -e list metriccurrent where name like \'CD_IO_RQ_W_.?.?\' and metricobjectname like \'CD.*\'" <-- monitor across cells
dcli -g mycells -r '.*CD_0.*' "cellcli -e list metriccurrent where name like \'CD_IO_RQ_W_.?.?\' and metricobjectname like \'CD.*\'" <-- monitor across cells with regex exclude
dcli -g mycells "cellcli -e list metriccurrent where name like \'CD_IO_RQ_W_.?.?\' and metricobjectname like \'CD.*\' | grep CD_00" <-- with GREP
dcli -g mycells -f testfile.txt <-- distributed file transfer -f option
dcli -g mycells testfile.txt
cat << END > st01script.sh
HST=\`hostname -s\`
DTE=\`date\`
echo -n \`cat testfile.txt\`
echo " on ${HST} at ${DTE}."
END
chmod +x st01script.sh
dcli -g mycells -x st01script.sh <-- -x option causes the associated file to be copied to and run on the target system
a filename with .SCL extension is run by the CELLCLI UTILITY
a filename with different extension is run by the OPERATING SYSTEM SHELL on the target server
the file is copied to the default home directory on the target server
}}}
<<<
''-- usage tracking''
{{{
$ cat obi_reports.sql
set lines 200
set echo off
set feedback off
col "Elapse Time(Min)" form 999,999
col "Elapse Time(Hr)" form 999.9
col "Total Row Ct." form 999,999,999
col "Exec Ct." form 999,999,999
col "SQL Ct." form 999,999,999
col "Db Time(Sec)" form 999,999,999
col "Db Time(Min)" form 999,999
col "Db Time(Hr)" form 999.9
col "Total Row Ct." form 999,999,999,999
SELECT to_char(to_date(start_dt, 'dd-MON-yy'), 'yyyy-mm-dd') "Exec Date",
count(*) "Exec Ct",
sum(row_count) "Row Ct",
sum(total_time_sec/60) "Elapse Time(Min)",
sum(total_time_sec/60/60) "Elapse Time(Hr)",
sum(num_db_query) "SQL Ct.",
sum(cum_db_time_sec) "Db Time(Sec)",
sum(cum_db_time_sec/60) "Db Time(Min)",
sum(cum_db_time_sec/60/60) "Db Time(Hr)",
sum(cum_num_db_row) "Total Row Ct."
FROM OBIUSAGE.S_NQ_ACCT
WHERE 1=1
AND QUERY_SRC_CD NOT IN ('ValuePrompt','DashboardPrompt')
AND cache_ind_flg = 'N'
AND presentation_name = 'CBRE Financials - GL Profit and Loss'
AND start_dt >= '07-MAY-2012'
GROUP BY start_dt
ORDER BY 1;
}}}
! OBIEE workload separation
{{{
IF contains(lower(trim([Module])),'bip')=true THEN 'BIP'
ELSEIF contains(lower(trim([Module])),'odi')=true THEN 'ODI'
ELSEIF contains(lower(trim([Module])),'nqs')=true THEN 'nqsserver'
ELSE 'OTHER' END
}}}
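The expression above is a calculated field for bucketing sampled sessions by module; a minimal SQL equivalent, assuming you classify on MODULE in GV$SESSION:
{{{
select case
         when lower(trim(module)) like '%bip%' then 'BIP'
         when lower(trim(module)) like '%odi%' then 'ODI'
         when lower(trim(module)) like '%nqs%' then 'nqsserver'
         else 'OTHER'
       end workload,
       count(*) sessions
from gv$session
group by case
           when lower(trim(module)) like '%bip%' then 'BIP'
           when lower(trim(module)) like '%odi%' then 'ODI'
           when lower(trim(module)) like '%nqs%' then 'nqsserver'
           else 'OTHER'
         end;
}}}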
http://www.rittmanmead.com/2012/03/an-obiee-11g-security-primer-introduction/
http://www.rittmanmead.com/2012/03/obiee-11g-security-week-row-level-security/
http://www.rittmanmead.com/2012/03/obiee-11g-security-week-subject-area-catalog-and-functional-area-security-2/
http://www.rittmanmead.com/2012/03/obiee-11g-security-week-understanding-obiee-11g-security-application-roles-and-application-policies/
http://www.rittmanmead.com/2012/03/obiee-11g-security-week-managing-application-roles-and-policies-and-managing-security-migrations-and-deployments/
http://www.rittmanmead.com/2012/03/obiee-11g-security-week-connecting-to-active-directory-and-obtaining-group-membership-from-database-tables/
http://www.rittmanmead.com/2003/09/securing-data-warehouses-with-oid-advanced-security-and-vpd/
http://www.rittmanmead.com/2007/05/obiee-and-row-level-security/
https://blogs.oracle.com/BI4success/entry/high_level_flow_of_obiee
https://blogs.oracle.com/obieeTips/entry/aix_checklist_for_stable_obiee
https://blogs.oracle.com/obieeTips/entry/obiee_memory_usage
! sizing
OBIEE 11g and 12c: Architectural Deployment Capacity Planning Guide (Doc ID 1323646.1)
NOTE:1333049.1 - OBIEE 11g Infrastructure Performance Tuning Guide
NOTE:2106183.1 - OBIEE 12c: Best Practices Guide for Infrastructure Tuning Oracle® Business Intelligence Enterprise Edition 12c (12.2.1)
NOTE:1323646.1 - OBIEE 11g | 12c: Architectural Deployment Capacity Planning Guide
NOTE:1611188.1 - OBIEE: Load Testing OBIEE Using Oracle Load Testing (OLT) 12.x
https://blogs.oracle.com/cealteam/obiee-1111-tuning-guide-script-v1
https://blogs.oracle.com/proactivesupportepm/obiee-tuning-guide-whitepaper-update-available
NOTE:2087801.1 - OBIEE 12c: How To Configure The External Subject Area (XSA) Cache For Data Blending| Mashup And Performance
! obiee active data guard
https://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-biee-activedataguard-1-131999.pdf
https://www.slideshare.net/bogloap/it-2020-technology-optimism-an-oracle-scenario
Upgrading OCFS2 - 1.4
http://www.idevelopment.info/data/Oracle/DBA_tips/OCFS2/OCFS2_1.shtml
http://www.idevelopment.info/data/Oracle/DBA_tips/OCFS2/OCFS2_5.shtml
https://blogs.oracle.com/observability/post/announcing-support-for-exadata-monitoring-in-performance-hub-v2
https://blogs.oracle.com/cloud-infrastructure/post/available-now-exadata-insights-in-oracle-cloud-infrastructure-operations-insights
https://docs.oracle.com/en-us/iaas/operations-insights/doc/operations-insights.html
https://docs.oracle.com/en-us/iaas/operations-insights/doc/analyze-exadata-resources.html
<<<
Use cases
Forecast resource requirements
Using the Capacity Planning app for Exadata systems, you can perform the following analyses:
Enterprise-wide analysis of resource utilization, capacity planning for Exadata
Improve resource utilization by identifying under- and overutilized resources
Identify Exadata systems projected to reach high utilization
Identify total lead time to expand capacity through machine learning-based forecast, based on long-term historic data to project future resource growth
Use forecasting and capacity planner functionality to ensure that Exadata satisfies future needs of databases being consolidated
Estimate usage after 12 months
<<<
<<<
Consolidate Oracle databases on Exadata
You can inspect details of individual Exadata systems and look at performance characteristics of all databases, hosts, and storage servers for the following capabilities:
Identify top databases by the resource type CPU, memory, I/O, and storage
Identify top hosts by the resource type CPU and memory
Identify top Exadata storage servers by storage, I/O, and throughput
Determine which Exadata hosts satisfy resource requirements
Find low resource utilization servers
Plan using performance history and seasonality
Ensure that service levels can be met over time
<<<
Support of Oracle Transparent Data Encryption (Oracle TDE)
https://help.sap.com/viewer/4b99f675d74f4990b75a8630869a0cd2/CURRENT_VERSION/en-US/bc2f528da0ed423bbaf6aee70b633c01.html
<<<
my experience:
* automatically configured in X8M
* CDBs are set to use_large_pages=ONLY
<<<
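A quick way to confirm both points (a sketch; one OS check, one SQL*Plus check):
{{{
$ grep Huge /proc/meminfo <-- HugePages_Total allocated; HugePages_Free should be low once instances are up
SQL> show parameter use_large_pages <-- expect ONLY
}}}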
https://blog.pythian.com/hugepages-for-oracle-database-in-oracle-cloud/
OCI-Classic to OCI IaaS Migration
IaaS Migration Tools
https://cloud.oracle.com/iaas/training/slides/cloud_migration_tools_300.pdf
''OCM preparation exams'' http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=501
http://translate.google.com/translate?langpair=zh-CN%7Cen&hl=zh-CN&ie=UTF8&u=http://www.oracledatabase12g.com/archives/11g-ocm-upgrade-exam-tips.html
http://blogs.oracle.com/certification/entry/0372
Oracle Database 11g Administrator http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=198
Oracle Database 11g Certified Master Upgrade Exam http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=41&p_exam_id=11gOCMU
Oracle Database 11g Certified Master Exam http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=41&p_exam_id=11gOCM
http://www.pythian.com/news/34911/how-to-prepare-to-oracle-database-11g-certified-master-exam/
http://wenku.baidu.com/view/452d880a6c85ec3a87c2c526.html
http://laurentschneider.com/wordpress/2012/09/ocm-11g-upgrade.html
http://gavinsoorma.com/2011/02/passing-the-11g-ocm-exam-some-thoughts/
mclean http://goo.gl/QTFvX
kamran http://kamranagayev.com/2013/08/16/how-to-become-an-oracle-certified-master-my-ocm-journey/
this guy http://jko-licorne.com/oracle/
The Cert Path
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=198&p_org_id=&lang=
Upgrade program
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=44
Oracle Database 11g: New Features for Administrators
http://education.oracle.com/pls/web_prod-plq-dad/show_desc.redirect?dc=D50081GC10&p_org_id=&lang=&source_call=
1Z0_050 - Oracle Database 11g: New Features for Administrators
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=41&p_exam_id=1Z0_050
Release 2
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=609&p_org_id=1001&lang=US&get_params=dc:D50081GC20,p_preview:N
Oracle Database 12c: New Features for Administrators
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=609&get_params=dc:D77758GC10,p_preview:N
http://www.databasejournal.com/features/oracle/article.php/3630231/Oracle-RAC-Administration---Part-4-Administering-the-Clusterware--Components.htm
http://blogs.oracle.com/AlejandroVargas/archives.html
http://blogs.oracle.com/AlejandroVargas/2007/05/rac_with_asm_on_linux_crash_sc_2.html
http://blogs.oracle.com/AlejandroVargas/2007/05/rac_with_asm_on_linux_crash_sc_3.html
http://onlineappsdba.com/index.php/2009/06/09/backup-and-recovery-of-oracle-clusterware/
http://el-caro.blogspot.com/2006/07/ocr-backups.html
http://deepthinking99.wordpress.com/2008/09/20/recover-the-corruption-ocr/
http://askdba.org/weblog/2008/09/how-to-recover-from-corrupted-ocr-disk/
http://www.oracle-dba-database-administration.com/backup-recover-OCR.html
http://achatzia.blogspot.com/2007/06/scripts-for-rac-backup.html
http://www.pythian.com/news/832/how-to-recreate-the-oracle-clusterware-voting-disk/
http://www.databasejournal.com/features/oracle/article.php/3626471/Oracle-RAC-Administration---Part-3-Administering-the-Clusterware-Components.htm
http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_65.shtml#Backup the Voting Disk
What's in a voting disk http://orainternals.wordpress.com/2010/10/29/whats-in-a-voting-disk/
OCR & Voting Disk on ASM http://blog.ronnyegner-consulting.de/2010/10/20/oracle-11g-release-2-asm-best-practises/
How to restore Oracle Grid Infrastructure OCR and vote disk on ASM
http://oracleprof.blogspot.com/2011/09/after-reading-book-about-oracle-rac-see.html
Placement of Voting disk and OCR Files in Oracle RAC 10g and 11gR1 [ID 293819.1]
OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) [ID 428681.1]
http://noriegaaoracleexpert.blogspot.com/2017/08/demythifying-oracle-database-appliance.html
http://drsalbertspijkers.blogspot.com/2015/04/oracle-database-appliance-x5-2.html
https://blog.pythian.com/oracle-database-appliance-storage-performance-part-1/
https://blog.pythian.com/insiders-guide-to-oda-performance/
Database Sizing for Oracle Database Appliance https://docs.oracle.com/cd/E22693_01/doc.12/e55580/sizing.htm#CHDCCDGD
https://community.oracle.com/blogs/heemasatapathy/2018/07/19/x52-oracle-database-appliance-system-io-assessment
https://www.doag.org/formes/pubfiles/7519722/2015-K-INF-Tammy_Bednar-Deep_Dive_into_Oracle_Database_Appliance_Architecture-Manuskript.pdf
https://www.doag.org/formes/pubfiles/7519746/2015-K-INF-Tammy_Bednar-Deep_Dive_into_Oracle_Database_Appliance_Architecture-Praesentation.pdf
http://www.nocoug.org/download/2013-05/NoCOUG_201305_ODA_IO_and_Performance_Architecuture.pdf
oracle database appliance flash disk group https://www.google.com/search?client=firefox-b-1-d&q=oracle+database+appliance+flash+disk+group
Oracle Database Appliance Software Configuration Defaults https://docs.oracle.com/cd/E22693_01/doc.12/e55580/referapp.htm
Database Disk Group Sizes for Oracle Database Appliance https://docs.oracle.com/cd/E68623_01/doc.121/e68637/GUID-FE280580-F361-494F-B377-10137A6BEA34.htm#CMTAR858
DBFC - Using SSDs to Solve I/O Bottlenecks https://learning.oreilly.com/library/view/oracle-database-problem/9780134429267/ch17.html
Using Oracle Database Appliance SSDs https://docs.oracle.com/cd/E22693_01/doc.12/e55580/dbadmin.htm#CACEHIJJ , https://docs.oracle.com/cd/E64530_01/doc.121/e64200/referapp.htm#CEGBFHFB
Flash Cache in ODA x5-2 Virtual platform - https://community.oracle.com/thread/4195281?parent=MOSC_EXTERNAL&sourceId=MOSC&id=4195281
https://blogs.oracle.com/emeapartnerweblogic/what-you-need-to-know-about-the-new-oda-x5-2-by-simon-haslam
''Configure and Deploy Oracle Database Appliance'' http://apex.oracle.com/pls/apex/f?p=44785:24:2875967671743702::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5903,2
http://www.evernote.com/shard/s48/sh/372c667d-d4d0-4a51-a505-f7010f124f29/05ee84e49642f4d556da907c7d212e35
''Check out the offline configurator'' http://blogs.oracle.com/eSTEP/entry/oda_offline_configurator_for_demo
''Demo: How to Set Up ILOM on the Oracle Database Appliance'' http://download.oracle.com/technology/server-storage/ilom/ILOM-Setup-1-5-12.mp4
Pointed by Bjoern Rost, @karlarao How do you reduce planned downtime with #ODA when rolling patches are not (yet?) supported
Well that really sucks because you have to patch through the ''appliance manager''. As per this doc http://download.oracle.com/docs/cd/E22693_01/doc.21/e22692/undrstd.htm#CIHEFJBA it doesn't support rolling patching yet, and they haven't issued any new patches for it either, which is the other problem :p
''hmm'' "At the time of this release, Oracle Appliance Manager Patching does not support rolling patching. The entire system must be taken down and both servers patched before restarting the database."
''even worse is this note:'' Caution: Only patch Oracle Database Appliance with an Oracle Database Appliance patch bundle. Do not use Oracle Grid Infrastructure, Oracle Database patches, or any Linux distribution patch with an Oracle Appliance. If you use non-Oracle Appliance patches with an Oracle Appliance using Opatch or an equivalent tool, then the Oracle Database Appliance inventory is not updated, and future Oracle Appliance patch updates cannot be completed.
Oracle Database Appliance Firmware Page [ID 1360299.1]
http://www.pythian.com/news/34715/migrating-your-10g-database-to-oda-with-minimal-downtime/
http://ermanarslan.blogspot.com/2014/06/ovm-oakcli-command-examples.html
{{{
OVM -- oakcli command examples
To import a Template:
oakcli import vmtemplate EBS_12_2_3_PROD_DB -assembly /OVS/EBS/Oracle-E-Business-Suite-PROD-12.2.3.ova -repo vmtemp2 -node 0
To list the available Templates:
oakcli show vmtemplate
To list the Virtual Machines:
oakcli show vm
To list the repositories:
oakcli show repo
To start a virtual machine:
oakcli start vm EBS_12_2_3_VISION
To create a Repository (size in gb, by default):
oakcli create repo vmrepo1 -dg data -size 2048
To configure a Virtual Machine (CPU, memory, etc.):
oakcli configure vm EBS_12_2_3_PROD_APP -vcpu 16 -maxvcpu 16
oakcli configure vm EBS_12_2_3_PROD_APP -memory 32768M -maxmemory 32768M
To open a console for a virtual machine (Vnc required)
oakcli show vmconsole EBS_12_2_3_VISION
To create a virtual machine from a template:
oakcli clone vm EBS_12_2_3_PROD_APP -vmtemplate EBS_12_2_3_PROD_APP -repo vmrepo1 -node 1
}}}
Oracle Database Appliance - Steps to Generate a Key via MOS to change your CORE Count and apply this Core Key (Doc ID 1447093.1)
ODA FAQ : Understanding the Oracle Database Appliance Core Key Generation usage, common questions and problems ( FAQ ) (Doc ID 1597084.1)
{{{
# /opt/oracle/oak/bin/oakcli show core_config_key
Host's serialnumber = 01234AB56C7
Configured Cores = 20
Note: The CPUs in the Database Appliance are hyper-threaded, so when verifying the number of CPU cores with the cpuinfo command, you will see two times (2x) the number of cores configured per server. For example, in this note we configured 10 cores per server, for a total of 20 cores for the appliance, so the cpuinfo command will return the following:
# cat /proc/cpuinfo | grep -i processor
processor : 0
processor : 1
processor : 2
processor : 3
processor : 4
processor : 5
processor : 6
processor : 7
processor : 8
processor : 9
processor : 10
processor : 11
processor : 12
processor : 13
processor : 14
processor : 15
processor : 16
processor : 17
processor : 18
processor : 19
...
... -- The maximum number of cores available is HW version dependent
}}}
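A one-liner sketch of the same check: count the logical CPUs, which with hyper-threading is 2x the configured core count (20 for the 10-cores-per-server example above).
{{{
# count logical CPUs seen by the OS
grep -c ^processor /proc/cpuinfo
}}}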
Certified Compilers
Doc ID: Note:43208.1
Client / Server / Interoperability Support Between Different Oracle Versions
Doc ID: Note:207303.1
Oracle Server (RDBMS) Releases Support Status Summary
Doc ID: Note:161818.1
Is Oracle10g Instant Client Certified With Oracle 9i or Oracle 8i Databases
Doc ID: Note:273972.1
Client Application Fails After Upgrade of Client Libraries
Doc ID: Note:268174.1
Basic OCI8 Testcase
Doc ID: Note:277543.1
Basic OCCI Testcase
Doc ID: Note:277544.1
OCI/OCCI/Precompilers Testcase FAQ
Doc ID: Note:271406.1
Where do I Find OCCI Support for Microsoft Visual Studio 2005 / Microsoft Visual C++ 8.0?
Doc ID: Note:362644.1
Which OCI Functions Were Introduced In What Release Starting With ORACLE RDBMS 8.0
Doc ID: Note:301983.1
On What Unix/Linux OS are Oracle ODBC Drivers Available ?
Doc ID: Note:396635.1
Supported ODBC Configurations
Doc ID: Note:66403.1
ODBC COMPATABILITY ISSUES
Doc ID: Note:1027811.6
"ORACLE CLIENT NETWORKING COMPONENTS WERE NOT FOUND" w/CONFIGURING ODBC
Doc ID: Note:1014690.102
Oracle Database Client Certification Notes 10g Release 2 (10.2.0.3) for Microsoft Windows Vista
Doc ID: Note:415166.1
Can Instant Client 10g Run On Windows Vista?
Doc ID: Note:459507.1
Installation Instructions for Oracle ODBC Driver Release 9.2.0.5.4
Doc ID: Note:290886.1
ODBC and Oracle10g Supportability
Doc ID: Note:273215.1
How To Implement Expiration Of Passwords Using ODBC
Doc ID: Note:268240.1
Using ODBC From a Windows NT Service
Doc ID: Note:1016672.4
ODBC Compatibility Matrix for the Macintosh Platform
Doc ID: Note:76570.1
Connection from ODBC Test Fails With TNS-12535
Doc ID: Note:170795.1
Unable to Use SET SAVEPOINT While Using ODBC Application
Doc ID: Note:163986.1
ODBC ARCHITECTURE FOR ORACLE DATABASE
Doc ID: Note:106110.1
Setting up the Oracle ODBC Driver and DSN on Windows 95/98/NT Client
Doc ID: Note:107364.1
Bug 3564573 - ORA-1017 when 10g client connects to 8i/9i server with EBCDIC <-> ASCII connection
Doc ID: Note:3564573.8
Bug 3437884 - 10g client cannot connect to 8.1.7.0 - 8.1.7.3 server
Doc ID: Note:3437884.8
ALERT: Connections from Oracle 9.2 to Oracle7 are Not Supported
Doc ID: Note:207319.1
Database, FMW, and OCS Software Error Correction Support Policy
Doc ID: Note:209768.1
Oracle Database Server support Matrix for Windows XP / 2003 64-Bit (Itanium)
Doc ID: Note:236183.1
Oracle Database Server and Networking Patches for Microsoft Platforms
Doc ID: Note:161549.1
"An Unsupported Operation was Attempted" Error When Trying to Create DSN With ODBC 10.2.0.3.0
Doc ID: Note:403021.1
Unable to Connect With Microsoft ODBC Driver for Oracle and 64-Bit Oracle Client
Doc ID: Note:417246.1
ODBC BASIC OVERVIEW
Doc ID: Note:1003717.6
http://support.microsoft.com/kb/190475
http://support.microsoft.com/kb/244661
http://support.microsoft.com/kb/259959/
http://support.microsoft.com/kb/306787/
Install ODI
http://avdeo.com/2009/01/19/installing-oracle-data-integrator-odi/
Oracle® Fusion Middleware
Integrating Big Data with Oracle Data Integrator
12 c (12.2.1.2.6)
https://docs.oracle.com/middleware/122126/odi/odi-big-data/ODIBD.pdf
http://download.oracle.com/docs/cd/E15985_01/index.htm
http://download.oracle.com/docs/cd/E15985_01/doc.10136/release/ODIRN.pdf
How To Set Up ODI With Mainframes And Mid-Range Servers? [ID 423769.1]
Performance Optimization Strategies For ODI [ID 423726.1]
Compatibility Of Non Transactional Databases With ODI [ID 424454.1]
Version Compatibility Between ODI Components [ID 423825.1]
Where Are The Certification Matrices For ODI 10g and 11g Which Indicate Platform And Database Compatibilities [ID 424527.1]
What Are The Best Practices When Installing Oracle Data Integrator ? [ID 424598.1]
Oracle Data Integrator/Sunopsis, Releases and Patches [ID 456313.1]
http://www.toadworld.com/platforms/oracle/w/wiki/11469.oracle-exadata-deployment-assistance-oeda
also on ch8 of ExaBook2ndEd
https://community.oracle.com/message/12572401
install xterm!!!
desktopserver kernel http://www.audentia-gestion.fr/oracle/uek-for-linux-177034.pdf , https://oss.oracle.com/pipermail/el-errata/2011-August/002251.html , https://oss.oracle.com/el5/docs/RELEASE-NOTES-U7-en.html
<<<
IO affinity
IO affinity ensures processing of a completed IO is handled by the same CPU that initiated the IO. It can have a fairly large impact on performance, especially on large NUMA machines. IO affinity is turned on by default, but it can be controlled via the tunable in /sys/block/xxx/queue/rq_affinity. For example, the following will turn IO affinity on:
echo 1 > /sys/block/sda/queue/rq_affinity
<<<
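To check the current value for a device before toggling it (a sketch for device sda):
{{{
cat /sys/block/sda/queue/rq_affinity
}}}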
newer kernels https://docs.oracle.com/en/operating-systems/uek/ , https://www.oracle.com/a/ocom/docs/linux/oracle-linux-ds-1985973.pdf
https://blogs.oracle.com/scoter/oracle-linux-and-unbreakable-enterprise-kernel-uek-releases
https://en.wikipedia.org/wiki/Oracle_Linux#cite_note-57
https://community.oracle.com/tech/apps-infra/discussion/comment/11032808
https://oss.oracle.com/el5/docs/
https://www.oracle.com/technetwork/cn/community/developer-day/3-oracle-linux-2525017-zhs.pdf
https://support.purestorage.com/Solutions/Linux/Linux_Reference/Linux_Recommended_Settings
{{{
# Recommended settings for Pure Storage FlashArray.
# Use noop scheduler for high-performance solid-state storage for SCSI devices
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/scheduler}="noop"
# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/add_random}="0"
# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/rq_affinity}="2"
# Set the HBA timeout to 60 seconds
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{device/timeout}="60"
}}}
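After dropping those rules into /etc/udev/rules.d/, a sketch of applying them without a reboot (standard udevadm commands):
{{{
# reload the rules and re-trigger change events so existing devices pick them up
udevadm control --reload-rules
udevadm trigger --type=devices --action=change
}}}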
Oracle Linux and MySQL TPC-C Optimizations When Implementing the Sun Flash Accelerator F80 PCIe Card
http://www.oracle.com/us/technologies/linux/linux-and-mysql-optimizations-wp-2332321.pdf
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/performance_tuning_guide/index
<<<
rq_affinity
By default, I/O completions can be processed on a different processor than the processor that issued the I/O request. Set rq_affinity to 1 to disable this ability and perform completions only on the processor that issued the I/O request. This can improve the effectiveness of processor data caching.
<<<
https://www.google.com/search?q=OEL+rq_affinity
-- from http://www.perfvision.com/info/oem.html
{{{
default OEM web port
http://host:1158/em/console/
OEM license
Database Diagnostics Pack
Automatic Workload Repository
ADDM (Automated Database Diagnostic Monitor)
Performance Monitoring (Database and Host)
Event Notifications: Notification Methods, Rules and Schedules
Event history/metric history (Database and Host)
Blackouts
Dynamic metric baselines
Memory performance monitoring
Database Tuning Pack
SQL Access Advisor
SQL Tuning Advisor
SQL Tuning Sets
Reorganize Objects
Configuration Management Pack
Database and Host Configuration
Deployments
Patch Database and View Patch Cache
Patch staging
Clone Database
Clone Oracle Home
Search configuration
Compare configuration
Policies
}}}
{{{
set arraysize 5000
COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
-- ttitle center 'AWR IO Workload Report' skip 2
set pagesize 50000
set linesize 550
col instname format a15 heading instname -- instname
col hostname format a30 heading hostname -- hostname
col tm format a17 heading tm -- "tm"
col id format 99999 heading id -- "snapid"
col inst format 90 heading inst -- "inst"
col dur format 999990.00 heading dur -- "dur"
col cpu format 90 heading cpu -- "cpu"
col cap format 9999990.00 heading cap -- "capacity"
col dbt format 999990.00 heading dbt -- "DBTime"
col dbc format 99990.00 heading dbc -- "DBcpu"
col bgc format 99990.00 heading bgc -- "BGcpu"
col rman format 9990.00 heading rman -- "RMANcpu"
col aas format 990.0 heading aas -- "AAS"
col totora format 9999990.00 heading totora -- "TotalOracleCPU"
col busy format 9999990.00 heading busy -- "BusyTime"
col load format 990.00 heading load -- "OSLoad"
col totos format 9999990.00 heading totos -- "TotalOSCPU"
col mem format 999990.00 heading mem -- "PhysicalMemorymb"
col IORs format 99990.000 heading IORs -- "IOPsr"
col IOWs format 99990.000 heading IOWs -- "IOPsw"
col IORedo format 99990.000 heading IORedo -- "IOPsredo"
col IORmbs format 99990.000 heading IORmbs -- "IOrmbs"
col IOWmbs format 99990.000 heading IOWmbs -- "IOwmbs"
col redosizesec format 99990.000 heading redosizesec -- "Redombs"
col logons format 990 heading logons -- "Sess"
col logone format 990 heading logone -- "SessEnd"
col exsraw format 99990.000 heading exsraw -- "Execrawdelta"
col exs format 9990.000 heading exs -- "Execs"
col oracpupct format 990 heading oracpupct -- "OracleCPUPct"
col rmancpupct format 990 heading rmancpupct -- "RMANCPUPct"
col oscpupct format 990 heading oscpupct -- "OSCPUPct"
col oscpuusr format 990 heading oscpuusr -- "USRPct"
col oscpusys format 990 heading oscpusys -- "SYSPct"
col oscpuio format 990 heading oscpuio -- "IOPct"
col SIORs format 99990.000 heading SIORs -- "IOPsSingleBlockr"
col MIORs format 99990.000 heading MIORs -- "IOPsMultiBlockr"
col TIORmbs format 99990.000 heading TIORmbs -- "Readmbs"
col SIOWs format 99990.000 heading SIOWs -- "IOPsSingleBlockw"
col MIOWs format 99990.000 heading MIOWs -- "IOPsMultiBlockw"
col TIOWmbs format 99990.000 heading TIOWmbs -- "Writembs"
col TIOR format 99990.000 heading TIOR -- "TotalIOPsr"
col TIOW format 99990.000 heading TIOW -- "TotalIOPsw"
col TIOALL format 99990.000 heading TIOALL -- "TotalIOPsALL"
col ALLRmbs format 99990.000 heading ALLRmbs -- "TotalReadmbs"
col ALLWmbs format 99990.000 heading ALLWmbs -- "TotalWritembs"
col GRANDmbs format 99990.000 heading GRANDmbs -- "TotalmbsALL"
col readratio format 990 heading readratio -- "ReadRatio"
col writeratio format 990 heading writeratio -- "WriteRatio"
col diskiops format 99990.000 heading diskiops -- "HWDiskIOPs"
col numdisks format 99990.000 heading numdisks -- "HWNumofDisks"
col flashcache format 990 heading flashcache -- "FlashCacheHitsPct"
col cellpiob format 99990.000 heading cellpiob -- "CellPIOICmbs"
col cellpiobss format 99990.000 heading cellpiobss -- "CellPIOICSmartScanmbs"
col cellpiobpreoff format 99990.000 heading cellpiobpreoff -- "CellPIOpredoffloadmbs"
col cellpiobsi format 99990.000 heading cellpiobsi -- "CellPIOstorageindexmbs"
col celliouncomb format 99990.000 heading celliouncomb -- "CellIOuncompmbs"
col cellpiobs format 99990.000 heading cellpiobs -- "CellPIOsavedfilecreationmbs"
col cellpiobsrman format 99990.000 heading cellpiobsrman -- "CellPIOsavedRMANfilerestorembs"
SELECT * FROM
(
SELECT trim('&_instname') instname,
trim('&_dbid') db_id,
trim('&_hostname') hostname,
s0.snap_id id,
TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
s0.instance_number inst,
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
(((s20t1.value - s20t0.value) - (s21t1.value - s21t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as SIORs,
(((s23t1.value - s23t0.value) - (s24t1.value - s24t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as SIOWs,
((s13t1.value - s13t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as IORedo,
(((s22t1.value - s22t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as TIORmbs,
(((s25t1.value - s25t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as TIOWmbs,
(((s29t1.value - s29t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as cellpiobpreoff,
((s33t1.value - s33t0.value) / (s20t1.value - s20t0.value))*100 as flashcache
FROM dba_hist_snapshot s0,
dba_hist_snapshot s1,
dba_hist_sysstat s13t0, -- redo writes, diffed
dba_hist_sysstat s13t1,
dba_hist_sysstat s20t0, -- physical read total IO requests, diffed
dba_hist_sysstat s20t1,
dba_hist_sysstat s21t0, -- physical read total multi block requests, diffed
dba_hist_sysstat s21t1,
dba_hist_sysstat s22t0, -- physical read total bytes, diffed
dba_hist_sysstat s22t1,
dba_hist_sysstat s23t0, -- physical write total IO requests, diffed
dba_hist_sysstat s23t1,
dba_hist_sysstat s24t0, -- physical write total multi block requests, diffed
dba_hist_sysstat s24t1,
dba_hist_sysstat s25t0, -- physical write total bytes, diffed
dba_hist_sysstat s25t1,
dba_hist_sysstat s29t0, -- cell physical IO bytes eligible for predicate offload, diffed, cellpiobpreoff
dba_hist_sysstat s29t1,
dba_hist_sysstat s33t0, -- cell flash cache read hits
dba_hist_sysstat s33t1
WHERE s0.dbid = &_dbid -- CHANGE THE DBID HERE!
AND s1.dbid = s0.dbid
AND s13t0.dbid = s0.dbid
AND s13t1.dbid = s0.dbid
AND s20t0.dbid = s0.dbid
AND s20t1.dbid = s0.dbid
AND s21t0.dbid = s0.dbid
AND s21t1.dbid = s0.dbid
AND s22t0.dbid = s0.dbid
AND s22t1.dbid = s0.dbid
AND s23t0.dbid = s0.dbid
AND s23t1.dbid = s0.dbid
AND s24t0.dbid = s0.dbid
AND s24t1.dbid = s0.dbid
AND s25t0.dbid = s0.dbid
AND s25t1.dbid = s0.dbid
AND s29t0.dbid = s0.dbid
AND s29t1.dbid = s0.dbid
AND s33t0.dbid = s0.dbid
AND s33t1.dbid = s0.dbid
--AND s0.instance_number = &_instancenumber -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number = s0.instance_number
AND s13t0.instance_number = s0.instance_number
AND s13t1.instance_number = s0.instance_number
AND s20t0.instance_number = s0.instance_number
AND s20t1.instance_number = s0.instance_number
AND s21t0.instance_number = s0.instance_number
AND s21t1.instance_number = s0.instance_number
AND s22t0.instance_number = s0.instance_number
AND s22t1.instance_number = s0.instance_number
AND s23t0.instance_number = s0.instance_number
AND s23t1.instance_number = s0.instance_number
AND s24t0.instance_number = s0.instance_number
AND s24t1.instance_number = s0.instance_number
AND s25t0.instance_number = s0.instance_number
AND s25t1.instance_number = s0.instance_number
AND s29t0.instance_number = s0.instance_number
AND s29t1.instance_number = s0.instance_number
AND s33t0.instance_number = s0.instance_number
AND s33t1.instance_number = s0.instance_number
AND s1.snap_id = s0.snap_id + 1
AND s13t0.snap_id = s0.snap_id
AND s13t1.snap_id = s0.snap_id + 1
AND s20t0.snap_id = s0.snap_id
AND s20t1.snap_id = s0.snap_id + 1
AND s21t0.snap_id = s0.snap_id
AND s21t1.snap_id = s0.snap_id + 1
AND s22t0.snap_id = s0.snap_id
AND s22t1.snap_id = s0.snap_id + 1
AND s23t0.snap_id = s0.snap_id
AND s23t1.snap_id = s0.snap_id + 1
AND s24t0.snap_id = s0.snap_id
AND s24t1.snap_id = s0.snap_id + 1
AND s25t0.snap_id = s0.snap_id
AND s25t1.snap_id = s0.snap_id + 1
AND s29t0.snap_id = s0.snap_id
AND s29t1.snap_id = s0.snap_id + 1
AND s33t0.snap_id = s0.snap_id
AND s33t1.snap_id = s0.snap_id + 1
AND s13t0.stat_name = 'redo writes'
AND s13t1.stat_name = s13t0.stat_name
AND s20t0.stat_name = 'physical read total IO requests'
AND s20t1.stat_name = s20t0.stat_name
AND s21t0.stat_name = 'physical read total multi block requests'
AND s21t1.stat_name = s21t0.stat_name
AND s22t0.stat_name = 'physical read total bytes'
AND s22t1.stat_name = s22t0.stat_name
AND s23t0.stat_name = 'physical write total IO requests'
AND s23t1.stat_name = s23t0.stat_name
AND s24t0.stat_name = 'physical write total multi block requests'
AND s24t1.stat_name = s24t0.stat_name
AND s25t0.stat_name = 'physical write total bytes'
AND s25t1.stat_name = s25t0.stat_name
AND s29t0.stat_name = 'cell physical IO bytes eligible for predicate offload'
AND s29t1.stat_name = s29t0.stat_name
AND s33t0.stat_name = 'cell flash cache read hits'
AND s33t1.stat_name = s33t0.stat_name
)
-- WHERE
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (3391)
-- aas > 1
-- oscpuio > 50
-- rmancpupct > 0
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1 -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900 -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss') -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
{{{
SELECT A.INST_ID
, A.SNAP_ID
, TO_CHAR(A.START_TIME, 'YYYYMMDD HH24MISS') AS START_TIME
, A.DURATION_IN_MIN
, A.STAT_NAME_RPT
, DECODE( A.STAT_NAME_RPT
, 'flashcache'
, 100 * SUM((A.STAT_VALUE * A.STAT_OPER)) / SUM((A.STAT_VALUE2 * A.STAT_OPER))
, 'SIORs'
, SUM((A.STAT_VALUE * A.STAT_OPER)) / (60 * A.DURATION_IN_MIN)
, 'SIOWs'
, SUM((A.STAT_VALUE * A.STAT_OPER)) / (60 * A.DURATION_IN_MIN)
, 'IORedo'
, SUM((A.STAT_VALUE * A.STAT_OPER)) / (60 * A.DURATION_IN_MIN)
, SUM((A.STAT_VALUE * A.STAT_OPER)) / (60 * A.DURATION_IN_MIN) /1024/1024
) AS STAT_VALUE
FROM (SELECT S0.INSTANCE_NUMBER INST_ID
, S0.SNAP_ID AS SNAP_ID
, S0.END_INTERVAL_TIME AS START_TIME
, round( EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) as duration_in_min
, DECODE( ST0.STAT_NAME
, 'redo writes', 'IORedo' -- 13
, 'physical read total IO requests', 'SIORs' -- 20
, 'physical read total multi block requests', 'SIORs' -- 21
, 'physical read total bytes', 'TIORmbs' -- 22
, 'physical write total IO requests', 'SIOWs' -- 23
, 'physical write total multi block requests', 'SIOWs' -- 24
, 'physical write total bytes', 'TIOWmbs' -- 25
, 'cell physical IO bytes eligible for predicate offload', 'cellpiobpreoff' -- 29
-- , 'cell flash cache read hits', '' -- 33
, '') AS STAT_NAME_RPT
, DECODE( ST0.STAT_NAME
, 'redo writes', 1 -- 13
, 'physical read total IO requests', 1 -- 20
, 'physical read total multi block requests', -1 -- 21
, 'physical read total bytes', 1 -- 22
, 'physical write total IO requests', 1 -- 23
, 'physical write total multi block requests', -1 -- 24
, 'physical write total bytes', 1 -- 25
, 'cell physical IO bytes eligible for predicate offload', 1 -- 29
-- , 'cell flash cache read hits', '' -- 33
, '') AS STAT_OPER
, ST1.VALUE - ST0.VALUE AS STAT_VALUE
, 0 AS STAT_VALUE2
, DECODE( ST0.STAT_NAME
, 'redo writes', 3 -- 13
, 'physical read total IO requests', 1 -- 20
, 'physical read total multi block requests', 1 -- 21
, 'physical read total bytes', 4 -- 22
, 'physical write total IO requests', 2 -- 23
, 'physical write total multi block requests', 2 -- 24
, 'physical write total bytes', 5 -- 25
, 'cell physical IO bytes eligible for predicate offload', 6 -- 29
, 99) AS STAT_ORDER
FROM V$DATABASE VD
, dba_hist_snapshot s0
, dba_hist_snapshot s1
, dba_hist_sysstat st0
, dba_hist_sysstat st1
WHERE VD.DBID = S0.DBID
AND S1.DBID = S0.DBID
AND S1.INSTANCE_NUMBER = S0.INSTANCE_NUMBER
AND S0.DBID = ST0.DBID
AND S0.INSTANCE_NUMBER = ST0.INSTANCE_NUMBER
AND S0.SNAP_ID = ST0.SNAP_ID
AND S1.DBID = ST1.DBID
AND S1.INSTANCE_NUMBER = ST1.INSTANCE_NUMBER
AND S1.SNAP_ID = ST1.SNAP_ID
AND S0.SNAP_ID +1 = ST1.SNAP_ID
AND ST0.STAT_ID = ST1.STAT_ID
AND ST0.STAT_NAME IN ( 'redo writes'
, 'physical read total IO requests'
, 'physical read total multi block requests'
, 'physical read total bytes'
, 'physical write total IO requests'
, 'physical write total multi block requests'
, 'physical write total bytes'
, 'cell physical IO bytes eligible for predicate offload'
)
UNION ALL
SELECT S0.INSTANCE_NUMBER INST_ID
, S0.SNAP_ID AS SNAP_ID
, S0.END_INTERVAL_TIME AS START_TIME
, round( EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) as duration_in_min
, 'flashcache' AS STAT_NAME_RPT
, 1 AS STAT_OPER
, DECODE(ST0.STAT_NAME ,'cell flash cache read hits', ( ST1.VALUE - ST0.VALUE) ,0) AS STAT_VALUE
, DECODE(ST0.STAT_NAME ,'physical read total IO requests', ( ST1.VALUE - ST0.VALUE) ,0) AS STAT_VALUE2
, 7 AS STAT_ORDER
FROM V$DATABASE VD
, dba_hist_snapshot s0
, dba_hist_snapshot s1
, dba_hist_sysstat st0
, dba_hist_sysstat st1
WHERE VD.DBID = S0.DBID
AND S1.DBID = S0.DBID
AND S1.INSTANCE_NUMBER = S0.INSTANCE_NUMBER
AND S0.DBID = ST0.DBID
AND S0.INSTANCE_NUMBER = ST0.INSTANCE_NUMBER
AND S0.SNAP_ID = ST0.SNAP_ID
AND S1.DBID = ST1.DBID
AND S1.INSTANCE_NUMBER = ST1.INSTANCE_NUMBER
AND S1.SNAP_ID = ST1.SNAP_ID
AND S0.SNAP_ID +1 = ST1.SNAP_ID
AND ST0.STAT_ID = ST1.STAT_ID
AND ST0.STAT_NAME IN ( 'physical read total IO requests' -- 20
, 'cell flash cache read hits' -- 33
)
) A
GROUP BY A.INST_ID
, A.SNAP_ID
, A.START_TIME
, A.DURATION_IN_MIN
, A.STAT_NAME_RPT
, A.STAT_ORDER
ORDER BY 2 DESC, 1 ASC, A.STAT_ORDER ASC
;
################################################################################################################################################################
-- awr_iowl column format
instname DB_ID hostname id tm inst dur SIORs SIOWs IORedo TIORmbs TIOWmbs cellpiobpreoff flashcache
--------------- ---------- ------------------------------ ------ ----------------- ---- ---------- ---------- ---------- ---------- ---------- ---------- -------------- ----------
pib01scp1 1859430704 x03pdb01 1464 05/11/13 23:00:07 1 60.18 28.203 40.732 1.427 99.087 27.639 97.651 5
-- final out of row format
INST_ID SNAP_ID START_TIME DURATION_IN_MIN STAT_NAME_RPT STAT_VALUE
---------- ---------- --------------- --------------- -------------- ----------
1 1464 20130511 230007 60.18 SIORs 28.20316827
1 1464 20130511 230007 60.18 SIOWs 40.7322477
1 1464 20130511 230007 60.18 IORedo 1.426553672
1 1464 20130511 230007 60.18 TIORmbs 99.08688756
1 1464 20130511 230007 60.18 TIOWmbs 27.63852702
1 1464 20130511 230007 60.18 cellpiobpreoff 97.65057259
1 1464 20130511 230007 60.18 flashcache 4.588101237
-- computations
flashcache = 100 * 28588/623090 = 4.588
SIORs = (623090-521254)/3610.8 = 28.2031682729589
-- raw 2nd union row format
INST_ID SNAP_ID START_TIME DURATION_IN_MIN STAT_NAME_RPT STAT_OPER STAT_VALUE STAT_VALUE2 STAT_ORDER
---------- ---------- ------------------------------- --------------- ------------- ---------- ---------- ----------- ----------
1 1464 11-MAY-13 11.00.07.038000000 PM 60.18 flashcache 1 0 623090 7 <<
1 1464 11-MAY-13 11.00.07.038000000 PM 60.18 flashcache 1 28588 0 7 <<
2 1464 11-MAY-13 11.00.07.133000000 PM 60.18 flashcache 1 94156 0 7
2 1464 11-MAY-13 11.00.07.133000000 PM 60.18 flashcache 1 0 706102 7
3 1464 11-MAY-13 11.00.07.143000000 PM 60.18 flashcache 1 31014 0 7
3 1464 11-MAY-13 11.00.07.143000000 PM 60.18 flashcache 1 0 175835 7
4 1464 11-MAY-13 11.00.07.134000000 PM 60.18 flashcache 1 0 69121 7
4 1464 11-MAY-13 11.00.07.134000000 PM 60.18 flashcache 1 29802 0 7
-- raw 1st union row format
INST_ID SNAP_ID START_TIME DURATION_IN_MIN STAT_NAME_RPT STAT_OPER STAT_VALUE STAT_VALUE2 STAT_ORDER
---------- ---------- ------------------------------- --------------- -------------- ---------- ---------- ----------- ----------
1 1464 11-MAY-13 11.00.07.038000000 PM 60.18 SIORs 1 623090 0 1 <<
1 1464 11-MAY-13 11.00.07.038000000 PM 60.18 SIORs -1 521254 0 1 <<
1 1464 11-MAY-13 11.00.07.038000000 PM 60.18 SIOWs 1 174630 0 2
1 1464 11-MAY-13 11.00.07.038000000 PM 60.18 SIOWs -1 27554 0 2
1 1464 11-MAY-13 11.00.07.038000000 PM 60.18 IORedo 1 5151 0 3
1 1464 11-MAY-13 11.00.07.038000000 PM 60.18 TIORmbs 1 3.8E+11 0 4
1 1464 11-MAY-13 11.00.07.038000000 PM 60.18 TIOWmbs 1 1.0E+11 0 5
1 1464 11-MAY-13 11.00.07.038000000 PM 60.18 cellpiobpreoff 1 3.7E+11 0 6
2 1464 11-MAY-13 11.00.07.133000000 PM 60.18 IORedo 1 5882 0 3
2 1464 11-MAY-13 11.00.07.133000000 PM 60.18 SIORs 1 706102 0 1
2 1464 11-MAY-13 11.00.07.133000000 PM 60.18 SIORs -1 567419 0 1
2 1464 11-MAY-13 11.00.07.133000000 PM 60.18 SIOWs -1 33964 0 2
2 1464 11-MAY-13 11.00.07.133000000 PM 60.18 SIOWs 1 169899 0 2
2 1464 11-MAY-13 11.00.07.133000000 PM 60.18 TIORmbs 1 4.2E+11 0 4
2 1464 11-MAY-13 11.00.07.133000000 PM 60.18 TIOWmbs 1 1.1E+11 0 5
2 1464 11-MAY-13 11.00.07.133000000 PM 60.18 cellpiobpreoff 1 4.1E+11 0 6
3 1464 11-MAY-13 11.00.07.143000000 PM 60.18 IORedo 1 5978 0 3
3 1464 11-MAY-13 11.00.07.143000000 PM 60.18 SIORs -1 141463 0 1
3 1464 11-MAY-13 11.00.07.143000000 PM 60.18 SIORs 1 175835 0 1
3 1464 11-MAY-13 11.00.07.143000000 PM 60.18 SIOWs -1 48135 0 2
3 1464 11-MAY-13 11.00.07.143000000 PM 60.18 SIOWs 1 77786 0 2
3 1464 11-MAY-13 11.00.07.143000000 PM 60.18 TIORmbs 1 4.6E+11 0 4
3 1464 11-MAY-13 11.00.07.143000000 PM 60.18 TIOWmbs 1 2.0E+11 0 5
3 1464 11-MAY-13 11.00.07.143000000 PM 60.18 cellpiobpreoff 1 4.5E+10 0 6
4 1464 11-MAY-13 11.00.07.134000000 PM 60.18 IORedo 1 6011 0 3
4 1464 11-MAY-13 11.00.07.134000000 PM 60.18 SIORs 1 69121 0 1
4 1464 11-MAY-13 11.00.07.134000000 PM 60.18 SIORs -1 38704 0 1
4 1464 11-MAY-13 11.00.07.134000000 PM 60.18 SIOWs -1 19434 0 2
4 1464 11-MAY-13 11.00.07.134000000 PM 60.18 SIOWs 1 163201 0 2
4 1464 11-MAY-13 11.00.07.134000000 PM 60.18 TIORmbs 1 1.5E+11 0 4
4 1464 11-MAY-13 11.00.07.134000000 PM 60.18 TIOWmbs 1 7.2E+10 0 5
4 1464 11-MAY-13 11.00.07.134000000 PM 60.18 cellpiobpreoff 1 121110528 0 6
32 rows selected
}}}
Oracle Enterprise Manager 12c: Oracle Exadata Discovery Cookbook http://www.oracle.com/technetwork/oem/exa-mgmt/em12c-exadata-discovery-cookbook-1662643.pdf
Prerequisite script for Exadata Discovery in Oracle Enterprise Manager Cloud Control 12c (Doc ID 1473912.1)
https://community.oracle.com/message/12261216#12261216 <-- sucks you can't make use of existing 12cR2 agent, so I just installed a new agent (12cR3) on a new home
http://docs.oracle.com/cd/E24628_01/install.121/e24089/appdx_repoint_agent.htm#BABICJCE
then follow these steps from the docs:
http://docs.oracle.com/cd/E24628_01/doc.121/e27442/ch2_deployment.htm#EMXIG215
http://docs.oracle.com/cd/E24628_01/doc.121/e27442/ch3_discovery.htm#EMXIG206
{{{
-- OLTP
alter system set sga_max_size=18G scope=spfile sid='*';
alter system set sga_target=0 scope=spfile sid='*';
alter system set db_cache_size=10G scope=spfile sid='*';
alter system set shared_pool_size=2G scope=spfile sid='*';
alter system set large_pool_size=4G scope=spfile sid='*';
alter system set java_pool_size=256M scope=spfile sid='*';
alter system set pga_aggregate_target=5G scope=spfile sid='*';
-- DW
alter system set sga_max_size=18G scope=spfile sid='*';
alter system set sga_target=0 scope=spfile sid='*';
alter system set db_cache_size=10G scope=spfile sid='*';
alter system set shared_pool_size=2G scope=spfile sid='*';
alter system set large_pool_size=4G scope=spfile sid='*';
alter system set java_pool_size=256M scope=spfile sid='*';
alter system set pga_aggregate_target=20G scope=spfile sid='*';
}}}
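Since sga_target=0 here (manual SGA management), a quick sanity check after restart (a sketch) that the pools came up at the sizes set above:
{{{
SELECT component, ROUND(current_size/1024/1024/1024,2) AS gb
FROM   v$sga_dynamic_components
WHERE  current_size > 0
ORDER  BY gb DESC;
}}}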
http://www.rittmanmead.com/2008/09/testing-advanced-oltp-compression-in-oracle-11g/
http://www.rittmanmead.com/2006/07/techniques-to-reduce-io-partitioning-and-compression/
10205OMS Restarts When XMLLoader Times out For Repository Connection or 11G OMS login hangs via Cisco Firewall [ID 1073473.1]
http://wiki.oracle.com/page/Oracle+OpenWorld+Unconference
http://wiki.oracle.com/page/What+to+Expect+at+the+Unconference
2010 unconference
http://wikis.sun.com/display/JavaOne/Unconferences+at+JavaOne+and+Oracle+Develop+2010
Best Practices for Maintaining Your Oracle RAC Cluster [CON8252]
https://oracleus.activeevents.com/2014/connect/sessionDetail.ww?SESSION_ID=8252&tclass=popup
Oracle RAC Operational Best Practices [CON8171]
https://oracleus.activeevents.com/2014/connect/sessionDetail.ww?SESSION_ID=8171
see other presentations and slides here, no need to register
https://oracleus.activeevents.com/2014/connect/search.ww#loadSearch-event=null&searchPhrase=&searchType=session&tc=0&sortBy=&p=&i(10009)=10105
<<<
Goal
Sometimes performance degrades after migrating from an earlier version to a higher one, for example from 10g to 11g. This tends to happen in cases where thorough testing has not been done before the migration. In such cases, reverting the parameter OPTIMIZER_FEATURES_ENABLE to the previous version may improve performance.
Solution
The parameter can be set from the system or session level:
1. alter system set optimizer_features_enable='10.2.0.4'
scope=spfile;
2. alter session set optimizer_features_enable='10.2.0.4';
When setting this parameter, remember the optimizer parameters will revert to the older version, so the new optimizer features will not be used. Furthermore, this parameter is not meant to be a permanent fix; it is recommended only as a temporary measure until permanent tuning is implemented.
<<<
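A sketch of testing the effect at session level first, before committing the change to the spfile:
{{{
ALTER SESSION SET optimizer_features_enable = '10.2.0.4';
EXPLAIN PLAN FOR SELECT COUNT(*) FROM dba_objects;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
}}}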
I've got two clients, both of them pure OLTP environments, that upgraded to new hardware and faster storage.. and from 10gR2 to 11gR2..
Both of them have their own tricks to retain the SQL plans they had before.
1) client1
After the upgrade most of the plans changed and tended to favor the faster CPU of the new environment. After investigating using SQLTXPLAIN - SQLTCOMPARE
and some test cases, I found out that the fix is to put the CPU system statistics back to the old values of the old processor (see the sketch below).. then when I did that, everything
went back to the old plans.
2) client2
Now this client environment is interesting; they had these parameters set:
{{{
optimizer_mode "FIRST_ROWS_10"
optimizer_index_cost_adj "5"
}}}
After the upgrade the only plan changes we had were the reporting SQLs, which we easily fixed with profiles from the old environment. But the OLTP stuff did not change at all,
and that's because these two parameters made the database hardware agnostic, cool! So right at the implementation they had thought about this ;)
So for OLTP environments, you've got these two tricks at your disposal.
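A sketch of trick #1 (hypothetical value shown): check the current system statistics, then pin the CPU speed back to the number captured on the old hardware.
{{{
-- current workload/noworkload system statistics
SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';
-- set the CPU speed back to the old server's value (1500 MHz here is a placeholder)
EXEC DBMS_STATS.SET_SYSTEM_STATS('CPUSPEEDNW', 1500);
}}}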
http://oprofile.sourceforge.net/examples/
<<showtoc>>
! Problem and fix - function not closing cursors
<<<
Here the "BAS.ALLOC_UTILITIES_SQL_CALC.CAL_QTY_ALLOC_FR_IMNT_RE" function is called multiple times and inside it's not closing the opened cursors.
Adding the close of cursors fixed the issue.
The troubleshooting steps:
* profile the session and cursor usage (get all diagnostic data)
* increase open_cursors from 300 to 1000
* restart the database and weblogic app server (to get a clean slate)
* profile the session and cursor usage and detail on the SQL_ID - catch the increase of cursors up to 1000 max, and validate the SQL from initial profiling. In this case the same function popped up as the culprit
* implement fix on the function
* kill the problem weblogic session to get clean slate on cursors of that session
* re-run app
<<<
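A minimal sketch of the fix pattern (hypothetical function and cursor names; the real CAL_QTY_ALLOC_FR_IMNT_RE body isn't in these notes): every OPEN gets a matching CLOSE, including on error paths.
{{{
CREATE OR REPLACE FUNCTION cal_qty_sketch (p_id NUMBER) RETURN NUMBER IS
  CURSOR c_qty IS SELECT 1 FROM dual WHERE p_id = 1;
  l_val NUMBER;
BEGIN
  OPEN c_qty;
  FETCH c_qty INTO l_val;
  CLOSE c_qty;              -- the missing step in the problem function
  RETURN l_val;
EXCEPTION
  WHEN OTHERS THEN
    IF c_qty%ISOPEN THEN
      CLOSE c_qty;          -- close on error paths too, otherwise the leak persists
    END IF;
    RAISE;
END;
/
}}}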
!! side note
* Cursor leak is different from PGA continuously increasing
{{{
I encountered a similar issue recently on a custom module of Oracle CC&B. The process errors with ORA-04036.
If you dump dba_hist_active_sess_history and graph it in time series you'll see PGA_ALLOCATED increase over time, and you can color that by SQL_ID and track the PL/SQL entry object id that invoked those SQLs.
The problem was the package contained logic that would loop based on the number of rows of the driving cursor and push them to a PL/SQL collection in memory, which overloads the PGA (reaching up to 30GB). The culprit SQL_ID was executed 283 million times with 282 million rows processed, all pushed to PGA. (Increasing the Size of a Collection (EXTEND Method) https://docs.oracle.com/cd/B28359_01/appdev.111/b28370/collections.htm#CJAIJHEI)
The recommendation to the developer was to rewrite the package to a set-based approach rather than row by row.
Putting 282 million rows in a PL/SQL collection is not scalable (due to server physical memory limitations) and will run longer as more rows are processed, vs one parallelized high-IO-bandwidth operation.
}}}
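A sketch of the time-series dump described above: PGA_ALLOCATED per ASH sample, which you can then graph and color by SQL_ID.
{{{
SELECT sample_time, session_id, sql_id,
       ROUND(pga_allocated/1024/1024) AS pga_mb
FROM   dba_hist_active_sess_history
WHERE  pga_allocated IS NOT NULL
ORDER  BY sample_time;
}}}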
! Below are the SQLs I used for troubleshooting
!! count SQLs on v$open_cursor
{{{
COLUMN USER_NAME FORMAT A15
SELECT s.machine, oc.user_name, oc.sql_text, count(1)
FROM v$open_cursor oc, v$session s
WHERE oc.sid = s.sid
GROUP BY s.machine, oc.user_name, oc.sql_text
HAVING COUNT(1) > 2
ORDER BY count(1) DESC
;
MACHINE USER_NAME SQL_TEXT COUNT(1)
---------------------------------------------------------------- --------------- ------------------------------------------------------------ ----------
appserver1 ALLOC_APP_USER SELECT BAS.ALLOC_UTILITIES_SQL_CALC.CAL_QTY_ALLOC_FR_IMNT_RE 360
appserver1 ALLOC_APP_USER WITH MAX_SEQ AS (SELECT 40
appserver1 ALLOC_APP_USER SELECT apbs.SKU, 33
appserver1 ALLOC_APP_USER WITH MAX_SEQ AS (SELECT /*+ MA 28
appserver1 ALLOC_APP_USER WITH MAX_SEQ AS (SELECT /*+ 25
appserver1 ALLOC_APP_USER SELECT DISTINCT FISCAL_MONTH, CASE WHEN FISCAL_MONTH = 23
appserver1 ALLOC_APP_USER WITH MAX_SEQ AS (SELECT /*+ MATERIALI 22
appserver1 ALLOC_APP_USER SELECT COUNT ( DISTINCT AIR.STORE_ID) STR_COUNT_STYLE FROM 22
appserver1 ALLOC_APP_USER SELECT DISTINCT BUYER_NAME, BUYER_ID FROM COMPANY_HIER_BUYER 21
appserver1 ALLOC_APP_USER SELECT LI_DETAILS.BATCH_ID, ALLOC_SKU0.SKU2, 21
appserver1 ALLOC_APP_USER SELECT DISTINCT ALLOC_ID, ALLOC_LINE_ID, APPT_DATE, DC_RE 20
}}}
!! breakdown by SID on v$open_cursor
{{{
set lines 300
COLUMN USER_NAME FORMAT A15
SELECT s.machine, oc.user_name, oc.sql_text, s.sid, count(1)
FROM v$open_cursor oc, v$session s
WHERE oc.sid = s.sid
GROUP BY s.machine, oc.user_name, oc.sql_text, s.sid
HAVING COUNT(1) > 2
ORDER BY count(1) DESC
;
MACHINE USER_NAME SQL_TEXT SID COUNT(1)
---------------------------------------------------------------- --------------- ------------------------------------------------------------ ---------- ----------
appserver1 ALLOC_APP_USER SELECT BAS.ALLOC_UTILITIES_SQL_CALC.CAL_QTY_ALLOC_FR_IMNT_RE 4 167
appserver1 ALLOC_APP_USER SELECT BAS.ALLOC_UTILITIES_SQL_CALC.CAL_QTY_ALLOC_FR_IMNT_RE 1466 109
appserver1 ALLOC_APP_USER SELECT BAS.ALLOC_UTILITIES_SQL_CALC.CAL_QTY_ALLOC_FR_IMNT_RE 1489 84
appserver1 ALLOC_APP_USER SELECT apbs.SKU, 1466 30
appserver1 ALLOC_APP_USER UPDATE ALGO_INPUT_FOR_REVIEW SET LOCK_FLAG ='Y' WHERE STORE_ 4 7
appserver1 ALLOC_APP_USER WITH MAX_SEQ AS (SELECT 4 6
}}}
!! sesstat open cursors count by SID, SQL_ID
Here SID 1012 shows open cursors climbing toward the 1000 limit of the open_cursors parameter (a per-session limit; notice the count increasing across successive runs).
{{{
set lines 300
col username format a30
select c.username,
a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id,
sum(a.value) "opened cursors current"
from v$sesstat a, v$statname b, v$session c
where a.statistic# = b.statistic#
and b.name = 'opened cursors current'
and c.sid = a.sid
group by c.username, a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id
order by sum(a.value) asc;
USERNAME SID MACHINE SQL_ID PREV_SQL_ID PLSQL_OBJECT_ID opened cursors current
------------------------------ ------ ------------------------------ ------------- ------------- --------------- ----------------------
...
ALLOC_APP_USER 412 appserver1 4azbr8f51a1nr 49
ALLOC_APP_USER 98 appserver1 bm3tbmdznzg62 80
ALLOC_APP_USER 1012 appserver1 4ps27vbzfnw10 122 <<
135 rows selected.
USERNAME SID MACHINE SQL_ID PREV_SQL_ID PLSQL_OBJECT_ID opened cursors current
------------------------------ ------ ------------------------------ ------------- ------------- --------------- ----------------------
...
ALLOC_APP_USER 1106 appserver1 7xs6pawnx3gj2 51
ALLOC_APP_USER 98 appserver1 bm3tbmdznzg62 80
ALLOC_APP_USER 1012 appserver1 5m4nu3860346k 335 <<
137 rows selected.
USERNAME SID MACHINE SQL_ID PREV_SQL_ID PLSQL_OBJECT_ID opened cursors current
------------------------------ ------ ------------------------------ ------------- ------------- --------------- ----------------------
...
ALLOC_APP_USER 1106 appserver1 31vdxhgw47a00 72
ALLOC_APP_USER 98 appserver1 bm3tbmdznzg62 80
ALLOC_APP_USER 1012 appserver1 akkhfudfrvf92 996 <<
135 rows selected.
USERNAME SID MACHINE SQL_ID PREV_SQL_ID PLSQL_OBJECT_ID opened cursors current
------------------------------ ------ ------------------------------ ------------- ------------- --------------- ----------------------
...
ALLOC_APP_USER 1106 appserver1 31vdxhgw47a00 72
ALLOC_APP_USER 98 appserver1 bm3tbmdznzg62 80
ALLOC_APP_USER 1012 appserver1 9uydavp0gr167 997 <<
135 rows selected.
USERNAME SID MACHINE SQL_ID PREV_SQL_ID PLSQL_OBJECT_ID opened cursors current
------------------------------ ------ ------------------------------ ------------- ------------- --------------- ----------------------
...
ALLOC_APP_USER 1106 appserver1 31vdxhgw47a00 72
ALLOC_APP_USER 98 appserver1 bm3tbmdznzg62 80
ALLOC_APP_USER 1012 appserver1 7t6r7kutfr2s0 1000 <<
135 rows selected.
USERNAME SID MACHINE SQL_ID PREV_SQL_ID PLSQL_OBJECT_ID opened cursors current
------------------------------ ------ ------------------------------ ------------- ------------- --------------- ----------------------
...
ALLOC_APP_USER 1106 appserver1 31vdxhgw47a00 72
ALLOC_APP_USER 98 appserver1 31vdxhgw47a00 79
ALLOC_APP_USER 1012 appserver1 31vdxhgw47a00 1000 <<
128 rows selected.
}}}
!! detail on specific SQLs
{{{
-- then from SQL Developer to investigate in detail just do
select * from v$open_cursor
set lines 300
col username format a30
select SQL_ID, hash_value, sid, user_name, sql_text
from v$open_cursor
where sid in (
select sid
from (
select c.username,
a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id,
sum(a.value) "opened cursors current"
from v$sesstat a, v$statname b, v$session c
where a.statistic# = b.statistic#
and b.name = 'opened cursors current'
and c.sid = a.sid
group by c.username, a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id
order by sum(a.value) desc
)
where rownum < 2
)
order by sql_text asc;
}}}
!! Also run sqld360 to get the overall metadata definition of objects and related objects
! References
!! database
asktom - Open cursors exceeded http://bit.ly/2sjvcMN
Working with Cursors http://www.oracle.com/technetwork/issue-archive/2013/13-mar/o23plsql-1906474.html
http://gennick.com/database/does-plsql-implicitly-close-cursors
http://gennick.com/database/more-on-plsqls-cursor-handling
http://gennick.com/database/plsql-cursor-handling-explained
http://gennick.com/the-box/on-the-importance-of-mental-models
Troubleshooting Open Cursor Issues https://docs.oracle.com/cd/E40329_01/admin.1112/e27149/cursor.htm#OMADM5352
How To: Identify a cursor leak in Oracle http://support.esri.com/technical-article/000010136
!! weblogic
weblogic inactive connection timeout https://stackoverflow.com/questions/21006782/why-we-need-weblogic-inactive-connection-timeout
https://stackoverflow.com/questions/21006782/why-we-need-weblogic-inactive-connection-timeout
Tuning Data Source Connection Pools https://docs.oracle.com/cd/E17904_01/web.1111/e13737/ds_tuning.htm#JDBCA490
https://stackoverflow.com/questions/18328886/weblogic-leaked-connection-timeout
"Inactive Connection Timeout" and "Remove Infected Connections Enabled" parameters in WebLogic Server http://blog.raastech.com/2015/07/inactive-connection-timeout-and-remove.html
http://andrejusb.blogspot.com/2010/02/monitoring-data-source-connection-leaks.html
Setting the JDBC Connection timeout properties in weblogic server through WLST http://www.albinsblog.com/2014/04/setting-jdbc-connection-timeouts.html#.WVQYqSJKWkI
JDBC Connection leaks – Generation and Detection [BEA-001153] http://blog.sysco.no/db/locking/jdbc-leak/ <- this blog has a program to generate leak JDBCLeak.zip
! final
{{{
REM ##########################################
REM count SQLs on v$open_cursor
REM ##########################################
set lines 300
COLUMN USER_NAME FORMAT A15
SELECT s.machine, oc.user_name, oc.sql_text, count(1)
FROM v$open_cursor oc, v$session s
WHERE oc.sid = s.sid
GROUP BY s.machine, oc.user_name, oc.sql_text
HAVING COUNT(1) > 2
ORDER BY count(1) ASC
;
REM ##########################################
REM breakdown by SID on v$open_cursor
REM ##########################################
set lines 300
COLUMN USER_NAME FORMAT A15
SELECT s.machine, oc.user_name, oc.sql_text, s.sid, count(1)
FROM v$open_cursor oc, v$session s
WHERE oc.sid = s.sid
GROUP BY s.machine, oc.user_name, oc.sql_text, s.sid
HAVING COUNT(1) > 2
ORDER BY count(1) ASC
;
REM ##########################################
REM sesstat open cursors count by SID, SQL_ID
REM ##########################################
set lines 300
col username format a30
select c.username,
a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id,
sum(a.value) "opened cursors current"
from v$sesstat a, v$statname b, v$session c
where a.statistic# = b.statistic#
and b.name = 'opened cursors current'
and c.sid = a.sid
group by c.username, a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id
order by sum(a.value) asc;
REM ##########################################
REM detail on specific SQLs
REM ##########################################
set lines 300
col username format a30
select SQL_ID, hash_value, sid, user_name, sql_text
from v$open_cursor
where sid in (
select sid
from (
select c.username,
a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id,
sum(a.value) "opened cursors current"
from v$sesstat a, v$statname b, v$session c
where a.statistic# = b.statistic#
and b.name = 'opened cursors current'
and c.sid = a.sid
group by c.username, a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id
order by sum(a.value) desc
)
where rownum < 2
)
order by sql_text asc;
}}}
! other references
https://tanelpoder.com/2014/03/26/oracle-memory-troubleshooting-part-4-drilling-down-into-pga-memory-usage-with-vprocess_memory_detail/
<<showtoc>>
! 2 sessions testcase
{{{
alter system set undo_tablespace = undotbs2;
drop tablespace small_undo including contents and datafiles;
create undo tablespace small_undo
datafile '/u01/app/oracle/oradata/ORCLCDB/orcl/smallundo.dbf' size 10m autoextend off
;
alter system set undo_tablespace = small_undo;
drop table t1 purge;
create table t1(c1 int, c2 char(2000), c3 char(2000), c4 char(2000));
insert into t1 values(1, 'x', 'x', 'x');
commit;
-- session #2
variable rc refcursor
exec open :rc for select * from t1 where c1 = 1;
-- session #1 (execute 3x)
begin
for idx in 1 .. 500 loop
update t1 set c2 = idx, c3 = idx, c4 = idx where c1 = 1;
if mod(idx, 500) = 0 then
commit;
end if;
end loop;
end;
/
-- session #1 (execute 2x)
begin
for idx in 1 .. 500 loop
update t1 set c2 = idx, c3 = idx, c4 = idx where c1 = 1;
end loop;
end;
/
--session #2 (snapshot too old error)
print rc
--session #2 (force cleanout to avoid snapshot too old error)
select count(*) from t1;
print rc
--monitoring sqls
--check high MQL SQLs
select TO_CHAR(end_time,'MM/DD/YY HH24:MI') end_tm,
maxquerysqlid, maxconcurrency, undotsn, undoblks, txncount, activeblks, unexpiredblks, expiredblks, round(maxquerylen/60,0) maxqlen, round(tuned_undoretention/60,0) Tuned
from dba_hist_undostat where end_time > sysdate-30 order by maxquerylen desc
/
--undostat
select TO_CHAR(end_time,'MM/DD/YY HH24:MI') end_tm, a.* from v$undostat a order by 1 desc;
}}}
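While session #1 is looping, the undo held by the live transaction can be watched from another session; a small sketch (returns no rows in between runs, after the commits):
{{{
-- undo blocks/records consumed by currently open transactions
select xidusn, xidslot, xidsqn, used_ublk, used_urec from v$transaction;
}}}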
! 3 sessions testcase - UPDATE and MERGE
{{{
-- session #1
alter system set undo_tablespace = undotbs2;
drop tablespace small_undo including contents and datafiles;
create undo tablespace small_undo
datafile '/u01/app/oracle/oradata/ORCLCDB/orcl/smallundo.dbf' size 10m autoextend off
;
alter system set undo_tablespace = small_undo;
col c1 format 9999
col c2 format a30
col c3 format a30
col c4 format a30
drop table t1 purge;
create table t1(c1 int, c2 char(2000), c3 char(2000), c4 char(2000));
insert into t1 values(1, 'x', 'x', 'x');
insert into t1 values(2, 'z', 'z', 'z');
commit;
drop table t2 purge;
create table t2(c1 int, c2 char(2000), c3 char(2000), c4 char(2000));
insert into t2 values(2, 'y', 'y', 'y');
commit;
drop table t3 purge;
create table t3(c1 int, c2 char(2000), c3 char(2000), c4 char(2000));
insert into t3 values(1, 'x', 'x', 'x');
commit;
-- session #1
variable rc refcursor
exec open :rc for select * from t1 where c1 = 1;
-- session #2
variable rc refcursor
exec open :rc for select * from t1 where c1 = 1;
-- session #3
variable rc refcursor
exec open :rc for select * from t3 where c1 = 1;
-- session #1
MERGE INTO t2 s1
USING (select * from t1) s0
ON (
s1.c1 = s0.c1
)
WHEN MATCHED THEN UPDATE
SET
s1.c2 = s0.c2,
s1.c3 = s0.c3,
s1.c4 = s0.c4
WHEN NOT MATCHED THEN
INSERT VALUES (
s0.c1,
s0.c2,
s0.c3,
s0.c4
);
-- session #1 (execute 1x)
-- this table is on the select part of merge
-- without this UPDATE there's no dirty block for t1 table, hence no ORA-01555
begin
for i in 1 .. 10 loop
update t1 set c1 = i, c2 = i, c3 = i, c4 = i;
end loop;
end;
/
-- session #1 (execute 3x)
begin
for i in 1 .. 500 loop
update t3 set c1 = i, c2 = i, c3 = i, c4 = i;
if mod(i, 500) = 0 then
commit;
end if;
end loop;
end;
/
-- session #1 (execute 2x)
begin
for i in 1 .. 500 loop
update t3 set c1 = i, c2 = i, c3 = i, c4 = i;
end loop;
end;
/
--session #1,2,3 (snapshot too old error)
print rc
--monitoring sqls
--check high MQL SQLs
select TO_CHAR(end_time,'MM/DD/YY HH24:MI') end_tm,
maxquerysqlid, maxconcurrency, undotsn, undoblks, txncount, activeblks, unexpiredblks, expiredblks, round(maxquerylen/60,0) maxqlen, round(tuned_undoretention/60,0) Tuned
from dba_hist_undostat where end_time > sysdate-30 order by maxquerylen desc
/
--undostat
select TO_CHAR(end_time,'MM/DD/YY HH24:MI') end_tm, a.* from v$undostat a order by 1 desc;
}}}
! asktom
{{{
it doesn't matter if it is used for "actual" undo or because of the retention period -- it is all "actual" undo.
But anyway, select sum(used_ublk) from v$transaction will tell you how much undo is being used for current, right now, transactions.
And -- allow me to clarify. IF the undo tablespace can grow to accommodate the undo retention period -- it will. If it cannot -- it will not. So consider this example:
ops$tkyte@ORA920> @test
shows my undo tablespace is 1m right now.
The biggest it can autoextend to is 2gig and it'll grow in 1m increments (i know that cause I created it that way, this report doesn't show that 1m increment)
MaxPoss Max
Tablespace Name KBytes Used Free Used Kbytes Used
---------------- ------------ ------------ ------------ ------ ---------- ------
...
*UNDOTBS 1,024 960 64 93.8 2,088,960 .0
.....
------------ ------------ ------------
sum 2,001,920 1,551,936 449,984
13 rows selected.
ops$tkyte@ORA920>
ops$tkyte@ORA920> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 10800
undo_suppress_errors boolean FALSE
undo_tablespace string UNDOTBS
ops$tkyte@ORA920>
ops$tkyte@ORA920> drop table t;
Table dropped.
my undo retention is 3 hours -- 10,800 seconds...
ops$tkyte@ORA920> create table t ( x char(2000), y char(2000), z char(2000) );
Table created.
ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into t values ( 'x', 'x', 'x' );
1 row created.
ops$tkyte@ORA920>
ops$tkyte@ORA920> begin
2 for i in 1 .. 500
3 loop
4 update t set x = i, y = i, z = i;
5 commit;
6 end loop;
7 end;
8 /
PL/SQL procedure successfully completed.
now each of those transactions is 6+ kbytes of undo -- 3 * 2000 byte "before images" to save off... That should generate well over 3meg of undo by the time it is done BUT in 500 tiny transactions.
If the undo retention period is 3 hours and I have 1meg of undo and that 1meg of undo can grow to 2gig -- Oracle will grow it and we can see that:
ops$tkyte@ORA920> set echo off
MaxPoss Max
Tablespace Name KBytes Used Free Used Kbytes Used
---------------- ------------ ------------ ------------ ------ ---------- ------
...
*UNDOTBS 5,120 4,608 512 90.0 2,088,960 .2
.....
13 rows selected.
the RBS is now 5m with 4.6 meg "used" (well, none of the undo is really used right now, it is just going to sit there for 3 hours waiting to be reused).
Now I do this:
ops$tkyte@ORA920> create undo tablespace undotbl_new datafile size 1m;
Tablespace created.
ops$tkyte@ORA920> alter system set undo_tablespace = undotbl_new scope=both;
System altered.
ops$tkyte@ORA920> drop tablespace undotbs;
Tablespace dropped.
ops$tkyte@ORA920> exec print_table( 'select * from dba_data_files where tablespace_name = ''UNDOTBL_NEW'' ' );
FILE_NAME : /usr/oracle/ora920/OraHome1/oradata/ora920/o1_mf_undotbl__z0936pcx_.dbf
FILE_ID : 2
TABLESPACE_NAME : UNDOTBL_NEW
BYTES : 1048576
BLOCKS : 128
STATUS : AVAILABLE
RELATIVE_FNO : 2
AUTOEXTENSIBLE : NO
MAXBYTES : 0
MAXBLOCKS : 0
INCREMENT_BY : 0
USER_BYTES : 983040
USER_BLOCKS : 120
-----------------
PL/SQL procedure successfully completed.
And I rerun the test:
ops$tkyte@ORA920> drop table t;
Table dropped.
ops$tkyte@ORA920> create table t ( x char(2000), y char(2000), z char(2000) );
Table created.
ops$tkyte@ORA920> insert into t values ( 'x', 'x', 'x' );
1 row created.
ops$tkyte@ORA920> begin
2 for i in 1 .. 500
3 loop
4 update t set x = i, y = i, z = i;
5 commit;
6 end loop;
7 end;
8 /
PL/SQL procedure successfully completed.
ops$tkyte@ORA920> set echo off
old 29: order by &1
new 29: order by 1
% MaxPoss Max
Tablespace Name KBytes Used Free Used Kbytes Used
---------------- ------------ ------------ ------------ ------ ------- ------
*UNDOTBL_NEW 1,024 1,024 0 100.0 0 .0
13 rows selected.
and here, we can see that the undo tablespace is still 1m. Oracle could not grow the undo -- but it did not fail the transactions.
So, in that respect, yes, the undo retention can be thought of as a "desire" -- if there is no way to get the undo space AND the undo space can be reused - it will reuse it. If the datafiles are autoextend or the undo tablespace is big enough all by itself, it will not reuse it
}}}
https://asktom.oracle.com/pls/apex/f?p=100:11:::::P11_QUESTION_ID:6894817116500
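A complementary way to see the retention period holding on to undo is to break down the undo extents by status; a quick sketch:
{{{
-- ACTIVE = live transactions, UNEXPIRED = kept for undo_retention, EXPIRED = reusable
select status, round(sum(bytes)/1024/1024) mb, count(*) extents
from dba_undo_extents
group by status;
}}}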
https://www.google.com/search?q=ORA-01723%3A+zero-length+columns+are+not+allowed&oq=ORA-01723%3A+zero-length+columns+are+not+allowed&aqs=chrome..69i57j69i58.510j0j1&sourceid=chrome&ie=UTF-8#q=ORA-01723:+zero-length+columns+are+not+allowed&start=20
https://brainfizzle.wordpress.com/2014/09/10/create-table-as-select-with-additional-or-null-columns/
{{{
SQL> SELECT * from orig_tab;
COL1 COL2
---------- ----------
val1 1
val2 2
SQL> CREATE TABLE copy_tab AS
2 SELECT col1, col2, CAST( NULL AS NUMBER ) col3
3 FROM orig_tab;
Table created.
SQL> DESC copy_tab;
Name Null? Type
----------------------------------------- -------- --------------
COL1 VARCHAR2(10)
COL2 NUMBER
COL3 NUMBER
SQL> SELECT * FROM copy_tab;
COL1 COL2 COL3
---------- ---------- ----------
val1 1
val2 2
}}}
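For contrast, this is why the CAST is needed: a bare NULL column in a CTAS raises the ORA-01723 from the search link above. A minimal sketch:
{{{
SQL> CREATE TABLE copy_tab2 AS
  2  SELECT col1, col2, NULL col3
  3  FROM orig_tab;
ERROR at line 2:
ORA-01723: zero-length columns are not allowed
}}}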
https://www.experts-exchange.com/questions/21558874/ORA-01790-expression-must-have-same-datatype-as-corresponding-expression.html
data conversion https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2958161889874
ORA-08004: sequence exceeds MAXVALUE and cannot be instantiated
https://www.funoracleapps.com/2012/06/ora-08004-sequence-fndconcurrentprocess.html
{{{
Solution:
Increase MAXVALUE (it must be greater than the previous MAXVALUE):
SQL> ALTER SEQUENCE APPLSYS.FND_CONCURRENT_PROCESSES_S MAXVALUE 99999999;
Sequence altered.
}}}
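Before bumping MAXVALUE it helps to check how close the sequence actually is to the limit; a sketch using the sequence from the example above:
{{{
select sequence_owner, sequence_name, last_number, max_value
from dba_sequences
where sequence_name = 'FND_CONCURRENT_PROCESSES_S';
}}}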
http://harvarinder.blogspot.com/2016/04/ora-10458-standby-database-requires.html
ORA-01194: When Opening Database After Restoring Backup http://www.parnassusdata.com/en/node/568
https://dborasol.wordpress.com/2013/10/29/rolling-forward-standby-database-with-rman-incremental-backup/
''TNS listener could not find available handler with matching protocol stack''
TNS:listener could not find available handler with matching protocol stack https://community.oracle.com/thread/362226
Oracle Net Listener Parameters (listener.ora) http://docs.oracle.com/cd/B28359_01/network.111/b28317/listener.htm#NETRF424
http://jhdba.wordpress.com/2010/09/02/using-the-connection_rate-parameter-to-stop-dos-attacks/ <-- good stuff
http://www.oracle.com/technetwork/database/enterprise-edition/oraclenetservices-connectionratelim-133050.pdf <-- good stuff Oracle Net Listener Connection Rate Limiter
http://rnm1978.wordpress.com/2010/09/02/misbehaving-informatica-kills-oracle/
http://rnm1978.wordpress.com/2010/10/18/when-is-a-bug-not-a-bug-when-its-a-design-decision/
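From the Connection Rate Limiter whitepaper above, the mechanism is a listener-level rate plus a per-endpoint flag in listener.ora; a sketch assuming a listener named LISTENER and a cap of 10 connections/sec (verify the exact syntax against the paper):
{{{
CONNECTION_RATE_LISTENER=10

LISTENER=
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=tcp)(HOST=dbhost)(PORT=1521)(RATE_LIMIT=yes))
    (ADDRESS=(PROTOCOL=tcp)(HOST=dbhost)(PORT=1522)))
}}}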
bad
{{{
As per your update on the Application connection pooling settings
========================================
*****Connection pooling parameters*****
========================================
JMS.SESSION_CACHE_SIZE 50
JMS.CONCURRENT_CONSUMERS 50
JMS.RECEIVE_TIMEOUT_MILLIS 1
POOL.MAX_IDLE 10
POOL SIZE 250
POOL MAX WAIT -1
With that said, however, if we look at the settings logically from a purely client-server communication perspective,
we see that the pool itself (i.e. how many connections will be made) is set to 250.
Here is the line which stands out:
POOL SIZE 250
From the SQL*Net point of view, for JDBC thin pooled connections we usually see 10 to 20 connections in the listener log of working environments.
The value of 250 is very high.
}}}
I tested the commands on my personal dev environment. The LOAD_SALES_SKUS procedure does the following:
* Drop partition - alter table hr.SALES_SKUS drop partition WEEK_END_DATE_20160305;
* Add partition - alter table hr.SALES_SKUS add partition WEEK_END_DATE_20160305 values (to_date('20160305','YYYYMMDD'));
* Insert on partition - insert into hr.SALES_SKUS
The error “ORA-14400: inserted partition key does not map to any partition” means the target partition does not exist when the INSERT happens. The highlighted part of the LOAD_SALES_SKUS code is where the error occurs: the command to add the partition fails, so the INSERT then fails because the partition doesn’t exist.
So the fix here is to change this -> TO_CHAR(L_DATE, 'YYYYMMDD')
To this -> TO_CHAR(L_DATE)
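A minimal ORA-14400 reproduction on a list-partitioned table (hypothetical names), showing the INSERT failing when the ADD PARTITION step doesn't happen first:
{{{
create table sales_skus_demo (week_end_date date)
partition by list (week_end_date)
(partition week_end_date_20160227 values (to_date('20160227','YYYYMMDD')));

-- partition week_end_date_20160305 was never added, so this fails:
insert into sales_skus_demo values (to_date('20160305','YYYYMMDD'));
-- ORA-14400: inserted partition key does not map to any partition
}}}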
! References
https://www.toadworld.com/platforms/oracle/w/wiki/4498.list-partitioned-tables-maintenance
https://gerardnico.com/wiki/oracle/partition/list
http://hemora.blogspot.com/2012/02/ora-38760-this-database-instance-failed.html
http://blogs.oracle.com/db/entry/ora-4030_troubleshooting
http://dioncho.wordpress.com/2009/07/27/playing-with-ora-4030-error/
! 11gR2 - finding the SQL that caused the ORA-4030
{{{
1) Check the ORA-4030 on the alert log
cat alert_mtauat112.log | \
awk 'BEGIN{buf=""}
/[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
/ORA-/{print buf,$0}' > ORA-errors-$(date +%Y%m%d%H%M).txt
2) Check the recent occurrence (10:26:07)
Tue May 22 10:26:07 2012 ORA-04030: out of process memory when trying to allocate 2136 bytes (kxs-heap-c,qkkele)
ls -ltr *trc
-rw-r----- 1 oracle dba 15199 May 22 10:19 mtauat112_smon_9962.trc
-rw-r----- 1 oracle dba 2680 May 22 10:26 mtauat112_ora_26771.trc <-- this is the trace file
-rw-r----- 1 oracle dba 3121 May 22 10:26 mtauat112_diag_9901.trc
-rw-r----- 1 oracle dba 18548 May 22 10:52 mtauat112_vkrm_10450.trc
-rw-r----- 1 oracle dba 38504 May 22 10:53 mtauat112_mmon_9971.trc
-rw-r----- 1 oracle dba 173892 May 22 10:53 mtauat112_dbrm_9905.trc
-rw-r----- 1 oracle dba 357614 May 22 10:53 mtauat112_lmhb_9950.trc
3) Open the trace file
less mtauat112_ora_26771.trc
mmap(offset=211263488, len=4096) failed with errno=12 for the file oraclemtauat112
mmap(offset=211263488, len=4096) failed with errno=12 for the file oraclemtauat112
mmap(offset=211263488, len=4096) failed with errno=12 for the file oraclemtauat112
Incident 439790 created, dump file: /u01/app/oracle/diag/rdbms/mtauat11/mtauat112/incident/incdir_439790/mtauat112_ora_26771_i439790.trc
ORA-04030: out of process memory when trying to allocate 2136 bytes (kxs-heap-c,qkkele)
4) Open the dump
less /u01/app/oracle/diag/rdbms/mtauat11/mtauat112/incident/incdir_439790/mtauat112_ora_26771_i439790.trc
* On the 4030 dump, it will show you the top memory users
========= Dump for incident 439790 (ORA 4030) ========
----- Beginning of Customized Incident Dump(s) -----
=======================================
TOP 10 MEMORY USES FOR THIS PROCESS
---------------------------------------
74% 3048 MB, 196432 chunks: "permanent memory " SQL
kxs-heap-c ds=0x2ad56bf501e0 dsprt=0xbb1e6a0
23% 950 MB, 62433 chunks: "free memory "
top call heap ds=0xbb1e6a0 dsprt=(nil)
1% 31 MB, 194215 chunks: "free memory " SQL
kxs-heap-c ds=0x2ad56bf501e0 dsprt=0xbb1e6a0
0% 12 MB, 316351 chunks: "chedef : qcuatc "
TCHK^a0c3c921 ds=0x2ad56bf5ff48 dsprt=0xbb1d780
0% 11 MB, 2426 chunks: "kkecpst : kkehs "
TCHK^a0c3c921 ds=0x2ad56bf5ff48 dsprt=0xbb1d780
0% 7373 KB, 1282 chunks: "kkecpst: kkehev "
TCHK^a0c3c921 ds=0x2ad56bf5ff48 dsprt=0xbb1d780
0% 7367 KB, 2982 chunks: "permanent memory "
kkqctdrvTD: co ds=0x2ad56d31da90 dsprt=0x2ad56bf5ff48
0% 6555 KB, 2846 chunks: "kkqct.c.kgght "
TCHK^a0c3c921 ds=0x2ad56bf5ff48 dsprt=0xbb1d780
0% 5272 KB, 24970 chunks: "kkqcscpopn:kccdef "
TCHK^a0c3c921 ds=0x2ad56bf5ff48 dsprt=0xbb1d780
0% 4497 KB, 2094 chunks: "qkkele " SQL
kxs-heap-c ds=0x2ad56bf501e0 dsprt=0xbb1e6a0
5) Search the "Current SQL" in the 4030 dump
*** 2012-05-22 10:26:07.513
dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
----- Current SQL Statement for this session (sql_id=cdafm3qhc7k91) -----
select distinct "DAAll_LdPrdDescr"."CHARTFIELD1" "CHARTFIELD1" from (select "DAAll_LdOffDescr"."DEAL_TYPE_RPT" "DEAL_TYPE_RPT", "DAAll_LdOffDescr"."CREATED_DTTM" "CREATED_DT
/Current SQL <-- do a search
}}}
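While the process is still alive, the PGA breakdown per category can also be pulled from v$process_memory; a sketch:
{{{
-- top PGA consumers by category across sessions
select p.spid, s.sid, s.username, pm.category,
       round(pm.allocated/1024/1024) alloc_mb, round(pm.used/1024/1024) used_mb
from v$process_memory pm, v$process p, v$session s
where pm.pid = p.pid
and p.addr = s.paddr
order by pm.allocated desc;
}}}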
Troubleshooting: Tuning the Shared Pool and Tuning Library Cache Latch Contention (Doc ID 62143.1)
http://www.oracle.com/technetwork/database/focus-areas/manageability/ps-s003-274003-106-1-fin-v2-128827.pdf
http://coskan.wordpress.com/2007/09/14/what-i-learned-about-shared-pool-management/
http://www.dbas-oracle.com/2013/05/5-Easy-Step-to-Solve-ORA-04031-with-Oracle-Support-Provided-Tool.html
https://blogs.oracle.com/db/entry/ora-4031_troubleshooting
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-library-cache#TOC-cursor:-mutex-X-
http://blog.tanelpoder.com/files/Oracle_Latch_And_Mutex_Contention_Troubleshooting.pdf
https://sites.google.com/site/embtdbo/wait-event-documentation
http://tech.e2sn.com/oracle/troubleshooting/latch-contention-troubleshooting#TOC-Download-LatchProf-and-LatchProfX
http://blog.tanelpoder.com/files/scripts/latchprof.sql
http://blog.tanelpoder.com/files/scripts/latchprofx.sql
http://blog.tanelpoder.com/files/scripts/dba.sql
http://m.blog.csdn.net/blog/caixingyun/41827529
sgastatx http://blog.tanelpoder.com/2009/06/04/ora-04031-errors-and-monitoring-shared-pool-subpool-memory-utilization-with-sgastatxsql/
http://yong321.freeshell.org/oranotes/SharedPoolDuration.txt
http://grumpyolddba.blogspot.com/2014/03/final-version-of-my-hotsos-2014.html
library cache internals www.juliandyke.com/Presentations/LibraryCacheInternals.ppt
! Information Gathering Script For ORA-4031 Analysis On Shared Pool (Doc ID 1909791.1)
{{{
REM srdc_db_ora4031sp.sql - Collect information for ORA-4031 analysis on shared pool
define SRDCNAME='DB_ORA4031SP'
SET MARKUP HTML ON PREFORMAT ON
set TERMOUT off FEEDBACK off VERIFY off TRIMSPOOL on HEADING off
COLUMN SRDCSPOOLNAME NOPRINT NEW_VALUE SRDCSPOOLNAME
select 'SRDC_'||upper('&&SRDCNAME')||'_'||upper(instance_name)||'_'||
to_char(sysdate,'YYYYMMDD_HH24MISS') SRDCSPOOLNAME from v$instance;
set TERMOUT on MARKUP html preformat on
REM
spool &SRDCSPOOLNAME..htm
select '+----------------------------------------------------+' from dual
union all
select '| Diagnostic-Name: '||'&&SRDCNAME' from dual
union all
select '| Timestamp: '||
to_char(systimestamp,'YYYY-MM-DD HH24:MI:SS TZH:TZM') from dual
union all
select '| Machine: '||host_name from v$instance
union all
select '| Version: '||version from v$instance
union all
select '| DBName: '||name from v$database
union all
select '| Instance: '||instance_name from v$instance
union all
select '+----------------------------------------------------+' from dual
/
set HEADING on MARKUP html preformat off
REM === -- end of standard header -- ===
REM
SET PAGESIZE 9999
SET LINESIZE 256
SET TRIMOUT ON
SET TRIMSPOOL ON
COL 'Total Shared Pool Usage' FORMAT 99999999999999999999999
COL bytes FORMAT 999999999999999
COL current_size FORMAT 999999999999999
COL name FORMAT A40
COL value FORMAT A20
ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';
SET MARKUP HTML ON PREFORMAT ON
/* Database identification */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Database identification:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT name, platform_id, database_role FROM v$database;
SELECT * FROM v$version WHERE banner LIKE 'Oracle Database%';
/* Current instance parameter values */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Current instance parameter values:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT n.ksppinm name, v.KSPPSTVL value
FROM x$ksppi n, x$ksppsv v
WHERE n.indx = v.indx
AND (n.ksppinm LIKE '%shared_pool%' OR n.ksppinm IN ('_kghdsidx_count', '_ksmg_granule_size', '_memory_imm_mode_without_autosga'))
ORDER BY 1;
/* Current memory settings */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Current memory settings:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT component, current_size FROM v$sga_dynamic_components;
/* Memory resizing operations */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Memory resizing operations:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT start_time, end_time, component, oper_type, oper_mode, initial_size, target_size, final_size, status
FROM v$sga_resize_ops
ORDER BY 1, 2;
/* Historical memory resizing operations */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Historical memory resizing operations:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT start_time, end_time, component, oper_type, oper_mode, initial_size, target_size, final_size, status
FROM dba_hist_memory_resize_ops
ORDER BY 1, 2;
/* Shared pool 4031 information */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Shared pool 4031 information:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT request_failures, last_failure_size FROM v$shared_pool_reserved;
/* Shared pool reserved 4031 information */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Shared pool reserved 4031 information:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT requests, request_misses, free_space, avg_free_size, free_count, max_free_size FROM v$shared_pool_reserved;
/* Shared pool memory allocations by size */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Shared pool memory allocations by size:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT name, bytes FROM v$sgastat WHERE pool = 'shared pool' AND (bytes > 999999 OR name = 'free memory') ORDER BY bytes DESC;
/* Total shared pool usage */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Total shared pool usage:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT SUM(bytes) "Total Shared Pool Usage" FROM v$sgastat WHERE pool = 'shared pool' AND name != 'free memory';
--------------------------------------------------------------------------------
--
-- File name: sgastatx
-- Purpose: Show shared pool stats by sub-pool from X$KSMSS
--
-- Author: Tanel Poder
-- Copyright: (c) http://www.tanelpoder.com
--
-- Usage: @sgastatx <statistic name>
-- @sgastatx "free memory"
-- @sgastatx cursor
--
-- Other: The other script for querying V$SGASTAT is called sgastat.sql
--
--
--
--------------------------------------------------------------------------------
COL sgastatx_subpool HEAD SUBPOOL FOR a30
PROMPT
PROMPT -- All allocations:
SELECT
'shared pool ('||NVL(DECODE(TO_CHAR(ksmdsidx),'0','0 - Unused',ksmdsidx), 'Total')||'):' sgastatx_subpool
, SUM(ksmsslen) bytes
, ROUND(SUM(ksmsslen)/1048576,2) MB
FROM
x$ksmss
WHERE
ksmsslen > 0
--AND ksmdsidx > 0
GROUP BY ROLLUP
( ksmdsidx )
ORDER BY
sgastatx_subpool ASC
/
BREAK ON sgastatx_subpool SKIP 1
PROMPT -- Allocations matching "&1":
SELECT
subpool sgastatx_subpool
, name
, SUM(bytes)
, ROUND(SUM(bytes)/1048576,2) MB
FROM (
SELECT
'shared pool ('||DECODE(TO_CHAR(ksmdsidx),'0','0 - Unused',ksmdsidx)||'):' subpool
, ksmssnam name
, ksmsslen bytes
FROM
x$ksmss
WHERE
ksmsslen > 0
AND LOWER(ksmssnam) LIKE LOWER('%&1%')
)
GROUP BY
subpool
, name
ORDER BY
subpool ASC
, SUM(bytes) DESC
/
BREAK ON sgastatx_subpool DUP
/* Cursor sharability problems */
/* This version is for >= 10g; for <= 9i substitute ss.kglhdpar for ss.address!!!! */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Cursor sharability problems (this version is for >= 10g; for <= 9i substitute ss.kglhdpar for ss.address!!!!):' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT sa.sql_text,sa.version_count,ss.*
FROM v$sqlarea sa,v$sql_shared_cursor ss
WHERE sa.address=ss.address AND sa.version_count > 50
ORDER BY sa.version_count ;
SPOOL OFF
EXIT
}}}
http://blog.tanelpoder.com/files/scripts/topcur.sql
http://blog.tanelpoder.com/files/scripts/topcurmem.sql
{{{
grep -l "ORA-04031" *trc | xargs ls -ltr
./sdbcaj10_ora_1030.trc:ORA-04031: unable to allocate 4000 bytes of shared memory ("shared pool","SELECT * FROM MFA_MESSAGE_ST...","sga heap(1,0)","kglsim heap")
./sdbcaj10_ora_8755.trc:ORA-04031: unable to allocate 4000 bytes of shared memory ("shared pool","SELECT * FROM MEM_ACCT_MFA_R...","sga heap(6,0)","kglsim heap")
./sdbcaj10_smon_26728.trc:ORA-04031: unable to allocate 4000 bytes of shared memory ("shared pool","select o.name from obj$ o wh...","sga heap(2,0)","kglsim heap")
./sdbcaj10_w008_26962.trc:ORA-04031: unable to allocate 3000 bytes of shared memory ("shared pool","unknown object","sga heap(2,0)","call")
}}}
http://www.eygle.com/digest/2010/08/ora_00600_code_explain.html
{{{
Subject: ORA-600 Lookup Error Categories
Doc ID: 175982.1 Type: BULLETIN
Modified Date : 04-JUN-2009 Status: PUBLISHED
In this Document
Purpose
Scope and Application
ORA-600 Lookup Error Categories
Internal Errors Categorised by number range
Internal Errors Categorised by mnemonic
Applies to:
Oracle Server - Enterprise Edition - Version:
Oracle Server - Personal Edition - Version:
Oracle Server - Standard Edition - Version:
Information in this document applies to any platform.
Checked for relevance 04-Jun-2009
Purpose
This note aims to provide a high level overview of the internal errors which may be encountered on the Oracle Server (sometimes referred to as the Oracle kernel). It is written to provide a guide to where a particular error may live and give some indication as to what the impact of the problem may be. Where a problem is reproducible and connected with a specific feature, you might obviously try not using the feature. If there is a consistent nature to the problem, it is good practice to ensure that the latest patchsets are in place and that you have taken reasonable measures to avoid known issues.
For repeatable issues for which the ora-600 lookup tool has not listed a likely cause, it is worth constructing a test case. Where this is possible, it greatly assists in the resolution time of any issue. It is important to remember that, in many instances, the Server is very flexible and a workaround can very often be achieved.
Scope and Application
This bulletin provides Oracle DBAs with an overview of internal database errors.
Disclaimer: Every effort has been made to provide a reasonable degree of accuracy in what has been stated. Please consider that the details provided only serve to provide an indication of functionality and, in some cases, may not be wholly correct.
ORA-600 Lookup Error Categories
In the Oracle Server source, there are two types of ora-600 error :
the first parameter is a number which reflects the source component or layer the error is connected with; or
the first parameter is a mnemonic which indicates the source module where the error originated. This type of internal error is now used in preference to an internal error number.
Both types of error may be possible in the Oracle server.
Internal Errors Categorised by number range
The following table provides an indication of internal error codes used in the Oracle server. Thus, if ora-600[X] is encountered, it is possible to glean some high level background information: the error is generated in the Y layer, which indicates that there may be a problem with Z.
Ora-600 Base Functionality Description
1 Service Layer The service layer has within it a variety of service related components which are associated with in memory related activities in the SGA such as, for example : the management of Enqueues, System Parameters, System state objects (these objects track the use of structures in the SGA by Oracle server processes), etc.. In the main, this layer provides support to allow process communication and provides support for locking and the management of structures to support multiple user processes connecting and interacting within the SGA.
Note : vos - Virtual Operating System provides features to support the functionality above. As the name suggests it provides base functionality in much the same way as is provided by an Operating System.
Ora-600 Base Functionality Description
1 vos Component notifier
100 vos Debug
300 vos Error
500 vos Lock
700 vos Memory
900 vos System Parameters
1100 vos System State object
1110 vos Generic Linked List management
1140 vos Enqueue
1180 vos Instance Locks
1200 vos User State object
1400 vos Async Msgs
1700 vos license Key
1800 vos Instance Registration
1850 vos I/O Services components
2000 Cache Layer Where errors are generated in this area, it is advisable to check whether the error is repeatable and whether the error is perhaps associated with recovery or undo type operations; where this is the case and the error is repeatable, this may suggest some kind of hardware or physical issue with a data file, control file or log file. The Cache layer is responsible for making the changes to the underlying files as well as managing the related memory structures in the SGA.
Note : rcv indicates recovery. It is important to remember that the Oracle cache layer is effectively going through the same code paths as used by the recovery mechanism.
Ora-600 Base Functionality Description
2000 server/rcv Cache Op
2100 server/rcv Control File mgmt
2200 server/rcv Misc (SCN etc.)
2400 server/rcv Buffer Instance Hash Table
2600 server/rcv Redo file component
2800 server/rcv Db file
3000 server/rcv Redo Application
3200 server/cache Buffer manager
3400 server/rcv Archival & media recovery component
3600 server/rcv recovery component
3700 server/rcv Thread component
3800 server/rcv Compatibility segment
It is important to consider when the error occurred and the context in which the error was generated. If the error does not reproduce, it may be an in memory issue.
4000 Transaction Layer Primarily the transaction layer is involved with maintaining structures associated with the management of transactions. As with the cache layer , problems encountered in this layer may indicate some kind of issue at a physical level. Thus it is important to try and repeat the same steps to see if the problem recurs.
Ora-600 Base Functionality Description
4000 server/txn Transaction Undo
4100 server/txn Transaction Undo
4210 server/txn Transaction Parallel
4250 server/txn Transaction List
4300 space/spcmgmt Transaction Segment
4400 txn/lcltx Transaction Control
4450 txn/lcltx distributed transaction control
4500 txn/lcltx Transaction Block
4600 space/spcmgmt Transaction Table
4800 dict/rowcache Query Row Cache
4900 space/spcmgmt Transaction Monitor
5000 space/spcmgmt Transaction Extent
It is important to try and determine what the object involved in any reproducible problem is. Then use the analyze command. For more information, please refer to the analyze command as detailed in the context of Note 28814.1; in addition, it may be worth using the dbverify as discussed in Note 35512.1.
6000 Data Layer The data layer is responsible for maintaining and managing the data in the database tables and indexes. Issues in this area may indicate some kind of physical issue at the object level and therefore, it is important to try and isolate the object and then perform an analyze on the object to validate its structure.
Ora-600 Base Functionality Description
6000 ram/data
ram/analyze
ram/index data, analyze command and index related activity
7000 ram/object lob related errors
8000 ram/data general data access
8110 ram/index index related
8150 ram/object general data access
Again, it is important to try and determine what the object involved in any reproducible problem is. Then use the analyze command. For more information, please refer to the analyze command as detailed in the context of Note 28814.1; in addition, it may be worth using the dbverify as discussed in Note 35512.1.
12000 User/Oracle Interface & SQL Layer Components This layer governs the user interface with the Oracle server. Problems generated by this layer usually indicate : some kind of presentation or format error in the data received by the server, i.e. the client may have sent incomplete information; or there is some kind of issue which indicates that the data is received out of sequence
Ora-600 Base Functionality Description
12200 progint/kpo
progint/opi lob related
errors at interface level on server side, xa , etc.
12300 progint/if OCI interface to coordinating global transactions
12400 sqlexec/rowsrc table row source access
12600 space/spcmgmt operations associated with tablespace : alter / create / drop operations ; operations associated with create table / cluster
12700 sqlexec/rowsrc bad rowid
13000 dict/if dictionary access routines associated with kernel compilation
13080 ram/index kernel Index creation
13080 sqllang/integ constraint mechanism
13100 progint/opi archival and Media Recovery component
13200 dict/sqlddl alter table mechanism
13250 security/audit audit statement processing
13300 objsupp/objdata support for handling of object generation and object access
14000 dict/sqlddl sequence generation
15000 progint/kpo logon to Oracle
16000 tools/sqlldr sql loader related
You should try to repeat the issue and, with the use of sql trace, isolate where exactly the issue is occurring within the application.
14000 System Dependent Component internal error values This layer manages interaction with the OS. Effectively it acts as the glue which allows the Oracle server to interact with the OS. The types of operation which this layer manages are indicated as follows.
Ora-600 Base Functionality Description
14000 osds File access
14100 osds Concurrency management;
14200 osds Process management;
14300 osds Exception-handler or signal handler management
14500 osds Memory allocation
15000 security/dac,
security/logon
security/ldap local user access validation; challenge / response activity for remote access validation; auditing operation; any activities associated with granting and revoking of privileges; validation of password with external password file
15100 dict/sqlddl this component manages operations associated with creating, compiling (altering), renaming, invalidating, and dropping procedures, functions, and packages.
15160 optim/cbo cost based optimizer layer is used to determine optimal path to the data based on statistical information available on the relevant tables and indexes.
15190 optim/cbo cost based optimizer layer. Used in the generation of a new index to determine how the index should be created. Should it be constructed from the table data or from another index.
15200 dict/shrdcurs used in creating sharable context areas associated with shared cursors
15230 dict/sqlddl manages the compilation of triggers
15260 dict/dictlkup
dict/libcache dictionary lookup and library cache access
15400 server/drv manages alter system and alter session operations
15410 progint/if manages compilation of pl/sql packages and procedures
15500 dict/dictlkup performs dictionary lookup to ensure semantics are correct
15550 sqlexec/execsvc
sqlexec/rowsrc hash join execution management;
parallel row source management
15600 sqlexec/pq component provides support for Parallel Query operation
15620 repl/snapshots manages the creation of snapshot or materialized views as well as related snapshot / MV operations
15640 repl/defrdrpc layer containing various functions for examining the deferred transaction queue and retrieving information
15660 jobqs/jobq manages the operation of the Job queue background processes
15670 sqlexec/pq component provides support for Parallel Query operation
15700 sqlexec/pq component provides support for Parallel Query operation; specifically mechanism for starting up and shutting down query slaves
15800 sqlexec/pq component provides support for Parallel Query operation
15810 sqlexec/pq component provides support for Parallel Query operation; specifically functions for creating mechanisms through which Query co-ordinator can communicate with PQ slaves;
15820 sqlexec/pq component provides support for Parallel Query operation
15850 sqlexec/execsvc component provides support for the execution of SQL statements
15860 sqlexec/pq component provides support for Parallel Query operation
16000 loader sql Loader direct load operation;
16150 loader this layer is used for 'C' level call outs to direct loader operation;
16200 dict/libcache this is part of library Cache operation. Amongst other things it manages the dependency of SQL objects and tracks who is permitted to access these objects;
16230 dict/libcache this component is responsible for managing access to remote objects as part of library Cache operation;
16300 mts/mts this component relates to MTS (Multi Threaded Server) operation
16400 dict/sqlddl this layer contains functionality which allows tables to be loaded / truncated and their definitions to be modified. This is part of dictionary operation;
16450 dict/libcache this layer provides support for multi-instance access to the library cache; this functionality is applicable therefore to OPS environments;
16500 dict/rowcache this layer provides support to load / cache Oracle's dictionary in memory in the library cache;
16550 sqlexec/fixedtab this component maps data structures maintained in the Oracle code to fixed tables such that they can be queried using the SQL layer;
16600 dict/libcache this layer performs management of data structures within the library cache;
16651 dict/libcache this layer performs management of dictionary related information within library Cache;
16701 dict/libcache this layer provides library Cache support to support database creation and forms part of the bootstrap process;
17000 dict/libcache this is the main library Cache manager. This Layer maintains the in memory representation of cached sql statements together with all the necessary support that this demands;
17090 generic/vos this layer implements error management operations: signalling errors, catching errors, recovering from errors, setting error frames, etc.;
17100 generic/vos Heap manager. The Heap manager manages the storage of internal data in an orderly and consistent manner. There can be many heaps serving various purposes; and heaps within heaps. Common examples are the SGA heap, UGA heap and the PGA heap. Within a Heap there are consistency markers which aim to ensure that the Heap is always in a consistent state. Heaps are use extensively and are in memory structures - not on disk.
17200 dict/libcache this component deals with loading remote library objects into the local library cache with information from the remote database.
17250 dict/libcache more library cache errors ; functionality for handling pipe operation associated with dbms_pipe
17270 dict/instmgmt this component manages instantiations of procedures, functions, packages, and cursors in a session. This provides a means to keep track of what has been loaded in the event of process death;
17300 generic/vos manages certain types of memory allocation structure. This functionality is an extension of the Heap manager.
17500 generic/vos relates to various I/O operations. These relate to async i/o operation, direct i/o operation and the management of writing buffers from the buffer cache by potentially a number of database writer processes;
17625 dict/libcache additional library Cache supporting functions
17990 plsql plsql 'standard' package related issues
18000 txn/lcltx transaction and savepoint management operations
19000 optim/cbo cost based optimizer related operations
20000 ram/index bitmap index and index related errors.
20400 ram/partnmap operations on partition related objects
20500 server/rcv server recovery related operation
21000 repl/defrdrpc,
repl/snapshot,
repl/trigger replication related features
23000 oltp/qs AQ related errors.
24000 dict/libcache operations associated with managing stored outlines
25000 server/rcv tablespace management operations
Internal Errors Categorised by mnemonic
The following table details mnemonics error stems which are possible. If you have encountered : ora-600[kkjsrj:1] for example, you should look down the Error Mnemonic column (errors in alphabetical order) until you find the matching stem. In this case, kkj indicates that something unexpected has occurred in job queue operation.
Error Mnemonic(s) Functionality Description
ain ainp ram/index ain - alter index; ainp - alter index partition management operation
apacb optim/rbo used by optimizer in connect by processing
atb atbi atbo ctc ctci cvw dict/sqlddl alter table , create table (IOT) or cluster operations as well as create view related operations (with constraint handling functionality)
dbsdrv sqllang/parse alter / create database operation
ddfnet progint/distrib various distributed operations on remote dictionary
delexe sqlexec/dmldrv manages the delete statement operation
dix ram/index manages drop index or validate index operation
dtb dict/sqlddl manages drop table operation
evaa2g evah2p evaa2g dbproc/sqlfunc various functions involved in evaluating operand outcomes such as: addition, average, OR operator, bit AND, bit OR, concatenation, as well as Oracle related functions: count(), dump(), etc. The list is extensive.
expcmo expgon dbproc/expreval handles expression evaluation with respect to two operands being equivalent
gra security/dac manages the granting and revoking of privilege rights to a user
gslcsq plsldap support for operations with an LDAP server
insexe sqlexec/dmldrv handles the insert statement operation
jox progint/opi functionality associated with the Java compiler and with the Java runtime environment within the Server
k2c k2d progint/distrib support for database to database operation in distributed environments as well as providing, with respect to the 2-phase commit protocol, a globally unique Database id
k2g k2l txn/disttx support for the 2 phase commit protocol protocol and the coordination of the various states in managing the distributed transaction
k2r k2s k2sp progint/distrib k2r - user interface for managing distributed transactions and combining distributed results ; k2s - handles logging on, starting a transaction, ending a transaction and recovering a transaction; k2sp - management of savepoints in a distributed environment.
k2v txn/disttx handles distributed recovery operation
kad cartserv/picklercs handles OCIAnyData implementation
kau ram/data manages the modification of indexes for inserts, updates and delete operations for IOTs as well as modification of indexes for IOTs
kcb kcbb kcbk kcbl kcbs kcbt kcbw kcbz cache manages Oracle's buffer cache operation as well as operations used by capabilities such as direct load, hash clusters, etc.
kcc kcf rcv manages and coordinates operations on the control file(s)
kcit context/trigger internal trigger functionality
kck rcv compatibility related checks associated with the compatible parameter
kcl cache background lck process which manages locking in a RAC or parallel server multiple instance environment
kco kcq kcra kcrf kcrfr kcrfw kcrp kcrr kcs kct kcv rcv various buffer cache operation such as quiesce operation , managing fast start IO target, parallel recovery operation , etc.
kd ram/data support for row level dependency checking and some log miner operations
kda ram/analyze manages the analyze command and collection of statistics
kdbl kdc kdd ram/data support for direct load operation, cluster space management and deleting rows
kdg ram/analyze gathers information about the underlying data and is used by the analyze command
kdi kdibc3 kdibco kdibh kdibl kdibo kdibq kdibr kdic kdici kdii kdil kdir kdis kdiss kdit kdk ram/index support of the creation of indexes on tables an IOTs and index look up
kdl kdlt ram/object lob and temporary lob management
kdo ram/data operations on data such as inserting a row piece or deleting a row piece
kdrp ram/analyze underlying support for operations provided by the dbms_repair package
kds kdt kdu ram/data operations on data such as retrieving a row and updating existing row data
kdv kdx ram/index functionality for dumping index and managing index blocks
kfc kfd kfg asm support for ASM file and disk operations
kfh kfp kft rcv support for writing to file header and transportable tablespace operations
kgaj kgam kgan kgas kgat kgav kgaz argusdbg/argusdbg support for Java Debug Wire Protocol (JDWP) and debugging facilities
kgbt kgg kgh kghs kghx kgkp vos kgbt - support for BTree operations; kgg - generic lists processing; kgh - Heap Manager : managing the internal structures within the SGA / UGA / PGA and ensures their integrity; kghs - Heap manager with Stream support; kghx - fixed sized shared memory manager; kgkp - generic services scheduling policies
kgl kgl2 kgl3 kgla kglp kglr kgls dict/libcache generic library cache operation
kgm kgmt ilms support for inter language method services - or calling one language from another
kgrq kgsk kgski kgsn kgss vos support for priority queue and scheduling; capabilities for Numa support; Service State object manager
kgupa kgupb kgupd0 kgupf kgupg kgupi kgupl kgupm kgupp kgupt kgupx kguq2 kguu vos Service related activities associated with the Process Monitor (PMON): spawning or creating of background processes; debugging; managing process address space; managing the background processes; etc.
kgxp vos inter process communication related functions
kjak kjat kjb kjbl kjbm kjbr kjcc kjcs kjctc kjcts kjcv kjdd kjdm kjdr kjdx kjfc kjfm kjfs kjfz kjg kji kjl kjm kjp kjr kjs kjt kju kjx ccl/dlm dlm related functionality ; associated with RAC or parallel server operation
kjxgf kjxgg kjxgm kjxgn kjxgna kjxgr ccl/cgs provides communication & synchronisation associated with GMS or OPS related functionality as well as name service and OPS Instance Membership Recovery Facility
kjxt ccl/dlm DLM request message management
kjzc kjzd kjzf kjzg kjzm ccl/diag support for diagnosibility amongst OPS related services
kkb dict/sqlddl support for operations which load/change table definitions
kkbl kkbn kkbo objsupp/objddl support for tables with lobs , nested tables and varrays as well as columns with objects
kkdc kkdl kkdo dict/dictlkup support for constraints, dictionary lookup and dictionary support for objects
kke optim/cbo query engine cost engine; provides support functions that provide cost estimates for queries under a number of different circumstances
kkfd sqlexec/pq support for performing parallel query operation
kkfi optim/cbo optimizer support for matching of expressions against functional indexes
kkfr kkfs sqlexec/pq support for rowid range handling as well as for building parallel query query operations
kkj jobqs/jobq job queue operation
kkkd kkki dict/dbsched resource manager related support. Additionally, provides underlying functions provided by dbms_resource_manager and dbms_resource_manager_privs packages
kklr dict/sqlddl provides functions used to manipulate LOGGING and/or RECOVERABLE attributes of an object (non-partitioned table or index or partitions of a partitioned table or index)
kkm kkmi dict/dictlkup provides various semantic checking functions
kkn ram/analyze support for the analyze command
kko kkocri optim/cbo Cost based Optimizer operation : generates alternative execution plans in order to find the optimal / quickest access to the data. Also , support to determine cost and applicability of scanning a given index in trying to create or rebuild an index or a partition thereof
kkpam kkpap ram/partnmap support for mapping predicate keys expressions to equivalent partitions
kkpo kkpoc kkpod dict/partn support for creation and modification of partitioned objects
kkqg kkqs kkqs1 kkqs2 kkqs3 kkqu kkqv kkqw optim/vwsubq query rewrite operation
kks kksa kksh kksl kksm dict/shrdcurs support for managing shared cursors/ shared sql
kkt dict/sqlddl support for creating, altering and dropping trigger definitions as well as handling the trigger operation
kkxa repl/defrdrpc underlying support for dbms_defer_query package operations
kkxb dict/sqlddl library cache interface for external tables
kkxl dict/plsicds underlying support for the dbms_lob package
kkxm progint/opi support for inter language method services
kkxs dict/plsicds underlying support for the dbms_sys_sql package
kkxt repl/trigger support for replication internal trigger operation
kkxwtp progint/opi entry point into the plsql compiler
kky drv support for alter system/session commands
kkz kkzd kkzf kkzg kkzi kkzj kkzl kkzo kkzp kkzq kkzr kkzu kkzv repl/snapshot support for snapshots or Materialized View validation and operation
kla klc klcli klx tools/sqlldr support for direct path sql loader operation
kmc kmcp kmd kmm kmr mts/mts support for Multi Threaded Server operation (MTS): manage and operate the virtual circuit mechanism, handle the dispatching of messages, administer shared servers, and collect and maintain statistics associated with MTS
knac knafh knaha knahc knahf knahs repl/apply replication apply operation associated with Oracle streams
kncc repl/repcache support for replication related information stored and maintained in library cache
kncd knce repl/defrdrpc replication related enqueue and dequeue of transaction data as well as other queue related operations
kncog repl/repcache support for loading replicaiton object group information into library cache
kni repl/trigger support for replication internal trigger operation
knip knip2 knipi knipl knipr knipu knipu2 knipx repl/intpkg support for replication internal package operation.
kno repl/repobj support for replication objects
knp knpc knpcb knpcd knpqc knps repl/defrdrpc operations associated with propagating transactions to a remote node and coordination of this activity.
knst repl/stats replication statistics collection
knt kntg kntx repl/trigger support for replication internal trigger operation
koc objmgmt/objcache support for managing ADTs objects in the OOCI heap
kod objmgmt/datamgr support for persistent storage for objects : for read/write objects, to manage object IDs, and to manage object concurrency and recovery.
koh objmgmt/objcache object heap manager provides memory allocation services for objects
koi objmgmt/objmgr support for object types
koka objsupp/objdata support for reading images, inserting images, updating images, and deleting images based on object references (REFs).
kokb kokb2 objsupp/objsql support for nested table objects
kokc objmgmt/objcache support for pinning , unpinning and freeing objects
kokd objsupp/datadrv driver on the server side for managing objects
koke koke2 koki objsupp/objsql support for managing objects
kokl objsupp/objdata lob access
kokl2 objsupp/objsql lob DML and programmatic interface support
kokl3 objsupp/objdata object temporary LOB support
kokle kokm objsupp/objsql object SQL evaluation functions
kokn objsupp/objname naming support for objects
koko objsupp/objsup support functions to allow oci/rpi to communicate with Object Management Subsystem (OMS).
kokq koks koks2 koks3 koksr objsupp/objsql query optimisation for objects , semantic checking and semantic rewrite operations
kokt kokt2 kokt3 objsupp/objddl object compilation type manager
koku kokv objsupp/objsql support for unparse object operators and object view support
kol kolb kole kolf kolo objmgmt/objmgr support for object Lob buffering , object lob evaluation and object Language/runtime functions for Opaque types
kope2 kopi2 kopo kopp2 kopu koputil kopz objmgmt/pickler 8.1 engine implementation, implementation of image ops for 8.1+ image format together with various pickler related support functions
kos objsupp/objsup object Stream interfaces for images/objects
kot kot2 kotg objmgmt/typemgr support for dynamic type operations to create, delete, and update types.
koxs koxx objmgmt/objmgt object generic image Stream routines and miscellaneous generic object functions
kpcp kpcxlt progint/kpc Kernel programmatic connection pooling and kernel programmatic common type XLT translation routines
kpki progint/kpki kernel programmatic interface support
kpls cartserv/corecs support for string formatting operations
kpn progint/kpn support for server to server communication
kpoal8 kpoaq kpob kpodny kpodp kpods kpokgt kpolob kpolon kpon progint/kpo support for programmatic operations
kpor progint/opi support for streaming protocol used by replication
kposc progint/kpo support for scrollable cursors
kpotc progint/opi oracle side support functions for setting up trusted external procedure callbacks
kpotx kpov progint/kpo support for managing local and distributed transaction coordination.
kpp2 kpp3 sqllang/parse kpp2 - parse routines for dimensions;
kpp3 - parse support for create/alter/drop summary statements
kprb kprc progint/rpi support for executing sql efficiently on the Oracle server side as well as for copying data types during rpi operations
kptsc progint/twotask callback functions provided to all streaming operation as part of replication functionality
kpu kpuc kpucp progint/kpu Oracle kernel side programmatic user interface, cursor management functions and client side connection pooling support
kqan kqap kqas argusdbg/argusdbg server-side notifiers and callbacks for debug operations.
kql kqld kqlp dict/libcache SQL Library Cache manager - manages the sharing of sql statements in the shared pool
kqr dict/rowcache row cache management. The row cache consists of a set of facilities to provide fast access to table definitions and locking capabilities.
krbi krbx krby krcr krd krpi rcv Backup and recovery related operations :
krbi - dbms_backup_restore package underlying support.; krbx - proxy copy controller; krby - image copy; krcr - Recovery Controlfile Redo; krd - Recover Datafiles (Media & Standby Recovery); krpi - support for the package : dbms_pitr
krvg krvt rcv/vwr krvg - support for generation of redo associated with DDL; krvt - support for redo log miner viewer (also known as log miner)
ksa ksdp ksdx kse ksfd ksfh ksfq ksfv ksi ksim ksk ksl ksm ksmd ksmg ksn ksp kspt ksq ksr kss ksst ksu ksut vos support for various kernel associated capabilities
ksx sqlexec/execsvc support for query execution associated with temporary tables
ksxa ksxp ksxr vos support for various kernel associated capabilities in relation to OPS or RAC operation
kta space/spcmgmt support for DML locks and temporary tables associated with table access
ktb ktbt ktc txn/lcltx transaction control operations at the block level: locking a block, allocating space within the block, freeing up space, etc.
ktec ktef ktehw ktein ktel kteop kteu space/spcmgmt support for extent management operations:
ktec - extent concurrency operations; ktef - extent format; ktehw - extent high water mark operations; ktein - extent information operations; ktel - extent support for sql loader; kteop - extent operations: add extent to segment, delete extent, resize extent, etc.; kteu - redo support for operations changing segment header / extent map
ktf txn/lcltx flashback support
ktfb ktfd ktft ktm space/spcmgmt ktfb - support for bitmapped space manipulation of files/tablespaces; ktfd - dictionary-based extent management; ktft - support for temporary file manipulation; ktm - SMON operation
ktp ktpr ktr ktri txn/lcltx ktp - support for parallel transaction operation; ktpr - support for parallel transaction recovery; ktr - kernel transaction read consistency;
ktri - support for dbms_resumable package
ktsa ktsap ktsau ktsb ktscbr ktsf ktsfx ktsi ktsm ktsp ktss ktst ktsx ktt kttm space/spcmgmt support for checking and verifying space usage
ktu ktuc ktur ktusm txn/lcltx internal management of undo and rollback segments
kwqa kwqi kwqic kwqid kwqie kwqit kwqj kwqm kwqn kwqo kwqp kwqs kwqu kwqx oltp/qs support for advanced queuing:
kwqa - advanced queue administration; kwqi - support for AQ PL/SQL trusted callouts; kwqic - common AQ support functions; kwqid - AQ dequeue support; kwqie - AQ enqueue support; kwqit - time management operations; kwqj - job queue scheduler for propagation; kwqm - multiconsumer queue IOT support; kwqn - queue notifier; kwqo - AQ support for instType checking options; kwqp - queueing propagation; kwqs - statistics handling; kwqu - handles LOB data; kwqx - support for handling transformations
kwrc kwre oltp/re rules engine evaluation
kxcc kxcd kxcs sqllang/integ constraint processing
kxdr sqlexec/dmldrv DML driver entrypoint
kxfp kxfpb kxfq kxfr kxfx sqlexec/pq parallel query support
kxhf kxib sqlexec/execsvc kxhf - support for hash join file and memory management; kxib - index buffering operations
kxs dict/instmgmt support for executing shared cursors
kxti kxto kxtr dbproc/trigger support for trigger operation
kxtt ram/partnmap support for temporary table operations
kxwph ram/data support for managing attributes of the segment of a table / cluster / table-partition
kza security/audit support for auditing operations
kzar security/dac support for application auditing
kzck security/crypto encryption support
kzd security/dac support for dictionary access by security related functions
kzec security/dbencryption support inserting and retrieving encrypted objects into and out of the database
kzfa kzft security/audit support for fine grained auditing
kzia security/logon identification and authentication operations
kzp kzra kzrt kzs kzu kzup security/dac security related operations associated with privileges
msqima msqimb sqlexec/sqlgen support for generating sql statements
ncodef npi npil npixfr progint/npi support for managing remote network connections from within the server itself
oba sqllang/outbufal operator buffer allocation for various types of operators: concatenate, decode, NVL, etc.; the list is extensive.
ocik progint/oci OCI oracle server functions
opiaba opidrv opidsa opidsc opidsi opiexe opifch opiino opilng opipar opipls opirip opitsk opix progint/opi OPI Oracle server functions - these are at the top of the server stack and are called indirectly by the client in order to serve the client request.
orlr objmgmt/objmgr support for C language interfaces to user-defined types (UDTs)
orp objmgmt/pickler oracle's external pickler / opaque type interfaces
pesblt pfri pfrsqc plsql/cox pesblt - pl/sql built in interpreter; pfri - pl/sql runtime; pfrsqc - pl/sql callbacks for array sql and dml with returning
piht plsql/gen/utl support for pl/sql implementation of utl_http package
pirg plsql/cli/utl_raw support for pl/sql implementation of utl_raw package
pism plsql/cli/utl_smtp support for pl/sql implementation of utl_smtp package
pitcb plsql/cli/utl_tcp support for pl/sql implementation of utl_tcp package
piur plsql/gen/utl_url support for pl/sql implementation of utl_url package
plio plsql/pkg pl/sql object instantiation
plslm plsql/cox support for NCOMP processing
plsm pmuc pmuo pmux objmgmt/pol support for pl/sql handling of collections
prifold priold plsql/cox support to allow rpc forwarding to an older release
prm sqllang/param parameter handling associated with sql layer
prsa prsc prssz sqllang/parse prsa - parser for alter cluster command; prsc - parser for create database command; prssz - support for parse context to be saved
psdbnd psdevn progint/dbpsd psdbnd - support for managing bind variables; psdevn - support for pl/sql debugger
psdicd progint/plsicds small number of ICDs allowing pl/sql to call into 'C' source
psdmsc psdpgi progint/dbpsd psdmsc - pl/sql system dependent miscellaneous functions; psdpgi - support for opening and closing cursors in pl/sql
psf plsql/pls pl/sql service related functions for instantiating called pl/sql unit in library cache
qbadrv qbaopn sqllang/qrybufal provides allocation of buffer and control structures in query execution
qcdl qcdo dict/dictlkup qcdl - query compile semantic analysis; qcdo - query compile dictionary support for objects
qci dict/shrdcurs support for SQL language parser and semantic analyser
qcop qcpi qcpi3 qcpi4 qcpi5 sqllang/parse support for query compilation parse phase
qcs qcs2 qcs3 qcsji qcso dict/dictlkup support for semantic analysis by SQL compiler
qct qcto sqllang/typeconv qct - query compile type check operations; qcto - query compile type check operators
qcu sqllang/parse various utilities provided for sql compilation
qecdrv sqllang/qryedchk driver performing high level checks on sql language query capabilities
qerae qerba qerbc qerbi qerbm qerbo qerbt qerbu qerbx qercb qercbi qerco qerdl qerep qerff qerfi qerfl qerfu qerfx qergi qergr qergs qerhc qerhj qeril qerim qerix qerjm qerjo qerle qerli qerlt qerns qeroc qeroi qerpa qerpf qerpx qerrm qerse qerso qersq qerst qertb qertq qerua qerup qerus qervw qerwn qerxt sqlexec/rowsrc row source operators:
qerae - row source (And-Equal) implementation; qerba - Bitmap Index AND row source; qerbc - bitmap index compaction row source; qerbi - bitmap index creation row source; qerbm - QERB Minus row source; qerbo - Bitmap Index OR row source; qerbt - bitmap convert row source; qerbu - Bitmap Index Unlimited-OR row source; qerbx - bitmap index access row source; qercb - row source: connect by; qercbi - support for connect by; qerco - count row source; qerdl - row source delete; qerep - explosion row source; qerff - row source fifo buffer; qerfi - first row row source; qerfl - filter row source definition; qerfu - row source: for update; qerfx - fixed table row source; qergi - granule iterator row source; qergr - group by rollup row source; qergs - group by sort row source; qerhc - row sources hash clusters; qerhj - row source Hash Join; qeril - In-list row source; qerim - Index Maintenance row source; qerix - Index row source; qerjo - row source: join; qerle - linear execution row source implementation; qerli - parallel create index; qerlt - row source populate Table; qerns - group by No Sort row source; qeroc - object collection iterator row source; qeroi - extensible indexing query component; qerpa - partition row sources; qerpf - query execution row source: prefetch; qerpx - row source: parallelizer; qerrm - remote row source; qerse - row source: set implementation; qerso - sort row source; qersq - row source for sequence number; qerst - query execution row sources: statistics; qertb - table row source; qertq - table queue row source; qerua - row source : union-All;
qerup - update row source; qerus - upsert row source ; qervw - view row source; qerwn - WINDOW row source; qerxt - external table fetch row source
qes3t qesa qesji qesl qesmm qesmmc sqlexec/execsvc run time support for sql execution
qkacon qkadrv qkajoi qkatab qke qkk qkn qkna qkne sqlexec/rwsalloc SQL query dynamic structure allocation routines
qks3t sqlexec/execsvc query execution service associated with temp table transformation
qksmm qksmms qksop sqllang/compsvc qksmm - memory management services for the SQL compiler; qksmms - memory management simulation services for the SQL compiler; qksop - query compilation service for operand processing
qkswc sqlexec/execsvc support for temp table transformation associated with the WITH clause.
qmf xmlsupp/util support for ftp server; implements processing of ftp commands
qmr qmrb qmrs xmlsupp/resolver support hierarchical resolver
qms xmlsupp/data support for storage and retrieval of XOBs
qmurs xmlsupp/uri support for handling URIs
qmx qmxsax xmlsupp/data qmx - xml support; qmxsax - support for handling sax processing
qmxtc xmlsupp/sqlsupp support for ddl and other operators related to the sql XML support
qmxtgx xmlsupp support for transformation : ADT -> XML
qmxtsk xmlsupp/sqlsupp XMLType support functions
qsme summgmt/dict summary management expression processing
qsmka qsmkz dict/dictlkup qsmka - support to analyze request in order to determine whether a summary could be created that would be useful; qsmkz - support for create/alter summary semantic analysis
qsmp qsmq qsmqcsm qsmqutl summgmt/dict qsmp - summary management partition processing; qsmq - summary management dictionary access; qsmqcsm - support for create / drop / alter summary and related dimension operations; qsmqutl - support for summaries
qsms summgmt/advsvr summary management advisor
qxdid objsupp/objddl support for domain index ddl operations
qxidm objsupp/objsql support for extensible index dml operations
qxidp objsupp/objddl support for domain index ddl partition operations
qxim objsupp/objsql extensible indexing support for objects
qxitex qxopc qxope objsupp/objddl qxitex - support for create / drop indextype; qxopc - execution time support for operator callbacks; qxope - execution time support for operator DDL
qxopq qxuag qxxm objsupp/objsql qxopq - support for queries with user-defined operators; qxuag - support for user defined aggregate processing; qxxm - queries involving external tables
rfmon rfra rfrdb rfrla rfrm rfrxpt drs implements 9i data guard broker monitor
rnm dict/sqlddl manages rename statement operation
rpi progint/rpi recursive procedure interface which handles the environment setup where multiple recursive statements are executed from one top-level statement
rwoima sqlexec/rwoprnds row operand operations
rwsima sqlexec/rowsrc row source implementation/retrieval according to the defining query
sdbima sqlexec/sort manages and performs sort operation
selexe sqlexec/dmldrv handles the operation of select statement execution
skgm osds platform specific memory management routines interfacing with O.S. allocation functions
smbima sor sqlexec/sort manages and performs sort operation
sqn dict/sqlddl support for parsing references to sequences
srdima srsima stsima sqlexec/sort manages and performs sort operation
tbsdrv space/spcmgmt operations for executing create / alter / drop tablespace and related supporting functions
ttcclr ttcdrv ttcdty ttcrxh ttcx2y progint/twotask two task common layer which provides high level interaction and negotiation functions for the Oracle client when communicating with the server. It also provides the important function of converting client-side data / data types into their server-side equivalents and vice versa
uixexe ujiexe updexe upsexe sqlexec/dmldrv support for: index maintenance operations, the execution of the update statement and associated actions, as well as the upsert command which combines the operations of update and insert
vop optim/vwsubq view optimisation related functionality
xct txn/lcltx support for the management of transactions and savepoint operations
xpl sqlexec/expplan support for the explain plan command
xty sqllang/typeconv type checking functions
zlke security/ols/intext label security error handling component
}}}
https://community.hortonworks.com/questions/2067/orc-vs-parquet-when-to-use-one-over-the-other.html
<<<
In my mind the two biggest considerations for ORC over Parquet are:
1. Many of the performance improvements provided in the Stinger initiative are dependent on features of the ORC format, including a block-level index for each column. This leads to potentially more efficient I/O, allowing Hive to skip reading entire blocks of data if it determines predicate values are not present there. Also, the Cost Based Optimizer has the ability to consider column-level metadata present in ORC files in order to generate the most efficient graph.
2. ACID transactions are only possible when using ORC as the file format.
<<<
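Quick sketch of the two points above in HiveQL - a hypothetical table (all names made up) stored as ORC with the properties that enable stripe skipping and ACID DML. This is just an illustration, not tuning advice:
{{{
-- hypothetical table; ORC keeps min/max column statistics per stripe,
-- so predicate evaluation can skip whole stripes at read time
CREATE TABLE web_events (
  event_id BIGINT,
  event_ts TIMESTAMP,
  user_id  BIGINT,
  status   STRING
)
CLUSTERED BY (event_id) INTO 8 BUCKETS   -- ACID tables must be bucketed
STORED AS ORC
TBLPROPERTIES (
  'orc.compress'  = 'ZLIB',    -- SNAPPY is the other common choice
  'transactional' = 'true'     -- ACID UPDATE/DELETE only works on ORC
);

-- thanks to the per-column block indexes, a filter like this can skip
-- any stripe whose min/max range excludes user_id = 42
SELECT count(*) FROM web_events WHERE user_id = 42;
}}}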
https://hortonworks.com/blog/orcfile-in-hdp-2-better-compression-better-performance/
https://stackoverflow.com/questions/32373460/parquet-vs-orc-vs-orc-with-snappy
http://parquet.apache.org/presentations/
<<showtoc>>
! ORDS for REST API ?
{{{
I would say ORDS for convenience, so you don't have to manually create the JSON API (see the sketch after this block).
There's a nice Pluralsight ORDS course out there to get started, with code examples to practice. It also shows how to secure (OAuth2) and deploy ORDS to a web tier:
https://www.pluralsight.com/courses/oracle-rest-data-services
Dan McGhan did a presentation at OOW that shows how to create a REST API the manual way using node-oracledb (https://oracle.github.io/node-oracledb/) and also using ORDS.
Creating RESTful Web Services the Easy Way with Node.js https://www.youtube.com/watch?v=tSW72IlTJGw
code examples: https://github.com/oracle/node-oracledb/issues/962, https://github.com/oracle/oracle-db-examples/tree/master/javascript/rest-api, http://web.archive.org/web/20201128020553/https://jsao.io/2018/03/creating-a-rest-api-with-node-js-and-oracle-database/
And just for completeness, if you are using one of the web frameworks out there like Ember, Angular, React, etc., here's how everything gets glued together from DB to frontend. The examples below use an Ember.js app.
Oracle DB <-> REST JSON API (node-oracledb or ORDS) <-> built-in Ember JSONAPIAdapter <-> Ember Data <-> Ember
Just for comparison, if you are using Postgres as the DB and Django for creating the REST API, here's how it looks:
Postgresql <-> REST JSON API (Django) <-> built-in Ember JSONAPIAdapter <-> Ember Data <-> Ember
If you are using Postgres as the DB and Rails for the REST API:
Postgresql <-> REST JSON API (rails) <-> built-in JSONAPIAdapter <-> Ember Data <-> Ember
Then the old-school app architecture using Postgres as the DB and the Rails ORM:
Postgresql <-> ActiveRecord ORM (in a Rails Controller) <-> ActiveModel::Serializers <-> Ember Data (with active-model-adapter) <-> Ember
}}}
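Here's a minimal sketch of the ORDS "convenience" path mentioned above - AutoREST-enabling a schema and a table so ORDS generates the JSON API for you. Assumes ORDS is installed and its PL/SQL package is available; HR and EMPLOYEES are placeholder names:
{{{
BEGIN
  -- expose the schema under the /hr/ base path
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'HR',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'hr',
    p_auto_rest_auth      => FALSE);

  -- AutoREST-enable one table; ORDS then serves
  -- GET/POST/PUT/DELETE on /ords/hr/employees/
  ORDS.ENABLE_OBJECT(
    p_enabled      => TRUE,
    p_schema       => 'HR',
    p_object       => 'EMPLOYEES',
    p_object_type  => 'TABLE',
    p_object_alias => 'employees');

  COMMIT;
END;
/
}}}
The node-oracledb route in Dan McGhan's talk builds the same kind of endpoints by hand, which costs more code but gives full control over the JSON shape.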
! references
https://www.oracle.com/database/technologies/databaseappdev-vm.html
https://www.pluralsight.com/courses/oracle-rest-data-services
https://www.thatjeffsmith.com/oracle-rest-data-services-ords/
https://oracle-base.com/articles/misc/articles-misc#ords
Creating a REST API with Node.js and Oracle Database http://web.archive.org/web/20201128020553/https://jsao.io/2018/03/creating-a-rest-api-with-node-js-and-oracle-database/
https://github.com/oracle/oracle-db-examples/tree/master/javascript/rest-api
https://github.com/oracle/node-oracledb/issues/962
Creating RESTful Web Services the Easy Way with Node.js https://www.youtube.com/watch?v=tSW72IlTJGw
https://blogs.oracle.com/author/dan-mcghan-3
https://oracle.github.io/node-oracledb/
https://developer.oracle.com/dsl/haefel-oracle-ruby.html
.
https://blog.acolyer.org/2019/07/12/view-centric-performance-optimization/
https://blog.acolyer.org/2018/06/28/how-_not_-to-structure-your-database-backed-web-applications-a-study-of-performance-bugs-in-the-wild/
View-Centric Performance Optimization for Database-Backed Web Applications https://people.cs.uchicago.edu/~shanlu/paper/panorama.pdf
https://developers.google.com/web/fundamentals/performance/why-performance-matters/
https://www.oracle.com/technical-resources/documentation/fsgbu.html
! batch stack
Oracle Revenue Management and Billing https://docs.oracle.com/cd/E87761_01/homepage.htm
https://docs.oracle.com/cd/E87761_01/books/V2.6.0.0.0/Oracle_Revenue_Management_and_Billing_Transaction_Feed_Management_-_Batch_Execution_Guide.pdf
! analytics stack
Oracle Revenue Management and Billing Analytics https://docs.oracle.com/cd/E64452_01/homepage.htm
https://docs.oracle.com/cd/E64452_01/books/V2.8.0.0.0/Oracle_Revenue_Management_and_Billing_Analytics_Installation_Guide.pdf
https://docs.oracle.com/cd/E64452_01/books/V2.8.0.0.0/Oracle_Revenue_Management_and_Billing_Analytics_Admin_Guide.pdf
.
https://forums.oracle.com/forums/thread.jspa?threadID=369320&start=15&tstart=0
http://www.oracle.com/technetwork/database/enterprise-edition/calling-shell-commands-from-plsql-1-1-129519.pdf
<<<
{{{
Below is the list of activities on the OSB project
*** some observations
1) OSB installation
Read the install guide:
Installing and Configuring Oracle Secure Backup 10.2 http://st-curriculum.oracle.com/obe/db/11g/r1/prod/ha/osb10_2install/osb1.htm
2) testing of RMAN backups
Performing Encrypted Backups with Oracle Secure Backup 10.2 http://st-curriculum.oracle.com/obe/db/11g/r1/prod/ha/osb10_2encrypt/osb2.htm#t3
Performing Database and File System Backups and Restores Using Oracle Secure Backup http://st-curriculum.oracle.com/obe/db/10g/r2/prod/ha/ob/ob_otn.htm
- RMAN backs up directly to tape using
backup incremental level 0 device type sbt_tape database plus archivelog;
OR
- Daily backup of the recovery area to tape... I noticed it pulls only new archivelogs to tape
backup device type sbt_tape recovery area;
OR
- Daily backup of recovery area and backup sets to tape <-- this is more promising!!!
backup device type sbt_tape recovery files;
3) testing of filesystem backups, possible to just pull the RMAN backups created on the filesystem
4) recovery testing of RMAN backups from tape (direct)
5) recovery testing of RMAN backups from tape-to-disk
6) media policy creation
7) creation of backup scripts for OSB (see the sketch after this list)
}}}
<<<
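For item 7, a rough sketch of what the OSB backup script could look like, assembled from the commands tested above - the scheduling split and the housekeeping commands at the end are my assumptions, not something tested in this session:
{{{
# weekly: level 0 of the database straight to tape (item 2)
backup incremental level 0 device type sbt_tape database plus archivelog;

# daily: sweep the recovery area (backup sets + new archivelogs) to tape
backup device type sbt_tape recovery files;

# housekeeping: reconcile the RMAN repository with what is actually on tape
crosscheck backup;
delete noprompt expired backup;
}}}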
{{{
C:\Documents and Settings\Sopraadmin>sqlplus "/ as sysdba"
SQL*Plus: Release 11.1.0.7.0 - Production on Wed Nov 3 05:59:30 2010
Copyright (c) 1982, 2008, Oracle. All rights reserved.
Connected to an idle instance.
SQL>
SQL>
SQL> startup
ORACLE instance started.
Total System Global Area 535662592 bytes
Fixed Size 1348508 bytes
Variable Size 331353188 bytes
Database Buffers 197132288 bytes
Redo Buffers 5828608 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'
SQL>
SQL>
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
1 osbtest
PHBSPSERV010
11.1.0.7.0 03-NOV-10 MOUNTED NO 1 STARTED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO
SQL>
SQL>
SQL> select name, status from v$datafile;
NAME
--------------------------------------------------------------------------------
STATUS
-------
C:\ORACLE\ORADATA\OSBTEST\SYSTEM01.DBF
SYSTEM
C:\ORACLE\ORADATA\OSBTEST\SYSAUX01.DBF
ONLINE
C:\ORACLE\ORADATA\OSBTEST\UNDOTBS01.DBF
ONLINE
NAME
--------------------------------------------------------------------------------
STATUS
-------
C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
ONLINE
C:\ORACLE\ORADATA\OSBTEST\EXAMPLE01.DBF
ONLINE
SQL>
SQL> set lines 300
SQL> r
1* select name, status from v$datafile
NAME
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
STATUS
-------
C:\ORACLE\ORADATA\OSBTEST\SYSTEM01.DBF
SYSTEM
C:\ORACLE\ORADATA\OSBTEST\SYSAUX01.DBF
ONLINE
C:\ORACLE\ORADATA\OSBTEST\UNDOTBS01.DBF
ONLINE
NAME
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
STATUS
-------
C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
ONLINE
C:\ORACLE\ORADATA\OSBTEST\EXAMPLE01.DBF
ONLINE
SQL>
SQL>
SQL> col name format a30
SQL> r
1* select name, status from v$datafile
NAME STATUS
------------------------------ -------
C:\ORACLE\ORADATA\OSBTEST\SYST SYSTEM
EM01.DBF
C:\ORACLE\ORADATA\OSBTEST\SYSA ONLINE
UX01.DBF
C:\ORACLE\ORADATA\OSBTEST\UNDO ONLINE
TBS01.DBF
C:\ORACLE\ORADATA\OSBTEST\USER ONLINE
S01.DBF
NAME STATUS
------------------------------ -------
C:\ORACLE\ORADATA\OSBTEST\EXAM ONLINE
PLE01.DBF
SQL>
SQL>
SQL> col name format a50
SQL> r
1* select name, status from v$datafile
NAME STATUS
-------------------------------------------------- -------
C:\ORACLE\ORADATA\OSBTEST\SYSTEM01.DBF SYSTEM
C:\ORACLE\ORADATA\OSBTEST\SYSAUX01.DBF ONLINE
C:\ORACLE\ORADATA\OSBTEST\UNDOTBS01.DBF ONLINE
C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF ONLINE
C:\ORACLE\ORADATA\OSBTEST\EXAMPLE01.DBF ONLINE
SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
SQL> desc v$datafile
Name N
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
FILE#
CREATION_CHANGE#
CREATION_TIME
TS#
RFILE#
STATUS
ENABLED
CHECKPOINT_CHANGE#
CHECKPOINT_TIME
UNRECOVERABLE_CHANGE#
UNRECOVERABLE_TIME
LAST_CHANGE#
LAST_TIME
OFFLINE_CHANGE#
ONLINE_CHANGE#
ONLINE_TIME
BYTES
BLOCKS
CREATE_BYTES
BLOCK_SIZE
NAME
PLUGGED_IN
BLOCK1_OFFSET
AUX_NAME
FIRST_NONLOGGED_SCN
FIRST_NONLOGGED_TIME
FOREIGN_DBID
FOREIGN_CREATION_CHANGE#
FOREIGN_CREATION_TIME
PLUGGED_READONLY
PLUGIN_CHANGE#
PLUGIN_RESETLOGS_CHANGE#
PLUGIN_RESETLOGS_TIME
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_ ERROR CHANGE# TIME
---------- ------- ------- ----------------------------------------------------------------- ---------- ---------
4 ONLINE ONLINE FILE NOT FOUND 0
SQL>
SQL>
SQL>
SQL>
SQL> recover datafile 4;
ORA-00283: recovery session canceled due to errors
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'
SQL>
SQL>
SQL>
SQL>
SQL> alter database open;
Database altered.
SQL> select * from v$recover_file;
no rows selected
SQL>
SQL>
SQL> create table test1 as select * from dba_objects;
Table created.
SQL>
SQL>
SQL>
SQL> alter system switch logfile;
System altered.
SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
SQL> alter system switch logfile;
System altered.
SQL> alter system switch logfile;
System altered.
SQL> shutdown abort
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 535662592 bytes
Fixed Size 1348508 bytes
Variable Size 331353188 bytes
Database Buffers 197132288 bytes
Redo Buffers 5828608 bytes
Database mounted.
Database opened.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 535662592 bytes
Fixed Size 1348508 bytes
Variable Size 331353188 bytes
Database Buffers 197132288 bytes
Redo Buffers 5828608 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01113: file 4 needs media recovery
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'
SQL> alter database open;
Database altered.
SQL> select count(*) from test1;
COUNT(*)
----------
69614
SQL>
RMAN SESSION
================================================================================
RMAN> list backup of database summary;
List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
1 B 0 A * 28-OCT-10 1 2 YES TAG20101028T065752
4 B 0 A * 28-OCT-10 1 2 YES TAG20101028T070243
8 B F A SBT_TAPE 28-OCT-10 1 1 NO TEST4
9 B F A SBT_TAPE 28-OCT-10 1 1 NO TEST5
10 B 0 A SBT_TAPE 28-OCT-10 1 1 NO TEST6
12 B 0 A SBT_TAPE 28-OCT-10 1 1 NO TAG20101028T083340
14 B 0 A * 03-NOV-10 1 2 YES TAG20101103T035736
19 B 0 A SBT_TAPE 03-NOV-10 1 1 NO TAG20101103T042106
23 B 0 A * 03-NOV-10 1 2 YES TAG20101103T042801
30 B 1 A * 03-NOV-10 1 2 YES TAG20101103T052913
RMAN>
RMAN>
RMAN> exit
Recovery Manager complete.
C:\>
C:\>
C:\>rman target /
Recovery Manager: Release 11.1.0.7.0 - Production on Wed Nov 3 06:01:24 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: OSBTEST (DBID=3880221928, not open)
RMAN>
RMAN>
RMAN> restore datafile 4;
Starting restore at 03-NOV-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=155 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=151 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Secure Backup
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_DISK_1: reading from backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-16LS21MH_1_1
channel ORA_DISK_1: ORA-19870: error while restoring backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-16LS21MH_1_1
ORA-19505: failed to identify file "C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-16LS21MH_1_1"
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
channel ORA_DISK_1: failover to duplicate backup on device SBT_TAPE
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00004 to C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_SBT_TAPE_1: reading from backup piece 16ls21mh_1_2
channel ORA_SBT_TAPE_1: piece handle=16ls21mh_1_2 tag=TAG20101103T042801
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:02:15
Finished restore at 03-NOV-10
RMAN>
RMAN>
RMAN> recover datafile 4;
Starting recover at 03-NOV-10
using channel ORA_DISK_1
using channel ORA_SBT_TAPE_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00004: C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_DISK_1: reading from backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-1ELS259A_1_1
channel ORA_DISK_1: ORA-19870: error while restoring backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-1ELS259A_1_1
ORA-19505: failed to identify file "C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-1ELS259A_1_1"
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
channel ORA_DISK_1: failover to duplicate backup on device SBT_TAPE
channel ORA_SBT_TAPE_1: starting incremental datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00004: C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_SBT_TAPE_1: reading from backup piece 1els259a_1_2
channel ORA_SBT_TAPE_1: piece handle=1els259a_1_2 tag=TAG20101103T052913
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:05
starting media recovery
media recovery complete, elapsed time: 00:00:01
Finished recover at 03-NOV-10
RMAN> exit
Recovery Manager complete.
C:\>rman target /
Recovery Manager: Release 11.1.0.7.0 - Production on Wed Nov 3 06:08:25 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: OSBTEST (DBID=3880221928)
RMAN>
RMAN>
RMAN>
RMAN> backup device type sbt_tape recovery files;
Starting backup at 03-NOV-10
using target database control file instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=152 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Secure Backup
specification does not match any datafile copy in the repository
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_387_6DHMHOHV_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_388_6DJHMSCH_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_389_6DKDNSNX_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_390_6DKJOW98_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_391_6DKPBFO0_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_392_6DLCFW7Y_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_393_6DLO7W6O_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_394_6DLO8284_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_395_6DLOZJKC_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_396_6DLP1RR7_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_397_6DLP4C58_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_398_6DLQVS1F_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_399_6DLSG4J2_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_400_6DLVCC51_.ARC; already backed up 4 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_401_6DMPTGDZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_402_6DN53SQ7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_403_6DNBPQDD_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_404_6DNXNSQK_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_405_6DORDFX3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_406_6DPNJQGZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_407_6DPSL8RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_408_6DPZ407C_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_409_6DQQOW6P_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_410_6DR9ODQ0_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_411_6DS0YS5K_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_412_6DSG0WSO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_413_6DSMWSGN_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_414_6DT12CKT_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_415_6DTVZHTY_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_416_6DVO555M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_417_6DW2HM13_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_418_6DW8D7SM_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_419_6DX1O0J7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_420_6DY4YX2O_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_421_6DYW68KR_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_422_6DYW7O4M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_423_6DZKYR64_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_424_6F0O48W3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_425_6F1JL1T8_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_426_6F1JOWCO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_427_6F1THQ8F_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_428_6F28WM5V_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_429_6F2924RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_430_6F2B7XRK_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_431_6F2BGSRO_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_432_6F2BOOD3_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_433_6F2BV5OH_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_434_6F2FZC4D_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2HGOOW_.ARC; already backed up 1 time(s)
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 11/03/2010 06:08:43
RMAN-06059: expected archived log not found, lost of archived log compromises recoverability
ORA-19625: error identifying file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2HZ807_.ARC
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
RMAN>
RMAN>
RMAN>
RMAN>
RMAN> backup device type sbt_tape recovery files;
Starting backup at 03-NOV-10
using channel ORA_SBT_TAPE_1
specification does not match any datafile copy in the repository
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_387_6DHMHOHV_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_388_6DJHMSCH_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_389_6DKDNSNX_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_390_6DKJOW98_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_391_6DKPBFO0_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_392_6DLCFW7Y_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_393_6DLO7W6O_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_394_6DLO8284_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_395_6DLOZJKC_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_396_6DLP1RR7_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_397_6DLP4C58_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_398_6DLQVS1F_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_399_6DLSG4J2_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_400_6DLVCC51_.ARC; already backed up 4 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_401_6DMPTGDZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_402_6DN53SQ7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_403_6DNBPQDD_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_404_6DNXNSQK_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_405_6DORDFX3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_406_6DPNJQGZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_407_6DPSL8RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_408_6DPZ407C_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_409_6DQQOW6P_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_410_6DR9ODQ0_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_411_6DS0YS5K_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_412_6DSG0WSO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_413_6DSMWSGN_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_414_6DT12CKT_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_415_6DTVZHTY_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_416_6DVO555M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_417_6DW2HM13_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_418_6DW8D7SM_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_419_6DX1O0J7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_420_6DY4YX2O_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_421_6DYW68KR_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_422_6DYW7O4M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_423_6DZKYR64_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_424_6F0O48W3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_425_6F1JL1T8_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_426_6F1JOWCO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_427_6F1THQ8F_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_428_6F28WM5V_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_429_6F2924RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_430_6F2B7XRK_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_431_6F2BGSRO_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_432_6F2BOOD3_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_433_6F2BV5OH_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_434_6F2FZC4D_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2HGOOW_.ARC; already backed up 1 time(s)
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 11/03/2010 06:09:02
RMAN-06059: expected archived log not found, lost of archived log compromises recoverability
ORA-19625: error identifying file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2HZ807_.ARC
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
RMAN>
RMAN>
RMAN> backup device type sbt_tape recovery files;
Starting backup at 03-NOV-10
using channel ORA_SBT_TAPE_1
specification does not match any datafile copy in the repository
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_387_6DHMHOHV_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_388_6DJHMSCH_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_389_6DKDNSNX_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_390_6DKJOW98_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_391_6DKPBFO0_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_392_6DLCFW7Y_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_393_6DLO7W6O_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_394_6DLO8284_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_395_6DLOZJKC_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_396_6DLP1RR7_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_397_6DLP4C58_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_398_6DLQVS1F_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_399_6DLSG4J2_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_400_6DLVCC51_.ARC; already backed up 4 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_401_6DMPTGDZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_402_6DN53SQ7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_403_6DNBPQDD_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_404_6DNXNSQK_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_405_6DORDFX3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_406_6DPNJQGZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_407_6DPSL8RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_408_6DPZ407C_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_409_6DQQOW6P_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_410_6DR9ODQ0_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_411_6DS0YS5K_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_412_6DSG0WSO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_413_6DSMWSGN_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_414_6DT12CKT_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_415_6DTVZHTY_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_416_6DVO555M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_417_6DW2HM13_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_418_6DW8D7SM_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_419_6DX1O0J7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_420_6DY4YX2O_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_421_6DYW68KR_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_422_6DYW7O4M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_423_6DZKYR64_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_424_6F0O48W3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_425_6F1JL1T8_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_426_6F1JOWCO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_427_6F1THQ8F_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_428_6F28WM5V_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_429_6F2924RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_430_6F2B7XRK_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_431_6F2BGSRO_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_432_6F2BOOD3_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_433_6F2BV5OH_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_434_6F2FZC4D_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2HGOOW_.ARC; already backed up 1 time(s)
skipping backup set key 1; already backed up 1 time(s)
skipping backup set key 2; already backed up 1 time(s)
skipping backup set key 3; already backed up 1 time(s)
skipping backup set key 4; already backed up 1 time(s)
skipping backup set key 5; already backed up 1 time(s)
skipping backup set key 6; already backed up 1 time(s)
skipping backup set key 13; already backed up 1 time(s)
skipping backup set key 14; already backed up 1 time(s)
skipping backup set key 15; already backed up 1 time(s)
skipping backup set key 16; already backed up 1 time(s)
skipping backup set key 17; already backed up 1 time(s)
skipping backup set key 22; already backed up 1 time(s)
skipping backup set key 23; already backed up 1 time(s)
skipping backup set key 24; already backed up 1 time(s)
skipping backup set key 25; already backed up 1 time(s)
skipping backup set key 30; already backed up 1 time(s)
skipping backup set key 31; already backed up 1 time(s)
channel ORA_SBT_TAPE_1: starting archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=436 RECID=433 STAMP=734075800
input archived log thread=1 sequence=437 RECID=434 STAMP=734076483
input archived log thread=1 sequence=438 RECID=435 STAMP=734076495
input archived log thread=1 sequence=439 RECID=436 STAMP=734076498
channel ORA_SBT_TAPE_1: starting piece 1 at 03-NOV-10
channel ORA_SBT_TAPE_1: finished piece 1 at 03-NOV-10
piece handle=1jls27la_1_1 tag=TAG20101103T060944 comment=API Version 2.0,MMS Version 10.3.0.2
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:01:55
Finished backup at 03-NOV-10
Starting Control File and SPFILE Autobackup at 03-NOV-10
piece handle=c-3880221928-20101103-08 comment=API Version 2.0,MMS Version 10.3.0.2
Finished Control File and SPFILE Autobackup at 03-NOV-10
RMAN> backup device type sbt_tape recovery files;
Starting backup at 03-NOV-10
using channel ORA_SBT_TAPE_1
specification does not match any datafile copy in the repository
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_387_6DHMHOHV_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_388_6DJHMSCH_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_389_6DKDNSNX_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_390_6DKJOW98_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_391_6DKPBFO0_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_392_6DLCFW7Y_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_393_6DLO7W6O_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_394_6DLO8284_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_395_6DLOZJKC_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_396_6DLP1RR7_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_397_6DLP4C58_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_398_6DLQVS1F_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_399_6DLSG4J2_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_400_6DLVCC51_.ARC; already backed up 4 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_401_6DMPTGDZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_402_6DN53SQ7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_403_6DNBPQDD_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_404_6DNXNSQK_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_405_6DORDFX3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_406_6DPNJQGZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_407_6DPSL8RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_408_6DPZ407C_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_409_6DQQOW6P_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_410_6DR9ODQ0_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_411_6DS0YS5K_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_412_6DSG0WSO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_413_6DSMWSGN_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_414_6DT12CKT_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_415_6DTVZHTY_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_416_6DVO555M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_417_6DW2HM13_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_418_6DW8D7SM_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_419_6DX1O0J7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_420_6DY4YX2O_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_421_6DYW68KR_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_422_6DYW7O4M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_423_6DZKYR64_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_424_6F0O48W3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_425_6F1JL1T8_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_426_6F1JOWCO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_427_6F1THQ8F_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_428_6F28WM5V_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_429_6F2924RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_430_6F2B7XRK_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_431_6F2BGSRO_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_432_6F2BOOD3_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_433_6F2BV5OH_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_434_6F2FZC4D_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2HGOOW_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2HZ807_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2JNM41_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2JNZBW_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_439_6F2JO2TJ_.ARC; already backed up 1 time(s)
skipping backup set key 1; already backed up 1 time(s)
skipping backup set key 2; already backed up 1 time(s)
skipping backup set key 3; already backed up 1 time(s)
skipping backup set key 4; already backed up 1 time(s)
skipping backup set key 5; already backed up 1 time(s)
skipping backup set key 6; already backed up 1 time(s)
skipping backup set key 13; already backed up 1 time(s)
skipping backup set key 14; already backed up 1 time(s)
skipping backup set key 15; already backed up 1 time(s)
skipping backup set key 16; already backed up 1 time(s)
skipping backup set key 17; already backed up 1 time(s)
skipping backup set key 22; already backed up 1 time(s)
skipping backup set key 23; already backed up 1 time(s)
skipping backup set key 24; already backed up 1 time(s)
skipping backup set key 25; already backed up 1 time(s)
skipping backup set key 30; already backed up 1 time(s)
skipping backup set key 31; already backed up 1 time(s)
Finished backup at 03-NOV-10
RMAN>
RMAN>
RMAN> exit
Recovery Manager complete.
C:\>rman target /
Recovery Manager: Release 11.1.0.7.0 - Production on Wed Nov 3 06:15:08 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: OSBTEST (DBID=3880221928, not open)
RMAN> restore datafile 4;
Starting restore at 03-NOV-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=156 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=151 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Secure Backup
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_DISK_1: reading from backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-16LS21MH_1_1
channel ORA_DISK_1: ORA-19870: error while restoring backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-16LS21MH_1_1
ORA-19505: failed to identify file "C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-16LS21MH_1_1"
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
channel ORA_DISK_1: failover to duplicate backup on device SBT_TAPE
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00004 to C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_SBT_TAPE_1: reading from backup piece 16ls21mh_1_2
channel ORA_SBT_TAPE_1: piece handle=16ls21mh_1_2 tag=TAG20101103T042801
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:45
Finished restore at 03-NOV-10
RMAN>
RMAN>
RMAN> recover datafile 4;
Starting recover at 03-NOV-10
using channel ORA_DISK_1
using channel ORA_SBT_TAPE_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00004: C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_DISK_1: reading from backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-1ELS259A_1_1
channel ORA_DISK_1: ORA-19870: error while restoring backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-1ELS259A_1_1
ORA-19505: failed to identify file "C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-1ELS259A_1_1"
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
channel ORA_DISK_1: failover to duplicate backup on device SBT_TAPE
channel ORA_SBT_TAPE_1: starting incremental datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00004: C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_SBT_TAPE_1: reading from backup piece 1els259a_1_2
channel ORA_SBT_TAPE_1: piece handle=1els259a_1_2 tag=TAG20101103T052913
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:05
starting media recovery
archived log for thread 1 with sequence 440 is already on disk as file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_440_6F2K0239_.ARC
channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=435
channel ORA_SBT_TAPE_1: reading from backup piece 1hls26c7_1_1
channel ORA_SBT_TAPE_1: piece handle=1hls26c7_1_1 tag=TAG20101103T054751
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:05
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2KCN7F_.ARC thread=1 sequence=435
channel default: deleting archived log(s)
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2KCN7F_.ARC RECID=438 STAMP=734077220
channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=436
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=437
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=438
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=439
channel ORA_SBT_TAPE_1: reading from backup piece 1jls27la_1_1
channel ORA_SBT_TAPE_1: piece handle=1jls27la_1_1 tag=TAG20101103T060944
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:05
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2KFOXG_.ARC thread=1 sequence=436
channel default: deleting archived log(s)
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2KFOXG_.ARC RECID=439 STAMP=734077286
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2KFOS1_.ARC thread=1 sequence=437
channel default: deleting archived log(s)
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2KFOS1_.ARC RECID=442 STAMP=734077287
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2KFPJ8_.ARC thread=1 sequence=438
channel default: deleting archived log(s)
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2KFPJ8_.ARC RECID=441 STAMP=734077286
channel default: deleting archived log(s)
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_439_6F2KFP0N_.ARC RECID=440 STAMP=734077286
media recovery complete, elapsed time: 00:00:03
Finished recover at 03-NOV-10
RMAN>
RMAN>
RMAN>
RMAN>
RMAN> backup device type sbt_tape recovery files;
Starting backup at 03-NOV-10
released channel: ORA_DISK_1
using channel ORA_SBT_TAPE_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 11/03/2010 06:29:28
RMAN-20021: database not set
RMAN> exit
Recovery Manager complete.
C:\>rman target /
Recovery Manager: Release 11.1.0.7.0 - Production on Wed Nov 3 06:29:31 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: OSBTEST (DBID=3880221928)
RMAN> backup device type sbt_tape recovery files;
Starting backup at 03-NOV-10
using target database control file instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=152 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Secure Backup
specification does not match any datafile copy in the repository
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_387_6DHMHOHV_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_388_6DJHMSCH_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_389_6DKDNSNX_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_390_6DKJOW98_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_391_6DKPBFO0_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_392_6DLCFW7Y_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_393_6DLO7W6O_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_394_6DLO8284_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_395_6DLOZJKC_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_396_6DLP1RR7_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_397_6DLP4C58_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_398_6DLQVS1F_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_399_6DLSG4J2_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_400_6DLVCC51_.ARC; already backed up 4 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_401_6DMPTGDZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_402_6DN53SQ7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_403_6DNBPQDD_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_404_6DNXNSQK_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_405_6DORDFX3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_406_6DPNJQGZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_407_6DPSL8RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_408_6DPZ407C_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_409_6DQQOW6P_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_410_6DR9ODQ0_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_411_6DS0YS5K_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_412_6DSG0WSO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_413_6DSMWSGN_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_414_6DT12CKT_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_415_6DTVZHTY_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_416_6DVO555M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_417_6DW2HM13_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_418_6DW8D7SM_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_419_6DX1O0J7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_420_6DY4YX2O_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_421_6DYW68KR_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_422_6DYW7O4M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_423_6DZKYR64_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_424_6F0O48W3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_425_6F1JL1T8_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_426_6F1JOWCO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_427_6F1THQ8F_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_428_6F28WM5V_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_429_6F2924RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_430_6F2B7XRK_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_431_6F2BGSRO_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_432_6F2BOOD3_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_433_6F2BV5OH_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_434_6F2FZC4D_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2HGOOW_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2HZ807_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2JNM41_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2JNZBW_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_439_6F2JO2TJ_.ARC; already backed up 1 time(s)
skipping backup set key 1; already backed up 1 time(s)
skipping backup set key 2; already backed up 1 time(s)
skipping backup set key 3; already backed up 1 time(s)
skipping backup set key 4; already backed up 1 time(s)
skipping backup set key 5; already backed up 1 time(s)
skipping backup set key 6; already backed up 1 time(s)
skipping backup set key 13; already backed up 1 time(s)
skipping backup set key 14; already backed up 1 time(s)
skipping backup set key 15; already backed up 1 time(s)
skipping backup set key 16; already backed up 1 time(s)
skipping backup set key 17; already backed up 1 time(s)
skipping backup set key 22; already backed up 1 time(s)
skipping backup set key 23; already backed up 1 time(s)
skipping backup set key 24; already backed up 1 time(s)
skipping backup set key 25; already backed up 1 time(s)
skipping backup set key 30; already backed up 1 time(s)
skipping backup set key 31; already backed up 1 time(s)
channel ORA_SBT_TAPE_1: starting archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=440 RECID=437 STAMP=734076850
channel ORA_SBT_TAPE_1: starting piece 1 at 03-NOV-10
channel ORA_SBT_TAPE_1: finished piece 1 at 03-NOV-10
piece handle=1lls28qf_1_1 tag=TAG20101103T062935 comment=API Version 2.0,MMS Version 10.3.0.2
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:02:15
Finished backup at 03-NOV-10
Starting Control File and SPFILE Autobackup at 03-NOV-10
piece handle=c-3880221928-20101103-09 comment=API Version 2.0,MMS Version 10.3.0.2
Finished Control File and SPFILE Autobackup at 03-NOV-10
RMAN>
ALERT LOG
===========================================
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =18
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 11.1.0.7.0.
Using parameter settings in server-side spfile C:\APP\ORACLE\PRODUCT\11.1.0\DB_1\DATABASE\SPFILEOSBTEST.ORA
System parameters with non-default values:
processes = 150
resource_limit = TRUE
nls_territory = "PHILIPPINES"
memory_target = 820M
control_files = "C:\ORACLE\ORADATA\OSBTEST\CONTROL01.CTL"
control_files = "C:\ORACLE\ORADATA\OSBTEST\CONTROL02.CTL"
control_files = "C:\ORACLE\ORADATA\OSBTEST\CONTROL03.CTL"
db_block_size = 8192
compatible = "11.1.0.0.0"
log_archive_format = "ARC%S_%R.%T"
db_recovery_file_dest = "\oracle\flash_recovery_area"
db_recovery_file_dest_size= 40000M
undo_tablespace = "UNDOTBS1"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = "epassport.ph"
dispatchers = "(PROTOCOL=TCP) (SERVICE=osbtestXDB)"
audit_file_dest = "C:\APP\ORACLE\ADMIN\OSBTEST\ADUMP"
audit_trail = "NONE"
db_name = "osbtest"
open_cursors = 300
diagnostic_dest = "C:\APP\ORACLE"
Wed Nov 03 05:56:30 2010
PMON started with pid=2, OS id=77428
Wed Nov 03 05:56:30 2010
VKTM started with pid=3, OS id=78532 at elevated priority
Wed Nov 03 05:56:30 2010
DIAG started with pid=4, OS id=76744
VKTM running at (20)ms precision
Wed Nov 03 05:56:30 2010
DBRM started with pid=5, OS id=72456
Wed Nov 03 05:56:30 2010
PSP0 started with pid=6, OS id=75816
Wed Nov 03 05:56:30 2010
DIA0 started with pid=7, OS id=77752
Wed Nov 03 05:56:30 2010
MMAN started with pid=8, OS id=75856
Wed Nov 03 05:56:30 2010
DBW0 started with pid=9, OS id=78564
Wed Nov 03 05:56:30 2010
LGWR started with pid=10, OS id=74368
Wed Nov 03 05:56:30 2010
CKPT started with pid=11, OS id=76372
Wed Nov 03 05:56:30 2010
SMON started with pid=12, OS id=77512
Wed Nov 03 05:56:30 2010
RECO started with pid=13, OS id=78396
Wed Nov 03 05:56:30 2010
MMON started with pid=14, OS id=79724
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = C:\app\oracle
Wed Nov 03 05:56:30 2010
ALTER DATABASE MOUNT
Wed Nov 03 05:56:30 2010
MMNL started with pid=15, OS id=73768
Wed Nov 03 05:56:34 2010
Sweep Incident[6004]: completed
Sweep Incident[5155]: completed
Sweep Incident[5154]: completed
Setting recovery target incarnation to 2
Successful mount of redo thread 1, with mount id 3888072718
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: ALTER DATABASE MOUNT
Wed Nov 03 05:56:35 2010
ALTER DATABASE OPEN
Sweep Incident[5153]: completed
Beginning crash recovery of 1 threads
parallel recovery started with 7 processes
Started redo scan
Completed redo scan
8 redo blocks read, 3 data blocks need recovery
Started redo application at
Thread 1: logseq 436, block 372
Recovery of Online Redo Log: Thread 1 Group 1 Seq 436 Reading mem 0
Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO01.LOG
Completed redo application of 0.00MB
Completed crash recovery at
Thread 1: logseq 436, block 380, scn 12289793
3 data blocks read, 3 data blocks written, 8 redo blocks read
LGWR: STARTING ARCH PROCESSES
Wed Nov 03 05:56:38 2010
ARC0 started with pid=21, OS id=77892
Wed Nov 03 05:56:38 2010
ARC1 started with pid=27, OS id=79792
Wed Nov 03 05:56:38 2010
ARC2 started with pid=28, OS id=78260
ARC0: Archival started
Wed Nov 03 05:56:38 2010
ARC3 started with pid=29, OS id=79212
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 advanced to log sequence 437 (thread open)
Thread 1 opened at log sequence 437
ARC0: Becoming the 'no FAL' ARCH
Current log# 2 seq# 437 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO02.LOG
ARC0: Becoming the 'no SRL' ARCH
Successful open of redo thread 1
ARC3: Becoming the heartbeat ARCH
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
SMON: enabling cache recovery
db_recovery_file_dest_size of 40000 MB is 44.23% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Successfully onlined Undo Tablespace 2.
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is AL32UTF8
Opening with internal Resource Manager plan
Starting background process FBDA
Wed Nov 03 05:56:41 2010
FBDA started with pid=30, OS id=77832
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Wed Nov 03 05:56:42 2010
QMNC started with pid=31, OS id=77904
Wed Nov 03 05:56:56 2010
Completed: ALTER DATABASE OPEN
Stopping background process FBDA
Shutting down instance: further logons disabled
Stopping background process QMNC
Wed Nov 03 05:57:06 2010
Stopping background process MMNL
Stopping background process MMON
Shutting down instance (immediate)
License high water mark = 8
Waiting for dispatcher 'D000' to shutdown
All dispatchers and shared servers shutdown
ALTER DATABASE CLOSE NORMAL
Wed Nov 03 05:57:10 2010
SMON: disabling tx recovery
SMON: disabling cache recovery
Wed Nov 03 05:57:11 2010
Shutting down archive processes
Archiving is disabled
Wed Nov 03 05:57:11 2010
ARCH shutting down
Wed Nov 03 05:57:11 2010
ARCH shutting down
ARC0: Archival stopped
ARC1: Archival stopped
Wed Nov 03 05:57:11 2010
ARCH shutting down
ARC2: Archival stopped
Wed Nov 03 05:57:11 2010
ARCH shutting down
ARC3: Archival stopped
Thread 1 closed at log sequence 437
Successful close of redo thread 1
Completed: ALTER DATABASE CLOSE NORMAL
ALTER DATABASE DISMOUNT
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Wed Nov 03 05:57:16 2010
Stopping background process VKTM:
Archiving is disabled
Archive process shutdown avoided: 0 active
Wed Nov 03 05:57:18 2010
Instance shutdown complete
Wed Nov 03 05:59:32 2010
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =18
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 11.1.0.7.0.
Using parameter settings in server-side spfile C:\APP\ORACLE\PRODUCT\11.1.0\DB_1\DATABASE\SPFILEOSBTEST.ORA
System parameters with non-default values:
processes = 150
resource_limit = TRUE
nls_territory = "PHILIPPINES"
memory_target = 820M
control_files = "C:\ORACLE\ORADATA\OSBTEST\CONTROL01.CTL"
control_files = "C:\ORACLE\ORADATA\OSBTEST\CONTROL02.CTL"
control_files = "C:\ORACLE\ORADATA\OSBTEST\CONTROL03.CTL"
db_block_size = 8192
compatible = "11.1.0.0.0"
log_archive_format = "ARC%S_%R.%T"
db_recovery_file_dest = "\oracle\flash_recovery_area"
db_recovery_file_dest_size= 40000M
undo_tablespace = "UNDOTBS1"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = "epassport.ph"
dispatchers = "(PROTOCOL=TCP) (SERVICE=osbtestXDB)"
audit_file_dest = "C:\APP\ORACLE\ADMIN\OSBTEST\ADUMP"
audit_trail = "NONE"
db_name = "osbtest"
open_cursors = 300
diagnostic_dest = "C:\APP\ORACLE"
Wed Nov 03 05:59:32 2010
PMON started with pid=2, OS id=78020
Wed Nov 03 05:59:32 2010
VKTM started with pid=3, OS id=77528 at elevated priority
Wed Nov 03 05:59:32 2010
DIAG started with pid=4, OS id=79056
VKTM running at (20)ms precision
Wed Nov 03 05:59:32 2010
DBRM started with pid=5, OS id=78224
Wed Nov 03 05:59:32 2010
PSP0 started with pid=6, OS id=79316
Wed Nov 03 05:59:32 2010
DIA0 started with pid=7, OS id=76608
Wed Nov 03 05:59:32 2010
MMAN started with pid=8, OS id=78704
Wed Nov 03 05:59:33 2010
DBW0 started with pid=9, OS id=79276
Wed Nov 03 05:59:33 2010
LGWR started with pid=10, OS id=78604
Wed Nov 03 05:59:33 2010
CKPT started with pid=11, OS id=79412
Wed Nov 03 05:59:33 2010
SMON started with pid=12, OS id=77836
Wed Nov 03 05:59:33 2010
RECO started with pid=13, OS id=78544
Wed Nov 03 05:59:33 2010
MMON started with pid=14, OS id=77560
Wed Nov 03 05:59:33 2010
MMNL started with pid=15, OS id=79556
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = C:\app\oracle
Wed Nov 03 05:59:33 2010
ALTER DATABASE MOUNT
Setting recovery target incarnation to 2
Successful mount of redo thread 1, with mount id 3888089029
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: ALTER DATABASE MOUNT
Wed Nov 03 05:59:37 2010
ALTER DATABASE OPEN
Errors in file c:\app\oracle\diag\rdbms\osbtest\osbtest\trace\osbtest_dbw0_79276.trc:
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
ORA-1157 signalled during: ALTER DATABASE OPEN...
Wed Nov 03 05:59:39 2010
Checker run found 1 new persistent data failures
Wed Nov 03 06:01:14 2010
ALTER DATABASE RECOVER datafile 4
Media Recovery Start
Fast Parallel Media Recovery NOT enabled
Wed Nov 03 06:01:14 2010
Errors in file c:\app\oracle\diag\rdbms\osbtest\osbtest\trace\osbtest_dbw0_79276.trc:
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
Media Recovery failed with error 1110
ORA-283 signalled during: ALTER DATABASE RECOVER datafile 4 ...
Wed Nov 03 06:03:34 2010
Full restore complete of datafile 4 C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF. Elapsed time: 0:00:01
checkpoint is 12264222
Wed Nov 03 06:05:06 2010
Incremental restore complete of datafile 4 C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
checkpoint is 12268354
Wed Nov 03 06:05:21 2010
alter database recover datafile list clear
Completed: alter database recover datafile list clear
alter database recover if needed
datafile 4
Media Recovery Start
Fast Parallel Media Recovery NOT enabled
parallel recovery started with 7 processes
Recovery of Online Redo Log: Thread 1 Group 3 Seq 435 Reading mem 0
Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO03.LOG
Recovery of Online Redo Log: Thread 1 Group 1 Seq 436 Reading mem 0
Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO01.LOG
Recovery of Online Redo Log: Thread 1 Group 2 Seq 437 Reading mem 0
Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO02.LOG
Completed: alter database recover if needed
datafile 4
Wed Nov 03 06:05:43 2010
alter database open
Wed Nov 03 06:05:43 2010
LGWR: STARTING ARCH PROCESSES
Wed Nov 03 06:05:43 2010
ARC0 started with pid=30, OS id=78636
Wed Nov 03 06:05:43 2010
ARC1 started with pid=31, OS id=78152
Wed Nov 03 06:05:43 2010
ARC2 started with pid=32, OS id=79756
ARC0: Archival started
Wed Nov 03 06:05:43 2010
ARC3 started with pid=33, OS id=78272
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 opened at log sequence 437
ARC2: Becoming the 'no FAL' ARCH
Current log# 2 seq# 437 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO02.LOG
ARC2: Becoming the 'no SRL' ARCH
Successful open of redo thread 1
ARC3: Becoming the heartbeat ARCH
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Wed Nov 03 06:05:44 2010
SMON: enabling cache recovery
Successfully onlined Undo Tablespace 2.
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is AL32UTF8
Opening with internal Resource Manager plan
Starting background process FBDA
Wed Nov 03 06:05:45 2010
FBDA started with pid=34, OS id=79520
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Wed Nov 03 06:05:46 2010
QMNC started with pid=35, OS id=78484
Wed Nov 03 06:05:50 2010
db_recovery_file_dest_size of 40000 MB is 44.23% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Nov 03 06:06:00 2010
Completed: alter database open
Wed Nov 03 06:08:02 2010
Thread 1 advanced to log sequence 438 (LGWR switch)
Current log# 3 seq# 438 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO03.LOG
Wed Nov 03 06:08:15 2010
Thread 1 advanced to log sequence 439 (LGWR switch)
Current log# 1 seq# 439 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO01.LOG
Thread 1 cannot allocate new log, sequence 440
Checkpoint not complete
Current log# 1 seq# 439 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO01.LOG
Thread 1 advanced to log sequence 440 (LGWR switch)
Current log# 2 seq# 440 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO02.LOG
Wed Nov 03 06:09:39 2010
Starting background process CJQ0
Wed Nov 03 06:09:39 2010
CJQ0 started with pid=22, OS id=77120
Wed Nov 03 06:10:48 2010
Starting background process SMCO
Wed Nov 03 06:10:48 2010
SMCO started with pid=23, OS id=79416
Wed Nov 03 06:13:38 2010
Shutting down instance (abort)
License high water mark = 12
USER (ospid: 78892): terminating the instance
Instance terminated by USER, pid = 78892
Wed Nov 03 06:13:41 2010
Instance shutdown complete
Wed Nov 03 06:14:00 2010
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =18
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 11.1.0.7.0.
Using parameter settings in server-side spfile C:\APP\ORACLE\PRODUCT\11.1.0\DB_1\DATABASE\SPFILEOSBTEST.ORA
System parameters with non-default values:
processes = 150
resource_limit = TRUE
nls_territory = "PHILIPPINES"
memory_target = 820M
control_files = "C:\ORACLE\ORADATA\OSBTEST\CONTROL01.CTL"
control_files = "C:\ORACLE\ORADATA\OSBTEST\CONTROL02.CTL"
control_files = "C:\ORACLE\ORADATA\OSBTEST\CONTROL03.CTL"
db_block_size = 8192
compatible = "11.1.0.0.0"
log_archive_format = "ARC%S_%R.%T"
db_recovery_file_dest = "\oracle\flash_recovery_area"
db_recovery_file_dest_size= 40000M
undo_tablespace = "UNDOTBS1"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = "epassport.ph"
dispatchers = "(PROTOCOL=TCP) (SERVICE=osbtestXDB)"
audit_file_dest = "C:\APP\ORACLE\ADMIN\OSBTEST\ADUMP"
audit_trail = "NONE"
db_name = "osbtest"
open_cursors = 300
diagnostic_dest = "C:\APP\ORACLE"
Wed Nov 03 06:14:00 2010
PMON started with pid=2, OS id=79700
Wed Nov 03 06:14:00 2010
VKTM started with pid=3, OS id=80292 at elevated priority
Wed Nov 03 06:14:00 2010
DIAG started with pid=4, OS id=77928
Wed Nov 03 06:14:00 2010
DBRM started with pid=5, OS id=79248
VKTM running at (20)ms precision
Wed Nov 03 06:14:00 2010
PSP0 started with pid=6, OS id=78088
Wed Nov 03 06:14:00 2010
DIA0 started with pid=7, OS id=79172
Wed Nov 03 06:14:00 2010
MMAN started with pid=8, OS id=80988
Wed Nov 03 06:14:00 2010
DBW0 started with pid=9, OS id=74844
Wed Nov 03 06:14:01 2010
LGWR started with pid=10, OS id=67128
Wed Nov 03 06:14:01 2010
CKPT started with pid=11, OS id=80376
Wed Nov 03 06:14:01 2010
SMON started with pid=12, OS id=78936
Wed Nov 03 06:14:01 2010
RECO started with pid=13, OS id=76408
Wed Nov 03 06:14:01 2010
MMON started with pid=14, OS id=80164
Wed Nov 03 06:14:01 2010
MMNL started with pid=15, OS id=79160
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = C:\app\oracle
Wed Nov 03 06:14:01 2010
ALTER DATABASE MOUNT
Setting recovery target incarnation to 2
Successful mount of redo thread 1, with mount id 3888103977
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: ALTER DATABASE MOUNT
Wed Nov 03 06:14:05 2010
ALTER DATABASE OPEN
Beginning crash recovery of 1 threads
parallel recovery started with 7 processes
Started redo scan
Completed redo scan
477 redo blocks read, 144 data blocks need recovery
Started redo application at
Thread 1: logseq 440, block 3
Recovery of Online Redo Log: Thread 1 Group 2 Seq 440 Reading mem 0
Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO02.LOG
Completed redo application of 0.20MB
Completed crash recovery at
Thread 1: logseq 440, block 480, scn 12311039
144 data blocks read, 144 data blocks written, 477 redo blocks read
LGWR: STARTING ARCH PROCESSES
Wed Nov 03 06:14:08 2010
ARC0 started with pid=26, OS id=80964
Wed Nov 03 06:14:08 2010
ARC1 started with pid=27, OS id=80512
Wed Nov 03 06:14:08 2010
ARC2 started with pid=28, OS id=80416
ARC0: Archival started
Wed Nov 03 06:14:08 2010
ARC3 started with pid=29, OS id=81440
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 advanced to log sequence 441 (thread open)
Thread 1 opened at log sequence 441
ARC1: Becoming the 'no FAL' ARCH
Current log# 3 seq# 441 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO03.LOG
ARC1: Becoming the 'no SRL' ARCH
Successful open of redo thread 1
ARC0: Becoming the heartbeat ARCH
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
SMON: enabling cache recovery
db_recovery_file_dest_size of 40000 MB is 44.23% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Successfully onlined Undo Tablespace 2.
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is AL32UTF8
Opening with internal Resource Manager plan
Starting background process FBDA
Wed Nov 03 06:14:11 2010
FBDA started with pid=30, OS id=79456
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Wed Nov 03 06:14:12 2010
QMNC started with pid=31, OS id=81280
Wed Nov 03 06:14:25 2010
Completed: ALTER DATABASE OPEN
Stopping background process FBDA
Shutting down instance: further logons disabled
Stopping background process QMNC
Stopping background process MMNL
Wed Nov 03 06:14:36 2010
Stopping background process MMON
Shutting down instance (immediate)
License high water mark = 8
Waiting for dispatcher 'D000' to shutdown
All dispatchers and shared servers shutdown
ALTER DATABASE CLOSE NORMAL
Wed Nov 03 06:14:39 2010
SMON: disabling tx recovery
SMON: disabling cache recovery
Wed Nov 03 06:14:39 2010
Shutting down archive processes
Archiving is disabled
Wed Nov 03 06:14:39 2010
ARCH shutting down
ARC3: Archival stopped
Wed Nov 03 06:14:39 2010
ARCH shutting down
ARC0: Archival stopped
Wed Nov 03 06:14:39 2010
ARCH shutting down
ARC1: Archival stopped
Wed Nov 03 06:14:39 2010
ARCH shutting down
ARC2: Archival stopped
Thread 1 closed at log sequence 441
Successful close of redo thread 1
Completed: ALTER DATABASE CLOSE NORMAL
ALTER DATABASE DISMOUNT
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Wed Nov 03 06:14:45 2010
Stopping background process VKTM:
Archive process shutdown avoided: 0 active
Wed Nov 03 06:14:47 2010
Instance shutdown complete
Wed Nov 03 06:14:54 2010
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =18
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 11.1.0.7.0.
Using parameter settings in server-side spfile C:\APP\ORACLE\PRODUCT\11.1.0\DB_1\DATABASE\SPFILEOSBTEST.ORA
System parameters with non-default values:
processes = 150
resource_limit = TRUE
nls_territory = "PHILIPPINES"
memory_target = 820M
control_files = "C:\ORACLE\ORADATA\OSBTEST\CONTROL01.CTL"
control_files = "C:\ORACLE\ORADATA\OSBTEST\CONTROL02.CTL"
control_files = "C:\ORACLE\ORADATA\OSBTEST\CONTROL03.CTL"
db_block_size = 8192
compatible = "11.1.0.0.0"
log_archive_format = "ARC%S_%R.%T"
db_recovery_file_dest = "\oracle\flash_recovery_area"
db_recovery_file_dest_size= 40000M
undo_tablespace = "UNDOTBS1"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = "epassport.ph"
dispatchers = "(PROTOCOL=TCP) (SERVICE=osbtestXDB)"
audit_file_dest = "C:\APP\ORACLE\ADMIN\OSBTEST\ADUMP"
audit_trail = "NONE"
db_name = "osbtest"
open_cursors = 300
diagnostic_dest = "C:\APP\ORACLE"
Wed Nov 03 06:14:55 2010
PMON started with pid=2, OS id=80900
Wed Nov 03 06:14:55 2010
VKTM started with pid=3, OS id=81160 at elevated priority
Wed Nov 03 06:14:55 2010
DIAG started with pid=4, OS id=80624
VKTM running at (20)ms precision
Wed Nov 03 06:14:55 2010
DBRM started with pid=5, OS id=81604
Wed Nov 03 06:14:55 2010
PSP0 started with pid=6, OS id=80676
Wed Nov 03 06:14:55 2010
DIA0 started with pid=7, OS id=81684
Wed Nov 03 06:14:55 2010
MMAN started with pid=8, OS id=80892
Wed Nov 03 06:14:55 2010
DBW0 started with pid=9, OS id=80360
Wed Nov 03 06:14:55 2010
LGWR started with pid=10, OS id=81376
Wed Nov 03 06:14:55 2010
CKPT started with pid=11, OS id=80732
Wed Nov 03 06:14:55 2010
SMON started with pid=12, OS id=80852
Wed Nov 03 06:14:55 2010
RECO started with pid=13, OS id=80636
Wed Nov 03 06:14:55 2010
MMON started with pid=14, OS id=81796
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = C:\app\oracle
Wed Nov 03 06:14:55 2010
ALTER DATABASE MOUNT
Wed Nov 03 06:14:55 2010
MMNL started with pid=15, OS id=80088
Setting recovery target incarnation to 2
Successful mount of redo thread 1, with mount id 3888118879
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: ALTER DATABASE MOUNT
Wed Nov 03 06:15:00 2010
ALTER DATABASE OPEN
Errors in file c:\app\oracle\diag\rdbms\osbtest\osbtest\trace\osbtest_dbw0_80360.trc:
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
ORA-1157 signalled during: ALTER DATABASE OPEN...
Wed Nov 03 06:17:27 2010
Full restore complete of datafile 4 C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF. Elapsed time: 0:00:01
checkpoint is 12264222
Wed Nov 03 06:18:14 2010
alter database open
ORA-1113 signalled during: alter database open...
Wed Nov 03 06:18:15 2010
Checker run found 1 new persistent data failures
Wed Nov 03 06:19:14 2010
Incremental restore complete of datafile 4 C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
checkpoint is 12268354
Wed Nov 03 06:19:28 2010
alter database recover datafile list clear
Completed: alter database recover datafile list clear
alter database recover if needed
datafile 4
Media Recovery Start
Fast Parallel Media Recovery NOT enabled
parallel recovery started with 7 processes
ORA-279 signalled during: alter database recover if needed
datafile 4
...
Wed Nov 03 06:20:20 2010
db_recovery_file_dest_size of 40000 MB is 44.23% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Nov 03 06:20:35 2010
alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2KCN7F_.ARC'
Media Recovery Log C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2KCN7F_.ARC
ORA-279 signalled during: alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2KCN7F_.ARC'...
Wed Nov 03 06:21:40 2010
alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2KFOXG_.ARC'
Media Recovery Log C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2KFOXG_.ARC
ORA-279 signalled during: alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2KFOXG_.ARC'...
alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2KFOS1_.ARC'
Media Recovery Log C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2KFOS1_.ARC
ORA-279 signalled during: alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2KFOS1_.ARC'...
alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2KFPJ8_.ARC'
Media Recovery Log C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2KFPJ8_.ARC
Recovery of Online Redo Log: Thread 1 Group 1 Seq 439 Reading mem 0
Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO01.LOG
Recovery of Online Redo Log: Thread 1 Group 2 Seq 440 Reading mem 0
Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO02.LOG
Recovery of Online Redo Log: Thread 1 Group 3 Seq 441 Reading mem 0
Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO03.LOG
Completed: alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2KFPJ8_.ARC'
Wed Nov 03 06:24:26 2010
alter database open
Wed Nov 03 06:24:27 2010
LGWR: STARTING ARCH PROCESSES
Wed Nov 03 06:24:27 2010
ARC0 started with pid=30, OS id=82388
Wed Nov 03 06:24:27 2010
ARC1 started with pid=31, OS id=79392
Wed Nov 03 06:24:27 2010
ARC2 started with pid=32, OS id=83504
ARC0: Archival started
Wed Nov 03 06:24:27 2010
ARC3 started with pid=33, OS id=80064
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 opened at log sequence 441
ARC0: Becoming the 'no FAL' ARCH
Current log# 3 seq# 441 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO03.LOG
ARC0: Becoming the 'no SRL' ARCH
Successful open of redo thread 1
ARC3: Becoming the heartbeat ARCH
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Wed Nov 03 06:24:27 2010
SMON: enabling cache recovery
Successfully onlined Undo Tablespace 2.
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is AL32UTF8
Opening with internal Resource Manager plan
Starting background process FBDA
Wed Nov 03 06:24:28 2010
FBDA started with pid=34, OS id=83160
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Wed Nov 03 06:24:29 2010
QMNC started with pid=35, OS id=83316
Wed Nov 03 06:24:42 2010
Completed: alter database open
Wed Nov 03 06:25:01 2010
Starting background process CJQ0
Wed Nov 03 06:25:01 2010
CJQ0 started with pid=37, OS id=82416
Wed Nov 03 06:29:31 2010
Starting background process SMCO
Wed Nov 03 06:29:31 2010
SMCO started with pid=20, OS id=83932
}}}
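To summarize the test above: because the backups were duplexed to disk and SBT_TAPE, RMAN failed over to the tape copy automatically when the disk backup pieces were gone (ORA-19505), and the lost USERS01.DBF came back with a plain restore/recover. A minimal sketch of the commands exercised; note the reconnect after opening the database, since the first "backup recovery files" attempt failed with RMAN-20021 (database not set):
{{{
C:\>rman target /

# missing disk backup piece (ORA-19505) -> automatic failover to the SBT_TAPE duplicate
RMAN> restore datafile 4;
RMAN> recover datafile 4;    # incremental restore from tape + archived log apply

# reconnect once the database is mounted/open before backing up the FRA again,
# otherwise: RMAN-20021: database not set
RMAN> backup device type sbt_tape recovery files;
}}}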
{{{
[oracle@dbrocaix01 ~]$ obtool
ob>
ob>
ob> lsdev
ob>
ob> obtool -u admin chhost -r client,admin,mediaserver "dbrocaix01.bayantel.com"
Error: unknown command, obtool
ob> chhost -r client,admin,mediaserver "dbrocaix01.bayantel.com"
Error: can't fetch host dbrocaix01.bayantel.com - name not found
ob>
ob>
ob> chhost -r client,admin,mediaserver "dbrocaix01"
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t library -o -S 36 -I 4 -a dbrocaix01:/flash_reco/vlib -v vlib > NULL
Password:
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t tape -o -a dbrocaix01:/flash_reco/vdte1 -v -l vlib -d 1 vdte1 > NULL
Password:
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t tape -o -a dbrocaix01:/flash_reco/vdte2 -v -l vlib -d 2 vdte2 > NULL
Password:
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t tape -o -a dbrocaix01:/flash_reco/vdte3 -v -l vlib -d 3 vdte3 > NULL
Password:
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t tape -o -a dbrocaix01:/flash_reco/vdte4 -v -l vlib -d 4 vdte4 > NULL
Password:
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t library -o -I 4 -a dbrocaix01:/flash_reco/vlib2 -v vlib2 > NULL
Password:
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t tape -o -a dbrocaix01:/flash_reco/vdrive1 -v -l vlib2 -d 1 vdrive1 > NULL
Password:
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t tape -o -a dbrocaix01:/flash_reco/vdrive2 -v -l vlib2 -d 2 vdrive2 > NULL
Password:
[oracle@dbrocaix01 ~]$ obtool
ob> lsdev
library vlib in service
drive 1 vdte1 in service
drive 2 vdte2 in service
drive 3 vdte3 in service
drive 4 vdte4 in service
library vlib2 in service
drive 1 vdrive1 in service
drive 2 vdrive2 in service
ob> lshost
dbrocaix01 admin,mediaserver,client (via OB) in service
ob>
ob>
ob> lsdev
library vlib in service
drive 1 vdte1 in service
drive 2 vdte2 in service
drive 3 vdte3 in service
drive 4 vdte4 in service
library vlib2 in service
drive 1 vdrive1 in service
drive 2 vdrive2 in service
ob>
ob>
ob> insertvol -L vlib -c 250 unlabeled 1-32
ob> insertvol -L vlib2 -c 250 unlabeled 1-14
ob>
ob>
ob> lsmf --long
OFFSITE_7Y:
Keep volume set: 7 years
Appendable: yes
Volume ID used: unique to this media family
Comment: Store for 7 years offsite - for compliance with XYZ law
UUID: 00cee284-7185-102d-9cae-000c293b8104
OFFSITE_TEST:
Keep volume set: 10 minutes
Appendable: yes
Volume ID used: unique to this media family
Comment: Edit the test values later
UUID: 319d2c68-7185-102d-9cae-000c293b8104
OSB-CATALOG-MF:
Write window: 7 days
Keep volume set: 14 days
Appendable: yes
Volume ID used: unique to this media family
Comment: OSB catalog backup media family
UUID: 2bab93d0-717b-102d-b17d-000c293b8104
RMAN-DEFAULT:
Keep volume set: content manages reuse
Appendable: yes
Volume ID used: unique to this media family
Comment: Default RMAN backup media family
UUID: 2a824562-717b-102d-b17d-000c293b8104
}}}
Thread: Drive or volume on which mount attempted is unusable
http://forums.oracle.com/forums/thread.jspa?threadID=475197
Thread: Oracle Secure Backup
http://forums.oracle.com/forums/thread.jspa?threadID=672792&start=0&tstart=0
''Error: waiting for snapshot controlfile enqueue''
http://www.dbasupport.com/forums/archive/index.php/t-12492.html <-- this solved it
http://surachartopun.com/2008/03/rman-waiting-for-snapshot-control-file.html
http://www.symantec.com/business/support/index?page=content&id=TECH18161
http://www.freelists.org/post/oracle-l/ORA00230-during-RMAN-backup,4
{{{
SELECT s.SID, USERNAME AS "User", PROGRAM, MODULE, ACTION, LOGON_TIME "Logon", l.*
  FROM V$SESSION s, V$ENQUEUE_LOCK l
 WHERE l.SID = s.SID
   AND l.TYPE = 'CF'
   AND l.ID1 = 0
   AND l.ID2 = 2;
}}}
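Per the dbasupport thread above, the fix is to kill the session holding the CF enqueue that the query returns (pull SERIAL# from V$SESSION for that SID). A hedged sketch with placeholder values:
{{{
-- sid and serial# are placeholders taken from the query above joined to V$SESSION
ALTER SYSTEM KILL SESSION '<sid>,<serial#>' IMMEDIATE;
}}}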
''Thread: Unable to open qlm connection - drive database is corrupted''
http://forums.oracle.com/forums/thread.jspa?messageID=1515577
http://forums.oracle.com/forums/thread.jspa?messageID=4296914
http://forums.oracle.com/forums/thread.jspa?messageID=2436266
http://forums.oracle.com/forums/thread.jspa?threadID=587033&tstart=210
''ORA-600 krbb3crw_inv_blk when compressed backupset is on''
- workaround is to turn off backupset compression
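A hedged sketch of that workaround, switching the default disk backup type back to an uncompressed backupset:
{{{
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET;   # instead of COMPRESSED BACKUPSET
}}}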
''References''
http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle_Secure_Backup/OSB_10.shtml
http://download.oracle.com/docs/cd/E10317_01/doc/backup.102/e05410/obtool_commands.htm#insertedID41
Backup Recovery Area
http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmbckad.htm#i1006854
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkscenar002.htm
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/rpfbdb003.htm#BABCAGIB
http://www.orafaq.com/forum/t/75250/2/
{{{
some obtool commands:
lsdev -lvg <-- shows detailed info of the devices
catxcr -fl0 oracle/5.1 <-- shows the error messages
lsvol --library libraryname <-- shows storage element address
insertvol -L libraryname <storage element range> <-- inserts volume
inventory libraryname <-- inventory the library
}}}
https://martincarstenbach.wordpress.com/2018/10/18/little-things-worth-knowing-oswatcher-analyser-dashboard/
https://blog.dbi-services.com/oswatcher-blackbox-analyzer/
When your query takes too long ...
http://forums.oracle.com/forums/thread.jspa?threadID=501834
HOW TO: Post a SQL statement tuning request - template posting
http://forums.oracle.com/forums/thread.jspa?threadID=863295
Basic SQL statement performance diagnosis - HOW TO, step by step instructions
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
http://oracleprof.blogspot.com/2010/11/unsafe-deinstall-using-oracle-univeral.html
{{{
-- procedure returns the ENAME for a given EMPNO through an OUT parameter
CREATE OR REPLACE PROCEDURE emp_name (id IN NUMBER, emp_name OUT VARCHAR2)
IS
BEGIN
   SELECT ename
     INTO emp_name
     FROM emp_tbl
    WHERE empno = id;
END;
/

-- drive the procedure from a cursor FOR loop and print each row
set serveroutput on
DECLARE
   empName VARCHAR2(20);
   CURSOR id_cur IS SELECT empno FROM emp_ids;
BEGIN
   FOR emp_rec IN id_cur
   LOOP
      emp_name(emp_rec.empno, empName);
      dbms_output.put_line('The employee ' || empName || ' has id ' || emp_rec.empno);
   END LOOP;
END;
/
}}}
https://www.ovh.com/world/dedicated-servers/all_servers.xml
Looking "Under the Hood" at Networking in Oracle VM Server for x86 http://www.oracle.com/technetwork/articles/servers-storage-admin/networking-ovm-x86-1873548.html
<<showtoc>>
! Reading Execution Plans
https://docs.oracle.com/database/121/TGSQL/tgsql_interp.htm#TGSQL94618
https://blogs.oracle.com/sql/query-tuning-101:-comparing-execution-plans-and-access-vs-filter-predicates
http://blog.tanelpoder.com/files/Oracle_SQL_Plan_Execution.pdf
14 part series https://jonathanlewis.wordpress.com/explain-plan/
! “Access Predicates”
* Predicates used to locate rows in an access structure, e.g. the start/stop predicates of an index range scan or the keys used to probe a hash join
! “Filter Predicates”
* Predicates used to filter rows before producing them.
* Any condition that would throw away/filter rows
! column projection
<<<
“column projection” is what you extract from the rowsource you are reading.
In set theory terms you project (take a subset of columns) and filter (take a subset of rows).
So for example with “select object_id from table” you are projecting just the “object_id” column.
<<<
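A quick way to see all three sections on a real plan; a hedged sketch where table T, its index on OBJECT_ID, and the column names are made up:
{{{
EXPLAIN PLAN FOR
  SELECT object_id                 -- projection: only OBJECT_ID is produced upward
    FROM t
   WHERE object_id < 100           -- access predicate: start/stop keys of the index range scan
     AND object_name LIKE 'X%';    -- filter predicate: applied to rows after they are fetched

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, NULL, 'BASIC +PREDICATE +PROJECTION'));
}}}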
! QB name
* name of the query block, either system-generated or defined by the user with the QB_NAME hint
! gather_plan_statistics columns
<<<
https://www.red-gate.com/simple-talk/sql/oracle/execution-plans-part-11-actuals/
here’s a reference to the rest of the columns relating to execution statistics:
Starts: The number of times this operation actually occurred
E-rows: Estimated rows (per execution of the operation) – i.e. the “Rows” column from a call to display()
A-rows: The accumulated number of rows forwarded by this operation
A-time: The accumulated time spent in this operation – including time spent in its descendants.
Buffers: Accumulated buffer visits made by this operation – including its descendants.
Reads: Accumulated number of blocks read from disc by this operation – including its descendants.
Writes: Accumulated number of blocks written to disc by this operation – including its descendants.
<<<
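A minimal way to see these columns (a sketch; any SQL will do): execute the statement with the gather_plan_statistics hint, then ask DBMS_XPLAN for the actuals of the last execution.
{{{
select /*+ gather_plan_statistics */ count(*) from dba_objects;
-- ALLSTATS LAST adds Starts, E-Rows, A-Rows, A-Time, Buffers for the last execution
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
}}}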
! starts
<<<
so when we are trying to assess the accuracy of the optimizer’s predictions we generally need to compare A-Rows with E-Rows * Starts, and (as we shall see in the next article) we still have to be careful about deciding when that comparison is useful and when it is meaningless.
<<<
.
Example1 - Online Redefinition - partition example - manually create indexes (no constraints)
http://www.evernote.com/shard/s48/sh/c2ffc788-7d1e-44df-8bd0-c04b62401eb6/48b56fe63e1c28d2e8ee2276c2c0955d
Example 2 - Online Redefinition - Employees Table - all automatic
http://www.evernote.com/shard/s48/sh/8d9633bb-178a-484c-b83f-2fe526d680e7/596829d7f1bd43f60bb3917897df5dcd
Example 3 - Online Redefinition - Employees Table - manually create constraints and indexes
http://www.evernote.com/shard/s48/sh/d80aeaef-03d3-47b8-a6f2-6941c25a75b3/2939053a281a59f6701fc947128293a4
http://asktom.oracle.com/pls/asktom/f?p=100:11:1930891738933501::::P11_QUESTION_ID:7490088329317
Best Practices for Online Table Redefinition [ID 1080969.1]
LOB redefinition http://blog.trivadis.com/b/mathiaszarick/archive/2012/03/05/lob-compression-with-oracle-strange-multiple-physical-reads.aspx
Metadata scripts are here [[dbms_metadata]]
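The basic DBMS_REDEFINITION flow behind the examples above looks roughly like this (a sketch only; SCOTT.EMP and the interim table EMP_INT are made-up names, and the interim table must be pre-created with the target structure):
{{{
declare
  l_errors pls_integer;
begin
  -- 1) verify the table can be redefined online using its primary key
  dbms_redefinition.can_redef_table('SCOTT', 'EMP', dbms_redefinition.cons_use_pk);
  -- 2) start: links EMP to the pre-created interim table EMP_INT
  dbms_redefinition.start_redef_table('SCOTT', 'EMP', 'EMP_INT');
  -- 3) clone indexes, constraints, triggers, and privileges onto the interim table
  dbms_redefinition.copy_table_dependents('SCOTT', 'EMP', 'EMP_INT',
      copy_indexes => dbms_redefinition.cons_orig_params, num_errors => l_errors);
  -- 4) resync changes made to EMP while the copy was running
  dbms_redefinition.sync_interim_table('SCOTT', 'EMP', 'EMP_INT');
  -- 5) swap the tables under a brief exclusive lock
  dbms_redefinition.finish_redef_table('SCOTT', 'EMP', 'EMP_INT');
end;
/
}}}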
What Are The Possible Ways To Find Out An Oracle Database Patchset/Patch And Download It?
Doc ID: Note:423016.1
FAQs on OPatch Version : 11.1
Doc ID: Note:453495.1
How to download and install opatch (generic platform).
Doc ID: Note:274526.1
How to find whether the one-off Patches will conflict or not?
Doc ID: Note:458485.1
OPatch version 10.2 - FAQ
Doc ID: Note:334108.1
How To Do The Prerequisite/Conflicts Checks Using OUI(Oracle Universal Installer) And Opatch Before Applying/Rolling Back A Patch
Doc ID: Note:459360.1
Location Of Logs For Opatch And OUI
Doc ID: Note:403212.1
Critical Patch Update - Introduction to Database n-Apply CPUs
Doc ID: Note:438314.1
Critical Patch Update January 2008 – Database Patch Security Vulnerability Molecule Mapping
Doc ID: Note:466764.1
SUDO utility in 10gR2 Grid Control
Doc ID: Note:377934.1
Can Root.Sh Be Run Via SUDO?
Doc ID: Note:413855.1
IS THE ROOT.SH ABSOLUTELY NECESSARY? OR RUN 2ND TIME?
Doc ID: Note:1007934.6
How to setup Linux md devices for CRS and ASM
Doc ID: Note:343092.1
-- DISK FULL
MetaLink Note 550522.1 (Subject: How To Avoid Disk Full Issues Because OPatch Backups Take Big Amount Of Disk Space)
-- VERIFY
Good practices applying patches and patchsets
Doc ID: 176311.1
How To Verify The Integrity Of A Patch/Software Download?
Doc ID: 549617.1
What Is The Difference Between ftp'ing An Unzipped File And A Zipped File, From One Machine To Another?
Doc ID: 787775.1
-- DATABASE VAULT
Note 726568.1 How to Install Database Vault Patches on top of 11.1.0.6
How to Install Database Vault Patches on top of 10.2.0.4
Doc ID: 731466.1
How to Install Database Vault Patches on top of 9.2.0.8.1 and 10.2.0.3
Doc ID: 445092.1
https://fbcdn-dragon-a.akamaihd.net/hphotos-ak-ash3/851560_196423357203561_929747697_n.pdf
http://venturebeat.com/2013/09/16/facebook-explains-secrets-of-building-hugely-scalable-sites/
hip hop https://github.com/facebook/hiphop-php
http://apex.oracle.com/pls/apex/f?p=44785:24:0:::24:P24_CONTENT_ID,P24_PREV_PAGE:6613,1#prettyPhoto
http://openvpn.net/
http://openvpn.net/index.php/open-source/documentation/howto.html#install
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch35_:_Configuring_Linux_VPNs
http://www.throx.net/2008/04/13/openvpn-and-centos-5-installation-and-configuration-guide/
http://blog.wains.be/2006/10/08/simple-vpn-tunnel-using-openvpn/
http://blog.laimbock.com/2008/05/27/howto-add-firewall-rules-to-rhel-5-or-centos-5/
* IaaS (on-premise provisioning) - manage your physical infra through openstack api
https://blogs.oracle.com/oem/entry/enterprise_manager_ops_center_using
http://www.gokhanatil.com/2012/04/how-to-install-oracle-ops-center-12c.html
http://www.gokhanatil.com/2011/09/integrating-enterprise-manager-grid.html
<<showtoc>>
! Facebook engineering
flash cache by Domas Mituzas
interesting to see this technology carrying over to Exadata Flash Cache.. they call their write-back cache "write-behind caching", which is a similar concept if you read the article
https://www.facebook.com/notes/facebook-engineering/flashcache-at-facebook-from-2010-to-2013-and-beyond/10151725297413920
https://www.facebook.com/notes/facebook-engineering/linkbench-a-database-benchmark-for-the-social-graph/10151391496443920
Though in most cases tools like ‘iostat’ are useful to understand general system performance, for our needs we needed deeper inspection. We used the ‘blktrace’ facility in Linux to trace every request issued by database software and analyze how it was served by our flash- and disk-based devices. Doing so helped identify a number of areas for improvement, including three major ones: read-write distribution, cache eviction, and write efficiency.
Jay Parikh on VLDB13 keynote - "Data Infrastructure at Web Scale"
video here http://www.ustream.tv/recorded/37879841 @1:02:14 is the awesome Q&A (resource management, etc.)
https://www.facebook.com/notes/facebook-academics/facebook-makes-big-impact-on-big-data-at-vldb/594819857236092
If you're a database guy you'll love this 2-hour video. Facebook engineers discussed the following – performance focus, server provisioning, automatic server rebuilds, backup & recovery, online schema changes, sharding, HBase and Hadoop. The Q&A part at the end is also interesting: at 1:28:46 Mark Callaghan answers why they chose MySQL over commercial databases that already have the features their engineers are hacking. Good stuff!
http://www.livestream.com/fbtechtalks/video?clipId=pla_a3d62538-1238-4202-a3be-e257cd866bb9
corona resource manager vs yarn
https://www.facebook.com/notes/facebook-engineering/under-the-hood-scheduling-mapreduce-jobs-more-efficiently-with-corona/10151142560538920
real time analytics http://gigaom.com/cloud/how-facebook-is-powering-real-time-analytics/
flash memory field study http://users.ece.cmu.edu/~omutlu/pub/flash-memory-failures-in-the-field-at-facebook_sigmetrics15.pdf
! DBHangops
Good stuff: a periodic meetup of devops/DB guys covering everything about the MySQL database. Some of these guys come from high-transaction web environments, so it’s good to get their take on things, even from a MySQL point of view. They record their Google Hangouts so you can watch the previous meetups.
https://twitter.com/DBHangops
! Others
http://highscalability.com/blog/2012/9/19/the-4-building-blocks-of-architecting-systems-for-scale.html
http://lethain.com/introduction-to-architecting-systems-for-scale/#platform_layer
http://highscalability.com/blog/2012/11/15/gone-fishin-justintvs-live-video-broadcasting-architecture.html
http://highscalability.com/youtube-architecture
etsy performance http://codeascraft.etsy.com/category/performance/
scaling pinterest http://www.slideshare.net/eonarts/mysql-meetup-july2012scalingpinterest#btnNext, http://gigaom.com/cloud/pinterest-flipboard-and-yelp-tell-how-to-save-big-bucks-in-the-cloud/
http://gigaom.com/2013/03/28/3-shades-of-latency-how-netflix-built-a-data-architecture-around-timeliness/
http://techblog.netflix.com/2013/03/system-architectures-for.html
http://gigaom.com/2013/03/03/how-and-why-linkedin-is-becoming-an-engineering-powerhouse/
http://gigaom.com/2013/03/05/facebook-kisses-dram-goodbye-builds-memcached-for-flash/
The 10 Deadly Sins Against Scalability http://highscalability.com/blog/2013/6/10/the-10-deadly-sins-against-scalability.html
22 Recommendations For Building Effective High Traffic Web Software http://highscalability.com/blog/2013/12/16/22-recommendations-for-building-effective-high-traffic-web-s.html
Gathering Statistics for the Cost Based Optimizer
Doc ID: Note:114671.1
ORA-20000 when running DBMS_STATS.GATHER_DATABASE_STATS
Doc ID: Note:462496.1
Getting ORA-01031 when gathering database stats in 9i using SYSTEM user
Doc ID: Note:455221.1
Poor performance after gathering statistics
Doc ID: Note:278020.1
Poor Database Performance after running DBMS_STATS.GATHER_DATABASE_STATS
Doc ID: Note:223069.1
Monitoring Statistics in 10g
Doc ID: Note:295249.1
Bug 4706964 - DBMS_STATS.GATHER_DICTIONARY_STATS errors if schema name has special characters
Doc ID: Note:4706964.8
ERROR:" WARNING: --> Database contains stale optimizer statistics.Refer to the 10g Upgrade Guide for instructions to update"
Doc ID: Note:437371.1
Script to Check Schemas with Stale Statistics
Doc ID: Note:560336.1
http://www.globusz.com/ebooks/Oracle/00000015.htm
http://hungrydba.com/databasestats.aspx
http://www.fadalti.com/oracle/database/how_to_statistics.htm
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1154434873552
http://www.dbanotes.net/mirrors/www.psoug.org/reference/dbms_stats.html
http://www.pafumi.net/Gather_Statistics.html
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:27658118048105
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:60121137844769
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:735625536552
http://tonguc.wordpress.com/2007/10/09/oracle-best-practices-part-5/
http://www.maroc-it.ma/blogs/fahd/?p=42
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:247162600346210706
http://www.dba-oracle.com/t_worst_practices.htm
http://structureddata.org/
http://structureddata.org/category/oracle/optimizer/
http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
http://structureddata.org/2008/01/02/what-are-your-system-statistics/
http://structureddata.org/2007/12/05/oracle-optimizer-development-team-starts-a-blog/
http://optimizermagic.blogspot.com/2007/11/welcome-to-our-blog.html
-- GATHER STATISTICS FOR SYS
Gather Optimizer Statistics For Sys And System
Doc ID: Note:457926.1
Gathering Statistics For All fixed Objects In The Data Dictionary.
Doc ID: Note:272479.1
Is ANALYZE on the Data Dictionary Supported (TABLES OWNED BY SYS)?
Doc ID: 35272.1
-- MIGRATE TO CBO
Migrating to the Cost-Based Optimizer
Doc ID: Note:222627.1
Rule Based Optimizer is to be Desupported in Oracle10g
Doc ID: Note:189702.1
Cost Based Optimizer - Common Misconceptions and Issues
Doc ID: Note:35934.1
-- GATHER SYSTEM STATISTICS
System Statistics: Collect and Display System Statistics (CPU and IO) for CBO use
Doc ID: Note:149560.1
System Statistics: Scaling the System to Improve CBO optimizer
Doc ID: Note:153761.1
Using Actual System Statistics (Collected CPU and IO information)
Doc ID: 470316.1
-- GATHER STATISTICS
How to Move from ANALYZE to DBMS_STATS - Introduction
Doc ID: 237293.1
Gathering Schema or Database Statistics Automatically in 8i and 9i - Examples
Doc ID: 237901.1
Statistics Gathering: Frequency and Strategy Guidelines
Doc ID: 44961.1
What are the Default Parameters when Gathering Table Statistics on 9i and 10g?
Doc ID: 406475.1
http://awads.net/wp/2006/04/17/orana-powered-by-google-and-feedburner/
-- MONITOR STATISTICS
Monitoring Statistics in 10g
Doc ID: 295249.1
How to Automate Change Based Statistic Gathering - Monitoring Tables
Doc ID: 102334.1
-- GATHER STALE
Differences between GATHER STALE and GATHER AUTO
Doc ID: 228186.1
Best Practices to Minimize Downtime during Upgrade
Doc ID: 455744.1
-- HISTOGRAMS
Histograms: An Overview
Doc ID: 1031826.6
-- DUPLICATE ROWS
http://www.jlcomp.demon.co.uk/faq/duplicates.html
-- DISABLE AUTO STATS IN 10G
How to Disable Automatic Statistics Collection in 10G ?
Doc ID: 311836.1
-- CHAINED ROWS
How to Identify, Avoid and Eliminate Chained and Migrated Rows ?
Doc ID: 746778.1
Monitoring Chained Rows on IOTs
Doc ID: 102932.1
Row Chaining and Row Migration
Doc ID: 122020.1
Analyze Table List chained rows Into chained_rows Gives ORA-947
Doc ID: 265707.1
-- SET STATISTICS
http://decipherinfosys.wordpress.com/2007/07/31/dbms_statsset_table_stats/
http://www.oracle.com/technology/oramag/oracle/06-may/o36asktom.html
http://www.psoug.org/reference/tuning.html
http://www.freelists.org/post/oracle-l/CBO-Predicate-selectivity,10
http://www.orafaq.com/forum/?t=msg&th=71350/0/
http://www.oracle.com/technology/oramag/oracle/04-sep/o54asktom.html
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:735625536552
http://blogs.oracle.com/optimizer/entry/optimizer_technical_papers1
<<showtoc>>
! collection scripts
https://github.com/oracle/oracle-db-examples/blob/master/optimizer/compare_ofe/ofe.sql <- nigel
https://github.com/tanelpoder/tpt-oracle/blob/master/cofef.sql
https://github.com/tanelpoder/tpt-oracle/blob/master/cofep.sql
https://github.com/tanelpoder/tpt-oracle/blob/master/tools/optimizer/optimizer_features_matrix.sql
https://blog.tanelpoder.com/posts/scripts-for-drilling-down-into-unknown-optimizer-changes/
https://flowingdata.com/2018/04/17/visualizing-differences/
! 9i
!! bind peek
<<<
allow the optimizer to peek at the value of bind variables and then use a histogram to pick an appropriate plan, just like it would do with literals. The problem with the new feature was that it only looked at the variables once, when the statement was parsed
<<<
! 10g
!! Bind variable capture
is a feature introduced in 10g. It is just a periodic capture of the underlying values of bind variables
<<<
The bind values peeked can be seen in the OTHER_XML column in V$SQL_PLAN or the BIND_DATA column in V$SQL. The space used for peeked binds in V$SQL_PLAN is determined by the parameter “_xpl_peeked_binds_log_size”, which maxes out at 8192. In some circumstances (e.g. SQL with large in-lists of bind variables) that size may be exceeded and we might not see all the peeked bind values, but that doesn’t mean the values were not peeked.
Captured bind values are exposed in V$SQL_BIND_CAPTURE. The capture interval is determined by the parameter “_cursor_bind_capture_interval”, and the space used for captured binds by the parameter “_cursor_bind_capture_area_size”.
In some circumstances (e.g. SQL with large in-lists of bind variables) the size may be exceeded and we might not see all the captured bind values. In such cases setting “_cursor_bind_capture_area_size” to its max value of 3999 helps in looking at most of the captured values.
<<<
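A quick sketch for looking at the captured values (substitute your own sql_id):
{{{
select sql_id, child_number, name, position, datatype_string, was_captured, last_captured, value_string
from   v$sql_bind_capture
where  sql_id = '&sql_id'
order  by child_number, position;
}}}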
! 11g
!! dynamic sampling - no stats
<<<
tries to fix problems with execution plans as they occur, that is, when the objects involved have no statistics
<<<
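The sampling level can also be forced per statement with a hint; a minimal sketch (the table name is made up):
{{{
-- sample the unanalyzed table t at level 4 during optimization
select /*+ dynamic_sampling(t 4) */ count(*) from t;
}}}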
!! cardinality feedback - store cardinality to fix estimates issues, doesn't work with bind variables
<<<
we just wait for the result of each step in the execution plan, store it in the shared pool and reference it on subsequent executions, in the hope that the information will give us a good idea of how well we did the last time.
However, remember that we mentioned that the statement needs to execute at least once for the optimizer to store the actual rows so that it can compare them to the estimated number of rows. If dynamic sampling has already been used (because it was needed and it was not disabled), then cardinality feedback will not be used. Also because of the problems that can be introduced by bind variables (especially if you have skewed data), cardinality feedback will not be used for parts of the statement that involve bind variables.
<<<
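Whether a child cursor was re-optimized because of cardinality feedback can be checked in V$SQL_SHARED_CURSOR (11.2+; substitute your own sql_id):
{{{
select sql_id, child_number, use_feedback_stats
from   v$sql_shared_cursor
where  sql_id = '&sql_id';
}}}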
!! adaptive cursor sharing (ACS) - bind_aware
<<<
aimed at fixing performance issues due to bind variable peeking. The basic idea is to try to automatically recognize when a statement might benefit from multiple plans. If a statement is found that the optimizer thinks is a candidate, it is marked as bind aware. Subsequent executions will peek at the bind variables and new cursors with new plans may result
<<<
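To watch ACS in action, check the bind-sensitivity flags on the cursor (a sketch; substitute your own sql_id):
{{{
select sql_id, child_number, executions, is_bind_sensitive, is_bind_aware, is_shareable
from   v$sql
where  sql_id = '&sql_id';
}}}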
!! sql plan management
! 12cR1
!! adaptive query optimization
<<<
check [[12c Adaptive Optimization]], [[12c Adaptive Plans]], [[12cR2 Adaptive Features]]
https://docs.oracle.com/database/121/TGSQL/tgsql_optcncpt.htm#TGSQL221
https://blogs.oracle.com/optimizer/optimizer-adaptive-features-in-oracle-database-12c-release-2
<<<
Adaptive query optimization is a set of capabilities that enables the optimizer to make run-time adjustments to execution plans and discover additional information intended to lead to better query optimization, especially when existing statistics are insufficient to generate an optimal plan. Adaptive query optimization has two major components:
!!! adaptive execution plans (inflection point)
https://blogs.oracle.com/optimizer/optimizer-adaptive-features-in-oracle-database-12c-release-2
Adaptive Plans includes features addressing:
!!!! Join Methods
!!!! Parallel Distribution Methods
!!! Adaptive Statistics address:
!!!! Adaptive Dynamic Statistics
!!!! Automatic Re-optimization
!!!! SQL Plan Directives
!!!! Automatic Extended Statistics (Group Detection and expression stats)
!! Concurrent Statistics Gathering
! 12cR2
!! Optimizer Statistics Advisor
! 19c
!! sql plan management enhancements
<<<
https://mikedietrichde.com/2019/06/03/automatic-sql-plan-management-in-oracle-database-19c/
<<<
! References
!! https://apex.oracle.com/database-features/
<<<
go to "database overall" -> "optimizer"
<<<
!! Plan Stability - Apress Book
https://www.evernote.com/shard/s48/client/snv?noteGuid=013cd51e-e484-49ac-911b-e01bdd54ac06&noteKey=ce780dd4ca02d3d0b72b493acf8c33fd&sn=https%3A%2F%2Fwww.evernote.com%2Fshard%2Fs48%2Fsh%2F013cd51e-e484-49ac-911b-e01bdd54ac06%2Fce780dd4ca02d3d0b72b493acf8c33fd&title=Plan%2BStability%2B-%2BApress%2BBook
!! optimizer papers
https://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-with-oracledb-12c-1963236.pdf
https://sites.google.com/site/oraclemonitor/optimizer-mistakes
Choosing An Optimal Stats Gathering Strategy
http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/ <-- good stuff
Restoring the statistics – Oracle Database 10g
http://avdeo.com/2010/11/01/restoring-the-statistics-oracle-database-10g
''Poor Quality Statistics:''
{{{
1) Sample size
Inadequate sample sizes
Infrequently collected samples
No samples on some objects
Relying on auto sample collections and not checking what has been collected.
2) Histograms
Collecting histograms when not needed
Not collecting histograms when needed.
Collecting very small sample sizes on histograms.
3) Not using more advanced options like extended statistics to set up correlation between related columns.
4) Collecting statistics at the wrong time
}}}
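For item 3 above, a minimal extended-statistics sketch (CUSTOMERS and its columns are made-up names): create a column group so the optimizer knows the two columns are correlated, then re-gather stats.
{{{
select dbms_stats.create_extended_stats(user, 'CUSTOMERS', '(CUST_STATE, CUST_ZIP)') from dual;
exec dbms_stats.gather_table_stats(user, 'CUSTOMERS', method_opt => 'for all columns size auto')
}}}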
http://blogs.oracle.com/mt/mt-search.cgi?IncludeBlogs=3361&tag=optimizer%20transformations&limit=20
https://apex.oracle.com/odc_activity
.
..
...
....
http://www.oracle.com/technetwork/oem/app-test/etest-101273.html
http://radar.oreilly.com/2011/10/oracles-big-data-appliance.html
http://www.oracle.com/us/corporate/press/512001#sf2272790
http://www.oracle.com/us/technologies/big-data/index.html?origref=http://www.oracle.com/us/corporate/press/512001#sf2272790
''Roll your own Big Data Appliance'' http://www.pythian.com/news/30749/roll-your-own-big-data-appliance/
Instructions to Download/Install/Setup Oracle SQL Connector for Hadoop Distributed File System (HDFS) [ID 1519162.1]
NOTE:1492125.1 - Instructions to Download/Install/Setup CDH3 Client to access HDFS on BDA 1.1
NOTE:1506203.1 - Instructions to Download/Install/Setup CDH4 Client to access HDFS on Oracle Big Data Appliance X3-2
NOTE:1519287.1 - Oracle SQL Connector for Hadoop Distributed File System (HDFS) Sample to Publish Data into External Table
Oracle SQL Connector for Hadoop Distributed File System (HDFS) Sample to Create External Table from Hive Table [ID 1557525.1]
''jan 23 oracle cloud policy'' https://www.google.com/search?q=jan+23+oracle+cloud+policy&oq=jan+23+oracle+cloud+policy&aqs=chrome..69i57.5666j0j7&sourceid=chrome&ie=UTF-8
Licensing Oracle Software in the Cloud Computing Environment http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf
Oracle programs are eligible for Authorized Cloud Environments http://www.oracle.com/us/corporate/pricing/authorized-cloud-environments-3493562.pdf
https://oracle-base.com/blog/2017/01/28/oracles-cloud-licensing-change-be-warned/
http://houseofbrick.com/oracle-gives-itself-a-100-raise-in-authorized-cloud-compute-environments/
http://www.redwoodcompliance.com/dealing-oracles-cloud-licensing-policy/
https://www.pythian.com/blog/oracle-new-public-cloud-licensing-policy-good-or-bad/
Oracle Authorized Cloud Environments Overview of Policy Changes http://www.version1.com/getattachment/3cd6ec0f-2e95-42be-b082-25b32afc71da/Version-1-Oracle-Licensing-in-the-Cloud-Policy
http://madora.co.uk/oracle-changes-the-licensing-rules-for-cloud-again/
https://www.linkedin.com/pulse/dealing-oracles-cloud-licensing-policy-mohammad-inamullah
https://awsinsider.net/articles/2017/01/31/oracle-licensing-cost-for-aws.aspx
http://houseofbrick.com/resources/white-papers/
Stéphane Faroult - Oracle DBA tutorial
http://www.youtube.com/watch?v=yk8esAZKz4k&list=PLD33650E97A140FC8
https://github.com/oracle/docker-images
! Oracle Exadata Recipes A Problem-Solution Approach by John Clarke
http://www.apress.com/9781430249146
Summary of the topics per chapter of the Exadata Recipes book. You can search through this; it’s easier than going through the nested table of contents of the PDF
{{{
####################################
Part1: Exadata Architecture
####################################
CH1: Exadata Hardware
1-1. Identifying Exadata Database Machine Components
1-2. Displaying Storage Server Architecture Details
1-3. Displaying Compute Server Architecture Details
1-4. Listing Disk Storage Details on the Exadata Storage Servers
1-5. Listing Disk Storage Details on the Compute Servers
1-6. Listing Flash Storage on the Exadata Storage Servers
1-7. Gathering Configuration Information for the InfiniBand Switches
CH2: Exadata Software
2-1. Understanding the Role of Exadata Storage Server Software
2-2. Validating Oracle 11gR2 Databases on Exadata
2-3. Validating Oracle 11gR2 Grid Infrastructure on Exadata
2-4. Locating the Oracle Cluster Registry and Voting Disks on Exadata
2-5. Validating Oracle 11gR2 Real Application Clusters Installation and Database Storage on Exadata
2-6. Validating Oracle 11gR2 Real Application Clusters Networking on Exadata
CH3: How Oracle Works on Exadata
3-1. Mapping Physical Disks, LUNs, and Cell Disks on the Storage Servers
3-2. Mapping ASM Disks, Grid Disks, and Cell Disks
3-3. Mapping Flash Disks to Smart Flash Storage
3-4. Identifying Cell Server Software Processes
3-5. Tracing Oracle I/O Requests on Exadata Compute Nodes
3-6. Validating That Your Oracle RAC Interconnect Is Using InfiniBand
3-7. Tracing cellsrv on the Storage Servers
####################################
Part2: Preparing for Exadata
####################################
CH4: Workload Qualification
4-1. Quantifying I/O Characteristics of Your Current Database
4-2. Conducting a Smart Scan Fit Analysis Using AWR
4-3. Conducting a Smart Scan Fit Analysis Using Exadata Simulation
4-4. Performing a Hybrid Columnar Compression Fit Assessment
CH5: Sizing Exadata
5-1. Determining CPU Requirements
5-2. Determining IOPs Requirements
5-3. Determining I/O Bandwidth Requirements
5-4. Determining ASM Redundancy Requirements
5-5. Forecasting Storage Capacity
5-6. Planning for Database Growth
5-7. Planning for Disaster Recovery
5-8. Planning for Backups
5-9. Determining Your Fast Recovery Area and RECO Disk Group Size Requirements
CH6: Preparing for Exadata
6-1. Planning and Understanding Exadata Networking
6-2. Configuring DNS
6-3. Running checkip.sh
6-4. Customizing Your InfiniBand Network Configuration
6-5. Determining Your DATA and RECO Storage Requirements
6-6. Planning for ASM Disk Group Redundancy
6-7. Planning Database and ASM Extent Sizes
6-8. Completing the Pre-Delivery Survey
6-9. Completing the Configuration Worksheet
####################################
Part3: Exadata Administration
####################################
CH7: Administration and Diagnostics Utilities
7-1. Logging in to the Exadata Compute and Storage Cells Using SSH
7-2. Configuring SSH Equivalency
7-3. Locating Key Configuration Files and Directories on the Cell Servers
7-4. Locating Key Configuration Files and Directories on the Compute Nodes
7-5. Starting and Stopping Cell Server Processes
7-6. Administering Storage Cells Using CellCLI
7-7. Administering Storage Cells Using dcli
7-8. Generating Diagnostics from the ILOM Interface
7-9. Performing an Exadata Health Check Using exachk
7-10. Collecting Compute and Cell Server Diagnostics Using the sundiag.sh Utility
7-11. Collecting RAID Storage Information Using the MegaCLI utility
7-12. Administering the Storage Cell Network Using ipconf
7-13. Validating Your InfiniBand Switches with the CheckSWProfile.sh Utility
7-14. Verifying Your InfiniBand Network Topology
7-15. Diagnosing Your InfiniBand Network
7-16. Connecting to Your Cisco Catalyst 4948 Switch and Changing Switch Configuration
CH8: Backup and Recovery
8-1. Backing Up the Storage Servers
8-2. Displaying the Contents of Your CELLBOOT USB Flash Drive
8-3. Creating a Cell Boot Image on an External USB Drive
8-4. Backing Up Your Compute Nodes Using Your Enterprise Backup Software
8-5. Backing Up the Compute Servers Using LVM Snapshots
8-6. Backing Up Your Oracle Databases with RMAN
8-7. Backing Up the InfiniBand Switches
8-8. Recovering Storage Cells from Loss of a Single Disk
8-9. Recovering Storage Cells from Loss of a System Volume Using CELLBOOT Rescue
8-10. Recovering from a Failed Storage Server Patch
8-11. Recovering Compute Server Using LVM Snapshots
8-12. Reimaging a Compute Node
8-13. Recovering Your InfiniBand Switch Configuration
8-14. Recovering from Loss of Your Oracle Cluster Registry and Voting Disks
CH9: Storage Administration
9-1. Building ASM Disk Groups on Exadata
9-2. Properly Configuring ASM Disk Group Attributes on Exadata
9-3. Identifying Unassigned Grid Disks
9-4. Configuring ASM Redundancy on Exadata
9-5. Displaying ASM Partner Disk Relationships on Exadata
9-6. Measuring ASM Extent Balance on Exadata
9-7. Rebuilding Cell Disks
9-8. Creating Interleaved Cell Disks and Grid Disks
9-9. Rebuilding Grid Disks
9-10. Setting smart_scan_capable on ASM Disk Groups
9-11. Creating Flash Grid Disks for Permanent Storage
CH10: Network Administration
10-1. Configuring the Management Network on the Compute Nodes
10-2. Configuring the Client Access Network
10-3. Configuring the Private Interconnect on the Compute Nodes
10-4. Configuring the SCAN Listener
10-5. Managing Grid Infrastructure Network Resources
10-6. Configuring the Storage Server Ethernet Network
10-7. Changing IP Addresses on Your Exadata Database Machine
CH11: Patching and Upgrades
11-1. Understanding Exadata Patching Definitions, Alternatives, and Strategies
11-2. Preparing to Apply Exadata Patches
11-3. Patching Your Exadata Storage Servers
11-4. Patching Your Exadata Compute Nodes and Databases
11-5. Patching the InfiniBand Switches
11-6. Patching Your Enterprise Manager Systems Management Software
CH12: Security
12-1. Configuring Multiple Oracle Software Owners on Exadata Compute Nodes
12-2. Installing Multiple Oracle Homes on Your Exadata Compute Nodes
12-3. Configuring ASM-Scoped Security
12-4. Configuring Database-Scoped Security
####################################
Part4: Monitoring Exadata
####################################
CH13: Monitoring Exadata Storage Cells
13-1. Monitoring Storage Cell Alerts
13-2. Monitoring Cells with Active Requests
13-3. Monitoring Cells with Metrics
13-4. Configuring Thresholds for Cell Metrics
13-5. Using dcli with Special Characters
13-6. Reporting and Summarizing metrichistory Using R
13-7. Reporting and Summarizing metrichistory Using Oracle and SQL
13-8. Detecting Cell Disk I/O Bottlenecks
13-9. Measuring Small I/O vs. Large I/O Requests
13-10. Detecting Grid Disk I/O Bottlenecks
13-11. Detecting Host Interconnect Bottlenecks
13-12. Measuring I/O Load and Waits per Database, Resource Consumer Group, and Resource Category
CH14: Host and Database Performance Monitoring
14-1. Collecting Historical Compute Node and Storage Cell Host Performance Statistics
14-2. Displaying Real-Time Compute Node and Storage Cell Performance Statistics
14-3. Monitoring Exadata with Enterprise Manager
14-4. Monitoring Performance with SQL Monitoring
14-5. Monitoring Performance by Database Time
14-6. Monitoring Smart Scans by Database Time and AAS
14-7. Monitoring Exadata with Wait Events
14-8. Monitoring Exadata with Statistics and Counters
14-9. Measuring Cell I/O Statistics for a SQL Statement
####################################
Part5: Exadata Software
####################################
CH15: Smart Scan and Cell Offload
15-1. Identifying Cell Offload in Execution Plans
15-2. Controlling Cell Offload Behavior
15-3. Measuring Smart Scan with Statistics
15-4. Measuring Offload Statistics for Individual SQL Cursors
15-5. Measuring Offload Efficiency
15-6. Identifying Smart Scan from 10046 Trace Files
15-7. Qualifying for Direct Path Reads
15-8. Influencing Exadata’s Decision to Use Smart Scans
15-9. Identifying Partial Cell Offload
15-10. Dealing with Fast Object Checkpoints
CH16: Hybrid Columnar Compression
16-1. Estimating Disk Space Savings for HCC
16-2. Building HCC Tables and Partitions
16-3. Contrasting Oracle Compression Types
16-4. Determining the Compression Type of a Segment
16-5. Measuring the Performance Impact of HCC for Queries
16-6. Direct Path Inserts into HCC Segments
16-7. Conventional Inserts to HCC Segments
16-8. DML and HCC
16-9. Decompression and the Performance Impact
CH17: I/O Resource Management and Instance Caging
17-1. Prioritizing I/O Utilization by Database
17-2. Limiting I/O Utilization for Your Databases
17-3. Managing Resources within a Database
17-4. Prioritizing I/O Utilization by Category of Resource Consumers
17-5. Prioritizing I/O Utilization by Categories of Resource Consumers and Databases
17-6. Monitoring Performance When IORM Is Enabled
17-7. Obtaining IORM Plan Information
17-8. Controlling Smart Flash Cache and Smart Flash Logging with IORM
17-9. Limiting CPU Resources with Instance Caging
CH18: Smart Flash Cache and Smart Flash Logging
18-1. Managing Smart Flash Cache and Smart Flash Logging
18-2. Determining Which Database Objects Are Cached
18-3. Determining What’s Consuming Your Flash Cache Storage
18-4. Determining What Happens When Querying Uncached Data
18-5. Measuring Smart Flash Cache Performance
18-6. Pinning Specific Objects in Smart Flash Cache
18-7. Quantifying Benefits of Smart Flash Logging
CH19: Storage Indexes
19-1. Measuring Performance Impact of Storage Indexes
19-2. Measuring Storage Index Performance with Not-So-Well-Ordered Data
19-3. Testing Storage Index Behavior with Different Query Predicate Conditions
19-4. Tracing Storage Index Behavior
19-5. Tracing Storage Indexes When More than Eight Columns Are Referenced
19-6. Tracing Storage Indexes when DML Is Issued against Tables
19-7. Disabling Storage Indexes
19-8. Troubleshooting Storage Indexes
####################################
Post Implementation Tasks
####################################
CH20: Post-Installation Monitoring Tasks
20-1. Installing Enterprise Manager 12c Cloud Control Agents for Exadata
20-2. Configuring Enterprise Manager 12c Cloud Control Plug-ins for Exadata
20-3. Configuring Automated Service Requests
CH21: Post-Install Database Tasks
21-1. Creating a New Oracle RAC Database on Exadata
21-2. Setting Up a DBFS File System on Exadata
21-3. Configuring HugePages on Exadata
21-4. Configuring Automatic Degree of Parallelism
21-5. Setting I/O Calibration on Exadata
21-6. Measuring Impact of Auto DOP and Parallel Statement Queuing
21-7. Measuring Auto DOP and In-Memory Parallel Execution
21-8. Gathering Optimizer Statistics on Exadata
}}}
Expect Lifetime Support
With Oracle Support, you know up front and with certainty how long your Oracle products
are supported. The Lifetime Support Policy provides access to technical experts for as long
as you license your Oracle products and consists of three support stages: Premier Support,
Extended Support, and Sustaining Support. It delivers maximum value by providing you
with rights to major product releases so you can take full advantage of technology and
product enhancements. Your technology and your business keep moving forward together.
Premier Support provides a standard five-year support policy for Oracle Technology and
Oracle Applications products. You can extend support for an additional three years with
Extended Support for specific releases, or receive indefinite technical support with
Sustaining Support.
Premier Support
As an Oracle customer, you can expect the best with Premier Support, our award-winning,
next-generation support program. Premier Support provides you with maintenance and
support of your Oracle Database, Oracle Fusion Middleware, and Oracle Applications for five
years from their general availability date. You benefit from
* Major product and technology releases
* Technical support
* Updates, fixes, security alerts, data fixes, and critical patch updates
* Tax, legal, and regulatory updates
* Upgrade scripts
* Certification with most new third-party products/versions
* Certification with most new Oracle products
Extended Support
Your technology future is assured with Oracle's Extended Support. Extended Support lets
you stay competitive, with the freedom to upgrade on your timetable. If you take advantage
of Extended Support, it provides you with an extra three years of support for specific Oracle
releases for an additional fee. You benefit from
* Major product and technology releases
* Technical support
* Updates, fixes, security alerts, data fixes, and critical patch updates
* Tax, legal, and regulatory updates
* Upgrade scripts
* Certification with most existing third-party products/versions
* Certification with most existing Oracle products
Extended Support may not include certification with some new third-party
products/versions.
Sustaining Support
Sustaining Support puts you in control of your upgrade strategy. When Premier Support
expires, if you choose not to purchase Extended Support, or when Extended Support expires,
Sustaining Support will be available for as long as you license your Oracle products. With
Sustaining Support, you receive technical support, including access to our online support
tools, knowledgebases, and technical support experts. You benefit from
* Major product and technology releases
* Technical support
* Access to OracleMetaLink/PeopleSoft Customer Connection/Hyperion e-Support
* Fixes, updates, and critical patch updates created during the Premier Support stage
* Upgrade scripts created during the Premier Support stage
Sustaining Support does not include
* New updates, fixes, security alerts, data fixes, and critical patch updates
* New tax, legal, and regulatory updates
* New upgrade scripts
* Certification with new third-party products/versions
* Certification with new Oracle products
For more specifics on Premier Support, Extended Support, and Sustaining Support, please refer to
Oracle's Technical Support Policies.
https://cloud.oracle.com/en_US/paas
https://cloud.oracle.com/management
https://docs.oracle.com/cloud/latest/em_home/index.html
<<<
Oracle Management Cloud uses a broad array of machine learning techniques, including the following:
» Anomaly detection. Flags unusual resource usage and identifies configuration changes.
» Clustering. Filters signal from noise; aggregates topology-based data.
» Correlation. Groups and alerts on related symptoms; discovers dependencies.
» Prediction. Forecasts outages before they happen; plans capacity and resources.
<<<
http://www.oracle.com/us/solutions/cloud/oracle-management-cloud-brief-2714883.pdf
https://www.forbes.com/sites/oracle/2018/07/23/machine-learning-and-it-jobs-early-lessons-learned-from-system-monitoring/#6ac97dc85ef3
https://www.slideshare.net/DheerajHiremath1/oracle-management-cloud-65816440
http://courtneyllamas.com/category/oracle-management-cloud/
https://cloud.oracle.com/_downloads/eBook_OMC/Oracle_Management_Cloud_eBook.pdf
<<showtoc>>
! documentation
https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/18.4/index.html
! articles
https://www.slideshare.net/hillbillyToad/oracle-rest-data-services-options-for-your-web-services
https://www.thatjeffsmith.com/archive/2019/02/ords-architecture-a-common-deployment-overview/
https://oracle-base.com/articles/misc/an-introduction-to-json-support-in-the-oracle-database
https://oracle-base.com/articles/misc/articles-misc#ords
https://twiki.cern.ch/twiki/bin/view/DB/DevelopingOracleRestfulServices
https://blogs.oracle.com/sql/how-to-store-query-and-create-json-documents-in-oracle-database
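A minimal sketch of REST-enabling a schema and a table through the ORDS PL/SQL package (HR/EMPLOYEES are stand-in names; run as the schema owner once ORDS is installed):
{{{
begin
  ords.enable_schema(p_enabled             => true,
                     p_schema              => 'HR',
                     p_url_mapping_type    => 'BASE_PATH',
                     p_url_mapping_pattern => 'hr',
                     p_auto_rest_auth      => false);
  -- AutoREST the table: GET/POST/PUT/DELETE under /ords/hr/employees/
  ords.enable_object(p_enabled      => true,
                     p_schema       => 'HR',
                     p_object       => 'EMPLOYEES',
                     p_object_type  => 'TABLE',
                     p_object_alias => 'employees');
  commit;
end;
/
}}}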
! youtube
Oracle REST Data Services Product Walk-through and Demonstration https://www.youtube.com/watch?v=rvxTbTuUm5k <-- GOOD STUFF
Configure/Install and Trouble shooting ORACLE ORDS , with or Without APEX to Run WEB SERVICES (JSON) https://www.youtube.com/watch?v=d6Dl6Dh4zFc
How To create (get and POST ) webservice in Oracle APEX 5 in less than 10 min step by step https://www.youtube.com/watch?v=fD-o73AhzpQ
Creating and Using a RESTful Web Service in Application Express 4.2 https://www.youtube.com/watch?v=gkCvd6P8_OU
REST API Programming ; Create one in under 8 minutes with APEX's REST API Creator https://www.youtube.com/watch?v=RGq4KuEKW3Q
Cloud PaaS and IaaS - How To Videos https://www.youtube.com/channel/UCoLZREsDUGWqBIBL_bBM2cg/search?query=REST
Make the RDBMS Relevant Again with RESTful Web Services and JSON https://www.youtube.com/watch?v=PohxnQbwTzA
AskTOM Office Hours: Building REST APIs with Node.js and Oracle - Part 1 https://www.youtube.com/watch?v=BghtqQOFyi4
oracle REST Data Service in 5 easy steps https://www.youtube.com/watch?v=fi8gGwNEO9M
https://www.youtube.com/results?search_query=ORDS+REST+api
Making & Consuming REST Web Services using ORDS & APEX https://www.youtube.com/watch?v=OvCgpKtEYBg
! ORDS REST API for Database
https://docs.oracle.com/en/database/oracle/oracle-database/19/dbrst/op-database-datapump-jobs-post.html
! ORDS performance
!! ORDS benchmarking
https://github.com/giltene/wrk2
https://github.com/wg/wrk
https://twitter.com/OracleREST/status/1100171448948346880 "500 requests per second, on a free ORDS and XE stack"
https://telegra.ph/Oracle-XE-184-Free-High-load-02-25-2
https://dsavenko.me/oracledb-apex-ords-tomcat-httpd-centos7-all-in-one-guide-introduction/
!! ORDS load balancing
http://krisrice.io/2019-04-17-ORDS-Consul-Fabio/
..
! tutorials
Machine Learning with R in Oracle Database https://community.oracle.com/docs/DOC-1013840
! Software
R for windows - http://cran.cnr.berkeley.edu/
IDE - http://rstudio.org/
! Use case
''R and BIEE''
{{{
R plugins to Oracle (Oracle R Enterprise packages)
Oracle - sys.rqScriptCreate, rqRowEval (parallelism applicable to rq.groupEval and rq.rowEval)
BI publisher consumes XML output from sys.rqScriptCreate for graphs
BIP and OBIEE can also execute R scripts and sys.rqScriptCreate
}}}
''R and Hadoop''
{{{
Hadoop - “Technically, Hadoop consists of two key services: reliable data storage using the Hadoop Distributed File System (HDFS)
and high-performance parallel data processing using a technique called MapReduce.”
with R and Hadoop, you can pretty much do everything from the R interface
}}}
''R built-in statistical functions in Oracle''
* these are the functions that I used for building the [[r2project]] - a regression analysis tool for Oracle workload performance
* these built-in functions are ''__not__'' smart scan offloadable
{{{
SQL> r
1 select name , OFFLOADABLE from v$sqlfn_metadata
2* where lower(name) like '%reg%'
NAME OFF
------------------------------ ---
REGR_SLOPE NO
REGR_INTERCEPT NO
REGR_COUNT NO
REGR_R2 NO
REGR_AVGX NO
REGR_AVGY NO
REGR_SXX NO
REGR_SYY NO
REGR_SXY NO
REGEXP_SUBSTR YES
REGEXP_INSTR YES
REGEXP_REPLACE YES
REGEXP_COUNT YES
13 rows selected.
}}}
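Usage sketch of those REGR_* aggregates (workload_samples and its columns are hypothetical): fit CPU usage against logical reads and get the slope, intercept, and R-squared in one pass.
{{{
select regr_slope(cpu_used_sec, logical_reads)     as slope,
       regr_intercept(cpu_used_sec, logical_reads) as intercept,
       regr_r2(cpu_used_sec, logical_reads)        as r2
from   workload_samples;
}}}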
''R Enterprise packages''
* sys.rqScriptCreate is part of the Oracle R Enterprise packages; ORE makes use of SELECT SQLs and PARALLEL options (on hints/objects), and that's how it utilizes Exadata offloading
{{{
SQL> select name , OFFLOADABLE from v$sqlfn_metadata
2 where lower(name) like '%rq%';
no rows selected
}}}
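A minimal sketch of registering an embedded R script via sys.rqScriptCreate (assumes Oracle R Enterprise is installed; the script name and body are made up):
{{{
begin
  sys.rqScriptCreate('SampleMean',
    'function(dat) { data.frame(mean_value = mean(dat$VALUE)) }');
end;
/
}}}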
! References
http://www.oracle.com/technetwork/database/options/advanced-analytics/r-enterprise/index.html
http://blogs.oracle.com/R
''Oracle By Example'': Oracle R Enterprise Tutorial Series http://goo.gl/IKd6Q
Oracle R Enterprise Training 1 - Getting Started - http://goo.gl/krOrH
Oracle R Enterprise Training 2 - Introduction to R - http://goo.gl/EGEbn
Oracle R Enterprise Training 3 - Transparency Layer - http://goo.gl/vjvu7
Oracle R Enterprise Training 4 - Embedded R Scripts - http://goo.gl/aZXui
Oracle R Enterprise Training 5 - Operationalizing R Scripts - http://goo.gl/JNRFf
Oracle R Enterprise Training 6 - Advanced Topics - http://goo.gl/ziNs1
How to Import Data from External Files in R http://answers.oreilly.com/topic/1629-how-to-import-data-from-external-files-in-r/
Oracle R install http://husnusensoy.wordpress.com/2012/10/25/oracle-r-enterprise-configuration-on-oracle-linux/
Using the R Language with an Oracle Database. http://dbastreet.com/blog/?p=913
Shiny web app http://www.r-bloggers.com/introducing-shiny-easy-web-applications-in-r/
Plotting AWR database metrics using R http://dbastreet.com/blog/?p=946
Coursera 4 week course http://www.r-bloggers.com/videos-from-courseras-four-week-course-in-r/
Andy Klock's R reference https://www.evernote.com/shard/s242/sh/26a0913e-cead-4574-a253-aaf6c733bdbe/563543114559066dcb8141708c5c89a2
https://blogs.oracle.com/R/entry/r_to_oracle_database_connectivity
Oracle on R http://www.r-bloggers.com/connecting-r-to-an-oracle-database-with-rjdbc/ , http://www.r-bloggers.com/author/michael-j-bommarito-ii/
https://blogs.sap.com/2009/02/09/oracle-real-application-testing-with-sap/
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/information-management/streams-fov-11g-134280.pdf <-- datasheet
The beauty of Oracle Streams
http://geertdepaep.wordpress.com/2007/11/24/the-beauty-of-oracle-streams/
How To: Setup up of Oracle Streams Replication
http://apunhiran.blogspot.com/2009/07/how-to-setup-up-of-oracle-streams.html
http://www.scribd.com/doc/123218/Oracle-Streams-Step-by-Step-Doc
http://www.scribd.com/doc/123217/Oracle-Streams-Step-by-Step-PPT
http://dbataj.blogspot.com/2008/01/oracle-streams-setup-between-two.html
http://www.oracle-base.com/articles/9i/Streams9i.php
http://prodlife.wordpress.com/2009/03/03/a-year-with-streams/
http://prodlife.wordpress.com/2008/02/21/oracle-streams-replication-example/
http://prodlife.wordpress.com/2009/05/05/streams-on-rac/
Oracle Streams Configuration: Change Data Capture http://it.toolbox.com/blogs/oracle-guide/oracle-streams-configuration-change-data-capture-13501
Advanced Queues and Streams: A Definition in Plain English http://it.toolbox.com/blogs/oracle-guide/advanced-queues-and-streams-a-definition-in-plain-english-3677
http://psoug.org/reference/streams_demo1.html
Implementing Replication with Oracle Streams Ashish Ray
https://docs.google.com/viewer?url=http://www.projects.ed.ac.uk/areas/student/euclid/STU139/Other_documents/StreamsAndReplication.pdf
https://docs.google.com/viewer?url=http://www.nocoug.org/download/2007-05/Streams_Presentation.ppt
https://docs.google.com/viewer?url=http://www.nocoug.org/download/2007-05/Streams_White_Paper.doc
Oracle® Streams Replication Administrator's Guide 11g Release 1 (11.1) http://download.oracle.com/docs/cd/B28359_01/server.111/b28322/best_capture.htm
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/availability/311396-2-128440.pdf
http://www.colestock.com/blogs/2006/01/how-to-stream-10g-release-1-example.html
http://rohitsinhago.blogspot.com/2009/05/oracle-streams-performance-tests.html
https://docs.google.com/viewer?url=http://www.go-faster.co.uk/mv.dbmssig.20070717.ppt <-- MViews for replication
Oracle® Streams for Near Real Time Asynchronous Replication https://docs.google.com/viewer?url=http://www.cs.berkeley.edu/~nimar/papers/streams-diddr-05.pdf
http://www.scribd.com/doc/7979240/Oracle-White-Paper-Using-Oracle-Streams-Advanced-Queueing-Best-Practices
''Oracle Official References''
Oracle 11g Streams
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/data-integration/twp-streams-11gr1-134658.pdf
Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/availability/maa-10gr2-streams-configuration-132039.pdf
Oracle9i Replication
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/data-integration/oracle-adv-replication-twp-132415.pdf
https://docs.oracle.com/cd/E87041_01/index.htm
https://docs.oracle.com/cd/E87041_01/PDF/OUA_Developer_Guide_2.7.0.pdf
https://docs.oracle.com/cd/E87041_01/PDF/OUA_Admin_Guide_2.7.0.0.12.pdf
Resource Management as an Enabling Technology for Virtualization
http://www.oracle.com/technetwork/articles/servers-storage-admin/resource-mgmt-for-virtualization-1890711.html
https://github.com/oracle/vagrant-boxes
https://www.oracle.com/technetwork/community/developer-vm/index.html
Secure Database Passwords in an Oracle Wallet
http://www.idevelopment.info/data/Oracle/DBA_tips/Security/SEC_15.shtml
oracle XA and dbms_pipe type applications (old programs, c programs)
* a lot of dbms_pipe went to AQ because AQ supports RAC
global_txn_processes
http://www.oracle-base.com/articles/11g/dbms_xa_11gR1.php
https://aws.amazon.com/blogs/database/how-to-solve-some-common-challenges-faced-while-migrating-from-oracle-to-postgresql/
https://aws.amazon.com/blogs/database/how-to-migrate-your-oracle-database-to-postgresql/
Oracle Database 11g/12c To Amazon Aurora with PostgreSQL Compatibility (9.6.x) https://d1.awsstatic.com/whitepapers/Migration/oracle-database-amazon-aurora-postgresql-migration-playbook.pdf
Best Practices for AWS Database Migration Service https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html
''how to search patents:'' http://timurakhmadeev.wordpress.com/2012/03/22/patents/
''use this:'' https://www.google.com/?tbm=pts&gws_rd=ssl#tbm=pts&q=assignee:oracle
see also - [[ASH]], VisualSQLTuning
! ASH
ASH patent http://www.google.com/patents?id=cQWbAAAAEBAJ&pg=PA2&source=gbs_selected_pages&cad=3#v=onepage&q&f=false
! VST
Dan Tow Memory structure and method for tuning a database statement using a join-tree data structure representation, including selectivity factors, of a master table and detail table http://www.freepatentsonline.com/5761654.html
Mozes, Ari Method and system for sample size determination for database optimizers http://www.freepatentsonline.com/6732085.html
! Adaptive thresholds
http://www.docstoc.com/docs/56167536/Graphical-Display-And-Correlation-Of-Severity-Scores-Of-System-Metrics---Patent-7246043
{{{
JB Patents
# Diagnosing Database Performance Problems Using a Plurality of Wait Classes
United States Patent 7,555,499 B2 Issued June 30, 2009
Inventors: John Beresniewicz, Vipul Shah, Hsiao Su, Kyle Hailey, and others
Covers the method and apparatus for diagnosing database performance problems using breakdown of time spent in database by wait classes, as presented in the Oracle Enterprise Manager Performance and Top Activity screens and the workflows that issue from them.
# Graphical Display and Correlation of Severity Scores of System Metrics
United States Patent 7,246,043 B2 Issued July 17, 2007
Inventors: John Beresniewicz, Amir Najmi, Jonathan Soule
Covers the technique for scoring and graphical display of severity of database system metric values by normalizing over a statistical characterization of expected values such that meaningful abnormalities are emphasized and normal values dampened.
# Automatic Determination of High Significance Alert Thresholds for System Performance Metrics Using an Exponentially Tailed Model
United States Patent 7,225,103 Issued May 29, 2007
Inventors: John Beresniewicz, Amir Najmi
Covers the fitting of an exponential model to the upper percentile subsets of observed values for system performance metrics, accounting for common temporal variations in expected workloads. The model parameters are used to automatically generate, set and adjust alert thresholds for detecting anomalous system behavior.
}}}
8051486 Indicating SQL injection attack vulnerability with a stored value http://www.patentgenius.com/patent/8051486.html
7246043 Graphical display and correlation of severity scores of system metrics http://www.patentgenius.com/patent/7246043.html
7225103 Automatic determination of high significance alert thresholds for system performance metrics using an exponentially tailed model
''Kevin Closson patents''
http://www.patentgenius.com/inventedby/ClossonKevinAForestGroveOR.html
! Exadata Patents
{{{
Boris Erlikhman http://goo.gl/2LvXU
smart scan http://goo.gl/chy2s
flash cache http://goo.gl/YlCA7
smart flash log http://goo.gl/TwyRx
write back cache http://goo.gl/2WCmw
Roger Macnicol http://goo.gl/oxxu7
hcc http://goo.gl/9ptFe, http://goo.gl/3IOSi
Sue Lee http://goo.gl/6WCFw, http://goo.gl/bI0pd
iorm http://goo.gl/BHIc1
}}}
[img(70%,70%)[ https://i.imgur.com/zbAYaZF.png]]
QOS http://goo.gl/B2XOp
Consolidation Planner http://goo.gl/M45nL
Direct IO https://www.google.com/patents/US8224813
Optimizer COST model https://www.google.com/patents/US6957211
Parallel partition-wise joins https://www.google.com/patents/US6609131
partition pruning https://www.google.com/patents/US6965891
On-line transaction processing (OLTP) compression and re-compression of database data https://www.google.com/patents/US8392382
Storing row-major data with an affinity for columns https://www.google.com/patents/US20130024612
SQL Execution Plan Baselines https://www.google.com/patents/US20090106306
Cecilia Gervasio Grant compare AWR snapshots http://goo.gl/WPyGm6
11g
https://docs.google.com/viewer?url=http://www.oracle.com/us/products/database/039449.pdf
Differences Between Enterprise, Standard and Standard One Editions on Oracle 11.2 [ID 1084132.1]
10g
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/database10g/overview/twp-general-10gdb-product-family-132973.pdf
9i
https://docs.google.com/viewer?url=http://www.magnifix.com/pdf/9idb_features.pdf
To ramp up my Exadata learning I have to make use of various media and do multiple reads/references across them. One useful medium is Oracle by Example; they have tons of video tutorials/demos available. Just go to this site http://goo.gl/Egd1W and copy-paste the topics that are mentioned here http://goo.gl/WGNaw
Advisor Webcast Archived Recordings [ID 740964.1]
Database https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=740964.1#data
OEM https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=740964.1#em
Exadata https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=740964.1#exadata
!
! ''Exadata''
The Magic of Exadata
Configuring DCLI
Installing and Configuring Enterprise Manager Exadata Plug-in (Part 1)
Installing and Configuring Enterprise Manager Exadata Plug-in (Part 2)
Exadata Cell First Boot Initialization
Exadata Calibrate and Cell/Grid Disks Configuration
Configuring ASM Disk Groups for Exadata
IORM and Exadata
Possible Execution Plans with Exadata Offloading http://goo.gl/FT2wj
<<<
{{{
show parameter offload
cell_offload_plan_display
cell_offload_processing
-- for future use params
cell_partition_large_extents
cell_offload_compaction
cell_offload_parameters
-- possibilities would be
offloading a FTS - table access storage full /*+ PARALLEL FULL(s) */
offloading a Full index scans - index storage fast full scan /*+ PARALLEL INDEX_FFS(s mysales_cust_id_indx) */
offload in HASH JOINS /*+ PARALLEL */
bloom filter - SYS_OP_BLOOM_FILTER on predicate /*+ PARALLEL */
}}}
<<<
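A quick sketch for confirming offload in a plan (mysales as in the hints above): look for TABLE ACCESS STORAGE FULL and a storage() entry in the predicate section.
{{{
select /*+ parallel full(s) */ count(*) from mysales s where cust_id > 100;
-- keep serveroutput off so the query above is still the session's last cursor
select * from table(dbms_xplan.display_cursor);
--   TABLE ACCESS STORAGE FULL | MYSALES
--   storage("CUST_ID">100)   <- predicate offloaded to the cells
}}}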
Exadata Automatic Reconnect
Exadata Cell Failure Scenario
''-- "tagged as Exadata"''
Check out the series here [[OBE Exadata 1 to 25]] and here [[Exadata Best Practices Series]] ! ! !
Managing Parallel Processing with the Database Resource Manager Demo 19-Nov-10 60 mins
Using Exadata Smart Scan Video 19-Aug-10 4 mins
Hybrid Columnar Compression Demo 01-Oct-09 22 mins
Smart Flash Cache Architecture Demo 01-Oct-09 8 mins
Cell First Boot Demo 01-Sep-09 5 mins
Cell Configuration Demo 01-Sep-09 10 mins
Smart Scan Scale Out Example Demo 01-Sep-09 10 mins
Smart Flash Cache Monitoring Demo 01-Sep-09 25 mins
Configuring DCLI Demo 01-Jul-07 5 mins
Installing and Configuring Enterprise Manager Exadata Plug-in (Part 2) Demo 01-Jul-07 30 mins
Installing and Configuring Enterprise Manager Exadata Plug-in (Part 1) Demo 01-Jul-07 24 mins
Exadata Cell First Boot Initialization Demo 01-Jul-07 12 mins
Exadata Calibrate and Cell/Grid Disks Configuration Demo 01-Jul-07 12 mins
Configuring ASM Disk Groups for Exadata Demo 01-Jul-07 8 mins
IORM and Exadata Demo 01-Jul-07 40 mins
Real Performance Tests with Exadata Demo 01-Jul-07 42 mins http://goo.gl/roFLK
<<<
{{{
cat ./mon
./dcli -g cells -l root --vmstat="2"
cat test.sh
#!/bin/bash
B=$SECONDS
sqlplus test/test @ss_q1.sql
sqlplus test/test @ss_q2.sql
sqlplus test/test @ss_q3.sql
sqlplus test/test @ss_q4.sql
(( TM = $SECONDS - $B ))
echo "All queries completed in $TM seconds"
cat ss_q1.sql
spool ss_q1
set timing on
spool off
}}}
<<<
Exadata Automatic Reconnect Demo 01-Jul-07 12 mins
Exadata Cell Failure Scenario Demo 01-Jul-07 10 mins
!
! ''Manageability:''
Using SQL Baselines
Using Metric Baselines
Transport a tablespace version to another database
!
! ''Automatic Storage Management (ASM):''
Install ASM single instance in its own home
Install ASM single instance in the same home
Migrate a database to ASM
Setup XML DB to access ASM
Access ASM files using ASMCMD
Real Application Clusters (RAC)
!
! ''RAC Deployment Series (Beta):''
Setting Up RAC Storage
Setting Up Openfiler Storage
Setting Up iSCSI On Client Side
Using fdisk to Partition Storage
Setting Up Multipathing On Client Side
Installing and Configuring ASMLib
Setting Up Storage Permissions On Client Side
Installing Oracle Clusterware
Installing Real Application Clusters
Configuring ASM Storage
Installing Oracle Database Single Instance Software (Part I)
Installing Oracle Database Single Instance Software (Part II)
Creating Single Instance Database
Protecting Single Instance Database Using Oracle Clusterware
Converting Single Instance Database to RAC Database
Adding a Node to Your Cluster
Extending Oracle Clusterware to Third Node
Extending RAC Software to Third Node
Extending RAC Database to Third Node
Rolling Upgrade Your Entire Cluster
Creating a RAC Physical Standby Database
Installing and Configuring OCFS2
Setting Up RAC Primary Database in Archivelog Mode
Backing Up RAC Primary Database
Configuring Oracle Network Services on Clustered Standby Site
Creating RAC Physical Standby Database Using OCFS2 Storage
Checking RAC Physical to RAC Standby databases Communication
Converting RAC Physical Standby Database to RAC Logical Standby Database
Rolling Upgrade Oracle Clusterware
Rolling Upgrade Oracle Clusterware on Clustered Primary Site (10.2.0.1 to 10.2.0.2)
Rolling Upgrade Oracle Clusterware on Clustered Standby Site (10.2.0.1 to 10.2.0.2)
Upgrading your RAC Standby Site
Upgrading RAC Standby Database From 10.2.0.1 to 10.2.0.2 (Part I)
Upgrading RAC Standby Database From 10.2.0.1 to 10.2.0.2 (Part II)
Switching Primary and Standby Databases Roles
Upgrading your old RAC Primary Site
Upgrading RAC Old Primary Database From 10.2.0.1 to 10.2.0.2 (Part I)
Upgrading RAC Old Primary Database From 10.2.0.1 to 10.2.0.2 (Part II)
Switching Back Primary and Standby Databases Roles
!
! ''Miscellaneous:''
RAC scale example
RAC speedup example
Use Transparent Application Failover (TAF) with SELECT statements
!
! ''Oracle Clusterware:''
Use Oracle Clusterware to protect the apache application
Use Oracle Clusterware to protect the Xclock application
RAC Voting Disk Multiplexing
Patch Oracle Clusterware in a Rolling Fashion
CSS Diagnostic Case Study
RAC OCR Mirroring
!
! ''Services:''
Runtime Connection Load Balancing example
Basic use of services in your RAC environment
!
! ''Installs and Enterprise Manager:''
Install ASM in its own home in a RAC environment
Convert a single-instance database to a RAC database using Grid Control
Push Management Agent software using Grid Control
Clone Oracle Clusterware to extend your cluster using Grid Control
Clone ASM home to extend your cluster using Grid Control
Clone database home to extend your cluster using Grid Control
Add a database instance to your RAC database using Grid Control
!
! ''RAC Concepts:''
RAC VIP Concepts
RAC Object Affinity Concepts
Rolling Release Upgrade (Beta): 10.2.0.1 to 10.2.0.2:
Upgrading your Standby Site
Upgrading RAC Standby Database From 10.2.0.1 to 10.2.0.2 (Part I)
Upgrading RAC Standby Database From 10.2.0.1 to 10.2.0.2 (Part II)
Switching Primary and Standby Databases Roles
Upgrading your old Primary Site
Upgrading RAC Old Primary Database From 10.2.0.1 to 10.2.0.2 (Part I)
Upgrading RAC Old Primary Database From 10.2.0.1 to 10.2.0.2 (Part II)
Switching Back Primary and Standby Databases Roles
https://cloud.oracle.com
''tutorial'' http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/dbservice/dataload/dataload.html
Oracle Compute Cloud Service Foundations https://www.pluralsight.com/courses/oracle-compute-cloud-service-foundations
Product announcement
http://www.ome-b.nl/2011/09/22/finally-the-oracle-database-appliance/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+orana+%28OraNA%29
Step by Step install and some other screenshots
http://www.evernote.com/shard/s48/sh/0d565394-1a58-4578-9fc8-e53aa52c4eca/8f360ef4395fa999e32d2c77358ee613
Video intros
http://goo.gl/kWlT3
Unloading history - old Oracle7 dictionary
http://www.ora600.be/node/10707
http://oss.oracle.com/ksplice/docs/ksplice-quickstart.pdf
Using Oracle Ksplice to Update Oracle Linux Systems Without Rebooting
http://www.oracle.com/technetwork/articles/servers-storage-admin/ksplice-linux-518455.html
http://www.freelists.org/post/oracle-l/Reinstall-OS-completely-without-reinstalling-Oracle,6
{{{
I thought of this article as a full OS upgrade just like your case.. but it
seems like it is really just the kernel upgrade/patches..
then I tweeted @wimcoekaerts just out of curiosity.. "@wimcoekaerts Do I
still have to relink my Oracle Home even if I use ksplice for OS upgrade?
goo.gl/R13Op"
this is his response: "@karlarao <http://twitter.com/karlarao> No. ksplice
isn't really "os upgrade" in the normal sense. it updates the running kernel
(in memory). no need to relink or restart"
So in my notes, in the section "APPENDIX A - Some FAQs about relinking"
http://docs.google.com/fileview?id5H46jS7ZPdJNGU0NDljZDktMzUwMC00ZWQ4LWIwZDgtNjFlYzNhMzQyMjg0&hl=en
I had this question before:
6) What if I just did a kernel upgrade (2.6.9-old to 2.6.9-newer), and not a
full OS upgrade (from oel4.4 to 4.6), would I still have to relink? The
kernel upgrade just updates the kernel modules (/lib/modules), which are not
related to the gcc binaries or the libraries used to compile the Oracle
binaries, and /usr/lib/gcc-lib is not affected by a kernel upgrade, right?
Answer: If you are just upgrading the kernel, there is no need to relink. If it
affects the system libraries, then you have to relink.
}}}
''Related ksplice blogs''
http://blogs.oracle.com/ksplice/entry/solving_problems_with_proc
https://blogs.oracle.com/ksplice/entry/8_gdb_tricks_you_should
https://blogs.oracle.com/ksplice/entry/anatomy_of_a_debian_package
oracle prices 2007-2016
https://www.evernote.com/l/ADC4YXJG_DFL96bzb1ZXNHrZ5bJpmuZw4Mo
http://www.oraclelicensestore.com/ar/licensing/tutorial/licensing-tutorial
http://blog.enkitec.com/wp-content/uploads/2010/06/Randy-Hardee-Oracle-Licensing-Guide.pdf
http://benchmarkingblog.wordpress.com/category/power7/
<<<
(1) An 8-core IBM Power 780 (2 chips, 32 threads) with IBM DB2 9.5 is the best 8-core system (1,200,011 tpmC, $.69/tpmC, configuration available 10/13/10) vs. Oracle Database 11g Release 2 Standard Edition One and Oracle Linux on Cisco UCS c250 M2 Extended-Memory Server, 1,053,100 tpmC, $0.58/tpmC, available 12/7/2011.
Source: www.tpc.org. Results current as of 12/16/11.
TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).
<<<
http://blogs.flexerasoftware.com/elo/oracle-software-licensing/
<<<
The cost of the Enterprise edition is currently $47,500 per processor (core) and the Standard Edition $17,500 per processor (socket). If a server or a cluster is equipped with Intel Xeon E7-8870 Processors, supporting up to 10 cores, the calculation for a 4 socket server or cluster is:
Standard Edition: 4 (sockets) x $17,500 = $70,000
Enterprise Edition: 4 (processors) x 10 (cores/processor) x 0.5 (core factor) x $47,500 = $950,000
<<<
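For a quick sanity check of the math above, here's a minimal bash sketch; the list prices and the 0.5 core factor are the ones quoted above (2012-era figures, so treat the constants as illustrative only):
{{{
#!/bin/bash
# illustrative license math only -- prices and core factor as quoted above
SOCKETS=4; CORES_PER_SOCKET=10
SE_PER_SOCKET=17500; EE_PER_CORE=47500
CORE_FACTOR_TENTHS=5   # 0.5 expressed in tenths, since bash has no floating point

SE_COST=$(( SOCKETS * SE_PER_SOCKET ))
EE_COST=$(( SOCKETS * CORES_PER_SOCKET * CORE_FACTOR_TENTHS * EE_PER_CORE / 10 ))
echo "Standard Edition:   \$${SE_COST}"    # $70000
echo "Enterprise Edition: \$${EE_COST}"    # $950000
}}}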
http://oraclestorageguy.typepad.com/oraclestorageguy/2011/11/oracle-licensing-on-vmware-no-magic.html
http://www.licenseconsulting.eu/2012/08/29/vmworld-richard-garsthagen-oracle-on-licensing-vmware-virtualized-environments/
http://oraclestorageguy.typepad.com/oraclestorageguy/2012/09/oracle-throws-in-the-towel-on-vmware-licensing-reprise.html
http://www.vmware.com/files/pdf/techpaper/vmw-understanding-oracle-certification-supportlicensing-environments.pdf ''Understanding Oracle Certification, Support and Licensing for VMware Environments''
! 2021
<<<
Bug 27213224 - Deploying Exadata Software Fails At - Step 12 (Initializing Cluster Software) (Doc ID 2391108.1)
Apply the following patches, in order, on top of BOTH the GI Home and the Oracle RDBMS Home:
a) 27213224
b) 27309269
OR
You can use the workaround below instead:
a) Shut down CRS on both nodes.
b) Add the following route on both nodes:
# route add -host 169.254.169.254 reject
c) Bring CRS online on node 1.
d) On node 2, run root.sh.
Note that the route added in this workaround is not static:
it has to be re-added every time before starting CRS.
BUG:27213224 - NODES ARE NOT ABLE JOIN TO GRID INFRASTRCTURE CSSD FAILING WITH NO NETWORK HB
BUG:27424049 - REJECTING CONNECTION FROM NODE X AS MULTINODE RAC IS NOT SUPPORTED OR CERTIFIED
<<<
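The reject route from the workaround above is lost on reboot, so it has to be put back before CRS starts. A minimal sketch of doing that on Linux, assuming iproute2; the systemd unit in the comments is hypothetical, and the ordering target would have to match however CRS is started on your nodes:
{{{
# one-shot (must be repeated after every reboot, as noted above):
ip route add prohibit 169.254.169.254/32    # same effect as: route add -host 169.254.169.254 reject

# hypothetical unit to persist it, e.g. /etc/systemd/system/imds-reject-route.service
# [Unit]
# Description=Reject route to 169.254.169.254 before Clusterware starts
# Before=oracle-ohasd.service     (illustrative; adjust to your CRS startup mechanism)
# [Service]
# Type=oneshot
# ExecStart=/sbin/ip route add prohibit 169.254.169.254/32
# [Install]
# WantedBy=multi-user.target
}}}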
! 2021 oracle and non-oracle public cloud
<<<
Oracle Database Support for Non-Oracle Public Cloud Environments (Doc ID 2688277.1)
For the purposes of this document, Non-Oracle Public Cloud Environments are defined as:
(a) Non-Oracle Public Clouds. Examples: Google Cloud Platform, Amazon AWS, Microsoft Azure, IBM Cloud, Alibaba Cloud, etc.
or
(b) Environments that are in any way considered an extension of Non-Oracle Public Clouds including but not limited to running Non-Oracle cloud management software, cloud billing, cloud support, cloud automation, cloud images, or cloud monitoring. Examples: Google Bare Metal Solution, Amazon AWS Outpost, Microsoft Azure Stack, IBM Bluemix Local, Alibaba Hybrid Cloud, etc.
Support Policy for Non-Oracle Public Cloud Environments
Oracle has not certified any of its products on Non-Oracle Public Cloud Environments. Oracle Support will assist customers running Oracle products on Non-Oracle Public Cloud Environments in the following manner: Oracle will only provide support for issues that either are known to occur on an Oracle Certified Platform outside of a non-Oracle Cloud Environment (Oracle Certification Home), or can be demonstrated not to be as a result of running on a Non-Oracle Public Cloud Environment.
If a problem is a known Oracle issue, Oracle support will recommend the appropriate solution on an Oracle Certified Platform outside of a non-Oracle Cloud Environment. If that solution does not work in the Non-Oracle Public Cloud Environment, the customer will be referred to the Non-Oracle Public Cloud vendor for support. When the customer can demonstrate that the Oracle solution does not work when running on an Oracle Certified Platform outside of a non-Oracle Cloud Environment, Oracle will resume support, including logging a bug with Oracle Development for investigation if required.
If the problem is determined not to be a known Oracle issue, we will refer the customer to the Non-Oracle Public Cloud vendor for support. When the customer can demonstrate that the issue occurs when running on an Oracle Certified Platform outside of a non-Oracle Cloud Environment, Oracle will resume support, including logging a bug with Oracle Development for investigation if required.
Support Policy for Oracle Real Application Clusters (RAC)
Oracle does not support Oracle RAC or Oracle RAC One Node running on Non-Oracle Public Cloud Environments.
<<<
<<<
I worked with Andy Klock a bit on the AWS environment in question, and the installation was being done via some AWS-provided automation code. The issue that they were seeing was with either performing new installations of 19.9 or patching existing clusters up to 19.9 in the AWS environment. They could get one node up and running, but as soon as a second node with 19.9 tried to come up, ocssd would spin and eventually time out. Based on this, I would think that a single node Oracle restart environment would not be affected by this.
Now that Frits has found the specific functions, this makes the behavior a little more clear to me.
Since I didn't have the budget to try this out in AWS, I wanted to recreate this and see if I could get the same outcomes. I built a 3-node RAC environment first, using 19.9. I'd expect this to be classified by Oracle as the kgcs_is_on_premise. Here's the entry we see in the ocssd.trc file:
[ INFO] clssscGetCloudProvider: Value from OSD ctx: 1, value in global ctx: 1
That "1" value matches to the first entry Frits mentioned. Here, the cluster behaves as expected.
Now, to get it to act like AWS. I found an AWS metadata service simulator (https://github.com/aws/amazon-ec2-metadata-mock) and fired it up using the AWS 169.254.169.254 address. What I found was that if the metadata service was running and accessible, the ocssd process at cluster startup (on the first node) would be successful and log a return value of 3. If I tried to start additional nodes, they would fail to start ocssd. Here's what was reported in the ocssd.trc file on all of the nodes:
[ INFO] clssscGetCloudProvider: Value from OSD ctx: 3, value in global ctx: 3
The first node would run just fine…I just couldn't start CRS on additional nodes. Blocking the URL or shutting down the simulator wouldn't change anything at this point, because the cluster recognized that it was on a non-Oracle cloud. This behavior would continue until I ran a full shutdown on the cluster, then restarted after blocking access to the AWS metadata simulator. At that point, the cluster reverted back to kgcs_is_on_premise mode, which allowed multiple nodes to start successfully. I believe what AWS has done to get around this is implement an iptables rule blocking access to 169.254.169.254, but Klock would have to be the one to shed light on that.
The big takeaway on my side is that I'd be really apprehensive about pursuing a solution that includes running RAC in AWS. You're running an unsupported platform in the end. It looks like a cat and mouse game where Oracle will continue to create ways to block certain functionality from running in other clouds, and you're one new patch away from having a completely broken environment. The Amazon team did come up with a solution, but it was 2+ months after the 19.9 patch release, which could be a massive issue for clients that have security or regulatory requirements to patch within a certain timeframe.
<<<
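For reference, the iptables rule the quote speculates AWS uses would look something like the sketch below; this just blocks outbound traffic to the EC2 metadata address so the clssscGetCloudProvider probe can't reach it:
{{{
# reject any outbound packets to the EC2 instance metadata service address
iptables -A OUTPUT -d 169.254.169.254/32 -j REJECT
# note: per the test above, the cluster only re-evaluates the cloud provider
# after a full stop/start of CRS on all nodes
}}}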
https://blogs.oracle.com/OTNGarage/entry/how_the_oracle_linux_update?utm_source=feedly
http://public-yum.oracle.com/
<<showtoc>>
also see [[mulesoft]] for integration patterns
! salesforce acquisitions
!! tableau
!! heroku
!! mulesoft
! Salesforce Object Query Language (SOQL) or Salesforce Object Search Language (SOSL)
https://www.google.com/search?q=salesforce+SOQL
! Salesforce example data model
https://mindmajix.com/creating-data-model-in-salesforce
https://audit9.blog/2012/11/14/salesforce-custom-erd/ (example ERD image: http://force365.files.wordpress.com/2012/11/example-erd1.jpg)
!! salesforce healthcloud data model
https://www.google.com/search?q=salesforce+health+cloud+data+model&tbm=isch
!!! enterprise clinical operating model humana
https://www.google.com/search?q=enterprise+clinical+operating+model+humana
!! salesforce integration using SALESFORCE CONNECT
https://www.udemy.com/course/salesforce-integration-with-heroku/
! APEX programming
https://www.udemy.com/course/salesforce-development-for-beginners/learn/lecture/15666198#overview
https://www.udemy.com/course/salesforce-platform-developer-certification/
https://www.udemy.com/course/salesforce-integration-with-heroku/
! INTEGRATION
!! mulesoft salesforce connector
https://docs.mulesoft.com/salesforce-connector/10.3/
!! salesforce bigquery data sync
How we made data sync between Salesforce into BigQuery at NestAway https://medium.com/nestaway-engineering/how-we-made-sync-between-salesforce-into-bigquery-at-nestaway-2ec229359e34
Running Oracle Database in Solaris 10 Containers - Best Practices
Doc ID: Note:317257.1
-- 2GB limit
http://www.sunsolarisadmin.com/general/ufs-maximum-file-size-2gb-restriction-in-sun-solaris/
-- OS TOOLS , SOLARIS
http://developers.sun.com/solaris/articles/tuning_solaris.html
Get Started With Oracle Restart
http://dbatrain.wordpress.com/2010/08/13/get-started-with-oracle-restart/
Data Guard & Oracle Restart in 11gR2
http://uhesse.wordpress.com/2010/09/
Data Guard & Oracle Restart
http://oracleprof.blogspot.com/2012/08/dataguard-and-oracle-restart-how-to.html
* HIPAA
* FIPPS
* COPPA
* GDPR
* CPNI
* Data Breaches
* Reporting
-- ORACLE SUPPORT
Working Effectively With Global Customer Support
Doc ID: 166650.1
How To Monitor Bugs / Enhancement Requests through Metalink
Doc ID: 602038.1
''Fast, Modern, Reliable: Oracle Linux'' http://www.oracle.com/us/technologies/linux/uek-for-linux-177034.pdf
<<<
''Features and Performance Improvements''
{{{
Latest Infiniband Stack (OFED) 1.5.1
Receive/Transmit Packet Steering and Receive Flow Steering
Advanced support for large NUMA systems
IO affinity
Improved asynchronous writeback performance
SSD detection
Task Control Groups
Hardware fault management
Power management features
Data integrity features
Oracle Cluster File System 2 (OCFS2)
Latencytop
New fallocate() system call
}}}
<<<
http://www.oraclenerd.com/2011/03/oel-6-virtualbox-guest-additions.html
-- some entries on otn forum saying you need to have ULN subscription
https://forums.oracle.com/forums/thread.jspa?threadID=2146476
https://forums.oracle.com/forums/thread.jspa?threadID=2183312
''Playground'' https://blogs.oracle.com/wim/entry/introducing_the_oracle_linux_playground
! OEL6
https://oss.oracle.com/ol6/
uek2 u3 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-UEK2-QU3-en.html
uek2 u2 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-UEK2-QU2-en.html
uek2 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-UEK2-en.html
6.4 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-U4-en.html
6.3 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-U3-en.html
6.2 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-U2-en.html
6.1 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-U1-en.html
6 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-GA-en.html
! Public mirrors
https://wikis.oracle.com/display/oraclelinux/Downloading+Oracle+Linux
! migrate from rhel to oel
http://linux.oracle.com/switch/
http://www.oracle.com/technetwork/server-storage/vm/ovm3-quick-start-guide-wp-516656.pdf
https://www.youtube.com/watch?v=pD54PTPpvYc
http://www.oracle.com/technetwork/server-storage/vm/ovm3-demo-vbox-1680215.pdf
http://www.oracle.com/technetwork/server-storage/vm/template-1482544.html
https://blogs.oracle.com/linux/entry/friday_spotlight_getting_started_with
Underground Book
http://itnewscast.com/chapter-5-oracle-vm-manager-sizing-and-installation#Oracle_VM_Manager_Introduction
''How to Use Oracle VM Templates'' http://www.oracle.com/technetwork/articles/servers-storage-admin/configure-vm-templates-1656261.html
http://www.freelists.org/post/oracle-l/oracle-orion-tool
http://www.freelists.org/post/oracle-l/ORION,1
https://twiki.cern.ch/twiki/bin/view/PSSGroup/HAandPerf
https://twiki.cern.ch/twiki/bin/view/PSSGroup/SwingBench
http://www.freelists.org/post/oracle-l/ORION-num-disks
https://twiki.cern.ch/twiki/bin/view/PDBService/OrionTests
<<<
* Please see below for the details on how to use Orion to measure IO numbers, in particular the small random IOPS (Orion will measure the maximum IOPS obtained 'at saturation' by submitting hundreds of concurrent async IO requests of 8KB blocks).
* Sequential IO performance is almost inevitably bound by the HBA speed, typically 400 MB/sec, or 800 MB/sec when multipathing is used.
<<<
<<<
How to read Orion output and common gotchas
----------------------------------------------------------------------
* The summary file for a simple run will give you 3 numbers: Maximum Large MBPS, Maximum Small IOPS, Minimum Small Latency
* Plotting metrics against load in Excel (from the Orion csv files) is a better way to read the results
* Maximum MBPS typically saturates at the HBA speed. For a single-ported 4Gbps HBA you will see something less than 400 MBPS. If the HBA is dual ported and you are using multipathing, the number should be close to 800 MBPS
* IOPS is the most critical number. That is the measurement of the max number of small IO (8KB, i.e. 1 Oracle block) operations per second that the IO subsystem can sustain. It is similar to what is needed for an OLTP-like workload in Oracle (although Orion uses async IO for these tests, unlike typical RDBMS operations)
* The storage array cache can play a very important role in producing bogus results (tested). The -cache_size parameter in Orion tests should be set appropriately (in MB). If you can, run a test with the array cache disabled.
* Average latency is of little use; latency vs load will instead give a curve that should be flat for load < number of spindles and then start to grow linearly.
* When running read-only tests on a new system an optimization can kick in where unformatted blocks are read very quickly. I advise running at least one write-only test (that is, with -write 100) on a new system.
<<<
http://husnusensoy.wordpress.com/2009/03/31/orion-io-calibration-over-sas-disks/
<<<
To interpret Figure 4, suppose our storage array is capable of serving only 8K requests, so any larger request will be chopped into 8K pieces. That means one large (1M) IO request corresponds to 125 small IO requests. Suppose also that the total capacity of our storage array is 2000 small IOPS. By simple division you can then yield either 2000 small (8K) IOPS or 16 large (1M) IOPS from this storage array, or somewhere in between.
So as the number of large IO requesters increases, the total IOPS will decrease.
Now assume that sustaining 1500 IOPS requires 10 ms and 3000 IOPS requires 20 ms service time on average. While sustaining 1500 IOPS, we can either move along the large-requester axis and reach 20 ms latency with an addition of just 12 large IOPS, or move along the small-requester axis and reach 20 ms latency with an addition of 1500 small IOPS (or choose a third option somewhere in between). As a result, an increase in large IO also results in an increase in service time.
<<<
http://forums.oracle.com/forums/thread.jspa?messageID=2249899
<<<
If you want to emulate 1MB scans then use this:
-run advanced -type rand -testname mytest -num_disks X -matrix point -num_large Y -num_small 0 -duration 300
where X is the number of physical drives and Y is say 2 or 4 times the number of LUNs. This will give you 2 or 4 outstanding (in-flight) IOs per LUN. You can tweak Y as you see fit based on what you see in iostat.
--
Regards,
Greg Rahn
http://structureddata.org
<<<
''Outstanding IO''
http://kevinclosson.wordpress.com/2006/12/11/a-tip-about-the-orion-io-generator-tool/
<<<
"With Orion an outstanding I/O is one issued by io_submit(). You can tune the size of the “flurry” of I/O submitted through io_submit() by tuning outstanding I/O. The way it works is everytime I/O completions are processed Orion issues N number more I/Os where N is the number of completions in the reaped batch. It’s just a way to keep constant pressure on the I/O subsystem."
-- Kevin Closson
<<<
<<<
Stuart,
I don’t understand how your SAN guys can say there is 2GB bandwidth when you are citing the plumbing for your LPAR as 4x2Gb HBAs. That is 800MB/s. Perhaps they mean the entire SAN array can sustain 2GB because maybe it has a total of 10 active 2Gb ports? I don’t know. All that aside, this can only be one of two things I think. Either a) the LPAR you live in has enough RAM to cache all 5GB of your FS files. This seems reasonable as p595s are some real whoppers, or b) Orion is failing silently and calculating as if it is doing I/O.
I recommend you monitor sar -b (bread/s) for buffer reads and sar -d for physical reads. I think the odds are very good that there is no physical I/O.
<<<
Jim Czuprynski
http://www.databasejournal.com/article.php/2237601
Oracle Database I/O Performance Tuning: Capturing Extra-Database I/O Performance Metrics
http://www.dbasupport.com/oracle/ora11g/Oracle-Database-11gR2-IO-Tuning03.shtml
! The output files
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TSLtQC4c9VI/AAAAAAAABAI/IGGccAPt89g/s400/OrionGraph.JPG]]
! Supported types of IO
- Small random IO
- Large sequential IO
- Large random IO
- Mixed workloads
Check this out for the details of IO types http://www.evernote.com/shard/s48/sh/7a7a05d2-d08a-4a0c-ac65-de0d8b119f85/4fe95aeed62bd5c0512db073f468f885
http://eval.veritas.com/webfiles/presentations/oracle/ioug-a_odm.pdf
http://www.slideshare.net/WhizBob/io-micro-preso07
! Answer the following questions to properly configure the database storage
__1) Will the I/O requests be primarily single-block or multi-block?__
''DSS'' - multiblock IO operations (''MBPS''), sequential IO throughput issued by multiple users
* parallel queries
* queries on large tables that require table scans
* direct data loads
* backups
* restores
''OLTP'' - single block IO (''IOPS'')
__2) What is your average and peak IOPS requirement? What percentage of this traffic are writes?__
__3) What is your average and peak throughput (in MBPS) requirement? What percentage of this traffic are writes?__
If your database's IO requests are primarily single-block, then focus on ensuring that the storage can accommodate your IO request rate (IOPS);
if they are primarily multiblock, then focus on throughput capacity (MBPS)
! SYSSTAT metrics
''reads''
* single-block reads: physical read total IO requests - physical read total multi block requests
* multi-block reads: physical read total multi block requests
* bytes read: physical read total bytes
''writes''
* single-block writes: physical write total IO requests - physical write total multi block requests
* multi-block writes: physical write total multi block requests
* bytes written: physical write total bytes
__other metrics:__
* redo blocks written: redo blocks written
* redo IO requests: redo writes
* backup IO: in v$backup_async_io and v$backup_sync_io, the IO_COUNT field specifies the number of IO req. and the TOTAL_BYTES field specifies the number of bytes read or written. Note that each row of this view corresponds to a data file, the aggregate over all data files, or the output backup piece.
* flashback log IO: in v$flashback_database_stat, FLASHBACK_DATA, DB_DATA, and REDO_DATA show the number of bytes read or written from the flashback logs, data files and redo logs, respectively, in the given time interval. In SYSSTAT the "flashback log writes" statistic specifies the number of write IO req. to the flashback log.
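The read-side derivations above map directly onto V$SYSSTAT; a minimal sqlplus sketch (the write side is symmetric; note these counters are cumulative since instance startup, so sample twice and diff to get rates):
{{{
sqlplus -s "/ as sysdba" <<'EOF'
-- single-block reads = total read requests - multiblock read requests
select (select value from v$sysstat where name = 'physical read total IO requests')
     - (select value from v$sysstat where name = 'physical read total multi block requests')
         as single_block_reads,
       (select value from v$sysstat where name = 'physical read total multi block requests')
         as multi_block_reads,
       (select value from v$sysstat where name = 'physical read total bytes')
         as bytes_read
  from dual;
EOF
}}}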
! Data Warehouse and Orion
''run this multiple IO simulations:''
* __Daily workload__ when end-users and/or other applications query the system: ''read-only workload with possibly many individual parallel IOs''
* __Data Load__, when end-users may or may not access the system: ''write workload with possibly parallel reads'' (by the load program and/or by end-users)
* __Index and materialized view builds__, when end-users may or may not access the system: ''read/write workload''
* __Backups__: ''read workload with likely few other processes, but a possible high degree of parallelism''
In a clustered environment you will have to __invoke Orion in parallel on all nodes__ in order to simulate a clustered workload.
Example, ''a typical Data Warehouse workload'' - simulates __4 parallel sessions__ (-num_large 4) running a statement with a degree of __parallelism of 8__ (-num_streamIO 8), also simulates __raid0 striping__. The internal disks in this case do not have cache.
{{{
./orion -run advanced \
-testname orion14 \
-matrix point \
-num_small 0 \
-num_large 4 \
-size_large 1024 \
-num_disks 4 \
-type seq \
-num_streamIO 8 \
-simulate raid0 \
-cache_size 0 \
-verbose
}}}
------------------------------------------------------------------------------------------------
''num_large'' (# of parallel sessions)
''num_streamIO'' (# of PARALLEL hint) increase this parameter in order to simulate parallel execution for individual operations. Specify a DOP that you plan to use for your database operations, a good starting point for DOP is ''# of CPU x Parallel threads per CPU''
------------------------------------------------------------------------------------------------
In other words, the maximum throughput for this specific case with that workload is 57.30 MB/sec. In ideal conditions, Oracle will be able to achieve up to 95% of that number. For this particular case, having __4 parallel sessions__ running the following statement would approach the same throughput:
{{{
select /*+ NO_MERGE(sales) */ count(*)
from
(select /*+ FULL (s) PARALLEL (s,8) */ *
from all_sales s) sales
/
}}}
In a well-balanced Data Warehouse hardware config, there is __sufficient IO bandwidth to feed the CPUs__. As a starting point, you can use the __basic rule that ''every GHz of CPU power can drive at least 100 MB/sec''__. I.e., for a single server configuration with four 3GHz CPUs, your storage configuration should at least be able to provide 4*3*100 = 1200 MB/s throughput. __This number should be multiplied by the number of nodes in a RAC configuration__.
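That rule of thumb is easy to script as a first-pass sizing check; a sketch assuming the 100 MB/sec-per-GHz figure above:
{{{
#!/bin/bash
# required throughput = CPUs/node x GHz x 100 MB/s x nodes (rule of thumb above)
CPUS=4; GHZ=3; NODES=1
echo "target storage throughput: $(( CPUS * GHZ * 100 * NODES )) MB/s"   # 1200 MB/s
}}}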
! Some Orion command errors
Can only specify -num_streamIO with -type seq
Can only specify -stripe with -simulate RAID0
count (this is num_large) * nstream must be < 2048
Must specify -num_small and cannot specify -num_large when specified -matrix col
{{{
Orion does support filesystems and has done so for years. All you have
to do is create a file that is a multiple of the block size you'll be
testing, e.g.:
Create a 4GB file:
dd if=/dev/zero of=/u01/oracle/mytest.dbf bs=8k count=524288
Then, put this file in your test.lun file:
>cat mytest.lun
/u01/oracle/mytest.dbf
Then, run orion:
orion -run simple -testname mytest -num_disks 1
Regards,
Brandon
}}}
{{{
From my experience, it seems that all num_disks does is increase the max
load orion will run up to when you run with "-run simple/normal", or
"-matrix basic/detailed" tests, for example, with num_disks 1 on a
simple run, it will perform tests of single-block IOs at loads of
1,2,3,4,5 and then multi-block IOs at loads of 1 & 2. If you increase
to num_disks 2, then it will run single-block IOs at loads
1,2,3,4,5,6,7,8,9,10 and multi-block at 1,2,3,4, and it just keeps going
higher as you continue to increase num_disks. Beware it also takes much
longer since each run takes 1 minute by default, however with the larger
num_disks values, it does begin to skip data points, so, for example,
instead of doing every point between 1-20, it will do something like
1,2,4,6,8,10,12,16,20.
In the case of an advanced run like you have below with a specific point
of 45 large IOs and 0 small IOs, I don't think the num_disks parameter
does anything, but please let me know if I'm wrong.
Thanks,
Brandon
}}}
{{{
No prob - I know the documentation doesn't make it very clear. One
thing to be careful with - most filesystems are cached, so you'll
probably get unbelievably good numbers from Orion. The way I usually
work around this is to create files for testing that are much larger than
my RAM, and clear the OS buffer cache prior to testing. You could also
try playing with the cache_size parameter for Orion, but that never
seemed to do much for me. Hopefully in a future version of orion,
they'll support using directio on a filesystem where supported by the
OS, just like the Oracle database does (e.g.
filesystemio_options=directio).
One more thing to beware of - if you configure orion to run write tests
(it does read-only by default with the simple/normal type tests), it
will destroy any data in the specified test files - so make sure you
don't have it pointed to anything you want to keep, like a real Oracle
datafile.
Regards,
Brandon
}}}
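On Linux, the "clear the OS buffer cache prior to testing" step Brandon mentions can be done with drop_caches (kernels 2.6.16 and later, run as root); a minimal sketch:
{{{
# flush dirty pages to disk first, then drop clean pagecache/dentries/inodes
sync
echo 3 > /proc/sys/vm/drop_caches
# then run orion against test files sized well beyond RAM, per the advice above
}}}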
{{{
I believe num_disks has to do with the number of I/O threads that are
spawned and num_large has to do with the number of outstanding I/Os
that are targeted to be issued.
For what I use the tool for (I/O bandwidth testing) I generally run
the sequential workload to get a best possible data point and then use
the rand workload to get numbers closer to what a PQ workload would
be.
On Fri, Sep 12, 2008 at 10:47 AM, Allen, Brandon
<Brandon.Allen@xxxxxxxxxxx> wrote:
> In the case of an advanced run like you have below with a specific point
> of 45 large IOs and 0 small IOs, I don't think the num_disks parameter
> does anything, but please let me know if I'm wrong.
--
Regards,
Greg Rahn
http://structureddata.org
}}}
{{{
You could be right, I'm really not sure and have just come to most of my
current conclusions through trial and error. One thing I've noticed, at
least on Linux (OEL4 & 5) is that orion seems to return pretty
consistent results regardless of how high I push the load for a single
execution, e.g., even if I run with num_small 50 (I usually focus more
on IOPS since I work with OLTP systems) and/or num_disks 50, I'll get
about the same throughput as if I run with 5 or 10. I also never see it
spawn multiple processes/threads at the OS level, so it seems to just be
doing AIO from a single process. I've found that I can push the system
much harder if I run multiple orion processes concurrently, so what I'll
usually do is something like this:
1) Create four 4GB files with dd
2) Create four .lun files, e.g. test1.lun, test2.lun, test3.lun and
test4.lun, each pointing to 1 of the 4 test files I created
3) Put four orion commands in a script like this to run four orion
commands in the background:
orion -run advanced -matrix point -num_large 0 -num_small 5 -testname mytest1 -num_disks 1 &
orion -run advanced -matrix point -num_large 0 -num_small 5 -testname mytest2 -num_disks 1 &
orion -run advanced -matrix point -num_large 0 -num_small 5 -testname mytest3 -num_disks 1 &
orion -run advanced -matrix point -num_large 0 -num_small 5 -testname mytest4 -num_disks 1 &
4) Run the script
I'll repeat the above test, increasing the number of concurrent
executions until I find the peak performance. Maybe I'm just doing
something wrong with the standard load-setting parameters, but this
seems to be the only way I can get orion to max out my systems.
}}}
{{{
Normally it is set to the number of physical drives, but it can be
adjusted higher or lower depending on how much load you want to drive.
Here are a couple of command lines and summaries that I used from a Sun
Thumper (http://www.sun.com/servers/x64/x4500/) for testing I/O
bandwidth using 1MB reads for a data warehouse workload.
-run advanced -type seq -testname thumper_seq -num_disks 45 -matrix point -num_large 45 -num_small 0 -num_streamIO 16 -disk_start 0 -disk_end 150 -cache_size 0
Maximum Large MBPS=2668.88 @ Small=0 and Large=45
-run advanced -type rand -testname thumper_rand -num_disks 180 -matrix point -num_large 720 -num_small 0 -duration 60 -disk_start 0 -disk_end 150 -cache_size 0
Maximum Large MBPS=1758.35 @ Small=0 and Large=720
}}}
http://husnusensoy.wordpress.com/2009/03/31/orion-io-calibration-over-sas-disks/
{{{
[oracle@consol10g orion]$ cat mytest.lun
/dev/dm-2
/dev/dm-3
/dev/dm-4
/dev/dm-5
/dev/dm-6
}}}
Small Random & Large Sequential Read Load
{{{
[oracle@consol10g orion]$ ./orion_lnx -run advanced -testname mytest -num_disks 40 -simulate raid0 -write 0 -type seq -matrix basic -cache_size 67108864 -verbose
}}}
Mix Read Load
{{{
[oracle@consol10g orion]$ ./orion_lnx -run advanced -testname mytest -num_disks 40 -simulate raid0 -write 0 -type seq -matrix detailed -cache_size 67108864 -verbose
}}}
For MySQL DW
http://www.pythian.com/news/15161/determining-io-throughput-for-a-system/
{{{
./orion -run advanced -testname mytest -num_small 0 -size_large 1024 -type rand -simulate concat -write 0 -duration 60 -matrix col
-num_small is 0 because you don't usually do small transactions in a dw.
-type rand for random I/Os, because data warehouse queries usually don't do sequential reads
-write 0 for no writes, because you do not write often to the dw, that is what the ETL is for.
-duration is in seconds
-matrix col shows you how much you can sustain
}}}
{{{
Mandatory parameters:
run Type of workload to run (simple, normal, advanced, dss, oltp)
simple - tests random 8K small IOs at various loads,
then random 1M large IOs at various loads.
normal - tests combinations of random 8K small
IOs and random 1M large IOs
advanced - run the workload specified by the user
using optional parameters
dss - run with random 1M large IOs at increasing loads
to determine the maximum throughput
oltp - run with random 8K small IOs at increasing loads
to determine the maximum IOPS
Optional parameters:
testname Name of the test run
num_disks Number of disks (physical spindles). Default is
the number of LUNs in <testname>.lun
size_small Size of small IOs (in KB) - default 8
size_large Size of large IOs (in KB) - default 1024
type Type of large IOs (rand, seq) - default rand
rand - Random large IOs
seq - Sequential streams of large IOs
num_streamIO Number of concurrent IOs per stream (only if type is
seq) - default 4
simulate Orion tests on a virtual volume formed by combining the
provided volumes in one of these ways (default concat):
concat - A serial concatenation of the volumes
raid0 - A RAID-0 mapping across the volumes
write Percentage of writes (SEE WARNING ABOVE) - default 0
cache_size Size *IN MEGABYTES* of the array's cache.
Unless this option is set to 0, Orion does a number
of (unmeasured) random IO before each large sequential
data point. This is done in order to fill up the array
cache with random data. This way, the blocks from one
data point do not result in cache hits for the next
data point. Read tests are preceded with junk reads
and write tests are preceded with junk writes. If
specified, this 'cache warming' is done until
cache_size worth of IO has been read or written.
Default behavior: fill up cache for 2 minutes before
each data point.
duration Duration of each data point (in seconds) - default 60
num_small Number of outstanding small IOs (only if matrix is
point, col, or max) - no default
num_large For random, number of outstanding large IOs.
For sequential, number of streams (only if matrix is
point, row, or max) - no default
matrix An Orion test consists of data points at various small
and large IO load levels. These points can be
represented as a two-dimensional matrix: Each column
in the matrix represents a fixed small IO load. Each
row represents a fixed large IO load. The first row
is with no large IO load and the first column is with
no small IO load. An Orion test can be a single point,
a row, a column or the whole matrix, depending on the
matrix option setting below (default basic):
basic - test the first row and the first column
detailed - test the entire matrix
point - test at load level num_small, num_large
col - varying large IO load with num_small small IOs
row - varying small IO load with num_large large IOs
max - test varying loads up to num_small, num_large
verbose Prints tracing information to standard output if set.
Default -- not set
ORION runs IO performance tests that model Oracle RDBMS IO workloads.
It measures the performance of small (2-32K) IOs and large (128K+) IOs
at various load levels. Each Orion data point is done at a specific
mix of small and large IO loads sustained for a duration. Anywhere
from a single data point to a two-dimensional array of data points can
be tested by setting the right options.
An Orion test consists of data points at various small and large IO
load levels. These points can be represented as a two-dimensional
matrix: Each column in the matrix represents a fixed small IO load.
Each row represents a fixed large IO load. The first row is with no
large IO load and the first column is with no small IO load. An Orion
test can be a single point, a row, a column or the whole matrix.
The 'run' parameter is the only mandatory parameter. Defaults
are indicated for all other parameters. For additional information on
the user interface, see the Orion User Guide.
<testname> is a filename prefix. By default, it is "orion". It can be
specified with the 'testname' parameter.
<testname>.lun should contain a carriage-return-separated list of LUNs
The output files for a test run are prefixed by <testname>_<date> where
date is "yyyymmdd_hhmm".
The output files are:
<testname>_<date>_summary.txt - Summary of the input parameters along with
min. small latency, max large MBPS
and/or max. small IOPS.
<testname>_<date>_mbps.csv - Performance results of large IOs in MBPS
<testname>_<date>_iops.csv - Performance results of small IOs in IOPS
<testname>_<date>_lat.csv - Latency of small IOs
<testname>_<date>_tradeoff.csv - Shows large MBPS / small IOPS
combinations that can be achieved at
certain small latencies
<testname>_trace.txt - Extended, unprocessed output
WARNING: IF YOU ARE PERFORMING WRITE TESTS, BE PREPARED TO LOSE ANY DATA STORED
ON THE LUNS.
Examples
For a preliminary set of data
-run simple
For a basic set of data
-run normal
To evaluate storage for an OLTP database
-run oltp
To evaluate storage for a data warehouse
-run dss
To generate combinations of 32KB and 1MB reads to random locations:
-run advanced
-size_small 32 -size_large 1024 -type rand -matrix detailed
To generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes
-run advanced
-simulate RAID0 -stripe 1024 -write 100 -type seq
-matrix col -num_small 0
}}}
Here's the HD used
Barracuda 7200 SATA 3Gb/s (375MB/s) interface 1TB Hard Drive
http://www.seagate.com/ww/v/index.jsp?vgnextoid=20b92d0ca8dce110VgnVCM100000f5ee0a0aRCRD#tTabContentOverview
see also LVMalaASM
I have also created a simple toolkit to characterize whether the existing storage subsystem meets the application's storage performance requirements; it runs the following parameter sets:
-- params_dss_randomwrites
-- params_dss_seqwrites
-- params_dss_randomreads
-- params_dss_seqreads
-- params_oltp_randomwrites
-- params_oltp_seqwrites
-- params_oltp_randomreads
-- params_oltp_seqreads
-- params_dss
-- params_oltp
Get the toolkit here http://karlarao.wordpress.com/scripts-resources/ named ''oriontoolkit.zip''
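The actual scripts are in oriontoolkit.zip above; the idea is just a wrapper that loops orion over the parameter sets listed. A hypothetical sketch of that loop (file names and contents here are illustrative, not the toolkit's actual code):
{{{
#!/bin/bash
# run orion once per workload profile; assume each params_* file holds the
# orion flags for that profile on a single line
for p in params_dss_randomwrites params_dss_seqwrites params_oltp_randomreads params_oltp; do
    ./orion -run advanced -testname "$p" $(cat "$p")
done
# collect the headline MBPS/IOPS/latency numbers from the summary files
grep -h "Maximum\|Minimum" *_summary.txt
}}}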
! Following is the summary of the Orion runs:
------------------------------------------
{{{
+++1 - a run on one datafile created on a filesystem, this is on VMWARE.. mysteriously giving optimistic results
+++2 - a run on the four 1TB hard disk, compare the numbers on the short stroked values!!! whew! way too low!
+++3 - a run on four 1TB hard disk.. but num_disk is 8
+++4 - cool, a raw short stroked partition (3 GB each disk) and not putting it on LVM is at the same performance with LVM magic! but I notice less IO% could be because there is no LVM layer
+++5 - short stroked 4 disks, applied the LVM stripe script trick and turned it into 1 piece of 12 GB LVM
+++6 - a simple orion benchmark on one disk.. not really impressive..
+++7 - 2nd run of a simple orion benchmark on one disk! but this time num_disk = 4
+++8 - 3rd run, this time num_disk = 8
+++9 - 4th run, this time num_disk = 16
+++10 - 5th run, this time num_disk = 32
+++11 - 6th run, this time num disk 64
+++12 - 7th run, this time num disk 128
+++13 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 4, 285.10 MBPS
+++14 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 8, 262.08 MBPS
+++15 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 16, 217.93 MBPS
+++16 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 24, 198.45 MBPS
+++17 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 32, 194.99 MBPS
+++18 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 64, 184.84 MBPS
+++19 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 128, 154.78 MBPS
+++20 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 256, 165.18 MBPS
+++21 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 256, 162.33 MBPS
+++22 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 1, 458.25 MBPS
+++23 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 2, 294.11 MBPS
+++24 DW sequential run, matrix col, raid 0, cache 0, streamio 8, large 0-9, 457.89 MBPS
+++31 DW sequential run, matrix point, duration 300, raid 0, cache 0, streamio 8, large 8, 256.72 MBPS
+++25 FAIL, run normal
+++26 run OLTP, 487 IOPS, 19.99ms lat
+++27 run DSS, 181.19 MBPS
+++28 FAIL, generate combinations of 32KB and 1MB reads to random locations, 340 IOPS, 40 MBPS
+++30 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, CONCAT, cache NE, streamio N/A, large 8, 139.28 MBPS
+++44 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, CONCAT, cache 0, streamio N/A, large 8, 138.71 MBPS
+++45 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 8, 138.47 MBPS
+++46 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 256, 151.89 MBPS <<< RANDOM READS
+++52 Greg Rahn - random scans, matrix point, duration 60, raid 0, cache 0, streamio N/A, large 720, 160.88 MBPS
+++53 Greg Rahn - random scans, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 720, 151.22 MBPS <<<
+++32 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, CONCAT, cache NE, streamio 4, large 8, 440.50 MBPS
+++33 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, CONCAT, cache 0, streamio 4, large 8, 441.24 MBPS
+++34 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 4, large 8, 221.29 MBPS
+++35 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 8, large 8, 254.70 MBPS
+++36 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 8, large 256, 157.62 MBPS
+++37 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 3600, RAID0, cache 0, streamio 8, large 256, 159.09 MBPS <<< SEQUENTIAL READS
+++38 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache NE, streamio 8, large 256, 157.65 MBPS
+++51 Greg Rahn - sequential scans, matrix point, duration 60, RAID0, cache 0, streamio 16, large 45, 347.92 MBPS
+++54 Greg Rahn - sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 16, large 45, 358.31 MBPS
+++55 Greg Rahn - sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 32, large 45, 359.01 MBPS
+++56 Greg Rahn - sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 45, large 45, 352.05 MBPS <<<
+++49 generate multiple random 1MB write streams, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 8, 147.55 MBPS
+++50 generate multiple random 1MB write streams, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 256, 109.53 MBPS <<< RANDOM WRITES
+++57 generate multiple random 1MB write streams, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 720, 107.31 MBPS <<<
+++47 generate multiple sequential 1MB write streams, matrix col, duration 60, CONCAT, cache NE, streamio 4, large 1-8, 421.80 MBPS
+++29 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix col, cache NE, streamio 4, large 1-8, 370.14 MBPS
+++39 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix col, cache 0, streamio 4, large 1-8, 369.68 MBPS
+++48 generate multiple sequential 1MB write streams, matrix point, duration 60, CONCAT, cache 0, streamio 8, large 8, 419.17 MBPS
+++40 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix col, cache 0, streamio 8, large 1-8, 387.46 MBPS
+++41 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 60, cache 0, streamio 8, large 8, 251.69 MBPS
+++42 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 300, cache 0, streamio 8, large 8, 249.08 MBPS
+++43 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 300, cache 0, streamio 8, large 256, 106.62 MBPS <<< SEQUENTIAL WRITES
+++58 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 300, cache 0, streamio 45, large 45, 165.57 MBPS <<<
+++59 FAIL, husnu matrix basic (seq and iops test), stopped at point4, 211.56 MBPS, 365 IOPS, 54.77 lat
+++60 FAIL, husnu matrix detailed (seq and iops test), stopped at point24 out of 189, no MBPS, 370 IOPS, 53.91 lat
+++61 SINGLE DISK RUN seq matrix point, num large 256, streamio 8, raid 0, cache0, duration 300, 53.20 MBPS
+++62 MULTIPLE ORION (4) SESSION RUN seq matrix point, num large 256, streamio 8, raid 0, cache0, duration 300, on the OS, around 200 MBPS
+++63 IOPS - read (random, seq), write (random, seq)
+++ observations: seems like when you do OLTP runs, the collectl-all outputs the wsec/s (sector writes) and not the IOPS write..
+++ I've checked it with the iostat output
+++ params_oltp_randomwrites Maximum Small IOPS=309 @ Small=256 and Large=0 Minimum Small Latency=825.24 @ Small=256 and Large=0
+++ params_oltp_seqwrites Maximum Small IOPS=312 @ Small=256 and Large=0 Minimum Small Latency=818.04 @ Small=256 and Large=0
+++ params_oltp_randomreads Maximum Small IOPS=532 @ Small=256 and Large=0 Minimum Small Latency=480.28 @ Small=256 and Large=0
+++ params_oltp_seqreads Maximum Small IOPS=527 @ Small=256 and Large=0 Minimum Small Latency=485.31 @ Small=256 and Large=0
+++ params_oltp Maximum Small IOPS=481 @ Small=80 and Large=0 Minimum Small Latency=20.34 @ Small=4 and Large=0
+++64 FAIL, increasing random writes
+++65 FULL run of oriontoolkit
+++ params_dss_randomwrites Maximum Large MBPS=108.17 @ Small=0 and Large=256
+++ params_dss_seqwrites Maximum Large MBPS=111.59 @ Small=0 and Large=256
+++ params_dss_randomreads Maximum Large MBPS=148.50 @ Small=0 and Large=256
+++ params_dss_seqreads Maximum Large MBPS=156.24 @ Small=0 and Large=256
+++ params_oltp_randomwrites Maximum Small IOPS=312 @ Small=256 and Large=0 Minimum Small Latency=816.17 @ Small=256 and Large=0
+++ params_oltp_seqwrites Maximum Small IOPS=314 @ Small=256 and Large=0 Minimum Small Latency=812.39 @ Small=256 and Large=0
+++ params_oltp_randomreads Maximum Small IOPS=530 @ Small=256 and Large=0 Minimum Small Latency=482.69 @ Small=256 and Large=0
+++ params_oltp_seqreads Maximum Small IOPS=526 @ Small=256 and Large=0 Minimum Small Latency=486.29 @ Small=256 and Large=0
+++ params_dss Maximum Large MBPS=177.65 @ Small=0 and Large=32
+++ params_oltp Maximum Small IOPS=480 @ Small=80 and Large=0 Minimum Small Latency=20.42 @ Small=4 and Large=0
+++66 ShortStroked disks 150GB/1000GB
+++ params_dss_randomwrites Maximum Large MBPS=151.57 @ Small=0 and Large=256
+++ params_dss_seqwrites Maximum Large MBPS=163.09 @ Small=0 and Large=256
+++ params_dss_randomreads Maximum Large MBPS=192.11 @ Small=0 and Large=256
+++ params_dss_seqreads Maximum Large MBPS=207.77 @ Small=0 and Large=256
+++ params_oltp_randomwrites Maximum Small IOPS=431 @ Small=256 and Large=0 Minimum Small Latency=592.28 @ Small=256 and Large=0
+++ params_oltp_seqwrites Maximum Small IOPS=427 @ Small=256 and Large=0 Minimum Small Latency=597.92 @ Small=256 and Large=0
+++ params_oltp_randomreads Maximum Small IOPS=792 @ Small=256 and Large=0 Minimum Small Latency=323.08 @ Small=256 and Large=0
+++ params_oltp_seqreads Maximum Small IOPS=794 @ Small=256 and Large=0 Minimum Small Latency=322.24 @ Small=256 and Large=0
+++ params_dss Maximum Large MBPS=216.53 @ Small=0 and Large=28
+++ params_oltp Maximum Small IOPS=711 @ Small=80 and Large=0 Minimum Small Latency=14.32 @ Small=4 and Large=0
+++ a short stroked single disk
+++ create regression on OLTP Write and DSS Write
}}}
! Following are the details of the Orion runs:
------------------------------------------
{{{
#################################################################################################################
drwxr-xr-x 2 oracle oracle 4096 Jul 8 12:23 OrionTest1
+++1 - a run on one datafile created on a filesystem, this is on VMWARE.. mysteriously giving optimistic results
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 1
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2
Total Data Points: 8
Name: /home/oracle/mytest.dbf Size: 4294967296
1 FILEs found.
Maximum Large MBPS=181.83 @ Small=0 and Large=2
Maximum Small IOPS=1377 @ Small=5 and Large=0
Minimum Small Latency=0.79 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root root 4096 Jul 13 20:42 OrionTest2
+++2 - a run on the four 1TB hard disk, compare the numbers on the short stroked values!!! whew! way too low!
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 4
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8
Total Data Points: 29
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=143.28 @ Small=0 and Large=8
Maximum Small IOPS=387 @ Small=20 and Large=0
Minimum Small Latency=13.61 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root root 4096 Jul 14 12:25 OrionTest3
+++3 - a run on four 1TB hard disk.. but num_disk is 8
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 8
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
Total Data Points: 38
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=171.55 @ Small=0 and Large=16
Maximum Small IOPS=456 @ Small=40 and Large=0
Minimum Small Latency=13.63 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root root 4096 Jul 16 07:42 OrionTest4
+++4 - cool, a raw short stroked partition (3 GB each disk) and not putting it on LVM is at the same performance with LVM magic! but I notice less IO% could be because there is no LVM layer
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 4
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8
Total Data Points: 29
Name: /dev/sdb1 Size: 3257178624
Name: /dev/sdc1 Size: 3257178624
Name: /dev/sdd1 Size: 3257178624
Name: /dev/sde1 Size: 3257178624
4 FILEs found.
Maximum Large MBPS=232.07 @ Small=0 and Large=8
Maximum Small IOPS=954 @ Small=20 and Large=0
Minimum Small Latency=6.62 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root root 4096 Dec 20 18:13 OrionTest5
+++5 - short stroked 4 disks, applied the LVM stripe script trick and turned it into 1 piece of 12 GB LVM
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 4
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8
Total Data Points: 29
Name: /dev/vgshortstroke/shortstroke Size: 13514047488
1 FILEs found.
Maximum Large MBPS=232.00 @ Small=0 and Large=8
Maximum Small IOPS=942 @ Small=20 and Large=0
Minimum Small Latency=6.61 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root root 4096 Dec 21 16:59 OrionTest6
+++6 - a simple orion benchmark on one disk.. not really impressive..
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 1
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2
Total Data Points: 8
Name: /dev/sdb Size: 1000204886016
1 FILEs found.
Maximum Large MBPS=42.64 @ Small=0 and Large=2
Maximum Small IOPS=103 @ Small=5 and Large=0
Minimum Small Latency=13.62 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root root 4096 Dec 21 18:15 OrionTest7
+++7 - 2nd run of a simple orion benchmark on one disk! but this time num_disk = 4
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 4
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8
Total Data Points: 29
Name: /dev/sdb Size: 1000204886016
1 FILEs found.
Maximum Large MBPS=52.80 @ Small=0 and Large=8
Maximum Small IOPS=135 @ Small=20 and Large=0
Minimum Small Latency=13.67 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root root 4096 Dec 21 19:19 OrionTest8
+++8 - 3rd run, this time num_disks = 8
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 8
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
Total Data Points: 38
Name: /dev/sdb Size: 1000204886016
1 FILEs found.
Maximum Large MBPS=56.74 @ Small=0 and Large=16
Maximum Small IOPS=148 @ Small=36 and Large=0
Minimum Small Latency=13.57 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root root 4096 Dec 21 20:36 OrionTest9
+++9 - 4th run, this time num_disks = 16
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 16
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32
Total Data Points: 41
Name: /dev/sdb Size: 1000204886016
1 FILEs found.
Maximum Large MBPS=56.62 @ Small=0 and Large=18
Maximum Small IOPS=154 @ Small=80 and Large=0
Minimum Small Latency=13.62 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root root 4096 Dec 22 14:04 OrionTest10
+++10 - 5th run, this time num_disks = 32
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 32
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60
Total Data Points: 44
Name: /dev/sdb Size: 1000204886016
1 FILEs found.
Maximum Large MBPS=56.11 @ Small=0 and Large=15
Maximum Small IOPS=159 @ Small=128 and Large=0
Minimum Small Latency=13.69 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root root 4096 Dec 22 16:51 OrionTest11
+++11 - 6th run, this time num_disks = 64
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 64
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120
Total Data Points: 57
Name: /dev/sdb Size: 1000204886016
1 FILEs found.
Maximum Large MBPS=55.89 @ Small=0 and Large=20
Maximum Small IOPS=160 @ Small=272 and Large=0
Minimum Small Latency=13.65 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root root 4096 Dec 23 14:30 OrionTest12
+++12 - 7th run, this time num_disks = 128.. the single disk tops out around 160 small IOPS and ~56 MBPS no matter how wide the load matrix gets (see the loop sketch after the results)
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 128
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 42, 63, 84, 105, 126, 147, 168, 189, 210, 231, 252
Total Data Points: 84
Name: /dev/sdb Size: 1000204886016
1 FILEs found.
Maximum Large MBPS=56.41 @ Small=0 and Large=18
Maximum Small IOPS=160 @ Small=352 and Large=0
Minimum Small Latency=13.61 @ Small=1 and Large=0
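the whole scaling series (+++6 to +++12) can be driven from a loop like this (a sketch; the orion binary path and the single-disk mytest.lun are assumptions):
{{{
# same one disk every time, only -num_disks grows, which just
# widens the load matrix orion tests against it
echo /dev/sdb > mytest.lun
for n in 1 4 8 16 32 64 128; do
  ./orion -run simple -testname mytest -num_disks $n
done
}}}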
#################################################################################################################
+++13 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 4, 285.10 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 4 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 4
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=285.10 @ Small=0 and Large=4
#################################################################################################################
+++14 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 8, 262.08 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 8 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=262.08 @ Small=0 and Large=8
#################################################################################################################
+++15 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 16, 217.93 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 16 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 16
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=217.93 @ Small=0 and Large=16
#################################################################################################################
+++16 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 24, 198.45 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 24 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 24
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=198.45 @ Small=0 and Large=24
#################################################################################################################
+++17 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 32, 194.99 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 32 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 32
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=194.99 @ Small=0 and Large=32
#################################################################################################################
+++18 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 64, 184.84 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 64 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 64
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=184.84 @ Small=0 and Large=64
#################################################################################################################
+++19 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 128, 154.78 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 128 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 128
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=154.78 @ Small=0 and Large=128
#################################################################################################################
+++20 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 256, 165.18 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 256 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=165.18 @ Small=0 and Large=256
#################################################################################################################
+++21 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 256, 162.33 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 256 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=162.33 @ Small=0 and Large=256
#################################################################################################################
+++22 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 1, 458.25 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 1 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 1
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=458.25 @ Small=0 and Large=1
#################################################################################################################
+++23 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 2, 294.11 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 2 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 2
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=294.11 @ Small=0 and Large=2
#################################################################################################################
+++24 DW sequential run, matrix col, raid 0, cache 0, streamio 8, large 0-9, 457.89 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix col -num_small 0 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8
Total Data Points: 9
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=457.89 @ Small=0 and Large=1
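worth noting: the sweep peaks at a single large stream (457.89 MBPS at Large=1, same as the +++22 point run) and falls off as more streams are added, since on pure disks concurrent sequential streams interleave and turn into seeks. also, to pull the headline numbers out of all these run directories at once, a quick grep does it (file name pattern assumed to be the usual *_summary.txt that orion writes):
{{{
grep -H "Maximum Large MBPS" OrionTest*/*summary*.txt
grep -H "Maximum Small IOPS" OrionTest*/*summary*.txt
grep -H "Minimum Small Latency" OrionTest*/*summary*.txt
}}}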
#################################################################################################################
+++31 DW sequential run, matrix point, duration 300, raid 0, cache 0, streamio 8, large 8, 256.72 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -duration 300 -testname mytest -matrix point -num_small 0 -num_large 8 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=256.72 @ Small=0 and Large=8
#################################################################################################################
+++25 FAIL, run normal
-----------------------------------------------------------------------------------------------------------------
#################################################################################################################
+++26 run OLTP, 487 IOPS, 19.99ms lat
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run oltp -testname mytest
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, 80
Large Columns:, 0
Total Data Points: 24
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Small IOPS=487 @ Small=80 and Large=0
Minimum Small Latency=19.99 @ Small=4 and Large=0
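a quick sanity check on these numbers (my math, not orion output): by Little's law, outstanding IOs = IOPS x latency, so
{{{
at Small=80:  avg latency = 80 / 487 IOPS = ~164 ms per 8 KB read
at Small=4:   19.99 ms measured  =>  4 / 0.01999 = ~200 IOPS
}}}
so the 487 IOPS headline comes at a painful latency; the usable OLTP point sits much lower on the curve.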
#################################################################################################################
+++27 run DSS, 181.19 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run dss -testname mytest
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 240 seconds
Small Columns:, 0
Large Columns:, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60
Total Data Points: 19
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=181.19 @ Small=0 and Large=32
#################################################################################################################
+++28 FAIL, generate combinations of 32KB and 1MB reads to random locations, 340 IOPS, 40 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -size_small 32 -size_large 1024 -type rand -matrix detailed -testname mytest
#################################################################################################################
+++30 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, CONCAT, cache NE, streamio N/A, large 8, 139.28 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type rand -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=139.28 @ Small=0 and Large=8
#################################################################################################################
+++44 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, CONCAT, cache 0, streamio N/A, large 8, 138.71 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type rand -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300 -cache_size 0 -simulate concat
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=138.71 @ Small=0 and Large=8
#################################################################################################################
+++45 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 8, 138.47 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type rand -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300 -cache_size 0 -simulate raid0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=138.47 @ Small=0 and Large=8
#################################################################################################################
+++46 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 256, 151.89 MBPS <<< RANDOM READS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type rand -testname mytest -num_disks 4 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0 -simulate raid0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=151.89 @ Small=0 and Large=256
#################################################################################################################
+++52 Greg Rahn - random scans, matrix point, duration 60, raid 0, cache 0, streamio N/A, large 720, 160.88 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type rand -testname mytest -num_disks 4 -matrix point -num_large 720 -num_small 0 -duration 60 -cache_size 0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 720
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=160.88 @ Small=0 and Large=720
#################################################################################################################
+++53 Greg Rahn - random scans, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 720, 151.22 MBPS <<<
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type rand -testname mytest -num_disks 4 -matrix point -num_large 720 -num_small 0 -duration 300 -cache_size 0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 720
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=151.22 @ Small=0 and Large=720
#################################################################################################################
+++32 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, CONCAT, cache NE, streamio 4, large 8, 440.50 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=440.50 @ Small=0 and Large=8
#################################################################################################################
+++33 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, CONCAT, cache 0, streamio 4, large 8, 441.24 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300 -cache_size 0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=441.24 @ Small=0 and Large=8
#################################################################################################################
+++34 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 4, large 8, 221.29 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -simulate raid0 -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300 -cache_size 0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=221.29 @ Small=0 and Large=8
#################################################################################################################
+++35 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 8, large 8, 254.70 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300 -cache_size 0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=254.70 @ Small=0 and Large=8
#################################################################################################################
+++36 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 8, large 256, 157.62 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest -num_disks 4 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=157.62 @ Small=0 and Large=256
#################################################################################################################
+++37 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 3600, RAID0, cache 0, streamio 8, large 256, 159.09 MBPS <<< SEQUENTIAL READS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest -num_disks 4 -matrix point -num_large 256 -num_small 0 -duration 3600 -cache_size 0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 3600 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=159.09 @ Small=0 and Large=256
#################################################################################################################
+++38 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache NE, streamio 8, large 256, 157.65 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest -num_disks 4 -matrix point -num_large 256 -num_small 0 -duration 300
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=157.65 @ Small=0 and Large=256
#################################################################################################################
+++51 Greg Rahn - sequential scans, matrix point, duration 60, RAID0, cache 0, streamio 16, large 45, 347.92 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -testname mytest -num_disks 4 -matrix point -num_large 45 -num_small 0 -num_streamIO 16 -cache_size 0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 16
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 45
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=347.92 @ Small=0 and Large=45
#################################################################################################################
+++54 Greg Rahn - sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 16, large 45, 358.31 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -testname mytest -num_disks 4 -matrix point -num_large 45 -num_small 0 -num_streamIO 16 -cache_size 0 -duration 300
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 16
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 45
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=358.31 @ Small=0 and Large=45
#################################################################################################################
+++55 Greg Rahn - sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 32, large 45, 359.01 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -testname mytest -num_disks 4 -matrix point -num_large 45 -num_small 0 -num_streamIO 32 -cache_size 0 -duration 300
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 32
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 45
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=359.01 @ Small=0 and Large=45
#################################################################################################################
+++56 Greg Rahn - sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 45, large 45, 352.05 MBPS <<<
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -testname mytest -num_disks 4 -matrix point -num_large 45 -num_small 0 -num_streamIO 45 -cache_size 0 -duration 300
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 45
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 45
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=352.05 @ Small=0 and Large=45
#################################################################################################################
+++49 generate multiple random 1MB write streams, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 8, 147.55 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type rand -matrix point -num_small 0 -cache_size 0 -num_large 8 -duration 300
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=147.55 @ Small=0 and Large=8
#################################################################################################################
+++50 generate multiple random 1MB write streams, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 256, 109.53 MBPS <<< RANDOM WRITES
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type rand -matrix point -num_small 0 -cache_size 0 -num_large 256 -duration 300
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=109.53 @ Small=0 and Large=256
#################################################################################################################
+++57 generate multiple random 1MB write streams, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 720, 107.31 MBPS <<<
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type rand -matrix point -num_small 0 -cache_size 0 -num_large 720 -duration 300
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 720
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=107.31 @ Small=0 and Large=720
#################################################################################################################
+++47 generate multiple sequential 1MB write streams, matrix col, duration 60, CONCAT, cache NE, streamio 4, large 1-8, 421.80 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate concat -write 100 -type seq -matrix col -num_small 0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 100%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8
Total Data Points: 9
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=421.80 @ Small=0 and Large=5
#################################################################################################################
+++29 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix col, cache NE, streamio 4, large 1-8, 370.14 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix col -num_small 0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8
Total Data Points: 9
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=370.14 @ Small=0 and Large=1
#################################################################################################################
+++39 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix col, cache 0, streamio 4, large 1-8, 369.68 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix col -num_small 0 -cache_size 0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8
Total Data Points: 9
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=369.68 @ Small=0 and Large=1
#################################################################################################################
+++48 generate multiple sequential 1MB write streams, matrix point, duration 60, CONCAT, cache 0, streamio 8, large 8, 419.17 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate concat -write 100 -type seq -matrix point -num_small 0 -cache_size 0 -num_streamIO 8 -num_large 8
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=419.17 @ Small=0 and Large=8
#################################################################################################################
+++40 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix col, cache 0, streamio 8, large 1-8, 387.46 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix col -num_small 0 -cache_size 0 -num_streamIO 8
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8
Total Data Points: 9
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=387.46 @ Small=0 and Large=1
#################################################################################################################
+++41 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 60, cache 0, streamio 8, large 8, 251.69 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix point -num_small 0 -cache_size 0 -num_streamIO 8 -num_large 8
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=251.69 @ Small=0 and Large=8
#################################################################################################################
+++42 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 300, cache 0, streamio 8, large 8, 249.08 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix point -num_small 0 -cache_size 0 -num_streamIO 8 -num_large 8 -duration 300
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 8
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=249.08 @ Small=0 and Large=8
#################################################################################################################
+++43 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 300, cache 0, streamio 8, large 256, 106.62 MBPS <<< SEQUENTIAL WRITES
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix point -num_small 0 -cache_size 0 -num_streamIO 8 -num_large 256 -duration 300
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=106.62 @ Small=0 and Large=256
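pulling together the four runs tagged with <<< labels, the saturated numbers for this 4-disk setup line up like this:
|!workload (4 disks, raid0 sim, cache 0)|!MBPS|
|1MB random reads, large 256|151.89|
|1MB sequential reads, streamio 8, large 256|159.09|
|1MB random writes, large 256|109.53|
|1MB sequential writes, streamio 8, large 256|106.62|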
#################################################################################################################
+++58 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 300, cache 0, streamio 45, large 45, 165.57 MBPS <<<
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix point -num_small 0 -cache_size 0 -num_streamIO 45 -num_large 45 -duration 300
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 45
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 45
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
Name: /dev/sdc Size: 1000204886016
Name: /dev/sdd Size: 1000204886016
Name: /dev/sde Size: 1000204886016
4 FILEs found.
Maximum Large MBPS=165.57 @ Small=0 and Large=45
#################################################################################################################
+++59 FAIL, husnu matrix basic (seq and iops test), stopped at point4, 211.56 MBPS, 365 IOPS, 54.77 lat
-----------------------------------------------------------------------------------------------------------------
Data Point, MBPS
1, 454.68
2, 202.64
3, 207.06
4, 211.56
Commandline:
-run advanced -testname mytest -num_disks 4 -simulate raid0 -write 0 -type seq -matrix basic -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
#################################################################################################################
+++60 FAIL, husnu matrix detailed (seq and iops test), stopped at point24 out of 189, no MBPS, 370 IOPS, 53.91 lat
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -num_disks 4 -simulate raid0 -write 0 -type seq -matrix detailed -cache_size 0 -verbose
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
#################################################################################################################
+++61 SINGLE DISK RUN seq matrix point, num large 256, streamio 8, raid 0, cache 0, duration 300, 53.20 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest -num_disks 1 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
1 FILEs found.
Maximum Large MBPS=53.20 @ Small=0 and Large=256
#################################################################################################################
+++62 MULTIPLE ORION (4) SESSION RUN seq matrix point, num large 256, streamio 8, raid 0, cache 0, duration 300; throughput observed at the OS level was around 200 MBPS total (launcher sketch after the per-session summaries)
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest1 -num_disks 1 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0
This maps to this test:
Test: mytest1
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sdb Size: 1000204886016
1 FILEs found.
Maximum Large MBPS=51.03 @ Small=0 and Large=256
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest2 -num_disks 1 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0
This maps to this test:
Test: mytest2
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sdc Size: 1000204886016
1 FILEs found.
Maximum Large MBPS=48.03 @ Small=0 and Large=256
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest3 -num_disks 1 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0
This maps to this test:
Test: mytest3
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sdd Size: 1000204886016
1 FILEs found.
Maximum Large MBPS=44.43 @ Small=0 and Large=256
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest4 -num_disks 1 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0
This maps to this test:
Test: mytest4
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:, 0
Large Columns:, 256
Total Data Points: 1
Name: /dev/sde Size: 1000204886016
1 FILEs found.
Maximum Large MBPS=41.20 @ Small=0 and Large=256
#################################################################################################################
+++63 IOPS - read (random, seq), write (random, seq)
+++ observations: it seems that on OLTP runs the collectl-all output shows wsec/s (sector writes) rather than write IOPS..
+++ I've verified this against the iostat output
-----------------------------------------------------------------------------------------------------------------
+++ params_oltp_randomwrites Maximum Small IOPS=309 @ Small=256 and Large=0 Minimum Small Latency=825.24 @ Small=256 and Large=0
+++ params_oltp_seqwrites Maximum Small IOPS=312 @ Small=256 and Large=0 Minimum Small Latency=818.04 @ Small=256 and Large=0
+++ params_oltp_randomreads Maximum Small IOPS=532 @ Small=256 and Large=0 Minimum Small Latency=480.28 @ Small=256 and Large=0
+++ params_oltp_seqreads Maximum Small IOPS=527 @ Small=256 and Large=0 Minimum Small Latency=485.31 @ Small=256 and Large=0
+++ params_oltp Maximum Small IOPS=481 @ Small=80 and Large=0 Minimum Small Latency=20.34 @ Small=4 and Large=0
#################################################################################################################
+++64 FAIL, increasing random writes
-----------------------------------------------------------------------------------------------------------------
-run advanced -testname mytest -type rand -matrix col -simulate raid0 -num_disks 4 -cache_size 0 -num_small 256 -stripe 1024 -write 100 -duration 300
#################################################################################################################
+++65 FULL run of oriontoolkit
-----------------------------------------------------------------------------------------------------------------
+++ params_dss_randomwrites Maximum Large MBPS=108.17 @ Small=0 and Large=256
+++ params_dss_seqwrites Maximum Large MBPS=111.59 @ Small=0 and Large=256
+++ params_dss_randomreads Maximum Large MBPS=148.50 @ Small=0 and Large=256
+++ params_dss_seqreads Maximum Large MBPS=156.24 @ Small=0 and Large=256
+++ params_oltp_randomwrites Maximum Small IOPS=312 @ Small=256 and Large=0 Minimum Small Latency=816.17 @ Small=256 and Large=0
+++ params_oltp_seqwrites Maximum Small IOPS=314 @ Small=256 and Large=0 Minimum Small Latency=812.39 @ Small=256 and Large=0
+++ params_oltp_randomreads Maximum Small IOPS=530 @ Small=256 and Large=0 Minimum Small Latency=482.69 @ Small=256 and Large=0
+++ params_oltp_seqreads Maximum Small IOPS=526 @ Small=256 and Large=0 Minimum Small Latency=486.29 @ Small=256 and Large=0
+++ params_dss Maximum Large MBPS=177.65 @ Small=0 and Large=32
+++ params_oltp Maximum Small IOPS=480 @ Small=80 and Large=0 Minimum Small Latency=20.42 @ Small=4 and Large=0
#################################################################################################################
+++66 ShortStroked disks 150GB/1000GB
-----------------------------------------------------------------------------------------------------------------
+++ params_dss_randomwrites Maximum Large MBPS=151.57 @ Small=0 and Large=256
+++ params_dss_seqwrites Maximum Large MBPS=163.09 @ Small=0 and Large=256
+++ params_dss_randomreads Maximum Large MBPS=192.11 @ Small=0 and Large=256
+++ params_dss_seqreads Maximum Large MBPS=207.77 @ Small=0 and Large=256
+++ params_oltp_randomwrites Maximum Small IOPS=431 @ Small=256 and Large=0 Minimum Small Latency=592.28 @ Small=256 and Large=0
+++ params_oltp_seqwrites Maximum Small IOPS=427 @ Small=256 and Large=0 Minimum Small Latency=597.92 @ Small=256 and Large=0
+++ params_oltp_randomreads Maximum Small IOPS=792 @ Small=256 and Large=0 Minimum Small Latency=323.08 @ Small=256 and Large=0
+++ params_oltp_seqreads Maximum Small IOPS=794 @ Small=256 and Large=0 Minimum Small Latency=322.24 @ Small=256 and Large=0
+++ params_dss Maximum Large MBPS=216.53 @ Small=0 and Large=28
+++ params_oltp Maximum Small IOPS=711 @ Small=80 and Large=0 Minimum Small Latency=14.32 @ Small=4 and Large=0
#################################################################################################################
+++ a short stroked single disk
-----------------------------------------------------------------------------------------------------------------
#################################################################################################################
+++ create regression on OLTP Write and DSS Write
-----------------------------------------------------------------------------------------------------------------
#################################################################################################################
}}}
{{{
Here is a repeat of a post I made back in November 2005 - in case anyone
is having trouble getting it to work on Windows. I haven't checked
lately, but at the time, it wasn't clearly documented. In retrospect,
maybe it should have been more obvious to me that I had to specify a
datafile, but it wasn't obvious at the time:
########################################################################
#######
In case anyone else wants to use ORION on Windows, I finally figured out
how to get it to work. Apparently you have to specify an actual Oracle
datafile, not just a directory or empty text file. I put
"C:\oracle\oradata\orcl\example01.dbf" in my mytest.lun file, and then
ORION worked, giving me the following command-line output:
C:\Program Files\Oracle\Orion>orion -run simple -testname mytest
-num_disks 1
ORION: ORacle IO Numbers -- Version 10.2.0.1.0
Test will take approximately 9 minutes
Larger caches may take longer
And the following results in mytest_summary.txt:
ORION VERSION 10.2.0.1.0
Commandline:
-run simple -testname mytest -num_disks 1
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2
Total Data Points: 8
Name: C:\oracle\oradata\orcl\example01.dbf Size: 157294592
1 FILEs found.
Maximum Large MBPS=9.01 @ Small=0 and Large=2
Maximum Small IOPS=52 @ Small=2 and Large=0
Minimum Small Latency=20.45 @ Small=1 and Large=0
########################################################################
#######
}}}
https://stackoverflow.com/questions/48462896/out-of-memory-in-hive-tez-with-lateral-view-json-tuple
https://stackoverflow.com/questions/48403972/oom-in-tez-hive/48407044
https://community.cloudera.com/t5/Support-Questions/Trying-to-use-Hive-EXPLODE-function-to-quot-unfold-quot-an/td-p/103694
! lateral view examples
https://community.cloudera.com/t5/Support-Questions/Hive-Explode-Lateral-View-clarification/td-p/167827
https://stackoverflow.com/questions/42403306/hive-lateral-view-explode
! documentation
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LateralView
! similarities with Xquery on DW environments
<<<
Troubleshooting the XQuery: the hidden parameter needs to be tested and the process re-run. If that does not work, they need to break the query into multiple smaller XQueries that load into a table, then do the join from there.
Basically the issue is the flattening of XML to do reporting on top of it.
I see the same issue on newer data warehouses that use newer data structures like JSON, where they run the LATERAL VIEW..EXPLODE function on the marketing data to flatten it https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LateralView
With LATERAL VIEW..EXPLODE the issues encountered are memory related (Java GC issues) when developers try to flatten hundreds of JSON leaves at a time. The usual fix is to tune the Java container memory and to lessen the columns to explode, or break the query into pieces (see the sketch after this note).
This is kind of similar to our PGA exhaustion issue.
<<<
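To make the LATERAL VIEW..EXPLODE pattern concrete, here is a minimal Hive sketch (the table and column names are made up for illustration):
{{{
-- hypothetical table: one row per campaign, tags stored as an array
CREATE TABLE marketing_raw (campaign_id INT, tags ARRAY<STRING>);

-- EXPLODE turns each array element into its own row; LATERAL VIEW joins
-- the exploded rows back to their source row
SELECT m.campaign_id, t.tag
FROM marketing_raw m
LATERAL VIEW EXPLODE(m.tags) t AS tag;

-- "breaking it into pieces" = exploding fewer leaves per query and staging
-- the output into intermediate tables before the final join, which keeps
-- the per-container memory footprint down
}}}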
http://blogs.oracle.com/optimizer/2010/07/outerjoins_in_oracle.html
outer joins 101 https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:5229892958977
<<<
outer join 101:
you have two tables -- emp and dept. dept has 4 rows (deptno = 10, 20, 30, 40). emp has 14 rows but only 3 distinct values for deptno (10,20,30).
You need to write a report that shows the DEPTNO and count of employees for ALL departments. This requires an OUTER JOIN since the natural join:
select d.deptno, count(empno)
from emp e, dept d
where e.deptno = d.deptno
group by d.deptno;
would "lose" deptno = 40. So, we:
select d.deptno, count(empno)
from emp e, dept d
where e.deptno(+) = d.deptno
group by d.deptno;
and that simply means "use DEPT as the driving table, for each row in DEPT, find and count all of the EMPNOS we find. IF there isn't a match in EMP for a given DEPTNO -- then "make up" a record in EMP with all NULL values and join that record -- so we don't drop the dept record"
<<<
<<<
If we needed to use t3.pk = t2.pk(+) -- then t2.pk will be NULL and therefore t1.pk = t2.pk will not be satisfied. HENCE, when we actually outer join t2 to t3 and "make up a row in t2", we also immediately turn around and throw it out.
THEREFORE, the results of the queries:
where t1.pk = t2.pk
and t2.pk (+) = t3.pk
and tb.c = 'SWE'
and
where t1.pk = t2.pk
and t2.pk = t3.pk
and tb.c = 'SWE'
are identical. By adding the (+) to the first one, all you did was remove many different possible execution plans. And given all of your other comments about the performance of the query with and without the (+), you removed the PLAN THAT ACTUALLY WORKS BEST from even being considered.
Anytime -- anytime -- you see:
where t1.x = t2.x(+)
and t2.any_column = any_value
you know that you can (should, must, be silly not to) remove the (+) from the query. Because if we "make up a NULL row for t2" then we KNOW that t2.any_column cannot be equal to any_value (it is NULL after all!!)
<<<
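A minimal sketch of that point against the standard EMP/DEPT demo schema; the (+) below buys nothing, because the filter on the outer-joined table throws the made-up NULL rows right back out:
{{{
-- outer join EMP to DEPT, then filter on an EMP column
select d.deptno, e.ename
from   dept d, emp e
where  d.deptno = e.deptno(+)
and    e.job = 'CLERK';      -- made-up EMP rows have e.job = NULL, so they are discarded

-- ...which returns exactly the same rows as the plain inner join
select d.deptno, e.ename
from   dept d, emp e
where  d.deptno = e.deptno
and    e.job = 'CLERK';
}}}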
{{{
----------------
--ORACLE SYNTAX
----------------
# CARTESIAN PRODUCT - if join condition is omittted
select * from
employees a, departments b (20 x 8 rows = 160 rows)
Types of Joins
Oracle Proprietary Joins (8i and prior):    SQL:1999 Compliant Joins:
- Equijoin                                  - Cross joins
- Non-equijoin                              - Natural joins
- Outer join                                - Using clause
- Self join                                 - Full or two-sided outer joins
                                            - Arbitrary join conditions for outer joins

Joins comparing SQL:1999 to Oracle Syntax
Oracle Proprietary:                         SQL:1999:
- Equijoin                                  - Natural / Inner Join
- Outer Join                                - Left Outer Join
- Self join                                 - Join On
- Non-Equijoin                              - Join Using
- Cartesian Product                         - Cross Join
# EQUIJOIN (a.k.a simple join / inner join)
SELECT last_name, employees.department_id, department_name
FROM employees, departments
WHERE employees.department_id = departments.department_id
AND last_name = 'Matos';
SELECT e.employee_id, e.last_name, e.department_id, d.department_id, d.location_id <-- WITH ALIAS
FROM employees e , departments d
WHERE e.department_id = d.department_id;
SELECT e.last_name, d.department_name, l.city <-- JOINING MORE THAN TWO TABLES (n-1)
FROM employees e, departments d, locations l
WHERE e.department_id = d.department_id
AND d.location_id = l.location_id;
--> to know how many tables to join, "n-1" (if you're joining 4 tables then you need 3 joins)
# NON-EQUIJOIN
SELECT e.last_name, e.salary, j.grade_level
FROM employees e, job_grades j
WHERE e.salary
BETWEEN j.lowest_sal AND j.highest_sal;
# OUTER JOIN (Place the outer join symbol following the name of the column in the table without the matching rows - where you want it NULL)
SELECT e.employee_id, e.last_name, e.department_id, d.department_id, d.location_id <-- GRANT DOES NOT HAVE A DEPARTMENT
FROM employees e , departments d
WHERE e.department_id = d.department_id (+);
SELECT e.last_name, d.department_name, l.city <-- CONTRACTING DEPARTMENT DOES NOT HAVE ANY EMPLOYEES
FROM employees e, departments d, locations l
WHERE e.department_id (+) = d.department_id
AND d.location_id (+) = l.location_id;
--> You use an outer join to also see rows that do not meet the join condition.
--> The outer join operator can appear on only one side of the expression the side that has information missing. It returns those rows from one table that have no direct match in the other table.
--> A condition involving an outer join cannot use the IN operator or be linked to another condition by the OR operator.
--> The UNION operator works around the issue of being able to use an outer join operator on one side of the expression. The ANSI full outer join also allows you to have an outer join on both sides of the expression.
# SELF JOIN
SELECT worker.last_name || ' works for ' || manager.last_name
FROM employees worker, employees manager
WHERE worker.manager_id = manager.employee_id;
-------------------
--SQL: 1999 SYNTAX
-------------------
# CROSS JOIN
select * from employees <-- result is Cartesian Product
cross join departments;
# NATURAL JOIN
select * from employees <-- selects rows from the two tables that have equal values in all "matched columns" (the same name & data type)
natural join departments;
# USING (similar to equijoin, but shorter code than "ON")
SELECT e.employee_id, e.last_name, d.location_id
FROM employees e
JOIN departments d
USING (department_id)
WHERE e.department_id = 90; <-- CAN'T DO THIS: do not use a "table name, alias, or qualifier" on the referenced column (ORA-25154: column part of USING clause cannot have qualifier)
select * <-- three way join
from employees a
join departments b
using (department_id)
join locations c
using (location_id);
# ON (similar to equijoin)
SELECT employee_id, city, department_name <-- three way join
FROM employees e
JOIN departments d
ON (d.department_id = e.department_id)
JOIN locations l
ON (d.location_id = l.location_id);
# LEFT OUTER JOIN
SELECT e.last_name, e.department_id, d.department_name
FROM employees e
LEFT OUTER JOIN departments d
ON (e.department_id = d.department_id);
This query retrieves all rows in the EMPLOYEES table (the left table), even if there is no match in the DEPARTMENTS table.
This query was completed in earlier releases as follows:
SELECT e.last_name, e.department_id, d.department_name
FROM hr.employees e, hr.departments d
WHERE e.department_id = d.department_id (+); -- plus sign will have null, return all emp
# RIGHT OUTER JOIN
SELECT e.last_name, e.department_id, d.department_name
FROM employees e
RIGHT OUTER JOIN departments d
ON (e.department_id = d.department_id);
This query retrieves all rows in the DEPARTMENTS table (the right table), even if there is no match in the EMPLOYEES table.
This query was completed in earlier releases as follows:
SELECT e.last_name, e.department_id, d.department_name
FROM hr.employees e, hr.departments d
WHERE e.department_id(+) = d.department_id ; -- plus sign will have null, return all dept
# FULL OUTER JOIN
SELECT e.last_name, e.department_id, d.department_name <-- SQL :1999 Syntax
FROM employees e
FULL OUTER JOIN departments d
ON (e.department_id = d.department_id);
SELECT e.last_name, e.department_id, d.department_name <-- Oracle Syntax
FROM employees e, departments d
WHERE e.department_id (+) = d.department_id
UNION
SELECT e.last_name, e.department_id, d.department_name
FROM employees e, departments d
WHERE e.department_id = d.department_id (+);
}}}
http://en.wikipedia.org/wiki/PCI_Express#Current_status
http://en.wikipedia.org/wiki/List_of_device_bandwidths
http://www.iphonetechie.com/2010/10/pdanet-4-18-cracked-deb-file-and-installation-tutorial-great-alternative-to-mywi-4-8-3-works-awesome/
! references
''Bryn Llewellyn'' http://www.oracle.com/technetwork/database/multitenant-wp-12c-1949736.pdf
Oracle multi-tenant in the real world - Working with PDBs in 12c - Mike Dietrich
{{{
https://apex.oracle.com/pls/apex/f?p=202202:2:::::P2_SUCHWORT:multi2013
}}}
Multitenant Database Management http://www.oracle.com/technetwork/issue-archive/2014/14-nov/o64ocp12c-2349447.html
Basics of the Multitenant Container Database http://www.oracle.com/technetwork/issue-archive/2014/14-sep/o54ocp12c-2279221.html
! alter parameter
{{{
alter system set parameter=value container=current|all;
select name, value from v$system_parameter where ispdb_modifiable='TRUE' order by name;
}}}
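As a quick concrete sketch (OPEN_CURSORS is just an example of a PDB-modifiable parameter):
{{{
-- connected to a PDB: changes the value only for this container
alter system set open_cursors=500 container=current;

-- connected to the root: pushes the value down to all containers
alter system set open_cursors=500 container=all;
}}}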
! create user
{{{
create user tim container=current|all;
}}}
! grant user access to different PDBs
https://blog.dbi-services.com/the-privileges-to-connect-to-a-container/
{{{
SQL> create user C##USER1 identified by oracle container=all;
User created.
SQL> grant DBA to C##USER1 container=all;
Grant succeeded.
}}}
! grant select on v$pdbs
http://oracledbpro.blogspot.com/2015/09/cant-view-data-via-common-user-in.html?m=1
{{{
alter user C##TEST set container_data=all container = current;
}}}
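The CONTAINER_DATA setting controls which containers' rows are visible, but the common user also needs select privilege on the view itself; a minimal sketch run from CDB$ROOT (C##TEST as in the linked post):
{{{
-- from the root: let the common user read v$pdbs...
grant select on sys.v_$pdbs to C##TEST;
-- ...and see rows for all containers, not just the current one
alter user C##TEST set container_data=all container=current;
}}}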
! References
http://oracle-base.com/articles/12c/articles-12c.php
<<<
@@Multitenant@@ : Overview of Container Databases (CDB) and Pluggable Databases (PDB) - This article provides a basic overview of the multitenant option, with links to more detailed articles on the functionality.
@@Multitenant@@ : Create and Configure a Container Database (CDB) in Oracle Database 12c Release 1 (12.1) - Take your first steps with the Oracle Database 12c Multitenant option by creating container databases.
@@GOOD STUFF - Multitenant@@ : Create and Configure a Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1) - Take your next steps with the Oracle Database 12c Multitenant option by creating pluggable databases.
@@GOOD STUFF - Multitenant@@ : Migrate a Non-Container Database (CDB) to a Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1) - Learn now to start converting your existing regular databases into pluggable databases in Oracle Database 12c Release 1 (12.1).
@@Multitenant@@ : Connecting to Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1) - This article explains how to connect to container databases (CDB) and pluggable databases (PDB) on Oracle 12c Release 1 (12.1).
{{{
SHOW CON_NAME
ALTER SESSION SET container = pdb1;
ALTER SESSION SET container = cdb$root;
}}}
@@Multitenant@@ : Startup and Shutdown Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1) - Learn how to startup and shutdown container databases (CDB) and pluggable databases (PDB) in Oracle 12c Release 1 (12.1).
{{{
SQL*Plus Command
ALTER PLUGGABLE DATABASE
Pluggable Database (PDB) Automatic Startup
Preserve PDB Startup State (12.1.0.2 onward)
}}}
@@Multitenant@@ : Configure Instance Parameters and Modify Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1) - This article shows how to configure instance parameters and modify the database for container databases (CDB) and pluggable databases (PDB) in Oracle Database 12c Release 1 (12.1).
@@Multitenant@@ : Manage Tablespaces in a Container Database (CDB) and Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1) - This article demonstrates how to manage tablespaces in a container database (CDB) and pluggable database (PDB) in Oracle Database 12c Release 1 (12.1).
@@Multitenant@@ : Manage Users and Privileges For Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1) - This article shows how to manage users and privileges for container databases (CDB) and pluggable databases (PDB) in Oracle Database 12c Release 1 (12.1).
{{{
Create Common Users
Create Local Users
Create Common Roles
Create Local Roles
Granting Roles and Privileges to Common and Local Users
}}}
@@Multitenant@@ : Backup and Recovery of a Container Database (CDB) and a Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1) - Learn how backup and recovery is affected by the multitenant option in Oracle Database 12c Release 1 (12.1).
{{{
RMAN Connections
Backup
Container Database (CDB) Backup
Root Container Backup
Pluggable Database (PDB) Backup
Complete Recovery
Tablespace and Datafile Backups
Container Database (CDB) Complete Recovery
Root Container Complete Recovery
Pluggable Database (PDB) Complete Recovery
Tablespace and Datafile Complete Recovery
Point In Time Recovery (PITR)
Container Database (CDB) Point In Time Recovery (PITR)
Pluggable Database (PDB) Point In Time Recovery (PITR)
Table Point In Time Recovery (PITR) in PDBs
}}}
@@Multitenant@@ : Flashback of a Container Database (CDB) in Oracle Database 12c Release 1 (12.1) - Identify the restrictions when using flashback database against a container database (CDB) in Oracle 12c.
Multitenant : Resource Manager with Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1) - Control resource allocation between pluggable databases and within an individual pluggable database.
@@Multithreaded Model@@ using THREADED_EXECUTION in Oracle Database 12c Release 1 (12.1) - Learn how to switch the database between the multiprocess and multithreaded models in Oracle Database 12c Release 1 (12.1).
--
@@Multitenant@@ : Clone a Remote PDB or Non-CDB in Oracle Database 12c (12.1.0.2) - Clone PDBs from remote PDBs and Non-CDBs over database links in Oracle Database 12c (12.1.0.2).
@@Multitenant@@ : Database Triggers on Pluggable Databases (PDBs) in Oracle 12c Release 1 (12.1) - With the introduction of the multitenant option, database event triggers can be created in the scope of the CDB or PDB.
@@Multitenant@@ : PDB Logging Clause in Oracle Database 12c Release 1 (12.1.0.2) - The PDB logging clause is used to set the default tablespace logging clause for a PDB in Oracle Database 12c Release 1 (12.1.0.2).
@@Multitenant@@ : Metadata Only PDB Clones in Oracle Database 12c Release 1 (12.1.0.2) - Make structure-only copies of PDBs using the NO DATA clause added in Oracle Database 12c Release 1 (12.1.0.2).
@@Multitenant@@ : PDB CONTAINERS Clause in Oracle Database 12c Release 1 (12.1.0.2) - The PDB CONTAINERS clause allows data to be queried across multiple PDBs in Oracle Database 12c Release 1 (12.1.0.2).
@@Multitenant@@ : PDB Subset Cloning in Oracle Database 12c Release 1 (12.1.0.2) - Use subset cloning to limit the amount of tablespaces you bring across to your new PDB.
@@Multitenant@@ : Remove APEX Installations from the CDB in Oracle Database 12c Release 1 (12.1) - This article describes how to remove APEX from the CDB so you can install it directly in a PDB.
@@Multitenant@@ : Running Scripts in Container Databases (CDBs) and Pluggable Databases (PDBs) in Oracle Database 12c Release 1 (12.1) - This article presents a number of solutions to help transition your shell scripts to work with the multitenant option.
{{{
SET CONTAINER
TWO_TASK
Secure External Password Store
Scheduler
catcon.pl
}}}
<<<
From Yong Huang...
http://yong321.freeshell.org/oranotes/LargePoolMtsPga.txt
http://yong321.freeshell.org/oranotes/PGA_and_PrivateMemViewedFromOS.txt <-- good stuff
http://yong321.freeshell.org/oranotes/PGAIncreaseWithPLSQLTable.txt
Hmm... his investigations are awesome, I wonder how DBA_HIST_PGASTAT will be useful for time series analysis
-- PGA Sizing
http://www.freelists.org/post/oracle-l/SGA-shared-pool-size,3
-- ASH PGA usage (in bytes)
https://bdrouvot.wordpress.com/2013/03/19/link-huge-pga-temp/
<<showtoc>>
! to log the PK/FK errors
{{{
ALTER TABLE MY_MASTER_TABLE ADD
CONSTRAINT MY_FK1
FOREIGN KEY (MY_LOOKUP_TABLE1_ID)
REFERENCES MY_LOOKUP_TABLE1 (ID)
ENABLE
NOVALIDATE;
ALTER TABLE MY_MASTER_TABLE ADD
CONSTRAINT MY_FK2
FOREIGN KEY (MY_LOOKUP_TABLE2_ID)
REFERENCES MY_LOOKUP_TABLE2 (ID)
ENABLE
VALIDATE
EXCEPTIONS INTO MY_EXCEPT_TABLE;
CREATE TABLE MY_EXCEPT_TABLE
(
ROW_ID ROWID,
OWNER VARCHAR2(30 BYTE),
TABLE_NAME VARCHAR2(30 BYTE),
CONSTRAINT VARCHAR2(30 BYTE)
);
}}}
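Once the VALIDATE fails, the offending rows can be pulled back through the ROWIDs logged in the exceptions table; a minimal sketch using the tables above:
{{{
-- which master rows violate the FK?
SELECT m.*
FROM   my_master_table m
WHERE  m.rowid IN (SELECT row_id FROM my_except_table);

-- fix or delete those rows, then re-validate:
-- ALTER TABLE my_master_table ENABLE VALIDATE CONSTRAINT my_fk2;
}}}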
! references
http://www.java2s.com/Code/Oracle/Table/Createtablewithforeignkey.htm
https://apexplained.wordpress.com/2013/04/20/the-emp-and-dept-tables-in-oracle/
https://www.techonthenet.com/oracle/foreign_keys/foreign_keys.php
! the plsql channel
http://tutorials.plsqlchannel.com/public/index.php - subscription good stuff
''Nice short,simple tutorial'' http://plsql-tutorial.com
''pl/sql basics video tutorial'' https://www.youtube.com/watch?v=_qBCjLKB_sM
''Debug PL/SQL''
http://st-curriculum.oracle.com/obe/db/11g/r2/prod/appdev/sqldev/plsql_debug/plsql_debug_otn.htm
http://sueharper.blogspot.com/2006/07/remote-debugging-with-sql-developer_13.html
''PL/SQL: The Scripting Language Liberator'' http://goo.gl/BIcDXL
https://www.quora.com/What-features-of-PL-SQL-should-a-beginner-tackler-first
Top 5 Basic Concept Job Interview Questions for Oracle Database PL/SQL Developers
http://www.dbasupport.com/oracle/ora11g/Basic-Concept-Interview-Questions.shtml
Converting a PV vm back into an HVM vm
http://blogs.oracle.com/wim/2011/01/converting_a_pv_vm_back_into_a.html
https://blogs.oracle.com/datawarehousing/entry/partition_wise_joins
Using Parallel Execution [ID 203238.1]
Parallel Execution the Large/Shared Pool and ORA-4031 [ID 238680.1]
What does the parameter parallel_automatic_tuning ? [ID 577869.1]
Master Note Parallel Execution Wait Events [ID 1097154.1]
WAITEVENT: "PX Deq Credit: send blkd" [ID 271767.1]
SELECTING FROM EXTERNAL TABLE WITH CLOB perform very slow and High Wait On 'Px Deq Credit: Send Blkd ' [ID 1300645.1]
Tips to Reduce Waits for "PX DEQ CREDIT SEND BLKD" at Database Level [ID 738464.1]
Old and new Syntax for setting Degree of Parallelism [ID 260845.1]
PARALLEL_EXECUTION_MESSAGE_SIZE Usage [ID 756242.1]
Report for the Degree of Parallelism on Tables and Indexes [ID 270837.1] <-- AWESOME script..
http://fahdmirza.blogspot.com/2011/04/px-deq-credit-send-blkd-tuning.html
http://dbaspot.com/oracle-server/268584-px-deq-credit-send-blkd.html
http://iamsys.wordpress.com/2010/03/24/px-deq-credit-send-blkd-caused-by-ide-sql-developer-toad-plsql-developer/
http://www.dbacomp.com.br/blog/?p=34 <-- GOOD STUFF EXPLANATION
http://oracle-dba-yi.blogspot.com/2011/01/px-deq-credit-send-blkd.html
http://webcache.googleusercontent.com/search?q=cache:UtGFixYN_PEJ:www.asktherealtom.ch/%3Fp%3D8+PX+Deq+Credit:+send+blkd&cd=1&hl=en&ct=clnk&gl=us
http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd,27
http://www.freelists.org/post/oracle-l/best-way-to-invoke-parallel-in-DW-loads,13
http://www.mail-archive.com/oracle-l@fatcity.com/msg64774.html <-- tuning large pool
http://tobeimpact.blogspot.com/2013/10/parallel-query-errors-out-with-ora.html
https://fred115.wordpress.com/2012/07/30/db-link-with-taf-does-it-auto-fail-over/
! parameters
{{{
-- essentials
parallel_max_servers - (default: automatic) The maximum number of parallel slave process that may be created on an instance. The default is calculated based on system parameters including CPU_COUNT and PARALLEL_THREADS_PER_CPU. On most systems the value will work out to be 20xCPU_COUNT.
parallel_servers_target - (default: automatic) The upper limit on the number of parallel slaves that may be in use on an instance at any given time if parallel queuing is enabled. The default is calculated automatically.
parallel_min_servers - (default: 0) The minimum number of parallel slave processes that should be kept running, regardless of usage. Usually set to eliminate the overhead of creating and destroying parallel processes.
parallel_threads_per_cpu - (default: 2) Used in various parallel calculations to represent the number of concurrent processes that a CPU can support
-- knobs
parallel_degree_policy - (default: MANUAL) Controls several parallel features including Automatic Degree of Parallelism (auto DOP), Parallel Statement Queuing and In-memory Parallel Execution
MANUAL - disables everything
LIMITED - only enables auto DOP, the PX queueing & in-memory PX remain disabled
AUTO - enables everything
parallel_execution_message_size - (default: 16384) The size of parallel message buffers in bytes.
parallel_degree_level - New in 12c. The scaling factor for default DOP calculations. When the parameter value is set to 50 then the calculated default DOP will be multiplied by .5 thus reducing it to half.
-- resource mgt
pga_aggregate_limit - New in 12c. Has nothing to do with parallel queries. This parameter limits the process PGA memory usage.
parallel_force_local - (default: FALSE) Determines whether parallel query slaves will be forced to execute only on the node that initiated the query (TRUE), or whether they will be allowed to spread on to multiple nodes in a RAC cluster (FALSE).
parallel_instance_group - Used to restrict parallel slaves to certain set of instances in a RAC cluster.
parallel_io_cap_enabled - (default: FALSE) Used in conjunction with the DBMS_RESOURCE_MANAGER.CALIBRATE_IO function to limit default DOP calculations based on the I/O capabilities of the system.
-- deprecated / old way
parallel_automatic_tuning - (default: FALSE) Deprecated since 10g. This parameter enabled an automatic DOP calculation on objects for which a parallelism attribute is set.
parallel_min_percent - (default: 0) Old throttling mechanism. It represents the minimum percentage of parallel servers that are needed for a parallel statement to execute.
-- recommended to leave it as it is
parallel_adaptive_multi_user - (default: TRUE) Old mechanism of throttling parallel statements by downgrading. Provides the ability to automatically downgrade the degree of parallelism for a given statement based on the workload when a query executes. In most cases, this parameter should be set to FALSE on Exadata, for reasons we'll discuss later in the chapter. The bigger problem with the downgrade mechanism though is that the decision about how many slaves to use is based on a single point in time, the point when the parallel statement starts.
parallel_degree_limit - (default: CPU) This parameter sets an upper limit on the DOP that can be applied to a single statement. The default means that Oracle will calculate a value for this limit based on the system's characteristics.
parallel_min_time_threshold - (default: AUTO) The minimum estimated serial execution time that will trigger auto DOP. The default is AUTO, which translates to 10 seconds. When the PARALLEL_DEGREE_POLICY parameter is set to AUTO or LIMITED, any statement that is estimated to take longer than the threshold established by this parameter will be considered a candidate for auto DOP.
parallel_server - Has nothing to do with parallel queries. Set to true or false depending on whether the database is RAC enabled or not. This parameter was deprecated long ago and has been replaced by the CLUSTER_DATABASE parameter.
parallel_server_instances - Has nothing to do with parallel queries. It is set to the number of instances in a RAC cluster.
-- underscore params
_parallel_statement_queuing - (default: FALSE) related to auto DOP, if set to TRUE this enables PX queueing
_parallel_cluster_cache_policy - (default: ADAPTIVE) related to auto DOP, if set to CACHE this enables the in-mem PX
_parallel_cluster_cache_pct - (default: 80) determines the percentage of the aggregate buffer cache size that is reserved for In-Memory PX, if segments are larger than 80% the size of the aggregate buffer cache, by default, queries using these tables will not qualify for In-Memory PX
_optimizer_ignore_hints - (default: FALSE) if set to TRUE will ignore hints
}}}
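As a quick sketch of the knobs above in action: enabling auto DOP plus statement queuing, then watching the queue (the values are illustrative only):
{{{
alter system set parallel_degree_policy = AUTO;  -- auto DOP + PX queuing + in-memory PX
alter system set parallel_servers_target = 64;   -- start queuing once 64 slaves are busy

-- queued parallel statements show up with STATUS = 'QUEUED'
select sid, sql_id, status
from   v$sql_monitor
where  status = 'QUEUED';
}}}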
! configuration
See this tiddler for details -> [[Auto DOP]]
also check out tiddlers here [[Parallel]]
Parallel Troubleshooting
http://www.oracledatabase12g.com/archives/checklist-for-performance-problems-with-parallel-execution.html
''XPLAN_ASH'' troubleshooting with ASH http://oracle-randolf.blogspot.com/2012/08/parallel-execution-analysis-using-ash.html
Parallel Processing With Standard Edition
http://antognini.ch/2010/09/parallel-processing-with-standard-edition/
Parallel_degree_limit hierarchy – CPU, IO, Auto or Integer
http://blogs.oracle.com/datawarehousing/2011/01/parallel_degree_limit_hierarch.html
Interval Partitioning and Parallel Query Limit Access Paths http://www.pythian.com/news/34543/interval-partitioning-and-parallel-query-limit-access-paths/ Parallel Distribution of aggregation and analytic functions.. gives a lot of food for thought how the chosen Parallel Distribution can influence the performance of operations
''Understanding Parallel Execution - part1'' http://www.oracle.com/technetwork/articles/database-performance/geist-parallel-execution-1-1872400.html
''Understanding Parallel Execution - part2'' http://www.oracle.com/technetwork/articles/database-performance/geist-parallel-execution-2-1872405.html
Parallel Load
{{{
alter table <table_name> parallel;
alter session enable parallel dml;
insert /*+ APPEND */ into parallel_t1
select level, 'x'
from dual
connect by level <= 1000000
;
}}}
Also consider the following illustration.
{{{
Both tables below have "nologging" set at table level.
SQL> desc redo1
Name Null? Type
----------------------------------------- -------- ----------
X NUMBER
Y NUMBER
SQL> desc redotesttab
Name Null? Type
----------------------------------------- -------- -------
X NUMBER
Y NUMBER
begin
for x in 1..10000 loop
insert into scott.redotesttab values(x,x+1);
-- or
-- insert /*+ APPEND */ into scott.redotesttab values(x,x+1);
end loop;
end;
Note: This will generate redo even if you provide the hint because this
is not a direct-load insert.
Now, consider the following bulk inserts, direct and simple.
SQL> select name,value from v$sysstat where name like '%redo size%';
NAME VALUE
----------------------------------------------------------- ----------
redo size 27556720
SQL> insert into scott.redo1 select * from scott.redotesttab;
50000 rows created.
SQL> select name,value from v$sysstat where name like '%redo size%';
NAME VALUE
----------------------------------------------------------- ----------
redo size 28536820
SQL> insert /*+ APPEND */ into scott.redo1 select * from scott.redotesttab;
50000 rows created.
SQL> select name,value from v$sysstat where name like '%redo size%';
NAME VALUE
----------------------------------------------------------- ----------
redo size 28539944
You will notice that the redo generated via the simple insert is "980100" while
a direct insert generates only "3124".
}}}
Obsolete / Deprecated Initialization Parameters in 10G
Doc ID: Note:268581.1
-- COMPATIBLE
How To Change The COMPATIBLE Parameter And What Is The Significance?
Doc ID: 733987.1
-- CHECK PARAMETER DEPENDENCIES, parameters affecting other parameters
<<<
http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CCUQFjAA&url=http%3A%2F%2Fyong321.freeshell.org%2Fcomputer%2FParameterDependencyAndStatistics.doc&ei=tzpbUJ3NJabY2gWHkYHoBw&usg=AFQjCNEWM-CRPvEED0uXs0pnpxWRltl4Bg
<<<
Master Note for Partitioning [ID 1312352.1]
http://blogs.oracle.com/db/entry/master_note_for_partitioning_id
Top Partition Performance Issues
Doc ID: Note:166215.1
How to Implement Partitioning in Oracle Versions 8 and 8i
Doc ID: Note:105317.1
How I Designed Table and Index Partitions Using Analytics
Doc ID: 729847.1
-- PARTITION
How to partition a non-partitioned table.
Doc ID: 1070693.6
How to Backup Partition of Range Partitioned Table with Local Indexes
Doc ID: 412264.1
http://blogs.sun.com/dlutz/entry/partition_alignment_guidelines_for_unified
A Comprehensive Guide to Oracle Partitioning with Samples
http://noriegaaoracleexpert.blogspot.com/2009/06/comprehensive-guide-to-oracle_16.html
SQL Access Advisor - Partitioning recommendation
http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r2/11gr2_sqlaccessadv/11gr2_sqlaccessadv_viewlet_swf.html
Compressing Subpartition Segments
http://husnusensoy.wordpress.com/2008/01/23/compressing-subpartition-segments/
From Doug, Randolf, Kerry
http://jonathanlewis.wordpress.com/2010/03/17/partition-stats/
More on Interval Partitioning
http://www.rittmanmead.com/2010/08/07/more-on-interval-partitioning/
non-partitioned to partitioned table
http://www.dbapool.com/articles/031003.html
http://arjudba.blogspot.com/2008/11/how-to-convert-non-partitioned-table-to.html
http://www.oracle-base.com/articles/8i/PartitionedTablesAndIndexes.php
! determine the potential benefit of using partitioning, and the overhead
<<<
Partitioning is first a way to facilitate administration, and only secondly a performance feature.
Unfortunately, the performance benefits are not always attainable and depend on the data and queries.
If queries select a range of data (not single block reads), they will probably benefit from partitioning.
If queries select one row at a time (single block reads), they will probably not benefit from partitioning (and may even perform worse).
Each query to a partition segment requires a little extra overhead to determine which partition to access.
For hash partitions, the overhead is a mathematical mod function that determines the partition.
For range and list partitions, a dictionary lookup is required to determine which partition the data resides.
So, the overhead is both logical reads and CPU.
For ranges of rows to be selected, the overhead is still applied but normally only once for the requested range.
Index blevel is probably less for partition indexes since there is less data in each partition.
But, the hash/range/list partition determination method overhead may not be noticeable for ranges (especially larger ranges).
Range partitioning is ideal when the partition key is a date since most queries on large tables filter by date.
Aligning range partitions with normal data access requests may result in full table/partition scans which can be really good.
So, knowing the data and data access requirements is key to a successful partitioning effort.
nice writeup by Jack Augustin
<<<
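A minimal sketch of the date-range case called out above (made-up table, just to show partition pruning):
{{{
create table sales_fact (
  sale_date date,
  amount    number
)
partition by range (sale_date)
( partition p2023 values less than (date '2024-01-01'),
  partition p2024 values less than (date '2025-01-01'),
  partition pmax  values less than (maxvalue) );

-- a range filter on the partition key prunes to a single partition
-- (see the Pstart/Pstop columns in the execution plan), so the "full scan"
-- touches only that partition's blocks
select sum(amount)
from   sales_fact
where  sale_date >= date '2024-01-01'
and    sale_date <  date '2024-04-01';
}}}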
https://agilebits.com/home/licenses
http://alternativeto.net/software/1password/
https://lastpass.com
http://keepass.info/features.html
http://keepass.info/download.html
http://www.vilepickle.com/blog/2011/04/19/00105-using-dropbox-and-keepass-synchronize-passwords-while-staying-secure
/***
|''Name:''|PasswordOptionPlugin|
|''Description:''|Extends TiddlyWiki options with non encrypted password option.|
|''Version:''|1.0.2|
|''Date:''|Apr 19, 2007|
|''Source:''|http://tiddlywiki.bidix.info/#PasswordOptionPlugin|
|''Author:''|BidiX (BidiX (at) bidix (dot) info)|
|''License:''|[[BSD open source license|http://tiddlywiki.bidix.info/#%5B%5BBSD%20open%20source%20license%5D%5D ]]|
|''~CoreVersion:''|2.2.0 (Beta 5)|
***/
//{{{
version.extensions.PasswordOptionPlugin = {
major: 1, minor: 0, revision: 2,
date: new Date("Apr 19, 2007"),
source: 'http://tiddlywiki.bidix.info/#PasswordOptionPlugin',
author: 'BidiX (BidiX (at) bidix (dot) info',
license: '[[BSD open source license|http://tiddlywiki.bidix.info/#%5B%5BBSD%20open%20source%20license%5D%5D]]',
coreVersion: '2.2.0 (Beta 5)'
};
config.macros.option.passwordCheckboxLabel = "Save this password on this computer";
config.macros.option.passwordInputType = "password"; // password | text
setStylesheet(".pasOptionInput {width: 11em;}\n","passwordInputTypeStyle");
merge(config.macros.option.types, {
'pas': {
elementType: "input",
valueField: "value",
eventName: "onkeyup",
className: "pasOptionInput",
typeValue: config.macros.option.passwordInputType,
create: function(place,type,opt,className,desc) {
// password field
config.macros.option.genericCreate(place,'pas',opt,className,desc);
// checkbox linked with this password "save this password on this computer"
config.macros.option.genericCreate(place,'chk','chk'+opt,className,desc);
// text savePasswordCheckboxLabel
place.appendChild(document.createTextNode(config.macros.option.passwordCheckboxLabel));
},
onChange: config.macros.option.genericOnChange
}
});
merge(config.optionHandlers['chk'], {
get: function(name) {
// is there an option linked with this chk ?
var opt = name.substr(3);
if (config.options[opt])
saveOptionCookie(opt);
return config.options[name] ? "true" : "false";
}
});
merge(config.optionHandlers, {
'pas': {
get: function(name) {
if (config.options["chk"+name]) {
return encodeCookie(config.options[name].toString());
} else {
return "";
}
},
set: function(name,value) {config.options[name] = decodeCookie(value);}
}
});
// need to reload options to load passwordOptions
loadOptionsCookie();
/*
if (!config.options['pasPassword'])
config.options['pasPassword'] = '';
merge(config.optionsDesc,{
pasPassword: "Test password"
});
*/
//}}}
https://blogs.oracle.com/UPGRADE/entry/why_is_every_patchset_now
-- CERTIFICATION MATRIX
Operating System, RDBMS & Additional Component Patches Required for Installation PeopleTools - Master List [ID 756571.1] <-- go here
PeopleTools Certifications - Suggested Fixes for PT 8.52 Note:1385944.1 <-- click on here
Oracle Server - Enterprise Edition (Doc ID 1100831.1) <-- click on here
Required Interim Patches for the Oracle Database with PeopleSoft [ID 1100831.1] <-- click on here
PeopleSoft Enterprise PeopleTools Certification Table of Contents [ID 759851.1]
-- PERFORMANCE
PeopleSoft Enterprise Performance on Oracle 10g Database (Doc ID 747254.1)
E-ORACLE:10g Master Performance Solution for Oracle 10g (Doc ID 656639.1)
EGP8.x: Performance issue while running Paycalc in GP with Oracle 9 and 10 as DB (Doc ID 652910.1)
EGP 8.x:Changing Global Payroll COBOL Process without changing delivered code (Doc ID 652805.1)
http://dbasrus.blogspot.com/2007/09/one-for-peoplesoft-folks.html
Performance issue with On Lines Pages and Batch Processes on Oracle 10G (Doc ID 651774.1)
Performance Issue at Tier Processing (Selection at Database Level) (Doc ID 755402.1)
Activity Batch Assignment Performance: Object Where Clause not filtering correct no records (Doc ID 518178.1)
Performance and Tuning: Oracle 10g R2 Real Application Cluster (RAC) with EnterpriseOne (Doc ID 748353.1)
Performance and Tuning UBE Performance and Tuning (Doc ID 748333.1)
Online Performance Configuration Guidelines for PeopleTools 8.45, 8.46, 8.47, 8.48 and 8.49 (Doc ID 747389.1)
Sizing System Hardware for JD Edwards EnterpriseOne (Doc ID 748339.1)
E- ORA: Is there any documentation on Oracle 10g RAC implemention in PeopleSoft? (Doc ID 663340.1)
E-INST: Does PeopleSoft support Oracle RAC (Real Application Clusters)? (Doc ID 620325.1)
E-ORA: Oracle RAC Clusterware support (Doc ID 663690.1)
How To Set Up Oracle RAC for Siebel Applications (Doc ID 473859.1)
What Are the Supported Oracle Real Application Clusters (RAC) Versions? (Doc ID 478215.1)
Oracle 10g RAC support for Analytics (Doc ID 482330.1)
PeopleTools Certification FAQs - Database Platforms - Oracle (Doc ID 756280.1)
Siebel Recommendation on table logging (Doc ID 730133.1)
What does Siebel recommend for the Oracle parameter "compatible" on 10g database (Doc ID 551979.1)
Oracle cluster (Doc ID 522337.1)
Support Status for Oracle Business Intelligence on VMware Virtualized Environments (Doc ID 475484.1)
E-PIA: Red Paper on Implementing Clustering and High Availability for PeopleSoft (Doc ID 612096.1)
747378.1 Clustering and High Availability for Enterprise Tools 8.4x (Doc ID 747378.1)
747962.1 PeopleSoft EPM Red Paper: PeopleSoft Enterprise Initial Consolidations —04/2007 (Doc ID 747962.1)
Is there a way to automatically kill long running SQL statements (Oracle DB only) at the database after a pre-determined maximum waiting time ? (Doc ID 753941.1)
E-CERT Red Hat Linux 4.0 64 bit certification (Doc ID 656686.1)
PeopleSoft Enterprise PeopleTools Certifications (Doc ID 747587.1)
PeopleSoft Performance on Oracle 10.2.0.2 http://www.freelists.org/post/oracle-l/PeopleSoft-Performance-on-Oracle-10202
-- Hidden Parameters
_disable_function_based_index
http://www.orafaq.com/parms/parm467.htm
-- SECURITY
747524.1 Securing Your PeopleSoft Application Environment (Doc ID 747524.1)
-- PAYROLL
EPY: Performance issue with work table PS_WRK_SEQ_CHECK (Doc ID 646824.1)
http://dbasrus.blogspot.com/2007/09/more-on-peoplesoft.html
http://dbasrus.blogspot.com/2007/09/one-for-peoplesoft-folks.html
EPY: Performance issue on Pay confirm process PSPEBUPD_S_BENF_NO (Doc ID 660649.1)
EPY: COBOL Performance Issues: Paycalc or other COBOL jobs take too long to run (Doc ID 607905.1)
E-ORACLE:10g Master Performance Solution for Oracle 10g (Doc ID 656639.1)
EPY - Bonus payroll performance slow due to FLSA processing (Doc ID 634806.1)
EPY 8.x:Performance issues on Paycalc/Dedcalc in release 8 SP1 and above (Doc ID 611138.1)
ETL8.8/GP8.8: Poor Performance GP Payroll Process (GPPDPRUN) modified TL Data (Doc ID 661283.1)
EGP: Performance issues with "UNKNOWN" sql statements in timing trace. (Doc ID 657792.1)
PeopleSoft Global Payroll Off-Cycle Payment Processing (Doc ID 704478.1)
EGP8.X: Global Payroll runs to 'Success' but does not process any data (Doc ID 637945.1)
EGP8.x: What are the tables to partition for Global Payroll Stream Processing ? (Doc ID 619386.1)
EGP 8.x:Changing Global Payroll COBOL Process without changing delivered code (Doc ID 652805.1)
EGP 8.9: Running payslip Generation Process using SFTP- Global Payroll (Doc ID 652909.1)
EGP8.x: Global Payroll Process fails on AIX with 105 Memory allocation error. (Doc ID 656695.1)
ETL9.0: AM/TL9.0: AM absence is doubling quantity when processing time admin. (Doc ID 664004.1)
EGP8.x : How to recognize when the Global Payroll is ending in error ? (Doc ID 636120.1)
EGP8.x: Performance issue while running Paycalc in GP with Oracle 9 and 10 as DB (Doc ID 652910.1)
EGP8.9/9.0: Is it possible to enable Commitment Reporting on Global Payroll? (Doc ID 662078.1)
EGP8.x: Global Payroll PayGroup sizing recommendation (Doc ID 639164.1)
PeopleSoft Global Payroll COBOL Array Information (Doc ID 701403.1)
EGP8.3SP1 How Far does Retro go back in history? (Doc ID 618944.1)
EGP: Deadlock when using streams and partitions (Doc ID 642914.1)
E1: 07: Pre-payroll Troubleshooting (Doc ID 625863.1)
-- TRIGGER PERFORMANCE ISSUES
Performance/Deadlock Issues Caused By SYNCID Database Triggers [ID 1059120.1]
E-WF: Database Locking Issue on PSSYSTEMID Table, Because of SYNCID Field in PSWORKLILST [ID 619750.1]
How Is SYNCID On The PS_PROJECT Record Maintained? [ID 1303668.1]
ECRM: Information about the SYNCID field and what is it used for. [ID 614739.1]
PeopleSoft Enterprise DFW Plug-In - SYNCID Database Trigger Diagnostic Check [ID 1074332.1]
TX Transaction and Enq: Tx - Row Lock Contention - Example wait scenarios [ID 62354.1]
-- PERFORMANCE INDEXES
E-AWE: Approval Framework Indexes for 9.1 Applications [ID 1289904.1]
E-AWE: Recommended Indexes for Application Cross Reference (XREF) Tables to Improve Performance of Approval Workflow Engine (AWE) [ID 1328945.1]
http://www.go-faster.co.uk/gp.stored_outlines.pdf
http://blog.psftdba.com/2010/03/oracle-plan-stability-stored-outlines.html
Exadata MAA best practices series
video: http://www.oracle.com/webfolder/technetwork/Exadata/MAA-BestP/Peoplesoft/021511_93782_source/index.htm
slides: http://www.oracle.com/webfolder/technetwork/Exadata/MAA-BestP/Peoplesoft/Peoplesoft.pdf
S317423: Deploying PeopleSoft Enterprise Applications on Exadata Tips, Techniques and Best Practices http://www.oracle.com/us/products/database/s317423-176382.pdf
Oracle PeopleSoft on Oracle Exadata Database Machine feb 2011 http://www.oracle.com/au/products/database/maa-wp-peoplesoft-on-exadata-321604.pdf
! and a bunch of other references when you google "peoplesoft on exadata"
2011 http://www.oracle.com/au/products/database/maa-wp-peoplesoft-on-exadata-321604.pdf
http://www.oracle.com/webfolder/technetwork/Exadata/MAA-BestP/Peoplesoft/Peoplesoft.pdf
2013 http://www.oracle.com/us/products/applications/peoplesoft-enterprise/psft-oracle-engineered-sys-1931256.pdf
best practices http://www.oracle.com/us/products/database/s317423-176382.pdf
2014 http://www.oracle.com/technetwork/database/availability/peoplesoft-maa-2044588.pdf
2013 http://www.oracle.com/us/products/applications/peoplesoft-enterprise/psft-payroll-engineered-sys-1931259.pdf <-- good stuff
http://hakanbiroglu.blogspot.com/2013/04/installing-peoplesoft-92-pre-build.html#.XF41O2RKjOQ
http://hakanbiroglu.blogspot.com/2013/04/extending-peoplesoft-92-virtual-machine.html#.XF7klmRKjOQ
https://mani2web.wordpress.com/2016/02/17/installing-peopletools-8-55-peoplesoft-hcm-image-16-on-virtualbox-using-dpks-part-2/
"peoplesoft virtualbox vm download"
https://www.youtube.com/watch?v=AXNcL7ZKRVw <-- good stuff , this is the patch used -16660429
PeopleSoft Update Manager (PUM) Home Page (Doc ID 1641843.2)
https://docs.oracle.com/cd/E91282_01/psft/pdf/Using_the_PeopleSoft_VirtualBox_Images_PeopleSoft_PeopleTools_8.54_Dec2015.pdf
<<<
The attached is a series of SQL I have used in the past to report on nVision activity. I have used them on PeopleSoft HR and Finance versions 7.x, 8,x, and 9.1 but for 9.x only on tools up to 8.49. I haven't done hands-on tuning for a few years. But I can't imagine the process scheduler stuff has changed all that much in regards to nVision. At the very least, if there are changes the enclosed SQL should make it more easy to adapt to anything new. The summary SQLs towards the end are pretty good ones to use on a regular basis to monitor overall reporting performance for nVision. The ones in the middle are handy for looking at what is running now from long execution time and long queue time. The ones in the beginning are good for finding detailed data for a given time period (i.e.: look at stats for every report run rather than a summary). Hope you find them useful…often getting lists from SQL is faster than logging into PeopleSoft and looking up stuff in Process Monitor. I already shot it to Rajiv and Rajesh.
<<<
{{{
set pagesize 50000
set linesize 200
col oprid format a10
col submitted format a11
col report_id format a10
col layout_id format a40
col report_scope format a10
col StartTime format a11
col EndTime format a11
col Status format a10
col QueueTime format 9999999999
col Duration format 9999999999
col servernamerun format a5
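-- note (added): the SUBSTR/INSTR predicate repeated in the queries below
-- parses the nVision REPORT_ID out of PSPRCSPARMS.ORIGPARMLIST, i.e. the
-- text sitting between the -NRN and -NBU command-line flags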
SELECT TO_CHAR(r.rundttm,'MM-DD HH24:MI') Submitted,
n.report_id,
n.layout_id,
n.report_scope,
TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime,
ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) QueueTime,
ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440) Duration
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.prcsinstance = p.prcsinstance
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
/
SELECT TO_CHAR(r.rundttm,'MM-DD HH24:MI')||','||
r.oprid||','||
n.report_id||','||
n.layout_id||','||
n.report_scope||','||
TO_CHAR(r.begindttm,'MM-DD HH24:MI')||','||
TO_CHAR(r.enddttm,'MM-DD HH24:MI')||','||
ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440)||','||
ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440)||','||
DECODE(r.runstatus,'9','Success','7','Processing','8','Cancelled','3','Error','5','Queued',runstatus)||','
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.prcsinstance = p.prcsinstance
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
/
SELECT TO_CHAR(r.rundttm,'MM-DD HH24:MI') Submitted,
r.oprid,
n.report_id,
n.layout_id,
n.report_scope,
TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime,
DECODE(r.runstatus,'9','Success','7','Processing','8','Cancelled','3','Error','5','Queued',runstatus) Status,
trunc((86400*(r.begindttm-r.rundttm))/60)-60*(trunc(((86400*(r.begindttm-r.rundttm))/60)/60)) QueueTime,
trunc((86400*(r.enddttm-r.begindttm))/60)-60*(trunc(((86400*(r.enddttm-r.begindttm))/60)/60)) Duration
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.prcsinstance = p.prcsinstance
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
/
--- SQL to pull jobs executing longer than 30 minutes
SELECT ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) QueueTime,
ROUND(TO_NUMBER(SYSDATE-r.begindttm)*1440) Duration,
r.prcsinstance,
r.oprid,
n.report_id,
n.layout_id,
n.report_scope,
TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.begindttm IS NOT NULL
AND r.enddttm IS NULL
AND r.runstatus IN ('6','7')
AND ROUND(TO_NUMBER(SYSDATE-r.begindttm)*1440) >= 30
AND r.prcsinstance = p.prcsinstance
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
ORDER BY Duration desc
/
--- CSV reports w/ processinstance during stress test
SELECT TO_CHAR(r.rundttm,'MM-DD HH24:MI')||','||
r.prcsinstance||','||
r.oprid||','||
n.report_id||','||
n.layout_id||','||
n.report_scope||','||
TO_CHAR(r.begindttm,'MM-DD HH24:MI')||','||
TO_CHAR(r.enddttm,'MM-DD HH24:MI')||','||
ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440)||','||
ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440)||','||
DECODE(r.runstatus,'9','Success','7','Processing','8','Cancelled','3','Error','5','Queued',runstatus)||','
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.rundttm>=TO_DATE('19-DEC-2006 09:00','DD-MON-YYYY HH24:MI')
AND r.rundttm <=TO_DATE('19-DEC-2006 12:00','DD-MON-YYYY HH24:MI')
AND r.prcsinstance = p.prcsinstance
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
/
--- reports w/ prcsinstance during stress test
SELECT ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440) Duration,
ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) QueueTime,
r.prcsinstance,
r.oprid,
n.report_id,
n.layout_id,
n.report_scope,
DECODE(r.runstatus,'9','Success','7','Processing','8','Cancelled','3','Error','5','Queued',runstatus) Status,
TO_CHAR(r.rundttm,'MM-DD HH24:MI') submitted,
TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.rundttm>=TO_DATE('19-DEC-2006 09:00','DD-MON-YYYY HH24:MI')
AND r.rundttm <=TO_DATE('19-DEC-2006 12:00','DD-MON-YYYY HH24:MI')
AND r.prcsinstance = p.prcsinstance
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
ORDER BY Duration desc
/
--- reports w/ prcsinstance for today
SELECT ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440) Duration,
ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) QueueTime,
r.prcsinstance,
r.oprid,
n.report_id,
n.layout_id,
n.report_scope,
TO_CHAR(r.rundttm,'MM-DD HH24:MI') submitted,
TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.rundttm>=TRUNC(SYSDATE)
AND r.prcsinstance = p.prcsinstance
AND r.runstatus = '9'
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
ORDER BY Duration
/
--- CSV w/ prcsinstance for today
SELECT ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440)||','||
ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440)||','||
r.prcsinstance||','||
r.oprid||','||
n.report_id||','||
n.layout_id||','||
n.report_scope||','||
TO_CHAR(r.rundttm,'MM-DD HH24:MI')||','||
TO_CHAR(r.begindttm,'MM-DD HH24:MI')||','||
TO_CHAR(r.enddttm,'MM-DD HH24:MI')||','
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.rundttm>=TRUNC(SYSDATE)
AND r.prcsinstance = p.prcsinstance
AND r.runstatus = '9'
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
/
-- things currently processing, longest at top
SELECT ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) QueueTime,
ROUND(TO_NUMBER(SYSDATE-r.begindttm)*1440) Duration,
r.prcsinstance,
r.oprid,
n.report_id,
n.layout_id,
n.report_scope,
TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.begindttm IS NOT NULL
AND r.enddttm IS NULL
AND r.prcsinstance = p.prcsinstance
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
ORDER BY Duration desc
/
--- things in queue, longest at bottom
SELECT ROUND(TO_NUMBER(SYSDATE-r.rundttm)*1440) QueueTime,
r.prcsinstance,
r.oprid,
n.report_id,
n.layout_id,
n.report_scope,
TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.begindttm IS NULL
AND r.prcsinstance = p.prcsinstance
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
ORDER BY QueueTime
/
--- SUMMARY SQLS ------------------------------------------------------------
--- counts by layout, duration
SELECT n.layout_id,
ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440) Duration,
COUNT(*)
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.rundttm>=TRUNC(SYSDATE)
AND r.prcsinstance = p.prcsinstance
AND r.runstatus = '9'
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
GROUP BY n.layout_id, ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440)
ORDER BY Duration
/
--- counts by duration
SELECT ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440) Duration,
COUNT(*)
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.rundttm>=TRUNC(SYSDATE)
AND r.prcsinstance = p.prcsinstance
AND r.runstatus = '9'
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
GROUP BY ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440)
ORDER BY Duration
/
--- counts by OPRID
SELECT r.oprid,
COUNT(*)
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.rundttm>=TRUNC(SYSDATE)
AND r.prcsinstance = p.prcsinstance
AND r.runstatus = '9'
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
GROUP BY r.oprid
ORDER BY count(*)
/
--- counts by queue time
SELECT ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) QueueTime,
COUNT(*)
FROM ps_nvs_report n,
psprcsrqst r,
psprcsparms p
WHERE r.rundttm>=TRUNC(SYSDATE)
AND r.prcsinstance = p.prcsinstance
AND r.runstatus = '9'
AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
AND p.origparmlist like '%-NRN%'
GROUP BY ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440)
ORDER BY ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440)
/
ROUND(TO_NUMBER(end_date - start_date)*1440) = elapsed minutes
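--- Quick demo of the minutes formula above: date subtraction yields days and
--- there are 1440 minutes in a day, so a 1-hour gap returns 60:
SELECT ROUND(TO_NUMBER(SYSDATE - (SYSDATE - 1/24))*1440) elapsed_minutes FROM dual
/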
SQL> desc ps_nvs_report
Name Null? Type
----------------------------------------- -------- ----------------------------
BUSINESS_UNIT NOT NULL VARCHAR2(5)
REPORT_ID NOT NULL VARCHAR2(8)
LAYOUT_ID NOT NULL VARCHAR2(50)
REPORT_SCOPE NOT NULL VARCHAR2(10)
NVS_DIR_TEMPLATE NOT NULL VARCHAR2(254)
NVS_DOC_TEMPLATE NOT NULL VARCHAR2(254)
NVS_LANG_TEMPLATE NOT NULL VARCHAR2(50)
NVS_EMAIL_TEMPLATE NOT NULL VARCHAR2(254)
NVS_DESCR_TEMPLATE NOT NULL VARCHAR2(254)
NVS_AUTH_TEMPLATE NOT NULL VARCHAR2(254)
OUTDESTTYPE NOT NULL VARCHAR2(3)
OUTDESTFORMAT NOT NULL VARCHAR2(3)
REQ_BU_ONLY NOT NULL VARCHAR2(1)
NPLODE_DETAILS NOT NULL VARCHAR2(1)
TRANSLATE_LEDGERS NOT NULL VARCHAR2(1)
DESCR NOT NULL VARCHAR2(30)
EFFDT_OPTN NOT NULL VARCHAR2(1)
TREE_EFFDT DATE
AS_OF_DT_OPTION NOT NULL VARCHAR2(1)
AS_OF_DATE DATE
SQL> desc psprcsrqst
Name Null? Type
----------------------------------------- -------- ----------------------------
PRCSINSTANCE NOT NULL NUMBER(38)
JOBINSTANCE NOT NULL NUMBER(38)
PRCSJOBSEQ NOT NULL NUMBER(38)
PRCSJOBNAME NOT NULL VARCHAR2(12)
PRCSTYPE NOT NULL VARCHAR2(30)
PRCSNAME NOT NULL VARCHAR2(12)
RUNLOCATION NOT NULL VARCHAR2(1)
OPSYS NOT NULL VARCHAR2(1)
DBTYPE NOT NULL VARCHAR2(1)
DBNAME NOT NULL VARCHAR2(8)
SERVERNAMERQST NOT NULL VARCHAR2(8)
SERVERNAMERUN NOT NULL VARCHAR2(8)
RUNDTTM DATE
RECURNAME NOT NULL VARCHAR2(30)
OPRID NOT NULL VARCHAR2(30)
PRCSVERSION NOT NULL NUMBER(38)
RUNSTATUS NOT NULL VARCHAR2(2)
RQSTDTTM DATE
LASTUPDDTTM DATE
BEGINDTTM DATE
ENDDTTM DATE
RUNCNTLID NOT NULL VARCHAR2(30)
PRCSRTNCD NOT NULL NUMBER(38)
CONTINUEJOB NOT NULL NUMBER(38)
USERNOTIFIED NOT NULL NUMBER(38)
INITIATEDNEXT NOT NULL NUMBER(38)
OUTDESTTYPE NOT NULL VARCHAR2(3)
OUTDESTFORMAT NOT NULL VARCHAR2(3)
ORIGPRCSINSTANCE NOT NULL NUMBER(38)
GENPRCSTYPE NOT NULL VARCHAR2(1)
RESTARTENABLED NOT NULL VARCHAR2(1)
TIMEZONE NOT NULL VARCHAR2(9)
SQL> desc psprcsparms
Name Null? Type
----------------------------------------- -------- ----------------------------
PRCSINSTANCE NOT NULL NUMBER(38)
CMDLINE NOT NULL VARCHAR2(127)
PARMLIST NOT NULL VARCHAR2(254)
WORKINGDIR NOT NULL VARCHAR2(127)
OUTDEST NOT NULL VARCHAR2(127)
ORIGPARMLIST NOT NULL VARCHAR2(254)
ORIGOUTDEST NOT NULL VARCHAR2(127)
PRCSOUTPUTDIR NOT NULL VARCHAR2(254)
}}}
http://blog.orapub.com/20140110/Creating-A-Tool-Detailing-Oracle-Database-Process-CPU-Consumption.html
google for "fulltime.sh" script
{{{
#!/bin/sh
#set -x
# Note: This script calls load_sess_stats.sql twice for data collection.
#
# Set the key variables
#
# use this for virtualised hosts:
PERF_SAMPLE_METHOD='-e cpu-clock'
# use this for physical hosts:
#PERF_SAMPLE_METHOD='-e cycles'
refresh_time=5
uid=system
pwd=oracle
workdir=$PWD
perf_file=perf_report.txt
# perf for non-root
if [ $(cat /proc/sys/kernel/perf_event_paranoid) != 0 ]; then
echo "Error: set perf_event_paranoid to 0 to allow non-root perf usage"
echo "As root: echo 0 > /proc/sys/kernel/perf_event_paranoid"
exit 1
fi
# perf sample method
echo "The perf sample method is set to: $PERF_SAMPLE_METHOD"
echo "Use cpu-clock for virtualised hosts, cycles for physical hosts"
# ctrl_c routine
ctrl_c() {
sqlplus -S / as sysdba <<EOF0 >& /dev/null
drop table op_perf_report;
drop table op_timing;
drop directory ext_dir;
EOF0
echo "End."
exit
}
trap ctrl_c SIGINT
#
sqlplus -S / as sysdba <<EOF1
set termout off echo off feed off
select /*perf profile*/
substr(a.spid,1,9) pid,
substr(b.sid,1,5) sid,
substr(b.serial#,1,5) serial#,
substr(b.machine,1,20) machine,
substr(b.username,1,10) username,
b.server,
substr(b.osuser,1,15) osuser,
substr(b.program,1,30) program
from v\$session b, v\$process a, v\$mystat c
where
b.paddr = a.addr
and b.sid != c.sid
and c.statistic# = 0
and type='USER'
order by spid
/
EOF1
read -p "Enter PID to profile: " ospid
echo "Sampling..."
# Setup, to be done once.
#
# As Oracle user
#
# Everything in this entire script is expected to be run
# from the below directory.
#
if ! ps -p $ospid >/dev/null; then ctrl_c; fi
sqlplus / as sysdba <<EOF2 >& /dev/null
set echo on feedback on verify on
create or replace directory ext_dir as '$workdir';
drop table op_perf_report;
create table op_perf_report (
overhead number,
command varchar2(100),
shared_obj varchar2(100),
symbol varchar2(100)
)
organization external (
type oracle_loader
default directory ext_dir
access parameters (
records delimited by newline
nobadfile nodiscardfile nologfile
fields terminated by ','
OPTIONALLY ENCLOSED BY '\\"' LDRTRIM
missing field values are null
)
location ('$perf_file')
)
reject limit unlimited
/
drop table op_timing;
create table op_timing (
time_seq number,
item varchar2(100),
time_s number
);
EOF2
while [ $refresh_time -gt 0 ]; do
if ! ps -p $ospid >/dev/null; then ctrl_c; fi
sqlplus / as sysdba <<EOF3 >& /dev/null
def ospid=$ospid
def timeseq=0
declare
sid_var number;
tot_cpu_s_var number;
curr_wait number;
curr_event varchar2(100);
begin
select s.sid
into sid_var
from v\$process p,
v\$session s
where p.addr = s.paddr
and p.spid = &ospid;
select sum(value/1000000)
into tot_cpu_s_var
from v\$sess_time_model
where stat_name in ('DB CPU','background cpu time')
and sid = sid_var;
insert into op_timing values (&timeseq , 'Oracle CPU sec' , tot_cpu_s_var );
insert into op_timing
select &timeseq, event, time_waited_micro/1000000
from v\$session_event
where sid = sid_var;
select wait_time_micro/1000000, event into curr_wait, curr_event from v\$session_wait where sid=sid_var;
insert into op_timing values ( 2, curr_event, curr_wait);
end;
/
-- select * from op_timing;
EOF3
if ! ps -p $ospid >/dev/null; then ctrl_c; fi
perf record -f $PERF_SAMPLE_METHOD -p $ospid >& /dev/null &
perf record -f $PERF_SAMPLE_METHOD -g -o callgraph.pdata -p $ospid >& /dev/null &
sleep $refresh_time
kill -INT %2 %1
clear
if ! ps -p $ospid >/dev/null; then ctrl_c; fi
sqlplus / as sysdba <<EOF4 >& /dev/null
def ospid=$ospid
def timeseq=1
declare
sid_var number;
tot_cpu_s_var number;
diff number;
curr_wait number;
curr_event varchar2(100);
begin
select s.sid
into sid_var
from v\$process p,
v\$session s
where p.addr = s.paddr
and p.spid = &ospid;
select sum(value/1000000)
into tot_cpu_s_var
from v\$sess_time_model
where stat_name in ('DB CPU','background cpu time')
and sid = sid_var;
insert into op_timing values (&timeseq , 'Oracle CPU sec' , tot_cpu_s_var );
insert into op_timing
select &timeseq, event, time_waited_micro/1000000
from v\$session_event
where sid = sid_var;
select count(*)
into diff
from op_timing a, op_timing b
where a.time_seq=0 and b.time_seq=1 and a.item=b.item and a.time_s<>b.time_s;
if diff = 0 then
select a.wait_time_micro/1000000-b.time_s, a.event into curr_wait, curr_event from v\$session_wait a, op_timing b
where a.sid=sid_var and b.time_seq=2;
update op_timing set time_s = time_s + curr_wait where time_seq = 1 and item = curr_event;
end if;
end;
/
EOF4
#perf report -t, 2> /dev/null | grep $ospid | grep -v [g]rep > $perf_file
perf report -t, > $perf_file 2>/dev/null
if ! ps -p $ospid >/dev/null; then ctrl_c; fi
sqlplus -S / as sysdba <<EOF5
set termout off echo off feed off
variable tot_cpu_s_var number;
variable tot_wait_s_var number;
begin
select end.time_s-begin.time_s
into :tot_cpu_s_var
from op_timing end,
op_timing begin
where end.time_seq = 1
and begin.time_seq = 0
and end.item = begin.item
and end.item = 'Oracle CPU sec';
select sum(end.time_s-begin.time_s)
into :tot_wait_s_var
from op_timing end,
op_timing begin
where end.time_seq = 1
and begin.time_seq = 0
and end.item = begin.item
and end.item != 'Oracle CPU sec';
end;
/
set echo off heading off
select 'PID: '||p.spid||' SID: '||s.sid||' SERIAL: '||s.serial#||' USERNAME: '||s.username,
'CURRENT SQL: '||substr(q.sql_text,1,70)
from v\$session s, v\$process p, v\$sql q
where s.paddr=p.addr
and s.sql_id=q.sql_id (+)
and s.sql_child_number = q.child_number (+)
and p.spid=$ospid
/
set heading on
set serveroutput on
col raw_time_s format 99990.000 heading 'Time|secs'
col item format a60 heading 'Time Component'
col perc format 999.00 heading '%'
select
'cpu : '||rpt.symbol item,
(rpt.overhead/100)*:tot_cpu_s_var raw_time_s,
((rpt.overhead/100)*:tot_cpu_s_var)/(:tot_wait_s_var+:tot_cpu_s_var)*100 perc
from op_perf_report rpt
where rpt.overhead > 2.0
union
select
'cpu : [?] sum of funcs consuming less than 2% of CPU time' item,
sum((rpt.overhead/100)*:tot_cpu_s_var) raw_time_s,
sum((rpt.overhead/100)*:tot_cpu_s_var)/(:tot_wait_s_var+:tot_cpu_s_var)*100 perc
from op_perf_report rpt
where rpt.overhead <= 2.0
group by 1,3
union
select 'wait: '||end.item,
end.time_s-begin.time_s raw_time_s,
(end.time_s-begin.time_s)/(:tot_wait_s_var+:tot_cpu_s_var)*100 perc
from op_timing end,
op_timing begin
where end.time_seq = 1
and begin.time_seq = 0
and end.item = begin.item
and end.time_s-begin.time_s > 0
and end.item != 'Oracle CPU sec'
order by raw_time_s desc
/
set serverout off feed off echo off
truncate table op_timing;
EOF5
done
#perf report -g -i callgraph.pdata > callgraph.txt 2>/dev/null
#echo "The Call Graph file is callgraph.txt"
}}}
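The per-session time accounting that the script samples between its two snapshots can also be eyeballed standalone; a minimal sketch for one session (the SID 123 is a placeholder):
{{{
select stat_name item, round(value/1000000,2) secs
  from v$sess_time_model
 where sid = 123
   and stat_name in ('DB CPU','background cpu time')
union all
select event, round(time_waited_micro/1000000,2)
  from v$session_event
 where sid = 123
 order by 2 desc
/
}}}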
http://www.solarisinternals.com/wiki/index.php/Performance_Antipatterns
! 1) From awr_genwl.sql
''AWR CPU and IO Workload Report''
__''Tables used are:''__
- dba_hist_snapshot
- dba_hist_osstat
- dba_hist_sys_time_model
- dba_hist_sysstat
__''Comparison of methods''__
comparison-LAG_WITH_comparison.txt https://www.dropbox.com/s/z33yjepi71ja3jw/comparison-LAG_WITH_comparison.txt
comparison-s0.snap_id,absolutevalue-explanation.sql https://www.dropbox.com/s/jhz3b5f0z4fs1kv/comparison-s0.snap_id%2Cabsolutevalue-explanation.sql
__''Enhancements that could be done:''__
-- I could also make use of Note 422414.1, which uses the following tables:
dba_hist_sysmetric_summary <-- the network bytes stat is interesting (Network Traffic Volume Per Sec = Network_bytes_per_sec)... Update: it is possible to add this to awr_genwl.sql, but note that metrics are computed differently from sysstat values. With sysstat you just get the delta and the rate; with metrics the sampling is different: say the snap duration is 10 mins ((intsize/100)/60), the metric layer samples on a 60-sec interval (num_interval) and records the max, min, avg, and std_dev of those samples. Keep that in mind when using these values.
-- DBA_HIST_SERVICE_STAT
-- For memory usage.. I'll put in the sysstat metric "session pga memory"; that way I'll have a rough estimate of the memory requirements of the sessions
-- Then for network usage.. I'll put in "bytes sent via SQL*Net to client" and "bytes sent via SQL*Net to dblink".. each on a separate column.. this way I'll know the network requirements (transfer rate) of specific workloads, which is useful for determining the right network capacity (on the hardware and on the wire, i.e. bandwidth). Could also be useful on a WAN setup, but I still have to do some tests. A sketch of the client-bytes column follows below.
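A minimal sketch of what that network-bytes column could look like (per-snap delta via LAG; single instance assumed; the dblink column would be the same pattern with 'bytes sent via SQL*Net to dblink'):
{{{
select lag(s.snap_id) over (order by s.snap_id) id,
       round((t.value - lag(t.value) over (order by s.snap_id))/1024/1024,2) mb_sent_to_client
from dba_hist_snapshot s, dba_hist_sysstat t
where s.dbid = t.dbid
  and s.instance_number = t.instance_number
  and s.snap_id = t.snap_id
  and t.stat_name = 'bytes sent via SQL*Net to client'
order by s.snap_id
/
}}}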
!! CPU Capacity
<<<
!!!"Snap|ID"
{{{
s0.snap_id id,
}}}
- This is the beginning SNAP_ID from dba_hist_snapshot; it is your marker when you want to drill down into a particular period by creating an AWR report with awrrpt.sql
The objective of the tool/script is that the "start and end SNAP_ID" you feed in when running @?/rdbms/admin/awrrpt.sql
are the same "start and end SNAP_ID" you see in the time series output. So when you find a peak period, you can go straight to drilling down with the larger report (awrrpt.sql)
You can see an example AWR report here (http://karlarao.tiddlyspot.com/#%5B%5BAWR%20Sample%20-%2010.2.0.3%5D%5D) which covers SNAP_ID 338-339... we usually get this report by running awrrpt.sql,
and in the time series view the values on the long report match what you see at SNAP_ID 338... look at the DB Time here (http://lh3.ggpht.com/_F2x5WXOJ6Q8/S2hR6V8NjCI/AAAAAAAAAo0/YM_c7VhFKiI/dba_hist3.png).. 1324.58÷60 = 22.08... that is the beauty of the script..
Example using LAG
{{{
select * from
(
select
lag(a.snap_id) over(order by a.snap_id) as id,
b.value-lag(b.value) over(order by a.snap_id) delta
from dba_hist_snapshot a, dba_hist_osstat b
where
a.dbid = b.dbid
and a.instance_number = b.instance_number
and a.snap_id = b.snap_id
and b.stat_name='BUSY_TIME'
order by a.snap_id
)
where id = 338
ID DELTA
---------- ----------
338 46982
}}}
// NOTE:
- Before, I had issues with the LAG function because it made this column use s1.snap_id, which is wrong.. but I finally figured out how to make sense of LAG.
- The s0.snap_id must be used as the column when doing the SQL trick "e.snap_id = s0.snap_id + 1" (see the old version of the scripts, sketched below)
//
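A minimal sketch of that older self-join approach, for contrast. It assumes consecutive SNAP_IDs with no gaps (a purged snapshot breaks the "+ 1" join, which is why LAG is the better approach):
{{{
select s0.snap_id id, e.value - b.value delta
from dba_hist_snapshot s0,
     dba_hist_osstat b,
     dba_hist_osstat e
where b.dbid = s0.dbid
  and b.instance_number = s0.instance_number
  and b.snap_id = s0.snap_id
  and e.dbid = s0.dbid
  and e.instance_number = s0.instance_number
  and e.snap_id = s0.snap_id + 1
  and b.stat_name = 'BUSY_TIME'
  and e.stat_name = 'BUSY_TIME'
order by s0.snap_id
/
}}}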
!!!"Snap|Start|Time"
{{{
TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm,
}}}
- This is the time value associated with the SNAP_ID
!!!"i|n|s|t|#"
{{{
s0.instance_number inst,
}}}
- The instance number, on a RAC environment you have to run the script on each of the nodes
!!!"Snap|Dur|(m)"
{{{
round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
}}}
- This is the "Elapsed" value that you see on the AWR report. The delta value of Begin and End Snaps.
- The unit is in minutes, the long AWR report usually shows it in minutes
!!!"C|P|U"
{{{
s3t1.value AS cpu,
}}}
- From the Oracle perspective, this is the number of CPUs the database sees on the host.
- Based on dba_hist_osstat value s3t1.stat_name = 'NUM_CPUS'
!!!"***|Total|CPU|Time|(s)"
{{{
(round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value cap,
}}}
- The formula is
''(Snap Dur minutes * 60) * NUM_CPUS''
- The unit is in seconds
- Essentially this is how many seconds of CPU time you can consume in a particular snap period. Remember that CPU cycles are finite, but you can endlessly accumulate WAIT time. On a usual 10-min snap duration on a single CPU, the capacity would be 600 seconds.. if in that period you incurred a total of 500 seconds of CPU (see the requirements section) then you are at roughly 83% CPU utilization (500 sec / 600 sec)
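The same arithmetic as a quick sanity check (the 500-second and 1-CPU/10-minute figures are just the example above):
{{{
select round(500 / (10*60*1) * 100, 1) cpu_util_pct from dual;
-- 83.3
}}}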
<<<
!! CPU requirements
<<<
!!!"DB|Time"
{{{
(s5t1.value - s5t0.value) / 1000000 as dbt,
}}}
!!!"DB|CPU"
{{{
(s6t1.value - s6t0.value) / 1000000 as dbc,
}}}
!!!"Bg|CPU"
{{{
(s7t1.value - s7t0.value) / 1000000 as bgc,
}}}
!!!"RMAN|CPU"
{{{
round(DECODE(s8t1.value,null,'null',(s8t1.value - s8t0.value) / 1000000),2) as rman,
}}}
!!!"A|A|S"
{{{
((s5t1.value - s5t0.value) / 1000000)/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,
- - - - - -
AAS = DB Time/Elapsed Time
= (1871.36/60)/10.06
= 3.100331345
}}}
!!!"***|Total|Oracle|CPU|(s)"
{{{
round(((s6t1.value - s6t0.value) / 1000000) + ((s7t1.value - s7t0.value) / 1000000),2) totora,
}}}
!!!"OS|Load"
{{{
round(s2t1.value,2) AS load,
}}}
!!!"***|Total|OS|CPU|(s)"
{{{
(s1t1.value - s1t0.value)/100 AS totos,
}}}
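Pulling the DB Time pieces above together, a minimal standalone sketch (per-snap DB Time delta and AAS via LAG; single instance assumed, so no partitioning by dbid/instance_number):
{{{
select id, to_char(tm,'YY/MM/DD HH24:MI') tm, round(dbt,2) dbt_s, round(dbt/60/dur,2) aas
from (
  select lag(s.snap_id) over (order by s.snap_id) id,
         lag(s.end_interval_time) over (order by s.snap_id) tm,
         (t.value - lag(t.value) over (order by s.snap_id))/1000000 dbt,
         (cast(s.end_interval_time as date)
          - cast(lag(s.end_interval_time) over (order by s.snap_id) as date))*1440 dur
  from dba_hist_snapshot s, dba_hist_sys_time_model t
  where s.dbid = t.dbid
    and s.instance_number = t.instance_number
    and s.snap_id = t.snap_id
    and t.stat_name = 'DB time'
)
where id is not null
order by id
/
}}}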
<<<
!! Memory requirements
<<<
!!!"Physical|Memory|(mb)"
{{{
s4t1.value/1024/1024 AS mem,
}}}
<<<
!! IO requirements
<<<
!!!"IOPs|r"
{{{
((s15t1.value - s15t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as IORs,
}}}
!!!"IOPs|w"
{{{
((s16t1.value - s16t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as IOWs,
}}}
!!!"IOPs|redo"
{{{
((s13t1.value - s13t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as IORedo,
}}}
!!!"IO r|(mb)/s"
{{{
(((s11t1.value - s11t0.value)* &_blocksize)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
as IORmbs,
}}}
!!!"IO w|(mb)/s"
{{{
(((s12t1.value - s12t0.value)* &_blocksize)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
as IOWmbs,
}}}
!!!"Redo|(mb)/s"
{{{
((s14t1.value - s14t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
as redosizesec,
}}}
<<<
!! some SYSSTAT delta values
<<<
!!!"Sess"
{{{
s9t0.value logons,
}}}
!!!"Exec|/s"
{{{
((s10t1.value - s10t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
) as exs,
}}}
<<<
!! CPU Utilization
<<<
!!!"Oracle|CPU|%"
{{{
((round(((s6t1.value - s6t0.value) / 1000000) + ((s7t1.value - s7t0.value) / 1000000),2)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oracpupct,
}}}
!!!"RMAN|CPU|%"
{{{
((round(DECODE(s8t1.value,null,'null',(s8t1.value - s8t0.value) / 1000000),2)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as rmancpupct,
}}}
!!!"OS|CPU|%"
{{{
(((s1t1.value - s1t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpupct,
}}}
!!!"U|S|R|%"
{{{
(((s17t1.value - s17t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpuusr,
}}}
!!!"S|Y|S|%"
{{{
(((s18t1.value - s18t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpusys,
}}}
!!!"I|O|%"
{{{
(((s19t1.value - s19t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpuio
}}}
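As a worked example, using the BUSY_TIME delta of 46982 centiseconds from the LAG demo earlier over a 10.05-minute snap, and assuming a hypothetical 2-CPU host:
{{{
select round((46982/100) / (10.05*60*2) * 100, 2) oscpupct from dual;
-- ~38.96 (%)
}}}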
<<<
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
! 2) From awr_topevents.sql
''AWR Top Events Report, a version of "Top 5 Timed Events" but across SNAP_IDs with AAS metric''
{{{
Sample output:
AWR Top Events Report
i
n
Snap s Snap A
Start t Dur Event Time Avgwt DB Time A
SNAP_ID Time # (m) Event Rank Waits (s) (ms) % S Wait Class
---------- --------------- --- ---------- ---------------------------------------- ----- -------------- -------------- -------- ------- ------ ---------------
338 10/01/17 06:50 1 10.05 CPU time 1 0.00 435.67 0.00 33 0.7 CPU
338 10/01/17 06:50 1 10.05 db file sequential read 2 18506.00 278.94 15.07 21 0.5 User I/O
338 10/01/17 06:50 1 10.05 PX Deq Credit: send blkd 3 79918.00 177.36 2.22 13 0.3 Other
338 10/01/17 06:50 1 10.05 direct path read 4 374300.00 148.74 0.40 11 0.2 User I/O
338 10/01/17 06:50 1 10.05 log file parallel write 5 2299.00 82.60 35.93 6 0.1 System I/O
}}}
{{{
AWR Top Events Report
i
n
Snap s Snap A
Start t Dur Event Time Avgwt DB Time A
SNAP_ID Time # (m) Event Rank Waits (s) (ms) % S Wait Class
---------- --------------- --- ---------- ---------------------------------------- ----- -------------- -------------- -------- ------- ------ ---------------
336 10/01/17 06:30 1 10.12 direct path read 1 49893.00 955.83 19.16 51 1.6 User I/O
336 10/01/17 06:30 1 10.12 db file sequential read 2 9477.00 472.07 49.81 25 0.8 User I/O
336 10/01/17 06:30 1 10.12 db file parallel write 3 3776.00 286.48 75.87 15 0.5 System I/O
336 10/01/17 06:30 1 10.12 log file parallel write 4 2575.00 163.31 63.42 9 0.3 System I/O
336 10/01/17 06:30 1 10.12 log file sync 5 1564.00 156.64 100.15 8 0.3 Commit
}}}
__''Tables used are:''__
- dba_hist_snapshot
- dba_hist_system_event
- dba_hist_sys_time_model
<<<
!!!# "Snap|Start|Time"
!!!# "Snap|ID"
!!!# "i|n|s|t|#"
!!!# "Snap|Dur|(m)"
!!!# "C|P|U"
!!!# "A|A|S"
{{{
AAS = DB Time/Elapsed Time
Begin Snap: 338 17-Jan-10 06:50:58 31 2.9
End Snap: 339 17-Jan-10 07:01:01 30 2.2
01/17/10 06:50:58
01/17/10 07:01:01
Elapsed (SnapDur): 10.05 (mins) = 603 (sec)
DB Time: 22.08 (mins) = 1324.8 (sec)
AAS = 2.197014925 <-- ADDM AAS is 2.2, ASHRPT AAS is 2.7
-- THIS IS DB CPU / DB TIME... TO GET % OF DB CPU ON DB TIME ON TOP 5 TIMED EVENTS SECTION
((round ((s6t1.value - s6t0.value) / 1000000, 2)) / ((s5t1.value - s5t0.value) / 1000000))*100 as pctdbt,
-- THIS IS DB CPU (min) / SnapDur (min) TO GET THE % OF AAS
(round ((s6t1.value - s6t0.value) / 1000000, 2))/60 / round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,
------ FROM AWR ... TOTAL AAS is 1.8.. 2.3 if you include the other events at the bottom
A
Time Avgwt DB Time A
SNAP_ID Event Waits (s) (ms) % S Wait Class
---------- ---------------------------------------- -------------- -------------- -------- ------- ------ ---------------
338 CPU time 0.00 435.67 0.00 33 0.7
338 db file sequential read 18506.00 278.94 15.07 21 0.5 User I/O
338 PX Deq Credit: send blkd 79918.00 177.36 2.22 13 0.3 Other
338 direct path read 374300.00 148.74 0.40 11 0.2 User I/O
338 log file parallel write 2299.00 82.60 35.93 6 0.1 System I/O
------ FROM ASHRPT ... TOTAL AAS is 1.99.. 2.47 if you include the other events at the bottom
Top User Events DB/Inst: IVRS/ivrs (Jan 17 06:50 to 07:01)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU CPU 36.20 0.98
PX Deq Credit: send blkd Other 12.88 0.35
db file sequential read User I/O 12.27 0.33
direct path read User I/O 7.36 0.20
PX qref latch Other 4.91 0.13
-------------------------------------------------------------
Top Background Events DB/Inst: IVRS/ivrs (Jan 17 06:50 to 07:01)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
db file sequential read User I/O 6.75 0.18
db file parallel write System I/O 3.68 0.10
log file parallel write System I/O 3.68 0.10
control file parallel write System I/O 1.84 0.05
log file sequential read System I/O 1.84 0.05
-------------------------------------------------------------
}}}
!!!# "Event"
!!!# "Waits"
!!!# "Time|(s)"
!!!# "Avgwt|(ms)"
!!!# "Idle"
!!!# "DB Time|%"
!!!# "Wait Class"
<<<
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
! 3) RAC stuff
Global Cache Load Profile
{{{
-- Estimated Interconnect traffic
ROUND(((RPT_PARAMS(STAT_DBBLK_SIZE) *
(RPT_STATS(STAT_GC_CR_RV) + RPT_STATS(STAT_GC_CU_RV) +
RPT_STATS(STAT_GC_CR_SV) + RPT_STATS(STAT_GC_CU_SV))) +
(200 *
(RPT_STATS(STAT_GCS_MSG_RCVD) + RPT_STATS(STAT_GES_MSG_RCVD) +
RPT_STATS(STAT_GCS_MSG_SNT) + RPT_STATS(STAT_GES_MSG_SNT))))
/ 1024 / RPT_STATS(STAT_ELAPSED), 2);
}}}
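A worked instance of that formula with made-up numbers (8K block size, 3,000 total blocks received+served, 10,000 GCS+GES messages at the assumed ~200 bytes each, 600 seconds elapsed; result is KB/s):
{{{
select round((8192*3000 + 200*10000)/1024/600, 2) interconnect_kb_per_sec from dual;
-- 43.26
}}}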
Global Cache Efficiency Percentages - Target local+remote 100%
Global Cache and Enqueue Services - Workload Characteristics
Global Cache and Enqueue Services - Messaging Statistics
-- More RAC Statistics
-- RAC Report Summary
Global CR Served Stats
Global CURRENT Served Stats
Global Cache Transfer Stats
Global Enqueue Statistics
Segments by Global Cache Buffer Busy <-- possible
Global Cache Transfer Stats <-- possible
{{{
New interactive report for analyzing AWR data
* Performance Hub report generated from SQL*Plus
* @$ORACLE_HOME/rdbms/admin/perfhubrpt.sql
* OR calling the dbms_perf.report_perfhub(...) function
* Single view of DB performance
* ADDM, SQL Tuning, Real-Time SQL Monitoring, ASH Analytics
* Switch between ASH analytics, workload view, ADDM findings and SQL monitoring seamlessly
* Supports both real-time & historical mode
* Historical view of SQL Monitoring reports
}}}
http://www.oracle.com/technetwork/oem/db-mgmt/con8450-sqltuning-expertspanel-2338901.pdf
''DBMS_PERF'' http://docs.oracle.com/database/121/ARPLS/d_perf.htm#ARPLS75006
''Oracle Database 12c: EM Express Performance Hub'' http://www.oracle.com/technetwork/database/manageability/emx-perfhub-1970118.html
''Oracle Database 12c: EM Express Active Reports'' http://www.oracle.com/technetwork/database/manageability/emx-activerep-1970119.html
! usage
RDBMS 12.1.0.2 & Cell 12.1.2.1.0 expose detailed Exadata statistics in the historical perfhub report https://twitter.com/karlarao/status/573025645254479872
{{{
-- active
set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool perfhub_active.html
select dbms_perf.report_perfhub(is_realtime=>1,type=>'active') from dual;
spool off
-- historical
set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool perfhub_active2.html
select dbms_perf.report_perfhub(is_realtime=>0,type=>'active') from dual;
spool off
-- historical, without explicitly specifying the "type"
set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool perfhub_active3.html
select dbms_perf.report_perfhub(is_realtime=>0) from dual;
spool off
}}}
! 11.2.0.4 vs 12c
''11204''
{{{
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: html
}}}
''12c''
{{{
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
AWR reports can be generated in the following formats. Please enter the
name of the format at the prompt. Default value is 'html'.
'html' HTML format (default)
'text' Text format
'active-html' Includes Performance Hub active report
Enter value for report_type: active-html
}}}
Here's my investigation on the topic and the reason why it's 24 cores on an X2-2 box... and some quirks on the graphing of the "CPU cores" line
http://www.evernote.com/shard/s48/sh/c7f8b7b5-4ceb-40e3-b877-9d00380749af/d76f3f66364a6454a9adafc2ae24c798
http://blogs.oracle.com/rtd/entry/performance_tips
http://www.red-gate.com/products/oracle-development/deployment-suite-for-oracle/webinars/webinar-archive
! Session/System level perf monitoring
* Perfsheet (Performance Visualization) – For Session Monitoring, uses excel sheet
* Ashmon (Active Session Monitoring) – For monitoring database sessions. Ashmon on 64bit http://db-optimizer.blogspot.com/2010/10/ashmon-on-64bit-oracle-11gr2.html, by marcin at github https://github.com/pioro/orasash/
* DB Optimizer - the production version of Ashmon, with cool Visual SQL Tuning! (just like Dan Tow has envisioned)
* ASH Viewer by Alexander Kardapolov http://j.mp/dNidrB, http://ronr.blogspot.com/2012/10/ash-for-standard-edition-or-without.html
* Lab128 (trial software) – Tool for Oracle tuning, monitoring, and tracing SQL/stored procedure transactions http://www.lab128.com/lab128_download.html http://www.lab128.com/lab128_new_features.html http://www.lab128.com/lab128_rg/html/contents.html Lab128 has automated the pstack sampling, os_explain, & reporting. Good tool to know where the query was spending time http://goo.gl/fyH5x
* Mumbai (freeware) - Performance monitoring tool that integrated Snapper, Orasrp, Statspack viewer, alert log viewer, nice session level profiling, and lots of good stuff! https://marcusmonnig.wordpress.com/mumbai/
* EMlight by Obzora http://obzora.com/home.html - a lightweight web based EM
* Google Chrome AWR Formatter by Tyler Muth - http://tylermuth.wordpress.com/2011/04/20/awr-formatter/ - when you want to drill down on AWR statistics for a specific SNAP_ID this tool can be very helpful. It works only on the HTML format of AWR. I would use it together with the Firefighting Diagnosis excel template of Craig Shallahamer to quickly account for RT = ST + QT (response time = service time + queue time)
* Snapper (Oracle Session Snapper) - Reports Oracle session level performance counter and wait information in real time http://tech.e2sn.com/oracle-scripts-and-tools/session-snapper - doesn't require Diag&Tuning pack
* MOATS - http://blog.tanelpoder.com/2011/03/29/moats-the-mother-of-all-tuning-scripts/ , http://www.oracle-developer.net/utilities.php
* RAC-aware MOATS - http://jagjeet.wordpress.com/2012/05/13/sqlplus-dashboard-for-rac/ has a cool AAS dashboard with Exadata metrics (smart scans, flash cache, etc.) - this requires Diag&Tuning Pack
* oratop (MOS 1500864.1) - near real-time monitoring of databases, RAC and Single Instance, much like RAC-aware MOATS - doesn't require Diag&Tuning pack, no cool AAS dashboard
* Oracle LTOM (Oracle Lite Onboard Monitor) – Provides automatic session tracing
* Orapub's OSM scripts - A toolkit for database monitoring and workload characterization
* JL references http://jonathanlewis.wordpress.com/2009/06/23/glossary/ , http://jonathanlewis.wordpress.com/2009/12/18/simple-scripts/ , http://jonathanlewis.wordpress.com/statspack-examples/ , http://jonathanlewis.wordpress.com/2010/03/17/partition-stats/
* List of end-user monitoring tools http://www.real-user-monitoring.com/the-complete-list-of-end-user-experience-monitoring-tools/ , http://www.alexanderpodelko.com/PerfManagement.html
* [[ASH masters, AWR masters]] - a collection of ASH and AWR scripts I've been using for years to do session level profiling and workload characterization
* orachk collection manager http://www.fuadarshad.com/2015/02/exadata-12c-new-features-rmoug-slides.html
* [[report_sql_monitor_html.sql]] sql monitor reports
* [[Performance Hub report]] performance hub reports
! SQL Tuning
* SQLTXPLAIN (Oracle Extended Explain Plan Statistics) – Provides details about all the schema objects on which the SQL statement depends.
* Orasrp (Oracle Session Resource Planner) – Builds complete detailed session profile
* gxplan - Visualization of explain plan
* 10053 viewer - http://jonathanlewis.wordpress.com/2010/04/30/10053-viewer/
! Forecasting
* r2toolkit - http://karlarao.tiddlyspot.com/#r2project This is a performance toolkit that uses AWR data and Linear Regression to identify what metric/statistic is driving the database server’s workload. The data points can be very useful for capacity planning giving you informed decisions and completely avoiding guesswork!
Kyle's notes
https://sites.google.com/site/oraclemonitor/notes
<<showtoc>>
{{{
network diagnostic tools
ping
traceroute
host
dig
netstat
gnome-netttool (GUI)
verify ip connectivity
ping <-- packet loss & latency measurement tool (sends ICMP - internet control message protocol, default is 64 bytes)
traceroute <-- displays network path to a destination (uses UDP frames to probe the path)
mtr <-- a tool that combines ping & traceroute
New and Modified utilities
ping6
traceroute6
tracepath6
ip -6
host -t AAAA hostname6.domain6
ip
route -n <-- display routing table
traceroute <ip> <-- diagnose routing problems
}}}
! step by step
http://www.ateam-oracle.com/testing-latency-and-throughput/
<<<
The following is a simple list of steps to collect throughput and latency data.
Run MTR to see general latency and packet loss between servers.
Execute a multi-stream iperf test to see total throughput.
Execute UDP/jitter test if your setup will be using UDP between servers.
Execute jmeter tests against application/rest endpoint(s).
<<<
! latency and hops
https://www.digitalocean.com/community/tutorials/how-to-use-traceroute-and-mtr-to-diagnose-network-issues
https://www.thegeekdiary.com/how-to-use-qperf-to-measure-network-bandwidth-and-latency-performance-in-linux/
https://arjanschaaf.github.io/is-the-network-the-limit/
http://paulbakker.io/docker/docker-cloud-network-performance/
How to use qperf to measure network bandwidth and latency performance https://access.redhat.com/solutions/2122681
https://www.opsdash.com/blog/network-performance-linux.html
! Using Traceroute, Ping, MTR, PathPing, and qperf
https://www.pluralsight.com/blog/it-ops/troubleshoot-ping-traceroute
https://www.cisco.com/en/US/docs/internetworking/troubleshooting/guide/tr1907.html#wp1020813
Displaying Routing Information With the traceroute Command https://docs.oracle.com/cd/E23824_01/html/821-1453/ipv6-admintasks-72.html
Troubleshoot network performance issues with ping and traceroute https://help.salesforce.com/s/articleView?id=000326878&type=1
Using Traceroute, Ping, MTR, and PathPing https://www.clouddirect.net/knowledge-base/KB0011455/using-traceroute-ping-mtr-and-pathping
https://www.howtogeek.com/134132/how-to-use-traceroute-to-identify-network-problems/
! latency and throughput example commands
https://www.ateam-oracle.com/testing-latency-and-throughput
! bandwidth (iperf)
Oracle Cloud Infrastructure: Bandwidth iperf test https://www.youtube.com/watch?v=z6aGcy25gX8
https://github.com/esnet/iperf/issues/547
https://oracle-randolf.blogspot.com/2017/02/oracle-database-cloud-dbaas-performance.html , https://oracle-randolf.blogspot.com/search/label/DBaaS
! references
Network Troubleshooting Tools https://learning.oreilly.com/library/view/network-troubleshooting-tools/059600186X/
Network Maintenance and Troubleshooting Guide: Field-Tested Solutions for Everyday Problems, Second Editon https://learning.oreilly.com/library/view/network-maintenance-and/9780321647672/ch11.html
DevOps Troubleshooting for Linux Server: Is the Server Down? Tracking Down the Source of Network Problems https://learning.oreilly.com/library/view/devops-troubleshooting-for/9780133258813/ch05.html#ch05lev1sec7
! System level OS perf monitoring
* kSar - a SAR grapher - http://sourceforge.net/projects/ksar/ , https://www.linux.com/news/visualize-sar-data-ksar , https://www.thomas-krenn.com/en/wiki/Linux_Performance_Analysis_using_kSar
{{{
export LC_ALL=C
sar -A -f /var/log/sysstat/sa15 > sardata.txt
cat /var/log/sysstat/sar?? > /tmp/sar.all <- merge multiple days
}}}
* OSWatcher (Oracle OS Watcher) - Reports CPU, RAM and Network stress, and is a new alternative for monitoring Oracle servers (includes session level ps)
* Oracle Cluster Health Monitor - http://goo.gl/UZqS5 (includes session level ps)
* nmon
* Dynamic Tracing Tools - ''DTrace'' - Solaris,Linux ''ProbeVue'' - AIX
* top, vmstat, mpstat - http://smartos.org/2011/05/04/video-the-gregg-performance-series/
* turbostat.c http://developer.amd.com/Assets/51803A_OpteronLinuxTuningGuide_SCREEN.pdf, http://manpages.ubuntu.com/manpages/precise/man8/turbostat.8.html, http://lxr.free-electrons.com/source/tools/power/x86/turbostat/, http://stuff.mit.edu/afs/sipb/contrib/linux/tools/power/x86/turbostat/turbostat.c
* vm performance and CPU contention [[esxtop, vmstat, top, mpstat steal]]
! Session level OS perf monitoring
* iotop http://guichaz.free.fr/iotop/ , for RHEL http://people.redhat.com/jolsa/iotop/ , topio Solaris http://yong321.freeshell.org/freeware/pio.html
* atop alternative to iotop on RHEL4 http://www.atoptool.nl/index.php
* collectl http://collectl.sourceforge.net/ , http://collectl-utils.sourceforge.net/ , detailed process accounting (you can also do ala ''iotop'') http://collectl.sourceforge.net/Process.html
* prstat Solaris
{{{
Memory per process accounting: collectl -sZ -i:1 --procopts m
IO per process accounting: collectl -sZ -i:1
}}}
* iodump http://www.xaprb.com/blog/2009/08/23/how-to-find-per-process-io-statistics-on-linux/ <-- I'm a bit dubious about this one.. I did a test case comparing it to collectl.. it can't get the top processes doing the IO.. related links: http://goo.gl/NwUcs , http://goo.gl/zVEFE , http://goo.gl/eQg3d
* perf top http://anton.ozlabs.org/blog/2009/09/04/using-performance-counters-for-linux/ <-- kernel profiling tool for linux, much like dtrace probe on syscall, ''wiki'' https://perf.wiki.kernel.org/index.php/Tutorial#Live_analysis_with_perf_top
* Digger - the tool for tracing of unix processes http://alexanderanokhin.wordpress.com/tools/digger/
* per-process level cpu scheduling - latency.c http://eaglet.rain.com/rick/linux/schedstat/
* vtune http://software.intel.com/en-us/intel-vtune-amplifier-xe
* BPF https://blog.memsql.com/bpf-linux-performance/
* perf Basic usage of perf command ( tracing tool ) (Doc ID 2174289.1)
* cputrack (solaris) - process level CPU counters - starting solaris 8 http://www.scalingbits.com/performance/tracing
! Network
* uperf http://www.uperf.org/, http://www.uperf.org/manual.html
* rds-stress http://oss.oracle.com/pipermail/rds-devel/2007-November/000237.html, http://oss.oracle.com/~okir/rds/2008-Feb-29/scalability/
* pingplotter http://www.pingplotter.com/
* netem WAN performance simulator http://www.linuxfoundation.org/collaborate/workgroups/networking/netem , http://www.oracle.com/technetwork/articles/wartak-rac-vm-3-096492.html#9a
* network speed test without flash http://openspeedtest.com/results/5244312
! Storage/IO
* EMC ControlCenter (ECC)
* asm_metric.pl
Orion - see tiddlers below
SQLIO (for SQL Server) - http://sqlserverpedia.com/wiki/SAN_Performance_Tuning_with_SQLIO
ASMIOSTAT Script to collect iostats for ASM disks Doc ID: 437996.1
''asmcmd''
{{{
asmcmd iostat -et --io --region -G DATA 5
}}}
also see [[asm_metrics.pl]]
Customer Knowledge Exchange
https://metalink2.oracle.com/metalink/plsql/f?p=130:14:6788425522391793279::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,375443.1,1,1,1,helvetica
''* Master Note: Database Performance Overview [ID 402983.1]''
''Performance Tools Quick Reference Guide Doc ID: Note:438452.1''
<<<
* Query Tuning
Enterprise Manager (SQL Tuning Advisor)
AWR SQL Report
SQLTXPLAIN
TRCANLZR
PL/SQL Profiler
LTOM (Session Trace Collector)
OPDG
SQL Tuning Health-Check Script [ID 1366133.1]
* OS Data
OS_Watcher
* Database Tuning
Enterprise Manager ADDM
ADDM Report
STATSPACK
AWR Report
OPDG
* Hang, Locking, and Transient Issues
ASH Report
LTOM (Hang Detector, Data Recorder)
HangFG
* Error/Crash Issues
Stackx
ORA-600/ORA-7445 Troubleshooter
* RAC
RDA
RACcheck - RAC Configuration Audit Tool [ID 1268927.1] - sample report http://dl.dropbox.com/u/25153503/Oracle/raccheck.html
* ASM tools used by Support : KFOD, KFED, AMDU [ID 1485597.1]
<<<
Oracle Performance Diagnostic Guide (OPDG)
Doc ID: Note:390374.1
Performance Improvement Tips for Oracle on UNIX
Doc ID: Note:1005636.6
How to use OS commands to diagnose Database Performance issues?
Doc ID: Note:224176.1
Introduction to Tuning Oracle7 / Oracle8 / 8i / 9i
Doc ID: Note:61998.1
-- DATABASE HEALTH CHECK
How to Perform a Healthcheck on the Database
Doc ID: 122669.1
My Oracle Support Health Checks Catalog [ID 868955.1]
Avoid Known Problems and Improve Stability - New Database, Middleware, E-Business Suite, PeopleSoft, Siebel & JD Edwards Health Checks Released! [ID 1206734.1]
-- ANALYSIS
Yet Another Performance Profiling Method (Or YAPP-Method) (Doc ID 148518.1)
Some Reasons for Poor Performance at Database,Network and Client levels
Doc ID: Note:242495.1
Performance Improvement Tips for Oracle on UNIX
Doc ID: 1005636.6
CHECKLIST-What else can influence the Performance of the Database
Doc ID: 148462.1
Abrupt Spikes In Number Of Sessions Causing Slow Performance.
Doc ID: 736635.1
TROUBLESHOOTING: Advanced Query Tuning
Doc ID: 163563.1
Note 233112.1 START HERE> Diagnosing Query Tuning Problems Using a Decision Tree
Note 372431.1 TROUBLESHOOTING: Tuning a New Query
Note 179668.1 TROUBLESHOOTING: Tuning Slow Running Queries
Note 122812.1 Tuning Suggestions When Query Cannot be Modified
Note 67522.1 Diagnosing Why a Query is Not Using an Index
Note 214106.1 Using TKProf to compare actual and predicted row counts
What is the Oracle Diagnostic Methodology (ODM)?
Doc ID: 312789.1
-- ORACLE SUPPORT CASE STUDIES, COE
-- chris warticki
http://blogs.oracle.com/support/
Case Study Master (Doc ID 342534.1)
https://metalink2.oracle.com/metalink/plsql/f?p=130:14:4157667604321941359::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,342534.1,1,1,1,helvetica
Case Study: Diagnosing Another Buffer Busy Waits Issue
Doc ID: Note:358303.1
Freelist Management with Oracle 8i
Doc ID: Note:157250.1
Network Performance Considerations in Designing Client/Server Applications
Doc ID: 76412.1
Database Writer and Buffer Management
Doc ID: 91062.1
http://netappdb.blogspot.com/
Determining CPU Resource Usage for Linux and Unix
Doc ID: Note:466996.1
Measuring Memory Resource Usage for Linux and Unix
Doc ID: Note:467018.1
Linux Kernel: The SLAB Allocator
Doc ID: Note:434351.1
Best Practices for Load Testing
Doc ID: Note:466452.1
What is DARV
Doc ID: 391153.1
-- CUSTOM APPS TUNING
http://blogs.oracle.com/theshortenspot/entry/troubleshooting_in_a_nutshell
Performance Troubleshooting Guides For Oracle Utilities CCB, BI & Oracle ETM [ID 560382.1]
-- STATSPACK
Systemwide Tuning using UTLESTAT Reports in Oracle7/8
Doc ID: Note:62161.1
Note 228913.1 Systemwide Tuning using STATSPACK Reports
New system statistics in Oracle 8i bstat/estat report
Doc ID: 134346.1
Statistics Package (STATSPACK) Guide
Doc ID: 394937.1
FAQ- Statspack Complete Reference
Doc ID: 94224.1
Using AWR/Statspack reports to help solve some Portal Performance Problems scenarios
Doc ID: 565812.1
Two Types of Automatic Statistics Collected in 10g
Doc ID: 559029.1
Creating a StatsPack performance report
Doc ID: 149124.1
Gathering a StatsPack snapshot
Doc ID: 149121.1
What is StatsPack and where are the READMEs?
Doc ID: 149115.1
Systemwide Tuning using STATSPACK Reports
Doc ID: 228913.1
Sharing StatsPack snapshot data between two or more databases
Doc ID: 149122.1
What We Did to Track and Detect Init Parameter Changes in our Database
Doc ID: 436776.1
Oracle Database 10g Migration/Upgrade: Known Issues and Best Practices with Self-Managing Database
Doc ID: 332889.1
How To Integrate Statspack with EM 10G
Doc ID: 274436.1
Installing and Using Standby Statspack in 11gR1
Doc ID: 454848.1
-- DISABLE AWR
Package for disabling AWR without a Diagnostic Pack license in Oracle
Doc ID: 436386.1
-- AWR
Solving Convertible or Lossy data in Data Dictionary objects when changing the NLS_CHARACTERSET
Doc ID: 258904.1
Although AWR snapshot is dropped, WRH$_SQLTEXT still shows some relevant entries
Doc ID: 798526.1
High Storage Consumption for LOBs in SYSAUX Tablespace
Doc ID: 396502.1
-- AWR BASELINE
How to Generate an AWR Report and Create Baselines [ID 748642.1]
-- AWR ERRORS
OERR: ORA-13711 Some snapshots in the range [%s, %s] are missing key statistic [ID 287886.1]
Troubleshooting: AWR Snapshot Collection issues [ID 1301503.1]
ORA-12751 cpu time or run time policy violation [ID 761298.1] <-- usually happens when you are on high CPU, high SYS CPU
AWR or STATSPACK Snapshot collection extremely slow in 11gR2 [ID 1392603.1]
Bug 13372759: AWR SNAPSHOTS HANGING
Bug 13257247 - AWR Snapshot collection hangs due to slow inserts into WRH$_TEMPSTATXS. [ID 13257247.8] <-- BUGG!!!! will cause a bloated IO MB/s number
-- EXPORT IMPORT AWR
http://gavinsoorma.com/2009/07/exporting-and-importing-awr-snapshot-data/
How to Transport AWR Data [ID 872733.1]
http://dboptimizer.com/2011/04/16/importing-multiple-databases-awr-repositories/
-- EVENTS
What is the "WF - Contention'' Enqueue ?
Doc ID: Note:358208.1
Consistent gets - examination
http://www.dba-oracle.com/m_consistent_gets.htm
-- SGA
FREQUENT RESIZE OF SGA
Doc ID: 742599.1
-- BUFFER CACHE
Understanding and Tuning Buffer Cache and DBWR
Doc ID: Note:62172.1
Note 1022293.6 HOW A TABLE CAN BE CACHED IN MEMORY BUFFER CACHE
How to Identify The Segment Associated with Buffer Busy Waits
Doc ID: Note:413931.1
Resolving Intense and "Random" Buffer Busy Wait Performance Problems
Doc ID: Note:155971.1
Case Study: Diagnosing Another Buffer Busy Waits Issue
Doc ID: Note:358303.1
DB_WRITER_PROCESSES or DBWR_IO_SLAVES?
Doc ID: Note:97291.1
Database Writer and Buffer Management
Doc ID: Note:91062.1
STATISTIC "cache hit ratio" - Reference Note
Doc ID: Note:33883.1
Oracle9i NF: Dynamic Buffer Cache Advisory
Doc ID: Note:148511.1
How To Identify a Hot Block Within The Database Buffer Cache.
Doc ID: Note:163424.1
What is "v$bh"? How should it be used?
Doc ID: 73582.1
-- BUFFER BUSY WAITS
How To Identify a Hot Block Within The Database Buffer Cache.
Doc ID: 163424.1
Difference Between 'Buffer Busy Waits' and 'Latch: Cache Buffers Chains"?
Doc ID: 833303.1
Abrupt Spikes In Number Of Sessions Causing Slow Performance.
Doc ID: 736635.1
How to Identify Which Latch is Associated with a "latch free" wait
Doc ID: 413942.1
New system statistics in Oracle 8i bstat/estat report
Doc ID: 134346.1
ACTIVE: DML HANGING - BUFFER BUSY WAITS
Doc ID: 1061802.6
How to Identify The Segment Associated with Buffer Busy Waits
Doc ID: 413931.1
-- BUFFER POOL
Oracle Multiple Buffer Pools Feature
Doc ID: 135223.1
ORACLE8.X: HOW TO MAKE SMALL FREQUENTLY USED TABLES STAY IN MEMORY
Doc ID: 1059295.6
Multiple BUFFER subcaches: What is the total BUFFER CACHE size?
Doc ID: 138226.1
HOW A TABLE CAN BE CACHED IN MEMORY/BUFFER CACHE <-- oracle 7
Doc ID: 1022293.6
-- LARGE POOL
Fundamentals of the Large Pool (Doc ID 62140.1)
-- SHARED POOL
Using the Oracle DBMS_SHARED_POOL Package
Doc ID: Note:61760.1
How to Pin a Cursor in the Shared Pool
Doc ID: Note:726780.1
90+percent of the shared pool memory though no activity on the database
Doc ID: Note:552391.1
How to Pin SQL Statements in Memory Using DBMS_SHARED_POOL
Doc ID: Note:152679.1
90+percent of the shared pool memory though no activity on the database
Doc ID: 552391.1
HOW TO FIND THE SESSION HOLDING A LIBRARY CACHE LOCK
Doc ID: 122793.1
Dump In msqsub() When Querying V$SQL_PLAN
Doc ID: 361342.1
Troubleshooting and Diagnosing ORA-4031 Error
Doc ID: 396940.1
When Cursor_Sharing=Similar/Force do not Share Cursors When Literals are Used?
Doc ID: 364845.1
Handling and resolving unshared cursors/large version_counts
Doc ID: 296377.1
How to Identify Resource Intensive SQL for Tuning
Doc ID: 232443.1
How using synonyms may affect database performance and scalability
Doc ID: 131272.1
Example "Top SQL" queries from V$SQLAREA
Doc ID: 235146.1
ORA-4031 Common Analysis/Diagnostic Scripts
Doc ID: 430473.1
Understanding and Tuning the Shared Pool
Doc ID: 62143.1
-- SHARED POOL PIN
How to Automate Pinning Objects in Shared Pool at Database Startup
Doc ID: 101627.1
PINNING ORACLE APPLICATIONS OBJECTS INTO THE SHARED POOL
Doc ID: 69925.1
How To Use SYS.DBMS_SHARED_POOL In a PL/SQL Stored procedure To Pin objects in Oracle's Shared Pool.
Doc ID: 305529.1
How to Pin a Cursor in the Shared Pool
Doc ID: 726780.1
How to Pin SQL Statements in Memory Using DBMS_SHARED_POOL
Doc ID: 152679.1
-- HARD/SOFT PARSE
How to work out how many of the parse count are hard/soft?
Doc ID: 34433.1
-- COMMIT
Does Auto-Commit Perform Commit On Select?
Doc ID: 371984.1
-- FREELISTS & FREELISTS GROUS
Freelist Management with Oracle 8i
Doc ID: Note:157250.1
How To Solve High ITL Waits For Given Segments.
Doc ID: Note:464041.1
-- EBS
Troubleshooting Oracle Applications Performance Issues
Doc ID: Note:169935.1
MRP Core/Mfg Performance Tuning and Troubleshooting Guide
Doc ID: 100956.1
-- LATCH
What are Latches and What Causes Latch Contention
Doc ID: Note:22908.1
How to Match a Row Cache Object Child Latch to its Row Cache
Doc ID: Note:468334.1
-- CHECKPOINT
Manual Log Switching Causing "Thread 1 Cannot Allocate New Log" Message in the Alert Log
Doc ID: Note:435887.1
Checkpoint Tuning and Troubleshooting Guide
Doc ID: Note:147468.1
Alert Log Messages: Private Strand Flush Not Complete
Doc ID: Note:372557.1
DB Redolog Archive Once A Minute
Doc ID: Note:370151.1
Automatic Checkpoint Tuning in 10g
Doc ID: Note:265831.1
WHY REDO LOG SPACE REQUESTS ALWAYS INCREASE AND NEVER DECREASE?
Doc ID: Note:1025593.6
-- OS LEVEL (Linux - Puschitz)
Oracle MetaLink Note:200266.1
Oracle MetaLink Note:225751.1
Oracle MetaLink Note:249213.1
Oracle MetaLink Note:260152.1
Oracle MetaLink Note:262004.1
Oracle MetaLink Note:265194.1
Oracle MetaLink Note:270382.1
Oracle MetaLink Note:280463.1
Oracle MetaLink Note:329378.1
Oracle MetaLink Note:344320.1
http://www.oracle.com/technology/pub/notes/technote_rhel3.html
http://www.redhat.com/whitepapers/rhel/OracleonLinux.pdf
http://www.redhat.com/magazine/001nov04/features/vm/ <-- Understanding Virtual Memory by Norm Murray and Neil Horman
http://kerneltrap.org/node/2450 <-- Feature: High Memory In The Linux Kernel
http://www.redhat.com/whitepapers/rhel/AdvServerRASMpdfRev2.pdf
-- hang
What To Do and Not To Do When 'shutdown immediate' Hangs
Doc ID: Note:375935.1
Bug:5057695: Shutdown Immediate Very Slow To Close Database.
Doc ID: Note:428688.1
Diagnosing Database Hanging Issues
Doc ID: Note:61552.1
Bug No. 5057695 SHUTDOWN IMMEDIATE SLOW TO CLOSE DOWN DATABASE WITH INACTIVE JDBC THIN SESSIONS
How to Debug Hanging Sessions?
Doc ID: 178721.1
ORA-0054: When Dropping or Truncating Table, When Creating or Rebuilding Index
Doc ID: 117316.1
Connection To / As Sysdba and Shutdown Immediate Hang
Doc ID: 314365.1
How To Use Truss With Opatch?
Doc ID: 470225.1
How to Trace Unix System Calls
Doc ID: 110888.1
TECH: Getting a Stack Trace from a CORE file
Doc ID: 1812.1
TECH: Using Truss / Trace on Unix
Doc ID: 28588.1
How to Process an Express Core File Using dbx, dbg, dde, gdb or ladebug
Doc ID: 118252.1
How to Process an Express Server Core File Using gdb
Doc ID: 189760.1
Procwatcher: Script to Monitor and Examine Oracle and CRS Processes
Doc ID: 459694.1
Interpreting HANGANALYZE trace files to diagnose hanging and performance problems
Doc ID: 215858.1
CASE STUDY: Using Real-Time Diagnostic Tools to Diagnose Intermittent Database Hangs
Doc ID: 370363.1
HANGFG User Guide
Doc ID: 362094.1
No Response from the Server, Does it Hang or Spin?
Doc ID: 68738.1
Diagnosing Webforms Hanging
Doc ID: 179612.1
Database Performance FAQ
Doc ID: 402983.1
Steps to generate HANGANALYZE trace files
Doc ID: 175006.1
How To Display Information About Processes on SUN Solaris
Doc ID: 70609.1
-- INTERNALS
Database Internals (Events, Blockdumps)
https://metalink2.oracle.com/metalink/plsql/f?p=130:14:4157667604321941359::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,267951.1,1,1,1,helvetica
-- SPA
SQL PERFORMANCE ANALYZER 10.2.0.x to 10.2.0.y EXAMPLE SCRIPTS (Doc ID 742644.1)
-- TRACE
Interpreting Raw SQL_TRACE and DBMS_SUPPORT.START_TRACE output
Metalink Note 39817.1
This is the event used to implement the DBMS_SUPPORT trace, which is a
superset of Oracle's SQL_TRACE facility. At level 4, bind calls are included in the
trace output; at level 8, wait events are included, which is the default level for
DBMS_SUPPORT; and at level 12, both binds and waits are included. See the
excellent Oracle Note 39817.1 for a detailed explanation of the raw information in
the trace file.
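A minimal sketch of setting the event at those levels yourself (standard 10046 event syntax, session level):
{{{
-- level 4 = binds, level 8 = waits (the DBMS_SUPPORT default), level 12 = binds + waits
alter session set events '10046 trace name context forever, level 12';
-- run the SQL of interest, then turn tracing off
alter session set events '10046 trace name context off';
}}}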
How to Obtain Tracing of Optimizer Computations (EVENT 10053)
Doc ID: Note:225598.1
Recommended Method for Obtaining 10046 trace for Tuning
Doc ID: Note:376442.1
EVENT: 10046 "enable SQL statement tracing (including binds/waits)"
Doc ID: Note:21154.1
Tracing Oracle Applications using Event 10046
Doc ID: Note:171647.1
Troubleshooting (Tracing)
Doc ID: Note:117820.1
Note 246821.1 trace.sql - Traces a sql statement ensuring that the rows column will be populated
Note 156969.1 coe_trace.sql - SQL Tracing Apps online transactions with Event 10046 (11.5)
Note 156970.1 coe_trace_11.sql - SQL Tracing Apps online transactions with Event 10046 (11.0)
Note 156971.1 coe_trace_all.sql - Turns SQL Trace ON for all open DB Sessions (8.0-9.0)
Note 156966.1 coe_event_10046.sql - SQL Tracing online transactions using Event 10046 7.3-9.0
Note 171647.1 - Tracing Oracle Applications using Event 10046
Note 179848.1 bde_system_event_10046.sql - SQL Trace any transaction with Event 10046 8.1-9.0
Note 224270.1 TRCANLZR.sql - Trace Analyzer - Interpreting Raw SQL Traces generated by EVENT 10046
Note 296559.1 FAQ: Common Tracing Techniques within the Oracle Applications 11i
Introduction to Trace Analyzer and SQLTXPLAIN For System Admins and DBAs (Doc ID 864002.1)
Tracing Sessions in Oracle Using the DBMS_SUPPORT Package
Doc ID: 62160.1
Tracing sessions: waiting on an enqueue
Doc ID: 102925.1
Cannot Read User Trace File Even ''_trace_files_public''=True In 10G RAC
Doc ID: 283379.1
How to Turn on Tracing of Calls to Database
Doc ID: Note:187913.1
Note 1058210.6 HOW TO ENABLE SQL TRACE FOR ANOTHER SESSION USING ORADEBUG
Getting 10046 Trace for Export and Import
Doc ID: Note:258418.1
Library Cache Latch Waits Cause Database Slowdown On Tracing Sessions With Event 10046
Doc ID: Note:311105.1
How To Display The Values Of A Bind Variable In A SQL Statement
Doc ID: Note:1068973.6
Introduction to ORACLE Diagnostic EVENTS
Doc ID: Note:218105.1
When Conventional Thinking Fails: A Performance Case Study in Order Management Workflow customization
Doc ID: Note:431619.1
How to Set SQL Trace on with 10046 Event Trace which Provides the Bind Variables
Doc ID: Note:160124.1
Diagnostics for Query Tuning Problems
Doc ID: Note:68735.1
Master note for diagnosing Portal/Database Performance Issues
Doc ID: Note:578806.1
Debug and Validate Invalid Objects
Doc ID: Note:300056.1
How to Submit a Testcase to Oracle Support for Reproducing an Execution Plan
Doc ID: Note:390610.1
How to Run SQL Testcase Builder from ADRCI [Video] [ID 1174105.1] <-- new stuff
How to Log a Good Performance Service Request
Doc ID: Note:210014.1
Index Rebuild Is Hanging Or Taking Too Long
Doc ID: Note:272762.1
Tracing session created through dblink
Doc ID: Note:258754.1
Overview Reference for SQL_TRACE, TKProf and Explain Plan
Doc ID: Note:199081.1
Diagnostics for Query Tuning Problems
Doc ID: Note:68735.1
12099 ?? <-- what?
-- PL/SQL PROFILER
Implementing and Using the PL/SQL Profiler (Doc ID 243755.1)
-- DBMS_SUPPORT
The DBMS_SUPPORT Package
Doc ID: Note:62294.1
-- DBMS_APPLICATION_INFO
PACKAGE DBMS_APPLICATION_INFO Specification
Doc ID: Note:30366.1
-- RMAN
RMAN Performance Tuning Diagnostics
Doc ID: Note:311068.1
-- CPU
How to Diagnose high CPU usage problems
Doc ID: 352648.1
Diagnosing High CPU Utilization
Doc ID: Note:164768.1
http://www.freelists.org/post/oracle-l/CPU-used-by-this-Session-and-Wait-time
-- V$OSSTAT
Aix 5.3 On Power 5: Does Oracle Recommend 'SMT' Be Enabled Or Not (And Why) (Doc ID 308393.1)
MMNL Process Consuming High CPU (Doc ID 460127.1)
Bug 6164409 - v$osstat shows wrong values for load data (Doc ID 6164409.8)
Bug 6417713 - Linux PowerPC: Dump during startup / during select from V$OSSTAT (Doc ID 6417713.8)
Difference In V$OSSTAT xxx_TICKS and xxx_TIME between 10.1 and 10.2 (Doc ID 433937.1)
Bug 4527873 - Linux: V$OSSTAT view may return no rows (Doc ID 4527873.8)
Bug 3559340 - V$OSSTAT may contain no data on some platforms with large number of CPUs (Doc ID 3559340.8)
Bug 8777336 - multiple kstat calls while getting socket count and core count for v$osstat (Doc ID 8777336.8)
Very large value for OS_CPU_WAIT_TIME FROM V$OSSTAT / AWR Report (Doc ID 889396.1)
Bug 7447648 - HPUX: OS_CPU_WAIT_TIME value from V$OSSTAT is incorrect on HPUX (Doc ID 7447648.8)
KSUGETOSSTAT FAILED: OP = PSTAT_GETPROCESSOR, LOCATION = SLSGETACTIVE () (Doc ID 375860.1)
ADDM reports ORA-13711 error in OEM on HP-UX Itanium (Doc ID 845668.1)
Bug 5010657 - HPUX-Itanium: No rows from V$OSSTAT / incorrect CPU_COUNT (Doc ID 5010657.8)
ORA-7445 (ksbnfy) (Doc ID 753033.1)
-- IO WAIT
PROBLEM : "CPU I/O WAIT" metric has values > 100% on Linux, HP-UX and AIX Hosts [ID 436855.1]
-- IO
I/O Tuning with Different RAID Configurations
Doc ID: Note:30286.1
CHECKLIST-What else can influence the Performance of the Database
Doc ID: Note:148462.1
Avoiding I/O Disk Contention
Doc ID: Note:148342.1
Tuning I/O-related waits
Doc ID: Note:223117.1
-- COE ORACLE SUPPORT TOOLS
Doc ID: Note:301137.1 OS Watcher User Guide
Doc ID: Note:433472.1 OS Watcher For Windows (OSWFW) User Guide
OSW System Profile - Sample
Doc ID: Note:461054.1
LTOM System Profiler - Sample Output
Doc ID: Note:461052.1
OS Watcher Graph (OSWg) User Guide
Doc ID: Note:461053.1
OSW System Profile - Sample
Doc ID: NOTE:461054.1
Performance Tools Quick Reference Guide
Doc ID: Note:438452.1
LTOM - The On-Board Monitor User Guide
Doc ID: Note:352363.1
LTOM System Profiler - Sample Output
Doc ID: NOTE:461052.1
Linux sys_checker.sh O/S Shell script to gather critical O/S at periodic intervals
Doc ID: Note:278072.1
Linux Kernel: The SLAB Allocator
Doc ID: Note:434351.1
OSW System Profile - Sample
Doc ID: Note:461054.1
How To Start OSWatcher Every System Boot
Doc ID: Note:580513.1
Diagnostic Tools Catalog
Doc ID: 559339.1
Core / Stack Trace Extraction Tool (Stackx) User Guide
Doc ID: 362791.1
Procwatcher: Script to Monitor and Examine Oracle and CRS Processes
Doc ID: 459694.1
Script to Collect RAC Diagnostic Information (racdiag.sql)
Doc ID: 135714.1
Script to Collect OPS Diagnostic Information (opsdiag.sql)
Doc ID: 205809.1
STACKX User Guide
Doc ID: 362791.1
-- ADVISORS
PERFORMANCE TUNING USING 10g ADVISORS AND MANAGEABILITY FEATURES
Doc ID: 276103.1
-- LGWR
LGWR and Asynchronous I/O
Doc ID: 422058.1
-- INDEX
Poor IO performance doing index rebuild online after migrating to another storage
Doc ID: 258907.1
-- MULTIBLOCK READ COUNT
SSTIOMAX AND DB_FILE_MULTIBLOCK_READ_COUNT IN ORACLE 7 AND 8
Doc ID: 131530.1
-- INITRANS
INITRANS relationship with DB_BLOCK_SIZE.
Doc ID: 151473.1
-- DUMP
How to Dump Redo Log File Information
Doc ID: 1031381.6
How to Obtain a Segment Header Dump
Doc ID: 249814.1
How To Determine The Block Header Size
Doc ID: 1061465.6
Obtaining systemstate dumps or 10046 traces at master site during snapshot refresh hang
Doc ID: 273238.1
-- WAIT EVENTS
How I Monitor WAITS to help tune long running queries
Doc ID: 431447.1
-- enq: HW - contention
'enq HW - contention' For Busy LOB Segment [ID 740075.1]
How To Analyze the Wait Statistic: 'enq: HW - contention' [ID 419348.1]
Thread: enq: HW - contention waits http://forums.oracle.com/forums/thread.jspa?threadID=644850&tstart=44
http://www.freelists.org/post/oracle-l/enq-HW-contention-waits
http://forums.oracle.com/forums/thread.jspa?threadID=892508
http://orainternals.wordpress.com/2008/05/16/resolving-hw-enqueue-contention/
{{{
http://www.orafaq.com/forum/t/164483/0/
--to allocate extent to the table
alter table emp allocate extent;
--the table has columns named col1 and col2 which are clob
--to allocate extents to the columns
alter table emp modify lob (col1) (allocate extent (size 10m))
/
alter table emp modify lob (col2) (allocate extent (size 10m))
/
>> alter table theBLOBtable modify lob (theBLOBcolumn) (allocate extent (instance 1));
>> Remember to include the "instance 1" so space is added below HWM, even if
you're not using RAC (ignore documentation's caution: only use it on RAC).
}}}
-- resmgr: become active
The session is waiting for a resource manager active session slot. This event occurs when the resource manager is enabled and the number of active sessions in the session's current consumer group exceeds the current resource plan's active session limit for the consumer group. To reduce the occurrence of this wait event, increase the active session limit for the session's current consumer group.
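A minimal sketch of acting on that advice with DBMS_RESOURCE_MANAGER (the plan and consumer group names here are hypothetical):
{{{
begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.update_plan_directive(
     plan                    => 'MY_PLAN',      -- hypothetical plan name
     group_or_subplan        => 'MY_GROUP',     -- hypothetical consumer group
     new_active_sess_pool_p1 => 20);            -- raise the active session limit
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/
}}}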
High "Resmgr:Cpu Quantum" Wait Events In 11g Even When Resource Manager Is Disabled [ID 949033.1]
NOTE:786346.1 - Resource Manager and Sql Tunning Advisory DEFAULT_MAINTENANCE_PLAN
NOTE:756734.1 - 11g: Scheduler Maintenance Tasks or Autotasks
NOTE:806893.1 - Large Waits With The Wait Event "Resmgr:Cpu Quantum"
NOTE:392037.1 - Database Hangs. Sessions wait for 'resmgr:cpu quantum'
No Database User Can Login Except Sys And System because Resource Manager Internal_Quiesce Plan Enabled [ID 396970.1]
Thread: ALTER SYSTEM SUSPEND https://forums.oracle.com/forums/thread.jspa?threadID=852356
<<<
The ALTER SYSTEM SUSPEND - statement halts all input and output (I/O) to datafiles (file header and file data) and control files. The suspended state lets you back up a database without I/O interference. When the database is suspended all preexisting I/O operations are allowed to complete and any new database accesses are placed in a queued state.
ALTER SYSTEM QUIESCE RESTRICTED - Non-DBA active sessions will continue until they become inactive. An active session is one that is currently inside of a transaction, a query, a fetch, or a PL/SQL statement; or a session that is currently holding any shared resources (for example, enqueues). No inactive sessions are allowed to become active. For example, if a user issues a SQL query in an attempt to force an inactive session to become active, the query will appear to be hung. When the database is later unquiesced, the session is resumed, and the blocked action is processed.
<<<
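For quick reference, a sketch of the corresponding commands (QUIESCE assumes the Resource Manager has been active since instance startup):
{{{
alter system suspend;
select database_status from v$instance;   -- shows SUSPENDED
alter system resume;
alter system quiesce restricted;
alter system unquiesce;
}}}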
-- LOB
LOB Performance Guideline
Doc ID: 268476.1
-- OS TRACE
How to use truss command on IBM AIX
Doc ID: 245350.1
TECH: Using Truss / Trace on Unix
Doc ID: 28588.1
How to Trace Unix System Calls
Doc ID: 110888.1
How to Trace the Forms Runtime Process Using TRUSS/STRACE
Doc ID: 275510.1
Troubleshooting Tips For Spinning/Hanging F60WEBMX Processes
Doc ID: 457381.1
Diagnosing Webforms Hanging
Doc ID: 179612.1
How To Capture A Truss Of F60WEBMX When There Is No Process ID (PID)
Doc ID: 438913.1
How to Run Truss
Doc ID: 146428.1
QREF: Trace commands Summary
Doc ID: 16782.1
TECH: Using Truss / Trace on Unix
Doc ID: 28588.1
Database Startup, Shutdown Or New Connections Hang With Truss Showing OS Failing Semtimedop Call With Err#11 EAGAIN
Doc ID: 760968.1
How To Verify Whether DIRECTIO is Being Used
Doc ID: 555601.1
How To Perform System Tracing For All Forms Runtime Processes?
Doc ID: 400144.1
ALERT: Hang During Startup/Shutdown on Unix When System Uptime > 248 Days
Doc ID: 118228.1
How To Use Truss With Opatch?
Doc ID: 470225.1
Note 110888.1 - How to Trace Unix System Calls
How to Troubleshoot Spinning / Runaway Web Deployed Forms Runtime Processes?
Doc ID: 206681.1
ORA-7445[ksuklms] After Upgrade To 10.2.0.4
Doc ID: 725951.1
-- QMN
Queue Monitor Process: Architecture and Known Issues
Doc ID: 305662.1
Queue Monitor Coordinator Process delays Database Opening due to Replication Queue Tables with Large HighWaterMark
Doc ID: 564663.1
'IPC Send Timeout Detected' errors between QMON Processes after RAC reconfiguration
Doc ID: 458912.1
Queue Monitor Coordinator Process consuming 100% of 1 cpu
Doc ID: 604246.1
-- OS TOOLS , SOLARIS
http://developers.sun.com/solaris/articles/tuning_solaris.html
-- SPACE MANAGEMENT
BMB versus Freelist Segment: DBMS_SPACE.UNUSED_SPACE and DBA_TABLES.EMPTY_BLOCKS (Doc ID 149516.1)
Automatic Space Segment Management in RAC Environments (Doc ID 180608.1)
How to Deallocate Unused Space from a Table, Index or Cluster. (Doc ID 115586.1)
When to use DBMS_SPACE.UNUSED_SPACE or DBMS_SPACE.FREE_BLOCKS Procedures (Doc ID 116565.1)
-- LOCKS, ENQUEUES, DEADLOCKS
FAQ about Detecting and Resolving Locking Conflicts
Doc ID: 15476.1
The Performance Impact of Deadlock Detection
Doc ID: 285270.1
What to do with "ORA-60 Deadlock Detected" Errors
Doc ID: 62365.1
Understanding and Reading Systemstates
Doc ID: 423153.1
Tracing sessions: waiting on an enqueue
Doc ID: 102925.1
WAITEVENT: "enqueue" Reference Note
Doc ID: 34566.1
VIEW: "V$LOCK" Reference Note
Doc ID: 29787.1
TX Transaction locks - Example wait scenarios
Doc ID: 62354.1
ORA-60 DEADLOCK DETECTED ON CONCURRENT DML INITRANS/MAXTRANS
Doc ID: 115467.1
OERR: ORA 60 "deadlock detected while waiting for resource"
Doc ID: 18251.1
ORA-60 / Deadlocks Most Common Causes
Doc ID: 164661.1
Credit Card Authorization Slow And 'row lock contention'
Doc ID: 431084.1
Deadlock Error Not in Alert.log and No Trace File Generated on OPS or RAC
Doc ID: 262226.1
How to Interpret the Different Types of Locks in Lock Manager 1.6
Doc ID: 75705.1
{{{
This is a thorough and systematic performance review, and a comprehensive report will be delivered.
No changes or tuning will be done during the activity. From the detailed report we can scope another engagement acting on the bottlenecks found.
--------------------------------------------------------------------------------
The Tuning Document
1) Infrastructure Overview
2) Recommendations
3) Performance Summary
4) Operating System Performance Analysis
- CPU
- Memory
- Swap
- Storage
- Network
5) Oracle Performance Analysis
Database Bottlenecks - this includes, but is not limited to, the following:
- Stress on the database server's components (CPU, IO, Memory, Network) during low and peak periods, using Linear Regression Analysis
- ETL period / Ad hoc reports affecting database server performance
- Issues on particular wait events
- Configuration issues, example would be Parallelism parameters
- Long running SQLs
- etc.
6) Application Performance Analysis
Top SQLs
- Top SQLs - SELECT
- Top SQLs - INSERT
- Top SQLs - UPDATE
- Top SQLs - MERGE
- Top SQLs - PARALLEL
- Unstable execution plans
7) References and Metalink Notes
--------------------------------------------------------------------------------
Things needed prior and during the activity
Below are the documents we need before the activity:
1) Most recent RDA of the database
2) Hardware, Storage, and network architecture that includes the Database, Application Server, BI environment
3) Hardware, Storage (raw and usable), and network make and model (plus specs)
4) Workload period of the following (day and time of the month):
- work hours
- peak and off peak
- ETL period
- reports period
- (OLTP) transaction processing
- backup (RMAN, filesystem copy, tape, SAN mirroring)
Here are the things that we need during the tuning activity:
1) It is critical to have AWR/Statspack data; ideally it should represent the following workload periods:
- work hours
- peak and off peak
- ETL period
- reports period
- (OLTP) transaction processing
- backup (RMAN, filesystem copy, tape, SAN mirroring)
The snap period (interval) should be at least 15 minutes, and the data retention should be at least 30 days to have enough data samples during workload characterization (a sketch of setting this follows below).
AWR needs a Diagnostics and Tuning Pack license; Statspack is a free tool. Either of them should be installed.
2) SAR data of the database server
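-- a sketch of the AWR settings mentioned in 1) above (assumes a Diagnostics Pack license); both values are in minutes
exec dbms_workload_repository.modify_snapshot_settings(interval => 15, retention => 43200);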
Below are some of the tools that will be used during the activity:
• OSWatcher (Oracle OS Watcher) - Reports CPU, RAM, and network stress; an alternative for monitoring Oracle servers
• Perfsheet (Performance Visualization) – For session monitoring; uses an Excel sheet
• Ashmon (Active Session Monitoring) – For monitoring database sessions
• Lab 128 (trial software) – Tool for Oracle tuning, monitoring, and tracing SQL/stored procedure transactions
• SQLTXPLAIN (Oracle Extended Explain Plan Statistics) – Provides details about all schema objects on which the SQL statement depends
• Orasrp (Oracle Session Resource Profiler) – Builds a complete, detailed session profile
• Snapper (Oracle Session Snapper) - Reports Oracle session-level performance counters and wait information in real time
• Oracle LTOM (Oracle Lite Onboard Monitor) – Provides automatic session tracing
• AWR r2toolkit - A toolkit for workload characterization and forecasting
• gxplan - Visualization of the explain plan
}}}
References:
Total Performance Management http://www.allenhayden.com/cgi/getdoc.pl?file=perfmgmt.pdf
https://www.evernote.com/shard/s48/sh/e654bbad-d3e9-4ea4-b162-be9f2e7f736e/e8201e4222b04cba01ead70347fc5c77 ''<-- details snapper''
! another good format
{{{
1) Overview
* State the problem
2) Database and Workload Overview
Configuration
* Describe the platform/configuration/environment
Workload
* workload patterns, specific jobs during day/night
3) Performance Summary
itemize the findings
* State what you found. List things in order of importance (impact). Be concise. Show 1 graph that illustrates your observation even though you may have 3. These items should all be measurable, ie. # connect/disconnect per hour, IO’s per second, etc.
4) Recommendations
short term
* High impact items that can be done within 1 week to 1 month. Or items that are so easy to implement that it makes sense to just get them done and checked off the list. Also items that must be done before a good reading can be collected, or before higher impact items can be done.
near term
* Items that will take 2-3 months to implement.
long term
* Items that will take an extended time to implement due to the size of the effort or the development cycle.
5) Conclusion
* Summarize your report in 1-2 paragraphs.
6) Appendix A - details on specific issues
* Here is where you will put all the details. You can tell your stories here. Reference your stories, graphs, etc. from the body of your report (Observations, Recommendations)
...
Appendix B - References
}}}
! another good format
{{{
EXECUTIVE SUMMARY - Not all reports deserve a Table of Contents (ToC) and an Executive Summary (ES). Use both only if your report is becoming too lengthy, else remove both the ToC and ES.
An Executive Summary should be detailed enough for someone to read and get a good idea of what is going on. At the same time it must be brief, factual, and fluid. Some people will only read this section. Proofread it as much as you can.
FINDINGS - Include in this section your findings at the highest level possible. The list below is just an example; you will have a different list. The order of this list of findings loosely matches the high and medium impact sections of your report
RECOMMENDATIONS - There should be almost a one to one relationship between Findings and Recommendations bullets. But sometimes one Finding may spawn two Recommendations, or two Findings can be solved with one Recommendation
OVERVIEW - State the problem. This section should be short. Try to keep it that way. Describe the WHY this engagement and WHAT are the goals. This section is usually less than one page long. Anything between half a page and two pages is fine.
Putting one graph is OK, if it provides some high level view of what is this engagement about.
This section is like a high-level situation summary. It tells the story that brought us here, but it is brief.
SYSTEM CONFIGURATION - Describe the platform/configuration/environment. This section describes hardware and software but not the applications. It includes version of database and size. If there is anything installed on the system other than the database, it is briefly described here.
It includes CPU, Memory and Storage characteristics. On Exadata systems this section defines: Exadata, CRS and Database versions; System model (X2-?, X3-?, X4-?); size of Rack; ASM. In single instance systems, this section may take less than ½ page. On RAC it may take more than ½. And in Exadata it may take one full page or even more. Please try not to go over 2 pages. Use a simple Word Table, or just Tabs.
APPLICATIONS - Description of the applications on this database. What they do, users, amount of data, growth, major interfaces. List concerns if any. Sometimes DBAs know little about the applications, so you may need to talk to the Developers, or to some users.
FINDINGS
HIGH IMPACT - Most important finding goes here. If you end up with 20 items as high, 10 as medium and 5 as low, that is fine. Ideally, you want to balance these 3 lists, but do not force this balance
MEDIUM IMPACT - First finding that is important but not “that” important to make it on the first list. We do not want everything on one “everything-is-urgent” list
LOW IMPACT - First finding that may need a change, but if we don’t implement it that is ok. For example: number of sessions is kind of high. If sessions were very high we may list it under medium
NO IMPACT - First finding that is clean or simply does not affect the system. For example: system statistics are not collected and have default values. Or, redo log is healthy
RECOMMENDATIONS
SHORT TERM - Items that can and should be implemented soon (usually within a week or a month). Or items that are so easy to implement that it makes sense to just get them done and checked off the list. Items that must be done before a good reading may be collected, or other higher impact items may be done. It is thus possible to have a medium or low impact item with a recommendation on this list, while pushing a high impact finding to the near term list if the former is a requirement for the latter.
For the most part, use your common sense. You want in this list those items that can be, or that should be implemented sooner than the rest.
NEAR TERM - Items that may take longer to implement (usually a month or more). It is common to list here those items that have to be implemented soon, but may require first the implementation of some of the “short term” items. Or items that need some coordination, like changes to the OS.
LONG TERM - Items that may take an extended time to implement due to the size of the effort or the development cycle (could be two or three months). For example: an upgrade
CONCLUSION - Summarize your report in less than one page if possible. Be positive in your closing remarks (and be factual all the time).
This section is like a high-level action plan. It tells the story of what needs to be done in order to improve the health of the system, but it is brief.
APPENDICES - Here is where you will put all the details and most of your graphs and cut&paste pieces. You can also tell more about your stories here. Use “Heading 2” style for each section of your appendices. For code or trace text, use font courier size 8 dark blue to make it more readable, while reducing footprint.
You may reference into here all your stories, graphs, etc. from the body of your report (Findings and Recommendations).
If you make a reference to a document provided by customer, you can cut and paste that particular piece, then place the actual document under a folder “Sources”. Refer to that document using its actual file name, for example awrrpt_1_10708_10709.txt.
Directory Structure Example:
• Folder: Customer Name Health-Check Report and Supporting Docs v09
o File: Customer Name Health-Check Report v09.docx
o File: Customer Name Health-Check Report v09.pdf
o Folder: Sources
Zip: alert_logs.zip
Zip: ash.zip
Zip: awr_tool_kit.zip
Zip: exachk.zip
File: Some_reference_document.pdf
A good size for a report is anywhere between 10 and 30 pages. For a one-week health-check, producing a report that is 10 to 20 pages long is fine. For a 3-week engagement, a report that is 20 to 30 pages long is normal. If you find yourself with a report that is 50+ pages then most probably you want to remove some big chunks and make them separate files under the Sources directory. Then have a short summary and a reference in your report instead.
Avoid cluttering your report since it makes it harder for everyone to read it. So, keep it simple, factual, bullet oriented and with a nice and natural flow from paragraph to paragraph, and from section to section.
}}}
! another good format I used before
[img[ https://i.imgur.com/dCSgpMF.png ]]
! commands
{{{
@snapper out 1 120 "select sid from v$session where status = 'ACTIVE'"
@snapper all 1 5 qc=276
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 5 qc=138
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 1 sid=2164
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 5 "select sid from v$session where program like 'nqsserver%'"
@snapper ash=event+wait_class,stats,gather=ts,tinclude=CPU,sinclude=redo|reads|writes 5 5 "select sid from v$session where username like 'DBFS%'" <-- get sysstat values
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session" <-- ALL PROCESSES - start with this!
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats,gather=a 5 5 "select sid from v$session" <-- ALL PROCESSES - with BUFG and LATCH
@snapper ash=event+wait_class,stats,gather=tsw,tinclude=CPU,sinclude=redo|reads|writes 5 5 "select sid from v$session where username like 'USER%' or program like '%DBW%' or program like '%CKP%' or program like '%LGW%'" <-- get ASM redundancy/parity test case
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'DBFS' or program like '%SMC%' or program like '%W00%'" <-- get DBFS and other background processes
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 5 ALL
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 1374
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'DBFS'"
-- the snapperloop, copy the snapperloop file in the same directory then do a spool then run any of the commands below
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 263
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'SYSADM' and module = 'EX_APPROVAL'"
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'SYSADM'"
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'SYSADM' and sid = 821"
@snapper "stats,gather=s,sinclude=total IO requests|physical.*total bytes" 10 1 all <-- get the total IO MB/s
-- snapper manual begin and end
select sid from v$session where sql_id = '8gqt47kymkn6u'
set serveroutput on
var snapper refcursor
@snapper4.sql all,begin 5 1 1538
@snapper4.sql all,end 5 1 1538
-- for non-exadata
vmstat 2 100000000 | while read line; do echo "`date +%T`" "$line" ; done >> vmstat_1.txt
iostat -xcd 1 100000 | while read line; do echo "`date +%T`" "$line" ; done >> iostat_1.txt
mpstat -P ALL 1 100000 | while read line; do echo "`date +%T`" "$line" ; done >> mpstat_1.txt
while : ; do top -c -n 45; echo "--"; sleep 1; done | while read line; do echo "`date +%T`" "$line" ; done >> top_1.txt
-- sort the output by IO latency
$ less iostat_1.txt | sort -rnk11 | less
$ less snapper1.txt | grep -i "db file" | sort -rnk5 -t',' | less
-- for exadata
while : ; do dcli -l root -g all_group uptime >> uptime.txt; echo "-"; sleep 2 ; done
dcli -l root -g all_group --vmstat 2 >> vmstat.txt
less /opt/oracle.oswatcher/osw/archive/oswtop/pd01db03.us.cbre.net_top_12.03.08.1600.dat.bz2 | grep -A20 "load average:" > loadspike.txt
-- metric_iorm.pl, spooled with a timestamp
while : ; do ./metric_iorm.pl >> metriciorm_1.txt; echo "--"; sleep 10; done | while read line; do echo "`date +%T`" "$line" ; done
-- iorm.sh wrapper to run metric_iorm.pl across all cells via dcli
[root@enkcel04 ~]# cat iorm.sh
dcli -l root -g /root/cell_group -x metric_iorm.pl | while read line; do echo "`date +%T`" "$line" ; done
while : ; do ./iorm.sh ; echo "---" ; sleep 10; done >> iorm.txt
cat iorm.txt | grep "Total Disk Throughput"
-- solaris
vmstat 1 100000 | while read line; do echo "`date +%T`" "$line" ; done >> vmstat_1.txt
iostat -xnc 1 100000 | while read line; do echo "`date +%T`" "$line" ; done >> iostat_1.txt
while : ; do top -c -n 45; echo "--"; sleep 1; done | while read line; do echo "`date +%T`" "$line" ; done >> top_1.txt
prstat -mL | while read line; do echo "`date +%T`" "$line" ; done >> prstat_1.txt
mpstat 1 100000 | while read line; do echo "`date +%T`" "$line" ; done >> mpstat_1.txt
-- solaris lockstat and prstat
lockstat -o lockstat.out5 -C -i 997 -s10 -D20 -n 1000000 sleep 60
lockstat -C sleep 5 > lockstat-C.out
lockstat -H sleep 5 > lockstat-H.out
lockstat -kIW sleep 5 > lockstat-kIW.out
lockstat -kgIW sleep 5 > lockstat-kgIW.out
lockstat -I sleep 5 > lockstat-I.out
/usr/bin/prstat -Z -n 1 5 40 > prstatz.out2
/usr/bin/prstat -mL 5 40 >prstatml.out2
less lockstat.out5 | grep "% " | sort -rnk1 | less
less lockstat-C.out | grep -B1 -A1 Hottest | sort -nk1
less lockstat-H.out | grep -B1 -A1 Hottest | sort -nk1
less lockstat-kIW.out | grep -B1 -A1 Hottest | sort -nk1
less lockstat-kgIW.out | grep -B1 -A3 Hottest | sort -nk1
less prstat -Z
less prstat -mL
-- solaris ASM troubleshooting
truss -aefo asm.out asmcmd ls DATASBX/TGR
export DBI_TRACE=1
asmcmd
http://www.brendangregg.com/DTrace/iotop
./iotop -CP 5 10
-- aix
lparstat 10 1000000 | while read line; do echo "`date +%T`" "$line" ; done >> lparstat.txt &
then do
cat lparstat.txt | sort -rnk6 | more
iostat -DRTl 10 100
iostat -st 10 100
-- quickly kill a session
* top -c
* copy the output on kill.txt
* on command line do this
Karl@Karl-LaptopDell ~/home
$ cat kill.txt | grep "biprd2 (" | awk '{print "kill -9 " $1}'
kill -9 14349
kill -9 15425
kill -9 3735
}}}
http://books.perl.org/topx
http://use.perl.org/~Ovid/journal/29332
http://www.perlmonks.org/?node_id=543480
http://www.amazon.com/Only-the-best-Perl-books/lm/1296HDTC2HVBH
-- perl pattern matching
http://www.tjhsst.edu/~dhyatt/perl/exA.html
http://www.addedbytes.com/cheat-sheets/regular-expressions-cheat-sheet/
http://xenon.stanford.edu/~xusch/regexp/analyzer.html
http://stackoverflow.com/questions/8286796/how-to-debug-perl-within-a-bash-wrapper
http://www.thegeekstuff.com/2010/05/perl-debugger/
http://www.mail-archive.com/beginners@perl.org/msg92480.html
{{{
$ cat dbd-test.pl
#!/u01/app/oracle/product/11.2.0/dbhome_1/perl/bin/perl
# quick DBD::Oracle connectivity test: lists all tablespace names
use strict;
use warnings;
use DBI;

my $dbname   = 'paprd1';
my $user     = 'system';
my $password = 'Ske1et0n';
my $dbd      = 'Oracle';
my $conn     = "dbi:$dbd:$dbname";

print "Connecting to database\n";
my $dbh = DBI->connect($conn, $user, $password) or die $DBI::errstr;
my $cur = $dbh->prepare('select tablespace_name from dba_tablespaces');
$cur->execute();
while (my ($tablespace_name) = $cur->fetchrow_array) {
    print "$tablespace_name\n";
}
$dbh->disconnect;
}}}
Logical Reads vs Physical Reads
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6643159615303
Do fast full index scans do physical disk reads?
http://www.mail-archive.com/oracle-l@fatcity.com/msg23688.html
* smartd is a really useful tool and there's good documentation here http://smartmontools.sourceforge.net/badblockhowto.html, plus a calculator that you can use http://homepage2.nifty.com/cars/misc/chs2lba.html
* it all boils down to replacing the HD.. but first you need to tie the failed block device to the physical drive's serial number and its location in the chassis (see the smartctl sketch after these links)..
http://forums.fedoraforum.org/showthread.php?t=122196
http://www.linuxjournal.com/content/know-when-your-drives-are-failing-smartd
http://serverfault.com/questions/64239/physically-identify-the-failed-hard-drive
http://www.linuxquestions.org/questions/linux-general-1/how-do-physically-identify-a-failed-raid-disk-561021/
http://www.techrepublic.com/blog/opensource/using-smartctl-to-get-smart-status-information-on-your-hard-drives/1389
''smart gui tool'' http://unixfoo.blogspot.com/2009/03/gsmartcontrol-gui-for-smartctl.html
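A minimal sketch of the smartctl side of that workflow (/dev/sda is just an example device):
{{{
smartctl -i /dev/sda | grep -i serial                    # serial number, to match the physical drive
smartctl -H /dev/sda                                     # overall SMART health
smartctl -A /dev/sda | egrep -i "reallocated|pending"    # bad-sector attributes
}}}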
http://morebigdata.blogspot.com/2012/09/pignalytics-pigs-eat-anything-reading.html
https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool
http://moluccan.co.uk/Joomla/index.php/crib-sheet/287-pinning-cluster-nodes
http://www.filibeto.org/sun/lib/nonsun/oracle/11.2.0.1.0/E11882_01/install.112/e10816/postinst.htm#BABGIJDH
http://blog.ronnyegner-consulting.de/2010/03/11/creating-a-oracle-10g-release-2-or-11g-release-1-database-on-a-11g-release-2-cluster/
<<showtoc>>
<<<
@@
Starting with version 12.1.0.2, the "Plan Line ID" was introduced in the SQL Monitor - Plan Tab. For versions below 12.1.0.2, to get the "Plan Line ID" on the Plan Tab, use SQL Developer 4.2
@@
<<<
<<<
@@ What's important about the Plan Line ID ? @@
On SQL Monitor reports the first two sections are the Plan Statistics and Plan tabs. The way you correlate these two pieces of information is through the Plan Line ID.
* Plan Statistics - is where you check for details where the time is being spent on the execution plan
* Plan Tab - contains the critical information for drilling down further (see wiki [[“Object Name”, “Access Predicates”, “Filter Predicates”, “Projection”]]) on the specific part of the SQL text
@@Imagine ''__without__'' Plan Line ID on the Plan Tab section of SQL Monitor: you are reading 1000 lines of SQL execution plan and the bottleneck is on line 342. What would you do? Well, you start from line 1, then hit the down arrow key and count 342 times until you reach that specific Plan Line.@@ As a consultant dealing with remote databases, if this is the only info you have it's not a good troubleshooting experience, and you are better off getting this info using DBMS_XPLAN.DISPLAY_CURSOR (Column Projection and Predicate filter/access Information sections) - see the sketch after this section
On SQL Developer 4.2 there's an enhanced "Real Time SQL Monitoring viewer" where the Plan Tab section's Plan Line ID is aligned with “Obj”, “Access”, “Filter”, “Projection”, “QBlock” - all of that info is exposed in every row, with no more clicking on every line (yes, you have to click on every line of a SQL Monitor html report to expose the filter/access predicates). This improvement is very powerful and makes troubleshooting very easy.
In SQL Monitor or the SQL Developer "Real Time SQL Monitoring viewer" you have to flip back and forth between the Plan Statistics and Plan tabs. I personally like having the "time spent" and "other plan info" shown all in one line. That's why I use @@DB Optimizer - my go-to SQL profiling/tuning tool (see the bottom section of this post)@@ whenever I'd like to dig deeper into the business logic behind the SQL. It makes it easy to go through hundreds of lines of exec plan with all info in one line, VST and SQL_TEXT (with dynamic highlighting) on top, and system-level and session-level profiling on another tab. And all of this can be saved offline! :)
<<<
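Here's the DBMS_XPLAN route mentioned above, as a minimal sketch (row source stats require statistics_level=ALL or the gather_plan_statistics hint):
{{{
select * from table(dbms_xplan.display_cursor('&sql_id', null, 'ALLSTATS LAST +PROJECTION'));
}}}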
! 12.1.0.2
!! yes Plan Line ID on SQL Monitor Plan Tab was introduced
<<<
Plan Statistics
[img(95%,95%)[http://i.imgur.com/azLlOxk.png]]
Plan Tab
[img(95%,95%)[http://i.imgur.com/MGU1JYt.png]]
<<<
! 12.1.0.1
!! no Plan Line ID on SQL Monitor Plan Tab
<<<
Plan Statistics
[img(95%,95%)[http://i.imgur.com/fojDlku.png]]
Plan Tab
[img(95%,95%)[http://i.imgur.com/Z0E9WyY.png]]
<<<
! @@ How to get Plan Tab - Plan Line ID from 11.2 (I think even 11.1 - SQL_PLAN_LINE_ID was introduced) to 12.1.0.1 ??? @@ <- Use SQL Developer 4.2
On version 4.2 of SQL Developer they enhanced the "Real Time SQL Monitoring viewer" http://www.oracle.com/technetwork/developer-tools/sql-developer/sqldev-newfeatures-v42-3211987.html
!! 12.1.0.1 SQL Developer -> Tools -> Real Time SQL Monitor
<<<
Plan Statistics
[img(95%,95%)[http://i.imgur.com/cQ5ctHG.png]]
Plan Tab
[img(95%,95%)[http://i.imgur.com/zWrVlDL.png]]
<<<
!! 11.2.0.4 SQL Developer -> Tools -> Real Time SQL Monitor
<<<
Plan Statistics
[img(95%,95%)[http://i.imgur.com/AnPiIE3.png]]
Plan Tab
[img(95%,95%)[http://i.imgur.com/HcMYsfA.png]]
<<<
! DB Optimizer - my go-to SQL profiling/tuning tool
<<<
@@DB Optimizer - Tuning Tab@@
* Here I can quickly correlate where the time is being spent vs the logic behind the SQL (VST - Visual SQL Tuning Diagram) vs the SQL TEXT. When I click on the SKEW (B) object, the corresponding sections in the SQL TEXT with any association to SKEW (B) get highlighted. The VST diagram area also shows the row counts and filter ratios/rows between joined tables, plus the join method used and the execution path (dark green - START, red - FINISH). Plus the object details (indexes, tables, stats, histograms) around the SQL_ID.
* Here we are doing a many-to-many join of the SKEW table (aliases A to D). And as we join step by step from A to D, you can see in the "Actual Statistics" section's "CR Buffer Gets" column that the number increases linearly per table join: from 1.67M (A->B) to 3.3M (B->C) to 4.9M (C->D). That's why this is a CPU/LIO intensive SQL.
* It produces these row source stats by injecting the gather_plan_statistics hint when you run the SQL. So yes, the SQL has to finish to get the "Actual Statistics" data, and here I modified the SQL from 10000000 to 1000. In cases where I can't easily do this "trick" to produce the "Actual Statistics" and the SQL would run for hours, I just grab the VST diagram and Execution Plan from DB Optimizer and correlate this info with SQL Monitor. But the pain here is DB Optimizer doesn't show the Plan Line ID, arghh. It does allow wildcard search on the execution plan, so I can just key in the specific operation, and from there I can highlight and get to my Plan Line ID and do the correlation. This issue is not a deal breaker for me, but I hope they fix it in the next release :)
[img(95%,95%)[http://i.imgur.com/5bY7S3R.png]]
While doing SQL tuning on one tab, you can also do system-level and session-level profiling on another tab. This interface looks like OEM. And it's very powerful, fast, and RAC-aware. Drilling down from system to session level is very easy. And I can read right away the PL/SQL packages/procedures where the SQL_ID bottleneck is coming from.
@@DB Optimizer - Profiling Tab@@
[img(95%,95%)[http://i.imgur.com/ptkNUEL.png]]
And the beauty of this is I can save my profiling and SQL tuning sessions offline together with my screenshots, SQL Monitor, and SQLD360 files.
[img(50%,50%)[http://i.imgur.com/TQYkUF4.png]]
<<<
! the testcase SQL I used
{{{
create table hr.skew as select * from dba_objects;
select /*+ monitor ordered
use_nl(b) use_nl(c) use_nl(d)
full(a) full(b) full(c) full(d) */
count(*)
from
hr.skew a,
hr.skew b,
hr.skew c,
hr.skew d
where
a.object_id = b.object_id
and b.object_id = c.object_id
and c.object_id = d.object_id
and rownum <= 10000000;
}}}
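A sketch of reproducing the "Actual Statistics" manually on this test case: add the gather_plan_statistics hint to the hint list above, run it, then:
{{{
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
-- the Buffers column here corresponds to the "CR Buffer Gets" discussion above
}}}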
''MindMap - Plan Stability'' http://www.evernote.com/shard/s48/sh/727c84ca-a25e-4ffa-89f9-4d1e96c471c4/dcad83781f8a07f8983e26fbb8c066a3
''Plan Stability - Apress Book (bind peek, ACS, dynamic sampling, cardinality feedback)'' - https://www.evernote.com/shard/s48/sh/013cd51e-e484-49ac-911b-e01bdd54ac06/ce780dd4ca02d3d0b72b493acf8c33fd
http://coskan.wordpress.com/2011/01/26/plan-stability-through-upgrade-to-11g-introduction/
<<<
1-Introduction
2-Building the test
3-Why is my plan changed?-bugfixes : how you can find which bug fix may have caused your plan change
4-Why is my plan changed?-new optimizer parameters : how you can find which parameter change/addition may have caused your plan change
5-Why is my plan changed?-extra nested loop : what is the new nested loop step you will see after 11G upgrade
6-Why is my plan changed?-stats : I will try to explain how to understand if your stats are the problem
7-Why is my plan changed?-adaptive cursor sharing : I will talk a “little” about adaptive cursor sharing which may cause different plans for binded sqls after upgrade
8-Opening plan change case on MOS-SQLT : I will try to save the time you spend with Oracle Support when you raise a call for post upgrade performance degradation
9-Plan Baselines-Introduction : What plan baselines are and how they work
10-Plan Baselines-Using SQL Tuning sets : How to create plan baselines from a tuning set?
11-Plan Baselines-Using SQL Cache : How to create plan baselines from the SQL Cache?
12-Plan Baselines-Moving Baselines : How to move your plan baselines between databases?
13-Plan Baselines-Faking Baselines : How to fake plan baselines?
14-Plan Baselines-Capturing Baselines : How to capture baselines?
15-Plan Baselines-Management : How to manage your baselines?
16-Testing Statistics with Pending Stats : I’ll go through how you can use pending statistics during upgrades
17-Comparing Statistics : I’ll explain how to compare statistics
18-Cardinality Feedback Feature : I’ll go through the new built-in cardinality feedback feature which may cause problems
19-Where is the sqlid of active session ? : I’ll show you how you can find your sql_id when it is null
20-Testing hintless database : I’ll explain how you can get rid of hints
21-Upgrade Day/Week : What needs to be ready for a smooth upgrade?
22-Before after analysis-mining problems : How you can spot possible problems by comparing tuning sets
23-Before after analysis-graphs to sell : Using perfsheet to sell your work
24-Further Reading : Compilation of references I used during the series and some helpful links
25-Tools used : Index of the tools I used during series
<<<
Part 1 - http://avdeo.com/2011/06/02/oracle-sql-plan-management-part-1/
Part 2 - http://avdeo.com/2011/06/07/oracle-sql-plan-management-%e2%80%93-part-2/
Part 3 - http://avdeo.com/2011/08/07/oracle-sql-plan-management-%E2%80%93-part-3/
{{{
http://kerryosborne.oracle-guy.com/2008/09/sql-tuning-advisor/
http://kerryosborne.oracle-guy.com/2008/10/unstable-plans/
http://kerryosborne.oracle-guy.com/2008/10/explain-plan-lies/
http://kerryosborne.oracle-guy.com/2008/12/oracle-outlines-aka-plan-stability/
http://kerryosborne.oracle-guy.com/2009/03/bind-variable-peeking-drives-me-nuts/
http://kerryosborne.oracle-guy.com/2009/04/oracle-sql-profiles/
http://kerryosborne.oracle-guy.com/2009/04/do-sql-plan-baselines-use-hints/
http://kerryosborne.oracle-guy.com/2009/04/do-sql-plan-baselines-use-hints-take-2/
http://kerryosborne.oracle-guy.com/2009/05/awr-dbtime-script/
http://oracle-randolf.blogspot.com/2009/03/plan-stability-in-10g-using-existing.html
http://jonathanlewis.wordpress.com/2008/03/06/dbms_xplan3/
http://antognini.ch/papers/SQLProfiles_20060622.pdf
}}}
hourim.wordpress.com series
https://hourim.wordpress.com/2021/07/28/why-my-execution-plan-has-not-been-shared-part-7/
https://hourim.wordpress.com/2020/05/10/why-my-execution-plan-has-not-been-shared-part-6/
https://blog.toadworld.com/2017/06/13/why-my-execution-plan-has-not-been-shared-part-v
https://blog.toadworld.com/2017/05/05/why-my-execution-plan-has-not-been-shared-part-iv
https://blog.toadworld.com/why-my-execution-plan-has-not-been-shared-part-iii
https://blog.toadworld.com/why-my-execution-plan-has-not-been-shared-part-ii
https://blog.toadworld.com/why-my-execution-plan-has-not-been-shared-part-i
ACS - adaptive cursor sharing
http://www.nocoug.org/download/2012-05/Kevin_Closson_Modern_Platform_Topics.pdf
actual presentation at OakTable World http://www.youtube.com/watch?v=S8Ih1NpOlNI#start=0:00;end=37:47;cycles=-1;autoreplay=false;showoptions=false
{{{
CBS -> Newest Full Episodes
ESPN3
Fox News -> Latest News
LiveNews -> Bloomberg
Youtube -> Most Popular
MTV -> Shows
}}}
http://en.wikipedia.org/wiki/Polynomial_regression
http://office.microsoft.com/en-gb/help/choosing-the-best-trendline-for-your-data-HP005262321.aspx
http://newtonexcelbach.wordpress.com/2011/02/04/fitting-high-order-polynomials/
http://stats.stackexchange.com/questions/19555/finding-degree-of-polynomial-in-regression-analysis
http://www.purplemath.com/modules/polyends.htm
http://www.algebra.com/algebra/homework/Polynomials-and-rational-expressions/Polynomials-and-rational-expressions.faq.question.538327.html
How To Setup ASM (10.2 & 11.1) On An Active/Passive Cluster (Non-RAC). [ID 1319050.1] <-- a different variety..
How To Setup ASM (11.2) On An Active/Passive Cluster (Non-RAC). [ID 1296124.1]
http://blogs.oracle.com/xpsoluxdb/entry/clusterware_11gr2_setting_up_an_activepassive_failover_configuration <-- GOOD STUFF using ACTION_SCRIPT
<<showtoc>>
! tour
https://www.pluralsight.com/courses/tekpub-postgres
! workflow
! installation and upgrade
! commands
! performance and troubleshooting
!! sizing and capacity planning
!! benchmark
!! troubleshooting
https://www.slideshare.net/SvetaSmirnova/performance-schema-for-mysql-troubleshooting-75654421
! high availability
! security
.
<<showtoc>>
! merge
https://wiki.postgresql.org/wiki/MergeTestExamples
https://blog.dbi-services.com/postgres-vs-oracle-access-paths-0/
<<<
Postgres vs. Oracle access paths – intro
Postgres vs. Oracle access paths I – Seq Scan
Postgres vs. Oracle access paths II – Index Only Scan
Postgres vs. Oracle access paths III – Partial Index
Postgres vs. Oracle access paths IV – Order By and Index
Postgres vs. Oracle access paths V – FIRST ROWS and MIN/MAX
Postgres vs. Oracle access paths VI – Index Scan
Postgres vs. Oracle access paths VII – Bitmap Index Scan
Postgres vs. Oracle access paths VIII – Index Scan and Filter
Postgres vs. Oracle access paths IX – Tid Scan
Postgres vs. Oracle access paths X – Update
Postgres vs. Oracle access paths XI – Sample Scan
<<<
''18000 mAh - .4 kilos'' http://www.buy.com/prod/energizer-xp18000-emergency-power-for-notebooks/q/loc/111/212003408.html
''6,000 mAh - .2 kilos'' http://www.zagg.com/accessories/zaggsparq.php
http://www.energizerpowerpacks.com/us/products/xp8000/
http://www.energizerpowerpacks.com/us/products/xp18000/
http://pc.mmgn.com/Forums/social/Energizer-XP-18000-Universal-P
emc118561 Sistina LVM2 is reporting duplicate PV on RHEL
emc120281 How to set up a Linux host to use emcpower devices in LVM
Configuring Oracle ASMLib on Multipath Disks on Linux [ID 394956.1] <-- not detailed EMC Powerpath
Configuring Oracle ASMLib on Multipath Disks [ID 309815.1] <-- detailed EMC Powerpath
ORA-15072 when creating a diskgroup with external redundancy [ID 396015.1] <-- If EMC based storage but use the normal Linux multipath driver is used, then the following map settings should be set in /etc/sysconfig/oracleasm
How to List the Single Path Devices for an EMC PowerPath Multipathing Device [ID 420839.1] <-- EMC Powerpath on 2.4 kernel
How To Setup ASM on Linux Using ASMLIB Disks, Raw Devices or Block Devices? [ID 580153.1] <-- mentions 10gR2 and 11gR2 configuration
ASM 11.2 Configuration KIT (ASM 11gR2 Installation & Configuration, Deinstallation, Upgrade, ASM Job Role Separation. [ID 1092213.1] <-- ASM 11gR2
Oracle ASM and Multi-Pathing Technologies [ID 294869.1] <-- details all the multipathing technologies!!!
http://www.oracle.com/technetwork/database/asm.pdf <-- another guide more comprehensive that details multipathing technologies!!!
http://www.oracle.com/technetwork/topics/linux/multipath-097959.html Configuring Oracle ASMLib on Multipath Disks
http://www.oracle.com/technetwork/database/device-mapper-udev-asm.pdf Configuring udev and device mapper for Oracle RAC 10g Release 2 on SLES9
http://www.emcstorageinfo.com/2007/07/emc-powerpath-pseudo-devices.html <-- nice visualization of EMC Powerpath
http://goo.gl/xCjAi <-- Powerpath install guide
http://www.oracle.com/technetwork/database/netapp-asm3329-129196.pdf <-- Netapp ASM
Master Note for Automatic Storage Management (ASM) [ID 1187723.1]
Consolidated Reference List Of Notes For Migration / Upgrade Service Requests [ID 762540.1] <-- migration consolidated SRs
ASMLIB Interacting with persistent names generated by udev or devlabel [ID 372783.1] <-- ASMLIB uses file /proc/partitions, mentions ORACLEASM_SCANORDER=emcpower
FAQ ASMLIB CONFIGURE,VERIFY, TROUBLESHOOT [ID 359266.1] <-- mentions ORACLEASM_SCANORDER=emcpower
http://www.james.labocki.com/?p=155 <-- Configuring Oracle ASM on Enterprise Linux 5
http://jcnarasimhan.blogspot.com/2009/08/managing-asm-disk-discovery.html <-- nice guide
http://forums.oracle.com/forums/thread.jspa?threadID=910819&tstart=0 <-- the forum
http://it.toolbox.com/blogs/surachart/check-the-device-asmlib-on-multipath-32222
http://www.freelists.org/post/oracle-l/Can-ASMLib-and-EMC-PowerPath-work-together
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/App_Networking/Oracle11gR2_RAC_B200-M1_8_Node_Certification.pdf <-- Deploying Oracle 11gR2 RAC on the Cisco Unified Computing System with EMC CLARiiON Storage
http://www.oracle.com/technetwork/database/asm-on-emc-5-3-134797.pdf <-- Using Oracle Database 10g’s Automatic Storage Management with EMC Storage Technology
http://www.ardentperf.com/2008/02/13/oracle-clusterware-on-rhel5oel5-with-udev-and-multipath/
http://blog.capdata.fr/index.php/installation-asm-sur-suse-10-en-64-bits-avec-multipathing-emc-powerpath/
http://gjilevski.wordpress.com/2008/05/07/automatic-storage-management-asm-faq-for-oracle-10g-and-11g-r1/ <--ASM FAQ
LVM on multipath http://christophe.varoqui.free.fr/faq.html
MDADM multipath http://www.linuxtopia.org/online_books/rhel5/installation_guide/rhel5_s2-s390info-multipath.html
DM-Multipath http://willsnotes.wordpress.com/2010/10/13/linux-rhel-5-configuring-multipathing-with-dm-multipath/
Comparison of Powerpath vs dm-multipath http://blog.thilelli.net/post/2009/02/09/Comparison%3A-EMC-PowerPath-vs-GNU/Linux-dm-multipath
http://blog.thilelli.net/post/2007/12/01/Nifty-Tool-For-Querying-Heterogeneous-SCSI-Devices
Microsoft script center http://technet.microsoft.com/en-us/scriptcenter/bb410849
-- coolmaster 800W
http://www.youtube.com/watch?v=IE-cO2mqTGQ
-- Sun servers power calculators
http://www.oracle.com/us/products/servers-storage/sun-power-calculators/index.html
http://laurentschneider.com/wordpress/2010/10/whats-your-favorite-shell-in-windows.html
Using DTrace to understand mpstat and vmstat output http://prefetch.net/articles/dtracecookbook.html
Top Ten DTrace (D) Scripts http://prefetch.net/articles/solaris.dtracetopten.html
Observing I/O Behavior With The DTraceToolkit http://prefetch.net/articles/observeiodtk.html
http://arup.blogspot.com/2011/01/what-makes-great-presentation.html
/***
|Name:|PrettyDatesPlugin|
|Description:|Provides a new date format ('pppp') that displays times such as '2 days ago'|
|Version:|1.0 ($Rev: 3646 $)|
|Date:|$Date: 2008-02-27 02:34:38 +1000 (Wed, 27 Feb 2008) $|
|Source:|http://mptw.tiddlyspot.com/#PrettyDatesPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
!!Notes
* If you want to you can rename this plugin. :) Some suggestions: LastUpdatedPlugin, RelativeDatesPlugin, SmartDatesPlugin, SexyDatesPlugin.
* Inspired by http://ejohn.org/files/pretty.js
***/
//{{{
Date.prototype.prettyDate = function() {
	var diff = (((new Date()).getTime() - this.getTime()) / 1000);
	var day_diff = Math.floor(diff / 86400);
	if (isNaN(day_diff)) return "";
	else if (diff < 0) return "in the future";
	else if (diff < 60) return "just now";
	else if (diff < 120) return "1 minute ago";
	else if (diff < 3600) return Math.floor(diff/60) + " minutes ago";
	else if (diff < 7200) return "1 hour ago";
	else if (diff < 86400) return Math.floor(diff/3600) + " hours ago";
	else if (day_diff == 1) return "Yesterday";
	else if (day_diff < 7) return day_diff + " days ago";
	else if (day_diff < 14) return "a week ago";
	else if (day_diff < 31) return Math.ceil(day_diff/7) + " weeks ago";
	else if (day_diff < 62) return "a month ago";
	else if (day_diff < 365) return "about " + Math.ceil(day_diff/31) + " months ago";
	else if (day_diff < 730) return "a year ago";
	else return Math.ceil(day_diff/365) + " years ago";
}

Date.prototype.formatString_orig_mptw = Date.prototype.formatString;
Date.prototype.formatString = function(template) {
	return this.formatString_orig_mptw(template).replace(/pppp/,this.prettyDate());
}
// for MPTW. otherwise edit your ViewTemplate as required.
// config.mptwDateFormat = 'pppp (DD/MM/YY)';
config.mptwDateFormat = 'pppp';
//}}}
To prevent certain IP addresses from connecting to the database, you have to add 2 parameters to the SQLNET.ORA file of your database and then restart the listener.
The 2 parameters are:
tcp.validnode_checking = yes
tcp.excluded_nodes = (155.23.0.100)
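For completeness, a sketch of the same sqlnet.ora with the allow-list variant as well (the IPs/hostnames are examples):
{{{
tcp.validnode_checking = yes
tcp.excluded_nodes = (155.23.0.100)
# or, inverting the logic, allow only known nodes:
# tcp.invited_nodes = (10.0.0.5, appserver1)
}}}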
http://answers.microsoft.com/en-us/windows/forum/windows_7-windows_install/reactivating-windows-7-after-moving-to-a-different/3d5e2cdd-e4c6-4951-ae8a-d25c0c3db0a0
http://answers.microsoft.com/en-us/windows/forum/windows_7-windows_install/a-problem-in-activating-a-geographically/af737349-797a-e011-9b4b-68b599b31bf5
http://sourceforge.net/projects/oraresprof/
http://www.dbasupport.com/oracle/ora10g/open_source1.shtml
http://www.dbasupport.com/oracle/ora10g/open_source2.shtml
http://www.sqltools-plusplus.org:7676/links.html
http://sourceforge.net/projects/hotsos-ilo/
http://sourceforge.net/projects/hotsos-ilo/#item3rd-1
http://www.oracledba.ru/orasrp/
http://sourceforge.net/projects/etprof
Carry forms Method-R
http://www.prweb.com/releases/2008/05/prweb839554.htm
Alex
http://www.pythian.com/blogs/author/alex
source code of ORASRP
https://twiki.cern.ch/twiki/bin/view/PSSGroup/SQLTraceAnalysis
simple profiler
http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
How to set trace for others sessions, for your own session and at instance level
http://www.petefinnigan.com/ramblings/how_to_set_trace.htm
pete downloads
http://www.petefinnigan.com/tools.htm
appsdba.com
http://www.appsdba.com/blog/?p=24
performance as a service
http://carymillsap.blogspot.com/2008/06/performance-as-service.html
https://orainternals.files.wordpress.com/2012/04/2012_327_riyaj_pstack_truss_doc.pdf
https://blogs.oracle.com/myoraclediary/entry/how_to_backup_putty_session
{{{
How to Backup PuTTY Session
By user782636 on Feb 16, 2012
1). On the source computer, open up a command prompt and run:
regedit /ea puTTY.reg HKEY_CURRENT_USER\Software\SimonTatham\PuTTY
The above will create a file named puTTY.reg in the present working directory
2). Copy puTTY.reg onto the new computer
3). On the target computer, open up a command prompt and run:
regedit /s puTTY.reg
OR
Just double click the puTTY.reg file. This will add the registry contents to the windows registry after asking for confirmation.
}}}
http://linux-sxs.org/networking/openssh.putty.html
http://www.vanemery.com/Linux/VNC/vnc-over-ssh.html
http://www.windowstipspage.com/2010/06/configure-putty-connection-manager.html <-- must read for config settings
http://www.thegeekstuff.com/2009/03/putty-extreme-makeover-using-putty-connection-manager/
http://www.thegeekstuff.com/2009/07/10-practical-putty-tips-and-tricks-you-probably-didnt-know/ <-- migrate to another machine
http://dag.wieers.com/blog/content/improving-putty-settings-on-windows <-- save putty sessions
http://mxu.wikia.com/wiki/Cannot_bring_back_PuttyCM_windows_after_it_is_minimized <- application already started error "disable hide when minimized" (tools -> options)
* ''QoS uses ClusterHealthMonitor and analyses every minute''
* QoS makes use of the following technologies:
** policy managed databases
** and utilizes server pools to have that "true grid layer" to be able to automatically stand up instances in any available server/host
** QoS policies
** QoS metrics
** QoS recommendations
* these are the good stuff presentations on QoS
** QoS Management in a Consolidated Environment OOW 2011 Presentation http://www.oracle.com/technetwork/products/clusterware/qos-management-oow11-1569557.pdf, mixed workload QoS http://www.soug.ch/fileadmin/user_upload/Downloads_public/Breysse_Exadata_Workload_Management.pdf
** QOS ppt http://www.slideshare.net/prassinos/oracle-quality-of-service-management-meeting-slas-in-a-grid-environment
''official documentation''
New Feature on 11.2.0.2 http://download.oracle.com/docs/cd/E11882_01/server.112/e17128/chapter1_2.htm
QOS FAQ paper http://www.oracle.com/technetwork/database/exadata/faq-qosmanagement-511893.pdf
QOS OTN front page http://www.oracle.com/technetwork/database/clustering/overview/qosmanageent-508184.html
Introduction to Oracle Database QoS Management http://download.oracle.com/docs/cd/E11882_01/server.112/e24611/apqos_intro.htm#APQOS109
Installing and Enabling Oracle Database QoS Management - http://download.oracle.com/docs/cd/E11882_01/server.112/e24611/install_config.htm#APQOS151 <-- hhmmm it utilizes RAC server pools
-- qos
https://docs.oracle.com/cd/E11882_01/server.112/e24611/apqos_admin.htm#APQOS158
http://docs.oracle.com/cd/E11882_01/server.112/e24611/apqos_intro.htm#APQOS317
https://docs.oracle.com/database/121/RILIN/srvpool.htm#RILIN1247
http://docs.oracle.com/cd/E11882_01/server.112/e24611/apqos_admin.htm#APQOS158
https://www.dwavesys.com/
Query tuning by eliminating throwaway - Martin Berg
@@http://focalpoint.altervista.org/throwaway2.pdf@@
http://www.orafaq.com/maillist/oracle-l/2004/02/10/0119.htm
<<<
I've downloaded and read the paper, thanks to Mogens and Cary, and my impression is that the paper has some good ideas but is neither revolutionary nor overly applicable. The basic idea of the paper is that people usually process many more rows than necessary to produce the desired output. The excess rows are called "throwaway", thus the name of the article. The author then analyzes the throwaway caused by each access method (as of oracle 8.0.4, with the notable exceptions of bitmap methods, star schema and hash) and comes to the conclusion that the only cure is to properly index tables, so that predicates are resolved by using index scans. The problem is, in my opinion, exactly the opposite: how to design the database schema in order to be able to write queries that execute quickly. To that end, I found more useful material in Ralph Kimball's book "The Data Warehouse Toolkit" and in Jonathan's and Tom Kyte's books than in Martin's article. I must confess that the whole debate made me very curious about Dan's book and that I ordered it from Barnes & Noble, but as I am busy with the 10g, it will have to wait at least 4 to 6 weeks.
I am not writing this to denigrate Martin's effort, but to basically point out that Anjo's, Cary's and Jonathan's method based on the wait interface, together with business knowledge (one must understand what it is that he or she wants to accomplish, in the first place), is the ultimate in SQL tuning. There is no easy method that will take a horrendous query, one that would justify capital punishment for the author, insert it into a "method", and then, following a few easy steps, end up with a missile which will execute in milliseconds. If someone wants to find the number of AT&T subscribers per state, he or she will have to do something like:
SELECT STATE, COUNT(*) FROM SUBSCRIBERS GROUP BY STATE;
Given the number of subscribers, it will be a big query and there is no room for improvement. What I see as my role is to prevent a design which would result in splitting the subscriber entity into several tables, therefore making the query above into a join. If that happens, no amount of methodical SQL tuning will help.
<<<
{{{
alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
select sysdate from dual;
SELECT SESSIONTIMEZONE FROM DUAL;
SELECT current_timestamp FROM DUAL;
SELECT dbtimezone FROM DUAL;
}}}
! exercise
{{{
select to_timestamp(localtimestamp) from dual;
select from_tz(to_timestamp(localtimestamp), 'UTC') at time zone 'America/New_York' from dual;
-- considers DST, in my case the column needs to be DATE instead of TIMESTAMP
select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC') at time zone 'America/New_York' AS Date) as local_time from dual;
05:20:32 SYS@cdb1> create table test_tz (ts_col date);
Table created.
05:20:40 SYS@cdb1> insert into test_tz select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC') at time zone 'America/New_York' AS Date) as local_time from dual;
1 row created.
05:20:45 SYS@cdb1> select * from test_tz;
TS_COL
--------------------
14-NOV-2019 00:20:45
05:20:51 SYS@cdb1> select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC') at time zone 'America/New_York' AS Date) as local_time from dual;
LOCAL_TIME
--------------------
14-NOV-2019 00:21:03
-- playing with TIMESTAMP
05:02:42 SYS@cdb1> !date
Thu Nov 14 05:04:07 UTC 2019
05:04:07 SYS@cdb1> select to_timestamp(localtimestamp) from dual;
TO_TIMESTAMP(LOCALTIMESTAMP)
---------------------------------------------------------------------------
14-NOV-19 05.04.09.526736 AM
05:04:09 SYS@cdb1> !date
Thu Nov 14 05:07:40 UTC 2019
05:07:40 SYS@cdb1> select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC') at time zone 'America/New_York' AS Date) as DESIRED_FIELD_NAME from dual;
DESIRED_FIELD_NAME
--------------------
14-NOV-2019 00:07:42
05:07:42 SYS@cdb1> select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC') at time zone 'America/New_York' AS Date) as local_time from dual;
LOCAL_TIME
--------------------
14-NOV-2019 00:08:22
05:08:22 SYS@cdb1> create table test_tz (timestamp(3)) ;
create table test_tz (timestamp(3))
*
ERROR at line 1:
ORA-00902: invalid datatype
05:16:46 SYS@cdb1> create table test_tz (ts_col timestamp(3));
Table created.
05:17:27 SYS@cdb1>
05:17:28 SYS@cdb1> insert into test_tz values ('14-NOV-2019 00:08:22');
insert into test_tz values ('14-NOV-2019 00:08:22')
*
ERROR at line 1:
ORA-01849: hour must be between 1 and 12
05:17:50 SYS@cdb1> insert into test_tz select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC') at time zone 'America/New_York' AS Date) as local_time from dual;
1 row created.
05:18:09 SYS@cdb1> select * from test_tz;
TS_COL
---------------------------------------------------------------------------
14-NOV-19 12.18.09.000 AM
05:18:18 SYS@cdb1> drop table test_tz purge;
Table dropped.
05:19:18 SYS@cdb1> create table test_tz (ts_col timestamp);
Table created.
05:19:23 SYS@cdb1> insert into test_tz select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC') at time zone 'America/New_York' AS Date) as local_time from dual;
1 row created.
05:19:28 SYS@cdb1> select * from test_tz;
TS_COL
---------------------------------------------------------------------------
14-NOV-19 12.19.28.000000 AM
05:19:31 SYS@cdb1> drop table test_tz purge;
Table dropped.
05:19:50 SYS@cdb1> create table test_tz (ts_col timestamp(0));
Table created.
05:20:00 SYS@cdb1> insert into test_tz select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC') at time zone 'America/New_York' AS Date) as local_time from dual;
1 row created.
05:20:05 SYS@cdb1> select * from test_tz;
TS_COL
---------------------------------------------------------------------------
14-NOV-19 12.20.05 AM
05:20:10 SYS@cdb1> drop table test_tz purge;
Table dropped.
-- get TIMEZONE OFFSET
select * from V$TIMEZONE_NAMES where lower(tzname) like '%mountain%';
SELECT TZ_OFFSET('US/Eastern') FROM DUAL;
SELECT TZ_OFFSET('US/Mountain') FROM DUAL;
05:43:15 SYS@cdb1> select * from V$TIMEZONE_NAMES where tzname like '%New%';
TZNAME TZABBREV CON_ID
---------------------------------------------------------------- ---------------------------------------------------------------- ----------
America/New_York LMT 0
America/New_York EST 0
America/New_York EDT 0
America/New_York EWT 0
America/New_York EPT 0
America/North_Dakota/New_Salem LMT 0
America/North_Dakota/New_Salem MST 0
America/North_Dakota/New_Salem MDT 0
America/North_Dakota/New_Salem MWT 0
America/North_Dakota/New_Salem MPT 0
America/North_Dakota/New_Salem CST 0
TZNAME TZABBREV CON_ID
---------------------------------------------------------------- ---------------------------------------------------------------- ----------
America/North_Dakota/New_Salem CDT 0
Canada/Newfoundland LMT 0
Canada/Newfoundland NST 0
Canada/Newfoundland NDT 0
Canada/Newfoundland NWT 0
Canada/Newfoundland NPT 0
Canada/Newfoundland NDDT 0
US/Pacific-New LMT 0
US/Pacific-New PST 0
US/Pacific-New PDT 0
US/Pacific-New PWT 0
TZNAME TZABBREV CON_ID
---------------------------------------------------------------- ---------------------------------------------------------------- ----------
US/Pacific-New PPT 0
23 rows selected.
05:43:29 SYS@cdb1> SELECT TZ_OFFSET('US/Eastern') FROM DUAL;
TZ_OFFS
-------
-05:00
05:45:44 SYS@cdb1> select * from V$TIMEZONE_NAMES where tzname like '%Eastern%';
TZNAME TZABBREV CON_ID
---------------------------------------------------------------- ---------------------------------------------------------------- ----------
Canada/Eastern LMT 0
Canada/Eastern EST 0
Canada/Eastern EDT 0
Canada/Eastern EWT 0
Canada/Eastern EPT 0
US/Eastern LMT 0
US/Eastern EST 0
US/Eastern EDT 0
US/Eastern EWT 0
US/Eastern EPT 0
10 rows selected.
05:45:57 SYS@cdb1> select tz_offset('America/New_York') from dual;
TZ_OFFS
-------
-05:00
-- CONVERT back and forth
18:08:00 SYS@cdb1> !date
Tue Nov 19 18:08:24 UTC 2019
18:08:24 SYS@cdb1> SELECT FROM_TZ(TIMESTAMP '2019-11-19 18:08:24', 'UTC') AT TIME ZONE 'US/Eastern' from dual;
FROM_TZ(TIMESTAMP'2019-11-1918:08:24','UTC')ATTIMEZONE'US/EASTERN'
---------------------------------------------------------------------------
19-NOV-19 01.08.24.000000000 PM US/EASTERN
18:08:53 SYS@cdb1> -- 1:09 PM is my laptop clock
18:09:15 SYS@cdb1>
18:09:18 SYS@cdb1> SELECT FROM_TZ(TIMESTAMP '2019-11-19 01:08:24', 'US/Eastern') AT TIME ZONE 'UTC' from dual;
FROM_TZ(TIMESTAMP'2019-11-1901:08:24','US/EASTERN')ATTIMEZONE'UTC'
---------------------------------------------------------------------------
19-NOV-19 06.08.24.000000000 AM UTC
18:10:15 SYS@cdb1> -- 18:08:24 is the server clock (06.08.24)
18:10:44 SYS@cdb1>
18:10:44 SYS@cdb1> -- it added 5 hours
18:11:00 SYS@cdb1>
18:11:00 SYS@cdb1> select * from V$TIMEZONE_NAMES where tzname like '%Eastern%';
TZNAME TZABBREV CON_ID
---------------------------------------------------------------- ---------------------------------------------------------------- ----------
Canada/Eastern LMT 0
Canada/Eastern EST 0
Canada/Eastern EDT 0
Canada/Eastern EWT 0
Canada/Eastern EPT 0
US/Eastern LMT 0
US/Eastern EST 0
US/Eastern EDT 0
US/Eastern EWT 0
US/Eastern EPT 0
10 rows selected.
18:11:11 SYS@cdb1> SELECT TZ_OFFSET('US/Eastern') FROM DUAL;
TZ_OFFS
-------
-05:00
}}}
/***
|Name:|QuickOpenTagPlugin|
|Description:|Changes tag links to make it easier to open tags as tiddlers|
|Version:|3.0.1 ($Rev: 3861 $)|
|Date:|$Date: 2008-03-08 10:53:09 +1000 (Sat, 08 Mar 2008) $|
|Source:|http://mptw.tiddlyspot.com/#QuickOpenTagPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
***/
//{{{
config.quickOpenTag = {

  dropdownChar: (document.all ? "\u25bc" : "\u25be"), // the little one doesn't work in IE?

  createTagButton: function(place,tag,excludeTiddler) {
    // little hack so we can do this: <<tag PrettyTagName|RealTagName>>
    var splitTag = tag.split("|");
    var pretty = tag;
    if (splitTag.length == 2) {
      tag = splitTag[1];
      pretty = splitTag[0];
    }
    var sp = createTiddlyElement(place,"span",null,"quickopentag");
    createTiddlyText(createTiddlyLink(sp,tag,false),pretty);
    var theTag = createTiddlyButton(sp,config.quickOpenTag.dropdownChar,
        config.views.wikified.tag.tooltip.format([tag]),onClickTag);
    theTag.setAttribute("tag",tag);
    if (excludeTiddler)
      theTag.setAttribute("tiddler",excludeTiddler);
    return(theTag);
  },

  miniTagHandler: function(place,macroName,params,wikifier,paramString,tiddler) {
    var tagged = store.getTaggedTiddlers(tiddler.title);
    if (tagged.length > 0) {
      var theTag = createTiddlyButton(place,config.quickOpenTag.dropdownChar,
          config.views.wikified.tag.tooltip.format([tiddler.title]),onClickTag);
      theTag.setAttribute("tag",tiddler.title);
      theTag.className = "miniTag";
    }
  },

  allTagsHandler: function(place,macroName,params) {
    var tags = store.getTags(params[0]);
    var filter = params[1]; // new feature
    var ul = createTiddlyElement(place,"ul");
    if(tags.length == 0)
      createTiddlyElement(ul,"li",null,"listTitle",this.noTags);
    for(var t=0; t<tags.length; t++) {
      var title = tags[t][0];
      if (!filter || (title.match(new RegExp('^'+filter)))) {
        var info = getTiddlyLinkInfo(title);
        var theListItem = createTiddlyElement(ul,"li");
        var theLink = createTiddlyLink(theListItem,tags[t][0],true);
        var theCount = " (" + tags[t][1] + ")";
        theLink.appendChild(document.createTextNode(theCount));
        var theDropDownBtn = createTiddlyButton(theListItem," " +
            config.quickOpenTag.dropdownChar,this.tooltip.format([tags[t][0]]),onClickTag);
        theDropDownBtn.setAttribute("tag",tags[t][0]);
      }
    }
  },

  // todo fix these up a bit
  styles: [
    "/*{{{*/",
    "/* created by QuickOpenTagPlugin */",
    ".tagglyTagged .quickopentag, .tagged .quickopentag ",
    " { margin-right:1.2em; border:1px solid #eee; padding:2px; padding-right:0px; padding-left:1px; }",
    ".quickopentag .tiddlyLink { padding:2px; padding-left:3px; }",
    ".quickopentag a.button { padding:1px; padding-left:2px; padding-right:2px;}",
    "/* extra specificity to make it work right */",
    "#displayArea .viewer .quickopentag a.button, ",
    "#displayArea .viewer .quickopentag a.tiddyLink, ",
    "#mainMenu .quickopentag a.tiddyLink, ",
    "#mainMenu .quickopentag a.tiddyLink ",
    " { border:0px solid black; }",
    "#displayArea .viewer .quickopentag a.button, ",
    "#mainMenu .quickopentag a.button ",
    " { margin-left:0px; padding-left:2px; }",
    "#displayArea .viewer .quickopentag a.tiddlyLink, ",
    "#mainMenu .quickopentag a.tiddlyLink ",
    " { margin-right:0px; padding-right:0px; padding-left:0px; margin-left:0px; }",
    "a.miniTag {font-size:150%;} ",
    "#mainMenu .quickopentag a.button ",
    " /* looks better in right justified main menus */",
    " { margin-left:0px; padding-left:2px; margin-right:0px; padding-right:0px; }",
    "#topMenu .quickopentag { padding:0px; margin:0px; border:0px; }",
    "#topMenu .quickopentag .tiddlyLink { padding-right:1px; margin-right:0px; }",
    "#topMenu .quickopentag .button { padding-left:1px; margin-left:0px; border:0px; }",
    "/*}}}*/",
    ""].join("\n"),

  init: function() {
    // we fully replace these builtins. can't hijack them easily
    window.createTagButton = this.createTagButton;
    config.macros.allTags.handler = this.allTagsHandler;
    config.macros.miniTag = { handler: this.miniTagHandler };
    config.shadowTiddlers["QuickOpenTagStyles"] = this.styles;
    store.addNotification("QuickOpenTagStyles",refreshStyles);
  }

}

config.quickOpenTag.init();
//}}}
! official docs
R style guide http://r-pkgs.had.co.nz/style.html
searchable documentation http://www.rdocumentation.org/
introduction to R http://cran.r-project.org/doc/manuals/R-intro.html
! open courses
https://www.datacamp.com/community/open-courses
! structured learning
Introduction to R
https://www.datacamp.com/courses/introduction-to-r
Try R
@@http://tryr.codeschool.com/levels/1/challenges/1 <- ''vectors, matrices, summary statistics''@@
R programming fundamentals
@@http://www.pluralsight.com/courses/table-of-contents/r-programming-fundamentals <- ''functions, flow control, packages, import data, exploring data with R''@@
data.table
https://www.datacamp.com/courses/data-analysis-the-data-table-way
Data Analysis and Statistical Inference
https://www.datacamp.com/courses/data-analysis-and-statistical-inference_mine-cetinkaya-rundel-by-datacamp
Introduction to Computational Finance and Financial Econometrics
https://www.datacamp.com/courses/introduction-to-computational-finance-and-financial-econometrics
dplyr
https://www.datacamp.com/courses/dplyr
Build Web Apps in R with Shiny
https://www.udemy.com/build-web-apps-in-r-with-shiny/?dtcode=JB9Dbu61IsjQ
https://www.coursera.org/course/statistics?utm_content=bufferc229d&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
! presentations
Bigvis http://www.meetup.com/nyhackr/events/112271042/ , http://files.meetup.com/1406240/bigvis.pdf , http://goo.gl/0DTE4a
<<showtoc>>
! R data formats: RData, Rda, Rds, robj, etc
http://stackoverflow.com/questions/21370132/r-data-formats-rdata-rda-rds-etc
! .rda vs. .RData
http://r.789695.n4.nabble.com/rda-vs-RData-td4581445.html
https://stackoverflow.com/questions/8345759/how-to-save-a-data-frame-in-r
{{{
save(foo,file="data.Rda")
load("data.Rda")
}}}
! saving-and-loading-r-objects
http://www.fromthebottomoftheheap.net/2012/04/01/saving-and-loading-r-objects/
https://www.r-bloggers.com/a-better-way-of-saving-and-loading-objects-in-r/
! Saving Data into R Data Format: RDS and RDATA
http://www.sthda.com/english/wiki/saving-data-into-r-data-format-rds-and-rdata
! robj
http://stackoverflow.com/questions/34192038/how-to-open-robj-file-in-rstudio-or-r
! Creating Classes in R: S3, S4, R5 (RC), or R6
http://stackoverflow.com/questions/27219132/creating-classes-in-r-s3-s4-r5-rc-or-r6
Introduction to R6 classes https://rpubs.com/wch/24456
https://www.futurelearn.com/courses/business-analytics-forecasting/1/todo/5986
statistics.com
the practical forecasting book 2nd Ed
https://machinelearningmastery.com/introduction-to-time-series-forecasting-with-python/
! different forecasting trends
http://jcflowers1.iweb.bsu.edu/rlo/trends.htm
[img[ http://i.imgur.com/CY7siAi.png]]
! automated forecasting tools
ETS and ARIMA from forecast package
http://www.forecastpro.com/
http://www.thrivetech.com/inventory-forecasting-software/
http://www.burns-stat.com/documents/books/the-r-inferno/
http://www.jason-french.com/blog/2013/03/11/installing-r-in-linux/
https://cran.r-project.org/doc/manuals/r-release/R-admin.html
https://idre.ucla.edu/
http://www.ats.ucla.edu/stat/
http://www.ats.ucla.edu/stat/r/library/matrix_alg.htm
code examples http://www.ats.ucla.edu/stat/r/library/matrix.txt
! coding style guide
http://google-styleguide.googlecode.com/svn/trunk/Rguide.xml
! free data sets
http://r4stats.com/
http://www.census.gov
Free large data sets http://stackoverflow.com/questions/2674421/free-large-datasets-to-experiment-with-hadoop
! REFERENCES:
http://gallery.r-enthusiasts.com/thumbs.php
http://www.rstudio.com/ide/
-- time series
http://www.packtpub.com/article/creating-time-series-charts-r
Time Series in R with ggplot2 http://stackoverflow.com/questions/4973031/time-series-in-r-with-ggplot2
Displaying time-series data: Stacked bars, area charts or lines…you decide! http://vizwiz.blogspot.com/2012/08/displaying-time-series-data-stacked.html
R Lattice Plot Beats Excel Stacked Area Trend Chart http://chartsgraphs.wordpress.com/2008/10/05/r-lattice-plot-beats-excel-stacked-area-trend-chart/
rainbow: An R Package for Visualizing Functional Time Series http://journal.r-project.org/archive/2011-2/RJournal_2011-2_Lin~Shang.pdf
Using R for Time Series Analysis http://a-little-book-of-r-for-time-series.readthedocs.org/en/latest/src/timeseries.html
Plotting Time Series data using ggplot2 http://www.r-bloggers.com/plotting-time-series-data-using-ggplot2/
Simple time series plot using R : Part 1 http://programming-r-pro-bro.blogspot.com/2011/09/simple-plot-using-r.html
How to convert a daily times series into an averaged weekly? http://stackoverflow.com/questions/11892063/how-to-convert-a-daily-times-series-into-an-averaged-weekly
Plotting AWR database metrics using R http://dbastreet.com/blog/?p=946
R Programming Language Connectivity to Oracle http://dbastreet.com/blog/?p=913
Scripted Collection of OS Watcher Files on Exadata http://tylermuth.wordpress.com/2012/11/02/scripted-collection-of-os-watcher-files-on-exadata/
Using R for Advanced Charts http://processtrends.com/toc_r.htm#Excel Stacked Chart
-- stacked area
http://stackoverflow.com/questions/5030389/getting-a-stacked-area-plot-in-r
How do I create a stacked area plot with many areas, or where the legend “points” at the respective areas? http://stackoverflow.com/questions/6275895/how-do-i-create-a-stacked-area-plot-with-many-areas-or-where-the-legend-points
Stacked Area Histogram in R http://stackoverflow.com/questions/2241290/stacked-area-histogram-in-r
Grayscale stacked area plot in R http://stackoverflow.com/questions/6071990/grayscale-stacked-area-plot-in-r
-- bar chart
http://stackoverflow.com/questions/6437080/bar-chart-of-constant-height-for-factors-in-time-series
-- scatter plot IOPS
http://dboptimizer.com/2013/01/04/r-slicing-and-dicing-data/ , https://sites.google.com/site/oraclemonitor/r-slicing-and-dicing-data , http://datavirtualizer.com/r-data-structures/
http://www.statmethods.net/input/datatypes.html
http://nsaunders.wordpress.com/2010/08/20/a-brief-introduction-to-apply-in-r/
http://stackoverflow.com/questions/2545228/converting-a-dataframe-to-a-vector-by-rows
http://stat.ethz.ch/R-manual/R-patched/library/base/html/colSums.html
http://dboptimizer.com/2013/01/02/no-3d-charts-in-excel-try-r/ , http://www.oaktable.net/content/no-3d-charts-excel-try-r
-- bubble charts
http://flowingdata.com/2010/11/23/how-to-make-bubble-charts/
-- google vis in R - motion chart
http://cran.r-project.org/web/packages/googleVis/vignettes/googleVis.pdf
-- device utilization
http://dtrace.org/blogs/brendan/2011/12/18/visualizing-device-utilization/
https://blogs.oracle.com/dom/entry/visualising_performance
-- google searches
stacked time series area chart R http://goo.gl/BvINM
stacked area chart R http://goo.gl/fbHJO
http://www.r-bloggers.com/from-spreadsheet-thinking-to-r-thinking
! headfirst R
{{{
source("http://www.headfirstlabs.com/books/hfda/hfda.R")
employees
hist(employees$received, breaks=50)
> sd(employees$received)
[1] 2.432138
> summary(employees$received)
Min. 1st Qu. Median Mean 3rd Qu. Max.
-1.800 4.600 5.500 6.028 6.700 25.900
head(employees,n=30)
plot(employees$requested[employees$negotiated==TRUE], employees$received[employees$negotiated==TRUE])
plot(employees$received,employees$requested)
cor(employees$requested[employees$negotiated==TRUE], employees$received[employees$negotiated==TRUE])
cor(employees$received,employees$requested)
graphing packages:
ggplot2
lattice
ggplot2 terminologies:
• The data is what we want to visualize. It consists of variables, which are stored as columns in a data frame.
• Geoms are the geometric objects that are drawn to represent the data, such as bars, lines, and points.
• Aesthetic attributes, or aesthetics, are visual properties of geoms, such as x and y position, line color, point shapes, etc.
• There are mappings from data values to aesthetics.
• Scales control the mapping from the values in the data space to values in the aesthetic space. A continuous y scale maps larger numerical values to vertically higher positions in space.
• Guides show the viewer how to map the visual properties back to the data space. The most commonly used guides are the tick marks and labels on an axis.
install.packages("ggplot2")
install.packages("gcookbook")
OR execute this
install.packages(c("ggplot2", "gcookbook"))
The downloaded packages are in
C:\Users\Karl\AppData\Local\Temp\Rtmpmy2Jzo\downloaded_packages
-- run this on each R session if you want to use ggplot2
library(ggplot2)
for the book do this
library(ggplot2)
library(gcookbook)
** The primary repository for distributing R packages is called CRAN (the Comprehensive R Archive Network),
data <- read.csv("datafile.csv")
data <- read.csv("datafile.csv", header=FALSE)
names(data) <- c("Column1","Column2","Column3") # Manually assign the header names
data <- read.csv("datafile.csv", sep="\t") # sep=" " if space delimited, sep="\t" if tab delimited
data <- read.csv("datafile.csv", stringsAsFactors=FALSE) # don't convert strings as factors
-- help
?hist
}}}
Stan for the beginners [Bayesian inference] in 6 mins (close captioned) https://www.youtube.com/watch?v=tLprFqSWS1w
A visual guide to Bayesian thinking https://www.youtube.com/watch?v=BrK7X_XlGB8
http://andrewgelman.com/2014/01/21/everything-need-know-bayesian-statistics-learned-eight-schools/
http://andrewgelman.com/2014/01/17/think-statistical-evidence-statistical-evidence-cant-conclusive/
An Introduction to Bayesian Inference using R Interfaces to Stan http://user2016.org/tutorials/15.html
http://andrewgelman.com/2012/08/30/a-stan-is-born/
http://mc-stan.org/interfaces/rstan.html
https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started
video https://icerm.brown.edu/video_archive/#/play/1107 Scalable Bayesian Inference with Hamiltonian Monte Carlo - Michael Betancourt, University of Warwick
Scalable Bayesian Inference with Hamiltonian Monte Carlo https://www.youtube.com/watch?v=VnNdhsm0rJQ
Efficient Bayesian inference with Hamiltonian Monte Carlo -- Michael Betancourt (Part 1) https://www.youtube.com/watch?v=pHsuIaPbNbY
Hamiltonian Monte Carlo and Stan -- Michael Betancourt (Part 2) https://www.youtube.com/watch?v=xWQpEAyI5s8
https://cran.r-project.org/web/packages/rstan/vignettes/rstan.html
https://rpubs.com/pviefers/CologneR
http://astrostatistics.psu.edu/su14/lectures/Daniel-Lee-Stan-1.pdf
! others
http://stackoverflow.com/questions/31409591/difference-between-forecast-and-predict-function-in-r
http://stackoverflow.com/questions/28695076/parallel-predict
https://www.r-bloggers.com/parallel-r-model-prediction-building-and-analytics/
{{{
brew cask install 'xquartz'
brew cask install 'r-app'
r
brew cask install 'rstudio'
}}}
https://www.google.com/search?q=install+rstudio+for+mac&oq=install+rstudio+for+mac&aqs=chrome..69i57j0l5.4464j1j1&sourceid=chrome&ie=UTF-8
https://stackoverflow.com/questions/20457290/installing-r-with-homebrew
http://macappstore.org/rstudio/
http://mdzhang.com/posts/osx-r-rstudio/ <- GOOD STUFF
! 2022
* read this https://techrah.github.io/posts/build-openmp-macos-catalina-complete
* download software
https://www.rstudio.com/products/rstudio/download/#download
https://cloud.r-project.org/
* install the downloaded software
* modify zshrc
{{{
cat .zshrc
### python pyenv
# https://stackoverflow.com/questions/10574684/where-to-place-path-variable-assertions-in-zsh
eval "$(pyenv init -)"
### r install
disable r
alias r="/Library/Frameworks/R.framework/Versions/Current/Resources/bin/R"
alias R=r
}}}
http://www.burns-stat.com/documents/books/tao-te-programming/
http://www.r-statistics.com/2013/03/updating-r-from-r-on-windows-using-the-installr-package/
http://www.r-statistics.com/2011/04/how-to-upgrade-r-on-windows-7/
http://www.r-statistics.com/2010/04/changing-your-r-upgrading-strategy-and-the-r-code-to-do-it-on-windows/
http://stackoverflow.com/questions/1401904/painless-way-to-install-a-new-version-of-r-on-windows
http://www.evernote.com/shard/s48/sh/377737c7-4ebc-46b1-bc35-3ecc718b871b/50cea23088bd9102903f413e18615628
http://www.evernote.com/shard/s48/sh/bbf96104-f3b2-467d-b98f-6adcf6d0cf04/4a3dad99cbf84cc639ec6cb00dba99e9
http://www.techradar.com/news/software/applications/7-of-the-best-linux-remote-desktop-clients-716346?artc_pg=2
http://www.nomachine.com/screenshots.php
http://remmina.sourceforge.net/downloads.shtml
http://www.evernote.com/shard/s48/sh/799368fe-07f0-4ebf-8a92-8b295e9bcf0d/61f0bb8e887507684925fad01d3f9245
http://www.evernote.com/shard/s48/sh/287ae327-a298-4b86-8b41-b50ad0ec8666/815212754838567a1e72396c2d4dc730
http://www.evernote.com/shard/s48/sh/821bb643-57e9-4278-b659-890680aab8c0/75558365366c24074eea581ecf104e47
! Types of networking per OS:
Linux: Host-Only, Bridged, or BOTH
Windows: NAT (only)
Legend:
HOST -> host VM, or your server, or your base OS
CLIENT -> guest VM
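A minimal VBoxManage sketch of the per-OS setups above; the VM name (rd-vm1) and the adapter names (vboxnet0, eth0) are placeholders, not from the original notes.
{{{
# Linux guest: Host-Only + Bridged (BOTH); rd-vm1, vboxnet0, eth0 are placeholders
VBoxManage modifyvm "rd-vm1" --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage modifyvm "rd-vm1" --nic2 bridged --bridgeadapter2 eth0

# Windows guest: NAT only
VBoxManage modifyvm "rd-vm1" --nic1 nat

# check the result
VBoxManage showvminfo "rd-vm1" | grep -i nic
}}}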
http://www.evernote.com/shard/s48/sh/4908cf1d-c5fa-4e8e-ba72-391356994634/afda9f7757b016297efc78b79c767d4e
! Network setup explanation at oracle-l
http://www.freelists.org/post/oracle-l/Oracle-Virtualbox-Networking-Multiple-VMS-question-for-advanced-Virtualbox-users,4
https://mail.google.com/mail/u/0/#search/oracle-l+R%26D+server/13809eba19a37b01
<<showtoc>>
''IPs and Installation Matrix'' https://docs.google.com/spreadsheet/ccc?key=0ApH46jS7ZPdJdFpVNlgzdGpBUHJ6U1ZZZzl1bmxtT1E&hl=en_US#gid=0
''MindMap - DesktopServer'' http://www.evernote.com/shard/s48/sh/c3a94bff-007a-4df4-906e-e5079aa8c5cf/ea124916fb8a86d12cb99645e31f3867
! Build photos of the R&D Server
''DesktopServer1'' - contains pre build photos - https://www.evernote.com/shard/s48/sh/c12754e2-e166-4c43-8073-0701ef865a04/cd8c8bb240092da5d368bfa2b126b5ee
''DesktopServer2'' - contains build photos,wiring it,anaconda photo - https://www.evernote.com/shard/s48/sh/d1f64502-ca17-40cc-b33e-e0cfe578700f/f8f52a3dcb5079aba3edf5bcc4db9bd8
''DesktopServer3'' - contains the device mapping - https://www.evernote.com/shard/s48/sh/1bb03446-c3d9-4bef-86a0-1dc1ab3dca67/674b39c5f83409906534ee1d3235503a
! Disk layout, config, and benchmark
[[LVM config history]] shows how I configured the devices and the idea/reasoning behind it, also showing the partition table and layout
[[LVMstripesize,AUsize,UEKkernel]] IO config options test case comparison
''udev ASM - single path'' - https://www.evernote.com/shard/s48/sh/485425bc-a16f-4446-aebd-988342e3c30e/edc860d713dd4a66ff57cbc920b4a69c
''load before and after using UEK kernel'' - https://www.evernote.com/shard/s48/sh/d8e45c77-a6b8-4923-9e7a-f7a008af30cc/72c6522f6937a00ea8e6145e9af1af4e , faster and lower load due to OEL rq_affinity enabled
[[R&D server IO performance]] explains Calibrate IO, Short Stroking, Stripe size, UEK vs Regular Kernel, ASM redundancy / SAN redundancy, effect of ASM redundancy on read/write IOPS – SLOB test case
[[R&D cpu_speed]]
! Networking
[[R&D Server VirtualBox networking]] architecture explanation at evernote and oracle-l
[[R&D DNS]]
[[R&D Headless Autostart]]
[[R&D Mail Server]]
[[R&D Server Daily Report]]
[[R&D Server Samba]]
! Backups
[[R&D rsync]]
! HW failures
[[R&D drive fail]]
! Others
SSD head option:
http://www.amazon.com/Seagate-Momentus-Solid-Hybrid-ST95005620AS/dp/B003NSBF32
small two port router:
http://ttcshelbyville.wordpress.com/tag/smallest-router/
! 2020 update
{{{
that machine retired a few years ago.
these days I only need a few "always on" environments, costing me $22/month
it's more expensive than what you were getting w/ the long term on-prem machine ($1800 w/ 12 VMs and bare metal DB + tons of storage)
but you have the flexibility and all overhead costs are tucked in
the new cloud "always-on" environments are ($22/month):
2 Digital Ocean Droplets w/ 512M memory, 20GB storage, 1CPU @ $5/month each
both configured w/ 1GB swapfile to trick the OS w/ more memory
1st droplet is the DB server (12.1.0.2), I ran a swingbench workload here for a month and no issues
[root@karldevfedora ~]# uptime
22:21:52 up 966 days, 5:10, 1 user, load average: 0.00, 0.03, 0.05
2nd droplet is dev box (runs apache, etc.)
root@karldevubuntu:~# uptime
17:21:48 up 967 days, 2:51, 1 user, load average: 0.08, 0.02, 0.01
1 win-vps bronze1 plan w/ 4GB memory, 75GB storage, 1CPU @ $12/month
I also created swapfile/pagefile for performance
that $22 buys the cheapest always-on cloud offerings out there. And they are pretty stable, just look at the uptime :-)
then if I need a hadoop cluster to test stuff I can just Vagrant script that to my laptop virtualbox or to google compute or digital ocean
}}}
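For reference, a minimal sketch of the 1GB swapfile trick mentioned above; these are standard Linux commands run as root, and the /swapfile path is an assumption.
{{{
dd if=/dev/zero of=/swapfile bs=1M count=1024    # 1GB swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab  # persist across reboots
free -m                                          # confirm the extra "memory"
}}}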
{{{
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 7
initial apicid : 7
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
OraPub CPU speed statistic is 613.404
Other statistics: stdev=6.161 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=1770)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090320 0 180
1085634 0 177
1085634 0 176
1085634 0 178
1085634 0 180
1085634 0 175
1085634 0 176
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 8
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
..........................................................................
OraPub CPU speed statistic is 557.797
Other statistics: stdev=32.06 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=1951.667)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090320 0 199
1085634 0 206
1085634 0 207
1085634 0 203
1085634 0 184
1085634 0 185
1085634 0 186
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 8
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
OraPub CPU speed statistic is 573.428
Other statistics: stdev=4.555 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=1893.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090320 0 199
1085634 0 191
1085634 0 188
1085634 0 190
1085634 0 191
1085634 0 188
1085634 0 188
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 8
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
OraPub CPU speed statistic is 561.588
Other statistics: stdev=5.951 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=1933.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090320 0 196
1085634 0 192
1085634 0 193
1085634 0 193
1085634 0 197
1085634 0 194
1085634 0 191
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 8
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
OraPub CPU speed statistic is 569.407
Other statistics: stdev=3.587 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=1906.667)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090320 0 195
1085634 0 190
1085634 0 190
1085634 0 190
1085634 0 190
1085634 0 191
1085634 0 193
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 8
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
}}}
! Database
<<<
!! database app dev
http://www.oracle.com/technetwork/database/enterprise-edition/databaseappdev-vm-161299.html
OTN_Developer_Day_VM.ova
!! oem r4 vm template
http://www.oracle.com/technetwork/oem/enterprise-manager/downloads/oem-templates-1741850.html
V45533-01.zip
V45532-01.zip
V45531-01.zip
V45530-01.zip
!! 18084575 EXADATA 12.1.1.1.1 (MOS NOTE 1667407.1) (Patch)
p18084575_121111_Linux-x86-64.zip
V46534-01.zip
!! database template for oem12cR4
11.2.0.3_Database_Template_for_EM12_1_0_4_Linux_x64.zip
<<<
! tableau
TableauDesktop-32bit.exe
TableauDesktop-64bit.exe
TableauServer-32bit.exe
! OS
<<<
!! oracle linux
http://www.oracle.com/technetwork/server-storage/linux/downloads/vm-for-hol-1896500.html
OracleLinux65.ova
!! solaris 11 vbox
http://www.oracle.com/technetwork/server-storage/solaris11/downloads/vm-templates-2245495.html
sol-11_2-vbox.ova
!! oel 7 64bit
V46135-01.iso
!! oel 5.7 64bit
V27570-01.zip
!! fedora 20
http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso
!! Oracle VM 3.2.4 Manager & Server VMs
http://www.oracle.com/technetwork/community/developer-vm/index.html#ovm
http://www.oracle.com/technetwork/server-storage/vm/template-1482544.html
http://www.oracle.com/technetwork/server-storage/vm/learnmore/index.html
OracleVMServer3.2.4-b525.ova
OracleVMManager3.2.4-b524.ova
<<<
! Storage
<<<
!! zfs simulator
http://www.oracle.com/technetwork/server-storage/sun-unified-storage/downloads/sun-simulator-1368816.html
OracleZFSStorageVM-OS8.2.zip
<<<
! Hadoop
<<<
!! cloudera quickstart vm
http://www.cloudera.com/content/support/en/downloads/quickstart_vms/cdh-5-1-x1.html
cloudera-quickstart-vm-5.1.0-1-virtualbox.7z
!! Big Data Lite Virtual Machine
http://www.oracle.com/technetwork/database/bigdata-appliance/oracle-bigdatalite-2104726.html
2f362b3937220c4a4b95a3ac0b1ac2a1 bigdatalite-3.0.zip.001
83d728dfbc68a84d84797048c44c001c bigdatalite-3.0.zip.002
ffae62b6469f57266bc9dfbb18e0626c bigdatalite-3.0.zip.003
4c70364e8257b14069e352397c5af49e bigdatalite-3.0.zip.004
872006a378dfa9bbba53edb9ea89ab1f bigdatalite-3.0.zip.005
693f4750563445f2613739af8bbf9574 bigdatalite-3.0.zip.006
<<<
''The BIOS does not detect or recognize the ATA / SATA hard drive''
http://knowledge.seagate.com/articles/en_US/FAQ/168595en
https://ask.fedoraproject.org/question/7231/how-to-triage-comreset-failed-error-at-startup/
http://www.tomshardware.com/forum/250403-32-disk-drive-sudden-death
[[rsync]] commands here
{{{
* note that when you pull/restart the router and for some reason the DHCP IPs get messed up, all you
have to do is delete known_hosts, create an empty known_hosts file, and then edit the /etc/hosts entries
for the new IPs of the devices.. of course before that you have to do nmap -sP 192.168.203.* to check all "alive" devices
* then after that, you should be able to passwordlessly log in on the devices and start the rsync
}}}
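A minimal sketch of the recovery steps above; the subnet comes from the note itself, while the hostname (mybooklive) and the rsync paths are placeholders.
{{{
nmap -sP 192.168.203.0/24                    # check all "alive" devices after the DHCP shuffle
> ~/.ssh/known_hosts                         # reset stale host keys (recreates an empty file)
vi /etc/hosts                                # update the entries with the new IPs
ssh root@mybooklive true                     # confirm passwordless login still works
rsync -avz /data/ root@mybooklive:/backup/   # then start the rsync
}}}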
[[WD-MyBookLive]]
[[rsync]] shows how I backup my files and iphone/ipad to WD mybooklive
http://www.freelists.org/post/oracle-l/IO-performance,13
https://mail.google.com/mail/u/0/#search/oracle-l+short+stroke/137cab643fef1ed4
* Text Processing in Python https://www.amazon.com/Text-Processing-Python-David-Mertz/dp/0321112547/ref=sr_1_fkmr2_3?ie=UTF8&qid=1486829301&sr=8-3-fkmr2&keywords=python+text+processing+tricks
* Python 3 Text Processing with NLTK 3 Cookbook https://www.amazon.com/Python-Text-Processing-NLTK-Cookbook/dp/1782167854/ref=sr_1_fkmr2_1?ie=UTF8&qid=1486829301&sr=8-1-fkmr2&keywords=python+text+processing+tricks
* Text Analytics with Python: A Practical Real-World Approach to Gaining Actionable Insights from your Data https://www.amazon.com/Text-Analytics-Python-Real-World-Actionable/dp/148422387X/ref=pd_rhf_se_s_cp_1?_encoding=UTF8&pd_rd_i=148422387X&pd_rd_r=BQ961BNXXCRJC8FF1M0T&pd_rd_w=nXtOu&pd_rd_wg=f3MpB&psc=1&refRID=BQ961BNXXCRJC8FF1M0T
* Automate the Boring Stuff with Python: Practical Programming for Total Beginners https://www.amazon.com/Automate-Boring-Stuff-Python-Programming/dp/1593275994/ref=pd_rhf_se_s_cp_2?_encoding=UTF8&pd_rd_i=1593275994&pd_rd_r=BQ961BNXXCRJC8FF1M0T&pd_rd_w=nXtOu&pd_rd_wg=f3MpB&psc=1&refRID=BQ961BNXXCRJC8FF1M0T
* Violent Python: A Cookbook for Hackers, Forensic Analysts, Penetration Testers and Security Engineers https://www.amazon.com/Violent-Python-Cookbook-Penetration-Engineers/dp/1597499579/ref=pd_rhf_se_s_cp_6?_encoding=UTF8&pd_rd_i=1597499579&pd_rd_r=BQ961BNXXCRJC8FF1M0T&pd_rd_w=nXtOu&pd_rd_wg=f3MpB&psc=1&refRID=BQ961BNXXCRJC8FF1M0T
* Data Wrangling with Python: Tips and Tools to Make Your Life Easier https://www.amazon.com/Data-Wrangling-Python-Tools-Easier/dp/1491948817/ref=pd_rhf_se_s_cp_5?_encoding=UTF8&pd_rd_i=1491948817&pd_rd_r=BQ961BNXXCRJC8FF1M0T&pd_rd_w=nXtOu&pd_rd_wg=f3MpB&psc=1&refRID=BQ961BNXXCRJC8FF1M0T
* Text Mining with R: A tidy approach https://www.amazon.com/Text-Mining-R-tidy-approach/dp/1491981652/ref=sr_1_1?ie=UTF8&qid=1486829376&sr=8-1&keywords=R+text+processing
* Mastering Text Mining with R https://www.amazon.com/Mastering-Text-Mining-Ashish-Kumar/dp/178355181X/ref=pd_sbs_14_1?_encoding=UTF8&pd_rd_i=178355181X&pd_rd_r=QJR23QWXME6RM0EX70S9&pd_rd_w=1qsnW&pd_rd_wg=Qb4hQ&psc=1&refRID=QJR23QWXME6RM0EX70S9
* An Introduction to Information Theory: Symbols, Signals and Noise (Dover Books on Mathematics) https://www.amazon.com/Introduction-Information-Theory-Symbols-Mathematics/dp/0486240614/ref=sr_1_8?ie=UTF8&qid=1486829376&sr=8-8&keywords=R+text+processing
* Automated Data Collection with R: A Practical Guide to Web Scraping and Text Mining https://www.amazon.com/Automated-Data-Collection-Practical-Scraping-ebook/dp/B014T25K5O/ref=sr_1_7?ie=UTF8&qid=1486829376&sr=8-7&keywords=R+text+processing
* Pro Bash Programming: Scripting the Linux Shell (Expert's Voice in Linux) https://www.amazon.com/Pro-Bash-Programming-Scripting-Experts/dp/1430219971/ref=sr_1_fkmr0_2?ie=UTF8&qid=1486857179&sr=8-2-fkmr0&keywords=bash+text+processing
* Python Scripting for Computational Science: 3 (Texts in Computational Science and Engineering) https://www.amazon.com/Python-Scripting-Computational-Science-Engineering-ebook/dp/B001NLKSSO/ref=mt_kindle?_encoding=UTF8&me=
* Network Programmability and Automation: Skills for the Next-Generation Network Engineer https://www.amazon.com/Network-Programmability-Automation-Next-Generation-Engineer/dp/1491931256/ref=sr_1_17?ie=UTF8&qid=1487107643&sr=8-17&keywords=python+scripting
* Taming Text: How to Find, Organize, and Manipulate It https://www.safaribooksonline.com/library/view/taming-text-how/9781933988382/
! bash
* Wicked Cool Shell Scripts, 2nd Edition https://www.safaribooksonline.com/library/view/wicked-cool-shell/9781492018322/
! ruby
* Text Processing with Ruby https://www.safaribooksonline.com/library/view/text-processing-with/9781680501575/
* Practical Ruby for System Administration https://www.amazon.com/Practical-System-Administration-Experts-Source/dp/1590598210/ref=sr_1_304?ie=UTF8&qid=1487114901&sr=8-304&keywords=python+cloud
* Everyday Scripting with Ruby: For Teams, Testers, and You 1st Ed https://www.amazon.com/Everyday-Scripting-Ruby-Teams-Testers/dp/0977616614/ref=pd_sim_14_1?_encoding=UTF8&pd_rd_i=0977616614&pd_rd_r=09RA80YPVNPHFX6W9HRC&pd_rd_w=fpkVD&pd_rd_wg=xTaj2&psc=1&refRID=09RA80YPVNPHFX6W9HRC
* Wicked Cool Ruby Scripts: Useful Scripts that Solve Difficult Problems 1st Ed https://www.amazon.com/Wicked-Cool-Ruby-Scripts-Difficult/dp/1593271824/ref=pd_sim_14_4?_encoding=UTF8&pd_rd_i=1593271824&pd_rd_r=3NVCPNJ5PT5MGVZ7XB8X&pd_rd_w=pDh4w&pd_rd_wg=7Nm4C&psc=1&refRID=3NVCPNJ5PT5MGVZ7XB8X
! python
* Python Quick Start for Linux System Administrators https://app.pluralsight.com/library/courses/python-linux-system-administrators/table-of-contents
* The Hitchhiker's Guide to Python: Best Practices for Development https://www.amazon.com/Hitchhikers-Guide-Python-Practices-Development/dp/1491933178/ref=pd_sim_14_12?_encoding=UTF8&pd_rd_i=1491933178&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY
* Python Data Science Handbook: Essential Tools for Working with Data https://www.amazon.com/Python-Data-Science-Handbook-Essential/dp/1491912057/ref=pd_sim_14_3?_encoding=UTF8&pd_rd_i=1491912057&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY
* Data Visualization with Python and JavaScript: Scrape, Clean, Explore & Transform Your Data https://www.amazon.com/Data-Visualization-Python-JavaScript-Transform/dp/1491920513/ref=pd_sim_14_4?_encoding=UTF8&pd_rd_i=1491920513&pd_rd_r=WAW4GKP39JZJPXKYZ54H&pd_rd_w=2BBbQ&pd_rd_wg=oYNJw&psc=1&refRID=WAW4GKP39JZJPXKYZ54H
* Mining the Social Web: Data Mining Facebook, Twitter, LinkedIn, Google+, GitHub, and More https://www.amazon.com/Mining-Social-Web-Facebook-LinkedIn/dp/1449367615/ref=sr_1_160?ie=UTF8&qid=1487114754&sr=8-160&keywords=python+cloud
* Web Scraping with Python: Collecting Data from the Modern Web https://www.amazon.com/Web-Scraping-Python-Collecting-Modern/dp/1491910291/ref=pd_bxgy_14_img_2?_encoding=UTF8&pd_rd_i=1491910291&pd_rd_r=4J8EDG8PV293H5RYSV8E&pd_rd_w=piDKd&pd_rd_wg=aqY1t&psc=1&refRID=4J8EDG8PV293H5RYSV8E
* Building Data Pipelines with Python https://www.safaribooksonline.com/library/view/building-data-pipelines/9781491970270/
* Fluent Python: Clear, Concise, and Effective Programming https://www.amazon.com/Fluent-Python-Concise-Effective-Programming/dp/1491946008/ref=pd_bxgy_14_img_2?_encoding=UTF8&pd_rd_i=1491946008&pd_rd_r=BDT30TJXXE3E6CNWNQ87&pd_rd_w=Sf9I5&pd_rd_wg=HjtNo&psc=1&refRID=BDT30TJXXE3E6CNWNQ87 <- nicee!
* Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython https://www.amazon.com/Python-Data-Analysis-Wrangling-IPython/dp/1449319793/ref=sr_1_4?ie=UTF8&qid=1487114575&sr=8-4&keywords=python+cloud
* Python for Data Science For Dummies https://www.amazon.com/Python-Data-Science-Dummies-Computers-ebook/dp/B00TWK3RHW/ref=mt_kindle?_encoding=UTF8&me=
* Python: Visual QuickStart Guide, Third Edition https://www.safaribooksonline.com/library/view/python-visual-quickstart/9780133435160/ch11.html
* Python Essential Reference (4th Edition) https://www.amazon.com/Python-Essential-Reference-David-Beazley/dp/0672329786/ref=pd_bxgy_14_img_3?_encoding=UTF8&pd_rd_i=0672329786&pd_rd_r=QJ4H03MEJM726QZG56QZ&pd_rd_w=ZKBa5&pd_rd_wg=Btaz5&psc=1&refRID=QJ4H03MEJM726QZG56QZ
* Python in Practice: Create Better Programs Using Concurrency, Libraries, and Patterns (Developer's Library) https://www.amazon.com/Python-Practice-Concurrency-Libraries-Developers/dp/0321905636/ref=sr_1_106?ie=UTF8&qid=1487114691&sr=8-106&keywords=python+cloud
* Data Science from Scratch: First Principles with Python https://www.amazon.com/Data-Science-Scratch-Principles-Python/dp/149190142X/ref=pd_bxgy_14_img_3?_encoding=UTF8&pd_rd_i=149190142X&pd_rd_r=WAW4GKP39JZJPXKYZ54H&pd_rd_w=0j9ZK&pd_rd_wg=oYNJw&psc=1&refRID=WAW4GKP39JZJPXKYZ54H
* Data Wrangling with Python: Tips and Tools to Make Your Life Easier - REST vs Streaming API https://www.amazon.com/Data-Wrangling-Python-Tools-Easier/dp/1491948817/ref=pd_sim_14_6?_encoding=UTF8&pd_rd_i=1491948817&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY
* Python for Unix and Linux System Administration 1st Ed https://www.amazon.com/Python-Unix-Linux-System-Administration/dp/0596515820/ref=sr_1_4?ie=UTF8&qid=1487708727&sr=8-4&keywords=python+system+administration
* Pro Python System Administration 1st Ed https://www.amazon.com/Python-System-Administration-Experts-Source/dp/1430226056/ref=sr_1_3?ie=UTF8&qid=1487708727&sr=8-3&keywords=python+system+administration
* Pro Python System Administration 2nd ed https://www.amazon.com/Python-System-Administration-Rytis-Sileika/dp/148420218X/ref=sr_1_1?ie=UTF8&qid=1487708727&sr=8-1&keywords=python+system+administration
* Foundations of Python Network Programming 3rd ed https://www.amazon.com/Foundations-Python-Network-Programming-Brandon/dp/1430258543/ref=pd_bxgy_14_img_2?_encoding=UTF8&pd_rd_i=1430258543&pd_rd_r=MGTKMKGFA3SGN0Z8XW2T&pd_rd_w=ppXt3&pd_rd_wg=gKBwL&psc=1&refRID=MGTKMKGFA3SGN0Z8XW2T
! hadoop
* Data Analytics with Hadoop: An Introduction for Data Scientists https://www.amazon.com/Data-Analytics-Hadoop-Introduction-Scientists/dp/1491913703/ref=pd_sim_14_18?_encoding=UTF8&pd_rd_i=1491913703&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY
! spark
* Advanced Analytics with Spark: Patterns for Learning from Data at Scale https://www.amazon.com/Advanced-Analytics-Spark-Patterns-Learning/dp/1491912766/ref=pd_sim_14_27?_encoding=UTF8&pd_rd_i=1491912766&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY
! regex
* Regular Expression Recipes: A Problem-Solution Approach https://www.amazon.com/Regular-Expression-Recipes-Problem-Solution-Approach/dp/159059441X/ref=sr_1_189?ie=UTF8&qid=1487114792&sr=8-189&keywords=python+cloud
! bad data
* Bad Data Handbook: Cleaning Up The Data So You Can Get Back To Work https://www.amazon.com/Bad-Data-Handbook-Cleaning-Back/dp/1449321887/ref=sr_1_272?ie=UTF8&qid=1487114866&sr=8-272&keywords=python+cloud
<<showtoc>>
! python data cleaning
* Practical Data Cleaning with Python https://www.safaribooksonline.com/live-training/courses/practical-data-cleaning-with-python/0636920152798/#schedule
** code repo: https://resources.oreilly.com/live-training/practical-data-cleaning-with-python
* Data Wrangling with Python: Tips and Tools to Make Your Life Easier - https://www.amazon.com/Data-Wrangling-Python-Tools-Easier/dp/1491948817/ref=pd_sim_14_6?_encoding=UTF8&pd_rd_i=1491948817&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY
! python and text
* video: Natural Language Text Processing with Python https://www.safaribooksonline.com/library/view/natural-language-text/9781491976487/
* NLP applied - Applied Text Analysis with Python https://www.safaribooksonline.com/library/view/applied-text-analysis/9781491963036/
* Python for Secret Agents vol1 - https://www.safaribooksonline.com/library/view/python-for-secret/9781783980420/ , https://www.amazon.com/Python-Secret-Agents-Steven-Lott-ebook/dp/B00N2RWMMW/ref=asap_bc?ie=UTF8
* Python for Secret Agents vol2 - https://www.safaribooksonline.com/library/view/python-for-secret/9781785283406/ , https://www.amazon.com/gp/product/B017XSFKHY/ref=dbs_a_def_rwt_bibl_vppi_i3
* Text Processing in Python - https://www.safaribooksonline.com/library/view/text-processing-in/0321112547/ , https://www.amazon.com/Text-Processing-Python-David-Mertz/dp/0321112547/ref=sr_1_fkmr2_3?ie=UTF8&qid=1486829301&sr=8-3-fkmr2&keywords=python+text+processing+tricks
* NLP - Text Analytics with Python: A Practical Real-World Approach to Gaining Actionable Insights from your Data https://www.safaribooksonline.com/library/view/text-analytics-with/9781484223871/ , https://www.amazon.com/Text-Analytics-Python-Real-World-Actionable/dp/148422387X/ref=pd_rhf_se_s_cp_1?_encoding=UTF8&pd_rd_i=148422387X&pd_rd_r=BQ961BNXXCRJC8FF1M0T&pd_rd_w=nXtOu&pd_rd_wg=f3MpB&psc=1&refRID=BQ961BNXXCRJC8FF1M0T
* NLP - Taming Text: How to Find, Organize, and Manipulate It - https://www.safaribooksonline.com/library/view/taming-text-how/9781933988382/
! python and ML
* Data Mining for Business Analytics: Concepts, Techniques, and Applications in R https://www.amazon.com/Data-Mining-Business-Analytics-Applications/dp/1118879368/ref=asap_bc?ie=UTF8
* Introduction to Machine Learning with Python: A Guide for Data Scientists https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_3?ie=UTF8&qid=1526456824&sr=8-3&keywords=Introduction+to+Machine+Learning+with+Python&dpID=51ZPksI0E9L&preST=_SX218_BO1,204,203,200_QL40_&dpSrc=srch
* Mastering Machine Learning with Python in Six Steps: A Practical Implementation Guide to Predictive Data Analytics Using Python https://www.safaribooksonline.com/library/view/mastering-machine-learning/9781484228661/
! python and R
* Python for R Users: A Data Science Approach 1st Edition https://www.amazon.com/Python-Users-Data-Science-Approach/dp/1119126762/ref=sr_1_10?s=books&ie=UTF8&qid=1527090532&sr=1-10&keywords=python+cloud+computing
! visualization
* Making data visual https://www.safaribooksonline.com/library/view/making-data-visual/9781491960493/ch08.html#casestudies_fruitfly
** code repo and examples: https://makingdatavisual.github.io/figurelist.html#fourviews
** https://github.com/MakingDataVisual/makingdatavisual.github.io
** https://resources.oreilly.com/examples/0636920041320
! python - definitive guides
* https://learnxinyminutes.com/docs/python/
* https://learnxinyminutes.com/docs/python3/
* https://learnxinyminutes.com/docs/pythonstatcomp/
* https://learnxinyminutes.com/docs/r/
* https://learnxinyminutes.com/docs/ruby/
* https://learnxinyminutes.com/docs/javascript/
* https://learnxinyminutes.com/docs/json/
* https://learnxinyminutes.com/docs/bash/
* Programming Python, 3rd Edition https://www.safaribooksonline.com/library/view/programming-python-3rd/0596009259/
** code repo: https://resources.oreilly.com/examples/9780596009250
* Fluent Python: Clear, Concise, and Effective Programming - https://www.amazon.com/Fluent-Python-Concise-Effective-Programming/dp/1491946008/ref=pd_bxgy_14_img_3?_encoding=UTF8&pd_rd_i=1491946008&pd_rd_r=7FEQ86HMR2M56ZD7ANWF&pd_rd_w=xkHOG&pd_rd_wg=uUBgD&psc=1&refRID=7FEQ86HMR2M56ZD7ANWF
* The Hitchhiker's Guide to Python: Best Practices for Development - https://www.amazon.com/Hitchhikers-Guide-Python-Practices-Development/dp/1491933178/ref=pd_sim_14_12?_encoding=UTF8&pd_rd_i=1491933178&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY
* Mastering Object-oriented Python https://www.amazon.com/Mastering-Object-oriented-Python-Steven-Lott/dp/1783280972/ref=pd_bxgy_14_img_2?_encoding=UTF8&pd_rd_i=1783280972&pd_rd_r=7FEQ86HMR2M56ZD7ANWF&pd_rd_w=xkHOG&pd_rd_wg=uUBgD&psc=1&refRID=7FEQ86HMR2M56ZD7ANWF
* Python Pocket Reference: Python In Your Pocket https://www.amazon.com/Python-Pocket-Reference-Your-OReilly/dp/1449357016/ref=pd_sbs_14_3?_encoding=UTF8&pd_rd_i=1449357016&pd_rd_r=FPF805WNWTGN2YRE8GND&pd_rd_w=513QW&pd_rd_wg=QCK4v&psc=1&refRID=FPF805WNWTGN2YRE8GND
* Python Crash Course: A Hands-On, Project-Based Introduction to Programming https://www.amazon.com/Python-Crash-Course-Hands-Project-Based/dp/1593276036/ref=pd_sbs_14_1?_encoding=UTF8&pd_rd_i=1593276036&pd_rd_r=FPF805WNWTGN2YRE8GND&pd_rd_w=513QW&pd_rd_wg=QCK4v&psc=1&refRID=FPF805WNWTGN2YRE8GND
* A Smarter Way to Learn Python: Learn it faster. Remember it longer https://www.amazon.com/Smarter-Way-Learn-Python-Remember-ebook/dp/B077Z55G3B/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=&sr=
* Python Playground: Geeky Projects for the Curious Programmer https://www.amazon.com/Python-Playground-Projects-Curious-Programmer/dp/1593276044/ref=pd_sbs_14_5?_encoding=UTF8&pd_rd_i=1593276044&pd_rd_r=FPF805WNWTGN2YRE8GND&pd_rd_w=513QW&pd_rd_wg=QCK4v&psc=1&refRID=FPF805WNWTGN2YRE8GND
* Learn Python 3 the Hard Way: A Very Simple Introduction to the Terrifyingly Beautiful World of Computers and Code https://www.amazon.com/Learn-Python-Hard-Way-Introduction/dp/0134692888/ref=pd_sbs_14_12?_encoding=UTF8&pd_rd_i=0134692888&pd_rd_r=FPF805WNWTGN2YRE8GND&pd_rd_w=513QW&pd_rd_wg=QCK4v&psc=1&refRID=FPF805WNWTGN2YRE8GND
! python and databases
* python flyby randy johnson https://github.com/dallasdba/dbascripts
* python for pl/sql developers http://arup.blogspot.com/2017/01/python-for-plsql-developers-series.html
* mysql for python https://www.safaribooksonline.com/library/view/mysql-for-python/9781849510189/
** code repo: https://github.com/mythstack/MySQL-for-Python-Example-Code
** https://resources.oreilly.com/examples/9781849510189/tree/master
! python and hadoop
https://www.amazon.com/Hadoop-Python-Donald-Miner-ebook/dp/B07D1MP4HS/ref=sr_1_7?ie=UTF8&qid=1526432174&sr=8-7&keywords=python+hadoop&dpID=512eUcTjkTL&preST=_SY445_QL70_&dpSrc=srch
! python and network
Foundations of Python Network Programming https://www.amazon.com/Foundations-Python-Network-Programming-Brandon/dp/1430258543/ref=pd_bxgy_14_img_2?_encoding=UTF8&pd_rd_i=1430258543&pd_rd_r=MGTKMKGFA3SGN0Z8XW2T&pd_rd_w=ppXt3&pd_rd_wg=gKBwL&psc=1&refRID=MGTKMKGFA3SGN0Z8XW2T
! python and sysad
* video: Python Quick Start for Linux System Administrators
* Pro Python System Administration 2nd ed. Edition https://www.amazon.com/Python-System-Administration-Rytis-Sileika/dp/148420218X/ref=sr_1_1?ie=UTF8&qid=1487708727&sr=8-1&keywords=python+system+administration
* Python for Unix and Linux System Administration 1st Edition https://www.amazon.com/Python-Unix-Linux-System-Administration/dp/0596515820/ref=sr_1_4?ie=UTF8&qid=1487708727&sr=8-4&keywords=python+system+administration
! python and cloud
* Network Programmability and Automation: Skills for the Next-Generation Network Engineer https://www.amazon.com/Network-Programmability-Automation-Next-Generation-Engineer/dp/1491931256/ref=pd_cp_14_1?_encoding=UTF8&pd_rd_i=1491931256&pd_rd_r=2c30dcee-5ea3-11e8-895a-49ce3940778b&pd_rd_w=agUgD&pd_rd_wg=5rhXw&pf_rd_i=desktop-dp-sims&pf_rd_m=ATVPDKIKX0DER&pf_rd_p=80460301815383741&pf_rd_r=PPAFW1MPB01E7178K92N&pf_rd_s=desktop-dp-sims&pf_rd_t=40701&psc=1&refRID=PPAFW1MPB01E7178K92N
* Practical Network Automation: Leverage the power of Python and Ansible to optimize your network https://www.amazon.com/Practical-Network-Automation-Leverage-optimize/dp/1788299469/ref=pd_bxgy_14_img_3?_encoding=UTF8&pd_rd_i=1788299469&pd_rd_r=2c30dcee-5ea3-11e8-895a-49ce3940778b&pd_rd_w=D611W&pd_rd_wg=5rhXw&pf_rd_i=desktop-dp-sims&pf_rd_m=ATVPDKIKX0DER&pf_rd_p=3914568618330124508&pf_rd_r=PPAFW1MPB01E7178K92N&pf_rd_s=desktop-dp-sims&pf_rd_t=40701&psc=1&refRID=PPAFW1MPB01E7178K92N
** Extending Ansible https://www.amazon.com/Extending-Ansible-Rishabh-Das-ebook/dp/B01BSTEDL8/ref=sr_1_4?ie=UTF8&qid=1527091700&sr=8-4&keywords=python+ansible&dpID=515EZNRMhZL&preST=_SX342_QL70_&dpSrc=srch
* Mastering Python Networking: Your one stop solution to using Python for network automation, DevOps, and SDN https://www.amazon.com/Mastering-Python-Networking-solution-automation/dp/1784397008/ref=sr_1_5_sspa?ie=UTF8&qid=1527091545&sr=8-5-spons&keywords=cloud+automation+python&psc=1
* Programming Google App Engine with Python: Build and Run Scalable Python Apps on Google's Infrastructure https://www.amazon.com/Programming-Google-Engine-Python-Infrastructure/dp/1491900253/ref=sr_1_7?s=books&ie=UTF8&qid=1527090532&sr=1-7&keywords=python+cloud+computing
* Cloud Native Python: Build and deploy resilient applications on the cloud using microservices, AWS, Azure and more https://www.amazon.com/Cloud-Native-Python-applications-microservices/dp/1787129314/ref=sr_1_1_sspa?s=books&ie=UTF8&qid=1527090532&sr=1-1-spons&keywords=python+cloud+computing&psc=1
! python and PL/SQL
* Introduction to Python for PL/SQL Developers - Full Series https://community.oracle.com/docs/DOC-1005069
* Learning PYTHON for PLSQL Developers https://www.youtube.com/watch?v=FbssyLrfkzo
! bash and sysad
* Command Line Kung Fu: Bash Scripting Tricks, Linux Shell Programming Tips, and Bash One-liners https://www.amazon.com/Command-Line-Kung-Programming-One-liners-ebook/dp/B00JRGCFLA/ref=sr_1_4?s=books&ie=UTF8&qid=1527090482&sr=1-4&keywords=bash+system+administration&dpID=41PVsjWk4OL&preST=_SY445_QL70_&dpSrc=srch
! rac networking - HAIP, bonding, changing IPs
11.2.0.2 Grid infrastructure, private interconnect bonding new feature HAIP http://dbastreet.com/blog/?p=515
http://www.cyberciti.biz/tips/linux-bond-or-team-multiple-network-interfaces-nic-into-single-interface.html <-- nice linux guide, see the bonding sketch below
http://www.pythian.com/blog/changing-hostnames-in-oracle-rac/
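A minimal active-backup bonding sketch based on the linux guide above (device names and the 10.10.10.11 interconnect IP follow the scenario below; adjust for your own NICs):
{{{
# /etc/modprobe.conf -- load the bonding driver (mode=1 = active-backup, miimon=100ms link check)
alias bond0 bonding
options bond0 miimon=100 mode=1

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bond owns the private interconnect IP
DEVICE=bond0
IPADDR=10.10.10.11
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- enslave the physical NIC (repeat for the second NIC)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
}}}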
{{{
------------------------------------------------
Change IP Step by Step:
------------------------------------------------
Scenario:
There are two subsidiaries (company A and B) of a certain multinational company, they are located on one building and servers residing on one data center.
Company A was acquired by another private company and because of this, Company B has to change its subnet from 192.168.203 to 172.168.203
Below are the old entries of /etc/hosts file of Company B:
[root@racnode1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
# Public Network (eth0)
192.168.203.11 racnode1.us.oracle.com racnode1
192.168.203.12 racnode2.us.oracle.com racnode2
# Public VIP
192.168.203.111 racnode1-vip.us.oracle.com racnode1-vip
192.168.203.112 racnode2-vip.us.oracle.com racnode2-vip
# Private Interconnect
10.10.10.11 racnode1-priv.us.oracle.com racnode1-priv
10.10.10.12 racnode2-priv.us.oracle.com racnode2-priv
Below will be the new entries of /etc/hosts file of Company B:
[root@racnode1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
# Public Network (eth0)
172.168.203.11 racnode1.us.oracle.com racnode1
172.168.203.12 racnode2.us.oracle.com racnode2
# Public VIP
172.168.203.111 racnode1-vip.us.oracle.com racnode1-vip
172.168.203.112 racnode2-vip.us.oracle.com racnode2-vip
# Private Interconnect
10.10.10.11 racnode1-priv.us.oracle.com racnode1-priv
10.10.10.12 racnode2-priv.us.oracle.com racnode2-priv
Considerations:
- There will be no data center movement; only physical rewiring will happen. Some servers of Company B were already moved to 172.168.203 and
only the RAC servers were left. A route to 192.168.203 still exists, which is why those servers could still be reached
- The EMC CX500 storage is assigned on the 192.168.203 subnet together with the management console. The EMC engineer said there will be no problems
with the IP address change on the RAC servers
- Since the IP addresses will be changed, the Net Services entries have to be modified
- Also the database links pointing to the RAC servers have to be modified to reflect the new IPs
- DNS entries on the 172.168.203 have to be created
- DNS entries on the 192.168.203 have to be deleted
- NFS mountpoints on the servers should be noted, edit the /etc/exports on the source servers to reflect the new IPs
So here it goes...
1) Shut down everything except the CRS stack (execute on racnode1)
a) verify the status
[oracle@racnode1 ~]$ crs_stat2
HA Resource Target State
----------- ------ -----
ora.orcl.db ONLINE ONLINE on racnode1
ora.orcl.orcl1.inst ONLINE ONLINE on racnode1
ora.orcl.orcl2.inst ONLINE ONLINE on racnode2
ora.orcl.orcl_service.cs ONLINE ONLINE on racnode1
ora.orcl.orcl_service.orcl1.srv ONLINE ONLINE on racnode1
ora.orcl.orcl_service.orcl2.srv ONLINE ONLINE on racnode2
ora.racnode1.ASM1.asm ONLINE ONLINE on racnode1
ora.racnode1.LISTENER_RACNODE1.lsnr ONLINE ONLINE on racnode1
ora.racnode1.gsd ONLINE ONLINE on racnode1
ora.racnode1.ons ONLINE ONLINE on racnode1
ora.racnode1.vip ONLINE ONLINE on racnode1
ora.racnode2.ASM2.asm ONLINE ONLINE on racnode2
ora.racnode2.LISTENER_RACNODE2.lsnr ONLINE ONLINE on racnode2
ora.racnode2.gsd ONLINE ONLINE on racnode2
ora.racnode2.ons ONLINE ONLINE on racnode2
ora.racnode2.vip ONLINE ONLINE on racnode2
b) stop the services, instances, ASM, and nodeapps (execute on racnode1)
[oracle@racnode1 ~]$ srvctl stop service -d orcl
[oracle@racnode1 ~]$ srvctl stop database -d orcl
[oracle@racnode1 ~]$ srvctl stop asm -n racnode1
[oracle@racnode1 ~]$ srvctl stop asm -n racnode2
[oracle@racnode1 ~]$ srvctl stop nodeapps -n racnode1
[oracle@racnode1 ~]$ srvctl stop nodeapps -n racnode2
[oracle@racnode1 ~]$ crs_stat2
HA Resource Target State
----------- ------ -----
ora.orcl.db OFFLINE OFFLINE
ora.orcl.orcl1.inst OFFLINE OFFLINE
ora.orcl.orcl2.inst OFFLINE OFFLINE
ora.orcl.orcl_service.cs OFFLINE OFFLINE
ora.orcl.orcl_service.orcl1.srv OFFLINE OFFLINE
ora.orcl.orcl_service.orcl2.srv OFFLINE OFFLINE
ora.racnode1.ASM1.asm OFFLINE OFFLINE
ora.racnode1.LISTENER_RACNODE1.lsnr OFFLINE OFFLINE
ora.racnode1.gsd OFFLINE OFFLINE
ora.racnode1.ons OFFLINE OFFLINE
ora.racnode1.vip OFFLINE OFFLINE
ora.racnode2.ASM2.asm OFFLINE OFFLINE
ora.racnode2.LISTENER_RACNODE2.lsnr OFFLINE OFFLINE
ora.racnode2.gsd OFFLINE OFFLINE
ora.racnode2.ons OFFLINE OFFLINE
ora.racnode2.vip OFFLINE OFFLINE
2) Backup OCR and Voting Disk (execute on racnode1)
a) Query OCR location
[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262144
Used space (kbytes) : 4592
Available space (kbytes) : 257552
ID : 1841304007
Device/File Name : /u02/oradata/orcl/OCRFile
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded
b) Query Voting Disk location
[oracle@racnode1 ~]$ crsctl query css votedisk
0. 0 /u02/oradata/orcl/CSSFile
located 1 votedisk(s).
c) Backup the files using "dd"
dd if=/u02/oradata/orcl/OCRFile of=/u03/flash_recovery_area/OCRFile_backup
dd if=/u02/oradata/orcl/CSSFile of=/u03/flash_recovery_area/CSSFile_backup
3) Change the public interface
a) Verify first the interface (both nodes)
[oracle@racnode1 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
eth0 192.168.203.0 global public
eth1 10.10.10.0 global cluster_interconnect
[oracle@racnode2 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
eth0 192.168.203.0 global public
eth1 10.10.10.0 global cluster_interconnect
b) View the available interface names on each node by running the command (both nodes)
[oracle@racnode1 ~]$ oifcfg iflist
eth0 192.168.203.0
eth1 10.10.10.0
[oracle@racnode2 ~]$ oifcfg iflist
eth0 192.168.203.0
eth1 10.10.10.0
c) In our case the interface eth0 has to be changed. There is no modify command, so we have to delete and redefine the interface.
When you execute "oifcfg", the changes are also reflected on the other nodes. (execute on racnode1)
[oracle@racnode1 ~]$ oifcfg delif -global eth0
[oracle@racnode1 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
eth1 10.10.10.0 global cluster_interconnect
[oracle@racnode2 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
eth1 10.10.10.0 global cluster_interconnect
[oracle@racnode1 ~]$ oifcfg setif -global eth0/172.168.203.0:public
The CRS installation user (oracle) must be used for this command, otherwise you'll get the following errors
[karao@racnode1 bin]$ ./oifcfg delif -global eth0
PRIF-4: OCR error while deleting the configuration for the given interface
[karao@racnode1 bin]$ ./oifcfg setif -global eth0/172.168.203.0:public
PROC-5: User does not have permission to perform a cluster registry operation on this key. Authentication error [User does not have permission to perform this operation] [0]
PRIF-11: cluster registry error
d) Verify the change (both nodes)
[oracle@racnode1 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
eth0 172.168.203.0 global public
eth1 10.10.10.0 global cluster_interconnect
[oracle@racnode2 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
eth0 172.168.203.0 global public
eth1 10.10.10.0 global cluster_interconnect
4) Modify the VIP address
a) Verify current VIP (execute on racnode1)
[oracle@racnode1 ~]$ srvctl config nodeapps -n racnode1 -a
VIP exists.: /racnode1-vip.us.oracle.com/192.168.203.111/255.255.255.0/eth0
[oracle@racnode1 ~]$ srvctl config nodeapps -n racnode2 -a
VIP exists.: /racnode2-vip.us.oracle.com/192.168.203.112/255.255.255.0/eth0
Below is the summary of the output:
VIP Hostname is 'racnode1-vip.us.oracle.com'
VIP IP address is '192.168.203.111'
VIP subnet mask is '255.255.255.0'
Interface Name used by the VIP is called 'eth0'
b) Verify that the VIP is no longer running by executing 'ifconfig' (both nodes)
[oracle@racnode1 ~]$ /sbin/ifconfig
[oracle@racnode2 ~]$ /sbin/ifconfig
c) Change the VIP (we modified the Public IP so we must change the VIP to the same subnet as well)
Below are some notes to remember:
# The root user should be used for this action, otherwise you'll get the error below
[oracle@racnode1 ~]$ srvctl modify nodeapps -n racnode1 -A 172.168.203.111/255.255.255.0/eth0
PRKO-2117 : This command should be executed as the system privilege user.
# The variable ORACLE_HOME must be initialised, otherwise you'll get the error below
****ORACLE_HOME environment variable not set!
ORACLE_HOME should be set to the main
directory that contains Oracle products.
Set and export ORACLE_HOME, then re-run.
You can specify either the IP or the hostname on the "srvctl" command. In my case I want the output of the "srvctl config nodeapps -n racnode1 -a"
command to show the VIP hostname (Option 1). Below are two ways to do it:
First set ORACLE_HOME
[root@racnode1 ~]# export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
Option 1 (execute on racnode1):
** Modify /etc/hosts to contain new VIP IPs on both nodes
# Public VIP
172.168.203.111 racnode1-vip.us.oracle.com racnode1-vip
172.168.203.112 racnode2-vip.us.oracle.com racnode2-vip
[root@racnode1 ~]# /u01/app/oracle/product/10.2.0/db_1/bin/srvctl modify nodeapps -n racnode1 -A racnode1-vip.us.oracle.com/255.255.255.0/eth0
[oracle@racnode1 ~]$ srvctl config nodeapps -n racnode1 -a
VIP exists.: /racnode1-vip.us.oracle.com/172.168.203.111/255.255.255.0/eth0
[root@racnode1 ~]# /u01/app/oracle/product/10.2.0/db_1/bin/srvctl modify nodeapps -n racnode2 -A racnode2-vip.us.oracle.com/255.255.255.0/eth0
[oracle@racnode1 bin]$ srvctl config nodeapps -n racnode2 -a
VIP exists.: /racnode2-vip.us.oracle.com/172.168.203.112/255.255.255.0/eth0
Option 2 (execute on racnode1):
No modifications on /etc/hosts yet
[root@racnode1 ~]# /u01/app/oracle/product/10.2.0/db_1/bin/srvctl modify nodeapps -n racnode1 -A 172.168.203.111/255.255.255.0/eth0
[oracle@racnode1 ~]$ srvctl config nodeapps -n racnode1 -a
VIP exists.: /172.168.203.111/172.168.203.111/255.255.255.0/eth0
[root@racnode1 ~]# /u01/app/oracle/product/10.2.0/db_1/bin/srvctl modify nodeapps -n racnode2 -A 172.168.203.112/255.255.255.0/eth0
[oracle@racnode1 ~]$ srvctl config nodeapps -n racnode2 -a
VIP exists.: /172.168.203.112/172.168.203.112/255.255.255.0/eth0
d) Verify the change (execute on racnode1)
[oracle@racnode1 bin]$ srvctl config nodeapps -n racnode1 -a
VIP exists.: /racnode1-vip.us.oracle.com/172.168.203.111/255.255.255.0/eth0
[oracle@racnode1 bin]$ srvctl config nodeapps -n racnode2 -a
VIP exists.: /racnode2-vip.us.oracle.com/172.168.203.112/255.255.255.0/eth0
5) Shut down CRS (both nodes)
[root@racnode1 bin]# ./crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@racnode2 bin]# ./crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
6) Modify the IP addresses at the OS level (/etc/hosts), in the Net Services files (tnsnames.ora, listener.ora), OCFS2 (if available), etc. (both nodes)
Backup the files first before modifying them
This is also the point where the network engineers can rewire the servers
OS level:
/etc/hosts
/etc/sysconfig/network
/etc/resolv.conf
/etc/sysconfig/network-scripts/ifcfg-eth0 (sample sketch at the end of this step)
Net Services:
tnsnames.ora
listener.ora
OCFS2 (change to the new IP):
/etc/ocfs2/cluster.conf
NTP server address
/etc/ntp.conf
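For reference, the new /etc/sysconfig/network-scripts/ifcfg-eth0 on racnode1 would look
something like this (a sketch; keep the existing HWADDR/GATEWAY lines from your file):
DEVICE=eth0
BOOTPROTO=static
IPADDR=172.168.203.11
NETMASK=255.255.255.0
ONBOOT=yes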
7) Restart the servers, verify RAC components
[oracle@racnode1 ~]$ crs_stat2
HA Resource Target State
----------- ------ -----
ora.orcl.db ONLINE ONLINE on racnode1
ora.orcl.orcl1.inst ONLINE ONLINE on racnode1
ora.orcl.orcl2.inst ONLINE ONLINE on racnode2
ora.orcl.orcl_service.cs ONLINE ONLINE on racnode1
ora.orcl.orcl_service.orcl1.srv ONLINE ONLINE on racnode1
ora.orcl.orcl_service.orcl2.srv ONLINE ONLINE on racnode2
ora.racnode1.ASM1.asm ONLINE ONLINE on racnode1
ora.racnode1.LISTENER_RACNODE1.lsnr ONLINE ONLINE on racnode1
ora.racnode1.gsd ONLINE ONLINE on racnode1
ora.racnode1.ons ONLINE ONLINE on racnode1
ora.racnode1.vip ONLINE ONLINE on racnode1
ora.racnode2.ASM2.asm ONLINE ONLINE on racnode2
ora.racnode2.LISTENER_RACNODE2.lsnr ONLINE ONLINE on racnode2
ora.racnode2.gsd ONLINE ONLINE on racnode2
ora.racnode2.ons ONLINE ONLINE on racnode2
ora.racnode2.vip ONLINE ONLINE on racnode2
8) Application testing
-------------
Fallback procedure:
1) Shut down everything plus the CRS stack (execute on racnode1)
a) Shutdown RAC components
[oracle@racnode1 ~]$ srvctl stop service -d orcl
[oracle@racnode1 ~]$ srvctl stop database -d orcl
[oracle@racnode1 ~]$ srvctl stop asm -n racnode1
[oracle@racnode1 ~]$ srvctl stop asm -n racnode2
[oracle@racnode1 ~]$ srvctl stop nodeapps -n racnode1
[oracle@racnode1 ~]$ srvctl stop nodeapps -n racnode2
[oracle@racnode1 ~]$ crs_stat2
HA Resource Target State
----------- ------ -----
ora.orcl.db OFFLINE OFFLINE
ora.orcl.orcl1.inst OFFLINE OFFLINE
ora.orcl.orcl2.inst OFFLINE OFFLINE
ora.orcl.orcl_service.cs OFFLINE OFFLINE
ora.orcl.orcl_service.orcl1.srv OFFLINE OFFLINE
ora.orcl.orcl_service.orcl2.srv OFFLINE OFFLINE
ora.racnode1.ASM1.asm OFFLINE OFFLINE
ora.racnode1.LISTENER_RACNODE1.lsnr OFFLINE OFFLINE
ora.racnode1.gsd OFFLINE OFFLINE
ora.racnode1.ons OFFLINE OFFLINE
ora.racnode1.vip OFFLINE OFFLINE
ora.racnode2.ASM2.asm OFFLINE OFFLINE
ora.racnode2.LISTENER_RACNODE2.lsnr OFFLINE OFFLINE
ora.racnode2.gsd OFFLINE OFFLINE
ora.racnode2.ons OFFLINE OFFLINE
ora.racnode2.vip OFFLINE OFFLINE
b) Shut down CRS (both nodes)
[root@racnode1 bin]# ./crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@racnode2 bin]# ./crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
2) Put back the OCR and Voting Disk using "dd" (execute on racnode1)
a) Use "dd" to restore
[root@racnode1 ~]# dd if=/u03/flash_recovery_area/OCRFile_backup of=/u02/oradata/orcl/OCRFile
9640+0 records in
9640+0 records out
[root@racnode1 ~]# dd if=/u03/flash_recovery_area/CSSFile_backup of=/u02/oradata/orcl/CSSFile
20000+0 records in
20000+0 records out
b) Change permissions and ownership
[root@racnode1 ~]# chown root:oinstall /u02/oradata/orcl/OCRFile
[root@racnode1 ~]# chown oracle:oinstall /u02/oradata/orcl/CSSFile
[root@racnode1 ~]# chmod 640 /u02/oradata/orcl/OCRFile
[root@racnode1 ~]# chmod 644 /u02/oradata/orcl/CSSFile
c) Verify the restore. Notice that "oifcfg iflist" still shows the 172.168.203 subnet; after reconfiguring the interfaces and restarting, it will show 192.168.203.0
[oracle@racnode1 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
eth0 192.168.203.0 global public
eth1 10.10.10.0 global cluster_interconnect
[oracle@racnode1 ~]$ oifcfg iflist
eth0 172.168.203.0
eth1 10.10.10.0
[oracle@racnode2 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
eth0 192.168.203.0 global public
eth1 10.10.10.0 global cluster_interconnect
[oracle@racnode2 ~]$ oifcfg iflist
eth0 172.168.203.0
eth1 10.10.10.0
3) Put back the OS level files (/etc/hosts), Net Services files (tnsnames.ora, listener.ora), OCFS2 (if available), etc. (both nodes)
Also put back the old wiring configuration
OS level:
/etc/hosts
/etc/sysconfig/network
/etc/resolv.conf
/etc/sysconfig/network-scripts/ifcfg-eth0
Net Services:
tnsnames.ora
listener.ora
OCFS2 (change back to the old IP):
/etc/ocfs2/cluster.conf
NTP server address
/etc/ntp.conf
4) Restart the server, check the CRS and RAC components
a) Check the interfaces and VIP (both nodes)
[oracle@racnode1 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
eth0 192.168.203.0 global public
eth1 10.10.10.0 global cluster_interconnect
[oracle@racnode1 ~]$ oifcfg iflist
eth0 192.168.203.0
eth1 10.10.10.0
[oracle@racnode1 ~]$ srvctl config nodeapps -n racnode1 -a
VIP exists.: /racnode1-vip.us.oracle.com/192.168.203.111/255.255.255.0/eth0
[oracle@racnode2 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
eth0 192.168.203.0 global public
eth1 10.10.10.0 global cluster_interconnect
[oracle@racnode2 ~]$ oifcfg iflist
eth0 192.168.203.0
eth1 10.10.10.0
[oracle@racnode2 ~]$ srvctl config nodeapps -n racnode2 -a
VIP exists.: /racnode2-vip.us.oracle.com/192.168.203.112/255.255.255.0/eth0
b) Check RAC components
[oracle@racnode1 ~]$ crs_stat2
HA Resource Target State
----------- ------ -----
ora.orcl.db ONLINE ONLINE on racnode1
ora.orcl.orcl1.inst ONLINE ONLINE on racnode1
ora.orcl.orcl2.inst ONLINE ONLINE on racnode2
ora.orcl.orcl_service.cs ONLINE ONLINE on racnode1
ora.orcl.orcl_service.orcl1.srv ONLINE ONLINE on racnode1
ora.orcl.orcl_service.orcl2.srv ONLINE ONLINE on racnode2
ora.racnode1.ASM1.asm ONLINE ONLINE on racnode1
ora.racnode1.LISTENER_RACNODE1.lsnr ONLINE ONLINE on racnode1
ora.racnode1.gsd ONLINE ONLINE on racnode1
ora.racnode1.ons ONLINE ONLINE on racnode1
ora.racnode1.vip ONLINE ONLINE on racnode1
ora.racnode2.ASM2.asm ONLINE ONLINE on racnode2
ora.racnode2.LISTENER_RACNODE2.lsnr ONLINE ONLINE on racnode2
ora.racnode2.gsd ONLINE ONLINE on racnode2
ora.racnode2.ons ONLINE ONLINE on racnode2
ora.racnode2.vip ONLINE ONLINE on racnode2
------------------------------------------------
The RAC nodes here are VMs on VMware Server, so the host's NAT network (vmnet8) also has
to be moved to the new 172.168.203 subnet by rerunning the VMware configuration wizard:
------------------------------------------------
root@karl:/home/karao/Documents/VirtualMachines/vmware-update-2.6.27-5.5.7-2# ./runme.pl
Updating /usr/bin/vmware-config.pl ... already patched
Updating /usr/bin/vmware ... No patch needed/available
Updating /usr/bin/vmnet-bridge ... No patch needed/available
Updating /usr/lib/vmware/bin/vmware-vmx ... No patch needed/available
Updating /usr/lib/vmware/bin-debug/vmware-vmx ... No patch needed/available
VMware modules in "/usr/lib/vmware/modules/source" has been updated.
Before running VMware for the first time after update, you need to configure it
for your running kernel by invoking the following command:
"/usr/bin/vmware-config.pl". Do you want this script to invoke the command for
you now? [yes]
Making sure services for VMware Server are stopped.
Stopping VMware services:
Virtual machine monitor done
Bridged networking on /dev/vmnet0 done
DHCP server on /dev/vmnet1 done
Host-only networking on /dev/vmnet1 done
DHCP server on /dev/vmnet8 done
NAT service on /dev/vmnet8 done
Host-only networking on /dev/vmnet8 done
Virtual ethernet done
Configuring fallback GTK+ 2.4 libraries.
In which directory do you want to install the mime type icons?
[/usr/share/icons]
What directory contains your desktop menu entry files? These files have a
.desktop file extension. [/usr/share/applications]
In which directory do you want to install the application's icon?
[/usr/share/pixmaps]
/usr/share/applications/vmware-server.desktop: warning: value "vmware-server.png" for key "Icon" in group "Desktop Entry" is an icon name with an extension, but there should be no extension as described in the Icon Theme Specification if the value is not an absolute path
/usr/share/applications/vmware-console-uri-handler.desktop: warning: value "vmware-server.png" for key "Icon" in group "Desktop Entry" is an icon name with an extension, but there should be no extension as described in the Icon Theme Specification if the value is not an absolute path
Trying to find a suitable vmmon module for your running kernel.
None of the pre-built vmmon modules for VMware Server is suitable for your
running kernel. Do you want this program to try to build the vmmon module for
your system (you need to have a C compiler installed on your system)? [yes]
Using compiler "/usr/bin/gcc". Use environment variable CC to override.
What is the location of the directory of C header files that match your running
kernel? [/lib/modules/2.6.27-11-generic/build/include]
Extracting the sources of the vmmon module.
Building the vmmon module.
Building for VMware Server 1.0.0.
Using 2.6.x kernel build system.
make: Entering directory `/tmp/vmware-config0/vmmon-only'
make -C /lib/modules/2.6.27-11-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
make[1]: Entering directory `/usr/src/linux-headers-2.6.27-11-generic'
CC [M] /tmp/vmware-config0/vmmon-only/linux/driver.o
CC [M] /tmp/vmware-config0/vmmon-only/linux/driverLog.o
CC [M] /tmp/vmware-config0/vmmon-only/linux/hostif.o
/tmp/vmware-config0/vmmon-only/linux/hostif.c: In function ‘HostIF_SetFastClockRate’:
/tmp/vmware-config0/vmmon-only/linux/hostif.c:3441: warning: passing argument 2 of ‘send_sig’ discards qualifiers from pointer target type
CC [M] /tmp/vmware-config0/vmmon-only/common/comport.o
CC [M] /tmp/vmware-config0/vmmon-only/common/cpuid.o
CC [M] /tmp/vmware-config0/vmmon-only/common/hash.o
CC [M] /tmp/vmware-config0/vmmon-only/common/memtrack.o
CC [M] /tmp/vmware-config0/vmmon-only/common/phystrack.o
CC [M] /tmp/vmware-config0/vmmon-only/common/task.o
cc1plus: warning: command line option "-Werror-implicit-function-declaration" is valid for C/ObjC but not for C++
cc1plus: warning: command line option "-Wdeclaration-after-statement" is valid for C/ObjC but not for C++
cc1plus: warning: command line option "-Wno-pointer-sign" is valid for C/ObjC but not for C++
cc1plus: warning: command line option "-Wstrict-prototypes" is valid for Ada/C/ObjC but not for C++
In file included from /tmp/vmware-config0/vmmon-only/common/task.c:1195:
/tmp/vmware-config0/vmmon-only/common/task_compat.h: In function ‘void Task_Switch_V45(VMDriver*, Vcpuid)’:
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::validEIP’ may be used uninitialized in this function
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::cs’ may be used uninitialized in this function
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::rsp’ may be used uninitialized in this function
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::rip’ may be used uninitialized in this function
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciContext.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciDatagram.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciDriver.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciDs.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciGroup.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciHashtable.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciProcess.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciResource.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciSharedMem.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmx86.o
CC [M] /tmp/vmware-config0/vmmon-only/vmcore/compat.o
CC [M] /tmp/vmware-config0/vmmon-only/vmcore/moduleloop.o
LD [M] /tmp/vmware-config0/vmmon-only/vmmon.o
Building modules, stage 2.
MODPOST 1 modules
WARNING: modpost: module vmmon.ko uses symbol 'init_mm' marked UNUSED
CC /tmp/vmware-config0/vmmon-only/vmmon.mod.o
LD [M] /tmp/vmware-config0/vmmon-only/vmmon.ko
make[1]: Leaving directory `/usr/src/linux-headers-2.6.27-11-generic'
cp -f vmmon.ko ./../vmmon.o
make: Leaving directory `/tmp/vmware-config0/vmmon-only'
The module loads perfectly in the running kernel.
This program previously created the file /dev/vmmon, and was about to remove
it. Somebody else apparently did it already.
This program previously created the file /dev/parport0, and was about to remove
it. Somebody else apparently did it already.
This program previously created the file /dev/parport1, and was about to remove
it. Somebody else apparently did it already.
This program previously created the file /dev/parport2, and was about to remove
it. Somebody else apparently did it already.
This program previously created the file /dev/parport3, and was about to remove
it. Somebody else apparently did it already.
You have already setup networking.
Would you like to skip networking setup and keep your old settings as they are?
(yes/no) [yes] no
Do you want networking for your virtual machines? (yes/no/help) [yes]
Would you prefer to modify your existing networking configuration using the
wizard or the editor? (wizard/editor/help) [wizard]
The following bridged networks have been defined:
. vmnet0 is bridged to eth0
Do you wish to configure another bridged network? (yes/no) [no]
Do you want to be able to use NAT networking in your virtual machines? (yes/no)
[yes]
Configuring a NAT network for vmnet8.
The NAT network is currently configured to use the private subnet
192.168.203.0/255.255.255.0. Do you want to keep these settings? [yes] no
Do you want this program to probe for an unused private subnet? (yes/no/help)
[yes] no
What will be the IP address of your host on the private
network? 172.168.203.0
What will be the netmask of your private network? 255.255.255.0
The following NAT networks have been defined:
. vmnet8 is a NAT network on private subnet 172.168.203.0.
Do you wish to configure another NAT network? (yes/no) [no]
Do you want to be able to use host-only networking in your virtual machines?
[yes]
Configuring a host-only network for vmnet1.
The host-only network is currently configured to use the private subnet
10.10.10.0/255.255.255.0. Do you want to keep these settings? [yes]
The following host-only networks have been defined:
. vmnet1 is a host-only network on private subnet 10.10.10.0.
Do you wish to configure another host-only network? (yes/no) [no]
Extracting the sources of the vmnet module.
Building the vmnet module.
Building for VMware Server 1.0.0.
Using 2.6.x kernel build system.
make: Entering directory `/tmp/vmware-config0/vmnet-only'
make -C /lib/modules/2.6.27-11-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
make[1]: Entering directory `/usr/src/linux-headers-2.6.27-11-generic'
CC [M] /tmp/vmware-config0/vmnet-only/driver.o
CC [M] /tmp/vmware-config0/vmnet-only/hub.o
CC [M] /tmp/vmware-config0/vmnet-only/userif.o
CC [M] /tmp/vmware-config0/vmnet-only/netif.o
CC [M] /tmp/vmware-config0/vmnet-only/bridge.o
CC [M] /tmp/vmware-config0/vmnet-only/filter.o
CC [M] /tmp/vmware-config0/vmnet-only/procfs.o
CC [M] /tmp/vmware-config0/vmnet-only/smac_compat.o
CC [M] /tmp/vmware-config0/vmnet-only/smac_linux.x86_64.o
LD [M] /tmp/vmware-config0/vmnet-only/vmnet.o
Building modules, stage 2.
MODPOST 1 modules
WARNING: modpost: missing MODULE_LICENSE() in /tmp/vmware-config0/vmnet-only/vmnet.o
see include/linux/module.h for more information
CC /tmp/vmware-config0/vmnet-only/vmnet.mod.o
LD [M] /tmp/vmware-config0/vmnet-only/vmnet.ko
make[1]: Leaving directory `/usr/src/linux-headers-2.6.27-11-generic'
cp -f vmnet.ko ./../vmnet.o
make: Leaving directory `/tmp/vmware-config0/vmnet-only'
The module loads perfectly in the running kernel.
Please specify a port for remote console connections to use [902]
* Stopping internet superserver xinetd [ OK ]
* Starting internet superserver xinetd [ OK ]
Configuring the VMware VmPerl Scripting API.
Building the VMware VmPerl Scripting API.
Using compiler "/usr/bin/gcc". Use environment variable CC to override.
Installing the VMware VmPerl Scripting API.
The installation of the VMware VmPerl Scripting API succeeded.
Do you want this program to set up permissions for your registered virtual
machines? This will be done by setting new permissions on all files found in
the "/etc/vmware/vm-list" file. [no]
Generating SSL Server Certificate
In which directory do you want to keep your virtual machine files?
[/home/karao/Documents/VirtualMachines]
Do you want to enter a serial number now? (yes/no/help) [no]
Starting VMware services:
Virtual machine monitor done
Virtual ethernet done
Bridged networking on /dev/vmnet0 done
Host-only networking on /dev/vmnet1 (background) done
Host-only networking on /dev/vmnet8 (background) done
NAT service on /dev/vmnet8 done
Starting VMware virtual machines... done
The configuration of VMware Server 1.0.8 build-126538 for Linux for this
running kernel completed successfully.
---------------------------
Resources
---------------------------
Note 276434.1 Modifying the VIP or VIP Hostname of a 10g Oracle Clusterware Node
Note 283684.1 How to Change Interconnect/Public Interface IP Subnet in a 10g Cluster
Note 271121.1 - How to change VIP and VIP/Hostname in 10g
Bug: 4500688 - THE INTERFACE NAME SHOULD BE SPECIFY WHEN EXECUTING 'SRVCTL MODIFY NODEAPPS'
---------------
Oracle® Clusterware Administration and Deployment Guide 11g Release 1 (11.1)
2 Administering Oracle Clusterware
* Changing Network Addresses
----------------
http://forums.oracle.com/forums/thread.jspa?threadID=339447
http://surachartopun.com/2007/01/i-want-to-change-ip-address-on-oracle.html
http://www.ikickass.com/changeoracle10gracvip
http://orcl-experts.info/index.php?name=FAQ&id_cat=9
http://www.db-nemec.com/RAC_IP_Change.html
-----------------
put back to 192.168.203 (rerun the VMware config wizard to revert the NAT subnet)
root@karl:/home/karao/Documents/VirtualMachines/vmware-update-2.6.27-5.5.7-2# ./runme.pl
Updating /usr/bin/vmware-config.pl ... already patched
Updating /usr/bin/vmware ... No patch needed/available
Updating /usr/bin/vmnet-bridge ... No patch needed/available
Updating /usr/lib/vmware/bin/vmware-vmx ... No patch needed/available
Updating /usr/lib/vmware/bin-debug/vmware-vmx ... No patch needed/available
VMware modules in "/usr/lib/vmware/modules/source" has been updated.
Before running VMware for the first time after update, you need to configure it
for your running kernel by invoking the following command:
"/usr/bin/vmware-config.pl". Do you want this script to invoke the command for
you now? [yes]
Making sure services for VMware Server are stopped.
Stopping VMware services:
Virtual machine monitor done
Bridged networking on /dev/vmnet0 done
DHCP server on /dev/vmnet1 done
Host-only networking on /dev/vmnet1 done
DHCP server on /dev/vmnet8 done
NAT service on /dev/vmnet8 done
Host-only networking on /dev/vmnet8 done
Virtual ethernet done
Configuring fallback GTK+ 2.4 libraries.
In which directory do you want to install the mime type icons?
[/usr/share/icons]
What directory contains your desktop menu entry files? These files have a
.desktop file extension. [/usr/share/applications]
In which directory do you want to install the application's icon?
[/usr/share/pixmaps]
/usr/share/applications/vmware-server.desktop: warning: value "vmware-server.png" for key "Icon" in group "Desktop Entry" is an icon name with an extension, but there should be no extension as described in the Icon Theme Specification if the value is not an absolute path
/usr/share/applications/vmware-console-uri-handler.desktop: warning: value "vmware-server.png" for key "Icon" in group "Desktop Entry" is an icon name with an extension, but there should be no extension as described in the Icon Theme Specification if the value is not an absolute path
Trying to find a suitable vmmon module for your running kernel.
None of the pre-built vmmon modules for VMware Server is suitable for your
running kernel. Do you want this program to try to build the vmmon module for
your system (you need to have a C compiler installed on your system)? [yes]
Using compiler "/usr/bin/gcc". Use environment variable CC to override.
What is the location of the directory of C header files that match your running
kernel? [/lib/modules/2.6.27-11-generic/build/include]
Extracting the sources of the vmmon module.
Building the vmmon module.
Building for VMware Server 1.0.0.
Using 2.6.x kernel build system.
make: Entering directory `/tmp/vmware-config0/vmmon-only'
make -C /lib/modules/2.6.27-11-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
make[1]: Entering directory `/usr/src/linux-headers-2.6.27-11-generic'
CC [M] /tmp/vmware-config0/vmmon-only/linux/driver.o
CC [M] /tmp/vmware-config0/vmmon-only/linux/driverLog.o
CC [M] /tmp/vmware-config0/vmmon-only/linux/hostif.o
/tmp/vmware-config0/vmmon-only/linux/hostif.c: In function ‘HostIF_SetFastClockRate’:
/tmp/vmware-config0/vmmon-only/linux/hostif.c:3441: warning: passing argument 2 of ‘send_sig’ discards qualifiers from pointer target type
CC [M] /tmp/vmware-config0/vmmon-only/common/comport.o
CC [M] /tmp/vmware-config0/vmmon-only/common/cpuid.o
CC [M] /tmp/vmware-config0/vmmon-only/common/hash.o
CC [M] /tmp/vmware-config0/vmmon-only/common/memtrack.o
CC [M] /tmp/vmware-config0/vmmon-only/common/phystrack.o
CC [M] /tmp/vmware-config0/vmmon-only/common/task.o
cc1plus: warning: command line option "-Werror-implicit-function-declaration" is valid for C/ObjC but not for C++
cc1plus: warning: command line option "-Wdeclaration-after-statement" is valid for C/ObjC but not for C++
cc1plus: warning: command line option "-Wno-pointer-sign" is valid for C/ObjC but not for C++
cc1plus: warning: command line option "-Wstrict-prototypes" is valid for Ada/C/ObjC but not for C++
In file included from /tmp/vmware-config0/vmmon-only/common/task.c:1195:
/tmp/vmware-config0/vmmon-only/common/task_compat.h: In function ‘void Task_Switch_V45(VMDriver*, Vcpuid)’:
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::validEIP’ may be used uninitialized in this function
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::cs’ may be used uninitialized in this function
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::rsp’ may be used uninitialized in this function
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::rip’ may be used uninitialized in this function
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciContext.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciDatagram.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciDriver.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciDs.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciGroup.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciHashtable.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciProcess.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciResource.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmciSharedMem.o
CC [M] /tmp/vmware-config0/vmmon-only/common/vmx86.o
CC [M] /tmp/vmware-config0/vmmon-only/vmcore/compat.o
CC [M] /tmp/vmware-config0/vmmon-only/vmcore/moduleloop.o
LD [M] /tmp/vmware-config0/vmmon-only/vmmon.o
Building modules, stage 2.
MODPOST 1 modules
WARNING: modpost: module vmmon.ko uses symbol 'init_mm' marked UNUSED
CC /tmp/vmware-config0/vmmon-only/vmmon.mod.o
LD [M] /tmp/vmware-config0/vmmon-only/vmmon.ko
make[1]: Leaving directory `/usr/src/linux-headers-2.6.27-11-generic'
cp -f vmmon.ko ./../vmmon.o
make: Leaving directory `/tmp/vmware-config0/vmmon-only'
The module loads perfectly in the running kernel.
This program previously created the file /dev/vmmon, and was about to remove
it. Somebody else apparently did it already.
You have already setup networking.
Would you like to skip networking setup and keep your old settings as they are?
(yes/no) [yes] no
Do you want networking for your virtual machines? (yes/no/help) [yes]
Would you prefer to modify your existing networking configuration using the
wizard or the editor? (wizard/editor/help) [wizard]
The following bridged networks have been defined:
. vmnet0 is bridged to eth0
Do you wish to configure another bridged network? (yes/no) [no]
Do you want to be able to use NAT networking in your virtual machines? (yes/no)
[yes]
Configuring a NAT network for vmnet8.
The NAT network is currently configured to use the private subnet
172.168.203.0/255.255.255.0. Do you want to keep these settings? [yes] no
Do you want this program to probe for an unused private subnet? (yes/no/help)
[yes] no
What will be the IP address of your host on the private
network? 192.168.203.0
What will be the netmask of your private network? 255.255.255.0
The following NAT networks have been defined:
. vmnet8 is a NAT network on private subnet 192.168.203.0.
Do you wish to configure another NAT network? (yes/no) [no]
Do you want to be able to use host-only networking in your virtual machines?
[yes]
Configuring a host-only network for vmnet1.
The host-only network is currently configured to use the private subnet
10.10.10.0/255.255.255.0. Do you want to keep these settings? [yes]
The following host-only networks have been defined:
. vmnet1 is a host-only network on private subnet 10.10.10.0.
Do you wish to configure another host-only network? (yes/no) [no]
Extracting the sources of the vmnet module.
Building the vmnet module.
Building for VMware Server 1.0.0.
Using 2.6.x kernel build system.
make: Entering directory `/tmp/vmware-config0/vmnet-only'
make -C /lib/modules/2.6.27-11-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
make[1]: Entering directory `/usr/src/linux-headers-2.6.27-11-generic'
CC [M] /tmp/vmware-config0/vmnet-only/driver.o
CC [M] /tmp/vmware-config0/vmnet-only/hub.o
CC [M] /tmp/vmware-config0/vmnet-only/userif.o
CC [M] /tmp/vmware-config0/vmnet-only/netif.o
CC [M] /tmp/vmware-config0/vmnet-only/bridge.o
CC [M] /tmp/vmware-config0/vmnet-only/filter.o
CC [M] /tmp/vmware-config0/vmnet-only/procfs.o
CC [M] /tmp/vmware-config0/vmnet-only/smac_compat.o
CC [M] /tmp/vmware-config0/vmnet-only/smac_linux.x86_64.o
LD [M] /tmp/vmware-config0/vmnet-only/vmnet.o
Building modules, stage 2.
MODPOST 1 modules
WARNING: modpost: missing MODULE_LICENSE() in /tmp/vmware-config0/vmnet-only/vmnet.o
see include/linux/module.h for more information
CC /tmp/vmware-config0/vmnet-only/vmnet.mod.o
LD [M] /tmp/vmware-config0/vmnet-only/vmnet.ko
make[1]: Leaving directory `/usr/src/linux-headers-2.6.27-11-generic'
cp -f vmnet.ko ./../vmnet.o
make: Leaving directory `/tmp/vmware-config0/vmnet-only'
The module loads perfectly in the running kernel.
Please specify a port for remote console connections to use [902]
* Stopping internet superserver xinetd [ OK ]
* Starting internet superserver xinetd [ OK ]
Configuring the VMware VmPerl Scripting API.
Building the VMware VmPerl Scripting API.
Using compiler "/usr/bin/gcc". Use environment variable CC to override.
Installing the VMware VmPerl Scripting API.
The installation of the VMware VmPerl Scripting API succeeded.
Do you want this program to set up permissions for your registered virtual
machines? This will be done by setting new permissions on all files found in
the "/etc/vmware/vm-list" file. [no]
Generating SSL Server Certificate
In which directory do you want to keep your virtual machine files?
[/home/karao/Documents/VirtualMachines]
Do you want to enter a serial number now? (yes/no/help) [no]
Starting VMware services:
Virtual machine monitor done
Virtual ethernet done
Bridged networking on /dev/vmnet0 done
Host-only networking on /dev/vmnet1 (background) done
Host-only networking on /dev/vmnet8 (background) done
NAT service on /dev/vmnet8 done
Starting VMware virtual machines... done
The configuration of VMware Server 1.0.8 build-126538 for Linux for this
running kernel completed successfully.
root@karl:/home/karao/Documents/VirtualMachines/vmware-update-2.6.27-5.5.7-2#
}}}
http://oraclue.com/2010/11/01/issue-with-oracle-11-2-0-2-new-redundant-interconnect/
Troubleshooting case study for 9i RAC ..PRKC-1021 : Problem in the clusterware https://blogs.oracle.com/gverma/entry/troubleshooting_case_study_for
Troubleshooting done to make root.sh work after a 10gR2 CRS (10.2.0.1) installation on HP-UX PA RISC 64-bit OS https://blogs.oracle.com/gverma/entry/troubleshooting_done_to_make_r
crsctl start crs does not work in 10gR2 https://blogs.oracle.com/gverma/entry/crsctl_start_crs_does_not_work
Considerations for virtual IP setup before doing the 10gR2 CRS install https://blogs.oracle.com/gverma/entry/considerations_for_virtual_ip
10gR2 CRS case study: CRS would not start after reboot - stuck at /etc/init.d/init.cssd startcheck https://blogs.oracle.com/gverma/entry/10gr2_crs_case_study_crs_would
CRS would not start on Exadata http://www.evernote.com/shard/s48/sh/f107ae7b-be88-44f4-8b18-dca7e9e7f1f6/2af0a58a0f24d726b8c7c15ff1e4cdc7
RAC Reference
http://morganslibrary.org/reference/rac.html
RAC Health Check
http://oraexplorer.com/2009/05/rac-assessment-from-oracle/
Oracle RACOne Node -- Changes in 11.2.0.2 [ID 1232802.1]
Administering Oracle RAC One Node http://docs.oracle.com/cd/E11882_01/rac.112/e16795/onenode.htm#BABGAJGH
Oracle RAC One Node http://docs.oracle.com/cd/E11882_01/server.112/e17157/unplanned.htm#BABICFCD
Using Oracle Universal Installer to Install Oracle RAC One Node http://docs.oracle.com/cd/E11882_01/install.112/e24660/racinstl.htm#CIHGGAAE
{{{
1). Verifying an existing Oracle RAC One Node Database
srvctl config database -d <db_name>
srvctl status database -d <db_name>
srvctl config database -d racone
srvctl status database -d racone
2). Performing an online migration
srvctl relocate database -d <db_unique_name> {[-n <target>] [-w <timeout>] | -a [-r]} [-v]
srvctl relocate database -d racone -n harac1 -w 15 -v
3) Converting an Oracle RAC One Node Database to Oracle RAC or vice versa
To convert a database from Oracle RAC One Node to Oracle RAC:
srvctl convert database -d <db_unique_name> -c RAC [-n <node>]
srvctl convert database -d racone -c RAC -n harac1
Add more instances on other nodes as required:
[oracle@harac2 bin]$ srvctl add instance -d racone -i racone_1 -n harac1
[oracle@harac2 bin]$ srvctl add instance -d racone -i racone_3 -n lfmsx3
To convert a database from Oracle RAC to Oracle RAC One Node:
During the RAC to RACOne conversion, please ensure that the additional instances are removed using DBCA before running the "srvctl convert database" command.
srvctl convert database -d <db_unique_name> -c RACONENODE -i <inst prefix> -w <timeout>
Eg: srvctl convert database -d racone -c RACONENODE -w 30 -i racone
4) Upgrading an Oracle RAC One Node database from 11.2.0.1 to 11.2.0.2
see Oracle RACOne Node -- Changes in 11.2.0.2 [ID 1232802.1]
}}}
! some things you should know about instance relocation
* if you explicitly relocate an instance, the instance name changes from inst_1 to inst_2
* if the instance just suddenly shuts down, it stays inst_1: it was inst_1 on node1 and will still be inst_1 on node2
behavior of relocation to sessions:
* it's seamless
killing the pmon:
* it will require a relogin, and this behavior sucks because it will also mess up your DBFS mounting
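A sketch of an explicit relocation using the racone database from above (node names follow the earlier examples):
{{{
$ srvctl relocate database -d racone -n harac2 -w 30 -v
$ srvctl status database -d racone
Instance racone_2 is running on node harac2
}}}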
! start stop rac one node
{{{
$ cat pmoncheck
dcli -l oracle -g /home/oracle/dbs_group ps -ef | grep pmon | grep -v grep | grep -v ASM
$ cat stopall.sh
srvctl stop listener
srvctl stop database -d testdb
srvctl stop database -d soltst
srvctl stop database -d solprd
srvctl stop database -d solnpi
srvctl stop database -d soldev
srvctl stop database -d reltst
srvctl stop database -d relprd
srvctl stop database -d reldev
srvctl stop database -d jira
srvctl stop database -d ifstst
srvctl stop database -d ifsprd
srvctl stop database -d ifsnpi
srvctl stop database -d ifsdev
srvctl stop database -d gl91
srvctl stop database -d adminrep
$ cat startall.sh
srvctl start listener
srvctl start database -d adminrep
srvctl start database -d reltst -n haioda1
srvctl start database -d testdb -n haioda1
srvctl start database -d gl91 -n haioda1
srvctl start database -d relprd -n haioda1
srvctl start database -d solprd -n haioda1
srvctl start database -d testdb2 -n haioda1
srvctl start database -d reldev -n haioda1
srvctl start database -d soldev -n haioda2
srvctl start database -d ifsprd -n haioda2
srvctl start database -d ifsdev -n haioda2
srvctl start database -d solnpi -n haioda2
srvctl start database -d ifsnpi -n haioda2
srvctl start database -d ifstst -n haioda2
srvctl start database -d soltst -n haioda2
srvctl start database -d jira -n haioda2
}}}
https://blogs.oracle.com/XPSONHA/entry/installation_procedure_rac_nod
INSTANCE_GROUPS and PARALLEL_INSTANCE_GROUP
http://christianbilien.wordpress.com/2007/09/12/strategies-for-rac-inter-instance-parallelized-queries-part-12/
http://christianbilien.wordpress.com/2007/09/14/strategies-for-parallelized-queries-across-rac-instances-part-22/
http://www.oraclemagician.com/white_papers/par_groups.pdf
http://www.oracledatabase12g.com/archives/checklist-for-performance-problems-with-parallel-execution.html <-- CHECKLIST!
http://satya-racdba.blogspot.com/2009/12/srvctl-commands.html
http://yong321.freeshell.org/oranotes/SingleClientAccessName.txt
http://www.mydbspace.com/?p=324 how scan works
{{{
[pd01db01:oracle:dbm1] /home/oracle
> srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node pd01db04
SCAN VIP scan2 is enabled
SCAN VIP scan2 is not running
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node pd01db01
[pd01db01:oracle:dbm1] /home/oracle
>
[pd01db01:oracle:dbm1] /home/oracle
>
[pd01db01:oracle:dbm1] /home/oracle
> srvctl start scan
[pd01db01:oracle:dbm1] /home/oracle
> srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node pd01db04
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node pd01db02
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node pd01db01
}}}
on a failover scenario the SCAN does not fail back automatically; you have to manually bring it back to the preferred node
https://twitter.com/martinberx/status/681445522050256896
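To move a SCAN VIP and its listener back to a specific node manually (a sketch using 11.2 syntax; the scan number and node name are taken from the listing above as an example):
{{{
srvctl relocate scan_listener -i 2 -n pd01db02
srvctl status scan
}}}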
<<showtoc>>
! the setup and initial test case
{{{
/u01/app/oracle/product/11.2.0.3/dbhome_1/
srvctl add service -d oltp -s oltp_srvc -r oltp1,oltp2
srvctl start service -d oltp -s oltp_srvc
srvctl stop service -d oltp -s oltp_srvc
srvctl remove service -d oltp -s oltp_srvc
### disable route
srvctl disable service -d oltp -s oltp_srvc -i oltp2 <-- this wont stop the service, you have to manually stop it
srvctl stop service -d oltp -s oltp_srvc -i oltp2
srvctl enable service -d oltp -s oltp_srvc -i oltp1,oltp2 <-- this wont start the service, you have to manually start it
srvctl start service -d oltp -s oltp_srvc <-- this will start the service
### modify route
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1 -a oltp2 <-- this sets the preferred and available instances without stopping, removes the oltp2 on crsctl
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1,oltp2 <-- adds the oltp2 back but not started
srvctl start service -d oltp -s oltp_srvc <-- have to start manually after
------------------------------------------------------------------------------------------------------------------------
1) the current state
$ srvctl config service -d oltp
Service name: oltp_srvc
Service is enabled
Server pool: oltp_oltp_srvc
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: oltp1
Available instances:
$ crsctl stat res -t | less
ora.oltp.db
1 ONLINE ONLINE enkx3db01 Open,STABLE
2 ONLINE ONLINE enkx3db02 Open,STABLE
ora.oltp.oltp_srvc.svc
1 ONLINE ONLINE enkx3db01 STABLE
2) Add the 2nd instance on the service
* this adds the service to the config as an available node
* also, if you try executing "srvctl start service" it will error with PRCC-1014 : oltp_srvc was already running, because the preferred instance is already running and the available instance won't kick in unless the preferred instances are gone or it's modified to be a preferred instance
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1 -a oltp2
$ srvctl config service -d oltp
Service name: oltp_srvc
Service is enabled
Server pool: oltp_oltp_srvc
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: oltp1
Available instances: oltp2
ora.oltp.db
1 ONLINE ONLINE enkx3db01 Open,STABLE
2 ONLINE ONLINE enkx3db02 Open,STABLE
ora.oltp.oltp_srvc.svc
1 ONLINE ONLINE enkx3db01 STABLE
3) Expand the resources
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1,oltp2 <-- adds the oltp2 back but not started
$ srvctl config service -d oltp
Service name: oltp_srvc
Service is enabled
Server pool: oltp_oltp_srvc
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: oltp1,oltp2
Available instances:
ora.oltp.db
1 ONLINE ONLINE enkx3db01 Open,STABLE
2 ONLINE ONLINE enkx3db02 Open,STABLE
ora.oltp.oltp_srvc.svc
1 ONLINE ONLINE enkx3db01 STABLE
2 OFFLINE OFFLINE STABLE
srvctl start service -d oltp -s oltp_srvc <-- have to start manually after
Service name: oltp_srvc
Service is enabled
Server pool: oltp_oltp_srvc
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: oltp1,oltp2
Available instances:
ora.oltp.db
1 ONLINE ONLINE enkx3db01 Open,STABLE
2 ONLINE ONLINE enkx3db02 Open,STABLE
ora.oltp.oltp_srvc.svc
1 ONLINE ONLINE enkx3db01 STABLE
2 ONLINE ONLINE enkx3db02 STABLE
4) Reduce the resources
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1 -a oltp2
$ srvctl config service -d oltp
Service name: oltp_srvc
Service is enabled
Server pool: oltp_oltp_srvc
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: oltp1
Available instances: oltp2
ora.oltp.db
1 ONLINE ONLINE enkx3db01 Open,STABLE
2 ONLINE ONLINE enkx3db02 Open,STABLE
ora.oltp.oltp_srvc.svc
1 ONLINE ONLINE enkx3db01 STABLE
5) Create two dbms_scheduler jobs for Expand and Reduce of nodes
--Expand job
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1,oltp2 <-- adds the oltp2 back but not started
srvctl start service -d oltp -s oltp_srvc <-- have to start manually after
--Reduce job
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1 -a oltp2
}}}
! scheduling/automating it
!! cron job
{{{
vi expand.sh
#!/bin/bash
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1,oltp2
srvctl start service -d oltp -s oltp_srvc
vi reduce.sh
#!/bin/bash
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1 -a oltp2
}}}
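A crontab sketch for the oracle user to drive these scripts (the times are made up; remember cron starts with a bare environment, so set ORACLE_HOME and PATH inside the scripts):
{{{
# expand the service to both instances at 07:00 on weekdays, shrink back at 19:00
0 7 * * 1-5 /home/oracle/expand.sh >> /tmp/expand.log 2>&1
0 19 * * 1-5 /home/oracle/reduce.sh >> /tmp/reduce.log 2>&1
}}}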
!! dbms_scheduler - doesn't seem to work, seems to be having some environment issues
{{{
begin
dbms_scheduler.create_credential(
credential_name => '"SYSTEM"."ORACLE_CRED"',
username => 'oracle',
password => 'enk1tec');
end;
/
SELECT u.name CREDENTIAL_OWNER, O.NAME CREDENTIAL_NAME, C.USERNAME,
DBMS_ISCHED.GET_CREDENTIAL_PASSWORD(O.NAME, u.name) pwd
FROM SYS.SCHEDULER$_CREDENTIAL C, SYS.OBJ$ O, SYS.USER$ U
WHERE U.USER# = O.OWNER#
AND C.OBJ# = O.OBJ# ;
begin
DBMS_SCHEDULER.create_job (
job_name => '"SYSTEM"."EXPAND_SHRINK_SERVICE"',
JOB_TYPE => 'EXECUTABLE',
JOB_ACTION => '/home/oracle/dba/karao/scripts/expand.sh',
repeat_interval => 'FREQ=MINUTELY;BYSECOND=0',
start_date => SYSTIMESTAMP,
number_of_arguments => 0
);
dbms_scheduler.set_attribute('"SYSTEM"."EXPAND_SHRINK_SERVICE"','credential_name','"SYSTEM"."ORACLE_CRED"');
dbms_Scheduler.enable('"SYSTEM"."EXPAND_SHRINK_SERVICE"');
END;
/
exec dbms_scheduler.run_job('"SYSTEM"."EXPAND_SHRINK_SERVICE"',FALSE);
select additional_info from dba_scheduler_job_run_details where job_name like '%EXPAND_SHRINK_SERVICE%';
BEGIN
SYS.DBMS_SCHEDULER.DROP_JOB(job_name => '"SYSTEM"."EXPAND_SHRINK_SERVICE"',
defer => false,
force => true);
END;
/
}}}
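The environment issues are most likely because external scheduler jobs start with an empty environment; a sketch of expand.sh with the environment set explicitly (ORACLE_HOME taken from the setup section above):
{{{
#!/bin/bash
# set the environment explicitly -- nothing is inherited when the job is spawned
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1,oltp2
srvctl start service -d oltp -s oltp_srvc
}}}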
http://allthingsoracle.com/an-introduction-to-11-2-rac-server-pools/
OBE Grid list http://apex.oracle.com/pls/apex/f?p=44785:2:0:FORCE_QUERY::2,CIR,RIR:P2_PRODUCT_ID,P2_RELEASE_ID:2011,71
OBE - policy managed databases http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r2/grid_rac/10_rac_dbca/rac_dbca_viewlet_swf.html
Server Pool experiments in RAC 11.2 http://martincarstenbach.wordpress.com/2010/02/12/server-pool-experiments-in-rac-11-2/
Policy managed databases http://martincarstenbach.wordpress.com/2010/01/26/policy-managed-databases/
http://cgswong.blogspot.com/2010/12/oracle11gr2-rac-faq.html
-- server pool
http://oracleinaction.com/server-pools/
http://www.hhutzler.de/blog/managing-server-pools/
policyset https://docs.oracle.com/database/121/CWADD/pbmgmt.htm#CWADD92636
policyset example http://blog.dbi-services.com/oracle-policy-managed-databases-policies-and-policy-sets/
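A quick server pool sketch (11.2+ syntax; the pool and database names are made up):
{{{
# create a pool that holds between 1 and 2 servers with importance 5
srvctl add srvpool -g apppool -l 1 -u 2 -i 5
srvctl config srvpool -g apppool
# converting an admin-managed database to policy-managed = assign it to the pool
srvctl modify database -d orcl -g apppool
}}}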
How To Test Application Continuity Using A Standalone Java Program (Doc ID 1602233.1)
TAF ENABLED GLOBAL SERVICE in GDS ENVIRONMNET 12C. (Doc ID 2283193.1)
Application Continuity Throws Exception No more data to read from socket For Commits After Failover (Doc ID 2197029.1)
{{{
How To Test Application Continuity Using A Standalone Java Program (Doc ID 1602233.1)
Applies to:
JDBC - Version 12.1.0.1.0 and later
Information in this document applies to any platform.
Goal
The document provides step-by-step instructions and a simple standalone Java Program which can be used to test the Application Continuity feature in the 12c JDBC driver.
Solution
Application Continuity in the 12c JDBC driver can be tested with a simple standalone java program and a RAC 12C Database Cluster.
Step1 - Creating a Database Service for Application Continuity on a RAC database
1) Add a service, say acservice, using srvctl on both instances of a two node RAC cluster, orcl1 and orcl2.
srvctl add service -s acservice -d orcl -r orcl1,orcl2
2) Start up the Service
srvctl start service -s acservice -d orcl
3) Enable the service for Application Continuity
srvctl modify service -d ORCL -s acservice -failovertype TRANSACTION -replay_init_time 300 -failoverretry 30 -failoverdelay 3 -notification TRUE -commit_outcome TRUE
If the service is not AC-enabled, the exception java.lang.ClassCastException: oracle.jdbc.driver.T4CConnection cannot be cast to oracle.jdbc.replay.ReplayableConnection would be generated.
Step2 - Running the Sample AcTest.java Program
1) Create and run the AcTest.java program, provided below, with the 12.1.0.1 JDBC driver. Please ensure the java program is named AcTest.java (notice the "A" and "T" in uppercase).
If the connection is successful, the following output is seen:
You are Connected to RAC Instance - ORCL1
2) When the program pauses, shut down ORCL1 RAC instance
srvctl stop instance -i ORCL1 -d ORCL
3) If the replay is successful, the connection will be successfully established on the ORCL2 RAC instance
After Replay Connected to RAC Instance - ORCL2
AcTest.java
========
import java.sql.*;
import oracle.jdbc.*;

public class AcTest
{
  public static void main(String[] args) throws SQLException, java.lang.InterruptedException
  {
    oracle.jdbc.replay.OracleDataSource AcDatasource = oracle.jdbc.replay.OracleDataSourceFactory.getOracleDataSource();
    AcDatasource.setURL("jdbc:oracle:thin:@<HOST>:<PORT>/acservice");
    AcDatasource.setUser("<USER>");
    AcDatasource.setPassword("<PASSWORD>");
    Connection conn = AcDatasource.getConnection();
    conn.setAutoCommit(false);
    PreparedStatement stmt = conn.prepareStatement("select instance_name from v$instance");
    ResultSet rset = stmt.executeQuery();
    while (rset.next())
    {
      System.out.println("You are Connected to RAC Instance - " + rset.getString(1));
    }
    // pause here and shut down the instance to trigger the replay
    Thread.sleep(60000);
    ((oracle.jdbc.replay.ReplayableConnection)conn).beginRequest();
    PreparedStatement stmt1 = conn.prepareStatement("select instance_name from v$instance");
    ResultSet rset1 = stmt1.executeQuery();
    while (rset1.next())
    {
      System.out.println("After Replay Connected to RAC Instance - " + rset1.getString(1));
    }
    rset.close();
    stmt.close();
    rset1.close();
    stmt1.close();
    // endRequest() has to be called before the connection is closed
    ((oracle.jdbc.replay.ReplayableConnection)conn).endRequest();
    conn.close();
  }
}
}}}
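To compile and run the test program above, something like this should work (assuming the 12c ojdbc7.jar, which contains the oracle.jdbc.replay classes, sits in the current directory; adjust the classpath and the connect placeholders):
{{{
javac -cp ojdbc7.jar AcTest.java
java  -cp .:ojdbc7.jar AcTest
}}}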
12c https://github.com/ardentperf/racattack-vagrantfile
11gR2 https://github.com/ardentperf/racattack
<<showtoc>>
! info
https://gerardnico.com/db/oracle/transaction_table
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-enqueues#TOC-enq:-TX---allocate-ITL-entry
http://yong321.freeshell.org/computer/deadlocks.txt
https://antognini.ch/2013/05/itl-deadlocks-script/
https://karlarao.github.io/karlaraowiki/index.html#ITL
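The usual mitigation discussed in the links above is to give the hot segments more ITL slots; a minimal sketch (the segment names are hypothetical):
{{{
alter table hot_tab initrans 10;            -- only affects newly formatted blocks
alter table hot_tab move;                   -- rebuilds existing blocks with the new INITRANS
alter index hot_tab_pk rebuild initrans 10;
}}}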
! related articles
Reading deadlock trace files
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1528515465282
INITRANS Cause of deadlock
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6042872196732
DEADLOCK DETECTED - DELETE statement - how/why is it waiting in SHARE mode
https://community.oracle.com/thread/934554?start=15&tstart=0
https://www.google.com/search?q=Global+Enqueue+Services+Deadlock+detected+ITL&oq=Global+Enqueue+Services+Deadlock+detected+ITL&aqs=chrome..69i57j33l2.1528j0j0&sourceid=chrome&ie=UTF-8
https://www.google.com/search?q=oracle+delete+deadlock+ITL&oq=oracle+delete+deadlock+ITL&aqs=chrome..69i57j69i64l3.6839j1j0&sourceid=chrome&ie=UTF-8
https://www.google.com/search?q=oracle+Global+Enqueue+Services+Deadlock+detected&oq=oracle+Global+Enqueue+Services+Deadlock+detected&aqs=chrome..69i57j69i64l3.1124j0j0&sourceid=chrome&ie=UTF-8
http://arup.blogspot.com/2011/01/more-on-interested-transaction-lists.html
https://asktom.oracle.com/pls/asktom/asktom.search?tag=deadlock-enq-tx-allocate-itl-entry
http://russellcurtis.blogspot.com/2010/06/global-enqueue-services-deadlock.html
Global Enqueue Services Deadlock detected https://community.oracle.com/thread/3986541
! mos notes
How to Diagnose Different ORA-00060 Deadlock Types Using Deadlock Graphs in Trace (Doc ID 1559695.1)
Troubleshooting "ORA-00060 Deadlock Detected" Errors (Doc ID 62365.1)
How to Identify ORA-00060 Deadlock Types Using Deadlock Graphs in Trace (Doc ID 1507093.1)
Top 5 Database and/or Instance Performance Issues in RAC Environment (Doc ID 1373500.1)
EM 12c, EM 13c: Using The Deadlock Parser Tool For Gathering EM Repository Deadlock Information (Doc ID 2222769.1)
Troubleshooting "Global Enqueue Services Deadlock detected" (Doc ID 1443482.1)
! MOS
Troubleshooting "Global Enqueue Services Deadlock detected" (Doc ID 1443482.1)
{{{
1. TX deadlock in Exclusive(X) mode
2. TX deadlock in Share(S) mode
3. TM deadlock
4. Single resource deadlock for TX , TM, IV or LB
5. LB deadlock
6. Known Issues
7. Further Diagnosis
8. Deadlock Parser Tool (Enterprise Manager)
}}}
ORA-60 DEADLOCK DUE TO BITMAP INDEX IN RAC (Doc ID 1496403.1)
Global Enqueue Services Deadlock Detected During Table Statistics Gatherings and High Execution of Update sys.col_usage$ (Doc ID 2347644.1)
Goldengate Have A Lot Of "ORA-00060: Deadlock Detected" Errors In BODS(Exadata) Production Database (Doc ID 1634549.1)
Does Logminer Show All SQL Statements Involved In An ORA-60 Deadlock Error? (Doc ID 1108508.1)
https://www.hhutzler.de/blog/ges-locks-and-deadlocks/
https://recurrentnull.wordpress.com/2014/04/19/deadlock-parser-parsing-lmd0-trace-files/
https://jonathanlewis.wordpress.com/2013/02/22/deadlock-detection/
Bug 17165204 - Self deadlock while updating HCC compressed tables (Doc ID 17165204.8)
Global Enqueue Services Deadlock detected - Single resource deadlock: blocking enqueue which blocks itself, f 1 (Doc ID 973178.1)
deadlock on RAC https://community.oracle.com/thread/841950
https://oracle-base.com/articles/misc/deadlocks
https://groups.google.com/forum/#!topic/comp.databases.oracle.server/oemqz8ThbiU
http://olashowunmi.blogspot.com/2015/07/ora-00060-deadlock-detected-while.html
! articles
!! ASH XID
http://oraama.blogspot.com/2014/07/who-changed-data-in-table.html
Bug 5998048 - Deadlock on COMMIT updating AUD$ / Performance degradation when FGA is enabled (Doc ID 5998048.8)
ges deadlock xid https://www.google.com/search?q=ges+deadlock+xid&oq=ges+deadlock+xid&aqs=chrome..69i57j33.2542j0j0&sourceid=chrome&ie=UTF-8
Global Enqueue Services Deadlock detected https://community.oracle.com/thread/2585677
!! enqueue mode
https://logicalread.com/diagnosing-oracle-wait-for-tx-enqueue-mode-6-mc01/#.XQ1ir2RKjOQ
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-enqueues#TOC-enq:-TX---allocate-ITL-entry
!! KJUSERPR
Script to Collect RAC Diagnostic Information (racdiag.sql) (Doc ID 135714.1)
{{{
set numwidth 5
column state format a16 tru;
column event format a30 tru;
select dl.inst_id, s.sid, p.spid, dl.resource_name1,
       decode(substr(dl.grant_level,1,8),'KJUSERNL','Null','KJUSERCR','Row-S (SS)',
              'KJUSERCW','Row-X (SX)','KJUSERPR','Share','KJUSERPW','S/Row-X (SSX)',
              'KJUSEREX','Exclusive',grant_level) as grant_level,
       decode(substr(dl.request_level,1,8),'KJUSERNL','Null','KJUSERCR','Row-S (SS)',
              'KJUSERCW','Row-X (SX)','KJUSERPR','Share','KJUSERPW','S/Row-X (SSX)',
              'KJUSEREX','Exclusive',request_level) as request_level,
       decode(substr(dl.state,1,8),'KJUSERGR','Granted','KJUSEROP','Opening',
              'KJUSERCA','Canceling','KJUSERCV','Converting') as state,
       s.sid, sw.event, sw.seconds_in_wait sec
  from gv$ges_enqueue dl, gv$process p, gv$session s, gv$session_wait sw
 where blocker = 1
   and (dl.inst_id = p.inst_id and dl.pid = p.spid)
   and (p.inst_id = s.inst_id and p.addr = s.paddr)
   and (s.inst_id = sw.inst_id and s.sid = sw.sid)
 order by sw.seconds_in_wait desc;
}}}
!! select for update
https://hoopercharles.wordpress.com/2011/11/21/select-for-update-in-what-order-are-the-rows-locked/
-- job class
http://www.resolvinghere.com/sof/14337075.shtml
https://oracleexamples.wordpress.com/2009/05/03/run-jobs-in-a-particular-instance-using-services/
http://www.dba-oracle.com/job_scheduling/job_classes.htm
https://books.google.com/books?id=NaKoglGZoDwC&pg=PA259&lpg=PA259&dq=oracle+rac+job+class+tim+hall&source=bl&ots=9R2oiBljiw&sig=JERYlcNluxQ5YoTd7H0Nro5xczQ&hl=en&sa=X&ved=0ahUKEwij0Lygm-bLAhWCLB4KHba7AmUQ6AEILjAD#v=onepage&q=oracle%20rac%20job%20class%20tim%20hall&f=false
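The trick in the links above boils down to binding a job class to a service, as sketched below (the service and job names are made up):
{{{
BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS(
    job_class_name => 'BATCH_JC',
    service        => 'batch_svc');     -- jobs in this class run only where batch_svc runs
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'MY_BATCH_JOB',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN null; END;',
    job_class  => 'BATCH_JC',
    enabled    => TRUE);
END;
/
}}}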
automatic service relocation
http://www.oracledatabase12g.com/wp-content/uploads/2009/08/Session6.pdf
http://jarneil.wordpress.com/2010/11/05/11gr2-database-services-and-instance-shutdown/
https://forums.oracle.com/message/10479460
http://indico.cern.ch/getFile.py/access?resId=1&materialId=slides&confId=135581
http://bdrouvot.wordpress.com/2012/12/13/rac-one-node-avoid-automatic-database-relocation/
http://ilmarkerm.blogspot.com/2012/05/scipt-to-automatically-move-rac-11gr2.html <-- this is the script
http://www.ritzyblogs.com/OraTalk/PostID/108/Using-FAN-callouts-relocate-a-service-back
{{{
#!/bin/bash
#
# GI callout script to catch INSTANCE up event from clusterware and relocate services to preferred instance
# Copy or symlink this script to $GRID_HOME/racg/usrco
# Tested on Oracle Linux 5.8 with 11.2.0.3 Oracle Grid Infrastructure and 11.2.0.2 & 11.2.0.3 Oracle Database Enterprise Edition
# 2012 Ilmar Kerm <ilmar.kerm@gmail.com>
#
LOGFILE=/u02/app/oracle/grid_callout/log.txt
SCRIPTDIR=`dirname $0`

# Determine grid home
if [[ "${SCRIPTDIR:(-11)}" == "/racg/usrco" ]]; then
  CRS_HOME="${SCRIPTDIR:0:$(( ${#SCRIPTDIR} - 11 ))}"
  export CRS_HOME
fi

# Only execute script for INSTANCE events
if [ "$1" != "INSTANCE" ]; then
  exit 0
fi

STATUS=""
DATABASE=""
INSTANCE=""

# Parse input arguments
args=("$@")
for arg in ${args[@]}; do
  if [[ "$arg" == *=* ]]; then
    KEY=${arg%=*}
    VALUE=${arg#*=}
    case "$KEY" in
      status)
        STATUS="$VALUE"
        ;;
      database)
        DATABASE="$VALUE"
        ;;
      instance)
        INSTANCE="$VALUE"
        ;;
    esac
  fi
done

# If database, status and instance values are not set, then exit
# status must be up
if [[ -z "$DATABASE" || -z "$INSTANCE" || "$STATUS" != "up" ]]; then
  exit 0
fi

echo "`date`" >> "$LOGFILE"
echo "[$DATABASE][`hostname`] Instance $INSTANCE up" >> "$LOGFILE"

#
# Read database software home directory from clusterware
#
DBCONFIG=`$CRS_HOME/bin/crsctl status res ora.$DATABASE.db -f | grep "ORACLE_HOME="`
if [ -z "$DBCONFIG" ]; then
  exit 0
fi
declare -r "$DBCONFIG"
echo "ORACLE_HOME=$ORACLE_HOME" >> "$LOGFILE"

# Array function
in_array() {
  local hay needle=$1
  shift
  for hay; do
    [[ $hay == $needle ]] && return 0
  done
  return 1
}

#
# Read information about services
#
for service in `$CRS_HOME/bin/crsctl status res | grep -E "ora\.$DATABASE\.(.+)\.svc" | sed -rne "s/NAME=ora\.$DATABASE\.(.+)\.svc/\1/gip"`; do
  SERVICECONFIG=`$ORACLE_HOME/bin/srvctl config service -d $DATABASE -s $service`
  echo "Service $service" >> "$LOGFILE"
  if [[ "$SERVICECONFIG" == *"Service is enabled"* ]]; then
    echo " enabled" >> "$LOGFILE"
    PREFERRED=( `echo "$SERVICECONFIG" | grep "Preferred instances:" | sed -rne "s/.*\: ([a-zA-Z0-9]+)/\1/p" | tr "," "\n"` )
    #
    # Check if current instance is preferred for this service
    #
    if in_array "$INSTANCE" "${PREFERRED[@]}" ; then
      echo " preferred" >> "$LOGFILE"
      #
      # Check if service is already running on current instance
      #
      SRVSTATUS=`$ORACLE_HOME/bin/srvctl status service -d $DATABASE -s $service`
      if [[ "$SRVSTATUS" == *"is not running"* ]]; then
        #
        # if service is not running, then start it
        #
        echo " service stopped, starting" >> "$LOGFILE"
        $ORACLE_HOME/bin/srvctl start service -d "$DATABASE" -s "$service" >> "$LOGFILE"
      else
        #
        # Service is running, but is it running on preferred instance?
        #
        RUNNING=( `echo "$SRVSTATUS" | sed -rne "s/.* ([a-zA-Z0-9]+)/\1/p" | tr "," "\n"` )
        #echo "${RUNNING[@]} = ${PREFERRED[@]}"
        if ! in_array "$INSTANCE" "${RUNNING[@]}" ; then
          echo " not running on preferred $INSTANCE" >> "$LOGFILE"
          #
          # Find the first non-preferred running instance
          #
          CURRENT=""
          for inst in "${RUNNING[@]}"; do
            if ! in_array "$inst" "${PREFERRED[@]}" ; then
              CURRENT="$inst"
              break
            fi
          done
          #
          # Relocate
          #
          if [[ -n "$CURRENT" ]]; then
            echo " relocate $CURRENT -> $INSTANCE" >> "$LOGFILE"
            $ORACLE_HOME/bin/srvctl relocate service -d "$DATABASE" -s "$service" -i "$CURRENT" -t "$INSTANCE" >> "$LOGFILE"
          fi
        else
          #
          # Service is already running on preferred instance, no need to do anything
          #
          echo " running on preferred $INSTANCE" >> "$LOGFILE"
        fi
      fi
    fi
  fi
done
}}}
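Deploying the callout is just a copy into the usrco directory on every node (the script file name below is arbitrary):
{{{
# as the grid infrastructure owner, on each node
cp relocate_service.sh $GRID_HOME/racg/usrco/
chmod 755 $GRID_HOME/racg/usrco/relocate_service.sh
}}}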
http://www.freelists.org/post/oracle-l/monitor-rac-database-services,7 <-- nice scripts
http://coskan.wordpress.com/2010/12/29/how-to-monitor-services-on-11gr2/ <-- 11gR2
http://yong321.freeshell.org/oranotes/Service.txt <-- 10gR2, 11gR1
''rac11gr2_mon.pl'' http://db.tt/nKIlmSlV
Oracle RAC Database aware Applications - A Developer’s Checklist http://www.oracle.com/technetwork/database/availability/racdbawareapplications-1933522.pdf
Node Evictions on RAC, what to do and what to collect
http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&discussionID=16757466&gid=2922607&trk=EML_anet_qa_cmnt-cDhOon0JumNFomgJt7dBpSBA
11.1 OCR Backup Management - Best Practice Advice?
http://www.linkedin.com/groupItem?view=&srchtype=discussedNews&gid=2922607&item=28217802&type=member&trk=EML_anet_qa_cmnt-cDhOon0JumNFomgJt7dBpSBA
Can we have VIP's on all public network interfaces with diff network masks.
http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&discussionID=30266596&gid=2922607&trk=EML_anet_qa_cmnt-cDhOon0JumNFomgJt7dBpSBA
How does one ensure basic compliance with best practices for a Grid stack ?
http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&discussionID=16817767&gid=2922607&trk=EML_anet_qa_cmnt-cDhOon0JumNFomgJt7dBpSBA
Oracle Support Master Note for Real Application Clusters (RAC), Oracle Clusterware and Oracle Grid Infrastructure (Doc ID 1096952.1)
11gR2 Clusterware and Grid Home - What You Need to Know (Doc ID 1053147.1)
RAC Assurance Support Team: RAC Starter Kit and Best Practices (Generic)
Doc ID: 810394.1
RAC Assurance Support Team: RAC Starter Kit (Windows)
Doc ID: 811271.1
http://www.oracle.com/technology/products/database/clustering/certify/tech_generic_linux_new.html
RAC: Frequently Asked Questions
Doc ID: Note:220970.1
Smooth the Transition to Real Application Clusters
Doc ID: Note:206037.1
Step-By-Step Install of RAC with OCFS on Windows 2003 (9i)
Doc ID: Note:178882.1
How To Check The Certification Matrix for Real Application Clusters
Doc ID: Note:184875.1
-- PLANNING
Smooth the Transition to Real Application Clusters
Doc ID: 206037.1
RAC Assurance Support Team: RAC Starter Kit and Best Practices (Generic)
Doc ID: 810394.1
-- SETUP GUIDES
Metalink Note#: 178882.1 Step-By-Step Install of RAC with OCFS on Windows 2000
Metalink Note#: 236155.1 Step-By-Step Install of RAC with RAW Datafiles on Windows 2000
Metalink Note#: 254815.1 Step-By-Step Install of 9i RAC on Veritas DBE/AC and Solaris
Metalink Note#: 247216.1 Step-By-Step Install of RAC on Fujitsu PrimePower with PrimeCluster
Metalink Note#: 184821.1 Step-By-Step Install of 9.2.0.4 RAC on Linux
Note 184821.1 Step-By-Step Installation of 9.2.0.5 RAC on Linux
Metalink Note#: 182177.1 Step-By-Step Install of RAC on HP-UX
Metalink Note#: 175480.1 Step-By-Step Install of RAC on HP Tru64 Unix Cluster
Metalink Note#: 180012.1 Step-By-Step Install of RAC on HP OpenVMS Cluster
Metalink Note#: 199457.1 Step-By-Step Install of RAC on IBM AIX (RS/6000)
Where to find Step-By-Step RAC setup guides:
RAC Step-By-Step Installation on IBM RS/6000 see Note 199457.1
RAC Step-By-Step Installation on LINUX see Note 184821.1
RAC Step-By-Step Installation on COMPAQ OPEN VMS see Note 180012.1
RAC Step-By-Step Installation on SUN CLUSTER V3 see Note 175465.1
RAC Step-By-Step Installation on WINDOWS 2000 or NT see Note 178882.1
RAC Step-By-Step Installation on HP TRU64 UNIX CLUSTER see Note 175480.1
RAC Step-By-Step Installation on HP-UX see Note 182177.1
-- GRID INFRASTRUCTURE
Oracle Support Master Note for Real Application Clusters (RAC), Oracle Clusterware and Oracle Grid Infrastructure (Doc ID 1096952.1)
11gR2 Clusterware and Grid Home - What You Need to Know (Doc ID 1053147.1)
11gR2 Install (Non-RAC): Understanding New Changes With All New 11.2 Installer [ID 884232.1]
-- RAC ON WINDOWS
Oracle RAC Clusterware Installation on Windows Commonly Missed / Misunderstood Prerequisites (Doc ID 388730.1)
-- TROUBLESHOOTING
Remote Diagnostic Agent (RDA) 4 - RAC Cluster Guide (Doc ID 359395.1) - RDA RAC
CRS 10gR2/ 11gR1/ 11gR2 Diagnostic Collection Guide [ID 330358.1]
RAC Survival Kit: Troubleshooting a Hung Database
Doc ID: Note:206567.1
Data Gathering for Troubleshooting RAC Issues
Doc ID: Note:556679.1
RAC: Ave Receive Time for Current Block is Abnormally High in Statspack
Doc ID: 243593.1
Doc ID: 563566.1 gc lost blocks diagnostics
POOR RAC-INTERCONNECT PERFORMANCE AFTER UPGRADE FROM RHEL3 TO RHEL4/OEL4
Doc ID: 400959.1
EXCESSIVE GETS FOR SHARED POOL SIMULATOR LATCH causing hang/performance problem
Doc ID: 563149.1
Rac Database Is Slow on Windows
Doc ID: 271254.1
Note 213416.1 - RAC: Troubleshooting Windows NT/2000 Service Hangs
Intermittent high elapsed times reported on wait events in AMD-Based systems Or using NTP
Doc ID: 828523.1
'Diag Dummy Wait' On Rac Instance
Doc ID: 360815.1
-- CLUSTER HEALTH MONITOR
Introducing Cluster Health Monitor (IPD/OS) (Doc ID 736752.1)
How to Monitor, Detect and Analyze OS and RAC Resource Related Degradation and Failures on Windows
Doc ID: 810915.1
How to install Oracle Cluster Health Monitor (former IPD/OS) on Windows
Doc ID: 811151.1
How to Collect 'Cluster Health Monitor' (former IPD/OS) Data on Windows Platform for Oracle Support (Doc ID 847485.1)
-- COE TOOLS
Subject: Procwatcher: Script to Monitor and Examine Oracle and CRS Processes
Doc ID: Note:459694.1 Type: BULLETIN
-- PERFORMANCE
Oracle RAC Tuning Tips by Joel Goodman
http://oukc.oracle.com/static05/opn/oracle9i_database/49466/040908_49466_source/index.htm
Understanding RAC Internals by Barb Lundhild
http://oukc.oracle.com/static05/opn/oracle9i_database/40168/053107_40168_source/index.htm
http://www.oracle.com/technology/tech/java/newsletter/articles/oc4j_data_sources/oc4j_ds.htm
-- MULTIPATHING
Subject: Oracle ASM and Multi-Pathing Technologies
Doc ID: Note:294869.1 Type: WHITE PAPER
Last Revision Date: 17-JAN-2008 Status: PUBLISHED
-- ebusiness suite
Configuring Oracle Applications Release 12 with 10g R2 RAC
Doc ID: Note:388577.1
-- AIX
Status of Certification of Oracle Clusterware with HACMP 5.3 & 5.4
Doc ID: Note:404474.1
-- ADD NODE
Adding a Node to a 10g RAC Cluster (10g R1)
Doc ID: Note:270512.1
Unable To Start Asm Instance After Adding Node To Rac Cluster
Doc ID: Note:399889.1
-- DELETE NODE
Removing a Node from a 10g RAC Cluster (only applicable to 10gR1)
Doc ID: Note:269320.1
# on B.7, when removing nodeapps, it will not cleanly remove the VIP
# also, on B.12 you have to run it on all the remaining RAC nodes
How To Remove a 10g RAC Node On Windows?
Doc ID: Note:603637.1
-- CLONE
Manually Cloning Oracle Applications Release 11i with 10g or 11g RAC
Doc ID: 760637.1
-- OCR / VOTING DISK
How to recreate OCR/Voting disk accidentally deleted
Doc ID: Note:399482.1
How to move the OCR location?
- stop the CRS stack on all nodes using "init.crs stop"
- edit /var/opt/oracle/ocr.loc on all nodes and set ocrconfig_loc to the new OCR device
- restore from one of the automatic physical backups using ocrconfig -restore
- run ocrcheck to verify
- reboot to restart the CRS stack
- additional information can be found at:
How to Restore a Lost Voting Disk in 10g (Doc ID 279793.1)
OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE), including moving from RAW Devices to Block Devices. (Doc ID 428681.1)
RAC on Windows: How To Reinitialize the OCR and Vote Disk (without a full reinstall of Oracle Clusterware) [ID 557178.1]
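For reference, the basic ocrconfig commands behind the move/restore steps above (the backup file path is hypothetical; run as root with the CRS stack down):
{{{
ocrconfig -showbackup                                # list the automatic backups
ocrconfig -restore $CRS_HOME/cdata/crs/backup00.ocr
ocrcheck                                             # verify integrity and location
}}}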
-- REINSTALL
How to Reinstall CRS Without Disturbing Installed Oracle RDBMS Home(s) [ID 456021.1]
How To clean up after a Failed (or successful) Oracle Clusterware Installation on Windows [ID 341214.1]
RAC on Windows: How To Reinitialize the OCR and Vote Disk (without a full reinstall of Oracle Clusterware) [ID 557178.1]
WIN: Manually Removing all Oracle Components on Microsoft Windows Platforms [ID 124353.1]
-- CLUSTERWARE
Note 337737.1 Oracle Clusterware - ASM - Database Version Compatibility
Note 363254.1 Applying one-off Oracle Clusterware patches in a mixed version home environment
10g RAC: How to Clean Up After a Failed CRS Install
Doc ID: Note:239998.1
10g RAC: Troubleshooting CRS Root.sh Problems
Doc ID: Note:240001.1
Oracle Clusterware: Components installed.
Doc ID: 556976.1
-- VIP
Oracle 10g VIP (Virtual IP) changes in Oracle 10g 10.1.0.4
Doc ID: Note:296878.1
How to Configure Virtual IPs for 10g RAC
Doc ID: Note:264847.1
VIPCA cannot be run under RHEL/OEL 5
Doc ID: Note:577298.1
Modifying the VIP or VIP Hostname of a 10g Oracle Clusterware Node
Doc ID: Note:276434.1
Should the Database Instance Be Brought Down after VIP service crashes?
Doc ID: Note:391454.1
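A sketch of the usual 10g VIP change flow (node, address, and interface values here are made up; follow Note 276434.1 for the real procedure):
{{{
srvctl stop instance -d orcl -i orcl1
srvctl stop nodeapps -n node1
# as root
srvctl modify nodeapps -n node1 -A 192.168.1.51/255.255.255.0/eth0
srvctl start nodeapps -n node1
srvctl start instance -d orcl -i orcl1
}}}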
-- SCAN
How to Setup SCAN Listener and Client for TAF and Load Balancing [Video] (Doc ID 1188736.1)
11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained (Doc ID 887522.1)
http://oracle-dba-yi.blogspot.com/2011/04/11gr2-scan-faq.html
-- SCAN add another listener
How to Configure A Second Listener on a Separate Network in 11.2 Grid Infrastructure [ID 1063571.1]
-- SCAN just started one listener - ADD SCAN LISTENER
How to start the SCAN listener on new 11Gr2 install?
http://kr.forums.oracle.com/forums/thread.jspa?threadID=1120482
How to add SCAN LISTENER in 11gR2 - http://learnwithme11g.wordpress.com/2010/09/03/how-to-add-scan-listener-in-11gr2-2/
How to update the IP address of the SCAN VIP resources (ora.scan<n>.vip) (Doc ID 952903.1)
How to Modify SCAN Setting or SCAN Listener Port after Installation (Doc ID 972500.1)
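Changing the SCAN listener port boils down to something like this sketch (11.2 syntax; the port number is arbitrary):
{{{
srvctl modify scan_listener -p 1522
srvctl stop scan_listener
srvctl start scan_listener
srvctl config scan_listener
# the database REMOTE_LISTENER must match, e.g.:
#   alter system set remote_listener='<scan-name>:1522';
}}}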
-- SCAN name resolution
PRVF-4664 PRVF-4657: Found inconsistent name resolution entries for SCAN name (Doc ID 887471.1)
-- SCAN reset to 1521 default port
WebLogic Server and Oracle 11gR2 JDBC Driver SCAN feature [ID 1304816.1]
How to integrate a 10g/11gR1 RAC database with 11gR2 clusterware (SCAN) [ID 1058646.1]
How to Configure A Second Listener on a Separate Network in 11.2 Grid Infrastructure [ID 1063571.1]
Changing Default Listener Port Number [ID 359277.1]
How to Create Multiple Oracle Listeners and Multiple Listener Addresses [ID 232010.1]
Listening Port numbers [ID 99721.1]
How to Modify SCAN Setting or SCAN Listener Port after Installation [ID 972500.1]
Using the TNS_ADMIN variable and changing the default port number of all Listeners in an 11.2 RAC for an 11.2, 11.1, and 10.2 Database [ID 1306927.1] <-- GOOD STUFF
How to update the IP address of the SCAN VIP resources (ora.scan.vip) [ID 952903.1]
How to Troubleshoot Connectivity Issue with 11gR2 SCAN Name [ID 975457.1]
ORA-12545 or ORA-12537 While Connecting to RAC through SCAN name [ID 970619.1]
Tracing Techniques for Listeners in 11.2 RAC Environments [ID 1325284.1]
SCAN Address Cannot Resolve Instance Name ORA-12521 [ID 1235773.1]
Top 5 Issues That Cause Troubles with Scan VIP and Listeners [ID 1373350.1]
ORA-12541 intermittently with DBLinks using SCAN listener [ID 1269630.1]
Remote Clients Receive ORA-12160 or ORA-12561 Errors Connecting To 11GR2 RAC Via SCAN Listeners [ID 1291985.1]
11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained [ID 887522.1]
Problem: RAC Metrics: Unable to get E-mail Notification for some metrics against Cluster Databases [ID 403886.1]
Thread: Multiple listener on RAC 11.2 -> https://forums.oracle.com/forums/thread.jspa?threadID=972062
https://sites.google.com/site/connectassysdba/oracle-rac-11-2-multiple-listener
How to Add SCAN Listener in 11gR2 RAC http://myoracle4u.blogspot.com/2011/07/configure-scan-listener-in-11gr2-rac.html
-- SCAN PERFORMANCE
Scan Listener, Queuesize, SDU, Ports [ID 1292915.1]
-- JUMBO FRAMES
Recommendation for the Real Application Cluster Interconnect and Jumbo Frames
Doc ID: 341788.1
Tuning Inter-Instance Performance in RAC and OPS
Doc ID: 181489.1
-- RDS / INFINIBAND
Doc ID: 751343.1 RAC Support for RDS Over Infiniband
Doc ID: 368464.1 How to Setup IPMP as Cluster Interconnect
Doc ID: 283107.1 Configuring Solaris IP Multipathing (IPMP) for the Oracle 10g VIP
-- INTERCONNECT
How to Change Interconnect/Public Interface IP Subnet in a 10g Cluster
Doc ID: Note:283684.1
Recommendation for the Real Application Cluster Interconnect and Jumbo Frames
Doc ID: 341788.1
Tuning Inter-Instance Performance in RAC and OPS
Doc ID: 181489.1
How To Track Dead Connection Detection(DCD) Mechanism Without Enabling Any Client/Server Network Tracing
Doc ID: 438923.1
-- CHANGE IP ADDRESS
How to Change Interconnect/Public Interface IP or Subnet in Oracle Clusterware
Doc ID: 283684.1
Modifying the VIP or VIP Hostname of a 10g or 11g Oracle Clusterware Node
Doc ID: 276434.1
Considerations when Changing the Database Server Name or IP
Doc ID: 734559.1
Preparing For Changing the IP Addresses Of Oracle Database Servers
Doc ID: 363609.1
Instance Not Coming Up On Second Node In RAC 'Timeout when connecting'
Doc ID: 351914.1
Warning Could Not Be Translated To A Network Address
Doc ID: 464986.1
The Sqlnet Files That Need To Be Changed/Checked During Ip Address Change Of Database Server
Doc ID: 274476.1
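The interconnect/public subnet changes in the notes above revolve around oifcfg; a quick sketch (the interface and subnet are made up):
{{{
oifcfg getif                                        # show current registrations
oifcfg delif -global eth1
oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
}}}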
EMCA
http://download.oracle.com/docs/cd/B19306_01/em.102/b40002/structure.htm#sthref92
APPLICATION SERVER
http://download.oracle.com/docs/cd/B10464_05/core.904/b10376/host.htm#sthref513
-- CHANGE HOSTNAME
http://www.pythian.com/news/482/changing-hostnames-in-oracle-rac
RAC on Windows: Oracle Clusterware Services Do Not Start After Changing Username or Domain [ID 557273.1]
-- BONDING
Configuring Linux for the Oracle 10g VIP or private interconnect using bonding driver
Doc ID: 298891.1
Setting Up Bonding in SLES 9
Doc ID: 291962.1
Setting Up Bonding in Suse SLES8
Doc ID: 291958.1
-- MIGRATION
Migrating to RAC using Data Guard
Doc ID: Note:273015.1
-- CRS_STAT2
CRS and 10g Real Application Clusters
Doc ID: Note:259301.1
WINDOWS CRS_STAT SCRIPT TO DISPLAY LONG NAMES CORRECTLY
Doc ID: Note:436067.1
--
Bug 5128575 - RAC install of 10.2.0.2 does not update libknlopt.a on all nodes
Doc ID: Note:5128575.8
TROUBLESHOOTING - ASM disk not found/visible/discovered issues
Doc ID: Note:452770.1
Unable To Mount Or Drop A Diskgroup, Fails With Ora-15032 And Ora-15063
Doc ID: Note:353423.1
ASM Diskgroup Failed to Mount On Second Node ORA-15063
Doc ID: Note:731075.1
Diskgroup Was Not Mounted After Created ORA-15063 and ORA-15032
Doc ID: Note:467702.1
Disk has been offline In Asm Diskgroup and has 2 entries in v$asm_disk
Doc ID: Note:393958.1
Adding The Label To ASMLIB Disk Using 'oracleasm renamedisk' Command
Doc ID: Note:280650.1
Ora-15063: Asm Discovered An Insufficient Number Of Disks For Diskgroup using NetApp Storage
Doc ID: Note:577526.1
Cannot Start Asm Ora-15063/ORA-15183
Doc ID: Note:340519.1
NEW CREATED DISKGROUP IS NOT VISIBLE ON SECOND NODE - USING NFS AND ASMLIB
Doc ID: Note:372276.1
Cannot Find Exact Kernel Version Match For ASMLib (Workaround using oracleasm_debug_link tool)
Doc ID: Note:462618.1
Heartbeat/Voting/Quorum Related Timeout Configuration for Linux, OCFS2, RAC Stack to avoid unnecessary node fencing, panic and reboot
Doc ID: Note:395878.1
Reconfiguring the CSS disktimeout of 10gR2 Clusterware for Proper LUN Failover of the Dell MD3000i iSCSI Storage
Doc ID: Note:462616.1
10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout
Doc ID: Note:284752.1
CSS Timeout Computation in Oracle Clusterware
Doc ID: Note:294430.1
How to Increase CSS Misscount in single instance ASM installations
Doc ID: Note:729878.1
Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5
Doc ID: Note:564580.1
Steps to Create Test RAC Setup On Oracle VM
Doc ID: Note:742603.1
Requirements For Installing Oracle 10gR2 On RHEL/OEL 5 (x86)
Doc ID: Note:419646.1
Prerequisite Checks Fail When Installing 10.2 On Red Hat 5 (RHEL5)
Doc ID: Note:456634.1
Additional steps to install 10gR2 RAC on IBM zSeries Based Linux (SLES10)
Doc ID: Note:471165.1
10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA / SRVCTL / OUI Failures)
Doc ID: Note:414163.1
Oracle Clusterware (formerly CRS) Rolling Upgrades
Doc ID: Note:338706.1
10.2.0.X CRS Bundle Patch Information
Doc ID: Note:405820.1
-- CSS MISCOUNT
How to Increase CSS Misscount in single instance ASM installations
Doc ID: Note:729878.1
10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout
Doc ID: Note:284752.1
Subject: CSS Timeout Computation in RAC 10g (10g Release 1 and 10g Release 2)
Doc ID: Note:294430.1
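For reference, checking and changing misscount looks like the sketch below (the value is arbitrary; follow Note 284752.1 for the safe procedure):
{{{
crsctl get css misscount
crsctl set css misscount 60
}}}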
-- CONVERT SINGLE INSTANCE TO RAC
How to Convert 10g Single-Instance database to 10g RAC using Manual Conversion procedure
Doc ID: 747457.1
How To Convert A Single Instance Database To RAC In A Cluster File System Configuration (Doc ID 208375.1)
http://avdeo.com/2010/02/22/converting-a-single-instance-database-to-rac-manually-oracle-rac-10g/
http://jaffardba.blogspot.com/2011/03/converting-your-single-instance.html
http://onlineappsdba.com/index.php/2009/06/24/single-instance-to-rac-conversion/
http://download.oracle.com/docs/cd/B28359_01/install.111/b28264/cvrt2rac.htm#BABBBDDB <-- 11.1
http://download.oracle.com/docs/cd/E11882_01/install.112/e17214/cvrt2rac.htm#BABBAHCH <-- 11.2
-- CONVERT TO SEPARATE NODES
Converting a RAC Environment to Separate Node Environments
Doc ID: 377347.1
-- SHARED HOME
RAC: How To Move From Shared To Non-Shared Homes [ID 605640.1]
-- LOCAL, REMOTE LISTENER
How To Find Out The Example of The LOCAL_LISTENER and REMOTE_LISTENER Defined In The init.ora When configuring the 11i or R12 on RAC ?
Doc ID: 744508.1
Check LOCAL_LISTENER if you run RAC!
http://tardate.blogspot.com/2007/06/check-locallistener-if-you-run-rac.html
-- RAC ASM
How to Convert a Single-Instance ASM to Cluster ASM
Doc ID: 452758.1
-- CLEAN UP ASM INSTALL, UNINSTALL
How to cleanup ASM installation (RAC and Non-RAC)
Doc ID: 311350.1
-- RMAN RAC backup
HowTo Restore RMAN Disk backups of RAC Database to Single Instance On Another Node
Doc ID: 415579.1
-- TAF, FCF
How To Configure Server Side Transparent Application Failover [ID 460982.1]
How to Configure Client Side Transparent Application Failover with Preconnect Option [ID 802434.1]
Understanding Transparent Application Failover (TAF) and Fast Connection Failover (FCF) [ID 334471.1]
Fast Connection Failover (FCF) Test Client Using 11g JDBC Driver and 11g RAC Cluster
Doc ID: 566573.1
Oracle 10g VIP (Virtual IP) changes in Oracle 10g 10.1.0.4
Doc ID: 296878.1
Can the JDBC Thin Driver Do Failover by Specifying FAILOVER_MODE?
Doc ID: 465423.1
Does JBOSS Support Fast Connection Failover (FCF) to a 10g RAC cluster?
Doc ID: 738122.1
How To Verify And Test Fast Connection Failover (FCF) Setup From a JDBC Thin Client Against a 10.2.x RAC Cluster
Doc ID: 433827.1
Failover Issues and Limitations [Connect-time failover and TAF]
Doc ID: 97926.1
How To Use TAF With Instant Client
Doc ID: 428515.1
Troubleshooting TAF Issues in 10g RAC
Doc ID: 271297.1
How To Configure Server Side Transparent Application Failover
Doc ID: 460982.1
What is the Overhead when using TAF Failover Select Type?
Doc ID: 119537.1
Which Oracle Client versions will connect to and work against which version of the Oracle Database?
Doc ID: 172179.1
Configuration of Load Balancing and Transparent Application Failover
Doc ID: 226880.1
Oracle Net80 TAF Enabled Alias Fails With ORA-12197
Doc ID: 284273.1 Type: PROBLEM
Client Load Balancing and Failover Using Description and Address_List
Doc ID: 69010.1
ADDRESS_LISTs and Oracle Net Failover
Doc ID: 67136.1
Load Balancing and DESCRIPTION_LISTs
Doc ID: 67137.1
-- USER EQUIVALENCE
How to Configure SSH for User Equivalence
Doc ID: 372795.1
Configuring Ssh For Rac Installations
Doc ID: 308898.1
How To Configure SSH for a RAC Installation
Doc ID: 300548.1
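A minimal user equivalence setup sketch (the host names are made up; repeat the key distribution across all node pairs):
{{{
# on each node, as oracle
ssh-keygen -t rsa
# collect every node's id_rsa.pub into one authorized_keys, then copy it to all nodes
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys node2:~/.ssh/
# verify passwordless access in both directions
ssh node2 date
}}}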
-- NTP
Thread: PRVF-5424 : Clock time offset check failed
https://forums.oracle.com/forums/thread.jspa?threadID=2148827
-- RAC MICROSOFT - BLUE SCREEN
Why do we get a Blue Screen Caused By Orafencedrv.sys
Doc ID: 337784.1
ORACLE PROCESSES ENCOUNTERING (OS 1117) ERRORS ON WINDOWS 2003
Doc ID: 444803.1
http://www.orafaq.com/forum/t/120114/2/
http://forums11.itrc.hp.com/service/forums/bizsupport/questionanswer.do?admit=109447626+1255362432406+28353475&threadId=999460
-- CRS REBOOTS, NODE EVICTION
http://www.rachelp.nl/index_kb.php?menu=articles&actie=show&id=25
Troubleshooting CRS Reboots
Doc ID: 265769.1
Data Gathering for Troubleshooting RAC Issues
Doc ID: 556679.1
CSS Timeout Computation in Oracle Clusterware
Doc ID: 294430.1
Corrupt Packets on the Network causes CSS to REBOOT NODE
Doc ID: 400778.1
10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout
Doc ID: 284752.1
Using Diagwait as a diagnostic to get more information for diagnosing Oracle Clusterware Node evictions <--- SETUP THIS ON ALL RAC ENV, REQUIRED!
Doc ID: 559365.1
Hangcheck-Timer Module Requirements for Oracle 9i, 10g, and 11g RAC on Linux
Doc ID: 726833.1
Frequent Instance Eviction in 9i and/or Node Eviction in 10g
Doc ID: 461662.1
ORA-27506 Results in ORA-29740 and Instance Evictions on Windows
Doc ID: 342708.1
My references - client that runs on 3 node RAC having node evictions
{{{
Common reasons for OCFS2 o2net Idle Timeout (Doc ID 734085.1) <-- cause of the restart
Troubleshooting 10g and 11.1 Clusterware Reboots (Doc ID 265769.1) <-- if then else
OCFS2 Fencing, Network, and Disk Heartbeat Timeout Configuration (Doc ID 457423.1)
OCFS2 - FREQUENTLY ASKED QUESTIONS (Doc ID 391771.1)
Root.sh Unable To Start CRS On Second Node (Doc ID 369699.1)
Troubleshooting TAF Issues in 10g RAC (Doc ID 271297.1)
RAC instabilities due to firewall (netfilter/iptables) enabled on the cluster interconnect (Doc ID 554781.1)
Troubleshooting Oracle Clusterware Root.sh Problems (Doc ID 240001.1)
Corrupt Packets on the Network causes CSS to REBOOT NODE (Doc ID 400778.1)
Linux: RAC Instance Halts For Several Minutes When Rebooting Other Node (Doc ID 263477.1)
Irregular ClssnmPollingThread Missed Checkins Messages in CSSD log (Doc ID 372463.1)
Ocssd.Bin Process Consumes 100% Cpu (Doc ID 730148.1)
CRS DOES NOT STARTUP WITHIN 600 SECONDS AFTER 10.2.0.3 BUNDLE3 (Doc ID 744573.1)
Frequent Instance Eviction in 9i and/or Node Eviction in 10g (Doc ID 461662.1)
Resolving Instance Evictions on Windows Platforms (Doc ID 297498.1)
Using Diagwait as a diagnostic to get more information for diagnosing Oracle Clusterware Node evictions (Doc ID 559365.1)
CSS Timeout Computation in Oracle Clusterware (Doc ID 294430.1)
Node Eviction with IPCSOCK_SEND FAILED WITH STATUS: 10054 Errors (Doc ID 243547.1)
How to Collect 'Cluster Health Monitor' (former IPD/OS) Data on Windows Platform for Oracle Support (Doc ID 847485.1)
Linux: OCSSD Reboots Nodes Randomly After Application of 10.2.0.4 Patchset and in 11g Environments (Doc ID 731599.1)
10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout (Doc ID 284752.1)
Using Bonded Network Device Can Cause OCFS2 to Detect Network Outage (Doc ID 423183.1)
}}}
-- RAC SPFILE
Recreating the Spfile for RAC Instances Where the Spfile is Stored in ASM
Doc ID: 554120.1
-- RAC ORA-12545, ORA-12533, ORA-12514, Ora-12520
Troubleshooting ORA-12545 / TNS-12545 Connect failed because target host or object does not exist <-- good stuff, step by step
Doc ID: 553328.1
Troubleshooting Guide TNS-12535 or ORA-12535 or ORA-12170 Errors
Doc ID: 119706.1
RAC Connection Redirected To Wrong Host/IP ORA-12545
Doc ID: 364855.1
How to Set the LOCAL_LISTENER Parameter to Resolve the ORA-12514
Doc ID: 362787.1
Intermittent TNS - 12533 Error While Connecting to 10g RAC Database
Doc ID: 472394.1
Ora-12520 When listeners on VIP in 10g RAC Setup
Doc ID: 342419.1
Dispatchers Are Not Registered With Listener Running On Default Port 1521
Doc ID: 465881.1
RAC Instance Status Shows Ready Zero Handlers For The Service
Doc ID: 419824.1
RAC Connection Redirected To Wrong Host/IP ORA-12545
Doc ID: 364855.1
Database Will Not Register With Listener configured on IP instead of Hostname ORA-12514
Doc ID: 365314.1
How MTS and DNS are related, MTS_DISPATCHER and ORA-12545
Doc ID: 131658.1
TNS-12500 errors using 64-bit Listener for 32-bit instance
Doc ID: 121091.1
Ora-12520 When listeners on VIP in 10g RAC Setup
Doc ID: 342419.1
Ora-12545 Frequent Client Connection Failure - 10g Standard Rac
Doc ID: 333159.1
-- LOCAL LISTENER
Init.ora Parameter "LOCAL_LISTENER" Reference Note
Doc ID: 47339.1
How To Find Out The Example of The LOCAL_LISTENER and REMOTE_LISTENER Defined In The init.ora When configuring the 11i or R12 on RAC ?
Doc ID: 744508.1
DATABASE WON'T START AFTER CHANGING THE LOCAL_LISTENER
Doc ID: 415362.1
RAC Instance Status Shows Ready Zero Handlers For The Service
Doc ID: 419824.1
Dispatchers Are Not Registering With the Listener When LOCAL_LISTENER is Set Correctly
Doc ID: 465888.1
How to Configure Local_Listener Parameter Without Mentioning the Host Name or IP Address in Cluster Environment
Doc ID: 285191.1
LOCAL_LISTENER SPECIFICATION FAILS ON STARTUP OF RAC
Doc ID: 253075.1
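A sketch of setting LOCAL_LISTENER per instance (the VIP host name and SID are made up):
{{{
alter system set local_listener=
  '(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))' sid='orcl1';
alter system register;
}}}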
-- LOAD BALANCING AND TAF
10g & 11g :Configuration of TAF(Transparent Application Failover) and Load Balancing <-- the one I used for Oberthur
Doc ID: 453293.1
Understanding and Troubleshooting Instance Load Balancing <-- good stuff, with shell scripts included
Doc ID: 263599.1
Troubleshooting TAF Issues in 10g RAC <-- good stuff, with RELOCATE command
Doc ID: 271297.1
Note 259301.1 - CRS and 10g Real Application Clusters
Configuration of Load Balancing and Transparent Application Failover
Doc ID: 226880.1
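A typical client-side TAF + load balancing alias looks like this sketch (the host and service names are made up):
{{{
ORCL_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = taf_svc)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 3))
    )
  )
}}}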
-- SERVICES
Issues Affecting Automatic Service Registration
Doc ID: 235562.1
Service / Instance Not Registering with Listener
Doc ID: 433693.1
http://mvallath.wordpress.com/2010/04/29/coexist-10gr2-and-11gr2-rac-db-on-the-same-cluster-stumbling-blocks-2/
RAID5 and RAID10 comparison on EMC VNX storage
https://www.evernote.com/shard/s48/sh/e5f58be7-309f-42a9-974e-f67fd20ad4d1/9f2f546330bb300d40ce04c54c3ccac2
All tests produced by oriontoolkit
https://www.dropbox.com/s/jzcl5ydt29mvw69/PerformanceAndTroubleshooting/oriontoolkit.zip
How the relative file number and block number are calculated from the RDBA (relative data block address)
http://translate.google.com/translate?sl=auto&tl=en&u=http://blogs.oracle.com/toddbao/2010/11/rdba.html
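DBMS_UTILITY can split an RDBA into its relative file# and block#; a sketch (the rdba hex value here is made up):
{{{
select dbms_utility.data_block_address_file(to_number('0100009A','XXXXXXXX'))  rel_fno,
       dbms_utility.data_block_address_block(to_number('0100009A','XXXXXXXX')) block#
  from dual;
}}}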
https://blogs.oracle.com/fmwinstallproactive/entry/how_to_use_rda_to
taking thread dumps http://middlewaremagic.com/weblogic/?p=823
./rda.sh -M <-- RDA man page
./rda.sh -M RAC <-- RDA man page for each individual module
./rda.sh -cv <-- check directory structure if intact
perl -v <-- verify perl version
./rda.sh -L Test <-- List the test modules
./rda.sh -T ssh <-- test the ssh connectivity
./rda.sh -L profile <-- List the RDA profiles
./rda.sh -p Rac <-- runs the RDA RAC profile
! RDA for multinode collection - RAC
1)
RSA and DSA keys must be loaded (see the key-loading sketch after this list); otherwise it will still prompt you during the setup_cluster part
2)
./rda.sh -vX Remote setup_cluster <-- remote data collection initial setup
3)
./rda.sh -vX Remote list <-- list the nodes that have been configured
4)
./rda.sh -v -e REMOTE_TRACE=1 <-- run the RDA for multinode collection, REMOTE_TRACE shows more details on the screen
5)
''to re-run.. re-execute all commands (1,2,3,4)''
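For step 1, loading the keys into an agent looks like this (key file paths assumed):
{{{
eval `ssh-agent`
ssh-add ~/.ssh/id_rsa
ssh-add ~/.ssh/id_dsa
}}}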
! Sample output of multinode collection - RAC
''GOOD OUTPUT''
{{{
[oracle@racnode1 rda]$ ./rda.sh -vX Remote setup_cluster
------------------------------------------------------------------------------
Requesting common information
------------------------------------------------------------------------------
Where RDA should be installed on the remote nodes?
Hit 'Return' to accept the default (/u01/rda/rda)
>
Where setup files and reports should be stored on the remote nodes?
Hit 'Return' to accept the default (/u01/rda/rda)
>
Should an alternative login be used to execute remote requests (Y/N)?
Hit 'Return' to accept the default (N)
>
Enter an Oracle User ID (userid only) to view DBA_ and V$ tables. If RDA will
be run under the Oracle software owner's ID, enter a '/' here, and select Y at
the SYSDBA prompt to avoid being prompted for the database password at
runtime.
Hit 'Return' to accept the default (system)
> /
Is '/' a sysdba user (will connect as sysdba) (Y/N)?
Hit 'Return' to accept the default (N)
> y
------------------------------------------------------------------------------
Requesting information for node racnode1
------------------------------------------------------------------------------
Enter the Oracle Home to be analyzed on the node racnode1
Hit 'Return' to accept the default (/u01/app/oracle/product/10.2.0/db_1)
>
Enter the Oracle SID to be analyzed on the node racnode1
Hit 'Return' to accept the default (orcl1)
>
------------------------------------------------------------------------------
Requesting information for node racnode2
------------------------------------------------------------------------------
Enter the Oracle Home to be analyzed on the node racnode2
Hit 'Return' to accept the default (/u01/app/oracle/product/10.2.0/db_1)
>
Enter the Oracle SID to be analyzed on the node racnode2
Hit 'Return' to accept the default (orcl2)
>
------------------------------------------------------------------------------
RAC Setup Summary
------------------------------------------------------------------------------
Nodes:
. NOD001 racnode1/orcl1
. NOD002 racnode2/orcl2
2 nodes found
-------------------------------------------------------------------------------
S909RDSP: Produces the Remote Data Collection Reports
-------------------------------------------------------------------------------
Updating the setup file ...
[oracle@racnode1 rda]$
[oracle@racnode1 rda]$
[oracle@racnode1 rda]$
[oracle@racnode1 rda]$ ./rda.sh -vX Remote list
Nodes:
. NOD001 racnode1/orcl1
. NOD002 racnode2/orcl2
2 nodes found
[oracle@racnode1 rda]$ ./rda.sh -v -e REMOTE_TRACE=1
Collecting diagnostic data ...
-------------------------------------------------------------------------------
RDA Data Collection Started 26-Nov-2010 11:14:01 AM
-------------------------------------------------------------------------------
Processing Initialization module ...
Processing CFG module ...
Processing OCM module ...
Processing REXE module ...
NOD001> Setting up ...
NOD002> bash: /u01/rda/rda/rda.sh: No such file or directory
NOD001> Collecting diagnostic data ...
NOD001> -------------------------------------------------------------------------------
NOD001> RDA Data Collection Started 26-Nov-2010 11:14:07
NOD001> -------------------------------------------------------------------------------
NOD001> Processing Initialization module ...
NOD001> Processing CFG module ...
NOD001> Processing Sampling module ...
NOD001> Processing OCM module ...
NOD001> Processing OS module ...
NOD002> Setting up ...
NOD002> Collecting diagnostic data ...
NOD002> -------------------------------------------------------------------------------
NOD002> RDA Data Collection Started 26-Nov-2010 11:14:14 AM
NOD002> -------------------------------------------------------------------------------
NOD002> Processing Initialization module ...
NOD002> Processing CFG module ...
NOD002> Processing Sampling module ...
NOD002> Processing OCM module ...
NOD002> Processing OS module ...
NOD002> Processing PROF module ...
NOD002> Processing PERF module ...
NOD001> Processing PROF module ...
NOD001> Processing PERF module ...
NOD002> Processing NET module ...
NOD002> Processing ONET module ...
NOD002> Listener checks may take a few minutes. please be patient...
NOD002> Processing listener LISTENER_RACNODE2
NOD002> Processing Oracle installation module ...
NOD002> Processing RDBMS module ...
NOD001> Processing NET module ...
NOD001> Processing ONET module ...
NOD001> Listener checks may take a few minutes. please be patient...
NOD001> Processing listener LISTENER_RACNODE1
NOD002> Processing RDBMS Memory module ...
NOD001> Processing Oracle installation module ...
NOD001> Processing RDBMS module ...
NOD002> Processing LOG module ...
NOD002> Processing Cluster module ...
NOD002> Processing RDSP module ...
NOD002> Processing LOAD module ...
NOD002> Processing End module ...
NOD002> -------------------------------------------------------------------------------
NOD002> RDA Data Collection Ended 26-Nov-2010 11:16:08 AM
NOD002> -------------------------------------------------------------------------------
NOD002> Generating the reports ...
NOD002> - RDA_PERF_top_sql.txt ...
NOD002> - RDA_PERF_autostats.txt ...
NOD002> - RDA_LOG_udump4_orcl2_ora_17965_trc.dat ...
NOD002> - RDA_ONET_dynamic_dep.txt ...
NOD002> - RDA_END_report.txt ...
NOD002> - RDA_INST_oracle_home.txt ...
NOD002> - RDA_DBA_init_ora.txt ...
NOD002> - RDA_DBM_spresmal.txt ...
NOD002> - RDA_NET_udp_settings.txt ...
NOD002> - RDA_LOG_bdump3_orcl2_arc2_18143_trc.dat ...
NOD002> - RDA_INST_make_report.txt ...
NOD002> - RDA_RAC_srvctl.txt ...
NOD002> - RDA_LOG_bdump7_orcl2_lgwr_16518_trc.dat ...
NOD002> - RDA_OS_kernel_info.txt ...
NOD002> - RDA_RAC_css_log.txt ...
NOD002> - RDA_PROF_dot_bashrc.txt ...
NOD002> - RDA_RAC_init.txt ...
NOD002> - RDA_INST_inventory_xml.txt ...
NOD002> - RDA_LOG_bdump9_orcl2_lmd0_16485_trc.dat ...
NOD002> - RDA_OS_misc_linux_info.txt ...
NOD002> - RDA_CFG_database.txt ...
NOD002> - RDA_LOG_bdump2_orcl2_arc0_8802_trc.dat ...
NOD002> - RDA_PERF_lock_data.txt ...
NOD002> - RDA_INST_oratab.txt ...
NOD002> - RDA_OS_etc_conf.txt ...
NOD002> - RDA_LOG_bdump4_orcl2_arc2_8806_trc.dat ...
NOD002> - RDA_OS_ntpstatus.txt ...
NOD002> - RDA_PROF_dot_bash_profile.txt ...
NOD002> - RDA_OS_linux_release.txt ...
NOD002> - RDA_DBM_sgastat.txt ...
NOD002> - RDA_RAC_cluster_net.txt ...
NOD002> - RDA_RAC_crs_stat.txt ...
NOD002> - RDA_RAC_crs_log.txt ...
NOD002> - RDA_LOG_bdump13_orcl2_lms0_16500_trc.dat ...
NOD002> - RDA_DBA_database_properties.txt ...
NOD002> - RDA_PERF_cbo_trace.txt ...
NOD002> - RDA_DBA_text.txt ...
NOD002> - RDA_LOG_udump1_orcl2_ora_16222_trc.dat ...
NOD002> - RDA_NET_ifconfig.txt ...
NOD002> - RDA_DBM_hwm.txt ...
NOD002> - RDA_PROF_profiles.txt ...
NOD002> - RDA_LOG_bdump8_orcl2_lgwr_7809_trc.dat ...
NOD002> - RDA_PROF_env.txt ...
NOD002> - RDA_DBM_sgacomp.txt ...
NOD002> - RDA_PERF_addm_report.txt ...
NOD002> - RDA_DBA_vsystem_event.txt ...
NOD002> - RDA_DBA_sga_info.txt ...
NOD002> - RDA_RAC_logs.txt ...
NOD002> - RDA_ONET_hs_inithsodbc_ora.txt ...
NOD002> - RDA_RAC_ocrconfig.txt ...
NOD002> - RDA_LOG_bdump.txt ...
NOD002> - RDA_INST_orainst_loc.txt ...
NOD002> - RDA_LOG_bdump5_orcl2_diag_16448_trc.dat ...
NOD002> - RDA_DBA_ses_procs.txt ...
NOD002> - RDA_DBM_lchitrat.txt ...
NOD002> - RDA_DBA_tablespace.txt ...
NOD002> - RDA_OS_disk_info.txt ...
NOD002> - RDA_LOG_last_errors.txt ...
NOD002> - RDA_LOG_udump.txt ...
NOD002> - RDA_DBA_jvm_info.txt ...
NOD002> - RDA_OS_tracing.txt ...
NOD002> - RDA_LOG_udump3_orcl2_ora_18539_trc.dat ...
NOD002> - RDA_DBA_vfeatureinfo.txt ...
NOD002> - RDA_RAC_alert_log.txt ...
NOD002> - RDA_END_system.txt ...
NOD002> - RDA_OS_cpu_info.txt ...
NOD002> - RDA_INST_orainventory_logdir.txt ...
NOD002> - RDA_RAC_ipc.txt ...
NOD002> - RDA_OS_java_version.txt ...
NOD002> - RDA_DBA_replication.txt ...
NOD002> - RDA_RAC_ocrcheck.txt ...
NOD002> - RDA_DBA_nls_parms.txt ...
NOD002> - RDA_DBA_vresource_limit.txt ...
NOD002> - RDA_DBA_partition_data.txt ...
NOD002> - RDA_ONET_sqlnet_listener_ora.txt ...
NOD002> - RDA_LOG_bdump17_orcl2_smon_7813_trc.dat ...
NOD002> - RDA_OS_memory_info.txt ...
NOD002> - RDA_DBA_vspparameters.txt ...
NOD002> - RDA_INST_oraInstall20080321_085143PM_out.dat ...
NOD002> - RDA_LOG_bdump15_orcl2_mmon_16580_trc.dat ...
NOD002> - RDA_LOG_log_trace.txt ...
NOD002> - RDA_LOG_udump2_orcl2_ora_24872_trc.dat ...
NOD002> - RDA_RAC_ocrdump.txt ...
NOD002> - RDA_ONET_sqlnet_sqlnet_ora.txt ...
NOD002> - RDA_OS_nls_env.txt ...
NOD002> - RDA_DBM_libcache.txt ...
NOD002> - RDA_OS_packages.txt ...
NOD002> - RDA_DBA_datafile.txt ...
NOD002> - RDA_DBA_security_files.txt ...
NOD002> - RDA_LOG_bdump1_orcl2_arc0_18044_trc.dat ...
NOD002> - RDA_NET_etc_files.txt ...
NOD002> - RDA_RAC_crs_inventory.txt ...
NOD002> - RDA_RAC_evm_log.txt ...
NOD002> - RDA_DBA_vcontrolfile.txt ...
NOD002> - RDA_DBA_security.txt ...
NOD002> - RDA_LOG_bdump14_orcl2_lms0_7793_trc.dat ...
NOD002> - RDA_DBA_spatial.txt ...
NOD002> - RDA_LOG_bdump16_orcl2_qmnc_9268_trc.dat ...
NOD002> - RDA_DBA_undo_info.txt ...
NOD002> - RDA_PERF_ash_report.txt ...
NOD002> - RDA_DBA_vlicense.txt ...
NOD002> - RDA_INST_comps_xml.txt ...
NOD002> - RDA_DBA_voption.txt ...
NOD002> - RDA_DBA_jobs.txt ...
NOD002> - RDA_RAC_client_log.txt ...
NOD002> - RDA_DBA_vfeatureusage.txt ...
NOD002> - RDA_LOG_error2_orcl2_arc0_8802_trc.dat ...
NOD002> - RDA_PERF_overview.txt ...
NOD002> - RDA_PROF_etc_profile.txt ...
NOD002> - RDA_ONET_lstatus.txt ...
NOD002> - RDA_DBM_respool.txt ...
NOD002> - RDA_DBA_vparameters.txt ...
NOD002> - RDA_PROF_ulimit.txt ...
NOD002> - RDA_LOG_bdump10_orcl2_lmd0_7791_trc.dat ...
NOD002> - RDA_OS_sysdef.txt ...
NOD002> - RDA_RAC_cluster_status_file.txt ...
NOD002> - RDA_ONET_sqlnetsqlnet_log.txt ...
NOD002> - RDA_OS_system_error_log.txt ...
NOD002> - RDA_LOG_bdump11_orcl2_lmon_16460_trc.dat ...
NOD002> - RDA_ONET_sqlnet_tnsnames_ora.txt ...
NOD002> - RDA_LOG_udump5_orcl2_ora_16811_trc.dat ...
NOD002> - RDA_INST__link_homes.txt ...
NOD002> - RDA_DBA_vcompatibility.txt ...
NOD002> - RDA_RAC_racg_dump.txt ...
NOD002> - RDA_LOG_bdump12_orcl2_lmon_7789_trc.dat ...
NOD002> - RDA_PERF_latch_data.txt ...
NOD002> - RDA_CFG_homes.txt ...
NOD002> - RDA_DBA_latch_info.txt ...
NOD002> - RDA_LOG_error1_orcl2_arc0_18044_trc.dat ...
NOD002> - RDA_LOG_bdump6_orcl2_diag_7752_trc.dat ...
NOD002> - RDA_RAC_crs_status.txt ...
NOD002> - RDA_NET_netperf.txt ...
NOD002> - RDA_INST_orainventory_files.txt ...
NOD002> - RDA_RAC_racOnOff.txt ...
NOD002> - RDA_NET_tcpip_settings.txt ...
NOD002> - RDA_DBA_CPU_Statistic.txt ...
NOD002> - RDA_ONET_adapters.txt ...
NOD002> - RDA_LOG_alert_log.txt ...
NOD002> - RDA_CFG_oh_inv.txt ...
NOD002> - RDA_DBA_vsession_wait.txt ...
NOD002> - RDA_LOG_udump6_orcl2_ora_16173_trc.dat ...
NOD002> - RDA_DBA_log_info.txt ...
NOD002> - RDA_INST_oraInstall20080321_033734PM_out.dat ...
NOD002> - RDA_PROF_umask.txt ...
NOD002> - RDA_OS_services.txt ...
NOD002> - RDA_OS_libc.txt ...
NOD002> - RDA_DBM_subpool.txt ...
NOD002> - RDA_DBA_aq_data.txt ...
NOD002> - RDA_INST__link_oh_inv.txt ...
NOD002> - RDA_PERF_awr_report.txt ...
NOD002> - RDA_INST_oracle_install.txt ...
NOD002> - RDA_DBA_versions.txt ...
NOD002> - RDA_DBA_vHWM_Statistic.txt ...
NOD002> - RDA_DBA_dba_registry.txt ...
NOD002> - RDA_ONET_netenv.txt ...
NOD002> - Report index ...
NOD002> Packaging the reports ...
NOD002> RDA_NOD002.zip created for transfer
NOD002> Updating the setup file ...
NOD001> Processing RDBMS Memory module ...
NOD001> Processing LOG module ...
NOD001> Processing Cluster module ...
NOD001> Processing RDSP module ...
NOD001> Processing LOAD module ...
NOD001> Processing End module ...
NOD001> -------------------------------------------------------------------------------
NOD001> RDA Data Collection Ended 26-Nov-2010 11:16:46
NOD001> -------------------------------------------------------------------------------
NOD001> Generating the reports ...
NOD001> - RDA_PERF_top_sql.txt ...
NOD001> - RDA_PERF_autostats.txt ...
NOD001> - RDA_ONET_dynamic_dep.txt ...
NOD001> - RDA_END_report.txt ...
NOD001> - RDA_INST_oracle_home.txt ...
NOD001> - RDA_DBA_init_ora.txt ...
NOD001> - RDA_LOG_bdump13_orcl1_lgwr_22435_trc.dat ...
NOD001> - RDA_DBM_spresmal.txt ...
NOD001> - RDA_LOG_bdump7_orcl1_ckpt_7844_trc.dat ...
NOD001> - RDA_NET_udp_settings.txt ...
NOD001> - RDA_INST_make_report.txt ...
NOD001> - RDA_RAC_srvctl.txt ...
NOD001> - RDA_LOG_bdump6_orcl1_cjq0_7862_trc.dat ...
NOD001> - RDA_OS_kernel_info.txt ...
NOD001> - RDA_RAC_css_log.txt ...
NOD001> - RDA_PROF_dot_bashrc.txt ...
NOD001> - RDA_LOG_bdump19_orcl1_lms0_22392_trc.dat ...
NOD001> - RDA_RAC_init.txt ...
NOD001> - RDA_INST_inventory_xml.txt ...
NOD001> - RDA_OS_misc_linux_info.txt ...
NOD001> - RDA_CFG_database.txt ...
NOD001> - RDA_LOG_bdump22_orcl1_smon_7856_trc.dat ...
NOD001> - RDA_PERF_lock_data.txt ...
NOD001> - RDA_LOG_bdump15_orcl1_lmd0_22347_trc.dat ...
NOD001> - RDA_INST_oratab.txt ...
NOD001> - RDA_OS_etc_conf.txt ...
NOD001> - RDA_OS_ntpstatus.txt ...
NOD001> - RDA_PROF_dot_bash_profile.txt ...
NOD001> - RDA_LOG_bdump2_orcl1_arc0_25404_trc.dat ...
NOD001> - RDA_OS_linux_release.txt ...
NOD001> - RDA_DBM_sgastat.txt ...
NOD001> - RDA_RAC_cluster_net.txt ...
NOD001> - RDA_RAC_crs_stat.txt ...
NOD001> - RDA_LOG_bdump3_orcl1_arc1_23210_trc.dat ...
NOD001> - RDA_INST_oraInstall20090831_114713AM_out.dat ...
NOD001> - RDA_RAC_crs_log.txt ...
NOD001> - RDA_DBA_database_properties.txt ...
NOD001> - RDA_PERF_cbo_trace.txt ...
NOD001> - RDA_DBA_text.txt ...
NOD001> - RDA_LOG_bdump4_orcl1_arc2_9360_trc.dat ...
NOD001> - RDA_NET_ifconfig.txt ...
NOD001> - RDA_DBM_hwm.txt ...
NOD001> - RDA_PROF_profiles.txt ...
NOD001> - RDA_PROF_env.txt ...
NOD001> - RDA_DBM_sgacomp.txt ...
NOD001> - RDA_PERF_addm_report.txt ...
NOD001> - RDA_DBA_vsystem_event.txt ...
NOD001> - RDA_DBA_sga_info.txt ...
NOD001> - RDA_RAC_logs.txt ...
NOD001> - RDA_ONET_hs_inithsodbc_ora.txt ...
NOD001> - RDA_RAC_ocrconfig.txt ...
NOD001> - RDA_LOG_bdump1_orcl1_arc0_9341_trc.dat ...
NOD001> - RDA_LOG_bdump.txt ...
NOD001> - RDA_LOG_bdump14_orcl1_lmd0_7821_trc.dat ...
NOD001> - RDA_INST_orainst_loc.txt ...
NOD001> - RDA_INST_installActions20090831_073133AM_log.dat ...
NOD001> - RDA_DBA_ses_procs.txt ...
NOD001> - RDA_DBM_lchitrat.txt ...
NOD001> - RDA_INST_oraInstall20090831_073524AM_err.dat ...
NOD001> - RDA_DBA_tablespace.txt ...
NOD001> - RDA_OS_disk_info.txt ...
NOD001> - RDA_LOG_last_errors.txt ...
NOD001> - RDA_LOG_udump.txt ...
NOD001> - RDA_DBA_jvm_info.txt ...
NOD001> - RDA_LOG_bdump17_orcl1_lmon_22339_trc.dat ...
NOD001> - RDA_OS_tracing.txt ...
NOD001> - RDA_DBA_vfeatureinfo.txt ...
NOD001> - RDA_INST_installActions20080321_022900PM_log.dat ...
NOD001> - RDA_RAC_alert_log.txt ...
NOD001> - RDA_END_system.txt ...
NOD001> - RDA_OS_cpu_info.txt ...
NOD001> - RDA_INST_orainventory_logdir.txt ...
NOD001> - RDA_RAC_ipc.txt ...
NOD001> - RDA_OS_java_version.txt ...
NOD001> - RDA_DBA_replication.txt ...
NOD001> - RDA_LOG_bdump21_orcl1_reco_7858_trc.dat ...
NOD001> - RDA_RAC_ocrcheck.txt ...
NOD001> - RDA_DBA_nls_parms.txt ...
NOD001> - RDA_DBA_vresource_limit.txt ...
NOD001> - RDA_DBA_partition_data.txt ...
NOD001> - RDA_ONET_sqlnet_listener_ora.txt ...
NOD001> - RDA_OS_memory_info.txt ...
NOD001> - RDA_DBA_vspparameters.txt ...
NOD001> - RDA_LOG_bdump20_orcl1_mmnl_7895_trc.dat ...
NOD001> - RDA_LOG_log_trace.txt ...
NOD001> - RDA_LOG_bdump10_orcl1_j000_17866_trc.dat ...
NOD001> - RDA_RAC_ocrdump.txt ...
NOD001> - RDA_ONET_sqlnet_sqlnet_ora.txt ...
NOD001> - RDA_INST_installActions20090831_114713AM_log.dat ...
NOD001> - RDA_OS_nls_env.txt ...
NOD001> - RDA_DBM_libcache.txt ...
NOD001> - RDA_OS_packages.txt ...
NOD001> - RDA_DBA_datafile.txt ...
NOD001> - RDA_DBA_security_files.txt ...
NOD001> - RDA_LOG_udump1_orcl1_ora_5922_trc.dat ...
NOD001> - RDA_LOG_bdump9_orcl1_diag_22327_trc.dat ...
NOD001> - RDA_NET_etc_files.txt ...
NOD001> - RDA_LOG_bdump11_orcl1_lck0_7963_trc.dat ...
NOD001> - RDA_RAC_crs_inventory.txt ...
NOD001> - RDA_RAC_evm_log.txt ...
NOD001> - RDA_INST_oraInstall20090831_114713AM_err.dat ...
NOD001> - RDA_DBA_vcontrolfile.txt ...
NOD001> - RDA_DBA_security.txt ...
NOD001> - RDA_DBA_spatial.txt ...
NOD001> - RDA_DBA_undo_info.txt ...
NOD001> - RDA_LOG_bdump8_orcl1_diag_7815_trc.dat ...
NOD001> - RDA_LOG_udump3_orcl1_ora_5277_trc.dat ...
NOD001> - RDA_PERF_ash_report.txt ...
NOD001> - RDA_DBA_vlicense.txt ...
NOD001> - RDA_INST_comps_xml.txt ...
NOD001> - RDA_DBA_voption.txt ...
NOD001> - RDA_DBA_jobs.txt ...
NOD001> - RDA_RAC_client_log.txt ...
NOD001> - RDA_DBA_vfeatureusage.txt ...
NOD001> - RDA_PERF_overview.txt ...
NOD001> - RDA_PROF_etc_profile.txt ...
NOD001> - RDA_ONET_lstatus.txt ...
NOD001> - RDA_DBM_respool.txt ...
NOD001> - RDA_INST_installActions20080321_033734PM_log.dat ...
NOD001> - RDA_DBA_vparameters.txt ...
NOD001> - RDA_PROF_ulimit.txt ...
NOD001> - RDA_INST_installActions20090831_073524AM_log.dat ...
NOD001> - RDA_LOG_bdump16_orcl1_lmon_7819_trc.dat ...
NOD001> - RDA_OS_sysdef.txt ...
NOD001> - RDA_LOG_bdump23_orcl1_smon_22439_trc.dat ...
NOD001> - RDA_RAC_cluster_status_file.txt ...
NOD001> - RDA_ONET_sqlnetsqlnet_log.txt ...
NOD001> - RDA_INST_oraInstall20080321_022900PM_out.dat ...
NOD001> - RDA_OS_system_error_log.txt ...
NOD001> - RDA_ONET_sqlnet_tnsnames_ora.txt ...
NOD001> - RDA_LOG_bdump18_orcl1_lms0_7830_trc.dat ...
NOD001> - RDA_INST__link_homes.txt ...
NOD001> - RDA_DBA_vcompatibility.txt ...
NOD001> - RDA_RAC_racg_dump.txt ...
NOD001> - RDA_PERF_latch_data.txt ...
NOD001> - RDA_CFG_homes.txt ...
NOD001> - RDA_DBA_latch_info.txt ...
NOD001> - RDA_LOG_udump2_orcl1_ora_10349_trc.dat ...
NOD001> - RDA_LOG_error1_orcl1_arc0_9341_trc.dat ...
NOD001> - RDA_RAC_crs_status.txt ...
NOD001> - RDA_NET_netperf.txt ...
NOD001> - RDA_INST_orainventory_files.txt ...
NOD001> - RDA_RAC_racOnOff.txt ...
NOD001> - RDA_NET_tcpip_settings.txt ...
NOD001> - RDA_DBA_CPU_Statistic.txt ...
NOD001> - RDA_LOG_bdump12_orcl1_lgwr_7839_trc.dat ...
NOD001> - RDA_ONET_adapters.txt ...
NOD001> - RDA_LOG_alert_log.txt ...
NOD001> - RDA_CFG_oh_inv.txt ...
NOD001> - RDA_DBA_vsession_wait.txt ...
NOD001> - RDA_DBA_log_info.txt ...
NOD001> - RDA_INST_oraInstall20080321_033734PM_out.dat ...
NOD001> - RDA_PROF_umask.txt ...
NOD001> - RDA_OS_services.txt ...
NOD001> - RDA_OS_libc.txt ...
NOD001> - RDA_DBM_subpool.txt ...
NOD001> - RDA_DBA_aq_data.txt ...
NOD001> - RDA_LOG_bdump5_orcl1_arc2_23248_trc.dat ...
NOD001> - RDA_INST__link_oh_inv.txt ...
NOD001> - RDA_PERF_awr_report.txt ...
NOD001> - RDA_INST_oracle_install.txt ...
NOD001> - RDA_DBA_versions.txt ...
NOD001> - RDA_DBA_vHWM_Statistic.txt ...
NOD001> - RDA_DBA_dba_registry.txt ...
NOD001> - RDA_LOG_udump4_orcl1_ora_10747_trc.dat ...
NOD001> - RDA_ONET_netenv.txt ...
NOD001> - RDA_INST_installActions20080321_085143PM_log.dat ...
NOD001> - Report index ...
NOD001> Packaging the reports ...
NOD001> RDA_NOD001.zip created for transfer
NOD001> Updating the setup file ...
Processing RDSP module ...
Processing LOAD module ...
Processing End module ...
-------------------------------------------------------------------------------
RDA Data Collection Ended 26-Nov-2010 11:16:52 AM
-------------------------------------------------------------------------------
Generating the reports ...
- RDA_END_report.txt ...
- RDA_RDSP_overview.txt ...
- RDA_S909RDSP.txt ...
- RDA_END_system.txt ...
- RDA_RDSP_results.txt ...
- RDA_CFG_homes.txt ...
- RDA_CFG_oh_inv.txt ...
- Report index ...
Packaging the reports ...
You can review the reports by transferring the contents of the
/u01/rda/rda/output directory to a location where you have web-browser
access. Then, point your browser at this file to display the reports:
RDA__start.htm
Based on your server configuration, some possible alternative approaches are:
- If your client computer with a browser has access to a web shared
directory, copy the /u01/rda/rda/output directory to the web shared
directory and visit this URL:
http://machine:port/web_shared_directory/RDA__start.htm
or
- If your client computer with a browser has FTP access to the server
computer with the /u01/rda/rda/output directory, visit this URL:
ftp://root@racnode1.us.oracle.com//u01/rda/rda/output/RDA__start.htm
If this file was generated to assist in resolving a Service Request, please
send /u01/rda/rda/output/RDA.RDA_racnode1.zip to Oracle Support by uploading
the file via My Oracle Support. If ftp'ing the file, please be sure to ftp in
BINARY format.
Updating the setup file ...
[oracle@racnode1 output]$ unzip -l RDA.RDA_racnode1.zip
Archive: RDA.RDA_racnode1.zip
Length Date Time Name
-------- ---- ---- ----
121 11-26-10 11:14 RDA_0CFG.fil
2507 11-26-10 11:16 RDA.log
0 11-26-10 11:14 RDA_0REXE.fil
1533 11-26-10 11:16 RDA_END_report.txt
634 11-26-10 11:16 RDA_RDSP_overview.txt
469 11-26-10 11:16 RDA_S909RDSP.htm
236 11-26-10 11:14 RDA_S010CFG.toc
4361 11-26-10 11:16 RDA_END_report.htm
147 11-26-10 11:16 RDA_S909RDSP.txt
412 11-26-10 11:16 RDA_END_system.txt
3407 11-26-10 11:05 RDA_rda.css
486 11-26-10 11:16 RDA__index.htm
251 11-26-10 11:16 RDA_S010CFG.txt
19970 11-26-10 11:16 RDA_CFG_oh_inv.htm
161 11-26-10 11:16 RDA__index.txt
179 11-26-10 11:16 RDA__blank.htm
2223 11-26-10 11:16 RDA_RDSP_overview.htm
915 11-26-10 11:16 RDA_CFG_homes.htm
138 11-26-10 11:16 RDA_S909RDSP.toc
388 11-26-10 11:16 RDA_RDSP_results.txt
604 11-26-10 11:16 RDA_S010CFG.htm
814 11-26-10 11:16 RDA__start.htm
358 11-26-10 11:14 RDA_CFG_homes.txt
5984 11-26-10 11:14 RDA_CFG_oh_inv.txt
1153 11-26-10 11:16 RDA_RDSP_results.htm
1291 11-26-10 11:16 RDA_END_system.htm
1289027 11-26-10 11:16 remote/RDA_NOD001.zip <---- THIS SHOULD EXIST
884638 11-26-10 11:16 remote/RDA_NOD002.zip <---- THIS SHOULD EXIST
-------- -------
2222407 28 files
}}}
''BAD OUTPUT''
{{{
[oracle@racnode1 rda]$ ./rda.sh -v -e REMOTE_TRACE=1
Collecting diagnostic data ...
-------------------------------------------------------------------------------
RDA Data Collection Started 26-Nov-2010 11:12:20 AM
-------------------------------------------------------------------------------
Processing Initialization module ...
Processing CFG module ...
Processing OCM module ...
Processing REXE module ...
Processing RDSP module ...
Processing LOAD module ...
Processing End module ...
-------------------------------------------------------------------------------
RDA Data Collection Ended 26-Nov-2010 11:12:26 AM
-------------------------------------------------------------------------------
Generating the reports ...
- RDA_END_report.txt ...
- RDA_RDSP_overview.txt ...
- RDA_END_system.txt ...
- RDA_RDSP_results.txt ...
- RDA_CFG_homes.txt ...
- RDA_CFG_oh_inv.txt ...
- Report index ...
Packaging the reports ...
You can review the reports by transferring the contents of the
/u01/rda/rda/output directory to a location where you have web-browser
access. Then, point your browser at this file to display the reports:
RDA__start.htm
Based on your server configuration, some possible alternative approaches are:
- If your client computer with a browser has access to a web shared
directory, copy the /u01/rda/rda/output directory to the web shared
directory and visit this URL:
http://machine:port/web_shared_directory/RDA__start.htm
or
- If your client computer with a browser has FTP access to the server
computer with the /u01/rda/rda/output directory, visit this URL:
ftp://root@racnode1.us.oracle.com//u01/rda/rda/output/RDA__start.htm
If this file was generated to assist in resolving a Service Request, please
send /u01/rda/rda/output/RDA.RDA_racnode1.zip to Oracle Support by uploading
the file via My Oracle Support. If ftp'ing the file, please be sure to ftp in
BINARY format.
Updating the setup file ...
[oracle@racnode1 output]$ unzip -l RDA.RDA_racnode1.zip
Archive: RDA.RDA_racnode1.zip
Length Date Time Name
-------- ---- ---- ----
121 11-26-10 11:12 RDA_0CFG.fil
1611 11-26-10 11:12 RDA.log
0 11-26-10 11:12 RDA_0REXE.fil
1533 11-26-10 11:12 RDA_END_report.txt
634 11-26-10 11:12 RDA_RDSP_overview.txt
469 11-26-10 11:12 RDA_S909RDSP.htm
236 11-26-10 11:12 RDA_S010CFG.toc
4361 11-26-10 11:12 RDA_END_report.htm
147 11-26-10 11:12 RDA_S909RDSP.txt
412 11-26-10 11:12 RDA_END_system.txt
3407 11-26-10 11:05 RDA_rda.css
486 11-26-10 11:12 RDA__index.htm
251 11-26-10 11:12 RDA_S010CFG.txt
19970 11-26-10 11:12 RDA_CFG_oh_inv.htm
161 11-26-10 11:12 RDA__index.txt
179 11-26-10 11:12 RDA__blank.htm
2223 11-26-10 11:12 RDA_RDSP_overview.htm
915 11-26-10 11:12 RDA_CFG_homes.htm
138 11-26-10 11:12 RDA_S909RDSP.toc
308 11-26-10 11:12 RDA_RDSP_results.txt
604 11-26-10 11:12 RDA_S010CFG.htm
814 11-26-10 11:12 RDA__start.htm
358 11-26-10 11:12 RDA_CFG_homes.txt
5984 11-26-10 11:12 RDA_CFG_oh_inv.txt
1067 11-26-10 11:12 RDA_RDSP_results.htm
1291 11-26-10 11:12 RDA_END_system.htm
-------- -------
47680 26 files
}}}
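A quick way to tell a good run from a bad one is to check whether the merged zip contains the per-node remote zips (the entries flagged "THIS SHOULD EXIST" above); a minimal sketch:
{{{
unzip -l RDA.RDA_racnode1.zip | grep "remote/RDA_NOD" \
  && echo "remote node collections present" \
  || echo "remote node collections MISSING - check RDA.log and rerun with -v -e REMOTE_TRACE=1"
}}}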
! Related Notes
330362.1 RDA Troubleshooting Guide
Remote Diagnostic Agent (RDA) 4 - RAC Cluster Guide (Doc ID 359395.1)
Maclean's notes http://goo.gl/OVvnZ
http://coding-geek.com/how-databases-work/
! complete list of databases
https://dbdb.io/browse
http://community.vsl.co.at/forums/p/22659/154067.aspx
http://jpaul.me/?p=1078
http://www.tomshardware.com/forum/268964-30-what-diffrence-rdimms-udimms
ORACLE® DATABASE 10G WITH RAC AND RELIABLE DATAGRAM SOCKETS CONFIGURATION GUIDE
http://www.filibeto.org/sun/lib/blueprints/821-0802.pdf
Using Reliable Datagram Sockets Over InfiniBand for Oracle Database 10g Clusters
http://www.dell.com/downloads/global/power/ps2q07-20070279-Mahmood.pdf
http://www.freelists.org/post/oracle-l/RAC-declustering,7
http://www.google.com.ph/search?hl=tl&safe=active&q=oracle+kcfis&oq=oracle+kcfis&aq=f&aqi=&aql=&gs_sm=e&gs_upl=10987l11538l0l7l4l0l0l0l0l0l0ll0
! updated 2019
Oracle Clusterware and RAC Support for RDS Over Infiniband (Doc ID 751343.1)
https://maxfilatov.wordpress.com/2018/12/06/sad-story-about-oracle-rds-and-infiniband-relationship/
! pre 12.2
{{{
To verify what protocol is used by RAC, look in the alert log of the ASM and Database instances during startup:
In pre-12.2 versions, the alert.log for both the ASM and database instances shows
Cluster communication is configured to use the following interface(s) for this instance
172.x.x.109
cluster interconnect IPC version:Oracle RDS/IP (generic)
IPC Vendor 1 proto 3
Version 3.0
In the UDP case it would say "UDP/IP" instead of "RDS/IP".
}}}
! 12.2 onwards
{{{
In 12.2+ versions, RDS is supported only for databases running on engineered systems; databases on non-engineered systems always use UDP, and the database alert.log will show UDP instead of RDS. This is true even if RDS is linked into the oracle binary.
In 12.2+ version, asm alert.log shows
cluster interconnect IPC version: Oracle RDS/IP (generic)
IPC Vendor 1 proto 3
Version 4.1
However, database alert.log shows
cluster interconnect IPC version: [IPCLW over RDS(mode 2) ]
IPC Vendor 1 proto 2
IPCLW is the new lightweight IPC implementation used by the 12.2 database.
It is an optimized version of the IPC implementation used in 12.1 and earlier.
}}}
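A quick grep of the alert log shows which protocol an instance picked at startup; the diag path below is illustrative, adjust to your environment:
{{{
grep -A2 "cluster interconnect IPC version" \
  /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/alert_orcl1.log
}}}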
! my examples here
oracle regexp output left hand side https://gist.github.com/karlarao/4a456e9865247b07d1c7654116801214
oracle one column to rows https://gist.github.com/karlarao/9eb0d05fdb680db4bb6153e4a23c9bac
! references
oracle split string after keyword https://www.google.com/search?biw=1436&bih=796&ei=T0dqW8-ACozt5gKztqyABw&q=oracle+split+string+after+keyword&oq=oracle+split+string+after+keyword&gs_l=psy-ab.3...1867.1867.0.2171.1.1.0.0.0.0.74.74.1.1.0....0...1.1.64.psy-ab..0.0.0....0.8DSEO_Qp9-0
https://stackoverflow.com/questions/36015847/extract-string-after-character-and-before-final-full-stop-period-in-sql
https://stackoverflow.com/questions/45165587/how-to-get-string-after-character-oracle
https://stackoverflow.com/questions/28674778/oracle-need-to-extract-text-between-given-strings
https://www.experts-exchange.com/questions/28349313/Oracle-SQL-Extract-rightmost-word-in-string.html
https://lalitkumarb.wordpress.com/2017/02/17/regexp_substr-extract-everything-after-specific-character/ <-- good stuff
https://lalitkumarb.wordpress.com/2018/07/20/regexp_substr-extract-everything-before-specific-character/
https://stackoverflow.com/questions/4389571/how-to-select-a-substring-in-oracle-sql-up-to-a-specific-character
https://basitaalishan.com/2014/02/23/removing-part-of-string-before-and-after-specific-character-using-transact-sql-string-functions/
https://community.toadworld.com/platforms/sql-server/b/weblog/archive/2014/02/23/removing-part-of-string-before-and-after-specific-character-using-transact-sql-string-functions
https://www.google.com/search?q=substr+and+instr+in+oracle&oq=substr+and+instr&aqs=chrome.1.69i57j0l5.3768j1j4&sourceid=chrome&ie=UTF-8
https://stackoverflow.com/questions/39405528/using-substr-and-instr-in-sql
http://oraclemine.com/substr-and-instr-in-oracle/ <-- good stuff
http://www.java2s.com/Code/Oracle/Char-Functions/CombineINSTRandSUBSTRtogether.htm
https://www.google.com/search?q=oracle+substr+instr+on+keyword&ei=Y0tqW_OaPOmO0gKK64-4BQ&start=10&sa=N&biw=1436&bih=796
https://www.google.com/search?q=oracle+substr+instr+until+the+end+of+string&oq=oracle+substr+instr+until+the+end+of+string&aqs=chrome..69i57j69i64l3.12272j1j1&sourceid=chrome&ie=UTF-8
https://stackoverflow.com/questions/30820143/using-substr-and-instr-find-end-of-string <-- good stuff
https://stackoverflow.com/questions/14621357/oracle-get-substring-before-a-space <-- good stuff
https://stackoverflow.com/questions/15614751/if-statement-in-select-oracle
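A minimal sketch of the pattern most of these links cover (grab everything after the last delimiter), with REGEXP_SUBSTR and with the SUBSTR/INSTR combination; the connect string is a placeholder:
{{{
sqlplus -s scott/tiger@orcl <<'EOF'
-- everything after the last period: returns "log"
SELECT REGEXP_SUBSTR('alert_orcl1.trc.log', '[^.]+$') FROM dual;
-- same result with SUBSTR/INSTR (INSTR with -1 searches backward from the end)
SELECT SUBSTR('alert_orcl1.trc.log', INSTR('alert_orcl1.trc.log', '.', -1) + 1) FROM dual;
EOF
}}}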
{{{
RH033
[ ] UNIT 1 - LINUX IDEAS AND HISTORY
open source definition
www.opensource.org/docs/definition.php
www.gnu.org/philosophy/free-sw.html
gnu public license
www.gnu.org/copyleft/gpl.html
[ ] UNIT 2 - LINUX USAGE BASICS
x window system
passwords
root, sudo
vim, nano
/etc/issue for the custom message
[ ] UNIT 3 - RUNNING COMMANDS AND GETTING HELP
levels of help
whatis
--help
man (divided into pages), info (divided into nodes)
manual sections
1 user commands
2 system calls
3 library calls
4 special files
5 file formats
6 games
7 miscellaneous
8 administrative commands
/usr/share/doc
redhat documentation
http://en.wikipedia.org/wiki/List_of_Unix_programs
[ ] UNIT 4 - BROWSING THE FILESYSTEM
file system hierarchy standard - http://proton.pathname.com/fhs
home directories: /root, /home/<username>
user executables: (essential user binaries) /bin, (non-essential binaries such as graphical environments, office tools) /usr/bin, (software compiled from source) /usr/local/bin
system executables: (essential system binaries) /sbin, (non-essential binaries such as graphical environments, office tools) /usr/sbin, (software compiled from source) /usr/local/sbin
other mountpoints: /media, /mnt
configuration: /etc
temporary files: /tmp
kernels and bootloader: /boot
server data: /var, /srv
system information: /proc, /sys
shared libraries: /lib, /usr/lib, /usr/local/lib
[ ] UNIT 5 - USERS, GROUPS, PERMISSIONS
who operator permissions
u + r
g - w
o = x
a s "set user id bit or group"
t "sticky bit (for directories)"
chattr +i <-- add immutable property, only on ext2/3 filesystems
chattr -i <-- remove immutable property
lsattr <-- list immutable property
newgrp
- primary group can be temporarily changed using this command, and will create a new session, to return to original group just do an EXIT
- if you are not a member of the group then you will be prompted with a passwd (check on "/etc/group"), otherwise you'll not be prompted (done as "gpasswd -a oracle karao")
- if the group does not have a password and you try to NEWGRP on that group, then you will be denied
- if the user is added to the group, then you'll see a new group when you do an "id <username>"; it can be removed by doing a "usermod -G <group list>" or "gpasswd -d <user> <group>"
- if a user is granted ADMINISTRATOR (-A) privilege then you'll see a new entry on the "/etc/gshadow" --> karao:0jEuOBLJ51YK2:oracle but this user is not seen on the "/etc/group" unless you also add him on the group
- if a user is granted (-M) privilege then you'll see a new entry on the "/etc/gshadow" --> karao:0jEuOBLJ51YK2:oracle:oracle also you see a new entry on the "/etc/group"
- a group should not be tied to a single username only, because when that user is deleted the group is deleted as well
gpasswd
- there is no way to revoke the (-A) on a user, you could just redirect it to the ROOT user using the +A command
file
r - you can copy
w - you can't copy and edit if only this
x - you can't copy if only this
directory
r - you can copy if only this
w - you can't copy if only this
x - you can't read and copy if only this
[ ] UNIT 6 - USING THE BASH SHELL
$(hostname)
file{1,2,3}
mkdir -p folder/{inbox,outbox}/{trash,save}
!1003
#!/bin/bash <-- shebang, this tells the OS what interpreter to use in order to execute the script
to know the shells, go to "/etc/shells"
to change your default shell, look for the command "chsh"
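quick illustration of the expansions above:
$ echo "running on $(hostname)" <-- command substitution
$ touch file{1,2,3} <-- creates file1 file2 file3
$ mkdir -p folder/{inbox,outbox}/{trash,save} <-- nested brace expansion
$ !1003 <-- re-runs history entry 1003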
[ ] UNIT 7 - STANDARD I/O AND PIPES
linux provides three I/O channels to programs:
STDIN - keyboard by default (file descriptor # 0)
STDOUT - terminal window by default - 1st output data stream (file descriptor # 1)
STDERR - terminal window by default - 2nd output data stream (file descriptor # 2)
redirecting output to a file
> redirect STDOUT to a file
2> redirect STDERR to a file
&> redirect all output to a file
common redirection operators
command > file
command >> file
command < file - send FILE as an input to COMMAND
command 2> file
command 2>> file
[oracle@centos5-11g ~]$ ls -ltr karlarao.txt install2008-05-11_15-41-12.log &>> error.txt <-- NOT ALLOWED
-bash: syntax error near unexpected token `>'
/dev/null <-- is a black hole for data, so that you don't waste storage for the STDERR output file
sample:
redirecting to two files
find /etc/ -iname passwd > find.out 2> /dev/null
redirecting all to a file
find /etc -iname passwd &> find.all
piping to less (send all output to a pipe)
find /etc -iname passwd 2>&1 | less
subshell - to print output of two commands
(cal 2007; cal 2008) | less
piping:
ls -C | tr 'a-z' 'A-Z' <-- translate or delete characters
redirecting to multiple targets (tee)
useful for saving output at various stages in long sequence of pipes, this will actually create the *out files:
ls -l /etc | tee stage1.out | sort | tee stage2.out | uniq -c | tee stage3.out | sort -r | tee stage4.out | less
sending multiple lines to STDIN (mail) - terminates only when the heredoc delimiter (END in this example) is encountered
[oracle@centos5-11g ~]$ mail -s "please call" karlarao@gmail.com << END
> helo
> that's it!
> END
SCRIPTING:
(for loops)
for NAME in JOE JANE JULIE
do
ADDRESS="$NAME@gmail.com"
MESSAGE='Projects are due today!'
echo $MESSAGE | mail -s Reminder $ADDRESS
done
-- ping IP ADDRESSES, uses sequence
for USER in $(grep bash /etc/passwd)
for FILE in *txt
for NUM in $(seq 1 10)
for NUM in $(seq 1 2 10) increments of 2
for LETTER in {a..z} <-- seq handles numbers only; use brace expansion for letters
#!/bin/bash
# alive.sh
# pings machines
for i in $(seq 1 20)
do
host=172.16.126.$i
ping -c1 $host &> /dev/null
if [ $? = 0 ]; then
echo "$host is up!"
else
echo "$host is down!"
fi
done
COULD ALSO BE
#!/bin/bash
# alive.sh
# pings machines
for i in {1..20}; do
host=172.16.126.$i
ping -c1 $host &> /dev/null
if [ $? = 0 ]; then
echo "$host is up!"
else
echo "$host is down!"
fi
done
[ ] UNIT 8 - TEXT PROCESSING TOOLS
CUT
/sbin/ifconfig | grep 'inet addr' | cut -d : -f2 | cut -d ' ' -s -f1
SORT
cut -d : -f 3,1 /etc/passwd | sort -t : -k 2 -n <-- t (delimiter), k (field of sort), n (numerical sort)
UNIQ
cut -d : -f7 /etc/passwd | sort | uniq
DIFF (to do side-by-side mode, -y)
[oracle@centos5-11g ~]$ diff -y lao.txt tzu.txt
The Way that can be told of is not the eternal Way; | The Nameless is the origin of Heaven and Earth;
The name that can be named is not the eternal name. | The named is the mother of all things.
The Nameless is the origin of Heaven and Earth; |
The Named is the mother of all things. <
Therefore let there always be non-being, Therefore let there always be non-being,
so we may see their subtlety, so we may see their subtlety,
And let there always be being, And let there always be being,
so we may see their outcome. so we may see their outcome.
The two are the same, The two are the same,
But after they are produced, But after they are produced,
they have different names. they have different names.
> They both may be called deep and profound.
> Deeper and more profound,
> The door of all subtleties!
PATCH (make the 1st file the same as 2nd file, propagating the changes)
step 1) $ diff -u lao.txt tzu.txt > patch_lao.txt <-- unified format, for better format shows + and -
step 2) $ patch -b lao.txt patch_lao.txt
step 3) $ diff -y lao.txt tzu.txt
to reverse the effect, use the -R switch, or restore the .orig file
$ patch -R lao.txt patch_lao.txt
to restore the file's default SELinux context (e.g. after patching)
$ restorecon /etc/issue
ASPELL
interactive:
$ aspell check letter.txt
non-interactive:
$ aspell list < letter.txt <-- on STDIN
LOOK (quick lookup of words)
look <word>
SED
$ sed 's/The/Is/gi' lao.txt <-- search globally, case insensitive
$ sed '1,2s/The/Is/g' lao.txt <-- lines 1 to 2
$ sed '/digby/,/duncan/s/dog/cat/g' pets <-- start on digby and continuing on duncan
$ sed -e 's/dog/cat/' -e 's/hi/lo/' pets <-- multiple SED expressions
$ sed -f myedits pets <-- for large edits, place them in a file then reference
REGULAR EXPRESSIONS
^ beginning of the line
$ end of line
[xyz] character that is x,y,z
[^xyz] character that is not x,y,z
grep -l root /etc/* 2> /dev/null <-- look for files that contain the word "root"
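quick anchor/class examples (sketch):
grep '^root' /etc/passwd <-- lines beginning with root
grep 'bash$' /etc/passwd <-- lines ending with bash
grep '[^0-9]' file.txt <-- lines containing a non-digit character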
[] UNIT 9 - VIM: AN ADVANCED TEXT EDITOR
three modes:
command mode
insert mode
ex mode
A append to end of line
a insert data after cursor
I insert at beginning of line
i insert data before cursor
o insert a new line (below)
O insert a new line (above)
5, Right Arrow move right five characters
w,b move by word
),( move by sentence
},{ move by paragraph
10G jump to line 10
G jump to the last line of the file
/,n,N search
:%s/\/dev\/hda/\/dev\/sda/g search/replace
change delete yank
(replace) (cut) (copy)
line cc dd yy
letter cl dl yl
word cw dw yw
sentence ahead c) d) y)
sentence behind c( d( y(
paragraph above c{ d{ y{
paragraph below c} d} y}
p paste
u undo
U undo current line
CTRL-r redo
visual mode:
v character oriented visual mode
V line oriented visual mode
CTRL-v block oriented visual mode
multiple windows (must have -o switch):
vi -o lao.txt tzu.txt
CTRL-w, s split horizontal
CTRL-w, v split vertical
CTRL-w, arrow move to another window
configuring vi and vim
on the fly
:set or :set all
permanently
~/.vimrc (primary) or ~/.exrc (for older)
[oracle@centos5-11g ~]$ cat .vimrc
:set nu
:set wrapmargin=10
:help option-list
learn more
:help
vimtutor
visudo <-- opens the /etc/sudoers in vim
vipw <-- edits the password file with necessary locks
[ ] UNIT 10 - BASIC SYSTEM CONFIGURATION TOOLS
important network settings:
ip configuration
device activation
dns configuration
default gateway
less /usr/share/doc/initscripts-8.45.14.EL/sysconfig.txt <-- complete list of configuration options
ifup
ifdown
ifconfig
network configuration files:
ETHERNET DEVICES
/etc/sysconfig/network-scripts/ifcfg-eth0
configuration options:
DEVICE=eth0 <-- for DHCP config
HWADDR=<mac address> <-- for DHCP config
BOOTPROTO=none|dhcp <-- for DHCP config
IPADDR
NETMASK
GATEWAY
ONBOOT=yes <-- for DHCP config
USERCTL=no
TYPE=Ethernet|Wireless <-- for DHCP config
GLOBAL NETWORK SETTINGS (rather than per-interface basis)
/etc/sysconfig/network <-- many may be provided by DHCP, GATEWAY can be overridden in ifcfg file
NETWORKING=yes
GATEWAY=<ip add> <-- this can also be set in ifcfg file, if the gateway is defined here & in ifcfg, the gateway defined in the most recently activated ifcfg file will be used
HOSTNAME=<hostname>
DNS CONFIGURATION (DNS translates hostnames to network addresses)
/etc/resolv.conf <-- local DNS configuration
search example.com cracker.org <-- specify domains that should be tried when an incomplete DNS name is given to a command
nameserver 192.168.0.254 <-- ip add of the DNS server, pick the fastest
nameserver 192.168.1.254
PRINTING IN LINUX:
configuration tools:
system-config-printer
web based: http://localhost:631
lpadmin
configuration files:
/etc/cups/cupsd.conf
/etc/cups/printers.conf
cups-lpd <-- available for backward compatibility with older LPRng client systems
setup printer:
1) new printer
2) serial port1
3) generic
4) postscript printer
supported printer connections:
local (parallel, serial or usb)
unix/linux print server
windows print server
netware print server
hp jetdirect
printing commands:
lpr (accepts ASCII, postscript, pdf, others)
$ lpr -P accounting -#5 report.ps <-- prints to the accounting printer, without -P will print to default printer
lpq
$ lpq -a <-- shows all jobs, without -P will show jobs from default printer
lprm <job number>
system V printing commands:
lp
lpstat -a <-- shows all configured printers
cancel <job number>
printing utilities:
enscript, a2ps <-- convert text to postscript
evince <-- pdf viewer
ps2pdf <-- postscript to pdf
pdf2ps <-- pdf to ps
pdftotext <-- pdf to plain text
mpage <-- prints ascii or ps input with text reduced in size so that multiple pages fit on one sheet
DATE:
date format [MMDDhhmm[[CC]YY][.ss]]
date 080820002008.05
date -s "08/08/2008 20:00:05"
NTP:
stratum1 to stratum16
local clock is stratum10
stratum (1,2) <-- two ntp servers (2,3) <-- clients
ntpq -np <-- query
if you don't want to sync against your local clock, comment out the following
# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
# server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
/var/lib/ntp/drift <-- drift file
SCRIPTING:
(positional parameters)
$0 is the program
$* all command-line arguments
$# holds the number of command-line arguments
sample:
[oracle@centos5-11g ~]$ cat positionaltester.sh2
#!/bin/bash
echo "the program name is $0"
echo "the first argument is $1 and the second is $2"
echo "All command line parameters are $*"
echo "all parameters are $#"
[oracle@centos5-11g ~]$ ./positionaltester.sh2 red hat enterprise linux
the program name is ./positionaltester.sh2
the first argument is red and the second is hat
All command line parameters are red hat enterprise linux
all parameters are 4
(read)
-p prompt to display
sample:
[oracle@centos5-11g ~]$ cat input.sh
#!/bin/bash
read -p "Enter name (first last):" FIRST LAST
echo "your first name is $FIRST and your last name is $LAST"
[oracle@centos5-11g ~]$ ./input.sh
Enter name (first last):karl arao
your first name is karl and your last name is arao
[ ] UNIT 11 - INVESTIGATING AND MANAGING PROCESSES
uid, gid, selinux context determines filesystem access
/proc/<pid> <-- tracks every aspect of a process by its PID
LISTING PROCESS:
? <-- daemon processes
-a <-- processes on all terminals
-x <-- includes processes not attached to terminals
-u <-- process owner info
-f <-- process parentage
-o
process states (do a "man ps" for the complete list):
running
sleeping
uninterruptable sleep
zombie
FINDING PROCESS:
ps axo comm,tty | grep ttyS0
pgrep -U root <-- user
pgrep -G student <-- group
pidof bash <-- find process id of a program
SIGNALS ("man 7 signal" for the complete list):
signal 15, term (default) terminate cleanly
signal 9, kill terminate immediately
signal 1, hup re-read configuration files
sending signals to processes:
by PID kill [signal] pid
by name killall [signal] comm
by pattern pkill [-signal] pattern
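sample (target name is just illustrative):
kill -HUP $(pidof syslogd) <-- by PID
killall -HUP syslogd <-- by name
pkill -HUP syslogd <-- by pattern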
SCHEDULE PRIORITY (nice)
-20 to 19, default 0 <-- lower value means high cpu priority
when starting a process (only root can set negative nice values; once an ordinary user raises the value it cannot be lowered again)
nice -n -15 vi ~oracle/lao.txt
after starting
renice 5 <pid>
INTERACTIVE PROCESS MGT TOOLS:
top
gnome-system-monitor
JOB CONTROL
firefox & <-- run process in the background
CTRL-z <-- temporarily halt a running program
jobs <-- list jobs
bg <job#> <-- resume in background, you can't stop it, you must fg it first then CTRL-z
fg <job#> <-- resume in foreground
kill %<job#> <-- kills the job
sample:
[oracle@centos5-11g ~]$ jobs
[1]- Stopped find / -iname "*.conf" 2>/dev/null <-- previous job
[2] Stopped find / -iname "oracle" 2>/dev/null
[3] Stopped find / -iname "root" 2>/dev/null
[4]+ Stopped find / -iname "conf" 2>/dev/null <-- current (default) job
AT - one time jobs
root can modify jobs for other users by getting a login shell (su - <username>)
create at <time> crontab -e
list at -l crontab -l
details at -c <job#> n/a
remove at -d <job#> crontab -r
edit n/a crontab -e
CRON - recurring jobs, runs every minute
root can modify jobs for any user with "crontab -u <username> -l|-e|-r"
see "man 5 crontab" for details on time
# (Use to post in the top of your crontab)
# ------------- minute (0 - 59)
# | ----------- hour (0 - 23)
# | | --------- day of month (1 - 31)
# | | | ------- month (1 - 12)
# | | | | ----- day of week (0 - 6) (Sunday=0)
# | | | | |
# * * * * * command to be executed
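# sample entry (path is illustrative): run a backup at 02:30 every weekday
30 2 * * 1-5 /home/oracle/bin/backup.sh >> /tmp/backup.log 2>&1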
EXIT STATUS
0 success
1-255 fail
$? determine exit status
SCRIPTING:
(conditional execution parameters, based on the exit status of the previous command)
&& --> AND THEN, the 2nd command will only run if the 1st exits successfully
|| --> OR ELSE, the 2nd command will only run if the 1st fail
sample 1:
$ grep -q no_such_user /etc/passwd && echo 'user existing' || echo 'no such user' <-- "-q" is silent mode, you only get the exit status (0 or 1)
$ ping -c1 -W2 centos5-11g &> /dev/null \
&& echo "station is up" \
|| echo $(echo "station is unreachable"; exit 1)
for x in $(seq 1 10); do
echo adding test$x
(
echo -ne "test$x\t"
useradd test$x 2>&1 > /dev/null && mkpasswd test$x
) >> /tmp/userlog
done
echo 'cat /tmp/userlog to see new passwords'
(test, evaluates boolean statements, 0 true, 1 false)
long form:
test "$A" = "$B" && echo "Strings are equal"
test "$A" -eq "$B" && echo "Integers are equal"
short form:
[ "$A" = "$B" ] && echo "Strings are equal"
[ "$A" -eq "$B" ] && echo "Integers are equal"
(file tests, test existence of files)
[ -f issue.patch ] && echo "regular file"
some of the supported file tests are:
-d <file> true if the file is a directory
-e true if the file exists
-f true if the file exists and is a regular file
-h true if the file is a symbolic link
-L true if the file is a symbolic link
-r true if the file exists and is readable by you
-s true if the file exists and is not empty
-w true if the file exists and is writable by you
-x true if the file exists and is executable by you
-O true if the file is effectively owned by you
-G true if the file is effectively owned by your group
(if then else)
# pings my station
if ping -c1 -W2 centos5-11g &> /dev/null; then
echo "station is up"
elif grep "centos5-11g" ~/maintenance.txt &> /dev/null; then
echo "station is undergoing maintenance"
else echo "station is unexpectedly down"
exit 1
fi
# test ping command
if test -x /bin/ping6; then
ping6 -c1 ::1 &> /dev/null && echo "ipv6 stack is up"
elif test -x /bin/ping; then
ping -c1 127.0.0.1 &> /dev/null && echo "no ipv6, ipv4 stack is up"
else
echo "oops! this should not happen"
exit 255
fi
# test if target is up or down, with positional parameters
#!/bin/bash
TARGET=$1
ping -c1 -w2 $TARGET &> /dev/null
RESULT=$?
if [ $RESULT -ne 0 ]
then
echo "$TARGET is down"
else
echo "$TARGET is up"
fi
exit $RESULT
# use reach.sh on AT to ping a station
at now + 5min
for x in $(seq 1 40); do
reach.sh station$x
done
CTRL-d
# output the head of ps with formatting, sorted descending
ps axo pid,comm,pcpu --sort=-pcpu | head -n2
# good for finding processes order by CPU PERCENT, RSS (physical memory), CPU TIME (time)
ps axo pid,comm,pcpu,size,rss,vsz,cputime,stat --sort=-pcpu | head -n10
[ ] UNIT 12 - CONFIGURING THE BASH SHELL
2 types of variables
local variables
environment variables
set | less <-- all variables
env | less <-- environment variables
echo $HOME <-- single value
alias="rm -i"
\rm -r Junk <-- if you don't want to use the alias on the "rm" command
PREVENTING EXPANSION:
echo your cost: \$5.00 <-- (backslash) makes next character literal
' <-- (single quote) inhibit all expansion
" <-- (double quote) inhibit all except:
$ (dollar) variable expansion
` (backquotes) command substitution
\ (backslash) single char inhibition
! (exclamation point) history substitution
[oracle@centos5-11g ~]$ find . -iname pos\* <-- or you could do "find . -iname 'pos*'"
./positionaltester.sh2
./positionaltester.sh
./pos
LOGIN vs NON-LOGIN SHELLS - (where startup scripts are configured)
login shells
any shell created at login (includes x login)
su -
non login shells
su
graphical terminals
executed scripts
any other bash instances
global files
/etc/profile
/etc/profile.d
/etc/bashrc
user files
~/.bash_profile
~/.bashrc
~/.bash_logout <-- when a login shell exits, for auto backups and cleanup temp files
login shells (order)
1) "/etc/profile" ---which calls---> "/etc/profile.d"
2) ~/.bash_profile ---calls---> ~/.bashrc ---calls---> /etc/bashrc
non login shells (order)
1) ~/.bashrc ---calls---> /etc/bashrc ---calls---> /etc/profile.d (called by bashrc only for non login shells)
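the chaining works because the stock ~/.bash_profile sources ~/.bashrc, roughly:
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi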
SCRIPTING
ls -laptr
#!/bin/bash
# script for backing up any directory
# 1st: the directory to be backed up
# 2nd: the location to backup to
ORIG=$1
BACK=~/backups/$(basename $ORIG)-$(date +%Y%m%d%H%M) <-- "basename" strips the leading path (e.g. /etc/sysconfig -> sysconfig)
if [ -e $BACK ]
then
echo "warning: $BACK exists"
read -p "Press CTRL-c to exit or ENTER to continue"
fi
cp -av $ORIG $BACK
echo "backup of $ORIG to $BACK finished at: $(date +%Y%m%d%H%M)"
[ ] UNIT 13 - FINDING AND PROCESSING FILES
locate -i <filename> <-- case insensitive search
updatedb <-- must be run as root, updated daily
find
-ok <-- will prompt before executing command
sample:
find /u01/app/oracle/oradata/ -size 10M -ok gzip {} \; <-- will prompt to gzip for each file found
-exec
sample:
find /u01/app/oracle/oradata/ -size 10M -exec gzip {} \; <-- will not prompt, and will gzip each file found
-user <-- search for files owned by user & group
-group
find, logical operators (criteria are ANDed by default)
-o OR
-not NOT
sample:
find -user joe -not -group joe
find -user joe -o -user jane
find . -not \( -user oracle -o -user ken \) -exec ls -l {} \;
find, permissions
-uid UID of user
-gid
-perm permission
find -perm 755 matches if mode is exactly 755
find -perm +222 matches if anyone can write (first is exact)
find -perm -222 matches if everyone can write
find -perm -002 matches if other can write
find, numerical criteria
-size
find -size 1M
find -size +1M
find -size -1M
-links number of links to the file
find, access time
# DAYS
-atime when file was last read
-mtime when file data last changed
-ctime when file data or metadata last changed
samples:
find -ctime 10 <-- exact 10 days
find -ctime -10 <-- within 10 days
find -ctime +10 <-- more than 10 days
# MINUTES
-amin
-mmin
-cmin
# MATCH ACCESS TIMES RELATIVE TO THE TIMESTAMP OF OTHER FILES
-anewer
-newer find -newer recent_file.txt
-not -newer find -not -newer recent_file.txt
-cnewer
find, execution
find -name "*conf" -exec cp {} {}.orig \; <-- backup config files, adding a .orig extension
find /tmp -ctime +3 -user joe -ok rm {} \; <-- prompt to remove joe's tmp files that are over 3 days old
find ~ -perm -022 -exec chmod o-w {} \; <-- fix other-writable files in your home directory
find /var -user root -group mail 2> /dev/null -ls <-- ls -l style listing "-ls"
find -type l -ls <-- list symbolic links "ls style"
find -type f -ls <-- list regular files
find /bin /usr/bin -perm -4000 <-- list all files under /bin /usr/bin that have SetUID bit set
find /bin /usr/bin -perm -u+s <-- list all files under /bin /usr/bin that have SetUID bit set
[ ] UNIT 14 - NETWORK CLIENTS
firefox
engine plugins mycroft.mozdev.org
plugins plugindoc.mozdev.org
non-gui web browser
links http://www.redhat.com
links -dump http://www.redhat.com <-- dumps all the text of the browser to STDOUT
links -source http://www.redhat.com <-- dumps all the html source
wget (retrieve a single file via HTTP or FTP, also mirror a website)
wget <link or html file>
wget --recursive --level=1 --convert-links http://www.site.com <-- mirror a site
email and messaging
email protocol
pickup
imap/pop (most popular are imaps & pop3s, which encrypt data over the wire)
delivery
smtp, esmtp
evolution
- supports gpg (gnu privacy guard)
thunderbird
mutt
mutt -f imaps://user@server <-- specify the mailbox you wish to start in
c <-- to change mailbox
gaim
http://gaim.sourceforge.net/plugins.php
OpenSSH: secure remote shell
ssh
scp <-- secure replacement for rcp
[user@]host:/<path to file>
-r recursion
-p preserve times and permissions
-C to compress datastream
sftp <-- similar to ftp, remote host's sshd needs to have support for sftp in order to work
rsync (uses remote update protocol)
-e specified rsh compatible program to connect with (usually ssh)
-a recursive, preserve
-r recursive, not preserve
--partial continues partially downloaded files
--progress prints progress bar
-P same as --partial --progress
http://everythinglinux.org/rsync/
sample:
rsync --verbose --progress --stats --compress --rsh=/usr/bin/ssh --recursive --times --perms --links --delete *txt oracle@192.168.203.11:/u01/app/oracle/rsync
OR
rsync -e ssh *txt oracle@192.168.203.11:/u01/app/oracle/rsync/
to setup rsync server:
# make the file /etc/rsyncd.conf
motd file = /etc/rsyncd.motd
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
[karlarao]
path = /u01/app/oracle/rsync
comment = test_rsync_server
read only = no
list = yes
auth users = oracle
secrets file = /etc/rsyncd.scrt
# then the /etc/rsyncd.scrt
oracle:oracle
# to use it
rsync --verbose --progress --stats --compress --rsh=/usr/bin/ssh --recursive --times --perms --links --delete *txt 192.168.203.11:/u01/app/oracle/rsync
PASSWORDLESS AUTHENTICATION: KEY-BASED AUTHENTICATION
ssh-keygen -t rsa <-- creates rsa public private keys
ssh-keygen -t dsa <-- creates dsa public private keys
ssh-add -l <-- query list of stored keys
ssh-copy-id <-- copy public key to destination system, on older systems you may not have this.. have to manually create authorized_keys
ssh-agent $SHELL <-- agent authenticates on behalf of user
ssh-add <-- add the keys, will ask passphrase
step by step:
1) ssh-keygen -t rsa <-- generate private public keys
ssh-keygen -t dsa
2) ssh-copy-id -i id_rsa.pub oracle@192.168.203.26 <-- copy public key to remote host
ssh-copy-id -i id_dsa.pub oracle@192.168.203.26
3)
[oracle@centos5-11g .ssh]$ ssh-agent $SHELL <-- load identities
[oracle@centos5-11g .ssh]$ ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa:
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Enter passphrase for /home/oracle/.ssh/id_dsa:
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
4) ssh 192.168.203.26 date <-- test
FTP CLIENTS
lftp
gFTP
Xorg Clients (XTERM - X11 forwarding)
ssh -X <user>@<host>
xterm &
network diagnostic tools
ping
traceroute
host
dig
netstat
gnome-nettool (GUI)
smbclient (FTP-like client to access SMB/CIFS resources)
smbclient -L server1 <-- list shares on server1
smbclient -U student //server1/homes <-- access a share
-W workgroup or domain
-U username
-N suppress password prompt (otherwise you will be asked for a password)
nautilus file transfer
[ ] UNIT 15 - ADVANCED TOPICS IN USERS, GROUPS, AND PERMISSIONS
/etc/passwd
/etc/shadow
/etc/group
/etc/gshadow
user management tools
system-config-users
command line
useradd
usermod
userdel [-r]
1 - 499 <-- system users and groups
MONITORING LOGINS
last | less <-- shows login, logout, reboot history
lastb <-- shows bad logins
w <-- show who is logged on and what they are doing, shows load average, cpu info, etc.
echo $$ <-- to show your current process ID
DEFAULT PERMISSIONS
666 <-- base permissions for files (before umask is subtracted)
777 <-- base permissions for directories (before umask is subtracted)
002 <-- default umask for ordinary users
022 <-- default umask for root
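resulting default mode is base permissions minus umask, e.g. with umask 022:
touch newfile; mkdir newdir
ls -ld newfile newdir <-- newfile 644 (666-022), newdir 755 (777-022)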
SPECIAL PERMISSIONS FOR EXECUTABLES (executable regular files, also 4-2-1) SUID, SGID
(4)suid
-rwSr--r-- 1 oracle oinstall 0 Aug 11 12:40 test2.txt <-- "S" on the owner if no execute bit was granted
-rwsr--r-- 1 oracle oinstall 0 Aug 11 12:40 test2.txt <-- "s" on the owner if the execute bit was granted
(2)sgid
-rw-r-Sr-- 1 oracle oinstall 0 Aug 11 12:39 test.txt <-- "S" on the group if no execute bit was granted
-rw-rwsr-- 1 oracle oinstall 0 Aug 11 12:39 test.txt <-- "s" on the group if the execute bit was granted
stickybit
-rwx-----T 1 oracle oinstall 0 Aug 11 12:41 test3.txt <-- "T" on others if no execute bit was granted
-rwxr--r-t 1 oracle oinstall 0 Aug 11 12:41 test3.txt <-- "t" on others if the execute bit was granted
SPECIAL PERMISSIONS FOR DIRECTORIES STICKY BIT, SGID
suid
drwsr-xr-x 2 oracle oinstall 4096 Aug 11 12:44 test3 <-- "s" on the owner
sgid
drwxr-sr-x 2 oracle oinstall 4096 Aug 11 12:37 test <-- "s" on the group
(1)stickybit
drwxr-xr-t 2 oracle oinstall 4096 Aug 11 12:38 test2 <-- "t" on the others
FOR COLLABORATION (SECURED):
as root user, make a group and directory.. then grant
chmod 3770 <directory>
drwxrws--T 2 oracle collaboration 4096 Aug 16 22:45 collaboration <-- viewable by collaboration members; members can only delete their own
files, except root & oracle (owner of the folder).. also umask should be 022
so that files created in the collaboration folder are read-only for other users
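minimal sketch of that setup (names are illustrative):
groupadd collaboration
mkdir /collaboration
chown oracle:collaboration /collaboration
chmod 3770 /collaboration <-- 3 = setgid (2) + sticky (1)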
SCRIPTING:
#!/bin/bash
# create all users defined in userlist file
# just add -x if you have problems
for NAME in $(cat ~/bin/userlist)
do
/usr/sbin/useradd $NAME
PASSWORD=$(openssl rand -base64 10)
echo $PASSWORD | passwd --stdin $NAME
echo "username: $NAME, password: $PASSWORD" | mail -s "Account Info" root@localhost
done
[ ] UNIT 16 - LINUX FILESYSTEM IN-DEPTH
ext2 and msdos <-- typically used for floppies, ext2 (since 1993)
ext3 <-- features such as extended attributes & posix access control lists (ACLs)
GFS & GFS2 <-- for SANs
disk partition
filesystem
inode table <-- for ext2 and ext3 filesystems
inode (index node) which is reference by its inode number <-- contains metadata about files such as (unique within the filesystem):
- file type, permissions, uid, gid
- the link count (count of path names pointing to this file)
- file size and various time stamps
- pointers to the file's data blocks on disk
- other data about the file
computers reference for a file is inode number
humans reference for a file is by file name
directory is mapping between file names and inode numbers
when a filename is referenced by a command,
linux references the directory in which the file resides,
determines the inode number associated with the filename,
looks up the inode information in the inode table
if user has permission..returns the contents of the file
cp <-- creates new inode
mv <-- untouched when on the same filesystem
rm <-- makes the inode free, but the data untouched..would be overwritten once reused
hard links (ln)
- only on the same filesystem
- not allowed on directories
- who created it will be the UID/GID
88724 -rwxr-xr-x 2 root root 244 Aug 11 13:12 create_users.sh <-- the same inode, and link count is 2
88724 -rwxr-xr-x 2 root root 244 Aug 11 13:12 create_HL
soft links (ln -s)
- can span filesystems
- who created it will be the UID/GID
- specify the fully qualified path
- the 25 is the length of the target path in characters
- filetype is "l"
175729 lrwxrwxrwx 1 root root 25 Aug 11 14:08 create_link -> /root/bin/create_users.s
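the commands behind the two listings above (paths assumed):
ln /root/bin/create_users.sh /root/bin/create_HL <-- hard link, link count becomes 2
ln -s /root/bin/create_users.sh /root/create_link <-- soft link, new inode of type "l"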
SEVEN FUNDAMENTAL FILETYPES
- regular file
d directory
l symbolic link
b block special file <-- used to communicate with hardware a block of data at a time 512bytes,1024bytes,2048bytes
c character special file <-- used to communicate with hardware one character at a time
p named pipe <-- file that passes data between processes
s socket <-- stylized mechanism for inter process communication
df
du
baobab (GUI)
removable media
mount /dev/fd0 /mnt/floppy <-- floppy
mount /dev/cdrom /mnt/cdrom <-- cdrom
mtools
cds and dvds
usb media (detected by kernel as scsi devices)
/dev/sdax or /dev/sdbx
/media/disk
floppy disks
ARCHIVING FILES AND COMPRESSING ARCHIVES
tar <-- (tape archive) natively supports compression using gzip-gunzip, bzip2-bunzip2 (newer)
-c create
-t list
-x extract
-f <archivename> name of the tar file
-z gzip tar.gz
-j bzip2 tar.bz2
-v verbose
ARCHIVING: other tools
zip, unzip <-- compatible with pkzip archives
file-roller
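putting the switches together (sketch):
tar -cjvf backup.tar.bz2 /etc/sysconfig <-- create a bzip2 archive
tar -tjvf backup.tar.bz2 <-- list contents
tar -xjvf backup.tar.bz2 -C /tmp <-- extract under /tmp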
SCRIPTING:
#!/bin/bash
# script for backing up any directory
# 1st: the directory to be backed up
# 2nd: the location to backup to
ORIG=$1
BACK=~/backups/$(basename $ORIG)-$(date +%Y%m%d%H%M).tar.bz2
if [ -e $BACK ]
then
echo "warning: $BACK exists"
read -p "Press CTRL-c to exit or ENTER to continue"
fi
tar -cjvpf $BACK $ORIG
echo "backup of $ORIG to $BACK finished at: $(date +%Y%m%d%H%M)"
[ ] UNIT 17 - ESSENTIAL SYSTEM ADMINISTRATION TOOLS
check hardware compatibility
http://hardware.redhat.com/hwcert
check release notes
installer can be started from (boot.iso)
cdrom
usb
network (PXE) <-- ethernet and bios must support this
supported installation sources:
network server (ftp, http, nfs)
cdrom
hard disk
managing services
managed by:
System V scripts
init
xinetd super server
GUI, command line
system-config-services (GUI)
command line
/sbin/service start,stop,status,restart,reload
/sbin/chkconfig
managing software
rpm
name-version-release.architecture.rpm <-- VERSION is open source version of the project, RELEASE refers to redhat internal patches to the open source code
yum <-- replacing UP2DATE
/etc/yum.conf
/etc/yum.repos.d/
yum install
yum remove
yum update
yum list available
yum list installed
pup <-- software updater
pirut <-- add/remove software
securing the system
system level network security:
1) application level network security (tcp_wrappers)
2) kernel level network security (iptables, SELinux)
SELinux
- all processes & files have a context
- implements MAC - mandatory access control (default in unix is DAC - discretionary access control, which users make their files world-writable)
- targeted "policy" by default (web, dns, dhcp, proxy, database, logging, etc.)
- users may change the contexts of files that they own, but not alter or override the underlying SElinux policy
to disable
make it PERMISSIVE, logs policy violations but not actually prevent prohibited actions from taking place
available in this classes
RH133
RHS427
RHS429
packet filtering (system-config-securitylevel, simple interface to the kernel level firewall.. NETFILTER)
TCP/IP transaction divided into packets
packets contain a header (destination & source address, protocol-specific info) and a payload
ip address
port number
TCP and UDP use distinct port spaces even though they share the same numbers
}}}
{{{
RH131
[ ] UNIT 1 - SYSTEM INITIALIZATION
boot sequence overview:
bios initialization
boot loader
kernel initialization
"init" starts and enters desired run level by executing:
/etc/rc.d/rc.sysinit
/etc/rc.d/rc & /etc/rc.d/rc?.d/
/etc/rc.d/rc.local
X display manager if appropriate
bootloader components
bootloader
1st stage small, resides in the MBR or boot sector (on the 1st 512bytes in hard disk)... IPL (initial program loader) for GRUB is just the 1st stage
primary task is to locate the 2nd stage which does most of the work to boot the system
2nd stage loaded from boot partition
two ways to configure boot loader
primary boot loader
secondary boot loader (first stage boot loader into the boot sector of some partition)
GRUB and grub.conf (read at boot time)
supported filesystems:
ext2/ext3
reiserfs
jfs
fat
minix
ffs
/boot/grub/grub.conf <-- changes takes effect immediately
/sbin/grub-install /dev/sda <-- if GRUB is corrupted, reinstall.. if this command fails do this
1) type "grub"
2) type "root (hd0,0)"
3) type "setup (hd0)"
4) type "quit"
if GRUB can't find the grub.conf then it will default to GRUB command line
info grub
Kernel initialization
kernel boot time functions:
device detection
device driver initialization <-- device drivers compiled into the kernel are loaded when device is found
else if essential (needed for boot) drivers have been compiled as modules, they must be included in the INITRD image
which is temporarily mounted by the kernel on a RAM disk to make the modules available for the initialization process
mounts root filesystem read only <-- after essential drivers are loaded, the kernel mounts the root filesystem (/) read-only
loads initial process (INIT) <-- after loading, control is passed from the kernel to that process (INIT)
less /var/log/dmesg <-- all bootup messages taken just after control is passed to INIT
dmesg
INIT initialization
init reads its config /ETC/INITTAB <-- contains the information on how init should setup the system in every run level, also contains default runlevel
if lost or corrupted, you'll not be able to boot to any standard run levels
initial run level
system initialization scripts
run level specific script directories
trap certain key sequences
define UPS power fail/restore scripts
spawn gettys on virtual consoles
initialize X in run level 5
run levels
0 halt (Do NOT set initdefault to this)
1 Single user mode
2 Multiuser, without NFS (The same as 3, if you do not have networking)
3 Full multiuser mode
4 unused
5 X11
6 reboot (Do NOT set initdefault to this)
s,S,single alternate single user mode
emergency bypass rc.sysinit, sulogin
/sbin/runlevel
/etc/rc.d/rc.sysinit
important tasks include:
activate udev & SELinux
set kernel parameters /etc/sysctl.conf
system clock
loads keymaps
swap partitions
hostname
root filesystem check and remount
activate RAID and LVM devices
enables disk quotas
check & mount other filesystems
cleans up stale locks & PID files
/etc/rc.d/rc <-- responsible for starting/stopping when runlevel changes, also initiates default runlevel as per /etc/inittab "initdefault"
system V run levels
/etc/rc.d/rcX.d <-- each runlevel has a corresponding directory; symbolic links in run level directories call the init.d scripts with a START (S) or STOP (K) argument
/etc/rc.d/init.d <-- System V init scripts
/etc/rc.d/rc.local
- common place for custom scripts
- run every runlevel
controlling services
control services startup
system-config-services
ntsysv
chkconfig
control services manually
service
chkconfig <-- (together with system-config-services) will start or stop an xinetd-managed service as soon as you configure it on or off
standalone service will not start or stop until the system is rebooted or you use the service command
[ ] UNIT 2 - PACKAGE MANAGEMENT
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" | less
rpm package manager
/var/lib/rpm <-- database is stored in here
rpm installation and removal
-i install
-U upgrade original package will be removed (except config files which will be saved ".rpmsave"), default config files from new version might have ".rpmnew"
will act as -i if package is not yet installed
-F freshen identical to upgrading, except the package will be ignored if not already installed
-e erase
updating a KERNEL RPM (do not use rpm -U or rpm -F)
- kernel modules are version specific, and an upgrade will remove all modules that your present kernel is using, leaving the system unable to dynamically load
device drivers or other modules
/etc/sysconfig/kernel <-- alter kernel addition to GRUB
rpm queries
-qa all installed packages
-qf
-ql
-qi
-qpi
-qpl
-q --requires package prerequisites
-q --provides capabilities provided by package
-q --scripts scripts run upon installation removal
-q --changelog package revision history
-q --queryformat format custom-formatted information
rpm --querytags for a list of query formats
rpm -qa --queryformat '%{name}-%{version}-%{release}: [%{provides} ]\n' | grep postfix <-- list capabilities
rpm -q --provides postfix
rpm verification
installed package file verification:
rpm -V <package name> <-- verifies the installed package against the RPM database, has the file changed since the last install?
rpm -Vp <package_file>.i386.rpm <-- verifies the installed package against the package file
rpm -Va <-- verifies all installed RPMS against the database
signature verification BEFORE package install:
rpm --import <RPM-GPG-KEY-redhat-release>
rpm -K <package_file>.i386.rpm <-- check signature of RPM
rpm -qa gpg-pubkey <-- queries all the GPG keys imported
rpm --checksig <rpm> <-- check integrity of package files
/etc/pki/rpm-gpg <-- GPG key can also be found here
YUM (Yellowdog Updater, Modified)
- replacement for UP2DATE
- based on repositories that hold RPMs and repodata file list
- can call upon several repositories for dependency resolution, fetch the RPMs, install needed packages
yum installation and removal
yum install
yum remove
yum update
yum queries
searching packages
yum search <searchterm>
yum list (all | available | extras | installed | recent | updates)
yum info <package name>
searching files
yum whatprovides <filename>
configuring additional repositories
/etc/yum.repos.d/ <-- put the new repo file here, you could make use of $releasever and $basearch variables for repository declaration
yum clean dbcache | all <-- repository information is cached; use "yum clean" to clear it
sample repo file (should be at /etc/yum.repos.d/server1.repo):
[GLS]
name=private repository
baseurl=http://server1.example.com/pub/gls/RPMS
enabled=1
gpgcheck=1
[centos511g]
name=private karl arao
baseurl=http://192.168.203.25/install/centos/CentOS <-- this will look for the repodata folder inside it
gpgcheck=1
gpgkey=http://192.168.203.25/install/centos/RPM-GPG-KEY-CentOS-5 <-- this will prompt you to install the gpgkey
sample repo file for DVD media installation
[root@centos5-11g ~]# cat /etc/yum.repos.d/CentOS-Media.repo
# CentOS-Media.repo
#
# This repo is used to mount the default locations for a CDROM / DVD on
# CentOS-5. You can use this repo and yum to install items directly off the
# DVD ISO that we release.
#
# To use this repo, put in your DVD and use it with the other repos too:
# yum --enablerepo=c4-media [command]
#
# or for ONLY the media repo, do this:
#
# yum --disablerepo=\* --enablerepo=c4-media [command] <-- use this command for the installation..
[c5-media]
name=CentOS-$releasever - Media
baseurl=file:///media/CentOS_5.0_Final/
gpgcheck=1
enabled=0
gpgkey=file:///media/CentOS_5.0_Final/RPM-GPG-KEY-CentOS-5
creating a private repository
- create a directory to hold your packages
- make this directory available by http/ftp
- install the "createrepo" rpm
- run the "createrepo -v /<package-directory>
- this will create a "repodata" subdirectory and the needed support files
- to support anaconda on the same server:
cp /<package-directory>/repodata/comps*.xml /tmp
createrepo -g /tmp/comps*.xml /<package-directory>
createrepo -g comps.xml /path/to/rpms <--example of a repository with a groups file. Note that the groups file should be in the same directory as the rpm packages (i.e. /path/to/rpms/comps.xml)
createrepo <-- creates the support files necessary for a yum repository, support files will be put into the "repodata" subdirectory
the addition and deletion of files within the repository requires createrepo to be run again
files:
repomd.xml <-- contains timestamps & checksum values for the other 3 files
primary.xml.gz <-- contains list of all RPMS in the repository, as well as dependency info, used by "rpm -qpl"
filelists.xml.gz <-- contains list of all files in all the RPMs, used by "yum whatprovides"
other.xml.gz <-- contains additional info, including the change logs for the RPMs
comps.xml <-- (optional) contains info about package groups, allows group installation
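condensed sketch of the private-repo steps above (paths are illustrative, matching the GLS repo sample above):
mkdir -p /var/www/html/gls/RPMS
cp *.rpm /var/www/html/gls/RPMS/
createrepo -v /var/www/html/gls/RPMS <-- writes the repodata subdirectory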
redhat network
up2date
redhat network server
rhn proxy server
rhn satellite server
rhn accounts
rhn entitlements
software channels
base channel
child channels
define level of service
update
management
provisioning
monitoring
rhn client
[ ] UNIT 3 - KERNEL SERVICES
the linux kernel (core part of linux OS)
kernel duties:
- system initialization
- process scheduling
- memory management
- security
- provides buffers & caches to speed up hardware access
- implements standard network protocols & filesystem formats
kernel images & variants
/boot/vmlinuz-*
architectures supported:
x86
x86_64
ia64/itanium
powerpc64
s390x
(3) three kernel versions available for x86
regular (supports SMP)
memory support limited to 4GB
memory limit per process 3GB
PAE
memory support limited to 16GB (on processors that support PAE, almost all except some early Pentium M)
memory limit per process 4GB (virtual memory space)... 3GB (of which available to user-space code & data)
Xen (Dom0..DomU(3))
each domain limited to RAM 16GB
physical machine RAM limit 64GB
NOTE:
HUGEMEM kernel, not available on RHEL5.. must switch to x86-64, then you'll have following supported:
processors 64
memory support limited to 256GB
memory limit per process 512GB
kernel modules
/lib/modules/$(uname -r)
kernel modules utilities
/etc/modprobe.conf
lsmod
modprobe
modprobe -r
modinfo
initrd (specified in grub.conf.. must match the exact filename)
to rebuild the initrd so that module "usb_storage" will be loaded early on boot:
# mkinitrd --with=usb_storage /boot/initrd-$(uname -r).img $(uname -r)
/dev
managing /dev with udev
determine:
- filenames
- permissions
- owners and groups
- commands to execute when a new device shows up
/etc/udev/rules.d
add this line to the new file "99-usb.rules":
KERNEL=="sdc1", NAME="myusbkey", SYMLINK="usbstorage"
mknod /dev/myusbkey b 8 0 <-- not persistent
MAKEDEV
/proc
/etc/sysctl.conf
sysctl -a
sysctl -p
sysctl -w
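e.g. toggling IP forwarding at runtime and persistently:
sysctl -w net.ipv4.ip_forward=1 <-- runtime only
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf <-- persistent
sysctl -p <-- reload settings from /etc/sysctl.conf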
exploring hardware devices
hal-device
hal-device-manager
lspci
lsusb
monitoring processes & resources
[ ] UNIT 4 - SYSTEM SERVICES
network time protocol
/etc/ntp.conf
system-config-date
ntpdate <-- reset clock manually
- ntp clients should use 3 time servers; this allows clients to reject bogus synchronization messages if one of the servers' NTP daemons or clocks malfunctions
- NTP counters the drift by manipulating the length of a second
system logging
centralized logging daemons:
SYSLOGD (system logging)
KLOGD (intercepts kernel messages & passes them to syslogd)
/etc/rc.d/init.d/syslog <-- system V script SYSLOG controls both the syslogd & klogd daemons
/etc/syslog.conf <-- configures system logging; each selector has an associated facility and severity
/etc/sysconfig/syslog <-- sets switches used when starting syslogd & klogd from the system V initialization script
messages can be logged to:
- files
- broadcast to connected users
- written to console
- transmitted to remote logging daemons across the network
setup remote logging:
on the logging server edit /etc/sysconfig/syslog.. SYSLOGD_OPTIONS="-r -m 0"
restart service
on client edit /etc/syslog.conf
add this.. user.* @<ip of log server>
restart service
logger -i -t oracle "this is a test"
check /var/log/messages on log server
log format (has four main entries):
date & time
hostname where the message came
name of application or subsystem where the message came
actual message
XOrg: the X11 server
- open source implementation of X11
- XOrg consists of one core server with dynamically loaded modules
drivers: ati, nv, mouse, keyboard, etc.
extensions: dri, glx, extmod
- font rendering
native server: xfs (a separate service)
fontconfig/xft libraries (more efficient, implemented within the XOrg core server; will soon replace xfs)
www.x.org (x consortium) <-- creates reference implementation of X under an open source license
xorg.freedesktop.org <-- adds hardware drivers for a variety of video cards & input devices, along with several software extensions
wiki.x.org
CLIENT --> X --> VIDEO CARD <-- x provides a standard way in which applications, x clients, may display & write on the screen
/var/log/Xorg.0.log <-- logfile
/usr/share/fonts & $HOME/.fonts <-- to add non-default fonts, xft spawns "fc-cache" & reads the contents
"no-listen = tcp" <-- comment this parameter on xfs config file to accept network connections (otherwise is default)
- network font servers listen on TCP port 7100
XOrg server configuration
system-config-display <-- best results while in runlevel 3, to run an X client to be displayed on a remote system, no local server config is necessary
--noui
--reconfig
/etc/X11/xorg.conf
XOrg in runlevel 3
/usr/X11R6/bin/xinit <-- two methods to establish the environment
/usr/X11R6/bin/startx
environment configuration (runlevel 3):
/etc/X11/xinit/xinitrc & ~/.xinitrc
/etc/X11/xinit/Xclients & ~/.Xclients
/etc/sysconfig/desktop
XOrg in runlevel 3:
1) startx will pass control of X session to "/etc/X11/xinit/xinitrc" unless "~/.xinitrc" exists
reads additional system & user config files:
resource files:
/etc/X11/Xresources & $HOME/.Xresources
input devices:
/etc/X11/Xkbmap & $HOME/.Xkbmap
/etc/X11/Xmodmap & $HOME/.Xmodmap
xinitrc then runs all shell scripts in
/etc/X11/xinit/xinitrc.d
xinitrc then turns over control of the X session to ~/.Xclients if not existing, /etc/X11/xinit/Xclients
2) /etc/X11/xinit/Xclients reads /etc/sysconfig/desktop
if unset then it will attempt to run the following in order:
Gnome
KDE
twm (failsafe mode - xclock, xterm, mozilla)
Example input on file (/etc/sysconfig/desktop):
DISPLAYMANAGER="GNOME"
DESKTOP="GNOME"
environment configuration (runlevel 5)
/etc/inittab
/etc/sysconfig/desktop
/etc/X11/xdm/Xsession
XOrg in runlevel 5
1) if /etc/inittab is runlevel 5, then /sbin/init will run /etc/X11/prefdm (invokes the X server & the display manager per /etc/sysconfig/desktop)
when display manager is started:
/etc/X11/xdm/Xsetup_0, before display manager presents a login widget
2) once authenticated, /etc/X11/xdm/Xsession is run (similar to startx in runlevel 3)
Remote X sessions (X protocol is unencrypted)
host-based sessions: implemented through xhost
user-based sessions: implemented through Xauthority mechanism
sshd may automatically install xauth keys on remote machine
xhost +trustedhost
xhost -friendlyhost
xhost + <-- this is dangerous
$HOME/.Xauthority <-- contains users allowed to use local display
ssh -Y remote-host <-- tunnel SSH, user-based session
SSH: Secure Shell
can tunnel X11 and other TCP based network traffic
# ssh -L 8080:remote-server:80 user@ssh-server <-- tunnel TCP traffic between the SSH server & client, redirect port 8080 of the local system to port 80
of the remote server, by pointing your web browser to http://localhost:8080 you will access the webpage
on remote-server:80.. you can also do this on VNC
VNC: Virtual Network Computing
uses less bandwidth than pure remote X desktops
server can automatically be started via /etc/init.d/vncserver
vncserver
runs $HOME/.vnc/xstartup
vncviewer host:screen
unique screen numbers distinguish between multiple VNC servers on the same host
SSH tunneling: vncviewer -via user@host <localhost>:1
the first client can allow multiple connections: -Shared
can also be "view-only" for demos
CRON
crond daemon
man 5 crontab
/etc/cron.allow
/etc/cron.deny <-- only this one exists on my system
cron access control:
if neither cron.allow nor cron.deny exist only root is allowed to install new crontab
if only cron.deny exists, all users except those listed in cron.deny can install crontab files
if only cron.allow exists, root and all listed users can install crontab files
if both files exist cron.deny is ignored
NOTE: denying a user through cron.allow & cron.deny does not disable their currently installed crontab
system crontab files
/etc/crontab <-- master system crontab file
/etc/cron.hourly
/etc/cron.daily
/etc/cron.weekly
/etc/cron.monthly
/etc/cron.d/ <-- contains additional system crontab files
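crontab entry format, for reference (script path illustrative):
# minute hour day-of-month month day-of-week command
30 2 * * 1-5 /usr/local/bin/backup.sh <-- runs at 02:30 every weekday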
logrotate sample for oracle logfiles (rotates logs larger than 100MB, keeping 7 rotated copies):
[root@centos5-11g logrotate.d]# pwd
/etc/logrotate.d
[root@centos5-11g logrotate.d]# cat oracle
/u01/app/oracle/diag/rdbms/ora11/ora11/alert/*xml {
daily
rotate 7
missingok
size 100M
}
run-parts <directory> <-- command that runs all scripts on a directory
daily cron jobs
tmpwatch <-- cleans old files in /tmp
logrotate <-- rotates logs /etc/logrotate.conf
logwatch <-- system log analyzer and reporter
ANACRON
- runs cron jobs that did not run when the computer is down
/etc/anacrontab
contents:
1 65 cron.daily run-parts /etc/cron.daily
7 70 cron.weekly run-parts /etc/cron.weekly
30 75 cron.monthly run-parts /etc/cron.monthly
field 1: if the job has not been run in this many days..
field 2: wait this number of minutes after reboot and then run it
field 3: job identifier
field 4: job to run
how it works:
when /etc/crontab run cron jobs.. 0anacron is run first, sets a timestamp in /var/spool/anacron/* that notes the time it was last run
when a server was down for X number of days then starts up again, anacron will read the anacrontab.. then compare the timestamp to /var/spool/anacron/*
if verified..then it will run the job for the next X minutes indicated in /etc/anacrontab
CUPS (uses internet printing protocol)
- allows remote browsing of printer queues
- based on HTTP/1.1
- uses PPD files to describe printers
- only members of the SYS group can access the web-based admin interface
/etc/cups/cupsd.conf
/etc/cups/printers.conf <-- automatically generated by printer tools
system-config-printer
web based: localhost:631
cli: lpadmin
documentation: /usr/share/doc/<cups>
[ ] UNIT 5 - USER ADMINISTRATION
adding a new user account
useradd
passwd
newusers <-- add in batch, drawback is user home directories are not populated with files from /etc/skel
chpasswd
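batch-add sketch (usernames/uids illustrative).. newusers reads passwd-style lines name:passwd:uid:gid:gecos:homedir:shell
# cat > /tmp/batch.txt <<'EOF'
alice:RedHat123:601:601:Alice:/home/alice:/bin/bash
bob:RedHat123:602:602:Bob:/home/bob:/bin/bash
EOF
# newusers /tmp/batch.txt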
user private groups
modifying/deleting user accounts
usermod
group administration
groupadd
groupmod -n staff employee <-- will rename the group, all affected users will use the new name as well as the files
groupdel
password aging policies
- by default password never expires.. you can edit the /etc/login.defs to adjust defaults
chage
lchage
[root@centos5-11g ~]# date
Fri Aug 15 09:21:36 PHT 2008
[root@centos5-11g ~]# chage -M 3 -m 2 -W 2 -I 2 -E 2008-08-21 kathy
Last password change : Aug 15, 2008
Password expires : Aug 18, 2008
Password inactive : Aug 20, 2008
Account expires : Aug 21, 2008
Minimum number of days between password change : 2
Maximum number of days between password change : 3
Number of days of warning before password expires : 2
network users
info about users may be stored & managed on a remote server
two types of info must always be provided for each user account
account info <-- controlled by NSS (NAME SERVICE SWITCH)
authentication <-- controlled by PAM (PLUGGABLE AUTHENTICATION MODULES), encrypts passwords on login & compare it to password provided by NSS
authentication configuration
system-config-authentication
authconfig-tui (text based)
authconfig-gtk (GUI)
SUPPORTED ACCOUNT INFORMATION SERVICES:
(local files)
NIS <-- gets info from database maps stored on NIS server
LDAP <-- entries on LDAP directory server
Hesiod <-- stores info as special resources in a DNS name server, its use is relatively uncommon
Winbind <-- uses winbindd to automatically map accounts stored in Windows domain controller to Linux by storing SID to UID/GID mappings in a
database & automatically generating any other NSS info that is required
SUPPORTED AUTHENTICATION MECHANISMS:
(NSS)
kerberos <-- authenticates by requesting a ticket (from the server), if user's password decrypts the ticket.. he is authenticated
ldap <-- username, password on LDAP directory server
smartcards <-- use smartcards, also to lock the system
smb <-- uses Windows domain controller
Winbind <-- uses Windows domain controller
Example: NIS CONFIGURATION (not encrypted)
RPMS:
ypserv (server)
ypbind (client)
yp-tools
portmap
system-config-authentication
ypserv (running on server)
rpc.yppasswdd <-- allows NIS clients to update the passwords on NIS
ypbind (running on clients to share info with server)
portmap
what does this actually do? (five text files changed)
/etc/sysconfig/network <-- specify NIS domain
/etc/yp.conf <-- specify which server to use for NIS domain
/etc/nsswitch.conf <-- specify NIS as source of info for password, shadow, group
/etc/sysconfig/authconfig <-- specify "USENIS=yes"
/etc/pam.d/system-auth-ac <-- password changes for NIS accounts will be sent to rpc.yppasswdd (running on master)
NIS is relatively insecure.. can be used with KERBEROS
alternative is LDAP protected with TLS (SSL)..
############################ NIS AUTOMOUNTER CONFIGURATION STEP BY STEP - START ############################
RPMS:
ypserv (server)
ypbind (client)
yp-tools
portmap
NFS SERVER
1) configure NFS server, edit /etc/exports
/rhome/station12 172.24.0.12(rw,sync)
2) # exportfs -a
3) Make sure the required NFS, NFSLOCK, and PORTMAP services are there & started
NFS CLIENT
1) Make sure the required NETFS, NFSLOCK, AND PORTMAP daemons are there & started
2) test mounting the remote home directory
mount -t nfs 172.24.254.254:/rhome/station12 /rhome
3) edit the auto.master file that will refer to auto.home
#/etc/auto.master
/rhome /etc/auto.home --timeout=60
4) edit auto.home
#/etc/auto.home
nisuser12 172.24.254.254:/rhome/station12/& -nosuid
5) start autofs
# service autofs start
NIS SERVER
1) edit /etc/sysconfig/network
[root@server1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=server1.example.com
GATEWAY=172.24.254.254
NISDOMAIN=RHCE
2) edit /etc/yp.conf
# /etc/yp.conf - ypbind configuration file
ypserver 127.0.0.1
3) restart necessary daemons
portmap <-- the foundation RPC daemon upon which NIS runs
yppasswdd <-- lets users change their passwords on the NIS server from NIS clients
ypserv <-- main NIS server daemon
4) make sure daemons are running
# rpcinfo -p localhost
5) initialize NIS domain
# /usr/lib/yp/ypinit -m
6) restart ypbind and ypxfrd
ypbind <-- main NIS client daemon
ypxfrd <-- used to speed up the transfer of very large NIS maps
7) make sure daemons are running
# rpcinfo -p localhost
8) add new users
[root@server1 yp]# useradd -d /rhome/station12/nisuser12 nisuser12
[root@server1 yp]# usermod -d /rhome/nisuser12 nisuser12
[root@server1 ~]# ypcat passwd
nisuser12:$1$1C1UkauJ$ASV7yuHKhMsspBx6SVhpO/:500:500::/rhome/nisuser12:/bin/bash
[root@server1 ~]# getent passwd nisuser12
nisuser12:x:500:500::/rhome/nisuser12:/bin/bash
[root@server1 ~]# ypmatch nisuser12 passwd
nisuser12:$1$1C1UkauJ$ASV7yuHKhMsspBx6SVhpO/:500:500::/rhome/nisuser12:/bin/bash
NIS CLIENT
1) system-config-authentication
- will create yp.conf
- define /etc/sysconfig/network NISDOMAIN
- updates /etc/nsswitch.conf, place NIS
2) start necessary daemons
portmap
ypbind
3) make sure daemons are running
# rpcinfo -p localhost
4) configure /etc/hosts, include both servers
5) test NIS access to the NIS server
[root@station12 ~]# ypcat passwd
nisuser12:$1$1C1UkauJ$ASV7yuHKhMsspBx6SVhpO/:500:500::/rhome/nisuser12:/bin/bash
[root@station12 ~]# getent passwd nisuser12
nisuser12:x:500:500::/rhome/nisuser12:/bin/bash
[root@station12 ~]# ypmatch nisuser12 passwd
nisuser12:$1$1C1UkauJ$ASV7yuHKhMsspBx6SVhpO/:500:500::/rhome/nisuser12:/bin/bash
6) edit /etc/nsswitch.conf..arrange the nis value
7) restart sshd
8) test login
# ssh -l nisuser12 172.24.0.12
############################ NIS AUTOMOUNTER CONFIGURATION STEP BY STEP - END ############################
Example: LDAP CONFIGURATION (recommend to use TLS (SSL))
RPMS:
nss_ldap
openldap
what does this actually do? (five text files changed)
/etc/ldap.conf <-- specify location of LDAP, & TLS used
/etc/openldap/ldap.conf <-- specify location of LDAP
/etc/nsswitch.conf <-- source of info for password, shadow, group
/etc/sysconfig/authconfig <-- specify "USELDAPAUTH=yes", "USELDAP=yes"
/etc/pam.d/system-auth-ac <-- PAM will use directory to authenticate
ldapsearch -x -Z <-- if the server is reachable, will dump user info in LDIF format
openssl s_client -connect 192.168.203.26:636 <-- TLS can be tested by openssl s_client
switching accounts
su -
su - root -c "free -m" <-- run a command
sudo
/etc/sudoers
visudo <-- to edit the /etc/sudoers
sample:
User_Alias LIMITEDTRUST=student1,student2
Cmnd_Alias MINIMUM=/etc/rc.d/init.d/httpd
LIMITEDTRUST ALL=MINIMUM <-- student1,student2 can use sudo with commands listed in MINIMUM
SUID & SGID EXECUTABLES
for security reasons, SUID & SGID are not honored when set on non-compiled programs..such as shell scripts
SGID DIRECTORIES
STICKY BIT
/tmp has sticky bit.. users can only delete their respective files..
FOR COLLABORATION (SECURED):
as root user, make a group and directory.. then grant
chmod 3770 <directory>
drwxrws--T 2 oracle collaboration 4096 Aug 16 22:45 collaboration <-- viewable by collaboration members; each member can only delete
their own files, except root & oracle (owner of the folder).. also umask should be 022
so that files created in the collaboration folder are read-only for other users
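illustrative setup of such a collaboration directory (group name & path assumed):
# groupadd collaboration
# mkdir /home/collaboration
# chgrp collaboration /home/collaboration
# chmod 3770 /home/collaboration <-- SGID + sticky: files inherit the group, users delete only their own
# usermod -aG collaboration kathy <-- add members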
DEFAULT FILE PERMISSIONS
umask is 022 for root & any system account (uid < 100)
umask is 002 for regular users (uid > 99) provided the primary group is the user private group, else 022
ACCESS CONTROL LISTS (ACLs)
- if you want to allow additional access to other groups or a particular user on a particular directory or file.. this is very useful
- if you set "rx" on /depts/tech and set default "rw" on /depts/tech for a user.. he'll not be able to create new files, but can edit existing files in that directory
drwxrws---+ 2 root hr 4096 Aug 16 23:51 tech <-- it will show + sign if contains ACL
-rw-rw----+ 1 manager hr 0 Aug 16 23:51 test
NOTE: filesystems created during installation are automatically mounted with the ACL option; filesystems created after installation must be specifically mounted with the ACL option
mount -o remount,acl /home <-- to enable ACL on a filesystem
getfacl /home/schedule.txt <-- view ACL
setfacl -m u:visitor:rx /home/schedule.txt <-- grant visitor rx access to file
setfacl -x u:visitor:x /home/schedule.txt <-- remove execute
setfacl -m d:u:visitor:rw /home/share/project <-- set default ACL on a directory
setfacl -m u:visitor:--- /home/share/project <-- to not have read/write/execute access to a file
SELINUX
NSA (national security agency).. first implementation of MAC was a system called Mach..
later.. they implemented it on the Linux kernel as patches, which became known as SELinux
MAC mandatory access control
Type Enforcement (assign values to files, directories, resources, users, processes)
DAC discretionary access control
policy <-- rule set, defines which resources a restricted process is allowed to access, any action that is explicitly allowed, by default denied
restricted/unconfined <-- processes category
security context <-- all files & processes have this
elements of context:
user
role
type
sensitivity
category
ls -Z <filename> <-- to view security context of a file
ls -Zd <directory>
ps -eZ <-- view entire process stack
ps Zax
RHEL4 protecting 13 processes
RHEL5 protecting 88 processes
SELinux targeted policy
most local processes are unconfined
chcon -t tmp_t /etc/hosts <-- security context can be changed
restorecon /etc/hosts <-- restore default
strict policy
targeted policy
SELINUX= can take one of these three values:
enforcing - SELinux security policy is enforced.
permissive - SELinux prints warnings instead of enforcing.
disabled - SELinux is fully disabled.
SELINUXTYPE= type of policy in use. Possible values are:
targeted - Only targeted network daemons are protected. <-- DEFAULT
strict - Full SELinux protection.
SELinux management
getenforce
system-config-securitylevel <-- disabling requires reboot
system-config-selinux
/var/log/audit/audit.log <-- default logfile for SELinux
setroubleshootd
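quick SELinux check sketch (commands are standard; output abbreviated):
# getenforce <-- Enforcing / Permissive / Disabled
# setenforce 0 <-- switch to permissive at runtime (1 = enforcing)
# sestatus <-- summary of mode & loaded policy
# ausearch -m avc -ts recent <-- query AVC denials from /var/log/audit/audit.log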
[ ] UNIT 6 - FILESYSTEM MANAGEMENT
overview: adding new filesystems to filesystem tree
identify device
partition device
make filesystem
label filesystem
create entry in /etc/fstab
mount new filesystem
device recognition
MBR contains:
- executable code to load operating system
- contains structure describing the hard drive partitions
partition id or type
starting cylinder for partition
number of cylinders for partition
four primary partitions, one could be extended (will have a separate partition descriptors on the first sector of the partition)
some linux partition types:
5 or f extended
82 linux swap
83 linux
8e linux LVM
fd linux RAID auto
disk partitioning
total max number of partitions supported by the kernel:
IDE devices 63
SCSI devices 15
/usr/share/doc/kernel-doc-2.6.18/Documentation/devices.txt <-- list of devices
why partition devices?
containment
performance
quota <-- implemented on filesystem level
recovery
managing partitions
fdisk
sfdisk <-- more accurate
GNU parted
partprobe <-- at system bootup, kernel makes its own in-memory copy of the partition tables from disk.. FDISK edits on-disk copy of partition tables
to update the in-memory copies..run this
making filesystems
mkfs <-- front end or wrapper to various filesystem creation programs; if -t is ext3, then it will look for mkfs.ext3.. and so on..
mkfs.ext2, mkfs.ext3, mkfs.msdos
mke2fs <-- when you do "man mkfs.ext3" this is called
-L to add label
mkfs.ext3 -L opt -b 2048 -i 4096 <device> <-- creates ext3 filesystem on a new partition,
use 2KB sized blocks
& one inode per every 4KB of disk space (should not be lower than block size)
& label of "opt"
filesystem labels
e2label <device> <label>
mount <options> LABEL=<fs label>
blkid <-- can be used to see labels and filesystem type of all devices
sample:
[root@centos5-11g ~]# blkid /dev/sda3
/dev/sda3: LABEL="/" UUID="d86726ee-0f6c-455f-b6ff-af20fef3c941" SEC_TYPE="ext2" TYPE="ext3"
[root@centos5-11g ~]# blkid /dev/mapper/vgsystem-lvu01
/dev/mapper/vgsystem-lvu01: UUID="18602cdd-8219-4377-bac3-7617ec090d8d" SEC_TYPE="ext2" TYPE="ext3"
e2label /dev/mapper/vgsystem-lvu01 u01
e2label /dev/mapper/vgsystem-lvu01
mount LABEL=u01 /u01
tune2fs (adjust filesystem parameters) <-- can also be used to add journal to ext2 filesystem first created with mke2fs
reserved blocks
default mount options
fsck frequency
tune2fs -m 10 /dev/sda1 <-- modify percentage of reserved blocks
tune2fs -o acl,user_xattr /dev/sda1 <-- modify mount options
tune2fs -i0 -c0 /dev/sda1 <-- modify filesystem checks
dumpe2fs <-- view current settings of a filesystem
MOUNT POINTS AND /ETC/FSTAB
used to create the filesystem hierarchy on boot up
contains six fields per line
floppy & cd-rom have noauto as an option <-- Can only be mounted explicitly (i.e., the -a option will not cause the file system to be mounted)
fields in fstab:
device
mount point
fs type
mount options
dump freq
NOTE by Charlie:
Determines whether the dump command (used for backup) needs to
backup the filesystem. 1 for yes, zero (0) for no. The dump command is
for ext2 file systems only. Do not use it for ext3 file systems; Linus
Torvalds himself discourages the use of the dump command on ext3
filesystems due to certain technical issues.
1 daily
2 every other day
fsck order <-- NFS and cd-rom should be ignored
0 ignore
1 first (must for /)
2-9 second
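sample /etc/fstab lines showing the six fields (devices/labels illustrative):
# device              mountpoint  type  options       dump  fsck
LABEL=u01             /u01        ext3  defaults,acl  1     2
server1:/var/ftp/pub  /mnt/pub    nfs   soft,intr     0     0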
mounting filesystems
mount
-t
-o
default option is: rw,suid,dev,exec,async
reads /etc/mtab if invoked w/o arguments.. <-- display currently mounted filesystems
mount options for EXT3:
rw
suid <-- suid or sgid file modes are honored
dev <-- devices files permitted
exec <-- permit execution of binaries
async <-- file changes managed asynchronously
acl <-- POSIX ACLs are honored
uid=henry, gid=henry <-- all files mounted are owned by
loop <-- using a loopback device
owner <-- similar to user option, but in this case the mount request and the device, or special file, must be owned by the same EUID
unmounting filesystems
umount -a <-- references /etc/mtab
fuser -v <mount point> <-- to show users accessing the mount point
ps -aux | grep \/u01\/app <-- another way
fuser -km <mount point> <-- send kill signal to the process
kill <process> <-- dangerous
mount -o remount,ro /u01 <-- remounts to read only
MOUNT BY EXAMPLE
mount -t ext3 -o noexec /dev/hda7 /home <-- for security, denying permission to execute files
mount -t iso9660 -o loop /iso/documents.iso /mnt/cdimage <-- mount cd drive
mount -t vfat -o uid=515,gid=520 /dev/hdc2 /mnt/projx <-- mount vfat, owner is 515
mount -t ext3 -o noatime /dev/hda2 /data <-- Do not update inode access times on this file system (e.g., for faster access on the news spool to speed up news servers)
mount --bind /u01 /u02 <-- Since Linux 2.4.0 it is possible to remount part of the file hierarchy somewhere else
handling SWAP partitions and files (supplement to system RAM)
step by step:
1) create a swap partition or file
2) make the file system type swap (for partition only)
3) write a special signature using.. mkswap
4) add entry to /etc/fstab
5) activate swap.. swapon -a
setting up swap file
dd if=/dev/zero of=swapfile bs=1024 count=X <-- X is the number of blocks; with bs=1024 each block is 1KB (bs could also be given in MB)
then.. mkswap
then.. add to /etc/fstab
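end-to-end swap file sketch (size & path illustrative):
# dd if=/dev/zero of=/swapfile bs=1024 count=524288 <-- 512MB
# mkswap /swapfile
# swapon /swapfile
# add to /etc/fstab for persistence: /swapfile swap swap defaults 0 0
# swapon -s <-- verify active swap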
MOUNTING NFS FILESYSTEMS
make remote filesystem as though it were a local filesystem
/etc/fstab for persistent network mounts
<server>:</path/of/dir> </local/mnt/point> nfs <options> 0 0
/etc/init.d/netfs <-- NFS shares are mounted at boot time
exports can be mounted manually
1) check NFS service on host server
2) edit /etc/exports file on host server.. /var/ftp/pub 192.168.203.25(rw)
3) service nfs reload
4) mount -t nfs <host server>:/var/ftp/pub /mnt/server1
some nfs mount options (combined example below):
rsize=8192 and wsize=8192 will speed up NFS throughput
soft return with an error on a failed I/O attempt
hard will block a process that tries to access an unreachable share
intr interrupt or kill if server is unreachable
nolock disable file locking (lockd), & allow inter operation with older NFS servers
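the options combine on one mount line, e.g. (server & paths illustrative):
# mount -t nfs -o rsize=8192,wsize=8192,hard,intr,nolock server1:/var/ftp/pub /mnt/pub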
AUTOMOUNTER (autofs)
/etc/auto.master <-- provides directory /misc
/etc/auto.misc <-- configuration file listing the filesystem to be mounted under the directory
"autofs" deamon
- filesystems automatically unmounted after a specified interval of inactivity
- enable the special map "-hosts" to browse all NFS exports on the network
- supports wildcard directory names
sample:
add this on /etc/auto.misc
server1 -ro,intr,hard 192.168.203.26:/var/ftp/pub
then
service autofs reload
then
cd /misc/server1
then
[oracle@centos5-11g server1]$ ls -l
total 12
-rw-r--r-- 1 root root 9 Aug 17 2008 test1
-rw-r--r-- 1 nfsnobody nfsnobody 9 Aug 17 2008 test2
-rw-r--r-- 1 root root 13 Aug 17 2008 test3
Wildcard Key
A map key of * denotes a wild-card entry. This entry is consulted if the specified key does not exist in the map. A typical wild-card
entry looks like this:
* server:/export/home/&
The special character '&' will be replaced by the provided key. So, in the example above, a lookup for the key 'foo' would yield a
mount of server:/export/home/foo.
DIRECT MAPS (absolute path names)
- does not obscure local directory structure
- referenced in /etc/auto.master
on /etc/auto.master
/- /etc/auto.direct
on /etc/auto.direct
/foo server1:/export/foo
/usr/local/ server1:/usr/local
GNOME-MOUNT
gnome-mount
- automatically mounts removable devices
- integrated with HAL (hardware abstraction layer)
- replaces fstab-sync (RHEL4)
[ ] UNIT 7 - ADVANCED FILESYSTEM MANAGEMENT
configure QUOTA system
- implemented within the kernel
- enabled per filesystem basis
- individual policies for groups or users
limit number of blocks or inodes
implement soft & hard limit
step by step implementation:
1) LABEL=/home /home ext3 defaults,usrquota,grpquota 1 2 <-- edit fstab
2) mount -o remount -v /home <-- remount
3) # quotacheck -cug /home <-- create quota files
4) # quotaon -vug /home <-- activate quota
5) # edquota <username> <-- edit user's quota
Filesystem specifies in which quota-enabled filesystem the quota would be
set. The blocks column specifies the number of blocks, in kilobytes, that lisa
currently owns. The soft field specifies the block soft limit. The hard field
specifies the block hard limit. The inodes column specifies the number of inodes
that are owned by lisa. The soft field specifies the inode soft limit. The hard field
specifies the inode hard limit.
The settings shown will give a soft block limit of 10MB and a hard block limit of
12MB to lisa. Soft limits may be exceeded for a certain grace period. Hard limits
may not be exceeded.
6) # edquota -t <-- To modify the grace period for users,
As we can observe, the default grace period for users is 7 days. The countdown for
the grace period is initiated as soon as the soft limit is breached. After the grace
period, the user will be forced to free space so that his utilization falls below the
soft limit.
7) # edquota -p lisa tony rose <-- To make lisa's quota settings be the prototype for other users
GROUP QUOTAS
8) # edquota -g training <-- To assign group quotas
9) # edquota -tg <-- To modify the grace period for group quotas
10) # edquota -g -p training finance accounting <-- To make the training group's quota settings be the prototype for other groups
Summarizing Quotas for a Filesystem
11) # repquota -aug | less <-- summarize user & group quotas for all filesystems
We are presented with two (2) tables. The table on top is the summary for
user quotas. It specifies on which filesystem it is for. It also specifies the grace
period for both block and inode limits.
The first column is for the user name.
The next two (2) columns could be either a plus (+) or a minus (-). A + on the
left indicates that the block soft limit has been breached. A + on the right
indicates that the inode soft limit has been breached.
The next four (4) columns are for the disk utilization. 'used' specifies the
number of blocks currently used. 'soft' specifies the soft limit. 'hard'
specifies the hard limit. 'grace' specifies the remaining time from the grace
period.
The table on the lower part of the screen is for the group quota summary.
Keeping Quota Information Accurate (put script in /etc/rc.local)
12)
#!/bin/bash
# File name: /etc/cron.daily/quotacheck.sh or /home/oracle/bin/quotacheck.sh
#
# This script performs a quotacheck
/sbin/quotaoff -vug -a &> /home/oracle/offerror.txt; cat /home/oracle/offerror.txt | mail -s "quotaoff done" root@localhost
/sbin/quotacheck -vugm -a &> /home/oracle/checkerror.txt; cat /home/oracle/checkerror.txt | mail -s "quotacheck done" root@localhost
/sbin/quotaon -vug -a &> /home/oracle/onerror.txt; cat /home/oracle/onerror.txt | mail -s "quotaon done" root@localhost
It is important to quotacheck after the filesystem has been unmounted
uncleanly, like in the unlikely event of a system crash. Also, quotacheck should be
run every time the system boots.
reporting:
user inspection:
quota
quota overviews:
repquota
miscellaneous utilities:
warnquota <-- mail to users that reached their soft limit
FSCK - file system check, MUST BE UNMOUNTED
> fsck.ext3 -cv /dev/vgsystem/lvtmp <--- check ONLY for bad blocks and verbose then press "Y", if you want to auto repair then add -p switch
-p autorepair
-c check bad blocks
> fsck.vfat -av /mnt/fat32 <--- check for bad blocks and verbose on FAT filesystem
SOFTWARE RAID (mdadm)
- multiple disks grouped together into "arrays" to provide better performance, redundancy, both
- raid levels supported:
raid 0 <-- stripe
raid 1 <-- mirror, only raid type that you can place /BOOT partition
raid 5 <-- 3 or more disks, with 0 or more hot spares.. not good for databases
raid 6 <-- striping with dual (duplicated) distributed parity.. similar to raid 5 except that it improves fault tolerance by allowing
the failure of any two drives in the array
protects against data loss during recovery from a single disk failure, giving the administrator additional time to rebuild
- spare disks add redundancy
- all the disks should be identical, size & speed
- partition type Linux RAID
/proc/mdstat
#create raid partitions
/dev/sdb/
> fdisk sdb1 (raid partition)
/dev/sdc/
> fdisk sdc1 (raid partition)
/dev/sdd/
> fdisk sdd1 (raid partition)
#create raid array
> mdadm --create /dev/md0 -a yes -l 1 -n 2 /dev/sdb1 /dev/sdc1 <--- create raid1 array, "-a yes" instructs udev to create the md device file if it doesn't already exist
> mdadm --misc --detail /dev/md0 <--- to view the detail of your raid
> mkfs.ext3 -v /dev/md0 <--- then format it
> mkdir -p /mnt/md0 <--- make mountpoint
> edit fstab and add /mnt/md0
also..create /dev/md0 with 3 disks and 2 spares
> mdadm --create /dev/md0 -l 5 -n 3 /dev/sdd1 /dev/sde1 /dev/sdf1 -x 2 /dev/sdg1 /dev/sdh1
other options:
--chunk=64
mke2fs -j -b 4096 -E stride=16 /dev/md0 <-- make ext3, "-E stride" can improve performance, it's the software raid device's chunk size in filesystem blocks
for example, with an ext3 filesystem that has a 4KB block size on a raid device with a chunk size
of 64KB, the stride should be set to 16 (64KB / 4KB).. so determine what chunk size you want, then divide by the block size..
#to add a hot spare
> mdadm --manage /dev/md0 -a /dev/sdd1
> mdadm --misc --detail /dev/md0
#to re-add a device that was recently removed from an array
> mdadm --manage /dev/md0 --re-add /dev/sdd1
#to fail an array
> mdadm --manage /dev/md0 -f /dev/sdb1
> mdadm --misc --detail /dev/md0
#to get an overview of all the raid array
> cat /proc/mdstat
#then the hot spare kicks in, and the faulty device is unusable (must unregister it from the array)
> to remove the faulty device
> mdadm --manage /dev/md0 -r /dev/sdb1
#to dismember the /dev/md0
> unmount
> erase in fstab
> mdadm --manage --stop /dev/md0
> mdadm --manage /dev/md0 -r /dev/sdb1 /dev/sdb1 /dev/sdb1 <-- could be in RHEL4
> mdadm --misc --zero-superblock /dev/sdd1 <-- erase the MD superblock from a device.. do this on all devices (RHEL5)
because if you dont do this, md1 raid will still show w/o valid partition
LVM, logical volume management
- physical devices can be added & removed with relative ease
- partition type Linux LVM
logical volumes
lvcreate
volume groups
vgcreate
physical volumes
pvcreate
linux partitions
#create physical volumes
> pvcreate /dev/hda3
#assign physical volumes to volume group, you can also extend existing volume group.. vgextend
> vgcreate vgsystem /dev/hda3
#create logical volume
> lvcreate -l 83 -n u01 vgsystem
> lvcreate -L 500M -n u01 vgsystem
#"STRIPE like RAID0" logical volumes accross physical volumes, ideal if PVs are contained on separate disks
> lvcreate -i 2 -L 1G -n u01 vgsystem <-- stipes to (2) PVs, with default stripe size
NOTE:
a striped logical volume may be extended later, but only with extents from the original PVs
also, as an alternative.. you can choose which physical volume you want LV to be assigned.. see manpage
#display the logical volumes and allocated extents
> lvdisplay -vm <logical_volume> <-- to show what are the used physical volumes
> ext2online -C d /dev/vgsystem02/lvu01 <-- grow an ext3 filesystem online (while mounted)
#GROW in "extents" & "size"
> lvextend -l +83 /dev/vgsystem02/lvu01 <-- (grow logical volume) lvextend is extend, lvreduce to reduce
> lvextend -L +500M /dev/vgsystem/u01 <-- binary is in /usr
> umount <filesystem>
> resize2fs -p /dev/vgsystem/u01 <-- (grow filesystem), binary is in /sbin
#SHRINK LVM
> 760MB (used space) * 1.1 ≈ 840M <-- size the filesystem ~10% above used space
> umount /u01 <-- must be unmounted
> e2fsck -f /dev/vgsystem02/lvu01 <-- Force checking even if the file system seems clean
> resize2fs -p /dev/vgsystem02/lvu01 840M <-- (shrink filesystem)
> 760MB (used space) * 1.2 ≈ 900M <-- keep the LV ~20% above used space (larger than the filesystem)
> lvreduce -L 900M /dev/vgsystem02/lvu01 <-- (shrink logical volume)
> dumpe2fs /dev/vgsystem02/lvu01 <-- to view the info about the LVM or partition
http://askubuntu.com/questions/196125/how-can-i-resize-an-lvm-partition-i-e-physical-volume
http://microdevsys.com/wp/linux-lvm-resizing-partitions/
#
> e2fsadm <-- counterpart of resize2fs and lvresize in RedHat3
#to move the logical data to another PV
> vgextend
> pvmove -v /dev/sdb1 /dev/sdd1 <-- move the data from source to destination
> vgreduce
NOTE:
- can indicate extents to move
- a certain LV
- can continue when canceled
#to remove the PV in the VG
> vgreduce -v vgsystem02 /dev/sdb1
#to remove VG
> inactivate VG first
> vgremove
#LVM on top of RAID1
> create two raid partitions
> add a hot spare
> pvcreate /dev/md0
> vgextend and add the md0
NOTE: (in creating 2 volume groups)
- one VG for internal
- one VG for external
LVM SNAPSHOTS
- special LVs that are an exact copy of an existing LV at the time the snapshot is created
- perfect for backups & other operations where a temporary copy of an existing dataset is needed
- only consumes space where they are different from the original LV
* snapshots are allocated space at creation but do not use it until changes are made to the original LV or the snapshot
* when data is changed on the original LV the older data is copied to the snapshot
* snapshots contain only data that has changed on the original LV or the snapshot since the snapshot was created
common uses:
backup of live data
- for database put it in quiesce mode first..
application testing
hosting of virtualized machines
#create snapshot of existing LV
lvcreate -l 64 -s -n datasnap /dev/vgsystem/lvu01 <-- in extents
lvcreate -L 512M -s -n datasnap /dev/vgsystem/lvu01 <-- in MB, the extents will be pulled from the volume group where the LV resides
--- Logical volume ---
LV Name /dev/vgraid/vgraidopt
VG Name vgraid
LV UUID 0KDodV-JlvI-95E6-bvjb-CXBi-rM6u-iI1n2j
LV Write Access read/write
LV snapshot status source of
/dev/vgraid/raidoptsnap [active]
LV Status available
# open 1
LV Size 1000.00 MB
Current LE 250
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:4
--- Logical volume ---
LV Name /dev/vgraid/raidoptsnap
VG Name vgraid
LV UUID FwW2n9-MGML-LZ6h-TZH1-WFbK-BYMc-rYQ8fA
LV Write Access read/write
LV snapshot status active destination for /dev/vgraid/vgraidopt
LV Status available
# open 0
LV Size 1000.00 MB
Current LE 250
COW-table size 1000.00 MB
COW-table LE 250
Allocated to snapshot 0.00%
Snapshot chunk size 8.00 KB
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:5
#mount snapshot
mkdir -p /mnt/datasnap
mount -o ro /dev/vgsystem/datasnap /mnt/datasnap
#remove snapshot
umount /mnt/datasnap
lvremove /dev/vgsystem/datasnap
#check used space.. see "Allocated to snapshot"
lvdisplay /dev/vgsystem/datasnap
#GROW snapshots
lvextend -L +500M /dev/vgsystem/datasnap <-- can be expanded as other LVs
TAR
tar will extract all of the extended attributes that were archived.. to not extract them, use the "--no-*" switches
use "rmt" to write to a remote tape device
--preserve
like --preserve-permissions --same-order
--acls this option causes tar to store each file's ACLs in the archive.
--selinux
this option causes tar to store each file's SELinux security context information in the archive.
--xattrs
this option causes tar to store each file's extended attributes in the archive. This option also
enables --acls and --selinux if they haven't been set already, due to the fact that the data for
those are stored in special xattrs.
--no-acls
this option causes tar not to store each file's ACLs in the archive and not to extract any ACL
information in an archive.
--no-selinux
this option causes tar not to store each file's SELinux security context information in the archive
and not to extract any SELinux information in an archive.
--no-xattrs
this option causes tar not to store each file's extended attributes in the archive and not to
extract any extended attributes in an archive. This option also enables --no-acls and --no-selinux
if they haven't been set already.
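illustrative tar runs using the switches above (paths assumed):
# tar --acls --selinux --xattrs -cf /tmp/home.tar /home <-- archive keeping ACLs, contexts, xattrs
# tar --no-selinux -xf /tmp/home.tar -C / <-- restore without re-applying SELinux contexts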
Archiving tools: DUMP, RESTORE
DUMP
- backup & restore ext2/3 filesystems (does not work with other filesystems)
- should only be used on unmounted or read-only
- full, incremental backups
#do a level 0 backup
dump -0u -f /dev/nst0 /home
dump -0u -f /dev/nst0 /dev/hda2
NOTE:
"u" option will update the /etc/dumpdates, which will record dump info for future use by dump..
after level 0 backup, dump will perform an incremental backup everyday on active filesystems listed in /etc/fstab
#do an incremental update
dump -4u -f /dev/nst0 /home
NOTE:
will perform an incremental update of all files that have changed since the last backup of level 4 or lower..
as recorded in /etc/dumpdates
#perform remote backup to tape
dump -0uf joe@<server>:/dev/nst0 /home
NOTE:
perform remote backup using rmt, ssh can be used as a transport layer when $RSH is set to ssh
In the event of a catastrophic disk event, the time required to restore all the necessary backup tapes or files to
disk can be kept to a minimum by staggering the incremental dumps. An efficient method of staggering incremental
dumps to minimize the number of tapes follows:
- Always start with a level 0 backup, for example:
/sbin/dump -0u -f /dev/st0 /usr/src
This should be done at set intervals, say once a month or once every two months, and on a set of fresh tapes
that is saved forever.
- After a level 0, dumps of active file systems are taken on a daily basis, using a modified Tower of Hanoi
algorithm, with this sequence of dump levels:
3 2 5 4 7 6 9 8 9 9 ...
For the daily dumps, it should be possible to use a fixed number of tapes for each day, used on a weekly
basis. Each week, a level 1 dump is taken, and the daily Hanoi sequence repeats beginning with 3. For weekly
dumps, another fixed set of tapes per dumped file system is used, also on a cyclical basis.
After several months or so, the daily and weekly tapes should get rotated out of the dump cycle and fresh tapes
brought in.
RESTORE
#restore backup
restore -rf /dev/st0
-r Restore (rebuild) a file system. The target file system should be made pristine with mke2fs(8), mounted, and
the user cd'd into the pristine file system before starting the restoration of the initial level 0 backup.
If the level 0 restores successfully, the -r flag may be used to restore any necessary incremental backups
on top of the level 0. The -r flag precludes an interactive file extraction and can be detrimental to one's
health (not to mention the disk) if not used carefully. An example:
mke2fs /dev/sda1
mount /dev/sda1 /mnt
cd /mnt
restore rf /dev/st0
Note that restore leaves a file restoresymtable in the root directory to pass information between incremental
restore passes. This file should be removed when the last incremental has been restored.
RSYNC
#rsync on another server
rsync --verbose --progress --stats --compress --rsh=/usr/bin/ssh --recursive --times --perms --links --delete *txt oracle@192.168.203.11:/u01/app/oracle/rsync
OR
rsync -e ssh *txt oracle@192.168.203.11:/u01/app/oracle/rsync/
[ ] UNIT 8 - NETWORK CONFIGURATION
network interfaces
ifconfig -a <-- will show all interfaces, active & inactive
ip link
driver selection
/etc/modprobe.conf <-- RHEL compiles network cards as kernel modules, module is loaded based on alias..
if there is more than one card utilizing one module.. then the mapping will be based on HW address
speed & duplex settings (configured to autonegotiate, by DEFAULT)
ethtool <interface> <-- Display or change ethernet card settings, if you alter settings it's best when the interface is not in use
also turn off autonegotiation before forcing manual settings
ETHTOOL_OPTS <-- put this in ifcfg-ethX, to be persistent
"options" OR "install" in /etc/modprobe.conf <-- for older interface modules
#to manually force 100Mbps full duplex operation on eth1
ifdown eth1
ethtool -s eth1 autoneg off speed 100 duplex full
ifup eth1
ETHTOOL_OPTS="autoneg off speed 100 duplex full" <-- to make persistent, add it in ifcfg-eth1
ipv4 addresses
ifconfig
ip addr
DHCP - dynamic ipv4 configuration
/etc/sysconfig/network-scripts/ifcfg-ethX
BOOTPROTO=dhcp
zeroconf <-- if there is no DHCP server configured, then an address of 169.254.0.0/16 network is automatically assigned
these addresses are non-routable
NOZEROCONF=yes
dhclient daemon <-- will negotiate a lease from a DHCP server
ppd daemon
STATIC ipv4 configuration
/etc/sysconfig/network-scripts/ifcfg-ethX
BOOTPROTO=none
IPADDR=<address>
NETMASK=<netmask>
ifup
ifdown
DEVICE ALIASES
- useful for virtual hosting, hosting multiple web or ftp sites on a single server.. separate ip addresses are generally required for each
website that supports SSL or when defining multiple FTP sites
- bind multiple addresses to a single NIC, e.g. 3 logical network addresses (config sketch below):
eth1:1
eth1:2
eth1:3
- create a separate interface config file for each device alias, must use static networking
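sketch of one alias config file (addresses illustrative):
# cat /etc/sysconfig/network-scripts/ifcfg-eth1:1
DEVICE=eth1:1
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
then: # ifup eth1:1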
ROUTING TABLE
#to view table
route
netstat -r
ip route
[root@centos5-11g ~]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.203.0 * 255.255.255.0 U 0 0 0 eth0
169.254.0.0 * 255.255.0.0 U 0 0 0 eth0
default 192.168.203.2 0.0.0.0 UG 0 0 0 eth0
[root@centos5-11g ~]# netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
192.168.203.0 * 255.255.255.0 U 0 0 0 eth0
169.254.0.0 * 255.255.0.0 U 0 0 0 eth0
default 192.168.203.2 0.0.0.0 UG 0 0 0 eth0
[root@centos5-11g ~]# ip route
192.168.203.0/24 dev eth0 proto kernel scope link src 192.168.203.25 <-- "LOCAL"..packet would be sent physically to the destination address, out the device eth0
169.254.0.0/16 dev eth0 scope link
default via 192.168.203.2 dev eth0 <-- "REMOTE"..packet would be sent physically to the router at 192.168.203.2, out the device eth0
DEFAULT GATEWAY (router)
- specifies where ip packets should be sent when there is no "more specific match" found in the routing table
also generally used when there is only "one way out" of the local network
- if using DHCP, the DHCP server will serve the address of the default gateway..
the dhclient will get the value & set it in the routing table
/etc/sysconfig/network-scripts/ifcfg-ethX <-- set per interface (will override global if set)
/etc/sysconfig/network <-- set globally
CONFIGURING ROUTES
- control traffic flow when there is more than one router, or more than one interface each attached to different routers, we may want
to selectively control which traffic goes through which router by configuring additional routes
- static routes defined per interface
/etc/sysconfig/network-scripts/route-ethX
ip route add...
#sample command
ip route add 192.168.22.0/24 via 10.53.0.253
cat /etc/sysconfig/network-scripts/route-ethX
192.168.22.0/24 via 10.53.0.253
- dynamic routes learned via daemons (the challenge with dynamic routing is when the network changes)
quagga
package that supports RIP - router information protocol <-- smaller networks
OSPF - open shortest path first <-- enterprise networks
BGP - border gateway protocol <-- ISPs
verify ip connectivity
ping <-- packet loss & latency measurement tool (sends ICMP - internet control message protocol, default is 64 bytes)
traceroute <-- displays network path to a destination (uses UDP frames to probe the path)
mtr <-- a tool that combines ping & traceroute
defining the local hostname
/etc/sysconfig/network
- might PULL from the network
dhclient
"reverse DNS lookup".. will be done by /etc/rc.d/init.d/network
local resolver
resolver performs forward & reverse lookups
forward lookup looks up the number when we have the name
reverse lookup looks up the name when we have the number
/etc/hosts <-- at a minimum, your hostname should be here, normally checked before DNS
remote resolvers
/etc/resolv.conf
- domains to search
- strict order of name servers to use (DNS)
- may be updated by dhclient
entries:
search
domain
nameserver
PEERDNS=no <-- the dhclient will automatically obtain a list of nameservers from the DHCP server unless the interface config file contains this
/etc/nsswitch.conf <-- precedence of DNS versus /etc/hosts
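minimal /etc/resolv.conf sketch showing the entries above (domain & addresses illustrative):
search example.com
nameserver 192.168.203.2
nameserver 192.168.203.3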
verify DNS connectivity
nslookup (deprecated)
host
dig
bind-utils (package)
NETWORK CONFIGURATION UTILITIES
system-config-network
profile selection:
system-config-network-cmd --profile <profilename> --activate <-- switch profiles
netprofile (kernel argument) <-- on boot time, choose a profile
transparent dynamic configuration
networkmanager (package) <-- useful when there are too many profiles to manage by hand..
nm-applet
IMPLEMENTING IPv6
enabling/disabling ipv6, set this in /etc/modprobe.conf
alias net-pf-10 off
alias ipv6 off
# ip -6 addr
IPv6 DHCP - dynamic interface configuration
two ways to dynamically configure ipv6:
1) router advertisement daemon
- runs on (Linux) default gateway - radvd
- only specifies prefix & default gateway
- enabled with configuration: IPV6_AUTOCONF=yes in /etc/sysconfig/network.. global.. or on interface config
- interface ID automatically generated based on the MAC address of the system
- RFC 3041 was developed to protect privacy (vs. EUI-64 derived addresses), enabled with configuration: IPV6_PRIVACY=rfc3041 on local interface config
2) DHCP version6
dhcp6 supports more configuration options <-- does not listen for broadcasts.. but rather subscribes to the multicast address ff02::16
enabled with configuration: DHCPV6C=yes on interface config
IPv6 STATIC configuration
- enabled with configuration: IPV6ADDR <-- first Global Unicast Address
- no need for device alias..enabled with configuration: IPV6ADDR_SECONDARIES <-- additional Global Unicast Address
IPv6 routing configuration
Default gateway
- dynamically from radvd or dhcpv6s
- manually specify in /etc/sysconfig/network
configuration:
IPV6_DEFAULTGW
IPV6_DEFAULTDEV <-- only valid on point-to-point interfaces
Static Routes
- defined on interface config /etc/sysconfig/network-scripts/route6-ethX
- or use the "ip 6 route add"
New and Modified utilities
ping6
traceroute6
tracepath6
ip -6
host -t AAAA hostname6.domain6
[ ] UNIT 9 - INSTALLATION
anaconda: different modes
kickstart
upgrade
rescue
consists of two stages:
first stage <-- boots the system & performs initialization of the system
second stage <-- performs the installation
[ ] UNIT 10 - VIRTUALIZATION WITH XEN
[ ] UNIT 11 - TROUBLESHOOTING
method of fault analysis:
characterize the problem
reproduce the problem
find further information
eliminate possible causes
try the easy things first
backup config files before changing
fault analysis: gathering data
useful commands:
history
grep
diff
find / -cmin -60
strace <command>
tail -f <logfile>
generate additional info
*.debug /var/log/debug
--debug option in application
X11: things to check
never debug X while in runlevel5
when changing hardware, try system-config-display first..
X -probeonly <-- performs all tasks necessary to start the X server w/o actually starting it
check /usr/share/hwdata/Cards
/home or /tmp full, quota?
is XFS running? <-- once in a while the font indexes in a font directory may be corrupt..run "mkfontdir" to recreate them
also try commenting out font paths in /etc/X11/fs/config..then run XFS to determine which directories
has problems
change hostname? <-- exit of runlevel5..
NETWORKING
hostname resolution
dig <fq hostname>
ip configuration
ifconfig
default gateway
route -n
module specification
device activation
ORDER OF BOOT PROCESS: REVIEW
bootloader configuration
kernel
/sbin/init
starting init
/etc/rc.d/rc.sysinit
/etc/rc.d/rc.. and /etc/rc.d/rc[1,3,5].d/
entering runlevel X
/etc/rc.d/rc.local
X
POSSIBLE ISSUES:
1) issue: no bootloader splash screen on prompt appears
cause:
grub is misconfigured
boot sector is corrupt
bios setting such as disk addressing scheme has been modified since the boot sector was written
2) issue: kernel does not load at all, or loads partially before a panic occurs
cause:
corrupt kernel image
incorrect parameters passed to the kernel by the bootloader
3) issue: kernel loads completely, but panics or fails when it tries to mount root filesystem and run /sbin/init
cause:
bootloader is misconfigured
/sbin/init is corrupted
/etc/fstab is misconfigured
root filesystem is damaged and unmountable
4) issue: kernel loads completely, and /etc/rc.d/rc.sysinit is started and interrupted
cause:
/bin/bash is missing or corrupted
/etc/fstab may have an error, evident when filesystems are mounted or fsck'd
errors in software raid or quota specifications
corrupted non-root filesystem (due to a failed disk)
5) issue: run level errors (typically services)
cause:
another service required by a failing service was not configured for a given runlevel
service-specific configuration errors
misconfigured X or related services in runlevel5
FILESYSTEM PROBLEMS DURING BOOT
rc.sysinit attempts to mount local filesystems
upon failure, user is dropped to a root shell, root in read-only
fsck to repair
but before fsck, check /etc/fstab for mistakes
mount -o remount,rw /..... before editing
manually test mounting filesystems
RECOVERY RUN-LEVELS (pass run-level to init)
runlevel 1
process rc.sysinit & rc1.d scripts
runlevel s,S,or single
process only rc.sysinit
emergency
run sulogin only..much like a failed disk
RESCUE ENVIRONMENT
required when root filesystem is unavailable
non-system specific
boot from CDROM (boot.iso or CD#1)..then type linux rescue
boot from diskboot.img on USB device.. then linux rescue
rescue environment utilities
disk maintenance
networking
miscellaneous
logging:
/tmp/syslog <-- system logging info
/tmp/anaconda.log <-- booting info
/tmp <-- some more config files are there..
rescue environment details
filesystem reconstruction <-- will try to reconstruct the hard disk's filesystem under /mnt/sysimage
anaconda will ask if filesystems should be mounted
/mnt/sysimage/*
/mnt/source
$PATH includes hard drive's directories
filesystem nodes
system-specific device files provided
"mknod" knows major/minor #'s <-- for floppys, in order to access it
linux rescue nomount <-- a corrupted partition table will appear to hang the rescue environment
ALT-F2 has shell with fdisk
this command will disable automatic mounting of filesystems & circumvents
the hanging caused by bad partition tables
-------------
# TEST CASES:
-------------
#REINSTALL GRUB
prepare the environment:
# dd if=/dev/zero of=/dev/sda bs=256 count=1 && reboot
option 1)
1. just do a /sbin/grub-install <boot device.."/dev/sda"> on rescue mode
- this will recreate a new folder "grub" but will not recreate grub.conf
2. when you have a separate boot partition which is mounted at /boot, since grub is
a boot loader, it doesn't know anything about mountpoints at all
# fdisk -l
# grub-install --root-directory=/boot /dev/hda
NOTE:
how to specify a file?
(hd0,0)/vmlinuz <-- normally this is the case.. because you create 100MB separate mount point
means that the file name 'vmlinuz', found on the first partition of the first
hard disk drive. the argument completion works with file names too.
what else to look for?
device.map <-- the content of this should be the disk where the MBR resides.. (hd0) /dev/sda
grub.conf <-- this is not created when you do grub-install, either you get a copy from backup
or manually recreate it.. DONT FORGET THE "root=LABEL=/"
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
password --md5 $1$wIn2KEYl$pjKQtiDuiRlqO/8QKkS0X0
title CentOS (2.6.18-8.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-8.el5 ro root=LABEL=/ rhgb quiet
initrd /initrd-2.6.18-8.el5.img
option 2) if above fails.. then do this..
1. on rescue mode type command "grub" & press enter
2. type "root (hd0,0)"
3. type "setup (hd0)"
4. quit
#RECREATE MKINITRD
1. take note of the kernel version by typing
# uname -r
# uname -a
2. then, create the initrd image
# mkinitrd /boot/initrd-$(uname -r).img $(uname -r) <-- if using non-xen kernel
# mkinitrd /boot/initrd-$(uname -r)xen.img $(uname -r)xen <-- if using xen kernel
NOTE:
the image name doesn't have to be the same as before.. grub.conf reference this file
so also check the contents of grub.conf
#ROOT FILESYSTEM READ ONLY, IMMUTABLE PROPERTY ON FSTAB
#ROOT FILESYSTEM READ ONLY, RESIZED THE FILESYSTEM TO A SMALLER VALUE (LVM)
#CORRUPTED MOUNT COMMAND
prepare the environment:
# cp /bin/date /bin/mount
solution
1. load rescue environment
2. chroot /mnt/sysimage
rpm -qf /bin/mount
rpm -V util-linux
exit
3. mount the installer through NFS
4. rpm -ivh --force --root /mnt/sysimage util-linux*
}}}
{{{
###################################################################################################
[ ] UNIT 1 - SYSTEM PERFORMANCE AND SECURITY
###################################################################################################
System Resources as Services
** Computing infrastructure is comprised of roles
systems that serve
systems that request
** System infrastructure is comprised of roles
processes that serve
processes that request
** Processing infrastructure is comprised of roles
accounts that serve
accounts that request
** System resources, and their use, must be accounted for as policy of securing the system
Security in Principle
Security Domains (this course will focus on Local and Remote)
Physical
Local
Remote
Personnel
Security in Practice
Host only services you must, and only to those you must
A service is characterized by its "listening" for an event, like "GET" request on IP port 80
Security Policy: the People
Managing human activities
includes Security Policy maintenance
The policy is the objective reference against which one can measure
Security Policy: the System
Managing system activities
** Regular system monitoring
Log to an external server in case of compromise
Monitor logs with logwatch
Monitor bandwidth usage inbound and outbound
** Regular backups of system data
Response Strategies
** Assume suspected system is untrustworthy
Do not run programs from the suspected system
Boot from trusted media to verify breach
rpm -V --root=/mnt/sysimage --define '_dbpath /path/to/backup' procps <-- this will compare the size,md5sum,ownership,etc. against
the backup and the one on disk
Analyze logs of remote logger and "local" logs
Check file integrity against read-only backup of rpm
database
** Make an image of the machine for further
analysis/evidence-gathering
** Wipe the machine, re-install and restore
from backup
System Faults and Breaches
** Both affect system performance
** System performance is the security
concern
a system fault yields an infrastructure void
an infrastructure void yields opportunity for
alternative resource access
an opportunity for alternative resource access yields
unaccountable resource access
an unaccountable resource access is a breach of security policy
"It is therefore essential to monitor system activity, or "behavior", to establish a norm, and prescribe methods to
reinstate this norm should a fault occur. It is also important to implement emthods to explain the effects of changes
to a system while altering configuration of tis resource access"
"MATRIX" of access controls
----------------------------------------------------------
Access Control Implementation
----------------------------------------------------------
Application configuration file parameters
PAM as linked to, and configured in /etc/pam.d/programname
xinetd as configured in /etc/xinetd.d/service
libwrap as linked to libwrap.so, or managed by programs so linked
SELinux as per SELinux implemented policy
Netfilter, IPv6 as configured in /etc/sysconfig/ip6tables
Netfilter as configured in /etc/sysconfig/iptables
Method of Fault Analysis
** Characterize the problem
** Reproduce the problem
** Find further information
Fault Analysis: Hypothesis
** Form a series of hypotheses
** Pick a hypothesis to check
** Test the hypothesis
** Note the results, then reform or test a new
hypothesis if needed
** If the easier hypotheses yield no positive
result, further characterize the problem
Fault Analysis: Gathering Data
** strace command
strace -o karl.txt ls /var/lib <-- do an strace
grep ' E.' karl.txt <-- what errors were encountered
grep 'open' karl.txt <-- which files are called "open"
** tail -f logfile
** *.debug in syslog
/etc/syslog.conf:
*.debug /var/log/debug
** --debug option in application
vi /etc/sysconfig/xinetd
EXTRAOPTIONS="-d"
service xinetd restart
Benefits of System Monitoring
** System performance and security may be maintained with regular system monitoring
** System monitoring includes:
Network monitoring and analysis
File system monitoring
Process monitoring
Log file analysis
Network Monitoring Utilities
** Network interfaces (ip)
Show what interfaces are available on a system
** Port scanners (nmap)
Show what services are available on a system
** Packet sniffers (tcpdump, wireshark)
Stores and analyzes all network traffic visible to the
"sniffing" system
Networking, a Local view
** The ip utility
Called by initialization scripts
Greater capability than ifconfig
** Use netstat -ntaupe for a list of:
active network servers
established connections
netstat -tupln <-- to get all services listening on localhost
Networking, a Remote view
nmap -sS -sU -sR -P0 -A -v station1 <-- will perform a TCP SYN(chronous packet) scan (-sS), UDP scan (-sU), rpc/portmap scan (-sR)
with operating system and service version detection (-A) on station1. It will print diagnostic
information (-v) and will not attempt to ping the system before scanning (-P0)
nmap -sP 192.168.234.* <-- scan the whole subnet
nmap <remotehost> | grep tcp <-- to test which services you can reach on the remote host
nmapfe <-- GUI tool frontend
File System Analysis
df, du
stat <-- reports file size, timestamps, and inode details
find ~ -type f -mmin -90 | xargs ls -l <-- find files modified within the last 90 minutes
Typical Problematic Permissions
** Files without known owners may indicate
unauthorized access:
find / \( -nouser -o -nogroup \) <-- Locate files and directories whose owner or group has no entry in /etc/passwd or /etc/group
** Files/Directories with "other" write
permission (o+w) may indicate a problem
find / -type f -perm -002 <-- Locate other-writable files
find / -type d -perm -2 <-- Locate other-writable directories
Monitoring Processes
** Monitoring utilities
top
gnome-system-monitor
sar
Process Monitoring Utilities
System Activity Reporting
sysstat RPM
Managing Processes by Account
** Use PAM to set controls on account resource limits:
pam_access.so <-- can be used to limit access by account and location /etc/security/access.conf
pam_time.so <-- can be used to limit access by day and time /etc/security/time.conf
pam_limits.so <-- can be used to limit resources available to process /etc/security/limits.conf
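For example, a few illustrative /etc/security/limits.conf entries (the account names and values are assumptions):
#<domain> <type> <item> <value>
student hard nproc 100 <-- cap the number of processes for user student
@devel soft nofile 1024 <-- soft limit on open files for members of group devel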
System Log Files
** Logging Services:
syslogd <-- most daemons send their messages to this service
klogd <-- handles kernel messages
/etc/syslog.conf <-- configuration file
/var/log/messages <-- most system messages
/var/log/audit/audit.log <-- audit subsystem and SELinux messages
/var/log/secure <-- authentication messages, xinetd services
/var/log/xferlog <-- FTP (vsftpd) transactions
/var/log/maillog <-- mail transactions
syslogd and klogd Configuration
<facility>.<priority> <loglocation> <-- format
mail.info /dev/tty8 <-- example
kern.info /var/log/kernel <-- example
Facility                                         Priority
-----------------------------------------------  ---------------------------------------
authpriv    security/authorization messages      debug     debugging information
cron        clock daemons (atd and crond)        info      general informative messages
daemon      other daemons                        notice    normal, but significant, condition
kern        kernel messages                      warning   warning messages
local[0-7]  reserved for local use               err       error condition
lpr         printing system                      crit      critical condition
mail        mail system                          alert     immediate action required
news        news system                          emerg     system no longer available
syslog      internal syslog messages
user        generic user-level messages
CENTRALIZED HOST LOGGING:
1) on the remote host (the central loghost) set up syslogd to accept remote messages
edit /etc/sysconfig/syslog
SYSLOGD_OPTIONS="-r -m 0"
2) restart syslogd
3) setup syslogd on the source host
vi /etc/syslog.conf
user.* @remotehost
4) restart syslogd
5) test it using the "logger" command
logger -i -t yourname "This is a test"
6) view the log files on the remote and source host
Log File Analysis
logwatch RPM
Virtualization with Xen
Xen Domains
/etc/xen/<domain> <-- Dom-U configuration files are stored on this directory of Dom-0
virt-manager or xm console <-- front end console (GUI)
xmdomain.cfg(5) <-- help file
Xen Configuration
xenbr0 <-- network by default is mapped to this interface
xendomains <-- determines which domains to start at boot, based on which Xen domain configuration files are linked in /etc/xen/auto
create a symbolic link /etc/xen/auto/<name of Dom-U> for auto start
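A minimal Dom-U configuration sketch, e.g. /etc/xen/vm1 (all names and values here are assumptions; see xmdomain.cfg(5)):
name = "vm1"
memory = 256
disk = [ 'phy:/dev/vg0/vm1,xvda,w' ] <-- backing device, guest device, read-write mode
vif = [ '' ] <-- one interface attached to the default bridge (xenbr0)
bootloader = "/usr/bin/pygrub"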
Domain Management with xm
"xm" tool sends commands to "Xend" which relays the commands to the Hypervisor
Controlling domains:
---------------------
xm <create | destroy> domain
xm <pause | unpause> domain
xm <save | restore> domain filename
xm <shutdown | reboot> domain
Monitoring domains:
---------------------
xm list
xm top
xm console domain
###################################################################################################
[ ] UNIT 2 - SYSTEM SERVICE ACCESS CONTROLS
###################################################################################################
System Resources Managed by init
** Services listening for serial
protocol connections
a serial console
a modem
** Configured in /etc/inittab
** Calls the command rc to spawn initialization scripts
** Calls a script to start the X11 Display Manager
** Provides respawn capability
co:23:respawn:/sbin/agetty -f /etc/issue.serial 19200 ttyS1
System Initialization and Service Management
** Commonly referred to as "System V" or
"SysV"
Many scripts organized by file system directory
semantics
Resource services are either enabled or disabled
** Several configuration files are often used
** Most services start one or more processes
** Commands are "wrapped" by scripts
** Services are managed by these scripts,
found in /etc/init.d/
** Examples:
/etc/init.d/network status
service network status
chkconfig
** Manages service definitions in run levels
** To start the cups service on boot:
chkconfig cups on
** Does not modify current run state of System
V services
** Used for standalone and transient services
** Called by other applications, including
system-config-services
** To list run level assignments, run chkconfig
--list
Initialization Script Management
chkconfig --list <-- provides a listing of all services that are started via initialization scripts or xinetd
-- it only maintains the symbolic links in /etc/rcX.d/ and the xinetd configuration. It does not
start or stop the services or control the behavior of other services
The /etc/sysconfig/ files
* Some services are configured for how they run
- named
- sendmail
- dhcpd
- samba
- init
- syslog
/etc/sysconfig/ <-- many files under this directory describe hardware configuration
-- some of them configure service run-time parameters!!! and "configure the manner" of daemon execution!!!
/etc/init.d/ <-- files under here are executables that "configure the conditions" of daemon execution
/usr/share/doc/initscripts-9.02/sysconfig.txt <-- this is where the /etc/sysconfig/ files are documented
XINETD MANAGED SERVICES
** Transient services are managed by the xinetd service <-- transient services are not configured per runlevel;
instead, xinetd manages the port and the connections to these services
/etc/services <-- port-to-service management list used by xinetd
xinetd provides the following:
------------------------------
- host-based authentication
- resource logging
- timed access
- address redirection
- etc.
** Incoming requests are brokered by xinetd
** Configuration files:
/etc/xinetd.conf <-- config files
/etc/xinetd.d/<service>
** Linked with libwrap.so, services compiled with this will first call host_access(5) rules when a service is requested
if they allow access, then xinetd's internal access control policies are checked
[root@karl ~]# ldd /usr/sbin/xinetd
linux-vdso.so.1 => (0x00007fff4b9ff000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f58b3c3b000)
libwrap.so.0 => /lib64/libwrap.so.0 (0x00007f58b3a31000) <-- xinetd is linked to libwrap.so
libnsl.so.1 => /lib64/libnsl.so.1 (0x00007f58b3818000)
libm.so.6 => /lib64/libm.so.6 (0x00007f58b3594000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f58b335d000)
libc.so.6 => /lib64/libc.so.6 (0x00007f58b2fe5000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f58b2de1000)
/lib64/ld-linux-x86-64.so.2 (0x00007f58b3e59000)
libfreebl3.so => /usr/lib64/libfreebl3.so (0x00007f58b2b82000)
** Services controlled with chkconfig:
chkconfig tftp on
XINETD DEFAULT CONTROLS
** Top-level configuration file
# /etc/xinetd.conf
defaults
{
instances = 60
log_type = SYSLOG authpriv
log_on_success = HOST PID
log_on_failure = HOST
cps = 25 30
}
includedir /etc/xinetd.d
* Can be overridden or appended-to in service-specific configuration files in /etc/xinetd.d
man xinetd.conf <-- all the xinetd configuration parameters are documented, Extended Internet Services Daemon configuration file
XINETD SERVICE CONFIGURATION
** Service specific configuration
/etc/xinetd.d/<service>
yum install tftp-server
/etc/xinetd.d/tftp: <-- All service config utilities will edit the appropriate xinetd service config files by calling "chkconfig".
-- When xinetd is started, each enabled service is called when a connection is attempted on a specific network port
# default: off
service tftp
{
disable = yes <-- determines whether or not xinetd will accept connections for the service
socket_type = dgram
protocol = udp
wait = yes
user = root
server = /usr/sbin/in.tftpd <-- the binary used to run the service, used by libwrap.so (tcp_wrappers)
server_args = -c -s /tftpboot
per_source = 11
cps = 100 2
flags = IPv4
}
XINETD ACCESS CONTROLS
** Syntax
Allow with only_from = host_pattern
Deny with no_access = host_pattern
The most exact specification is authoritative
** Example
only_from = 192.168.0.0/24 <-- if nothing is specified then it defaults to ALL hosts
no_access = 192.168.0.1
service telnet <-- this will block access to the telnet service to everyone except hosts from the 192.168.0.0/24 network
and of those, 192.168.0.1 will be denied access
{
disable = yes
flags = REUSE
socket_type = stream
wait = no
user = root
only_from = 192.168.0.0/24
no_access = 192.168.0.1
server = /usr/bin/in.telnetd
log_on_failure += USERID
}
http://www.cyberciti.biz/faq/how-do-i-turn-on-telnet-service-on-for-a-linuxfreebsd-system/
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch16_:_Telnet,_TFTP,_and_xinetd#.Ua-SuGRATbA
Host Pattern Access Controls
** Host masks for xinetd may be:
- numeric address (192.168.1.0) <-- full or partial; rightmost zeros are treated as wildcards
-- example: 192.168.1.0
- network name (from /etc/networks) <-- network names from /etc/networks or NIS
-- does not work together with usernames
-- example: @mynetwork
- hostname or domain (.domain.com) <-- performs a reverse lookup every time a client connects
-- example: .example.com (all hosts in the example.com domain)
- IP address/netmask range (192.168.0.0/24) <-- must specify the complete network address and netmask
-- example: 192.168.0.0/24
** Number of simultaneous connections
Syntax: per_source = 2 <-- limits the number of simultaneous connections per IP address, Cannot exceed maximum instances
SERVICE AND APPLICATION ACCESS CONTROLS
** Service-specific configuration
Daemons like httpd, smbd, squid, etc. provide service-specific security mechanisms
** General configuration
All programs linked with libwrap.so use common configuration files
Because xinetd is linked with libwrap.so, its services are affected
Checks for host and/or remote user name
/etc/hosts.allow <-- when a client connects to a "tcp wrapped" service, these files are examined, then choose to accept or drop the connection
/etc/hosts.deny
all processes controlled by XINETD automatically use libwrap.so (tcp_wrappers)
Here are the standalone deamons linked with libwrap.so:
- sendmail
- slapd
- sshd
- stunnel
- xinetd
- gdm
- gnome-session
- vsftpd
- portmap
TCP_WRAPPERS CONFIGURATION
libwrap.so implements a "STOP ON FIRST MATCH" policy!!!
changes to the access files are effective immediately for all new connections!!!
** Three stages of access checking
Is access explicitly permitted?
Otherwise, is access explicitly denied?
Otherwise, BY DEFAULT, PERMIT ACCESS!
** Configuration stored in two files:
Permissions in /etc/hosts.allow
Denials in /etc/hosts.deny
** Basic syntax:
daemon_list: client_list [:options]
Daemon Specification
"nfs and nis" uses "portmap" service for RPC messages <-- http://querieslinux.blogspot.com/2009/08/what-is-port-map-why-is-it-required.html
-- block the underlying "portmap" for RPC based services like NFS and NIS
** Daemon name:
Applications pass name of their executable
Multiple services can be specified
Use wildcard ALL to match all daemons
Limitations exist for certain daemons
** Advanced Syntax:
daemon@host: client_list ...
EXAMPLES:
in.telnetd: 192.168.0.1
sshd, gdm: 192.168.0.1 <-- comma delimited list of daemons
in.telnetd@192.168.0.254: 192.168.0. <-- if your host has two interface cards and if you want different policies for each, do this!
in.telnetd@192.168.1.254: 192.168.1.
Client Specification
** Host specification
by IP address (192.168.0.1,10.0.0.)
by name (www.redhat.com, .example.com)
by netmask (192.168.0.0/255.255.255.0)
by network name
Macro Definitions (also known as WILDCARDS)
** Host name macros
LOCAL - all hosts without a dot in their name
KNOWN - all hostnames that can be resolved
UNKNOWN - all hostnames that cannot be resolved
PARANOID - all hostnames where forward and reverse lookup do not match, or resolve
** Host and service macro
ALL - always matches all hosts and all services
** EXCEPT - exclude some hosts from your match
Can be used for client and service list
Can be nested
----------------------------------
/etc/hosts.allow
sshd: ALL EXCEPT .cracker.org EXCEPT trusted.cracker.org
/etc/hosts.deny
sshd: ALL
----------------------------------
"Because of the catch-all rule in hosts.deny this ruleset would allow only those who have been explicitly granted access to ssh into the system.
In hosts.allow we granted access to everyone except for hosts in the cracker.org domain, but to this rule we make an exception:
We will allow the host trusted.cracker.org to ssh in despite the ban on cracker.org"
Extended Options
man 5 hosts_options <-- documentation
** Syntax:
daemon_list: client_list [:opt1 :opt2...]
** spawn
Can be used to start additional programs
Special expansions are available (%c, %s)
%c client information
%s server information
%h the client's hostname
%p server PID
** Example:
in.telnetd: ALL : spawn echo "login attempt from %c to %s" \
| mail -s warning root
** DENY
Can be used as an option in hosts.allow
** Example:
ALL: ALL: DENY
A TCP_WRAPPERS EXAMPLE
* CONSIDER THE FOLLOWING EXAMPLE FOR THE MACHINE 192.168.0.254 ON A CLASS C NETWORK
-----------------------------------------------
# /etc/hosts.allow
vsftpd: 192.168.0.
in.telnetd, portmap: 192.168.0.8
# /etc/hosts.deny
ALL: .cracker.org EXCEPT trusted.cracker.org
vsftpd, portmap: ALL
sshd: 192.168.0. EXCEPT 192.168.0.4
-----------------------------------------------
OBSERVATIONS:
1) only stations on the local network can ftp to the machine
2) only station8 could NFS-mount a directory from the machine (remember that NFS relies on portmap)
3) all hosts in cracker.org, except trusted.cracker.org are denied access to any tcp-wrapped services
4) only host 192.168.0.4 is able to ssh in from the local network
QUESTIONS:
1) what stations from the local network can initiate a telnet connection to this machine?
2) can machines in the cracker.org network access the web server?
3) what tcp-wrapped services are available to a system from someother.net? what's wrong with these rules from the perspective
of a security policy?
* A REALISTIC EXAMPLE - USING A "MOSTLY CLOSED APPROACH"
-----------------------------------------------
# /etc/hosts.allow
ALL: 127.0.0.1 [::1]
vsftpd: 192.168.0.
in.telnetd, sshd: .example.com 192.168.2.5
# /etc/hosts.deny
ALL: ALL
-----------------------------------------------
The above example denies access to all tcp-wrapped services for everyone, except those which are explicitly allowed.
In this case "ftp" access is allowed for all hosts in the 192.168.0. subnet while "telnet and ssh" are allowed for everyone
in the example.com domain as well as host 192.168.2.5.
Additionally, all services are available via the loopback adapter. This is a better method for tightening down a system!
It is a simpler, more direct approach and is much easier to maintain.
XINETD AND TCP_WRAPPERS
** xinetd provides its own set of access control functions
host-based
time-based
** tcp_wrappers is still used (still authoritative)
xinetd is compiled with libwrap support
If libwrap.so allows the connection, then xinetd security configuration is evaluated
THE WORKFLOW:
--------------
1) tcp_wrappers is checked
2) If access is granted, xinetd's security directives are evaluated
3) If xinetd accepts the connection, then the requested service is called and its service-specific security is evaluated
SELinux
** Mandatory Access Control (MAC) -vs- Discretionary Access Control (DAC)
** A rule set called the policy determines how strict the control
** Processes are either restricted or unconfined
** The policy defines what resources restricted processes are allowed to access
** Any action that is not explicitly allowed is, by default, denied
SELinux Security Context
** All files and processes have a security context
** The context has several elements, depending on the security needs
user:role:type:sensitivity:category
user_u:object_r:tmp_t:s0:c0
Not all systems will display s0:c0
** ls -Z <-- View security context of a file
ls -Zd <-- View security context of a directory
** ps -Z <-- View security context of a process, determines if a process is protected
ps -ZC bash -- any type with "unconfined_t" is not yet restricted by SELinux!!!
Usually paired with other options, such as -e
* To SELinux, everything is an object and access is controlled by "security elements" stored in the inode's extended attribute fields
* Collectively the "elements" are called the "security context"
* There are 5 supported elements
1) USER - indicates the type of user that is logged into the system, if elevated it will stay the same. Processes have a value of system_u
2) ROLE - defines the purpose of the particular file, process, or user
3) TYPE - used by "Type Enforcement" to specify the nature of the data in a file or process. Rules within the policy say which
process types can access which file types
4) SENSITIVITY - a security classification sometimes used by government agencies
5) CATEGORY - similar to group, but can block root's access to confidential data
RHEL4 <-- protecting 13 processes
RHEL5 (initial release) <-- protecting 88 processes, still increasing.. and 624 TYPE elements
SELinux: Targeted Policy
** The targeted policy is loaded at install time
** Most local processes are unconfined
** Principally uses the type element for type enforcement
** The security context can be changed with chcon
chcon -t tmp_t /etc/hosts <-- change the security context
chcon --reference /etc/shadow anaconda-ks.cfg <-- takes the security context from one object and apply it to another
restorecon /etc/hosts <-- the policy determines and applies the object's default context (safer)
SELinux: Management
** Modes: Enforcing (default), Permissive, Disabled
/etc/sysconfig/selinux
system-config-securitylevel
getenforce and setenforce 0 | 1 <-- see/change current mode
Disable from GRUB with selinux=0 <-- disable needs reboot!
** Policy adjustments: Booleans, file contexts, ports, etc.
system-config-selinux (from policycoreutils-gui package)
getsebool and setsebool
semanage
** Troubleshooting
Advises on how to avoid errors, not ensure security!
setroubleshootd and sealert -b
/var/log/audit/audit.log <-- where SELinux logs errors, if auditd is running
/var/log/messages <-- secondary logging
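For example, policy booleans can be inspected and toggled like this (ftp_home_dir is just an illustrative boolean):
getsebool -a | grep ftp <-- list ftp-related booleans and their current values
setsebool -P ftp_home_dir on <-- -P writes the change into the policy so it survives a reboot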
SELinux: semanage (modular - targeted policy)
** Some features controlled by semanage
** Recompiles small portions of the policy
** semanage function -l
** Most useful in high security environments
6 functions that semanage manipulates:
----------------------------------------
1) login assigns clearances to users at login
2) user assigns role transitions for users, allows for multiple privilege tiers between traditional user and root
3) port allows confined daemons to bind to non-standard ports
4) interface used to assign a security clearance to a network interface
5) fcontext defines the file contexts used by restorecon
6) translation translates sensitivity and categories into names
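A couple of hedged examples of these functions (the port number and path are assumptions):
semanage port -a -t http_port_t -p tcp 8080 <-- function 3: let httpd bind to a non-standard port
semanage fcontext -a -t httpd_sys_content_t '/web(/.*)?' <-- function 5: define a file context for /web
restorecon -Rv /web <-- apply the newly defined context to the existing files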
SELinux: File Types
** A managed service type is called its domain
** Allow rules in the policy define what file types a domain may access
** The policy is stored in a binary format, obscuring the rules from casual viewing
** Types can be viewed with semanage
semanage fcontext -l <-- list of types decompiled into human readable output
cat /etc/selinux/targeted/contexts/files/file_contexts | grep named <-- list of types decompiled into human readable output
ps -ZC named <-- generally the daemon will run with a type value that is similar to its binary name, and can access files
of a similar type
semanage fcontext -l | cut -d: -f3 | sort -u | grep "named.*_t"
** public_content_t <-- special type that may be available for data that would be shared by several daemons
###################################################################################################
[ ] UNIT 3 - SECURING DATA
###################################################################################################
The Need For Encryption
** Susceptibility of unencrypted traffic
password/data sniffing
data manipulation
authentication manipulation
equivalent to mailing on postcards
** Insecure traditional protocols
telnet, FTP, POP3, etc. : insecure passwords
sendmail, NFS, NIS, etc.: insecure information
rsh, rcp, etc.: insecure authentication
Cryptographic Building Blocks
** Random Number Generator
** One Way Hashes
** Symmetric Algorithms
** Asymmetric (Public Key) Algorithms
** Public Key Infrastructures
** Digital Certificates
** Two implementations of Cryptographic services for RHEL:
1) openssl,
2) gpg (Gnu Privacy Guard)
Random Number Generator
** Pseudo-Random Numbers and Entropy (movements - mouse, disk io, etc.)
Sources
keyboard and mouse events
block device interrupts
** Kernel provides sources (reads the Entropy)
/dev/random:
- best source
- blocks when entropy pool exhausted
/dev/urandom:
- draws from entropy pool until depleted
- falls back to pseudo-random generators
** openssl rand [ -base64 ] num
One-Way Hashes (used to check software that was downloaded)
** Arbitrary data reduced to small "fingerprint"
arbitrary length input
fixed length output
If data changed, fingerprint changes ("collision free")
data cannot be regenerated from fingerprint ("one way")
** Common Algorithms
md2, md5, mdc2, rmd160, sha, sha1
** Common Utilities
sha1sum [ --check ] file
md5sum [ --check ] file
openssl, gpg
rpm -V
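For example (the tarball name is illustrative):
sha1sum openssh-4.3p2.tar.gz > openssh.sha1 <-- record the fingerprint
sha1sum --check openssh.sha1 <-- later, verify the file still matches the recorded fingerprint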
Symmetric Encryption (used for passphrase from plain text to "ciphertext" - vice versa)
** Based upon a single Key
used to both encrypt and decrypt
** Common Algorithms
DES, 3DES, Blowfish, RC2, RC4, RC5, IDEA, CAST5
** Common Utilities
passwd (modified DES)
gpg (3DES, CAST5, Blowfish)
openssl
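A minimal sketch with openssl (the 3DES cipher and file names are arbitrary choices):
openssl enc -des3 -salt -in secret.txt -out secret.txt.enc <-- encrypt; prompts for a passphrase
openssl enc -des3 -d -in secret.txt.enc -out secret.txt <-- decrypt with the same passphrase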
Asymmetric Encryption I (public and private key - then distribute public key)
** Based upon public/private key pair
What one key encrypts, the other decrypts
** Protocol I: Encryption without key
synchronization
Recipient
- generate public/private key pair: P and S
- publish public key P, guard private key S
Sender
- encrypts message M with recipient public key
- send P(M) to recipient
Recipient
- decrypts with secret key to recover: M = S(P(M))
Asymmetric Encryption II (public and private key - combines encryption and digital signature)
** Protocol II: Digital Signatures
Sender
- generate public/private key pair: P and S
- publish public key P, guard private key S
- encrypt message M with private key S
- send recipient S(M)
Recipient
- decrypt with sender's public key to recover M = P(S(M))
** Combined Signature and Encryption
** Detached Signatures
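An illustrative gpg flow covering both protocols (the recipient and file names are assumptions):
gpg --gen-key <-- generate the public/private key pair
gpg --encrypt --recipient admin@example.com plan.txt <-- Protocol I: produces P(M); only the recipient's private key decrypts it
gpg --detach-sign plan.txt <-- Protocol II: writes the detached signature plan.txt.sig
gpg --verify plan.txt.sig plan.txt <-- anyone holding the public key can verify the signature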
Public Key Infrastructures
** Asymmetric encryption depends on public key integrity
** Two approaches discourage rogue public keys:
Publishing Key fingerprints
Public Key Infrastructure (PKI)
- Distributed web of trust
- Hierarchical Certificate Authorities
** Digital Certificates
Digital Certificates (Third Party)
** Certificate Authorities
** Digital Certificate
Owner: Public Key and Identity
Issuer: Detached Signature and Identity
Period of Validity
** Types
Certificate Authority Certificates
Server Certificates
** Self-Signed certificates
Generating Digital Certificates
** X.509 Certificate Format <-- The Standard FORMAT
** Generate a public/private key pair and define identity
openssl genrsa -out server1.key.pem 1024 <-- 1st step
** Two Options: <-- 2nd step
1) Use a Certificate Authority
- generate signature request (csr)
openssl req -new -key server1.key.pem -out server1.csr.pem
- send csr to CA
- receive signature from CA
2) Self Signed Certificates <-- 2nd step alternative (self signed)
the owner is also the issuer..such certificates are appropriate
for root level CA's, or in situations where encryption is desired
but authentication identity is not necessary
- sign your own public key
openssl req -new -key server1.key.pem -out server1.crt.pem -x509
http://www.cacert.org/ <-- CERTificate authorities that do not require payment!
[root@karl ~]# cd /etc/pki/tls/certs <-- the DIRECTORY (RHEL5) where you create your certificates
-- you may also create self signed certificate here!
/usr/share/ssl/certs <-- the DIRECTORY on RHEL4
[root@karl certs]# ls -ltr
total 664
-rw-r--r-- 1 root root 669565 2009-07-22 22:33 ca-bundle.crt
-rw-r--r-- 1 root root 2242 2009-11-18 22:10 Makefile
-rwxr-xr-x 1 root root 610 2009-11-18 22:10 make-dummy-cert
[root@karl certs]# make
This makefile allows you to create:
o public/private key pairs
o SSL certificate signing requests (CSRs)
o self-signed SSL test certificates
To create a key pair, run "make SOMETHING.key".
To create a CSR, run "make SOMETHING.csr".
To create a test certificate, run "make SOMETHING.crt".
To create a key and a test certificate in one file, run "make SOMETHING.pem".
To create a key for use with Apache, run "make genkey".
To create a CSR for use with Apache, run "make certreq".
To create a test certificate for use with Apache, run "make testcert".
To create a test certificate with serial number other than zero, add SERIAL=num
Examples:
make server.key
make server.csr
make server.crt
make stunnel.pem
make genkey
make certreq
make testcert
make server.crt SERIAL=1
make stunnel.pem SERIAL=2
make testcert SERIAL=3
OpenSSH Overview
** OpenSSH replaces common, insecure network communication applications
** Provides user and token-based authentication
** Capable of tunneling insecure protocols through port forwarding (rsync & rdist)
** System default configuration (client and server) resides in /etc/ssh/ <-- configuration file!
Below is the list of RPMs and what they provide:
------------------------------------------------
openssh ssh-keygen, scp
openssl cryptographic libraries and routines required by openssh
openssh-clients ssh, slogin, ssh-agent, ssh-add, sftp
openssh-askpass X11 passphrase dialog
openssh-askpass-gnome GNOME passphrase dialog
openssh-server sshd <-- install this only if you are providing "remote" shell access
OpenSSH Authentication
** The sshd daemon can utilize several different authentication methods
password (sent securely)
RSA and DSA keys
Kerberos
s/key and SecureID
host authentication using system key pairs
The OpenSSH Server
** Provides greater data security between networked systems
private/public key cryptography
compatible with earlier restricted-use commercial versions of SSH
** Implements host-based security through
libwrap.so
SSHD is installed with the following RPMs...
openssl
openssh
openssh-server
Service Profile: SSH
** Type: System V-managed service
** Packages: openssh, openssh-clients, openssh-server
** Daemon: /usr/sbin/sshd
** Script: /etc/init.d/sshd
** Port: 22
** Configuration: /etc/ssh/*, $HOME/.ssh/
** Related: openssl, openssh-askpass, openssh-askpass-gnome, tcp_wrappers
OpenSSH Server Configuration
** SSHD configuration file
/etc/ssh/sshd_config <-- the configuration file
** Options to consider
Protocol
ListenAddress
PermitRootLogin
Banner
Some of the configurations at /etc/ssh/sshd_config
--------------------------------------------------
Protocol 2 <-- only allow SSH2
ListenAddress 192.168.0.250:22 <-- configure to listen on multiple interfaces and multiple ports
PermitRootLogin no <-- don't allow direct remote ROOT ssh
PermitRootLogin forced-commands-only <-- don't allow direct remote ROOT ssh
PermitRootLogin without-password <-- don't allow direct remote ROOT ssh, but allow using public-key
/etc/issue.net <-- the banner!
The OpenSSH Client
** Secure shell sessions
ssh hostname
ssh user@hostname
ssh hostname remote-command
** Secure remote copy files and directories
scp file user@host:remote-dir
scp -r user@host:remote-dir localdir
** Secure ftp provided by sshd
sftp host
sftp -C user@host
Port Forwarding!
* ssh and sshd can forward TCP traffic
* obtuse syntax can be confusing
- L clientport:host:hostport
- R serverport:host:hostport
* can be used to bypass access controls
- requires successful authentication to remote sshd by client
- AllowTcpForwarding
--------------------------------------------------------------------------------------------------------
ssh -L 3025:mail.example.com:25 -N station1.example.com
Tells sshd on station1.example.com:
I, the ssh client, will listen for traffic on my host's port 3025 and send it to you, sshd on station1.
You will decrypt it and forward that traffic to port 25 on mail.example.com as if it came from you
--------------------------------------------------------------------------------------------------------
ssh -R 3025:mail.example.com:25 -N station1.example.com
Tells sshd on station1.example.com:
You, the sshd on station1, will listen for traffic on your port 3025 and send it to me.
I, the ssh client, will decrypt it and forward that traffic to port 25 on mail.example.com as if it came from me
--------------------------------------------------------------------------------------------------------
Also check here http://www.walkernews.net/2007/07/21/how-to-setup-ssh-port-forwarding-in-3-minutes/
Protecting Your Keys
MORE POWERFUL: combined SSH-AGENT and PASSPHRASE
** ssh-add -- collects key passphrases
** ssh-agent -- manages key passphrases
** ssh-copy-id -- copies public keys to other hosts
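Typical usage (assuming the default RSA key path):
eval `ssh-agent` <-- start the agent and export SSH_AUTH_SOCK / SSH_AGENT_PID
ssh-add ~/.ssh/id_rsa <-- prompts for the passphrase once
ssh user@host <-- subsequent logins reuse the key cached by the agent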
Applications: RPM
** Two implementations of file integrity
** Installed Files
MD5 One-way hash
rpm --verify package_name (or -V) <-- compare the files currently in the system against their original form
** Distributed Package Files
GPG Public Key Signature
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat* <-- import the GPG key (public)
rpm --checksig package_file_name (or -K) <-- check the signature of RPMs downloaded from the internet
VNC-SSH TUNNEL
PRE-REQ:
-----------
stationx <-- vncviewer, must be able to authenticate for SSH
stationx+100 <-- vncserver, must have AllowTcpForwarding (yes, default)
STEP BY STEP:
-----------
1) on stationx+100 do
vncserver
netstat -tupln | grep vnc
2) on stationx do
ssh -L 5901:stationx+100:5901 stationx+100 <-- establish an SSH tunnel
ssh -Nf -L 5901:stationx+100:5901 stationx+100 <-- establish the SSH tunnel in the background (-f) without executing a remote command (-N)
3) on stationx do
vncviewer localhost:5901
###################################################################################################
[ ] UNIT 4 - NETWORK RESOURCE ACCESS CONTROLS
###################################################################################################
Routing
** Routers transport packets between different networks
** Each machine needs a default gateway to reach machines outside the local network
** Additional routes can be set using the route command
ipv4 - 32-bit addressing - ~4 billion unique addresses
ipv6 - 128-bit addressing - ~340 undecillion (3.4 x 10^38) unique addresses
ip
route -n <-- display routing table
traceroute <ip> <-- diagnose routing problems
Why IPV6?
** Larger Addresses
128-bit Addressing
Extended Address Hierarchy
** Flexible Header Format
Base header - 40 octets
Next Header field supports Optional Headers for current and future extensions
** More Support for Autoconfiguration
Link-Local Addressing
Router Advertisement Daemon
Dynamic Host Configuration Protocol version 6
http://www.tldp.org/HOWTO/Linux%2BIPv6-HOWTO/ <-- IPV6 HOWTO
IPV6 on RHEL
ip -6 addr show
Utility Notes
--------------- ---------------------------------------
ping6 tests connectivity
ip -6 route displays routing table
traceroute6 verifies list of routers between systems
tracepath6 exposes the PMTU function, which is now the responsibility of the sending system
host or dig with the "-t AAAA" option will obtain the IPv6 address
netstat look for "::" to get a list of services listening on IPV6
ipv6.ko <-- the kernel module that enables IPV6; to disable it, add the following to /etc/modprobe.conf...
alias net-pf-10 off
alias ipv6 off
<-- but if the module is loaded, active interfaces will have the default link-local addresses
automatically assigned. These addresses are locally-scoped i.e. non-routable
Important options in the /etc/sysconfig/network
NETWORKING_IPV6=yes|no <-- enables/disables execution of any IPV6 in startup scripts
IPV6_DEFAULTGW="2001:db8:100:1::ffff" <-- manually define default gateway
Important options in the /etc/sysconfig/network-scripts/ifcfg-eth0
IPV6INIT=yes|no <-- enables/disables execution of any IPV6 in startup scripts on this interface
IPV6_AUTOCONF=yes|no <-- enables/disables listening to Router Advertisements for dynamic configuration
DHCPV6C=yes|no <-- enables/disables sending a DHCP multicast request to ff02::16 to obtain dynamic configuration
IPV6ADDR="2001:db8:100:0::1/64" <-- assign first static IPV6 global unicast address and prefix to interface
IPV6ADDR_SECONDARIES="2001:db8:100:1::1/64 2001:db8:100:2::1/64" <-- assign additional Global Unicast addresses to the interface
/etc/sysconfig/network-scripts/route6-eth0 <-- where static routes can be persistently defined using "ip -6 route add"
2001:db8:100:5::/64 via 2001:db8:100:1::ffff
/usr/share/doc/initscripts-9.02/sysconfig.txt <-- other details here!!!
tcp_wrappers and IPv6
** tcp_wrappers is IPv6 aware
When IPv6 is fully implemented throughout the domain, ensure tcp_wrappers rules include IPv6 addresses
** Example:
preserving localhost connectivity, add to /etc/hosts.allow
ALL: [::1]
[fe80::]/64 <-- IPV6 addresses are enclosed in brackets and may be coupled with a prefix to represent a network
Netfilter Overview
** Filtering in the kernel: no daemon
** Asserts policies at layers 2, 3 & 4 of the OSI Reference Model
** Only inspects packet headers
** Consists of netfilter modules in kernel, and the iptables user-space software
Netfilter Tables and Chains
[img[picturename| http://lh6.ggpht.com/_F2x5WXOJ6Q8/TQhtM0dOccI/AAAAAAAAA-U/tdBtUUKt4Vo/NetfilterTablesAndChains.png]]
--------------------------------------
TABLE
----------------------------
Filtering Point   filter   nat   mangle    <-- "table names" are case sensitive and are in lower case!
---------------   ------   ---   ------
INPUT               X              X       <-- "filtering point" names are case sensitive and are in UPPER case!
FORWARD             X              X
OUTPUT              X       X      X
PREROUTING                  X      X
POSTROUTING                 X      X
filter <-- the main packet filtering is performed in this table
nat <-- this is where NAT occurs
mangle <-- this is where a limited number of "special effects" can happen. this table is rarely used
custom chains <-- can be created at runtime
Netfilter Packet Flow
[img[picturename| http://lh6.ggpht.com/_F2x5WXOJ6Q8/TQhtNBPVqFI/AAAAAAAAA-Y/yEo6Y0o-IBQ/NetfilterPacketFlow.png]]
If a packet's destination is to local address then it's handled by the local process. Else, if to another system, and if
packet forwarding is enabled then packets are directed in accordance with the routing table.
PREROUTING this filter point deals with packets first upon arrival (nat)
FORWARD this filter point handles packets being routed through the local system (filter)
INPUT this filter point handles packets destined for the local system, after the routing decision (filter)
OUTPUT this filter point handles packets after they have left their sending process and prior to POSTROUTING (nat and filter)
POSTROUTING this filter point handles packets immediately prior to leaving the system (nat)
Rule Matching
** Rules in ordered list
** Packets tested against each rule in turn
** On first match, the target is evaluated: usually exits the chain
** Rule may specify multiple criteria for match
** Every criterion in a specification must be met for the rule to match (logical AND)
** Chain policy (default) applies if no match
Rule Targets
Rule Targets determine what action to take when a packet matches the rule's selection criteria
-j <-- the option of the iptables command; the target can be a built-in target, a custom chain, or an extension target
** Built-in targets: DROP, ACCEPT
** Extension targets: LOG, REJECT, custom chain
REJECT sends a notice returned to sender
LOG connects to system log kernel facility
LOG match does not exit the chain
** Target is optional, but no more than one per rule and defaults to the chain policy if absent
Simple Example
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TQhtNI8mHSI/AAAAAAAAA-c/GqP0HzgCDxA/NetfilterSimpleExample.png]]
iptables -t filter -A INPUT -s 192.168.0.1 -j DROP
"the example will append a single rule to the INPUT chain of the filter table. This rule
causes any packet with a source address (-s) of 192.168.0.1 to match and "jump" to
it's target, DROP, and discarded"
Our first consideration should be whether our system
- is mostly open (accepting most packets)
- or mostly closed (denying most packets)
The tendency toward open or closed affects not only the rules, but most importantly the chain policy,
which takes effect when no rule matches or none is present. The policy becomes the target for the packet under inspection.
Basic Chain Operations
** List rules in a chain or table (-L or -vL)
** Append a rule to the chain (-A) <-- append rule at the end of existing chain,
if a table is not specified then the "filter" table is assumed
** Insert a rule to the chain (-I)
-I CHAIN (inserts as the first rule) <-- you can insert as the first or at a given point
-I CHAIN 3 (inserts as rule 3)
** Delete an individual rule (-D)
-D CHAIN 3 (deletes rule 3 of the chain)
-D CHAIN RULE (deletes rule explicitly)
-F <-- used to Flush, or remove all rules from a chain. this does not reset the chain policy
-L <-- list the contents of the chain (rules and policy)
-v <-- displays packet and byte counters,interfaces,protocols
-n <-- prevents time consuming reverse lookups of IP addresses
--line-numbers <-- displays line numbers that could then be used to determine the rule number to be used w/ -D or -I
iptables -t filter -nvL --line-numbers <-- example usage that prints good output
Common Match Criteria
"Most rules in the filter table involve allowing or denying packets based on their source or destination."
IP address or network
-s 192.168.0.0/24 <-- packet's source
-d 192.168.0.1 <-- packet's destination
Network interface
-i lo <-- packet's interface arriving
-o eth1 <-- packet's interface leaving
Criteria can be inverted with '!'
-i eth0 -s '!' 192.168.0.0/24
Transport protocol and port
-p tcp --dport 80
-p udp --sport 53
port ranges can be specified with start:end
ICMP type
-p icmp --icmp-type host-unreachable
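For example, a single rule combining several of these criteria (the interface and addresses are assumptions):
iptables -A INPUT -i eth0 -p tcp -s 192.168.0.0/24 --dport 80 -j ACCEPT <-- every criterion must match (logical AND)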
Additional Chain Operations
** Assign chain policy (-P CHAIN TARGET)
ACCEPT (default, a built-in target)
DROP (a built-in target)
REJECT (not permitted, an extension target)
** Flush all rules of a chain (-F)
Does not flush the policy
** Zero byte and packet counters (-Z [CHAIN])
Useful for monitoring chain statistics
** Manage custom chains (-N, -X)
-N Your_Chain-Name (adds chain)
-X Your_Chain-Name (deletes chain)
Rules: General Considerations
Match Arguments
Connection Tracking
ip_conntrack
cat /proc/net/ip_tables_matches
cat /proc/net/ip_conntrack
Connection Tracking, continued
Connection Tracking Example
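A minimal stateful ruleset sketch (the DROP policy and choice of port 22 are illustrative, not from the course text):
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT <-- let replies to tracked connections back in
iptables -A INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT <-- accept only new SSH connections
iptables -P INPUT DROP <-- everything else hits the chain policy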
Network Address Translation (NAT)
DNAT Examples
SNAT Examples
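Hedged sketches of both translations (addresses and interfaces are assumptions); run "service iptables save" to persist the running ruleset to /etc/sysconfig/iptables:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.0.10 <-- DNAT: publish an internal web server
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 1.2.3.4 <-- SNAT: rewrite the outbound source to a fixed public IP
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE <-- use MASQUERADE instead when the public IP is dynamic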
Rules Persistence
Sample /etc/sysconfig/iptables
IPv6 and ip6tables
Solutions:
iptables -t filter -N CLASS-RULES
iptables -t filter -A INPUT -j CLASS-RULES
iptables -t filter -A CLASS-RULES -i lo -j ACCEPT
iptables -t filter -A CLASS-RULES -p icmp -j ACCEPT
iptables -t filter -A CLASS-RULES -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t filter -A CLASS-RULES --protocol tcp --dport 22 -j ACCEPT
iptables -t filter -A CLASS-RULES -m state --state NEW --protocol udp --dport 514 -j ACCEPT
iptables -t filter -A CLASS-RULES -j LOG
iptables -t filter -A CLASS-RULES -j REJECT
[root@server1 ~]# iptables -nvL --line-numbers
Chain INPUT (policy ACCEPT 1768 packets, 155K bytes)
num pkts bytes target prot opt in out source destination
1 781 65424 CLASS-RULES all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 2524 packets, 213K bytes)
num pkts bytes target prot opt in out source destination
Chain CLASS-RULES (1 references)
num pkts bytes target prot opt in out source destination
1 376 34383 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
2 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
3 339 24769 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
4 1 60 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
5 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:514
6 9 963 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
7 9 963 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
[root@server1 sysconfig]# cat iptables
# Generated by iptables-save v1.2.11 on Thu Dec 16 17:45:15 2010
*filter
:INPUT ACCEPT [1768:154858]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2324:189432]
:CLASS-RULES - [0:0]
-A INPUT -j CLASS-RULES
-A CLASS-RULES -i lo -j ACCEPT
-A CLASS-RULES -p icmp -j ACCEPT
-A CLASS-RULES -m state --state RELATED,ESTABLISHED -j ACCEPT
-A CLASS-RULES -p tcp -m tcp --dport 22 -j ACCEPT
-A CLASS-RULES -p udp -m state --state NEW -m udp --dport 514 -j ACCEPT
-A CLASS-RULES -j LOG
-A CLASS-RULES -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Thu Dec 16 17:45:15 2010
Now let's do something.. I want to allow SSH only from the 172.24 segment, with higher priority than the custom chain
well.. it will still allow SSH from the other network segments because of the SSH rule on the custom chain..
iptables -t filter -I INPUT 1 -s 172.24.0.0/16 --protocol tcp --dport 22 -j ACCEPT
so you have to remove it...
iptables -t filter -D CLASS-RULES 4
here's the report
[root@server1 ~]# iptables -nvL --line-numbers
Chain INPUT (policy ACCEPT 1768 packets, 155K bytes)
num pkts bytes target prot opt in out source destination
1 453 35526 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:22
2 1433 124K CLASS-RULES all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 3547 packets, 318K bytes)
num pkts bytes target prot opt in out source destination
Chain CLASS-RULES (1 references)
num pkts bytes target prot opt in out source destination
1 800 73396 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
2 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
3 457 34861 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
4 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:514
5 117 10082 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
6 117 10082 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
[root@server1 ~]# cat /etc/sysconfig/iptables
# Generated by iptables-save v1.2.11 on Thu Dec 16 18:16:14 2010
*filter
:INPUT ACCEPT [1768:154858]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3498:312822]
:CLASS-RULES - [0:0]
-A INPUT -s 172.24.0.0/255.255.0.0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -j CLASS-RULES
-A CLASS-RULES -i lo -j ACCEPT
-A CLASS-RULES -p icmp -j ACCEPT
-A CLASS-RULES -m state --state RELATED,ESTABLISHED -j ACCEPT
-A CLASS-RULES -p udp -m state --state NEW -m udp --dport 514 -j ACCEPT
-A CLASS-RULES -j LOG
-A CLASS-RULES -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Thu Dec 16 18:16:14 2010
###################################################################################################
[ ] UNIT 5 - ORGANIZING NETWORKED SYSTEMS
###################################################################################################
Host Name Resolution
** Some name services provide mechanisms to translate host names into lower-layer addresses so that computers can communicate
Example: Name --> MAC address (link layer)
Example: Name --> IP address (network layer) --> MAC address (link layer)
** Common Host Name Services
Files (/etc/hosts and /etc/networks)
DNS
NIS
** Multiple client-side resolvers:
"stub"
dig
host
nslookup
The Stub Resolver
** Generic resolver library available to all applications
Provided through gethostbyname() and other glibc functions
Not capable of sophisticated access controls, such as packet signing or encryption
** Can query any name service supported by glibc
** Reads /etc/nsswitch.conf to determine
the order in which to query name services, as
shown here for the default configuration:
hosts: files dns
** The NIS domain name and the DNS domain name should usually be different to simplify troubleshooting and avoid name collisions
DNS-Specific Resolvers
** host
Never reads /etc/nsswitch.conf
By default, looks at both the nameserver and
search lines in /etc/resolv.conf
Minimal output by default
** dig
Never reads /etc/nsswitch.conf
By default, looks only at the nameserver line in /etc/resolv.conf
Output is in RFC-standard zone file format, the
format used by DNS servers, which makes dig
particularly useful for exploring DNS resolution
** nslookup
Trace a DNS Query with dig
** dig +trace redhat.com
Reads /etc/resolv.conf to determine
nameserver
Queries for root name servers
Chases referrals to find name records (answers)
See notes for sample output in case the training
center's firewall restricts outbound DNS
** This is known as an iterative query
** Initial Observations:
Names are organized in an inverted tree with root
(.) at top
The name hierarchy allows DNS to cross
organizational boundaries
Names in records end with a dot when fully-qualified
Other Observations
** Answers in the previous trace are in the form of resource records
** Each resource record has five fields:
domain - the domain or subdomain being
queried
ttl - how long the record should be cached,
expressed in seconds
class - record classification (usually IN)
type - record type, such as A or NS
rdata - resource data to which the
domain maps
** Conceptually, one queries against the
domain (name), which is mapped to
the rdata for an answer
** In the trace example,
The NS (name server) records are referrals
The A (address) record is the final answer and is the
default query type for dig
IN class <-- the most common class.. two other types are CH (Chaos) and HS (Hesiod)
origin <-- refers to the name of domain or subdomain as it is managed by a particular server
canonical <-- the usual or real name of a host
Domain                  Class   Record Type   rdata
canonical name          IN      A             IPv4 address
canonical name          IN      AAAA          IPv6 address
alias                   IN      CNAME         canonical name
origin                  IN      MX            canonical name of mail exchanger
origin                  IN      NS            canonical name of nameserver
reversed IP addresses   IN      PTR           canonical name
origin                  IN      SOA           authoritative info
Forward Lookups
** dig redhat.com <-- look for status: NOERROR and answer: 1
Attempts recursion first, as indicated by rd (recursion
desired) in the flags section of the output:
if the nameserver allows recursion, then the server finds
the answer and returns the requested records to the
client
If the nameserver does not allow recursion, then the
server returns a referral to a top-level domain, which
dig chases
** Observations
dig's default query type is A; the rdata for an A record
is an IPv4 address
Use -t AAAA to request IPv6 rdata
When successful, dig returns a status of NOERROR, an
answer count, and also indicates which nameservers are
authoritative for the name
Reverse Lookups
** dig -x 209.132.177.50 <-- look for status: NOERROR and answer: 1
** Observations
The question section in the output shows that DNS
reverses the octets of an address and appends
in-addr.arpa. to fully qualify the domain part of the record
The answer section shows that DNS uses PTR
(pointer) records for reverse lookups
Additionally, the rdata for a PTR record is a fully-qualified domain name
Mail Exchanger Lookups
** An MX record maps a domain to the fully-qualified domain name of a mail server
** dig -t mx redhat.com
** Observations
The rdata field is extended to include an additional
piece of data called the priority
The priority can be thought of as a distance:
networks prefer shorter distances
To avoid additional lookups, nameservers typically
provide A records as additional responses to
correspond with the FQDNs provided in the MX
records
Together, an MX record and its associated A record
resolve a domain's mail server
SOA Lookups
** An SOA record marks a server as a master authority
** dig -t soa redhat.com
** Initial Observations
The domain field is called the origin
The rdata field is extended to support additional
data, explained on the next slide
There is typically only one master nameserver for a
domain; it stores the master copy of its data
Other authoritative nameservers for the domain or
zone are referred to as slaves; they synchronize
their data from the master
SOA rdata
** Master nameserver's FQDN
** Contact email
** Serial number
** Refresh delay before checking serial number
** Retry interval for slave servers
** Expiration for records when the slave cannot
contact its master(s)
** Minimum TTL for negative answers ("no such
host")
Being Authoritative
** The SOA record merely indicates the master
server for the origin (domain)
** A server is authoritative if it has:
Delegation from the parent domain: NS record plus
A record
A local copy of the domain data, including the SOA
record
** A nameserver that has the proper delegation
but lacks domain data is called a
lame server
The Everything Lookup
** dig -t axfr example.com. @192.168.0.254
** Observations
All records for the zone are transferred
Records reveal much inside knowledge of the
network
Response is too big for UDP, so transfers use TCP
** Most servers restrict zone transfers to a
select few hosts (usually the slave nameservers)
** Use this command from a slave to test
permissions on the master
Exploring DNS with host
** For any of the following queries, add a -v
option to see output in zone file format
** Trace: not available
** Delegation: host -rt ns redhat.com
** Force iterative: host -r redhat.com
** Reverse lookup: host 209.132.177.50
** MX lookup: host -t mx redhat.com
** SOA lookup: host -t soa redhat.com
** Zone transfer: host -t axfr redhat.com 192.168.0.254 or
host -t ixfr=serial example.com. 192.168.0.254
Transitioning to the Server
** Red Hat Enterprise Linux uses BIND, the Berkeley Internet Name Daemon
** BIND is the most widely used DNS server on
the Internet
A stable and reliable infrastructure on which to base
a domain's name and IP address associations
The reference implementation for DNS RFC's
Runs in a chrooted environment
Service Profile: DNS
Access Control Profile: BIND
** Netfilter: tcp/udp ports 53 and 953 incoming; tcp/udp ephemeral ports outgoing
** TCP Wrappers: N/A
ldd `which named` | grep libwrap
strings `which named` | grep hosts
** Xinetd: N/A (named is a standalone daemon)
** PAM: N/A (no configuration in /etc/pam.d/)
** SELinux: yes - see notes
** App-specific controls: yes, discussed in later slides and in the ARM
/usr/share/doc/bind-*/arm/Bv9ARM.{html,pdf}
[root@server1 ~]# cat /etc/selinux/targeted/contexts/files/file_contexts | grep named
# named
/var/named(/.*)? system_u:object_r:named_zone_t
/var/named/slaves(/.*)? system_u:object_r:named_cache_t
/var/named/data(/.*)? system_u:object_r:named_cache_t
/etc/named\.conf -- system_u:object_r:named_conf_t
/etc/rndc.* -- system_u:object_r:named_conf_t
/usr/sbin/named -- system_u:object_r:named_exec_t
/var/run/ndc -s system_u:object_r:named_var_run_t
/var/run/bind(/.*)? system_u:object_r:named_var_run_t
/var/run/named(/.*)? system_u:object_r:named_var_run_t
/usr/sbin/lwresd -- system_u:object_r:named_exec_t
/var/log/named.* -- system_u:object_r:named_log_t
/var/named/named\.ca -- system_u:object_r:named_conf_t
/var/named/chroot(/.*)? system_u:object_r:named_conf_t
/var/named/chroot/dev/null -c system_u:object_r:null_device_t
/var/named/chroot/dev/random -c system_u:object_r:random_device_t
/var/named/chroot/dev/zero -c system_u:object_r:zero_device_t
/var/named/chroot/etc(/.*)? system_u:object_r:named_conf_t
/var/named/chroot/etc/rndc.key -- system_u:object_r:dnssec_t
/var/named/chroot/var/run/named.* system_u:object_r:named_var_run_t
/var/named/chroot/var/tmp(/.*)? system_u:object_r:named_cache_t
/var/named/chroot/var/named(/.*)? system_u:object_r:named_zone_t
/var/named/chroot/var/named/slaves(/.*)? system_u:object_r:named_cache_t
/var/named/chroot/var/named/data(/.*)? system_u:object_r:named_cache_t
/var/named/chroot/var/named/named\.ca -- system_u:object_r:named_conf_t
Getting Started with BIND
** Install packages
bind <-- for core binaries
bind-chroot <-- for security
caching-nameserver <-- for an initial configuration
** Configure startup
service named configtest
service named start
chkconfig named on
** Proceed with essential named configuration
Essential named Configuration
** Configure the stub resolver
** Define access controls in /etc/named.conf
Declare client match lists
Server interfaces: listen-on and listen-on-v6
What queries should be allowed?
- Iterative: allow-query { match-list; };
- Recursive: allow-recursion { match-list; };
- Transfers: allow-transfer { match-list; };
** Add data via zone files
** Test!
Configure the Stub Resolver
** On the nameserver:
Edit /etc/resolv.conf to specify nameserver 127.0.0.1
Edit /etc/sysconfig/network-scripts/ifcfg-* to specify PEERDNS=no
** Advantages:
Ensures consistent lookups for all applications
Simplifies access controls and troubleshooting
** Besides /etc/resolv.conf, where can an
unprivileged user see what nameservers DHCP provides?
bind-chroot Package
** Installs a chroot environment under /var/named/chroot
** Moves existing config files into the chroot
environment, replacing the original files with
symlinks
** Updates /etc/sysconfig/named with a
named option:
ROOTDIR=/var/named/chroot
** Tips
Inspect /etc/sysconfig/named after installing
bind-chroot
Run ps -ef | grep named after starting named to
verify startup options
caching-nameserver Package
** Provides
named.caching-nameserver.conf
named.ca containing root server 'hints'
Forward and reverse lookup zone files for machine-local
names and IP addresses (e.g., localhost.localdomain)
** Tips
Copy named.caching-nameserver.conf to
named.conf
Change ownership to root:named
Edit named.conf
** The following slides describe essential access directives
http://www.ietf.org/rfc/rfc1912.txt <-- RFC for common DNS errors
-------------
GOTCHAs!!!
---------------------------------------------------------------------------------------------------------------------
* system-config-bind utilities will overwrite /etc/named.caching-nameserver.conf if it exists, so you should
copy or move the file to /etc/named.conf before making any changes
* The named init script reads /etc/named.caching-nameserver.conf only if /etc/named.conf is unreadable, which
will be the case if /etc/named.conf doesn't exist, has improper file ownership/permissions, or has the wrong
SELinux context
---------------------------------------------------------------------------------------------------------------------
Address Match List
** A semicolon-separated list of IP addresses or subnets used with security directives for host-based access control
** Format
IP address: 192.168.0.1
Trailing dot: 192.168.0.
CIDR: 192.168.0/24
Use a bang (!) to denote inversion
** A match list is checked in order, stopping on first match
** Example:
{ 192.168.0.1; 192.168.0.; !192.168.1.0/24; };
Access Control List (ACL)
** In its simplest form, an ACL assigns a name to an address match list
** Can generally be used in place of a match list (nesting is allowed!)
** Best practice is to define ACL's at the top of /etc/named.conf
** Example declarations
acl "trusted" { 192.168.1.21; };
acl "classroom" { 192.168.0.0/24; trusted; };
acl "cracker" { 192.168.1.0/24; };
acl "mymasters" { 192.168.0.254; };
acl "myaddresses" { 127.0.0.1; 192.168.0.1; };
Built-In ACL's
** BIND pre-defines four ACL's
none - No IP address matches
any - All IP addresses match
localhost - Any IP address of the name server matches
localnets - Directly-connected networks match
** What is the difference between the localhost built-in
ACL and the myaddresses example on the previous
page (assuming the server is multi-homed)?
Server Interfaces
** Option: listen-on port 53 { match-list; };
** Binds named to specific interfaces
** Example
listen-on port 53 { myaddresses; };
listen-on-v6 port 53 { ::1; };
** Restart and verify: netstat -tulpn | grep named
** Questions:
What if listen-on does not include
127.0.0.1?
How might changing listen-on-v6 to :: (all IPv6
addresses) affect IPv4?
** Default: if listen-on is missing, named
listens on all interfaces
Allowing Queries
** Option: allow-query { match-list; };
** Server provides both authoritative and
cached answers to clients in match list
** Example:
allow-query { classroom; cracker; };
** Default: if allow-query is missing, named
allows all
Allowing Recursion
** Option: allow-recursion { match-list; };
** Server chases referrals on behalf of clients in the match list
** Example:
allow-recursion { classroom; !cracker; };
** Questions
What happens if 192.168.1.21 tries a recursive query?
What happens if 127.0.0.1 tries a recursive query?
** Default: if allow-recursion is missing, named allows all
Allowing Transfers
** Option: allow-transfer { match-list; };
** Clients in the match list are allowed to act as slave servers
** Example:
allow-transfer { !cracker; classroom; };
** Questions
What happens if 192.168.1.21 tries a slave transfer?
What happens if 127.0.0.1 tries a slave transfer?
** Default: if allow-transfer is missing, named allows all
Modifying BIND Behavior
** Option: forwarders { match-list; };
** Modifier: forward first | only;
** Directs named to recursively query specified servers before or instead of chasing referrals
** Example:
forwarders { mymasters; };
forward only;
** How can you determine if forwarders is required?
** If the forward modifier is missing, named assumes first
Access Controls: Putting it Together
** Sample /etc/named.conf with essential access control options:
// acl's make security directives easier to read
acl "myaddresses" { 127.0.0.1; 192.168.0.1; };
acl "trusted" { 192.168.1.21; };
acl "classroom" { 192.168.0.0/24; trusted; };
acl "cracker" { 192.168.1.254; };
options {
# bind to specific interfaces
listen-on port 53 { myaddresses; };
listen-on-v6 port 53 { ::1; };
# make sure I can always query myself for troubleshooting
allow-query { localhost; classroom; cracker; };
allow-recursion { localhost; classroom; !cracker; };
/* don't let cracker (even trusted) do zone transfers */
allow-transfer { localhost; !cracker; classroom; };
# use a recursive, upstream nameserver
forwarders { 192.168.0.254; };
forward only;
};
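After assembling named.conf like this, a quick syntax-check-and-verify pass keeps surprises down; a minimal sketch (the dig target assumes localhost is in allow-query, as it is above):
{{{
service named configtest            # catches syntax errors before a failed start
service named reload
netstat -tulpn | grep named         # confirm the listen-on interfaces took effect
dig @127.0.0.1 redhat.com           # recursive query goes out via the forwarder
}}}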
Slave Zone Declaration
zone "example.com" {
type slave;
masters { mymasters; };
file "slaves/example.com.zone";
};
** Sample zone declaration directs the server to:
Act as an authoritative nameserver for example.com, where example.com is the origin as specified in the SOA record's domain field
Be a slave for this zone
Perform zone transfers (AXFR and IXFR) against the hosts in the masters option
Store the transferred data in /var/named/chroot/var/named/slaves/example.com.zone
** Reload named to automatically create the file
Master Zone Declaration
zone "example.com" {
type master;
file "example.com.zone";
};
** Sample zone declaration directs the server to:
Act as an authoritative nameserver for example.com, where example.com is the origin as specified in the SOA record's domain field
Be a master for this zone
Read the master data from /var/named/chroot/var/named/example.com.zone
** Manually create the master file before reloading named
Zone File Creation
** Content of a zone file:
A collection of records, beginning with the SOA record
The @ symbol is a variable representing the zone's origin as specified in the zone declaration from /etc/named.conf
Comments are assembly-style (;)
** Precautions:
BIND appends the domain's origin to any name that is not properly dot-terminated
If the domain field is missing from a record, BIND uses the value from the previous record (Danger! What if another admin changes the record order?)
Remember to increment the serial number and reload named after modifying a zone file
** What DNS-specific resolver puts its output in zone file format?
Tips for Zone Files
** Shortcuts:
Do not start from scratch - copy an existing zone file installed by the caching-nameserver package
To save typing, put $TTL 86400 as the first line of a zone file, then omit the TTL from individual records
BIND allows you to split multi-valued rdata across lines when enclosed within parentheses ()
** Choose a filename for your zone file that reflects the origin in some way
Testing
** Operation
Select one of dig, host, or nslookup, and use it expertly to verify the operation of your DNS server
Run tail -f /var/log/messages in a separate shell when restarting services
** Configuration
BIND will fail to start for syntax errors, so always run service named configtest after editing config files
configtest runs two syntax utilities against files specified in your configuration, but the utilities may be run separately against files outside your configuration
BIND Syntax Utilities
** named-checkconf -t ROOTDIR /path/to/named.conf
Inspects /etc/named.conf by default (which will be the wrong file if the -t option is missing)
Example: named-checkconf -t /var/named/chroot
** named-checkzone origin /path/to/zonefile
Inspects a specific zone configuration
Example:
named-checkzone redhat.com \
/var/named/chroot/var/named/redhat.com.zone
Advanced BIND Topics
** Remote Name Daemon Control (rndc)
** Delegating Subdomains
Remote Name Daemon Control (rndc)
** Provides local and remote management of named
** The bind-chroot package configures rndc
Listens on the IPv4 and IPv6 loopbacks only
Reads key from /etc/rndc.key
If the key does not match, you cannot start or stop the named service
No additional configuration is needed for a default, local install
** Example - flush the server's cache: rndc flush
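rndc has a few other subcommands worth knowing here; a short sketch (all of these exist in BIND 9; the zone name is just the course example):
{{{
rndc status                 # server version, zone count, recursion status
rndc reload                 # reload configuration and zones
rndc reload example.com     # reload just one zone
rndc flush                  # clear the cache
}}}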
Delegating Subdomains
** Steps
On the child, create a zone file to hold the subdomain's data
On the parent, add an NS record
On the parent, add an A record to complete the delegation (see the sketch below)
** Glue Records
If the child's canonical name is in the subdomain it manages, the A record is called a glue record
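A sketch of what those parent-side records could look like (the corp subdomain and address are made up for illustration):
{{{
; in the parent zone file for example.com
corp        IN NS  ns1.corp.example.com.  ; NS record delegating corp.example.com
ns1.corp    IN A   192.168.0.10           ; glue record: the NS name lives inside the delegated zone
}}}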
DHCP Overview
** DHCP: Dynamic Host Configuration Protocol, implemented via dhcpd
** dhcpd provides services to both DHCP and BOOTP IPv4 clients
Service Profile: DHCP
** Type: SystemV-managed service
** Package: dhcp
** Daemon: /usr/sbin/dhcpd
** Script: /etc/init.d/dhcpd
** Ports: 67 (bootps), 68 (bootpc)
** Configuration: /etc/dhcpd.conf, /var/lib/dhcpd/dhcpd.leases
** Related: dhclient, dhcpv6_client, dhcpv6
Configuring an IPv4 DHCP Server
** Configure the server in /etc/dhcpd.conf
** Sample configuration provided in /usr/share/doc/dhcp-version/dhcpd.conf.sample
** There must be at least one subnet block, and it must correspond with configured interfaces (a sketch below)
** Run service dhcpd configtest to check syntax
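A minimal subnet block sketch for /etc/dhcpd.conf (the addresses are assumptions following the classroom's 172.24 convention, not from the original notes):
{{{
subnet 172.24.0.0 netmask 255.255.0.0 {
    range 172.24.0.200 172.24.0.220;           # leasable address pool
    option routers 172.24.254.254;             # default gateway handed to clients
    option domain-name-servers 172.24.0.103;   # DNS server handed to clients
    default-lease-time 21600;
    max-lease-time 43200;
}
}}}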
Service Profile: DNS <-- installs in an unconfigured state
** Type: System V-managed service
** Packages: bind, bind-utils, bind-chroot
** Daemons: /usr/sbin/named, /usr/sbin/rndc
** Script: /etc/init.d/named
** Ports: 53 (domain), 953(rndc)
** Configuration: (Under /var/named/chroot/) /etc/named.conf, /var/named/*, /etc/rndc.key
** Related: caching-nameserver, openssl
Required RPMS:
bind <-- for core binaries
bind-utils
bind-chroot <-- for security
caching-nameserver <-- for an initial configuration
Applicable Access Controls:
----------------------------------------------------------
Access Control Implementation
----------------------------------------------------------
Application listen-on, allow-query, allow-transfer, forwarders
PAM N/A (no files in /etc/pam.d reference named)
xinetd N/A (init-managed standalone daemon)
libwrap N/A
SELinux ensure correct file context; no change to booleans
Netfilter, IPv6 disregard IPV6 access for now
Netfilter inbound UDP and TCP port 53 and 953 from 192.168.0.0/24
outbound to port 53 + ephemeral ports (>=1024)
[root@server1 ~]# ls -l /etc/named.conf
lrwxrwxrwx 1 root root 32 Nov 20 21:14 /etc/named.conf -> /var/named/chroot/etc/named.conf
Configuration:
--------------
yum install bind bind-utils bind-chroot caching-nameserver
change resolv.conf
modify named.conf (master/slave)
create zone files
-------------------------------------------------------------
Solutions:
sequence1- Implement a minimal DNS server - caching only nameserver
sequence2- Add data to the name server
sequence3- Add slave DNS capabilities
sequence4- Cleaning up
-------------------------------------------------------------
sequence1- Implement a minimal DNS server - caching only nameserver
-----------------------------
yum install bind bind-chroot caching-nameserver bind-utils
[root@station103 ~]# cat /etc/services | grep domain
domain 53/tcp # name-domain server
domain 53/udp
[root@station103 ~]# ldd $(which named) | grep libwrap <-- no LIBWRAP
[root@station103 ~]# cat /etc/sysconfig/named <-- to get the ROOTDIR
ROOTDIR=/var/named/chroot
chgrp named /var/named/chroot/etc/named.conf <-- change ownership
chkconfig named on
service named start
iptables -t filter -I CLASS-RULES 4 -p tcp --dport 53 -j ACCEPT <-- add iptables rules
iptables -t filter -I CLASS-RULES 4 -p udp --dport 53 -j ACCEPT
[root@station103 ~]# iptables -nvL --line-numbers
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 3375 786K CLASS-RULES all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 2338 packets, 276K bytes)
num pkts bytes target prot opt in out source destination
Chain CLASS-RULES (1 references)
num pkts bytes target prot opt in out source destination
1 96 7072 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
2 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
3 3150 753K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
4 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:53
5 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
6 0 0 ACCEPT tcp -- * * 172.25.0.0/16 0.0.0.0/0 tcp dpt:25
7 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:25
8 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:995
9 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:993
10 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:3128
11 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:80
12 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:445
13 2 120 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
14 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:514
15 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:21 state NEW
16 0 0 ACCEPT udp -- * * 172.24.0.0/16 0.0.0.0/0 udp dpts:4002:4005
17 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpts:4002:4005
18 0 0 ACCEPT udp -- * * 172.24.0.0/16 0.0.0.0/0 udp dpt:2049
19 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:2049
20 0 0 ACCEPT udp -- * * 172.24.0.0/16 0.0.0.0/0 udp dpt:111
21 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:111
22 127 25637 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
23 127 25637 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
service iptables save
service iptables restart
Add the following to the "options" section of /etc/named.conf <-- named.conf configuration, on the heading part
listen-on port 53 { localhost; }; <-- server interfaces, where named will listen on, defaults to ALL interfaces
allow-query { localhost; 172.24.0.0/16; }; <-- iterative, provides AUTHORITATIVE AND CACHED answers to clients on list, defaults allow ALL
allow-transfer { localhost; 172.24.254.254; }; <-- transfers, clients on list are allowed to act as SLAVE SERVERS
forwarders { 172.24.254.254; }; <-- anything this DNS can't resolve gets forwarded to this!
forward only;
sequence2- Add data to the name server
-----------------------------
[root@station103 ~]# cat /etc/resolv.conf <-- edit the resolv.conf, if on DHCP interface must have PEERDNS=no on network config
search domain103.example.com
nameserver 127.0.0.1
Add a forward lookup zone for domain103.example.com
- declare a zone in named.conf
- create a zone file to hold the data
zone "domain103.example.com" IN { <-- create a FORWARD LOOKUP ZONE in /etc/named.conf, on the bottom part
type master;
file "domain103.example.com.zone";
allow-update { none; };
forwarders {};
};
service named configtest
[root@station103 named]# cp -a localdomain.zone domain103.example.com.zone <-- COPY localdomain.zone to a new FORWARD ZONE file
[root@station103 named]#
[root@station103 named]# ls -ltr
total 80
drwxrwx--- 2 named named 4096 Jul 27 2004 slaves
drwxrwx--- 2 named named 4096 Aug 26 2004 data
-rw-r--r-- 1 named named 416 Aug 26 2004 named.zero
-rw-r--r-- 1 named named 433 Aug 26 2004 named.local
-rw-r--r-- 1 named named 432 Aug 26 2004 named.ip6.local
-rw-r--r-- 1 named named 2518 Aug 26 2004 named.ca
-rw-r--r-- 1 named named 415 Aug 26 2004 named.broadcast
-rw-r--r-- 1 named named 195 Aug 26 2004 localhost.zone
-rw-r--r-- 1 named named 198 Aug 26 2004 localdomain.zone
-rw-r--r-- 1 named named 198 Aug 26 2004 domain103.example.com.zone
[root@station103 named]# cat domain103.example.com.zone <-- CREATE the FORWARD ZONE file
$TTL 86400
@ IN SOA station103 root (
43 ; serial (d. adams) <-- increment this serial#
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
@ IN NS station103 <-- NS record
@ IN MX 10 station103 <-- MX record, below the NS
station3 IN A 172.24.0.3 <-- the A records
station103 IN A 172.24.0.103
service named configtest
service named restart
[root@station103 named]# host station3 localhost <-- test your FORWARD LOOKUPs
Using domain server:
Name: localhost
Address: 127.0.0.1#53
Aliases:
station3.domain103.example.com has address 172.24.0.3
[root@station103 named]# host station3 <-- test your FORWARD LOOKUPs
station3.domain103.example.com has address 172.24.0.3
dig -t mx domain103.example.com
dig -t axfr domain103.example.com
host -l !$
[root@station103 named]# dig -t mx domain103.example.com <-- check your MAIL EXCHANGER RECORD
; <<>> DiG 9.2.4 <<>> -t mx domain103.example.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50821
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;domain103.example.com. IN MX
;; ANSWER SECTION:
domain103.example.com. 86400 IN MX 10 station103.domain103.example.com.
;; AUTHORITY SECTION:
domain103.example.com. 86400 IN NS station103.domain103.example.com.
;; ADDITIONAL SECTION:
station103.domain103.example.com. 86400 IN A 172.24.0.103
;; Query time: 17 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Jan 2 10:26:31 2011
;; MSG SIZE rcvd: 96
[root@station103 named]# dig -t axfr domain103.example.com <-- do a comprehensive check
; <<>> DiG 9.2.4 <<>> -t axfr domain103.example.com
;; global options: printcmd
domain103.example.com. 86400 IN SOA station103.domain103.example.com. root.domain103.example.com. 43 10800 900 604800 86400
domain103.example.com. 86400 IN NS station103.domain103.example.com.
domain103.example.com. 86400 IN MX 10 station103.domain103.example.com.
station103.domain103.example.com. 86400 IN A 172.24.0.103
station3.domain103.example.com. 86400 IN A 172.24.0.3
domain103.example.com. 86400 IN SOA station103.domain103.example.com. root.domain103.example.com. 43 10800 900 604800 86400
;; Query time: 12 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Jan 2 10:26:31 2011
;; XFR size: 6 records
[root@station103 named]# host -l !$ <-- list all hosts
host -l domain103.example.com
domain103.example.com name server station103.domain103.example.com.
station103.domain103.example.com has address 172.24.0.103
station3.domain103.example.com has address 172.24.0.3
zone "24.172.in-addr.arpa" IN { <-- create a REVERSE LOOKUP ZONE in /etc/named.conf, on the bottom part
type master;
file "172.24.zone";
allow-update { none; };
forwarders {};
};
cp -a named.local 172.24.zone <-- COPY named.local to a new REVERSE ZONE file
[root@station103 named]# cat 172.24.zone <-- CREATE the REVERSE ZONE file
$TTL 86400
@ IN SOA station103.domain103.example.com. root.station103.domain103.example.com. (
1997022701 ; Serial <-- increment this!
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
@ IN NS station103.domain103.example.com.
3.0 IN PTR station3.domain103.example.com. <-- add the PTRs here!
103.0 IN PTR station103.domain103.example.com.
service named configtest
service named restart
host 172.24.0.3 <-- TEST THE REVERSE LOOKUP
host 172.24.0.103
dig -t axfr 24.172.in-addr.arpa
host -l !$
[root@station103 named]# host 172.24.0.3
3.0.24.172.in-addr.arpa domain name pointer station3.domain103.example.com.
[root@station103 named]# host 172.24.0.103
103.0.24.172.in-addr.arpa domain name pointer station103.domain103.example.com.
[root@station103 named]#
[root@station103 named]# dig -t axfr 24.172.in-addr.arpa
; <<>> DiG 9.2.4 <<>> -t axfr 24.172.in-addr.arpa
;; global options: printcmd
24.172.in-addr.arpa. 86400 IN SOA station103.domain103.example.com. root.station103.domain103.example.com. 1997022701 28800 14400 3600000 86400
24.172.in-addr.arpa. 86400 IN NS station103.domain103.example.com.
103.0.24.172.in-addr.arpa. 86400 IN PTR station103.domain103.example.com.
3.0.24.172.in-addr.arpa. 86400 IN PTR station3.domain103.example.com.
24.172.in-addr.arpa. 86400 IN SOA station103.domain103.example.com. root.station103.domain103.example.com. 1997022701 28800 14400 3600000 86400
;; Query time: 15 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Jan 2 11:25:54 2011
;; XFR size: 5 records
[root@station103 named]# host -l !$
host -l 24.172.in-addr.arpa
24.172.in-addr.arpa name server station103.domain103.example.com.
103.0.24.172.in-addr.arpa domain name pointer station103.domain103.example.com.
3.0.24.172.in-addr.arpa domain name pointer station3.domain103.example.com.
sequence3- Add slave DNS capabilities
-----------------------------
dig -t axfr example.com @172.24.254.254
host -r station3.example.com localhost
host -r station103.example.com localhost
dig +norecurse station3.example.com @localhost
[root@station103 named]# dig -t axfr example.com @172.24.254.254 <-- confirm if the remote (MASTER) server will ALLOW US TO SLAVE THE ZONE DATA for example.com
; <<>> DiG 9.2.4 <<>> -t axfr example.com @172.24.254.254
;; global options: printcmd
.. output snipped ..
;; Query time: 134 msec
;; SERVER: 172.24.254.254#53(172.24.254.254)
;; WHEN: Sun Jan 2 11:28:34 2011
;; XFR size: 134 records
[root@station103 named]# host -r station3.example.com localhost <-- non-recursive query to test where the info is currently coming from!
Using domain server:
Name: localhost
Address: 127.0.0.1#53
Aliases:
[root@station103 named]# host -r station103.example.com localhost
Using domain server:
Name: localhost
Address: 127.0.0.1#53
Aliases:
[root@station103 named]# host -r station103.example.com <-- no output
[root@station103 named]#
[root@station103 named]#
[root@station103 named]# dig +norecurse station3.example.com @localhost <-- no answer is available from the local name server
; <<>> DiG 9.2.4 <<>> +norecurse station3.example.com @localhost
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58483
;; flags: qr ra; QUERY: 1, ANSWER: 0, AUTHORITY: 13, ADDITIONAL: 0
;; QUESTION SECTION:
;station3.example.com. IN A
;; AUTHORITY SECTION:
. 3600000 IN NS M.ROOT-SERVERS.NET.
. 3600000 IN NS A.ROOT-SERVERS.NET.
. 3600000 IN NS B.ROOT-SERVERS.NET.
. 3600000 IN NS C.ROOT-SERVERS.NET.
. 3600000 IN NS D.ROOT-SERVERS.NET.
. 3600000 IN NS E.ROOT-SERVERS.NET.
. 3600000 IN NS F.ROOT-SERVERS.NET.
. 3600000 IN NS G.ROOT-SERVERS.NET.
. 3600000 IN NS H.ROOT-SERVERS.NET.
. 3600000 IN NS I.ROOT-SERVERS.NET.
. 3600000 IN NS J.ROOT-SERVERS.NET.
. 3600000 IN NS K.ROOT-SERVERS.NET.
. 3600000 IN NS L.ROOT-SERVERS.NET.
;; Query time: 23 msec
;; SERVER: 127.0.0.1#53(localhost)
;; WHEN: Sun Jan 2 11:30:19 2011
;; MSG SIZE rcvd: 249
[root@station103 named]# cat /etc/named.conf <-- create the SLAVE ZONE!!
zone "example.com" IN {
type slave;
masters { 172.24.254.254; };
file "slaves/example.com.zone";
forwarders {};
};
service named configtest
service named restart
[root@station103 named]# pwd
/var/named/chroot/var/named
[root@station103 named]# ls -l slaves/ <-- upon restarting, you should see this file created!
total 8
-rw------- 1 named named 3497 Jan 2 11:55 example.com.zone
[root@station103 named]# ls -lZ slaves/
-rw------- named named root:object_r:named_cache_t example.com.zone <-- SELINUX context
dig -t axfr example.com @172.24.254.254
host -r station3.example.com localhost
host -r station103.example.com localhost
dig +norecurse station3.example.com @localhost
[root@station103 named]# host -r station3.example.com localhost <-- non-recursive query to test the zone transfer and see where the data is coming from
Using domain server:
Name: localhost
Address: 127.0.0.1#53
Aliases:
station3.example.com has address 172.24.0.3
[root@station103 named]# host -r station103.example.com localhost <-- non-recursive query to test the zone transfer and see where the data is coming from
Using domain server:
Name: localhost
Address: 127.0.0.1#53
Aliases:
station103.example.com has address 172.24.0.103
[root@station103 named]# dig +norecurse station3.example.com @localhost
; <<>> DiG 9.2.4 <<>> +norecurse station3.example.com @localhost
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11207
;; flags: qr aa ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;station3.example.com. IN A
;; ANSWER SECTION:
station3.example.com. 86400 IN A 172.24.0.3
;; AUTHORITY SECTION:
example.com. 86400 IN NS server1.example.com.
;; ADDITIONAL SECTION:
server1.example.com. 86400 IN A 172.24.254.254
;; Query time: 7 msec
;; SERVER: 127.0.0.1#53(localhost)
;; WHEN: Sun Jan 2 12:57:25 2011
;; MSG SIZE rcvd: 92
[root@station103 named]# dig station3.example.com @localhost
; <<>> DiG 9.2.4 <<>> station3.example.com @localhost
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47217
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;station3.example.com. IN A
;; ANSWER SECTION:
station3.example.com. 86400 IN A 172.24.0.3
;; AUTHORITY SECTION:
example.com. 86400 IN NS server1.example.com.
;; ADDITIONAL SECTION:
server1.example.com. 86400 IN A 172.24.254.254
;; Query time: 8 msec
;; SERVER: 127.0.0.1#53(localhost)
;; WHEN: Sun Jan 2 12:57:42 2011
;; MSG SIZE rcvd: 92
###################################################################################################
[ ] UNIT 6 - NETWORK FILE SHARING SERVICES
###################################################################################################
File Transfer Protocol(FTP)
Service Profile: FTP
Required RPMS:
vsftpd <-- FTP
Applicable Access Controls:
----------------------------------------------------------
Access Control Implementation
----------------------------------------------------------
Application /etc/vsftpd/vsftpd.conf
PAM /etc/pam.d/vsftpd
xinetd N/A
libwrap linked, use service name vsftpd
SELinux ensure correct file context; change one boolean
Netfilter, IPv6 disregard IPV6 access for now
Netfilter tcp and udp port 21, and ip_conntrack_ftp.ko
Network File Service (NFS)
Service Profile: NFS
Port options for the Firewall
NFS Server
NFS utilities
Required RPMS:
nfs-utils <-- NFS
Applicable Access Controls:
----------------------------------------------------------
Access Control Implementation
----------------------------------------------------------
Application /etc/exports
PAM N/A
xinetd N/A
libwrap /sbin/portmap is compiled with libwrap.a
SELinux ensure correct file context; change to boolean
Netfilter, IPv6 disregard IPV6 access for now
Netfilter tcp and udp ports 111 (portmap) and 2049 (nfs) are constant; set other port values in configuration
Client-side NFS
Samba services
Service Profile: SMB
Required RPMS:
samba <-- SAMBA
samba-common
samba-client
Also look at the related tools
--------------------------------
system-config-samba
testparm <-- to check the syntax of smb.conf
smbclient <-- "FTP-LIKE" command line access
smbclient -L <-- allows for simple view of shared services
smbclient //station103.example.com/legal -U karl <-- logs in as user karl
nmblookup <-- queries WINS server
smbpasswd -a joe <-- ADDS USER joe and given password
tdbdump /etc/samba/secrets.tdb <-- reads content of the binary file
mount -t cifs //station103/legal /mnt/samba -o user=karl <-- use cifs
smbmount //station103/legal /mnt/samba -o user=karl <-- use smbfs (deprecated in RHEL5)
smbumount
//station103/legal /mnt/samba cifs username=bob,uid=bob 0 0 <-- entry in /etc/fstab
//station103/legal /mnt/samba cifs username=bob,uid=bob,noauto 0 0 <-- to not require to enter the password before the machine will boot
//station103/legal /mnt/samba cifs credentials=/etc/samba/cred.txt 0 0 <-- to guard against prying eyes!
cat /etc/samba/cred.txt
username=<uname>
password=<passwd>
Applicable Access Controls:
----------------------------------------------------------
Access Control Implementation
----------------------------------------------------------
Application /etc/samba/smb.conf
PAM /etc/pam.d/samba ; but disabled by default with "obey pam restrictions = no" in /etc/samba/smb.conf
xinetd N/A
libwrap N/A
SELinux ensure correct file context; change one boolean
Netfilter, IPv6 disregard IPV6 access for now
Netfilter tcp port 445 (microsoft-ds)
Some references:
http://cri.ch/linux/docs/sk0001.html <-- Mount a Windows share on Linux with Samba
http://www.cyberciti.biz/tips/how-to-mount-remote-windows-partition-windows-share-under-linux.html <-- How to mount remote windows partition (windows share) under Linux
http://goo.gl/iYlvi <-- smbmount sample
http://goo.gl/QINih <-- smbmount on large files
http://en.wikipedia.org/wiki/Smbmount <-- saying smbmount is deprecated in RHEL5
http://goo.gl/4JyJe <-- Good discussion on the difference between smbmount mount.cifs and mount -t
Configuring Samba
Overview of smb.conf Sections
** smb.conf is styled after the .ini file format and is split into different [ ] sections
[global] : section for server generic or global settings
[homes] : used to grant some or all users access to their home directories
[printers] : defines printer resources and services
** Use testparm to check the syntax of /etc/samba/smb.conf
Configuring File and Directory Sharing
Printing to the Samba Server
Authentication Methods
Passwords
Samba Syntax Utility
Samba Client Tools: smbclient
Samba Client Tools: nmblookup
Samba Clients Tools: mounts
Samba Mounts in /etc/fstab
Solutions: A working FTP server accessible to hosts and users
An available but invisible upload directory via FTP
-------------------------------------------------------------
man -k ftp | grep selinux
man ftpd_selinux <-- Security-Enhanced Linux policy for ftp daemons
setsebool -P allow_ftpd_anon_write on
chcon -t public_content_rw_t incoming <-- Allow ftp servers to read and write /var/tmp/incoming, publicly writable!!!
requires the allow_ftpd_anon_write boolean to be set
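The notes don't show the vsftpd.conf side of the upload directory; a sketch of the standard directives involved (values assumed, excerpt only):
{{{
# /etc/vsftpd/vsftpd.conf (excerpt)
anonymous_enable=YES
anon_upload_enable=YES         # permit anonymous STOR into writable dirs like incoming
anon_mkdir_write_enable=NO     # uploads only, no new directories
}}}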
IPTABLES_MODULES="ip_conntrack_ftp" <-- in /etc/sysconfig/iptables-config, loads the FTP connection-tracking helper
iptables -t filter -I CLASS-RULES 6 -s 172.24.0.0/16 --protocol tcp -m tcp --dport 21 -m state --state NEW -j ACCEPT
[root@station103 ~]# iptables -nvL --line-numbers
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 32 1872 CLASS-RULES all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 28 packets, 2960 bytes)
num pkts bytes target prot opt in out source destination
Chain CLASS-RULES (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
2 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
3 32 1872 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
4 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
5 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:514
6 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:21 state NEW
7 0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
8 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Solutions: A working NFS share of the /home/nfstest directory
-------------------------------------------------------------
check nfs and nfslock services
rpcinfo -p <-- list RPC services
showmount -e localhost <-- list NFS shares
[root@station103 sysconfig]# cat /etc/sysconfig/nfs
MOUNTD_PORT="4002"
STATD_PORT="4003"
LOCKD_TCPPORT="4004"
LOCKD_UDPPORT="4004"
RQUOTAD_PORT="4005"
iptables -t filter -I CLASS-RULES 7 -s 172.24.0.0/16 -p tcp --dport 111 -j ACCEPT
iptables -t filter -I CLASS-RULES 7 -s 172.24.0.0/16 -p udp --dport 111 -j ACCEPT
iptables -t filter -I CLASS-RULES 7 -s 172.24.0.0/16 -p tcp --dport 2049 -j ACCEPT
iptables -t filter -I CLASS-RULES 7 -s 172.24.0.0/16 -p udp --dport 2049 -j ACCEPT
iptables -t filter -I CLASS-RULES 7 -s 172.24.0.0/16 -p tcp --dport 4002:4005 -j ACCEPT
iptables -t filter -I CLASS-RULES 7 -s 172.24.0.0/16 -p udp --dport 4002:4005 -j ACCEPT
[root@station103 ~]# cat /etc/hosts.allow
vsftpd: 172.24.
portmap: 172.24.
[root@station103 ~]# cat /etc/hosts.deny
ALL:ALL EXCEPT 172.24.
[root@station103 ~]# cat /etc/exports
/home/nfstest *.example.com(rw,sync)
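After editing /etc/exports, re-export and verify; a short sketch using the stock NFS tools (the client-side mount point is assumed):
{{{
exportfs -rv                                              # re-export everything in /etc/exports
showmount -e localhost                                    # the share should now be listed
mount -t nfs station103.example.com:/home/nfstest /mnt    # client-side smoke test
}}}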
Solutions:
sequence1- A working Samba server accessible to several users with smbclient (on their home directories)
sequence2- A Linux directory that only the "legal" group can use, and a Samba share that only "legal" group users can access and modify
-------------------------------------------------------------
sequence1
-----------------------------
Create the users with the same secondary group "legal" (see the sketch below)
smbclient //station103.example.com/joe -U joe <-- by DEFAULT you can share a user's home directory given that you can authenticate
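A sketch of that user setup (usernames joe and bob are hypothetical; smbpasswd -s reads the new password twice from stdin):
{{{
groupadd legal
for u in joe bob; do                              # hypothetical usernames
  useradd -G legal $u                             # "legal" as secondary group
  echo redhat | passwd --stdin $u                 # Unix password
  (echo redhat; echo redhat) | smbpasswd -a -s $u # matching Samba password
done
}}}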
sequence2
-----------------------------
mkdir -p /home/depts/legal
chgrp legal /home/depts/legal
chmod 3770 /home/depts/legal
vi /etc/samba/smb.conf
[legal]
comment = legal's files
path = /home/depts/legal
public = no
write list = @legal
create mask = 0660
[example] <-- browseable, available only to example.com
comment = example
path = /example
browseable = yes
hosts allow = 172.24.
service smb restart
[root@station103 samba]# smbclient -L localhost -N
Anonymous login successful
Domain=[MYGROUP] OS=[Unix] Server=[Samba 3.0.10-1.4E.2]
Sharename Type Comment
--------- ---- -------
legal Disk legal's files
IPC$ IPC IPC Service (Samba Server)
ADMIN$ IPC IPC Service (Samba Server)
Anonymous login successful
Domain=[MYGROUP] OS=[Unix] Server=[Samba 3.0.10-1.4E.2]
Server Comment
--------- -------
STATION103 Samba Server
Workgroup Master
--------- -------
MYGROUP STATION103
smbclient //station103.example.com/legal -U joe <-- mount the "legal", then create a file
-------------------------------------------------------------
###################################################################################################
[ ] UNIT 7 - WEB SERVICES
###################################################################################################
Required RPMS:
httpd <-- HTTP
httpd-devel
httpd-manual
Applicable Access Controls:
----------------------------------------------------------
Access Control Implementation
----------------------------------------------------------
Application /etc/httpd/conf/httpd.conf and /etc/httpd/conf.d/*
PAM N/A
xinetd N/A
libwrap N/A
SELinux ensure correct file context; change to boolean
Netfilter, IPv6 disregard IPV6 access for now
Netfilter tcp ports 80 and 443
Apache Overview
** Process control:
spawn processes before needed; adapt the number of processes to demand
** Dynamic module loading:
run-time extensibility without recompiling
** Virtual hosts:
Multiple web sites may share the same web server
Service Profile: HTTPD
** Type: SystemV-managed service
** Packages: httpd, httpd-devel, httpd-manual
** Daemon: /usr/sbin/httpd
** Script: /etc/init.d/httpd
** Ports: 80(http), 443(https)
** Configuration: /etc/httpd/*, /var/www/*
** Related: system-config-httpd, mod_ssl
Apache Configuration
** Main server configuration stored in /etc/httpd/conf/httpd.conf controls general web server parameters, regular virtual hosts, and access; defines filenames and MIME types
** Module configuration files stored in /etc/httpd/conf.d/*
** DocumentRoot default /var/www/html/
Apache Server Configuration
** Min and Max Spare Servers
** Log file configuration
** Host name lookup
** Modules
** Virtual Hosts
** user and group
Apache Namespace Configuration
** Specifying a directory for users' pages:
UserDir public_html
** MIME types configuration:
AddType application/x-httpd-php .phtml
AddType text/html .htm
** Declaring index files for directories:
DirectoryIndex index.html default.htm
Virtual Hosts
NameVirtualHost 192.168.0.100:80
<VirtualHost 192.168.0.100:80>
ServerName virt1.com
DocumentRoot /virt1
</VirtualHost>
<VirtualHost 192.168.0.100:80>
ServerName virt2.com
DocumentRoot /virt2
</VirtualHost>
Apache Access Configuration
** Apache provides directory- and file-level host-based access control
** Host specifications may include dot notation numerics, network/netmask, and dot notation hostnames and domains
** The Order statement provides control over "order", but not always in the way one might expect
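For reference, a hedged sketch of that pre-2.4 syntax (the directory path is made up for illustration):
{{{
<Directory /var/www/html/private>
    Order deny,allow       # Deny directives evaluated first, Allow can override
    Deny from all
    Allow from 172.24.     # partial dot-notation network specification
</Directory>
}}}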
Apache Syntax Utilities
** service httpd configtest
** apachectl configtest
** httpd -t
** Checks both httpd.conf and ssl.conf
Using .htaccess Files
** Change a directory's configuration:
add mime-type definitions
allow or deny certain hosts
** Setup user and password databases:
AuthUserFile directive
htpasswd command:
htpasswd -cm /etc/httpd/.htpasswd bob
htpasswd -m /etc/httpd/.htpasswd alice
.htaccess Advanced Example
AuthName "Bob's Secret Stuff"
AuthType basic
AuthUserFile /var/www/html/.htpasswd
AuthGroupFile /var/www/html/.htgroup
<Limit GET>
require group staff
</Limit>
<Limit PUT POST>
require user bob
</Limit>
CGI
** CGI programs are restricted to separate
directories by ScriptAlias directive:
ScriptAlias /cgi-bin/ /path/cgi-bin/
** Apache can greatly speed up CGI programs
with loaded modules such as mod_perl
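sequence2 below wires a test.sh into a ScriptAlias'd cgi-bin; a minimal sketch of such a script (the body is an assumption, and it must be executable):
{{{
#!/bin/bash
# e.g. /var/www/virtual/www103.example.com/cgi-bin/test.sh ; chmod +x it
echo "Content-type: text/plain"
echo                                   # blank line terminates the CGI headers
echo "Hello from $(hostname) at $(date)"
}}}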
Notable Apache Modules
** mod_perl
** mod_php
** mod_speling
Apache Encrypted Web Server
** Apache and SSL: https (port 443)
mod_ssl
/etc/httpd/conf.d/ssl.conf
** Encryption Configuration:
certificate: /etc/pki/tls/certs/your_host.crt
private key: /etc/pki/tls/private/your_host.key
** Certificate/key generation:
/etc/pki/tls/certs/Makefile
self-signed cert: make testcert
certificate signature request: make certreq
Squid Web Proxy Cache
** Squid supports caching of FTP, HTTP, and other data streams
** Squid will forward SSL requests directly to origin servers or to one other proxy
** Squid includes advanced features including access control lists, cache hierarchies, and HTTP server acceleration
Service Profile: Squid
** Type: SystemV-managed service
** Package: squid
** Daemon: /usr/sbin/squid
** Script: /etc/init.d/squid
** Port: 3128(squid), (configurable)
** Configuration: /etc/squid/*
Useful parameters in /etc/squid/squid.conf
** http_port 3128
** cache_mem 8 MB
** cache_dir ufs /var/spool/squid 100 16 256
** acl all src 0.0.0.0/0.0.0.0
** acl localhost src 127.0.0.1/255.255.255.255
** http_access allow localhost
** http_access deny all
Required RPMS:
squid <-- SQUID
Applicable Access Controls:
----------------------------------------------------------
Access Control Implementation
----------------------------------------------------------
Application /etc/squid/squid.conf
PAM /etc/pam.d/squid
xinetd N/A
libwrap N/A
SELinux ensure correct file context; change to boolean
Netfilter, IPv6 disregard IPV6 access for now
Netfilter default tcp port is 3128
Solutions: To implement a web (HTTP) server with a virtual host and CGI capability
sequence1- A working web services implementation: with virtual hosting, CGI capability, and a proxy server
sequence2- A web server with a CGI script
sequence3- A password protected web server
sequence4- A working squid (ICP) proxy server
-----------------------------------------------------------------------------------------------------------
sequence1
-----------------------------
[root@station103 ~]# cat /etc/services | grep www-http
http 80/tcp www www-http # WorldWideWeb HTTP
http 80/udp www www-http # HyperText Transfer Protocol
[root@station103 ~]# cat /etc/services | grep 443
https 443/tcp # MCom
https 443/udp # MCom
[root@station103 ~]# ldd $(which httpd) | grep libwr <-- check whether httpd is linked
[root@station103 ~]# strings $(which httpd) | grep hosts <-- check for references to hosts.allow/hosts.deny; a libwrap-linked binary would output "hosts_access"
iptables -t filter -I CLASS-RULES 4 -s 172.24.0.0/16 -p tcp --dport 80 -j ACCEPT
[root@station103 ~]# iptables -nvL --line-numbers
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 13082 1961K CLASS-RULES all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 13685 packets, 1805K bytes)
num pkts bytes target prot opt in out source destination
Chain CLASS-RULES (1 references)
num pkts bytes target prot opt in out source destination
1 214 27786 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
2 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
3 1195 1168K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
4 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:80
5 12 720 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:445
6 11317 697K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
7 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:514
8 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:21 state NEW
9 0 0 ACCEPT udp -- * * 172.24.0.0/16 0.0.0.0/0 udp dpts:4002:4005
10 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpts:4002:4005
11 0 0 ACCEPT udp -- * * 172.24.0.0/16 0.0.0.0/0 udp dpt:2049
12 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:2049
13 0 0 ACCEPT udp -- * * 172.24.0.0/16 0.0.0.0/0 udp dpt:111
14 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:111
15 344 67450 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
16 344 67450 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
[root@station103 conf.d]# pwd
/etc/httpd/conf.d
[root@station103 conf.d]# cat www103.example.com.conf <-- YOU CAN FIND THIS CONFIG ON httpd.conf, you just have to add the directory section
NameVirtualHost 172.24.0.103:80
<VirtualHost 172.24.0.103:80>
ServerAdmin root@station103.example.com
DocumentRoot /var/www/virtual/www103.example.com/html
ServerName www103.example.com
ErrorLog logs/www103.example.com-error_log
CustomLog logs/www103.example.com-access_log combined
<Directory /var/www/virtual/www103.example.com/html>
Options Indexes Includes
</Directory>
</VirtualHost>
[root@station103 conf.d]# service httpd configtest
Syntax OK
[root@station103 conf.d]# service httpd reload
Reloading httpd: [ OK ]
elinks http://www103.example.com <-- verify from cracker.org
sequence2
-----------------------------
NameVirtualHost 172.24.0.103:80
<VirtualHost 172.24.0.103:80>
ServerAdmin root@station103.example.com
DocumentRoot /var/www/virtual/www103.example.com/html
ServerName www103.example.com
ErrorLog logs/www103.example.com-error_log
CustomLog logs/www103.example.com-access_log combined
<Directory /var/www/virtual/www103.example.com/html>
Options Indexes Includes
</Directory>
ScriptAlias /cgi-bin/ /var/www/virtual/www103.example.com/cgi-bin/ <-- add this to execute test.sh
</VirtualHost>
sequence3
-----------------------------
[root@station103 html]# cat .htaccess <-- triggers password authentication
AuthName "restricted stuff"
AuthType Basic
AuthUserFile /etc/httpd/conf/.htpasswd-www103
require valid-user
cd /etc/httpd/conf/
ls -ltr
htpasswd -mc .htpasswd-www103 karl
less .htpasswd-www103
karl:$apr1$vC4Hp/..$Mh0tVzOtbGx/76lWimd0b/
chgrp apache .htpasswd-www103
chmod 640 .htpasswd-www103
service httpd reload
vi ../conf.d/www103.example.com.conf
service httpd restart
NameVirtualHost 172.24.0.103:80
<VirtualHost 172.24.0.103:80>
ServerAdmin root@station103.example.com
DocumentRoot /virtual/html
ServerName www103.example.com
ErrorLog logs/www103.example.com-error_log
CustomLog logs/www103.example.com-access_log combined
<Directory /virtual/html>
Options Indexes Includes
AllowOverride AuthConfig <-- this was added for the password prompt to take effect
</Directory>
ScriptAlias /cgi-bin/ /virtual/html/cgi-bin/
</VirtualHost>
sequence4
-----------------------------
Add squid 3128 as HTTP proxy server on Firefox <-- to use port 8080, edit the parameter "http_port" on squid.conf
[root@station103 conf.d]# iptables -t filter -I CLASS-RULES 4 -s 172.24.0.0/16 -p tcp --dport 3128 -j ACCEPT
[root@station103 conf.d]# iptables -nvL --line-numbers
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 29873 20M CLASS-RULES all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 19430 packets, 2092K bytes)
num pkts bytes target prot opt in out source destination
Chain CLASS-RULES (1 references)
num pkts bytes target prot opt in out source destination
1 1545 238K ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
2 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
3 12845 18M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
4 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:3128 <-- ADD THIS FOR SQUID
5 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:80
6 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:445
7 15407 1089K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
8 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:514
9 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:21 state NEW
10 0 0 ACCEPT udp -- * * 172.24.0.0/16 0.0.0.0/0 udp dpts:4002:4005
11 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpts:4002:4005
12 0 0 ACCEPT udp -- * * 172.24.0.0/16 0.0.0.0/0 udp dpt:2049
13 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:2049
14 0 0 ACCEPT udp -- * * 172.24.0.0/16 0.0.0.0/0 udp dpt:111
15 0 0 ACCEPT tcp -- * * 172.24.0.0/16 0.0.0.0/0 tcp dpt:111
16 76 16783 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
17 76 16783 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
[root@station103 conf.d]#
Access the /etc/squid/squid.conf
then search for "Recommended minimum" then hit ENTER twice
Add the following acls
acl example src 172.24.0.0/16
acl otherguys dstdomain .yahoo.com
acl otherguys dstdomain .hotmail.com
And the following further down below
http_access deny otherguys <-- DENY should be first, then ALLOW
http_access allow example
http_access allow localhost
http_access deny all
service squid reload
###################################################################################################
[ ] UNIT 8 - ELECTRONIC MAIL SERVICES
###################################################################################################
Essential Email Operation
[img[picturename| http://lh4.ggpht.com/_F2x5WXOJ6Q8/TQhtNIU_rYI/AAAAAAAAA-g/oZWGdtkWQLY/EssentialEmailOperation.png]]
Simple Mail Transport Protocol
** RFC-standard protocol for talking to MTAs
Almost always uses TCP port 25
Extended SMTP (ESMTP) provides enhanced features for MTAs
An MTA often uses Local Mail Transport Protocol (LMTP) to talk to itself
** Example MSP:
mail -vs 'Some Subject' student@stationX.example.com
** Use telnet to troubleshoot SMTP connections (a session sketch follows)
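A sketch of such a telnet session (classroom-style names; server responses abbreviated):
{{{
telnet station103.example.com 25
220 station103.example.com ESMTP Postfix
HELO station3.example.com
250 station103.example.com
MAIL FROM:<student@station3.example.com>
250 Ok
RCPT TO:<student@station103.example.com>
250 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: telnet test

hello
.
250 Ok: queued
QUIT
}}}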
SMTP Firewalls
** Network layer with Netfilter stateful inspection
Inbound and outbound to TCP port 25
** Application layer for relay protection
Internal MTA to which users connect for sending and receiving
DMZ-based outgoing smart host which relays mail from the internal MTA
DMZ-based inbound mail hub which relays mail to the internal MTA
Filtering rules within the DMZ MTAs or integrated applications (e.g., Spamassassin)
Mail Transport Agents
** Red Hat Enterprise Linux includes three MTAs
Sendmail (default MTA), Postfix, and Exim
** Common features
Support virtual hosting
Provide automatic retry for failed delivery and other error conditions
Interoperable with Spamassassin
** Default access control
Sendmail and Postfix have no setuid components
Listen on loopback only
Relaying is disabled
Service Profile: Sendmail
Intro to Sendmail Configuration
Incoming Sendmail Configuration
Outgoing Sendmail Configuration
Inbound Sendmail Aliases
Outbound Address Rewriting
Sendmail SMTP Restrictions
Sendmail Operation
Using alternatives to Switch MTAs
Service Profile: Postfix
Intro to Postfix Configuration
Incoming Postfix Configuration
Outgoing Postfix Configuration
Inbound Postfix Aliases
Outbound Address Rewriting
Postfix SMTP Restrictions
Postfix Operation
Procmail, A Mail Delivery Agent
Procmail and Access Controls
Intro to Procmail Configuration
Sample Procmail Recipe
Mail Retrieval Protocols
Service Profile: Dovecot
Dovecot Configuration
Verifying POP Operation
Verifying IMAP Operation
Solutions: To build common skills with MTA configuration
sequence1- A working infrastructure for mail retrieval via POPs and IMAPs
sequence2- User accounts and a Postfix server that starts at boot-time
sequence3- A mail server that is available on the classroom subnet and has essential host-based access controls in place
sequence4- An MTA that allows selective relaying
sequence5- Message archival and address rewriting
sequence6- A working Procmail recipe
-----------------------------------------------------------------------------------------------------------
sequence1
-----------------------------
yum install -y dovecot <-- install dovecot!
make -C /usr/share/ssl/certs dovecot.pem
[root@station103 certs]# cp -p dovecot.pem ../private/
[root@station103 certs]# ls ../private/dovecot.pem
[root@station103 certs]# cat /etc/dovecot.conf | grep protocols
protocols = imaps pop3s
cat /etc/services | grep imaps
cat /etc/services | grep pop3s
iptables -t filter -I CLASS-RULES 4 -p tcp --dport 993 -j ACCEPT
iptables -t filter -I CLASS-RULES 4 -p tcp --dport 995 -j ACCEPT
chkconfig dovecot on
service dovecot restart
echo 'this is a test' | mail -s test student
mutt -f imaps://student@172.24.0.103
sequence2
-----------------------------
for i in myuser1 myuser2 compliance; do useradd $i; echo redhat | passwd --stdin $i; done
yum install -y postfix <-- install postfix! and unconfigure sendmail..
service sendmail stop
chkconfig sendmail off
alternatives --config mta <-- choose postfix!
service postfix restart
chkconfig postfix on
chkconfig --list postfix
cp -rpv /etc/postfix /tmp/postfix.orig <-- backup!
sequence3
-----------------------------
iptables -nvL --line-numbers | grep -i established
cat /etc/services | grep 25 <-- this is smtp, add it on IPTABLES
iptables -nvL --line-numbers
iptables -t filter -I CLASS-RULES 4 -s 172.24.0.0/16 -p tcp --dport 25 -j ACCEPT
iptables -t filter -I CLASS-RULES 4 -s 172.25.0.0/16 -p tcp --dport 25 -j ACCEPT
service iptables save
service iptables restart
iptables -nvL --line-numbers
/etc/postfix/main.cf <-- edit main.cf, configure interface
inet_interfaces = localhost
inet_interfaces = 172.24.0.103
service postfix restart
netstat -tupln | grep master
[root@station3 ~]# telnet station103.example.com 25 <-- test the connectivity from station3
Trying 172.24.0.103...
Connected to station103.example.com (172.24.0.103).
Escape character is '^]'.
220 station103.example.com ESMTP Postfix
^]
telnet> quit
Connection closed.
[root@station103 postfix]# postconf smtpd_client_restrictions
smtpd_client_restrictions =
smtpd_client_restrictions = check_client_access hash:/etc/postfix/access <-- add on main.cf
cat /etc/postfix/access
127.0.0.1 OK
172.24.0.0/16 OK
172.25.0.0/16 OK
0.0.0.0/0 REJECT
postconf mydestination <-- postconf!
postconf myorigin
echo 'hey root' | mail -s test root
echo 'hey root' | mail -s test student <-- it works!
cat /var/spool/mail/root
sequence4
-----------------------------
yum install -y sendmail-cf <-- sendmail-cf!
cat /etc/mail/sendmail.mc | grep DAEMON_OPTIONS <-- comment out the line that restricts 127.0.0.1
alternatives --config mta
service sendmail restart
echo 'hello' | mail -s test root@station3.example.com
cat /etc/postfix/access
127.0.0.1 OK
172.24.0.0/16 RELAY
172.25.0.0/16 OK
0.0.0.0/0 REJECT
postmap /etc/postfix/access
sequence5
-----------------------------
vi /etc/aliases <-- alias!!
myuser1.alias: myuser1
mylist: myuser1,myuser2,student
newaliases <-- rebuild the alias db after editing /etc/aliases
service postfix restart
echo 'message alias' | mail -s mailalias mylist@station103.example.com
sequence6
-----------------------------
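The notes leave sequence6 empty; a minimal ~/.procmailrc sketch of a working recipe (the folder name and subject pattern are assumptions):
{{{
# ~/.procmailrc
MAILDIR=$HOME/Mail            # delivery folders are relative to this
LOGFILE=$MAILDIR/procmail.log
:0:                           # trailing colon = use a local lockfile
* ^Subject:.*archive          # regexp matched against the header
archived                      # matching mail lands in $MAILDIR/archived
}}}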
###################################################################################################
[ ] UNIT 9 - ACCOUNT MANAGEMENT
###################################################################################################
User Accounts
Account Information (Name Service)
Name Service Switch (NSS)
getent
Authentication
Pluggable Authentication Modules (PAM)
PAM Operation
/etc/pam.d/ Files: Tests
/etc/pam.d/ Files: Control Values
Example: /etc/pam.d/login File
The system_auth file
pam_unix.so
Network Authentication
auth Modules
Password Security
Password Policy
session Modules
Utilities and Authentication
PAM Troubleshooting
}}}
[img[picturename| http://lh6.ggpht.com/_F2x5WXOJ6Q8/TQhtM0dOccI/AAAAAAAAA-U/tdBtUUKt4Vo/NetfilterTablesAndChains.png]]
[img[picturename| http://lh6.ggpht.com/_F2x5WXOJ6Q8/TQhtNBPVqFI/AAAAAAAAA-Y/yEo6Y0o-IBQ/NetfilterPacketFlow.png]]
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TQhtNI8mHSI/AAAAAAAAA-c/GqP0HzgCDxA/NetfilterSimpleExample.png]]
[img[picturename| http://lh4.ggpht.com/_F2x5WXOJ6Q8/TQhtNIU_rYI/AAAAAAAAA-g/oZWGdtkWQLY/EssentialEmailOperation.png]]
Exam schedule
http://www.itgroup.com.ph/corporate/events/red_hat_enterprise_linux_training_philippines
RedHat Training Catalogue 2013
http://images.engage.redhat.com/Web/RedHat/RedHatTrainingCatalogue2013.pdf
''RHEL 6''
http://epistolatory.blogspot.com/2010/05/rhel-6-part-i-distros-new-features-for.html
http://epistolatory.blogspot.com/2010/11/rhel-6-part-ii-installation-of-rhel-6.html
http://epistolatory.blogspot.com/2010/12/rhel-6-part-iii-first-impressions-from.html
http://epistolatory.blogspot.com/2011/11/rhel-6-part-iv-placing-xfs-into.html
''RHEL 7''
http://epistolatory.blogspot.com/2014/07/first-sysadmin-impressions-on-rhel-7.html
http://www.techotopia.com/index.php/RHEL_6_Desktop_-_Starting_Applications_on_Login
http://www.techotopia.com/index.php/RHEL_5_Desktop_Startup_Programs_and_Session_Configuration
https://www.certdepot.net/rhel7-mount-unmount-cifs-nfs-network-file-systems/
https://linuxconfig.org/quick-nfs-server-configuration-on-redhat-7-linux
http://www.itzgeek.com/how-tos/linux/centos-how-tos/how-to-setup-nfs-server-on-centos-7-rhel-7-fedora-22.html
https://www.howtoforge.com/tutorial/setting-up-an-nfs-server-and-client-on-centos-7/
! 1) Image Management
> - ISO library
>> better if you do manual copy.. a lot faster
> - Snapshots
>> - shutdown the VM first before doing snapshots
>> - then, you can preview... then, commit the current state or undo
> - Templates
>> - shutdown before creating templates
> - Pools
! 2) High Availability
Red Hat Enterprise Virtualization High Availability requires an out-of-band management interface such as IPMI, Dell DRAC, HP iLO, IBM RSA or BladeCenter for host power management. In the case of a failure these interfaces are used to check the hardware status and physically power down the host to prevent data corruption.
! 3) Live Migration
! 4) System Scheduler
There are three policies:
a) NONE - no automatic load distribution
b) Even Distribution - balance workload between physical systems
have to define the following:
- Maximum Service Level <-- the peak which will trigger the live migration
- Time threshold <-- when the threshold is met, then it will do the live migration to other hosts
c) Power Saving - consolidate more VMs on fewer hosts
have to define the following:
- Maximum Service Level <-- when host reach this utilization, the VMs will automatically live migrated to the idle host to balance the workload
- Minimum Service Level <-- when host utilization goes below this threshold, the power saver policy is triggered and live migration will automatically occur relocating all VMs to other host
- Time threshold <-- when the threshold is met, then it will do the live migration to other hosts
! 5) Power Saver
Must setup the out-of-band management module/controller. http://en.wikipedia.org/wiki/Out-of-band_management
Types of OOB management device:
DRAC5 - Dell Remote Access Controller for Dell computers
ilo - HP Integrated Lights Out standard
ipmilan - Intelligent Platform Management Interface
rsa - IBM Remote Supervisor Adaptor
bladecenter - IBM Bladecentre Remote Supervisor Adapter
For IBM Bladecenter:
IBM BladeCenter: Management Module User's Guide
ftp://ftp.software.ibm.com/systems/support/system_x_pdf/42c4886.pdf
IBM eServer xSeries and BladeCenter Server Management
http://www.redbooks.ibm.com/abstracts/SG246495.html
! 6) Maintenance Manager
! 7) Monitoring and Reporting
! Configure YUM repository
<<<
See the [[Yum]] setup
But here are the specifics:
1) Copy the contents of DVD
mkdir -pv /RHEL/installers/5.4/{os,updates}/x86-64
cp -av /media/cdrom/* /RHEL/installers/5.4/os/x86-64
Additional
• Also create a yum repository with all packages in it (Server,VT,Cluster,ClusterStorage)... so copy all of the contents of these folders into one directory, which is the "Server" folder... that will be approx. 3182 packages
• And copy the fence-agents (from RHN) to the "Server" folder as well
• Once YUM is setup you have to install httpd (see [[Yum]] for details) to be able to access the yum from another machine which is the “host” that will be added to the RHEVM
3) Import the GPG key
rpm --import RPM-GPG-KEY-redhat-release
4) Install the createrepo RPM
5) createrepo -g
then do
yum clean all
yum list fence-agents <== should list the package
6) Then.. setup the HTTPD for the installers (see [[Yum]] for details)
<<<
! Configure the storage
<<<
For this one... I'll do NFS
1) chkconfig nfs on
2) create directories and chown em', and edit /etc/exports (as root)
mkdir -p /data/images
mkdir -p /iso/images
mkdir /rhevdata
mkdir /rheviso
chown -R 36:36 /data
chown -R 36:36 /iso
chown -R 36:36 /rhevdata
chown -R 36:36 /rheviso
-- add this to /etc/exports
/data/images *(rw,no_root_squash,async)
/iso/images *(rw,no_root_squash,async)
3) add mount options on /etc/fstab
rhevhost1:/data/images /rhevdata nfs defaults 0 0
rhevhost1:/iso/images /rheviso nfs defaults 0 0
4) restart nfs service
<<<
! Configure the RHEV bridge network
<<<
here is the reference http://kbase.redhat.com/faq/docs/DOC-19071
on my notes:
1) make changes on the network files
eth0
====
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=rhevm
TYPE=Ethernet
PEERDNS=yes
USERCTL=no
HWADDR=
rhevm
=====
DEVICE=rhevm
ONBOOT=yes
BOOTPROTO=static
TYPE=Bridge
PEERDNS=yes
USERCTL=no
IPADDR=
NETMASK=
2) edit the /etc/hosts files to reflect the hostnames
the login page is at /RHEVManagerWeb/login.aspx
<<<
! Pre-req for RHEVM installation on WindowsServer2k3 SP2 32bit
<<<
Installed at the following order:
Note: better if you have filezilla on the Windows Server
1) Latest Red Hat Enterprise Virtualization Release Notes.
2) WindowsServer2k3 SP2 32bit
• Windows update... then install IE8
3) IIS
Add/Remove programs -> Application Server
• Application Server Console
• ASP.NET
• Enable COM+ access
• Enable DTC access
• IIS
4) .NET framework 3.5 SP1 with family update
5) PowerShell 1.0
<<<
RHEV: Why is a bond not available after a host restart?
http://kbase.redhat.com/faq/docs/DOC-26763
Supported platforms
http://www.redhat.com/rhel/server/advanced/virt.html
Implementation I did on a very OLTP multi app schema database w/ some reporting
MGMT_P1 and MGMT_MTH -> 'RATIO' was used here
in PX the MGMT_P1 also serves as the prioritization of dequeuing... aside from being used as a resource allocation percentage for CPU & IO
https://www.evernote.com/l/ADDqIZzGPg1CU7GzXB5FNzWnbDCTlO1WAs4
see also here [[resource manager - shares vs percentage, mgmt_mth]] for more references about shares on 11g and 12c
''How to Duplicate a Standalone Database: ASM to ASM'' http://www.colestock.com/blogs/labels/ASM.html
11gr2 DataGuard: Restarting DUPLICATE After a Failure https://blogs.oracle.com/XPSONHA/entry/11gr2_dataguard_restarting_dup
RMAN Reference
http://morganslibrary.org/reference/rman.html
* in this case below I need to re-create the controlfile
{{{
col checkpoint_change# format 9999999999999999
select 'controlfile' "SCN location",'SYSTEM checkpoint' name,checkpoint_change#
from v$database
union
select 'file in controlfile',to_char(count(*)),checkpoint_change#
from v$datafile
group by checkpoint_change#
union
select 'file header',to_char(count(*)),checkpoint_change#
from v$datafile_header
group by checkpoint_change#;
SCN location NAME CHECKPOINT_CHANGE#
------------------- ---------------------------------------- ------------------
controlfile SYSTEM checkpoint 7728034951671
file header 783 7729430480637
file in controlfile 783 7728034951671
}}}
{{{
restore database preview;
recover database until scn 7689193749494 preview;
list backup of database summary completed after 'sysdate - 1';
restore database preview summary from tag = TAG20140108T141855;
list archivelog from scn 2475111 until scn 2475374; (+1 on end)
BACKUP ARCHIVELOG FROM SEQUENCE 7754 UNTIL SEQUENCE 7761;
BACKUP ARCHIVELOG FROM SCN 7689190283437 UNTIL SCN 7689193749495;
RESTORE DATABASE PREVIEW ;
RESTORE DATABASE VALIDATE;
RESTORE ARCHIVELOG FROM sequence xx UNTIL SEQUENCE yy THREAD nn VALIDATE;
RESTORE CONTROLFILE VALIDATE;
RESTORE SPFILE VALIDATE;
}}}
https://goldparrot.wordpress.com/2011/05/16/how-to-find-exact-scn-number-for-oracle-restore/
http://damir-vadas.blogspot.com/2010/02/how-to-find-correct-scn.html
http://damir-vadas.blogspot.com/2009/10/autonomous-rman-online-backup.html
http://dba.stackexchange.com/questions/56326/rman-list-archivelogs-that-are-needed-for-to-recover-specified-backup
https://blog.dbi-services.com/list-all-rman-backups-that-are-needed-to-recover/
https://www.pythian.com/blog/rman-infatuation/
https://oracleracdba1.wordpress.com/2012/10/22/how-to-checkvalidate-that-rman-backups-are-good/
http://reneantunez.blogspot.com/2012/09/rman-how-to-verify-i-have-consistant.html
How to determine minimum end point for recovery of an RMAN backup (Doc ID 1329415.1)
RMAN recover database fails RMAN-6025 - v$archived_log.next_change# is 281474976710655 (Doc ID 238422.1)
How to check for correct RMAN syntax [ID 427224.1]
{{{
CHECKSYNTAX can also check the syntax in a command file.
$ rman CHECKSYNTAX @filename
}}}
375386.1
http://www.oracleracexpert.com/2012/11/rman-debug-and-trace.html
{{{
RMAN Debug Command
$ rman target / debug trace rman.trc log rman.log
Or
$ rman target / catalog xxx/xxxx@rmancat debug trace = /tmp/rman.trc log=/tmp/rman.log
}}}
Rolling a Standby Forward using an RMAN Incremental Backup To Fix The Nologging Changes [ID 958181.1]
ORA-26040:FLASHBACK DATABASE WITH NOLOGGING OBJECTS/ACTIVITIES RESULTS IN CORRUPTION [ID 554445.1]
http://www.idevelopment.info/data/Oracle/DBA_tips/Data_Guard/DG_53.shtml
http://jarneil.wordpress.com/2008/06/03/applying-an-incremental-backup-to-a-physical-standby/
http://web.njit.edu/info/oracle/DOC/backup.102/b14191/rcmdupdb008.htm
http://arup.blogspot.com/2009/12/resolving-gaps-in-data-guard-apply.html
https://shivanandarao-oracle.com/2012/03/26/roll-forward-physical-standby-database-using-rman-incremental-backup/
https://jhdba.wordpress.com/2013/03/18/rebuild-of-standby-using-incremental-backup-of-primary/
https://docs.oracle.com/cd/E11882_01/backup.112/e10643/rcmsynta007.htm#RCMRF107
<<<
You cannot specify PLUS ARCHIVELOG on the BACKUP ARCHIVELOG command or BACKUP AS COPY INCREMENTAL command (or BACKUP INCREMENTAL command when the default backup type is COPY). You cannot specify PLUS ARCHIVELOG when also specifying INCREMENTAL FROM SCN.
Unless the online redo log is archived after the backup, DUPLICATE is not possible with this backup.
<<<
-- RMAN incrementally updated backup to another machine
Use RMAN to relocate a 10TB RAC database with minimum downtime http://www.nyoug.org/Presentations/2011/September/Zuo_RMAN_to_Relocate.pdf
-- RMAN incrementally updated backup
https://www.realdbamagic.com/moving-a-3tb-database-datafiles-with-only-2-minute-downtime/
RMAN Incremental Update Between Different Oracle Versions (Doc ID 2106949.1)
Incrementally Updated Backups Rolling Forward Image Copies Using RMAN https://oracle-base.com/articles/misc/incrementally-updated-image-copy-backups
Merged Incremental Backup Strategies (Doc ID 745798.1)
Moving User datafiles between ASM Diskgroups using Incrementally Updated Backups (Doc ID 1472959.1)
Incrementally Updated Backup In 10G and higher (Doc ID 303861.1)
Using Rman Incremental backups To Update Transportable Tablespaces. (Doc ID 831223.1)
RMAN Fast Incremental Backups using BCT = Block Change Tracking file (Doc ID 262853.1)
How Many Incremental Backups Can Be Taken When BCT Is Enabled ? (Doc ID 452455.1)
https://uhesse.com/2010/12/01/database-migration-to-asm-with-short-downtime/
alejandro vargas rman hands on http://static7.userland.com/oracle/gems/alejandroVargas/RmanHandsOn.pdf
RMAN Backup Strategy for 40TB Data Warehouse Database http://4dag.cronos.be/village/dvp_forum.OpenThread?ThreadIdA=39061
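The core loop of the merged incremental (incrementally updated) strategy in the docs above is just this pair, run daily; the first run creates the level 0 image copy, later runs roll it forward (the tag name is arbitrary):
{{{
run {
  recover copy of database with tag 'incr_merge';
  backup incremental level 1 for recover of copy with tag 'incr_merge' database;
}
}}}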
There are two ways of keeping a backup beyond the retention policy:
* make the RETENTION POLICY longer
* make use of the KEEP option.... but you can't do this inside the FRA (How to KEEP a backup created in the Flash Recovery Area (FRA)? [ID 401163.1])
** workaround is do the backup without the KEEP, put it in a folder.. and rename the folder
http://gavinsoorma.com/2010/04/rman-keep-forever-keep-until-time-and-force-commands/
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3560 DAYS; http://www.freelists.org/post/oracle-l/Rman-backup-keep-forever,3
Bug 5685815 : ALLOW A KEEP OPTION FOR BACKUPS CREATED IN THE FLASH RECOVERY AREA (FRA)
OERR: ORA-19811 cannot have files in DB_RECOVERY_FILE_DEST with keep attribute [ID 288177.1]
keepOption http://docs.oracle.com/cd/B28359_01/backup.111/b28273/rcmsubcl011.htm
RMAN-6764 - New 11g error during backup of Standby Database with the keep option [ID 1331072.1]
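A sketch of the KEEP syntax against a location outside the FRA (path is hypothetical):
{{{
BACKUP DATABASE FORMAT '/backup/longterm/%U' TAG 'KEEP_10YR'
  KEEP UNTIL TIME 'SYSDATE+3650';
-- KEEP FOREVER additionally requires a recovery catalog
BACKUP DATABASE FORMAT '/backup/longterm/%U' KEEP FOREVER;
}}}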
A different way of doing RMAN on DBFS..
http://www.appsdba.com/blog/?p=205
http://www.appsdba.com/blog/?p=302
http://download.oracle.com/docs/cd/E11882_01/appdev.112/e18294/adlob_hierarch.htm#g100
http://www.oracle.com/webfolder/technetwork/tutorials/obe/em/emgc10gr2/quick_start/jobs/creating_jobs.htm <-- this is complete
http://www.oracle.com/technetwork/articles/havewala-rman-grid-089150.html <-- this is complete
http://www.oracle.com/technetwork/articles/grid/havewala-gridcontrol-088685.html
http://technology.amis.nl/blog/2892/how-to-stop-running-rman-jobs-in-oem-grid-control
https://forums.oracle.com/forums/thread.jspa?threadID=2465428
http://enterprise-manager.blogspot.com/2008/05/rman-and-enterprise-manager.html
http://www.juvo.be/en/blog/scheduling-rman-backup-within-oem-12c-cloud-control
http://learnwithme11g.wordpress.com/2011/07/04/rman-duplication-from-tape-backups/
http://www.oracle.com/us/products/enterprise-manager/advanced-uses-em11g-wp-170683.pdf
''As a workaround you can use Virtual Tape drives''
{{{
So, I’ve got it working but I still don’t know what the problem is. The work around was to use Oracle’s pseudo tape device. I tried this partially out of desperation and partially from a hunch. I saw a posting that seemed to indicate that having the right locking daemons running could be a problem. Thinking that since a tape device can’t be shared, maybe RMAN wouldn’t do the same checks for lock management. Here’s the allocate command that seems to be working for me.
Still bugs me that I haven’t solved the real problem.
# Allocate Channel(s)
ALLOCATE CHANNEL SBT1 DEVICE TYPE SBT
FORMAT '%d-%U' parms='SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/home/dbshare/orabackup/PROD1)';
ALLOCATE CHANNEL SBT2 DEVICE TYPE SBT
FORMAT '%d-%U' parms='SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/home/dbshare/orabackup/PROD1)';
ALLOCATE CHANNEL SBT3 DEVICE TYPE SBT
FORMAT '%d-%U' parms='SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/home/dbshare/orabackup/PROD1)';
SET COMMAND ID TO 'RMANB_BS_FULL_HOT';
# Execute Database Backup
BACKUP FULL AS BACKUPSET
DATABASE
INCLUDE CURRENT CONTROLFILE
TAG = 'DB_BS_FULL_HOT';
Can anyone lend a hand on this one?
I’m trying to backup a database using Rman and the target file system is NFS mounted. The problem I’m seeing is that Rman will start writing the first set of backup sets and then hang. For example with three backup channels it looks like this…
[eve:oracle:wfprd1] /home/dbshare/orabackup/PROD1
> ls -l
total 103472
-rw-rw---- 1 oracle oinstall 45056 Apr 19 14:03 PROD1-ofma5vgg_1_1
-rw-rw---- 1 oracle oinstall 45056 Apr 19 14:03 PROD1-ogma5vgg_1_1
-rw-rw---- 1 oracle oinstall 45056 Apr 19 14:03 PROD1-ohma5vgh_1_1
It always hangs at the same byte count.
Here are the Linux servers involved.
NFS Host : Vortex
NFS Client : Eve
BTW, the tests I’ve run include:
1) On Vortex, I’ve successfully run an Rman backup of a local test database to this file system (same directory I’ve exported).
2) On Eve, I’ve successfully copied several large files (as the oracle user), to this NFS volume.
3) On Eve, I’ve successfully exported a large chunk of the database onto this NFS mount.
Reading and writing to this NFS file system doesn’t appear to be a problem for the oracle user account. It appears to be an RMAN thing.
Here is how the file system is shared out on the host system:
[vortex:oracle:RACTST1] /home/oracle
> sudo su -
[root@vortex ~]# exportfs
/mnt/dbshare 10.0.0.36
/mnt/dbshare 192.168.192.20
[root@vortex ~]# cat /etc/exports
/mnt/dbshare 10.0.0.36(rw) 192.168.192.20(rw)
…and on the system I’ve mounted the NFS share I’ve tried all three of the options for Vortex below….
[eve:oracle:wfprd1] /home/oracle
> cat /etc/fstab
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/osvg/root / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
/dev/osvg/crs /crs ext3 defaults 1 2
…
#vortex:/mnt/dbshare /home/dbshare nfs rw 0 0
#vortex:/mnt/dbshare /home/dbshare nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
vortex:/mnt/dbshare /home/dbshare nfs rw,rsize=32768,wsize=32768,hard,noac,addr=10.0.0.152 0 0
}}}
http://www.oracle.com/technetwork/articles/oem/maa-wp-10gr2-recoverybestpractices-131010.pdf
http://www.cmg.org/wp-content/uploads/2010/09/m_73_3.pdf
http://www.oracle.com/technetwork/database/availability/rman-perf-tuning-bp-452204.pdf
http://www.nyoug.org/Presentations/2010/December/Chien_RMAN.pdf
http://www.emc.com/collateral/software/white-papers/h6540-oracle-backup-recovery-perform-avamar-rman-wp.pdf
http://ww2.dbvisit.com/forums/showthread.php?p=2953
https://docs.oracle.com/cd/B28359_01/rac.111/b28254/backup.htm#i491018
http://blog.csdn.net/sweethouse1128/article/details/6846166
https://docs.oracle.com/cd/E11882_01/rac.112/e41960/backup.htm#RACAD890
https://docs.oracle.com/cd/E28223_01/html/E27586/configappl.html
https://levipereira.files.wordpress.com/2011/01/tuning_rman_buffer.pdf
https://docs.oracle.com/cd/E51475_01/html/E52872/integration__ssc__configure_appliance__tuning_the_oracle_database_instance_for.html
Official Doc http://docs.oracle.com/cd/E26370_01/doc.121/e26360/toc.htm
RUEI installation http://docs.oracle.com/cd/E26370_01/doc.121/e26358/rueiinstalling.htm
http://www.orafaq.com/forum/t/144040/2/
http://oracleformsinfo.wordpress.com/2011/12/22/oracle-forms-11g-r-2-ruei-real-user-experience-insight-the-good-the-bad-and-the-ugly/
http://oracleformsinfo.wordpress.com/2011/12/30/oracle-ruei-for-oracle-forms-11g-r2-the-good-the-not-so-bad-and-less-ugly-than-before/
http://www.youtube.com/watch?v=904Gy7bYxQY
''Oracle Real User Experience Insight Best Practices Self-Study Series'' http://apex.oracle.com/pls/apex/f?p=44785:24:0:::24:P24_CONTENT_ID,P24_PREV_PAGE:6626,1
Real-World Performance Group Learning Library, URL here http://bit.ly/1xurTO8
Index: Real-World Performance Education
* Video: Introduction to Real-World Performance
* Video: RWP #1: Cursors and Connections
* Video: RWP #2: Bad Performance with Logons
* Video: RWP #3: Connection Pools and Hard Parse
* Video: RWP #4: Bind Variables and Soft Parse
* Video: RWP #5: Shared Cursors and One Parse
* Video: RWP #6: Leaking Cursors
* Video: RWP #7: Set Based Processing
* Video: RWP #8: Set Based Parallel Processing
* Video: RWP #9: Deduplication
* Video: RWP #10: Transformation
* Video: RWP #11: Aggregate
* Video: RWP #12: Getting In Control
* Video: RWP #13: Large Dynamic Connection Pools - Part 1
* Video: RWP #14: Large Dynamic Connection Pools - Part 2
* Video: RWP #15: Index Contention
* Video: RWP #16: Classic Real World Performance
* Video: RWP #17: Database Log Writer
* Video: RWP #18: Large Linux Pages
* Video: RWP #19: Architecture with an AWR Report
Connection Pool Sizing and SmartDB / Connection Pool Sizing Concepts - Toon Koppelaars
https://www.youtube.com/watch?v=eiydITTdDAQ
! official docs
* search for "real-world" https://docs.oracle.com/search/?q=real-world&category=database&product=en%2Fdatabase%2Foracle%2Foracle-database%2F21
* db dev guide - 5 Designing Applications for Oracle Real-World Performance https://docs.oracle.com/en/database/oracle/oracle-database/21/adfns/rwp.html#GUID-754328E1-2203-4B03-A21B-A91C3E548233
https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/raspberry_pi_thoughts_on_game_changing_technology324?lang=en
http://www.raspberrypi.org/about
http://www.raspberrypi.org/faqs
''where to buy''
http://www.alliedelec.com/lp/120626raso/?cm_mmc=Offline-Referral-_-Electronics-_-RaspberryPi-201203-_-World-Selector-Page
http://www.farnell.com/
http://www.engadget.com/2012/09/04/raspberry-pi-getting-started-guide-how-to/
http://www.aonsquared.co.uk/raspi_voice_control
http://blogs.oracle.com/warehousebuilder/2010/07/owb_11gr2_the_right_time_with_goldengate.html
https://docs.google.com/viewer?url=http://www.oracle.com/us/products/middleware/data-integration/goldengate11g-ds-168062.pdf
https://docs.google.com/viewer?url=http://www.imamu.edu.sa/topics/Slides/Data%2520Warehousing%25202/Experiences%2520with%2520Real-Time%2520Data%2520Warehousing%2520Using%2520Oracle%2520Database%252010G.ppt
http://it.toolbox.com/blogs/oracle-guide/ralph-kimball-realtime-data-warehouse-design-challenges-6359
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/middleware/data-integrator/overview/best-practices-for-realtime-data-wa-132882.pdf
https://docs.google.com/viewer?url=http://i.zdnet.com/whitepapers/Quest_Offload_Reporting_To_Improve_Oracle_Database_Performance.pdf
http://www.rittmanmead.com/2007/10/05/five-oracle-bi-trends-for-the-future/
http://www.rittmanmead.com/2010/04/08/realtime-data-warehouses/
http://www.rittmanmead.com/2010/05/27/realtime-data-warehouse-challenges-part-1/
http://www.rittmanmead.com/2010/06/27/realtime-data-warehouse-challenges-%E2%80%93-part-2/
http://www.rittmanmead.com/2010/05/06/realtime-data-warehouse-loading/
http://dssresources.com/papers/features/langseth/langseth02082004.html
https://docs.google.com/viewer?url=http://www.oracle.com/us/products/middleware/data-integration/odi-ee-11g-ds-168065.pdf
https://docs.google.com/viewer?url=http://www.oracle.com/us/products/middleware/data-integration/odi11g-newfeatures-wp-168152.pdf
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/middleware/data-integrator/overview/odiee-km-for-oracle-goldengate-133579.pdf
<<showtoc>>
! ''Monitoring SQL statements with Real-Time SQL Monitoring [ID 1380492.1]''
http://structureddata.org/2008/01/06/oracle-11g-real-time-sql-monitoring-using-dbms_sqltunereport_sql_monitor
{{{
If you want to get a SQL Monitor report for a statement you just ran in your session (similar to dbms_xplan.display_cursor) then use this command:
set pagesize 0 echo off timing off linesize 1000 trimspool on trim on long 2000000 longchunksize 2000000
select DBMS_SQLTUNE.REPORT_SQL_MONITOR(
session_id=>sys_context('userenv','sid'),
report_level=>'ALL') as report
from dual;
Or if you want to generate the EM Active SQL Monitor Report (my recommendation) from any SQL_ID you can use:
set pagesize 0 echo off timing off linesize 1000 trimspool on trim on long 2000000 longchunksize 2000000 feedback off
spool sqlmon_4vbqtp97hwqk8.html
select dbms_sqltune.report_sql_monitor(report_level=>'+histogram', type=>'EM', sql_id=>'4vbqtp97hwqk8') monitor_report from dual;
spool off
}}}
! ''hint''
{{{
/*+ MONITOR */
}}}
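Usage sketch (table name hypothetical); the hint forces real-time monitoring even for statements that stay below the default thresholds (roughly 5 seconds of CPU/IO time, or parallel execution):
{{{
select /*+ MONITOR */ count(*) from sales_fact;
}}}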
! ''spool from SQL Developer''
{{{
--select * from v$sql where sql_fulltext like '%&txt%' order by last_load_time;
SET TERMOUT OFF
SET verify off;
spool C:\Users\Administrator\Documents\sql_stats\fe_stage2a.html;
select
dbms_sql_monitor.report_sql_monitor(sql_id => '&sql_id',
type => decode(upper('&&ptype'),'A', 'ACTIVE', 'H' , 'HTML', 'TEXT'), --'TEXT' 'HTML' 'ACTIVE'
report_level => 'ALL') as report
FROM DUAL;
SPOOL OFF;
}}}
! using SQLD360 - all SQL monitor report types in one shot
https://github.com/karlarao/report_sql_monitor
run with
{{{
@sqld360 <SQL_ID> T
}}}
! references
<<<
http://www.oracle.com/technetwork/database/focus-areas/manageability/sqlmonitor-084401.html?ssSourceSiteId=otncn <-- SQL Monitor FAQ
http://oracledoug.com/serendipity/index.php?/archives/1506-Real-Time-SQL-Monitoring-in-SQL-Developer.html <-- SQL Developer
http://oracledoug.com/serendipity/index.php?/archives/1642-Real-Time-SQL-Monitoring-Statement-Not-Appearing.html <-- hidden parameter to increase the lines - statement not appearing
http://oracledoug.com/serendipity/index.php?%2Farchives%2F1646-Real-Time-SQL-Monitoring-Retention.html <-- retention
http://structureddata.org/2011/08/28/reading-active-sql-monitor-reports-offline/ <-- ''offline view'' of sql monitor reports
http://www.oracle-base.com/blog/2011/03/22/real-time-sql-monitoring-update/
http://blog.aristadba.com/?tag=real-time-sql-monitoring
http://www.pythian.com/news/582/tuning-pack-11g-real-time-sql-monitoring/
http://joze-senegacnik.blogspot.com/2009/12/vsqlmonitor-and-vsqlplanmonitor.html <-- V$SQL_MONITOR and V$SQL_PLAN_MONITOR
<<<
{{{
-- viewing waits system wide (top 5 by time waited; ROWNUM is applied
-- before ORDER BY, so the sort must happen in an inline view)
col event format a46
col average_wait format 999,999,990.00
col calls format 999,999,990
select * from (
select a.event,
       a.time_waited,
       a.total_waits calls,
       a.time_waited/a.total_waits average_wait,
       sysdate - b.startup_time days_old
from v$system_event a, v$instance b
order by a.time_waited desc
) where rownum < 6;
-- viewing waits on a session
select
e.event, e.time_waited
from
v$session_event e
where
e.sid = 12
union all
select
n.name,
s.value
from
v$statname n,
v$sesstat s
where
s.sid = 12
and n.statistic# = s.statistic#
and n.name = 'CPU used by this session'
order by
2 desc
/
-- sesstat
select a.sid, b.name, a.value
from v$sesstat a, v$statname b
where a.statistic# = b.statistic#
and a.value > 0
and a.sid = 12;
-- kill sessions
-- select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' post_transaction;'
select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' immediate;'
from v$process p, v$session s, v$sqlarea sa
where p.addr=s.paddr
and s.username is not null
and s.sql_address=sa.address(+)
and s.sql_hash_value=sa.hash_value(+)
and sa.sql_text NOT LIKE '%usercheck%'
-- and upper(sa.sql_text) LIKE '%CP_IINFO_DAILY_RECON_PKG.USP_DAILYCHANGEFUND%'
-- and s.sid = 178
and s.sql_id = '&sql_id'
-- and sid in (1404,1023,520,389,645)
-- and s.username = 'APAC'
-- and sa.plan_hash_value = 3152625234
order by status desc;
-- quicker kill sessions
select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' immediate;'
from v$session s
where s.sql_id = '&sql_id';
-- purge SQL_ID on shared pool
var name varchar2(50)
BEGIN
select /* usercheck */ sa.address||','||sa.hash_value into :name
from v$process p, v$session s, v$sqlarea sa
where p.addr=s.paddr
and s.username is not null
and s.sql_address=sa.address(+)
and s.sql_hash_value=sa.hash_value(+)
and sa.sql_text NOT LIKE '%usercheck%'
-- and upper(sa.sql_text) LIKE '%CP_IINFO_DAILY_RECON_PKG.USP_DAILYCHANGEFUND%'
and s.sid = 176
-- and s.username = 'APAC'
order by status desc;
dbms_shared_pool.purge(:name,'C',1);
END;
/
-- show all users
-- on windows to kill do.. orakill <instance_name> <spid>
set lines 32767
col terminal format a4
col machine format a4
col os_login format a4
col oracle_login format a4
col osuser format a4
col module format a5
col program format a8
col schemaname format a5
-- col state format a8
col client_info format a5
col status format a4
col sid format 99999
col serial# format 99999
col unix_pid format a8
col txt format a50
col action format a8
select /* usercheck */ s.INST_ID, s.terminal terminal, s.machine machine, p.username os_login, s.username oracle_login, s.osuser osuser, s.module, s.action, s.program, s.schemaname,
s.state,
s.client_info, s.status status, s.sid sid, s.serial# serial#, lpad(p.spid,7) unix_pid, -- s.sql_hash_value,
sa.plan_hash_value, -- remove in 817, 9i
s.sql_id, -- remove in 817, 9i
substr(sa.sql_text,1,1000) txt
from gv$process p, gv$session s, gv$sqlarea sa
where p.addr=s.paddr
and s.username is not null
and s.sql_address=sa.address(+)
and s.sql_hash_value=sa.hash_value(+)
and sa.sql_text NOT LIKE '%usercheck%'
-- and lower(sa.sql_text) LIKE '%grant%'
-- and s.username = 'APAC'
-- and s.schemaname = 'SYSADM'
-- and lower(s.program) like '%uscdcmta21%'
-- and s.sid=12
-- and p.spid = 14967
-- and s.sql_hash_value = 3963449097
-- and s.sql_id = '5p6a4cpc38qg3'
-- and lower(s.client_info) like '%10036368%'
-- and s.module like 'PSNVS%'
-- and s.program like 'PSNVS%'
order by status desc;
-- find running jobs
set linesize 250
col sid for 9999 head 'Session|ID'
col spid head 'O/S|Process|ID'
col serial# for 9999999 head 'Session|Serial#'
col log_user for a10
col job for 9999999 head 'Job'
col broken for a1 head 'B'
col failures for 99 head "fail"
col last_date for a18 head 'Last|Date'
col this_date for a18 head 'This|Date'
col next_date for a18 head 'Next|Date'
col interval for 9999.000 head 'Run|Interval'
col what for a60
select j.sid,
s.spid,
s.serial#,
j.log_user,
j.job,
j.broken,
j.failures,
j.last_date||':'||j.last_sec last_date,
j.this_date||':'||j.this_sec this_date,
j.next_date||':'||j.next_sec next_date,
j.next_date - j.last_date interval,
j.what
from (select djr.SID,
dj.LOG_USER, dj.JOB, dj.BROKEN, dj.FAILURES,
dj.LAST_DATE, dj.LAST_SEC, dj.THIS_DATE, dj.THIS_SEC,
dj.NEXT_DATE, dj.NEXT_SEC, dj.INTERVAL, dj.WHAT
from dba_jobs dj, dba_jobs_running djr
where dj.job = djr.job ) j,
(select p.spid, s.sid, s.serial#
from v$process p, v$session s
where p.addr = s.paddr ) s
where j.sid = s.sid;
-- find where a system is stuck
break on report
compute sum of sessions on report
select event, count(*) sessions from v$session_wait
where state='WAITING'
group by event
order by 2 desc;
-- find the session state
select event, state, count(*) from v$session_wait group by event, state order by 3 desc;
-- when user calls up, describe wait events per session since the session has started up
select max(total_waits), event, sid from v$session_event
where sid = 12
group by sid, event
order by 1 desc;
-- You can easily discover which session has high TIME_WAITED on the db file sequential read or other waits
select a.sid,
a.event,
a.time_waited,
a.time_waited / c.sum_time_waited * 100 pct_wait_time,
round((sysdate - b.logon_time) * 24) hours_connected
from v$session_event a, v$session b,
(select sid, sum(time_waited) sum_time_waited
from v$session_event
where event not in (
'Null event',
'client message',
'KXFX: Execution Message Dequeue - Slave',
'PX Deq: Execution Msg',
'KXFQ: kxfqdeq - normal deqeue',
'PX Deq: Table Q Normal',
'Wait for credit - send blocked',
'PX Deq Credit: send blkd',
'Wait for credit - need buffer to send',
'PX Deq Credit: need buffer',
'Wait for credit - free buffer',
'PX Deq Credit: free buffer',
'parallel query dequeue wait',
'PX Deque wait',
'Parallel Query Idle Wait - Slaves',
'PX Idle Wait',
'slave wait',
'dispatcher timer',
'virtual circuit status',
'pipe get',
'rdbms ipc message',
'rdbms ipc reply',
'pmon timer',
'smon timer',
'PL/SQL lock timer',
'SQL*Net message from client',
'WMON goes to sleep')
having sum(time_waited) > 0 group by sid) c
where a.sid = b.sid
and a.sid = c.sid
and a.time_waited > 0
-- and a.event = 'db file sequential read'
order by hours_connected desc, pct_wait_time;
-- show all users RAC
select s.inst_id instance_id,
s.failover_type failover_type,
s.FAILOVER_METHOD failover_method,
s.FAILED_OVER failed_over,
p.username os_login,
s.username oracle_login,
s.status status,
s.sid oracle_session_id,
s.serial# oracle_serial_no,
lpad(p.spid,7) unix_process_id,
s.machine, s.terminal, s.osuser,
substr(sa.sql_text,1,540) txt
from gv$process p, gv$session s, gv$sqlarea sa
where p.addr=s.paddr
and s.username is not null
and s.sql_address=sa.address(+)
and s.sql_hash_value=sa.hash_value(+)
--and s.sid=48
--and p.spid
order by 3;
-- this is for RAC TAF, fewer columns
col oracle_login format a10
col instance_id format 99
col sidserial format a8
select s.inst_id instance_id,
s.failover_type failover_type,
s.FAILOVER_METHOD failover_method,
s.FAILED_OVER failed_over,
s.username oracle_login,
s.status status,
concat (s.sid,s.serial#) sidserial,
substr(sa.sql_text,1,15) txt
from gv$process p, gv$session s, gv$sqlarea sa
where p.addr=s.paddr
and s.username is not null
and s.type = 'USER'
and s.username = 'ORACLE'
and s.sql_address=sa.address(+)
and s.sql_hash_value=sa.hash_value(+)
--and s.sid=48
--and p.spid
order by 6;
-- show open cursors
col txt format a100
select sid, hash_value, substr(sql_text,1,1000) txt from v$open_cursor where sid = 12;
-- show running cursors
select nvl(USERNAME,'ORACLE PROC'), s.SID, s.sql_hash_value, SQL_TEXT
from sys.v_$open_cursor oc, sys.v_$session s
where s.SQL_ADDRESS = oc.ADDRESS
and s.SQL_HASH_VALUE = oc.HASH_VALUE
and s.sid = 12
order by USERNAME, s.SID;
-- Get recent snapshot
select instance_number, to_char(startup_time, 'DD-MON-YY HH24:MI:SS') startup_time, to_char(begin_interval_time, 'DD-MON-YY HH24:MI:SS') begin_interval_tim, snap_id
from DBA_HIST_SNAPSHOT
order by snap_id;
-- Finding top expensive SQL in the workload repository, get snap_ids first
select * from (
select a.sql_id as sql_id, sum(elapsed_time_delta)/1000000 as elapsed_time_in_sec,
(select x.sql_text
from dba_hist_sqltext x
where x.dbid = a.dbid and x.sql_id = a.sql_id) as sql_text
from dba_hist_sqlstat a, dba_hist_sqltext b
where a.sql_id = b.sql_id and
a.dbid = b.dbid
and a.snap_id between 710 and 728
group by a.dbid, a.sql_id
order by elapsed_time_in_sec desc
) where ROWNUM < 2
/
-- Only valid for 10g Release 2, Finding top 10 expensive SQL in the cursor cache by elapsed time
select * from (
select sql_id, elapsed_time/1000000 as elapsed_time_in_sec, substr(sql_text,1,80) as sql_text
from v$sqlstats
order by elapsed_time_in_sec desc
) where rownum < 11
/
-- get hash value statistics, The query sorts its output by the number of LIO calls executed per row returned. This is a rough measure of statement efficiency. For example, the following output should bring to mind the question, "Why should an application require more than 174 million memory accesses to compute 5 rows?"
col stmtid heading 'Stmt Id' format 9999999999
col dr heading 'PIO blks' format 999,999,999
col bg heading 'LIOs' format 999,999,999,999
col sr heading 'Sorts' format 999,999
col exe heading 'Runs' format 999,999,999,999
col rp heading 'Rows' format 9,999,999,999
col rpr heading 'LIOs|per Row' format 999,999,999,999
col rpe heading 'LIOs|per Run' format 999,999,999,999
select hash_value stmtid
,sum(disk_reads) dr
,sum(buffer_gets) bg
,sum(rows_processed) rp
,sum(buffer_gets)/greatest(sum(rows_processed),1) rpr
,sum(executions) exe
,sum(buffer_gets)/greatest(sum(executions),1) rpe
from v$sql
where command_type in ( 2,3,6,7 )
and hash_value in (2023740151)
-- and rownum < 20
group by hash_value
order by 5 desc;
-- check block gets of a session
col block_gets format 999,999,999,990
col consistent_gets format 999,999,999,990
select to_char(sysdate, 'hh:mi:ss') "time", physical_reads, block_gets, consistent_gets, block_changes, consistent_changes
from v$sess_io
where sid=681;
-- show SQL in shared SQL area, get hash value
SELECT /* example */ substr(sql_text, 1, 80) sql_text,
sql_id,
hash_value, address, child_number, plan_hash_value, FIRST_LOAD_TIME
FROM v$sql
WHERE
--sql_id = '6wps6tju5b8tq'
-- hash_value = 1481129178
upper(sql_text) LIKE '%INSERT INTO PS_CBLA_RET_TMP SELECT CB_BUS_UN%'
AND sql_text NOT LIKE '%example%'
order by first_load_time;
-- show SQL hash
col txt format a1000
select
sa.hash_value, sa.sql_id,
substr(sa.sql_text,1,1000) txt
from v$sqlarea sa
where
sa.hash_value = 517092776;
--ADDRESS = '2EBC7854'
--sql_id = 'gz5bfrcjq060u'
-- show full sql text of the transaction
col sql_text format a1000
set heading off
select sql_text from v$sqltext
where HASH_VALUE = 1481129178
-- where sql_id = 'a5xnahpb62cvq'
order by piece;
set heading on
-- get sql_text
set long 50000 pagesize 0 echo off lines 5000 longchunksize 5000 trimspool on
select sql_fulltext from v$sql where sql_id = 'a5xnahpb62cvq';
/*
The trace doesn't contain the SQLID as such but the hash value.
In this case Hash=61d72ac6.
Translate this to decimal and query v$sqlarea where hash_value=#
( if the hash value is still in the v$sqlarea )
/u01/app/oracle/diag/rdbms/biprddal/biprd1/incident/incdir_65625/biprd1_dia0_19054_i65625.trc
~~~~~~~~~~
....
LibraryHandle: Address=0x1ff1434c8 Hash=61d72ac6 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
ObjectName: Name=UPDATE WC_BUDGET_BALANCE_A_TMP A SET
*/
select sql_text from dba_hist_sqltext where sql_id in (select sql_id from DBA_HIST_SQLSTAT where plan_hash_value = 1641491142);
-- get sql id and hash value, convert hash to, sqlid to hash, sql_id to hash, h2s
col sql_text format a1000
select substr(sql_text, 1,30), sql_id, hash_value from v$sqltext
where
HASH_VALUE = 1312665718
-- sql_id = '048znjmq3uvs9'
and rownum < 2;
-- show full sql text of the transaction, of the top sqls in awr
col sql_text format a1000
set heading off
select ''|| sql_id || ' '|| hash_value || ' ' || sql_text || '' from v$sqltext
-- where HASH_VALUE = 1481129178
where sql_id in (select distinct a.sql_id as sql_id
from dba_hist_sqlstat a, dba_hist_sqltext b
where a.sql_id = b.sql_id and
a.dbid = b.dbid
and a.snap_id between 710 and 728)
order by sql_id, piece;
set heading on
-- query SQL in ASH
set lines 3000
select substr(sa.sql_text,1,500) txt, a.sample_id, a.sample_time, a.session_id, a.session_serial#, a.user_id, a.sql_id,
a.sql_child_number, a.sql_plan_hash_value,
a.sql_opcode, a.plsql_object_id, a.service_hash, a.session_type,
a.session_state, a.qc_session_id, a.blocking_session,
a.blocking_session_status, a.blocking_session_serial#, a.event, a.event_id,
a.seq#, a.p1, a.p2, a.p3, a.wait_class,
a.wait_time, a.time_waited, a.program, a.module, a.action, a.client_id
from gv$active_session_history a, gv$sqltext sa
where a.sql_id = sa.sql_id
-- and session_id = 126
/* -- weird scenario, when I'm looking for TRUNCATE statement, I can see it in V$SQLTEXT
-- and I can't see it on V$SQLAREA and V$SQL
select * from v$sqltext where upper(sql_text) like '%TRUNCATE%TEST3%';
select * from v$sqlarea
where sql_id = 'dfwz4grz83d6a'
where upper(sql_text) like '%TRUNCATE%';
select * from v$sql
where sql_id = 'dfwz4grz83d6a'
where upper(sql_text) like '%TRUNCATE%';
from oracle-l:
Checking V$FIXED_VIEW_DEFINITION, you can see that V$SQLAREA is based off of
x$kglcursor_child_sqlid, V$SQL is off x$kglcursor_child, and V$SQLTEXT is off
x$kglna. I may be way off on this, but I believe pure DDL is not a cursor,
which is why it won't be found in X$ cursor tables. Check with a CTAS vs. a
plain CREATE TABLE ... (field ...). CTAS uses a cursor and would be found in
all the X$ sql tables. A plain CREATE TABLE won't.
*/
-- query long operations
set lines 200
col opname format a35
col target format a10
col units format a10
select * from (
select
sid, serial#, sql_id,
opname, target, sofar, totalwork, round(sofar/totalwork, 4)*100 pct, units, elapsed_seconds, time_remaining time_remaining_sec, round(time_remaining/60,2) min
,sql_hash_value
-- ,message
from v$session_longops
WHERE sofar < totalwork
order by start_time desc);
-- query session waits
set lines 300
col program format a23
col event format a18
col seconds format 99,999,990
col state format a17
select w.sid, s.sql_hash_value, s.program, w.event, w.wait_time/100 t, w.seconds_in_wait seconds_in_wait, w.state, w.p1, w.p2, w.p3
from v$session s, v$session_wait w
where s.sid = w.sid and s.type = 'USER'
and s.sid = 37
-- and s.sql_hash_value = 1789726554
-- and s.sid = w.sid and s.type = 'BACKGROUND'
and w.state = 'WAITING'
order by 6 asc;
-- show actual transaction start time, and exact object
SELECT s.saddr, s.SQL_ADDRESS, s.sql_hash_value, t.START_TIME, t.STATUS, s.lockwait, s.row_wait_obj#, row_wait_file#, s.row_wait_block#, s.row_wait_row#
--, s.blocking_session
FROM v$session s, v$transaction t
WHERE s.saddr = t.ses_addr
and s.sid = 12;
-- search for the object
select owner, object_name, object_type
from dba_objects
where object_id = 73524;
SELECT owner,segment_name,segment_type
FROM dba_extents
WHERE file_id = 32
AND 238305
BETWEEN block_id AND block_id + blocks - 1;
-- open transactions
set lines 199 pages 100
col object_name for a30
COL iid for 999
col usn for 9999
col slot for 9999
col ublk for 99999
col uname for a15
col sid for 9999
col ser# for 9999999
col start_scn for 99999999999999
col osuser for a20
select * from (
select v.inst_id iid, v.XIDUSN usn, v.XIDSLOT slot, v.XIDSQN ,v. START_TIME, v.start_scn, v.USED_UBLK ublk, o.oracle_username uname,s.sid sid,s.serial# ser#, s.osuser, o.object_id oid ,d.object_name
from gv$transaction v, gv$locked_object o, dba_objects d, gv$session s
where v.XIDUSN = o.XIDUSN and v.xidslot=o.xidslot and v.xidsqn=o.xidsqn and o.object_id = d.object_id and v.addr = s.taddr order by 6,1,11,12,13) where rownum < 26;
-- search for the object in the buffer cache
select b.sid,
nvl(substr(a.object_name,1,30),
'P1='||b.p1||' P2='||b.p2||' P3='||b.p3) object_name,
a.subobject_name,
a.object_type
from dba_objects a, v$session_wait b, x$bh c
where c.obj = a.object_id(+)
and b.p1 = c.file#(+)
and b.p2 = c.dbablk(+)
-- and b.event = 'db file sequential read'
union
select b.sid,
nvl(substr(a.object_name,1,30),
'P1='||b.p1||' P2='||b.p2||' P3='||b.p3) object_name,
a.subobject_name,
a.object_type
from dba_objects a, v$session_wait b, x$bh c
where c.obj = a.data_object_id(+)
and b.p1 = c.file#(+)
and b.p2 = c.dbablk(+)
-- and b.event = 'db file sequential read'
order by 1;
-- if there are locks, show the locks thats are waited in the system
select sid, type, id1, id2, lmode, request, ctime, block
from v$lock
where request>0;
-- per session pga
BREAK ON REPORT
COMPUTE SUM OF alme ON REPORT
COMPUTE SUM OF mame ON REPORT
COLUMN alme HEADING "Allocated MB" FORMAT 99999D9
COLUMN usme HEADING "Used MB" FORMAT 99999D9
COLUMN frme HEADING "Freeable MB" FORMAT 99999D9
COLUMN mame HEADING "Max MB" FORMAT 99999D9
COLUMN username FORMAT a15
COLUMN program FORMAT a22
COLUMN sid FORMAT a5
COLUMN spid FORMAT a8
set pages 3000
SET LINESIZE 3000
set echo off
set feedback off
alter session set nls_date_format='yy-mm-dd hh24:mi:ss';
SELECT sysdate, s.username, SUBSTR(s.sid,1,5) sid, p.spid, logon_time,
SUBSTR(s.program,1,22) program , s.process pid_remote,
ROUND(pga_used_mem/1024/1024) usme,
ROUND(pga_alloc_mem/1024/1024) alme,
ROUND(pga_freeable_mem/1024/1024) frme,
ROUND(pga_max_mem/1024/1024) mame,
decode(a.IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,'No','Yes') Offload,
s.sql_id
FROM v$session s,v$process p, v$sql a
WHERE s.paddr=p.addr
and s.sql_id=a.sql_id
ORDER BY pga_max_mem, logon_time;
-- pga breakdown
SELECT pid, category, allocated, used, max_allocated
FROM v$process_memory
WHERE pid = (SELECT pid
FROM v$process
WHERE addr= (select paddr
FROM v$session
WHERE sid = &sid));
-- UNDO
/* Shows active (in progress) transactions -- feed the db_block_size to multiply with t.used_ublk */
/* select value from v$parameter where name = 'db_block_size'; */
select sid, serial#,s.status,username, terminal, osuser,
t.start_time, r.name, (t.used_ublk*8192)/1024 USED_kb, t.used_ublk "ROLLB BLKS",
decode(t.space, 'YES', 'SPACE TX',
decode(t.recursive, 'YES', 'RECURSIVE TX',
decode(t.noundo, 'YES', 'NO UNDO TX', t.status)
)) status
from sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s
where t.xidusn = r.usn
and t.ses_addr = s.saddr;
-- TEMP, show user currently using space in temp space
select se.username
,se.sid
,se.serial#
,su.extents
,su.blocks * to_number(rtrim(p.value))/1024/1024 as Space
,tablespace
,segtype
from v$sort_usage su
,v$parameter p
,v$session se
where p.name = 'db_block_size'
and su.session_addr = se.saddr
order by se.username, se.sid;
-- To report the info on temp usage used...
select swa.sid, vs.process, vs.osuser, vs.machine,vst.sql_text, vs.sql_id "Session SQL_ID",
swa.sql_id "Active SQL_ID", trunc(swa.tempseg_size/1024/1024)"TEMP TOTAL MB"
from v$sql_workarea_active swa, v$session vs, v$sqltext vst
where swa.sid=vs.sid
and vs.sql_id=vst.sql_id
and piece=0
and swa.tempseg_size is not null
order by "TEMP TOTAL MB" desc;
-- a quick TEMP script for threshold ($TMP_THRSHLD is set elsewhere)
echo "TEMP_Threshold: $TMP_THRSHLD"
# ksh-specific: in bash "| read GET_TMP" runs in a subshell; use GET_TMP=$(...) instead
sqlplus -s << EOF | read GET_TMP
/ as sysdba
set head off
set pagesize 0
select sum(trunc(swa.tempseg_size/1024/1024))"TEMP TOTAL MB"
from v\$sql_workarea_active swa;
EOF
-- Oracle also provides single-block read statistics for every database file in the V$FILESTAT view. The file-level single-block average wait time can be calculated by dividing the SINGLEBLKRDTIM with the SINGLEBLKRDS, as shown next. (The SINGLEBLKRDTIM is in centiseconds.) You can quickly discover which files have unacceptable average wait times and begin to investigate the mount points or devices and ensure that they are exclusive to the database
select a.file#,
b.file_name,
a.singleblkrds,
a.singleblkrdtim,
a.singleblkrdtim/a.singleblkrds average_wait
from v$filestat a, dba_data_files b
where a.file# = b.file_id
and a.singleblkrds > 0
order by average_wait;
--------------------
-- BUFFER CACHE
--------------------
/*This dynamic view has an entry for each block in the database buffer cache. The status are:
free : Available ram block. It might contain data but it is not currently in use.
xcur :Block held exclusively by this instance
scur :Block held in cache, shared with other instance
cr :Block for consistent read
read :Block being read from disk
mrec :Block in media recovery mode
irec :Block in instance (crash) recovery mode
If it is needed to investigate the buffer cache you can use the following script:*/
SELECT count(*), db.object_name, tb.name
FROM v$bh bh, dba_objects db, v$tablespace tb
WHERE bh.objd = db.object_id
AND bh.TS# = TB.TS#
AND db.owner NOT IN ('SYS', 'SYSTEM')
GROUP BY db.object_name, bh.TS#, tb.name
ORDER BY 1 ASC;
-- get block
select block#,file#,status from v$bh where objd = 46186;
-- get touch count
select tch, file#, dbablk,
case when obj = 4294967295
then 'rbs/compat segment'
else (select max( '('||object_type||') ' ||
owner || '.' || object_name ) ||
decode( count(*), 1, '', ' maybe!' )
from dba_objects
where data_object_id = X.OBJ )
end what
from (
select tch, file#, dbablk, obj
from x$bh
where state <> 0
order by tch desc
) x
where rownum <= 5
/
--shows touch count for tables/indexes. Use to determine tables/indexes to keep
select decode(s.buffer_pool_id,0,'DEFAULT',1,'KEEP',2,'RECYCLE') buffer_pool,
s.owner, s.segment_name, s.segment_type,count(bh.obj) blocks, round(avg(bh.tch),2) avg_use, max(bh.tch) max_use
from sys_dba_segs s, X$BH bh where s.segment_objd = bh.obj
group by decode(s.buffer_pool_id,0,'DEFAULT',1,'KEEP',2,'RECYCLE'), s.segment_name, s.segment_type, s.owner
order by decode(s.buffer_pool_id,0,'DEFAULT',1,'KEEP',2,'RECYCLE'), count(bh.obj) desc,
round(avg(bh.tch),2) desc, max(bh.tch) desc;
}}}
http://jonathanlewis.wordpress.com/2010/08/24/index-rebuilds-2/
http://blog.tanelpoder.com/2007/06/23/a-gotcha-with-parallel-index-builds-parallel-degree-and-query-plans/
https://community.oracle.com/thread/2231622?tstart=0
https://asktom.oracle.com/pls/asktom/f%3Fp%3D100:11:0::::P11_QUESTION_ID:1412203938893
http://myracle.wordpress.com/2008/01/11/recover-database-without-control-files-and-redo-log-files/
http://blog.ronnyegner-consulting.de/2010/11/03/how-to-restore-an-rman-backup-without-any-existing-control-files/
http://www.freelists.org/post/oracle-l/high-recursive-cpu-usage
Metalink OPDG - recursive CPU
A test environment, no backup, noarchivelog mode.. and won't open because:
1) current redo log has a corrupted block
2) when it was opened, the undo tablespace also had a corrupted block
See the details here:
http://forums.oracle.com/forums/message.jspa?messageID=4232743#4232743
{{{
Hi Yas,
I tried what user "user583761" suggested. It worked for me.
My scenario was this... (BTW this is a critical test environment)
1) Due to some movements (by the storage engineer) on the SAN storage I got a corrupted current online redo log, so the database is looking for a change on that current redo log to sync the other datafiles and to be able to open the database
2) I have no backup, the database is in "noarchivelog mode"
3) I have to get rid of that "current redo log". So I just followed the steps here http://oracle-abc.wikidot.com/recovery-from-current-redolog-corruption
I did the following steps:
- set the parameter _allow_resetlogs_corruption=TRUE
- "recover database until cancel"
- "cancel"
- alter database open resetlogs
then my session timed out.. then i bounced the database and it opened. Then after a while the instance was again terminated.
4) I checked on the alert log, and found out that Oracle is trying to rollback some transactions in UNDO tablespace and there was a corrupted block on that tablespace. (so.. this is entirely another problem..)
5) I have to get rid of that UNDOTBS1. So I followed the steps mentioned here to set the UNDO_MANAGEMENT=MANUAL
http://dbaforums.org/oracle/index.php?showtopic=1062
6) It opened!
7) Created a new UNDOTBS2, alter the parameters again and removed the hidden parameters, then bounced the database.
8) Executed an incremental level 0 RMAN backup.
And since this is just a test environment (although critical) we will have the test engineers check the database first, but it is certain that we have to rebuild the database.
}}}
http://msdn.microsoft.com/en-us/library/bb211408(v=office.12).aspx
{{{
Sub ClearRanges()
Worksheets("Sheet1").Range("C5:D9,G9:H16,B14:D18"). _
ClearContents
End Sub
}}}
http://www.yogeshguptaonline.com/2009/05/macros-in-excel-selecting-multiple.html
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9534209800346444034
{{{
Oracle Database refreshes non-unique indexes and gather stats on them too with non-atomic refreshes:
create table t2 ( x , y ) as
select rownum x, mod(rownum, 5) y from dual connect by level <= 1000;
create table t1 ( x , y ) as
select rownum x, mod(rownum, 3) y from dual connect by level <= 1000;
create materialized view mv_name
refresh on demand
as
select t1.* from t1
union
select t2.* from t2;
create index iy on mv_name(y);
select status, num_rows from user_indexes
where index_name = 'IY';
STATUS NUM_ROWS
VALID 1,800
insert into t1
select rownum+1000 x, mod(rownum, 3) y from dual connect by level <= 1000;
insert into t2
select rownum+1000 x, mod(rownum, 5) y from dual connect by level <= 1000;
commit;
exec dbms_mview.refresh('mv_name', atomic_refresh => false);
select status, num_rows from user_indexes
where index_name = 'IY';
STATUS NUM_ROWS
VALID 3,600
select count(*) from mv_name;
COUNT(*)
3,600
In terms of making your refresh faster, it's worth investigating if you can go for regular fast refreshes.
You have to go for complete refreshes with union. But you can fast refresh with union all. (Just make sure you add a
marker column: https://jonathanlewis.wordpress.com/2016/07/12/union-all-mv/ )
So you could:
- Create a fast refresh union all MV
- A complete refresh MV on top of this returning the distinct rows
This may help if the intersection of the two tables is "small". If so, the distinct MV will have much less data to process => it'll be faster.
}}}
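A minimal sketch of that fast refresh union all idea, reusing t1/t2 from above (mv_ua is a hypothetical name); each branch needs a rowid MV log on its master, its ROWID, and a distinct marker literal:
{{{
create materialized view log on t1 with rowid;
create materialized view log on t2 with rowid;
create materialized view mv_ua
refresh fast on demand
as
select 'T1' marker, t.rowid rid, t.x, t.y from t1 t
union all
select 'T2' marker, t.rowid rid, t.x, t.y from t2 t;
}}}
The complete-refresh distinct MV then selects from mv_ua instead of hitting both base tables.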
<<showtoc>>
! python
!!cheatsheet
[img(100%,100%)[ https://i.imgur.com/oTjD8H5.png]]
!!setup
[img(100%,100%)[ https://i.imgur.com/q2EzdOk.png]]
!bash
https://www.linuxjournal.com/content/bash-regular-expressions
! regex debugger
https://www.debuggex.com/
http://en.wikipedia.org/wiki/Relational_algebra
Relax-and-Recover is a setup-and-forget Linux bare metal disaster recovery solution. It is easy to set up and requires no maintenance so there is no excuse for not using it.
http://relax-and-recover.org
https://blogs.oracle.com/XPSONHA/entry/relocating_grid_infrastructure
https://blogs.oracle.com/XPSONHA/entry/relocating_grid_infrastructure_1
http://www.evernote.com/shard/s48/sh/21e2267d-6530-4012-9615-982dd3850ded/8017106b4dd76c2d1e291e5faf14755f
!! How To Remove Texts Before Or After A Specific Character From Cells In Excel
https://www.extendoffice.com/documents/excel/1783-excel-remove-text-before-character.html
If cell is blank https://exceljet.net/formula/if-cell-is-blank
http://askdba.org/weblog/2010/09/renaming-diskgroup-containing-voting-disk-ocr/
/***
|Name:|RenameTagsPlugin|
|Description:|Allows you to easily rename or delete tags across multiple tiddlers|
|Version:|3.0 ($Rev: 5501 $)|
|Date:|$Date: 2008-06-10 23:11:55 +1000 (Tue, 10 Jun 2008) $|
|Source:|http://mptw.tiddlyspot.com/#RenameTagsPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License|http://mptw.tiddlyspot.com/#TheBSDLicense|
Rename a tag and you will be prompted to rename it in all its tagged tiddlers.
***/
//{{{
config.renameTags = {
prompts: {
rename: "Rename the tag '%0' to '%1' in %2 tidder%3?",
remove: "Remove the tag '%0' from %1 tidder%2?"
},
removeTag: function(tag,tiddlers) {
store.suspendNotifications();
for (var i=0;i<tiddlers.length;i++) {
store.setTiddlerTag(tiddlers[i].title,false,tag);
}
store.resumeNotifications();
store.notifyAll();
},
renameTag: function(oldTag,newTag,tiddlers) {
store.suspendNotifications();
for (var i=0;i<tiddlers.length;i++) {
store.setTiddlerTag(tiddlers[i].title,false,oldTag); // remove old
store.setTiddlerTag(tiddlers[i].title,true,newTag); // add new
}
store.resumeNotifications();
store.notifyAll();
},
storeMethods: {
saveTiddler_orig_renameTags: TiddlyWiki.prototype.saveTiddler,
saveTiddler: function(title,newTitle,newBody,modifier,modified,tags,fields,clearChangeCount,created) {
if (title != newTitle) {
var tagged = this.getTaggedTiddlers(title);
if (tagged.length > 0) {
// then we are renaming a tag
if (confirm(config.renameTags.prompts.rename.format([title,newTitle,tagged.length,tagged.length>1?"s":""])))
config.renameTags.renameTag(title,newTitle,tagged);
if (!this.tiddlerExists(title) && newBody == "")
// dont create unwanted tiddler
return null;
}
}
return this.saveTiddler_orig_renameTags(title,newTitle,newBody,modifier,modified,tags,fields,clearChangeCount,created);
},
removeTiddler_orig_renameTags: TiddlyWiki.prototype.removeTiddler,
removeTiddler: function(title) {
var tagged = this.getTaggedTiddlers(title);
if (tagged.length > 0)
if (confirm(config.renameTags.prompts.remove.format([title,tagged.length,tagged.length>1?"s":""])))
config.renameTags.removeTag(title,tagged);
return this.removeTiddler_orig_renameTags(title);
}
},
init: function() {
merge(TiddlyWiki.prototype,this.storeMethods);
}
}
config.renameTags.init();
//}}}
-- TABLE
HOW TO DO TABLE AND INDEX REORGANIZATION
Doc ID: Note:736563.1
How I Reorganize Objects In Our Manufacturing Database Server
Doc ID: 430679.1
http://www.makeuseof.com/dir/rescuetime/
http://blog.rescuetime.com/2009/05/29/rescuetime-now-does-non-computer-time-codename-timepie/
http://blog.rescuetime.com/2011/10/26/7-steps-to-boost-your-teams-productivity/
http://blog.rescuetime.com/2011/12/28/that-awesome-data-collector-we-call-carry-around-in-our-pockets/
http://blog.rescuetime.com/2012/10/10/updates-to-offline-time-logging/
http://besthubris.com/entrepreneur/rescuetime-time-tracker-offline-version-manictime/
! AWR scripts
<<<
!! AWR port
@awr_genwl
Capacity, Requirements, Utilization
<<<
- ngcp, meralco, dbm
! Statspack
<<<
!! OSM - Summary of Reports:
@genwl
General Workload Report
@findpeaks 0.00
Find General Workload Peak-of-Peak Report
@we %sequential%
Wait Event (%sequential%) Activity Report
@sprt2
Response Time Report
@cpufc
CPU Forecasting Stats Report
@iofc
IO Forecast Stats Report
@ip1 %sga_max_size%
Instance Parameter (by date/time) Report
@spsysstat %logon%
Sysstat Statistic (%logon%) Activity Report
@sqlrank 10
TOP SQL by 10 Report
<<<
<<<
!! Tim Gorman
sp_evtrend.sql
sp_systime2.sql
sp_sys_time_trends.sql
sp_buffer_busy.sql
gen_redo.sql
<<<
''MindMap - Database Resource Management (DBRM)'' http://www.evernote.com/shard/s48/sh/15245790-6686-4bcd-9c01-aa243187f086/1c722c2600b5f424ade98b81bc57e3c1
! The Types of Resources Managed by the Resource Manager
Resource plan directives specify how resources are allocated to resource consumer groups or subplans. Each directive can specify several different methods for allocating resources to its consumer group or subplan. The following sections summarize these resource allocation methods:
* CPU
* Degree of Parallelism Limit
* Parallel Target Percentage
* Parallel Queue Timeout
* Active Session Pool with Queuing
* Automatic Consumer Group Switching
* Canceling SQL and Terminating Sessions
* Execution Time Limit
* Undo Pool
* Idle Time Limit
** “max_utilization_limit” is available from Oracle Database 11g Release 2 onwards
** “cpu_managed” is available from Oracle Database 11g Release 2 onwards. For all releases, you can determine if Resource Manager is managing CPU by seeing whether the current resource plan has CPU directives.
** v$rsrcmgrmetric is available from Oracle Database 11g Release 1 onwards
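A quick check (11gR2 onwards, per the notes above) of whether the current plan is actually managing CPU, plus the live per-minute metrics:
{{{
select name, is_top_plan, cpu_managed from v$rsrc_plan;
select * from v$rsrcmgrmetric;
}}}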
! References
* ''Oracle Database Resource Manager and OBIEE'' http://www.rittmanmead.com/2010/01/oracle-database-resource-manager-and-obiee/
* Official Doc - 27 Managing Resources with Oracle Database Resource Manager http://docs.oracle.com/cd/E11882_01/server.112/e10595/dbrm.htm#i1010776
* Official Doc - http://docs.oracle.com/cd/E11882_01/server.112/e16638/os.htm#PFGRF95151
* MindMap - Resource Manager series whitepaper http://www.evernote.com/shard/s48/sh/64021de9-92c6-4ade-afeb-81a12e3e015f/fdc686cf4370cf9678098a0928a01669
** Introduction to Resource Management in Oracle Solaris and Oracle Database http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-054-intro-rm-419298.pdf
** Effective Resource Management Using Oracle Solaris Resource Manager http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-055-solaris-rm-419384.pdf
** Effective Resource Management Using Oracle Database Resource Manager http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-056-oracledb-rm-419380.pdf
** Resource Management Case Study for Mixed Workloads and Server Sharing http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-057-mixed-wl-rm-419381.pdf
* ''A fair bite of the CPU pie? Monitoring & Testing Oracle Resource Manager'' http://rnm1978.wordpress.com/2010/09/10/a-fair-bite-of-the-cpu-pie-monitoring-testing-oracle-resource-manager/
* ''Using Oracle Database Resource Manager'' http://www.oracle.com/technetwork/database/focus-areas/performance/resource-manager-twp-133705.pdf
* ''Control Your Environment with the Resource Manager'' http://seouc.com/PDF_files/2011/Presentations/NormanInstanceCaging_SEOUC_2011.pdf
* ''ResourceManagerEnhancements_11gR1'' http://www.oracle-base.com/articles/11g/ResourceManagerEnhancements_11gR1.php
* ''Oracle Resource Manager Concepts'' http://www.dbform.com/html/2010/1283.html
* ''limiting parallel per session'' http://www.experts-exchange.com/Database/Oracle/Q_10351442.html, http://www.freelists.org/post/oracle-l/limit-parallel-process-per-session-in-10204,2
* http://www.pythian.com/news/2740/oracle-limiting-query-runtime-without-killing-the-session/
! ''Enable the Resource Manager''
<<<
alter system set resource_manager_plan=default_plan;
alter system set cpu_count=12; ''<-- if you want to do instance caging''
<<<
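Once caged, the throttling shows up as CPU wait time per consumer group; a sketch for 11gR2 onwards (the time columns are in milliseconds):
{{{
select to_char(begin_time,'DD-MON HH24:MI') time,
       consumer_group_name,
       round(cpu_consumed_time/1000,1) cpu_secs,
       round(cpu_wait_time/1000,1)     throttled_secs
from   v$rsrcmgrmetric_history
order  by begin_time, consumer_group_name;
}}}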
! ''Disabling the Resource Manager''
<<<
To disable the Resource Manager, complete the following steps:
Issue the following SQL statement:
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';
Disassociate the Resource Manager from all Oracle Scheduler windows.
To do so, for any Scheduler window that references a resource plan in its resource_plan attribute, use the DBMS_SCHEDULER.SET_ATTRIBUTE procedure to set resource_plan to the empty string (''). Qualify the window name with the SYS schema name if you are not logged in as user SYS. You can view Scheduler windows with the DBA_SCHEDULER_WINDOWS data dictionary view. See "Altering Windows" and Oracle Database PL/SQL Packages and Types Reference for more information.
Note:
By default, all maintenance windows reference the DEFAULT_MAINTENANCE_PLAN resource plan. If you want to completely disable the Resource Manager, you must alter all maintenance windows to remove this plan. However, use caution, because resource consumption by automated maintenance tasks will no longer be regulated, which may adversely affect the performance of your other sessions. See Chapter 24, "Managing Automated Database Maintenance Tasks" for more information on maintenance windows.
<<<
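A sketch of the SET_ATTRIBUTE call described above (repeat per window; MONDAY_WINDOW is just one of the default maintenance windows):
{{{
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.MONDAY_WINDOW',
    attribute => 'RESOURCE_PLAN',
    value     => '');
END;
/
}}}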
Sysadmin Resources for Oracle Linux
http://blogs.sun.com/OTNGarage/entry/sysadmin_resources_for_oracle_linux
https://community.hortonworks.com/articles/90768/how-to-fix-ambari-hive-view-15-result-fetch-timed.html
http://www.orainternals.com/papers/tuning-101_bottleneck_identification_2.pdf
Doc ID: 415579.1 HowTo Restore RMAN Disk backups of RAC Database to Single Instance On Another Node
{{{
-- SETUP
1) do an incremental level 0 on server1
2) setup the following on server2
a. add an entry on the oratab
orcl:/oracle/app/oracle/product/10.2.0/db_1:N
b. create pfile
comment out the following lines
*.cluster_database_instances=2
*.cluster_database=true
*.remote_listener='LISTENERS_ORCL'
for undo tablespace comment out the following
orcl2.undo_tablespace='UNDOTBS2'
orcl1.undo_tablespace='UNDOTBS1'
and replace it with
undo_tablespace=UNDOTBS1
3) after the incremental level0 on server1, copy all of the backup pieces to server2
4) on server2, do "startup nomount"
5) restore controlfile from autobackup
restore controlfile from '/flash_reco/flash_recovery_area/ORCL/autobackup/ORCL-20090130-c-1177758841-20090130-00';
6) "alter database mount;"
7) catalog backup files
catalog start with '/flash_reco/flash_recovery_area/ORCL';
8) determine the point upto which media recovery should run on the restored database
List of Archived Logs in backup set 43
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 56 727109 30-JAN-09 727157 30-JAN-09
2 36 727107 30-JAN-09 727154 30-JAN-09
9) restore and recover
run {
set until sequence 37 thread 2;
restore database;
recover database;
}
-- FOR AUTOMATIC RECOVERY
1) copy archivelogs from primary
2) on server2 catalog the new archivelogs
catalog start with '/flash_reco/flash_recovery_area/ORCL/archivelog';
3) on server2 execute the following query
set lines 100
select 'recover automatic database until time '''||to_char(max(first_time),'YYYY-MM-DD:HH24:MI:SS')||''' using backup controlfile;' from v$archived_log;
----------------------------------------------------------------------------------------------------------------------
-- to automate the recovery deploy below scripts
-- file recover.sh
rman target / << EOF
catalog start with '/flash_reco/flash_recovery_area/ORCL/archivelog';
yes
exit
EOF
sqlplus "/ as sysdba" @getrecover.sql
-- file getrecover.sql
set lines 100
set heading off
spool recover.sql
select 'recover automatic database until time '''||to_char(max(first_time),'YYYY-MM-DD:HH24:MI:SS')||''' using backup controlfile;' from v$archived_log;
spool off
set echo on
spool recover.log
@@recover.sql
spool off
exit
----------------------------------------------------------------------------------------------------------------------
-- OPEN READ ONLY
1) disable block change tracking;
alter database disable block change tracking;
2) alter database open read only;
-- OPEN
1) rename online redo log files to the new location
2) open resetlogs
3) remove the redolog groups for redo threads of other instances
SQL> select THREAD#, STATUS, ENABLED
2 from v$thread;
THREAD# STATUS ENABLED
---------- ------ --------
1 OPEN PUBLIC
2 CLOSED PRIVATE
SQL> select group# from v$log where THREAD#=2;
GROUP#
----------
4
5
6
SQL> alter database disable thread 2;
Database altered.
SQL> alter database drop logfile group 4;
alter database drop logfile group 4
*
ERROR at line 1:
ORA-00350: log 4 of instance racdb2 (thread 2) needs to be archived
ORA-00312: online log 4 thread 2: '/u01/oracle/oradata/ractest/log/redo04.log'
SQL> alter database clear unarchived logfile group 4;
Database altered.
SQL> alter database drop logfile group 4;
Database altered.
SQL> alter database drop logfile group 5;
Database altered.
SQL> alter database drop logfile group 6;
Database altered.
SQL> select THREAD#, STATUS, ENABLED from v$thread;
THREAD# STATUS ENABLED
---------- ------ --------
1 OPEN PUBLIC
4) remove the undo tablespaces of other instances
SQL> sho parameter undo;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL>
SQL>
SQL> select tablespace_name from dba_tablespaces where contents='UNDO';
TABLESPACE_NAME
------------------------------
UNDOTBS1
UNDOTBS2
SQL> drop tablespace UNDOTBS2 including contents and datafiles;
Tablespace dropped.
5) create a new temporary tablespace to complete the activity
select file#, name from v$tempfile;
-- to drop tempfiles
select 'alter database tempfile '|| file# ||' drop including datafiles;'
from v$tempfile;
alter tablespace temp add tempfile '/u01b/oradata/HCPRD3/temp01.dbf' size 500M autoextend on next 100M maxsize 2000M;
-- CAVEATS
1) if you add a new datafile on the primary, recovery errors at first.. the file is added to the controlfile as UNNAMED, and after the recover datafile list is reset the recovery continues (see the alert log below)
Tue Jan 27 18:11:35 2009
alter database recover logfile '/flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc'
Tue Jan 27 18:11:35 2009
Media Recovery Log /flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc
File #6 added to control file as 'UNNAMED00006'. Originally created as:
'+DATA_1/orcl/datafile/karlarao.dbf'
Errors with log /flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
Tue Jan 27 18:11:38 2009
Media Recovery failed with error 1244
ORA-283 signalled during: alter database recover logfile '/flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc'...
Tue Jan 27 18:11:38 2009
alter database recover datafile list clear
Completed: alter database recover datafile list clear
Tue Jan 27 18:11:40 2009
alter database recover datafile list 6
Completed: alter database recover datafile list 6
Tue Jan 27 18:11:40 2009
alter database recover datafile list
1 , 2 , 3 , 4 , 5
Completed: alter database recover datafile list
1 , 2 , 3 , 4 , 5
Tue Jan 27 18:11:40 2009
alter database recover if needed
start until cancel using backup controlfile
Media Recovery Start
ORA-279 signalled during: alter database recover if needed
start until cancel using backup controlfile
...
Tue Jan 27 18:11:41 2009
alter database recover logfile '/flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc'
Tue Jan 27 18:11:41 2009
Media Recovery Log /flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc
ORA-279 signalled during: alter database recover logfile '/flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc'...
Tue Jan 27 18:11:41 2009
2) if a datafile was added, when you open read only the database will look for the added file (and errors if it is missing, see below)
SQL> alter database open read only;
alter database open read only
*
ERROR at line 1:
ORA-01565: error in identifying file '+DATA_1/orcl/datafile/karlarao.dbf'
ORA-17503: ksfdopn:2 Failed to open file +DATA_1/orcl/datafile/karlarao.dbf
ORA-15173: entry 'karlarao.dbf' does not exist in directory 'datafile'
3) if new redo logs are added on the primary, they will not be created on the pseudo standby
4) any tables created on the primary will appear on the pseudo standby once the archivelogs that contain them are applied
}}}
http://oraclue.com/2010/11/02/role-based-database-service/
MAX_ENABLED_ROLES: WHAT IS THE MAX THIS CAN BE SET TO?
Doc ID: Note:1012034.6
Roles and Privileges Administration and Restrictions
Doc ID: Note:13615.1
Oracle Clusterware (formerly CRS) Rolling Upgrades
Doc ID: Note:338706.1
How We Upgraded our Oracle 9i Database to Oracle 10g Database with Near-Zero Downtime
Doc ID: Note:431430.1
RAC Survival Kit: Database Upgrades and Migration
Doc ID: Note:206678.1
Applying one-off Oracle Clusterware patches in a mixed version home environment
Doc ID: Note:363254.1
10g Rolling Upgrades with Logical Standby
Doc ID: Note:300479.1
http://jonathanlewis.wordpress.com/2007/02/05/go-faster-stripes/#more-187
http://oraexplorer.com/2009/10/online-san-storage-migration-for-oracle-11g-rac-database-with-asm/
but wait.. I recently had this scenario.. and for large scale migration it's better to use SAN Copy.. passing the data through Fiber..
Facebook wall thread
<<<
I heart EMC SAN copy makes storage array migration so fast .. ;)
Martin Berger who wants to migrate storage arrays?
I only care of the data ;-)
June 13 at 11:31pm · Like ·
Roy Hayrosa be sure non of those files are corrupted..hehehe
June 14 at 3:30am · Like ·
Karl Arao @Martin: part of the job man :) there was a need to migrate from old CX to a new CX4 and the rac asm disks,OCR,vote disk should be moved.. Plan A was to make use of asm disk rebalance (add/drop) but turned out it will finish for like daysss.. Plan B was short and sweet make use of the SAN copy of the LUNs (header and metadata are the same) and instantly they were recognized without problems.. Rac started up without problems..
@roy: Asm metadata was clean, rman backup validate check logical was okay :)
June 15 at 2:28am · Like ·
Roy Hayrosa your the man! blog it...want to see the steps :)
June 15 at 12:52pm · Like ·
Karl Arao coming up soon :)
June 16 at 10:58am · Like ·
Martin Berger did you need a downtime for the SAN-copy? how did you exchange the disks on the servers? I'm very curious!
June 16 at 12:32pm · Like ·
Karl Arao Nope.. there is a way to do incremental SANCopy while the RAC environment is still running.. only time you have to full shutdown is when you do the final sync so the dirty blocks will be synced to the new devices... The whole activity is just like restarting the whole RAC environment and pointing the server to the new LUNs.. Bulk of the work will be on the storage engineer, in our case the OCR and Voting Disk are on OCFS2 so we just need to edit the fstab with the new EMC pseudo device names.. and for the ASM, we are using ASMlib.. and the new devices although having different names still has the header and metadata which the only two stuff that ASM cares about so when you boot the machine up again it's as if nothing happened.
Note that you should not present the OLD and NEW LUNs together, because ASM will tell you that it is seeing two instance of the disk.. :)
This is very ideal for large array migration and the business allows for a minimal downtime window.. at a minimum one restart of the servers.. instead of doing a full array migration using ASM rebalance which will take longer and will make you worry if it's already finished or not and if you take this route the bottleneck would be your CPUs consuming IO time (on full throttle rebalance power 11) .. compared to SANCopy, it's all passing through the Fiber (1TB = 1hour as per the engineer) and much faster.. :)
June 16 at 2:41pm · Like · 1 person ·
<<<
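After a cutover like that, a quick sanity check along these lines helps (a sketch only; the ASMlib commands assume ASMlib is in use as in the thread above, and the RMAN validate is the same check mentioned in the thread):
{{{
# rescan ASMlib devices.. ASM matches disks by header/metadata, not by device name
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks

-- confirm the disk headers came across clean
select name, path, header_status from v$asm_disk;

-- logical corruption check (same as in the thread)
RMAN> backup validate check logical database;
}}}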
add drop on dell storage http://www.oracle.com/jp/gridcenter/partner/nssol/wp-storage-mig-grid-nsso-289788-ja.pdf
How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11 http://www.oracle.com/technetwork/articles/servers-storage-admin/migrate-s8db-to-s11-1867397.html
on migration to HANA, this is where HANA can beat Exadata..
they can run BWA on top of HANA, or do BWA by itself
2 hour data load
cube to cube build
http://www.oracle.com/us/solutions/sap/oracleengsys-sap-080613-final2-1988224.pdf
http://www.precise.com/appsapone/app-sap-one
https://blogs.oracle.com/bhuang/entry/white_paper_for_sap_on
architecture config options https://www.google.com/search?q=sap+dual+stack+vs+single+stack&oq=SAP+dual+stacks+vs+&aqs=chrome.1.69i57j0.4685j0j1&sourceid=chrome&ie=UTF-8
Single stack vs. Dual Stack - Why does SAP recommend single stack https://archive.sap.com/discussions/thread/919833
https://www.sas.com/en_us/software/studio.html
https://www.google.com/search?ei=G9TBXNe5Cqu2ggen-q-ADA&q=r+studio+job+vs+SAS+studio&oq=r+studio+job+vs+SAS+studio&gs_l=psy-ab.3..35i39.4103.4103..4372...0.0..0.86.86.1......0....1..gws-wiz.2ZZe9iOp8yE
go here GetSystemInfo and run the system info tool, you'll see the interface and transfer mode SATA-300 and SATA-150
that is actually SATA II 3Gbps (375MB/s raw, ~300MB/s effective after 8b/10b encoding) and SATA I 1.5Gbps (187.5MB/s raw, ~150MB/s effective)
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZwkhYQb6MI/AAAAAAAABLc/-oT8vP0wHh0/SATA.png]]
Other references:
http://www.tomshardware.com/forum/56577-35-sata-notebooks-sale-today
http://www.gtopala.com/siw-download.html
http://www.neowin.net/forum/topic/603834-what-is-the-difference-between-sata-150-and-sata-300/
https://www.scaledagileframework.com/story/
https://learning.oreilly.com/videos/leading-safe-scaled/9780134864044/9780134864044-SAFE_00_01_03_00
https://github.com/karlarao/hive-scd-examples
https://www.researchgate.net/publication/330798405_Temporal_Dimensional_Modeling
https://github.com/Roenbaeck/tempodim
http://www.anchormodeling.com/?p=1212
https://mssqldude.wordpress.com/2019/04/15/adf-slowly-changing-dimension-type-2-with-mapping-data-flows-complete/
! SCD type 1,2,3
- Type 1 (update in place)
- Type 2 (historical: tracked with dates, flags, or versions; see the MERGE sketch after this list)
[img(70%,70%)[https://i.imgur.com/RPwufrR.png]]
- Type 3 (keep only the current and previous values; older history is overwritten)
[img(70%,70%)[https://i.imgur.com/sCQsQsn.png]]
- Mixed Type 2 and 3 (flag the two most recent versions with Y)
[img(70%,70%)[https://i.imgur.com/wOnAL58.png]]
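A minimal Type 2 sketch.. assuming hypothetical dim_customer / stg_customer tables with effective_date, end_date, and current_flag columns (all names made up for illustration):
{{{
-- step 1: expire the current row for customers whose tracked attribute changed
merge into dim_customer d
using stg_customer s
on (d.customer_id = s.customer_id and d.current_flag = 'Y')
when matched then
  update set d.end_date = sysdate, d.current_flag = 'N'
  where d.city <> s.city;

-- step 2: insert the new version for changed and brand-new customers
-- (after step 1 a changed customer no longer has a current row)
insert into dim_customer (customer_id, city, effective_date, end_date, current_flag)
select s.customer_id, s.city, sysdate, null, 'Y'
from stg_customer s
where not exists (select null from dim_customer d
                  where d.customer_id = s.customer_id
                  and d.current_flag = 'Y');
}}}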
[oracle@dbrocaix01 oradata]$ sqlplus "/ as sysdba"
SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jul 2 00:12:20 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Release 10.2.0.1.0 - Production
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> conn scott/tiger
Connected.
SQL>
SQL> begin
2 DBMS_RLS.ADD_POLICY (
3 OBJECT_SCHEMA => 'SEC_MGR',
4 OBJECT_NAME => 'PEOPLE_RO',
5 POLICY_NAME => 'PEOPLE_RO_IUD',
6 FUNCTION_SCHEMA => 'SEC_MGR',
7 POLICY_FUNCTION => 'No_Records',
8 STATEMENT_TYPES => 'INSERT,UPDATE,DELETE',
9 UPDATE_CHECK => TRUE);
10 end;
11 /
begin
*
ERROR at line 1:
ORA-00439: feature not enabled: Fine-grained access control
ORA-06512: at "SYS.DBMS_RLS", line 20
ORA-06512: at line 2
SQL> exit
Disconnected from Oracle Database 10g Release 10.2.0.1.0 - Production
EE options installed:
1) ASO
2) spatial
3) label security
4) OLAP
1) ASO
2) partitioning
3) spatial
4) OLAP
COMP_NAME VERSION STATUS
----------------------------------- ------------------------------ -----------
Oracle Database Catalog Views 10.2.0.1.0 VALID
Oracle Database Packages and Types 10.2.0.1.0 VALID
Oracle Workspace Manager 10.2.0.1.0 VALID
Oracle Label Security 10.2.0.1.0 VALID
PARAMETER VALUE
---------------------------------------- ------------------------------
Partitioning FALSE
Objects TRUE
Real Application Clusters FALSE
Advanced replication FALSE
Bit-mapped indexes FALSE
Connection multiplexing TRUE
Connection pooling TRUE
Database queuing TRUE
Incremental backup and recovery TRUE
Instead-of triggers TRUE
Parallel backup and recovery FALSE
PARAMETER VALUE
---------------------------------------- ------------------------------
Parallel execution FALSE
Parallel load TRUE
Point-in-time tablespace recovery FALSE
Fine-grained access control FALSE
Proxy authentication/authorization TRUE
Change Data Capture FALSE
Plan Stability TRUE
Online Index Build FALSE
Coalesce Index FALSE
Managed Standby FALSE
Materialized view rewrite FALSE
PARAMETER VALUE
---------------------------------------- ------------------------------
Materialized view warehouse refresh FALSE
Database resource manager FALSE
Spatial FALSE
Visual Information Retrieval FALSE
Export transportable tablespaces FALSE
Transparent Application Failover TRUE
Fast-Start Fault Recovery FALSE
Sample Scan TRUE
Duplexed backups FALSE
Java TRUE
OLAP Window Functions TRUE
PARAMETER VALUE
---------------------------------------- ------------------------------
Block Media Recovery FALSE
Fine-grained Auditing FALSE
Application Role FALSE
Enterprise User Security FALSE
Oracle Data Guard FALSE
Oracle Label Security FALSE
OLAP FALSE
Table compression FALSE
Join index FALSE
Trial Recovery FALSE
Data Mining FALSE
PARAMETER VALUE
---------------------------------------- ------------------------------
Online Redefinition FALSE
Streams Capture FALSE
File Mapping FALSE
Block Change Tracking FALSE
Flashback Table FALSE
Flashback Database FALSE
Data Mining Scoring Engine FALSE
Transparent Data Encryption FALSE
Backup Encryption FALSE
Unused Block Compression FALSE
54 rows selected.
-------------------------------------------------------------------------------------------------------------------
[oracle@dbrocaix01 oradata]$ sqlplus "/ as sysdba"
SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jul 2 00:12:41 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
select comp_name, version, status from dba_registry
COMP_NAME VERSION STATUS
----------------------------------- ------------------------------ -----------
Oracle Database Catalog Views 10.2.0.1.0 VALID
Oracle Database Packages and Types 10.2.0.1.0 VALID
Oracle Workspace Manager 10.2.0.1.0 VALID
Oracle Label Security 10.2.0.1.0 VALID
col parameter format a40
select * from v$option
PARAMETER VALUE
---------------------------------------- ------------------------------
Partitioning TRUE
Objects TRUE
Real Application Clusters FALSE
Advanced replication TRUE
Bit-mapped indexes TRUE
Connection multiplexing TRUE
Connection pooling TRUE
Database queuing TRUE
Incremental backup and recovery TRUE
Instead-of triggers TRUE
Parallel backup and recovery TRUE
PARAMETER VALUE
---------------------------------------- ------------------------------
Parallel execution TRUE
Parallel load TRUE
Point-in-time tablespace recovery TRUE
Fine-grained access control TRUE
Proxy authentication/authorization TRUE
Change Data Capture TRUE
Plan Stability TRUE
Online Index Build TRUE
Coalesce Index TRUE
Managed Standby TRUE
Materialized view rewrite TRUE
PARAMETER VALUE
---------------------------------------- ------------------------------
Materialized view warehouse refresh TRUE
Database resource manager TRUE
Spatial TRUE
Visual Information Retrieval TRUE
Export transportable tablespaces TRUE
Transparent Application Failover TRUE
Fast-Start Fault Recovery TRUE
Sample Scan TRUE
Duplexed backups TRUE
Java TRUE
OLAP Window Functions TRUE
PARAMETER VALUE
---------------------------------------- ------------------------------
Block Media Recovery TRUE
Fine-grained Auditing TRUE
Application Role TRUE
Enterprise User Security TRUE
Oracle Data Guard TRUE
Oracle Label Security FALSE
OLAP TRUE
Table compression TRUE
Join index TRUE
Trial Recovery TRUE
Data Mining TRUE
PARAMETER VALUE
---------------------------------------- ------------------------------
Online Redefinition TRUE
Streams Capture TRUE
File Mapping TRUE
Block Change Tracking TRUE
Flashback Table TRUE
Flashback Database TRUE
Data Mining Scoring Engine FALSE
Transparent Data Encryption TRUE
Backup Encryption TRUE
Unused Block Compression TRUE
54 rows selected.
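v$option is the definitive check for whether a feature is linked in (dba_registry only lists installed components), so a single-row query is enough to explain an ORA-00439:
{{{
select parameter, value from v$option
where parameter = 'Fine-grained access control';
}}}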
https://www.thatjeffsmith.com/archive/2016/11/7-ways-to-avoid-select-from-queries-in-sql-developer/
ORA-04031: Unable to Allocate 83232 Bytes of Shared Memory
Doc ID: 374329.1
Diagnosing and Resolving Error ORA-04031
Doc ID: 146599.1
ORA-4031 Common Analysis/Diagnostic Scripts
Doc ID: 430473.1
LOG_BUFFER Differs from the Value Set in the spfile or pfile
Doc ID: 373018.1
-- ASMM
Excess “KGH: NO ACCESS” Memory Allocation [Video] [ID 801787.1]
http://www.evernote.com/shard/s48/sh/b60af7b1-9b57-4be8-b64b-476f068d0c9f/6a719a99c1367c81b59620d9bb6fb1d0
Shutdown Normal or Shutdown Immediate Hangs. SMON disabling TX Recovery
Doc ID: Note:1076161.6
SMON - Temporary Segment Cleanup and Free Space Coalescing
Doc ID: Note:61997.1
ORA-0054: When Dropping or Truncating Table, When Creating or Rebuilding Index
Doc ID: Note:117316.1
http://docs.oracle.com/cd/E11857_01/em.111/e14091/chap1.htm#SNMPR001
http://docs.oracle.com/cd/E11857_01/em.111/e16790/notification.htm#EMADM9130
firescope http://www.firescope.com/Support/ , http://www.firescope.com/QuickStart/Unify/Article.asp?ContentID=1
Configuring SNMP Trap Notification Method in EM - Steps and Troubleshooting [ID 434886.1]
Where can I find the MIB file for Grid Control SNMP trap Notifications? [ID 389585.1]
The Enterprise Manager MIB file 'omstrap.v1' has Incorrect Formatting [ID 750117.1]
How To Configure Notification Rules in Enterprise Manager Grid Control? [ID 429422.1]
How to Troubleshoot Notifications That Are Hung / Stuck and Not Being Sent from EM 10g [ID 285093.1]
How to Verify the SNMP Trap Contents Being Sent by Grid Control? [ID 469884.1]
http://h30499.www3.hp.com/t5/ITRC-HP-Systems-Insight-Manager/HP-insight-manager-snmp-traps/td-p/5269813
''Monitoring Exadata database machine with Oracle Enterprise Manager 11g'' http://dbastreet.com/blog/?tag=enterprise-manager If you use enterprise-wide monitoring tools like Tivoli, OpenView or Netcool, use SNMP traps from Oracle Enterprise Manager to notify these monitoring tools (i.e., don't try to directly use SNMP to monitor the Exadata components; you could, but it would be too time consuming).
''setting up SNMP''
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch22_:_Monitoring_Server_Performance
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch23_:_Advanced_MRTG_for_Linux
http://www.cyberciti.biz/nixcraft/linux/docs/uniqlinuxfeatures/mrtg/
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/sect-System_Monitoring_Tools-Net-SNMP.html
TESTING SQL PERFORMANCE IMPACT OF AN ORACLE 9i TO ORACLE DATABASE 10g RELEASE 2 UPGRADE WITH SQL PERFORMANCE ANALYZER (Doc ID 562899.1)
SQL PERFORMANCE ANALYZER EXAMPLE [ID 455889.1]
{{{
SQL>
SQL> --Setting optimizer_capture_sql_plan_baselines=TRUE
SQL> --Automatically Captures the Plan for any
SQL> --Repeatable SQL statement.
SQL> --By Default optimizer_capture_sql_plan_baselines is False.
SQL> --A repeatable SQL is one which is executed more than once.
SQL> --Now we will Capture various Optimizer plans in SPM Baseline.
SQL> --Plans are captured into the Baselines as New Plans are found.
SQL>
SQL> pause
SQL>
SQL> alter session set optimizer_capture_sql_plan_baselines = TRUE;
Session altered.
SQL> alter session set optimizer_use_sql_plan_baselines=FALSE;
Session altered.
SQL> alter system set optimizer_features_enable='11.1.0.6';
System altered.
SQL>
SQL> pause
SQL>
SQL> set autotrace on
SQL>
SQL> pause
SQL>
SQL> --Execute Sql.
SQL>
SQL> pause
SQL>
SQL> SELECT *
2 from sh.sales
3 where quantity_sold > 30
4 order by prod_id;
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 3803407550
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 29 | 328 (7)| 00:00:04 | | |
| 1 | SORT ORDER BY | | 1 | 29 | 328 (7)| 00:00:04 | | |
| 2 | PARTITION RANGE ALL| | 1 | 29 | 327 (6)| 00:00:04 | 1 | 28 |
|* 3 | TABLE ACCESS FULL | SALES | 1 | 29 | 327 (6)| 00:00:04 | 1 | 28 |
----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("QUANTITY_SOLD">30)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1718 consistent gets
0 physical reads
0 redo size
639 bytes sent via SQL*Net to client
409 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
SQL> pause
SQL>
SQL> --This was the first execution of the Sql.
SQL>
SQL> pause
SQL>
SQL> SELECT *
2 from sh.sales
3 where quantity_sold > 30
4 order by prod_id;
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 3803407550
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 29 | 328 (7)| 00:00:04 | | |
| 1 | SORT ORDER BY | | 1 | 29 | 328 (7)| 00:00:04 | | |
| 2 | PARTITION RANGE ALL| | 1 | 29 | 327 (6)| 00:00:04 | 1 | 28 |
|* 3 | TABLE ACCESS FULL | SALES | 1 | 29 | 327 (6)| 00:00:04 | 1 | 28 |
----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("QUANTITY_SOLD">30)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1718 consistent gets
0 physical reads
0 redo size
639 bytes sent via SQL*Net to client
409 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
SQL> pause
SQL>
SQL> set autotrace off
SQL>
SQL> pause
SQL>
SQL> --This was the second execution of the Sql.
SQL>
SQL> pause
SQL>
SQL>
SQL> --Verify what Plans have been inserted into Plan Baseline.
SQL>
SQL> pause
SQL>
SQL> select sql_handle, plan_name,
2 origin, enabled, accepted,sql_text
3 from dba_sql_plan_baselines
4 where sql_text like 'SELECT%sh.sales%';
SQL_HANDLE PLAN_NAME ORIGIN ENA ACC
------------------------ ----------------------------- -------------- --- ---
SQL_TEXT
--------------------------------------------------------------------------------
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d211df68d0 AUTO-CAPTURE YES YES
SELECT *
from sh.sales
where quantity_sold > 30
order by prod_id
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d254bc8843 AUTO-CAPTURE YES NO
SELECT *
from sh.sales
where quantity_sold > 30
order by prod_id
SQL>
SQL> pause
SQL>
SQL> --SYS_SQL_PLAN_0f3e54d211df68d0 is the Very First Plan that
SQL> --Is inserted in the Sql Plan Baseline.
SQL> --Note the Very First Plan is ENABLED=YES AND ACCEPTED=YES
SQL> --Note the ORIGIN is AUTO-CAPTURE WHICH MEANS The plan was captured
SQL> --Automatically when optimizer_capture_sql_plan_baselines = TRUE.
SQL> --Note The Plan hash value: 3803407550
SQL>
SQL> pause
SQL>
SQL> --Let us change the Optimizer Environment.
SQL>
SQL> pause
SQL>
SQL> alter system set optimizer_features_enable='10.2.0.3';
System altered.
SQL> alter session set optimizer_index_cost_adj=1;
Session altered.
SQL>
SQL> pause
SQL>
SQL> set autotrace on
SQL>
SQL> pause
SQL>
SQL> SELECT *
2 from sh.sales
3 where quantity_sold > 30
4 order by prod_id;
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 899219946
-----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 29 | 294 (13)| 00:00:04 | | |
| 1 | SORT ORDER BY | | 1 | 29 | 294 (13)| 00:00:04 | | |
| 2 | PARTITION RANGE ALL | | 1 | 29 | 293 (13)| 00:00:04 | 1 | 28 |
|* 3 | TABLE ACCESS BY LOCAL INDEX ROWID| SALES | 1 | 29 | 293 (13)| 00:00:04 | 1 | 28 |
| 4 | BITMAP CONVERSION TO ROWIDS | | | | | | | |
| 5 | BITMAP INDEX FULL SCAN | SALES_PROMO_BIX | | | | | 1 | 28 |
-----------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("QUANTITY_SOLD">30)
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2030 consistent gets
0 physical reads
0 redo size
639 bytes sent via SQL*Net to client
409 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
SQL> pause
SQL>
SQL> set autotrace off
SQL>
SQL> pause
SQL>
SQL> --Note the plan hash value
SQL>
SQL> pause
SQL>
SQL> --Let us verify if the plan was inserted into Plan Baseline.
SQL>
SQL> pause
SQL>
SQL> select sql_handle, plan_name,
2 origin, enabled, accepted,sql_text
3 from dba_sql_plan_baselines
4 where sql_text like 'SELECT%sh.sales%';
SQL_HANDLE PLAN_NAME ORIGIN ENA ACC
------------------------ ----------------------------- -------------- --- ---
SQL_TEXT
--------------------------------------------------------------------------------
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d211df68d0 AUTO-CAPTURE YES YES
SELECT *
from sh.sales
where quantity_sold > 30
order by prod_id
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d254bc8843 AUTO-CAPTURE YES NO
SELECT *
from sh.sales
where quantity_sold > 30
order by prod_id
SQL>
SQL> pause
SQL>
SQL> ---A New Plan SYS_SQL_PLAN_0f3e54d254bc8843 was found
SQL> ---And inserted in the Plan Baseline.
SQL> ---Note the ACCEPTED is NO, as this is the second plan added to the baseline.
SQL> ---Note The Plan hash value: 899219946
SQL>
SQL> pause
SQL>
SQL> --Let us Change the Optimizer Environment.
SQL>
SQL> pause
SQL>
SQL> alter system set optimizer_features_enable='9.2.0';
System altered.
SQL> alter session set optimizer_index_cost_adj=50;
Session altered.
SQL> alter session set optimizer_index_caching=100;
Session altered.
SQL>
SQL> pause
SQL>
SQL> set autotrace on
SQL>
SQL> pause
SQL>
SQL> SELECT *
2 from sh.sales
3 where quantity_sold > 30
4 order by prod_id;
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 3803407550
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 29 | 47 | | |
| 1 | SORT ORDER BY | | 1 | 29 | 47 | | |
| 2 | PARTITION RANGE ALL| | 1 | 29 | 45 | 1 | 28 |
|* 3 | TABLE ACCESS FULL | SALES | 1 | 29 | 45 | 1 | 28 |
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("QUANTITY_SOLD">30)
Note
-----
- cpu costing is off (consider enabling it)
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
1718 consistent gets
0 physical reads
0 redo size
639 bytes sent via SQL*Net to client
409 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
SQL> pause
SQL>
SQL> set autotrace off
SQL>
SQL> pause
SQL>
SQL> --Note the plan hash value.
SQL>
SQL> pause
SQL>
SQL> ----Let us verify if the plan was inserted into Plan Baseline.
SQL>
SQL> pause
SQL>
SQL> select sql_handle,plan_name,
2 origin, enabled, accepted,sql_text
3 from dba_sql_plan_baselines
4 where sql_text like 'SELECT%sh.sales%';
SQL_HANDLE PLAN_NAME ORIGIN ENA ACC
------------------------ ----------------------------- -------------- --- ---
SQL_TEXT
--------------------------------------------------------------------------------
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d211df68d0 AUTO-CAPTURE YES YES
SELECT *
from sh.sales
where quantity_sold > 30
order by prod_id
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d254bc8843 AUTO-CAPTURE YES NO
SELECT *
from sh.sales
where quantity_sold > 30
order by prod_id
SQL>
SQL> pause
SQL>
SQL> --Note No Plan was added above because The plan found was not New,
SQL> --It was the same plan as found the First Time
SQL> --Note the Plan hash value: 3803407550
SQL>
SQL>
SQL> pause
SQL>
SQL> --Let us Change the Optimizer Environment.
SQL>
SQL> pause
SQL>
SQL> alter system set optimizer_features_enable='9.2.0';
System altered.
SQL> alter session set optimizer_mode = first_rows;
Session altered.
SQL>
SQL> pause
SQL>
SQL> set autotrace on
SQL>
SQL> pause
SQL>
SQL> SELECT *
2 from sh.sales
3 where quantity_sold > 30
4 order by prod_id;
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 899219946
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 29 | 2211 | | |
| 1 | SORT ORDER BY | | 1 | 29 | 2211 | | |
| 2 | PARTITION RANGE ALL | | 1 | 29 | 2209 | 1 | 28 |
|* 3 | TABLE ACCESS BY LOCAL INDEX ROWID| SALES | 1 | 29 | 2209 | 1 | 28 |
| 4 | BITMAP CONVERSION TO ROWIDS | | | | | | |
| 5 | BITMAP INDEX FULL SCAN | SALES_PROMO_BIX | | | | 1 | 28 |
-------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("QUANTITY_SOLD">30)
Note
-----
- cpu costing is off (consider enabling it)
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2030 consistent gets
0 physical reads
0 redo size
639 bytes sent via SQL*Net to client
409 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
SQL> pause
SQL>
SQL> set autotrace off
SQL>
SQL> pause
SQL>
SQL> --Note the plan hash value
SQL>
SQL> pause
SQL>
SQL> --Let us verify if the plan was inserted into Plan Baseline.
SQL>
SQL> pause
SQL>
SQL> select sql_handle, plan_name,
2 origin, enabled, accepted,sql_text
3 from dba_sql_plan_baselines
4 where sql_text like 'SELECT%sh.sales%';
SQL_HANDLE PLAN_NAME ORIGIN ENA ACC
------------------------ ----------------------------- -------------- --- ---
SQL_TEXT
--------------------------------------------------------------------------------
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d211df68d0 AUTO-CAPTURE YES YES
SELECT *
from sh.sales
where quantity_sold > 30
order by prod_id
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d254bc8843 AUTO-CAPTURE YES NO
SELECT *
from sh.sales
where quantity_sold > 30
order by prod_id
SQL>
SQL> pause
SQL>
SQL> ---Note No plan was added above because The Plan found was not New.
SQL> ---Note the Plan hash value: 899219946
SQL>
SQL> pause
SQL>
SQL> --So the SPM Baseline is now populated with 2 different plans.
SQL>
SQL> pause
SQL>
SQL> --Let us now Turn the auto Capture off
SQL>
SQL> pause
SQL>
SQL> alter session set optimizer_capture_sql_plan_baselines = FALSE;
Session altered.
SQL>
SQL> pause
SQL>
SQL> --Optimizer_capture_sql_plan_baselines needs to be set
SQL> --To TRUE only for Capture purpose.
SQL> --Optimizer_capture_sql_plan_baselines =TRUE is not
SQL> --Needed for USING AN existing SPM Baseline.
SQL>
SQL> pause
SQL>
SQL> --Now lets us see how the SPM uses the Plan.
SQL> --The Parameter optimizer_use_sql_plan_baselines
SQL> --Must be true for plans from SPM to be used.
SQL> --By Default optimizer_use_sql_plan_baselines
SQL> --Is set to TRUE only.
SQL> --If optimizer_use_sql_plan_baselines is set
SQL> --To FALSE then Plans will not be used
SQL> --From existing SPM Baseline,
SQL> --Even if they are populated.
SQL> --Note The Plan must be ENABLED=YES AND ACCEPTED=YES
SQL> --To be used by SPM.
SQL> --The Very First Plan for a particular sql that gets
SQL> --Loaded into an SPM Baseline Is ENABLED=YES
SQL> --AND ACCEPTED=YES.
SQL> --After that any Plan that gets loaded into the
SQL> --SPM Baseline is ENABLED=YES AND ACCEPTED=NO.
SQL> --These Plans need to be ACCEPTED=YES before
SQL> --They can be used,
SQL> --The plans can be made ACCEPTED=YES by using the
SQL> --Plan verification Step.
SQL>
SQL> pause
SQL>
SQL> alter system set optimizer_use_sql_plan_baselines =TRUE;
System altered.
SQL>
SQL> pause
SQL>
SQL> --Let us change the optimizer environment.
SQL>
SQL> pause
SQL>
SQL> alter system set optimizer_features_enable='10.2.0.3';
System altered.
SQL> alter session set optimizer_index_cost_adj=1;
Session altered.
SQL>
SQL> pause
SQL>
SQL> set autotrace on
SQL>
SQL> pause
SQL>
SQL> --Execute the Sql in this new environment.
SQL>
SQL> pause
SQL>
SQL> SELECT *
2 from sh.sales
3 where quantity_sold > 30
4 order by prod_id;
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 899219946
-----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 29 | 294 (13)| 00:00:04 | | |
| 1 | SORT ORDER BY | | 1 | 29 | 294 (13)| 00:00:04 | | |
| 2 | PARTITION RANGE ALL | | 1 | 29 | 293 (13)| 00:00:04 | 1 | 28 |
|* 3 | TABLE ACCESS BY LOCAL INDEX ROWID| SALES | 1 | 29 | 293 (13)| 00:00:04 | 1 | 28 |
| 4 | BITMAP CONVERSION TO ROWIDS | | | | | | | |
| 5 | BITMAP INDEX FULL SCAN | SALES_PROMO_BIX | | | | | 1 | 28 |
-----------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("QUANTITY_SOLD">30)
Note
-----
- SQL plan baseline "SYS_SQL_PLAN_0f3e54d211df68d0" used for this statement
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2030 consistent gets
0 physical reads
0 redo size
639 bytes sent via SQL*Net to client
409 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
SQL> pause
SQL>
SQL> set autotrace off
SQL>
SQL> pause
SQL>
SQL> --Even though we have set
SQL> --alter system set optimizer_features_enable='10.2.0.3';
SQL> --alter session set optimizer_index_cost_adj=1
SQL> --We are still getting the Plan with a Plan hash value: 899219946
SQL> --This is because the SQL PLAN baseline was used.
SQL> --Note the Line
SQL> --SQL plan baseline "SYS_SQL_PLAN_0f3e54d211df68d0" used for this statement
SQL> --This indicates SQL plan baseline was used.
SQL> --Note that SYS_SQL_PLAN_0f3e54d211df68d0 was used because it was
SQL> --ENABLED=YES and ACCEPTED=YES.
SQL>
SQL> Pause
SQL>
SQL> --Let us disable Plan SYS_SQL_PLAN_0f3e54d254bc8843
SQL> --We will use dbms_spm.alter_sql_plan_baseline
SQL>
SQL> pause
SQL>
SQL> var pbsts varchar2(30);
SQL> exec :pbsts := dbms_spm.alter_sql_plan_baseline('SYS_SQL_7de69bb90f3e54d2','SYS_SQL_PLAN_0f3e54d254bc8843','accepted','NO');
PL/SQL procedure successfully completed.
SQL>
SQL> pause
SQL>
SQL> --Verify the Plan Baseline.
SQL>
SQL> pause
SQL>
SQL> select sql_handle, plan_name,
2 origin, enabled, accepted, fixed, sql_text
3 from dba_sql_plan_baselines
4 where sql_text like 'SELECT%sh.sales%';
SQL_HANDLE PLAN_NAME ORIGIN ENA ACC FIX
------------------------ ----------------------------- -------------- --- --- ---
SQL_TEXT
--------------------------------------------------------------------------------
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d211df68d0 AUTO-CAPTURE YES YES NO
SELECT *
from sh.sales
where quantity_sold > 30
order by prod_id
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d254bc8843 AUTO-CAPTURE YES NO NO
SELECT *
from sh.sales
where quantity_sold > 30
order by prod_id
SQL>
SQL> pause
SQL>
SQL> --Note that SQL WITH SQL HANDLE SYS_SQL_7de69bb90f3e54d2 AND
SQL> --Plan Name SYS_SQL_PLAN_0f3e54d254bc8843 is accepted=NO
SQL> --So This plan should not be used Now.
SQL>
SQL> pause
SQL>
SQL> alter system set optimizer_features_enable='10.2.0.3';
System altered.
SQL> alter session set optimizer_index_cost_adj=1;
Session altered.
SQL>
SQL> pause
SQL>
SQL> set autotrace on
SQL>
SQL> pause
SQL>
SQL> SELECT *
2 from sh.sales
3 where quantity_sold > 30
4 order by prod_id;
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 899219946
-----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 29 | 294 (13)| 00:00:04 | | |
| 1 | SORT ORDER BY | | 1 | 29 | 294 (13)| 00:00:04 | | |
| 2 | PARTITION RANGE ALL | | 1 | 29 | 293 (13)| 00:00:04 | 1 | 28 |
|* 3 | TABLE ACCESS BY LOCAL INDEX ROWID| SALES | 1 | 29 | 293 (13)| 00:00:04 | 1 | 28 |
| 4 | BITMAP CONVERSION TO ROWIDS | | | | | | | |
| 5 | BITMAP INDEX FULL SCAN | SALES_PROMO_BIX | | | | | 1 | 28 |
-----------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("QUANTITY_SOLD">30)
Note
-----
- SQL plan baseline "SYS_SQL_PLAN_0f3e54d211df68d0" used for this statement
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2030 consistent gets
0 physical reads
0 redo size
639 bytes sent via SQL*Net to client
409 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
SQL> pause
SQL>
SQL> set autotrace off
SQL>
SQL> pause
SQL>
SQL> --The accepted plan SYS_SQL_PLAN_0f3e54d211df68d0 from the SPM Baseline was still used
SQL> --(only SYS_SQL_PLAN_0f3e54d254bc8843 had its ACCEPTED flag set to NO).
SQL>
SQL> pause
SQL>
SQL> --Now let us enable the use of SPM for sql handle SYS_SQL_7de69bb90f3e54d2
SQL> --And Plan Name SYS_SQL_PLAN_0f3e54d211df68d0
SQL>
SQL> pause
SQL>
SQL> var pbsts varchar2(30);
SQL> exec :pbsts := dbms_spm.alter_sql_plan_baseline('SYS_SQL_7de69bb90f3e54d2','SYS_SQL_PLAN_0f3e54d211df68d0','accepted','YES');
PL/SQL procedure successfully completed.
SQL>
SQL> pause
SQL>
SQL> --Verify the Plan Baseline.
SQL>
SQL> pause
SQL>
SQL> select sql_handle, plan_name,
2 origin, enabled, accepted,sql_text
3 from dba_sql_plan_baselines
4 where sql_text like 'SELECT%sh.sales%';
SQL_HANDLE PLAN_NAME ORIGIN ENA ACC
------------------------ ----------------------------- -------------- --- ---
SQL_TEXT
--------------------------------------------------------------------------------
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d211df68d0 AUTO-CAPTURE YES YES
SELECT *
from sh.sales
where quantity_sold > 30
order by prod_id
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d254bc8843 AUTO-CAPTURE YES NO
SELECT *
from sh.sales
where quantity_sold > 30
order by prod_id
SQL>
SQL> pause
SQL>
SQL> --Note that SQL with SQL HANDLE SYS_SQL_7de69bb90f3e54d2 AND
SQL> --Plan Name SYS_SQL_PLAN_0f3e54d211df68d0 is accepted=YES
SQL> --and ENABLED=YES
SQL>
SQL> pause
SQL>
SQL> --Let us Change the Optimizer Envoriment
SQL>
SQL> pause
SQL>
SQL> alter system set optimizer_features_enable='11.1.0.6';
System altered.
SQL> alter session set optimizer_index_cost_adj=100;
Session altered.
SQL> alter session set optimizer_index_caching=0;
Session altered.
SQL>
SQL> pause
SQL>
SQL> set autotrace on
SQL>
SQL> pause
SQL>
SQL> SELECT *
2 from sh.sales
3 where quantity_sold > 30
4 order by prod_id;
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 899219946
-----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 29 | 294 (13)| 00:00:04 | | |
| 1 | SORT ORDER BY | | 1 | 29 | 294 (13)| 00:00:04 | | |
| 2 | PARTITION RANGE ALL | | 1 | 29 | 293 (13)| 00:00:04 | 1 | 28 |
|* 3 | TABLE ACCESS BY LOCAL INDEX ROWID| SALES | 1 | 29 | 293 (13)| 00:00:04 | 1 | 28 |
| 4 | BITMAP CONVERSION TO ROWIDS | | | | | | | |
| 5 | BITMAP INDEX FULL SCAN | SALES_PROMO_BIX | | | | | 1 | 28 |
-----------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("QUANTITY_SOLD">30)
Note
-----
- SQL plan baseline "SYS_SQL_PLAN_0f3e54d211df68d0" used for this statement
Statistics
----------------------------------------------------------
90 recursive calls
0 db block gets
2075 consistent gets
0 physical reads
0 redo size
639 bytes sent via SQL*Net to client
409 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
13 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
SQL> pause
SQL>
SQL> set autotrace off
SQL>
SQL> pause
SQL>
SQL> --Note that
SQL> --SQL plan baseline "SYS_SQL_PLAN_0f3e54d211df68d0" used for this statement
SQL> --The Plan hash value: 899219946
SQL> --
SQL>
SQL> pause
SQL>
SQL> spool off
}}}
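The demo flips ACCEPTED by hand via dbms_spm.alter_sql_plan_baseline; the "plan verification step" it refers to is dbms_spm.evolve_sql_plan_baseline. A minimal sketch, reusing the sql_handle from the demo above:
{{{
set serveroutput on
declare
  report clob;
begin
  -- verify => 'YES' test-executes the non-accepted plans and compares performance;
  -- commit => 'YES' marks the better ones ACCEPTED=YES
  report := dbms_spm.evolve_sql_plan_baseline(
              sql_handle => 'SYS_SQL_7de69bb90f3e54d2',
              verify     => 'YES',
              commit     => 'YES');
  dbms_output.put_line(report);  -- the verification report (can be long)
end;
/
}}}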
http://docs.oracle.com/cd/E11882_01/server.112/e16638/optplanmgmt.htm#BABEAFGG
http://www.databasejournal.com/features/oracle/article.php/3896411/article.htm
sql baseline 10g http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/10g/r2/sql_baseline.viewlet/sql_baseline_viewlet_swf.html
sql baseline 11g http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r2/11gr2_baseline/11gr2_baseline_viewlet_swf.html
baselines and better plans http://www.oracle.com/technetwork/issue-archive/2009/09-mar/o29spm-092092.html
Optimizer Plan Change Management: Improved Stability and Performance in 11g http://www.vldb.org/pvldb/1/1454175.pdf
http://optimizermagic.blogspot.com/2009/01/plan-regressions-got-you-down-sql-plan.html
http://optimizermagic.blogspot.com/2009/01/sql-plan-management-part-2-of-4-spm.html
http://optimizermagic.blogspot.com/2009/01/sql-plan-management-part-3-of-4.html
http://optimizermagic.blogspot.com/2009/02/sql-plan-management-part-4-of-4-user.html
https://blogs.oracle.com/optimizer/entry/sql_plan_management_part_1_of_4_creating_sql_plan_baselines
https://blogs.oracle.com/optimizer/entry/sql_plan_management_part_2_of_4_spm_aware_optimizer
https://blogs.oracle.com/optimizer/entry/sql_plan_management_part_3_of_4_evolving_sql_plan_baselines_1
https://blogs.oracle.com/optimizer/entry/sql_plan_management_part_4_of_4_user_interfaces_and_other_features
''OBE videos:''
Controlling Execution Plan Evolution Using SQL Plan Management http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r2/prod/manage/spm/spm.htm
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r1/prod/manage/spm/spm.htm
viewlet http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r2/changemgmt/05_spm/05_spm_viewlet_swf.html
Plan Stability Features (Including SPM) Start Point [ID 1359841.1]
How to use hints to customize SQL Profile or SQL PLAN Baseline [ID 1400903.1]
How to Use SQL Plan Management (SPM) - Example Usage [ID 456518.1]
Oracle 11g – How to force a sql_id to use a plan_hash_value using SQL Baselines
http://rnm1978.wordpress.com/2011/06/28/oracle-11g-how-to-force-a-sql_id-to-use-a-plan_hash_value-using-sql-baselines/
http://aprakash.wordpress.com/2012/07/05/loading-sql-plan-into-spm-using-awr/
http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/
http://blog.tanelpoder.com/oracle/performance/sql/oracle-sql-plan-stability/
http://intermediatesql.com/tag/spm/ <-- GOOD STUFF by Maxym Kharchenko
http://technology.amis.nl/wp-content/uploads/2013/04/Koppelaars_SQL_Plan_Mgmt.pdf <-- GOOD STUFF by Toon Koppelaars
http://fordba.wordpress.com/tag/dbms_spm-evolve_sql_plan_baseline/ <-- good stuff
http://www.oracle-base.com/articles/11g/sql-plan-management-11gr1.php <-- GOOD STUFF
http://www.slideshare.net/mariselsins/using-sql-plan-management-for-performance-testing#, http://www.pythian.com/wp-content/uploads/2013/03/NoCOUG_Journal_201208_Maris_Elsins.pdf
! 2021
https://www.oracle.com/database/technologies/datawarehouse-bigdata/query-optimization.html <- good stuff white papers
spm 19c https://www.oracle.com/technetwork/database/bi-datawarehousing/twp-sql-plan-mgmt-19c-5324207.pdf
spm 18c https://www.oracle.com/technetwork/database/bi-datawarehousing/twp-sql-plan-management-0218-4403742.pdf
https://blogs.oracle.com/optimizer/what-is-automatic-sql-plan-management-and-why-should-you-care
https://orastory.wordpress.com/2015/05/01/strategies-for-minimising-sql-execution-plan-instability/ <- ideas
http://mwidlake.wordpress.com/2009/12/10/command_type-values/
http://mwidlake.wordpress.com/2010/01/08/more-on-command_type-values/
{{{
select * from audit_actions order by action
ACTION NAME
---------- ----------------------------
0 UNKNOWN
1 CREATE TABLE
2 INSERT
3 SELECT
4 CREATE CLUSTER
5 ALTER CLUSTER
6 UPDATE
7 DELETE
8 DROP CLUSTER
9 CREATE INDEX
10 DROP INDEX
11 ALTER INDEX
12 DROP TABLE
13 CREATE SEQUENCE
14 ALTER SEQUENCE
15 ALTER TABLE
16 DROP SEQUENCE
17 GRANT OBJECT
18 REVOKE OBJECT
19 CREATE SYNONYM
20 DROP SYNONYM
21 CREATE VIEW
22 DROP VIEW
23 VALIDATE INDEX
24 CREATE PROCEDURE
25 ALTER PROCEDURE
26 LOCK
27 NO-OP
28 RENAME
29 COMMENT
30 AUDIT OBJECT
31 NOAUDIT OBJECT
32 CREATE DATABASE LINK
33 DROP DATABASE LINK
34 CREATE DATABASE
35 ALTER DATABASE
36 CREATE ROLLBACK SEG
37 ALTER ROLLBACK SEG
38 DROP ROLLBACK SEG
39 CREATE TABLESPACE
40 ALTER TABLESPACE
41 DROP TABLESPACE
42 ALTER SESSION
43 ALTER USER
44 COMMIT
45 ROLLBACK
46 SAVEPOINT
47 PL/SQL EXECUTE
48 SET TRANSACTION
49 ALTER SYSTEM
50 EXPLAIN
51 CREATE USER
52 CREATE ROLE
53 DROP USER
54 DROP ROLE
55 SET ROLE
56 CREATE SCHEMA
57 CREATE CONTROL FILE
59 CREATE TRIGGER
60 ALTER TRIGGER
61 DROP TRIGGER
62 ANALYZE TABLE
63 ANALYZE INDEX
64 ANALYZE CLUSTER
65 CREATE PROFILE
66 DROP PROFILE
67 ALTER PROFILE
68 DROP PROCEDURE
70 ALTER RESOURCE COST
71 CREATE MATERIALIZED VIEW LOG
72 ALTER MATERIALIZED VIEW LOG
73 DROP MATERIALIZED VIEW LOG
74 CREATE MATERIALIZED VIEW
75 ALTER MATERIALIZED VIEW
76 DROP MATERIALIZED VIEW
77 CREATE TYPE
78 DROP TYPE
79 ALTER ROLE
80 ALTER TYPE
81 CREATE TYPE BODY
82 ALTER TYPE BODY
83 DROP TYPE BODY
84 DROP LIBRARY
85 TRUNCATE TABLE
86 TRUNCATE CLUSTER
91 CREATE FUNCTION
92 ALTER FUNCTION
93 DROP FUNCTION
94 CREATE PACKAGE
95 ALTER PACKAGE
96 DROP PACKAGE
97 CREATE PACKAGE BODY
98 ALTER PACKAGE BODY
99 DROP PACKAGE BODY
100 LOGON
101 LOGOFF
102 LOGOFF BY CLEANUP
103 SESSION REC
104 SYSTEM AUDIT
105 SYSTEM NOAUDIT
106 AUDIT DEFAULT
107 NOAUDIT DEFAULT
108 SYSTEM GRANT
109 SYSTEM REVOKE
110 CREATE PUBLIC SYNONYM
111 DROP PUBLIC SYNONYM
112 CREATE PUBLIC DATABASE LINK
113 DROP PUBLIC DATABASE LINK
114 GRANT ROLE
115 REVOKE ROLE
116 EXECUTE PROCEDURE
117 USER COMMENT
118 ENABLE TRIGGER
119 DISABLE TRIGGER
120 ENABLE ALL TRIGGERS
121 DISABLE ALL TRIGGERS
122 NETWORK ERROR
123 EXECUTE TYPE
128 FLASHBACK
129 CREATE SESSION
157 CREATE DIRECTORY
158 DROP DIRECTORY
159 CREATE LIBRARY
160 CREATE JAVA
161 ALTER JAVA
162 DROP JAVA
163 CREATE OPERATOR
164 CREATE INDEXTYPE
165 DROP INDEXTYPE
167 DROP OPERATOR
168 ASSOCIATE STATISTICS
169 DISASSOCIATE STATISTICS
170 CALL METHOD
171 CREATE SUMMARY
172 ALTER SUMMARY
173 DROP SUMMARY
174 CREATE DIMENSION
175 ALTER DIMENSION
176 DROP DIMENSION
177 CREATE CONTEXT
178 DROP CONTEXT
179 ALTER OUTLINE
180 CREATE OUTLINE
181 DROP OUTLINE
182 UPDATE INDEXES
183 ALTER OPERATOR
197 PURGE USER_RECYCLEBIN
198 PURGE DBA_RECYCLEBIN
199 PURGE TABLESAPCE
200 PURGE TABLE
201 PURGE INDEX
202 UNDROP OBJECT
204 FLASHBACK DATABASE
205 FLASHBACK TABLE
206 CREATE RESTORE POINT
207 DROP RESTORE POINT
208 PROXY AUTHENTICATION ONLY
209 DECLARE REWRITE EQUIVALENCE
210 ALTER REWRITE EQUIVALENCE
211 DROP REWRITE EQUIVALENCE
}}}
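The ACTION codes above are the same values that show up in v$sql.command_type (that is what the mwidlake posts above walk through), so audit_actions doubles as a lookup table:
{{{
-- what kinds of statements are in the shared pool right now
select a.name, count(*) cnt
from v$sql s, audit_actions a
where s.command_type = a.action
group by a.name
order by cnt desc;
}}}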
{{{
********************************************* INTRODUCTION *********************************************
--Note1
"A Relational Model of Data for Large Shared Data Banks". In this paper, Dr. Codd proposed
the relational model for database systems.
For more information, see E. F. Codd, The Relational Model for Database Management Version 2
(Reading, Mass.: Addison-Wesley, 1990).
--Note2
There are four types of databases:
1) Hierarchical
2) Network
3) Relational <-- Oracle 7 is RDBMS
4) Object Relational <-- Oracle 8 and later
--Note3
1) System Development Life Cycle (5 steps)
- Strategy & Analysis (where ERD is made)
- Design
- Build & Document
- Transition
- Production
2) Data Model (4 steps)
- Model of system in client's mind
- Entity model of client's model
- Table model of entity model
- Tables on disk
3) ER Models
1- Entity (one table)
2- Attribute (columns in a table)
* --> mandatory
o --> optional
3- Relationship (A named association between entities showing optionality and degree)
- - - --> optional element indicating "may be" (optionality)
----- --> mandatory element indicating "must be" (optionality)
crow's foot --> degree element indicating "one or more" (degree)
single line --> degree element indicating "one and only one" (degree)
Each direction of the relationship contains:
- A label, for example, taught by or assigned to
- An optionality, either must be or may be
- A degree, either one and only one or one or more
Note: The term cardinality is a synonym for the term degree.
Each source entity {may be | must be} relationship name {one and only one | one or more} destination
entity.
Note: The convention is to read clockwise.
- Unique Identifiers
A unique identifier (UID) is any combination of attributes or relationships, or both, that serves to
distinguish occurrences of an entity. Each entity occurrence must be uniquely identifiable.
- Tag each attribute that is part of the UID with a number symbol: #
- Tag secondary UIDs with a number sign in parentheses: (#)
********************************************* CHAPTER 1 *********************************************
WRITING BASIC SELECT STATEMENTS
3 things you could do:
Projection
Selection
Joining
# Note: Throughout this course, the words keyword, clause, and statement are used as follows:
- A keyword refers to an individual SQL element.
For example, SELECT and FROM are keywords.
- A clause is a part of a SQL statement.
For example, SELECT employee_id, last_name, ... is a clause.
- A statement is a combination of two or more clauses.
For example, SELECT * FROM employees is a SQL statement.
# Operator Precedence: MDAS
# If any column value in an arithmetic expression is null, the result is null.
# DESCRIBE
********************************************* CHAPTER 2 *********************************************
RESTRICTING AND SORTING DATA
The WHERE clause can compare values in columns, literal values, arithmetic expressions, or functions. It consists of three elements:
- Column name
- Comparison condition
- Column name, constant, or list of values
# Character strings are case sensitive, use UPPER or LOWER for case insensitive search
# The default date display is DD-MON-RR
# An alias cannot be used in the WHERE clause.
Other Comparison Operations:
- between...and...
- in (set)
- like
- is null
# Emphasize that the values specified with the BETWEEN operator in the example are inclusive. Explain
that BETWEEN ... AND ... is actually translated by Oracle server to a pair of AND conditions: (a >=
lower limit) AND (a <= higher limit). So using BETWEEN ... AND ... has no
performance benefits, and it is used for logical simplicity.
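# e.g. WHERE salary BETWEEN 2500 AND 3500 behaves exactly like WHERE salary >= 2500 AND salary <= 3500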
# Explain that IN ( ... ) is actually translated by Oracle server to a set of OR conditions: a =
value1 OR a = value2 OR a = value3. So using IN ( ... ) has no performance
benefits, and it is used for logical simplicity.
# SELECT employee_id, last_name, job_id
FROM employees
WHERE job_id LIKE '%SA\_%' ESCAPE '\'; <--- The ESCAPE option identifies the backslash (\) as the escape character. In the pattern, the escape
character precedes the underscore (_). This causes the Oracle Server to interpret the underscore
literally.
# NULL: you cannot test with = because a null cannot be equal or unequal to any value
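# e.g. WHERE commission_pct IS NULL works; WHERE commission_pct = NULL always returns no rows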
Logical Conditions:
- and
- or
- not
# Order Evaluated Operator:
1 Arithmetic operators
2 Concatenation operator
3 Comparison conditions
4 IS [NOT] NULL, LIKE, [NOT] IN
5 [NOT] BETWEEN
6 NOT logical condition
7 AND logical condition
8 OR logical condition
SELECT last_name, job_id, salary
FROM hr.employees
WHERE job_id = 'SA_REP'
OR job_id = 'AD_PRES'
AND salary > 15000;
is the same as
SELECT last_name, job_id, salary
FROM hr.employees
WHERE job_id = 'SA_REP'
OR (job_id = 'AD_PRES'
AND salary > 15000);
# Override rules of precedence by using parentheses.
# ORDER BY: You can specify an expression, or an alias, or column position as the sort condition.
# Let the students know that the ORDER BY clause is executed last in query execution. It is placed last unless the "FOR UPDATE" clause is used.
# Null values are displayed last for ascending sequences and first for descending sequences.
SELECT last_name, department_id, salary
FROM hr.employees
ORDER BY department_id, salary desc; <-- order by department_id ASC and then by salary DESC
is different from this
SELECT last_name, department_id, salary
FROM hr.employees
ORDER BY department_id desc, salary desc; <-- order by department_id DESC and then by salary DESC
********************************************* CHAPTER 3 *********************************************
SINGLE ROW FUNCTIONS
PART ONE:
There are two distinct types of functions:
- Single-row functions
- Multiple-row functions
Single-row functions:
- Character functions: Accept character input and can return both character and number values
- Number functions: Accept numeric input and return numeric values
- Date functions: Operate on values of the DATE data type (All date functions return a value of DATE data type except the MONTHS_BETWEEN function, which returns a number.)
- Conversion functions: Convert a value from one data type to another
- General functions:
NVL
NVL2
NULLIF
COALESCE
CASE
DECODE
# CHARACTER FUNCTIONS: (can be divided into the following:)
1) Case-Manipulation Functions:
lower LOWER('SQL Course') --> sql course
upper UPPER('SQL Course') --> SQL COURSE
initcap INITCAP('SQL Course') --> Sql Course
2) Character-Manipulation Functions:
concat <karl arao> --> concat(first_name, last_name) --> karlarao
substr <TAYLOR> --> substr(last_name, 1,3) --> tay
substr(last_name, -6,3) --> tay
length <ABEL> --> length(last_name) --> 4
instr <TAYLOR> --> instr(last_name, 'a') --> 2 <-- shows where "a" is
lpad <24000> --> lpad(salary,10,'*') --> *****24000
rpad <24000> --> rpad(salary,10,'*') --> 24000***** --> select last_name, salary/1000, rpad(' ',salary/1000+1, '*') from employees;
trim --> trim('H' FROM 'HelloWorld') --> elloWorld
replace
# NUMBER FUNCTIONS:
round round(45.929, 2) --> 45.93
round(45.929, -1) --> 50
round(45.929, 0) --> 46
trunc trunc(45.929, 2) --> 45.92
trunc(45.929, -1) --> 40
trunc(45.929, 0) --> 45
mod mod(salary, 1000) --> will output the remainder of salary divided by 1000, used to determine if value is ODD/EVEN
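mod(salary, 2) --> 0 if salary is even, 1 if odd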
# DATE FUNCTIONS:
DATE is stored internally as follows:
-------------------------------------
CENTURY YEAR MONTH DAY HOUR MINUTE SECOND
19 94 06 07 5 10 43
ASSUME VALUE IS 07-FEB-99:
months_between months_between(sysdate, hire_date) --> 31.6982407
add_months add_months(hire_date, 6) --> 07-Aug-99
next_day next_day(hire_date, 'Friday') --> 12-Feb-99
last_day last_day(hire_date) --> 28-Feb-99
ASSUME SYSDATE IS 25-JUL-95 (round cutoffs: days 1-15 round down, 16-31 round up; months 1-6 round down, 7-12 round up)
round round(sysdate, 'MONTH') --> 01-Aug-95
round(sysdate, 'YEAR') --> 01-Jan-96
trunc trunc(sysdate, 'MONTH') --> 01-Jul-95
trunc(sysdate, 'YEAR') --> 01-Jan-95
PART TWO:
# CONVERSION FUNCTIONS:
to_char to_char(hire_date, 'MM/YY') --> 06/95 (FOR ALTERING RETRIEVAL FORMAT - FLEXIBLE)
to_char(salary, '$99,999.00') --> $60,000.68 (decimal place rounded to number of places provided if converted TO_CHAR)
to_date to_date('May 24, 1999','fxMonth DD, YYYY') --> TO_DATE to make it a number (just converts it to a date), then format it by TO_CHAR
to_date('01-Jan-90', 'DD-MON-RR')
to_number to_number('123,456.00','999,999.00') --> TO_NUMBER to make it a number (just converts it to a number), then format it by TO_CHAR
SAMPLE FORMAT ELEMENTS OF VALID DATE FORMATS:
SCC or CC Century; server prefixes B.C. date with -
Years in dates YYYY or SYYYY Year; server prefixes B.C. date with -
YYY or YY or Y Last three, two, or one digits of year
Y,YYY Year with comma in this position
IYYY, IYY, IY, I Four, three, two, or one digit year based on the ISO standard
SYEAR or YEAR Year spelled out; server prefixes B.C. date with -
BC or AD B.C./A.D. indicator
B.C. or A.D. B.C./A.D. indicator with periods
Q Quarter of year
MM Month: two-digit value
MONTH Name of month padded with blanks to length of nine characters
MON Name of month, three-letter abbreviation
RM Roman numeral month
WW or W Week of year or month
DDD or DD or D Day of year, month, or week
DAY Name of day padded with blanks to a length of nine characters
DY Name of day; three-letter abbreviation
J Julian day; the number of days since 31 December 4713 B.C.
ELEMENTS OF DATE FORMAT MODEL:
Time elements format the time portion of the date. --> HH24:MI:SS AM --> 15:45:32 PM
Add character strings by enclosing them in double quotation marks. --> DD "of" MONTH --> 12 of OCTOBER
Number suffixes spell out numbers. --> ddspth --> fourteenth
NUMBER FORMAT ELEMENTS (CONVERTING A NUMBER TO THE CHARACTER DATA TYPE):
Element Description Example Result
9 Numeric position (number of 9s determine display width) 999999 1234
0 Display leading zeros 099999 001234
$ Floating dollar sign $999999 $1234
L Floating local currency symbol L999999 FF1234
. Decimal point in position specified 999999.99 1234.00
, Comma in position specified 999,999 1,234
MI Minus signs to right (negative values) 999999MI 1234-
PR Parenthesize negative numbers 999999PR <1234>
EEEE Scientific notation (format must specify four Es) 99.999EEEE 1.234E+03
V Multiply by 10 n times (n = number of 9s after V) 9999V99 123400
B Display zero values as blank, not 0 B9999.99 1234.00
# use "fm" to avoid trailing zeros
SELECT last_name,TO_CHAR(hire_date,'fmDdspth "of" Month YYYY fmHH:MI:SS AM') HIREDATE
FROM employees;
# use "fx" Because the fx modifier is used, an exact match is required and the spaces after the word �May� are not recognized.
SELECT last_name, hire_date
FROM hr.employees
WHERE hire_date = TO_DATE('May 24, 1999', 'fxMonth DD, YYYY');
# The D format returns a value from 1 to 7 representing the day of the week. Depending on the NLS date setting options, the value 1 may represent Sunday or Monday. In the United States, the value 1 represents Sunday.
# There are several new data types available in the Oracle9i release pertaining to time. These include:
TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE,
INTERVAL YEAR TO MONTH, and INTERVAL DAY TO SECOND. These are discussed later in the course.
# RR format
To find employees who were hired prior to 1990, the RR format can be used. Since the current year is
greater than 1999, the RR format interprets two-digit years from 50 to 99 as 1950 to 1999.
The following command, on the other hand, results in no rows being selected because the YY format
interprets the year portion of the date in the current century (2090).
SELECT last_name, TO_CHAR(hire_date, 'DD-Mon-yyyy')
FROM hr.employees
WHERE TO_DATE(hire_date, 'DD-Mon-YY') < '01-Jan-1990'; <-- no values will be retrieved because the year will be interpreted as 2090
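A minimal sketch that makes the difference visible (assumes the current year is 20xx):
{{{
SELECT TO_CHAR(TO_DATE('01-Jan-90', 'DD-Mon-RR'), 'YYYY') rr_year,  -- 1990
       TO_CHAR(TO_DATE('01-Jan-90', 'DD-Mon-YY'), 'YYYY') yy_year   -- 2090
FROM   dual;
}}}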
# GENERAL FUNCTIONS:
nvl last_name, nvl(to_char(commission_pct), 'no commission') <-- looks for NULL, then label it...but first have to TO_CHAR
last_name, nvl(commission_pct, 0) <-- looks for NULL, makes the value "0"
nvl2 nvl2(commission_pct, salary*commission_pct, 0) <-- execute 2nd if not null, 3rd if null
nullif nullif(length(first_name), length(last_name)) <-- if both equal then "NULL", if not then return 1st expression
coalesce coalesce(commission_pct, salary, 10) <-- return 1 if not null, return 2 if 1 is null and 2 is not null, return 3 if all is null, 4..5..6..n
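Putting the general functions above together in one runnable sketch against the HR employees table:
{{{
SELECT last_name,
       NVL(TO_CHAR(commission_pct), 'no commission') comm_label,
       NVL2(commission_pct, salary*commission_pct, 0) comm_amt,
       NULLIF(LENGTH(first_name), LENGTH(last_name)) len_diff,
       COALESCE(commission_pct, salary, 10) first_non_null
FROM   employees;
}}}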
CONDITIONAL EXPRESSIONS:
# The CASE expression is new in the Oracle9i Server release
case SELECT last_name, job_id, salary,
CASE job_id WHEN 'IT_PROG' THEN 1.10*salary
WHEN 'ST_CLERK' THEN 1.15*salary
WHEN 'SA_REP' THEN 1.20*salary
ELSE salary END "REVISED_SALARY" <-- this will be the column name, if there's no ELSE then it will return NULL
FROM employees;
decode SELECT last_name, job_id, salary,
DECODE(job_id, 'IT_PROG', 1.10*salary,
'ST_CLERK', 1.15*salary,
'SA_REP', 1.20*salary,
salary) <-- if there's no default value then it will return NULL
REVISED_SALARY <-- this will be the column name
FROM employees;
SELECT last_name, salary,
DECODE (TRUNC(salary/2000, 0),
0, 0.00,
1, 0.09,
2, 0.20,
3, 0.30,
4, 0.40,
5, 0.42,
6, 0.44,
0.45) TAX_RATE
FROM employees
WHERE department_id = 80;
Monthly Salary Range Rate
$0.00 - 1999.99 00%
$2,000.00 - 3,999.99 09%
$4,000.00 - 5,999.99 20%
$6,000.00 - 7,999.99 30%
$8,000.00 - 9,999.99 40%
$10,000.00 - 11,999.99 42%
$12,000.00 - 13,999.99 44%
$14,000.00 or greater 45%
********************************************* CHAPTER 4 *********************************************
DISPLAYING DATA FROM MULTIPLE TABLES
----------------
--ORACLE SYNTAX
----------------
# CARTESIAN PRODUCT - if the join condition is omitted
select * from
employees a, departments b (20 x 8 rows = 160 rows)
Types of Joins
Oracle Proprietary Joins (8i and prior):
- Equijoin
- Non-equijoin
- Outer join
- Self join
SQL:1999 Compliant Joins:
- Cross joins
- Natural joins
- Using clause
- Full or two-sided outer joins
- Arbitrary join conditions for outer joins
Joins comparing SQL:1999 to Oracle Syntax
Oracle Proprietary --> SQL:1999
- Equijoin --> Natural / Inner Join
- Outer Join --> Left Outer Join
- Self join --> Join On
- Non Equijoin --> Join Using
- Cartesian Product --> Cross Join
# EQUIJOIN (a.k.a simple join / inner join)
SELECT last_name, employees.department_id, department_name
FROM employees, departments
WHERE employees.department_id = departments.department_id
AND last_name = 'Matos';
SELECT e.employee_id, e.last_name, e.department_id, d.department_id, d.location_id <-- WITH ALIAS
FROM employees e , departments d
WHERE e.department_id = d.department_id;
SELECT e.last_name, d.department_name, l.city <-- JOINING MORE THAN TWO TABLES (n-1)
FROM employees e, departments d, locations l
WHERE e.department_id = d.department_id
AND d.location_id = l.location_id;
--> to know how many tables to join, "n-1" (if you're joining 4 tables then you need 3 joins)
# NON-EQUIJOIN
SELECT e.last_name, e.salary, j.grade_level
FROM employees e, job_grades j
WHERE e.salary
BETWEEN j.lowest_sal AND j.highest_sal;
# OUTER JOIN (Place the outer join symbol following the name of the column in the table without the matching rows - where you want it NULL)
SELECT e.employee_id, e.last_name, e.department_id, d.department_id, d.location_id <-- GRANT DOES NOT HAVE A DEPARTMENT
FROM employees e , departments d
WHERE e.department_id = d.department_id (+);
SELECT e.last_name, d.department_name, l.city <-- CONTRACTING DEPARTMENT DOES NOT HAVE ANY EMPLOYEES
FROM employees e, departments d, locations l
WHERE e.department_id (+) = d.department_id
AND d.location_id (+) = l.location_id;
--> You use an outer join to also see rows that do not meet the join condition.
--> The outer join operator can appear on only one side of the expression - the side that has information missing. It returns those rows from one table that have no direct match in the other table.
--> A condition involving an outer join cannot use the IN operator or be linked to another condition by the OR operator.
--> The UNION operator works around the issue of being able to use an outer join operator on one side of the expression. The ANSI full outer join also allows you to have an outer join on both sides of the expression.
# SELF JOIN
SELECT worker.last_name || ' works for ' || manager.last_name
FROM employees worker, employees manager
WHERE worker.manager_id = manager.employee_id;
-------------------
--SQL: 1999 SYNTAX
-------------------
# CROSS JOIN
select * from employees <-- result is Cartesian Product
cross join departments;
# NATURAL JOIN
select * from employees <-- selects rows from the two tables that have equal values in all "matched columns" (the same name & data type)
natural join departments;
# USING (similar to equijoin, but shorter code than "ON")
SELECT e.employee_id, e.last_name, d.location_id
FROM employees e
JOIN departments d
USING (department_id);
WHERE e.department_id = 90; <-- CAN'T DO THIS, do not use a "table name, alias, or qualifier" in the referenced columns ORA-25154: column part of USING clause cannot have qualifier
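The working form of that filter, for reference (a quick sketch against the same tables; the USING column must stay unqualified):
{{{
SELECT e.employee_id, e.last_name, d.location_id
FROM employees e
JOIN departments d
USING (department_id)
WHERE department_id = 90;
}}}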
select * <-- three way join
from employees a
join departments b
using (department_id)
join locations c
using (location_id);
# ON (similar to equijoin)
SELECT employee_id, city, department_name <-- three way join
FROM employees e
JOIN departments d
ON (d.department_id = e.department_id)
JOIN locations l
ON (d.location_id = l.location_id);
# LEFT OUTER JOIN
SELECT e.last_name, e.department_id, d.department_name
FROM employees e
LEFT OUTER JOIN departments d
ON (e.department_id = d.department_id);
This query retrieves all rows in the EMPLOYEES table (the left table), even if there is no match in the DEPARTMENTS table.
This query was completed in earlier releases as follows:
SELECT e.last_name, e.department_id, d.department_name
FROM hr.employees e, hr.departments d
WHERE e.department_id = d.department_id (+); -- plus sign will have null, return all emp
# RIGHT OUTER JOIN
SELECT e.last_name, e.department_id, d.department_name
FROM employees e
RIGHT OUTER JOIN departments d
ON (e.department_id = d.department_id);
This query retrieves all rows in the DEPARTMENTS table (the right table), even if there is no match in the EMPLOYEES table.
This query was completed in earlier releases as follows:
SELECT e.last_name, e.department_id, d.department_name
FROM hr.employees e, hr.departments d
WHERE e.department_id(+) = d.department_id ; -- plus sign will have null, return all dept
# FULL OUTER JOIN
SELECT e.last_name, e.department_id, d.department_name <-- SQL :1999 Syntax
FROM employees e
FULL OUTER JOIN departments d
ON (e.department_id = d.department_id);
SELECT e.last_name, e.department_id, d.department_name <-- Oracle Syntax
FROM employees e, departments d
WHERE e.department_id (+) = d.department_id
UNION
SELECT e.last_name, e.department_id, d.department_name
FROM employees e, departments d
WHERE e.department_id = d.department_id (+);
********************************************* CHAPTER 5 *********************************************
AGGREGATING DATA USING GROUP FUNCTIONS
Types of Group Functions:
- AVG
- COUNT
- MAX
- MIN
- STDDEV
- SUM
- VARIANCE
# All group functions ignore null values. To substitute a value for null values, use the NVL, NVL2,
or COALESCE functions.
# AVG, SUM, VARIANCE, and STDDEV functions can be used only with numeric data types.
# The NVL function forces group functions to include
null values
select avg(nvl(a.commission_pct, 0)) from employees a;
# You cannot use a column alias in the GROUP BY clause.
# The GROUP BY column does not have to be in the
SELECT list.
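For example (a quick sketch; department_id drives the grouping but never appears in the SELECT list):
{{{
select avg(salary)
from   employees
group by department_id;
}}}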
******************************************************************************************
Types of subqueries (today I couldn't think of the term "inline view")
* subquery ( subselect used in where clause)
* correlated subquery (subselect uses fields from outer query)
* scalar subquery (subselect in select list)
* inline views (subselect in from clause)
******************************************************************************************
}}}
<<showtoc>>
! sql developer , sqlcl
https://www.oracle.com/database/technologies/appdev/sql-developer.html
! data grip
https://www.jetbrains.com/datagrip/?fromMenu
! dbeaver community/pro
https://dbeaver.io/download/
https://dbeaver.com/
https://www.slant.co/versus/198/210/~dbeaver_vs_datagrip
! robomongo
https://robomongo.org/
.
{{{
SQL Operations (ROW, SET) (Doc ID 100848.1)
Oracle Database - Enterprise Edition - Version 9.2.0.8 and later
All Platforms
PURPOSE
The document describes the SQL Operations (ROW, SET) and explains some of the ROW operations with examples.
SCOPE
This article will be useful for Oracle DBA(s) and Developers.
DETAILS
SQL Operations:
To interpret the Explain Plan and correctly evaluate the SQL Tuning options, it is necessary to understand the differences between the available database operations. The operations can be classified as :
· Row operations
· Set operations
ROW Operations
The Row Operations are executed one row at a time. It will be executed at the FETCH stage, if there is no set operation involved. The user can see the first result before the last row is fetched. Example : FULL TABLE SCAN
AND-EQUAL
CONCATENATION
INDEX UNIQUE SCAN
INDEX RANGE SCAN
HASH JOIN
NESTED LOOPS
TABLE ACCESS BY ROWID
TABLE ACCESS CLUSTER
TABLE ACCESS FULL
TABLE ACCESS HASH
Some of the ROW operations are described in detail:
AND-EQUAL:
This merges sorted lists of values returned by indexes. It returns the list of values that are common to both lists (ROWIDs found in both indexes). This is used for merges of nonunique indexes and range scans of unique indexes.
Example:
NOTE: In the images and/or the document content below, the user information and data used represents fictitious data from the Oracle sample schema(s) or Public Documentation delivered with an Oracle database product. Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.
Select empno,state,zipcode
From emp
Where state='GA'
And zipcode=65434
Explain Plan
TABLE ACCESS BY ROWID EMP
AND-EQUAL
INDEX RANGE SCAN EMP$STATE
INDEX RANGE SCAN EMP$ZIPCODE
CONCATENATION:
The Concatenation does a UNION ALL of result sets.
Example:
Select empno, state, zipcode
From emp
Where (state='KS' and zipcode=45678)
Or (state='MD' and zipcode=87746);
Explain Plan
CONCATENATION
TABLE ACCESS BY ROWID EMP
AND-EQUAL
INDEX RANGE SCAN EMP$STATE
INDEX RANGE SCAN EMP$ZIPCODE
TABLE ACCESS BY ROWID EMP
AND-EQUAL
INDEX RANGE SCAN EMP$STATE
INDEX RANGE SCAN EMP$ZIPCODE
HASH JOIN:
This operation joins tables by building an in-memory hash table from one of the tables and then using a hashing function to locate the matching join rows in the second table.
Example:
Select emp.empno
From emp, dept
Where emp.deptno=dept.deptno
And emp.state='NY';
Explain Plan
HASH JOIN
TABLE ACCESS FULL EMP
TABLE ACCESS FULL DEPT
INDEX RANGE SCAN:
The index range scan selects a range of values from an index. The index can be either unique or non-unique. The range scans are used with one of the conditions:
· A range operator is used (such as < or >)
· The BETWEEN clause is used
· A search string with a wild card is used (such as B%)
· Only part of a concatenated index is used (such as the leading column of a composite index)
Example:
Select empno,state
From emp
Where deptno > 20;
Explain Plan
TABLE ACCESS BY ROWID EMP
INDEX RANGE SCAN EMP$DEPTNO
SET Operations
The Set operations are executed on a result set of rows. It will be executed at EXECUTE stage when the cursor is opened. The user cannot see the first result until all rows are fetched and processed.
Example: FULL TABLE SCAN with GROUP BY clause.
FOR UPDATE
HASH JOIN
INTERSECTION
MERGE JOIN
MINUS
SORT AGGREGATE
SORT GROUP BY
SORT UNIQUE
SORT JOIN
SORT ORDER BY
UNION
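-- not from the note: a hedged sketch in the same style showing a SET operation,
-- where the SORT GROUP BY runs at EXECUTE and no rows return until the full set is processed
Select deptno, count(*)
From emp
Group by deptno;
Explain Plan
SORT GROUP BY
TABLE ACCESS FULL EMP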
}}}
.
{{{
Issues arise from:
* coding
* data mapping / model
* logic
Delays come from:
* poor requirements gathering
* technical debt
* poor project management
}}}
<<<
Starting 11gR1 Oracle introduced Testcase Builder (TCB) as part of the Oracle Database Fault Diagnosability Infrastructure (ADRCI, DBMS_HM and DBMS_SQLDIAG just to keep it simple). Basically it’s a set of APIs to generate a testcase starting from either a SQL ID or a SQL text.
<<<
! howto using the API
https://mauro-pagano.com/2015/07/09/how-to-get-a-sql-testcase-with-a-single-step/
! howto using SQLD360
{{{
The sqld360 does generate scripts to build a testcase.
https://github.com/karlarao/sqldb360/blob/master/sql/sqld360_5e_tcb.sql
Plus the standalone sql file
Does not do the CBO env though.
And has an option to generate TCB but I do not know if it has been tested.
}}}
check out nigelbayliss scripts and testcases at https://github.com/oracle/oracle-db-examples/tree/master/optimizer
-- some stories by gverma
Deleting statistics or/and dropping indexes on Global temporary tables can help too https://blogs.oracle.com/gverma/entry/deleting_statistics_orand_drop
10g optimizer case study: Runtime Execution issues with View merging https://blogs.oracle.com/gverma/entry/10g_optimizer_case_study_runti
A tuning case study: The goofy optimizer (9i.x RDBMS ) https://blogs.oracle.com/gverma/entry/a_tuning_case_study_the_goofy_1
Yet Another Case Study: The over-commit trap https://blogs.oracle.com/gverma/entry/yet_another_case_study_the_ove_1
An Application Tuning Case Study: The Deadly Deadlock https://blogs.oracle.com/gverma/entry/an_application_tuning_case_stu_1
A SQL Tuning Case Study: Could we K.I.S.S. Please? https://blogs.oracle.com/gverma/entry/a_sql_tuning_case_study_could_1
When Conventional Thinking Fails: A Performance Case Study in Order Management Workflow customization https://blogs.oracle.com/gverma/entry/when_conventional_thinking_fai_1
Workflow performance case study: Dont Repeat History, Learn from it https://blogs.oracle.com/gverma/entry/workflow_performance_case_stud_1
http://iamsys.wordpress.com/2012/03/15/oracle-histogram-causing-bad-sql-plan/
<<showtoc>>
! books
SQL performance explained https://use-the-index-luke.com/sql/table-of-contents
https://www.amazon.com/Programming-Oracle-Triggers-Procedures-Prentice
https://www.amazon.com/SQL-Antipatterns-Programming-Pragmatic-Programmers-eboo
! articles
https://www.datacamp.com/community/tutorials/sql-tutorial-query#gs.ePzPPkU
check out [[Data Model, Design]]
<<showtoc>>
! video tutorials
!! SQL Developer Data Modeler Just what you need
* this shows brewery data model ala "untapped"
https://www.youtube.com/watch?time_continue=3707&v=NfrUy-TYP_8
!! Database Design Tutorial
https://www.youtube.com/watch?v=I_rxqSJAj6U
!! Data Modeling-Oracle SQL Developer Data Modeler
* this shows "items" and "item category" data model
Data Modeling-Oracle SQL Developer Data Modeler-Part 1 to 3 https://www.youtube.com/watch?v=pQdVhyBlP_s&list=PLRchQ6rKGoij_kf9Sfm45X071t-zlINdR
Data Modeling-Oracle SQL Developer Data Modeler-Part 4 https://www.youtube.com/watch?v=0d2rLrKYPzA
!! Introduction to SQL Developer Data Modeler (shows UK example)
* this shows student grading data model
https://www.youtube.com/watch?v=wsVh1zLmQb0
!! ER DIAGRAM USING MS VISIO
ER DIAGRAM USING MS VISIO 10 part_1 https://www.youtube.com/watch?v=unSWF7IR2nw&list=PLC2183520018E70C1
ER DIAGRAM USING MS VISIO 10 part_2 https://www.youtube.com/watch?v=qimT1FTJzK8&list=PLC2183520018E70C1&index=2
http://usmannoshahi.blogspot.com/2014/06/auto-increment-trigger-from-sql.html
! step by step hands-on
!! logical model
!!! create new design and save
[img(80%,80%)[https://i.imgur.com/GgNKoJo.png]]
[img(80%,80%)[https://i.imgur.com/BGqlnn4.png]]
!!! edit model properties
[img(80%,80%)[https://i.imgur.com/n7Cp0HH.png]]
[img(80%,80%)[https://i.imgur.com/IJyPFo2.png]]
[img(80%,80%)[https://i.imgur.com/QZWUI2L.png]]
[img(80%,80%)[https://i.imgur.com/3fd8r3C.png]]
[img(80%,80%)[https://i.imgur.com/ALCFeyu.png]]
[img(80%,80%)[https://i.imgur.com/80Mjm5k.png]]
!!! edit logical model properties
[img(80%,80%)[https://i.imgur.com/7AJ33Rc.png]]
[img(80%,80%)[https://i.imgur.com/toKgR3z.png]]
!!! create entity
[img(80%,80%)[https://i.imgur.com/SfAV1a2.png]]
[img(80%,80%)[https://i.imgur.com/5Un8x0J.png]]
[img(80%,80%)[https://i.imgur.com/70hBtVr.png]]
!!! edit domain administration, enter data types
<<<
whenever we create a database we mainly use 3 data types:
* variable character
* number
* date
<<<
[img(80%,80%)[https://i.imgur.com/VkUErZ5.png]]
[img(80%,80%)[https://i.imgur.com/4PNGGKY.png]]
[img(80%,80%)[https://i.imgur.com/64P6jNI.png]]
[img(80%,80%)[https://i.imgur.com/LPE3C9X.png]]
[img(80%,80%)[https://i.imgur.com/83znKNp.png]]
!!! create display and edit notation
[img(80%,80%)[https://i.imgur.com/j5xL8if.png]]
[img(80%,80%)[https://i.imgur.com/2FwhOer.png]]
!!! edit relationships, create PK - FK
* the foreign key will be created as a new column if it doesn't exist on the target table
* if the column already exists on the target table, the FK column will be appended with a sequence number
** to fix this, you need to delete the relationship and the duplicate column
[img(80%,80%)[https://i.imgur.com/kECYnbM.png]]
* uncheck source optional
* CASCADE
[img(80%,80%)[https://i.imgur.com/gGjmuc5.png]]
[img(80%,80%)[https://i.imgur.com/eryzCwf.png]]
!!! create Unique key
[img(80%,80%)[https://i.imgur.com/P3vKZjk.png]]
* to set DBID as Unique key, click on "Unique Identifiers"
[img(80%,80%)[https://i.imgur.com/uW93xhc.png]]
* click on plus, and edit the new entry, on "attributes and relations" select DBID
[img(80%,80%)[https://i.imgur.com/xcQ2PYX.png]]
* on General name it "dbid UK" and select "Unique Key"
[img(80%,80%)[https://i.imgur.com/yXCZCt3.png]]
[img(80%,80%)[https://i.imgur.com/yYJrbBd.png]]
!!! auto generate sequences with trigger
!! relational model
!!! engineer to relational model
[img(80%,80%)[https://i.imgur.com/mzO396H.png]]
!!! Generate DDL
[img(80%,80%)[https://i.imgur.com/ipm4mGg.png]]
* select 12c database, click OK
[img(80%,80%)[https://i.imgur.com/j1vwUKE.png]]
* click Generate
[img(80%,80%)[https://i.imgur.com/uKrkKV3.png]]
* make sure no errors on generating DDL
[img(80%,80%)[https://i.imgur.com/3UsTVn8.png]]
!! reverse engineer
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldevdm/r30/datamodel2moddm/datamodel2moddm.htm
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldevdm/r20/updatedb/UpdateDB.html
http://www.slideshare.net/kgraziano/reverse-engineering-an-existing-database-using-oracle-sql-developer-data-modeler
[img(80%,80%)[https://i.imgur.com/wE96nms.png]]
* select "New Relational Model"
[img(80%,80%)[https://i.imgur.com/06vpzKX.png]]
.
[img(80%,80%)[https://i.imgur.com/6WFuZQZ.png]]
.
[img(80%,80%)[https://i.imgur.com/i1tc9t7.png]]
.
[img(80%,80%)[https://i.imgur.com/gXqKhFp.png]]
* this is the output of Reverse Engineer
[img(80%,80%)[https://i.imgur.com/W9Bd60N.png]]
* this is the original
[img(80%,80%)[https://i.imgur.com/seoO7GD.png]]
http://www.dpriver.com/pp/sqlformat.htm
http://elentok.com/sql <-- allows collapsing the multiple subqueries of the SQL
! datagenerator
Swingbench uses data generator, and you can use it separately
http://www.dominicgiles.com/datagenerator.html
! quicksql
You can also use quicksql https://docs.oracle.com/database/apex-18.1/AEUTL/using-quick-SQL.htm#AEUTL-GUID-A1308899-AA1D-42EA-8CAE-B128366538FE
Defining new data structures using Quick SQL https://www.youtube.com/watch?v=Ux2eISE9cSQ
! meta360
BTW you can use the meta360 to get all DDL of a schema without the data. Then using that info you can feed it to quick SQL to generate the insert scripts
https://github.com/carlos-sierra/meta360
! quickplsql
https://github.com/mortenbra/quick-plsql
<<<
https://apex.oracle.com/pls/apex/f?p=QUICKPLSQL:HOME
<<<
http://www.evernote.com/shard/s48/sh/4e9718c6-5881-4106-8822-e291a2523b9f/e1d04aa0e9d04b79769bfc57fff373f8
! Possible reasons
* ''CPU starvation'' - In AWR/Statspack, the "Captured SQL.. CPU" section is pulled from sum(cpu_time_delta) of dba_hist_sqlstat and divided by the time model 'DB CPU', which only accounts for the "real CPU cycles". That gives you a lower value for the denominator, since most of the CPU time is spent on the run queue and is not accounted for
* ''Module calling SQL'' or ''SQL calling module calling SQL'' - in this scenario the __module__ CPU time is roughly equal to that of the called __SQL__, causing the numbers to double and exceed the accounted real CPU cycles
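To see where that percentage comes from, here's a rough sketch of the arithmetic (my assumption of how the ratio is derived, not the exact AWR report query; :begin_snap, :end_snap, :dbid, :inst are bind placeholders):
{{{
select round(100 * cap.sql_cpu / nullif(tm.db_cpu, 0), 1) captured_sql_pct
from (select sum(cpu_time_delta) sql_cpu              -- numerator: per-SQL CPU from dba_hist_sqlstat
        from dba_hist_sqlstat
       where snap_id = :end_snap and dbid = :dbid and instance_number = :inst) cap,
     (select e.value - b.value db_cpu                 -- denominator: time model 'DB CPU' delta
        from dba_hist_sys_time_model b, dba_hist_sys_time_model e
       where b.stat_name = 'DB CPU' and e.stat_name = 'DB CPU'
         and b.snap_id = :begin_snap and e.snap_id = :end_snap
         and b.dbid = :dbid and e.dbid = :dbid
         and b.instance_number = :inst and e.instance_number = :inst) tm;
}}}
Under starvation the 'DB CPU' denominator understates real demand while sum(cpu_time_delta) keeps accumulating, so the ratio can exceed 100%.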
! Troubleshooting
* it could really be just a CPU starvation issue
* it could really be a double counting issue
* or it could be both
** if it shows the CPU Wait, then it's CPU starvation
** if it doesn't show the CPU Wait, determine if the CPU starvation happens in a fly by manner (not a sustained workload) ELSE it could just be a double counting issue
''Ultimately you have to triage with fine grained sample intervals (snapper) and with OS data, because the spikes may be hidden from the normalized DBA_HIST_SQLSTAT data''
but
I won't totally depend on this when troubleshooting; this section of AWR/Statspack is just a means of knowing what the top consuming SQLs are, and I've got a script called awr_topsqlx http://goo.gl/YIkQ7 which shows the AAS for a particular SQL_ID in a time series manner. If there is double counting, it may show both the calling PL/SQL and the SQL_ID with high AAS, and that's a good thing because both of them are worth investigating.
Also
The "PL/SQL lock timer" on top 5 timed events is just a Statspack thing, in AWR you may see it as "inactive session" if the job got killed or nothing at all if the job finished
! 1) CPU starvation
<<<
the workload used here is 256 sessions of IOsaturationtoolkit-v2 https://www.dropbox.com/s/6bwcm5n22b22uoj/IOsaturationtoolkit-v2.tar.bz2, load average peak is 71 on 8 CPU box..
''this one says Captured SQL account for 108.0%''
{{{
^LSQL ordered by CPU Time DB/Inst: DW/dw Snaps: 22768-22769
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - CPU Time as a percentage of Total DB CPU
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for 108.0% of Total CPU Time (s): 146 <-- this is 146 (real CPU cycles)
-> Captured PL/SQL account for 0.8% of Total CPU Time (s): 146
CPU CPU per Elapsed
Time (s) Executions Exec (s) %Total Time (s) %CPU %IO SQL Id
---------- ------------ ---------- ------ ---------- ------ ------ -------------
154.5 10 15.45 105.7 90,798.7 .2 57.7 1qnnkbgf13csf <-- this is 154.5
Module: SQL*Plus
Select count(*) from owitest
1.4 49 0.03 0.9 1.4 97.7 .0 fgawnchwmysj7
Module: ASH Viewer
SELECT * FROM V$ACTIVE_SESSION_HISTORY WHERE SAMPLE_ID > :1
0.4 12 0.04 0.3 96.8 .5 99.5 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
0.2 3 0.07 0.1 0.3 77.2 21.7 2gwy69qkwkhcz
Module: sqlplus@desktopserver.local (TNS V1-V3)
select sample_time, count(sid) from ( select to_char(ash.sample_time,'MM/DD
/YY HH24:MI:SS') sample_time, ash.session_id sid, ash.session_serial#
serial#, ash.user_id user_id, ash.program, ash.sql_id, ash.s
ql_plan_hash_value, sum(decode(ash.session_state,'ON CPU',1,0)) "CPU",
0.2 24 0.01 0.1 16.1 1.2 92.0 2b064ybzkwf1y
Module: OEM.SystemPool
BEGIN EMD_NOTIFICATION.QUEUE_READY(:1, :2, :3); END;
0.2 5 0.04 0.1 0.2 98.0 .0 9xcgnpkwktzy9
Module: sqlplus@desktopserver.local (TNS V1-V3)
select sample_time, count(sid) from ( select to_char(ash.sample_time,'MM/DD
/YY HH24:MI:SS') sample_time, ash.session_id sid, ash.session_serial#
serial#, ash.user_id user_id, ash.program, ash.sql_id, ash.s
ql_plan_hash_value, sum(decode(ash.session_state,'ON CPU',1,0)) "CPU",
0.1 2 0.07 0.1 7.6 1.9 24.2 b4qw7n9wg64mh
INSERT /*+ APPEND LEADING(@"SEL$F5BB74E1" "H"@"SEL$2" "A"@"SEL$1") USE_NL(@"SE
}}}
<<<
! 2) ''Module calling SQL'' or ''SQL calling module calling SQL''
I've done some detailed test cases which are available here (click on each of the tiddlers), the workload is this [[CPU spike 1min idle interval]] which kinda matches the load on the first oracle-l post where it's got PL/SQL lock timer and frequent fast SQLs
* [[doublecounting-test0- 1st encounter]]
* [[doublecounting-test1-killed]]
* [[doublecounting-test2-finished]]
each of the instrumentation outputs is correlated by the time the load spike occurred, but here are the things to focus on in each of them:
*collectl - check the columns "User" and "Run" and "Avg1"
*ASH - the number before the "CPU".. that's the number of AAS CPU it consumed
*snapper - on my test cases the snap interval is 1sec (see the snapper commands I used here [[CPU spike 1min idle interval]]).. I need this tool to catch the sudden load spike every 1 min; my server only has 8 CPUs and the workload consumes 16 CPUs, so if it says "1600% ON CPU" that means it consumed 16 CPUs (1600/100)
*gas - the number of sessions and AVG_ETIME which is the elapsed time per execute
*sql_detail - the CPU_WAIT_EXEC which is the CPU WAIT
*AWR - the Top 5 Timed Events and the "Captured SQL account for", and notice the Executions if it's zero (killed) or has a value (finished)
*Statspack - the Top 5 Timed Events and the "Captured SQL account for", and notice the Executions if it's zero (killed) or has a value (finished)
and below is the summary
<<<
test case used is a modified version of cputoolkit to simulate the high "PL/SQL lock timer" on Statspack https://www.dropbox.com/s/je6eafm1a9pnfpk/cputoolkit.tar.bz2
see the [[CPU spike 1min idle interval]] for the details of the test case script used
{{{
with double counting
-> Captured SQL accounts for 179.8% of Total DB CPU <-- 179.8%
-> SQL reported below exceeded 1.0% of Total DB CPU
CPU CPU per Elapsd Old
Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
130.06 92 1.41 86.1 222.94 24,669,380 175009430
Module: sqlplus@desktopserver.local (TNS V1-V3)
SELECT /*+ cputoolkit ordered us
e_nl(b) use_nl(c) use_nl(d) full
(a) full(b) full(c) full(d) */ COUNT(*) FROM SYS.OBJ$ A, SYS.OBJ
$ B, SYS.OBJ$ C, SYS.OBJ$ D WHERE A.OWNER# = B.OWNER# AND B.OWNE
119.00 14 8.50 78.7 251.14 24,164,729 1927962500
Module: sqlplus@desktopserver.local (TNS V1-V3)
declare rcount number; begin -- 600/60=10 minute
s of workload for j in 1..1800 loop -- lotslios
by Tanel Poder select /*+ cputoolkit ordered
use_nl(b) use_nl(c) use_nl(d)
19.18 46 0.42 12.7 19.81 0 2248514484
Module: sqlplus@desktopserver.local (TNS V1-V3)
select to_char(start_time,'DD HH:MI:SS'), samples,
--total, --waits, --cpu, round(fpct * (tot
al/samples),2) fasl, decode(fpct,null,null,first) first,
round(spct * (total/samples),2) sasl, decode(spct,
1.60 277 0.01 1.1 1.87 0 2550496894
Module: sqlplus@desktopserver.local (TNS V1-V3)
select value ||'/'||(select instance_name from v$instance) ||'_
ora_'|| (select spid||case when traceid is not null then
'_'||traceid else null end from v$process where
addr = (select paddr from v$session
without double counting.. the executions count is zero, so the job must have been cancelled or not yet finished
-> Captured SQL accounts for 99.0% of Total DB CPU <-- 99%
-> SQL reported below exceeded 1.0% of Total DB CPU
CPU CPU per Elapsd Old
Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
198.34 144 1.38 86.1 300.59 37,436,409 175009430
Module: sqlplus@desktopserver.local (TNS V1-V3)
SELECT /*+ cputoolkit ordered us
e_nl(b) use_nl(c) use_nl(d) full
(a) full(b) full(c) full(d) */ COUNT(*) FROM SYS.OBJ$ A, SYS.OBJ
$ B, SYS.OBJ$ C, SYS.OBJ$ D WHERE A.OWNER# = B.OWNER# AND B.OWNE
179.49 0 77.9 281.65 35,710,858 1927962500
Module: sqlplus@desktopserver.local (TNS V1-V3)
declare rcount number; begin -- 600/60=10 minute
s of workload for j in 1..1800 loop -- lotslios
by Tanel Poder select /*+ cputoolkit ordered
use_nl(b) use_nl(c) use_nl(d)
6.22 164 0.04 2.7 6.42 0 2005132824
Module: sqlplus@desktopserver.local (TNS V1-V3)
select to_char(sysdate,'MM/DD/YY HH24:MI:SS') tm, a.inst_id inst
, sid, substr(program,1,19) prog, a.username, b.sql_id, child_nu
mber child, plan_hash_value, executions execs, (elapsed_time/dec
ode(nvl(executions,0),0,1,executions))/1000000 avg_etime, sql_te
}}}
<<<
! original question from oracle-l
<<<
http://www.freelists.org/post/oracle-l/DB-CPU-is-much-lower-than-CPU-Time-Reported-by-TOP-SQL-consumers
{{{
* CPU CPU per Elapsd Old
Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash
Value
---------- ------------ ---------- ------ ---------- --------------- ----------
4407.08 20,294 0.22 51.6 10006.66 1,228,369,784 3703299877
Module: JDBC Thin Client
3943.14 157,316 0.03 46.2 6915.60 1,034,202,723 1127338565
Module: sel_ancomm_vss_06.tsk@c2aixprod (TNS V1-V3)
2358.20 269,711 0.01 27.6 4095.76 1,508,308,542 1995656981
Module: sel_zuteiler_alert.tsk@c2aixprod (TNS V1-V3)
1305.21 9,932 0.13 15.3 2483.90 331,327 1310406159
Module: sel_verwaltung.tsk@c2aixprod (TNS V1-V3)*
These 4 statements already adds up to 12013 CPU Seconds and the DB CPU is 8464 seconds.
Also look this text from top sql statmenet section:
-> Total DB CPU (s): 8,539
*-> Captured SQL accounts for 232.5% of Total DB CPU*
-> SQL reported below exceeded 1.0% of Total DB CPU
Capture SQL is 232% DB CPU! How can this be possible?
}}}
<<<
https://fiddles.io/#
http://sqlfiddle.com/
https://www.db-fiddle.com/
https://akdora.wordpress.com/2009/02/18/rules-of-precedence-in-sql-where-clause/
https://www.tutorialspoint.com/plsql/plsql_operators_precedence.htm
<<showtoc>>
! SQL server on linux - announcement
https://blogs.microsoft.com/blog/2016/03/07/announcing-sql-server-on-linux/#sm.000ie7pk911due9py3s1m58yce5xt
https://techcrunch.com/2016/11/16/microsofts-sql-server-for-linux-is-now-available-for-testing/
https://www.microsoft.com/en-us/sql-server/sql-server-vnext-including-Linux
https://blogs.technet.microsoft.com/dataplatforminsider/2016/11/16/announcing-sql-server-on-linux-public-preview-first-preview-of-next-release-of-sql-server/
https://blogs.technet.microsoft.com/dataplatforminsider/2016/11/16/announcing-the-next-generation-of-databases-and-data-lakes-from-microsoft/
<<<
SQL Server 2016 SP1
We are announcing SQL Server 2016 SP1 which is a unique service pack – for the first time we introduce consistent programming model across SQL Server editions. With this model, programs written to exploit powerful SQL features such as in-memory OLTP, in-memory columnstore analytics, and partitioning will work across Enterprise, Standard and Express editions.
<<<
https://cloudblogs.microsoft.com/sqlserver/2016/12/16/sql-server-on-linux-how-introduction/
! vscode , .NET MVC
download https://code.visualstudio.com/docs/introvideos/overview
https://www.microsoft.com/en-us/sql-server/developer-tools
https://channel9.msdn.com/Tags/sql+server?sort=viewed
https://gitter.im/mssqldev/Lobby
http://discuss.emberjs.com/t/are-developers-creating-ember-apps-outside-the-context-of-rails/283/65
http://stackoverflow.com/questions/25916381/how-am-i-supposed-to-persist-to-sql-server-db-using-ember-js-and-asp-net-mvc
http://www.codeproject.com/Articles/511031/A-sample-real-time-web-application-using-Ember-js
! SQL server on linux - preview
vm template https://azure.microsoft.com/en-us/marketplace/partners/microsoft/sqlservervnextonredhatenterpriselinux72/
sql-cli https://www.microsoft.com/en-us/sql-server/developer-get-started/node-rhel
https://portal.azure.com
!! docs
https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-get-started-tutorial
https://www.npmjs.com/package/sql-cli
https://gitter.im/mssqldev/Lobby
! HOWTO
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-get-started
https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-hero-tutorial
! azure pricing calculator
https://azure.microsoft.com/en-us/pricing/calculator/?tduid=(2835563d6d6ab27e429813b35ec8211b)(81561)(2130923)(0400ie1s1jpf)()
! oracle cloud vs azure
<<<
So I didn't know that I had to select the "Public Cloud Services - US", I had to watch this youtube video https://www.youtube.com/watch?v=tCVvtn3M4c4 , and this youtube video https://www.youtube.com/watch?v=cBBqrRTaMDw , "Identity Domain" , "Welcome to Oracle Cloud"
they also released the preview of SQL server (vNext CTP1) on Linux and made the following database features "in-memory OLTP, in-memory columnstore analytics, and partitioning" available across Enterprise, Standard and Express editions (that's for SQL Server 2016 SP1). Matrix here https://technet.microsoft.com/.../windows/cc645993(v=sql.90) , the blog here https://blogs.technet.microsoft.com/.../announcing-the.../ . That's a very good move to compete w/ Oracle database in terms of price point and features. Also Azure is very easy to use compared to Oracle Cloud and I can see Microsoft gearing towards developer happiness w/ this site https://www.microsoft.com/.../sql.../developer-get-started and the vscode https://code.visualstudio.com/docs/introvideos/overview and this https://gitter.im/mssqldev/Lobby it feels less enterprisey and more of fun w/ genuine contributions by the community
Microsoft just released SQL Server 2016 SP1 1) and at the same time moved most of the Enterprise features to all editions (Standard, Express) for this version 2). It's gonna be hard to sell Oracle options like partitioning and in-memory when they're free in SQL Server.
1) https://blogs.technet.microsoft.com/dataplatforminsider/2016/11/16/sql-server-2016-service-pack-1-generally-available/
2) https://technet.microsoft.com/en-us/windows/cc645993(v=sql.90)
<<<
! cloud UI
https://builtwith.com/?https%3a%2f%2fportal.azure.com <- built w/ ASP.NET MVC
https://builtwith.com/cloud.oracle.com <- built w/ J2EE and Foundation
https://builtwith.com/?https%3a%2f%2fcloud.digitalocean.com <- built w/ rails
We have this https://github.com/mauropagano/sqld360/blob/master/sql/sqld360_1d_standalone.sql
And the Kerry link that you sent
We used this before on benchmarking some OBIEE SQLS https://www.dropbox.com/s/t02ysug2t1nufxq/runbenchtoolkit.zip
And there are other tools you can use http://www.rittmanmead.com/2013/03/performance-and-obiee-test-build/
Or you can make use of this to capture the SQLs https://github.com/tmuth/Query-Test-Framework and then run it on top of runbenchtoolkit?
''Download''
http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
http://www.oracle.com/technetwork/developer-tools/sqlcl/downloads/sqlcl-relnotes-421-3415922.html
http://www.oracle.com/technetwork/developer-tools/sql-developer/sqldev-newfeatures-v42-3211987.html
SQL Developer Data Modeler User's Guide http://docs.oracle.com/database/sql-developer-4.2/DMDUG/toc.htm
''Migrate settings to new machine''
http://zacktutorials.blogspot.com/2011/02/how-to-copy-oracle-sqldeveloper.html
http://oracledeli.wordpress.com/2011/09/28/sql-developer_migrate_settings_files/
{{{
Navigate to the following location,
Step 1: C:\Documents and Settings\\Application Data\SQL Developer
Step 2: C:\Documents and Settings\\Application Data\SQL Developer\systemXX.X.X.X.XX
Step 3: Copy the product-preferences.xml in the location below,
C:\Documents and Settings\\Application Data\SQL Developer\systemXX.X.X.X.XX\o.sqldeveloper.XX.X.X.XX.XX
Step 4: Copy the connections.xml in the location below,
C:\Documents and Settings\\Application Data\SQL Developer\systemXX.X.X.X.XX\o.jdeveloper.db.connection.XX.X.X.X.XX.XX.XX
Copy the 2 files(product-preferences.xml & connections.xml ) to your new machine in the same location
C:\Users\karl\AppData\Roaming\SQL Developer\system4.2.0.17.089.1709\o.jdeveloper.db.connection.13.0.0.1.42.170225.201
C:\Users\karl\AppData\Roaming\SQL Developer\system4.2.0.17.089.1709\o.sqldeveloper.12.2.1.17.89.1709
}}}
''SetJavaHome''
http://stackoverflow.com/questions/7876502/how-can-i-run-oracle-sql-developer-on-jdk-1-6-and-everything-else-on-1-7
{{{
version 4.x
%APPDATA%\sqldeveloper\1.0.0.0.0\product.conf
}}}
''instance viewer''
http://www.thatjeffsmith.com/archive/2014/12/sql-developer-4-1-instance-viewer/ , https://www.youtube.com/watch?v=FrdUCdGJEG8
! SELECT asterisk - automatic column population
https://www.thatjeffsmith.com/archive/2016/11/7-ways-to-avoid-select-from-queries-in-sql-developer/
-- SQL*LOADER
Doc ID 1012594.6 Useful unix utilities to be used with SQL*Loader
Doc ID 1012726.6 Converting load file from delimited to fixed format
Doc ID 77337.1 How to add blank line to a SQL*Plus spooled output file
* SQL*Loader will show as INSERT SQL with module "SQL Loader conventional path". So if you are qualifying SQLs, make sure to look at SQL_TEXT and MODULE
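A quick way to qualify them (a hedged sketch; the exact module string can vary by version):
{{{
select sql_id, module, substr(sql_text, 1, 60) sql_text
from   v$sql
where  module like '%SQL Loader%'
and    sql_text like 'INSERT%';
}}}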
[img(80%,80%)[ https://i.imgur.com/7kG293E.png ]]
http://steve-lyon.blogspot.com/2013/07/sql-loader-step-by-step-basics-example-1.html
{{{
NAME, BALANCE, START_DT
"Jones, Joe" , 14 , "Jan-12-2012 09:25:37 AM"
"Loyd, Lizy" , 187.26 , "Aug-03-2004 03:13:00 PM"
"Smith, Sam" , 298.5 , "Mar-27-1997 11:58:04 AM"
"Doyle, Deb" , 5.95 , "Nov-30-2010 08:42:21 PM"
create table scott.sql_loader_demo_simple
( customer_full_name varchar2(50)
, account_balance_amt number
, account_start_date date
) ;
------------------------------------------------------------
-- SQL-Loader Basic Control File
------------------------------------------------------------
options ( skip=1 )
load data
infile 'data.csv'
truncate into table scott.sql_loader_demo_simple
fields terminated by ","
optionally enclosed by '"'
( customer_full_name
, account_balance_amt
, account_start_date DATE "Mon-DD-YYYY HH:MI:SS am"
)
sqlldr 'scott/tiger@my_database' control='control.txt' log='results.log'
}}}
There was a whitepaper about oracle sqlnet and debugging it, not written by someone of our group I think. Does anyone remember that whitepaper?
I'm trying to look up some oracle network related functions, to see if I can get some basics like network layers and accompanying functions.
I think the paper you are looking for is still available at
Examining Oracle Net, Net8, SQL*Net Trace Files (Doc ID 156485.1)
Other MOS related notes/references
SQL*NET PACKET STRUCTURE: NS PACKET HEADER (Doc ID 1007807.6)
http://www.nyoug.org/Presentations/2008/Sep/Harris_Listening%20In.pdf
http://ondoc.logand.com/d/359/html
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-network-waits
Troubleshooting Waits for 'SQL*Net message to client' and 'SQL*Net more data to client' Events from a Performance Perspective (Doc ID 1404526.1)
High Waits for Event 'SQL*Net message from client' Attributed to SQL in TKProf (Doc ID 400164.1)
<<<
Reduce Client Bottlenecks
A client bottleneck in the context of a slow database is another way to say that most of the time for sessions is being spent outside of the database. This could be due to a truly slow client or a slow network (and related components).
Observations and Causes
Examine the table below for common observations and causes:
Note: This list shows some common observations and causes but is not a complete list. If you do not find a possible cause in this list, you can always open a service request with Oracle to investigate other possible causes. Please see the section below called, "Open a Service Request with Oracle Support Services".
High Wait Time due to Client Events Before Any Type of Call
The Oracle shadow process is spending a significant amount of time waiting for messages from clients. The waits occur between FETCH and PARSE calls or before EXECUTE calls. There are few FETCH calls for the same cursor.
What to look for
TKProf:
Overall wait event summary for non-recursive and recursive statements shows significant amount of time for SQL*Net message from client waits compared to the total elapsed time in the database
Each FETCH call typically returns 5 or more rows (indicating that array fetches are occurring)
Cause Identified: Slow client is unable to respond to the database quickly
The client is running slowly and is taking time to make requests of the database.
Cause Justification
TKProf:
SQL*Net message from client waits are a large part of the overall time (see the overall summary section)
There are more than 5 rows per execution on average (divide total rows by total execution calls for both recursive and non-recursive calls). When array operations are used, you'll see 5 to 10 rows per execution.
You may also observe that performance is good when the same queries that the client sends are executed via a different client (on another node).
Solution Identified: Investigate the client
It's possible that the client or middle-tier is saturated (not enough CPU or memory) and is simply unable to send requests to the database fast enough.
You will need to check the client for sufficient resources or application bugs that may be delaying database calls.
Effort Details: Medium (M) - It is easy to check clients or mid-tiers for OS resource saturation. Bugs in application code are more difficult to find.
Risk Details: Low (L).
Solution Implementation
It may help to use a tool like OSWatcher to capture OS performance metrics on the client.
To identify a specific client associated with a database session, see the V$SESSION view under the columns, CLIENT_INFO, PROCESS, MACHINE, PROGRAM.
Documentation
Reference: V$SESSION
Notes
The OS Watcher (OSW) User Guide
The OS Watcher For Windows (OSWFW) User Guide
Implementation Verification
Implement the solution and determine if the performance improves. If performance does not improve, examine the following:
Review other possible reasons
Verify that the data collection was done properly
Verify the problem statement
If you would like to log a service request, a test case would be helpful at this stage.
Cause Identified: Slow network limiting the response time between client and database
The network is saturated and this is limiting the ability of the client and database to communicate with each other.
Cause Justification
TKProf:
SQL*Net message from client waits are a large part of the overall time (see the overall summary section)
Array operations are used. This is seen when there are more than 5 rows per execution on average (divide total rows by total execution calls for both recursive and non-recursive calls)
The average time for a ping is about equal to twice the average time for a SQL*Net message from client wait and this time is more than a few milliseconds. This indicates that most of the client time is spent in the network.
You may also observe that performance is good when the same queries that the client sends are executed via a different client on a different subnet (especially one very close to the database server).
Solution Identified: Investigate the network
Check the responsiveness of the network from different subnets and interface cards. The netstat, ping and traceroute utilities can be used to check network performance.
Effort Details: Medium (M) - Network problems are relatively easy to check but sometimes difficult to solve.
Risk Details: Low (L).
Solution Implementation
Consult your system documentation for utilities such as ping, netstat, and traceroute
Implementation Verification
Implement the solution and determine if the performance improves. If performance does not improve, examine the following:
Review other possible reasons
Verify that the data collection was done properly
Verify the problem statement
If you would like to log a service request, a test case would be helpful at this stage.
High Wait Time due to Client Events Between FETCH Calls
The Oracle shadow process is spending a significant amount of time waiting for messages from clients between FETCH calls for the same cursor.
What to look for
10046 / TKProf:
Overall wait event summary for non-recursive and recursive statements shows significant amount of time for SQL*Net message from client waits compared to the total elapsed time in the database
The client waits occur between many fetch calls for the same cursor (as seen in the cursor #).
On average, fewer than 5 rows (and usually just 1 row) are returned per execution
Cause Identified: Lack of Array Operations Causing Excess Calls to the Database
The client is not using array operations to process multiple rows in the database. This means that many more calls are performed against the database. Each call incurs a wait while the database waits for the next call. The time accumulates over many calls and will impact performance.
Cause Justification
TKProf:
SQL*Net message from client waits are a large part of the overall time (see the overall summary section)
There is nearly 1 row per execution on average (divide total rows by total execution calls for both recursive and non-recursive calls). When array operations are used, you'll see 5 to 10 rows per execution.
In some cases, most of the time is for a few SQL statements; you may need to examine the whole TKProf to find where the client waits were highest and examine those for the use of array operations
Solution Identified: Use array operations to avoid calls
Array operations will operate on several rows at a time (either fetch, update, or insert). A single fetch or execute call will do the work of many more. Usually, the benefits of array operations diminish after an arraysize of 10 to 20, but this depends on what the application is doing and should be determined through benchmarking.
Since fewer calls are needed, there are savings in waiting for client messages, network traffic, and database work such as logical reads and block pins.
Effort Details: Medium (M) - Depending on the client, it may be easy or difficult to change the application and use array operations.
Risk Details: Very low (L) - it is risky only when enormous array sizes are used in OLTP operations and many rows are expected, because the first row is not returned until the entire array is filled.
Solution Implementation
The implementation of array operations will vary by the type of programming language being used. See the documents below for some common ways to implement array operations.
Documentation
PL/SQL User's Guide and Reference : Reducing Loop Overhead for DML Statements and Queries with Bulk SQL
Programmer's Guide to the Oracle Precompilers : Using Host Arrays
JDBC Developer's Guide and Reference: Update Batching
JDBC Developer's Guide and Reference: Oracle Row Prefetching
Notes
Bulk Binding - What it is, Advantages, and How to use it
How To Fetch Data into a Table of Records using Bulk Collect and FOR All
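Not from the note - a minimal PL/SQL sketch of an array fetch (BULK COLLECT with LIMIT), where one FETCH call returns up to 100 rows instead of one:
{{{
DECLARE
  TYPE t_names IS TABLE OF employees.last_name%TYPE;
  l_names t_names;
  CURSOR c IS SELECT last_name FROM employees;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_names LIMIT 100;  -- one call, up to 100 rows
    EXIT WHEN l_names.COUNT = 0;
    FOR i IN 1 .. l_names.COUNT LOOP
      NULL;  -- process each row of the batch here
    END LOOP;
  END LOOP;
  CLOSE c;
END;
/
}}}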
Implementation Verification
Implement the solution and determine if the performance improves. If performance does not improve, examine the following:
Review other possible reasons
Verify that the data collection was done properly
Verify the problem statement
If you would like to log a service request, a test case would be helpful at this stage.
<<<
https://blog.tanelpoder.com/2008/02/10/sqlnet-message-to-client-vs-sqlnet-more-data-to-client/
check email "Re: [SOLVED] Re: SQL*Net more data from client"
<<<
The “more data” behavior/pattern I think is the same for both “to client” and “from client”, the data on both scenarios span SDU packets it’s just a matter of which side is waiting.
For troubleshooting, those 3 key time accounting instrumentation metrics (on snapper) I believe would be the same for both cases you’ll just reverse the “to” and “from”.
But the bottom line is to make sure the client and server SDU packet sizes are big enough and the same on both sides (client and server).
http://docwiki.embarcadero.com/DBOptimizer/en/Oracle:_Network_Waits#SQL.2ANet_more_data_to_client
<<<
<<<
https://blog.tanelpoder.com/2008/02/10/sqlnet-message-to-client-vs-sqlnet-more-data-to-client/
Now we see SQL*Net more data to client waits as well as the 5000 rows returned for every fetch call just don’t fit into a single SDU buffer.
I’ll reiterate that both SQL*Net message to client and SQL*Net more data to client waits only record the time it took to write the return data from Oracle’s userland SDU buffer to OS kernel-land TCP socket buffer. Thus the wait times of only microseconds. Thanks to that, all of the time a TCP packet spent “flying” towards the client is actually accounted in SQL*Net message from client wait statistic. The problem here is though, that we don’t know how much of this time was spent on the wire and how much of it was application think time.
Therefore, unless you’re going to buy a tool which is able to interpret TCP ACK echo timestamps, you need to measure network latency using application side instrumentation.
And this blog shows before and after workload screenshots after setting the DEFAULT_SDU_SIZE=32767, RECV_BUF_SIZE=65536, SEND_BUF_SIZE=65536
https://oracleattitude.wordpress.com/2014/08/22/oracle-performance-sqlnet-more-data-from-client/
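For reference, a minimal sqlnet.ora sketch using those same values (set on both client and server):
{{{
DEFAULT_SDU_SIZE=32767
RECV_BUF_SIZE=65536
SEND_BUF_SIZE=65536
}}}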
<<<
! the issue
{{{
I can see a huge amount of Network waits on an environment (all related to ‘SQL*Net more data from client’) but I can find no SQL_ID in the sessions waiting for such event.
Note: DBA team reports no complaints whatsoever related to this from either the app or the user level
The following is happening at H&M in only one of their 3 clustered environments (EU2).
This is the summary of what’s been seen on this environment during last months.
Black Friday though Cyber Monday (4 days)
BUCKET PERCENT COLOR TOOLTIP
------------------------------ ---------- ------ ------------------------------------------------------------
ON CPU (42.2%) 42.2 34CF27 1907967 10s-samples (42.2% of DB Time)
Network (31.6%) 31.6 989779 1429259 10s-samples (31.6% of DB Time)
Cluster (14.4%) 14.4 CEC3B5 651200 10s-samples (14.4% of DB Time)
Other (3.3%) 3.3 F571A0 149882 10s-samples (3.3% of DB Time)
Commit (2.3%) 2.3 EA6A05 102367 10s-samples (2.3% of DB Time)
Application (1.9%) 1.9 C42A05 86235 10s-samples (1.9% of DB Time)
User I/O (1.8%) 1.8 0252D7 80683 10s-samples (1.8% of DB Time)
System I/O (1.7%) 1.7 1E96DD 78265 10s-samples (1.7% of DB Time)
Concurrency (.7%) .7 871C12 33500 10s-samples (.7% of DB Time)
Administrative (.1%) .1 75763E 3987 10s-samples (.1% of DB Time)
Scheduler (0%) 0 9FFA9D 211 10s-samples (0% of DB Time)
Configuration (0%) 0 594611 181 10s-samples (0% of DB Time)
From a Database perspective currently:
SQL> set null ‘(null)’
SQL> select event, sql_id, count('x') from v$active_session_history where event = 'SQL*Net more data from client' group by event, sql_id having count('x') > 10 order by count('x') ;
EVENT SQL_ID COUNT('X')
------------------------------------------ ------------- ----------
SQL*Net more data from client bd5jc9nsyjq29 12
SQL*Net more data from client adg2f9v3hsxtt 13
SQL*Net more data from client 434jx12t4g8dn 17
SQL*Net more data from client (null) 24865
SQL> select con_id, sql_id, dbid, program, module, ROW_NUMBER () OVER (ORDER BY COUNT(*) DESC) rn, COUNT(*) samples FROM dba_hist_active_sess_history h WHERE sql_id||program||module IS NOT NULL AND wait_class = 'Network' AND event = 'SQL*Net more data from client' GROUP BY con_id, sql_id, dbid, program, module having COUNT(*) > 100000 ;
CON_ID SQL_ID DBID PROGRAM MODULE RN SAMPLES
---------- ------------- ---------- ---------------------------------------- ---------------------------------------- ---------- ----------
0 (null) 1629892510 JDBC Thin Client JDBC Thin Client 1 8405707
0 (null) 1629892510 JDBC Thin Client /hmwebservices 2 309973
0 (null) 1629892510 JDBC Thin Client /ru_ru 3 168724
0 (null) 1629892510 JDBC Thin Client /pl_pl 4 168675
SQL> @ashtop username,sql_id "event='SQL*Net more data from client'" sysdate-7 sysdate
Total Distinct
Seconds AAS %This USERNAME SQL_ID FIRST_SEEN LAST_SEEN Execs Seen
--------- ------- ------- -------------------- ------------- ------------------- ------------------- ----------
92809 .2 97% | HYPRODBRIS (null) 2018-12-18 14:24:29 2018-12-19 13:04:03 1
64 .0 0% | HYPRODBRIS 434jx12t4g8dn 2018-12-18 14:26:54 2018-12-19 12:37:49 64
33 .0 0% | HYPRODBRIS adg2f9v3hsxtt 2018-12-18 14:39:20 2018-12-19 12:47:16 33
32 .0 0% | HYPRODBRIS 4rum1h74czt7m 2018-12-18 18:40:14 2018-12-19 12:40:02 32
30 .0 0% | HYPRODBRIS 8zv14zdf9f2b3 2018-12-18 16:36:11 2018-12-19 12:46:52 30
29 .0 0% | HYPRODBRIS 737u6qhnqgkc6 2018-12-18 15:20:29 2018-12-19 12:49:59 29
26 .0 0% | HYPRODBRIS bd5jc9nsyjq29 2018-12-18 16:28:06 2018-12-19 12:31:25 26
25 .0 0% | HYPRODBRIS 6pzcrqd3mzk0g 2018-12-18 15:02:49 2018-12-19 12:17:53 25
25 .0 0% | HYPRODBRIS 7gb0ugms3ppjz 2018-12-18 16:27:54 2018-12-19 12:41:48 25
23 .0 0% | HYPRODBRIS 70fcfxjcypsn0 2018-12-18 16:39:34 2018-12-19 12:11:18 23
22 .0 0% | HYPRODBRIS 8vzxyrj296ph5 2018-12-18 17:05:28 2018-12-19 13:00:28 22
20 .0 0% | HYPRODBRIS 6jdwkdb5b7yp9 2018-12-18 16:30:03 2018-12-19 13:00:16 20
19 .0 0% | HYPRODBRIS 02drmxbqbf8yz 2018-12-18 15:51:51 2018-12-19 12:56:29 19
19 .0 0% | HYPRODBRIS 2d10dcz1yf66r 2018-12-18 17:31:01 2018-12-19 12:14:53 19
19 .0 0% | HYPRODBRIS 3rhxvxhsz1rf1 2018-12-18 15:23:14 2018-12-19 12:42:19 19
From an application perspective, Ct tells me the same release is running in all EU1, EU2, EU3 sites (?) and I can find no differences in network configuration (at server level) in either of them.
Any idea on how to track down which processes/programs are causing this behaviour would be highly appreciated.
}}}
! the fix
{{{
If the SQL statement to be parsed is big enough to be sent in several pieces, the shadow process waits for the full SQL statement before it can actually start parsing it.
During this time it waits on "SQL*Net message from client" without any sql_id.
Testing with SQL statements up to 1.5 MB in size:
48KB statement rendered 2 waits for SQL*Net message from client waits
1.5MB statement rendered 43 waits for SQL*Net message from client waits
Short stack: kslwtectx<-opikndf2<-ttcclr<-ttcc2u<-ttcpip<-opitsk<-opiino<-opiodr<-opidrv<-sou2o<-opimai_real<-ssthrdmain<-main<-__libc_start_mains
}}}
.
-- easy CSV with headings https://www.safaribooksonline.com/library/view/oracle-sqlplus-the/0596007469/re105.html
https://stackoverflow.com/questions/5576901/sqlplus-spooling-how-to-get-rid-of-first-empty-line
{{{
$ s1
SQL*Plus: Release 19.0.0.0.0 - Production on Wed Oct 20 10:03:07 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Last Successful login time: Wed Oct 20 2021 10:01:28 -04:00
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
10:03:07 SYSTEM@ORCL> @testcsv2
10:03:11 SYSTEM@ORCL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle@localhost.localdomain:/home/oracle/karao/scripts-master/performance:orclcdb
$ cat testcsv2.csv
"USERNAME","USER_ID","PASSWORD","ACCOUNT_STATUS","LOCK_DATE","EXPIRY_DATE","DEFAULT_TABLESPACE","TEMPORARY_TABLESPACE","LOCAL_TEMP_TABLESPACE","CREATED","PROFILE","INITIAL_RSRC_CONSUMER_GROUP","EXTERNAL_NAME","PASSWORD_VERSIONS","EDITIONS_ENABLED","AUTHENTICATION_TYPE","PROXY_ONLY_CONNECT","COMMON","LAST_LOGIN","ORACLE_MAINTAINED","INHERITED","DEFAULT_COLLATION","IMPLICIT","ALL_SHARD","PASSWORD_CHANGE_DATE"
"SYS",0,,"OPEN",,,"SYSTEM","TEMP","TEMP","17-APR-19","DEFAULT","SYS_GROUP",,"11G 12C ","N","PASSWORD","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"AUDSYS",8,,"LOCKED","31-MAY-19",,"USERS","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"SYSTEM",9,,"OPEN",,,"SYSTEM","TEMP","TEMP","17-APR-19","DEFAULT","SYS_GROUP",,"11G 12C ","N","PASSWORD","N","YES","20-OCT-21 10.03.07.000000000 AM -04:00","Y","YES","USING_NLS_COMP","NO","NO",
"OUTLN",13,,"LOCKED","31-MAY-19",,"SYSTEM","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"GSMADMIN_INTERNAL",22,,"LOCKED","31-MAY-19",,"SYSAUX","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"GSMUSER",23,,"LOCKED","31-MAY-19",,"USERS","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"DIP",24,,"LOCKED","17-APR-19",,"USERS","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"REMOTE_SCHEDULER_AGENT",35,,"LOCKED","31-MAY-19",,"USERS","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"DBSFWUSER",36,,"LOCKED","31-MAY-19",,"SYSAUX","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
oracle@localhost.localdomain:/home/oracle/karao/scripts-master/performance:orclcdb
$
oracle@localhost.localdomain:/home/oracle/karao/scripts-master/performance:orclcdb
$ cat testcsv2.sql
-- format csv
set markup csv on
set feedback off
-- this will not show the output on screen
set termout off
set echo off verify off
-- for performance, set the arraysize to larger value
set arraysize 5000
spool testcsv2.csv
select * from dba_users where rownum < 10;
spool off
}}}
-- easy HTML
{{{
SET MARKUP HTML ON
}}}
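A minimal sketch of spooling a query straight to an HTML report (the file name and query are just examples):
{{{
SET MARKUP HTML ON SPOOL ON
SPOOL report.html
select username, account_status from dba_users where rownum < 10;
SPOOL OFF
SET MARKUP HTML OFF
}}}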
-- define variable
{{{
COL edb360_bypass NEW_V edb360_bypass;
select 3600 edb360_bypass from dual;
-- or this
define edb360_secs2go = 3600
}}}
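Either way, the captured value is then referenced as a substitution variable, e.g.:
{{{
-- &edb360_bypass now holds 3600 and can be referenced anywhere in the script
select sysdate + &edb360_bypass/86400 end_time from dual;
}}}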
-- PRELIM
http://laurentschneider.com/wordpress/2011/07/sqlplus-prelim.html
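A minimal sketch of a preliminary connection (it skips the normal session setup, so it still works on a hung instance; the hanganalyze level here is just an example):
{{{
$ sqlplus -prelim "/ as sysdba"
SQL> oradebug setmypid
SQL> oradebug hanganalyze 3
SQL> oradebug tracefile_name
}}}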
-- ESCAPE CHARACTER
http://www.orafaq.com/faq/how_does_one_escape_special_characters_when_writing_sql_queries
http://www.orafaq.com/wiki/SQL*Plus_FAQ
{{{
Define an escape character:
SET ESCAPE '\'
SELECT '\&abc' FROM dual;
}}}
-- Don't scan for substitution variables:
{{{
SET SCAN OFF
SELECT '&ABC' x FROM dual;
}}}
-- NULLIF
https://forums.oracle.com/forums/thread.jspa?threadID=2303647
{{{
The simplest way is NULLIF
NULLIF (x, y)
returns NULL if x and y are the same; otherwise, it returns x. So
n / NULLIF (d, 0)
returns NULL if d is 0; otherwise, it returns n/d.
}}}
-- ACCEPT/HIDE
http://docs.oracle.com/cd/B19306_01/server.102/b14357/ch12005.htm
http://www.database-expert.com/white_papers/oracle_sql_script_that_accepts_passwords.htm
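A minimal sketch of prompting for a password without echoing it (the variable name is just an example):
{{{
ACCEPT pwd CHAR PROMPT 'Enter password: ' HIDE
CONNECT scott/&pwd
UNDEFINE pwd
}}}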
SQL*Plus command line history completion - RLWRAP
Doc ID: 460591.1
{{{
Purpose
SQL*Plus, the primary interface to the Oracle Database server,
provides a powerful yet easy-to-use environment for querying, defining, and controlling data.
However, some command-line utilities, for example bash, provide features such as:
- command history (up/down arrow keys)
- auto completion (TAB key)
- searchable command line history (Ctrl+r)
The scope of this bulletin is to provide these features to SQL*Plus.
Scope and Application
For all SQL*Plus users, but in particular for Linux platforms, as this note has been written with that operating system in mind. In any case the idea can work on other OSs as well.
SQL*Plus command line history completion
SQL*Plus users working on Linux platforms have the opportunity to use a readline wrapper "rlwrap". rlwrap is a 'readline wrapper' that uses the GNU readline library to allow the editing of keyboard input for any other command. Input history is remembered across invocations, separately for each command; history completion and search work as in bash and completion word lists can be specified on the command line. Since SQL*Plus is not built with readline library, rlwrap is just doing the job.
- 'rlwrap' is really a tiny program. It's about 24K in size, and you can download it from the official developer (Hans Lub) website http://utopia.knoware.nl/~hlub/uck/rlwrap/
What do you need to compile and run it
A newer (4.2+) GNU readline (you can get it at ftp://ftp.gnu.org/gnu/readline/)
and an ANSI C compiler.
rlwrap compiles and runs on many Unix systems and on cygwin.
Installation should be as simple as:
./configure
make
make install
Compile rlwrap statically
If you don't have the root account you can compile rlwrap statically
and install it under $HOME/bin executing this command:
CFLAGS=-I$HOME/readline-6.0 CPPFLAGS=-I$HOME/readline-6.0 LDFLAGS=-static ./configure --prefix=$HOME/bin
make
make install
where $HOME/readline-6.0 is the 'readline' source location
A different option, if you are using Linux, is to download the newest source rpm package from e.g.: http://download.fedora.redhat.com/pub/epel/5/SRPMS/rlwrap-0.37-1.el5.src.rpm and build the binary rpm package by
# rpm -ivh rlwrap-0.37-1.el5.src.rpm
# cd /usr/src/redhat/SPECS/
# rpmbuild -bb rlwrap.spec
# cd ../RPMS/<arch>/
and then you can install it as any other rpm package by e.g.
# rpm -ivh rlwrap-0.37-1.x86_64.rpm
- After installing the package, you should configure the user's environment so that it makes use of the installed utility: add the following line in '/etc/bashrc' (globally) or in '${HOME}/.bashrc' (locally for the user). Replace '<path>' with the right path of your rlwrap:
alias sqlplus='<path>/rlwrap ${ORACLE_HOME}/bin/sqlplus'
The modified .bashrc won't take effect until you launch a new terminal session or until you source .bashrc. So shut down any terminals you already have open and start a new one.
If you now launch SQL*Plus in exactly the way you've used so far, you should be able to type one SQL command and submit it, and then immediately be able to press the up-arrow key and retrieve it. The more SQL commands you issue over time, the more commands rlwrap will remember. As well as just scrolling through your previous SQL commands, you can press 'Ctrl+r' to give you a searchable command line history.
You can also create your own '${HOME}/.sqlplus_completions' file (locally) or '/usr/share/rlwrap/sqlplus' file (globally) with all SQL reserved words (or whatever you want) as your auto-completion list (see the rlwrap man page for details).
An example of '${HOME}/.sqlplus_completions' with some reserved words:
COPY PAUSE SHUTDOWN
DEFINE PRINT SPOOL
DEL PROMPT SQLPLUS
ACCEPT DESCRIBE QUIT START
APPEND DISCONNECT RECOVER STARTUP
ARCHIVE LOG EDIT REMARK STORE
ATTRIBUTE EXECUTE REPFOOTER TIMING
BREAK EXIT REPHEADER TTITLE
BTITLE GET RESERVED UNDEFINE
CHANGE HELP RESERVED VARIABLE
CLEAR HOST RUN WHENEVER
copy pause shutdown
define print spool
del prompt sqlplus
accept describe quit start
append disconnect recover startup
archive log edit remark store
attribute execute repfooter timing
break exit repheader ttitle
btitle get reserved undefine
change help reserved variable
clear host run whenever
ALL ALTER AND ANY ARRAY ARROW AS ASC AT
BEGIN BETWEEN BY
CASE CHECK CLUSTERS CLUSTER COLAUTH COLUMNS COMPRESS CONNECT CRASH CREATE CURRENT
DECIMAL DECLARE DEFAULT DELETE DESC DISTINCT DROP
ELSE END EXCEPTION EXCLUSIVE EXISTS
FETCH FORM FOR FROM
GOTO GRANT GROUP
HAVING
IDENTIFIED IF IN INDEXES INDEX INSERT INTERSECT INTO IS
LIKE LOCK
MINUS MODE
NOCOMPRESS NOT NOWAIT NULL
OF ON OPTION OR ORDER OVERLAPS
PRIOR PROCEDURE PUBLIC
RANGE RECORD RESOURCE REVOKE
SELECT SHARE SIZE SQL START SUBTYPE
TABAUTH TABLE THEN TO TYPE
UNION UNIQUE UPDATE USE
VALUES VIEW VIEWS
WHEN WHERE WITH
all alter and any array arrow as asc at
begin between by
case check clusters cluster colauth columns compress connect crash create current
decimal declare default delete desc distinct drop
else end exception exclusive exists
fetch form for from
goto grant group
having
identified if in indexes index insert intersect into is
like lock
minus mode
nocompress not nowait null
of on option or order overlaps
prior procedure public
range record resource revoke
select share size sql start subtype
tabauth table then to type
union unique update use
values view views
Note:
You can use 'rlwrap' with all Oracle command line utilities such as Recovery Manager (RMAN) , Oracle Data Pump (expdp), ASM command (asmcmd), etc.
i.e.:
alias rman='/usr/bin/rlwrap ${ORACLE_HOME}/bin/rman'
alias expdp='/usr/bin/rlwrap ${ORACLE_HOME}/bin/expdp'
alias asmcmd='/usr/bin/rlwrap ${ORACLE_HOME}/bin/asmcmd'
References
http://utopia.knoware.nl/~hlub/uck/rlwrap/
ftp://ftp.gnu.org/gnu/readline
http://download.fedora.redhat.com/pub/epel
}}}
-- HEX, DECIMAL, ASCII
Script To Convert Hexadecimal Input Into a Decimal Value
Doc ID: 1019580.6
How to Convert Numbers to Words
Doc ID: 135986.1
Need To Convert A Varchar2 String Into Its Hexadecimal Equivalent
Doc ID: 269578.1
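For simple cases the built-in format models do the job without any script (a minimal sketch; the values are just examples):
{{{
-- hex to decimal, and decimal to hex
select to_number('FF', 'XX') dec_val, to_char(255, 'XX') hex_val from dual;
-- varchar2 string to its hexadecimal equivalent
select rawtohex(utl_raw.cast_to_raw('ABC')) hex_str from dual;
}}}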
http://www.orafaq.com/wiki/SQL_FAQ#How_does_one_add_a_day.2Fhour.2Fminute.2Fsecond_to_a_date_value.3F
{{{
Here are a couple of examples:
Description Date Expression
Now SYSDATE
Tomorow/ next day SYSDATE + 1
Seven days from now SYSDATE + 7
One hour from now SYSDATE + 1/24
Three hours from now SYSDATE + 3/24
A half hour from now SYSDATE + 1/48
10 minutes from now SYSDATE + 10/1440
30 seconds from now SYSDATE + 30/86400
Tomorrow at 12 midnight TRUNC(SYSDATE + 1)
Tomorrow at 8 AM TRUNC(SYSDATE + 1) + 8/24
Next Monday at 12:00 noon NEXT_DAY(TRUNC(SYSDATE), 'MONDAY') + 12/24
First day of the month at 12 midnight TRUNC(LAST_DAY(SYSDATE ) + 1)
The next Monday, Wednesday or Friday at 9 a.m TRUNC(LEAST(NEXT_DAY(sysdate, 'MONDAY'), NEXT_DAY(sysdate, 'WEDNESDAY'), NEXT_DAY(sysdate, 'FRIDAY'))) + 9/24
}}}
weekday https://docs.oracle.com/cd/E51711_01/DR/WeekDay.html
http://laurentschneider.com/wordpress/2005/12/the-sqlplus-settings-i-like.html
http://awads.net/wp/2005/08/04/oracle-sqlplus/
CSV output
{{{
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
set feedback off pages 0 term off head on und off trimspool on echo off lines 4000 colsep ','
spool awr_cpuwl-tableau-&_instname-&_hostname..csv
<SQL here>
spool off
host sed -n -i '2,$ p' awr_cpuwl-tableau-&_instname-&_hostname..csv
}}}
http://blog.oraclecontractors.com/?p=551
http://pastebin.com/dYCc8NXY
http://www.geekinterview.com/question_details/60974
http://larig.wordpress.com/2011/05/29/formatting-oracle-output-in-sqlplus/
http://stackoverflow.com/questions/643137/how-do-i-spool-to-a-csv-formatted-file-using-sqlplus
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2189860818012
http://database.blogs.webucator.com/2011/02/27/importing-data-using-oracle-sql-developer/ <- import data using SQL Developer
https://forums.oracle.com/forums/thread.jspa?threadID=855621 <- remove dash line below header
-- VARIABLES
http://www.dbforums.com/oracle/1089379-sqlplus-passing-parameters.html
http://www.unix.com/unix-dummies-questions-answers/25395-how-pass-values-oracle-sql-plus-unix-shell-script.html
Spice up your SQL Scripts with Variables http://www.orafaq.com/node/515
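A minimal sketch of passing a positional parameter from the shell into a script (the script name and query are just examples):
{{{
$ sqlplus -s scott/tiger @topn.sql 10
-- inside topn.sql the first argument is referenced as &1
select * from (select sql_id, elapsed_time from v$sqlstats order by elapsed_time desc)
where rownum <= &1;
}}}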
Oracle Support Resources List
http://blogs.oracle.com/Support/2007/08/
How to Identify Resource Intensive SQL for Tuning (Doc ID 232443.1)
Example "Top SQL" queries from V$SQLAREA (Doc ID 235146.1)
TROUBLESHOOTING Query Tuning
Doc ID: 752662.1
FAQ: Query Tuning Frequently Asked Questions
Doc ID: Note:398838.1
PERFORMANCE TUNING USING 10g ADVISORS AND MANAGEABILITY FEATURES
Doc ID: 276103.1
How to Submit a Testcase to Oracle Support for Reproducing an Execution Plan
Doc ID: Note:390610.1
-- RAT
Real Application Testing Now Available for Earlier Releases
Doc ID: Note:560977.1
TESTING SQL PERFORMANCE IMPACT OF AN ORACLE 9i TO ORACLE DATABASE 10g RELEASE 2 UPGRADE WITH SQL PERFORMANCE ANALYZER
Doc ID: Note:562899.1
-- EXPLAIN PLAN
Methods for Obtaining a Formatted Explain Plan
Doc ID: Note:235530.1
SQLTXPLAIN.SQL - Enhanced Explain Plan and related diagnostic info for one SQL statement
Doc ID: Note:215187.1
Database Community: SQLTXPLAIN 2: Comparing Two Explain Plans using the SQLTXPLAIN COMPARE method (Doc ID 953964.1)
Database Performance Archived Webcasts (Doc ID 1050869.1)
Support Community SQLT (SQLTXPLAIN) Enhanced Explain Plan and Related Diagnostic Information for One SQL (Doc ID 764311.1)
SQLT (SQLTXPLAIN) - Tool that helps to diagnose SQL statements performing poorly (Doc ID 215187.1)
SQL Code Diagnostics: How to Create an SQLTXPLAIN ("XPLAIN" Method) in 4 to 5 Easy Steps! (Doc ID 804267.1)
bde_x.sql - Simple Explain Plan for given SQL Statement (8i-10g) (Doc ID 174603.1)
SQLT (SQLTXPLAIN) - Tool that helps to diagnose SQL statements performing poorly (Doc ID 215187.1)
coe_xplain_80.sql - Enhanced Explain Plan for given SQL Statement (8.0) (Doc ID 156959.1)
coe_xplain_73.sql - Enhanced Explain Plan for given SQL Statement (7.3) (Doc ID 156960.1)
Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046
Doc ID: Note:224270.1
Implementing and Using the PL/SQL Profiler
Doc ID: Note:243755.1
Interpreting Raw SQL_TRACE and DBMS_SUPPORT.START_TRACE output
Doc ID: Note:39817.1
TROUBLESHOOTING: Advanced Query Tuning
Doc ID: 163563.1
Determining the execution plan for a distributed query
Doc ID: 33838.1
How to Display All Loaded Execution Plan for a specific sql_id
Doc ID: 465398.1
-- SLOW
Why is a Particular Query Slower on One Machine than Another?
Doc ID: 604256.1
Potentially Expensive Query Operations
Doc ID: 162142.1
TROUBLESHOOTING: Advanced Query Tuning
Doc ID: 163563.1
TROUBLESHOOTING: Possible Causes of Poor SQL Performance
Doc ID: 33089.1
How to Tune a Query that Cannot be Modified
Doc ID: 122812.1
Diagnostics for Query Tuning Problems
Doc ID: 68735.1
-- SLOW SIMULATE
How to simulate a slow query. Useful for testing of timeout issues
Doc ID: 357615.1
-- HISTOGRAM
Case Study: Judicious Use of Histograms for Oracle Applications Tuning
Doc ID: 358323.1
-- PREDICATE
Query Performance is influenced by its predicate order
Doc ID: 276877.1
-- SQL PROFILE
http://robineast.wordpress.com/2007/08/04/what-they-dont-tell-you-about-oracle-sql-profiles/
http://kerryosborne.oracle-guy.com/2009/04/oracle-sql-profiles/
Automatic SQL Tuning - SQL Profiles
Doc ID: 271196.1
How To Move SQL Profiles From One Database To Another Database
Doc ID: 457531.1
How To Capture The Entire App.Sqls Execution Plan In Sql Profile
Doc ID: 556133.1
Slow Query - The Explain Plan Changed - Just By Changing One Character it is Fast Again
Doc ID: 463134.1
-- 10053
How to Obtain Tracing of Optimizer Computations (EVENT 10053)
Doc ID: 225598.1
CASE STUDY: Analyzing 10053 Trace Files (Doc ID 338137.1)
-- SQL
VIEW: "V$SQL" Reference Note
Doc ID: 43762.1
Useful Join Columns:
( ADDRESS,HASH_VALUE ) - Join to <View:V$SQLTEXT> . ( ADDRESS,HASH_VALUE )
Support Notes:
This view shows one row for each version of each SQL statement.
See <View:V$SQLAREA> for an aggregated view which groups all versions
of the same SQL statement together.
When monitoring performance it can be beneficial to use this view
rather than V$SQLAREA if looking at only a subset of statements in the
shared pool.
-- TYPE
How to Determine Type or Table Dependents of an Object Type
Doc ID: 69661.1
http://www.slaviks-blog.com/2010/03/30/oracle-sql_id-and-hash-value/
http://blogs.oracle.com/toddbao/2010/11/how_to_get_sql_id_from_sql_statement.html
Understanding SQL Plan Baselines in Oracle Database 11g
http://www.databasejournal.com/features/oracle/article.php/3896411/article.htm
HOW TO TUNE ONE SQL FOR VARIOUS SIZE OF DATABASES
http://toadworld.com/BLOGS/tabid/67/EntryId/615/How-to-Tune-One-SQL-for-Various-Size-of-Databases.aspx
Baseline Advisor
http://orastory.wordpress.com/2011/03/22/sql-tuning-set-to-baseline-to-advisor/
{{{
Things to note:
In 10g and 11gR1 the default for SELECT_WORKLOAD_REPOSITORY is to return only BASIC information, which excludes the plan! So DBMS_SPM.LOAD_PLANS_FROM_SQLSET doesn’t load any plans.
It doesn’t throw a warning either, which it could sensibly, since the STS has no plan, and it can see that</grumble>
This changes to TYPICAL in 11gR2 (thanks Surachart!)
Parameter “optimizer_use_sql_plan_baselines” must be set to TRUE for a baseline to be used
Flush the cursor cache after loading the baseline to make sure it gets picked up on next execution of the sql_id
}}}
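For reference, loading the plans from an STS into SPM is a one-liner on top of all this (a minimal sketch; the STS name is just an example):
{{{
DECLARE
  n PLS_INTEGER;
BEGIN
  -- returns the number of plans loaded as baselines
  n := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'MY_STS');
  DBMS_OUTPUT.PUT_LINE(n || ' plans loaded');
END;
/
}}}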
http://kerryosborne.oracle-guy.com/2009/04/04/oracle-sql-profiles/
http://kerryosborne.oracle-guy.com/2009/07/31/why-isnt-oracle-using-my-outline-profile-baseline/
<<<
Greg Rahn says:
April 5, 2009 at 4:57 pm
The main difference between an Outline and a SQL Profile is an Outline contains a full set of query execution plan directives where a SQL Profile (created by the Tuning Adviser) only contains adjustments (OPT_ESTIMATE / COLUMN_STATS / TABLE_STATS) for cardinality allowing the optimizer the option to choose the operation based on the additional information. This means an Outline always has exactly the same execution plan, but a SQL Profile may not.
To use an analogy, an Outline is a complete set of turn by turn directions, where a SQL Profile contains only the (adjusted) estimated driving times for portions of the trip.
<<<
{{{
Does SQLT provide formatted 10053 output?
SQLTXPLAIN collects 10053 trace but it does not re-format it. The trace file will be called sqlt_Snnnnn_10053_explain.trc, where nnnnn is the unique identifier for the SQLTXPLAIN command.
10053 is an internal trace and it is not documented. It was created by development for development to analyze issues with the cost based optimizer. It is included in SQLTXPLAIN output for completeness and so that we have the trace in the event that there is a need to engage Oracle support or development. Because SQLTXPLAIN uses EXPLAIN PLAN the SQL ID will be different to the SQL ID of the original SQL. For example
SELECT /*+ monitor */ e.first_name "First", e.last_name "Last", d.department_name "Department"
FROM hr.employees e, hr.departments d
WHERE e.department_id = d.department_id
which has a SQL ID of 8yw3t99dpvf8k will be changed to
EXPLAIN PLAN SET statement_id = '36861' INTO SQLTXPLAIN.sqlt$_sql_plan_table FOR
SELECT /*+ monitor */ e.first_name "First", e.last_name "Last", d.department_name "Department"
FROM hr.employees e, hr.departments d
WHERE e.department_id = d.department_id
which has a SQL ID of 3712ayqgzw6hb.
Because there is a lot of tracing happening during the execution of SQLTXPLAIN the trace will often not begin where the trace of the SQL of interest begins. To find the beginning of the statement of interest search the 10053 trace file for the string "Current SQL Statement".
The end of the trace for the statement of interest can be found by searching for "atom_hint" or "END SQL Statement Dump".
10053 can also be obtained by running SQLHC.sql.
}}}
{{{
11.4.5.0 November 21, 2012
ENH: Event 10053 is now enabled with SQL_Optimizer instead of SQL_Compiler. Trace 10053 becomes more readable.
11.4.4.2 February 2, 2012
ENH: EVENT 10053 trace includes now tracing SQL Plan Management SPM. Look for "SPM: " token in trace.
11.4.4.1 January 2, 2012
ENH: SQLT XTRACT on 11.2 uses now DBMS_SQLDIAG.DUMP_TRACE to generate 10053 on child cursor with largest average elapsed time from connecting instance.
ENH: TRCA provides now a script sqlt/run/sqltrcasplit.sql to split a trace file into a 10046 trace and the rest.
(In other words, it provides access to this capability of splitting a 10046/10053 trace to end users.)
}}}
! XPLORE
* ''version upgrade''
- you just don't want to lower optimizer_features_enable back to its pre-upgrade value (say you upgraded from 10gR2 to 11gR2) because OFE has a wide scope and will enable/disable a lot of optimizer features at once.. if you want to retain the 11gR2 OFE parameter, be systematic, and pinpoint the exact issue causing the performance regression, what you can do is disable/turn off that particular feature using a fix control (see the sketch after this list) - (can be done with no data)
- with every new release of Oracle, optimizer improvements are implemented through patches or fix controls (this started in DB version 10.2.0.2, I think).. so you can toggle an optimizer feature on or off through fix control
* ''wrong result''
- if it returns 1 row instead of 3, that automatically means it's a bug! The golden rule is: any transformation done by the optimizer must not alter the result set.. it can alter the performance but not the result set.. so you want to pinpoint which specific optimizer fix control is causing the wrong result set, because even a new optimizer feature may cause this.. then after identifying the specific fix control culprit and letting Oracle Support know, they will provide a bug fix or patch for it - (must be done with data)
* ''just finding a good plan''
- a brute force, one by one case analysis of every fix control.. this behaves like the hints injection by DB Optimizer (heuristics) but on a wider and more detailed scope: going through all the 1000+ fix controls gives you a lot of permutations to choose from.. if you find the specific fix control causing it to have good performance, you can then play around with the execution plan with hints.. you can leave this running in a dev or qa environment - (must be done with data)
* ''hard parsing (long parse times)''
- it is totally possible that a new optimizer feature can make your parse times really slow.. so this will go through all the fix control cases and give you valuable input on where it is going wrong (can be done with no data)
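As a sketch of the mechanics, a single fix control can be toggled at the session level like this (the bug number below is just an example):
{{{
-- turn one optimizer fix off for this session only
ALTER SESSION SET "_fix_control" = '5483301:OFF';
-- check the current session-level settings
select bugno, value, description from v$session_fix_control where bugno = 5483301;
}}}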
! XHUME
* ''troubleshooting stats''
- must be done on a dev environment because it hacks and modifies the data dictionary. What it does is update the last created date info of a table to be older than the oldest statistics collection history of that table.. it then iterates through all the statistics history, and at some point you'll find the best execution plan; you can then pluck it out using the DBMS_STATS api, or compare specific stats periods to see what's wrong between two statistics collections (see the sketch below)
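A minimal sketch of plucking a stats version back out with the DBMS_STATS history API (owner, table, and timestamp are just examples):
{{{
-- see what statistics history is available for the table
select table_name, stats_update_time from dba_tab_stats_history where table_name = 'EMP';
-- restore the table stats as of a point in time
exec dbms_stats.restore_table_stats('SCOTT', 'EMP', to_timestamp('2012-11-26 13:00', 'YYYY-MM-DD HH24:MI'));
}}}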
! XGRAM
* ''hack the histogram'' - set of scripts to modify/set and hack the histograms
{{{
EXEC sqltxplain.sqlt$a.purge_repository(81429, 81429);
}}}
{{{
select distinct statid from sqltxplain.SQLT$_STATTAB;
/home/oracle/dba/sqlt/utl/sqltimp.sql s<SQLT ID> sysadm
}}}
''show sqltxplain configuration parameters''
{{{
select name, value from sqltxplain.SQLI$_PARAMETER;
}}}
''-- execute this''
{{{
-- this will disable the TCB and shorten the STA run
EXEC sqltxplain.sqlt$a.set_param('test_case_builder', 'N'); <-- the default is Y
EXEC sqltxadmin.sqlt$a.set_sess_param('test_case_builder', 'N'); <-- 2022
EXEC sqltxplain.sqlt$a.set_param('sta_time_limit_secs', '30'); <-- the default is 1800sec
}}}
-- to enable SQL Tuning Advisor do this
{{{
EXEC sqltxplain.sqlt$a.set_param('sql_tuning_advisor', 'Y'); <-- the default is Y
EXEC sqltxplain.sqlt$a.set_param('sta_time_limit_secs', '1800');
}}}
I usually do the compare on my laptop because at the client site I usually can't install all the other tools that I need.. like DB Optimizer, SQL Developer, etc.
This tiddler will show you how to make use of the SQLT compare feature to compare the good and bad runs of a particular SQL..
I usually do the following steps to drill down on a query.. but for this tiddler I will only discuss the steps that are highlighted
''- install SQLTXPLAIN
- pull bad and good SQLT runs
- do compare''
- generate local test case
- execute query
- db optimizer
! Install SQLTXPLAIN
{{{
Execute sqlt/install/sqcreate.sql connected as SYS.
# cd sqlt/install
# sqlplus / as sysdba
SQL> START sqcreate.sql
}}}
! Pull bad and good SQLT runs
{{{
# cd sqlt/run
# sqlplus apps
SQL> START sqltxtract.sql [SQL_ID]|[HASH_VALUE]
SQL> START sqltxtract.sql 0w6uydn50g8cx
SQL> START sqltxtract.sql 2524255098
}}}
the two zip files should be on your laptop; in my case I unzipped each of them into its own directory
{{{
oracle@karl.fedora:/trace/sqlt:orcl
$ ls -ltr
total 72
drwxr-xr-x 5 oracle dba 4096 Oct 10 18:09 utl
drwxr-xr-x 2 oracle dba 4096 Oct 20 16:04 doc
-rw-r--r-- 1 oracle dba 41940 Oct 30 12:13 sqlt_instructions.html
drwxr-xr-x 3 oracle dba 4096 Oct 30 12:56 input
drwxr-xr-x 3 oracle dba 4096 Nov 27 17:13 sqlt_s54471-bad <-- bad
drwxr-xr-x 2 oracle dba 4096 Nov 27 17:23 install
drwxr-xr-x 3 oracle dba 4096 Nov 27 17:26 sqlt_s54491-good <-- good
drwxr-xr-x 2 oracle dba 4096 Nov 27 17:32 run
}}}
''contents of sqlt_s54471-bad''
{{{
oracle@karl.fedora:/trace/sqlt/sqlt_s54471-bad:orcl
$ ls -tlr
total 228468
-rw-rw-r-- 1 oracle dba 21481 Nov 26 13:33 sqlt_s54471_readme.html <-- this HTML file contains the exact commands to do the COMPARE
-rw-rw-r-- 1 oracle dba 44785 Nov 26 13:33 sqlt_s54471_p3382835738_sqlprof.sql
-rw-rw-r-- 1 oracle dba 27420205 Nov 26 13:33 sqlt_s54471_main.html
-rw-rw-r-- 1 oracle dba 701057 Nov 26 13:33 sqlt_s54471_lite.html
-rw-rw-r-- 1 oracle dba 74054 Nov 26 13:33 sqlt_s54471_sql_monitor.txt
-rw-rw-r-- 1 oracle dba 495488 Nov 26 13:33 sqlt_s54471_sql_monitor.html
-rw-rw-r-- 1 oracle dba 517117 Nov 26 13:33 sqlt_s54471_sql_monitor_active.html
-rw-rw-r-- 1 oracle dba 582550 Nov 26 13:33 sqlt_s54471_sql_detail_active.html
-rw-rw-r-- 1 oracle dba 619823 Nov 26 13:33 sqlt_s54471_tcb.zip
-rw-rw-r-- 1 oracle dba 104563006 Nov 26 13:33 sqlt_s54471_10053_explain.trc
-rw-rw-r-- 1 oracle dba 7782 Nov 26 13:37 sqlt_s54471_tc_sql.sql
-rw-rw-r-- 1 oracle dba 8415 Nov 26 13:37 sqlt_s54471_tc_script.sql
-rw-rw-r-- 1 oracle dba 316568 Nov 26 13:37 sqlt_s54471_opatch.zip
-rw-rw-r-- 1 oracle dba 2588 Nov 26 13:37 sqlt_s54471_driver.zip
-rw-rw-r-- 1 oracle dba 41947040 Nov 26 13:38 sqlt_s54471_trc.zip
-rw-rw-r-- 1 oracle dba 28333 Nov 26 13:38 sqlt_s54471_log.zip
-rw-r--r-- 1 oracle dba 56285212 Nov 27 17:11 sqlt_s54471.zip
drwxr-xr-x 2 oracle dba 4096 Nov 27 17:13 sqlt_s54471_tc
}}}
''contents of sqlt_s54491-good''
{{{
oracle@karl.fedora:/trace/sqlt/sqlt_s54491-good:orcl
$ ls -ltr
total 219324
-rw-rw-r-- 1 oracle dba 21526 Nov 27 02:35 sqlt_s54491_readme.html <-- this HTML file contains the exact commands to do the COMPARE
-rw-rw-r-- 1 oracle dba 34589692 Nov 27 02:35 sqlt_s54491_main.html
-rw-rw-r-- 1 oracle dba 660035 Nov 27 02:35 sqlt_s54491_lite.html
-rw-rw-r-- 1 oracle dba 1488 Nov 27 02:35 sqlt_s54491_sta_script_mem.sql
-rw-rw-r-- 1 oracle dba 333117 Nov 27 02:35 sqlt_s54491_sta_report_mem.txt
-rw-rw-r-- 1 oracle dba 57213 Nov 27 02:35 sqlt_s54491_sql_monitor.txt
-rw-rw-r-- 1 oracle dba 424680 Nov 27 02:35 sqlt_s54491_sql_monitor.html
-rw-rw-r-- 1 oracle dba 385239 Nov 27 02:35 sqlt_s54491_sql_monitor_active.html
-rw-rw-r-- 1 oracle dba 461504 Nov 27 02:35 sqlt_s54491_sql_detail_active.html
-rw-rw-r-- 1 oracle dba 40487 Nov 27 02:35 sqlt_s54491_p73080644_sqlprof.sql
-rw-rw-r-- 1 oracle dba 616556 Nov 27 02:36 sqlt_s54491_tcb.zip
-rw-rw-r-- 1 oracle dba 104557695 Nov 27 02:36 sqlt_s54491_10053_explain.trc
-rw-rw-r-- 1 oracle dba 7793 Nov 27 09:18 sqlt_s54491_tc_sql.sql
-rw-rw-r-- 1 oracle dba 8659 Nov 27 09:18 sqlt_s54491_tc_script.sql
-rw-rw-r-- 1 oracle dba 316568 Nov 27 09:18 sqlt_s54491_opatch.zip
-rw-rw-r-- 1 oracle dba 2587 Nov 27 09:18 sqlt_s54491_driver.zip
-rw-rw-r-- 1 oracle dba 33308174 Nov 27 09:18 sqlt_s54491_trc.zip
-rw-rw-r-- 1 oracle dba 28711 Nov 27 09:18 sqlt_s54491_log.zip
-rw-r--r-- 1 oracle dba 48453034 Nov 27 17:11 sqlt_s54491.zip
drwxr-xr-x 3 oracle dba 4096 Nov 27 17:44 sqlt_s54491_tc
}}}
! Do compare
* All of the steps below will be executed on your laptop
* All the specific commands are on the sqlt_s<ID>_readme.html file which you can just copy and paste
''start with the bad run''
{{{
Unzip sqlt_s54471_tc.zip from this SOURCE in order to get sqlt_s54471_expdp.dmp.
Copy sqlt_s54471_exp.dmp to the server (BINARY).
Execute import on server:
imp sqltxplain FILE=sqlt_s54471_exp.dmp TABLES=sqlt% IGNORE=Y
OR
just do
$ ./sqlt_<ID>_import.sh
}}}
''next is the good run''
{{{
Unzip sqlt_s54491_tc.zip from this SOURCE in order to get sqlt_s54491_expdp.dmp.
Copy sqlt_s54491_exp.dmp to the server (BINARY).
Execute import on server:
imp sqltxplain FILE=sqlt_s54491_exp.dmp TABLES=sqlt% IGNORE=Y
OR
just do
$ ./sqlt_<ID>_import.sh
}}}
''query the statement_ids and plan_hash_values''
{{{
SELECT
p.statement_id,
p.plan_hash_value,
DECODE(p.plan_hash_value, s.best_plan_hash_value, '[B]')||
DECODE(p.plan_hash_value, s.worst_plan_hash_value, '[W]')||
DECODE(p.plan_hash_value, s.xecute_plan_hash_value, '[X]') attribute,
x.sql_id,
round(x.ELAPSED_TIME/1000000,2) ELAPSED,
round((x.ELAPSED_TIME/1000000)/NULLIF(x.EXECUTIONS,0),2) ELAPSED_EXEC,
SUBSTR(s.method, 1, 3) method,
SUBSTR(s.instance_name_short, 1, 8) instance,
SUBSTR(s.sql_text, 1, 60) sql_text
FROM (
SELECT DISTINCT plan_hash_value, sqlt_plan_hash_value, statement_id
FROM sqltxplain.sqlt$_plan_extension
) p,
sqltxplain.sqlt$_sql_statement s,
sqltxplain.SQLT$_GV$SQLSTATS x
WHERE p.statement_id = s.statement_id
AND p.statement_id = x.statement_id
ORDER BY
p.statement_id;
SELECT LPAD(s.statement_id, 5, '0') staid,
SUBSTR(s.method, 1, 3) method,
SUBSTR(s.instance_name_short, 1, 8) instance,
SUBSTR(s.sql_text, 1, 60) sql_text
FROM sqltxplain.sqlt$_sql_statement s
WHERE USER IN ('SYS', 'SYSTEM', 'SQLTXPLAIN', s.username)
ORDER BY
s.statement_id;
STATEMENT_ID PLAN_HASH_VALUE ATTRIBUTE SQL_ID ELAPSED ELAPSED_EXEC MET INSTANCE SQL_TEXT
------------ --------------- --------- ------------- ---------- ------------ --- -------- ------------------------------------------------------------
16274 2337881134 [B][W] 1dx0vsstj8p8m 30.41 3.8 XTR mixtrn SELECT SETID, CBRE_PROPERTY_ID, AUDIT_STAMP, TO_CHAR(AUDIT_S
69520 2337881134 [B] 1dx0vsstj8p8m 1519.8 52.41 XTR mixprd SELECT SETID, CBRE_PROPERTY_ID, AUDIT_STAMP, TO_CHAR(AUDIT_S
69520 53752269 [W] 1dx0vsstj8p8m 1519.8 52.41 XTR mixprd SELECT SETID, CBRE_PROPERTY_ID, AUDIT_STAMP, TO_CHAR(AUDIT_S
}}}
''execute compare''
* Note: when doing the compare, you may also want to look at the main SQLT reports to check on the plan performance
sqlt_s<ID>_main.html -> Plans Summary
sqlt_s<ID>_main.html -> Plan Performance Statistics
sqlt_s<ID>_main.html -> Plan Performance History
{{{
Execute the COMPARE method connecting into SQL*Plus as SQLTXPLAIN. You will be asked to enter which 2 statements you want to compare.
START sqlt/run/sqltcompare.sql
OR
@sqltcompare <bad ID> <good ID> <bad plan_hash_value> <good plan_hash_value>
SQL> @sqltcompare.sql [statement id1] [statement id2] [plan hash value1] [plan hash value2];
SQL> @sqltcompare 16274 69520 2337881134 2337881134
}}}
{{{
xplore
1) create the nowrap.sql and specify the comment
2) specify the nowrap.sql and TC password
3) look out for "no data found" which is indicative of an error in the nowrap.sql
4) specify correct data formatting, else it will error when nowrap.sql is executed
15:01:05 SYS@dw> start install
Test Case User: TC22518
Password: TC22518
Installation completed.
You are now connected as TC22518.
1. Set CBO env if needed
2. Execute @create_xplore_script.sql
15:01:43 TC22518@dw> @create_xplore_script.sql
Parameter 1:
XPLORE Method: XECUTE (default) or XPLAIN
"XECUTE" requires /* ^^unique_id */ token in SQL
"XPLAIN" uses "EXPLAIN PLAN FOR" command
Enter "XPLORE Method" [XECUTE]:
Parameter 2:
Include CBO Parameters: Y (default) or N
Enter "CBO Parameters" [Y]:
Parameter 3:
Include Exadata Parameters: Y (default) or N
Enter "EXADATA Parameters" [Y]:
Parameter 4:
Include Fix Control: Y (default) or N
Enter "Fix Control" [Y]:
Parameter 5:
Generate SQL Monitor Reports: N (default) or Y
Only applicable when XPLORE Method is XECUTE
Enter "SQL Monitor" [N]: Y
Review and execute @xplore_script_1.sql
SQL>@xplore_script_1.sql nowrap.sql TC22518
}}}
* ''Documentation'' http://db.tt/668XTuvg
* ''Examples'' http://db.tt/VbIAaiBF
* ''Author of SQLTXPLAIN'' - Carlos Sierra http://carlos-sierra.net/
10gR2 version
<<<
''NOTE: when installing SQLTXPLAIN do this!!! it makes it less intrusive on production servers''
// True, it is not perfect yet. Very close, though. I do not like how the install scripts ask for a "schema user" or "application user".
To work around that, I create a new role (SQLTXPLAIN_ROLE) and provide that as my "application_user".
Then, whenever I want to run SQLTXPLAIN, I just grant/revoke that role to the real application userid. //
http://orajourn.blogspot.com/search/label/SQLTXPLAIN
<<<
11gR2 version
<<<
o Export SQLT repository
o Import SQLT repository
o Using the COMPARE method
o Restore CBO schema object statistics
o Restore CBO system statistics
o Create local test case using SQLT files
o Create stand-alone TC based on a SQLT TC
o Load SQL Plan from SQL Set
o Restore SQL Set
o Gather CBO statistics without Histograms
o Gather CBO statistics with Histograms
o List generated files
http://www.allguru.net/database/oracle-sql-profile-tuning-command/
<<<
! Install SQLTXPLAIN
{{{
Execute sqlt/install/sqcreate.sql connected as SYS.
# cd sqlt/install
# sqlplus / as sysdba
SQL> START sqcreate.sql
}}}
! Query the statement_ids and plan_hash_values
{{{
SELECT
p.statement_id,
p.plan_hash_value,
DECODE(p.plan_hash_value, s.best_plan_hash_value, '[B]')||
DECODE(p.plan_hash_value, s.worst_plan_hash_value, '[W]')||
DECODE(p.plan_hash_value, s.xecute_plan_hash_value, '[X]') attribute,
x.sql_id,
round(x.ELAPSED_TIME/1000000,2) ELAPSED,
round((x.ELAPSED_TIME/1000000)/NULLIF(x.EXECUTIONS,0),2) ELAPSED_EXEC,
SUBSTR(s.method, 1, 3) method,
SUBSTR(s.instance_name_short, 1, 8) instance,
SUBSTR(s.sql_text, 1, 60) sql_text
FROM (
SELECT DISTINCT plan_hash_value, sqlt_plan_hash_value, statement_id
FROM sqltxplain.sqlt$_plan_extension
) p,
sqltxplain.sqlt$_sql_statement s,
sqltxplain.SQLT$_GV$SQLSTATS x
WHERE p.statement_id = s.statement_id
AND p.statement_id = x.statement_id
ORDER BY
p.statement_id;
SELECT LPAD(s.statement_id, 5, '0') staid,
SUBSTR(s.method, 1, 3) method,
SUBSTR(s.instance_name_short, 1, 8) instance,
SUBSTR(s.sql_text, 1, 60) sql_text
FROM sqltxplain.sqlt$_sql_statement s
WHERE USER IN ('SYS', 'SYSTEM', 'SQLTXPLAIN', s.username)
ORDER BY
s.statement_id;
}}}
''SQLTXPLAIN scenarios'' http://www.evernote.com/shard/s48/sh/57e47988-c8c0-4cfd-bd9d-b7952e468509/cce0799de209e8483deeaa0c3c6cecec
{{{
To systematically identify why it is behaving differently on DEV2 you can
make use of SQLTXPLAIN (sqltxtract) on both environments
and do a sqltcompare http://karlarao.tiddlyspot.com/#SQLT-compare
Also you can make use of the SQLT test case builder to replicate the plan
that you have on the DEV1 environment
http://karlarao.tiddlyspot.com/#%5B%5Btestcase%20-%20SQLT-tc%20(test%20case%20builder)%5D%5D
Note that the *set_cbo_env.sql will execute a couple of "alter system" commands;
you can just comment that part out when executing the test case on the
application schema.
So do this:
COMPARE
-----------------
1) Execute sqltxtract <sql_id> on DEV1
2) Execute sqltxtract <sql_id> on DEV2
3) copy the sqlt_s<ID>.zip generated from DEV1 to DEV2, then extract it
4) look for sqlt_s<ID>_tc.zip and unzip, then
execute ./sqlt_<ID>_import.sh.. that will import the data points from DEV1
to the DEV2 SQLT repository
5) Follow the "query the statement_ids and plan_hash_values" from
http://karlarao.tiddlyspot.com/#SQLT-compare
6) Follow the "execute compare" from
http://karlarao.tiddlyspot.com/#SQLT-compare
7) Open the sqltcompare HTML file and look for the red highlighted text
those are the differences between the two environments
TEST CASE - reproduce the same execution plan
-------------------------------------------------------------------------
1) On DEV2, go to the sqlt_s<ID>_tc.zip that you unzipped from the sqlt of
DEV1
2) The new version of SQLTXPLAIN has xpress.sh which executes
xpress.sql; xpress.sql does the following:
- restore schema object stats from DEV1
- restore system statistics from DEV1
- the sqlt_<ID>_set_cbo_env.sql prompts you to connect as the application
schema
- the tc.sql executes the test case script
3) Now if you want to have the same plan as the DEV1, just execute the
xpress.sh BUT.. read through the scripts, and be aware
that sqlt_<ID>_set_cbo_env.sql and q.sql execute "alter system" commands
because it tries to make the environments the same. So if you don't want
those "alter system" commands executed just comment them out, you can do
this with the restore schema and object stats as well.
So whenever I do SQL troubleshooting I always run SQLTXPLAIN.. and it
helped me a lot on a bunch of scenarios like:
- a pure OLTP system upgraded from old to new hardware: the CPU speed
was faster on the new environment, which changed a lot of plans -
pushing the system stats back to the old hardware values made it go
back to the old plans. How did I discover it? I made use of sqltcompare and
the sqlt test case builder
- troubleshooting stats differences and stats problems
- missing indexes
- finding out a locking issue caused by a trigger from one of the tables
- troubleshooting a storage problem from an old and new environment
- parameter changes on the old and new environment
- plan changes caused by parameter change
- etc.
-Karl
}}}
''Whenever I do SQL troubleshooting I always run SQLTXPLAIN''.. it has helped me a lot in a bunch of scenarios like:
<<<
* a pure OLTP system upgraded from old to new hardware: the CPU speed was faster on the new environment, which changed a lot of plans - pushing the system stats back to the old hardware values made it go back to the old plans. How did I discover it? I made use of sqltcompare and the sqlt test case builder
* troubleshooting stats differences and stats problems
* missing indexes
* finding out a locking issue caused by a trigger from one of the tables
* troubleshooting a storage problem from an old and new environment
* parameter changes on the old and new environment
* plan changes caused by parameter change
* etc.
<<<
! SQLTQ
* with the coe_xfr profile script, you can just copy and paste the original SQL and the optimizer will tokenize it.. it's just that force matching will not take effect or be the same if you have different binds. So you can induce hints on the dev box, run coe_xfr, then just edit the output file and remove the hints you've entered; even with different formatting the force matching will still take effect (see the sketch after this list)
* a bind in java of :1 instead of a valid :b1 causes the SQL_ID to be different.. so what you can do is edit the SQL_TEXT with a valid bind, which is what sqltq.sql does: whenever it finds an invalid bind it injects the letter b into it, because you can't just have :1 in it
* there are 3 things that could be wrong in a SQL.. environment, stats, and binds.. the force matching signature can be different because of binds
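For reference, the usual coe_xfr_sql_profile flow looks like this (a minimal sketch; placeholders as shown):
{{{
SQL> @coe_xfr_sql_profile.sql <sql_id> <plan_hash_value>
-- then run the generated script (it has a force_match toggle inside) to create the profile
SQL> @coe_xfr_sql_profile_<sql_id>_<plan_hash_value>.sql
}}}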
<<<
Hi
You should read MOS note 167086.1. Also, SQL Performance analyzer (SPA)
is the tool that you need to perform plan regression analysis. But, SPA
would require minimal setup in production to capture tuning sets though.
If you are interested only in stability (and not worried about using
11g new features), you could upgrade the copy of the prod database to 11g, set
optimizer compatibility to 10.2, collect baselines, enable the use of
baselines, and set compatibility to 11g. Of course, if the application is not
using bind variables, this approach might not be optimal.
Cheers
Riyaj Shamsudeen
<<<
Tips for avoiding upgrade related query problems [ID 167086.1]
E1: TDA: Set Up Application to Generate Tuned SQL Statements [ID 629261.1]
How to filter out SQL statement run by specific schemas from a SQL Tuning set. [ID 1268219.1]
How To Move a SQL Tuning Set From One Database to Another [ID 751068.1]
HOW TO TRANSPORT A SQL TUNING SET [ID 456019.1]
HOW TO LOAD QUERIES INTO A SQL TUNING SET [ID 1271343.1]
* Master Note: SQL Query Performance Overview [ID 199083.1]
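A minimal sketch of creating an STS and loading it from the cursor cache (the STS name and filter are just examples):
{{{
EXEC DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'MY_STS');
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN cur FOR
    SELECT VALUE(p)
      FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
             basic_filter => 'parsing_schema_name = ''SCOTT''')) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'MY_STS', populate_cursor => cur);
END;
/
}}}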
http://www.freelists.org/post/oracle-l/SQLs-run-in-any-period
{{{
Hi,
We have several environments with 10g (10.2.0.4) in prod and non-prod, and we
take advantage of AWR reports to get the top sqls that are run within any
particular period. We've run into a situation when it seems that the
developers ran some scripts which they are not supposed to. Now if we need
to know all sqls that are run within any time duration to prove our point,
say last 12 hours, I'm sure there must be a way. Can anyone help me in this
regard?
We don't have auditing in place. Is there any script that anyone would like to
help me with?
Thanks.
}}}
{{{
Hi Saad,
You could try my scripts awr_topsql and awr_topsqlx which I've uploaded on
this link http://karlarao.wordpress.com/scripts-resources/
The default for these scripts is get the top 5 SQLs across SNAP_IDs and
"order by" the top 20 according to the total elapsed time..
if you "order by" SNAP_ID you'll get the same output as the AWR reports
you've generated manually using awrrpt.sql across SNAP_IDs you could check
it by comparing the output..
So this makes the task of searching for top SQL easier.. plus, I've added
some metrics to have better view/info of that top SQL..
here are the info/sections you'll get from the script (& some short
description):
1) - snap_id, time, instance, snap duration
# The time period and snap_id could be used to show the SQLs for a given
workload period.. let's say your usual work hours are 9-6pm, you could just
show the particular SQLs on that period.. there's a data range section on
the bottom of the script you could make use of it if you want to filter.
2) - sql_id, plan_hash_value, module
# You could make use of this info if you want to know where the SQL was
executed (SQL*Plus, OWB, Toad, etc.).. plus you could compare the
plan_hash_value but I suggest you make use of Kerry Osborne's
awr_plan_change.sql script if you'd like to search for unstable plans.
3) - total elapsed time, elapsed time per exec
- cpu time
- io time
- app wait time
- concurrency wait time
- cluster wait time
# These are the time info.. at least without tracing the SQL you'd know what
time component is consuming the elapsed time of that particular SQL.. so
let's say your total elapsed time is 1000sec, and cpu time of 30sec, and io
time of 300sec... you would know that it is consuming significant IO but you
have to look for the other 670sec which could be attributed to "other" wait
events (like PX Deq Credit: send blkd,etc,etc)
4) - LIOs
- PIOs
- direct writes
- rows
- executions
- parse count
- PX
# Some other statistics about the SQL.. if you're incurring a lot of PIOs, how
many times this SQL was executed on that period, the # of PX spawned.. just
be careful about these numbers if you have "executions" of let's say 8.. you
have to divide these values by 8 as well as on the time section..
only the "elapsed time per exec" is the per execution value..
this is for formatting reasons I can't fit them all on my screen.. :p
5) - AAS (Average Active Sessions)
- Time Rank
- SQL type, SQL text
# This is one of my favorites... this will measure how the SQL is
performing against my database server.. I'm using the AAS & CPU count as my
yardstick for a possible performance problem (I suggest reading Kyle's stuff
about this):
if AAS < 1
-- Database is not blocked
AAS ~= 0
-- Database basically idle
-- Problems are in the APP not DB
AAS < # of CPUs
-- CPU available
-- Database is probably not blocked
-- Are any single sessions 100% active?
AAS > # of CPUs
-- Could have performance problems
AAS >> # of CPUS
-- There is a bottleneck
so having the AAS as another metric on the TOP SQL is good stuff.. I've also
added the "time rank" column to know what is the SQLs ranking on the top
SQL.. normally the default settings of the script will show time rank 1 and
2.. this could be useful also if you are finding a particular SQL that is on
rank #15 and you are seeing that there's an adhoc query that is time rank #1
and #2 affecting the database performance..
And.... this script could also show SQLs that span across SNAP_IDs... I
would order the output by SNAP_ID and filter on that particular SQL then you
would see that if the SQL is still running and spans across let's say 2
SNAP_IDs then the exec count would be 0 (zero) and elapsed time per exec is
0 (zero).. only when the query is finished will you see these values
populated.. I've noticed this behavior and it's the same thing that is shown
on the AWR reports.. you could go here for that scenario
http://karlarao.tiddlyspot.com/#%5B%5BTopSQL%20on%20AWR%5D%5D
}}}
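For a quick spot check, AAS can also be approximated straight from ASH, since it samples active sessions once per second (a minimal sketch over the last 5 minutes):
{{{
-- rough AAS over the last 5 minutes
select round(count(*) / (5*60), 1) aas
from v$active_session_history
where sample_time > sysdate - 5/1440;
}}}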
! Starting/Stopping CRS processes
• Use crsctl as root to perform these actions
o To start Oracle high availability services on local node
* crsctl start crs
o To stop Oracle high availability services on local node
* crsctl stop crs
o To start Oracle high availability services in exclusive mode
* crsctl start crs -excl
-- other stop commands
crsctl stop cluster
crsctl stop resource
crsctl stop crs
crsctl stop has
crsctl stop ip
crsctl stop testdns
! Administering databases
• Use srvctl command as oracle to perform these actions
o To check the status of a database
* srvctl status database -d <database>
o To start a database
* srvctl start database -d <database>
o To stop a database
* srvctl stop database -d <database>
o To start a database instance
* srvctl start instance -i <instance> -d <database>
o To stop a database instance
* srvctl stop instance -i <instance> -d <database>
o To start a database service
* srvctl start service -s <service> -d <database>
o To stop a database service
* srvctl stop service -s <service> -d <database>
! Administering database services
https://martincarstenbach.wordpress.com/2014/02/18/runtime-load-balancing-advisory-in-rac-12c/
https://easyoradba.com/2012/01/29/transparent-application-failover-taf-service-in-oracle-rac-11gr2/
* Add service
srvctl add service -d dw -s dw.local -r dw1,dw2
• Use srvctl command as oracle to perform these actions
o To start a service
* srvctl start service -d <database> -s <service>
o To stop a service
* srvctl stop service -d <database> -s <service>
o To relocate a service from one instance to another
* srvctl relocate service -d <database> -s <service> -i <old_instance> -t <new_instance>
-f Disconnect all sessions during stop or relocate service operations.
o To delete a service
* srvctl remove service -d <database> -s <service>
* inventory services
srvctl config service -d PALLOC
srvctl config scan
* service creation example for Weblogic connection pool
srvctl add service -d PALLOC -s PALLOC_SVC -preferred PALLOC1,PALLOC2 -clbgoal short -rlbgoal SERVICE_TIME -notification true
! Checking the status of cluster managed resources
• The following command is in /usr/local/bin and will give the status of cluster resources:
o crsstat
! Listener
To migrate listener to the new ORACLE_HOME
* srvctl modify listener -l LISTENER_HCMPRD -o /u01/app/oracle/product/11.2.0.3/dbhome_1
https://blogs.oracle.com/gverma/entry/crsctl_start_crs_does_not_work
http://www.datadisk.co.uk/html_docs/rac/rac_cs.htm
http://www.oracle-home.ro/Oracle_Database/RAC/Startup-start-up-Oracle-Clusterware.html
http://www.oracle-home.ro/Oracle_Database/RAC/11gR2-Clusterware-Startup-Sequence.html
Troubleshoot Grid Infrastructure Startup Issues [ID 1050908.1]
http://www.dbaexpert.com/ASM.Pocket.pdf
https://jorgebarbablog.wordpress.com/2016/03/21/how-to-load-the-ssb-schema-into-an-oracle-database/
http://guyharrison.squarespace.com/blog/2010/10/21/accelerating-oracle-database-performance-with-ssd.html
''MacBook''
http://macperformanceguide.com/Reviews-SSDMacPro.html
http://www.anandtech.com/show/2504/7
http://www.anandtech.com/show/2445/20
http://www.storagereview.com/how_improve_low_ssd_performance_intel_series_5_chipset_environments
Apple's 2010 MacBook Air (11 & 13 inch) Thoroughly Reviewed
http://www.anandtech.com/Show/Index/3991?cPage=13&all=False&sort=0&page=4&slug=apples-2010-macbook-air-11-13inch-reviewed <-- GOOD STUFF
Support and Q&A for Solid-State Drives
http://blogs.msdn.com/b/e7/archive/2009/05/05/support-and-q-a-for-solid-state-drives-and.aspx <-- GOOD STUFF
http://www.anandtech.com/show/2738 <-- GOOD STUFF REVIEW + TRIM + NICE EXPLANATIONS
http://www.usenix.org/event/usenix08/tech/full_papers/agrawal/agrawal_html/index.html <-- GOOD STUFF PAPER
http://en.wikipedia.org/wiki/NAND_flash#NAND_flash
http://www.anandtech.com/Show/Index/2829?cPage=5&all=False&sort=0&page=11&slug= <-- The SSD Relapse: Understanding and Choosing the Best SSD
http://forum.notebookreview.com/alienware-m17x/509472-alienware-m17x-crystalmarkdisk-2-2-a-3.html
http://www.pcworld.com/article/192579 <-- how to install SSD in your laptop
http://www.zdnet.com/reviews/product/laptops/apple-macbook-air-fall-2010-core-2-duo-186ghz-128gb-ssd-133-inch/34198701?tag=mantle_skin;content
http://www.tomshardware.com/reviews/compactflash-sdhc-class-10,2574-8.html
http://www.lexar.com/products/lexar-professional-133x-sdxc-card?category=4155
http://www.legitreviews.com/
Intel 320 series
http://www.amazon.com/Intel-SATA-2-5-Inch-Solid-State-Drive/dp/B004T0DNP6/ref=sr_1_7?ie=UTF8&s=electronics&qid=1302071634&sr=1-7
http://www.anandtech.com/show/4244/intel-ssd-320-review/3
OCZ Vertex 3
http://www.anandtech.com/show/4186/ocz-vertex-3-preview-the-first-client-focused-sf2200
http://www.tomshardware.com/reviews/battlefield-rift-ssd,3062-14.html ''SSD IO profile - review for games''
http://www.storagereview.com/ssd_vs_hdd - ''SSD vs HDD''
http://guyharrison.squarespace.com/blog/2011/12/6/using-ssd-for-redo-on-exadata-pt-2.html ''<-- 4KB chunks is faster than 512 bytes''
http://flashdba.com/4k-sector-size/ ''<-- 4K sector size''
http://en.wikipedia.org/wiki/Advanced_Format
https://ata.wiki.kernel.org/index.php/ATA_4_KiB_sector_issues
http://hoopercharles.wordpress.com/2011/12/18/idle-thoughts-ssd-redo-logs-and-sector-size/ ''<-- hoopers, noons, guy discussing the issue.. on my test cases, I noticed huge improvements on sequential read/write with higher chunks when short stroking an LVM from 1MB to 4MB chunks''
''Anatomy of a Solid-state Drive'' http://queue.acm.org/detail.cfm?id=2385276
http://highscalability.com/blog/2013/6/13/busting-4-modern-hardware-myths-are-memory-hdds-and-ssds-rea.html
''Using Solid State Disk to optimize Oracle databases series'' http://guyharrison.squarespace.com/ssdguide
''other tagged as SSD'' http://guyharrison.squarespace.com/blog/tag/ssd
''whitepaper'' http://www.quest.com/Quest_Site_Assets/WhitePapers/Best_Practices_for_Optimizing_Oracle_RDBMS_with_Solid_State_Disk-final.pdf
''cool presentation from LSI guys'' http://www.oswoug.org/Slides/LSI/SolidStateStorageinOracleEnvironmentsv4.pptx
<<<
http://jonathanlewis.wordpress.com/2012/10/05/ssd-2/#comments
I’m not too surprised about Guy’s conclusions about redo on SSD.
This is not because I am an expert on SSD (I can spell it) but because I attended a presentation a couple of years ago by a couple of engineers from LSI.
Their job was to do performance testing of the new LSI SSD product for Oracle Flashcache.
They were surprised to consistently find that redo performed better on spinning rust than on SSD.
After discussing it with some other engineers they had a better understanding of the limitations of SSD when used with redo.
The Powerpoint for that presentation is here:
http://www.oswoug.org/Slides/LSI/SolidStateStorageinOracleEnvironmentsv4.pptx
The most interesting redo bits aren't available in the presentation, you had to be there.
There is mention however that due to sequential writes redo performs better on HDD.
<<<
''some of the important points''
{{{
OK so now that I have this super fast device – what does that mean? The obvious, well isn’t…
It’s all equally accessible – no short stroking
While it doesn’t rotate, mixed reads and writes do slow it down
Scanning the Device for bad sectors is a thing of the past
It may not be necessary to stripe for performance
In cache cases you might not even need to mirror SSDs
Using Smart Flash Cache AND moving data objects to SSD decreased performance
Online Redo Logs are best handled by HDD because of the sequential writes
}}}
''Other references''
''Solid State Drive vs. Hard Disk Drive Price and Performance Study''
http://www.dell.com/downloads/global/products/pvaul/en/ssd_vs_hdd_price_and_performance_study.pdf
http://en.wikipedia.org/wiki/Solid-state_drive#cite_note-72
http://www.intel.com/support/ssdc/hpssd/sb/CS-029623.htm#5
''redo on SSD''
http://kevinclosson.wordpress.com/2007/07/21/manly-men-only-use-solid-state-disk-for-redo-logging-lgwr-io-is-simple-but-not-lgwr-processing/
http://communities.intel.com/community/datastack/blog/2011/11/07/improve-database-performance-redo-and-transaction-logs-on-solid-state-disks-ssds
http://www.pythian.com/blog/de-confusing-ssd-for-oracle-databases/
http://www.linkedin.com/groups/Anybody-using-SSD-Redo-logs-2922607.S.52078141
http://serverfault.com/questions/159687/putting-oracle-redo-logs-on-dram-ssd-for-a-heavy-write-database
http://odenysenko.wordpress.com/2012/10/18/troubleshooting-log-file-sync-waits/
http://orainternals.wordpress.com/2008/07/07/tuning-log-file-sync-wait-events/
http://goo.gl/8hNbl
http://www.freelists.org/post/oracle-l/Exadata-How-do-you-use-FlashDisk
http://www.lsi.com/downloads/Public/Solid%20State%20Storage/WarpDrive%20SLP-300/WarpDrive_Oracle_Best_Practices.pdf
http://www.emc.com/collateral/hardware/white-papers/h5967-leveraging-clariion-cx4-oracle-deploy-wp.pdf
http://www.electronicproducts.com/Passive_Components/Capacitors/Supercapacitors_for_SSD_backup_power.aspx
groupadd -g 500 dba
useradd -u 500 -g dba -G dba oracle
mkdir -p /u01/app/oracle
chown -R oracle:dba /u01
chmod -R 775 /u01/
alternatives --install /usr/bin/java java /opt/jdk1.6.0_27/bin/java 1
/usr/java/jdk1.6.0_27/bin
/usr/java/jdk1.6.0_27/bin/java
alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_27/bin/java 1
yum install -y rng-utils-2
make-3.81
binutils-2.17.50.0.6
gcc-4.1.1
libaio-0.3.106
glibc-common-2.3.4-2.9
compat-libstdc++-296-2.96-132.7.2
libstdc++-4.1.1
libstdc++-devel-4.1.1
setarch-1.6-1
sysstat-5.0.5-1
compat-db-4.1.25-9
-deconfig dbcontrol db [-repos drop] [-cluster] [-silent] [parameters]: de-configure Database Control
-deconfig centralAgent (db | asm) [-cluster] [ -silent] [parameters]: de-configure central agent management
-deconfig all db [-repos drop] [-cluster] [-silent] [parameters]: de-configure both Database Control and central agent management
/u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/logs/EMAgentPush2011-09-26_07-49-39-AM.log
INFO: BaseServiceHandler.process Action code: 10
INFO: BaseServiceHandler.process Action code: 10
INFO: BaseServiceHandler.process Action code: 10
INFO: ======SSH setup is already exists for user: oracle for nodes: db1
INFO: Perform doSSHConnectivitySetup which is Mandatory : PASSED
INFO: RETURNING FROM PERFORM VALIDATION:true
INFO: GenericInstaller, validation done.... do connectivity next...
INFO: UIXmlWrapper.UIXmlWrapper filename: /u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/logs/tempUI.xml
INFO: prodName: 1 noPrereqClone: false
INFO: For Product : oracle.sysman.prov.agentpush.step1 there will be real deploymnet
INFO: UIXmlWrapper.UIXmlWrapper filename: /u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/logs/tempUI.xml
INFO: VirtualHost set to true, so adding a sample entry to the new file
INFO: setEnvMappingFile:Init RI with REMOTE_PATH_PROPERTIES_LOC= null
INFO: Exception : [Message:] No path files specified for platform on nodes "db1". [Exception:] oracle.sysman.prov.remoteinterfaces.exception.FatalException: No path files specified for platform on nodes "db1".
at oracle.sysman.prov.remoteinterfaces.nativesystem.NativeSystem.startup(NativeSystem.java:513)
at oracle.sysman.prov.remoteinterfaces.clusterops.ClusterBaseOps.startup(ClusterBaseOps.java:425)
at oracle.sysman.prov.remoteinterfaces.clusterops.ClusterBaseOps.startup(ClusterBaseOps.java:338)
at oracle.sysman.prov.agentpush.services.RemoteInterfaceWrapper.setEnvMappingFile(RemoteInterfaceWrapper.java:550)
at oracle.sysman.prov.agentpush.services.GenericInstaller.helper(GenericInstaller.java:302)
at oracle.sysman.prov.agentpush.services.GenericInstaller.run(GenericInstaller.java:658)
at java.lang.Thread.run(Thread.java:662)
INFO: BaseServiceHandler.process Action code: 10
INFO: BaseServiceHandler.process Action code: 10
INFO: BaseServiceHandler.process Action code: 10
INFO: RetVAL: <?xml version = '1.0' encoding = 'UTF-8'?><prov:Descriptions version="1.0.0" xmlns:prov="http://www.oracle.com/sysman/prov/deployment" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.oracle.com/sysman/prov/deployment deploy_state.xsd"> <prov:Provisioning>
<prov:MetaData>
<prov:Session sessionId="-55333588:132a45d77d9:-7fe7:1317050630064" timestamp="2011-09-26_10-23-46-AM">
<prov:SessionLocation location="/u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM"/>
</prov:Session>
<Prereqs xmlns="http://www.oracle.com/sysman/prov/deployment"><ActionState node="localnode"><Action name="agentpush_localPrereq_StartUp" state="success"/><Action name="runlocalprereqs" state="success"/><Action name="agentpush_localPrereq_ShutDown" state="success"/></ActionState><ActionState node="db1"><Action name="agentpush_remotePrereq_StartUp" state="success"/><Action name="runremoteprereqs" state="not_executed"/><Action name="agentpush_remotePrereq_ShutDown" state="success"/></ActionState><LogLocation node="localnode" location="Prereqs"/><LogLocation node="db1" location="Prereqs/db1"/><PrereqResultsLocation node="localnode" location="Prereqs/results"/><PrereqResultsLocation node="db1" location="Prereqs/db1/results"/></Prereqs><Fixup xmlns="http://www.oracle.com/sysman/prov/deployment"><FixupLocation node="localnode" location="Prereqs/Fixup"/><FixupLocation node="db1" location="Prereqs/db1/Fixup"/></Fixup></prov:MetaData>
<prov:Interview>
<prov:DeployMode name="newagent"/>
<prov:Attribute name="installType" type="String" value="Fresh Install"/>
<prov:Attribute name="shiphomeLoc" type="String" value="1"/>
<prov:Attribute name="shiphomeLocVal" type="String" value="null"/>
<prov:Attribute name="remoteHostNamesStr" type="String" value="db1"/>
<prov:Attribute name="installBaseDir" type="String" value="/u01/app/oracle"/>
<prov:Attribute name="version" type="String" value="11.1.0.1.0"/>
<prov:Attribute name="clusterInstall" type="String" value="null"/>
<prov:Attribute name="clusterNodeNames" type="String" value="null"/>
<prov:Attribute name="clusterName" type="String" value=""/>
<prov:Attribute name="username" type="String" value="oracle"/>
<prov:Attribute name="portValue" type="String" value="3872"/>
<prov:Attribute name="preInstallScript" type="String" value=""/>
<prov:Attribute name="runAsRootPreInstallScript" type="String" value="null"/>
<prov:Attribute name="postInstallScript" type="String" value=""/>
<prov:Attribute name="runAsRootPostInstallScript" type="String" value="null"/>
<prov:Attribute name="runRootSH" type="String" value="null"/>
<prov:Attribute name="virtualHost" type="String" value="on"/>
<prov:Attribute name="SLBHost" type="String" value=""/>
<prov:Attribute name="SLBPort" type="String" value=""/>
<prov:Attribute name="params" type="String" value=""/>
<prov:Attribute name="omsPassword" type="String" value="*****"/>
<prov:Attribute name="isOMS10205OrNewer" type="String" value="true"/>
<prov:Attribute name="NO_OF_PREREQ_XMLS_TO_PARSE" type="String" value="2"/>
<prov:Attribute name="appPrereqEntryPointDir" type="String" value="emagent_install"/>
<prov:Attribute name="platform" type="String" value="linux_x64"/>
</prov:Interview>
</prov:Provisioning>
</prov:Descriptions>
INFO: BaseServiceHandler.process Action code: 10
INFO: PrereqWaitServiceHandler._handleBasicPrereqCompletion:agentInstallProps
INFO: prodName: 1 noPrereqClone: false cloneSrcHomenull
INFO: isPrereqProd: false is clone: false isNoPrereqClonefalse
INFO: Its not fake deployment for prop : oracle.sysman.prov.agentpush.step1
INFO: PrereqWaitServiceHandler:_handleBasicPrereqCompletion: retryStarted: null
INFO: PrereqWaitServiceHandler:_handleBasicPrereqCompletion: recoveryStarted: null
INFO: noOfPrereqXmlsToParse:2
INFO: noOfPrereqXmlsToParseInt:2
INFO: finding entrypoint for dir: /u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/
INFO: /u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs//entrypoints exists
INFO: Local Prereq returning all entry points under :/u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/
INFO: Prereq parsing: entrypoint is : connectivity
INFO: Entry Point:/u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/entrypoints/connectivity has results.xml file:/u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/entrypoints/connectivity/local/results/agent/agent_prereq_results.xml
INFO: prereq resultXMLs found for node local are: [/u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/entrypoints/connectivity/local/results/agent/agent_prereq_results.xml]
INFO: Result file location is : /u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/entrypoints/connectivity/local/results/agent/agent_prereq_results.xml
INFO: no of prereqs executed as found in result file /u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/entrypoints/connectivity/local/results/agent/agent_prereq_results.xml : 43
INFO: the result found till now:
-------------------------------------------------------
}}}
{{{
[oracle@emgc11g agent_11010]$
[oracle@emgc11g agent_11010]$ scp Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip oracle@db1:/u01/app/oracle/
Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip 3% 15MB 12.4KB/s - stalled -^CKilled by signal 2.
[oracle@emgc11g agent_11010]$
[oracle@emgc11g agent_11010]$
[oracle@emgc11g agent_11010]$
[oracle@emgc11g agent_11010]$ scp -l 8192 Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip oracle@db1:/u01/app/oracle/
Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip 1% 4808KB 12.6KB/s 10:21:21 ET^CKilled by signal 2.
[oracle@emgc11g agent_11010]$ scp Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip oracle@db1:/u01/app/oracle/
Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip 0% 2208KB 1.5MB/s 05:05 ETA^CKilled by signal 2.
[oracle@emgc11g agent_11010]$
}}}
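the scp transfers above kept stalling even with the -l bandwidth cap (-l is in Kbit/s); rsync with --partial at least resumes from where a killed transfer left off (a sketch, assuming rsync is installed on both ends):
{{{
# resumable copy; --bwlimit throttles in KB/s if the link chokes under load
rsync -av --partial --progress --bwlimit=1024 \
  Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip oracle@db1:/u01/app/oracle/
}}}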
{{{
[oracle@emgc11g agent_11010]$ ssh -vvv db1
OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
debug1: Reading configuration data /home/oracle/.ssh/config
debug1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to db1 [192.168.203.221] port 22.
debug1: Connection established.
debug1: identity file /home/oracle/.ssh/identity type -1
debug3: Not a RSA1 key file /home/oracle/.ssh/id_rsa.
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug3: key_read: missing keytype
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug2: key_type_from_name: unknown key type '-----END'
debug3: key_read: missing keytype
debug1: identity file /home/oracle/.ssh/id_rsa type 1
debug1: identity file /home/oracle/.ssh/id_dsa type -1
debug1: loaded 3 keys
debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
debug1: match: OpenSSH_4.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_4.3
debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_init: found hmac-md5
debug1: kex: server->client aes128-cbc hmac-md5 none
debug2: mac_init: found hmac-md5
debug1: kex: client->server aes128-cbc hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug2: dh_gen_key: priv key bits set: 119/256
debug2: bits set: 507/1024
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug3: check_host_in_hostfile: filename /home/oracle/.ssh/known_hosts
debug3: check_host_in_hostfile: match line 1
debug3: check_host_in_hostfile: filename /home/oracle/.ssh/known_hosts
debug3: check_host_in_hostfile: match line 1
debug1: Host 'db1' is known and matches the RSA host key.
debug1: Found key in /home/oracle/.ssh/known_hosts:1
debug2: bits set: 495/1024
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /home/oracle/.ssh/identity ((nil))
debug2: key: /home/oracle/.ssh/id_rsa (0x7f037a015360)
debug2: key: /home/oracle/.ssh/id_dsa ((nil))
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug3: start over, passed a different list publickey,gssapi-with-mic,password
debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password
debug3: authmethod_lookup gssapi-with-mic
debug3: remaining preferred: publickey,keyboard-interactive,password
debug3: authmethod_is_enabled gssapi-with-mic
debug1: Next authentication method: gssapi-with-mic
debug3: Trying to reverse map address 192.168.203.221.
debug1: Unspecified GSS failure. Minor code may provide more information
Unknown code krb5 195
debug1: Unspecified GSS failure. Minor code may provide more information
Unknown code krb5 195
debug1: Unspecified GSS failure. Minor code may provide more information
Unknown code krb5 195
debug2: we did not send a packet, disable method
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /home/oracle/.ssh/identity
debug3: no such identity: /home/oracle/.ssh/identity
debug1: Offering public key: /home/oracle/.ssh/id_rsa
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Server accepts key: pkalg ssh-rsa blen 151
debug2: input_userauth_pk_ok: SHA1 fp 8c:3b:de:62:b0:8f:61:41:da:38:55:12:e7:7a:4d:0b:03:29:da:3f
debug3: sign_and_send_pubkey
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug3: ssh_session2_open: channel_new: 0
debug2: channel 0: send open
debug1: Entering interactive session.
debug2: callback start
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 0
debug3: tty_make_modes: ospeed 38400
debug3: tty_make_modes: ispeed 38400
debug3: tty_make_modes: 1 3
debug3: tty_make_modes: 2 28
debug3: tty_make_modes: 3 127
debug3: tty_make_modes: 4 21
debug3: tty_make_modes: 5 4
debug3: tty_make_modes: 6 0
debug3: tty_make_modes: 7 0
debug3: tty_make_modes: 8 17
debug3: tty_make_modes: 9 19
debug3: tty_make_modes: 10 26
debug3: tty_make_modes: 12 18
debug3: tty_make_modes: 13 23
debug3: tty_make_modes: 14 22
debug3: tty_make_modes: 18 15
debug3: tty_make_modes: 30 0
debug3: tty_make_modes: 31 0
debug3: tty_make_modes: 32 0
debug3: tty_make_modes: 33 0
debug3: tty_make_modes: 34 0
debug3: tty_make_modes: 35 0
debug3: tty_make_modes: 36 1
debug3: tty_make_modes: 37 0
debug3: tty_make_modes: 38 1
debug3: tty_make_modes: 39 0
debug3: tty_make_modes: 40 0
debug3: tty_make_modes: 41 0
debug3: tty_make_modes: 50 1
debug3: tty_make_modes: 51 1
debug3: tty_make_modes: 52 0
debug3: tty_make_modes: 53 1
debug3: tty_make_modes: 54 1
debug3: tty_make_modes: 55 1
debug3: tty_make_modes: 56 0
debug3: tty_make_modes: 57 0
debug3: tty_make_modes: 58 0
debug3: tty_make_modes: 59 1
debug3: tty_make_modes: 60 1
debug3: tty_make_modes: 61 1
debug3: tty_make_modes: 62 0
debug3: tty_make_modes: 70 1
debug3: tty_make_modes: 71 0
debug3: tty_make_modes: 72 1
debug3: tty_make_modes: 73 0
debug3: tty_make_modes: 74 0
debug3: tty_make_modes: 75 0
debug3: tty_make_modes: 90 1
debug3: tty_make_modes: 91 1
debug3: tty_make_modes: 92 0
debug3: tty_make_modes: 93 0
debug1: Sending environment.
debug3: Ignored env HOSTNAME
debug3: Ignored env SHELL
debug3: Ignored env TERM
debug3: Ignored env HISTSIZE
debug3: Ignored env KDE_NO_IPV6
debug3: Ignored env QTDIR
debug3: Ignored env QTINC
debug3: Ignored env USER
debug3: Ignored env LD_LIBRARY_PATH
debug3: Ignored env LS_COLORS
debug3: Ignored env ORACLE_SID
debug3: Ignored env ORACLE_BASE
debug3: Ignored env KDEDIR
debug3: Ignored env MAIL
debug3: Ignored env PATH
debug3: Ignored env INPUTRC
debug3: Ignored env PWD
debug1: Sending env LANG = en_US.UTF-8
debug2: channel 0: request env confirm 0
debug3: Ignored env KDE_IS_PRELINKED
debug3: Ignored env SSH_ASKPASS
debug3: Ignored env SHLVL
debug3: Ignored env HOME
debug3: Ignored env LOGNAME
debug3: Ignored env QTLIB
debug3: Ignored env CVS_RSH
debug3: Ignored env LESSOPEN
debug3: Ignored env ORACLE_HOME
debug3: Ignored env G_BROKEN_FILENAMES
debug3: Ignored env _
debug3: Ignored env OLDPWD
debug2: channel 0: request shell confirm 0
debug2: fd 3 setting TCP_NODELAY
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel 0: rcvd adjust 2097152
Last login: Mon Sep 26 11:07:21 2011 from 192.168.203.15
}}}
https://odd.blog/2008/12/10/how-to-fix-ssh-timeout-problems/
http://ask.systutorials.com/1694/how-to-enable-ssh-service-on-fedora-linux
https://docs.oseems.com/general/application/ssh/disable-timeout
http://www.cyberciti.biz/tips/open-ssh-server-connection-drops-out-after-few-or-n-minutes-of-inactivity.html
{{{
# edit the file
/etc/ssh/sshd_config

# send an application-level keepalive every 30s; allow up to 100 missed
# replies before dropping the session (TCPKeepAlive off avoids the
# spoofable TCP-level probes, the ClientAlive ones go through the
# encrypted channel)
TCPKeepAlive no
ClientAliveInterval 30
ClientAliveCountMax 100

# restart the service
systemctl stop sshd.service
systemctl start sshd.service
}}}
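the same effect can be had purely client-side, without touching the server (ServerAliveInterval is the standard OpenSSH client option):
{{{
# ~/.ssh/config
Host *
    ServerAliveInterval 30
    ServerAliveCountMax 100
}}}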
or do it in PuTTY
https://patrickmn.com/aside/how-to-keep-alive-ssh-sessions/
<<<
On Windows (PuTTY)
In your session properties, go to Connection and under Sending of null packets to keep session active, set Seconds between keepalives (0 to turn off) to e.g. 300 (5 minutes).
<<<
also install the SQL Developer keepalive extension
http://scristalli.github.io/SQL-Developer-4-keepalive/
Alliance Messaging Hub (AMH) - SWIFT's multi-network financial messaging solution
https://www.swift.com/our-solutions/interfaces-and-integration/alliance-messaging-hub
https://www.swift.com/
!current session
{{{
select res.*
from (
select *
from (
select
sys_context ('userenv','ACTION') ACTION,
sys_context ('userenv','AUDITED_CURSORID') AUDITED_CURSORID,
sys_context ('userenv','AUTHENTICATED_IDENTITY') AUTHENTICATED_IDENTITY,
sys_context ('userenv','AUTHENTICATION_DATA') AUTHENTICATION_DATA,
sys_context ('userenv','AUTHENTICATION_METHOD') AUTHENTICATION_METHOD,
sys_context ('userenv','BG_JOB_ID') BG_JOB_ID,
sys_context ('userenv','CLIENT_IDENTIFIER') CLIENT_IDENTIFIER,
sys_context ('userenv','CLIENT_INFO') CLIENT_INFO,
sys_context ('userenv','CURRENT_BIND') CURRENT_BIND,
sys_context ('userenv','CURRENT_EDITION_ID') CURRENT_EDITION_ID,
sys_context ('userenv','CURRENT_EDITION_NAME') CURRENT_EDITION_NAME,
sys_context ('userenv','CURRENT_SCHEMA') CURRENT_SCHEMA,
sys_context ('userenv','CURRENT_SCHEMAID') CURRENT_SCHEMAID,
sys_context ('userenv','CURRENT_SQL') CURRENT_SQL,
sys_context ('userenv','CURRENT_SQLn') CURRENT_SQLn,
sys_context ('userenv','CURRENT_SQL_LENGTH') CURRENT_SQL_LENGTH,
sys_context ('userenv','CURRENT_USER') CURRENT_USER,
sys_context ('userenv','CURRENT_USERID') CURRENT_USERID,
sys_context ('userenv','DATABASE_ROLE') DATABASE_ROLE,
sys_context ('userenv','DB_DOMAIN') DB_DOMAIN,
sys_context ('userenv','DB_NAME') DB_NAME,
sys_context ('userenv','DB_UNIQUE_NAME') DB_UNIQUE_NAME,
sys_context ('userenv','DBLINK_INFO') DBLINK_INFO,
sys_context ('userenv','ENTRYID') ENTRYID,
sys_context ('userenv','ENTERPRISE_IDENTITY') ENTERPRISE_IDENTITY,
sys_context ('userenv','FG_JOB_ID') FG_JOB_ID,
sys_context ('userenv','GLOBAL_CONTEXT_MEMORY') GLOBAL_CONTEXT_MEMORY,
sys_context ('userenv','GLOBAL_UID') GLOBAL_UID,
sys_context ('userenv','HOST') HOST,
sys_context ('userenv','IDENTIFICATION_TYPE') IDENTIFICATION_TYPE,
sys_context ('userenv','INSTANCE') INSTANCE,
sys_context ('userenv','INSTANCE_NAME') INSTANCE_NAME,
sys_context ('userenv','IP_ADDRESS') IP_ADDRESS,
sys_context ('userenv','ISDBA') ISDBA,
sys_context ('userenv','LANG') LANG,
sys_context ('userenv','LANGUAGE') LANGUAGE,
sys_context ('userenv','MODULE') MODULE,
sys_context ('userenv','NETWORK_PROTOCOL') NETWORK_PROTOCOL,
sys_context ('userenv','NLS_CALENDAR') NLS_CALENDAR,
sys_context ('userenv','NLS_CURRENCY') NLS_CURRENCY,
sys_context ('userenv','NLS_DATE_FORMAT') NLS_DATE_FORMAT,
sys_context ('userenv','NLS_DATE_LANGUAGE') NLS_DATE_LANGUAGE,
sys_context ('userenv','NLS_SORT') NLS_SORT,
sys_context ('userenv','NLS_TERRITORY') NLS_TERRITORY,
sys_context ('userenv','OS_USER') OS_USER,
sys_context ('userenv','POLICY_INVOKER') POLICY_INVOKER,
sys_context ('userenv','PROXY_ENTERPRISE_IDENTITY') PROXY_ENTERPRISE_IDENTITY,
sys_context ('userenv','PROXY_USER') PROXY_USER,
sys_context ('userenv','PROXY_USERID') PROXY_USERID,
sys_context ('userenv','SERVER_HOST') SERVER_HOST,
sys_context ('userenv','SERVICE_NAME') SERVICE_NAME,
sys_context ('userenv','SESSION_EDITION_ID') SESSION_EDITION_ID,
sys_context ('userenv','SESSION_EDITION_NAME') SESSION_EDITION_NAME,
sys_context ('userenv','SESSION_USER') SESSION_USER,
sys_context ('userenv','SESSION_USERID') SESSION_USERID,
sys_context ('userenv','SESSIONID') SESSIONID,
sys_context ('userenv','SID') SID,
sys_context ('userenv','STATEMENTID') STATEMENTID,
sys_context ('userenv','TERMINAL') TERMINAL
from dual
-- where sys_context ('userenv','SESSION_USER') not in ('SYS', 'XDB')   -- <<<<< filter by user
)
unpivot include nulls (
val for name in (action, audited_cursorid, authenticated_identity, authentication_data, authentication_method, bg_job_id, client_identifier, client_info, current_bind, current_edition_id, current_edition_name, current_schema, current_schemaid, current_sql, current_sqln, current_sql_length, current_user, current_userid, database_role, db_domain, db_name, db_unique_name, dblink_info, entryid, enterprise_identity, fg_job_id, global_context_memory, global_uid, host, identification_type, instance, instance_name, ip_address, isdba, lang, language, module, network_protocol, nls_calendar, nls_currency, nls_date_format, nls_date_language, nls_sort, nls_territory, os_user, policy_invoker, proxy_enterprise_identity, proxy_user, proxy_userid, server_host, service_name, session_edition_id, session_edition_name, session_user, session_userid, sessionid, sid, statementid, terminal)
)
) res;
}}}
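note that sys_context only ever describes the current session; to eyeball another session, the closest equivalents live in V$SESSION (a sketch, SID 54 is just an example):
{{{
select sid, serial#, username, osuser, machine, program, module, action, service_name
from v$session
where sid = 54;
}}}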
!other session
{{{
-- check a specific parameter in another session's optimizer environment
select name, value
from V$SES_OPTIMIZER_ENV
where sid=54
and name='parallel_force_local';
}}}
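and to list everything that session has changed away from the defaults (V$SES_OPTIMIZER_ENV carries an ISDEFAULT column):
{{{
select name, value
from V$SES_OPTIMIZER_ENV
where sid=54
and isdefault='NO';
}}}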
[img[ https://lh5.googleusercontent.com/-SKtDoT5Ipqs/TnwEjxvRpwI/AAAAAAAABWU/zmKYWQVdxE0/s288/networkmap.png ]]
system-config-samba can be used to share a Linux filesystem mount to Windows
- create a user on Samba with the same name as the Windows userid
--------------
http://www.drron.com.au/2010/01/16/a-note-about-wdtv-live-and-samba-shares/
http://www.reallylinux.com/docs/sambaserver.shtml
http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-samba-configuring.html
http://www.cyberciti.biz/tips/how-to-mount-remote-windows-partition-windows-share-under-linux.html
http://mybookworld.wikidot.com/start <-- WD hack
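a minimal smb.conf share for doing the same thing by hand (a sketch; the share name, path, and user here are made up for illustration):
{{{
# /etc/samba/smb.conf
[shared]
    ; the samba user must match the Windows userid
    valid users = karl
    path = /u01/share
    read only = no
    browseable = yes

# then create the samba password entry and restart the service:
#   smbpasswd -a karl
#   systemctl restart smb.service
}}}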
-- from http://www.perfvision.com/statspack/sp_10g.txt
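for reference, a report like the one below comes from taking two statspack snapshots and running spreport.sql between the snap ids (standard statspack usage, as the PERFSTAT user):
{{{
sqlplus perfstat/perfstat
exec statspack.snap;
-- ... let the workload run ...
exec statspack.snap;
@?/rdbms/admin/spreport.sql
}}}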
{{{
STATSPACK report for
Database DB Id Instance Inst Num Startup Time Release RAC
~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
1193559071 cdb10 1 27-Jul-07 11:03 10.2.0.1.0 NO
Host Name: tsukuba Num CPUs: 2 Phys Memory (MB): 6,092
~~~~
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- -------------------
Begin Snap: 114 30-Jul-07 15:00:06 36 16.9
End Snap: 116 30-Jul-07 17:00:05 41 24.8
Elapsed: 119.98 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 308M Std Block Size: 8K
Shared Pool Size: 128M Log Buffer: 6,066K
Load Profile Per Second Per Transaction
~~~~~~~~~~~~ --------------- ---------------
Redo size: 235,846.01 410,605.90
Logical reads: 6,095.13 10,611.57
Block changes: 1,406.37 2,448.49
Physical reads: 7.23 12.59
Physical writes: 25.45 44.31
User calls: 152.84 266.09
Parses: 3.78 6.58
Hard parses: 0.13 0.22
Sorts: 9.06 15.77
Logons: 0.04 0.06
Executes: 151.75 264.20
Transactions: 0.57
% Blocks changed per Read: 23.07 Recursive Call %: 21.61
Rollback per transaction %: 20.85 Rows per Sort: 52.20
Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.98
Buffer Hit %: 99.88 In-memory Sort %: 100.00
Library Hit %: 99.80 Soft Parse %: 96.67
Execute to Parse %: 97.51 Latch Hit %: 99.99
Parse CPU to Parse Elapsd %: 18.63 % Non-Parse CPU: 98.06
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 91.48 89.32
% SQL with executions>1: 91.88 83.52
% Memory for SQL w/exec>1: 97.02 64.37
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
PL/SQL lock timer 2,103 6,170 2934 41.0
log file parallel write 5,751 2,035 354 13.5
db file parallel write 16,343 1,708 104 11.4
log file sync 2,936 1,285 438 8.5
log buffer space 1,307 950 727 6.3
-------------------------------------------------------------
Host CPU (CPUs: 2)
~~~~~~~~ Load Average
Begin End User System Idle WIO WCPU
------- ------- ------- ------- ------- ------- --------
0.13 0.34 47.08 3.51 49.41 0.00 22.83
Instance CPU
~~~~~~~~~~~~
% of total CPU for Instance: 5.57
% of busy CPU for Instance: 11.02
%DB time waiting for CPU - Resource Mgr:
Memory Statistics Begin End
~~~~~~~~~~~~~~~~~ ------------ ------------
Host Mem (MB): 6,092.4 6,092.4
SGA use (MB): 468.0 468.0
PGA use (MB): 96.9 166.8
% Host Mem used for SGA+PGA: 9.3 10.4
-------------------------------------------------------------
Time Model System Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Ordered by % of DB time desc, Statistic name
Statistic Time (s) % of DB time
----------------------------------- -------------------- ------------
sql execute elapsed time 3,447.6 75.4
DB CPU 718.2 15.7
parse time elapsed 88.9 1.9
hard parse elapsed time 78.9 1.7
sequence load elapsed time 63.0 1.4
PL/SQL execution elapsed time 29.8 .7
hard parse (sharing criteria) elaps 1.4 .0
PL/SQL compilation elapsed time 1.1 .0
connection management call elapsed 0.9 .0
repeated bind elapsed time 0.0 .0
hard parse (bind mismatch) elapsed 0.0 .0
DB time 4,574.2
background elapsed time 3,976.2
background cpu time 84.7
-------------------------------------------------------------
Wait Events DB/Inst: CDB10/cdb10 Snaps: 114-116
-> s - second, cs - centisecond, ms - millisecond, us - microsecond
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
--------------------------------- ------------ ------ ---------- ------ --------
PL/SQL lock timer 2,103 100 6,170 2934 0.5
log file parallel write 5,751 0 2,035 354 1.4
db file parallel write 16,343 0 1,708 104 4.0
log file sync 2,936 33 1,285 438 0.7
log buffer space 1,307 49 950 727 0.3
SQL*Net message from dblink 8,990 0 681 76 2.2
db file sequential read 34,436 0 605 18 8.3
enq: RO - fast object reuse 147 30 160 1087 0.0
log file switch (checkpoint incom 200 62 153 763 0.0
local write wait 437 26 136 310 0.1
control file parallel write 3,596 0 109 30 0.9
log file switch completion 199 28 104 522 0.0
buffer busy waits 163 32 69 425 0.0
db file scattered read 2,644 0 30 11 0.6
SQL*Net more data to dblink 3,507 0 30 9 0.8
os thread startup 97 8 20 211 0.0
direct path write 422 0 17 40 0.1
direct path write temp 106 0 11 104 0.0
enq: CF - contention 24 4 8 337 0.0
control file sequential read 56,057 0 7 0 13.6
SQL*Net break/reset to client 2,670 0 6 2 0.6
direct path read temp 50 0 5 100 0.0
db file parallel read 4 0 2 376 0.0
read by other session 187 0 1 8 0.0
log file switch (private strand f 4 0 1 276 0.0
single-task message 5 0 1 156 0.0
log file single write 74 0 1 10 0.0
latch: In memory undo latch 1 0 1 643 0.0
library cache pin 12 0 1 50 0.0
LGWR wait for redo copy 185 16 1 3 0.0
rdbms ipc reply 462 0 1 1 0.1
direct path read 259 0 0 2 0.1
latch: object queue header operat 3 0 0 135 0.0
reliable message 81 0 0 4 0.0
library cache load lock 32 0 0 7 0.0
SQL*Net more data to client 3,565 0 0 0 0.9
kksfbc child completion 2 100 0 51 0.0
latch: cache buffers chains 6 0 0 14 0.0
latch: shared pool 9 0 0 5 0.0
row cache lock 61 0 0 1 0.0
log file sequential read 74 0 0 0 0.0
SQL*Net message to dblink 8,991 0 0 0 2.2
latch: library cache 14 0 0 1 0.0
undo segment extension 477 100 0 0 0.1
latch free 1 0 0 3 0.0
SQL*Net message from client 1,094,839 0 113,088 103 264.8
Streams AQ: qmn slave idle wait 257 0 7,038 27386 0.1
Streams AQ: qmn coordinator idle 524 51 7,038 13431 0.1
wait for unread message on broadc 7,156 100 7,028 982 1.7
virtual circuit status 240 100 7,018 29243 0.1
Wait Events DB/Inst: CDB10/cdb10 Snaps: 114-116
-> s - second, cs - centisecond, ms - millisecond, us - microsecond
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
--------------------------------- ------------ ------ ---------- ------ --------
Streams AQ: waiting for messages 1,453 98 7,003 4820 0.4
Streams AQ: waiting for time mana 89 44 6,805 76455 0.0
jobq slave wait 2,285 98 6,661 2915 0.6
class slave wait 4 100 20 4889 0.0
SQL*Net message to client 1,094,841 0 1 0 264.8
SQL*Net more data from client 64 0 0 0 0.0
-------------------------------------------------------------
Background Wait Events DB/Inst: CDB10/cdb10 Snaps: 114-116
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
--------------------------------- ------------ ------ ---------- ------ --------
log file parallel write 5,755 0 2,035 354 1.4
db file parallel write 16,343 0 1,708 104 4.0
control file parallel write 3,597 0 109 30 0.9
os thread startup 96 8 20 212 0.0
direct path write 259 0 17 66 0.1
events in waitclass Other 444 7 9 20 0.1
log file switch (checkpoint incom 8 88 8 955 0.0
log buffer space 11 27 6 562 0.0
control file sequential read 5,882 0 1 0 1.4
log file single write 74 0 1 10 0.0
db file sequential read 71 0 1 10 0.0
direct path read 259 0 0 2 0.1
log file switch completion 2 0 0 162 0.0
log file sequential read 74 0 0 0 0.0
latch: library cache 1 0 0 1 0.0
buffer busy waits 16 0 0 0 0.0
rdbms ipc message 28,583 76 57,513 2012 6.9
Streams AQ: qmn slave idle wait 257 0 7,038 27386 0.1
Streams AQ: qmn coordinator idle 524 51 7,038 13431 0.1
pmon timer 2,461 99 7,025 2854 0.6
smon timer 344 3 6,942 20181 0.1
Streams AQ: waiting for time mana 89 44 6,805 76455 0.0
class slave wait 1 100 5 4891 0.0
-------------------------------------------------------------
Wait Event Histogram DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Total Waits - units: K is 1000, M is 1000000, G is 1000000000
-> % of Waits - column heading: <=1s is truly <1024ms, >1s is truly >=1024ms
-> % of Waits - value: .0 indicates value was <.05%, null is truly 0
-> Ordered by Event (idle events last)
Total ----------------- % of Waits ------------------
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
LGWR wait for redo copy 185 78.4 2.2 1.6 7.6 10.3
PL/SQL lock timer 2103 100.0
SQL*Net break/reset to cli 2670 97.0 .7 .1 .4 .3 .2 1.2
SQL*Net message from dblin 8990 23.9 49.8 1.4 7.9 10.5 2.6 2.2 1.5
SQL*Net message to dblink 8991 100.0
SQL*Net more data from dbl 6 100.0
SQL*Net more data to clien 3570 100.0
SQL*Net more data to dblin 3507 97.1 .3 .2 2.1 .2
buffer busy waits 163 41.1 .6 2.5 1.8 54.0
control file parallel writ 3597 27.8 53.4 18.8 .0
control file sequential re 56K 99.7 .0 .0 .1 .1 .0 .1 .0
cursor: mutex S 10 100.0
cursor: mutex X 5 100.0
db file parallel read 4 75.0 25.0
db file parallel write 16K .2 1.0 3.8 15.5 15.0 13.7 49.6 1.3
db file scattered read 2649 59.9 10.9 5.4 5.8 7.9 4.8 5.1 .2
db file sequential read 34K 45.5 1.8 5.0 16.1 15.2 7.9 8.3 .2
direct path read 259 97.3 .4 .8 .4 .8 .4
direct path read temp 50 56.0 2.0 2.0 4.0 10.0 24.0 2.0
direct path write 422 87.9 .5 4.5 3.6 2.8 .7
direct path write temp 106 57.5 40.6 1.9
enq: CF - contention 24 41.7 4.2 4.2 4.2 33.3 12.5
enq: RO - fast object reus 147 7.5 9.5 48.3 34.7
kksfbc child completion 2 100.0
latch free 1 100.0
latch: In memory undo latc 1 100.0
latch: cache buffers chain 6 83.3 16.7
latch: cache buffers lru c 1 100.0
latch: library cache 14 57.1 42.9
latch: object queue header 3 33.3 66.7
latch: shared pool 9 22.2 33.3 33.3 11.1
library cache load lock 32 21.9 15.6 25.0 9.4 18.8 3.1 6.3
library cache pin 12 16.7 8.3 16.7 16.7 41.7
local write wait 437 2.3 27.7 24.5 45.5
log buffer space 1307 .3 .1 .2 .5 .4 98.5
log file parallel write 5752 .2 1.7 5.8 28.1 19.5 7.5 24.2 13.0
log file sequential read 74 97.3 1.4 1.4
log file single write 74 5.4 44.6 12.2 25.7 5.4 6.8
log file switch (checkpoin 200 1.5 .5 98.0
log file switch (private s 4 100.0
log file switch completion 199 3.5 1.0 1.5 1.5 92.5
log file sync 2938 .9 1.2 4.2 8.7 11.1 5.8 68.1
os thread startup 96 100.0
rdbms ipc reply 462 98.7 .2 .4 .2 .2 .2
read by other session 187 40.6 4.3 11.2 19.8 14.4 4.3 5.3
reliable message 81 79.0 17.3 1.2 2.5
row cache lock 61 95.1 1.6 1.6 1.6
single-task message 5 100.0
undo segment extension 477 100.0
SQL*Net message from clien 1094K 96.8 1.0 .3 .2 .1 .1 1.1 .5
SQL*Net message to client 1094K 100.0 .0 .0 .0 .0 .0
Wait Event Histogram DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Total Waits - units: K is 1000, M is 1000000, G is 1000000000
-> % of Waits - column heading: <=1s is truly <1024ms, >1s is truly >=1024ms
-> % of Waits - value: .0 indicates value was <.05%, null is truly 0
-> Ordered by Event (idle events last)
Total ----------------- % of Waits ------------------
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
SQL*Net more data from cli 64 100.0
Streams AQ: qmn coordinato 524 48.9 .2 51.0
Streams AQ: qmn slave idle 257 100.0
Streams AQ: waiting for me 1453 .1 1.2 98.8
Streams AQ: waiting for ti 89 24.7 11.2 64.0
class slave wait 4 100.0
dispatcher timer 120 100.0
jobq slave wait 2285 .0 .5 99.5
pmon timer 2461 1.5 .0 .0 .1 .1 .9 97.3
rdbms ipc message 28K 5.2 .9 1.0 1.9 1.4 1.4 36.1 52.2
smon timer 344 45.6 .6 .9 4.7 .3 .6 24.4 23.0
virtual circuit status 240 100.0
wait for unread message on 7157 .0 .0 .0 .0 99.9 .1
-------------------------------------------------------------
SQL ordered by CPU DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total DB CPU (s): 718
-> Captured SQL accounts for 47.3% of Total DB CPU
-> SQL reported below exceeded 1.0% of Total DB CPU
CPU CPU per Elapsd Old
Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
14.01 1,187 0.01 2.0 14.32 0 2331695545
Module: Lab128
--lab128
select replace(stat_name,'TICKS','TIME') stat_name,val
ue from v$osstat
where substr(stat_name,1,3) !='AVG'
12.95 588 0.02 1.8 14.65 0 2004329213
Module: Lab128
--lab128
select latch#,gets,misses,sleeps,immediate_gets,immedi
ate_misses,
waits_holding_latch,spin_gets
from v$latch where
gets+immediate_gets>0
12.61 2 6.31 1.8 181.43 11,010 4116021597
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
broken BOOLEAN := FALSE; BEGIN ash.collect(3,1200); :mydate := n
ext_date; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
12.59 666 0.02 1.8 13.77 0 3872095438
Module: Realtime Connection
select begin_time, wait_class#, (time_waited)/(intsize_cs
ec/100) from v$waitclassmetric union all select begin_time, -1,
value from v$sysmetric where metric_name = 'CPU Usage Per Sec' a
nd group_id = 2 order by begin_time, wait_class#
10.39 6,480 0.00 1.4 10.55 0 19176310
Module: Lab128
--lab128
select sid,ownerid,user#,sql_id,sql_child_number,seq#,
event#
,serial#,row_wait_obj#,row_wait_file#,row_wait_block#,ro
w_wait_row#,blocking_session
,service_name,p1,p2,p3,wait_time,s
econds_in_wait,decode(state,'WAITING',0,1) state
,machine,progr
am
from v$session
where status='ACTIVE' and username is not n
9.56 1,101 0.01 1.3 13.03 139,744 3286148528
select c.name, u.name from con$ c, cdef$ cd, user$ u where c.co
n# = cd.con# and cd.enabled = :1 and c.owner# = u.user#
9.24 8 1.15 1.3 13.19 14,321 781079612
Call CALC_NEW_DOWN_PROF(:1, :2, :3)
8.69 2,105 0.00 1.2 11.95 0 3802278413
SELECT A.*, :B1 SAMPLE_TIME FROM V$ASHNOW A
8.45 336 0.03 1.2 9.66 0 3922007841
Module: Realtime Connection
SELECT event#, sql_id, sql_plan_hash_value, sql_opcode, session_
id, session_serial#, module, action, client_id, DECODE(wait_time
, 0, 'W', 'C'), 1, time_waited, service_hash, user_id, program,
sample_time, p1, p2, p3, current_file#, current_obj#, current_bl
ock#, qc_session_id, qc_instance_id FROM v$active_session_histor
8.41 15,240 0.00 1.2 28.50 338,766 4175898638
SQL ordered by CPU DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total DB CPU (s): 718
-> Captured SQL accounts for 47.3% of Total DB CPU
-> SQL reported below exceeded 1.0% of Total DB CPU
CPU CPU per Elapsd Old
Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
UPDATE TOPOLOGY_LINK SET DATETO=sysdate, STATEID=0 WHERE TOPOLOG
YID=:1 AND PARENTID=:2 AND STATEID=1
7.84 2 3.92 1.1 48.30 381,260 2027985784
UPDATE TMP_CALC_HFC_SLOW_CM_TMP SET STATUS_ERROR = 1 WHERE DOCSI
FSIGQUNERROREDS < PREV_DOCSIFSIGQUNERROREDS OR DOCSIFSIGQCORRECT
EDS < PREV_DOCSIFSIGQCORRECTEDS OR DOCSIFSIGQUNCORRECTABLES < PR
EV_DOCSIFSIGQUNCORRECTABLES OR SYSUPTIME <= PREV_SYSUPTIME OR (
DOCSIFSIGQUNERROREDS - PREV_DOCSIFSIGQUNERROREDS ) + ( DOCSIFSIG
7.51 479 0.02 1.0 8.57 0 3714876926
Module: Lab128
--lab128
select sql_id,plan_hash_value,parse_calls,disk_reads,d
irect_writes,
buffer_gets,rows_processed,serializable_aborts,fe
tches,executions,
end_of_fetch_count,loads,invalidations,px_ser
vers_executions,
cpu_time,elapsed_time,application_wait_time,co
ncurrency_wait_time,
cluster_wait_time,user_io_wait_time,plsql_
7.37 1 7.37 1.0 10.12 12,690 75475900
SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, C
M_ID, MAX(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(U
P_ID) UP_ID, MAX(DOWN_ID) DOWN_ID,
MAX(MAC_ID) MAC_ID, MAX(CMTS_ID) CMTS_
ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY
7.35 390 0.02 1.0 8.08 0 3755369401
Module: Realtime Connection
select metric_id, value from v$sysmetric where intsize_csec > 59
00 and group_id = 2 and metric_id in (2092,
2093, 2125, 2126,
2100, 2124,
2127, 2128)
7.21 1,177 0.01 1.0 7.42 0 2760020466
Module: Lab128
--lab128
select indx,ksleswts,kslestmo,round(kslestim / 10000)
from x$kslei
where inst_id=userenv('INSTANCE') and kslestim>0
-------------------------------------------------------------
SQL ordered by Elapsed DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total DB Time (s): 4,574
-> Captured SQL accounts for 46.2% of Total DB Time
-> SQL reported below exceeded 1.0% of Total DB Time
Elapsed Elap per CPU Old
Time (s) Executions Exec (s) %Total Time (s) Physical Reads Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
244.55 8 30.57 5.3 3.89 573 1916282772
Call CALC_DELETE_MEDIUM_RAWDATA(:1, :2)
181.43 2 90.72 4.0 12.61 1 4116021597
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
broken BOOLEAN := FALSE; BEGIN ash.collect(3,1200); :mydate := n
ext_date; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
137.15 2 68.58 3.0 6.00 4,446 3327781611
Call CALC_DELETE_SLOW_RAWDATA(:1, :2)
128.72 2,105 0.06 2.8 1.20 1 1692944121
UPDATE ASH.DBIDS@REPO SET ASHSEQ = :B2 WHERE DBID = :B1
93.17 112 0.83 2.0 3.45 713 2689373535
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
broken BOOLEAN := FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_J
OB_PROCS(); :mydate := next_date; IF broken THEN :b := 1; ELSE :
b := 0; END IF; END;
82.31 2 41.16 1.8 6.68 345 614087306
INSERT INTO TMP_CALC_HFC_SLOW_CM_LAST SELECT * FROM TMP_CALC_HFC
_SLOW_CM_LAST_TMP
70.80 2 35.40 1.5 4.74 242 237730869
DELETE FROM TMP_CALC_QOS_SLOW_CM_LAST
64.02 2 32.01 1.4 4.67 323 3329113987
DELETE FROM TMP_CALC_HFC_SLOW_CM_LAST
61.19 1 61.19 1.3 5.63 1,555 2149686744
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE
_CMS, BITSPERSYMBOL, TXPOWER_UP FROM CM_POWER_2 power, TOPOLOGY_
LINK link, UPSTREAM_CHANNEL channel WHERE power.SECONDID = :1 AN
D link.TOPOLOGYID = power.TOPOLOGYID AND link.PARENTLEN = 1 AND
link.STATEID = 1 AND link.LINKTYPEID = 1 AND link.PARENTID = cha
60.08 2 30.04 1.3 6.79 5 982709942
INSERT INTO TMP_CALC_QOS_SLOW_CM_LAST SELECT * FROM TMP_CALC_QOS
_SLOW_CM_LAST_TMP
48.30 2 24.15 1.1 7.84 0 2027985784
UPDATE TMP_CALC_HFC_SLOW_CM_TMP SET STATUS_ERROR = 1 WHERE DOCSI
FSIGQUNERROREDS < PREV_DOCSIFSIGQUNERROREDS OR DOCSIFSIGQCORRECT
EDS < PREV_DOCSIFSIGQCORRECTEDS OR DOCSIFSIGQUNCORRECTABLES < PR
EV_DOCSIFSIGQUNCORRECTABLES OR SYSUPTIME <= PREV_SYSUPTIME OR (
DOCSIFSIGQUNERROREDS - PREV_DOCSIFSIGQUNERROREDS ) + ( DOCSIFSIG
-------------------------------------------------------------
SQL ordered by Gets DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> End Buffer Gets Threshold: 10000 Total Buffer Gets: 43,878,832
-> Captured SQL accounts for 17.4% of Total Buffer Gets
-> SQL reported below exceeded 1.0% of Total Buffer Gets
CPU Elapsd Old
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
1,214,045 1 1,214,045.0 2.8 5.45 5.91 1551069132
select errors.TOPOLOGYID, errors.SAMPLE_LENGTH, UNIQUE_CMS, ACTI
VE_CMS, CHANNELWIDTH, BITSPERSYMBOL, SNR_DOWN, RXPOWER_DOWN FROM
CM_ERRORS errors, CM_POWER_2 power, TOPOLOGY_LINK link, DOWNSTR
EAM_CHANNEL channel where errors.SECONDID = power.SECONDID AND e
rrors.SECONDID = :1 AND errors.TOPOLOGYID = power.TOPOLOGYID AND
1,065,067 1 1,065,067.0 2.4 6.32 39.14 2109849972
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE
_CMS, CHANNELWIDTH, RXPOWER_UP, RXPOWER UPSTREAM_AVG_RX FROM CM_
POWER_1 power, TOPOLOGY_LINK link, UPSTREAM_CHANNEL channel, UPS
TREAM_POWER_1 upstream_rx WHERE power.SECONDID = :1 and power.SE
CONDID = upstream_rx.secondid AND link.TOPOLOGYID = power.TOPOLO
762,573 1 762,573.0 1.7 5.63 61.19 2149686744
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE
_CMS, BITSPERSYMBOL, TXPOWER_UP FROM CM_POWER_2 power, TOPOLOGY_
LINK link, UPSTREAM_CHANNEL channel WHERE power.SECONDID = :1 AN
D link.TOPOLOGYID = power.TOPOLOGYID AND link.PARENTLEN = 1 AND
link.STATEID = 1 AND link.LINKTYPEID = 1 AND link.PARENTID = cha
522,314 2 261,157.0 1.2 6.79 60.08 982709942
INSERT INTO TMP_CALC_QOS_SLOW_CM_LAST SELECT * FROM TMP_CALC_QOS
_SLOW_CM_LAST_TMP
503,642 2 251,821.0 1.1 6.68 82.31 614087306
INSERT INTO TMP_CALC_HFC_SLOW_CM_LAST SELECT * FROM TMP_CALC_HFC
_SLOW_CM_LAST_TMP
-------------------------------------------------------------
SQL ordered by Reads DB/Inst: CDB10/cdb10 Snaps: 114-116
-> End Disk Reads Threshold: 1000 Total Disk Reads: 52,072
-> Captured SQL accounts for 75.3% of Total Disk Reads
-> SQL reported below exceeded 1.0% of Total Disk Reads
CPU Elapsd Old
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
4,709 2 2,354.5 9.0 1.59 7.07 694687570
Call CALC_DELETE_OLD_DATA(:1)
4,446 2 2,223.0 8.5 6.00 137.15 3327781611
Call CALC_DELETE_SLOW_RAWDATA(:1, :2)
4,141 8 517.6 8.0 5.22 17.10 591467433
Call CALC_TOPOLOGY_MEDIUM(:1, :2, :3, :4)
3,198 8 399.8 6.1 0.76 4.02 323802731
DELETE FROM TMP_TOP_MED_DN WHERE DOWNID IN( SELECT TOPOLOGYID FR
OM ( SELECT PARENTID,TOPOLOGYID,ROW_NUMBER() OVER(PARTITION BY P
ARENTID ORDER BY DATEFROM DESC) RN FROM TOPOLOGY_LINK WHERE STAT
EID=1 AND TOPOLOGYID_NODETYPEID=128 AND PARENTID_NODETYPEID=127
) WHERE RN>1 )
1,707 1 1,707.0 3.3 2.09 11.86 2155459437
INSERT /*+ APPEND */ INTO CM_RAWDATA SELECT * FROM CM_RAWDATA_SH
ADOW
1,555 1 1,555.0 3.0 5.63 61.19 2149686744
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE
_CMS, BITSPERSYMBOL, TXPOWER_UP FROM CM_POWER_2 power, TOPOLOGY_
LINK link, UPSTREAM_CHANNEL channel WHERE power.SECONDID = :1 AN
D link.TOPOLOGYID = power.TOPOLOGYID AND link.PARENTLEN = 1 AND
link.STATEID = 1 AND link.LINKTYPEID = 1 AND link.PARENTID = cha
1,523 8 190.4 2.9 0.17 1.15 2531676910
Module: Lab128
--lab128
select se.fa_se, uit.ui, uipt.uip, uist.uis, fr_s.fr_s
e, t.dt from (select /*+ all_rows */ count(*) fa_se from (select
ts#,max(length) m from sys.fet$ group by ts#) f, sys.seg$ s whe
re s.ts#=f.ts# and extsize>m) se, (select count(*) ui from sys.i
nd$ where bitand(flags,1)=1) uit, (select count(*) uip from sys.
1,292 2 646.0 2.5 0.16 0.54 1794345920
DELETE FROM CM_BYTES WHERE SECONDID <= :B1
1,151 2 575.5 2.2 1.97 32.12 2763442576
INSERT INTO CM_BYTES SELECT SECONDID, CMID, SAMPLE_LENGTH, BYTES
_DOWN, BYTES_UP FROM TMP_CALC_QOS_SLOW_CM WHERE BYTES_DOWN>=0 AN
D BYTES_UP>=0
1,114 2 557.0 2.1 1.77 7.82 465647697
INSERT INTO CM_QOS_PROF SELECT :B2 , C.TOPOLOGYID, :B2 - :B1 , C
.NODE_PROFILE_ID, C.QOS_PROF_IDX FROM CM_QOS_PROF C WHERE C.TOPO
LOGYID IN ( SELECT CMID FROM TMP_TOP_SLOW_CM MINUS SELECT TOPOLO
GYID FROM CM_QOS_PROF WHERE SECONDID = :B2 ) AND C.SECONDID = :B
1
1,089 1 1,089.0 2.1 0.49 31.46 3396396246
select TOPOLOGYID, CER from CM_VA where SECONDID = :1 and CER IS
SQL ordered by Reads DB/Inst: CDB10/cdb10 Snaps: 114-116
-> End Disk Reads Threshold: 1000 Total Disk Reads: 52,072
-> Captured SQL accounts for 75.3% of Total Disk Reads
-> SQL reported below exceeded 1.0% of Total Disk Reads
CPU Elapsd Old
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
NOT NULL
1,064 2 532.0 2.0 1.82 36.87 18649301
INSERT INTO CM_ERRORS SELECT SECONDID, CMID, SAMPLE_LENGTH, UNER
ROREDS, CORRECTEDS, UNCORRECTABLES, SNR FROM TMP_CALC_HFC_SLOW_C
M
944 1 944.0 1.8 4.62 6.30 2293375291
Module: Admin Connection
select output from table(dbms_workload_repository.ash_report_htm
l(:1, :2, :3, :4, 0))
900 1 900.0 1.7 6.32 39.14 2109849972
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE
_CMS, CHANNELWIDTH, RXPOWER_UP, RXPOWER UPSTREAM_AVG_RX FROM CM_
POWER_1 power, TOPOLOGY_LINK link, UPSTREAM_CHANNEL channel, UPS
TREAM_POWER_1 upstream_rx WHERE power.SECONDID = :1 and power.SE
CONDID = upstream_rx.secondid AND link.TOPOLOGYID = power.TOPOLO
874 8 109.3 1.7 1.72 4.35 1599796656
INSERT INTO TMP_TOP_MED_DN SELECT M.CMTSID, M.VENDOR_DESC, M.MOD
EL_DESC, MAC_L.TOPOLOGYID, DOWN_L.TOPOLOGYID, M.UP_SNR_CNR_A3, M
.UP_SNR_CNR_A2, M.UP_SNR_CNR_A1, M.UP_SNR_CNR_A0, M.MAC_SLOTS_OP
EN, M.MAC_SLOTS_USED, M.CMTS_REBOOT, 0 FROM TMP_TOP_MED_CMTS M,
TOPOLOGY_LINK DOWN_L, TOPOLOGY_NODE DOWN_N, TOPOLOGY_LINK MAC_L
757 1 757.0 1.5 0.46 2.59 2347914587
SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP,
M.TOPOLOGYID UP_ID, T.UP_DESC UP
_DESC, T.MAC_ID MAC_ID, T.CMTS
_ID CMTS_ID, M.MAX_PERCENT_UTIL,
M.MAX_PACKETS_PER_SEC, M.AVG_PACKET_SIZE,
713 112 6.4 1.4 3.45 93.17 2689373535
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
broken BOOLEAN := FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_J
OB_PROCS(); :mydate := next_date; IF broken THEN :b := 1; ELSE :
b := 0; END IF; END;
690 2 345.0 1.3 0.34 2.91 4032294671
DELETE FROM CM_POLL_STATUS WHERE LAST_SAMPLETIME <= SYSDATE-30
686 2 343.0 1.3 0.07 0.29 3762878399
DELETE FROM MISSED_RAWDATA WHERE SAMPLETIME <= TRUNC(SYSDATE,'hh
') - 2/24
628 1 628.0 1.2 0.91 1.49 856816204
Module: Admin Connection
SELECT ash.current_obj#, ash.dim1_percentage, ash.event, ash.dim
12_percentage, dbms_ash_internal.get_obj_name(
my_obj.owner, my_obj.object_name,
my_obj.subobject_name, my_
SQL ordered by Reads DB/Inst: CDB10/cdb10 Snaps: 114-116
-> End Disk Reads Threshold: 1000 Total Disk Reads: 52,072
-> Captured SQL accounts for 75.3% of Total Disk Reads
-> SQL reported below exceeded 1.0% of Total Disk Reads
CPU Elapsd Old
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
obj.object_type), my_obj.tablespace_name FROM ( SELECT d12aa
573 8 71.6 1.1 3.89 244.55 1916282772
Call CALC_DELETE_MEDIUM_RAWDATA(:1, :2)
-------------------------------------------------------------
SQL ordered by Executions DB/Inst: CDB10/cdb10 Snaps: 114-116
-> End Executions Threshold: 100 Total Executions: 1,092,470
-> Captured SQL accounts for 11.3% of Total Executions
-> SQL reported below exceeded 1.0% of Total Executions
CPU per Elap per Old
Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value
------------ --------------- ---------------- ----------- ---------- ----------
15,240 15,240 1.0 0.00 0.00 1999961487
INSERT INTO TOPOLOGY_LINK (TOPOLOGYID, TOPOLOGYID_NODETYPEID, PA
RENTID, PARENTID_NODETYPEID, LINKTYPEID, PARENTLEN, DATEFROM, DA
TETO, STATEID) VALUES (:1,:2,:3,:4,1,:5,sysdate,TO_DATE('9999-12
-31 23:59:59', 'YYYY-MM-DD HH24:MI:SS'),1)
15,240 15,240 1.0 0.00 0.00 4175898638
UPDATE TOPOLOGY_LINK SET DATETO=sysdate, STATEID=0 WHERE TOPOLOG
YID=:1 AND PARENTID=:2 AND STATEID=1
-------------------------------------------------------------
SQL ordered by Parse Calls DB/Inst: CDB10/cdb10 Snaps: 114-116
-> End Parse Calls Threshold: 1000 Total Parse Calls: 27,215
-> Captured SQL accounts for 75.2% of Total Parse Calls
-> SQL reported below exceeded 1.0% of Total Parse Calls
% Total Old
Parse Calls Executions Parses Hash Value
------------ ------------ -------- ----------
1,310 1,310 4.81 1254950678
select file# from file$ where ts#=:1
1,235 1,235 4.54 3404108640
ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED
1,235 1,235 4.54 3742653144
select sysdate from dual
1,101 1,101 4.05 3286148528
select c.name, u.name from con$ c, cdef$ cd, user$ u where c.co
n# = cd.con# and cd.enabled = :1 and c.owner# = u.user#
944 944 3.47 2850132846
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts
=:8,extsize=:9,extpct=:10,user#=:11,iniexts=:12,lists=decode(:13
, 65535, NULL, :13),groups=decode(:14, 65535, NULL, :14), cacheh
int=:15, hwmincr=:16, spare1=DECODE(:17,0,NULL,:17),scanhint=:18
where ts#=:1 and file#=:2 and block#=:3
640 640 2.35 2803285
update sys.mon_mods$ set inserts = inserts + :ins, updates = upd
ates + :upd, deletes = deletes + :del, flags = (decode(bitand(fl
ags, :flag), :flag, flags, flags + :flag)), drop_segments = drop
_segments + :dropseg, timestamp = :time where obj# = :objn
640 640 2.35 2396279102
lock table sys.mon_mods$ in exclusive mode nowait
548 548 2.01 4143084494
select privilege#,level from sysauth$ connect by grantee#=prior
privilege# and privilege#>0 start with grantee#=:1 and privilege
#>0
418 418 1.54 794436051
Module: OEM.SystemPool
SELECT INSTANTIABLE, supertype_owner, supertype_name, LOCAL_ATTR
IBUTES FROM all_types WHERE type_name = :1 AND owner = :2
415 2 1.52 260339297
insert into sys.col_usage$ values ( :objn, :coln, decode(bit
and(:flag,1),0,0,1), decode(bitand(:flag,2),0,0,1), decode(b
itand(:flag,4),0,0,1), decode(bitand(:flag,8),0,0,1), decode
(bitand(:flag,16),0,0,1), decode(bitand(:flag,32),0,0,1), :t
ime)
415 415 1.52 2554034351
lock table sys.col_usage$ in exclusive mode nowait
415 1,327 1.52 3665763022
update sys.col_usage$ set equality_preds = equality_preds
+ decode(bitand(:flag,1),0,0,1), equijoin_preds = equijoi
SQL ordered by Parse Calls DB/Inst: CDB10/cdb10 Snaps: 114-116
-> End Parse Calls Threshold: 1000 Total Parse Calls: 27,215
-> Captured SQL accounts for 75.2% of Total Parse Calls
-> SQL reported below exceeded 1.0% of Total Parse Calls
% Total Old
Parse Calls Executions Parses Hash Value
------------ ------------ -------- ----------
n_preds + decode(bitand(:flag,2),0,0,1), nonequijoin_preds
= nonequijoin_preds + decode(bitand(:flag,4),0,0,1), range_pre
ds = range_preds + decode(bitand(:flag,8),0,0,1),
396 396 1.46 1348827743
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#
,iniexts,NVL(lists,65535),NVL(groups,65535),cachehint,hwmincr, N
VL(spare1,0),NVL(scanhint,0) from seg$ where ts#=:1 and file#=:2
and block#=:3
-------------------------------------------------------------
Instance Activity Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
CPU used by this session 73,235 10.2 17.7
CPU used when call started 69,804 9.7 16.9
CR blocks created 2,170 0.3 0.5
Cached Commit SCN referenced 5,439 0.8 1.3
Commit SCN cached 37 0.0 0.0
DB time 3,354,438 466.0 811.2
DBWR checkpoint buffers written 92,389 12.8 22.3
DBWR checkpoints 129 0.0 0.0
DBWR object drop buffers written 5,818 0.8 1.4
DBWR revisited being-written buff 565 0.1 0.1
DBWR thread checkpoint buffers wr 58,196 8.1 14.1
DBWR transaction table writes 294 0.0 0.1
DBWR undo block writes 100,810 14.0 24.4
IMU CR rollbacks 236 0.0 0.1
IMU Flushes 3,876 0.5 0.9
IMU Redo allocation size 8,979,320 1,247.3 2,171.5
IMU commits 2,255 0.3 0.6
IMU contention 47 0.0 0.0
IMU ktichg flush 24 0.0 0.0
IMU pool not allocated 415 0.1 0.1
IMU recursive-transaction flush 15 0.0 0.0
IMU undo allocation size 16,377,144 2,274.9 3,960.6
IMU- failed to get a private stra 415 0.1 0.1
PX local messages recv'd 0 0.0 0.0
PX local messages sent 0 0.0 0.0
SMON posted for undo segment shri 20 0.0 0.0
SQL*Net roundtrips to/from client 1,094,719 152.1 264.7
SQL*Net roundtrips to/from dblink 8,996 1.3 2.2
active txn count during cleanout 639,508 88.8 154.7
application wait time 16,554 2.3 4.0
auto extends on undo tablespace 0 0.0 0.0
background checkpoints completed 37 0.0 0.0
background checkpoints started 37 0.0 0.0
background timeouts 22,223 3.1 5.4
branch node splits 29 0.0 0.0
buffer is not pinned count 29,662,682 4,120.4 7,173.6
buffer is pinned count 29,809,596 4,140.8 7,209.1
bytes received via SQL*Net from c 108,581,448 15,082.9 26,259.1
bytes received via SQL*Net from d 962,647 133.7 232.8
bytes sent via SQL*Net to client 93,282,655 12,957.7 22,559.3
bytes sent via SQL*Net to dblink 8,966,262 1,245.5 2,168.4
calls to get snapshot scn: kcmgss 216,338 30.1 52.3
calls to kcmgas 147,056 20.4 35.6
calls to kcmgcs 640,324 89.0 154.9
change write time 103,316 14.4 25.0
cleanout - number of ktugct calls 678,574 94.3 164.1
cleanouts and rollbacks - consist 4 0.0 0.0
cleanouts only - consistent read 19,868 2.8 4.8
cluster key scan block gets 27,338 3.8 6.6
cluster key scans 21,540 3.0 5.2
commit batch performed 10 0.0 0.0
commit batch requested 10 0.0 0.0
commit batch/immediate performed 153 0.0 0.0
commit batch/immediate requested 153 0.0 0.0
commit cleanout failures: block l 22,940 3.2 5.6
commit cleanout failures: buffer 67 0.0 0.0
Instance Activity Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
commit cleanout failures: callbac 219 0.0 0.1
commit cleanout failures: cannot 0 0.0 0.0
commit cleanouts 107,561 14.9 26.0
commit cleanouts successfully com 84,335 11.7 20.4
commit immediate performed 143 0.0 0.0
commit immediate requested 143 0.0 0.0
commit txn count during cleanout 54,758 7.6 13.2
concurrency wait time 9,120 1.3 2.2
consistent changes 832,273 115.6 201.3
consistent gets 34,015,984 4,725.1 8,226.4
consistent gets - examination 28,527,345 3,962.7 6,899.0
consistent gets direct 0 0.0 0.0
consistent gets from cache 34,015,984 4,725.1 8,226.4
cursor authentications 260 0.0 0.1
data blocks consistent reads - un 2,159 0.3 0.5
db block changes 10,124,487 1,406.4 2,448.5
db block gets 9,862,848 1,370.0 2,385.2
db block gets direct 6,668 0.9 1.6
db block gets from cache 9,856,180 1,369.1 2,383.6
deferred (CURRENT) block cleanout 16,349 2.3 4.0
dirty buffers inspected 71,152 9.9 17.2
enqueue conversions 17,965 2.5 4.3
enqueue releases 147,314 20.5 35.6
enqueue requests 147,387 20.5 35.6
enqueue timeouts 67 0.0 0.0
enqueue waits 113 0.0 0.0
execute count 1,092,470 151.8 264.2
frame signature mismatch 0 0.0 0.0
free buffer inspected 331,981 46.1 80.3
free buffer requested 204,101 28.4 49.4
global undo segment hints helped 0 0.0 0.0
global undo segment hints were st 0 0.0 0.0
heap block compress 52,202 7.3 12.6
hot buffers moved to head of LRU 100,209 13.9 24.2
immediate (CR) block cleanout app 19,872 2.8 4.8
immediate (CURRENT) block cleanou 80,162 11.1 19.4
index fast full scans (full) 132 0.0 0.0
index fetch by key 16,874,551 2,344.0 4,080.9
index scans kdiixs1 837,547 116.3 202.6
leaf node 90-10 splits 9,341 1.3 2.3
leaf node splits 17,221 2.4 4.2
lob reads 203 0.0 0.1
lob writes 1,593 0.2 0.4
lob writes unaligned 1,593 0.2 0.4
logons cumulative 257 0.0 0.1
messages received 24,046 3.3 5.8
messages sent 24,047 3.3 5.8
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 4,854,564 674.3 1,174.0
opened cursors cumulative 29,312 4.1 7.1
parse count (failures) 0 0.0 0.0
parse count (hard) 906 0.1 0.2
parse count (total) 27,215 3.8 6.6
parse time cpu 1,421 0.2 0.3
parse time elapsed 7,626 1.1 1.8
physical read IO requests 37,545 5.2 9.1
physical read bytes 426,573,824 59,254.6 103,161.8
physical read total IO requests 93,794 13.0 22.7
physical read total bytes 1,346,237,440 187,003.4 325,571.3
physical read total multi block r 2,699 0.4 0.7
physical reads 52,072 7.2 12.6
physical reads cache 51,063 7.1 12.4
physical reads cache prefetch 13,906 1.9 3.4
physical reads direct 1,009 0.1 0.2
physical reads direct (lob) 0 0.0 0.0
physical reads direct temporary t 750 0.1 0.2
physical reads prefetch warmup 0 0.0 0.0
physical write IO requests 101,042 14.0 24.4
physical write bytes 1,501,020,160 208,504.0 363,003.7
physical write total IO requests 119,138 16.6 28.8
physical write total bytes 3,434,338,304 477,057.7 830,553.4
physical write total multi block 14,190 2.0 3.4
physical writes 183,230 25.5 44.3
physical writes direct 7,677 1.1 1.9
physical writes direct (lob) 7 0.0 0.0
physical writes direct temporary 4,242 0.6 1.0
physical writes from cache 175,553 24.4 42.5
physical writes non checkpoint 159,817 22.2 38.7
pinned buffers inspected 18 0.0 0.0
prefetch warmup blocks aged out b 0 0.0 0.0
prefetched blocks aged out before 0 0.0 0.0
process last non-idle time 7,198 1.0 1.7
recovery blocks read 0 0.0 0.0
recursive calls 303,380 42.1 73.4
recursive cpu usage 36,748 5.1 8.9
redo blocks read for recovery 0 0.0 0.0
redo blocks written 3,430,184 476.5 829.6
redo buffer allocation retries 3,510 0.5 0.9
redo entries 5,034,912 699.4 1,217.6
redo log space requests 1,095 0.2 0.3
redo log space wait time 26,372 3.7 6.4
redo ordering marks 96,204 13.4 23.3
redo size 1,697,855,400 235,846.0 410,605.9
redo synch time 131,843 18.3 31.9
redo synch writes 10,465 1.5 2.5
redo wastage 1,345,656 186.9 325.4
redo write time 208,626 29.0 50.5
redo writer latching time 55 0.0 0.0
redo writes 5,754 0.8 1.4
rollback changes - undo records a 64,619 9.0 15.6
rollbacks only - consistent read 2,148 0.3 0.5
rows fetched via callback 12,793,552 1,777.1 3,094.0
session connect time 0 0.0 0.0
session cursor cache hits 17,301 2.4 4.2
session logical reads 43,878,832 6,095.1 10,611.6
session pga memory 225,399,144 31,309.8 54,510.1
session pga memory max 375,411,048 52,147.7 90,788.7
session uga memory 524,039,191,952 72,793,331.3 ############
session uga memory max 329,698,024 45,797.8 79,733.5
shared hash latch upgrades - no w 853,856 118.6 206.5
shared hash latch upgrades - wait 22 0.0 0.0
sorts (disk) 0 0.0 0.0
sorts (memory) 65,215 9.1 15.8
sorts (rows) 3,404,105 472.9 823.2
sql area purged 116 0.0 0.0
summed dirty queue length 1,359,596 188.9 328.8
switch current to new buffer 7,823 1.1 1.9
table fetch by rowid 19,110,650 2,654.6 4,621.7
table fetch continued row 164 0.0 0.0
table scan blocks gotten 428,059 59.5 103.5
table scan rows gotten 40,899,306 5,681.3 9,891.0
table scans (long tables) 34 0.0 0.0
table scans (short tables) 11,931 1.7 2.9
total number of times SMON posted 333 0.1 0.1
transaction rollbacks 153 0.0 0.0
transaction tables consistent rea 8 0.0 0.0
transaction tables consistent rea 100 0.0 0.0
undo change vector size 664,433,044 92,295.2 160,685.1
user I/O wait time 80,928 11.2 19.6
user calls 1,100,285 152.8 266.1
user commits 3,273 0.5 0.8
user rollbacks 862 0.1 0.2
workarea executions - onepass 4 0.0 0.0
workarea executions - optimal 73,237 10.2 17.7
write clones created in backgroun 330 0.1 0.1
write clones created in foregroun 2,493 0.4 0.6
-------------------------------------------------------------
Instance Activity Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Statistics with absolute values (should not be diffed)
Statistic Begin Value End Value
--------------------------------- --------------- ---------------
logons current 36 41
opened cursors current 607 1,017
session cursor cache count 71,680 74,159
workarea memory allocated 0 28,311
-------------------------------------------------------------
Instance Activity Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Statistics identified by '(derived)' come from sources other than SYSSTAT
Statistic Total per Hour
--------------------------------- ------------------ ---------
log switches (derived) 37 18.50
-------------------------------------------------------------
OS Statistics DB/Inst: CDB10/cdb10 Snaps: 114-116
-> ordered by statistic type (CPU use, Virtual Memory, Hardware Config), Name
Statistic Total
------------------------- ----------------------
BUSY_TIME 728,809
IDLE_TIME 711,843
SYS_TIME 50,602
USER_TIME 678,207
LOAD 0
OS_CPU_WAIT_TIME 328,900
VM_IN_BYTES 212,729,856
VM_OUT_BYTES 794,091,520
PHYSICAL_MEMORY_BYTES 6,388,301,824
NUM_CPUS 2
-------------------------------------------------------------
Tablespace IO Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
->ordered by IOs (Reads + Writes) desc
Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TS_STARGUS
31,284 4 19.8 1.3 36,946 5 146 9.2
UNDOTBS1
45 0 69.6 1.0 56,258 8 162 437.6
TEMP
2,868 0 8.8 2.4 5,220 1 0 0.0
SYSAUX
818 0 5.5 1.2 1,840 0 36 3.1
SYSTEM
1,810 0 6.4 2.2 342 0 6 5.0
PERFSTAT
241 0 5.9 1.0 359 0 0 0.0
EXAMPLE
37 0 3.0 1.0 37 0 0 0.0
USERS
37 0 1.1 1.0 37 0 0 0.0
-------------------------------------------------------------
File IO Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
->Mx Rd Bkt: Max bucket time for single block read
->ordered by Tablespace, File
Tablespace Filename
------------------------ ----------------------------------------------------
Av Mx Av
Av Rd Rd Av Av Buffer BufWt
Reads Reads/s (ms) Bkt Blks/Rd Writes Writes/s Waits (ms)
-------------- ------- ----- --- ------- ------------ -------- ---------- ------
EXAMPLE /export/home/oracle10/oradata/cdb10/example01.dbf
37 0 3.0 1.0 37 0 0
PERFSTAT /export/home/oracle10/oradata/cdb10/perfstat01.dbf
241 0 5.9 ### 1.0 359 0 0
SYSAUX /export/home/oracle10/oradata/cdb10/sysaux01.dbf
818 0 5.5 ### 1.2 1,840 0 36 3.1
SYSTEM /export/home/oracle10/oradata/cdb10/system01.dbf
1,810 0 6.4 ### 2.2 342 0 6 5.0
TEMP /export/home/oracle10/oradata/cdb10/temp01.dbf
2,868 0 8.8 ### 2.4 5,220 1 0
TS_STARGUS /export/home/oracle10/oradata/cdb10/ts_stargus_01.db
31,284 4 19.8 ### 1.3 36,946 5 146 9.2
UNDOTBS1 /export/home/oracle10/oradata/cdb10/undotbs01.dbf
45 0 69.6 ### 1.0 56,258 8 162 437.6
USERS /export/home/oracle10/oradata/cdb10/users01.dbf
37 0 1.1 1.0 37 0 0
-------------------------------------------------------------
File Read Histogram Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
->Number of single block reads in each time range
->ordered by Tablespace, File
Tablespace Filename
------------------------ ----------------------------------------------------
0 - 2 ms 2 - 4 ms 4 - 8 ms 8 - 16 ms 16 - 32 ms 32+ ms
------------ ------------ ------------ ------------ ------------ ------------
PERFSTAT /export/home/oracle10/oradata/cdb10/perfstat01.dbf
96 22 50 21 16 4
SYSAUX /export/home/oracle10/oradata/cdb10/sysaux01.dbf
310 77 224 94 29 14
SYSTEM /export/home/oracle10/oradata/cdb10/system01.dbf
565 188 446 142 36 34
TS_STARGUS /export/home/oracle10/oradata/cdb10/ts_stargus_01.db
13,196 1,392 4,710 4,818 2,539 2,827
UNDOTBS1 /export/home/oracle10/oradata/cdb10/undotbs01.dbf
1 0 0 1 1 5
TEMP /export/home/oracle10/oradata/cdb10/temp01.dbf
1,986 27 78 87 39 48
-------------------------------------------------------------
Buffer Pool Statistics DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
-> Buffers: the number of buffers. Units of K, M, G are divided by 1000
Free Writ Buffer
Pool Buffer Physical Physical Buffer Comp Busy
P Buffers Hit% Gets Reads Writes Waits Wait Waits
--- ------- ---- -------------- ------------ ----------- ------- ---- ----------
D 38K 100 43,867,770 50,739 175,553 0 0 350
-------------------------------------------------------------
Instance Recovery Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- ------------
B 0 9 290 556 58761 184320 58761
E 0 10 956 3056 184320 184320 456931
-------------------------------------------------------------
Buffer Pool Advisory DB/Inst: CDB10/cdb10 End Snap: 116
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Pool, Block Size, Buffers For Estimate
Est
Phys Estimated Est
Size for Size Buffers Read Phys Reads Est Phys % dbtime
P Est (M) Factr (thousands) Factr (thousands) Read Time for Rds
--- -------- ----- ------------ ------ -------------- ------------ --------
D 28 .1 3 2.1 3,489 56,173 38.4
D 56 .2 7 1.8 2,947 45,783 31.3
D 84 .3 10 1.5 2,485 36,931 25.2
D 112 .4 14 1.4 2,280 33,006 22.5
D 140 .5 17 1.3 2,175 30,988 21.2
D 168 .5 21 1.3 2,102 29,589 20.2
D 196 .6 24 1.2 1,977 27,204 18.6
D 224 .7 28 1.2 1,882 25,373 17.3
D 252 .8 31 1.1 1,812 24,044 16.4
D 280 .9 35 1.0 1,671 21,336 14.6
D 308 1.0 38 1.0 1,625 20,462 14.0
D 336 1.1 42 1.0 1,589 19,770 13.5
D 364 1.2 45 1.0 1,551 19,038 13.0
D 392 1.3 49 0.9 1,527 18,580 12.7
D 420 1.4 52 0.9 1,513 18,303 12.5
D 448 1.5 55 0.9 1,497 18,003 12.3
D 476 1.5 59 0.9 1,480 17,671 12.1
D 504 1.6 62 0.9 1,460 17,301 11.8
D 532 1.7 66 0.9 1,446 17,018 11.6
D 560 1.8 69 0.9 1,425 16,620 11.3
-------------------------------------------------------------
Buffer wait Statistics DB/Inst: CDB10/cdb10 Snaps: 114-116
-> ordered by wait time desc, waits desc
Class Waits Total Wait Time (s) Avg Time (ms)
---------------------- ----------- ------------------- -------------
undo header 162 71 438
data block 183 1 8
segment header 5 0 2
-------------------------------------------------------------
PGA Aggr Target Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
-> B: Begin snap E: End snap (rows identified with B or E contain data
which is absolute i.e. not diffed over the interval)
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ---------------- -------------------------
99.5 6,690 32
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
- --------- --------- ---------- ---------- ------ ------ ------ ----------
B 200 136 96.9 0.0 .0 .0 .0 40,960
E 200 118 166.8 28.8 17.3 100.0 .0 40,960
-------------------------------------------------------------
PGA Aggr Target Histogram DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Optimal Executions are purely in-memory operations
Low High
Optimal Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- ------------- ------------ ------------
2K 4K 66,759 66,759 0 0
64K 128K 42 42 0 0
128K 256K 4 4 0 0
256K 512K 68 68 0 0
512K 1024K 4,553 4,553 0 0
1M 2M 1,790 1,790 0 0
4M 8M 20 16 4 0
8M 16M 22 22 0 0
16M 32M 4 4 0 0
-------------------------------------------------------------
PGA Memory Advisory DB/Inst: CDB10/cdb10 End Snap: 116
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- ---------------- -------- ----------
25 0.1 68,983.4 56,207.3 55.0 1,210
50 0.3 68,983.4 52,212.1 57.0 1,035
100 0.5 68,983.4 21,393.4 76.0 0
150 0.8 68,983.4 7,199.5 91.0 0
200 1.0 68,983.4 7,157.5 91.0 0
240 1.2 68,983.4 6,802.2 91.0 0
280 1.4 68,983.4 6,802.2 91.0 0
320 1.6 68,983.4 6,802.2 91.0 0
360 1.8 68,983.4 6,802.2 91.0 0
400 2.0 68,983.4 6,802.2 91.0 0
600 3.0 68,983.4 6,623.8 91.0 0
800 4.0 68,983.4 6,623.8 91.0 0
1,200 6.0 68,983.4 6,623.8 91.0 0
1,600 8.0 68,983.4 6,623.8 91.0 0
-------------------------------------------------------------
Process Memory Summary Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
-> B: Begin snap E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> Num Procs or Allocs: For Begin/End snapshot lines, it is the number of
processes. For Category lines, it is the number of allocations
-> ordered by Begin/End snapshot, Alloc (MB) desc
Hist Num
Avg Std Dev Max Max Procs
Alloc Used Freeabl Alloc Alloc Alloc Alloc or
Category (MB) (MB) (MB) (MB) (MB) (MB) (MB) Allocs
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B -------- 97.0 50.1 21.2 2.6 4.9 31 48 38
Other 72.3 1.9 4.9 31 31 38
Freeable 21.2 .0 .8 .4 2 26
SQL 2.5 1.2 .1 .1 0 46 29
PL/SQL 1.0 .6 .0 .0 0 0 36
E -------- 166.9 97.7 38.3 3.9 8.3 48 48 43
Other 90.3 2.1 4.6 31 31 43
Freeable 38.3 .0 1.2 2.3 14 32
SQL 36.8 34.8 1.1 5.5 32 46 32
PL/SQL 1.4 .7 .0 .0 0 0 41
-------------------------------------------------------------
Top Process Memory (by component) DB/Inst: CDB10/cdb10 Snaps: 114-116
-> ordered by Begin/End snapshot, Alloc (MB) desc
Alloc Used Freeabl Max Hist Max
PId Category (MB) (MB) (MB) Alloc (MB) Alloc (MB)
- ------ ------------- ------- ------- -------- ---------- ----------
B 6 LGWR -------- 30.7 14.4 .1 30.7 30.7
Other 30.6 30.6 30.6
Freeable .1 .0 .1
PL/SQL .0 .0 .0 .0
11 MMON -------- 7.6 5.9 1.4 7.6 7.7
Other 6.1 6.1 6.1
Freeable 1.4 .0 1.4
SQL .1 .0 .1 1.4
PL/SQL .0 .0 .0 .1
32 ------------ 3.7 2.3 1.0 3.7 5.0
Other 2.3 2.3 2.3
Freeable 1.0 .0 1.0
SQL .2 .1 .2 2.0
PL/SQL .2 .0 .2 .2
16 J001 -------- 3.5 1.0 1.8 3.5 3.5
Freeable 1.8 .0 1.8
Other 1.6 1.6 1.6
SQL .1 .1 .1 2.2
PL/SQL .1 .0 .1 .1
36 ------------ 3.3 2.5 .8 3.3 4.9
Other 2.4 2.4 2.4
Freeable .8 .0 .8
SQL .1 .1 .1 2.8
PL/SQL .0 .0 .0 .0
25 ------------ 2.8 1.7 .8 2.8 48.2
Other 1.7 1.7 1.7
Freeable .8 .0 .8
SQL .2 .1 .2 45.7
PL/SQL .0 .0 .0 .0
34 ------------ 2.7 1.5 1.0 2.7 3.7
Other 1.5 1.5 1.5
Freeable 1.0 .0 1.0
SQL .1 .0 .1 1.4
PL/SQL .1 .0 .1 .1
38 ------------ 2.6 1.7 .9 2.6 3.9
Other 1.5 1.5 1.5
Freeable .9 .0 .9
SQL .1 .0 .1 1.9
PL/SQL .1 .0 .1 .1
23 ------------ 2.5 1.3 .9 2.5 5.8
Other 1.4 1.4 1.4
Freeable .9 .0 .9
SQL .1 .0 .1 4.6
PL/SQL .0 .0 .0 .0
24 ------------ 2.5 1.2 .9 2.5 5.7
Other 1.4 1.4 1.4
Freeable .9 .0 .9
SQL .1 .0 .1 4.6
PL/SQL .0 .0 .0 .0
8 SMON -------- 2.3 .6 1.4 2.3 2.4
Freeable 1.4 .0 1.4
Other .8 .8 .8
SQL .1 .0 .1 .8
B 8 PL/SQL .0 .0 .0 .0
37 ------------ 2.2 1.3 .9 2.2 11.1
Other 1.3 1.3 5.2
Freeable .9 .0 .9
PL/SQL .0 .0 .0 .0
SQL .0 .0 .0 5.0
40 ------------ 2.0 1.1 .9 2.0 2.6
Freeable .9 .0 .9
Other .7 .7 .7
SQL .4 .2 .4 1.5
PL/SQL .0 .0 .0 .0
27 ------------ 2.0 .8 .9 2.0 3.8
Other 1.0 1.0 1.0
Freeable .9 .0 .9
SQL .0 .0 .0 2.3
PL/SQL .0 .0 .0 .0
10 CJQ0 -------- 1.9 .7 .8 1.9 2.3
Other 1.1 1.1 1.1
Freeable .8 .0 .8
SQL .1 .0 .1 .9
PL/SQL .0 .0 .0 .0
42 ------------ 1.9 .5 1.1 1.9 2.3
Freeable 1.1 .0 1.1
Other .8 .8 .8
SQL .1 .0 .1 1.2
PL/SQL .0 .0 .0 .0
33 ------------ 1.9 1.5 .3 1.9 2.3
Other 1.5 1.5 1.5
Freeable .3 .0 .3
SQL .1 .1 .1 .6
PL/SQL .0 .0 .0 .0
35 TNS V1-V3 --- 1.9 .7 .3 1.9 1.9
Other 1.5 1.5 1.5
Freeable .3 .0 .3
SQL .1 .0 .1 .4
PL/SQL .0 .0 .0 .0
31 ------------ 1.7 .5 1.0 1.7 4.8
Freeable 1.0 .0 1.0
Other .7 .7 .7
SQL .1 .0 .1 3.8
PL/SQL .0 .0 .0 .0
28 ------------ 1.7 .5 1.0 1.7 4.0
Freeable 1.0 .0 1.0
Other .6 .6 .6
SQL .0 .0 .0 3.3
PL/SQL .0 .0 .0 .0
E 31 ------------ 48.1 33.5 13.6 48.1 48.1
SQL 32.1 32.0 32.1 45.7
Freeable 13.6 .0 13.6
Other 2.3 2.3 2.3
PL/SQL .0 .0 .0 .0
6 LGWR -------- 30.7 14.4 .1 30.7 30.7
Other 30.6 30.6 30.6
E 6 Freeable .1 .0 .1
PL/SQL .0 .0 .0 .0
11 MMON -------- 7.6 5.9 1.3 7.6 7.7
Other 6.2 6.2 6.2
Freeable 1.3 .0 1.3
PL/SQL .0 .0 .0 .1
SQL .0 .0 .0 1.4
28 ------------ 6.0 1.5 1.0 6.0 30.3
Other 4.8 4.8 4.8
Freeable 1.0 .0 1.0
SQL .2 .1 .2 27.6
PL/SQL .0 .0 .0 .0
42 ------------ 5.5 4.3 1.0 5.5 16.4
Other 3.9 3.9 7.4
Freeable 1.0 .0 1.0
SQL .4 .2 .4 7.8
PL/SQL .2 .0 .2 .2
36 ------------ 4.3 3.4 .8 4.3 6.0
Other 3.3 3.3 3.3
Freeable .8 .0 .8
SQL .1 .1 .1 2.8
PL/SQL .0 .0 .0 .0
16 J001 -------- 3.8 .9 1.9 3.8 3.8
Freeable 1.9 .0 1.9
Other 1.7 1.7 1.7
SQL .1 .1 .1 2.3
PL/SQL .1 .0 .1 .1
32 ------------ 3.7 2.3 1.0 3.7 5.0
Other 2.3 2.3 2.3
Freeable 1.0 .0 1.0
SQL .2 .1 .2 2.0
PL/SQL .2 .0 .2 .2
43 m000 -------- 3.2 .8 1.2 3.2 3.2
Other 1.9 1.9 1.9
Freeable 1.2 .0 1.2
SQL .1 .0 .1 1.3
PL/SQL .0 .0 .0 .0
30 ------------ 2.8 1.5 .4 2.8 3.4
Other 2.2 2.2 2.2
Freeable .4 .0 .4
SQL .1 .0 .1 1.7
PL/SQL .1 .0 .1 .1
24 ------------ 2.6 1.5 1.0 2.6 48.0
Other 1.4 1.4 1.4
Freeable 1.0 .0 1.0
SQL .2 .1 .2 45.7
PL/SQL .0 .0 .0 .0
38 ------------ 2.6 1.7 .9 2.6 3.9
Other 1.5 1.5 1.5
Freeable .9 .0 .9
SQL .1 .0 .1 1.9
PL/SQL .1 .0 .1 .1
41 J003 -------- 2.6 1.8 .0 2.6 2.6
E 41 Other 1.3 1.3 1.3
SQL 1.2 1.2 1.2 1.2
PL/SQL .0 .0 .0 .0
25 ------------ 2.5 1.6 .8 2.5 48.2
Other 1.5 1.5 1.6
Freeable .8 .0 .8
SQL .2 .1 .2 45.7
PL/SQL .0 .0 .0 .0
26 ------------ 2.5 1.5 1.0 2.5 6.2
Other 1.4 1.4 1.4
Freeable 1.0 .0 1.0
SQL .1 .0 .1 4.6
PL/SQL .0 .0 .0 .0
21 ------------ 2.5 1.3 1.0 2.5 6.3
Other 1.3 1.3 1.3
Freeable 1.0 .0 1.0
SQL .1 .0 .1 4.6
PL/SQL .0 .0 .0 .0
27 ------------ 2.5 1.4 1.0 2.5 6.3
Other 1.3 1.3 1.3
Freeable 1.0 .0 1.0
SQL .1 .0 .1 4.6
PL/SQL .0 .0 .0 .0
20 ------------ 2.4 1.5 .3 2.4 2.4
Other 1.9 1.9 1.9
Freeable .3 .0 .3
PL/SQL .1 .0 .1 .1
SQL .1 .0 .1 .8
23 ------------ 2.3 1.4 .9 2.3 5.8
Other 1.2 1.2 1.2
Freeable .9 .0 .9
SQL .2 .1 .2 4.6
PL/SQL .0 .0 .0 .0
8 SMON -------- 2.3 .6 1.4 2.3 2.4
Freeable 1.4 .0 1.4
Other .8 .8 .8
SQL .1 .0 .1 .8
PL/SQL .0 .0 .0 .0
-------------------------------------------------------------
Enqueue activity DB/Inst: CDB10/cdb10 Snaps: 114-116
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc
Enqueue Type (Request Reason)
------------------------------------------------------------------------------
Requests Succ Gets Failed Gets Waits Wt Time (s) Av Wt Time(ms)
------------ ------------ ----------- ----------- ------------ --------------
RO-Multiple Object Reuse (fast object reuse)
828 828 0 92 164 1,778.10
CF-Controlfile Transaction
4,380 4,378 2 21 8 394.57
-------------------------------------------------------------
Undo Segment Summary DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out Of Space count
-> Undo segment block stats:
uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Num Undo Number of Max Qry Max Tx Min/Max STO/ uS/uR/uU/
TS# Blocks (K) Transactions Len (s) Concy TR (mins) OOS eS/eR/eU
---- ---------- --------------- -------- ---------- --------- ----- -----------
1 100.6 35,353 151 6 15/16.6 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Undo Segment Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Most recent 35 Undostat rows, ordered by End Time desc
Num Undo Number of Max Qry Max Tx Tun Ret STO/ uS/uR/uU/
End Time Blocks Transactions Len (s) Concy (mins) OOS eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- -----------
30-Jul 16:53 22,174 6,157 68 4 15 0/0 0/0/0/0/0/0
30-Jul 16:43 25,060 8,393 91 6 15 0/0 0/0/0/0/0/0
30-Jul 16:33 8,605 1,924 0 4 15 0/0 0/0/0/0/0/0
30-Jul 16:23 4,861 1,331 0 3 15 0/0 0/0/0/0/0/0
30-Jul 16:13 669 558 0 3 15 0/0 0/0/0/0/0/0
30-Jul 16:03 163 577 39 3 15 0/0 0/0/0/0/0/0
30-Jul 15:53 641 670 0 3 15 0/0 0/0/0/0/0/0
30-Jul 15:43 18,180 8,713 151 6 17 0/0 0/0/0/0/0/0
30-Jul 15:33 13,650 4,299 0 3 15 0/0 0/0/0/0/0/0
30-Jul 15:23 5,704 1,470 0 4 15 0/0 0/0/0/0/0/0
30-Jul 15:13 752 824 0 3 15 0/0 0/0/0/0/0/0
30-Jul 15:03 156 437 0 3 15 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Latch Activity DB/Inst: CDB10/cdb10 Snaps: 114-116
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
AWR Alerted Metric Eleme 38,444 0.0 0 0
Consistent RBA 5,792 0.0 0 0
FOB s.o list latch 299 0.0 0 0
In memory undo latch 48,296 0.0 1.0 1 8,337 0.0
JS mem alloc latch 12 0.0 0 0
JS queue access latch 12 0.0 0 0
JS queue state obj latch 51,762 0.0 0 0
JS slv state obj latch 265 0.0 0 0
KGX_diag 1 0.0 0 0
KMG MMAN ready and start 2,393 0.0 0 0
KTF sga latch 16 0.0 0 2,170 0.0
KWQMN job cache list lat 209 0.0 0 0
KWQP Prop Status 2 0.0 0 0
MQL Tracking Latch 0 0 141 0.0
Memory Management Latch 0 0 2,393 0.0
OS process 918 0.0 0 0
OS process allocation 3,010 0.0 0 0
OS process: request allo 265 0.0 0 0
PL/SQL warning settings 1,857 0.0 0 0
SQL memory manager latch 4 0.0 0 2,362 0.0
SQL memory manager worka 187,821 0.0 0 0
Shared B-Tree 275 0.0 0 0
active checkpoint queue 19,980 0.0 0.0 0 0
active service list 15,541 0.0 0 2,462 0.0
archive control 2,591 0.0 0 0
begin backup scn array 184 0.0 0 0
cache buffer handles 44,767 0.0 0 0
cache buffers chains 81,274,536 0.0 0.0 0 332,470 0.0
cache buffers lru chain 776,957 0.0 0.0 0 48,710 0.2
cache table scan latch 0 0 2,649 0.0
channel handle pool latc 932 0.0 0 0
channel operations paren 49,152 0.1 0.0 0 0
checkpoint queue latch 310,068 0.0 0.0 0 165,813 0.0
client/application info 3,920 0.0 0 0
commit callback allocati 188 0.0 0 0
compile environment latc 14,532 0.0 0 0
dictionary lookup 110 0.0 0 0
dml lock allocation 43,741 0.0 0 0
dummy allocation 509 0.0 0 0
enqueue hash chains 313,028 0.0 0.0 0 8,626 0.0
enqueues 194,211 0.0 0 0
event group latch 135 0.0 0 0
file cache latch 1,870 0.0 0 0
global KZLD latch for me 47 0.0 0 0
global tx hash mapping 19,501 0.0 0 0
hash table column usage 625 0.0 0 206,624 0.0
hash table modification 272 0.0 0 0
job workq parent latch 0 0 244 0.0
job_queue_processes para 238 0.0 0 0
kks stats 2,827 0.0 0 0
ksuosstats global area 2,864 0.0 1.0 0 0
ktm global data 384 0.0 0 0
kwqbsn:qsga 275 0.0 0 0
lgwr LWN SCN 5,840 0.1 0.0 0 0
library cache 2,540,877 0.0 0.0 0 3,538 0.0
library cache load lock 2,219 0.0 0.0 0 432 0.0
library cache lock 125,561 0.0 0 0
library cache lock alloc 3,399 0.0 0 0
library cache pin 2,345,637 0.0 0.0 0 0
library cache pin alloca 1,624 0.0 0 0
list of block allocation 4,519 0.0 0 0
loader state object free 794 0.0 0 0
message pool operations 1,126 0.0 0 0
messages 100,936 0.0 0.0 0 0
mostly latch-free SCN 5,853 0.2 0.0 0 0
multiblock read objects 6,668 0.0 0 0
ncodef allocation latch 140 0.0 0 0
object queue header heap 789 0.0 0 5,293 0.0
object queue header oper 946,784 0.0 0.0 0 0
object stats modificatio 706 0.4 0.0 0 0
parallel query alloc buf 948 0.0 0 0
parameter list 95 0.0 0 0
parameter table allocati 258 0.0 0 0
post/wait queue 7,277 0.0 0.0 0 4,742 0.0
process allocation 265 0.0 0 135 0.0
process group creation 265 0.0 0 0
qmn task queue latch 1,028 0.0 0 0
redo allocation 41,085 0.1 0.0 0 5,036,618 0.0
redo copy 0 0 5,036,686 0.0
redo writing 42,999 0.0 0.0 0 0
resmgr group change latc 854 0.0 0 0
resmgr:actses active lis 1,768 0.0 0 0
resmgr:actses change gro 321 0.0 0 0
resmgr:free threads list 500 0.0 0 0
resmgr:schema config 1,192 0.0 0 0
row cache objects 726,241 0.1 0.0 0 2,901 0.0
rules engine aggregate s 32 0.0 0 0
rules engine rule set st 264 0.0 0 0
sequence cache 8,908 0.0 0 0
session allocation 239,923 0.0 0.0 0 0
session idle bit 2,214,854 0.0 0.0 0 0
session state list latch 571 0.0 0 0
session switching 140 0.0 0 0
session timer 2,462 0.0 0 0
shared pool 109,418 0.1 0.1 0 0
simulator hash latch 2,579,022 0.0 0.0 0 0
simulator lru latch 2,536,133 0.0 0.0 0 19,731 0.0
slave class 4 0.0 0 0
slave class create 17 0.0 0 0
sort extent pool 5,868 0.0 0.0 0 0
state object free list 4 0.0 0 0
statistics aggregation 280 0.0 0 0
temp lob duration state 3 0.0 0 0
temporary table state ob 5 0.0 0 0
threshold alerts latch 607 0.0 0 0
transaction allocation 1,971,670 0.0 0 0
transaction branch alloc 4,366 0.0 0 0
undo global data 885,891 0.0 0.0 0 0
user lock 428 0.0 0 0
-------------------------------------------------------------
Latch Sleep breakdown DB/Inst: CDB10/cdb10 Snaps: 114-116
-> ordered by misses desc
Get Spin
Latch Name Requests Misses Sleeps Gets
-------------------------- --------------- ------------ ----------- -----------
cache buffers chains 81,274,536 7,697 6 7,691
cache buffers lru chain 776,957 334 1 333
library cache 2,540,877 305 14 291
shared pool 109,418 75 9 66
object queue header operat 946,784 67 3 64
In memory undo latch 48,296 1 1 0
ksuosstats global area 2,864 1 1 0
-------------------------------------------------------------
Latch Miss Sources DB/Inst: CDB10/cdb10 Snaps: 114-116
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
In memory undo latch ktiFlush: child 0 1 1
cache buffers chains kcbgtcr: kslbegin excl 0 4 1
cache buffers chains kcbgtcr: fast path 0 2 5
cache buffers chains kcbgcur: kslbegin 0 1 0
cache buffers chains kcbbxsv 0 1 0
cache buffers chains kcbrls: kslbegin 0 1 3
cache buffers chains kcbnew: new latch again 0 1 0
cache buffers chains kcbgtcr: kslbegin shared 0 1 1
cache buffers lru chain kcbbxsv: move to being wri 0 1 0
ksuosstats global area ksugetosstat 0 1 1
library cache lock kgllkdl: child: no lock ha 0 5 0
object queue header oper kcbo_switch_q_bg 0 1 0
object queue header oper kcbw_unlink_q_bg 0 1 0
object queue header oper kcbo_write_q 0 1 0
shared pool kghalo 0 9 5
shared pool kghfrunp: clatch: nowait 0 9 0
-------------------------------------------------------------
Mutex Sleep DB/Inst: CDB10/cdb10 Snaps: 114-116
-> ordered by Wait Time desc
Wait
Mutex Type Location Sleeps Time (s)
------------------ -------------------------------- -------------- ------------
Cursor Parent kkspsc0 [KKSPRTLOC26] 7 0.0
Cursor Parent kkspsc0 [KKSPRTLOC27] 3 0.0
Cursor Parent kksfbc [KKSPRTLOC2] 5 0.0
-------------------------------------------------------------
Dictionary Cache Stats DB/Inst: CDB10/cdb10 Snaps: 114-116
->"Pct Misses" should be very low (< 2% in most cases)
->"Final Usage" is the number of cache entries being used in End Snapshot
Get Pct Scan Pct Mod Final
Cache Requests Miss Reqs Miss Reqs Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control 137 0.0 0 4 1
dc_database_links 106 0.0 0 0 1
dc_files 147 0.0 0 0 7
dc_global_oids 10,527 0.2 0 0 27
dc_histogram_data 8,275 3.8 0 0 536
dc_histogram_defs 20,526 6.5 0 0 1,449
dc_object_grants 62 37.1 0 0 26
dc_object_ids 21,680 2.0 0 0 376
dc_objects 4,476 9.0 0 111 371
dc_profiles 212 0.0 0 0 2
dc_rollback_segments 1,364 0.0 0 0 22
dc_segments 5,535 5.1 0 941 222
dc_sequences 149 2.0 0 149 4
dc_tablespace_quotas 2,294 0.0 0 0 2
dc_tablespaces 110,983 0.0 0 0 8
dc_usernames 739 0.3 0 0 8
dc_users 59,259 0.0 0 0 44
outstanding_alerts 250 6.4 0 32 16
-------------------------------------------------------------
Library Cache Activity DB/Inst: CDB10/cdb10 Snaps: 114-116
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 1,266 0.6 13,678 0.1 4 0
CLUSTER 45 4.4 72 4.2 0 0
INDEX 123 4.1 243 7.8 14 0
SQL AREA 614 70.0 1,106,350 0.1 587 357
TABLE/PROCEDURE 2,220 6.2 32,828 3.3 521 0
TRIGGER 75 0.0 1,120 0.3 3 0
-------------------------------------------------------------
Rule Sets DB/Inst: CDB10/cdb10 Snaps: 114-116
-> * indicates Rule Set activity (re)started between Begin/End snaps
-> Top 25 ordered by Evaluations desc
No-SQL SQL
Rule * Eval/sec Reloads/sec Eval % Eval %
----------------------------------- - ------------ ----------- ------ ------
SYS.ALERT_QUE_R 0 0 0 0
-------------------------------------------------------------
Shared Pool Advisory DB/Inst: CDB10/cdb10 End Snap: 116
-> SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
in the Library Cache, and the physical number of memory objects associated
with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid
Est LC Est LC Est LC Est LC
Shared SP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size (M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
96 .8 17 2,126 ####### 1.0 3,022 1.5 17,814,526
112 .9 32 3,178 ####### 1.0 2,129 1.0 17,880,989
128 1.0 45 4,017 ####### 1.0 2,030 1.0 17,889,372
144 1.1 60 5,638 ####### 1.0 2,012 1.0 17,891,230
160 1.3 75 6,925 ####### 1.0 2,006 1.0 17,892,241
176 1.4 90 8,389 ####### 1.0 1,999 1.0 17,893,491
192 1.5 105 9,841 ####### 1.0 1,994 1.0 17,894,332
208 1.6 120 11,208 ####### 1.0 1,988 1.0 17,895,911
224 1.8 135 12,039 ####### 1.0 1,965 1.0 17,897,261
240 1.9 150 12,962 ####### 1.0 1,955 1.0 17,898,124
256 2.0 165 13,918 ####### 1.0 1,950 1.0 17,898,777
-------------------------------------------------------------
SGA Memory Summary DB/Inst: CDB10/cdb10 Snaps: 114-116
End Size (Bytes)
SGA regions Begin Size (Bytes) (if different)
------------------------------ -------------------- --------------------
Database Buffers 322,961,408
Fixed Size 1,979,648
Redo Buffers 6,406,144
Variable Size 159,386,368
-------------------- --------------------
sum 490,733,568
-------------------------------------------------------------
SGA breakdown difference DB/Inst: CDB10/cdb10 Snaps: 114-116
-> Top 35 rows by size, ordered by Pool, Name (note rows with null values for
Pool column, or Names showing free memory are always shown)
-> Null value for Begin MB or End MB indicates the size of that Pool/Name was
insignificant, or zero in that snapshot
Pool Name Begin MB End MB % Diff
------ ------------------------------ -------------- -------------- --------
java p free memory 24.0 24.0 0.00
shared ASH buffers 4.0 4.0 0.00
shared CCursor 6.4 6.1 -4.48
shared FileOpenBlock 1.4 1.4 0.00
shared Heap0: KGL 3.4 3.4 1.17
shared KCB Table Scan Buffer 3.8 3.8 0.00
shared KGLS heap 3.5 1.7 -51.17
shared KQR M PO 2.1 -100.00
shared KSFD SGA I/O b 3.8 3.8 0.00
shared PCursor 4.3 4.3 -0.04
shared PL/SQL MPCODE 3.5 4.6 30.43
shared db_block_hash_buckets 2.2 2.2 0.00
shared event statistics per sess 1.5 1.5 0.00
shared free memory 10.9 13.7 25.39
shared kglsim hash table bkts 4.0 4.0 0.00
shared kglsim heap 1.3 1.3 0.00
shared kglsim object batch 1.8 1.8 0.00
shared kks stbkt 1.5 1.5 0.00
shared library cache 9.2 9.2 -0.04
shared private strands 2.3 2.3 0.00
shared row cache 7.1 7.1 0.00
shared sql area 24.0 23.6 -1.77
buffer_cache 308.0 308.0 0.00
fixed_sga 1.9 1.9 0.00
log_buffer 6.1 6.1 0.00
-------------------------------------------------------------
SQL Memory Statistics DB/Inst: CDB10/cdb10 Snaps: 114-116
Begin End % Diff
-------------- -------------- --------------
Avg Cursor Size (KB): 44.50 57.82 23.05
Cursor to Parent ratio: 1.10 1.18 6.74
Total Cursors: 1,354 1,294 -4.64
Total Parents: 1,232 1,098 -12.20
-------------------------------------------------------------
init.ora Parameters DB/Inst: CDB10/cdb10 Snaps: 114-116
End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
audit_file_dest /export/home/oracle10/admin/cdb10
background_dump_dest /export/home/oracle10/admin/cdb10
compatible 10.2.0.1.0
control_files /export/home/oracle10/oradata/cdb
core_dump_dest /export/home/oracle10/admin/cdb10
db_block_size 8192
db_cache_size 322961408
db_domain
db_file_multiblock_read_count 8
db_name cdb10
db_recovery_file_dest /export/home/oracle10/flash_recov
db_recovery_file_dest_size 2147483648
dispatchers (PROTOCOL=TCP) (SERVICE=cdb10XDB)
job_queue_processes 10
open_cursors 300
pga_aggregate_target 209715200
processes 150
remote_login_passwordfile EXCLUSIVE
sga_max_size 490733568
sga_target 0
shared_pool_size 134217728
undo_management AUTO
undo_tablespace UNDOTBS1
user_dump_dest /export/home/oracle10/admin/cdb10
-------------------------------------------------------------
End of Report ( sp_114_116.lst )
}}}
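For reference, a Statspack report like the one above (and the 9i one below) is just the diff of two PERFSTAT snapshots formatted by spreport.sql. A minimal sketch of the workflow, assuming the Statspack schema is already installed; the snap IDs in the comments are placeholders matching the report above:
{{{
-- take a begin snapshot, run the workload, take an end snapshot,
-- then format the delta between the two snaps into a report file
SQL> connect perfstat
SQL> exec statspack.snap;           -- begin snap (e.g. 114)
SQL> -- ... run the workload being measured ...
SQL> exec statspack.snap;           -- end snap (e.g. 116)
SQL> @?/rdbms/admin/spreport.sql    -- prompts for begin/end snap ids,
                                    -- writes sp_<begin>_<end>.lst
}}}
Note that spreport.sql only diffs cumulative counters between the two snap IDs, which is why the sections flagged "absolute values (should not be diffed)" above are reported separately with begin/end values instead.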
-- from http://www.perfvision.com/statspack/sp_9i.txt
{{{
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
CDB 1745492617 cdb 1 9.2.0.5.0 NO limerock
Snap Id Snap Time Sessions Curs/Sess Comment
------- ------------------ -------- --------- -------------------
Begin Snap: 35 27-Jul-07 08:50:31 16 26.7
End Snap: 37 27-Jul-07 08:52:27 16 26.7
Elapsed: 1.93 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 304M Std Block Size: 8K
Shared Pool Size: 32M Log Buffer: 512K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 8,680.03 41,953.50
Logical reads: 36,522.81 176,526.92
Block changes: 13.83 66.83
Physical reads: 0.01 0.04
Physical writes: 1.98 9.58
User calls: 21.15 102.21
Parses: 4.32 20.88
Hard parses: 0.03 0.17
Sorts: 5.69 27.50
Logons: 0.21 1.00
Executes: 36,221.39 175,070.04
Transactions: 0.21
% Blocks changed per Read: 0.04 Recursive Call %: 99.94
Rollback per transaction %: 8.33 Rows per Sort: 84.41
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 99.35 Soft Parse %: 99.20
Execute to Parse %: 99.99 Latch Hit %: 99.21
Parse CPU to Parse Elapsd %: 30.00 % Non-Parse CPU: 99.98
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 95.26 95.35
% SQL with executions>1: 63.65 63.38
% Memory for SQL w/exec>1: 67.54 67.41
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 195 48.93
PL/SQL lock timer 37 111 27.83
latch free 1,649 84 21.09
control file parallel write 36 4 .89
log file parallel write 69 3 .68
-------------------------------------------------------------
Wait Events for DB: CDB Instance: cdb Snaps: 35 -37
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
PL/SQL lock timer 37 37 111 3005 1.5
latch free 1,649 1,649 84 51 68.7
control file parallel write 36 0 4 99 1.5
log file parallel write 69 69 3 40 2.9
SQL*Net message from dblink 594 0 1 2 24.8
log file sync 23 0 1 31 1.0
db file parallel write 2 0 0 35 0.1
control file sequential read 394 0 0 0 16.4
db file sequential read 1 0 0 10 0.0
LGWR wait for redo copy 2 0 0 5 0.1
SQL*Net more data to client 125 0 0 0 5.2
SQL*Net message to dblink 594 0 0 0 24.8
SQL*Net break/reset to clien 6 0 0 0 0.3
SQL*Net message from client 2,330 0 690 296 97.1
SQL*Net message to client 2,330 0 0 0 97.1
SQL*Net more data from clien 95 0 0 0 4.0
-------------------------------------------------------------
Background Wait Events for DB: CDB Instance: cdb Snaps: 35 -37
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
control file parallel write 36 0 4 99 1.5
log file parallel write 69 69 3 40 2.9
db file parallel write 2 0 0 35 0.1
LGWR wait for redo copy 2 0 0 5 0.1
rdbms ipc message 250 181 336 1343 10.4
smon timer 1 1 300 ###### 0.0
pmon timer 87 35 113 1302 3.6
-------------------------------------------------------------
SQL ordered by Gets for DB: CDB Instance: cdb Snaps: 35 -37
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100
CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
300,002 1 300,002.0 7.1 13.52 104.77 29540053
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=13; end loop; en
d;
300,002 1 300,002.0 7.1 13.65 105.88 94571329
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=8; end loop; end
;
300,002 300,000 1.0 7.1 7.27 54.06 322898470
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=7
300,002 1 300,002.0 7.1 13.64 106.98 404650074
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=1; end loop; end
;
300,002 1 300,002.0 7.1 14.15 106.41 779315540
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=3; end loop; end
;
300,002 300,000 1.0 7.1 7.15 48.09 895863666
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=6
300,002 300,000 1.0 7.1 7.12 44.48 901988619
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=13
300,002 300,000 1.0 7.1 7.40 49.95 1016842815
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=14
300,002 1 300,002.0 7.1 13.61 104.30 1195290885
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=5; end loop; end
;
300,002 300,000 1.0 7.1 7.26 49.07 1213725976
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=3
300,002 300,000 1.0 7.1 7.05 46.49 1483699112
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=9
300,002 300,000 1.0 7.1 7.51 53.09 1541773081
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=4
300,002 300,000 1.0 7.1 6.95 47.00 1762017642
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=5
300,002 1 300,002.0 7.1 13.61 103.74 1961700584
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=14; end loop; en
d;
300,002 300,000 1.0 7.1 7.53 44.97 1992710801
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=8
300,002 1 300,002.0 7.1 13.71 105.37 2424958502
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=12; end loop; en
d;
300,002 1 300,002.0 7.1 13.76 103.51 2715777206
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=2; end loop; end
;
300,002 300,000 1.0 7.1 7.38 44.31 2950501839
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=2
300,002 300,000 1.0 7.1 7.00 54.49 3015592115
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=10
300,002 1 300,002.0 7.1 13.68 103.22 3089756567
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=11; end loop; en
d;
300,002 1 300,002.0 7.1 13.66 103.39 3299045037
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=9; end loop; end
;
300,002 300,000 1.0 7.1 7.05 52.18 3478188992
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=1
300,002 1 300,002.0 7.1 13.66 107.26 3507678102
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=10; end loop; en
d;
300,002 1 300,002.0 7.1 13.63 105.86 3555711258
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=4; end loop; end
;
300,002 1 300,002.0 7.1 13.62 103.58 3788634673
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=6; end loop; end
;
-------------------------------------------------------------
SQL ordered by Reads for DB: CDB Instance: cdb Snaps: 35 -37
-> End Disk Reads Threshold: 1000
CPU Elapsd
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
1 2 0.5 100.0 0.87 0.95 3674571752
Module: sqlplus@limerock (TNS V1-V3)
begin :snap := statspack.snap; end;
0 1 0.0 0.0 13.52 104.77 29540053
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=13; end loop; en
d;
0 14 0.0 0.0 0.01 0.01 62978080
Module: sqlplus@limerock (TNS V1-V3)
SELECT NULL FROM DUAL FOR UPDATE NOWAIT
0 1 0.0 0.0 13.65 105.88 94571329
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=8; end loop; end
;
0 1 0.0 0.0 0.00 0.00 130926350
select count(*) from sys.job$ where next_date < :1 and (field1 =
:2 or (field1 = 0 and 'Y' = :3))
0 300,000 0.0 0.0 7.27 54.06 322898470
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=7
0 20 0.0 0.0 0.06 0.01 380234442
Module: Lab128
--lab128
select namespace,gets,gethits,pins,pinhits,reloads,inv
alidations
from v$librarycache where gets>0
0 1 0.0 0.0 13.64 106.98 404650074
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=1; end loop; end
;
0 4 0.0 0.0 0.08 0.07 418041023
Module: Lab128
--lab128
select name,v value from (
select /*+ no_merge */
name,decode(value,'TRUE',1,'FALSE',0,to_number(value)) v
fr
om v$system_parameter where type in (1,3,6)
and rownum >0
)
0 1 0.0 0.0 0.00 0.00 615142939
INSERT INTO SMON_SCN_TIME (THREAD, TIME_MP, TIME_DP, SCN_WRP, SC
N_BAS) VALUES (:1, :2, :3, :4, :5)
0 13 0.0 0.0 0.01 0.02 680302622
Module: Lab128
--lab128
select tablespace ts_name,session_addr,sqladdr,sqlhash
,
blocks,segfile#,segrfno#,segtype
from v$sort_usage
0 1 0.0 0.0 14.15 106.41 779315540
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=3; end loop; end
;
0 300,000 0.0 0.0 7.15 48.09 895863666
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=6
0 300,000 0.0 0.0 7.12 44.48 901988619
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=13
0 50 0.0 0.0 0.00 0.00 998317450
Module: sqlplus@limerock (TNS V1-V3)
SELECT VALUE FROM STATS$SYSSTAT WHERE SNAP_ID = :B4 AND DBID = :
B3 AND INSTANCE_NUMBER = :B2 AND NAME = :B1
0 300,000 0.0 0.0 7.40 49.95 1016842815
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=14
0 37 0.0 0.0 0.02 0.74 1053795750
COMMIT
0 37 0.0 0.0 0.00 0.01 1093001666
SELECT TO_NUMBER(TO_CHAR(SYSDATE,'D')) FROM DUAL
0 2 0.0 0.0 0.24 0.25 1116368370
Module: sqlplus@limerock (TNS V1-V3)
INSERT INTO STATS$SQLTEXT ( HASH_VALUE , TEXT_SUBSET , PIECE , S
QL_TEXT , ADDRESS , COMMAND_TYPE , LAST_SNAP_ID ) SELECT ST1.HAS
H_VALUE , SS.TEXT_SUBSET , ST1.PIECE , ST1.SQL_TEXT , ST1.ADDRES
S , ST1.COMMAND_TYPE , SS.SNAP_ID FROM V$SQLTEXT ST1 , STATS$SQL
_SUMMARY SS WHERE SS.SNAP_ID = :B3 AND SS.DBID = :B2 AND SS.INST
0 20 0.0 0.0 0.05 0.05 1144592741
Module: Lab128
--lab128
select statistic#, value from v$sysstat where value!=0
0 18 0.0 0.0 0.00 0.10 1160064496
Module: Lab128
--lab128
select addr,pid,spid,pga_alloc_mem from v$process
0 1 0.0 0.0 13.61 104.30 1195290885
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=5; end loop; end
;
0 300,000 0.0 0.0 7.26 49.07 1213725976
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=3
0 37 0.0 0.0 0.02 0.09 1231279053
UPDATE ASH.DBIDS@REPO SET ASHSEQ = :B2 WHERE DBID = :B1
0 4 0.0 0.0 0.00 0.00 1254950678
select file# from file$ where ts#=:1
0 3 0.0 0.0 0.08 0.29 1272081939
Module: Lab128
--lab128
select address,hash_value,piece,sql_text
from V$SQLT
EXT_WITH_NEWLINES where address=:1 and hash_value=:2
0 1 0.0 0.0 0.00 0.00 1287368460
Module: Lab128
--lab128
select file_id,tablespace_name ts_name, sum(bytes) byt
es
from dba_free_space
group by file_id, tablespace_name
0 21 0.0 0.0 0.00 0.01 1316169839
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= n
ext_date) and (next_date < :2)) or ((last_date is null) and
(next_date < :3))) and (field1 = :4 or (field1 = 0 and 'Y' = :5)
) and (this_date is null) order by next_date, job
0 18 0.0 0.0 0.08 0.03 1356713530
select privilege#,level from sysauth$ connect by grantee#=prior
privilege# and privilege#>0 start with (grantee#=:1 or grantee#=
1) and privilege#>0
-------------------------------------------------------------
SQL ordered by Executions for DB: CDB Instance: cdb Snaps: 35 -37
-> End Executions Threshold: 100
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value
------------ --------------- ---------------- ----------- ---------- ----------
300,000 300,000 1.0 0.00 0.00 322898470
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=7
300,000 300,000 1.0 0.00 0.00 895863666
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=6
300,000 300,000 1.0 0.00 0.00 901988619
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=13
300,000 300,000 1.0 0.00 0.00 1016842815
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=14
300,000 300,000 1.0 0.00 0.00 1213725976
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=3
300,000 300,000 1.0 0.00 0.00 1483699112
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=9
300,000 300,000 1.0 0.00 0.00 1541773081
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=4
300,000 300,000 1.0 0.00 0.00 1762017642
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=5
300,000 300,000 1.0 0.00 0.00 1992710801
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=8
300,000 300,000 1.0 0.00 0.00 2950501839
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=2
300,000 300,000 1.0 0.00 0.00 3015592115
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=10
300,000 300,000 1.0 0.00 0.00 3478188992
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=1
300,000 300,000 1.0 0.00 0.00 3868060563
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=11
300,000 300,000 1.0 0.00 0.00 3995262551
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=12
483 483 1.0 0.00 0.00 3986604633
INSERT INTO ASH.V$ASH@REPO VALUES (:B1,:B2,:B3,:B4,:B5,:B6,:B7,:
B8,:B9,:B10,:B11,:B12,:B13,:B14,:B15,:B16,:B17,:B18,:B19,:B20,:B
21,:B22,:B23,:B24)
102 1,436 14.1 0.00 0.00 3737298577
Module: Lab128
--lab128
select --+first_rows
w.sid,s.ownerid,s.user#,s.sql_ad
dress,s.sql_hash_value,
w.seq#,w.event event#,
w.p1,w.p2,w.p3,
w.wait_time,w.seconds_in_wait,decode(w.state,'WAITING',0,1) st
ate,s.serial#,
row_wait_obj#,row_wait_file#,row_wait_block#,row
_wait_row#,machine,program
from v$session s, v$session_wait w
50 50 1.0 0.00 0.00 998317450
Module: sqlplus@limerock (TNS V1-V3)
SELECT VALUE FROM STATS$SYSSTAT WHERE SNAP_ID = :B4 AND DBID = :
B3 AND INSTANCE_NUMBER = :B2 AND NAME = :B1
37 0 0.0 0.00 0.02 1053795750
COMMIT
37 37 1.0 0.00 0.00 1093001666
SELECT TO_NUMBER(TO_CHAR(SYSDATE,'D')) FROM DUAL
37 37 1.0 0.00 0.00 1231279053
UPDATE ASH.DBIDS@REPO SET ASHSEQ = :B2 WHERE DBID = :B1
37 37 1.0 0.00 0.00 2344200387
SELECT ASHSEQ.NEXTVAL FROM DUAL
37 483 13.1 0.00 0.01 3802278413
SELECT A.*, :B1 SAMPLE_TIME FROM V$ASHNOW A
30 1,440 48.0 0.01 0.03 1642626990
Module: bltwish.exe
select indx /* v9 */, ksle
swts, trunc(kslestim/10000
) from sys.oem$kslei
where ksleswts > 0
30 423 14.1 0.02 0.08 1866839420
Module: bltwish.exe
select 1, to_char(sysdate,'SSS
SS')+trunc(sysdate-to_date('JAN-01-1970 00:00:00','MON-DD-YYYY H
H24:MI:SS'))*86400 , sysdate,
s.indx , decode(w.ksusstim,
0,decode(n.kslednam,
30 30 1.0 0.00 0.00 3378495259
Module: bltwish.exe
select to_char(sysdate,'SSSSS') +
(to_char(sysdate,'J')- 2454309 )*86400
from dual
30 4,230 141.0 0.00 0.00 4160458976
Module: bltwish.exe
select KSUSGSTN , KSUSGSTV
from
sys.oem$ksusgsta where
KSUSGSTV> 0
22 22 1.0 0.00 0.00 1693927332
select count(*) from sys.job$ where (next_date > sysdate) and (n
ext_date < (sysdate+5/86400))
21 0 0.0 0.00 0.00 1316169839
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= n
ext_date) and (next_date < :2)) or ((last_date is null) and
(next_date < :3))) and (field1 = :4 or (field1 = 0 and 'Y' = :5)
) and (this_date is null) order by next_date, job
20 120 6.0 0.00 0.00 380234442
Module: Lab128
--lab128
select namespace,gets,gethits,pins,pinhits,reloads,inv
alidations
from v$librarycache where gets>0
20 2,822 141.1 0.00 0.00 1144592741
Module: Lab128
--lab128
select statistic#, value from v$sysstat where value!=0
20 20 1.0 0.00 0.00 1977490509
Module: Lab128
--lab128
select cnum_set, buf_got, sum_write, sum_scan, free_bu
ffer_wait,
write_complete_wait, buffer_busy_wait, free_buffer_
inspected,
dirty_buffers_inspected, db_block_change, db_block_
gets, consistent_gets,
physical_reads, physical_writes, set_ms
ize
from v$buffer_pool_statistics
19 19 1.0 0.00 0.00 1520741509
Module: Lab128
--lab128
select * from (select nvl(sum(decode(status,'WAIT(COMM
ON)',1,0)),0) mts_idle,
nvl(sum(decode(status,'WAIT(COMMON)',
0,1)),0) mts_busy,
nvl(sum(idle),0) mts_idle_time, nvl(sum(bu
-------------------------------------------------------------
SQL ordered by Parse Calls for DB: CDB Instance: cdb Snaps: 35 -37
-> End Parse Calls Threshold: 1000
% Total
Parse Calls Executions Parses Hash Value
------------ ------------ -------- ----------
30 30 5.99 1642626990
Module: bltwish.exe
select indx /* v9 */, ksle
swts, trunc(kslestim/10000
) from sys.oem$kslei
where ksleswts > 0
30 30 5.99 1866839420
Module: bltwish.exe
select 1, to_char(sysdate,'SSS
SS')+trunc(sysdate-to_date('JAN-01-1970 00:00:00','MON-DD-YYYY H
H24:MI:SS'))*86400 , sysdate,
s.indx , decode(w.ksusstim,
0,decode(n.kslednam,
30 30 5.99 3378495259
Module: bltwish.exe
select to_char(sysdate,'SSSSS') +
(to_char(sysdate,'J')- 2454309 )*86400
from dual
30 30 5.99 4160458976
Module: bltwish.exe
select KSUSGSTN , KSUSGSTV
from
sys.oem$ksusgsta where
KSUSGSTV> 0
18 18 3.59 1356713530
select privilege#,level from sysauth$ connect by grantee#=prior
privilege# and privilege#>0 start with (grantee#=:1 or grantee#=
1) and privilege#>0
17 17 3.39 3469977555
Module: sqlplus@limerock (TNS V1-V3)
ALTER SESSION SET TIME_ZONE='-07:00'
17 17 3.39 3997906522
select user# from sys.user$ where name = 'OUTLN'
14 14 2.79 62978080
Module: sqlplus@limerock (TNS V1-V3)
SELECT NULL FROM DUAL FOR UPDATE NOWAIT
14 14 2.79 1432236634
Module: sqlplus@limerock (TNS V1-V3)
BEGIN DBMS_APPLICATION_INFO.SET_MODULE(:1,NULL); END;
14 14 2.79 2009857449
Module: sqlplus@limerock (TNS V1-V3)
SELECT CHAR_VALUE FROM SYSTEM.PRODUCT_PRIVS WHERE (UPPER('SQL*
Plus') LIKE UPPER(PRODUCT)) AND ((UPPER(USER) LIKE USERID) OR
(USERID = 'PUBLIC')) AND (UPPER(ATTRIBUTE) = 'ROLES')
SQL ordered by Parse Calls for DB: CDB Instance: cdb Snaps: 35 -37
-> End Parse Calls Threshold: 1000
% Total
Parse Calls Executions Parses Hash Value
------------ ------------ -------- ----------
14 14 2.79 2865022085
Module: sqlplus@limerock (TNS V1-V3)
BEGIN DBMS_OUTPUT.DISABLE; END;
14 14 2.79 3096433403
Module: sqlplus@limerock (TNS V1-V3)
SELECT ATTRIBUTE,SCOPE,NUMERIC_VALUE,CHAR_VALUE,DATE_VALUE FROM
SYSTEM.PRODUCT_PRIVS WHERE (UPPER('SQL*Plus') LIKE UPPER(PRODUCT
)) AND (UPPER(USER) LIKE USERID)
14 14 2.79 4119976668
Module: sqlplus@limerock (TNS V1-V3)
SELECT USER FROM DUAL
14 14 2.79 4282642546
Module: SQL*Plus
SELECT DECODE('A','A','1','2') FROM DUAL
4 4 0.80 1254950678
select file# from file$ where ts#=:1
4 4 0.80 1480482175
Module: lab128_1584.exe
begin DBMS_APPLICATION_INFO.SET_MODULE('Lab128',NULL); end;
4 4 0.80 2011103812
Module: Lab128
--lab128
select object_id,data_object_id,owner,object_type,
o
bject_name||decode(subobject_name,null,null,' ('||subobject_name
||')') obj_name, created
from dba_objects where data_object_id
is not null and created>=:1
4 4 0.80 3033724852
Module: lab128_1584.exe
--lab128
select sid,serial#,systimestamp from v$session where s
id in (select sid from v$mystat where rownum=1)
4 4 0.80 3194447098
Module: lab128_1584.exe
alter session set optimizer_mode=choose
4 4 0.80 3986506689
Module: lab128_1584.exe
ALTER SESSION SET NLS_LANGUAGE= 'AMERICAN' NLS_TERRITORY= 'AMERI
CA' NLS_CURRENCY= '$' NLS_ISO_CURRENCY= 'AMERICA' NLS_NUMERIC_CH
ARACTERS= '.,' NLS_CALENDAR= 'GREGORIAN' NLS_DATE_FORMAT= 'DD-MO
N-RR' NLS_DATE_LANGUAGE= 'AMERICAN' NLS_SORT= 'BINARY' TIME_ZONE
= '-07:00' NLS_COMP= 'BINARY' NLS_DUAL_CURRENCY= '$' NLS_TIME_FO
3 3 0.60 3716207873
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,
order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:
1
SQL ordered by Parse Calls for DB: CDB Instance: cdb Snaps: 35 -37
-> End Parse Calls Threshold: 1000
% Total
Parse Calls Executions Parses Hash Value
------------ ------------ -------- ----------
2 50 0.40 998317450
Module: sqlplus@limerock (TNS V1-V3)
SELECT VALUE FROM STATS$SYSSTAT WHERE SNAP_ID = :B4 AND DBID = :
B3 AND INSTANCE_NUMBER = :B2 AND NAME = :B1
2 2 0.40 1116368370
Module: sqlplus@limerock (TNS V1-V3)
INSERT INTO STATS$SQLTEXT ( HASH_VALUE , TEXT_SUBSET , PIECE , S
QL_TEXT , ADDRESS , COMMAND_TYPE , LAST_SNAP_ID ) SELECT ST1.HAS
H_VALUE , SS.TEXT_SUBSET , ST1.PIECE , ST1.SQL_TEXT , ST1.ADDRES
S , ST1.COMMAND_TYPE , SS.SNAP_ID FROM V$SQLTEXT ST1 , STATS$SQL
_SUMMARY SS WHERE SS.SNAP_ID = :B3 AND SS.DBID = :B2 AND SS.INST
2 2 0.40 3404108640
ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED
2 2 0.40 3674571752
Module: sqlplus@limerock (TNS V1-V3)
begin :snap := statspack.snap; end;
2 2 0.40 3742653144
select sysdate from dual
1 1 0.20 29540053
Module: SQL*Plus
declare r rowid; begin for i in 1..300000 loop --u
pdate emp set sal=sal where empno=2; --commit;
select rowid into r from emp_hash where empno=13; end loop; en
d;
-------------------------------------------------------------
Instance Activity Stats for DB: CDB Instance: cdb Snaps: 35 -37
Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
CPU used by this session 19,543 168.5 814.3
CPU used when call started 19,542 168.5 814.3
CR blocks created 0 0.0 0.0
Cached Commit SCN referenced 0 0.0 0.0
Commit SCN cached 0 0.0 0.0
DBWR buffers scanned 0 0.0 0.0
DBWR checkpoint buffers written 230 2.0 9.6
DBWR checkpoints 0 0.0 0.0
DBWR free buffers found 0 0.0 0.0
DBWR lru scans 0 0.0 0.0
DBWR make free requests 0 0.0 0.0
DBWR summed scan depth 0 0.0 0.0
DBWR transaction table writes 0 0.0 0.0
DBWR undo block writes 65 0.6 2.7
SQL*Net roundtrips to/from client 2,264 19.5 94.3
SQL*Net roundtrips to/from dblink 594 5.1 24.8
active txn count during cleanout 44 0.4 1.8
background checkpoints completed 0 0.0 0.0
background checkpoints started 0 0.0 0.0
background timeouts 134 1.2 5.6
branch node splits 0 0.0 0.0
buffer is not pinned count 4,216,672 36,350.6 175,694.7
buffer is pinned count 68,992 594.8 2,874.7
bytes received via SQL*Net from c 381,702 3,290.5 15,904.3
bytes received via SQL*Net from d 93,520 806.2 3,896.7
bytes sent via SQL*Net to client 942,808 8,127.7 39,283.7
bytes sent via SQL*Net to dblink 318,849 2,748.7 13,285.4
calls to get snapshot scn: kcmgss 4,201,733 36,221.8 175,072.2
calls to kcmgas 101 0.9 4.2
calls to kcmgcs 26 0.2 1.1
change write time 1 0.0 0.0
cleanout - number of ktugct calls 64 0.6 2.7
cleanouts and rollbacks - consist 0 0.0 0.0
cleanouts only - consistent read 13 0.1 0.5
cluster key scan block gets 4,200,464 36,210.9 175,019.3
cluster key scans 4,200,309 36,209.6 175,012.9
commit cleanout failures: block l 0 0.0 0.0
commit cleanout failures: buffer 0 0.0 0.0
commit cleanout failures: callbac 0 0.0 0.0
commit cleanouts 219 1.9 9.1
commit cleanouts successfully com 219 1.9 9.1
commit txn count during cleanout 44 0.4 1.8
consistent changes 0 0.0 0.0
consistent gets 4,234,500 36,504.3 176,437.5
consistent gets - examination 3,195 27.5 133.1
current blocks converted for CR 0 0.0 0.0
cursor authentications 0 0.0 0.0
data blocks consistent reads - un 0 0.0 0.0
db block changes 1,604 13.8 66.8
db block gets 2,146 18.5 89.4
deferred (CURRENT) block cleanout 106 0.9 4.4
dirty buffers inspected 0 0.0 0.0
enqueue conversions 2,093 18.0 87.2
enqueue releases 1,525 13.2 63.5
enqueue requests 1,526 13.2 63.6
enqueue timeouts 1 0.0 0.0
Instance Activity Stats for DB: CDB Instance: cdb Snaps: 35 -37
Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
enqueue waits 0 0.0 0.0
execute count 4,201,681 36,221.4 175,070.0
free buffer inspected 0 0.0 0.0
free buffer requested 139 1.2 5.8
hot buffers moved to head of LRU 0 0.0 0.0
immediate (CR) block cleanout app 13 0.1 0.5
immediate (CURRENT) block cleanou 47 0.4 2.0
index fast full scans (full) 0 0.0 0.0
index fetch by key 2,097 18.1 87.4
index scans kdiixs1 16,762 144.5 698.4
leaf node 90-10 splits 5 0.0 0.2
leaf node splits 24 0.2 1.0
logons cumulative 24 0.2 1.0
messages received 71 0.6 3.0
messages sent 71 0.6 3.0
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 4,227,439 36,443.4 176,143.3
number of auto extends on undo ta 0 0.0 0.0
opened cursors cumulative 381 3.3 15.9
parse count (failures) 0 0.0 0.0
parse count (hard) 4 0.0 0.2
parse count (total) 501 4.3 20.9
parse time cpu 3 0.0 0.1
parse time elapsed 10 0.1 0.4
physical reads 1 0.0 0.0
physical reads direct 0 0.0 0.0
physical writes 230 2.0 9.6
physical writes direct 0 0.0 0.0
physical writes non checkpoint 104 0.9 4.3
pinned buffers inspected 0 0.0 0.0
prefetched blocks 0 0.0 0.0
prefetched blocks aged out before 0 0.0 0.0
process last non-idle time 21,339,926,032 183,964,879.6 ############
recovery array read time 0 0.0 0.0
recovery array reads 0 0.0 0.0
recovery blocks read 0 0.0 0.0
recursive calls 4,202,407 36,227.7 175,100.3
recursive cpu usage 15,346 132.3 639.4
redo blocks written 2,103 18.1 87.6
redo buffer allocation retries 0 0.0 0.0
redo entries 869 7.5 36.2
redo log space requests 0 0.0 0.0
redo log space wait time 0 0.0 0.0
redo ordering marks 0 0.0 0.0
redo size 1,006,884 8,680.0 41,953.5
redo synch time 72 0.6 3.0
redo synch writes 23 0.2 1.0
redo wastage 18,768 161.8 782.0
redo write time 428 3.7 17.8
redo writer latching time 1 0.0 0.0
redo writes 69 0.6 2.9
rollback changes - undo records a 0 0.0 0.0
rollbacks only - consistent read 0 0.0 0.0
rows fetched via callback 1,393 12.0 58.0
session connect time 21,339,926,032 183,964,879.6 ############
session logical reads 4,236,646 36,522.8 176,526.9
Instance Activity Stats for DB: CDB Instance: cdb Snaps: 35 -37
Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
session pga memory 0 0.0 0.0
session pga memory max 65,536 565.0 2,730.7
session uga memory 314,496 2,711.2 13,104.0
session uga memory max 8,751,552 75,444.4 364,648.0
shared hash latch upgrades - no w 16,699 144.0 695.8
sorts (disk) 0 0.0 0.0
sorts (memory) 660 5.7 27.5
sorts (rows) 55,711 480.3 2,321.3
summed dirty queue length 0 0.0 0.0
switch current to new buffer 14 0.1 0.6
table fetch by rowid 33,238 286.5 1,384.9
table fetch continued row 0 0.0 0.0
table scan blocks gotten 238 2.1 9.9
table scan rows gotten 751 6.5 31.3
table scans (long tables) 0 0.0 0.0
table scans (short tables) 244 2.1 10.2
transaction rollbacks 0 0.0 0.0
transaction tables consistent rea 0 0.0 0.0
transaction tables consistent rea 0 0.0 0.0
user calls 2,453 21.2 102.2
user commits 22 0.2 0.9
user rollbacks 2 0.0 0.1
workarea executions - multipass 0 0.0 0.0
workarea executions - onepass 0 0.0 0.0
workarea executions - optimal 1,088 9.4 45.3
write clones created in backgroun 0 0.0 0.0
write clones created in foregroun 0 0.0 0.0
-------------------------------------------------------------
Tablespace IO Stats for DB: CDB Instance: cdb Snaps: 35 -37
->ordered by IOs (Reads + Writes) desc
Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TS_STARGUS
0 0 0.0 165 1 0 0.0
UNDOTBS1
0 0 0.0 65 1 0 0.0
PERFSTAT
1 0 10.0 1.0 0 0 0 0.0
-------------------------------------------------------------
File IO Stats for DB: CDB Instance: cdb Snaps: 35 -37
->ordered by Tablespace, File
Tablespace Filename
------------------------ ----------------------------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
PERFSTAT /export/home/oracle/oradata/cdb/perfstat01.dbf
1 0 10.0 1.0 0 0 0
TS_STARGUS /export/home/oracle/oradata/cdb/ts_stargus_01.dbf
0 0 165 1 0
UNDOTBS1 /export/home/oracle/oradata/cdb/undotbs01.dbf
0 0 65 1 0
-------------------------------------------------------------
Buffer Pool Statistics for DB: CDB Instance: cdb Snaps: 35 -37
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Write Buffer
Number of Cache Buffer Physical Physical Buffer Complete Busy
P Buffers Hit % Gets Reads Writes Waits Waits Waits
--- ---------- ----- ----------- ----------- ---------- ------- -------- ------
D 37,715 100.0 4,113,113 1 230 0 0 0
-------------------------------------------------------------
Instance Recovery Stats for DB: CDB Instance: cdb Snaps: 35 -37
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
- ----- ----- ---------- ---------- ---------- ---------- ---------- ----------
B 0 0 184637 184320 184320 487328
E 0 0 184453 184320 184320 489431
-------------------------------------------------------------
Buffer Pool Advisory for DB: CDB Instance: cdb End Snap: 37
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate
Size for Size Buffers for Est Physical Estimated
P Estimate (M) Factr Estimate Read Factor Physical Reads
--- ------------ ----- ---------------- ------------- ------------------
D 32 .1 3,970 3.55 6,776,798
D 64 .2 7,940 2.64 5,042,221
D 96 .3 11,910 2.04 3,902,538
D 128 .4 15,880 1.55 2,968,018
D 160 .5 19,850 1.36 2,599,759
D 192 .6 23,820 1.26 2,404,270
D 224 .7 27,790 1.17 2,233,313
D 256 .8 31,760 1.07 2,050,577
D 288 .9 35,730 1.02 1,942,525
D 304 1.0 37,715 1.00 1,911,084
D 320 1.1 39,700 0.96 1,843,014
D 352 1.2 43,670 0.93 1,779,633
D 384 1.3 47,640 0.91 1,735,053
D 416 1.4 51,610 0.87 1,671,944
D 448 1.5 55,580 0.85 1,620,410
D 480 1.6 59,550 0.80 1,519,992
D 512 1.7 63,520 0.79 1,503,275
D 544 1.8 67,490 0.78 1,487,894
D 576 1.9 71,460 0.77 1,477,315
D 608 2.0 75,430 0.76 1,448,569
D 640 2.1 79,400 0.73 1,400,297
-------------------------------------------------------------
PGA Aggr Target Stats for DB: CDB Instance: cdb Snaps: 35 -37
-> B: Begin snap E: End snap (rows identified with B or E contain data
which is absolute i.e. not diffed over the interval)
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ---------------- -------------------------
100.0 31 0
Warning: pga_aggregate_target was set too low for current workload, as this
value was exceeded during this interval. Use the PGA Advisory view
to help identify a different value for pga_aggregate_target.
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
- --------- --------- ---------- ---------- ------ ------ ------ ----------
B 24 4 43.2 0.0 .0 .0 .0 1,228
E 24 4 43.3 0.0 .0 .0 .0 1,228
-------------------------------------------------------------
PGA Aggr Target Histogram for DB: CDB Instance: cdb Snaps: 35 -37
-> Optimal Executions are purely in-memory operations
Low High
Optimal Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- ------------- ------------ ------------
8K 16K 927 927 0 0
16K 32K 114 114 0 0
32K 64K 22 22 0 0
64K 128K 2 2 0 0
512K 1024K 23 23 0 0
-------------------------------------------------------------
PGA Memory Advisory for DB: CDB Instance: cdb End Snap: 37
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- ---------------- -------- ----------
12 0.5 38,168.6 8,976.9 81.0 420
18 0.8 38,168.6 8,976.9 81.0 267
24 1.0 38,168.6 4,822.2 89.0 174
29 1.2 38,168.6 4,433.7 90.0 2
34 1.4 38,168.6 4,406.1 90.0 0
38 1.6 38,168.6 4,402.7 90.0 0
43 1.8 38,168.6 3,956.6 91.0 0
48 2.0 38,168.6 3,794.2 91.0 0
72 3.0 38,168.6 2,933.2 93.0 0
96 4.0 38,168.6 2,820.9 93.0 0
144 6.0 38,168.6 1,378.8 97.0 0
192 8.0 38,168.6 1,061.1 97.0 0
-------------------------------------------------------------
Rollback Segment Stats for DB: CDB Instance: cdb Snaps: 35 -37
->A high value for "Pct Waits" suggests more rollback segments may be required
->RBS stats may not be accurate between begin and end snaps when using Auto Undo
management, as RBS may be dynamically created and dropped as needed
Trans Table Pct Undo Bytes
RBS No Gets Waits Written Wraps Shrinks Extends
------ -------------- ------- --------------- -------- -------- --------
0 17.0 0.00 0 0 0 0
1 79.0 0.00 2,278 0 0 0
2 95.0 0.00 380 0 0 0
3 109.0 0.00 167,692 0 0 0
4 97.0 0.00 656 0 0 0
5 67.0 0.00 434 0 0 0
6 64.0 0.00 788 0 0 0
7 117.0 0.00 672 0 0 0
8 102.0 0.00 434 0 0 0
9 134.0 0.00 193,104 0 0 0
10 119.0 0.00 544 0 0 0
-------------------------------------------------------------
Rollback Segment Storage for DB: CDB Instance: cdb Snaps: 35 -37
->Optimal Size should be larger than Avg Active
RBS No Segment Size Avg Active Optimal Size Maximum Size
------ --------------- --------------- --------------- ---------------
0 385,024 0 385,024
1 109,174,784 187,464,208 209,838,080
2 257,024,000 376,982,905 257,024,000
3 25,288,704 318,918,134 243,458,048
4 22,142,976 62,433,998 109,240,320
5 11,657,216 105,183,436 157,474,816
6 17,948,672 319,545,521 260,169,728
7 15,851,520 320,974,179 205,250,560
8 159,506,432 317,225,223 249,683,968
9 21,094,400 12,377,948 484,433,920
10 17,948,672 288,462,988 243,392,512
-------------------------------------------------------------
Latch Activity for DB: CDB Instance: cdb Snaps: 35 -37
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
Consistent RBA 69 0.0 0 0
FIB s.o chain latch 6 0.0 0 0
FOB s.o list latch 26 0.0 0 0
SQL memory manager latch 2 0.0 0 36 0.0
SQL memory manager worka 2,706 0.0 0 0
active checkpoint queue 40 0.0 0 0
archive control 41 0.0 0 0
cache buffer handles 8 0.0 0 0
cache buffers chains 8,473,919 0.8 0.0 0 131 0.0
cache buffers lru chain 375 0.0 0 177 0.0
channel handle pool latc 42 0.0 0 0
channel operations paren 136 0.0 0 0
checkpoint queue latch 2,530 0.0 0.0 0 140 0.0
child cursor hash table 84 0.0 0 0
dml lock allocation 223 0.0 0 0
dummy allocation 48 0.0 0 0
enqueue hash chains 5,148 0.0 0 0
enqueues 2,794 0.0 0 0
event group latch 21 0.0 0 0
global tx hash mapping 1,299 0.0 0 0
job_queue_processes para 2 0.0 0 0
ktm global data 1 0.0 0 0
lgwr LWN SCN 70 0.0 0 0
library cache 26,286 0.0 0 0
library cache pin 2,534 0.0 0 0
library cache pin alloca 2,356 0.0 0 0
list of block allocation 50 0.0 0 0
messages 641 0.0 0 0
mostly latch-free SCN 70 0.0 0 0
ncodef allocation latch 27 0.0 0 0
post/wait queue 37 0.0 0 23 0.0
process allocation 21 0.0 0 21 0.0
process group creation 42 0.0 0 0
redo allocation 1,026 0.1 0.0 0 0
redo copy 0 0 888 0.2
redo writing 321 0.0 0 0
row cache enqueue latch 3,044 0.0 0 0
row cache objects 3,228 0.0 0 0
sequence cache 177 0.0 0 0
session allocation 441 0.0 0 0
session idle bit 5,014 0.0 0.0 0 0
session switching 27 0.0 0 0
session timer 62 0.0 0 0
shared pool 2,437 0.0 0 0
simulator hash latch 919 0.0 0 0
simulator lru latch 7 0.0 0 7 0.0
sort extent pool 41 0.0 0 0
transaction allocation 27,170 0.0 0 0
transaction branch alloc 101 0.0 0 0
undo global data 1,744 0.0 0 0
Latch Activity for DB: CDB Instance: cdb Snaps: 35 -37
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
user lock 72 0.0 0 0
-------------------------------------------------------------
Latch Sleep breakdown for DB: CDB Instance: cdb Snaps: 35 -37
-> ordered by misses desc
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
cache buffers chains 8,473,919 68,091 1,649 0/0/0/0/0
-------------------------------------------------------------
Latch Miss Sources for DB: CDB Instance: cdb Snaps: 35 -37
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
cache buffers chains kcbgtcr: kslbegin excl 0 1,171 1,409
cache buffers chains kcbrls: kslbegin 0 478 240
-------------------------------------------------------------
Dictionary Cache Stats for DB: CDB Instance: cdb Snaps: 35 -37
->"Pct Misses" should be very low (< 2% in most cases)
->"Cache Usage" is the number of cache entries being used
->"Pct SGA" is the ratio of usage to allocated size for that cache
Get Pct Scan Pct Mod Final
Cache Requests Miss Reqs Miss Reqs Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_database_links 1,040 0.0 0 0 1
dc_objects 19 0.0 0 0 695
dc_profiles 18 0.0 0 0 1
dc_rollback_segments 11 0.0 0 0 12
dc_sequences 3 0.0 0 3 5
dc_tablespaces 35 0.0 0 0 5
dc_user_grants 140 0.0 0 0 16
dc_usernames 37 0.0 0 0 6
dc_users 391 0.0 0 0 19
-------------------------------------------------------------
Library Cache Activity for DB: CDB Instance: cdb Snaps: 35 -37
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 47 0.0 47 0.0 0 0
SQL AREA 403 1.0 1,032 0.8 0 0
TABLE/PROCEDURE 57 0.0 145 0.0 0 0
-------------------------------------------------------------
Shared Pool Advisory for DB: CDB Instance: cdb End Snap: 37
-> Note there is often a 1:Many correlation between a single logical object
in the Library Cache, and the physical number of memory objects associated
with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid
Estd
Shared Pool SP Estd Estd Estd Lib LC Time
Size for Size Lib Cache Lib Cache Cache Time Saved Estd Lib Cache
Estim (M) Factr Size (M) Mem Obj Saved (s) Factr Mem Obj Hits
----------- ----- ---------- ------------ ------------ ------- ---------------
16 .5 20 2,700 258 1.0 142,856
32 1.0 35 5,145 258 1.0 142,880
48 1.5 48 8,497 259 1.0 143,134
64 2.0 48 8,497 259 1.0 143,134
-------------------------------------------------------------
SGA Memory Summary for DB: CDB Instance: cdb Snaps: 35 -37
SGA regions Size in Bytes
------------------------------ ----------------
Database Buffers 318,767,104
Fixed Size 731,712
Redo Buffers 811,008
Variable Size 67,108,864
----------------
sum 387,418,688
-------------------------------------------------------------
SGA breakdown difference for DB: CDB Instance: cdb Snaps: 35 -37
Pool Name Begin value End value % Diff
------ ------------------------------ ---------------- ---------------- -------
shared 1M buffer 2,098,176 2,098,176 0.00
shared Checkpoint queue 513,280 513,280 0.00
shared FileIdentificatonBlock 349,824 349,824 0.00
shared FileOpenBlock 818,960 818,960 0.00
shared KGK heap 7,000 7,000 0.00
shared KGLS heap 2,343,848 2,343,848 0.00
shared KQR L PO 1,068,048 1,068,048 0.00
shared KQR M PO 1,053,744 1,053,744 0.00
shared KQR S SO 4,120 4,120 0.00
shared KQR X PO 2,576 2,576 0.00
shared KSXR large reply queue 167,624 167,624 0.00
shared KSXR pending messages que 853,952 853,952 0.00
shared KSXR receive buffers 1,034,000 1,034,000 0.00
shared PL/SQL DIANA 803,064 803,064 0.00
shared PL/SQL MPCODE 607,400 624,976 2.89
shared PLS non-lib hp 2,088 2,088 0.00
shared SYSTEM PARAMETERS 169,016 169,016 0.00
shared character set object 279,728 279,728 0.00
shared dictionary cache 3,229,952 3,229,952 0.00
shared enqueue 218,952 218,952 0.00
shared errors 13,088 13,088 0.00
shared event statistics per sess 1,294,440 1,294,440 0.00
shared fixed allocation callback 472 472 0.00
shared free memory 3,182,136 3,122,272 -1.88
shared joxs heap init 4,240 4,240 0.00
shared krvxrr 253,056 253,056 0.00
shared ksm_file2sga region 370,496 370,496 0.00
shared library cache 11,717,552 11,742,120 0.21
shared message pool freequeue 771,984 771,984 0.00
shared miscellaneous 12,213,312 12,213,312 0.00
shared parameters 48,368 50,664 4.75
shared sessions 310,960 310,960 0.00
shared sim memory hea 328,304 328,304 0.00
shared sql area 20,934,016 20,949,440 0.07
shared table definiti 12,648 12,648 0.00
shared temporary tabl 25,840 25,840 0.00
shared trigger defini 2,128 2,128 0.00
shared trigger inform 472 472 0.00
buffer_cache 318,767,104 318,767,104 0.00
fixed_sga 731,712 731,712 0.00
log_buffer 787,456 787,456 0.00
-------------------------------------------------------------
init.ora Parameters for DB: CDB Instance: cdb Snaps: 35 -37
End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
background_dump_dest /export/home/oracle/admin/cdb/bdu
compatible 9.2.0.0.0
control_files /export/home/oracle/oradata/cdb/c
core_dump_dest /export/home/oracle/admin/cdb/cdu
cursor_space_for_time TRUE
db_block_size 8192
db_cache_size 318767104
db_domain
db_file_multiblock_read_count 16
db_name cdb
fast_start_mttr_target 0
instance_name cdb
java_pool_size 0
job_queue_processes 3
nls_date_format YYYY-MM-DD HH24:MI:SS
open_cursors 500
optimizer_dynamic_sampling 2
optimizer_index_cost_adj 40
optimizer_mode ALL_ROWS
pga_aggregate_target 25165824
processes 100
remote_login_passwordfile NONE
shared_pool_size 33554432
timed_statistics TRUE
undo_management AUTO
undo_retention 900
undo_tablespace UNDOTBS1
user_dump_dest /export/home/oracle/admin/cdb/udu
-------------------------------------------------------------
End of Report
}}}
http://collectl.sourceforge.net/NetworkStats.html
Harvard goes PaaS with SELinux Sandbox http://opensource.com/education/12/8/harvard-goes-paas-selinux-sandbox?sc_cid=70160000000TmB8AAK
Introducing the SELinux Sandbox http://danwalsh.livejournal.com/28545.html
Cool things with SELinux... Introducing sandbox -X http://danwalsh.livejournal.com/31146.html
{{{
-- sandy info
http://www.w7forums.com/sandy-bridge-review-intel-core-i7-2600k-i5-2500k-and-core-i3-2100-tested-t9378.html
http://www.geek.com/articles/chips/new-intel-atom-and-core-i7-processors-on-the-way-2011053/
--defective/bug sandy bridge
http://www.notebookcheck.net/Intel-s-defective-Sandy-Bridge-Chipsets-Status-Report.45596.0.html
http://www.anandtech.com/show/4142/intel-discovers-bug-in-6series-chipset-begins-recall
--cougar sata bug
http://www.anandtech.com/show/4143/the-source-of-intels-cougar-point-sata-bug
--z68 smart response technology, putting SSD ala cache
http://hothardware.com/Reviews/Intel-Z68-Express-Chipset-With-Smart-Response-Technology/
http://www.anandtech.com/show/4329/intel-z68-chipset-smart-response-technology-ssd-caching-review/4
--sandy bridge max 32GB not yet here.. needs non-ECC
http://www.amazon.com/review/R1VPPYQ2C823XM/ref=cm_cr_dp_cmt?ie=UTF8&ASIN=B00288BHIG&nodeID=172282&tag=&linkCode=#wasThisHelpful
}}}
/***
|Name:|SaveCloseTiddlerPlugin|
|Description:|Provides two extra toolbar commands, saveCloseTiddler and cancelCloseTiddler|
|Version:|3.0 ($Rev: 5502 $)|
|Date:|$Date: 2008-06-10 23:31:39 +1000 (Tue, 10 Jun 2008) $|
|Source:|http://mptw.tiddlyspot.com/#SaveCloseTiddlerPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
To use these you must add them to the tool bar in your EditTemplate
***/
//{{{
merge(config.commands,{
  saveCloseTiddler: {
    text: 'done/close',
    tooltip: 'Save changes to this tiddler and close it',
    handler: function(ev,src,title) {
      var closeTitle = title;
      var newTitle = story.saveTiddler(title,ev.shiftKey);
      if (newTitle)
        closeTitle = newTitle;
      return config.commands.closeTiddler.handler(ev,src,closeTitle);
    }
  },
  cancelCloseTiddler: {
    text: 'cancel/close',
    tooltip: 'Undo changes to this tiddler and close it',
    handler: function(ev,src,title) {
      // the same as closeTiddler now actually
      return config.commands.closeTiddler.handler(ev,src,title);
    }
  }
});
//}}}
https://www.safaribooksonline.com/library/view/learning-functional-data/9781785888731/
http://thestrugglingblogger.com/2011/01/scavenger-hunt-early-in-2011/
http://www.howcast.com/videos/27585-How-To-Create-a-Modern-Scavenger-Hunt
http://www.ehow.com/how_7935_devise-scavenger-hunt.html
http://www.diva-girl-parties-and-stuff.com/scavenger-hunts.html
http://patrickpowers.net/2011/01/social-media-scavenger-hunt-pulls-it-all-together/
http://lifehacker.com/5787809/april-fools-day-qr-code-scavenger-hunt
http://www.manilabookfair.com/BOA/special_events/mechanics%20html/Scavengers-Hunt-Mechanics.html
http://www.scavengerhuntanywhere.com/
http://www.scavengerhuntclues.org/
http://knol.google.com/k/scavenger-hunts-how-to-write-fun-and-challenging-clues# <-- GOOD STUFF
http://www.geekmom.com/2011/09/state-fair-geek-scavenger-hunt/
http://www.consolationchamps.com/content/geocaching.html
http://blog.makezine.com/archive/2011/09/qr-code-scavenger-hunt-at-ri-mini-maker-faire-this-saturday.html
http://blog.thomnichols.org/2011/08/quirk-a-cross-platform-qr-scavenger-hunt-game
http://2d-code.co.uk/next-level-qr-code-scavenger-hunt/
http://www.teampedia.net/wiki/index.php?title=QR_Code_Scavenger_Hunt
http://mashable.com/2010/12/14/qr-code-scavenger-hunt/
http://www.youtube.com/watch?v=m08rU5ipX9o
http://qrwild.com/
http://www.qrcodepress.com/qr-code-scavenger-hunts-the-next-big-thing/85927/
-- geeks
http://culturewav.es/public_thought/66707
http://googleio.appspot.com/qrhunt
http://googleio.appspot.com/qr/
http://technologyconference.org/scavenger_hunt.cfm
http://virtualgeek.typepad.com/virtual_geek/2011/08/vmworld-2011-top-10-number-5.html
http://vtexan.com/2011/08/the-great-scavenger-vhunt-at-vmworld/
http://blog.cowger.us/2011/08/19/vmworld-vhunt/
http://www.facebook.com/defcon?v=box_3
http://www.facebook.com/defconscavhunt?sk=photos
http://www.barcodelib.com/java_barcode/barcode_symbologies/qrcode.html <-- LIBRARY
http://www.youtube.com/watch?v=OB-tmGmZ_Fk <-- google forms
http://news.cnet.com/8301-17939_109-10166251-2.html <-- google docs validation
https://docs.google.com/support/bin/topic.py?topic=1360868 <-- google docs documentation
http://www.youtube.com/watch?v=SCf5qRajTtI&feature=results_video&playnext=1&list=PLC929B061B0F74983 <-- simple example
Database Scripts Library Index
HighAvailability.RMAN - Recovery Manager
131704.1
Script: To generate a Database Link create script
Doc ID: Note:1020175.6
-- DEPENDENCIES
Script To List Recursive Dependency Between Objects
Doc ID: 139594.1
HowTo: Show recursive dependencies and reverse: which objects are dependent of ...
Doc ID: 756350.1
<<showtoc>>
! Performance tests
* Oracle FMW SOA 11g R1: Using Secure Files https://www.oracle.com/technetwork/database/availability/oraclefmw-soa-11gr1-securefiles-1842740.pdf
* another good one from my notes https://www.oracle.com/technical-resources/articles/database/sql-11g-securefiles.html
* LOB vs securefiles (in German) http://www.database-consult.de/docs/LOBversusSF1.pdf
! for migration
They have to do export/import or dbms_redefinition.
They need space for the new compressed table before they can drop the old one.
If they have space in RECO they can use that for the new table, then do an alter table move later on after dropping the old table.
All scenarios can be tested.
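As a rough sketch of the "alter table move later on" option (the table, LOB column, index, and tablespace names here are hypothetical), the swap could look like:
{{{
-- move the LOB into a compressed SecureFile; note a MOVE locks the table,
-- so for a truly online migration use DBMS_REDEFINITION instead
alter table app_docs move lob (doc_blob)
  store as securefile (tablespace reco_ts compress medium);

-- a MOVE marks the table's indexes UNUSABLE, so rebuild them afterwards
alter index app_docs_pk rebuild;
}}}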
! General Troubleshooting
{{{
Here are some of the questions that need to be answered when troubleshooting securefiles/LOB (some of them we already know):
> Is it TX/4 or TX/6
> Are the (gc) buffer busy waits on the LOB segment or the LOB index ?
> Are they on the space management blocks or on the "real content" blocks of the object ?
> Is the LOB defined with the default chunksize or 32K chunksize - is the sizing appropriate ?
> Is each LOB in its own tablespace ? Are the tablespace single-file tablespaces or multi-file tablespaces.
> Do the tablespaces use system extent management, or fixed size ?
> Are there any features involved that would slow down the insert/update/delete process (compression, deduplication)
> Are the LOBs largely subject to inserts with reads, or are there lots of updates and deletes ?
> Are the LOBS nocache, cache, or cache read ?
}}}
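For the first couple of questions, an ASH query along these lines can attribute the waits to the LOB segment versus the LOB index (a sketch only; on some versions CURRENT_OBJ# maps better to DATA_OBJECT_ID than OBJECT_ID, so verify the join):
{{{
select o.owner, o.object_name, o.object_type, ash.event, count(*) samples
from   v$active_session_history ash, dba_objects o
where  ash.event like '%buffer busy%'
and    o.object_id = ash.current_obj#
group by o.owner, o.object_name, o.object_type, ash.event
order by samples desc;
}}}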
All About Security: User, Privilege, Role, SYSDBA, O/S Authentication, Audit, Encryption, OLS, Data Vault
Doc ID: Note:207959.1
-- PRIVILEGES
Script to Create View to Show All User Privs
Doc ID: Note:1020286.6
Script to Show System and Object Privs for a User
Doc ID: Note:1019508.6
-- DEFAULT PASSWORDS
160861.1
-- FGAC
Note 67977.1 Oracle8i FGAC - Working Examples
-- APPLICATION CONTEXT
How to Determine Active Context (DBMS_SESSION.LIST_CONTEXT)
Doc ID: Note:69573.1
-- OVERVIEW OF ORACLE SECURITY SERVER
Doc ID: Note:1031071.6
-- ERROR MESSAGES
Fine Grained Access Control Feature Is Not Available In the Oracle Server Standard Edition
Doc ID: Note:219911.1
-- PASSWORD
ORACLE_SID, TNS Alias,Password File and others Case Sensitiveness
Doc ID: 225097.1
Script to prevent a user from changing his password
Doc ID: Note:135878.1
Oracle Created Database Users: Password, Usage and Files References
Doc ID: 160861.1
Oracle Password Management Policy
Doc ID: 114930.1
-- 11g PASSWORD
11g R1 New Feature : Case Sensitive Passwords and Strong User Authentication
Doc ID: 429465.1
ORA-01017 when changing expired password using OCIPASSWORDCHANGE against 11.1
Doc ID: 788538.1
-- PASSWORD FILE
How to Avoid Common Flaws and Errors Using Passwordfile
Doc ID: 185703.1
-- PROFILE
11G DEFAULT Profile Changes
Doc ID: 454635.1
-- HARDENING
Security Check List: Steps to Make Your Database Secure from Attacks
Doc ID: 131752.1
-- AUDIT
Moving AUD$ to Another Tablespace and Adding Triggers to AUD$
Doc ID: Note:72460.1
-- AUDIT VAULT
-- SYS
Audit Sys Logins
Doc ID: Note:462564.1
-- AUD$ TABLE
Moving AUD$ to Another Tablespace and Adding Triggers to AUD$
Doc ID: Note:72460.1
Note 1019377.6 - Script to move SYS.AUD$ table out of SYSTEM tablespace
Note 166301.1 - How to Reorganize SYS.AUD$ Table
Note 731908.1 - New Feature DBMS_AUDIT_MGMT To Manage And Purge Audit Information
Note 73408.1 - How to Truncate, Delete, or Purge Rows from the Audit Trail Table SYS.AUD$
Problem: Linux64: Installing 32bit 10g Grid Control Fails Due to Incompatibility with the 64bit OS
Doc ID: 421749.1
Enterprise Manager Support Matrix for zLinux
Doc ID: 725980.1
Linux Crashes when Enterprise Manager Agent Starts on RHEL 4 Update 6 and 7
Doc ID: 729543.1
Can OSAUD Collect SQL Text or Bind Variables? - NO
Doc ID: 729280.1
How To Set the AUDIT_SYSLOG _LEVEL Parameter?
Doc ID: 553225.1
Audit Sys Logins
Doc ID: 462564.1
HOW TO CAPTURE ALL THE DDL STATEMENTS
Doc ID: 739604.1
New Feature DBMS_AUDIT_MGMT To Manage And Purge Audit Information
Doc ID: 731908.1
-- DATABASE VAULT
http://www.oracle.com/technology/deploy/security/database-security/database-vault/index.html
Industry expert Rich Mogull explains the importance of Separation of Duties for Database Administration, a Ziff Davis Enterprise Security Webcast Sponsored By Oracle
http://www.oracle.com/pls/ebn/live_viewer.main?p_direct=yes&p_shows_id=6469943
"Keep Them Separated", a Ziff Davis whitepaper describing best practices for internal controls and separation of duties to ensure compliant database management
http://www.oracle.com/dm/09q1field/keep_them_separated_zd_whitepaper_6-18-08.pdf
Forrester's Noel Yuhanna on Security and Compliance with Oracle Database Vault
http://www.oracle.com/pls/ebn/live_viewer.main?p_direct=yes&p_shows_id=5337015
IDC Report: Preventing Enterprise Data Leaks at the Source
http://www.oracle.com/corporate/analyst/reports/infrastructure/sec/209752.pdf
Oracle Database Vault Transparent Privileged User Access Control iSeminar
http://www.oracle.com/pls/ebn/live_viewer.main?p_direct=yes&p_shows_id=5617423
Oracle Database Vault Demo
http://www.oracle.com/pls/ebn/swf_viewer.load?p_shows_id=5641797&p_referred=0&p_width=800&p_height=600
Protecting Applications with Oracle Database Vault Whitepaper
http://www.oracle.com/technology/deploy/security/database-security/pdf/database-vault-11g-whitepaper.pdf
Oracle Database Vault for E-Business Suite Application Data Sheet
http://www.oracle.com/technology/deploy/security/database-security/pdf/ds_database_vault_ebusiness.pdf
Enterprise Data Security Assessment
http://www.oracle.com/broadband/survey/security/index.html
Installing Database Vault in a Data Guard Environment
Doc ID: 754065.1
Cannot Install Database Vault in a Single Instance Database in a RAC home.
Doc ID: 604773.1
How To Restrict The Access To An Object For The Object's Owner
Doc ID: 550265.1
-- APPS - DATABASE VAULT
Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 10.2.0.4
Doc ID: 428503.1 Type: WHITE PAPER
-- ENCRYPTION NETWORK
Encrypting EBS 11i Network Traffic using Advanced Security Option / Advanced Networking Option
Doc ID: Note:391248.1
-- LABEL SECURITY
How to Install / Deinstall Oracle Label Security Oracle9i/10g
Doc ID: Note:171155.1
If you install OLS on a home that is already at 10.2.0.3, you must re-apply the patchset and, after the installation, run catols.sql.
If you add the OLS option with the OUI after you have applied a patchset, you
must re-apply the same patchset, the OUI that comes with the patchset will then
update the binary component of the OLS option to the same patchset level as the RDBMS.
This action will typically take little time as compared to a complete patchset installation.
After Installing OLS, Create Policy Issues ORA-12447 and ORA-600 [KGHALO2]
Doc ID: Note:303511.1
Oracle Label Security Frequently Asked Questions
Doc ID: Note:213684.1
Note 234599.1 Enabling Oracle Label Security in Oracle E-Business Suite
Oracle Label Security Packages affect Data Guard usage of Switchover and connections to Primary Database
Doc ID: Note:265192.1
Installing Oracle Label Security Automatically Moves AUD$ Table out from SYS into SYSTEM schema
Doc ID: Note:278184.1
catnools.sql is not available in $ORACLE_HOME/rdbms/admin
Doc ID: Note:239825.1
Ora-439 Oracle Label Security Option Not Enabled though Already Installed
Doc ID: Note:250411.1
Unable to Install OLS on 10.1.0.3
Doc ID: Note:303751.1
Bug 3024516 - Oracle Label Security marked as INVALID in DBA_REGISTRY after upgrade
Doc ID: Note:3024516.8
Easy way to install, follow this OBE:
http://www.oracle.com/technology/obe/obe10gdb/install/lsinstall/lsinstall.htm
-- SSL AUTHENTICATION
Step by Step Guide To Configure SSL Authentication
Doc ID: 736510.1
-- TDE
10g R2 New Feature TDE : Transparent Data Encryption
Doc ID: 317311.1
Fails To Open / Create The Wallet: ORA-28353
Doc ID: 395252.1
Using Transparent Data Encryption with Oracle E-Business Suite Release 11i
Doc ID: 403294.1
TDE - Trying To Open Wallet In Default Location Fails With Ora-28353
Doc ID: 391086.1
How to Open the Encryption Wallet Automatically When the Database Starts.
Doc ID: 460293.1
Bug 5551624 - ORA-28353 creating a wallet
Doc ID: 5551624.8
10gR2: How to Export/Import with Data Encrypted with Transparent Data Encryption (TDE) -- TDE is only compatible with DataPump export and DataPump import.
Doc ID: 317317.1
Using Transparent Data Encryption In An Oracle Dataguard Config in 10gR2
Doc ID: 389958.1
Managing TDE wallets in a RAC environment
Doc ID: 567287.1
Transferring Encrypted Data from one Database to Another
Doc ID: 270919.1
Selective Data Encryption in Oracle RDBMS, Overview and References
Doc ID: 232000.1
How To Generate A New Master Encryption Key for the TDE
Doc ID: 445147.1
Doc ID 728292.1 Known Issues When Using TDE and Indexes on the Encrypted Columns
Doc ID 454980.1 Best Practices for having indexes on encrypted columns using TDE in 10gR2
-- SECURE APPLICATION ROLES
ORA-28201 Not Enough Privileges to Enable Application Role
Doc ID: 150418.1
OERR: ORA-28201 Not enough privileges to enable application role \'%s\'
Doc ID: 173528.1
An Example of Using Application Context's Initialized Globally
Doc ID: 242156.1
Changing Role within Stored Procedures using dbms_session.set_role
Doc ID: 69483.1
-- DEFINER - INVOKER RIGHTS
Invokers Rights Procedure Executed by Definers Rights Procedures
Doc ID: 162489.1
How to Know if a Stored Procedure is Defined as AUTHID CURRENT_USER ?
Doc ID: 130425.1
-- PROXY USERS
Using JDBC to Connect Through a Proxy User
Doc ID: 227538.1
http://www.oracle.com/technology/products/ias/toplink/doc/1013/main/_html/dblgcfg008.htm
http://www.it-eye.nl/weblog/2005/09/12/oracle-proxy-users-by-example/
http://www.it-eye.nl/weblog/2005/09/09/oracle-proxy-users/
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:21575905259251
http://www.oracle.com/technology/tech/java/sqlj_jdbc/htdocs/jdbc_faq_0.htm#05_14
-- PUBLIC
Be Cautious When Revoking Privileges Granted to PUBLIC
Doc ID: 247093.1
Some Views That Belong To SYS Are Not Created
Doc ID: 434905.1
PUBLIC : Is it a User, a Role, a User Group, a Privilege ?
Doc ID: 234551.1
-- CPU
steve links
http://www.integrigy.com/oracle-security-blog/archive/2008/01/31/oracle-exploits
http://www.oracle.com/technology/deploy/security/critical-patch-updates/cpuapr2009.html
Critical Patch Update April 2009 Patch Availability Document for Oracle Products
Doc ID: 786800.1
http://www.oracle.com/technology/deploy/security/critical-patch-updates/cpuapr2009.html
Critical Patch Update April 2009 Database Known Issues
Doc ID: 786803.1
https://metalink.oracle.com/metalink/plsql/f?p=200:10:1924032030661268483::NO:::
Security Alerts and Critical Patch Updates- Frequently Asked Questions
Doc ID: 360470.1
10.2.0.3 Patch Set - Availability and Known Issues
Doc ID: 401435.1
Release Schedule of Current Database Patch Sets
Doc ID: 742060.1
10.2.0.4 Patch Set - List of Bug Fixes by Problem Type
Doc ID: 401436.1
How To Find The Description/Details Of The Bugs Fixed By A Patch Using Opatch?
Doc ID: 750350.1
Critical Patch Update April 2009 Database Patch Security Vulnerability Molecule Mapping
Doc ID: 786811.1
How to confirm that a Critical Patch Update (CPU) has been installed
Doc ID: 821263.1
Introduction to "Bug Description" Articles
Doc ID: 245840.1
Interim Patch (One-Off Patch) FAQ
Doc ID: 726362.1
Security Alerts and Critical Patch Updates- Frequently Asked Questions
Doc ID: 360470.1
http://www.oracle.com/technology/deploy/security/cpu/cpufaq.htm
http://www.slaviks-blog.com/2009/01/20/oracle-cpu-dissected/
http://www.freelists.org/post/oracle-l/AWR-logical-reads-question,3
{{{
step 1 ####
RPT_STAT_DEFS(STAT_LOGC_READ).NAME   := 'session logical reads';
RPT_STAT_DEFS(STAT_LOGC_READ).SOURCE := SRC_SYSDIF;
step 2 ####
(select dataobj#, obj#, dbid,
        sum(logical_reads_delta) logical_reads
 from dba_hist_seg_stat
step 3 ####
decode(:gets, 0, to_number(null),
       100 * logical_reads / :gets) ratio
There is also a part that filters for COMMAND_TYPE = 47 ('PL/SQL BLOCK', i.e. statements starting with begin/declare) and marks those as zero.
}}}
{{{
session logical reads 6,050,561 10,033.0 2,383.1
Segments by Logical Reads DB/Inst: IVRS/ivrs Snaps: 338-339
-> Total Logical Reads: 6,050,561
-> Captured Segments account for 101.7% of Total
Tablespace Subobject Obj. Logical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
TPCH TPCHTAB LINEITEM TABLE 4,960,400 81.98
TPCH TPCHTAB ORDERS TABLE 502,768 8.31
TPCH TPCHTAB PARTSUPP TABLE 161,968 2.68
TPCH TPCHTAB PART TABLE 95,984 1.59
TPCC USERS STOCK_I1 INDEX 91,984 1.52
-------------------------------------------------------------
session logical reads 6,050,561 10,033.0 2,383.1
4,960,400/6,050,561
= 0.819824806327876
}}}
/***
|Name:|SelectThemePlugin|
|Description:|Lets you easily switch theme and palette|
|Version:|1.0.1 ($Rev: 3646 $)|
|Date:|$Date: 2008-02-27 02:34:38 +1000 (Wed, 27 Feb 2008) $|
|Source:|http://mptw.tiddlyspot.com/#SelectThemePlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
!Notes
* Borrows largely from ThemeSwitcherPlugin by Martin Budden http://www.martinswiki.com/#ThemeSwitcherPlugin
* Theme is cookie based. But set a default by setting config.options.txtTheme in MptwConfigPlugin (for example)
* Palette is not cookie based. It actually overwrites your ColorPalette tiddler when you select a palette, so beware.
!Usage
* {{{<<selectTheme>>}}} makes a dropdown selector
* {{{<<selectPalette>>}}} makes a dropdown selector
* {{{<<applyTheme>>}}} applies the current tiddler as a theme
* {{{<<applyPalette>>}}} applies the current tiddler as a palette
* {{{<<applyTheme TiddlerName>>}}} applies TiddlerName as a theme
* {{{<<applyPalette TiddlerName>>}}} applies TiddlerName as a palette
***/
//{{{
config.macros.selectTheme = {
  label: {
    selectTheme: "select theme",
    selectPalette: "select palette"
  },
  prompt: {
    selectTheme: "Select the current theme",
    selectPalette: "Select the current palette"
  },
  tags: {
    selectTheme: 'systemTheme',
    selectPalette: 'systemPalette'
  }
};

config.macros.selectTheme.handler = function(place,macroName)
{
  var btn = createTiddlyButton(place,this.label[macroName],this.prompt[macroName],this.onClick);
  // want to handle palettes and themes with same code. use mode attribute to distinguish
  btn.setAttribute('mode',macroName);
};

config.macros.selectTheme.onClick = function(ev)
{
  var e = ev ? ev : window.event;
  var popup = Popup.create(this);
  var mode = this.getAttribute('mode');
  var tiddlers = store.getTaggedTiddlers(config.macros.selectTheme.tags[mode]);
  // for default
  if (mode == "selectPalette") {
    var btn = createTiddlyButton(createTiddlyElement(popup,'li'),"(default)","default color palette",config.macros.selectTheme.onClickTheme);
    btn.setAttribute('theme',"(default)");
    btn.setAttribute('mode',mode);
  }
  for (var i=0; i<tiddlers.length; i++) {
    var t = tiddlers[i].title;
    var name = store.getTiddlerSlice(t,'Name');
    var desc = store.getTiddlerSlice(t,'Description');
    // label[mode] (not label['mode']) so the tooltip falls back to the right label
    var btn = createTiddlyButton(createTiddlyElement(popup,'li'), name?name:t, desc?desc:config.macros.selectTheme.label[mode], config.macros.selectTheme.onClickTheme);
    btn.setAttribute('theme',t);
    btn.setAttribute('mode',mode);
  }
  Popup.show();
  return stopEvent(e);
};

config.macros.selectTheme.onClickTheme = function(ev)
{
  var mode = this.getAttribute('mode');
  var theme = this.getAttribute('theme');
  if (mode == 'selectTheme')
    story.switchTheme(theme);
  else // selectPalette
    config.macros.selectTheme.updatePalette(theme);
  return false;
};

config.macros.selectTheme.updatePalette = function(title)
{
  if (title != "") {
    store.deleteTiddler("ColorPalette");
    if (title != "(default)")
      store.saveTiddler("ColorPalette","ColorPalette",store.getTiddlerText(title),
        config.options.txtUserName,undefined,"");
    refreshAll();
    if (config.options.chkAutoSave)
      saveChanges(true);
  }
};

config.macros.applyTheme = {
  label: "apply",
  prompt: "apply this theme or palette" // i'm lazy
};

config.macros.applyTheme.handler = function(place,macroName,params,wikifier,paramString,tiddler) {
  var useTiddler = params[0] ? params[0] : tiddler.title;
  var btn = createTiddlyButton(place,this.label,this.prompt,config.macros.selectTheme.onClickTheme);
  btn.setAttribute('theme',useTiddler);
  btn.setAttribute('mode',macroName=="applyTheme"?"selectTheme":"selectPalette"); // a bit untidy here
};

config.macros.selectPalette = config.macros.selectTheme;
config.macros.applyPalette = config.macros.applyTheme;

config.macros.refreshAll = { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
  createTiddlyButton(place,"refresh","refresh layout and styles",function() { refreshAll(); });
}};
//}}}
<<showtoc>>
! querying
http://docs.sequelizejs.com/en/v3/docs/querying/
! Sequelize CRUD 101
http://lorenstewart.me/2016/10/03/sequelize-crud-101/?utm_source=nodeweekly&utm_medium=email
! sequelize support for oracle
https://github.com/gintsgints/sequelize <- the most updated fork, use this
https://github.com/featurist/sworm <- this looks pretty good, has a wrapper to node-oracledb
https://github.com/sequelize/sequelize/issues/3013
https://github.com/sequelize/sequelize/search?q=oracle&type=Issues&utf8=%E2%9C%93
http://stackoverflow.com/questions/33803398/no-data-recovered-with-sequelize-oracle
https://github.com/SGrondin/oracle-orm
http://stackoverflow.com/questions/14403153/node-js-oracle-orm
http://stackoverflow.com/search?page=2&tab=relevance&q=oracle%20sequelize
http://stackoverflow.com/questions/14403153/node-js-oracle-orm/21118082#21118082
<<<
In Node.js, the ORMs that are best supported are the ones for open source databases.
Oracle and companies that use Oracle tend to use ADF (Java) and APEX (PL/SQL), plus other Oracle-specific tools (Report Writer, Oracle Forms).
<<<
Yes, don't fetch the sequence value separately at all. Just put the seq_name.nextval directly in the insert:
insert into x (c1, c2, c3) values (seq_name.nextval,:1, :2);
Also, make sure the sequences are cached.
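A minimal sketch of both points (the sequence and table names are hypothetical):
{{{
create sequence x_seq cache 1000;  -- cached, so NEXTVAL rarely has to update seq$
insert into x (c1, c2, c3) values (x_seq.nextval, :1, :2);
}}}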
---------
>
> Oracle 11g PL-SQL supports the following syntax for getting the next value
> from a Sequence into a variable
>
> DECLARE
> l_n_seqval NUMBER;
> BEGIN
> l_n_seqval := SEQ_NAME.NEXTVAL;
> END;
>
> whereas in 10g and earlier a Select from DUAL was needed
>
> DECLARE
> l_n_seqval NUMBER;
> BEGIN
> SELECT SEQ_NAME.NEXTVAL
> INTO l_n_seqval
> FROM DUAL;
> END;
>
> We use the 10g technique in a group of triggers which generate audit
> records; the sequence gets tens or hundreds of thousands of hits daily.
> Any minuscule incremental improvement could be significant.
>
> In your experience is there any efficiency to be gained from the 11g syntax
> (i.e., avoiding the SELECT FROM DUAL)?
alter session set "_serial_direct_read" = always;
http://dioncho.wordpress.com/2010/06/09/interesting-combination-of-rac-and-serial-direct-path-read/
http://dioncho.wordpress.com/2009/07/21/disabling-direct-path-read-for-the-serial-full-table-scan-11g/
http://sai-oracle.blogspot.com/2007/12/how-to-bypass-buffer-cache-for-full.html
http://oracledoug.com/serendipity/index.php?/archives/1321-11g-and-direct-path-reads.html
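A quick sanity check on the effect of the parameter above (a sketch using standard v$ views):
{{{
alter session set "_serial_direct_read" = always;

-- run a serial full table scan, then compare the session counters;
-- 'physical reads direct' rising along with 'physical reads' means
-- the buffer cache was bypassed
select n.name, s.value
from   v$mystat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name in ('physical reads', 'physical reads direct');
}}}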
Control Services and Scheduled Jobs at Startup?
http://morganslibrary.org/hci/hci012.html
Peoplesoft - Using Set Processing https://docs.oracle.com/cd/E57990_01/pt853pbh2/eng/pt/tape/task_UsingSetProcessing-07720a.html#topofpage
Moving from Procedural to Set-Based Thinking http://www.orchestrapit.co.uk/?p=171
Faster Batch Processing http://www.oracle.com/technetwork/testcontent/o26performance-096310.html
https://savvinov.com/2017/07/10/set-based-processing/
http://blog.orapub.com/20120513/oracle-database-row-versus-set-processing-surprise.html
http://structureddata.org/2010/07/20/the-core-performance-fundamentals-of-oracle-data-warehousing-set-processing-vs-row-processing/
Real-World Performance - 8 - Set Based Parallel Processing https://www.youtube.com/watch?v=sriSU6eWGzU
https://www.codeproject.com/Articles/34142/Understanding-Set-based-and-Procedural-approaches
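The gist of all of the links above, as a minimal sketch (the orders table is hypothetical):
{{{
-- row-by-row ("slow by slow"): one SQL execution per row
begin
  for r in (select order_id from orders where status = 'NEW') loop
    update orders set status = 'QUEUED' where order_id = r.order_id;
  end loop;
  commit;
end;
/

-- set-based: the same work in a single statement and a single pass
update orders set status = 'QUEUED' where status = 'NEW';
commit;
}}}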
http://unixed.com/blog/2014/09/setup-x11-access-to-the-solaris-gui-gnome-desktop/
http://www.clustrix.com/blog/bid/257352/Sharding-In-Theory-and-Practice
https://instagram-engineering.com/sharding-ids-at-instagram-1cf5a71e5a5c
https://en.wikipedia.org/wiki/Universally_unique_identifier
HERA sharding on RAC
https://medium.com/paypal-engineering/scaling-database-access-for-100s-of-billions-of-queries-per-day-paypal-introducing-hera-e192adacda54
Troubleshooting: Tuning the Shared Pool and Tuning Library Cache Latch Contention (Doc ID 62143.1)
http://blog.tanelpoder.com/2010/11/04/a-little-new-feature-for-shared-pool-geeks/
understanding shared pool memory structures https://www.oracle.com/technetwork/database/manageability/ps-s003-274003-106-1-fin-v2-128827.pdf
http://www.overclockers.com/short-stroke-raid/
http://www.simplisoftware.com/Public/index.php?request=HdTach
http://www.overclock.net/raid-controllers-software/690318-raid-0-hdds-best-stripe-size.html
http://www.tomshardware.com/forum/244351-32-partion-hard-drive-performance
<<<
Just so we're all clear, the fastest portion of any hard disk is the outer track, not the inner track. Hard disks will also allocate partitions from the outside first, and move inwards with additional partitions.
The trick of taking a large hard drive and partitioning it such that a small partition is the only thing on it and uses only the outer, faster tracks is called short-stroking.
The partition's STR is quite fast because only the outer tracks are used, and the access times for that partition go down as well, because the head is only moving over a shorter range of tracks instead of the whole platter.
This technique has been used in enterprise environments on SCSI/SAS drives to increase database performance, where in many cases speed of access is more important than storage capacity.
<<<
http://techreport.com/forums/viewtopic.php?f=5&t=3843
http://www.storagereview.com/articles/200109/20010918ST380021A_STR.html
Capacity's Effect on Server Performance http://www.storagereview.com/capacity_s_effect_on_server_performance
''Do the following:''
{{{
http://www.makeuseof.com/tag/7-hidden-windows-caches-clear/ <- do this first on windows
http://superuser.com/questions/1050417/how-to-clean-windows-installer-folder-in-windows-10
https://blogs.technet.microsoft.com/joscon/2012/01/18/can-you-safely-delete-files-in-the-windirinstaller-directory/
https://www.raymond.cc/blog/safely-delete-unused-msi-and-mst-files-from-windows-installer-folder/
http://www.homedev.com.au/free/patchcleaner
http://superuser.com/questions/707767/how-can-i-free-up-drive-space-from-the-windows-installer-folder-without-killing
http://www.techentice.com/delete-pagefile-sys-in-windows-7/
http://www.howtogeek.com/184091/5-ways-to-free-up-disk-space-on-a-mac/ <- then do this on mac
http://www.netreliant.com/news/9/17/Compacting-VirtualBox-Disk-Images-Windows-Guests.html <- GOOD STUFF
}}}
{{{
* download sdelete v.1.61 at kaige21.tistory.com/288
* defrag the C drive - right-click the drive, choose Properties, select the Tools tab, and click Defragment now
* execute sdelete - sdelete.exe -z C:
* shutdown VM
* compact vdi - VBoxManage modifyhd --compact "[drive]:\[path_to_image_file]\[name_of_image_file].vdi"
}}}
http://www.joshhardman.net/shrink-virtualbox-vdi-files/
http://maketecheasier.com/shrink-your-virtualbox-vm/2009/04/06
http://kakku.wordpress.com/2008/06/23/virtualbox-shrink-your-vdi-images-space-occupied-disk-size/
http://www.linuxreaders.com/2009/04/21/how-to-shrink-your-virtualbox-vm/
http://jimiz.net/blog/2010/02/compress-vdi-file-virtualbox/
https://www.maketecheasier.com/shrink-your-virtualbox-vm
http://dantwining.co.uk/2011/07/18/how-to-shrink-a-dynamically-expanding-guest-virtualbox-image/
http://superuser.com/questions/529149/how-to-compact-virtualboxs-vdi-file-size
gc buffer busy acquire https://juliandontcheff.wordpress.com/2013/04/21/dba-tips-for-tuning-siebel-on-rac-and-exadata/
Oracle RAC Database aware Applications - A Developer’s Checklist http://www.oracle.com/technetwork/database/availability/racdbawareapplications-1933522.pdf
Bug 14618938 : EXADATA: GCS DRM FREEZE IN ENTER SERVER MODE
Oracle Database - the best choice for Siebel Applications http://www.oracle.com/us/products/database/oracle-database-siebel-bwp-068927.pdf
Guidelines for Using Real Application Clusters for an Oracle Database https://docs.oracle.com/cd/E14004_01/books/SiebInstWIN/SiebInstCOM_RDBMS13.html
Siebel on Exadata http://www.oracle.com/technetwork/database/features/availability/maa-wp-siebel-exadata-177506.pdf
http://www.wikihow.com/Calculate-Growth-Rate
http://stackoverflow.com/questions/19824601/how-calculate-growth-rate-in-long-format-data-frame
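For the database-sizing flavor of this, a period-over-period growth rate can be computed with LAG (a sketch; the space_hist table and its columns are hypothetical):
{{{
select snap_date, gb_used,
       round(100 * (gb_used - lag(gb_used) over (order by snap_date))
             / nullif(lag(gb_used) over (order by snap_date), 0), 2) pct_growth
from   space_hist
order  by snap_date;
}}}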
How to simulate a slow query. Useful for testing of timeout issues [ID 357615.1]
{{{
Purpose
A simple way for controlling the speed at which a query executes. Useful when investigating timeout issues.
Software Requirements/Prerequisites
SQL*Plus
Configuring the Sample Code
User should have execute privilege on DBMS_LOCK package.
Running the Sample Code
1- login with SQL*PLUS
2- execute the function slow_query
Caution
This sample code is provided for educational purposes only and not supported by Oracle Support Services. It has been tested internally, however, and works as documented. We do not guarantee that it will work for you, so be sure to test it in your environment before relying on it.
Proofread this sample code before using it! Due to the differences in the way text editors, e-mail packages and operating systems handle text formatting (spaces, tabs and carriage returns), this sample code may not be in an executable state when you first receive it. Check over the sample code to ensure that errors of this type are corrected.
Sample Code
CREATE OR REPLACE FUNCTION slow_query( p_wait number)
RETURN varchar2 IS
v_date1 date;
v_date2 date;
BEGIN
SELECT sysdate INTO v_date1 FROM dual;
FOR i in 1..p_wait LOOP
dbms_lock.sleep(1);
END LOOP;
SELECT sysdate INTO v_date2 FROM dual;
RETURN to_char(trunc((v_date2 - v_date1) *60*60*24)) || ' seconds delay';
END;
This implementation is preferable over one that implements a single call to dbms_lock.sleep over a long period of time. The reason for this is that some types of break requests may not get processed until after the call is completed, depending on the type of client making the call (i.e. JDBC/thin connections) and the OS capability to signal the system call invoked by the dbms_lock.sleep.
Another possible implementation that uses a busy loop rather than locking calls is:
create or replace function slow_query(p_seconds_wait number)
return varchar2 is
v_date_end date;
v_date_now date;
v_date_start date;
begin
select sysdate, sysdate, sysdate + p_seconds_wait/(24*60*60)
into v_date_start, v_date_now, v_date_end from dual;
while ( v_date_now < v_date_end ) loop
select sysdate into v_date_now from dual;
end loop;
return to_char(trunc((v_date_now - v_date_start) *60*60*24))
|| ' seconds delay';
end;
/
Sample Code Output
SQL> select slow_query(10) from dual;
SLOW_QUERY(10)
--------------------------------------------------------------------------------
10 seconds delay
<------ This query took 10 seconds to execute
1* select dname, slow_query(2) slow_query from dept
SQL> /
DNAME SLOW_QUERY
------------------------------------------ --------------------
ACCOUNTING 2 seconds delay
RESEARCH 2 seconds delay
SALES 2 seconds delay
OPERATIONS 2 seconds delay
<---------- this query took 8 seconds to execute
tip: If you set the arraysize to be 1 you can actually see the rows coming in one by one. If the arraysize is 15 then the rows appear together after 8 seconds delay.
}}}
{{{
1) Created PL/SQL to Make Sleep :
CREATE OR REPLACE FUNCTION slow( p_seconds in number ) RETURN number IS BEGIN dbms_lock.sleep( p_seconds ); RETURN 1; END;
2) Call the above PL/SQL function from a SQL query to make it run for a long time (20000 rows x 0.1 s of sleep is roughly 33 minutes):
SELECT slow( 0.1 ) FROM dual CONNECT BY level <= 20000;
}}}
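A quick way to exercise the function against a client-side timeout, as a sketch (slow_query and the DEPT table come from the note above; the scott/tiger login is just the demo schema assumption):
{{{
sqlplus -s scott/tiger <<'EOF'
set arraysize 1 timing on
-- with arraysize 1, one row should appear roughly every 2 seconds
select dname, slow_query(2) slow_query from dept;
EOF
}}}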
Fishworks simulator quick guide http://www.evernote.com/shard/s48/sh/be6faabd-df78-465a-bc4b-ce4db3c99358/6b4d5f084d8b5d1351b081009914829e
https://www.evernote.com/l/ADAUIHBMoIxODJ3X3rxHczGEkh0n6LVnT3M
http://download.oracle.com/docs/cd/E11857_01/em.111/e16790/sizing.htm#EMADM9354
http://download.oracle.com/docs/cd/B16240_01/doc/em.102/e10954/sizing.htm#CEGCDFFE
how frequent does it talk to the OMS?
what is the size of data it pushes to OMS?
1) Download Skype RPM
2) Plug in the cam
3) Read this
http://forum.skype.com/index.php?showtopic=522511
https://help.ubuntu.com/community/Webcam#Skype
4) Install the 32bit libv4l
yum install libv4l.i686
vi /usr/local/bin/skype    (create a wrapper script whose contents are the single line below)
LD_PRELOAD=/usr/lib/libv4l/v4l2convert.so /usr/bin/skype
chmod a+x /usr/local/bin/skype
or run as
bash -c 'LD_PRELOAD=/usr/lib/libv4l/v4l2convert.so skype'
5) restart skype
Exadata Smart Scan troubleshooting wrong results
http://www.evernote.com/shard/s48/sh/13cfe3fd-f9c1-423c-b1d2-c8a27708178b/fc6a3da907ab0e68939d8761530d1bd4
Exadata: How to diagnose smart scan and wrong results [ID 1260804.1]
http://www.youtube.com/watch?v=L_Ye89cDmKU
Why Diskless Booting is Good http://wiki.smartos.org/display/DOC/Using+SmartOS
Why you need ZFS http://wiki.smartos.org/display/DOC/ZFS
Tuning the IO Throttle http://wiki.smartos.org/display/DOC/Tuning+the+IO+Throttle
http://wiki.smartos.org/display/DOC/How+to+create+a+Virtual+Machine+in+SmartOS
http://wiki.smartos.org/display/DOC/How+to+create+a+KVM+VM+%28+Hypervisor+virtualized+machine+%29+in+SmartOS
cuddletech ppt http://www.cuddletech.com/RealWorld-OpenSolaris.pdf
http://serverfault.com/questions/363842/what-design-features-make-joyents-zfs-and-amazons-ebs-s3-reliable
http://www.evernote.com/shard/s48/sh/80abddd2-ecf4-4c7c-9c91-bc8f28e2562e/fe9ba812e6f02fe6f8d5fd00dd37c707
How to diagnose smart scan and wrong results [ID 1260804.1]
Best Practices for OLTP on the Sun Oracle Database Machine [ID 1269706.1]
''-- troubleshooting''
http://kerryosborne.oracle-guy.com/2010/06/exadata-offload-the-secret-sauce/
http://tech.e2sn.com/oracle/exadata/performance-troubleshooting/exadata-smart-scan-performance
http://fritshoogland.wordpress.com/2010/08/23/an-investigation-into-exadata/
http://danirey.wordpress.com/2011/03/07/oracle-exadata-performance-revealed-smartscan-part-iii/
http://www.slideshare.net/padday/the-real-life-social-network-v2
http://www.akadia.com/services/solaris_tips.html
Transparent Failover with Solaris MPxIO and Oracle ASM
http://blogs.sun.com/BestPerf/entry/transparent_failover_with_solaris_mpxio
http://developers.sun.com/solaris/articles/solaris_perftools.html
http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ
http://blogs.sun.com/WCP/entry/cooltst_cool_threads_selection_tool
http://cooltools.sunsource.net/cooltst/index.html
http://glennfawcett.wordpress.com/2010/09/21/oracle-open-world-presentation-uploaded-optimizing-oracle-databases-on-sparc-enterprise-m-series-servers/
{{{
system sockets cores threads
M9000-64 64 256 512
M9000-32 32 128 256
M8000 16 64 128
M5000 8 32 64
M4000 4 16 32
M3000 1 4 8
}}}
http://www.hotchips.org/wp-content/uploads/hc_archives/hc24/HC24-9-Big-Iron/HC24.29.926-SPARC-T5-CMT-Turullois-Oracle-final6.pdf
also check this out ''SPARC M5-32 and M6-32 Servers: Processor numbering and decoding CPU location. (Doc ID 1540202.1)''
{{{
$ sh showcpucount
Total number of physical processors: 4
Number of virtual processors: 384
Total number of cores: 48
Number of cores per physical processor: 12
Number of hardware threads (strands or vCPUs) per core: 8
Processor speed: 3600 MHz (3.60 GHz)
-e
** Socket-Core-vCPU mapping **
showcpucount: line 25: syntax error at line 34: `(' unexpected
$ prtdiag | head -1
System Configuration: Oracle Corporation sun4v SPARC T5-8
oracle@enksc1client01:/export/home/oracle:dbm011
$ prtdiag
System Configuration: Oracle Corporation sun4v SPARC T5-8
Memory size: 785152 Megabytes
================================ Virtual CPUs ================================
CPU ID Frequency Implementation Status
------ --------- ---------------------- -------
0 3600 MHz SPARC-T5 on-line
1 3600 MHz SPARC-T5 on-line
2 3600 MHz SPARC-T5 on-line
3 3600 MHz SPARC-T5 on-line
4 3600 MHz SPARC-T5 on-line
5 3600 MHz SPARC-T5 on-line
6 3600 MHz SPARC-T5 on-line
7 3600 MHz SPARC-T5 on-line
8 3600 MHz SPARC-T5 on-line
9 3600 MHz SPARC-T5 on-line
10 3600 MHz SPARC-T5 on-line
11 3600 MHz SPARC-T5 on-line
12 3600 MHz SPARC-T5 on-line
13 3600 MHz SPARC-T5 on-line
14 3600 MHz SPARC-T5 on-line
15 3600 MHz SPARC-T5 on-line
16 3600 MHz SPARC-T5 on-line
17 3600 MHz SPARC-T5 on-line
18 3600 MHz SPARC-T5 on-line
19 3600 MHz SPARC-T5 on-line
20 3600 MHz SPARC-T5 on-line
21 3600 MHz SPARC-T5 on-line
22 3600 MHz SPARC-T5 on-line
23 3600 MHz SPARC-T5 on-line
24 3600 MHz SPARC-T5 on-line
25 3600 MHz SPARC-T5 on-line
26 3600 MHz SPARC-T5 on-line
27 3600 MHz SPARC-T5 on-line
28 3600 MHz SPARC-T5 on-line
29 3600 MHz SPARC-T5 on-line
30 3600 MHz SPARC-T5 on-line
31 3600 MHz SPARC-T5 on-line
32 3600 MHz SPARC-T5 on-line
33 3600 MHz SPARC-T5 on-line
34 3600 MHz SPARC-T5 on-line
35 3600 MHz SPARC-T5 on-line
36 3600 MHz SPARC-T5 on-line
37 3600 MHz SPARC-T5 on-line
38 3600 MHz SPARC-T5 on-line
39 3600 MHz SPARC-T5 on-line
40 3600 MHz SPARC-T5 on-line
41 3600 MHz SPARC-T5 on-line
42 3600 MHz SPARC-T5 on-line
43 3600 MHz SPARC-T5 on-line
44 3600 MHz SPARC-T5 on-line
45 3600 MHz SPARC-T5 on-line
46 3600 MHz SPARC-T5 on-line
47 3600 MHz SPARC-T5 on-line
48 3600 MHz SPARC-T5 on-line
49 3600 MHz SPARC-T5 on-line
50 3600 MHz SPARC-T5 on-line
51 3600 MHz SPARC-T5 on-line
52 3600 MHz SPARC-T5 on-line
53 3600 MHz SPARC-T5 on-line
54 3600 MHz SPARC-T5 on-line
55 3600 MHz SPARC-T5 on-line
56 3600 MHz SPARC-T5 on-line
57 3600 MHz SPARC-T5 on-line
58 3600 MHz SPARC-T5 on-line
59 3600 MHz SPARC-T5 on-line
60 3600 MHz SPARC-T5 on-line
61 3600 MHz SPARC-T5 on-line
62 3600 MHz SPARC-T5 on-line
63 3600 MHz SPARC-T5 on-line
64 3600 MHz SPARC-T5 on-line
65 3600 MHz SPARC-T5 on-line
66 3600 MHz SPARC-T5 on-line
67 3600 MHz SPARC-T5 on-line
68 3600 MHz SPARC-T5 on-line
69 3600 MHz SPARC-T5 on-line
70 3600 MHz SPARC-T5 on-line
71 3600 MHz SPARC-T5 on-line
72 3600 MHz SPARC-T5 on-line
73 3600 MHz SPARC-T5 on-line
74 3600 MHz SPARC-T5 on-line
75 3600 MHz SPARC-T5 on-line
76 3600 MHz SPARC-T5 on-line
77 3600 MHz SPARC-T5 on-line
78 3600 MHz SPARC-T5 on-line
79 3600 MHz SPARC-T5 on-line
80 3600 MHz SPARC-T5 on-line
81 3600 MHz SPARC-T5 on-line
82 3600 MHz SPARC-T5 on-line
83 3600 MHz SPARC-T5 on-line
84 3600 MHz SPARC-T5 on-line
85 3600 MHz SPARC-T5 on-line
86 3600 MHz SPARC-T5 on-line
87 3600 MHz SPARC-T5 on-line
88 3600 MHz SPARC-T5 on-line
89 3600 MHz SPARC-T5 on-line
90 3600 MHz SPARC-T5 on-line
91 3600 MHz SPARC-T5 on-line
92 3600 MHz SPARC-T5 on-line
93 3600 MHz SPARC-T5 on-line
94 3600 MHz SPARC-T5 on-line
95 3600 MHz SPARC-T5 on-line
96 3600 MHz SPARC-T5 on-line
97 3600 MHz SPARC-T5 on-line
98 3600 MHz SPARC-T5 on-line
99 3600 MHz SPARC-T5 on-line
100 3600 MHz SPARC-T5 on-line
101 3600 MHz SPARC-T5 on-line
102 3600 MHz SPARC-T5 on-line
103 3600 MHz SPARC-T5 on-line
104 3600 MHz SPARC-T5 on-line
105 3600 MHz SPARC-T5 on-line
106 3600 MHz SPARC-T5 on-line
107 3600 MHz SPARC-T5 on-line
108 3600 MHz SPARC-T5 on-line
109 3600 MHz SPARC-T5 on-line
110 3600 MHz SPARC-T5 on-line
111 3600 MHz SPARC-T5 on-line
112 3600 MHz SPARC-T5 on-line
113 3600 MHz SPARC-T5 on-line
114 3600 MHz SPARC-T5 on-line
115 3600 MHz SPARC-T5 on-line
116 3600 MHz SPARC-T5 on-line
117 3600 MHz SPARC-T5 on-line
118 3600 MHz SPARC-T5 on-line
119 3600 MHz SPARC-T5 on-line
120 3600 MHz SPARC-T5 on-line
121 3600 MHz SPARC-T5 on-line
122 3600 MHz SPARC-T5 on-line
123 3600 MHz SPARC-T5 on-line
124 3600 MHz SPARC-T5 on-line
125 3600 MHz SPARC-T5 on-line
126 3600 MHz SPARC-T5 on-line
127 3600 MHz SPARC-T5 on-line
128 3600 MHz SPARC-T5 on-line
129 3600 MHz SPARC-T5 on-line
130 3600 MHz SPARC-T5 on-line
131 3600 MHz SPARC-T5 on-line
132 3600 MHz SPARC-T5 on-line
133 3600 MHz SPARC-T5 on-line
134 3600 MHz SPARC-T5 on-line
135 3600 MHz SPARC-T5 on-line
136 3600 MHz SPARC-T5 on-line
137 3600 MHz SPARC-T5 on-line
138 3600 MHz SPARC-T5 on-line
139 3600 MHz SPARC-T5 on-line
140 3600 MHz SPARC-T5 on-line
141 3600 MHz SPARC-T5 on-line
142 3600 MHz SPARC-T5 on-line
143 3600 MHz SPARC-T5 on-line
144 3600 MHz SPARC-T5 on-line
145 3600 MHz SPARC-T5 on-line
146 3600 MHz SPARC-T5 on-line
147 3600 MHz SPARC-T5 on-line
148 3600 MHz SPARC-T5 on-line
149 3600 MHz SPARC-T5 on-line
150 3600 MHz SPARC-T5 on-line
151 3600 MHz SPARC-T5 on-line
152 3600 MHz SPARC-T5 on-line
153 3600 MHz SPARC-T5 on-line
154 3600 MHz SPARC-T5 on-line
155 3600 MHz SPARC-T5 on-line
156 3600 MHz SPARC-T5 on-line
157 3600 MHz SPARC-T5 on-line
158 3600 MHz SPARC-T5 on-line
159 3600 MHz SPARC-T5 on-line
160 3600 MHz SPARC-T5 on-line
161 3600 MHz SPARC-T5 on-line
162 3600 MHz SPARC-T5 on-line
163 3600 MHz SPARC-T5 on-line
164 3600 MHz SPARC-T5 on-line
165 3600 MHz SPARC-T5 on-line
166 3600 MHz SPARC-T5 on-line
167 3600 MHz SPARC-T5 on-line
168 3600 MHz SPARC-T5 on-line
169 3600 MHz SPARC-T5 on-line
170 3600 MHz SPARC-T5 on-line
171 3600 MHz SPARC-T5 on-line
172 3600 MHz SPARC-T5 on-line
173 3600 MHz SPARC-T5 on-line
174 3600 MHz SPARC-T5 on-line
175 3600 MHz SPARC-T5 on-line
176 3600 MHz SPARC-T5 on-line
177 3600 MHz SPARC-T5 on-line
178 3600 MHz SPARC-T5 on-line
179 3600 MHz SPARC-T5 on-line
180 3600 MHz SPARC-T5 on-line
181 3600 MHz SPARC-T5 on-line
182 3600 MHz SPARC-T5 on-line
183 3600 MHz SPARC-T5 on-line
184 3600 MHz SPARC-T5 on-line
185 3600 MHz SPARC-T5 on-line
186 3600 MHz SPARC-T5 on-line
187 3600 MHz SPARC-T5 on-line
188 3600 MHz SPARC-T5 on-line
189 3600 MHz SPARC-T5 on-line
190 3600 MHz SPARC-T5 on-line
191 3600 MHz SPARC-T5 on-line
192 3600 MHz SPARC-T5 on-line
193 3600 MHz SPARC-T5 on-line
194 3600 MHz SPARC-T5 on-line
195 3600 MHz SPARC-T5 on-line
196 3600 MHz SPARC-T5 on-line
197 3600 MHz SPARC-T5 on-line
198 3600 MHz SPARC-T5 on-line
199 3600 MHz SPARC-T5 on-line
200 3600 MHz SPARC-T5 on-line
201 3600 MHz SPARC-T5 on-line
202 3600 MHz SPARC-T5 on-line
203 3600 MHz SPARC-T5 on-line
204 3600 MHz SPARC-T5 on-line
205 3600 MHz SPARC-T5 on-line
206 3600 MHz SPARC-T5 on-line
207 3600 MHz SPARC-T5 on-line
208 3600 MHz SPARC-T5 on-line
209 3600 MHz SPARC-T5 on-line
210 3600 MHz SPARC-T5 on-line
211 3600 MHz SPARC-T5 on-line
212 3600 MHz SPARC-T5 on-line
213 3600 MHz SPARC-T5 on-line
214 3600 MHz SPARC-T5 on-line
215 3600 MHz SPARC-T5 on-line
216 3600 MHz SPARC-T5 on-line
217 3600 MHz SPARC-T5 on-line
218 3600 MHz SPARC-T5 on-line
219 3600 MHz SPARC-T5 on-line
220 3600 MHz SPARC-T5 on-line
221 3600 MHz SPARC-T5 on-line
222 3600 MHz SPARC-T5 on-line
223 3600 MHz SPARC-T5 on-line
224 3600 MHz SPARC-T5 on-line
225 3600 MHz SPARC-T5 on-line
226 3600 MHz SPARC-T5 on-line
227 3600 MHz SPARC-T5 on-line
228 3600 MHz SPARC-T5 on-line
229 3600 MHz SPARC-T5 on-line
230 3600 MHz SPARC-T5 on-line
231 3600 MHz SPARC-T5 on-line
232 3600 MHz SPARC-T5 on-line
233 3600 MHz SPARC-T5 on-line
234 3600 MHz SPARC-T5 on-line
235 3600 MHz SPARC-T5 on-line
236 3600 MHz SPARC-T5 on-line
237 3600 MHz SPARC-T5 on-line
238 3600 MHz SPARC-T5 on-line
239 3600 MHz SPARC-T5 on-line
240 3600 MHz SPARC-T5 on-line
241 3600 MHz SPARC-T5 on-line
242 3600 MHz SPARC-T5 on-line
243 3600 MHz SPARC-T5 on-line
244 3600 MHz SPARC-T5 on-line
245 3600 MHz SPARC-T5 on-line
246 3600 MHz SPARC-T5 on-line
247 3600 MHz SPARC-T5 on-line
248 3600 MHz SPARC-T5 on-line
249 3600 MHz SPARC-T5 on-line
250 3600 MHz SPARC-T5 on-line
251 3600 MHz SPARC-T5 on-line
252 3600 MHz SPARC-T5 on-line
253 3600 MHz SPARC-T5 on-line
254 3600 MHz SPARC-T5 on-line
255 3600 MHz SPARC-T5 on-line
256 3600 MHz SPARC-T5 on-line
257 3600 MHz SPARC-T5 on-line
258 3600 MHz SPARC-T5 on-line
259 3600 MHz SPARC-T5 on-line
260 3600 MHz SPARC-T5 on-line
261 3600 MHz SPARC-T5 on-line
262 3600 MHz SPARC-T5 on-line
263 3600 MHz SPARC-T5 on-line
264 3600 MHz SPARC-T5 on-line
265 3600 MHz SPARC-T5 on-line
266 3600 MHz SPARC-T5 on-line
267 3600 MHz SPARC-T5 on-line
268 3600 MHz SPARC-T5 on-line
269 3600 MHz SPARC-T5 on-line
270 3600 MHz SPARC-T5 on-line
271 3600 MHz SPARC-T5 on-line
272 3600 MHz SPARC-T5 on-line
273 3600 MHz SPARC-T5 on-line
274 3600 MHz SPARC-T5 on-line
275 3600 MHz SPARC-T5 on-line
276 3600 MHz SPARC-T5 on-line
277 3600 MHz SPARC-T5 on-line
278 3600 MHz SPARC-T5 on-line
279 3600 MHz SPARC-T5 on-line
280 3600 MHz SPARC-T5 on-line
281 3600 MHz SPARC-T5 on-line
282 3600 MHz SPARC-T5 on-line
283 3600 MHz SPARC-T5 on-line
284 3600 MHz SPARC-T5 on-line
285 3600 MHz SPARC-T5 on-line
286 3600 MHz SPARC-T5 on-line
287 3600 MHz SPARC-T5 on-line
288 3600 MHz SPARC-T5 on-line
289 3600 MHz SPARC-T5 on-line
290 3600 MHz SPARC-T5 on-line
291 3600 MHz SPARC-T5 on-line
292 3600 MHz SPARC-T5 on-line
293 3600 MHz SPARC-T5 on-line
294 3600 MHz SPARC-T5 on-line
295 3600 MHz SPARC-T5 on-line
296 3600 MHz SPARC-T5 on-line
297 3600 MHz SPARC-T5 on-line
298 3600 MHz SPARC-T5 on-line
299 3600 MHz SPARC-T5 on-line
300 3600 MHz SPARC-T5 on-line
301 3600 MHz SPARC-T5 on-line
302 3600 MHz SPARC-T5 on-line
303 3600 MHz SPARC-T5 on-line
304 3600 MHz SPARC-T5 on-line
305 3600 MHz SPARC-T5 on-line
306 3600 MHz SPARC-T5 on-line
307 3600 MHz SPARC-T5 on-line
308 3600 MHz SPARC-T5 on-line
309 3600 MHz SPARC-T5 on-line
310 3600 MHz SPARC-T5 on-line
311 3600 MHz SPARC-T5 on-line
312 3600 MHz SPARC-T5 on-line
313 3600 MHz SPARC-T5 on-line
314 3600 MHz SPARC-T5 on-line
315 3600 MHz SPARC-T5 on-line
316 3600 MHz SPARC-T5 on-line
317 3600 MHz SPARC-T5 on-line
318 3600 MHz SPARC-T5 on-line
319 3600 MHz SPARC-T5 on-line
320 3600 MHz SPARC-T5 on-line
321 3600 MHz SPARC-T5 on-line
322 3600 MHz SPARC-T5 on-line
323 3600 MHz SPARC-T5 on-line
324 3600 MHz SPARC-T5 on-line
325 3600 MHz SPARC-T5 on-line
326 3600 MHz SPARC-T5 on-line
327 3600 MHz SPARC-T5 on-line
328 3600 MHz SPARC-T5 on-line
329 3600 MHz SPARC-T5 on-line
330 3600 MHz SPARC-T5 on-line
331 3600 MHz SPARC-T5 on-line
332 3600 MHz SPARC-T5 on-line
333 3600 MHz SPARC-T5 on-line
334 3600 MHz SPARC-T5 on-line
335 3600 MHz SPARC-T5 on-line
336 3600 MHz SPARC-T5 on-line
337 3600 MHz SPARC-T5 on-line
338 3600 MHz SPARC-T5 on-line
339 3600 MHz SPARC-T5 on-line
340 3600 MHz SPARC-T5 on-line
341 3600 MHz SPARC-T5 on-line
342 3600 MHz SPARC-T5 on-line
343 3600 MHz SPARC-T5 on-line
344 3600 MHz SPARC-T5 on-line
345 3600 MHz SPARC-T5 on-line
346 3600 MHz SPARC-T5 on-line
347 3600 MHz SPARC-T5 on-line
348 3600 MHz SPARC-T5 on-line
349 3600 MHz SPARC-T5 on-line
350 3600 MHz SPARC-T5 on-line
351 3600 MHz SPARC-T5 on-line
352 3600 MHz SPARC-T5 on-line
353 3600 MHz SPARC-T5 on-line
354 3600 MHz SPARC-T5 on-line
355 3600 MHz SPARC-T5 on-line
356 3600 MHz SPARC-T5 on-line
357 3600 MHz SPARC-T5 on-line
358 3600 MHz SPARC-T5 on-line
359 3600 MHz SPARC-T5 on-line
360 3600 MHz SPARC-T5 on-line
361 3600 MHz SPARC-T5 on-line
362 3600 MHz SPARC-T5 on-line
363 3600 MHz SPARC-T5 on-line
364 3600 MHz SPARC-T5 on-line
365 3600 MHz SPARC-T5 on-line
366 3600 MHz SPARC-T5 on-line
367 3600 MHz SPARC-T5 on-line
368 3600 MHz SPARC-T5 on-line
369 3600 MHz SPARC-T5 on-line
370 3600 MHz SPARC-T5 on-line
371 3600 MHz SPARC-T5 on-line
372 3600 MHz SPARC-T5 on-line
373 3600 MHz SPARC-T5 on-line
374 3600 MHz SPARC-T5 on-line
375 3600 MHz SPARC-T5 on-line
376 3600 MHz SPARC-T5 on-line
377 3600 MHz SPARC-T5 on-line
378 3600 MHz SPARC-T5 on-line
379 3600 MHz SPARC-T5 on-line
380 3600 MHz SPARC-T5 on-line
381 3600 MHz SPARC-T5 on-line
382 3600 MHz SPARC-T5 on-line
383 3600 MHz SPARC-T5 on-line
======================= Physical Memory Configuration ========================
Segment Table:
--------------------------------------------------------------
Base Segment Interleave Bank Contains
Address Size Factor Size Modules
--------------------------------------------------------------
0x0 256 GB 4 64 GB /SYS/PM0/CM0/CMP/BOB0/CH0/D0
/SYS/PM0/CM0/CMP/BOB0/CH1/D0
/SYS/PM0/CM0/CMP/BOB1/CH0/D0
/SYS/PM0/CM0/CMP/BOB1/CH1/D0
64 GB /SYS/PM0/CM0/CMP/BOB2/CH0/D0
/SYS/PM0/CM0/CMP/BOB2/CH1/D0
/SYS/PM0/CM0/CMP/BOB3/CH0/D0
/SYS/PM0/CM0/CMP/BOB3/CH1/D0
64 GB /SYS/PM0/CM0/CMP/BOB4/CH0/D0
/SYS/PM0/CM0/CMP/BOB4/CH1/D0
/SYS/PM0/CM0/CMP/BOB5/CH0/D0
/SYS/PM0/CM0/CMP/BOB5/CH1/D0
64 GB /SYS/PM0/CM0/CMP/BOB6/CH0/D0
/SYS/PM0/CM0/CMP/BOB6/CH1/D0
/SYS/PM0/CM0/CMP/BOB7/CH0/D0
/SYS/PM0/CM0/CMP/BOB7/CH1/D0
0x80000000000 256 GB 4 64 GB /SYS/PM0/CM1/CMP/BOB0/CH0/D0
/SYS/PM0/CM1/CMP/BOB0/CH1/D0
/SYS/PM0/CM1/CMP/BOB1/CH0/D0
/SYS/PM0/CM1/CMP/BOB1/CH1/D0
64 GB /SYS/PM0/CM1/CMP/BOB2/CH0/D0
/SYS/PM0/CM1/CMP/BOB2/CH1/D0
/SYS/PM0/CM1/CMP/BOB3/CH0/D0
/SYS/PM0/CM1/CMP/BOB3/CH1/D0
64 GB /SYS/PM0/CM1/CMP/BOB4/CH0/D0
/SYS/PM0/CM1/CMP/BOB4/CH1/D0
/SYS/PM0/CM1/CMP/BOB5/CH0/D0
/SYS/PM0/CM1/CMP/BOB5/CH1/D0
64 GB /SYS/PM0/CM1/CMP/BOB6/CH0/D0
/SYS/PM0/CM1/CMP/BOB6/CH1/D0
/SYS/PM0/CM1/CMP/BOB7/CH0/D0
/SYS/PM0/CM1/CMP/BOB7/CH1/D0
0x300000000000 256 GB 4 64 GB /SYS/PM3/CM0/CMP/BOB0/CH0/D0
/SYS/PM3/CM0/CMP/BOB0/CH1/D0
/SYS/PM3/CM0/CMP/BOB1/CH0/D0
/SYS/PM3/CM0/CMP/BOB1/CH1/D0
64 GB /SYS/PM3/CM0/CMP/BOB2/CH0/D0
/SYS/PM3/CM0/CMP/BOB2/CH1/D0
/SYS/PM3/CM0/CMP/BOB3/CH0/D0
/SYS/PM3/CM0/CMP/BOB3/CH1/D0
64 GB /SYS/PM3/CM0/CMP/BOB4/CH0/D0
/SYS/PM3/CM0/CMP/BOB4/CH1/D0
/SYS/PM3/CM0/CMP/BOB5/CH0/D0
/SYS/PM3/CM0/CMP/BOB5/CH1/D0
64 GB /SYS/PM3/CM0/CMP/BOB6/CH0/D0
/SYS/PM3/CM0/CMP/BOB6/CH1/D0
/SYS/PM3/CM0/CMP/BOB7/CH0/D0
/SYS/PM3/CM0/CMP/BOB7/CH1/D0
0x380000000000 256 GB 4 64 GB /SYS/PM3/CM1/CMP/BOB0/CH0/D0
/SYS/PM3/CM1/CMP/BOB0/CH1/D0
/SYS/PM3/CM1/CMP/BOB1/CH0/D0
/SYS/PM3/CM1/CMP/BOB1/CH1/D0
64 GB /SYS/PM3/CM1/CMP/BOB2/CH0/D0
/SYS/PM3/CM1/CMP/BOB2/CH1/D0
/SYS/PM3/CM1/CMP/BOB3/CH0/D0
/SYS/PM3/CM1/CMP/BOB3/CH1/D0
64 GB /SYS/PM3/CM1/CMP/BOB4/CH0/D0
/SYS/PM3/CM1/CMP/BOB4/CH1/D0
/SYS/PM3/CM1/CMP/BOB5/CH0/D0
/SYS/PM3/CM1/CMP/BOB5/CH1/D0
64 GB /SYS/PM3/CM1/CMP/BOB6/CH0/D0
/SYS/PM3/CM1/CMP/BOB6/CH1/D0
/SYS/PM3/CM1/CMP/BOB7/CH0/D0
/SYS/PM3/CM1/CMP/BOB7/CH1/D0
======================================== IO Devices =======================================
Slot + Bus Name + Model Max Speed Cur Speed
Status Type Path /Width /Width
-------------------------------------------------------------------------------------------
/SYS/MB/USB_CTLR PCIE usb-pciexclass,0c0330 -- --
/pci@300/pci@1/pci@0/pci@4/pci@0/pci@6/usb@0
/SYS/RIO/XGBE0 PCIE network-pciex8086,1528 -- --
/pci@300/pci@1/pci@0/pci@4/pci@0/pci@8/network@0
/SYS/RIO/NET1 PCIE network-pciex8086,1528 -- --
/pci@300/pci@1/pci@0/pci@4/pci@0/pci@8/network@0,1
/SYS/MB/SASHBA0 PCIE scsi-pciex1000,87 LSI,2308_2 -- --
/pci@300/pci@1/pci@0/pci@4/pci@0/pci@c/scsi@0
/SYS/RCSA/PCIE1 PCIE network-pciex8086,10fb X1109a-z/1109a-z -- --
/pci@300/pci@1/pci@0/pci@6/network@0
/SYS/RCSA/PCIE1 PCIE network-pciex8086,10fb X1109a-z/1109a-z -- --
/pci@300/pci@1/pci@0/pci@6/network@0,1
/SYS/RCSA/PCIE3 PCIE pciex15b3,1003 -- --
/pci@340/pci@1/pci@0/pci@6/pciex15b3,1003@0
/SYS/RCSA/PCIE9 PCIE network-pciex8086,10fb X1109a-z/1109a-z -- --
/pci@380/pci@1/pci@0/pci@a/network@0
/SYS/RCSA/PCIE9 PCIE network-pciex8086,10fb X1109a-z/1109a-z -- --
/pci@380/pci@1/pci@0/pci@a/network@0,1
/SYS/RCSA/PCIE11 PCIE pciex15b3,1003 -- --
/pci@3c0/pci@1/pci@0/pci@e/pciex15b3,1003@0
============================ Environmental Status ============================
Fan sensors:
All fan sensors are OK.
Temperature sensors:
All temperature sensors are OK.
Current sensors:
All current sensors are OK.
Voltage sensors:
All voltage sensors are OK.
============================ FRU Status ============================
All FRUs are enabled.
oracle@enksc1client01:/export/home/oracle:dbm011
$
oracle@enksc1client01:/export/home/oracle:dbm011
$
oracle@enksc1client01:/export/home/oracle:dbm011
$
oracle@enksc1client01:/export/home/oracle:dbm011
$ ls
esp local.login oradiag_oracle set_cluster_interconnect.wk1
local.cshrc local.profile set_cluster_interconnect.lst
}}}
! Solaris Performance Metrics Disk Utilisation by Process
http://www.brendangregg.com/Solaris/paper_diskubyp1.pdf
{{{
zoneadm list -civ | grep er2zgrc319v
zoneadm list -civ | grep er2zgrc320v
zoneadm list -civ | grep er2zgrc321v
zoneadm list -civ | grep er2zgrc322v
# 1 - To check the current environment properties:
svccfg -s system/identity:node listprop config
root@er2zgrc321v:~# svccfg -s system/identity:node listprop config
config application
config/enable_mapping boolean true
config/ignore_dhcp_hostname boolean false
config/loopback astring
config/nodename astring er2zgrc321v
# 2 - Set the new hostname
from: er2zgrc321v-i
to: er2zgrc421v
svccfg -s system/identity:node setprop config/nodename="er2zgrc421v"
svccfg -s system/identity:node setprop config/loopback="er2zgrc421v"
root@er2zgrc321v:~# svccfg -s system/identity:node listprop config
config application
config/enable_mapping boolean true
config/ignore_dhcp_hostname boolean false
config/nodename astring er2zgrc421v
config/loopback astring er2zgrc421v
root@er2zgrc321v:~#
# 3- Refresh the properties:
svccfg -s system/identity:node refresh
#4 - Restart the service:
svcadm restart system/identity:node
#5 - verify that the changes took place:
svccfg -s system/identity:node listprop config
zoneadm -z er2zgrc321v-i reboot
root@er2s1app01:~# zoneadm list -civ | grep er2zgrc319v
root@er2s1app01:~# zoneadm list -civ | grep er2zgrc320v
root@er2s1app01:~# zoneadm list -civ | grep er2zgrc321v
35 er2zgrc321v-i running /zones/er2zgrc321v solaris excl
root@er2s1app01:~# zoneadm list -civ | grep er2zgrc322v
root@er2s1app01:~# zlogin er2zgrc321v-i
[Connected to zone 'er2zgrc321v-i' pts/5]
Last login: Thu Feb 23 22:17:15 2017 from er2s1vm02.erp.h
Oracle Corporation SunOS 5.11 11.3 August 2016
You have new mail.
root@er2zgrc421v:~#
root@er2zgrc421v:~#
root@er2zgrc421v:~#
root@er2zgrc421v:~#
}}}
{{{
Edit the resolv.conf and nsswitch.conf and then run this script to upload it to SMF
/SAP_media/enkitec/scripts/nscfg.sh
# Configure SMF (directly)
#svccfg -s network/dns/client listprop config
#svccfg -s network/dns/client setprop config/nameserver = net_address: "(99.999.10.53 99.999.200.53)"
#svccfg -s network/dns/client setprop config/domain = astring: erp.example.com
#svccfg -s network/dns/client setprop config/search = astring: '("erp.example.com")'
#svccfg -s name-service/switch setprop config/ipnodes = astring: '("files dns")'
#svccfg -s name-service/switch setprop config/host = astring: '("files dns")'
#svccfg -s network/dns/client listprop config
#svccfg -s name-service/switch listprop config
#svcadm enable dns/client
#svcadm refresh name-service/switch
# Or modify up your /etc/resolv.conf and your /etc/nsswitch.conf and then import them with nscfg.
nscfg import -f svc:/system/name-service/switch:default
nscfg import -f name-service/switch:default
/usr/sbin/nscfg import -f dns/client
nscfg import -f dns/client:default
svcadm enable dns/client
svcadm refresh name-service/switch
svcadm refresh dns/client
}}}
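To verify the import took, a minimal check (svcprop mirrors the listprop calls used above; getent exercises the resulting lookup order):
{{{
svcprop -p config svc:/network/dns/client
svcprop -p config svc:/system/name-service/switch:default
getent hosts oracle.com
}}}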
! nsswitch
{{{
in sap it needs to be
root@hostname:~# cat /etc/nsswitch.conf | grep hosts
hosts: cluster files dns
}}}
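On Solaris 11 /etc/nsswitch.conf is generated from SMF, so the persistent way to set that lookup order is through the switch service. A sketch following the same svccfg pattern used in the DNS notes above:
{{{
svccfg -s name-service/switch setprop config/host = astring: '("cluster files dns")'
svcadm refresh name-service/switch
grep hosts /etc/nsswitch.conf   # should now show: hosts: cluster files dns
}}}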
! DNS
{{{
root@er1p2vm03:~# cat /etc/resolv.conf
#
# _AUTOGENERATED_FROM_SMF_V1_
#
# WARNING: THIS FILE GENERATED FROM SMF DATA.
# DO NOT EDIT THIS FILE. EDITS WILL BE LOST.
# See resolv.conf(4) for details.
domain erp.example.com
search erp.example.com
options timeout:1
nameserver 99.999.10.53
nameserver 99.999.200.53
svccfg -s network/dns/client delprop config/domain
svcadm refresh dns/client
root@er1p2vm03:~# cat /etc/resolv.conf
#
# _AUTOGENERATED_FROM_SMF_V1_
#
# WARNING: THIS FILE GENERATED FROM SMF DATA.
# DO NOT EDIT THIS FILE. EDITS WILL BE LOST.
# See resolv.conf(4) for details.
search erp.example.com
options timeout:1
nameserver 99.999.10.53
nameserver 99.999.200.53
}}}
!! references
<<<
11.2 Grid Install Fails with SEVERE: [FATAL] [INS-13013], and PRVF-5640 or a Warning in "Task resolv.conf Integrity" (Doc ID 1271996.1)
https://blogs.oracle.com/gurubalan/entry/dns_client_configuration_guide_for
svccfg man page http://docs.oracle.com/cd/E19253-01/816-5166/6mbb1kqjj/index.html
DNS client configuration steps in Oracle Solaris 11 https://blogs.oracle.com/gurubalan/entry/dns_client_configuration_guide_for
https://newbiedba.wordpress.com/2012/12/05/solaris-11-how-to-configure-resolv-conf-and-nsswitch-conf/
https://www.itfromallangles.com/2012/05/solaris-11-dns-client-configuration-using-svccfg/
How to set dns-server and search domain in solaris 5.11 http://www.rocworks.at/wordpress/?p=284
https://blogs.oracle.com/SolarisSMF/entry/changes_to_svccfg_import_and
<<<
! NTP
{{{
root@er1p1vm03:~# cat /etc/inet/ntp.conf
server 99.999.10.53
server 99.999.200.53
slewalways yes
disable pll
echo "slewalways yes" >> /etc/inet/ntp.conf
echo "disable pll" >> /etc/inet/ntp.conf
svccfg -s svc:/network/ntp:default setprop config/slew_always = true
svcadm refresh ntp
svcadm restart ntp
svcprop -p config/slew_always svc:/network/ntp:default
}}}
!! references
<<<
Oracle RAC Install: Runcluvfy.sh Fails With PRVF-5436 When Using NTP.CONF On Solaris 11 (Doc ID 1511006.1)
11.2.0.1/11.2.0.2 to 11.2.0.3 Grid Infrastructure and Database Upgrade on Exadata Database Machine (Doc ID 1373255.1)
o CVU may complain about missing "'slewalways yes' & 'disable pll'". If this is the case then this message can be ignored. Solaris 11 Express has an SMF property for configuring slew NTP settings, see bug 13612271
CML1069-DellStorageCenterOracleRAC-SolarisBPs.pdf
Managing Network Time Protocol (Tasks) https://docs.oracle.com/cd/E23824_01/html/821-1454/time-20.html
https://rageek.wordpress.com/2012/04/10/oracle-rac-and-ntpd-conf-configuration-on-solaris-11/
How to Deploy Oracle RAC on Oracle Solaris 11 Zone Clusters http://www.oracle.com/technetwork/articles/servers-storage-admin/deployrac-onsolaris11-1721976.html
<<<
Monitoring Swap Resources https://docs.oracle.com/cd/E23824_01/html/821-1459/fsswap-52195.html
Playing with Swap Monitoring and Increasing Swap Space Using ZFS Volumes http://www.oracle.com/technetwork/articles/servers-storage-admin/monitor-swap-solaris-zfs-2216650.html
Video Tutorial: Installing Solaris 11 in VirtualBox
http://blogs.oracle.com/jimlaurent/2010/11/video_tutorial_installing_solaris_11_in_virtualbox.html
http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html
''what's new'' http://www.oracle.com/technetwork/server-storage/solaris11/documentation/solaris11-whatsnew-201111-392603.pdf
''Taking Your First Steps with Oracle Solaris 11'' http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-112-s11-first-steps-524819.html
http://www.oracle.com/technetwork/server-storage/solaris11/documentation/index.html
http://en.wikipedia.org/wiki/Solaris_(operating_system)
http://www.sun.drydog.com/faq/s86faq.html#s4.18
http://www.oracle-base.com/articles/11g/OracleDB11gR2InstallationOnSolaris10.php
http://137.254.16.27/jimlaurent/entry/video_tutorial_installing_solaris_11
{{{
The isainfo command can be used to determine if a Solaris system has been configured to run in 32 or 64 bit mode.
Run the command
isainfo -v
If the system is running in 32 bit mode, you will see the following output:
32-bit sparc applications
On a 64 bit Solaris system, you’ll see:
64-bit sparcv9 applications
32-bit sparc applications
bash-3.00# isainfo
amd64 i386
bash-3.00# isainfo -kv
64-bit amd64 kernel modules
bash-3.00# isainfo -nv
64-bit amd64 applications
	cx16 mon sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
bash-3.00# isainfo -b
64
/usr/bin/isainfo -kv
If your OS is 64-bit, you will see output like:
64-bit sparcv9 kernel modules
If your OS is 32-bit, you will get this output:
32-bit sparc kernel modules
}}}
http://blogs.oracle.com/pomah/entry/configuration_example_of_oracle_asm1
http://gdesaboyina.wordpress.com/2009/04/01/oracle-asm-on-solaris-containerslocalzonesnon-global-zones/
http://blogs.oracle.com/pomah/entry/configuring_oracle_asm_in_solaris
http://blogs.oracle.com/pomah/entry/configuration_example_of_oracle_asm
http://askdba.org/weblog/2008/07/oracle-11g-installation-on-solaris-10/
http://blog.csdn.net/wenchenzhao113/article/details/4383886
http://www.oracle.com/technetwork/articles/systems-hardware-architecture/deploying-rac-in-containers-168438.pdf <-- GOOD STUFF
http://wikis.sun.com/display/BluePrints/Deploying+Oracle+Real+Application+Clusters+(RAC)+on+Solaris+Zone+Clusters
http://goo.gl/GZtcV <-- racsig vmware asm on solaris
also look at the [[Veritas Oracle Doc]]
https://www.safaribooksonline.com/library/view/oracle-solaris-11/9781618660831/
<<showtoc>>
! Installing Oracle Solaris 11
<<<
user - jack:jack
root - root:solaris
<<<
!! Two ways of installing
* using Live CD (only x86) or Text installer
!! install log location
{{{
oracle@enksc1db0201:/export/home/oracle:dbm012
$ less /var/sadm/system/logs/install_log
}}}
!! system messages
{{{
less /var/sadm/system/logs/messages
}}}
!! OBP (openboot prom)
openboot prom <- solaris boot loader used on SPARC
grub <- boot loader used on x86
* to access the openboot prom
{{{
eeprom
monitor
banner
}}}
! Updating and Managing Packages (IPS)
* IPS replaced SVR4 found in earlier releases
* allows you to list, search, install, update, remove packages
!! IPS admin
** manage all software packages
** manage software publishers
** manage repositories
** update an image to a new OS release
*** can be used to create images; an image is essentially an installation that you can baseline, modify with IPS, snapshot, or back up as a bootable environment. Changes don't have to stick - you can revert to the previous image if you like, so you can test new OS packages without damaging your system
** create and manage boot environments
*** boot environments are what these images become - you can keep a baseline image, or several of them, and boot from any one
!! IPS terms
** manifest - describes an IPS package
** repository - an internet or network location where packages are published; the location is specified by a URI (uniform resource identifier)
** image - a location where IPS packages can be installed
** catalog - lists all packages in a given repository
** package archive - file that contains publisher info
** mirror - a repository that contains package content
** boot environment (BE) - bootable instance of an image (OS)
!! CLI - IPS
{{{
# publisher stuff
pkg publisher <- list publisher
pkg set-publisher -g http://pkg.openindiana.org/sfe sfe <- add publisher
# troubleshooting
pkg info
pkg contents
pkg history
pkg uninstall
}}}
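A typical search-then-install round trip looks like this (the package name diagnostic/top is just an example):
{{{
pkg search -p top          # find packages matching "top"
pkg install diagnostic/top # install from the configured publisher
pkg info diagnostic/top    # confirm version and state
}}}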
!! beadm (manage boot environments)
{{{
$ beadm list
BE Flags Mountpoint Space Policy Created
-- ----- ---------- ----- ------ -------
SCMU_2016.07 NR / 13.65G static 2016-08-30 15:29
solaris - - 115.86M static 2016-08-29 16:55
solaris-backup-1 - - 103.84M static 2016-08-29 23:02
solaris-bkup - - 103.74M static 2016-08-29 22:42
solaris-idr - - 299.37M static 2016-08-30 03:57
beadm create test
beadm destroy test
}}}
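To actually boot into a new BE you activate it first; a sketch using the same BE name as above:
{{{
beadm create test
beadm activate test   # Flags column in 'beadm list' changes to R (active on reboot)
init 6                # reboot into the newly activated BE
}}}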
! Administering Services (SMF - service management facility) - svcs and svcadm
* SMF part of FMA (fault management architecture)
* comes in both GUI (SMF services) and CLI versions
!! SMF standard naming convention - FMRI (fault management resource identifier)
* scheme - type of service
* location - system service is running on
* function category - the service function
** applications
** network
** device
** milestone (run level)
** system
* description - service name
* instance - which instance (if many instances are running)
!! example name convention
{{{
scheme://location/function/description:instance
Example: svc://localhost/network/nfs/server:default
}}}
!! service states
* online
* offline
* disabled
* maintenance
* degraded
* legacy_run
!! CLI - svcs, svcadm, svccfg
* svcs - list the services and properties
* svcadm - administration of services
* svccfg - create/define your own service (created in a service manifest file, the best is to build from an existing manifest file)
!!! svcs
{{{
svcs -a
# show all services that ExaWatcher depends upon in order to run
$ svcs -d ExaWatcher
STATE STIME FMRI
online Jan_23 svc:/milestone/network:default
online Jan_23 svc:/system/filesystem/local:default
online Jan_23 svc:/milestone/multi-user:default
oracle@enksc1db0201:/export/home/oracle:dbm012
# show all services that depend on ExaWatcher itself for them to run
$ svcs -D ExaWatcher
STATE STIME FMRI
oracle@enksc1db0201:/export/home/oracle:dbm012
$ svcs -d smtp
STATE STIME FMRI
online Jan_23 svc:/system/identity:domain
online Jan_23 svc:/network/service:default
online Jan_23 svc:/milestone/name-services:default
online Jan_23 svc:/system/filesystem/local:default
online Jan_23 svc:/system/filesystem/autofs:default
online Jan_23 svc:/system/system-log:default
oracle@enksc1db0201:/var/svc/log:dbm012
$
oracle@enksc1db0201:/var/svc/log:dbm012
$ svcs -D smtp
STATE STIME FMRI
oracle@enksc1db0201:/var/svc/log:dbm012
$
oracle@enksc1db0201:/var/svc/log:dbm012
# verbose on service
$ svcs -xv smtp
svc:/network/smtp:sendmail (sendmail SMTP mail transfer agent)
State: online since Mon Jan 23 17:12:29 2017
See: man -M /usr/share/man -s 1M sendmail
See: /var/svc/log/network-smtp:sendmail.log
Impact: None.
}}}
!!! svcadm
{{{
svcadm -h
# boot and shutdown system
svcadm milestone svc:/milestone/single-user:default
svcadm milestone svc:/milestone/all
svcadm milestone svc:/milestone/none
svcadm milestone help
}}}
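Day-to-day service control with svcadm, for reference (ssh is a standard Solaris 11 FMRI):
{{{
svcadm enable  svc:/network/ssh:default
svcadm disable svc:/network/ssh:default
svcadm restart svc:/network/ssh:default
svcadm clear   svc:/network/ssh:default   # take a service out of maintenance state
}}}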
!!! svccfg
{{{
# create a new service
svccfg -h
svccfg validate NewService.xml
svccfg import NewService.xml
svcadm enable NewService
}}}
!! services log location
* each service has its own log
{{{
system-zones-monitoring:default.log
system-zones:default.log
oracle@enksc1db0201:/var/svc/log:dbm012
$ less system-pkgserv:default.log
oracle@enksc1db0201:/var/svc/log:dbm012
$ pwd
/var/svc/log
}}}
! Administering Data Storage (ZFS)
!! ZFS does the following
* storage
* data integrity
* encryption
* backup and restore of files
* creation and management of containers (zones)
!! ZFS features
* enables addressing of multiple disk storage devices as a large contiguous block
* 128-bit addressing (means no size restrictions of files)
* 256-bit checksums on all disk operations
* support RAID parity, striping, and mirroring schemes
* automated detection and repair of corrupt data
* encryption to protect sensitive data
* data compression to save space
* user storage quotas
* sharing data with other ZFS pools
* snapshot and recovery
!! ZFS terms
* Filesystem
* Pool - one or more disk devices or partitions
* Clone - an exact copy of a ZFS filesystem
* Snapshot - a copy of the state of the filesystem
* Checksum - check integrity
* Quota - limit on the storage amount for a user
!! ZFS storage pools
* created and configured using the /usr/sbin/zpool command
* the rpool is the default ZFS pool
!!! zpool commands
[img(50%,50%)[ http://i.imgur.com/2zoGGwY.png ]]
!!! CLI - create a pool
{{{
mkdir /zfstest
cd /zfstest
mkfile -n 100m testdisk1
mkfile -n 100m testdisk2
mkfile -n 100m testdisk3
mkfile -n 100m testdisk4
zpool create testpool /zfstest/testdisk1 /zfstest/testdisk2 /zfstest/testdisk3
root@enksc1db0201:/zfstest# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 416G 172G 244G 41% 1.00x ONLINE -
testpool 285M 164K 285M 0% 1.00x ONLINE -
root@enksc1db0201:/zfstest# zpool status
pool: rpool
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c0t5000CCA01D8F2528d0s0 ONLINE 0 0 0
c0t5000CCA01D8FC350d0s0 ONLINE 0 0 0
errors: No known data errors
pool: testpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
/zfstest/testdisk1 ONLINE 0 0 0
/zfstest/testdisk2 ONLINE 0 0 0
/zfstest/testdisk3 ONLINE 0 0 0
errors: No known data errors
root@enksc1db0201:/zfstest# df -h
Filesystem Size Used Available Capacity Mounted on
testpool 253M 31K 253M 1% /testpool
zpool destroy testpool
}}}
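The same test files can also be used to build a redundant pool; a sketch of a mirrored layout:
{{{
zpool create testpool mirror /zfstest/testdisk1 /zfstest/testdisk2
zpool status testpool   # shows mirror-0 with both devices
zpool destroy testpool
}}}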
!! ZFS file systems
https://docs.oracle.com/cd/E23824_01/html/821-1448/gaynd.html
!!! ZFS commands
[img(50%,50%)[ http://i.imgur.com/J4y91jv.png ]]
!!! CLI - create a filesystem
{{{
zfs create testpool/data
}}}
!!! CLI - list filesystems
Managing Your ZFS Root Pool https://docs.oracle.com/cd/E23824_01/html/821-1448/gjtuk.html
Querying ZFS File System Information https://docs.oracle.com/cd/E23824_01/html/821-1448/gazsu.html
{{{
zfs list
root@enksc1db0201:/zfstest# df -h
Filesystem     Size  Used  Available  Capacity  Mounted on
testpool       253M  32K   253M       1%        /testpool
testpool/data  253M  31K   253M       1%        /testpool/data
}}}
!!! mount/unmount
{{{
zfs unmount testpool/data
zfs mount testpool/data
# to mount all
zfs mount -a
}}}
!! ZFS snapshots and clones
!!! snapshot
* a snapshot is a read-only copy of the state of a ZFS filesystem
* takes up almost no disk space
* keeps track of only changes to the filesystem
!!! clone
* a clone is a writeable copy of a snapshot
* used to turn a snapshot into a complete filesystem
* can be placed within any other pool
!!! administering ZFS snapshot and clone
* Timeslider is a GUI tool to manage snapshots
* you can also use zfs commands
!!! create a snapshot
{{{
zfs snapshot <poolname>@<snapshotname>
zfs snapshot testpool@friday
# to roll back to a given snapshot
# you would usually roll back to the most recent snapshot;
# to roll back to an older one you must first destroy the more recent snapshots
zfs rollback testpool@friday
}}}
!!!! list/get all snapshots
{{{
root@enksc1db0201:~# zfs get all | grep -i "type snapshot"
rpool/ROOT/SCMU_2016.07@install type snapshot -
rpool/ROOT/SCMU_2016.07@snapshot type snapshot -
rpool/ROOT/SCMU_2016.07@2016-08-30-04:02:48 type snapshot -
rpool/ROOT/SCMU_2016.07@2016-08-30-08:57:45 type snapshot -
rpool/ROOT/SCMU_2016.07@2016-08-30-20:29:07 type snapshot -
rpool/ROOT/SCMU_2016.07/var@install type snapshot -
rpool/ROOT/SCMU_2016.07/var@snapshot type snapshot -
rpool/ROOT/SCMU_2016.07/var@2016-08-30-04:02:48 type snapshot -
rpool/ROOT/SCMU_2016.07/var@2016-08-30-08:57:45 type snapshot -
rpool/ROOT/SCMU_2016.07/var@2016-08-30-20:29:07 type snapshot -
testpool@friday type snapshot -
}}}
!!! create a clone
{{{
zfs clone <snapshot> <filesystem>
zfs clone testpool@friday testpool/friday_clone
}}}
!! troubleshooting ZFS
!!! get history of changes
{{{
zpool history
}}}
!!! get info on pool and filesystem
{{{
zfs get all
zfs list
zpool status
}}}
! Administering Oracle Solaris Zones
!! Zone configuration
!! Zone resource utilization
!! Administering zones
!! Zone and resource issues
! Administering a Physical Network
! Administering User Accounts
! System and File Access
! System Processes and Tasks
''cpu count''
{{{
http://www.solarisinternals.com/wiki/index.php/CPU/Processor <-- good stuff reference
http://blogs.oracle.com/sistare/entry/cpu_to_core_mapping <-- good script
mpstat |tail +2 |wc -l
# psrinfo -v
/usr/sbin/psrinfo
/usr/platform/sun4u/sbin/prtdiag
uname -p
prtdiag
prtconf
swap -l
top
prtconf | grep "Memory"
check Total physical memory:
# prtdiag -v | grep Memory
# prtconf | grep Memory
---
check Free physical Memory:
# top (if available)
# sar -r 5 10
Free Memory = freemem * 8 (pagesize=8k)
# vmstat 5 10
Free Memory = free
---
For swap:
# swap -s
# swap -l
}}}
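On Solaris 11 the socket/core/thread breakdown can be read straight from psrinfo; a quick sketch:
{{{
psrinfo -p       # number of physical processors (sockets)
psrinfo -pv      # per-socket detail: cores, threads, chip name
psrinfo | wc -l  # number of virtual processors (hardware threads)
}}}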
''check memory''
http://oraclepoint.com/oralife/2011/02/09/different-ways-to-check-memory-usage-on-solaris-server/
{{{
Unix Commands
1. echo ::memstat | mdb -k
2. prstat -t
3. ps -efo pmem,uid,pid,ppid,pcpu,comm | sort -r
4. /usr/proc/bin/pmap -x <process-id>
Scripts & Tools
1. NMUPM utility (Oracle Support) How to Check the Host Memory Usage on Solaris via NMUPM Utility [ID 741004.1]
}}}
{{{
nmupm_mem.sh :
#!/bin/ksh
PAGESZ="/usr/bin/pagesize"
BC="/bin/bc"
SCALE=2
WAIT=300
MAXCOUNT=3
NMUPM="$ORACLE_HOME/bin/nmupm osLoad"
echo "Calulates average memory (interval $WAIT (s)) usage on Solaris using nmupm"
PAGESIZE=`$PAGESZ`
result1=`$NMUPM | awk -F"|" '{print $14 }'`
REALMEM=`$NMUPM | awk -F"|" '{print $13 }'`
#echo $result1
X=0
while [ $X -le $MAXCOUNT ]
do
sleep $WAIT
result2=`$NMUPM | awk -F"|" '{print $14 }'`
#echo $result2
DIFF="($result2 - $result1) * $PAGESIZE / 1024 / $WAIT"
RESULT=$($BC << EOF
scale=$SCALE
(${DIFF})
EOF
)
MEMREL="$RESULT / $REALMEM * 100"
MEMPCT=$($BC << EOF
scale=$SCALE
(${MEMREL})
EOF
)
#echo $result1
echo "Memory $REALMEM [kB] Freemem $RESULT [kB] %Free $MEMPCT"
result1=$result2
X=$((X+1))
done
}}}
<<<
how to login on pdom <- ilom (use to connect,restart,get info)
how to login on ldom <- global/non-global
how to login on zones <- zlogin
what is a solaris cluster <- clustered filesystem (tied with zones availability so use clzc)
how rac is configured <- zone level or ldom level
<<<
''Nice paper on hardware virtualization that also applies to zones'' http://neerajbhatia.wordpress.com/2011/10/07/capacity-planning-and-performance-management-on-ibm-powervm-virtualized-environment/
''Consolidating Applications with Oracle Solaris Containers'' http://www.oracle.com/us/products/servers-storage/solaris/consolid-solaris-containers-wp-075578.pdf
http://www.usenix.org/events/vm04/wips/tucker.pdf
http://61.153.44.88/opensolaris/solaris-containers-resource-management-and-solaris-zones-developer-guide/html/p21.html
http://61.153.44.88/opensolaris/solaris-containers-resource-management-and-solaris-zones-developer-guide/html/p2.html#concepts-2
''search for "solaris poolstat output" and you'll find lot's of resources regarding containers''
Solaris Containers — What They Are and How to Use Them http://www.google.com.ph/url?sa=t&source=web&cd=52&ved=0CB4QFjABODI&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.150.5215%26rep%3Drep1%26type%3Dpdf&rct=j&q=solaris%20poolstat%20output&ei=8fecTqyzIJDJsQKPyKiTCg&usg=AFQjCNGserXvEgoNyYzJaXHtqfqw78dSCA&cad=rja
''System Administration Guide: Virtualization Using the Solaris Operating System'' http://dc401.4shared.com/doc/FUgUu5Vu/preview.html
''BEST PRACTICES FOR RUNNING ORACLE DATABASES IN SOLARIS™ CONTAINERS'' http://www.filibeto.org/sun/lib/blueprints/820-7195.pdf
april 2010 ''Best Practices for Running Oracle Databases in Oracle Solaris Containers'' http://developers.sun.com/solaris/docs/oracle_containers.pdf
''The Sun BluePrints™ Guide to Solaris™ Containers'' http://61.153.44.88/server-storage/820-0001.pdf
''poolstat, the counterpart of lparstat on aix'' http://download.oracle.com/docs/cd/E19963-01/html/821-1460/rmpool-107.html, http://docs.huihoo.com/opensolaris/solaris-containers-resource-management-and-solaris-zones/html/p36.html, http://dlc.sun.com/osol/docs/content/SYSADRM/rmpool-107.html, http://download.oracle.com/docs/cd/E19455-01/817-1592/rmpool.task-105/index.html <-- this is the output
{{{
machine% poolstat
pset
id pool size used load
0 pool_default 4 3.6 6.2
1 pool_sales 4 3.3 8.4
}}}
''System Administration Guide: Solaris Containers, Resource Management, and Zones'' http://www.filibeto.org/~aduritz/truetrue/solaris10/sys-admin-rm.pdf
''How the SPARC T4 Processor Optimizes Throughput Capacity: A Case Study'' http://www.oracle.com/technetwork/server-storage/sun-sparc-enterprise/documentation/t-series-latency-1579242.pdf
''CMT performance''
Important Considerations for Operating Oracle RAC on T-Series Servers [ID 1181315.1]
Migration from fast single threaded CPU machine to CMT UltraSPARC T1 and T2 results in increased CPU reporting and diminished performance [ID 781763.1]
On Solaris 8, Persistent Write Contention for File System Files May Result in Degraded I/O Performance [ID 1019557.1]
Database Responding Very Slowly - Aiowait Timed Out Messages [ID 236322.1]
How to Use the Solaris Truss Command to Trace and Understand System Call Flow and Operation [ID 1010771.1]
Database Hangs With Aiowait Time Out Warning if Async IO Is True [ID 163530.1]
Warning: Aiowait Timed Out 1 Times Database Not Responding Cannot kill processes [ID 743425.1]
Warning "aiowait timed out x times" in alert.log [ID 222989.1] <-- GOOD STUFF
Database Instance Hang at Database Checkpoint With Block Change Tracking Enabled. [ID 1326886.1]
ORA-12751 cpu time or run time policy violation [ID 761298.1]
Solaris[TM] Operating System: All TNF Probes in the Kernel and Prex [ID 1017600.1]
How to Analyze High CPU Utilization In Solaris [ID 1008930.1]
How to Determine What is Consuming CPU System Time Using the lockstat Command [ID 1001812.1]
GUDS - A Script for Gathering Solaris Performance Data [ID 1285485.1]
Sun Fire[TM] Midframe/Midrange Servers: CPU/Memory Board Dynamic Reconfiguration (DR) Considerations [ID 1003332.1] <-- cfgadm
Migration from fast single threaded CPU machine to CMT UltraSPARC T1 and T2 results in increased CPU reporting
Doc ID: 781763.1
<<showtoc>>
! corrupted MAC
ssh or scp connection terminates with the error "Corrupted MAC on input" (Doc ID 1389880.1) To BottomTo Bottom
! NFS mount hang
System gets hung while reboot due to in progress NFS READ or WRITE operations, even though NFS server is available https://access.redhat.com/solutions/778173
RHEL mount hangs: nfs: server [...] not responding, still trying https://access.redhat.com/solutions/28211
Strange NFS problem (not responding still trying) https://community.hpe.com/t5/System-Administration/Strange-NFS-problem-not-responding-still-trying/td-p/3269194
How to Configure a Physical Interface After System Installation http://docs.oracle.com/cd/E19253-01/816-4554/fpdcn/index.html
How to Get Started Configuring Your Network in Oracle Solaris 11 http://www.oracle.com/technetwork/articles/servers-storage-dev/s11-network-config-1632927.html
http://blogs.oracle.com/observatory/entry/replacing_the_system_hdd_on
How to enable SAR (System Activity Reporter) on Solaris 10
http://muctable.org/?p=102
http://www.virtualsystemsadmin.com/?q=node/194
{{{
simply put these lines in the root crontab. Execution of script at the cron intervals collects data and bangs into daily files.
# Collect measurements at 10-minute intervals
0,10,20,30,40,50 * * * * /usr/lib/sa/sa1
# Create daily reports and purge old files
0 * * * * /usr/lib/sa/sa2 -A
}}}
http://docs.oracle.com/cd/E23824_01/html/821-1451/spconcepts-60676.html
The data files are placed in the /var/adm/sa directory
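Once sa1 has been collecting, the daily binary files can be read back with sar -f; a sketch:
{{{
# CPU utilization from today's file (files are named saDD by day of month)
sar -u -f /var/adm/sa/sa$(date +%d)
}}}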
http://blogs.oracle.com/jimlaurent/entry/solaris_faq_myths_and_facts
http://nixcraft.com/solaris-opensolaris/738-how-find-swap-solaris-unix.html
http://www.ehow.com/how_6080053_determine-paging-space-solaris.html
{{{
df -kh swap
swap -s
}}}
Linux Kernel: The SLAB Allocator [ID 434351.1]
TECH: Unix Virtual Memory, Paging & Swapping explained [ID 17094.1]
ADDM Reports Significant Virtual Memory Paging [ID 1322964.1]
ADDM Reports "Significant Virtual Memory Paging Was Detected On The Host Operating System" [ID 395957.1]
/usr/platform/sun4u/sbin/prtdiag -v
If you just want information on the CPU’s you can also try:
psrinfo -v
Finally, to just get your total memory size do:
prtconf | grep Memory
For Hardware Info:
/usr/platform/$(uname -m)/sbin/prtdiag -v
For Disks:
Either /usr/sbin/format or /usr/bin/iostat -En <-- disk info
Solaris Tips and Tricks http://sysunconfig.net/unixtips/solaris.html
VxFS Commands quick reference - http://eval.veritas.com/downloads/van/fs_quickref.pdf
http://hub.opensolaris.org/bin/view/Community+Group+zones/faq#HQ:Whatisazone3F
http://www.usenix.org/events/vm04/wips/tucker.pdf <-- cool stuff usenix whitepaper
-- Identify global, non-global
http://alittlestupid.com/2009/03/30/how-to-identify-a-solaris-non-global-zone/
http://www.mysysad.com/2009/01/indentify-zone-processes-via-global.html
http://unix.ittoolbox.com/groups/technical-functional/solaris-l/how-to-find-which-is-the-global-zone-for-a-particular-nonglobal-zone-3307215
http://www.unix.com/solaris/128825-how-identify-global-non-global-solaris-server.html
11g
----
SQL Result Cache
PL/SQL Function Cache
Compression
SecureFiles
http://www.oracle.com/us/corporate/features/sparc-supercluster-t4-4-489157.html
Data Sheet - http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-supercluster-ds-496616.pdf
FAQ - http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-supercluster-faq-496617.pdf
m5000 server
http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/m-series/m5000/overview/index.html
http://www.m5000-server.com/
http://en.wikipedia.org/wiki/SPARC_Enterprise
http://www.infoq.com/news/2014/02/sparkr-announcement
https://amplab.cs.berkeley.edu/2014/01/26/large-scale-data-analysis-made-easier-with-sparkr/
http://wikis.sun.com/display/SAPonSun/SAP+on+Sun
http://wikis.sun.com/display/SAPonSun/Demystifying+Oracle+IO
http://wikis.sun.com/display/SAPonSun/Speedup+SAP+%28Performance+Tuning%29
{{{
'Demystifying Oracle I/O - clarification on synchronous, asynchronous, blocking, nonblocking, direct & direct path I/O'
'SAP on Oracle Performance in high latency Metrocluster Setup'
'Getting insights with DTrace - Part 1: Analyzing Oracle Logwriter w/ buffered vs. direct I/O'
crosslink: 'Getting insights with DTrace - Part 2: OS version checks (uname)'; since this article is not related to performance, it is posted in Danger Zone*
'Getting insights with DTrace - Part 3: Analyzing SAP Appserver I/O'
'Implementing Oracle on ZFS and ZFS Storage Appliances'
'Oracle DB and Flash Devices'
Demystifying Oracle IO
Getting insights with DTrace - Part 1
Getting insights with DTrace - Part 3
Implementing Oracle on ZFS and ZFS Storage Appliances
Oracle DB and Flash Devices
SAP on Oracle Performance in high latency Metrocluster Setup
}}}
http://wikis.sun.com/display/SAPonSun/Oracle+DB+and+Flash+Devices
http://wikis.sun.com/display/SAPonSun/SAP+on+Oracle+Performance+in+high+latency+Metrocluster+Setup
http://wikis.sun.com/display/SAPonSun/Widening+the+Storage+Bottleneck+for+an+Oracle+Database
http://wikis.sun.com/display/SAPonSun/Implementing+Oracle+on+ZFS+and+ZFS+Storage+Appliances
http://wikis.sun.com/display/OC2dot5/Installation <-- Ops Center
http://wikis.sun.com/display/SAPonSun/Getting+insights+with+DTrace+-+Part+1
http://wikis.sun.com/display/SAPonSun/Getting+insights+with+DTrace+-+Part+2
http://wikis.sun.com/display/SAPonSun/Getting+insights+with+DTrace+-+Part+3
http://wikis.sun.com/display/SAPonSun/Running+SAP+on+OpenSolaris
http://blog.tanelpoder.com/2008/06/15/advanced-oracle-troubleshooting-guide-part-6-understanding-oracle-execution-plans-with-os_explain/
http://blog.tanelpoder.com/2009/04/24/tracing-oracle-sql-plan-execution-with-dtrace/
http://blog.tanelpoder.com/2008/10/31/advanced-oracle-troubleshooting-guide-part-9-process-stack-profiling-from-sqlplus-using-ostackprof/
http://blog.tanelpoder.com/2008/09/02/oracle-hidden-costs-revealed-part2-using-dtrace-to-find-why-writes-in-system-tablespace-are-slower-than-in-others/
http://blog.tanelpoder.com/2008/06/15/advanced-oracle-troubleshooting-guide-part-6-understanding-oracle-execution-plans-with-os_explain/
Lab128 has automated the pstack sampling, os_explain, & reporting. Good tool to know where the query was spending time http://goo.gl/fyH5x
http://www.business-intelligence-quotient.com/?p=1083
http://blogs.oracle.com/optimizer/2010/11/star_transformation.html
! 2014
How to Start a Startup (Stanford CS183B) http://startupclass.samaltman.com/
http://www.wired.com/2014/09/now-can-take-free-y-combinator-startup-course-online/
http://venturebeat.com/2014/09/25/how-the-tech-elite-teach-stanford-students-to-build-billion-dollar-companies-in-11-quotes/
http://venturebeat.com/2014/10/09/how-peter-thiel-teaches-stanford-students-to-create-billion-dollar-monopolies-in-3-quotes/
http://www.quora.com/How-to-Start-a-Startup-Stanford-CS183B
''videos'' https://www.youtube.com/channel/UCxIJaCMEptJjxmmQgGFsnCg
! 2013
http://blog.ycombinator.com/tag/Startup%20School%202013
''videos'' http://blog.ycombinator.com/videos-from-startup-school-2013-are-now-online
! YC startup school
http://www.startupschool.org/
Troubleshoot Grid Infrastructure Startup Issues [ID 1050908.1]
Top 5 Grid Infrastructure Startup Issues [ID 1368382.1]
{{{
To determine the status of GI, please run the following commands:
1. $GRID_HOME/bin/crsctl check crs
2. $GRID_HOME/bin/crsctl stat res -t -init
3. $GRID_HOME/bin/crsctl stat res -t
4. ps -ef | egrep 'init|d.bin'
}}}
Startup Videos
Something ventured
Startup Kids
http://www.hulu.com/20-under-20-transforming-tomorrow
http://thenextweb.com/entrepreneur/2012/12/02/how-to-hire-the-right-developer-for-your-tech-startup/
How to Collect Diagnostics for Database Hanging Issues (Doc ID 452358.1)
The normal distribution and the empirical value
http://www.wisc-online.com/objects/ViewObject.aspx?ID=TMH2102
The area under the standard normal distribution
http://www.wisc-online.com/Objects/ViewObject.aspx?ID=TMH3302
Creating a Scatter Plot in Excel
http://www.ncsu.edu/chemistry/resource/excel/excel.html
http://www.youtube.com/watch?v=nnM-7Q6gmUA
http://www.youtube.com/watch?v=MTsRlauTtd4
''Statistics in Oracle''
http://www.java2s.com/Tutorial/Oracle/0400__Linear-Regression-Functions/Catalog0400__Linear-Regression-Functions.htm
http://www.adp-gmbh.ch/ora/sql/agg/index.html <-- aggregate functions in oracle
http://www.vlamis.com/Papers/oow2001-1.pdf
http://www.dbasupport.com/oracle/ora9i/functions1_1.shtml
http://download.oracle.com/docs/cd/B14117_01/server.101/b10736/analysis.htm <-- official doc
http://download.oracle.com/docs/cd/B12037_01/server.101/b10759/functions117.htm <-- sql reference
http://oracledmt.blogspot.com/2007/02/new-oracle-statistical-functions-page.html
http://weblogs.sdn.sap.com/files/Statistical_Analysis-Oracle.ppt&pli=1
http://www.morganslibrary.com/reference/analytic_functions.html
http://ykud.com/blog/cognos/calculating-trend-lines-in-cognos-report-studio-and-oracle-sql
http://wwwmaths.anu.edu.au/~mendelso/papers/BMN31-03-09.pdf
http://www.nyoug.org/Presentations/SIG/DataWarehousing/dw_sig_nov_2002.PDF
http://www.rittmanmead.com/2004/08/27/analytic-functions-in-owb/
http://www.olsug.org/wiki/images/f/f9/Oracle_Statistical_Functions_preso_1.ppt
http://www.uga.edu/oir/reports/OracleAnaylticFunction-SAIR-2006.ppt
http://www.olsug.org/Presentations/May_2005/Workshops/Statistical_Analysis_of_Gene_Expression_Data_with_Oracle_and_R_Workshop.pdf
http://www.stat.yale.edu/~hz68/Adaptive-FLR.pdf
http://www.nocoug.org/download/2008-08/2008_08_NCOUG_11g4DW_hb.pdf
http://blogs.oracle.com/datamining/2010/08/the_meaning_of_probability.html
''Oracle Documentation'' - ''Linear Regression'' http://docs.oracle.com/cd/E11882_01/server.112/e25554/analysis.htm#BCFIIAGJ
''REGR_SLOPE'' Analytic Sales Forecast - http://dspsd.blogspot.com/2012/02/analytic-sales-forecast.html, http://www.rittmanmead.com/2012/03/statistical-analysis-in-the-database/
''Articles''
http://www.kdnuggets.com/2015/02/10-things-statistics-big-data-analysis.html
http://jonathanlewis.wordpress.com/statspack-examples/
-- from http://www.perfvision.com/statspack/statspack10.txt
{{{
Database
Cache Sizes
Load Profile
Instance Efficiency Percentages
Top 5 Timed Events
Host CPU (CPUs: 2)
Instance CPU
Memory Statistics
Time Model System Stats
Wait Events
Background Wait Events
Wait Event Histogram
SQL ordered by CPU
SQL ordered by Elapsed
SQL ordered by Reads
SQL ordered by Executions
SQL ordered by Parse Calls
Instance Activity Stats
Instance Activity Stats
-> Statistics with absolute values (should not be diffed)
Instance Activity Stats
-> Statistics identified by '(derived)' come from sources other than SYSSTAT
OS Statistics
Tablespace IO Stats
File IO Stats
File Read Histogram Stats
Buffer Pool Statistics
Instance Recovery Stats
Buffer Pool Advisory
Buffer wait Statistics
PGA Aggr Target Stats
PGA Aggr Target Histogram
PGA Memory Advisory
Process Memory Summary Stats
Top Process Memory (by component)
Enqueue activity
Undo Segment Summary
Undo Segment Stats
Latch Activity
Latch Sleep breakdown
Latch Miss Sources
Mutex Sleep
Dictionary Cache Stats
Library Cache Activity
Rule Sets
Shared Pool Advisory
SGA Memory Summary
SGA breakdown difference
SQL Memory Statistics
init.ora Parameters
}}}
-- from http://www.perfvision.com/statspack/statspack9.txt
{{{
Cache Sizes
Load Profile
Instance Efficiency Percentages (Target 100%)
Top 5 Timed Events
Wait Events for
Background Wait Events for
SQL ordered by Gets for
SQL ordered by Reads for
SQL ordered by Executions for
SQL ordered by Parse Calls for
Instance Activity Stats for
Tablespace IO Stats for
File IO Stats for
Buffer Pool Statistics for
Instance Recovery Stats for
Buffer Pool Advisory for
PGA Aggr Target Stats for
PGA Aggr Target Histogram for
PGA Memory Advisory for
Rollback Segment Stats for
Rollback Segment Storage for
Latch Activity for
Latch Sleep breakdown for
Latch Miss Sources for
Dictionary Cache Stats for
Library Cache Activity for
Shared Pool Advisory for
SGA Memory Summary for
SGA breakdown difference for
init.ora Parameters for
}}}
Calculate IOPS in a storage array
http://www.zdnetasia.com/calculate-iops-in-a-storage-array-62061792.htm
http://www.techrepublic.com/blog/the-enterprise-cloud/calculate-iops-in-a-storage-array/
Calculate IOPS per disk
https://communities.netapp.com/community/netapp-blogs/databases/blog/2011/08/11/formula-to-calculate-iops-per-disk
{{{
Formula:
Estimated IOPS = 1 / ((average seek time / 1000) + (average latency / 1000))
Let's make a simple test:
SAS - 600GB 15K - Seagate - http://www.seagate.com/www/en-us/products/enterprise-hard-drives/cheetah-15k#tTabContentSpecifications
Estimated IOPS = 1 / ((((average read seek time + average write seek time) / 2) / 1000) + (average latency / 1000))
Estimated IOPS = 1 / ((3.65 / 1000) + (2.0 / 1000)) = 1 / 0.00565 = 176.99 ~ 175 IOPS
SATA - 1TB 7.2K - Seagate - http://www.seagate.com/www/en-us/products/enterprise-hard-drives/constellation-es/constellation-es-1/#tTabContentSpecifications
Estimated IOPS = 1 / ((((average read seek time + average write seek time) / 2) / 1000) + (average latency / 1000))
Estimated IOPS = 1 / ((9.00 / 1000) + (4.16 / 1000)) = 1 / 0.01316 = 75.99 ~ 75 IOPS
}}}
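The same formula can be sanity-checked straight from SQL*Plus (plain arithmetic, numbers taken from the two examples above):
{{{
-- estimated IOPS for the SAS 15K and SATA 7.2K examples
select round(1/((3.65/1000) + (2.0/1000)))  as sas_15k_iops,  -- ~177
       round(1/((9.00/1000) + (4.16/1000))) as sata_7k_iops   -- ~76
from dual;
}}}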
More on Performance Metrics: The Relationship Between IOPS and Latency
http://www.networkcomputing.com/servers-storage/more-on-performance-metrics-the-relation/240005213
IOPS calculator http://wmarow.com/storage/strcalc.html
RAID calculator http://wmarow.com/storage/raidslider.html
array estimator http://wmarow.com/storage/goals.html
STORAGE NOTES – IOPS, RAID, PERFORMANCE AND RELIABILITY http://www.virtuallyimpossible.co.uk/storage-notes-iops-raid-performance-and-reliability/
Storage array capacity: Performance vs. cost
http://www.zdnetasia.com/storage-array-capacity-performance-vs-cost-62062039.htm?scid=nl_z_tgsr
http://blogs.oracle.com/rdm/entry/capacity_sizing_for_15k_disks
See also these other sources:
* http://blog.aarondelp.com/2009/10/its-now-all-about-iops.html
* http://www.yellow-bricks.com/2009/12/23/iops/
* http://www.tomshardware.com/forum/251893-32-raid-raid
High Performance Storage Systems for SQL Server
http://www.simple-talk.com/sql/performance/high-performance-storage-systems-for-sql-server/ <-- MB/s per disk
<<<
All Aboard the IO Bus!
To illustrate IO bus saturation, let’s consider a simple example. A 1GB fiber channel is capable of handling about 90 MB/Sec of throughput. Assuming each disk it services is capable of 150 IOPS (of 8K each), that’s a total of 1.2 MB/Sec, which means that the channel is capable of handling up to 75 disks. Any more than that and we have channel saturation, meaning we need more channels, higher channel throughput capabilities, or both.
The other crucial consideration here is the type of IO we’re performing. In the above calculation of 150*8K IOPS, we assumed a random/OLTP type workload. In reporting/OLAP environments, we’ll have a lot more sequential IO consisting of, for example, large table scans during data warehouse loads. In such cases, the IO throughput requirements are a lot higher. Depending on the disk, ''the maximum MB/Sec will vary, but let’s assume 40 MB/Sec. It only takes three of those disks to produce 120 MB/Sec, leading to saturation of our 1GB fiber channel.''
In general, OLTP systems feature lots of disks to overcome latency issues, and OLAP systems feature lots of channels to handle peak throughput demands. It’s important that we consider both IOPS, to calculate the number of disks we need, and the IO type, to ensure the IO bus is capable of handling the throughput. But what about SANs?
<<<
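Reworking the quoted channel-saturation math as a one-liner (all numbers come straight from the excerpt above):
{{{
-- 1Gb FC ~ 90 MB/s; OLTP disk ~ 150 IOPS x 8K = 1.2 MB/s; sequential disk ~ 40 MB/s
select 150*8/1000    as mb_s_per_oltp_disk,     -- 1.2 MB/s
       round(90/1.2) as oltp_disks_per_channel, -- 75 disks to saturate
       ceil(90/40)   as seq_disks_to_saturate   -- only 3 sequential disks
from dual;
}}}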
Sane SAN
http://jamesmorle.wordpress.com/2010/08/23/sanesan2010-introduction/
http://jamesmorle.wordpress.com/2010/08/23/sanesan2010-serial-to-serial-when-one-bottleneck-isnt-enough/
http://jamesmorle.wordpress.com/2010/09/06/sane-san2010-storage-arrays-ready-aim-fire/
http://jamesmorle.wordpress.com/2011/09/16/right-practice/
Queue Depth
http://storage.ittoolbox.com/groups/technical-functional/emc-l/clearing-outstanding-disk-io-1574369
http://en.wikipedia.org/wiki/IOPS
http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1279241670018+28353475&threadId=1310049
http://www.ardentperf.com/2008/03/13/oracle-iops-and-hba-queue-depth/
http://www.ardentperf.com/2008/01/31/oracle-io-and-operating-system-caching/
Evaluating Storage Benchmarks http://www.enterprisestorageforum.com/hardware/features/article.php/3668416/Evaluating-Storage-Benchmarks.htm
Measuring Storage Performance http://www.enterprisestorageforum.com/hardware/features/article.php/3671466/Measuring-Storage-Performance
Oracle and storage IOs, explanations and experience at CERN http://cdsweb.cern.ch/record/1177416/files/CHEP2009-28-24.pdf
Bob Sneed's nice paper ''Oracle I/O: Supply and Demand'' http://vnull.pcnet.com.pl/dl/solaris/Oracle_IO_1.1.pdf , http://bobsneed.wordpress.com/2009/11/05/oracle-io-supply-and-demand/
IOPS - Frames per second
http://www.thesanman.org/2012/03/understanding-iops.html
Converged Fabrics: Part 1 - Converged Fabrics http://youtu.be/qiU8QcAArwE
Converged Fabrics: Part 2 - Calculating IOPs http://youtu.be/VAlbVOyQ7w0
storage index aging
http://oracle-sage.com/2014/11/03/exadata-storage-index-aging-part-1/
http://oracle-sage.com/2014/11/04/exadata-storage-index-aging-part-2a/
http://oracle-sage.com/2014/11/04/exadata-storage-index-aging-part-2b/
http://oracle-sage.com/2014/11/05/exadata-storage-index-aging-part-3-analysis-on-test-2b/
{{{
HARDWARE RAID - leverage on it, if it's available
SOFTWARE RAID - only if Hardware RAID is not available
RAID 0
RAID 1 <-- min of 2.. +1 hot spare
RAID 1+0 <-- min of 4.. +1 hot spare
RAID 5 <-- min of 3.. +1 hot spare
Note: use it with LVM to automatically scale, just add to existing Volume Group then LVEXTEND
RAC 9i
Theoretical Maximum number of nodes = 64
Practical Maximum number of nodes = 8
RAC 10g
Theoretical Maximum number of nodes = 128
Practical Maximum number of nodes = 8
RedHat Cluster Suite
Practical Maximum number of nodes = 32
ASM's minimum ASM disk = 4
ASM's maximum ASM disk = 8
Recommended RAID configuration = 1+0
OCFS2,
- can't use it in LVM
- EMC have its own LVM software.. leverage on it
- if in a clustered environment, then use OCFS2 for the FLASH_RECOVERY_AREA
LVM is not cluster-aware
- if you're planning for a cluster filesystem on LVM then use GFS (global filesystem)
- if you have EMC luns, then don't mix them with the internal disks because they have
  different performance characteristics and LUNs sometimes disappear,
  ideal is to create a separate Volume Group for different disk characteristics
but you can't get extents from a different Volume Group
- if not in a clustered environment, use EXT3 for FLASH_RECOVERY_AREA
ibm storage LUN limit is 375GB.. times 4 = 2.9TB
}}}
''How to Tell if the IO of the Database is Slow [ID 1275596.1]''
{{{
====================================================================================
RAID Type of RAID Control Database Redo Log Archive Log
File File File File
====================================================================================
0 Striping Avoid* OK* Avoid* Avoid*
------------------------------------------------------------------------------------
1 Shadowing OK OK Recommended Recommended
------------------------------------------------------------------------------------
0+1 Striping + OK Recommended Avoid Avoid
Shadowing (1)
------------------------------------------------------------------------------------
3 Striping with OK Avoid Avoid Avoid
Static Parity (2)
------------------------------------------------------------------------------------
5 Striping with OK Avoid Avoid Avoid
Rotating Parity (2)
------------------------------------------------------------------------------------
* RAID 0 does not provide any protection against failures. It requires a strong backup
strategy.
(1) RAID 0+1 is recommended for database files because this avoids hot spots and gives
the best possible performance during a disk failure. The disadvantage of RAID 0+1
is that it is a costly configuration.
(2) When heavy write operation involves this datafile
-- RAID CONFIGURATION
I/O Tuning with Different RAID Configurations
Doc ID: Note:30286.1
Avoiding I/O Disk Contention
Doc ID: Note:148342.1
-- SOLID STATE
Solid State Disks & DSS Operations
Doc ID: Note:76413.1
-- iSCSI
Using Openfiler iSCSI with an Oracle RAC database on Linux (Doc ID 371434.1)
-- RAW DEVICES
Announcement of De-Support of using RAW devices in Release 12G
Doc ID: NOTE:578455.1
Making the decision to use raw devices
Doc ID: Note:29676.1
-- ASYNC IO, DIRECT IO
Pros and Cons of Using Direct I/O for Databases [ID 1005087.1]
Understanding Cyclic Caching and Page Cache on Solaris 8 and Above [ID 1003383.1]
Oracle database restart takes longer on high end systems running Solaris[TM] 8 [ID 1003483.1]
ASM INHERENTLY PERFORMS ASYNCHRONOUS I/O REGARDLESS OF FILESYSTEMIO_OPTIONS PARAMETER
Doc ID: Note:751463.1
File System's Buffer Cache versus Direct I/O <-- PARAMETER
Doc ID: Note:462072.1
Init.ora Parameter "FILESYSTEMIO_OPTIONS" is Incorrectly Set to "NONE" as a DEFAULT in 9.2.0 on AIX
Doc ID: Note:230238.1
RMAN Backup Controlfile Fails With RMAN-03009 ORA-01580 ORA-27044
Doc ID: Note:737877.1
How To Check if Asynchronous I/O is Working On Linux
Doc ID: Note:237299.1
DirectIO on Redhat and SuSe Linux
Doc ID: Note:297521.1
Direct I/O or Concurrent I/O on AIX 5L
Doc ID: 272520.1
Async io and AdvFS - Does Oracle Support it?
Doc ID: 50548.1
SOLARIS: Asynchronous I/O (AIO) on Solaris (SPARC) servers
Doc ID: 48769.1
AIX How does Oracle use AIO servers and what determines how many are used? [ID 443368.1]
AIX Recommendations For using CIO/DIO for Filesystems containing Oracle Files on AIX [ID 960055.1]
-- CIO Concurrent IO
How to use Concurrent I/O on HP-UX and improve throughput on an Oracle single-instance database [ID 1231869.1]
see warnings on using CIO on ORACLE_HOMEs at [[Veritas Oracle Doc]] which should also be the same for JFS2 on AIX
Db_block_size Requirements For Direct IO / Concurrent IO [ID 418714.1]
Slow I/O On HP Unix [ID 457063.1] <-- shows mounting of cio
Question On Retail Predictive Application Server (RPAS) And Concurrent IO (Asynchronous Mode) [ID 1303046.1]
Direct I/O or Concurrent I/O on AIX 5L [ID 272520.1] <-- nice matrix on AIX JFS
Direct I/O (DIO) and Concurrent I/O (CIO) on AIX 5L [ID 257338.1]
-- QUICK IO
How to Verify Quick I/O is Working
Doc ID: 135447.1
-- filesystemio_options
filesystemio_options and filesystem mounts Supported/recommended on AIX for Oracle 9i [ID 602791.1]
-- BENCHMARK
Comparing Performance Between RAW IO vs OCFS vs EXT 2/3
Doc ID: 236679.1
Oracm Large Disk IO Timeout Message Explained
Doc ID: 359898.1
-- WARNING
WARNING:1 Oracle process running out of OS kernel I/O resources
Doc ID: 748607.1
10.2 Grid Agent Can Break RAID Mirroring and Cause Hard Disk To Go Offline
Doc ID: 454647.1
Process spins and traces with "Asynch I/O kernel limits" Warnings [ID 1313555.1]
WARNING:Could not increase the asynch I/O limit to 514 for SQL direct I/O. It is set to 128
WARNING:io_submit failed due to kernel limitations MAXAIO for process=0 pending aio=0
-- BLOCKS
Extent and Block Space Calculation and Usage in Oracle Databases
Doc ID: 10640.1
Note 162994.1 SCRIPT TO REPORT EXTENTS AND CONTIGUOUS FREE SPACE
Script: Computing Table Size
Doc ID: 70183.1
Note: 1019709.6 SCRIPT TO REPORT TABLESPACE FREE AND FRAGMENTATION
Note: 1019585.6 SCRIPT TO CALCULATE BLOCKS NEEDED BY A TABLE
Note: 1019524.6 SCRIPT TO REPORT SPACE USED IN A TABLESPACE
Note: 1019505.6 SCRIPT TO SHOW TABLE EXTENTS & STORAGE PARAMETERS
RDBPROD: Space Management and Thresholds in Rdb
Doc ID: 62688.1
RDBPROD: How to identify fragmented rows in a table
Doc ID: 283203.1
RDBPROD: Tutorial on Area Page Sizing
Doc ID: 62689.1
}}}
http://blog.go-faster.co.uk/2012/03/editing-hints-in-stored-outlines.html
http://strataconf.com/stratany2011/public/content/video
http://strataconf.com/stratany2012/public/schedule/proceedings
Which Doc Contains Standard Edition's Replication Features?
http://forums.oracle.com/forums/thread.jspa?messageID=4557612
Streams in Oracle SE
http://forums.oracle.com/forums/thread.jspa?messageID=4035228
Streams between Standard and Enterprise edition is not working [ID 567872.1]
<<<
In 11g, the method to capture changes in SE is to use synchronous capture. For more information check: http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_capture.htm#CACIDGBI)
Also an example on how to configure is in the 2Day+ Data Replication and Integration manual.
The example demonstrates setting up sync capture as 2way replication (bidirectional) http://download.oracle.com/docs/cd/B28359_01/server.111/b28324/tdpii_repcont.htm#BABDEBBA
<<<
''Enhanced subquery optimizations in Oracle - vldb09-423.pdf''
http://www.evernote.com/shard/s48/sh/6da5b118-5471-4dee-bb8a-bdf0c7da9893/c7327d55701ccaeff53861ef496cb79d
http://blogs.oracle.com/optimizer/2010/09/optimizer_transformations_subquery_unesting_part_2.html
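A minimal example of the kind of rewrite the paper discusses: unnesting a correlated EXISTS subquery into a semi-join (scott's EMP/DEPT used purely for illustration):
{{{
-- the optimizer can unnest this correlated EXISTS into a semi-join
select d.dname
from   dept d
where  exists (select null from emp e where e.deptno = d.deptno);
-- post-transformation shape is roughly:
--   select d.dname from dept d, emp e where e.deptno = d.deptno (semi-join)
}}}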
{{{
=subtotal(9,yourrange)
}}}
http://www.ozgrid.com/forum/showthread.php?t=59996&page=1
http://www.pcreview.co.uk/forums/do-you-sum-only-visible-cells-you-have-filtered-list-t1015388.html
http://www.pcreview.co.uk/forums/totals-reflecting-filtered-cells-only-not-all-data-worksheet-t1038534.html
http://support.microsoft.com/kb/187667
http://office.microsoft.com/en-us/excel-help/countif-HP005209029.aspx
http://www.eggheadcafe.com/software/aspnet/29984846/how-can-i-count-the-number-of-characters-on-a-cell.aspx
http://www.ehow.com/how_5925626_count-number-characters-ms-excel.html
http://www.excelforum.com/excel-new-users/372335-formula-to-count-number-of-times-the-letter-x-appears-in-a-column.html
http://www.google.com.ph/search?sourceid=chrome&ie=UTF-8&q=ms+excel+if+or
http://www.officearticles.com/excel/if_statements_in_formulas_in_microsoft_excel.htm
http://www.bluemoosetech.com/microsoft-excel-functions.php?jid=19&title=Microsoft%20Excel%20Functions:%20IF,%20AND,%20OR
http://www.experiglot.com/2006/12/11/how-to-use-nested-if-statements-in-excel-with-and-or-not/
! Sun Sparc T3 CPUs - thread:core ratio
see discussions here https://www.evernote.com/shard/s48/sh/ddf62a51-c7d9-489b-b1f4-c14b008a1d63/68a86a16889ac103
http://www.natecarlson.com/2010/05/07/review-supermicros-sc847a-4u-chassis-with-36-drive-bays/
https://www.youtube.com/watch?v=xtOg44r6dsE
..
! RHEL 3 and below (double the size)
! RHEL 4:
1) if RAM <= 2GB then swap = 2X RAM
2) if RAM > 2GB then
e.g. 4GB
(2GB x 2) + 2GB
= 6GB (3x swap partition)
e.g. 8GB
(2GB x 2) + 6GB
= 10GB (5x swap partition)
! RHEL 5:
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Installation_Guide/s2-diskpartrecommend-x86.html
# A swap partition (at least 256 MB) — swap partitions are used to support virtual memory. In other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing.
If you are unsure about what size swap partition to create, make it twice the amount of RAM on your machine. It must be of type swap.
Creation of the proper amount of swap space varies depending on a number of factors including the following (in descending order of importance):
* The applications running on the machine.
* The amount of physical RAM installed on the machine.
* The version of the OS.
Swap should equal 2x physical RAM for up to 2 GB of physical RAM, and then an additional 1x physical RAM for any amount above 2 GB, but never less than 32 MB.
So, if:
M = Amount of RAM in GB, and S = Amount of swap in GB, then
If M < 2
S = M *2
Else
S = M + 2
Using this formula, a system with 2 GB of physical RAM would have 4 GB of swap, while one with 3 GB of physical RAM would have 5 GB of swap. Creating a large swap space partition can be especially helpful if you plan to upgrade your RAM at a later time.
For systems with really large amounts of RAM (more than 32 GB) you can likely get away with a smaller swap partition (around 1x, or less, of physical RAM).
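The formula is trivial to table out; a throwaway sketch (the RAM sizes in the list are just samples):
{{{
-- S = M*2 when M < 2, else S = M + 2  (M = RAM in GB)
select m as ram_gb,
       case when m < 2 then m*2 else m+2 end as swap_gb
from   (select column_value as m from table(sys.odcinumberlist(1, 2, 3, 8, 16)));
}}}
which returns 2, 4, 5, 10 and 18 GB of swap respectively, matching the worked examples above.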
! more recent RHEL 5 and RHEL6:
consult/read the documentation because physical memory nowadays is getting bigger and bigger..
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/index.html
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Installation_Guide/s1-diskpartitioning-x86.html
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/s2-diskpartrecommend-x86.html
! NOTE:
1) LVM distributes its extents across disks... so don't put your SWAP on an LVM
2) SWAP must be near the center of the cylinder
http://www.centos.org/docs/5/html/5.1/Deployment_Guide/s2-swap-creating-lvm2.html
http://www.centos.org/docs/4/html/rhel-sag-en-4/s1-swap-adding.html
http://serverfault.com/questions/306419/is-the-fdisk-partition-type-important-when-using-lvm <-- GOOD STUFF
http://www.linuxquestions.org/questions/red-hat-31/can-we-use-partition-type-83-for-creating-a-lvm-volume-762819/ <-- GOOD
http://sourceforge.net/tracker/?func=detail&aid=2528606&group_id=115473&atid=671650 <-- GOOD STUFF
http://www.techotopia.com/index.php/Adding_and_Managing_Fedora_Swap_Space#Adding_Swap_Space_to_the_Volume_Group
http://forums.fedoraforum.org/showthread.php?t=146289
http://www.walkernews.net/2007/07/02/how-to-create-linux-lvm-in-3-minutes/
http://forums.fedoraforum.org/showthread.php?p=707874
http://www.howtoforge.com/linux_lvm
http://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg301789.html
http://forum.soft32.com/linux2/Bug-410227-mkswap-good-checking-partition-type-83-ftopict71038.html
http://forums.opensuse.org/english/get-technical-help-here/install-boot-login/390784-install-fails-format-swap-3008-a.html
http://nixforums.org/about150579-mkswap-mistake.html
http://www.redhat.com/magazine/009jul05/features/lvm2/
https://help.ubuntu.com/10.04/serverguide/C/advanced-installation.html
! Installation
<<<
1) Download the swingbench here http://www.dominicgiles.com/swingbench.html
2) Set the environment variables at /home/oracle/dba/benchmark/swingbench/swingbench.env
* JAVAHOME
* SWINGHOME
* ORACLE_HOME
export JAVAHOME=/u01/app/oracle/product/11.2.0.3/dbhome_1/jdk
export SWINGHOME=/home/oracle/dba/benchmark/swingbench
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
3) Run the oewizard
for standard testcase across platforms
- the ordercount and customercount should be 1000000
- and start with #ofusers 1000, then increase to 1200
{{{
/home/oracle/dba/benchmark/swingbench/bin/oewizard
-- 1M customers
11:27:20 SYS@dw> select sum(bytes)/1024/1024 from dba_segments where owner = 'SOE';
SUM(BYTES)/1024/1024
--------------------
724.851563
}}}
* if you are using ASM just specify +DATA
* If you don't have SYSDBA access to the machine then you can create your own user "karlarao"
then grant CONNECT,DBA,SYSDBA to that user
then edit the oewizard.xml file
{{{
<?xml version = '1.0' encoding = 'UTF-8'?>
<WizardConfig Mode="InterActive" Name="Oracle Entry Install Wizard" xmlns="http://www.dominicgiles.com/swingbench/wizard">
<WizardSteps RunnableStep="5">
<WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step0"/>
<WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step1"/>
<WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step2"/>
<WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step3"/>
<WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step4"/>
<WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step5"/>
<WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step6"/>
<WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step7"/>
</WizardSteps>
<DefaultParameters>
<Parameter Key="dbapassword" Value="karlarao"/>
<Parameter Key="password" Value="soe"/>
<Parameter Key="itsize" Value="374M"/>
<Parameter Key="datatablespacesexists" Value="false"/>
<Parameter Key="partitoningrequired" Value="true"/>
<Parameter Key="indextablespacesexists" Value="false"/>
<Parameter Key="Mode" Value="InterActive"/>
<Parameter Key="dbausername" Value="karlarao"/>
<Parameter Key="indextablespace" Value="soeindex"/>
<Parameter Key="operation" Value="create"/>
<Parameter Key="ordercount" Value="1000000"/>
<Parameter Key="indexdatafile" Value="+DATA"/>
<Parameter Key="datafile" Value="+DATA"/>
<Parameter Key="connectionstring" Value="desktopserver.local:1521:dw"/>
<Parameter Key="tablespace" Value="soe"/>
<Parameter Key="username" Value="soe"/>
<Parameter Key="customercount" Value="1000000"/>
<Parameter Key="tsize" Value="195M"/>
<Parameter Key="connectiontype" Value="thin"/>
</DefaultParameters>
</WizardConfig>
}}}
then after running the oewizard, run the following commands as "/ as sysdba":
{{{
grant execute on dbms_lock to soe;
exec dbms_stats.gather_schema_stats('soe');
exec dbms_utility.compile_schema('soe',true);
}}}
increase the processes parameter to 2048
4) Edit the swingconfig.xml with your connect string, system password
''connect string''
{{{
format is <<hostname>>:<<port>>:<<service>>
$ cat swingconfig.xml | grep -i connect
<Connection>
<ConnectString>desktopserver.local:1521:dw</ConnectString>
</Connection>
}}}
''system password''
{{{
$ cat swingconfig.xml | grep -i system
<SystemUserName>system</SystemUserName>
<SystemPassword>oracle</SystemPassword>
}}}
OR the user you created on the oewizard step
{{{
$ cat swingconfig.xml | grep -i system
<SystemUserName>karlarao</SystemUserName>
<SystemPassword>karlarao</SystemPassword>
}}}
<<<
! Connection String differences
<<<
''Thin JDBC'' .. apparently this does not fail over sessions
{{{
<ConnectString>desktopserver.local:1521:dw</ConnectString>
<DriverType>Oracle10g Type IV jdbc driver (thin)</DriverType>
}}}
''OCI - tnsnames.ora'' with FAILOVER_MODE option
{{{
<ConnectString>exadata</ConnectString>
<DriverType>Oracle10g Type II jdbc driver (oci)</DriverType>
}}}
<<<
! Run the benchmark - Single Instance
''Review the command options here'' http://www.dominicgiles.com/commandline.html
1) In ''GUI'' mode
<<<
cpumonitor will give you the CPU and IO graphs
{{{
cd $HOME/swingbench/bin
./cpumonitor
./swingbench -cpuloc localhost
}}}
to stop, cancel the swingbench then stop the coordinator
{{{
./coordinator -stop
}}}
<<<
2) To have a consistent load using ''charbench''
<<<
{{{
while : ; do ./charbench -a -rt 00:01 ; echo "---" ; done
}}}
or
just edit the file swingconfig.xml and change the default 15 users to 1000
{{{
<NumberOfUsers>1000</NumberOfUsers>
}}}
then just execute ./charbench
<<<
3) Using ''minibench''
<<<
{{{
cd $HOME/swingbench/bin
./cpumonitor
./minibench -cpuloc localhost
}}}
to stop, cancel the minibench then stop the coordinator
{{{
./coordinator -stop
}}}
<<<
! Run the benchmark - RAC
http://www.dominicgiles.com/clusteroverviewwalkthough23.html
{{{
install swingbench on db1 and db2
run oewizard on db1
./coordinator -g
./minibench -g group1 -cs exadata1 -co localhost &
./minibench -g group2 -cs exadata2 -co localhost &
edit the HostName and MonitoredNodes tags of clusteroverview.xml
./clusteroverview
./coordinator -stop
}}}
http://www.dominicgiles.com/blog/files/859a2dd3f34b49a43e5a39380d39b680-7.html
http://dominicgiles.com/swingbench/clusteroverview21f.pdf
relocate session rac https://forums.oracle.com/forums/thread.jspa?messageID=3742732
! OLTP and DSS workload mix
{{{
Benchmark Description Read/Write Ratio
Order Entry TPC-C like 60/40 <-- based on OE schema, stress interconnects and memory
Calling Circle Telco based 70/30 <-- stress the CPU and memory without the need for a powerful I/O subsystem
Stress Test Simple I,U,D,S 50/50 <-- simply fires random inserts, updates, deletes and selects against a well known table
Sales History DSS 100/0 <-- based on SH schema, designed to test the performance of complicated queries when run against large tables
}}}
<<<
[img[ https://www.evernote.com/shard/s48/sh/7611f954-dc37-45d1-b2c6-22d01f224692/c1578b3143b3d345f2306c26b925c080/res/c597305c-2dff-4aa7-b67b-931a6105907c/swingbench.png ]]
<<<
''Issues''
entropy on random,urandom
http://www.freelists.org/post/oracle-l/swingbench-connection-issue
http://www.usn-it.de/index.php/2009/02/20/oracle-11g-jdbc-driver-hangs-blocked-by-devrandom-entropy-pool-empty/
http://www.freelists.org/post/oracle-l/Difference-between-devurandom-and-devurandom-Was-swingbench-connection-issue,2
''Possible test cases''
Real Application Clusters, Online table rebuilds, Standby databases, Online backup and recovery etc.
''My old config files''
{{{
MyBookLive:~/backup# find /DataVolume/shares/Public/Backup -iname "swingconfig.xml"
/DataVolume/shares/Public/Backup/Disks/disk1-WD1TB/backup/Documents/backup/temp/dbrocaix/swingconfig.xml
/DataVolume/shares/Public/Backup/Disks/WD1TB/backup/temp/dbrocaix/swingconfig.xml
MyBookLive:/DataVolume/shares/Public/Backup/Disks/disk1-WD1TB/backup/Documents/backup/temp/dbrocaix# ls -ltr
total 704
-rwxrwxrwx 1 root root 1131 Feb 28 2011 swingbench.env
-rwxrwxrwx 1 root root 58 Feb 28 2011 char.txt
-rwxrwxrwx 1 root root 4884 Feb 28 2011 swingconfig.xml.oe
-rwxrwxrwx 1 root root 3504 Feb 28 2011 swingconfig.xml.cc
-rwxrwxrwx 1 root root 673 Feb 28 2011 swingbench.css
-rwxrwxrwx 1 root root 251 Feb 28 2011 swingbench
-rwxrwxrwx 1 root root 4937 Feb 28 2011 swingconfig.xml
-rwxrwxrwx 1 root root 179 Feb 28 2011 s2h.sql
-rwxrwxrwx 1 root root 190 Feb 28 2011 stopdb.sh
-rwxrwxrwx 1 root root 170 Feb 28 2011 startdb.sh
-rwxrwxrwx 1 root root 1011 Feb 28 2011 g.sql
}}}
! Swingbench on a CPU centric micro bench scenario VS cputoolkit
The thing here is that the more consistent and the fewer parameters you have on your test cases,
the more repeatable they will be and the easier it is for you to measure the response time effects
of any environment and configuration changes
* It's doing IO, so you'll not only see CPU on your AAS.. but also log file sync when you ramp up the number of users so if you have slow disks then you are burning some of your response time on IO
* As you ramp up more users, let's say from 1 to 50.. as you do more CPU WAIT and IO and increase your load average, you'll start getting ORA-03111
{{{
12:00:01 AM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15
12:10:01 AM 5 813 0.00 0.00 0.00
12:20:01 AM 5 811 0.00 0.00 0.00
02:20:01 PM 2 1872 4.01 20.40 11.73
02:30:02 PM 4 1896 327.89 126.20 50.80 <-- load avg
02:40:01 PM 6 871 0.31 18.10 27.58
02:50:01 PM 7 867 0.12 2.57 14.52
You'll start getting ORA-03111: break received on communication channel
http://royontechnology.blogspot.com/2009/06/mysterious-ora-03111-error.html
}}}
* But swingbench is still a totally awesome tool, I can use it for a bunch of test cases but using it on a CPU centric micro benchmark like observing threads vs cores just gives a lot of non-CPU noise
* cputoolkit as a micro benchmark tool allows you to measure the effect on scalability (LIOs per unit of work) as you consume the max # of CPUs.. see the LIOS_ELAP column, which takes into account the time spent on CPU_WAIT. If you just derive the unit of work from LIOS_EXEC it will be incorrect, because it only counts how many times you have done work and excludes the time spent on the run-queue (CPU_WAIT), which is the most important thing to quantify the effect on the users
{{{
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------
12/05/12 01:02:47, 1002, 254336620, 1476.18, 2128.63, 652.45, 1.47, 2.12, .65, 253828.96, 119483.73
}}}
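To make the LIOS_ELAP vs LIOS_EXEC distinction concrete, the last two columns of the sample row can be recomputed by hand:
{{{
-- LIOs per execute ignores queueing; LIOs per elapsed second includes CPU_WAIT
select round(254336620/1002, 2)    as lios_exec, -- 253828.96 (work done per call)
       round(254336620/2128.63, 2) as lios_elap  -- ~119483.7 (throughput users actually saw)
from dual;
}}}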
check here [[cpu centric benchmark comparisons]] for more info
https://help.ubuntu.com/community/SwitchingToUbuntu/FromLinux/RedHatEnterpriseLinuxAndFedora
{{{
Contents
Administrative Tasks
Package Management
Graphical Tools
Command Line Tools
Table of Equivalent Commands
Services
Graphical Tools
Command Line Tools
Network
Graphical Tools
Command Line Tools
}}}
http://dtrace.org/blogs/brendan/2011/10/15/using-systemtap/ <-- brendan's experience
http://web.elastic.org/~fche/blog2/archive/2011/10/17/using_systemtap_better <-- systemtap author response
http://sprocket.io/blog/2007/11/systemtap-its-like-dtrace-for-linux-yo/ <-- a quick howto guide
nits to please a kernel hacker http://lwn.net/Articles/301285/
systemtap example commands <-- http://sourceware.org/systemtap/examples/index.html
installation <-- http://goo.gl/OS7u7
http://sourceware.org/systemtap/wiki <-- wiki
http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaai%2Fstapgui%2Finstallation.htm <-- systemtap IDE GUI
http://sourceforge.net/projects/stapgui/
http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaai%2Fstapgui%2Fdemo.htm <-- GUI demo
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/SystemTap_Beginners_Guide/useful-systemtap-scripts.html <-- scripts
http://sourceware.org/systemtap/SystemTap_Beginners_Guide/using-usage.html <-- Running SystemTap Scripts
http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaai%2FsystemTap%2Fliaaisystapinstallrhel.htm <-- Installing SystemTap on Red Hat Enterprise Linux 5.2
Life of an Oracle I/O: tracing logical and physical I/O with systemtap https://db-blog.web.cern.ch/blog/luca-canali/2014-12-life-oracle-io-tracing-logical-and-physical-io-systemtap
https://www.cnblogs.com/zengkefu/p/6580389.html
http://ksun-oracle.blogspot.com/2018/01/oracle-logical-read-current-gets-access.html
<<<
There’s really no available SPECint number for the T4 but it’s pretty much the same range of speed as the T5, just with fewer cores per processor (8 vs the T5's 16)
You can size it at 30 (speed/core) as well.
$ less spec.txt | sort -rnk1 | grep -i sparc | grep -i oracle
30.5625, 16, 1, 16, 8, 441, 489, Oracle Corporation, SPARC T5-1B, Oct-13
29.2969, 128, 8, 16, 8, 3490, 3750, Oracle Corporation, SPARC T5-8, Apr-13
29.1875, 16, 1, 16, 8, 436, 467, Oracle Corporation, SPARC T5-1B, Apr-13
T5 http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=113060701
T4 http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=111092601
T5 https://www.spec.org/jEnterprise2010/results/res2014q1/jEnterprise2010-20140107-00047.html
T4 https://www.spec.org/jEnterprise2010/results/res2011q3/jEnterprise2010-20110907-00027.html
<<<
disagree. it's not going to be the same performance. SPECint_rate2006/core says it all. see the slide here
<<<
[img(50%,50%)[ http://goo.gl/csMbU ]]
<<<
and SPECint_rate2006/core comparison here (higher the better)
the Oracle slide used the "baseline" number.. where I usually use the "result" (in csv) which is equivalent to the "peak" column in the SPECint_rate2006 main page
so the 2830 is a baseline number divided by the # of cores, which is 64
<<<
[img[ http://goo.gl/0XnNo ]]
<<<
and that rules out storage.
! on E7 comparison (x3-8)
well yeah they're about the same performance range
{{{
$ cat spec.txt | grep -i intel | grep 8870 | sort -rnk1
27, 40, 4, 10, 2, 1010, 1080, Unisys Corporation, Unisys ES7000 Model 7600R G3 (Intel Xeon E7-8870)
26.75, 40, 4, 10, 2, 1010, 1070, NEC Corporation, Express5800/A1080a-S (Intel Xeon E7-8870)
26.75, 40, 4, 10, 2, 1010, 1070, NEC Corporation, Express5800/A1080a-D (Intel Xeon E7-8870)
26.5, 40, 4, 10, 2, 1000, 1060, Oracle Corporation, Sun Server X2-8 (Intel Xeon E7-8870 2.40 GHz)
25.875, 80, 8, 10, 2, 1960, 2070, Supermicro, SuperServer 5086B-TRF (X8OBN-F Intel E7-8870)
24.875, 80, 8, 10, 2, 1890, 1990, Oracle Corporation, Sun Server X2-8 (Intel Xeon E7-8870 2.40 GHz)
}}}
! on E5 comparison (x3-2)
x3-2 is still way faster than t5-8 ;) 44 vs 29 SPECint_rate2006/core.. oh yeah, faster.
{{{
$ cat spec.txt | grep -i intel | grep -i "E5-26" | grep -i sun | sort -rnk1
44.0625, 16, 2, 8, 2, 632, 705, Oracle Corporation, Sun Blade X6270 M3 (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 632, 705, Oracle Corporation, Sun Blade X3-2B (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 630, 705, Oracle Corporation, Sun Server X3-2L (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 630, 705, Oracle Corporation, Sun Fire X4270 M3 (Intel Xeon E5-2690 2.9GHz)
43.875, 16, 2, 8, 2, 628, 702, Oracle Corporation, Sun Server X3-2 (Intel Xeon E5-2690 2.9GHz)
43.875, 16, 2, 8, 2, 628, 702, Oracle Corporation, Sun Fire X4170 M3 (Intel Xeon E5-2690 2.9GHz)
}}}
http://oracle-dba-yi.blogspot.com/2010/01/taf-vs-fan-vs-fcf-vs-ons.html <- good stuff
{{{
What the differences and relationship among TAF/FAN/FCF/ONS?
1 Definition
1) TAF
a feature of Oracle Net Services for OCI8 clients. TAF is transparent application failover which will move a session to a backup connection if the session fails. With Oracle 10g Release 2, you can define the TAF policy on the service using dbms_service package. It will only work with OCI clients. It will only move the session and if the parameter is set, it will failover the select statement. For insert, update or delete transactions, the application must be TAF aware and roll back the transaction. YES, you should enable FCF on your OCI client when you use TAF, it will make the failover faster.
Note: TAF will not work with JDBC thin.
2) FAN
FAN is a feature of Oracle RAC which stands for Fast Application Notification. This allows the database to notify the client of any change (Node up/down, instance up/down, database up/down). For integrated clients, inflight transactions are interrupted and an error message is returned. Inactive connections are terminated.
FCF is the client feature for Oracle Clients that have integrated with FAN to provide fast failover for connections. Oracle JDBC Implicit Connection Cache, Oracle Data Provider for .NET (ODP.NET) and Oracle Call Interface are all integrated clients which provide the Fast Connection Failover feature.
3) FCF
FCF is a feature of Oracle clients that are integrated to receive FAN events and abort inflight transactions, clean up connections when a down event is received, as well as create new connections when an up event is received. Tomcat or JBOSS can take advantage of FCF if the Oracle connection pool is used underneath. This can be either UCP (Universal Connection Pool for JAVA) or ICC (JDBC Implicit Connection Cache). UCP is recommended as ICC will be deprecated in a future release.
4) ONS
http://forums.oracle.com/forums/thread.jspa?messageID=3566976
ONS is part of the clusterware and is used to propagate messages both between nodes and to application-tiers
ONS is the foundation for FAN upon which is built FCF.
RAC uses FAN to publish configuration changes and LBA events. Applications can react as those published events in two way :
- by using ONS api (you need to program it)
- by using FCF (automatic by using JDBC implicit connection cache on the application server)
you can also respond to FAN event by using server-side callout but this on the server side (as their name suggests it)
Rodrigo Mufalani
"ONS send/receive messages about failures automatically. It is a daemon process that runs on each node notifying status from components of database, nodeapps.
If listener process fails on node1 his failure is notified by EVMD, then local ONS communicates the failure to remote ONS in remote nodes, then local ONS on these nodes notifying all aplications about failure that occurred on node1."
2 Relationship
ONS --> FAN --> FCF
ONS -> send/receive messages on local and remote nodes.
FAN -> uses ONS to notify other processes about changes in configuration of service level
FCF -> uses FAN information working with conection pools JAVA and others.
http://forums.oracle.com/forums/thread.jspa?messageID=3566976
3 To use TAF/FAN/FCF/ONS, do you need to configure/install in server or client side?
4 Does ONS automatically send messages ?
or is there any settings to be done ?
Does ONS only broadcast msgs ?
http://forums.oracle.com/forums/thread.jspa?messageID=3566976
(the thread's answer is the same ONS explanation already quoted under "1 Definition" above)
5 Are TAF and FAN mutually exclusive? or if TAF and FCF are mutually exclusive?
No. You can use both TAF and FAN at the same time, or both TAF and FCF, it depends on what you want to achieve with it.
6 TAF Basic Configuration with FAN: Example
Oracle Database 10g Release 2 supports server-side TAF with FAN.
To use server-side TAF:
1) create and start your service using SRVCTL
$ srvctl add service -d RACDB -s AP -r I1,I2
$ srvctl start service -d RACDB -s AP
2) configure TAF in the RDBMS by using the DBMS_SERVICE package.
execute dbms_service.modify_service ( ,-
service_name => 'AP' ,-
aq_ha_notifications => true ,-
failover_method => dbms_service.failover_method_basic ,-
failover_type => dbms_service.failover_type_session ,-
failover_retries => 180, failover_delay => 5 ,-
clb_goal => dbms_service.clb_goal_long);
3) When done, make sure that you define a TNS entry for it in your tnsnames.ora file.
AP =
(DESCRIPTION =(FAILOVER=ON)(LOAD_BALANCE=ON)
(ADDRESS=(PROTOCOL=TCP)(HOST=N1VIP)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=N2VIP)(PORT=1521))
(CONNECT_DATA = (SERVICE_NAME = AP)))
Note that this TNS name does not need to specify TAF parameters as with the previous slide.
7 TAF Basic Configuration without FAN: Example
1) Before using TAF, it is recommended that you create and start a service that is used during connections.
By doing so, you benefit from the integration of TAF and services. When you want to use BASIC TAF with a service, you should have the -P BASIC option when creating the service.
After the service is created, you simply start it on your database.
$ srvctl add service -d RACDB -s AP -r I1,I2 -P BASIC
$ srvctl start service -d RACDB -s AP
2) Then, your application needs to connect to the service by using a connection descriptor similar to the one shown in the slide. The FAILOVER_MODE parameter must be included in the CONNECT_DATA section of your connection descriptor.
AP =
(DESCRIPTION =(FAILOVER=ON)(LOAD_BALANCE=ON)
(ADDRESS=(PROTOCOL=TCP)(HOST=N1VIP)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=N2VIP)(PORT=1521))
(CONNECT_DATA =
(SERVICE_NAME = AP)
(FAILOVER_MODE =
(TYPE=SESSION)
(METHOD=BASIC)
(RETRIES=180)
(DELAY=5))))
Note: If using TAF, do not set the GLOBAL_DBNAME parameter in your listener.ora file.
8 Metalink notes
--Understanding Transparent Application Failover (TAF) and Fast Connection Failover (FCF) [ID 334471.1]
--How To Verify And Test Fast Connection Failover (FCF) Setup From a JDBC Thin Client Against a 10.2.x RAC Cluster [ID 433827.1]
--Fast Connection Failover (FCF) Test Client Using 11g JDBC Driver and 11g RAC Cluster [ID 566573.1]
--Questions about how ONS and FCF work with JDBC [ID 752595.1]
--Configuring ONS For Fast Connection Failover
--How To Implement (Fast Connection Failover) FCF Using JDBC driver ? [ID 414199.1]
--How to Implement Load Balancing With RAC Configured System Using JDBC [ID 247135.1]
}}}
transparent application failover https://vzw.webex.com/vzw/j.php?MTID=ma1137d5dea6daf255be06f6fb672fbd8
Implementing Transparent Application Failover https://docs.oracle.com/cd/E19509-01/820-3492/boaem/index.html
https://www.dropbox.com/s/ez5wlxg9uylxtw0/RacTaf-Demo.sql
* if you kill the instance all the sessions will failover..
* when I was just killing sessions with post_transaction, sometimes they were just killed and did not fail over
see the differences below..
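The session listings below came from a gv$session count; the exact script wasn't captured here, but a reconstruction along these lines produces the same columns:
{{{
-- count SOE sessions per instance, split by failover attributes
select i.instance_name, count(*) as usercount, s.username as user_name,
       s.failover_type, s.failover_method, s.failed_over
from   gv$session s, gv$instance i
where  s.inst_id = i.inst_id
and    s.username = 'SOE'
group  by i.instance_name, s.username, s.failover_type, s.failover_method, s.failed_over
order  by 1;
}}}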
{{{
##################################
Relocate sessions: TPM decreased
##################################
srvctl add service -d dw -s dw_service -r dw1,dw2
SYS@dw1> /
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 7 SOE SESSION BASIC NO
dw2 23 SOE SESSION BASIC NO
--after killed dw1
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw2 23 SOE SESSION BASIC NO
dw2 7 SOE SESSION BASIC YES
-- db and service state after kill of dw1
ora.dw.db database ONLINE OFFLINE Instance Shutdown
ora.dw.db database ONLINE ONLINE db2 Open
ora.dw.dw_service.svc service ONLINE OFFLINE
ora.dw.dw_service.svc service ONLINE ONLINE db2
-- execute relocate service
$ srvctl relocate service -d dw -s dw_service -i dw2 -t dw1
-- after relocate
ora.dw.db database ONLINE ONLINE db1 Open
ora.dw.db database ONLINE ONLINE db2 Open
ora.dw.dw_service.svc service ONLINE OFFLINE
ora.dw.dw_service.svc service ONLINE ONLINE db1
-- kill 15 sessions
alter system disconnect session '59,3' post_transaction;
alter system disconnect session '48,21' post_transaction;
alter system disconnect session '85,5807' post_transaction;
alter system disconnect session '105,1229' post_transaction;
alter system disconnect session '96,521' post_transaction;
alter system disconnect session '82,5313' post_transaction;
alter system disconnect session '50,1467' post_transaction;
alter system disconnect session '86,1837' post_transaction;
alter system disconnect session '38,1161' post_transaction;
alter system disconnect session '47,903' post_transaction;
alter system disconnect session '70,3049' post_transaction;
alter system disconnect session '67,4253' post_transaction;
alter system disconnect session '40,7959' post_transaction;
alter system disconnect session '75,3105' post_transaction;
alter system disconnect session '66,8437' post_transaction;
-- after kill on dw2
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 4 SOE SESSION BASIC YES
dw2 10 SOE SESSION BASIC NO
dw2 5 SOE SESSION BASIC YES
-- after kill of pmon on dw2
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 4 SOE NONE NONE NO
dw1 11 SOE SESSION BASIC YES
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 15 SOE SESSION BASIC YES
#########################
kill sessions - TPM did not change
#########################
-- before kill on dw2
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 8 SOE SESSION BASIC NO
dw2 22 SOE SESSION BASIC NO
-- 1st kill on dw2
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 8 SOE SESSION BASIC NO
dw1 2 SOE SESSION BASIC YES
dw2 9 SOE SESSION BASIC NO
dw2 11 SOE SESSION BASIC YES
-- 2nd kill on dw2
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 8 SOE SESSION BASIC NO
dw1 7 SOE SESSION BASIC YES
dw2 4 SOE SESSION BASIC NO
dw2 11 SOE SESSION BASIC YES
-- 3rd kill on dw2, it actually takes care of the rebalance of sessions
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 8 SOE SESSION BASIC NO
dw1 7 SOE SESSION BASIC YES
dw2 1 SOE SESSION BASIC NO
dw2 14 SOE SESSION BASIC YES
#####################
kill sessions part2
#####################
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 1 SOE SESSION BASIC NO
dw1 6 SOE SESSION BASIC YES
dw2 23 SOE SESSION BASIC NO
21:37:30 SYS@dw2> alter system disconnect session '1,31' post_transaction;
alter system disconnect session '77,1' post_transaction;
alter system disconnect session '65,3' post_transaction;
alter system disconnect session '47,239' post_transaction;
alter system disconnect session '53,1' post_transaction;
alter system disconnect session '45,5' post_transaction;
alter system disconnect session '61,1' post_transaction;
alter system disconnect session '46,39' post_transaction;
alter system disconnect session '72,5' post_transaction;
alter system disconnect session '51,1085' post_transaction;
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 1 SOE SESSION BASIC NO
dw1 11 SOE SESSION BASIC YES
dw2 13 SOE SESSION BASIC NO
dw2 5 SOE SESSION BASIC YES
-- after pmon kill
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 3 SOE NONE NONE NO
dw1 1 SOE SESSION BASIC NO
dw1 11 SOE SESSION BASIC YES
-- after restart of instance by agent
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 1 SOE SESSION BASIC NO
dw1 14 SOE SESSION BASIC YES
dw2 15 SOE SESSION BASIC YES
-- before kill dw1
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 1 SOE SESSION BASIC NO
dw1 14 SOE SESSION BASIC YES
dw2 15 SOE SESSION BASIC YES
-- after kill dw1
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw2 6 SOE NONE NONE NO
dw2 24 SOE SESSION BASIC YES
-- after kill dw1
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw2 30 SOE SESSION BASIC YES
-- after kill of some session from dw2
SYS@dw1> /
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 5 SOE SESSION BASIC YES
dw2 25 SOE SESSION BASIC YES
SYS@dw1> /
INSTANCE_NAME USERCOUNT USER_NAME FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1 12 SOE SESSION BASIC YES
dw2 18 SOE SESSION BASIC YES
}}}
http://www.xaprb.com/blog/2012/02/23/black-box-performance-analysis-with-tcp-traffic/
http://www.percona.com/files/white-papers/mysql-performance-analysis-percona-toolkit-tcp.pdf
''High ping times on interconnect for new Cisco UCS servers''..... https://mail.google.com/mail/u/0/?shva=1#inbox/13d23918df3d1965
{{{
graham - We have seen cases where 10GigE NICs have lower packet rate limits than 1GigE NICs. So if you are pushing many packets to and fro you might be hitting that.
Having said that 'v' is another possibility as others have said. Any chance of removing the 'v'?
kevin - Even though your 10GbE stuff is exhibiting pathology I'd recommend against expecting improved latency on the road from 1GbE to 10GbE. They are fatter pipes, not faster. Yes, I have seen 10GbE cards that exhibit better latency at sub 1GbE payload but those efficiencies were not attributable to 10GbE per se.
james - And basically forget it completely moving forward, because 40gigE is 4xlane 10gig, and 100gigE is 10 lane. We're done with latency improvements.
martin b - Umm, I think I read somewhere that if you really really need low latency you'd go FDR IB. There are PCIe gen 3 cards out there that are good enough to handle the data.
Apparently in HPC FDR is even beating 40 GBit Ethernet, and I guess it's for reasons mentioned here. Not that I heard of any real-life system with 40 GBit Ethernet though. And admittedly the use cases in the RDBMS world requiring FDR IB are a bit limited.
Kevin - Apples oranges.
FDR IB is not mainstream so let's jaw out QDR.
QDR IB with a good protocol like Oracle's RDS (OFED owned) is the creme de la creme.
RoCE doesn't suck if one starts out with head extracted from butt.
What most people end up doing is comparing IPoE to RDMAoIB and that's just not a very fun conversation.
Kyle - For network testing I've been using 3 tools
netio - simple good push button throughput test
netperf - more nobs and whistles, a bit overwhelming
ttcp - java based net test tool
}}}
http://www.agiledata.org/essays/tdd.html
https://www.google.com/search?sxsrf=ACYBGNRr6391gd_-pTZ7h1k2W2uwXmAUCQ%3A1567868802431&ei=gsdzXbfsGcnM_AbdrbuoAw&q=oracle+TDD&oq=oracle+TDD&gs_l=psy-ab.3..0j0i22i30.17663.19125..19373...0.2..0.91.733.10......0....1..gws-wiz.......0i71j35i39j0i131j0i67j0i20i263j0i131i20i263.IfRwUT1wEBk&ved=0ahUKEwi3tZe4_r7kAhVJJt8KHd3WDjUQ4dUDCAs&uact=5
https://softwareengineering.stackexchange.com/questions/162268/tdd-with-sql-and-data-manipulation-functions
https://plunit.com/
https://mikesmithers.wordpress.com/2016/07/31/test-driven-development-and-plsql-the-odyssey-begins/
https://stackoverflow.com/questions/7440008/is-there-any-way-to-apply-tdd-techniques-for-dev-in-pl-sql
https://blog.disy.net/tdd-for-plsql-with-junit/
http://engineering.pivotal.io/post/oracle-sql-tdd/
<<showtoc>>
Master Note For Transparent Data Encryption ( TDE ) (Doc ID 1228046.1)
! backup wallet
Backup Auto-login keystore https://support.oracle.com/epmos/faces/CommunityDisplay?resultUrl=https%3A%2F%2Fcommunity.oracle.com%2Fthread%2F3916985&_afrLoop=390268269273865&resultTitle=Backup+Auto-login+keystore&commId=3916985&displayIndex=1&_afrWindowMode=0&_adf.ctrl-state=j7vbtfksh_175
https://oracle-base.com/articles/12c/multitenant-transparent-data-encryption-tde-12cr1
TDE Wallet Problem in 12c: Cannot do a Set Key operation when an auto-login wallet is present (Doc ID 1944507.1)
https://wiki.loopback.org/display/KB/How+to+crack+Oracle+Wallets
! change password
http://www.asktheway.org/official-documents/oracle/E50529_01/DBIMI/to_dbimi9742_d235.htm#DBIMI9742
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/asoag/configuring-transparent-data-encryption.html#GUID-4FBC5088-A045-4306-88C0-FEBC07CA18AC
How To Change The Wallet Password For A Secure External Password Store? (Doc ID 557382.1)
Oracle Wallet Manager and orapki https://docs.oracle.com/cd/E28280_01/core.1111/e10105/walletmgr.htm#ASADM10177
http://brainsurface.blogspot.com/2016/03/creating-and-changing-walletspasswords.html
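The password change itself is a one-liner in 12c; a sketch with placeholder passwords:
{{{
-- rotate the TDE keystore password (12c ADMINISTER KEY MANAGEMENT syntax)
administer key management alter keystore password
  identified by "old_pwd" set "new_pwd" with backup;
}}}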
! change encryption
How to change TDE Tablespace Encryption From AES128 To AES256 in 12.2 (Doc ID 2456250.1)
! losing the wallet key
https://technology.amis.nl/2013/12/08/wheres-my-wallet-loosing-the-encryption-master-key-in-11g-db-compatibility-in-12c/
TDP (thermal design power)
- the average maximum power a processor can dissipate while running commercially available software
- TDP is primarily used as a guideline for manufacturers of thermal solutions (heatsinks/fans, etc) which tells them how much heat their solution should dissipate
TDP is usually 20% - 30% lower than the CPU maximum power dissipation.
http://www.cpu-world.com/Glossary/T/Thermal_Design_Power_(TDP).html
http://www.overclock.net/t/1265539/what-does-max-tdp-mean
http://www.cpu-world.com/Glossary/M/Minimum_Maximum_power_dissipation.html
http://www.hardwareanalysis.com/content/topic/75906/
http://forums.anandtech.com/showthread.php?t=2355884
! what is DAG - Directed acyclic graph
<<<
Tez uses a DAG (directed acyclic graph) to schedule its execution
DAG is a fancy term for an explain plan tree in Oracle where the execution flows one way (producers/consumers)
DAG = explain plan, the high level process
vertex = server process
edge = connection between producer/consumer vertex
tasks = are the producer/consumer map/reduce tasks
container = reusable JVM containers across DAGs
<<<
! DAG is also like a hierarchical query
http://people.apache.org/~dongsheng/horak/100309_dag_structures_sql.pdf
https://www.codeproject.com/Articles/22824/A-Model-to-Represent-Directed-Acyclic-Graphs-DAG-o
https://docs.oracle.com/middleware/1221/jdev/api-reference-esdk/oracle/javatools/util/DependencyGraph.html
https://codemonth.dk/code_is_good/dev_qa_prod.assert?condition=codemonth:::22::A-PLSQL-DAG-implementation.
https://oracle-base.com/articles/misc/hierarchical-queries
https://docs.oracle.com/database/121/SQLRF/queries003.htm#SQLRF52335
http://www.oradev.com/connect_by.html
https://community.toadworld.com/platforms/oracle/b/weblog/archive/2014/06/15/hitchhikers-guide-to-explain-plan-7
https://renenyffenegger.ch/notes/development/databases/Oracle/SQL/select/hierarchical-queries/start-with_connect-by/index
https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-views/content/section_understanding_dags_vertices_tasks.html
https://cwiki.apache.org/confluence/display/TEZ/How+to+Diagnose+Tez+App
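To tie the two ideas together, a toy traversal; ''EDGES'' is a hypothetical (parent, child) table describing the graph, with root rows carrying a NULL parent:
{{{
-- walk a DAG stored as edge rows; NOCYCLE guards the traversal
select level, parent, child
from   edges
start  with parent is null
connect by nocycle prior child = parent;
}}}
Each output row is one path step; since a DAG node can be reached via several paths, the same child can legitimately appear more than once.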
https://www.google.com/search?q=run+tensorflow+across+cluster&oq=run+tensorflow+across+cluster&aqs=chrome..69i57j33l2.5801j0j4&sourceid=chrome&ie=UTF-8
Chapter 12. Distributing TensorFlow Across Devices and Servers https://learning.oreilly.com/library/view/hands-on-machine-learning/9781491962282/ch12.html
https://www.tensorflow.org/api_docs/python/tf/train/ClusterSpec
Automatically Collecting Diagnostic Data Using the Oracle Trace File Analyzer Collector
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/atnms/managing-configuring-tfa.html#GUID-E52F3ADB-B210-4135-8D7D-D9A10FF8665B
https://docs.oracle.com/en/engineered-systems/health-diagnostics/trace-file-analyzer/tfaug/managing-and-configuring-tfa.html#GUID-CBF85753-9DCC-48BC-AA83-5CA2982ED0EB
https://www.google.com/search?source=hp&ei=R9nHX7OhGaOSwbkP7JK2mAM&q=tfa+collector+interval&oq=tfa+collector+interval&gs_lcp=CgZwc3ktYWIQAzIFCCEQoAE6CAgAELEDEIMBOg4ILhCxAxCDARDHARCjAjoICC4QsQMQgwE6CAguEMcBEKMCOgsILhCxAxDHARCjAjoRCC4QsQMQxwEQowIQyQMQkwI6BQgAELEDOggILhDHARCvAToCCAA6BQgAEMkDOgQIABAKOgYIABAWEB46BwghEAoQoAFQlgtYmCFgtCNoAHAAeAGAAboBiAGcEJIBBDE3LjWYAQCgAQGqAQdnd3Mtd2l6&sclient=psy-ab&ved=0ahUKEwjz4KeZ86_tAhUjSTABHWyJDTMQ4dUDCAg&uact=5
http://techaticpsr.blogspot.com/2012/04/its-official-we-have-no-love-for.html <-- mentions ''khugepaged''
''do this to disable''
{{{
root# echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
}}}
Transparent huge pages in 2.6.38 http://lwn.net/Articles/423584/
KVM ppt on THP http://www.linux-kvm.org/wiki/images/9/9e/2010-forum-thp.pdf
http://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-hadoop-workloads/
! Oracle's view
https://blogs.oracle.com/linux/entry/performance_issues_with_transparent_huge
ALERT: Disable Transparent HugePages on SLES11, RHEL6, OEL6 and UEK2 Kernels (Doc ID 1557478.1)
''to disable''
{{{
Add the following lines in /etc/rc.local and reboot the server:
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
}}}
! references
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ladbi/disabling-transparent-hugepages.html#GUID-02E9147D-D565-4AF8-B12A-8E6E9F74BEEA
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ladbi/restrictions-for-hugepages-and-transparent-hugepages-configurations.html#GUID-D8178896-D00F-4F02-82A7-A44F89D8F103
https://oracle-base.com/articles/linux/configuring-huge-pages-for-oracle-on-linux-64
https://stackoverflow.com/questions/48743100/why-thp-transparent-huge-pages-are-not-recommended-for-databases-like-oracle-a
https://alexandrnikitin.github.io/blog/transparent-hugepages-measuring-the-performance-impact/
https://blogs.oracle.com/linux/performance-issues-with-transparent-huge-pages-thp
-- drop table
How To Efficiently Drop A Table With Many Extents
Doc ID: Note:68836.1
-- ROW CHAINING MIGRATION
How to Identify, Avoid and Eliminate Chained and Migrated Rows ?
Doc ID: 746778.1
How to deal with row chaining on columns of LONG or LONG RAW datatype
Doc ID: 746519.1
Note 102932.1 - Monitoring Chained Rows on IOTs
Note 122020.1 - Row Chaining and Row Migration
Note 265707.1 - Analyze Table List chained rows Into chained_rows Gives ORA-947
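a minimal sketch of the classic check from the notes above (schema/table names are just examples):
{{{
-- one-time setup, creates the CHAINED_ROWS table in the current schema
@?/rdbms/admin/utlchain.sql

-- list chained/migrated rows (example table) into CHAINED_ROWS
analyze table scott.emp list chained rows into chained_rows;

select table_name, count(*) from chained_rows group by table_name;
}}}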
-- COMPRESSION
Potential Row Cache sizing conflicts using data compression
Doc ID: 168085.1
-- SEGMENT ADVISOR
SEGMENT ADVISOR not Working as Expected for LOB or SYS_LOB SEGMENT [ID 988744.1]
How to determine the actual size of the LOB segments and how to free the deleted/unused space above/below the HWM [ID 386341.1]
Bug 5565887: SHRINK SPACE IS REQUIRED TWICE FOR RELEASING SPACE.
Reclaiming LOB space in Oracle http://halisway.blogspot.com/2007/06/reclaiming-lob-space-in-oracle.html
Reclaiming Unused LOB Space http://www.idevelopment.info/data/Oracle/DBA_tips/LOBs/LOBS_85.shtml
{{{
DBMS_SPACE
- it says segment advisor is not really accurate for LOBs, but John found a way to do this manually (see the sketch after this output)
OPS$ORAISP@ISP> select table_name,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_ROW_LEN,last_analyzed from dba_tables where table_name = 'SWWCNTP0';
TABLE_NAME NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_ROW_LEN LAST_ANALYZED
------------------------------ ---------- ---------- ------------ ----------- ---------------
SWWCNTP0 1885641694 279741908 0 111 13-AUG-11
OPS$ORAISP@ISP> select /*+ parallel (a,8) */ count(9) from sapsr3.SWWCNTP0;
^Cselect /*+ parallel (a,8) */ count(9) from sapsr3.SWWCNTP0
*
ERROR at line 1:
ORA-01013: user requested cancel of current operation
OPS$ORAISP@ISP>
OPS$ORAISP@ISP>
OPS$ORAISP@ISP> a a
1* select /*+ parallel (a,8) */ count(9) from sapsr3.SWWCNTP0 a
OPS$ORAISP@ISP> /
COUNT(9)
----------
1945159726
OPS$ORAISP@ISP> select segment_name,tablespace_name,bytes/1048576/1024 from dba_segments where segment_name = upper('swwcntp0');
SEGMENT_NAME TABLESPACE_NAME BYTES/1048576/1024
--------------------------------------------------------------------------------- ------------------------------ ------------------
SWWCNTP0 PSAPSR3 2221.16504
OPS$ORAISP@ISP> a '
1* select * from dba_lobs where table_name ='SWWCNTP0'
OPS$ORAISP@ISP> /
OWNER TABLE_NAME
------------------------------ ------------------------------
COLUMN_NAME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SEGMENT_NAME TABLESPACE_NAME INDEX_NAME CHUNK PCTVERSION RETENTION FREEPOOLS CACHE LOGGING ENCR COMPRE DEDUPLICATION IN_
------------------------------ ------------------------------ ------------------------------ ---------- ---------- ---------- ---------- ---------- ------- ---- ------ --------------- ---
FORMAT PAR SEC SEG RETENTI RETENTION_VALUE
--------------- --- --- --- ------- ---------------
SAPSR3 SWWCNTP0
DATA
SYS_LOB0000054277C00005$$ PSAPSR3 SYS_IL0000054277C00005$$ 8192 43200 YES YES NONE NONE NONE YES
NOT APPLICABLE NO NO YES YES
OPS$ORAISP@ISP> select bytes/1048576/1024 from dba_segments where segment_name = 'SYS_LOB0000054277C00005$$';
BYTES/1048576/1024
------------------
.000061035
I don’t think there is any empty space in the table. It doesn’t look like anything has been deleted.
In Mid August:
OPS$ORAISP@ISP> select table_name,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_ROW_LEN,last_analyzed from dba_tables where table_name = 'SWWCNTP0';
TABLE_NAME NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_ROW_LEN LAST_ANALYZED
------------------------------ ---------- ---------- ------------ ----------- ---------------
SWWCNTP0 1,885,641,694 279741908 0 111 13-AUG-11
Today:
1* select count(9) from sapsr3.SWWCNTP0 a
OPS$ORAISP@ISP> /
COUNT(9)
----------
1,945,159,726
There are almost 60 million more records today than there were on 13-AUG-11.
Also, looking at dba_tab_modifications will give you an idea of the transactions that happened on this table (see the sketch below)..
}}}
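a minimal sketch of that manual check (owner and LOB segment name taken from the output above; assumes an ASSM tablespace, same approach as MOS 386341.1):
{{{
-- DML recorded since the last stats gather
-- exec dbms_stats.flush_database_monitoring_info  -- to flush recent DML first
select table_owner, table_name, inserts, updates, deletes, timestamp
from dba_tab_modifications
where table_name = 'SWWCNTP0';

-- LOB space check via DBMS_SPACE.SPACE_USAGE, the manual alternative
-- to segment advisor for LOBs
set serveroutput on
declare
  l_unf  number; l_unfb  number;
  l_fs1  number; l_fs1b  number;
  l_fs2  number; l_fs2b  number;
  l_fs3  number; l_fs3b  number;
  l_fs4  number; l_fs4b  number;
  l_full number; l_fullb number;
begin
  dbms_space.space_usage(
    segment_owner      => 'SAPSR3',
    segment_name       => 'SYS_LOB0000054277C00005$$',
    segment_type       => 'LOB',
    unformatted_blocks => l_unf,  unformatted_bytes => l_unfb,
    fs1_blocks => l_fs1, fs1_bytes => l_fs1b,
    fs2_blocks => l_fs2, fs2_bytes => l_fs2b,
    fs3_blocks => l_fs3, fs3_bytes => l_fs3b,
    fs4_blocks => l_fs4, fs4_bytes => l_fs4b,
    full_blocks => l_full, full_bytes => l_fullb);
  dbms_output.put_line('full MB: '||round(l_fullb/1048576,2));
  dbms_output.put_line('75-100% free MB: '||round(l_fs4b/1048576,2));
end;
/
}}}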
http://ryrobes.com/python/building-tableau-data-extract-files-with-python-in-tableau-8-sample-usage/
http://www.tableausoftware.com/public/blog/2013/08/data-scraping-python-2098
tableau and r integration http://www.tableausoftware.com/about/blog/2013/10/tableau-81-and-r-25327
R is Here! http://www.tableausoftware.com/about/blog/r-integration
Using R and Tableau whitepaper http://www.tableausoftware.com/learn/whitepapers/using-r-and-tableau
tableau and R blog series http://www.jenunderwood.com/2014/01/19/tableau-with-r-part-2-clustering/
Set up Rserve on Windows http://community.tableausoftware.com/thread/132478
<<<
Starting from the beginning for anyone that is trying to do this. Here's the R code.
{{{
# install the Rserve package
install.packages("Rserve")
# load the Rserve package
library(Rserve)
# start Rserve
Rserve(debug = FALSE, args = NULL, quote = (length(args) > 1))
}}}
The Console window should display:
{{{
Starting Rserve... "c:\gsutil\R\WIN-LI~1\3.0\Rserve\libs\x64\Rserve.exe"
}}}
In Tableau, under the Help menu select "Manage R Connection..."
Select "localhost" under the Server with the default port of 6311 and then select "Test Connection" (leave the box unchecked for "Sign in with user name and password").
You should get a dialog box that says "Successfully connected to the Rserve service".
I am using R version 3.0.1. You can use this command in RStudio to check your version:
{{{
getRversion()
}}}
Hope this helps.
A recreation of Hans Rosling's Gapminder in Tableau http://community.tableausoftware.com/groups/midwest/blog/2013/07/08/a-recreation-of-hans-rosling-s-gapminder-in-tableau
<<<
Advantages of Using Locally Managed vs Dictionary Managed Tablespaces 105120.1
What happens when a sort occurs
Note 1076161.6 Shutdown Normal or Shutdown Immediate hangs
-- TABLESPACE
How to 'DROP' a Datafile from a Tablespace
Doc ID: Note:111316.1
-- SCRIPTS
Script: Report Table's Rows per Datafile
Doc ID: Note:1019625.6
Script: To Report Map of all Database File
Doc ID: Note:1019714.6
Script to Report Segments in a Given Datafile
Doc ID: Note:1019720.6
Script to Print Block Map of Entire Database
Doc ID: Note:1019710.6
Script to Report Tables Approaching MAXEXTENTS
Doc ID: Note:1019721.6
Script: To Create Tablespace Block Map
Doc ID: Note:1019474.6
Script to Report Segment Storage Parameters
Doc ID: Note:1019918.6
Script to Report on Segment Extents
Doc ID: Note:1019915.6
Script to Report Tablespace Free and Fragmentation
Doc ID: Note:1019709.6
Script to Report on Tablespace Storage Parameters
Doc ID: Note:1019506.6
-- MIGRATE TO LMT
Create/Upgrade/Migrate a Database to Have a Locally Managed SYSTEM Tablespace
Doc ID: 175434.1
-- 2GB LIMIT
2Gb or Not 2Gb - File limits in Oracle
Doc ID: Note:62427.1
-- SPACE
Troubleshooting a Database Tablespace Used(%) Alert problem
Doc ID: 403264.1
http://blogs.warwick.ac.uk/java/entry/oracle_tde_/ <-- "OPEN_NO_MASTER_KEY" on 11203
1260584.1
https://forums.oracle.com/forums/thread.jspa?threadID=1080799
Creating Duplicate database using RMAN encrypted backups: [ID 464832.1] <-- you don't have to do this if you have an auto-open wallet; make sure the db_unique_name directory uses the right upper case, or at least make sure that the wallet directory is accessible
{{{
orapki wallet create -wallet /u01/app/oracle/admin/testdb/wallet -auto_login_local -pwd "welcome1%"
alter system set encryption wallet open identified by "welcome1%";
alter system set encryption key identified by "welcome1%";
alter system set encryption wallet close identified by "welcome1%";
select * from gv$encryption_wallet;
alter system set encryption wallet open identified by "welcome1%";
col name format a50
select name from v$datafile where rownum < 2;
NAME
--------------------------------------------------
/oracle/oradata/db01/users01.dbf
CREATE SMALLFILE TABLESPACE data_encypt
DATAFILE '/oracle/oradata/db01/encrypt_01.dbf'
SIZE 50M LOGGING EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ENCRYPTION USING 'AES192' DEFAULT STORAGE(ENCRYPT);
drop table hr.test_lob;
drop table hr.test_lob_tmp;
select a.table_name,b.tablespace_name,b.encrypted
from dba_tables a, dba_tablespaces b
where a.tablespace_name=b.tablespace_name
and owner='HR'
and table_name in ('TEST_LOB');
TABLE_NAME TABLESPACE_NAME ENC
------------------------------ ------------------------------ ---
TEST_LOB USERS NO
alter table hr.test_lob move tablespace data_encypt;
* rebuild indexes if necessary
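-- (sketch, not from the original session) ALTER TABLE ... MOVE marks the
-- table's indexes UNUSABLE; find them and rebuild:
select owner, index_name, status
from dba_indexes
where table_owner = 'HR' and table_name = 'TEST_LOB';
-- alter index hr.<index_name> rebuild;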
TABLE_NAME TABLESPACE_NAME ENC
------------------------------ ------------------------------ ---
TEST_LOB DATA_ENCYPT YES
-- then move the LOBs to the encrypted tablespace as well
15:53:02 SYS@db01> select a.table_name,b.tablespace_name,b.encrypted
from dba_tables a, dba_tablespaces b
where a.tablespace_name=b.tablespace_name
and owner='HR'
and table_name in ('TEST_LOB');15:53:49 2 15:53:49 3 15:53:49 4 15:53:49 5
TABLE_NAME TABLESPACE_NAME ENC
------------------------------ ------------------------------ ---
TEST_LOB DATA_ENCYPT YES
col column_name format a10
select
OWNER
,TABLE_NAME
,COLUMN_NAME
,SEGMENT_NAME
,TABLESPACE_NAME
,INDEX_NAME
,CHUNK
,PCTVERSION
,RETENTION
,FREEPOOLS
,CACHE
,LOGGING
,ENCRYPT
,COMPRESSION
,DEDUPLICATION
,IN_ROW
,FORMAT
,PARTITIONED
from dba_lobs
where owner in ('HR')
and table_name = 'TEST_LOB';
15:52:45 SYS@db01> col column_name format a10
15:52:57 SYS@db01> select
15:52:57 2 OWNER
15:52:57 3 ,TABLE_NAME
15:52:57 4 ,COLUMN_NAME
15:52:57 5 ,SEGMENT_NAME
15:52:57 6 ,TABLESPACE_NAME
15:52:57 7 ,INDEX_NAME
,CHUNK
15:52:57 8 15:52:57 9 ,PCTVERSION
15:52:57 10 ,RETENTION
15:52:57 11 ,FREEPOOLS
15:52:57 12 ,CACHE
15:52:57 13 ,LOGGING
15:52:57 14 ,ENCRYPT
15:52:57 15 ,COMPRESSION
15:52:57 16 ,DEDUPLICATION
15:52:57 17 ,IN_ROW
15:52:57 18 ,FORMAT
15:52:57 19 ,PARTITIONED
15:52:57 20 from dba_lobs
15:52:57 21 where owner in ('HR')
15:52:57 22 and table_name = 'TEST_LOB';
OWNER TABLE_NAME COLUMN_NAM SEGMENT_NAME TABLESPACE_NAME INDEX_NAME CHUNK PCTVERSION RETENTION FREEPOOLS CACHE LOGGING ENCR COMPRE DEDUPLICATION IN_ FORMAT PAR
------------------------------ ------------------------------ ---------- ------------------------------ ------------------------------ ------------------------------ ---------- ---------- ---------- ---------- ---------- ------- ---- ------ --------------- --- --------------- ---
HR TEST_LOB CLOB_FIELD SYS_LOB0000071750C00002$$ USERS SYS_IL0000071750C00002$$ 8192 900 NO YES NONE NONE NONE YES ENDIAN NEUTRAL NO
HR TEST_LOB BLOB_FIELD SYS_LOB0000071750C00003$$ USERS SYS_IL0000071750C00003$$ 8192 900 NO YES NONE NONE NONE YES NOT APPLICABLE NO
col segment_name format a30
select segment_name, tablespace_name, segment_type, round(bytes/1024/1024,2) segment_mb
from dba_segments where owner='HR' and segment_type = 'LOBSEGMENT'
order by 4 asc;
select 'alter table '||owner||'.'||table_name||' move LOB ('||column_name||') store as (tablespace DATA_ENCYPT);'
from dba_lobs
where table_name = 'TEST_LOB';
alter table hr.test_lob move lob (CLOB_FIELD) store as (tablespace DATA_ENCYPT);
OWNER TABLE_NAME COLUMN_NAM SEGMENT_NAME TABLESPACE_NAME INDEX_NAME CHUNK PCTVERSION RETENTION FREEPOOLS CACHE LOGGING ENCR COMPRE DEDUPLICATION IN_ FORMAT PAR
------------------------------ ------------------------------ ---------- ------------------------------ ------------------------------ ------------------------------ ---------- ---------- ---------- ---------- ---------- ------- ---- ------ --------------- --- --------------- ---
HR TEST_LOB CLOB_FIELD SYS_LOB0000071750C00002$$ DATA_ENCYPT SYS_IL0000071750C00002$$ 8192 900 NO YES NONE NONE NONE YES ENDIAN NEUTRAL NO
HR TEST_LOB BLOB_FIELD SYS_LOB0000071750C00003$$ USERS SYS_IL0000071750C00003$$ 8192 900 NO YES NONE NONE NONE YES NOT APPLICABLE NO
-- using the script
OWNER TABLE_NAME COLUMN_NAM SEGMENT_NAME TABLESPACE_NAME INDEX_NAME CHUNK PCTVERSION RETENTION FREEPOOLS CACHE LOGGING ENCR COMPRE DEDUPLICATION IN_ FORMAT PAR
------------------------------ ------------------------------ ---------- ------------------------------ ------------------------------ ------------------------------ ---------- ---------- ---------- ---------- ---------- ------- ---- ------ --------------- --- --------------- ---
HR TEST_LOB CLOB_FIELD SYS_LOB0000071750C00002$$ DATA_ENCYPT SYS_IL0000071750C00002$$ 8192 900 NO YES NONE NONE NONE YES ENDIAN NEUTRAL NO
HR TEST_LOB BLOB_FIELD SYS_LOB0000071750C00003$$ DATA_ENCYPT SYS_IL0000071750C00003$$ 8192 900 NO YES NONE NONE NONE YES NOT APPLICABLE NO
}}}
! wallet error if it does not exist
{{{
/home/oracle/dba/rman/backup.reco.sh testdb PROD FULL FULLDBBKUP_012 > /home/oracle/dba/rman/backup.reco.$ORACLE_SID.log
/home/oracle/dba/rman/rmanDuplicate.sh testdb LATEST FULLDBBKUP_012 NOPURGE testdb2 > rmanDuplicate.testdb.log
-- if the wallet is not open, the duplicate will error
contents of Memory Script:
{
sql clone "alter system set db_name =
''TESTDB2'' comment=
''Reset to original value by RMAN'' scope=spfile";
sql clone "alter system reset db_unique_name scope=spfile";
shutdown clone immediate;
}
executing Memory Script
Errors in memory script
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-01507: database not mounted
ORA-06512: at "SYS.X$DBMS_RCVMAN", line 13466
ORA-06512: at line 1
RMAN-03015: error occurred in stored script Memory Script
RMAN-10035: exception raised in RPC:
ORA-19583: conversation terminated due to error
ORA-19870: error while restoring backup piece /reco/rman/duplicate/TESTDB/2013_04_17/FULLDBBKUP/o1_mf_nnnd0_FULLDBBKUP_8pxxk2x7_.bkp
ORA-19913: unable to decrypt backup
ORA-28365: wallet is not open
ORA-06512: at "SYS.X$DBMS_BACKUP_RESTORE", line 2338
RMAN-10031: RPC Error: ORA-19583 occurred during call to DBMS_BACKUP_RESTORE.RESTOREBACKUPPIECE
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 04/17/2013 14:38:41
RMAN-05501: aborting duplication of target database
}}}
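a minimal fix sketch: with the source database's wallet files copied to the auxiliary's wallet directory (password assumed to be the same one used above), open the wallet before re-running the duplicate:
{{{
-- on the auxiliary instance, before re-running the duplicate:
alter system set encryption wallet open identified by "welcome1%";

-- confirm the wallet status is OPEN:
select * from gv$encryption_wallet;
}}}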
<<cloud tags action:popup systemConfig systemPalette systemServer systemTheme TaskPackage TiddlyWiki TidIDEPackage transclusion ScrollbarPackage NavigationPackage MediaPackage IconPackage DiscoveryPackage bookmarklet>>
/***
|Name|TagCloudPlugin|
|Source|http://www.TiddlyTools.com/#TagCloudPlugin|
|Version|1.7.0|
|Author|Eric Shulman|
|Original Author|Clint Checketts|
|License|http://www.TiddlyTools.com/#LegalStatements|
|~CoreVersion|2.1|
|Type|plugin|
|Description|present a 'cloud' of tags (or links) using proportional font display|
!Usage
<<<
{{{
<<cloud type action:... limit:... tag tag tag ...>>
<<cloud type action:... limit:... +TiddlerName>>
<<cloud type action:... limit:... -TiddlerName>>
<<cloud type action:... limit:... =tagvalue>>
}}}
where:
* //type// is a keyword, one of:
** ''tags'' (default) - displays a cloud of tags, based on frequency of use
** ''links'' - displays a cloud of tiddlers, based on number of links //from// each tiddler
** ''references'' - displays a cloud of tiddlers, based on number of links //to// each tiddler
* ''action:popup'' (default) - clicking a cloud item shows a popup with links to related tiddlers<br>//or//<br> ''action:goto'' - clicking a cloud item immediately opens the tiddler corresponding to that item
* ''limit:N'' (optional) - restricts the cloud display to only show the N most popular tags/links
* ''tag tag tag...'' (or ''title title title'' if ''links''/''references'' is used)<br>shows all tags/links in the document //except// for those listed as macro parameters
* ''+TiddlerName''<br>show only tags/links read from a space-separated, bracketed list stored in a separate tiddler.
* ''-TiddlerName''<br>show all tags/links //except// those read from a space-separated, bracketed list stored in a separate tiddler.
* ''=tagvalue'' (//only if type=''tags''//)<br>shows only tags that are themselves tagged with the indicated tag value (i.e., ~TagglyTagging usage)
//note: for backward-compatibility, you can also use the macro {{{<<tagCloud ...>>}}} in place of {{{<<cloud ...>>}}}//
<<<
!Examples
<<<
//all tags excluding<<tag systemConfig>>, <<tag excludeMissing>> and <<tag script>>//
{{{<<cloud systemConfig excludeMissing script>>}}}
{{groupbox{<<cloud systemConfig excludeMissing script>>}}}
//top 10 tags excluding<<tag systemConfig>>, <<tag excludeMissing>> and <<tag script>>//
{{{<<cloud limit:10 systemConfig excludeMissing script>>}}}
{{groupbox{<<cloud limit:10 systemConfig excludeMissing script>>}}}
//tags listed in// [[FavoriteTags]]
{{{<<cloud +FavoriteTags>>}}}
{{groupbox{<<cloud +FavoriteTags>>}}}
//tags NOT listed in// [[FavoriteTags]]
{{{<<cloud -FavoriteTags>>}}}
{{groupbox{<<cloud -FavoriteTags>>}}}
//links to tiddlers tagged with 'package'//
{{{<<cloud action:goto =package>>}}}
{{groupbox{<<cloud action:goto =package>>}}}
//top 20 most referenced tiddlers//
{{{<<cloud references limit:20>>}}}
{{groupbox{<<cloud references limit:20>>}}}
//top 20 tiddlers that contain the most links//
{{{<<cloud links limit:20>>}}}
{{groupbox{<<cloud links limit:20>>}}}
<<<
!Revisions
<<<
2009.07.17 [1.7.0] added {{{-TiddlerName}}} parameter to exclude tags that are listed in the indicated tiddler
2009.02.26 [1.6.0] added {{{action:...}}} parameter to apply popup vs. goto action when clicking cloud items
2009.02.05 [1.5.0] added ability to show links or back-links (references) instead of tags and renamed macro to {{{<<cloud>>}}} to reflect more generalized usage.
2008.12.16 [1.4.2] corrected group calculation to prevent 'group=0' error
2008.12.16 [1.4.1] revised tag filtering so excluded tags don't affect calculations
2008.12.15 [1.4.0] added {{{limit:...}}} parameter to restrict the number of tags displayed to the top N most popular
2008.11.15 [1.3.0] added {{{+TiddlerName}}} parameter to include only tags that are listed in the indicated tiddler
2008.09.05 [1.2.0] added '=tagname' parameter to include only tags that are themselves tagged with the specified value (i.e., ~TagglyTagging usage)
2008.07.03 [1.1.0] added 'segments' property to macro object. Extensive code cleanup
<<<
!Code
***/
//{{{
version.extensions.TagCloudPlugin= {major: 1, minor: 7 , revision: 0, date: new Date(2009,7,17)};
//Originally created by Clint Checketts, contributions by Jonny Leroy and Eric Shulman
//Currently maintained and enhanced by Eric Shulman
//}}}
//{{{
config.macros.cloud = {
tagstip: "%1 tiddlers tagged with '%0'",
refslabel: " (%0 references)",
refstip: "%1 tiddlers have links to '%0'",
linkslabel: " (%0 links)",
linkstip: "'%0' has links to %1 other tiddlers",
groups: 9,
init: function() {
config.macros.tagCloud=config.macros.cloud; // for backward-compatibility
config.shadowTiddlers.TagCloud='<<cloud>>';
config.shadowTiddlers.StyleSheetTagCloud=
'/*{{{*/\n'
+'.tagCloud span {line-height: 3.5em; margin:3px;}\n'
+'.tagCloud1{font-size: 80%;}\n'
+'.tagCloud2{font-size: 100%;}\n'
+'.tagCloud3{font-size: 120%;}\n'
+'.tagCloud4{font-size: 140%;}\n'
+'.tagCloud5{font-size: 160%;}\n'
+'.tagCloud6{font-size: 180%;}\n'
+'.tagCloud7{font-size: 200%;}\n'
+'.tagCloud8{font-size: 220%;}\n'
+'.tagCloud9{font-size: 240%;}\n'
+'/*}}}*/\n';
setStylesheet(store.getTiddlerText('StyleSheetTagCloud'),'tagCloudsStyles');
},
getLinks: function(tiddler) { // get list of links to existing tiddlers and shadows
if (!tiddler.linksUpdated) tiddler.changed();
var list=[]; for (var i=0; i<tiddler.links.length; i++) {
var title=tiddler.links[i];
if (store.isShadowTiddler(title)||store.tiddlerExists(title))
list.push(title);
}
return list;
},
handler: function(place,macroName,params) {
// unpack params
var inc=[]; var ex=[]; var limit=0; var action='popup';
var links=(params[0]&&params[0].toLowerCase()=='links'); if (links) params.shift();
var refs=(params[0]&&params[0].toLowerCase()=='references'); if (refs) params.shift();
if (params[0]&&params[0].substr(0,7).toLowerCase()=='action:')
action=params.shift().substr(7).toLowerCase();
if (params[0]&&params[0].substr(0,6).toLowerCase()=='limit:')
limit=parseInt(params.shift().substr(6));
while (params.length) {
if (params[0].substr(0,1)=='+') { // read taglist from tiddler
inc=inc.concat(store.getTiddlerText(params[0].substr(1),'').readBracketedList());
} else if (params[0].substr(0,1)=='-') { // exclude taglist from tiddler
ex=ex.concat(store.getTiddlerText(params[0].substr(1),'').readBracketedList());
} else if (params[0].substr(0,1)=='=') { // get tag list using tagged tags
var tagged=store.getTaggedTiddlers(params[0].substr(1));
for (var t=0; t<tagged.length; t++) inc.push(tagged[t].title);
} else ex.push(params[0]); // exclude params
params.shift();
}
// get all items, include/exclude specific items
var items=[];
var list=(links||refs)?store.getTiddlers('title','excludeLists'):store.getTags();
for (var t=0; t<list.length; t++) {
var title=(links||refs)?list[t].title:list[t][0];
if (links) var count=this.getLinks(list[t]).length;
else if (refs) var count=store.getReferringTiddlers(title).length;
else var count=list[t][1];
if ((!inc.length||inc.contains(title))&&(!ex.length||!ex.contains(title)))
items.push({ title:title, count:count });
}
if(!items.length) return;
// sort by decending count, limit results (optional)
items=items.sort(function(a,b){return(a.count==b.count)?0:(a.count>b.count?-1:1);});
while (limit && items.length>limit) items.pop();
// find min/max and group size
var most=items[0].count;
var least=items[items.length-1].count;
var groupSize=(most-least+1)/this.groups;
// sort by title and draw the cloud of items
items=items.sort(function(a,b){return(a.title==b.title)?0:(a.title>b.title?1:-1);});
var cloudWrapper = createTiddlyElement(place,'div',null,'tagCloud',null);
for (var t=0; t<items.length; t++) {
cloudWrapper.appendChild(document.createTextNode(' '));
var group=Math.ceil((items[t].count-least)/groupSize)||1;
var className='tagCloudtag tagCloud'+group;
var tip=refs?this.refstip:links?this.linkstip:this.tagstip;
tip=tip.format([items[t].title,items[t].count]);
if (action=='goto') { // TAG/LINK/REFERENCES GOTO
var btn=createTiddlyLink(cloudWrapper,items[t].title,true,className);
btn.title=tip;
btn.style.fontWeight='normal';
} else if (!links&&!refs) { // TAG POPUP
var btn=createTiddlyButton(cloudWrapper,items[t].title,tip,onClickTag,className);
btn.setAttribute('tag',items[t].title);
} else { // LINK/REFERENCES POPUP
var btn=createTiddlyButton(cloudWrapper,items[t].title,tip,
function(ev) { var e=ev||window.event; var cmt=config.macros.cloud;
var popup = Popup.create(this);
var title = this.getAttribute('tiddler');
var count = this.getAttribute('count');
var refs = this.getAttribute('refs')=='T';
var links = this.getAttribute('links')=='T';
var label = (refs?cmt.refslabel:cmt.linkslabel).format([count]);
createTiddlyLink(popup,title,true);
createTiddlyText(popup,label);
createTiddlyElement(popup,'hr');
if (refs) {
popup.setAttribute('tiddler',title);
config.commands.references.handlePopup(popup,title);
}
if (links) {
var tiddler = store.fetchTiddler(title);
var links=config.macros.cloud.getLinks(tiddler);
for(var i=0;i<links.length;i++)
createTiddlyLink(createTiddlyElement(popup,'li'),
links[i],true);
}
Popup.show();
e.cancelBubble=true; if(e.stopPropagation) e.stopPropagation();
return false;
}, className);
btn.setAttribute('tiddler',items[t].title);
btn.setAttribute('count',items[t].count);
btn.setAttribute('refs',refs?'T':'F');
btn.setAttribute('links',links?'T':'F');
btn.title=tip;
}
}
}
};
//}}}
/***
|Name:|TagglyTaggingPlugin|
|Description:|tagglyTagging macro is a replacement for the builtin tagging macro in your ViewTemplate|
|Version:|3.3.1 ($Rev: 9828 $)|
|Date:|$Date: 2009-06-03 21:38:41 +1000 (Wed, 03 Jun 2009) $|
|Source:|http://mptw.tiddlyspot.com/#TagglyTaggingPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
!Notes
See http://mptw.tiddlyspot.com/#TagglyTagging
***/
//{{{
merge(String.prototype,{
parseTagExpr: function(debug) {
if (this.trim() == "")
return "(true)";
var anyLogicOp = /(!|&&|\|\||\(|\))/g;
var singleLogicOp = /^(!|&&|\|\||\(|\))$/;
var spaced = this.
// because square brackets in templates are no good
// this means you can use [(With Spaces)] instead of [[With Spaces]]
replace(/\[\(/g," [[").
replace(/\)\]/g,"]] ").
// space things out so we can use readBracketedList. tricky eh?
replace(anyLogicOp," $1 ");
var expr = "";
var tokens = spaced.readBracketedList(false); // false means don't uniq the list. nice one JR!
for (var i=0;i<tokens.length;i++)
if (tokens[i].match(singleLogicOp))
expr += tokens[i];
else
expr += "tiddler.tags.contains('%0')".format([tokens[i].replace(/'/,"\\'")]); // fix single quote bug. still have round bracket bug i think
if (debug)
alert(expr);
return '('+expr+')';
}
});
merge(TiddlyWiki.prototype,{
getTiddlersByTagExpr: function(tagExpr,sortField) {
var result = [];
var expr = tagExpr.parseTagExpr();
store.forEachTiddler(function(title,tiddler) {
if (eval(expr))
result.push(tiddler);
});
if(!sortField)
sortField = "title";
result.sort(function(a,b) {return a[sortField] < b[sortField] ? -1 : (a[sortField] == b[sortField] ? 0 : +1);});
return result;
}
});
config.taggly = {
// for translations
lingo: {
labels: {
asc: "\u2191", // up arrow
desc: "\u2193", // down arrow
title: "title",
modified: "modified",
created: "created",
show: "+",
hide: "-",
normal: "normal",
group: "group",
commas: "commas",
sitemap: "sitemap",
numCols: "cols\u00b1", // plus minus sign
label: "Tagged as '%0':",
exprLabel: "Matching tag expression '%0':",
excerpts: "excerpts",
descr: "descr",
slices: "slices",
contents: "contents",
sliders: "sliders",
noexcerpts: "title only",
noneFound: "(none)"
},
tooltips: {
title: "Click to sort by title",
modified: "Click to sort by modified date",
created: "Click to sort by created date",
show: "Click to show tagging list",
hide: "Click to hide tagging list",
normal: "Click to show a normal ungrouped list",
group: "Click to show list grouped by tag",
sitemap: "Click to show a sitemap style list",
commas: "Click to show a comma separated list",
numCols: "Click to change number of columns",
excerpts: "Click to show excerpts",
descr: "Click to show the description slice",
slices: "Click to show all slices",
contents: "Click to show entire tiddler contents",
sliders: "Click to show tiddler contents in sliders",
noexcerpts: "Click to show entire title only"
},
tooDeepMessage: "* //sitemap too deep...//"
},
config: {
showTaggingCounts: true,
listOpts: {
// the first one will be the default
sortBy: ["title","modified","created"],
sortOrder: ["asc","desc"],
hideState: ["show","hide"],
listMode: ["normal","group","sitemap","commas"],
numCols: ["1","2","3","4","5","6"],
excerpts: ["noexcerpts","excerpts","descr","slices","contents","sliders"]
},
valuePrefix: "taggly.",
excludeTags: ["excludeLists","excludeTagging"],
excerptSize: 50,
excerptMarker: "/%"+"%/",
siteMapDepthLimit: 25
},
getTagglyOpt: function(title,opt) {
var val = store.getValue(title,this.config.valuePrefix+opt);
return val ? val : this.config.listOpts[opt][0];
},
setTagglyOpt: function(title,opt,value) {
// create it silently if it doesn't exist
if (!store.tiddlerExists(title)) {
store.saveTiddler(title,title,config.views.editor.defaultText.format([title]),config.options.txtUserName,new Date(),"");
// <<tagglyTagging expr:"...">> creates a tiddler to store its display settings
// Make those tiddlers less noticeable by tagging as excludeSearch and excludeLists
// Because we don't want to hide real tags, check that they aren't actually tags before doing so
// Also tag them as tagglyExpression for manageability
// (contributed by RA)
if (!store.getTaggedTiddlers(title).length) {
store.setTiddlerTag(title,true,"excludeSearch");
store.setTiddlerTag(title,true,"excludeLists");
store.setTiddlerTag(title,true,"tagglyExpression");
}
}
// if value is default then remove it to save space
return store.setValue(title, this.config.valuePrefix+opt, value == this.config.listOpts[opt][0] ? null : value);
},
getNextValue: function(title,opt) {
var current = this.getTagglyOpt(title,opt);
var pos = this.config.listOpts[opt].indexOf(current);
// supposed to automagically don't let cols cycle up past the number of items
// currently broken in some situations, eg when using an expression
// lets fix it later when we rewrite for jquery
// the columns thing should be jquery table manipulation probably
var limit = (opt == "numCols" ? store.getTaggedTiddlers(title).length : this.config.listOpts[opt].length);
var newPos = (pos + 1) % limit;
return this.config.listOpts[opt][newPos];
},
toggleTagglyOpt: function(title,opt) {
var newVal = this.getNextValue(title,opt);
this.setTagglyOpt(title,opt,newVal);
},
createListControl: function(place,title,type) {
var lingo = config.taggly.lingo;
var label;
var tooltip;
var onclick;
if ((type == "title" || type == "modified" || type == "created")) {
// "special" controls. a little tricky. derived from sortOrder and sortBy
label = lingo.labels[type];
tooltip = lingo.tooltips[type];
if (this.getTagglyOpt(title,"sortBy") == type) {
label += lingo.labels[this.getTagglyOpt(title,"sortOrder")];
onclick = function() {
config.taggly.toggleTagglyOpt(title,"sortOrder");
return false;
}
}
else {
onclick = function() {
config.taggly.setTagglyOpt(title,"sortBy",type);
config.taggly.setTagglyOpt(title,"sortOrder",config.taggly.config.listOpts.sortOrder[0]);
return false;
}
}
}
else {
// "regular" controls, nice and simple
label = lingo.labels[type == "numCols" ? type : this.getNextValue(title,type)];
tooltip = lingo.tooltips[type == "numCols" ? type : this.getNextValue(title,type)];
onclick = function() {
config.taggly.toggleTagglyOpt(title,type);
return false;
}
}
// hide button because commas don't have columns
if (!(this.getTagglyOpt(title,"listMode") == "commas" && type == "numCols"))
createTiddlyButton(place,label,tooltip,onclick,type == "hideState" ? "hidebutton" : "button");
},
makeColumns: function(orig,numCols) {
var listSize = orig.length;
var colSize = listSize/numCols;
var remainder = listSize % numCols;
var upperColsize = colSize;
var lowerColsize = colSize;
if (colSize != Math.floor(colSize)) {
// it's not an exact fit so..
upperColsize = Math.floor(colSize) + 1;
lowerColsize = Math.floor(colSize);
}
var output = [];
var c = 0;
for (var j=0;j<numCols;j++) {
var singleCol = [];
var thisSize = j < remainder ? upperColsize : lowerColsize;
for (var i=0;i<thisSize;i++)
singleCol.push(orig[c++]);
output.push(singleCol);
}
return output;
},
drawTable: function(place,columns,theClass) {
var newTable = createTiddlyElement(place,"table",null,theClass);
var newTbody = createTiddlyElement(newTable,"tbody");
var newTr = createTiddlyElement(newTbody,"tr");
for (var j=0;j<columns.length;j++) {
var colOutput = "";
for (var i=0;i<columns[j].length;i++)
colOutput += columns[j][i];
var newTd = createTiddlyElement(newTr,"td",null,"tagglyTagging"); // todo should not need this class
wikify(colOutput,newTd);
}
return newTable;
},
createTagglyList: function(place,title,isTagExpr) {
switch(this.getTagglyOpt(title,"listMode")) {
case "group": return this.createTagglyListGrouped(place,title,isTagExpr); break;
case "normal": return this.createTagglyListNormal(place,title,false,isTagExpr); break;
case "commas": return this.createTagglyListNormal(place,title,true,isTagExpr); break;
case "sitemap":return this.createTagglyListSiteMap(place,title,isTagExpr); break;
}
},
getTaggingCount: function(title,isTagExpr) {
// thanks to Doug Edmunds
if (this.config.showTaggingCounts) {
var tagCount = config.taggly.getTiddlers(title,'title',isTagExpr).length;
if (tagCount > 0)
return " ("+tagCount+")";
}
return "";
},
getTiddlers: function(titleOrExpr,sortBy,isTagExpr) {
return isTagExpr ? store.getTiddlersByTagExpr(titleOrExpr,sortBy) : store.getTaggedTiddlers(titleOrExpr,sortBy);
},
getExcerpt: function(inTiddlerTitle,title,indent) {
if (!indent)
indent = 1;
var displayMode = this.getTagglyOpt(inTiddlerTitle,"excerpts");
var t = store.getTiddler(title);
if (t && displayMode == "excerpts") {
var text = t.text.replace(/\n/," ");
var marker = text.indexOf(this.config.excerptMarker);
if (marker != -1) {
return " {{excerpt{<nowiki>" + text.substr(0,marker) + "</nowiki>}}}";
}
else if (text.length < this.config.excerptSize) {
return " {{excerpt{<nowiki>" + t.text + "</nowiki>}}}";
}
else {
return " {{excerpt{<nowiki>" + t.text.substr(0,this.config.excerptSize) + "..." + "</nowiki>}}}";
}
}
else if (t && displayMode == "contents") {
return "\n{{contents indent"+indent+"{\n" + t.text + "\n}}}";
}
else if (t && displayMode == "sliders") {
return "<slider slide>\n{{contents{\n" + t.text + "\n}}}\n</slider>";
}
else if (t && displayMode == "descr") {
var descr = store.getTiddlerSlice(title,'Description');
return descr ? " {{excerpt{" + descr + "}}}" : "";
}
else if (t && displayMode == "slices") {
var result = "";
var slices = store.calcAllSlices(title);
for (var s in slices)
result += "|%0|<nowiki>%1</nowiki>|\n".format([s,slices[s]]);
return result ? "\n{{excerpt excerptIndent{\n" + result + "}}}" : "";
}
return "";
},
notHidden: function(t,inTiddler) {
if (typeof t == "string")
t = store.getTiddler(t);
return (!t || !t.tags.containsAny(this.config.excludeTags) ||
(inTiddler && this.config.excludeTags.contains(inTiddler)));
},
// this is for normal and commas mode
createTagglyListNormal: function(place,title,useCommas,isTagExpr) {
var list = config.taggly.getTiddlers(title,this.getTagglyOpt(title,"sortBy"),isTagExpr);
if (this.getTagglyOpt(title,"sortOrder") == "desc")
list = list.reverse();
var output = [];
var first = true;
for (var i=0;i<list.length;i++) {
if (this.notHidden(list[i],title)) {
var countString = this.getTaggingCount(list[i].title);
var excerpt = this.getExcerpt(title,list[i].title);
if (useCommas)
output.push((first ? "" : ", ") + "[[" + list[i].title + "]]" + countString + excerpt);
else
output.push("*[[" + list[i].title + "]]" + countString + excerpt + "\n");
first = false;
}
}
return this.drawTable(place,
this.makeColumns(output,useCommas ? 1 : parseInt(this.getTagglyOpt(title,"numCols"))),
useCommas ? "commas" : "normal");
},
// this is for the "grouped" mode
createTagglyListGrouped: function(place,title,isTagExpr) {
var sortBy = this.getTagglyOpt(title,"sortBy");
var sortOrder = this.getTagglyOpt(title,"sortOrder");
var list = config.taggly.getTiddlers(title,sortBy,isTagExpr);
if (sortOrder == "desc")
list = list.reverse();
var leftOvers = []
for (var i=0;i<list.length;i++)
leftOvers.push(list[i].title);
var allTagsHolder = {};
for (var i=0;i<list.length;i++) {
for (var j=0;j<list[i].tags.length;j++) {
if (list[i].tags[j] != title) { // not this tiddler
if (this.notHidden(list[i].tags[j],title)) {
if (!allTagsHolder[list[i].tags[j]])
allTagsHolder[list[i].tags[j]] = "";
if (this.notHidden(list[i],title)) {
allTagsHolder[list[i].tags[j]] += "**[["+list[i].title+"]]"
+ this.getTaggingCount(list[i].title) + this.getExcerpt(title,list[i].title) + "\n";
leftOvers.setItem(list[i].title,-1); // remove from leftovers. at the end it will contain the leftovers
}
}
}
}
}
var allTags = [];
for (var t in allTagsHolder)
allTags.push(t);
var sortHelper = function(a,b) {
if (a == b) return 0;
if (a < b) return -1;
return 1;
};
allTags.sort(function(a,b) {
var tidA = store.getTiddler(a);
var tidB = store.getTiddler(b);
if (sortBy == "title") return sortHelper(a,b);
else if (!tidA && !tidB) return 0;
else if (!tidA) return -1;
else if (!tidB) return +1;
else return sortHelper(tidA[sortBy],tidB[sortBy]);
});
var leftOverOutput = "";
for (var i=0;i<leftOvers.length;i++)
if (this.notHidden(leftOvers[i],title))
leftOverOutput += "*[["+leftOvers[i]+"]]" + this.getTaggingCount(leftOvers[i]) + this.getExcerpt(title,leftOvers[i]) + "\n";
var output = [];
if (sortOrder == "desc")
allTags.reverse();
else if (leftOverOutput != "")
// leftovers first...
output.push(leftOverOutput);
for (var i=0;i<allTags.length;i++)
if (allTagsHolder[allTags[i]] != "")
output.push("*[["+allTags[i]+"]]" + this.getTaggingCount(allTags[i]) + this.getExcerpt(title,allTags[i]) + "\n" + allTagsHolder[allTags[i]]);
if (sortOrder == "desc" && leftOverOutput != "")
// leftovers last...
output.push(leftOverOutput);
return this.drawTable(place,
this.makeColumns(output,parseInt(this.getTagglyOpt(title,"numCols"))),
"grouped");
},
// used to build site map
treeTraverse: function(title,depth,sortBy,sortOrder,isTagExpr) {
var list = config.taggly.getTiddlers(title,sortBy,isTagExpr);
if (sortOrder == "desc")
list.reverse();
var indent = "";
for (var j=0;j<depth;j++)
indent += "*"
var childOutput = "";
if (depth > this.config.siteMapDepthLimit)
childOutput += indent + this.lingo.tooDeepMessage;
else
for (var i=0;i<list.length;i++)
if (list[i].title != title)
if (this.notHidden(list[i].title,this.config.inTiddler))
childOutput += this.treeTraverse(list[i].title,depth+1,sortBy,sortOrder,false);
if (depth == 0)
return childOutput;
else
return indent + "[["+title+"]]" + this.getTaggingCount(title) + this.getExcerpt(this.config.inTiddler,title,depth) + "\n" + childOutput;
},
// this is for the site map mode
createTagglyListSiteMap: function(place,title,isTagExpr) {
this.config.inTiddler = title; // nasty. should pass it in to traverse probably
var output = this.treeTraverse(title,0,this.getTagglyOpt(title,"sortBy"),this.getTagglyOpt(title,"sortOrder"),isTagExpr);
return this.drawTable(place,
this.makeColumns(output.split(/(?=^\*\[)/m),parseInt(this.getTagglyOpt(title,"numCols"))), // regexp magic
"sitemap"
);
},
macros: {
tagglyTagging: {
handler: function (place,macroName,params,wikifier,paramString,tiddler) {
var parsedParams = paramString.parseParams("tag",null,true);
var refreshContainer = createTiddlyElement(place,"div");
// do some refresh magic to make it keep the list fresh - thanks Saq
refreshContainer.setAttribute("refresh","macro");
refreshContainer.setAttribute("macroName",macroName);
var tag = getParam(parsedParams,"tag");
var expr = getParam(parsedParams,"expr");
if (expr) {
refreshContainer.setAttribute("isTagExpr","true");
refreshContainer.setAttribute("title",expr);
refreshContainer.setAttribute("showEmpty","true");
}
else {
refreshContainer.setAttribute("isTagExpr","false");
if (tag) {
refreshContainer.setAttribute("title",tag);
refreshContainer.setAttribute("showEmpty","true");
}
else {
refreshContainer.setAttribute("title",tiddler.title);
refreshContainer.setAttribute("showEmpty","false");
}
}
this.refresh(refreshContainer);
},
refresh: function(place) {
var title = place.getAttribute("title");
var isTagExpr = place.getAttribute("isTagExpr") == "true";
var showEmpty = place.getAttribute("showEmpty") == "true";
removeChildren(place);
addClass(place,"tagglyTagging");
var countFound = config.taggly.getTiddlers(title,'title',isTagExpr).length
if (countFound > 0 || showEmpty) {
var lingo = config.taggly.lingo;
config.taggly.createListControl(place,title,"hideState");
if (config.taggly.getTagglyOpt(title,"hideState") == "show") {
createTiddlyElement(place,"span",null,"tagglyLabel",
isTagExpr ? lingo.labels.exprLabel.format([title]) : lingo.labels.label.format([title]));
config.taggly.createListControl(place,title,"title");
config.taggly.createListControl(place,title,"modified");
config.taggly.createListControl(place,title,"created");
config.taggly.createListControl(place,title,"listMode");
config.taggly.createListControl(place,title,"excerpts");
config.taggly.createListControl(place,title,"numCols");
config.taggly.createTagglyList(place,title,isTagExpr);
if (countFound == 0 && showEmpty)
createTiddlyElement(place,"div",null,"tagglyNoneFound",lingo.labels.noneFound);
}
}
}
}
},
// todo fix these up a bit
styles: [
"/*{{{*/",
"/* created by TagglyTaggingPlugin */",
".tagglyTagging { padding-top:0.5em; }",
".tagglyTagging li.listTitle { display:none; }",
".tagglyTagging ul {",
" margin-top:0px; padding-top:0.5em; padding-left:2em;",
" margin-bottom:0px; padding-bottom:0px;",
"}",
".tagglyTagging { vertical-align: top; margin:0px; padding:0px; }",
".tagglyTagging table { margin:0px; padding:0px; }",
".tagglyTagging .button { visibility:hidden; margin-left:3px; margin-right:3px; }",
".tagglyTagging .button, .tagglyTagging .hidebutton {",
" color:[[ColorPalette::TertiaryLight]]; font-size:90%;",
" border:0px; padding-left:0.3em;padding-right:0.3em;",
"}",
".tagglyTagging .button:hover, .hidebutton:hover, ",
".tagglyTagging .button:active, .hidebutton:active {",
" border:0px; background:[[ColorPalette::TertiaryPale]]; color:[[ColorPalette::TertiaryDark]];",
"}",
".selected .tagglyTagging .button { visibility:visible; }",
".tagglyTagging .hidebutton { color:[[ColorPalette::Background]]; }",
".selected .tagglyTagging .hidebutton { color:[[ColorPalette::TertiaryLight]] }",
".tagglyLabel { color:[[ColorPalette::TertiaryMid]]; font-size:90%; }",
".tagglyTagging ul {padding-top:0px; padding-bottom:0.5em; margin-left:1em; }",
".tagglyTagging ul ul {list-style-type:disc; margin-left:-1em;}",
".tagglyTagging ul ul li {margin-left:0.5em; }",
".editLabel { font-size:90%; padding-top:0.5em; }",
".tagglyTagging .commas { padding-left:1.8em; }",
"/* not technically tagglytagging but will put them here anyway */",
".tagglyTagged li.listTitle { display:none; }",
".tagglyTagged li { display: inline; font-size:90%; }",
".tagglyTagged ul { margin:0px; padding:0px; }",
".excerpt { color:[[ColorPalette::TertiaryDark]]; }",
".excerptIndent { margin-left:4em; }",
"div.tagglyTagging table,",
"div.tagglyTagging table tr,",
"td.tagglyTagging",
" {border-style:none!important; }",
".tagglyTagging .contents { border-bottom:2px solid [[ColorPalette::TertiaryPale]]; padding:0 1em 1em 0.5em;",
" margin-bottom:0.5em; }",
".tagglyTagging .indent1 { margin-left:3em; }",
".tagglyTagging .indent2 { margin-left:4em; }",
".tagglyTagging .indent3 { margin-left:5em; }",
".tagglyTagging .indent4 { margin-left:6em; }",
".tagglyTagging .indent5 { margin-left:7em; }",
".tagglyTagging .indent6 { margin-left:8em; }",
".tagglyTagging .indent7 { margin-left:9em; }",
".tagglyTagging .indent8 { margin-left:10em; }",
".tagglyTagging .indent9 { margin-left:11em; }",
".tagglyTagging .indent10 { margin-left:12em; }",
".tagglyNoneFound { margin-left:2em; color:[[ColorPalette::TertiaryMid]]; font-size:90%; font-style:italic; }",
"/*}}}*/",
""].join("\n"),
init: function() {
merge(config.macros,this.macros);
config.shadowTiddlers["TagglyTaggingStyles"] = this.styles;
store.addNotification("TagglyTaggingStyles",refreshStyles);
}
};
config.taggly.init();
//}}}
/***
InlineSlidersPlugin
By Saq Imtiaz
http://tw.lewcid.org/sandbox/#InlineSlidersPlugin
// syntax adjusted to not clash with NestedSlidersPlugin
// added + syntax to start open instead of closed
***/
//{{{
config.formatters.unshift( {
name: "inlinesliders",
// match: "\\+\\+\\+\\+|\\<slider",
match: "\\<slider",
// lookaheadRegExp: /(?:\+\+\+\+|<slider) (.*?)(?:>?)\n((?:.|\n)*?)\n(?:====|<\/slider>)/mg,
lookaheadRegExp: /(?:<slider)(\+?) (.*?)(?:>)\n((?:.|\n)*?)\n(?:<\/slider>)/mg,
handler: function(w) {
this.lookaheadRegExp.lastIndex = w.matchStart;
var lookaheadMatch = this.lookaheadRegExp.exec(w.source)
if(lookaheadMatch && lookaheadMatch.index == w.matchStart ) {
var btn = createTiddlyButton(w.output,lookaheadMatch[2] + " "+"\u00BB",lookaheadMatch[2],this.onClickSlider,"button sliderButton");
var panel = createTiddlyElement(w.output,"div",null,"sliderPanel");
panel.style.display = (lookaheadMatch[1] == '+' ? "block" : "none");
wikify(lookaheadMatch[3],panel);
w.nextMatch = lookaheadMatch.index + lookaheadMatch[0].length;
}
},
onClickSlider : function(e) {
if(!e) var e = window.event;
var n = this.nextSibling;
n.style.display = (n.style.display=="none") ? "block" : "none";
return false;
}
});
//}}}
Talend TAC (Talend Administration Center)
https://www.google.com/search?q=talend+tac&oq=talend+TAC&aqs=chrome.0.0l5j69i65.2216j0j1&sourceid=chrome&ie=UTF-8
''W8 forms''
http://en.wikipedia.org/wiki/IRS_tax_forms
http://www.modernstreet.com/useful-tips/how-to-fill-up-the-w8ben-form/
http://www.proz.com/forum/money_matters/62056-what_is_the_form_w8ben_for.html
http://www.doughroller.net/taxes/what-is-a-w-8-form/
! Buzzwords
http://www.robietherobot.com/buzzword.htm
http://www.dack.com/web/bullshit.html
http://www.1728.org/buzzword.htm
http://unsuck-it.com/browse/v-z/
http://www.allowe.com/Humor/book/Buzz%20Word%20Translator.htm
http://oilpatchwriting.wordpress.com/2011/01/26/25-most-annoying-buzzwords/
http://www3.telus.net/linguisticsissues/buzz.html
http://hemantoracledba.blogspot.com/2008/05/temporary-segments-in-dataindex.html
https://tensorflow.rstudio.com/
https://github.com/rstudio/tensorflow
https://cloud.google.com/ml-engine/
http://code.google.com/p/puttycyg/ <-- nice local/putty console
http://sourceforge.net/projects/console/ <-- nice gui but I would prefer terminator
search for "terminator vi problem", then add this line to your cygwin .bash_profile to fix the vim issue over SSH
http://software.jessies.org/terminator/faq.html#heading_toc_j_6
http://c2.com/cgi/wiki?BetterCygwinTerminal <-- good stuff
{{{
export TERM=ansi
}}}
http://software.jessies.org/terminator/#downloads
http://www.legendu.net/misc/blog/install-terminator-on-cygwin/
! usage
<<<
terminator - CMD+F searches text on screen
Supports regular expressions!
When you select text with the mouse, hold the option or cmd key down (don't remember which) while you move the mouse - it switches to rectangular/block selection
Very useful for copying pieces of output or highlighting stuff like branches in exec plans when demoing
Also full terminal output logs are stored in .terminator/logs ...
This has been useful a few times
the logs don’t age out... but I’ve removed them once per year
You can disable that too in the settings
<<<
http://www.ubuntugeek.com/iphone-tethering-on-ubuntu-9-10-karmic.html
http://ivkin.net/2010/05/tethering-ubuntu-lucid-lynx-and-iphone-os-3-1/ <-- worked for migs
http://jkeating.livejournal.com/75270.html
https://mknowles.com.au/wordpress/2010/03/31/iphone-usb-tethering-with-fedora-12/
http://c10m.robertgraham.com/p/blog-page.html
http://www.youtube.com/watch?feature=player_embedded&v=73XNtI0w7jA
http://highscalability.com/blog/2013/5/13/the-secret-to-10-million-concurrent-connections-the-kernel-i.html
ppt http://goo.gl/nw0wa
Thunderbird and Zimbra
http://wiki.zimbra.com/wiki/Thunderbird_%26_Lightning
<<showtoc>>
Some examples of good tiddly templates..
* [[PerformanceTools-Database]]
* [[Orion Users Guide]]
* [[RHEV 2.1 features]]
* [[AAS investigation]]
* [[ASMtoEMCPowerDevices]]
!CheatSheet
|''Bold''|{{{''text''}}}|
|__Uline__|{{{__text__}}}|
|//Italic//|{{{//text//}}}|
|Bullets|{{{*text}}}|
|No.s|{{{#text}}}|
|Heads|{{{!text}}}|
|Table|{{{|t|t|}}}|
|Quote|{{{<<<>>>}}}|
|{{{Mono}}}|{{{{{{text}}}}}}|
|[[Tid]]|{{{[[Text]]}}}|
|[[Help|http://www.blogjones.com/TiddlyWikiTutorial.html#EasyToEdit]]|{{{[[t|url]]}}}|
''more here'' http://www.tiddlywiki.com/#Reference
! Some good source of tips on formatting:
!! images
@@To reference a photo on a Tiddler@@
[img(30%,30%)[ picturename | http://www.evernote.com/shard/s48/sh/97a2722a-d7a6-4afa-9c1a-09d5cfbf3a64/1f0efcf6196323cf7e29542c86d15734/res/a23899a1-cb48-46ef-b629-eba8626905e9/IMG_3118.JPG]]
-- ''http'' URL
{{{
-- full image
[img[ <URL of photo> ]]
-- resized
[img(30%,30%)[ <URL of photo> ]]
}}}
-- ''local file'' URL, must place the file to the same directory of Tiddlywiki
{{{
[img(30%,30%)[ images/Exadata Provisioning Worksheet.png ]]
}}}
[img(30%,30%)[ images/Exadata Provisioning Worksheet.png ]]
http://www.mail-archive.com/tiddlywiki@googlegroups.com/msg05763.html
http://www.tiddlytools.com/#ImageSizePlugin
http://groups.google.com/group/tiddlywiki/browse_thread/thread/48ed226457b795c1
http://groups.google.com/group/TiddlyWiki/browse_thread/thread/beefac7bfc13f70f
!! Escape Characters
@@Escape Characters@@
http://tiddlywiki.org/wiki/Escape
!! SQL scripts
@@SQL scripts@@
put it in
"""
{{{
select * from dual;
}}}
"""
sample output
{{{
select * from dual;
}}}
!! links, URL
@@Links@@
{{{
Internal/external link: [[text|WikiWord or URL]]
Image link: [img[picturename|path/to/picture.jpg]]
File link: [[text|path/to/file.pdf]]
}}}
! Header
! Bullets
* SQLTXPLAIN (Oracle Extended Explain Plan Statistics) – Provides details about all schema objects in which the SQL statement depends on.
* Orasrp (Oracle Session Resource Planner) – Builds complete detailed session profile
* gxplan - Visualization of explain plan
* 10053 viewer - http://jonathanlewis.wordpress.com/2010/04/30/10053-viewer/
! Indented text
''Remember I mentioned this on the blog post above.. ?''
<<<
"So what’s the effect? mm… on a high CPU activity period you’ll notice that there will be a higher AAS on the Top Activity Page compared to Performance Page. Simply because ASH samples every second and it does that quickly on every active session (the only way to see CPU usage realtime) while the time model CPU although it updates quicker (5secs I think) than v$sysstat “CPU used by this session” there could still be some lag time and it will still be based on Time Statistics (one of two ways to calculate AAS) which could be affected by averages."
<<<
I'll expound on that with test cases included.. ''see below!''
! Dashed line
------------------------------------------------------------------------------------------------
''num_large'' (# of parallel sessions)
''num_streamIO'' (# of PARALLEL hint) increase this parameter in order to simulate parallel execution for individual operations. Specify a DOP that you plan to use for your database operations, a good starting point for DOP is ''# of CPU x Parallel threads per CPU''
------------------------------------------------------------------------------------------------
! Cascaded text
> - ISO library
>> better if you do manual copy.. a lot faster
> - Snapshots
>> - shutdown the VM first before doing snapshots
>> - then, you can preview... then, commit the current state or undo
> - Templates
>> - shutdown before creating templates
> - Pools
! Number, Bullet, Cascaded Image, Cascaded SQL
!!!! 2) From DBA_HIST_ACTIVE_SESS_HISTORY
* In the case of DBA_HIST_ ''sample count'' is sample count*10 since they only write out 1/10 samples
<<<
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZtyRcp7m_I/AAAAAAAABLI/sLqztbLY3Mw/AASFromDBA_HIST.png]]
{{{
select * from dual;
}}}
<<<
! Cascaded Image
* AWR
<<<
per server
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TaFDmcOnhBI/AAAAAAAABOk/lxo8_tbLqX4/powerdevices5-awr.png]]
> per instance
> [img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TaFKbFENV1I/AAAAAAAABPE/nUCFo_HOjHY/powerdevices5-awr2.png]]
>> awr output on each instance
>> [img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TaFKbZB6nnI/AAAAAAAABPI/8MVhDN5Q_rI/powerdevices5-awr3.png]]
<<<
! Cascaded bullets
On the Excel sheet, you have to fill in the following sections
* From RDA
** ASM Library Information
** ASM Library Disk Information
** Disk Partitions
** Operating System Setup->Operating System Packages
** Operating System Setup->Disk Drives->Disk Mounts
** Oracle Cluster Registry (Cluster -> Cluster Information -> ocrcheck)
* From ''powermt'' command
** Logical Device IDs and names
* From sysreport
** raw devices (possible for OCR and Voting Disk)
** fstab (check for OCFS2 mounts)
* Double check from OS commands
** Voting Disk (''crsctl query css votedisk'')
** ls -l /dev/
** /etc/init.d/oracleasm querydisk <device_name>
! Bullet, Scripts, Cascaded Bullet
* Run this query to check if it's recognized as ''FOREIGN'' or ''CANDIDATE''
{{{
set lines 400
col name format a20
col label format a20
col path format a20
col redundancy format a20
select a.group_number, a.name, a.header_status, a.mount_status, a.state, a.total_mb, a.free_mb, a.label, path, a.redundancy
from v$asm_disk a
order by 1,2;
GROUP_NUMBER NAME HEADER_STATU STATE TOTAL_MB FREE_MB LABEL PATH REDUNDANCY
------------ -------------------- ------------ -------- ---------- ---------- -------------------- -------------------- --------------------
}}}
* I've done some precautions on my data gathering by checking on the ''fstab'' and ''raw devices config'' and found out that ''there are no pointers to the two devices''..
** I have obsessive-compulsive tendencies, just to make sure that these devices are not used by some services. If these EMC power devices were accidentally used for something else, let's say as a filesystem.. Oracle will still allow you to do the ADD/DROP operation on these devices, wiping out all the data on them!
this is tiddlywiki version <<version>>
<<showtoc>>
! TiddlyWiki Documentation
http://db.tt/VhwdfmiJ
! HOWTO install a PLUGIN
http://www.wikihow.com/Install-a-Tiddlywiki-Plugin
http://mnteractive.com/archive/how-to-install-a-tiddlywiki-plugin/ , copy source, save, reload, link to systemConfig
! Changes done on this Tiddly
!! 0) go to systemConfig and copy/paste all plugins code when migrating/creating a new tiddlywiki
!! 1) Look and Feel
go to Tags -> systemPalette -> MptwSmoke -> then add this line -> click Apply
<<<
Name: MptwSmoke
Background: #fff
Foreground: #000
PrimaryPale: #F5F5F5
PrimaryLight: #228B22
PrimaryMid: #111
PrimaryDark: #000
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
<<<
!! 2) Hide the right hand side bar
http://tiddlywiki.org/wiki/How_To/Setting_Up_TiddlyWiki_As_a_Website#Hide_right_side_bar
http://tiddlywiki.org/wiki/Import
go to SiteTitle tiddler then put this
{{{
Karl Arao'<<tiddler ToggleRightSidebar with: "s">> TiddlyWiki
}}}
go to SiteSubtitle for subtitle
{{{
has moved to -> http://KARLARAO.WIKI
}}}
!! 3) Tag Cloud
Just add this line on a Tiddler... enclose it with """<< ... >>"""
''cloud tags action:popup systemConfig systemPalette systemServer systemTheme TaskPackage TiddlyWiki TidIDEPackage transclusion ScrollbarPackage NavigationPackage MediaPackage IconPackage DiscoveryPackage bookmarklet''
!! 4) Check out the following to edit the look and feel
DefaultTiddlers
MainMenu
{{{
the default tiddlers
[[RSS & Search]] [[TagCloud]]
}}}
{{{
the current main menu
[[About]] [[RSS & Search]] [[TagCloud]] [[Oracle]] [[.MOSNotes]] [[OraclePerformance]] [[Benchmark]] [[Capacity Planning]] [[Hardware and OS]] [[EngineeredSystems]] [[Exadata]] [[HA]] [[PerformanceTools]] [[Troubleshooting & Internals]] [[SQL Tuning]] [[EnterpriseManager]] [[DataWarehouse]] [[Linux]] [[CloudComputing]] [[CodeNinja]] [[etc..]]
}}}
!! 5) Install Table of Contents plugin
see this https://groups.google.com/forum/#!topic/tiddlywiki/96ollIZcJMk, and this for the source http://devpad.tiddlyspot.com/#DcTableOfContentsPlugin
just reference it with
{{{
<<showtoc>>
}}}
note that you can only do 5 levels deep. A to E, or 1 - 5
!! 6) Setup RSS
[img(40%,40%)[ http://i.imgur.com/1lh6ER7.png ]]
[img(30%,30%)[ http://i.imgur.com/C1t0hrJ.png ]]
[img(30%,30%)[ http://i.imgur.com/iDhAFsS.png ]]
[img(30%,30%)[ http://i.imgur.com/Im7SJKC.png ]]
! Some useful links on TiddlyWiki
http://jaybyjayfresh.com/2008/01/23/tiddlytemplating-using-tiddlywiki-to-create-webpages/
http://faq.tiddlyspot.com/
http://www.alisonsinclair.ca/blog/archives/29
http://en.wikipedia.org/wiki/TiddlyWiki
http://en.wikipedia.org/wiki/Personal_wiki
http://en.wikipedia.org/wiki/Getting_Things_Done
http://en.wikipedia.org/wiki/Comparison_of_wiki_software#cite_note-33
http://en.wikipedia.org/wiki/List_of_wiki_software
http://tiddlywiki.org/wiki/TiddlyWiki_Resources
http://www.giffmex.org/twfortherestofus.html
http://tiddlythemes.com/#Home
http://tiddlyspot.blogspot.com/2007/09/tiddlythemes-images-now-work-on-your.html
http://tiddlywiki.org/wiki/How_To/Setting_Up_TiddlyWiki_As_a_Website
http://parand.com/say/index.php/2006/01/06/howto-using-tiddlywiki/
http://www.youtube.com/watch?v=sgQqP1_lZG4
! Some caveat on Linux/Windows and other browsers
__''Linux''__
For Linux, your only option is Firefox.. that's it. Even with tiddlysaver.jar and the Chrome trick, it doesn't seem to work.
__''Windows''__
For Windows, you can stay with Firefox.. but for netbooks that seems to be slow. Good news is, the tiddlysaver.jar is working ;) so you can use Google Chrome! But Chrome can't save updates on the tiddlyspot site.. so use Firefox for end-of-day saving to karlarao.tiddlyspot.com
http://tiddlywiki.org/wiki/How_To/Configure_your_browser_to_allow_saves_to_disk
http://tiddlywiki.org/wiki/Google_Chrome <-- doesn't seem to work!
BTW, I tried Opera, Safari, IE.. they are all crap..
http://www.cyclismo.org/tutorial/R/time.html
https://stat.ethz.ch/R-manual/R-devel/library/base/html/as.POSIXlt.html
http://mwidlake.wordpress.com/2010/07/14/how-often-is-vsys_time_model-updated/
Supported Platforms/Operating Systems, Processor types, and Compilers in TimesTen 7.0.5.0
Doc ID: 605755.1
Oracle TimesTen In-Memory Database : Master Note Parent (Doc ID 1088128.1)
TimesTen Product Documentation Library : Master Note Child (Doc ID 806197.1)
http://www.oracle.com/technetwork/products/timesten/overview/index.html?origref=http://www.oracle.com/technetwork/database/options/imdb-cache/index.html
http://yongjun-jiao.blogspot.com/2010/08/timesten-oracle-database-saviour-for.html
http://en.wikipedia.org/wiki/Database_caching
''Using Oracle In-Memory Database Cache to Accelerate the Oracle Database'' http://www.oracle.com/technetwork/database/performance/wp-imdb-cache-130299.pdf
''TimesTen FAQ'' http://www.oracle.com/technetwork/products/timesten/faq-091526.html
http://edn.embarcadero.com/article/28886 <-- weird giving subsecond value
http://forums.oracle.com/forums/thread.jspa?threadID=1117064 <-- helpful post by user11268895
http://kr.forums.oracle.com/forums/thread.jspa?messageID=2517203
http://bytes.com/topic/oracle/answers/65116-help-need-avg-timestamp1-timestamp2-get-type-error
http://mikerault.blogspot.com/2006/07/oracle-timestamp-math.html
http://www.forumtopics.com/busobj/viewtopic.php?t=132019&start=0&postdays=0&postorder=asc&sid=14bcf84697b244aca41a387791d4b729
http://www.excelforum.com/excel-worksheet-functions/573306-how-to-convert-date-time-to-seconds.html
http://en.allexperts.com/q/Excel-1059/2009/4/excel-formula-convert-min-1.htm
http://www.mrexcel.com/forum/showthread.php?t=2641
Do Date Arithmetic on Dates
http://www.appsdba.com/blog/?p=278
/%
!info
|Name|ToggleRightSidebar|
|Source|http://www.TiddlyTools.com/#ToggleRightSidebar|
|Version|2.0.0|
|Author|Eric Shulman|
|License|http://www.TiddlyTools.com/#LegalStatements|
|~CoreVersion|2.1|
|Type|transclusion|
|Description|show/hide right sidebar (SideBarOptions)|
Usage
<<<
{{{
<<tiddler ToggleRightSidebar>>
<<tiddler ToggleRightSidebar with: label tooltip>>
}}}
Try it: <<tiddler ToggleRightSidebar##show
with: {{config.options.chkShowRightSidebar?'►':'◄'}}>>
<<<
Configuration:
<<<
{{{
config.options.chkShowRightSidebar (true)
config.options.txtToggleRightSideBarLabelShow (◄)
config.options.txtToggleRightSideBarLabelHide (►)
}}}
<<<
!end
!show
<<tiddler {{
var co=config.options;
if (co.chkShowRightSidebar===undefined) co.chkShowRightSidebar=true;
var sb=document.getElementById('sidebar');
var da=document.getElementById('displayArea');
if (sb) {
sb.style.display=co.chkShowRightSidebar?'block':'none';
da.style.marginRight=co.chkShowRightSidebar?'':'1em';
}
'';}}>><html><nowiki><a href='javascript:;' title="$2"
onmouseover="
this.href='javascript:void(eval(decodeURIComponent(%22(function(){try{('
+encodeURIComponent(encodeURIComponent(this.onclick))
+')()}catch(e){alert(e.description?e.description:e.toString())}})()%22)))';"
onclick="
var co=config.options;
var opt='chkShowRightSidebar';
var show=co[opt]=!co[opt];
var sb=document.getElementById('sidebar');
var da=document.getElementById('displayArea');
if (sb) {
sb.style.display=show?'block':'none';
da.style.marginRight=show?'':'1em';
}
saveOptionCookie(opt);
var labelShow=co.txtToggleRightSideBarLabelShow||'◄';
var labelHide=co.txtToggleRightSideBarLabelHide||'►';
if (this.innerHTML==labelShow||this.innerHTML==labelHide)
this.innerHTML=show?labelHide:labelShow;
this.title=(show?'hide':'show')+' right sidebar';
var sm=document.getElementById('storyMenu');
if (sm) config.refreshers.content(sm);
return false;
">$1</a></html>
!end
%/<<tiddler {{
var src='ToggleRightSidebar';
src+(tiddler&&tiddler.title==src?'##info':'##show');
}} with: {{
var co=config.options;
var labelShow=co.txtToggleRightSideBarLabelShow||'◄';
var labelHide=co.txtToggleRightSideBarLabelHide||'►';
'$1'!='$'+'1'?'$1':(co.chkShowRightSidebar?labelHide:labelShow);
}} {{
var tip=(config.options.chkShowRightSidebar?'hide':'show')+' right sidebar';
'$2'!='$'+'2'?'$2':tip;
}}>>
/***
|Name:|ToggleTagPlugin|
|Description:|Makes a checkbox which toggles a tag in a tiddler|
|Version:|3.1.0 ($Rev: 4907 $)|
|Date:|$Date: 2008-05-13 03:15:46 +1000 (Tue, 13 May 2008) $|
|Source:|http://mptw.tiddlyspot.com/#ToggleTagPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
!!Usage
{{{<<toggleTag }}}//{{{TagName TiddlerName LabelText}}}//{{{>>}}}
* TagName - the tag to be toggled, default value "checked"
* TiddlerName - the tiddler to toggle the tag in, default value the current tiddler
* LabelText - the text (gets wikified) to put next to the check box, default value is '{{{[[TagName]]}}}' or '{{{[[TagName]] [[TiddlerName]]}}}'
(If a parameter is '.' then the default will be used)
* TouchMod flag - if non-empty then touch the tiddler's mod date. Note, you can set config.toggleTagAlwaysTouchModDate to always touch the mod date
!!Examples
|Code|Description|Example|h
|{{{<<toggleTag>>}}}|Toggles the default tag (checked) in this tiddler|<<toggleTag>>|
|{{{<<toggleTag TagName>>}}}|Toggles the TagName tag in this tiddler|<<toggleTag TagName>>|
|{{{<<toggleTag TagName TiddlerName>>}}}|Toggles the TagName tag in the TiddlerName tiddler|<<toggleTag TagName TiddlerName>>|
|{{{<<toggleTag TagName TiddlerName 'click me'>>}}}|Same but with custom label|<<toggleTag TagName TiddlerName 'click me'>>|
|{{{<<toggleTag . . 'click me'>>}}}|dot means use default value|<<toggleTag . . 'click me'>>|
!!Notes
* If TiddlerName doesn't exist it will be silently created
* Set label to '-' to specify no label
* See also http://mgtd-alpha.tiddlyspot.com/#ToggleTag2
!!Known issues
* Doesn't smoothly handle the case where you toggle a tag in a tiddler that is currently open for editing
* Should convert to use named params
***/
//{{{
if (config.toggleTagAlwaysTouchModDate == undefined) config.toggleTagAlwaysTouchModDate = false;
merge(config.macros,{
toggleTag: {
createIfRequired: true,
shortLabel: "[[%0]]",
longLabel: "[[%0]] [[%1]]",
handler: function(place,macroName,params,wikifier,paramString,tiddler) {
var tiddlerTitle = tiddler ? tiddler.title : '';
var tag = (params[0] && params[0] != '.') ? params[0] : "checked";
var title = (params[1] && params[1] != '.') ? params[1] : tiddlerTitle;
var defaultLabel = (title == tiddlerTitle ? this.shortLabel : this.longLabel);
var label = (params[2] && params[2] != '.') ? params[2] : defaultLabel;
var touchMod = (params[3] && params[3] != '.') ? params[3] : "";
label = (label == '-' ? '' : label); // dash means no label
var theTiddler = (title == tiddlerTitle ? tiddler : store.getTiddler(title));
var cb = createTiddlyCheckbox(place, label.format([tag,title]), theTiddler && theTiddler.isTagged(tag), function(e) {
if (!store.tiddlerExists(title)) {
if (config.macros.toggleTag.createIfRequired) {
var content = store.getTiddlerText(title); // just in case it's a shadow
store.saveTiddler(title,title,content?content:"",config.options.txtUserName,new Date(),null);
}
else
return false;
}
if ((touchMod != "" || config.toggleTagAlwaysTouchModDate) && theTiddler)
theTiddler.modified = new Date();
store.setTiddlerTag(title,this.checked,tag);
return true;
});
}
}
});
//}}}
ANOTHER EXAMPLE: 2 NODE RAC BATCH LOAD SCENARIO
------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 630ycmzhv9w1v was executed on node1 with no parallelism.. so I'm not seeing any instance of this SQL on node2
while...
SQL_ID 0cxy506jng0jt was executed on node1 with parallelism.. and I do see this SQL on node2, but its exec is 0 (zero) and its elapsed per exec is 0 (zero)
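To see this side by side without eyeballing the two reports, here's a minimal sketch that pulls the per-instance deltas for a SQL_ID straight from the AWR base table (assuming the usual DBA_HIST views; column list trimmed to the essentials):
{{{
-- per-instance exec and elapsed deltas for one SQL_ID
select snap_id, instance_number, plan_hash_value,
       executions_delta execs,
       round(elapsed_time_delta/1e6,2) ela_sec,
       px_servers_execs_delta px_execs
from   dba_hist_sqlstat
where  sql_id = '0cxy506jng0jt'
order  by snap_id, instance_number;
}}}
On the QC node you'd expect executions_delta = 1, while on the other node it stays 0 even though the PX slaves burned time there.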
{{{
SQL> select count(*) from parallel_t1;
AWR Top SQL Report
COUNT(*)
----------
0
SQL> select sum(bytes)/1024/1024 from dba_segments where segment_name = 'PARALLEL_T1';
AWR Top SQL Report
SUM(BYTES)/1024/1024
--------------------
.0625
SQL> alter table parallel_t1 parallel;
Table altered.
SQL>
SQL>
SQL>
SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
SQL> alter session enable parallel dml;
Session altered.
}}}
{{{
SQL> insert /*+ APPEND */ into parallel_t1
select level, 'x'
from dual
connect by level <= 1000000
;
2 3 4 5
1000000 rows created.
SQL> SQL>
SQL>
SQL>
SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
}}}
First node
{{{
AWR Top SQL Report
i Ela
n Time
Snap s Snap Plan Ela per CPU IO App Ccr Cluster PX A
Snap Start t Dur SQL Hash Time exec Time Wait Wait Wait Wait Direct Parse Server A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) (s) (s) (s) LIO PIO Writes Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ -------- ---------- -------- ------- ---- ------
207 10/05/24 23:50 1 4.24 0cxy506jng0jt 20011431 SQL*Plus 228.32 228.32 21.30 49.78 0.00 10.62 39.97 13251 6951 14640 1500000 1 3 2 0.90 1 insert
202 10/05/24 23:20 1 10.04 630ycmzhv9w1v 1541388231 SQL*Plus 163.84 11.36 21.16 0.00 0.49 0.33 72270 58 48208 0 0 1 0 0.27 2 insert
203 10/05/24 23:30 1 10.29 630ycmzhv9w1v 1541388231 SQL*Plus 69.39 69.39 6.40 3.95 0.00 0.48 0.16 30767 0 24768 0 1 0 0 0.11 1 insert
203 10/05/24 23:30 1 10.29 0cxy506jng0jt 1541388231 SQL*Plus 29.64 29.64 3.07 10.88 0.00 0.03 0.77 15735 151 15385 1000000 1 1 0 0.05 4 insert
207 10/05/24 23:50 1 4.24 b84gb6u21n480 948201263 9.57 9.57 0.70 6.08 0.04 0.76 0.45 121 9 0 1 1 2 1 0.04 3 insert
205 10/05/24 23:43 1 0.12 7vgmvmy8vvb9s 43914496 2.88 2.88 0.46 0.00 0.00 0.00 0.01 8 0 0 1 1 1 0 0.40 3 insert
208 10/05/24 23:54 1 0.23 7vgmvmy8vvb9s 43914496 2.66 2.66 0.37 0.01 0.00 0.00 0.00 8 1 0 1 1 1 0 0.19 3 insert
201 10/05/24 23:10 1 9.86 7vgmvmy8vvb9s 43914496 0.94 0.94 0.36 0.01 0.00 0.00 0.06 9 3 0 1 1 1 0 0.00 3 insert
201 10/05/24 23:10 1 9.86 agpd044zj368m 3821145811 0.92 0.92 0.05 0.23 0.00 0.01 0.58 92 2 0 121 1 1 0 0.00 4 insert
146 10/05/21 07:30 1 10.01 3kr90614kgmzt 2043930043 0.90 0.90 0.01 0.90 0.00 0.00 0.00 39 2 0 140 1 1 0 0.00 4 insert
182 10/05/21 13:30 1 10.01 7vgmvmy8vvb9s 43914496 0.65 0.65 0.26 0.00 0.00 0.00 0.00 8 0 0 1 1 1 0 0.00 4 insert
201 10/05/24 23:10 1 9.86 6hwjmjgrpsuaa 2721822575 0.47 0.47 0.03 0.42 0.00 0.00 0.03 39 8 0 67 1 1 0 0.00 5 insert
208 10/05/24 23:54 1 0.23 3m8smr0v7v1m6 0 0.45 0.00 0.07 0.15 0.00 0.00 0.19 985 14 0 115 115 115 0 0.03 4 INSERT
146 10/05/21 07:30 1 10.01 7vgmvmy8vvb9s 43914496 0.44 0.44 0.11 0.00 0.00 0.00 0.00 8 0 0 1 1 1 0 0.00 5 insert
155 10/05/21 09:00 1 10.01 7vgmvmy8vvb9s 43914496 0.42 0.42 0.16 0.00 0.00 0.00 0.00 8 0 0 1 1 1 0 0.00 4 insert
184 10/05/21 13:50 1 10.01 7vgmvmy8vvb9s 43914496 0.41 0.41 0.12 0.00 0.00 0.00 0.00 8 0 0 1 1 1 0 0.00 4 insert
169 10/05/21 11:20 1 10.01 7vgmvmy8vvb9s 43914496 0.39 0.39 0.14 0.00 0.00 0.00 0.00 8 0 0 1 1 1 0 0.00 4 insert
174 10/05/21 12:10 1 10.01 7vgmvmy8vvb9s 43914496 0.38 0.38 0.11 0.00 0.00 0.00 0.00 8 0 0 1 1 1 0 0.00 4 insert
179 10/05/21 13:00 1 10.01 7vgmvmy8vvb9s 43914496 0.37 0.37 0.13 0.00 0.00 0.00 0.00 8 0 0 1 1 1 0 0.00 4 insert
169 10/05/21 11:20 1 10.01 cp3gpd7z878w8 1950636251 0.37 0.37 0.00 0.00 0.00 0.00 0.00 9 0 0 27 1 1 0 0.00 5 insert
}}}
Second node
{{{
AWR Top SQL Report
i Ela
n Time
Snap s Snap Plan Ela per CPU IO App Ccr Cluster PX A
Snap Start t Dur SQL Hash Time exec Time Wait Wait Wait Wait Direct Parse Server A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) (s) (s) (s) LIO PIO Writes Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ -------- ---------- -------- ------- ---- ------
207 10/05/24 23:50 2 4.35 0cxy506jng0jt 20011431 SQL*Plus 172.10 16.18 60.92 0.00 12.56 45.03 12097 6948 14640 500000 0 2 2 0.66 2 insert
202 10/05/24 23:20 2 10.05 7vgmvmy8vvb9s 43914496 10.57 10.57 0.70 1.00 0.00 0.00 0.05 8 4 0 1 1 1 0 0.02 4 insert
202 10/05/24 23:20 2 10.05 f9nzhpn9854xz 2614576983 5.30 5.30 0.13 4.43 0.00 0.54 0.22 143 5 0 74 1 1 0 0.01 5 insert
200 10/05/24 23:04 2 6.01 7vgmvmy8vvb9s 43914496 2.18 2.18 0.38 0.00 0.00 0.00 0.00 8 2 0 1 1 1 0 0.01 5 insert
208 10/05/24 23:55 2 0.11 7vgmvmy8vvb9s 43914496 1.11 1.11 0.21 0.00 0.00 0.00 0.00 8 1 0 1 1 1 0 0.17 5 insert
206 10/05/24 23:43 2 7.21 7vgmvmy8vvb9s 43914496 0.72 0.72 0.25 0.03 0.00 0.00 0.00 8 1 0 1 1 1 0 0.00 2 insert
206 10/05/24 23:43 2 7.21 ak1t0asdv0yjs 565531888 0.65 0.65 0.00 0.34 0.00 0.00 0.32 7 1 0 1 1 1 0 0.00 4 insert
206 10/05/24 23:43 2 7.21 fp6ajqcjfqf2j 4010731594 0.50 0.50 0.01 0.47 0.00 0.00 0.00 8 1 0 20 1 1 0 0.00 5 insert
205 10/05/24 23:43 2 0.12 f318xdxdn0pdc 2536105608 0.32 0.32 0.06 0.00 0.00 0.00 0.00 6 1 0 4 1 1 0 0.04 2 insert
205 10/05/24 23:43 2 0.12 71y370j6428cb 3717298615 0.25 0.25 0.04 0.03 0.00 0.00 0.02 6 1 0 2 1 1 0 0.03 4 insert
205 10/05/24 23:43 2 0.12 7vgmvmy8vvb9s 43914496 0.21 0.21 0.18 0.01 0.00 0.00 0.00 8 1 0 1 1 1 0 0.03 5 insert
}}}
{{{
SQL> select count(*) from parallel_t1;
AWR Top SQL Report
COUNT(*)
----------
1000000
SQL> select sum(bytes)/1024/1024 from dba_segments where segment_name = 'PARALLEL_T1';
AWR Top SQL Report
SUM(BYTES)/1024/1024
--------------------
122.5625
}}}
Below is the plan table output
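It was probably produced with something like the call below (dbms_xplan against the AWR repository; format options are up to you):
{{{
select * from table(dbms_xplan.display_awr('0cxy506jng0jt'));
}}}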
{{{
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0cxy506jng0jt
--------------------
insert /*+ APPEND */ into parallel_t1 select level, 'x' from dual connect by level <= 1000000
Plan hash value: 20011431
--------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | 2 (100)| | | | |
| 1 | PX COORDINATOR | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 1 | 2 (0)| 00:00:01 | Q1,01 | P->S | QC (RAND) |
| 3 | LOAD AS SELECT | | | | | Q1,01 | PCWP | |
| 4 | BUFFER SORT | | | | | Q1,01 | PCWC | |
| 5 | PX RECEIVE | | 1 | 2 (0)| 00:00:01 | Q1,01 | PCWP | |
| 6 | PX SEND ROUND-ROBIN | :TQ10000 | 1 | 2 (0)| 00:00:01 | | S->P | RND-ROBIN |
| 7 | CONNECT BY WITHOUT FILTERING| | | | | | | |
| 8 | FAST DUAL | | 1 | 2 (0)| 00:00:01 | | | |
--------------------------------------------------------------------------------------------------------------------
SQL_ID 0cxy506jng0jt
--------------------
insert /*+ APPEND */ into parallel_t1 select level, 'x' from dual
connect by level <= 1000000
Plan hash value: 1541388231
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | 2 (100)| |
| 1 | LOAD AS SELECT | | | | |
| 2 | CONNECT BY WITHOUT FILTERING| | | | |
| 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------------------
36 rows selected.
}}}
The table queue stats below were pulled with the query that follows. Note that v$pq_tqstat only reports on the last parallel statement of the current session, so it has to be queried right after the statement completes:
{{{
select dfo_number,tq_id,server_type,instance,process,num_rows,bytes
from v$pq_tqstat
order by dfo_number, tq_id, server_type desc, instance, process;
}}}
------------------------
INSERT... SELECT - 10000 rows
------------------------
{{{
alter table parallel_t1 parallel 2;
alter session enable parallel dml;
insert /*+ APPEND */ into parallel_t1
select level, 'x'
from dual
connect by level <= 10000
DFO_NUMBER TQ_ID SERVER_TYP INSTANCE PROCESS NUM_ROWS BYTES
---------- ---------- ---------- ---------- ---------- ---------- ----------
1 0 Producer 1 QC 10000 1081001
1 0 Consumer 1 P000 5000 540550
1 0 Consumer 2 P000 5000 540451
1 1 Producer 1 P000 1 107
1 1 Producer 2 P000 1 107
1 1 Consumer 1 QC 2 214
# ON NODE 1
AWR Top SQL Report
i Ela
n Time
Snap s Snap Plan Ela per CPU IO App Ccr Cluster PX A
Snap Start t Dur SQL Hash Time exec Time Wait Wait Wait Wait Direct Parse Server A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) (s) (s) (s) LIO PIO Writes Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ -------- ---------- -------- ------- ---- ------
251 10/05/31 00:51 1 2.68 8wrc8gydaqz3j 20011431 SQL*Plus 3.77 3.77 0.30 0.00 0.00 0.01 0.22 926 0 77 15000 1 2 1 0.02 3 insert
# ON NODE 2
AWR Top SQL Report
i Ela
n Time
Snap s Snap Plan Ela per CPU IO App Ccr Cluster PX A
Snap Start t Dur SQL Hash Time exec Time Wait Wait Wait Wait Direct Parse Server A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) (s) (s) (s) LIO PIO Writes Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ -------- ---------- -------- ------- ---- ------
251 10/05/31 00:51 2 2.69 8wrc8gydaqz3j 20011431 SQL*Plus 0.97 0.28 0.02 0.00 0.02 0.25 857 2 77 5000 0 1 1 0.01 5 insert
}}}
------------------------
CREATE TABLE... SELECT - 10000 rows
------------------------
-- table in default degree
{{{
create table parallel_sum
pctfree 0
parallel
nologging
nocompress
storage(initial 8m next 8m)
as
select * from parallel_t1;
}}}
{{{
DFO_NUMBER TQ_ID SERVER_TYP INSTANCE PROCESS NUM_ROWS BYTES
---------- ---------- ---------- ---------- ---------- ---------- ----------
1 0 Producer 1 P000 1 107
1 0 Producer 2 P000 1 107
1 0 Producer 2 P001 1 107
1 0 Producer 2 P002 1 107
1 0 Consumer 1 QC 4 428
# ON NODE 1
-- no output
# ON NODE 2
AWR Top SQL Report
i Ela
n Time
Snap s Snap Plan Ela per CPU IO App Ccr Cluster PX A
Snap Start t Dur SQL Hash Time exec Time Wait Wait Wait Wait Direct Parse Server A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) (s) (s) (s) LIO PIO Writes Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ -------- ---------- -------- ------- ---- ------
256 10/05/31 01:06 2 2.43 62s8rfcus41hn 1693566775 SQL*Plus 12.60 0.57 0.57 0.00 3.12 1.62 2383 137 111 7996 0 3 3 0.09 1 create
}}}
------------------------
SELECT...COUNT(*) - 10000 rows
------------------------
-- table in default degree
{{{
DFO_NUMBER TQ_ID SERVER_TYP INSTANCE PROCESS NUM_ROWS BYTES
---------- ---------- ---------- ---------- ---------- ---------- ----------
1 0 Producer 1 P000 1 32
1 0 Producer 1 P001 1 32
1 0 Producer 2 P000 1 32
1 0 Producer 2 P001 1 32
1 0 Consumer 1 QC 4 128
# ON NODE 1
AWR Top SQL Report
i Ela
n Time
Snap s Snap Plan Ela per CPU IO App Ccr Cluster PX A
Snap Start t Dur SQL Hash Time exec Time Wait Wait Wait Wait Direct Parse Server A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) (s) (s) (s) LIO PIO Writes Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ -------- ---------- -------- ------- ---- ------
266 10/05/31 01:29 1 0.75 82wqpmq0g9n5s 1945250665 SQL*Plus 1.18 1.18 0.27 0.63 0.03 0.00 0.01 243 241 0 1 1 3 2 0.03 4 select
# ON NODE 2
AWR Top SQL Report
i Ela
n Time
Snap s Snap Plan Ela per CPU IO App Ccr Cluster PX A
Snap Start t Dur SQL Hash Time exec Time Wait Wait Wait Wait Direct Parse Server A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) (s) (s) (s) LIO PIO Writes Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ -------- ---------- -------- ------- ---- ------
266 10/05/31 01:29 2 0.75 82wqpmq0g9n5s 1945250665 SQL*Plus 0.50 0.15 0.32 0.00 0.01 0.00 80 48 0 0 0 2 2 0.01 3 select
}}}
See the SNAPs below:
{{{
i
n Elapsed
Snap s Snap Plan Elapsed Time CPU A
Snap Start t Dur SQL Hash Time per exec Time Cluster Parse PX A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) Wait LIO PIO Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------------------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ -------- ---------- -------- ------- ---- ----------------------------------------
1065 10/05/04 12:19 1 0.88 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 4.48 4.48 0.90 0 17788 0 1000000 1 0 0 0.08 2 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
1066 10/05/04 12:20 1 1.65 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 3.43 1.13 0 34415 0 0 0 1 0 0.03 5 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
1067 10/05/04 12:22 1 0.28 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 9.20 0.85 0 24766 0 0 0 0 0 0.55 1 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
1068 10/05/04 12:22 1 0.18 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 13.62 2.03 0 40108 0 0 0 0 0 1.26 2 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
1069 10/05/04 12:22 1 0.26 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 4.92 4.92 1.35 0 17157 0 1000000 1 0 0 0.32 2 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
}}}
A bulk insert operation
{{{
sys@IVRS> insert into parallel_t1
select level, 'x'
from dual
connect by level <= 1000000 2 3 4
5 /
1000000 rows created.
Elapsed: 00:00:42.77
}}}
After the bulk load
{{{
i
n Elapsed
Snap s Snap Plan Elapsed Time CPU A
Snap Start t Dur SQL Hash Time per exec Time Cluster Parse PX A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) Wait LIO PIO Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------------------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ -------- ---------- -------- ------- ---- ----------------------------------------
1071 10/05/04 12:25 1 0.24 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 10.55 3.20 0 38760 0 0 0 0 0 0.73 2 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
1072 10/05/04 12:25 1 0.37 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 11.87 3.60 0 31011 0 0 0 0 0 0.53 1 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
1073 10/05/04 12:26 1 0.10 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 4.62 4.62 1.08 0 25076 0 1000000 1 0 0 0.77 1 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
}}}
But if I show the cumulative view... adding everything up gives 32 seconds, so where did the other 10 seconds go?
{{{
i
n Elapsed
Snap s Snap Plan Elapsed Elapsed Time CPU A
Snap Start t Dur SQL Hash Total Time per exec Time Cluster Parse PX A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) Wait LIO PIO Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------------------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ -------- ---------- -------- ------- ---- ----------------------------------------
1069 10/05/04 12:22 1 0.26 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 102.21 4.92 4.92 1.35 0 17157 0 1000000 1 0 0 0.32 2 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
1071 10/05/04 12:25 1 0.24 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 118.37 10.55 3.20 0 38760 0 0 0 0 0 0.73 2 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
1072 10/05/04 12:25 1 0.37 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 130.24 11.87 3.60 0 31011 0 0 0 0 0 0.53 1 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
1073 10/05/04 12:26 1 0.10 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 134.86 4.62 4.62 1.08 0 25076 0 1000000 1 0 0 0.77 1 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
}}}
ANOTHER EXAMPLE
------------------------------------------------------------------------------------------------------------------------------------------
{{{
perfstat@IVRS> create table parallel_t1(c1 int, c2 char(100));
Table created.
Elapsed: 00:00:03.21
perfstat@IVRS> insert into parallel_t1
select level, 'x'
from dual
connect by level <= 1000000 2 3 4
5 /
1000000 rows created.
Elapsed: 00:00:50.97
}}}
{{{
i
n Elapsed
Snap s Snap Plan Elapsed Elapsed Time CPU A
Snap Start t Dur SQL Hash Total Time per exec Time Cluster Parse PX A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) Wait LIO PIO Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------------------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ -------- ---------- -------- ------- ---- ----------------------------------------
1088 10/05/04 13:33 1 0.49 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 168.95 0.00 0.00 0 5 0 0 0 1 0 0.00 17 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
1089 10/05/04 13:34 1 0.33 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 187.64 18.69 4.88 0 84622 0 0 0 0 0 0.94 1 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
1090 10/05/04 13:34 1 1.48 8ucn3svx8s6sj 1236776825 sqlplus@dbrocaix01.b 212.21 24.57 24.57 2.78 0 76771 0 1000000 1 0 0 0.28 4 insert into parallel_t1
ayantel.com (TNS V1- select level, 'x'
V3) from dual
connect by level <= 1000000
}}}
-- the total execute count is 8.. which accounts for 26.5s of elapsed.. the same numbers show up in sqltxplain.. but the real elapsed is 50 secs
So I must add up the elapsed totals and exec totals across the snapshots to get a rough estimate of the real elapsed time..
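A hedged sketch of that arithmetic against the AWR base table, comparing the running totals with the per-snapshot deltas for the same SQL_ID:
{{{
-- cumulative counters vs per-snapshot deltas for the insert
select snap_id,
       executions_total, executions_delta,
       round(elapsed_time_total/1e6,2) ela_total_sec,
       round(elapsed_time_delta/1e6,2) ela_delta_sec
from   dba_hist_sqlstat
where  sql_id = '8ucn3svx8s6sj'
order  by snap_id;
}}}
Summing ela_delta_sec across the snapshots gets you close to the real wall clock time, which is the rough estimate described above.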
Piratebay
Isohunt
Btjunkie
http://www.demonoid.me
http://eztv.it/showlist/
http://www.orbitdownloader.com/free-youtube-downloader-firefox.htm
! CPU and kernel info
{{{
[oracle@dbrocaix01 ~]$ uname -a
Linux dbrocaix01 2.6.9-42.0.0.0.1.EL #1 Sun Oct 15 13:58:55 PDT 2006 i686 i686 i386 GNU/Linux
[oracle@dbrocaix01 ~]$
[oracle@dbrocaix01 ~]$ cat /etc/redhat-release
Enterprise Linux Enterprise Linux AS release 4 (October Update 4)
[oracle@dbrocaix01 ~]$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 7
model name : Intel(R) Core(TM)2 Duo CPU T8100 @ 2.10GHz
stepping : 6
cpu MHz : 2081.751
cache size : 3072 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss nx lm pni
bogomips : 4171.86
[oracle@dbrocaix01 ~]$
}}}
! SAR information
06/01/10
{{{
22:30:05 CPU %user %nice %system %iowait %idle
22:40:01 all 0.84 0.00 1.54 2.64 94.98
22:50:01 all 0.36 0.00 0.79 3.57 95.28
23:00:01 all 0.42 0.00 0.84 2.16 96.58
23:10:01 all 0.47 0.00 1.53 7.44 90.57
23:20:01 all 5.10 0.00 20.39 35.56 38.95
23:30:01 all 7.54 1.06 26.81 27.07 37.53
23:40:01 all 6.56 55.32 18.51 6.63 12.97
23:50:01 all 7.99 52.75 39.27 0.00 0.00
Average: all 3.66 13.65 13.72 10.64 58.34
}}}
06/02/10
{{{
00:00:01 CPU %user %nice %system %iowait %idle
00:10:01 all 8.95 79.83 11.23 0.00 0.00
00:20:01 all 10.00 79.86 10.14 0.00 0.00
00:30:01 all 9.19 78.30 12.51 0.00 0.00
00:40:01 all 8.31 85.11 6.58 0.00 0.00
00:50:01 all 9.58 77.72 12.70 0.00 0.00
01:00:01 all 9.30 84.29 6.41 0.00 0.00
01:10:01 all 8.94 83.76 7.31 0.00 0.00
Average: all 9.18 81.27 9.55 0.00 0.00
}}}
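(For reference, the tables above are standard sysstat output; a sketch of how they were probably pulled, assuming the default /var/log/sa location:)
{{{
# CPU utilization from the current day's activity file
sar -u

# or from a specific day's file, e.g. the 1st and 2nd of the month
sar -u -f /var/log/sa/sa01
sar -u -f /var/log/sa/sa02
}}}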
! Time Series information on workload, top events, io statistics, top SQLs for SNAP_ID 1120,1121,1123
AWR CPU and IO Workload Report
{{{
AWR CPU and IO Workload Report
i *** *** ***
n Total Total Total U S
Snap s Snap C CPU A Oracle OS Physical Oracle RMAN OS S Y I
Snap Start t Dur P Time DB DB Bg RMAN A CPU OS CPU Memory IOPs IOPs IOPs IO r IO w Redo Exec CPU CPU CPU R S O
ID Time # (m) U (s) Time CPU CPU CPU S (s) Load (s) (mb) r w redo (mb)/s (mb)/s (mb)/s Sess /s % % % % % %
------ --------------- --- ---------- --- ----------- ---------- --------- --------- -------- ----- ----------- ------- ----------- ---------- --------- --------- --------- --------- --------- --------- ---- --------- ------ ---- ---- ---- ---- ----
1120 10/06/01 23:40 1 10.02 1 601.20 17.76 3.80 43.39 0.00 0.0 47.19 2.58 600.63 0.00 1.771 1.392 0.461 0.067 0.063 0.082 21 3.678 8 0 100 8 38 0
1121 10/06/01 23:50 1 10.03 1 601.80 51.99 4.27 7.64 0.00 0.1 11.92 1.85 604.71 0.01 0.105 0.661 0.058 0.001 0.008 0.002 21 1.062 2 0 100 8 19 0
1123 10/06/02 00:10 1 10.03 1 601.80 44.45 14.84 10.04 0.00 0.1 24.88 1.20 602.24 0.05 2.704 0.695 0.080 0.042 0.009 0.002 23 3.704 4 0 100 10 10 0
}}}
AWR Top Events Report
{{{
AWR Top Events Report
i
n
Snap s Snap A
Snap Start t Dur Event Time Avgwt DB Time A
ID Time # (m) Event Rank Waits (s) (ms) % S Wait Class
------ --------------- --- ---------- ---------------------------------------- ----- -------------- -------------- -------- ------- ------ ---------------
1120 10/06/01 23:40 1 10.02 log file sequential read 1 87.00 38.91 447.28 219 0.1 System I/O
1120 10/06/01 23:40 1 10.02 db file scattered read 2 576.00 25.93 45.02 146 0.0 User I/O
1120 10/06/01 23:40 1 10.02 db file sequential read 3 485.00 22.12 45.62 125 0.0 User I/O
1120 10/06/01 23:40 1 10.02 db file parallel write 4 824.00 15.16 18.40 85 0.0 System I/O
1120 10/06/01 23:40 1 10.02 log file parallel write 5 277.00 10.25 37.01 58 0.0 System I/O
1121 10/06/01 23:50 1 10.03 db file parallel write 1 398.00 15.97 40.13 31 0.0 System I/O
1121 10/06/01 23:50 1 10.03 CPU time 2 0.00 4.27 0.00 8 0.0 CPU
1121 10/06/01 23:50 1 10.03 control file parallel write 3 199.00 3.64 18.29 7 0.0 System I/O
1121 10/06/01 23:50 1 10.03 log file parallel write 4 34.00 1.95 57.36 4 0.0 System I/O
1121 10/06/01 23:50 1 10.03 os thread startup 5 4.00 1.52 380.36 3 0.0 Concurrency
1123 10/06/02 00:10 1 10.03 db file sequential read 1 1160.00 30.18 26.01 68 0.1 User I/O
1123 10/06/02 00:10 1 10.03 CPU time 2 0.00 14.84 0.00 33 0.0 CPU
1123 10/06/02 00:10 1 10.03 db file parallel write 3 416.00 11.90 28.61 27 0.0 System I/O
1123 10/06/02 00:10 1 10.03 db file scattered read 4 476.00 5.58 11.72 13 0.0 User I/O
1123 10/06/02 00:10 1 10.03 log file parallel write 5 48.00 2.30 47.98 5 0.0 System I/O
15 rows selected.
}}}
AWR Tablespace IO Report
{{{
AWR Tablespace IO Report
i
n
Snap s Snap
Snap Start t Dur IO IOPS IOPS IOPS
ID Time # (m) TS Rank Reads Av Reads/s Av Rd(ms) Av Blks/Rd Writes Av Writes/s Buffer Waits Av Buf Wt(ms) Total IO R+W Total R+W
------ --------------- --- ---------- -------------------- ---- -------- ---------- --------- ---------- -------- ----------- ------------ ------------- ------------ ---------
1120 10/06/01 23:40 1 10.02 SYSAUX 1 984 2 39.9 5.1 455 1 0 0.0 1439 2
1120 10/06/01 23:40 1 10.02 UNDOTBS1 2 2 0 50.0 1.0 212 0 0 0.0 214 0
1120 10/06/01 23:40 1 10.02 USERS 3 39 0 177.2 1.5 145 0 0 0.0 184 0
1120 10/06/01 23:40 1 10.02 SYSTEM 4 29 0 54.8 1.0 18 0 0 0.0 47 0
1120 10/06/01 23:40 1 10.02 CCINDEX 5 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 SOEINDEX 5 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 SOE 5 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 TPCCTAB 5 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 TPCHTAB 5 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 CCDATA 5 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 PSE 5 1 0 0.0 1.0 1 0 0 0.0 2 0
1121 10/06/01 23:50 1 10.03 SYSAUX 1 32 0 32.2 2.2 210 0 0 0.0 242 0
1121 10/06/01 23:50 1 10.03 USERS 2 35 0 19.1 1.3 155 0 0 0.0 190 0
1121 10/06/01 23:50 1 10.03 UNDOTBS1 3 0 0 0.0 0.0 27 0 0 0.0 27 0
1121 10/06/01 23:50 1 10.03 SYSTEM 4 1 0 70.0 1.0 6 0 0 0.0 7 0
1123 10/06/02 00:10 1 10.03 SYSAUX 1 1295 2 22.8 2.2 222 0 0 0.0 1517 3
1123 10/06/02 00:10 1 10.03 SYSTEM 2 272 0 19.6 1.0 15 0 0 0.0 287 0
1123 10/06/02 00:10 1 10.03 USERS 3 53 0 15.3 2.1 155 0 0 0.0 208 0
1123 10/06/02 00:10 1 10.03 UNDOTBS1 4 0 0 0.0 0.0 26 0 0 0.0 26 0
19 rows selected.
}}}
AWR File IO Report
{{{
AWR File IO Report
i
n
Snap s Snap
Snap Start t Dur IO IOPS IOPS IOPS
ID Time # (m) TS File# Filename Rank Reads Av Reads/s Av Rd(ms) Av Blks/Rd Writes Av Writes/s Buffer Waits Av Buf Wt(ms) Total IO R+W Total R+W
------ --------------- --- ---------- -------------------- ----- ------------------------------------------------------------ ---- -------- ---------- --------- ---------- -------- ----------- ------------ ------------- ------------ ---------
1120 10/06/01 23:40 1 10.02 SYSAUX 3 +DATA_1/ivrs/datafile/sysaux.258.652821943 1 984 2 39.9 5.1 455 1 0 0.0 1439 2
1120 10/06/01 23:40 1 10.02 UNDOTBS1 2 +DATA_1/ivrs/datafile/undotbs1.257.652821933 2 2 0 50.0 1.0 212 0 0 0.0 214 0
1120 10/06/01 23:40 1 10.02 USERS 4 +DATA_1/ivrs/datafile/users.263.652821963 3 16 0 37.5 1.8 81 0 0 0.0 97 0
1120 10/06/01 23:40 1 10.02 USERS 13 +DATA_1/ivrs/datafile/users02.dbf 4 23 0 274.3 1.3 64 0 0 0.0 87 0
1120 10/06/01 23:40 1 10.02 SYSTEM 1 +DATA_1/ivrs/datafile/system.267.652821909 5 28 0 56.8 1.0 17 0 0 0.0 45 0
1120 10/06/01 23:40 1 10.02 SOE 10 +DATA_1/ivrs/datafile/soe.dbf 6 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 SOEINDEX 11 +DATA_1/ivrs/datafile/soeindex.dbf 6 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 CCINDEX 9 +DATA_1/ivrs/datafile/ccindex.dbf 6 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 CCDATA 8 +DATA_1/ivrs/datafile/ccdata.dbf 6 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 PSE 7 +DATA_1/ivrs/pse.dbf 6 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 TPCCTAB 6 +DATA_1/ivrs/tpcctab01.dbf 6 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 SYSTEM 5 +DATA_1/ivrs/datafile/system_02.dbf 6 1 0 0.0 1.0 1 0 0 0.0 2 0
1120 10/06/01 23:40 1 10.02 TPCHTAB 12 +DATA_1/ivrs/datafile/tpch_01.dbf 6 1 0 0.0 1.0 1 0 0 0.0 2 0
1121 10/06/01 23:50 1 10.03 SYSAUX 3 +DATA_1/ivrs/datafile/sysaux.258.652821943 1 32 0 32.2 2.2 210 0 0 0.0 242 0
1121 10/06/01 23:50 1 10.03 USERS 4 +DATA_1/ivrs/datafile/users.263.652821963 2 18 0 20.0 1.3 87 0 0 0.0 105 0
1121 10/06/01 23:50 1 10.03 USERS 13 +DATA_1/ivrs/datafile/users02.dbf 3 17 0 18.2 1.4 68 0 0 0.0 85 0
1121 10/06/01 23:50 1 10.03 UNDOTBS1 2 +DATA_1/ivrs/datafile/undotbs1.257.652821933 4 0 0 27 0 0 0.0 27 0
1121 10/06/01 23:50 1 10.03 SYSTEM 1 +DATA_1/ivrs/datafile/system.267.652821909 5 1 0 70.0 1.0 6 0 0 0.0 7 0
1123 10/06/02 00:10 1 10.03 SYSAUX 3 +DATA_1/ivrs/datafile/sysaux.258.652821943 1 1295 2 22.8 2.2 222 0 0 0.0 1517 3
1123 10/06/02 00:10 1 10.03 SYSTEM 1 +DATA_1/ivrs/datafile/system.267.652821909 2 272 0 19.6 1.0 15 0 0 0.0 287 0
1123 10/06/02 00:10 1 10.03 USERS 13 +DATA_1/ivrs/datafile/users02.dbf 3 33 0 15.2 2.2 77 0 0 0.0 110 0
1123 10/06/02 00:10 1 10.03 USERS 4 +DATA_1/ivrs/datafile/users.263.652821963 4 20 0 15.5 2.0 78 0 0 0.0 98 0
1123 10/06/02 00:10 1 10.03 UNDOTBS1 2 +DATA_1/ivrs/datafile/undotbs1.257.652821933 5 0 0 26 0 0 0.0 26 0
23 rows selected.
}}}
AWR Top SQL Report
{{{
AWR Top SQL Report
i
n Elapsed
Snap s Snap Plan Elapsed Time CPU A
Snap Start t Dur SQL Hash Time per exec Time Cluster Parse PX A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) Wait LIO PIO Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------------------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ -------- ---------- -------- ------- ---- ----------------------------------------
1120 10/06/01 23:40 1 10.02 24b3xmp4wd3tu 1023035917 36.31 36.31 11.57 0 429787 1304 60200 1 1 0 0.06 1 delete from sys.wri$_optstat_histgrm_his
tory where savtime < :1
1120 10/06/01 23:40 1 10.02 c2p32r5mzv8hb 0 31.59 31.59 6.81 0 351 0 1 1 1 0 0.05 2 BEGIN prvt_advisor.delete_expired_tas
ks; END;
1120 10/06/01 23:40 1 10.02 djxhgkkdw5s5w 2381629097 31.39 31.39 6.66 0 122 0 0 1 1 0 0.05 3 DELETE FROM WRI$_ADV_PARAMETERS A WHERE
A.TASK_ID = :B1
1120 10/06/01 23:40 1 10.02 24hc2470c87up 4200754155 12.72 12.72 3.37 0 3022 2833 0 1 1 0 0.02 4 delete from WRH$_SQL_PLAN tab where (:b
eg_snap <= tab.snap_id and tab.s
nap_id <= :end_snap and dbid = :
dbid) and not exists (select 1 from W
RM$_BASELINE b where
(tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id
) and (tab.snap
_id <= b.end_snap_id))
1120 10/06/01 23:40 1 10.02 djs2w2f17nw2z 0 10.85 10.85 2.48 0 1749 57 1 1 1 0 0.02 5 DECLARE job BINARY_INTEGER := :job; next
_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate :
= next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
1121 10/06/01 23:50 1 10.03 1wzqub25cwnjm 0 24.95 24.95 1.53 0 118 0 1 1 1 0 0.04 1 DECLARE job BINARY_INTEGER := :job; next
_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN wksys.wk_job.invoke(21,21
); :mydate := next_date; IF broken THEN
:b := 1; ELSE :b := 0; END IF; END;
1121 10/06/01 23:50 1 10.03 d92h3rjp0y217 0 1.89 1.89 1.52 0 57 0 1 1 1 0 0.00 2 begin prvt_hdm.auto_execute( :db_id, :in
st_id, :end_snap ); end;
1121 10/06/01 23:50 1 10.03 5h7w8ykwtb2xt 0 1.78 1.78 1.43 0 6 0 0 1 1 0 0.00 3 INSERT INTO SYS.WRI$_ADV_PARAMETERS (TAS
K_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2
, :B1 )
1121 10/06/01 23:50 1 10.03 djs2w2f17nw2z 0 1.48 1.48 0.73 0 1880 47 1 1 1 0 0.00 4 DECLARE job BINARY_INTEGER := :job; next
_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate :
= next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
1121 10/06/01 23:50 1 10.03 bunssq950snhf 2694099131 1.09 1.09 0.90 0 6 0 7 1 1 0 0.00 5 insert into wrh$_sga_target_advice (sn
ap_id, dbid, instance_number, SGA_SIZ
E, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_P
HYSICAL_READS) select :snap_id, :dbi
d, :instance_number, SGA_SIZE, SGA_SI
ZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_R
EADS from v$sga_target_advice
1123 10/06/02 00:10 1 10.03 ctdqj2v5dt8t4 813260237 EXCEL.EXE 42.23 42.23 13.62 0 37427 2743 804 1 1 0 0.07 1 SELECT s0.snap_id id,
1123 10/06/02 00:10 1 10.03 05s9358mm6vrr 0 9.26 9.26 4.04 0 49359 298 1 1 1 0 0.02 2 begin dbms_feature_usage_internal.exec_d
b_usage_sampling(:bind1); end;
1123 10/06/02 00:10 1 10.03 djs2w2f17nw2z 0 1.87 1.87 0.98 0 1938 111 1 1 1 0 0.00 3 DECLARE job BINARY_INTEGER := :job; next
_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate :
= next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
1123 10/06/02 00:10 1 10.03 43w0r9122v7jm 921311599 MMON_SLAVE 1.40 1.40 0.70 0 5424 38 1 1 1 0 0.00 4 select max(bytes) from dba_segments
1123 10/06/02 00:10 1 10.03 1m7zxctxm9bhq 3312925166 MMON_SLAVE 1.01 1.01 0.37 0 754 33 1 1 1 0 0.00 5 select decode(cap + app + prop + msg + a
q , 0, 0, 1), 0, NULL from (select decod
e (count(*), 0, 0, 1) cap from dba_captu
re), (select decode (count(*), 0, 0, 1)
app from dba_apply), (select decode (cou
nt(*), 0, 0, 1) prop from dba_propagatio
n), (select decode (count(*), 0, 0, 1) m
sg from dba_streams_message_consumers
where streams_name != 'SCHEDULER_COORD
INATOR' and streams_name != 'SCHEDULER_
PICKUP'),(select decode (count(*), 0, 0,
1) aq from system.aq$_queue_tables)
15 rows selected.
}}}
AWR Top SQL Report 2
{{{
AWR Top SQL Report
i Ela
n Time
Snap s Snap Plan Ela per CPU IO App Ccr Cluster PX A
Snap Start t Dur SQL Hash Time exec Time Wait Wait Wait Wait Direct Parse Server A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) (s) (s) (s) LIO PIO Writes Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ -------- ---------- -------- ------- ---- ------
1120 10/06/01 23:40 1 10.02 24b3xmp4wd3tu 1023035917 36.31 36.31 11.57 16.55 0.00 0.00 0.00 429787 1304 0 60200 1 1 0 0.06 1 delete
1120 10/06/01 23:40 1 10.02 c2p32r5mzv8hb 0 31.59 31.59 6.81 0.00 0.00 0.00 0.00 351 0 0 1 1 1 0 0.05 2 BEGIN
1120 10/06/01 23:40 1 10.02 djxhgkkdw5s5w 2381629097 31.39 31.39 6.66 0.00 0.00 0.00 0.00 122 0 0 0 1 1 0 0.05 3 DELETE
1120 10/06/01 23:40 1 10.02 24hc2470c87up 4200754155 12.72 12.72 3.37 12.14 0.00 0.00 0.00 3022 2833 0 0 1 1 0 0.02 4 delete
1120 10/06/01 23:40 1 10.02 djs2w2f17nw2z 0 10.85 10.85 2.48 6.89 0.00 0.00 0.00 1749 57 0 1 1 1 0 0.02 5 DECLAR
1121 10/06/01 23:50 1 10.03 1wzqub25cwnjm 0 24.95 24.95 1.53 0.00 0.00 0.00 0.00 118 0 0 1 1 1 0 0.04 1 DECLAR
1121 10/06/01 23:50 1 10.03 d92h3rjp0y217 0 1.89 1.89 1.52 0.00 0.00 0.00 0.00 57 0 0 1 1 1 0 0.00 2 begin
1121 10/06/01 23:50 1 10.03 5h7w8ykwtb2xt 0 1.78 1.78 1.43 0.00 0.00 0.00 0.00 6 0 0 0 1 1 0 0.00 3 INSERT
1121 10/06/01 23:50 1 10.03 djs2w2f17nw2z 0 1.48 1.48 0.73 0.69 0.00 0.00 0.00 1880 47 0 1 1 1 0 0.00 4 DECLAR
1121 10/06/01 23:50 1 10.03 bunssq950snhf 2694099131 1.09 1.09 0.90 0.00 0.00 0.00 0.00 6 0 0 7 1 1 0 0.00 5 insert
1123 10/06/02 00:10 1 10.03 ctdqj2v5dt8t4 813260237 EXCEL.EX 42.23 42.23 13.62 28.27 0.00 0.00 0.00 37427 2743 0 804 1 1 0 0.07 1 SELECT
E
1123 10/06/02 00:10 1 10.03 05s9358mm6vrr 0 9.26 9.26 4.04 4.87 0.00 0.00 0.00 49359 298 1 1 1 1 0 0.02 2 begin
1123 10/06/02 00:10 1 10.03 djs2w2f17nw2z 0 1.87 1.87 0.98 0.82 0.00 0.00 0.00 1938 111 0 1 1 1 0 0.00 3 DECLAR
1123 10/06/02 00:10 1 10.03 43w0r9122v7jm 921311599 MMON_SLA 1.40 1.40 0.70 0.61 0.00 0.00 0.00 5424 38 0 1 1 1 0 0.00 4 select
VE
1123 10/06/02 00:10 1 10.03 1m7zxctxm9bhq 3312925166 MMON_SLA 1.01 1.01 0.37 0.63 0.00 0.00 0.00 754 33 0 1 1 1 0 0.00 5 select
VE
15 rows selected.
}}}
! The AWR report for SNAP_ID 1120
{{{
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 1120 01-Jun-10 23:40:27 21 1.8
End Snap: 1121 01-Jun-10 23:50:29 21 1.8
Elapsed: 10.02 (mins)
DB Time: 0.30 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
---------- ----------
Buffer Cache: 152M 152M Std Block Size: 8K
Shared Pool Size: 140M 140M Log Buffer: 2,860K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 86,266.06 3,989,028.92
Logical reads: 810.43 37,475.23
Block changes: 694.78 32,127.31
Physical reads: 8.53 394.62
Physical writes: 8.13 375.85
User calls: 0.02 1.15
Parses: 1.55 71.62
Hard parses: 0.15 6.85
Sorts: 1.03 47.77
Logons: 0.01 0.38
Executes: 3.68 170.08
Transactions: 0.02
% Blocks changed per Read: 85.73 Recursive Call %: 99.93
Rollback per transaction %: 15.38 Rows per Sort: 34.23
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 98.95 In-memory Sort %: 100.00
Library Hit %: 89.08 Soft Parse %: 90.44
Execute to Parse %: 57.89 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 94.92 % Non-Parse CPU: 85.28
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 48.78 50.52
% SQL with executions>1: 77.36 66.13
% Memory for SQL w/exec>1: 71.53 62.59
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
log file sequential read 87 39 447 219.1 System I/O
db file scattered read 576 26 45 146.0 User I/O
db file sequential read 485 22 46 124.6 User I/O
db file parallel write 824 15 18 85.4 System I/O
log file parallel write 277 10 37 57.7 System I/O
-------------------------------------------------------------
Time Model Statistics DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Total time in database user-calls (DB Time): 17.8s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 21.8 122.5
DB CPU 3.8 21.4
parse time elapsed 0.2 .9
PL/SQL execution elapsed time 0.1 .7
repeated bind elapsed time 0.0 .0
DB time 17.8 N/A
background elapsed time 181.6 N/A
background cpu time 43.4 N/A
-------------------------------------------------------------
Wait Class DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
-------------------- ---------------- ------ ---------------- ------- ---------
System I/O 2,990 .0 83 28 230.0
User I/O 1,087 .0 48 44 83.6
Configuration 5 40.0 3 690 0.4
Other 25 .0 1 60 1.9
Concurrency 4 .0 1 194 0.3
Commit 1 .0 0 5 0.1
-------------------------------------------------------------
Wait Events DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
log file sequential read 87 .0 39 447 6.7
db file scattered read 576 .0 26 45 44.3
db file sequential read 485 .0 22 46 37.3
db file parallel write 824 .0 15 18 63.4
log file parallel write 277 .0 10 37 21.3
Log archive I/O 54 .0 9 173 4.2
control file parallel write 273 .0 7 25 21.0
control file sequential read 1,473 .0 3 2 113.3
log file switch completion 3 33.3 2 713 0.2
enq: CF - contention 1 .0 1 1382 0.1
log buffer space 2 50.0 1 657 0.2
os thread startup 4 .0 1 194 0.3
log file single write 2 .0 0 63 0.2
latch free 1 .0 0 82 0.1
change tracking file synchro 11 .0 0 3 0.8
direct path read 13 .0 0 2 1.0
direct path write 13 .0 0 2 1.0
log file sync 1 .0 0 5 0.1
change tracking file synchro 12 .0 0 0 0.9
ASM background timer 146 .0 588 4026 11.2
virtual circuit status 20 100.0 580 28985 1.5
Streams AQ: qmn slave idle w 21 .0 579 27578 1.6
Streams AQ: qmn coordinator 45 53.3 579 12870 3.5
class slave wait 10 20.0 328 32772 0.8
jobq slave wait 22 95.5 64 2887 1.7
KSV master wait 10 10.0 4 381 0.8
-------------------------------------------------------------
Background Wait Events DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
log file sequential read 55 .0 23 416 4.2
db file parallel write 824 .0 15 18 63.4
log file parallel write 277 .0 10 37 21.3
Log archive I/O 54 .0 9 173 4.2
control file parallel write 273 .0 7 25 21.0
events in waitclass Other 24 .0 1 59 1.8
control file sequential read 397 .0 1 3 30.5
os thread startup 4 .0 1 194 0.3
db file sequential read 1 .0 0 140 0.1
log file single write 2 .0 0 63 0.2
direct path read 13 .0 0 2 1.0
direct path write 13 .0 0 2 1.0
rdbms ipc message 2,609 89.3 8,537 3272 200.7
ASM background timer 146 .0 588 4026 11.2
pmon timer 203 100.0 586 2886 15.6
Streams AQ: qmn slave idle w 21 .0 579 27578 1.6
Streams AQ: qmn coordinator 45 53.3 579 12870 3.5
smon timer 2 100.0 445 222294 0.2
class slave wait 10 20.0 328 32772 0.8
KSV master wait 10 10.0 4 381 0.8
-------------------------------------------------------------
Operating System Statistics DB/Inst: IVRS/ivrs Snaps: 1120-1121
Statistic Total
-------------------------------- --------------------
BUSY_TIME 60,063
IDLE_TIME 0
IOWAIT_TIME 0
NICE_TIME 31,825
SYS_TIME 23,093
USER_TIME 4,804
LOAD 3
RSRC_MGR_CPU_WAIT_TIME 0
PHYSICAL_MEMORY_BYTES 2,868
NUM_CPUS 1
-------------------------------------------------------------
Service Statistics DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> ordered by DB Time
Physical Logical
Service Name DB Time (s) DB CPU (s) Reads Reads
-------------------------------- ------------ ------------ ---------- ----------
SYS$USERS 17.8 3.8 57 1,784
SYS$BACKGROUND 0.0 0.0 5,075 485,334
ivrs.karl.com 0.0 0.0 0 0
ivrsXDB 0.0 0.0 0 0
-------------------------------------------------------------
Service Wait Class Stats DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
classes: User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in centisecond (100th of a second)
Service Name
----------------------------------------------------------------
User I/O User I/O Concurcy Concurcy Admin Admin Network Network
Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time
--------- --------- --------- --------- --------- --------- --------- ---------
SYS$USERS
37 690 0 0 0 0 0 0
SYS$BACKGROUND
1050 4120 4 78 0 0 0 0
-------------------------------------------------------------
SQL ordered by Elapsed Time DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
36 12 1 36.3 204.4 24b3xmp4wd3tu
delete from sys.wri$_optstat_histgrm_history where savtime < :1
32 7 1 31.6 177.9 c2p32r5mzv8hb
BEGIN prvt_advisor.delete_expired_tasks; END;
31 7 1 31.4 176.7 djxhgkkdw5s5w
DELETE FROM WRI$_ADV_PARAMETERS A WHERE A.TASK_ID = :B1
13 3 1 12.7 71.6 24hc2470c87up
delete from WRH$_SQL_PLAN tab where (:beg_snap <= tab.snap_id and tab.s
nap_id <= :end_snap and dbid = :dbid) and not exists (select 1 from W
RM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and (tab.snap
11 2 1 10.8 61.1 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
7 2 1 6.6 37.4 73z18fnnbb7dw
delete from sys.wri$_optstat_histhead_history where savtime < :1
5 0 1 4.6 25.7 925d7dd714u48
INSERT INTO STATS$LATCH ( SNAP_ID , DBID , INSTANCE_NUMBER , NAME , LATCH# , LEV
EL# , GETS , MISSES , SLEEPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , W
AIT_TIME ) SELECT :B3 , :B2 , :B1 , NAME , LATCH# , LEVEL# , GETS , MISSES , SLE
EPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , WAIT_TIME FROM V$LATCH
2 2 1 2.2 12.4 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
2 2 1 2.2 12.3 5h7w8ykwtb2xt
INSERT INTO SYS.WRI$_ADV_PARAMETERS (TASK_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
2 0 113 0.0 8.5 db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucket
1 1 1 1.2 6.9 1crajpb7j5tyz
INSERT INTO STATS$SGA_TARGET_ADVICE ( SNAP_ID , DBID , INSTANCE_NUMBER , SGA_SIZ
E , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TIME_FACTOR , ESTD_PHYSICAL_READS )
SELECT :B3 , :B2 , :B1 , SGA_SIZE , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TI
ME_FACTOR , ESTD_PHYSICAL_READS FROM V$SGA_TARGET_ADVICE
1 0 1 1.2 6.7 fs9syj0mtwbt0
INSERT INTO STATS$ROWCACHE_SUMMARY ( SNAP_ID , DBID , INSTANCE_NUMBER , PARAMETE
R , TOTAL_USAGE , USAGE , GETS , GETMISSES , SCANS , SCANMISSES , SCANCOMPLETES
, MODIFICATIONS , FLUSHES , DLM_REQUESTS , DLM_CONFLICTS , DLM_RELEASES ) SELECT
:B3 , :B2 , :B1 , PARAMETER , SUM("COUNT") , SUM(USAGE) , SUM(GETS) , SUM(GETMI
1 0 1 0.8 4.6 aqwc375jm02h2
INSERT INTO STATS$THREAD ( SNAP_ID , DBID , INSTANCE_NUMBER , THREAD# , THREAD_I
NSTANCE_NUMBER , STATUS , OPEN_TIME , CURRENT_GROUP# , SEQUENCE# ) SELECT :B3 ,
:B2 , :B1 , T.THREAD# , I.INSTANCE_NUMBER , T.STATUS , T.OPEN_TIME , T.CURRENT_G
ROUP# , T.SEQUENCE# FROM V$THREAD T , V$INSTANCE I WHERE I.THREAD#(+) = T.THREAD
1 0 1 0.6 3.6 5v76s30g8tfyf
INSERT INTO STATS$PROCESS_MEMORY_ROLLUP ( SNAP_ID , DBID , INSTANCE_NUMBER , PID
SQL ordered by Elapsed Time DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
, SERIAL# , CATEGORY , ALLOCATED , USED , MAX_ALLOCATED , MAX_MAX_ALLOCATED , A
VG_ALLOCATED , STDDEV_ALLOCATED , NON_ZERO_ALLOCATIONS ) SELECT * FROM (SELECT :
B3 SNAP_ID , :B2 DBID , :B1 INSTANCE_NUMBER , NVL(PM.PID, -9) PID , NVL(PM.SERIA
1 0 1 0.6 3.4 ff0hrfad01nf1
delete from WRH$_SQLSTAT_BL tab where (:beg_snap <= tab.snap_id and tab
.snap_id <= :end_snap and dbid = :dbid) and not exists (select 1 from
WRM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and (tab.sn
1 0 1 0.6 3.4 9w1zcwvsbtqur
delete from sys.wri$_optstat_tab_history where savtime < :1
1 0 1 0.6 3.3 dyd4b36t1ppph
delete from wrh$_sqltext tab where (tab.dbid = :dbid and :beg_snap <= t
ab.snap_id and tab.snap_id <= :end_snap and tab.ref_count = 0) an
d not exists (select 1 from WRM$_BASELINE b where (b.db
id = :dbid2 and tab.snap_id >= b.start_snap_id a
1 0 1 0.5 2.9 8m0xucarywjpv
delete from sys.wri$_optstat_ind_history where savtime < :1
0 0 1 0.5 2.7 4hq1ht79r1506
delete from WRH$_FILEMETRIC_HISTORY tab where (:beg_snap <= tab.snap_id and
tab.snap_id <= :end_snap and dbid = :dbid) and not exists (selec
t 1 from WRM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and
0 0 1 0.4 2.4 0nwqx4g4gn1zp
delete from sys.wri$_optstat_opr where start_time < :1
0 0 1 0.4 2.4 ctm013tzmdg6k
delete from WRH$_WAITSTAT_BL tab where (:beg_snap <= tab.snap_id and ta
b.snap_id <= :end_snap and dbid = :dbid) and not exists (select 1 fro
m WRM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and (tab.s
0 0 1 0.4 2.4 0fr34wn27cjvt
Module: MMON_SLAVE
delete from WRI$_ALERT_HISTORY where time_suggested < :1
0 0 1 0.4 2.1 0y69qk6t30t61
delete from WRM$_SNAP_ERROR tab where (:beg_snap <= tab.snap_id and tab
.snap_id <= :end_snap and dbid = :dbid) and not exists (select 1 from
WRM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and (tab.sn
0 0 1 0.4 2.0 bunssq950snhf
insert into wrh$_sga_target_advice (snap_id, dbid, instance_number, SGA_SIZ
E, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_READS) select :snap_id, :dbi
d, :instance_number, SGA_SIZE, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_R
EADS from v$sga_target_advice
0 0 1 0.4 2.0 1c50d5m0uh16j
delete from WRH$_ACTIVE_SESSION_HISTORY_BL tab where (:beg_snap <= tab.snap_id
and tab.snap_id <= :end_snap and dbid = :dbid) and not exists
(select 1 from WRM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and
0 0 1 0.3 1.9 bq0xuw807fdju
SQL ordered by Elapsed Time DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
INSERT INTO STATS$EVENT_HISTOGRAM ( SNAP_ID , DBID , INSTANCE_NUMBER , EVENT_ID
, WAIT_TIME_MILLI , WAIT_COUNT ) SELECT :B3 , :B2 , :B1 , EN.EVENT_ID , WAIT_TIM
E_MILLI , WAIT_COUNT FROM V$EVENT_HISTOGRAM EH , V$EVENT_NAME EN WHERE EH.EVENT
= EN.NAME AND EH.EVENT# = EN.EVENT#
0 0 2 0.2 1.9 6cxqh7mktnbjm
insert into smon_scn_time (thread, time_mp, time_dp, scn, scn_wrp, scn_bas, num
_mappings, tim_scn_map) values (0, :1, :2, :3, :4, :5, :6, :7)
0 0 1 0.3 1.9 299gaqtz96qfy
Module: MMON_SLAVE
delete from WRI$_SEGADV_OBJLIST where creation_time < :1
0 0 1 0.3 1.7 2ptnd5byk1qh7
delete from WRH$_LATCH_PARENT_BL tab where (:beg_snap <= tab.snap_id and
tab.snap_id <= :end_snap and dbid = :dbid) and not exists (select 1
from WRM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and (t
0 0 38 0.0 1.7 83taa7kaw59c1
select name,intcol#,segcol#,type#,length,nvl(precision#,0),decode(type#,2,nvl(sc
ale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180,scale,181,scale,182,scale,183,s
cale,231,scale,0),null$,fixedstorage,nvl(deflength,0),default$,rowid,col#,proper
ty, nvl(charsetid,0),nvl(charsetform,0),spare1,spare2,nvl(spare3,0) from col$ wh
0 0 1 0.2 1.4 0m4v8p44shy83
Module: MMON_SLAVE
delete from WRI$_SEGADV_CNTRLTAB where start_time < :1
0 0 1 0.2 1.3 c3amcasx93pvb
INSERT INTO STATS$FILESTATXS ( SNAP_ID , DBID , INSTANCE_NUMBER , TSNAME , FILEN
AME , PHYRDS , PHYWRTS , SINGLEBLKRDS , READTIM , WRITETIM , SINGLEBLKRDTIM , PH
YBLKRD , PHYBLKWRT , WAIT_COUNT , TIME , FILE# ) SELECT :B3 , :B2 , :B1 , TSNAME
, FILENAME , PHYRDS , PHYWRTS , SINGLEBLKRDS , READTIM , WRITETIM , SINGLEBLKRD
0 0 1 0.2 1.3 0gf6adkbuyytg
INSERT INTO STATS$FILE_HISTOGRAM ( SNAP_ID , DBID , INSTANCE_NUMBER , FILE# , SI
NGLEBLKRDTIM_MILLI , SINGLEBLKRDS ) SELECT :B3 , :B2 , :B1 , FILE# , SINGLEBLKRD
TIM_MILLI , SINGLEBLKRDS FROM V$FILE_HISTOGRAM WHERE SINGLEBLKRDS > 0
0 0 1 0.2 1.2 dvzr1zfmdddga
INSERT INTO STATS$SYSSTAT ( SNAP_ID , DBID , INSTANCE_NUMBER , STATISTIC# , NAME
, VALUE ) SELECT :B3 , :B2 , :B1 , STATISTIC# , NAME , VALUE FROM V$SYSSTAT
0 0 1 0.2 1.2 dcj4ww2c5bq4d
INSERT INTO STATS$TEMPSTATXS ( SNAP_ID , DBID , INSTANCE_NUMBER , TSNAME , FILEN
AME , PHYRDS , PHYWRTS , SINGLEBLKRDS , READTIM , WRITETIM , SINGLEBLKRDTIM , PH
YBLKRD , PHYBLKWRT , WAIT_COUNT , TIME , FILE# ) SELECT :B3 , :B2 , :B1 , TSNAME
, FILENAME , PHYRDS , PHYWRTS , SINGLEBLKRDS , READTIM , WRITETIM , SINGLEBLKRD
0 0 161 0.0 1.2 3c1kubcdjnppq
update sys.col_usage$ set equality_preds = equality_preds + decode(bitan
d(:flag,1),0,0,1), equijoin_preds = equijoin_preds + decode(bitand(:flag
,2),0,0,1), nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,4),0,0
,1), range_preds = range_preds + decode(bitand(:flag,8),0,0,1),
0 0 1 0.2 1.2 5yhxd1urrkq31
INSERT INTO STATS$PARAMETER ( SNAP_ID , DBID , INSTANCE_NUMBER , NAME , VALUE ,
ISDEFAULT , ISMODIFIED ) SELECT :B3 , :B2 , :B1 , I.KSPPINM , SV.KSPPSTVL , SV.K
SPPSTDF , DECODE(BITAND(SV.KSPPSTVF,7),1,'MODIFIED',4,'SYSTEM_MOD','FALSE') FROM
STATS$X$KSPPI I , STATS$X$KSPPSV SV WHERE I.INDX = SV.INDX AND ( ( (TRANSLATE(K
0 0 146 0.0 1.1 96g93hntrzjtr
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_
size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, a
vgcln from hist_head$ where obj#=:1 and intcol#=:2
0 0 1 0.2 1.1 a7z22y3zmhtqs
select from_tz( cast ((max(analyzetime) - 1) as timestamp), to_cha
r(systimestamp, 'TZH:TZM')) from sys.tab$ where analyzetime is not null
-------------------------------------------------------------
SQL ordered by CPU Time DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ----------- ------- -------------
12 36 1 11.57 204.4 24b3xmp4wd3tu
delete from sys.wri$_optstat_histgrm_history where savtime < :1
7 32 1 6.81 177.9 c2p32r5mzv8hb
BEGIN prvt_advisor.delete_expired_tasks; END;
7 31 1 6.66 176.7 djxhgkkdw5s5w
DELETE FROM WRI$_ADV_PARAMETERS A WHERE A.TASK_ID = :B1
3 13 1 3.37 71.6 24hc2470c87up
delete from WRH$_SQL_PLAN tab where (:beg_snap <= tab.snap_id and tab.s
nap_id <= :end_snap and dbid = :dbid) and not exists (select 1 from W
RM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and (tab.snap
2 11 1 2.48 61.1 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
2 2 1 1.97 12.4 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
2 2 1 1.95 12.3 5h7w8ykwtb2xt
INSERT INTO SYS.WRI$_ADV_PARAMETERS (TASK_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
2 7 1 1.74 37.4 73z18fnnbb7dw
delete from sys.wri$_optstat_histhead_history where savtime < :1
1 1 1 1.00 6.9 1crajpb7j5tyz
INSERT INTO STATS$SGA_TARGET_ADVICE ( SNAP_ID , DBID , INSTANCE_NUMBER , SGA_SIZ
E , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TIME_FACTOR , ESTD_PHYSICAL_READS )
SELECT :B3 , :B2 , :B1 , SGA_SIZE , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TI
ME_FACTOR , ESTD_PHYSICAL_READS FROM V$SGA_TARGET_ADVICE
0 0 1 0.34 2.0 bunssq950snhf
insert into wrh$_sga_target_advice (snap_id, dbid, instance_number, SGA_SIZ
E, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_READS) select :snap_id, :dbi
d, :instance_number, SGA_SIZE, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_R
EADS from v$sga_target_advice
0 1 1 0.30 3.4 9w1zcwvsbtqur
delete from sys.wri$_optstat_tab_history where savtime < :1
0 2 113 0.00 8.5 db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucket
0 1 1 0.19 2.9 8m0xucarywjpv
delete from sys.wri$_optstat_ind_history where savtime < :1
0 0 1 0.19 1.1 a7z22y3zmhtqs
select from_tz( cast ((max(analyzetime) - 1) as timestamp), to_cha
r(systimestamp, 'TZH:TZM')) from sys.tab$ where analyzetime is not null
0 0 1 0.15 1.3 0gf6adkbuyytg
INSERT INTO STATS$FILE_HISTOGRAM ( SNAP_ID , DBID , INSTANCE_NUMBER , FILE# , SI
NGLEBLKRDTIM_MILLI , SINGLEBLKRDS ) SELECT :B3 , :B2 , :B1 , FILE# , SINGLEBLKRD
TIM_MILLI , SINGLEBLKRDS FROM V$FILE_HISTOGRAM WHERE SINGLEBLKRDS > 0
0 5 1 0.13 25.7 925d7dd714u48
INSERT INTO STATS$LATCH ( SNAP_ID , DBID , INSTANCE_NUMBER , NAME , LATCH# , LEV
EL# , GETS , MISSES , SLEEPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , W
AIT_TIME ) SELECT :B3 , :B2 , :B1 , NAME , LATCH# , LEVEL# , GETS , MISSES , SLE
EPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , WAIT_TIME FROM V$LATCH
0 0 1 0.13 1.9 299gaqtz96qfy
Module: MMON_SLAVE
delete from WRI$_SEGADV_OBJLIST where creation_time < :1
0 0 1 0.13 2.4 ctm013tzmdg6k
delete from WRH$_WAITSTAT_BL tab where (:beg_snap <= tab.snap_id and ta
b.snap_id <= :end_snap and dbid = :dbid) and not exists (select 1 fro
m WRM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and (tab.s
0 0 38 0.00 1.7 83taa7kaw59c1
select name,intcol#,segcol#,type#,length,nvl(precision#,0),decode(type#,2,nvl(sc
ale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180,scale,181,scale,182,scale,183,s
cale,231,scale,0),null$,fixedstorage,nvl(deflength,0),default$,rowid,col#,proper
ty, nvl(charsetid,0),nvl(charsetform,0),spare1,spare2,nvl(spare3,0) from col$ wh
0 0 1 0.12 1.4 0m4v8p44shy83
Module: MMON_SLAVE
delete from WRI$_SEGADV_CNTRLTAB where start_time < :1
0 1 1 0.11 3.3 dyd4b36t1ppph
delete from wrh$_sqltext tab where (tab.dbid = :dbid and :beg_snap <= t
ab.snap_id and tab.snap_id <= :end_snap and tab.ref_count = 0) an
d not exists (select 1 from WRM$_BASELINE b where (b.db
id = :dbid2 and tab.snap_id >= b.start_snap_id a
0 0 1 0.10 1.2 5yhxd1urrkq31
INSERT INTO STATS$PARAMETER ( SNAP_ID , DBID , INSTANCE_NUMBER , NAME , VALUE ,
ISDEFAULT , ISMODIFIED ) SELECT :B3 , :B2 , :B1 , I.KSPPINM , SV.KSPPSTVL , SV.K
SPPSTDF , DECODE(BITAND(SV.KSPPSTVF,7),1,'MODIFIED',4,'SYSTEM_MOD','FALSE') FROM
STATS$X$KSPPI I , STATS$X$KSPPSV SV WHERE I.INDX = SV.INDX AND ( ( (TRANSLATE(K
0 0 1 0.10 2.4 0fr34wn27cjvt
Module: MMON_SLAVE
delete from WRI$_ALERT_HISTORY where time_suggested < :1
0 1 1 0.10 3.4 ff0hrfad01nf1
delete from WRH$_SQLSTAT_BL tab where (:beg_snap <= tab.snap_id and tab
.snap_id <= :end_snap and dbid = :dbid) and not exists (select 1 from
WRM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and (tab.sn
0 0 1 0.10 1.2 dvzr1zfmdddga
INSERT INTO STATS$SYSSTAT ( SNAP_ID , DBID , INSTANCE_NUMBER , STATISTIC# , NAME
, VALUE ) SELECT :B3 , :B2 , :B1 , STATISTIC# , NAME , VALUE FROM V$SYSSTAT
0 0 146 0.00 1.1 96g93hntrzjtr
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_
size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, a
vgcln from hist_head$ where obj#=:1 and intcol#=:2
0 0 1 0.10 2.4 0nwqx4g4gn1zp
delete from sys.wri$_optstat_opr where start_time < :1
0 0 1 0.09 1.9 bq0xuw807fdju
INSERT INTO STATS$EVENT_HISTOGRAM ( SNAP_ID , DBID , INSTANCE_NUMBER , EVENT_ID
, WAIT_TIME_MILLI , WAIT_COUNT ) SELECT :B3 , :B2 , :B1 , EN.EVENT_ID , WAIT_TIM
E_MILLI , WAIT_COUNT FROM V$EVENT_HISTOGRAM EH , V$EVENT_NAME EN WHERE EH.EVENT
= EN.NAME AND EH.EVENT# = EN.EVENT#
0 0 1 0.09 2.0 1c50d5m0uh16j
delete from WRH$_ACTIVE_SESSION_HISTORY_BL tab where (:beg_snap <= tab.snap_id
and tab.snap_id <= :end_snap and dbid = :dbid) and not exists
(select 1 from WRM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and
0 0 1 0.09 1.3 c3amcasx93pvb
INSERT INTO STATS$FILESTATXS ( SNAP_ID , DBID , INSTANCE_NUMBER , TSNAME , FILEN
AME , PHYRDS , PHYWRTS , SINGLEBLKRDS , READTIM , WRITETIM , SINGLEBLKRDTIM , PH
YBLKRD , PHYBLKWRT , WAIT_COUNT , TIME , FILE# ) SELECT :B3 , :B2 , :B1 , TSNAME
, FILENAME , PHYRDS , PHYWRTS , SINGLEBLKRDS , READTIM , WRITETIM , SINGLEBLKRD
0 0 1 0.09 1.7 2ptnd5byk1qh7
delete from WRH$_LATCH_PARENT_BL tab where (:beg_snap <= tab.snap_id and
tab.snap_id <= :end_snap and dbid = :dbid) and not exists (select 1
from WRM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and (t
0 0 1 0.08 1.2 dcj4ww2c5bq4d
INSERT INTO STATS$TEMPSTATXS ( SNAP_ID , DBID , INSTANCE_NUMBER , TSNAME , FILEN
AME , PHYRDS , PHYWRTS , SINGLEBLKRDS , READTIM , WRITETIM , SINGLEBLKRDTIM , PH
YBLKRD , PHYBLKWRT , WAIT_COUNT , TIME , FILE# ) SELECT :B3 , :B2 , :B1 , TSNAME
, FILENAME , PHYRDS , PHYWRTS , SINGLEBLKRDS , READTIM , WRITETIM , SINGLEBLKRD
0 0 1 0.08 2.1 0y69qk6t30t61
delete from WRM$_SNAP_ERROR tab where (:beg_snap <= tab.snap_id and tab
.snap_id <= :end_snap and dbid = :dbid) and not exists (select 1 from
WRM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and (tab.sn
0 0 161 0.00 1.2 3c1kubcdjnppq
update sys.col_usage$ set equality_preds = equality_preds + decode(bitan
d(:flag,1),0,0,1), equijoin_preds = equijoin_preds + decode(bitand(:flag
,2),0,0,1), nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,4),0,0
,1), range_preds = range_preds + decode(bitand(:flag,8),0,0,1),
0 1 1 0.06 6.7 fs9syj0mtwbt0
INSERT INTO STATS$ROWCACHE_SUMMARY ( SNAP_ID , DBID , INSTANCE_NUMBER , PARAMETE
R , TOTAL_USAGE , USAGE , GETS , GETMISSES , SCANS , SCANMISSES , SCANCOMPLETES
, MODIFICATIONS , FLUSHES , DLM_REQUESTS , DLM_CONFLICTS , DLM_RELEASES ) SELECT
:B3 , :B2 , :B1 , PARAMETER , SUM("COUNT") , SUM(USAGE) , SUM(GETS) , SUM(GETMI
0 1 1 0.02 3.6 5v76s30g8tfyf
INSERT INTO STATS$PROCESS_MEMORY_ROLLUP ( SNAP_ID , DBID , INSTANCE_NUMBER , PID
, SERIAL# , CATEGORY , ALLOCATED , USED , MAX_ALLOCATED , MAX_MAX_ALLOCATED , A
VG_ALLOCATED , STDDEV_ALLOCATED , NON_ZERO_ALLOCATIONS ) SELECT * FROM (SELECT :
B3 SNAP_ID , :B2 DBID , :B1 INSTANCE_NUMBER , NVL(PM.PID, -9) PID , NVL(PM.SERIA
0 0 2 0.01 1.9 6cxqh7mktnbjm
insert into smon_scn_time (thread, time_mp, time_dp, scn, scn_wrp, scn_bas, num
_mappings, tim_scn_map) values (0, :1, :2, :3, :4, :5, :6, :7)
0 0 1 0.01 2.7 4hq1ht79r1506
delete from WRH$_FILEMETRIC_HISTORY tab where (:beg_snap <= tab.snap_id and
tab.snap_id <= :end_snap and dbid = :dbid) and not exists (selec
t 1 from WRM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and
0 1 1 0.01 4.6 aqwc375jm02h2
INSERT INTO STATS$THREAD ( SNAP_ID , DBID , INSTANCE_NUMBER , THREAD# , THREAD_I
NSTANCE_NUMBER , STATUS , OPEN_TIME , CURRENT_GROUP# , SEQUENCE# ) SELECT :B3 ,
:B2 , :B1 , T.THREAD# , I.INSTANCE_NUMBER , T.STATUS , T.OPEN_TIME , T.CURRENT_G
ROUP# , T.SEQUENCE# FROM V$THREAD T , V$INSTANCE I WHERE I.THREAD#(+) = T.THREAD
-------------------------------------------------------------
SQL ordered by Gets DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets: 487,178
-> Captured SQL account for 99.1% of Total
Gets CPU Elapsed
Buffer Gets Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
429,787 1 429,787.0 88.2 11.57 36.31 24b3xmp4wd3tu
delete from sys.wri$_optstat_histgrm_history where savtime < :1
35,143 1 35,143.0 7.2 1.74 6.64 73z18fnnbb7dw
delete from sys.wri$_optstat_histhead_history where savtime < :1
3,278 1 3,278.0 0.7 0.19 0.52 8m0xucarywjpv
delete from sys.wri$_optstat_ind_history where savtime < :1
3,128 1 3,128.0 0.6 0.30 0.60 9w1zcwvsbtqur
delete from sys.wri$_optstat_tab_history where savtime < :1
3,022 1 3,022.0 0.6 3.37 12.72 24hc2470c87up
delete from WRH$_SQL_PLAN tab where (:beg_snap <= tab.snap_id and tab.s
nap_id <= :end_snap and dbid = :dbid) and not exists (select 1 from W
RM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and (tab.snap
1,749 1 1,749.0 0.4 2.48 10.85 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
953 1 953.0 0.2 0.19 0.20 a7z22y3zmhtqs
select from_tz( cast ((max(analyzetime) - 1) as timestamp), to_cha
r(systimestamp, 'TZH:TZM')) from sys.tab$ where analyzetime is not null
612 204 3.0 0.1 0.04 0.05 5ngzsfstg8tmy
select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,o.dataobj#
,o.flags from obj$ o where o.obj#=:1
578 2 289.0 0.1 0.01 0.01 7cq8d0jqxzum1
delete from smon_scn_time where thread=0 and scn = (select min(scn) from smon_s
cn_time where thread=0)
568 161 3.5 0.1 0.07 0.21 3c1kubcdjnppq
update sys.col_usage$ set equality_preds = equality_preds + decode(bitan
d(:flag,1),0,0,1), equijoin_preds = equijoin_preds + decode(bitand(:flag
,2),0,0,1), nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,4),0,0
,1), range_preds = range_preds + decode(bitand(:flag,8),0,0,1),
-------------------------------------------------------------
SQL ordered by Reads DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Total Disk Reads: 5,130
-> Captured SQL account for 96.7% of Total
Reads CPU Elapsed
Physical Reads Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
2,833 1 2,833.0 55.2 3.37 12.72 24hc2470c87up
delete from WRH$_SQL_PLAN tab where (:beg_snap <= tab.snap_id and tab.s
nap_id <= :end_snap and dbid = :dbid) and not exists (select 1 from W
RM$_BASELINE b where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and (tab.snap
1,304 1 1,304.0 25.4 11.57 36.31 24b3xmp4wd3tu
delete from sys.wri$_optstat_histgrm_history where savtime < :1
381 1 381.0 7.4 1.74 6.64 73z18fnnbb7dw
delete from sys.wri$_optstat_histhead_history where savtime < :1
150 1 150.0 2.9 0.11 0.59 dyd4b36t1ppph
delete from wrh$_sqltext tab where (tab.dbid = :dbid and :beg_snap <= t
ab.snap_id and tab.snap_id <= :end_snap and tab.ref_count = 0) an
d not exists (select 1 from WRM$_BASELINE b where (b.db
id = :dbid2 and tab.snap_id >= b.start_snap_id a
72 1 72.0 1.4 0.10 0.42 0fr34wn27cjvt
Module: MMON_SLAVE
delete from WRI$_ALERT_HISTORY where time_suggested < :1
57 1 57.0 1.1 2.48 10.85 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
52 1 52.0 1.0 0.19 0.52 8m0xucarywjpv
delete from sys.wri$_optstat_ind_history where savtime < :1
50 1 50.0 1.0 0.30 0.60 9w1zcwvsbtqur
delete from sys.wri$_optstat_tab_history where savtime < :1
24 113 0.2 0.5 0.21 1.51 db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucket
13 1 13.0 0.3 0.09 0.34 bq0xuw807fdju
INSERT INTO STATS$EVENT_HISTOGRAM ( SNAP_ID , DBID , INSTANCE_NUMBER , EVENT_ID
, WAIT_TIME_MILLI , WAIT_COUNT ) SELECT :B3 , :B2 , :B1 , EN.EVENT_ID , WAIT_TIM
E_MILLI , WAIT_COUNT FROM V$EVENT_HISTOGRAM EH , V$EVENT_NAME EN WHERE EH.EVENT
= EN.NAME AND EH.EVENT# = EN.EVENT#
-------------------------------------------------------------
SQL ordered by Executions DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Total Executions: 2,211
-> Captured SQL account for 62.9% of Total
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) SQL Id
------------ --------------- -------------- ---------- ----------- -------------
204 204 1.0 0.00 0.00 5ngzsfstg8tmy
select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,o.dataobj#
,o.flags from obj$ o where o.obj#=:1
161 161 1.0 0.00 0.00 3c1kubcdjnppq
update sys.col_usage$ set equality_preds = equality_preds + decode(bitan
d(:flag,1),0,0,1), equijoin_preds = equijoin_preds + decode(bitand(:flag
,2),0,0,1), nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,4),0,0
,1), range_preds = range_preds + decode(bitand(:flag,8),0,0,1),
146 146 1.0 0.00 0.00 96g93hntrzjtr
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_
size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, a
vgcln from hist_head$ where obj#=:1 and intcol#=:2
120 1 0.0 0.00 0.00 6ssrk2dqj7jbx
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= next_date) and (n
ext_date <= :2)) or ((last_date is null) and (next_date < :3))) and (field1
= :4 or (field1 = 0 and 'Y' = :5)) and (this_date is null) order by next_date, j
ob
113 2,019 17.9 0.00 0.01 db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucket
110 0 0.0 0.00 0.00 350f5yrnnmshs
lock table sys.mon_mods$ in exclusive mode nowait
110 107 1.0 0.00 0.00 g00cj285jmgsw
update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd, del
etes = deletes + :del, flags = (decode(bitand(flags, :flag), :flag, flags, flags
+ :flag)), drop_segments = drop_segments + :dropseg, timestamp = :time where ob
j# = :objn
84 0 0.0 0.00 0.00 b2gnxm5z6r51n
lock table sys.col_usage$ in exclusive mode nowait
80 80 1.0 0.00 0.00 2ym6hhaq30r73
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(lis
ts,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0),NVL(scanhint,0) fro
m seg$ where ts#=:1 and file#=:2 and block#=:3
47 47 1.0 0.00 0.00 04xtrk7uyhknh
select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1, spare2 f
rom obj$ where owner#=:1 and name=:2 and namespace=:3 and remoteowner is null an
d linkname is null and subname is null
38 541 14.2 0.00 0.01 83taa7kaw59c1
select name,intcol#,segcol#,type#,length,nvl(precision#,0),decode(type#,2,nvl(sc
ale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180,scale,181,scale,182,scale,183,s
cale,231,scale,0),null$,fixedstorage,nvl(deflength,0),default$,rowid,col#,proper
ty, nvl(charsetid,0),nvl(charsetform,0),spare1,spare2,nvl(spare3,0) from col$ wh
27 27 1.0 0.00 0.00 bsa0wjtftg3uw
select file# from file$ where ts#=:1
-------------------------------------------------------------
SQL ordered by Parse Calls DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Total Parse Calls: 931
-> Captured SQL account for 67.1% of Total
% Total
Parse Calls Executions Parses SQL Id
------------ ------------ --------- -------------
110 110 11.82 350f5yrnnmshs
lock table sys.mon_mods$ in exclusive mode nowait
110 110 11.82 g00cj285jmgsw
update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd, del
etes = deletes + :del, flags = (decode(bitand(flags, :flag), :flag, flags, flags
+ :flag)), drop_segments = drop_segments + :dropseg, timestamp = :time where ob
j# = :objn
84 161 9.02 3c1kubcdjnppq
update sys.col_usage$ set equality_preds = equality_preds + decode(bitan
d(:flag,1),0,0,1), equijoin_preds = equijoin_preds + decode(bitand(:flag
,2),0,0,1), nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,4),0,0
,1), range_preds = range_preds + decode(bitand(:flag,8),0,0,1),
84 0 9.02 53btfq0dt9bs9
insert into sys.col_usage$ values ( :objn, :coln, decode(bitand(:flag,1),0,0
,1), decode(bitand(:flag,2),0,0,1), decode(bitand(:flag,4),0,0,1), decode(
bitand(:flag,8),0,0,1), decode(bitand(:flag,16),0,0,1), decode(bitand(:flag,
32),0,0,1), :time)
84 84 9.02 b2gnxm5z6r51n
lock table sys.col_usage$ in exclusive mode nowait
80 80 8.59 2ym6hhaq30r73
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(lis
ts,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0),NVL(scanhint,0) fro
m seg$ where ts#=:1 and file#=:2 and block#=:3
27 27 2.90 bsa0wjtftg3uw
select file# from file$ where ts#=:1
14 14 1.50 49s332uhbnsma
declare vsn varchar2(20); begin vsn :=
dbms_rcvman.getPackageVersion; :pkg_vsn:pkg_vsn_i := vsn;
if vsn is not null then :pkg_vsnub4 :=
to_number(substr(vsn,1,2) || substr(vsn,4,2) || s
4 4 0.43 130dvvr5s8bgn
select obj#, dataobj#, part#, hiboundlen, hiboundval, ts#, file#, block#, pctfre
e$, pctused$, initrans, maxtrans, flags, analyzetime, samplesize, rowcnt, blkcnt
, empcnt, avgspc, chncnt, avgrln, length(bhiboundval), bhiboundval from tabpart$
where bo# = :1 order by part#
4 4 0.43 19rkm1wsf9axx
insert into WRI$_ALERT_HISTORY (sequence_id, reason_id, owner, object_name, subo
bject_name, reason_argument_1, reason_argument_2, reason_argument_3, reason_argu
ment_4, reason_argument_5, time_suggested, creation_time, action_argument_1, act
ion_argument_2, action_argument_3, action_argument_4, action_argument_5, message
-------------------------------------------------------------
SQL ordered by Sharable Memory DB/Inst: IVRS/ivrs Snaps: 1120-1121
No data exists for this section of the report.
-------------------------------------------------------------
SQL ordered by Version Count DB/Inst: IVRS/ivrs Snaps: 1120-1121
No data exists for this section of the report.
-------------------------------------------------------------
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 1120-1121
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
CPU used by this session 3,530 5.9 271.5
CPU used when call started 3,079 5.1 236.9
CR blocks created 3 0.0 0.2
DB time 7,628 12.7 586.8
DBWR checkpoint buffers written 4,873 8.1 374.9
DBWR checkpoints 1 0.0 0.1
DBWR transaction table writes 10 0.0 0.8
DBWR undo block writes 3,082 5.1 237.1
IMU CR rollbacks 0 0.0 0.0
IMU Flushes 11 0.0 0.9
IMU Redo allocation size 52,520 87.4 4,040.0
IMU commits 2 0.0 0.2
IMU contention 0 0.0 0.0
IMU pool not allocated 1 0.0 0.1
IMU undo allocation size 60,368 100.4 4,643.7
IMU- failed to get a private str 1 0.0 0.1
SQL*Net roundtrips to/from clien 0 0.0 0.0
active txn count during cleanout 326 0.5 25.1
background checkpoints completed 1 0.0 0.1
background checkpoints started 1 0.0 0.1
background timeouts 2,331 3.9 179.3
buffer is not pinned count 5,734 9.5 441.1
buffer is pinned count 137,121 228.1 10,547.8
bytes received via SQL*Net from 0 0.0 0.0
bytes sent via SQL*Net to client 0 0.0 0.0
calls to get snapshot scn: kcmgs 2,824 4.7 217.2
calls to kcmgas 884 1.5 68.0
calls to kcmgcs 47 0.1 3.6
change write time 674 1.1 51.9
cleanout - number of ktugct call 432 0.7 33.2
cleanouts and rollbacks - consis 0 0.0 0.0
cleanouts only - consistent read 6 0.0 0.5
cluster key scan block gets 1,480 2.5 113.9
cluster key scans 406 0.7 31.2
commit batch/immediate performed 2 0.0 0.2
commit batch/immediate requested 2 0.0 0.2
commit cleanout failures: callba 10 0.0 0.8
commit cleanouts 1,874 3.1 144.2
commit cleanouts successfully co 1,864 3.1 143.4
commit immediate performed 2 0.0 0.2
commit immediate requested 2 0.0 0.2
commit txn count during cleanout 133 0.2 10.2
concurrency wait time 78 0.1 6.0
consistent changes 3 0.0 0.2
consistent gets 14,156 23.6 1,088.9
consistent gets - examination 3,869 6.4 297.6
consistent gets from cache 14,156 23.6 1,088.9
cursor authentications 27 0.0 2.1
data blocks consistent reads - u 3 0.0 0.2
db block changes 417,655 694.8 32,127.3
db block gets 473,022 786.9 36,386.3
db block gets from cache 473,022 786.9 36,386.3
deferred (CURRENT) block cleanou 699 1.2 53.8
enqueue conversions 188 0.3 14.5
enqueue releases 5,963 9.9 458.7
enqueue requests 5,965 9.9 458.9
enqueue timeouts 2 0.0 0.2
enqueue waits 1 0.0 0.1
execute count 2,211 3.7 170.1
free buffer inspected 1,892 3.2 145.5
free buffer requested 9,246 15.4 711.2
heap block compress 2 0.0 0.2
hot buffers moved to head of LRU 1 0.0 0.1
immediate (CR) block cleanout ap 6 0.0 0.5
immediate (CURRENT) block cleano 695 1.2 53.5
index crx upgrade (positioned) 179 0.3 13.8
index fast full scans (full) 1 0.0 0.1
index fetch by key 1,447 2.4 111.3
index scans kdiixs1 1,578 2.6 121.4
leaf node 90-10 splits 6 0.0 0.5
leaf node splits 26 0.0 2.0
lob reads 8 0.0 0.6
lob writes 0 0.0 0.0
lob writes unaligned 0 0.0 0.0
logons cumulative 5 0.0 0.4
messages received 1,104 1.8 84.9
messages sent 1,104 1.8 84.9
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 9,524 15.8 732.6
opened cursors cumulative 1,878 3.1 144.5
parse count (failures) 0 0.0 0.0
parse count (hard) 89 0.2 6.9
parse count (total) 931 1.6 71.6
parse time cpu 56 0.1 4.3
parse time elapsed 59 0.1 4.5
physical read IO requests 1,065 1.8 81.9
physical read bytes 42,024,960 69,909.6 3,232,689.2
physical read total IO requests 2,756 4.6 212.0
physical read total bytes 258,244,096 429,595.6 19,864,930.5
physical read total multi block 757 1.3 58.2
physical reads 5,130 8.5 394.6
physical reads cache 5,117 8.5 393.6
physical reads cache prefetch 4,065 6.8 312.7
physical reads direct 13 0.0 1.0
physical reads direct temporary 0 0.0 0.0
physical reads prefetch warmup 1,244 2.1 95.7
physical write IO requests 837 1.4 64.4
physical write bytes 40,026,112 66,584.5 3,078,931.7
physical write total IO requests 2,016 3.4 155.1
physical write total bytes 158,405,632 263,511.8 12,185,048.6
physical write total multi block 753 1.3 57.9
physical writes 4,886 8.1 375.9
physical writes direct 13 0.0 1.0
physical writes from cache 4,873 8.1 374.9
physical writes non checkpoint 4,551 7.6 350.1
pinned buffers inspected 1 0.0 0.1
recursive calls 22,325 37.1 1,717.3
recursive cpu usage 3,330 5.5 256.2
redo blocks written 104,869 174.5 8,066.9
redo buffer allocation retries 3 0.0 0.2
redo entries 209,315 348.2 16,101.2
redo log space requests 3 0.0 0.2
redo log space wait time 220 0.4 16.9
redo ordering marks 6 0.0 0.5
redo size 51,857,376 86,266.1 3,989,028.9
redo synch time 0 0.0 0.0
redo synch writes 150 0.3 11.5
redo wastage 80,764 134.4 6,212.6
redo write time 1,112 1.9 85.5
redo writer latching time 0 0.0 0.0
redo writes 277 0.5 21.3
rollback changes - undo records 89 0.2 6.9
rollbacks only - consistent read 3 0.0 0.2
rows fetched via callback 616 1.0 47.4
session cursor cache hits 1,390 2.3 106.9
session logical reads 487,178 810.4 37,475.2
session pga memory 3,582,928 5,960.3 275,609.9
session pga memory max 5,466,444 9,093.6 420,495.7
session uga memory 4,295,094,396 7,144,998.5 #############
session uga memory max 6,205,016 10,322.2 477,308.9
shared hash latch upgrades - no 229 0.4 17.6
sorts (memory) 621 1.0 47.8
sorts (rows) 21,258 35.4 1,635.2
sql area evicted 0 0.0 0.0
sql area purged 0 0.0 0.0
switch current to new buffer 608 1.0 46.8
table fetch by rowid 69,435 115.5 5,341.2
table fetch continued row 4 0.0 0.3
table scan blocks gotten 4,656 7.8 358.2
table scan rows gotten 151,229 251.6 11,633.0
table scans (long tables) 1 0.0 0.1
table scans (short tables) 181 0.3 13.9
total number of times SMON poste 0 0.0 0.0
transaction rollbacks 2 0.0 0.2
undo change vector size 24,233,936 40,313.8 1,864,148.9
user I/O wait time 4,812 8.0 370.2
user calls 15 0.0 1.2
user commits 11 0.0 0.9
user rollbacks 2 0.0 0.2
workarea executions - optimal 310 0.5 23.9
write clones created in foregrou 0 0.0 0.0
-------------------------------------------------------------
Instance Activity Stats - Absolute Values DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Statistics with absolute values (should not be diffed)
Statistic Begin Value End Value
-------------------------------- --------------- ---------------
session cursor cache count 549 604
opened cursors current 38 38
logons current 21 21
-------------------------------------------------------------
Instance Activity Stats - Thread Activity DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Statistics identified by '(derived)' come from sources other than SYSSTAT
Statistic Total per Hour
-------------------------------- ------------------ ---------
log switches (derived) 1 5.99
-------------------------------------------------------------
Tablespace IO Stats DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> ordered by IOs (Reads + Writes) desc
Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
SYSAUX
984 2 39.9 5.1 455 1 0 0.0
UNDOTBS1
2 0 50.0 1.0 212 0 0 0.0
USERS
39 0 177.2 1.5 145 0 0 0.0
SYSTEM
29 0 54.8 1.0 18 0 0 0.0
CCDATA
1 0 0.0 1.0 1 0 0 0.0
CCINDEX
1 0 0.0 1.0 1 0 0 0.0
PSE
1 0 0.0 1.0 1 0 0 0.0
SOE
1 0 0.0 1.0 1 0 0 0.0
SOEINDEX
1 0 0.0 1.0 1 0 0 0.0
TPCCTAB
1 0 0.0 1.0 1 0 0 0.0
TPCHTAB
1 0 0.0 1.0 1 0 0 0.0
-------------------------------------------------------------
File IO Stats DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> ordered by Tablespace, File
Tablespace Filename
------------------------ ----------------------------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
CCDATA +DATA_1/ivrs/datafile/ccdata.dbf
1 0 0.0 1.0 1 0 0 0.0
CCINDEX +DATA_1/ivrs/datafile/ccindex.dbf
1 0 0.0 1.0 1 0 0 0.0
PSE +DATA_1/ivrs/pse.dbf
1 0 0.0 1.0 1 0 0 0.0
SOE +DATA_1/ivrs/datafile/soe.dbf
1 0 0.0 1.0 1 0 0 0.0
SOEINDEX +DATA_1/ivrs/datafile/soeindex.dbf
1 0 0.0 1.0 1 0 0 0.0
SYSAUX +DATA_1/ivrs/datafile/sysaux.258.652821943
984 2 39.9 5.1 455 1 0 0.0
SYSTEM +DATA_1/ivrs/datafile/system.267.652821909
28 0 56.8 1.0 17 0 0 0.0
SYSTEM +DATA_1/ivrs/datafile/system_02.dbf
1 0 0.0 1.0 1 0 0 0.0
TPCCTAB +DATA_1/ivrs/tpcctab01.dbf
1 0 0.0 1.0 1 0 0 0.0
TPCHTAB +DATA_1/ivrs/datafile/tpch_01.dbf
1 0 0.0 1.0 1 0 0 0.0
UNDOTBS1 +DATA_1/ivrs/datafile/undotbs1.257.652821933
2 0 50.0 1.0 212 0 0 0.0
USERS +DATA_1/ivrs/datafile/users.263.652821963
16 0 37.5 1.8 81 0 0 0.0
USERS +DATA_1/ivrs/datafile/users02.dbf
23 0 274.3 1.3 64 0 0 0.0
-------------------------------------------------------------
Buffer Pool Statistics DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Writ Buffer
Number of Pool Buffer Physical Physical Buff Comp Busy
P Buffers Hit% Gets Reads Writes Wait Wait Waits
--- ---------- ---- -------------- ------------ ----------- ---- ---- ----------
D 18,962 99 487,101 5,124 4,873 0 0 0
-------------------------------------------------------------
Instance Recovery Stats DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- ------------
B 0 53 132 0 184320 184320 N/A N/A
E 0 53 158 233 111123 184320 111123 N/A
-------------------------------------------------------------
Buffer Pool Advisory DB/Inst: IVRS/ivrs Snap: 1121
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate
Est
Phys
Size for Size Buffers for Read Estimated
P Est (M) Factor Estimate Factor Physical Reads
--- -------- ------ ---------------- ------ ------------------
D 12 .1 1,497 1.3 18,607
D 24 .2 2,994 1.2 16,976
D 36 .2 4,491 1.1 16,610
D 48 .3 5,988 1.1 16,010
D 60 .4 7,485 1.0 15,332
D 72 .5 8,982 1.0 14,718
D 84 .6 10,479 1.0 14,718
D 96 .6 11,976 1.0 14,705
D 108 .7 13,473 1.0 14,653
D 120 .8 14,970 1.0 14,627
D 132 .9 16,467 1.0 14,627
D 144 .9 17,964 1.0 14,627
D 152 1.0 18,962 1.0 14,627
D 156 1.0 19,461 1.0 14,627
D 168 1.1 20,958 1.0 14,627
D 180 1.2 22,455 1.0 14,627
D 192 1.3 23,952 1.0 14,627
D 204 1.3 25,449 1.0 14,627
D 216 1.4 26,946 1.0 14,627
D 228 1.5 28,443 1.0 14,627
D 240 1.6 29,940 1.0 14,627
-------------------------------------------------------------
PGA Aggr Summary DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ------------------ --------------------------
100.0 22 0
-------------------------------------------------------------
Warning: pga_aggregate_target was set too low for current workload, as this
value was exceeded during this interval. Use the PGA Advisory view
to help identify a different value for pga_aggregate_target.
PGA Aggr Target Stats DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> B: Begin snap E: End snap (rows identified with B or E contain data
   which is absolute i.e. not diffed over the interval)
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
- ---------- ---------- ---------- ---------- ------ ------ ------ ----------
B 103 49 109.0 0.0 .0 .0 .0 21,094
E 103 49 110.3 0.0 .0 .0 .0 21,094
-------------------------------------------------------------
PGA Aggr Target Histogram DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Optimal Executions are purely in-memory operations
Low High
Optimal Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- -------------- ------------ ------------
2K 4K 278 278 0 0
64K 128K 4 4 0 0
512K 1024K 28 28 0 0
-------------------------------------------------------------
PGA Memory Advisory DB/Inst: IVRS/ivrs Snap: 1121
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- ---------------- -------- ----------
13 0.1 91.5 15.7 85.0 9
26 0.3 91.5 15.7 85.0 9
52 0.5 91.5 15.7 85.0 9
77 0.8 91.5 6.1 94.0 6
103 1.0 91.5 0.0 100.0 5
124 1.2 91.5 0.0 100.0 5
144 1.4 91.5 0.0 100.0 5
165 1.6 91.5 0.0 100.0 4
185 1.8 91.5 0.0 100.0 4
206 2.0 91.5 0.0 100.0 4
309 3.0 91.5 0.0 100.0 0
412 4.0 91.5 0.0 100.0 0
618 6.0 91.5 0.0 100.0 0
824 8.0 91.5 0.0 100.0 0
-------------------------------------------------------------
Shared Pool Advisory DB/Inst: IVRS/ivrs Snap: 1121
-> SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
in the Library Cache, and the physical number of memory objects associated
with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid.
Est LC Est LC Est LC Est LC
Shared SP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
60 .4 16 2,219 1,636 1.0 194 1.1 17,505
76 .5 26 3,764 1,648 1.0 182 1.0 17,587
92 .7 26 3,764 1,648 1.0 182 1.0 17,587
108 .8 26 3,764 1,648 1.0 182 1.0 17,587
124 .9 26 3,764 1,648 1.0 182 1.0 17,587
140 1.0 26 3,764 1,648 1.0 182 1.0 17,587
156 1.1 26 3,764 1,648 1.0 182 1.0 17,587
172 1.2 26 3,764 1,648 1.0 182 1.0 17,587
188 1.3 26 3,764 1,648 1.0 182 1.0 17,587
204 1.5 26 3,764 1,648 1.0 182 1.0 17,587
220 1.6 26 3,764 1,648 1.0 182 1.0 17,587
236 1.7 26 3,764 1,648 1.0 182 1.0 17,587
252 1.8 26 3,764 1,648 1.0 182 1.0 17,587
268 1.9 26 3,764 1,648 1.0 182 1.0 17,587
284 2.0 26 3,764 1,648 1.0 182 1.0 17,587
-------------------------------------------------------------
SGA Target Advisory DB/Inst: IVRS/ivrs Snap: 1121
SGA Target SGA Size Est DB Est Physical
Size (M) Factor Time (s) Reads
---------- ---------- ------------ ----------------
156 0.5 1,022 14,704
234 0.8 1,005 14,612
312 1.0 1,005 14,612
390 1.3 1,005 14,612
468 1.5 1,005 14,612
546 1.8 1,005 14,612
624 2.0 1,005 14,612
-------------------------------------------------------------
Streams Pool Advisory DB/Inst: IVRS/ivrs Snap: 1121
Size for Size Est Spill Est Spill Est Unspill Est Unspill
Est (MB) Factor Count Time (s) Count Time (s)
---------- --------- ----------- ----------- ----------- -----------
4 1.0 0 0 0 0
8 2.0 0 0 0 0
12 3.0 0 0 0 0
16 4.0 0 0 0 0
20 5.0 0 0 0 0
24 6.0 0 0 0 0
28 7.0 0 0 0 0
32 8.0 0 0 0 0
36 9.0 0 0 0 0
40 10.0 0 0 0 0
44 11.0 0 0 0 0
48 12.0 0 0 0 0
52 13.0 0 0 0 0
56 14.0 0 0 0 0
60 15.0 0 0 0 0
64 16.0 0 0 0 0
68 17.0 0 0 0 0
72 18.0 0 0 0 0
76 19.0 0 0 0 0
80 20.0 0 0 0 0
-------------------------------------------------------------
Java Pool Advisory DB/Inst: IVRS/ivrs Snap: 1121
Est LC Est LC Est LC Est LC
Java JP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
8 1.0 1 132 24 1.0 182 1.0 55
12 1.5 1 132 24 1.0 182 1.0 55
16 2.0 1 132 24 1.0 182 1.0 55
-------------------------------------------------------------
Buffer Wait Statistics DB/Inst: IVRS/ivrs Snaps: 1120-1121
No data exists for this section of the report.
-------------------------------------------------------------
Enqueue Activity DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc
Enqueue Type (Request Reason)
------------------------------------------------------------------------------
Requests Succ Gets Failed Gets Waits Wt Time (s) Av Wt Time(ms)
------------ ------------ ----------- ----------- ------------ --------------
CF-Controlfile Transaction
329 328 1 1 1 1,420.00
-------------------------------------------------------------
Undo Segment Summary DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Num Undo Number of Max Qry Max Tx Min/Max STO/ uS/uR/uU/
TS# Blocks (K) Transactions Len (s) Concurcy TR (mins) OOS eS/eR/eU
---- ---------- --------------- -------- -------- --------- ----- --------------
1 3.2 310 0 3 15/15 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Undo Segment Stats DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Most recent 35 Undostat rows, ordered by Time desc
Num Undo Number of Max Qry Max Tx Tun Ret STO/ uS/uR/uU/
End Time Blocks Transactions Len (s) Concy (mins) OOS eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- ------------
01-Jun 23:50 2,907 250 0 3 15 0/0 0/0/0/0/0/0
01-Jun 23:40 251 60 0 3 15 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Latch Activity DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Name Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
ASM allocation 148 0.0 N/A 0 0 N/A
ASM db client latch 399 0.0 N/A 0 0 N/A
ASM map headers 60 0.0 N/A 0 0 N/A
ASM map load waiting lis 10 0.0 N/A 0 0 N/A
ASM map operation freeli 180 0.0 N/A 0 0 N/A
ASM map operation hash t 34,306 0.0 N/A 0 0 N/A
ASM network background l 305 0.0 N/A 0 0 N/A
AWR Alerted Metric Eleme 2,242 0.0 N/A 0 0 N/A
Consistent RBA 278 0.0 N/A 0 0 N/A
FAL request queue 15 0.0 N/A 0 0 N/A
FAL subheap alocation 15 0.0 N/A 0 0 N/A
FIB s.o chain latch 24 0.0 N/A 0 0 N/A
FOB s.o list latch 112 0.0 N/A 0 0 N/A
In memory undo latch 431 0.0 N/A 0 161 0.0
JS queue state obj latch 4,320 0.0 N/A 0 0 N/A
JS slv state obj latch 3 0.0 N/A 0 0 N/A
KFK SGA context latch 297 0.0 N/A 0 0 N/A
KFMD SGA 14 0.0 N/A 0 0 N/A
KMG MMAN ready and start 200 0.0 N/A 0 0 N/A
KTF sga latch 2 0.0 N/A 0 190 0.0
KWQMN job cache list lat 8 0.0 N/A 0 0 N/A
KWQP Prop Status 3 0.0 N/A 0 0 N/A
MQL Tracking Latch 0 N/A N/A 0 12 0.0
Memory Management Latch 0 N/A N/A 0 200 0.0
OS process 36 0.0 N/A 0 0 N/A
OS process allocation 224 0.0 N/A 0 0 N/A
OS process: request allo 8 0.0 N/A 0 0 N/A
PL/SQL warning settings 28 0.0 N/A 0 0 N/A
Reserved Space Latch 3 0.0 N/A 0 0 N/A
SGA IO buffer pool latch 269 0.0 N/A 0 301 0.0
SQL memory manager latch 14 0.0 N/A 0 199 0.0
SQL memory manager worka 14,361 0.0 N/A 0 0 N/A
Shared B-Tree 27 0.0 N/A 0 0 N/A
active checkpoint queue 1,029 0.0 N/A 0 0 N/A
active service list 1,224 0.0 N/A 0 203 0.0
archive control 16 0.0 N/A 0 0 N/A
archive process latch 213 0.0 N/A 0 0 N/A
cache buffer handles 96 0.0 N/A 0 0 N/A
cache buffers chains 1,944,376 0.0 N/A 0 14,867 0.0
cache buffers lru chain 22,199 0.0 N/A 0 4,006 0.0
cache table scan latch 0 N/A N/A 0 234 0.0
channel handle pool latc 8 0.0 N/A 0 0 N/A
channel operations paren 2,827 0.0 N/A 0 0 N/A
checkpoint queue latch 12,550 0.0 N/A 0 4,821 0.0
client/application info 8 0.0 N/A 0 0 N/A
commit callback allocati 16 0.0 N/A 0 0 N/A
compile environment latc 16 0.0 N/A 0 0 N/A
dml lock allocation 973 0.0 N/A 0 0 N/A
dummy allocation 10 0.0 N/A 0 0 N/A
enqueue hash chains 12,105 0.0 N/A 0 26 0.0
enqueues 10,457 0.0 N/A 0 0 N/A
event group latch 4 0.0 N/A 0 0 N/A
file cache latch 56 0.0 N/A 0 0 N/A
hash table column usage 251 0.0 N/A 0 1,043 0.0
hash table modification 7 0.0 N/A 0 0 N/A
job workq parent latch 0 N/A N/A 0 2 0.0
job_queue_processes para 11 0.0 N/A 0 0 N/A
kks stats 178 0.0 N/A 0 0 N/A
ksuosstats global area 44 0.0 N/A 0 0 N/A
ksv instance 2 0.0 N/A 0 0 N/A
ktm global data 2 0.0 N/A 0 0 N/A
kwqbsn:qsga 26 0.0 N/A 0 0 N/A
lgwr LWN SCN 423 0.0 N/A 0 0 N/A
library cache 6,302 0.0 N/A 0 0 N/A
library cache load lock 290 0.0 N/A 0 10 0.0
library cache lock 3,416 0.0 N/A 0 0 N/A
library cache lock alloc 74 0.0 N/A 0 0 N/A
library cache pin 1,834 0.0 N/A 0 0 N/A
library cache pin alloca 6 0.0 N/A 0 0 N/A
list of block allocation 156 0.0 N/A 0 0 N/A
loader state object free 12 0.0 N/A 0 0 N/A
logminer context allocat 1 0.0 N/A 0 0 N/A
messages 7,157 0.0 N/A 0 0 N/A
mostly latch-free SCN 423 0.0 N/A 0 0 N/A
msg queue 8 0.0 N/A 0 8 0.0
multiblock read objects 1,252 0.0 N/A 0 6 0.0
ncodef allocation latch 9 0.0 N/A 0 0 N/A
object queue header heap 1,076 0.0 N/A 0 0 N/A
object queue header oper 25,351 0.0 N/A 0 0 N/A
object stats modificatio 53 0.0 N/A 0 2 0.0
parallel query alloc buf 80 0.0 N/A 0 0 N/A
parameter table allocati 5 0.0 N/A 0 0 N/A
post/wait queue 9 0.0 N/A 0 6 0.0
process allocation 8 0.0 N/A 0 4 0.0
process group creation 8 0.0 N/A 0 0 N/A
qmn task queue latch 84 0.0 N/A 0 0 N/A
redo allocation 1,306 0.0 N/A 0 209,294 0.0
redo copy 0 N/A N/A 0 209,292 0.0
redo writing 2,283 0.0 N/A 0 0 N/A
reservation so alloc lat 2 0.0 N/A 0 0 N/A
resmgr group change latc 1 0.0 N/A 0 0 N/A
resmgr:actses active lis 6 0.0 N/A 0 0 N/A
resmgr:actses change gro 1 0.0 N/A 0 0 N/A
resmgr:free threads list 4 0.0 N/A 0 0 N/A
resmgr:schema config 2 0.0 N/A 0 0 N/A
row cache objects 11,172 0.0 N/A 0 0 N/A
rules engine aggregate s 8 0.0 N/A 0 0 N/A
rules engine rule set st 216 0.0 N/A 0 0 N/A
segmented array pool 20 0.0 N/A 0 0 N/A
sequence cache 18 0.0 N/A 0 0 N/A
session allocation 35,098 0.0 N/A 0 0 N/A
session idle bit 40 0.0 N/A 0 0 N/A
session state list latch 18 0.0 N/A 0 0 N/A
session switching 9 0.0 N/A 0 0 N/A
session timer 203 0.0 N/A 0 0 N/A
shared pool 8,441 0.0 N/A 0 0 N/A
shared pool sim alloc 3 0.0 N/A 0 0 N/A
shared pool simulator 1,003 0.0 N/A 0 0 N/A
simulator hash latch 78,582 0.0 N/A 0 0 N/A
simulator lru latch 76,921 0.0 N/A 0 557 0.0
slave class 27 0.0 N/A 0 0 N/A
slave class create 31 3.2 1.0 0 0 N/A
sort extent pool 14 0.0 N/A 0 0 N/A
state object free list 2 0.0 N/A 0 0 N/A
statistics aggregation 140 0.0 N/A 0 0 N/A
threshold alerts latch 73 0.0 N/A 0 0 N/A
transaction allocation 16 0.0 N/A 0 0 N/A
transaction branch alloc 9 0.0 N/A 0 0 N/A
undo global data 1,435 0.0 N/A 0 0 N/A
user lock 2 0.0 N/A 0 0 N/A
-------------------------------------------------------------
Latch Sleep Breakdown DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> ordered by misses desc
Latch Name
----------------------------------------
Get Requests Misses Sleeps Spin Gets Sleep1 Sleep2 Sleep3
-------------- ----------- ----------- ---------- -------- -------- --------
slave class create
31 1 1 0 0 0 0
-------------------------------------------------------------
Latch Miss Sources DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
slave class create ksvcreate 0 1 0
-------------------------------------------------------------
Parent Latch Statistics DB/Inst: IVRS/ivrs Snaps: 1120-1121
No data exists for this section of the report.
-------------------------------------------------------------
Child Latch Statistics DB/Inst: IVRS/ivrs Snaps: 1120-1121
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Logical Reads DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Total Logical Reads: 487,178
-> Captured Segments account for 97.5% of Total
Tablespace Subobject Obj. Logical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS SYSAUX I_WRI$_OPTSTAT_H_ST INDEX 181,728 37.30
SYS SYSAUX I_WRI$_OPTSTAT_H_OBJ INDEX 180,544 37.06
SYS SYSAUX WRI$_OPTSTAT_HISTGRM TABLE 61,584 12.64
SYS SYSAUX I_WRI$_OPTSTAT_HH_ST INDEX 13,792 2.83
SYS SYSAUX I_WRI$_OPTSTAT_HH_OB INDEX 13,552 2.78
-------------------------------------------------------------
Segments by Physical Reads DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Total Physical Reads: 5,130
-> Captured Segments account for 94.9% of Total
Tablespace Subobject Obj. Physical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS SYSAUX WRH$_SQL_PLAN TABLE 2,864 55.83
SYS SYSAUX I_WRI$_OPTSTAT_H_OBJ INDEX 629 12.26
SYS SYSAUX WRI$_OPTSTAT_HISTGRM TABLE 340 6.63
SYS SYSAUX I_WRI$_OPTSTAT_H_ST INDEX 331 6.45
SYS SYSAUX I_WRI$_OPTSTAT_HH_OB INDEX 195 3.80
-------------------------------------------------------------
Segments by Row Lock Waits DB/Inst: IVRS/ivrs Snaps: 1120-1121
No data exists for this section of the report.
-------------------------------------------------------------
Segments by ITL Waits DB/Inst: IVRS/ivrs Snaps: 1120-1121
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Buffer Busy Waits DB/Inst: IVRS/ivrs Snaps: 1120-1121
No data exists for this section of the report.
-------------------------------------------------------------
Dictionary Cache Stats DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> "Pct Misses" should be very low (< 2% in most cases)
-> "Final Usage" is the number of cache entries being used
Get Pct Scan Pct Mod Final
Cache Requests Miss Reqs Miss Reqs Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control 24 0.0 0 N/A 3 1
dc_global_oids 22 0.0 0 N/A 0 26
dc_histogram_data 398 28.4 0 N/A 0 1,213
dc_histogram_defs 674 21.7 0 N/A 0 3,329
dc_object_ids 1,096 18.6 0 N/A 0 935
dc_objects 400 12.8 0 N/A 0 1,575
dc_profiles 1 0.0 0 N/A 0 1
dc_rollback_segments 78 0.0 0 N/A 0 16
dc_segments 364 22.0 0 N/A 1 819
dc_tablespaces 600 0.0 0 N/A 0 12
dc_usernames 8 0.0 0 N/A 0 16
dc_users 99 0.0 0 N/A 0 16
outstanding_alerts 36 11.1 0 N/A 8 26
-------------------------------------------------------------
Library Cache Activity DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> "Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 12 0.0 33 0.0 0 0
CLUSTER 1 0.0 1 0.0 0 0
SQL AREA 88 100.0 2,833 9.4 0 0
TABLE/PROCEDURE 117 32.5 869 16.2 0 0
TRIGGER 1 0.0 1 0.0 0 0
-------------------------------------------------------------
Process Memory Summary DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> B: Begin snap E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
-> Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> ordered by Begin/End snapshot, Alloc (MB) desc
Hist
Avg Std Dev Max Max
Alloc Used Alloc Alloc Alloc Alloc Num Num
Category (MB) (MB) (MB) (MB) (MB) (MB) Proc Alloc
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B Other 102.9 N/A 4.5 7.6 24 24 23 23
Freeable 5.7 .0 .6 .5 1 N/A 9 9
SQL .4 .2 .0 .0 0 2 9 8
PL/SQL .1 .0 .0 .0 0 0 21 21
E Other 103.2 N/A 4.5 7.6 24 24 23 23
Freeable 6.7 .0 .7 .5 1 N/A 9 9
SQL .4 .2 .0 .0 0 2 9 8
PL/SQL .1 .0 .0 .0 0 0 21 21
-------------------------------------------------------------
SGA Memory Summary DB/Inst: IVRS/ivrs Snaps: 1120-1121
End Size (Bytes)
SGA regions Begin Size (Bytes) (if different)
------------------------------ ------------------- -------------------
Database Buffers 159,383,552
Fixed Size 1,261,612
Redo Buffers 2,928,640
Variable Size 163,581,908
-------------------
sum 327,155,712
-------------------------------------------------------------
SGA breakdown difference DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> ordered by Pool, Name
-> N/A value for Begin MB or End MB indicates the size of that Pool/Name was
insignificant, or zero in that snapshot
Pool Name Begin MB End MB % Diff
------ ------------------------------ -------------- -------------- -------
java free memory 2.7 2.7 0.00
java joxlod exec hp 5.1 5.1 0.00
java joxs heap .2 .2 0.00
large ASM map operations hashta .2 .2 0.00
large CTWR dba buffer .4 .4 0.00
large PX msg pool .2 .2 0.00
large free memory 1.2 1.2 0.00
large krcc extent chunk 2.0 2.0 0.00
shared ASH buffers 2.0 2.0 0.00
shared CCursor 2.2 2.5 12.13
shared Heap0: KGL 2.0 2.1 2.10
shared KCB Table Scan Buffer 3.8 3.8 0.00
shared KGLS heap 4.7 5.2 10.36
shared KQR M PO 2.7 2.9 7.33
shared KSFD SGA I/O b 3.8 3.8 0.00
shared PCursor 1.4 1.6 12.57
shared PL/SQL MPCODE 1.8 1.8 0.00
shared free memory 71.7 69.3 -3.41
shared kglsim hash table bkts 2.0 2.0 0.00
shared library cache 3.7 3.8 2.87
shared row cache 3.6 3.6 0.00
shared sql area 8.5 9.5 11.52
stream free memory 4.0 4.0 0.00
buffer_cache 152.0 152.0 0.00
fixed_sga 1.2 1.2 0.00
log_buffer 2.8 2.8 0.00
-------------------------------------------------------------
Streams CPU/IO Usage DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Streams processes ordered by CPU usage
-> CPU and I/O Time in micro seconds
Session Type CPU Time User I/O Time Sys I/O Time
------------------------- -------------- -------------- --------------
QMON Coordinator 18,305 0 0
QMON Slaves 9,247 0 0
-------------------------------------------------------------
Streams Capture DB/Inst: IVRS/ivrs Snaps: 1120-1121
No data exists for this section of the report.
-------------------------------------------------------------
Streams Apply DB/Inst: IVRS/ivrs Snaps: 1120-1121
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Queues DB/Inst: IVRS/ivrs Snaps: 1120-1121
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Subscribers DB/Inst: IVRS/ivrs Snaps: 1120-1121
No data exists for this section of the report.
-------------------------------------------------------------
Rule Set DB/Inst: IVRS/ivrs Snaps: 1120-1121
-> Rule Sets ordered by Evaluations
Fast SQL CPU Elapsed
Ruleset Name Evals Evals Execs Time Time
------------------------- -------- -------- -------- -------- --------
SYS.ALERT_QUE_R 8 0 0 0 1
-------------------------------------------------------------
Resource Limit Stats DB/Inst: IVRS/ivrs Snap: 1121
No data exists for this section of the report.
-------------------------------------------------------------
init.ora Parameters DB/Inst: IVRS/ivrs Snaps: 1120-1121
End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
audit_file_dest /oracle/app/oracle/admin/ivrs/adu
audit_sys_operations TRUE
background_dump_dest /oracle/app/oracle/admin/ivrs/bdu
compatible 10.2.0.3.0
control_files +DATA_1/ivrs/control01.ctl, +DATA
core_dump_dest /oracle/app/oracle/admin/ivrs/cdu
db_block_size 8192
db_domain karl.com
db_file_multiblock_read_count 16
db_name ivrs
db_recovery_file_dest /flash_reco/flash_recovery_area
db_recovery_file_dest_size 161061273600
dispatchers (PROTOCOL=TCP) (SERVICE=ivrsXDB)
job_queue_processes 10
log_archive_dest_1 LOCATION=USE_DB_RECOVERY_FILE_DES
log_archive_format ivrs_%t_%s_%r.arc
open_cursors 300
os_authent_prefix
os_roles FALSE
pga_aggregate_target 108003328
processes 150
recyclebin OFF
remote_login_passwordfile EXCLUSIVE
remote_os_authent FALSE
remote_os_roles FALSE
sga_target 327155712
spfile +DATA_1/ivrs/spfileivrs.ora
sql92_security TRUE
statistics_level TYPICAL
undo_management AUTO
undo_tablespace UNDOTBS1
user_dump_dest /oracle/app/oracle/admin/ivrs/udu
-------------------------------------------------------------
End of Report
Report written to awrrpt_1_1120_1121.txt
}}}
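The text report above can be regenerated straight from the workload repository. A minimal sketch, assuming access to the stock awrrpt.sql script and the DBMS_WORKLOAD_REPOSITORY package (the DBID 2607950532, instance 1, and snap range 1120-1121 are taken from the report header):
{{{
-- Interactive route: the stock script prompts for report type (text/html),
-- the snapshot range, and the output file name
@?/rdbms/admin/awrrpt.sql

-- Non-interactive sketch: pull the same text report as rows from SQL*Plus;
-- dbid/instance/snap ids come from the report header above
SELECT output
  FROM TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(
         l_dbid     => 2607950532,
         l_inst_num => 1,
         l_bid      => 1120,
         l_eid      => 1121));
}}}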
! The ASH report for SNAP_ID 1120
{{{
Summary of All User Input
-------------------------
Format : TEXT
DB Id : 2607950532
Inst num : 1
Begin time : 01-Jun-10 23:40:00
End time : 01-Jun-10 23:50:00
Slot width : Default
Report targets : 0
Report name : ashrpt_1_0601_2350.txt
ASH Report For IVRS/ivrs
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
CPUs SGA Size Buffer Cache Shared Pool ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
1 312M (100%) 152M (48.7%) 114M (36.5%) 2.0M (0.6%)
Analysis Begin Time: 01-Jun-10 23:40:00
Analysis End Time: 01-Jun-10 23:50:00
Elapsed Time: 10.0 (mins)
Sample Count: 19
Average Active Sessions: 0.32
Avg. Active Session per CPU: 0.32
Report Target: None specified
Top User Events DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU CPU 5.26 0.02
-------------------------------------------------------------
Top Background Events DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU CPU 26.32 0.08
log file sequential read System I/O 21.05 0.07
db file sequential read User I/O 15.79 0.05
log file parallel write System I/O 15.79 0.05
db file parallel write System I/O 5.26 0.02
-------------------------------------------------------------
Top Event P1/P2/P3 Values DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
Event % Event P1 Value, P2 Value, P3 Value % Activity
------------------------------ ------- ----------------------------- ----------
Parameter 1 Parameter 2 Parameter 3
-------------------------- -------------------------- --------------------------
log file sequential read 21.05 "0","2","8192" 5.26
log# block# blocks
"0","28673","2048" 5.26
"0","57345","2048" 5.26
db file sequential read 15.79 "3","23777","1" 5.26
file# block# blocks
"3","38062","1" 5.26
"3","38149","1" 5.26
log file parallel write 15.79 "1","1877","1" 5.26
files blocks requests
"1","1879","1" 5.26
"1","1879","2" 5.26
db file parallel write 5.26 "1","0","2147483647" 5.26
requests interrupt timeout
db file scattered read 5.26 "3","39563","16" 5.26
file# block# blocks
-------------------------------------------------------------
Top Service/Module DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
Service Module % Activity Action % Action
-------------- ------------------------ ---------- ------------------ ----------
SYS$BACKGROUND MMON_SLAVE 52.63 Auto-Purge Slave A 52.63
UNNAMED 42.11 UNNAMED 42.11
SYS$USERS UNNAMED 5.26 UNNAMED 5.26
-------------------------------------------------------------
Top Client IDs DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL Command Types DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL Statements DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL using literals DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
No data exists for this section of the report.
-------------------------------------------------------------
Top PL/SQL Procedures DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
-> 'PL/SQL entry subprogram' represents the application's top-level
entry-point(procedure, function, trigger, package initialization
or RPC call) into PL/SQL.
-> 'PL/SQL current subprogram' is the pl/sql subprogram being executed
at the point of sampling. If the value is 'SQL', it represents
the percentage of time spent executing SQL for the particular
plsql entry subprogram
PLSQL Entry Subprogram % Activity
----------------------------------------------------------------- ----------
PLSQL Current Subprogram % Current
----------------------------------------------------------------- ----------
SYS.PRVT_ADVISOR.DELETE_EXPIRED_TASKS 15.79
SYS.PRVT_ADVISOR.DELETE_EXPIRED_TASKS 15.79
PERFSTAT.STATSPACK.SNAP#1 5.26
PERFSTAT.STATSPACK.SNAP#2 5.26
-------------------------------------------------------------
Top Sessions DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
-> '# Samples Active' shows the number of ASH samples in which the session
was found waiting for that particular event. The percentage shown
in this column is calculated with respect to wall clock time
and not total database activity.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
when the session was waiting for that particular event
-> For sessions running Parallel Queries, this section will NOT aggregate
the PQ slave activity into the session issuing the PQ. Refer to
the 'Top Sessions running PQs' section for such statistics.
Sid, Serial# % Activity Event % Event
--------------- ---------- ------------------------------ ----------
User Program # Samples Active XIDs
-------------------- ------------------------------ ------------------ --------
153, 26 52.63 CPU + Wait for CPU 21.05
SYS oracle@dbrocai...el.com (m000) 4/60 [ 7%] 1
db file sequential read 15.79
3/60 [ 5%] 0
db file scattered read 5.26
1/60 [ 2%] 0
152, 1 21.05 log file sequential read 15.79
SYS oracle@dbrocai...el.com (ARC1) 3/60 [ 5%] 0
CPU + Wait for CPU 5.26
1/60 [ 2%] 0
166, 1 15.79 log file parallel write 15.79
SYS oracle@dbrocai...el.com (LGWR) 3/60 [ 5%] 0
143, 9 5.26 CPU + Wait for CPU 5.26
PERFSTAT oracle@dbrocai...el.com (J000) 1/60 [ 2%] 1
167, 1 5.26 db file parallel write 5.26
SYS oracle@dbrocai...el.com (DBW0) 1/60 [ 2%] 0
-------------------------------------------------------------
Top Blocking Sessions DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
No data exists for this section of the report.
-------------------------------------------------------------
Top Sessions running PQs DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
No data exists for this section of the report.
-------------------------------------------------------------
Top DB Objects DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
-> With respect to Application, Cluster, User I/O and buffer busy waits only.
Object ID % Activity Event % Event
--------------- ---------- ------------------------------ ----------
Object Name (Type) Tablespace
----------------------------------------------------- -------------------------
8813 15.79 db file sequential read 15.79
SYS.WRI$_ALERT_HISTORY (TABLE) SYSAUX
4171 5.26 db file scattered read 5.26
SYS.WRI$_SEGADV_CNTRLTAB (TABLE) SYSAUX
-------------------------------------------------------------
Top DB Files DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
-> With respect to Cluster and User I/O events only.
File ID % Activity Event % Event
--------------- ---------- ------------------------------ ----------
File Name Tablespace
----------------------------------------------------- -------------------------
3 21.05 db file sequential read 15.79
+DATA_1/ivrs/datafile/sysaux.258.652821943 SYSAUX
db file scattered read 5.26
-------------------------------------------------------------
Top Latches DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
No data exists for this section of the report.
-------------------------------------------------------------
Activity Over Time DB/Inst: IVRS/ivrs (Jun 01 23:40 to 23:50)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period
Slot Event
Slot Time (Duration) Count Event Count % Event
-------------------- -------- ------------------------------ -------- -------
23:40:00 (5.0 min) 18 CPU + Wait for CPU 6 31.58
log file sequential read 4 21.05
db file sequential read 3 15.79
23:45:00 (5.0 min) 1 db file parallel write 1 5.26
-------------------------------------------------------------
End of Report
Report written to ashrpt_1_0601_2350.txt
}}}
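A quick sanity check on the summary numbers above: 19 samples over a 10-minute window comes out to 0.32 Average Active Sessions only if each sample carries a 10-second weight, i.e. the data came from the on-disk ASH (DBA_HIST_ACTIVE_SESS_HISTORY retains one in ten of the in-memory samples): 19 * 10 / 600 = 0.32. The per-event "% Activity" columns are the same arithmetic per event, e.g. log file sequential read at 4 of 19 samples = 21.05%. A minimal sketch of that math against the history view, assuming the window and DBID from the report header:
{{{
-- Recompute Average Active Sessions for the 23:40-23:50 window from on-disk ASH.
-- DBA_HIST_ACTIVE_SESS_HISTORY keeps 1-in-10 of the in-memory samples, so each
-- row represents ~10 seconds of activity: 19 rows * 10 / 600s elapsed ~= 0.32
SELECT COUNT(*)                       AS sample_count,
       ROUND(COUNT(*) * 10 / 600, 2) AS avg_active_sessions
  FROM dba_hist_active_sess_history
 WHERE dbid            = 2607950532
   AND instance_number = 1
   AND sample_time >= TO_TIMESTAMP('01-Jun-10 23:40:00','DD-Mon-RR HH24:MI:SS')
   AND sample_time <  TO_TIMESTAMP('01-Jun-10 23:50:00','DD-Mon-RR HH24:MI:SS');
}}}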
! The AWR report for SNAP_ID 1121
{{{
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 1121 01-Jun-10 23:50:29 21 1.8
End Snap: 1122 02-Jun-10 00:00:30 22 1.7
Elapsed: 10.03 (mins)
DB Time: 0.87 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
---------- ----------
Buffer Cache: 152M 152M Std Block Size: 8K
Shared Pool Size: 140M 140M Log Buffer: 2,860K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 2,197.17 62,941.52
Logical reads: 10.06 288.24
Block changes: 4.46 127.67
Physical reads: 0.19 5.43
Physical writes: 1.03 29.57
User calls: 0.03 0.81
Parses: 0.46 13.14
Hard parses: 0.00 0.00
Sorts: 0.68 19.43
Logons: 0.01 0.29
Executes: 1.06 30.43
Transactions: 0.03
% Blocks changed per Read: 44.29 Recursive Call %: 99.64
Rollback per transaction %: 42.86 Rows per Sort: 46.27
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 98.12 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 100.00
Execute to Parse %: 56.81 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 80.00 % Non-Parse CPU: 98.13
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 50.52 50.54
% SQL with executions>1: 66.13 78.31
% Memory for SQL w/exec>1: 62.59 72.80
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db file parallel write 398 16 40 30.7 System I/O
CPU time 4 8.2
control file parallel write 199 4 18 7.0 System I/O
log file parallel write 34 2 57 3.8 System I/O
os thread startup 4 2 380 2.9 Concurrenc
-------------------------------------------------------------
Time Model Statistics DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Total time in database user-calls (DB Time): 52s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 53.1 102.2
Java execution elapsed time 49.2 94.6
DB CPU 4.3 8.2
parse time elapsed 0.2 .3
PL/SQL execution elapsed time 0.1 .3
repeated bind elapsed time 0.0 .0
DB time 52.0 N/A
background elapsed time 31.0 N/A
background cpu time 7.6 N/A
-------------------------------------------------------------
Wait Class DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
-------------------- ---------------- ------ ---------------- ------- ---------
System I/O 1,951 .0 23 12 92.9
User I/O 81 .0 2 22 3.9
Concurrency 4 .0 2 380 0.2
Other 19 .0 0 14 0.9
Commit 3 .0 0 50 0.1
-------------------------------------------------------------
Wait Events DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
db file parallel write 398 .0 16 40 19.0
control file parallel write 199 .0 4 18 9.5
log file parallel write 34 .0 2 57 1.6
os thread startup 4 .0 2 380 0.2
db file sequential read 68 .0 1 20 3.2
control file sequential read 1,320 .0 1 1 62.9
db file scattered read 13 .0 0 30 0.6
latch free 1 .0 0 250 0.0
log file sync 3 .0 0 50 0.1
change tracking file synchro 9 .0 0 2 0.4
change tracking file synchro 9 .0 0 0 0.4
Streams AQ: qmn slave idle w 22 .0 602 27347 1.0
Streams AQ: qmn coordinator 44 50.0 602 13673 2.1
ASM background timer 120 .0 586 4885 5.7
class slave wait 5 100.0 586 117189 0.2
virtual circuit status 20 100.0 586 29294 1.0
jobq slave wait 23 95.7 66 2888 1.1
-------------------------------------------------------------
Background Wait Events DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
db file parallel write 398 .0 16 40 19.0
control file parallel write 199 .0 4 18 9.5
log file parallel write 34 .0 2 57 1.6
os thread startup 4 .0 2 380 0.2
control file sequential read 280 .0 1 3 13.3
db file scattered read 1 .0 0 208 0.0
events in waitclass Other 18 .0 0 1 0.9
rdbms ipc message 2,365 98.8 7,089 2997 112.6
Streams AQ: qmn slave idle w 22 .0 602 27347 1.0
Streams AQ: qmn coordinator 44 50.0 602 13673 2.1
pmon timer 203 100.0 589 2902 9.7
ASM background timer 120 .0 586 4885 5.7
class slave wait 5 100.0 586 117189 0.2
smon timer 2 100.0 586 292970 0.1
-------------------------------------------------------------
Operating System Statistics DB/Inst: IVRS/ivrs Snaps: 1121-1122
Statistic Total
-------------------------------- --------------------
BUSY_TIME 60,471
IDLE_TIME 0
IOWAIT_TIME 0
NICE_TIME 44,171
SYS_TIME 11,180
USER_TIME 4,813
LOAD 2
RSRC_MGR_CPU_WAIT_TIME 0
PHYSICAL_MEMORY_BYTES 6,880
NUM_CPUS 1
-------------------------------------------------------------
Service Statistics DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> ordered by DB Time
Physical Logical
Service Name DB Time (s) DB CPU (s) Reads Reads
-------------------------------- ------------ ------------ ---------- ----------
SYS$USERS 52.0 4.3 47 2,251
SYS$BACKGROUND 0.0 0.0 72 3,870
ivrs.karl.com 0.0 0.0 0 0
ivrsXDB 0.0 0.0 0 0
-------------------------------------------------------------
Service Wait Class Stats DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
classes: User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in centisecond (100th of a second)
Service Name
----------------------------------------------------------------
User I/O User I/O Concurcy Concurcy Admin Admin Network Network
Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time
--------- --------- --------- --------- --------- --------- --------- ---------
SYS$USERS
35 69 0 0 0 0 0 0
SYS$BACKGROUND
46 110 4 152 0 0 0 0
-------------------------------------------------------------
SQL ordered by Elapsed Time DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
25 2 1 25.0 48.0 1wzqub25cwnjm
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN wksys.wk_job.invoke(21,21); :mydate := next_date; IF broken THEN
:b := 1; ELSE :b := 0; END IF; END;
2 2 1 1.9 3.6 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
2 1 1 1.8 3.4 5h7w8ykwtb2xt
INSERT INTO SYS.WRI$_ADV_PARAMETERS (TASK_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
1 1 1 1.5 2.9 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
1 1 1 1.1 2.1 bunssq950snhf
insert into wrh$_sga_target_advice (snap_id, dbid, instance_number, SGA_SIZ
E, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_READS) select :snap_id, :dbi
d, :instance_number, SGA_SIZE, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_R
EADS from v$sga_target_advice
1 1 1 0.7 1.3 7vgmvmy8vvb9s
insert into wrh$_tempstatxs (snap_id, dbid, instance_number, file#, creation_c
hange#, phyrds, phywrts, singleblkrds, readtim, writetim, singleblkrdtim, phy
blkrd, phyblkwrt, wait_count, time) select :snap_id, :dbid, :instance_num
ber, tf.tfnum, to_number(tf.tfcrc_scn) creation_change#, ts.kcftiopyr, ts.
1 0 1 0.6 1.2 84qubbrsr0kfn
insert into wrh$_latch (snap_id, dbid, instance_number, latch_hash, level#, ge
ts, misses, sleeps, immediate_gets, immediate_misses, spin_gets, sleep1, s
leep2, sleep3, sleep4, wait_time) select :snap_id, :dbid, :instance_number,
hash, level#, gets, misses, sleeps, immediate_gets, immediate_misses, spin_ge
1 0 1 0.6 1.2 g6wf9na8zs5hb
insert into wrh$_sysmetric_summary (snap_id, dbid, instance_number, beg
in_time, end_time, intsize, group_id, metric_id, num_interval, maxval, minv
al, average, standard_deviation) select :snap_id, :dbid, :instance_number,
begtime, endtime, intsize_csec, groupid, metricid, numintv, max, min,
0 0 1 0.3 0.6 1crajpb7j5tyz
INSERT INTO STATS$SGA_TARGET_ADVICE ( SNAP_ID , DBID , INSTANCE_NUMBER , SGA_SIZ
E , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TIME_FACTOR , ESTD_PHYSICAL_READS )
SELECT :B3 , :B2 , :B1 , SGA_SIZE , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TI
ME_FACTOR , ESTD_PHYSICAL_READS FROM V$SGA_TARGET_ADVICE
0 0 1 0.3 0.6 f9nzhpn9854xz
insert into wrh$_seg_stat (snap_id, dbid, instance_number, ts#, obj#, dataobj#
, logical_reads_total, logical_reads_delta, buffer_busy_waits_total, buffer_b
usy_waits_delta, db_block_changes_total, db_block_changes_delta, physical_rea
ds_total, physical_reads_delta, physical_writes_total, physical_writes_delta,
-------------------------------------------------------------
SQL ordered by CPU Time DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ----------- ------- -------------
2 25 1 1.53 48.0 1wzqub25cwnjm
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN wksys.wk_job.invoke(21,21); :mydate := next_date; IF broken THEN
:b := 1; ELSE :b := 0; END IF; END;
2 2 1 1.52 3.6 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
1 2 1 1.43 3.4 5h7w8ykwtb2xt
INSERT INTO SYS.WRI$_ADV_PARAMETERS (TASK_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
1 1 1 0.90 2.1 bunssq950snhf
insert into wrh$_sga_target_advice (snap_id, dbid, instance_number, SGA_SIZ
E, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_READS) select :snap_id, :dbi
d, :instance_number, SGA_SIZE, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_R
EADS from v$sga_target_advice
1 1 1 0.73 2.9 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
1 1 1 0.56 1.3 7vgmvmy8vvb9s
insert into wrh$_tempstatxs (snap_id, dbid, instance_number, file#, creation_c
hange#, phyrds, phywrts, singleblkrds, readtim, writetim, singleblkrdtim, phy
blkrd, phyblkwrt, wait_count, time) select :snap_id, :dbid, :instance_num
ber, tf.tfnum, to_number(tf.tfcrc_scn) creation_change#, ts.kcftiopyr, ts.
0 0 1 0.32 0.6 1crajpb7j5tyz
INSERT INTO STATS$SGA_TARGET_ADVICE ( SNAP_ID , DBID , INSTANCE_NUMBER , SGA_SIZ
E , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TIME_FACTOR , ESTD_PHYSICAL_READS )
SELECT :B3 , :B2 , :B1 , SGA_SIZE , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TI
ME_FACTOR , ESTD_PHYSICAL_READS FROM V$SGA_TARGET_ADVICE
0 1 1 0.22 1.2 g6wf9na8zs5hb
insert into wrh$_sysmetric_summary (snap_id, dbid, instance_number, beg
in_time, end_time, intsize, group_id, metric_id, num_interval, maxval, minv
al, average, standard_deviation) select :snap_id, :dbid, :instance_number,
begtime, endtime, intsize_csec, groupid, metricid, numintv, max, min,
0 0 1 0.12 0.3 fktqvw2wjxdxc
insert into wrh$_filestatxs (snap_id, dbid, instance_number, file#, creation_c
hange#, phyrds, phywrts, singleblkrds, readtim, writetim, singleblkrdtim, phy
blkrd, phyblkwrt, wait_count, time) select :snap_id, :dbid, :instance_num
ber, df.file#, (df.crscnbas + (df.crscnwrp * power(2,32))) creation_change#,
0 1 1 0.11 1.2 84qubbrsr0kfn
insert into wrh$_latch (snap_id, dbid, instance_number, latch_hash, level#, ge
ts, misses, sleeps, immediate_gets, immediate_misses, spin_gets, sleep1, s
leep2, sleep3, sleep4, wait_time) select :snap_id, :dbid, :instance_number,
hash, level#, gets, misses, sleeps, immediate_gets, immediate_misses, spin_ge
-------------------------------------------------------------
SQL ordered by Gets DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets: 6,053
-> Captured SQL account for 47.3% of Total
Gets CPU Elapsed
Buffer Gets Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
1,880 1 1,880.0 31.1 0.73 1.48 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
363 121 3.0 6.0 0.09 0.09 6ssrk2dqj7jbx
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= next_date) and (n
ext_date <= :2)) or ((last_date is null) and (next_date < :3))) and (field1
= :4 or (field1 = 0 and 'Y' = :5)) and (this_date is null) order by next_date, j
ob
361 1 361.0 6.0 0.04 0.15 925d7dd714u48
INSERT INTO STATS$LATCH ( SNAP_ID , DBID , INSTANCE_NUMBER , NAME , LATCH# , LEV
EL# , GETS , MISSES , SLEEPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , W
AIT_TIME ) SELECT :B3 , :B2 , :B1 , NAME , LATCH# , LEVEL# , GETS , MISSES , SLE
EPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , WAIT_TIME FROM V$LATCH
357 1 357.0 5.9 0.03 0.04 dvzr1zfmdddga
INSERT INTO STATS$SYSSTAT ( SNAP_ID , DBID , INSTANCE_NUMBER , STATISTIC# , NAME
, VALUE ) SELECT :B3 , :B2 , :B1 , STATISTIC# , NAME , VALUE FROM V$SYSSTAT
293 1 293.0 4.8 0.02 0.02 g337099aatnuj
update smon_scn_time set orig_thread=0, time_mp=:1, time_dp=:2, scn=:3, scn_wrp
=:4, scn_bas=:5, num_mappings=:6, tim_scn_map=:7 where thread=0 and scn = (sel
ect min(scn) from smon_scn_time where thread=0)
222 1 222.0 3.7 0.02 0.10 bq0xuw807fdju
INSERT INTO STATS$EVENT_HISTOGRAM ( SNAP_ID , DBID , INSTANCE_NUMBER , EVENT_ID
, WAIT_TIME_MILLI , WAIT_COUNT ) SELECT :B3 , :B2 , :B1 , EN.EVENT_ID , WAIT_TIM
E_MILLI , WAIT_COUNT FROM V$EVENT_HISTOGRAM EH , V$EVENT_NAME EN WHERE EH.EVENT
= EN.NAME AND EH.EVENT# = EN.EVENT#
162 39 4.2 2.7 0.04 0.05 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0
119 119 1.0 2.0 0.04 0.04 g2wr3u7s1gtf3
select count(*) from sys.job$ where (next_date > sysdate) and (next_date < (sysd
ate+5/86400))
118 1 118.0 1.9 1.53 24.95 1wzqub25cwnjm
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN wksys.wk_job.invoke(21,21); :mydate := next_date; IF broken THEN
:b := 1; ELSE :b := 0; END IF; END;
98 1 98.0 1.6 0.11 0.65 84qubbrsr0kfn
insert into wrh$_latch (snap_id, dbid, instance_number, latch_hash, level#, ge
ts, misses, sleeps, immediate_gets, immediate_misses, spin_gets, sleep1, s
leep2, sleep3, sleep4, wait_time) select :snap_id, :dbid, :instance_number,
hash, level#, gets, misses, sleeps, immediate_gets, immediate_misses, spin_ge
77 1 77.0 1.3 0.02 0.09 6c06mfv01xt2h
update wrh$_seg_stat_obj sso set (index_type, base_obj#, base_object_name, ba
se_object_owner) = (select decode(ind.type#,
1, 'NORMAL'|| decode(bitand(ind.property, 4), 0, '',
4, '/REV'), 2, 'BITMAP', 3, 'CLUSTER', 4, 'IOT - TOP',
-------------------------------------------------------------
SQL ordered by Reads DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Total Disk Reads: 114
-> Captured SQL account for 52.6% of Total
Reads CPU Elapsed
Physical Reads Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
47 1 47.0 41.2 0.73 1.48 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
14 1 14.0 12.3 0.02 0.10 bq0xuw807fdju
INSERT INTO STATS$EVENT_HISTOGRAM ( SNAP_ID , DBID , INSTANCE_NUMBER , EVENT_ID
, WAIT_TIME_MILLI , WAIT_COUNT ) SELECT :B3 , :B2 , :B1 , EN.EVENT_ID , WAIT_TIM
E_MILLI , WAIT_COUNT FROM V$EVENT_HISTOGRAM EH , V$EVENT_NAME EN WHERE EH.EVENT
= EN.NAME AND EH.EVENT# = EN.EVENT#
9 1 9.0 7.9 0.04 0.15 925d7dd714u48
INSERT INTO STATS$LATCH ( SNAP_ID , DBID , INSTANCE_NUMBER , NAME , LATCH# , LEV
EL# , GETS , MISSES , SLEEPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , W
AIT_TIME ) SELECT :B3 , :B2 , :B1 , NAME , LATCH# , LEVEL# , GETS , MISSES , SLE
EPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , WAIT_TIME FROM V$LATCH
9 1 9.0 7.9 0.03 0.11 bu95jup1jp5t3
insert into wrh$_db_cache_advice (snap_id, dbid, instance_number,
bpid, buffers_for_estimate, name, block_size, advice_status, size_for_e
stimate, size_factor, physical_reads, base_physical_reads, actual_physic
al_reads) select :snap_id, :dbid, :instance_number, a.bpid, a.nbufs,
6 1 6.0 5.3 0.08 0.29 f9nzhpn9854xz
insert into wrh$_seg_stat (snap_id, dbid, instance_number, ts#, obj#, dataobj#
, logical_reads_total, logical_reads_delta, buffer_busy_waits_total, buffer_b
usy_waits_delta, db_block_changes_total, db_block_changes_delta, physical_rea
ds_total, physical_reads_delta, physical_writes_total, physical_writes_delta,
4 1 4.0 3.5 0.01 0.20 39agg31n4t2zc
INSERT INTO STATS$WAITSTAT ( SNAP_ID , DBID , INSTANCE_NUMBER , CLASS , WAIT_COU
NT , TIME ) SELECT :B3 , :B2 , :B1 , CLASS , "COUNT" , TIME FROM V$WAITSTAT
3 1 3.0 2.6 0.06 0.11 0gf6adkbuyytg
INSERT INTO STATS$FILE_HISTOGRAM ( SNAP_ID , DBID , INSTANCE_NUMBER , FILE# , SI
NGLEBLKRDTIM_MILLI , SINGLEBLKRDS ) SELECT :B3 , :B2 , :B1 , FILE# , SINGLEBLKRD
TIM_MILLI , SINGLEBLKRDS FROM V$FILE_HISTOGRAM WHERE SINGLEBLKRDS > 0
3 1 3.0 2.6 0.02 0.19 13fnb572x6z9j
insert into wrh$_osstat (snap_id, dbid, instance_number, stat_id, value) sele
ct :snap_id, :dbid, :instance_number, osstat_id, value from v$osstat
3 1 3.0 2.6 0.11 0.65 84qubbrsr0kfn
insert into wrh$_latch (snap_id, dbid, instance_number, latch_hash, level#, ge
ts, misses, sleeps, immediate_gets, immediate_misses, spin_gets, sleep1, s
leep2, sleep3, sleep4, wait_time) select :snap_id, :dbid, :instance_number,
hash, level#, gets, misses, sleeps, immediate_gets, immediate_misses, spin_ge
2 1 2.0 1.8 0.01 0.09 0apuzwzk800p3
INSERT INTO STATS$SYSTEM_EVENT ( SNAP_ID , DBID , INSTANCE_NUMBER , EVENT , TOTA
L_WAITS , TOTAL_TIMEOUTS , TIME_WAITED_MICRO , EVENT_ID ) SELECT :B3 , :B2 , :B1
, EVENT , TOTAL_WAITS , TOTAL_TIMEOUTS , TIME_WAITED_MICRO , EVENT_ID FROM V$SY
STEM_EVENT
2 1 2.0 1.8 0.22 0.62 g6wf9na8zs5hb
insert into wrh$_sysmetric_summary (snap_id, dbid, instance_number, beg
in_time, end_time, intsize, group_id, metric_id, num_interval, maxval, minv
al, average, standard_deviation) select :snap_id, :dbid, :instance_number,
begtime, endtime, intsize_csec, groupid, metricid, numintv, max, min,
-------------------------------------------------------------
SQL ordered by Executions DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Total Executions: 639
-> Captured SQL account for 72.1% of Total
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) SQL Id
------------ --------------- -------------- ---------- ----------- -------------
121 2 0.0 0.00 0.00 6ssrk2dqj7jbx
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= next_date) and (n
ext_date <= :2)) or ((last_date is null) and (next_date < :3))) and (field1
= :4 or (field1 = 0 and 'Y' = :5)) and (this_date is null) order by next_date, j
ob
119 119 1.0 0.00 0.00 g2wr3u7s1gtf3
select count(*) from sys.job$ where (next_date > sysdate) and (next_date < (sysd
ate+5/86400))
39 51 1.3 0.00 0.00 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0
21 315 15.0 0.00 0.00 772s25v1y0x8k
select shared_pool_size_for_estimate s, shared_pool_size_factor * 100 f
, estd_lc_load_time l, 0 from v$shared_pool_advice
21 441 21.0 0.00 0.00 aykvshm7zsabd
select size_for_estimate, size_factor * 100 f,
estd_physical_read_time, estd_physical_reads
from v$db_cache_advice where id = '3'
21 420 20.0 0.00 0.00 g6gu1n3x0h1h4
select streams_pool_size_for_estimate s, streams_pool_size_factor * 10
0 f, estd_spill_time + estd_unspill_time, 0 from v$streams_pool_advic
e
14 14 1.0 0.00 0.00 49s332uhbnsma
declare vsn varchar2(20); begin vsn :=
dbms_rcvman.getPackageVersion; :pkg_vsn:pkg_vsn_i := vsn;
if vsn is not null then :pkg_vsnub4 :=
to_number(substr(vsn,1,2) || substr(vsn,4,2) || s
6 6 1.0 0.00 0.00 c0agatqzq2jzr
insert into "SYS"."ALERT_QT" (q_name, msgid, corrid, priority, state, delay, ex
piration, time_manager_info, local_order_no, chain_no, enq_time, step_no, enq_
uid, enq_tid, retry_count, exception_qschema, exception_queue, recipient_key,
dequeue_msgid, user_data, sender_name, sender_address, sender_protocol, user
5 5 1.0 0.00 0.02 6cr55dpp3n44a
select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1, spare2 f
rom obj$ where owner#=:1 and name=:2 and namespace=:3 and remoteowner is null an
d linkname is null and subname = :4
5 0 0.0 0.00 0.00 7mvdhsu3d43ag
select a.obj# OBJOID, a.class_oid CLSOID, decode(bitand(a.flags, 16384), 0, a
.next_run_date, a.last_enabled_time) RUNTIME, (2*a.priority + decode(bita
nd(a.job_status, 4), 0, 0, decode(a.running_instance, :1, -1, 1))) PR
I, 1 JOBTYPE, a.schedule_limit SCHLIM, a.job_weight WT, decode(a.running_i
-------------------------------------------------------------
SQL ordered by Parse Calls DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Total Parse Calls: 276
-> Captured SQL account for 52.5% of Total
% Total
Parse Calls Executions Parses SQL Id
------------ ------------ --------- -------------
39 39 14.13 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0
14 14 5.07 49s332uhbnsma
declare vsn varchar2(20); begin vsn :=
dbms_rcvman.getPackageVersion; :pkg_vsn:pkg_vsn_i := vsn;
if vsn is not null then :pkg_vsnub4 :=
to_number(substr(vsn,1,2) || substr(vsn,4,2) || s
5 5 1.81 7mvdhsu3d43ag
select a.obj# OBJOID, a.class_oid CLSOID, decode(bitand(a.flags, 16384), 0, a
.next_run_date, a.last_enabled_time) RUNTIME, (2*a.priority + decode(bita
nd(a.job_status, 4), 0, 0, decode(a.running_instance, :1, -1, 1))) PR
I, 1 JOBTYPE, a.schedule_limit SCHLIM, a.job_weight WT, decode(a.running_i
4 4 1.45 bsa0wjtftg3uw
select file# from file$ where ts#=:1
3 3 1.09 19rkm1wsf9axx
insert into WRI$_ALERT_HISTORY (sequence_id, reason_id, owner, object_name, subo
bject_name, reason_argument_1, reason_argument_2, reason_argument_3, reason_argu
ment_4, reason_argument_5, time_suggested, creation_time, action_argument_1, act
ion_argument_2, action_argument_3, action_argument_4, action_argument_5, message
3 3 1.09 3505vtqmvvf40
insert into wrh$_waitclassmetric_history (snap_id, dbid, instance_number, wa
it_class_id, begin_time, end_time, intsize, group_id, average_waiter_c
ount, dbtime_in_wait, time_waited, wait_count) select :snap_id, :dbid
, :instance_number, wait_id, begtime, endtime, intsize_csec, groupid,
3 3 1.09 3qsmy8ybvwt3n
insert into WRI$_ALERT_OUTSTANDING (reason_id, object_id, subobject_id, internal
_instance_number, owner, object_name, subobject_name, sequence_id, reason_argume
nt_1, reason_argument_2, reason_argument_3, reason_argument_4, reason_argument_5
, time_suggested, creation_time, action_argument_1, action_argument_2, action_ar
3 3 1.09 c4nhd1ntptxq7
select message_level, sequence_id, time_suggested from WRI$_ALERT_OUTSTANDING wh
ere reason_id = :1 and object_id = :2 and subobject_id = :3 and internal_instanc
e_number = :4
3 3 1.09 fd9hn33xa7bph
delete from WRI$_ALERT_OUTSTANDING where reason_id = :1 and object_id = :2 and s
ubobject_id = :3 and internal_instance_number = :4 returning owner, object_name,
subobject_name, sequence_id, error_instance_id, creation_time into :5, :6, :7,
:8, :9, :10
2 2 0.72 0k8522rmdzg4k
select privilege# from sysauth$ where (grantee#=:1 or grantee#=1) and privilege#
>0
-------------------------------------------------------------
SQL ordered by Sharable Memory DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
SQL ordered by Version Count DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 1121-1122
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
CPU used by this session 837 1.4 39.9
CPU used when call started 244 0.4 11.6
CR blocks created 2 0.0 0.1
DB time 9,256 15.4 440.8
DBWR checkpoint buffers written 621 1.0 29.6
DBWR checkpoints 0 0.0 0.0
DBWR transaction table writes 10 0.0 0.5
DBWR undo block writes 97 0.2 4.6
IMU CR rollbacks 0 0.0 0.0
IMU Flushes 8 0.0 0.4
IMU Redo allocation size 34,348 57.1 1,635.6
IMU commits 5 0.0 0.2
IMU contention 0 0.0 0.0
IMU pool not allocated 0 0.0 0.0
IMU undo allocation size 40,168 66.8 1,912.8
IMU- failed to get a private str 0 0.0 0.0
SQL*Net roundtrips to/from clien 0 0.0 0.0
active txn count during cleanout 58 0.1 2.8
background checkpoints completed 0 0.0 0.0
background checkpoints started 0 0.0 0.0
background timeouts 2,346 3.9 111.7
buffer is not pinned count 1,796 3.0 85.5
buffer is pinned count 554 0.9 26.4
bytes received via SQL*Net from 0 0.0 0.0
bytes sent via SQL*Net to client 0 0.0 0.0
calls to get snapshot scn: kcmgs 1,128 1.9 53.7
calls to kcmgas 71 0.1 3.4
calls to kcmgcs 28 0.1 1.3
change write time 16 0.0 0.8
cleanout - number of ktugct call 72 0.1 3.4
cleanouts and rollbacks - consis 0 0.0 0.0
cleanouts only - consistent read 0 0.0 0.0
cluster key scan block gets 623 1.0 29.7
cluster key scans 34 0.1 1.6
commit batch/immediate performed 1 0.0 0.1
commit batch/immediate requested 1 0.0 0.1
commit cleanout failures: callba 8 0.0 0.4
commit cleanouts 418 0.7 19.9
commit cleanouts successfully co 410 0.7 19.5
commit immediate performed 1 0.0 0.1
commit immediate requested 1 0.0 0.1
commit txn count during cleanout 42 0.1 2.0
concurrency wait time 152 0.3 7.2
consistent changes 2 0.0 0.1
consistent gets 3,188 5.3 151.8
consistent gets - examination 1,290 2.1 61.4
consistent gets from cache 3,188 5.3 151.8
cursor authentications 3 0.0 0.1
data blocks consistent reads - u 2 0.0 0.1
db block changes 2,681 4.5 127.7
db block gets 2,865 4.8 136.4
db block gets from cache 2,865 4.8 136.4
deferred (CURRENT) block cleanou 267 0.4 12.7
enqueue conversions 166 0.3 7.9
enqueue releases 5,253 8.7 250.1
enqueue requests 5,253 8.7 250.1
enqueue timeouts 0 0.0 0.0
enqueue waits 0 0.0 0.0
execute count 639 1.1 30.4
free buffer inspected 300 0.5 14.3
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 1121-1122
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
free buffer requested 310 0.5 14.8
heap block compress 4 0.0 0.2
hot buffers moved to head of LRU 1 0.0 0.1
immediate (CR) block cleanout ap 0 0.0 0.0
immediate (CURRENT) block cleano 72 0.1 3.4
index crx upgrade (positioned) 122 0.2 5.8
index fast full scans (full) 1 0.0 0.1
index fetch by key 586 1.0 27.9
index scans kdiixs1 320 0.5 15.2
leaf node 90-10 splits 7 0.0 0.3
leaf node splits 28 0.1 1.3
lob reads 0 0.0 0.0
lob writes 0 0.0 0.0
lob writes unaligned 0 0.0 0.0
logons cumulative 6 0.0 0.3
messages received 427 0.7 20.3
messages sent 427 0.7 20.3
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 1,313 2.2 62.5
opened cursors cumulative 281 0.5 13.4
parse count (failures) 0 0.0 0.0
parse count (hard) 0 0.0 0.0
parse count (total) 276 0.5 13.1
parse time cpu 8 0.0 0.4
parse time elapsed 10 0.0 0.5
physical read IO requests 63 0.1 3.0
physical read bytes 933,888 1,552.4 44,470.9
physical read total IO requests 1,405 2.3 66.9
physical read total bytes 22,671,872 37,687.3 1,079,613.0
physical read total multi block 12 0.0 0.6
physical reads 114 0.2 5.4
physical reads cache 114 0.2 5.4
physical reads cache prefetch 51 0.1 2.4
physical reads direct 0 0.0 0.0
physical reads direct temporary 0 0.0 0.0
physical reads prefetch warmup 51 0.1 2.4
physical write IO requests 398 0.7 19.0
physical write bytes 5,087,232 8,456.5 242,249.1
physical write total IO requests 1,045 1.7 49.8
physical write total bytes 16,367,616 27,207.8 779,410.3
physical write total multi block 81 0.1 3.9
physical writes 621 1.0 29.6
physical writes direct 0 0.0 0.0
physical writes from cache 621 1.0 29.6
physical writes non checkpoint 285 0.5 13.6
pinned buffers inspected 0 0.0 0.0
recursive calls 4,680 7.8 222.9
recursive cpu usage 485 0.8 23.1
redo blocks written 2,695 4.5 128.3
redo buffer allocation retries 0 0.0 0.0
redo entries 1,778 3.0 84.7
redo log space requests 0 0.0 0.0
redo log space wait time 0 0.0 0.0
redo ordering marks 0 0.0 0.0
redo size 1,321,772 2,197.2 62,941.5
redo synch time 15 0.0 0.7
redo synch writes 152 0.3 7.2
redo wastage 10,216 17.0 486.5
redo write time 217 0.4 10.3
redo writer latching time 0 0.0 0.0
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 1121-1122
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
redo writes 35 0.1 1.7
rollback changes - undo records 6 0.0 0.3
rollbacks only - consistent read 2 0.0 0.1
rows fetched via callback 435 0.7 20.7
session cursor cache hits 95 0.2 4.5
session logical reads 6,053 10.1 288.2
session pga memory 5,688,060 9,455.2 270,860.0
session pga memory max 8,637,180 14,357.5 411,294.3
session uga memory 8,590,152,560 14,279,342.5 #############
session uga memory max 6,484,692 10,779.5 308,794.9
shared hash latch upgrades - no 158 0.3 7.5
sorts (memory) 408 0.7 19.4
sorts (rows) 18,878 31.4 899.0
sql area evicted 0 0.0 0.0
sql area purged 0 0.0 0.0
switch current to new buffer 1 0.0 0.1
table fetch by rowid 536 0.9 25.5
table fetch continued row 0 0.0 0.0
table scan blocks gotten 196 0.3 9.3
table scan rows gotten 1,190 2.0 56.7
table scans (long tables) 0 0.0 0.0
table scans (short tables) 151 0.3 7.2
total number of times SMON poste 0 0.0 0.0
transaction rollbacks 1 0.0 0.1
undo change vector size 458,732 762.6 21,844.4
user I/O wait time 187 0.3 8.9
user calls 17 0.0 0.8
user commits 12 0.0 0.6
user rollbacks 9 0.0 0.4
workarea executions - optimal 215 0.4 10.2
write clones created in foregrou 0 0.0 0.0
-------------------------------------------------------------
Instance Activity Stats - Absolute Values DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Statistics with absolute values (should not be diffed)
Statistic Begin Value End Value
-------------------------------- --------------- ---------------
session cursor cache count 604 659
opened cursors current 38 38
logons current 21 22
-------------------------------------------------------------
Instance Activity Stats - Thread Activity DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Statistics identified by '(derived)' come from sources other than SYSSTAT
Statistic Total per Hour
-------------------------------- ------------------ ---------
log switches (derived) 0 .00
-------------------------------------------------------------
Tablespace IO Stats DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> ordered by IOs (Reads + Writes) desc
Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
SYSAUX
32 0 32.2 2.2 210 0 0 0.0
USERS
35 0 19.1 1.3 155 0 0 0.0
UNDOTBS1
0 0 0.0 .0 27 0 0 0.0
SYSTEM
1 0 70.0 1.0 6 0 0 0.0
-------------------------------------------------------------
File IO Stats DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> ordered by Tablespace, File
Tablespace Filename
------------------------ ----------------------------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
SYSAUX +DATA_1/ivrs/datafile/sysaux.258.652821943
32 0 32.2 2.2 210 0 0 0.0
SYSTEM +DATA_1/ivrs/datafile/system.267.652821909
1 0 70.0 1.0 6 0 0 0.0
UNDOTBS1 +DATA_1/ivrs/datafile/undotbs1.257.652821933
0 0 N/A N/A 27 0 0 0.0
USERS +DATA_1/ivrs/datafile/users.263.652821963
18 0 20.0 1.3 87 0 0 0.0
USERS +DATA_1/ivrs/datafile/users02.dbf
17 0 18.2 1.4 68 0 0 0.0
-------------------------------------------------------------
Buffer Pool Statistics DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Writ Buffer
Number of Pool Buffer Physical Physical Buff Comp Busy
P Buffers Hit% Gets Reads Writes Wait Wait Waits
--- ---------- ---- -------------- ------------ ----------- ---- ---- ----------
D 18,962 98 6,151 119 621 0 0 0
-------------------------------------------------------------
Instance Recovery Stats DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- ------------
B 0 53 158 233 111123 184320 111123 N/A
E 0 53 153 263 110330 184320 110330 N/A
-------------------------------------------------------------
Buffer Pool Advisory DB/Inst: IVRS/ivrs Snap: 1122
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate
Est
Phys
Size for Size Buffers for Read Estimated
P Est (M) Factor Estimate Factor Physical Reads
--- -------- ------ ---------------- ------ ------------------
D 12 .1 1,497 1.3 19,688
D 24 .2 2,994 1.2 17,478
D 36 .2 4,491 1.2 17,116
D 48 .3 5,988 1.1 16,521
D 60 .4 7,485 1.1 15,784
D 72 .5 8,982 1.0 14,879
D 84 .6 10,479 1.0 14,879
D 96 .6 11,976 1.0 14,866
D 108 .7 13,473 1.0 14,763
D 120 .8 14,970 1.0 14,737
D 132 .9 16,467 1.0 14,737
D 144 .9 17,964 1.0 14,737
D 152 1.0 18,962 1.0 14,737
D 156 1.0 19,461 1.0 14,737
D 168 1.1 20,958 1.0 14,737
D 180 1.2 22,455 1.0 14,737
D 192 1.3 23,952 1.0 14,737
D 204 1.3 25,449 1.0 14,737
D 216 1.4 26,946 1.0 14,737
D 228 1.5 28,443 1.0 14,737
D 240 1.6 29,940 1.0 14,737
-------------------------------------------------------------
PGA Aggr Summary DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ------------------ --------------------------
100.0 21 0
-------------------------------------------------------------
PGA Aggr Target Stats DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> B: Begin snap E: End snap (rows identified with B or E contain data
which is absolute i.e. not diffed over the interval)
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
- ---------- ---------- ---------- ---------- ------ ------ ------ ----------
B 103 49 110.3 0.0 .0 .0 .0 21,094
E 103 48 111.9 0.0 .0 .0 .0 21,094
-------------------------------------------------------------
PGA Aggr Target Histogram DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Optimal Executions are purely in-memory operations
Low High
Optimal Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- -------------- ------------ ------------
2K 4K 183 183 0 0
64K 128K 4 4 0 0
512K 1024K 28 28 0 0
-------------------------------------------------------------
PGA Memory Advisory DB/Inst: IVRS/ivrs Snap: 1122
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- ---------------- -------- ----------
13 0.1 111.9 15.7 88.0 9
26 0.3 111.9 15.7 88.0 9
52 0.5 111.9 15.7 88.0 9
77 0.8 111.9 6.1 95.0 6
103 1.0 111.9 0.0 100.0 5
124 1.2 111.9 0.0 100.0 5
144 1.4 111.9 0.0 100.0 5
165 1.6 111.9 0.0 100.0 4
185 1.8 111.9 0.0 100.0 4
206 2.0 111.9 0.0 100.0 4
309 3.0 111.9 0.0 100.0 0
412 4.0 111.9 0.0 100.0 0
618 6.0 111.9 0.0 100.0 0
824 8.0 111.9 0.0 100.0 0
-------------------------------------------------------------
Shared Pool Advisory DB/Inst: IVRS/ivrs Snap: 1122
-> SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
in the Library Cache, and the physical number of memory objects associated
with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid.
Est LC Est LC Est LC Est LC
Shared SP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
60 .4 17 2,234 1,772 1.0 202 1.1 17,917
76 .5 27 3,765 1,792 1.0 182 1.0 18,013
92 .7 27 3,765 1,792 1.0 182 1.0 18,013
108 .8 27 3,765 1,792 1.0 182 1.0 18,013
124 .9 27 3,765 1,792 1.0 182 1.0 18,013
140 1.0 27 3,765 1,792 1.0 182 1.0 18,013
156 1.1 27 3,765 1,792 1.0 182 1.0 18,013
172 1.2 27 3,765 1,792 1.0 182 1.0 18,013
188 1.3 27 3,765 1,792 1.0 182 1.0 18,013
204 1.5 27 3,765 1,792 1.0 182 1.0 18,013
220 1.6 27 3,765 1,792 1.0 182 1.0 18,013
236 1.7 27 3,765 1,792 1.0 182 1.0 18,013
252 1.8 27 3,765 1,792 1.0 182 1.0 18,013
268 1.9 27 3,765 1,792 1.0 182 1.0 18,013
284 2.0 27 3,765 1,792 1.0 182 1.0 18,013
-------------------------------------------------------------
SGA Target Advisory DB/Inst: IVRS/ivrs Snap: 1122
SGA Target SGA Size Est DB Est Physical
Size (M) Factor Time (s) Reads
---------- ---------- ------------ ----------------
156 0.5 1,084 14,875
234 0.8 1,057 14,734
312 1.0 1,057 14,734
390 1.3 1,057 14,734
468 1.5 1,057 14,734
546 1.8 1,057 14,734
624 2.0 1,057 14,734
-------------------------------------------------------------
Streams Pool Advisory DB/Inst: IVRS/ivrs Snap: 1122
Size for Size Est Spill Est Spill Est Unspill Est Unspill
Est (MB) Factor Count Time (s) Count Time (s)
---------- --------- ----------- ----------- ----------- -----------
4 1.0 0 0 0 0
8 2.0 0 0 0 0
12 3.0 0 0 0 0
16 4.0 0 0 0 0
20 5.0 0 0 0 0
24 6.0 0 0 0 0
28 7.0 0 0 0 0
32 8.0 0 0 0 0
36 9.0 0 0 0 0
40 10.0 0 0 0 0
44 11.0 0 0 0 0
48 12.0 0 0 0 0
52 13.0 0 0 0 0
56 14.0 0 0 0 0
60 15.0 0 0 0 0
64 16.0 0 0 0 0
68 17.0 0 0 0 0
72 18.0 0 0 0 0
76 19.0 0 0 0 0
80 20.0 0 0 0 0
-------------------------------------------------------------
Java Pool Advisory DB/Inst: IVRS/ivrs Snap: 1122
Est LC Est LC Est LC Est LC
Java JP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
8 1.0 1 132 28 1.0 182 1.0 57
12 1.5 1 132 28 1.0 182 1.0 57
16 2.0 1 132 28 1.0 182 1.0 57
-------------------------------------------------------------
Buffer Wait Statistics DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
Enqueue Activity DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
Undo Segment Summary DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Num Undo Number of Max Qry Max Tx Min/Max STO/ uS/uR/uU/
TS# Blocks (K) Transactions Len (s) Concurcy TR (mins) OOS eS/eR/eU
---- ---------- --------------- -------- -------- --------- ----- --------------
1 3.0 309 0 3 15/15 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Undo Segment Stats DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Most recent 35 Undostat rows, ordered by Time desc
Num Undo Number of Max Qry Max Tx Tun Ret STO/ uS/uR/uU/
End Time Blocks Transactions Len (s) Concy (mins) OOS eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- ------------
02-Jun 00:00 86 59 0 2 15 0/0 0/0/0/0/0/0
01-Jun 23:50 2,907 250 0 3 15 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Latch Activity DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Name Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
ASM db client latch 400 0.0 N/A 0 0 N/A
ASM map operation freeli 10 0.0 N/A 0 0 N/A
ASM map operation hash t 4,890 0.0 N/A 0 0 N/A
ASM network background l 240 0.0 N/A 0 0 N/A
AWR Alerted Metric Eleme 2,232 0.0 N/A 0 0 N/A
Consistent RBA 34 0.0 N/A 0 0 N/A
FAL request queue 13 0.0 N/A 0 0 N/A
FAL subheap alocation 13 0.0 N/A 0 0 N/A
FOB s.o list latch 57 0.0 N/A 0 0 N/A
In memory undo latch 404 0.0 N/A 0 160 0.0
JOX SGA heap latch 26 0.0 N/A 0 0 N/A
JS mem alloc latch 2 0.0 N/A 0 0 N/A
JS queue access latch 2 0.0 N/A 0 0 N/A
JS queue state obj latch 4,356 0.0 N/A 0 0 N/A
JS slv state obj latch 4 0.0 N/A 0 0 N/A
KFK SGA context latch 304 0.0 N/A 0 0 N/A
KFMD SGA 8 0.0 N/A 0 0 N/A
KMG MMAN ready and start 201 0.0 N/A 0 0 N/A
KTF sga latch 1 0.0 N/A 0 194 0.0
KWQMN job cache list lat 6 0.0 N/A 0 0 N/A
KWQP Prop Status 3 0.0 N/A 0 0 N/A
MQL Tracking Latch 0 N/A N/A 0 12 0.0
Memory Management Latch 0 N/A N/A 0 201 0.0
OS process 33 0.0 N/A 0 0 N/A
OS process allocation 224 0.0 N/A 0 0 N/A
OS process: request allo 7 0.0 N/A 0 0 N/A
PL/SQL warning settings 28 0.0 N/A 0 0 N/A
SGA IO buffer pool latch 0 N/A N/A 0 1 0.0
SQL memory manager latch 2 0.0 N/A 0 199 0.0
SQL memory manager worka 13,561 0.0 N/A 0 0 N/A
Shared B-Tree 28 0.0 N/A 0 0 N/A
active checkpoint queue 595 0.0 N/A 0 0 N/A
active service list 1,229 0.0 N/A 0 204 0.0
archive control 14 0.0 N/A 0 0 N/A
archive process latch 212 0.0 N/A 0 0 N/A
cache buffer handles 118 0.0 N/A 0 0 N/A
cache buffers chains 19,065 0.0 N/A 0 399 0.0
cache buffers lru chain 1,661 0.0 N/A 0 14 0.0
channel handle pool latc 7 0.0 N/A 0 0 N/A
channel operations paren 2,822 0.0 N/A 0 0 N/A
checkpoint queue latch 5,805 0.0 N/A 0 558 0.0
client/application info 11 0.0 N/A 0 0 N/A
commit callback allocati 12 0.0 N/A 0 0 N/A
compile environment latc 13 0.0 N/A 0 0 N/A
dml lock allocation 392 0.0 N/A 0 0 N/A
dummy allocation 11 0.0 N/A 0 0 N/A
enqueue hash chains 10,603 0.0 N/A 0 0 N/A
enqueues 9,944 0.0 N/A 0 0 N/A
event group latch 4 0.0 N/A 0 0 N/A
file cache latch 35 0.0 N/A 0 0 N/A
hash table modification 2 0.0 N/A 0 0 N/A
job workq parent latch 0 N/A N/A 0 4 0.0
job_queue_processes para 12 0.0 N/A 0 0 N/A
ksuosstats global area 45 0.0 N/A 0 0 N/A
ktm global data 2 0.0 N/A 0 0 N/A
kwqbsn:qsga 27 0.0 N/A 0 0 N/A
lgwr LWN SCN 214 0.0 N/A 0 0 N/A
library cache 2,097 0.0 N/A 0 0 N/A
library cache load lock 0 N/A N/A 0 10 0.0
library cache lock 1,289 0.0 N/A 0 0 N/A
Latch Activity DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Name Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
library cache lock alloc 72 0.0 N/A 0 0 N/A
library cache pin 658 0.0 N/A 0 0 N/A
library cache pin alloca 10 0.0 N/A 0 0 N/A
list of block allocation 30 0.0 N/A 0 0 N/A
logminer context allocat 1 0.0 N/A 0 0 N/A
messages 5,560 0.0 N/A 0 0 N/A
mostly latch-free SCN 214 0.0 N/A 0 0 N/A
multiblock read objects 26 0.0 N/A 0 2 0.0
ncodef allocation latch 10 0.0 N/A 0 0 N/A
object queue header heap 595 0.0 N/A 0 0 N/A
object queue header oper 2,935 0.0 N/A 0 0 N/A
object stats modificatio 1 0.0 N/A 0 1 0.0
parallel query alloc buf 80 0.0 N/A 0 0 N/A
parameter table allocati 5 0.0 N/A 0 0 N/A
post/wait queue 8 0.0 N/A 0 3 0.0
process allocation 7 0.0 N/A 0 4 0.0
process group creation 7 0.0 N/A 0 0 N/A
qmn task queue latch 88 0.0 N/A 0 0 N/A
redo allocation 737 0.0 N/A 0 1,789 0.0
redo copy 0 N/A N/A 0 1,788 0.0
redo writing 1,117 0.0 N/A 0 0 N/A
resmgr group change latc 3 0.0 N/A 0 0 N/A
resmgr:actses active lis 9 0.0 N/A 0 0 N/A
resmgr:actses change gro 2 0.0 N/A 0 0 N/A
resmgr:free threads list 7 0.0 N/A 0 0 N/A
resmgr:schema config 2 0.0 N/A 0 0 N/A
row cache objects 1,794 0.0 N/A 0 0 N/A
rules engine aggregate s 6 0.0 N/A 0 0 N/A
rules engine rule set st 212 0.0 N/A 0 0 N/A
sequence cache 15 0.0 N/A 0 0 N/A
session allocation 1,135 0.0 N/A 0 0 N/A
session idle bit 40 0.0 N/A 0 0 N/A
session state list latch 16 0.0 N/A 0 0 N/A
session switching 10 0.0 N/A 0 0 N/A
session timer 204 0.0 N/A 0 0 N/A
shared pool 496 0.0 N/A 0 0 N/A
shared pool simulator 128 0.0 N/A 0 0 N/A
simulator hash latch 1,055 0.0 N/A 0 0 N/A
simulator lru latch 995 0.0 N/A 0 23 0.0
slave class 2 0.0 N/A 0 0 N/A
slave class create 8 12.5 1.0 0 0 N/A
sort extent pool 15 0.0 N/A 0 0 N/A
state object free list 2 0.0 N/A 0 0 N/A
statistics aggregation 140 0.0 N/A 0 0 N/A
threshold alerts latch 71 0.0 N/A 0 0 N/A
transaction allocation 12 0.0 N/A 0 0 N/A
transaction branch alloc 10 0.0 N/A 0 0 N/A
undo global data 403 0.0 N/A 0 0 N/A
user lock 4 0.0 N/A 0 0 N/A
-------------------------------------------------------------
Latch Sleep Breakdown DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> ordered by misses desc
Latch Name
----------------------------------------
Get Requests Misses Sleeps Spin Gets Sleep1 Sleep2 Sleep3
-------------- ----------- ----------- ---------- -------- -------- --------
slave class create
8 1 1 0 0 0 0
-------------------------------------------------------------
Latch Miss Sources DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
slave class create ksvcreate 0 1 0
-------------------------------------------------------------
Parent Latch Statistics DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
Child Latch Statistics DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Logical Reads DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Total Logical Reads: 6,053
-> Captured Segments account for 83.8% of Total
Tablespace Subobject Obj. Logical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS SYSTEM SMON_SCN_TIME TABLE 592 9.78
SYS SYSTEM JOB$ TABLE 464 7.67
PERFSTAT USERS STATS$SYSSTAT TABLE 320 5.29
PERFSTAT USERS STATS$LATCH_PK INDEX 304 5.02
SYS SYSAUX WRH$_SQL_PLAN_PK INDEX 288 4.76
-------------------------------------------------------------
Segments by Physical Reads DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Total Physical Reads: 114
-> Captured Segments account for 93.9% of Total
Tablespace Subobject Obj. Physical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS SYSAUX WRH$_SQL_PLAN_PK INDEX 36 31.58
PERFSTAT USERS STATS$EVENT_HISTOGRA TABLE 8 7.02
PERFSTAT USERS STATS$PARAMETER_PK INDEX 8 7.02
PERFSTAT USERS STATS$EVENT_HISTOGRA INDEX 6 5.26
SYS SYSAUX WRH$_ENQUEUE_STAT_PK INDEX 6 5.26
-------------------------------------------------------------
Segments by Row Lock Waits DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
Segments by ITL Waits DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Buffer Busy Waits DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
Dictionary Cache Stats DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> "Pct Misses" should be very low (< 2% in most cases)
-> "Final Usage" is the number of cache entries being used
Get Pct Scan Pct Mod Final
Cache Requests Miss Reqs Miss Reqs Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control 15 0.0 0 N/A 2 1
dc_global_oids 15 0.0 0 N/A 0 26
dc_object_ids 89 2.2 0 N/A 0 937
dc_objects 80 10.0 0 N/A 0 1,583
dc_profiles 2 0.0 0 N/A 0 1
dc_rollback_segments 78 0.0 0 N/A 0 16
dc_segments 2 0.0 0 N/A 2 819
dc_tablespace_quotas 4 0.0 0 N/A 0 2
dc_tablespaces 146 0.0 0 N/A 0 12
dc_usernames 15 0.0 0 N/A 0 16
dc_users 107 0.0 0 N/A 0 16
outstanding_alerts 32 9.4 0 N/A 6 26
-------------------------------------------------------------
Library Cache Activity DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> "Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 16 0.0 39 0.0 0 0
JAVA DATA 1 0.0 0 N/A 0 0
TABLE/PROCEDURE 19 0.0 121 0.0 0 0
TRIGGER 3 0.0 3 0.0 0 0
-------------------------------------------------------------
Process Memory Summary DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> B: Begin snap E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
-> Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> ordered by Begin/End snapshot, Alloc (MB) desc
Hist
Avg Std Dev Max Max
Alloc Used Alloc Alloc Alloc Alloc Num Num
Category (MB) (MB) (MB) (MB) (MB) (MB) Proc Alloc
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B Other 103.2 N/A 4.5 7.6 24 24 23 23
Freeable 6.7 .0 .7 .5 1 N/A 9 9
SQL .4 .2 .0 .0 0 2 9 8
PL/SQL .1 .0 .0 .0 0 0 21 21
E Other 103.9 N/A 4.3 7.4 24 24 24 24
Freeable 7.6 .0 .8 .5 1 N/A 10 10
SQL .4 .2 .0 .0 0 2 10 8
PL/SQL .1 .0 .0 .0 0 0 22 22
JAVA .0 .0 .0 .0 0 1 1 0
-------------------------------------------------------------
SGA Memory Summary DB/Inst: IVRS/ivrs Snaps: 1121-1122
End Size (Bytes)
SGA regions Begin Size (Bytes) (if different)
------------------------------ ------------------- -------------------
Database Buffers 159,383,552
Fixed Size 1,261,612
Redo Buffers 2,928,640
Variable Size 163,581,908
-------------------
sum 327,155,712
-------------------------------------------------------------
SGA breakdown difference DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> ordered by Pool, Name
-> N/A value for Begin MB or End MB indicates the size of that Pool/Name was
insignificant, or zero in that snapshot
Pool Name Begin MB End MB % Diff
------ ------------------------------ -------------- -------------- -------
java free memory 2.7 2.7 0.00
java joxlod exec hp 5.1 5.1 0.00
java joxs heap .2 .2 0.00
large ASM map operations hashta .2 .2 0.00
large CTWR dba buffer .4 .4 0.00
large PX msg pool .2 .2 0.00
large free memory 1.2 1.2 0.00
large krcc extent chunk 2.0 2.0 0.00
shared ASH buffers 2.0 2.0 0.00
shared CCursor 2.5 2.5 0.00
shared Heap0: KGL 2.1 2.1 0.00
shared KCB Table Scan Buffer 3.8 3.8 0.00
shared KGLS heap 5.2 5.2 0.00
shared KQR M PO 2.9 2.9 0.17
shared KSFD SGA I/O b 3.8 3.8 0.00
shared PCursor 1.6 1.6 0.00
shared PL/SQL MPCODE 1.8 1.8 0.00
shared free memory 69.3 69.2 -0.03
shared kglsim hash table bkts 2.0 2.0 0.00
shared library cache 3.8 3.8 0.00
shared row cache 3.6 3.6 0.00
shared sql area 9.5 9.5 0.00
stream free memory 4.0 4.0 0.00
buffer_cache 152.0 152.0 0.00
fixed_sga 1.2 1.2 0.00
log_buffer 2.8 2.8 0.00
-------------------------------------------------------------
Streams CPU/IO Usage DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Streams processes ordered by CPU usage
-> CPU and I/O Time in micro seconds
Session Type CPU Time User I/O Time Sys I/O Time
------------------------- -------------- -------------- --------------
QMON Coordinator 25,608 0 0
QMON Slaves 17,816 0 0
-------------------------------------------------------------
Streams Capture DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
Streams Apply DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Queues DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Subscribers DB/Inst: IVRS/ivrs Snaps: 1121-1122
No data exists for this section of the report.
-------------------------------------------------------------
Rule Set DB/Inst: IVRS/ivrs Snaps: 1121-1122
-> Rule Sets ordered by Evaluations
Fast SQL CPU Elapsed
Ruleset Name Evals Evals Execs Time Time
------------------------- -------- -------- -------- -------- --------
SYS.ALERT_QUE_R 6 0 0 0 1
-------------------------------------------------------------
Resource Limit Stats DB/Inst: IVRS/ivrs Snap: 1122
No data exists for this section of the report.
-------------------------------------------------------------
init.ora Parameters DB/Inst: IVRS/ivrs Snaps: 1121-1122
End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
audit_file_dest /oracle/app/oracle/admin/ivrs/adu
audit_sys_operations TRUE
background_dump_dest /oracle/app/oracle/admin/ivrs/bdu
compatible 10.2.0.3.0
control_files +DATA_1/ivrs/control01.ctl, +DATA
core_dump_dest /oracle/app/oracle/admin/ivrs/cdu
db_block_size 8192
db_domain karl.com
db_file_multiblock_read_count 16
db_name ivrs
db_recovery_file_dest /flash_reco/flash_recovery_area
db_recovery_file_dest_size 161061273600
dispatchers (PROTOCOL=TCP) (SERVICE=ivrsXDB)
job_queue_processes 10
log_archive_dest_1 LOCATION=USE_DB_RECOVERY_FILE_DES
log_archive_format ivrs_%t_%s_%r.arc
open_cursors 300
os_authent_prefix
os_roles FALSE
pga_aggregate_target 108003328
processes 150
recyclebin OFF
remote_login_passwordfile EXCLUSIVE
remote_os_authent FALSE
remote_os_roles FALSE
sga_target 327155712
spfile +DATA_1/ivrs/spfileivrs.ora
sql92_security TRUE
statistics_level TYPICAL
undo_management AUTO
undo_tablespace UNDOTBS1
user_dump_dest /oracle/app/oracle/admin/ivrs/udu
-------------------------------------------------------------
End of Report
Report written to awrrpt_1_1121_1122.txt
}}}
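To regenerate a text report like the one above, the stock AWR script is enough. A minimal sketch, assuming the usual DBA_HIST views and the stock @?/rdbms/admin/awrrpt.sql script (the DBID is the one shown in the report header):
{{{
-- pick the begin/end SNAP_IDs for the interval of interest
select snap_id, begin_interval_time, end_interval_time
  from dba_hist_snapshot
 where dbid = 2607950532
 order by snap_id;

-- then from SQL*Plus, run the stock report script and answer the prompts
-- (report type, number of days, begin/end snap, report name):
-- @?/rdbms/admin/awrrpt.sql
}}}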
! The ASH report for SNAP_ID 1121
{{{
Summary of All User Input
-------------------------
Format : TEXT
DB Id : 2607950532
Inst num : 1
Begin time : 01-Jun-10 23:50:00
End time : 02-Jun-10 00:00:00
Slot width : Default
Report targets : 0
Report name : ashrpt_1_0602_0000.txt
ASH Report For IVRS/ivrs
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
CPUs SGA Size Buffer Cache Shared Pool ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
1 312M (100%) 152M (48.7%) 114M (36.4%) 2.0M (0.6%)
Analysis Begin Time: 01-Jun-10 23:50:00
Analysis End Time: 02-Jun-10 00:00:00
Elapsed Time: 10.0 (mins)
Sample Count: 2
Average Active Sessions: 0.03
Avg. Active Session per CPU: 0.03
Report Target: None specified
Top User Events DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top Background Events DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
db file parallel write System I/O 100.00 0.03
-------------------------------------------------------------
Top Event P1/P2/P3 Values DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
Event % Event P1 Value, P2 Value, P3 Value % Activity
------------------------------ ------- ----------------------------- ----------
Parameter 1 Parameter 2 Parameter 3
-------------------------- -------------------------- --------------------------
db file parallel write 100.00 "1","0","2147483647" 100.00
requests interrupt timeout
-------------------------------------------------------------
Top Service/Module DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
Service Module % Activity Action % Action
-------------- ------------------------ ---------- ------------------ ----------
SYS$BACKGROUND UNNAMED 100.00 UNNAMED 100.00
-------------------------------------------------------------
Top Client IDs DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL Command Types DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL Statements DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL using literals DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top PL/SQL Procedures DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top Sessions DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
-> '# Samples Active' shows the number of ASH samples in which the session
was found waiting for that particular event. The percentage shown
in this column is calculated with respect to wall clock time
and not total database activity.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
when the session was waiting for that particular event
-> For sessions running Parallel Queries, this section will NOT aggregate
the PQ slave activity into the session issuing the PQ. Refer to
the 'Top Sessions running PQs' section for such statistics.
Sid, Serial# % Activity Event % Event
--------------- ---------- ------------------------------ ----------
User Program # Samples Active XIDs
-------------------- ------------------------------ ------------------ --------
167, 1 100.00 db file parallel write 100.00
SYS oracle@dbrocai...el.com (DBW0) 2/60 [ 3%] 0
-------------------------------------------------------------
Top Blocking Sessions DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top Sessions running PQs DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top DB Objects DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top DB Files DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top Latches DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
No data exists for this section of the report.
-------------------------------------------------------------
Activity Over Time DB/Inst: IVRS/ivrs (Jun 01 23:50 to 00:00)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period
Slot Event
Slot Time (Duration) Count Event Count % Event
-------------------- -------- ------------------------------ -------- -------
23:55:00 (5.0 min) 2 db file parallel write 2 100.00
-------------------------------------------------------------
End of Report
Report written to ashrpt_1_0602_0000.txt
}}}
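Quick sanity check on the headline numbers above: only 2 ASH samples landed in the 10-minute window, and the 0.03 Average Active Sessions works out if the report is reading the AWR-retained ASH data, which keeps roughly one sample every 10 seconds (so 2 * 10 / 600s = 0.03, and the "2/60 [ 3%]" in Top Sessions is 2 samples out of 60 possible slots). A minimal sketch of the same arithmetic, assuming DBA_HIST_ACTIVE_SESS_HISTORY is the sample source:
{{{
-- approximate AAS from retained ASH samples: samples * 10s / elapsed seconds
select count(*) * 10 / 600 as approx_avg_active_sessions
  from dba_hist_active_sess_history
 where sample_time between to_timestamp('2010-06-01 23:50:00','YYYY-MM-DD HH24:MI:SS')
                       and to_timestamp('2010-06-02 00:00:00','YYYY-MM-DD HH24:MI:SS');
}}}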
! The AWR report for SNAP_ID 1123
{{{
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 1123 02-Jun-10 00:10:33 23 1.7
End Snap: 1124 02-Jun-10 00:20:35 22 1.8
Elapsed: 10.03 (mins)
DB Time: 0.74 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
---------- ----------
Buffer Cache: 152M 152M Std Block Size: 8K
Shared Pool Size: 140M 140M Log Buffer: 2,860K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 2,564.40 110,234.57
Logical reads: 155.98 6,704.93
Block changes: 7.04 302.79
Physical reads: 5.37 230.93
Physical writes: 1.11 47.64
User calls: 0.08 3.36
Parses: 1.85 79.64
Hard parses: 0.21 9.07
Sorts: 1.59 68.21
Logons: 0.01 0.64
Executes: 3.70 159.21
Transactions: 0.02
% Blocks changed per Read: 4.52 Recursive Call %: 99.83
Rollback per transaction %: 7.14 Rows per Sort: 33.50
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 96.56 In-memory Sort %: 100.00
Library Hit %: 85.80 Soft Parse %: 88.61
Execute to Parse %: 49.98 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 67.00 % Non-Parse CPU: 31.85
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 50.76 54.41
% SQL with executions>1: 78.97 70.90
% Memory for SQL w/exec>1: 73.67 60.57
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db file sequential read 1,160 30 26 67.9 User I/O
CPU time 15 33.4
db file parallel write 416 12 29 26.8 System I/O
db file scattered read 476 6 12 12.6 User I/O
log file parallel write 48 2 48 5.2 System I/O
-------------------------------------------------------------
Time Model Statistics DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Total time in database user-calls (DB Time): 44.5s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 36.7 82.5
DB CPU 14.8 33.4
parse time elapsed 9.7 21.8
hard parse elapsed time 9.6 21.7
PL/SQL execution elapsed time 0.1 .2
connection management call elapsed time 0.0 .0
repeated bind elapsed time 0.0 .0
DB time 44.5 N/A
background elapsed time 34.3 N/A
background cpu time 10.0 N/A
-------------------------------------------------------------
Wait Class DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
-------------------- ---------------- ------ ---------------- ------- ---------
User I/O 1,637 .0 36 22 116.9
System I/O 2,194 .0 17 8 156.7
Concurrency 6 .0 1 156 0.4
Commit 3 .0 0 81 0.2
Network 199 .0 0 0 14.2
Other 24 .0 0 1 1.7
-------------------------------------------------------------
Wait Events DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
db file sequential read 1,160 .0 30 26 82.9
db file parallel write 416 .0 12 29 29.7
db file scattered read 476 .0 6 12 34.0
log file parallel write 48 .0 2 48 3.4
control file parallel write 199 .0 2 9 14.2
os thread startup 6 .0 1 156 0.4
control file sequential read 1,531 .0 1 0 109.4
log file sync 3 .0 0 81 0.2
SQL*Net more data to client 168 .0 0 1 12.0
latch free 3 .0 0 4 0.2
change tracking file synchro 9 .0 0 1 0.6
change tracking file synchro 11 .0 0 0 0.8
direct path write 1 .0 0 2 0.1
latch: redo allocation 1 .0 0 0 0.1
SQL*Net message to client 23 .0 0 0 1.6
SQL*Net more data from clien 8 .0 0 0 0.6
Streams AQ: qmn slave idle w 22 .0 602 27346 1.6
Streams AQ: qmn coordinator 43 51.2 602 13991 3.1
ASM background timer 121 .0 591 4884 8.6
class slave wait 5 100.0 586 117189 0.4
virtual circuit status 20 100.0 586 29294 1.4
jobq slave wait 47 97.9 135 2868 3.4
SQL*Net message from client 23 .0 6 260 1.6
-------------------------------------------------------------
Background Wait Events DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
db file parallel write 416 .0 12 29 29.7
log file parallel write 48 .0 2 48 3.4
control file parallel write 199 .0 2 9 14.2
db file sequential read 9 .0 1 125 0.6
os thread startup 6 .0 1 156 0.4
control file sequential read 208 .0 0 2 14.9
events in waitclass Other 21 .0 0 0 1.5
rdbms ipc message 2,373 98.4 8,598 3623 169.5
Streams AQ: qmn slave idle w 22 .0 602 27346 1.6
Streams AQ: qmn coordinator 43 51.2 602 13991 3.1
ASM background timer 121 .0 591 4884 8.6
pmon timer 203 100.0 586 2887 14.5
class slave wait 5 100.0 586 117189 0.4
smon timer 2 50.0 494 246915 0.1
-------------------------------------------------------------
Operating System Statistics DB/Inst: IVRS/ivrs Snaps: 1123-1124
Statistic Total
-------------------------------- --------------------
BUSY_TIME 60,224
IDLE_TIME 0
IOWAIT_TIME 0
NICE_TIME 48,037
SYS_TIME 5,935
USER_TIME 6,023
LOAD 1
RSRC_MGR_CPU_WAIT_TIME 0
PHYSICAL_MEMORY_BYTES 54,808
NUM_CPUS 1
-------------------------------------------------------------
Service Statistics DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> ordered by DB Time
Physical Logical
Service Name DB Time (s) DB CPU (s) Reads Reads
-------------------------------- ------------ ------------ ---------- ----------
ivrs.karl.com 42.4 13.8 2,743 37,440
SYS$USERS 2.1 1.1 111 2,227
SYS$BACKGROUND 0.0 0.0 381 54,211
ivrsXDB 0.0 0.0 0 0
-------------------------------------------------------------
Service Wait Class Stats DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
classes: User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in centisecond (100th of a second)
Service Name
----------------------------------------------------------------
User I/O User I/O Concurcy Concurcy Admin Admin Network Network
Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time
--------- --------- --------- --------- --------- --------- --------- ---------
ivrs.karl.com
1249 2827 0 0 0 0 196 10
SYS$USERS
53 82 0 0 0 0 0 0
SYS$BACKGROUND
335 667 6 93 0 0 0 0
-------------------------------------------------------------
SQL ordered by Elapsed Time DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
42 14 1 42.2 95.0 ctdqj2v5dt8t4
Module: EXCEL.EXE
SELECT s0.snap_id id, TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm, s
0.instance_number inst, round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_I
NTERVAL_TIME) * 1440 + EXTRACT(HOUR FROM s1.E
ND_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
9 4 1 9.3 20.8 05s9358mm6vrr
begin dbms_feature_usage_internal.exec_db_usage_sampling(:bind1); end;
2 1 1 1.9 4.2 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
1 1 1 1.4 3.1 43w0r9122v7jm
Module: MMON_SLAVE
select max(bytes) from dba_segments
1 0 1 1.0 2.3 1m7zxctxm9bhq
Module: MMON_SLAVE
select decode(cap + app + prop + msg + aq , 0, 0, 1), 0, NULL from (select decod
e (count(*), 0, 0, 1) cap from dba_capture), (select decode (count(*), 0, 0, 1)
app from dba_apply), (select decode (count(*), 0, 0, 1) prop from dba_propagatio
n), (select decode (count(*), 0, 0, 1) msg from dba_streams_message_consumers
1 1 1 0.8 1.9 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
1 1 1 0.8 1.8 5h7w8ykwtb2xt
INSERT INTO SYS.WRI$_ADV_PARAMETERS (TASK_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
1 0 1 0.8 1.7 fmysjzxwxjuwj
Module: MMON_SLAVE
BEGIN DBMS_FEATURE_PARTITION_SYSTEM(:feature_boolean, :aux_cnt, :feature_info);
END;
1 0 55 0.0 1.6 cqgv56fmuj63x
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and p_obj#=obj#
(+) order by order#
1 0 1 0.7 1.5 6cxqh7mktnbjm
insert into smon_scn_time (thread, time_mp, time_dp, scn, scn_wrp, scn_bas, num
_mappings, tim_scn_map) values (0, :1, :2, :3, :4, :5, :6, :7)
1 0 1 0.7 1.5 9wa06dzuu5g37
Module: MMON_SLAVE
select count(*), count(*), NULL from DBA_OLAP2_CUBES where invalid != 'Y' and OW
NER = 'SYS' and CUBE_NAME = 'STKPRICE_TBL'
1 0 55 0.0 1.5 8swypbbr0m372
select order#,columns,types from access$ where d_obj#=:1
1 0 1 0.6 1.4 3j4q75fw0kwyt
Module: MMON_SLAVE
SELECT NUM||':'||IDX_OR_TAB||':'||PTYPE||':'||SUBPTYPE||':'||PCNT||':'||SUBPCNT|
|':'|| PCOLS||':'||SUBPCOLS||':'||IDX_FLAGS||':'|| IDX_TYPE||':'||IDX_UK||'|' MY
_STRING FROM (SELECT * FROM (SELECT /*+ full(o) */ DENSE_RANK() OVER (ORDER BY D
SQL ordered by Elapsed Time DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
ECODE(I.BO#,NULL,P.OBJ#,I.BO#)) NUM, DECODE(O.TYPE#,1,'I',2,'T',NULL) IDX_OR_TAB
1 0 51 0.0 1.4 db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucket
1 0 1 0.6 1.3 7hksk9agpp7r0
Module: MMON_SLAVE
select count(*), NULL, prvt_hdm.db_feature_clob from wri$_adv_usage u, wri$_adv_
definitions a where a.name = 'ADDM' and u.advisor_id = a.id and u.num_execs > 0
and u.last_exec_time >= (select max(last_sample_date) from wri$_dbu_usage_sampl
e)
1 0 1 0.6 1.3 b19z63j9a3usu
Module: MMON_SLAVE
select count(*), NULL, NULL from dba_repcat
0 0 22 0.0 1.1 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
0 0 210 0.0 1.1 96g93hntrzjtr
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_
size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, a
vgcln from hist_head$ where obj#=:1 and intcol#=:2
0 0 32 0.0 1.0 g3wrkmxkxzhf2
select cols,audit$,textlength,intcols,property,flags,rowid from view$ where obj#
=:1
-------------------------------------------------------------
SQL ordered by CPU Time DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ----------- ------- -------------
14 42 1 13.62 95.0 ctdqj2v5dt8t4
Module: EXCEL.EXE
SELECT s0.snap_id id, TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm, s
0.instance_number inst, round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_I
NTERVAL_TIME) * 1440 + EXTRACT(HOUR FROM s1.E
ND_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
4 9 1 4.04 20.8 05s9358mm6vrr
begin dbms_feature_usage_internal.exec_db_usage_sampling(:bind1); end;
1 2 1 0.98 4.2 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
1 1 1 0.70 3.1 43w0r9122v7jm
Module: MMON_SLAVE
select max(bytes) from dba_segments
1 1 1 0.70 1.9 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
1 1 1 0.65 1.8 5h7w8ykwtb2xt
INSERT INTO SYS.WRI$_ADV_PARAMETERS (TASK_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
0 1 1 0.37 2.3 1m7zxctxm9bhq
Module: MMON_SLAVE
select decode(cap + app + prop + msg + aq , 0, 0, 1), 0, NULL from (select decod
e (count(*), 0, 0, 1) cap from dba_capture), (select decode (count(*), 0, 0, 1)
app from dba_apply), (select decode (count(*), 0, 0, 1) prop from dba_propagatio
n), (select decode (count(*), 0, 0, 1) msg from dba_streams_message_consumers
0 1 1 0.36 1.7 fmysjzxwxjuwj
Module: MMON_SLAVE
BEGIN DBMS_FEATURE_PARTITION_SYSTEM(:feature_boolean, :aux_cnt, :feature_info);
END;
0 0 1 0.31 0.7 1crajpb7j5tyz
INSERT INTO STATS$SGA_TARGET_ADVICE ( SNAP_ID , DBID , INSTANCE_NUMBER , SGA_SIZ
E , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TIME_FACTOR , ESTD_PHYSICAL_READS )
SELECT :B3 , :B2 , :B1 , SGA_SIZE , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TI
ME_FACTOR , ESTD_PHYSICAL_READS FROM V$SGA_TARGET_ADVICE
0 0 1 0.29 0.7 bunssq950snhf
insert into wrh$_sga_target_advice (snap_id, dbid, instance_number, SGA_SIZ
E, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_READS) select :snap_id, :dbi
d, :instance_number, SGA_SIZE, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_R
EADS from v$sga_target_advice
0 1 1 0.25 1.4 3j4q75fw0kwyt
Module: MMON_SLAVE
SELECT NUM||':'||IDX_OR_TAB||':'||PTYPE||':'||SUBPTYPE||':'||PCNT||':'||SUBPCNT|
|':'|| PCOLS||':'||SUBPCOLS||':'||IDX_FLAGS||':'|| IDX_TYPE||':'||IDX_UK||'|' MY
_STRING FROM (SELECT * FROM (SELECT /*+ full(o) */ DENSE_RANK() OVER (ORDER BY D
ECODE(I.BO#,NULL,P.OBJ#,I.BO#)) NUM, DECODE(O.TYPE#,1,'I',2,'T',NULL) IDX_OR_TAB
0 1 1 0.17 1.3 7hksk9agpp7r0
Module: MMON_SLAVE
select count(*), NULL, prvt_hdm.db_feature_clob from wri$_adv_usage u, wri$_adv_
SQL ordered by CPU Time DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ----------- ------- -------------
definitions a where a.name = 'ADDM' and u.advisor_id = a.id and u.num_execs > 0
and u.last_exec_time >= (select max(last_sample_date) from wri$_dbu_usage_sampl
e)
0 1 1 0.15 1.5 9wa06dzuu5g37
Module: MMON_SLAVE
select count(*), count(*), NULL from DBA_OLAP2_CUBES where invalid != 'Y' and OW
NER = 'SYS' and CUBE_NAME = 'STKPRICE_TBL'
0 1 55 0.00 1.6 cqgv56fmuj63x
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and p_obj#=obj#
(+) order by order#
0 1 1 0.14 1.3 b19z63j9a3usu
Module: MMON_SLAVE
select count(*), NULL, NULL from dba_repcat
0 1 51 0.00 1.4 db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucket
0 1 55 0.00 1.5 8swypbbr0m372
select order#,columns,types from access$ where d_obj#=:1
0 0 22 0.00 1.1 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
0 0 210 0.00 1.1 96g93hntrzjtr
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_
size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, a
vgcln from hist_head$ where obj#=:1 and intcol#=:2
0 0 32 0.00 1.0 g3wrkmxkxzhf2
select cols,audit$,textlength,intcols,property,flags,rowid from view$ where obj#
=:1
0 1 1 0.02 1.5 6cxqh7mktnbjm
insert into smon_scn_time (thread, time_mp, time_dp, scn, scn_wrp, scn_bas, num
_mappings, tim_scn_map) values (0, :1, :2, :3, :4, :5, :6, :7)
-------------------------------------------------------------
SQL ordered by Gets DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets: 93,869
-> Captured SQL account for 89.6% of Total
Gets CPU Elapsed
Buffer Gets Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
49,359 1 49,359.0 52.6 4.04 9.26 05s9358mm6vrr
begin dbms_feature_usage_internal.exec_db_usage_sampling(:bind1); end;
37,427 1 37,427.0 39.9 13.62 42.23 ctdqj2v5dt8t4
Module: EXCEL.EXE
SELECT s0.snap_id id, TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm, s
0.instance_number inst, round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_I
NTERVAL_TIME) * 1440 + EXTRACT(HOUR FROM s1.E
ND_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
27,158 1 27,158.0 28.9 0.16 0.29 6s0ngxbx4fp4s
Module: MMON_SLAVE
select count(*) from col$ c, obj$ o where c.charsetform = 2 and c.obj# = o.obj#
and o.owner# not in (select distinct u.user_id from all_users u, sys.ku_noex
p_view k where (k.OBJ_TYPE='USER' and k.name=u.username) or (u.username='SYSTEM
'))
5,424 1 5,424.0 5.8 0.70 1.40 43w0r9122v7jm
Module: MMON_SLAVE
select max(bytes) from dba_segments
3,486 1 3,486.0 3.7 0.36 0.76 fmysjzxwxjuwj
Module: MMON_SLAVE
BEGIN DBMS_FEATURE_PARTITION_SYSTEM(:feature_boolean, :aux_cnt, :feature_info);
END;
1,938 1 1,938.0 2.1 0.98 1.87 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
1,615 1 1,615.0 1.7 0.21 0.23 czrnm8k3h9jzq
Module: MMON_SLAVE
select count(*) from sys.tab$ t, sys.obj$ o where t.obj# = o.obj# and bitand(t.p
roperty, 1) = 0 and o.owner# not in (select u.user# from user$ u where u.name in
('SYS', 'SYSTEM'))
1,468 1 1,468.0 1.6 0.09 0.09 7qhrg6s7taf2r
Module: MMON_SLAVE
BEGIN DBMS_FEATURE_PARTITION_USER(:feature_boolean, :aux_cnt, :feature_info); E
ND;
1,136 1 1,136.0 1.2 0.25 0.62 3j4q75fw0kwyt
Module: MMON_SLAVE
SELECT NUM||':'||IDX_OR_TAB||':'||PTYPE||':'||SUBPTYPE||':'||PCNT||':'||SUBPCNT|
|':'|| PCOLS||':'||SUBPCOLS||':'||IDX_FLAGS||':'|| IDX_TYPE||':'||IDX_UK||'|' MY
_STRING FROM (SELECT * FROM (SELECT /*+ full(o) */ DENSE_RANK() OVER (ORDER BY D
ECODE(I.BO#,NULL,P.OBJ#,I.BO#)) NUM, DECODE(O.TYPE#,1,'I',2,'T',NULL) IDX_OR_TAB
950 1 950.0 1.0 0.06 0.07 cfz686a6qp0kg
select o.obj#, u.name, o.name, t.spare1, DECODE(bitand(t.flags, 26843545
6), 268435456, t.initrans, t.pctfree$) from sys.obj$ o, sys.user$ u, sys.tab$
t where (bitand(t.trigflag, 1048576) = 1048576) and o.obj#=t.obj#
and o.owner# = u.user#
-------------------------------------------------------------
SQL ordered by Reads DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Total Disk Reads: 3,233
-> Captured SQL account for 99.8% of Total
Reads CPU Elapsed
Physical Reads Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
2,743 1 2,743.0 84.8 13.62 42.23 ctdqj2v5dt8t4
Module: EXCEL.EXE
SELECT s0.snap_id id, TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm, s
0.instance_number inst, round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_I
NTERVAL_TIME) * 1440 + EXTRACT(HOUR FROM s1.E
ND_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
298 1 298.0 9.2 4.04 9.26 05s9358mm6vrr
begin dbms_feature_usage_internal.exec_db_usage_sampling(:bind1); end;
111 1 111.0 3.4 0.98 1.87 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
46 1 46.0 1.4 0.15 0.66 9wa06dzuu5g37
Module: MMON_SLAVE
select count(*), count(*), NULL from DBA_OLAP2_CUBES where invalid != 'Y' and OW
NER = 'SYS' and CUBE_NAME = 'STKPRICE_TBL'
40 55 0.7 1.2 0.14 0.73 cqgv56fmuj63x
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and p_obj#=obj#
(+) order by order#
38 1 38.0 1.2 0.70 1.40 43w0r9122v7jm
Module: MMON_SLAVE
select max(bytes) from dba_segments
34 55 0.6 1.1 0.10 0.66 8swypbbr0m372
select order#,columns,types from access$ where d_obj#=:1
33 1 33.0 1.0 0.37 1.01 1m7zxctxm9bhq
Module: MMON_SLAVE
select decode(cap + app + prop + msg + aq , 0, 0, 1), 0, NULL from (select decod
e (count(*), 0, 0, 1) cap from dba_capture), (select decode (count(*), 0, 0, 1)
app from dba_apply), (select decode (count(*), 0, 0, 1) prop from dba_propagatio
n), (select decode (count(*), 0, 0, 1) msg from dba_streams_message_consumers
32 22 1.5 1.0 0.08 0.49 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
32 51 0.6 1.0 0.11 0.62 db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucket
-------------------------------------------------------------
SQL ordered by Executions DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Total Executions: 2,229
-> Captured SQL account for 68.5% of Total
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) SQL Id
------------ --------------- -------------- ---------- ----------- -------------
210 161 0.8 0.00 0.00 96g93hntrzjtr
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_
size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, a
vgcln from hist_head$ where obj#=:1 and intcol#=:2
149 149 1.0 0.00 0.00 grwydz59pu6mc
select text from view$ where rowid=:1
120 1 0.0 0.00 0.00 6ssrk2dqj7jbx
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= next_date) and (n
ext_date <= :2)) or ((last_date is null) and (next_date < :3))) and (field1
= :4 or (field1 = 0 and 'Y' = :5)) and (this_date is null) order by next_date, j
ob
93 90 1.0 0.00 0.00 04xtrk7uyhknh
select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1, spare2 f
rom obj$ where owner#=:1 and name=:2 and namespace=:3 and remoteowner is null an
d linkname is null and subname is null
86 86 1.0 0.00 0.00 5ngzsfstg8tmy
select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,o.dataobj#
,o.flags from obj$ o where o.obj#=:1
64 748 11.7 0.00 0.00 83taa7kaw59c1
select name,intcol#,segcol#,type#,length,nvl(precision#,0),decode(type#,2,nvl(sc
ale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180,scale,181,scale,182,scale,183,s
cale,231,scale,0),null$,fixedstorage,nvl(deflength,0),default$,rowid,col#,proper
ty, nvl(charsetid,0),nvl(charsetform,0),spare1,spare2,nvl(spare3,0) from col$ wh
62 62 1.0 0.00 0.00 2ym6hhaq30r73
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(lis
ts,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0),NVL(scanhint,0) fro
m seg$ where ts#=:1 and file#=:2 and block#=:3
55 143 2.6 0.00 0.01 8swypbbr0m372
select order#,columns,types from access$ where d_obj#=:1
55 184 3.3 0.00 0.01 cqgv56fmuj63x
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and p_obj#=obj#
(+) order by order#
53 86 1.6 0.00 0.00 6769wyy3yf66f
select pos#,intcol#,col#,spare1,bo#,spare2 from icol$ where obj#=:1
51 547 10.7 0.00 0.01 db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucket
44 52 1.2 0.00 0.00 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0
39 14 0.4 0.00 0.00 asvzxj61dc5vs
select timestamp, flags from fixed_obj$ where obj#=:1
32 32 1.0 0.00 0.00 1gu8t96d0bdmu
select t.ts#,t.file#,t.block#,nvl(t.bobj#,0),nvl(t.tab#,0),t.intcols,nvl(t.cluco
ls,0),t.audit$,t.flags,t.pctfree$,t.pctused$,t.initrans,t.maxtrans,t.rowcnt,t.bl
kcnt,t.empcnt,t.avgspc,t.chncnt,t.avgrln,t.analyzetime,t.samplesize,t.cols,t.pro
SQL ordered by Executions DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Total Executions: 2,229
-> Captured SQL account for 68.5% of Total
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) SQL Id
------------ --------------- -------------- ---------- ----------- -------------
perty,nvl(t.degree,1),nvl(t.instances,1),t.avgspc_flb,t.flbcnt,t.kernelcols,nvl(
32 53 1.7 0.00 0.00 7ng34ruy5awxq
select i.obj#,i.ts#,i.file#,i.block#,i.intcols,i.type#,i.flags,i.property,i.pctf
ree$,i.initrans,i.maxtrans,i.blevel,i.leafcnt,i.distkey,i.lblkkey,i.dblkkey,i.cl
ufac,i.cols,i.analyzetime,i.samplesize,i.dataobj#,nvl(i.degree,1),nvl(i.instance
s,1),i.rowcnt,mod(i.pctthres$,256),i.indmethod#,i.trunccnt,nvl(c.unicols,0),nvl(
32 32 1.0 0.00 0.01 g3wrkmxkxzhf2
select cols,audit$,textlength,intcols,property,flags,rowid from view$ where obj#
=:1
25 11 0.4 0.00 0.00 2q93zsrvbdw48
select grantee#,privilege#,nvl(col#,0),max(mod(nvl(option$,0),2))from objauth$ w
here obj#=:1 group by grantee#,privilege#,nvl(col#,0) order by grantee#
25 0 0.0 0.00 0.00 6aq34nj2zb2n7
select col#, grantee#, privilege#,max(mod(nvl(option$,0),2)) from objauth$ where
obj#=:1 and col# is not null group by privilege#, col#, grantee# order by col#,
grantee#
-------------------------------------------------------------
SQL ordered by Parse Calls DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Total Parse Calls: 1,115
-> Captured SQL account for 66.7% of Total
% Total
Parse Calls Executions Parses SQL Id
------------ ------------ --------- -------------
149 149 13.36 grwydz59pu6mc
select text from view$ where rowid=:1
62 62 5.56 2ym6hhaq30r73
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(lis
ts,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0),NVL(scanhint,0) fro
m seg$ where ts#=:1 and file#=:2 and block#=:3
55 55 4.93 8swypbbr0m372
select order#,columns,types from access$ where d_obj#=:1
55 55 4.93 cqgv56fmuj63x
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and p_obj#=obj#
(+) order by order#
44 44 3.95 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0
39 39 3.50 asvzxj61dc5vs
select timestamp, flags from fixed_obj$ where obj#=:1
22 22 1.97 39m4sx9k63ba2
select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#,length,piece from idl_ub2$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
22 22 1.97 c6awqs517jpj0
select /*+ index(idl_char$ i_idl_char1) +*/ piece#,length,piece from idl_char$ w
here obj#=:1 and part=:2 and version=:3 order by piece#
22 22 1.97 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
22 22 1.97 ga9j9xk5cy9s0
select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece from idl_sb4$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
16 16 1.43 b1wc53ddd6h3p
select audit$,options from procedure$ where obj#=:1
14 14 1.26 49s332uhbnsma
declare vsn varchar2(20); begin vsn :=
dbms_rcvman.getPackageVersion; :pkg_vsn:pkg_vsn_i := vsn;
if vsn is not null then :pkg_vsnub4 :=
to_number(substr(vsn,1,2) || substr(vsn,4,2) || s
13 13 1.17 19g6x6d7nh669
select blocks,NVL(ts#,-1),status$,NVL(relfile#,0),maxextend,inc, crscnwrp,crscnb
as,NVL(spare1,0) from file$ where file#=:1
13 13 1.17 6qz82dptj0qr7
select l.col#, l.intcol#, l.lobj#, l.ind#, l.ts#, l.file#, l.block#, l.chunk, l.
pctversion$, l.flags, l.property, l.retention, l.freepools from lob$ l where l.o
bj# = :1 order by l.intcol# asc
-------------------------------------------------------------
SQL ordered by Sharable Memory DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Only Statements with Sharable Memory greater than 1048576 are displayed
Sharable Mem (b) Executions % Total SQL Id
---------------- ------------ -------- -------------
1,129,533 1 0.77 ctdqj2v5dt8t4
Module: EXCEL.EXE
SELECT s0.snap_id id, TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm, s
0.instance_number inst, round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_I
NTERVAL_TIME) * 1440 + EXTRACT(HOUR FROM s1.E
ND_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
-------------------------------------------------------------
SQL ordered by Version Count DB/Inst: IVRS/ivrs Snaps: 1123-1124
No data exists for this section of the report.
-------------------------------------------------------------
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 1123-1124
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
CPU used by this session 2,268 3.8 162.0
CPU used when call started 1,968 3.3 140.6
CR blocks created 4 0.0 0.3
DB time 22,558 37.5 1,611.3
DBWR checkpoint buffers written 663 1.1 47.4
DBWR checkpoints 0 0.0 0.0
DBWR transaction table writes 10 0.0 0.7
DBWR undo block writes 111 0.2 7.9
IMU CR rollbacks 0 0.0 0.0
IMU Flushes 12 0.0 0.9
IMU Redo allocation size 46,816 77.8 3,344.0
IMU commits 2 0.0 0.1
IMU contention 0 0.0 0.0
IMU pool not allocated 0 0.0 0.0
IMU undo allocation size 47,184 78.4 3,370.3
IMU- failed to get a private str 0 0.0 0.0
SQL*Net roundtrips to/from clien 20 0.0 1.4
active txn count during cleanout 59 0.1 4.2
background checkpoints completed 0 0.0 0.0
background checkpoints started 0 0.0 0.0
background timeouts 2,338 3.9 167.0
branch node splits 1 0.0 0.1
buffer is not pinned count 25,000 41.5 1,785.7
buffer is pinned count 90,579 150.5 6,469.9
bytes received via SQL*Net from 19,533 32.5 1,395.2
bytes sent via SQL*Net to client 343,032 570.0 24,502.3
calls to get snapshot scn: kcmgs 4,931 8.2 352.2
calls to kcmgas 99 0.2 7.1
calls to kcmgcs 32 0.1 2.3
change write time 14 0.0 1.0
cleanout - number of ktugct call 74 0.1 5.3
cleanouts and rollbacks - consis 0 0.0 0.0
cleanouts only - consistent read 3 0.0 0.2
cluster key scan block gets 2,995 5.0 213.9
cluster key scans 2,262 3.8 161.6
commit batch/immediate performed 1 0.0 0.1
commit batch/immediate requested 1 0.0 0.1
commit cleanout failures: callba 12 0.0 0.9
commit cleanouts 486 0.8 34.7
commit cleanouts successfully co 474 0.8 33.9
commit immediate performed 1 0.0 0.1
commit immediate requested 1 0.0 0.1
commit txn count during cleanout 51 0.1 3.6
concurrency wait time 93 0.2 6.6
consistent changes 1,330 2.2 95.0
consistent gets 86,838 144.3 6,202.7
consistent gets - examination 30,253 50.3 2,160.9
consistent gets direct 1 0.0 0.1
consistent gets from cache 86,837 144.3 6,202.6
cursor authentications 14 0.0 1.0
data blocks consistent reads - u 4 0.0 0.3
db block changes 4,239 7.0 302.8
db block gets 7,031 11.7 502.2
db block gets direct 1 0.0 0.1
db block gets from cache 7,030 11.7 502.1
deferred (CURRENT) block cleanou 299 0.5 21.4
dirty buffers inspected 2 0.0 0.1
enqueue conversions 167 0.3 11.9
enqueue releases 5,524 9.2 394.6
enqueue requests 5,524 9.2 394.6
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 1123-1124
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
enqueue timeouts 0 0.0 0.0
enqueue waits 0 0.0 0.0
execute count 2,229 3.7 159.2
free buffer inspected 3,302 5.5 235.9
free buffer requested 4,347 7.2 310.5
heap block compress 2 0.0 0.1
hot buffers moved to head of LRU 3,471 5.8 247.9
immediate (CR) block cleanout ap 3 0.0 0.2
immediate (CURRENT) block cleano 95 0.2 6.8
index crx upgrade (positioned) 189 0.3 13.5
index fast full scans (full) 1 0.0 0.1
index fetch by key 27,381 45.5 1,955.8
index scans kdiixs1 1,433 2.4 102.4
leaf node 90-10 splits 13 0.0 0.9
leaf node splits 37 0.1 2.6
lob reads 120 0.2 8.6
lob writes 498 0.8 35.6
lob writes unaligned 498 0.8 35.6
logons cumulative 9 0.0 0.6
messages received 454 0.8 32.4
messages sent 454 0.8 32.4
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 54,983 91.4 3,927.4
opened cursors cumulative 1,718 2.9 122.7
parse count (failures) 0 0.0 0.0
parse count (hard) 127 0.2 9.1
parse count (total) 1,115 1.9 79.6
parse time cpu 1,011 1.7 72.2
parse time elapsed 1,509 2.5 107.8
physical read IO requests 1,627 2.7 116.2
physical read bytes 26,484,736 44,008.5 1,891,766.9
physical read total IO requests 3,182 5.3 227.3
physical read total bytes 51,681,280 85,876.4 3,691,520.0
physical read total multi block 474 0.8 33.9
physical reads 3,233 5.4 230.9
physical reads cache 3,232 5.4 230.9
physical reads cache prefetch 1,606 2.7 114.7
physical reads direct 1 0.0 0.1
physical reads direct temporary 0 0.0 0.0
physical reads prefetch warmup 1,586 2.6 113.3
physical write IO requests 418 0.7 29.9
physical write bytes 5,464,064 9,079.4 390,290.3
physical write total IO requests 1,078 1.8 77.0
physical write total bytes 16,936,960 28,143.4 1,209,782.9
physical write total multi block 89 0.2 6.4
physical writes 667 1.1 47.6
physical writes direct 1 0.0 0.1
physical writes direct (lob) 1 0.0 0.1
physical writes from cache 666 1.1 47.6
physical writes non checkpoint 343 0.6 24.5
pinned buffers inspected 0 0.0 0.0
recursive calls 26,868 44.7 1,919.1
recursive cpu usage 1,489 2.5 106.4
redo blocks written 3,071 5.1 219.4
redo buffer allocation retries 0 0.0 0.0
redo entries 1,940 3.2 138.6
redo log space requests 0 0.0 0.0
redo log space wait time 0 0.0 0.0
redo ordering marks 0 0.0 0.0
redo size 1,543,284 2,564.4 110,234.6
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 1123-1124
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
redo synch time 26 0.0 1.9
redo synch writes 156 0.3 11.1
redo wastage 12,444 20.7 888.9
redo write time 242 0.4 17.3
redo writer latching time 0 0.0 0.0
redo writes 48 0.1 3.4
rollback changes - undo records 6 0.0 0.4
rollbacks only - consistent read 4 0.0 0.3
rows fetched via callback 1,117 1.9 79.8
session cursor cache hits 1,268 2.1 90.6
session logical reads 93,869 156.0 6,704.9
session pga memory 2,551,380 4,239.5 182,241.4
session pga memory max 3,665,492 6,090.8 261,820.9
session uga memory 8,590,163,388 14,273,879.4 #############
session uga memory max 56,144,212 93,292.3 4,010,300.9
shared hash latch upgrades - no 261 0.4 18.6
sorts (memory) 955 1.6 68.2
sorts (rows) 31,997 53.2 2,285.5
sql area evicted 0 0.0 0.0
sql area purged 0 0.0 0.0
summed dirty queue length 2 0.0 0.1
switch current to new buffer 1 0.0 0.1
table fetch by rowid 30,384 50.5 2,170.3
table fetch continued row 14 0.0 1.0
table scan blocks gotten 14,166 23.5 1,011.9
table scan rows gotten 568,919 945.4 40,637.1
table scans (long tables) 3 0.0 0.2
table scans (short tables) 290 0.5 20.7
total number of times SMON poste 1 0.0 0.1
transaction rollbacks 1 0.0 0.1
undo change vector size 548,816 911.9 39,201.1
user I/O wait time 3,588 6.0 256.3
user calls 47 0.1 3.4
user commits 13 0.0 0.9
user rollbacks 1 0.0 0.1
workarea executions - optimal 645 1.1 46.1
write clones created in foregrou 0 0.0 0.0
-------------------------------------------------------------
Instance Activity Stats - Absolute ValuesDB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Statistics with absolute values (should not be diffed)
Statistic Begin Value End Value
-------------------------------- --------------- ---------------
session cursor cache count 741 847
opened cursors current 39 39
logons current 23 22
-------------------------------------------------------------
Instance Activity Stats - Thread Activity DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Statistics identified by '(derived)' come from sources other than SYSSTAT
Statistic Total per Hour
-------------------------------- ------------------ ---------
log switches (derived) 0 .00
-------------------------------------------------------------
Tablespace IO Stats DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> ordered by IOs (Reads + Writes) desc
Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
SYSAUX
1,295 2 22.8 2.2 222 0 0 0.0
SYSTEM
272 0 19.6 1.0 15 0 0 0.0
USERS
53 0 15.3 2.1 155 0 0 0.0
UNDOTBS1
0 0 0.0 .0 26 0 0 0.0
-------------------------------------------------------------
File IO Stats DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> ordered by Tablespace, File
Tablespace Filename
------------------------ ----------------------------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
SYSAUX +DATA_1/ivrs/datafile/sysaux.258.652821943
1,295 2 22.8 2.2 222 0 0 0.0
SYSTEM +DATA_1/ivrs/datafile/system.267.652821909
272 0 19.6 1.0 15 0 0 0.0
UNDOTBS1 +DATA_1/ivrs/datafile/undotbs1.257.652821933
0 0 N/A N/A 26 0 0 0.0
USERS +DATA_1/ivrs/datafile/users.263.652821963
20 0 15.5 2.0 78 0 0 0.0
USERS +DATA_1/ivrs/datafile/users02.dbf
33 0 15.2 2.2 77 0 0 0.0
-------------------------------------------------------------
Buffer Pool Statistics DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Writ Buffer
Number of Pool Buffer Physical Physical Buff Comp Busy
P Buffers Hit% Gets Reads Writes Wait Wait Waits
--- ---------- ---- -------------- ------------ ----------- ---- ---- ----------
D 18,962 97 93,864 3,232 666 0 0 0
-------------------------------------------------------------
Instance Recovery Stats DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- ------------
B 0 53 148 212 110508 184320 110508 N/A
E 0 53 206 352 8793 184320 8793 N/A
-------------------------------------------------------------
Buffer Pool Advisory DB/Inst: IVRS/ivrs Snap: 1124
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate
Est
Phys
Size for Size Buffers for Read Estimated
P Est (M) Factor Estimate Factor Physical Reads
--- -------- ------ ---------------- ------ ------------------
D 12 .1 1,497 2.8 50,622
D 24 .2 2,994 1.3 23,069
D 36 .2 4,491 1.2 22,182
D 48 .3 5,988 1.2 21,484
D 60 .4 7,485 1.1 20,715
D 72 .5 8,982 1.1 19,768
D 84 .6 10,479 1.0 18,988
D 96 .6 11,976 1.0 18,585
D 108 .7 13,473 1.0 18,349
D 120 .8 14,970 1.0 18,230
D 132 .9 16,467 1.0 18,124
D 144 .9 17,964 1.0 18,124
D 152 1.0 18,962 1.0 18,124
D 156 1.0 19,461 1.0 18,124
D 168 1.1 20,958 1.0 18,124
D 180 1.2 22,455 1.0 18,124
D 192 1.3 23,952 1.0 18,124
D 204 1.3 25,449 1.0 18,124
D 216 1.4 26,946 1.0 18,124
D 228 1.5 28,443 1.0 18,124
D 240 1.6 29,940 0.9 16,243
-------------------------------------------------------------
PGA Aggr Summary DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ------------------ --------------------------
100.0 95 0
-------------------------------------------------------------
PGA Aggr Target Stats DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> B: Begin snap E: End snap (rows identified with B or E contain data
which is absolute i.e. not diffed over the interval)
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
- ---------- ---------- ---------- ---------- ------ ------ ------ ----------
B 103 46 116.0 0.0 .0 .0 .0 21,094
E 103 48 113.4 0.0 .0 .0 .0 21,094
-------------------------------------------------------------
PGA Aggr Target Histogram DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Optimal Executions are purely in-memory operations
Low High
Optimal Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- -------------- ------------ ------------
2K 4K 523 523 0 0
64K 128K 4 4 0 0
256K 512K 2 2 0 0
512K 1024K 113 113 0 0
1M 2M 3 3 0 0
-------------------------------------------------------------
PGA Memory Advisory DB/Inst: IVRS/ivrs Snap: 1124
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- ---------------- -------- ----------
13 0.1 228.9 129.6 64.0 14
26 0.3 228.9 129.6 64.0 14
52 0.5 228.9 129.6 64.0 14
77 0.8 228.9 113.3 67.0 7
103 1.0 228.9 0.0 100.0 5
124 1.2 228.9 0.0 100.0 5
144 1.4 228.9 0.0 100.0 5
165 1.6 228.9 0.0 100.0 4
185 1.8 228.9 0.0 100.0 4
206 2.0 228.9 0.0 100.0 4
309 3.0 228.9 0.0 100.0 0
412 4.0 228.9 0.0 100.0 0
618 6.0 228.9 0.0 100.0 0
824 8.0 228.9 0.0 100.0 0
-------------------------------------------------------------
Shared Pool Advisory DB/Inst: IVRS/ivrs Snap: 1124
-> SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
in the Library Cache, and the physical number of memory objects associated
with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid.
Est LC Est LC Est LC Est LC
Shared SP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
60 .4 16 1,610 2,052 1.0 292 1.5 20,642
76 .5 31 3,852 2,150 1.0 194 1.0 20,881
92 .7 31 4,183 2,150 1.0 194 1.0 20,881
108 .8 31 4,183 2,150 1.0 194 1.0 20,881
124 .9 31 4,183 2,150 1.0 194 1.0 20,881
140 1.0 31 4,183 2,150 1.0 194 1.0 20,881
156 1.1 31 4,183 2,150 1.0 194 1.0 20,881
172 1.2 31 4,183 2,150 1.0 194 1.0 20,881
188 1.3 31 4,183 2,150 1.0 194 1.0 20,881
204 1.5 31 4,183 2,150 1.0 194 1.0 20,881
220 1.6 31 4,183 2,150 1.0 194 1.0 20,881
236 1.7 31 4,183 2,150 1.0 194 1.0 20,881
252 1.8 31 4,183 2,150 1.0 194 1.0 20,881
268 1.9 31 4,183 2,150 1.0 194 1.0 20,881
284 2.0 31 4,183 2,150 1.0 194 1.0 20,881
-------------------------------------------------------------
SGA Target Advisory DB/Inst: IVRS/ivrs Snap: 1124
SGA Target SGA Size Est DB Est Physical
Size (M) Factor Time (s) Reads
---------- ---------- ------------ ----------------
156 0.5 1,249 23,087
234 0.8 1,125 20,217
312 1.0 1,107 18,112
390 1.3 1,034 18,112
468 1.5 1,034 18,112
546 1.8 1,034 18,112
624 2.0 1,034 18,112
-------------------------------------------------------------
Streams Pool Advisory DB/Inst: IVRS/ivrs Snap: 1124
Size for Size Est Spill Est Spill Est Unspill Est Unspill
Est (MB) Factor Count Time (s) Count Time (s)
---------- --------- ----------- ----------- ----------- -----------
4 1.0 0 0 0 0
8 2.0 0 0 0 0
12 3.0 0 0 0 0
16 4.0 0 0 0 0
20 5.0 0 0 0 0
24 6.0 0 0 0 0
28 7.0 0 0 0 0
32 8.0 0 0 0 0
36 9.0 0 0 0 0
40 10.0 0 0 0 0
44 11.0 0 0 0 0
48 12.0 0 0 0 0
52 13.0 0 0 0 0
56 14.0 0 0 0 0
60 15.0 0 0 0 0
64 16.0 0 0 0 0
68 17.0 0 0 0 0
72 18.0 0 0 0 0
76 19.0 0 0 0 0
80 20.0 0 0 0 0
-------------------------------------------------------------
Java Pool Advisory DB/Inst: IVRS/ivrs Snap: 1124
Est LC Est LC Est LC Est LC
Java JP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
8 1.0 1 132 28 1.0 194 1.0 57
12 1.5 1 132 28 1.0 194 1.0 57
16 2.0 1 132 28 1.0 194 1.0 57
-------------------------------------------------------------
Buffer Wait Statistics DB/Inst: IVRS/ivrs Snaps: 1123-1124
No data exists for this section of the report.
-------------------------------------------------------------
Enqueue Activity DB/Inst: IVRS/ivrs Snaps: 1123-1124
No data exists for this section of the report.
-------------------------------------------------------------
Undo Segment Summary DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Num Undo Number of Max Qry Max Tx Min/Max STO/ uS/uR/uU/
TS# Blocks (K) Transactions Len (s) Concurcy TR (mins) OOS eS/eR/eU
---- ---------- --------------- -------- -------- --------- ----- --------------
1 .1 170 0 2 15/15 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Undo Segment Stats DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Most recent 35 Undostat rows, ordered by Time desc
Num Undo Number of Max Qry Max Tx Tun Ret STO/ uS/uR/uU/
End Time Blocks Transactions Len (s) Concy (mins) OOS eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- ------------
02-Jun 00:10 86 170 0 2 15 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Latch Activity DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Name Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
ASM db client latch 398 0.0 N/A 0 0 N/A
ASM map operation freeli 22 0.0 N/A 0 0 N/A
ASM map operation hash t 8,530 0.0 N/A 0 0 N/A
ASM network background l 243 0.0 N/A 0 0 N/A
AWR Alerted Metric Eleme 2,296 0.0 N/A 0 0 N/A
Consistent RBA 48 0.0 N/A 0 0 N/A
FAL request queue 12 0.0 N/A 0 0 N/A
FAL subheap alocation 12 0.0 N/A 0 0 N/A
FIB s.o chain latch 2 0.0 N/A 0 0 N/A
FOB s.o list latch 93 0.0 N/A 0 0 N/A
In memory undo latch 434 0.0 N/A 0 165 0.0
JS mem alloc latch 1 0.0 N/A 0 0 N/A
JS queue access latch 1 0.0 N/A 0 0 N/A
JS queue state obj latch 4,408 0.0 N/A 0 0 N/A
JS slv state obj latch 13 0.0 N/A 0 0 N/A
KFK SGA context latch 313 0.0 N/A 0 0 N/A
KFMD SGA 18 0.0 N/A 0 0 N/A
KMG MMAN ready and start 201 0.0 N/A 0 0 N/A
KTF sga latch 1 0.0 N/A 0 190 0.0
KWQMN job cache list lat 10 0.0 N/A 0 0 N/A
KWQP Prop Status 3 0.0 N/A 0 0 N/A
MQL Tracking Latch 0 N/A N/A 0 12 0.0
Memory Management Latch 0 N/A N/A 0 201 0.0
OS process 60 0.0 N/A 0 0 N/A
OS process allocation 239 0.0 N/A 0 0 N/A
OS process: request allo 15 0.0 N/A 0 0 N/A
PL/SQL warning settings 56 0.0 N/A 0 0 N/A
SGA IO buffer pool latch 1 0.0 N/A 0 1 0.0
SQL memory manager latch 2 0.0 N/A 0 199 0.0
SQL memory manager worka 13,815 0.0 N/A 0 0 N/A
Shared B-Tree 28 0.0 N/A 0 0 N/A
active checkpoint queue 615 0.0 N/A 0 0 N/A
active service list 1,269 0.0 N/A 0 203 0.0
archive control 60 0.0 N/A 0 0 N/A
archive process latch 211 0.0 N/A 0 0 N/A
begin backup scn array 1 0.0 N/A 0 0 N/A
cache buffer handles 172 0.0 N/A 0 0 N/A
cache buffers chains 172,751 0.0 N/A 0 7,022 0.0
cache buffers lru chain 9,600 0.0 N/A 0 3,433 0.0
cache table scan latch 0 N/A N/A 0 6 0.0
channel handle pool latc 17 0.0 N/A 0 0 N/A
channel operations paren 2,843 0.0 N/A 0 0 N/A
checkpoint queue latch 5,924 0.0 N/A 0 652 0.0
client/application info 23 0.0 N/A 0 0 N/A
commit callback allocati 20 0.0 N/A 0 0 N/A
compile environment latc 25 0.0 N/A 0 0 N/A
dml lock allocation 462 0.0 N/A 0 0 N/A
dummy allocation 19 0.0 N/A 0 0 N/A
enqueue hash chains 11,209 0.0 N/A 0 0 N/A
enqueues 10,446 0.0 N/A 0 0 N/A
event group latch 7 0.0 N/A 0 0 N/A
file cache latch 36 0.0 N/A 0 0 N/A
global KZLD latch for me 1 0.0 N/A 0 0 N/A
hash table column usage 0 N/A N/A 0 36,144 0.0
hash table modification 9 0.0 N/A 0 0 N/A
job workq parent latch 0 N/A N/A 0 4 0.0
job_queue_processes para 12 0.0 N/A 0 0 N/A
kks stats 235 0.0 N/A 0 0 N/A
ksuosstats global area 44 0.0 N/A 0 0 N/A
ktm global data 2 0.0 N/A 0 0 N/A
kwqbsn:qsga 27 0.0 N/A 0 0 N/A
lgwr LWN SCN 217 0.0 N/A 0 0 N/A
library cache 9,535 0.0 N/A 0 0 N/A
library cache load lock 544 0.0 N/A 0 10 0.0
library cache lock 4,535 0.0 N/A 0 0 N/A
library cache lock alloc 180 0.0 N/A 0 0 N/A
library cache pin 2,384 0.0 N/A 0 0 N/A
library cache pin alloca 84 0.0 N/A 0 0 N/A
list of block allocation 38 0.0 N/A 0 0 N/A
loader state object free 6 0.0 N/A 0 0 N/A
logminer context allocat 1 0.0 N/A 0 0 N/A
message pool operations 2 0.0 N/A 0 0 N/A
messages 5,633 0.0 N/A 0 0 N/A
mostly latch-free SCN 217 0.0 N/A 0 0 N/A
multiblock read objects 1,048 0.0 N/A 0 2 0.0
ncodef allocation latch 10 0.0 N/A 0 0 N/A
object queue header heap 681 0.0 N/A 0 41 0.0
object queue header oper 11,422 0.0 N/A 0 0 N/A
object stats modificatio 75 0.0 N/A 0 1 0.0
parallel query alloc buf 80 0.0 N/A 0 0 N/A
parameter list 5 0.0 N/A 0 0 N/A
parameter table allocati 11 0.0 N/A 0 0 N/A
post/wait queue 9 0.0 N/A 0 4 0.0
process allocation 15 0.0 N/A 0 7 0.0
process group creation 15 0.0 N/A 0 0 N/A
qmn task queue latch 88 2.3 1.0 0 0 N/A
redo allocation 764 0.1 1.0 0 1,913 0.0
redo copy 0 N/A N/A 0 1,912 0.0
redo writing 1,179 0.0 N/A 0 0 N/A
resmgr group change latc 5 0.0 N/A 0 0 N/A
resmgr:actses active lis 13 0.0 N/A 0 0 N/A
resmgr:actses change gro 3 0.0 N/A 0 0 N/A
resmgr:free threads list 11 0.0 N/A 0 0 N/A
resmgr:schema config 2 0.0 N/A 0 0 N/A
row cache objects 132,222 0.0 N/A 0 0 N/A
rules engine aggregate s 10 0.0 N/A 0 0 N/A
rules engine rule set st 220 0.0 N/A 0 0 N/A
sequence cache 35 0.0 N/A 0 0 N/A
session allocation 41,822 0.0 N/A 0 0 N/A
session idle bit 111 0.0 N/A 0 0 N/A
session state list latch 30 0.0 N/A 0 0 N/A
session switching 10 0.0 N/A 0 0 N/A
session timer 203 0.0 N/A 0 0 N/A
shared pool 20,332 0.0 N/A 0 0 N/A
shared pool sim alloc 16 0.0 N/A 0 0 N/A
shared pool simulator 1,191 0.0 N/A 0 0 N/A
simulator hash latch 6,796 0.0 N/A 0 0 N/A
simulator lru latch 6,079 0.0 N/A 0 290 0.0
slave class 4 0.0 N/A 0 0 N/A
slave class create 16 6.3 1.0 0 0 N/A
sort extent pool 21 0.0 N/A 0 0 N/A
state object free list 4 0.0 N/A 0 0 N/A
statistics aggregation 140 0.0 N/A 0 0 N/A
temp lob duration state 2 0.0 N/A 0 0 N/A
threshold alerts latch 75 0.0 N/A 0 0 N/A
transaction allocation 22 0.0 N/A 0 0 N/A
transaction branch alloc 10 0.0 N/A 0 0 N/A
undo global data 495 0.0 N/A 0 0 N/A
user lock 8 0.0 N/A 0 0 N/A
-------------------------------------------------------------
Latch Sleep Breakdown DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> ordered by misses desc
Latch Name
----------------------------------------
Get Requests Misses Sleeps Spin Gets Sleep1 Sleep2 Sleep3
-------------- ----------- ----------- ---------- -------- -------- --------
qmn task queue latch
88 2 2 0 0 0 0
redo allocation
764 1 1 0 0 0 0
slave class create
16 1 1 0 0 0 0
-------------------------------------------------------------
Latch Miss Sources DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
event range base latch No latch 0 2 2
redo allocation kcrfw_redo_gen: redo alloc 0 1 0
slave class create ksvcreate 0 1 0
-------------------------------------------------------------
Parent Latch Statistics DB/Inst: IVRS/ivrs Snaps: 1123-1124
No data exists for this section of the report.
-------------------------------------------------------------
Child Latch Statistics DB/Inst: IVRS/ivrs Snaps: 1123-1124
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Logical Reads DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Total Logical Reads: 93,869
-> Captured Segments account for 83.5% of Total
Tablespace Subobject Obj. Logical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS SYSTEM I_OBJ# INDEX 24,976 26.61
SYS SYSAUX WRH$_SYSSTAT_PK 950532_637 INDEX 9,568 10.19
SYS SYSTEM OBJ$ TABLE 6,176 6.58
SYS SYSAUX WRH$_SYSSTAT 950532_637 TABLE 5,888 6.27
SYS SYSAUX WRH$_SYSSTAT_PK 950532_407 INDEX 3,184 3.39
-------------------------------------------------------------
Segments by Physical Reads DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Total Physical Reads: 3,233
-> Captured Segments account for 81.7% of Total
Tablespace Subobject Obj. Physical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS SYSAUX WRH$_SYSSTAT_PK 950532_637 INDEX 573 17.72
SYS SYSAUX WRH$_SYSSTAT 950532_637 TABLE 534 16.52
SYS SYSAUX WRH$_SYSSTAT_PK 950532_407 INDEX 229 7.08
SYS SYSAUX WRH$_SYSSTAT 950532_407 TABLE 218 6.74
SYS SYSAUX WRH$_SYSSTAT_PK 950532_302 INDEX 215 6.65
-------------------------------------------------------------
Segments by Row Lock Waits DB/Inst: IVRS/ivrs Snaps: 1123-1124
No data exists for this section of the report.
-------------------------------------------------------------
Segments by ITL Waits DB/Inst: IVRS/ivrs Snaps: 1123-1124
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Buffer Busy Waits DB/Inst: IVRS/ivrs Snaps: 1123-1124
No data exists for this section of the report.
-------------------------------------------------------------
Dictionary Cache Stats DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> "Pct Misses" should be very low (< 2% in most cases)
-> "Final Usage" is the number of cache entries being used
Get Pct Scan Pct Mod Final
Cache Requests Miss Reqs Miss Reqs Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control 16 0.0 0 N/A 2 1
dc_files 26 50.0 0 N/A 0 13
dc_global_oids 27 0.0 0 N/A 0 26
dc_histogram_data 5,855 0.9 0 N/A 0 1,264
dc_histogram_defs 5,743 3.7 0 N/A 0 3,554
dc_object_grants 4 0.0 0 N/A 0 2
dc_object_ids 2,085 4.1 0 N/A 0 1,024
dc_objects 829 13.0 0 N/A 1 1,703
dc_profiles 4 0.0 0 N/A 0 1
dc_rollback_segments 93 0.0 0 N/A 0 16
dc_segments 532 11.7 0 N/A 3 881
dc_sequences 1 0.0 0 N/A 1 6
dc_tablespaces 16,474 0.0 0 N/A 0 12
dc_usernames 54 3.7 0 N/A 0 18
dc_users 16,072 0.0 0 N/A 0 24
outstanding_alerts 38 13.2 0 N/A 10 26
-------------------------------------------------------------
Library Cache Activity DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> "Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 17 23.5 37 13.5 0 0
CLUSTER 21 0.0 33 0.0 0 0
INDEX 3 0.0 3 0.0 0 0
SQL AREA 133 97.7 3,255 11.4 0 0
TABLE/PROCEDURE 526 23.4 1,190 22.4 0 0
TRIGGER 3 0.0 3 0.0 0 0
-------------------------------------------------------------
Process Memory Summary DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> B: Begin snap E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
-> Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> ordered by Begin/End snapshot, Alloc (MB) desc
Hist
Avg Std Dev Max Max
Alloc Used Alloc Alloc Alloc Alloc Num Num
Category (MB) (MB) (MB) (MB) (MB) (MB) Proc Alloc
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B Other 108.9 N/A 4.4 7.3 24 24 25 25
Freeable 6.6 .0 .7 .5 1 N/A 10 10
SQL .4 .2 .0 .0 0 2 11 9
PL/SQL .1 .0 .0 .0 0 0 23 23
E Other 106.1 N/A 4.4 7.4 24 24 24 24
Freeable 6.8 .0 .8 .5 1 N/A 9 9
SQL .4 .2 .0 .0 0 2 10 9
PL/SQL .1 .0 .0 .0 0 0 22 22
-------------------------------------------------------------
SGA Memory Summary DB/Inst: IVRS/ivrs Snaps: 1123-1124
End Size (Bytes)
SGA regions Begin Size (Bytes) (if different)
------------------------------ ------------------- -------------------
Database Buffers 159,383,552
Fixed Size 1,261,612
Redo Buffers 2,928,640
Variable Size 163,581,908
-------------------
sum 327,155,712
-------------------------------------------------------------
SGA breakdown difference DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> ordered by Pool, Name
-> N/A value for Begin MB or End MB indicates the size of that Pool/Name was
insignificant, or zero in that snapshot
Pool Name Begin MB End MB % Diff
------ ------------------------------ -------------- -------------- -------
java free memory 2.7 2.7 0.00
java joxlod exec hp 5.1 5.1 0.00
java joxs heap .2 .2 0.00
large ASM map operations hashta .2 .2 0.00
large CTWR dba buffer .4 .4 0.00
large PX msg pool .2 .2 0.00
large free memory 1.2 1.2 0.00
large krcc extent chunk 2.0 2.0 0.00
shared ASH buffers 2.0 2.0 0.00
shared CCursor 2.5 2.9 15.30
shared Heap0: KGL 2.1 2.3 8.33
shared KCB Table Scan Buffer 3.8 3.8 0.00
shared KGLS heap 5.3 5.7 8.75
shared KQR M PO 2.9 3.1 6.87
shared KSFD SGA I/O b 3.8 3.8 0.00
shared PCursor 1.6 1.9 14.99
shared PL/SQL DIANA N/A 1.4 N/A
shared PL/SQL MPCODE 1.8 2.0 11.04
shared free memory 68.9 63.8 -7.41
shared kglsim hash table bkts 2.0 2.0 0.00
shared library cache 3.8 4.0 4.85
shared row cache 3.6 3.6 0.00
shared sql area 9.6 12.4 29.43
stream free memory 4.0 4.0 0.00
buffer_cache 152.0 152.0 0.00
fixed_sga 1.2 1.2 0.00
log_buffer 2.8 2.8 0.00
-------------------------------------------------------------
Streams CPU/IO Usage DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Streams processes ordered by CPU usage
-> CPU and I/O Time in micro seconds
Session Type CPU Time User I/O Time Sys I/O Time
------------------------- -------------- -------------- --------------
QMON Coordinator 34,431 0 0
QMON Slaves 14,489 0 0
-------------------------------------------------------------
Streams Capture DB/Inst: IVRS/ivrs Snaps: 1123-1124
No data exists for this section of the report.
-------------------------------------------------------------
Streams Apply DB/Inst: IVRS/ivrs Snaps: 1123-1124
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Queues DB/Inst: IVRS/ivrs Snaps: 1123-1124
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Subscribers DB/Inst: IVRS/ivrs Snaps: 1123-1124
No data exists for this section of the report.
-------------------------------------------------------------
Rule Set DB/Inst: IVRS/ivrs Snaps: 1123-1124
-> Rule Sets ordered by Evaluations
Fast SQL CPU Elapsed
Ruleset Name Evals Evals Execs Time Time
------------------------- -------- -------- -------- -------- --------
SYS.ALERT_QUE_R 10 0 0 0 0
-------------------------------------------------------------
Resource Limit Stats DB/Inst: IVRS/ivrs Snap: 1124
No data exists for this section of the report.
-------------------------------------------------------------
init.ora Parameters DB/Inst: IVRS/ivrs Snaps: 1123-1124
End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
audit_file_dest /oracle/app/oracle/admin/ivrs/adu
audit_sys_operations TRUE
background_dump_dest /oracle/app/oracle/admin/ivrs/bdu
compatible 10.2.0.3.0
control_files +DATA_1/ivrs/control01.ctl, +DATA
core_dump_dest /oracle/app/oracle/admin/ivrs/cdu
db_block_size 8192
db_domain karl.com
db_file_multiblock_read_count 16
db_name ivrs
db_recovery_file_dest /flash_reco/flash_recovery_area
db_recovery_file_dest_size 161061273600
dispatchers (PROTOCOL=TCP) (SERVICE=ivrsXDB)
job_queue_processes 10
log_archive_dest_1 LOCATION=USE_DB_RECOVERY_FILE_DES
log_archive_format ivrs_%t_%s_%r.arc
open_cursors 300
os_authent_prefix
os_roles FALSE
pga_aggregate_target 108003328
processes 150
recyclebin OFF
remote_login_passwordfile EXCLUSIVE
remote_os_authent FALSE
remote_os_roles FALSE
sga_target 327155712
spfile +DATA_1/ivrs/spfileivrs.ora
sql92_security TRUE
statistics_level TYPICAL
undo_management AUTO
undo_tablespace UNDOTBS1
user_dump_dest /oracle/app/oracle/admin/ivrs/udu
-------------------------------------------------------------
End of Report
Report written to awrrpt_1_1123_1124.txt
}}}
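To regenerate a text AWR report like the one above without walking through the interactive prompts of @?/rdbms/admin/awrrpt.sql, you can call DBMS_WORKLOAD_REPOSITORY directly. A minimal sketch using the DBID and snap range from this report (the SET/SPOOL lines are just one way of capturing the output):
{{{
-- non-interactive AWR text report, equivalent to answering the awrrpt.sql prompts
set pagesize 0 linesize 1500 trimspool on heading off feedback off
spool awrrpt_1_1123_1124.txt
select output
  from table(dbms_workload_repository.awr_report_text(
               2607950532,   -- dbid (from the report header)
               1,            -- instance number
               1123,         -- begin snap id
               1124));       -- end snap id
spool off
}}}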
! The ASH report for SNAP_ID 1123
{{{
Summary of All User Input
-------------------------
Format : TEXT
DB Id : 2607950532
Inst num : 1
Begin time : 02-Jun-10 00:10:00
End time : 02-Jun-10 00:20:00
Slot width : Default
Report targets : 0
Report name : ashrpt_1_0602_0020.txt
ASH Report For IVRS/ivrs
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
CPUs SGA Size Buffer Cache Shared Pool ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
1 312M (100%) 152M (48.7%) 114M (36.4%) 2.0M (0.6%)
Analysis Begin Time: 02-Jun-10 00:10:00
Analysis End Time: 02-Jun-10 00:20:00
Elapsed Time: 10.0 (mins)
Sample Count: 6
Average Active Sessions: 0.10
Avg. Active Session per CPU: 0.10
Report Target: None specified
Top User Events DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
db file sequential read User I/O 50.00 0.05
CPU + Wait for CPU CPU 16.67 0.02
-------------------------------------------------------------
Top Background Events DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU CPU 16.67 0.02
db file parallel write System I/O 16.67 0.02
-------------------------------------------------------------
Top Event P1/P2/P3 Values DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
Event % Event P1 Value, P2 Value, P3 Value % Activity
------------------------------ ------- ----------------------------- ----------
Parameter 1 Parameter 2 Parameter 3
-------------------------- -------------------------- --------------------------
db file sequential read 50.00 "3","51479","1" 16.67
file# block# blocks
"3","62885","1" 16.67
"3","64132","1" 16.67
db file parallel write 16.67 "1","0","2147483647" 16.67
requests interrupt timeout
-------------------------------------------------------------
Top Service/Module DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
Service Module % Activity Action % Action
-------------- ------------------------ ---------- ------------------ ----------
ivrs.bayantel. EXCEL.EXE 66.67 UNNAMED 66.67
SYS$BACKGROUND MMON_SLAVE 16.67 Auto-DBFUS Action 16.67
UNNAMED 16.67 UNNAMED 16.67
-------------------------------------------------------------
Top Client IDs DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL Command Types DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
-> 'Distinct SQLIDs' is the count of the distinct number of SQLIDs
with the given SQL Command Type found over all the ASH samples
in the analysis period
Distinct Avg Active
SQL Command Type SQLIDs % Activity Sessions
---------------------------------------- ---------- ---------- ----------
SELECT 1 66.67 0.07
-------------------------------------------------------------
Top SQL Statements DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
SQL ID Planhash % Activity Event % Event
------------- ----------- ---------- ------------------------------ ----------
ctdqj2v5dt8t4 813260237 66.67 db file sequential read 50.00
SELECT s0.snap_id id, TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm, s
0.instance_number inst, round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_I
NTERVAL_TIME) * 1440 + EXTRACT(HOUR FROM s1.E
ND_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
CPU + Wait for CPU 16.67
-------------------------------------------------------------
Top SQL using literals DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
No data exists for this section of the report.
-------------------------------------------------------------
Top PL/SQL Procedures DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
-> 'PL/SQL entry subprogram' represents the application's top-level
entry-point(procedure, function, trigger, package initialization
or RPC call) into PL/SQL.
-> 'PL/SQL current subprogram' is the pl/sql subprogram being executed
at the point of sampling . If the value is 'SQL', it represents
the percentage of time spent executing SQL for the particular
plsql entry subprogram
PLSQL Entry Subprogram % Activity
----------------------------------------------------------------- ----------
PLSQL Current Subprogram % Current
----------------------------------------------------------------- ----------
SYS.DBMS_FEATURE_USAGE_INTERNAL.EXEC_DB_USAGE_SAMPLING 16.67
SQL 16.67
-------------------------------------------------------------
Top Sessions DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
-> '# Samples Active' shows the number of ASH samples in which the session
was found waiting for that particular event. The percentage shown
in this column is calculated with respect to wall clock time
and not total database activity.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
when the session was waiting for that particular event
-> For sessions running Parallel Queries, this section will NOT aggregate
the PQ slave activity into the session issuing the PQ. Refer to
the 'Top Sessions running PQs' section for such statistics.
Sid, Serial# % Activity Event % Event
--------------- ---------- ------------------------------ ----------
User Program # Samples Active XIDs
-------------------- ------------------------------ ------------------ --------
138, 65 66.67 db file sequential read 50.00
PERFSTAT EXCEL.EXE 3/60 [ 5%] 0
CPU + Wait for CPU 16.67
1/60 [ 2%] 0
138, 50 16.67 CPU + Wait for CPU 16.67
SYS oracle@dbrocai...el.com (m000) 1/60 [ 2%] 1
167, 1 16.67 db file parallel write 16.67
SYS oracle@dbrocai...el.com (DBW0) 1/60 [ 2%] 0
-------------------------------------------------------------
Top Blocking Sessions DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
No data exists for this section of the report.
-------------------------------------------------------------
Top Sessions running PQs DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
No data exists for this section of the report.
-------------------------------------------------------------
Top DB Objects DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
-> With respect to Application, Cluster, User I/O and buffer busy waits only.
Object ID % Activity Event % Event
--------------- ---------- ------------------------------ ----------
Object Name (Type) Tablespace
----------------------------------------------------- -------------------------
56420 16.67 db file sequential read 16.67
SYS.WRH$_SYSSTAT.WRH$_SYSSTA_2607950532_570 (TABLE PA SYSAUX
56680 16.67 db file sequential read 16.67
SYS.WRH$_SYSSTAT.WRH$_SYSSTA_2607950532_637 (TABLE PA SYSAUX
56839 16.67 db file sequential read 16.67
SYS.WRH$_SYSSTAT_PK.WRH$_SYSSTA_2607950532_637 (INDEX SYSAUX
-------------------------------------------------------------
Top DB Files DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
-> With respect to Cluster and User I/O events only.
File ID % Activity Event % Event
--------------- ---------- ------------------------------ ----------
File Name Tablespace
----------------------------------------------------- -------------------------
3 50.00 db file sequential read 50.00
+DATA_1/ivrs/datafile/sysaux.258.652821943 SYSAUX
-------------------------------------------------------------
Top Latches DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
No data exists for this section of the report.
-------------------------------------------------------------
Activity Over Time DB/Inst: IVRS/ivrs (Jun 02 00:10 to 00:20)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period
Slot Event
Slot Time (Duration) Count Event Count % Event
-------------------- -------- ------------------------------ -------- -------
00:10:00 (5.0 min) 1 CPU + Wait for CPU 1 16.67
00:15:00 (5.0 min) 5 db file sequential read 3 50.00
CPU + Wait for CPU 1 16.67
db file parallel write 1 16.67
-------------------------------------------------------------
End of Report
Report written to ashrpt_1_0602_0020.txt
}}}
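The ASH report can likewise be generated non-interactively (instead of @?/rdbms/admin/ashrpt.sql) with DBMS_WORKLOAD_REPOSITORY.ASH_REPORT_TEXT. A minimal sketch using the same DBID and the same 10-minute window:
{{{
-- non-interactive ASH text report for the 02-Jun 00:10 to 00:20 window
set pagesize 0 linesize 1500 trimspool on heading off feedback off
spool ashrpt_1_0602_0020.txt
select output
  from table(dbms_workload_repository.ash_report_text(
               2607950532,                                             -- dbid
               1,                                                      -- instance number
               to_date('02-Jun-10 00:10:00','DD-Mon-RR HH24:MI:SS'),   -- begin time
               to_date('02-Jun-10 00:20:00','DD-Mon-RR HH24:MI:SS'))); -- end time
spool off
}}}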
! CPU and kernel info
{{{
[oracle@dbrocaix01 ~]$ uname -a
Linux dbrocaix01 2.6.9-42.0.0.0.1.EL #1 Sun Oct 15 13:58:55 PDT 2006 i686 i686 i386 GNU/Linux
[oracle@dbrocaix01 ~]$
[oracle@dbrocaix01 ~]$ cat /etc/redhat-release
Enterprise Linux Enterprise Linux AS release 4 (October Update 4)
[oracle@dbrocaix01 ~]$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 7
model name : Intel(R) Core(TM)2 Duo CPU T8100 @ 2.10GHz
stepping : 6
cpu MHz : 2081.751
cache size : 3072 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss nx lm pni
bogomips : 4171.86
[oracle@dbrocaix01 ~]$
}}}
! Time series information on workload, top events, IO statistics, and top SQLs for SNAP_IDs 848, 1097, 1103
AWR CPU and IO Workload Report
{{{
AWR CPU and IO Workload Report
i *** *** ***
n Total Total Total U S
Snap s Snap C CPU A Oracle OS Physical Oracle RMAN OS S Y I
Snap Start t Dur P Time DB DB Bg RMAN A CPU OS CPU Memory IOPs IOPs IOPs IO r IO w Redo Exec CPU CPU CPU R S O
ID Time # (m) U (s) Time CPU CPU CPU S (s) Load (s) (mb) r w redo (mb)/s (mb)/s (mb)/s Sess /s % % % % % %
------ --------------- --- ---------- --- ----------- ---------- --------- --------- -------- ----- ----------- ------- ----------- ---------- --------- --------- --------- --------- --------- --------- ---- --------- ------ ---- ---- ---- ---- ----
848 10/04/25 07:01 1 10.05 1 603.00 855.02 687.66 14.18 0.00 1.4 701.84 0.95 416.76 0.19 0.068 0.677 0.265 0.001 0.007 0.003 26 7.829 116 0 69 27 41 4
1097 10/05/11 00:00 1 10.03 1 601.80 809.43 697.29 9.01 0.00 1.3 706.30 1.05 402.29 0.00 0.680 0.605 0.153 0.010 0.007 0.003 22 4.048 117 0 67 29 37 10
1103 10/05/11 01:00 1 10.03 1 601.80 877.33 833.12 12.01 0.00 1.5 845.12 0.43 461.15 0.01 0.043 0.668 0.133 0.001 0.008 0.002 23 2.054 140 0 77 34 42 2
}}}
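The workload report above comes from a custom script over the DBA_HIST views. Here is a minimal sketch of the core delta math behind the DB Time and DB CPU columns: DBA_HIST_SYS_TIME_MODEL stores cumulative microseconds, so each snapshot is diffed against the previous one with LAG(). A single instance and a single DBID are assumed, and the aliases are just illustrative:
{{{
-- per-snapshot DB Time / DB CPU deltas from the cumulative time model counters
-- (deltas go negative across an instance restart; filter those out in real use)
select snap_id, begin_time,
       round(db_time/1000000, 2) db_time_s,
       round(db_cpu/1000000, 2)  db_cpu_s
  from (
    select s.snap_id,
           s.begin_interval_time begin_time,
           sum(case when t.stat_name = 'DB time' then t.value end)
             - lag(sum(case when t.stat_name = 'DB time' then t.value end))
               over (order by s.snap_id) db_time,
           sum(case when t.stat_name = 'DB CPU' then t.value end)
             - lag(sum(case when t.stat_name = 'DB CPU' then t.value end))
               over (order by s.snap_id) db_cpu
      from dba_hist_snapshot s, dba_hist_sys_time_model t
     where t.snap_id         = s.snap_id
       and t.dbid            = s.dbid
       and t.instance_number = s.instance_number
       and t.stat_name in ('DB time','DB CPU')
     group by s.snap_id, s.begin_interval_time
  )
 where db_time is not null
 order by snap_id;
}}}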
AWR Top Events Report
{{{
AWR Top Events Report
i
n
Snap s Snap A
Snap Start t Dur Event Time Avgwt DB Time A
ID Time # (m) Event Rank Waits (s) (ms) % S Wait Class
------ --------------- --- ---------- ---------------------------------------- ----- -------------- -------------- -------- ------- ------ ---------------
848 10/04/25 07:01 1 10.05 CPU time 1 0.00 687.66 0.00 80 1.1 CPU
848 10/04/25 07:01 1 10.05 db file parallel write 2 408.00 20.23 49.58 2 0.0 System I/O
848 10/04/25 07:01 1 10.05 log file sequential read 3 66.00 19.73 298.93 2 0.0 System I/O
848 10/04/25 07:01 1 10.05 control file sequential read 4 3575.00 8.34 2.33 1 0.0 System I/O
848 10/04/25 07:01 1 10.05 db file sequential read 5 54.00 7.36 136.31 1 0.0 User I/O
1097 10/05/11 00:00 1 10.03 CPU time 1 0.00 697.29 0.00 86 1.2 CPU
1097 10/05/11 00:00 1 10.03 db file sequential read 2 344.00 18.48 53.72 2 0.0 User I/O
1097 10/05/11 00:00 1 10.03 db file scattered read 3 77.00 16.25 211.02 2 0.0 User I/O
1097 10/05/11 00:00 1 10.03 db file parallel write 4 239.00 14.00 58.57 2 0.0 System I/O
1097 10/05/11 00:00 1 10.03 control file parallel write 5 197.00 10.10 51.28 1 0.0 System I/O
1103 10/05/11 01:00 1 10.03 CPU time 1 0.00 833.12 0.00 95 1.4 CPU
1103 10/05/11 01:00 1 10.03 control file parallel write 2 193.00 21.54 111.59 2 0.0 System I/O
1103 10/05/11 01:00 1 10.03 db file parallel write 3 255.00 17.10 67.05 2 0.0 System I/O
1103 10/05/11 01:00 1 10.03 db file scattered read 4 18.00 2.72 151.19 0 0.0 User I/O
1103 10/05/11 01:00 1 10.03 change tracking file synchronous write 5 10.00 2.50 250.35 0 0.0 Other
15 rows selected.
}}}
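The top events report works the same way against DBA_HIST_SYSTEM_EVENT: diff the cumulative counters per event, then rank by time waited within each snapshot. A minimal sketch (the actual script also merges in "CPU time" from the time model, which this one omits):
{{{
-- top 5 non-idle wait events per snapshot (single instance / single dbid assumed)
select *
  from (
    select snap_id, event_name, wait_class, waits_delta,
           round(time_us_delta/1000000, 2) time_waited_s,
           row_number() over (partition by snap_id
                              order by time_us_delta desc) event_rank
      from (
        select snap_id, event_name, wait_class,
               total_waits - lag(total_waits)
                 over (partition by event_name order by snap_id) waits_delta,
               time_waited_micro - lag(time_waited_micro)
                 over (partition by event_name order by snap_id) time_us_delta
          from dba_hist_system_event
         where wait_class != 'Idle'
      )
     where time_us_delta > 0
  )
 where event_rank <= 5
 order by snap_id, event_rank;
}}}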
AWR Tablespace IO Report
{{{
AWR Tablespace IO Report
i
n
Snap s Snap
Snap Start t Dur IO IOPS IOPS IOPS
ID Time # (m) TS Rank Reads Av Reads/s Av Rd(ms) Av Blks/Rd Writes Av Writes/s Buffer Waits Av Buf Wt(ms) Total IO R+W Total R+W
------ --------------- --- ---------- -------------------- ---- -------- ---------- --------- ---------- -------- ----------- ------------ ------------- ------------ ---------
848 10/04/25 07:01 1 10.05 SYSAUX 1 12 0 78.3 1.0 201 0 0 0.0 213 0
848 10/04/25 07:01 1 10.05 USERS 2 25 0 198.4 1.0 167 0 0 0.0 192 0
848 10/04/25 07:01 1 10.05 UNDOTBS1 3 0 0 0.0 0.0 30 0 1 0.0 30 0
848 10/04/25 07:01 1 10.05 SYSTEM 4 0 0 0.0 0.0 10 0 0 0.0 10 0
1097 10/05/11 00:00 1 10.03 SYSAUX 1 159 0 148.5 2.9 180 0 0 0.0 339 1
1097 10/05/11 00:00 1 10.03 SYSTEM 2 221 0 47.6 1.0 10 0 0 0.0 231 0
1097 10/05/11 00:00 1 10.03 USERS 3 28 0 18.2 2.3 147 0 0 0.0 175 0
1097 10/05/11 00:00 1 10.03 UNDOTBS1 4 0 0 0.0 0.0 27 0 0 0.0 27 0
1103 10/05/11 01:00 1 10.03 SYSAUX 1 20 0 130.0 3.1 202 0 0 0.0 222 0
1103 10/05/11 01:00 1 10.03 USERS 2 10 0 48.0 4.8 161 0 0 0.0 171 0
1103 10/05/11 01:00 1 10.03 UNDOTBS1 3 0 0 0.0 0.0 27 0 0 0.0 27 0
1103 10/05/11 01:00 1 10.03 SYSTEM 4 0 0 0.0 0.0 12 0 0 0.0 12 0
12 rows selected.
}}}
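Tablespace (and file) IO history lives in DBA_HIST_FILESTATXS. The counters are cumulative per datafile, so the sketch below diffs per file first and then rolls up to the tablespace; again a single instance is assumed:
{{{
-- per-tablespace read/write deltas per snapshot from DBA_HIST_FILESTATXS
select snap_id, tsname,
       sum(phyrds_delta)  reads,
       sum(phywrts_delta) writes
  from (
    select snap_id, tsname, filename,
           phyrds  - lag(phyrds)  over (partition by filename order by snap_id) phyrds_delta,
           phywrts - lag(phywrts) over (partition by filename order by snap_id) phywrts_delta
      from dba_hist_filestatxs
  )
 where phyrds_delta >= 0   -- drops the first snap per file and restart wraps
 group by snap_id, tsname
 order by snap_id, reads desc;
}}}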
AWR File IO Report
{{{
AWR File IO Report
i
n
Snap s Snap
Snap Start t Dur IO IOPS IOPS IOPS
ID Time # (m) TS File# Filename Rank Reads Av Reads/s Av Rd(ms) Av Blks/Rd Writes Av Writes/s Buffer Waits Av Buf Wt(ms) Total IO R+W Total R+W
------ --------------- --- ---------- -------------------- ----- ------------------------------------------------------------ ---- -------- ---------- --------- ---------- -------- ----------- ------------ ------------- ------------ ---------
848 10/04/25 07:01 1 10.05 SYSAUX 3 +DATA_1/ivrs/datafile/sysaux.258.652821943 1 12 0 78.3 1.0 201 0 0 0.0 213 0
848 10/04/25 07:01 1 10.05 USERS 13 +DATA_1/ivrs/datafile/users02.dbf 2 22 0 150.0 1.0 103 0 0 0.0 125 0
848 10/04/25 07:01 1 10.05 USERS 4 +DATA_1/ivrs/datafile/users.263.652821963 3 3 0 553.3 1.0 64 0 0 0.0 67 0
848 10/04/25 07:01 1 10.05 UNDOTBS1 2 +DATA_1/ivrs/datafile/undotbs1.257.652821933 4 0 0 30 0 1 0.0 30 0
848 10/04/25 07:01 1 10.05 SYSTEM 1 +DATA_1/ivrs/datafile/system.267.652821909 5 0 0 10 0 0 0.0 10 0
1097 10/05/11 00:00 1 10.03 SYSAUX 3 +DATA_1/ivrs/datafile/sysaux.258.652821943 1 159 0 148.5 2.9 180 0 0 0.0 339 1
1097 10/05/11 00:00 1 10.03 SYSTEM 1 +DATA_1/ivrs/datafile/system.267.652821909 2 221 0 47.6 1.0 10 0 0 0.0 231 0
1097 10/05/11 00:00 1 10.03 USERS 4 +DATA_1/ivrs/datafile/users.263.652821963 3 16 0 19.4 2.3 78 0 0 0.0 94 0
1097 10/05/11 00:00 1 10.03 USERS 13 +DATA_1/ivrs/datafile/users02.dbf 4 12 0 16.7 2.3 69 0 0 0.0 81 0
1097 10/05/11 00:00 1 10.03 UNDOTBS1 2 +DATA_1/ivrs/datafile/undotbs1.257.652821933 5 0 0 27 0 0 0.0 27 0
1103 10/05/11 01:00 1 10.03 SYSAUX 3 +DATA_1/ivrs/datafile/sysaux.258.652821943 1 20 0 130.0 3.1 202 0 0 0.0 222 0
1103 10/05/11 01:00 1 10.03 USERS 4 +DATA_1/ivrs/datafile/users.263.652821963 2 7 0 45.7 4.9 84 0 0 0.0 91 0
1103 10/05/11 01:00 1 10.03 USERS 13 +DATA_1/ivrs/datafile/users02.dbf 3 3 0 53.3 4.7 77 0 0 0.0 80 0
1103 10/05/11 01:00 1 10.03 UNDOTBS1 2 +DATA_1/ivrs/datafile/undotbs1.257.652821933 4 0 0 27 0 0 0.0 27 0
1103 10/05/11 01:00 1 10.03 SYSTEM 1 +DATA_1/ivrs/datafile/system.267.652821909 5 0 0 12 0 0 0.0 12 0
15 rows selected.
}}}
AWR Top SQL Report
{{{
AWR Top SQL Report
i
n Elapsed
Snap s Snap Plan Elapsed Time CPU A
Snap Start t Dur SQL Hash Time per exec Time Cluster Parse PX A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) Wait LIO PIO Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------------------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ -------- ---------- -------- ------- ---- ----------------------------------------
848 10/04/25 07:01 1 10.05 6gvch1xu9ca3g 0 393.66 78.73 323.47 0 4317 0 5 5 4 0 0.65 1 DECLARE job BINARY_INTEGER := :job; next
_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_E
M_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0;
END IF; END;
848 10/04/25 07:01 1 10.05 2zwjrv2186835 1684123640 Oracle Enterprise Ma 390.67 390.67 321.73 0 0 0 574 1 0 0 0.65 2 DELETE FROM MGMT_METRICS_RAW WHERE ROWID
nager.rollup = :B1
848 10/04/25 07:01 1 10.05 djs2w2f17nw2z 0 10.03 10.03 3.03 0 2256 25 1 1 1 0 0.02 3 DECLARE job BINARY_INTEGER := :job; next
_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate :
= next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
848 10/04/25 07:01 1 10.05 d92h3rjp0y217 0 4.33 4.33 1.82 0 57 0 1 1 1 0 0.01 4 begin prvt_hdm.auto_execute( :db_id, :in
st_id, :end_snap ); end;
848 10/04/25 07:01 1 10.05 5h7w8ykwtb2xt 0 DBMS_SCHEDULER 4.14 4.14 1.74 0 6 0 0 1 1 0 0.01 5 INSERT INTO SYS.WRI$_ADV_PARAMETERS (TAS
K_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2
, :B1 )
1097 10/05/11 00:00 1 10.03 6gvch1xu9ca3g 0 404.88 80.98 348.56 0 14533 614 5 5 5 0 0.67 1 DECLARE job BINARY_INTEGER := :job; next
_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_E
M_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0;
END IF; END;
1097 10/05/11 00:00 1 10.03 2zwjrv2186835 1684123640 Oracle Enterprise Ma 361.45 361.45 338.67 0 1292 15 598 1 1 0 0.60 2 DELETE FROM MGMT_METRICS_RAW WHERE ROWID
nager.rollup = :B1
1097 10/05/11 00:00 1 10.03 g2aqmpuqbytjy 2786940945 Oracle Enterprise Ma 8.65 1.44 0.16 0 606 25 2000 6 1 0 0.01 3 SELECT ROWID FROM MGMT_METRICS_RAW WHERE
nager.rollup TARGET_GUID = :B2 AND COLLECTION_TIMEST
AMP < :B1
1097 10/05/11 00:00 1 10.03 db78fxqxwxt7r 3312420081 4.66 0.03 0.35 0 415 55 1919 134 7 0 0.01 4 select /*+ rule */ bucket, endpoint, col
#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucke
t
1097 10/05/11 00:00 1 10.03 52upwvrbypadx 1716435639 Oracle Enterprise Ma 3.89 3.89 0.51 0 115 94 259 1 1 0 0.01 5 SELECT CM.TARGET_GUID, CM.METRIC_GUID, M
nager.current metric AX(CM.COLLECTION_TIMESTAMP) FROM MGMT_CU
purge RRENT_METRICS CM, MGMT_METRICS M WHERE C
M.METRIC_GUID = M.METRIC_GUID AND M.KEYS
_FROM_MULT_COLLS = 0 GROUP BY CM.TARGET_
GUID, CM.METRIC_GUID
1103 10/05/11 01:00 1 10.03 6gvch1xu9ca3g 0 438.17 109.54 416.31 0 5870 0 4 4 3 0 0.73 1 DECLARE job BINARY_INTEGER := :job; next
_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_E
M_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0;
END IF; END;
1103 10/05/11 01:00 1 10.03 2zwjrv2186835 1684123640 Oracle Enterprise Ma 436.64 436.64 415.14 0 1292 0 598 1 0 0 0.73 2 DELETE FROM MGMT_METRICS_RAW WHERE ROWID
nager.rollup = :B1
1103 10/05/11 01:00 1 10.03 d92h3rjp0y217 0 4.65 4.65 2.01 0 58 5 1 1 1 0 0.01 3 begin prvt_hdm.auto_execute( :db_id, :in
st_id, :end_snap ); end;
1103 10/05/11 01:00 1 10.03 5h7w8ykwtb2xt 0 4.52 4.52 1.89 0 6 5 0 1 1 0 0.01 4 INSERT INTO SYS.WRI$_ADV_PARAMETERS (TAS
K_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2
, :B1 )
1103 10/05/11 01:00 1 10.03 djs2w2f17nw2z 0 1.66 1.66 1.10 0 1841 48 1 1 1 0 0.00 5 DECLARE job BINARY_INTEGER := :job; next
_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate :
= next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
15 rows selected.
}}}
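The top SQL report is the simplest of the bunch because DBA_HIST_SQLSTAT already stores per-interval *_DELTA columns, so no LAG() is needed; join to DBA_HIST_SQLTEXT if you also want the statement text. A minimal sketch ranking the top 5 SQL by elapsed time per snapshot:
{{{
-- top 5 SQL by elapsed time per snapshot; the *_DELTA columns are already per-interval
select *
  from (
    select snap_id, sql_id, plan_hash_value, module,
           round(elapsed_time_delta/1000000, 2) ela_s,
           round(cpu_time_delta/1000000, 2)     cpu_s,
           executions_delta                     execs,
           buffer_gets_delta                    lio,
           disk_reads_delta                     pio,
           row_number() over (partition by snap_id
                              order by elapsed_time_delta desc) sql_rank
      from dba_hist_sqlstat
  )
 where sql_rank <= 5
 order by snap_id, sql_rank;
}}}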
AWR Top SQL Report 2
{{{
AWR Top SQL Report
i Ela
n Time
Snap s Snap Plan Ela per CPU IO App Ccr Cluster PX A
Snap Start t Dur SQL Hash Time exec Time Wait Wait Wait Wait Direct Parse Server A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) (s) (s) (s) LIO PIO Writes Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ -------- ---------- -------- ------- ---- ------
848 10/04/25 07:01 1 10.05 6gvch1xu9ca3g 0 393.66 78.73 323.47 0.00 0.00 0.00 0.00 4317 0 0 5 5 4 0 0.65 1 DECLAR
848 10/04/25 07:01 1 10.05 2zwjrv2186835 1684123640 Oracle E 390.67 390.67 321.73 0.00 0.00 0.00 0.00 0 0 0 574 1 0 0 0.65 2 DELETE
nterpris
e Manage
r.rollup
848 10/04/25 07:01 1 10.05 djs2w2f17nw2z 0 10.03 10.03 3.03 4.96 0.00 0.00 0.00 2256 25 0 1 1 1 0 0.02 3 DECLAR
848 10/04/25 07:01 1 10.05 d92h3rjp0y217 0 4.33 4.33 1.82 0.00 0.00 0.00 0.00 57 0 0 1 1 1 0 0.01 4 begin
848 10/04/25 07:01 1 10.05 5h7w8ykwtb2xt 0 DBMS_SCH 4.14 4.14 1.74 0.00 0.00 0.00 0.00 6 0 0 0 1 1 0 0.01 5 INSERT
EDULER
1097 10/05/11 00:00 1 10.03 6gvch1xu9ca3g 0 404.88 80.98 348.56 33.13 0.00 0.00 0.00 14533 614 0 5 5 5 0 0.67 1 DECLAR
1097 10/05/11 00:00 1 10.03 2zwjrv2186835 1684123640 Oracle E 361.45 361.45 338.67 0.02 0.00 0.00 0.00 1292 15 0 598 1 1 0 0.60 2 DELETE
nterpris
e Manage
r.rollup
1097 10/05/11 00:00 1 10.03 g2aqmpuqbytjy 2786940945 Oracle E 8.65 1.44 0.16 8.51 0.00 0.00 0.00 606 25 0 2000 6 1 0 0.01 3 SELECT
nterpris
e Manage
r.rollup
1097 10/05/11 00:00 1 10.03 db78fxqxwxt7r 3312420081 4.66 0.03 0.35 4.41 0.00 0.00 0.00 415 55 0 1919 134 7 0 0.01 4 select
1097 10/05/11 00:00 1 10.03 52upwvrbypadx 1716435639 Oracle E 3.89 3.89 0.51 3.45 0.00 0.00 0.00 115 94 0 259 1 1 0 0.01 5 SELECT
nterpris
e Manage
r.curren
t metric
purge
1103 10/05/11 01:00 1 10.03 6gvch1xu9ca3g 0 438.17 109.54 416.31 0.00 0.00 0.00 0.00 5870 0 0 4 4 3 0 0.73 1 DECLAR
1103 10/05/11 01:00 1 10.03 2zwjrv2186835 1684123640 Oracle E 436.64 436.64 415.14 0.00 0.00 0.00 0.00 1292 0 0 598 1 0 0 0.73 2 DELETE
nterpris
e Manage
r.rollup
1103 10/05/11 01:00 1 10.03 d92h3rjp0y217 0 4.65 4.65 2.01 1.34 0.00 0.00 0.00 58 5 0 1 1 1 0 0.01 3 begin
1103 10/05/11 01:00 1 10.03 5h7w8ykwtb2xt 0 4.52 4.52 1.89 1.34 0.00 0.00 0.00 6 5 0 0 1 1 0 0.01 4 INSERT
1103 10/05/11 01:00 1 10.03 djs2w2f17nw2z 0 1.66 1.66 1.10 0.49 0.00 0.00 0.00 1841 48 0 1 1 1 0 0.00 5 DECLAR
15 rows selected.
}}}
! The AWR report for SNAP_ID 848
{{{
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 848 25-Apr-10 07:01:00 26 5.0
End Snap: 849 25-Apr-10 07:11:04 26 3.7
Elapsed: 10.05 (mins)
DB Time: 14.25 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
---------- ----------
Buffer Cache: 172M 172M Std Block Size: 8K
Shared Pool Size: 120M 120M Log Buffer: 2,860K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 2,631.50 27,842.32
Logical reads: 53.40 565.00
Block changes: 7.00 74.07
Physical reads: 0.07 0.72
Physical writes: 0.95 10.05
User calls: 5.48 57.96
Parses: 1.07 11.32
Hard parses: 0.01 0.07
Sorts: 2.18 23.05
Logons: 0.01 0.16
Executes: 7.83 82.82
Transactions: 0.09
% Blocks changed per Read: 13.11 Recursive Call %: 70.04
Rollback per transaction %: 3.51 Rows per Sort: 47.33
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.92
Buffer Hit %: 99.87 In-memory Sort %: 100.00
Library Hit %: 99.87 Soft Parse %: 99.38
Execute to Parse %: 86.34 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 58.82 % Non-Parse CPU: 99.97
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 82.55 82.57
% SQL with executions>1: 87.11 89.12
% Memory for SQL w/exec>1: 81.76 84.56
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
CPU time 688 80.4
db file parallel write 408 20 50 2.4 System I/O
log file sequential read 66 20 299 2.3 System I/O
control file sequential read 3,575 8 2 1.0 System I/O
db file sequential read 54 7 136 0.9 User I/O
-------------------------------------------------------------
Time Model Statistics DB/Inst: IVRS/ivrs Snaps: 848-849
-> Total time in database user-calls (DB Time): 855s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 858.2 100.4
DB CPU 687.7 80.4
PL/SQL execution elapsed time 2.6 .3
parse time elapsed 0.9 .1
hard parse (sharing criteria) elapsed time 0.2 .0
hard parse elapsed time 0.2 .0
repeated bind elapsed time 0.0 .0
DB time 855.0 N/A
background elapsed time 60.0 N/A
background cpu time 14.2 N/A
-------------------------------------------------------------
Wait Class DB/Inst: IVRS/ivrs Snaps: 848-849
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
-------------------- ---------------- ------ ---------------- ------- ---------
System I/O 4,446 .0 59 13 78.0
User I/O 54 .0 7 136 0.9
Concurrency 8 50.0 6 707 0.1
Configuration 3 33.3 2 529 0.1
Network 3,856 .0 1 0 67.6
Other 22 .0 1 39 0.4
Commit 4 .0 0 11 0.1
-------------------------------------------------------------
Wait Events DB/Inst: IVRS/ivrs Snaps: 848-849
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
db file parallel write 408 .0 20 50 7.2
log file sequential read 66 .0 20 299 1.2
control file sequential read 3,575 .0 8 2 62.7
db file sequential read 54 .0 7 136 0.9
os thread startup 7 57.1 6 808 0.1
control file parallel write 212 .0 5 22 3.7
Log archive I/O 23 .0 3 136 0.4
log file parallel write 160 .0 3 17 2.8
log buffer space 1 100.0 1 977 0.0
SQL*Net more data to client 584 .0 1 2 10.2
latch free 2 .0 1 426 0.0
log file switch completion 2 .0 1 306 0.0
SQL*Net message to client 3,272 .0 0 0 57.4
log file sync 4 .0 0 11 0.1
change tracking file synchro 10 .0 0 1 0.2
log file single write 2 .0 0 4 0.0
change tracking file synchro 10 .0 0 0 0.2
buffer busy waits 1 .0 0 2 0.0
SQL*Net message from client 3,272 .0 1,166 356 57.4
ASM background timer 153 .0 590 3857 2.7
virtual circuit status 20 100.0 586 29294 0.4
class slave wait 24 .0 585 24356 0.4
Streams AQ: qmn slave idle w 21 .0 577 27487 0.4
Streams AQ: qmn coordinator 43 51.2 577 13424 0.8
jobq slave wait 85 95.3 242 2851 1.5
KSV master wait 24 .0 0 18 0.4
-------------------------------------------------------------
Background Wait Events DB/Inst: IVRS/ivrs Snaps: 848-849
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
db file parallel write 408 .0 20 50 7.2
log file sequential read 25 .0 12 489 0.4
os thread startup 7 57.1 6 808 0.1
control file parallel write 212 .0 5 22 3.7
Log archive I/O 23 .0 3 136 0.4
log file parallel write 160 .0 3 17 2.8
db file sequential read 1 .0 1 767 0.0
control file sequential read 220 .0 0 0 3.9
events in waitclass Other 20 .0 0 1 0.4
log file single write 2 .0 0 4 0.0
rdbms ipc message 2,479 94.1 7,021 2832 43.5
ASM background timer 153 .0 590 3857 2.7
pmon timer 204 100.0 589 2889 3.6
smon timer 2 100.0 586 292971 0.0
class slave wait 24 .0 585 24356 0.4
Streams AQ: qmn slave idle w 21 .0 577 27487 0.4
Streams AQ: qmn coordinator 43 51.2 577 13424 0.8
KSV master wait 3 .0 0 1 0.1
-------------------------------------------------------------
Operating System Statistics DB/Inst: IVRS/ivrs Snaps: 848-849
Statistic Total
-------------------------------- --------------------
BUSY_TIME 41,676
IDLE_TIME 18,987
IOWAIT_TIME 2,201
NICE_TIME 0
SYS_TIME 24,945
USER_TIME 16,534
LOAD 1
RSRC_MGR_CPU_WAIT_TIME 0
PHYSICAL_MEMORY_BYTES 199,040
NUM_CPUS 1
-------------------------------------------------------------
Service Statistics DB/Inst: IVRS/ivrs Snaps: 848-849
-> ordered by DB Time
Physical Logical
Service Name DB Time (s) DB CPU (s) Reads Reads
-------------------------------- ------------ ------------ ---------- ----------
SYS$USERS 860.8 690.1 25 28,890
SYS$BACKGROUND 0.0 0.0 12 3,274
ivrs.karl.com 0.0 0.0 0 0
ivrsXDB 0.0 0.0 0 0
-------------------------------------------------------------
Service Wait Class Stats DB/Inst: IVRS/ivrs Snaps: 848-849
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
classes: User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in centisecond (100th of a second)
Service Name
----------------------------------------------------------------
User I/O User I/O Concurcy Concurcy Admin Admin Network Network
Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time
--------- --------- --------- --------- --------- --------- --------- ---------
SYS$USERS
28 531 1 0 0 0 3856 126
SYS$BACKGROUND
26 204 7 565 0 0 0 0
-------------------------------------------------------------
SQL ordered by Elapsed Time DB/Inst: IVRS/ivrs Snaps: 848-849
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
394 323 5 78.7 46.0 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
391 322 1 390.7 45.7 2zwjrv2186835
Module: Oracle Enterprise Manager.rollup
DELETE FROM MGMT_METRICS_RAW WHERE ROWID = :B1
10 3 1 10.0 1.2 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
4 2 1 4.3 0.5 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
4 2 1 4.1 0.5 5h7w8ykwtb2xt
Module: DBMS_SCHEDULER
INSERT INTO SYS.WRI$_ADV_PARAMETERS (TASK_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
4 4 684 0.0 0.5 8qqu3q00vn8yk
Module: Lab128
--lab128 select s.sid,s.ownerid,s.user#,sql_address ,decode(s.sql_id,null,s.pr
ev_sql_id,s.sql_id) sql_id ,decode(s.sql_id,null,s.prev_child_number,s.sql_chil
d_number) sql_child_number ,userenv('INSTANCE') inst_id,s.seq#,s.event#,s.seria
l# ,s.row_wait_obj#,s.row_wait_file#,s.row_wait_block#,s.row_wait_row# ,s.serv
3 1 91 0.0 0.4 3w449zp3trs2m
Module: Lab128
--lab128 select /*+choose*/ 'D'||file# a, nvl(phyrds,0) phyrds, nvl(phywrts
,0) phywrts, nvl(phyblkrd,0) phyblkrd, nvl(phyblkwrt,0) phyblkwrt, nvl(rea
dtim,0) readtim, nvl(writetim,0) writetim from v$filestat union all sele
ct 'T'||file#, nvl(phyrds,0), nvl(phywrts,0), nvl(phyblkrd,0), nvl(phybl
3 2 113 0.0 0.3 cu93nv62rnksz
Module: Lab128
--lab128 select sum(decode(archived,'NO',1,0)) arch_n, count(*) grp_n, su
m(bytes) bytes, max(sequence#) cur_seq from v$log where thread#=:1
2 2 58 0.0 0.3 6j3n789qhk1d8
Module: Lab128
--lab128 select sql_id,plan_hash_value,parse_calls,disk_reads,direct_writes, b
uffer_gets,rows_processed,serializable_aborts,fetches,executions, end_of_fetch_
count,loads,invalidations,px_servers_executions, cpu_time,elapsed_time,applicat
ion_wait_time,concurrency_wait_time, cluster_wait_time,user_io_wait_time,plsql_
2 1 1 1.9 0.2 4s89043frjt6c
Module: Lab128
--lab128 select decode(dirty,'Y',1,0)+decode(temp,'Y',16,0)+decode(ping,'Y',153
6,0)+decode(stale,'Y',16384,0)+decode(direct,'Y',65536,0) flag,0 lru_flag,ts#,fi
le#,block#,class#,status,objd from v$bh
-------------------------------------------------------------
SQL ordered by CPU Time DB/Inst: IVRS/ivrs Snaps: 848-849
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ----------- ------- -------------
323 394 5 64.69 46.0 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
322 391 1 321.73 45.7 2zwjrv2186835
Module: Oracle Enterprise Manager.rollup
DELETE FROM MGMT_METRICS_RAW WHERE ROWID = :B1
4 4 684 0.01 0.5 8qqu3q00vn8yk
Module: Lab128
--lab128 select s.sid,s.ownerid,s.user#,sql_address ,decode(s.sql_id,null,s.pr
ev_sql_id,s.sql_id) sql_id ,decode(s.sql_id,null,s.prev_child_number,s.sql_chil
d_number) sql_child_number ,userenv('INSTANCE') inst_id,s.seq#,s.event#,s.seria
l# ,s.row_wait_obj#,s.row_wait_file#,s.row_wait_block#,s.row_wait_row# ,s.serv
3 10 1 3.03 1.2 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
2 2 58 0.03 0.3 6j3n789qhk1d8
Module: Lab128
--lab128 select sql_id,plan_hash_value,parse_calls,disk_reads,direct_writes, b
uffer_gets,rows_processed,serializable_aborts,fetches,executions, end_of_fetch_
count,loads,invalidations,px_servers_executions, cpu_time,elapsed_time,applicat
ion_wait_time,concurrency_wait_time, cluster_wait_time,user_io_wait_time,plsql_
2 4 1 1.82 0.5 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
2 4 1 1.74 0.5 5h7w8ykwtb2xt
Module: DBMS_SCHEDULER
INSERT INTO SYS.WRI$_ADV_PARAMETERS (TASK_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
2 3 113 0.01 0.3 cu93nv62rnksz
Module: Lab128
--lab128 select sum(decode(archived,'NO',1,0)) arch_n, count(*) grp_n, su
m(bytes) bytes, max(sequence#) cur_seq from v$log where thread#=:1
1 2 47 0.03 0.2 1a5x7hwwfd1tu
Module: Lab128
--lab128 select --+rule latch#,sum(gets) gets,sum(misses) misses, sum(sleeps)
sleeps, sum(immediate_gets) immediate_gets,sum(immediate_misses) immediate_miss
es, sum(waits_holding_latch) waits_holding_latch,sum(spin_gets) spin_gets from
v$latch_children where misses+immediate_misses>100 group by latch#
1 2 81 0.02 0.2 ftmbsd4sgrs8z
Module: Lab128
--lab128 select sid,inst_id,decode(type,'TM',id1,null) obj_id,type,id1,id2, l
mode,lmode "Lock Held",request,request "Lock Request", ctime,kaddr from gv$l
ock where type!='MR'
-------------------------------------------------------------
SQL ordered by Gets DB/Inst: IVRS/ivrs Snaps: 848-849
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets: 32,205
-> Captured SQL account for 66.9% of Total
Gets CPU Elapsed
Buffer Gets Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
11,050 17 650.0 34.3 0.66 1.19 7tsk4wu9599z0
Module: Lab128
--lab128 select object_id,data_object_id,owner,object_type, object_name||deco
de(subobject_name,null,null,' ('||subobject_name||')') obj_name, last_ddl_time
from dba_objects where data_object_id is not null and last_ddl_time>=:1
6,720 84 80.0 20.9 0.76 0.87 085pnk9fbdqpa
Module: Lab128
--lab128 select b.segment_name seg_name, a.extends, a.writes, a.gets, a.waits,
a.rssize, a.xacts, a.shrinks, a.wraps, a.hwmsize, a.status v_status, b.segment
_id usn, b.tablespace_name ts_name, b.status, a.inst_id from gv$rollstat a
, dba_rollback_segs b where a.usn(+)=b.segment_id
4,317 5 863.4 13.4 323.47 393.66 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
2,256 1 2,256.0 7.0 3.03 10.03 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
451 1 451.0 1.4 0.06 0.72 11dxc6vpx5z4n
INSERT INTO STATS$SQLTEXT ( OLD_HASH_VALUE , TEXT_SUBSET , PIECE , SQL_ID , SQL_
TEXT , ADDRESS , COMMAND_TYPE , LAST_SNAP_ID ) SELECT /*+ ordered use_nl(vst) */
NEW_SQL.OLD_HASH_VALUE , NEW_SQL.TEXT_SUBSET , VST.PIECE , VST.SQL_ID , VST.SQL
_TEXT , VST.ADDRESS , VST.COMMAND_TYPE , NEW_SQL.SNAP_ID FROM (SELECT HASH_VALUE
351 117 3.0 1.1 0.15 0.27 6ssrk2dqj7jbx
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= next_date) and (n
ext_date <= :2)) or ((last_date is null) and (next_date < :3))) and (field1
= :4 or (field1 = 0 and 'Y' = :5)) and (this_date is null) order by next_date, j
ob
346 1 346.0 1.1 0.38 1.03 925d7dd714u48
INSERT INTO STATS$LATCH ( SNAP_ID , DBID , INSTANCE_NUMBER , NAME , LATCH# , LEV
EL# , GETS , MISSES , SLEEPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , W
AIT_TIME ) SELECT :B3 , :B2 , :B1 , NAME , LATCH# , LEVEL# , GETS , MISSES , SLE
EPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , WAIT_TIME FROM V$LATCH
321 107 3.0 1.0 0.03 0.07 g00cj285jmgsw
update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd, del
etes = deletes + :del, flags = (decode(bitand(flags, :flag), :flag, flags, flags
+ :flag)), drop_segments = drop_segments + :dropseg, timestamp = :time where ob
j# = :objn
276 3 92.0 0.9 0.43 1.71 2pqjt85mh15t1
Module: Lab128
--lab128 select d.file_id file#, d.file_id fid, decode(t.contents,'TEMPORARY'
,'T',null)||'D'||d.file_id nid, d.file_name, f.status status, f.enabled, t.t
ablespace_name ts_name, d.bytes, decode(d.maxbytes,0,d.bytes,d.maxbytes) maxby
tes, f.ts#, d.blocks, f.rfile# from dba_data_files d, v$datafile f, dba_tabl
259 1 259.0 0.8 0.10 1.03 bq0xuw807fdju
INSERT INTO STATS$EVENT_HISTOGRAM ( SNAP_ID , DBID , INSTANCE_NUMBER , EVENT_ID
, WAIT_TIME_MILLI , WAIT_COUNT ) SELECT :B3 , :B2 , :B1 , EN.EVENT_ID , WAIT_TIM
E_MILLI , WAIT_COUNT FROM V$EVENT_HISTOGRAM EH , V$EVENT_NAME EN WHERE EH.EVENT
= EN.NAME AND EH.EVENT# = EN.EVENT#
-------------------------------------------------------------
SQL ordered by Reads DB/Inst: IVRS/ivrs Snaps: 848-849
-> Total Disk Reads: 41
-> Captured SQL account for 46.3% of Total
Reads CPU Elapsed
Physical Reads Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
25 1 25.0 61.0 3.03 10.03 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
5 1 5.0 12.2 0.20 1.52 adx64219uugxh
INSERT INTO STATS$SQL_SUMMARY ( SNAP_ID , DBID , INSTANCE_NUMBER , TEXT_SUBSET ,
SQL_ID , SHARABLE_MEM , SORTS , MODULE , LOADED_VERSIONS , FETCHES , EXECUTIONS
, PX_SERVERS_EXECUTIONS , END_OF_FETCH_COUNT , LOADS , INVALIDATIONS , PARSE_CA
LLS , DISK_READS , DIRECT_WRITES , BUFFER_GETS , APPLICATION_WAIT_TIME , CONCURR
5 1 5.0 12.2 0.10 1.03 bq0xuw807fdju
INSERT INTO STATS$EVENT_HISTOGRAM ( SNAP_ID , DBID , INSTANCE_NUMBER , EVENT_ID
, WAIT_TIME_MILLI , WAIT_COUNT ) SELECT :B3 , :B2 , :B1 , EN.EVENT_ID , WAIT_TIM
E_MILLI , WAIT_COUNT FROM V$EVENT_HISTOGRAM EH , V$EVENT_NAME EN WHERE EH.EVENT
= EN.NAME AND EH.EVENT# = EN.EVENT#
4 1 4.0 9.8 0.38 1.03 925d7dd714u48
INSERT INTO STATS$LATCH ( SNAP_ID , DBID , INSTANCE_NUMBER , NAME , LATCH# , LEV
EL# , GETS , MISSES , SLEEPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , W
AIT_TIME ) SELECT :B3 , :B2 , :B1 , NAME , LATCH# , LEVEL# , GETS , MISSES , SLE
EPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , WAIT_TIME FROM V$LATCH
2 1 2.0 4.9 0.21 0.86 g6wf9na8zs5hb
insert into wrh$_sysmetric_summary (snap_id, dbid, instance_number, beg
in_time, end_time, intsize, group_id, metric_id, num_interval, maxval, minv
al, average, standard_deviation) select :snap_id, :dbid, :instance_number,
begtime, endtime, intsize_csec, groupid, metricid, numintv, max, min,
1 1 1.0 2.4 0.06 0.72 11dxc6vpx5z4n
INSERT INTO STATS$SQLTEXT ( OLD_HASH_VALUE , TEXT_SUBSET , PIECE , SQL_ID , SQL_
TEXT , ADDRESS , COMMAND_TYPE , LAST_SNAP_ID ) SELECT /*+ ordered use_nl(vst) */
NEW_SQL.OLD_HASH_VALUE , NEW_SQL.TEXT_SUBSET , VST.PIECE , VST.SQL_ID , VST.SQL
_TEXT , VST.ADDRESS , VST.COMMAND_TYPE , NEW_SQL.SNAP_ID FROM (SELECT HASH_VALUE
1 1 1.0 2.4 0.05 0.36 12w90w4nnm5m1
INSERT INTO STATS$ENQUEUE_STATISTICS ( SNAP_ID , DBID , INSTANCE_NUMBER , EQ_TYP
E , REQ_REASON , TOTAL_REQ# , TOTAL_WAIT# , SUCC_REQ# , FAILED_REQ# , CUM_WAIT_T
IME , EVENT# ) SELECT :B3 , :B2 , :B1 , EQ_TYPE , REQ_REASON , TOTAL_REQ# , TOTA
L_WAIT# , SUCC_REQ# , FAILED_REQ# , CUM_WAIT_TIME , EVENT# FROM V$ENQUEUE_STATIS
1 1 1.0 2.4 0.03 0.67 bqnn4c3gjtmgu
insert into wrh$_bg_event_summary (snap_id, dbid, instance_number, event_id
, total_waits, total_timeouts, time_waited_micro) select /*+ ordered use_nl(
e) */ :snap_id, :dbid, :instance_number, e.event_id, sum(e.total_waits),
sum(e.total_timeouts), sum(e.time_waited_micro) from v$session bgsids, v$s
0 84 0.0 0.0 0.76 0.87 085pnk9fbdqpa
Module: Lab128
--lab128 select b.segment_name seg_name, a.extends, a.writes, a.gets, a.waits,
a.rssize, a.xacts, a.shrinks, a.wraps, a.hwmsize, a.status v_status, b.segment
_id usn, b.tablespace_name ts_name, b.status, a.inst_id from gv$rollstat a
, dba_rollback_segs b where a.usn(+)=b.segment_id
0 4 0.0 0.0 0.00 0.00 089dbukv1aanh
Module: EM_PING
SELECT SYS_EXTRACT_UTC(SYSTIMESTAMP) FROM DUAL
-------------------------------------------------------------
SQL ordered by Executions DB/Inst: IVRS/ivrs Snaps: 848-849
-> Total Executions: 4,721
-> Captured SQL account for 62.9% of Total
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) SQL Id
------------ --------------- -------------- ---------- ----------- -------------
684 488 0.7 0.01 0.01 8qqu3q00vn8yk
Module: Lab128
--lab128 select s.sid,s.ownerid,s.user#,sql_address ,decode(s.sql_id,null,s.pr
ev_sql_id,s.sql_id) sql_id ,decode(s.sql_id,null,s.prev_child_number,s.sql_chil
d_number) sql_child_number ,userenv('INSTANCE') inst_id,s.seq#,s.event#,s.seria
l# ,s.row_wait_obj#,s.row_wait_file#,s.row_wait_block#,s.row_wait_row# ,s.serv
170 4,095 24.1 0.01 0.01 dxzurka33025y
Module: Lab128
--lab128 select sid,inst_id,serial#,saddr,paddr, sql_address sqladdr,sql_hash_
value,sql_id,sql_child_number, prev_sql_addr,prev_hash_value,prev_sql_id,prev_c
hild_number, username,command,status,server,osuser,machine,terminal, program,m
odule,action,type,logon_time,ownerid,schemaname, seq#, event "Event",p1text,p1
120 0 0.0 0.00 0.00 21jwn2m0rcnfu
Module: Lab128
--lab128 select sid,serial#,inst_id,last_update_time updated,username,opname,ta
rget,sofar, totalwork,units,elapsed_seconds elapsed,time_remaining left from
gv$session_longops where sofar!=totalwork or (sysdate-last_update_time)<=:1
order by last_update_time desc
120 2,746 22.9 0.00 0.00 bk16uma37hhch
Module: Lab128
--lab128 select sid,inst_id,statistic#,value from gv$sesstat where value!=0 and
statistic#=12 union all select sid,inst_id,statistic#,value from gv$sesstat
where value!=0 and statistic#=54 union all select sid,inst_id,statistic#,valu
e from gv$sesstat where value!=0 and statistic#=62
117 5 0.0 0.00 0.00 6ssrk2dqj7jbx
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= next_date) and (n
ext_date <= :2)) or ((last_date is null) and (next_date < :3))) and (field1
= :4 or (field1 = 0 and 'Y' = :5)) and (this_date is null) order by next_date, j
ob
116 116 1.0 0.00 0.00 4jhq63c9mf4kz
Module: Lab128
--lab128 select cnum_set, buf_got, sum_write, sum_scan, free_buffer_wait, wri
te_complete_wait, buffer_busy_wait, free_buffer_inspected, dirty_buffers_inspe
cted, db_block_change, db_block_gets, consistent_gets, physical_reads, physica
l_writes, set_msize from v$buffer_pool_statistics
114 1,140 10.0 0.01 0.01 d5vf5a1ffcskb
Module: Lab128
--lab128 select replace(stat_name,'TICKS','TIME') stat_name,value from v$osstat
where substr(stat_name,1,3) !='AVG'
113 113 1.0 0.01 0.02 cu93nv62rnksz
Module: Lab128
--lab128 select sum(decode(archived,'NO',1,0)) arch_n, count(*) grp_n, su
m(bytes) bytes, max(sequence#) cur_seq from v$log where thread#=:1
111 18,871 170.0 0.00 0.00 dy1y6skrzyc9x
Module: Lab128
--lab128 select statistic#, value from v$sysstat where value!=0
107 0 0.0 0.00 0.00 350f5yrnnmshs
lock table sys.mon_mods$ in exclusive mode nowait
107 107 1.0 0.00 0.00 g00cj285jmgsw
update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd, del
etes = deletes + :del, flags = (decode(bitand(flags, :flag), :flag, flags, flags
+ :flag)), drop_segments = drop_segments + :dropseg, timestamp = :time where ob
j# = :objn
100 6,200 62.0 0.01 0.01 10grzj2um950f
Module: Lab128
--lab128 select event,total_waits,total_timeouts,time_waited from v$system_ev
ent where time_waited>0
97 71 0.7 0.00 0.00 6mn64mp4prgbt
Module: Lab128
--lab128 select ses_addr,inst_id,xidusn usn,start_time,used_ublk ublk,used_urec
urec ,substr(start_time,10,10) "Start Time" from gv$transaction
94 2,551 27.1 0.00 0.00 9wmurr1m2knxn
Module: Lab128
--lab128 select inst_id,addr,pid,spid,pga_alloc_mem from gv$process
91 1,274 14.0 0.02 0.04 3w449zp3trs2m
Module: Lab128
--lab128 select /*+choose*/ 'D'||file# a, nvl(phyrds,0) phyrds, nvl(phywrts
,0) phywrts, nvl(phyblkrd,0) phyblkrd, nvl(phyblkwrt,0) phyblkwrt, nvl(rea
dtim,0) readtim, nvl(writetim,0) writetim from v$filestat union all sele
ct 'T'||file#, nvl(phyrds,0), nvl(phywrts,0), nvl(phyblkrd,0), nvl(phybl
87 0 0.0 0.00 0.01 989adrqsh9vs4
Module: Lab128
--lab128 select tablespace ts_name,session_addr,inst_id,sqladdr,sqlhash, bloc
ks,segfile#,segrfno#,segtype from gv$sort_usage
84 1,260 15.0 0.01 0.01 085pnk9fbdqpa
Module: Lab128
--lab128 select b.segment_name seg_name, a.extends, a.writes, a.gets, a.waits,
a.rssize, a.xacts, a.shrinks, a.wraps, a.hwmsize, a.status v_status, b.segment
_id usn, b.tablespace_name ts_name, b.status, a.inst_id from gv$rollstat a
, dba_rollback_segs b where a.usn(+)=b.segment_id
81 792 9.8 0.02 0.02 ftmbsd4sgrs8z
Module: Lab128
--lab128 select sid,inst_id,decode(type,'TM',id1,null) obj_id,type,id1,id2, l
mode,lmode "Lock Held",request,request "Lock Request", ctime,kaddr from gv$l
ock where type!='MR'
58 2,237 38.6 0.03 0.04 6j3n789qhk1d8
Module: Lab128
--lab128 select sql_id,plan_hash_value,parse_calls,disk_reads,direct_writes, b
uffer_gets,rows_processed,serializable_aborts,fetches,executions, end_of_fetch_
count,loads,invalidations,px_servers_executions, cpu_time,elapsed_time,applicat
ion_wait_time,concurrency_wait_time, cluster_wait_time,user_io_wait_time,plsql_
58 68 1.2 0.00 0.00 bsa0wjtftg3uw
select file# from file$ where ts#=:1
-------------------------------------------------------------
SQL ordered by Parse Calls DB/Inst: IVRS/ivrs Snaps: 848-849
-> Total Parse Calls: 645
-> Captured SQL account for 64.7% of Total
% Total
Parse Calls Executions Parses SQL Id
------------ ------------ --------- -------------
107 107 16.59 350f5yrnnmshs
lock table sys.mon_mods$ in exclusive mode nowait
107 107 16.59 g00cj285jmgsw
update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd, del
etes = deletes + :del, flags = (decode(bitand(flags, :flag), :flag, flags, flags
+ :flag)), drop_segments = drop_segments + :dropseg, timestamp = :time where ob
j# = :objn
58 58 8.99 bsa0wjtftg3uw
select file# from file$ where ts#=:1
14 14 2.17 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0
5 5 0.78 0k8522rmdzg4k
select privilege# from sysauth$ where (grantee#=:1 or grantee#=1) and privilege#
>0
5 5 0.78 24dkx03u3rj6k
SELECT COUNT(*) FROM MGMT_PARAMETERS WHERE PARAMETER_NAME=:B1 AND UPPER(PARAMETE
R_VALUE)='TRUE'
5 13 0.78 3c1kubcdjnppq
update sys.col_usage$ set equality_preds = equality_preds + decode(bitan
d(:flag,1),0,0,1), equijoin_preds = equijoin_preds + decode(bitand(:flag
,2),0,0,1), nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,4),0,0
,1), range_preds = range_preds + decode(bitand(:flag,8),0,0,1),
5 6 0.78 47a50dvdgnxc2
update sys.job$ set failures=0, this_date=null, flag=:1, last_date=:2, next_dat
e = greatest(:3, sysdate), total=total+(sysdate-nvl(this_date,sysdate)) where j
ob=:4
5 0 0.78 53btfq0dt9bs9
insert into sys.col_usage$ values ( :objn, :coln, decode(bitand(:flag,1),0,0
,1), decode(bitand(:flag,2),0,0,1), decode(bitand(:flag,4),0,0,1), decode(
bitand(:flag,8),0,0,1), decode(bitand(:flag,16),0,0,1), decode(bitand(:flag,
32),0,0,1), :time)
5 5 0.78 a1mbfp580hw3k
select u1.user#, u2.user#, u3.user#, failures, flag, interval#, what, nlsenv,
env, field1 from sys.job$ j, sys.user$ u1, sys.user$ u2, sys.user$ u3 where j
ob=:1 and (next_date <= sysdate or :2 != 0) and lowner = u1.name and powner = u
2.name and cowner = u3.name
-------------------------------------------------------------
SQL ordered by Sharable Memory DB/Inst: IVRS/ivrs Snaps: 848-849
No data exists for this section of the report.
-------------------------------------------------------------
SQL ordered by Version Count DB/Inst: IVRS/ivrs Snaps: 848-849
No data exists for this section of the report.
-------------------------------------------------------------
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 848-849
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
CPU used by this session 37,467 62.1 657.3
CPU used when call started 3,407 5.7 59.8
CR blocks created 0 0.0 0.0
Cached Commit SCN referenced 0 0.0 0.0
Commit SCN cached 0 0.0 0.0
DB time 70,920 117.6 1,244.2
DBWR checkpoint buffers written 573 1.0 10.1
DBWR checkpoints 1 0.0 0.0
DBWR transaction table writes 11 0.0 0.2
DBWR undo block writes 126 0.2 2.2
IMU CR rollbacks 0 0.0 0.0
IMU Flushes 0 0.0 0.0
IMU Redo allocation size 0 0.0 0.0
IMU commits 1 0.0 0.0
IMU contention 0 0.0 0.0
IMU pool not allocated 204 0.3 3.6
IMU undo allocation size 1,884 3.1 33.1
IMU- failed to get a private str 204 0.3 3.6
SMON posted for undo segment shr 0 0.0 0.0
SQL*Net roundtrips to/from clien 3,276 5.4 57.5
active txn count during cleanout 67 0.1 1.2
application wait time 0 0.0 0.0
background checkpoints completed 0 0.0 0.0
background checkpoints started 1 0.0 0.0
background timeouts 2,342 3.9 41.1
branch node splits 0 0.0 0.0
buffer is not pinned count 6,900 11.4 121.1
buffer is pinned count 6,592 10.9 115.7
bytes received via SQL*Net from 84,366 139.9 1,480.1
bytes sent via SQL*Net to client 2,388,363 3,960.3 41,901.1
calls to get snapshot scn: kcmgs 7,082 11.7 124.3
calls to kcmgas 225 0.4 4.0
calls to kcmgcs 38 0.1 0.7
change write time 70 0.1 1.2
cleanout - number of ktugct call 88 0.2 1.5
cleanouts and rollbacks - consis 0 0.0 0.0
cleanouts only - consistent read 0 0.0 0.0
cluster key scan block gets 4,118 6.8 72.3
cluster key scans 2,563 4.3 45.0
commit batch/immediate performed 2 0.0 0.0
commit batch/immediate requested 2 0.0 0.0
commit cleanout failures: callba 10 0.0 0.2
commit cleanouts 599 1.0 10.5
commit cleanouts successfully co 589 1.0 10.3
commit immediate performed 2 0.0 0.0
commit immediate requested 2 0.0 0.0
commit txn count during cleanout 51 0.1 0.9
concurrency wait time 566 0.9 9.9
consistent changes 0 0.0 0.0
consistent gets 26,341 43.7 462.1
consistent gets - examination 4,855 8.1 85.2
consistent gets direct 0 0.0 0.0
consistent gets from cache 26,341 43.7 462.1
cursor authentications 4 0.0 0.1
data blocks consistent reads - u 0 0.0 0.0
db block changes 4,222 7.0 74.1
db block gets 5,864 9.7 102.9
db block gets direct 0 0.0 0.0
db block gets from cache 5,864 9.7 102.9
deferred (CURRENT) block cleanou 398 0.7 7.0
dirty buffers inspected 0 0.0 0.0
enqueue conversions 191 0.3 3.4
enqueue releases 5,680 9.4 99.7
enqueue requests 5,683 9.4 99.7
enqueue timeouts 4 0.0 0.1
enqueue waits 0 0.0 0.0
exchange deadlocks 0 0.0 0.0
execute count 4,721 7.8 82.8
free buffer inspected 0 0.0 0.0
free buffer requested 143 0.2 2.5
heap block compress 4 0.0 0.1
hot buffers moved to head of LRU 0 0.0 0.0
immediate (CR) block cleanout ap 0 0.0 0.0
immediate (CURRENT) block cleano 83 0.1 1.5
index crx upgrade (positioned) 215 0.4 3.8
index fast full scans (full) 20 0.0 0.4
index fetch by key 5,491 9.1 96.3
index scans kdiixs1 833 1.4 14.6
leaf node 90-10 splits 10 0.0 0.2
leaf node splits 30 0.1 0.5
lob reads 0 0.0 0.0
lob writes 0 0.0 0.0
lob writes unaligned 0 0.0 0.0
logons cumulative 9 0.0 0.2
messages received 554 0.9 9.7
messages sent 554 0.9 9.7
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 19,199 31.8 336.8
opened cursors cumulative 644 1.1 11.3
parse count (failures) 0 0.0 0.0
parse count (hard) 4 0.0 0.1
parse count (total) 645 1.1 11.3
parse time cpu 20 0.0 0.4
parse time elapsed 34 0.1 0.6
physical read IO requests 41 0.1 0.7
physical read bytes 335,872 556.9 5,892.5
physical read total IO requests 3,972 6.6 69.7
physical read total bytes 391,015,424 648,360.9 6,859,919.7
physical read total multi block 316 0.5 5.5
physical reads 41 0.1 0.7
physical reads cache 41 0.1 0.7
physical reads cache prefetch 0 0.0 0.0
physical reads direct 0 0.0 0.0
physical reads direct temporary 0 0.0 0.0
physical reads prefetch warmup 0 0.0 0.0
physical write IO requests 408 0.7 7.2
physical write bytes 4,694,016 7,783.4 82,351.2
physical write total IO requests 1,241 2.1 21.8
physical write total bytes 38,889,472 64,484.4 682,271.4
physical write total multi block 212 0.4 3.7
physical writes 573 1.0 10.1
physical writes direct 0 0.0 0.0
physical writes direct (lob) 0 0.0 0.0
physical writes direct temporary 0 0.0 0.0
physical writes from cache 573 1.0 10.1
physical writes non checkpoint 134 0.2 2.4
pinned buffers inspected 0 0.0 0.0
prefetch warmup blocks aged out 0 0.0 0.0
prefetched blocks aged out befor 0 0.0 0.0
recursive calls 7,724 12.8 135.5
recursive cpu usage 34,240 56.8 600.7
redo blocks written 3,256 5.4 57.1
redo buffer allocation retries 2 0.0 0.0
redo entries 2,592 4.3 45.5
redo log space requests 2 0.0 0.0
redo log space wait time 64 0.1 1.1
redo ordering marks 1 0.0 0.0
redo size 1,587,012 2,631.5 27,842.3
redo synch time 3 0.0 0.1
redo synch writes 164 0.3 2.9
redo wastage 50,328 83.5 883.0
redo write time 290 0.5 5.1
redo writer latching time 0 0.0 0.0
redo writes 160 0.3 2.8
rollback changes - undo records 641 1.1 11.3
rollbacks only - consistent read 0 0.0 0.0
rows fetched via callback 918 1.5 16.1
session connect time 0 0.0 0.0
session cursor cache hits 198 0.3 3.5
session logical reads 32,205 53.4 565.0
session pga memory 163,354,104 270,865.1 2,865,861.5
session pga memory max 1,190,893,048 1,974,675.2 20,892,860.5
session uga memory 25,769,443,272 42,729,513.6 #############
session uga memory max 10,726,784 17,786.6 188,189.2
shared hash latch upgrades - no 245 0.4 4.3
sorts (memory) 1,314 2.2 23.1
sorts (rows) 62,192 103.1 1,091.1
sql area evicted 0 0.0 0.0
sql area purged 0 0.0 0.0
summed dirty queue length 0 0.0 0.0
switch current to new buffer 0 0.0 0.0
table fetch by rowid 1,308 2.2 23.0
table fetch continued row 0 0.0 0.0
table scan blocks gotten 12,701 21.1 222.8
table scan rows gotten 964,873 1,599.9 16,927.6
table scans (long tables) 0 0.0 0.0
table scans (short tables) 883 1.5 15.5
total number of times SMON poste 0 0.0 0.0
transaction rollbacks 2 0.0 0.0
undo change vector size 507,048 840.8 8,895.6
user I/O wait time 828 1.4 14.5
user calls 3,304 5.5 58.0
user commits 55 0.1 1.0
user rollbacks 2 0.0 0.0
workarea executions - optimal 1,056 1.8 18.5
write clones created in backgrou 1 0.0 0.0
-------------------------------------------------------------
Instance Activity Stats - Absolute Values DB/Inst: IVRS/ivrs Snaps: 848-849
-> Statistics with absolute values (should not be diffed)
Statistic Begin Value End Value
-------------------------------- --------------- ---------------
session cursor cache count 6,359 6,471
opened cursors current 130 96
logons current 26 26
-------------------------------------------------------------
Instance Activity Stats - Thread Activity DB/Inst: IVRS/ivrs Snaps: 848-849
-> Statistics identified by '(derived)' come from sources other than SYSSTAT
Statistic Total per Hour
-------------------------------- ------------------ ---------
log switches (derived) 1 5.97
-------------------------------------------------------------
Tablespace IO Stats DB/Inst: IVRS/ivrs Snaps: 848-849
-> ordered by IOs (Reads + Writes) desc
Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
SYSAUX
12 0 78.3 1.0 201 0 0 0.0
USERS
25 0 198.4 1.0 167 0 0 0.0
UNDOTBS1
0 0 0.0 .0 30 0 1 0.0
SYSTEM
0 0 0.0 .0 10 0 0 0.0
-------------------------------------------------------------
File IO Stats DB/Inst: IVRS/ivrs Snaps: 848-849
-> ordered by Tablespace, File
Tablespace Filename
------------------------ ----------------------------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
SYSAUX +DATA_1/ivrs/datafile/sysaux.258.652821943
12 0 78.3 1.0 201 0 0 0.0
SYSTEM +DATA_1/ivrs/datafile/system.267.652821909
0 0 N/A N/A 10 0 0 0.0
UNDOTBS1 +DATA_1/ivrs/datafile/undotbs1.257.652821933
0 0 N/A N/A 30 0 1 0.0
USERS +DATA_1/ivrs/datafile/users.263.652821963
3 0 553.3 1.0 64 0 0 0.0
USERS +DATA_1/ivrs/datafile/users02.dbf
22 0 150.0 1.0 103 0 0 0.0
-------------------------------------------------------------
Buffer Pool Statistics DB/Inst: IVRS/ivrs Snaps: 848-849
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Writ Buffer
Number of Pool Buffer Physical Physical Buff Comp Busy
P Buffers Hit% Gets Reads Writes Wait Wait Waits
--- ---------- ---- -------------- ------------ ----------- ---- ---- ----------
D 21,457 100 32,184 41 573 0 0 1
-------------------------------------------------------------
Instance Recovery Stats DB/Inst: IVRS/ivrs Snaps: 848-849
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- ------------
B 0 45 420 2489 9631 184320 9631 N/A
E 0 45 465 2370 9920 184320 9920 N/A
-------------------------------------------------------------
Buffer Pool Advisory DB/Inst: IVRS/ivrs Snap: 849
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate
Est
Phys
Size for Size Buffers for Read Estimated
P Est (M) Factor Estimate Factor Physical Reads
--- -------- ------ ---------------- ------ ------------------
D 16 .1 1,996 3.0 112,984
D 32 .2 3,992 1.6 61,210
D 48 .3 5,988 1.4 51,236
D 64 .4 7,984 1.2 46,823
D 80 .5 9,980 1.2 44,849
D 96 .6 11,976 1.2 43,976
D 112 .7 13,972 1.1 43,007
D 128 .7 15,968 1.1 42,684
D 144 .8 17,964 1.0 38,415
D 160 .9 19,960 1.0 38,080
D 172 1.0 21,457 1.0 37,757
D 176 1.0 21,956 1.0 37,649
D 192 1.1 23,952 1.0 37,554
D 208 1.2 25,948 1.0 37,386
D 224 1.3 27,944 1.0 37,350
D 240 1.4 29,940 1.0 36,083
D 256 1.5 31,936 0.9 34,396
D 272 1.6 33,932 0.9 32,710
D 288 1.7 35,928 0.8 31,012
D 304 1.8 37,924 0.8 29,325
D 320 1.9 39,920 0.7 27,639
-------------------------------------------------------------
PGA Aggr Summary DB/Inst: IVRS/ivrs Snaps: 848-849
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ------------------ --------------------------
100.0 126 0
-------------------------------------------------------------
Warning: pga_aggregate_target was set too low for current workload, as this
value was exceeded during this interval. Use the PGA Advisory view
to help identify a different value for pga_aggregate_target.
PGA Aggr Target Stats DB/Inst: IVRS/ivrs Snaps: 848-849
-> B: Begin snap E: End snap (rows identified with B or E contain data
which is absolute i.e. not diffed over the interval)
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
- ---------- ---------- ---------- ---------- ------ ------ ------ ----------
B 103 13 174.2 0.0 .0 .0 .0 21,094
E 103 34 129.1 0.0 .0 .0 .0 21,094
-------------------------------------------------------------
PGA Aggr Target Histogram DB/Inst: IVRS/ivrs Snaps: 848-849
-> Optimal Executions are purely in-memory operations
Low High
Optimal Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- -------------- ------------ ------------
2K 4K 887 887 0 0
64K 128K 3 3 0 0
128K 256K 1 1 0 0
512K 1024K 164 164 0 0
1M 2M 1 1 0 0
-------------------------------------------------------------
PGA Memory Advisory DB/Inst: IVRS/ivrs Snap: 849
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- ---------------- -------- ----------
13 0.1 3,185.0 245.7 93.0 45
26 0.3 3,185.0 245.7 93.0 45
52 0.5 3,185.0 245.7 93.0 45
77 0.8 3,185.0 189.2 94.0 30
103 1.0 3,185.0 0.0 100.0 23
124 1.2 3,185.0 0.0 100.0 23
144 1.4 3,185.0 0.0 100.0 23
165 1.6 3,185.0 0.0 100.0 23
185 1.8 3,185.0 0.0 100.0 23
206 2.0 3,185.0 0.0 100.0 22
309 3.0 3,185.0 0.0 100.0 1
412 4.0 3,185.0 0.0 100.0 0
618 6.0 3,185.0 0.0 100.0 0
824 8.0 3,185.0 0.0 100.0 0
-------------------------------------------------------------
Shared Pool Advisory DB/Inst: IVRS/ivrs Snap: 849
-> SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
in the Library Cache, and the physical number of memory objects associated
with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid.
Est LC Est LC Est LC Est LC
Shared SP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
60 .5 14 1,565 21,291 .9 3,708 14.8 111,856
72 .6 24 2,608 22,530 .9 2,469 9.9 112,971
84 .7 36 4,074 23,706 1.0 1,293 5.2 113,788
96 .8 47 5,561 24,555 1.0 444 1.8 114,402
108 .9 53 6,645 24,749 1.0 250 1.0 114,634
120 1.0 53 6,645 24,749 1.0 250 1.0 114,643
132 1.1 53 6,645 24,749 1.0 250 1.0 114,646
144 1.2 53 6,645 24,749 1.0 250 1.0 114,647
156 1.3 53 6,645 24,749 1.0 250 1.0 114,647
168 1.4 53 6,645 24,749 1.0 250 1.0 114,647
180 1.5 53 6,645 24,749 1.0 250 1.0 114,647
192 1.6 53 6,645 24,749 1.0 250 1.0 114,647
204 1.7 53 6,645 24,749 1.0 250 1.0 114,647
216 1.8 53 6,645 24,749 1.0 250 1.0 114,647
228 1.9 53 6,645 24,749 1.0 250 1.0 114,647
240 2.0 53 6,645 24,749 1.0 250 1.0 114,647
-------------------------------------------------------------
SGA Target Advisory DB/Inst: IVRS/ivrs Snap: 849
SGA Target SGA Size Est DB Est Physical
Size (M) Factor Time (s) Reads
---------- ---------- ------------ ----------------
156 0.5 5,286 61,534
234 0.8 4,676 44,209
312 1.0 4,522 37,753
390 1.3 4,405 32,883
468 1.5 4,283 27,786
546 1.8 4,283 27,786
624 2.0 4,283 27,786
-------------------------------------------------------------
Streams Pool Advisory DB/Inst: IVRS/ivrs Snap: 849
Size for Size Est Spill Est Spill Est Unspill Est Unspill
Est (MB) Factor Count Time (s) Count Time (s)
---------- --------- ----------- ----------- ----------- -----------
4 1.0 0 0 0 0
8 2.0 0 0 0 0
12 3.0 0 0 0 0
16 4.0 0 0 0 0
20 5.0 0 0 0 0
24 6.0 0 0 0 0
28 7.0 0 0 0 0
32 8.0 0 0 0 0
36 9.0 0 0 0 0
40 10.0 0 0 0 0
44 11.0 0 0 0 0
48 12.0 0 0 0 0
52 13.0 0 0 0 0
56 14.0 0 0 0 0
60 15.0 0 0 0 0
64 16.0 0 0 0 0
68 17.0 0 0 0 0
72 18.0 0 0 0 0
76 19.0 0 0 0 0
80 20.0 0 0 0 0
-------------------------------------------------------------
Java Pool Advisory DB/Inst: IVRS/ivrs Snap: 849
Est LC Est LC Est LC Est LC
Java JP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
8 1.0 2 168 69 1.0 250 1.0 103
12 1.5 2 168 69 1.0 250 1.0 103
16 2.0 2 168 69 1.0 250 1.0 103
-------------------------------------------------------------
Buffer Wait Statistics DB/Inst: IVRS/ivrs Snaps: 848-849
-> ordered by wait time desc, waits desc
Class Waits Total Wait Time (s) Avg Time (ms)
------------------ ----------- ------------------- --------------
undo header 1 0 0
-------------------------------------------------------------
Enqueue Activity DB/Inst: IVRS/ivrs Snaps: 848-849
No data exists for this section of the report.
-------------------------------------------------------------
Undo Segment Summary DB/Inst: IVRS/ivrs Snaps: 848-849
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Num Undo Number of Max Qry Max Tx Min/Max STO/ uS/uR/uU/
TS# Blocks (K) Transactions Len (s) Concurcy TR (mins) OOS eS/eR/eU
---- ---------- --------------- -------- -------- --------- ----- --------------
1 .2 348 320 3 15/19.366 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Undo Segment Stats DB/Inst: IVRS/ivrs Snaps: 848-849
-> Most recent 35 Undostat rows, ordered by Time desc
Num Undo Number of Max Qry Max Tx Tun Ret STO/ uS/uR/uU/
End Time Blocks Transactions Len (s) Concy (mins) OOS eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- ------------
25-Apr 07:19 128 241 0 3 15 0/0 0/0/0/0/0/0
25-Apr 07:09 116 107 320 3 19 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Latch Activity DB/Inst: IVRS/ivrs Snaps: 848-849
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Name Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
ASM allocation 172 0.0 N/A 0 0 N/A
ASM db client latch 397 0.0 N/A 0 0 N/A
ASM map headers 72 0.0 N/A 0 0 N/A
ASM map load waiting lis 12 0.0 N/A 0 0 N/A
ASM map operation freeli 14 0.0 N/A 0 0 N/A
ASM map operation hash t 40,702 0.0 N/A 0 0 N/A
ASM network background l 306 0.0 N/A 0 0 N/A
AWR Alerted Metric Eleme 2,248 0.0 N/A 0 0 N/A
Consistent RBA 161 0.0 N/A 0 0 N/A
FAL request queue 11 0.0 N/A 0 0 N/A
FAL subheap alocation 11 0.0 N/A 0 0 N/A
FIB s.o chain latch 25 0.0 N/A 0 0 N/A
FOB s.o list latch 76 0.0 N/A 0 0 N/A
In memory undo latch 12 0.0 N/A 0 208 0.0
JS queue state obj latch 4,320 0.0 N/A 0 0 N/A
JS slv state obj latch 3 0.0 N/A 0 0 N/A
KFK SGA context latch 293 0.0 N/A 0 0 N/A
KFMD SGA 5 0.0 N/A 0 0 N/A
KMG MMAN ready and start 201 0.0 N/A 0 0 N/A
KTF sga latch 1 0.0 N/A 0 192 0.0
KWQP Prop Status 3 0.0 N/A 0 0 N/A
MQL Tracking Latch 0 N/A N/A 0 12 0.0
Memory Management Latch 0 N/A N/A 0 201 0.0
OS process 27 0.0 N/A 0 0 N/A
OS process allocation 219 0.0 N/A 0 0 N/A
OS process: request allo 6 0.0 N/A 0 0 N/A
PL/SQL warning settings 15 0.0 N/A 0 0 N/A
Reserved Space Latch 2 0.0 N/A 0 0 N/A
SGA IO buffer pool latch 531 0.0 N/A 0 598 0.0
SQL memory manager latch 38 0.0 N/A 0 198 0.0
SQL memory manager worka 17,318 0.0 N/A 0 0 N/A
Shared B-Tree 27 0.0 N/A 0 0 N/A
active checkpoint queue 606 0.0 N/A 0 0 N/A
active service list 1,242 0.0 N/A 0 205 0.0
archive control 12 0.0 N/A 0 0 N/A
archive process latch 212 0.0 N/A 0 0 N/A
cache buffer handles 66 0.0 N/A 0 0 N/A
cache buffers chains 135,803 0.0 N/A 0 166 0.0
cache buffers lru chain 1,297 0.0 N/A 0 0 N/A
channel handle pool latc 6 0.0 N/A 0 0 N/A
channel operations paren 2,818 0.0 N/A 0 0 N/A
checkpoint queue latch 5,804 0.0 N/A 0 597 0.0
client/application info 153 0.0 N/A 0 0 N/A
compile environment latc 108 0.0 N/A 0 0 N/A
dml lock allocation 764 0.0 N/A 0 0 N/A
dummy allocation 18 0.0 N/A 0 0 N/A
enqueue hash chains 11,554 0.0 N/A 0 58 0.0
enqueues 10,281 0.0 N/A 0 0 N/A
event group latch 3 0.0 N/A 0 0 N/A
file cache latch 33 0.0 N/A 0 0 N/A
hash table column usage 10 0.0 N/A 0 21 0.0
hash table modification 1 0.0 N/A 0 0 N/A
job workq parent latch 0 N/A N/A 0 10 0.0
job_queue_processes para 14 0.0 N/A 0 0 N/A
kks stats 3 0.0 N/A 0 0 N/A
ksuosstats global area 273 0.4 1.0 0 0 N/A
ksv instance 2 0.0 N/A 0 0 N/A
ktm global data 2 0.0 N/A 0 0 N/A
kwqbsn:qsga 26 0.0 N/A 0 0 N/A
lgwr LWN SCN 327 0.0 N/A 0 0 N/A
library cache 4,379 0.0 N/A 0 0 N/A
library cache lock 3,088 0.0 N/A 0 0 N/A
library cache lock alloc 158 0.0 N/A 0 0 N/A
library cache pin 1,248 0.0 N/A 0 0 N/A
library cache pin alloca 29 0.0 N/A 0 0 N/A
list of block allocation 37 0.0 N/A 0 0 N/A
logminer context allocat 1 0.0 N/A 0 0 N/A
messages 5,930 0.0 N/A 0 0 N/A
mostly latch-free SCN 327 0.0 N/A 0 0 N/A
msg queue 24 0.0 N/A 0 24 0.0
multiblock read objects 0 N/A N/A 0 6 0.0
ncodef allocation latch 9 0.0 N/A 0 0 N/A
object queue header heap 605 0.0 N/A 0 0 N/A
object queue header oper 2,671 0.0 N/A 0 0 N/A
object stats modificatio 1 0.0 N/A 0 2 0.0
parallel query alloc buf 80 0.0 N/A 0 0 N/A
parameter table allocati 9 0.0 N/A 0 0 N/A
post/wait queue 13 0.0 N/A 0 7 0.0
process allocation 6 0.0 N/A 0 3 0.0
process group creation 6 0.0 N/A 0 0 N/A
qmn task queue latch 84 0.0 N/A 0 0 N/A
redo allocation 951 0.0 N/A 0 2,591 0.0
redo copy 0 N/A N/A 0 2,591 0.0
redo writing 1,512 0.0 N/A 0 0 N/A
reservation so alloc lat 1 0.0 N/A 0 0 N/A
resmgr group change latc 32 0.0 N/A 0 0 N/A
resmgr:actses active lis 130 0.0 N/A 0 0 N/A
resmgr:actses change gro 5 0.0 N/A 0 0 N/A
resmgr:free threads list 14 0.0 N/A 0 0 N/A
resmgr:schema config 116 0.0 N/A 0 0 N/A
row cache objects 9,991 0.0 N/A 0 0 N/A
rules engine rule set st 200 0.0 N/A 0 0 N/A
segmented array pool 24 0.0 N/A 0 0 N/A
sequence cache 6 0.0 N/A 0 0 N/A
session allocation 1,142 0.0 N/A 0 0 N/A
session idle bit 6,700 0.0 N/A 0 0 N/A
session state list latch 22 0.0 N/A 0 0 N/A
session switching 9 0.0 N/A 0 0 N/A
session timer 205 0.0 N/A 0 0 N/A
shared pool 2,024 0.0 N/A 0 0 N/A
shared pool simulator 425 0.0 N/A 0 0 N/A
simulator hash latch 2,281 0.0 N/A 0 0 N/A
simulator lru latch 2,259 0.0 N/A 0 10 0.0
slave class 75 0.0 N/A 0 0 N/A
slave class create 56 1.8 1.0 1 0 N/A
sort extent pool 100 0.0 N/A 0 0 N/A
state object free list 2 0.0 N/A 0 0 N/A
statistics aggregation 140 0.0 N/A 0 0 N/A
threshold alerts latch 65 0.0 N/A 0 0 N/A
transaction allocation 427,471 0.0 N/A 0 0 N/A
transaction branch alloc 9 0.0 N/A 0 0 N/A
undo global data 3,492 0.0 N/A 0 0 N/A
user lock 11 0.0 N/A 0 0 N/A
-------------------------------------------------------------
Latch Sleep Breakdown DB/Inst: IVRS/ivrs Snaps: 848-849
-> ordered by misses desc
Latch Name
----------------------------------------
Get Requests Misses Sleeps Spin Gets Sleep1 Sleep2 Sleep3
-------------- ----------- ----------- ---------- -------- -------- --------
ksuosstats global area
273 1 1 0 0 0 0
slave class create
56 1 1 0 0 0 0
-------------------------------------------------------------
Latch Miss Sources DB/Inst: IVRS/ivrs Snaps: 848-849
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
ksuosstats global area ksugetosstat 0 1 1
slave class create ksvcreate 0 1 0
-------------------------------------------------------------
Parent Latch Statistics DB/Inst: IVRS/ivrs Snaps: 848-849
No data exists for this section of the report.
-------------------------------------------------------------
Child Latch Statistics DB/Inst: IVRS/ivrs Snaps: 848-849
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Logical Reads DB/Inst: IVRS/ivrs Snaps: 848-849
-> Total Logical Reads: 32,205
-> Captured Segments account for 79.6% of Total
Tablespace Subobject Obj. Logical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS SYSTEM OBJ$ TABLE 10,608 32.94
SYS SYSTEM TS$ TABLE 2,816 8.74
SYS SYSTEM I_FILE#_BLOCK# INDEX 1,456 4.52
SYS SYSTEM FILE$ TABLE 1,264 3.92
SYS SYSTEM SEG$ TABLE 1,168 3.63
-------------------------------------------------------------
Segments by Physical Reads DB/Inst: IVRS/ivrs Snaps: 848-849
-> Total Physical Reads: 41
-> Captured Segments account for 63.4% of Total
Tablespace Subobject Obj. Physical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
PERFSTAT USERS STATS$EVENT_HISTOGRA INDEX 4 9.76
PERFSTAT USERS STATS$LATCH_PK INDEX 4 9.76
PERFSTAT USERS STATS$SQL_SUMMARY TABLE 4 9.76
PERFSTAT USERS STATS$BG_EVENT_SUMMA TABLE 1 2.44
PERFSTAT USERS STATS$EVENT_HISTOGRA TABLE 1 2.44
-------------------------------------------------------------
Segments by Row Lock Waits DB/Inst: IVRS/ivrs Snaps: 848-849
No data exists for this section of the report.
-------------------------------------------------------------
Segments by ITL Waits DB/Inst: IVRS/ivrs Snaps: 848-849
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Buffer Busy Waits DB/Inst: IVRS/ivrs Snaps: 848-849
No data exists for this section of the report.
-------------------------------------------------------------
Dictionary Cache Stats DB/Inst: IVRS/ivrs Snaps: 848-849
-> "Pct Misses" should be very low (< 2% in most cases)
-> "Final Usage" is the number of cache entries being used
Get Pct Scan Pct Mod Final
Cache Requests Miss Reqs Miss Reqs Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control 13 0.0 0 N/A 2 1
dc_files 39 0.0 0 N/A 0 13
dc_global_oids 13 0.0 0 N/A 0 45
dc_histogram_data 17 0.0 0 N/A 0 2,184
dc_histogram_defs 12 0.0 0 N/A 0 4,412
dc_object_ids 331 0.0 0 N/A 0 1,207
dc_objects 197 0.0 0 N/A 0 2,351
dc_profiles 5 0.0 0 N/A 0 1
dc_rollback_segments 78 0.0 0 N/A 0 16
dc_segments 4 0.0 0 N/A 0 1,010
dc_tablespaces 1,649 0.0 0 N/A 0 13
dc_users 973 0.0 0 N/A 0 56
outstanding_alerts 26 0.0 0 N/A 0 25
-------------------------------------------------------------
Library Cache Activity DB/Inst: IVRS/ivrs Snaps: 848-849
-> "Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 57 0.0 80 0.0 0 0
TABLE/PROCEDURE 20 0.0 244 0.0 0 0
TRIGGER 7 0.0 38 0.0 0 0
-------------------------------------------------------------
Process Memory Summary DB/Inst: IVRS/ivrs Snaps: 848-849
-> B: Begin snap E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
-> Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> ordered by Begin/End snapshot, Alloc (MB) desc
Hist
Avg Std Dev Max Max
Alloc Used Alloc Alloc Alloc Alloc Num Num
Category (MB) (MB) (MB) (MB) (MB) (MB) Proc Alloc
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B Other 165.1 N/A 5.9 10.7 46 46 28 28
Freeable 8.1 .0 .7 .5 1 N/A 11 11
SQL .6 .3 .0 .0 0 6 14 11
PL/SQL .4 .2 .0 .0 0 0 26 26
JAVA .0 .0 .0 .0 0 1 1 1
E Other 120.6 N/A 4.3 7.2 24 337 28 28
Freeable 7.7 .0 .8 .5 1 N/A 10 10
SQL .5 .2 .0 .0 0 6 13 10
PL/SQL .4 .1 .0 .0 0 0 26 26
-------------------------------------------------------------
SGA Memory Summary DB/Inst: IVRS/ivrs Snaps: 848-849
End Size (Bytes)
SGA regions Begin Size (Bytes) (if different)
------------------------------ ------------------- -------------------
Database Buffers 180,355,072
Fixed Size 1,261,612
Redo Buffers 2,928,640
Variable Size 142,610,388
-------------------
sum 327,155,712
-------------------------------------------------------------
SGA breakdown difference DB/Inst: IVRS/ivrs Snaps: 848-849
-> ordered by Pool, Name
-> N/A value for Begin MB or End MB indicates the size of that Pool/Name was
insignificant, or zero in that snapshot
Pool Name Begin MB End MB % Diff
------ ------------------------------ -------------- -------------- -------
java free memory 2.7 2.7 0.00
java joxlod exec hp 5.1 5.1 0.00
java joxs heap .2 .2 0.00
large ASM map operations hashta .2 .2 0.00
large CTWR dba buffer .4 .4 0.00
large PX msg pool .2 .2 0.00
large free memory 1.2 1.2 0.00
large krcc extent chunk 2.0 2.0 0.00
shared ASH buffers 2.0 2.0 0.00
shared CCursor 5.4 5.4 0.17
shared Heap0: KGL 3.6 3.6 0.00
shared KCB Table Scan Buffer 3.8 3.8 0.00
shared KGLS heap 6.5 6.5 0.00
shared KQR M PO 3.9 3.9 0.00
shared KSFD SGA I/O b 3.8 3.8 0.00
shared PCursor 3.3 3.3 0.00
shared PL/SQL DIANA 2.3 2.3 0.00
shared PL/SQL MPCODE 3.6 3.6 0.00
shared event statistics per sess 1.3 1.3 0.00
shared free memory 20.9 20.9 -0.15
shared kglsim hash table bkts 2.0 2.0 0.00
shared library cache 5.8 5.8 0.01
shared row cache 3.6 3.6 0.00
shared sql area 22.2 22.2 0.11
stream free memory 4.0 4.0 0.00
buffer_cache 172.0 172.0 0.00
fixed_sga 1.2 1.2 0.00
log_buffer 2.8 2.8 0.00
-------------------------------------------------------------
Streams CPU/IO Usage DB/Inst: IVRS/ivrs Snaps: 848-849
-> Streams processes ordered by CPU usage
-> CPU and I/O Time in micro seconds
Session Type CPU Time User I/O Time Sys I/O Time
------------------------- -------------- -------------- --------------
QMON Coordinator 48,643 0 0
QMON Slaves 30,874 0 0
-------------------------------------------------------------
Streams Capture DB/Inst: IVRS/ivrs Snaps: 848-849
No data exists for this section of the report.
-------------------------------------------------------------
Streams Apply DB/Inst: IVRS/ivrs Snaps: 848-849
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Queues DB/Inst: IVRS/ivrs Snaps: 848-849
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Subscribers DB/Inst: IVRS/ivrs Snaps: 848-849
No data exists for this section of the report.
-------------------------------------------------------------
Rule Set DB/Inst: IVRS/ivrs Snaps: 848-849
-> Rule Sets ordered by Evaluations
Fast SQL CPU Elapsed
Ruleset Name Evals Evals Execs Time Time
------------------------- -------- -------- -------- -------- --------
SYS.ALERT_QUE_R 0 0 0 0 0
-------------------------------------------------------------
Resource Limit Stats DB/Inst: IVRS/ivrs Snap: 849
No data exists for this section of the report.
-------------------------------------------------------------
init.ora Parameters DB/Inst: IVRS/ivrs Snaps: 848-849
End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
audit_file_dest /oracle/app/oracle/admin/ivrs/adu
audit_sys_operations TRUE
background_dump_dest /oracle/app/oracle/admin/ivrs/bdu
compatible 10.2.0.3.0
control_files +DATA_1/ivrs/control01.ctl, +DATA
core_dump_dest /oracle/app/oracle/admin/ivrs/cdu
db_block_size 8192
db_domain karl.com
db_file_multiblock_read_count 16
db_name ivrs
db_recovery_file_dest /flash_reco/flash_recovery_area
db_recovery_file_dest_size 161061273600
dispatchers (PROTOCOL=TCP) (SERVICE=ivrsXDB)
job_queue_processes 10
log_archive_dest_1 LOCATION=USE_DB_RECOVERY_FILE_DES
log_archive_format ivrs_%t_%s_%r.arc
open_cursors 300
os_authent_prefix
os_roles FALSE
pga_aggregate_target 108003328
processes 150
recyclebin OFF
remote_login_passwordfile EXCLUSIVE
remote_os_authent FALSE
remote_os_roles FALSE
sga_target 327155712
spfile +DATA_1/ivrs/spfileivrs.ora
sql92_security TRUE
statistics_level TYPICAL
undo_management AUTO
undo_tablespace UNDOTBS1
user_dump_dest /oracle/app/oracle/admin/ivrs/udu
-------------------------------------------------------------
End of Report
Report written to awrrpt_1_848_849.txt
}}}
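For reference, a text report like the one above is what Oracle's awrrpt.sql script produces (it ships in $ORACLE_HOME/rdbms/admin, and the awrrpt_1_848_849.txt filename matches its default naming). Here is a minimal sketch of the run that would regenerate it; the connect string is a placeholder and the prompt values are the ones implied by this report:
{{{
-- run as a DBA-privileged user on the ivrs instance
sqlplus / as sysdba
SQL> @?/rdbms/admin/awrrpt.sql
Enter value for report_type: text
Enter value for num_days: 1
Enter value for begin_snap: 848
Enter value for end_snap: 849
Enter value for report_name: awrrpt_1_848_849.txt
}}}
Note also the pga_aggregate_target warning embedded in the report. The PGA Advisory it points to is exposed through V$PGA_TARGET_ADVICE, the view behind the "PGA Memory Advisory" section. A sketch of the usual check follows: pick the smallest target where the estimated overallocation count reaches 0 (412MB in this report).
{{{
-- smallest pga_aggregate_target with no estimated overallocations
select round(pga_target_for_estimate/1024/1024) target_mb,
       pga_target_factor                        factor,
       estd_pga_cache_hit_percentage            est_hit_pct,
       estd_overalloc_count                     est_overalloc
  from v$pga_target_advice
 order by pga_target_for_estimate;
}}}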
! The ASH report for SNAP_ID 848
{{{
Summary of All User Input
-------------------------
Format : TEXT
DB Id : 2607950532
Inst num : 1
Begin time : 25-Apr-10 07:01:00
End time : 25-Apr-10 07:11:00
Slot width : Default
Report targets : 0
Report name : ashrpt_1_0425_0711.txt
ASH Report For IVRS/ivrs
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
CPUs SGA Size Buffer Cache Shared Pool ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
1 312M (100%) 172M (55.1%) 94M (30.1%) 2.0M (0.6%)
Analysis Begin Time: 25-Apr-10 07:01:00
Analysis End Time: 25-Apr-10 07:11:00
Elapsed Time: 10.0 (mins)
Sample Count: 44
Average Active Sessions: 0.73
Avg. Active Session per CPU: 0.73
Report Target: None specified
Top User Events DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU CPU 90.91 0.67
log file sequential read System I/O 2.27 0.02
-------------------------------------------------------------
Top Background Events DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
db file parallel write System I/O 4.55 0.03
CPU + Wait for CPU CPU 2.27 0.02
-------------------------------------------------------------
Top Event P1/P2/P3 Values DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
Event % Event P1 Value, P2 Value, P3 Value % Activity
------------------------------ ------- ----------------------------- ----------
Parameter 1 Parameter 2 Parameter 3
-------------------------- -------------------------- --------------------------
db file parallel write 4.55 "1","0","2147483647" 4.55
requests interrupt timeout
log file sequential read 2.27 "0","40962","8192" 2.27
log# block# blocks
-------------------------------------------------------------
Top Service/Module DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
Service Module % Activity Action % Action
-------------- ------------------------ ---------- ------------------ ----------
SYS$USERS Oracle Enterprise Manage 90.91 target 1 90.91
SYS$BACKGROUND UNNAMED 4.55 UNNAMED 4.55
MMON_SLAVE 2.27 Auto-Flush Slave A 2.27
SYS$USERS Lab128 2.27 UNNAMED 2.27
-------------------------------------------------------------
Top Client IDs DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL Command Types DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
-> 'Distinct SQLIDs' is the count of the distinct number of SQLIDs
with the given SQL Command Type found over all the ASH samples
in the analysis period
Distinct Avg Active
SQL Command Type SQLIDs % Activity Sessions
---------------------------------------- ---------- ---------- ----------
SELECT 1 2.27 0.02
-------------------------------------------------------------
Top SQL Statements DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
SQL ID Planhash % Activity Event % Event
------------- ----------- ---------- ------------------------------ ----------
6mn64mp4prgbt 3305425530 2.27 CPU + Wait for CPU 2.27
--lab128 select ses_addr,inst_id,xidusn usn,start_time,used_ublk ublk,used_urec
urec ,substr(start_time,10,10) "Start Time" from gv$transaction
-------------------------------------------------------------
Top SQL using literals DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
No data exists for this section of the report.
-------------------------------------------------------------
Top PL/SQL Procedures DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
-> 'PL/SQL entry subprogram' represents the application's top-level
entry-point (procedure, function, trigger, package initialization
or RPC call) into PL/SQL.
-> 'PL/SQL current subprogram' is the pl/sql subprogram being executed
at the point of sampling. If the value is 'SQL', it represents
the percentage of time spent executing SQL for the particular
plsql entry subprogram
PLSQL Entry Subprogram % Activity
----------------------------------------------------------------- ----------
PLSQL Current Subprogram % Current
----------------------------------------------------------------- ----------
SYSMAN.EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS 90.91
SYSMAN.EMD_LOADER.EMD_RAW_PURGE 90.91
-------------------------------------------------------------
Top Sessions DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
-> '# Samples Active' shows the number of ASH samples in which the session
was found waiting for that particular event. The percentage shown
in this column is calculated with respect to wall clock time
and not total database activity.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
when the session was waiting for that particular event
-> For sessions running Parallel Queries, this section will NOT aggregate
the PQ slave activity into the session issuing the PQ. Refer to
the 'Top Sessions running PQs' section for such statistics.
Sid, Serial# % Activity Event % Event
--------------- ---------- ------------------------------ ----------
User Program # Samples Active XIDs
-------------------- ------------------------------ ------------------ --------
127, 322 90.91 CPU + Wait for CPU 88.64
SYSMAN oracle@dbrocai...el.com (J001) 39/60 [ 65%] 1
log file sequential read 2.27
1/60 [ 2%] 1
167, 1 4.55 db file parallel write 4.55
SYS oracle@dbrocai...el.com (DBW0) 2/60 [ 3%] 0
125, 2 2.27 CPU + Wait for CPU 2.27
PERFSTAT lab128.exe 1/60 [ 2%] 0
141, 679 2.27 CPU + Wait for CPU 2.27
SYS oracle@dbrocai...el.com (m000) 1/60 [ 2%] 0
-------------------------------------------------------------
Top Blocking Sessions DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
No data exists for this section of the report.
-------------------------------------------------------------
Top Sessions running PQs DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
No data exists for this section of the report.
-------------------------------------------------------------
Top DB Objects DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
No data exists for this section of the report.
-------------------------------------------------------------
Top DB Files DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
No data exists for this section of the report.
-------------------------------------------------------------
Top Latches DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
No data exists for this section of the report.
-------------------------------------------------------------
Activity Over Time DB/Inst: IVRS/ivrs (Apr 25 07:01 to 07:11)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period
Slot Event
Slot Time (Duration) Count Event Count % Event
-------------------- -------- ------------------------------ -------- -------
07:01:00 (4.0 min) 26 CPU + Wait for CPU 26 59.09
07:05:00 (5.0 min) 18 CPU + Wait for CPU 15 34.09
db file parallel write 2 4.55
log file sequential read 1 2.27
-------------------------------------------------------------
End of Report
Report written to ashrpt_1_0425_0711.txt
}}}
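The ASH text report above comes from the companion ashrpt.sql script (also under $ORACLE_HOME/rdbms/admin; the "Summary of All User Input" header and the ashrpt_1_0425_0711.txt name are its output). A minimal sketch of the run, with the prompt values taken from that summary; the begin time and 10-minute duration match the analysis window shown:
{{{
-- run as a DBA-privileged user; times match the analysis window above
SQL> @?/rdbms/admin/ashrpt.sql
Enter value for report_type: text
Enter value for begin_time: 04/25/10 07:01:00
Enter value for duration: 10
Enter value for report_name: ashrpt_1_0425_0711.txt
}}}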
! The AWR report for SNAP_ID 1097
{{{
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 1097 11-May-10 00:00:36 22 1.7
End Snap: 1098 11-May-10 00:10:38 22 1.8
Elapsed: 10.03 (mins)
DB Time: 13.49 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
---------- ----------
Buffer Cache: 156M 156M Std Block Size: 8K
Shared Pool Size: 136M 136M Log Buffer: 2,860K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 3,111.23 28,800.68
Logical reads: 33.49 310.02
Block changes: 11.45 105.98
Physical reads: 1.24 11.48
Physical writes: 0.89 8.28
User calls: 0.04 0.42
Parses: 1.47 13.57
Hard parses: 0.11 0.98
Sorts: 1.31 12.14
Logons: 0.01 0.14
Executes: 4.05 37.48
Transactions: 0.11
% Blocks changed per Read: 34.19 Recursive Call %: 99.88
Rollback per transaction %: 3.08 Rows per Sort: 27.69
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 96.30 In-memory Sort %: 100.00
Library Hit %: 92.04 Soft Parse %: 92.74
Execute to Parse %: 63.79 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 59.74 % Non-Parse CPU: 99.93
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 54.99 57.02
% SQL with executions>1: 78.34 75.40
% Memory for SQL w/exec>1: 74.37 72.00
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
CPU time 697 86.1
db file sequential read 344 18 54 2.3 User I/O
db file scattered read 77 16 211 2.0 User I/O
db file parallel write 239 14 59 1.7 System I/O
control file parallel write 197 10 51 1.2 System I/O
-------------------------------------------------------------
Time Model Statistics DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Total time in database user-calls (DB Time): 809.4s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 813.0 100.4
DB CPU 697.3 86.1
parse time elapsed 23.5 2.9
hard parse elapsed time 22.5 2.8
PL/SQL compilation elapsed time 4.8 .6
PL/SQL execution elapsed time 4.1 .5
repeated bind elapsed time 0.2 .0
DB time 809.4 N/A
background elapsed time 36.7 N/A
background cpu time 9.0 N/A
-------------------------------------------------------------
Wait Class DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
-------------------- ---------------- ------ ---------------- ------- ---------
User I/O 421 .0 35 82 6.5
System I/O 1,926 .0 30 16 29.6
Configuration 2 100.0 2 977 0.0
Concurrency 4 25.0 1 337 0.1
Other 21 .0 1 49 0.3
Commit 4 .0 0 1 0.1
-------------------------------------------------------------
Wait Events DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
db file sequential read 344 .0 18 54 5.3
db file scattered read 77 .0 16 211 1.2
db file parallel write 239 .0 14 59 3.7
control file parallel write 197 .0 10 51 3.0
log file sequential read 30 .0 4 141 0.5
log buffer space 2 100.0 2 977 0.0
os thread startup 4 25.0 1 337 0.1
change tracking file synchro 10 .0 1 98 0.2
control file sequential read 1,370 .0 1 1 21.1
log file parallel write 90 .0 1 10 1.4
latch free 1 .0 0 45 0.0
log file sync 4 .0 0 1 0.1
change tracking file synchro 10 .0 0 1 0.2
Streams AQ: qmn coordinator 44 50.0 602 13674 0.7
Streams AQ: qmn slave idle w 22 .0 602 27348 0.3
ASM background timer 153 .0 588 3844 2.4
virtual circuit status 20 100.0 586 29296 0.3
class slave wait 25 4.0 455 18202 0.4
jobq slave wait 64 92.2 179 2804 1.0
KSV master wait 24 .0 0 10 0.4
-------------------------------------------------------------
Background Wait Events DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
db file parallel write 239 .0 14 59 3.7
control file parallel write 197 .0 10 51 3.0
os thread startup 4 25.0 1 337 0.1
events in waitclass Other 20 .0 1 49 0.3
log file parallel write 90 .0 1 10 1.4
control file sequential read 250 .0 1 3 3.8
db file sequential read 4 .0 0 30 0.1
rdbms ipc message 2,407 96.7 6,999 2908 37.0
Streams AQ: qmn coordinator 44 50.0 602 13674 0.7
Streams AQ: qmn slave idle w 22 .0 602 27348 0.3
ASM background timer 153 .0 588 3844 2.4
pmon timer 203 100.0 586 2888 3.1
smon timer 2 100.0 586 292973 0.0
class slave wait 25 4.0 455 18202 0.4
-------------------------------------------------------------
Operating System Statistics DB/Inst: IVRS/ivrs Snaps: 1097-1098
Statistic Total
-------------------------------- --------------------
BUSY_TIME 40,229
IDLE_TIME 19,998
IOWAIT_TIME 6,318
NICE_TIME 355
SYS_TIME 22,395
USER_TIME 17,339
LOAD 1
RSRC_MGR_CPU_WAIT_TIME 0
PHYSICAL_MEMORY_BYTES 2,864
NUM_CPUS 1
-------------------------------------------------------------
Service Statistics DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> ordered by DB Time
Physical Logical
Service Name DB Time (s) DB CPU (s) Reads Reads
-------------------------------- ------------ ------------ ---------- ----------
SYS$USERS 809.4 697.3 680 16,945
SYS$BACKGROUND 0.0 0.0 62 3,177
ivrs.karl.com 0.0 0.0 0 0
ivrsXDB 0.0 0.0 0 0
-------------------------------------------------------------
Service Wait Class Stats DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
classes: User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in centisecond (100th of a second)
Service Name
----------------------------------------------------------------
User I/O User I/O Concurcy Concurcy Admin Admin Network Network
Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time
--------- --------- --------- --------- --------- --------- --------- ---------
SYS$USERS
383 3364 0 0 0 0 0 0
SYS$BACKGROUND
38 108 4 135 0 0 0 0
-------------------------------------------------------------
SQL ordered by Elapsed Time DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
405 349 5 81.0 50.0 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
361 339 1 361.5 44.7 2zwjrv2186835
Module: Oracle Enterprise Manager.rollup
DELETE FROM MGMT_METRICS_RAW WHERE ROWID = :B1
9 0 6 1.4 1.1 g2aqmpuqbytjy
Module: Oracle Enterprise Manager.rollup
SELECT ROWID FROM MGMT_METRICS_RAW WHERE TARGET_GUID = :B2 AND COLLECTION_TIMEST
AMP < :B1
5 0 134 0.0 0.6 db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucket
4 1 1 3.9 0.5 52upwvrbypadx
Module: Oracle Enterprise Manager.current metric purge
SELECT CM.TARGET_GUID, CM.METRIC_GUID, MAX(CM.COLLECTION_TIMESTAMP) FROM MGMT_CU
RRENT_METRICS CM, MGMT_METRICS M WHERE CM.METRIC_GUID = M.METRIC_GUID AND M.KEYS
_FROM_MULT_COLLS = 0 GROUP BY CM.TARGET_GUID, CM.METRIC_GUID
2 0 35 0.1 0.3 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
2 0 1 2.3 0.3 934ptadxh61cv
Module: Oracle Enterprise Manager.purge system performan
SELECT ROWID FROM MGMT_SYSTEM_PERFORMANCE_LOG M WHERE TIME < :B1
2 1 1 2.0 0.2 fvw4xs4avpjj7
BEGIN MGMT_DELTA.ECM_HISTORY_PURGE(:1); END;
2 0 7 0.3 0.2 0z0uyqmj1qjsk
SELECT PP.POLICY_NAME, PP.POLICY_TYPE, PP.ROLLUP_PROC_NAME, PP.PURGE_PROC_NAME,
PP.POLICY_RETENTION_HOURS, PP.RETENTION_GROUP_NAME, TPD.CAN_ROLLUP_UPTO_TIME, TP
D.ROLLEDUP_UPTO_TIME, TPD.TARGET_RETENTION_HOURS FROM MGMT_PURGE_POLICY PP, MGMT
_PURGE_POLICY_TARGET_STATE TPD WHERE PP.POLICY_NAME = TPD.POLICY_NAME AND TPD.TA
2 1 6 0.3 0.2 3hnvpbd4b6142
BEGIN EM_MASTER_AGENT.MASTER_AGENT_HIST_PURGE(:1); END;
-------------------------------------------------------------
SQL ordered by CPU Time DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ----------- ------- -------------
349 405 5 69.71 50.0 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
339 361 1 338.67 44.7 2zwjrv2186835
Module: Oracle Enterprise Manager.rollup
DELETE FROM MGMT_METRICS_RAW WHERE ROWID = :B1
1 2 1 1.02 0.2 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
1 2 1 0.93 0.2 fvw4xs4avpjj7
BEGIN MGMT_DELTA.ECM_HISTORY_PURGE(:1); END;
1 1 6 0.12 0.2 d9m83rp9zfb9m
BEGIN MGMT_BLACKOUT_ENGINE.BLACKOUT_STATE_PURGE(:1); END;
1 1 6 0.11 0.2 crqc0d6r83p1r
BEGIN EM_SEVERITY.SEVERITY_PURGE(:1); END;
1 2 6 0.09 0.2 3hnvpbd4b6142
BEGIN EM_MASTER_AGENT.MASTER_AGENT_HIST_PURGE(:1); END;
1 1 1 0.53 0.1 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
1 4 1 0.51 0.5 52upwvrbypadx
Module: Oracle Enterprise Manager.current metric purge
SELECT CM.TARGET_GUID, CM.METRIC_GUID, MAX(CM.COLLECTION_TIMESTAMP) FROM MGMT_CU
RRENT_METRICS CM, MGMT_METRICS M WHERE CM.METRIC_GUID = M.METRIC_GUID AND M.KEYS
_FROM_MULT_COLLS = 0 GROUP BY CM.TARGET_GUID, CM.METRIC_GUID
1 1 1 0.51 0.1 5h7w8ykwtb2xt
INSERT INTO SYS.WRI$_ADV_PARAMETERS (TASK_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
0 9 6 0.03 1.1 g2aqmpuqbytjy
Module: Oracle Enterprise Manager.rollup
SELECT ROWID FROM MGMT_METRICS_RAW WHERE TARGET_GUID = :B2 AND COLLECTION_TIMEST
AMP < :B1
-------------------------------------------------------------
SQL ordered by Gets DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets: 20,151
-> Captured SQL account for 48.9% of Total
Gets CPU Elapsed
Buffer Gets Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
14,533 5 2,906.6 72.1 348.56 404.88 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
2,072 1 2,072.0 10.3 1.02 1.54 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
1,292 1 1,292.0 6.4 338.67 361.45 2zwjrv2186835
Module: Oracle Enterprise Manager.rollup
DELETE FROM MGMT_METRICS_RAW WHERE ROWID = :B1
946 26 36.4 4.7 0.21 0.61 cqgv56fmuj63x
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and p_obj#=obj#
(+) order by order#
892 26 34.3 4.4 0.20 0.43 8swypbbr0m372
select order#,columns,types from access$ where d_obj#=:1
832 1 832.0 4.1 0.93 2.00 fvw4xs4avpjj7
BEGIN MGMT_DELTA.ECM_HISTORY_PURGE(:1); END;
732 1 732.0 3.6 0.35 0.63 ajq92dy2g059g
BEGIN MGMT_ECM_POLICY.ECM_POLICY_ERROR_PURGE(:1); END;
668 6 111.3 3.3 0.75 1.25 d9m83rp9zfb9m
BEGIN MGMT_BLACKOUT_ENGINE.BLACKOUT_STATE_PURGE(:1); END;
663 1 663.0 3.3 0.47 0.89 2nk7zb6h8ngf8
BEGIN MGMT_ECM_CSA_PKG.AUTO_PURGE(:1); END;
606 6 101.0 3.0 0.16 8.65 g2aqmpuqbytjy
Module: Oracle Enterprise Manager.rollup
SELECT ROWID FROM MGMT_METRICS_RAW WHERE TARGET_GUID = :B2 AND COLLECTION_TIMEST
AMP < :B1
605 6 100.8 3.0 0.66 1.31 crqc0d6r83p1r
BEGIN EM_SEVERITY.SEVERITY_PURGE(:1); END;
457 6 76.2 2.3 0.55 1.75 3hnvpbd4b6142
BEGIN EM_MASTER_AGENT.MASTER_AGENT_HIST_PURGE(:1); END;
415 141 2.9 2.1 0.22 1.43 96g93hntrzjtr
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_
size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, a
vgcln from hist_head$ where obj#=:1 and intcol#=:2
415 134 3.1 2.1 0.35 4.66 db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucket
406 89 4.6 2.0 0.07 0.07 53saa2zkr6wc3
select intcol#,nvl(pos#,0),col#,nvl(spare1,0) from ccol$ where con#=:1
401 6 66.8 2.0 0.42 0.89 ba07n07ard6wp
BEGIN EMD_BCN_AVAIL.BEACON_AVAIL_LOG_PURGE(:1); END;
SQL ordered by Gets DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets: 20,151
-> Captured SQL account for 48.9% of Total
Gets CPU Elapsed
Buffer Gets Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
345 8 43.1 1.7 0.26 0.35 abtp0uqvdb1d3
CALL MGMT_ADMIN_DATA.EVALUATE_MGMT_METRICS(:tguid, :mguid, :result)
293 1 293.0 1.5 0.06 0.06 g337099aatnuj
update smon_scn_time set orig_thread=0, time_mp=:1, time_dp=:2, scn=:3, scn_wrp
=:4, scn_bas=:5, num_mappings=:6, tim_scn_map=:7 where thread=0 and scn = (sel
ect min(scn) from smon_scn_time where thread=0)
292 1 292.0 1.4 0.34 0.69 8m927vjw305ff
BEGIN ECM_CT.PURGE_HOST_CONFIGS(:1); END;
279 6 46.5 1.4 0.37 0.79 dmnntnpc9mnnq
SELECT SEV.ROWID FROM MGMT_SEVERITY SEV, (SELECT METRIC_GUID, KEY_VALUE, MAX(COL
LECTION_TIMESTAMP) MAX_COLL FROM MGMT_SEVERITY WHERE TARGET_GUID = :B3 AND SEVER
ITY_CODE = :B2 AND COLLECTION_TIMESTAMP < :B1 GROUP BY METRIC_GUID, KEY_VALUE) M
AX_CLR_SEV WHERE SEV.TARGET_GUID = :B3 AND SEV.METRIC_GUID = MAX_CLR_SEV.METRIC_
265 35 7.6 1.3 0.46 2.49 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
256 1 256.0 1.3 0.23 0.50 b8x6jah3hpq7u
SELECT SNAPSHOT_GUID, SNAPSHOT_TYPE, TARGET_NAME FROM MGMT_ECM_GEN_SNAPSHOT WHER
E SAVED_TIMESTAMP < SYSDATE - :B5 AND (:B4 IS NULL OR :B4 = SNAPSHOT_TYPE) AND (
:B3 IS NULL OR :B3 = TARGET_NAME) AND TARGET_TYPE = :B2 AND IS_CURRENT = :B1
238 38 6.3 1.2 0.06 0.06 6769wyy3yf66f
select pos#,intcol#,col#,spare1,bo#,spare2 from icol$ where obj#=:1
235 7 33.6 1.2 0.19 1.84 0z0uyqmj1qjsk
SELECT PP.POLICY_NAME, PP.POLICY_TYPE, PP.ROLLUP_PROC_NAME, PP.PURGE_PROC_NAME,
PP.POLICY_RETENTION_HOURS, PP.RETENTION_GROUP_NAME, TPD.CAN_ROLLUP_UPTO_TIME, TP
D.ROLLEDUP_UPTO_TIME, TPD.TARGET_RETENTION_HOURS FROM MGMT_PURGE_POLICY PP, MGMT
_PURGE_POLICY_TARGET_STATE TPD WHERE PP.POLICY_NAME = TPD.POLICY_NAME AND TPD.TA
213 6 35.5 1.1 0.09 1.05 9d20p3d2jzftx
Module: Oracle Enterprise Manager.rollup
SELECT ROWID FROM MGMT_METRICS_1DAY WHERE TARGET_GUID = :B2 AND ROLLUP_TIMESTAMP
< :B1
213 1 213.0 1.1 0.21 0.62 b2hrmq9xsdw51
BEGIN EMD_LOADER.STRING_HISTORY_PURGE(:1); END;
-------------------------------------------------------------
SQL ordered by Reads DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Total Disk Reads: 746
-> Captured SQL account for 67.0% of Total
Reads CPU Elapsed
Physical Reads Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
614 5 122.8 82.3 348.56 404.88 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
97 35 2.8 13.0 0.46 2.49 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
94 1 94.0 12.6 0.51 3.89 52upwvrbypadx
Module: Oracle Enterprise Manager.current metric purge
SELECT CM.TARGET_GUID, CM.METRIC_GUID, MAX(CM.COLLECTION_TIMESTAMP) FROM MGMT_CU
RRENT_METRICS CM, MGMT_METRICS M WHERE CM.METRIC_GUID = M.METRIC_GUID AND M.KEYS
_FROM_MULT_COLLS = 0 GROUP BY CM.TARGET_GUID, CM.METRIC_GUID
66 1 66.0 8.8 1.02 1.54 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
55 134 0.4 7.4 0.35 4.66 db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucket
39 1 39.0 5.2 0.21 0.62 b2hrmq9xsdw51
BEGIN EMD_LOADER.STRING_HISTORY_PURGE(:1); END;
35 6 5.8 4.7 0.66 1.31 crqc0d6r83p1r
BEGIN EM_SEVERITY.SEVERITY_PURGE(:1); END;
27 1 27.0 3.6 0.93 2.00 fvw4xs4avpjj7
BEGIN MGMT_DELTA.ECM_HISTORY_PURGE(:1); END;
25 6 4.2 3.4 0.75 1.25 d9m83rp9zfb9m
BEGIN MGMT_BLACKOUT_ENGINE.BLACKOUT_STATE_PURGE(:1); END;
25 6 4.2 3.4 0.16 8.65 g2aqmpuqbytjy
Module: Oracle Enterprise Manager.rollup
SELECT ROWID FROM MGMT_METRICS_RAW WHERE TARGET_GUID = :B2 AND COLLECTION_TIMEST
AMP < :B1
22 35 0.6 2.9 0.19 0.67 39m4sx9k63ba2
select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#,length,piece from idl_ub2$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
20 1 20.0 2.7 0.47 0.89 2nk7zb6h8ngf8
BEGIN MGMT_ECM_CSA_PKG.AUTO_PURGE(:1); END;
20 6 3.3 2.7 0.37 0.79 dmnntnpc9mnnq
SELECT SEV.ROWID FROM MGMT_SEVERITY SEV, (SELECT METRIC_GUID, KEY_VALUE, MAX(COL
LECTION_TIMESTAMP) MAX_COLL FROM MGMT_SEVERITY WHERE TARGET_GUID = :B3 AND SEVER
ITY_CODE = :B2 AND COLLECTION_TIMESTAMP < :B1 GROUP BY METRIC_GUID, KEY_VALUE) M
AX_CLR_SEV WHERE SEV.TARGET_GUID = :B3 AND SEV.METRIC_GUID = MAX_CLR_SEV.METRIC_
18 6 3.0 2.4 0.30 1.43 ayhsygk061114
Module: Oracle Enterprise Manager.rollup
SELECT METRIC_GUID, TRUNC(ROLLUP_TIMESTAMP, 'DD'), KEY_VALUE, SUM(SAMPLE_COUNT),
MIN(VALUE_MINIMUM), MAX(VALUE_MAXIMUM), SUM(SAMPLE_COUNT * VALUE_AVERAGE), SUM(
(POWER(VALUE_SDEV, 2) * (SAMPLE_COUNT-1)) + (SAMPLE_COUNT * POWER(VALUE_AVERAGE,
2))) FROM MGMT_METRICS_1HOUR WHERE TARGET_GUID = :B3 AND ROLLUP_TIMESTAMP >= (:
SQL ordered by Reads DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Total Disk Reads: 746
-> Captured SQL account for 67.0% of Total
Reads CPU Elapsed
Physical Reads Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
17 6 2.8 2.3 0.09 1.05 9d20p3d2jzftx
Module: Oracle Enterprise Manager.rollup
SELECT ROWID FROM MGMT_METRICS_1DAY WHERE TARGET_GUID = :B2 AND ROLLUP_TIMESTAMP
< :B1
16 6 2.7 2.1 0.25 0.44 6yk8483w5yskv
BEGIN EM_SEVERITY.STATELESS_SEVERITY_CLEAR(:1); END;
16 1 16.0 2.1 0.34 0.69 8m927vjw305ff
BEGIN ECM_CT.PURGE_HOST_CONFIGS(:1); END;
16 6 2.7 2.1 0.42 0.89 ba07n07ard6wp
BEGIN EMD_BCN_AVAIL.BEACON_AVAIL_LOG_PURGE(:1); END;
16 6 2.7 2.1 0.20 0.39 cppfgy7hj6a9v
SELECT TGT.TARGET_NAME, TGT.TARGET_TYPE, MET.METRIC_NAME, MET.METRIC_COLUMN, CSE
V.KEY_VALUE FROM MGMT_CURRENT_SEVERITY CSEV, MGMT_TARGETS TGT, MGMT_METRICS MET
WHERE CSEV.TARGET_GUID = :B4 AND CSEV.TARGET_GUID = TGT.TARGET_GUID AND CSEV.MET
RIC_GUID = MET.METRIC_GUID AND CSEV.SEVERITY_CODE IN (:B3 , :B2 ) AND CSEV.COLLE
15 1 15.0 2.0 338.67 361.45 2zwjrv2186835
Module: Oracle Enterprise Manager.rollup
DELETE FROM MGMT_METRICS_RAW WHERE ROWID = :B1
15 6 2.5 2.0 0.55 1.75 3hnvpbd4b6142
BEGIN EM_MASTER_AGENT.MASTER_AGENT_HIST_PURGE(:1); END;
14 1 14.0 1.9 0.12 1.53 2gx110xsm3fc2
Module: Oracle Enterprise Manager.rollup
INSERT INTO MGMT_METRICS_1DAY ( TARGET_GUID, METRIC_GUID, ROLLUP_TIMESTAMP, KEY_
VALUE, SAMPLE_COUNT, VALUE_AVERAGE, VALUE_MINIMUM, VALUE_MAXIMUM, VALUE_SDEV ) V
ALUES ( :B19 , :B1 , :B2 , :B3 , :B4 , DECODE(:B5 , 0, 0, :B6 /:B7 ), :B8 , :B9
, DECODE(:B10 , 0, 0, 1, 0, DECODE( SIGN(:B11 * :B12 - POWER(:B13 ,2)), -1,0, SQ
14 1 14.0 1.9 0.35 0.63 ajq92dy2g059g
BEGIN MGMT_ECM_POLICY.ECM_POLICY_ERROR_PURGE(:1); END;
13 1 13.0 1.7 0.23 0.50 b8x6jah3hpq7u
SELECT SNAPSHOT_GUID, SNAPSHOT_TYPE, TARGET_NAME FROM MGMT_ECM_GEN_SNAPSHOT WHER
E SAVED_TIMESTAMP < SYSDATE - :B5 AND (:B4 IS NULL OR :B4 = SNAPSHOT_TYPE) AND (
:B3 IS NULL OR :B3 = TARGET_NAME) AND TARGET_TYPE = :B2 AND IS_CURRENT = :B1
11 7 1.6 1.5 0.19 1.84 0z0uyqmj1qjsk
SELECT PP.POLICY_NAME, PP.POLICY_TYPE, PP.ROLLUP_PROC_NAME, PP.PURGE_PROC_NAME,
PP.POLICY_RETENTION_HOURS, PP.RETENTION_GROUP_NAME, TPD.CAN_ROLLUP_UPTO_TIME, TP
D.ROLLEDUP_UPTO_TIME, TPD.TARGET_RETENTION_HOURS FROM MGMT_PURGE_POLICY PP, MGMT
_PURGE_POLICY_TARGET_STATE TPD WHERE PP.POLICY_NAME = TPD.POLICY_NAME AND TPD.TA
11 141 0.1 1.5 0.22 1.43 96g93hntrzjtr
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_
size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, a
vgcln from hist_head$ where obj#=:1 and intcol#=:2
10 1 10.0 1.3 0.08 1.60 cpqs5k8jpd9kz
Module: Oracle Enterprise Manager.rollup
SELECT MT.TARGET_GUID, LAST_LOAD_TIME FROM MGMT_TARGETS MT, MGMT_TARGET_ROLLUP_T
IMES MTRT WHERE MTRT.ROLLUP_TABLE_NAME = :B1 AND MT.TARGET_GUID = MTRT.TARGET_GU
ID AND (TRUNC(MT.LAST_LOAD_TIME, 'HH24') > MTRT.ROLLUP_TIMESTAMP)
SQL ordered by Reads DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Total Disk Reads: 746
-> Captured SQL account for 67.0% of Total
Reads CPU Elapsed
Physical Reads Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
9 6 1.5 1.2 0.05 1.32 7681r92mnf326
SELECT AVL.ROWID FROM MGMT_AVAILABILITY AVL WHERE AVL.TARGET_GUID = :B1 AND AVL.
END_COLLECTION_TIMESTAMP < (SELECT MAX(END_COLLECTION_TIMESTAMP) FROM MGMT_AVAIL
ABILITY WHERE TARGET_GUID = :B1 AND CURRENT_STATUS = :B3 AND END_COLLECTION_TIME
STAMP <= :B2 )
9 6 1.5 1.2 0.09 1.37 dwc018us235p6
BEGIN EM_SEVERITY.AVAILABILITY_PURGE(:1); END;
8 1 8.0 1.1 0.06 0.91 0k3whfwtbc3aw
Module: Oracle Enterprise Manager.rollup
INSERT INTO MGMT_SYSTEM_ERROR_LOG (MODULE_NAME, OCCUR_DATE, ERROR_CODE, LOG_LEVE
L, ERROR_MSG, FACILITY, CLIENT_DATA, HOST_URL, EMD_URL) VALUES (:B9 , :B8 , :B7
, SUBSTR(:B6 ,1,16), SUBSTR(:B5 ,1.2048), SUBSTR(:B4 ,1,6), SUBSTR(:B3 ,1,128),
SUBSTR(:B2 ,1,256), SUBSTR(:B1 ,1,256))
-------------------------------------------------------------
SQL ordered by Executions DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Total Executions: 2,436
-> Captured SQL account for 41.6% of Total
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) SQL Id
------------ --------------- -------------- ---------- ----------- -------------
141 122 0.9 0.00 0.01 96g93hntrzjtr
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_
size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, a
vgcln from hist_head$ where obj#=:1 and intcol#=:2
134 1,919 14.3 0.00 0.03 db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 a
nd intcol#=:2 and row#=:3 order by bucket
89 114 1.3 0.00 0.00 53saa2zkr6wc3
select intcol#,nvl(pos#,0),col#,nvl(spare1,0) from ccol$ where con#=:1
59 59 1.0 0.00 0.00 2ym6hhaq30r73
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(lis
ts,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0),NVL(scanhint,0) fro
m seg$ where ts#=:1 and file#=:2 and block#=:3
38 81 2.1 0.00 0.00 6769wyy3yf66f
select pos#,intcol#,col#,spare1,bo#,spare2 from icol$ where obj#=:1
35 25 0.7 0.01 0.02 39m4sx9k63ba2
select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#,length,piece from idl_ub2$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
35 10 0.3 0.00 0.01 c6awqs517jpj0
select /*+ index(idl_char$ i_idl_char1) +*/ piece#,length,piece from idl_char$ w
here obj#=:1 and part=:2 and version=:3 order by piece#
35 89 2.5 0.01 0.07 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
35 44 1.3 0.00 0.01 ga9j9xk5cy9s0
select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece from idl_sb4$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
26 420 16.2 0.01 0.02 8swypbbr0m372
select order#,columns,types from access$ where d_obj#=:1
26 395 15.2 0.01 0.02 cqgv56fmuj63x
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and p_obj#=obj#
(+) order by order#
-------------------------------------------------------------
SQL ordered by Parse Calls DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Total Parse Calls: 882
-> Captured SQL account for 56.7% of Total
% Total
Parse Calls Executions Parses SQL Id
------------ ------------ --------- -------------
59 59 6.69 2ym6hhaq30r73
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(lis
ts,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0),NVL(scanhint,0) fro
m seg$ where ts#=:1 and file#=:2 and block#=:3
35 35 3.97 39m4sx9k63ba2
select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#,length,piece from idl_ub2$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
35 35 3.97 c6awqs517jpj0
select /*+ index(idl_char$ i_idl_char1) +*/ piece#,length,piece from idl_char$ w
here obj#=:1 and part=:2 and version=:3 order by piece#
35 35 3.97 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
35 35 3.97 ga9j9xk5cy9s0
select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece from idl_sb4$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
26 26 2.95 8swypbbr0m372
select order#,columns,types from access$ where d_obj#=:1
26 26 2.95 cqgv56fmuj63x
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and p_obj#=obj#
(+) order by order#
24 24 2.72 b1wc53ddd6h3p
select audit$,options from procedure$ where obj#=:1
17 17 1.93 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0
14 14 1.59 49s332uhbnsma
declare vsn varchar2(20); begin vsn :=
dbms_rcvman.getPackageVersion; :pkg_vsn:pkg_vsn_i := vsn;
if vsn is not null then :pkg_vsnub4 :=
to_number(substr(vsn,1,2) || substr(vsn,4,2) || s
10 24 1.13 83taa7kaw59c1
select name,intcol#,segcol#,type#,length,nvl(precision#,0),decode(type#,2,nvl(sc
ale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180,scale,181,scale,182,scale,183,s
cale,231,scale,0),null$,fixedstorage,nvl(deflength,0),default$,rowid,col#,proper
ty, nvl(charsetid,0),nvl(charsetform,0),spare1,spare2,nvl(spare3,0) from col$ wh
9 24 1.02 1gu8t96d0bdmu
select t.ts#,t.file#,t.block#,nvl(t.bobj#,0),nvl(t.tab#,0),t.intcols,nvl(t.cluco
ls,0),t.audit$,t.flags,t.pctfree$,t.pctused$,t.initrans,t.maxtrans,t.rowcnt,t.bl
kcnt,t.empcnt,t.avgspc,t.chncnt,t.avgrln,t.analyzetime,t.samplesize,t.cols,t.pro
perty,nvl(t.degree,1),nvl(t.instances,1),t.avgspc_flb,t.flbcnt,t.kernelcols,nvl(
9 89 1.02 53saa2zkr6wc3
select intcol#,nvl(pos#,0),col#,nvl(spare1,0) from ccol$ where con#=:1
-------------------------------------------------------------
SQL ordered by Sharable Memory DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
SQL ordered by Version Count DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 1097-1098
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
CPU used by this session 35,558 59.1 547.1
CPU used when call started 170 0.3 2.6
CR blocks created 2 0.0 0.0
DB time 63,074 104.8 970.4
DBWR checkpoint buffers written 538 0.9 8.3
DBWR transaction table writes 11 0.0 0.2
DBWR undo block writes 82 0.1 1.3
IMU CR rollbacks 0 0.0 0.0
IMU Flushes 5 0.0 0.1
IMU Redo allocation size 73,692 122.5 1,133.7
IMU commits 60 0.1 0.9
IMU contention 0 0.0 0.0
IMU undo allocation size 309,896 515.0 4,767.6
SQL*Net roundtrips to/from clien 0 0.0 0.0
active txn count during cleanout 66 0.1 1.0
background timeouts 2,332 3.9 35.9
buffer is not pinned count 8,882 14.8 136.7
buffer is pinned count 2,944 4.9 45.3
bytes received via SQL*Net from 0 0.0 0.0
bytes sent via SQL*Net to client 0 0.0 0.0
calls to get snapshot scn: kcmgs 4,805 8.0 73.9
calls to kcmgas 129 0.2 2.0
calls to kcmgcs 38 0.1 0.6
change write time 39 0.1 0.6
cleanout - number of ktugct call 82 0.1 1.3
cleanouts and rollbacks - consis 0 0.0 0.0
cleanouts only - consistent read 0 0.0 0.0
cluster key scan block gets 1,098 1.8 16.9
cluster key scans 348 0.6 5.4
commit batch/immediate performed 2 0.0 0.0
commit batch/immediate requested 2 0.0 0.0
commit cleanout failures: callba 11 0.0 0.2
commit cleanouts 493 0.8 7.6
commit cleanouts successfully co 482 0.8 7.4
commit immediate performed 2 0.0 0.0
commit immediate requested 2 0.0 0.0
commit txn count during cleanout 46 0.1 0.7
concurrency wait time 135 0.2 2.1
consistent changes 2 0.0 0.0
consistent gets 13,819 23.0 212.6
consistent gets - examination 5,923 9.8 91.1
consistent gets from cache 13,819 23.0 212.6
cursor authentications 12 0.0 0.2
data blocks consistent reads - u 2 0.0 0.0
db block changes 6,889 11.5 106.0
db block gets 6,332 10.5 97.4
db block gets direct 0 0.0 0.0
db block gets from cache 6,332 10.5 97.4
deferred (CURRENT) block cleanou 339 0.6 5.2
dirty buffers inspected 0 0.0 0.0
enqueue conversions 189 0.3 2.9
enqueue releases 5,505 9.2 84.7
enqueue requests 5,509 9.2 84.8
enqueue timeouts 4 0.0 0.1
enqueue waits 0 0.0 0.0
execute count 2,436 4.1 37.5
free buffer inspected 951 1.6 14.6
free buffer requested 1,028 1.7 15.8
heap block compress 4 0.0 0.1
hot buffers moved to head of LRU 1 0.0 0.0
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 1097-1098
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
immediate (CR) block cleanout ap 0 0.0 0.0
immediate (CURRENT) block cleano 106 0.2 1.6
index crx upgrade (positioned) 282 0.5 4.3
index fast full scans (full) 24 0.0 0.4
index fetch by key 2,653 4.4 40.8
index scans kdiixs1 1,456 2.4 22.4
leaf node 90-10 splits 8 0.0 0.1
leaf node splits 30 0.1 0.5
lob reads 0 0.0 0.0
lob writes 0 0.0 0.0
lob writes unaligned 0 0.0 0.0
logons cumulative 9 0.0 0.1
messages received 320 0.5 4.9
messages sent 320 0.5 4.9
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 6,877 11.4 105.8
opened cursors cumulative 1,333 2.2 20.5
parse count (failures) 0 0.0 0.0
parse count (hard) 64 0.1 1.0
parse count (total) 882 1.5 13.6
parse time cpu 46 0.1 0.7
parse time elapsed 77 0.1 1.2
physical read IO requests 409 0.7 6.3
physical read bytes 6,111,232 10,156.5 94,019.0
physical read total IO requests 2,092 3.5 32.2
physical read total bytes 325,857,280 541,556.5 5,013,188.9
physical read total multi block 356 0.6 5.5
physical reads 746 1.2 11.5
physical reads cache 746 1.2 11.5
physical reads cache prefetch 337 0.6 5.2
physical reads direct 0 0.0 0.0
physical reads direct temporary 0 0.0 0.0
physical reads prefetch warmup 181 0.3 2.8
physical write IO requests 364 0.6 5.6
physical write bytes 4,407,296 7,324.7 67,804.6
physical write total IO requests 1,061 1.8 16.3
physical write total bytes 16,143,872 26,830.2 248,367.3
physical write total multi block 121 0.2 1.9
physical writes 538 0.9 8.3
physical writes direct 0 0.0 0.0
physical writes direct (lob) 0 0.0 0.0
physical writes direct temporary 0 0.0 0.0
physical writes from cache 538 0.9 8.3
physical writes non checkpoint 215 0.4 3.3
recursive calls 21,906 36.4 337.0
recursive cpu usage 35,105 58.3 540.1
redo blocks written 3,809 6.3 58.6
redo entries 3,316 5.5 51.0
redo ordering marks 2 0.0 0.0
redo size 1,872,044 3,111.2 28,800.7
redo synch time 1 0.0 0.0
redo synch writes 209 0.4 3.2
redo wastage 22,136 36.8 340.6
redo write time 116 0.2 1.8
redo writes 92 0.2 1.4
rollback changes - undo records 665 1.1 10.2
rollbacks only - consistent read 2 0.0 0.0
rows fetched via callback 1,312 2.2 20.2
session cursor cache hits 918 1.5 14.1
session logical reads 20,151 33.5 310.0
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 1097-1098
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
session pga memory 22,320,632 37,095.6 343,394.3
session pga memory max 858,511,484 1,426,798.0 13,207,869.0
session uga memory 25,769,967,092 42,828,241.6 #############
session uga memory max 12,333,588 20,497.7 189,747.5
shared hash latch upgrades - no 402 0.7 6.2
sorts (disk) 0 0.0 0.0
sorts (memory) 789 1.3 12.1
sorts (rows) 21,851 36.3 336.2
sql area evicted 0 0.0 0.0
sql area purged 0 0.0 0.0
summed dirty queue length 0 0.0 0.0
switch current to new buffer 1 0.0 0.0
table fetch by rowid 3,742 6.2 57.6
table fetch continued row 56 0.1 0.9
table scan blocks gotten 637 1.1 9.8
table scan rows gotten 25,079 41.7 385.8
table scans (long tables) 0 0.0 0.0
table scans (short tables) 219 0.4 3.4
total number of times SMON poste 0 0.0 0.0
transaction rollbacks 2 0.0 0.0
undo change vector size 702,384 1,167.3 10,805.9
user I/O wait time 3,435 5.7 52.9
user calls 27 0.0 0.4
user commits 63 0.1 1.0
user rollbacks 2 0.0 0.0
workarea executions - onepass 0 0.0 0.0
workarea executions - optimal 413 0.7 6.4
write clones created in backgrou 1 0.0 0.0
-------------------------------------------------------------
Instance Activity Stats - Absolute ValuesDB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Statistics with absolute values (should not be diffed)
Statistic Begin Value End Value
-------------------------------- --------------- ---------------
session cursor cache count 1,244 1,393
opened cursors current 38 39
logons current 22 22
-------------------------------------------------------------
Instance Activity Stats - Thread Activity DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Statistics identified by '(derived)' come from sources other than SYSSTAT
Statistic Total per Hour
-------------------------------- ------------------ ---------
log switches (derived) 0 .00
-------------------------------------------------------------
Tablespace IO Stats DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> ordered by IOs (Reads + Writes) desc
Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
SYSAUX
159 0 148.5 2.9 180 0 0 0.0
SYSTEM
221 0 47.6 1.0 10 0 0 0.0
USERS
28 0 18.2 2.3 147 0 0 0.0
UNDOTBS1
0 0 0.0 .0 27 0 0 0.0
-------------------------------------------------------------
File IO Stats DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> ordered by Tablespace, File
Tablespace Filename
------------------------ ----------------------------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
SYSAUX +DATA_1/ivrs/datafile/sysaux.258.652821943
159 0 148.5 2.9 180 0 0 0.0
SYSTEM +DATA_1/ivrs/datafile/system.267.652821909
221 0 47.6 1.0 10 0 0 0.0
UNDOTBS1 +DATA_1/ivrs/datafile/undotbs1.257.652821933
0 0 N/A N/A 27 0 0 0.0
USERS +DATA_1/ivrs/datafile/users.263.652821963
16 0 19.4 2.3 78 0 0 0.0
USERS +DATA_1/ivrs/datafile/users02.dbf
12 0 16.7 2.3 69 0 0 0.0
-------------------------------------------------------------
Buffer Pool Statistics DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Writ Buffer
Number of Pool Buffer Physical Physical Buff Comp Busy
P Buffers Hit% Gets Reads Writes Wait Wait Waits
--- ---------- ---- -------------- ------------ ----------- ---- ---- ----------
D 19,461 96 20,162 745 538 0 0 0
-------------------------------------------------------------
Instance Recovery Stats DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- ------------
B 0 45 389 1709 8714 184320 8714 N/A
E 0 46 573 2632 9712 184320 9712 N/A
-------------------------------------------------------------
Buffer Pool Advisory DB/Inst: IVRS/ivrs Snap: 1098
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate
Est
Phys
Size for Size Buffers for Read Estimated
P Est (M) Factor Estimate Factor Physical Reads
--- -------- ------ ---------------- ------ ------------------
D 12 .1 1,497 1.0 22,190
D 24 .2 2,994 1.0 21,718
D 36 .2 4,491 1.0 21,380
D 48 .3 5,988 1.0 21,318
D 60 .4 7,485 1.0 21,286
D 72 .5 8,982 1.0 21,234
D 84 .5 10,479 1.0 21,208
D 96 .6 11,976 1.0 21,202
D 108 .7 13,473 1.0 21,202
D 120 .8 14,970 1.0 21,202
D 132 .8 16,467 1.0 21,202
D 144 .9 17,964 1.0 21,202
D 156 1.0 19,461 1.0 21,202
D 168 1.1 20,958 1.0 21,202
D 180 1.2 22,455 1.0 21,202
D 192 1.2 23,952 1.0 21,202
D 204 1.3 25,449 1.0 21,202
D 216 1.4 26,946 1.0 21,202
D 228 1.5 28,443 1.0 21,202
D 240 1.5 29,940 1.0 21,202
-------------------------------------------------------------
PGA Aggr Summary DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ------------------ --------------------------
100.0 29 0
-------------------------------------------------------------
Warning: pga_aggregate_target was set too low for current workload, as this
value was exceeded during this interval. Use the PGA Advisory view
to help identify a different value for pga_aggregate_target.
PGA Aggr Target Stats DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> B: Begin snap E: End snap (rows identified with B or E contain data
which is absolute i.e. not diffed over the interval)
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
- ---------- ---------- ---------- ---------- ------ ------ ------ ----------
B 103 48 111.5 0.0 .0 .0 .0 21,094
E 103 49 106.6 0.0 .0 .0 .0 21,094
-------------------------------------------------------------
PGA Aggr Target Histogram DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Optimal Executions are purely in-memory operations
Low High
Optimal Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- -------------- ------------ ------------
2K 4K 371 371 0 0
64K 128K 4 4 0 0
512K 1024K 38 38 0 0
-------------------------------------------------------------
PGA Memory Advisory DB/Inst: IVRS/ivrs Snap: 1098
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- ---------------- -------- ----------
13 0.1 217.6 325.5 40.0 17
26 0.3 217.6 325.5 40.0 17
52 0.5 217.6 325.5 40.0 17
77 0.8 217.6 310.6 41.0 5
103 1.0 217.6 5.3 98.0 2
124 1.2 217.6 0.0 100.0 2
144 1.4 217.6 0.0 100.0 2
165 1.6 217.6 0.0 100.0 2
185 1.8 217.6 0.0 100.0 2
206 2.0 217.6 0.0 100.0 2
309 3.0 217.6 0.0 100.0 0
412 4.0 217.6 0.0 100.0 0
618 6.0 217.6 0.0 100.0 0
824 8.0 217.6 0.0 100.0 0
-------------------------------------------------------------
Shared Pool Advisory DB/Inst: IVRS/ivrs Snap: 1098
-> SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
in the Library Cache, and the physical number of memory objects associated
with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid.
Est LC Est LC Est LC Est LC
Shared SP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
56 .4 19 2,382 3,205 1.0 287 1.3 24,247
72 .5 31 4,300 3,263 1.0 229 1.0 24,447
88 .6 31 4,300 3,263 1.0 229 1.0 24,447
104 .8 31 4,300 3,263 1.0 229 1.0 24,447
120 .9 31 4,300 3,263 1.0 229 1.0 24,447
136 1.0 31 4,300 3,263 1.0 229 1.0 24,447
152 1.1 31 4,300 3,263 1.0 229 1.0 24,447
168 1.2 31 4,300 3,263 1.0 229 1.0 24,447
184 1.4 31 4,300 3,263 1.0 229 1.0 24,447
200 1.5 31 4,300 3,263 1.0 229 1.0 24,447
216 1.6 31 4,300 3,263 1.0 229 1.0 24,447
232 1.7 31 4,300 3,263 1.0 229 1.0 24,447
248 1.8 31 4,300 3,263 1.0 229 1.0 24,447
264 1.9 31 4,300 3,263 1.0 229 1.0 24,447
280 2.1 31 4,300 3,263 1.0 229 1.0 24,447
-------------------------------------------------------------
SGA Target Advisory DB/Inst: IVRS/ivrs Snap: 1098
SGA Target SGA Size Est DB Est Physical
Size (M) Factor Time (s) Reads
---------- ---------- ------------ ----------------
156 0.5 1,627 21,270
234 0.8 1,624 21,185
312 1.0 1,624 21,185
390 1.3 1,624 21,185
468 1.5 1,624 21,185
546 1.8 1,624 21,185
624 2.0 1,624 21,185
-------------------------------------------------------------
Streams Pool Advisory DB/Inst: IVRS/ivrs Snap: 1098
Size for Size Est Spill Est Spill Est Unspill Est Unspill
Est (MB) Factor Count Time (s) Count Time (s)
---------- --------- ----------- ----------- ----------- -----------
4 1.0 0 0 0 0
8 2.0 0 0 0 0
12 3.0 0 0 0 0
16 4.0 0 0 0 0
20 5.0 0 0 0 0
24 6.0 0 0 0 0
28 7.0 0 0 0 0
32 8.0 0 0 0 0
36 9.0 0 0 0 0
40 10.0 0 0 0 0
44 11.0 0 0 0 0
48 12.0 0 0 0 0
52 13.0 0 0 0 0
56 14.0 0 0 0 0
60 15.0 0 0 0 0
64 16.0 0 0 0 0
68 17.0 0 0 0 0
72 18.0 0 0 0 0
76 19.0 0 0 0 0
80 20.0 0 0 0 0
-------------------------------------------------------------
Java Pool Advisory DB/Inst: IVRS/ivrs Snap: 1098
Est LC Est LC Est LC Est LC
Java JP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
8 1.0 3 171 72 1.0 229 1.0 105
12 1.5 3 171 72 1.0 229 1.0 105
16 2.0 3 171 72 1.0 229 1.0 105
-------------------------------------------------------------
Buffer Wait Statistics DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
Enqueue Activity DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
Undo Segment Summary DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Num Undo Number of Max Qry Max Tx Min/Max STO/ uS/uR/uU/
TS# Blocks (K) Transactions Len (s) Concurcy TR (mins) OOS eS/eR/eU
---- ---------- --------------- -------- -------- --------- ----- --------------
1 .2 377 233 3 15/17.9 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Undo Segment Stats DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Most recent 35 Undostat rows, ordered by Time desc
Num Undo Number of Max Qry Max Tx Tun Ret STO/ uS/uR/uU/
End Time Blocks Transactions Len (s) Concy (mins) OOS eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- ------------
11-May 00:18 95 269 0 3 15 0/0 0/0/0/0/0/0
11-May 00:08 107 108 233 3 18 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Latch Activity DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Name Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
ASM allocation 172 0.0 N/A 0 0 N/A
ASM db client latch 394 0.0 N/A 0 0 N/A
ASM map headers 72 0.0 N/A 0 0 N/A
ASM map load waiting lis 12 0.0 N/A 0 0 N/A
ASM map operation freeli 142 0.0 N/A 0 0 N/A
ASM map operation hash t 36,078 0.0 N/A 0 0 N/A
ASM network background l 307 0.0 N/A 0 0 N/A
AWR Alerted Metric Eleme 2,219 0.0 N/A 0 0 N/A
Consistent RBA 92 0.0 N/A 0 0 N/A
FAL request queue 12 0.0 N/A 0 0 N/A
FAL subheap alocation 12 0.0 N/A 0 0 N/A
FIB s.o chain latch 24 0.0 N/A 0 0 N/A
FOB s.o list latch 78 0.0 N/A 0 0 N/A
In memory undo latch 1,011 0.0 N/A 0 221 0.0
JS mem alloc latch 2 0.0 N/A 0 0 N/A
JS queue access latch 2 0.0 N/A 0 0 N/A
JS queue state obj latch 4,320 0.0 N/A 0 0 N/A
JS slv state obj latch 3 0.0 N/A 0 0 N/A
KFK SGA context latch 296 0.0 N/A 0 0 N/A
KFMD SGA 4 0.0 N/A 0 0 N/A
KMG MMAN ready and start 201 0.0 N/A 0 0 N/A
KTF sga latch 1 0.0 N/A 0 191 0.0
KWQP Prop Status 3 0.0 N/A 0 0 N/A
MQL Tracking Latch 0 N/A N/A 0 12 0.0
Memory Management Latch 0 N/A N/A 0 201 0.0
OS process 27 0.0 N/A 0 0 N/A
OS process allocation 219 0.0 N/A 0 0 N/A
OS process: request allo 6 0.0 N/A 0 0 N/A
PL/SQL warning settings 77 0.0 N/A 0 0 N/A
SGA IO buffer pool latch 477 0.0 N/A 0 545 0.0
SQL memory manager latch 2 0.0 N/A 0 197 0.0
SQL memory manager worka 13,585 0.0 N/A 0 0 N/A
Shared B-Tree 28 0.0 N/A 0 0 N/A
active checkpoint queue 437 0.0 N/A 0 0 N/A
active service list 1,242 0.0 N/A 0 203 0.0
archive control 13 0.0 N/A 0 0 N/A
archive process latch 209 0.0 N/A 0 0 N/A
cache buffer handles 138 0.0 N/A 0 0 N/A
cache buffers chains 52,014 0.0 N/A 0 1,641 0.0
cache buffers lru chain 2,515 0.0 N/A 0 204 0.0
cache table scan latch 0 N/A N/A 0 32 0.0
channel handle pool latc 6 0.0 N/A 0 0 N/A
channel operations paren 2,802 0.0 N/A 0 0 N/A
checkpoint queue latch 5,353 0.0 N/A 0 656 0.0
client/application info 206 0.0 N/A 0 0 N/A
compile environment latc 20 0.0 N/A 0 0 N/A
dml lock allocation 570 0.0 N/A 0 0 N/A
dummy allocation 18 0.0 N/A 0 0 N/A
enqueue hash chains 11,205 0.0 N/A 0 0 N/A
enqueues 10,239 0.0 N/A 0 0 N/A
event group latch 3 0.0 N/A 0 0 N/A
file cache latch 35 0.0 N/A 0 0 N/A
hash table column usage 0 N/A N/A 0 2,354 0.0
hash table modification 5 0.0 N/A 0 0 N/A
job workq parent latch 0 N/A N/A 0 12 0.0
job_queue_processes para 16 0.0 N/A 0 0 N/A
kks stats 122 0.0 N/A 0 0 N/A
ksuosstats global area 44 0.0 N/A 0 0 N/A
ksv instance 2 0.0 N/A 0 0 N/A
ktm global data 2 0.0 N/A 0 0 N/A
Latch Activity DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Name Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
kwqbsn:qsga 27 0.0 N/A 0 0 N/A
lgwr LWN SCN 259 0.0 N/A 0 0 N/A
library cache 6,841 0.0 N/A 0 0 N/A
library cache load lock 234 0.0 N/A 0 0 N/A
library cache lock 3,670 0.0 N/A 0 0 N/A
library cache lock alloc 209 0.0 N/A 0 0 N/A
library cache pin 1,749 0.0 N/A 0 0 N/A
library cache pin alloca 50 0.0 N/A 0 0 N/A
list of block allocation 32 0.0 N/A 0 0 N/A
logminer context allocat 1 0.0 N/A 0 0 N/A
messages 5,381 0.0 N/A 0 0 N/A
mostly latch-free SCN 259 0.0 N/A 0 0 N/A
msg queue 24 0.0 N/A 0 24 0.0
multiblock read objects 180 0.0 N/A 0 6 0.0
ncodef allocation latch 10 0.0 N/A 0 0 N/A
object queue header heap 474 0.0 N/A 0 0 N/A
object queue header oper 4,340 0.0 N/A 0 0 N/A
object stats modificatio 38 0.0 N/A 0 2 0.0
parallel query alloc buf 80 0.0 N/A 0 0 N/A
parameter table allocati 9 0.0 N/A 0 0 N/A
post/wait queue 12 0.0 N/A 0 6 0.0
process allocation 6 0.0 N/A 0 3 0.0
process group creation 6 0.0 N/A 0 0 N/A
qmn task queue latch 88 0.0 N/A 0 0 N/A
redo allocation 979 0.0 N/A 0 3,317 0.0
redo copy 0 N/A N/A 0 3,315 0.0
redo writing 1,129 0.0 N/A 0 0 N/A
resmgr group change latc 43 0.0 N/A 0 0 N/A
resmgr:actses active lis 16 0.0 N/A 0 0 N/A
resmgr:actses change gro 6 0.0 N/A 0 0 N/A
resmgr:free threads list 14 0.0 N/A 0 0 N/A
resmgr:schema config 2 0.0 N/A 0 0 N/A
row cache objects 14,060 0.0 N/A 0 0 N/A
rules engine rule set st 200 0.0 N/A 0 0 N/A
segmented array pool 24 0.0 N/A 0 0 N/A
sequence cache 6 0.0 N/A 0 0 N/A
session allocation 30,961 0.0 N/A 0 0 N/A
session idle bit 65 0.0 N/A 0 0 N/A
session state list latch 26 0.0 N/A 0 0 N/A
session switching 10 0.0 N/A 0 0 N/A
session timer 203 0.0 N/A 0 0 N/A
shared pool 9,054 0.0 N/A 0 0 N/A
shared pool sim alloc 6 0.0 N/A 0 0 N/A
shared pool simulator 781 0.0 N/A 0 0 N/A
simulator hash latch 1,794 0.0 N/A 0 0 N/A
simulator lru latch 1,599 0.0 N/A 0 73 0.0
slave class 75 0.0 N/A 0 0 N/A
slave class create 56 1.8 1.0 0 0 N/A
sort extent pool 15 0.0 N/A 0 0 N/A
state object free list 2 0.0 N/A 0 0 N/A
statistics aggregation 140 0.0 N/A 0 0 N/A
threshold alerts latch 65 0.0 N/A 0 0 N/A
transaction allocation 22 0.0 N/A 0 0 N/A
transaction branch alloc 10 0.0 N/A 0 0 N/A
undo global data 686 0.0 N/A 0 0 N/A
user lock 12 0.0 N/A 0 0 N/A
-------------------------------------------------------------
Latch Sleep Breakdown DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> ordered by misses desc
Latch Name
----------------------------------------
Get Requests Misses Sleeps Spin Gets Sleep1 Sleep2 Sleep3
-------------- ----------- ----------- ---------- -------- -------- --------
slave class create
56 1 1 0 0 0 0
-------------------------------------------------------------
Latch Miss Sources DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
slave class create ksvcreate 0 1 0
-------------------------------------------------------------
Parent Latch Statistics DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
Child Latch Statistics DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Logical Reads DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Total Logical Reads: 20,151
-> Captured Segments account for 63.2% of Total
Tablespace Subobject Obj. Logical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYSMAN SYSAUX MGMT_METRICS_RAW_PK INDEX 1,888 9.37
SYSMAN SYSAUX MGMT_SYSTEM_PERF_LOG INDEX 880 4.37
SYS SYSTEM ACCESS$ TABLE 672 3.33
SYS SYSTEM SYS_C00648 INDEX 656 3.26
SYS SYSTEM I_OBJ1 INDEX 640 3.18
-------------------------------------------------------------
Segments by Physical Reads DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Total Physical Reads: 746
-> Captured Segments account for 60.1% of Total
Tablespace Subobject Obj. Physical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS SYSTEM IDL_UB1$ TABLE 97 13.00
SYSMAN SYSAUX MGMT_METRICS TABLE 84 11.26
SYSMAN SYSAUX MGMT_METRICS_RAW_PK INDEX 75 10.05
SYSMAN SYSAUX MGMT_METRICS_1HOUR_P INDEX 37 4.96
SYSMAN SYSAUX MGMT_METRICS_1DAY_PK INDEX 29 3.89
-------------------------------------------------------------
Segments by Row Lock Waits DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
Segments by ITL Waits DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Buffer Busy Waits DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
Dictionary Cache Stats DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> "Pct Misses" should be very low (< 2% in most cases)
-> "Final Usage" is the number of cache entries being used
Get Pct Scan Pct Mod Final
Cache Requests Miss Reqs Miss Reqs Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control 13 0.0 0 N/A 2 1
dc_global_oids 179 0.6 0 N/A 0 33
dc_histogram_data 573 23.4 0 N/A 0 1,503
dc_histogram_defs 449 31.4 0 N/A 0 3,627
dc_object_ids 900 5.8 0 N/A 0 1,062
dc_objects 421 22.6 0 N/A 0 1,854
dc_profiles 6 0.0 0 N/A 0 1
dc_rollback_segments 78 0.0 0 N/A 0 16
dc_segments 210 28.1 0 N/A 2 949
dc_tablespace_quotas 4 50.0 0 N/A 0 2
dc_tablespaces 968 0.0 0 N/A 0 12
dc_usernames 248 0.0 0 N/A 0 16
dc_users 871 0.0 0 N/A 0 23
outstanding_alerts 26 0.0 0 N/A 0 25
-------------------------------------------------------------
Library Cache Activity DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> "Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 82 14.6 155 7.7 0 0
CLUSTER 2 0.0 3 0.0 0 0
SQL AREA 474 93.2 3,043 6.1 0 0
TABLE/PROCEDURE 470 28.9 566 18.6 0 0
TRIGGER 8 0.0 39 0.0 0 0
-------------------------------------------------------------
Process Memory Summary DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> B: Begin snap E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
-> Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> ordered by Begin/End snapshot, Alloc (MB) desc
Hist
Avg Std Dev Max Max
Alloc Used Alloc Alloc Alloc Alloc Num Num
Category (MB) (MB) (MB) (MB) (MB) (MB) Proc Alloc
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B Other 103.6 N/A 4.3 7.4 24 24 24 24
Freeable 7.4 .0 .7 .5 2 N/A 10 10
SQL .4 .2 .0 .0 0 2 10 7
PL/SQL .1 .0 .0 .0 0 0 22 22
JAVA .0 .0 .0 .0 0 1 1 1
E Other 102.3 N/A 4.3 7.5 24 25 24 24
Freeable 3.8 .0 .6 .5 1 N/A 6 6
SQL .4 .2 .0 .0 0 2 10 7
PL/SQL .1 .0 .0 .0 0 0 22 22
-------------------------------------------------------------
SGA Memory Summary DB/Inst: IVRS/ivrs Snaps: 1097-1098
End Size (Bytes)
SGA regions Begin Size (Bytes) (if different)
------------------------------ ------------------- -------------------
Database Buffers 163,577,856
Fixed Size 1,261,612
Redo Buffers 2,928,640
Variable Size 159,387,604
-------------------
sum 327,155,712
-------------------------------------------------------------
SGA breakdown difference DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> ordered by Pool, Name
-> N/A value for Begin MB or End MB indicates the size of that Pool/Name was
insignificant, or zero in that snapshot
Pool Name Begin MB End MB % Diff
------ ------------------------------ -------------- -------------- -------
java free memory 2.7 2.7 0.00
java joxlod exec hp 5.1 5.1 0.00
java joxs heap .2 .2 0.00
large ASM map operations hashta .2 .2 0.00
large CTWR dba buffer .4 .4 0.00
large PX msg pool .2 .2 0.00
large free memory 1.2 1.2 0.00
large krcc extent chunk 2.0 2.0 0.00
shared ASH buffers 2.0 2.0 0.00
shared CCursor 2.7 2.9 6.81
shared Heap0: KGL 2.3 2.4 5.09
shared KCB Table Scan Buffer 3.8 3.8 0.00
shared KGLS heap 5.6 5.9 5.39
shared KQR M PO 3.1 3.2 4.58
shared KSFD SGA I/O b 3.8 3.8 0.00
shared PCursor 1.7 1.9 7.03
shared PL/SQL DIANA 1.4 1.6 17.41
shared PL/SQL MPCODE 2.6 3.1 21.93
shared free memory 61.2 58.5 -4.51
shared kglsim hash table bkts 2.0 2.0 0.00
shared library cache 4.4 4.7 7.13
shared row cache 3.6 3.6 0.00
shared sql area 10.5 11.0 4.62
stream free memory 4.0 4.0 0.00
buffer_cache 156.0 156.0 0.00
fixed_sga 1.2 1.2 0.00
log_buffer 2.8 2.8 0.00
-------------------------------------------------------------
Streams CPU/IO Usage DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Streams processes ordered by CPU usage
-> CPU and I/O Time in micro seconds
Session Type CPU Time User I/O Time Sys I/O Time
------------------------- -------------- -------------- --------------
QMON Slaves 37,247 0 0
QMON Coordinator 36,837 0 0
-------------------------------------------------------------
Streams Capture DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
Streams Apply DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Queues DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Subscribers DB/Inst: IVRS/ivrs Snaps: 1097-1098
No data exists for this section of the report.
-------------------------------------------------------------
Rule Set DB/Inst: IVRS/ivrs Snaps: 1097-1098
-> Rule Sets ordered by Evaluations
Fast SQL CPU Elapsed
Ruleset Name Evals Evals Execs Time Time
------------------------- -------- -------- -------- -------- --------
SYS.ALERT_QUE_R 0 0 0 0 0
-------------------------------------------------------------
Resource Limit Stats DB/Inst: IVRS/ivrs Snap: 1098
No data exists for this section of the report.
-------------------------------------------------------------
init.ora Parameters DB/Inst: IVRS/ivrs Snaps: 1097-1098
End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
audit_file_dest /oracle/app/oracle/admin/ivrs/adu
audit_sys_operations TRUE
background_dump_dest /oracle/app/oracle/admin/ivrs/bdu
compatible 10.2.0.3.0
control_files +DATA_1/ivrs/control01.ctl, +DATA
core_dump_dest /oracle/app/oracle/admin/ivrs/cdu
db_block_size 8192
db_domain karl.com
db_file_multiblock_read_count 16
db_name ivrs
db_recovery_file_dest /flash_reco/flash_recovery_area
db_recovery_file_dest_size 161061273600
dispatchers (PROTOCOL=TCP) (SERVICE=ivrsXDB)
job_queue_processes 10
log_archive_dest_1 LOCATION=USE_DB_RECOVERY_FILE_DES
log_archive_format ivrs_%t_%s_%r.arc
open_cursors 300
os_authent_prefix
os_roles FALSE
pga_aggregate_target 108003328
processes 150
recyclebin OFF
remote_login_passwordfile EXCLUSIVE
remote_os_authent FALSE
remote_os_roles FALSE
sga_target 327155712
spfile +DATA_1/ivrs/spfileivrs.ora
sql92_security TRUE
statistics_level TYPICAL
undo_management AUTO
undo_tablespace UNDOTBS1
user_dump_dest /oracle/app/oracle/admin/ivrs/udu
-------------------------------------------------------------
End of Report
Report written to awrrpt_1_1097_1098.txt
}}}
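As a sanity check, the headline figures in the report (e.g. DB time 809.4s and DB CPU 697.3s in the Time Model section) can be reproduced straight from the AWR history views without re-running awrrpt.sql. A minimal sketch, assuming SELECT access on DBA_HIST_SYS_TIME_MODEL and that snapshots 1097-1098 are still within the AWR retention window:
{{{
-- diff the cumulative time model stats between the two snapshots;
-- VALUE is in microseconds, so divide by 1e6 to get seconds
select e.stat_name,
       round((e.value - b.value)/1e6, 1) seconds
  from dba_hist_sys_time_model b,
       dba_hist_sys_time_model e
 where b.snap_id = 1097
   and e.snap_id = 1098
   and b.dbid = e.dbid
   and b.instance_number = e.instance_number
   and b.stat_id = e.stat_id
   and e.value > b.value
 order by seconds desc;
}}}
The top rows should line up with the "Time Model Statistics" section above ("sql execute elapsed time", "DB CPU", and so on).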
! The ASH report for SNAP_ID 1097
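For reference, this is the standard text ASH report produced by the ashrpt.sql script shipped under $ORACLE_HOME/rdbms/admin; a typical invocation from SQL*Plus as a DBA user looks like the sketch below (the prompt wording varies slightly by version):
{{{
SQL> @?/rdbms/admin/ashrpt.sql
-- the script prompts for the report type (text/html), a begin time
-- (an explicit timestamp such as 11-May-10 00:00:00, or -10 for
-- "10 minutes ago"), a duration, and a report name; the answers
-- summarized below produced ashrpt_1_0511_0010.txt
}}}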
{{{
Summary of All User Input
-------------------------
Format : TEXT
DB Id : 2607950532
Inst num : 1
Begin time : 11-May-10 00:00:00
End time : 11-May-10 00:10:00
Slot width : Default
Report targets : 0
Report name : ashrpt_1_0511_0010.txt
ASH Report For IVRS/ivrs
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
CPUs SGA Size Buffer Cache Shared Pool ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
1 312M (100%) 156M (50.0%) 111M (35.5%) 2.0M (0.6%)
Analysis Begin Time: 11-May-10 00:00:00
Analysis End Time: 11-May-10 00:10:00
Elapsed Time: 10.0 (mins)
Sample Count: 44
Average Active Sessions: 0.73
Avg. Active Session per CPU: 0.73
Report Target: None specified
Top User Events DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU CPU 86.36 0.63
db file sequential read User I/O 4.55 0.03
log file sequential read System I/O 2.27 0.02
-------------------------------------------------------------
Top Background Events DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
change tracking file synchronous wr Other 2.27 0.02
db file parallel write System I/O 2.27 0.02
log file parallel write System I/O 2.27 0.02
-------------------------------------------------------------
Top Event P1/P2/P3 Values DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
Event % Event P1 Value, P2 Value, P3 Value % Activity
------------------------------ ------- ----------------------------- ----------
Parameter 1 Parameter 2 Parameter 3
-------------------------- -------------------------- --------------------------
db file sequential read 4.55 "1","56086","1" 2.27
file# block# blocks
"1","60831","1" 2.27
change tracking file synchrono 2.27 "18","1","0" 2.27
block# blocks NOT DEFINED
db file parallel write 2.27 "1","0","2147483647" 2.27
requests interrupt timeout
log file parallel write 2.27 "1","2","1" 2.27
files blocks requests
log file sequential read 2.27 "0","16386","8192" 2.27
log# block# blocks
-------------------------------------------------------------
Top Service/Module DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
Service Module % Activity Action % Action
-------------- ------------------------ ---------- ------------------ ----------
SYS$USERS Oracle Enterprise Manage 88.64 target 1 86.36
target 6 2.27
SYS$BACKGROUND UNNAMED 6.82 UNNAMED 6.82
SYS$USERS UNNAMED 4.55 UNNAMED 4.55
-------------------------------------------------------------
Top Client IDs DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL Command Types DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL Statements DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL using literals DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top PL/SQL Procedures DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
-> 'PL/SQL entry subprogram' represents the application's top-level
entry-point(procedure, function, trigger, package initialization
or RPC call) into PL/SQL.
-> 'PL/SQL current subprogram' is the pl/sql subprogram being executed
at the point of sampling . If the value is 'SQL', it represents
the percentage of time spent executing SQL for the particular
plsql entry subprogram
PLSQL Entry Subprogram % Activity
----------------------------------------------------------------- ----------
PLSQL Current Subprogram % Current
----------------------------------------------------------------- ----------
SYSMAN.EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS 93.18
SYSMAN.EMD_LOADER.EMD_RAW_PURGE 84.09
SQL 6.82
SYSMAN.MGMT_LOG.LOG_PERFORMANCE 2.27
-------------------------------------------------------------
Top Sessions DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
-> '# Samples Active' shows the number of ASH samples in which the session
was found waiting for that particular event. The percentage shown
in this column is calculated with respect to wall clock time
and not total database activity.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
when the session was waiting for that particular event
-> For sessions running Parallel Queries, this section will NOT aggregate
the PQ slave activity into the session issuing the PQ. Refer to
the 'Top Sessions running PQs' section for such statistics.
Sid, Serial# % Activity Event % Event
--------------- ---------- ------------------------------ ----------
User Program # Samples Active XIDs
-------------------- ------------------------------ ------------------ --------
138, 103 93.18 CPU + Wait for CPU 86.36
SYSMAN oracle@dbrocai...el.com (J000) 38/60 [ 63%] 1
db file sequential read 4.55
2/60 [ 3%] 0
log file sequential read 2.27
1/60 [ 2%] 1
149, 1 2.27 change tracking file synchrono 2.27
SYS oracle@dbrocai...el.com (CTWR) 1/60 [ 2%] 0
166, 1 2.27 log file parallel write 2.27
SYS oracle@dbrocai...el.com (LGWR) 1/60 [ 2%] 0
167, 1 2.27 db file parallel write 2.27
SYS oracle@dbrocai...el.com (DBW0) 1/60 [ 2%] 0
-------------------------------------------------------------
Top Blocking Sessions DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top Sessions running PQs DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top DB Objects DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
-> With respect to Application, Cluster, User I/O and buffer busy waits only.
Object ID % Activity Event % Event
--------------- ---------- ------------------------------ ----------
Object Name (Type) Tablespace
----------------------------------------------------- -------------------------
335 2.27 db file sequential read 2.27
SYS.KOTTD$ (TABLE) SYSTEM
-------------------------------------------------------------
Top DB Files DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
-> With respect to Cluster and User I/O events only.
File ID % Activity Event % Event
--------------- ---------- ------------------------------ ----------
File Name Tablespace
----------------------------------------------------- -------------------------
1 2.27 db file sequential read 2.27
+DATA_1/ivrs/datafile/system.267.652821909 SYSTEM
-------------------------------------------------------------
Top Latches DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
No data exists for this section of the report.
-------------------------------------------------------------
Activity Over Time DB/Inst: IVRS/ivrs (May 11 00:00 to 00:10)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period
Slot Event
Slot Time (Duration) Count Event Count % Event
-------------------- -------- ------------------------------ -------- -------
00:00:00 (5.0 min) 26 CPU + Wait for CPU 26 59.09
00:05:00 (5.0 min) 18 CPU + Wait for CPU 12 27.27
db file sequential read 2 4.55
change tracking file synchrono 1 2.27
-------------------------------------------------------------
End of Report
Report written to ashrpt_1_0511_0010.txt
}}}
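Same deal for the ASH report: ashrpt.sql is the interactive route, but DBMS_WORKLOAD_REPOSITORY.ASH_REPORT_TEXT takes an arbitrary begin/end time, which is handy when you want the exact 10-minute window instead of answering prompts. A minimal sketch using the analysis window from the report above:
{{{
-- non-interactive ASH text report for the 00:00-00:10 window on 11-May-10
-- DBID 2607950532 and instance 1 are from the report header above
SET LONG 1000000 PAGESIZE 0 LINESIZE 200 TRIMSPOOL ON
SPOOL ashrpt_1_0511_0010.txt
SELECT output
  FROM TABLE(DBMS_WORKLOAD_REPOSITORY.ash_report_text(
               l_dbid     => 2607950532,
               l_inst_num => 1,
               l_btime    => TO_DATE('11-MAY-2010 00:00:00','DD-MON-YYYY HH24:MI:SS'),
               l_etime    => TO_DATE('11-MAY-2010 00:10:00','DD-MON-YYYY HH24:MI:SS')));
SPOOL OFF
}}}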
! The AWR report for SNAP_ID 1103
{{{
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 1103 11-May-10 01:00:43 23 3.2
End Snap: 1104 11-May-10 01:10:45 22 1.8
Elapsed: 10.03 (mins)
DB Time: 14.62 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
---------- ----------
Buffer Cache: 156M 156M Std Block Size: 8K
Shared Pool Size: 136M 136M Log Buffer: 2,860K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 2,391.64 27,691.31
Logical reads: 16.26 188.29
Block changes: 6.34 73.42
Physical reads: 0.16 1.87
Physical writes: 1.02 11.81
User calls: 0.04 0.42
Parses: 0.60 6.96
Hard parses: 0.00 0.00
Sorts: 0.52 6.04
Logons: 0.01 0.13
Executes: 2.05 23.77
Transactions: 0.09
% Blocks changed per Read: 38.99 Recursive Call %: 99.61
Rollback per transaction %: 3.85 Rows per Sort: 61.24
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.01 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 100.00
Execute to Parse %: 70.71 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 122.22 % Non-Parse CPU: 99.99
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 59.76 59.76
% SQL with executions>1: 81.59 81.29
% Memory for SQL w/exec>1: 78.40 78.55
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
CPU time 833 95.0
control file parallel write 193 22 112 2.5 System I/O
db file parallel write 255 17 67 1.9 System I/O
db file scattered read 18 3 151 0.3 User I/O
change tracking file synchrono 10 3 250 0.3 Other
-------------------------------------------------------------
Time Model Statistics DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Total time in database user-calls (DB Time): 877.3s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 879.8 100.3
DB CPU 833.1 95.0
PL/SQL execution elapsed time 0.4 .0
parse time elapsed 0.2 .0
repeated bind elapsed time 0.0 .0
DB time 877.3 N/A
background elapsed time 63.6 N/A
background cpu time 12.0 N/A
-------------------------------------------------------------
Wait Class DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
-------------------- ---------------- ------ ---------------- ------- ---------
System I/O 1,867 .0 42 23 35.9
Other 21 .0 4 173 0.4
User I/O 43 .0 3 71 0.8
Concurrency 3 33.3 2 522 0.1
Commit 1 .0 0 37 0.0
-------------------------------------------------------------
Wait Events DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
control file parallel write 193 .0 22 112 3.7
db file parallel write 255 .0 17 67 4.9
db file scattered read 18 .0 3 151 0.3
change tracking file synchro 10 .0 3 250 0.2
os thread startup 3 33.3 2 522 0.1
log file sequential read 10 .0 2 152 0.2
control file sequential read 1,328 .0 1 1 25.5
latch free 1 .0 1 1121 0.0
log file parallel write 81 .0 1 13 1.6
db file sequential read 25 .0 0 14 0.5
log file sync 1 .0 0 37 0.0
change tracking file synchro 10 .0 0 1 0.2
ASM background timer 136 .0 586 4309 2.6
virtual circuit status 20 100.0 586 29295 0.4
Streams AQ: qmn slave idle w 21 .0 574 27348 0.4
Streams AQ: qmn coordinator 42 50.0 574 13674 0.8
class slave wait 11 .0 434 39426 0.2
jobq slave wait 69 94.2 199 2881 1.3
KSV master wait 14 14.3 8 562 0.3
-------------------------------------------------------------
Background Wait Events DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
control file parallel write 193 .0 22 112 3.7
db file parallel write 255 .0 17 67 4.9
events in waitclass Other 20 .0 3 125 0.4
os thread startup 3 33.3 2 522 0.1
log file parallel write 81 .0 1 13 1.6
control file sequential read 244 .0 0 2 4.7
db file scattered read 2 .0 0 35 0.0
db file sequential read 1 .0 0 6 0.0
rdbms ipc message 2,388 97.1 6,994 2929 45.9
pmon timer 202 100.0 589 2916 3.9
ASM background timer 136 .0 586 4309 2.6
smon timer 2 100.0 586 292969 0.0
Streams AQ: qmn slave idle w 21 .0 574 27348 0.4
Streams AQ: qmn coordinator 42 50.0 574 13674 0.8
class slave wait 11 .0 434 39426 0.2
-------------------------------------------------------------
Operating System Statistics DB/Inst: IVRS/ivrs Snaps: 1103-1104
Statistic Total
-------------------------------- --------------------
BUSY_TIME 46,115
IDLE_TIME 13,989
IOWAIT_TIME 1,075
NICE_TIME 0
SYS_TIME 25,317
USER_TIME 20,651
LOAD 0
RSRC_MGR_CPU_WAIT_TIME 0
PHYSICAL_MEMORY_BYTES 15,680
NUM_CPUS 1
-------------------------------------------------------------
Service Statistics DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> ordered by DB Time
Physical Logical
Service Name DB Time (s) DB CPU (s) Reads Reads
-------------------------------- ------------ ------------ ---------- ----------
SYS$USERS 877.3 833.1 48 6,419
SYS$BACKGROUND 0.0 0.0 61 3,205
ivrs.karl.com 0.0 0.0 0 0
ivrsXDB 0.0 0.0 0 0
-------------------------------------------------------------
Service Wait Class Stats DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
classes: User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in centisecond (100th of a second)
Service Name
----------------------------------------------------------------
User I/O User I/O Concurcy Concurcy Admin Admin Network Network
Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time
--------- --------- --------- --------- --------- --------- --------- ---------
SYS$USERS
10 48 0 0 0 0 0 0
SYS$BACKGROUND
33 258 3 156 0 0 0 0
-------------------------------------------------------------
SQL ordered by Elapsed Time DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Elapsed CPU Elap per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
438 416 4 109.5 49.9 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
437 415 1 436.6 49.8 2zwjrv2186835
Module: Oracle Enterprise Manager.rollup
DELETE FROM MGMT_METRICS_RAW WHERE ROWID = :B1
5 2 1 4.7 0.5 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
5 2 1 4.5 0.5 5h7w8ykwtb2xt
INSERT INTO SYS.WRI$_ADV_PARAMETERS (TASK_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
2 1 1 1.7 0.2 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
0 0 1 0.4 0.0 bunssq950snhf
insert into wrh$_sga_target_advice (snap_id, dbid, instance_number, SGA_SIZ
E, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_READS) select :snap_id, :dbi
d, :instance_number, SGA_SIZE, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_R
EADS from v$sga_target_advice
0 0 1 0.3 0.0 1crajpb7j5tyz
INSERT INTO STATS$SGA_TARGET_ADVICE ( SNAP_ID , DBID , INSTANCE_NUMBER , SGA_SIZ
E , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TIME_FACTOR , ESTD_PHYSICAL_READS )
SELECT :B3 , :B2 , :B1 , SGA_SIZE , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TI
ME_FACTOR , ESTD_PHYSICAL_READS FROM V$SGA_TARGET_ADVICE
0 0 1 0.3 0.0 dvzr1zfmdddga
INSERT INTO STATS$SYSSTAT ( SNAP_ID , DBID , INSTANCE_NUMBER , STATISTIC# , NAME
, VALUE ) SELECT :B3 , :B2 , :B1 , STATISTIC# , NAME , VALUE FROM V$SYSSTAT
0 0 1 0.2 0.0 350myuyx0t1d6
insert into wrh$_tablespace_stat (snap_id, dbid, instance_number, ts#, tsname
, contents, status, segment_space_management, extent_management, is_back
up) select :snap_id, :dbid, :instance_number, ts.ts#, ts.name as tsname,
decode(ts.contents$, 0, (decode(bitand(ts.flags, 16), 16, 'UNDO', '
0 0 1 0.2 0.0 52upwvrbypadx
Module: Oracle Enterprise Manager.current metric purge
SELECT CM.TARGET_GUID, CM.METRIC_GUID, MAX(CM.COLLECTION_TIMESTAMP) FROM MGMT_CU
RRENT_METRICS CM, MGMT_METRICS M WHERE CM.METRIC_GUID = M.METRIC_GUID AND M.KEYS
_FROM_MULT_COLLS = 0 GROUP BY CM.TARGET_GUID, CM.METRIC_GUID
-------------------------------------------------------------
SQL ordered by CPU Time DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
---------- ---------- ------------ ----------- ------- -------------
416 438 4 104.08 49.9 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
415 437 1 415.14 49.8 2zwjrv2186835
Module: Oracle Enterprise Manager.rollup
DELETE FROM MGMT_METRICS_RAW WHERE ROWID = :B1
2 5 1 2.01 0.5 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
2 5 1 1.89 0.5 5h7w8ykwtb2xt
INSERT INTO SYS.WRI$_ADV_PARAMETERS (TASK_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
1 2 1 1.10 0.2 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
0 0 1 0.41 0.0 bunssq950snhf
insert into wrh$_sga_target_advice (snap_id, dbid, instance_number, SGA_SIZ
E, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_READS) select :snap_id, :dbi
d, :instance_number, SGA_SIZE, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_R
EADS from v$sga_target_advice
0 0 1 0.28 0.0 1crajpb7j5tyz
INSERT INTO STATS$SGA_TARGET_ADVICE ( SNAP_ID , DBID , INSTANCE_NUMBER , SGA_SIZ
E , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TIME_FACTOR , ESTD_PHYSICAL_READS )
SELECT :B3 , :B2 , :B1 , SGA_SIZE , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TI
ME_FACTOR , ESTD_PHYSICAL_READS FROM V$SGA_TARGET_ADVICE
0 0 1 0.18 0.0 52upwvrbypadx
Module: Oracle Enterprise Manager.current metric purge
SELECT CM.TARGET_GUID, CM.METRIC_GUID, MAX(CM.COLLECTION_TIMESTAMP) FROM MGMT_CU
RRENT_METRICS CM, MGMT_METRICS M WHERE CM.METRIC_GUID = M.METRIC_GUID AND M.KEYS
_FROM_MULT_COLLS = 0 GROUP BY CM.TARGET_GUID, CM.METRIC_GUID
0 0 115 0.00 0.0 6ssrk2dqj7jbx
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= next_date) and (n
ext_date <= :2)) or ((last_date is null) and (next_date < :3))) and (field1
= :4 or (field1 = 0 and 'Y' = :5)) and (this_date is null) order by next_date, j
ob
0 0 5 0.02 0.0 g2aqmpuqbytjy
Module: Oracle Enterprise Manager.rollup
SELECT ROWID FROM MGMT_METRICS_RAW WHERE TARGET_GUID = :B2 AND COLLECTION_TIMEST
AMP < :B1
-------------------------------------------------------------
SQL ordered by Gets DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets: 9,791
-> Captured SQL account for 64.1% of Total
Gets CPU Elapsed
Buffer Gets Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
5,870 4 1,467.5 60.0 416.31 438.17 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
1,841 1 1,841.0 18.8 1.10 1.66 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
1,292 1 1,292.0 13.2 415.14 436.64 2zwjrv2186835
Module: Oracle Enterprise Manager.rollup
DELETE FROM MGMT_METRICS_RAW WHERE ROWID = :B1
580 5 116.0 5.9 0.10 0.10 g2aqmpuqbytjy
Module: Oracle Enterprise Manager.rollup
SELECT ROWID FROM MGMT_METRICS_RAW WHERE TARGET_GUID = :B2 AND COLLECTION_TIMEST
AMP < :B1
522 259 2.0 5.3 0.06 0.06 52td38mkm2jh3
Module: Oracle Enterprise Manager.current metric purge
SELECT ROWID FROM MGMT_CURRENT_METRICS WHERE TARGET_GUID = :B3 AND METRIC_GUID =
:B2 AND COLLECTION_TIMESTAMP < :B1 FOR UPDATE SKIP LOCKED
354 1 354.0 3.6 0.07 0.07 925d7dd714u48
INSERT INTO STATS$LATCH ( SNAP_ID , DBID , INSTANCE_NUMBER , NAME , LATCH# , LEV
EL# , GETS , MISSES , SLEEPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , W
AIT_TIME ) SELECT :B3 , :B2 , :B1 , NAME , LATCH# , LEVEL# , GETS , MISSES , SLE
EPS , IMMEDIATE_GETS , IMMEDIATE_MISSES , SPIN_GETS , WAIT_TIME FROM V$LATCH
347 8 43.4 3.5 0.08 0.10 abtp0uqvdb1d3
CALL MGMT_ADMIN_DATA.EVALUATE_MGMT_METRICS(:tguid, :mguid, :result)
345 115 3.0 3.5 0.14 0.14 6ssrk2dqj7jbx
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= next_date) and (n
ext_date <= :2)) or ((last_date is null) and (next_date < :3))) and (field1
= :4 or (field1 = 0 and 'Y' = :5)) and (this_date is null) order by next_date, j
ob
293 1 293.0 3.0 0.03 0.03 g337099aatnuj
update smon_scn_time set orig_thread=0, time_mp=:1, time_dp=:2, scn=:3, scn_wrp
=:4, scn_bas=:5, num_mappings=:6, tim_scn_map=:7 where thread=0 and scn = (sel
ect min(scn) from smon_scn_time where thread=0)
251 1 251.0 2.6 0.05 0.25 dvzr1zfmdddga
INSERT INTO STATS$SYSSTAT ( SNAP_ID , DBID , INSTANCE_NUMBER , STATISTIC# , NAME
, VALUE ) SELECT :B3 , :B2 , :B1 , STATISTIC# , NAME , VALUE FROM V$SYSSTAT
227 1 227.0 2.3 0.04 0.04 bq0xuw807fdju
INSERT INTO STATS$EVENT_HISTOGRAM ( SNAP_ID , DBID , INSTANCE_NUMBER , EVENT_ID
, WAIT_TIME_MILLI , WAIT_COUNT ) SELECT :B3 , :B2 , :B1 , EN.EVENT_ID , WAIT_TIM
E_MILLI , WAIT_COUNT FROM V$EVENT_HISTOGRAM EH , V$EVENT_NAME EN WHERE EH.EVENT
= EN.NAME AND EH.EVENT# = EN.EVENT#
208 1 208.0 2.1 0.02 0.08 11dxc6vpx5z4n
INSERT INTO STATS$SQLTEXT ( OLD_HASH_VALUE , TEXT_SUBSET , PIECE , SQL_ID , SQL_
TEXT , ADDRESS , COMMAND_TYPE , LAST_SNAP_ID ) SELECT /*+ ordered use_nl(vst) */
NEW_SQL.OLD_HASH_VALUE , NEW_SQL.TEXT_SUBSET , VST.PIECE , VST.SQL_ID , VST.SQL
_TEXT , VST.ADDRESS , VST.COMMAND_TYPE , NEW_SQL.SNAP_ID FROM (SELECT HASH_VALUE
129 20 6.5 1.3 0.02 0.02 18naypzfmabd6
INSERT INTO MGMT_SYSTEM_PERFORMANCE_LOG (JOB_NAME, TIME, DURATION, MODULE, ACTIO
N, IS_TOTAL, NAME, VALUE, CLIENT_DATA, HOST_URL) VALUES (:B9 , SYSDATE, :B8 , SU
BSTR(:B7 , 1, 512), SUBSTR(:B6 ,1,32), :B5 , SUBSTR(:B4 ,1,128), SUBSTR(:B3 ,1,1
28), SUBSTR(:B2 ,1,128), SUBSTR(:B1 ,1,256))
119 11 10.8 1.2 0.01 0.01 6d64jpfzqc9rv
INSERT INTO MGMT_METRICS_RAW (TARGET_GUID, COLLECTION_TIMESTAMP, METRIC_GUID, KE
Y_VALUE, VALUE, STRING_VALUE) VALUES (:B5 , :B4 , :B3 , :B2 , NULL, :B1 )
116 116 1.0 1.2 0.09 0.09 g2wr3u7s1gtf3
select count(*) from sys.job$ where (next_date > sysdate) and (next_date < (sysd
ate+5/86400))
112 1 112.0 1.1 0.02 0.02 d8mayxqw0wnpv
SELECT OWNER FROM DBA_PROCEDURES WHERE OBJECT_NAME = 'MGMT_USER' AND PROCEDURE_N
AME = 'DROP_USER'
109 1 109.0 1.1 0.18 0.18 52upwvrbypadx
Module: Oracle Enterprise Manager.current metric purge
SELECT CM.TARGET_GUID, CM.METRIC_GUID, MAX(CM.COLLECTION_TIMESTAMP) FROM MGMT_CU
RRENT_METRICS CM, MGMT_METRICS M WHERE CM.METRIC_GUID = M.METRIC_GUID AND M.KEYS
_FROM_MULT_COLLS = 0 GROUP BY CM.TARGET_GUID, CM.METRIC_GUID
107 1 107.0 1.1 0.08 0.14 adx64219uugxh
INSERT INTO STATS$SQL_SUMMARY ( SNAP_ID , DBID , INSTANCE_NUMBER , TEXT_SUBSET ,
SQL_ID , SHARABLE_MEM , SORTS , MODULE , LOADED_VERSIONS , FETCHES , EXECUTIONS
, PX_SERVERS_EXECUTIONS , END_OF_FETCH_COUNT , LOADS , INVALIDATIONS , PARSE_CA
LLS , DISK_READS , DIRECT_WRITES , BUFFER_GETS , APPLICATION_WAIT_TIME , CONCURR
103 11 9.4 1.1 0.03 0.03 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0
-------------------------------------------------------------
SQL ordered by Reads DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Total Disk Reads: 97
-> Captured SQL account for 56.7% of Total
Reads CPU Elapsed
Physical Reads Executions per Exec %Total Time (s) Time (s) SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
48 1 48.0 49.5 1.10 1.66 djs2w2f17nw2z
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;
28 1 28.0 28.9 0.05 0.25 dvzr1zfmdddga
INSERT INTO STATS$SYSSTAT ( SNAP_ID , DBID , INSTANCE_NUMBER , STATISTIC# , NAME
, VALUE ) SELECT :B3 , :B2 , :B1 , STATISTIC# , NAME , VALUE FROM V$SYSSTAT
7 1 7.0 7.2 0.02 0.08 11dxc6vpx5z4n
INSERT INTO STATS$SQLTEXT ( OLD_HASH_VALUE , TEXT_SUBSET , PIECE , SQL_ID , SQL_
TEXT , ADDRESS , COMMAND_TYPE , LAST_SNAP_ID ) SELECT /*+ ordered use_nl(vst) */
NEW_SQL.OLD_HASH_VALUE , NEW_SQL.TEXT_SUBSET , VST.PIECE , VST.SQL_ID , VST.SQL
_TEXT , VST.ADDRESS , VST.COMMAND_TYPE , NEW_SQL.SNAP_ID FROM (SELECT HASH_VALUE
5 1 5.0 5.2 1.89 4.52 5h7w8ykwtb2xt
INSERT INTO SYS.WRI$_ADV_PARAMETERS (TASK_ID,NAME,DATATYPE,VALUE,FLAGS,DESCRIPTI
ON) VALUES (:B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
5 1 5.0 5.2 2.01 4.65 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;
4 1 4.0 4.1 0.08 0.10 4qju99hqmn81x
INSERT INTO WRH$_ACTIVE_SESSION_HISTORY ( snap_id, dbid, instance_number, sample
_id, sample_time, session_id, session_serial#, user_id, sql_id, sql_child_
number, sql_plan_hash_value, force_matching_signature, service_hash, sessi
on_type, sql_opcode, plsql_entry_object_id, plsql_entry_subprogram_id, pls
4 1 4.0 4.1 0.08 0.14 adx64219uugxh
INSERT INTO STATS$SQL_SUMMARY ( SNAP_ID , DBID , INSTANCE_NUMBER , TEXT_SUBSET ,
SQL_ID , SHARABLE_MEM , SORTS , MODULE , LOADED_VERSIONS , FETCHES , EXECUTIONS
, PX_SERVERS_EXECUTIONS , END_OF_FETCH_COUNT , LOADS , INVALIDATIONS , PARSE_CA
LLS , DISK_READS , DIRECT_WRITES , BUFFER_GETS , APPLICATION_WAIT_TIME , CONCURR
3 1 3.0 3.1 0.03 0.06 agpd044zj368m
insert into wrh$_system_event (snap_id, dbid, instance_number, event_id, total
_waits, total_timeouts, time_waited_micro) select :snap_id, :dbid, :insta
nce_number, event_id, total_waits, total_timeouts, time_waited_micro from
v$system_event order by event_id
2 1 2.0 2.1 0.04 0.09 g6wf9na8zs5hb
insert into wrh$_sysmetric_summary (snap_id, dbid, instance_number, beg
in_time, end_time, intsize, group_id, metric_id, num_interval, maxval, minv
al, average, standard_deviation) select :snap_id, :dbid, :instance_number,
begtime, endtime, intsize_csec, groupid, metricid, numintv, max, min,
1 1 1.0 1.0 0.06 0.08 0gf6adkbuyytg
INSERT INTO STATS$FILE_HISTOGRAM ( SNAP_ID , DBID , INSTANCE_NUMBER , FILE# , SI
NGLEBLKRDTIM_MILLI , SINGLEBLKRDS ) SELECT :B3 , :B2 , :B1 , FILE# , SINGLEBLKRD
TIM_MILLI , SINGLEBLKRDS FROM V$FILE_HISTOGRAM WHERE SINGLEBLKRDS > 0
1 1 1.0 1.0 0.08 0.13 g2vhd8wttdzmf
INSERT INTO STATS$BG_EVENT_SUMMARY ( SNAP_ID , DBID , INSTANCE_NUMBER , EVENT ,
TOTAL_WAITS , TOTAL_TIMEOUTS , TIME_WAITED_MICRO ) SELECT :B3 , :B2 , :B1 , E.EV
ENT , SUM(E.TOTAL_WAITS) , SUM(E.TOTAL_TIMEOUTS) , SUM(E.TIME_WAITED_MICRO) FROM
V$SESSION_EVENT E WHERE E.SID IN (SELECT S.SID FROM V$SESSION S WHERE S.TYPE =
-------------------------------------------------------------
SQL ordered by Executions DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Total Executions: 1,236
-> Captured SQL account for 66.2% of Total
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) SQL Id
------------ --------------- -------------- ---------- ----------- -------------
259 0 0.0 0.00 0.00 52td38mkm2jh3
Module: Oracle Enterprise Manager.current metric purge
SELECT ROWID FROM MGMT_CURRENT_METRICS WHERE TARGET_GUID = :B3 AND METRIC_GUID =
:B2 AND COLLECTION_TIMESTAMP < :B1 FOR UPDATE SKIP LOCKED
116 116 1.0 0.00 0.00 g2wr3u7s1gtf3
select count(*) from sys.job$ where (next_date > sysdate) and (next_date < (sysd
ate+5/86400))
115 4 0.0 0.00 0.00 6ssrk2dqj7jbx
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= next_date) and (n
ext_date <= :2)) or ((last_date is null) and (next_date < :3))) and (field1
= :4 or (field1 = 0 and 'Y' = :5)) and (this_date is null) order by next_date, j
ob
25 25 1.0 0.00 0.00 5kyb5bvnu3s04
SELECT DISTINCT METRIC_GUID FROM MGMT_METRICS WHERE TARGET_TYPE = :B3 AND METRIC
_NAME = :B2 AND METRIC_COLUMN = :B1
25 25 1.0 0.00 0.00 91h2x42zqagcm
UPDATE MGMT_CURRENT_METRICS SET COLLECTION_TIMESTAMP = :B3 , VALUE = :B2 , STRIN
G_VALUE = :B1 WHERE TARGET_GUID = :B6 AND METRIC_GUID = :B5 AND KEY_VALUE = :B4
AND COLLECTION_TIMESTAMP < :B3
25 25 1.0 0.00 0.00 gfdn12rn0fg3m
SELECT TARGET_GUID FROM MGMT_TARGETS WHERE TARGET_NAME = :B2 AND TARGET_TYPE = :
B1
20 20 1.0 0.00 0.00 18naypzfmabd6
INSERT INTO MGMT_SYSTEM_PERFORMANCE_LOG (JOB_NAME, TIME, DURATION, MODULE, ACTIO
N, IS_TOTAL, NAME, VALUE, CLIENT_DATA, HOST_URL) VALUES (:B9 , SYSDATE, :B8 , SU
BSTR(:B7 , 1, 512), SUBSTR(:B6 ,1,32), :B5 , SUBSTR(:B4 ,1,128), SUBSTR(:B3 ,1,1
28), SUBSTR(:B2 ,1,128), SUBSTR(:B1 ,1,256))
20 300 15.0 0.00 0.00 772s25v1y0x8k
select shared_pool_size_for_estimate s, shared_pool_size_factor * 100 f
, estd_lc_load_time l, 0 from v$shared_pool_advice
20 400 20.0 0.00 0.00 aykvshm7zsabd
select size_for_estimate, size_factor * 100 f,
estd_physical_read_time, estd_physical_reads
from v$db_cache_advice where id = '3'
20 400 20.0 0.00 0.00 g6gu1n3x0h1h4
select streams_pool_size_for_estimate s, streams_pool_size_factor * 10
0 f, estd_spill_time + estd_unspill_time, 0 from v$streams_pool_advic
e
14 14 1.0 0.00 0.00 49s332uhbnsma
declare vsn varchar2(20); begin vsn :=
dbms_rcvman.getPackageVersion; :pkg_vsn:pkg_vsn_i := vsn;
if vsn is not null then :pkg_vsnub4 :=
to_number(substr(vsn,1,2) || substr(vsn,4,2) || s
-------------------------------------------------------------
SQL ordered by Parse Calls DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Total Parse Calls: 362
-> Captured SQL account for 43.1% of Total
% Total
Parse Calls Executions Parses SQL Id
------------ ------------ --------- -------------
14 14 3.87 49s332uhbnsma
declare vsn varchar2(20); begin vsn :=
dbms_rcvman.getPackageVersion; :pkg_vsn:pkg_vsn_i := vsn;
if vsn is not null then :pkg_vsnub4 :=
to_number(substr(vsn,1,2) || substr(vsn,4,2) || s
11 11 3.04 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0
5 115 1.38 6ssrk2dqj7jbx
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= next_date) and (n
ext_date <= :2)) or ((last_date is null) and (next_date < :3))) and (field1
= :4 or (field1 = 0 and 'Y' = :5)) and (this_date is null) order by next_date, j
ob
5 5 1.38 bsa0wjtftg3uw
select file# from file$ where ts#=:1
5 5 1.38 g2m68t7jmcctz
select job, nvl2(last_date, 1, 0) from sys.job$ where next_date <= :1 and (field
1 = :2 or (field1 = 0 and 'Y' = :3)) order by next_date, job
4 4 1.10 0k8522rmdzg4k
select privilege# from sysauth$ where (grantee#=:1 or grantee#=1) and privilege#
>0
4 4 1.10 24dkx03u3rj6k
Module:
SELECT COUNT(*) FROM MGMT_PARAMETERS WHERE PARAMETER_NAME=:B1 AND UPPER(PARAMETE
R_VALUE)='TRUE'
4 5 1.10 47a50dvdgnxc2
update sys.job$ set failures=0, this_date=null, flag=:1, last_date=:2, next_dat
e = greatest(:3, sysdate), total=total+(sysdate-nvl(this_date,sysdate)) where j
ob=:4
4 4 1.10 a1mbfp580hw3k
select u1.user#, u2.user#, u3.user#, failures, flag, interval#, what, nlsenv,
env, field1 from sys.job$ j, sys.user$ u1, sys.user$ u2, sys.user$ u3 where j
ob=:1 and (next_date <= sysdate or :2 != 0) and lowner = u1.name and powner = u
2.name and cowner = u3.name
4 4 1.10 aq8yqxyyb40nn
update sys.job$ set this_date=:1 where job=:2
-------------------------------------------------------------
SQL ordered by Sharable Memory DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
SQL ordered by Version Count DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
Instance Activity Stats DB/Inst: IVRS/ivrs Snaps: 1103-1104
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
CPU used by this session 42,539 70.7 818.1
CPU used when call started 443 0.7 8.5
CR blocks created 5 0.0 0.1
DB time 73,477 122.0 1,413.0
DBWR checkpoint buffers written 614 1.0 11.8
DBWR transaction table writes 10 0.0 0.2
DBWR undo block writes 116 0.2 2.2
IMU CR rollbacks 0 0.0 0.0
IMU Flushes 2 0.0 0.0
IMU Redo allocation size 33,124 55.0 637.0
IMU commits 49 0.1 0.9
IMU contention 0 0.0 0.0
IMU undo allocation size 145,332 241.4 2,794.9
SMON posted for undo segment shr 0 0.0 0.0
SQL*Net roundtrips to/from clien 0 0.0 0.0
active txn count during cleanout 60 0.1 1.2
background timeouts 2,322 3.9 44.7
buffer is not pinned count 1,844 3.1 35.5
buffer is pinned count 711 1.2 13.7
bytes received via SQL*Net from 0 0.0 0.0
bytes sent via SQL*Net to client 0 0.0 0.0
calls to get snapshot scn: kcmgs 2,138 3.6 41.1
calls to kcmgas 119 0.2 2.3
calls to kcmgcs 25 0.0 0.5
change write time 61 0.1 1.2
cleanout - number of ktugct call 74 0.1 1.4
cleanouts and rollbacks - consis 0 0.0 0.0
cleanouts only - consistent read 1 0.0 0.0
cluster key scan block gets 626 1.0 12.0
cluster key scans 36 0.1 0.7
commit batch/immediate performed 2 0.0 0.0
commit batch/immediate requested 2 0.0 0.0
commit cleanout failures: callba 9 0.0 0.2
commit cleanouts 446 0.7 8.6
commit cleanouts successfully co 437 0.7 8.4
commit immediate performed 2 0.0 0.0
commit immediate requested 2 0.0 0.0
commit txn count during cleanout 43 0.1 0.8
concurrency wait time 157 0.3 3.0
consistent changes 5 0.0 0.1
consistent gets 5,433 9.0 104.5
consistent gets - examination 1,695 2.8 32.6
consistent gets direct 0 0.0 0.0
consistent gets from cache 5,433 9.0 104.5
cursor authentications 11 0.0 0.2
data blocks consistent reads - u 5 0.0 0.1
db block changes 3,818 6.3 73.4
db block gets 4,358 7.2 83.8
db block gets direct 0 0.0 0.0
db block gets from cache 4,358 7.2 83.8
deferred (CURRENT) block cleanou 298 0.5 5.7
dirty buffers inspected 0 0.0 0.0
enqueue conversions 176 0.3 3.4
enqueue releases 5,369 8.9 103.3
enqueue requests 5,371 8.9 103.3
enqueue timeouts 5 0.0 0.1
enqueue waits 0 0.0 0.0
execute count 1,236 2.1 23.8
free buffer inspected 242 0.4 4.7
free buffer requested 254 0.4 4.9
heap block compress 3 0.0 0.1
hot buffers moved to head of LRU 0 0.0 0.0
immediate (CR) block cleanout ap 1 0.0 0.0
immediate (CURRENT) block cleano 83 0.1 1.6
index crx upgrade (positioned) 206 0.3 4.0
index fast full scans (full) 20 0.0 0.4
index fetch by key 658 1.1 12.7
index scans kdiixs1 710 1.2 13.7
leaf node 90-10 splits 10 0.0 0.2
leaf node splits 29 0.1 0.6
lob reads 0 0.0 0.0
lob writes 0 0.0 0.0
lob writes unaligned 0 0.0 0.0
logons cumulative 7 0.0 0.1
messages received 324 0.5 6.2
messages sent 324 0.5 6.2
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 2,892 4.8 55.6
opened cursors cumulative 360 0.6 6.9
parse count (failures) 0 0.0 0.0
parse count (hard) 0 0.0 0.0
parse count (total) 362 0.6 7.0
parse time cpu 11 0.0 0.2
parse time elapsed 9 0.0 0.2
physical read IO requests 26 0.0 0.5
physical read bytes 794,624 1,319.8 15,281.2
physical read total IO requests 1,530 2.5 29.4
physical read total bytes 179,646,976 298,379.2 3,454,749.5
physical read total multi block 164 0.3 3.2
physical reads 97 0.2 1.9
physical reads cache 97 0.2 1.9
physical reads cache prefetch 71 0.1 1.4
physical reads direct 0 0.0 0.0
physical reads direct temporary 0 0.0 0.0
physical reads prefetch warmup 71 0.1 1.4
physical write IO requests 402 0.7 7.7
physical write bytes 5,029,888 8,354.2 96,728.6
physical write total IO requests 1,073 1.8 20.6
physical write total bytes 16,072,704 26,695.5 309,090.5
physical write total multi block 120 0.2 2.3
physical writes 614 1.0 11.8
physical writes direct 0 0.0 0.0
physical writes direct (lob) 0 0.0 0.0
physical writes direct temporary 0 0.0 0.0
physical writes from cache 614 1.0 11.8
physical writes non checkpoint 207 0.3 4.0
recursive calls 5,668 9.4 109.0
recursive cpu usage 42,208 70.1 811.7
redo blocks written 2,902 4.8 55.8
redo entries 2,088 3.5 40.2
redo ordering marks 1 0.0 0.0
redo size 1,439,948 2,391.6 27,691.3
redo synch time 4 0.0 0.1
redo synch writes 161 0.3 3.1
redo wastage 20,648 34.3 397.1
redo write time 106 0.2 2.0
redo writes 80 0.1 1.5
rollback changes - undo records 665 1.1 12.8
rollbacks only - consistent read 5 0.0 0.1
rows fetched via callback 430 0.7 8.3
session cursor cache hits 55 0.1 1.1
session logical reads 9,791 16.3 188.3
session pga memory 19,249,488 31,971.9 370,182.5
session pga memory max 1,000,192,336 1,661,239.3 19,234,468.0
session uga memory 21,474,294,224 35,667,082.3 #############
session uga memory max 9,592,884 15,933.0 184,478.5
shared hash latch upgrades - no 246 0.4 4.7
sorts (disk) 0 0.0 0.0
sorts (memory) 314 0.5 6.0
sorts (rows) 19,228 31.9 369.8
sql area evicted 0 0.0 0.0
sql area purged 0 0.0 0.0
summed dirty queue length 0 0.0 0.0
switch current to new buffer 0 0.0 0.0
table fetch by rowid 640 1.1 12.3
table fetch continued row 0 0.0 0.0
table scan blocks gotten 513 0.9 9.9
table scan rows gotten 24,520 40.7 471.5
table scans (long tables) 0 0.0 0.0
table scans (short tables) 178 0.3 3.4
total number of times SMON poste 0 0.0 0.0
transaction rollbacks 2 0.0 0.0
undo change vector size 466,272 774.4 8,966.8
user I/O wait time 288 0.5 5.5
user calls 22 0.0 0.4
user commits 50 0.1 1.0
user rollbacks 2 0.0 0.0
workarea executions - onepass 0 0.0 0.0
workarea executions - optimal 173 0.3 3.3
write clones created in backgrou 1 0.0 0.0
-------------------------------------------------------------
Instance Activity Stats - Absolute ValuesDB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Statistics with absolute values (should not be diffed)
Statistic Begin Value End Value
-------------------------------- --------------- ---------------
session cursor cache count 2,375 2,473
opened cursors current 74 39
logons current 23 22
-------------------------------------------------------------
Instance Activity Stats - Thread Activity DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Statistics identified by '(derived)' come from sources other than SYSSTAT
Statistic Total per Hour
-------------------------------- ------------------ ---------
log switches (derived) 0 .00
-------------------------------------------------------------
Tablespace IO Stats DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> ordered by IOs (Reads + Writes) desc
Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
SYSAUX
20 0 130.0 3.1 202 0 0 0.0
USERS
10 0 48.0 4.8 161 0 0 0.0
UNDOTBS1
0 0 0.0 .0 27 0 0 0.0
SYSTEM
0 0 0.0 .0 12 0 0 0.0
-------------------------------------------------------------
File IO Stats DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> ordered by Tablespace, File
Tablespace Filename
------------------------ ----------------------------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
SYSAUX +DATA_1/ivrs/datafile/sysaux.258.652821943
20 0 130.0 3.1 202 0 0 0.0
SYSTEM +DATA_1/ivrs/datafile/system.267.652821909
0 0 N/A N/A 12 0 0 0.0
UNDOTBS1 +DATA_1/ivrs/datafile/undotbs1.257.652821933
0 0 N/A N/A 27 0 0 0.0
USERS +DATA_1/ivrs/datafile/users.263.652821963
7 0 45.7 4.9 84 0 0 0.0
USERS +DATA_1/ivrs/datafile/users02.dbf
3 0 53.3 4.7 77 0 0 0.0
-------------------------------------------------------------
Buffer Pool Statistics DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Writ Buffer
Number of Pool Buffer Physical Physical Buff Comp Busy
P Buffers Hit% Gets Reads Writes Wait Wait Waits
--- ---------- ---- -------------- ------------ ----------- ---- ---- ----------
D 19,461 99 9,792 97 614 0 0 0
-------------------------------------------------------------
Instance Recovery Stats DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- ------------
B 0 45 468 2412 8975 184320 8975 N/A
E 0 46 447 2084 9252 184320 9252 N/A
-------------------------------------------------------------
Buffer Pool Advisory DB/Inst: IVRS/ivrs Snap: 1104
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate
Est
Phys
Size for Size Buffers for Read Estimated
P Est (M) Factor Estimate Factor Physical Reads
--- -------- ------ ---------------- ------ ------------------
D 12 .1 1,497 1.1 24,770
D 24 .2 2,994 1.0 23,250
D 36 .2 4,491 1.0 22,666
D 48 .3 5,988 1.0 22,564
D 60 .4 7,485 1.0 22,377
D 72 .5 8,982 1.0 22,302
D 84 .5 10,479 1.0 22,225
D 96 .6 11,976 1.0 22,207
D 108 .7 13,473 1.0 22,199
D 120 .8 14,970 1.0 22,199
D 132 .8 16,467 1.0 22,199
D 144 .9 17,964 1.0 22,199
D 156 1.0 19,461 1.0 22,199
D 168 1.1 20,958 1.0 22,199
D 180 1.2 22,455 1.0 22,199
D 192 1.2 23,952 1.0 22,199
D 204 1.3 25,449 1.0 22,199
D 216 1.4 26,946 1.0 22,199
D 228 1.5 28,443 1.0 22,199
D 240 1.5 29,940 1.0 21,939
-------------------------------------------------------------
PGA Aggr Summary DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ------------------ --------------------------
100.0 24 0
-------------------------------------------------------------
Warning: pga_aggregate_target was set too low for current workload, as this
value was exceeded during this interval. Use the PGA Advisory view
to help identify a different value for pga_aggregate_target.
PGA Aggr Target Stats DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> B: Begin snap E: End snap (rows identified with B or E contain data
which is absolute i.e. not diffed over the interval)
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
- ---------- ---------- ---------- ---------- ------ ------ ------ ----------
B 103 49 153.5 0.0 .0 .0 .0 21,094
E 103 49 112.1 0.0 .0 .0 .0 21,094
-------------------------------------------------------------
PGA Aggr Target Histogram DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Optimal Executions are purely in-memory operations
Low High
Optimal Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- -------------- ------------ ------------
2K 4K 138 138 0 0
64K 128K 3 3 0 0
128K 256K 1 1 0 0
512K 1024K 31 31 0 0
-------------------------------------------------------------
PGA Memory Advisory DB/Inst: IVRS/ivrs Snap: 1104
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- ---------------- -------- ----------
13 0.1 366.1 325.5 53.0 20
26 0.3 366.1 325.5 53.0 20
52 0.5 366.1 325.5 53.0 20
77 0.8 366.1 310.6 54.0 6
103 1.0 366.1 5.3 99.0 3
124 1.2 366.1 0.0 100.0 3
144 1.4 366.1 0.0 100.0 3
165 1.6 366.1 0.0 100.0 3
185 1.8 366.1 0.0 100.0 3
206 2.0 366.1 0.0 100.0 3
309 3.0 366.1 0.0 100.0 0
412 4.0 366.1 0.0 100.0 0
618 6.0 366.1 0.0 100.0 0
824 8.0 366.1 0.0 100.0 0
-------------------------------------------------------------
Shared Pool Advisory DB/Inst: IVRS/ivrs Snap: 1104
-> SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
in the Library Cache, and the physical number of memory objects associated
with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid.
Est LC Est LC Est LC Est LC
Shared SP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
56 .4 17 1,880 6,439 1.0 334 1.5 34,050
72 .5 31 4,267 6,543 1.0 230 1.0 34,481
88 .6 33 4,636 6,543 1.0 230 1.0 34,481
104 .8 33 4,636 6,543 1.0 230 1.0 34,481
120 .9 33 4,636 6,543 1.0 230 1.0 34,481
136 1.0 33 4,636 6,543 1.0 230 1.0 34,481
152 1.1 33 4,636 6,543 1.0 230 1.0 34,481
168 1.2 33 4,636 6,543 1.0 230 1.0 34,481
184 1.4 33 4,636 6,543 1.0 230 1.0 34,481
200 1.5 33 4,636 6,543 1.0 230 1.0 34,481
216 1.6 33 4,636 6,543 1.0 230 1.0 34,481
232 1.7 33 4,636 6,543 1.0 230 1.0 34,481
248 1.8 33 4,636 6,543 1.0 230 1.0 34,481
264 1.9 33 4,636 6,543 1.0 230 1.0 34,481
280 2.1 33 4,636 6,543 1.0 230 1.0 34,481
-------------------------------------------------------------
SGA Target Advisory DB/Inst: IVRS/ivrs Snap: 1104
SGA Target SGA Size Est DB Est Physical
Size (M) Factor Time (s) Reads
---------- ---------- ------------ ----------------
156 0.5 2,551 22,360
234 0.8 2,545 22,183
312 1.0 2,545 22,183
390 1.3 2,537 21,923
468 1.5 2,537 21,923
546 1.8 2,537 21,923
624 2.0 2,537 21,923
-------------------------------------------------------------
Streams Pool Advisory DB/Inst: IVRS/ivrs Snap: 1104
Size for Size Est Spill Est Spill Est Unspill Est Unspill
Est (MB) Factor Count Time (s) Count Time (s)
---------- --------- ----------- ----------- ----------- -----------
4 1.0 0 0 0 0
8 2.0 0 0 0 0
12 3.0 0 0 0 0
16 4.0 0 0 0 0
20 5.0 0 0 0 0
24 6.0 0 0 0 0
28 7.0 0 0 0 0
32 8.0 0 0 0 0
36 9.0 0 0 0 0
40 10.0 0 0 0 0
44 11.0 0 0 0 0
48 12.0 0 0 0 0
52 13.0 0 0 0 0
56 14.0 0 0 0 0
60 15.0 0 0 0 0
64 16.0 0 0 0 0
68 17.0 0 0 0 0
72 18.0 0 0 0 0
76 19.0 0 0 0 0
80 20.0 0 0 0 0
-------------------------------------------------------------
Java Pool Advisory DB/Inst: IVRS/ivrs Snap: 1104
Est LC Est LC Est LC Est LC
Java JP Est LC Time Time Load Load Est LC
Pool Size Size Est LC Saved Saved Time Time Mem
Size(M) Factr (M) Mem Obj (s) Factr (s) Factr Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
8 1.0 3 171 83 1.0 230 1.0 107
12 1.5 3 171 83 1.0 230 1.0 107
16 2.0 3 171 83 1.0 230 1.0 107
-------------------------------------------------------------
Buffer Wait Statistics DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
Enqueue Activity DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
Undo Segment Summary DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Num Undo Number of Max Qry Max Tx Min/Max STO/ uS/uR/uU/
TS# Blocks (K) Transactions Len (s) Concurcy TR (mins) OOS eS/eR/eU
---- ---------- --------------- -------- -------- --------- ----- --------------
1 .2 330 242 4 15/18.05 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Undo Segment Stats DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Most recent 35 Undostat rows, ordered by Time desc
Num Undo Number of Max Qry Max Tx Tun Ret STO/ uS/uR/uU/
End Time Blocks Transactions Len (s) Concy (mins) OOS eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- ------------
11-May 01:18 84 224 0 3 15 0/0 0/0/0/0/0/0
11-May 01:08 104 106 242 4 18 0/0 0/0/0/0/0/0
-------------------------------------------------------------
Latch Activity DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Name Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
ASM allocation 79 0.0 N/A 0 0 N/A
ASM db client latch 386 0.0 N/A 0 0 N/A
ASM map headers 35 0.0 N/A 0 0 N/A
ASM map load waiting lis 6 0.0 N/A 0 0 N/A
ASM map operation freeli 76 0.0 N/A 0 0 N/A
ASM map operation hash t 20,210 0.0 N/A 0 0 N/A
ASM network background l 272 0.0 N/A 0 0 N/A
AWR Alerted Metric Eleme 2,232 0.0 N/A 0 0 N/A
Consistent RBA 80 0.0 N/A 0 0 N/A
FAL request queue 12 0.0 N/A 0 0 N/A
FAL subheap alocation 12 0.0 N/A 0 0 N/A
FIB s.o chain latch 11 0.0 N/A 0 0 N/A
FOB s.o list latch 54 0.0 N/A 0 0 N/A
In memory undo latch 646 0.0 N/A 0 204 0.0
JS queue state obj latch 4,320 0.0 N/A 0 0 N/A
JS slv state obj latch 2 0.0 N/A 0 0 N/A
KFK SGA context latch 301 0.0 N/A 0 0 N/A
KFMD SGA 6 0.0 N/A 0 0 N/A
KMG MMAN ready and start 200 0.0 N/A 0 0 N/A
KTF sga latch 1 0.0 N/A 0 194 0.0
KWQP Prop Status 3 0.0 N/A 0 0 N/A
MQL Tracking Latch 0 N/A N/A 0 12 0.0
Memory Management Latch 0 N/A N/A 0 200 0.0
OS process 21 0.0 N/A 0 0 N/A
OS process allocation 213 0.0 N/A 0 0 N/A
OS process: request allo 5 0.0 N/A 0 0 N/A
PL/SQL warning settings 31 0.0 N/A 0 0 N/A
SGA IO buffer pool latch 252 0.0 N/A 0 288 0.0
SQL memory manager latch 2 0.0 N/A 0 193 0.0
SQL memory manager worka 13,247 0.0 N/A 0 0 N/A
Shared B-Tree 27 0.0 N/A 0 0 N/A
active checkpoint queue 452 0.0 N/A 0 0 N/A
active service list 1,233 0.0 N/A 0 202 0.0
archive control 13 0.0 N/A 0 0 N/A
archive process latch 205 0.0 N/A 0 0 N/A
cache buffer handles 46 0.0 N/A 0 0 N/A
cache buffers chains 28,796 0.0 N/A 0 390 0.0
cache buffers lru chain 1,548 0.0 N/A 0 39 0.0
channel handle pool latc 5 0.0 N/A 0 0 N/A
channel operations paren 2,766 0.0 N/A 0 0 N/A
checkpoint queue latch 5,327 0.0 N/A 0 541 0.0
client/application info 128 0.0 N/A 0 0 N/A
compile environment latc 16 0.0 N/A 0 0 N/A
dml lock allocation 534 0.0 N/A 0 0 N/A
dummy allocation 15 0.0 N/A 0 0 N/A
enqueue hash chains 10,916 0.0 N/A 0 0 N/A
enqueues 10,026 0.0 N/A 0 0 N/A
event group latch 2 0.0 N/A 0 0 N/A
file cache latch 38 0.0 N/A 0 0 N/A
hash table modification 1 0.0 N/A 0 0 N/A
job workq parent latch 0 N/A N/A 0 8 0.0
job_queue_processes para 14 0.0 N/A 0 0 N/A
ksuosstats global area 44 0.0 N/A 0 0 N/A
ksv instance 1 0.0 N/A 0 0 N/A
ktm global data 2 0.0 N/A 0 0 N/A
kwqbsn:qsga 26 0.0 N/A 0 0 N/A
lgwr LWN SCN 255 0.0 N/A 0 0 N/A
library cache 2,890 0.0 N/A 0 0 N/A
library cache lock 1,994 0.0 N/A 0 0 N/A
library cache lock alloc 143 0.0 N/A 0 0 N/A
library cache pin 793 0.0 N/A 0 0 N/A
library cache pin alloca 25 0.0 N/A 0 0 N/A
list of block allocation 36 0.0 N/A 0 0 N/A
logminer context allocat 1 0.0 N/A 0 0 N/A
messages 5,359 0.0 N/A 0 0 N/A
mostly latch-free SCN 255 0.0 N/A 0 0 N/A
msg queue 11 0.0 N/A 0 11 0.0
multiblock read objects 36 0.0 N/A 0 6 0.0
ncodef allocation latch 9 0.0 N/A 0 0 N/A
object queue header heap 452 0.0 N/A 0 0 N/A
object queue header oper 2,787 0.0 N/A 0 0 N/A
object stats modificatio 1 0.0 N/A 0 2 0.0
parallel query alloc buf 80 0.0 N/A 0 0 N/A
parameter table allocati 8 0.0 N/A 0 0 N/A
post/wait queue 3 0.0 N/A 0 1 0.0
process allocation 5 0.0 N/A 0 2 0.0
process group creation 5 0.0 N/A 0 0 N/A
qmn task queue latch 84 0.0 N/A 0 0 N/A
redo allocation 911 0.0 N/A 0 2,087 0.0
redo copy 0 N/A N/A 0 2,085 0.0
redo writing 1,099 0.0 N/A 0 0 N/A
resmgr group change latc 27 0.0 N/A 0 0 N/A
resmgr:actses active lis 13 0.0 N/A 0 0 N/A
resmgr:actses change gro 4 0.0 N/A 0 0 N/A
resmgr:free threads list 11 0.0 N/A 0 0 N/A
resmgr:schema config 2 0.0 N/A 0 0 N/A
row cache objects 3,934 0.0 N/A 0 0 N/A
rules engine rule set st 200 0.0 N/A 0 0 N/A
segmented array pool 12 0.0 N/A 0 0 N/A
sequence cache 6 0.0 N/A 0 0 N/A
session allocation 1,267 0.0 N/A 0 0 N/A
session idle bit 55 0.0 N/A 0 0 N/A
session state list latch 18 0.0 N/A 0 0 N/A
session switching 9 0.0 N/A 0 0 N/A
session timer 202 0.0 N/A 0 0 N/A
shared pool 1,051 0.0 N/A 0 0 N/A
shared pool simulator 168 0.0 N/A 0 0 N/A
simulator hash latch 1,176 0.0 N/A 0 0 N/A
simulator lru latch 1,148 0.0 N/A 0 16 0.0
slave class 38 0.0 N/A 0 0 N/A
slave class create 31 3.2 1.0 1 0 N/A
sort extent pool 14 0.0 N/A 0 0 N/A
state object free list 2 0.0 N/A 0 0 N/A
statistics aggregation 140 0.0 N/A 0 0 N/A
threshold alerts latch 65 0.0 N/A 0 0 N/A
transaction allocation 22 0.0 N/A 0 0 N/A
transaction branch alloc 9 0.0 N/A 0 0 N/A
undo global data 627 0.0 N/A 0 0 N/A
user lock 9 0.0 N/A 0 0 N/A
-------------------------------------------------------------
Latch Sleep Breakdown DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> ordered by misses desc
Latch Name
----------------------------------------
Get Requests Misses Sleeps Spin Gets Sleep1 Sleep2 Sleep3
-------------- ----------- ----------- ---------- -------- -------- --------
slave class create
31 1 1 0 0 0 0
-------------------------------------------------------------
Latch Miss Sources DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
slave class create ksvcreate 0 1 0
-------------------------------------------------------------
Parent Latch Statistics DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
Child Latch Statistics DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Logical Reads DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Total Logical Reads: 9,791
-> Captured Segments account for 77.9% of Total
Tablespace Subobject Obj. Logical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYSMAN SYSAUX MGMT_METRICS_RAW_PK INDEX 1,936 19.77
SYSMAN SYSAUX MGMT_CURRENT_METRICS INDEX 624 6.37
SYS SYSTEM SMON_SCN_TIME TABLE 576 5.88
SYS SYSTEM JOB$ TABLE 416 4.25
PERFSTAT USERS STATS$LATCH_PK INDEX 288 2.94
-------------------------------------------------------------
Segments by Physical Reads DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Total Physical Reads: 97
-> Captured Segments account for 105.2% of Total
Tablespace Subobject Obj. Physical
Owner Name Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
PERFSTAT USERS STATS$SYSSTAT_PK INDEX 28 28.87
SYS SYSAUX WRH$_SQL_PLAN_PK INDEX 15 15.46
SYS SYSAUX WRH$_SQL_PLAN TABLE 11 11.34
SYS SYSAUX WRH$_WAITSTAT_PK 50532_1033 INDEX 8 8.25
PERFSTAT USERS STATS$SQLTEXT_PK INDEX 7 7.22
-------------------------------------------------------------
Segments by Row Lock Waits DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
Segments by ITL Waits DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
Segments by Buffer Busy Waits DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
Dictionary Cache Stats DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> "Pct Misses" should be very low (< 2% in most cases)
-> "Final Usage" is the number of cache entries being used
Get Pct Scan Pct Mod Final
Cache Requests Miss Reqs Miss Reqs Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control 13 0.0 0 N/A 2 1
dc_global_oids 12 0.0 0 N/A 0 33
dc_object_ids 87 0.0 0 N/A 0 1,137
dc_objects 75 1.3 0 N/A 0 1,935
dc_profiles 4 0.0 0 N/A 0 1
dc_rollback_segments 78 0.0 0 N/A 0 16
dc_segments 5 0.0 0 N/A 5 1,011
dc_tablespaces 857 0.0 0 N/A 0 12
dc_users 150 0.0 0 N/A 0 25
outstanding_alerts 26 0.0 0 N/A 0 25
-------------------------------------------------------------
Library Cache Activity DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> "Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 47 0.0 86 0.0 0 0
TABLE/PROCEDURE 17 0.0 132 0.0 0 0
TRIGGER 6 0.0 37 0.0 0 0
-------------------------------------------------------------
Process Memory Summary DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> B: Begin snap E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
-> Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> ordered by Begin/End snapshot, Alloc (MB) desc
Hist
Avg Std Dev Max Max
Alloc Used Alloc Alloc Alloc Alloc Num Num
Category (MB) (MB) (MB) (MB) (MB) (MB) Proc Alloc
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B Other 146.6 N/A 5.9 10.9 45 45 25 25
Freeable 6.3 .0 .7 .5 2 N/A 9 9
SQL .4 .2 .0 .0 0 2 11 8
PL/SQL .2 .1 .0 .0 0 0 23 23
JAVA .0 .0 .0 .0 0 1 1 1
E Other 106.0 N/A 4.4 7.4 24 321 24 24
Freeable 5.6 .0 .7 .5 1 N/A 8 8
SQL .4 .2 .0 .0 0 2 10 7
PL/SQL .2 .0 .0 .0 0 0 22 22
JAVA .0 .0 .0 .0 0 1 1 1
-------------------------------------------------------------
SGA Memory Summary DB/Inst: IVRS/ivrs Snaps: 1103-1104
End Size (Bytes)
SGA regions Begin Size (Bytes) (if different)
------------------------------ ------------------- -------------------
Database Buffers 163,577,856
Fixed Size 1,261,612
Redo Buffers 2,928,640
Variable Size 159,387,604
-------------------
sum 327,155,712
-------------------------------------------------------------
SGA breakdown difference DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> ordered by Pool, Name
-> N/A value for Begin MB or End MB indicates the size of that Pool/Name was
insignificant, or zero in that snapshot
Pool Name Begin MB End MB % Diff
------ ------------------------------ -------------- -------------- -------
java free memory 2.7 2.7 0.00
java joxlod exec hp 5.1 5.1 0.00
java joxs heap .2 .2 0.00
large ASM map operations hashta .2 .2 0.00
large CTWR dba buffer .4 .4 0.00
large PX msg pool .2 .2 0.00
large free memory 1.2 1.2 0.00
large krcc extent chunk 2.0 2.0 0.00
shared ASH buffers 2.0 2.0 0.00
shared CCursor 3.3 3.3 0.00
shared Heap0: KGL 2.6 2.6 0.00
shared KCB Table Scan Buffer 3.8 3.8 0.00
shared KGLS heap 6.3 6.3 0.00
shared KQR M PO 3.4 3.4 0.01
shared KSFD SGA I/O b 3.8 3.8 0.00
shared PCursor 2.1 2.1 0.00
shared PL/SQL DIANA 1.7 1.7 0.00
shared PL/SQL MPCODE 3.2 3.2 0.00
shared free memory 54.7 54.7 0.01
shared kglsim hash table bkts 2.0 2.0 0.00
shared library cache 4.9 4.9 0.00
shared row cache 3.6 3.6 0.00
shared sql area 12.8 12.8 0.00
stream free memory 4.0 4.0 0.00
buffer_cache 156.0 156.0 0.00
fixed_sga 1.2 1.2 0.00
log_buffer 2.8 2.8 0.00
-------------------------------------------------------------
Streams CPU/IO Usage DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Streams processes ordered by CPU usage
-> CPU and I/O Time in micro seconds
Session Type CPU Time User I/O Time Sys I/O Time
------------------------- -------------- -------------- --------------
QMON Coordinator 46,026 0 0
QMON Slaves 33,023 0 0
-------------------------------------------------------------
Streams Capture DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
Streams Apply DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Queues DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
Buffered Subscribers DB/Inst: IVRS/ivrs Snaps: 1103-1104
No data exists for this section of the report.
-------------------------------------------------------------
Rule Set DB/Inst: IVRS/ivrs Snaps: 1103-1104
-> Rule Sets ordered by Evaluations
Fast SQL CPU Elapsed
Ruleset Name Evals Evals Execs Time Time
------------------------- -------- -------- -------- -------- --------
SYS.ALERT_QUE_R 0 0 0 0 0
-------------------------------------------------------------
Resource Limit Stats DB/Inst: IVRS/ivrs Snap: 1104
No data exists for this section of the report.
-------------------------------------------------------------
init.ora Parameters DB/Inst: IVRS/ivrs Snaps: 1103-1104
End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
audit_file_dest /oracle/app/oracle/admin/ivrs/adu
audit_sys_operations TRUE
background_dump_dest /oracle/app/oracle/admin/ivrs/bdu
compatible 10.2.0.3.0
control_files +DATA_1/ivrs/control01.ctl, +DATA
core_dump_dest /oracle/app/oracle/admin/ivrs/cdu
db_block_size 8192
db_domain karl.com
db_file_multiblock_read_count 16
db_name ivrs
db_recovery_file_dest /flash_reco/flash_recovery_area
db_recovery_file_dest_size 161061273600
dispatchers (PROTOCOL=TCP) (SERVICE=ivrsXDB)
job_queue_processes 10
log_archive_dest_1 LOCATION=USE_DB_RECOVERY_FILE_DES
log_archive_format ivrs_%t_%s_%r.arc
open_cursors 300
os_authent_prefix
os_roles FALSE
pga_aggregate_target 108003328
processes 150
recyclebin OFF
remote_login_passwordfile EXCLUSIVE
remote_os_authent FALSE
remote_os_roles FALSE
sga_target 327155712
spfile +DATA_1/ivrs/spfileivrs.ora
sql92_security TRUE
statistics_level TYPICAL
undo_management AUTO
undo_tablespace UNDOTBS1
user_dump_dest /oracle/app/oracle/admin/ivrs/udu
-------------------------------------------------------------
End of Report
Report written to awrrpt_1_1103_1104.txt
}}}
! The ASH report for SNAP_ID 1103
{{{
Using the report name ashrpt_1_0511_0110.txt
Summary of All User Input
-------------------------
Format : TEXT
DB Id : 2607950532
Inst num : 1
Begin time : 11-May-10 01:00:00
End time : 11-May-10 01:10:00
Slot width : Default
Report targets : 0
Report name : ashrpt_1_0511_0110.txt
ASH Report For IVRS/ivrs
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS 2607950532 ivrs 1 10.2.0.3.0 NO dbrocaix01.b
CPUs SGA Size Buffer Cache Shared Pool ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
1 312M (100%) 156M (50.0%) 110M (35.3%) 2.0M (0.6%)
Analysis Begin Time: 11-May-10 01:00:00
Analysis End Time: 11-May-10 01:10:00
Elapsed Time: 10.0 (mins)
Sample Count: 55
Average Active Sessions: 0.92
Avg. Active Session per CPU: 0.92
Report Target: None specified
Top User Events DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU CPU 83.64 0.77
-------------------------------------------------------------
Top Background Events DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
db file parallel write System I/O 7.27 0.07
CPU + Wait for CPU CPU 3.64 0.03
control file parallel write System I/O 3.64 0.03
log file parallel write System I/O 1.82 0.02
-------------------------------------------------------------
Top Event P1/P2/P3 Values DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
Event % Event P1 Value, P2 Value, P3 Value % Activity
------------------------------ ------- ----------------------------- ----------
Parameter 1 Parameter 2 Parameter 3
-------------------------- -------------------------- --------------------------
db file parallel write 7.27 "1","0","2147483647" 7.27
requests interrupt timeout
control file parallel write 3.64 "3","3","3" 3.64
files block# requests
log file parallel write 1.82 "1","11","1" 1.82
files blocks requests
-------------------------------------------------------------
Top Service/Module DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
Service Module % Activity Action % Action
-------------- ------------------------ ---------- ------------------ ----------
SYS$USERS Oracle Enterprise Manage 81.82 target 1 81.82
SYS$BACKGROUND UNNAMED 14.55 UNNAMED 14.55
MMON_SLAVE 1.82 Auto-Flush Slave A 1.82
SYS$USERS UNNAMED 1.82 UNNAMED 1.82
-------------------------------------------------------------
Top Client IDs DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL Command Types DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL Statements DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL using literals DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top PL/SQL Procedures DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
-> 'PL/SQL entry subprogram' represents the application's top-level
entry-point(procedure, function, trigger, package initialization
or RPC call) into PL/SQL.
-> 'PL/SQL current subprogram' is the pl/sql subprogram being executed
at the point of sampling . If the value is 'SQL', it represents
the percentage of time spent executing SQL for the particular
plsql entry subprogram
PLSQL Entry Subprogram % Activity
----------------------------------------------------------------- ----------
PLSQL Current Subprogram % Current
----------------------------------------------------------------- ----------
SYSMAN.EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS 81.82
SYSMAN.EMD_LOADER.EMD_RAW_PURGE 80.00
SQL 1.82
WKSYS.WK_JOB.INVOKE 1.82
UNKNOWN_PLSQL_ID <50092,11> 1.82
-------------------------------------------------------------
Top Sessions DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
-> '# Samples Active' shows the number of ASH samples in which the session
was found waiting for that particular event. The percentage shown
in this column is calculated with respect to wall clock time
and not total database activity.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
when the session was waiting for that particular event
-> For sessions running Parallel Queries, this section will NOT aggregate
the PQ slave activity into the session issuing the PQ. Refer to
the 'Top Sessions running PQs' section for such statistics.
Sid, Serial# % Activity Event % Event
--------------- ---------- ------------------------------ ----------
User Program # Samples Active XIDs
-------------------- ------------------------------ ------------------ --------
138, 245 81.82 CPU + Wait for CPU 81.82
SYSMAN oracle@dbrocai...el.com (J000) 45/60 [ 75%] 1
167, 1 7.27 db file parallel write 7.27
SYS oracle@dbrocai...el.com (DBW0) 4/60 [ 7%] 0
165, 1 3.64 control file parallel write 3.64
SYS oracle@dbrocai...el.com (CKPT) 2/60 [ 3%] 0
138, 243 1.82 CPU + Wait for CPU 1.82
SYS oracle@dbrocai...el.com (J000) 1/60 [ 2%] 0
154, 1 1.82 CPU + Wait for CPU 1.82
SYS oracle@dbrocai...el.com (O001) 1/60 [ 2%] 0
-------------------------------------------------------------
Top Blocking Sessions DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top Sessions running PQs DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top DB Objects DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top DB Files DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
No data exists for this section of the report.
-------------------------------------------------------------
Top Latches DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
No data exists for this section of the report.
-------------------------------------------------------------
Activity Over Time DB/Inst: IVRS/ivrs (May 11 01:00 to 01:10)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period
Slot Event
Slot Time (Duration) Count Event Count % Event
-------------------- -------- ------------------------------ -------- -------
01:00:00 (5.0 min) 31 CPU + Wait for CPU 28 50.91
db file parallel write 3 5.45
01:05:00 (5.0 min) 24 CPU + Wait for CPU 20 36.36
control file parallel write 2 3.64
db file parallel write 1 1.82
-------------------------------------------------------------
End of Report
Report written to ashrpt_1_0511_0110.txt
}}}
<<<
1) Start with the SID on the database
2) The SID translates to user karen on the PeopleSoft side
3) Users will alert the PeopleSoft admin that she's having problems
4) The PeopleSoft admin will enable tracing and increase the "lock fence" for that user
5) The DBA needs to tell the PeopleSoft admin the SID that's holding the lock and the SIDs that are blocked (see the query sketch below)
get:
tuxedo logs
app server logs
db server logs
web server logs
PSAPPSR configuration
<<<
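For step 5, a minimal sketch (not from the original note) of pulling the blocker and the blocked SIDs, assuming 10g+ where V$SESSION exposes BLOCKING_SESSION:
{{{
# list the sessions that are stuck and the SID each one is blocked by
sqlplus -s "/ as sysdba" <<'EOF'
set lines 200
select sid, serial#, username, blocking_session, event, seconds_in_wait
from v$session
where blocking_session is not null;
EOF
}}}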
MAA
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_TransientLogicalRollingUpgrade.pdf
http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-transientlogicalrollingu-1-131927.pdf
http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-upgrades-made-easy-131972.pdf
Oracle11g Data Guard: Database Rolling Upgrade Shell Script [ID 949322.1]
http://uhesse.wordpress.com/2011/08/10/rolling-upgrade-with-transient-logical-standby/
http://docs.oracle.com/cd/E11882_01/network.112/e10746/asotrans.htm#ASOAG600
http://husnusensoy.wordpress.com/2008/07/12/migrating-data-using-transportable-tablespacetts/
http://flowingdata.com/2008/08/19/3-worthwhile-alternatives-to-the-pie-chart/
http://www.databison.com/index.php/treemap-in-excel-coming-soon/
http://www.databison.com/index.php/treemap-add-in-for-excel-demo/
http://www.databison.com/index.php/excel-treemap/
http://www.e-junkie.com/shop/product/321057.php?section=affiliates
http://en.wikipedia.org/wiki/List_of_treemapping_software
http://code.google.com/p/treemap-gviz/wiki/Help
http://www.drasticdata.nl/DrasticTreemapGApi/index.html#Example
http://groups.google.com/group/google-visualization-api/browse_thread/thread/cb378039e42e0596/a375af91ff81a486
http://code.google.com/apis/chart/interactive/docs/gallery/treemap.html
http://www.wikiviz.org/wiki/Tools
http://groups.google.com/group/protovis/browse_thread/thread/7c065963493ac80a/608a0e1938bc308f
http://modernamericanman.com/ids499/protovis6.php
http://modernamericanman.com/ids499/protovis7.php <-- cool time series treemap viz
http://www.nytimes.com/interactive/2009/09/12/business/financial-markets-graphic.html <-- cool time series treemap viz
http://mbostock.github.com/d3/ex/treemap.html
http://www.inf.uni-konstanz.de/gk/pubsys/publishedFiles/ScKeMa06.pdf <-- treemap paper
''Treemap Tools''
xdiskusage http://xdiskusage.sourceforge.net/
''Excel treemap''
here's the tool and how to install it http://www.databison.com/index.php/excel-treemap/excel-treemap-add-in-documentation/
Restrictions on Triggers
Doc ID: Note:269819.1
ADMINISTER DATABASE TRIGGER Privilege Causes Logon Trigger to Skip Errors
Doc ID: Note:265012.1
How to Automate Grant Operations When New Objects Are Created in a SCHEMA/DATABASE
Doc ID: Note:210693.1
http://www.dba-oracle.com/m_trigger.htm
11g NEW PL/SQL: COMPOUND Trigger
Doc ID: Note:430847.1
Transactional Triggers within Forms - ON-LOCK, ON-UPDATE, ON-INSERT, ON-DELETE
Doc ID: Note:123013.1
Trigger Execution Sequence in Oracle Forms
Doc ID: Note:61675.1
Database or Logon Event Trigger becomes Invalid: Who can Connect?
Doc ID: Note:120712.1
RMS: Trigger Restrictions on Mutating Tables
Doc ID: Note:264978.1
How to Freeze the First Row and First Column on a Form
Doc ID: Note:29021.1
How to use Format Triggers in Reports
Doc ID: Note:31364.1
Fire Sequence of Database Triggers
Doc ID: Note:121196.1
Oracle8i - Database Trigger Enhancements
Doc ID: Note:74173.1
Audit Action Incorrect For Alter Trigger Command
Doc ID: Note:456086.1
Oracle10g Database Has Invalid Triggers In The Recyclebin
Doc ID: Note:402044.1
Database Event occuring even when corresponding trigger failing
Doc ID: Note:258774.1
ALTER ANY TRIGGER/PROCEDURE/PACKAGE VS. CREATE ANY TRIGGER/PROCEDURE/PACKAGE
Doc ID: Note:1008256.6
ALERT: Trigger Updates Wrong Column after Drop Function-Based Index
Doc ID: Note:259854.1
HOW TO USE KEY-OTHERS TRIGGER IN FORMS -- FUNCTION KEYS DO NOT PERFORM DEFAULT ACTIONS
Doc ID: Note:1004731.6
Implicit Database Trigger Compilation Fails With ORA-04045 and ORA-01031
Doc ID: Note:222467.1
How to execute simple DDL statements from a trigger
Doc ID: Note:196818.1
How to create a trigger with "referencing parent" clause
Doc ID: Note:165918.1
How to Log/Trap all the Errors Occurring in the Database?
Doc ID: Note:213680.1
''doing it with ADRCI and logrotate'' http://www.evernote.com/shard/s48/sh/ed44bc6b-cb96-419f-b6d8-8a865e86721d/36318c5fcbe562ee2850804e4d6a97de
{{{
$ adrci
ADRCI: Release 11.2.0.2.0 - Production on Tue Nov 1 09:20:16 2011 Copyright
(c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
ADR base = "/u01/app/oracle"
adrci> show homes
ADR Homes:
diag/asm/+asm/+ASM1
diag/tnslsnr/co01db01/listener
diag/tnslsnr/co01db01/listener_epmuat1
diag/tnslsnr/co01db01/epmuat1
diag/rdbms/fdmuat/FDMUAT1
diag/rdbms/epmuat/EPMUAT1
diag/rdbms/biuat01/BIUAT011
diag/rdbms/cbid/cbid1
diag/rdbms/cfdd/cfdd1
diag/rdbms/cepd/cepd1
diag/clients/user_oracle/host_1396280441_11
adrci> set homepath diag/rdbms/biuat01/BIUAT011
adrci> purge -age 10080 -type TRACE
}}}
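To run the purge from cron across every home, a minimal sketch (assuming the adrci exec="..." one-liner syntax and the same 7-day / 10080-minute retention; add or drop purge types to taste):
{{{
#!/bin/bash
# purge TRACE and INCIDENT data older than 7 days from every ADR home
for h in $(adrci exec="show homes" | grep -v 'ADR Homes:'); do
  echo "purging $h"
  adrci exec="set homepath $h; purge -age 10080 -type trace"
  adrci exec="set homepath $h; purge -age 10080 -type incident"
done
}}}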
http://download.oracle.com/docs/cd/E11882_01/server.112/e10701/adrci.htm
''the old stuff''
{{{
-- LOG FILES
find $(pwd) -name "*.log" -mtime +7 -exec ls -l {} \; > delete.txt
wc -l delete.txt
ls -1 | wc -l
find $(pwd) -name "*.log" -mtime +7 -exec rm -rf {} \;
-- TRACE FILES
find $(pwd) -name "*.trc" -mtime +7 -exec ls -l {} \; > delete.txt
wc -l delete.txt
ls -1 | wc -l
find $(pwd) -name "*.trc" -mtime +7 -exec rm -rf {} \;
-- AUD FILES
find $(pwd) -name "*.aud" -mtime +7 -exec ls -l {} \; > delete.txt
wc -l delete.txt
ls -1 | wc -l
find $(pwd) -name "*.aud" -mtime +7 -exec rm -rf {} \;
-- DIRECTORIES
find $(pwd) -type d -name "cdmp_*" -mtime +7 -exec ls -ld {} \; > delete.txt
wc -l delete.txt
ls -1 | wc -l
find $(pwd) -type d -name "cdmp_*" -mtime +7 -exec rm -rf {} \;
find $(pwd) -type d -name "core_*" -mtime +7 -exec ls -ld {} \; > delete.txt
wc -l delete.txt
ls -1 | wc -l
find $(pwd) -type d -name "core_*" -mtime +7 -exec rm -rf {} \;
}}}
http://structureddata.org/2007/11/21/troubleshooting-bad-execution-plans/
How to Use the Solaris Truss Command to Trace and Understand System Call Flow and Operation [ID 1010771.1] <-- ''good stuff''
Case Study: Using DTrace and truss in the Solaris 10 OS http://www.oracle.com/technetwork/systems/articles/dtrace-truss-jsp-140760.html
How to Analyze High CPU Utilization In Solaris [ID 1008930.1]
{{{
truss -f -t open,ioctl -u ':directio' sqlplus "/ as sysdba"
}}}
{{{
Alternatively, try `truss -u:directio a.out' to focus on this library function. In the test of my
dio program, I see
/1: open("testfile", O_RDONLY) = 3
/1@1: -> libc:directio(0x3, 0x1, 0xd2ff260c, 0xd2f008e9)
/1: ioctl(3, _ION('f', 76, 0), 0x00000001) = 0
/1@1: <- libc:directio() = 0
Or try `truss -t!all -u:directio a.out' to remove all syscalls in
output to make it cleaner.
Chao_ping's test on Oracle on Solaris:
File handle 26200/26201 are datafiles on ufs. Others are on vxfs with
Qio. _filesystemio_options=directio.
truss dbwr result:
ioctl(26201, _ION('f', 76, 0), 0x00000001) = 0
And with your second test plan:
[oracle@ebaysha2**8i]$grep direct dio3.log
/1@1: -> libc:directio(0x6655, 0x1, 0xfffffffe7e355e40, 0x746571fefefefeff)
/1@1: <- libc:directio() = 0
/1@1: -> libc:directio(0x6654, 0x1, 0xfffffffe7e355e40, 0x746571fefefefeff)
/1@1: <- libc:directio() = 0
/1@1: -> libc:directio(0x6653, 0x1, 0xfffffffe7e3559e0, 0x746571fefefefeff)
/1@1: <- libc:directio() = 0
/1@1: -> libc:directio(0x6652, 0x1, 0xfffffffe7e3559e0, 0x746571fefefefeff)
/1@1: <- libc:directio() = 0
}}}
{{{
$ truss -c -p 9704
^Csyscall seconds calls errors
read .00 1
write .00 3
open .00 2
close .00 10
time .00 2
lseek .00 2
times .03 282
semsys .00 31
ioctl .00 3 3
fdsync .00 1
fcntl .01 14
poll .01 146
sigprocmask .00 56
context .00 14
fstatvfs .00 3
writev .00 2
getrlimit .00 3
setitimer .00 28
lwp_create .00 2
lwp_self .00 1
lwp_cond_wai .03 427
lwp_cond_sig .15 427
kaio 5.49 469 430 <-- More kernelized IO time
stat64 .00 3 1
fstat64 .00 3
pread64 .00 32
pwrite64 .35 432 <-- Each pwrite() call takes 350/432 = 0.8 ms
open64 .00 6
---- --- ---
sys totals: 6.07 2405 434
usr time: 1.71
elapsed: 36.74
}}}
''real world experience - intermittent disconnect'' - http://www.evernote.com/shard/s48/sh/ab146159-4b2c-4960-929b-e0f2bda558a4/cb5dbbb405c6dcbdc09f3b9cce3bc2a6
''dtruss'' http://www.brendangregg.com/DTrace/dtruss_example.txt
https://changetracking.wordpress.com/2019/02/13/troubleshooting-oracle-session-disconnection-slowness/?utm_campaign=59366ec794a3267b2601a574&utm_content=5c640cb4d282a70001000668&utm_medium=smarpshare&utm_source=linkedin
/***
Description: Contains the stuff you need to use Tiddlyspot
Note, you also need UploadPlugin, PasswordOptionPlugin and LoadRemoteFileThroughProxy
from http://tiddlywiki.bidix.info for a complete working Tiddlyspot site.
***/
//{{{
// edit this if you are migrating sites or retrofitting an existing TW
config.tiddlyspotSiteId = 'karlarao';
// make it so you can by default see edit controls via http
config.options.chkHttpReadOnly = false;
window.readOnly = false; // make sure of it (for tw 2.2)
window.showBackstage = true; // show backstage too
// disable autosave in d3
if (window.location.protocol != "file:")
config.options.chkGTDLazyAutoSave = false;
// tweak shadow tiddlers to add upload button, password entry box etc
with (config.shadowTiddlers) {
SiteUrl = 'http://'+config.tiddlyspotSiteId+'.tiddlyspot.com';
SideBarOptions = SideBarOptions.replace(/(<<saveChanges>>)/,"$1<<tiddler TspotSidebar>>");
OptionsPanel = OptionsPanel.replace(/^/,"<<tiddler TspotOptions>>");
DefaultTiddlers = DefaultTiddlers.replace(/^/,"[[WelcomeToTiddlyspot]] ");
MainMenu = MainMenu.replace(/^/,"[[WelcomeToTiddlyspot]] ");
}
// create some shadow tiddler content
merge(config.shadowTiddlers,{
'TspotOptions':[
"tiddlyspot password:",
"<<option pasUploadPassword>>",
""
].join("\n"),
'TspotControls':[
"| tiddlyspot password:|<<option pasUploadPassword>>|",
"| site management:|<<upload http://" + config.tiddlyspotSiteId + ".tiddlyspot.com/store.cgi index.html . . " + config.tiddlyspotSiteId + ">>//(requires tiddlyspot password)//<br>[[control panel|http://" + config.tiddlyspotSiteId + ".tiddlyspot.com/controlpanel]], [[download (go offline)|http://" + config.tiddlyspotSiteId + ".tiddlyspot.com/download]]|",
"| links:|[[tiddlyspot.com|http://tiddlyspot.com/]], [[FAQs|http://faq.tiddlyspot.com/]], [[blog|http://tiddlyspot.blogspot.com/]], email [[support|mailto:support@tiddlyspot.com]] & [[feedback|mailto:feedback@tiddlyspot.com]], [[donate|http://tiddlyspot.com/?page=donate]]|"
].join("\n"),
'WelcomeToTiddlyspot':[
"This document is a ~TiddlyWiki from tiddlyspot.com. A ~TiddlyWiki is an electronic notebook that is great for managing todo lists, personal information, and all sorts of things.",
"",
"@@font-weight:bold;font-size:1.3em;color:#444; //What now?// @@ Before you can save any changes, you need to enter your password in the form below. Then configure privacy and other site settings at your [[control panel|http://" + config.tiddlyspotSiteId + ".tiddlyspot.com/controlpanel]] (your control panel username is //" + config.tiddlyspotSiteId + "//).",
"<<tiddler TspotControls>>",
"See also GettingStarted.",
"",
"@@font-weight:bold;font-size:1.3em;color:#444; //Working online// @@ You can edit this ~TiddlyWiki right now, and save your changes using the \"save to web\" button in the column on the right.",
"",
"@@font-weight:bold;font-size:1.3em;color:#444; //Working offline// @@ A fully functioning copy of this ~TiddlyWiki can be saved onto your hard drive or USB stick. You can make changes and save them locally without being connected to the Internet. When you're ready to sync up again, just click \"upload\" and your ~TiddlyWiki will be saved back to tiddlyspot.com.",
"",
"@@font-weight:bold;font-size:1.3em;color:#444; //Help!// @@ Find out more about ~TiddlyWiki at [[TiddlyWiki.com|http://tiddlywiki.com]]. Also visit [[TiddlyWiki.org|http://tiddlywiki.org]] for documentation on learning and using ~TiddlyWiki. New users are especially welcome on the [[TiddlyWiki mailing list|http://groups.google.com/group/TiddlyWiki]], which is an excellent place to ask questions and get help. If you have a tiddlyspot related problem email [[tiddlyspot support|mailto:support@tiddlyspot.com]].",
"",
"@@font-weight:bold;font-size:1.3em;color:#444; //Enjoy :)// @@ We hope you like using your tiddlyspot.com site. Please email [[feedback@tiddlyspot.com|mailto:feedback@tiddlyspot.com]] with any comments or suggestions."
].join("\n"),
'TspotSidebar':[
"<<upload http://" + config.tiddlyspotSiteId + ".tiddlyspot.com/store.cgi index.html . . " + config.tiddlyspotSiteId + ">><html><a href='http://" + config.tiddlyspotSiteId + ".tiddlyspot.com/download' class='button'>download</a></html>"
].join("\n")
});
//}}}
Tuning the linux network subsystem (IBM redpaper)
http://www.scribd.com/doc/50160287/21/Tuning-the-network-subsystem
http://blog.ronnyegner-consulting.de/2010/07/22/tuning-linux-for-oracle/
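A minimal sketch of the kind of settings those papers cover; the values below are the socket buffer sizes commonly recommended in Oracle's Linux install guides (an assumption on my part, so verify against the links before applying):
{{{
# append the usual Oracle-recommended network buffer sizes, then reload
cat >> /etc/sysctl.conf <<'EOF'
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF
sysctl -p
}}}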
sudo apt-get install twiki
or TWiki
! Supplemental docs
http://twiki.org/cgi-bin/view/TWiki/TWikiInstallationGuide
http://twiki.org/cgi-bin/view/TWiki/InstallingTWiki
http://twiki.org/cgi-bin/view/Codev/DownloadTWiki
''vm'' http://twiki.org/cgi-bin/view/Codev/DownloadTWikiVM
http://twiki.org/cgi-bin/view/Support/InstallingIsDifficult
http://twiki.org/cgi-bin/view/Support/SID-00243
! Security and comparison
''comparison'' http://twiki.org/cgi-bin/view/Codev/ComparisonOfMediaWikiToTWiki
''matrix comparison'' http://www.wikimatrix.org/compare/MediaWiki+TiddlyWiki+TWiki
''mediawiki'' security and access control http://www.mediawiki.org/wiki/Manual:Preventing_access
mom this is how twitter works
http://www.jhische.com/twitter/
http://iiotzov.files.wordpress.com/2011/08/iotzov_oem_repository.pdf
Monitor Exadata Database Machine: Configuring User Defined Metrics for Additional Network Monitoring (1110675.1)
http://apex.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5511,29
http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/exadata/exadatav2/37_DBM_EM_UDM/37_dbm_em_udm_viewlet_swf.html
http://www.oracle.com/technetwork/articles/servers-storage-admin/signed-kernel-modules-1891111.html
http://public-yum.oracle.com/beta/
http://aberdave.blogspot.co.at/2011/08/universal-scalability-law-usl-in-oracle.html
http://www.underwaterhockey.com/nationals/2012/index.html
http://www.swordfishuwh.com/accs2012/index.html
''wikipedia'' http://en.wikipedia.org/wiki/Underwater_hockey
''rules of the game''
http://www.usauwh.com/media/
http://www.usauwh.com/media/USOA/UWH_Rules_v1_30_Vol1_finalGH.pdf
http://www.usauwh.com/media/USOA/UWH_Rules_v1_30_Vol2_finalGH.pdf
''tips and tricks''
''Yori clinic'' http://www.scribd.com/doc/62540424/Georgia-Tech-Underwater-Hockey-Clinic-Oct-24-2009-by-Yori-Huynh
http://www.scribd.com/illinoisuwh
<<showtoc>>
! comparison
sparc UltraSPARC-III+ http://browser.primatelabs.com/geekbench2/search?utf8=%E2%9C%93&q=sparc+UltraSPARC-III%2B+
sparc SPARC64-VII 64-bit http://browser.primatelabs.com/geekbench2/search?utf8=%E2%9C%93&q=sparc+SPARC64-VII+64-bit
! references
{{{
UltraSPARC-III+ vs SPARC64-VII
http://sparc.org/timeline/
https://en.wikipedia.org/wiki/UltraSPARC_III
https://en.wikipedia.org/wiki/SPARC64_VI#SPARC64_VII
https://en.wikipedia.org/wiki/SPARC_Enterprise
http://www.tech-specifications.com/SunServers/M5000.html SPARC64 VII: 2.4GHz
A SPARC CPU Overview: SPARC64-VII+ (M3), T3, T4, T5, M5, M10 - how do they compare? https://blogs.oracle.com/orasysat/entry/sparc_t4_t5_m5_m10
sun m5000 specs https://www.google.com/search?q=sun+m500+wikipedia&oq=sun+m500+wikipedia&aqs=chrome..69i57j0l5.4470j0j1&sourceid=chrome&es_sm=119&ie=UTF-8#q=sun+m5000+specs
sun m5000 https://www.google.com/search?q=sun+m5000&es_sm=119&source=lnms&tbm=isch&sa=X&ved=0CAgQ_AUoAmoVChMI17WPm8GwxwIVh8-ACh0j5wrW&biw=1417&bih=757&dpr=0.9
sun netra t12 https://www.google.com/search?q=sun+netra+t12&es_sm=119&biw=1417&bih=757&source=lnms&tbm=isch&sa=X&ved=0CAcQ_AUoAmoVChMI0_K7k8GwxwIVhI8NCh1jqQjh&dpr=0.9
UltraSPARC III vs SPARC64 VII https://www.google.com/search?q=UltraSPARC+III+vs+SPARC64+VII&oq=ultra&aqs=chrome.1.69i57j69i59j69i65l3j0.3190j0j1&sourceid=chrome&es_sm=119&ie=UTF-8
}}}
''Understanding the SCN'' http://www.dbapool.com/articles/1029200701.html
''SCN – What, why, and how?'' http://orainternals.wordpress.com/2012/01/19/scn-what-why-and-how/
http://www.ora-solutions.net/web/2012/01/19/oracle-scn-problem/
http://blog.ronnyegner-consulting.de/2012/02/10/the-next-major-outage-could-be-caused-by-a-fundamental-oracle-design-problem-the-scn/
-- UNDO
''UNDO troubleshooting''
http://www.oraclenerd.com/2009/11/undo-brain-damage.html
How To Size UNDO Tablespace For Automatic Undo Management
Doc ID: 262066.1
10g NEW FEATURE on AUTOMATIC UNDO RETENTION
Doc ID: 240746.1
Automatic Tuning of Undo_retention Causes Space Problems
Doc ID: 420525.1
FAQ – Automatic Undo Management (AUM) / System Managed Undo (SMU)
Doc ID: 461480.1
ORA-1555 Using Automatic Undo Management - How to troubleshoot
Doc ID: Note:389554.1
ORA-01555 Using Automatic Undo Management - Causes and Solutions
Doc ID: Note:269814.1
ORA-01555 "Snapshot too old" - Detailed Explanation
Doc ID: Note:40689.1
Full UNDO Tablespace In 10gR2
Doc ID: 413732.1
Database Transaction's info
Doc ID: 832368.1
LOBS - Storage, Read-consistency and Rollback
Doc ID: Note:162345.1
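The sizing note above (Doc ID 262066.1) boils down to undo_size = undo_retention * undo_blocks_per_sec * db_block_size. A minimal sketch of that computation against V$UNDOSTAT, using the peak block rate (you'd normally pad the result):
{{{
# classic undo sizing formula, peak rate taken from v$undostat
sqlplus -s "/ as sysdba" <<'EOF'
select round(
         (select value from v$parameter where name = 'undo_retention') *
         (select max(undoblks/((end_time-begin_time)*86400)) from v$undostat) *
         (select value from v$parameter where name = 'db_block_size')
         / 1024 / 1024) as undo_mb_needed
from dual;
EOF
}}}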
-- TRANSACTION
Database Transaction's info
Doc ID: 832368.1
-- ROLLBACK SEGMENT
Rollback Segment Configuration & Tips
Doc ID: 69464.1
https://oracle-base.com/articles/12c/auditing-enhancements-12cr1
https://oracle-base.com/blog/2015/06/29/auditing-enhancements-audit-policies-and-unified-audit-trail-in-oracle-database-12c/
https://oracle-base.com/articles/12c/auditing-enhancements-12cr1#enable-disable-pure-unified-auditing
https://www.integrigy.com/files/Integrigy%20Oracle%2012c%20Unified%20Auditing.pdf
https://www.integrigy.com/security-resources/oracle-12c-unified-auditing
https://www.integrigy.com/oracle-security-blog/what-oracle-12-unified-auditing-view-unifiedaudittrail-94-columns
<<<
keep track of database link usage
<<<
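A sketch of how that could start with a 12c unified audit policy; note this only captures DDL on database links (CREATE/ALTER/DROP are standard auditable actions), the policy name is made up here, and tracking actual traffic over a link needs a different approach:
{{{
# db_link_ddl_pol is a hypothetical policy name; requires 12c unified auditing
sqlplus -s "/ as sysdba" <<'EOF'
create audit policy db_link_ddl_pol
  actions create database link, alter database link, drop database link;
audit policy db_link_ddl_pol;
EOF
}}}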
@@Can I upload an existing TiddlyWiki to a tiddlyspot.com site?@@
Updated 10 November, 2009
Yes, you can.
Here's how to do it:
<<<
Sign up for a new Tiddlyspot site at http://tiddlyspot.com/
(Suppose you called it yoursiteid, accessible at http://yoursiteid.tiddlyspot.com)
Now, open your existing TW (from your local hard disk)
Go to backstage, click import, and use http://yoursiteid.tiddlyspot.com as the "URL or path name" to import from
Check the checkboxes next to UploadPlugin, TspotSetupPlugin, PasswordOptionsPlugin, LoadRemoteFileThroughProxy
Uncheck the 'keep linked' option
Click import
Save your existing local TW (DON'T FORGET THIS STEP)
Reload your existing local TW (DON'T FORGET THIS STEP)
Enter your Tiddlyspot password under the options>> slider
Click 'upload' from the right sidebar
Check that it worked by visiting http://yoursiteid.tiddlyspot.com/ and doing a shift+reload.
<<<
@@To generate RSS feed when uploading@@
Go to "Backstage Area" (that's on the topmost right section)
then click on "Tweak" tab
and check on the "Generate an RSS feed when saving changes"
that's it!
| !date | !user | !location | !storeUrl | !uploadDir | !toFilename | !backupdir | !origin |
| 24/11/2014 13:48:33 | KarlArao | [[karlarao.html|file:///Users/karl/Dropbox/Documents/KnowledgeFiles/InformationTechnology/TiddlyWiki/karlarao.html]] | [[store.cgi|http://karlarao.tiddlyspot.com/store.cgi]] | . | [[index.html | http://karlarao.tiddlyspot.com/index.html]] | . | ok |
| 06/02/2015 08:52:59 | KarlArao | [[karlarao.html|file:///Users/karl/Dropbox/Documents/KnowledgeFiles/InformationTechnology/TiddlyWiki/karlarao.html]] | [[store.cgi|http://karlarao.tiddlyspot.com/store.cgi]] | . | [[index.html | http://karlarao.tiddlyspot.com/index.html]] | . | ok |
| 12/07/2015 23:28:58 | KarlArao | [[karlarao.html|file:///Users/karl/Dropbox/Documents/KnowledgeFiles/InformationTechnology/TiddlyWiki/karlarao.html]] | [[store.cgi|http://karlarao.tiddlyspot.com/store.cgi]] | . | [[index.html | http://karlarao.tiddlyspot.com/index.html]] | . | ok |
| 16/10/2015 12:38:58 | KarlArao | [[karlarao.html|file:///Users/karl/Dropbox/Documents/KnowledgeFiles/InformationTechnology/TiddlyWiki/karlarao.html]] | [[store.cgi|http://karlarao.tiddlyspot.com/store.cgi]] | . | [[index.html | http://karlarao.tiddlyspot.com/index.html]] | . | ok |
| 27/04/2016 12:41:15 | KarlArao | [[karlarao.html|file:///Users/karl/Dropbox/Documents/KnowledgeFiles/InformationTechnology/TiddlyWiki/karlarao.html]] | [[store.cgi|http://karlarao.tiddlyspot.com/store.cgi]] | . | [[index.html | http://karlarao.tiddlyspot.com/index.html]] | . | ok |
| 27/04/2016 12:50:08 | KarlArao | [[karlarao.html|file:///Users/karl/Dropbox/Documents/KnowledgeFiles/InformationTechnology/TiddlyWiki/karlarao.html]] | [[store.cgi|http://karlarao.tiddlyspot.com/store.cgi]] | . | [[index.html | http://karlarao.tiddlyspot.com/index.html]] | . | ok |
| 03/07/2016 00:28:04 | KarlArao | [[karlarao.html|file:///Users/karl/Dropbox/Documents/KnowledgeFiles/InformationTechnology/TiddlyWiki/karlarao.html]] | [[store.cgi|http://karlarao.tiddlyspot.com/store.cgi]] | . | [[index.html | http://karlarao.tiddlyspot.com/index.html]] | . | ok |
| 15/09/2016 01:18:38 | KarlArao | [[karlarao.html|file:///Users/karl/Dropbox/Documents/KnowledgeFiles/InformationTechnology/TiddlyWiki/karlarao.html]] | [[store.cgi|http://karlarao.tiddlyspot.com/store.cgi]] | . | [[index.html | http://karlarao.tiddlyspot.com/index.html]] | . | ok |
| 21/06/2017 18:43:56 | KarlArao | [[karlarao.html|file:///H:/Documents/KnowledgeFiles/InformationTechnology/TiddlyWiki/karlarao.html]] | [[store.cgi|http://karlarao.tiddlyspot.com/store.cgi]] | . | [[index.html | http://karlarao.tiddlyspot.com/index.html]] | . | ok |
| 21/06/2017 19:10:39 | KarlArao | [[karlarao.html|file:///H:/Documents/KnowledgeFiles/InformationTechnology/TiddlyWiki/karlarao.html]] | [[store.cgi|http://karlarao.tiddlyspot.com/store.cgi]] | . | [[index.html | http://karlarao.tiddlyspot.com/index.html]] | . | ok |
/***
|''Name:''|UploadPlugin|
|''Description:''|Save to web a TiddlyWiki|
|''Version:''|4.1.3|
|''Date:''|Feb 24, 2008|
|''Source:''|http://tiddlywiki.bidix.info/#UploadPlugin|
|''Documentation:''|http://tiddlywiki.bidix.info/#UploadPluginDoc|
|''Author:''|BidiX (BidiX (at) bidix (dot) info)|
|''License:''|[[BSD open source license|http://tiddlywiki.bidix.info/#%5B%5BBSD%20open%20source%20license%5D%5D ]]|
|''~CoreVersion:''|2.2.0|
|''Requires:''|PasswordOptionPlugin|
***/
//{{{
version.extensions.UploadPlugin = {
major: 4, minor: 1, revision: 3,
date: new Date("Feb 24, 2008"),
source: 'http://tiddlywiki.bidix.info/#UploadPlugin',
author: 'BidiX (BidiX (at) bidix (dot) info',
coreVersion: '2.2.0'
};
//
// Environment
//
if (!window.bidix) window.bidix = {}; // bidix namespace
bidix.debugMode = false; // true to activate both in Plugin and UploadService
//
// Upload Macro
//
config.macros.upload = {
// default values
defaultBackupDir: '', //no backup
defaultStoreScript: "store.php",
defaultToFilename: "index.html",
defaultUploadDir: ".",
authenticateUser: true // UploadService Authenticate User
};
config.macros.upload.label = {
promptOption: "Save and Upload this TiddlyWiki with UploadOptions",
promptParamMacro: "Save and Upload this TiddlyWiki in %0",
saveLabel: "save to web",
saveToDisk: "save to disk",
uploadLabel: "upload"
};
config.macros.upload.messages = {
noStoreUrl: "No store URL in parameters or options",
usernameOrPasswordMissing: "Username or password missing"
};
config.macros.upload.handler = function(place,macroName,params) {
if (readOnly)
return;
var label;
if (document.location.toString().substr(0,4) == "http")
label = this.label.saveLabel;
else
label = this.label.uploadLabel;
var prompt;
if (params[0]) {
prompt = this.label.promptParamMacro.toString().format([this.destFile(params[0],
(params[1] ? params[1]:bidix.basename(window.location.toString())), params[3])]);
} else {
prompt = this.label.promptOption;
}
createTiddlyButton(place, label, prompt, function() {config.macros.upload.action(params);}, null, null, this.accessKey);
};
config.macros.upload.action = function(params)
{
// for missing macro parameter set value from options
if (!params) params = {};
var storeUrl = params[0] ? params[0] : config.options.txtUploadStoreUrl;
var toFilename = params[1] ? params[1] : config.options.txtUploadFilename;
var backupDir = params[2] ? params[2] : config.options.txtUploadBackupDir;
var uploadDir = params[3] ? params[3] : config.options.txtUploadDir;
var username = params[4] ? params[4] : config.options.txtUploadUserName;
var password = config.options.pasUploadPassword; // for security reason no password as macro parameter
// for still missing parameter set default value
if ((!storeUrl) && (document.location.toString().substr(0,4) == "http"))
storeUrl = bidix.dirname(document.location.toString())+'/'+config.macros.upload.defaultStoreScript;
if (storeUrl.substr(0,4) != "http")
storeUrl = bidix.dirname(document.location.toString()) +'/'+ storeUrl;
if (!toFilename)
toFilename = bidix.basename(window.location.toString());
if (!toFilename)
toFilename = config.macros.upload.defaultToFilename;
if (!uploadDir)
uploadDir = config.macros.upload.defaultUploadDir;
if (!backupDir)
backupDir = config.macros.upload.defaultBackupDir;
// report error if still missing
if (!storeUrl) {
alert(config.macros.upload.messages.noStoreUrl);
clearMessage();
return false;
}
if (config.macros.upload.authenticateUser && (!username || !password)) {
alert(config.macros.upload.messages.usernameOrPasswordMissing);
clearMessage();
return false;
}
bidix.upload.uploadChanges(false,null,storeUrl, toFilename, uploadDir, backupDir, username, password);
return false;
};
config.macros.upload.destFile = function(storeUrl, toFilename, uploadDir)
{
if (!storeUrl)
return null;
var dest = bidix.dirname(storeUrl);
if (uploadDir && uploadDir != '.')
dest = dest + '/' + uploadDir;
dest = dest + '/' + toFilename;
return dest;
};
//
// uploadOptions Macro
//
config.macros.uploadOptions = {
handler: function(place,macroName,params) {
var wizard = new Wizard();
wizard.createWizard(place,this.wizardTitle);
wizard.addStep(this.step1Title,this.step1Html);
var markList = wizard.getElement("markList");
var listWrapper = document.createElement("div");
markList.parentNode.insertBefore(listWrapper,markList);
wizard.setValue("listWrapper",listWrapper);
this.refreshOptions(listWrapper,false);
var uploadCaption;
if (document.location.toString().substr(0,4) == "http")
uploadCaption = config.macros.upload.label.saveLabel;
else
uploadCaption = config.macros.upload.label.uploadLabel;
wizard.setButtons([
{caption: uploadCaption, tooltip: config.macros.upload.label.promptOption,
onClick: config.macros.upload.action},
{caption: this.cancelButton, tooltip: this.cancelButtonPrompt, onClick: this.onCancel}
]);
},
options: [
"txtUploadUserName",
"pasUploadPassword",
"txtUploadStoreUrl",
"txtUploadDir",
"txtUploadFilename",
"txtUploadBackupDir",
"chkUploadLog",
"txtUploadLogMaxLine"
],
refreshOptions: function(listWrapper) {
var opts = [];
for(i=0; i<this.options.length; i++) {
var opt = {};
opt.option = "";
n = this.options[i];
opt.name = n;
opt.lowlight = !config.optionsDesc[n];
opt.description = opt.lowlight ? this.unknownDescription : config.optionsDesc[n];
opts.push(opt);
}
var listview = ListView.create(listWrapper,opts,this.listViewTemplate);
for(n=0; n<opts.length; n++) {
var type = opts[n].name.substr(0,3);
var h = config.macros.option.types[type];
if (h && h.create) {
h.create(opts[n].colElements['option'],type,opts[n].name,opts[n].name,"no");
}
}
},
onCancel: function(e)
{
backstage.switchTab(null);
return false;
},
wizardTitle: "Upload with options",
step1Title: "These options are saved in cookies in your browser",
step1Html: "<input type='hidden' name='markList'></input><br>",
cancelButton: "Cancel",
cancelButtonPrompt: "Cancel prompt",
listViewTemplate: {
columns: [
{name: 'Description', field: 'description', title: "Description", type: 'WikiText'},
{name: 'Option', field: 'option', title: "Option", type: 'String'},
{name: 'Name', field: 'name', title: "Name", type: 'String'}
],
rowClasses: [
{className: 'lowlight', field: 'lowlight'}
]}
};
//
// upload functions
//
if (!bidix.upload) bidix.upload = {};
if (!bidix.upload.messages) bidix.upload.messages = {
//from saving
invalidFileError: "The original file '%0' does not appear to be a valid TiddlyWiki",
backupSaved: "Backup saved",
backupFailed: "Failed to upload backup file",
rssSaved: "RSS feed uploaded",
rssFailed: "Failed to upload RSS feed file",
emptySaved: "Empty template uploaded",
emptyFailed: "Failed to upload empty template file",
mainSaved: "Main TiddlyWiki file uploaded",
mainFailed: "Failed to upload main TiddlyWiki file. Your changes have not been saved",
//specific upload
loadOriginalHttpPostError: "Can't get original file",
aboutToSaveOnHttpPost: 'About to upload on %0 ...',
storePhpNotFound: "The store script '%0' was not found."
};
bidix.upload.uploadChanges = function(onlyIfDirty,tiddlers,storeUrl,toFilename,uploadDir,backupDir,username,password)
{
var callback = function(status,uploadParams,original,url,xhr) {
if (!status) {
displayMessage(bidix.upload.messages.loadOriginalHttpPostError);
return;
}
if (bidix.debugMode)
alert(original.substr(0,500)+"\n...");
// Locate the storeArea div's
var posDiv = locateStoreArea(original);
if((posDiv[0] == -1) || (posDiv[1] == -1)) {
alert(config.messages.invalidFileError.format([localPath]));
return;
}
bidix.upload.uploadRss(uploadParams,original,posDiv);
};
if(onlyIfDirty && !store.isDirty())
return;
clearMessage();
// save on localdisk ?
if (document.location.toString().substr(0,4) == "file") {
var path = document.location.toString();
var localPath = getLocalPath(path);
saveChanges();
}
// get original
var uploadParams = new Array(storeUrl,toFilename,uploadDir,backupDir,username,password);
var originalPath = document.location.toString();
// If url is a directory : add index.html
if (originalPath.charAt(originalPath.length-1) == "/")
originalPath = originalPath + "index.html";
var dest = config.macros.upload.destFile(storeUrl,toFilename,uploadDir);
var log = new bidix.UploadLog();
log.startUpload(storeUrl, dest, uploadDir, backupDir);
displayMessage(bidix.upload.messages.aboutToSaveOnHttpPost.format([dest]));
if (bidix.debugMode)
alert("about to execute Http - GET on "+originalPath);
var r = doHttp("GET",originalPath,null,null,username,password,callback,uploadParams,null);
if (typeof r == "string")
displayMessage(r);
return r;
};
bidix.upload.uploadRss = function(uploadParams,original,posDiv)
{
var callback = function(status,params,responseText,url,xhr) {
if(status) {
var destfile = responseText.substring(responseText.indexOf("destfile:")+9,responseText.indexOf("\n", responseText.indexOf("destfile:")));
displayMessage(bidix.upload.messages.rssSaved,bidix.dirname(url)+'/'+destfile);
bidix.upload.uploadMain(params[0],params[1],params[2]);
} else {
displayMessage(bidix.upload.messages.rssFailed);
}
};
// do uploadRss
if(config.options.chkGenerateAnRssFeed) {
var rssPath = uploadParams[1].substr(0,uploadParams[1].lastIndexOf(".")) + ".xml";
var rssUploadParams = new Array(uploadParams[0],rssPath,uploadParams[2],'',uploadParams[4],uploadParams[5]);
var rssString = generateRss();
// no UnicodeToUTF8 conversion needed when location is "file" !!!
if (document.location.toString().substr(0,4) != "file")
rssString = convertUnicodeToUTF8(rssString);
bidix.upload.httpUpload(rssUploadParams,rssString,callback,Array(uploadParams,original,posDiv));
} else {
bidix.upload.uploadMain(uploadParams,original,posDiv);
}
};
bidix.upload.uploadMain = function(uploadParams,original,posDiv)
{
var callback = function(status,params,responseText,url,xhr) {
var log = new bidix.UploadLog();
if(status) {
// if backupDir specified
if ((params[3]) && (responseText.indexOf("backupfile:") > -1)) {
var backupfile = responseText.substring(responseText.indexOf("backupfile:")+11,responseText.indexOf("\n", responseText.indexOf("backupfile:")));
displayMessage(bidix.upload.messages.backupSaved,bidix.dirname(url)+'/'+backupfile);
}
var destfile = responseText.substring(responseText.indexOf("destfile:")+9,responseText.indexOf("\n", responseText.indexOf("destfile:")));
displayMessage(bidix.upload.messages.mainSaved,bidix.dirname(url)+'/'+destfile);
store.setDirty(false);
log.endUpload("ok");
} else {
alert(bidix.upload.messages.mainFailed);
displayMessage(bidix.upload.messages.mainFailed);
log.endUpload("failed");
}
};
// do uploadMain
var revised = bidix.upload.updateOriginal(original,posDiv);
bidix.upload.httpUpload(uploadParams,revised,callback,uploadParams);
};
bidix.upload.httpUpload = function(uploadParams,data,callback,params)
{
var localCallback = function(status,params,responseText,url,xhr) {
url = (url.indexOf("nocache=") < 0 ? url : url.substring(0,url.indexOf("nocache=")-1));
if (xhr.status == 404)
alert(bidix.upload.messages.storePhpNotFound.format([url]));
if ((bidix.debugMode) || (responseText.indexOf("Debug mode") >= 0 )) {
alert(responseText);
if (responseText.indexOf("Debug mode") >= 0 )
responseText = responseText.substring(responseText.indexOf("\n\n")+2);
} else if (responseText.charAt(0) != '0')
alert(responseText);
if (responseText.charAt(0) != '0')
status = null;
callback(status,params,responseText,url,xhr);
};
// do httpUpload
var boundary = "---------------------------"+"AaB03x";
var uploadFormName = "UploadPlugin";
// compose headers data
var sheader = "";
sheader += "--" + boundary + "\r\nContent-disposition: form-data; name=\"";
sheader += uploadFormName +"\"\r\n\r\n";
sheader += "backupDir="+uploadParams[3] +
";user=" + uploadParams[4] +
";password=" + uploadParams[5] +
";uploaddir=" + uploadParams[2];
if (bidix.debugMode)
sheader += ";debug=1";
sheader += ";;\r\n";
sheader += "\r\n" + "--" + boundary + "\r\n";
sheader += "Content-disposition: form-data; name=\"userfile\"; filename=\""+uploadParams[1]+"\"\r\n";
sheader += "Content-Type: text/html;charset=UTF-8" + "\r\n";
sheader += "Content-Length: " + data.length + "\r\n\r\n";
// compose trailer data
var strailer = new String();
strailer = "\r\n--" + boundary + "--\r\n";
data = sheader + data + strailer;
if (bidix.debugMode) alert("about to execute Http - POST on "+uploadParams[0]+"\n with \n"+data.substr(0,500)+ " ... ");
var r = doHttp("POST",uploadParams[0],data,"multipart/form-data; ;charset=UTF-8; boundary="+boundary,uploadParams[4],uploadParams[5],localCallback,params,null);
if (typeof r == "string")
displayMessage(r);
return r;
};
// same as Saving's updateOriginal but without convertUnicodeToUTF8 calls
bidix.upload.updateOriginal = function(original, posDiv)
{
if (!posDiv)
posDiv = locateStoreArea(original);
if((posDiv[0] == -1) || (posDiv[1] == -1)) {
alert(config.messages.invalidFileError.format([localPath]));
return;
}
var revised = original.substr(0,posDiv[0] + startSaveArea.length) + "\n" +
store.allTiddlersAsHtml() + "\n" +
original.substr(posDiv[1]);
var newSiteTitle = getPageTitle().htmlEncode();
revised = revised.replaceChunk("<title"+">","</title"+">"," " + newSiteTitle + " ");
revised = updateMarkupBlock(revised,"PRE-HEAD","MarkupPreHead");
revised = updateMarkupBlock(revised,"POST-HEAD","MarkupPostHead");
revised = updateMarkupBlock(revised,"PRE-BODY","MarkupPreBody");
revised = updateMarkupBlock(revised,"POST-SCRIPT","MarkupPostBody");
return revised;
};
//
// UploadLog
//
// config.options.chkUploadLog :
// false : no logging
// true : logging
// config.options.txtUploadLogMaxLine :
// -1 : no limit
// 0 : no Log lines but UploadLog is still in place
// n : the last n lines are only kept
// NaN : no limit (-1)
bidix.UploadLog = function() {
if (!config.options.chkUploadLog)
return; // this.tiddler = null
this.tiddler = store.getTiddler("UploadLog");
if (!this.tiddler) {
this.tiddler = new Tiddler();
this.tiddler.title = "UploadLog";
this.tiddler.text = "| !date | !user | !location | !storeUrl | !uploadDir | !toFilename | !backupdir | !origin |";
this.tiddler.created = new Date();
this.tiddler.modifier = config.options.txtUserName;
this.tiddler.modified = new Date();
store.addTiddler(this.tiddler);
}
return this;
};
bidix.UploadLog.prototype.addText = function(text) {
if (!this.tiddler)
return;
// retrieve maxLine when we need it
var maxLine = parseInt(config.options.txtUploadLogMaxLine,10);
if (isNaN(maxLine))
maxLine = -1;
// add text
if (maxLine != 0)
this.tiddler.text = this.tiddler.text + text;
// Truncate to maxLine
if (maxLine >= 0) {
var textArray = this.tiddler.text.split('\n');
if (textArray.length > maxLine + 1)
textArray.splice(1,textArray.length-1-maxLine);
this.tiddler.text = textArray.join('\n');
}
// update tiddler fields
this.tiddler.modifier = config.options.txtUserName;
this.tiddler.modified = new Date();
store.addTiddler(this.tiddler);
// refresh and notify for immediate update
story.refreshTiddler(this.tiddler.title);
store.notify(this.tiddler.title, true);
};
bidix.UploadLog.prototype.startUpload = function(storeUrl, toFilename, uploadDir, backupDir) {
if (!this.tiddler)
return;
var now = new Date();
var text = "\n| ";
var filename = bidix.basename(document.location.toString());
if (!filename) filename = '/';
text += now.formatString("0DD/0MM/YYYY 0hh:0mm:0ss") +" | ";
text += config.options.txtUserName + " | ";
text += "[["+filename+"|"+location + "]] |";
text += " [[" + bidix.basename(storeUrl) + "|" + storeUrl + "]] | ";
text += uploadDir + " | ";
text += "[[" + bidix.basename(toFilename) + " | " +toFilename + "]] | ";
text += backupDir + " |";
this.addText(text);
};
bidix.UploadLog.prototype.endUpload = function(status) {
if (!this.tiddler)
return;
this.addText(" "+status+" |");
};
//
// Utilities
//
bidix.checkPlugin = function(plugin, major, minor, revision) {
var ext = version.extensions[plugin];
if (!
(ext &&
((ext.major > major) ||
((ext.major == major) && (ext.minor > minor)) ||
((ext.major == major) && (ext.minor == minor) && (ext.revision >= revision))))) {
// write error in PluginManager
if (pluginInfo)
pluginInfo.log.push("Requires " + plugin + " " + major + "." + minor + "." + revision);
eval(plugin); // generate an error : "Error: ReferenceError: xxxx is not defined"
}
};
bidix.dirname = function(filePath) {
if (!filePath)
return;
var lastpos;
if ((lastpos = filePath.lastIndexOf("/")) != -1) {
return filePath.substring(0, lastpos);
} else {
return filePath.substring(0, filePath.lastIndexOf("\\"));
}
};
bidix.basename = function(filePath) {
if (!filePath)
return;
var lastpos;
if ((lastpos = filePath.lastIndexOf("#")) != -1)
filePath = filePath.substring(0, lastpos);
if ((lastpos = filePath.lastIndexOf("/")) != -1) {
return filePath.substring(lastpos + 1);
} else
return filePath.substring(filePath.lastIndexOf("\\")+1);
};
bidix.initOption = function(name,value) {
if (!config.options[name])
config.options[name] = value;
};
//
// Initializations
//
// require PasswordOptionPlugin 1.0.1 or better
bidix.checkPlugin("PasswordOptionPlugin", 1, 0, 1);
// styleSheet
setStylesheet('.txtUploadStoreUrl, .txtUploadBackupDir, .txtUploadDir {width: 22em;}',"uploadPluginStyles");
//optionsDesc
merge(config.optionsDesc,{
txtUploadStoreUrl: "Url of the UploadService script (default: store.php)",
txtUploadFilename: "Filename of the uploaded file (default: in index.html)",
txtUploadDir: "Relative Directory where to store the file (default: . (downloadService directory))",
txtUploadBackupDir: "Relative Directory where to backup the file. If empty no backup. (default: ''(empty))",
txtUploadUserName: "Upload Username",
pasUploadPassword: "Upload Password",
chkUploadLog: "do Logging in UploadLog (default: true)",
txtUploadLogMaxLine: "Maximum of lines in UploadLog (default: 10)"
});
// Options Initializations
bidix.initOption('txtUploadStoreUrl','');
bidix.initOption('txtUploadFilename','');
bidix.initOption('txtUploadDir','');
bidix.initOption('txtUploadBackupDir','');
bidix.initOption('txtUploadUserName','');
bidix.initOption('pasUploadPassword','');
bidix.initOption('chkUploadLog',true);
bidix.initOption('txtUploadLogMaxLine','10');
// Backstage
merge(config.tasks,{
uploadOptions: {text: "upload", tooltip: "Change UploadOptions and Upload", content: '<<uploadOptions>>'}
});
config.backstageTasks.push("uploadOptions");
//}}}
http://www.ukoug.org/home/
http://karlarao.wordpress.com/2010/03/27/ideas-build-off-ideas-making-use-of-social-networking-sites/
http://www.oaktable.net/
https://sites.google.com/site/oracleinth/home/newsletter-february-2011
http://rmoug.org/
http://www.pythian.com/news/32775/how-do-you-moderate-linkedin-discussion-forums/
! materials
Using R for Big Data with Spark https://www.safaribooksonline.com/library/view/using-r-for/9781491973035/
https://www.amazon.com/Using-Big-Data-Spark-Training/dp/B01M5E365E/ref=pd_sbs_65_t_1?_encoding=UTF8&psc=1&refRID=SQZM3B99GKETTX0VC722
-- UTILITIES
11g DBCA New features / Enhancements
Doc ID: 454631.1
-- SERVER ALERTS
Understanding Oracle 10G - Server Generated Alerts
Doc ID: 266970.1
Configuring Server-Generated Alerts for Tablespace Usage using PL/SQL and Advanced Queues
Doc ID: 271585.1
Metric Alerts are being Raised for Tablespaces Which no Longer Exist / Dropped in the Database
Doc ID: 740340.1
How to - Exclude a Tablespace From The Tablespace Used (%) Metric
Doc ID: 392268.1
10g NEW FEATURE on TABLESPACE ADVISORY
Doc ID: 240965.1
Supported RDBMS Metric on Oracle Database 10.2
Doc ID: 564260.1
How To Configure Notification Rules in EM 10G
Doc ID: 429422.1
How to include the numeric portion of the "ORA-" in alerts and email notifications
Doc ID: 458605.1
Problem: RAC Metrics: Unable to get E-mail Notification for some metrics against Cluster Databases
Doc ID: 403886.1
How to Generate an Alert When There Are Blocked Emails
Doc ID: 290278.1
Configuring Email Notification Method in EM - Steps and Troubleshooting
Doc ID: 429426.1
Configuring SNMP Trap Notification Method in EM - Steps and Troubleshooting
Doc ID: 434886.1
How to be notified for all ORA- Errors recorded in the alert.log file
Doc ID: 405396.1
Troubleshooting a Database Tablespace Used(%) Alert problem
Doc ID: 403264.1
-- RDA
Note 314422.1 - Remote Diagnostic Agent (RDA) 4 - Getting Started
Note 359395.1 - Remote Diagnostic Agent (RDA) 4 - RAC Cluster Guide
-- OCM
Note 436857.1 What Is The Difference Between RDA and OCM
http://docs.oracle.com/cd/B19306_01/server.102/b14237/dynviews_2124.htm
* possible mismatch reasons, i.e. the columns of V$SQL_SHARED_CURSOR (query sketch after the list)
{{{
OPTIMIZER_MISMATCH
OUTLINE_MISMATCH
STATS_ROW_MISMATCH
LITERAL_MISMATCH
FORCE_HARD_PARSE
EXPLAIN_PLAN_CURSOR
BUFFERED_DML_MISMATCH
PDML_ENV_MISMATCH
INST_DRTLD_MISMATCH
SLAVE_QC_MISMATCH
TYPECHECK_MISMATCH
AUTH_CHECK_MISMATCH
BIND_MISMATCH
DESCRIBE_MISMATCH
LANGUAGE_MISMATCH
TRANSLATION_MISMATCH
BIND_EQUIV_FAILURE - ACS - adaptive cursor sharing
INSUFF_PRIVS
INSUFF_PRIVS_REM
REMOTE_TRANS_MISMATCH
LOGMINER_SESSION_MISMATCH
INCOMP_LTRL_MISMATCH
OVERLAP_TIME_MISMATCH
EDITION_MISMATCH
MV_QUERY_GEN_MISMATCH
USER_BIND_PEEK_MISMATCH
TYPCHK_DEP_MISMATCH
NO_TRIGGER_MISMATCH
}}}
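To see which of these flags actually fired for a SQL, query V$SQL_SHARED_CURSOR directly.. a minimal sketch (the column set varies a bit by version; &sql_id is a placeholder):
{{{
-- one row per child cursor; each mismatch column above is a Y/N flag
select sql_id, child_number,
       bind_mismatch, bind_equiv_failure,
       optimizer_mismatch, language_mismatch, auth_check_mismatch
from   v$sql_shared_cursor
where  sql_id = '&sql_id'
order  by child_number;
}}}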
! BIND_EQUIV_FAILURE
https://hourim.wordpress.com/2015/04/06/bind_equiv_failure-or-when-you-will-regret-using-adaptive-cursor-sharing/
https://www.freelists.org/post/oracle-l/SQL-with-BIND-EQUIV-FAILURE-repeatedly-creates-same-range-over-and-over-again,8
ACS https://hourim.wordpress.com/2017/10/25/cursor-selectivity-cube-part-i/ , https://hourim.wordpress.com/2017/10/28/cursor-selectivity-cube-part-ii/
https://learning.oreilly.com/library/view/oracle-database-problem/9780134429267/ch04.html#ch04lev1sec1
http://blog.cubrid.org/web-2-0/database-technology-for-large-scale-data/
http://servarica.com/a-word-on-over-selling/
SmartOS: virtualization with ZFS and KVM http://lwn.net/Articles/459754/
<<<
Illumos KVM diverges from Linux KVM in some areas. For starters, apart from the focus on Intel and x86/x86-64 there are some limitations in functionality. As a cloud provider, Joyent doesn't believe in overselling memory, so it locks down guest memory in KVM: if the host hasn't enough free memory to lock down all the needed memory for the virtual machine, the guest fails to start. Using the same argument, Joyent also didn't implement kernel same-page merging (KSM), the Linux functionality for memory deduplication. According to Cantrill's presentation, it's technically possible to implement this in illumos, but Joyent doesn't see an acute need for it. Another limitation is that illumos KVM doesn't support nested virtualization.
<<<
-- PORT FORWARDING on LINUX
http://www.vmware.com/support/ws55/doc/ws_net_nat_advanced.html
http://www.vmware.com/support/ws55/doc/ws_net_nat_sample_vmnetnatconf.html
http://www.howforge.com/port-forward-in-vmware-server-for-linux
http://communities.vmware.com/thread/18573
{{{
- use NAT
- edit the /etc/vmware/vmnet8/nat/nat.conf
- put
1521 = 192.168.27.128:1521
- vmware shouldn't be running while you restart its services (which should also be done as root)
/usr/lib/vmware/net-services.sh restart
}}}
-- MULTIPLE VMDK TO SINGLE FILE
https://wiki.ubuntu.com/UbuntuMagazine/HowTo/Switching_From_VMWare_To_VirtualBox:_.vmdk_To_.vdi_Using_Qemu_+_VdiTool#Multiple vmdk files to single vdi <-- nice multiple VMDK to single file trick!!
http://www.ubuntugeek.com/howto-convert-vmware-image-to-virtualbox-image.html
http://forums.virtualbox.org/viewtopic.php?t=104
http://dietrichschroff.blogspot.com/2008/10/using-vmware-images-with-virtualbox.html
http://communities.vmware.com/thread/88468?start=0&tstart=0 <-- vdiskmanager GUI toolkit!!
http://communities.vmware.com/docs/DOC-7471
-- Examples Using the VMware Virtual Disk Manager
http://www.vmware.com/support/ws55/doc/ws_disk_manager_examples.html
http://www.linux.com/learn/tutorials/264732-create-and-run-virtual-machines-with-kvm
https://help.ubuntu.com/community/KVM/FAQ
http://www.tuxradar.com/content/howto-linux-and-windows-virtualization-kvm-and-qemu
http://ubuntu-tutorials.com/2009/03/22/how-to-convert-vmware-image-vmdk-to-virtualbox-image-vdi/
http://blog.mymediasystem.net/uncategorized/vmware-kvm-migration-guide/ <-- NICE!
! Background
On VMware.. what I usually do with VMDK disks is make them ''dynamically extend'', split per 2GB file. When a VMDK has this attribute and you migrate to VirtualBox, flagging the VMDK as ''type shareable'' will not be allowed.
''The Workaround/Solution''
You need to clone the base disks to the ''fixed'' variant.. so if you created a 4GB VMDK that dynamically extends but the utilized space is just 500MB, cloning it as ''fixed'' will create a 4GB file utilizing all the space
{{{
-- CLONE THE VMDK
VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk1.vmdk" "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk1.vmdk" --format VMDK --variant Fixed
}}}
You can then flag that file as variant ''shareable'' and assign it to the RAC nodes
{{{
VBoxManage storageattach racnode1 --storagectl "SCSI Controller" --port 1 --device 0
--type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk1.vmdk" --mtype shareable
}}}
! Step by step
!! 1) Clone the VMDK to fixed
{{{
-- CLONE THE VMDK
VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk1.vmdk" "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk1.vmdk" --format VMDK --variant Fixed
VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk2.vmdk" "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk2.vmdk" --format VMDK --variant Fixed
VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk3.vmdk" "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk3.vmdk" --format VMDK --variant Fixed
VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk4.vmdk" "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk4.vmdk" --format VMDK --variant Fixed
VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk5.vmdk" "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk5.vmdk" --format VMDK --variant Fixed
VBoxManage clonehd <uuid>|<filename> <outputfile>
[--format VDI|VMDK|VHD|RAW|<other>]
[--variant Standard,Fixed,Split2G,Stream,ESX]
[--existing]
C:\Program Files\Oracle\VirtualBox>VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk1.vmdk" "
C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk1.vmdk" --format VMDK --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VMDK'. UUID: 498fd160-2998-4cf9-9957-fa0c74dafdbf
C:\Program Files\Oracle\VirtualBox>
C:\Program Files\Oracle\VirtualBox>VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk1-s001.vm
dk" "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk1-s001.vmdk" --format VMDK --variant Fixed
VBoxManage.exe: error: Could not get the storage format of the medium 'C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_stora
ge\disk1-s001.vmdk' (VERR_NOT_SUPPORTED)
VBoxManage.exe: error: Details: code VBOX_E_IPRT_ERROR (0x80bb0005), component Medium, interface IMedium, callee IUnknow
n
Context: "OpenMedium(Bstr(pszFilenameOrUuid).raw(), enmDevType, AccessMode_ReadWrite, pMedium.asOutParam())" at line 209
of file VBoxManageDisk.cpp
C:\Program Files\Oracle\VirtualBox>VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk2.vmdk" "
C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk2.vmdk" --format VMDK --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VMDK'. UUID: 989ca746-f00b-449b-89b0-5004447c2c44
C:\Program Files\Oracle\VirtualBox>
C:\Program Files\Oracle\VirtualBox>
C:\Program Files\Oracle\VirtualBox>VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk3.vmdk" "
C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk3.vmdk" --format VMDK --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VMDK'. UUID: 53998810-36e4-476a-a251-dd1cbab7edca
C:\Program Files\Oracle\VirtualBox>
C:\Program Files\Oracle\VirtualBox>VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk4.vmdk" "
C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk4.vmdk" --format VMDK --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VMDK'. UUID: 0d73ec18-df74-4bc8-81cb-cc33292ced05
C:\Program Files\Oracle\VirtualBox>
C:\Program Files\Oracle\VirtualBox>VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk5.vmdk" "
C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk5.vmdk" --format VMDK --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VMDK'. UUID: 782c3d15-7705-4065-922c-3cd638c8706c
}}}
!! 2) Attach the shared storage as variant ''shareable''
{{{
VBoxManage storageattach racnode1 --storagectl "SCSI Controller" --port 1 --device 0 --type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk1.vmdk" --mtype shareable
VBoxManage storageattach racnode1 --storagectl "SCSI Controller" --port 2 --device 0 --type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk2.vmdk" --mtype shareable
VBoxManage storageattach racnode1 --storagectl "SCSI Controller" --port 3 --device 0 --type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk3.vmdk" --mtype shareable
VBoxManage storageattach racnode1 --storagectl "SCSI Controller" --port 4 --device 0 --type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk4.vmdk" --mtype shareable
VBoxManage storageattach racnode1 --storagectl "SCSI Controller" --port 5 --device 0 --type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk5.vmdk" --mtype shareable
VBoxManage storageattach racnode2 --storagectl "SCSI Controller" --port 1 --device 0 --type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk1.vmdk" --mtype shareable
VBoxManage storageattach racnode2 --storagectl "SCSI Controller" --port 2 --device 0 --type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk2.vmdk" --mtype shareable
VBoxManage storageattach racnode2 --storagectl "SCSI Controller" --port 3 --device 0 --type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk3.vmdk" --mtype shareable
VBoxManage storageattach racnode2 --storagectl "SCSI Controller" --port 4 --device 0 --type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk4.vmdk" --mtype shareable
VBoxManage storageattach racnode2 --storagectl "SCSI Controller" --port 5 --device 0 --type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk5.vmdk" --mtype shareable
VBoxManage modifyhd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk1.vmdk" --type shareable
VBoxManage modifyhd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk2.vmdk" --type shareable
VBoxManage modifyhd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk3.vmdk" --type shareable
VBoxManage modifyhd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk4.vmdk" --type shareable
VBoxManage modifyhd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk5.vmdk" --type shareable
C:\Program Files\Oracle\VirtualBox>VBoxManage storageattach racnode1 --storagectl "SCSI Controller" --port 1 --device 0
--type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk1.vmdk" --mtype shareable
C:\Program Files\Oracle\VirtualBox>VBoxManage storageattach racnode1 --storagectl "SCSI Controller" --port 2 --device 0
--type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk2.vmdk" --mtype shareable
C:\Program Files\Oracle\VirtualBox>VBoxManage storageattach racnode1 --storagectl "SCSI Controller" --port 3 --device 0
--type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk3.vmdk" --mtype shareable
C:\Program Files\Oracle\VirtualBox>VBoxManage storageattach racnode1 --storagectl "SCSI Controller" --port 4 --device 0
--type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk4.vmdk" --mtype shareable
C:\Program Files\Oracle\VirtualBox>VBoxManage storageattach racnode1 --storagectl "SCSI Controller" --port 5 --device 0
--type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk5.vmdk" --mtype shareable
C:\Program Files\Oracle\VirtualBox>VBoxManage modifyhd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk1.vmdk" --type s
hareable
C:\Program Files\Oracle\VirtualBox>VBoxManage modifyhd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk2.vmdk" --type s
hareable
C:\Program Files\Oracle\VirtualBox>VBoxManage modifyhd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk3.vmdk" --type s
hareable
C:\Program Files\Oracle\VirtualBox>VBoxManage modifyhd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk4.vmdk" --type s
hareable
C:\Program Files\Oracle\VirtualBox>VBoxManage modifyhd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk5.vmdk" --type s
hareable
}}}
!! 3) Start up the RAC nodes, detect the new devices and fix the network settings, then do another reboot
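What I do inside each guest for this step.. a minimal sketch assuming OEL4-style networking (interface names and file paths may differ on your build):
{{{
# let kudzu detect the new VirtualBox hardware
service kudzu start
# remove the stale VMware HWADDR line (if present) so eth0 comes up
vi /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart
}}}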
!! 4) Output of crs_stat
{{{
[oracle@racnode1 bin]$ crs_stat2
HA Resource Target State
----------- ------ -----
ora.orcl.db ONLINE ONLINE on racnode1
ora.orcl.orcl1.inst ONLINE ONLINE on racnode1
ora.orcl.orcl2.inst ONLINE ONLINE on racnode2
ora.orcl.orcl_service.cs ONLINE ONLINE on racnode2
ora.orcl.orcl_service.orcl1.srv ONLINE ONLINE on racnode1
ora.orcl.orcl_service.orcl2.srv ONLINE ONLINE on racnode2
ora.racnode1.ASM1.asm ONLINE ONLINE on racnode1
ora.racnode1.LISTENER_RACNODE1.lsnr ONLINE ONLINE on racnode1
ora.racnode1.gsd ONLINE ONLINE on racnode1
ora.racnode1.ons ONLINE ONLINE on racnode1
ora.racnode1.vip ONLINE ONLINE on racnode1
ora.racnode2.ASM2.asm ONLINE ONLINE on racnode2
ora.racnode2.LISTENER_RACNODE2.lsnr ONLINE ONLINE on racnode2
ora.racnode2.gsd ONLINE ONLINE on racnode2
ora.racnode2.ons ONLINE ONLINE on racnode2
ora.racnode2.vip ONLINE ONLINE on racnode2
}}}
! Comments on Facebook
{{{
Karl Arao
I can say.. Virtualbox is way faster than VMware on my netbook nb305-n600. I love VirtualBox.
--------
Sebastian Leyton Mmmm. Can yo made a RAC on virtual box? With a shared storage?
--------
Lewis Cunningham Yes you can. Fairly simple too. Search around on Oracle-base.com. There's a good article there for doing exactly that.
--------
Karl Arao Yes, look here http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php
--------
Karl Arao @Lewis: Windows went fine after doing a repair install and following this doc http://www.virtualbox.org/wiki/Migrate_Windows
I'm now doing the same on a Linux guest, but encountered kernel panic. Trying to run the vmdk as it is. I found this guide
http://blogs.sun.com/jimlaurent/entry/importing_solaris_vmdk_image_into and looking for a linux counterpart
Have you also migrated from VMware to Virtualbox?
--------
Felix Chua Is there a VirtualBox for Windows 7 64-bit? :-)
--------
Karl Arao @Lewis: It's now working ;) I have to change the IDE controller to SCSI and add all my VMDK disk.. the ASM instance & database started without errors ;)
see here http://lh5.ggpht.com/_F2x5WXOJ6Q8/TT5GWlOMYMI/AAAAAAAABBo/iCEWErNA5v4/VirtualBoxDisk.png
http://forums.virtualbox.org/viewtopic.php?f=2&t=26444
@Felix: I'm not sure, but that should work http://www.virtualbox.org/wiki/Downloads
--------
Nuno Pinto Do Souto What's the native OS you're running Virtualbox on? Win7 64-bit?
--------
Karl Arao @Nuno: just a Windows 7 Starter 32bit on atom n550 & 2GB RAM
I just made my RAC from VMware work with VirtualBox.. apparently when you configure your VMware VMDK files to dynamically extend it will not allow you to make the VMDK files shareable.. like this:
VBoxManage storageattach racnode1 --storagectl "SCSI Controller" --port 1 --device 0 --type hdd --medium "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk1.vmdk" --mtype shareable
As a workaround... you need to make the VMDK files as "fixed" with the command below:
VBoxManage clonehd "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared_storage\disk1.vmdk" "C:\Virtual Machines\RAC_10gR2_OEL4.4\shared\disk1.vmdk" --format VMDK --variant Fixed
then you can reexecute the first command above.. then instantly you will see on the VM settings that you now have the ASM & OCFS2 disks as "shared".. then reconfigure the network adapters.. that's it! ;)
BTW.. take note that if you have dynamic VMDK files, it will have <file>-<number>.vmdk like the one below
disk1.vmdk
disk1-s001.vmdk
if you are executing the "VBoxManage clonehd" just do it on the <file>.vmdk or the disk1.vmdk as shown... then the tool will do it on the rest of the files.. the output will be like the one below that's after making them "fixed"
disk1.vmdk
disk1-flat.vmdk
--------
Nuno Pinto Do Souto Cool, thanks!
--------
Karl Arao Now I'm working on making the NAT work.. will update this later ;)
--------
Karl Arao As per the official doc, by default VirtualBox is using NAT..
Seems like it's different from configuring NAT on VMware, in VirtualBox you have to create a new interface and actually configure it for NAT (just as you would on physical machines) then when you create a new VM you have to make use of this NAT'd interface.
There's a configured "VirtualBox Host-Only Ethernet Adapter" when you go to File->Preferences->Network, my VMs from VMware are on the 192.168.203 segment.. so click on "Edit host-only network" and edit the Adapter (192.168.203.1) and DHCP (192.168.203.2) tabs.. then go to the VM settings and change the "Attached to" to "Host-only Adapter", then that's it.. when you power on the VMs you can do SSH from the host :)
--------
Asif Momen Even I think VirtualBox is better than VMware.
--------
}}}
! Follow the steps below to run your VMDK files on VirtualBox:
1) Click New, then Next
[img[picturename| http://lh6.ggpht.com/_F2x5WXOJ6Q8/TUWx5k16HlI/AAAAAAAABCA/sgFlJjKGGGQ/s400/2.png]]
2) Type the name and the OS details
[img[picturename| http://lh6.ggpht.com/_F2x5WXOJ6Q8/TUWx5z6BsYI/AAAAAAAABCE/VuRjZSbHEnQ/s400/3.png]]
3) Set the memory
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TUWx5x96E1I/AAAAAAAABCI/peNj6LIEiqE/s400/4.png]]
4) Boot hard disk on existing.. you can just pick disk0 here.. we will remove this disk config later because we will be using SCSI controllers. By default it creates IDE.
[img[picturename| http://lh5.ggpht.com/_F2x5WXOJ6Q8/TUWx51MkVBI/AAAAAAAABCM/H_AMBWshwbY/s400/5.png]]
5) Finish
[img[picturename| http://lh6.ggpht.com/_F2x5WXOJ6Q8/TUWx6MMQSLI/AAAAAAAABCQ/vrECq2bXMOU/s400/6.png]]
6) Go to the VM, then Settings, and remove the IDE and SATA controllers
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TUWyJ6zi1qI/AAAAAAAABCU/GYMAUCcHX8E/s400/7.png]]
If you do not replace the controllers, you will get an error similar to this on boot up: disk devices are not detected and can't be mounted.
[img[picturename| http://lh5.ggpht.com/_F2x5WXOJ6Q8/TUW-E7RgShI/AAAAAAAABDM/F0uvPEzqC7k/s400/IMG_2932.JPG]]
7) Add SCSI controller
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TUWyKfW8kUI/AAAAAAAABCY/0WOo1lt9IQU/s400/8.png]]
8) Choose existing disk
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TUWyKnJXwCI/AAAAAAAABCc/8J7qx951cdA/s400/9.png]]
9) Now, just pick the ''Base disk names''.. also, for VMwareToVbox-RAC, when you clone using the command ''VBoxManage clonehd'' you only need to choose the ''Base disk name'' and it will clone the rest of the disks under the ''Base disk''
[img[picturename| http://lh4.ggpht.com/_F2x5WXOJ6Q8/TUWyuRGdGSI/AAAAAAAABCs/BPX6Au5DXcg/s400/10.png]]
10) All SCSI disks are added
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TUWyKv7LyBI/AAAAAAAABCk/36mTdHPX9fs/s400/11.png]]
11) On the network settings of the VM, choose ''Host-only Adapter'' with the name ''VirtualBox Host-only Ethernet Adapter''; this will behave much like your NAT network
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TUWzExGWJPI/AAAAAAAABC4/8LTd1cQ29pQ/s400/13.png]]
12) On the system wide preferences..
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TUWzEznWZZI/AAAAAAAABC8/lKJcPwoSaEk/s400/14.png]]
13) Edit the ''VirtualBox Host-only Ethernet Adapter''; my usual subnet for VMs is 192.168.203
[img[picturename| http://lh5.ggpht.com/_F2x5WXOJ6Q8/TUWzExLUW3I/AAAAAAAABDA/4gzI3ZwnFKE/s400/15.png]]
14) Then configure the DHCP server
[img[picturename| http://lh6.ggpht.com/_F2x5WXOJ6Q8/TUWzFIHIOdI/AAAAAAAABDE/u0cmeOcBVIs/s400/16.png]]
15) Start the VM! :)
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TUWzJb5QIrI/AAAAAAAABDI/u_CBiXHw4Ow/s400/17.png]]
-- other links
http://blogs.sun.com/fatbloke/entry/moving_a_vmware_vm_to
! Repair Installation
For a Windows XP migration from VMware to VirtualBox.. after adding it to the configuration and selecting the base disks just like below
[img[picturename| http://lh4.ggpht.com/_F2x5WXOJ6Q8/TUWyuRGdGSI/AAAAAAAABCs/BPX6Au5DXcg/s400/10.png]]
you need to do a ''repair installation'' using your Windows XP installer..
after that you may encounter issues with ''agp440.sys and intelppm.sys'' giving you a blue screen on boot up..
follow the steps indicated here to fix it http://www.virtualbox.org/wiki/Migrate_Windows
! Networking
For the network adapter on Windows.. remove the current network card, then choose the ''Intel Pro 1000 MT Desktop'' and download the drivers here:
http://www.intel.com/support/network/adapter/1000mtdesktop/index.htm
http://www.intel.com/support/network/sb/cs-006120.htm
http://downloadcenter.intel.com/detail_desc.aspx?agr=Y&DwnldID=18717
To install the drivers you need to use a USB device to copy the ''PROWin32.exe'' file to the VM
''Issues on Host-Only''
Host-only networking doesn't seem to work on Windows. You may need to use Bridged Networking or NAT instead.
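A minimal sketch of switching the adapter type from the command line (the VM name and the bridge interface name are just examples):
{{{
# switch NIC1 to NAT
VBoxManage modifyvm "windows7remote" --nic1 nat
# or bridge NIC1 to a physical interface on the host
VBoxManage modifyvm "windows7remote" --nic1 bridged --bridgeadapter1 eth0
}}}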
! Other issues
when installing SQL*Developer you may encounter this error.. http://i.justrealized.com/2009/how-to-fix-missing-msvcr71dll-problem-in-windows/
http://www.evernote.com/shard/s48/sh/d3033468-d565-41aa-afc1-013c02e37e7f/41b138362507dfef2e308c202ba4e40c
exceed https://www2.physics.ox.ac.uk/it-services/exceed/exceed-configuration
http://www.realvnc.com/pipermail/vnc-list/1998-September/002635.html
http://www.realvnc.com/pipermail/vnc-list/2001-August/024738.html
http://ubuntuforums.org/showthread.php?t=122402
http://codeghar.wordpress.com/2009/06/11/remote-login-with-gdm-and-vnc-on-fedora-11/
http://www.walkernews.net/2008/06/20/configure-vnc-server-to-auto-start-up-in-red-hat-linux/ <-- GOOD STUFF
http://www.abdevelopment.ca/blog/start-vnc-server-ubuntu-boot
{{{
# start the VNC server at boot
chkconfig vncserver on
# define display :1 for the oracle user
echo 'VNCSERVERS="1:oracle"' >> /etc/sysconfig/vncservers
# set the VNC password as the oracle user, then start the service
su - oracle -c vncpasswd
service vncserver start
}}}
Linux desktops, Windows only VPN clients, virtual machines and you — DIY VPN jump box
http://www.pythian.com/news/17205/linux-desktops-windows-only-vpn-clients-virtual-machines-and-you-diy-vpn-jump-box/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+PythianGroupBlog+(Pythian+Group+Blog)
<<showtoc>>
! rewrite process
* was able to rewrite the query; it now runs in 45secs in OUHUBPRD and 86secs in OUHUBDEV
* transformed the correlated subquery on cisadm.ci_per_name and cisadm.ci_acct_per into a join on cisadm.ci_sa
* I retained the other one as correlated because it returns just 80+ rows
{{{
cisa_cte_tmp as
(SELECT/*+parallel(12)*/ t1.sa_id,
sa.acct_id,
t1.customer,
sa.sa_type_cd,
(SELECT rev_cl_cd
FROM cisadm.ci_sa_type
WHERE sa_type_cd = sa.sa_type_cd) revn_cls
FROM cisadm.ci_sa sa
left join
(
SELECT/*+ parallel(8) */ sa.sa_id, a.entity_name customer
FROM cisadm.ci_per_name a, cisadm.ci_acct_per b, cisadm.ci_sa sa
WHERE a.per_id = b.per_id
and b.acct_id = sa.acct_id
and a.prim_name_sw = 'Y'
and b.main_cust_sw = 'Y'
) t1
on t1.sa_id = sa.sa_id)
}}}
! query logic before rewrite
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/113662855-ee1b3580-9676-11eb-9ae2-61f5d027954d.png ]]
! query logic after rewrite
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/113662858-ef4c6280-9676-11eb-8d6f-05faf9c72f82.png ]]
! query logic onion approach
* the outer onion is the last query (A7)
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/113662860-efe4f900-9676-11eb-81df-9365af0587c7.png ]]
! how do you compare 2 queries and say if they are logically equivalent?
<<<
When I hit the same number, 7044266 rows, on that CI_SA row source, that gave me assurance that I'm not altering the result. And if you see that the structure in the viz remained the same, then logically they are the same. You also need to actually eyeball the result;
that's why I generated the VST before and after the rewrite.
Also do a CTAS of the original and the new query, and do a SELECT MINUS SELECT to make sure both results are the same:
select * from t2 minus select * from t3
and then switch positions (see the sketch below)
<<<
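A minimal sketch of that check, with t2 holding the original query's result and t3 the rewrite's (the SELECTs are placeholders for the full queries below; the counts recorded further down came from these tables):
{{{
-- materialize both result sets
create table t2 as select /* ...original query... */;
create table t3 as select /* ...rewritten query... */;
-- both MINUS directions must return zero rows
select count(*) from (select * from t2 minus select * from t3);
select count(*) from (select * from t3 minus select * from t2);
-- and the raw counts must match
select count(*) from t2;
select count(*) from t3;
}}}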
! the SQLs
!! original - 2.6 hours
{{{
WITH dtrng
AS (SELECT mnth
FROM (SELECT DISTINCT To_char(dt, 'YYYYMM') mnth
FROM (SELECT LEVEL n,
Add_months(Trunc(SYSDATE, 'YEAR')-1, -12)
+ LEVEL AS dt
FROM dual
CONNECT BY LEVEL <= ( Add_months(Trunc(SYSDATE, 'MM'),
-1) -
Add_months(Trunc(SYSDATE,
'YEAR'), -13)
))x
ORDER BY 1)),
sas
AS (SELECT
/*+parallel(12)*/ x.*
FROM (SELECT rh.sa_id,
rh.rs_cd,
rc.char_val tariff,
rh.effdt rheffdt,
rc.effdt rceffdt,
Rank()
over (
PARTITION BY rh.sa_id
ORDER BY rh.effdt DESC) rnk
FROM cisadm.ci_sa_rs_hist rh
join cisadm.ci_rc_char rc
ON ( rh.rs_cd = rc.rs_cd )
WHERE rc.char_type_cd = 'RATEALPH')x
WHERE rnk = 1)
SELECT /*+parallel(12)*/ acct_id,
customer,
sa_type_cd,
revn_cls,
rate,
tariff,
accounting_date,
SUM(bdgt_amt) bdgt_amt,
CASE
WHEN Trim(revn_cls) = 'COMPANY' THEN SUM(cu_amt)
ELSE SUM(actual_amt)
END actual_amt,
SUM(revn_amt) revn_amt,
SUM(bill_kwh) bill_kwh,
Max(bill_kw) bill_kw
FROM (SELECT sa.acct_id,
ft.ft_id,
(SELECT entity_name
FROM cisadm.ci_per_name
WHERE prim_name_sw = 'Y'
AND per_id IN (SELECT per_id
FROM cisadm.ci_acct_per
WHERE main_cust_sw = 'Y'
AND acct_id = sa.acct_id)) customer
,
sa.sa_type_cd,
(SELECT rev_cl_cd
FROM cisadm.ci_sa_type
WHERE sa_type_cd = sa.sa_type_cd) revn_cls
,
ss.rs_cd
rate,
ss.tariff,
Trunc(ft.accounting_dt, 'MM')
accounting_date
,
cur_amt
bdgt_amt,
tot_amt
actual_amt,
Nvl((SELECT SUM(-calc_amt)
FROM cisadm.ci_bseg_calc_ln bc
join cisadm.ci_bseg bs
ON ( bc.bseg_id = bs.bseg_id )
WHERE bs.bill_id = ft.parent_id
AND Trim(bc.dst_id) = 'EX-MISCGL'), 0) cu_amt,
Nvl((SELECT SUM(-amount)
FROM cisadm.ci_ft_gl
WHERE dst_id LIKE 'RV%'
AND ft_id = ft.ft_id), 0) revn_amt
,
CASE
WHEN ft_type_flg = 'BX' THEN -1
ELSE 1
END * (SELECT SUM(bill_sq)
FROM cisadm.ci_bseg_sq
WHERE uom_cd NOT IN ( 'MTRR', 'KW', 'KVRH', 'PWRF',
'HPS', 'KWHE', 'KWHQ', 'KWHH',
' ' )
AND bseg_id = ft.sibling_id) bill_kwh
,
CASE
WHEN ft_type_flg = 'BX' THEN -1
ELSE 1
END * (SELECT Max(bill_sq)
FROM cisadm.ci_bseg_sq
WHERE sqi_cd = 'BILLKW'
AND bseg_id = ft.sibling_id) bill_kw
FROM cisadm.ci_ft ft
join cisadm.ci_sa sa
ON ( ft.sa_id = sa.sa_id )
join sas ss
ON ( ft.sa_id = ss.sa_id )
join dtrng dt
ON ( To_char(accounting_dt, 'YYYYMM') = dt.mnth )
WHERE ft_type_flg LIKE 'B%'
AND freeze_dttm IS NOT NULL
AND ( To_char(rheffdt, 'YYYYMM') <= dt.mnth )
AND ( To_char(rceffdt, 'YYYYMM') <= dt.mnth ))
GROUP BY acct_id,
customer,
sa_type_cd,
revn_cls,
rate,
tariff,
accounting_date
/
}}}
!! 2nd version - optimized CTE
{{{
WITH dtrng
AS (SELECT /* test no temp table */ mnth
FROM (SELECT DISTINCT To_char(dt, 'YYYYMM') mnth
FROM (SELECT LEVEL n,
Add_months(Trunc(SYSDATE, 'YEAR')-1, -12)
+ LEVEL AS dt
FROM dual
CONNECT BY LEVEL <= ( Add_months(Trunc(SYSDATE, 'MM'),
-1) -
Add_months(Trunc(SYSDATE,
'YEAR'), -13)
))x
ORDER BY 1)),
rc_char
AS (SELECT RS.rs_cd,
CH.char_val,
Trim(CH.char_type_cd) char_type_cd,
effdt
FROM cisadm.ci_rs RS
left join (SELECT rs_cd,
calc_grp_cd,
effdt
FROM cisadm.c1_rs_rv2) RV
ON RV.rs_cd = RS.rs_cd
left join cisadm.c1_calc_rule_char CH
ON CH.calc_grp_cd = RV.calc_grp_cd
WHERE Trim(CH.char_type_cd) = 'RATEALPH'),
sas
AS (SELECT
/*+parallel(12)*/ x.*
FROM (SELECT rh.sa_id,
rh.rs_cd,
rc.char_val tariff,
rh.effdt rheffdt,
rc.effdt rceffdt,
Rank()
over (
PARTITION BY rh.sa_id
ORDER BY rh.effdt DESC) rnk
FROM cisadm.ci_sa_rs_hist rh
join rc_char rc
ON ( rh.rs_cd = rc.rs_cd )
WHERE rc.char_type_cd = 'RATEALPH')x
WHERE rnk = 1),
cisa_cte_tmp as
(SELECT /*+parallel(12)*/ t1.sa_id,
sa.acct_id,
t1.customer,
sa.sa_type_cd,
(SELECT rev_cl_cd
FROM cisadm.ci_sa_type
WHERE sa_type_cd = sa.sa_type_cd) revn_cls
FROM cisadm.ci_sa sa
left join
(
SELECT /*+ parallel(8) */ sa.sa_id, a.entity_name customer
FROM cisadm.ci_per_name a, cisadm.ci_acct_per b, cisadm.ci_sa sa
WHERE a.per_id = b.per_id
and b.acct_id = sa.acct_id
and a.prim_name_sw = 'Y'
and b.main_cust_sw = 'Y'
) t1
on t1.sa_id = sa.sa_id)
SELECT /* main query */ /*+parallel(12)*/ acct_id,
customer,
sa_type_cd,
revn_cls,
rate,
tariff,
accounting_date,
SUM(bdgt_amt) bdgt_amt,
CASE
WHEN Trim(revn_cls) = 'COMPANY' THEN SUM(cu_amt)
ELSE SUM(actual_amt)
END actual_amt,
SUM(revn_amt) revn_amt,
SUM(bill_kwh) bill_kwh,
Max(bill_kw) bill_kw
FROM (
SELECT sa.acct_id,
ft.ft_id,
sa.customer,
sa.sa_type_cd,
sa.revn_cls,
ss.rs_cd rate,
ss.tariff,
Trunc(ft.accounting_dt, 'MM') accounting_date,
cur_amt bdgt_amt,
tot_amt actual_amt,
Nvl((SELECT SUM(-calc_amt)
FROM cisadm.ci_bseg_calc_ln bc
join cisadm.ci_bseg bs
ON ( bc.bseg_id = bs.bseg_id )
WHERE bs.bill_id = ft.parent_id
AND Trim(bc.dst_id) = 'EX-MISCGL'), 0) cu_amt,
Nvl((SELECT SUM(-amount)
FROM cisadm.ci_ft_gl
WHERE dst_id LIKE 'RV%'
AND ft_id = ft.ft_id), 0) revn_amt,
CASE
WHEN ft_type_flg = 'BX' THEN -1
ELSE 1
END * (SELECT SUM(bill_sq)
FROM cisadm.ci_bseg_sq sq
WHERE ( sq.uom_cd NOT IN ( 'MTRR', 'KW', 'KVARH', 'PWRF',
'HPS', 'KWHE', 'KWHQ', 'KWHH',
' ' )
-- KWH Usage / CORRECTED RCB 05/2020
AND sq.tou_cd = ' ' )
AND bseg_id = ft.sibling_id) bill_kwh
,
CASE
WHEN ft_type_flg = 'BX' THEN -1
ELSE 1
END * (SELECT Max(bill_sq)
FROM cisadm.ci_bseg_sq
WHERE sqi_cd = 'BILLKW'
AND bseg_id = ft.sibling_id) bill_kw
FROM cisadm.ci_ft ft
join cisa_cte_tmp sa
ON ( ft.sa_id = sa.sa_id )
join sas ss
ON ( ft.sa_id = ss.sa_id )
join dtrng dt
ON ( To_char(accounting_dt, 'YYYYMM') = dt.mnth )
WHERE ft_type_flg LIKE 'B%'
AND ft.accounting_dt >= To_date('01-AUG-20', 'DD-MON-YY')
/* Added for PREPROD */
AND ft.accounting_dt <= To_date('31-AUG-20', 'DD-MON-YY')
/* Added for PREPROD */
AND freeze_dttm IS NOT NULL
AND ( To_char(rheffdt, 'YYYYMM') <= dt.mnth )
AND ( To_char(rceffdt, 'YYYYMM') <= dt.mnth )
)
GROUP BY acct_id,
customer,
sa_type_cd,
revn_cls,
rate,
tariff,
accounting_date;
/*
Elapsed: 00:01:50.49
Difference Statistics Name
---------------- --------------------------------------------------------------
15 active txn count during cleanout
1,737,168 Batched IO block miss count
435,779 Batched IO (bound) vector count
342 Batched IO buffer defrag count
68,376 Batched IO double miss count
1,809 Batched IO (full) vector count
935,885 Batched IO same unit count
472,306 Batched IO single block count
1,526 Batched IO slow jump count
303,600 Batched IO vector block count
31,636 Batched IO vector read count
21,305,812 buffer is not pinned count
144,721,486 buffer is pinned count
13,017 bytes received via SQL*Net from client
3,156 bytes sent via SQL*Net to client
30,025 Cached Commit SCN referenced
1,166 calls to get snapshot scn: kcmgss
99 calls to kcmgas
24,509 calls to kcmgcs
84 CCursor + sql area evicted
196,687 cell blocks helped by minscn optimization
239,807 cell blocks processed by cache layer
237,607 cell blocks processed by data layer
239,807 cell blocks processed by txn layer
2,423,216 cell flash cache read hits
1,946,476,544 cell IO uncompressed bytes
137,053 cell logical write IO requests
864 cell num smartio automem buffer allocation attempts
10 cell num smartio transient cell failures
229,978 cell overwrites in flash cache
52,942,118,912 cell physical IO bytes eligible for predicate offload
52,942,118,912 cell physical IO bytes eligible for smart IOs
50,996,019,200 cell physical IO bytes saved by storage index
122,626,859,464 cell physical IO interconnect bytes
117,514,696 cell physical IO interconnect bytes returned by smart scan
864 cell scans
852 cell smart IO session cache hits
852 cell smart IO session cache lookups
852 cell smart IO session cache soft misses
273,972 cell writes to flash cache
5 change write time
105 cleanout - number of ktugct calls
93 cleanouts only - consistent read gets
366 cluster key scan block gets
92 cluster key scans
113 cluster wait time
79 commit batch/immediate performed
79 commit batch/immediate requested
6 commit cleanout failures: block lost
2 commit cleanout failures: callback failure
45 commit cleanouts
37 commit cleanouts successfully completed
79 commit immediate performed
79 commit immediate requested
93 Commit SCN cached
93 commit txn count during cleanout
13,713 concurrency wait time
1,258 consistent changes
52,938,886 consistent gets
6,462,661 consistent gets direct
21,750,528 consistent gets examination
21,711,058 consistent gets examination (fastpath)
46,476,225 consistent gets from cache
24,725,697 consistent gets pin
21,899,692 consistent gets pin (fastpath)
59,625 CPU used by this session
5,640 CPU used when call started
3,897 db block changes
10,973 db block gets
8,410 db block gets direct
2,563 db block gets from cache
354 db block gets from cache (fastpath)
193,625 DB time
12 deferred (CURRENT) block cleanout applications
4 DFO trees parallelized
196 dirty buffers inspected
65 enqueue conversions
34,597 enqueue releases
34,663 enqueue requests
11 enqueue timeouts
18 enqueue waits
309 execute count
1,128,777 fastpath consistent get quota limit
61,591,109,414 file io wait time
2,594,822 free buffer inspected
2,296,443 free buffer requested
7 gc cr blocks received
54 gc current blocks received
2,164,338 gc local grants
132,054 gc remote grants
423 gcs affinity lock failures
132,189 gcs messages sent
1 gcs read-mostly lock failures
635,721 gcs read-mostly lock grants
26 ges messages sent
32 global enqueue get time
36,687 global enqueue gets sync
36,586 global enqueue releases
2 Heap Segment Array Updates
1,403,459 hot buffers moved to head of LRU
1,218 HSC Heap Segment Block Changes
93 immediate (CR) block cleanout applications
16 immediate (CURRENT) block cleanout applications
72,976 in call idle wait time
4 index fast full scans (full)
159 index fast full scans (rowid ranges)
3,465,146 index fetch by key
7,715,969 index scans kdiixs1
80 KTFB alloc req
75,497,472 KTFB alloc space (block)
1,993 KTFB alloc time (ms)
380,754,231,296 logical read bytes from cache
203 messages sent
88,571 min active SCN optimization applied on CR
125,759 no buffer to keep pinned count
24,699,824 no work - consistent read gets
11,488,635 non-idle wait count
69,644 non-idle wait time
308 opened cursors cumulative
4 Parallel operations not downgraded
6 parse count (hard)
160 parse count (total)
1,251 parse time cpu
14,900 parse time elapsed
105,944,801,280 physical read bytes
2,472,946 physical read IO requests
2,472,960 physical read requests optimized
105,945,030,656 physical read total bytes
105,945,030,656 physical read total bytes optimized
2,472,960 physical read total IO requests
186,145 physical read total multi block requests
12,932,715 physical reads
2,296,214 physical reads cache
271,099 physical reads cache prefetch
10,636,501 physical reads direct
4,173,840 physical reads direct temporary tablespace
34,753,216,512 physical write bytes
137,053 physical write IO requests
136,986 physical write requests optimized
34,753,216,512 physical write total bytes
34,684,321,792 physical write total bytes optimized
137,053 physical write total IO requests
136,585 physical write total multi block requests
4,242,336 physical writes
4,242,336 physical writes direct
4,233,926 physical writes direct temporary tablespace
4,242,336 physical writes non checkpoint
43 pinned buffers inspected
6 prefetched blocks aged out before use
2,230,749 PX local messages recv'd
2,230,749 PX local messages sent
12 PX remote messages recv'd
12 PX remote messages sent
4 queries parallelized
35,208 recursive calls
54,478 recursive cpu usage
10,576 redo entries
69,611,152 redo size
69,270,612 redo size for direct writes
107 redo subscn max counts
1 redo synch writes
16 Requests to/from client
608,188 rows fetched via callback
56 session cursor cache count
162 session cursor cache hits
52,949,859 session logical reads
3,080,192 session pga memory
192,741,376 session pga memory max
1,833,600 session uga memory
177,646,336 session uga memory max
790,834 shared hash latch upgrades - no wait
2 shared hash latch upgrades - wait
162 sorts (memory)
7,913,932 sorts (rows)
110 sql area evicted
16 SQL*Net roundtrips to/from client
2 switch current to new buffer
78,830,039 table fetch by rowid
1 table fetch continued row
110,357 table scan blocks gotten
13,524,901 table scan disk non-IMC rows gotten
13,813,969 table scan rows gotten
864 table scans (direct read)
2,959 table scans (long tables)
366 table scans (rowid ranges)
1,746 table scans (short tables)
2,097,152 temp space allocated (bytes)
110,832 undo change vector size
143 user calls
2 user commits
55,172 user I/O wait time
12 workarea executions - onepass
371 workarea executions - optimal
system@ouhubprd> select count(1) from t3;
COUNT(1)
---------
589881
1 row selected.
*/
}}}
!! 3rd version - onion approach
{{{
WITH
dtrng AS (
SELECT /* test rate class */ mnth
FROM (
SELECT DISTINCT To_char(dt, 'YYYYMM') mnth
FROM (
SELECT LEVEL n, Add_months(Trunc(SYSDATE, 'YEAR')-1, -12) + LEVEL AS dt
FROM dual
CONNECT BY LEVEL <= ( Add_months(Trunc(SYSDATE, 'MM'), -1) - Add_months(Trunc(SYSDATE, 'YEAR'), -13))
) x
ORDER BY 1
)
),
rc_char AS (
SELECT RS.rs_cd,
CH.char_val,
Trim(CH.char_type_cd) char_type_cd,
effdt
FROM cisadm.ci_rs RS
left join (
SELECT rs_cd,
calc_grp_cd,
effdt
FROM cisadm.c1_rs_rv2
) RV
ON RV.rs_cd = RS.rs_cd
left join cisadm.c1_calc_rule_char CH ON CH.calc_grp_cd = RV.calc_grp_cd
WHERE Trim(CH.char_type_cd) = 'RATEALPH'
),
sas AS (
SELECT
/*+parallel(12)*/ x.*
FROM (
SELECT rh.sa_id,
rh.rs_cd,
rc.char_val tariff,
rh.effdt rheffdt,
rc.effdt rceffdt,
Rank() over ( PARTITION BY rh.sa_id ORDER BY rh.effdt DESC) rnk
FROM cisadm.ci_sa_rs_hist rh
join rc_char rc ON ( rh.rs_cd = rc.rs_cd )
WHERE rc.char_type_cd = 'RATEALPH'
) x
WHERE rnk = 1
),
a1 as (
SELECT
ss.rs_cd rate, ss.tariff, Trunc(ft.accounting_dt, 'MM') accounting_date, cur_amt bdgt_amt, tot_amt actual_amt,
sa.acct_id, sa.sa_type_cd,
ft.ft_id, ft.parent_id, ft.ft_type_flg, ft.sibling_id
FROM cisadm.ci_ft ft
join cisadm.ci_sa sa ON ( ft.sa_id = sa.sa_id )
join sas ss ON ( ft.sa_id = ss.sa_id )
join dtrng dt ON ( To_char(accounting_dt, 'YYYYMM') = dt.mnth )
where ft_type_flg LIKE 'B%'
AND ft.accounting_dt >= To_date('01-AUG-20', 'DD-MON-YY')
/* Added for PREPROD */
AND ft.accounting_dt <= To_date('31-AUG-20', 'DD-MON-YY')
/* Added for PREPROD */
AND freeze_dttm IS NOT NULL
AND ( To_char(rheffdt, 'YYYYMM') <= dt.mnth )
AND ( To_char(rceffdt, 'YYYYMM') <= dt.mnth )
),
a2 as (
select
a1.rate, a1.tariff, a1.accounting_date, a1.bdgt_amt, a1.actual_amt,
a1.acct_id, a1.sa_type_cd, a1.ft_id, a1.parent_id, a1.ft_type_flg, a1.sibling_id,
max(CASE WHEN ft_type_flg = 'BX' THEN -1 ELSE 1 END * sq.bill_sq) bill_kw
from a1
left join cisadm.ci_bseg_sq sq ON (sq.bseg_id = a1.sibling_id and sq.sqi_cd = 'BILLKW' )
group by a1.rate, a1.tariff, a1.accounting_date, a1.bdgt_amt, a1.actual_amt,
a1.acct_id, a1.sa_type_cd, a1.ft_id, a1.parent_id, a1.ft_type_flg, a1.sibling_id
),
a3 as (
select a2.rate, a2.tariff, a2.accounting_date, a2.bdgt_amt, a2.actual_amt,
a2.acct_id, a2.sa_type_cd, a2.ft_id, a2.parent_id, a2.ft_type_flg, a2.sibling_id, a2.bill_kw,
CASE WHEN ft_type_flg = 'BX' THEN -1 ELSE 1 END * sum(sq2.bill_sq) bill_kwh
from a2
left join cisadm.ci_bseg_sq sq2 ON (sq2.bseg_id = a2.sibling_id and sq2.uom_cd NOT IN ( 'MTRR', 'KW', 'KVARH', 'PWRF', 'HPS', 'KWHE', 'KWHQ', 'KWHH', ' ' ) and sq2.tou_cd = ' ' )
group by a2.rate, a2.tariff, a2.accounting_date, a2.bdgt_amt, a2.actual_amt,
a2.acct_id, a2.sa_type_cd, a2.ft_id, a2.parent_id, a2.ft_type_flg, a2.sibling_id, a2.bill_kw
),
a4 as (
select a3.rate, a3.tariff, a3.accounting_date, a3.bdgt_amt, a3.actual_amt,
a3.acct_id, a3.sa_type_cd, a3.ft_id, a3.parent_id, a3.ft_type_flg, a3.sibling_id, a3.bill_kw, a3.bill_kwh,
max(pn.entity_name) customer
from a3
left join cisadm.ci_acct_per ap ON (ap.main_cust_sw = 'Y' AND ap.acct_id = a3.acct_id)
left join cisadm.ci_per_name pn ON (pn.prim_name_sw = 'Y' AND pn.per_id = ap.per_id)
group by a3.rate, a3.tariff, a3.accounting_date, a3.bdgt_amt, a3.actual_amt,
a3.acct_id, a3.sa_type_cd, a3.ft_id, a3.parent_id, a3.ft_type_flg, a3.sibling_id, a3.bill_kw, a3.bill_kwh
),
a5 as (
select a4.rate, a4.tariff, a4.accounting_date, a4.bdgt_amt, a4.actual_amt,
a4.acct_id, a4.sa_type_cd, a4.ft_id, a4.parent_id, a4.ft_type_flg, a4.sibling_id, a4.bill_kw, a4.bill_kwh, a4.customer,
max(st.rev_cl_cd) revn_cls
from a4
left join cisadm.ci_sa_type st ON (st.sa_type_cd = a4.sa_type_cd)
group by a4.rate, a4.tariff, a4.accounting_date, a4.bdgt_amt, a4.actual_amt,
a4.acct_id, a4.sa_type_cd, a4.ft_id, a4.parent_id, a4.ft_type_flg, a4.sibling_id, a4.bill_kw, a4.bill_kwh, a4.customer
),
a6 as (
select a5.rate, a5.tariff, a5.accounting_date, a5.bdgt_amt, a5.actual_amt,
a5.acct_id, a5.sa_type_cd, a5.ft_id, a5.parent_id, a5.ft_type_flg, a5.sibling_id, a5.bill_kw, a5.bill_kwh, a5.customer, a5.revn_cls,
nvl(sum(-fg.amount),0) revn_amt
from a5
left join cisadm.ci_ft_gl fg ON (fg.dst_id LIKE 'RV%' AND fg.ft_id = a5.ft_id)
group by a5.rate, a5.tariff, a5.accounting_date, a5.bdgt_amt, a5.actual_amt,
a5.acct_id, a5.sa_type_cd, a5.ft_id, a5.parent_id, a5.ft_type_flg, a5.sibling_id, a5.bill_kw, a5.bill_kwh, a5.customer, a5.revn_cls
),
a7 as (
select a6.rate, a6.tariff, a6.accounting_date, a6.bdgt_amt, a6.actual_amt,
a6.acct_id, a6.sa_type_cd, a6.ft_id, a6.parent_id, a6.ft_type_flg, a6.sibling_id, a6.bill_kw, a6.bill_kwh, a6.customer, a6.revn_cls, a6.revn_amt,
nvl(sum(-bc.calc_amt),0) cu_amt
from a6
left join cisadm.ci_bseg bs ON ( bs.bill_id = a6.parent_id )
left join cisadm.ci_bseg_calc_ln bc ON (trim(bc.dst_id) = 'EX-MISCGL' AND bc.bseg_id = bs.bseg_id)
group by a6.rate, a6.tariff, a6.accounting_date, a6.bdgt_amt, a6.actual_amt,
a6.acct_id, a6.sa_type_cd, a6.ft_id, a6.parent_id, a6.ft_type_flg, a6.sibling_id, a6.bill_kw, a6.bill_kwh, a6.customer, a6.revn_cls, a6.revn_amt
)
SELECT /*+parallel(12)*/
acct_id,
customer,
sa_type_cd,
revn_cls,
rate,
tariff,
accounting_date,
SUM(bdgt_amt) bdgt_amt,
CASE WHEN Trim(revn_cls) = 'COMPANY' THEN SUM(cu_amt) ELSE SUM(actual_amt) END actual_amt,
SUM(revn_amt) revn_amt,
SUM(bill_kwh) bill_kwh,
Max(bill_kw) bill_kw
FROM a7
GROUP BY acct_id,
customer,
sa_type_cd,
revn_cls,
rate,
tariff,
accounting_date
/
/*
Elapsed: 00:01:03.48
Difference Statistics Name
---------------- --------------------------------------------------------------
6 active txn count during cleanout
1,271,660 Batched IO block miss count
359,302 Batched IO (bound) vector count
14 Batched IO buffer defrag count
70,097 Batched IO double miss count
763,930 Batched IO same unit count
395,067 Batched IO single block count
72,622 Batched IO vector block count
34,102 Batched IO vector read count
28,534,825 buffer is not pinned count
162,191,316 buffer is pinned count
12,361 bytes received via SQL*Net from client
3,535 bytes sent via SQL*Net to client
16,179 Cached Commit SCN referenced
1,909 calls to get snapshot scn: kcmgss
107 calls to kcmgas
47,437 calls to kcmgcs
56 CCursor + sql area evicted
197,785 cell blocks helped by minscn optimization
240,904 cell blocks processed by cache layer
238,168 cell blocks processed by data layer
240,904 cell blocks processed by txn layer
1,752,886 cell flash cache read hits
1,951,072,256 cell IO uncompressed bytes
1,187 cell logical write IO requests
864 cell num smartio automem buffer allocation attempts
12 cell num smartio transient cell failures
1,906 cell overwrites in flash cache
52,942,118,912 cell physical IO bytes eligible for predicate offload
52,942,118,912 cell physical IO bytes eligible for smart IOs
50,991,439,872 cell physical IO bytes saved by storage index
15,198,069,312 cell physical IO interconnect bytes
117,514,816 cell physical IO interconnect bytes returned by smart scan
864 cell scans
852 cell smart IO session cache hits
852 cell smart IO session cache lookups
852 cell smart IO session cache soft misses
2,240 cell writes to flash cache
3 change write time
70 cleanout - number of ktugct calls
66 cleanouts only - consistent read gets
522 cluster key scan block gets
214 cluster key scans
80 cluster wait time
79 commit batch/immediate performed
79 commit batch/immediate requested
6 commit cleanout failures: block lost
2 commit cleanout failures: callback failure
61 commit cleanouts
53 commit cleanouts successfully completed
79 commit immediate performed
79 commit immediate requested
66 Commit SCN cached
67 commit txn count during cleanout
9,219 concurrency wait time
1,258 consistent changes
57,260,214 consistent gets
6,462,661 consistent gets direct
19,405,844 consistent gets examination
19,249,529 consistent gets examination (fastpath)
50,797,553 consistent gets from cache
31,391,709 consistent gets pin
29,821,428 consistent gets pin (fastpath)
32,714 CPU used by this session
443 CPU used when call started
3,906 db block changes
10,987 db block gets
8,410 db block gets direct
2,577 db block gets from cache
358 db block gets from cache (fastpath)
146,256 DB time
28 deferred (CURRENT) block cleanout applications
4 DFO trees parallelized
167 dirty buffers inspected
66 enqueue conversions
769 enqueue releases
870 enqueue requests
22 enqueue timeouts
21 enqueue waits
914 execute count
375,723,722 file io wait time
1,902,404 free buffer inspected
1,767,840 free buffer requested
1 gc cr blocks received
12 gc current block receive time
1,029 gc current blocks received
1,759,369 gc local grants
7,480 gc remote grants
544 gcs affinity lock failures
8,510 gcs messages sent
1 gcs read-mostly lock failures
538,609 gcs read-mostly lock grants
15 ges messages sent
1 global enqueue get time
3,392 global enqueue gets sync
3,291 global enqueue releases
1 heap block compress
2 Heap Segment Array Updates
880,631 hot buffers moved to head of LRU
1,213 HSC Heap Segment Block Changes
66 immediate (CR) block cleanout applications
9 immediate (CURRENT) block cleanout applications
72,676 in call idle wait time
1,266 index fast full scans (full)
3,827,633 index fetch by key
6,220,919 index scans kdiixs1
80 KTFB alloc req
69,206,016 KTFB alloc space (block)
2,127 KTFB alloc time (ms)
416,154,664,960 logical read bytes from cache
211 messages sent
65,121 min active SCN optimization applied on CR
10,257 no buffer to keep pinned count
31,342,106 no work - consistent read gets
4,166,666 non-idle wait count
47,004 non-idle wait time
1,051 opened cursors cumulative
4 Parallel operations not downgraded
12 parse count (failures)
18 parse count (hard)
448 parse count (total)
855 parse time cpu
10,095 parse time elapsed
67,415,498,752 physical read bytes
1,802,611 physical read IO requests
1,802,625 physical read requests optimized
67,415,728,128 physical read total bytes
67,415,728,128 physical read total bytes optimized
1,802,625 physical read total IO requests
51,514 physical read total multi block requests
8,229,431 physical reads
1,766,770 physical reads cache
42,632 physical reads cache prefetch
6,462,661 physical reads direct
303,472,640 physical write bytes
1,187 physical write IO requests
1,120 physical write requests optimized
303,472,640 physical write total bytes
234,700,800 physical write total bytes optimized
1,187 physical write total IO requests
962 physical write total multi block requests
37,045 physical writes
37,045 physical writes direct
28,635 physical writes direct temporary tablespace
37,045 physical writes non checkpoint
17 pinned buffers inspected
1,720 prefetched blocks aged out before use
406,318 PX local messages recv'd
406,318 PX local messages sent
12 PX remote messages recv'd
12 PX remote messages sent
4 queries parallelized
2,515 recursive calls
32,574 recursive cpu usage
10,567 redo entries
69,612,644 redo size
69,270,656 redo size for direct writes
69 redo subscn max counts
1 redo synch writes
16 Requests to/from client
608,392 rows fetched via callback
81 session cursor cache count
689 session cursor cache hits
57,271,201 session logical reads
2,883,584 session pga memory
109,707,264 session pga memory max
2,746,992 session uga memory
105,709,736 session uga memory max
843,754 shared hash latch upgrades - no wait
68 shared hash latch upgrades - wait
78 sorts (memory)
7,265,269 sorts (rows)
56 sql area evicted
12 sql area purged
16 SQL*Net roundtrips to/from client
2 switch current to new buffer
91,938,642 table fetch by rowid
2 table fetch continued row
101,686 table scan blocks gotten
10,064,796 table scan disk non-IMC rows gotten
11,099,748 table scan rows gotten
864 table scans (direct read)
10,208 table scans (long tables)
370 table scans (rowid ranges)
89 table scans (short tables)
2,097,152 temp space allocated (bytes)
112,856 undo change vector size
143 user calls
2 user commits
37,610 user I/O wait time
523 workarea executions - optimal
system@ouhubprd> select count(1) from t2;
COUNT(1)
---------
589881
1 row selected.
*/
}}}
Using this is the quickest way of attaching storage.. the GUI is so tedious!
http://www.virtualbox.org/manual/ch08.html#vboxmanage-storageattach
http://www.linux-mag.com/id/7673/
''Developer VMs'' http://www.oracle.com/technetwork/community/developer-vm/index.html
''VirtualBox Template for Oracle VM Manager'' http://www.oracle.com/technetwork/server-storage/vm/template-1482544.html
''Oracle VM 3:Building a Demo Environment using Oracle VM VirtualBox'' http://www.oracle.com/technetwork/server-storage/vm/ovm3-demo-vbox-1680215.pdf
''sunstorage7000 simulator'' http://download.oracle.com/otn/other/servers-storage/storage/open-storage/sunstorage7000/SunStorageVBox.zip, ''quick start guide'' http://www.oracle.com/us/dm/h2fy11/simulator5-minute-guide-192152.pdf
<<<
Setup
When the simulator initially boots you will be prompted for some basic network settings (this is exactly the same as if you were using an actual 7110, 7210 or 7410). Many of these fields should be filled in for you. Here are some tips if you're unsure how to fill in any of the required fields:
Host Name: Any name you want.
DNS Domain: "localdomain"
Default Router: The same as the IP address, but put 1 as the final octet.
DNS Server: The same as the IP address, but put 1 as the final octet.
Password: Whatever you want.
After you enter this information, wait until the screen provides you with two URLs to use for subsequent configuration and administration, one with an IP address and one with the hostname you specified. Direct a web browser to the URL with the IP address (for example, https://192.168.56.101:215/) to complete appliance configuration. The login name is 'root' and the password is what you entered in the initial configuration screen.
''a quick samba demo'' http://www.evernote.com/shard/s48/sh/be6faabd-df78-465a-bc4b-ce4db3c99358/6b4d5f084d8b5d1351b081009914829e
<<<
''Solaris 11.1 admin'' http://download.oracle.com/otn/solaris/11_1/README_OracleSolaris11_1-VM.txt?AuthParam=1351355880_da0ac074c8dcb5decf237138474298fe
''12c VM 12.1.0.1'' http://www.thatjeffsmith.com/archive/2014/02/introducing-the-otn-developer-day-database-12c-virtualbox-image/
''12c VM 12.1.0.2 with In-Memory Option'' http://www.oracle.com/technetwork/database/enterprise-edition/databaseappdev-vm-161299.html
https://blogs.oracle.com/fatbloke/entry/creating_a_virtualbox_appliance_that
http://www.wintips.org/how-to-fix-oracle-virtualbox-copy-paste-clipboard-windows/
{{{
end process the "VBoxTray.exe"
new task the "C:\Program Files\Oracle\VirtualBox Guest Additions\VBoxTray.exe"
}}}
then "disconnect" the redundant mapped drives
http://superuser.com/questions/86202/how-do-i-refresh-network-drives-that-were-not-accessible-during-boot-up
http://helpdeskgeek.com/virtualization/virtualbox-share-folder-host-guest/
http://www.youtube.com/watch?v=HzgL77r1AB8
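The command-line equivalent of the shared folder setup in those links.. a minimal sketch (the VM name and paths are just examples):
{{{
# host side: define the shared folder
VBoxManage sharedfolder add "OEL6.5 - sandbox" --name shared --hostpath /Users/karl/shared
# guest side (needs Guest Additions installed):
mkdir -p /mnt/shared
mount -t vboxsf shared /mnt/shared
}}}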
! Download the installer
{{{
http://www.virtualbox.org/wiki/Linux_Downloads
}}}
! Get the signature and import
{{{
wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc -O- | rpm --import -
}}}
! Download the RHEL repo
{{{
http://download.virtualbox.org/virtualbox/rpm/rhel/virtualbox.repo
}}}
! Install
{{{
yum search virtualbox
yum install virtualbox
}}}
! Reinstall
{{{
http://forums.virtualbox.org/viewtopic.php?f=7&t=40785 <-- NS_ERROR_ABORT (0x80004004) ERROR
* install kernel-source package <-- I did not install this
* add your user to vboxusers group <-- do this before the reinstall
* install dkms package <-- install this before the reinstall
* install fresh VirtualBox release <-- I did a yum remove/install
}}}
! Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR= and run Make again (this seems to occur on OEL6)
{{{
yum install kernel-devel
yum install kernel-headers
yum install gcc
yum install dkms
yum install kernel-uek-devel
yum install kernel-uek-headers
Secondly, you have to make sure you have a variable named KERN_DIR pointing to your kernel sources. In my case this is /usr/src/kernels/2.6.32-131.0.15.el6.i686:
export KERN_DIR=/usr/src/kernels/2.6.32-131.0.15.el6.i686
http://johanlouwers.blogspot.com/2011/10/unable-to-find-sources-of-your-current.html
VirtualBox --> File --> Preferences --> Input
stuck in virtualbox!! can't exit capture mode! http://ubuntuforums.org/showthread.php?t=1236990
}}}
http://code.google.com/p/vboxmon/
http://mirspo.narod.ru/labs/vbmon.html
! get memory
{{{
Karl-MacBook:~ karl$ cat vbox-list.sh
clear
echo ""
echo "# ALL config'd VMs"
VBoxManage list vms
echo ""
echo "# ALL running VMs"
VBoxManage list runningvms
echo ""
VBoxManage list --long runningvms | egrep "Memory size:" | awk -F':' '{print $2}' | sed 's/MB//g' | awk '{ x=x+$0 } END { print x "MB current used vbox physical memory"}'
echo ""
}}}
''output''
{{{
Karl-MacBook:~ karl$ sh vbox-list.sh
# ALL config'd VMs
"windows7remote" {187d94f0-fb09-41a7-aa8b-742d76db6078}
"tableauvm" {5f06e841-6ddf-4be8-906b-a6291842bfab}
"OEL5.5 11.2.0.2" {2d539a2d-22b0-41e9-93aa-da44640089b5}
"OEL6.5 12cR1" {73ab2011-cf90-4e11-ae2b-0aa4cd94f643}
"OEL6.5 - sandbox" {96282416-9199-49f2-b126-6afa48161388}
# ALL running VMs
"windows7remote" {187d94f0-fb09-41a7-aa8b-742d76db6078}
"OEL5.5 11.2.0.2" {2d539a2d-22b0-41e9-93aa-da44640089b5}
"OEL6.5 12cR1" {73ab2011-cf90-4e11-ae2b-0aa4cd94f643}
6144MB current used vbox physical memory
}}}
http://blogs.oracle.com/johngraves/entry/virtualbox_port_forward_issue
http://blogs.oracle.com/johngraves/entry/virtualbox_port_forward
http://blogs.oracle.com/johngraves/entry/virtualbox_clone_root_hd_ubuntu_network_issue
http://blogs.oracle.com/johngraves/entry/nodemanager_initd_script
http://blogs.oracle.com/johngraves/entry/remote_display_configsh_using_ssh
http://tombuntu.com/index.php/2008/12/17/configure-port-forwarding-to-a-virtualbox-guest-os/
http://sethkeiper.com/2008/01/05/virtualbox-port-forwarding-with-linux-host/
http://sethkeiper.com/2008/08/17/virtualbox-port-forwarding-with-windows-host/
http://forums.virtualbox.org/viewtopic.php?t=246
http://www.virtualbox.org/manual/ch06.html#natforward <-- official doc
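With the newer VirtualBox NAT engine the same thing is one command instead of config file edits.. a minimal sketch (the VM name and ports are just examples):
{{{
# forward host port 2222 to guest port 22 on NIC1 (NAT)
VBoxManage modifyvm "OEL5.5 11.2.0.2" --natpf1 "guestssh,tcp,,2222,,22"
# then from the host:
ssh -p 2222 oracle@127.0.0.1
}}}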
''Storage Foundation for Real Application Clusters (SFRAC) documents at:'' http://www.symantec.com/business/support/documentation.jsp?pid=23029
''How To Integrate Symantec SFRAC with Oracle CRS and Troubleshoot the Results'' [ID 782148.1]
''Storage Foundation Docs'' http://www.symantec.com/business/support/index?page=content&key=23029&channel=DOCUMENTATION
''certification matrix for database'' http://www.symantec.com/business/support/index?page=content&id=TECH74389
''symantec supported products'' http://www.symantec.com/business/support/index?page=products
''Best Practices for Optimal Configuration of Oracle Databases on Sun Hardware'' http://wikis.sun.com/download/attachments/128484865/OOW09+Optimal+Oracle+DB+Perf+1_1.pdf
Oracle Disk Manager FAQ [ID 815085.1]
VxFS vs ZFS benchmark comparison http://www.symantec.com/connect/sites/default/files/Veritas-Storage-Foundation-and-Sun-Solaris-ZFS.pdf
-- Concurrent IO
''How to use Concurrent I/O on HP-UX and improve throughput on an Oracle single-instance database [ID 1231869.1]''
* The remount command/option should not be used while mounting a filesystem with "-o cio".
* Do not use "-o cio" and "-o mincache=direct,convosync=direct" together. Use either Direct I/O or Concurrent I/O.
* Using Direct I/O and Concurrent I/O together ("-o mincache=direct,convosync=direct,cio") may cause a performance regression.
* Quick I/O, ODM, mount -o cio, and the VX_CONCURRENT advisory are mutually exclusive.
* Hence mounting Oracle binaries ($ORACLE_BASE directory) on a filesystem mounted with the "cio" option is not supported (same with JFS2). Default options are recommended.
* Concurrent I/O is not expected to provide a performance benefit over Direct I/O when used with online and archived redo logs.
* ''Use the following:'' Online redo logs and Archived redo logs ==> DIRECT, Database files (FS block size 8KB) ==> CONCURRENT, Database binaries ==> DEFAULT (see the mount sketch below)
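A minimal sketch of those recommendations as VxFS mount commands (device and mount point names are made up):
{{{
# database files -> Concurrent I/O
mount -F vxfs -o cio /dev/vx/dsk/oradg/datavol /u02/oradata
# online and archived redo logs -> Direct I/O
mount -F vxfs -o mincache=direct,convosync=direct /dev/vx/dsk/oradg/redovol /u03/oraredo
# Oracle binaries -> default options
mount -F vxfs /dev/vx/dsk/oradg/binvol /u01/app/oracle
}}}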
symantec doc on concurrent IO http://sfdoccentral.symantec.com/sf/5.0/hpux/html/fs_ref/pr_ch_fsio_fs5.html, http://sfdoccentral.symantec.com/sf/5.0/solaris64/html/sfcfs_notes/rn_ch_notes_sol_sfcfs23.html
there should be a "cio" option in /etc/fstab (or in the mount output)
Check these:
http://sfdoccentral.symantec.com/sf/5.0/hpux/html/fs_ref/pr_ch_fsio_fs5.html <-- Licensing FAQ
http://www.symantec.com/business/support/index?page=content&id=TECH49211&key=15101&actp=LIST
-- Quick IO
VERITAS Quick I/O Equivalent to Raw Volumes, Yet Easier http://eval.veritas.com/webfiles/docs/qiowp.pdf
http://www.dbatools.net/experience/convert-to-veritas-qio.html
-- Direct IO
Pros and Cons of Using Direct I/O for Databases [ID 1005087.1]
! VxFS and ASM
''Working with Oracle ASM disks mixed with VxFS'' http://h30499.www3.hp.com/t5/LVM-and-VxVM/Working-with-Oracle-ASM-disks-mixed-with-VxFS/m-p/5195483#M47442
{{{
Ken,
What I *really* *really* hate about ASM is it gets DBAs involved in stuff they shouldn't be involved in - Storage admin is not a DBA task whatever they/oracle think - unfortunately Oracle sees the whole world as an Oracle database...
I wouldn't let ASM anywhere near the contents of /dev/rdsk or /dev/rdisk... If I want a disk to be used for ASM, then I'll identify its major and minor number using
ll /dev/rdisk/disk10
or whatever, and then create a new DSF specifically for Oracle ASM using mknod with the major/minor numbers I identified:
mkdir /dev/asm
mknod /dev/asm/asmdisk1 c 23 0x000005
chown oracle:dba /dev/asm/asmdisk1
chmod 640 /dev/asm/asmdisk1
or whatever...
Then tell the DBAs to set the Oracle ASM parameter ASM_DISKSTRING to /dev/asm
This keeps ASM away from my disks, and has the added value of not resetting the ownership on the disk to bin:sys or root:sys every time insf is run...
HTH
Duncan
}}}
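Following Duncan's suggestion above, the ASM-side change is a one-liner on the ASM instance; a sketch, assuming the /dev/asm path from the quote:
{{{
# point ASM disk discovery at the dedicated DSFs only
sqlplus -s / as sysasm <<'EOF'
ALTER SYSTEM SET asm_diskstring = '/dev/asm/*' SCOPE=BOTH;
EOF
}}}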
''Asm And Veritas Storage Foundation'' [ID 736265.1]
{{{
Goal
When using Oracle RAC systems running ASM for database-related storage, the same servers also use Veritas Storage Foundation for non-clustered standalone file systems.
Sometimes adding a new LUN to an ASM diskgroup resulted in loss of the ASM disks, due to Veritas overwriting the header of the LUN. Is there a known problem when running ASM and Veritas storage manager V4.1 MP2 to manage disks for different purposes?
Solution
The current releases of Veritas and ASM simply do not know of each other. ASM has a look at the first couple of blocks on disk and tries an educated guess as to whether this may be a disk already in use. Unfortunately the Veritas volume manager puts meta data (the so called private region) at the very end of a disk. So anybody with the right permissions can add a disk already in use by ASM to a Veritas disk group and vice versa.
Each disk/LUN in ASM is, to other storage software, a non-formatted disk (raw device). Veritas will not understand that ASM is using these disks and there are no restrictions preventing it from overwriting the information.
In the end it is up to the storage managers to know which disks/LUNs are in use.
}}}
''Migrating an Oracle RAC database from Automatic Storage Management to Veritas Cluster File System'' http://www.symantec.com/connect/articles/migrating-oracle-rac-database-automatic-storage-management-veritas-cluster-file-system
''ASM to replace vxvm/vxfs'' http://www.freelists.org/post/oracle-l/ASM-to-replace-vxvmvxfs
''VxFS sparc to x86'' http://goo.gl/W4CZe
Also check out the [[ASM migration]] and [[StorageMetalink]] and [[SolarisASM]]
VxFS Commands quick reference - http://eval.veritas.com/downloads/van/fs_quickref.pdf
http://en.wikipedia.org/wiki/Comparison_of_revision_control_software#Repos
View merging
http://blogs.oracle.com/optimizer/2010/10/optimizer_transformations_view_merging_part_1.html
-- COMPREHENSIVE LIST OF HOWTO
http://www.virtualbox.org/wiki/User_HOWTOS
-- VMWARE TO VIRTUAL BOX
https://help.ubuntu.com/community/VirtualBox
http://www.virtualbox.org/wiki/Migrate_Windows
http://www.justsoftwaresolutions.co.uk/general/importing-windows-into-virtualbox.html
https://wiki.ubuntu.com/UbuntuMagazine/HowTo/Switching_From_VMWare_To_VirtualBox:_.vmdk_To_.vdi_Using_Qemu_+_VdiTool#Multiple vmdk files to single vdi <— nice multiple VMDK to single file trick!!
http://www.ubuntugeek.com/howto-convert-vmware-image-to-virtualbox-image.html
http://forums.virtualbox.org/viewtopic.php?t=104
http://dietrichschroff.blogspot.com/2008/10/using-vmware-images-with-virtualbox.html
http://communities.vmware.com/thread/88468?start=0&tstart=0 <— vdiskmanager GUI toolkit!!
http://communities.vmware.com/docs/DOC-7471
-- NETWORKING
http://open-source-experiments.blogspot.com/2008/04/migrating-from-vmware-to-virtualbox.html
''Networking in VirtualBox'' https://blogs.oracle.com/fatbloke/entry/networking_in_virtualbox1
-- NAT
http://open-source-experiments.blogspot.com/2008/04/migrating-from-vmware-to-virtualbox.html
{{{
-- page 145 of VirtualBox official doc <-- well this does not seem to work!
In NAT mode, the guest network interface is assigned to the IPv4 range 10.0.x.0/24 by default
where x corresponds to the instance of the NAT interface +2. So x is 2 when there is only one
NAT instance active. In that case the guest is assigned to the address 10.0.2.15, the gateway is
set to 10.0.2.2 and the name server can be found at 10.0.2.3.
If, for any reason, the NAT network needs to be changed, this can be achieved with the following
command:
VBoxManage modifyvm "VM name" --natnet1 "192.168/16"
This command would reserve the network addresses from 192.168.0.0 to 192.168.254.254
for the first NAT network instance of “VM name”. The guest IP would be assigned to
192.168.0.15 and the default gateway could be found at 192.168.0.2.
}}}
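For the common case behind most of these links (reaching a NAT'ed guest from the host), a minimal port-forwarding sketch using the VBoxManage natpf syntax from the official doc above; the VM name and ports are assumptions:
{{{
# forward host port 2222 to guest port 22 on the first NAT adapter (VirtualBox 4.x+)
VBoxManage modifyvm "myvm" --natpf1 "guestssh,tcp,,2222,,22"
# then from the host:
ssh -p 2222 user@localhost
# remove the rule
VBoxManage modifyvm "myvm" --natpf1 delete guestssh
}}}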
http://forums.virtualbox.org/viewtopic.php?f=6&p=171816 <-- NAT on windows xp
https://wiki.ubuntu.com/VirtualBoxNetworking <-- NAT on Virtual Box, the physical host way!
https://wiki.ubuntu.com/VirtualboxHostNetworkingAndWIFI
http://www.tolaris.com/2009/03/05/using-host-networking-and-nat-with-virtualbox/
http://kdl.nobugware.com/post/2009/02/17/virtualbox-nat-ssh-guest/
http://tombuntu.com/index.php/2008/12/17/configure-port-forwarding-to-a-virtualbox-guest-os/
http://www.aviransplace.com/2008/06/12/virtualbox-configuring-port-forwarding-with-nat/
http://virtualboximages.com/node/158
-- ROUTE
''Share the guest's internet connection to the host'' https://forums.virtualbox.org/viewtopic.php?f=2&t=23823
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch03_:_Linux_Networking
http://www.cyberciti.biz/faq/linux-setup-default-gateway-with-route-command/
-- STORAGE
http://forums.virtualbox.org/viewtopic.php?f=2&t=26444
http://forums.virtualbox.org/viewtopic.php?f=1&t=23117&start=0
-- RAC ON VIRTUALBOX
http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php
{{{
Create 5 sharable virtual disks and associate them as virtual media using one of the following sets of commands on the host server. If you are using a version of VirtualBox prior to 4.0.0, then use the following commands.
$ # Create the disks and associate them with VirtualBox as virtual media.
$ VBoxManage createhd --filename asm1.vdi --size 10240 --format VDI --variant Fixed --type shareable --remember
$ VBoxManage createhd --filename asm2.vdi --size 10240 --format VDI --variant Fixed --type shareable --remember
$ VBoxManage createhd --filename asm3.vdi --size 10240 --format VDI --variant Fixed --type shareable --remember
$ VBoxManage createhd --filename asm4.vdi --size 10240 --format VDI --variant Fixed --type shareable --remember
$ VBoxManage createhd --filename asm5.vdi --size 10240 --format VDI --variant Fixed --type shareable --remember
$
$ # Connect them to the VM.
$ VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium asm1.vdi
$ VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium asm2.vdi
$ VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium asm3.vdi
$ VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 4 --device 0 --type hdd --medium asm4.vdi
$ VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 5 --device 0 --type hdd --medium asm5.vdi
If you are using VirtualBox version 4.0.0 or later, use the following commands.
$ # Create the disks and associate them with VirtualBox as virtual media.
$ VBoxManage createhd --filename asm1.vdi --size 10240 --format VDI --variant Fixed
$ VBoxManage createhd --filename asm2.vdi --size 10240 --format VDI --variant Fixed
$ VBoxManage createhd --filename asm3.vdi --size 10240 --format VDI --variant Fixed
$ VBoxManage createhd --filename asm4.vdi --size 10240 --format VDI --variant Fixed
$ VBoxManage createhd --filename asm5.vdi --size 10240 --format VDI --variant Fixed
$
$ # Connect them to the VM.
$ VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium asm1.vdi --mtype shareable
$ VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium asm2.vdi --mtype shareable
$ VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium asm3.vdi --mtype shareable
$ VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 4 --device 0 --type hdd --medium asm4.vdi --mtype shareable
$ VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 5 --device 0 --type hdd --medium asm5.vdi --mtype shareable
$
$ # Make shareable.
$ VBoxManage modifyhd asm1.vdi --type shareable
$ VBoxManage modifyhd asm2.vdi --type shareable
$ VBoxManage modifyhd asm3.vdi --type shareable
$ VBoxManage modifyhd asm4.vdi --type shareable
$ VBoxManage modifyhd asm5.vdi --type shareable
}}}
-- SOLARIS
http://blogs.sun.com/jimlaurent/entry/importing_solaris_vmdk_image_into
-- VMDK DYNAMIC TO FIXED
http://forums.virtualbox.org/viewtopic.php?f=1&t=15063
http://forums.virtualbox.org/viewtopic.php?t=8669
http://forums.virtualbox.org/viewtopic.php?f=1&t=35188
http://forums.virtualbox.org/viewtopic.php?f=2&t=31186
-- GNOME3 FALLBACK MODE
http://forums.fedoraforum.org/showthread.php?t=263491
https://www.virtualbox.org/wiki/Downloads
{{{
# File: execute_leveltape.rman
# Description: This RMAN script executes the level0 tape backup
SHOW ALL;
REPORT SCHEMA;
LIST BACKUP OF DATABASE;
REPORT NEED BACKUP;
REPORT UNRECOVERABLE;
LIST EXPIRED BACKUP BY FILE;
LIST ARCHIVELOG ALL;
REPORT OBSOLETE;
CROSSCHECK BACKUP DEVICE TYPE DISK;
CROSSCHECK COPY OF ARCHIVELOG ALL;
DELETE NOPROMPT EXPIRED BACKUP DEVICE TYPE DISK;
DELETE NOPROMPT OBSOLETE DEVICE TYPE DISK;
DELETE NOPROMPT ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE DISK COMPLETED BEFORE 'SYSDATE-30';
# The command below doesn't perform an actual backup of the database, it checks for any type of corruption
RUN {
ALLOCATE CHANNEL SBT1 DEVICE TYPE SBT_TAPE
FORMAT '%d-%T-%U' parms='SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u02/tape/RCTST)';
BACKUP RECOVERY FILES TAG = 'RMAN_TAPE-$DATE';
}
}}}
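A sketch of how a script like the one above is typically invoked; OS authentication and the log path are assumptions:
{{{
rman target / cmdfile=execute_leveltape.rman log=/u02/tape/RCTST/leveltape_$(date +%Y%m%d).log
}}}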
''common error''
{{{
released channel: SBT1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on SBT1 channel at 05/11/2012 08:36:00
ORA-19506: failed to create sequential file, name="RCPROD-20120511-3onamsff_1_1", parms=""
ORA-27028: skgfqcre: sbtbackup returned error
ORA-19511: Error received from media manager layer, error text:
sbtpvt_catalog_open: file /u02/tape/RCPROD/Oracle_Disk_SBT_Catalog, open error, errno = 2
Solution:
chmod 775 /u02/tape/RCPROD/Oracle_Disk_SBT_Catalog
}}}
''Timeline for x86 Virtualization''
http://jason.wordpress.com/2011/12/22/im-liking-this-slide-timeline-for-x86-virtualization/
''comments on VM for production environments''
http://www.evernote.com/shard/s48/sh/36d29fe0-a22e-4f6a-b3f4-163325e7d685/c84a1184c24e5636a9b54170677951af
''The Virtualization Landscape''
http://itechthoughts.wordpress.com/2009/11/10/virtualization-basics/
[img[ http://itechthoughts.files.wordpress.com/2009/11/vbasics6.png ]]
How to Check whether Your Hardware is Capable of Full Virtualization
Doc ID: Note:468463.1
{{{
# echo; grep -q "\(vmx\|svm\)" /proc/cpuinfo; if [ $? -eq 0 ]; then echo "Full Virtualization IS supported"; else echo "Full Virtualization IS NOT supported"; fi
}}}
Oracle VM General Policy Description
Doc ID: Note:466538.1
Oracle VM and Microsoft Windows
Doc ID: Note:468634.1
Certified Software on Oracle VM
Doc ID: Note:464754.1
Why am I able to use a single license on more than one machine?
Doc ID: Note:451818.1
-- SUPPORT POSITION ON OTHER VIRTUALIZATION PLATFORMS
Support Position for Oracle Products Running on VMWare Virtualized Environments [ID 249212.1]
Support Status for Oracle Business Intelligence on VMware Virtualized Environments (Doc ID 475484.1)
-- QoS
Oracle VM: Configuring Quality of Service (QoS) for Guest Virtual Machines
Doc ID: 735975.1
''Start with the following youtube videos''
https://www.youtube.com/watch?v=urK9sQErxHI&list=PLklSCDzsQHdmNhQUKYydLHO5d7E_2NDwK
''the VST concept''
https://sites.google.com/site/embtdbo/sql-tuning-1
https://sites.google.com/site/embtdbo/Home/sql-hint-documentation
''SQL tuning methodology''
http://docwiki.embarcadero.com/DBOptimizer/en/SQL_Tuning
{{{
1. List tables in query
2. List the joins in the query
3. Layout a diagram tables as nodes
4. Draw connectors for each join in where clause
1 to 1 connection
use blue line
1 to many
use black line
crows feet towards many
many to many
crows feet on both ends
5. Calculate filter ratios for any tables with filters
filter ratio = number of rows returned with filter / number of rows
display in yellow below table node
6. Calculate two table join sizes for all two table joins in diagram (i.e. for every connection line)
extract the join information, including filters, and run count on that join
display in red on join line
Alternatively, or as a complement, calculate join filter ratios in both directions
display at each end of join connection line
7. Find the table sizes for each table
display in green above table node
}}}
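Step 5 above (filter ratio) is just two counts; a minimal sketch, assuming a hypothetical ORDERS table with a STATUS predicate and a scott/tiger login:
{{{
sqlplus -s scott/tiger <<'EOF'
-- filter ratio = rows returned with the filter / total rows
SELECT (SELECT COUNT(*) FROM orders WHERE status = 'OPEN')
     / (SELECT COUNT(*) FROM orders) AS filter_ratio
FROM dual;
EOF
}}}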
{{{
1 Methodology
1.1 Step 1 : Tables
1.2 Step 2 : Joins
1.3 Step 3: Laying Out Join Diagram
1.4 Step 4: Join Connectors
1.5 Step 5: Predicate Filter Ratio
1.6 Step 6: Two Table Join Sizes and Join Filters
1.7 Step 7: Table Sizes
2 Analyzing Results
2.1 Query 1
2.2 Query 2
2.3 Query 3
2.4 Query 4
3 Subquery Diagramming
3.1 Simple join and non-correlated subqueries
3.2 Single row subquery
3.3 Correlated subquery
3.3.1 Example
3.4 Outer Joins
3.5 In/Exists
3.6 Semi Joins
}}}
''Patents''
Dan Tow Memory structure and method for tuning a database statement using a join-tree data structure representation, including selectivity factors, of a master table and detail table http://www.freepatentsonline.com/5761654.html
Mozes, Ari Method and system for sample size determination for database optimizers http://www.freepatentsonline.com/6732085.html
http://www.freelists.org/post/oracle-l/Dan-Tows-SQL-formula
http://www.hotsos.com/sym09/sym_speakers_tow.html
''Recommended readables''
SQL Tuning by Dan Tow
Refactoring SQL Applications by Stéphane Faroult, chapter 5
''Related blog posts''
http://db-optimizer.blogspot.com/2010/09/sql-tuning-best-practice-visualizing.html
"Designing Efficient SQL: A Visual Approach" by Jonathan Lewis
http://www.simple-talk.com/sql/performance/designing-efficient-sql-a-visual-approach/
http://www.oaktable.net/content/kscope-2013-win-db-optimizer-license-sql-tuning-presentation
''Other Visualizations''
SQL Joins Graphically
http://db-optimizer.blogspot.com/2010/09/sql-joins-graphically.html based on http://www.codeproject.com/KB/database/Visual_SQL_Joins.aspx?msg=2919602
http://db-optimizer.blogspot.com/2009/06/sql-joins.html based on http://blog.mclaughlinsoftware.com/oracle-sql-programming/basic-sql-join-semantics/
http://www.gplivna.eu/papers/sql_join_types.htm
http://docwiki.embarcadero.com/DBOptimizer/en/Subquery_Diagramming
Tableau Visualization Gallery http://www.tableausoftware.com/learn/gallery
Google Chart API documentation
http://groups.google.com/group/google-chart-api/web/useful-links-to-api-libraries?hl=en&pli=1
Google Motion Chart
http://www.excelcharts.com/blog/google-motion-chart-api-visualization-population-trends/
http://jpbi.blogspot.com/2008/03/motion-chart-provided-by-google.html
http://www.segfl.org.uk/spot/post/how_to_use_google_motion_chart_gadget_to_liven_up_those_tables_of_data/
https://docs.google.com/viewer?url=http://nationsreportcard.gov/math_2009/media/pdf/motion_chart.pdf
Tablespace Allocation
http://www.pythian.com/news/1490/
3D rotator - Picturing, Rotation, Isolation, and Masking
http://had.blip.tv/file/1522611/
Disk IO and Latency
http://fedoraproject.org/wiki/File:Benchmark2.png
http://fedoraproject.org/wiki/F13_one_page_release_notes
Visualizing System Latency http://queue.acm.org/detail.cfm?id=1809426
Some Examples of Data Visualization
http://mathworld.wolfram.com/topics/DataVisualization.html
''Google Visualization API Gallery'' http://code.google.com/apis/visualization/documentation/gallery.html
''Yahoo Finance'' http://finance.yahoo.com/ http://finance.yahoo.com/echarts?s=^DJI+Interactive#chart4:symbol=^dji;range=1d;indicator=volume;charttype=line;crosshair=on;ohlcvalues=0;logscale=on;source=undefined
28 Rich Data Visualization Tools http://insideria.com/2009/12/28-rich-data-visualization-too.html
Videos/Talks on Visualization
http://www.gapminder.org/videos/the-joy-of-stats/
http://www.ted.com/talks/thomas_goetz_it_s_time_to_redesign_medical_data.html
http://www.ted.com/talks/lang/eng/david_mccandless_the_beauty_of_data_visualization.html
Historical milestones in data visualization
http://www.datavis.ca/milestones/
jfreechart
http://www.jfree.org/jfreechart/
http://www.webforefront.com/about/danielrubio/articles/ostg/jfreechart.html
http://cewolf.sourceforge.net/new/index.html
RRDTOOL
http://www.ludovicocaldara.net/dba/oracle-capacity-planning-with-rrdtool/
Oracle and visualization
http://rnm1978.wordpress.com/2011/03/11/getting-good-quality-io-throughput-data/
http://rnm1978.wordpress.com/2011/03/09/comparing-methods-for-recording-io-vsysstat-vs-hp-measureware/
https://github.com/grahn/oow-vote-hacking
http://www.crummy.com/software/BeautifulSoup/
http://gephi.org/
http://wiki.gephi.org/index.php/Modularity
http://inmaps.linkedinlabs.com/
http://www.cloudera.com/blog/2010/12/hadoop-world-2010-tweet-analysis/
http://www.wordle.net/
http://infosthetics.com/archives/2010/07/review_gephi_graph_exploration_software.html
''Flame Graphs'' http://dtrace.org/blogs/brendan/2011/12/16/flame-graphs/ , and here http://137.254.16.27/realneel/entry/visualizing_callstacks_via_dtrace_and
Visualizing device utilization - http://dtrace.org/blogs/brendan/2011/12/18/visualizing-device-utilization/
{{{
Command Line Interface Tools
Tabulated Data
Highlighted Data
3D Surface Plot
Animated Data
Instantaneous Values
Bar Graphs
Vector Graphs
Line Graphs
Ternary Plots
Quantized Heat Maps
}}}
''Visualizing Active Session History (ASH) Data With R'' http://structureddata.org/2011/12/20/visualizing-active-session-history-ash-data-with-r/
also talks about ''TIME_WAITED – micro, only the last sample is fixed up, the others will have TIME_WAITED=0
thanks to John Beresniewicz for this info. '' http://dboptimizer.com/2011/07/20/oracle-time-units-in-v-views/
Operating System Interface Design Between 1981-2009 http://www.webdesignerdepot.com/2009/03/operating-system-interface-design-between-1981-2009/
dboptimizer Kyle Hailey ''ggplot2'' looks cool had.co.nz/ggplot2/, check out cool graphic with ggplot by @gregrahn bit.ly/Akj6X4
hive plots http://mkweb.bcgsc.ca/linnet/
''Box Plots'' http://www.lcgceurope.com/lcgceurope/data/articlestandard/lcgceurope/132005/152912/article.pdf
http://kb.tableausoftware.com/articles/knowledgebase/box-plot-analog
http://mkt.tableausoftware.com/files/tips/pdf/BoxPlot.pdf
''Heatmap''
http://www.tableausoftware.com/public/gallery/nba-heatmap
http://support.uptimesoftware.com/community/?showtopic=81 <- heatmap example
How to Create Heat Maps in Tableau http://www.youtube.com/watch?v=Yk1odFhBJxc
http://www.visualisingdata.com/index.php/2010/07/visualising-the-wikileaks-war-logs-using-tableau-public/
http://dtrace.org/blogs/brendan/2012/03/26/subsecond-offset-heat-maps/
http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/
''Motion Charts''
http://www.tableausoftware.com/blog/6-techniques-experts
http://www.jeromecukier.net/blog/2009/09/01/review-of-tableau-50/
http://eden.rutgers.edu/~arousos/554/final/files/visualizing_data.ppt
''Stepped Line Charts''
http://community.tableausoftware.com/thread/106204
''Histogram''
http://onlinehelp.tableausoftware.com/v6.1/pro/online/en-us/i524769.html
http://www.interworks.com/blogs/iwbiteam/2012/01/30/simple-histograms-tableau
''d3.js''
Visualizing Oracle Data - ApEx and Beyond http://ba6.us/book/export/html/268, http://ba6.us/d3js_application_express_basic_dynamic_action
Mike Bostock http://bost.ocks.org/mike/
https://github.com/mbostock/d3/wiki/Gallery
http://mbostock.github.com/d3/tutorial/bar-1.html
http://mbostock.github.com/d3/tutorial/bar-2.html
AJAX retrieval using Javascript Object Notation (JSON) http://anthonyrayner.blogspot.com/2007/06/ajax-retrieval-using-javascript-object.html
http://dboptimizer.com/2012/01/22/ash-visualizations-r-ggplot2-gephi-jit-highcharts-excel-svg/
Videos:
http://css.dzone.com/articles/d3js-way-way-more-just-another
json - data
html - structure
JS+D3 - layout
CSS - pretty
webdevelop toolkit
chrome development
chrome developer toolkit
readables:
mbostock.github.com/d3/api
book: JavaScript: The Good Parts by Douglas Crockford
browse: https://developer.mozilla.org/en/SVG
watch: vimeo.com/29458354
clone: GraphDB https://github.com/sones/sones
clone: Cube http://square.github.com/cube
clone: d3py https://github.com/mikedewar/D3py
''Highcharts and JSON''
http://www.highcharts.com/demo/arearange
W-ASH : Web Enabled ASH http://dboptimizer.com/2011/10/31/w-ash-web-enabled-ash/
http://dboptimizer.com/2012/06/15/how-do-people-prototype-their-quantitiative-visualizations/
http://dboptimizer.com/2011/01/05/graphing-packages-which-are-the-best-and-why/
http://dboptimizer.com/2011/08/30/highchart-and-json-fun/
''Click and Drag - XKCD'' http://www.xkcd.com/1110/, http://xkcd-map.rent-a-geek.de/#9/1.4322/18.2483
''Timezone case study'' http://worrydream.com/MagicInk/#case_study_train_schedules
''Security visualization'' http://gigaom.com/data/6-ways-big-data-is-helping-reinvent-enterprise-security/, http://secviz.org/
''Pareto Chart''
http://www.qimacros.com/quality-tools/pareto-compare/
http://www.youtube.com/watch?v=TBtGI2z8V48
Harvard Visualization online videos http://cm.dce.harvard.edu/2012/02/22872/publicationListing.shtml
6 comprehensive dataviz catalogues for journalists & designers http://mediakar.org/2014/05/18/1708/
https://bottlerocketapps.zendesk.com/entries/15238-faq-s-on-voxie-s-wi-fi-sync-and-copying-files-to-your-computer
''How to use Concurrent I/O on HP-UX and improve throughput on an Oracle single-instance database [ID 1231869.1]'' <-- READ THIS
http://bytes.com/topic/oracle/answers/65152-very-strange-io-problems
http://www.symantec.com/business/support/index?page=content&id=TECH35220
http://www.datadisk.co.uk/html_docs/veritas/veritas_cluster_cs.htm
http://en.wikipedia.org/wiki/Comparison_of_file_systems
http://en.wikipedia.org/wiki/Veritas_File_System
http://www.bigadmins.com/solaris/how-to-build-vxfs-file-system/
http://blogs.oracle.com/hippy/entry/vxfs_cp_performance_issue
https://forums.oracle.com/forums/thread.jspa?threadID=545758&start=30&tstart=0 <-- ''Tanel on VxFS on temp tablespace, CIO mount option''
{{{
It very much looks like the second query is executed with parallel execution (as it did not do any LIOs and mostly waited for PX Deq: events).
The first query is executed in serial. To be closer to comparing apples with apples, make either both queries parallel or both serial (using either parallel (t x) hints or alter session disable parallel query).
If you have got the parallelism under control and have ruled out any other major differences in execution plans, then one reason for temp tablespace slowness may be lack of concurrent IO on tempfiles. VxFS doesn't have concurrent direct IO as far as I know unless you use QuickIO.
Which means that if you run some heavy sorting in parallel with only one tempfile, there may be unix file level read/write contention going on. The solution would be to either enable concurrent direct IO (or put temp on raw device) or have many multiple smaller tempfiles instead of one big one, Oracle can balance the IO load between them.
}}}
http://blogs.oracle.com/hippy/entry/vxfs_cp_performance_issue <-- good stuff - Diagnosing a VxFS filesystem performance issue
{{{
Filesystem i/o parameters for /filesystem
read_pref_io = 65536
read_nstream = 1
read_unit_io = 65536
write_pref_io = 65536
write_nstream = 1
write_unit_io = 65536
pref_strength = 10
buf_breakup_size = 1048576
discovered_direct_iosz = 262144
max_direct_iosz = 1048576
default_indir_size = 8192
qio_cache_enable = 0
write_throttle = 0
max_diskq = 1048576
initial_extent_size = 1
max_seqio_extent_size = 2048
max_buf_data_size = 8192
hsm_write_prealloc = 0
read_ahead = 1
inode_aging_size = 0
inode_aging_count = 0
fcl_maxalloc = 8130396160
fcl_keeptime = 0
fcl_winterval = 3600
oltp_load = 0
}}}
http://blogs.oracle.com/hippy/entry/oracle_redo_logs_and_dd <-- Oracle redo logs and dd
http://blogs.oracle.com/hippy/?page=1
http://www.symantec.com/business/support/index?page=content&id=TECH161726 <-- Poor Oracle IO performance on VXFS for RDBMS 11gR2 compared to 10gR2 on Solaris, ''Apply Oracle patch# 10336129 to RDBMS 11.2.0.2''
http://www.sunmanagers.org/pipermail/summaries/2006-September/007567.html <-- issue on directio so resorted to large cache vxfs buffering
{{{
# iostat -xnz is a great was of looking at your Busy
luns/disks
# vxdisk -e list is a great way of seeing storage vxvm
sees Complete With WWNs
}}}
http://christianbilien.wordpress.com/
http://cuddletech.com/veritas/advx/advx.pdf <-- Advanced Veritas Theory cuddletech
http://unix.derkeiler.com/pdf/Mailing-Lists/SunManagers/2006-09/msg00297.pdf
http://goo.gl/a2FHy <-- san disk metrics ppt
http://www.redbooks.ibm.com/redbooks/pdfs/sg247186.pdf <-- red books solaris to linux migration
http://www.lugod.org/presentations/filesystems.pdf <-- brief history of unix filesystems
http://www.symantec.com/business/support/index?page=content&id=TECH87101 <-- ''Performance checklist with VXFS and VXVM''
{{{
Problem
Performance checklist with VXFS and VXVM.
Solution
If the application read/write times are slow check the following configuration/tunable's.
A general rule of thumb would be to check the volume layout (concatenated / stripe) as a volume relayout may help out quite a bit.
OS level striping should be considered for the following reasons:
1) To spread I/O across as many physical spindles as possible.
2) Avoid the tedious task of manually balancing I/O(s) across the available LUN(s)
For best performance, the following rules should be considered:
1) Each LUN involved in the striping should be coming from different hardware parity group(s). Striping across multiple LUN(s) from the same hardware parity group will increase disk rotation and have adverse effect(s) on performance.
2) Each LUN involved in striping should have similar I/O performance. In general, the first LUN exported from the parity group should reside on the outer most spindles this will give you the best performance. You would put files with highest performance requirement on these LUN(s)s. The performance of LUN(s) decreases towards the center of the disk.
3) When creating multiple volumes in the same disk group, avoid leaving large unused space in between volumes. Data unnecessarily spread across larger area's contribute to longer I/O service time.
If using a database , look into changing the volume layout to a stripe. Performance will depend a lot on the block size that your database writes (normally double your stripe unit size).
If the database writes with a block size of 8k then your stripe unit size should be between 64 and 128k. With this combination best performance is achieved with 6-8 column stripe.
If your block size is 32k then the stripe unit size should be 128k-256k. Again 6-8 columns make sense, anything above that realm may hinder performance.
See the appropriate administration guide on how to relayout a Veritas volume. Or see the below link for general performance guidelines:
http://support.veritas.com/docs/241620
Once we relayout the volume, you'll want to capture and change a few values within vxtunefs.
All four of the below parameters should be set to the Stripe Unit Size of the volume or multiples of it. "131072=128k stripe unit size"
read_unit_io
read_prefer_io
write_unit_io
write_pref_io
The number of outstanding parallel read and write requests are defined by the parameters:
read_nstream
write_nstream
These two values would be set to the number of columns of the volume
You would use vxtunefs to make those changes:
# /usr/lib/fs/vxfs/vxtunefs -s -o read_nstream=8 /mount_point
If the issue still persists after the above changes were made - check current patch revisions and have the customer download and run firstlook for further analysis.
ftp://ftp.veritas.com/pub/support/FirstLook.tar.gz
Legacy ID
312590
Article URL http://www.symantec.com/docs/TECH87101
}}}
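Putting the technote's recommendations together, a sketch for a hypothetical 128k stripe unit, 8-column volume (the values must match your actual volume layout; diskgroup/volume names are made up):
{{{
# set the four unit/pref parameters to the stripe unit size (131072 = 128k)
# and the nstream parameters to the number of columns
/usr/lib/fs/vxfs/vxtunefs -s -o read_pref_io=131072,write_pref_io=131072,read_unit_io=131072,write_unit_io=131072,read_nstream=8,write_nstream=8 /mount_point
# persist across remounts via /etc/vx/tunefstab
echo "/dev/vx/dsk/mydg/myvol read_pref_io=131072,write_pref_io=131072,read_nstream=8,write_nstream=8" >> /etc/vx/tunefstab
}}}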
http://www.symantec.com/business/support/index?page=content&id=TECH157737 <-- ''List of the steps for collecting data in the perspective of VxVM, VxFS, & VCS in general''
{{{
Problem
List of the steps for collecting data in the perspective of VxVM, VxFS, & VCS in general.
Solution
1) Collect date and time as soon as problem is observed.
2) Check what all operations are going on when performance degradation is observed, check if VERITAS operations are going on (look for VxVM/VxFS/VCS command if running).
3) Collect all the process currently running.
# time ps -elf
4) Collect kernel threadlist every 2 minutes, when performance degradation is observed. Threadlist will be logged into /var/log/messages file.
To avoid system hang (it can cause problems with VCS/CFS) while collecting kernel threadlist using the Linux sysrq interface follow these steps.
# cat /proc/sys/kernel/printk > /tmp/printk
# echo 0 > /proc/sys/kernel/printk
# echo t > /proc/sysrq-trigger
# cat /tmp/printk > /proc/sys/kernel/printk
5) Collect top, meminfo and vmstat command output at regular interval (top 1 sec, vmstat 5 sec).
In one session watch continuously the top 20 process:
# watch -n 1 "top -b -d 1 | head -n 20"
# top -b -d 1 /* 1 sec interval */
If interested to look at any specific process (like vxconfigd) then use the following command
# top -b -d 5 -p `ps -ef |grep vxconfigd |grep -v grep|awk '{print $2}'`
# vmstat 5 /* 5 seconds interval */
# while true
> do
> date
> cat /proc/meminfo | egrep 'MemTotal|MemFree|Buffers|Dirty'
> sleep 2
> done
6) Collect VxVM i/o statistics if there is an application i/o load.
# vxstat -i 2 -vps
7) Collect following commands output in regular interval (10 seconds) in case if any VxVM command is taking time.
# pmap `ps aux | grep vxconfigd | grep -v grep | awk '{print $2}'` | grep -i total
# cat /proc/`pgrep vxconfigd`/status
# /usr/sbin/lsof -a -p `pgrep vxconfigd`
# pstack `pgrep vxconfigd`
Collect stack of vxconfigd using 'gdb' if pstack command doesn’t work.
# gdb -p `pgrep vxconfigd`
(gdb) bt
(gdb) info threads
(gdb) info sharedlibrary
(gdb) info dcache
(gdb) quit
8) Please run the following command only if any VxVM commands are not working as expected or any VxVM/vxconfigd related issue is observed. Running vxconfigd in debug mode at
level 9 can slow down the vxconfigd.
How to start vxconfigd in debug mode at various labels (min 1, max 9)
# vxdctl debug <label> /var/tmp/vxconfigd_debug.<label>.log
Once done, disable the logging using following command.
# vxdctl debug 0
Generate gcore of vxconfigd when high memory usage is observed:
# gcore -o /usr/tmp/vxconfigd.core `pgrep vxconfigd`
VxFS specific commands (9,10) :
--------------------------------
9) Run the following command on VxFS mount points and get the "statfile" for each mount point.
# vxfsstat -w "statfile" -t 10 -c 2 /mount_point
10) If you experience any performance issue of a VxFS files system, please collect the metasave of the VxFS filesystem in question.
# cd /opt/VRTSspt/FS/MetaSave/
# ls
metasave_rhel4_x86 metasave_sles9_x86 README.metasave
metasave_rhel4_x86_64 metasave_sles9_x86_64
Please follow the steps in README.metasave.
---------------------------------------------------------------------------
11) Please collect the output of FirstLook
Please see the Technote for FirstLook (Please skip downloading it):
http://www.symantec.com/business/support/index?page=content&id=TECH70537&key=15107&actp=LIST
To execute FirstLook, invoke start_look. For example, executing FirstLook on Solaris:
# cd /opt/VRTSspt/FirstLook
# ./start_look -l
NOTE:
1) Always use the -l option to get lockstat data unless system is VERY heavily loaded.
2) Note - do not run more than one hour, the logs will overwrite after 60 minutes.
3) Try to start the script 5 minutes before the performance issue is expected to be seen.
4) For example, files will be saved in /opt/VRTSspt/FirstLook/Flook_Apr_8_11:58:00_logs.
Optionally, it is possible to confirm that the FirstLook process is running:
# ps -ef | grep look
root 19984 1 0 08:58:19 pts/1 0:00 /bin/ksh /vx_install/FirstLook/Flook_sol
After enough time to collect useful information, or if the performance issue is intermittent and has stopped, stop FirstLook:
# ./stop_look
Please compress the output using gzip, e.g., '/opt/VRTSspt/FirstLook/Flook_04_08_11_12:00:07_logs.tar'
12) Please collect the output of new version of VRTSexplorer from the affected node, in question:
# cd /opt/VRTSspt/VRTSexplorer
# ./VRTSexplorer
13) Please collect the output of VRTSexplorer from all other nodes in the VCS cluster.
14) Collect system dump from all nodes in the cluster around the same time. Please set peerinact timeout as shown in the article TECH54616.
Article URL http://www.symantec.com/docs/TECH157737
}}}
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01919408/c01919408.pdf <-- VxFS tuning and performance
http://linux-xfs.sgi.com/projects/xfs/papers/xfs_white/xfs_white_paper.html <-- ''scalability and performance in modern filesystems''
http://www.solaris-11.info/how-solaris-zfs-cache-management-differs-from-ufs-and-vxfs-file-systems.html <-- How Solaris ZFS Cache Management Differs From UFS and VXFS File Systems
{{{
ZFS unlike ufs and vxfs file systems does not throttle writers.
The trade off is that limiting this memory footprint means that the ARC is unable to cache as much file system data, and this limit could impact performance. In general, limiting the ARC is wasteful if the memory that now goes unused by ZFS is also unused by other system components. Note that non-ZFS file systems typically manage to cache data in what is nevertheless reported as free memory by the system.
}}}
http://www.filibeto.org/sun/lib/blueprints/2002-vol4-cd/0400/ram-vxfs.pdf <-- Tales from the Trenches: The Case of the RAM Starved Cluster
{{{
VxFS, has caching and non-caching capabilities similar to UFS. However, VxFS tries to
intelligently decide when to use each. By default, for small I/O operations VxFS will cache.
For large I/O operations VxFS will use direct I/O and not cache. This is called discovered
direct I/O. The vxtunefs parameter which dictates this decision is discovered_direct_iosz (in bytes) which is set on a per file system basis. The
default value for discovered_direct_iosz is 256kB. Thus, I/O operations that are
smaller than 256kB will be cached, while those larger than 256kB will not be cached
One day, the databases fill up. The database administrator (DBA) creates a new table space <-- this happened to us, only killing a CPU hogger made the process proceed
for the growth. The primary machine "locks up." The fail over secondary machine initiates a
takeover and recovery. The primary machine panics. Users are unhappy. System
administrators are unhappy.
}}}
Configuring and Tuning Oracle Storage with VERITAS Database Edition for Oracle
http://eval.veritas.com/downloads/pro/dbed22_best_practices_paper.pdf
{{{
qiostat -l
}}}
RE: Async and Direct I/O On Solaris using VXFS http://www.freelists.org/post/oracle-l/Async-and-Direct-IO-On-Solaris-using-VXFS,5
{{{
The one thing that really made a difference for me was setting a kernel
parameter called vmodsort(set vxfs:vx_vmodsort=1 in /etc/system). This
parameter makes the fdsync kernel call a lot faster which really cut down on
the CPU utilization on the servers and made IO a lot faster especially for apps
using LOBs. <-- hmm.. which they do have on the large 15TB database..
}}}
''Useful commands''
{{{
pkginfo -l VRTSvxfs
pkginfo -l VRTSvxvm
uname -a
vxdg list mydg
vxdisk list Disk_13
/etc/vx/tunefstab
}}}
http://www.solarisinternals.com/wiki/index.php/Application_Specific_Tuning <-- ''explicitly mentions VxFS and Quick IO''
Oracle File System Integration and Performance http://solarisinternals.com/si/reading/oracle_fsperf.pdf
{{{
Pick the right file system. Good combinations are:
UFS + Direct I/O
NFS + Direct I/O
QFS + Direct I/O, Q-Writes, and samaio
VxFS + Quick I/O (QIO) or Oracle Disk Manager (ODM)
}}}
http://www.solarisinternals.com/wiki/index.php/Direct_I/O <-- ''posix single writer locking''
{{{
The 'direct I/O' functionality on non-UFS filesystems is significantly different from that of UFS Direct I/O, since only UFS Direct I/O bypasses POSIX single-writer locking. Other filesystems may have their own means of bypassing this lock, but those means are usually distinct from enabling their direct I/O features. For example, QFS offers 'Q-writes' and Veritas VxFS offers 'Quick I/O' (QIO) and 'Oracle Disk Manager' (ODM) features that achieve a similar effect.
}}}
Database Performance with NAS: Optimizing Oracle on NFS http://media.netapp.com/documents/tr-3322.pdf
I/O Microbenchmarking with Oracle in Mind http://www.slideshare.net/WhizBob/io-micro-preso07 <-- ''nice chart on direct io and concurrent io comparison on vxfs''
Solaris System Commands for Oracle DBA’s http://www.asktherealtom.ch/?tag=solaris
VxFS 3.5 is confused about the cpu_t again [ID 1003664.1]
High fsync() times to VRTSvxfs Files can be reduced using Solaris VMODSORT Feature [ID 842718.1]
{{{
Symptoms
When RDBMS processes perform cached writes to files (i.e. writes which are not issued by DBWR)
such as to a LOB object which is
stored out-of-line (e.g. because the LOB column length exceeds 3964 bytes)
and for which "STORE AS ( NOCACHE )" option has not been used
then increased processing times can be experienced which are due to longer fsync() call times to flush the dirty pages to disk.
Changes
Performing (datapump) imports or writes to LOB segments and
1. running "truss -faedDl -p " for the shadow or background process doing the writes
shows long times spent in fsync() call.
Example:
create table lobtab(n number not null, c clob);
-- insert.sql
declare
mylob varchar2(4000);
begin
for i in 1..10 loop
mylob := RPAD('X', 3999, 'Z');
insert into lobtab values (i , rawtohex(mylob));
end loop;
end;
/
truss -faedDl sqlplus user/passwd @insert
shows 10 fsync() calls being executed possibly having high elapsed times:
25829/1: 1.3725 0.0121 fdsync(257, FSYNC) = 0
25829/1: 1.4062 0.0011 fdsync(257, FSYNC) = 0
25829/1: 1.4112 0.0008 fdsync(257, FSYNC) = 0
25829/1: 1.4164 0.0010 fdsync(257, FSYNC) = 0
25829/1: 1.4213 0.0008 fdsync(257, FSYNC) = 0
25829/1: 1.4508 0.0008 fdsync(257, FSYNC) = 0
25829/1: 1.4766 0.0207 fdsync(257, FSYNC) = 0
25829/1: 1.4821 0.0006 fdsync(257, FSYNC) = 0
25829/1: 1.4931 0.0063 fdsync(257, FSYNC) = 0
25829/1: 1.4985 0.0007 fdsync(257, FSYNC) = 0
25829/1: 1.5406 0.0002 fdsync(257, FSYNC) = 0
2. Solaris lockstat command showing frequent hold events for fsync internal functions:
Example:
Adaptive mutex hold: 432933 events in 7.742 seconds (55922 events/sec)
------------------------------------------------------------------------
Count indv cuml rcnt nsec Lock Hottest Caller
15052 48% 48% 0.00 385437 vph_mutex[32784] pvn_vplist_dirty+0x368
nsec ------ Time Distribution ------ count Stack
8192 |@@@ 1634 vx_putpage_dirty+0xf0
16384 | 187 vx_do_putpage+0xac
32768 | 10 vx_fsync+0x2a4
65536 |@@@@@@@@@@@@@@@@@@@@@@ 12884 fop_fsync+0x14
131072 | 255 fdsync+0x20
262144 | 30 syscall_trap+0xac
3. AWR report would show increased CPU activity (SYS_TIME is unusual high in Operating System Statistics section).
Cause
The official Sun document explaining this issue is former Solaris Alert # 201248 and new
"My Oracle Support" Doc Id 1000932.1
From a related Sun document:
Sun introduced a page ordering vnode optimization in Solaris 9
and 10. The optimization includes a new vnode flag, VMODSORT,
which, when turned on, indicates that the Virtual Memory (VM)
should maintain the v_pages list in an order depending on if
a page is modified or unmodified.
Veritas File System (VxFS) can now take advantage of that flag,
which can result in significant performance improvements on
operations that depend on flushing, such as fsync.
This optimization requires the fixes for Sun BugID's 6393251 and
6538758 which are included in Solaris kernel patches listed below.
Symantec information about VMODSORT can be found in the Veritas 5.0 MP1RP2 Patch README:
https://sort.symantec.com/patch/detail/276
Solution
The problem is resolved by applying Solaris patches and enabling the VMODSORT
feature in /etc/system:
1. apply patches as per Sun document (please always refer to
the Sun alert for the most current recommended version of patches):
SPARC Platform
VxFS 4.1 (for Solaris 9) patches 122300-11 and 123828-04 or later
VxFS 5.0 (for Solaris 9) patches 122300-11 and 125761-02 or later
VxFS 4.1 (for Solaris 10) patches 127111-01 and 123829-04 or later
VxFS 5.0 (for Solaris 10) patches 127111-01 and 125762-02
x86 Platform
VxFS 5.0 (for Solaris 10) patches 127112-01 and 125847-01 or later
2. enable vmodsort in /etc/system and reboot server
i.e. add line to /etc/system after vxfs forceload:
set vxfs:vx_vmodsort=1 * enable vxfs vmodsort
Please be aware that enabling VxFS VMODSORT functionality without
the correct OS kernel patches can result in data corruption.
References
http://sunsolve.sun.com/search/document.do?assetkey=1-66-201248-1
}}}
''VxFS file system is taking too much memory. Suggestions for tuning the system''
http://www.symantec.com/business/support/index?page=content&id=TECH5836
''vn_vfslocks_bucket''
http://fxr.watson.org/fxr/ident?v=OPENSOLARIS;im=bigexcerpts;i=vn_vfslocks_bucket
there should be "cio" option in etc/fstab (or mount output)
Check these:
http://sfdoccentral.symantec.com/sf/5.0/hpux/html/fs_ref/pr_ch_fsio_fs5.html ''<-- Licensing FAQ''
http://www.symantec.com/business/support/index?page=content&id=TECH49211&key=15101&actp=LIST
VxFS vs ZFS
https://blogs.oracle.com/dom/entry/zfs_v_vxfs_iozone
https://blogs.oracle.com/dom/entry/zfs_v_vxfs_ease
What is the maximum file size supported?
https://kb.netapp.com/support/index?id=3010842&page=content&locale=en_US
A few more updates from Exadata Development analyzing the data
from the standby on 5/2, where resync is now complete and we have the two
standby databases running.
The standby database workloads are IO bound. Every disk is busy with ~220
IOPS (while our data sheet for max IOPS on x2-2 with HC disks is ~160).
In addition, WBFC is offloading ~6.3K IOPS (3.8K write IOPS + 2.5K read IOPS)
from disk into flash per cell.
In short, the workload has fully saturated the storage system.
At this point, they'll need to look into expanding their storage, and/or
tuning their workloads to reduce IOs. From the IO point of view, they are
definitely max'ed out.
Note also that based on the analysis WBFC is helping their workload quite a
bit as well.
1) WBFC is absorbing 2.5K read hits and 3.8K redirty writes per second. WBFC
redirty ratio (# of redirty writes/ # of total writes) is 60%.
2) WBFC read misses are sending ~40 reads per second to each HDD.
3) WBFC flushing is sending ~120 writes per second to each HDD. Flushing
occurs due to read misses/populations and new writes.
4) Standby log reads and writes take up the rest ~60 IOPS on each HDD.
There is no smart scan in the workload so there is no read fragmentation. All datafile IOs are 8K.
http://blog.oracle-ninja.com/2012/08/exadata-flash-write-back-sooner-than-we-think/
http://kevinclosson.wordpress.com/2012/08/13/oracle-announces-the-worlds-second-oltp-machine-public-disclosure-of-exadata-futures-with-write-back-flash-cache-thats-a-sneak-peek-at-oow-2012-big-news/
* kevin talks about the destage
Destage Algorithms for Disk Arrays with Non-Volatile Caches http://reference.kfupm.edu.sa/content/d/e/destage_algorithms_for_disk_arrays_with__88539.pdf
http://en.wikipedia.org/wiki/Copy-on-write
-- the events
{{{
cell flash cache read hits
+
writes
}}}
http://glennfawcett.wordpress.com/2013/06/18/analyzing-io-at-the-exadata-cell-level-a-simple-tool-for-iops/
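To see per-celldisk IOPS numbers like the ones quoted above, a sketch using the cell metrics (run on a cell, or fanned out via dcli; the cell_group hosts file is an assumption):
{{{
# per-celldisk small-IO read/write requests per second
cellcli -e "list metriccurrent CD_IO_RQ_R_SM_SEC"
cellcli -e "list metriccurrent CD_IO_RQ_W_SM_SEC"
# or across all cells
dcli -g ./cell_group -l celladmin cellcli -e "list metriccurrent CD_IO_RQ_R_SM_SEC"
}}}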
''enable the wbfc''
http://motodreamer.wordpress.com/2013/06/24/how-can-i-enable-the-exadata-write-back-flash-cache/
Oracle Exadata Database Machine: "Write-Back Cache Flash" http://bit.ly/1tKN4QC , http://www.oracle.com/technetwork/pt/articles/database-performance/exadata-write-back-flash-cache-2164875-ptb.html
! IOs that don't get cached to WBFC
<<<
There are IOs that don't get cached in the flash cache when it is in write-through mode. Here are the types of IOs that don't get cached:
• RMAN backups
• ASM rebalance
• temp space used to sort/hash joins
• redo log writes (goes to flash log and disk).
The fix here is to enable the write-back flash cache (WBFC), which increases the overall IOPS capacity. Starting with 12.2.1.1.0, when WBFC is enabled, the IO types that previously bypassed the flash cache are supported and go primarily to flash (except for ASM rebalance IOs).
12.2.1.1.7.180506 is the cell version installed on the cells. Take note that flash compression needs to be disabled.
https://docs.oracle.com/cd/E80920_01/DBMSO/exadata-whatsnew.htm#DBMSO-GUID-9AD242C7-D385-4FF2-BD18-98F4C50BF0AD
Follow the steps on the following MOS note on how to enable the WBFC
• Exadata Write-Back Flash Cache - FAQ (Doc ID 1500257.1)
<<<
! enable WBFC
https://blog.zeddba.com/2018/01/24/how-to-enable-exadata-write-back-flash-cache/
https://unknowndba.blogspot.com/2019/10/exadata-flashcache-to-writeback-writethrough-online.html
https://stefanpanek.wordpress.com/2017/10/20/exadata-flash-cache-enabled-for-write-back/
https://www.bigdba.com/oracle/827/how-to-enable-oracle-exadata-write-back-smart-flash-cache/
https://motodreamer.wordpress.com/2013/06/24/how-can-i-enable-the-exadata-write-back-flash-cache/
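The gist of those write-ups, sketched below (the non-rolling method; verify the exact steps against MOS 1500257.1 before running, since dropping the flash cache discards its contents):
{{{
# on each cell
cellcli -e "list cell attributes flashCacheMode"
cellcli -e "drop flashcache"
cellcli -e "alter cell shutdown services cellsrv"
cellcli -e "alter cell flashCacheMode=writeback"
cellcli -e "alter cell startup services cellsrv"
cellcli -e "create flashcache all"
}}}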
{{{
MyBookLive:~# cat /proc/cpuinfo
processor : 0
cpu : APM82181
clock : 800.000008MHz
revision : 28.130 (pvr 12c4 1c82)
bogomips : 1600.00
timebase : 800000008
platform : PowerPC 44x Platform
model : amcc,apollo3g
Memory : 256 MB
MyBookLive:~# parted print /dev/sda
Error: Could not stat device print - No such file or directory.
Retry/Cancel? ^C
MyBookLive:~#
MyBookLive:~# parted /dev/sda print
Model: ATA WDC WD30EZRS-11J (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
3 15.7MB 528MB 513MB linux-swap(v1) primary
1 528MB 2576MB 2048MB ext3 primary raid
2 2576MB 4624MB 2048MB ext3 primary raid
4 4624MB 3001GB 2996GB ext4 primary
MyBookLive:~# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/md1 ext3 1.9G 887M 939M 49% /
tmpfs tmpfs 50M 0 50M 0% /lib/init/rw
udev tmpfs 10M 6.5M 3.5M 65% /dev
tmpfs tmpfs 50M 0 50M 0% /dev/shm
tmpfs tmpfs 50M 3.0M 48M 6% /tmp
ramlog-tmpfs tmpfs 20M 2.1M 18M 11% /var/log
/dev/sda4 ext4 2.8T 65G 2.7T 3% /DataVolume
MyBookLive:~# free -m
total used free shared buffers cached
Mem: 248 244 4 0 31 82
-/+ buffers/cache: 130 118
Swap: 488 34 454
MyBookLive:/DataVolume/shares/Public/Documents# top -c
top - 03:52:29 up 12 days, 7:50, 2 users, load average: 3.69, 1.96, 1.40
Tasks: 98 total, 4 running, 94 sleeping, 0 stopped, 0 zombie
Cpu(s): 73.7%us, 16.1%sy, 0.0%ni, 0.0%id, 0.0%wa, 1.3%hi, 8.9%si, 0.0%st
Mem: 254336k total, 248704k used, 5632k free, 11264k buffers
Swap: 500608k total, 36736k used, 463872k free, 66048k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8190 root 10 -10 15040 9.8m 6016 R 92.1 4.0 1:11.15 sshd: root@notty
8239 root 10 -10 8960 5184 3520 S 6.6 2.0 0:06.15 scp -v -r -p -t /DataVolume/shares/Public/Backup
225 root 20 0 0 0 0 D 0.3 0.0 0:18.36 [kswapd0]
1742 root 20 0 35712 2240 1600 S 0.3 0.9 5:58.45 /usr/mionet/changeNotifySocket
8307 root 10 -10 5056 3520 2368 R 0.3 1.4 0:00.28 top -c
1 root 20 0 4352 1664 1344 S 0.0 0.7 0:12.45 init [2]
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kthreadd]
3 root 20 0 0 0 0 S 0.0 0.0 0:00.52 [ksoftirqd/0]
4 root RT 0 0 0 0 S 0.0 0.0 0:00.00 [watchdog/0]
5 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [events/0]
6 root 20 0 0 0 0 S 0.0 0.0 0:00.01 [khelper]
9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [async/mgr]
159 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [sync_supers]
161 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [bdi-default]
162 root 20 0 0 0 0 S 0.0 0.0 0:00.35 [kblockd/0]
167 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [ata/0]
168 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [ata_aux]
169 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [ksuspend_usbd]
174 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [khubd]
177 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kseriod]
197 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rpciod/0]
224 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [khungtaskd]
226 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [aio/0]
227 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [nfsiod]
228 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 [kslowd000]
229 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 [kslowd001]
230 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [xfs_mru_cache]
231 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [xfslogd/0]
}}}
http://www.storagereview.com/western_digital_my_book_live_review
-- chat with a friend who had data loss on the 3TB USB external drive
<<<
Karl Arao: I bought one of those, then replaced it with a WD MyBookLive.. my issue with the 3TB drive is that it can't be read by ntfs-3g on my OEL5.7 desktop server; I'd need OEL6 and xfs to read that 3TB because the drive is on GPT.. that drive is really built for Windows 7, it's readable there because of Windows 7's dynamic partition/GPT support.. so what I did was swap the unit for the WD MyBookLive network storage, which has a small-footprint Linux OS you can hack.. so you can turn it into your NFS, FTP, or virtual tape for amanda.. so far among WD's attached storage the 2TB units are still more reliable.. I have a WD 1TB (no power adapter) and a 2TB (with power adapter), and the data on those two is backed up to the WD MyBookLive.. :) if you can still read that disk, you might be able to run photorec on it, but since it's a newer NTFS partition I'm not sure it's supported.. http://www.cgsecurity.org/wiki/PhotoRec
....
Karl Arao: BTW, the WD MyBookLive is also 3TB, but they were smarter about it.. they probably knew there were a lot of complaints about the 3TB attached storage, so they made it network storage to overcome the 3TB limit across the different OSes..
...
Karl Arao: if you reformat it as non-GPT NTFS that works too, but it will only partition 2TB.. don't do the firmware upgrade yet, get your data off first!
<<<
https://sites.google.com/site/embtdbo/wait-event-documentation
http://fritshoogland.wordpress.com/2012/04/26/getting-to-know-oracle-wait-events-in-linux/
''gc cr disk read'' http://orainternals.wordpress.com/2012/01/13/gc-cr-disk-read/
''V8 Bundled Exec'' https://forums.oracle.com/forums/thread.jspa?messageID=9819259&tstart=0
''rdbms ipc message'' http://www.oaktable.net/content/what-%E2%80%98rdbms-ipc-message%E2%80%99-wait-event
''Cursor :Pin S Wait Event'' https://blogs.oracle.com/myoraclediary/entry/troubleshooting_cursor_pin_s_wait1
http://www.pythian.com/news/35145/cursor-pin-s-wait-on-x-in-the-top-5-wait-events/
''enq WL - contention'' http://www.evernote.com/shard/s48/sh/18ef4c38-bbe6-468b-92ae-68ddc56d6eb8/d49b3e4906b1744a5a0823be41748073
''free buffer waits & write complete waits'' - http://www.ixora.com.au/q+a/dbwn.htm, http://www.freelists.org/post/oracle-l/Write-Complete-Waits-P1-P2,4, http://docs.oracle.com/cd/B16240_01/doc/doc.102/e16282/oracle_database_help/oracle_database_wait_bottlenecks_write_complete_waits_pct.html, http://jonathanlewis.wordpress.com/2006/11/21/free-buffer-waits/
''gc buffer busy acquire vs release'' http://orainternals.wordpress.com/2012/04/19/gc-buffer-busy-acquire-vs-release/, http://www.confio.com/logicalread/oracle-rac-wait-events/#.Ub8qHfZATbA, Top 5 Database and/or Instance Performance Issues in RAC Environment [ID 1373500.1]
http://www.youtube.com/watch?v=vpWEuOJQYYs
http://gizmodo.com/358362/torture-yourself-by-building-a-water-gun-alarm-clock
http://www.howcast.com/videos/866-How-To-Make-a-Water-Gun-Alarm-Clock
http://blog.onlineclock.net/water-pistol-alarm-clock/
http://www.amazon.com/gp/product/B000667LGY?ie=UTF8&tag=churmdevelopm-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B000667LGY
http://www.amazon.com/Saturator-AK-Electronic-Water-Black/dp/B003JFYTNW/ref=dp_cp_ob_t_title_3
http://www.amazon.com/Saturator-AK-47-Water-Gun-Clear/dp/B003A01ZCO/ref=pd_sim_sg_3
http://www.amazon.com/gp/product/B00264GL9M?ie=UTF8&tag=churmdevelopm-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B00264GL9M
''other alarm clocks''
http://www.uberreview.com/2006/03/top-ten-most-annoying-alarm-clocks.htm
Web and App Performance: Top Problems to avoid to keep you out of the News http://www.slideshare.net/grabnerandi/web-performance-42628823
https://www.google.com/search?q=Wh+to+Mah&oq=Wh+to+Mah&aqs=chrome..69i57j69i61l2j69i60j0l2.5049j0j1&sourceid=chrome&ie=UTF-8
Convert mAh to Watt hours - Calculator https://milliamps-watts.appspot.com/
other power conversions http://convert-formula.com/
http://www.rapidtables.com/calc/electric/wh-to-mah-calculator.htm
How many charges can Omnicharge charge your laptop? https://omnicharge.zendesk.com/hc/en-us/articles/115002760788-How-many-charges-can-Omnicharge-charge-your-laptop-
What is the macbook pro 15 mid 2014 Full Charge Capacity(mAh)? https://discussions.apple.com/thread/6631387?start=0&tstart=0
http://guides.macrumors.com/Laptop_Battery_Guide
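The conversion behind these calculators is just Wh = mAh x V / 1000; a quick sketch (the 10000 mAh / 3.7 V pack is a made-up example):
{{{
echo "10000 3.7" | awk '{printf "%.1f Wh\n", $1 * $2 / 1000}'   # -> 37.0 Wh
}}}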
https://community.oracle.com/thread/3896622
<<<
ORA$AT_SA_SPC_SY_nnn is for Space Advisor tasks
ORA$AT_OS_OPT_SY_nnn is for CBO stats collection tasks
ORA$AT_SQ_SQL_SW_nnn is for SQL Tuning Advisor tasks.
<<<
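To see which autotask clients those ORA$AT_* jobs belong to, a sketch (11g+):
{{{
sqlplus -s / as sysdba <<'EOF'
SELECT client_name, status FROM dba_autotask_client;
EOF
}}}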
Subject: Opatch - Where Can I Find the Latest Version of Opatch?
Doc ID: Note:224346.1 Type: BULLETIN
Last Revision Date: 18-DEC-2007 Status: PUBLISHED
Where Can I Find the Latest Version of Opatch?
----------------------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~
* FOR version 10.2 onwards
~~~~~~~~~~~~~~~~~~~~~~~~~~
Oracle 10.2 ships "opatch" as part of the standard product
shipment. However there may be updates to this which are
loaded under the placeholder bug:
Bug 4898608 OPATCH 10.2 ARU PLACEHOLDER
To download the opatch from metalink go to:
http://updates.oracle.com/download/4898608.html
(Use your Metalink username/password)
~~~~~~~~~~~~~~~~~~~~~~~~~~
* FOR versions before 10.2
~~~~~~~~~~~~~~~~~~~~~~~~~~
The latest version of opatch for server versions prior to 10.2
is on hold as the fix for
Bug 2617419 OPATCH ARU PLACEHOLDER.
To download the opatch from metalink go to:
http://updates.oracle.com/download/2617419.html
(Use your Metalink username/password)
Please also note that the OPatch release number (1.0.0.0.XX) appears
on Metalink labeled as release number (10.1.0.X).
RELATED DOCUMENTS
-----------------
Note 189489.1 Oracle9i Data Server Interim Patch Installation (OPatch)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Change History:
20-Feb-2006 added 10.2 opatch details
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
http://databaseperformance.blogspot.com/2010/03/which-disk-is-faster.html
Why is it hard to scale a database?
It's a short question yet challenging to answer. Check out some interesting views here:
http://www.quora.com/Database-Systems/Why-is-it-hard-to-scale-a-database-in-layman%E2%80%99s-terms
http://karlarao.wordpress.com/2011/05/17/nocoug-journal-ask-the-oracle-aces-why-is-my-database-slow/
<<<
short piece for the "Ask the Oracle ACEs" section of next issue of the Northern California Oracle Users' Group Journal
This series has several Oracle ACEs sharing their wisdom on the same practical topic. Readers appreciate the variety of voices and diversity of advice. The pieces are short, 600 to 700 words (plus a
short bio), so they are easy to write and read. Here is the question for the upcoming issue:
''Why is my database slow?''
You are free to interpret that question as you wish!
The NoCOUG Journal has a very wide readership. In addition to print copies being sent to NoCOUG members, the latest issue has been downloaded 6000 times so far from more than 200 countries. Here is a recent copy online: http://www.nocoug.org/Journal/NoCOUG_Journal_201102.pdf
<<<
-- SYSINTERNALS SITE
http://technet.microsoft.com/en-us/sysinternals/bb545027.aspx
''“BSoD” on 2node 11gR1 RAC on Windows + making use of “windbg & kd”''
-- READ THE DUMPFILE
c:\program files\debugging tools>kd -z C:\WINDOWS\memory.dmp
kd> .logopen c:\debuglog.txt
kd> .sympath srv*c:\symbols*http://msdl.microsoft.com/download/symbols
kd> .reload;!analyze -v;r;kv;lmnt;!thread;.logclose;q
-- I WANT TO KNOW THE RUNNING TASK. RUN THE COMMANDS BELOW AND PASTE THE CONTENT OF DEBUGLOG1.TXT HERE.
kd> .logopen c:\debuglog1.txt
kd> .sympath srv*c:\symbols*http://msdl.microsoft.com/download/symbols
kd> .reload;!process;.logclose;q
-- GET OS ERRORS
D:\>net helpmsg 10013
An attempt was made to access a socket in a way forbidden by its access permissions.
-- procdump
http://goo.gl/ikrjm
http://windows.microsoft.com/en-US/windows7/products/features/backup-and-restore
http://www.howtogeek.com/howto/1838/using-backup-and-restore-in-windows-7/
Incremental Backup Strategy
http://www.brighthub.com/computing/windows-platform/articles/53645.aspx
http://www.brighthub.com/computing/windows-platform/articles/53645.aspx?p=2
http://www.raymond.cc/blog/archives/2010/03/22/bind-windows-application-to-specific-network-adapter-with-forcebindip/
Karl Arao's Blog Facebook Activity Feed
https://sites.google.com/site/karlarao/karl-arao-blog-facebook-activity-feed
Google Calendar on wordpress.com
http://en.forums.wordpress.com/topic/google-calendar?replies=22
http://helensnerdyblog.wordpress.com/2007/11/27/how-to-add-a-google-calendar-to-your-wordpresscom-blog-using-the-rss-widget/
http://www.jeffhendricksondesign.com/google-calendar-wordpress/
Other nice to have for oracleusersph blog:
- Newsletter (could use google sites)
- Facebook Page
- Event Calendar
- Twitter
- Blog aggregator
- About
- Copyright
http://www.typealyzer.com/?lang=en
Wordpress.com enhancements
http://wordpress.org/support/topic/increase-width-of-template
http://antbag.com/how-to-change-the-width-of-your-blog-design/
http://www.easywebtutorials.com/blogging-tutorial/faq/
http://en.support.wordpress.com/editing-css/
http://millionclues.com/problogging/wordpress-tips/make-full-width-page-in-wordpress/
http://wordpress.org/tags/remove-sidebar
http://wordpress.org/support/topic/remove-sidebar-and-stretch-out-content
http://wordpress.org/support/topic/remove-side-bar-2
http://www.neoease.com/themes/
http://wordpress.org/support/topic/customizing-shades-of-blue-theme
http://wordpress.org/support/topic/display-read-more-link-instead-of-full-article
http://wordpress.org/support/topic/display-more-posts-for-categories-than-on-indexphp
http://digwp.com/2010/01/wordpress-more-tag-tricks/
''wordpress installation''
{{{
[root@theexamples www]# tar -zxf latest.tar.gz
[root@theexamples www]# ls
acblog cgi-bin greenfigtree_20090518.tar.gz katieblog news.example.com wordpress
backups example greenfigtree_20101101.tar.gz latest.tar.gz og_backup wordpress-3.0.1.zip
blog.ac-oracle.com error html manual slis5720.thegreenfigtree.com
blog.example.com greenfigtree icons mrsexample usage
[root@theexamples www]# mv wordpress blogtest
[root@theexamples www]# cd blogtest/
[root@theexamples blogtest]# ls
index.php wp-activate.php wp-atom.php wp-commentsrss2.php wp-cron.php wp-links-opml.php wp-mail.php wp-register.php wp-settings.php xmlrpc.php
license.txt wp-admin wp-blog-header.php wp-config-sample.php wp-feed.php wp-load.php wp-pass.php wp-rss2.php wp-signup.php
readme.html wp-app.php wp-comments-post.php wp-content wp-includes wp-login.php wp-rdf.php wp-rss.php wp-trackback.php
[root@theexamples blogtest]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 103845
Server version: 5.0.77 Source distribution
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> create database blogtest;
Query OK, 1 row affected (0.01 sec)
mysql> GRANT ALL PRIVILEGES ON blogtest.* TO "blogtest"@"localhost" identified by "example";
Query OK, 0 rows affected (0.13 sec)
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| ac_oracle |
| aexample |
| blog2_example_com |
| blog_example_com |
| blogtest |
| greenfigtree |
| mrsexample |
| mysql |
| news |
| slis5720 |
| test |
| wordpressmu |
+--------------------+
13 rows in set (0.05 sec)
mysql> exit
Bye
[root@theexamples blogtest]# ls
index.php wp-activate.php wp-atom.php wp-commentsrss2.php wp-cron.php wp-links-opml.php wp-mail.php wp-register.php wp-settings.php xmlrpc.php
license.txt wp-admin wp-blog-header.php wp-config-sample.php wp-feed.php wp-load.php wp-pass.php wp-rss2.php wp-signup.php
readme.html wp-app.php wp-comments-post.php wp-content wp-includes wp-login.php wp-rdf.php wp-rss.php wp-trackback.php
[root@theexamples blogtest]# cp wp-config-sample.php wp-config.php
[root@theexamples blogtest]# vi wp-config
[root@theexamples blogtest]# vi wp-config.php
[root@theexamples blogtest]# cd ..
[root@theexamples www]# chown -Rf apache:apache blogtest
[root@theexamples www]# vi /etc/httpd/conf/httpd.conf
[root@theexamples www]# service httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]
[root@theexamples www]# cd blog.example.com/
[root@theexamples blog.example.com]# ls
index.php sitemap.xml.gz wp-atom.php wp-config.php wp-feed.php wp-login.php wp-register.php wp-signup.php
license.txt wp-activate.php wp-blog-header.php wp-config-sample.php wp-includes wp-mail.php wp-rss2.php wp-trackback.php
readme.html wp-admin wp-comments-post.php wp-content wp-links-opml.php wp-pass.php wp-rss.php xmlrpc.php
sitemap.xml wp-app.php wp-commentsrss2.php wp-cron.php wp-load.php wp-rdf.php wp-settings.php
[root@theexamples blog.example.com]# cd wp-content/
cache/ index.php plugins/ themes/ upgrade/ uploads/ wp-content.orig/
[root@theexamples blog.example.com]# cd wp-content/plugins/
[root@theexamples plugins]# ls
akismet google-analytics-for-wordpress hello.php opml-importer user-role-editor wp-codebox wp-recaptcha
deans-fckeditor-for-wordpress-plugin google-sitemap-generator index.php simple-ldap-login wordpress-importer wp-polls wp-status-notifier
[root@theexamples plugins]# cp -ax * /var/www/blogtest/wp-content/plugins/ -f
[root@theexamples plugins]# ls
akismet google-analytics-for-wordpress hello.php opml-importer user-role-editor wp-codebox wp-recaptcha
deans-fckeditor-for-wordpress-plugin google-sitemap-generator index.php simple-ldap-login wordpress-importer wp-polls wp-status-notifier
[root@theexamples plugins]# cd /var/www/blogtest/wp-content/plugins/
[root@theexamples plugins]# ls
akismet google-analytics-for-wordpress hello.php opml-importer user-role-editor wp-codebox wp-recaptcha
deans-fckeditor-for-wordpress-plugin google-sitemap-generator index.php simple-ldap-login wordpress-importer wp-polls wp-status-notifier
[root@theexamples plugins]# ll
total 56
drwxr-xr-x 2 apache apache 4096 Feb 15 13:13 akismet
drwxr-xr-x 5 apache apache 4096 Nov 11 2008 deans-fckeditor-for-wordpress-plugin
drwxr-xr-x 3 apache apache 4096 May 9 19:37 google-analytics-for-wordpress
drwxr-xr-x 4 apache apache 4096 Aug 25 2010 google-sitemap-generator
-rw-r--r-- 1 apache apache 2262 May 1 21:42 hello.php
-rw-r--r-- 1 apache apache 30 May 1 21:42 index.php
drwxr-xr-x 3 apache apache 4096 Feb 15 13:58 opml-importer
drwxr-xr-x 2 apache apache 4096 May 9 19:38 simple-ldap-login
drwxr-xr-x 5 apache apache 4096 May 1 21:42 user-role-editor
drwxr-xr-x 3 apache apache 4096 May 1 21:42 wordpress-importer
drwxr-xr-x 7 apache apache 4096 Apr 19 2010 wp-codebox
drwxr-xr-x 4 apache apache 4096 Feb 15 10:43 wp-polls
drwxr-xr-x 2 apache apache 4096 May 1 21:42 wp-recaptcha
drwxr-xr-x 2 apache apache 4096 Apr 19 2010 wp-status-notifier
[root@theexamples plugins]# cd ..
[root@theexamples wp-content]# ls
index.php plugins themes uploads
[root@theexamples wp-content]# cd themes/
[root@theexamples themes]# ls
index.php twentyten
[root@theexamples themes]# pwd
/var/www/blogtest/wp-content/themes
[root@theexamples themes]# cp /var/www/blog.example.com/wp-content/themes/
blue21/ classic/ daleri-sweet/ default/ index.php magicblue/ twentyten/ unnamed-lite/
[root@theexamples themes]# cp /var/www/blog.example.com/wp-content/themes/daleri-sweet .
cp: omitting directory `/var/www/blog.example.com/wp-content/themes/daleri-sweet'
[root@theexamples themes]# cp -ax /var/www/blog.example.com/wp-content/themes/daleri-sweet .
[root@theexamples themes]# ls
daleri-sweet index.php twentyten
}}}
''wordpress page''
when you want a new page created, say "NEWPAGE", and you want it reachable at the root link http://www.testsite.com/NEWPAGE,
just do your categorization as usual; as long as a page named "NEWPAGE" exists, wordpress is intelligent enough to resolve it and you can access it as http://www.testsite.com/NEWPAGE
''menu bar, nav bar''
you can just add a page, then drag and drop
''newsy template''
default link color #196aaf
''list of post'', ''display posts''
http://en.support.wordpress.com/display-posts-shortcode/
http://en.forums.wordpress.com/topic/list-of-posts-1
http://en.support.wordpress.com/archives-shortcode/
http://en.support.wordpress.com/sitemaps/#create-a-site-index
http://en.support.wordpress.com/sitemaps/shortcode/
http://en.support.wordpress.com/menus/
http://www.evernote.com/shard/s48/sh/86e19522-27e0-4d9b-a6ee-c2075cf7a8c1/4bab40b89efb021e29ed10fc6f66f2b0
http://ask.xmodulo.com/create-mount-xfs-file-system-linux.html
XFS Filesystem on Oracle Linux (Doc ID 1632127.1)
Oracle Linux: How to Create and Extend XFS File System (Doc ID 2032341.1)
What is the Block Size Limit in XFS Filesystem on Oracle Linux 7 (Doc ID 2101949.1)
Supported and Recommended File Systems on Linux (Doc ID 236826.1)
What Is the Recommended RAID and file system Configuration to Use With MySQL? (Doc ID 1546361.1)
Recommended Settings for MySQL 5.6, 5.7 Server for Online Transaction Processing (OLTP) and Benchmarking (Doc ID 1531329.1)
XFS tuning http://www.beegfs.com/wiki/StorageServerTuning
http://xfs.org/index.php/XFS_FAQ
https://kevinclosson.net/2012/03/06/yes-file-systems-still-need-to-support-concurrent-writes-yet-another-look-at-xfs-versus-ext4/
https://docs.oracle.com/cd/E37670_01/E41138/html/ol_create_xfs.html
https://docs.oracle.com/cd/E37670_01/E37355/html/ol_about_xfs.html
https://kevinclosson.net/2015/07/19/focusing-on-ext4-and-xfs-trim-operations-part-i/
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/ch07s03s02s02.html
Gathering Debug Information on Oracle Linux (Doc ID 225346.1)
How to measure I/O Performance on Linux (Doc ID 1931009.1)
Basic usage of perf command ( tracing tool ) (Doc ID 2174289.1)
* oracle developer VM uses XFS on datafiles and OH
https://myvirtualcloud.net/ext4-vs-xfs-for-oracle-which-performs-better/ <- XFS is faster
https://www.linuxtechi.com/create-extend-xfs-filesystem-on-lvm/
Use the XFS File System on Oracle Linux 8 https://www.youtube.com/watch?v=OUW1cbR-WuA
https://oracle-base.com/articles/misc/xmltable-convert-xml-data-into-rows-and-columns-using-sql
official doc https://docs.oracle.com/database/121/SQLRF/functions269.htm#SQLRF06232
Flatten nested xml structure using xmltable https://community.oracle.com/tech/apps-infra/discussion/3672618/flatten-nested-xml-structure-using-xmltable
Index not used in Outer Join with XMLTable https://community.oracle.com/tech/apps-infra/discussion/4048191/index-not-used-in-outer-join-with-xmltable
https://stackoverflow.com/questions/45591606/oracle-outer-join-with-xmltable
https://stackoverflow.com/questions/46918090/read-nested-xml-in-oracle-sql
https://odieweblog.wordpress.com/2016/05/20/how-to-using-outer-join-with-xmltable-or-xquery/ <-- GOOD STUFF
! issues
* flattening rows via multiple chained XMLTABLEs with OUTER JOINs hits pga_aggregate_limit
* the read is also serial ("XMLTABLE EVALUATION" in the plan); a monitoring sketch follows below
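a minimal way to watch PGA while reproducing the chained-XMLTABLE flatten in another session (assuming 12c+, where pga_aggregate_limit exists):
{{{
#!/bin/bash
# snapshot PGA usage vs the limit while the flatten query runs elsewhere
sqlplus -s / as sysdba <<'EOF'
set pages 100 lines 200
show parameter pga_aggregate_limit
select name, round(value/1024/1024) mb
  from v$pgastat
 where name in ('total PGA allocated','maximum PGA allocated');
EOF
}}}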
oracle xmltable pga_aggregate_limit
https://www.google.com/search?sxsrf=ALeKk03432QQYtgh2__tmLwT5a6ZgEnANQ%3A1609179426488&source=hp&ei=IiHqX7CoG-jF_QbD5qaACQ&iflsig=AINFCbYAAAAAX-ovMsoiOEVrBAmxM2wFKAoYt3mX5eCq&q=oracle+xmltable+pga_aggregate_limit&oq=oracle+xmltable+pga_aggregate_limit&gs_lcp=CgZwc3ktYWIQAzIFCCEQoAE6BAgjECc6BAgAEEM6BwgjEMkDECc6AggAOgcIABAUEIcCOgUIABDJAzoGCAAQFhAeOgUIIRCrAjoICCEQFhAdEB46BwghEAoQoAFQggZYtk1g5k5oBHAAeACAAfYCiAGBH5IBCDIwLjcuMy4ymAEAoAEBoAECqgEHZ3dzLXdpeg&sclient=psy-ab&ved=0ahUKEwiwwd_ApPHtAhXoYt8KHUOzCZAQ4dUDCAk&uact=5
oracle parallel XMLTABLE EVALUATION nested loops outer
https://www.google.com/search?sxsrf=ALeKk03LYwKuGCUQo8TfONHoNWKVyjJWLw%3A1609181994932&source=hp&ei=KivqX8j9NZCf_Qbznp7oDA&iflsig=AINFCbYAAAAAX-o5OpPk4GAR8WlNFvSCsqPeyGqmEvYz&q=oracle+parallel+XMLTABLE+EVALUATION&oq=oracle+parallel+XMLTABLE+EVALUATION&gs_lcp=CgZwc3ktYWIQAzIFCAAQzQIyBQgAEM0CMgUIABDNAjoECCMQJzoKCC4QxwEQowIQJzoICC4QsQMQgwE6BQgAELEDOg4ILhCxAxCDARDHARCjAjoLCC4QsQMQxwEQowI6BQguELEDOgUIABCRAjoKCAAQsQMQFBCHAjoNCC4QsQMQgwEQFBCHAjoCCAA6CAgAEMkDEJECOgcIABAUEIcCOggIABCxAxCDAVBMWLgWYNsXaABwAHgAgAF3iAHrC5IBBDEwLjaYAQCgAQGgAQKqAQdnd3Mtd2l6&sclient=psy-ab&ved=0ahUKEwjIuryJrvHtAhWQT98KHXOPB80Q4dUDCAk&uact=5
https://www.google.com/search?sxsrf=ALeKk02vf4gxVxgWJOCiQfQPbfLHfH0I2g%3A1609183360787&source=hp&ei=gDDqX96aLYKKggeLvaDwCA&iflsig=AINFCbYAAAAAX-o-kEvuuVNqJaGXxNVUIE1WhIrMJDW8&q=XMLTABLE+EVALUATION+nested+loops+outer&oq=XMLTABLE+EVALUATION+nested+loops+outer&gs_lcp=CgZwc3ktYWIQAzIFCCEQoAEyBQghEKABMgUIIRCgATIFCCEQqwIyBQghEKsCMgUIIRCrAjoHCCEQChCgAVD7Alj1GmD4G2gAcAB4AYABsgKIAfsQkgEIMTMuNi4wLjGYAQCgAQKgAQGqAQdnd3Mtd2l6&sclient=psy-ab&ved=0ahUKEwje2uGUs_HtAhUCheAKHYseCI4Q4dUDCAk&uact=5
XMLTable join causes parallel query not to work
https://community.oracle.com/tech/apps-infra/discussion/2531687/xmltable-join-causes-parallel-query-not-to-work
! related MOS
hint https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=132000920381083&id=13452852.8&displayIndex=2&_afrWindowMode=0&_adf.ctrl-state=6swcy1brg_170#REF
hidden parameters https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=301091721411779&id=1667557.1&_afrWindowMode=0&_adf.ctrl-state=wah6m4iul_4
bugs
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=131986757664227&id=2490886.1&displayIndex=1&_afrWindowMode=0&_adf.ctrl-state=6swcy1brg_72
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=131986990955620&id=2010525.1&displayIndex=2&_afrWindowMode=0&_adf.ctrl-state=6swcy1brg_113
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=141408834866494&id=8849734.8&displayIndex=1&_afrWindowMode=0&_adf.ctrl-state=h8mqnx9qi_589#aref_section16
memory leak CTAS https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=141410541921334&id=2095124.1&displayIndex=2&_afrWindowMode=0&_adf.ctrl-state=h8mqnx9qi_630#SYMPTOM
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=141140949373346&id=1470452.1&displayIndex=1&_afrWindowMode=0&_adf.ctrl-state=h8mqnx9qi_452#SYMPTOM
memory w/ testcase https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=140490567262596&id=1512845.1&displayIndex=4&_afrWindowMode=0&_adf.ctrl-state=h8mqnx9qi_301#SYMPTOM
! other articles
https://www.databasedevelop.com/article/12562357/Performance+problem+with+XMLTABLE
! example output
{{{
SQL> select x1.customer_name
2 , x2.contract_id
3 , x3.product_name
4 , x3.product_price
5 , x4.property
6 from t_xml_test_1 t
7 , xmltable('/customer' passing t.xml_data
8 columns customer_name varchar2(30) path 'name'
9 , contracts xmltype path 'contract'
10 ) x1
11 , xmltable('/contract' passing x1.contracts
12 columns contract_id varchar2(30) path 'contract_id'
13 , products xmltype path 'products/product'
14 ) x2
15 , xmltable('/product' passing x2.products
16 columns product_name varchar2(30) path 'name'
17 , product_price number path 'price'
18 , properties xmltype path 'properties/property'
19 ) (+) x3
20 , xmltable('/property' passing x3.properties
21 columns property varchar2(30) path '.'
22 ) (+) x4
23 ;
CUSTOMER_NAME CONTRACT_ID PRODUCT_NAME PRODUCT_PRICE PROPERTY
------------------------------ ------------------------------ ------------------------------ ------------- ------------------------------
Mustermann C1 Product C1.P1 23,12 Property C1.P1.A
Mustermann C1 Product C1.P1 23,12 Property C1.P1.B
Mustermann C1 Product C1.P2 2,32
Mustermann C2 Product C2.P1 143,33
Mustermann C2 Product C2.P2 231,76 Property C2.P2.A
Mustermann C2 Product C2.P2 231,76 Property C2.P2.B
Mustermann C3
7 rows selected.
}}}
! with JSON data, use JSON_TABLE to flatten
Flattening JSON arrays with JSON_TABLE https://livesql.oracle.com/apex/livesql/file/content_HOB7SQ6N54UBZXIND3F9VR17G.html
{{{
select m.id, jt.*
from mytest m, JSON_TABLE(m.data, '$.a[*]' COLUMNS (
val1 NUMBER path '$.x',
val2 VARCHAR2(30) path '$.y'
)) jt
}}}
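to run that end to end, a hypothetical table and row (the names mytest/data and the JSON shape are assumptions matching the paths above; the IS JSON check needs 12c+):
{{{
#!/bin/bash
sqlplus -s scott/tiger <<'EOF'    # hypothetical credentials
create table mytest (id number, data varchar2(4000) check (data is json));
insert into mytest values (1, '{"a":[{"x":1,"y":"foo"},{"x":2,"y":"bar"}]}');
commit;
-- the JSON_TABLE query above now returns one row per array element
EOF
}}}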
Oracle XML DB: Best Practices to Get Optimal Performance out of XML Queries
https://www.oracle.com/technetwork/database-features/xmldb/overview/xmldb-bpwp-12cr1-1964809.pdf
[img(90%,90%)[ https://user-images.githubusercontent.com/3683046/103587974-4d21b080-4eb6-11eb-8ac2-d20ffb748206.png ]]
[img(90%,90%)[ https://user-images.githubusercontent.com/3683046/103587982-51e66480-4eb6-11eb-8611-baefc84389cf.png ]]
[img(90%,90%)[ https://user-images.githubusercontent.com/3683046/103587998-56128200-4eb6-11eb-917f-4bddbeae36ea.png ]]
[img(90%,90%)[ https://user-images.githubusercontent.com/3683046/103588007-5a3e9f80-4eb6-11eb-8295-021bef26742e.png ]]
[img(90%,90%)[ https://user-images.githubusercontent.com/3683046/103588014-5e6abd00-4eb6-11eb-84b8-eaf4cee90c0a.png ]]
http://www.virtuatopia.com/index.php/How_to_Convert_a_Xen_Guest_to_VirtualBox
http://linuxnextgen.blogspot.com/2010/09/migrating-redhat-xen-virtual-machine-to.html
http://www.howtoforge.com/how-to-convert-a-xen-virtual-machine-to-vmware
http://serverfault.com/questions/244343/need-to-migrate-xen-domu-to-vmware-esxi-rhel-5-3-esxi-4-1
http://www.smoothblog.co.uk/2009/03/25/converting-a-xen-vm-to-vmware/
https://community.hortonworks.com/questions/96354/yarn-application-id-how-is-it-generated.html
https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html
http://public-yum.oracle.com/ <== OEL public repo
http://www.oracle.com/technetwork/articles/servers-storage-admin/ginnydbinstallonlinux-488779.html <== How I Simplified Oracle Database Installation on Oracle Linux
https://blogs.oracle.com/wim/entry/trying_uek2 <-- UEK2 beta
! Using a custom repo:
<<<
I was having errors while manually updating the RPMs when I placed them all in a directory and used the rpm -Fvh *rpm
[root@dbrocaix01 RPMS]# rpm -Fvh *rpm
warning: package glibc = 2.3.4-2.39 was already added, replacing with glibc <= 2.3.4-2.39
warning: package openssl = 0.9.7a-43.17.el4_6.1 was already added, replacing with openssl <= 0.9.7a-43.17.el4_6.1
error: Failed dependencies:
openib is needed by dapl-1.2.1.1-7.i386
openib is needed by libibverbs-1.1.1-7.i386
openib is needed by libmthca-1.0.4-7.i386
openib is needed by librdmacm-1.0.1-7.i386
openib is needed by libsdp-1.1.99-7.i386
systemtap-runtime = 0.5.14-1 is needed by systemtap-0.5.14-1.i386
So I decided to go for YUM.. :)
1) Copy all the contents on the media
su -
cd /u01/installers
mkdir -pv oel/4.7/{os,updates}/x86-64
cp -av /media/cdrom/* /u01/installers/oel/4.7/os/x86-64
2) Read the Release Notes
3) Get info on the kernel being used; OEL by default uses the Oracle kernel build with bug fixes
[root@dg10g2 RPMS]# cat /etc/redhat-release
Enterprise Linux Enterprise Linux AS release 4 (October Update 5)
[root@dg10g2 RPMS]# rpm -qa | grep -i kernel
kernel-2.6.9-55.0.0.0.2.EL
kernel-smp-2.6.9-55.0.0.0.2.EL
kernel-doc-2.6.9-55.0.0.0.2.EL
kernel-utils-2.4-13.1.99.0.1
[root@dg10g2 RPMS]# uname -a
Linux dg10g2.us.oracle.com 2.6.9-55.0.0.0.2.ELsmp #1 SMP Wed May 2 15:06:32 PDT 2007 x86_64 x86_64 x86_64 GNU/Linux
4) Install the new kernel
cd /u01/installers/oel/4.7/os/x86-64/Enterprise/RPMS
[root@dg10g2 RPMS]# ls -l | grep -i kernel
-r--r--r-- 1 root root 2328774 Jul 29 07:28 kernel-doc-2.6.9-78.0.0.0.1.EL.noarch.rpm
-r--r--r-- 1 oracle oinstall 13103736 Jul 29 07:28 kernel-smp-2.6.9-78.0.0.0.1.EL.x86_64.rpm
-r--r--r-- 1 root root 4069937 Jul 29 07:28 kernel-smp-devel-2.6.9-78.0.0.0.1.EL.x86_64.rpm
[root@dg10g2 RPMS]# rpm -ivh kernel-doc-2.6.9-78.0.0.0.1.EL.noarch.rpm kernel-smp-2.6.9-78.0.0.0.1.EL.x86_64.rpm kernel-smp-devel-2.6.9-78.0.0.0.1.EL.x86_64.rpm
Preparing... ########################################### [100%]
1:kernel-smp-devel ########################################### [ 33%]
2:kernel-doc ########################################### [ 67%]
3:kernel-smp ########################################### [100%]
5) Reboot, boot on the new kernel
[root@dg10g2 ~]# uname -a
Linux dg10g2.us.oracle.com 2.6.9-78.0.0.0.1.ELsmp #1 SMP Fri Jul 25 16:04:35 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux
6) Download the following packages at rpmfind.net, if both packages are already installed goto step 7
createrepo-0.4.4-1.el4.rf.noarch.rpm or higher
ftp://fr2.rpmfind.net/linux/dag/redhat/el4/en/x86_64/dag/RPMS/createrepo-0.4.4-1.el4.rf.noarch.rpm
python-urlgrabber-2.9.7-1.2.el4.rf.noarch.rpm or higher
http://rpmfind.net/linux/rpm2html/search.php?query=urlgrabber
7) Execute the createrepo command
- you could also get the updated comps.xml file by installing the comps.rpm on the /u01/installers/oel/4.7/os/x86-64/Enterprise/base directory
- the comps.xml inside the /u01/installers/oel/4.7/os/x86-64/Enterprise/base/comps.rpm is the same as the /u01/installers/oel/4.7/os/x86-64/Enterprise/base/comps.xml when you do a DIFF
[root@dg10g2 base]# createrepo -g /u01/installers/oel/4.7/os/x86-64/Enterprise/base/comps.xml /u01/installers/oel/4.7/os/x86-64/Enterprise/RPMS
8) Verify the repository
[root@dg10g2 x86-64]# ls /u01/installers/oel/4.7/os/x86-64/Enterprise/RPMS/repodata/
comps.xml filelists.xml.gz other.xml.gz primary.xml.gz repomd.xml
9) Configure Yum repository
[root@dg10g2 base]# cd /etc/yum.repos.d/
[root@dg10g2 yum.repos.d]# mv ULN-Base.repo ULN-Base.repo.bak
[root@dg10g2 yum.repos.d]# vi /etc/yum.repos.d/local.repo
[OEL4.7]
name=Enterprise-$releasever - Media
baseurl=file:///u01/installers/oel/4.7/os/x86-64/Enterprise/RPMS
gpgcheck=1
enabled=1
gpgkey=file:///u01/installers/oel/4.7/os/x86-64/RPM-GPG-KEY-oracle
10) Verify Yum repository
[root@dg10g2 x86-64]# yum grouplist
Setting up Group Process
Setting up repositories
OEL4.7 100% |=========================| 1.1 kB 00:00
comps.xml 100% |=========================| 711 kB 00:00
Installed Groups:
Administration Tools
Compatibility Arch Support
Editors
GNOME Desktop Environment
Graphical Internet
Mail Server
Office/Productivity
OpenFabrics Enterprise Distribution
PostgreSQL Database
Printing Support
Server Configuration Tools
X Window System
Available Groups:
Authoring and Publishing
Compatibility Arch Development Support
DNS Name Server
Development Tools
Engineering and Scientific
FTP Server
GNOME Software Development
Games and Entertainment
Graphics
KDE (K Desktop Environment)
KDE Software Development
Legacy Network Server
Legacy Software Development
MySQL Database
Network Servers
News Server
Sound and Video
System Tools
Text-based Internet
Web Server
Windows File Server
X Software Development
Done
11) Update the RPMS
[root@dg10g2 x86-64]# yum clean all
Cleaning up Everything
0 headers removed
0 packages removed
4 metadata files removed
0 cache files removed
1 cache files removed
[root@dg10g2 x86-64]# yum list updates
[root@dg10g2 x86-64]# yum update
<<<
! Using DVD as a repo:
<<<
There is another way to build the repository without copying all the RPMs to disk. In the ISO file there are repo data directories, and you can use these directly.
1. Mount the ISO file:
mount -o loop,ro rhel-5.2-server-i386-dvd.iso /mnt/iso
2. Create the file /etc/yum.repos.d/file.repo:
cat /etc/yum.repos.d/file.repo
[RHEL_5_Server_Repository]
baseurl=file:///mnt/iso/Server
enabled=1
[RHEL_5_VT_Repository]
baseurl=file:///mnt/iso/VT
enabled=1
<<<
! Using YUM and httpd
<<<
[root@oel4 ~]# vi /etc/httpd/conf/httpd.conf
* ADD THE LINE BELOW ON THE ALIAS PART
Alias /oel4.6 "/oracle/installers/oel/4.6/os/x86"
<Directory "/oracle/installers/oel/4.6/os/x86">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
[root@oel4 ~]# service httpd restart
Stopping httpd: [FAILED]
Starting httpd: [ OK ]
[root@racnode1 ocfs2]# cd /etc/yum.repos.d/
[root@racnode1 yum.repos.d]# ls -ltr
total 8
-rw-r--r-- 1 root root 2079 Oct 24 2006 ULN-Base.repo
[root@racnode1 yum.repos.d]# mv ULN-Base.repo ULN-Base.repo.bak
[root@racnode1 yum.repos.d]# vi oel46.repo
* ADD THE FOLLOWING LINES
[OEL4.6]
name=Enterprise-$releasever - Media
baseurl=http://192.168.203.24/oel4.6/Enterprise/RPMS
gpgcheck=1
gpgkey=http://192.168.203.24/oel4.6/RPM-GPG-KEY-oracle
<<<
! Using up2date:
<<<
http://kbase.redhat.com/faq/docs/DOC-11215
<<<
! Using oracle validated public repo:
<<<
start terminal
# cd /etc/yum.repos.d/
# wget http://public-yum.oracle.com/public-yum-el5.repo
# vi public-yum-el5.repo
under [el5_u5_base], set enabled=1
under [ol5_u5_base], set enabled=1
# yum update
y
y
# yum install kernel-devel
y
# yum groupinstall 'Development Tools'
# reboot
<<<
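the same steps as a non-interactive sketch (the sed ranges assume the stock public-yum-el5.repo layout with an enabled=0 line under each section):
{{{
#!/bin/bash
cd /etc/yum.repos.d/
wget http://public-yum.oracle.com/public-yum-el5.repo
# flip enabled=0 to enabled=1 only inside the two sections of interest
sed -i '/^\[el5_u5_base\]/,/^\[/ s/enabled=0/enabled=1/' public-yum-el5.repo
sed -i '/^\[ol5_u5_base\]/,/^\[/ s/enabled=0/enabled=1/' public-yum-el5.repo
yum -y update
yum -y install kernel-devel
yum -y groupinstall "Development Tools"
}}}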
! Enable/Disable
http://fedoraforum.org/forum/showthread.php?t=26925
{{{
yum --enablerepo=freshrpms --enablerepo=dag --enablerepo=dries --enablerepo=newsrpms --disablerepo=livna-stable --disable=livna-testing --disablerepo=livna-unstable --disablerepo=fedora-us --disablerepo=atrpms-stable --disablerepo=atrpms-good --disablerepo=atrpms-testing --disablerepo=atrpms-bleeding install xmms-mp3
}}}
''or''
{{{
yum \
--disablerepo=adobe-linux-i386 \
--disablerepo=google-chrome \
--disablerepo=google-talkplugin \
--disablerepo=rpmfusion-free \
--disablerepo=rpmfusion-free-updates \
--disablerepo=rpmfusion-nonfree \
--disablerepo=rpmfusion-nonfree-updates \
--disablerepo=skype \
--enablerepo=fedora \
--enablerepo=updates \
list \
logwatch
}}}
''or''
this script is the coolest [[yum.py]]
http://forums.fedoraforum.org/archive/index.php/t-27212.html
Sun ZFS storage appliance simulator
http://www.evernote.com/shard/s48/sh/be6faabd-df78-465a-bc4b-ce4db3c99358/6b4d5f084d8b5d1351b081009914829e
Good papers and links for the ZFSSA https://blogs.oracle.com/7000tips/entry/good_papers_and_links_for
7320 <-- end of life
Oracle ZFS Storage ZS3-4 http://www.oracle.com/us/products/servers-storage/storage/nas/zs3-4/features/index.html
Oracle ZFS Storage ZS3-2 http://www.oracle.com/us/products/servers-storage/storage/nas/zs3-2/overview/index.html
data sheet http://www.oracle.com/us/products/servers-storage/storage/nas/zs3-ds-2008069.pdf
''Articles''
http://wikibon.org/wiki/v/Oracle_ZFS_Hybrid_Storage_Appliance_Reads_for_Show_but_Writes_for_Dough
http://wikibon.org/wiki/v/Oracle_ZS3_Storage_Appliances%3A_Tier-1_Performance_at_Tier-3_Prices
http://wikibon.org/wiki/v/Oracle_Continues_Convergence_of_Storage_and_Database_with_ZS4
http://www.oracle.com/technetwork/server-storage/sun-unified-storage/downloads/sun-simulator-1368816.html
from bjoern rost
<<<
“You just need to update that ZFS simulator appliance to the latest firmware just like a real hw appliance. patch is 19004324, maybe you also need one version between the one it ships and that one but it will tell you. I just verified that it works on my playground vm. And Oracle VM3.3 also comes with a REST API! So much cool stuff to play with”
<<<
Using Sun ZFS Storage Appliance iSCSI LUNs in an Oracle Linux Environment http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-104-iscsi-luns-linux-518400.html
http://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases <-- start here
https://blogs.oracle.com/realneel/entry/zfs_and_databases
https://blogs.oracle.com/realneel/entry/zfs_and_databases_time_for
http://myunix.dk/2010/12/13/why-zfs-rocks-for-databases/
''Papers''
Configuring Oracle Solaris ZFS for an Oracle Database http://www.oracle.com/technetwork/server-storage/solaris10/documentation/wp-oraclezfsconfig-0510-ds-ac2-163550.pdf
Database Cloning using Oracle Sun ZFS Storage Appliance and Oracle Data Guard http://www.oracle.com/technetwork/database/features/availability/maa-db-clone-szfssa-172997.pdf
ZFS and OLTP https://blogs.oracle.com/roch/entry/zfs_and_oltp
ZFS and OLTP workloads: Time for some numbers https://blogs.oracle.com/realneel/entry/zfs_and_databases_time_for
ZFS for Databases http://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases
ZFS Evil Tuning Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Tuning_ZFS_for_Database_Performance
Oracle Solaris 10 9/10 ZFS OLTP Performance Improvements https://blogs.oracle.com/BestPerf/entry/20100920_s10_zfs_oltp
ZFS and Databases https://blogs.oracle.com/realneel/entry/zfs_and_databases
Configuring ZFS for an Oracle Database http://www.oracle.com/technetwork/server-storage/solaris10/config-solaris-zfs-wp-167894.pdf
Backup and Recovery with Oracle’s Sun ZFS Storage Appliances and Oracle Recovery Manager http://www.oracle.com/us/products/servers-storage/storage/nas/zfs-storage-appliance-rman-db-507914.pdf
Certification of Zeta File System (Zfs) On Solaris 10 for Oracle RDBMS [ID 403202.1]
ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
http://www.oracle.com/technetwork/articles/servers-storage-admin/zfs-appliance-scripting-1508184.html
ZFS Write Performance Slow, but Devices Not Busy (Doc ID 1405347.1)
Oracle SuperCluster ZFS Storage Appliance Best Practices (Doc ID 2002988.1)
Oracle ZFS Storage Appliance: Calculation of maximum IOPS (Doc ID 1553903.1)
https://blogs.oracle.com/roch/entry/zfs_performance_boosts_since_2010
<<<
reARC
Scales the ZFS cache to TB class machines and CPU counts in thousands.
Sequential Resilvering
Converts a random workload to a sequential one.
ZIL Pipelining
Allows the ZIL to carve up smaller units of work for better pipelining and higher log device utilisation.
It is the dawning of the age of the L2ARC
Not only did we make the L2ARC persistent on reboot, we made the feeding process so much more efficient we had to slow it down.
Zero Copy I/O Aggregation
A new tool delivered by the Virtual Memory team allows the already incredible ZFS I/O aggregation feature to actually do its thing using one less copy.
Scalable Reader/Writer locks
Reader/Writer locks, used extensively by ZFS and Solaris, had their scalability greatly improved on large systems.
New thread Scheduling class
ZFS transaction groups are now managed by a new type of taskqs which behave better managing bursts of cpu activity.
Concurrent Metaslab Syncing
The task of syncing metaslabs is now handled with more concurrency, boosting ZFS write throughput capabilities.
Block Picking
The task of choosing blocks for allocations has been enhanced in a number of ways, allowing us to work more efficiently at a much higher pool capacity percentage.
<<<
http://www.oracle.com/us/products/servers-storage/sun-power-calculators/calc/s7420-power-calculator-180618.html
ZFS sizing raid calculator http://www.servethehome.com/raid-calculator/
http://blog.allanglesit.com/2012/02/adventures-in-zfs-storage-sizing/
https://www.evernote.com/l/ADAUYs2eOqBBe49oZkNX55SBQYrm9wbWCZ0
http://jongsma.wordpress.com/2012/11/26/using-hcc-on-zfs-storage-appliances/
http://blogs.oracle.com/travi/entry/ultrasparc_t1_utilization_explained
http://blogs.oracle.com/travi/entry/corestat_core_utilization_reporting_tool
http://blogs.oracle.com/travi/
http://www.solarisinternals.com/wiki/index.php/CMT_Utilization
''pgstat''
https://blogs.oracle.com/cmt/entry/solaris_knows_hardware_pgstat_explains
http://www.c0t0d0s0.org/archives/7494-Artifact-of-measuring.html
http://docs.oracle.com/cd/E19205-01/820-5372/ahoak/index.html
http://docs.oracle.com/cd/E19963-01/html/821-1462/pgstat-1m.html
Using zone maps http://docs.oracle.com/database/121/DWHSG/zone_maps.htm#DWHSG9355
zone maps http://docs.oracle.com/database/121/VLDBG/whatsnew.htm#VLDBG14186
licensing http://docs.oracle.com/database/121/DBLIC/editions.htm#DBLIC118
http://www.linuxquestions.org/questions/slackware-installation-40/should-i-put-the-swap-partition-at-the-beginning-or-the-end-of-the-drive-365793/
excellent thread!
there are more sectors on the outer tracks of the hard drive platter than on the inner tracks;
see the picture and details about Zoned Bit Recording:
http://www.pcguide.com/ref/hdd/geom/tracksZBR-c.html
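a crude way to see the outer-vs-inner track difference with dd (a sketch; /dev/sda is an assumption, needs root, and it only reads):
{{{
#!/bin/bash
DEV=/dev/sda                                 # assumption: adjust to your disk
SECTORS=$(blockdev --getsz $DEV)             # size in 512-byte sectors
# sequential read from the outer tracks (start of the device)...
dd if=$DEV of=/dev/null bs=1M count=1024 iflag=direct
# ...vs the inner tracks (last ~1 GiB); SECTORS/2048 = device size in MiB
dd if=$DEV of=/dev/null bs=1M count=1024 skip=$(( SECTORS/2048 - 1024 )) iflag=direct
}}}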
https://orainternals.wordpress.com/2009/08/06/ora-4031-and-shared-pool-duration/#more-511
Severe Performance Problems Due to Sudden drop/resize/shrink of buffer cache (Doc ID 1344228.1)
{{{
Impact of setting _enable_shared_pool_durations = false
This will change the architecture of memory in the pools. When set to FALSE, subpools within the SGA will no longer have 4 durations. Instead, each subpool will have only a single duration. This mimics the behavior in 9i, and the shared pool will no longer be able to shrink.
The advantage of this is that the performance issues documented in this note can be avoided. A duration will not encounter memory exhaustion while another duration has free memory.
The disadvantage is that the shared pool (and streams pool) are not able to shrink, mostly negating the benefits of ASMM.
PLEASE NOTE : Even if you have AMM / ASMM disabled similar behavior might be seen as per the note.
Document:1269139.1 SGA Re-Sizes Occurring Despite AMM/ASMM Being Disabled (MEMORY_TARGET/SGA_TARGET=0)
If manual SGA management is in place, the decrease in buffer cache size can be due to IMMEDIATE memory resizes occurring to avoid ORA-4031 error.
}}}
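a minimal sketch to confirm whether these shrink/grow operations are actually happening, via v$sga_resize_ops:
{{{
#!/bin/bash
# IMMEDIATE oper_mode on 'shared pool' / 'DEFAULT buffer cache' rows
# is the signature of the resizes described in the note above
sqlplus -s / as sysdba <<'EOF'
set pages 100 lines 200
col component format a30
select component, oper_type, oper_mode,
       round(initial_size/1024/1024) init_mb,
       round(final_size/1024/1024)   final_mb,
       to_char(start_time,'DD-MON HH24:MI') started
  from v$sga_resize_ops
 where component in ('shared pool','DEFAULT buffer cache')
 order by start_time;
EOF
}}}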
https://blogs.oracle.com/askmaclean/entry/know_more_about_shared_pool
{{{
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production
SQL> set linesize 200 pagesize 1400
SQL> show parameter kgh
NAME TYPE VALUE
------------------------------------ -------------- ------------------ ------------------------------
_kghdsidx_count integer 7
SQL> oradebug setmypid;
Statement processed.
SQL> oradebug dump heapdump 536870914;
Statement processed.
SQL> oradebug tracefile_name
/s01/admin/G10R25/udump/g10r25_ora_11783.trc
[oracle@vrh8 dbs]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_11783.trc
HEAP DUMP heap name = "sga heap" desc = 0x60000058
HEAP DUMP heap name = "sga heap (1,0)" desc = 0x60036110
FIVE LARGEST SUB HEAPS for heap name = "sga heap (1,0)" desc = 0x60036110
HEAP DUMP heap name = "sga heap (2,0)" desc = 0x6003f938
FIVE LARGEST SUB HEAPS for heap name = "sga heap (2,0)" desc = 0x6003f938
HEAP DUMP heap name = "sga heap (3,0)" desc = 0x60049160
FIVE LARGEST SUB HEAPS for heap name = "sga heap (3,0)" desc = 0x60049160
HEAP DUMP heap name = "sga heap (4,0)" desc = 0x60052988
FIVE LARGEST SUB HEAPS for heap name = "sga heap (4,0)" desc = 0x60052988
HEAP DUMP heap name = "sga heap (5,0)" desc = 0x6005c1b0
FIVE LARGEST SUB HEAPS for heap name = "sga heap (5,0)" desc = 0x6005c1b0
HEAP DUMP heap name = "sga heap (6,0)" desc = 0x600659d8
FIVE LARGEST SUB HEAPS for heap name = "sga heap (6,0)" desc = 0x600659d8
HEAP DUMP heap name = "sga heap (7,0)" desc = 0x6006f200
FIVE LARGEST SUB HEAPS for heap name = "sga heap (7,0)" desc = 0x6006f200
SQL> alter system set "_kghdsidx_count" = 6 scope = spfile;
System altered.
SQL> startup force;
ORACLE instance started.
Total System Global Area 859832320 bytes
Fixed Size 2100104 bytes
Variable Size 746587256 bytes
Database Buffers 104857600 bytes
Redo Buffers 6287360 bytes
Database mounted.
Database opened.
SQL>
SQL> oradebug setmypid;
Statement processed.
SQL> oradebug dump heapdump 536870914;
Statement processed.
SQL> oradebug tracefile_name
/s01/admin/G10R25/udump/g10r25_ora_11908.trc
[oracle@vrh8 dbs]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_11908.trc
HEAP DUMP heap name = "sga heap" desc = 0x60000058
HEAP DUMP heap name = "sga heap (1,0)" desc = 0x600360f0
FIVE LARGEST SUB HEAPS for heap name = "sga heap (1,0)" desc = 0x600360f0
HEAP DUMP heap name = "sga heap (2,0)" desc = 0x6003f918
FIVE LARGEST SUB HEAPS for heap name = "sga heap (2,0)" desc = 0x6003f918
HEAP DUMP heap name = "sga heap (3,0)" desc = 0x60049140
FIVE LARGEST SUB HEAPS for heap name = "sga heap (3,0)" desc = 0x60049140
HEAP DUMP heap name = "sga heap (4,0)" desc = 0x60052968
FIVE LARGEST SUB HEAPS for heap name = "sga heap (4,0)" desc = 0x60052968
HEAP DUMP heap name = "sga heap (5,0)" desc = 0x6005c190
FIVE LARGEST SUB HEAPS for heap name = "sga heap (5,0)" desc = 0x6005c190
HEAP DUMP heap name = "sga heap (6,0)" desc = 0x600659b8
FIVE LARGEST SUB HEAPS for heap name = "sga heap (6,0)" desc = 0x600659b8
SQL>
SQL> alter system set "_kghdsidx_count" = 2 scope = spfile;
System altered.
SQL>
SQL> startup force;
ORACLE instance started.
Total System Global Area 851443712 bytes
Fixed Size 2100040 bytes
Variable Size 738198712 bytes
Database Buffers 104857600 bytes
Redo Buffers 6287360 bytes
Database mounted.
Database opened.
SQL> oradebug setmypid;
Statement processed.
SQL> oradebug dump heapdump 2;
Statement processed.
SQL> oradebug tracefile_name
/s01/admin/G10R25/udump/g10r25_ora_12003.trc
[oracle@vrh8 ~]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_12003.trc
HEAP DUMP heap name = "sga heap" desc = 0x60000058
HEAP DUMP heap name = "sga heap (1,0)" desc = 0x600360b0
HEAP DUMP heap name = "sga heap (2,0)" desc = 0x6003f8d
SQL> alter system set cpu_count = 16 scope = spfile;
System altered.
SQL> startup force;
ORACLE instance started.
Total System Global Area 851443712 bytes
Fixed Size 2100040 bytes
Variable Size 738198712 bytes
Database Buffers 104857600 bytes
Redo Buffers 6287360 bytes
Database mounted.
Database opened.
SQL> oradebug setmypid;
Statement processed.
SQL> oradebug dump heapdump 2;
Statement processed.
SQL> oradebug tracefile_name
/s01/admin/G10R25/udump/g10r25_ora_12065.trc
[oracle@vrh8 ~]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_12065.trc
HEAP DUMP heap name = "sga heap" desc = 0x60000058
HEAP DUMP heap name = "sga heap (1,0)" desc = 0x600360b0
HEAP DUMP heap name = "sga heap (2,0)" desc = 0x6003f8d8
SQL> show parameter sga_target
NAME TYPE VALUE
------------------------------------ -------------- ------------------ ------------------------------
sga_target big integer 0
SQL> alter system set sga_target = 1000M scope = spfile;
System altered.
SQL> startup force;
ORACLE instance started.
Total System Global Area 1048576000 bytes
Fixed Size 2101544 bytes
Variable Size 738201304 bytes
Database Buffers 301989888 bytes
Redo Buffers 6283264 bytes
Database mounted.
Database opened.
SQL> alter system set sga_target = 1000M scope = spfile;
System altered.
SQL> startup force;
ORACLE instance started.
Total System Global Area 1048576000 bytes
Fixed Size 2101544 bytes
Variable Size 738201304 bytes
Database Buffers 301989888 bytes
Redo Buffers 6283264 bytes
Database mounted.
Database opened.
SQL>
SQL>
SQL> oradebug setmypid;
Statement processed.
SQL> oradebug dump heapdump 2;
Statement processed.
SQL> oradebug tracefile_name
/s01/admin/G10R25/udump/g10r25_ora_12148.trc
SQL>
SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@vrh8 dbs]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_12148.trc
HEAP DUMP heap name = "sga heap" desc = 0x60000058
HEAP DUMP heap name = "sga heap (1,0)" desc = 0x60036690
HEAP DUMP heap name = "sga heap (1,1)" desc = 0x60037ee8
HEAP DUMP heap name = "sga heap (1,2)" desc = 0x60039740
HEAP DUMP heap name = "sga heap (1,3)" desc = 0x6003af98
HEAP DUMP heap name = "sga heap (2,0)" desc = 0x6003feb8
HEAP DUMP heap name = "sga heap (2,1)" desc = 0x60041710
HEAP DUMP heap name = "sga heap (2,2)" desc = 0x60042f68
_enable_shared_pool_durations: controls whether the 10g shared pool durations feature is enabled; when sga_target is set to 0 this parameter defaults to false.
Before 10.2.0.5, setting cursor_space_for_time to true also forced it to false, but in 10.2.0.5 cursor_space_for_time is deprecated and no longer considered.
SQL> alter system set "_enable_shared_pool_durations" = false scope = spfile;
System altered.
SQL>
SQL> startup force;
ORACLE instance started.
Total System Global Area 1048576000 bytes
Fixed Size 2101544 bytes
Variable Size 738201304 bytes
Database Buffers 301989888 bytes
Redo Buffers 6283264 bytes
Database mounted.
Database opened.
SQL> oradebug setmypid;
Statement processed.
SQL> oradebug dump heapdump 2;
Statement processed.
SQL> oradebug tracefile_name
/s01/admin/G10R25/udump/g10r25_ora_12233.trc
SQL>
SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@vrh8 dbs]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_12233.trc
HEAP DUMP heap name = "sga heap" desc = 0x60000058
HEAP DUMP heap name = "sga heap (1,0)" desc = 0x60036690
HEAP DUMP heap name = "sga heap (2,0)" desc = 0x6003feb8
}}}
! why?
curious, why would you need _kghdsidx_count = 7?
oracle probably set it automatically based on the (high) cpu_count and shared pool size. this ought to reduce shared pool latch contention by spreading the contention across 7 latches ... of course it only helps if you actually have a shared pool latch contention problem (one that can't be solved by other means, a'la hard parsing less).
but the added complexity sometimes causes bugs/problems (possibly in conjunction with shared pool resizes etc). i have a case where we went from 2 to 1 and the problems went away; in another case the customer went from 4 to 2 (I had recommended 1 but they wanted 2 "just in case" :-) and the problem went away too.
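before playing with _kghdsidx_count, a minimal check that shared pool latch contention actually exists:
{{{
#!/bin/bash
# one row per subpool latch; high misses/sleeps on a child = contention
sqlplus -s / as sysdba <<'EOF'
set pages 100 lines 200
select child#, gets, misses, sleeps
  from v$latch_children
 where name = 'shared pool'
 order by child#;
EOF
}}}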
https://forums.oracle.com/thread/1130379
http://christianbilien.wordpress.com/2007/05/01/two-useful-hidden-parameters-_smm_max_size-and-_pga_max-size/
http://www.dbaglobe.com/2009/12/11gr2-pga-related-hidden-parameters.html
Bug 5947623 - Excess PGA memory used by hash joins [ID 5947623.8]
How To Super-Size Work Area Memory Size Used By Sessions? [ID 453540.1]
Bug 7635419: SORTS SPILLING TO DISK EVEN THOUGH HAVING ENOUGH PGA
SQL Memory Management in Oracle9i http://www.cse.ust.hk/vldb2002/VLDB2002-proceedings/papers/S29P03.pdf
http://www.freelists.org/post/oracle-l/Doing-large-sort-in-RAM-sort-workarea-manipulation
Thread: Query Tuning Suggestion when scanning 20 Millions records,retriving 100 rec https://forums.oracle.com/forums/thread.jspa?messageID=10255833
Thread: Performance with Analytic functions https://forums.oracle.com/forums/thread.jspa?messageID=10229037
http://hoopercharles.wordpress.com/2010/01/18/pga-memory-the-developers-secret-weapon-for-stealing-all-of-the-memory-in-the-server/
Order By Clause And Cost http://smanroy.wordpress.com/tag/_smm_max_size/
PGA Memory and PGA_AGGREGATE_TARGET, is there Something Wrong with this Quote? http://hoopercharles.wordpress.com/2010/08/04/pga-memory-and-pga_aggregate_target-is-there-something-wrong-with-this-quote/
http://www.scribd.com/doc/6427605/Senegacnik-PGA-Memory-Management-Oracle9i-10g
''Formula:''
<<<
_smm_max_size = 10% * pga_aggregate_target
and _smm_max_size in turn drives _pga_max_size: _pga_max_size = 2 * _smm_max_size
A pga_aggregate_target larger than 1000MB will now allow much higher default thresholds in 10gR2: pga_aggregate_target set to 5GB will allow an _smm_max_size of 500MB (was 100MB before) and _pga_max_size of 1000MB (was 200MB).
from MOS 453540.1 How To Super-Size Work Area Memory Size Used By Sessions?
_PGA_MAX_SIZE
''-> should be set to a minimum of twice the desired work area size.'' The default value is 200MB.
_SMM_MAX_SIZE
-> normally this parameter is not needed, but it may help under certain circumstances
-> if set, it should be equal to the desired work area size (in KB!)
<<<
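a quick sketch to read the current values behind the formula above (hidden parameters, so SYSDBA; _smm_max_size is reported in KB):
{{{
#!/bin/bash
sqlplus -s / as sysdba <<'EOF'
set pages 100 lines 200
col ksppinm format a25
col ksppstvl format a20
select a.ksppinm, b.ksppstvl
  from x$ksppi a, x$ksppcv b
 where a.indx = b.indx
   and a.ksppinm in ('_pga_max_size','_smm_max_size');
show parameter pga_aggregate_target
EOF
}}}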
[img[ https://lh5.googleusercontent.com/-vOzqxmhiDDQ/T5B6iyhB2oI/AAAAAAAABk0/fUBh7-S0-KA/s2048/pga_aggregate_target.png ]]
http://hoopercharles.wordpress.com/2010/06/17/_small_table_threshold-parameter-and-buffer-cache-what-is-wrong-with-this-quote/
http://www.centroid.com/knowledgebase/blog/smart-scan-why-is-small-table-threshold-important
http://blog.tanelpoder.com/2012/09/03/optimizer-statistics-driven-direct-path-read-decision-for-full-table-scans-_direct_read_decision_statistics_driven/
http://afatkulin.blogspot.co.uk/2012/07/serial-direct-path-reads-in-11gr2-and.html
http://afatkulin.blogspot.ca/2009/01/11g-adaptive-direct-path-reads-what-is.html
http://oracle-tech.blogspot.com/2014/04/directreaddecisionstatistcsdriven.html
! decision tree
nice viz by http://progeeking.com/2014/02/25/the-big-q-direct-path-and-cell-offloading/ , http://bit.ly/1EXUj90
[img[ https://progeeking.files.wordpress.com/2014/02/dpr10.png?w=690&h=629 ]]
summary of this chart
* direct reads are used to avoid wiping out many buffers from the buffer cache with a single large (bigger than _small_table_threshold) table scan
* In Oracle 11gR1, direct reads were invoked when the size of the object exceeded five times the _small_table_threshold value. With 11gR2, Oracle is more aggressive with direct reads and uses the _small_table_threshold setting itself as the threshold.
* In 11gR2, the _small_table_threshold initialization parameter defaults to 2% of the size of your database buffer cache.
* everything below STT is never read with DPR, everything above VLOT is always read with DPR.
* If the object is between STT and VLOT, is not compressed with OLTP/HCC, and is less than 50% cached/dirty, then we have DPR. If the object is between STT and VLOT, is compressed with OLTP/HCC, and is less than 95% cached/dirty, then we have DPR.
* we get more DPR for compressed objects. In my opinion there are 2 general reasons for this. First, "typically" we are compressing old/rarely accessed data. Second, in an Exadata environment the cells are capable of decompressing, so the hard work is done there. Nice.
* ABTC - scans above the threshold can instead use automatic big table caching, depending on the db_big_table_cache_percent_target parameter
-- obsolete stuff
* About MTT: this threshold is obsolete /in 11.2 AFAIK/. Before, it stood for what STT is now.
* _very_large_object_threshold – till 12.1.0.1
-- serial, parallel, abtc
* serial reads
direct path reads, as summarized above (bigger than _small_table_threshold; 5x STT in 11gR1, STT itself in 11gR2; STT defaults to 2% of the buffer cache)
* parallel reads
auto-dop and in-memory px (also kicks in when the segment (or partition) is marked as CACHE or KEEP)
Starting from Oracle 11.2 though, Oracle parallel execution slaves can also do the parallel full segment scans via the buffer cache. The feature is called In-Memory Parallel Execution, not to be confused with the Oracle 12c upcoming In-Memory Option (which gives you a columnar, compressed, in-memory cache of your on-disk data)
So, this is a great feature – if disk scanning, data retrieval IO is your bottleneck (and you have plenty of memory). But on Exadata, the storage cells give you awesome disk scanning speeds and data filtering/projection/decompression offloading anyway. If you use buffered reads, then you won’t use Smart Scans – as smart scans need direct path reads as a prerequisite. And if you don’t use smart scans, the Exadata storage cells will just act as block IO servers for you – and even if you have all the data cached in RAM, your DB nodes (compute nodes) would be used for all the filtering and decompression of billions of rows. Also, there’s no storage indexing in memory (well, unless you use zone-maps which do a similar thing at higher level in 12c).
So, long story short, you likely do not want to use In-Memory PX on Exadata – and even on non-Exadata, you probably do not want it to kick in automatically at “random” times without you controlling this.
* automatic big table caching
designed primarily to enhance performance for data warehouse workloads, but it also improves performance in mixed workloads
Use object temperature for cache decisions
It’s not working for serial executions on RAC
Automatic big table caching improves in-memory query performance for large tables that do not fit completely in the buffer cache. Such tables can be stored in the big table cache, an optional, configurable portion of the database buffer cache.
If a large table is approximately the size of the combined size of the big table cache of all instances, then the table is partitioned and cached, or mostly cached, on all instances. An in-memory query could eliminate most disk reads for queries on the table, or the database could intelligently read from disk only for that portion of the table that does not fit in the big table cache. If the big table cache cannot cache all the tables to be scanned, only the most frequently accessed tables are cached, and the rest are read through direct reads automatically.
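to pull the actual thresholds behind the chart above (a minimal sketch; x$ views need SYSDBA, _small_table_threshold is in buffer cache blocks, and v$bt_scan_cache is 12c+ and may be empty when ABTC is off):
{{{
#!/bin/bash
sqlplus -s / as sysdba <<'EOF'
set pages 100 lines 200
col ksppinm format a32
col ksppstvl format a20
select a.ksppinm, b.ksppstvl
  from x$ksppi a, x$ksppcv b
 where a.indx = b.indx
   and a.ksppinm in ('_small_table_threshold','_db_block_buffers');
show parameter db_big_table_cache_percent_target
-- 12c+: what ABTC is currently caching
select bt_cache_target, object_count, memory_buf_alloc from v$bt_scan_cache;
EOF
}}}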
https://www.google.com/search?q=digital+decoupling&oq=digital+decoup&aqs=chrome.0.0j69i57j0l4.5512j0j0&sourceid=chrome&ie=UTF-8
https://www.accenture.com/t00010101T000000__w__/nz-en/_acnmedia/Accenture/Conversion-Assets/DotCom/Documents/Global/PDF/Digital_2/Accenture-Digital-Decoupling.pdf
https://www.quora.com/What-is-digital-decoupling-And-which-are-some-of-the-lesser-know-agencies-doing-this
sporadic machine shutdowns..
the RDBMS alert log and OS logs show a sudden shutdown of the machine..
Where to find additional info?
/var/log/acpid
shows the particular event behind the shutdown..
{{{
messages log:
Dec 25 10:21:02 cistest shutdown[3510]: shutting down for system halt
acpid log:
[Sat Dec 25 10:21:02 2010] received event "button/power PWRF 00000080 00000001"
[Sat Dec 25 10:21:02 2010] notifying client 3274[68:68]
[Sat Dec 25 10:21:02 2010] notifying client 3484[0:0]
[Sat Dec 25 10:21:02 2010] executing action "/bin/ps awwux | /bin/grep gnome-power-manager | /bin/grep -qv grep || /sbin/shutdown -h now"
[Sat Dec 25 10:21:02 2010] BEGIN HANDLER MESSAGES
[Sat Dec 25 10:21:03 2010] END HANDLER MESSAGES
[Sat Dec 25 10:21:03 2010] action exited with status 0
[Sat Dec 25 10:21:03 2010] completed event "button/power PWRF 00000080 00000001"
}}}
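the digging above boils down to two greps (a sketch; the log locations are the defaults shown in the excerpt):
{{{
#!/bin/bash
# who pressed the (virtual) power button, and when did the halt start?
grep "button/power" /var/log/acpid
grep -i "shutting down for system halt" /var/log/messages*
}}}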
<<<
Adding column RECV_SHIP_STATUS to the existing index PSARECV_LN_SHIP should save you from visiting the table.
However, you may also need to create extended statistics (a column group) on the join-condition columns BUSINESS_UNIT_PO, PO_ID, LINE_NBR, SCHED_NBR; a sketch follows after the SQL below.
Regarding parameters, in 12.2 I would not set optimizer_adaptive_features=false.
<<<
{{{
INSERT INTO ps_recv_load_t3a6
(business_unit,
business_unit_po,
line_nbr,
po_id,
process_instance,
qty,
qty_sh_accpt_vuom,
receiver_id,
recv_ln_nbr,
recv_ship_seq_nbr,
sched_nbr,
receive_uom)
SELECT DISTINCT ' ',
A.business_unit_po,
A.line_nbr,
A.po_id,
A.process_instance,
SUM(A.qty_sh_accpt_vuom),
0,
A.receiver_id,
A.recv_ln_nbr,
A.recv_ship_seq_nbr,
A.sched_nbr,
A.receive_uom
FROM ps_recv_load_t2a6 A,
ps_recv_ln_ship B
WHERE A.process_instance = :1
AND B.business_unit_po = A.business_unit_po
AND A.business_unit = :2
AND A.receiver_id = :3
AND A.po_id = :4
AND A.line_nbr = :5
AND A.sched_nbr = :6
AND B.po_id = A.po_id
AND B.line_nbr = A.line_nbr
AND B.sched_nbr = A.sched_nbr
AND B.recv_ship_status <> 'X'
GROUP BY A.business_unit_po,
A.line_nbr,
A.po_id,
A.process_instance,
A.receiver_id,
A.sched_nbr,
A.recv_ln_nbr,
A.recv_ship_seq_nbr,
A.receive_uom
}}}
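a sketch of the two suggestions above (the index name PSARECV_LN_SHIP2, the column order, and the credentials are assumptions; check the existing index definition first):
{{{
#!/bin/bash
sqlplus -s scott/tiger <<'EOF'    # hypothetical credentials
-- widen the index so the filter on RECV_SHIP_STATUS avoids the table visit
create index psarecv_ln_ship2 on ps_recv_ln_ship
  (business_unit_po, po_id, line_nbr, sched_nbr, recv_ship_status);
-- extended (column group) statistics on the join columns
select dbms_stats.create_extended_stats(
         user, 'PS_RECV_LN_SHIP',
         '(BUSINESS_UNIT_PO,PO_ID,LINE_NBR,SCHED_NBR)') from dual;
exec dbms_stats.gather_table_stats(user, 'PS_RECV_LN_SHIP')
EOF
}}}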
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=bash%20addition%20parse%20error
https://www.shell-tips.com/2010/06/14/performing-math-calculation-in-bash/
http://www.unix.com/shell-programming-and-scripting/141599-bc-giving-error-standard_in-2-parse-error.html
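the short version of those three links (GNU bc's "(standard_in) 1: parse error" usually means an empty or non-numeric expression reached it):
{{{
#!/bin/bash
echo $(( 2 + 3 ))              # integer math: bash built-in arithmetic
echo "scale=2; 10/3" | bc      # decimals need bc (or awk)
x=""; echo "$x + 1" | bc       # reproduces: (standard_in) 1: parse error
}}}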
{{{
cat *addm* | egrep "ADDM Report for|\||AWR snapshot range from|Time period starts at|Time period ends at" | less
ADDM Report for Task 'ADDM_20180925_151302'
AWR snapshot range from 76292 to 76294.
Time period starts at 2018-09-18T15:30:03.
Time period ends at 2018-09-18T16:30:20.
1 Top SQL Statements 923.75 | 83.46 4
2 Top Segments by "User I/O" and "Cluster" 446.06 | 40.3 5
3 Temp Space Contention 359.33 | 32.46 1
4 Buffer Busy - Hot Objects 267.97 | 24.21 5
5 I/O Throughput 239.97 | 21.68 1
6 Global Cache Messaging 172.57 | 15.59 1
7 Undersized PGA 107.23 | 9.69 1
ADDM Report for Task 'ADDM_20180925_151652'
AWR snapshot range from 76243 to 76244.
Time period starts at 2018-09-17T15:00:08.
Time period ends at 2018-09-17T15:30:10.
1 Top SQL Statements 991.1 | 98.7 1
2 Soft Parse 990.82 | 98.68 1
3 "Concurrency" Wait Class 990.39 | 98.63 0
4 Shared Pool Latches 990.39 | 98.63 1
ls | grep -i addm | grep "76292" | grep "76294"
00015_edb360_7a_7z_372308_addmrpt_rac_76292_76294_max5wd2.txt
00151_edb360_7a_7z_372308_addmrpt_2_76292_76294_max5wd3.txt
00283_edb360_7a_7z_372308_addmrpt_4_76292_76294_max5wd3.txt
}}}
{{{
$ cat *addm* | grep "Summary of Findings" -A13 -B5 > addm_summary.txt
}}}
{{{
airflow - airbnb
luigi - spotify
nifi - NSA (open-sourced as Apache NiFi, later commercialized by Hortonworks)
}}}
Building Data Pipelines with Python https://learning.oreilly.com/videos/building-data-pipelines/9781491970270/9781491970270-video289818
airflow vs nifi
https://www.google.com/search?ei=1tJsXOzvK7Cw5wKIrKjYAw&q=nifi+vs+airflow&oq=nifi+vs+ai&gs_l=psy-ab.3.0.0.14311.14742..15812...0.0..0.102.174.1j1......0....1..gws-wiz.......0i71j0i20i263j0i22i30.NvnT11kl5zg
https://stackoverflow.com/questions/39399065/airbnb-airflow-vs-apache-nifi
https://www.reddit.com/r/bigdata/comments/51mgk6/comparing_airbnb_airflow_and_apache_nifi/
https://learning.oreilly.com/search/?query=author%3A%22Stephane%20Maarek%22&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&is_academic_institution_account=false&sort=relevance&page=1
https://community.hortonworks.com/questions/146822/has-anyone-done-a-tool-comparison-hdf-nifi-vs-tale.html
airflow HOWTO
https://www.udemy.com/the-complete-hands-on-course-to-master-apache-airflow/
kafka vs nifi
https://stackoverflow.com/questions/53536681/difference-between-kafka-and-nifi
kafka cdc
https://learning.oreilly.com/search/?query=kafka%20CDC&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&is_academic_institution_account=false&sort=relevance&page=0
apache nifi practice data set / simulator
https://www.google.com/search?q=apache+nifi+practice+data+set&oq=apache+nifi+practice+data+set&aqs=chrome..69i57.8493j1j4&sourceid=chrome&ie=UTF-8
https://medium.com/hashmapinc/its-here-an-apache-nifi-simulator-for-generating-random-and-realistic-time-series-data-d7e463aa5e78
https://github.com/hashmapinc/nifi-simulator-bundle
https://dzone.com/articles/real-time-stock-processing-with-apache-nifi-and-ap
https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates
nifi oracle cdc
https://www.google.com/search?q=oracle+nifi&oq=oracle+nifi&aqs=chrome..69i57j0l5.2508j1j1&sourceid=chrome&ie=UTF-8
https://blog.pythian.com/database-migration-using-apache-nifi/
kafka oracle
Building a Real-Time Streaming Platform with Oracle, Apache Kafka, and KSQL https://www.youtube.com/watch?v=bc4wfzuCbAo
ETL Is Dead, Long Live Streams: real-time streams w/ Apache Kafka https://www.youtube.com/watch?v=I32hmY4diFY
dmbs_kafka https://technology.amis.nl/2018/10/25/querying-and-publishing-kafka-events-from-oracle-database-sql-and-pl-sql/
https://www.confluent.io/blog/ksql-in-action-real-time-streaming-etl-from-oracle-transactional-data
https://www.google.com/search?q=dbms_kafka&oq=dbms_kafka&aqs=chrome..69i57.2153j1j1&sourceid=chrome&ie=UTF-8
https://download.oracle.com/otndocs/products/spatial/pdf/OOW2018_Query_Real-Time_Kafka_Streams_with_Oracle_SQL.pdf
https://guidoschmutz.wordpress.com/2016/06/12/providing-oracle-stream-analytics-12c-environment-using-docker/
kafka striim
https://www.cloudera.com/solutions/gallery/striim-real-time-data-streaming-from-oracle-to-kafka.html
https://www.striim.com/oracle-to-kafka/
Robin Moffatt kafka
https://www.confluent.io/blog/author/robin-moffatt/
https://www.confluent.io/blog/ksql-in-action-real-time-streaming-etl-from-oracle-transactional-data
Instantiating Oracle GoldenGate with an Initial Load https://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_initsync.htm#GWUAD546
https://debezium.io/docs/tutorial/
https://www.confluent.io/blog/no-more-silos-how-to-integrate-your-databases-with-apache-kafka-and-cdc
https://medium.com/myheritage-engineering/achieving-real-time-analytics-via-change-data-capture-d69ed2ead889
https://www.striim.com/blog/2018/03/tutorial-real-time-database-integration-apache-kafka-change-data-capture/
https://www.hvr-software.com/docs/Requirements_for_Kafka
https://www.rittmanmead.com/blog/2016/07/introduction-oracle-stream-analytics/
https://www.rittmanmead.com/blog/author/robin-moffatt/
{{{
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
$ lparstat -i
Node Name : hostexample
Partition Name : hostexample
Partition Number : 2
Type : Shared-SMT
Mode : Uncapped
Entitled Capacity : 2.30
Partition Group-ID : 32770
Shared Pool ID : 0
Online Virtual CPUs : 8
Maximum Virtual CPUs : 8
Minimum Virtual CPUs : 1
Online Memory : 21247 MB
Maximum Memory : 40960 MB
Minimum Memory : 256 MB
Variable Capacity Weight : 128
Minimum Capacity : 0.10
Maximum Capacity : 8.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 8
Active Physical CPUs in system : 8
Active CPUs in Pool : 8
Shared Physical CPUs in system : 8
Maximum Capacity of Pool : 800
Entitled Capacity of Pool : 740
Unallocated Capacity : 0.00
Physical CPU Percentage : 28.75%
Unallocated Weight : 0
$ lparstat
System configuration: type=Shared mode=Uncapped smt=On lcpu=16 mem=21247 psize=8 ent=2.30
%user %sys %wait %idle physc %entc lbusy vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
26.7 3.7 6.0 63.6 0.71 30.8 15.9 3080772535 417231070
$ lscfg | grep proc
+ proc0 Processor
+ proc2 Processor
+ proc4 Processor
+ proc6 Processor
+ proc8 Processor
+ proc10 Processor
+ proc12 Processor
+ proc14 Processor
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ lsattr -El proc0
frequency 4204000000 Processor Speed False
smt_enabled true Processor SMT enabled False
smt_threads 2 Processor SMT threads False
state enable Processor state False
type PowerPC_POWER6 Processor type False
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ uname -M
IBM,8204-E8A
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ lsattr -El sys0 -a realmem
realmem 21757952 Amount of usable physical memory in Kbytes False
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ lsdev -Cc processor
proc0 Available 00-00 Processor
proc2 Available 00-02 Processor
proc4 Available 00-04 Processor
proc6 Available 00-06 Processor
proc8 Available 00-08 Processor
proc10 Available 00-10 Processor
proc12 Available 00-12 Processor
proc14 Available 00-14 Processor
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ lscfg -vp |grep -ip proc |grep "PROC"
2 WAY PROC CUOD :
2 WAY PROC CUOD :
2 WAY PROC CUOD :
2 WAY PROC CUOD :
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ odmget -q"PdDvLn LIKE processor/*" CuDv
CuDv:
name = "proc0"
status = 1
chgstatus = 2
ddins = ""
location = "00-00"
parent = "sysplanar0"
connwhere = "P0"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc2"
status = 1
chgstatus = 2
ddins = ""
location = "00-02"
parent = "sysplanar0"
connwhere = "P2"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc4"
status = 1
chgstatus = 2
ddins = ""
location = "00-04"
parent = "sysplanar0"
connwhere = "P4"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc6"
status = 1
chgstatus = 2
ddins = ""
location = "00-06"
parent = "sysplanar0"
connwhere = "P6"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc8"
status = 1
chgstatus = 2
ddins = ""
location = "00-08"
parent = "sysplanar0"
connwhere = "P8"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc10"
status = 1
chgstatus = 2
ddins = ""
location = "00-10"
parent = "sysplanar0"
connwhere = "P10"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc12"
status = 1
chgstatus = 2
ddins = ""
location = "00-12"
parent = "sysplanar0"
connwhere = "P12"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc14"
status = 1
chgstatus = 2
ddins = ""
location = "00-14"
parent = "sysplanar0"
connwhere = "P14"
PdDvLn = "processor/sys/proc_rspc"
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ odmget -q"PdDvLn LIKE processor/* AND name=proc0" CuDv
CuDv:
name = "proc0"
status = 1
chgstatus = 2
ddins = ""
location = "00-00"
parent = "sysplanar0"
connwhere = "P0"
PdDvLn = "processor/sys/proc_rspc"
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ odmget -q"PdDvLn LIKE processor/* AND name=proc14" CuDv
CuDv:
name = "proc14"
status = 1
chgstatus = 2
ddins = ""
location = "00-14"
parent = "sysplanar0"
connwhere = "P14"
PdDvLn = "processor/sys/proc_rspc"
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ lsattr -El sys0 -a modelname
modelname IBM,8204-E8A Machine name False
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ lparstat -i|grep ^Active\ Phys
Active Physical CPUs in system : 8
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ lscfg -vp|grep WAY
2 WAY PROC CUOD :
2 WAY PROC CUOD :
2 WAY PROC CUOD :
2 WAY PROC CUOD :
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ lscfg -vp|grep proc
proc0 Processor
proc2 Processor
proc4 Processor
proc6 Processor
proc8 Processor
proc10 Processor
proc12 Processor
proc14 Processor
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
$ lscfg -vp
INSTALLED RESOURCE LIST WITH VPD
The following resources are installed on your machine.
Model Architecture: chrp
Model Implementation: Multiple Processor, PCI bus
sys0 System Object
sysplanar0 System Planar
vio0 Virtual I/O Bus
ent6 U8204.E8A.10F2441-V2-C2-T1 Virtual I/O Ethernet Adapter (l-lan)
Network Address.............8E8CF1FC7B02
Displayable Message.........Virtual I/O Ethernet Adapter (l-lan)
Hardware Location Code......U8204.E8A.10F2441-V2-C2-T1
vscsi1 U8204.E8A.10F2441-V2-C4-T1 Virtual SCSI Client Adapter
Hardware Location Code......U8204.E8A.10F2441-V2-C4-T1
cd0 U8204.E8A.10F2441-V2-C4-T1-L8f0000000000 Virtual SCSI Optical Served by VIO Server
vscsi0 U8204.E8A.10F2441-V2-C3-T1 Virtual SCSI Client Adapter
Hardware Location Code......U8204.E8A.10F2441-V2-C3-T1
hdisk78 U8204.E8A.10F2441-V2-C3-T1-La00000000000 Virtual SCSI Disk Drive
hdisk77 U8204.E8A.10F2441-V2-C3-T1-L900000000000 Virtual SCSI Disk Drive
hdisk76 U8204.E8A.10F2441-V2-C3-T1-Lb00000000000 Virtual SCSI Disk Drive
hdisk75 U8204.E8A.10F2441-V2-C3-T1-L880000000000 Virtual SCSI Disk Drive
hdisk74 U8204.E8A.10F2441-V2-C3-T1-La80000000000 Virtual SCSI Disk Drive
hdisk73 U8204.E8A.10F2441-V2-C3-T1-L980000000000 Virtual SCSI Disk Drive
hdisk72 U8204.E8A.10F2441-V2-C3-T1-Lb80000000000 Virtual SCSI Disk Drive
hdisk71 U8204.E8A.10F2441-V2-C3-T1-L840000000000 Virtual SCSI Disk Drive
hdisk70 U8204.E8A.10F2441-V2-C3-T1-La40000000000 Virtual SCSI Disk Drive
hdisk69 U8204.E8A.10F2441-V2-C3-T1-L940000000000 Virtual SCSI Disk Drive
hdisk68 U8204.E8A.10F2441-V2-C3-T1-Lb40000000000 Virtual SCSI Disk Drive
hdisk67 U8204.E8A.10F2441-V2-C3-T1-L8c0000000000 Virtual SCSI Disk Drive
hdisk66 U8204.E8A.10F2441-V2-C3-T1-Lac0000000000 Virtual SCSI Disk Drive
hdisk65 U8204.E8A.10F2441-V2-C3-T1-L9c0000000000 Virtual SCSI Disk Drive
hdisk64 U8204.E8A.10F2441-V2-C3-T1-Lbc0000000000 Virtual SCSI Disk Drive
hdisk63 U8204.E8A.10F2441-V2-C3-T1-L820000000000 Virtual SCSI Disk Drive
hdisk62 U8204.E8A.10F2441-V2-C3-T1-La20000000000 Virtual SCSI Disk Drive
hdisk61 U8204.E8A.10F2441-V2-C3-T1-L920000000000 Virtual SCSI Disk Drive
hdisk60 U8204.E8A.10F2441-V2-C3-T1-Lb20000000000 Virtual SCSI Disk Drive
hdisk59 U8204.E8A.10F2441-V2-C3-T1-L8a0000000000 Virtual SCSI Disk Drive
hdisk58 U8204.E8A.10F2441-V2-C3-T1-Laa0000000000 Virtual SCSI Disk Drive
hdisk57 U8204.E8A.10F2441-V2-C3-T1-L9a0000000000 Virtual SCSI Disk Drive
hdisk56 U8204.E8A.10F2441-V2-C3-T1-Lba0000000000 Virtual SCSI Disk Drive
hdisk55 U8204.E8A.10F2441-V2-C3-T1-L860000000000 Virtual SCSI Disk Drive
hdisk54 U8204.E8A.10F2441-V2-C3-T1-La60000000000 Virtual SCSI Disk Drive
hdisk53 U8204.E8A.10F2441-V2-C3-T1-L960000000000 Virtual SCSI Disk Drive
hdisk52 U8204.E8A.10F2441-V2-C3-T1-Lb60000000000 Virtual SCSI Disk Drive
hdisk51 U8204.E8A.10F2441-V2-C3-T1-L8e0000000000 Virtual SCSI Disk Drive
hdisk50 U8204.E8A.10F2441-V2-C3-T1-Lae0000000000 Virtual SCSI Disk Drive
hdisk49 U8204.E8A.10F2441-V2-C3-T1-L9e0000000000 Virtual SCSI Disk Drive
hdisk48 U8204.E8A.10F2441-V2-C3-T1-Lbe0000000000 Virtual SCSI Disk Drive
hdisk47 U8204.E8A.10F2441-V2-C3-T1-L810000000000 Virtual SCSI Disk Drive
hdisk46 U8204.E8A.10F2441-V2-C3-T1-La10000000000 Virtual SCSI Disk Drive
hdisk45 U8204.E8A.10F2441-V2-C3-T1-L910000000000 Virtual SCSI Disk Drive
hdisk44 U8204.E8A.10F2441-V2-C3-T1-Lb10000000000 Virtual SCSI Disk Drive
hdisk43 U8204.E8A.10F2441-V2-C3-T1-L890000000000 Virtual SCSI Disk Drive
hdisk42 U8204.E8A.10F2441-V2-C3-T1-La90000000000 Virtual SCSI Disk Drive
hdisk41 U8204.E8A.10F2441-V2-C3-T1-L990000000000 Virtual SCSI Disk Drive
hdisk40 U8204.E8A.10F2441-V2-C3-T1-Lb90000000000 Virtual SCSI Disk Drive
hdisk39 U8204.E8A.10F2441-V2-C3-T1-L850000000000 Virtual SCSI Disk Drive
hdisk38 U8204.E8A.10F2441-V2-C3-T1-La50000000000 Virtual SCSI Disk Drive
hdisk37 U8204.E8A.10F2441-V2-C3-T1-L950000000000 Virtual SCSI Disk Drive
hdisk36 U8204.E8A.10F2441-V2-C3-T1-Lb50000000000 Virtual SCSI Disk Drive
hdisk35 U8204.E8A.10F2441-V2-C3-T1-L8d0000000000 Virtual SCSI Disk Drive
hdisk34 U8204.E8A.10F2441-V2-C3-T1-Lad0000000000 Virtual SCSI Disk Drive
hdisk33 U8204.E8A.10F2441-V2-C3-T1-L9d0000000000 Virtual SCSI Disk Drive
hdisk32 U8204.E8A.10F2441-V2-C3-T1-Lbd0000000000 Virtual SCSI Disk Drive
hdisk31 U8204.E8A.10F2441-V2-C3-T1-L830000000000 Virtual SCSI Disk Drive
hdisk30 U8204.E8A.10F2441-V2-C3-T1-La30000000000 Virtual SCSI Disk Drive
hdisk29 U8204.E8A.10F2441-V2-C3-T1-L930000000000 Virtual SCSI Disk Drive
hdisk28 U8204.E8A.10F2441-V2-C3-T1-Lb30000000000 Virtual SCSI Disk Drive
hdisk27 U8204.E8A.10F2441-V2-C3-T1-L8b0000000000 Virtual SCSI Disk Drive
hdisk26 U8204.E8A.10F2441-V2-C3-T1-Lab0000000000 Virtual SCSI Disk Drive
hdisk25 U8204.E8A.10F2441-V2-C3-T1-L9b0000000000 Virtual SCSI Disk Drive
hdisk24 U8204.E8A.10F2441-V2-C3-T1-Lbb0000000000 Virtual SCSI Disk Drive
hdisk23 U8204.E8A.10F2441-V2-C3-T1-L870000000000 Virtual SCSI Disk Drive
hdisk22 U8204.E8A.10F2441-V2-C3-T1-La70000000000 Virtual SCSI Disk Drive
hdisk21 U8204.E8A.10F2441-V2-C3-T1-L970000000000 Virtual SCSI Disk Drive
hdisk20 U8204.E8A.10F2441-V2-C3-T1-Lb70000000000 Virtual SCSI Disk Drive
hdisk19 U8204.E8A.10F2441-V2-C3-T1-L8f0000000000 Virtual SCSI Disk Drive
hdisk18 U8204.E8A.10F2441-V2-C3-T1-Laf0000000000 Virtual SCSI Disk Drive
hdisk17 U8204.E8A.10F2441-V2-C3-T1-L9f0000000000 Virtual SCSI Disk Drive
hdisk16 U8204.E8A.10F2441-V2-C3-T1-Lbf0000000000 Virtual SCSI Disk Drive
hdisk15 U8204.E8A.10F2441-V2-C3-T1-L902000000000 Virtual SCSI Disk Drive
hdisk14 U8204.E8A.10F2441-V2-C3-T1-L882000000000 Virtual SCSI Disk Drive
hdisk13 U8204.E8A.10F2441-V2-C3-T1-L842000000000 Virtual SCSI Disk Drive
hdisk12 U8204.E8A.10F2441-V2-C3-T1-L8c2000000000 Virtual SCSI Disk Drive
hdisk11 U8204.E8A.10F2441-V2-C3-T1-L822000000000 Virtual SCSI Disk Drive
hdisk10 U8204.E8A.10F2441-V2-C3-T1-L8a2000000000 Virtual SCSI Disk Drive
hdisk9 U8204.E8A.10F2441-V2-C3-T1-L862000000000 Virtual SCSI Disk Drive
hdisk8 U8204.E8A.10F2441-V2-C3-T1-L8e2000000000 Virtual SCSI Disk Drive
hdisk7 U8204.E8A.10F2441-V2-C3-T1-L812000000000 Virtual SCSI Disk Drive
hdisk6 U8204.E8A.10F2441-V2-C3-T1-L892000000000 Virtual SCSI Disk Drive
hdisk5 U8204.E8A.10F2441-V2-C3-T1-L852000000000 Virtual SCSI Disk Drive
hdisk4 U8204.E8A.10F2441-V2-C3-T1-L8d2000000000 Virtual SCSI Disk Drive
hdisk3 U8204.E8A.10F2441-V2-C3-T1-L832000000000 Virtual SCSI Disk Drive
hdisk2 U8204.E8A.10F2441-V2-C3-T1-L8b2000000000 Virtual SCSI Disk Drive
hdisk1 U8204.E8A.10F2441-V2-C3-T1-L872000000000 Virtual SCSI Disk Drive
hdisk0 U8204.E8A.10F2441-V2-C3-T1-L8f2000000000 Virtual SCSI Disk Drive
vsa0 U8204.E8A.10F2441-V2-C0 LPAR Virtual Serial Adapter
Hardware Location Code......U8204.E8A.10F2441-V2-C0
vty0 U8204.E8A.10F2441-V2-C0-L0 Asynchronous Terminal
lhea0 U78A0.001.DNWGG63-P1 Logical Host Ethernet Adapter (l-hea)
Hardware Location Code......U78A0.001.DNWGG63-P1
ent3 U78A0.001.DNWGG63-P1-C6-T4 Logical Host Ethernet Port (lp-hea)
IBM Host Ethernet Adapter:
Network Address.............00145E5257B6
ent2 U78A0.001.DNWGG63-P1-C6-T3 Logical Host Ethernet Port (lp-hea)
IBM Host Ethernet Adapter:
Network Address.............00145E5257B0
ent1 U78A0.001.DNWGG63-P1-C6-T2 Logical Host Ethernet Port (lp-hea)
IBM Host Ethernet Adapter:
Network Address.............00145E5257A4
ent4 U78A0.001.DNWGG63-P1-C6-T1 Logical Host Ethernet Port (lp-hea)
IBM Host Ethernet Adapter:
Network Address.............00145E5257A0
L2cache0 L2 Cache
mem0 Memory
proc0 Processor
proc2 Processor
proc4 Processor
proc6 Processor
proc8 Processor
proc10 Processor
proc12 Processor
proc14 Processor
PLATFORM SPECIFIC
Name: IBM,8204-E8A
Model: IBM,8204-E8A
Node: /
Device Type: chrp
System VPD:
Record Name.................VSYS
Flag Field..................XXSV
Brand.......................P0
Hardware Location Code......U8204.E8A.10F2441
Machine/Cabinet Serial No...10F2441
Machine Type and Model......8204-E8A
System Unique ID (SUID).....0004AC11184D
World Wide Port Name........C0507600478B
Version.....................ipzSeries
Physical Location: U8204.E8A.10F2441
CEC:
Record Name.................VCEN
Flag Field..................XXEV
Brand.......................P0
Hardware Location Code......U78A0.001.DNWGG63
Machine/Cabinet Serial No...DNWGG63
Machine Type and Model......78A0-001
Controlling CEC ID..........8204-E8A 10F2441
Rack Serial Number..........0000000000000000
Feature Code/Marketing ID...78A0-001
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63
SYSTEM BACKPLANE:
Record Name.................VINI
Flag Field..................XXBP
Hardware Location Code......U78A0.001.DNWGG63-P1
Customer Card ID Number.....2B3E
Serial Number...............YL11HA1A4055
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................46K7761
Part Number.................46K7762
Power.......................2A00000000000000
Product Specific.(HE).......0001
Product Specific.(CT).......40F30023
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1
QUAD ETHERNET :
Record Name.................VINI
Flag Field..................XXET
Hardware Location Code......U78A0.001.DNWGG63-P1-C6
Customer Card ID Number.....1819
Serial Number...............YL13W8024073
CCIN Extender...............1
Product Specific.(VZ).......04
FRU Number..................10N9622
Part Number.................10N9623
Product Specific.(HE).......0001
Product Specific.(CT).......40910006
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B1).......00145E5257A00020
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C6
ANCHOR :
Record Name.................VINI
Flag Field..................XXAV
Hardware Location Code......U78A0.001.DNWGG63-P1-C9
Customer Card ID Number.....52AE
Serial Number...............YL1107001931
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................10N8696
Part Number.................10N8696
Power.......................8100300000000000
Product Specific.(HE).......0010
Product Specific.(CT).......40B40000
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B9).......435381070919039C24735350CBDBADF05D78B4D94D
31D75F243746BAAA124D327F984046D87765484D33
4AB8080FC8C217424D34875473BF73431DE2
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C9
THERMAL PWR MGMT:
Record Name.................VINI
Flag Field..................XXTP
Hardware Location Code......U78A0.001.DNWGG63-P1-C12
Customer Card ID Number.....2A0E
Serial Number...............YL11W802100E
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................10N9588
Part Number.................10N9588
Product Specific.(HE).......0001
Product Specific.(CT).......40B60003
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C12
2 WAY PROC CUOD :
Record Name.................VINI
Flag Field..................XXPF
Hardware Location Code......U78A0.001.DNWGG63-P1-C13
Customer Card ID Number.....53E1
Serial Number...............YL1008000275
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................10N9725
Part Number.................46K6593
Power.......................3300200100028000
Product Specific.(HE).......0001
Product Specific.(CT).......40170102
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B9).......43538023181005E8189C53506DC6973B55E479314D
31FFC8397DB30B47B24D328594AC072AA64F0A4D33
DAB4D353E0EB55804D34F24CEC72D057C0C6
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C13
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C13-C2
Customer Card ID Number.....31A6
Serial Number...............YLD004008055
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C13-C2
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C13-C4
Customer Card ID Number.....31A6
Serial Number...............YLD006005108
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C13-C4
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C13-C7
Customer Card ID Number.....31A6
Serial Number...............YLD005007022
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C13-C7
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C13-C9
Customer Card ID Number.....31A6
Serial Number...............YLD007007029
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C13-C9
2 WAY PROC CUOD :
Record Name.................VINI
Flag Field..................XXPF
Hardware Location Code......U78A0.001.DNWGG63-P1-C14
Customer Card ID Number.....53E1
Serial Number...............YL1008000193
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................10N9725
Part Number.................46K6593
Power.......................3300200100028000
Product Specific.(HE).......0001
Product Specific.(CT).......40170102
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B9).......4353801819353977372F53508CAC6FB727BEE6AD4D
3149DB3D4BBF5A99BC4D32A97DCC22655418824D33
DF725A07596CB2F44D345022B75E070529CF
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C14
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C14-C2
Customer Card ID Number.....31A6
Serial Number...............YLD00C003047
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C14-C2
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C14-C4
Customer Card ID Number.....31A6
Serial Number...............YLD00E004076
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C14-C4
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C14-C7
Customer Card ID Number.....31A6
Serial Number...............YLD00D004004
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C14-C7
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C14-C9
Customer Card ID Number.....31A6
Serial Number...............YLD00F003003
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C14-C9
2 WAY PROC CUOD :
Record Name.................VINI
Flag Field..................XXPF
Hardware Location Code......U78A0.001.DNWGG63-P1-C15
Customer Card ID Number.....53E1
Serial Number...............YL1008000495
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................10N9725
Part Number.................46K6593
Power.......................3300200100028000
Product Specific.(HE).......0001
Product Specific.(CT).......40170102
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B9).......435380371831020A7F2D535095FD5C56AD37C1794D
31DBE640CB6D01B9C44D329B18533F7550B99C4D33
74D20AC6FFAD2F454D343A3569C146732595
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C15
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C15-C2
Customer Card ID Number.....31A6
Serial Number...............YLD014006065
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C15-C2
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C15-C4
Customer Card ID Number.....31A6
Serial Number...............YLD016003032
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C15-C4
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C15-C7
Customer Card ID Number.....31A6
Serial Number...............YLD015003094
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C15-C7
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C15-C9
Customer Card ID Number.....31A6
Serial Number...............YLD017002093
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C15-C9
2 WAY PROC CUOD :
Record Name.................VINI
Flag Field..................XXPF
Hardware Location Code......U78A0.001.DNWGG63-P1-C16
Customer Card ID Number.....53E1
Serial Number...............YL1008000250
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................10N9725
Part Number.................46K6593
Power.......................3300200100028000
Product Specific.(HE).......0001
Product Specific.(CT).......40170102
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B9).......4353802308231509285253504BB5FB0780FA968C4D
315F6AEB59A857C99A4D327407C9A2FCBD9DAF4D33
5D14E010B135574B4D34F7E2766C968DCA89
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C16
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C16-C2
Customer Card ID Number.....31A6
Serial Number...............YLD01C001037
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C16-C2
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C16-C4
Customer Card ID Number.....31A6
Serial Number...............YLD01E007034
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C16-C4
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C16-C7
Customer Card ID Number.....31A6
Serial Number...............YLD01D005064
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C16-C7
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C16-C9
Customer Card ID Number.....31A6
Serial Number...............YLD01F008052
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C16-C9
CEC OP PANEL :
Record Name.................VINI
Flag Field..................XXOP
Hardware Location Code......U78A0.001.DNWGG63-D1
Customer Card ID Number.....296C
Serial Number...............YL12W02920J2
CCIN Extender...............1
Product Specific.(VZ).......02
FRU Number..................44V4749
Part Number.................44V4747
Product Specific.(HE).......0001
Product Specific.(CT).......40B50000
Product Specific.(HW).......0001
Product Specific.(B3).......000000000000
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-D1
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C13-C10
Customer Card ID Number.....2A29
FRU Number.................. 42R6492
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C13-C10
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C13-C5
Customer Card ID Number.....2A2C
FRU Number.................. 42R6498
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C13-C5
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C14-C10
Customer Card ID Number.....2A29
FRU Number.................. 42R6492
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C14-C10
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C14-C5
Customer Card ID Number.....2A2C
FRU Number.................. 42R6498
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C14-C5
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C15-C10
Customer Card ID Number.....2A29
FRU Number.................. 42R6492
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C15-C10
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C15-C5
Customer Card ID Number.....2A2C
FRU Number.................. 42R6498
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C15-C5
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C16-C10
Customer Card ID Number.....2A29
FRU Number.................. 42R6492
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C16-C10
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C16-C5
Customer Card ID Number.....2A2C
FRU Number.................. 42R6498
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C16-C5
A IBM AC PS :
Record Name.................VINI
Flag Field..................XXPS
Hardware Location Code......U78A0.001.DNWGG63-E1
Customer Card ID Number.....51C3
Serial Number...............YL10HA833026
Part Number.................44V4951
FRU Number.................. 44V4951
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-E1
A IBM AC PS :
Record Name.................VINI
Flag Field..................XXPS
Hardware Location Code......U78A0.001.DNWGG63-E2
Customer Card ID Number.....51C3
Serial Number...............YL10HA81T119
Part Number.................44V4951
FRU Number.................. 44V4951
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-E2
IBM Air Mover :
Record Name.................VINI
Flag Field..................XXAM
Hardware Location Code......U78A0.001.DNWGG63-A1
Customer Card ID Number.....27B8
FRU Number.................. 42R7657
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-A1
IBM Air Mover :
Record Name.................VINI
Flag Field..................XXAM
Hardware Location Code......U78A0.001.DNWGG63-A2
Customer Card ID Number.....27B8
FRU Number.................. 42R7657
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-A2
IBM Air Mover :
Record Name.................VINI
Flag Field..................XXAM
Hardware Location Code......U78A0.001.DNWGG63-A3
Customer Card ID Number.....27B8
FRU Number.................. 42R7657
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-A3
IBM Air Mover :
Record Name.................VINI
Flag Field..................XXAM
Hardware Location Code......U78A0.001.DNWGG63-A4
Customer Card ID Number.....27B8
FRU Number.................. 42R7657
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-A4
PSBPD6E4 3GSAS:
Record Name.................VINI
Flag Field..................XXDB
Hardware Location Code......U78A0.001.DNWGG63-P2
Customer Card ID Number.....2875
Serial Number...............YL1AW9105006
FRU Number.................. 46K6634
Part Number.................44V5561
CCIN Extender...............1
Product Specific.(VZ).......06
Product Specific.(B2).......5005076C0DA5D9000000000000000000
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P2
System Firmware:
Code Level, LID Keyword.....Phyp_1 10092012061180A00701
Code Level, LID Keyword.....PFW 10232012071881CF0681
Code Level, LID Keyword.....FSP_Ker 12452012071881E00100
Code Level, LID Keyword.....FSP_Fil 12502012071881E00101
Code Level, LID Keyword.....FipS_BU 12502012071881E00200
Code Level, LID Keyword.....SPCN3 124620060531A0E00A11
Code Level, LID Keyword.....SPCN1 183020070213A0E00D00
Code Level, LID Keyword.....SPCN2 183420070213A0E00D20
Microcode Image.............EL350_132 EL350_132 EL350_132
Hardware Location Code......U8204.E8A.10F2441-Y1
Physical Location: U8204.E8A.10F2441-Y1
Name: openprom
Model: IBM,EL350_132
Node: openprom
Name: interrupt-controller
Model: IBM, Logical PowerPC-PIC, 00
Node: interrupt-controller@0
Device Type: PowerPC-External-Interrupt-Presentation
Name: lhea
Node: lhea@23c00200
Physical Location: U78A0.001.DNWGG63-P1
Name: vty
Node: vty@30000000
Device Type: serial
Physical Location: U8204.E8A.10F2441-V2-C0
Name: l-lan
Node: l-lan@30000002
Device Type: network
Physical Location: U8204.E8A.10F2441-V2-C2-T1
Name: v-scsi
Node: v-scsi@30000003
Device Type: vscsi
Physical Location: U8204.E8A.10F2441-V2-C3-T1
Name: v-scsi
Node: v-scsi@30000004
Device Type: vscsi
Physical Location: U8204.E8A.10F2441-V2-C4-T1
Name: ethernet
Node: ethernet@23e00000
Device Type: network
Physical Location: U78A0.001.DNWGG63-P1-C6-T1
Name: ethernet
Node: ethernet@23e00400
Device Type: network
Physical Location: U78A0.001.DNWGG63-P1-C6-T2
Name: ethernet
Node: ethernet@23e01000
Device Type: network
Physical Location: U78A0.001.DNWGG63-P1-C6-T3
Name: ethernet
Node: ethernet@23e01600
Device Type: network
Physical Location: U78A0.001.DNWGG63-P1-C6-T4
oracle@hostexample:/home/oracle/enkitec/awrscripts:icpr1
}}}
{{{
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
$ lparstat -i
Node Name : hostexample
Partition Name : hostexample
Partition Number : 6
Type : Shared-SMT
Mode : Uncapped
Entitled Capacity : 2.30
Partition Group-ID : 32774
Shared Pool ID : 0
Online Virtual CPUs : 8
Maximum Virtual CPUs : 16
Minimum Virtual CPUs : 1
Online Memory : 16384 MB
Maximum Memory : 32768 MB
Minimum Memory : 512 MB
Variable Capacity Weight : 128
Minimum Capacity : 0.10
Maximum Capacity : 8.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 8
Active Physical CPUs in system : 8
Active CPUs in Pool : 8
Shared Physical CPUs in system : 8
Maximum Capacity of Pool : 800
Entitled Capacity of Pool : 740
Unallocated Capacity : 0.00
Physical CPU Percentage : 28.75%
Unallocated Weight : 0
Memory Mode : Dedicated
Total I/O Memory Entitlement : -
Variable Memory Capacity Weight : -
Memory Pool ID : -
Physical Memory in the Pool : -
Hypervisor Page Size : -
Unallocated Variable Memory Capacity Weight: -
Unallocated I/O Memory entitlement : -
Memory Group ID of LPAR : -
Desired Virtual CPUs : 8
Desired Memory : 16384 MB
Desired Variable Capacity Weight : 128
Desired Capacity : 2.30
Target Memory Expansion Factor : -
Target Memory Expansion Size : -
Power Saving Mode : Disabled
oracle@hostexample:/home/oracle:HRPRD911
$ lparstat
System configuration: type=Shared mode=Uncapped smt=On lcpu=16 mem=16384MB psize=8 ent=2.30
%user %sys %wait %idle physc %entc lbusy vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
21.4 5.4 0.5 72.6 0.64 27.9 4.3 4081457850 484752660
oracle@hostexample:/home/oracle:HRPRD911
$ lscfg | grep proc
+ proc0 Processor
+ proc2 Processor
+ proc4 Processor
+ proc6 Processor
+ proc8 Processor
+ proc10 Processor
+ proc12 Processor
+ proc14 Processor
oracle@hostexample:/home/oracle:HRPRD911
$ lsattr -El proc0
frequency 4204000000 Processor Speed False
smt_enabled true Processor SMT enabled False
smt_threads 2 Processor SMT threads False
state enable Processor state False
type PowerPC_POWER6 Processor type False
oracle@hostexample:/home/oracle:HRPRD911
$ uname -M
IBM,8204-E8A
oracle@hostexample:/home/oracle:HRPRD911
$ lsattr -El sys0 -a realmem
realmem 16777216 Amount of usable physical memory in Kbytes False
oracle@hostexample:/home/oracle:HRPRD911
$ lsdev -Cc processor
proc0 Available 00-00 Processor
proc2 Available 00-02 Processor
proc4 Available 00-04 Processor
proc6 Available 00-06 Processor
proc8 Available 00-08 Processor
proc10 Available 00-10 Processor
proc12 Available 00-12 Processor
proc14 Available 00-14 Processor
oracle@hostexample:/home/oracle:HRPRD911
$ lscfg -vp |grep -ip proc |grep "PROC"
2 WAY PROC CUOD :
2 WAY PROC CUOD :
2 WAY PROC CUOD :
2 WAY PROC CUOD :
oracle@hostexample:/home/oracle:HRPRD911
$ odmget -q"PdDvLn LIKE processor/*" CuDv
CuDv:
name = "proc0"
status = 1
chgstatus = 2
ddins = ""
location = "00-00"
parent = "sysplanar0"
connwhere = "P0"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc2"
status = 1
chgstatus = 2
ddins = ""
location = "00-02"
parent = "sysplanar0"
connwhere = "P2"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc4"
status = 1
chgstatus = 2
ddins = ""
location = "00-04"
parent = "sysplanar0"
connwhere = "P4"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc6"
status = 1
chgstatus = 2
ddins = ""
location = "00-06"
parent = "sysplanar0"
connwhere = "P6"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc8"
status = 1
chgstatus = 2
ddins = ""
location = "00-08"
parent = "sysplanar0"
connwhere = "P8"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc10"
status = 1
chgstatus = 2
ddins = ""
location = "00-10"
parent = "sysplanar0"
connwhere = "P10"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc12"
status = 1
chgstatus = 2
ddins = ""
location = "00-12"
parent = "sysplanar0"
connwhere = "P12"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc14"
status = 1
chgstatus = 2
ddins = ""
location = "00-14"
parent = "sysplanar0"
connwhere = "P14"
PdDvLn = "processor/sys/proc_rspc"
oracle@hostexample:/home/oracle:HRPRD911
$ odmget -q"PdDvLn LIKE processor/* AND name=proc0" CuDv
CuDv:
name = "proc0"
status = 1
chgstatus = 2
ddins = ""
location = "00-00"
parent = "sysplanar0"
connwhere = "P0"
PdDvLn = "processor/sys/proc_rspc"
oracle@hostexample:/home/oracle:HRPRD911
$ odmget -q"PdDvLn LIKE processor/* AND name=proc14" CuDv
CuDv:
name = "proc14"
status = 1
chgstatus = 2
ddins = ""
location = "00-14"
parent = "sysplanar0"
connwhere = "P14"
PdDvLn = "processor/sys/proc_rspc"
oracle@hostexample:/home/oracle:HRPRD911
$ lsattr -El sys0 -a modelname
modelname IBM,8204-E8A Machine name False
oracle@hostexample:/home/oracle:HRPRD911
$ lparstat -i|grep ^Active\ Phys
Active Physical CPUs in system : 8
oracle@hostexample:/home/oracle:HRPRD911
$ lscfg -vp|grep WAY
2 WAY PROC CUOD :
2 WAY PROC CUOD :
2 WAY PROC CUOD :
2 WAY PROC CUOD :
oracle@hostexample:/home/oracle:HRPRD911
$ lscfg -vp|grep proc
proc0 Processor
proc2 Processor
proc4 Processor
proc6 Processor
proc8 Processor
proc10 Processor
proc12 Processor
proc14 Processor
oracle@hostexample:/home/oracle:HRPRD911
$ lscfg -vp
INSTALLED RESOURCE LIST WITH VPD
The following resources are installed on your machine.
Model Architecture: chrp
Model Implementation: Multiple Processor, PCI bus
sys0 System Object
sysplanar0 System Planar
lhea0 U78A0.001.DNWGG63-P1 Logical Host Ethernet Adapter (l-hea)
Hardware Location Code......U78A0.001.DNWGG63-P1
ent1 U78A0.001.DNWGG63-P1-C6-T3 Logical Host Ethernet Port (lp-hea)
IBM Host Ethernet Adapter:
Network Address.............00145E5257B4
ent0 U78A0.001.DNWGG63-P1-C6-T1 Logical Host Ethernet Port (lp-hea)
IBM Host Ethernet Adapter:
Network Address.............00145E5257A2
vio0 Virtual I/O Bus
vscsi3 U8204.E8A.10F2441-V6-C2-T1 Virtual SCSI Client Adapter
Hardware Location Code......U8204.E8A.10F2441-V6-C2-T1
vscsi2 U8204.E8A.10F2441-V6-C3-T1 Virtual SCSI Client Adapter
Hardware Location Code......U8204.E8A.10F2441-V6-C3-T1
hdisk23 U8204.E8A.10F2441-V6-C3-T1-L9600000000000000 Virtual SCSI Disk Drive
hdisk22 U8204.E8A.10F2441-V6-C3-T1-L8900000000000000 Virtual SCSI Disk Drive
hdisk21 U8204.E8A.10F2441-V6-C3-T1-L8800000000000000 Virtual SCSI Disk Drive
hdisk20 U8204.E8A.10F2441-V6-C3-T1-L9400000000000000 Virtual SCSI Disk Drive
hdisk19 U8204.E8A.10F2441-V6-C3-T1-L9300000000000000 Virtual SCSI Disk Drive
hdisk18 U8204.E8A.10F2441-V6-C3-T1-L9200000000000000 Virtual SCSI Disk Drive
hdisk17 U8204.E8A.10F2441-V6-C3-T1-L9100000000000000 Virtual SCSI Disk Drive
hdisk16 U8204.E8A.10F2441-V6-C3-T1-L9000000000000000 Virtual SCSI Disk Drive
hdisk15 U8204.E8A.10F2441-V6-C3-T1-L8f00000000000000 Virtual SCSI Disk Drive
hdisk14 U8204.E8A.10F2441-V6-C3-T1-L8e00000000000000 Virtual SCSI Disk Drive
hdisk13 U8204.E8A.10F2441-V6-C3-T1-L8d00000000000000 Virtual SCSI Disk Drive
hdisk12 U8204.E8A.10F2441-V6-C3-T1-L8c00000000000000 Virtual SCSI Disk Drive
hdisk11 U8204.E8A.10F2441-V6-C3-T1-L8b00000000000000 Virtual SCSI Disk Drive
hdisk10 U8204.E8A.10F2441-V6-C3-T1-L9500000000000000 Virtual SCSI Disk Drive
hdisk9 U8204.E8A.10F2441-V6-C3-T1-L8a00000000000000 Virtual SCSI Disk Drive
hdisk6 U8204.E8A.10F2441-V6-C3-T1-L8700000000000000 Virtual SCSI Disk Drive
hdisk5 U8204.E8A.10F2441-V6-C3-T1-L8600000000000000 Virtual SCSI Disk Drive
hdisk4 U8204.E8A.10F2441-V6-C3-T1-L8500000000000000 Virtual SCSI Disk Drive
hdisk3 U8204.E8A.10F2441-V6-C3-T1-L8400000000000000 Virtual SCSI Disk Drive
hdisk2 U8204.E8A.10F2441-V6-C3-T1-L8300000000000000 Virtual SCSI Disk Drive
hdisk1 U8204.E8A.10F2441-V6-C3-T1-L8200000000000000 Virtual SCSI Disk Drive
hdisk0 U8204.E8A.10F2441-V6-C3-T1-L8100000000000000 Virtual SCSI Disk Drive
vsa0 U8204.E8A.10F2441-V6-C0 LPAR Virtual Serial Adapter
Hardware Location Code......U8204.E8A.10F2441-V6-C0
vty0 U8204.E8A.10F2441-V6-C0-L0 Asynchronous Terminal
L2cache0 L2 Cache
mem0 Memory
proc0 Processor
proc2 Processor
proc4 Processor
proc6 Processor
proc8 Processor
proc10 Processor
proc12 Processor
proc14 Processor
PLATFORM SPECIFIC
Name: IBM,8204-E8A
Model: IBM,8204-E8A
Node: /
Device Type: chrp
System VPD:
Record Name.................VSYS
Flag Field..................XXSV
Brand.......................P0
Hardware Location Code......U8204.E8A.10F2441
Machine/Cabinet Serial No...10F2441
Machine Type and Model......8204-E8A
System Unique ID (SUID).....0004AC11184D
World Wide Port Name........C0507600478B
Version.....................ipzSeries
Physical Location: U8204.E8A.10F2441
CEC:
Record Name.................VCEN
Flag Field..................XXEV
Brand.......................P0
Hardware Location Code......U78A0.001.DNWGG63
Machine/Cabinet Serial No...DNWGG63
Machine Type and Model......78A0-001
Controlling CEC ID..........8204-E8A 10F2441
Rack Serial Number..........0000000000000000
Feature Code/Marketing ID...78A0-001
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63
SYSTEM BACKPLANE:
Record Name.................VINI
Flag Field..................XXBP
Hardware Location Code......U78A0.001.DNWGG63-P1
Customer Card ID Number.....2B3E
Serial Number...............YL11HA1A4055
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................46K7761
Part Number.................46K7762
Power.......................2A00000000000000
Product Specific.(HE).......0001
Product Specific.(CT).......40F30023
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1
QUAD ETHERNET :
Record Name.................VINI
Flag Field..................XXET
Hardware Location Code......U78A0.001.DNWGG63-P1-C6
Customer Card ID Number.....1819
Serial Number...............YL13W8024073
CCIN Extender...............1
Product Specific.(VZ).......04
FRU Number..................10N9622
Part Number.................10N9623
Product Specific.(HE).......0001
Product Specific.(CT).......40910006
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B1).......00145E5257A00020
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C6
ANCHOR :
Record Name.................VINI
Flag Field..................XXAV
Hardware Location Code......U78A0.001.DNWGG63-P1-C9
Customer Card ID Number.....52AE
Serial Number...............YL1107001931
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................10N8696
Part Number.................10N8696
Power.......................8100300000000000
Product Specific.(HE).......0010
Product Specific.(CT).......40B40000
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B9).......435381070919039C24735350CBDBADF05D78B4D94D
31D75F243746BAAA124D327F984046D87765484D33
4AB8080FC8C217424D34875473BF73431DE2
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C9
THERMAL PWR MGMT:
Record Name.................VINI
Flag Field..................XXTP
Hardware Location Code......U78A0.001.DNWGG63-P1-C12
Customer Card ID Number.....2A0E
Serial Number...............YL11W802100E
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................10N9588
Part Number.................10N9588
Product Specific.(HE).......0001
Product Specific.(CT).......40B60003
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C12
2 WAY PROC CUOD :
Record Name.................VINI
Flag Field..................XXPF
Hardware Location Code......U78A0.001.DNWGG63-P1-C13
Customer Card ID Number.....53E1
Serial Number...............YL1008000275
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................10N9725
Part Number.................46K6593
Power.......................3300200100028000
Product Specific.(HE).......0001
Product Specific.(CT).......40170102
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B9).......43538023181005E8189C53506DC6973B55E479314D
31FFC8397DB30B47B24D328594AC072AA64F0A4D33
DAB4D353E0EB55804D34F24CEC72D057C0C6
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C13
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C13-C2
Customer Card ID Number.....31A6
Serial Number...............YLD004008055
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C13-C2
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C13-C4
Customer Card ID Number.....31A6
Serial Number...............YLD006005108
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C13-C4
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C13-C7
Customer Card ID Number.....31A6
Serial Number...............YLD005007022
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C13-C7
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C13-C9
Customer Card ID Number.....31A6
Serial Number...............YLD007007029
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C13-C9
2 WAY PROC CUOD :
Record Name.................VINI
Flag Field..................XXPF
Hardware Location Code......U78A0.001.DNWGG63-P1-C14
Customer Card ID Number.....53E1
Serial Number...............YL1008000193
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................10N9725
Part Number.................46K6593
Power.......................3300200100028000
Product Specific.(HE).......0001
Product Specific.(CT).......40170102
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B9).......4353801819353977372F53508CAC6FB727BEE6AD4D
3149DB3D4BBF5A99BC4D32A97DCC22655418824D33
DF725A07596CB2F44D345022B75E070529CF
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C14
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C14-C2
Customer Card ID Number.....31A6
Serial Number...............YLD00C003047
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C14-C2
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C14-C4
Customer Card ID Number.....31A6
Serial Number...............YLD00E004076
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C14-C4
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C14-C7
Customer Card ID Number.....31A6
Serial Number...............YLD00D004004
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C14-C7
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C14-C9
Customer Card ID Number.....31A6
Serial Number...............YLD00F003003
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C14-C9
2 WAY PROC CUOD :
Record Name.................VINI
Flag Field..................XXPF
Hardware Location Code......U78A0.001.DNWGG63-P1-C15
Customer Card ID Number.....53E1
Serial Number...............YL1008000495
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................10N9725
Part Number.................46K6593
Power.......................3300200100028000
Product Specific.(HE).......0001
Product Specific.(CT).......40170102
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B9).......435380371831020A7F2D535095FD5C56AD37C1794D
31DBE640CB6D01B9C44D329B18533F7550B99C4D33
74D20AC6FFAD2F454D343A3569C146732595
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C15
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C15-C2
Customer Card ID Number.....31A6
Serial Number...............YLD014006065
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C15-C2
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C15-C4
Customer Card ID Number.....31A6
Serial Number...............YLD016003032
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C15-C4
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C15-C7
Customer Card ID Number.....31A6
Serial Number...............YLD015003094
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C15-C7
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C15-C9
Customer Card ID Number.....31A6
Serial Number...............YLD017002093
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C15-C9
2 WAY PROC CUOD :
Record Name.................VINI
Flag Field..................XXPF
Hardware Location Code......U78A0.001.DNWGG63-P1-C16
Customer Card ID Number.....53E1
Serial Number...............YL1008000250
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................10N9725
Part Number.................46K6593
Power.......................3300200100028000
Product Specific.(HE).......0001
Product Specific.(CT).......40170102
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B9).......4353802308231509285253504BB5FB0780FA968C4D
315F6AEB59A857C99A4D327407C9A2FCBD9DAF4D33
5D14E010B135574B4D34F7E2766C968DCA89
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C16
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C16-C2
Customer Card ID Number.....31A6
Serial Number...............YLD01C001037
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C16-C2
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C16-C4
Customer Card ID Number.....31A6
Serial Number...............YLD01E007034
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C16-C4
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C16-C7
Customer Card ID Number.....31A6
Serial Number...............YLD01D005064
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C16-C7
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78A0.001.DNWGG63-P1-C16-C9
Customer Card ID Number.....31A6
Serial Number...............YLD01F008052
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................77P6500
Part Number.................77P6500
Power.......................4400000000000000
Size........................4096
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-P1-C16-C9
CEC OP PANEL :
Record Name.................VINI
Flag Field..................XXOP
Hardware Location Code......U78A0.001.DNWGG63-D1
Customer Card ID Number.....296C
Serial Number...............YL12W02920J2
CCIN Extender...............1
Product Specific.(VZ).......02
FRU Number..................44V4749
Part Number.................44V4747
Product Specific.(HE).......0001
Product Specific.(CT).......40B50000
Product Specific.(HW).......0001
Product Specific.(B3).......000000000000
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78A0.001.DNWGG63-D1
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C13-C10
Customer Card ID Number.....2A29
FRU Number.................. 42R6492
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C13-C10
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C13-C5
Customer Card ID Number.....2A2C
FRU Number.................. 42R6498
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C13-C5
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C14-C10
Customer Card ID Number.....2A29
FRU Number.................. 42R6492
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C14-C10
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C14-C5
Customer Card ID Number.....2A2C
FRU Number.................. 42R6498
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C14-C5
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C15-C10
Customer Card ID Number.....2A29
FRU Number.................. 42R6492
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C15-C10
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C15-C5
Customer Card ID Number.....2A2C
FRU Number.................. 42R6498
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C15-C5
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C16-C10
Customer Card ID Number.....2A29
FRU Number.................. 42R6492
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C16-C10
Voltage Reg :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78A0.001.DNWGG63-P1-C16-C5
Customer Card ID Number.....2A2C
FRU Number.................. 42R6498
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P1-C16-C5
A IBM AC PS :
Record Name.................VINI
Flag Field..................XXPS
Hardware Location Code......U78A0.001.DNWGG63-E1
Customer Card ID Number.....51C3
Serial Number...............YL10HA833026
Part Number.................44V4951
FRU Number.................. 44V4951
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-E1
A IBM AC PS :
Record Name.................VINI
Flag Field..................XXPS
Hardware Location Code......U78A0.001.DNWGG63-E2
Customer Card ID Number.....51C3
Serial Number...............YL10HA81T119
Part Number.................44V4951
FRU Number.................. 44V4951
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-E2
IBM Air Mover :
Record Name.................VINI
Flag Field..................XXAM
Hardware Location Code......U78A0.001.DNWGG63-A1
Customer Card ID Number.....27B8
FRU Number.................. 42R7657
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-A1
IBM Air Mover :
Record Name.................VINI
Flag Field..................XXAM
Hardware Location Code......U78A0.001.DNWGG63-A2
Customer Card ID Number.....27B8
FRU Number.................. 42R7657
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-A2
IBM Air Mover :
Record Name.................VINI
Flag Field..................XXAM
Hardware Location Code......U78A0.001.DNWGG63-A3
Customer Card ID Number.....27B8
FRU Number.................. 42R7657
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-A3
IBM Air Mover :
Record Name.................VINI
Flag Field..................XXAM
Hardware Location Code......U78A0.001.DNWGG63-A4
Customer Card ID Number.....27B8
FRU Number.................. 42R7657
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-A4
PSBPD6E4 3GSAS:
Record Name.................VINI
Flag Field..................XXDB
Hardware Location Code......U78A0.001.DNWGG63-P2
Customer Card ID Number.....2875
Serial Number...............YL1AW9105006
FRU Number.................. 46K6634
Part Number.................44V5561
CCIN Extender...............1
Product Specific.(VZ).......06
Product Specific.(B2).......5005076C0DA5D9000000000000000000
Version.....................RS6K
Physical Location: U78A0.001.DNWGG63-P2
System Firmware:
Code Level, LID Keyword.....Phyp_1 10092012061180A00701
Code Level, LID Keyword.....PFW 10232012071881CF0681
Code Level, LID Keyword.....FSP_Ker 12452012071881E00100
Code Level, LID Keyword.....FSP_Fil 12502012071881E00101
Code Level, LID Keyword.....FipS_BU 12502012071881E00200
Code Level, LID Keyword.....SPCN3 124620060531A0E00A11
Code Level, LID Keyword.....SPCN1 183020070213A0E00D00
Code Level, LID Keyword.....SPCN2 183420070213A0E00D20
Microcode Image.............EL350_132 EL350_132 EL350_132
Hardware Location Code......U8204.E8A.10F2441-Y1
Physical Location: U8204.E8A.10F2441-Y1
Name: openprom
Model: IBM,EL350_132
Node: openprom
Name: interrupt-controller
Model: IBM, Logical PowerPC-PIC, 00
Node: interrupt-controller@0
Device Type: PowerPC-External-Interrupt-Presentation
Name: lhea
Node: lhea@23c00600
Physical Location: U78A0.001.DNWGG63-P1
Name: vty
Node: vty@30000000
Device Type: serial
Physical Location: U8204.E8A.10F2441-V6-C0
Name: v-scsi
Node: v-scsi@30000002
Device Type: vscsi
Physical Location: U8204.E8A.10F2441-V6-C2-T1
Name: v-scsi
Node: v-scsi@30000003
Device Type: vscsi
Physical Location: U8204.E8A.10F2441-V6-C3-T1
Name: ethernet
Node: ethernet@23e00200
Device Type: network
Physical Location: U78A0.001.DNWGG63-P1-C6-T1
Name: ethernet
Node: ethernet@23e01400
Device Type: network
Physical Location: U78A0.001.DNWGG63-P1-C6-T3
}}}
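The next block reconciles Oracle's cpu_count with the LPAR configuration on an IBM 8205-E6C (POWER7): the database sees 8 CPUs, but lparstat shows those are 8 SMT-4 logical CPUs on 2 virtual processors with a capped entitlement of only 1.50 of the 16 physical cores. The same walk-through can be scripted; here is a minimal ksh sketch that uses only the commands demonstrated in the session below (the summary labels and layout are my own):
{{{
#!/bin/ksh
# Minimal LPAR CPU/memory summary for sizing -- a sketch built from the
# commands shown in the session that follows; output labels are mine.
echo "Machine model : $(uname -M)"
lparstat | grep '^System configuration'    # type/mode/smt/lcpu/ent in one line
lparstat -i | egrep 'Online Virtual CPUs|Active Physical CPUs'
lsattr -El proc0 -a frequency -a smt_threads -a type   # core speed and SMT mode
lsattr -El sys0 -a realmem                 # usable physical memory in KB
echo "Populated processor cards:"
lscfg -vp | grep WAY                       # e.g. "8-WAY PROC CUOD:" per card
}}}
Grepping the stock lparstat/lsattr output (rather than parsing fixed columns) is deliberate: the field labels are stable even when the spacing differs between AIX levels.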
{{{
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 8
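# note: cpu_count picks up the 8 logical CPUs of this LPAR (2 virtual CPUs x SMT-4, per lparstat -i below)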
$ lparstat -i
Node Name : hostexample
Partition Name : hostexample
Partition Number : 5
Type : Shared-SMT-4
Mode : Capped
Entitled Capacity : 1.50
Partition Group-ID : 32773
Shared Pool ID : 0
Online Virtual CPUs : 2
Maximum Virtual CPUs : 8
Minimum Virtual CPUs : 1
Online Memory : 26624 MB
Maximum Memory : 32768 MB
Minimum Memory : 512 MB
Variable Capacity Weight : 0
Minimum Capacity : 0.10
Maximum Capacity : 8.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 16
Active Physical CPUs in system : 16
Active CPUs in Pool : 16
Shared Physical CPUs in system : 16
Maximum Capacity of Pool : 1600
Entitled Capacity of Pool : 620
Unallocated Capacity : 0.00
Physical CPU Percentage : 75.00%
Unallocated Weight : 0
Memory Mode : Dedicated
Total I/O Memory Entitlement : -
Variable Memory Capacity Weight : -
Memory Pool ID : -
Physical Memory in the Pool : -
Hypervisor Page Size : -
Unallocated Variable Memory Capacity Weight: -
Unallocated I/O Memory entitlement : -
Memory Group ID of LPAR : -
Desired Virtual CPUs : 2
Desired Memory : 26624 MB
Desired Variable Capacity Weight : 0
Desired Capacity : 1.50
Target Memory Expansion Factor : -
Target Memory Expansion Size : -
Power Saving Mode : Disabled
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ lparstat
System configuration: type=Shared mode=Capped smt=4 lcpu=8 mem=26624MB psize=16 ent=1.50
%user %sys %wait %idle physc %entc lbusy vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
20.7 2.2 0.5 76.7 0.48 31.9 11.3 1894493622 36375281
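# note: physc 0.48 out of ent=1.50 gives %entc ~31.9; the partition is capped, so it can never consume more than 1.50 physical cores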
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ lscfg | grep proc
+ proc0 Processor
+ proc4 Processor
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ lsattr -El proc0
frequency 3550000000 Processor Speed False
smt_enabled true Processor SMT enabled False
smt_threads 4 Processor SMT threads False
state enable Processor state False
type PowerPC_POWER7 Processor type False
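# note: 3550000000 Hz = 3.55 GHz POWER7 cores running SMT-4 (4 hardware threads per core)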
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ s1
Error 6 initializing SQL*Plus
Message file sp1<lang>.msb not found
SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ uname -M
IBM,8205-E6C
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ lsattr -El sys0 -a realmem
realmem 27262976 Amount of usable physical memory in Kbytes False
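# note: realmem is in KB: 27262976 KB / 1024 = 26624 MB, matching the Online Memory figure from lparstat -i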
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ lscfg | grep proc
+ proc0 Processor
+ proc4 Processor
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ lsdev -Cc processor
proc0 Available 00-00 Processor
proc4 Available 00-04 Processor
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ lscfg -vp |grep -ip proc |grep "PROC"
PROC REGULATOR :
8-WAY PROC CUOD:
8-WAY PROC CUOD:
PROC REGULATOR :
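# note: two 8-way POWER7 cards (CUoD = Capacity Upgrade on Demand) account for the 16 active physical CPUs reported by lparstat -i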
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ odmget -q"PdDvLn LIKE processor/*" CuDv
CuDv:
name = "proc0"
status = 1
chgstatus = 2
ddins = ""
location = "00-00"
parent = "sysplanar0"
connwhere = "P0"
PdDvLn = "processor/sys/proc_rspc"
CuDv:
name = "proc4"
status = 1
chgstatus = 2
ddins = ""
location = "00-04"
parent = "sysplanar0"
connwhere = "P4"
PdDvLn = "processor/sys/proc_rspc"
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ odmget -q"PdDvLn LIKE processor/* AND name=proc0" CuDv
CuDv:
name = "proc0"
status = 1
chgstatus = 2
ddins = ""
location = "00-00"
parent = "sysplanar0"
connwhere = "P0"
PdDvLn = "processor/sys/proc_rspc"
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ odmget -q"PdDvLn LIKE processor/* AND name=proc4" CuDv
CuDv:
name = "proc4"
status = 1
chgstatus = 2
ddins = ""
location = "00-04"
parent = "sysplanar0"
connwhere = "P4"
PdDvLn = "processor/sys/proc_rspc"
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ lsattr -El sys0 -a modelname
modelname IBM,8205-E6C Machine name False
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ lparstat -i|grep ^Active\ Phys
Active Physical CPUs in system : 16
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ lscfg -vp|grep WAY
8-WAY PROC CUOD:
8-WAY PROC CUOD:
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ lscfg -vp|grep proc
proc0 Processor
proc4 Processor
healjim@hostexample:/home/healjim:
$
healjim@hostexample:/home/healjim:
$ lscfg -vp
INSTALLED RESOURCE LIST WITH VPD
The following resources are installed on your machine.
Model Architecture: chrp
Model Implementation: Multiple Processor, PCI bus
sys0 System Object
sysplanar0 System Planar
vio0 Virtual I/O Bus
vscsi1 U8205.E6C.107FB2R-V5-C8-T1 Virtual SCSI Client Adapter
Hardware Location Code......U8205.E6C.107FB2R-V5-C8-T1
vscsi0 U8205.E6C.107FB2R-V5-C7-T1 Virtual SCSI Client Adapter
Hardware Location Code......U8205.E6C.107FB2R-V5-C7-T1
ent0 U8205.E6C.107FB2R-V5-C2-T1 Virtual I/O Ethernet Adapter (l-lan)
Network Address.............42CDB6831802
Displayable Message.........Virtual I/O Ethernet Adapter (l-lan)
Hardware Location Code......U8205.E6C.107FB2R-V5-C2-T1
vsa0 U8205.E6C.107FB2R-V5-C0 LPAR Virtual Serial Adapter
Hardware Location Code......U8205.E6C.107FB2R-V5-C0
vty0 U8205.E6C.107FB2R-V5-C0-L0 Asynchronous Terminal
fcs0 U8205.E6C.107FB2R-V5-C3-T1 Virtual Fibre Channel Client Adapter
Network Address.............C05076053F2C0010
ROS Level and ID............
Device Specific.(Z0)........
Device Specific.(Z1)........
Device Specific.(Z2)........
Device Specific.(Z3)........
Device Specific.(Z4)........
Device Specific.(Z5)........
Device Specific.(Z6)........
Device Specific.(Z7)........
Device Specific.(Z8)........C05076053F2C0010
Device Specific.(Z9)........
Hardware Location Code......U8205.E6C.107FB2R-V5-C3-T1
fscsi0 U8205.E6C.107FB2R-V5-C3-T1 FC SCSI I/O Controller Protocol Device
hdisk1 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L0 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000B3
hdisk2 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L1000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000B4
hdisk3 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L2000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000B5
hdisk4 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L3000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000B6
hdisk5 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L4000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000B7
hdisk14 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-LD000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000C0
hdisk15 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-LE000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000C1
hdisk16 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-LF000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000C2
hdisk17 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L10000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000C3
hdisk18 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L11000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000C4
hdisk19 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L12000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000C5
hdisk20 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L13000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000C6
hdisk21 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L14000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000C7
hdisk22 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L15000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000C8
hdisk23 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L16000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000C9
hdisk24 U8205.E6C.107FB2R-V5-C3-T1-W5005076802208662-L17000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000CA
sfwcomm0 U8205.E6C.107FB2R-V5-C3-T1-W0-L0 Fibre Channel Storage Framework Comm
fcs1 U8205.E6C.107FB2R-V5-C4-T1 Virtual Fibre Channel Client Adapter
Network Address.............C05076053F2C0012
ROS Level and ID............
Device Specific.(Z0)........
Device Specific.(Z1)........
Device Specific.(Z2)........
Device Specific.(Z3)........
Device Specific.(Z4)........
Device Specific.(Z5)........
Device Specific.(Z6)........
Device Specific.(Z7)........
Device Specific.(Z8)........C05076053F2C0012
Device Specific.(Z9)........
Hardware Location Code......U8205.E6C.107FB2R-V5-C4-T1
fscsi1 U8205.E6C.107FB2R-V5-C4-T1 FC SCSI I/O Controller Protocol Device
sfwcomm1 U8205.E6C.107FB2R-V5-C4-T1-W0-L0 Fibre Channel Storage Framework Comm
fcs2 U8205.E6C.107FB2R-V5-C5-T1 Virtual Fibre Channel Client Adapter
Network Address.............C05076053F2C0014
ROS Level and ID............
Device Specific.(Z0)........
Device Specific.(Z1)........
Device Specific.(Z2)........
Device Specific.(Z3)........
Device Specific.(Z4)........
Device Specific.(Z5)........
Device Specific.(Z6)........
Device Specific.(Z7)........
Device Specific.(Z8)........C05076053F2C0014
Device Specific.(Z9)........
Hardware Location Code......U8205.E6C.107FB2R-V5-C5-T1
fscsi2 U8205.E6C.107FB2R-V5-C5-T1 FC SCSI I/O Controller Protocol Device
sfwcomm2 U8205.E6C.107FB2R-V5-C5-T1-W0-L0 Fibre Channel Storage Framework Comm
fcs3 U8205.E6C.107FB2R-V5-C6-T1 Virtual Fibre Channel Client Adapter
Network Address.............C05076053F2C0016
ROS Level and ID............
Device Specific.(Z0)........
Device Specific.(Z1)........
Device Specific.(Z2)........
Device Specific.(Z3)........
Device Specific.(Z4)........
Device Specific.(Z5)........
Device Specific.(Z6)........
Device Specific.(Z7)........
Device Specific.(Z8)........C05076053F2C0016
Device Specific.(Z9)........
Hardware Location Code......U8205.E6C.107FB2R-V5-C6-T1
fscsi3 U8205.E6C.107FB2R-V5-C6-T1 FC SCSI I/O Controller Protocol Device
hdisk6 U8205.E6C.107FB2R-V5-C6-T1-W5005076802208662-L5000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000B8
hdisk7 U8205.E6C.107FB2R-V5-C6-T1-W5005076802208662-L6000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000B9
hdisk8 U8205.E6C.107FB2R-V5-C6-T1-W5005076802208662-L7000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000BA
hdisk9 U8205.E6C.107FB2R-V5-C6-T1-W5005076802208662-L8000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000BB
hdisk10 U8205.E6C.107FB2R-V5-C6-T1-W5005076802208662-L9000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000BC
hdisk11 U8205.E6C.107FB2R-V5-C6-T1-W5005076802208662-LA000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000BD
hdisk12 U8205.E6C.107FB2R-V5-C6-T1-W5005076802208662-LB000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000BE
hdisk13 U8205.E6C.107FB2R-V5-C6-T1-W5005076802208662-LC000000000000 MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200a04
Serial Number...............600507680281031250000000000000BF
sfwcomm3 U8205.E6C.107FB2R-V5-C6-T1-W0-L0 Fibre Channel Storage Framework Comm
L2cache0 L2 Cache
mem0 Memory
proc0 Processor
proc4 Processor
PLATFORM SPECIFIC
Name: IBM,8205-E6C
Model: IBM,8205-E6C
Node: /
Device Type: chrp
System VPD:
Record Name.................VSYS
Flag Field..................XXSV
Brand.......................S0
Hardware Location Code......U8205.E6C.107FB2R
Machine/Cabinet Serial No...107FB2R
Machine Type and Model......8205-E6C
System Unique ID (SUID).....0004AC180194
World Wide Port Name........C05076053F2C
Product Specific.(FV).......AL740_077
Version.....................ipzSeries
Physical Location: U8205.E6C.107FB2R
CEC:
Record Name.................VCEN
Flag Field..................XXEV
Brand.......................S0
Hardware Location Code......U78AA.001.WZSH9S1
Machine/Cabinet Serial No...WZSH9S1
Machine Type and Model......78AA-001
Controlling CEC ID..........8205-E6C 107FB2R
Rack Serial Number..........0000000000000000
Feature Code/Marketing ID...78AA-001
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1
SYSTEM BACKPLANE:
Record Name.................VINI
Flag Field..................XXBP
Hardware Location Code......U78AA.001.WZSH9S1-P1
Customer Card ID Number.....2B4A
Serial Number...............Y210P201702D
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................74Y4135
Part Number.................74Y4132
Power.......................2A00000000000000
Product Specific.(HE).......0001
Product Specific.(CT).......40F30025
Product Specific.(HW).......0100
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(BS).........
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1
PROC REGULATOR :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78AA.001.WZSH9S1-P1-C9
Customer Card ID Number.....2B50
Serial Number...............YL102121G1FD
Part Number.................00J0254
FRU Number.................. 00J0254
CCIN Extender...............1
Product Specific.(VZ).......01
Product Specific.(CR).......
Version.....................RS6K
Physical Location: U78AA.001.WZSH9S1-P1-C9
8-WAY PROC CUOD:
Record Name.................VINI
Flag Field..................XXPF
Hardware Location Code......U78AA.001.WZSH9S1-P1-C10
Customer Card ID Number.....543C
Serial Number...............YA1931297636
FRU Number..................74Y8598
Part Number.................74Y8598
Product Specific.(HE).......0001
Product Specific.(CT).......40110008
Product Specific.(HW).......0001
Product Specific.(B3).......000000000000
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Power.......................3400800133038000
Product Specific.(VZ).......01
CCIN Extender...............1
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C10
8-WAY PROC CUOD:
Record Name.................VINI
Flag Field..................XXPF
Hardware Location Code......U78AA.001.WZSH9S1-P1-C11
Customer Card ID Number.....543C
Serial Number...............YA1931297638
FRU Number..................74Y8598
Part Number.................74Y8598
Product Specific.(HE).......0001
Product Specific.(CT).......40110008
Product Specific.(HW).......0001
Product Specific.(B3).......000000000000
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Power.......................3400800133038000
Product Specific.(VZ).......01
CCIN Extender...............1
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C11
PROC REGULATOR :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78AA.001.WZSH9S1-P1-C12
Customer Card ID Number.....2B50
Serial Number...............YL102121G174
Part Number.................00J0254
FRU Number.................. 00J0254
CCIN Extender...............1
Product Specific.(VZ).......01
Product Specific.(CR).......
Version.....................RS6K
Physical Location: U78AA.001.WZSH9S1-P1-C12
MEMORY CARD :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78AA.001.WZSH9S1-P1-C15
Customer Card ID Number.....2C1C
Serial Number...............YL10P20300B9
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................00E0638
Part Number.................00E0639
Product Specific.(HE).......0001
Product Specific.(CT).......40B60006
Product Specific.(HW).......0100
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C15
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C15-C1
Customer Card ID Number.....31F4
Serial Number...............YLD01896AC5F
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C15-C1
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C15-C2
Customer Card ID Number.....31F4
Serial Number...............YLD01996AC8F
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C15-C2
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C15-C3
Customer Card ID Number.....31F4
Serial Number...............YLD01A96AC9F
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C15-C3
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C15-C4
Customer Card ID Number.....31F4
Serial Number...............YLD01B96AC90
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C15-C4
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C15-C7
Customer Card ID Number.....31F4
Serial Number...............YLD01C96AC60
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C15-C7
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C15-C8
Customer Card ID Number.....31F4
Serial Number...............YLD01D94F042
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C15-C8
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C15-C9
Customer Card ID Number.....31F4
Serial Number...............YLD01E94F155
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C15-C9
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C15-C10
Customer Card ID Number.....31F4
Serial Number...............YLD01F94F15E
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C15-C10
MEMORY CARD :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78AA.001.WZSH9S1-P1-C16
Customer Card ID Number.....2C1C
Serial Number...............YL10P2030180
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................00E0638
Part Number.................00E0639
Product Specific.(HE).......0001
Product Specific.(CT).......40B60006
Product Specific.(HW).......0100
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C16
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C16-C1
Customer Card ID Number.....31F4
Serial Number...............YLD01095561E
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C16-C1
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C16-C2
Customer Card ID Number.....31F4
Serial Number...............YLD01195560C
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C16-C2
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C16-C3
Customer Card ID Number.....31F4
Serial Number...............YLD012955625
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C16-C3
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C16-C4
Customer Card ID Number.....31F4
Serial Number...............YLD01395560E
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C16-C4
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C16-C7
Customer Card ID Number.....31F4
Serial Number...............YLD01495562B
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C16-C7
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C16-C8
Customer Card ID Number.....31F4
Serial Number...............YLD015955586
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C16-C8
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C16-C9
Customer Card ID Number.....31F4
Serial Number...............YLD016955620
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C16-C9
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C16-C10
Customer Card ID Number.....31F4
Serial Number...............YLD017955621
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C16-C10
MEMORY CARD :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78AA.001.WZSH9S1-P1-C17
Customer Card ID Number.....2C1C
Serial Number...............YL10P20301AE
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................00E0638
Part Number.................00E0639
Product Specific.(HE).......0001
Product Specific.(CT).......40B60006
Product Specific.(HW).......0100
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C17
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C17-C1
Customer Card ID Number.....31F4
Serial Number...............YLD00895562F
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C17-C1
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C17-C2
Customer Card ID Number.....31F4
Serial Number...............YLD009955630
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C17-C2
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C17-C3
Customer Card ID Number.....31F4
Serial Number...............YLD00A955631
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C17-C3
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C17-C4
Customer Card ID Number.....31F4
Serial Number...............YLD00B955632
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C17-C4
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C17-C7
Customer Card ID Number.....31F4
Serial Number...............YLD00C955679
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C17-C7
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C17-C8
Customer Card ID Number.....31F4
Serial Number...............YLD00DA4EC21
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C17-C8
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C17-C9
Customer Card ID Number.....31F4
Serial Number...............YLD00EA38C96
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C17-C9
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C17-C10
Customer Card ID Number.....31F4
Serial Number...............YLD00FA4EBF3
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C17-C10
MEMORY CARD :
Record Name.................VINI
Flag Field..................XXRG
Hardware Location Code......U78AA.001.WZSH9S1-P1-C18
Customer Card ID Number.....2C1C
Serial Number...............YL10P2030099
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................00E0638
Part Number.................00E0639
Product Specific.(HE).......0001
Product Specific.(CT).......40B60006
Product Specific.(HW).......0100
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C18
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C18-C1
Customer Card ID Number.....31F4
Serial Number...............YLD000A3509B
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C18-C1
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C18-C2
Customer Card ID Number.....31F4
Serial Number...............YLD001A3509C
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C18-C2
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C18-C3
Customer Card ID Number.....31F4
Serial Number...............YLD002A3509D
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C18-C3
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C18-C4
Customer Card ID Number.....31F4
Serial Number...............YLD003A35091
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C18-C4
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C18-C7
Customer Card ID Number.....31F4
Serial Number...............YLD004A35092
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C18-C7
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C18-C8
Customer Card ID Number.....31F4
Serial Number...............YLD005A35098
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C18-C8
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C18-C9
Customer Card ID Number.....31F4
Serial Number...............YLD006A35099
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C18-C9
Memory DIMM:
Record Name.................VINI
Flag Field..................XXMS
Hardware Location Code......U78AA.001.WZSH9S1-P1-C18-C10
Customer Card ID Number.....31F4
Serial Number...............YLD007A3509A
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................78P0555
Part Number.................78P0555
Power.......................4800000000020000
Size........................8192
Product Specific.(HE).......0001
Product Specific.(CT).......10210004
Product Specific.(HW).......0001
Product Specific.(B3).......030000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C18-C10
RAID :
Record Name.................VINI
Flag Field..................XXRD
Hardware Location Code......U78AA.001.WZSH9S1-P1-C19
Customer Card ID Number.....2B4F
Serial Number...............YL10P2065336
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................00E0660
Part Number.................00E0656
Product Specific.(HE).......0001
Product Specific.(CT).......30F20006
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C19
ANCHOR :
Record Name.................VINI
Flag Field..................XXAV
Hardware Location Code......U78AA.001.WZSH9S1-P1-C20
Customer Card ID Number.....52DB
Serial Number...............YL101123V00C
CCIN Extender...............1
Product Specific.(VZ).......03
FRU Number..................00E0942
Part Number.................00E1147
Power.......................8100300000000000
Product Specific.(HE).......0010
Product Specific.(CT).......40B40000
Product Specific.(HW).......0001
Product Specific.(B3).......000000000001
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Product Specific.(B9).......43532087071201A016E85350968AD79CF7F6C9514D
310AA00BA08FC4B4D04D3279A01D525DAE3A504D33
FFAF49BC09BFD6DE4D34ADE46F23CC7129ED
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-P1-C20
CEC OP PANEL :
Record Name.................VINI
Flag Field..................XXOP
Hardware Location Code......U78AA.001.WZSH9S1-D1
Customer Card ID Number.....2BCD
Serial Number...............YL10W13470UK
CCIN Extender...............1
Product Specific.(VZ).......01
FRU Number..................74Y2057
Part Number.................74Y3133
Product Specific.(HE).......0001
Product Specific.(CT).......40B50000
Product Specific.(HW).......0002
Product Specific.(B3).......000000000000
Product Specific.(B4).......00
Product Specific.(B7).......000000000000000000000000
Version.....................ipzSeries
Physical Location: U78AA.001.WZSH9S1-D1
A IBM AC PS :
Record Name.................VINI
Flag Field..................XXPS
Hardware Location Code......U78AA.001.WZSH9S1-E1
Customer Card ID Number.....2BCB
Serial Number...............YL1021CV0210
Part Number.................74Y9082
FRU Number.................. 74Y9082
Version.....................RS6K
Physical Location: U78AA.001.WZSH9S1-E1
A IBM AC PS :
Record Name.................VINI
Flag Field..................XXPS
Hardware Location Code......U78AA.001.WZSH9S1-E2
Customer Card ID Number.....2BCB
Serial Number...............YL1021CV0206
Part Number.................74Y9082
FRU Number.................. 74Y9082
Version.....................RS6K
Physical Location: U78AA.001.WZSH9S1-E2
IBM Air Mover :
Record Name.................VINI
Flag Field..................XXAM
Hardware Location Code......U78AA.001.WZSH9S1-A1
Customer Card ID Number.....6B1D
FRU Number.................. 74Y5220
Version.....................RS6K
Physical Location: U78AA.001.WZSH9S1-A1
IBM Air Mover :
Record Name.................VINI
Flag Field..................XXAM
Hardware Location Code......U78AA.001.WZSH9S1-A2
Customer Card ID Number.....6B1D
FRU Number.................. 74Y5220
Version.....................RS6K
Physical Location: U78AA.001.WZSH9S1-A2
IBM Air Mover :
Record Name.................VINI
Flag Field..................XXAM
Hardware Location Code......U78AA.001.WZSH9S1-A3
Customer Card ID Number.....6B1D
FRU Number.................. 74Y5220
Version.....................RS6K
Physical Location: U78AA.001.WZSH9S1-A3
IBM Air Mover :
Record Name.................VINI
Flag Field..................XXAM
Hardware Location Code......U78AA.001.WZSH9S1-A4
Customer Card ID Number.....6B1D
FRU Number.................. 74Y5220
Version.....................RS6K
Physical Location: U78AA.001.WZSH9S1-A4
VSBPD6E4A 3GSAS:
Record Name.................VINI
Flag Field..................XXDB
Hardware Location Code......U78AA.001.WZSH9S1-P2
Customer Card ID Number.....2BD5
Serial Number...............YL10P2034135
FRU Number.................. 00E0968
Part Number.................00E0969
CCIN Extender...............1
Product Specific.(VZ).......02
Product Specific.(B2).......50050760511B07000000000000000100
Version.....................RS6K
Physical Location: U78AA.001.WZSH9S1-P2
System Firmware:
Code Level, LID Keyword.....Phyp_1 19132012030180A00701
Code Level, LID Keyword.....PFW 13522011120981CF0681
Code Level, LID Keyword.....FSP_Ker 22072012030181E00100
Code Level, LID Keyword.....FSP_Fil 22082012030181E00109
Code Level, LID Keyword.....FipS_BU 22082012030181E00208
Code Level, LID Keyword.....Phyp_2 19132012030185A00702
Code Level, LID Keyword.....SPCN3 124620060531A0E00A11
Code Level, LID Keyword.....SPCN1 183020070213A0E00D00
Code Level, LID Keyword.....SPCN2 183420070213A0E00D20
Microcode Image.............AL740_077 AL740_077 AL740_077
Hardware Location Code......U8205.E6C.107FB2R-Y1
Physical Location: U8205.E6C.107FB2R-Y1
Name: openprom
Model: IBM,AL740_077
Node: openprom
Name: interrupt-controller
Model: IBM, Logical PowerPC-PIC, 00
Node: interrupt-controller@0
Device Type: PowerPC-External-Interrupt-Presentation
Name: vty
Node: vty@30000000
Device Type: serial
Physical Location: U8205.E6C.107FB2R-V5-C0
Name: l-lan
Node: l-lan@30000002
Device Type: network
Physical Location: U8205.E6C.107FB2R-V5-C2-T1
Name: vfc-client
Node: vfc-client@30000003
Device Type: fcp
Physical Location: U8205.E6C.107FB2R-V5-C3-T1
Name: vfc-client
Node: vfc-client@30000004
Device Type: fcp
Physical Location: U8205.E6C.107FB2R-V5-C4-T1
Name: vfc-client
Node: vfc-client@30000005
Device Type: fcp
Physical Location: U8205.E6C.107FB2R-V5-C5-T1
Name: vfc-client
Node: vfc-client@30000006
Device Type: fcp
Physical Location: U8205.E6C.107FB2R-V5-C6-T1
Name: v-scsi
Node: v-scsi@30000007
Device Type: vscsi
Physical Location: U8205.E6C.107FB2R-V5-C7-T1
Name: v-scsi
Node: v-scsi@30000008
Device Type: vscsi
Physical Location: U8205.E6C.107FB2R-V5-C8-T1
}}}
https://oracle-base.com/articles/misc/materialized-views#refresh-materialized-views
{{{
-- create a test tablespace, a master table with a PK, and its mview log
create tablespace t1;
create table T1(A number primary key) tablespace t1;
create materialized view log on T1 tablespace t1 with primary key;
-- inspect the objects created (MLOG$_T1, RUPD$_T1, I_MLOG$_T1) and where their extents live
select * from user_objects;
select * from user_segments;
select MASTER, LOG_TABLE from USER_MVIEW_LOGS;
select master, log, temp_log from sys.mlog$ where mowner = user and master = 'T1';
select table_name, INITIAL_EXTENT, next_extent from user_tables;
select segment_name,tablespace_name, file_id,block_id,blocks from dba_extents where segment_name = 'T1';
-- manually grow the table and the mview log by allocating extents
alter table t1 allocate extent (datafile '/home/oracle/app/oracle/oradata/CDB1/FF72AA85A0370D5BE045000000000001/datafile/o1_mf_t1_c0xjd07g_.dbf');
alter table t1 allocate extent (size 5M);
alter table t1 allocate extent (size 5M datafile '/home/oracle/app/oracle/oradata/CDB1/FF72AA85A0370D5BE045000000000001/datafile/o1_mf_t1_c0xjd07g_.dbf');
alter table MLOG$_T1 allocate extent (size 5M);
-- output --
Connected to Oracle Database 12c Enterprise Edition Release 12.1.0.2.0
Connected as karlarao@orcl_192.168.203.5
SQL>
SQL>
SQL> select * from user_objects;
OBJECT_NAME SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE CREATED LAST_DDL_TIME TIMESTAMP STATUS TEMPORARY GENERATED SECONDARY NAMESPACE EDITION_NAME SHARING EDITIONABLE ORACLE_MAINTAINED
-------------------------------------------------------------------------------- -------------------------------------------------------------------------------- ---------- -------------- ----------------------- ----------- ------------- ------------------- ------- --------- --------- --------- ---------- -------------------------------------------------------------------------------- ------------- ----------- -----------------
SYS_C0011838 96428 96428 INDEX 10/2/2015 1 10/2/2015 10: 2015-10-02:10:40:29 VALID N Y N 4 NONE N
MLOG$_T1 96429 96429 TABLE 10/2/2015 1 10/2/2015 10: 2015-10-02:10:41:47 VALID N N N 1 NONE N
RUPD$_T1 96430 TABLE 10/2/2015 1 10/2/2015 10: 2015-10-02:10:41:47 VALID Y N N 1 NONE N
T1 96427 96427 TABLE 10/2/2015 1 10/2/2015 10: 2015-10-02:10:40:29 VALID N N N 1 NONE N
I_MLOG$_T1 96431 96431 INDEX 10/2/2015 1 10/2/2015 10: 2015-10-02:10:41:47 VALID N N N 4 NONE N
SQL> select * from user_segments;
SEGMENT_NAME PARTITION_NAME SEGMENT_TYPE SEGMENT_SUBTYPE TABLESPACE_NAME BYTES BLOCKS EXTENTS INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS MAX_SIZE RETENTION MINRETENTION PCT_INCREASE FREELISTS FREELIST_GROUPS BUFFER_POOL FLASH_CACHE CELL_FLASH_CACHE INMEMORY INMEMORY_PRIORITY INMEMORY_DISTRIBUTE INMEMORY_DUPLICATE INMEMORY_COMPRESSION
-------------------------------------------------------------------------------- -------------------------------------------------------------------------------- ------------------ --------------- ------------------------------ ---------- ---------- ---------- -------------- ----------- ----------- ----------- ---------- --------- ------------ ------------ ---------- --------------- ----------- ----------- ---------------- -------- ----------------- ------------------- ------------------ --------------------
I_MLOG$_T1 INDEX ASSM USERS 65536 8 1 65536 1048576 1 2147483645 2147483645 DEFAULT DEFAULT DEFAULT DISABLED
MLOG$_T1 TABLE ASSM T1 65536 8 1 65536 1048576 1 2147483645 2147483645 DEFAULT DEFAULT DEFAULT DISABLED
SQL> select MASTER, LOG_TABLE from USER_MVIEW_LOGS;
MASTER LOG_TABLE
-------------------------------------------------------------------------------- --------------------------------------------------------------------------------
T1 MLOG$_T1
SQL> select master, log, temp_log from sys.mlog$ where mowner = user and master = 'T1';
MASTER LOG TEMP_LOG
-------------------------------------------------------------------------------- -------------------------------------------------------------------------------- --------------------------------------------------------------------------------
T1 MLOG$_T1 RUPD$_T1
SQL> select table_name, INITIAL_EXTENT, next_extent from user_tables;
TABLE_NAME INITIAL_EXTENT NEXT_EXTENT
-------------------------------------------------------------------------------- -------------- -----------
T1
MLOG$_T1 65536 1048576
RUPD$_T1
SQL> select segment_name,tablespace_name, file_id,block_id,blocks from user_extents where segment_name = 'T1';
select segment_name,tablespace_name, file_id,block_id,blocks from user_extents where segment_name = 'T1'
ORA-00904: "BLOCK_ID": invalid identifier
SQL>
SQL>
SQL>
SQL>
SQL>
SQL> select segment_name,tablespace_name, file_id,block_id,blocks from dba_extents where segment_name = 'T1';
SEGMENT_NAME TABLESPACE_NAME FILE_ID BLOCK_ID BLOCKS
-------------------------------------------------------------------------------- ------------------------------ ---------- ---------- ----------
T1 SYSTEM 8 35304 8
SQL> select * from v$datafile;
FILE# CREATION_CHANGE# CREATION_TIME TS# RFILE# STATUS ENABLED CHECKPOINT_CHANGE# CHECKPOINT_TIME UNRECOVERABLE_CHANGE# UNRECOVERABLE_TIME LAST_CHANGE# LAST_TIME OFFLINE_CHANGE# ONLINE_CHANGE# ONLINE_TIME BYTES BLOCKS CREATE_BYTES BLOCK_SIZE NAME PLUGGED_IN BLOCK1_OFFSET AUX_NAME FIRST_NONLOGGED_SCN FIRST_NONLOGGED_TIME FOREIGN_DBID FOREIGN_CREATION_CHANGE# FOREIGN_CREATION_TIME PLUGGED_READONLY PLUGIN_CHANGE# PLUGIN_RESETLOGS_CHANGE# PLUGIN_RESETLOGS_TIME CON_ID
---------- ---------------- ------------- ---------- ---------- ------- ---------- ------------------ --------------- --------------------- ------------------ ------------ ----------- --------------- -------------- ----------- ---------- ---------- ------------ ---------- -------------------------------------------------------------------------------- ---------- ------------- -------------------------------------------------------------------------------- ------------------- -------------------- ------------ ------------------------ --------------------- ---------------- -------------- ------------------------ --------------------- ----------
4 1591076 7/7/2014 7:03 2 4 ONLINE READ WRITE 8732685 10/2/2015 10:37 0 1594142 1594143 7/30/2014 4 225443840 27520 0 8192 /home/oracle/app/oracle/oradata/cdb1/undotbs01.dbf 0 8192 NONE 0 0 0 NO 0 0 0
8 1765664 7/30/2014 4:3 0 1 SYSTEM READ WRITE 8732685 10/2/2015 10:37 0 6574598 6577353 4/3/2015 12 293601280 35840 272629760 8192 /home/oracle/app/oracle/oradata/cdb1/orcl/system01.dbf 0 8192 NONE 0 0 0 NO 0 0 3
9 1765667 7/30/2014 4:3 1 4 ONLINE READ WRITE 8732685 10/2/2015 10:37 0 6574598 6577353 4/3/2015 12 754974720 92160 513802240 8192 /home/oracle/app/oracle/oradata/cdb1/orcl/sysaux01.dbf 0 8192 NONE 0 0 0 NO 0 0 3
10 1765670 7/30/2014 4:3 3 9 ONLINE READ WRITE 8732685 10/2/2015 10:37 0 6574598 6577353 4/3/2015 12 326369280 39840 5242880 8192 /home/oracle/app/oracle/oradata/cdb1/orcl/SAMPLE_SCHEMA_users01.dbf 0 8192 NONE 0 0 0 NO 0 0 3
11 1765672 7/30/2014 4:3 4 10 ONLINE READ WRITE 8732685 10/2/2015 10:37 0 6574598 6577353 4/3/2015 12 1369047040 167120 1304166400 8192 /home/oracle/app/oracle/oradata/cdb1/orcl/example01.dbf 0 8192 NONE 0 0 0 NO 0 0 3
23 3217359 8/14/2014 12: 16 23 ONLINE READ WRITE 8732685 10/2/2015 10:37 0 6574598 6577353 4/3/2015 12 10551296 1288 2686976 8192 /home/oracle/app/oracle/oradata/cdb1/orcl/APEX_3496321052328731.dbf 0 8192 NONE 0 0 0 NO 0 0 3
24 5298914 9/23/2014 6:0 17 24 ONLINE READ WRITE 8732685 10/2/2015 10:37 0 6574598 6577353 4/3/2015 12 2686976 328 2686976 8192 /home/oracle/app/oracle/oradata/cdb1/orcl/APEX_3618122143496338.dbf 0 8192 NONE 0 0 0 NO 0 0 3
25 5305774 9/23/2014 7:0 18 25 ONLINE READ WRITE 8732685 10/2/2015 10:37 0 6574598 6577353 4/3/2015 12 2686976 328 2686976 8192 /home/oracle/app/oracle/oradata/cdb1/orcl/APEX_3624405313851440.dbf 0 8192 NONE 0 0 0 NO 0 0 3
26 5408322 9/23/2014 7:3 19 26 ONLINE READ WRITE 8732685 10/2/2015 10:37 0 6574598 6577353 4/3/2015 12 2686976 328 2686976 8192 /home/oracle/app/oracle/oradata/cdb1/orcl/APEX_3632418306476203.dbf 0 8192 NONE 0 0 0 NO 0 0 3
27 5410407 9/23/2014 7:5 20 27 ONLINE READ WRITE 8732685 10/2/2015 10:37 0 6574598 6577353 4/3/2015 12 10551296 1288 2686976 8192 /home/oracle/app/oracle/oradata/cdb1/orcl/APEX_3638406606618093.dbf 0 8192 NONE 0 0 0 NO 0 0 3
28 5411728 9/23/2014 7:5 21 28 ONLINE READ WRITE 8732685 10/2/2015 10:37 0 6574598 6577353 4/3/2015 12 5308416 648 2686976 8192 /home/oracle/app/oracle/oradata/cdb1/orcl/APEX_3753603687620385.dbf 0 8192 NONE 0 0 0 NO 0 0 3
40 8207595 8/20/2015 1:2 22 40 ONLINE READ WRITE 8732685 10/2/2015 10:37 0 0 0 104857600 12800 104857600 8192 /home/oracle/app/oracle/oradata/CDB1/FF72AA85A0370D5BE045000000000001/datafile/o 0 8192 NONE 0 0 0 NO 0 0 3
41 8734614 10/2/2015 10: 23 41 ONLINE READ WRITE 8734615 10/2/2015 10:40 0 0 0 104857600 12800 104857600 8192 /home/oracle/app/oracle/oradata/CDB1/FF72AA85A0370D5BE045000000000001/datafile/o 0 8192 NONE 0 0 0 NO 0 0 3
13 rows selected
SQL> select * from dba_tablespaces;
TABLESPACE_NAME BLOCK_SIZE INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS MAX_SIZE PCT_INCREASE MIN_EXTLEN STATUS CONTENTS LOGGING FORCE_LOGGING EXTENT_MANAGEMENT ALLOCATION_TYPE PLUGGED_IN SEGMENT_SPACE_MANAGEMENT DEF_TAB_COMPRESSION RETENTION BIGFILE PREDICATE_EVALUATION ENCRYPTED COMPRESS_FOR DEF_INMEMORY DEF_INMEMORY_PRIORITY DEF_INMEMORY_DISTRIBUTE DEF_INMEMORY_COMPRESSION DEF_INMEMORY_DUPLICATE
------------------------------ ---------- -------------- ----------- ----------- ----------- ---------- ------------ ---------- --------- --------- --------- ------------- ----------------- --------------- ---------- ------------------------ ------------------- ----------- ------- -------------------- --------- ------------------------------ ------------ --------------------- ----------------------- ------------------------ ----------------------
SYSTEM 8192 65536 1 2147483645 2147483645 65536 ONLINE PERMANENT LOGGING YES LOCAL SYSTEM NO MANUAL DISABLED NOT APPLY NO HOST NO DISABLED
SYSAUX 8192 65536 1 2147483645 2147483645 65536 ONLINE PERMANENT LOGGING YES LOCAL SYSTEM NO AUTO DISABLED NOT APPLY NO HOST NO DISABLED
TEMP 8192 1048576 1048576 1 2147483645 0 1048576 ONLINE TEMPORARY NOLOGGING NO LOCAL UNIFORM NO MANUAL DISABLED NOT APPLY NO HOST NO DISABLED
USERS 8192 65536 1 2147483645 2147483645 65536 ONLINE PERMANENT LOGGING NO LOCAL SYSTEM NO AUTO DISABLED NOT APPLY NO HOST NO DISABLED
EXAMPLE 8192 65536 1 2147483645 2147483645 65536 ONLINE PERMANENT NOLOGGING NO LOCAL SYSTEM NO AUTO DISABLED NOT APPLY NO HOST NO DISABLED
APEX_3496321052328731 8192 65536 1 2147483645 2147483645 65536 ONLINE PERMANENT LOGGING NO LOCAL SYSTEM NO AUTO DISABLED NOT APPLY NO HOST NO DISABLED
APEX_3618122143496338 8192 65536 1 2147483645 2147483645 65536 ONLINE PERMANENT LOGGING NO LOCAL SYSTEM NO AUTO DISABLED NOT APPLY NO HOST NO DISABLED
APEX_3624405313851440 8192 65536 1 2147483645 2147483645 65536 ONLINE PERMANENT LOGGING NO LOCAL SYSTEM NO AUTO DISABLED NOT APPLY NO HOST NO DISABLED
APEX_3632418306476203 8192 65536 1 2147483645 2147483645 65536 ONLINE PERMANENT LOGGING NO LOCAL SYSTEM NO AUTO DISABLED NOT APPLY NO HOST NO DISABLED
APEX_3638406606618093 8192 65536 1 2147483645 2147483645 65536 ONLINE PERMANENT LOGGING NO LOCAL SYSTEM NO AUTO DISABLED NOT APPLY NO HOST NO DISABLED
APEX_3753603687620385 8192 65536 1 2147483645 2147483645 65536 ONLINE PERMANENT LOGGING NO LOCAL SYSTEM NO AUTO DISABLED NOT APPLY NO HOST NO DISABLED
PARTITION1 8192 65536 1 2147483645 2147483645 65536 ONLINE PERMANENT LOGGING NO LOCAL SYSTEM NO AUTO DISABLED NOT APPLY NO HOST NO DISABLED
T1 8192 65536 1 2147483645 2147483645 65536 ONLINE PERMANENT LOGGING NO LOCAL SYSTEM NO AUTO DISABLED NOT APPLY NO HOST NO DISABLED
13 rows selected
SQL>
SQL>
SQL>
SQL> select * from dba_data_files;
FILE_NAME FILE_ID TABLESPACE_NAME BYTES BLOCKS STATUS RELATIVE_FNO AUTOEXTENSIBLE MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS ONLINE_STATUS
-------------------------------------------------------------------------------- ---------- ------------------------------ ---------- ---------- --------- ------------ -------------- ---------- ---------- ------------ ---------- ----------- -------------
/home/oracle/app/oracle/oradata/cdb1/orcl/system01.dbf 8 SYSTEM 293601280 35840 AVAILABLE 1 YES 3435972198 4194302 1280 292552704 35712 SYSTEM
/home/oracle/app/oracle/oradata/cdb1/orcl/sysaux01.dbf 9 SYSAUX 754974720 92160 AVAILABLE 4 YES 3435972198 4194302 1280 753926144 92032 ONLINE
/home/oracle/app/oracle/oradata/cdb1/orcl/SAMPLE_SCHEMA_users01.dbf 10 USERS 326369280 39840 AVAILABLE 9 YES 3435972198 4194302 160 325320704 39712 ONLINE
/home/oracle/app/oracle/oradata/cdb1/orcl/example01.dbf 11 EXAMPLE 1369047040 167120 AVAILABLE 10 YES 3435972198 4194302 80 1367998464 166992 ONLINE
/home/oracle/app/oracle/oradata/cdb1/orcl/APEX_3496321052328731.dbf 23 APEX_3496321052328731 10551296 1288 AVAILABLE 23 YES 26279936 3208 320 9502720 1160 ONLINE
/home/oracle/app/oracle/oradata/cdb1/orcl/APEX_3618122143496338.dbf 24 APEX_3618122143496338 2686976 328 AVAILABLE 24 YES 26279936 3208 320 1638400 200 ONLINE
/home/oracle/app/oracle/oradata/cdb1/orcl/APEX_3624405313851440.dbf 25 APEX_3624405313851440 2686976 328 AVAILABLE 25 YES 26279936 3208 320 1638400 200 ONLINE
/home/oracle/app/oracle/oradata/cdb1/orcl/APEX_3632418306476203.dbf 26 APEX_3632418306476203 2686976 328 AVAILABLE 26 YES 26279936 3208 320 1638400 200 ONLINE
/home/oracle/app/oracle/oradata/cdb1/orcl/APEX_3638406606618093.dbf 27 APEX_3638406606618093 10551296 1288 AVAILABLE 27 YES 26279936 3208 320 9502720 1160 ONLINE
/home/oracle/app/oracle/oradata/cdb1/orcl/APEX_3753603687620385.dbf 28 APEX_3753603687620385 5308416 648 AVAILABLE 28 YES 26279936 3208 320 4259840 520 ONLINE
/home/oracle/app/oracle/oradata/CDB1/FF72AA85A0370D5BE045000000000001/datafile/o 40 PARTITION1 104857600 12800 AVAILABLE 40 YES 3435972198 4194302 12800 103809024 12672 ONLINE
/home/oracle/app/oracle/oradata/CDB1/FF72AA85A0370D5BE045000000000001/datafile/o 41 T1 104857600 12800 AVAILABLE 41 YES 3435972198 4194302 12800 103809024 12672 ONLINE
12 rows selected
SQL>
SQL>
SQL> select segment_name,tablespace_name, file_id,block_id,blocks from dba_extents where segment_name = 'T1';
SEGMENT_NAME TABLESPACE_NAME FILE_ID BLOCK_ID BLOCKS
-------------------------------------------------------------------------------- ------------------------------ ---------- ---------- ----------
T1 SYSTEM 8 35304 8
SQL>
SQL>
SQL> select segment_name,tablespace_name, file_id,block_id,blocks from dba_extents where segment_name = 'T1';
SEGMENT_NAME TABLESPACE_NAME FILE_ID BLOCK_ID BLOCKS
-------------------------------------------------------------------------------- ------------------------------ ---------- ---------- ----------
T1 SYSTEM 8 35304 8
T1 T1 41 136 8
T1 T1 41 152 8
SQL>
SQL> select segment_name,tablespace_name, file_id,block_id,blocks from dba_extents where segment_name = 'T1';
SEGMENT_NAME TABLESPACE_NAME FILE_ID BLOCK_ID BLOCKS
-------------------------------------------------------------------------------- ------------------------------ ---------- ---------- ----------
T1 SYSTEM 8 35304 8
T1 T1 41 136 8
T1 T1 41 152 8
T1 T1 41 160 8
SQL>
SQL> select segment_name,tablespace_name, file_id,block_id,blocks from dba_extents where segment_name = 'T1';
SEGMENT_NAME TABLESPACE_NAME FILE_ID BLOCK_ID BLOCKS
-------------------------------------------------------------------------------- ------------------------------ ---------- ---------- ----------
T1 SYSTEM 8 35304 8
T1 T1 41 136 8
T1 T1 41 152 8
T1 T1 41 160 8
T1 T1 41 256 128
T1 T1 41 384 128
T1 T1 41 512 128
T1 T1 41 640 128
T1 T1 41 768 128
9 rows selected
SQL>
SQL>
SQL>
SQL> select segment_name,tablespace_name, file_id,block_id,blocks from dba_extents where segment_name = 'T1';
SEGMENT_NAME TABLESPACE_NAME FILE_ID BLOCK_ID BLOCKS
-------------------------------------------------------------------------------- ------------------------------ ---------- ---------- ----------
T1 SYSTEM 8 35304 8
T1 T1 41 136 8
T1 T1 41 152 8
T1 T1 41 160 8
T1 T1 41 256 128
T1 T1 41 384 128
T1 T1 41 512 128
T1 T1 41 640 128
T1 T1 41 768 128
T1 T1 41 896 128
T1 T1 41 1024 128
T1 T1 41 1152 128
T1 T1 41 1280 128
T1 T1 41 1408 128
14 rows selected
SQL>
SQL>
SQL>
SQL>
SQL> select * from user_objects;
OBJECT_NAME SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE CREATED LAST_DDL_TIME TIMESTAMP STATUS TEMPORARY GENERATED SECONDARY NAMESPACE EDITION_NAME SHARING EDITIONABLE ORACLE_MAINTAINED
-------------------------------------------------------------------------------- -------------------------------------------------------------------------------- ---------- -------------- ----------------------- ----------- ------------- ------------------- ------- --------- --------- --------- ---------- -------------------------------------------------------------------------------- ------------- ----------- -----------------
SYS_C0011838 96428 96428 INDEX 10/2/2015 1 10/2/2015 10: 2015-10-02:10:40:29 VALID N Y N 4 NONE N
MLOG$_T1 96429 96429 TABLE 10/2/2015 1 10/2/2015 10: 2015-10-02:10:41:47 VALID N N N 1 NONE N
RUPD$_T1 96430 TABLE 10/2/2015 1 10/2/2015 10: 2015-10-02:10:41:47 VALID Y N N 1 NONE N
T1 96427 96427 TABLE 10/2/2015 1 10/2/2015 10: 2015-10-02:10:40:29 VALID N N N 1 NONE N
I_MLOG$_T1 96431 96431 INDEX 10/2/2015 1 10/2/2015 10: 2015-10-02:10:41:47 VALID N N N 4 NONE N
SQL> select * from user_segments;
SEGMENT_NAME PARTITION_NAME SEGMENT_TYPE SEGMENT_SUBTYPE TABLESPACE_NAME BYTES BLOCKS EXTENTS INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS MAX_SIZE RETENTION MINRETENTION PCT_INCREASE FREELISTS FREELIST_GROUPS BUFFER_POOL FLASH_CACHE CELL_FLASH_CACHE INMEMORY INMEMORY_PRIORITY INMEMORY_DISTRIBUTE INMEMORY_DUPLICATE INMEMORY_COMPRESSION
-------------------------------------------------------------------------------- -------------------------------------------------------------------------------- ------------------ --------------- ------------------------------ ---------- ---------- ---------- -------------- ----------- ----------- ----------- ---------- --------- ------------ ------------ ---------- --------------- ----------- ----------- ---------------- -------- ----------------- ------------------- ------------------ --------------------
I_MLOG$_T1 INDEX ASSM USERS 65536 8 1 65536 1048576 1 2147483645 2147483645 DEFAULT DEFAULT DEFAULT DISABLED
MLOG$_T1 TABLE ASSM T1 65536 8 1 65536 1048576 1 2147483645 2147483645 DEFAULT DEFAULT DEFAULT DISABLED
SYS_C0011838 INDEX ASSM T1 65536 8 1 65536 1048576 1 2147483645 2147483645 DEFAULT DEFAULT DEFAULT DISABLED
T1 TABLE ASSM T1 10682368 1304 13 65536 1048576 1 2147483645 2147483645 DEFAULT DEFAULT DEFAULT DISABLED
SQL> select MASTER, LOG_TABLE from USER_MVIEW_LOGS;
MASTER LOG_TABLE
-------------------------------------------------------------------------------- --------------------------------------------------------------------------------
T1 MLOG$_T1
SQL> select master, log, temp_log from sys.mlog$ where mowner = user and master = 'T1';
MASTER LOG TEMP_LOG
-------------------------------------------------------------------------------- -------------------------------------------------------------------------------- --------------------------------------------------------------------------------
T1 MLOG$_T1 RUPD$_T1
SQL> select table_name, INITIAL_EXTENT, next_extent from user_tables;
TABLE_NAME INITIAL_EXTENT NEXT_EXTENT
-------------------------------------------------------------------------------- -------------- -----------
T1 65536 1048576
MLOG$_T1 65536 1048576
RUPD$_T1
SQL> select segment_name,tablespace_name, file_id,block_id,blocks from dba_extents where segment_name = 'T1';
SEGMENT_NAME TABLESPACE_NAME FILE_ID BLOCK_ID BLOCKS
-------------------------------------------------------------------------------- ------------------------------ ---------- ---------- ----------
T1 SYSTEM 8 35304 8
T1 T1 41 136 8
T1 T1 41 152 8
T1 T1 41 160 8
T1 T1 41 256 128
T1 T1 41 384 128
T1 T1 41 512 128
T1 T1 41 640 128
T1 T1 41 768 128
T1 T1 41 896 128
T1 T1 41 1024 128
T1 T1 41 1152 128
T1 T1 41 1280 128
T1 T1 41 1408 128
14 rows selected
SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
SQL> select * from user_objects;
OBJECT_NAME SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE CREATED LAST_DDL_TIME TIMESTAMP STATUS TEMPORARY GENERATED SECONDARY NAMESPACE EDITION_NAME SHARING EDITIONABLE ORACLE_MAINTAINED
-------------------------------------------------------------------------------- -------------------------------------------------------------------------------- ---------- -------------- ----------------------- ----------- ------------- ------------------- ------- --------- --------- --------- ---------- -------------------------------------------------------------------------------- ------------- ----------- -----------------
SYS_C0011838 96428 96428 INDEX 10/2/2015 1 10/2/2015 10: 2015-10-02:10:40:29 VALID N Y N 4 NONE N
MLOG$_T1 96429 96429 TABLE 10/2/2015 1 10/2/2015 11: 2015-10-02:10:41:47 VALID N N N 1 NONE N
RUPD$_T1 96430 TABLE 10/2/2015 1 10/2/2015 10: 2015-10-02:10:41:47 VALID Y N N 1 NONE N
T1 96427 96427 TABLE 10/2/2015 1 10/2/2015 10: 2015-10-02:10:40:29 VALID N N N 1 NONE N
I_MLOG$_T1 96431 96431 INDEX 10/2/2015 1 10/2/2015 10: 2015-10-02:10:41:47 VALID N N N 4 NONE N
SQL> select * from user_segments;
SEGMENT_NAME PARTITION_NAME SEGMENT_TYPE SEGMENT_SUBTYPE TABLESPACE_NAME BYTES BLOCKS EXTENTS INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS MAX_SIZE RETENTION MINRETENTION PCT_INCREASE FREELISTS FREELIST_GROUPS BUFFER_POOL FLASH_CACHE CELL_FLASH_CACHE INMEMORY INMEMORY_PRIORITY INMEMORY_DISTRIBUTE INMEMORY_DUPLICATE INMEMORY_COMPRESSION
-------------------------------------------------------------------------------- -------------------------------------------------------------------------------- ------------------ --------------- ------------------------------ ---------- ---------- ---------- -------------- ----------- ----------- ----------- ---------- --------- ------------ ------------ ---------- --------------- ----------- ----------- ---------------- -------- ----------------- ------------------- ------------------ --------------------
I_MLOG$_T1 INDEX ASSM USERS 65536 8 1 65536 1048576 1 2147483645 2147483645 DEFAULT DEFAULT DEFAULT DISABLED
MLOG$_T1 TABLE ASSM T1 5308416 648 6 65536 1048576 1 2147483645 2147483645 DEFAULT DEFAULT DEFAULT DISABLED
SYS_C0011838 INDEX ASSM T1 65536 8 1 65536 1048576 1 2147483645 2147483645 DEFAULT DEFAULT DEFAULT DISABLED
T1 TABLE ASSM T1 10682368 1304 13 65536 1048576 1 2147483645 2147483645 DEFAULT DEFAULT DEFAULT DISABLED
SQL> select MASTER, LOG_TABLE from USER_MVIEW_LOGS;
MASTER LOG_TABLE
-------------------------------------------------------------------------------- --------------------------------------------------------------------------------
T1 MLOG$_T1
SQL> select master, log, temp_log from sys.mlog$ where mowner = user and master = 'T1';
MASTER LOG TEMP_LOG
-------------------------------------------------------------------------------- -------------------------------------------------------------------------------- --------------------------------------------------------------------------------
T1 MLOG$_T1 RUPD$_T1
SQL> select table_name, INITIAL_EXTENT, next_extent from user_tables;
TABLE_NAME INITIAL_EXTENT NEXT_EXTENT
-------------------------------------------------------------------------------- -------------- -----------
MLOG$_T1 65536 1048576
RUPD$_T1
T1 65536 1048576
SQL> select segment_name,tablespace_name, file_id,block_id,blocks from dba_extents where segment_name = 'T1';
SEGMENT_NAME TABLESPACE_NAME FILE_ID BLOCK_ID BLOCKS
-------------------------------------------------------------------------------- ------------------------------ ---------- ---------- ----------
T1 SYSTEM 8 35304 8
T1 T1 41 136 8
T1 T1 41 152 8
T1 T1 41 160 8
T1 T1 41 256 128
T1 T1 41 384 128
T1 T1 41 512 128
T1 T1 41 640 128
T1 T1 41 768 128
T1 T1 41 896 128
T1 T1 41 1024 128
T1 T1 41 1152 128
T1 T1 41 1280 128
T1 T1 41 1408 128
14 rows selected
SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
}}}
http://www.evernote.com/shard/s48/sh/dc5d356a-a38a-4c1d-9893-61c74f1a9115/24726d73619fffa493e7c9650900a9ad
http://progeeking.com/2013/10/28/exadata-io-statistics/
{{{
This is a short list of the commands:
I/O latency statistics:
alter cell events="immediate cellsrv.cellsrv_dump('iolstats',0)";
alter cell events="immediate cellsrv.cellsrv_resetstats('iolstats')";
I/O latency statistics – last 384 sec:
alter cell events="immediate cellsrv.cellsrv_dump('iolhiststats',0)";
I/O reason statistics:
alter cell events="immediate cellsrv.cellsrv_dump('ioreasons',0)";
alter cell events="immediate cellsrv.cellsrv_resetstats('ioreasons')";
Basic I/O statistics:
alter cell events="immediate cellsrv.cellsrv_dump('devio_stats',0)";
Predicate I/O statistics:
alter cell events="immediate cellsrv.cellsrv_dump('predicateio',0)";
}}}
''Ravi, request on more info - follow up: Exadata IORM profiles - non-12c databases''
{{{
cellcli -e 'alter cell events="immediate cellsrv.cellsrv_statedump(1,0)"'
}}}
http://home.cmit.net/rwolbeck/Reference/power/consumption.htm
http://serverfault.com/questions/348591/how-to-measure-server-power-consumption
http://serverfault.com/questions/62339/how-can-i-calculate-a-servers-amperage
http://lists.centos.org/pipermail/centos/2006-August/025327.html
http://www.webhostingtalk.com/archive/index.php/t-575735.html
http://askubuntu.com/questions/13337/how-do-i-measure-server-power-consumption
http://lonesysadmin.net/2010/05/13/power-consumption-of-a-dell-poweredge-r10/
http://towardex.com/inf_pwr_understand.html
http://www.kdnuggets.com/2016/08/anomaly-detection-periodic-big-data-streams.html
https://github.com/VividCortex/ewma
https://www.linkedin.com/in/preetamjinka/
https://www.oreilly.com/library/view/anomaly-detection-for/9781492042341/
Methods and apparatus for fault detection https://patents.google.com/patent/US20170302506A1/en
https://patents.google.com/patent/US9921900B1/en?inventor=Baron+Schwartz
https://patentimages.storage.googleapis.com/12/46/e8/ca5feddca5fc55/US20170302506A1.pdf
http://highscalability.com/blog/2015/3/30/how-we-scale-vividcortexs-backend-systems.html
How VividCortex's MySQL-Based Time Series Database Works https://www.youtube.com/watch?v=7WRwB7yNmv8
Creating a Time-Series Database in MySQL https://www.youtube.com/watch?v=ldyKJL_IVl8
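The links above all circle around EWMA-based anomaly detection on metric streams (the VividCortex approach). As a rough illustration of the core idea - not code from any of the sources above - here is a minimal Python sketch that smooths the series with an exponentially weighted moving average, tracks a smoothed deviation, and flags points outside mean +/- k*deviation; alpha, k, and the warmup length are made-up illustrative values.
{{{
# Minimal EWMA anomaly detection sketch (illustrative values only)
def ewma_anomalies(series, alpha=0.3, k=3.0, warmup=5):
    """Yield (index, value, is_anomaly) for each point in series."""
    mean = None
    dev = 0.0
    for i, x in enumerate(series):
        if mean is None:                      # seed with first observation
            mean = x
            yield i, x, False
            continue
        # flag only after the warmup window, once dev has settled
        is_anomaly = i >= warmup and dev > 0 and abs(x - mean) > k * dev
        # update smoothed mean and smoothed absolute deviation
        # (design choice: anomalous points still update the state)
        mean = alpha * x + (1 - alpha) * mean
        dev = alpha * abs(x - mean) + (1 - alpha) * dev
        yield i, x, is_anomaly

if __name__ == "__main__":
    data = [10, 11, 10, 12, 11, 10, 50, 11, 10]   # 50 is the outlier
    for i, x, flag in ewma_anomalies(data):
        print(i, x, "ANOMALY" if flag else "")
}}}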
https://www.torproject.org/about/overview.html.en
https://lockbin.com/
btguard
http://www.nytimes.com/2012/11/17/technology/trying-to-keep-your-e-mails-secret-when-the-cia-chief-couldnt.html?_r=0
https://www.talend.com/blog/2018/04/23/how-to-develop-a-data-processing-job-using-apache-beam/
https://beam.apache.org/documentation/runners/capability-matrix/
https://blog.usejournal.com/why-are-you-still-doing-batch-processing-etl-is-dead-3dac2392a772
https://github.com/apache/beam/tree/master/sdks/python/apache_beam/coders
https://cloud.google.com/dataflow/docs/
https://cloud.google.com/pubsub/
https://kafka.apache.org
! installation
{{{
laptophostname:apache_beam kristofferson.a.arao$ pyenv virtualenvs
2.7.9/envs/py279 (created from /Users/kristofferson.a.arao/.pyenv/versions/2.7.9)
3.7.0/envs/flask (created from /Users/kristofferson.a.arao/.pyenv/versions/3.7.0)
3.7.0/envs/py370 (created from /Users/kristofferson.a.arao/.pyenv/versions/3.7.0)
3.7.2/envs/py372 (created from /Users/kristofferson.a.arao/.pyenv/versions/3.7.2/Python.framework/Versions/3.7)
flask (created from /Users/kristofferson.a.arao/.pyenv/versions/3.7.0)
py279 (created from /Users/kristofferson.a.arao/.pyenv/versions/2.7.9)
py370 (created from /Users/kristofferson.a.arao/.pyenv/versions/3.7.0)
py372 (created from /Users/kristofferson.a.arao/.pyenv/versions/3.7.2/Python.framework/Versions/3.7)
laptophostname:apache_beam kristofferson.a.arao$ pyenv activate py370
pyenv-virtualenv: prompt changing will be removed from future release. configure `export PYENV_VIRTUALENV_DISABLE_PROMPT=1' to simulate the behavior.
(py370) laptophostname:apache_beam kristofferson.a.arao$
(py370) laptophostname:apache_beam kristofferson.a.arao$
(py370) laptophostname:apache_beam kristofferson.a.arao$ pip install apache_beam
Collecting apache_beam
Downloading https://files.pythonhosted.org/packages/4f/af/cca5464976d508db5ed3673ffdefed2b4a5d4c8c2e472d34642355634c2b/apache_beam-2.16.0-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (4.3MB)
100% |████████████████████████████████| 4.3MB 4.8MB/s
Collecting mock<3.0.0,>=1.0.1 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/e6/35/f187bdf23be87092bd0f1200d43d23076cee4d0dec109f195173fd3ebc79/mock-2.0.0-py2.py3-none-any.whl (56kB)
100% |████████████████████████████████| 61kB 10.6MB/s
Collecting pymongo<4.0.0,>=3.8.0 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/a3/8c/ec46f4aa95515989711a7893e64c30f9d33c58eaccc01f8f37c4513739a2/pymongo-3.9.0-cp37-cp37m-macosx_10_6_intel.whl (378kB)
100% |████████████████████████████████| 378kB 1.7MB/s
Collecting pyyaml<4.0.0,>=3.12 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/9e/a3/1d13970c3f36777c583f136c136f804d70f500168edc1edea6daa7200769/PyYAML-3.13.tar.gz (270kB)
100% |████████████████████████████████| 276kB 9.9MB/s
Requirement already satisfied: python-dateutil<3,>=2.8.0 in /Users/kristofferson.a.arao/.pyenv/versions/3.7.0/envs/py370/lib/python3.7/site-packages (from apache_beam) (2.8.0)
Requirement already satisfied: pytz>=2018.3 in /Users/kristofferson.a.arao/.pyenv/versions/3.7.0/envs/py370/lib/python3.7/site-packages (from apache_beam) (2018.9)
Collecting avro-python3<2.0.0,>=1.8.1; python_version >= "3.0" (from apache_beam)
Downloading https://files.pythonhosted.org/packages/76/b2/98a736a31213d3e281a62bcae5572cf297d2546bc429accf36f9ee1604bf/avro-python3-1.9.1.tar.gz
Collecting oauth2client<4,>=2.0.1 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/c0/7b/bc893e35d6ca46a72faa4b9eaac25c687ce60e1fbe978993fe2de1b0ff0d/oauth2client-3.0.0.tar.gz (77kB)
100% |████████████████████████████████| 81kB 11.9MB/s
Collecting httplib2<=0.12.0,>=0.8 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/ce/ed/803905d670b52fa0edfdd135337e545b4496c2ab3a222f1449b7256eb99f/httplib2-0.12.0.tar.gz (218kB)
100% |████████████████████████████████| 225kB 137kB/s
Collecting pydot<2,>=1.2.0 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/33/d1/b1479a770f66d962f545c2101630ce1d5592d90cb4f083d38862e93d16d2/pydot-1.4.1-py2.py3-none-any.whl
Collecting dill<0.3.1,>=0.3.0 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/39/7a/70803635c850e351257029089d38748516a280864c97cbc73087afef6d51/dill-0.3.0.tar.gz (151kB)
100% |████████████████████████████████| 153kB 20.2MB/s
Collecting hdfs<3.0.0,>=2.1.0 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/82/39/2c0879b1bcfd1f6ad078eb210d09dbce21072386a3997074ee91e60ddc5a/hdfs-2.5.8.tar.gz (41kB)
100% |████████████████████████████████| 51kB 11.1MB/s
Collecting future<1.0.0,>=0.16.0 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/45/0b/38b06fd9b92dc2b68d58b75f900e97884c45bedd2ff83203d933cf5851c9/future-0.18.2.tar.gz (829kB)
100% |████████████████████████████████| 829kB 1.8MB/s
Collecting protobuf<4,>=3.5.0.post1 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/a5/c6/a8b6a74ab1e165f0aaa673a46f5c895af8780976880c98934ae82060356d/protobuf-3.10.0-cp37-cp37m-macosx_10_9_intel.whl (1.4MB)
100% |████████████████████████████████| 1.4MB 6.7MB/s
Collecting crcmod<2.0,>=1.7 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/6b/b0/e595ce2a2527e169c3bcd6c33d2473c1918e0b7f6826a043ca1245dd4e5b/crcmod-1.7.tar.gz (89kB)
100% |████████████████████████████████| 92kB 9.0MB/s
Collecting pyarrow<0.15.0,>=0.11.1; python_version >= "3.0" or platform_system != "Windows" (from apache_beam)
Downloading https://files.pythonhosted.org/packages/0c/fa/baea2c0713b4edf5abfcbd676c3d54e43edde0e62af273352660ef3368ae/pyarrow-0.14.1-cp37-cp37m-macosx_10_6_intel.whl (34.4MB)
100% |████████████████████████████████| 34.4MB 228kB/s
Collecting grpcio<2,>=1.12.1 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/c1/22/c462571c795d7071847f1046fbdb43c03ec1b358948c2c83ff56691e5a32/grpcio-1.24.3-cp37-cp37m-macosx_10_9_x86_64.whl (2.1MB)
100% |████████████████████████████████| 2.1MB 411kB/s
Collecting fastavro<0.22,>=0.21.4 (from apache_beam)
Downloading https://files.pythonhosted.org/packages/7b/ed/addc70938c732420dc2808e48e6965091c2944bcd999f6606f92ce084209/fastavro-0.21.24-cp37-cp37m-macosx_10_13_x86_64.whl (379kB)
100% |████████████████████████████████| 389kB 535kB/s
Collecting pbr>=0.11 (from mock<3.0.0,>=1.0.1->apache_beam)
Downloading https://files.pythonhosted.org/packages/46/a4/d5c83831a3452713e4b4f126149bc4fbda170f7cb16a86a00ce57ce0e9ad/pbr-5.4.3-py2.py3-none-any.whl (110kB)
100% |████████████████████████████████| 112kB 370kB/s
Requirement already satisfied: six>=1.9 in /Users/kristofferson.a.arao/.pyenv/versions/3.7.0/envs/py370/lib/python3.7/site-packages (from mock<3.0.0,>=1.0.1->apache_beam) (1.12.0)
Collecting pyasn1>=0.1.7 (from oauth2client<4,>=2.0.1->apache_beam)
Downloading https://files.pythonhosted.org/packages/a1/71/8f0d444e3a74e5640a3d5d967c1c6b015da9c655f35b2d308a55d907a517/pyasn1-0.4.7-py2.py3-none-any.whl (76kB)
100% |████████████████████████████████| 81kB 493kB/s
Collecting pyasn1-modules>=0.0.5 (from oauth2client<4,>=2.0.1->apache_beam)
Downloading https://files.pythonhosted.org/packages/52/50/bb4cefca37da63a0c52218ba2cb1b1c36110d84dcbae8aa48cd67c5e95c2/pyasn1_modules-0.2.7-py2.py3-none-any.whl (131kB)
100% |████████████████████████████████| 133kB 558kB/s
Collecting rsa>=3.1.4 (from oauth2client<4,>=2.0.1->apache_beam)
Downloading https://files.pythonhosted.org/packages/02/e5/38518af393f7c214357079ce67a317307936896e961e35450b70fad2a9cf/rsa-4.0-py2.py3-none-any.whl
Requirement already satisfied: pyparsing>=2.1.4 in /Users/kristofferson.a.arao/.pyenv/versions/3.7.0/envs/py370/lib/python3.7/site-packages (from pydot<2,>=1.2.0->apache_beam) (2.3.1)
Collecting docopt (from hdfs<3.0.0,>=2.1.0->apache_beam)
Using cached https://files.pythonhosted.org/packages/a2/55/8f8cab2afd404cf578136ef2cc5dfb50baa1761b68c9da1fb1e4eed343c9/docopt-0.6.2.tar.gz
Requirement already satisfied: requests>=2.7.0 in /Users/kristofferson.a.arao/.pyenv/versions/3.7.0/envs/py370/lib/python3.7/site-packages (from hdfs<3.0.0,>=2.1.0->apache_beam) (2.22.0)
Requirement already satisfied: setuptools in /Users/kristofferson.a.arao/.pyenv/versions/3.7.0/envs/py370/lib/python3.7/site-packages (from protobuf<4,>=3.5.0.post1->apache_beam) (39.0.1)
Requirement already satisfied: numpy>=1.14 in /Users/kristofferson.a.arao/.pyenv/versions/3.7.0/envs/py370/lib/python3.7/site-packages (from pyarrow<0.15.0,>=0.11.1; python_version >= "3.0" or platform_system != "Windows"->apache_beam) (1.16.1)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /Users/kristofferson.a.arao/.pyenv/versions/3.7.0/envs/py370/lib/python3.7/site-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache_beam) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in /Users/kristofferson.a.arao/.pyenv/versions/3.7.0/envs/py370/lib/python3.7/site-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache_beam) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /Users/kristofferson.a.arao/.pyenv/versions/3.7.0/envs/py370/lib/python3.7/site-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache_beam) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /Users/kristofferson.a.arao/.pyenv/versions/3.7.0/envs/py370/lib/python3.7/site-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache_beam) (2019.3.9)
Installing collected packages: pbr, mock, pymongo, pyyaml, avro-python3, httplib2, pyasn1, pyasn1-modules, rsa, oauth2client, pydot, dill, docopt, hdfs, future, protobuf, crcmod, pyarrow, grpcio, fastavro, apache-beam
Running setup.py install for pyyaml ... done
Running setup.py install for avro-python3 ... done
Running setup.py install for httplib2 ... done
Running setup.py install for oauth2client ... done
Running setup.py install for dill ... done
Running setup.py install for docopt ... done
Running setup.py install for hdfs ... done
Running setup.py install for future ... done
Running setup.py install for crcmod ... done
Successfully installed apache-beam-2.16.0 avro-python3-1.9.1 crcmod-1.7 dill-0.3.0 docopt-0.6.2 fastavro-0.21.24 future-0.18.2 grpcio-1.24.3 hdfs-2.5.8 httplib2-0.12.0 mock-2.0.0 oauth2client-3.0.0 pbr-5.4.3 protobuf-3.10.0 pyarrow-0.14.1 pyasn1-0.4.7 pyasn1-modules-0.2.7 pydot-1.4.1 pymongo-3.9.0 pyyaml-3.13 rsa-4.0
You are using pip version 19.0.3, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
(py370) laptophostname:apache_beam kristofferson.a.arao$
(py370) laptophostname:apache_beam kristofferson.a.arao$
(py370) laptophostname:apache_beam kristofferson.a.arao$ pyenv rehash
}}}
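After the install, a quick smoke test on the local DirectRunner confirms the SDK imports and runs. This is just a made-up in-memory word count, not an example lifted from the Beam docs:
{{{
# Minimal Apache Beam smoke test on the local DirectRunner.
# The in-memory input strings are made up for illustration.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create(["red green", "green blue", "red red"])
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "PairWithOne" >> beam.Map(lambda w: (w, 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
}}}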
https://druid.apache.org/
https://druid.apache.org/docs/latest/tutorials/tutorial-batch.html
https://imply.io/
! part 1-3
https://hortonworks.com/blog/apache-hive-druid-part-1-3/
https://hortonworks.com/blog/sub-second-analytics-hive-druid/
https://hortonworks.com/blog/connect-tableau-druid-hive/
https://www.quora.com/What-are-the-differences-between-Druid-and-AWS-Redshift
<<showtoc>>
https://medium.com/@stephane.maarek/how-to-use-apache-kafka-to-transform-a-batch-pipeline-into-a-real-time-one-831b48a6ad85
https://talks.rmoff.net/ , https://rmoff.net/
https://www.kafka-tutorials.com/
https://medium.com/@stephane.maarek/how-can-i-learn-confluent-kafka-fb6a453defc
! prereq reading
https://www.confluent.io/blog/event-streaming-platform-1 <- jay kreps
https://www.confluent.io/blog/event-streaming-platform-2
https://www.confluent.io/blog/the-future-of-etl-isnt-what-it-used-to-be/# <- gwen shapira
https://www.confluent.io/blog/using-logs-to-build-a-solid-data-infrastructure-or-why-dual-writes-are-a-bad-idea/ <- Martin Kleppmann
! architecture
[img(70%,70%)[ https://i.imgur.com/tTZ49F7.png]]
! kafka summit talks
https://www.confluent.io/resources/kafka-summit-san-francisco-2019/
! kafka courses
* https://www.udemy.com/course/apache-kafka/
* https://app.pluralsight.com/library/courses/apache-kafka-getting-started/table-of-contents
! managed kafka (on cloud)
* https://www.heroku.com/kafka
! golden gate CDC
check [[golden gate for big data]]
! Kafka messaging system on Informatica Intelligent Streaming
https://network.informatica.com/thread/23651
! Kafka Connect vs StreamSets
https://www.linkedin.com/pulse/kafka-connect-vs-streamsets-advantages-disadvantages-slim-baltagi/
.
https://jack-vanlightly.com/blog/2018/10/2/understanding-how-apache-pulsar-works
https://www.google.com/search?q=apache+sentry+vs+ranger+vs+knox&oq=apache+sentry+vs&aqs=chrome.2.69i57j0l4.4925j0j1&sourceid=chrome&ie=UTF-8
https://www.xplenty.com/blog/5-hadoop-security-projects/
https://community.hortonworks.com/questions/72341/whats-the-difference-between-ranger-and-knox.html
<<<
Keeping the jargon aside -
''Ranger'' is used for deciding who can access what resources on a Hadoop cluster with the help of policies (there is more to this, but that is the most basic description).
''Knox'' can be imagined as the gatekeeper which decides whether to allow a user access to the Hadoop cluster or not.
More complete definitions:
Ranger is an authorization system which allows / denies access to Hadoop cluster resources (HDFS files, Hive tables, etc.) based on pre-defined Ranger policies. When a user request comes to Ranger, it is assumed to be authenticated already.
Knox is a REST API based perimeter security gateway which authenticates user credentials (mostly against AD/LDAP). Only successfully authenticated users are allowed access to the Hadoop cluster. Knox also provides a layer of abstraction over the underlying Hadoop services, i.e. all endpoints are accessed via the Knox gateway URL.
<<<
https://stackoverflow.com/questions/39326456/how-to-choose-between-apache-ranger-and-sentry
https://community.hortonworks.com/questions/105721/is-ranger-is-the-right-approach-for-hive-securityw.html
http://ranger.apache.org/faq.html#How_does_Apache_Ranger_authorization_compare_to_SQL_standard_authorization
<<<
You can use Sentry or Ranger depending on which Hadoop distribution you are using, Cloudera or Hortonworks.
''Apache Sentry'' - Owned by Cloudera. Supports HDFS, Hive, Solr and Impala. (Ranger will not support Impala)
''Apache Ranger'' - Owned by Hortonworks. Apache Ranger offers a centralized security framework to manage fine-grained access control across: HDFS, Hive, HBase, Storm, Knox, Solr, Kafka, and YARN
Apache Ranger overlaps with Apache Sentry since it also deals with authorization and permissions. It adds an authorization layer to Hive, HBase, and Knox. Both Sentry and Ranger support column-level permissions in Hive (starting from the 1.5 release).
<<<
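Since Ranger is policy-driven, policies can also be managed programmatically rather than only through the Admin UI. Below is a rough Python sketch against Ranger's public v2 REST API; the host, credentials, service name, and the policy payload values are all hypothetical placeholders, so treat this as a shape to verify against your Ranger version's docs, not a definitive call.
{{{
# Sketch: create an HDFS path policy via Ranger's public v2 REST API.
# Host, credentials, service name, and payload values are hypothetical.
import requests

RANGER = "http://ranger-admin.example.com:6080"   # hypothetical host

policy = {
    "service": "cluster1_hadoop",                 # hypothetical Ranger service
    "name": "etl_staging_read",
    "resources": {
        "path": {"values": ["/data/staging"], "isRecursive": True}
    },
    "policyItems": [
        {
            "users": ["etl_user"],
            "accesses": [{"type": "read", "isAllowed": True},
                         {"type": "execute", "isAllowed": True}],
        }
    ],
}

resp = requests.post(
    f"{RANGER}/service/public/v2/api/policy",
    json=policy,
    auth=("admin", "admin-password"),             # hypothetical credentials
)
resp.raise_for_status()
print("created policy id:", resp.json().get("id"))
}}}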
! howto configure
ranger, knox https://www.youtube.com/channel/UCakdSIPsJqiOLqylgoYmwQg/search?query=ranger
Hadoop Day To Day Operations - Manage Ambari - Groups, users and permissions https://www.youtube.com/watch?v=v3LiS2tJXE4
! other references
Securing Hadoop with Apache Ranger Strategies & Best Practices https://www.youtube.com/watch?v=uCZKrKo5ebQ
30. Hadoop Administration Tutorial - LDAP Server Integration https://www.youtube.com/watch?v=RJSUhbr9mZM , https://www.youtube.com/watch?v=VDAh-YzUMm4&list=PLRt-r4QiDOMfq7d9llYvXdkjS8SQ_DB6C
HDP-2.6.3 Ranger User And Group Sync With Active Directory https://www.youtube.com/watch?v=2aZ9GBhCOhA
Ambari with external authentication ( LDAP ) https://www.youtube.com/watch?v=vB1SN0LBicE
Integrating OpenLDAP with Hadoop servers https://www.youtube.com/watch?v=pnweuITIlP4
the LDAP connection check tool to discover Ambari properties https://www.youtube.com/watch?v=vJPWDfsrJek
Integrating Ambari with Active Directory https://www.youtube.com/watch?v=YCOZLAMxZ-I
HDP-2.6.3 Integrating Ambari with Active Directory for External Authentication https://www.youtube.com/watch?v=6DMEpXXw3J8
adduser in hdfs (part of LDAP + HDP + Kerberos series ) https://www.youtube.com/watch?v=emcCTxbziRs
mit kerberos on HDP with OpenLDAP HortonWorks Big Data Hadoop https://www.youtube.com/watch?v=rM6q6IjXql8
Ambari with external authentication ( LDAP ) https://www.youtube.com/watch?v=vB1SN0LBicE&list=PLxxNActWNRJhacZNMIZ9c7hbDgz_Hd7Ql
hadoop security playlist https://www.youtube.com/watch?v=tJeLOVaVqjk&list=PLY-V_O-O7h4fHTSxNCipvqfOvFCa-6f07
hdp security https://www.youtube.com/watch?v=2Ya4jSLvX-o&list=PLFvhu6WZOvmnSPQXcC195vEX3ZxRorAyu
https://www.quora.com/Why-doesnt-Apache-Spark-support-Node-js-and-provide-the-API-for-Node-js
! on learning spark
<<<
OK developer friends, after years on the DB production support and architecture side of things, I'm going to be adding some coding back into my job role - mostly Python (kafka/spark type streaming and data transformation). I've spent a lot of time the past few days on GitHub, stackoverflow, and Apache docs. My question to you is, can you share some good resources for helpful coding tips & creating efficient code? I know I can get things done, but I think I'm being pretty brute force. I'm focusing on Python for now, but should I also get familiar with Scala?
<<<
<<<
for scala/spark this will get you started: https://www.udemy.com/aws-emr-and-spark-2-using-scala ; for pyspark: https://www.datacamp.com/courses/introduction-to-pyspark ; to see how all these play end-to-end (backend to UI), check the book https://github.com/rjurney/Agile_Data_Code_2
some companies also like using a GUI to orchestrate the flow of data (ELT/ETL/data integration). using this kind of tool you can call kafka, spark jobs, hive/impala SQLs, python code, gluent offload commands, and others. these are companies, or groups within a company, that want to get their data out to hadoop/the data lake without much coding. all they need is an ETL developer with a little bit of python or spark background and that would be enough to get things going. talend is one of them https://www.youtube.com/watch?v=WHo7QmOupVE (pentaho, tibco, and others)
<<<
.
https://zeppelin.apache.org/
Introducing Oracle Machine Learning A Collaborative Zeppelin notebook for Oracle’s machine learning capabilities http://www.biwasummit.org/biwa/wp-content/uploads/2016/11/832253-Introducing-Oracle-Machine-Learning-v7_BIWA.pdf
http://www.unix.com/shell-programming-and-scripting/59311-appending-string-file-first-column.html
{{{
awk '{ print "File1,"$0 }' Source_File > Result_File
}}}
http://unix.stackexchange.com/questions/117568/adding-a-column-of-values-in-a-tab-delimited-file
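For the same trick in Python (when awk isn't handy), a tiny sketch - the file names and the "File1" tag are placeholders mirroring the awk one-liner above:
{{{
# Prepend a constant first column to every line of a file,
# equivalent to: awk '{ print "File1,"$0 }' Source_File > Result_File
# (file names and the "File1" tag are placeholders)
with open("Source_File") as src, open("Result_File", "w") as dst:
    for line in src:
        dst.write("File1," + line)
}}}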
<<showtoc>>
! good demos
http://www.dominicgiles.com/blog/files/8278e1395e583ab4ba63429cfcf609bb-138.html , youtube video https://www.youtube.com/watch?v=P9bY7ZjZ-VA
RAC and app cont. https://www.youtube.com/watch?v=0J3eC7AZ4Gg
App cont. https://www.youtube.com/watch?v=Y9_cxR6pEWY
https://bdrouvot.wordpress.com/2015/05/07/12c-application-continuity-with-data-guard-in-action/
! how it works
https://asktom.oracle.com/pls/apex/asktom.search?tag=query-on-application-continuity
https://www.oracle.com/technetwork/database/application-development/applicationcontinuityforjava-1965585.pdf
https://www.youtube.com/watch?v=Y9_cxR6pEWY
https://www.youtube.com/watch?v=0J3eC7AZ4Gg
https://www.youtube.com/watch?v=P9bY7ZjZ-VA
http://nocoug.org/download/2018-02/Colrain_Transparent_High_Availability.pdf
https://www.google.com/search?q=oracle+application+continuity&oq=oracle+app&aqs=chrome.0.69i59j69i60j0j69i65l3.1394j0j1&sourceid=chrome&ie=UTF-8
https://bdrouvot.wordpress.com/2015/05/07/12c-application-continuity-with-data-guard-in-action/
https://oracle-base.com/articles/12c/sqlplus-enhancements-12c
https://blogs.oracle.com/imc/application-continuity-and-transaction-guard:-standing-guard-over-your-business-transactions
https://docs.oracle.com/database/121/RACAD/GUID-04472116-5B0F-45D3-83BC-7A029C1E61C4.htm#RACAD8428
http://www.dominicgiles.com/blog/files/8278e1395e583ab4ba63429cfcf609bb-138.html
! 2020
https://bdrouvot.wordpress.com/2015/05/07/12c-application-continuity-with-data-guard-in-action/
https://laurent-leturgez.com/2015/06/01/oracle-12c-application-continuity-and-its-resources-usage/
https://karlarao.github.io/karlaraowiki/index.html#%5B%5Bconfigure%20weblogic%20on%20RAC%20%2C%20active%20gridlink%5D%5D
http://nocoug.org/download/2018-02/Colrain_Transparent_High_Availability.pdf
Workload Management, Connection Management, and Application Continuity https://learning.oreilly.com/library/view/oracle-database-12c/9780071830478/ch13.xhtml
! sample codes
!! OFFICIAL DOC - Application Continuity for Java
https://docs.oracle.com/en/database/oracle/oracle-database/20/jjdbc/application-continuity.html#GUID-9298BB8C-0E93-44BF-BA26-A615940C0C84
!! MOS NOTE 1602233.1
How To Test Application Continuity Using A Standalone Java Program (Doc ID 1602233.1)
.
https://grumpygrace.dev/posts/gcp-flowcharts/
data pipelines pocket reference
https://learning.oreilly.com/library/view/data-pipelines-pocket/9781492087823/
data engineer roadmap
https://github.com/datastacktv/data-engineer-roadmap
https://en.support.wordpress.com/archives-shortcode/
https://dailypost.wordpress.com/2013/02/06/creating-an-index-on-wordpress-com/
https://en.support.wordpress.com/category-pages/
<<showtoc>>
Troubleshooting Database Contention With V$Wait_Chains (Doc ID 1428210.1)
http://blog.tanelpoder.com/2013/09/11/advanced-oracle-troubleshooting-guide-part-11-complex-wait-chain-signature-analysis-with-ash_wait_chains-sql/
http://blog.dbi-services.com/oracle-is-hanging-dont-forget-hanganalyze-and-systemstate/
! troubleshooting SYS:(GTXn) DFS lock handle
ASH wait chain saves the day again! ;) http://goo.gl/7QCSyW @TanelPoder XA config issue.. rdbms ipc reply -> SYS:(GTXn) DFS lock handle
https://twitter.com/karlarao/status/477168595223326721
{{{
@<the script> username||':'||program2||event2 session_type='FOREGROUND' "TIMESTAMP'2014-11-19 17:00:00'" "TIMESTAMP'2014-11-19 18:00:00'"
}}}
! troubleshooting buffer pins were preempted by RM (concurrency + CPU starvation)
I've experienced a scenario where buffer pins were preempted by RM (concurrency + CPU starvation), see more details here http://bit.ly/1xuqbfM or this evernote link if the bitly URL doesn't work https://www.evernote.com/l/ADBZSq7DnIpCsLIloybj-87jZ-q0LxHo6uA
{{{
22:18:35 SYS@pib01scp4> @ash_wait_chains program2||event2||sql_id "event='buffer busy waits'" sysdate-1/24/60 sysdate
%This SECONDS
------ ----------
WAIT_CHAIN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
72% 2085
-> (JDBC Thin Client) buffer busy waits [segment header] fv4fs01rvgdju -> (JDBC Thin Client) resmgr:cpu quantum fv4fs01rvgdju
}}}
! troubleshooting library cache: mutex X
{{{
@<the script> username||':'||program2||event2||top_level_call_name||sql_opname||p1text||p1 session_type='FOREGROUND' "TIMESTAMP'2014-11-19 13:00:00'" "TIMESTAMP'2014-11-19 14:00:00'"
SELECT * FROM v$db_object_cache WHERE hash_value = 3309402135;
select * from (
select case when (kglhdadr = kglhdpar) then 'Parent' else 'Child '||kglobt09 end cursor,
kglhdadr ADDRESS, substr(kglnaobj,1,20) NAME, kglnahsh HASH_VALUE, kglobtyd TYPE,
kglobt23 LOCKED_TOTAL, kglobt24 PINNED_TOTAL,kglhdexc EXECUTIONS, kglhdnsp NAMESPACE
from x$kglob -- where kglobtyd != 'CURSOR'
order by kglobt24 desc)
where rownum <= 20;
}}}
see more here [[library cache: mutex X]]
! SCRIPT: Troubleshooting Database Contention With V$Wait_Chains (Doc ID 1428210.1)
{{{
set pages 1000
set lines 120
set heading off
column w_proc format a50 tru
column instance format a20 tru
column inst format a28 tru
column wait_event format a50 tru
column p1 format a16 tru
column p2 format a16 tru
column p3 format a15 tru
column Seconds format a50 tru
column sincelw format a50 tru
column blocker_proc format a50 tru
column fblocker_proc format a50 tru
column waiters format a50 tru
column chain_signature format a100 wra
column blocker_chain format a100 wra
SELECT *
FROM (SELECT 'Current Process: '||osid W_PROC, 'SID '||i.instance_name INSTANCE,
'INST #: '||instance INST,'Blocking Process: '||decode(blocker_osid,null,'<none>',blocker_osid)||
' from Instance '||blocker_instance BLOCKER_PROC,
'Number of waiters: '||num_waiters waiters,
'Final Blocking Process: '||decode(p.spid,null,'<none>',
p.spid)||' from Instance '||s.final_blocking_instance FBLOCKER_PROC,
'Program: '||p.program image,
'Wait Event: ' ||wait_event_text wait_event, 'P1: '||wc.p1 p1, 'P2: '||wc.p2 p2, 'P3: '||wc.p3 p3,
'Seconds in Wait: '||in_wait_secs Seconds, 'Seconds Since Last Wait: '||time_since_last_wait_secs sincelw,
'Wait Chain: '||chain_id ||': '||chain_signature chain_signature,'Blocking Wait Chain: '||decode(blocker_chain_id,null,
'<none>',blocker_chain_id) blocker_chain
FROM v$wait_chains wc,
gv$session s,
gv$session bs,
gv$instance i,
gv$process p
WHERE wc.instance = i.instance_number (+)
AND (wc.instance = s.inst_id (+) and wc.sid = s.sid (+)
and wc.sess_serial# = s.serial# (+))
AND (s.final_blocking_instance = bs.inst_id (+) and s.final_blocking_session = bs.sid (+))
AND (bs.inst_id = p.inst_id (+) and bs.paddr = p.addr (+))
AND ( num_waiters > 0
OR ( blocker_osid IS NOT NULL
AND in_wait_secs > 10 ) )
ORDER BY chain_id,
num_waiters DESC)
WHERE ROWNUM < 101;
}}}
! tm, tx lock chain
{{{
column "wait event" format a50 word_wrap
column "session" format a25
column "minutes" format 9999D9
column CHAIN_ID noprint
column N noprint
column l noprint
with w as (
select
chain_id,rownum n,level l
,lpad(' ',level,' ')||(select instance_name from gv$instance where inst_id=w.instance)||' '''||w.sid||','||w.sess_serial#||'@'||w.instance||'''' "session"
,lpad(' ',level,' ')||w.wait_event_text ||
case
when w.wait_event_text like 'enq: TM%' then
' mode '||decode(w.p1 ,1414332418,'Row-S' ,1414332419,'Row-X' ,1414332420,'Share' ,1414332421,'Share RX' ,1414332422,'eXclusive')
||( select ' on '||object_type||' "'||owner||'"."'||object_name||'" ' from all_objects where object_id=w.p2 )
when w.wait_event_text like 'enq: TX%' then
(
select ' on '||object_type||' "'||owner||'"."'||object_name||'" on rowid '
||dbms_rowid.rowid_create(1,data_object_id,relative_fno,w.row_wait_block#,w.row_wait_row#)
from all_objects ,dba_data_files where object_id=w.row_wait_obj# and w.row_wait_file#=file_id
)
end "wait event"
, w.in_wait_secs/60 "minutes"
, s.username , s.program
from v$wait_chains w join gv$session s on (s.sid=w.sid and s.serial#=w.sess_serial# and s.inst_id=w.instance)
connect by prior w.sid=w.blocker_sid and prior w.sess_serial#=w.blocker_sess_serial# and prior w.instance = w.blocker_instance
start with w.blocker_sid is null
)
select * from w where chain_id in (select chain_id from w group by chain_id having max("minutes") >= 1 and max(l)>1 )
order by n
/
}}}
<<showtoc>>
! manual way
https://github.com/karlarao/pull_dump_and_explore_ash
! ashdumpseconds
<<<
nice! yes this is a good alternative. a few points: 1) ashdumpseconds pulls from the ASH buffer (so it's memory, meaning much shorter retention on very busy systems vs dba_hist, which is way longer) 2) on RAC you need to do the dump per node and sqlldr each output file
<<<
http://www.juliandyke.com/Diagnostics/Dumps/ASHDUMP.php
http://www.hhutzler.de/blog/create-and-analyze-ashdumps/
http://blog.dbi-services.com/oracle-is-hanging-dont-forget-hanganalyze-and-systemstate/
http://mfzahirdba.blogspot.com/2011/12/dump-ash-data.html
{{{
sqlplus / as sysdba
oradebug setmypid
oradebug unlimit
oradebug dump ashdumpseconds 30
}}}
https://bdrouvot.wordpress.com/2013/10/04/asm-metrics-are-a-gold-mine-welcome-to-asm_metrics-pl-a-new-utility-to-extract-and-to-manipulate-them-in-real-time/
{{{
. oraenv
+ASM1
./asm_metrics.pl -show=dbinst -display=snap,avg -dg=DATAQA -interval=5 -sort_field=iops
}}}
also see for real world example
[[rman active duplicate performance]]
{{{
#!/bin/bash
#
# du of each subdirectory in a directory for ASM
#
D=$1
if [[ -z $D ]]
then
echo "Please provide a directory !"
exit 1
fi
(for DIR in `asmcmd ls ${D}`
do
echo ${DIR} `asmcmd du ${D}/${DIR} | tail -1`
done) | awk -v D="$D" ' BEGIN { printf("\n\t\t%40s\n\n", D " subdirectories size") ;
printf("%25s%16s%16s\n", "Subdir", "Used MB", "Mirror MB") ;
printf("%25s%16s%16s\n", "------", "-------", "---------") ;}
{
printf("%25s%16s%16s\n", $1, $2, $3) ;
use += $2 ;
mir += $3 ;
}
END { printf("\n\n%25s%16s%16s\n", "------", "-------", "---------");
printf("%25s%16s%16s\n\n", "Total", use, mir) ;} '
}}}
http://www.ora600.be/Oracle+ASS+-+one+time+it+will+save+yours+!
http://www.ora-solutions.net/web/2008/12/10/system-state-dump-evaluation-with-assawk/
https://www.astronomer.io/docs/cloud/
https://itsvit.com/blog/google-cloud-composer-vs-astronomer-what-to-choose/
https://github.com/astronomer/astronomer
Encountered performance issues and slow database shutdown caused by a different number of LMD/LMS processes across nodes in an asymmetric RAC, explained in MOS Doc 1911398.1 (11.2 and above) and 2028503.1 (12c). Keeping the instance CPU count consistent across nodes, validating the process counts, or applying the suggested workaround should help avoid the issues described there.
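A quick way to sanity check the asymmetry, a minimal sketch assuming you just want to compare CPU_COUNT and the running LMS/LMD process counts across instances:
{{{
-- compare cpu_count across RAC instances
select inst_id, value from gv$parameter where name = 'cpu_count' order by 1;

-- count the running LMS/LMD background processes per instance
-- (paddr <> '00' filters out the process slots that are not running)
select inst_id, substr(name,1,3) proc, count(*) cnt
from gv$bgprocess
where paddr <> hextoraw('00')
and (name like 'LMS%' or name like 'LMD%')
group by inst_id, substr(name,1,3)
order by 1, 2;
}}}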
<<showtoc>>
! plugins to install
http://www.sitepoint.com/10-essential-atom-add-ons/
https://atom.io/packages/save-session
language-r
white-cursor
http://stackoverflow.com/questions/25579775/github-atom-remove-the-center-line-in-the-editor
https://atom.io/packages/image-copipe <- to upload images to imgur
{{{
$ cat test
567 cat /u01/app/oraInventory/ContentsXML/inventory.xml
568 . ~oracle/.karlenv
569 /u01/app/11.2.0/grid/bin/olsnodes -s -t
570 ./runInstaller -detachHome ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
571 cat /u01/app/oraInventory/ContentsXML/inventory.xml
572 ./runInstaller -clone -silent -ignorePreReq ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 ORACLE_HOME_NAME=OraDb11g_home1 ORACLE_BASE=/u01/app/oracle OSDBA_GROUP=dba OSOPER_GROUP=dba
573 cat /u01/app/oraInventory/ContentsXML/inventory.xml
574 cat /u01/app/oraInventory/ContentsXML/inventory.xml
575 ls -ltr
576 ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=db1,db2" -local
577 cat /u01/app/oraInventory/ContentsXML/inventory.xml
$ /u01/app/11.2.0/grid/bin/olsnodes -s -t
db1 Active Unpinned
db2 Active Unpinned
oracle@db1.local:/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin:
$ ./runInstaller -detachHome ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 1270 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'DetachHome' was successful.
oracle@db1.local:/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin:
$
oracle@db1.local:/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin:
$ /u01/app/11.2.0/grid/bin/olsnodes -s -t
db1 Active Unpinned
db2 Active Unpinned
oracle@db1.local:/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin:
$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2010, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>11.2.0.2.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="db1"/>
<NODE NAME="db2"/>
</NODE_LIST>
</HOME>
</HOME_LIST>
</INVENTORY>
$ ./runInstaller -clone -silent -ignorePreReq ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 ORACLE_HOME_NAME=OraDb11g_home1 ORACLE_BASE=/u01/app/oracle OSDBA_GROUP=dba OSOPER_GROUP=dba
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 1270 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-08-30_07-10-31AM. Please wait ...oracle@db1.local:/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin:
$ Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.
You can find the log of this install session at:
/u01/app/oraInventory/logs/cloneActions2012-08-30_07-10-31AM.log
.................................................................................................... 100% Done.
Installation in progress (Thursday, August 30, 2012 7:11:11 AM CDT)
.............................................................................. 78% Done.
Install successful
Linking in progress (Thursday, August 30, 2012 7:11:32 AM CDT)
Link successful
Setup in progress (Thursday, August 30, 2012 7:15:42 AM CDT)
Setup successful
End of install phases.(Thursday, August 30, 2012 7:16:02 AM CDT)
WARNING:
The following configuration scripts need to be executed as the "root" user.
/u01/app/oracle/product/11.2.0/dbhome_1/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts
The cloning of OraDb11g_home1 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2012-08-30_07-10-31AM.log' for more details.
$ /u01/app/11.2.0/grid/bin/olsnodes -s -t
db1 Active Unpinned
db2 Active Unpinned
$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2010, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>11.2.0.2.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="db1"/>
<NODE NAME="db2"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2"/>
</HOME_LIST>
</INVENTORY>
$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=db1,db2" -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 904 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2010, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>11.2.0.2.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="db1"/>
<NODE NAME="db2"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="db1"/>
<NODE NAME="db2"/>
</NODE_LIST>
</HOME>
</HOME_LIST>
</INVENTORY>
oracle@db1.local:/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin:
}}}
{{{
/u01/app/11.2.0/grid/bin/olsnodes -s -t
./runInstaller -detachHome ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
cat /u01/app/oraInventory/ContentsXML/inventory.xml
./runInstaller -clone -silent -ignorePrereq ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 ORACLE_HOME_NAME=OraDb11g_home1 ORACLE_BASE=/u01/app/oracle OSDBA_GROUP=dba OSOPER_GROUP=dba
cat /u01/app/oraInventory/ContentsXML/inventory.xml
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=db1,db2" -local
cat /u01/app/oraInventory/ContentsXML/inventory.xml
opatch lsinventory
./runInstaller -silent -attachHome -invPtrLoc ./oraInst.loc
ORACLE_HOME="<Oracle_Home_Location>" ORACLE_HOME_NAME="<Oracle_Home_Name>"
"CLUSTER_NODES={<node1,node2>}" LOCAL_NODE="<node_name>"
--non rac
./runInstaller -silent -attachHome -invPtrLoc ./oraInst.loc
oracle_home="<oracle_home_location>" "cluster_nodes={}"
-- rac
./runInstaller -silent -attachHome oracle_home="<oracle_home_location>"
"cluster_nodes={<node1, node2>}" local_node="<node_name>"
./runInstaller -silent -attachHome -invPtrLoc ./oraInst.loc
oracle_home="<oracle_home_location>" "remote_nodes={}" -local
cd $ORACLE_HOME/oui/bin
./runInstaller -detachHome ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
./runInstaller -silent -attachHome ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 ORACLE_HOME_NAME=OraDb11g_home1
}}}
https://learning.oreilly.com/search/?query=attunity&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&include_collections=true&include_notebooks=true&is_academic_institution_account=false&source=user&sort=relevance&facet_json=true&page=0
Fortune 100 lessons: Architecting data lakes for real-time analytics and AI (sponsored by Attunity) - Ted Orme (Attunity)
https://learning.oreilly.com/videos/strata-data-conference/9781492025993/9781492025993-video320433?autoplay=false
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1133388300346992024
<<<
{{{
In Oracle 11g, there has been a restructuring of the job scheduling framework. In particular, the automatic gathering of optimizer statistics. In Oracle 10g, the following query reveals the association of the program GATHER_STATS_PROG with a job GATHER_STATS_JOB. In Oracle 11g, that association does not seem to be present as the query returns no rows.
SQL> SELECT JOB_NAME, SCHEDULE_NAME, SCHEDULE_TYPE, ENABLED
2 FROM DBA_SCHEDULER_JOBS
3 WHERE PROGRAM_NAME = 'GATHER_STATS_PROG';
JOB_NAME SCHEDULE_NAME SCHEDULE_TYPE ENABLED
---------------- ------------------------ -------------- -------
GATHER_STATS_JOB MAINTENANCE_WINDOW_GROUP WINDOW_GROUP TRUE
However, the program GATHER_STATS_PROG is present in Oracle 11g, and the documentation suggests that it is still run as the means of gathering optimizer statistics automatically. The Autotask process gathers optimizer statistics by calling the GATHER_DATABASE_STATS_JOB_PROC procedure of the DBMS_STATS package. The GATHER_DATABASE_STATS_JOB_PROC procedure is associated with job GATHER_STATS_JOB.
}}}
<<<
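As a quick check in 11g, a minimal sketch: the auto stats gathering now shows up as an autotask client rather than a scheduler job.
{{{
-- 11g: automatic optimizer stats gathering is an autotask client, not a scheduler job
select client_name, status
from dba_autotask_client
where client_name = 'auto optimizer stats collection';
}}}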
{{{
If you are relying on GATHER STALE to gather stats on newly created tables, you have to manually gather stats on them once to get it going.
SQL> alter session set nls_date_format='dd-MON-yy hh:mi:ss AM';
Session altered.
SQL> drop table t1;
Table dropped.
SQL> create table t1 as select * from all_objects;
Table created.
SQL> begin
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname =>'XYZ',
4 estimate_percent=>10,
5 degree=>1,
6 cascade=>TRUE,
7 options=>'GATHER STALE',
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1
SQL> delete from t1;
36002 rows deleted.
SQL> commit;
Commit complete.
SQL> begin
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname =>'XYZ',
4 estimate_percent=>10,
5 degree=>1,
6 cascade=>TRUE,
7 options=>'GATHER STALE',
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1
SQL> begin
2 DBMS_STATS.GATHER_TABLE_STATS (
3 ownname =>'XYZ',
4 tabname=>'T1',
5 estimate_percent=>10,
6 degree=>1,
7 cascade=>TRUE,
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1 21-DEC-09 11:44:33 AM
SQL> insert into t1 select * from all_objects;
36002 rows created.
SQL> commit;
Commit complete.
SQL> begin
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname =>'XYZ',
4 estimate_percent=>10,
5 degree=>1,
6 cascade=>TRUE,
7 options=>'GATHER STALE',
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1 21-DEC-09 11:46:07 AM
SQL> begin
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname =>'XYZ',
4 estimate_percent=>10,
5 degree=>1,
6 cascade=>TRUE,
7 options=>'GATHER STALE',
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1 21-DEC-09 11:46:07 AM
SQL> select sysdate from dual;
SYSDATE
---------------------
21-DEC-09 11:48:30 AM
SQL> begin
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname =>'XYZ',
4 estimate_percent=>10,
5 degree=>1,
6 cascade=>TRUE,
7 options=>'GATHER STALE',
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1 21-DEC-09 11:46:07 AM
SQL> delete from t1;
36002 rows deleted.
SQL> commit;
Commit complete.
SQL> begin
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname =>'XYZ',
4 estimate_percent=>10,
5 degree=>1,
6 cascade=>TRUE,
7 options=>'GATHER STALE',
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1 21-DEC-09 11:49:23 AM
SQL>
}}}
https://mahmoudhatem.wordpress.com/2014/10/26/oracle-12c-automatic-big-table-caching/
some more reading:
here http://docs.oracle.com/database/121/DWHSG/ch2logdes.htm#CIHCCAJA (Automatic Big Table Caching to Improve the Performance of In-Memory Parallel Queries)
and here http://docs.oracle.com/cloud/latest/db121/VLDBG/parallel008.htm#VLDBG14145 (Oracle® Database VLDB and Partitioning Guide)
also see [[_small_table_threshold]]
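To try the feature out, a minimal sketch (the 40 percent target is just an example value):
{{{
-- dedicate up to 40% of the buffer cache to big table scans (12c+)
alter system set db_big_table_cache_percent_target = '40';

-- then watch what the big table cache is doing
select bt_cache_target, object_count, memory_buf_alloc from v$bt_scan_cache;
}}}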
! the catch
In Oracle Real Application Clusters (Oracle RAC) environments, this feature is supported only with parallel queries. In single instance environments, this feature is supported with both parallel and serial queries.
also check this out @@Oracle by Example: Oracle 12c In-Memory series https://apexapps.oracle.com/pls/apex/f?p=44785:24:106572632124906::::P24_CONTENT_ID,P24_PREV_PAGE:10152,24@@
! avg latency issue
The simplified IO latency formula used in AWR is as follows:
latency (ms) = (readtim / phyrds) * 10
(readtim is in centiseconds, hence the multiplication by 10 to convert to milliseconds per read)
The images below show that latency values get smoothed out (normalized) when the snap interval is long compared to shorter snap intervals. Also read this link for the effects of CPU scheduling issues on latency http://www.freelists.org/post/oracle-l/Disk-Device-Busy-What-exactly-is-this,7
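Here's a minimal sketch of that formula applied between two AWR snapshots, assuming dba_hist_filestatxs and substitution variables for the snap ids:
{{{
-- per-file average read latency (ms) between two snapshots
select e.file#,
       round((e.readtim - b.readtim) / nullif(e.phyrds - b.phyrds, 0) * 10, 2) latency_ms
from dba_hist_filestatxs b, dba_hist_filestatxs e
where b.dbid = e.dbid
  and b.instance_number = e.instance_number
  and b.file# = e.file#
  and b.snap_id = &begin_snap
  and e.snap_id = &end_snap
order by 2 desc nulls last;
}}}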
<<<
''Check out the comparison of 60mins vs 10mins interval''
[img[ https://lh5.googleusercontent.com/-EFIgD8LRAd4/T7FDbvPtmFI/AAAAAAAABoI/FNYfpaz-q6s/s2048/20120514_avglatencyissue1.png ]]
''Check out the comparison of 10mins vs 5secs interval''
[img[ https://lh3.googleusercontent.com/-fqgWDfHZRA0/T7FDbv3PdyI/AAAAAAAABoM/RHQCcYeebz4/s2048/20120514_avglatencyissue2.png ]]
<<<
! average latency issue correlation of SAN, datafiles, session IO
http://www.evernote.com/shard/s48/sh/1dcde656-9ab3-4d33-91c7-5373b2e6ad3c/797500d33a67d66446093856dabc89d4
<<<
''Comparison of latency from the session level, datafile level, and SAN storage level''
[img[ https://lh6.googleusercontent.com/-7tH8UR3JhUc/T7FDbvyCJ8I/AAAAAAAABoU/ZUTptdoXSfA/s2048/20120514_avglatencyissue3.png ]]
<<<
! MIXPRD IO issue - SAN performance validation - saturated storage processor
https://www.evernote.com/shard/s48/sh/39f289d4-fd7e-48b9-80aa-07fb5aeb1ee7/0a0758f44235f6e63ed12577ecc8207a
Also check out the tiddler
[[RAID5 and RAID10 comparison]]
http://www.igvita.com/2009/06/23/measuring-optimizing-io-performance/
http://www.pythian.com/news/247/basic-io-monitoring-on-linux/
http://lists.lustre.org/pipermail/lustre-discuss/2011-June/015655.html
http://www.facebook.com/note.php?note_id=76191543919
http://www.unix.com/shell-programming-scripting/118605-awk-whats-nf-nr.html
http://unix.stackexchange.com/questions/63370/compute-average-file-size
http://blog.damiles.com/2008/10/awk-calculate-mean-min-and-max/
{{{
[root@enkx3cel01 ~]# awk '{if(min==""){min=max=$1}; if($1>max) {max=$1}; if($1< min) {min=$1}; total+=$1; count+=1} END {print total/count, min, max}' data.txt
146.667 10 500
[root@enkx3cel01 ~]#
[root@enkx3cel01 ~]#
[root@enkx3cel01 ~]# cat data.txt
10
20
50
100
200
500
}}}
Joining two consecutive lines awk sed
http://stackoverflow.com/questions/3194534/joining-two-consecutive-lines-awk-sed
{{{
awk '!(NR%2){print$0p}{p=$0}' infile
}}}
http://www.unix.com/shell-programming-scripting/126110-awk-division-modulus.html
Awk, trying NOT to divide by zero http://stackoverflow.com/questions/7905317/awk-trying-not-to-divide-by-zero
https://www.commandlinefu.com/commands/view/6872/exclude-a-column-with-awk
{{{
awk '{ $5=""; print }' file
awk '{ $3="";$4="";print}' main_metrics_tmp2.txt > main_metrics_tmp3.txt
}}}
{{{
TOTAL_MEM=`cat curr_mem.txt | awk '{ x=x+$0 } END { print x }'`
echo "Total Vbox Memory: $TOTAL_MEM"
}}}
https://www.google.com/search?q=awk+move+2nd+column+to+first
https://stackoverflow.com/questions/17084108/moving-the-second-column-to-first-column-using-awk
{{{
awk '{tmp=$4;$4=$1;$1=tmp;print $0}' main_metrics_tmp.txt
awk '{tmp=$4;$4=$1;$1=tmp; tmp2=$5;$5=$2;$2=tmp2; print $0}' main_metrics_tmp.txt
}}}
split a string using a string as delimiter in awk http://stackoverflow.com/questions/5333599/split-a-string-using-a-string-as-delimiter-in-awk
{{{
awk '{split($0,a,"/root/");$1=a[1] "/"; $2=a[2]; print $1,$2}'
}}}
http://stackoverflow.com/questions/3307162/awk-match-string-after-separator
http://www.cyberciti.biz/tips/processing-the-delimited-files-using-cut-and-awk.html
{{{
Processing the delimited files using awk
You can also use awk command for same purpose:
$ awk -F':' '{ print $1 }' /etc/passwd
}}}
http://stackoverflow.com/questions/6528660/how-to-cut-in-awk
Conversation with Tyler Muth
http://www.evernote.com/shard/s48/sh/89d4be94-2edb-4a38-9dc3-e2c38ebd01b5/939371f06cc0c5350df9b2069af2775c
! Hierarchy of Exadata IO
* Total Read + Write + Redo MB/s - ''Total Read Write Redo MB/s across storage cells''
* dba_hist_sysstat s29t0, -- cell physical IO bytes eligible for predicate offload, diffed, cellpiobpreoff - CellPIOpredoffloadmbs - ''Total Smart Scans across storage cells''
* dba_hist_sysstat s26t0, -- cell physical IO interconnect bytes, diffed, cellpiob - CellPIOICmbs - ''Interconnect MB/s on a node''
* dba_hist_sysstat s31t0, -- cell physical IO interconnect bytes returned by smart scan, diffed, cellpiobss - CellPIOICSmartScanmbs - ''Returned by Smart Scan MB/s on a node''
! Scenario below..
Characterizing the IO workload from Tableau using the AWR Tableau toolkit
[img[ https://lh3.googleusercontent.com/-LrR7P5REhPs/T64EWZpbfUI/AAAAAAAABls/_n8zeQ0nzI8/s800/20120512-hierarchy1.png ]]
This is the IO graph during that workload period
[img[ https://lh4.googleusercontent.com/-e7kp_S1QggU/T64EWwOa-ZI/AAAAAAAABl0/yI6fhZt9EKY/s800/20120512-hierarchy2.png ]]
This 15GB/s IO read spike was generated by just two SQLs
[img[ https://lh5.googleusercontent.com/-h6mjNTBjtBM/T64EW8Ab6cI/AAAAAAAABl4/Z868vGNBSjg/s800/20120512-hierarchy3.png ]]
Those two SQLs read 15GB/s through Smart Scans only to plow back just 3.729 MB/s of useful data to the database nodes. That's a lot of wasted work.
[img[ https://lh3.googleusercontent.com/-P1DbJNOt5Hs/T64EXDD4RDI/AAAAAAAABmE/Pahy-uUr8O0/s800/20120512-hierarchy4.png ]]
Also, this just means that you can saturate the entire Storage Grid by executing a single inefficient SQL on one node. And without imposing limits through IO Resource Manager (IORM), if these two SQLs had been executed during a peak workload period they could have caused a lot of trouble on the whole cluster.
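You can eyeball that "returned vs scanned" ratio from the cumulative counters too; a minimal sketch against v$sysstat:
{{{
-- percent of smart scan eligible bytes actually returned over the interconnect
-- (cumulative since instance startup, so treat it as a rough indicator)
select round(
         (select value from v$sysstat
           where name = 'cell physical IO interconnect bytes returned by smart scan')
       / nullif((select value from v$sysstat
           where name = 'cell physical IO bytes eligible for predicate offload'), 0)
       * 100, 2) pct_returned
from dual;
}}}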
http://kerryosborne.oracle-guy.com/2009/06/what-did-my-new-index-mess-up/
{{{
set lines 155
col execs for 999,999,999
col avg_etime for 999,999.999
col avg_lio for 999,999,999.9
col begin_interval_time for a30
col node for 99999
break on plan_hash_value on startup_time skip 1
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,
(io_offload_elig_bytes_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_offload
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id = nvl('&sql_id','4dqs2k5tynk61')
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
order by 1, 2, 3
/
}}}
<<<
There are a couple of bug fixes on this top_events script.. originally this is designed to be executed on each node (it is dependent on the instance number) and it accounts for the "CPU Wait", which is essentially the unaccounted-for DB Time for each snap_id. There's a subquery (below) that sums all the "time" and "AAS" for all the wait events and CPU time, and then subtracts that from the DB time of that snap_id, which gives the metric called "CPU Wait"
{{{
(select snap_id, sum(time) time, sum(AAS) aas from
(select * from (select
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
}}}
the bug fixes are as follows:
1) running on all nodes
this script is intended to be run manually on every instance.. the bug is that "inst" is missing; it should be beside the snap_id. so if you run this on all nodes, the above sub-query will group "wait events" and "AAS" into one snapshot ID across all instances.
if you are hitting this bug, you'll just have data problems on the "CPU Wait" metric, which will end up showing a very high value.
the fix is adding "inst" on the sum and on the join at the bottom:
{{{
(select snap_id, inst, sum(time) time, sum(AAS) aas from
(select * from (select
s0.atk_seq_id atk_seq_id,
s0.snap_id snap_id,
s0.END_INTERVAL_TIME tm,
s0.instance_number inst,
... output snipped ...
) group by snap_id, inst) accounted_dbtime
where dbtime.snap_id = accounted_dbtime.snap_id
and dbtime.inst = accounted_dbtime.inst
}}}
the fixed script for all nodes run is here http://goo.gl/zBqNU8
2) running on all nodes with ATK_SEQ_ID, inst, and snap_id (the new awr toolkit w/ Carlos Sierra)
I have this awr tableau toolkit (available here http://karlarao.wordpress.com/scripts-resources/) to create an AWR performance data warehouse, Carlos revised this toolkit and added a column on every dba_hist view called ATK_SEQ_ID which is a unique identifier for a particular data set.
The problems I had are as follows:
* the introduction of atk_seq_id: I was missing the join from dba_hist_snapshot to the other tables in the same subquery, resulting in a cartesian join
* the dense_rank portion: I was missing the "inst" beside the snap_id, resulting in a non-top5-events kind of output
the fix is adding the "inst" beside the snap_id
{{{
(select atk_seq_id, snap_id, TO_CHAR(tm,'MM/DD/YY HH24:MI:SS') tm, inst, dur, event, waits, time, avgwt, pctdbt, aas, wait_class,
DENSE_RANK() OVER (
PARTITION BY inst, snap_id ORDER BY time DESC) event_rank
... output snipped ...
adding the joins before the dbid section
where
s1.atk_seq_id = s0.atk_seq_id
AND b.atk_seq_id = s0.atk_seq_id
AND e.atk_seq_id = s0.atk_seq_id
AND s5t0.atk_seq_id = s0.atk_seq_id
AND s5t1.atk_seq_id = s0.atk_seq_id
then fixing the where clause at the bottom
WHERE
event_rank <= 5
and aas > 0
ORDER BY inst, snap_id;
}}}
<<<
{{{
I've already imported the AWR data on my test environment... then I'm interested in the running SQLs during the
high write IOPS period, so I made use of this script "awr_topsqlx-exa.sql.bak"
to characterize the SQLs. I've altered the initial section of the script to reflect the DBID and instance_number
of the karlarao data then spooled it to a file (awr_topsqlx-exa.sql.bak-karlarao.txt).
So this file (awr_topsqlx_summary.txt) gives you the summary of the top SQLs
$ less awr_topsqlx-exa.sql.bak-karlarao.txt | sort -nk6 | tail
then I grouped the SQLs by type:
$ less awr_topsqlx-exa.sql.bak-karlarao.txt | sort -nk6 | grep -i insert > karlarao-insert.txt
$ less awr_topsqlx-exa.sql.bak-karlarao.txt | sort -nk6 | grep -i select > karlarao-select.txt
$ less awr_topsqlx-exa.sql.bak-karlarao.txt | sort -nk6 | grep -i update > karlarao-update.txt
$ less awr_topsqlx-exa.sql.bak-karlarao.txt | sort -nk6 | grep -i delete > karlarao-delete.txt
$ less awr_topsqlx-exa.sql.bak-karlarao.txt | sort -nk6 | grep -i merge > karlarao-merge.txt
$ less awr_topsqlx-exa.sql.bak-karlarao.txt | sort -nk6 | grep -i create > karlarao-create.txt
and searched for the SQL_ID using this
select sql_text from dba_hist_sqltext where sql_id = 'fd9hn33xa7bph';
and did some quick grouping of the top hoggers using this:
less karlarao-select.txt | cut -d, -f5,26,28,29 | grep "," | less -- AAS,sql_id,module,SQLtype
less karlarao-select.txt | cut -d, -f26 | sort | uniq -- distinct SQL_IDs, then feed to select sql_text from dba_hist_sqltext where sql_id in (
So the two SQLs below are the ones generating the 60K+ read/write IOPS... (karlarao-60kWIOPS.txt) which is somewhere around snap_id 970-980..
g5uxtmf4zmm5p
INSERT /*+ append monitor */ INTO ENROUTE select * from ENROUTE2_IN_EXTERNAL
5jy1dz8y8d0qf
DECLARE job BINARY_INTEGER := :job; next_date TIMESTAMP WITH TIME ZONE := :mydate; broken BOOLEAN := FALSE; job_name VARCHAR2(30) := :job_name; job_subname VARCHAR2(30) := :job_subname; job_owner VARCHAR2(30) := :job_owner; job_start TIMESTAMP WITH TIME ZONE := :job_start; job_scheduled_start
TIMESTAMP WITH TIME ZONE := :job_scheduled_start; window_start TIMESTAMP WITH TIME ZONE := :window_start; window_end TIMESTAMP WITH TIME ZONE := :window_end; chain_id VARCHAR2(14) := :chainid; credential_owner varchar2(30) := :credown; credential_name varchar2(30) := :crednam; destination_o
wner varchar2(30) := :destown; destination_name varchar2(30) := :destnam; job_dest_id varchar2(14) := :jdestid; log_id number := :log_id; BEGIN begin
enroute_ingest2.file_find_new;
end;
:mydate := next_date; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
}}}
<<showtoc>>
! awrinfo.sql
! manual purge
https://grepora.com/2017/11/08/orphan-awr-data-consuming-sysaux-space/
!! purge related notes
{{{
“_awr_mmon_deep_purge_all_expired” = TRUE
manual purge Doc ID: 2537247.1
}}}
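A minimal sketch of the manual purge itself (the snap range and dbid below are placeholders):
{{{
-- drop a range of AWR snapshots for a given dbid
exec dbms_workload_repository.drop_snapshot_range(low_snap_id => 100, high_snap_id => 200, dbid => 1234567890);
}}}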
<<showtoc>>
! root (default) vs IAM login
root (default)
[img(20%,20%)[https://i.imgur.com/ardewlt.png]]
IAM user
[img(20%,20%)[https://i.imgur.com/U4xSflU.png]]
! delete root user
[img(80%,80%)[https://i.imgur.com/vjFGnTa.png]]
[img(80%,80%)[https://i.imgur.com/vrpDuoL.png]]
! enable MFA (multi factor authentication)
[img(80%,80%)[https://i.imgur.com/MxYzYQl.png]]
[img(80%,80%)[https://i.imgur.com/8OXdbJa.png]]
[img(80%,80%)[https://i.imgur.com/jdfEipx.png]]
[img(80%,80%)[https://i.imgur.com/rE2EEjf.png]]
[img(80%,80%)[https://i.imgur.com/l1IBFL3.png]]
! create individual IAM user
URL is here: https://karlarao.signin.aws.amazon.com/console
.
[img(80%,80%)[https://i.imgur.com/NQRUmSz.png]]
.
[img(80%,80%)[https://i.imgur.com/L59tUQi.png]]
.
[img(80%,80%)[https://i.imgur.com/GimsjYW.png]]
.
[img(80%,80%)[https://i.imgur.com/TU9bWJw.png]]
.
[img(80%,80%)[https://i.imgur.com/B1AMoeH.png]]
.
[img(80%,80%)[https://i.imgur.com/GDZzQWN.png]]
.
[img(80%,80%)[https://i.imgur.com/F4D1tnG.png]]
.
[img(80%,80%)[https://i.imgur.com/AphmZ6z.png]]
.
[img(80%,80%)[https://i.imgur.com/JKxMgni.png]]
.
[img(80%,80%)[https://i.imgur.com/OAz5Mnp.png]]
.
[img(80%,80%)[https://i.imgur.com/bn0immI.png]]
.
[img(80%,80%)[https://i.imgur.com/ytI6OZV.png]]
.
[img(80%,80%)[https://i.imgur.com/RPN1hUW.png]]
.
[img(80%,80%)[https://i.imgur.com/cNKh1vR.png]]
! references
https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html
https://aws.amazon.com/blogs/aws/important-aws-account-key-change-coming-on-april-21-2014/
https://aws.amazon.com/iam/details/mfa/
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html?icmpid=docs_iam_console
https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html?icmpid=docs_iam_console
.
https://console.aws.amazon.com/billing/home?#/
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/tracking-free-tier-usage.html
https://wiki.centos.org/Cloud/AWS
https://wiki.centos.org/Cloud/AWS/AmazonCLI
<<showtoc>>
! windows install and configure
!! download AWS CLI
!! configure AWS CLI
{{{
karl@karl-remote ~/.aws
$ /cygdrive/c/"Program Files"/Amazon/AWSCLI/bin/aws.cmd --version
aws-cli/1.16.91 Python/3.6.0 Windows/7 botocore/1.12.81
$ cat rootkey.csv
AWSAccessKeyId=xxx
AWSSecretKey=xxx
$ /cygdrive/c/"Program Files"/Amazon/AWSCLI/bin/aws.cmd configure
AWS Access Key ID [None]: xxx
AWS Secret Access Key [None]: xxx
Default region name [None]: us-east-1
Default output format [None]:
}}}
! mac install and configure
!! download and install
{{{
brew install awscli
}}}
!! configure
{{{
==> Installing dependencies for awscli: gdbm, openssl, readline, sqlite and python
==> Installing awscli dependency: gdbm
==> Downloading https://homebrew.bintray.com/bottles/gdbm-1.18.1.high_sierra.bottle.1.tar.gz
######################################################################## 100.0%
==> Pouring gdbm-1.18.1.high_sierra.bottle.1.tar.gz
🍺 /usr/local/Cellar/gdbm/1.18.1: 20 files, 590.8KB
==> Installing awscli dependency: openssl
==> Downloading https://homebrew.bintray.com/bottles/openssl-1.0.2q.high_sierra.bottle.tar.gz
######################################################################## 100.0%
==> Pouring openssl-1.0.2q.high_sierra.bottle.tar.gz
==> Caveats
A CA file has been bootstrapped using certificates from the SystemRoots
keychain. To add additional certificates (e.g. the certificates added in
the System keychain), place .pem files in
/usr/local/etc/openssl/certs
and run
/usr/local/opt/openssl/bin/c_rehash
openssl is keg-only, which means it was not symlinked into /usr/local,
because Apple has deprecated use of OpenSSL in favor of its own TLS and crypto libraries.
If you need to have openssl first in your PATH run:
echo 'export PATH="/usr/local/opt/openssl/bin:$PATH"' >> ~/.bash_profile
For compilers to find openssl you may need to set:
export LDFLAGS="-L/usr/local/opt/openssl/lib"
export CPPFLAGS="-I/usr/local/opt/openssl/include"
For pkg-config to find openssl you may need to set:
export PKG_CONFIG_PATH="/usr/local/opt/openssl/lib/pkgconfig"
==> Summary
🍺 /usr/local/Cellar/openssl/1.0.2q: 1,794 files, 12.1MB
==> Installing awscli dependency: readline
==> Downloading https://homebrew.bintray.com/bottles/readline-8.0.0.high_sierra.bottle.tar.gz
######################################################################## 100.0%
==> Pouring readline-8.0.0.high_sierra.bottle.tar.gz
==> Caveats
readline is keg-only, which means it was not symlinked into /usr/local,
because macOS provides the BSD libedit library, which shadows libreadline.
In order to prevent conflicts when programs look for libreadline we are
defaulting this GNU Readline installation to keg-only.
For compilers to find readline you may need to set:
export LDFLAGS="-L/usr/local/opt/readline/lib"
export CPPFLAGS="-I/usr/local/opt/readline/include"
For pkg-config to find readline you may need to set:
export PKG_CONFIG_PATH="/usr/local/opt/readline/lib/pkgconfig"
==> Summary
🍺 /usr/local/Cellar/readline/8.0.0: 48 files, 1.5MB
==> Installing awscli dependency: sqlite
==> Downloading https://homebrew.bintray.com/bottles/sqlite-3.26.0_1.high_sierra.bottle.tar.gz
######################################################################## 100.0%
==> Pouring sqlite-3.26.0_1.high_sierra.bottle.tar.gz
==> Caveats
sqlite is keg-only, which means it was not symlinked into /usr/local,
because macOS provides an older sqlite3.
If you need to have sqlite first in your PATH run:
echo 'export PATH="/usr/local/opt/sqlite/bin:$PATH"' >> ~/.bash_profile
For compilers to find sqlite you may need to set:
export LDFLAGS="-L/usr/local/opt/sqlite/lib"
export CPPFLAGS="-I/usr/local/opt/sqlite/include"
For pkg-config to find sqlite you may need to set:
export PKG_CONFIG_PATH="/usr/local/opt/sqlite/lib/pkgconfig"
==> Summary
🍺 /usr/local/Cellar/sqlite/3.26.0_1: 11 files, 3.7MB
==> Installing awscli dependency: python
==> Downloading https://homebrew.bintray.com/bottles/python-3.7.2_1.high_sierra.bottle.1.tar.gz
######################################################################## 100.0%
==> Pouring python-3.7.2_1.high_sierra.bottle.1.tar.gz
==> /usr/local/Cellar/python/3.7.2_1/bin/python3 -s setup.py --no-user-cfg install --force --verbose --install-scripts=/usr/local/Cellar/pytho
==> /usr/local/Cellar/python/3.7.2_1/bin/python3 -s setup.py --no-user-cfg install --force --verbose --install-scripts=/usr/local/Cellar/pytho
==> /usr/local/Cellar/python/3.7.2_1/bin/python3 -s setup.py --no-user-cfg install --force --verbose --install-scripts=/usr/local/Cellar/pytho
==> Caveats
Python has been installed as
/usr/local/bin/python3
Unversioned symlinks `python`, `python-config`, `pip` etc. pointing to
`python3`, `python3-config`, `pip3` etc., respectively, have been installed into
/usr/local/opt/python/libexec/bin
If you need Homebrew's Python 2.7 run
brew install python@2
You can install Python packages with
pip3 install <package>
They will install into the site-package directory
/usr/local/lib/python3.7/site-packages
See: https://docs.brew.sh/Homebrew-and-Python
==> Summary
🍺 /usr/local/Cellar/python/3.7.2_1: 3,837 files, 59.4MB
==> Installing awscli
==> Downloading https://homebrew.bintray.com/bottles/awscli-1.16.90.high_sierra.bottle.tar.gz
######################################################################## 100.0%
==> Pouring awscli-1.16.90.high_sierra.bottle.tar.gz
==> Caveats
The "examples" directory has been installed to:
/usr/local/share/awscli/examples
Bash completion has been installed to:
/usr/local/etc/bash_completion.d
zsh completions and functions have been installed to:
/usr/local/share/zsh/site-functions
==> Summary
🍺 /usr/local/Cellar/awscli/1.16.90: 5,009 files, 47.9MB
==> Caveats
==> openssl
A CA file has been bootstrapped using certificates from the SystemRoots
keychain. To add additional certificates (e.g. the certificates added in
the System keychain), place .pem files in
/usr/local/etc/openssl/certs
and run
/usr/local/opt/openssl/bin/c_rehash
openssl is keg-only, which means it was not symlinked into /usr/local,
because Apple has deprecated use of OpenSSL in favor of its own TLS and crypto libraries.
If you need to have openssl first in your PATH run:
echo 'export PATH="/usr/local/opt/openssl/bin:$PATH"' >> ~/.bash_profile
For compilers to find openssl you may need to set:
export LDFLAGS="-L/usr/local/opt/openssl/lib"
export CPPFLAGS="-I/usr/local/opt/openssl/include"
For pkg-config to find openssl you may need to set:
export PKG_CONFIG_PATH="/usr/local/opt/openssl/lib/pkgconfig"
==> readline
readline is keg-only, which means it was not symlinked into /usr/local,
because macOS provides the BSD libedit library, which shadows libreadline.
In order to prevent conflicts when programs look for libreadline we are
defaulting this GNU Readline installation to keg-only.
For compilers to find readline you may need to set:
export LDFLAGS="-L/usr/local/opt/readline/lib"
export CPPFLAGS="-I/usr/local/opt/readline/include"
For pkg-config to find readline you may need to set:
export PKG_CONFIG_PATH="/usr/local/opt/readline/lib/pkgconfig"
==> sqlite
sqlite is keg-only, which means it was not symlinked into /usr/local,
because macOS provides an older sqlite3.
If you need to have sqlite first in your PATH run:
echo 'export PATH="/usr/local/opt/sqlite/bin:$PATH"' >> ~/.bash_profile
For compilers to find sqlite you may need to set:
export LDFLAGS="-L/usr/local/opt/sqlite/lib"
export CPPFLAGS="-I/usr/local/opt/sqlite/include"
For pkg-config to find sqlite you may need to set:
export PKG_CONFIG_PATH="/usr/local/opt/sqlite/lib/pkgconfig"
==> python
Python has been installed as
/usr/local/bin/python3
Unversioned symlinks `python`, `python-config`, `pip` etc. pointing to
`python3`, `python3-config`, `pip3` etc., respectively, have been installed into
/usr/local/opt/python/libexec/bin
If you need Homebrew's Python 2.7 run
brew install python@2
You can install Python packages with
pip3 install <package>
They will install into the site-package directory
/usr/local/lib/python3.7/site-packages
See: https://docs.brew.sh/Homebrew-and-Python
==> awscli
The "examples" directory has been installed to:
/usr/local/share/awscli/examples
Bash completion has been installed to:
/usr/local/etc/bash_completion.d
zsh completions and functions have been installed to:
/usr/local/share/zsh/site-functions
}}}
!! README
{{{
cd /usr/local/share/awscli/examples
AMAC02T60SJH03Y:examples kristofferson.a.arao$ ls
acm codecommit ecr inspector s3api
acm-pca codepipeline ecs iot secretsmanager
apigateway configservice eks kms ses
application-autoscaling configure elasticache logs sns
autoscaling cur elasticbeanstalk opsworks sqs
batch datapipeline elastictranscoder opsworkscm ssm
budgets deploy elb organizations storagegateway
ce devicefarm elbv2 pi sts
cloud9 directconnect emr pricing swf
cloudformation discovery es rds waf
cloudfront dlm events redshift waf-regional
cloudsearchdomain dms glacier resource-groups workdocs
cloudtrail dynamodb iam route53 workmail
cloudwatch ec2 importexport s3 workspaces
AMAC02T60SJH03Y:examples kristofferson.a.arao$ less s3
s3 is a directory
AMAC02T60SJH03Y:examples kristofferson.a.arao$ cd s3
AMAC02T60SJH03Y:s3 kristofferson.a.arao$ ls
_concepts.rst ls.rst mv.rst rm.rst website.rst
cp.rst mb.rst rb.rst sync.rst
AMAC02T60SJH03Y:s3 kristofferson.a.arao$ ls -ltr
total 96
-rw-r--r-- 1 kristofferson.a.arao 562225435 883 Jan 16 13:51 website.rst
-rw-r--r-- 1 kristofferson.a.arao 562225435 4753 Jan 16 13:51 sync.rst
-rw-r--r-- 1 kristofferson.a.arao 562225435 1367 Jan 16 13:51 rm.rst
-rw-r--r-- 1 kristofferson.a.arao 562225435 641 Jan 16 13:51 rb.rst
-rw-r--r-- 1 kristofferson.a.arao 562225435 2834 Jan 16 13:51 mv.rst
-rw-r--r-- 1 kristofferson.a.arao 562225435 535 Jan 16 13:51 mb.rst
-rw-r--r-- 1 kristofferson.a.arao 562225435 2875 Jan 16 13:51 ls.rst
-rw-r--r-- 1 kristofferson.a.arao 562225435 5388 Jan 16 13:51 cp.rst
-rw-r--r-- 1 kristofferson.a.arao 562225435 7029 Jan 16 13:51 _concepts.rst
}}}
! create another user profile
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
https://stackoverflow.com/questions/46319880/how-do-i-clear-the-credentials-in-aws-configure
{{{
$ aws configure --profile karltest
AWS Access Key ID [None]: xxx
AWS Secret Access Key [None]: xxx
Default region name [None]: us-east-1
Default output format [None]:
$ aws s3 ls
An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
# this works!
$ aws s3 ls --profile karltest
$ aws --profile karltest s3 ls
}}}
! references
https://docs.aws.amazon.com/cli/latest/userguide/install-macos.html
https://formulae.brew.sh/formula/awscli
https://github.com/aws/aws-cli/issues/727
https://www.code2bits.com/how-to-install-awscli-on-macos-using-homebrew/
https://medium.com/@yogeshdarji99/steps-to-install-awscli-on-mac-5bad783483a
https://www.youtube.com/results?search_query=AWS+DMS+and+SCT
AWS re:Invent 2018: Migrating Databases to the Cloud with AWS Database Migration Service (DAT207) https://www.youtube.com/watch?v=11IHvxjy4hw
! DMS
Lab Session - Migrating from Oracle to Aurora using AWS Data Migration Service https://learning.oreilly.com/videos/aws-certified-sysops/9781788999946/9781788999946-video7_6?autoplay=false
Using an Oracle Database as a Source for AWS DMS https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html
https://ilparle.com/2017/08/30/setup-change-data-capture-cdc-for-dms/
https://blog.pythian.com/aws-dms-tips-oracle-migrations/
! SCT
https://aws.amazon.com/blogs/database/tag/schema-conversion-tool-sct/
[img(50%,50%)[ https://i.imgur.com/o2RtG0v.png ]]
! make 400
{{{
chmod 400 *pem
}}}
! initial
{{{
$ ssh -i "centos7freetier.pem" centos@ec2-3-86-102-235.compute-1.amazonaws.com
Last login: Thu Jan 17 21:13:21 2019 from cpe-69-200-224-180.nyc.res.rr.com
[centos@ip-172-31-91-114 ~]$
[centos@ip-172-31-91-114 ~]$
[centos@ip-172-31-91-114 ~]$
[centos@ip-172-31-91-114 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 172.31.91.114 netmask 255.255.240.0 broadcast 172.31.95.255
inet6 fe80::10c3:38ff:feae:1ee0 prefixlen 64 scopeid 0x20<link>
ether 12:c3:38:ae:1e:e0 txqueuelen 1000 (Ethernet)
RX packets 225 bytes 26162 (25.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 271 bytes 27622 (26.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 6 bytes 416 (416.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 416 (416.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[centos@ip-172-31-91-114 ~]$
[centos@ip-172-31-91-114 ~]$ Connection to ec2-3-86-102-235.compute-1.amazonaws.com closed by remote host.
Connection to ec2-3-86-102-235.compute-1.amazonaws.com closed.
}}}
! after stop and start
{{{
karl@karl-remote ~
# this doesn't work
$ ssh -i "centos7freetier.pem" centos@ec2-3-86-102-235.compute-1.amazonaws.com
# need to get new IP
karl@karl-remote ~
$ ssh -i "centos7freetier.pem" centos@ec2-3-80-207-12.compute-1.amazonaws.com
Last login: Thu Jan 17 21:42:14 2019 from cpe-69-200-224-180.nyc.res.rr.com
[centos@ip-172-31-91-114 ~]$
[centos@ip-172-31-91-114 ~]$
[centos@ip-172-31-91-114 ~]$ ls
[centos@ip-172-31-91-114 ~]$
[centos@ip-172-31-91-114 ~]$
[centos@ip-172-31-91-114 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 172.31.91.114 netmask 255.255.240.0 broadcast 172.31.95.255
inet6 fe80::10c3:38ff:feae:1ee0 prefixlen 64 scopeid 0x20<link>
ether 12:c3:38:ae:1e:e0 txqueuelen 1000 (Ethernet)
RX packets 267 bytes 32476 (31.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 307 bytes 33773 (32.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 6 bytes 416 (416.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 416 (416.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
}}}
! To fix this, go to "Network and Security" -> "Elastic IPs" and associate the EIP with the Instance ID
https://aws.amazon.com/kinesis/
https://medium.com/aws-activate-startup-blog/the-tale-of-two-messaging-platforms-apache-kafka-and-amazon-kinesis-654963bdbf35
http://www.jesse-anderson.com/2017/07/apache-kafka-and-amazon-kinesis/
https://www.udemy.com/awskinesis/
https://www.udemy.com/aws-lambda-in-action-hunk-and-nifi-integration/
.
Creating, Connecting, and Monitoring Databases with Amazon RDS https://app.pluralsight.com/library/courses/aws-amazon-rds/table-of-contents
Querying Data in AWS Databases https://app.pluralsight.com/library/courses/querying-data-aws-databases/table-of-contents
<<showtoc>>
! create bucket
[img(80%,80%)[https://i.imgur.com/BZsr0LX.png]]
[img(80%,80%)[https://i.imgur.com/SlqoWrm.png]]
[img(80%,80%)[https://i.imgur.com/rh0ODkX.png]]
[img(80%,80%)[https://i.imgur.com/3AmqR4l.png]]
[img(80%,80%)[https://i.imgur.com/UZfsalO.png]]
[img(80%,80%)[https://i.imgur.com/DEIL9RZ.png]]
! ls using aws cli
{{{
$ /cygdrive/c/"Program Files"/Amazon/AWSCLI/bin/aws.cmd s3 ls
2019-01-18 00:50:02 hadoopbucketdemo
$ /cygdrive/c/"Program Files"/Amazon/AWSCLI/bin/aws.cmd s3 ls hadoopbucketdemo
}}}
<<showtoc>>
! public/private key pair, pem file
! elastic ip
* not charged while it's associated with a running instance; you do get charged when it's allocated but not in use
! security group
* takes effect immediately
! bastion server (similar to edge node/gateway node)
* 1 server out of 10 could be the bastion node
! Using Access Keys (AWSAccessKeyId, SecretKey)
* use AWS CLI
! IAM
* https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html?icmpid=docs_iam_console
http://www.slideshare.net/heinrichvk/axibase-time-series-database
http://www.slideshare.net/pramitchoudhary/need-for-time-series-database
https://www.mapr.com/blog/ted-dunning-and-ellen-friedman-discuss-their-time-series-databases-book
https://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=Ted+Dunning+and+Ellen+Friedman
https://github.com/axibase/atsd-docs/blob/master/installation/README.md
https://github.com/rodionos?tab=activity <- CEO
https://github.com/axibase?page=2 <- git organization
https://github.com/alexandertokarev
https://github.com/axibase/atsd-api-r <- R api
https://github.com/axibase/atsd-api-r/blob/master/forecast_and_save_series_example.md
https://github.com/axibase/atsd-api-r/blob/master/usage_example.md
https://axibase.com/products/axibase-time-series-database/architecture/
https://github.com/axibase/atsd-docs/blob/master/README.md#axibase-time-series-database-documentation
https://github.com/heinrichvk
https://github.com/axibase/nmon
https://github.com/axibase/atsd-api-java
http://gigaom.com/2013/10/26/why-the-economics-of-cloud-storage-arent-always-what-they-seem/
''the NSA challenge'' http://blog.backblaze.com/2009/11/12/nsa-might-want-some-backblaze-pods/
''howto build the storage pods''
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/
http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/
http://blog.backblaze.com/2013/02/20/180tb-of-good-vibrations-storage-pod-3-0/
list of background processes
https://www.linkedin.com/pulse/list-background-processes-oracle-database-12c-akash-pramanik
http://www.morganslibrary.org/reference/processes.html
{{{
$ cat backup.sh
. $HOME/.bash_profile
export ORAENV_ASK=NO
export ORACLE_SID=$1
export BACKUPDIR=/reco/rman/$ORACLE_SID
export PATH=$ORACLE_HOME/bin:$PATH
export DATE=$(date +%Y%m%d%H%M)
. oraenv
rman <<EOF
CONNECT TARGET
SHOW ALL;
REPORT SCHEMA;
LIST BACKUP OF DATABASE;
REPORT NEED BACKUP;
REPORT UNRECOVERABLE;
LIST EXPIRED BACKUP BY FILE;
LIST ARCHIVELOG ALL;
REPORT OBSOLETE;
CROSSCHECK BACKUP DEVICE TYPE DISK;
CROSSCHECK COPY OF ARCHIVELOG ALL;
DELETE NOPROMPT EXPIRED BACKUP DEVICE TYPE DISK;
DELETE NOPROMPT OBSOLETE DEVICE TYPE DISK;
DELETE NOPROMPT ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE DISK COMPLETED BEFORE 'SYSDATE-30';
#BACKUP DEVICE TYPE DISK VALIDATE CHECK LOGICAL DATABASE;
run
{
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 60 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '$BACKUPDIR/%d-%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2 G FORMAT '$BACKUPDIR/%d.%T.%s.%p.bkp';
CONFIGURE MAXSETSIZE TO UNLIMITED;
CONFIGURE ENCRYPTION FOR DATABASE OFF;
CONFIGURE ENCRYPTION ALGORITHM 'AES128';
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '$BACKUPDIR/snapcf-$ORACLE_SID.bkp';
BACKUP AS COMPRESSED BACKUPSET DEVICE TYPE DISK INCREMENTAL LEVEL 0 TAG 'level0-$DATE' DATABASE PLUS ARCHIVELOG;
BACKUP DEVICE TYPE DISK SPFILE TAG 'spfile-$DATE' FORMAT '$BACKUPDIR/spfile.%d.%T.%t.bkp';
BACKUP AS COPY CURRENT CONTROLFILE TAG 'control-$DATE' FORMAT '$BACKUPDIR/control-$ORACLE_SID-$DATE.bkp';
BACKUP AS COPY CURRENT CONTROLFILE FOR STANDBY TAG 'control-standby-$DATE' FORMAT '$BACKUPDIR/control-standby-$ORACLE_SID-$DATE.bkp';
}
#RESTORE DEVICE TYPE DISK VALIDATE CHECK LOGICAL DATABASE;
quit
EOF
export RTNCDE=$?
echo return code from RMAN is $RTNCDE
if [[ $RTNCDE -eq 0 ]]
then
echo FULL DATABASE BACKUP COMPLETED SUCCESSFULLY
else
echo ERROR ON DATABASE BACKUP
mailx -s "ERROR ON $ORACLE_SID BACKUP" karlarao@gmail.com </dev/null
fi
}}}
11gR2
{{{
backup recovery area to destination '/mnt/remote/backups/orasql/FRA_BACKUP/'
}}}
http://gavinsoorma.com/2012/08/rman-11g-new-feature-backup-fast-recovery-area-fra-to-disk/
Rman Enhancements In Oracle 11g. (Doc ID 1115423.1)
Top RMAN 11.2 issues (Doc ID 1473914.1)
http://ilovemydbajob.blogspot.com/2012/04/rman-enhancements-in-11g.html
{{{
CREATE TABLE restou_staging.et_enrollment_tracking_bkup1
STORED AS ORC
LOCATION
'/sdge/staging/working/cisco/et_enrollment_tracking_bkup1'
AS SELECT m.*
FROM restou_derived.et_enrollment_tracking m;
}}}
http://www.balsamiq.com/products/mockups
http://rimblas.com/blog/2014/08/a-wireframe-is-worth-a-1000-words/
! balsamiq resources
https://mockupstogo.mybalsamiq.com/projects
https://docs.balsamiq.com/desktop/text/
https://support.balsamiq.com/desktop/accountassets/
https://support.balsamiq.com/tutorials/
{{{
#!/usr/bin/env bash
# from http://www.annasyme.com/docs/bash_script_structure.html
#Example of bash script structure
#............................................................................
# Defaults
set -e #exit if a command exits with non zero status
script=$(basename $0) #script name less file path
threads=16
#............................................................................
# Functions
function msg {
echo -e "$*"
}
# $* is args passed in, -e means interpret backslashes
function msg_banner {
printf '%*s\n' "${COLUMNS:-$(tput cols)}" '' | tr ' ' -
msg "$*"
printf '%*s\n' "${COLUMNS:-$(tput cols)}" '' | tr ' ' -
}
#prints a message inside dashed lines
function usage {
msg "$script\n This script does stuff."
msg "Usage:\n $script [options] R1.fq.gz"
msg "Parameters:"
msg " R1 reads R1.fq.gz"
msg "Options:"
msg " -h Show this help"
msg " -t NUM Number of threads (default=16)"
exit 1
#exits script
}
#...........................................................................
# Parse the command line options
#loops through, sets variable or runs function
#have to add in a colon after the flag, if it's looking for an arg
#e.g. t: means flag t is set, then look for the arg (eg 32) to set $threads to
# : at the start disables default error handling for this bit
#instead it will label an odd flag as a ?
#with the leading : a missing argument will be reported silently, with opt set to :
while getopts ':ht:' opt ; do
case $opt in
h)
usage
;;
t)
threads=$OPTARG
;;
\?)
echo "Invalid option '=$OPTARG'"
exit 1
;;
:)
echo "Option '-$OPTARG' requires an argument"
exit 1
;;
esac
done
shift $((OPTIND-1))
#remove all options that has been parsed by getopts
#so that $1 refers to next arg passed to script
#e.g. a positional arg
if [ $# -ne 1 ]; then
msg "\n **Please provide one input parameter** \n"
usage
fi
#if number of pos params is not 1, print usage msg
#...........................................................................
#start
msg "\n"
msg_banner "now running $script"
#...........................................................................
#check inputs
#give variable names to the input
raw_R1=$1
msg "This script will use:"
msg " Illumina R1 reads: $raw_R1" #positional arg
msg " Threads: $threads" #option setting or default
#...........................................................................
msg_banner "now activating conda env with tools needed"
source activate bio
#this activates the conda env called bio
#note - use source not conda
conda env export --name bio > conda_env_bio.yml
#saves this conda env
#...........................................................................
msg_banner "now getting seq stats"
seqkit stats $raw_R1 > R1.stats
msg "Script finished."
}}}
this serves as an index for every sizing that I've done so far..
https://www.evernote.com/shard/s48/sh/27d9ef58-e064-4b51-9a6b-d6f26998578a/fd5d0510612f0cf35b4ecad85c8fbdef
https://blog.chartio.com/posts/traditional-bi-buzzwords-for-modern-data-teams
https://www.infoq.com/articles/ml-data-processing
Recent trends in data and machine learning technologies - Ben Lorica (O'Reilly Media)
https://www.youtube.com/watch?v=xnf4Dr8regw&list=PLklSCDzsQHdn76gvqjszRugFuPhN-7BdL&index=3&t=0s
! 2021
!! The Big Book of Machine Learning Use Cases
https://pages.databricks.com/rs/094-YMS-629/images/The%20Big%20Book%20of%20Machine%20Learning%20Use%20Case.pdf
!! cache
https://www.scylladb.com/wp-content/uploads/wp-7-reasons-cache.pdf
!! vmware
https://www.vmware.com/content/dam/learn/en/amer/fy21/pdf/549356_intel_modern_infrastructure_for_dummies.pdf
!! segment
https://gopages.segment.com/rs/667-MPQ-382/images/Segment-Customer-Data-Platform-Report-2020.pdf
!! snowflake
https://info.snowflake.net/rs/252-RFO-227/images/SnowflakeDataeBook.pdf
https://www.tableau.com/sites/default/files/media/Whitepapers/whitepaper_top_10_big_data_trends_2017.pdf
https://www.tableau.com/reports/business-intelligence-trends
http://bigbinary.com/videos
<<showtoc>>
! docs
https://cloud.google.com/bigquery/docs/information-schema-jobs
! build a report like BI publisher
https://codelabs.developers.google.com/codelabs/bigquery-pricing-workshop/#4
archived - https://web.archive.org/web/20200523055544/https://codelabs.developers.google.com/codelabs/bigquery-pricing-workshop/#4
! get previous job_id performance (past 180 days)
<<<
so the "AWR" of jobs info are on information schema and kept for 180 days
https://cloud.google.com/bigquery/docs/information-schema-jobs
then you can do queries like this to find the topN and get the query
plug the job info to bq visualizer and you'll get the details. as long as it's within 180days
was able to pull my 1st SQL on my dev project
and jobs are logged (flushed) on information schema in a few seconds
{{{
get example-dev-284123:US.bquxjob_282b3d9d_17378ee854d
SELECT
project_id, user_email, job_id, statement_type, start_time, end_time, query, state, total_bytes_processed, total_slot_ms
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
ORDER BY start_time ASC;
}}}
<<<
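a quick topN sketch off the same view (my sketch, ordering by slot usage; columns are the ones used above):
{{{
SELECT job_id, user_email, query, total_bytes_processed, total_slot_ms
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
ORDER BY total_slot_ms DESC
LIMIT 10;
}}}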
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/89816699-043b1b00-db15-11ea-900f-4e506a8f3465.png]]
.
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/89816700-04d3b180-db15-11ea-96a4-a51a26934600.png]]
.
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/89816701-04d3b180-db15-11ea-8b1e-d28b165f1993.png]]
<<showtoc>>
! install packages
{{{
pip install google-cloud-bigquery
pip install google-cloud-bigquery-datatransfer
pip install google-cloud-bigquery-storage
pip list | grep google
google-api-core 1.22.0
google-auth 1.19.2
google-cloud-bigquery 1.26.0
google-cloud-bigquery-datatransfer 1.1.0
google-cloud-bigquery-storage 1.0.0
google-cloud-core 1.3.0
google-resumable-media 0.5.1
googleapis-common-protos 1.52.0
}}}
! configure credentials
{{{
python test.py
Traceback (most recent call last):
File "test.py", line 5, in <module>
client = bigquery.Client()
File "/Users/kristofferson.a.arao/.pyenv/versions/py385/lib/python3.8/site-packages/google/cloud/bigquery/client.py", line 178, in __init__
super(Client, self).__init__(
File "/Users/kristofferson.a.arao/.pyenv/versions/py385/lib/python3.8/site-packages/google/cloud/client.py", line 226, in __init__
_ClientProjectMixin.__init__(self, project=project)
File "/Users/kristofferson.a.arao/.pyenv/versions/py385/lib/python3.8/site-packages/google/cloud/client.py", line 178, in __init__
project = self._determine_default(project)
File "/Users/kristofferson.a.arao/.pyenv/versions/py385/lib/python3.8/site-packages/google/cloud/client.py", line 193, in _determine_default
return _determine_default_project(project)
File "/Users/kristofferson.a.arao/.pyenv/versions/py385/lib/python3.8/site-packages/google/cloud/_helpers.py", line 186, in _determine_default_project
_, project = google.auth.default()
File "/Users/kristofferson.a.arao/.pyenv/versions/py385/lib/python3.8/site-packages/google/auth/_default.py", line 338, in default
raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started
}}}
https://cloud.google.com/bigquery/docs/reference/libraries
<<showtoc>>
! cost
The Economic Advantages of Google BigQuery versus Alternative Cloud-based EDW Solutions
https://services.google.com/fh/files/blogs/esg_economic_validation_google_bigquery_vs_cloud-based-edws-september_2019.pdf
! multi region performance
https://medium.com/weareservian/impact-of-dataset-locations-on-bigquery-query-execution-performance-ea1ffdf071fe
<<showtoc>>
! No matching signature for operator > for argument types: STRING, INT64.
fix
{{{
SAFE_CAST(ADM_RATE_ALL AS FLOAT64)
Had we not included the cast, we would have received an error:
No matching signature for operator > for argument types: STRING, INT64.
Had we simply cast as a float, it would have failed on a row where the value was a string (PrivacySuppressed) that cannot be cast as a float:
Bad double value: PrivacySuppressed; while executing the filter ...
This is because the automatic schema detection did not identify the admission rate column as numeric. Instead, that column is being treated as a string because, in some of the rows, the value is suppressed for privacy reasons (e.g., if the number of applications is very small) and replaced by the text PrivacySuppressed. Indeed, even the median family income is a string (it happens to always be numeric for colleges that meet the criteria we outlined), and so we need to cast it before ordering
}}}
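for context, a minimal end-to-end sketch of the fix; the dataset/table (and the INSTNM column) are placeholders standing in for the college scorecard data the error came from:
{{{
-- ch04.college_scorecard and INSTNM are hypothetical names for illustration
SELECT INSTNM, SAFE_CAST(ADM_RATE_ALL AS FLOAT64) AS adm_rate
FROM ch04.college_scorecard
WHERE SAFE_CAST(ADM_RATE_ALL AS FLOAT64) > 0.5
ORDER BY adm_rate DESC;
}}}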
https://stackoverflow.com/questions/40538737/any-jdbc-driver-for-google-bigquery-standard-sql
https://cloud.google.com/bigquery/docs/reference/standard-sql/enabling-standard-sql#sql-prefix
! simba driver
https://cloud.google.com/bigquery/providers/simba-drivers
! connecting to bq using dbeaver
https://justnumbersandthings.com/post/2018-09-22-dbeaver-bigquery/
<<showtoc>>
! doc
https://cloud.google.com/bigquery/docs/reference/auditlogs
! logs viewer
https://console.cloud.google.com/logs/dashboard?orgonly=true&project=example-dev-284123&supportedpurview=organizationId
<<showtoc>>
! teradata official doc
!! SET and MULTISET
https://docs.teradata.com/reader/rgAb27O_xRmMVc_aQq2VGw/ZAb34NX2bdG0HpkqUsmUmQ
! sql converter
https://roboquery.com/app/
https://roboquery.com/
! bigquery datatypes teradata
https://cloud.google.com/solutions/migration/dw2bq/td2bq/td-bq-sql-translation-reference-tables
https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types
! query band - for sql tracking
Using the BLOCKCOMPRESSION Reserved Query Band https://docs.teradata.com/reader/scPHvjfglIlB8F70YliLAw/~iM4rfkST9SWmEwqW4jXPQ
http://preshing.com/20140127/what-is-a-bitcoin-really/ <-- very nice and concise explanation of bitcoin
http://en.wikipedia.org/wiki/Satoshi_Nakamoto
http://en.wikipedia.org/wiki/Gavin_Andresen
http://en.wikipedia.org/wiki/History_of_Bitcoin
http://www.cs.kent.edu/~JAVED/class-P2P12F/papers-2012/PAPER2012-p2p-bitcoin-satoshinakamoto.pdf
https://dioncho.wordpress.com/tag/block-class/
{{{
select rownum, class from v$waitstat;
ROWNUM CLASS
---------- ------------------
1 data block
2 sort block
3 save undo block
4 segment header
5 save undo header
6 free list
7 extent map
8 1st level bmb
9 2nd level bmb
10 3rd level bmb
11 bitmap block
12 bitmap index block
13 file header block <-- Here!
14 unused
15 system undo header
16 system undo block
17 undo header
18 undo block
}}}
<<showtoc>>
{{{
select segment_name, segment_Type
from dba_extents
where file_id = 8
and 2 between block_id and block_id + blocks - 1
and rownum = 1
;
alter system dump datafile 8 block 2;
create tablespace test_uniform size 200M;
alter tablespace test_uniform uniform size 100M;
create tablespace test_allocate;
create tablespace test_allocate SIZE 200m
EXTENT management local autoallocate segment space management auto;
CREATE BIGFILE TABLESPACE TBS_UNIFORM DATAFILE '+DATHC_DM2B' SIZE 1G AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED
SEGMENT SPACE MANAGEMENT AUTO
UNIFORM SIZE 64m
/
create tablespace test_uniform2 datafile '+DATA/oltp/datafile/test1.dbf' SIZE 200M
AUTOEXTEND ON NEXT 100M
EXTENT management local uniform size 100M;
create tablespace test_allocate2 datafile '+DATA/oltp/datafile/test2.dbf' SIZE 200M
AUTOEXTEND ON NEXT 100M
EXTENT management local autoallocate segment space management auto;
11:29:25 SYS@oltp1> select tablespace_name, initial_Extent, next_extent, max_extents, extent_management, segment_space_management from dba_tablespaces;
TABLESPACE_NAME INITIAL_EXTENT NEXT_EXTENT MAX_EXTENTS EXTENT_MAN SEGMEN
------------------------------ -------------- ----------- ----------- ---------- ------
TEST_UNIFORM2 104857600 104857600 2147483645 LOCAL AUTO
TEST_ALLOCATE2 65536 2147483645 LOCAL AUTO
create tablespace test_uniform3 datafile '+DATA/oltp/datafile/test3.dbf' SIZE 200M
AUTOEXTEND ON NEXT 100M uniform size 100M;
select table_name, initial_Extent, next_extent, max_extents from dba_tables where table_name like 'TEST_UNIFORM%';
create table test_uniform3_tbl tablespace test_uniform3 initial extent size 100M next_extent 100M;
CREATE TABLE test_uniform3_tbl (
empno NUMBER(5) )
TABLESPACE test_uniform3
STORAGE ( initial 100M next 100M );
TABLE_NAME INITIAL_EXTENT NEXT_EXTENT MAX_EXTENTS
------------------------------ -------------- ----------- -----------
TEST_UNIFORM3_TBL 104857600 104857600 2147483645
CREATE TABLE test_uniform3_tbl2 (
empno NUMBER(5) )
TABLESPACE test_uniform3 ;
TABLE_NAME INITIAL_EXTENT NEXT_EXTENT MAX_EXTENTS
------------------------------ -------------- ----------- -----------
TEST_UNIFORM3_TBL2 104857600 104857600 2147483645
TEST_UNIFORM3_TBL 104857600 104857600 2147483645
1048576
104857600
}}}
! get object id from block dump
{{{
14:46:34 SYS@oltp1> select * from dba_extents where segment_name = 'TEST_UNIFORM3_TBL';
OWNER SEGMENT_NAME PARTITION_NAME SEGMENT_TYPE TABLESPACE_NAME EXTENT_ID FILE_ID BLOCK_ID BYTES BLOCKS
------------------------------ --------------------------------------------------------------------------------- ------------------------------ ------------------ ------------------------------ ---------- ---------- ---------- ---------- ----------
RELATIVE_FNO
------------
SYS TEST_UNIFORM3_TBL TABLE TEST_UNIFORM3 0 16 512 104857600 12800
16
14:47:13 SYS@oltp1> alter system dump datafile 16 block 512;
System altered.
$ less oltp1_ora_112161.trc
LIST OF BUFFERS LINKED TO THIS GLOBAL CACHE ELEMENT:
flg: 0x02200000 state: XCURRENT tsn: 16 tsh: 1
addr: 0x263b95518 obj: 123090 cls: 1ST LEVEL BMB bscn: 0x0.1cc91a21
14:49:18 SYS@oltp1> select object_name from dba_objects where object_id = 123090;
OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
TEST_UNIFORM3_TBL
}}}
! get object id from a background process
https://community.oracle.com/thread/2246696?tstart=0
{{{
Object id on Block? Y
seg/obj: 0x8 csc: 0xcec.1a80431a itc: 4 flg: - typ: 1 - DATA
fsl: 0 fnx: 0x0 ver: 0x01
Object id on Block? Y
seg/obj: 0x8 csc: 0xcec.1cb994be itc: 6 flg: O typ: 1 - DATA
fsl: 3 fnx: 0x5d3ca2 ver: 0x01
convert the 0x8 from hex to decimal
15:02:30 SYS@oltp1> select object_name from dba_objects where DATA_OBJECT_ID = 8;
OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
UET$
SEG$
C_FILE#_BLOCK#
13 UET$
14 SEG$
8 C_FILE#_BLOCK#
}}}
! references
https://blogs.oracle.com/corruptions/entry/master_noter_for_handling_oracle_database_corruption_issues_doc_id_10880181
https://community.oracle.com/thread/2190726?tstart=0
http://kamranagayev.com/2011/10/31/dumping-an-oracle-data-block-reading-the-output-and-doing-some-math/
https://oraclue.com/2008/11/06/how-to-dump-corrupted-oracle-database-blocks/
http://blog.tanelpoder.com/files/scripts/dba.sql
http://blog.tanelpoder.com/files/scripts/dba2.sql
http://www.orafaq.com/wiki/Data_block_address
http://www.orafaq.com/wiki/Data_block
https://hemantoracledba.blogspot.com/2013/06/getting-rowids-present-in-block.html
http://www.oracle-wiki.net/startsqllistobjectbyblockfile
AUTOALLOCATE vs UNIFORM SIZE https://community.oracle.com/thread/2518951?tstart=0
http://it.toolbox.com/blogs/david/oracle-extent-allocation-autoallocate-vs-uniform-15340
dump block
https://community.oracle.com/thread/2190726?tstart=0
https://dioncho.wordpress.com/2009/07/06/object-name-from-file-and-block/
https://sites.google.com/site/ukja/sql-scripts-1/s-z/which_obj2
https://blogs.oracle.com/sysdba/entry/how_to_dump_oracle_data_block
db_block_size question
http://forums.oracle.com/forums/thread.jspa?messageID=4526681
How to increase block size of database from 4kb to 16kb
http://forums.oracle.com/forums/thread.jspa?threadID=1116397&start=0&tstart=0
https://www.linkedin.com/pulse/blockchain-tables-extend-oracle-database-multi-model-rakhmilevich/?trackingId=yZ8mfPgKo36435bg2u2SRw%3D%3D
https://blogs.oracle.com/jrose/entry/bloom_filters_in_a_nutshell
http://juliandontcheff.wordpress.com/2012/08/28/bloom-filters-for-dbas/
http://www.oaktable.net/category/tags/bloom-filters
https://groups.google.com/forum/#!msg/comp.databases.oracle.server/pZrH_-CyrCs/2LXYMhdP6I0J
http://perfexpert2.wordpress.com/2011/05/21/undocumented-parameter/
http://en.wikipedia.org/wiki/Bloom_filter
http://antognini.ch/papers/BloomFilters20080620.pdf
https://martinfowler.com/bliki/BlueGreenDeployment.html
https://phauer.com/2015/databases-challenge-continuous-delivery/
* to install on OEL, need to enable the EPEL repository
https://linux.die.net/man/1/bmon
https://github.com/tgraf/bmon
https://www.putorius.net/bmon-monitor-bandwidth-linux-command-line.html
https://github.com/tgraf/bmon/issues/9
{{{
Added new -b/--use-bit
At the moment bmon uses an all-or-nothing approach to units. Every field is either MiB, or MB, or Mbit.
But most of the time, when it comes to network "speed", it is customary to use bits. And in order to display the "amount" transferred, base2 MiB or base10 MB are usually used. (religion thing, really)
It would be nice to allow configuring the unit scale per field, or at least allow customizing it separately for the "speed" and "amount" fields.
P.S. bmon is hands down the best console bandwidth monitor I've encountered so far, using it on numerous embedded openwrt routers :) Good job!
}}}
https://www.tecmint.com/bmon-network-bandwidth-monitoring-debugging-linux/
https://www.tecmint.com/linux-network-bandwidth-monitoring-tools/
Sync Oracle to BigQuery With Golden Gate BigQuery Adapter https://medium.com/searce/sync-oracle-to-bigquery-with-golden-gate-bigquery-adapter-59991bbdb5e3
<<showtoc>>
! the error
<<<
errors with Exceeded rate limits: too many table update operations for this table. For more information, see https://cloud.google.com/bigquery/troubleshooting-errors
* this occurs on both TEMP and PERMANENT tables
* this occurs with both stored procedures and plain consecutive UPDATE SQLs
* so you really have to do set-based operations as much as possible and watch the DMLs, avoiding consecutive runs of small fast ones
* the docs say there are only 5000 DML operations per day
* the consecutive UPDATE test was just 18 statements; I was looking through the docs for which limit I was hitting, and it seems BigQuery derived my "rate" of execution from those 18 and then just killed it
<<<
! two patterns test case
!! test1 - TEMP table - insert select cursor loop through every row
* execute this twice on the same table and you'll hit the error; with a new table name you'll likewise hit the limit after two executions
{{{
BEGIN
DECLARE x INT64 DEFAULT 1;
DECLARE z INT64 DEFAULT 0;
CREATE TEMP TABLE temp_country_final (country string);
# cursor loop through every rownum
CREATE TEMP TABLE temp_country AS
SELECT country, RANK() OVER(ORDER BY country) rownum
FROM (SELECT DISTINCT country
FROM `bigquery-public-data.faa.us_airports`)
order by country;
SET z= (SELECT COUNT(*) FROM temp_country);
WHILE x<=z DO
insert into temp_country_final SELECT country
FROM temp_country
WHERE rownum = x;
SET x=x+1;
END WHILE;
select * from temp_country_final;
END;
}}}
!! test 2 - PERMANENT table - 18 consecutive UPDATE
{{{
CREATE OR REPLACE TABLE dataset01.bucket_locations
AS
SELECT 'ASIA-EAST1' bucket_location
UNION ALL SELECT 'ASIA-NORTHEAST2' bucket_location;
select * from dataset01.bucket_locations;
#update several times
UPDATE dataset01.bucket_locations
SET bucket_location = "US"
WHERE UPPER(bucket_location) LIKE "US%";
UPDATE dataset01.bucket_locations
SET bucket_location = "TW"
WHERE UPPER(bucket_location) LIKE "ASIA-EAST1%";
UPDATE dataset01.bucket_locations
SET bucket_location = "JP"
WHERE UPPER(bucket_location) LIKE "ASIA-NORTHEAST1%";
UPDATE dataset01.bucket_locations
SET bucket_location = "HK"
WHERE UPPER(bucket_location) LIKE "ASIA-EAST2%";
UPDATE dataset01.bucket_locations
SET bucket_location = "JP"
WHERE UPPER(bucket_location) LIKE "ASIA-NORTHEAST2%";
UPDATE dataset01.bucket_locations
SET bucket_location = "KR"
WHERE UPPER(bucket_location) LIKE "ASIA-NORTHEAST3%";
UPDATE dataset01.bucket_locations
SET bucket_location = "IN"
WHERE UPPER(bucket_location) LIKE "ASIA-SOUTH1%";
UPDATE dataset01.bucket_locations
SET bucket_location = "SG"
WHERE UPPER(bucket_location) LIKE "ASIA-SOUTHEAST1%";
UPDATE dataset01.bucket_locations
SET bucket_location = "AU"
WHERE UPPER(bucket_location) LIKE "AUSTRALIA%";
UPDATE dataset01.bucket_locations
SET bucket_location = "FI"
WHERE UPPER(bucket_location) LIKE "EUROPE-NORTH1%";
UPDATE dataset01.bucket_locations
SET bucket_location = "BE"
WHERE UPPER(bucket_location) LIKE "EUROPE-WEST1%";
UPDATE dataset01.bucket_locations
SET bucket_location = "GB"
WHERE UPPER(bucket_location) LIKE "EUROPE-WEST2%";
UPDATE dataset01.bucket_locations
SET bucket_location = "DE"
WHERE UPPER(bucket_location) LIKE "EUROPE-WEST3%";
UPDATE dataset01.bucket_locations
SET bucket_location = "NL"
WHERE UPPER(bucket_location) LIKE "EUROPE-WEST4%";
UPDATE dataset01.bucket_locations
SET bucket_location = "CH"
WHERE UPPER(bucket_location) LIKE "EUROPE-WEST6%";
UPDATE dataset01.bucket_locations
SET bucket_location = "CA"
WHERE UPPER(bucket_location) LIKE "NORTHAMERICA%";
UPDATE dataset01.bucket_locations
SET bucket_location = "BR"
WHERE UPPER(bucket_location) LIKE "SOUTHAMERICA%";
--# refactored set operation
begin
select * from dataset01.bucket_locations;
CREATE TEMP TABLE `mappings`
AS
SELECT *
FROM UNNEST(
# add mappings
[STRUCT('US' AS abbr, 'US%' AS long), ('TW', 'ASIA-EAST1%'), ('JP', 'ASIA-NORTHEAST2%')]);
select * from mappings;
UPDATE dataset01.bucket_locations
SET bucket_location = abbr
FROM mappings
WHERE UPPER(bucket_location) LIKE long;
select * from dataset01.bucket_locations;
end;
}}}
! references
https://stackoverflow.com/questions/55844004/goolgebigquery-exceeded-rate-limits
{{{
SELECT
*
FROM
`bigquery-public-data.github_repos.__TABLES__`;
SELECT
project_id,
dataset_id,
table_id,
creation_time,
last_modified_time,
timestamp_millis(last_modified_time) as last_modified_utc,
timestamp_diff(current_timestamp(), timestamp_millis(last_modified_time), day) last_modified_diff_day,
row_count,
size_bytes,
type
FROM
`bigquery-public-data.github_repos.__TABLES__`;
}}}
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91462165-de717e00-e857-11ea-92da-947575a926f3.png]]
<<showtoc>>
! doc
https://cloud.google.com/bigquery/docs/access-control-examples
! cross dataset access
https://cloud.google.com/bigquery/docs/dataset-access-controls#controlling_access_to_a_dataset
on the dataset, click SHARE and add the service account ID "client_email": "example-dev-svc@iam.gserviceaccount.com"
! limit access to datasets
Is it possible to limit a Google service account to specific BigQuery datasets within a project
https://stackoverflow.com/questions/59736056/is-it-possible-to-limit-a-google-service-account-to-specific-bigquery-datasets-w
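newer BigQuery also exposes these grants as SQL DCL; a sketch (the project, dataset, and service account names are placeholders reusing the note above):
{{{
-- dataset-scoped read access via DCL; all names below are placeholders
GRANT `roles/bigquery.dataViewer`
ON SCHEMA `example-dev-284123.dataset01`
TO "serviceAccount:example-dev-svc@iam.gserviceaccount.com";
}}}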
https://github.com/sungchun12/airflow-toolkit
Fetch results from BigQueryOperator in airflow https://stackoverflow.com/questions/53565834/fetch-results-from-bigqueryoperator-in-airflow
https://docs.ansible.com/ansible/latest/modules/gcp_bigquery_table_module.html
https://github.com/sungchun12/schedule-python-script-using-Google-Cloud
authorized views - limit access to users
https://cloud.google.com/bigquery/docs/share-access-views
! 1st edition
this book is old but has a lot of good stuff and python code examples with api usage. also more detailed explanations on architectural decisions and features of bigquery and how stuff works
{{{
https://learning.oreilly.com/library/view/google-bigquery-analytics/9781118824795/
}}}
! 2nd edition
i skipped chapters and read the following
love the book; the github repo contains a lot of examples, and Lak has some updates through his blog which get a separate folder in the repo
the security chapter was pretty useful and up to date. they showed encryption, data catalog, etc.
{{{
4. Loading Data into BigQuery
5. Developing with BigQuery
7. Optimizing Performance and Cost
8. Advanced Queries
10. Administering and Securing BigQuery
}}}
<<<
https://github.com/GoogleCloudPlatform/bigquery-oreilly-book
https://medium.com/@lakshmanok
https://github.com/googleapis/python-bigquery/tree/master/samples
<<<
<<<
build a report like BI publisher
https://codelabs.developers.google.com/codelabs/bigquery-pricing-workshop/#4
archived - https://web.archive.org/web/20200523055544/https://codelabs.developers.google.com/codelabs/bigquery-pricing-workshop/#4
<<<
<<<
create_bq_load_info_schema.sh https://gist.github.com/geoffmc/6d0309773c8c1a8d81aa538f5695b345
create_bq_load_audit.sh https://gist.github.com/geoffmc/bb4d52566045e904b1b907777044c1b6
https://github.com/vicenteg
<<<
<<showtoc>>
! doc
https://cloud.google.com/bigquery/docs/column-level-security-intro
<<showtoc>>
! the easiest is to use dominic giles DATA GENERATOR java tool
https://www.dominicgiles.com/datagenerator.html
! or use google's tool
https://github.com/GoogleCloudPlatform/professional-services/tree/master/examples/dataflow-data-generator
https://medium.com/google-cloud/yet-another-way-to-generate-fake-datasets-in-bigquery-93ee87c1008f
https://datarunsdeep.com.au/blog/flying-beagle/how-consistently-select-randomly-distributed-sample-rows-bigquery-table
.
https://rittmananalytics.com/blog/2018/08/27/date-partitioning-and-table-clustering-in-google-bigquery-and-looker-pdts
https://rittmananalytics.com/blog/tag/Big+Data
<<showtoc>>
! parse_date
https://cloud.google.com/bigquery/docs/reference/standard-sql/date_functions#parse_date
{{{
-- This works because elements on both sides match.
SELECT PARSE_DATE("%A %b %e %Y", "Thursday Dec 25 2008")
-- This doesn't work because the year element is in different locations.
SELECT PARSE_DATE("%Y %A %b %e", "Thursday Dec 25 2008")
-- This doesn't work because one of the year elements is missing.
SELECT PARSE_DATE("%A %b %e", "Thursday Dec 25 2008")
-- This works because %F can find all matching elements in date_string.
SELECT PARSE_DATE("%F", "2000-12-30")
}}}
similar to the parse_date functions of data fusion and data prep https://cloud.google.com/dataprep/docs/html/PARSEDATE-Function_145272352
! supported_format_elements_for_date
https://cloud.google.com/bigquery/docs/reference/standard-sql/date_functions#supported_format_elements_for_date
! parse_timestamp
https://cloud.google.com/bigquery/docs/reference/standard-sql/timestamp_functions#parse_timestamp
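same idea as PARSE_DATE; a minimal sketch:
{{{
SELECT PARSE_TIMESTAMP("%Y-%m-%d %H:%M:%S", "2008-12-25 07:30:00")
}}}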
<<showtoc>>
! references
https://medium.com/weareservian/bigquery-dbt-modern-problems-require-modern-solutions-b40faedc8aaf
https://towardsdatascience.com/get-started-with-bigquery-and-dbt-the-easy-way-36b9d9735e35
https://medium.com/weareservian/bigquery-dbt-modern-problems-require-modern-solutions-b40faedc8aaf
https://rittmananalytics.com/blog/2020/2/8/multichannel-attribution-bigquery-dbt-looker-segment
https://medium.com/google-cloud/loading-and-transforming-data-into-bigquery-using-dbt-65307ad401cd
! examples
https://github.com/GoogleCloudPlatform/bigquery-oreilly-book/tree/master/blogs/dbt_load , https://medium.com/google-cloud/loading-and-transforming-data-into-bigquery-using-dbt-65307ad401cd
https://github.com/sungchun12/dbt_bigquery_example
! dbt community forum
https://discourse.getdbt.com/t/modelling-multiple-bigquery-projects-with-dbt/1115/3
https://medium.com/google-cloud/using-bigquery-flex-slots-to-run-machine-learning-workloads-more-efficiently-7fc7f400f7a7
https://github.com/GoogleCloudPlatform/bigquery-oreilly-book/blob/master/blogs/flex_slots/run_query_on_flex_slots.sh
<<showtoc>>
! also check
on private wiki - @@.bq query patterns@@ and @@dynamic sql@@
! the general pattern
{{{
BEGIN
DECLARE SQLScript STRING DEFAULT '';
set SQLScript =(
-- put your SQL here
);
EXECUTE IMMEDIATE SQLScript;
END;
}}}
! first working example
{{{
BEGIN
DECLARE SQLScript STRING DEFAULT '';
set SQLScript =(
select 'SELECT ' || string_agg(column_name) || ' from `project.personas_ind_corporate2.canonical_user_ctas`'
from `project.personas_ind_corporate2.INFORMATION_SCHEMA.COLUMNS`
where table_name = 'canonical_user_ctas'
and column_name in ('canonical_user_id','first_name'));
EXECUTE IMMEDIATE SQLScript;
END;
}}}
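EXECUTE IMMEDIATE also takes INTO and USING clauses; a parameterized sketch of the same pattern (reusing the public faa table from the rate-limit test case elsewhere in these notes):
{{{
BEGIN
DECLARE airport_count INT64;
-- ? is a positional parameter bound by USING
EXECUTE IMMEDIATE
  "SELECT COUNT(*) FROM `bigquery-public-data.faa.us_airports` WHERE country LIKE ?"
  INTO airport_count
  USING 'U%';
SELECT airport_count;
END;
}}}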
<<showtoc>>
! doc
https://cloud.google.com/bigquery/docs/encryption-at-rest
! AEAD encryption
https://github.com/google/tink
https://cloud.google.com/bigquery/docs/reference/standard-sql/aead-encryption-concepts
! crypto shredding
https://medium.com/google-cloud/bigquery-encryption-functions-part-i-data-deletion-retention-with-crypto-shredding-7085ecf6e53f
https://medium.com/google-cloud/end-to-end-crypto-shredding-part-ii-data-deletion-retention-with-crypto-shredding-a67f5300a8c8
! Customer-Managed Encryption Keys for BigQuery
Customer-Managed Encryption Keys for BigQuery https://www.youtube.com/watch?v=-dlv9wJheF8
https://cloud.google.com/bigquery/docs/customer-managed-encryption
! other references
https://github.com/google/encrypted-bigquery-client/blob/master/tutorial.md <- EBQ (old stuff)
https://stackoverflow.com/questions/53274323/field-level-encryption-in-big-query?rq=1
https://stackoverflow.com/questions/57723775/how-can-i-decrypt-columns-in-bigquery
! how to create view
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91067834-6873d980-e601-11ea-8974-944f8e61f15e.png]]
! how to upload data from google sheets
* if you add a column on the sheet, then you need to add a column on the table by editing the DDL
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91065084-11203a00-e5fe-11ea-8e0f-f9c005b263d2.png]]
.
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91065094-12e9fd80-e5fe-11ea-8e61-d8b40e54b0e4.png]]
.
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91065092-12516700-e5fe-11ea-9bba-4013964e0140.png]]
https://cloud.google.com/bigquery/docs/reference/standard-sql/json_functions
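quick sketch of the two extract functions I reach for most (literal JSON string input):
{{{
SELECT
JSON_EXTRACT('{"a": {"b": "hello"}}', '$.a') AS json_fragment, -- returns '{"b":"hello"}'
JSON_EXTRACT_SCALAR('{"a": {"b": "hello"}}', '$.a.b') AS scalar_value -- returns 'hello'
}}}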
! translation reference
snow to bq https://cloud.google.com/architecture/dw2bq/snowflake/snowflake-bq-sql-translation-reference
teradata to bq https://cloud.google.com/architecture/dw2bq/td2bq/td-bq-sql-translation-reference-tables
redshift to bq https://cloud.google.com/architecture/dw2bq/redshift/redshift-bq-sql-translation-reference
oracle to bq https://cloud.google.com/architecture/dw2bq/oracle/oracle-bq-sql-translation-reference
netezza to bq https://cloud.google.com/architecture/dw2bq/netezza/netezza-bq-sql-translation-reference
.
https://adtmag.com/articles/2020/07/16/google-cloud-bigquery-omni.aspx
https://cloud.google.com/blog/products/data-analytics/introducing-bigquery-omni
<<showtoc>>
! Performance Patterns
[img(30%,30%)[ https://user-images.githubusercontent.com/3683046/90551877-fc582800-e15f-11ea-9789-661c3f78966d.png ]]
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91461786-6a36da80-e857-11ea-8131-d3746211ed5f.png]]
https://rittmananalytics.com/blog/tag/Bigquery
! drilltodetail podcast
!! done
* Drill to Detail Ep.64 ‘Google BigQuery, BI Engine and the Future of Data Warehousing’ with Special Guest Jordan Tigani https://rittmananalytics.com/drilltodetail/2019/4/29/drill-to-detail-ep64-google-bigquery-bi-engine-and-the-future-of-data-warehousing-with-special-guest-jordan-tigani
!! pending
* Drill to Detail Ep.31 'Dremel, Druid and Data Modeling on Google BigQuery' With Special Guest Dan McClary https://rittmananalytics.com/drilltodetail/2017/6/19/drill-to-detail-ep31-dremel-druid-and-data-modeling-on-google-bigquery-with-special-guest-dan-mcclary
* Drill to Detail Ep.41 'Developing with Google BigQuery and Google Cloud Platform' With Special Guest Felipe Hoffa https://rittmananalytics.com/drilltodetail/2017/10/30/drill-to-detail-ep41-developing-with-google-bigquery-and-google-cloud-platform-with-special-guest-filipe-hoffa
* Drill to Detail Ep.44 'Pandas, Apache Arrow and In-Memory Analytics' With Special Guest Wes McKinney https://rittmananalytics.com/drilltodetail/2017/12/8/drill-to-detail-ep44-pandas-apache-arrow-and-in-memory-analytics-with-special-guest-wes-mckinney
* Drill to Detail Ep.69 'Looker, Tableau and Consolidation in the BI Industry' featuring Special Guests Tristan Handy and Stewart Bryson https://rittmananalytics.com/drilltodetail/2019/6/23/drill-to-detail-ep69-looker-tableau-and-consolidation-in-the-bi-industry-featuring-special-guests-tristan-handy-and-stewart-bryson
* Drill to Detail Ep.66 'ETL, Incorta and the Death of the Star Schema' with Special Guest Matthew Halliday https://rittmananalytics.com/drilltodetail/2019/5/27/drill-to-detail-ep67-etl-incorta-and-the-death-of-the-star-schema-with-special-guest-matthew-halliday
* Drill to Detail Ep.39 'Gluent Update and Offloading to the Cloud' with Special Guest Michael Rainey https://rittmananalytics.com/drilltodetail/2017/10/3/drill-to-detail-ep39-gluent-update-and-offloading-to-the-cloud-with-special-guest-michael-rainey
[img(90%,90%)[https://user-images.githubusercontent.com/3683046/91461789-6acf7100-e857-11ea-8d63-1b227a60d051.png]]
..
<<showtoc>>
! docs
https://cloud.google.com/bigquery/docs/slots
! how
* storage costs for tables in a dataset are billed to the project the dataset belongs to (queries are charged to the project of the querier)
! limitations
* Be careful when choosing a region for loading data: as of this writing, queries cannot join tables held in different regions
! quotas and limits
https://cloud.google.com/bigquery/quotas
..
<<showtoc>>
! documentation
https://cloud.google.com/bigquery/docs/reference/standard-sql/scripting#bigquery_scripting
! limitations
<<<
* The maximum size of a variable is 1 MB, and the maximum size of all variables used in a script is 10 MB. https://cloud.google.com/bigquery/docs/reference/standard-sql/scripting#declare
<<<
! BQ scripting: Writing results of a loop to a table
https://stackoverflow.com/questions/58267001/bq-scripting-writing-results-of-a-loop-to-a-table
{{{
DECLARE d DATE DEFAULT DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY);
DECLARE pfix STRING;
DECLARE vis_count INT64;
DECLARE result ARRAY<STRUCT<vis_date DATE, vis_count INT64>> DEFAULT [];
CREATE OR REPLACE TABLE test.looped_results (Date DATE, Visits INT64);
WHILE d > '2019-10-01' DO
SET pfix = REGEXP_REPLACE(CAST(d AS STRING),"-","");
SET vis_count = (SELECT SUM(totals.visits) AS visits
FROM `project.dataset.ga_sessions_*`
WHERE _table_suffix = pfix);
SET result = ARRAY_CONCAT(result, [STRUCT(d, vis_count)]);
SET d = DATE_SUB(d, INTERVAL 1 DAY);
END WHILE;
INSERT INTO test.looped_results SELECT * FROM UNNEST(result);
}}}
{{{
DECLARE d DATE DEFAULT DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY);
DECLARE pfix STRING DEFAULT REGEXP_REPLACE(CAST(d AS STRING),"-","");
DECLARE vis_count INT64;
CREATE OR REPLACE TABLE test.looped_results (Date DATE, Visits INT64);
WHILE d > '2019-10-01' DO
SET vis_count = (SELECT SUM(totals.visits) AS visits
FROM `project.dataset.ga_sessions_*`
WHERE _table_suffix = pfix);
INSERT INTO test.looped_results VALUES (d, vis_count);
SET d = DATE_SUB(d, INTERVAL 1 DAY);
SET pfix = REGEXP_REPLACE(CAST(d AS STRING),"-","");
END WHILE;
}}}
! Setting Big Query variables like mysql
https://stackoverflow.com/questions/29759628/setting-big-query-variables-like-mysql
{{{
You could use a WITH clause. It's not ideal, but it gets the job done.
-- Set your variables here
WITH vars AS (
SELECT '2018-01-01' as from_date,
'2018-05-01' as to_date
)
-- Then use them by pulling from vars with a SELECT clause
SELECT *
FROM your_table
WHERE date_column BETWEEN
CAST((SELECT from_date FROM vars) as date)
AND
CAST((SELECT to_date FROM vars) as date)
Or even less wordy:
#standardSQL
-- Set your variables here
WITH vars AS (
SELECT DATE '2018-01-01' as from_date,
DATE '2018-05-01' as to_date
)
-- Then use them by pulling from vars with a SELECT clause
SELECT *
FROM your_table, vars
WHERE date_column BETWEEN from_date AND to_date
}}}
! bq script parent child job
<<<
When a script is run each statement becomes a separate job.
In the example the submitted script is considered the parent job and each of the 3 statements are represented as child jobs.
<<<
https://potensio.zendesk.com/hc/en-us/articles/360035075212-BigQuery-Scripting-Support
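the JOBS views carry a parent_job_id column, so a sketch like this lists the child jobs a script spawned (the parent job id below is a placeholder):
{{{
SELECT job_id, statement_type, start_time, end_time, total_slot_ms
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE parent_job_id = 'bquxjob_282b3d9d_17378ee854d' -- placeholder job id
ORDER BY start_time;
}}}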
! sample code - GetNextIds SEQUENCE
{{{
-- Generates next ids and inserts them into a sample table.
-- This implementation prevents against race condition.
--
-- @param INT64 id_count number of id to increase
-- @return ARRAY<int64> an array of generated ids
-- a sample id table
CREATE OR REPLACE TABLE dataset01.Ids (id INT64);
CREATE OR REPLACE PROCEDURE dataset01.GetNextIds (id_count INT64, OUT next_ids ARRAY<INT64>)
BEGIN
DECLARE id_start INT64 DEFAULT (SELECT COUNT(*) FROM dataset01.Ids);
SET next_ids = GENERATE_ARRAY(id_start, id_start + id_count);
MERGE dataset01.Ids
USING UNNEST(next_ids) AS new_id
ON id = new_id
WHEN MATCHED THEN UPDATE SET id = ERROR(FORMAT('Race when adding ID %t', new_id))
WHEN NOT MATCHED THEN INSERT VALUES (new_id);
END;
-- a unit test of GetNextIds
BEGIN
DECLARE i INT64 DEFAULT 1;
DECLARE next_ids ARRAY<INT64> DEFAULT [];
DECLARE ids ARRAY<INT64> DEFAULT [];
WHILE i < 10 DO
CALL dataset01.GetNextIds(0, next_ids);
SET ids = ARRAY_CONCAT(ids, next_ids);
SET i = i + 1;
END WHILE;
SELECT FORMAT('IDs are: %t', ids);
END;
select * from dataset01.Ids;
delete from dataset01.Ids where 1=1;
}}}
! TEMP table - insert select cursor loop through every row - rownum, row_number
see [[bq Exceeded rate limits too many table update]]
! PERMANENT table - 18 consecutive UPDATE
see [[bq Exceeded rate limits too many table update]]
! IF/While Loop chunk 500 records using row_number
https://stackoverflow.com/questions/60157270/google-bigquery-if-while-loop
<<<
One of the challenges while making this loop is that you cannot use variables in the LIMIT and the OFFSET clause. So, I used ROW_NUMBER() to create a column that I could use to process with the WHERE clause:
{{{
WHERE row_number BETWEEN offset_ AND offset_ + limit_
}}}
If you want to read more about ROW_NUMBER() I recommend to check this SO answer (https://stackoverflow.com/a/16534965/7517757).
Finally, if you want to use this approach, consider that there are some caveats like scripting being a Beta feature, and possible quota issues depending on how often you insert data into your temporary table. Also, since the query changes in each iteration, the first time you run it, and it is not cached, the bytes_processed will be number_of_iterations*byte_size of the table
<<<
{{{
DECLARE offset_ INT64 DEFAULT 1; -- OFFSET starts in 1 BASED on ROW NUMBER ()
DECLARE limit_ INT64 DEFAULT 500; -- Size of the chunks to be processed
DECLARE size_ INT64 DEFAULT 7000; -- Size of the data (used for the condition in the WHILE loop)
-- Table to be processed. I'm creating this new temporary table to use it as an example
CREATE TEMPORARY TABLE IF NOT EXISTS data_numbered AS (
SELECT *, ROW_NUMBER() OVER() row_number
FROM (SELECT * FROM `bigquery-public-data.stackoverflow.users` LIMIT 7000)
);
-- WHILE loop
WHILE offset_ < size_ DO
IF offset_ = 1 THEN -- OPTIONAL, create the temporary table in the first iteration
CREATE OR REPLACE TEMPORARY TABLE temp_table AS (
SELECT * FROM data_numbered
WHERE row_number BETWEEN offset_ AND offset_ + limit_ - 1 -- Use offset and limit to control the chunks of data
);
ELSE
-- This is the same query as above.
-- Each iteration will fill the temporary table
-- Iteration
-- 501 - 1000
-- 1001 - 1500
-- ...
INSERT INTO temp_table (
SELECT * FROM data_numbered WHERE row_number BETWEEN offset_ AND offset_ + limit_ - 1 -- -1 because BETWEEN is inclusive, so it helps to avoid duplicated values in the edges
);
END IF;
-- Adjust the offset_ variable
SET offset_ = offset_ + limit_;
END WHILE;
}}}
! references
https://cloud.google.com/blog/products/data-analytics/command-and-control-now-easier-in-bigquery-with-scripting-and-stored-procedures
https://stackoverflow.com/questions/59728269/t-sql-cursors-alternate-in-bigquery
Iterate over rows in Stored Procedure BigQuery https://stackoverflow.com/questions/59396443/iterate-over-rows-in-stored-procedure-bigquery
Row number in BigQuery https://stackoverflow.com/questions/11057219/row-number-in-bigquery/16534965#16534965
Cursors in BigQuery https://stackoverflow.com/questions/59937084/cursors-in-bigquery
How to pass table name variable into query https://www.reddit.com/r/bigquery/comments/fdlxsw/how_to_pass_table_name_variable_into_query/
Calling a stored procedure + using the output in a query https://www.reddit.com/r/bigquery/comments/idhxi2/calling_a_stored_procedure_using_the_output_in_a/
! bigquery SO peeps
https://stackoverflow.com/users/5221944/mikhail-berlyant
https://stackoverflow.com/users/7517757/tlaquetzal
.
<<showtoc>>
! examples
{{{
-- covid data
SELECT
date, SUM(confirmed) num_reports
FROM `bigquery-public-data.covid19_open_data.compatibility_view`
WHERE ST_Distance(ST_GeogPoint(longitude, latitude),
ST_GeogPoint(103.8, 1.4)) < 200*1000 -- 200km
GROUP BY date
HAVING num_reports IS NOT NULL AND num_reports > 0
ORDER BY date ASC
;
-- nyc bike data
-- one-way rentals by year, month
SELECT
EXTRACT(YEAR FROM starttime) AS year,
EXTRACT(MONTH FROM starttime) AS month,
COUNT(starttime) AS number_one_way
FROM
`bigquery-public-data`.new_york_citibike.citibike_trips
WHERE
start_station_name != end_station_name
GROUP BY year, month
ORDER BY year ASC, month ASC
;
}}}
! how to pin a public dataset
https://console.cloud.google.com/marketplace/product/bitcoin/crypto-bitcoin?filter=solution-type:dataset&project=example-dev-284123
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91063628-3c098e80-e5fc-11ea-9876-4cbeefc94c2c.png]]
make sure to select crypto_bitcoin
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91063639-3dd35200-e5fc-11ea-9a9a-e3c01ba6431d.png]]
.
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91063634-3d3abb80-e5fc-11ea-8a8f-92685cb75734.png]]
.
How to get detailed Big Query error by using PYTHON https://stackoverflow.com/questions/57039771/how-to-get-detailed-big-query-error-by-using-python/57280504#57280504
https://googleapis.dev/python/bigquery/latest/generated/google.cloud.bigquery.job.LoadJob.html
{{{
# assumes `job` is a google.cloud.bigquery load/query job object
from google.api_core.exceptions import BadRequest

try:
    job.result()  # wait for the job to finish; raises on failure
except BadRequest:
    for err in job.errors:  # job.errors holds the detailed error records
        print('ERROR: {}'.format(err['message']))
Output:
ERROR: Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1; errors: 1. Please look into the errors[] collection for more details.
ERROR: Error while reading data, error message: JSON processing encountered too many errors, giving up. Rows: 1; errors: 1; max bad: 0; error percent: 0
ERROR: Error while reading data, error message: JSON parsing error in row starting at position 0: No such field: SourceSystem.
}}}
<<showtoc>>
! loading the modules
https://ipython.readthedocs.io/en/stable/config/extensions/index.html?highlight=load_ext#using-extensions
{{{
%load_ext google.cloud.bigquery
}}}
! list of JUPYTER MAGIC
https://googleapis.dev/python/bigquery/latest/magics.html <- bigquery magics
https://ipython.readthedocs.io/en/stable/interactive/magics.html <- general magics
https://towardsdatascience.com/the-top-5-magic-commands-for-jupyter-notebooks-2bf0c5ae4bb8
{{{
In [1]: %load_ext google.cloud.bigquery
In [2]: %%bigquery
...: SELECT name, SUM(number) as count
...: FROM `bigquery-public-data.usa_names.usa_1910_current`
...: GROUP BY name
...: ORDER BY count DESC
...: LIMIT 3
Out[2]: name count
...: -------------------
...: 0 James 4987296
...: 1 John 4866302
...: 2 Robert 4738204
In [3]: %%bigquery df --project my-alternate-project --verbose
...: SELECT name, SUM(number) as count
...: FROM `bigquery-public-data.usa_names.usa_1910_current`
...: WHERE gender = 'F'
...: GROUP BY name
...: ORDER BY count DESC
...: LIMIT 3
Executing query with job ID: bf633912-af2c-4780-b568-5d868058632b
Query executing: 2.61s
Query complete after 2.92s
In [4]: df
Out[4]: name count
...: ----------------------
...: 0 Mary 3736239
...: 1 Patricia 1568495
...: 2 Elizabeth 1519946
In [5]: %%bigquery --params {"num": 17}
...: SELECT @num AS num
Out[5]: num
...: -------
...: 0 17
In [6]: params = {"num": 17}
In [7]: %%bigquery --params $params
...: SELECT @num AS num
Out[7]: num
...: -------
...: 0 17
}}}
! downloading large data to pandas
https://cloud.google.com/bigquery/docs/bigquery-storage-python-pandas
https://googleapis.dev/python/bigquery/1.17.0/usage/pandas.html
https://www.kaggle.com/sohier/how-to-integrate-bigquery-pandas
{{{
Download large query results with the BigQuery Storage API by adding the --use_bqstorage_api argument to the %%bigquery magics.
%%bigquery tax_forms --use_bqstorage_api
SELECT * FROM `bigquery-public-data.irs_990.irs_990_2012`
}}}
! Migrating from pandas-gbq
https://cloud.google.com/bigquery/docs/pandas-gbq-migration
https://friendliness.dev/2019/07/29/bigquery-arrow/
* use_bqstorage_api is fast because it uses apache arrow under the hood
{{{
import pandas
sql = "SELECT * FROM `bigquery-public-data.irs_990.irs_990_2012`"
# Use the BigQuery Storage API to download results more quickly.
df = pandas.read_gbq(sql, dialect='standard', use_bqstorage_api=True)
}}}
* add tink for encryption
{{{
pandas
google-cloud-bigquery[pandas]
google-cloud-bigquery-storage
pandas-gbq
fastavro
tqdm
pyarrow
jupytext
jupyter
ipython
pip install -r requirements.txt
}}}
{{{
pyenv virtualenvs
2.7.9/envs/py279 (created from /Users/kristofferson.a.arao/.pyenv/versions/2.7.9)
3.7.0/envs/flask (created from /Users/kristofferson.a.arao/.pyenv/versions/3.7.0)
3.7.0/envs/py370 (created from /Users/kristofferson.a.arao/.pyenv/versions/3.7.0)
3.7.2/envs/py372 (created from /Users/kristofferson.a.arao/.pyenv/versions/3.7.2/Python.framework/Versions/3.7)
3.8.5/envs/py385 (created from /Users/kristofferson.a.arao/.pyenv/versions/3.8.5)
flask (created from /Users/kristofferson.a.arao/.pyenv/versions/3.7.0)
py279 (created from /Users/kristofferson.a.arao/.pyenv/versions/2.7.9)
py370 (created from /Users/kristofferson.a.arao/.pyenv/versions/3.7.0)
py372 (created from /Users/kristofferson.a.arao/.pyenv/versions/3.7.2/Python.framework/Versions/3.7)
py385 (created from /Users/kristofferson.a.arao/.pyenv/versions/3.8.5)
pyenv virtualenv 3.8.5 py385_2
Looking in links: /var/folders/wq/8lk144454f30tp7dbr1g42mskx2f84/T/tmprc1c58kl
Requirement already satisfied: setuptools in /Users/kristofferson.a.arao/.pyenv/versions/3.8.5/envs/py385_2/lib/python3.8/site-packages (47.1.0)
Requirement already satisfied: pip in /Users/kristofferson.a.arao/.pyenv/versions/3.8.5/envs/py385_2/lib/python3.8/site-packages (20.1.1)
pyenv activate py385_2
pip list
Package Version
---------- -------
pip 20.1.1
setuptools 47.1.0
vi requirements.txt
pandas
google-cloud-bigquery[pandas]
google-cloud-bigquery-storage
pandas-gbq
fastavro
tqdm
pyarrow
jupytext
jupyter
ipython
pip install -r requirements.txt
Collecting pandas
Downloading pandas-1.1.1-cp38-cp38-macosx_10_9_x86_64.whl (10.6 MB)
|████████████████████████████████| 10.6 MB 2.2 MB/s
Collecting google-cloud-bigquery[pandas]
Downloading google_cloud_bigquery-1.27.2-py2.py3-none-any.whl (172 kB)
|████████████████████████████████| 172 kB 34.6 MB/s
Collecting google-cloud-bigquery-storage
Using cached google_cloud_bigquery_storage-1.0.0-py2.py3-none-any.whl (133 kB)
Collecting pandas-gbq
Downloading pandas_gbq-0.13.2-py3-none-any.whl (24 kB)
Collecting fastavro
Downloading fastavro-1.0.0.post1-cp38-cp38-macosx_10_14_x86_64.whl (435 kB)
|████████████████████████████████| 435 kB 35.9 MB/s
Collecting tqdm
Downloading tqdm-4.48.2-py2.py3-none-any.whl (68 kB)
|████████████████████████████████| 68 kB 13.7 MB/s
Collecting pyarrow
Downloading pyarrow-1.0.1-cp38-cp38-macosx_10_9_x86_64.whl (11.0 MB)
|████████████████████████████████| 11.0 MB 14.8 MB/s
Collecting jupytext
Downloading jupytext-1.6.0.tar.gz (683 kB)
|████████████████████████████████| 683 kB 39.2 MB/s
Collecting jupyter
Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB)
Collecting ipython
Downloading ipython-7.18.1-py3-none-any.whl (786 kB)
|████████████████████████████████| 786 kB 39.0 MB/s
Collecting numpy>=1.15.4
Using cached numpy-1.19.1-cp38-cp38-macosx_10_9_x86_64.whl (15.3 MB)
Collecting python-dateutil>=2.7.3
Using cached python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
Collecting pytz>=2017.2
Using cached pytz-2020.1-py2.py3-none-any.whl (510 kB)
Collecting google-resumable-media<2.0dev,>=0.5.0
Downloading google_resumable_media-1.0.0-py2.py3-none-any.whl (42 kB)
|████████████████████████████████| 42 kB 2.6 MB/s
Collecting google-cloud-core<2.0dev,>=1.4.1
Downloading google_cloud_core-1.4.1-py2.py3-none-any.whl (26 kB)
Collecting google-api-core<2.0dev,>=1.21.0
Downloading google_api_core-1.22.2-py2.py3-none-any.whl (91 kB)
|████████████████████████████████| 91 kB 15.5 MB/s
Collecting six<2.0.0dev,>=1.13.0
Using cached six-1.15.0-py2.py3-none-any.whl (10 kB)
Requirement already satisfied: setuptools in /Users/kristofferson.a.arao/.pyenv/versions/3.8.5/envs/py385_2/lib/python3.8/site-packages (from pandas-gbq->-r requirements.txt (line 4)) (47.1.0)
Collecting google-auth-oauthlib
Downloading google_auth_oauthlib-0.4.1-py2.py3-none-any.whl (18 kB)
Collecting pydata-google-auth
Downloading pydata_google_auth-1.1.0-py2.py3-none-any.whl (13 kB)
Collecting google-auth
Downloading google_auth-1.21.1-py2.py3-none-any.whl (93 kB)
|████████████████████████████████| 93 kB 4.0 MB/s
Collecting nbformat>=4.0.0
Using cached nbformat-5.0.7-py3-none-any.whl (170 kB)
Collecting pyyaml
Downloading PyYAML-5.3.1.tar.gz (269 kB)
|████████████████████████████████| 269 kB 34.0 MB/s
Collecting toml
Downloading toml-0.10.1-py2.py3-none-any.whl (19 kB)
Collecting markdown-it-py~=0.5.2
Downloading markdown_it_py-0.5.3-py3-none-any.whl (111 kB)
|████████████████████████████████| 111 kB 53.7 MB/s
Collecting nbconvert
Using cached nbconvert-5.6.1-py2.py3-none-any.whl (455 kB)
Collecting qtconsole
Downloading qtconsole-4.7.7-py2.py3-none-any.whl (118 kB)
|████████████████████████████████| 118 kB 39.6 MB/s
Collecting ipywidgets
Using cached ipywidgets-7.5.1-py2.py3-none-any.whl (121 kB)
Collecting notebook
Downloading notebook-6.1.3-py3-none-any.whl (9.4 MB)
|████████████████████████████████| 9.4 MB 27.6 MB/s
Collecting ipykernel
Using cached ipykernel-5.3.4-py3-none-any.whl (120 kB)
Collecting jupyter-console
Downloading jupyter_console-6.2.0-py3-none-any.whl (22 kB)
Collecting pickleshare
Using cached pickleshare-0.7.5-py2.py3-none-any.whl (6.9 kB)
Collecting pygments
Using cached Pygments-2.6.1-py3-none-any.whl (914 kB)
Collecting decorator
Using cached decorator-4.4.2-py2.py3-none-any.whl (9.2 kB)
Collecting appnope; sys_platform == "darwin"
Using cached appnope-0.1.0-py2.py3-none-any.whl (4.0 kB)
Collecting backcall
Using cached backcall-0.2.0-py2.py3-none-any.whl (11 kB)
Collecting prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0
Downloading prompt_toolkit-3.0.7-py3-none-any.whl (355 kB)
|████████████████████████████████| 355 kB 44.4 MB/s
Collecting pexpect>4.3; sys_platform != "win32"
Using cached pexpect-4.8.0-py2.py3-none-any.whl (59 kB)
Collecting jedi>=0.10
Using cached jedi-0.17.2-py2.py3-none-any.whl (1.4 MB)
Collecting traitlets>=4.2
Downloading traitlets-5.0.4-py3-none-any.whl (98 kB)
|████████████████████████████████| 98 kB 11.0 MB/s
Collecting google-crc32c<2.0dev,>=1.0; python_version >= "3.5"
Downloading google-crc32c-1.0.0.tar.gz (10 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing wheel metadata ... done
Collecting protobuf>=3.12.0
Downloading protobuf-3.13.0-cp38-cp38-macosx_10_9_x86_64.whl (1.3 MB)
|████████████████████████████████| 1.3 MB 20.3 MB/s
Collecting requests<3.0.0dev,>=2.18.0
Using cached requests-2.24.0-py2.py3-none-any.whl (61 kB)
Collecting googleapis-common-protos<2.0dev,>=1.6.0
Using cached googleapis_common_protos-1.52.0-py2.py3-none-any.whl (100 kB)
Collecting requests-oauthlib>=0.7.0
Downloading requests_oauthlib-1.3.0-py2.py3-none-any.whl (23 kB)
Collecting cachetools<5.0,>=2.0.0
Using cached cachetools-4.1.1-py3-none-any.whl (10 kB)
Collecting rsa<5,>=3.1.4; python_version >= "3.5"
Using cached rsa-4.6-py3-none-any.whl (47 kB)
Collecting pyasn1-modules>=0.2.1
Using cached pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
Collecting ipython-genutils
Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB)
Collecting jupyter-core
Using cached jupyter_core-4.6.3-py2.py3-none-any.whl (83 kB)
Collecting jsonschema!=2.5.0,>=2.4
Using cached jsonschema-3.2.0-py2.py3-none-any.whl (56 kB)
Collecting attrs<21,>=19
Downloading attrs-20.2.0-py2.py3-none-any.whl (48 kB)
|████████████████████████████████| 48 kB 7.9 MB/s
Collecting defusedxml
Using cached defusedxml-0.6.0-py2.py3-none-any.whl (23 kB)
Collecting entrypoints>=0.2.2
Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB)
Collecting jinja2>=2.4
Using cached Jinja2-2.11.2-py2.py3-none-any.whl (125 kB)
Collecting pandocfilters>=1.4.1
Using cached pandocfilters-1.4.2.tar.gz (14 kB)
Collecting mistune<2,>=0.8.1
Using cached mistune-0.8.4-py2.py3-none-any.whl (16 kB)
Collecting bleach
Using cached bleach-3.1.5-py2.py3-none-any.whl (151 kB)
Collecting testpath
Using cached testpath-0.4.4-py2.py3-none-any.whl (163 kB)
Collecting qtpy
Using cached QtPy-1.9.0-py2.py3-none-any.whl (54 kB)
Collecting pyzmq>=17.1
Downloading pyzmq-19.0.2-cp38-cp38-macosx_10_9_x86_64.whl (806 kB)
|████████████████████████████████| 806 kB 31.3 MB/s
Collecting jupyter-client>=4.1
Downloading jupyter_client-6.1.7-py3-none-any.whl (108 kB)
|████████████████████████████████| 108 kB 43.0 MB/s
Collecting widgetsnbextension~=3.5.0
Using cached widgetsnbextension-3.5.1-py2.py3-none-any.whl (2.2 MB)
Collecting tornado>=5.0
Using cached tornado-6.0.4.tar.gz (496 kB)
Collecting prometheus-client
Using cached prometheus_client-0.8.0-py2.py3-none-any.whl (53 kB)
Collecting Send2Trash
Using cached Send2Trash-1.5.0-py3-none-any.whl (12 kB)
Collecting argon2-cffi
Downloading argon2_cffi-20.1.0-cp37-abi3-macosx_10_6_intel.whl (65 kB)
|████████████████████████████████| 65 kB 11.7 MB/s
Collecting terminado>=0.8.3
Using cached terminado-0.8.3-py2.py3-none-any.whl (33 kB)
Collecting wcwidth
Using cached wcwidth-0.2.5-py2.py3-none-any.whl (30 kB)
Collecting ptyprocess>=0.5
Using cached ptyprocess-0.6.0-py2.py3-none-any.whl (39 kB)
Collecting parso<0.8.0,>=0.7.0
Downloading parso-0.7.1-py2.py3-none-any.whl (109 kB)
|████████████████████████████████| 109 kB 37.3 MB/s
Collecting cffi>=1.0.0
Using cached cffi-1.14.2-cp38-cp38-macosx_10_9_x86_64.whl (176 kB)
Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1
Using cached urllib3-1.25.10-py2.py3-none-any.whl (127 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2020.6.20-py2.py3-none-any.whl (156 kB)
Collecting chardet<4,>=3.0.2
Using cached chardet-3.0.4-py2.py3-none-any.whl (133 kB)
Collecting idna<3,>=2.5
Using cached idna-2.10-py2.py3-none-any.whl (58 kB)
Collecting oauthlib>=3.0.0
Downloading oauthlib-3.1.0-py2.py3-none-any.whl (147 kB)
|████████████████████████████████| 147 kB 10.7 MB/s
Collecting pyasn1>=0.1.3
Using cached pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
Collecting pyrsistent>=0.14.0
Using cached pyrsistent-0.16.0.tar.gz (108 kB)
Collecting MarkupSafe>=0.23
Using cached MarkupSafe-1.1.1-cp38-cp38-macosx_10_9_x86_64.whl (16 kB)
Collecting webencodings
Using cached webencodings-0.5.1-py2.py3-none-any.whl (11 kB)
Collecting packaging
Using cached packaging-20.4-py2.py3-none-any.whl (37 kB)
Collecting pycparser
Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting pyparsing>=2.0.2
Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Using legacy setup.py install for jupytext, since package 'wheel' is not installed.
Using legacy setup.py install for pyyaml, since package 'wheel' is not installed.
Using legacy setup.py install for pandocfilters, since package 'wheel' is not installed.
Using legacy setup.py install for tornado, since package 'wheel' is not installed.
Using legacy setup.py install for pyrsistent, since package 'wheel' is not installed.
Building wheels for collected packages: google-crc32c
Building wheel for google-crc32c (PEP 517) ... done
Created wheel for google-crc32c: filename=google_crc32c-1.0.0-py3-none-any.whl size=9916 sha256=3774faf6bd519e605eb0b724394527c5ef6e2b8514d1718fc671c71916eacfe8
Stored in directory: /Users/kristofferson.a.arao/Library/Caches/pip/wheels/80/8c/fa/7a4feadbb9c2aa7bada4831f00fdcb58fe0d5efc3ebb5d7fd0
Successfully built google-crc32c
Installing collected packages: numpy, six, python-dateutil, pytz, pandas, pycparser, cffi, google-crc32c, google-resumable-media, cachetools, pyasn1, rsa, pyasn1-modules, google-auth, protobuf, urllib3, certifi, chardet, idna, requests, googleapis-common-protos, google-api-core, google-cloud-core, google-cloud-bigquery, google-cloud-bigquery-storage, oauthlib, requests-oauthlib, google-auth-oauthlib, pydata-google-auth, pandas-gbq, fastavro, tqdm, pyarrow, ipython-genutils, traitlets, jupyter-core, attrs, pyrsistent, jsonschema, nbformat, pyyaml, toml, markdown-it-py, jupytext, defusedxml, entrypoints, MarkupSafe, jinja2, pandocfilters, mistune, webencodings, pyparsing, packaging, bleach, pygments, testpath, nbconvert, qtpy, pyzmq, tornado, jupyter-client, appnope, pickleshare, decorator, backcall, wcwidth, prompt-toolkit, ptyprocess, pexpect, parso, jedi, ipython, ipykernel, qtconsole, prometheus-client, Send2Trash, argon2-cffi, terminado, notebook, widgetsnbextension, ipywidgets, jupyter-console, jupyter
Running setup.py install for pyrsistent ... done
Running setup.py install for pyyaml ... done
Running setup.py install for jupytext ... done
Running setup.py install for pandocfilters ... done
Running setup.py install for tornado ... done
Successfully installed MarkupSafe-1.1.1 Send2Trash-1.5.0 appnope-0.1.0 argon2-cffi-20.1.0 attrs-20.2.0 backcall-0.2.0 bleach-3.1.5 cachetools-4.1.1 certifi-2020.6.20 cffi-1.14.2 chardet-3.0.4 decorator-4.4.2 defusedxml-0.6.0 entrypoints-0.3 fastavro-1.0.0.post1 google-api-core-1.22.2 google-auth-1.21.1 google-auth-oauthlib-0.4.1 google-cloud-bigquery-1.27.2 google-cloud-bigquery-storage-1.0.0 google-cloud-core-1.4.1 google-crc32c-1.0.0 google-resumable-media-1.0.0 googleapis-common-protos-1.52.0 idna-2.10 ipykernel-5.3.4 ipython-7.18.1 ipython-genutils-0.2.0 ipywidgets-7.5.1 jedi-0.17.2 jinja2-2.11.2 jsonschema-3.2.0 jupyter-1.0.0 jupyter-client-6.1.7 jupyter-console-6.2.0 jupyter-core-4.6.3 jupytext-1.6.0 markdown-it-py-0.5.3 mistune-0.8.4 nbconvert-5.6.1 nbformat-5.0.7 notebook-6.1.3 numpy-1.19.1 oauthlib-3.1.0 packaging-20.4 pandas-1.1.1 pandas-gbq-0.13.2 pandocfilters-1.4.2 parso-0.7.1 pexpect-4.8.0 pickleshare-0.7.5 prometheus-client-0.8.0 prompt-toolkit-3.0.7 protobuf-3.13.0 ptyprocess-0.6.0 pyarrow-1.0.1 pyasn1-0.4.8 pyasn1-modules-0.2.8 pycparser-2.20 pydata-google-auth-1.1.0 pygments-2.6.1 pyparsing-2.4.7 pyrsistent-0.16.0 python-dateutil-2.8.1 pytz-2020.1 pyyaml-5.3.1 pyzmq-19.0.2 qtconsole-4.7.7 qtpy-1.9.0 requests-2.24.0 requests-oauthlib-1.3.0 rsa-4.6 six-1.15.0 terminado-0.8.3 testpath-0.4.4 toml-0.10.1 tornado-6.0.4 tqdm-4.48.2 traitlets-5.0.4 urllib3-1.25.10 wcwidth-0.2.5 webencodings-0.5.1 widgetsnbextension-3.5.1
WARNING: You are using pip version 20.1.1; however, version 20.2.2 is available.
You should consider upgrading via the '/Users/kristofferson.a.arao/.pyenv/versions/3.8.5/envs/py385_2/bin/python3.8 -m pip install --upgrade pip' command.
pip list
Package Version
----------------------------- -----------
appnope 0.1.0
argon2-cffi 20.1.0
attrs 20.2.0
backcall 0.2.0
bleach 3.1.5
cachetools 4.1.1
certifi 2020.6.20
cffi 1.14.2
chardet 3.0.4
decorator 4.4.2
defusedxml 0.6.0
entrypoints 0.3
fastavro 1.0.0.post1
google-api-core 1.22.2
google-auth 1.21.1
google-auth-oauthlib 0.4.1
google-cloud-bigquery 1.27.2
google-cloud-bigquery-storage 1.0.0
google-cloud-core 1.4.1
google-crc32c 1.0.0
google-resumable-media 1.0.0
googleapis-common-protos 1.52.0
idna 2.10
ipykernel 5.3.4
ipython 7.18.1
ipython-genutils 0.2.0
ipywidgets 7.5.1
jedi 0.17.2
Jinja2 2.11.2
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.7
jupyter-console 6.2.0
jupyter-core 4.6.3
jupytext 1.6.0
markdown-it-py 0.5.3
MarkupSafe 1.1.1
mistune 0.8.4
nbconvert 5.6.1
nbformat 5.0.7
notebook 6.1.3
numpy 1.19.1
oauthlib 3.1.0
packaging 20.4
pandas 1.1.1
pandas-gbq 0.13.2
pandocfilters 1.4.2
parso 0.7.1
pexpect 4.8.0
pickleshare 0.7.5
pip 20.1.1
prometheus-client 0.8.0
prompt-toolkit 3.0.7
protobuf 3.13.0
ptyprocess 0.6.0
pyarrow 1.0.1
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.20
pydata-google-auth 1.1.0
Pygments 2.6.1
pyparsing 2.4.7
pyrsistent 0.16.0
python-dateutil 2.8.1
pytz 2020.1
PyYAML 5.3.1
pyzmq 19.0.2
qtconsole 4.7.7
QtPy 1.9.0
requests 2.24.0
requests-oauthlib 1.3.0
rsa 4.6
Send2Trash 1.5.0
setuptools 47.1.0
six 1.15.0
terminado 0.8.3
testpath 0.4.4
toml 0.10.1
tornado 6.0.4
tqdm 4.48.2
traitlets 5.0.4
urllib3 1.25.10
wcwidth 0.2.5
webencodings 0.5.1
widgetsnbextension 3.5.1
WARNING: You are using pip version 20.1.1; however, version 20.2.2 is available.
You should consider upgrading via the '/Users/kristofferson.a.arao/.pyenv/versions/3.8.5/envs/py385_2/bin/python3.8 -m pip install --upgrade pip' command.
jupyter notebook
[I 11:34:10.126 NotebookApp] [Jupytext Server Extension] Deriving a JupytextContentsManager from LargeFileManager
[I 11:34:10.128 NotebookApp] Serving notebooks from local directory: /Users/kristofferson.a.arao/gcp-bigquery/practical-bigquery
[I 11:34:10.128 NotebookApp] Jupyter Notebook 6.1.3 is running at:
[I 11:34:10.128 NotebookApp] http://localhost:8888/?token=902486cdcaa7d5813ac3efd1174dc6913d2fa061fa865246
[I 11:34:10.128 NotebookApp] or http://127.0.0.1:8888/?token=902486cdcaa7d5813ac3efd1174dc6913d2fa061fa865246
[I 11:34:10.128 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 11:34:10.142 NotebookApp]
}}}
https://stackoverflow.com/questions/51133540/how-do-i-use-labels-in-big-query-queries-to-track-cost
{{{
bq query \
--nouse_legacy_sql \
--label foo:bar \
'SELECT COUNT(*) FROM `bigquery-public-data`.samples.shakespeare'
Then you can use the filter below in Cloud Logging:
resource.type="bigquery_resource"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.labels.foo="bar"
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#jobconfiguration
}}}
! find the query on information_schema
{{{
SELECT
*
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE query like '%shake%'
ORDER BY start_time ASC;
SELECT *
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT, unnest(labels) l
where l.key = 'foo'
ORDER BY start_time DESC;
select * from
tink.table1 a, tink.table2 b
where a.string_field_2 = b.string_field_0
order by 1 asc;
}}}
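To actually track cost per label, the same JOBS_BY_PROJECT view can be rolled up by label value. A hedged sketch, assuming on-demand pricing at the $5/TiB list price (adjust for your region) and the 'foo' label from the example above:
{{{
SELECT
  l.value AS label_value,
  ROUND(SUM(total_bytes_billed) / POW(1024, 4) * 5, 2) AS approx_cost_usd
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT, UNNEST(labels) l
WHERE l.key = 'foo'
  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY l.value
ORDER BY approx_cost_usd DESC;
}}}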
<<showtoc>>
<<<
SAFE_OFFSET and SAFE_ORDINAL
https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#safe_offset_and_safe_ordinal
<<<
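The difference matters when the strings have a varying number of parts: the plain OFFSET/ORDINAL accessors raise an out-of-bounds error, while the SAFE_ variants return NULL. A quick sketch:
{{{
-- SAFE_OFFSET / SAFE_ORDINAL return NULL past the end of the array;
-- [OFFSET(9)] or [ORDINAL(10)] on the same 5-element array would error out
SELECT
  SPLIT('abc1-us-nv-dev-xyzzz1', '-')[SAFE_OFFSET(9)]   AS null_not_error,
  SPLIT('abc1-us-nv-dev-xyzzz1', '-')[SAFE_ORDINAL(10)] AS also_null;
}}}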
! bq SPLIT and SAFE_ORDINAL, dash delimited to cols
{{{
# split
WITH YourTable AS (
SELECT
'row1' AS rnum,
'abc1-us-nv-dev-xyzzz1' AS data_string
UNION ALL
SELECT
'row2',
'abc1-us-nv-dev-xyzzz1'
)
SELECT
*
FROM (
SELECT
* ,
SPLIT(data_string, '-') AS parts
FROM YourTable
);
WITH mytable AS (
SELECT
'row1' AS rnum,
'abc1-us-nv-dev-xyzzz1' AS hostname
UNION ALL
SELECT
'row2',
'abc1-us-nv-dev-xyzzz1'
)
SELECT
rnum,
hostname,
parts[OFFSET(0)] AS c1,
parts[OFFSET(1)] AS c2,
parts[OFFSET(2)] AS c3,
parts[OFFSET(3)] AS c4,
parts[OFFSET(4)] AS c5
FROM (
SELECT
* ,
SPLIT(hostname, '-') AS parts
FROM mytable
);
WITH mytable AS (
SELECT
'abc1-us-nv-dev-xyzzz1' AS hostname
UNION ALL
SELECT
'abc1-us-nv-dev-xyzzz1'
)
SELECT
hostname,
separate_the_cols[OFFSET(0)] AS c1,
separate_the_cols[OFFSET(1)] AS c2,
separate_the_cols[OFFSET(2)] AS c3,
separate_the_cols[OFFSET(3)] AS c4,
separate_the_cols[OFFSET(4)] AS c5
FROM (
SELECT
* ,
SPLIT(hostname, '-') AS separate_the_cols
FROM mytable
);
WITH
data as
(
SELECT "abc1-us-nv-dev-xyzzz1"AS value, "-" AS delimiter
UNION ALL
SELECT "a2-eu-v-prod-xyz2" AS value, "-" AS delimiter
)
SELECT
SPLIT(data.value, data.delimiter)[SAFE_ORDINAL(1)] part1,
SPLIT(data.value, data.delimiter)[SAFE_ORDINAL(2)] part2,
SPLIT(data.value, data.delimiter)[SAFE_ORDINAL(3)] part3,
SPLIT(data.value, data.delimiter)[SAFE_ORDINAL(4)] part4,
SPLIT(data.value, data.delimiter)[SAFE_ORDINAL(5)] part5
FROM data;
WITH mytable AS (
SELECT
'abc1-us-nv-dev-xyzzz1' AS hostname
UNION ALL
SELECT
'abc1-us-nv-dev-xyzzz1'
)
SELECT
hostname,
separate_the_cols[ORDINAL(1)] AS c1,
separate_the_cols[ORDINAL(2)] AS c2,
separate_the_cols[ORDINAL(3)] AS c3,
separate_the_cols[ORDINAL(4)] AS c4,
separate_the_cols[ORDINAL(5)] AS c5
FROM (
SELECT
* ,
SPLIT(hostname, '-') AS separate_the_cols
FROM mytable
);
}}}
{{{
WITH mytable AS (
SELECT
'abc1-us-nv-dev-xyzzz1' AS hostname
UNION ALL
SELECT
'abc1-us-nv-dev-xyzzz1'
)
SELECT
* ,
SPLIT(hostname, '-')[SAFE_OFFSET(0)] AS c1,
SPLIT(hostname, '-')[SAFE_OFFSET(1)] AS c2,
SPLIT(hostname, '-')[SAFE_OFFSET(2)] AS c3,
SPLIT(hostname, '-')[SAFE_OFFSET(3)] AS c4,
SPLIT(hostname, '-')[SAFE_OFFSET(4)] AS c5
FROM mytable
;
}}}
regex example
{{{
WITH hammads_table AS (
SELECT
'abc1-us-nv-dev-xyzzz1' AS hammads_string
UNION ALL
SELECT
'a2-eu-v-prod-xyz2'
)
select REGEXP_EXTRACT( hammads_string, r"([\w\d]+)-") c1 ,
REGEXP_EXTRACT( hammads_string, r"[\w\d]+-([\w\d]+)-" ) c2 ,
REGEXP_EXTRACT( hammads_string, r"[\w\d]+-[\w\d]+-([\w\d]+)-" ) c3 ,
REGEXP_EXTRACT( hammads_string, r"[\w\d]+-[\w\d]+-[\w\d]+-([\w\d]+)-" ) c4 ,
REGEXP_EXTRACT( hammads_string, r"[\w\d]+-[\w\d]+-[\w\d]+-[\w\d]+-([\w\d]+)" ) c5
from hammads_table;
}}}
https://stackoverflow.com/questions/35109451/how-to-access-saved-queries-programmatically
https://cloud.google.com/bigquery/docs/saving-sharing-queries
https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators
<<showtoc>>
! also check
on private wiki - @@.bq query patterns@@
! the query
{{{
-- Here's the new version of the query. Added the IFNULL because there are two
-- segment_ids with no birthday, gender, and name columns
-- ('use_BFEkP6q3zvgcXQYfjAQ1ZtSvaCl','use_UpzD3hYF7fV2QcbxRH61ZtEIR1x')
CREATE TEMP FUNCTION json2array(json STRING)
RETURNS ARRAY<STRING>
LANGUAGE js AS """
var replacer = function(k, v) { if (v === null) { return ""; } return v; };
try { return JSON.parse(json).map(x=>JSON.stringify(x, replacer));
} catch (e) { return null }
""";
SELECT * EXCEPT(array_dependents),
ARRAY(SELECT IFNULL(JSON_EXTRACT_SCALAR(x, '$.birthday'),"") FROM UNNEST(array_dependents) x) BIRTHDAY,
ARRAY(SELECT IFNULL(JSON_EXTRACT_SCALAR(x, '$.idNumber'),"") FROM UNNEST(array_dependents) x) ID_NUMBER,
ARRAY(SELECT IFNULL(JSON_EXTRACT_SCALAR(x, '$.gender'),"") FROM UNNEST(array_dependents) x) GENDER,
ARRAY(SELECT IFNULL(JSON_EXTRACT_SCALAR(x, '$.relationship'),"") FROM UNNEST(array_dependents) x) RELATIONSHIP,
ARRAY(SELECT IFNULL(JSON_EXTRACT_SCALAR(x, '$.name'),"") FROM UNNEST(array_dependents) x) NAME
FROM (
SELECT
id AS SEGMENT_ID
, marketing_program_number AS MKTNG_PGM_NBR
, json2array(JSON_EXTRACT(dependents, '$')) array_dependents
FROM `personas_fra_corporate.users_view`
where LENGTH(dependents)>3
)
}}}
<<showtoc>>
! vagrant gce provider
https://github.com/mitchellh/vagrant-google
https://github.com/mitchellh/vagrant-google/blob/master/vagrantfile_examples/Vagrantfile.multiple_machines <- examples
! example videos
https://www.youtube.com/watch?v=u4Y5eRg-feI&ab_channel=GoogleCloudPlatform <- Partnering on open source: Vagrant and GCP - GOOD STUFF, shows "provisioning script", "synced folders" examples and more
! vagrant boxes
https://learn.hashicorp.com/tutorials/vagrant/getting-started-boxes
https://app.vagrantup.com/boxes/search?utf8=%E2%9C%93&sort=downloads&provider=&q=gce
! step by step example vagrant provisioning on GCE - from MACBOOK
!! 1) install vagrant-google provider
{{{
vagrant plugin install vagrant-google
}}}
!! 2) on gcp IAM grant "compute admin" on the service account
* thanks to this thread https://github.com/mitchellh/vagrant-google/issues/234
* verify your permission by doing
{{{
Perhaps the permissions are locked down on the organisation level somehow?
You should be able to verify whether it's vagrant or something with your service account by making a
direct call to the api like so:
$ gcloud auth activate-service-account --key-file=/full/path/to/your/key.json
$ alias gcurl='curl -H "Authorization: Bearer $(gcloud auth print-access-token)"'
$ gcurl https://compute.googleapis.com/compute/v1/projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard
}}}
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/93125899-c1331100-f699-11ea-93a0-eb4bb5bdd4d0.png ]]
!! 3) create public key
{{{
Prefix the SSH key with the username so that Compute Engine picks it up and creates the corresponding user:
$ echo -n 'kristofferson.a.arao:' > /tmp/id_rsa.pub
$ cat ~/.ssh/id_rsa.pub >> /tmp/id_rsa.pub
}}}
!! 4) add ssh public key to GCE metadata
{{{
Setting up SSH keys at the project level:
gcloud compute project-info add-metadata --metadata-from-file sshKeys=/tmp/id_rsa.pub
}}}
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/93496065-c6cc6900-f8dc-11ea-8a89-ef0d087fb8dc.png ]]
!! 5) add the Vagrant provider box
{{{
vagrant box add gce https://github.com/mitchellh/vagrant-google/raw/master/google.box
vagrant box list
gce (google, 0)
geerlingguy/centos7 (virtualbox, 1.2.3)
google/gce (google, 0.1.0)
ol7-latest (virtualbox, 0)
}}}
!! 6) create the Vagrantfile script
{{{
(py385_2) kristofferson.a.arao$ ls
the vagrant script -> Vagrantfile
the gcp auth file -> example-dev-284123-1c4cf8cf3f8c.json
(py385_2) kristofferson.a.arao$ cat Vagrantfile
Vagrant.configure("2") do |config|
config.vm.box = "google/gce"
config.vm.provider :google do |google, override|
google.google_project_id = "example-dev-284123"
google.google_json_key_location = "example-dev-284123-1c4cf8cf3f8c.json"
# Define the name of the instance.
google.name = "develmach3"
# Set the zone where the instance will be located. To find out available zones:
# `gcloud compute zones list`.
google.zone = "us-east1-b"
# Set the machine type to use. To find out available types:
# `gcloud compute machine-types list --zone asia-east1-c`.
google.machine_type = "n1-standard-2"
# Set the machine image to use. To find out available images:
# `$ gcloud compute images list`.
google.image = "centos-7-v20200910"
google.disk_size = 20
override.ssh.username = "kristofferson.a.arao"
override.ssh.private_key_path = "~/.ssh/id_rsa"
end
end
}}}
!! 7) provision the machine
{{{
vagrant up --provider=google
Bringing machine 'default' up with 'google' provider...
==> default: Checking if box 'google/gce' version '0.1.0' is up to date...
==> default: Launching an instance with the following settings...
==> default: -- Name: develmach3
==> default: -- Project: example-dev-284123
==> default: -- Type: n1-standard-2
==> default: -- Disk type: pd-standard
==> default: -- Disk size: 20 GB
==> default: -- Disk name:
==> default: -- Image: centos-7-v20200910
==> default: -- Image family:
==> default: -- Instance Group:
==> default: -- Zone: us-east1-b
==> default: -- Network: default
==> default: -- Network Project: example-dev-284123
==> default: -- Metadata: '{}'
==> default: -- Labels: '{}'
==> default: -- Network tags: '[]'
==> default: -- IP Forward:
==> default: -- Use private IP: false
==> default: -- External IP:
==> default: -- Network IP:
==> default: -- Preemptible: false
==> default: -- Auto Restart: true
==> default: -- On Maintenance: MIGRATE
==> default: -- Autodelete Disk: true
==> default: -- Additional Disks:[]
==> default: Waiting for instance to become "ready"...
==> default: Machine is booted and ready for use!
==> default: Waiting for SSH to become available...
==> default: Machine is ready for SSH access!
==> default: Rsyncing folder: /Users/kristofferson.a.arao/vagrant/gcp-vagrant/develmach3/ => /vagrant
}}}
!! 8) list machines
{{{
vagrant global-status
id name provider state directory
--------------------------------------------------------------------------------------------------
615a946 node1 virtualbox poweroff /Users/kristofferson.a.arao/vagrant/hadoop
3d0685c node2 virtualbox poweroff /Users/kristofferson.a.arao/vagrant/hadoop
45be751 node3 virtualbox poweroff /Users/kristofferson.a.arao/vagrant/hadoop
1c13821 default google running /Users/kristofferson.a.arao/vagrant/gcp-vagrant/develmach3
}}}
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/93504764-eddc6800-f8e7-11ea-8132-00dd653c90ba.png ]]
!! 9) ssh to the machine
{{{
SSH into the machine:
$ vagrant ssh
==> vagrant: A new version of Vagrant is available: 2.2.10 (installed version: 2.2.9)!
==> vagrant: To upgrade visit: https://www.vagrantup.com/downloads.html
Last login: Sun Sep 13 19:39:41 2020 from cpe-69-206-123-5.nyc.res.rr.com
[kristofferson.a.arao@develmach3 ~]$ exit
logout
Connection to 35.190.174.255 closed.
}}}
!! 10) destroy the machine
{{{
Destroy the DEFAULT machine:
$ vagrant destroy
OR specify a specific ID
vagrant destroy 1c13821
vagrant destroy
default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Terminating the instance...
vagrant global-status
id name provider state directory
-------------------------------------------------------------------------------------------------
615a946 node1 virtualbox poweroff /Users/kristofferson.a.arao/vagrant/hadoop
3d0685c node2 virtualbox poweroff /Users/kristofferson.a.arao/vagrant/hadoop
45be751 node3 virtualbox poweroff /Users/kristofferson.a.arao/vagrant/hadoop
}}}
! step by step example vagrant provisioning on GCE - from GCE DEV VM
!! 1) create a vm and install vagrant
{{{
sudo apt-cache search vagrant
sudo apt-get install vagrant
}}}
[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/93125901-c1cba780-f699-11ea-977a-1325c22b28f0.png ]]
!! 2) install the google cloud sdk
{{{
sudo apt-get update && sudo apt-get install google-cloud-sdk
gcloud init
}}}
!! 3) follow the steps on the MACBOOK guide (VM-specific differences are marked with an asterisk below)
{{{
1) install vagrant-google provider
2) on gcp IAM grant "compute admin" on the service account
*3) use the public key on your MACBOOK
*4) if you haven't added then add ssh public key to GCE metadata
5) add the Vagrant provider box
6) create the Vagrantfile script
7) provision the machine
8) list machines
9) ssh to the machine
10) destroy the machine
}}}
!! you can use the same public key on both MACBOOK and GCE VM to get passwordless ssh between them
! references
https://realguess.net/2015/09/07/setup-development-environment-with-vagrant-on-google-compute-engine/
.
<<showtoc>>
! list installed and dependencies
{{{
brew deps --tree --installed
brew list
}}}
http://blogs.oracle.com/wim/entry/btrfs_root_and_yum_update
http://blogs.oracle.com/wim/entry/playing_with_btrfs
<<<
* you can't update a bucketed column
* bucketing is needed for transactional table
<<<
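A minimal sketch of both points, using a hypothetical table (on the Hive versions where ACID requires bucketing, the table must be bucketed ORC with transactional=true):
{{{
CREATE TABLE t_acid (id INT, payload STRING)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

UPDATE t_acid SET payload = 'new' WHERE id = 1;   -- OK, non-bucketed column
-- UPDATE t_acid SET id = 2 WHERE id = 1;         -- rejected, bucketed column
}}}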
https://stackoverflow.com/questions/30818447/hivebigdata-difference-between-bucketing-and-indexing
https://acadgild.com/blog/indexing-in-hive/
Customer perf issue on buffer busy waits contention (Bug 13972394 - Doc ID 13972394.8); made use of the ash_wait_chains script, which led to Tanel's blog post
https://blog.tanelpoder.com/2013/11/06/diagnosing-buffer-busy-waits-with-the-ash_wait_chains-sql-script-v0-2/
details here:
https://www.evernote.com/shard/s48/sh/c83f3f17-e974-4850-b454-606874453744/339fffc00fd48266dd7981b5ca887ff6
https://stackoverflow.com/questions/16476568/how-to-increase-dbms-output-buffer
{{{
BEGIN
  dbms_output.enable(NULL); -- NULL removes the DBMS_OUTPUT buffer limit
  -- your DBMS_OUTPUT.PUT_LINE calls here
END;
}}}
ORA-20000: ORU-10027: buffer overflow, limit of 1000000 bytes https://community.oracle.com/thread/308557
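In SQL*Plus the client-side equivalent is:
{{{
SET SERVEROUTPUT ON SIZE UNLIMITED
}}}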
XEON E5-2600 http://globalsp.ts.fujitsu.com/dmsp/Publications/public/wp-sandy-bridge-ep-memory-performance-ww-en.pdf
XEON 5600 http://globalsp.ts.fujitsu.com/dmsp/Publications/public/wp-westmere-ep-memory-performance-ww-en.pdf
Exadata updating to Xeon E5 (Fall 2012)..(X3-2,X3-2L,X2-4)
QPI replaces FSB
memory still sits on a parallel bus, which doesn't mix well with the serial QPI links
! exadata memory
Yeap.. that drops the memory speed.
It’s like my Dell laptop: on just 2GB of RAM the memory ran at 1333 MHz,
and when I upgraded to 8GB it dropped to 1066 MHz, but I was still happy because I had more room to play with and did not feel any performance degradation.
-Karl
{{{
[root@enkcel04 ~]# dmidecode --type 17 | more
On the storage cells
# dmidecode 2.10
SMBIOS 2.6 present.
Handle 0x0029, DMI type 17, 28 bytes
Memory Device
Array Handle: 0x0026
Error Information Handle: Not Provided
Total Width: 72 bits
Data Width: 64 bits
Size: 4096 MB
Form Factor: DIMM
Set: None
Locator: D2
Bank Locator: /SYS/MB/P0
Type: DDR3
Type Detail: Other
Speed: 1333 MHz
Manufacturer: Samsung
Serial Number: 470286B8
Asset Tag: AssetTagNum0
Part Number: M393B5270CH0-YH9
Rank: 1
[root@enkdb03 ~]# dmidecode --type 17 | more
On the compute nodes
# dmidecode 2.10
SMBIOS 2.6 present.
Handle 0x0029, DMI type 17, 28 bytes
Memory Device
Array Handle: 0x0026
Error Information Handle: Not Provided
Total Width: 72 bits
Data Width: 64 bits
Size: 8192 MB
Form Factor: DIMM
Set: None
Locator: D2
Bank Locator: /SYS/MB/P0
Type: DDR3
Type Detail: Other
Speed: 1333 MHz
Manufacturer: Samsung
Serial Number: 855C8CA7
Asset Tag: AssetTagNum0
Part Number: M393B1K70CH0-YH9
Rank: 2
}}}
call RMarkdown on command line using a.R that is passed a file
http://stackoverflow.com/questions/28507693/call-rmarkdown-on-command-line-using-a-r-that-is-passed-a-file
{{{
# shell: render directly from the command line
Rscript Graphs.R
Rscript -e "rmarkdown::render('output_file.Rmd')"
# R (inside the a.R script): read the file name passed on the command line
args <- commandArgs(trailingOnly = TRUE)
}}}
http://stackoverflow.com/questions/2151212/how-can-i-read-command-line-parameters-from-an-r-script
camtasia9
camtasia recorder9
<<showtoc>>
! capex opex aws emr
https://www.google.com/search?q=capex+opex+aws+emr&oq=capex+opex+aws+emr&aqs=chrome..69i57.6862j0j1&sourceid=chrome&ie=UTF-8
https://learning.oreilly.com/search/?query=capex%20opex%20aws&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&is_academic_institution_account=false&sort=relevance&page=4
cloud application architectures https://learning.oreilly.com/library/view/cloud-application-architectures/9780596157647/ch03s02.html <-- GOOD STUFF
Cloud Data Centers and Cost Modeling https://learning.oreilly.com/library/view/cloud-data-centers/9780128014134/xhtml/chp017.xhtml <-- GOOD STUFF
data center as computer https://learning.oreilly.com/library/view/the-datacenter-as/9781627050098/xhtml/ch006.html <-- GOOD STUFF
Warehouse-Scale Computers to Exploit Request-Level and Data-Level Parallelism https://learning.oreilly.com/library/view/computer-architecture-5th/9780123838735/OEBPS/B9780123838728000070.htm <-- GOOD STUFF
The Cloud DBA-Oracle : Managing Oracle Database in the Cloud https://learning.oreilly.com/library/view/the-cloud-dba-oracle/9781484226353/A421400_1_En_1_ChapterPart1.html
GCP https://learning.oreilly.com/library/view/building-your-next/9781484210048/9781484210055_Ch01.xhtml
computer networking problems and solutions https://learning.oreilly.com/library/view/computer-networking-problems/9780134762814/ch28.xhtml
Designing and Building a Hybrid Cloud https://learning.oreilly.com/library/view/designing-and-building/9781492036937/ch05.html
Practice of System and Network Administration, The: DevOps and other Best Practices for Enterprise IT, Volume 1 https://learning.oreilly.com/library/view/practice-of-system/9780133415087/ch25.xhtml
Business Intelligence and the Cloud: Strategic Implementation Guide https://learning.oreilly.com/library/view/business-intelligence-and/9781118631720/12_chapter03.html
seeking sre https://learning.oreilly.com/library/view/seeking-sre/9781491978856/ch05.html
eCommerce in the Cloud https://learning.oreilly.com/library/view/ecommerce-in-the/9781491946626/ch03.html
Big Data Demystified https://learning.oreilly.com/library/view/big-data-demystified/9781292218120/html/chapter-001.html
Cloud Computing: Principles and Paradigms https://learning.oreilly.com/library/view/cloud-computing-principles/9780470887998/ch02.html
.
https://cdnjs.com/#q=backbone
https://learning.oreilly.com/library/view/parallel-programming-with/9781783288397/ch07s02.html
with Oracle examples
https://www.udemy.com/python-for-data-analysis-and-pipelines/
Using RabbitMQ to Message Events across Nodes
https://www.oreilly.com/library/view/socketio-solutions/9781786464927/video6_5.html?autoplay=false
rabbitmq in depth
https://learning.oreilly.com/library/view/rabbitmq-in-depth/9781617291005/kindle_split_013.html
http://www.evernote.com/shard/s48/sh/5f38dc47-056d-471f-b195-cd95e0556a13/1735471c65fcec342dcc7298cbe5e21c
<<<
Disk Scrubbing causing I/O Performance issue on Exadata Cell Server (Doc ID 1684213.1)
Action Plan:
list cell attributes hardDiskScrubInterval
Stop disk scrubbing and reschedule it for non-peak hours.
CellCLI> ALTER CELL hardDiskScrubInterval=none
1. As discussed over the web conference, please decide on hardDiskScrubStartTime to start over weekend/non-peak hours and set appropriately. Ex:
CellCLI> ALTER CELL hardDiskScrubStartTime='2014-02-05T05:45:46-05:00'
2. Change the interval to BIWEEKLY if the previous action plan was implemented to stop the disk scrub. Ex :
CellCLI> ALTER CELL hardDiskScrubInterval=biweekly
Solution:
The Exadata 11.2.3.3.0 version is affected by this bug. This issue is fixed in Exadata software version 11.2.3.3.1 and later. Changes include a decreased threshold of 15% I/O utilization instead of 25% that determines whether scrubbing is started as well as changes to how the flash cache handles these types of scrub I/Os.
<<<
http://www.oracle.com/technetwork/database/exadata/exadata-x4-changes-2080051.pdf
"Automatic hard disk scrubbing for bad sectors with 11.2.3.3 and Grid Infrastructure 11.2.0.4
http://docs.oracle.com/database/121/HAOVW/haexadata.htm#HAOVW12017
• Inspects and repairs hard disks with damaged or worn out disk sectors (cluster of storage) or other physical or logical defects periodically when there are idle resources with Exadata Automatic Hard Disk Scrub and Repair
How to repair bad sectors (blocks) reported by fsck or the kernel on non IDE drives (Doc ID 1004452.1)
''Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration (Doc ID 1302539.1)''
Starting with Oracle Exadata Software 11.2.3.3.0 and Oracle Database 11.2.0.4, Oracle Exadata Storage Server Software provides Automatic Hard Disk Scrub and Repair. This feature automatically inspects and repairs hard disks periodically when hard disks are idle. If bad sectors are detected on a hard disk, then Oracle Exadata Storage Server Software automatically sends a request to Oracle ASM to repair the bad sectors by reading the data from another mirror copy. By default, the hard disk scrub runs every two weeks. It’s very lightweight and adds enormous value by fixing physical block corruptions even for infrequently accessed data.
{{{
CellCLI> list cell attributes hardDiskScrubInterval
biweekly
CellCLI>
[root@enkx3cel01 ~]# less /var/log/oracle/diag/asm/cell/enkx3cel01/trace/alert.log | grep -i scrub
Scrubbing re-enabled after last incompatible ASM instance (enkx3db01.enkitec.com pid: 25118) disconnected
Scrubbing re-enabled after last incompatible ASM instance (enkx3db02.enkitec.com pid: 19520) disconnected
Scrubbing re-enabled after last incompatible ASM instance (enkx3db02.enkitec.com pid: 43607) disconnected
Begin scrubbing CellDisk:CD_03_enkx3cel01.
Begin scrubbing CellDisk:CD_10_enkx3cel01.
Begin scrubbing CellDisk:CD_07_enkx3cel01.
Begin scrubbing CellDisk:CD_11_enkx3cel01.
Begin scrubbing CellDisk:CD_02_enkx3cel01.
Begin scrubbing CellDisk:CD_08_enkx3cel01.
Begin scrubbing CellDisk:CD_04_enkx3cel01.
Begin scrubbing CellDisk:CD_06_enkx3cel01.
Begin scrubbing CellDisk:CD_09_enkx3cel01.
Begin scrubbing CellDisk:CD_05_enkx3cel01.
Begin scrubbing CellDisk:CD_00_enkx3cel01.
Begin scrubbing CellDisk:CD_01_enkx3cel01.
Finished scrubbing CellDisk:CD_10_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Finished scrubbing CellDisk:CD_05_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Finished scrubbing CellDisk:CD_02_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Finished scrubbing CellDisk:CD_11_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Finished scrubbing CellDisk:CD_01_enkx3cel01, scrubbed blocks (1MB):2826352, found bad blocks:0
Finished scrubbing CellDisk:CD_08_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Finished scrubbing CellDisk:CD_04_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Finished scrubbing CellDisk:CD_07_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Finished scrubbing CellDisk:CD_09_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Finished scrubbing CellDisk:CD_06_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Finished scrubbing CellDisk:CD_00_enkx3cel01, scrubbed blocks (1MB):2826352, found bad blocks:0
Finished scrubbing CellDisk:CD_03_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Wed May 21 19:00:04 2014
Begin scrubbing CellDisk:CD_03_enkx3cel01.
Begin scrubbing CellDisk:CD_10_enkx3cel01.
Begin scrubbing CellDisk:CD_07_enkx3cel01.
Begin scrubbing CellDisk:CD_11_enkx3cel01.
Begin scrubbing CellDisk:CD_02_enkx3cel01.
Begin scrubbing CellDisk:CD_08_enkx3cel01.
Begin scrubbing CellDisk:CD_04_enkx3cel01.
Begin scrubbing CellDisk:CD_06_enkx3cel01.
Begin scrubbing CellDisk:CD_09_enkx3cel01.
Begin scrubbing CellDisk:CD_05_enkx3cel01.
Begin scrubbing CellDisk:CD_00_enkx3cel01.
Begin scrubbing CellDisk:CD_01_enkx3cel01.
Thu May 22 02:08:44 2014
Finished scrubbing CellDisk:CD_10_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Thu May 22 02:14:43 2014
Finished scrubbing CellDisk:CD_05_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Thu May 22 02:15:16 2014
Finished scrubbing CellDisk:CD_02_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Thu May 22 02:16:25 2014
Finished scrubbing CellDisk:CD_11_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Thu May 22 02:16:31 2014
Finished scrubbing CellDisk:CD_01_enkx3cel01, scrubbed blocks (1MB):2826352, found bad blocks:0
Thu May 22 02:22:31 2014
Finished scrubbing CellDisk:CD_08_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Thu May 22 02:23:51 2014
Finished scrubbing CellDisk:CD_04_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Thu May 22 02:24:30 2014
Finished scrubbing CellDisk:CD_07_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Thu May 22 02:26:29 2014
Finished scrubbing CellDisk:CD_09_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Thu May 22 02:28:52 2014
Finished scrubbing CellDisk:CD_06_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
Thu May 22 02:34:30 2014
Finished scrubbing CellDisk:CD_00_enkx3cel01, scrubbed blocks (1MB):2826352, found bad blocks:0
Thu May 22 02:34:31 2014
Finished scrubbing CellDisk:CD_03_enkx3cel01, scrubbed blocks (1MB):2860960, found bad blocks:0
}}}
! disk scrub elapsed time across disk type
https://docs.oracle.com/cd/E50790_01/doc/doc.121/e50471/cellcli.htm#SAGUG20555
Disk latency on Exadata can be measured at multiple levels; at the top level are the storage cells, via the following metrics:
cat metrichistory_raw.txt | egrep "CD_IO_TM_R_LG_RQ|CD_IO_TM_R_SM_RQ|CD_IO_TM_W_LG_RQ|CD_IO_TM_W_SM_RQ" > metrichistory_latency.txt
At the database and consumer group levels there's no large/small breakdown for flash, but there is for disk.
Take note that these metrics are "IO_TM":
Flash latency DB_FD_IO_TM_RQ
Disk latency DB_IO_TM_LG_RQ | DB_IO_TM_SM_RQ
Flash latency CG_FD_IO_TM_RQ
Disk latency CG_IO_TM_LG_RQ | CG_IO_TM_SM_RQ
Since the ultimate disk latency comes from the top level (the storage cells), you don't have to repeat the measurement at the database
and consumer group levels. What makes more sense there is to measure the IORM latency (queue time).
Take note that here the metrics are "IO_WT":
DB_FD_IO_WT_LG_RQ
DB_FD_IO_WT_SM_RQ
DB_IO_WT_LG_RQ
DB_IO_WT_SM_RQ
CG_FD_IO_WT_LG_RQ
CG_FD_IO_WT_SM_RQ
CG_IO_WT_LG_RQ
CG_IO_WT_SM_RQ
also see IORM QUEUE (https://community.oracle.com/thread/2613363?start=0&tstart=0)
I have this script that mines the cell metriccurrent https://www.dropbox.com/s/rcwek0rx8e50imc/cell_iops.sh
created it last night, pretty cool for monitoring and IO test cases (IORM, wbfc, esfc, esfl)
here's a sample viz you can do with the script https://www.evernote.com/shard/s48/sh/d89a1aa2-d1b1-42b6-b338-c95ba31bf3e9/c1c7604911c1aa21c821fae9e3e258a0 I haven’t included the latency yet on the viz
-Karl
! Sample run as "cellmonitor"
Edit the following line on the script
{{{
datafile=`echo /home/oracle/dba/karao/scripts/metriccurrentall.txt`
/usr/local/bin/dcli -l cellmonitor -g /home/oracle/dba/karao/scripts/cell_group "cellcli -e list metriccurrent" > $datafile
export TM=$(date +%m/%d/%y" "%H:%M:%S)
}}}
Run it as follows
{{{
while :; do ./cell_iops.sh >> cell_iops.csv ; egrep "CS,ALL|DB," cell_iops.csv ; sleep 20; echo "--"; done
}}}
! for longer runs
{{{
make sure you have a cell_group file on the /root directory, this contains the IPs of the storage cells
[root@enkx3cel01 ~]# cat /root/cell_group
192.168.12.3
192.168.12.4
192.168.12.5
-- install
[root@enkx3cel01 ~]# vi runit.sh
while :; do ./cell_iops.sh >> cell_iops.csv ; egrep "CS,ALL|DB," cell_iops.csv ; sleep 60; echo "--"; cat /dev/null > nohup.out; done
-- run
[root@enkx3cel01 ~]# nohup sh runit.sh &
-- killing it afterwards
ps -ef | grep cell_iops ; lsof cell_iops.csv
ps -ef | grep -i runit
root 13311 8482 0 11:14 pts/0 00:00:00 sh runit.sh
root 14660 8482 0 11:14 pts/0 00:00:00 grep -i runit
[root@enkx3cel01 ~]#
[root@enkx3cel01 ~]#
[root@enkx3cel01 ~]# kill -9 13311
[1]+ Killed nohup sh runit.sh
lsof | grep '(deleted)$' | sort -rnk 7
lsof | sort -nk7 | grep -i runit
}}}
! the cell_iops.sh script
{{{
#!/bin/ksh
#
# cell_iops.sh - a "sort of" end to end Exadata IO monitoring script
# * inspired by http://glennfawcett.wordpress.com/2013/06/18/analyzing-io-at-the-exadata-cell-level-a-simple-tool-for-iops/
# and modified to show end to end breakdown of IOPS, inter-database, consumer groups, and latency across Exadata storage cells
# * you must use this script together with "iostat -xmd" on storage cells on both flash and spinning disk and database IO latency on
# system level (AWR) and session level (Tanel Poder's snapper) for a "real" end to end IO troubleshooting and monitoring
# * the inter-database and consumer groups data is very useful for overall resource management and IORM configuration and troubleshooting
# * check out the sample viz that can be done by mining the data here goo.gl/0Q1Oeo
#
# Karl Arao, Oracle ACE (bit.ly/karlarao), OCP-DBA, RHCE, OakTable
# http://karlarao.wordpress.com
#
# on any Exadata storage cell node you can run this one time
# ./cell_iops.sh
#
# OR on loop spooling to a file and consume later with Tableau for visualization
# while :; do ./cell_iops.sh >> cell_iops.csv ; egrep "CS,ALL|DB,_OTHER_DATABASE_" cell_iops.csv ; sleep 20; echo "--"; done
#
# Here are the 19 column headers:
#
# TM - the time on each snap
# CATEGORY - CS (cell server - includes IOPS, MBs, R+W breakdown, latency), DB (database - IOPS, MBs), CG (consumer group - IOPS, MBs)
# GROUP - grouping per CATEGORY, it could be databases or consumer groups.. a pretty useful dimension in Tableau to drill down on IO
# DISK_IOPS - (applies to CS, DB, CG) high level spinning disk IOPS
# FLASH_IOPS - (applies to CS, DB, CG) high level flash disk IOPS
# DISK_MBS - (applies to CS, DB, CG) high level spinning disk MB/s (bandwidth)
# FLASH_MBS - (applies to CS, DB, CG) high level flash disk MB/s (bandwidth)
# DISK_IOPS_R - (applies to CS only) IOPS breakdown, spinning disk IOPS read
# FLASH_IOPS_R - (applies to CS only) IOPS breakdown, flash disk IOPS read
# DISK_IOPS_W - (applies to CS only) IOPS breakdown, spinning disk IOPS write
# FLASH_IOPS_W - (applies to CS only) IOPS breakdown, flash disk IOPS write
# DLAT_RLG - (applies to CS only) average latency breakdown, spinning disk large reads
# FLAT_RLG - (applies to CS only) average latency breakdown, flash disk large reads
# DLAT_RSM - (applies to CS only) average latency breakdown, spinning disk small reads
# FLAT_RSM - (applies to CS only) average latency breakdown, flash disk small reads
# DLAT_WLG - (applies to CS only) average latency breakdown, spinning disk large writes
# FLAT_WLG - (applies to CS only) average latency breakdown, flash disk large writes
# DLAT_WSM - (applies to CS only) average latency breakdown, spinning disk small writes
# FLAT_WSM - (applies to CS only) average latency breakdown, flash disk small writes
#
datafile=`echo /tmp/metriccurrentall.txt`
/usr/local/bin/dcli -l root -g /root/cell_group "cellcli -e list metriccurrent" > $datafile
export TM=$(date +%m/%d/%y" "%H:%M:%S)
# Header
print "TM,CATEGORY,GROUP,DISK_IOPS,FLASH_IOPS,DISK_MBS,FLASH_MBS,DISK_IOPS_R,FLASH_IOPS_R,DISK_IOPS_W,FLASH_IOPS_W,DLAT_RLG,FLAT_RLG,DLAT_RSM,FLAT_RSM,DLAT_WLG,FLAT_WLG,DLAT_WSM,FLAT_WSM"
#######################################
# extract IOPS for cells
#######################################
export DRW=`cat $datafile | egrep 'CD_IO_RQ_R_LG_SEC|CD_IO_RQ_R_SM_SEC|CD_IO_RQ_W_LG_SEC|CD_IO_RQ_W_SM_SEC' |grep -v FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
export FRW=`cat $datafile | egrep 'CD_IO_RQ_R_LG_SEC|CD_IO_RQ_R_SM_SEC|CD_IO_RQ_W_LG_SEC|CD_IO_RQ_W_SM_SEC' |grep FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
export DRWM=`cat $datafile | egrep 'CD_IO_BY_R_LG_SEC|CD_IO_BY_R_SM_SEC|CD_IO_BY_W_LG_SEC|CD_IO_BY_W_SM_SEC' |grep -v FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
export FRWM=`cat $datafile | egrep 'CD_IO_BY_R_LG_SEC|CD_IO_BY_R_SM_SEC|CD_IO_BY_W_LG_SEC|CD_IO_BY_W_SM_SEC' |grep FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
export DR=`cat $datafile | egrep 'CD_IO_RQ_R_LG_SEC|CD_IO_RQ_R_SM_SEC' |grep -v FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
export FR=`cat $datafile | egrep 'CD_IO_RQ_R_LG_SEC|CD_IO_RQ_R_SM_SEC' |grep FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
export DW=`cat $datafile | egrep 'CD_IO_RQ_W_LG_SEC|CD_IO_RQ_W_SM_SEC' |grep -v FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
export FW=`cat $datafile | egrep 'CD_IO_RQ_W_LG_SEC|CD_IO_RQ_W_SM_SEC' |grep FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
export DLATRLG=`cat $datafile | egrep 'CD_IO_TM_R_LG_RQ' |grep -v FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`
export FLATRLG=`cat $datafile | egrep 'CD_IO_TM_R_LG_RQ' |grep FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`
export DLATRSM=`cat $datafile | egrep 'CD_IO_TM_R_SM_RQ' |grep -v FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`
export FLATRSM=`cat $datafile | egrep 'CD_IO_TM_R_SM_RQ' |grep FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`
export DLATWLG=`cat $datafile | egrep 'CD_IO_TM_W_LG_RQ' |grep -v FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`
export FLATWLG=`cat $datafile | egrep 'CD_IO_TM_W_LG_RQ' |grep FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`
export DLATWSM=`cat $datafile | egrep 'CD_IO_TM_W_SM_RQ' |grep -v FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`
export FLATWSM=`cat $datafile | egrep 'CD_IO_TM_W_SM_RQ' |grep FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`
print "$TM,CS,ALL,$DRW,$FRW,$DRWM,$FRWM,$DR,$FR,$DW,$FW,$DLATRLG,$FLATRLG,$DLATRSM,$FLATRSM,$DLATWLG,$FLATWLG,$DLATWSM,$FLATWSM"
#######################################
# extract IOPS for database
#######################################
export db_str=`cat $datafile | egrep 'DB_FD_IO_RQ_LG_SEC' | grep -v DBUA | awk '{ print $3}' | sort | uniq`
for db_name in `echo $db_str`
do
# Calculate Total IOPS of harddisk
# DB_IO_RQ_LG_SEC
# DB_IO_RQ_SM_SEC
db_drw=`cat $datafile | egrep 'DB_IO_RQ_LG_SEC|DB_IO_RQ_SM_SEC' |grep $db_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
# Calculate Total IOPS of flashdisk
# DB_FD_IO_RQ_LG_SEC
# DB_FD_IO_RQ_SM_SEC
db_frw=`cat $datafile | egrep 'DB_FD_IO_RQ_LG_SEC|DB_FD_IO_RQ_SM_SEC' |grep $db_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
# Calculate Total MB/s of harddisk
# DB_IO_BY_SEC
db_drwm=`cat $datafile | egrep 'DB_IO_BY_SEC' |grep $db_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
# Calculate Total MB/s of flashdisk
# DB_FC_IO_BY_SEC
# DB_FD_IO_BY_SEC
# DB_FL_IO_BY_SEC
db_frwm=`cat $datafile | egrep 'DB_FC_IO_BY_SEC|DB_FD_IO_BY_SEC|DB_FL_IO_BY_SEC' |grep $db_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
print "$TM,DB,$db_name,$db_drw,$db_frw,$db_drwm,$db_frwm,0,0,0,0,0,0,0,0,0,0,0,0"
done
#######################################
# extract IOPS for DBRM consumer groups
#######################################
export cg_str=`cat $datafile | egrep 'CG_FD_IO_RQ_LG_SEC' | grep -v DBUA | awk '{ print $3}' | sort | uniq`
for cg_name in `echo $cg_str`
do
# Calculate Total IOPS of harddisk
# CG_IO_RQ_LG_SEC
# CG_IO_RQ_SM_SEC
cg_drw=`cat $datafile | egrep 'CG_IO_RQ_LG_SEC|CG_IO_RQ_SM_SEC' |grep $cg_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
# Calculate Total IOPS of flashdisk
# CG_FD_IO_RQ_LG_SEC
# CG_FD_IO_RQ_SM_SEC
cg_frw=`cat $datafile | egrep 'CG_FD_IO_RQ_LG_SEC|CG_FD_IO_RQ_SM_SEC' |grep $cg_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
# Calculate Total MB/s of harddisk
# CG_IO_BY_SEC
cg_drwm=`cat $datafile | egrep 'CG_IO_BY_SEC' |grep $cg_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
# Calculate Total MB/s of flashdisk
# CG_FC_IO_BY_SEC
# CG_FD_IO_BY_SEC
cg_frwm=`cat $datafile | egrep 'CG_FC_IO_BY_SEC|CG_FD_IO_BY_SEC' |grep $cg_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
print "$TM,CG,$cg_name,$cg_drw,$cg_frw,$cg_drwm,$cg_frwm,0,0,0,0,0,0,0,0,0,0,0,0"
done
}}}
{{{
Repeat the steps for each node in the cluster
su - celladmin
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cd ~/.ssh
/usr/bin/ssh-keygen -t dsa
<then just hit ENTER all the way>
On the 1st node do the following
ssh celladmin@enkcel04 cat ~celladmin/.ssh/id_dsa.pub >> ~celladmin/.ssh/authorized_keys
ssh celladmin@enkcel05 cat ~celladmin/.ssh/id_dsa.pub >> ~celladmin/.ssh/authorized_keys
ssh celladmin@enkcel06 cat ~celladmin/.ssh/id_dsa.pub >> ~celladmin/.ssh/authorized_keys
Then copy the authorized_keys across
scp -p ~celladmin/.ssh/authorized_keys celladmin@enkcel04:~celladmin/.ssh/authorized_keys
scp -p ~celladmin/.ssh/authorized_keys celladmin@enkcel05:~celladmin/.ssh/authorized_keys
scp -p ~celladmin/.ssh/authorized_keys celladmin@enkcel06:~celladmin/.ssh/authorized_keys
Then test it
ssh -l celladmin enkcel04 date;ssh -l celladmin enkcel05 date;ssh -l celladmin enkcel06 date
}}}
http://www.evernote.com/shard/s48/sh/d7badceb-3b11-41db-85a9-87a8b038f08b/8a8f07e0f54db2a7bf298a8b6df3be57
https://info.credly.com/
https://www.credential.net/welcome
cgroups - overallocation, guarantee
https://www.evernote.com/l/ADBpJf873bZJvr617pV3PZL1S5NN7f1BS8Y
http://kerryosborne.oracle-guy.com/2011/02/exadata-serial-numbers/
http://www.exadata-certification.com/2013/09/how-to-find-serial-number-of-exadata.html
{{{
cat hdinfo.sh
MegaCli64 -PdList -aALL | egrep "Media Error|Device Firmware|Inquiry Data"
export EXHOST=$(hostname -a)
egrep "Error -8" /var/log/oracle/diag/asm/cell/$EXHOST/trace/alert.log | sort -u
ipmitool sunoem cli "show /SYS product_serial_number"
}}}
{{{
/usr/local/bin/dcli -l root -g /root/cell_group -x ~/hdinfo.sh
}}}
{{{
cellcli -e list cell detail
dcli -g ./cell_group -l root cellcli -e list flashcache attributes name,size,status
192.168.12.3: enkx3cel01_FLASHCACHE 1488.75G normal
192.168.12.4: enkx3cel02_FLASHCACHE 1488.75G normal
192.168.12.5: enkx3cel03_FLASHCACHE 1488.75G normal
dcli -g ./cell_group -l root cellcli -e list flashlog attributes name,size,status
192.168.12.3: enkx3cel01_FLASHLOG 512M normal
192.168.12.4: enkx3cel02_FLASHLOG 512M normal
192.168.12.5: enkx3cel03_FLASHLOG 512M normal
dcli -g cell_group -l root "cellcli -e list cell detail" | grep "flashCacheMode"
192.168.12.3: flashCacheMode: WriteThrough
192.168.12.4: flashCacheMode: WriteThrough
192.168.12.5: flashCacheMode: WriteThrough
list cell attributes hardDiskScrubInterval
list cell attributes hardDiskScrubStartTime detail
}}}
<<showtoc>>
DBA_HIST_ACTIVE_SESS_HISTORY.SQL_ADAPTIVE_PLAN_RESOLVED
https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/DBA_HIST_ACTIVE_SESS_HISTORY.html#GUID-335EC838-FEA0-4872-9E14-67C5A1908B35
* but the adaptive plans should be turned on
<<<
UPDATE 2018-06-05: after almost 6 months, today I closed the service request. Through it I got the following information from development:
SQL_ADAPTIVE_PLAN_RESOLVED=0 only while the SQL plan is currently being adapted; once it is adapted, SQL_ADAPTIVE_PLAN_RESOLVED=1. If plan adaptation is disabled, SQL_ADAPTIVE_PLAN_RESOLVED will always be 1 because the SQL plan is never in adaptation.
<<<
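A minimal sketch of using the column, assuming adaptive plans are enabled so that 0 actually means "still being adapted":
{{{
-- ASH samples caught while the plan was not yet resolved
SELECT sql_id, sql_plan_hash_value, COUNT(*) AS samples
FROM dba_hist_active_sess_history
WHERE sql_adaptive_plan_resolved = 0
GROUP BY sql_id, sql_plan_hash_value
ORDER BY samples DESC;
}}}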
DBA_HIST_REPORTS
https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/DBA_HIST_REPORTS.html#GUID-767A0EFB-B46F-4CFF-A4B9-580E9B96CFE7
DBA_HIST_REPORTS_DETAILS
https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/DBA_HIST_REPORTS_DETAILS.html#GUID-F204BB21-6553-4D04-A650-2C84452EC4B8
* REPORT_COMPRESSED BLOB Actual XML report in compressed form
DBA_HIST_SQL_PLAN
* OTHER_XML
ASH Analysis: Detecting and Profiling Adaptive Plans in 12c
https://blog.go-faster.co.uk/2016/11/ash-analysis-detecting-and-profiling.html
https://antognini.ch/2017/12/sql_adaptive_plan_resolved-is-broken/
https://smarttechways.com/2021/08/25/find-sql-query-using-adaptive-execution-plan-in-oracle/
! other references
How To Get Historical SQL Monitor Report For SQL Statements (Doc ID 2555350.1)
How to Determine if a SQL Statement is Using an Adaptive Execution Plan with V$SQL (Doc ID 2043960.1)
http://serverfault.com/questions/309052/check-if-port-is-open-or-closed-on-a-linux-server-using-ssh
http://www.planetmy.com/blog/how-to-check-which-port-is-listern-or-open-on-linux/
{{{
Using Telnet to test whether or not a firewall is blocking your connection:
Not Blocked -- Look for "Connected". Note: you must have the Database Listener (or
some other socket listener accepting connections on this port).
-------------------------------------------------------------------------------------------------------------------
drhpap01:/root>telnet uscdcd60 1521
Trying...
Connected to uscdcd60.tnd.us.cbre.net.
Escape character is '^]'. <-- Success (not blocked by firewall)
No-Listener -- If no firewall is blocking your connection *but* you have no listener
on that port, you will get "Connection refused"
-------------------------------------------------------------------------------------------------------------------
[td01db01:oracle:dbm1] /home/oracle
> telnet td01db02 66666
Trying 10.70.11.112...
telnet: connect to address 10.70.11.112: Connection refused <-- Not blocked but no listener on that port
Blocked -- If a firewall is blocking you, the connection will hang at "Trying…" and finally time out.
-------------------------------------------------------------------------------------------------------------------
drhpwb01:/root>telnet uscdcd60 1521
Trying… <-- Failure (you will never get "Escape character is '^]'")
Connection closed by foreign host. <-- if you don't complete your login it will eventually timeout.
}}}
{{{
How to determine BP Level? https://forums.oracle.com/forums/thread.jspa?threadID=2224966
opatch lsinv -bugs_fixed | egrep -i 'bp|exadata|bundle'
/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'bp|exadata|bundle'
OR
registry$history or dba_registry_history
col action format a10
col namespace format a10
col action_time format a30
col version format a10
col comments format a30
select * from dba_registry_history;
and then.. go to
MOS 888828.1 --> Patch Release History for Exadata Database Machine Components --> Exadata Storage Server software patches
}}}
{{{
SELECT TO_CHAR(action_time, 'YYYY-MM-DD HH24:MI:SS') as action_time,
action,
status,
description,
version,
patch_id,
bundle_series
FROM sys.dba_registry_sqlpatch
ORDER by action_time;
ACTION_TIME ACTION STATUS DESCRIPTION VERSION PATCH_ID BUNDLE_SERIES
------------------- --------------- --------------- ---------------------------------------------------------------------------------------------------- -------------------- ---------- ------------------------------
2020-03-26 20:26:32 APPLY SUCCESS Database PSU 12.1.0.2.200114, Oracle JavaVM Component (JAN2020) 12.1.0.2 30502041
2020-03-26 20:26:34 APPLY SUCCESS DATABASE BUNDLE PATCH 12.1.0.2.200114 12.1.0.2 30364137 DBBP
2020-09-11 11:50:33 APPLY WITH ERRORS DATABASE BUNDLE PATCH 12.1.0.2.200714 12.1.0.2 31001106 DBBP
2020-09-11 12:16:42 APPLY SUCCESS DATABASE BUNDLE PATCH 12.1.0.2.200714 12.1.0.2 31001106 DBBP
2020-09-11 13:16:37 ROLLBACK SUCCESS Database PSU 12.1.0.2.200114, Oracle JavaVM Component (JAN2020) 12.1.0.2 30502041
2020-09-11 13:16:37 APPLY SUCCESS Database PSU 12.1.0.2.200714, Oracle JavaVM Component (JUL2020) 12.1.0.2 31219939
}}}
{{{
select * from dba_registry_sqlpatch order by action_time desc;
}}}
{{{
select * from registry$history;
ACTION_TIME ACTION NAMESPACE VERSION ID COMMENTS BUNDLE_SERIES
------------------------------ ---------- ---------- ---------- ---------- ---------------------------------------- ---------------
BOOTSTRAP DATAPATCH 12.1.0.2 RDBMS_12.1.0.2.0DBBP_LINUX.X64_170106
}}}
{{{
COL cv_cellname HEAD CELL_NAME FOR A20
COL cv_cell_path HEAD CELL_PATH FOR A30
COL cv_cellversion HEAD CELLSRV_VERSION FOR A20
COL cv_flashcachemode HEAD FLASH_CACHE_MODE FOR A20
PROMPT Show Exadata cell versions from V$CELL_CONFIG....
SELECT
cellname cv_cell_path
, CAST(extract(xmltype(confval), '/cli-output/cell/name/text()') AS VARCHAR2(20)) cv_cellname
, CAST(extract(xmltype(confval), '/cli-output/cell/releaseVersion/text()') AS VARCHAR2(20)) cv_cellVersion
, CAST(extract(xmltype(confval), '/cli-output/cell/flashCacheMode/text()') AS VARCHAR2(20)) cv_flashcachemode
, CAST(extract(xmltype(confval), '/cli-output/cell/cpuCount/text()') AS VARCHAR2(10)) cpu_count
, CAST(extract(xmltype(confval), '/cli-output/cell/upTime/text()') AS VARCHAR2(20)) uptime
, CAST(extract(xmltype(confval), '/cli-output/cell/kernelVersion/text()') AS VARCHAR2(30)) kernel_version
, CAST(extract(xmltype(confval), '/cli-output/cell/makeModel/text()') AS VARCHAR2(50)) make_model
FROM
v$cell_config -- gv$ isn't needed, all cells should be visible in all instances
WHERE
conftype = 'CELL'
ORDER BY
cv_cellname
/
}}}
{{{
$ opatch lsinventory
}}}
{{{
$ opatch lspatches
}}}
{{{
opatch lsinventory|grep "Patch description"
}}}
! Type of Patches Used with OPatch
https://docs.oracle.com/middleware/1221/core/OPATC/GUID-C87906B2-D871-4279-A1D6-FC9D76C87C7E.htm#OPATC102
! new - query opatch
{{{
with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
select x.*
from a,
xmltable('InventoryInstance/patches/*'
passing a.patch_output
columns
patch_id number path 'patchID',
patch_uid number path 'uniquePatchID',
description varchar2(80) path 'patchDescription',
applied_date varchar2(30) path 'appliedDate',
sql_patch varchar2(8) path 'sqlPatch',
rollbackable varchar2(8) path 'rollbackable'
) x;
with a as (select dbms_qopatch.get_opatch_bugs patch_output from dual)
select x.*
from a,
xmltable('bugInfo/bugs/*'
passing a.patch_output
columns
bug_id number path '@id',
description varchar2(160) path 'description'
) x;
select patch_id, patch_uid, target_version, status, description, action_time
from dba_registry_sqlpatch
where action = 'APPLY';
}}}
http://www.dbaglobe.com/2020/10/query-opatch-inventory-using-sql.html
! ADW check patch level
{{{
-- registry
SELECT /*+ NO_MERGE */ /* 1a.15 */
x.*
,c.name con_name
FROM cdb_registry x
LEFT OUTER JOIN v$containers c ON c.con_id = x.con_id
ORDER BY
x.con_id,
x.comp_id;
-- registry history
SELECT /*+ NO_MERGE */ /* 1a.17 */
x.*
,c.name con_name
FROM cdb_registry_history x
LEFT OUTER JOIN v$containers c ON c.con_id = x.con_id
ORDER BY 1
,x.con_id;
-- registry hierarchy
SELECT /*+ NO_MERGE */ /* 1a.18 */
x.*
,c.name con_name
FROM cdb_registry_hierarchy x
LEFT OUTER JOIN v$containers c ON c.con_id = x.con_id
ORDER BY
1, 2, 3;
}}}
.
http://uptime.netcraft.com/up/graph?site=karlarao.tiddlyspot.com
http://www.harshj.com/2007/12/06/convert-chm-files-to-pdf-in-linux/
http://code.google.com/p/chm2pdf/
http://www.karakas-online.de/forum/viewtopic.php?t=10275
http://www.python-forum.org/pythonforum/viewtopic.php?f=1&t=4820
http://code.google.com/p/chm2pdf/issues/detail?id=16
Install RPMs (see also google code):
yum install htmldoc
yum install libchm-bin
yum install python-chm
yum install libchm
yum install chmlib
yum install pychm
yum install python-devel
Usage:
chm2pdf --continuous UNIXShellsbyExampleFourthEdition.chm
<<showtoc>>
! developer.chrome.com
https://developer.chrome.com/home
! websql
(deprecated)
! indexed db
(replaced websql)
! timeline view
this is pretty much like the sql monitor in Oracle
it has the following views
> events
> frames
> memory
awesome instrumentation, and you can trace by doing a start/stop "record"
you can also mark a particular spot on the timeline by putting the following code inside a loop;
it does a mod of 100, meaning every 100th iteration emits a timestamp
{{{
//marking the timeline
if (i % 100 == 0) {
console.timeStamp('setting ' + i);
}
}}}
! keyboard shortcuts
http://anti-code.com/devtools-cheatsheet/
https://developer.chrome.com/devtools/docs/shortcuts
http://stackoverflow.com/questions/1437412/google-chromes-javascript-console-keyboard-shortcuts
http://codepen.io/TheodoreVorillas/blog/chrome-devtools-tips-and-tricks
! end
{{{
What I said isn’t 100% correct, let me rephrase that
If the CBO can prune to a single partition (usually “statically” because of a literal or “dynamically” because of bind peeking at parse then it will use it, otherwise it will use global level stats).
Static pruning because of literals -> estimation is ok assuming the partition stats are ok
Pruning based on bind with bind peeking -> estimation is ok *for the first execution* but if you pass a different value for a bind that goes to another partition you get a “wrong” estimation (classic bind peeking issue)
Pruning based on some weird function that makes it impossible to the CBO to identify the partition at parse -> global level stats are used and if they are obsoleted then you get wrong estimation (num_rows from the global level divided by number of partitions).
Anyway, easier to explain with an example :-)
I only have a 12c handy but it doesn’t change much.
I have a table with 2 partitions where the global level stats are incorrect because they were NOT regathered after a large insert into one partition (this is just one of the cases why they could be wrong, often the customers have the wrong impression they only need partition level and not global level).
SQL> create table karl (n1 number, n2 number) partition by range (n1) (partition p1 values less than (100), partition p2 values less than (200));
SQL> insert into karl select mod(rownum, 200), rownum from dual connect by rownum <= 10000;
SQL> commit;
SQL> @gather_stats
table_name: karl
exec dbms_stats.gather_table_stats(user,UPPER('&&table_name.'),method_opt=>'&&method_opt.')
TABLE_NAME NUM_ROWS BLOCKS PAR LAST_ANALYZED
------------------------------ ---------- ---------- --- -------------------
KARL 10000 2012 YES 2016-02-24/02:19:31
COLUMN_NAME DATA_TYPE DATA_DEFAULT NUM_DISTINCT NUM_NULLS HISTOGRAM
------------------------------ ------------------------------ ------------------------------ ------------ ---------- ---------------
N1 NUMBER 200 0 NONE
N2 NUMBER 10000 0 NONE
SQL> @show_p_stats
Enter value for table_name: karl
TABLE_NAME PARTITION_NAME NUM_ROWS BLOCKS LAST_ANALYZED
------------------------------ ------------------------------ ---------- ---------- -------------------
KARL P1 5000 1006 2016-02-24/02:19:31
KARL P2 5000 1006 2016-02-24/02:19:31
So we have 2 partitions, 5k rows each.
Let’s now add more data into P1 (another 50k rows) and gather only partition level stats, now the table says it has 10k rows but a single partition has 55k rows
SQL> insert into karl select mod(rownum, 100), rownum from dual connect by rownum <= 50000;
SQL> commit;
SQL> exec dbms_Stats.gather_table_stats(user,'KARL', partname => 'P1', granularity => 'PARTITION');
SQL> @show_stats
TABLE_NAME NUM_ROWS BLOCKS PAR LAST_ANALYZED
------------------------------ ---------- ---------- --- -------------------
KARL 10000 2012 YES 2016-02-24/02:19:31
COLUMN_NAME DATA_TYPE DATA_DEFAULT NUM_DISTINCT NUM_NULLS HISTOGRAM
------------------------------ ------------------------------ ------------------------------ ------------ ---------- ---------------
N1 NUMBER 200 0 NONE
N2 NUMBER 10000 0 NONE
SQL> @show_p_stats
TABLE_NAME PARTITION_NAME NUM_ROWS BLOCKS LAST_ANALYZED
------------------------------ ------------------------------ ---------- ---------- -------------------
KARL P1 55000 1006 2016-02-24/02:26:33
KARL P2 5000 1006 2016-02-24/02:19:31
Let’s now run three SQL, one with a literal on one partition, one with bind and one with an expression and see the 10053
select count(*) from karl where n1 = 50
...
Table Stats::
Table: KARL Alias: KARL Partition [0] <- this is the partition id, which is 0 (dba_tab_partitions.partition_position will show 1)
…
Table: KARL Alias: KARL
Card: Original: 55000.000000 Rounded: 550 Computed: 550.000000 Non Adjusted: 550.000000
vs
select count(*) from karl where n1 = :b1
…
Table Stats::
Table: KARL Alias: KARL Partition [0]
…
Table: KARL Alias: KARL
Card: Original: 55000.000000 Rounded: 550 Computed: 550.000000 Non Adjusted: 550.000000
vs
select count(*) from karl where n1 - :b1 = :b2
…
Table Stats::
Table: KARL Alias: KARL (Using composite stats) <—— GLOBAL LEVEL STATS SCALED BY NUM OF PARTITIONS
…
Table: KARL Alias: KARL
Card: Original: 5000.000000 Rounded: 50 Computed: 50.000000 Non Adjusted: 50.000000 <- CHECK ORIGINAL IS 5K AND NOT 55K
}}}
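The practical fix for the case above: regather the global (table-level) stats after big partition loads, not just the partition stats. On the same KARL table:
{{{
exec dbms_stats.gather_table_stats(user, 'KARL', granularity => 'GLOBAL');
-- granularity 'ALL' (or the default 'AUTO') refreshes partition stats too
}}}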
* Fivetran - automates data integration from source to destination, providing data your team can analyze immediately.
* Stitch - a cloud-first, open source platform for rapidly moving data.
* Matillion - an ETL and data pipeline tool that is purpose-built for cloud data warehouses.
<<showtoc>>
! cloud dw comparison - cloud data warehouses in real life
https://www.slideshare.net/AlexanderTokarev4/cloud-dwh-deep-dive
! Cloud Data Warehouse Benchmark: Redshift, Snowflake, Azure, Presto and BigQuery
https://fivetran.com/blog/warehouse-benchmark
!! oracle adw
https://indico.cern.ch/event/757894/attachments/1720580/2777513/8b_AutonomousIsDataWarehouse_AntogniniSchnider.pdf
https://antognini.ch/2018/07/observations-about-the-scalability-of-data-loads-in-adwc/
oracle adw vs snowflake https://piquesolutions.com/wp-content/uploads/2020/09/Pique-Solutions-Cloud-Data-Warehouse-White-Paper-FINAL-.pdf
oracle adw vs redshift https://piquesolutions.com/wp-content/uploads/2018/11/Pique-Solutions-Cloud-Data-Warehouse-White-Paper-Final-Draft-2-OCT-2018.pdf , https://piquesolutions.com/wp-content/uploads/2020/11/Pique-Solutions-ADW-vs.-Redshift-Cloud-Data-Warehouse-White-Paper-Final.pdf
!! bigquery
https://medium.com/dataseries/costs-and-performance-lessons-after-using-bigquery-with-terabytes-of-data-54a5809ac912
https://medium.com/techking/benchmarking-google-bigquery-at-scale-13e1e85f3bec
!! cloud benchmark tools
https://github.com/fivetran/benchmark
https://github.com/gregrahn/tpcds-kit
<<showtoc>>
! ETL - simple is better than complex
Data Engineering Principles - Build frameworks not pipelines - Gatis Seja
https://www.youtube.com/watch?v=pzfgbSfzhXg
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/108928119-ac2ab880-760f-11eb-84c8-115ae31ba0d9.png ]]
! times have changed , SCD limitations
Maxime Beauchemin - Functional Data Engineering - A Set of Best Practices | Lyft
https://www.youtube.com/watch?v=4Spo2QRTz1k
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/108930675-5d335200-7614-11eb-9c7d-4f6ac985777d.png ]]
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/108930676-5d335200-7614-11eb-9b4f-f06e88a4b65c.png ]]
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/108930677-5dcbe880-7614-11eb-95c9-cff9ad33eac8.png ]]
! datavault hands-on
https://www.snowflake.com/blog/tips-for-optimizing-the-data-vault-architecture-on-snowflake-part-3/
https://twitter.com/dangalavan/status/1314567189257748480
https://galavan.com/optimizing-the-data-vault-architecture-on-snowflake-free-sql/
Data Modeling Meetup Munich: Optimizing Data Vault Architecture on Snowflake with Dan Galavan https://www.youtube.com/watch?v=_TkCRbIyOwQ&ab_channel=Obaysch
https://github.com/dangalavan/Optimizing-DataVault-on-Snowflake
<<<
SQL Scripts summary
|!Script name|!Description|
|01 – DDL.sql|The DDL used to create the required Data Vault tables etc. on Snowflake.|
|05 – Multi table inserts.sql|Loads the Data Vault from ‘Staging’ (sample Snowflake database) using Multi Table Insert statements. Uses Overwrite All for testing purposes.|
|12 – Create Warehouses.sql|Creates 3 x Snowflake Virtual Warehouses to load Hubs, Links, and Satellites separately.|
|15 – MultipleWarehouses – 01 – Load Hubs.sql|Each script loads Hubs / Links / Satellites separately using a dedicated Virtual warehouse for separation of workload.|
|15 – MultipleWarehouses – 02 – Load Sats.sql|~|
|15 – MultipleWarehouses – 03 – Load Links.sql|~|
|30 – VARIANT – Data Load.sql|Load JSON into the Data Vault|
|40 – BusinessVault.sql|Parse JSON data and optimization of the Business Vault.|
<<<
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/108931486-be0f5a00-7615-11eb-9712-75838617b680.png ]]
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/108931487-be0f5a00-7615-11eb-8ff5-c008a9bdd7b3.png ]]
https://avinetworks.com/docs/18.2/health-monitor-troubleshooting/#common-monitor-issues
https://comparisons.financesonline.com/avi-vantage-vs-new-relic
https://www.google.com/search?q=new+relic+aws&sxsrf=ACYBGNRC5b4XaihX0RO_A7c81YW60AJtKw:1568396656510&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiXwc_srM7kAhWDtZ4KHW3UBDIQ_AUIEigC&biw=1800&bih=921#imgrc=_
https://blog.upala.com/2017/03/04/setting-up-tez-on-cdh-cluster/
https://community.cloudera.com/t5/Cloudera-Manager-Installation/Using-Hive-on-TEZ-with-CDH5/td-p/16582
<<showtoc>>
! parallel ssh
https://www.youtube.com/watch?v=sl17yNXfwjI
https://code.google.com/archive/p/parallel-ssh/
https://github.com/ParallelSSH/parallel-ssh
https://parallel-ssh.org/
! clusterssh
https://github.com/duncs/clusterssh
https://www.linux.com/learn/managing-multiple-linux-servers-clusterssh
https://www.youtube.com/watch?v=Gy_8qctTPUo
clusterwide AWR (awrgrpt.sql) and ASH (ashrpti.sql) reports are under $ORACLE_HOME/rdbms/admin
http://pythontutor.com/
https://github.com/pgbovine/OnlinePythonTutor/
http://codecondo.com/coding-challenges/
http://programmingprogress.blogspot.com/2014/11/talentbuddy-problems.html
http://programmingprogress.blogspot.com/2014/11/number-of-talentbuddy-practice-problems.html
https://projecteuler.net/archives
http://www.codewars.com/
https://codility.com/programmers/
https://www.hackerrank.com/
https://www.talentbuddy.co/practice/tracks
<<showtoc>>
! create table of contents (TOC) in html
https://www.tipsandtricks-hq.com/simple-table-of-contents-toc-using-pure-html-and-css-code-9217
https://stackoverflow.com/questions/29256197/is-it-valid-to-assign-id-to-tr-or-td-in-table
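Below is a minimal plain-JS sketch (my addition, not from the links above) that builds a TOC by collecting headings that already carry id attributes (hypothetical markup):
{{{
// build a simple TOC from all <h2 id="..."> elements on the page
// assumes the headings already have id attributes to anchor to
var toc = document.createElement('ul');
var headings = document.querySelectorAll('h2[id]');
for (var i = 0; i < headings.length; i++) {
    var li = document.createElement('li');
    var a = document.createElement('a');
    a.href = '#' + headings[i].id; // anchor link, e.g. #section-1
    a.textContent = headings[i].textContent;
    li.appendChild(a);
    toc.appendChild(li);
}
document.body.insertBefore(toc, document.body.firstChild);
}}}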
<<showtoc>>
https://codementor-classes.slack.com/messages/ideatomvp-class/details/
https://codementor-classes.slack.com/messages/idea2mvp-nutrition/details/
https://eatornot.mybalsamiq.com/projects
https://codementor-classes.slack.com/messages/@nycitt/details/
https://codementor-classes.slack.com/messages/@mroswell/details/
https://codementor-classes.slack.com/messages/@wilsonmar/details/
! dev materials
https://www.parse.com/apps/foodhealthyornot/collections
! project repo
https://github.com/mroswell/EatOrNot
! swipe left / right
http://stackoverflow.com/questions/20427799/adding-buttons-to-fire-right-left-swipe-on-jquery-mobile
http://stackoverflow.com/questions/17546576/jquery-mobile-change-to-the-next-and-previous-data-role-page
http://jsfiddle.net/eEKzh/
http://stackoverflow.com/questions/17449649/how-to-swipe-an-element-using-hammer-js-in-backbone-view
http://jsfiddle.net/uZjCB/7/
http://stackoverflow.com/questions/19915085/javascript-hot-and-cold-game
https://github.com/mattSpell/hot-or-not
! references
http://backbonejs.org
http://underscorejs.org/
my JS code repo is here https://github.com/karlarao/code_ninja
<<showtoc>>
! log
fizzbuzz
missing number
palindrome
count digits
sort names
tokenize query
remove stop words
query tokens stemming
basic search query
! Languages
!! Getting started ### Warm up with two simple challenges
!!! Simple sum ### (normal) Given two integer numbers A and B Your task is to write a function that prints to the standard output (stdout) their sum Note that your function
!!! Max ### (normal) Given two integer numbers a and b Your task is to write a function that prints to the standard output (stdout) the maximum value between the two Note
!! Web analytics ### Practice with: variables, operators, statements and flow control
!!! Growth ### (normal) Given two integer numbers d1 and d2 representing the unique visitors on a website on the first and second day since launch Your task is to write
!!! Bounce rate ### (normal) The bounce rate is the percentage of users who arrive on the website but leave in the first 10 seconds of their visit
!!! Top locations ### (normal) Given three integer numbers a, b and c representing the number of unique visitors from three different locations Your task is to write a function that prints to
!!! Prediction ### (normal) Every week the number of unique visitors grows with 7% compared to the previous week
!!! Priority ### (medium) Our website receives visitors from 3 locations and the number of unique visitors from each of them is represented by three integers a, b and c
!! Classroom analysis ### Practice with: arrays & basic sorting algorithms
!!! Highest grade ### (normal) Given an array with all final grades for a course Your task is to write a function that finds the highest grade and prints this grade to standard
!!! Successful students ### (normal) Given an array with all final grades for a course and the minimum grade that a student needs to have in order to pass the course Your task is to
!!! Average grade ### (normal) An easy way to understand how well the students performed at this year’s course is to compute the average of their final grade
!!! Student progress ### (normal) Students are graded for their activity in each lab session
!!! Longest improvement ### (medium) A student's performance in lab activities should always improve, but that is not always the case
!!! Sorting students ### (normal) After an exam all the students are graded and sorted by their grades
!!! Common courses ### (normal) A teacher wants to compare the performance of two students
!! Text editor ### Practice with: simple string operations
!!! Select substring ### (normal) A common operation with text editors is selecting a block of text for further manipulation
!!! Remove substring ### (normal) Another common operation is deleting characters starting from the position of our cursor in the editor
!!! Copy-Paste ### (normal) When writing text we frequently use the copy and paste commands
!!! Count words ### (normal) When writing an essay it is useful to know the total number of words that we have typed
!!! Sort words ### (normal) Some advanced code editors allow programmers to sort the words from a selected block of text
!! Data conversion ### Practice with: data format conversions
!!! Binary ### (normal) Given an integer number Your task is to write a function that prints to the standard output (stdout) its binary format Note that your function will receive the following
!!! Time ### (normal) Given an integer representing a large number of seconds Your task is to write a function that prints it to the standard output (stdout) in the hours:minutes:seconds format Note
!!! Caesar shift ### (normal) Create a function that takes an input string and encrypts it using a caesar shift of +1
!!! Precision ### (medium) Sometimes you want to do calculations maintaining precision
!!! Count ones ### (normal) Given an integer N Your task is to write a function that prints to the standard output (stdout) the number of 1's present in its binary representation
!!! Binary float ### (normal) Given a binary string containing a fractional part your task is to print to the standard output (stdout) its numeric value (as a float)
!! Simple loops ### Practice with: loops
!!! Vowel count ### (normal) Given a string Your task is to write a function that prints to the standard output (stdout) the number of vowels it contains Note that your function will receive
!!! done @ FizzBuzz ### (normal) Given an integer number N Your task is to write a function that prints to the standard output (stdout) the numbers from 1 to N (one per line) with
https://www.talentbuddy.co/challenge/51846c184af0110af3822c30
<<<
problem
write a function that prints to the standard output (stdout) the numbers from 1 to N (one per line) with the following restrictions
for multiples of three print “Fizz” instead of the number
for the multiples of five print “Buzz” instead of the number
for numbers which are multiples of both three and five print “FizzBuzz”
Note that your function will receive the following arguments: n
which is the integer number described above
<<<
{{{
function fizzbuzz(n) {
for ( var i = 1; i <= n; i++ ) {
if (i % 3 == 0 && i % 5 == 0 ) {
console.log("FizzBuzz");
} else if ( i % 3 == 0 ) {
console.log("Fizz");
} else if ( i % 5 == 0 ) {
console.log("Buzz");
} else {
console.log(i);
}
}
}
fizzbuzz(20);
}}}
http://stackoverflow.com/questions/13736690/javascript-fizzbuzz-issue
https://gist.github.com/virajkulkarni14/6847512
!!! done @ Longest palindrome ### (medium) Given a string S, find the longest substring in S that is the same in reverse and print it to the standard output
https://www.talentbuddy.co/challenge/51e486994af0110af3827b17
<<<
Given a string S, find the longest substring in S that is the same in reverse and print it to the standard output. If there are multiple substrings that have the same longest length, print the first one that appears in S from left to right.
Expected complexity: O(N2)
Example input:
S: "abcdxyzyxabcdaaa"
Example output:
xyzyx
<<<
{{{
function isPalindrome(s) {
// reverse the string and compare against the original
var rev = s.split('').reverse().join('');
return s === rev;
//test substr
//var subs = s.substr(2, s.length);
//console.log(subs);
}
console.log(isPalindrome('123')); //example2, prints false
}}}
solution https://www.codementor.io/tips/1943378231/find-longest-palindrome-in-a-string-with-javascript
some explanation http://articles.leetcode.com/2011/11/longest-palindromic-substring-part-i.html
http://codegolf.stackexchange.com/questions/16327/how-do-i-find-the-longest-palindrome-in-a-string
http://stackoverflow.com/questions/1115001/write-a-function-that-returns-the-longest-palindrome-in-a-given-string
http://codedbot.com/questions/1426374/largest-palindrome-product-in-javascript
http://stackoverflow.com/questions/3647453/counting-palindromic-substrings-in-on
{{{
// talentbuddy
function longest_palind(s) {
var longest = '';
for (var pos = 0; pos < s.length - 1; pos++) { // first loop
console.log(pos + ' first loop');
for (var len = 2; len <= s.length; len++) { // 2nd loop
console.log(len + ' 2nd loop');
if (pos + len <= s.length) {
var subStr = s.substr(pos, len);
var reverseStr = subStr.split('').reverse().join('');
if (subStr === reverseStr) {
if (longest.length < subStr.length) {
longest = subStr;
}
}
}
}
}
console.log(longest);
}
//var s = "abcdxyzyxabcdaaa";
var s = "123456xyx";
longest_palind(s);
// 2nd solution - this is O(N^3) due to the function call and reverse iteration on j
function isPalindrome(s) {
var rev = s.split("").reverse().join("");
return s == rev;
}
function longestPalind(s){
var maxp_length = 0,
maxp = '';
for(var i=0; i < s.length; i++) {
var subs = s.substr(i, s.length);
for(var j=subs.length; j>=0; j--) {
var sub_subs = subs.substr(0, j);
if (sub_subs.length <= 1)
continue;
//console.log('checking: '+ sub_subs);
if (isPalindrome(sub_subs)) {
//console.log('palindrome: '+ sub_subs);
if (sub_subs.length > maxp_length) {
maxp_length = sub_subs.length;
maxp = sub_subs;
}
}
}
}
//console.log(maxp_length, maxp);
return maxp;
}
console.log(longestPalind("abcxyzyxabcdaaa"));
// answer to the 2nd question if O(n^2) - this is O(n^3) after conversation with Andrei
function longest_palind(s) {
var longest = '';
for (var pos = 0; pos < s.length - 1; pos++) { // first loop
console.log(pos + ' first loop'); // DEBUG
//for (var len = 2; len <= s.length; len++) { // 2nd loop - original, iterates 73 times
// rewrite below (either of the two):
for (var len = s.length; len >= 0; len--) { // 2nd loop - decrement, iterates 89 times possibly (n^2)
//for (var len = 0; len <= s.length; len++) { // 2nd loop - another regular loop, also iterates 89 times possibly (n^2)
console.log(len + ' 2nd loop'); // DEBUG
if (pos + len <= s.length) {
var subStr = s.substr(pos, len);
var reverseStr = subStr.split('').reverse().join('');
if (subStr === reverseStr) {
if (longest.length < subStr.length) {
longest = subStr;
}
}
}
}
}
console.log(longest);
}
//var s = "abcdxyzyxabcdaaa";
var s = "123456xyx";
longest_palind(s);
// talentbuddy answer to palindrome
// O(n^2)
function longest_palind(s) {
var longest = ''; // O(1)
var match = ''; // O(1)
for(var i = s.length - 1; i >= 0; i--){ // O(n)
match += s[i]; // O(n)
if(s.indexOf(match) > -1){ // O(n)
if(match.length > longest.length){ // O(n)
longest = match; // O(1)
}
} else {
match = s[i]; // 0(n)
}
}
console.log(longest); // O(1)
}
// = O(1) + O(1) + O(n) * ( O(n) + O(n) + O(n) + O(1) + O(n) ) + O(1)
// = O(n^2)
// solution 1 - talentbuddy
function longest_palind(s) {
var longest = '';
//for (var pos = 0; pos < s.length - 1; pos++) {
for (var pos = 0; pos < s.length; pos++) {
for (var len = 2; len <= s.length; len++) {
if (pos + len <= s.length) {
var subStr = s.substr(pos, len);
var reverseStr = subStr.split('').reverse().join('');
if (subStr === reverseStr) {
if (longest.length < subStr.length) {
longest = subStr;
}
}
}
}
}
console.log(longest);
}
//var s = "abcdxyzyxabcdaaa";
//longest_palind(s);
//
//xyzyx
// solution 2 - using charAt - O(n^2)
function longest_palind(s) {
var reversed = s.split( "" ).reverse().join( "" ),
longest = "",
temp = "";
for( var i = 0, len = s.length; i < len; i++ ) {
var char = s.charAt( i );
if( reversed.indexOf( temp + char ) !== -1 ) {
temp += char;
} else {
if( temp.length > longest.length )
longest = temp;
temp = char;
}
}
// Edge case: longest palindrome exists at the end of the string
console.log( ( temp.length > longest.length ) ? temp : longest );
}
var s = "21xyx";
longest_palind(s);
// solution 3 - this could actually be O(n^3) because of the while, need to validate with Andrei
function longest_palind(s) {
sarray = s.split("");
var longest_yet = ""
for (var i = 0; i < sarray.length; i+=1) {
var size = 1;
var palindromeSlice = "";
while( sarray[i-size] === sarray[i+size] ){
palindromeSlice = sarray.slice(i-size,i+size+1).join("");
size += 1
};
palindromeSlice.length > longest_yet.length ? longest_yet = palindromeSlice : longest_yet = longest_yet;
};
console.log(longest_yet)
}
var s = "21xyx";
longest_palind(s);
// solution 4 - center - but this is O(n^3)
function longest_palind(s) {
// Write your code here
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
var l = s.length;
var longest1 = 0;
var realCenter = 0;
for (var center=0; center<l; center++) {
for (var j=1; j<l; ) {
var first = s[center-j];
var second = s[center+j];
if (first === second && longest1 < j) {
j++;
longest1 = j;
realCenter = center;
} else {
break;
}
}
}
var left = realCenter - longest1;
var right = realCenter + longest1 +1;
console.log(s.substring(left,right));
}
var s = "21xyx";
longest_palind(s);
// talentbuddy answer to palindrome
// = O(1) + O(1) + O(n) * ( O(n) + O(n) + O(n) + O(1) + O(n) ) + O(1)
// = O(n^2)
function longestPalind_palind(s) {
var foundString = ''; // O(1)
var longestPalind = ''; // O(1)
for(var i = s.length - 1; i >= 0; i--){ // O(n)
foundString += s[i]; // O(n)
// console.log(foundString + ' = foundString'); // debug
// console.log(s.indexOf(foundString) + ' indexof'); // debug
if(s.indexOf(foundString) > -1){ // O(n)
//console.log(s.indexOf(foundString) + ' indexof'); // debug
if(foundString.length > longestPalind.length){ // O(n)
longestPalind = foundString; // O(1)
//console.log(longestPalind + ' = longestPalind'); // debug
}
} else {
foundString = s[i]; // O(n)
//console.log(s[i] + ' s[i]'); // debug
}
}
console.log(longestPalind); // O(1)
}
//var s = "321xyx321";
//var s = "321xyx321";
var s = "21xyx";
longestPalind_palind(s);
// = O(1) + O(1) + O(n) * ( O(n) + O(n) + O(n) + O(1) + O(n) ) + O(1)
// = O(n^2)
//// test1
//var a = 'abcdefg';
//console.log(a.split('').reverse());
//console.log(a.split('').sort());
//console.log(typeof a);
//
//// test2
//var anyString = 'Brave new world';
//
//console.log("The character at index 0 is '" + anyString.charAt(0) + "'");
//console.log("The character at index 1 is '" + anyString.charAt(1) + "'");
//console.log("The character at index 2 is '" + anyString.charAt(2) + "'");
//console.log("The character at index 3 is '" + anyString.charAt(3) + "'");
//console.log("The character at index 4 is '" + anyString.charAt(4) + "'");
//console.log("The character at index 999 is '" + anyString.charAt(999) + "'");
//
//console.log(typeof anyString.charAt(0));
//
//// test3
//
//var a = 'abcdefg';
//console.log(a.split(''));
//console.log(typeof a);
//
//// test4
//var a = 'a';
//var b = 'b';
//console.log(a + b);
// solution 2 - using charAt - O(n^2)
function longest_palind(s) {
var reversePalind = s.split( '' ).reverse().join( '' ); // O(n)
var longestPalind = ''; // O(1)
var found = ''; // O(1)
for( var i = 0, len = s.length; i < len; i++ ) { // O(n)
var char = s.charAt( i ); // O(n)
//console.log(found + char + ' debug0_found+char');
//console.log(reversePalind.indexOf( found + char ) + ' #' + found + ' #' + char + ' debug1_reversePalind found + char');
if( reversePalind.indexOf( found + char ) !== -1 ) { // O(n)
found += char; // O(n)
//console.log(found + ' debug2');
} else {
if( found.length > longestPalind.length ) // O(1)
longestPalind = found; // O(1)
//console.log(longestPalind + ' debug3');
found = char; // O(1)
}
}
//console.log(found);
if ( found.length > longestPalind.length ) { // O(1)
console.log(found); // O(1) because found gets the previous char
} else {
console.log(longestPalind); // O(1) this hold the longestPalind in the end
}
}
//var s = "21xyx";
//var s = 'zyz2xyxaabaa2' // this is giving wrong result, outputs '2' need to resolve this w/ if
var s = 'xyaabaa2' // this is giving wrong result, outputs '2' need to resolve this w/ if
//var s = '2aabaa2' // this is giving wrong result, outputs '2aabaa2' need to resolve this w/ if
longest_palind(s);
//= O(n) + O(1) + O(1) + O(n) * ( O(n) + O(n) + O(n) + O(1) + O(1) + O(1) ) + O(1) + O(1) + O(1)
//= O(n) + O(1) + O(1) + O(n) * ( O(n) ) + O(1) + O(1) + O(1)
//= O(n) * O(n)
//= O(n^2)
}}}
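For reference, a clean expand-around-center version (my addition, not one of the submissions above). It is O(n^2) time and O(1) extra space, unlike the substring-scanning variants that pay an extra O(n) per palindrome check:
{{{
// expand-around-center: grow a palindrome outward from every possible center
function longestPalindrome(s) {
    var best = '';
    for (var center = 0; center < s.length; center++) {
        // try both odd-length (l == r) and even-length (l == r - 1) centers
        var spans = [[center, center], [center, center + 1]];
        for (var k = 0; k < spans.length; k++) {
            var l = spans[k][0], r = spans[k][1];
            while (l >= 0 && r < s.length && s[l] === s[r]) { l--; r++; }
            var candidate = s.substring(l + 1, r); // undo the last failed expansion
            if (candidate.length > best.length) best = candidate;
        }
    }
    return best;
}
console.log(longestPalindrome('abcdxyzyxabcdaaa')); // xyzyx
}}}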
!!! Sqrt ### (hard) Given an integer number N, compute its square root without using any math library functions and print the result to standard output
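No solution was saved for this one; a minimal Newton's method sketch (my own assumption of one valid approach, using no math library calls):
{{{
// Newton's method: repeatedly average x with n/x until it converges to sqrt(n)
function my_sqrt(n) {
    if (n < 0) { console.log(NaN); return; }
    var x = n || 1; // initial guess (1 when n is 0 avoids division by zero)
    for (var i = 0; i < 50; i++) { // quadratic convergence, 50 iterations is plenty
        x = (x + n / x) / 2;
    }
    console.log(x);
}
my_sqrt(2); // ~1.4142135623730951
}}}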
!!! done @ Count digits ### (normal) Given a string s, your task is to print to the standard output (stdout) the number of digits it contains
https://www.talentbuddy.co/challenge/52fa6489acdbfdef717c08ca
<<<
problem
Given a string s, your task is to print to the standard output (stdout) the number of digits it contains.
Example input:
s: "abc123"
Example output:
3
<<<
{{{
function count_digits(s) {
var theWord = s;
var count = 0;
for ( var i = 0; i < theWord.length; i++ ) {
if (!isNaN( theWord[i] )) {
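// note (my addition): isNaN(' ') is false, so a whitespace character would be counted here; /[0-9]/.test(theWord[i]) is stricter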
count = count + 1;
//or //count += 1;
}
}
console.log(count);
}
}}}
https://github.com/CodeThat/Talentbuddy/blob/master/CountDigits.cpp
http://codegolf.stackexchange.com/questions/50556/count-the-number-of-vowels-in-each-word-of-a-string
http://stackoverflow.com/questions/6331964/counting-number-of-vowels-in-a-string
http://stackoverflow.com/questions/8935632/check-if-character-is-number
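A regex alternative (my addition) sidesteps the isNaN edge cases:
{{{
// count digit characters with a global regex match
function count_digits(s) {
    var matches = s.match(/[0-9]/g); // null when there are no digits
    console.log(matches ? matches.length : 0);
}
count_digits("abc123"); // 3
}}}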
!!! Odd square sum ### (normal) Print to the standard output (stdout) the sum of all the odd numbers squared between x and y
!! Expressions ### Practice with: expression evaluations
!!! Fraction ### (medium) Given a rational, decimal value, write a function that prints out the simplest possible fraction
!!! Balanced brackets ### (medium) Given a string of open and closed brackets output "Balanced" if the brackets are balanced or "Unbalanced" otherwise
!!! Simple expression ### (hard) Write a simple parser to parse a formula and calculate the result
!! Async JavaScript ### Practice with: asynchronous programming in JavaScript
read on this first - intro to async programming https://docs.google.com/document/d/1lWedbmkszeYH988vlhEAzMBl4yHrlhIkyXTiBoIniOs/edit
!!! Read async ### (medium) Given a string f and a callback next Your task is to write a function that reads the content of the file given by f and prints the content
!!! Copy async ### (medium) Given two file names as strings input_file_name and output_file_name and a callback next Your task is to write a function that reads the content of the input file and
!!! Parallel async ### (medium) Given 3 already implemented asynchronous functions f1, f2, f3 Your task is to write a function called parallel(next) that executes the 3 functions in parallel and calls the callback next
! Databases
!! MongoDB basics ### A series of challenges to help you get your hands dirty with MongoDB, one of the world's fastest growing NoSQL database technologies
!!! Medical app ### (medium) Data in MongoDB has a flexible schema
!!! Book store ### (medium) Data in MongoDB has a flexible schema
!!! Purchase tracking ### (medium) An e-commerce website is tracking the activity of its customers with the help of events in order to make better business decisions
!!! User administration ### (medium) Let's build a user administration interface for Talentbuddy
!!! Indexes ### (medium) Indexes support the efficient resolution of queries in MongoDB
!!! Brands ### (medium) An ecommerce company stored in their database the shoes that each customer prefers
!!! Contact management ### (medium) A sales team needs to keep track of the company's clients
!! Redis basics ### A series of challenges to help you get your hands dirty with Redis
!!! User table ### (medium) The very first thing one has to deal with when building an app is setting up the user table
!!! Currency exchange ### (medium) Building an app that helps people find the currency exchange rates provided by different banks requires setting up a database that keeps the data updated and serves it to the users
!!! Shopping cart ### (medium) An online e-commerce site needs to implement a shopping cart using Redis
!!! Social network ### (hard) Let's implement a few basic operations necessary to create a social network
! Tech Interviews
!! Elementary data structures ### Practice with: arrays, stacks, queues, dequeues
!!! FizzBuzz ### (normal) Given an integer number N Your task is to write a function that prints to the standard output (stdout) the numbers from 1 to N (one per line) with
!!! done @ Missing number ### (normal) Given an array containing all numbers from 1 to N with the exception of one print the missing number to the standard output
https://www.talentbuddy.co/challenge/51e486994af0110af3827b14
<<<
problem
Given an array containing all numbers from 1 to N with the exception of one print the missing number to the standard output.
Example input:
array: 5 4 1 2
Example output:
3
Note: This challenge was part of Microsoft interviews. The expected complexity is O(N).
<<<
first a simple function to test array input
{{{
function myFunction(a) {
var result = a.split(',').map(Number);
return result;
}
var x = myFunction('3,2,3'); //return will be stored in var x
console.log(x);
}}}
then the function
{{{
function find_missing_number(v) {
var x = v.split(',').map(Number);
var n = x.length;
var totalOfLength = (n * (n + 1)) / 2;
var sumOfNumbers= 0;
for(var i=0; i<n; i++){
sumOfNumbers = sumOfNumbers + x[i];
}
/* check array and other values
console.log(x[2]);
console.log(n);
console.log(sumOfNumbers);
console.log(totalOfLength);
*/
console.log(n - (sumOfNumbers - totalOfLength) + 1);
}
find_missing_number('5,4,1,2');
}}}
http://stackoverflow.com/questions/4291447/convert-string-into-array-of-integers
{{{
function find_missing_number(v) {
var n = v.length;
var totalOfLength = (n * (n + 1)) / 2;
var sumOfNumbers= 0;
for(var i=0; i<n; i++){
sumOfNumbers = sumOfNumbers + v[i];
}
/* check array and other values
console.log(x[2]);
console.log(n);
console.log(sumOfNumbers);
console.log(totalOfLength);
*/
console.log(n - (sumOfNumbers - totalOfLength) + 1);
}
}}}
http://www.geeksforgeeks.org/find-the-missing-number/
http://stackoverflow.com/questions/2113795/quickest-way-to-find-missing-number-in-an-array-of-numbers
https://github.com/groleauw/Solutions/blob/master/TalentBuddy/MissingNumber.java
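The geeksforgeeks link above also covers an XOR variant; a sketch of it (my addition). Since x ^ x == 0, XOR-ing all the array values with all the numbers 1..N cancels everything except the missing one:
{{{
function find_missing_number(v) {
    var acc = 0;
    for (var i = 0; i < v.length; i++) {
        acc ^= v[i]; // fold in the array values
        acc ^= (i + 1); // fold in 1..N-1
    }
    acc ^= v.length + 1; // the array holds N-1 numbers, so fold in N itself
    console.log(acc);
}
find_missing_number([5, 4, 1, 2]); // 3
}}}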
!!! Pair product ### (normal) Write to the standard output the greatest product of 2 numbers to be divisible by 3 from a given array of positive integers
!!! Longest improvement ### (medium) A student's performance in lab activities should always improve, but that is not always the case
!!! Balanced brackets ### (medium) Given a string of open and closed brackets output "Balanced" if the brackets are balanced or "Unbalanced" otherwise
!!! Simple expression ### (hard) Write a simple parser to parse a formula and calculate the result
!!! Tweets per second ### (hard) Japan Castle in the Sky airing broke a Twitter record on August 3, 2013
!!! Linked List Cycle ### (medium) Given a linked list of integer values = 0 Your task is to write a function that prints to the standard output (stdout) the value 1 if the list
!! Sorting and order statistics ### Practice with: sorting algorithms and order statistics
!!! Sorting students ### (normal) After an exam all the students are graded and sorted by their grades
!!! done @ Sort names ### (normal) Take an array of first and last names, sort them into alphabetical order by last name, and then print them to the standard output (stdout) one per line
https://www.talentbuddy.co/challenge/52d63de4acdbfdef717ac537
<<<
problem
Take an array of first and last names, sort them into alphabetical order by last name, and then print them to the standard output (stdout) one per line.
Example input:
names: ["Ashley Yards", "Melissa Banks", "Martin Stove", "Erika Johnson", "Robert Jones"]
Example output:
Melissa Banks
Erika Johnson
Robert Jones
Martin Stove
Ashley Yards
Note: All first and last names are separated by a white space.
<<<
{{{
function sort_names(names) {
names.sort(function compare(a, b) {
var xa = a.split(" ")[1];
var yb = b.split(" ")[1];
if (xa > yb) {
return 1;
}
if (xa < yb) {
return -1;
}
return 0;
});
var y=names.toString().split(",");
for (i=0;i<y.length;i++)
{
//document.write(y[i] + "<br >");
console.log(y[i]);
}
}
}}}
{{{
very first version
//solution1
/* ------------------------------ */
var names = ["Ashley Yards", "Melissa Banks", "Martin Stove", "Erika Johnson", "Robert Jones"];
function compare(a, b) {
var splitA = a.split(" ");
var splitB = b.split(" ");
var lastA = splitA[splitA.length - 1];
var lastB = splitB[splitB.length - 1];
if (lastA < lastB) return -1;
if (lastA > lastB) return 1;
return 0;
}
var sorted = names.sort(compare);
console.log(sorted);
//solution2
/* ------------------------------ */
var names = ["Ashley Yards", "Melissa Banks", "Martin Stove", "Erika Johnson", "Robert Jones"];
function compare(a,b) {
return a.split(" ").pop()[0] > b.split(" ").pop()[0]
};
var sorted = names.sort(compare);
console.log(sorted);
}}}
{{{
var str = ["Ashley Yards", "Melissa Banks", "Martin Stove", "Erika Johnson", "Robert Jones"];
var a1 = new Array();
a1=str.toString().split(",");
/// display elements ///
for (i=0;i<a1.length;i++)
{
//document.write(a1[i] + "<br >");
console.log(a1[i]);
}
}}}
{{{
var names = ["Ashley Yards", "Melissa Banks", "Martin Stove", "Erika Johnson", "Robert Jones"];
var x = names.sort(function compare(a, b) {
var xa = a.split(" ")[1];
var yb = b.split(" ")[1];
if (xa > yb) {
return 1;
}
if (xa < yb) {
return -1;
}
return 0;
});
var y=x.toString().split(",");
for (i=0;i<y.length;i++)
{
//document.write(y[i] + "<br >");
console.log(y[i]);
}
}}}
{{{
function sort_names(names) {
var names = ["Ashley Yards", "Melissa Banks", "Martin Stove", "Erika Johnson", "Robert Jones"]; //test the names array
names.sort(function compare(a, b) {
var xa = a.split(" ")[1];
var yb = b.split(" ")[1];
if (xa > yb) {
return 1;
}
if (xa < yb) {
return -1;
}
return 0;
});
var y=names.toString().split(",");
for (i=0;i<y.length;i++)
{
//document.write(y[i] + "<br >");
console.log(y[i]);
}
}
sort_names();
}}}
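A shorter compare using localeCompare (my addition) does the same last-name sort:
{{{
var names = ["Ashley Yards", "Melissa Banks", "Martin Stove", "Erika Johnson", "Robert Jones"];
names.sort(function (a, b) {
    // compare on the last whitespace-separated token of each full name
    return a.split(" ").pop().localeCompare(b.split(" ").pop());
});
names.forEach(function (n) { console.log(n); });
}}}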
!!! Sorted merge ### (medium) Given 2 sorted arrays, merge them into one single sorted array and print its elements to standard output
!!! Relative sort ### (medium) Given an array of integer numbers your task is to print to the standard output (stdout) the initial array, but sorted in a special way: all negative numbers come first
!!! Majority number ### (medium) Given an array of integer numbers your task is to print to the standard output (stdout) the majority number
!!! Nth number ### (medium) Given an array of unsorted but unique integers, your task is to print the n-th largest value
!!! Median ### (medium) Given two sorted arrays A and B Your task is to write a function that prints to the standard output (stdout) the median of the array obtained after merging
!! Search ### Practice with: binary search, hash tables, prefix trees
!!! Sqrt ### (hard) Given an integer number N, compute its square root without using any math library functions and print the result to standard output
!!! Pair Sum ### (normal) Given an array of integer numbers and a value S Your task is to write a function that prints to the standard output "1" if two numbers from
!!! Count occurrences ### (normal) Given an array of sorted integers and a number K, your task is to print to the standard output (stdout) the number of occurrences for K in the initial array
!!! Tuple sum ### (hard) Given an array A your task is to print in ascending order 4 distinct 0-based indexes of elements in the array that add up to a sum S, if such indexes
!!! Typeahead ### (hard) The Typeahead feature allows you to quickly search people while writing a tweet on Twitter
!!! Find string ### (medium) Given two strings str1 and str2 Your task is to write a function that prints to the standard output (stdout) the first occurrence of str2 in str1 or -1
!! Elementary graph problems ### Practice with: graph representation and traversal, shortest path etc
!!! Depth first traversal ### (medium) Given a binary tree, your task is to do a depth-first traversal and print its node values in order
!!! Topological sort ### (medium) Topological sorting is scheduling a sequence of jobs or tasks based on their dependencies
!!! Bacon number ### (medium) A "Bacon Number" as defined on Wikipedia is: "the number of degrees of separation an actor has from Kevin Bacon"
!!! PACO ### (medium) Casian is infiltrating an alien base to destroy the controls of the mothership that threatens planet PACO
!!! Pouring ### (hard) A robot in a factory is tasked with getting exactly L litres of water into any one of two containers A and B
!!! Dispatcher ### (medium) Uber needs to connect the rider with their closest available car
!!! Selection ### (hard) Given an array with N integer numbers your task is to print to the standard output (stdout) the smallest K numbers among them
!!! Reducer ### (hard) As you remember, the mappers previously produced key-value pairs of data - the key was the string we were searching for and the value was a list of matched documents
!!! Countries ### (normal) Given a map: square region NxN
!! Advanced techniques ### Practice with: dynamic programming
!!! Coins ### (hard) You are given a list of coin values [V1, V2
!!! Max sum ### (medium) You are given a list of integer numbers [a1, a2,
!!! Heads and tails ### (medium) Given a sequence of heads and tails
!! Math ### Practice with: general concepts of mathematics
!!! Linear equation ### (normal) Let's consider a simple equation of the following form: a*x + b = 0 Given the values for a and b Your task is to write a function that prints
!!! Nth permutation ### (medium) Given an array of integer numbers print to the standard output the nth circular permutation to the right
!!! Power of 2 ### (medium) Given an integer number x, print to the standard output: the value 1 if x is a power of 2 or the value 0 if x is not a power
!!! Fraction ### (medium) Given a rational, decimal value, write a function that prints out the simplest possible fraction
!!! Precision ### (medium) Sometimes you want to do calculations maintaining precision
!!! Pair product ### (normal) Write to the standard output the greatest product of 2 numbers to be divisible by 3 from a given array of positive integers
!!! Bottle ### (normal) You have a bottle
!!! Prime numbers ### (normal) Given an integer number n Your task is to write a function that prints to the standard output (stdout) all the prime numbers up to (but not including)
!! General interview practice ### Practice with: various interview questions
!!! Rain ### (medium) NxM cuboids were put on a rectangular mesh comprising NxM fields, one cuboid on each field
!!! Skyscrapers ### (medium) The Wall Street in New York is known for its breathtaking skyscrapers
!!! Palindromes count ### (hard) Given a string, your task is to print to the standard output (stdout) the total number of palindromic sequences
!!! Check ### (hard) Write a program that reads a chess table and reports which player, if any, is in check
!!! Chocolate bars ### (hard) Jimmy challenges you to play a game about chocolate bars
!! HubSpot challenges ### Exercise your skills with a set of problems from HubSpot, an all-in-one marketing software that helps more than 10,000 companies in 56 countries attract leads and convert them into customers
!!! Plane tickets ### (medium) A plane ticket contains a departure location and a destination
!!! Unique sequence ### (hard) Given an array of integer numbers Your task is to write a function that prints to the standard output the length of the longest sequence of consecutive elements ...
!! Redbeacon challenges ### Exercise your skills with a set of problems from Redbeacon, a marketplace that enables you to get competitive quotes from trusted local pros for your home-service project
!!! Request counting ### (normal) A large number of customers requested a cleaning service on Redbeacon
!!! Scheduling ### (normal) A professional receives multiple service requests in a day, but not all of them can be fulfilled due to time conflicts
!! Twitter challenges ### Tweet, tweet! What kind of problems challenge the minds of the engineers that are building Twitter? Solve this set of problems to get an idea of how they look and feel
!!! Typeahead ### (hard) The Typeahead feature allows you to quickly search people while writing a tweet on Twitter
!!! Tweets per second ### (hard) Japan Castle in the Sky airing broke a Twitter record on August 3, 2013
!! Uber challenges ### Uber connects you with a driver at the tap of a button
!!! Dispatcher ### (medium) Uber needs to connect the rider with their closest available car
!!! Price experiment ### (medium) Uber is a company based in San Francisco, California that makes a mobile application that connects passengers with drivers of luxury vehicles for hire
!!! Neighbourhood ### (medium) Uber needs to connect a rider with a car as fast as possible
!!! Pub crawl ### (hard) How to visit as many pubs as possible in one night while spending as little money as possible on Uber rides? Given a list of roads connecting pubs
!! Google interview ### If you are wondering what Google interviews look like
!!! Relative sort ### (medium) Given an array of integer numbers your task is to print to the standard output (stdout) the initial array, but sorted in a special way: all negative numbers come first
!!! Majority number ### (medium) Given an array of integer numbers your task is to print to the standard output (stdout) the majority number
!!! Selection ### (hard) Given an array with N integer numbers your task is to print to the standard output (stdout) the smallest K numbers among them
!!! Count occurrences ### (normal) Given an array of sorted integers and a number K, your task is to print to the standard output (stdout) the number of occurrences for K in the initial array
!!! Invert binary tree ### (medium) Given a binary tree Your task is to write a function that inverts and prints it to the standard output (stdout)
!! Bitwise operations ### Practice with: bitwise operations
!!! Even number ### (medium) Given an integer number n Your task is to write a function that prints to the standard output (stdout) 0 if n is even, otherwise 1
!!! Divide by 2 ### (medium) Given an integer number n Your task is to write a function that prints to the standard output (stdout) the quotient of dividing n by 2
!!! Multiply by 2 ### (medium) Given an integer number n Your task is to write a function that prints to the standard output (stdout) the result of multiplying n by 2
!!! 2^n ### (medium) Given an integer number n Your task is to write a function that prints to the standard output (stdout) the result of raising 2 to the power of n
!!! Power of 2 ### (medium) Given an integer number x, print to the standard output: the value 1 if x is a power of 2 or the value 0 if x is not a power
!!! Swap values ### (medium) Given two integer numbers a and b Your task is to write a function that swaps their values and prints to the standard output (stdout) the new values of
!!! Compute average ### (medium) Given two integer numbers a and b Your task is to write a function that prints to the standard output (stdout) their average
!!! Set bit ### (medium) Given two integer numbers a and n Your task is to write a function that prints to the standard output (stdout) the value of a after its n-th bit
!!! Unset bit ### (medium) Given two integer numbers a and n Your task is to write a function that prints to the standard output (stdout) the value of a after its n-th bit
! Projects
!! Search engine ### Build: a search engine
!!! done @ Tokenize query ### (medium) As soon as a user inputs a query, the search engine must tokenize it - that means break it down into understandable tokens
<<<
https://www.talentbuddy.co/challenge/51846c184af0110af3822c32
problem:
As soon as a user inputs a query, the search engine must tokenize it - that means break it down into understandable tokens. A token is defined as a sequence of characters separated by white spaces and/or punctuation.
Given a string representing a user query and a set of punctuation characters
Your task is to
write a function that prints to the standard output (stdout) all the tokens in the user query (one per line)
Note that your function will receive the following arguments:
query
which is a string giving the user query
punctuation
which is a string giving the punctuation characters that separate tokens
<<<
{{{
// ********************************************************************************
// the 1st working version
var text = 'car? dealers! bmw, audi';
var punctuation = /(?:\?|!| ){1,2}/;
var parseText = text.split(punctuation);
console.log(parseText.join('\n'));
// test some more
var text = "which...";
//var punctuation = /(?:-|\^|,|\.{3}|\?|!|'| | | ){1,3}/
var punctuation = /\.+/
//var punctuation = /(?:,|'|\.| ){1,2}/; //",'." //", which isn't that surprising I gu.ess"
//var punctuation = /(?:\?|!| ){1,2}/;
// ".!?^-" Bellman-Ford ^algorithm, which... is? very! similar to Dijkstra's, is definitely dynamic
var parseText = text.replace(punctuation, 'x');
console.log(parseText);
// ********************************************************************************
// just testing stuff
function testThis(a) {
var text = a;
console.log('this is the text ' + text);
}
testThis('hello');
// ********************************************************************************
// test split
var string = '?!';
var splitString = string.split('');
console.log(typeof splitString);
// ********************************************************************************
// work on the separator, parse each character and push it in an array
// the goal is to build a dynamic var like this /(?:\?|!| ){1,2}/
function testThis(a) {
var punctuation = a.split('');
var punctuationSeparator = [];
var adjustQuestionMark1 = /:\?/;
var adjustQuestionMark2 = /\|\?/;
punctuationSeparator.push("/(?:");
for (var i = 0; i < punctuation.length; i++) {
punctuationSeparator.push(punctuation[i] + '\|');
}
punctuationSeparator.push(" ){1,2}/");
//console.log(punctuationSeparator.join('').replace(adjustQuestionMark,':\\?'));
var finalSeparator = punctuationSeparator.join('').replace(adjustQuestionMark1,':\\?').replace(adjustQuestionMark2,'\|\\?');
console.log(finalSeparator);
}
testThis('?!');
// ********************************************************************************
// FINAL
// now make it a function
function tokenize_query(query, punctuation) {
// take care of the separator and creates the array
var punctuation = punctuation.split('');
var punctuationSeparator = [];
var adjustPunctuation1a = /:\?/;
var adjustPunctuation1b = /\|\?/;
var adjustPunctuation2a = /:|\./;
var adjustPunctuation2b = /\|\./;
var adjustPunctuation3a = /:|\^/;
var adjustPunctuation3b = /\|\^/;
var adjustPunctuation4 = /\.+/;
var adjustPunctuation5 = /ng,+/;
var adjustPunctuation6 = /, /;
// initial push to build the regex
punctuationSeparator.push("(?:");
// continue building the regex
for (var i = 0; i < punctuation.length; i++) {
punctuationSeparator.push(punctuation[i] + '\|');
}
// last push for the regex
punctuationSeparator.push(" ){1,2}");
//console.log(punctuationSeparator.join());
// join the array without comma, also apply the adjustments
var finalSeparator = punctuationSeparator.join('').replace(adjustPunctuation1a, ':\\?').replace(adjustPunctuation1b, '\|\\?').replace(adjustPunctuation2a, ':\\.').replace(adjustPunctuation2b, '\|\\.').replace(adjustPunctuation3a, ':\\^').replace(adjustPunctuation3b, '\|\\^');
//console.log(finalSeparator);
// create the regex
var finalSeparator = new RegExp(finalSeparator); // the regex format is /(?:\?|!| ){1,2}/
// apply the regex to the query, also apply the other adjustments
var query = query;
var parseText = query.split(finalSeparator);
var finalParse = parseText.join('\n').replace(adjustPunctuation4, '').replace(adjustPunctuation5, 'ng').replace(adjustPunctuation6, '');
// this removes the first line if it's comma
if (finalParse.substr(0, 1) === ',') {
console.log(finalParse.split('\n').slice(1).join('\n'))
} else {
console.log(finalParse);
}
}
// run it!
tokenize_query(", which isn't that surprising ", ",");
}}}
!!!! solutions
{{{
// solutions here https://www.talentbuddy.co/news/f/news:Tokenize%20query:JavaScript:All
/*function tokenize_query(query, punctuation) {
console.log(query.split(new RegExp('[ '+punctuation.replace(/[\-\[\]\/\{\}\(\)\*\+\?\.\\\^\$\|]/g, "\\$&")+']')).join('\n'));
}*/
/*function tokenize_query(query, punctuation) {
for (c in punctuation){
query = query.split(punctuation[c]).join(" ");
}
query = query.split(" ");
for (i in query){
if (query[i]!=''){
console.log(query[i]);
}
}
}*/
/*function tokenize_query(query, punctuation) {
if (query.length < 1000 && punctuation.length < 10) {
var puncs = punctuation.split("");
var bits = [];
var st = "";
var print = function (el) {
if (el !== "") {
console.log(el);
}
};
var isPunc = function (ch) {
for (var i=0, l=puncs.length; i<l; i++) {
if (ch === puncs[i]) {
return true;
}
}
};
for (var i=0, l=query.length; i<l; i++ ) {
var ch = query.charAt(i);
if (isPunc(ch)) {
st += " ";
}
else {
st += ch;
}
}
st.split(" ").forEach(print);
}
}*/
/*function tokenize_query(query, punctuation) {
for(var p = 0; p < punctuation.length; p++) {
query = query.split(punctuation[p]).join(" ");
}
var tokens = query.split(" ").join(" ").split(" ");
for(var i = 0; i < tokens.length; i++) {
console.log(tokens[i]);
}
}*/
tokenize_query("this,is.?the l i f e",",.?")
}}}
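The first commented solution above (escape the regex metacharacters, then split on the character class) is the cleanest; an uncommented runnable version of the same idea (my addition, it also filters out empty tokens):
{{{
function tokenize_query(query, punctuation) {
    // escape regex metacharacters in the punctuation string
    var escaped = punctuation.replace(/[\-\[\]\/\{\}\(\)\*\+\?\.\\\^\$\|]/g, "\\$&");
    // split on one or more separators (space or any punctuation character)
    var tokens = query.split(new RegExp('[ ' + escaped + ']+'));
    tokens.forEach(function (t) { if (t !== '') console.log(t); });
}
tokenize_query("this,is.?the life", ",.?"); // this / is / the / life
}}}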
!!!! pre-req
Tokenize query ### lessons https://www.talentbuddy.co/set/51d773034af0110af3826a2d
<<<
Find character https://www.talentbuddy.co/challenge/51d773034af0110af3826a2d51d773034af0110af3826a2e
Find substring https://www.talentbuddy.co/challenge/51d773034af0110af3826a2d51d773034af0110af3826a2f
Count substrings https://www.talentbuddy.co/challenge/51d773034af0110af3826a2d51d773034af0110af3826a30
Count tokens https://www.talentbuddy.co/challenge/51d773034af0110af3826a2d51d773034af0110af3826a31
<<<
!!! done @ Remove stop words ### (normal) Stop words are tokens which are filtered out from queries because they add little value in finding relevant web pages
https://www.talentbuddy.co/challenge/51846c184af0110af3822c3151846c184af0110af3822c33
<<<
Stop words are tokens which are filtered out from queries because they add little value in finding relevant web pages.
Given a list of tokens that were obtained after the search engine tokenized the user query using your code from the previous task and a list of stopwords
Your task is to
write a function that prints to the standard output (stdout) all the tokens in the user query that are not stop words (one per line)
Note that your function will receive the following arguments:
query
which is an array of strings giving the tokens in the user query
stopwords
which is an array of strings giving the stop words
Data constraints
the length of the query array will not exceed 1000
the length of the stopwords array will not exceed 1000
all string comparisons are case-sensitive (i.e: Cool != cool)
<<<
{{{
// first stab at it
//When the input is
//query = ["episode", "of", "the", "third", "season", "of", "the", "animated", "comedy", "series"]
//stopwords = ["of", "the"]
//
//Correct output
//episode
//third
//season
//animated
//comedy
//series
//
//query = ["10", "times", "a", "year", "IEN", "Italia", "provides", "a", "digest", "of", "the", "latest", "products", "news", "and", "technologies", "available", "on", "the", "Italian", "market", "In", "2009", "nearly", "14", "000", "subscribers", "received", "IEN", "Italia", "mostly", "engineers", "and", "purchasing", "managers", "IEN", "Italia", "also", "publishes", "newsletters", "and", "updates", "its", "website", "with", "daily", "news", "about", "new", "products", "and", "services", "available", "to", "the", "Italian", "market"]
//stopwords = ["and", "IEN", "Italia", "the", "products", "a", "news", "market"]
function remove_stopwords(query, stopwords) {
// parse the stopwords as individual elements
var stopwordsArray = [];
for (var i in stopwords) {
stopwords = stopwords.toString().split(' ').toString().split(',');
if (stopwords[i] != '') {
stopwordsArray.push(stopwords[i]);
}
}
for (i in stopwordsArray) {
// apply the stopwords to the query
var query = query.toString().split(stopwordsArray[i]).toString().split(',');
// remove the empty lines
var query = query.filter(function(v){return v!==' '});
// remove the white space
var query = query.toString().replace(/' '/g,'').split(' ');
// split to object
var query = query.toString().split(',');
}
for ( i in query) {
// filter the empty element
if ( query[i] != '' ) {
console.log(query[i]);
}
}
}
// run it!
remove_stopwords('episode of the third season of the animated comedy series','of, the'); //working
}}}
{{{
// FINAL
function remove_stopwords(query,stopwords) {
//console.log(query);
//console.log(stopwords);
// this iterates on the query marking every stopword blank
for (var i = 0; i < stopwords.length; i++) {
for (var j = 0; j < query.length; j++) {
if (query[j] == stopwords [i]) {
query[j] = '';
}
//console.log(query[j]);
}
}
// this outputs the non-blank query
for (var i = 0; i < query.length; i++) {
if (query[i] !== '') {
console.log(query[i]);
}
}
}
//var query = ['karl', 'arao'];
//var stopwords = ['karl'];
var query = ['10', 'times', 'a', 'year', 'IEN', 'Italia', 'provides', 'a', 'digest', 'of', 'the', 'latest', 'products', 'news', 'and', 'technologies', 'available', 'on', 'the', 'Italian', 'market', 'In', '2009', 'nearly', '14', '000', 'subscribers', 'received', 'IEN', 'Italia', 'mostly', 'engineers', 'and', 'purchasing', 'managers', 'IEN', 'Italia', 'also', 'publishes', 'newsletters', 'and', 'updates', 'its', 'website', 'with', 'daily', 'news', 'about', 'new', 'products', 'and', 'services', 'available', 'to', 'the', 'Italian', 'market']
var stopwords = ['and', 'IEN', 'Italia', 'the', 'products', 'a', 'news', 'market']
remove_stopwords(query,stopwords);
}}}
!!!! solutions
{{{
// ********************************************************************************
// vlad's solution, uses for and indexOf
/*
function remove_stopwords(query, stopwords) {
for (var i = 0; i < query.length; i++) {
if (stopwords.indexOf(query[i]) == -1) {
console.log(query[i]);
}
}
}
*/
// ********************************************************************************
// this uses for in and indexOf, much like compare
/*
function remove_stopwords(query, stopwords) {
for(var key in query){
if(stopwords.indexOf(query[key]) === -1){
console.log(query[key]);
}
}
}
*/
// ********************************************************************************
// this uses filter and indexOf
/*
function remove_stopwords(query, stopwords) {
console.log(query.filter(function (i) {
return !~stopwords.indexOf(i);
}).join('\n'))
}
*/
// ********************************************************************************
// pretty cool object population, then compare stopwords with object
/*
function remove_stopwords (query, stopwords) {
var stops = {};
for (var i = 0; i < stopwords.length; i++) {
stops[stopwords[i]] = stops;
}
for (i = 0; i < query.length; i++)
{
if (stops[query[i]] != stops) {
console.log(query[i] + "\n");
}
}
}
*/
// ********************************************************************************
// using a SET function
/*
function remove_stopwords(query, stopwords) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
var set={};
for(var i=0;i<stopwords.length;++i){
set[stopwords[i]]=true;
}
for(var i=0;i<query.length;++i){
if(!set[query[i]]){
console.log(query[i]);
}
}
}
*/
// ********************************************************************************
// another variation of the SET logic
/*
function remove_stopwords(query, stopwords) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
var stops = {};
for(var i = 0; i < stopwords.length; i++) {
stops[stopwords[i]] = true;
}
function print(word) {
if(!stops[word]) {
console.log(word);
}
}
query.forEach(print);
}
*/
// ********************************************************************************
// this is what I was trying to do initially, IF ELSE + function
/*
function remove_stopwords(query, stopwords) {
if (query.length <= 1000 && stopwords.length <= 1000) {
var a = [];
var isStopWord = function (word) {
for (var i=0, l=stopwords.length; i<l; i++) {
if ( word === stopwords[i] ) {
return true;
}
}
return false;
};
for (var i=0, l=query.length; i<l; i++) {
if (isStopWord(query[i])) {
continue;
}else {
a.push(query[i]);
}
}
a.forEach(function(el){
console.log(el);
});
}
}*/
// ********************************************************************************
// I was trying to do this as well, compare TRUE and use on another loop
/*
function remove_stopwords(query, stopwords) {
var i, j, found;
for (i = 0; i < query.length; i++) {
found = false;
for (j = 0; j < stopwords.length; j++) {
if (query[i] == stopwords[j]) {
found = true;
break;
}
}
if (!found) console.log(query[i]);
}
}*/
// ********************************************************************************
// pretty smart use of array
/*
function remove_stopwords(query, stopwords) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
hash={};
for(var i=0;i<stopwords.length;++i){
hash[stopwords[i]]=true;
}
for(var i=0;i<query.length;++i){
if(!hash.hasOwnProperty(query[i])) console.log(query[i]);
}
}*/
// ********************************************************************************
// these two below are nice examples of negation
/*
function remove_stopwords(query, stopwords) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
if(query.length > 1000 || stopwords.length > 1000) {
throw new Error('query and stopwords arrays have a max length of 1000');
}
for (var i=0, l= query.length; i < l; i++) {
found = false;
for (var j =0; j<stopwords.length; j++) {
if(query[i] === stopwords[j]) {
found = true;
}
}
if(!found) {
console.log(query[i]);
}
}
}
function remove_stopwords(query, stopwords) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
for(var i=0;i<query.length;i++)
{
var ok=true;
for(var j=0;j<stopwords.length;j++)
{
if(query[i]==stopwords[j])
ok=false;
}
if(ok)
console.log(query[i]);
}
}
*/
// ********************************************************************************
// short and interesting
/*
function remove_stopwords(query, stopwords) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
query.filter(function (word) {
return stopwords.indexOf(word) == -1;
}).forEach(function (word) {
console.log(word);
});
}*/
var query = ['10', 'times', 'a', 'year', 'IEN', 'Italia', 'provides', 'a', 'digest', 'of', 'the', 'latest', 'products', 'news', 'and', 'technologies', 'available', 'on', 'the', 'Italian', 'market', 'In', '2009', 'nearly', '14', '000', 'subscribers', 'received', 'IEN', 'Italia', 'mostly', 'engineers', 'and', 'purchasing', 'managers', 'IEN', 'Italia', 'also', 'publishes', 'newsletters', 'and', 'updates', 'its', 'website', 'with', 'daily', 'news', 'about', 'new', 'products', 'and', 'services', 'available', 'to', 'the', 'Italian', 'market']
var stopwords = ['and', 'IEN', 'Italia', 'the', 'products', 'a', 'news', 'market']
remove_stopwords(query,stopwords);
}}}
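With ES6, a Set gives the same constant-time lookup as the hash/object solutions above (my addition):
{{{
function remove_stopwords(query, stopwords) {
    var stops = new Set(stopwords); // O(1) membership checks
    query.forEach(function (word) {
        if (!stops.has(word)) console.log(word);
    });
}
remove_stopwords(["episode", "of", "the", "third"], ["of", "the"]); // episode / third
}}}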
!!! done @ Query tokens stemming ### (medium) Stemming removes word suffixes to reduce inflected (or sometimes derived) words to their base or root form
https://www.talentbuddy.co/challenge/51846c184af0110af3822c3151846c184af0110af3822c34
<<<
Stemming removes word suffixes to reduce inflected (or sometimes derived) words to their base or root form.
E.g. “friendly” is an inflection of “friend”. By stemming (in this case stemming means removing the suffix “ly”), “friendly” is reduced to “friend”.
Given a list of tokens and a list of suffixes
Your task is to
write a function that prints to the standard output (stdout) all the tokens having their suffix removed if found in the list of suffixes (please print one token per line)
for each token if there is more than one suffix that can be removed please choose the one that is the longest
Note that your function will receive the following arguments:
tokens
which is an array of strings giving the tokens described above
suffixes
which is an array of strings giving the suffixes described above
Data constraints
the length of the tokens array will not exceed 1000
the length of the suffixes array will not exceed 100
all string comparisons are case-sensitive (i.e: Cool != cool)
<<<
{{{
//**************************************************
// research
//**************************************************
// test indexOf
var fruits = ["Strawberry", "Banana", "Mango"]
var pos = fruits.indexOf("Mango"); // 2
console.log(pos);
// test search
var str="This is testing for javascript search !!!";
if(str.search("for") != -1) {
//logic
console.log('found');
} else {
console.log('not found');
}
// test indexOf, lastIndexOf, search, split
// all will return TRUE
var stringVariable = "some text";
var findString = "text";
//using `indexOf()`
var containResult = stringVariable.indexOf(findString) != -1;
console.log(containResult);
//using `lastIndexOf()`
var containResult = stringVariable.lastIndexOf(findString) != -1;
console.log(containResult);
//using `search()`
var containResult = stringVariable.search(findString) != -1;
console.log(containResult);
//using `split()`
var containResult = stringVariable.split(findString)[0] != stringVariable;
console.log(containResult);
// test getting the subset of value based on lastIndexOf
// example from: http://stackoverflow.com/questions/25356825/difference-between-string-indexof-and-string-lastindexof
var str = "var1/var2/var3";
var rest = str.substring(0, str.lastIndexOf("/") + 1); // output remaining
var last = str.substring(str.lastIndexOf("/") + 1, str.length); // output string
console.log(str);
console.log(rest);
console.log(last);
//**************************************************
// initial versions
//**************************************************
// initial prototype
// the problem here: toString() flattens the array into one string, so only
// the text up to the last suffix match survives -- effectively only the last
// word is processed, and the result is a single string instead of tokens
var tokens = ["episode", "of", "taste"];
var suffixes = ["e"];
for (i in suffixes) {
tokens = tokens.toString().substring(0, tokens.toString().lastIndexOf(suffixes[i]));
}
console.log(tokens);
// working version 1 - buggy: lastIndexOf can match a suffix in the middle of
// a token and truncates everything after it, so characters go missing
function token_stemming(tokens, suffixes) {
for (var i = 0; i < suffixes.length; i++) {
for (var j = 0; j < tokens.length; j++) {
if ( tokens[j].toString().lastIndexOf(suffixes[i]) !== -1) {
tokens[j] = tokens[j].toString().substring(0, tokens[j].toString().lastIndexOf(suffixes[i]));
} else
tokens[j] = tokens[j];
}
}
console.log(tokens.join('\n'));
}
//**************************************************
// final
// **************************************************
function token_stemming(tokens, suffixes) {
//console.log(tokens.length); // this is an object
//console.log(tokens);
//var stem = ''; // declaring stem out here gives a different (wrong) result: the value carries over between tokens
for (var j = 0; j < tokens.length; j++) {
var stem = ''; // place it here, inside the outer loop, so it resets to a fresh value for every token
for (var i = 0; i < suffixes.length; i++) {
if ( (tokens[j].substring(tokens[j].length - suffixes[i].length)) == suffixes[i] ) {
if (stem.length < suffixes[i].length) {
stem = suffixes[i];
}
}
//console.log(stem);
//console.log(tokens[j].length - suffixes[i].length);
//console.log(tokens[j].substring(0, tokens[j].length - suffixes[i].length));
//console.log(tokens[j].substring(tokens[j].length - suffixes[i].length));
}
if (stem !== '') {
//console.log(tokens[j].substring(0, tokens[j].length - stem.length));
var outStem = tokens[j].substring(0, tokens[j].length - stem.length);
console.log(outStem);
} else {
//console.log(tokens[j]);
var outOrig = tokens[j];
console.log(outOrig);
}
}
}
//var tokens = ["times","animated","season", "of", "taste"];
//var suffixes = ["es","e"];
//var tokens = ["episode", "of", "the", "third", "season", "of", "the", "animated", "comedy", "series"];
//var suffixes = ["e", "rd", "f", "dy", "es"];
var tokens = ["10", "times", "a", "year", "IEN", "Italia", "provides", "a", "digest", "of", "the", "latest", "products", "news", "and", "technologies", "available", "on", "the", "Italian", "market", "In", "2009", "nearly", "14", "000", "subscribers", "received", "IEN", "Italia", "mostly", "engineers", "and", "purchasing", "managers", "IEN", "Italia", "also", "publishes", "newsletters", "and", "updates", "its", "website", "with", "daily", "news", "about", "new", "products", "and", "services", "available", "to", "the", "Italian", "market"]
var suffixes = ["es", "a", "est", "le", "n", "e", "09", "rly", "ved", "lia", "rs", "ers", "N", "ia", "so", "s", "ters", "nd", "th", "ws", "w", "ts", "d"]
// runit!
token_stemming(tokens, suffixes);
//**************************************************
// execute
//**************************************************
//var tokens = ["episode", "of", "taste"];
//var suffixes = ["e"];
var tokens = ["episode", "of", "the", "third", "season", "of", "the", "animated", "comedy", "series"];
var suffixes = ["e", "rd", "f", "dy", "es"];
// runit!
token_stemming(tokens, suffixes);
//episod
//o
//th
//thi
//season <-- check this, only s
//o
//th
//animated <-- check this, animat
//come <-- check this, com
//seri
}}}
http://stackoverflow.com/questions/1789945/how-can-i-check-if-one-string-contains-another-substring
http://stackoverflow.com/questions/19445994/javascript-string-search-for-regex-starting-at-the-end-of-the-string
http://stackoverflow.com/questions/25356825/difference-between-string-indexof-and-string-lastindexof
!!!! solutions
{{{
// solution 1
function token_stemming(query, suffixes) {
var maxSuffixLength = 0;
var queryLength;
var suffixLength;
var token;
for (var i in query) {
for (var k in suffixes) {
if (query[i].indexOf(suffixes[k]) > -1) {
queryLength = query[i].length;
suffixLength = suffixes[k].length;
if (query[i].substr(queryLength - suffixLength, suffixLength) === suffixes[k]) {
if (suffixLength > maxSuffixLength) {
maxSuffixLength = suffixLength;
}
}
}
}
if (maxSuffixLength > 0) {
query[i] = query[i].substr(0, queryLength - maxSuffixLength);
}
console.log(query[i]);
maxSuffixLength = 0;
}
}
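// aside on solution 1 (and solution 3 below): for...in over an array
// iterates enumerable keys (as strings) and the spec does not guarantee
// order; a plain index loop is the safer idiom for arrays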
// solution 2
function token_stemming(query, suffixes) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
var i;
var j;
var reg;
var longestSuffix;
var longestSuffixIx;
var ret;
var tmp;
for (i = 0; i < query.length; i++) {
longestSuffix = 0;
longestSuffixIx = -1;
for (j = 0; j < suffixes.length; j++) {
reg = new RegExp(suffixes[j] + "$");
if (reg.test(query[i]) && suffixes[j].length > longestSuffix) {
//match and longest so far
longestSuffixIx = j;
longestSuffix = suffixes[j].length;
}
}
tmp = query[i];
ret = tmp.substring(0, tmp.length - longestSuffix);
console.log(ret);
}
}
// solution 3 - nice and readable
function token_stemming(query, suffixes) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
for (var token in query) {
var suffix = "";
for (var s in suffixes) {
//console.log(query[token].substring(query[token].length - suffixes[s].length) + ' == ' + suffixes[s]);
if (query[token].substring(query[token].length - suffixes[s].length) == suffixes[s]) {
if (suffix.length < suffixes[s].length) {
suffix = suffixes[s];
}
}
}
//console.log('suffix = ' + suffix);
if (suffix !== "") {
console.log(query[token].substring(0, query[token].length - suffix.length));
}
else {
console.log(query[token]);
}
}
}
// solution 4
function token_stemming(query, suffixes) {
var i,
j,
stemmed = "",
stem_length = 0;
for (i = 0; i < query.length; i++) {
stemmed = query[i];
for (j = 0; j < suffixes.length; j++) {
stem_length = query[i].length - suffixes[j].length;
if (query[i].indexOf(suffixes[j], stem_length) != -1) {
if (stem_length < stemmed.length) {
stemmed = query[i].substring(0, stem_length);
}
}
}
console.log(stemmed);
}
}
// solution 5
function token_stemming(query, suffixes) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
function cmp(a, b){
return b.length - a.length;
}
suffixes.sort(cmp);
for (var i = 0; i < query.length; i++){
var result = query[i];
var j = 0;
var index = 0;
while (j < suffixes.length){
index = result.lastIndexOf(suffixes[j]);
if ((index == (result.length - suffixes[j].length)) && (index != -1)){
result = result.substring(0, index);
break;
}
else
j++;
}
console.log(result);
}
}
// solution 6 - interesting
function token_stemming(query, suffixes) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
suffixes.sort(function(a, b){
return b.length - a.length;
})
for(var i=0;i<query.length;++i){
var ok=false;
for(var j=0;j<suffixes.length;++j){
if(endsWith(query[i],suffixes[j])){
console.log(query[i].substring(0,query[i].length-suffixes[j].length));
ok=true;
break;
}
}
if(ok==false){
console.log(query[i]);
}
}
}
function endsWith(str, suffix) {
return str.indexOf(suffix, str.length - suffix.length) !== -1;
}
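// aside: ES6 added a native String.prototype.endsWith, so on newer runtimes
// this helper is unnecessary: "friendly".endsWith("ly") === true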
// solution 7 - interesting
function endsWith(str, suffix) {
return str.slice(-suffix.length) === suffix;
}
function trimLongestSuffix(str, suffixes) {
// Note: this assumes suffixes is sorted from shortest to longest
var i = suffixes.length;
while (i--) {
if (endsWith(str, suffixes[i])) {
return str.slice(0, -suffixes[i].length);
}
}
// Return string as-is if no suffix matched
return str;
}
function sortShortestToLongest(a, b) {
return a.length > b.length ? 1 : -1;
}
function token_stemming(tokens, suffixes) {
var len = tokens.length,
i;
// Sort suffixes shortest-to-longest to allow
// short-circuiting during processing
suffixes.sort(sortShortestToLongest);
for (i = 0; i < len; i++) {
console.log(trimLongestSuffix(tokens[i], suffixes));
}
}
// solution 8
function token_stemming(tokens, suffixes) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
tokens.forEach(function(token) {
var stemmed = token;
suffixes.forEach(function(suffix) {
if (token.substr(-suffix.length) == suffix && token.length-suffix.length < stemmed.length)
stemmed = token.substr(0, token.length-suffix.length);
});
console.log(stemmed);
});
}
// solution 9 - interesting
function token_stemming(tokens, suffixes) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
for(var i=0; i<tokens.length; i++)
{
// for every word in the query
// looking for suffix, so let's reverse the word
var reversedToken = tokens[i].split("").reverse().join("");
// number of characters to strip from the end; starts at 0 (strip nothing)
var substrTo = 0;
var longestSuffix = 0;
for(var j=0; j<suffixes.length; j++)
{
// for every suffix we need to compare
var reversedSuffix = suffixes[j].split("").reverse().join("");
// find the index at which there is a substring of the suffix
if(reversedToken.indexOf(reversedSuffix) === 0)
{
// special case
if(longestSuffix < reversedSuffix.length)
{
//contains a longest suffix
longestSuffix = reversedSuffix.length;
substrTo = reversedToken.indexOf(reversedSuffix) + reversedSuffix.length;
}
}
}
// reverse back
console.log(tokens[i].substring(0, tokens[i].length-substrTo));
}
}
// solution 10 - regex example
function token_stemming(tokens, suffixes) {
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
var r = new RegExp("("+suffixes.join("|")+")$","g");
for(var i=0; i<tokens.length; i++) {
console.log(tokens[i].replace(r,""));
}
}
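// caveats for solution 10 (and the RegExp in solution 2): suffixes are
// interpolated into the pattern unescaped, so regex metacharacters in a
// suffix would break it; the longest suffix still wins here because, with
// the $ anchor, the longest match starts earliest in the string and regexes
// return the leftmost match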
// runit!
var tokens = ["episode", "of", "the", "third", "season", "of", "the", "animated", "comedy", "series"];
var suffixes = ["e", "rd", "f", "dy", "es"];
token_stemming(tokens, suffixes);
//episod
//o
//th
//thi
//season <-- check this, only s
//o
//th
//animated <-- check this, animat
//come <-- check this, com
//seri
}}}
!!! done @ Basic search query ### (medium) time to retrieve the web pages that match a user query
https://www.talentbuddy.co/challenge/51846c184af0110af3822c3151846c184af0110af3822c35
<<<
Basic search query
It's time to retrieve the web pages that match a user query.
Given a query as a list of tokens and a list of strings representing the content of each web page
Your task is to
write a function that prints to the standard output (stdout) the number of web pages that contain all the given tokens in the same order.
Note that your function will receive the following arguments:
query
which is an array of strings giving the tokens in the user query
pages
which is an array of strings giving the content of each web page
Data constraints
the length of the query array will not exceed 10
the length of the pages array will not exceed 1000
the length of any web page content will not exceed 1000
all string comparisons are case-sensitive (i.e: Cool != cool)
Efficiency constraints
your function is expected to print the requested result and return in less than 2 seconds
<<<
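As a baseline before the research notes below, a minimal sketch of the core idea (plain ES5, using the challenge's `search_query` entry point): look up each query token with `indexOf`, advancing the search offset past every match so the tokens must occur in order.
{{{
// minimal sketch: count pages that contain all query tokens in order
function search_query(query, pages) {
    var count = 0;
    for (var i = 0; i < pages.length; i++) {
        var offset = 0;
        var matched = true;
        for (var j = 0; j < query.length; j++) {
            var pos = pages[i].indexOf(query[j], offset);
            if (pos === -1) { matched = false; break; }
            offset = pos + query[j].length; // next token must start after this one
        }
        if (matched) count++;
    }
    console.log(count);
}
}}}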
{{{
//**************************************************
// research
//**************************************************
var x = 'hello';
var x = x.indexOf('e');
console.log(x);
"C:\Program Files (x86)\JetBrains\WebStorm 10.0.4\bin\runnerw.exe" "C:\Program Files (x86)\nodejs\node.exe" sandbox3.js
1
var anyString = "Brave new world";
var a = anyString.indexOf("w"); // result = 8
var b = anyString.lastIndexOf("w"); // result 10
console.log(a);
console.log(b);
// test
var query = ["the", "new", "store"]
var pages = ["the new store", "the new store"]
console.log(pages[1]);
console.log(query[1]);
if (pages[0].indexOf(query[1]) !== -1) {
var testThis = pages[0].indexOf(query[1]);
console.log(testThis);
} else {
console.log('not found');
}
// slice test case getting the index and length
var fruits = ["Apple", "Banana"];
var query = 'Apple';
var fruitsIndex = fruits.indexOf(query);
console.log(fruits);
console.log(fruitsIndex);
console.log(query.length);
console.log(fruits[0].slice(fruitsIndex,query.length));
//**************************************************
// initial versions
//**************************************************
// 1st version
function search_query(query, pages) {
//console.log(query);
//console.log(pages);
// set final count counter to zero
var finalCount = 0;
for (var i = 0; i < pages.length; i++) { // loop
// set pagesCount to zero
var pagesCount = 0;
for (var j = 0; j < query.length; j++) { // loop
// if query is in pages increment
if (pages[i].indexOf(query[j]) !== -1) {
// increment pagesCount
pagesCount++
} else {
// else break and continue to loop
break;
}
}
// if pages count equal query length then increment final counter
if (query.length == pagesCount) {
finalCount++
}
}
console.log(finalCount); // output final counter
}
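// limitation of this 1st version: indexOf always scans from position 0, so
// it only checks that every token is present somewhere, not that the tokens
// appear in order; the final version below trims the working set after each
// match to enforce ordering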
// test
var query = ["the", "new", "store"]
//var pages = ["the badass store", "the new store", " is the new store"]
var pages = ["the store", "the new store"]
//**************************************************
// final
//**************************************************
function search_query(query, pages) {
//console.log(query);
//console.log(pages);
// set final count counter to zero
var finalCount = 0;
for (var i = 0; i < pages.length; i++) { // loop
// set pagesCount to zero
var pagesCount = 0;
for (var j = 0; j < query.length; j++) { // loop
// if query is in pages increment
if (pages[i].indexOf(query[j]) !== -1) {
//console.log(pages[i] + ' -- ' + query[j] + ' -- increment pagesCount'); // instrument
//console.log(pagesCount); // instrument
// increment pagesCount
pagesCount++
// trim the working set
pages[i] = pages[i].slice(pages[i].indexOf(query[j]), pages[i].length);
} else {
//console.log(pages[i] + ' -- ' + query[j] + ' -- not found'); // instrument
// else break and continue to loop
break;
}
}
// if pages count equal query length then increment final counter
if (query.length == pagesCount) {
//console.log(finalCount + ' !!!!!!!!!this is the final count!!!!!!!!!'); // instrument
finalCount++
}
}
console.log(finalCount); // output final counter
}
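// note: indexOf does substring matching, so a page like "the newstore"
// still satisfies ["the", "new", "store"]; the commented test data below
// exercises exactly that case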
// test
//var query = ["the", "new", "store"]
//var pages = ["the newstore", "the new store"]
// correct answer is 4
//var query = ["the", "new", "store"]
//var pages = ["the new store is in san francisco", "the boy enters a new and cool store", "this boy enters a new store", "The new store is in the city", "the newstore is a brand", "there is newton in the store"]
// correct answer is 7
//var query = ["fun"]
//var pages = ["e hecim hnihoped smee e tdttoosdedo raaiefsonfrya ti ses", "yeeesdeeosfrre otahtna sic ps hmeded tdt foo o si aimnehi", "neeooaets thaesre s oefeh y ansiiftdit doremdoecdpsimh", "edrf a oooie sedsfpi eadh aitnhseoestidemh nryometcte s", "hyetn d sfhdeiof mr deeeedaeeioc s aap soi iorohttssmnt e", "adatoo istisfsdde eietroseedome a ynchthh mfreep eiosn ", "episode of the third season of the animated comedy series fun", "e edohemfetcodi tnsron e t adafthosyhmrdiseipe eoi ssea", "episode of the third season of the animated fun comedy series", "eesoe ost ooeedetarm s ypd drdtmf eaisn n eifohstehcihia", "episode fun of the third season of the animated comedy series", "episode of the third season of the animated comedy series fun", "episode of the fun third season of the animated comedy series", "soeesyffr tn otaoai e redmtam sdiocepo h heteddhsieinse ", "ey mneh osedrodtiod patt nehd chemoa s foest efriieissae", "temaed ds essesrroo f thytfedoenhsietehmia ipcon ieoad ", "episode of fun the third season of the animated comedy series", "episode of fun the third season of the animated comedy series", "hroe ca eemeiri e itoho ddomoe fs petseisadntfe assnhydt", "s d nteaesmdhendhieemceas o orf eieasitsoyietdphooft r "]
// correct answer is 35
var query = ["funny", "joke"];
var pages = ["10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about joke new products and services available to funny the Italian market.", "dsuitNes ealn oasv itmo0e e.mayeltIdtagcetiyrvu rt,0rateenreinE iiewusuaw0vi prec fpe sni0aaeoesIe aoyecIll iiltoelsaarettI Naoaatl lod dhhIc lvaaled nIrathe g1e ee ogs ib,rI0aa9t ,s tr siintis n noIss ent a bE roig swnraa ye doecl su evrimsi lnsh0 ewtewd c.harn,Iatbcdmetawn datnsbatsknekitp upbrasNhalt tpiin dteh sdEsl sba r1asp4al ln ia2sea g.mtesulebn", "EeppeuetIeu a dii dgtel bdt0bawh I apweps,ts0led0tch,eraltp l s la9lachnnhncooer fn.a nrt1t s alIitalaea a ra oeIitiot sdneI m hli rl tid eyselI nson d nvas ,vcatydsese paekh a i 1rilbenu .l ibE ryariao,buledreitsoelto4tviawseta imisdu2 ntelrw ean ks tca gleyIguN rese ioiisgeawItnmsscmn oensm.t sttb cEisavsaats oe0anaaatngnI NNhv r0d0ueaireweebs rri satase ", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available joke on the Italian market. In funny 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services funny available to the joke Italian market.", "nivkelris aoi0lIeaeisah nr seoewutinnc0 lbieae igsaIa0ibe,ElsgerIclyalm e1mNv cho di.oldbdepsui vt aaa tss newctgdmlye p bnla wsse .rllut 2etnI0a acecsynslirea,settsean a t ps0 gsdt tdiehrwen 4do1gul naI beseIwoErti si i tf raEi Na.t tbslap uo wtuh IeahItset0, rnaiadtanapd tosne tlnriroiaese shei snea9dvtaitnmaabv asrNptlaaautkc,r osdoIneee rymheletr", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily funny news about new products and joke services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and funny services joke available to the Italian market.", "10 joke times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, funny nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. 
IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "dsssbomi naalseel bob.chnIn0 teitt anteuoirtm earenaeapisi tsrsrNes i iu lds nseetdterrt2 4ayaNtseI1d9th 1ai etr a iuwna0ipniktvvce pl lld Emttehy bo md eisg,refu t g.eia u tanlwcrEnre n lra ievamee0n,Ioatl tdago.ke aane lrw Nl es hcssseabac listaI,ocw vgyn pIteastrooeawals enpnsaIlbesae0tsr dnogiipy aIls tdwI0hueahiie,estI edia l shEcvbi stnod la0aruaa", "10 times a year, IEN Italia, provides a funny digest of joke the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and funny technologies joke available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "optcntwiiaeapvlih Na bhb0 eeEIe e bnanIdamt terin0 sln tteI,sNidatsl dieblir etloelw at aeca1n1e roeie deusrroifcnnl snws, twoop usmaEil ss0ni ,rom sgd.aesen rsul.Eaeertk shanet sslreswpIrlehIa ysaw 4tiuo a sea.l evamsitdcta n0laatogeg rde gI ciittrcacdst byv tda0tslhnnuvIdsi a rlu,e mIe itnd si a0 a9 akNeeaoyoes trentglavlbnthuh spaasaeebea2iiiIayrp t", "nEeo .l0mbeca cIagneh hlvlrbvrev mlcv rwi,,ikslanelistisiINasnn 9uI eta n tle cnaoaedarsv 4pirnrao0 gmseu.trtsdnrswart ra NoInad tdsliteubEnsln atN adii2ewsfartIppeleIa ud eldhoetgte e, hlassra so0sutt eeyo0a b iiabst snab sipncaa odhu ii eatitc oteur m.ai genyhiy a1sa to laI ye ea wtigt tatse nep m,1dase swlpIa eltdiakeeeibrEswahns sso0ti0dl eIclteener", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with funny daily news about new products and services joke available to the Italian market.", " h. 
rgga1I sttnsliocpseey lltailaeawdrn betone rbas 0us0s faa l ln0 de eu d a bbravd eisestondcad0mNbuasmc.oeelElieoat o1edsiylhsek ee,nluaepmiosi n riat sgEnmauiIc nupie a pgss Int nIbeattsgotInrsl esamsthape4rat ii rte,raaa0ahstwI9 ileiIy s dewl rkerhaaa irN2esi,ahbv e c e ny ,tttitwdvoecvrlcidneattes n tasta hwlep e.svu0leIidtn tniIrN aa antwsorleio E", ",n0ts td ran srEioiir lrt Iabb siahlcvoam satplsIu i tdia0rlnh g4 oint oeleroersnI.behn s eu a ae0srl wigasat recnIEvw c etoah.ptIeelis aewa, taua s sn hitanwpeuw mtpidtledal i1u rN e0u2t acI otaod tirsoEoattndyilli dseaobetmeyleaIc ,db p9teaete esysdeN eein Iretwk snl a eksfsmve ped e,en 0reu vtshcss lav rbninbaeriaNl1eim idacsasaetg In0sa n g.glaystant ilha", "een naasdtc e0au Il a sl m oiacwbe.gstanah vi tIca 0epsiieeo lineiai 0e,ro Eht roe4 htt E esunuhl trsi Iraaebe veo ulpnwstadt.tams eaory dsvbarlasle olb iurmlrlskngp erssaytidadotaI ,ba i g sintrttah cn ctfvhid alerbwpytsdon l aleIpe ,c,seos NeE daseIIailuo1I eneNmateIentaemarnactt ks ywdipana rnnwgtaidansnte9 rN0 svu wlae segt 0elitis1 2t bsi0 rdlshiseee.i", "IhlimscaI saehae,idroeep stpk N treeeae0easw das h rnms saie neoese2 oda 0aeplnle vty 0omiltd nawtru ge,eso t vtrntatutaa pEeresrtansate abs uly1 ehdgs in b0tlecIteapolIwescdbttilc we tIuiva4nfadEaIouah0ni.dblwsiesceI astaaedrtleivt sut rN 9 ii nI 1tia i sepr l gn,imbnroinNbnd ,las gko .iirssionnio0lh rnyia em uIa vatsaryhrteaaeEclwasgd le tt.nb ceslals", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available joke to funny the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers funny received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to joke the Italian market.", "10 times a year, IEN Italia, provides a digest of the funny latest products news and technologies available on the Italian market. joke In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "meanvrssdttll4ets 0ioalealasIer eerrmigeaa N a naneawot eni,swc a ce at d setonIort,vg 1e0 av tssilNesldnithdE.mo.i, ueeleNnn ea eEabe r,ihIseic0si1nget wlednppai a2ygl sodItralbnrIaitobdm eu t tirwIih lt0.ucpslle0vh u ygnudnpdfttwsitInas a sest rtst b0 epas siatisrn lra oal pyaeriid ntuoaaceEs ki hcie bata Ieh tah sdiav wyeaskabeeralnoubreclmtIos e9sn", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market. funny joke", "10 times a year, IEN Italia, provides a digest of the funny latest products news and technologies available joke on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. 
IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "wiIsl lruIlusenmal a En rInl0soieie r IoIie u tgd ,N,r ratber ea1deb c9aasm divN e Ias lswhtan swrsr o e sbckenaittolpweti4inan dsslt yspoyleod a fpt i0lla,katai dar.agesIg ea0inehr.risobas eena Imvcsar0t iaot tI ltsvgydaear eEipNtsm a sreaniaa tw ntc stne0lieotc eitiun eedsieciluohoy auh2wnt elaee e s leneis,b aEhate0ubvdpntesrhndgtsshl1anaeptcbm va.dtt", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on funny the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new joke products and services available to the Italian market.", "iaspec ghsnIwbacu getsteIeuul1screbm, ev ka.gtidiNrye naechtloerrbtahh rlb scdsy ia0 e isenotrI ahr sn domhp.emelauisneame sgaksdvetec rlanatt,eyii n, ieudwellIva l.a odsi riaieas pN,wtwIo0ls oeaIl wo u Ef0espnntasstt iias1trarlt e este2esansd celea privtt hnnreaEa4lEd0IdtNntao elt ib 9ae eolsaia nt dbi0tvapbitsm alat Is 0 nwigrndnaai uoIoatrs yelnes", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the funny Italian joke market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In funny 2009, nearly 14 000 subscribers received IEN joke Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available joke on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers funny and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers joke and purchasing managers. IEN Italia also publishes newsletters and updates its website funny with daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian joke market. In 2009, nearly 14 000 subscribers received IEN funny Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. 
IEN Italia also publishes newsletters and updates its website with daily news about funny new products and joke services available to the Italian market.", "scdenI eaiataIilpIissetuealhse e srn dei0aa ivnne ivavEaee oybets l a.icbm illapsyneae enrsvb rtEm t0ehonaecolaw0nIe trtneo s I. grit,NbtueaaEn4esg iaa u sNutrole gws I 01tahene lse rttpneyace rNndtdaaewaul uobt2f wot pord m iaiasnh, tnsvl tsta0ideatrsI,dscgtoast 0eetaan nawtlrtsie1 dr.i driuie k,csrwbatlyahosI hco kl Ili ig ledn 9aip bll ssad sepismmher", "10 times a year, IEN Italia, provides a digest of the latest funny products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with joke daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to funny the Italian joke market.", " btgi cetlaa ate tt.tea hda no lusd iwo ,tsedautkn wasa0sumstey0Ia nemnE9srhccp,aItpes IrraiubaInslvl mrhpnot t ellrbosea enetvblIlr iidennb lt ru rer1Io nivt im1et b0liwsiwthtecewg iI Isasnt pag lEtnalEehea ed,i usel 0aaeI blpo .snishmt asetnsN2otei y eieldirv seerre.no h0siaeifdope vwd Nk useette drla iey4asc 0gaasiaa eiassa anN ots,sondgac aanylrarcd i", "n sgalan9pIn eleatsaleotaeiartbse p0 ntaig1t teymn e te I ala1en,a mlnc e rbr l ehc etEowIerstsntbeilossNe esub0oeavmtbcosi lttaatdns0iagl ksset p E 2yIesd ovneen0N,ehaadwlbhiiie .pse4 sNd0a vi lwasaniihry ut u.raispawcolltg dviu tesud oIpu ddt nnEtkiosel aeecnn hsae,w rh diIf0rr,Irele rli ae adeacsdyitiaeis gc rbtooi svstta r maeIirautaIa m ln.wanast shr", "10 times a year, IEN Italia, provides a digest of the latest joke products news and technologies available on the funny Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "w e4wka.nr iuttapnNlskIideaoru It,uehwIdhota1 ebr ce,s,onrt srIedea aen gntabas eli1 dneIytitaonles nysg sgan0de tr boavaEsc m eus lnph .lla an l lsor das gtstddnniassoaioir eeaibemipc sefnt9se.waaaemNt evIsi t0 0t Eurnpriiavs2Ivbletot ataNs leeIe eIrt eii estaha sl isacn ceciEbi h00 hdsyrrlieh laeiiaaoa amoeu egwl dlwtbni stetevn, myprtapea tus0sc d tllr ", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates funny its joke website with daily news about new products and services available to the Italian market.", "wo ddsrpreaethi eaeItee4l eytswg yltitktyv0dc 1e r d asa hsretlpil s.nenv na.n.ubol0osbshtiIanir ao aue0 senrlstypsducel di eo0er9ho w h eb1ea tli IaedIetalhmtnE i ibasalwa as vsent im isbagm bsdst0ee leandbdtnte trltiEttn e dwaustma eainsrimacNneft Ir lcei peonselnlha aonl ukp tsvorist,raain ,rl uea, anvpIerNIa N ,it0saew 2IieaciE tsrcaosaaiocgsIuse gag", " taieo i or s esEsmtaa tpns neerawnaeltetIitweIa0u0 .d. 
icps aIae hgg e2aheuahvaN4lIasatev0n0b augelrtuolaai rrefnti ieakss m atrn1tgr ncilsoienh dl slectnc ac oa Nia od ree viblohaat,s eansitaoaicieiltr cvss ellrimnmne odetsldl bans ha aEtg N,e.streeeat,bldIEd me tdwsirsu 0yeeedusap wessyptb ,rsybnpiIyntI nItt l ld ei vko 0w deas s wt1hb ta onrre9a npl Isiiu", "e uauerbitaestllndn ewatvt risvasl i wwnbIl9Npl r e ,leetnh4 aahesistt0 osesvs sEaorobtil n c nmclobb wrIssimoess nw.esteaeamn,Erlt teslaru i 0iiaii eeeedae losIytusalra eIanss w l tNnngsroNEletyhih iaItupdnteIlmpa tbigethn atmpaeaa ilparardek a1a sta uol rgfncbo1 Ieedcnkc cho nopi neaesis 0rattada.ec 0riadsieyt,tegIas e ddIi0 r2 u g.tnthdsy va, edaie 0v ", "mtttn. ardsi cmIia ovd gepnsn.sEirkor diaul, sir se ts0e tdnenn stacew aaiae0 enE ste1nlrssi4 oo Il o npnrbiis eety Ia pgritlr gtwrrdielrvlsmen rh ee10treilienwaI tyuian s Ileias,Icnoapyeteme attNheseluahs0,sdctoesba,trssdnal a ee eoaloNasb utecdnel0c aaataaI90ic vhshil 2sveaaam lwgsdupbeekelniwitwt rb.tba bpt letaaoIohganuI ie aEtle daasuiyNsvd tfh", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In funny 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website joke with daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides a funny digest of the latest products news and technologies available on the Italian market. In 2009, joke nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly funny engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new joke products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available funny to the Italian joke market.", "10 times a year, IEN Italia, provides a digest of the latest products news funny and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. 
IEN Italia also publishes newsletters and updates its website with daily news about new products and services available joke to the Italian market.", " tn0hb eii1eue nedshIa ghvI ilra s arI,0 rcrlehes0 bset,deashilnol wne0wc tInav cadgttspweansi a ynit oeenerrigdtiht aan moetrt yt psun llrksualuvea oI t semebateEe akl ss.oa srrusshlc 4aietugInfNsaNsneieaeoE d atpIe,.eI, dnvmplg01Ndcdsoitroianvwpei iadr o2.E ne essnssela aa ct atls btid0nlamwtiiwsia ecyastapt aeoasl ayi a u dte bieo it9nrlleab r bt lItmre", "g0eeedgy slrkhbe iirse cIeetriEr b edbt4ialeawta 0.u trpis s iIusc n seeIsrihepeoiigbno,op taeawiaes,e e,den wll sIntl1npe1 menos tsl ocNmreahv aara0buaeiemesk su I a ada nwl r bhiles2ptgutstgsat0 EsdlldsohIlat.ul anyattthssmnnvrno oae alt i a hsnisttcenenfsddtyys r ureair oact diil ep ndaam I0toIevn9esivrl ie lnatvEtae t, waN.eal isaaocIbiradatcN ta 0wn", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the joke Italian market. funny", ",nrlua ya ei kreaeae i mlitiwwhew 4saeto apele gn vneoeiptnt gss,lhhi 0lantesE er ebd nmE tvIIl t tdaesNec se,ilris.aaseeutbpsrnob1esrb aaiysmhou0bt ruhraniy0dtre m cysalc n2dl rieauaatanNcf 1ctlI tai clv h .anneia o0gaa e 09i ltaslageteIwkvsal stIodio tei d iten sals wwap Eodrtvmicn ape t e edtsranbrtiIdtnIoosruoeselnds.tlaesIsnrsui,aadseIheg ip tN sb a 0", "chrkiee rw i0 n 1aot cn 0asoIy t bn e0 9NI aa o. aaas,us is oehoatp gv n ss vebtaetilpse0lt.sci4a ls dtawntm e.ebh grrNddaaa tobtfsi 1atedrlitenrsIlohrniEsnEinareelmnryIeeacte e,relhmidoieaoebemsse w n ges nmscuav rral tedg dsn,tinsalapive lia aab0I eEuslsd l a rshNidssr bt, 2eaedi hkiy0pdpateo ltIa pusan ela aatyw tieiI ne Iet Iellvnwtittuorlgastestinw cucu", "10 times a year, IEN funny Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market. joke", "10 times joke a year, IEN Italia, provides a digest of the latest products news and technologies available on funny the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", ",0sea ospoidsdt0aesaiur pwi iriv ara d n9sevhNauisa ariaas1n roI hpa tciyaeese Nsm rIralinsaeksI ted a a40eentl tbEI,l grdtNeyl tpseh1ieles akf nnaositaniswbgatbtadpers daeeeclutsei ln esnmoene l, astlhbbcmaiensIwgllu tsvrv catgtyueiwcmaoab0 Eiidnt.il2 o Ie lv y mIweensc p w .taoll leEioieat. 0r esd gctdsh nu nhul aheebrener aIs troostitantetda0atrI,t n ", "srkc aglbeiItn iyres0blildnaIrse e wlc ao2IneIeddnitm 9 rrlee sldulms a l eat nye b iatrac oiaba0fo.rtesEEianaaetsnstn tIiIe0tstri srareutcp0nmwwhmteailas1ghpalk,oha yitvtsotlog EsIt,. 
leh r4asa csu si uaeo dsa iueIteataitsdw s ton ts iati rpnbee.biaha ce aucpp teaailrw peeow,b d te lIe,ehe nvmse a ddanNynie dtalnei se 0vnogleNsvslas Nthveunn rogaisd10r ", "ea i bohacrtc Esy IiauiNnuat d 0pe ehsotieacIspo innaakfsrslessEs se s r ayderrnaasenawls0tlasthblmns aheavlu ,co t rE aloti ewagsastaIlggi adac tNmieb tpldie neentetasmi rl ,shperbo i o2weusebty.0v tr ln NIdhosetitear .iIiat rhr si0 bdecdmtec aesaetl 1waiw 0vnm uav ytsivI lieIased dtnt.tron aeItdlaieaId iensnbn, tpegnpau eeluotg wlar,nkel9l oi ee r 14 sn0a", " tlvnsEnoag ah b I bpchkh idm4tap hoeaos cIerlueaIeey olnlntiesretcou ku 2rhnrerdinwttin tetniavv tpa tmEsur sbubc.es drnl INaaIse sr aa.uslers e.insd irmasaara aa tlynr sl,sb ch,tttaon1t0Eetreeeis iws tIasvang 0etitpunasevdetwa l 1 0ws 00pelaordaNitsletby ds re g aaa9w lnlmi c e imiInitea0oescealagepnoessgIeaty b ,i h oInNdoei tswe leiif ,deeidail d alsta", "lll tnddnrav esuIe a d0a rkrieyIb1r0nsnifdhtNtaasas bImnr a grvnswrr slEeanta hi Iaasepep ee2esla amm nrick,or0oasnmaeaaeyaiow Eeirieun t0a Nlt1le nubneiIeieaale.dote atnsiu spasywnitt aaavitct dnci tthiIsr. cnl gess ctslvdod dsIoewtbv aaeuaw adIo ia,dnttr0phe9cIen lr bail os0ioutlet sileyclE smtaauts t N ret hiee sigeps l ,h seb,wghlttgiee b 4eoosp e. s", " awal h ageno Nsanst.nrpngo0eaar Nm aaecb0c s tytphlrdebtei ciene ealloes m.aEbuekoit cni1shiis t 0edu0t uaev,a, i 1e etlc0gyptir4 thpsartoaelvdatads s wgstt tbn ersrdenn mseka olIeeeIu ulnieitby aIs ern rldi rnrua.to al s2d oiitssnal Nssfpiru eoinIl ab 0awe wa9astdw wta nlnl tamodi eie echIadaemst vpaen oa IsIh nelvsEel t Ieaia dgy I btrtiasclr,hse,s ietvEirs", "10 times a year, IEN Italia, provides a digest of the latest products news and joke technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, funny mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "dit ctpt iIeatnnt de buleu2aia viursIei0ds giar Eaera onh dtelaesossvl a aeneast aehyo ta , i1te v li b4.Ifnt saw0eis.pcrIot ltnrmolpratw .ulnedrImoahiva cae,a lgd 1itNba unh soaneleEiv nd celm ssgrer lIbpse,alsusdw0teoNekemibiealcserrene aa,N eeaeatliaEetr aiwt tss0sed bpcwtnm 9l iwynrsyk tIayag diag isIscttsh lusn ssa sntt oelnadtho0baionrp nre 0 i eeI h ", "i0scueii cagvnscdnIdinatoc de ue sianltuE onontnycegw u9t esa te ty1t vds anm urhptailrbaheai01 ealiIarta bbrh tualsl tr liairlsweodehaad ,NeInradk,mcctoEr e0oawoa0tnesessdteiEae s tt, nsppoyrbIeisI ,i lw ap slnlaaressavea ly2 nmeatb l4rgotda sa l nt Nesrpka.na m.eetbapidreesli gIv0at ft ebevtdiI hse0I ioina seatiNleen Ir e nh i toislss ahg ewltes. rmuswe", "10 times a year, IEN Italia, provides funny a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. 
IEN Italia also publishes newsletters and updates its website with daily news about new products and services joke available to the Italian market.", "tataai9tl ied0e.o ar0t theatglIvovctplnceoetn1doal 0gt d darpecsN edar kesomueria mbhvi gtmrw,sh og aI1yani etan eeisesI,ienwusataaI rawtesitnI ,nnouaIbmiIyle l ea aulelIt heame nestae en utnoandi tNtslnlus.cc rdatyie rifs0liabk r Es btpstachne uissrpNs ese iswre lasopsarlelviaait , d es42h e snvgs h dab l0eoInbiyrsei0saw awtdnet cl E naa.sbtlEdoi rprie", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers funny and purchasing managers. IEN Italia joke also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "oes tpts tiridmuel alE tt0,sostdeg dwIres acde eunainc tr gs nsnaedtimleeei tup ivcseNsttav9l diwt1ddkbasp I te4l slmt o fmsanit ,e eor rr rrt ees aNo aewnt noianeeasaiomoin.a,ii tlrot e hebIllae2.rudliade ci Ihnasie0e0oE0t1secyab yapyi b sleeeaeEleyn urnahsa0viIuatss ivlerrcwlhstn rw Igses hNavl a t bn gncwa i eieraaaa.btoI snknbslhaapn as0hu ttg apIld,aI", "hitreeul,I eteIabrI . Nte dohwet e ew0Eee2t gs,kee emimvctaIeyIe0bcrl,ue,de0ledrac sba sotsa slvn rr sueltmihsttduItips elis t agmbcsdlw lanuedybiatc 1eiavtigngleiwi uihsant antl0n1noyevalacslnah tpp r er sersatc odb b Io na ssitofsne dt.golsyoasiv imanarekItre ne NanIlteahpa si eo0nN daatp wi aaa saeo4staslwi a asidnEantr9ipr d l . hsiaa rnrtiu sEen0e lo", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also funny publishes newsletters and updates its website with daily news about new joke products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing joke managers. funny IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, funny mostly engineers and purchasing managers. 
IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the joke Italian market.", " o c tl ih al0aadsabtussl tpe l si.i ic0Ipiae ceem a rrshs trriars kwn oelEcuagsei t0h ilk td.pnnttN Ieoeattbsn i tldaeob tgl0rgdoer oenas snIwiaau,b eo n0auiet e h Nt0tlae4p1amslw9h st advIsaswu,ps tn Eld Nebe aaIde ianInsaaaeettoar i caer dvsetie lesscbay2dygs s.sfIIaearwp arbadrcnanma ta1hlgmvrvEetsy vtie ,ihleintiyi t nnwrelldeoeoertnneun osi,elIm euis", "nasinv 4t onrye daiugwa p resp lsem ch i raetwN.oIdoa taaI a n taat usbt iwrreE mlnoy nedon iIs ee0ds,rEdsorc er0 sn lenatsswmuoe.enef bs uatls iNaegtiitluse dlv1rNcs,akhealpdec eedtaacasaeIloghvIiipIsrrbe nrbawceg0lti taipdm nlesIlise apctEa at yn tei02dnh0ldtlsmikw.atn ha tssoaaoile bsy t,aaltsa eg nb niute,nt 9shieeases1 Ite rvhtviarretIbu eoilial e0", "l rdcI vsets aes, ee ctw tEvedabsesnnweanashtrml rNseieoreissp otgtIyoErwNl a0 dnn0ar iInshaph,we ati.nsbbyauularaomidmEdgeeoIg attsnestml 1 lioIud ss 2,g gceao.dd.rh0rieeesar tu atvepuo i1nee , iclaal Nnla ak0tul sltiaotaysaneeoe eIt uiwa tplawresi 0a brrt a sdtd l kia aat s nl ncIblfeseaitnItlrir ohi0nv9 etssdipaetn iel aai hsimyIbt ect hvs nb4aeincpee", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. funny In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its joke website with daily news about new products and services available to the Italian market.", "tp invdtwaclse,esbo abst purmoe rytftoiI.csinn nI Eacraas dtut N tn bnNsI e,lertaelwoIaeh hc. Iren.teeiuishe a0e yo seaudsesgadteuamk iwi,astmylabtr abcd eecclada ttbikaae0 peoggiea lsae eist gsyir alitt4nhe 0a2attiilvonemEalnets sIsthavpi debilsirnwsw0iaa9 wrnEr d arv enl 1 tehreeeap0sdsrr ssemlessalin oe ua0vpig dI l1n thaooln,ultI d s rlaan NetoniI", "snlecbdI eeeu0fsroltt s t lohtte.EbipIcsIEhhos vui aeistatwkniw enl rdeNttsieyw nut4ghetcr apaldi s.n rsl evaasan ryed0cs,e eshhbetiiiao atsia rllsl e s 9 aeig0 vwsp sIta soit olbeat1ai ree1etpali baaatle0 brna,nIe iroIgmgtee rrmrtnaninasd tIoks2t0Nu u l0tld wto.iNwseyisamnatedE,aarei e,s dIolynIa eusgaacaml e rida puvdi esphn va odabmtns lreeaneacnnc", "10 times a year, IEN Italia, provides a digest of the latest products funny news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received joke IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "asbeeemils n.nesaIchfae 1 ah ao rhektIo alrraicbenN aau,lle mapnein alabrttsaip tyd sndsvlema o,elgdec1sn n0 le eal0 nse u ast0alaidpmaa irtrn u2sg notNitieu ltw oe ei eto4b0ed atdcgn lr wn elo aI pitrv yr t.It niisrgact Iteshwhu eow vtebapasllm saodlhte aaorda distiiyastvsesrs,cne9lceo eu r. I p NtI 0tiwIdert sseiis0i atsdgsIer iet eyEnt nEa bhnEw,skavieaubs", "td0t e 2Ilatel Noutsgaasuaeicyudtiitsrnaa ss i0swsthsIan vrEemgit ie incsmamrkdaeacmcwnpii0setaltNuhro etyeIpieI etn,talar us , gs. 
elntdo,bm aceaIp lai thvglahi nnw e0ha vn.eeelaets tbde0a rdri i duod vIee taosE ci9enl,erae yawnst1 aoi serpss Iknrstsaldvras bcaen4 etltf 0sbelpy oeb tlp a adaoi irenEatsIrils lewe1se l rad lgeeINw baibt nano snot.nho h iur", "10 times a joke year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about new products and services funny available to the Italian market.", "a irtpdraraooonisfosuhdEivn,psdiia2tnead.sbrethat0 sas reiam tc.yda ctp.e a1s idawisgnena h snbee r tl ealakleeIamnaNdaltpt e I rlctla o atrtsytns n,a ytn0e a10slubud ,b sbi iaeno gwithocii ravrteu0 Et trtiwat9oe rbeeawewel hllIisg etl i eie uIer asolei s0 nnsp sleccps wrss kr tIelaatcIneIvvnEa hisadNoeIae tdteslhae,imamly4essen nlaondtvieeubNI m0uggs", "nranbil i ldoa0ttol wat reapetc0 ioEotbn siha a r ut oilntedN,de ahs p.aetu0keg 0lInwal lsalns e at dusdllIrahcgnu cibb npsnasraIshs1udtearseoelascserauEywe2iiyo,ataiEttosie etld sisawelvt e eretnmIb iNtts mera db 1a n pdiwe ac sane Iyet itc levfal iiiII gre,tiae hwvt gnNeo a e oemssshte0r0gtib v.aineul r4d asI enaeednrk perycsspon ahs ia vaItmstlm,e rn.s 9a", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. funny In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates its website with daily news about joke new products and services available to the Italian market.", "psn,adnreana ebmt0slbuhdabwolt0weart i seNnsttaIil i a Inta1Ieeg esslttr slodeaa... eoipes0eusii o Nigiitocimnlbhpas tvnd tn yfl I s uderi tlopeny at et ae lgta rdesIInite gtdt mn9yarolmvsesrrEebhse0sinb nav wetyailrtiaeeea,h ee In luivo soalll0d atii0,glInhtpdhea,nErapE waaNe o ia1wvsaeucaeaacia thrwe e ssrlek astei kdrs2 4 mursoater ssnb nsuaccdct cI", "its saawdlIe lteapNtIsIekbeaeIgtiwutcibn nNas w,eh r gin wbe t9 gali0draIvne a t0a slmt racoteo u 0g l ln deyEao tryeut4dslhInmle1IIo1see sadsi en stn recte0 niirrtcn na2ssaesontlnreioiv aa rsvslwrda 0l leoutsbcc s e. ovpo e ts sa i,etsen arrIaetaiat esw .ad rio,, NE n tpi tlyars bee asn0ldieitaphil.nm p ubhdtthacids aagl eevEe ambalkayhsims ua faepniieeohdtur", "etiebN iyaa dwaa . lk aioatiislyNsig sneoi Ie4eh a nI eui elaal aanwiden orIltmatEdvi pshosculE dnl0s rnnI vcnbeeuIieeyIso gnp,ieii epa0d0,arto ssawobltcetts.0 gea f aiaretIdrvtkaosae t1admrteEssc le9g,r rrmlepidaeehnisepl phc w aettcgs tsn t Nvntsaes ulrbI iaee0bsm in0ns1w maadhnit wed b a oall rsrr u2nseo nrh tvtlrI yndeael.s tssl etcteeatth,uatsbiuaoa", " tsaa1maptiseec eh ,litdEaen b m1 edirataetai eabliIyhpaoelukan dIbaleu 2 srtng Erip0siue t9eoyimi 4s ttalcs reossdrlalnlltnralsorg mnndi d 0NNIaupser isbde,bidw erars ge0o osItleNa etthoh,ovgitse,saa ob aEeie sdwwsvticlt ef leewaawwenemctra0. uts sran i e.Iarspt s evsgssieabinlys nn p e a I toe ne0Iacv tathncti IecluIni thdsnyavrkaou0aael l editn.htrna", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. 
IEN Italia also publishes newsletters and updates its website with funny daily news about new products and services available to joke the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes funny newsletters and updates its website joke with daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides funny a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates joke its website with daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. joke In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN funny Italia also publishes newsletters and updates its website with daily news about new products and services available to the Italian market.", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and funny updates its website with joke daily news about new products and services available to the Italian market.", "tibtg .nhltdIlNee.tllrnehsiteIoeldwc mgeidsreIseIaaIcs at,rvr ee s pdipav lttt b 0wah eym ee0utsyo rtn hdganesIl rlriaolc ise 0iyea v,saiipeeEbb1bsl2nis niivrkct0sd weia meaiw detaesi NeIwarg9hNd ngEs motnusdv rttloh e tt o0birrauhayeuaaetb us,aansopacat akdiIsr rele nlenl eno nociseu ,se tu etssaioaal4as1e0adsalnan pa pn atcrn n .iaEtatasIwlito em sf ", " niaoarElispysa ber s w h aEs vbtese de 0Nrseo9sincidsssenEa pvoa nssolIalnaitapeaelt0nsw.esnlci tl m sapinet4 geapllnaro,Iswaairodtod eIcggvyteue0i nme n aeio,ov e.etdarlerdsc euu b2tcisd b ts. a,iaaualllea eot lp yitltrk tde c aboat,ah ariu1neNr s dsNi1ybauhe Im rhr ih tu wIttea0lnhlktgattde ha0nc IeeIe isw mnisitv trbrnranietetgl0sdatwmsfisIeeIen aa", "10 times a year, IEN Italia, provides a digest of the latest products news and technologies available on the Italian market. In 2009, nearly 14 000 subscribers received IEN Italia, mostly engineers and purchasing managers. IEN Italia also publishes newsletters and updates funny its website joke with daily news about new products and services available to the Italian market."];
search_query(query,pages);
}}}
!!!! solutions
{{{
// solution 1
//function search_query(query, pages) {
// var regex = new RegExp(query.reduce(function (r, tok) {
// return r + tok.replace(/[\-\[\]\/\{\}\(\)\*\+\?\.\\\^\$\|]/g, "\\$&") + '.*';
// }, '.*'));
//
// console.log(pages.reduce(function (r, page) {
// return page.match(regex) ? r + 1 : r;
// }, 0))
//}
// solution 2
//function search_query(query, pages) {
// var re = new RegExp( query.join( '.*?' ) ),
// numMatches = 0;
//
// for( var i = 0, len = pages.length; i < len; i++ )
// numMatches += Number( re.test( pages[ i ] ) );
//
// console.log( numMatches );
//}
// solution 3 - builds a regex and then applies it after
//function search_query(query, pages) {
// var pattern = ".*";
// for (var i = 0; i < query.length; i++) {
// pattern += query[i] + ".*";
// }
// var count = 0;
// var regex = new RegExp(pattern);
// for (i = 0; i < pages.length; i++) {
// if (regex.exec(pages[i])) count++;
// }
// console.log(count);
//}
// solution 3 - the good old indexOf
//function search_query(query, pages) {
// // To print results to the standard output please use console.log()
// // Example: console.log("Hello world!");
// var count = 0;
// var start = 0;
// for(var i = 0; i < pages.length; i++) {
// for(var j = 0; j < query.length; j++) {
// start = pages[i].indexOf(query[j], start);
// console.log(start);
// if(start < 0) {
// break;
// }
// if(j === query.length - 1) {
// count++;
// }
// }
// start = 0;
// }
//
// console.log(count);
//}
// solution 4
//function search_query(query, pages) {
// var count = 0;
// for(var i = 0; i < pages.length; i++) {
// var searchOffset = 0;
// for(var p = 0; p < query.length; p++) {
// var loc = pages[i].indexOf(query[p], searchOffset);
// if(loc != -1) {
// if(p == query.length - 1) {
// count++;
// } else {
// searchOffset = loc;
// }
// } else {
// break;
// }
// }
// }
// console.log(count);
//}
// solution 5 - this is readable!
//function search_query (query, pages) {
// var matches = 0;
// for (var i = 0; i < pages.length; i++) {
// var lastMatch = -1;
// for (var q = 0; q < query.length; q++) {
// lastMatch = pages[i].indexOf(query[q], lastMatch + 1);
// if (lastMatch === -1) {
// break;
// }
// }
// if (lastMatch !== -1) {
// matches++;
// }
// }
// console.log(matches);
//}
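// why solution 5 reads well: indexOf(query[q], lastMatch + 1) restarts the
// scan just past the previous match's start, so token start positions must
// strictly increase; ordering is enforced with one indexOf per token and no
// string slicing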
// solution 6 - nice readable regex
//function search_query(query, pages) {
// // To print results to the standard output please use console.log()
// // Example: console.log("Hello world!");
// var count = 0;
// for(var i=0; i<pages.length; i++) {
// var e = new RegExp(query.join(".*"),"g");
// var check = e.test(pages[i]);
// if(check) {
// count += 1;
// }
// }
// console.log(count);
//}
// solution 7
function search_query(query, pages) {
// Write your code here
// To print results to the standard output please use console.log()
// Example: console.log("Hello world!");
var queryListSize = query.length;
var pageCount = 0;
for (var page in pages) {
var pagePiece = pages[page];
var queryListHits = 0;
for (var word in query) {
var index = pagePiece.indexOf(query[word]);
if(index == -1){
console.log(query[word] + " -- not found in " + pagePiece); //DEBUG
break;
}
else{
console.log(query[word] + " -- found"); //DEBUG
pagePiece = pagePiece.slice(index + query[word].length, pagePiece.length);
console.log(pagePiece + ' -- pagePiece');
queryListHits ++;
}
}
if(queryListHits == queryListSize){
pageCount++;
}
}
console.log(pageCount);
}
// test
var query = ["the", "new", "store"]
var pages = ["thenewstore", " is the new store"]
// correct answer is 4
//var query = ["the", "new", "store"]
//var pages = ["the new store is in san francisco", "the boy enters a new and cool store", "this boy enters a new store", "The new store is in the city", "the newstore is a brand", "there is newton in the store"]
// correct answer is 7
//var query = ["fun"]
//var pages = ["e hecim hnihoped smee e tdttoosdedo raaiefsonfrya ti ses", "yeeesdeeosfrre otahtna sic ps hmeded tdt foo o si aimnehi", "neeooaets thaesre s oefeh y ansiiftdit doremdoecdpsimh", "edrf a oooie sedsfpi eadh aitnhseoestidemh nryometcte s", "hyetn d sfhdeiof mr deeeedaeeioc s aap soi iorohttssmnt e", "adatoo istisfsdde eietroseedome a ynchthh mfreep eiosn ", "episode of the third season of the animated comedy series fun", "e edohemfetcodi tnsron e t adafthosyhmrdiseipe eoi ssea", "episode of the third season of the animated fun comedy series", "eesoe ost ooeedetarm s ypd drdtmf eaisn n eifohstehcihia", "episode fun of the third season of the animated comedy series", "episode of the third season of the animated comedy series fun", "episode of the fun third season of the animated comedy series", "soeesyffr tn otaoai e redmtam sdiocepo h heteddhsieinse ", "ey mneh osedrodtiod patt nehd chemoa s foest efriieissae", "temaed ds essesrroo f thytfedoenhsietehmia ipcon ieoad ", "episode of fun the third season of the animated comedy series", "episode of fun the third season of the animated comedy series", "hroe ca eemeiri e itoho ddomoe fs petseisadntfe assnhydt", "s d nteaesmdhendhieemceas o orf eieasitsoyietdphooft r "]
search_query(query,pages);
}}}
!! Books ### Build: a named entity extractor
!!! Context extraction ### (hard) Considering the name of a book and the tweet mentioning it we define the context of the book as a sequence of tokens immediately preceding and immediately following the book name
!!! Trigger words ### (hard) Let’s consider a context that was extracted during the previous challenge: Did you read -TITLE- on Kindle yesterday? In this example, "read" and "Kindle" are trigger words
!!! Context pruning ### (hard) You’ve obtained the set of trigger words, congrats! It’s time to remove the tokens that are not useful in predicting a book title in their vicinity from each context
!!! Extract book titles ### (hard) In the previous challenge we have implemented context pruning and obtained high-precision contexts
!! Map Reduce ### Build: a distributed file search engine
!!! Fast power ### (hard) Searching for a string in a large data store by simply comparing characters would take ages or a huge amount of computing power
!!! Hash String ### (hard) Now let’s solve the problem of encoding
!!! Mapper ### (hard) To further increase the speed of the search algorithm, the entire set of documents and search strings is split into small chunks and sent to multiple machines
!!! Reducer ### (hard) As you remember, the mappers previously produced key-value pairs of data - the key was the string we were searching for and the value was a list of matched documents
!! GPS positioning ### Build: a car GPS
!!! Longest street segment ### (medium) A point on a map is a pair of two values: X (longitude) and Y (latitude)
!!! Intersecting street segments ### (medium) As in the previous challenge, a street segment is a part of the street between two consecutive junctions and is described by the coordinates of the two junctions
!!! Streets nearby ### (hard) A GPS position that we receive from the satellite is always going to have an error threshold, which we define as the radius of a circle around this position where our
!!! Speed ### (medium) Speed is an important decision factor when matching GPS positions on the streets
!!! Map matcher ### (hard) It’s now time to implement a fully-functional map-matching algorithm
!! Symbolic execution ### Build: a smart compiler
!!! Arithmetic evaluation ### (medium) The most basic operations in computer programs are arithmetic operations like this one: 2 + 3 * (1 - 4 * 2) Let’s find out how compilers understand the way
!!! AST Part One ### (medium) The first two steps in the compilation process are the lexical and syntactical analysis of the source code
!!! AST Part Two ### (hard) Let’s make the Buddy language more powerful by adding more advanced statements: if while block variable declaration Here's an example source code with the newly added features:
!!! Semantic analysis ### (medium) The third step of the compilation process is the semantic analysis
!!! Intermediary code ### (hard) The fourth step in the compilation process is transforming the AST into intermediary code
!!! LLVM parser ### (hard) Once a source code written in a high-level language is transformed into LLVM code, it gets very easy to do operations on it
!!! Failure detection ### (hard) LLVM makes it easy to analyse and optimize code
! community
https://github.com/quoidautre/talentbuddy-solutions
https://github.com/groleauw/Solutions/tree/master/TalentBuddy
https://github.com/CodeThat/Talentbuddy
https://gist.github.com/virajkulkarni14/6847512
https://github.com/maverickz/TalentBuddy
https://github.com/ryan1234/talentbuddy
https://github.com/danielmoraes/prog-contests/tree/master/talentbuddy/solved
https://github.com/mihneagiurgea/pysandbox/tree/master/talentbuddy
! end
{{{
week1_1_Welcome.pdf
week1_2_Telegram.pdf
week1_3_JavaScript.pdf
week1_Coding Style.pdf
week2_1_Complexity.pdf
week2_2_Settingupyourdevelopmentenvironment.pdf
week3_1_Creating and hosting a static web page.pdf
week4_1_Asynchronous programming in JavaScript.pdf
week5_Ember.pdf
week6_6_0_Architecting the Front-End.pdf
week6_6_1_Posts, Followers and Followees of a User.pdf
week6_7_Using a mock server.pdf
week6_8_Implementing the authentication pages.pdf
week6_9_Implementing the dashboard.pdf
week6_10_Implementing the user profile pages.pdf
week7_1_Nginx.pdf
week7_2_Node.js & Express.pdf
week7_3_Logging.pdf
week7_4_Authentication using Passport.pdf
week7_5_Authentication - part II.pdf
week7_6_Persistent sessions - Frontend - Part III.pdf
week7_7_User management with mongoDB and mongoose.pdf
week7_8_Code design.pdf
week7_9_Persistent sessions.pdf
week7_10_Full mongoDB support.pdf
week7_11_Secure passwords.pdf
week7_12_Password reset.pdf
week7_13_Sideloading_1.pdf
week7_14_Repost_2.pdf
week7_15_Deployment.pdf
}}}
<<showtoc>>
! 1 - welcome, coding style, basic language constructs, search engine
!! 1 materials
welcome @@x@@/1MW-t2_33vBEQgyv1LKuK6DLYqAXLTDj84cus60-JGco/edit#
<<<
telegram UI @@y@@/s/sq7rfzl3vq6n6bl/Reposted.png?dl=0
global variables are bad http://c2.com/cgi/wiki?GlobalVariablesAreBad
Does it matter which equals operator (== vs ===) I use in JavaScript comparisons http://stackoverflow.com/questions/359494/does-it-matter-which-equals-operator-vs-i-use-in-javascript-comparisons
don’t use for…in loop https://google-styleguide.googlecode.com/svn/trunk/javascriptguide.xml?showone=for-in_loop#for-in_loop
How does the “this” keyword work? http://stackoverflow.com/questions/3127429/how-does-the-this-keyword-work
<<<
javascript @@x@@/1BPiafpgH0OkKMKMNy_8TixL8aaYfjXmZOx6P0t0-tjM/edit#
coding style @@x@@/1ts52y3Rh2O_yMpZ-VG54oSJXm6uyNK5uI70vGbCYrJA/edit
<<<
google js style guide https://google-styleguide.googlecode.com/svn/trunk/javascriptguide.xml
camelCase style https://en.wikipedia.org/wiki/CamelCase
global variables are not good http://programmers.stackexchange.com/questions/148108/why-is-global-state-so-evil/148109#148109
<<<
!! 1 exercises
@@https://www.talentbuddy.co/challenge/51846c184af0110af3822c3151846c184af0110af3822c32
https://www.talentbuddy.co/set/51846c184af0110af3822c31
http://www.enchantedlearning.com/grammar/punctuation/ @@
punctuation answer:
basically, the missing piece is the v.split(',') that turns the string into an array so the loop code below can work
the code extends Array.prototype with a custom ascending numeric sort
the code only handles 1 missing number - it reports the first (smallest) one that's missing
{{{
function find_missing_number(v) {
// Write your code here
// To print results to the standard output please use
// console.log()
// Example: console.log("Hello world!")
Array.prototype.sortNormal = function () {
return this.sort(function (a, b) {
return a - b;
})
};
var v = v.split(',');
var sorted_array = v.sortNormal();
// console.log(sorted_array); // instrument
for (var i = 0; i < sorted_array.length; i++) {
if (i + 1 != sorted_array[i]) {
var missing_element = i + 1;
console.log(missing_element);
break;
}
}
}
find_missing_number('5,4,1,2');
"C:\Program Files (x86)\JetBrains\WebStorm 10.0.4\bin\runnerw.exe" "C:\Program Files (x86)\nodejs\node.exe" sandbox3.js
3
}}}
!! 1 references
{{{
read about:
> global variables
> === caveats
> how this works
> style guide
> Function Declarations vs. Function Expressions
https://javascriptweblog.wordpress.com/2010/07/06/function-declarations-vs-function-expressions/
http://kangax.github.io/nfe/#expr-vs-decl
http://stackoverflow.com/questions/1013385/what-is-the-difference-between-a-function-expression-vs-declaration-in-javascrip
> array vs object
https://google-styleguide.googlecode.com/svn/trunk/javascriptguide.xml?showone=Array_and_Object_literals#Array_and_Object_literals
> strings
> primitive (a string constant or a string literal) vs object
http://skypoetsworld.blogspot.com/2007/11/javascript-string-primitive-or-object.html
http://stackoverflow.com/questions/7675127/is-string-a-primitive-type-or-object-in-javascript
http://stackoverflow.com/questions/2051833/difference-between-the-javascript-string-type-and-string-object
http://eddmann.com/posts/ten-ways-to-reverse-a-string-in-javascript/
http://stackoverflow.com/questions/958908/how-do-you-reverse-a-string-in-place-in-javascript
> shim / polyfill
http://stackoverflow.com/questions/6599815/what-is-the-difference-between-a-shim-and-a-polyfill
http://irisclasson.com/2013/01/30/stupid-question-139-what-is-a-shim/
given by andrei:
// alternative to <for i in> is <forEach>
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach
forEach
`"flsjdfal".split('').forEach(function(character) { … })`
// object oriented javascript
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Introduction_to_Object-Oriented_JavaScript
// Book - you don't know JS (a book series)
https://github.com/getify/You-Dont-Know-JS
}}}
!! 1 questions
{{{
a questions:
> when do we use, function declarations vs function expressions? what do we normally use for standard coding style?
I suppose from reading this https://javascriptweblog.wordpress.com/2010/07/06/function-declarations-vs-function-expressions/
that function expressions is the way to go
> do we really strictly follow the commenting style as specified here? https://google-styleguide.googlecode.com/svn/trunk/javascriptguide.xml?showone=Comments#Comments
> where would you need grunt.js and gulp.js? also jshint?
> is the best practice on defining string is to create it with "new String()" ?
> I noticed on the doc that they provide shim for non-supported versions. how often do we add shim?
> on objects, what are prototype notations for? https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Object_initializer#Prototype_mutation
> why does Z get created as a global variable? https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/var#Implicit_globals_and_outer_function_scope
> on regex: what is a sticky flag? and why does it change on exec? https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp#Example:_Using_a_regular_expression_with_the_.22sticky.22_flag
}}}
additional questions (post):
{{{
on coding:
> I've noticed that as you do more exercises you get to have this muscle memory, and you'll learn new algorithms and tricks along the way. how do you organize all of that? meaning if you encounter this, yes I have compiled notes that does the similar thing so I might use one of that.
> at first I was using for...in loops for iterating through strings because it's easier to understand, do you do the same thing? or do you stick with the regular loop scheme?
> do you frequently create an Array.prototype?
prototype
usually used to simulate classes in javascript
in JS there are no classes (pre-ES6) - prototypes fill that role
gearing towards object oriented (see the sketch after this block)
> how do you usually comment? what I've noticed on the later part of the exercises that I would start with pseudo code as comments and then build code around it. so commenting helps on visualizing the code flow. but how do you do it?
don't over comment
> when the code or js file gets too big how do you organize all the functions you have written so far? do you group them?
group functions, separate to files, import as you need them
> each file implemented as one class
> btw, I think I'm going to read this data structures and algorithm book.
> supplemental reading
> also I learned a lot on the Search project
on tools:
> what IDE do you use? or tools? I got comfortable using webstorm for coding and firefox for debugging
}}}
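since the notes above mention prototypes simulating classes, here is a minimal sketch of that pattern - a constructor function plus a method on its prototype (the Player name is made up for illustration):
{{{
// constructor function: "the blueprint"
function Player(name, score) {
    this.name = name;
    this.score = score;
}
// methods go on the prototype so every instance shares one copy
Player.prototype.logDetails = function () {
    console.log(this.name + " has " + this.score + " points");
};
// each "new" creates an object based on the blueprint
var p1 = new Player("Fred", 10000);
var p2 = new Player("Sam", 5000);
p1.logDetails(); // Fred has 10000 points
p2.logDetails(); // Sam has 5000 points
}}}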
!! 1 notes
{{{
function tokenize_query(query, punctuation) {
for (c in punctuation) {
query = query.split(punctuation[c]).join(' '); // split returns an array, join turns it back into a string (w/o the join the next pass won't work)
}
query = query.split(' '); // this makes it an array
// splitting on ' ' splits by word; splitting on '' splits into single characters
// basically we bounce string -> array -> string, ending up with an array
console.log(query);
// here iterate on object
for (i in query) {
if (query[i] != '') {
console.log(query[i]);
}
}
}
tokenize_query("this,is.?the l i f e",",.?");
}}}
see my other notes on the solutions section
!! 1 additional exercise - correct code style - debug find missing number
{{{
function find_missing_number(v) {
// Write your code here
// To print results to the standard output please use
// console.log()
// Example: console.log("Hello world!")
Array.prototype.sortNormal = function () {
return this.sort(function (a, b) {
return a - b;
})
};
var v = v.split(',');
console.log(typeof v);
console.log(v);
var sorted_array = v.sortNormal();
// console.log(sorted_array); // instrument
for (var i = 0; i < sorted_array.length; i++) {
if (i + 1 != sorted_array[i]) {
var missing_element = i + 1;
console.log(missing_element);
break;
}
}
}
find_missing_number('5,2,4,1');
}}}
! 2 - complexity, rewrite longest_palind to O(n^2), setup vm
!! 2 materials
Complexity
@@x@@/1jpNSx4YDNjFhCv-aiW_T7RXxfvcptZQXbuZtcebuEDk/edit#
<<<
Determining An Algorithm's Complexity: The Basics - A Paradox Of Choice
https://www.talentbuddy.co/blog/determining-an-algorithms-complexity-the-basics/
O(N^2) and O(NlogN) http://bigocheatsheet.com/
https://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/
<<<
Setting up your dev environment
@@x@@/1E_eJjnFfC1_x1s4nZjKPzFHI2n1nrS_oOsuQgxu4jjQ/edit#
<<<
http://ryanstutorials.net/linuxtutorial/cheatsheet.php
http://ryanstutorials.net/linuxtutorial/#creativity
http://lifehacker.com/5633909/who-needs-a-mouse-learn-to-use-the-command-line-for-almost-anything
<<<
!! 2 exercises
Longest palindrome https://www.talentbuddy.co/challenge/51e486994af0110af3827b17
!! 2 references
<<<
// complexity videos
http://bigocheatsheet.com/
Big O notation videos https://www.youtube.com/results?search_query=Big+O+notation
Algorithms and Data structures https://www.youtube.com/playlist?list=PLEbnTDJUr_IeHYw_sfBOJ6gk5pie0yP-0 <-- 1-4 good stuff
Big O Notation - matt garland https://www.youtube.com/watch?v=nMQyBh2FuaA <-- good stuff
Log N https://www.youtube.com/watch?v=kjDR1NBB9MU <-- good stuff
O(NlogN) videos https://www.youtube.com/results?search_query=O%28NlogN%29
Logarithms... How? (mathbff) https://www.youtube.com/watch?v=ZIwmZ9m0byI
CS50 Lecture by Mark Zuckerberg https://www.youtube.com/watch?v=xFFs9UgOAlE
"The Facebook" before it was famous https://www.youtube.com/watch?v=N1MWFzf4i3o
// complexity, big O notation
http://www.cforcoding.com/2009/07/plain-english-explanation-of-big-o.html <-- good stuff
http://www.quora.com/What-is-the-most-efficient-algorithm-to-find-the-longest-palindrome-in-a-string
http://stackoverflow.com/questions/16917958/how-does-if-affect-complexity
http://www.pluralsight.com/courses/dotnet-puzzles-type-design
http://www.pluralsight.com/courses/developer-job-interviews
// palindrome videos
Palindrom videos https://www.youtube.com/results?search_query=javascript+palindrome
JavaScript Tutorial - Palindrome string detection using recursion https://www.youtube.com/watch?v=_8AAnrNeH08
Programming Interview 26: Find the longest palindrome in a String https://www.youtube.com/watch?v=UhJCXiLSKv0
// palindrome articles
https://www.codementor.io/tips/1943378231/find-longest-palindrome-in-a-string-with-javascript#/
http://articles.leetcode.com/2011/11/longest-palindromic-substring-part-i.html
http://articles.leetcode.com/2011/11/longest-palindromic-substring-part-ii.html
http://codegolf.stackexchange.com/questions/16327/how-do-i-find-the-longest-palindrome-in-a-string
http://stackoverflow.com/questions/1115001/write-a-function-that-returns-the-longest-palindrome-in-a-given-string
http://codedbot.com/questions/1426374/largest-palindrome-product-in-javascript
http://stackoverflow.com/questions/3647453/counting-palindromic-substrings-in-on
// harvard CS
https://canvas.harvard.edu/courses/611/
http://sites.fas.harvard.edu/~cs124/cs124/
http://www.quora.com/Is-CS-161-Operating-Systems-worth-taking-at-Harvard
http://cdn.cs50.net/2011/fall/lectures/11/week11m.pdf <-- the whole CS
http://cdn.cs50.net/guide/guide-11.pdf <-- unofficial guide to CS
http://learnxinyminutes.com/docs/asymptotic-notation/
http://stackoverflow.com/questions/11032015/how-to-find-time-complexity-of-an-algorithm
http://discrete.gr/complexity/
http://blogs.msdn.com/b/nmallick/archive/2010/03/30/how-to-calculate-time-complexity-for-a-given-algorithm.aspx
http://www.studytonight.com/data-structures/time-complexity-of-algorithms
big O = worst case
omega = best case
theta = average case
https://www.youtube.com/watch?v=aGjL7YXI31Q
Algorithms lecture 2 -- Time complexity Analysis of iterative programs https://www.youtube.com/watch?v=FEnwM-iDb2g
Algorithms lecture 4 -- comparing various functions to analyse time complexity https://www.youtube.com/watch?v=aORkZXcjlIs <-- this is where it all made sense
the takeaway: anything n^2 (quadratic) eventually grows bigger than nlogn, whether it's written as iteration or recursion (function)
Logarithms are just the inverse of exponents ("to the power of").
2^20 = 1,048,576
log2 1,048,576 = 20
the best simple examples:
1) O(N^2) = multiplying a 6-digit number by a 6-digit number digit-by-digit takes 36 multiplications
2) O(logN) = telephone directory search, slice in the middle and keep halving (binary search - note this is logN, not NlogN; sorting the directory in the first place would be NlogN) - see the sketch after this list
<<<
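a quick sketch of that "slice in the middle" idea as a binary search in JS (the directory array is made up and assumed sorted):
{{{
function find(directory, name) {
    var lo = 0, hi = directory.length - 1;
    while (lo <= hi) {
        var mid = Math.floor((lo + hi) / 2); // slice in the middle
        if (directory[mid] === name) return mid;
        if (directory[mid] < name) lo = mid + 1; // keep the right half
        else hi = mid - 1;                       // keep the left half
    }
    return -1; // not found
}
console.log(find(["amy", "bob", "eve", "joe", "zoe"], "joe")); // 3
// each pass halves the remaining entries, so 1,048,576 names take ~20 passes
}}}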
! 3 - static login page, atom editor, html and css, github, github pages
!! 3 materials
Creating and hosting a static web page
@@x@@/1oXZ5UivqyHfnuvDhFFyR6EBf0er8plXfSmpvOXvtsLg/edit
!! 3 exercises
{{{
// first pass of requirements
body {
The font family for all elements in the page is 'Open Sans';
}
.login {
“Get going” - font size 22px, color: #000000
}
.input {
“Enter your username/password…” - 13px , color: #000000
The characters being typed in the password input field will be replaced with dots or stars
}
.recover {
“Recover password”: font size 13px, color: #0084B2
}
.loginButton {
Input bottom border: #EEEEEE
“Log in” button title: font size: 13px, color: #0084B2, weight: bold; border: #0084B2
}
.table {
Background color: #008CB8
}
1) done w/ html and css, the htmldog rocks
2) done w/ installation of atom
3) done w/ configuring git on the VM w/ ssh key added
4) done w/ gh-pages http://karlarao.github.io/telegram/
cd ~ubuntu/telegram/
git config --global user.name "karlarao"
git config --global user.email "karlarao@gmail.com"
git add .
git status # to see what changes are going to be committed
git commit -m "."
git remote add origin git@github.com:karlarao/telegram.git
git push origin master
# git branch gh-pages # this is one time
git checkout gh-pages # go to the gh-pages branch
git rebase master # bring gh-pages up to date with master
git push origin gh-pages # commit the changes
git checkout master # return to the master branch
}}}
!! 3 references
<<<
http://www.htmldog.com/guides/html/ <- read on the beginner, intermediate, advanced
http://learn.shayhowe.com/advanced-html-css/complex-selectors/ <-- selectors and positioning
show text in password field http://roshanbh.com.np/2008/03/show-text-in-the-password-field.html
center a table w/ css http://www.granneman.com/webdev/coding/css/centertables/
ubuntu fix network settings http://linuxconfig.org/etcnetworkinterfacesto-connect-ubuntu-to-a-wireless-network
ubuntu restart network service http://askubuntu.com/questions/230698/how-to-restart-the-networking-service
http://lea.verou.me/2011/10/easily-keep-gh-pages-in-sync-with-master/
http://stackoverflow.com/questions/14007613/change-text-from-submit-on-input-tag
http://cssdemos.tupence.co.uk/button-styling.htm
https://blogs.oracle.com/opal/entry/node_oracledb_goes_1_0
@@given by andrei:@@
http://andreisoare.github.io/telegram/
he prefers this http://lesscss.org/ over this http://www.htmldog.com/guides/css/intermediate/grouping/
{{{
// n^2
s = "aaabcccbxx"
i
function longest_palind(s) {
var max = ''; // O(1)
for (i = 0; i < s.length; i++) { // O(n)
var palindrome = longestPalindromeCenteredInI(s, i); // O(n)
if (palindrome.length > max.length) { // O(1)
max = palindrome; // O(1)
}
}
return max; // O(1)
}
function longestPalindromeCenteredInI(s, i) {
var start = i; // O(1)
var end = i; // O(1)
while (start >= 0 && end < s.length && s[start] === s[end]) { // O(n); the bounds checks stop the loop at the string edges
start--; // O(1)
end++; // O(1)
}
return s.slice(start + 1, end); // O(n)
}
function indexOf(s1, s2) {
for (var i = 0; i < s1.length; i++) { // O(n)
var j = 0;
while (s1[i + j] === s2[j]) { // O(n)
j++;
}
if (j === s2.length) {
return true;
}
}
return false;
}
function slice(s, start, end) {
var result = '';
for (var i = start; i < end; i++) {
result += s[i];
}
return result;
}
}}}
@@given by vlad@@
I reviewed your html page and I noticed that you've used tables to organize the layout of your page. This is usually not a good practice for a couple of reasons: http://stackoverflow.com/questions/819237/div-element-instead-of-table
I would like to challenge you to write layouts using divs.
http://learnlayout.com/
http://learn.shayhowe.com/html-css/opening-the-box-model/
http://learn.shayhowe.com/html-css/positioning-content/
Next I'm going to upload an image that presents a way you can organize the layout of the login page.
<<<
!! 3 questions
{{{
1) longest palind on2
CSS
2) ask about the class and ID
> do you use class a lot or ID?
3) selectors
> when do you group and nest?
}}}
! 4 - asynchronous programming, callbacks, parallel execution, promises, concurrency model
!! 4 materials
<<<
Asynchronous programming in JavaScript
@@x@@/1xYHpdUJnkgdHCPOFjqkr_bqmpf6Ba6k8IbUIw16LGQs/edit
<<<
!! 4 exercises
<<<
read async https://www.talentbuddy.co/challenge/5357ff93acdbfdef717df135535725aaacdbfdef717dec97
copy async https://www.talentbuddy.co/challenge/5357ff93acdbfdef717df135535725aaacdbfdef717dec98
parallel async https://www.talentbuddy.co/challenge/5357ff93acdbfdef717df135535725aaacdbfdef717dec99
<<<
!! 4 references
<<<
fs.rename https://nodejs.org/api/fs.html#fs_fs_rename_oldpath_newpath_callback
fs.readFile https://nodejs.org/api/fs.html#fs_fs_readfile_filename_options_callback
/* in order to avoid callback hell */
http://callbackhell.com/
https://github.com/caolan/async
https://github.com/caolan/async#waterfall
https://github.com/caolan/async#parallel
/* promises are alternative to callback */
http://www.html5rocks.com/en/tutorials/es6/promises/
https://github.com/tildeio/rsvp.js/
https://developer.mozilla.org/en-US/docs/Web/JavaScript/EventLoop
/* some video references */
http://www.pluralsight.com/courses/javascript-fundamentals-es6 <-- async development in es6 THIS IS CRAP
http://www.pluralsight.com/courses/advanced-javascript <-- async patterns GOOD STUFF
http://www.pluralsight.com/courses/async-parallel-app-design
/* configure node on webstorm */
http://blog.jetbrains.com/webide/2012/03/attaching-the-sources-of-node-js-core-modules/
http://code.tutsplus.com/tutorials/event-based-programming-what-async-has-over-sync--net-30027 <-- nice intro
timers, settimeout setinterval
http://discuss.emberjs.com/t/why-i-m-leaving-ember/6361
http://blog.simplereach.com/why-use-d3-ember-for-data-visualization/
mvc ember graph http://bit.ly/1NpPhJS
http://www.smashingmagazine.com/2013/11/an-in-depth-introduction-to-ember-js/
http://handlebarsjs.com/
test driven development
tdd with node.js
foundations of prog: tdd
test: jasmine, mocha, qunit
run: karma
end to end testing: phantomjs, slimerjs
the way to drive the test: casperjs
test driver: selenium web driver
version control
TDD
end to end testing
courses:
version control for everyone
github for web designers
fundamentals of version control
JS essential training
debugging PHP: Advanced techniques
JS for web designers
jquery for web designers
search for "debugging"
books:
you don't know JS
eloquent javascript
speaking javascript
exploring ES6
http://stackoverflow.com/questions/17645478/node-js-how-to-read-a-file-and-then-write-the-same-file-with-two-seperate-functi <-- good stuff
http://stackoverflow.com/questions/11278018/how-to-execute-a-javascript-function-only-after-multiple-other-functions-have-co
http://mrbool.com/callback-functions-in-javascript/28614
https://www.youtube.com/watch?v=QO07THdLWQo Javascript Generators - THEY CHANGE EVERYTHING - ES6 Generators Harmony Generators
https://www.youtube.com/watch?v=obaSQBBWZLk&list=UUVTlvUkGslCV_h-nSAId8Sw Are you bad, good, better or best with Async JS? JS Tutorial: Callbacks, Promises, Generators
http://plnkr.co/edit/1ArvFxI0gWmajTpDaOSB?p=preview <-- plunker
http://stackoverflow.com/questions/4700424/javascript-callback-after-calling-function
http://stackoverflow.com/questions/20647346/simple-nodejs-callback-example-with-fs-readfile
http://stackoverflow.com/questions/10058814/get-data-from-fs-readfile
http://stackoverflow.com/questions/22863170/node-js-from-fs-readfilesync-to-fs-readfile
http://code-maven.com/list-content-of-directory-with-nodejs
http://stackoverflow.com/questions/22213980/could-someone-explain-what-process-argv-means-in-node-js-please
https://blog.engineyard.com/2015/taming-asynchronous-javascript-with-async
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=kyle+simpson+async+javascript&tbm=vid
http://electron.atom.io/
<<<
!! node (callback, async) vs ember (promises)
{{{
node - callback, async
ember - promises
callbacks
generators
yield and coroutines
next
promises
deferred
.then
async.js
walk through of the callback?
what type of async do you use? and why?
aside from disk and network access, where else do you usually async?
do you use this for the workflow of your app logic?
}}}
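to make the callback vs promise contrast above concrete, a minimal sketch of the same read in both styles (notes.txt is a placeholder; the promise wrapper assumes a runtime with a native Promise, or a shim like rsvp.js linked above):
{{{
var fs = require('fs');

// callback style - node core APIs take an error-first callback
fs.readFile('notes.txt', 'utf8', function (err, data) {
    if (err) return console.error(err);
    console.log(data);
});

// promise style - wrap the call once, then chain with .then
function readFilePromised(path) {
    return new Promise(function (resolve, reject) {
        fs.readFile(path, 'utf8', function (err, data) {
            if (err) { reject(err); } else { resolve(data); }
        });
    });
}
readFilePromised('notes.txt')
    .then(function (data) { console.log(data); })
    .catch(function (err) { console.error(err); });
}}}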
! 5 - ember
!! 5 references
<<<
Top Reasons Why Industry Experts Use Ember.js And How You Can Adopt It Yourself
https://www.talentbuddy.co/blog/top-reasons-why-industry-experts-use-ember-js-and-how-you-can-adopt-it-yourself/
https://www.talentbuddy.co/blog/building-with-ember-js-advice-for-full-stack-developers-and-more-with-allen-cheung-engineering-manager-at-square/
https://www.talentbuddy.co/blog/building-with-ember-js-at-yahoo/
https://www.talentbuddy.co/blog/building-with-ember-js-at-groupon/
http://discuss.emberjs.com/t/services-a-rumination-on-introducing-a-new-role-into-the-ember-programming-model/4947
http://discuss.emberjs.com/t/services-a-rumination-on-introducing-a-new-role-into-the-ember-programming-model/4947/30
http://emberjs.com/blog/2013/09/06/new-ember-release-process.html
http://emberjs.com/deprecations/v1.x/
http://www.ember-cli.com/#mocks-and-fixtures
http://www.emberaddons.com/
https://www.talentbuddy.co/blog/building-with-node-js/ <-- developer joy
http://www.w3.org/TR/components-intro/
https://github.com/esnext/es6-module-transpiler
https://developer.mozilla.org/en-US/docs/Web/Security/CSP/Introducing_Content_Security_Policy
http://guides.emberjs.com/v1.10.0/getting-started/
http://thetechcofounder.com/getting-started-with-ember-js-using-ember-cli/
http://emberweekly.com/
<<<
! 6
! 7
! 8
! 9
! 10
! 11
! 12
! end
tb files url https://www.evernote.com/l/ADBT5BNqLaFPNJ1PLfMP1_rDbluD5LeZgQc
my JS code repo is here https://github.com/karlarao/js_projects
<<showtoc>>
! operators and var
http://www.w3schools.com/js/js_operators.asp
!! logical, modulus, increment
{{{
Strictly equal:
===
Logical AND / OR:
and &&
or ||
Modulus:
var year = 2003;
var remainder = year % 4;
Increment / Decrement:
a += 1;
a++;
a -= 1;
a--;
var name = prompt("What is your name?");
alert("Hello, " + name);
}}}
!! Ternary Operator
{{{
format is condition ? valueIfTrue : valueIfFalse - much like a mini if/else statement
var playerOne = 500;
var playerTwo = 600;
var highScore;
if ( playerOne > playerTwo ) {
highScore = playerOne;
} else {
highScore = playerTwo;
}
can be rewritten as Ternary Operator
var highScore = (playerOne > playerTwo) ? playerOne : playerTwo;
}}}
! console output
console.log
console.info
console.warn
console.debug
console.group <- groups related log messages together (indented, collapsible)
console.log(document); <- html format
console.dir(document); <- another format
console.time('time1'); <- another way of profiling
//do some work here
console.timeEnd('time1'); <- the label must match the console.time label for the timer to report
console.timeStamp <- used for profiling
! conditional code
!! if then else
{{{
if (true) {
//condition
} else {
// otherwise, different code
if ( ) {
// nested if
}
}
}}}
{{{
var a = 5;
var b = 10;
if ( a < b ) {
alert("Yes, a is less than b");
}
if ( a == b ) {
alert("Yes, a is equal to b");
}
}}}
{{{
var grade = "Premium";
if ( grade === "Regular") {
alert("It's $3.15");
}
if ( grade === "Premium") {
alert("It's $3.35");
}
if ( grade === "Diesel") {
alert("It's $3.47");
}
}}}
{{{
var a = 123;
var b = "123";
// equality check
if ( a == b ) {
alert("Yes, they ARE equal");
} else {
alert("No, they're NOT equal");
}
}}}
!! if then elseif
{{{
if (...) {
} else if(...) {
} else {
}
}}}
!! switch (aka case) with break (get out of the loop)
the opposite of ''break'' is ''continue''
{{{
var a = "Premium";
switch ( a ) {
case "Regular":
alert("1");
break;
case "Premium":
alert("2");
break;
case "Diesel":
alert("3");
break;
default:
alert("not valid");
}
}}}
! function
!! hello world function
!!! different types of functions
!!!! function declaration
{{{
function functionOne() {
// my code
}
}}}
!!!! function expression
{{{
var functionTwo = function () {
// my code
}
//anonymous function expression
var a = function() {
return 3;
}
//named function expression (the name bar is only visible inside the function itself)
var a = function bar() {
return 3;
}
}}}
!!!! self invoking function expression
{{{
(function sayHello() {
alert("hello!");
})();
// will also work
(function() {
console.log("hello!");
}());
}}}
{{{
// hello world
function fizzbuzz(n) {
return n % 3; // use the parameter instead of a hardcoded 5 % 3
}
console.log(fizzbuzz(5)); // 2
}}}
{{{
function myFunction (expected data/parameter or just blank) {
//condition
}
//sometime later
myFunction();
}}}
{{{
function myFunction() {
var a = 1;
var b = 2;
var result = a + b;
alert(result);
}
myFunction();
}}}
{{{
function myFunction(a,b) {
var result = a + b;
alert(result);
}
myFunction(5,5);
}}}
use of ''return'' will immediately jump back out of the function into whoever called it
{{{
function myFunction(a,b) {
var result = a + b;
return result;
}
var x = myFunction(5,5); //return will be stored in var x
alert(x);
}}}
{{{
var name = prompt("What is your name?"); //return is similar to this
}}}
dynamic functions with dynamic parameters http://stackoverflow.com/questions/676721/calling-dynamic-function-with-dynamic-parameters-in-javascript
! variable scope
{{{
var x; //global variable
function myFunction() {
x = 500; //don't use var again
alert(x); //call x first time
}
myFunction();
alert(x); //call x again
}}}
! loop (aka iteration) - setup index, check condition, increment index
!! for loop
{{{
var amount = 0;
for ( var i = 0; i < 10; i++ ) {
//do stuff
amount = amount + 100;
}
alert(amount);
}}}
{{{
Always use normal for loops when using arrays.
function printArray(arr) {
var l = arr.length;
for (var i = 0; i < l; i++) {
print(arr[i]);
}
}
}}}
https://google-styleguide.googlecode.com/svn/trunk/javascriptguide.xml?showone=for-in_loop#for-in_loop
!!! increment vs decrement
{{{
// decrement example 1
/*var values = [1,2,3,4,5];
var length = values.length;
for (var i=length; i--;) {
console.log(values[i]);
}*/
// decrement example 2
/*
var values = [1,2,3,4,5];
var i = values.length;
// i is 1st evaluated and then decremented; when i is 1 the code inside the loop
// is then processed for the last time with i = 0.
while(i--)
{
//1st time in here i is (length - 1) so it's ok!
console.log(values[i]);
}*/
http://stackoverflow.com/questions/3520688/javascript-loop-performance-why-is-to-decrement-the-iterator-toward-0-faster-t
}}}
!! for.. in loop
{{{
var testThis = function hello(a) {
var chars = a.split(''); // split the string into an array of characters
for (var i in chars) {
console.log(chars[i]);
}
};
testThis('hello world');
}}}
{{{
var person = {fname:"John", lname:"Doe", age:25};
var text = '';
var x;
for (x in person) {
text += person[x];
console.log(text);
}
}}}
!! while loop
{{{
var a = 1;
while (a < 10) {
alert(a);
a++; //this increments 1 up to 10
}
}}}
!! while loop - another example, increments amount up to 1000
{{{
var amount = 0;
//create the index
var i = 0;
//check condition
while ( i < 10 ) {
amount = amount + 100;
//increment index
i++;
}
alert(amount);
}}}
!! while loop - increment n and add to x
{{{
var n = 0;
var x = 0;
while (n < 3) {
n++;
x += n;
}
answer: 6
}}}
!! do... while loop
{{{
var a = 1; //index
do {
alert(a); //do stuff, unlike while loop, this section will always happen once even before the condition is checked
a++; //increment
} while (a < 10); //condition
}}}
! Types and Objects
!! strings
!!! methods
!!!! splice, slice, split comparison
<<<
array splice (changes the content of an array "start, deleteCount, item") https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice
array slice (extract "begin and end" of array or string) https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice
object string split (splits to array of strings "separator, limit") https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/split
<<<
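a quick side-by-side sketch of the three (values are arbitrary):
{{{
// splice mutates the array in place: (start, deleteCount, itemsToInsert...)
var a = ['h', 'e', 'l', 'l', 'o'];
a.splice(1, 2, 'X'); // a is now ['h', 'X', 'l', 'o']
// slice copies a range and leaves the original alone
var b = a.slice(1, 3); // b is ['X', 'l'], a is unchanged
// split turns a string into an array of strings
var c = 'h,e,l'.split(','); // c is ['h', 'e', 'l']
console.log(a, b, c);
}}}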
!!!! split
{{{
// splits the string into an array
punctuation = 'hello world';
var punctuation = punctuation.split('');
console.log(punctuation);
"C:\Program Files (x86)\JetBrains\WebStorm 10.0.4\bin\runnerw.exe" "C:\Program Files (x86)\nodejs\node.exe" sandbox1.js
[ 'h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd' ]
}}}
{{{
// join with '\n' put each object in newline
/*
punctuation = 'hello world';
var punctuation = punctuation.split('').join('\n');
console.log(punctuation);
"C:\Program Files (x86)\JetBrains\WebStorm 10.0.4\bin\runnerw.exe" "C:\Program Files (x86)\nodejs\node.exe" sandbox1.js
h
e
l
l
o
w
o
r
l
d
*/
}}}
http://www.plus2net.com/javascript_tutorial/array-split.php
http://stackoverflow.com/questions/17271324/split-string-by-space-and-new-line-in-javascript
!!!! slice
{{{
// slice, deleting the first row of the output
console.log(finalParse.split('\n').slice(1).join('\n'))
}}}
!!!! localeCompare or string compare
{{{
var str1 = "ab";
var str2 = "ab";
var n = str1.localeCompare(str2);
console.log(n)
if (str1.localeCompare(str2) === 0) {
console.log('yes');
}
}}}
http://www.tizag.com/javascriptT/javascript-string-compare.php
http://www.w3schools.com/jsref/jsref_localecompare.asp
!!!! string.isnullorempty() in javascript
http://stackoverflow.com/questions/3977988/is-there-any-function-like-string-isnullorempty-in-javascript
!!!! evaluation of boolean, convert string to boolean
http://unitstep.net/blog/2009/08/11/evaluation-of-boolean-values-in-javascript/
http://stackoverflow.com/questions/263965/how-can-i-convert-a-string-to-boolean-in-javascript
http://stackoverflow.com/questions/5800688/javascript-checking-boolean-values
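a quick sketch of the truthiness rules those links cover:
{{{
// falsy values: '', 0, null, undefined, NaN, false - everything else is truthy
console.log(Boolean('')); // false
console.log(Boolean('false')); // true - any non-empty string is truthy!
// so converting the strings 'true'/'false' needs an explicit compare
var s = 'true';
console.log(s === 'true'); // true
}}}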
!!!! negation
http://stackoverflow.com/questions/3755606/what-does-the-exclamation-mark-do-before-the-function
!!!! array as function argument, array pass to var
http://orizens.com/wp/topics/javascript-arrays-passing-by-reference-or-by-value/
https://www.dougv.com/2009/06/javascript-function-arguments-theyre-an-array-and-you-can-treat-them-as-such/
http://www.2ality.com/2011/08/spreading.html
http://stackoverflow.com/questions/19570629/javascript-pass-values-from-array-into-function-as-parameters
http://stackoverflow.com/questions/1316371/converting-a-javascript-array-to-a-function-arguments-list
http://stackoverflow.com/questions/2856059/passing-an-array-as-a-function-parameter-in-javascript
!!!! remove empty elements of array
http://stackoverflow.com/questions/16164699/remove-empty-values-from-array-simply-javascript
http://stackoverflow.com/questions/281264/remove-empty-elements-from-an-array-in-javascript
http://stackoverflow.com/questions/19888689/remove-empty-strings-from-array-while-keeping-record-without-loop
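the usual one-liner from those threads (ES5 filter):
{{{
var arr = ['a', '', 'b', '', 'c'];
// filter(Boolean) keeps only truthy elements, dropping the empty strings
console.log(arr.filter(Boolean)); // [ 'a', 'b', 'c' ]
}}}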
!!!! remove whitespace between string in array
http://stackoverflow.com/questions/18159216/remove-white-space-between-the-string-using-javascript
!!!! convert string to object
http://stackoverflow.com/questions/3473639/best-way-to-convert-string-to-array-of-object-in-javascript
http://stackoverflow.com/questions/11048403/javascript-convert-object-string-to-string
http://stackoverflow.com/questions/29246331/convert-a-string-to-object-javascript-jquery
!!!! join arrays
http://davidwalsh.name/combining-js-arrays
!!!! indexOf
{{{
var x = 'hello';
var x = x.indexOf('hello');
console.log(x);
//"C:\Program Files (x86)\JetBrains\WebStorm 10.0.4\bin\runnerw.exe" "C:\Program Files (x86)\nodejs\node.exe" sandbox1.js
//0
var x = 'hello';
var x = x.indexOf('hellow');
console.log(x);
//"C:\Program Files (x86)\JetBrains\WebStorm 10.0.4\bin\runnerw.exe" "C:\Program Files (x86)\nodejs\node.exe" sandbox1.js
//-1
var x = 'hello';
var x = x.indexOf('e');
console.log(x);
"C:\Program Files (x86)\JetBrains\WebStorm 10.0.4\bin\runnerw.exe" "C:\Program Files (x86)\nodejs\node.exe" sandbox3.js
1
}}}
!!!! lastIndexOf
http://stackoverflow.com/questions/25356825/difference-between-string-indexof-and-string-lastindexof
{{{
var anyString = "Brave new world";
var a = anyString.indexOf("w"); // result = 8
var b = anyString.lastIndexOf("w"); // result 10
console.log(a);
console.log(b);
}}}
!!!! charAt
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/charAt
{{{
var anyString = 'Brave new world';
console.log("The character at index 0 is '" + anyString.charAt(0) + "'");
console.log("The character at index 1 is '" + anyString.charAt(1) + "'");
console.log("The character at index 2 is '" + anyString.charAt(2) + "'");
console.log("The character at index 3 is '" + anyString.charAt(3) + "'");
console.log("The character at index 4 is '" + anyString.charAt(4) + "'");
console.log("The character at index 999 is '" + anyString.charAt(999) + "'");
These lines display the following:
The character at index 0 is 'B'
The character at index 1 is 'r'
The character at index 2 is 'a'
The character at index 3 is 'v'
The character at index 4 is 'e'
The character at index 999 is ''
}}}
!!!! substring, search for substring
http://stackoverflow.com/questions/1789945/how-can-i-check-if-one-string-contains-another-substring
http://stackoverflow.com/questions/19445994/javascript-string-search-for-regex-starting-at-the-end-of-the-string
!!! String.prototype
on the official doc, it often says "''String.prototype''.substr()"
what that means is the method lives on the String prototype, so any string value (like the var below) can call it directly
{{{
var thisString = 'karlarao';
console.log(thisString.split(''));
VM5956:2 ["k", "a", "r", "l", "a", "r", "a", "o"]
console.log(thisString.substr(0,2));
VM5960:2 ka
}}}
!!! check the data type - typeof
use this to check the data type
{{{
var thisNumber = 1;
undefined
console.log(typeof thisNumber);
VM5700:2 number
}}}
!!! string methods
{{{
var phrase = "this is a simple phrase";
alert( phrase.length );
alert( phrase.toUpperCase() );
}}}
!!! string comparison
{{{
var str1 = "hello";
var str2 = "Hello";
//case insensitive comparison of two strings
if ( str1.toLowerCase() == str2.toLowerCase() ) {
alert("they are equal");
}
}}}
!!! string - indexOf , lastIndexOf
{{{
var phrase = "We want a groovy keyword";
var position = phrase.indexOf("groovy");
alert(position);
if ( phrase.indexOf("x") == -1 ) {
alert("this is not in the phrase");
}
}}}
!!! string - slice
{{{
var phrase = "We want a groovy keyword";
var newPhrase = phrase.slice(4,8);
alert(newPhrase);
}}}
!!! string reverse
!!!! reverse a word
this works for both primitive and object
{{{
var sortThis2 = 'cola';
undefined
console.log(sortThis2.split('').reverse());
VM5250:2 ["a", "l", "o", "c"]
undefined
}}}
http://eddmann.com/posts/ten-ways-to-reverse-a-string-in-javascript/
!!! string sort
!!!! simple number sort
{{{
var numbers = [4, 2, 5, 1, 3];
numbers.sort(function(a, b) {
//return a - b; //asc
return b - a; //desc
});
console.log(numbers);
}}}
!!!! first name and last name sort
{{{
var items = [
{ FirstName: 'Ashley', LastName: 'Yards' },
{ FirstName: 'Martin', LastName: 'Stove' },
{ FirstName: 'Robert', LastName: 'Jones' },
{ FirstName: 'Erika', LastName: 'Johnson' },
{ FirstName: 'Melissa', LastName: 'Banks' }
];
items.sort(function (a, b) {
if (a.LastName > b.LastName) {
return 1;
}
if (a.LastName < b.LastName) {
return -1;
}
// a must be equal to b
return 0;
});
}}}
!!! string return primitive value
{{{
var x = new String('Hello world');
console.log(x.valueOf()); // Displays 'Hello world'
}}}
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/valueOf
!!! string, array, object behavior on new line
{{{
//string on new line
// string is string
var s = 'karl \n arao';
undefined
console.log(s);
karl
arao
undefined
console.log(typeof s);
string
//array on new line
// array is an object
var s = ['Please yes\nmake my day!'];
undefined
console.log(typeof s);
object
undefined
console.log(s);
["Please yes↵make my day!"]
undefined
console.log(s[0]);
Please yes
make my day!
// object on new line
// object is and object, but the pair is string
var s = { 'a':'b \n c'};
undefined
console.log(s.a);
b
c
undefined
console.log(typeof s.a);
string
undefined
console.log(typeof s);
object
}}}
!! array - multiple values wrapped into one var
array - element - index[0]
mutable - contents can change
immutable - a fixed list, like sun mon tues wed
associative (aka dictionary, map, table) - alabama AL, new york NY, etc.
{{{
var multipleValues = [];
multipleValues[0] = 50;
multipleValues[1] = 60;
multipleValues[2] = "hello";
alert(multipleValues[0]);
//another way of doing this
var multipleValues2 = [50,60,"hello"];
alert(multipleValues2[0]);
alert(multipleValues2.length);
alert(multipleValues2.reverse());
alert(multipleValues2.sort());
alert(multipleValues2.join());
alert(multipleValues2.pop());
alert(multipleValues2.push(123));
//methods are functions that belong to an object
}}}
!!! methods
!!!! apply
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/apply
http://stackoverflow.com/questions/1316371/converting-a-javascript-array-to-a-function-arguments-list
http://stackoverflow.com/questions/2856059/passing-an-array-as-a-function-parameter-in-javascript
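a tiny sketch of apply spreading an array into arguments (add is a made-up function):
{{{
function add(a, b, c) {
return a + b + c;
}
var args = [1, 2, 3];
// apply takes the `this` value first, then an array of arguments
console.log(add.apply(null, args)); // 6
}}}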
!!!! push and join
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/push
two dimensional array http://stackoverflow.com/questions/11345954/push-a-two-dimensional-array-with-javascript
{{{
var output2 = ['karl arao'];
undefined
output2.push('line2');
2
console.log(output2.join('\n'));
VM7856:2 karl arao
line2
undefined
console.log(output2);
VM7934:2 ["karl arao", "line2"]
}}}
another push and join example
{{{
// push
punctuationSeparator.push("(?:");
// join without showing the comma
var finalSeparator = punctuationSeparator.join('').replace(adjustPunctuation1a, ':\\?')
var finalParse = parseText.join('\n').replace(adjustPunctuation4, '')
}}}
!!!! sort
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort
http://stackoverflow.com/questions/6712034/sort-array-by-firstname-alphabetically-in-javascript
http://stackoverflow.com/questions/1129216/sort-array-of-objects-by-string-property-value-in-javascript
JavaScript Tutorial : Array(Sorting) https://www.youtube.com/watch?v=fvh53i6lc8k
http://stackoverflow.com/questions/11499268/sort-two-arrays-the-same-way
http://stackoverflow.com/questions/979256/sorting-an-array-of-javascript-objects/979289#979289
http://stackoverflow.com/questions/24173245/javascript-array-sort-by-last-name-first-name
!!!! array.prototype constructor
{{{
what is array.prototype in javascript
Array.prototype.sortNormal = function <- http://stackoverflow.com/questions/1063007/how-to-sort-an-array-of-integers-correctly
JavaScript Array prototype Constructor http://www.w3schools.com/jsref/jsref_prototype_array.asp
http://stackoverflow.com/questions/8859828/javascript-what-dangers-are-in-extending-array-prototype
}}}
!!! iterate through the element index
{{{
var i = 0;
var myArray = ["a","b","c"];
//iterates through the element index
while ( i < myArray.length ) {
alert("the value is " + myArray[i] );
i++;
}
}}}
!!! get sum of numbers from array
{{{
//values to add
var myArray = [500,500,500,500,500];
//the total
var total = 0;
for ( var i = 0 ; i < myArray.length ; i++ ) {
// add the current element to the total
total = total + myArray[i];
}
// after we're done with the loop
alert("The total is: " + total);
}}}
!! objects
if we create an array or a date, that's an object
also check this for object vs json comparison https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Object_initializer#Object_literal_notation_vs_JSON
{{{
// object is a container that gathers together some data and behavior
// this is how to create an object container
// the format is ... object.properties
var player = new Object();
player.name = "Fred";
player.score = 10000;
player.rank = 1;
var whoAreYou = player.name;
console.log(whoAreYou);
//shorthand to creating an object
var player1 = { name: "Fred", score: 10000, rank: 1 };
var player2 = { name: "Sam", score: 5000, rank: 2 };
console.log(player1.name);
console.log(player2.name);
}}}
!!! create an object.method generic function
you can use - this, call, apply, bind
{{{
//create a generic function that's part of two objects using THIS
// essentially, making a METHOD of the two objects
function myPlayerDetails () {
//display info about each player
console.log(this.name + " " + this.rank + " " + this.score ); //THIS to refer to the current object
}
player1.logDetails = myPlayerDetails; //logDetails is a METHOD
player2.logDetails = myPlayerDetails;
//call it
player1.logDetails(); //object.method format
player2.logDetails();
}}}
!!! function on objects
{{{
var obj = {
a : "foo",
b(){ return this.a; }
};
console.log(obj.b()); // "foo"
}}}
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Object_initializer
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Method_definitions
!! numbers
!!! NaN (not a number)
{{{
var a = "5";
var myNumber = Number(a); //make it a number
if (!isNaN( myNumber )) {
alert("this is a number");
}
}}}
http://www.w3schools.com/jsref/jsref_isnan.asp
!! dates
{{{
var today = new Date(); //current date and time
console.log(today);
// Date {Fri Jun 26 2015 23:45:17 GMT-0500 (Central Standard Time)}
var y2k = new Date(2000,0,1); //year, month, day
console.log(y2k);
// Date {Sat Jan 01 2000 00:00:00 GMT-0600 (Central Daylight Time)}
var y2k2 = new Date(2000,0,1,0,0,0); //year, month, day, hours, minutes, seconds
console.log(y2k2);
// Date {Sat Jan 01 2000 00:00:00 GMT-0600 (Central Daylight Time)}
var myDate = new Date(1906,11,9);
console.log("she was born on: " + myDate.getDay()); // 0 is sunday
// get
today.getMonth();
today.getFullYear();
today.getDate();
today.getDay();
today.getHours();
console.log(today.getTime());
// set
console.log(today.setMonth(5));
console.log(today.setFullYear(2011));
var date1 = new Date(2000,0,1);
var date2 = new Date(2000,0,1);
//not same
if (date1 == date2) {
console.log('we are the same!');
} else {
console.log('we are the NOT same!');
}
//same with getTime()
if (date1.getTime() == date2.getTime()) {
console.log('we are the same!');
} else {
console.log('we are the NOT same!');
}
}}}
!! regular expressions
unicode block range http://kourge.net/projects/regexp-unicode-block
!!! a simple test
{{{
var myRE = /hello/; //build your expression
//or
//var myRE = new RegExp("hello");
//apply expression
var myString = "hey I'm here hello";
if ( myRE.test(myString) ) {
alert("yes");
}
}}}
!!! split using semicolon
{{{
//test the regex
var xString = 'Karl arao; karl arao ; karl';
var xRegexPattern = /\s*;\s*/;
var xMatch = xRegexPattern.test(xString);
console.log(xMatch);
VM8530:5 true
undefined
//split the string using the regex
var xString = 'Karl arao; karl arao ; karl';
var xRegexPattern = /\s*;\s*/;
var xMatch = xString.split(xRegexPattern);
console.log(xMatch.join('\n'));
VM8531:5 Karl arao
karl arao
karl
}}}
!!! first name and last name sort with regex (uses split, replace, sort, push)
{{{
// The name string contains multiple spaces and tabs,
// and may have multiple spaces between first and last names.
var names = "Harry Trump ;Fred Barney; Helen Rigby ; Bill Abel ; Chris Hand ";
var output = ["---------- Original String\n", names + "\n"];
// Prepare two regular expression patterns and array storage.
// Split the string into array elements.
// pattern: possible white space then semicolon then possible white space
var pattern = /\s*;\s*/;
// Break the string into pieces separated by the pattern above and
// store the pieces in an array called nameList
var nameList = names.split(pattern);
// new pattern: one or more characters then spaces then characters.
// Use parentheses to "memorize" portions of the pattern.
// The memorized portions are referred to later.
pattern = /(\w+)\s+(\w+)/;
// New array for holding names being processed.
var bySurnameList = [];
// Display the name array and populate the new array
// with comma-separated names, last first.
//
// The replace method removes anything matching the pattern
// and replaces it with the memorized string—second memorized portion
// followed by comma space followed by first memorized portion.
//
// The variables $1 and $2 refer to the portions
// memorized while matching the pattern.
output.push("---------- After Split by Regular Expression");
var i, len;
for (i = 0, len = nameList.length; i < len; i++){
output.push(nameList[i]);
bySurnameList[i] = nameList[i].replace(pattern, "$2, $1");
}
// Display the new array.
output.push("---------- Names Reversed");
for (i = 0, len = bySurnameList.length; i < len; i++){
output.push(bySurnameList[i]);
}
// Sort by last name, then display the sorted array.
bySurnameList.sort();
output.push("---------- Sorted");
for (i = 0, len = bySurnameList.length; i < len; i++){
output.push(bySurnameList[i]);
}
output.push("---------- End");
console.log(output.join("\n"));
}}}
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions
the output
{{{
---------- Original String
Harry Trump ;Fred Barney; Helen Rigby ; Bill Abel ; Chris Hand
---------- After Split by Regular Expression
Harry Trump
Fred Barney
Helen Rigby
Bill Abel
Chris Hand
---------- Names Reversed
Trump, Harry
Barney, Fred
Rigby, Helen
Abel, Bill
Hand, Chris
---------- Sorted
Abel, Bill
Barney, Fred
Hand, Chris
Rigby, Helen
Trump, Harry
---------- End
}}}
! DOM - document object model
it is the way to reach into the page from our script and the way our page can reach into our script
model is the tree structure and the relationships between them
!! DOM nodes and elements
12 node types in the DOM, but we are really interested in three of them
Node.ELEMENT_NODE -> ELEMENT (p, li, h1)
Node.ATTRIBUTE_NODE -> ATTRIBUTE (id="some text")
Node.TEXT_NODE -> TEXT
!! accessing DOM elements
!!! document.getElementById
uses the word ''document''
{{{
<html>
<head>
<title>Simple Page</title>
</head>
<body>
<h1 id="mainHeading">Interesting Headline</h1>
<p>This is a very simple HTML page</p>
<script src="script.js"></script>
</body>
</html>
}}}
{{{
var headline = document.getElementById("mainHeading");
headline.innerHTML = "wow a new headline";
}}}
!!! getElementsByTagName
retrieve elements by tag
{{{
var myListItems = document.getElementsByTagName("li");
}}}
!!! restricting elements to retrieve
{{{
var mainText = document.getElementById("mainTitle");
console.log(mainText.nodeType);
console.log(mainText.innerHTML);
console.log(mainText.childNodes.length);
var myLinks = document.getElementsByTagName("a"); // get links by tag name
console.log(myLinks.length);
//restricting elements to retrieve
var myFirstList = document.getElementById("abc"); //target the ID abc
var limitedList = myFirstList.getElementsByTagName("li"); //target only the li nodes inside abc
}}}
!!! other examples
{{{
// grab single element
var myTitleLink = document.getElementById("mainTitle");
// information about the node
console.log("This is a node of type: ", myTitleLink.nodeType);
console.log("Inner HTML: ", myTitleLink.innerHTML);
console.log("Child nodes: ", myTitleLink.childNodes.length);
// how many links?
var myLinks = document.getElementsByTagName("a");
console.log("Links: ", myLinks.length);
// here are some extra examples of combining these:
// First, grab a div called "homeNav"
var navItems = document.getElementById("homeNav");
// get information about that node
console.log("This is a node of type: ", navItems.nodeType);
console.log("Inner HTML: ", navItems.innerHTML);
console.log("Child nodes: ", navItems.childNodes.length);
// how many ordered lists?
var orderedLists = document.getElementsByTagName("ol");
console.log("Ordered lists: ", orderedLists.length);
// narrowing down the links further - use another element, not the whole document.
var myLinks = navItems.getElementsByTagName("a");
console.log("Links in navItems: ", myLinks.length);
// or even combined
var x = document.getElementById("mainNav").getElementsByTagName("a");
console.log("Links in mainNav: ", x.length);
}}}
!! changing DOM elements
!!! setAttribute
{{{
var mainContent = document.getElementById("mainContent");
mainContent.setAttribute("align","right");
}}}
!!! innerHTML
{{{
//mainTitle = document.getElementById("mainTitle");
//console.log(mainTitle.innerHTML);
//var sidebar = document.getElementById("sidebar");
//console.log(sidebar.innerHTML);
var arrayOfH1s = mainContent.getElementsByTagName("h1");
arrayOfH1s[0].innerHTML = "This is a new title";
}}}
!! creating DOM elements
!!! version1 - createElement and appendChild
{{{
//create the elements
//var newHeading = document.createElement("h1");
//var newParagraph = document.createElement("p");
// to add content, either use innerHTML
//newHeading.innerHTML = "Did You Know?";
//newParagraph.innerHTML = "California produces over 17 million gallons of wine each year!";
// and we still need to attach them to the document!
//document.getElementById("trivia").appendChild(newHeading);
//document.getElementById("trivia").appendChild(newParagraph);
}}}
!!! version2 - createElement and createTextNode and appendChild
{{{
//create the elements
//var newHeading = document.createElement("h1");
//var newParagraph = document.createElement("p");
// OR create child nodes manually
//var h1Text = document.createTextNode("Did You Know?");
//var paraText = document.createTextNode("California produces over 17 million gallons of wine each year!");
// and add them as child nodes to the new elements
//newHeading.appendChild(h1Text);
//newParagraph.appendChild(paraText);
// and we still need to attach them to the document!
//document.getElementById("trivia").appendChild(newHeading);
//document.getElementById("trivia").appendChild(newParagraph);
}}}
!!! version3 - alternative to appendChild.. insertBefore
let's say you append a new li - it will always be added to the end. if you don't want that, use insertBefore
{{{
var myNewElement = document.createElement("li");
var secondItem = myElement.getElementsByTagName("li")[1];
myElement.insertBefore(myNewElement, secondItem);
}}}
! Event and Event Listeners
what's the event and what do you want to do?
uses the format ''element.event'', example.. ''window.onload'', ''nameField.onblur''
onload
onclick
onmouseover
onblur
onfocus
!! onclick change the text
{{{
var headline = document.getElementById("mainHeading");
headline.onclick = function() {
headline.innerHTML = "Wow! A new headline!";
};
}}}
!! event handling - method 1
!! event handling - method 2
!! event handling - method 3
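the three method headings above were left blank; here's a minimal sketch of what I take them to be - the three usual ways to wire up the same click handler (the "myButton" id is made up):
{{{
// method 1: inline HTML attribute, e.g. <button id="myButton" onclick="doSomething()">
// method 2: assign a function to the element's event property (one handler per event)
var btn = document.getElementById("myButton");
btn.onclick = function() {
  console.log("clicked (method 2)");
};
// method 3: addEventListener - allows multiple handlers per event
btn.addEventListener("click", function() {
  console.log("clicked (method 3)");
}, false);
}}}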
! File IO
read, write, streams
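a minimal sketch, assuming Node.js (browser JavaScript has no direct file access); the fs module covers all three, and the file names here are made up:
{{{
var fs = require("fs");
// read a whole file asynchronously
fs.readFile("input.txt", "utf8", function(err, data) {
  if (err) throw err;
  console.log(data);
});
// write (replaces the file contents)
fs.writeFile("output.txt", "hello", function(err) {
  if (err) throw err;
});
// streams - process chunk by chunk instead of loading everything into memory
fs.createReadStream("input.txt").pipe(fs.createWriteStream("copy.txt"));
}}}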
! Debugging
syntax errors, logic errors
text editor color coding
wrong math
reproduce the problem
firebug
! Object Orientation
class - the blueprint; a representation of a group's attributes/properties (data/variables) and behavior/methods (functions)
object - is the thing itself, can create multiple objects based on one class
encapsulation - box them up (both properties and methods)
{{{
objects - they are smart, they have methods and properties
array
variable
Date object   today.getMonth(); today.getFullYear(); today.setMonth(5); today.setFullYear(2012);
Math object   Math.round(x); Math.max(a,b,c); Math.min(a,b,c);
RegExp
//new means creating new Object (myRegExp) based on Class (RegExp)
var today = new Date();
var myArray = new Array(1,2,3);
var myRegExp = new RegExp("hello");
primitives
number
boolean
}}}
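a minimal sketch of the blueprint/object/encapsulation idea in pre-ES6 JavaScript, using a made-up Player class:
{{{
// the class (blueprint): a constructor function
function Player(name, score) {
  // properties (data) and a method (behavior) boxed up together
  this.name = name;
  this.score = score;
  this.logInfo = function() {
    console.log(this.name + ": " + this.score);
  };
}
// the objects - the things themselves, multiple from one blueprint
var p1 = new Player("fred", 10);
var p2 = new Player("wilma", 12);
p1.logInfo(); // fred: 10
}}}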
! Advanced topics
!! memory management
memory leak
dangling object
manual memory management, reference counting (frees an object when its reference counter drops to zero), garbage collection (reclaims in batches)
!! algorithm
!! multithreading
web workers feature of html5
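a minimal sketch of the HTML5 Web Workers API; worker.js is a made-up file name:
{{{
// main page: spawn a background thread so heavy work doesn't block the UI
var worker = new Worker("worker.js");
worker.onmessage = function(e) {
  console.log("result from worker: " + e.data);
};
worker.postMessage(41);

// worker.js would contain something like:
// onmessage = function(e) { postMessage(e.data + 1); };
}}}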
! Libraries
! HTML5
https://stackoverflow.com/questions/7087331/what-is-the-meaning-of-polyfills-in-html5
! compilers
Closure compiler https://developers.google.com/closure/compiler/?csw=1
! CDN
http://www.sitepoint.com/7-reasons-to-use-a-cdn/
http://www.cdnreviews.com/cdn-advantages-disadvantages/
http://encosia.com/3-reasons-why-you-should-let-google-host-jquery-for-you/
! other tools
!! sass
http://www.lynda.com/Sass-tutorials/What-you-should-know-before-watching-course/375925/435555-4.html
!! node.js
http://ryanhlewis.com/2015/08/10/installing-node/
!! handlebars.js - The Handlebars Dance
Get Template
Compile Template
Render with Data
Insert Rendered HTML (template into the DOM)
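a minimal sketch of those four steps (the element ids are made up, and Handlebars is assumed to be loaded on the page):
{{{
// 1. get template - usually from a script tag with a non-JS type
var source = document.getElementById("entry-template").innerHTML;
// 2. compile template into a reusable function
var template = Handlebars.compile(source);
// 3. render with data - returns an HTML string
var html = template({ title: "Hello", body: "world" });
// 4. insert rendered HTML (template into the DOM)
document.getElementById("content").innerHTML = html;
}}}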
!! bower - A package manager for the web
http://bower.io/
!! gulp - to automate and build handlebars templates
you'll have a gulp watcher task that monitors for file changes, then runs another task in response to each change (see the sketch after the links below)
http://gulpjs.com/
http://www.sitepoint.com/improving-ember-js-workflow-using-gulp-js/
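a minimal gulp 3-style sketch of that watcher pattern (task names and paths are made up; actually compiling the templates would need a plugin such as gulp-handlebars):
{{{
var gulp = require("gulp");

// the task that runs in response to a change
gulp.task("templates", function() {
  return gulp.src("src/templates/**/*.hbs")
    .pipe(gulp.dest("dist/templates"));
});

// the watcher: on any file change, rerun the task above
gulp.task("watch", function() {
  gulp.watch("src/templates/**/*.hbs", ["templates"]);
});
}}}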
!! jquery
http://youmightnotneedjquery.com/
! References
!! learn x in minutes
http://learnxinyminutes.com/docs/javascript/
https://developer.mozilla.org/en-US/docs/Web/JavaScript/A_re-introduction_to_JavaScript
!! documentation
''THIS IS THE ULTIMATE DOC'' https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference
https://developer.mozilla.org/en-US/docs/Web/JavaScript
<<<
array https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array
string https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String
object https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object
for https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for
if https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/if...else
while https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/while
var https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/var
regexp https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp
<<<
!! coding style
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style
http://javascript.crockford.com/code.html
!! developer networks
https://developer.mozilla.org/en-US/
https://dev.opera.com/
https://jquery.com/
https://developer.yahoo.com/performance/
https://developer.yahoo.com/
https://developer.chrome.com/devtools
!! gui algorithm tools
https://medium.com/@the_taqquikarim/introducing-bonfire-2c0e437895e2#.hrbzm9zbw
!! promises
Learn JavaScript promises in about 70 minutes https://qntm.org/files/promise/promise.html
!! frontend cheat sheets
https://github.com/logeshpaul/Frontend-Cheat-Sheets/blob/master/README.md
http://blog.sellfy.com/cheat-sheets-for-web-designers/
!!! JS cheat sheet
http://www.cheatography.com/davechild/cheat-sheets/javascript/
https://github.com/detailyang/cheat-sheets
http://www.idiotinside.com/2014/09/29/60-cheatsheet-collection-for-full-stack-developers/
!!! backbone.js cheat sheet
http://www.igloolab.com/downloads/backbone-cheatsheet.pdf
! end
<<showtoc>>
! rest tools
!! rested extension
https://chrome.google.com/webstore/detail/rested/eelcnbccaccipfolokglfhhmapdchbfg?hl=en
!! swagger editor
https://editor.swagger.io
!!! swagger vs postman
https://www.google.com/search?q=swagger+vs+postman&oq=swagger+vs&aqs=chrome.1.69i57j0l5.10102j0j4&sourceid=chrome&ie=UTF-8
https://medium.com/@francoiskruger/swagger-vs-postman-there-is-an-obvious-winner-92c33692e1aa
https://www.quora.com/What-is-the-difference-between-postman-and-swagger-tool
! Client-Server Web Apps with JavaScript and Java - REST and JSON
https://www.safaribooksonline.com/library/view/client-server-web-apps/9781449369323/#toc
! Installing Oracle REST Data Services
https://docs.oracle.com/cd/E56351_01/doc.30/e56293/install.htm#AELIG7015
! node-oracledb driver
Sources for a SIG session on Node.js at AMIS https://github.com/lucasjellema/sig-nodejs-amis-2016
hello world example https://github.com/gvenzl/Oracle-NodeJsDriver/tree/master/examples/connect , https://github.com/gvenzl/Oracle-CommittingData
! node-oracledb - building a json database api
http://www.liberidu.com/blog/2015/05/01/howto-building-a-json-database-api/
http://www.liberidu.com/blog/2015/05/01/howto-building-a-json-database-api-2/
http://www.liberidu.com/blog/2015/05/01/howto-building-a-json-database-api-3/
! Comparing Node js Frameworks Express hapi LoopBack Sailsjs and Meteor (from Dan Mcghan)
https://www.youtube.com/watch?v=N7VXGHDheiQ
! GET, auth(), function
Authentication with Node.js, JWTs, and Oracle Database https://jsao.io/2015/06/authentication-with-node-js-jwts-and-oracle-database/
! 12c json datatype
Oracle Database 12.1 JSON Datatype https://github.com/oracle/node-oracledb/blob/master/doc/api.md#jsondatatype <- GOOD STUFF
! 12c JSON blog series
https://blogs.oracle.com/developer/entry/series_json_within_the_oracle1 <- GOOD STUFF
https://blogs.oracle.com/developer/entry/series_json_within_the_oracle , https://github.com/gvenzl/Oracle-JSONInTheDatabase
! RoR REST API
https://emberigniter.com/modern-bridge-ember-and-rails-5-with-json-api/
see also [[coderepo RoR, rails]]
! Oracle ORDS nginx
https://www.google.com/search?q=Oracle+ORDS+nginx&oq=Oracle+ORDS+nginx&aqs=chrome..69i57j69i64l3.5051j0j1&sourceid=chrome&ie=UTF-8
https://www.oracle-and-apex.com/the-oracle-apex-reverse-proxy-guide-using-nginx/
ORDS base url https://community.oracle.com/thread/4159633
http://www.oracle-and-apex.com/the-oracle-apex-reverse-proxy-guide-using-nginx/
https://www.oracle-and-apex.com/tag/nginx/
https://github.com/araczkowski/docker-oracle-apex-ords/issues/6
https://orclcs.blogspot.com/2017/11/ssl-reverse-proxy-using-nginx.html
Oracle APEX and REST Without the Pain https://fuzziebrain.com/content/id/1706/
<<showtoc>>
! setup
https://app.pluralsight.com/library/courses/building-linux-server-for-ruby-on-rails/table-of-contents
!! intelliJ vs rubymine
https://confluence.jetbrains.com/display/RUBYDEV/RubyMine+and+IntelliJ+IDEA+Ruby+Plugin
https://stackoverflow.com/questions/6984288/what-are-the-big-differences-between-intellij-ruby-plugin-vs-rubymine
https://www.reddit.com/r/ruby/comments/2vb7du/is_rubymine_worth_it/
! ember and RoR
http://fromrailstoember.com/
https://emberigniter.com/modern-bridge-ember-and-rails-5-with-json-api/
https://emberigniter.com/building-user-interface-around-ember-data-app/
http://railscasts.com/episodes/408-ember-part-1?view=comments
http://railscasts.com/episodes/410-ember-part-2?view=comments
! oracle tutorials on RoR
index http://www.oracle.com/technetwork/articles/dsl/index.html
Ruby on Rails on Oracle: A Simple Tutorial http://www.oracle.com/technetwork/articles/haefel-oracle-ruby-089811.html
HR Schema on Rails http://www.oracle.com/technetwork/articles/saternos-rails-085771.html
How to configure Ruby on Rails with Oracle? http://stackoverflow.com/questions/764887/how-to-configure-ruby-on-rails-with-oracle
Using Oracle and Ruby on Rails http://www.oracle.com/technetwork/testcontent/rubyrails-085107.html
Oracle+Ruby on Rails: Yummy https://blogs.oracle.com/otn/entry/oracleruby_on_rails_yummy
Ruby on Rails with Oracle FAQ http://www.oracle.com/technetwork/articles/saternos-ror-faq-097353.html
Sample Rails application using legacy Oracle database schema https://github.com/rsim/legacy_oracle_sample
Using Oracle and Ruby on Rails http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/oow10/rubyhol/instructions/rubyrails.htm
Tips for Optimizing Rails on Oracle http://www.oracle.com/technetwork/articles/dsl/mearelli-optimizing-oracle-rails-092774.html
! ruby and pl/sql
https://github.com/rsim/oracle-enhanced <- Oracle enhaced adapter for ActiveRecord
http://www.oracle.com/technetwork/testcontent/rubyrails-085107.html#tcollections <- GOOD STUFF
https://github.com/rsim/ruby-plsql
https://www.cheatography.com/jgebal/cheat-sheets/utplsql-vs-ruby-plsql-feature-comparison/
https://www.netguru.co/blog/back-to-thinking-sql-ruby-rails <- GOOD STUFF
http://stevenfeuersteinonplsql.blogspot.com/2015/03/recommendations-for-unit-testing-plsql.html
https://www.owasp.org/index.php/Query_Parameterization_Cheat_Sheet <- GOOD STUFF
Which programming language “pays” better? https://www.sitepoint.com/community/t/which-programming-language-pays-better/96316/20
! ActiveRecord
http://blog.rayapps.com/
!! ActiveRecord bind variables
http://stackoverflow.com/questions/2105651/activerecord-and-oracle-bind-variables
!! ActiveRecord errors
https://rails.lighthouseapp.com/projects/8994/tickets/5902?utm_source=twitterfeed&utm_medium=twitter
! RoR REST JSON API
https://emberigniter.com/modern-bridge-ember-and-rails-5-with-json-api/
http://9elements.com/io/index.php/an-ember-js-application-with-a-rails-api-backend/
http://blog.arkency.com/2016/02/how-and-why-should-you-use-json-api-in-your-rails-api/
! ruby CRUD series
http://learncodeshare.net/2016/10/04/insert-crud-using-ruby-oci8/
! RoR webserver
https://www.digitalocean.com/community/tutorials/a-comparison-of-rack-web-servers-for-ruby-web-applications
http://stackoverflow.com/questions/579515/what-is-the-best-web-server-for-ruby-on-rails-application
https://www.quora.com/What-web-server-are-you-using-for-Ruby-on-Rails-and-why
https://www.quora.com/Does-your-Node-js-code-need-a-web-server-to-run
! ruby and R
https://quickleft.com/blog/running-r-script-ruby-rails-app/ <- good stuff
http://www.slideshare.net/sausheong/rubyand-r <- good stuff
https://www.safaribooksonline.com/library/view/exploring-everyday-things/9781449342203/ <- good stuff
https://github.com/thesteady/test-r
Exploring Everyday Things with R and Ruby http://shop.oreilly.com/product/0636920022626.do
http://stackoverflow.com/questions/10086605/best-way-to-use-r-in-ruby
https://sites.google.com/a/ddahl.org/rinruby-users/
https://www.r-bloggers.com/using-r-in-ruby/
https://airbrake.io/blog/software-development/exploring-everything
! rails vs django
https://bernardopires.com/2014/03/rails-vs-django-an-in-depth-technical-comparison/
https://www.quora.com/Which-one-should-I-start-first-Ruby-on-Rails-or-Django
http://www.skilledup.com/articles/battle-frameworks-django-vs-rails
http://danielhnyk.cz/django-vs-rails-for-beginner/
https://blog.jaredfriedman.com/2015/09/15/why-i-wouldnt-use-rails-for-a-new-company/
https://infinum.co/the-capsized-eight/articles/i-moved-from-django-to-rails-and-nothing-terrible-happened
! tutorials
http://railscasts.com/
ruby on rails tutorial (referred by Jeffrey) https://softcover.s3-us-west-2.amazonaws.com/636/ruby_on_rails_tutorial_4th_edition/ebooks/ruby_on_rails_tutorial-preview.pdf?X-Amz-Expires=14400&X-Amz-Date=20161101T061951Z&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJMNNDDBSYVXVHGAA/20161101/us-west-2/s3/aws4_request&X-Amz-SignedHeaders=host&X-Amz-Signature=b734f8dca8e9c8c166d23f440de3d288c4b5b3e2429b260e17103cf18a1ca31b
! video
https://www.emberscreencasts.com
http://railscasts.com/about , https://github.com/ryanb/railscasts-episodes
https://code.tutsplus.com/courses/create-a-full-stack-rails-and-ember-app
! books
Pro Active Record - Databases with Ruby on Rails http://bit.ly/2fNNqoJ , http://data.ceh.vn/Ebook/ebooks.shahed.biz/RUBY/Pro_Active_Record_-_Databases_with_Ruby_and_Rails.pdf
Addison-Wesley Professional Ruby Series Writing Efficient Ruby Code https://www.safaribooksonline.com/library/view/addison-wesley-professional-ruby/9780321540034/
Effective Ruby: 48 Specific Ways to Write Better Ruby https://www.safaribooksonline.com/library/view/effective-ruby-48/9780133847086/
The Ruby Way: Solutions and Techniques in Ruby Programming, Second Edition https://www.safaribooksonline.com/library/view/the-ruby-way/0768667208/
Easy active record for rails developers https://www.safaribooksonline.com/library/view/easy-active-record/90261WJG00001/chap03.xhtml
also check [[Data Model, Design]]
also check [[SQL developer data modeler]]
<<showtoc>>
Building a Data Mart with Pentaho Data Integration https://learning.oreilly.com/videos/building-a-data/9781782168638/9781782168638-video1_2?autoplay=false
http://diethardsteiner.blogspot.com/2014/01/building-data-mart-with-pentaho-data.html
http://diethardsteiner.blogspot.com/2011/11/star-schema-modeling-with-pentaho-data.html
Knight's Microsoft Business Intelligence 24-Hour Trainer: Leveraging Microsoft SQL Server® Integration, Analysis, and Reporting
https://learning.oreilly.com/videos/knights-microsoft-r-business/01420110004SI/01420110004SI-ch02
SQL Developer Data Modeler Just what you need
https://www.youtube.com/watch?time_continue=3707&v=NfrUy-TYP_8
! tableau data model
Enhancing Tableau Data Queries https://www.youtube.com/watch?v=STfTQ55QE9s
Tableau Server How to Connect to the System PostGreSQL Tableau Database https://www.youtube.com/watch?v=hU3zgwypIXA
Connect to a Custom SQL Query https://onlinehelp.tableau.com/current/pro/desktop/en-us/customsql.htm
Tableau Desktop Connect to MySQL Server Database https://www.youtube.com/watch?v=Bl58YtVoCDk
data model time dimension tableau https://www.google.com/search?q=data+model+time+dimension+tableau&oq=data+model+time+dimension+tableau&aqs=chrome..69i57j69i64.5005j1j1&sourceid=chrome&ie=UTF-8
Loading date dimension into Tableau https://community.tableau.com/thread/136227
Tableau Advance Data Modeling Part I | Tableau Connection | Dimension Modeling in Tableau | BISP Tab https://www.youtube.com/watch?v=8FZUi_g3z6A
Tableau Data Modeling "Resolving Many to Many Relationship" | Tableau Database connection https://www.youtube.com/watch?v=voW2gXtyZVQ
https://www.youtube.com/user/MrAlooa2/search?query=data+model
Data and Dimension Modeling Using ERWin Intro CLASS https://www.youtube.com/watch?v=lNpkj0sT1ZE&list=PL7CWBDRZZ_Qed12CwFzVDlnPmQdjhArI2
Tableau Set Using HR DataModel https://www.youtube.com/watch?v=VcCWxZAUo2E
Tableau playlist https://www.youtube.com/playlist?list=PL7CWBDRZZ_QdwowDfUJ1pD6t8ImWOjOmI
Dimensional Modeling | Tableau Tutorial | Online Tableau Training | Intellipaat https://www.youtube.com/watch?v=-Sl6QpTiILQ
https://www.youtube.com/user/intellipaaat/search?query=data+model
Normalisation in Tableau | Tableau Tutorial | Online Tableau Training | Inatellipaat https://www.youtube.com/watch?v=a9f88a_LLUo
Dimensional Modeling in Tableau | Tableau Certification Training | Tableau Online | Intellipaat https://www.youtube.com/watch?v=hWrfxDovbB0
Data Modeling Training | Data Modeling Tutorial | Online Data Modeling Training https://www.youtube.com/watch?v=k_frBZj7f-I
https://www.youtube.com/results?search_query=tableau+data+modeling
Tableau - Managing Metadata https://www.youtube.com/watch?v=4stVd3IVLNE
Conceptual, Logical & Physical Data Models https://www.youtube.com/watch?v=RJ9TpkWKyU0
https://amitsharmair.wordpress.com/latest-training-schedule/
! pentaho
http://everything-oracle.com/pthvobiee.htm
! obiee
OBIEE Bootcamp Training Program https://www.youtube.com/watch?v=z9LkHiLnRks
<<showtoc>>
! video
https://godjango.com/
! RAILS EMBERJS REST architecture
<<<
I want to build the server side on rails and have it talk to ember.
On emberscreencasts you've got a ton of good stuff and I like the "search" functionality, because I can just search CRUD and it shows all the videos on CRUD.
I'd like to be able to create a server-side REST JSON API based on rails, with ember as my client web framework. On my part, coming from the emberschool.com modules, what would be the order of videos to watch (only the relevant ones) to be able to create the rails server side?
And from what I understand, there are two ways of accessing the server-side stack:
# plain ORM (rails) + Ember
# REST JSON API (rails) + Ember
The end to end tech stack for each would be the following:
# plain ORM (rails) + Ember (from your previous email)
Postgresql <-> ActiveRecord (in a Rails Controller) <-> ActiveModel::Serializers <-> Ember Data (with active-model-adapter) <-> Ember
# REST JSON API (rails) + Ember
Postgresql <-> REST JSON API (rails) <-> built-in JSONAPIAdapter (it's the default) <-> Ember Data <-> Ember
<<<
! flask vs django
flask - lightweight
django - full blown
! django and REST
https://github.com/LondonAppDeveloper/recipe-app-api <-- REST
https://github.com/lookininward/my_library <-- REST and EMBERJS
! django and R
http://www.dreisbach.us/blog/building-dashboards-with-django-and-d3/ <- GOOD STUFF
Using Django and R in a production environment https://groups.google.com/forum/#!topic/django-users/JjuMnIQdFIY
How do you use R in Python to return an R graph through Django http://stackoverflow.com/questions/32697469/how-do-you-use-r-in-python-to-return-an-r-graph-through-django
https://www.researchgate.net/publication/266114045_Django_and_R_Using_Open_Source_Database_and_Statistical_Tools_for_Efficient_Scientific_Data_Evaluation_and_Analysis
Writing your first Django app, part 1 https://docs.djangoproject.com/en/1.10/intro/tutorial01/
Python + Django + Rpy + R = Web App https://news.ycombinator.com/item?id=3562410
https://github.com/Sleepingwell/DjangoRpyDemo
https://github.com/jbkunst/django-r-recommendation-system
https://hashnode.com/post/using-python-django-and-r-to-create-a-machine-learning-based-predictor-for-the-2015-rugby-world-cup-ciibz8epg00rkj3xtco833w9m
https://www.reddit.com/r/Python/comments/3lsk41/using_python_django_and_r_to_create_a/
http://www.amara.org/en/videos/fAz1GNht8V95/info/data-classification-using-python-django-and-r/?tab=comments
https://www.quora.com/Is-Django-used-by-Data-Scientists
https://thinkster.io/django-angularjs-tutorial
https://www.madewithtea.com/simple-todo-api-with-django-and-oauth2.html
! flask
https://realpython.com/the-ultimate-flask-front-end/
https://serge-m.github.io/sample-ember-and-flask.html
https://blog.fossasia.org/badgeyay-integrating-emberjs-frontend-with-flask-backend/
https://github.com/gaganpreet/todo-flask-ember
https://www.google.com/search?ei=VWBPXLTkKIib_QaF4reoAQ&q=flask+emberjs&oq=flask+emberjs&gs_l=psy-ab.3..0i13j0i13i5i30j0i8i13i30.32780.33611..33838...0.0..0.110.639.4j3......0....1..gws-wiz.......0i71j0i20i263j0j0i22i30j0i22i10i30._63z61w8FUo
! django emberjs
https://medium.freecodecamp.org/eli5-full-stack-basics-breakthrough-with-django-emberjs-402fc7af0e3 <-- GOOD STUFF
https://www.smallsurething.com/making-ember-and-django-play-nicely-together-a-todo-mvc-walkthrough/
https://github.com/lookininward/my_library
https://www.django-rest-framework.org/
lightweight django https://learning.oreilly.com/library/view/lightweight-django/9781491946275/
django and emberjs https://www.youtube.com/results?search_query=Django+%26+EmberJS
! django oracle
https://www.oracle.com/technetwork/articles/dsl/vasiliev-django-100257.html
! move from django to rails
https://infinum.co/the-capsized-eight/i-moved-from-django-to-rails-and-nothing-terrible-happened
https://www.tivix.com/blog/django-vs-rails-picking-right-web-framework <-- GOOD STUFF
! time series application django
https://www.google.com/search?q=time+series+application+django&ei=3n26XLqANK-zggfIx5P4Dw&start=10&sa=N&ved=0ahUKEwi654aZyt3hAhWvmeAKHcjjBP8Q8tMDCIoB&biw=1339&bih=798
https://github.com/anthonyalmarza/django-timeseries
https://stackoverflow.com/questions/25212009/django-postgres-large-time-series
https://www.reddit.com/r/django/comments/9m2hnv/storing_user_specific_timeseries_data_in_django/
https://www.quora.com/What-is-the-best-timeseries-database-to-use-with-a-Django-project
https://medium.com/python-data/time-series-aggregation-techniques-with-python-a-look-at-major-cryptocurrencies-a9eb1dd49c1b
<<showtoc>>
! Flask vs Django as a whole
https://www.reddit.com/r/Python/comments/38osar/flask_or_django_for_restful_api/ <-- GOOD STUFF
https://tech.gadventures.com/migrating-our-api-gateway-from-flask-to-django-88a585c4df1a <-- GOOD STUFF
https://gearheart.io/blog/flask-vs-django-which-is-better-for-your-web-app/ <-- GOOD STUFF
https://eshlox.net/2017/08/27/do-you-want-to-use-django-for-rest-api-consider-it/ <-- GOOD STUFF
flask vs django for rest api https://www.google.com/search?q=flask+vs+django+for+rest+api&oq=flask+vs+django+for+REST&aqs=chrome.0.0j69i57.3882j1j1&sourceid=chrome&ie=UTF-8
https://stackshare.io/stackups/django-rest-framework-vs-flask-vs-rails-api
https://blog.resellerclub.com/flask-vs-django-how-to-choose-the-right-web-framework-for-your-web-app/
https://www.quora.com/Which-web-api-framework-is-better-Flask-API-or-Django-Rest-Framework
! Flask vs Django as REST framework
https://www.reddit.com/r/django/comments/4m7ptx/flask_vs_django_rest_framwork/
https://stackshare.io/stackups/django-rest-framework-vs-flask
https://stackabuse.com/flask-vs-django/
! flask rest sqlalchemy nginx
https://www.google.com/search?q=flask+rest+sqlalchemy+nginx&ei=Rl-6XKnLFKfv_QbT9IzYBw&start=10&sa=N&ved=0ahUKEwiphpyCrd3hAhWnd98KHVM6A3sQ8tMDCIYB&biw=1339&bih=798
Setting up nginx and our REST API - REST APIs with Flask and Python
flask-flaskrestful-nginx-setup.sh https://gist.github.com/lawrencefoley/e6f45bdedd1d6eaa2f0ad9bf548e163c
http://acostanza.com/2017/10/10/api-python-flask-restful-docker-nginx/
https://blog.miguelgrinberg.com/post/designing-a-restful-api-with-python-and-flask/page/2
https://blog.miguelgrinberg.com/post/running-your-flask-application-over-https
http://michal.karzynski.pl/blog/2016/06/19/building-beautiful-restful-apis-using-flask-swagger-ui-flask-restplus/
! flask nginx
https://www.patricksoftwareblog.com/how-to-configure-nginx-for-a-flask-web-application/
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-14-04
! time series flask
https://smirnov-am.github.io/2018/11/26/flask-streaming.html
https://geoyi.org/2017/06/18/stock-api-development-with-python-bokeh-and-flask-to-heroku/
https://thingsmatic.com/2016/07/10/a-web-app-for-iot-data-visualization/
!! flask influxdb
https://www.influxdata.com/blog/getting-started-python-influxdb/
https://github.com/btashton/flask-influxdb
https://github.com/LarsBergqvist/IoT_Charts
! flask websocket
https://www.youtube.com/results?search_query=flask+websockets
Building RESTful APIs with Flask: When Should You Use Websockets https://www.youtube.com/watch?v=5QUv14SQyjw
Nick Janetakis https://www.youtube.com/channel/UCorzANoC3fX9VVefJHM5wtA/search?query=rest
! websocket bash pipe python
http://websocketd.com/
https://github.com/joewalnes/websocketd/wiki
https://spin.atomicobject.com/2017/05/10/unix-utility-chart-animation/
https://github.com/jvranish/websocketd_examples/blob/master/adc/index.html
http://smoothiecharts.org/
https://github.com/mattbaker/websocket-pipe
https://github.com/minikomi/pipesock
Turn any application that uses stdin/stdout into a WebSocket server https://news.ycombinator.com/item?id=6879667
https://github.com/joewalnes/web-vmstats
https://stackoverflow.com/questions/47860689/how-to-read-websocket-response-in-bash
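websocketd (linked above) turns any stdin/stdout program into a WebSocket server; here's a minimal hedged sketch of the browser side that would consume such a stream (the URL and port are made up):
{{{
// subscribe to a live stream of command output, e.g. vmstat piped through websocketd
var ws = new WebSocket("ws://localhost:8080/");
ws.onmessage = function(event) {
  console.log("line from server: " + event.data); // one line of stdout per message
};
ws.onclose = function() {
  console.log("stream ended");
};
}}}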
! realtime websocket unix command output
https://stackoverflow.com/questions/17658512/how-to-pipe-input-to-python-line-by-line-from-linux-program
https://stackoverflow.com/questions/8478137/how-redirect-a-shell-command-output-to-a-python-script-input
https://medium.com/@benjaminmbrown/real-time-data-visualization-with-d3-crossfilter-and-websockets-in-python-tutorial-dba5255e7f0e
https://github.com/benjaminmbrown/real-time-data-viz-d3-crossfilter-websocket-tutorial
https://github.com/benjaminmbrown/real-time-data-viz-d3-crossfilter-websocket-tutorial/tree/master/rt-data-viz
https://medium.com/front-end-weekly/websocket-and-data-visualization-be3613c880db
tutorial https://www.youtube.com/watch?v=Lc2TA0-gZqg&list=PLz6r7YssJoKSlZk78GeJdIlLzXcSg4w1d&index=40&t=0s
https://github.com/ignoreintuition/websocket
<<showtoc>>
! how to start with APEX
How to start as a developer https://apex.world/ords/f?p=100:211:::NO
https://github.com/Dani3lSun/awesome-orclapex
https://joelkallman.blogspot.com/2009/11/apeks.html
! ux prototyping - bubble.is
https://karlarao.bubbleapps.io/
http://bubble.is
https://www.quora.com/Is-bubble-is-good-for-startup-MVP-And-why-framework-e-g-express-js-better
https://coachingnocodeapps.com/post/is-bubble-the-right-no-code-app-builder-for-you-1528816444225x374596674843050000
https://airdev.co/sprint
https://forum.bubble.is/t/cost-benefit-learning-bubble-vs-using-wordpress/2651/11
! apex install and configure
!! using oxar
How to build Oracle XE & APEX https://www.youtube.com/watch?v=RfIp4Mmj3rA&list=PLEaM1tilka8pSMzT-1rLaGTX5n6iwfcqf
http://www.oraopensource.com/oxar
https://www.talkapex.com/2019/02/5-chrome-dev-tools-for-apex-developers/
!! manual install
https://dsavenko.me/oracledb-apex-ords-tomcat-httpd-centos7-all-in-one-guide-introduction/
! apex on postgresql REST data
https://dsavenko.me/read-write-apex-application-fully-based-on-alien-data/
! apex REST performance
https://telegra.ph/Oracle-XE-184-Free-High-load-02-25-2
! apex install end to end
https://dsavenko.me/oracledb-apex-ords-tomcat-httpd-centos7-all-in-one-guide-introduction/
https://joelkallman.blogspot.com/2017/05/apex-and-ords-up-and-running-in2-steps.html
! apex security oauth2
<<<
social login from apex is different from oauth2 in ords
oauth2 in ords protects the endpoints and is handled inside the apex app
<<<
https://jsao.io/2015/06/authentication-with-node-js-jwts-and-oracle-database/
Creating OAuth2 Protected RESTful Web Services with ORDS and APEX https://www.youtube.com/watch?v=koNIRQY-ioY
! apex and ords deployment
https://www.jmjcloud.com/blog/tuning-glassfish-for-oracle-apex-ords-in-production
https://smartdogservices.com/benefits-of-running-ords-in-an-application-server-container/
https://www.thatjeffsmith.com/archive/2017/03/running-oracle-rest-data-services-ords-without-oracle-application-express-apex/
https://www.mysliderule.com//workshops/data-science/learn/
https://docs.google.com/document/d/1HRa2LlGFDGJxzPLu7mrmSVizCMn8lePsAdprM94LYVE/edit#heading=h.3myxcv9du0ev
https://docs.google.com/document/d/1NXELAq9YzeqOT7ye-bdrRazH7MM5A9sPTBoL2QIfJCA/edit
https://docs.google.com/document/d/1fSmITamN4ulF49DZO9-bAUKsEaQ0iIDGJPjOXQxAAaA/edit#heading=h.p8xihufuhz2p
Python + SQL + Tableau: Integrating Python, SQL, and Tableau
https://www.udemy.com/python-sql-tableau-integrating-python-sql-and-tableau/learn/v4/content
! this is free
https://colab.research.google.com/notebooks/intro.ipynb <- run beam code for free, free GPUs
! new - ai platform notebook instance
https://cloud.google.com/ai-platform/notebooks/docs
https://console.cloud.google.com/ai-platform/notebooks/instances?_ga=2.158829823.490684926.1613933102-567264090.1572987775 <- integrated in cloud console
http://sourceforge.net/projects/collectl/ <-- project page
http://sourceforge.net/mailarchive/forum.php?forum_name=collectl-interest
http://collectl.sourceforge.net/Features.html
http://collectl.sourceforge.net/Export.html
http://collectl.sourceforge.net/Import.html
http://collectl.sourceforge.net/Performance.html
http://collectl.sourceforge.net/Architecture.html
http://collectl.sourceforge.net/FAQ-collectl.html
http://collectl.sourceforge.net/RunningAsAService.html
http://collectl.sourceforge.net/Misc.html <-- custom module
http://collectl.sourceforge.net/Hello.html
http://www.davygoat.com/software/meter/Disk_Stats.html
<<<
I'm happy to hear you're finding collectl useful. Can you say a little more about how you customized the output and maybe post an example? Just being curious.
I'm also wondering what you use for plotting? I'm always looking for a good plotting tool but have yet to find anything faster and more accurate than gnuplot and so that's what colplot uses. Have you tried that yet? It's part of collectl-utils.sourceforge.net/ and from a functionality perspective does just about any collectl data plotting you can imagine AND if you want to do something different you can easily define your own plots.
-mark
> {{{
Hi Mark,
Thanks for visiting my blog, sorry I wasn't able to get the output of the customized collectl script, the Oracle guys made it for the demo (there's a group in Oracle that makes these tools) so that we could see the OS performance numbers from the Storage Servers and the RAC nodes..
but to give you a picture, the output is horizontally divided into two (storage servers and RAC nodes), the hostnames are just stacked together and have columns on the right for IO, CPU, memory statistics (real time).. that's pretty neat, instead of having so many putty sessions of collectl..
For the regression plot shown here (http://goo.gl/V1Gd) I used the tool “Simple Linear Regression Analysis Template” by Craig Shallahamer (http://goo.gl/yIBI).. It’s just excel based.. and complete with confidence level/interval, residual analysis, and graphing… I find this easy to use rather than “R”.. (http://goo.gl/RNob)
“I’m always looking for a good plotting tool but have yet to find anything faster and more accurate than gnuplot and so that’s what colplot uses” <– Have you also tried the RRDtool (http://goo.gl/K1aa) for the collectl-utils?
and for the OS performance data shown here (http://goo.gl/MUWr)… I use kSar (http://goo.gl/SsZO) which is Java based… often times on client engagements if they are encountering performance problems the only time series OS performance data we could have to validate what we are getting on the database performance numbers is from SAR data.. and pretty much has the info I need, but if I need detailed info like "ps -elf" or "netstat -s" I won't have it with SAR.. so that's the time I'll use either OSWatcher or collectl.. but if the client already got any of these two the better..
I’ll give collectl-utils a try, the three monitor output of colmux is cool you know it’s still possible to stack three more monitors above!
}}}
<<<
<<<
As the author of collectl I'm always interested in performance and one thing I always find fascinating is the correlation of activities across a system. When I look at some of your output, which only seems to sample once an hour, I wonder about what's happening to other system resources at the times you highlighted. Perhaps the real question is: when you report a disk load, what is the sample time? Is it the snap time of 1 hour? Experience shows you have to get pretty fine grained for the numbers to be meaningful, closer to the 5-15 second range. The other thing is it can also be very useful to see those same resources prior to and after the times you're showing, as a single point in time may not show what's really going on.
Are the disks running at their maximums and might there be an opportunity for increasing throughput by configuration changes? Did the increase in disk rates exactly correlate with queries? What's happening to memory utilization, particularly slabs, since they can fluctuate quite a lot under heavy load. Or what about memory fragmentation as reported by /proc/buddyinfo? Lots of other things.
A couple of thoughts come to mind that may or may not make sense, but if you could plot your data against collectl's, some interesting things might show up. But the question of HOW to do that can get complicated. You can certainly load collectl's data into a spreadsheet and plot it there, but it's kind of hard work. On the other hand if you could append your data to collectl's you could define some additional plots using colplot.
Not being an oracle person I'm not really sure what's possible and what isn't, but if you can get any of these stats in real time, you can relatively easily write a custom module that will allow collectl to read that data directly and display [or even plot] that data along with all of collectl's other data natively. If you're a perl programmer and are interested I could probably walk you through it. But as I said I don't know if it's worth the effort or not.
-mark
> {{{
Hi Mark,
Yes the snap interval is 1 hour and I can say that the latency numbers could be pretty bad when we drill down on 5sec intervals, and it is very likely that the devices are running at their maximum..
Sorry to reply just now.. I've been thinking about this for quite a while. I made a few test cases and will blog my observations and what I've learned from it..
The initial post is here:
http://karlarao.wordpress.com/2010/07/05/oracle-datafile-io-latency-part-1/
All the best..
}}}
<<<
{{{
ssh db1.local cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh karl.fedora cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp -p ~/.ssh/authorized_keys karl.fedora:.ssh/authorized_keys
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
ssh db1.local date; ssh karl.fedora date
192.168.0.5 karl.fedora serv1
192.168.0.8 db1.local serv2
colmux -addr karl.fedora,db1.local -command "--all" -column 2 <-- display disk stats per hostname
colmux -addr karl.fedora,db1.local -command "-sn -oT" -cols 2,4 -coltot <-- with total column output, and timestamp
colmux -addr karl.fedora,db1.local -command "-sZ -i:1" <-- top like interactive sort
colmux -addr karl.fedora,db1.local -command "-sD" -column 6 <-- most active disks
colmux -addr karl.fedora,db1.local -command "-sm -i2 -oT" -cols 6,7 -coltot -colk -test <-- test
colmux -addr karl.fedora,db1.local -command "-sm -i2 -oT" -cols 6,7 -coltot -colk <-- divide by MB
colmux -addr karl.fedora,db1.local -command "-sZ :1" -column 5
colmux -addr karl.fedora,db1.local <-- shows cpu, disks, mem
colmux -addr karl.fedora,db1.local -command "-sCDN"
colmux -addr enkdb01,enkdb02,enkcel01,enkcel02,enkcel03 -command "-scdmn --import hello.ph" -column 0 <-- COOL COMMAND
colmux -addr enkdb01,enkdb02,enkcel01,enkcel02,enkcel03 -command "-scdmn" -column 0
collectl -sd -i 5 --verbose <-- disks stats
collectl -sD -c 1 --dskfilt "dm-(1\b|3\b|5\b|8\b)" <-- filter disks
collectl --all -o T -o D >> collectl-all.txt 2> /dev/null &
collectl -sc --verbose -o T -o D >> collectl-cpuverbose.txt 2> /dev/null &
collectl -sD --verbose -o T -o D >> collectl-ioverbose.txt 2> /dev/null &
is there a way to put infiniband stats?
how can i add the stats of flash ?
how can i add custom metrics? avg,min,max?
the host column on colmux keeps on sorting.. how can i keep them from sorting?
}}}
http://prefetch.net/blog/index.php/2011/10/05/using-collectl-on-linux-to-view-system-performance/
https://alexzeng.wordpress.com/2009/06/29/how-to-conver-column-to-row-and-convert-row-to-column/
http://www.club-oracle.com/forums/pivoting-row-to-column-conversion-techniques-sql-t144/
http://stackoverflow.com/questions/9319997/oracle-sql-convert-n-rows-column-values-to-n-columns-in-1-row
http://stackoverflow.com/questions/19812930/transpose-rows-to-columns-in-oracle-10g
http://hearingtheoracle.com/2013/01/11/more-about-pivot-queries/
http://www.oracle-developer.net/display.php?id=506
https://community.modeanalytics.com/sql/tutorial/sql-pivot-table/
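a minimal Oracle 11g+ PIVOT sketch of the row-to-column conversion the links above discuss, using the classic SCOTT.EMP demo table:
{{{
-- input: one row per (deptno, job); output: one row per deptno, one column per job
select *
from (select deptno, job, sal from scott.emp)
pivot (sum(sal) for job in ('CLERK' as clerk, 'ANALYST' as analyst, 'MANAGER' as manager));

-- pre-11g you'd do the same with decode/case:
-- select deptno, sum(decode(job,'CLERK',sal)) clerk, ... from scott.emp group by deptno;
}}}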
! tableau data source - custom SQL pivot
https://help.tableau.com/current/pro/desktop/en-us/pivot.htm
Combine multiple dimensions / pivot multiple columns https://community.tableau.com/thread/189601
https://www.google.com/search?q=tableau+pivot+dimension+columns&oq=tableau+pivot+dimension+columns&aqs=chrome..69i57j33.11202j0j1&sourceid=chrome&ie=UTF-8
https://www.google.com/search?q=tableau+pivot+column+not+working&oq=tableau+pivot+column+not+&aqs=chrome.2.69i57j33l6.7094j1j1&sourceid=chrome&ie=UTF-8
{{{
$ cat snap_id.txt | awk '{print $0}' | awk 'END {print line} { line = line "|" $0 }'
|34430|34248|34991|34434|35552|34249|35526|34730|35649|35450|35285
$ cat topsql.txt | egrep "0mb4th0xnw9ph|0uuhus45t8tzm|10karfh53pcds|gsxnz03pd3q5m" | sort -k6
}}}
http://nixcraft.com/shell-scripting/15186-convert-column-row-output.html
* merges the data sets of file1.txt and file2.txt and generates the following:
''onlist'' - what's on file1 that also exists on file2
''offlist'' - what's on file2 that's not on file1; for my purpose this is kind of the "delta" or new data between the two files
{{{
-- the sample data
echo test1 >> file1.txt
echo test2 >> file1.txt
echo test3 >> file1.txt
echo test1 >> file2.txt
echo test4 >> file2.txt
echo test5 >> file2.txt
echo test6 >> file2.txt
echo test6 >> file2.txt
-- initial diff
Karl-MacBook:test2 karl$ wc -l file1.txt
3 file1.txt
Karl-MacBook:test2 karl$ wc -l file2.txt
5 file2.txt
-- unique
Karl-MacBook:test2 karl$ cat file1.txt | sort | uniq | wc -l
3
Karl-MacBook:test2 karl$ cat file2.txt | sort | uniq | wc -l
4
-- sort uniq
cat file1.txt | sort | uniq > file1sorted.txt
cat file2.txt | sort | uniq > file2sorted.txt
-- do the compare
mkdir test
cat file1sorted.txt file2sorted.txt > test/complete.txt
cp file1sorted.txt test/
cd test
for i in `cat complete.txt`; do touch $i ; done
alist=`cat complete.txt`
origlist=`cat file1sorted.txt`
for i in $alist;
do
ls -ltr $origlist 2> error.txt | grep -i $i ;
if [ $? = 0 ]; then
echo $i >> onthelist.txt
else
echo $i >> offthelist.txt
fi
done
-- output
Karl-MacBook:test karl$ pwd
/Users/karl/Dropbox/tmp/test2/test
Karl-MacBook:test karl$ ls -ltr
total 32
-rw-r--r-- 1 karl staff 0 Feb 24 23:52 test6
-rw-r--r-- 1 karl staff 0 Feb 24 23:52 test5
-rw-r--r-- 1 karl staff 0 Feb 24 23:52 test4
-rw-r--r-- 1 karl staff 0 Feb 24 23:52 test3
-rw-r--r-- 1 karl staff 0 Feb 24 23:52 test2
-rw-r--r-- 1 karl staff 0 Feb 24 23:52 test1
-rw-r--r-- 1 karl staff 18 Feb 24 23:52 file1sorted.txt
-rw-r--r-- 1 karl staff 42 Feb 24 23:52 complete.txt
-rw-r--r-- 1 karl staff 24 Feb 24 23:52 onthelist.txt
-rw-r--r-- 1 karl staff 18 Feb 24 23:52 offthelist.txt
-rw-r--r-- 1 karl staff 0 Feb 24 23:52 error.txt
Karl-MacBook:test karl$ cat onthelist.txt
test1
test2
test3
test1
Karl-MacBook:test karl$ cat offthelist.txt
test4
test5
test6
Karl-MacBook:test karl$ cat onthelist.txt | sort | uniq | wc -l <-- onlist may return duplicate from file1.txt and file2.txt so do sort uniq
3
Karl-MacBook:test karl$ cat offthelist.txt | sort | uniq | wc -l
3
Karl-MacBook:test karl$ cat offthelist.txt | wc -l
3
}}}
''references - the links below compare files in different ways; the code above suits my needs''
{{{
http://stackoverflow.com/questions/17954908/bash-read-lines-from-a-file-then-search-a-different-file-using-those-values
http://stackoverflow.com/questions/14354784/bash-only-process-line-if-not-in-second-file
http://unix.stackexchange.com/questions/85460/how-to-remove-lines-included-in-one-file-from-another-file-in-bash
http://stackoverflow.com/questions/15396190/read-file-line-by-line-and-perform-action-for-each-in-bash
http://stackoverflow.com/questions/19727576/looping-through-lines-in-a-file-in-bash-without-using-stdin
while read name; do
grep "$name" file2.txt
done < file1.txt > diff.txt
grep -f file1.txt file2.txt > diff2.txt
grep -f file2.txt file1.txt > diff3.txt
join -t$'\n' -v1 <(sort file2.bak) <(sort file1.bak)
grep -Fvx -f file1.bak file2.bak >remaining.list
}}}
{{{
# rename the files from PGTarch1_<filename> to 1_<filename>
for filename in PGT*; do echo ls \"$filename\" \"${filename//PGTarch1_/1_}\"; done
for filename in PGT*; do echo mv \"$filename\" \"${filename//PGTarch1_/1_}\"; done > rename.sh
# compare
cd /oracle/fast_recovery_area/PGT/stage/
for file in /oracle/fast_recovery_area/PGT/*; do
name=${file##*/}
if [[ -f /oracle/fast_recovery_area/PGT/stage/$name ]]; then
echo "$name exists in both directories"
mv $name /oracle/fast_recovery_area/PGT/stage/exist/
fi
done
# move to main directory
mv *dbf ../
}}}
! references
https://www.google.com/search?q=bash+diff+two+folders&oq=bash+diff+two+folders&aqs=chrome..69i57j0.5991j0j1&sourceid=chrome&ie=UTF-8#q=bash+diff+two+folders+loop&*
http://stackoverflow.com/questions/28205735/how-do-i-compare-file-names-in-two-directories-in-shell-script
http://stackoverflow.com/questions/16787916/difference-between-two-directories-in-linux
See the tweet here https://twitter.com/karlarao/status/582053491079843840
[img(95%,95%)[ http://i.imgur.com/WpgZHre.png ]]
! complete view of metrics for CPU speed comparison - elap_exec, lios_elap, us_lio
I had a chat with a co-worker who convinced me that with cputoolkit (https://karlarao.wordpress.com/scripts-resources/) I'm measuring the bandwidth and that I also need to consider the latency of the CPU. That way I can have a complete view of CPU performance when doing speed comparisons.
So I modified the cputoolkit and the sql_detail.sql output is now showing the following columns:
''"elap_exec"'' - the response time as more threads are saturated (response time)
''"lios_elap"'' - the LIOS/elap as more threads are saturated (bandwidth)
''"us_lio"'' - the microseconds per LIO as more threads are saturated (LIO latency)
Below shows the comparison of threaded_execution parameter ''FALSE vs TRUE''
{{{
enkdb03 - X2 (24 CPU threads) - threaded_execution=FALSE
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP, US_LIO
18:45:56, 594, 166428301, 600.05, 600.25, .2, 1.01, 1.01, 0, 280182.32, 277266.13, 3.64
18:56:05, 2791, 702896963, 2976.54, 2976.18, -.36, 1.07, 1.07, 0, 251844.13, 236174.44, 4.52
19:06:12, 4981,1213266832, 5339.11, 5339.79, .68, 1.07, 1.07, 0, 243578.97, 227212.42, 4.72
19:16:28, 6687,1603048857, 7653.09, 7654.44, 1.35, 1.14, 1.14, 0, 239726.16, 209427.26, 5.47
19:26:48, 7275,1684044032, 9966.47, 9951.03, -15.43, 1.37, 1.37, 0, 231483.72, 169233.06, 8.08
19:37:12, 7699,1676906973, 12075.14, 12052.83, -22.31, 1.57, 1.57, 0, 217808.41, 139129.73, 11.25
19:47:41, 8061,1820795571, 13612.74, 14558.81, 946.07, 1.69, 1.81, .12, 225877.13, 125064.87, 14.44
19:58:16, 8107,1868801926, 13545.58, 16727.22, 3181.64, 1.67, 2.06, .39, 230517.07, 111722.19, 18.47
20:08:53, 8165,2020838521, 13919.41, 19589.78, 5670.37, 1.7, 2.4, .69, 247500.13, 103157.79, 23.26
20:19:36, 8230,2065854617, 14132.69, 22253.64, 8120.95, 1.72, 2.7, .99, 251015.14, 92832.22, 29.13
20:30:21, 8238,2094876335, 14159.49, 24696.61, 10537.12, 1.72, 3, 1.28, 254294.29, 84824.45, 35.34
enkdb03 - X2 (24 CPU threads) - threaded_execution=TRUE
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP, US_LIO
--------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------,----------
17:34:20, 411, 137826034, 595.42, 598.59, 3.17, 1.45, 1.46, .01, 335343.15, 230252.21, 6.34
17:44:45, 2076, 592989139, 2997.59, 3009.07, 11.48, 1.44, 1.45, .01, 285640.24, 197067.19, 7.36
17:55:04, 3580, 972277515, 5178.8, 5208.31, 29.51, 1.45, 1.45, .01, 271585.9, 186678.09, 7.77
18:09:01, 6835,1977465789, 10376.23, 10441.31, 65.08, 1.52, 1.53, .01, 289314.67, 189388.73, 8.08
18:19:22, 5999,1740373391, 9907.78, 9971.27, 63.49, 1.65, 1.66, .01, 290110.58, 174538.8, 9.51
18:29:49, 6204,1655424726, 11779.94, 11863.38, 83.44, 1.9, 1.91, .01, 266831.84, 139540.73, 13.69
18:40:15, 6119,1658482168, 13085.15, 14351.18, 1266.04, 2.14, 2.35, .21, 271038.11, 115564.14, 20.34
18:50:49, 6348,1792805041, 12640.17, 15966.1, 3325.94, 1.99, 2.52, .52, 282420.45, 112288.19, 22.44
19:01:28, 6943,1971447383, 13455.64, 19329.45, 5873.81, 1.94, 2.78, .85, 283947.48, 101991.88, 27.26
19:12:11, 7085,2079470608, 13755.52, 22124.75, 8369.22, 1.94, 3.12, 1.18, 293503.26, 93988.45, 33.2
19:22:57, 7199,2149773337, 13885.33, 24711.81, 10826.47, 1.93, 3.43, 1.5, 298621.11, 86993.77, 39.43
}}}
{{{
-- create view (from local tables), create mview on top of the view
create or replace view VIEW_CCPROD2A ("BILLING_PERIOD_CYCLE_ID", "MEMBERSHIP_JOIN_DATE", "BENEFIT_PROGRAM_ID", "MEMBERSHIP_ID", "PERSON_ID", "MBRSHIP_TYPE_ID", "MBRSHIP_TYPE_CODE", "MBRSHIP_CATEGORY_CODE",
"MP_TRANSACTION_CODE", "TRANSACTION_CODE", "RECUR_BILL_ITEM_DETAIL_ID", "DETAIL_DESC", "BILLING_PROFILE_ID", "ENTITY_CODE", "ACCOUNT_NUM", "BILL_TO_BILLING_PROFILE_ID", "TRANSACTION_DATE", "BILLING_CATEGORY",
"TAX_EXEMPT", "DUES_EXEMPT", "LIQUOR_POOL_EXEMPT", "BILLING_FREQUENCY", "REVENUE_SHARED_BY", "CREATED_ON", "LAST_UPDATED_ON", "OCCURRENCES", "AMOUNT_BILLED", "LIMIT_AMOUNT", "CLIENT_REC_BILL_ITEM_ID", "FIRST_NAME", "LAST_NAME") as
SELECT
<name of all columns>
from
<table>;
CREATE materialized VIEW "VIEW_CCPROD2A_MV"
build immediate
refresh complete
as
select
*
from VIEW_CCPROD2A;
}}}
Concurrent Updates Impact Query Performance http://www.dbspecialists.com/files/presentations/concurrent_updates.html <-- good stuff, concurrent CR effect on response time
<<<
I would collect SQLd360 for the SQL now (or right after an exec so data from V$ is still there) and then when it runs slower just attach it with a 10046 and get like a 1h trace.
Assuming the plan is the same (it likely is) then the problem could be the concurrency with other jobs that cause a lot of CR.
<<<
https://blogs.oracle.com/rtsai/entry/how_to_configure_multiple_oracle
{{{
How to configure multiple Oracle listeners
By Robert Tsai on Aug 10, 2009
It happened to me a few times during stress tests that the Oracle listener became a bottleneck and was not able to handle the required workload. It was resolved by creating multiple listeners, which appeared to be a quick solution. Here is a short step-by-step procedure to configure multiple Oracle listeners on Solaris for a standalone Oracle 9i and 10g environment.
1) First of all, add additional NICs and cables. They can be on a separate subnetwork or the same one. In the latter case, make sure to set a static route if needed.
2) Assume that we are going to configure two listeners, LISTENER and LISTENER2
Modify listener.ora and tnsnames.ora as following:
Here is a sample of listener.ora
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.3)(PORT = 1521))
)
)
LISTENER2 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.2)(PORT = 1525))
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u01/my10g/orcl)
(PROGRAM = extproc)
)
)
Here is sample of tnsnames.ora
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.3)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = ORCL)
)
)
LISTENER2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.2)(PORT = 1525))
)
(CONNECT_DATA =
(SERVICE_NAME = ORCL)
)
)
EXTPROC_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(CONNECT_DATA =
(SID = PLSExtProc)
(PRESENTATION = RO)
)
)
3) To change database registry
Without changing the registration, starting the Oracle 10g database would cause it to register itself only with the listener running on port 1521 (the default listener). This is not what I wanted: it should register itself with both listeners, LISTENER & LISTENER2, defined on ports 1521 & 1525. For this to happen we have to add an extra line in the database parameter file init{$SID}.ora. The parameter used by Oracle is LOCAL_LISTENER. The reference for this parameter in Oracle's Database Reference Guide says: LOCAL_LISTENER specifies a network name that resolves to an address or address list of Oracle Net local listeners (that is, listeners that are running on the same machine as this instance). The address or address list is specified in the TNSNAMES.ORA file or other address repository as configured for your system. The default value is (ADDRESS=(PROTOCOL=TCP)(HOST=hostname)(PORT=1521)), where hostname is the network name of the local host. See the sample below:
LOCAL_LISTENER=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.2)(PORT=1525))
If you don't use a database parameter file (or the above overwrites the previous definition on 10.6.142.145), but use the spfile construction, then you can alter/set this setting via a SQL statement in e.g. SQL*Plus with an account with the correct privileges:
Before change:
SQL> show parameter LISTENER
NAME TYPE VALUE
----------------------------------- --------- -----------------------------
local_listener string
remote_listener string
SQL>
To change: (Do not put it in a single line which is "TOO LONG").
SQL> alter system set LOCAL_LISTENER="(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.6.142.145)(PORT=1521))
2 (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.2)(PORT=1525)))" scope=BOTH;
System altered.
SQL>
After change
SQL> show parameter LISTENER
NAME TYPE VALUE
---------------------- -------- -----------------------------
local_listener          string   (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.6.142.145)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.2)(PORT=1525)))
log_archive_local_first boolean  TRUE
SQL>
SQL>
SQL> show parameter DISPATCHERS
NAME TYPE VALUE
----------------------- ------- ----------------------------
dispatchers string (PROTOCOL=TCP) (SERVICE=orclXDB)
max_dispatchers integer
SQL>
4) Restart the listeners:
lsnrctl stop
lsnrctl start LISTENER
lsnrctl start LISTENER2
5) Check both listeners status and should report the same thing except on different IP:
lsnrctl stat
lsnrctl stat LISTENER
lsnrctl stat LISTENER2
ps -ef |grep -i tns <=== should see two listeners running
6) Should spread out your connections among different listeners and here are some samples of how to connect to a particular listener. i.e
sqlplus system/oracle@//192.168.100.3:1521/orcl
sqlplus system/oracle@//192.168.100.2:1525/orcl
}}}
<<showtoc>>
! References
!! Different approaches for connecting Weblogic Server to RAC database
http://www.albinsblog.com/2014/04/different-approaches-for-connecting.html , http://java-x.blogspot.com/2006/05/weblogic-server-and-oracle-rac.html
weblogic is an application server https://www.google.com/search?q=is+weblogic+a+web+server+or+application+server&oq=is+weblogic+a+web+server+or+app&aqs=chrome.1.69i57j0l2j33.5824j0j1&sourceid=chrome&ie=UTF-8
!! youtube - Oracle WebLogic Active GridLink for RAC Demo
https://www.youtube.com/watch?v=8D6cf6Y5z94
!! whitepaper - Active GridLink for RAC: Intelligent Integration Between WebLogic Server and Oracle Real Application Clusters <- really good stuff
http://www.oracle.com/technetwork/middleware/weblogic/overview/activegridlinkwhitepaperoraclenec-1987937.pdf
http://jpn.nec.com/soft/oracle/files/WLS_NECpart_OOW_2014.pdf
https://www.slideshare.net/jeckels/oracle-web-logic-server-12c-seamless-oracle-database-integration
https://blogs.oracle.com/weblogicserver/active-gridlink-configuration-for-database-outages
!! whitepaper - Oracle WebLogic Server Active GridLink for Oracle Real Application Clusters (RAC)
http://www.oracle.com/technetwork/middleware/weblogic/gridlink-rac-wp-494900.pdf
!! https://martincarstenbach.wordpress.com/2014/02/18/runtime-load-balancing-advisory-in-rac-12c/
!! How To Configure and Test a WebLogic Gridlink Datasource Connecting To An Oracle RAC SCAN Address (Doc ID 1382656.1) <- good stuff (with gridlink_ha.war)
<<<
Note: Ideally the ONS and FAN should be configured due to the choice of the balancing algorithm to allocate connections to Oracle RAC nodes.
Refer to the Documentation:
https://docs.oracle.com/middleware/11119/wls/JDBCA/gridlink_datasources.htm#CHDHHADB
--------------
GridLink data sources provide load balancing in XA and non-XA environments. GridLink data sources use runtime connection load balancing (RCLB) to distribute connections to Oracle RAC instances based on Oracle FAN events issued by the database. This simplifies data source configuration and improves performance as the database drives load balancing of connections through the GridLink data source, independent of the database topology.
(...)
If FAN is not enabled, GridLink data sources use a round-robin load balancing algorithm to allocate connections to Oracle RAC nodes.
<<<
!! https://docs.oracle.com/middleware/11119/wls/JDBCA/gridlink_datasources.htm#CHDHHADB <- Using GridLink Data Sources
!! https://docs.oracle.com/middleware/1212/wls/JDBCA/gridlink_datasources.htm#JDBCA373 <- Using Active GridLink Data Sources
!! 10.3.4 - How-To: Use Oracle WebLogic Server with a JDBC GridLink Data Source
http://www.oracle.com/technetwork/middleware/weblogic/wls-jdbc-gridlink-howto-333331.html
!! configure ONS on weblogic
https://dbhk.wordpress.com/2009/12/16/change-ons-listening-ports/
https://blogs.oracle.com/weblogicserver/ons-configuration-in-wls
http://blog.dbauniversity.com/2009/02/ons-port-and-crs-installation.html
https://docs.oracle.com/cd/E11882_01/java.112/e16548/apxracfan.htm#JJDBC28947
https://web.stanford.edu/dept/itss/docs/oracle/10gR2/java.102/b14355/fstconfo.htm
Testing WLS and ONS Configuration https://blogs.oracle.com/weblogicserver/testing-wls-and-ons-configuration
!! other HOWTO
!!! old - How-To Configure and Use Oracle Real Application Clusters (RAC) with Oracle WebLogic Server 10.3
http://www.oracle.com/technetwork/middleware/weblogic/index-086651.html
!!! pdf - Oracle WebLogic Server 10.3 and Oracle Real Application Clusters (RAC)
http://www.oracle.com/technetwork/middleware/weblogic/oraclewls-rac-1-131306.pdf
!! books safari
!!! Oracle WebLogic Server 12c: First Look
https://www.safaribooksonline.com/library/view/oracle-weblogic-server/9781849687188/ch06.html#ch06lvl3sec05
!!! Tuning multi data sources – surviving RAC node failures
https://www.safaribooksonline.com/library/view/oracle-weblogic-server/9781849686846/ch03s08.html
!!! Using WebLogic Server with Oracle RAC
http://sqltech.cl/doc/oas11gR1/web.1111/e13737/oracle_rac.htm
Configuring And Managing WebLogic JDBC - Using WebLogic Server with Oracle RAC https://docs.oracle.com/cd/E13222_01/wls/docs103/jdbc_admin/oracle_rac.html
!! other videos
Creating a GridLink Data Source in WebLogic Server 12c https://www.youtube.com/watch?v=bRu-vFNWxPo
Advance Weblogic JDBC Configuration Tutorials - Connecting WLS with Oracle Database https://www.youtube.com/watch?v=5Od9iU3VahA
https://www.youtube.com/user/vimalkabra/videos
WebLogic Admin Training (WebLogic JDBC Configuration) - nCodeIT - Live Sample1 https://www.youtube.com/watch?v=fK2pUBiQUNo
!! tuning weblogic
!!! Tuning WebLogic Server
https://docs.oracle.com/cd/E24329_01/web.1211/e24390/wls_tuning.htm#PERFM173
!!! Oracle Optimized Solution for Secure Oracle WebLogic Server
https://static.ziftsolutions.com/files/0000000055eeed84015608bba82c7933
!!! How to Load balance heavy Traffic on Application Servers - Understanding Network Channels on WLS
https://www.youtube.com/watch?v=7DHCDXQBro0
!!! Performance Tuning Oracle Weblogic Server 12c
https://www.slideshare.net/ajithpathiyil1/performance-tuning-oracle-weblogic-server-12c
!!! https://tekslate.com/weblogic-performance-tuning/
!!! Top Tuning Recommendations for WebLogic Server
https://docs.oracle.com/cd/E13222_01/wls/docs81/perform/topten.html
!!! gorbachev weblogic workload management RAC
https://www.slideshare.net/alexgorbachev/demistifying-oracle-rac-workload-management-by-alex-gorbachev-nocoug-2010-spring-conference
!! weblogic app continuity
!!! srvctl commit_outcome
https://books.google.com/books?id=gbp5AAAAQBAJ&pg=PA85&lpg=PA85&dq=srvctl+commit_outcome&source=bl&ots=6f4I6UabTa&sig=5td22Keu0pgA-bh_ivQkq09LBMA&hl=en&sa=X&ved=0ahUKEwjDhtDC-YjVAhXK34MKHY6gD84Q6AEISTAF#v=onepage&q=srvctl%20commit_outcome&f=false
!!! Advanced Configurations for Oracle Drivers and Databases
https://docs.oracle.com/middleware/1212/wls/JDBCA/ds_oracledriver.htm#JDBCA478
!!! doc - Programming WebLogic JDBC - Using WebLogic Server with Oracle RAC
https://docs.oracle.com/cd/E13222_01/wls/docs81/jdbc/oracle_rac.html
http://docs.oracle.com/cd/E17904_01/web.1111/e13737/oracle_rac.htm#JDBCA394
!!! doc - Fusion Middleware Configuring and Managing JDBC Data Sources for Oracle WebLogic Server - Using Multi Data Sources with Oracle RAC
http://docs.oracle.com/cd/E17904_01/web.1111/e13737/generic_oracle_rac.htm#JDBCA292
!! weblogic deploy war file
!!! Weblogic Server - Deploy The WAR file
https://www.youtube.com/watch?v=yex4n8WXBlI
! 2021
Active GridLink Configuration for Database Outages https://blogs.oracle.com/weblogicserver/active-gridlink-configuration-for-database-outages
Migrating from Generic Data Source to Active GridLink https://blogs.oracle.com/weblogicserver/migrating-from-generic-data-source-to-active-gridlink
6 Using Active GridLink Data Sources https://docs.oracle.com/en/middleware/standalone/weblogic-server/14.1.1.0/jdbca/gridlink_datasources.html#GUID-82D615E4-857E-4DC1-89D2-34270809690A
Comparing Active GridLink and Multi Data Sources
https://docs.oracle.com/en/middleware/standalone/weblogic-server/14.1.1.0/jdbca/gridlink_datasources.html#GUID-7FB75861-31E6-44CC-88B3-41EE3B1371B5
<<<
There are several benefits to using AGL data sources over multi data sources when using Oracle RAC clusters. The benefits include:
* Requires one data source with a single URL. Multi data sources require a configuration with n generic data sources plus a multi data source.
* Eliminates a polling mechanism that can fail if one of the generic data sources is performing slowly.
* Eliminates the need to manually add or delete a node to/from the cluster.
* Provides a fast internal (out-of-band) notification via Oracle Notification Service (ONS) when nodes become available, so that connections are load-balanced to the new nodes.
* Provides a fast internal notification when a node goes down, so that connections are steered away from the node using ONS.
* Provides load balancing advisories (LBA) so that new connections are created on the node with the least load; the LBA information is also used for gravitation, moving idle connections around based on load.
* Provides affinity based on your XA transaction or your web session, which may significantly improve performance.
* Leverages all the advantages of HA configurations like Data Guard. For more information, see Oracle WebLogic Server and Highly Available Oracle Databases: Oracle Integrated Maximum Availability Solutions on the Oracle Technology Network at http://www.oracle.com/technetwork/middleware/weblogic/learnmore/index.html
<<<
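For reference, the two pieces an AGL data source boils down to; a minimal sketch (the SCAN host, service name, and port values are made-up placeholders, and 6200 is just the commonly used default remote ONS port):
{{{
# JDBC URL: a single SCAN-based URL instead of n per-node URLs
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=myrac-scan)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=mysvc)))
# ONS node list for FAN events (host:port, comma separated)
myrac-scan:6200
}}}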
Migrating from Multi Data Source to Active GridLink
https://docs.oracle.com/en/middleware/standalone/weblogic-server/14.1.1.0/jdbca/gridlink_datasources.html#GUID-9B644C17-A2C0-4BF3-9AF1-8271008C1622
http://www.howtogeek.com/howto/9739/turn-your-windows-7-laptop-into-a-wifi-hotspot-with-connectify/
''JDBC connection pool''
http://blog.enkitec.com/2010/04/jdbc-connection-pooling-for-oracle-databases/
A good tutorial JDBC connection pool (with specific examples) http://www.webdbtips.com/267632/
''from oaktable list''
<<<
On a similar note, a recent discovery we made was in terms of connection pooling for odp.net.
The docs say:
Minpool = how many connections you start with
Maxpool = the cap
Lifetime = max life of a connection (0=forever)
We mistakenly assumed the third parameter, if set to 0, would mean connections would grow from minpool to 'peak usage' and stay there. But they don't. The pool management keeps on trying to get connections down to minpool size.
So we had the 'saw tooth' graph of connections until we bumped up minpool.
Connor McDonald
<<<
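Those parameters map to ODP.NET connection string attributes; a minimal sketch with placeholder values (''Decr Pool Size'' is the knob that controls how fast the pool shrinks back toward the minimum):
{{{
# ODP.NET connection string (host/service/credentials are placeholders)
Data Source=//dbhost:1521/mysvc;User Id=scott;Password=tiger;Pooling=true;Min Pool Size=50;Max Pool Size=200;Connection Lifetime=0;Decr Pool Size=1
}}}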
shared by Graham
<<<
https://www.evernote.com/shard/s48/sh/fb7056a3-cb9d-4f9f-a735-274519410839/20e3669a15bfee775aa65461d58c3095
<<<
control M vs UC4 https://groups.yahoo.com/neo/groups/Control-X/conversations/topics/10308
The formatter we use http://www.dpriver.com/pp/sqlformat.htm
! howto
The howto http://www.dpriver.com/blog/list-of-demos-illustrate-how-to-use-general-sql-parser/rewrite-oracle-propriety-joins-to-ansi-sql-compliant-joins/
ansi joins in oracle 9i http://www.oracle-developer.net/display.php?id=213
http://oracledoug.com/serendipity/index.php?/archives/933-ANSI-Join-Syntax.html
! convert SQL99 to ANSI and back
The converter from SQL99 to ANSI (select -> Executable demo for General SQL Parser (1.36M)) http://www.sqlparser.com/dlaction.php?fid=gspdemo&ftitle=gsp%20demo or use this online version @@http://107.170.101.241:8080/joinConverter/@@
The converter from ANSI to SQL99 http://www.toadworld.com/products/toad-for-oracle/f/10/t/27185
! other live demos
http://www.sqlparser.com/livedemo.php
https://github.com/simsekbunyamin/AnsiJoinConvertor
! other references
https://stackoverflow.com/questions/13599182/convert-t-sql-query-to-ansi-sql-99-standard
http://developer.mimer.com/validator/parser99/index.tml#parser
https://databricks.com/blog/2019/03/19/efficient-upserts-into-data-lakes-databricks-delta.html
https://stackoverflow.com/questions/58341437/convert-merge-statement-to-update-statement
http://stackoverflow.com/questions/2535255/fastest-way-convert-tab-delimited-file-to-csv-in-linux
{{{
# default FS splits on any whitespace; set FS="\t" to split on tabs only,
# otherwise embedded spaces inside a field would also become commas
awk 'BEGIN{FS="\t"; OFS=","} {$1=$1}1' file
}}}
copy_plan_hash_value.sql
http://www.evernote.com/shard/s48/sh/3f253d02-bc13-4ae6-9f2f-fbc7a5b9f944/7997ffa935589baba158177ad98019f8
The research I've done on [[cores vs threads, v2 vs x2]] confirms the material in the Guerrilla Capacity Planning book, chapter 6 - Software Scalability (published 2006).
A section of that chapter is shown here [[cores vs threads and guerilla capacity planning - software scalability]] where you can see the "saturation effect" on the trend (diminishing returns on performance).
! Chapter 6 Section 6
https://www.amazon.com/Guerrilla-Capacity-Planning-Tactical-Applications-ebook/dp/B004CRTNAA/ref=mt_kindle?_encoding=UTF8&me=
[img[ http://i.imgur.com/KN5g2p4.png ]]
[img[ http://i.imgur.com/12ny2an.png ]]
[img[ http://i.imgur.com/SGM21T7.png ]]
[img[ http://i.imgur.com/lytvorz.png ]]
[img[ http://i.imgur.com/Z6mD42i.png ]]
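The model behind that saturation curve is Gunther's Universal Scalability Law; as a reference sketch (my transcription in LaTeX: sigma is the contention/serialization coefficient, kappa the coherency coefficient, and a nonzero kappa is what bends the curve into the retrograde, diminishing-returns region):
{{{
C(N) = \frac{N}{1 + \sigma (N - 1) + \kappa N (N - 1)}
}}}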
<<showtoc>>
The benchmark script used is the __cputoolkit__, a simple toolkit that lets you control the saturation of a specific number of CPUs (AAS CPU), available here http://karlarao.wordpress.com/scripts-resources/
Note that the sharp drops in the workload level numbers come from the test case stops and starts (kill and ramp up), so focus on the steady state line. The cores vs threads test does 1 CPU increments (the default), while the Exadata v2 and x2 tests do "for i in $(seq $1 4 $2)", so those go in 4 CPU increments.
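Purely as illustration, a hypothetical sketch of what "controlling the saturation of N CPUs" means (this is not the actual cputoolkit, which drives Oracle logical I/O through SQL; this just spins N busy workers):
{{{
#!/bin/bash
# usage: ./saturate.sh <N> <seconds> -- spin N pure-CPU workers, then kill them
N=${1:-4}; SECS=${2:-60}
for i in $(seq 1 "$N"); do
  while :; do :; done &      # one busy loop per CPU to saturate
done
sleep "$SECS"
kill $(jobs -p)              # ramp down (the "kill" part of kill and ramp up)
}}}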
! cores vs threads, HT and turbo boost comparisons
Below is my desktop server, which has 1 socket, 4 cores, 8 threads, showing as 8 logical CPUs (both in CPU_COUNT and /proc/cpuinfo). It can do a max of 1,400,000 LIOs/sec (''see the bottom graph - AWR Load Profile LIOs/sec''), and even if you saturate past the number of logical CPUs you stay at that rate. To measure the response time effect as you saturate up to the max CPU count you have to look at the session level numbers (''see the 4 graphs above''): as you use more and more CPUs, the fewer LIOs per elapsed second each session can do (diminishing returns), and once you go past the max CPU count you start to see wait on the run queue, which ultimately inflates session response times. The four test cases show the effects of having HT and Turbo Boost on or off. In my case, turning off HT left me with only 4 CPUs; I could still handle the same workload, but the response time effect manifests as CPU wait (run queue) once you get past the max CPU count. That's why HTonTurboOn is the best case.
The narration of the graph goes like this:
On the HTonTurboOn test case, the x-axis value of 14 saturated CPUs has a y-axis value of about 100,000 LIOs/elapsed, which means at the workload level the LIOs/sec range is about 14 x 100,000 = 1,400,000; but at this point I'm spending part of my response time on CPU wait (run queue): 1.2 secs per execution out of a 2.76 sec total elapsed (not shown on the graph).
[img[ https://lh5.googleusercontent.com/-oL5_-FgDcjs/UM_yHxBiquI/AAAAAAAABxo/4GR4QFGDhn4/s2048/threadsVScores.png ]]
! Exadata v2 vs x2, HT and turbo boost on
At the top are the session level numbers while at the bottom are the workload numbers
[img[ https://lh3.googleusercontent.com/-5Nq7Jv8fbwE/UO_0EgfVzWI/AAAAAAAAByM/M_UseIp7bXs/s2048/02_exadatav2vsx2.png ]]
check out my post at oracle-l for cores vs threads LIO/sec increase comparison on my desktop server, v2, and x2 here http://www.freelists.org/post/oracle-l/Hyperthreading-Oracle-license,22
!! UPDATE: v2 vs x2 vs x3
I added the X3 numbers on the charts here [[v2 vs x2 vs x3]]
It still shows the same behavior and trend
!! UPDATE: cores vs threads and guerilla capacity planning book - software scalability (year 2006)
The research I've done here on [[cores vs threads, v2 vs x2]] confirms the material in the Guerrilla Capacity Planning book, chapter 6 - Software Scalability (published 2006).
A section of that chapter is shown here [[cores vs threads and guerilla capacity planning - software scalability]] where you can see the "saturation effect" on the trend (diminishing returns on performance).
! SLOB - cores vs threads
SLOB's workload is pretty random, causing a more spread-out thread utilization (as I explained in [[cpu centric benchmark comparisons]]), and it executes at a very fast rate; that's why it was able to achieve a higher LIOs/sec, while the cputoolkit does sustained and heavier LIOs and really does the job of controlling the saturation of a ''specific'' number of CPUs (AAS CPU). Below are the SLOB LIOs/sec increase numbers:
3.3M to 4M 21% LIOs/sec increase from CPU4 to 8 when HT enabled
3.1M to 4M 29% LIOs/sec increase from CPU4 to 8 comparing the HT enabled and disabled
''Intel Sandy Bridge i7-2600K CPU @ 3.40GHz
1socket,4cores,8threads''
[img[ https://lh3.googleusercontent.com/-u1CUeJqA48Q/UPZeWOscmYI/AAAAAAAABzE/gPuU7C4lYTM/s2048/03_threadsVScores-slob.png ]]
! Saturation point
* on HTonTurboOn with 1socket,4cores,8threads it will show as 8 CPUs, so the saturation point is when you start seeing ''CPU wait'' (time spent on the run queue), which you can see in the CPU_ORA_WAIT column of the AAS CPU breakdown output. Once the load gets past CPU_COUNT you'll start seeing ''CPU wait''
{{{
-- OS CPU usage
# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
#Date Time User Nice Sys Wait IRQ Soft Steal Idle CPUs Intr Ctxsw Proc RunQ Run Avg1 Avg5 Avg15
20130115 23:33:35 96 0 3 0 0 0 0 0 8 13K 7519 14 786 18 15.12 11.97 7.11
20130115 23:33:36 95 0 4 0 0 0 0 0 8 13K 8385 18 782 16 15.12 11.97 7.11
20130115 23:33:37 97 0 2 0 0 0 0 0 8 13K 7277 6 784 18 15.12 11.97 7.11
20130115 23:33:38 95 0 4 0 0 0 0 0 8 13K 7648 11 782 16 15.35 12.07 7.16
-- AAS at the WORKLOAD LEVEL, you'll see here that it's saturating 16CPUs... the output will be the same for SLOB and cputoolkit
TO_CHAR(STA SAMPLES FASL FIRST SASL SECOND GRAPH
----------- ---------- ------- ------------------------- ------- ------------------------- --------------------------------------------------
15 11:32:20 5 15.36 CPU .64 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:25 5 15.80 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:30 5 15.60 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:35 5 15.20 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:40 5 12.80 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:45 5 14.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:50 5 15.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:55 5 15.80 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:00 5 14.74 CPU .46 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:05 5 15.60 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:10 5 15.68 CPU .48 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:15 5 15.60 CPU .00 control file sequential r ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:20 5 13.58 CPU .44 kksfbc child completion ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:25 5 13.20 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:30 5 15.40 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:35 5 15.40 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:40 1 16.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
88 rows selected.
-- AAS CPU BREAKDOWN, the CPU_ORA_WAIT is the saturation point...
SQL>
TM CPU_TOTAL CPU_OS CPU_ORA CPU_ORA_WAIT COMMIT READIO WAIT
----------------- ---------- ---------- ---------- ------------ ---------- ---------- ----------
01/15/13 23:34:00 16.774 .304 7.539 8.931 0 0 .147
}}}
! Complete view of metrics for CPU speed comparison - elap_exec, lios_elap, us_lio
[img(95%,95%)[ http://i.imgur.com/WpgZHre.png ]]
! CPU topology
Using the [[cpu_topology]] script, check out this doc [[redhat kbase DOC-7715 multi-processor/core or supports HT]] to interpret the script output
''X2''
{{{
$ sh cpu_topology
model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
processor 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
physical id (processor socket) 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1
siblings (logical cores/socket) 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
core id 0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10
cpu cores (physical cores/socket) 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
}}}
''V2''
{{{
$ sh cpu_topology
model name : Intel(R) Xeon(R) CPU E5540 @ 2.53GHz
processor 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
physical id (processor socket) 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
siblings (logical cores/socket) 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
core id 0 1 2 3 0 1 2 3 0 1 2 3 0 1 2 3
cpu cores (physical cores/socket) 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
}}}
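If the script isn't at hand, a rough equivalent of its output can be pulled straight from /proc/cpuinfo (a sketch, not the actual [[cpu_topology]] script):
{{{
#!/bin/bash
# one row per cpuinfo field, one column per logical CPU
grep -m1 'model name' /proc/cpuinfo
for f in 'processor' 'physical id' 'siblings' 'core id' 'cpu cores'; do
  printf '%-34s' "$f"
  grep "^$f" /proc/cpuinfo | awk -F': ' '{printf "%s ", $2}'
  echo
done
}}}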
! effect of cpu binding on per thread performance
check out the behavior or possible effect of cpu binding on per thread performance here [[cpu binding,dynamic allocation on per thread perf]]
! cpu centric benchmark comparisons
also check this out as I explain the [[cpu centric benchmark comparisons]]
<<showtoc>>
! projects
https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/activity
https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/activity
https://www.kaggle.com/imdevskp/corona-virus-report/activity
! analysis
predictions https://www.kaggle.com/neelkudu28/covid-19-visualizations-predictions-forecasting
https://www.kaggle.com/imdevskp/covid-19-analysis-visualization-comparisons
https://www.kaggle.com/parulpandey/tracking-india-s-coronavirus-spread-wip
https://www.kaggle.com/parulpandey/wuhan-coronavirus-a-geographical-analysis
ssh or scp connection terminates with the error "Corrupted MAC on input" (Doc ID 1389880.1)
''to debug opatch''
{{{
export OPATCH_DEBUG=TRUE
then run
opatch lsinventory
}}}
''fixing the inventory.xml''
{{{
$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2011, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>11.2.0.3.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true"/>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
<HOME NAME="agent11g1" LOC="/u01/app/oracle/agent11g" TYPE="O" IDX="3"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
have to find a backup! bingo!
$ locate inventory.xml
/u01/app/11.2.0/grid/oc4j/.patch_storage/7439847_Feb_8_2009_20_36_24/original_patch/etc/config/inventory.xml
/u01/app/11.2.0/grid/oc4j/.patch_storage/8513914_Jun_5_2009_17_18_10/backup/sainventory/oneoffs/7439847/etc/config/inventory.xml
/u01/app/11.2.0/grid/oc4j/.patch_storage/9452259_Mar_18_2010_23_37_55/backup/sainventory/oneoffs/7439847/etc/config/inventory.xml
/u01/app/11.2.0/grid/oc4j/.patch_storage/9452259_Mar_18_2010_23_37_55/backup/sainventory/oneoffs/8513914/etc/config/inventory.xml
/u01/app/11.2.0/grid/oc4j/.patch_storage/9452259_Mar_18_2010_23_37_55/original_patch/etc/config/inventory.xml
/u01/app/11.2.0/grid/oc4j/sainventory/oneoffs/7439847/etc/config/inventory.xml
/u01/app/11.2.0/grid/oc4j/sainventory/oneoffs/8513914/etc/config/inventory.xml
/u01/app/11.2.0/grid/oc4j/sainventory/oneoffs/9452259/etc/config/inventory.xml
/u01/app/oraInventory/ContentsXML/inventory.xml
/u01/app/oraInventory/backup/2011-11-14_11-56-28PM/ContentsXML/inventory.xml
/u01/app/oraInventory/backup/2011-11-14_11-59-03PM/ContentsXML/inventory.xml
/u01/app/oraInventory/backup/2012-02-27_10-31-55PM/ContentsXML/inventory.xml
/u01/app/oraInventory/backup/2012-08-29_03-52-46PM/ContentsXML/inventory.xml <-- here's the backup!
$ cat /u01/app/oraInventory/backup/2012-08-29_03-52-46PM/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2010, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
<VERSION_INFO>
<SAVED_WITH>11.1.0.8.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true"/>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2"/>
<HOME NAME="agent11g1" LOC="/u01/app/oracle/agent11g" TYPE="O" IDX="3"/>
</HOME_LIST>
</INVENTORY>
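# the fix, sketched (assumption: restore the known-good backup found above):
# cp /u01/app/oraInventory/ContentsXML/inventory.xml /tmp/inventory.xml.corrupt
# cp /u01/app/oraInventory/backup/2012-08-29_03-52-46PM/ContentsXML/inventory.xml \
#    /u01/app/oraInventory/ContentsXML/inventory.xml
# opatch lsinventory   # verify the inventory now loads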
oracle@desktopserver.local:/u01/app/oracle/agent12c:
$ /u01/app/oracle/agent12c/agent_inst/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version : 12.1.0.2.0
OMS Version : 12.1.0.2.0
Protocol Version : 12.1.0.1.0
Agent Home : /u01/app/oracle/agent12c/agent_inst
Agent Binaries : /u01/app/oracle/agent12c/core/12.1.0.2.0
Agent Process ID : 11429
Parent Process ID : 11352
Agent URL : https://desktopserver.local:3872/emd/main/
Repository URL : https://emgc12c.local:4900/empbs/upload
Started at : 2012-10-18 14:47:47
Started by user : oracle
Last Reload : (none)
Last successful upload : 2012-10-18 14:49:52
Last attempted upload : 2012-10-18 14:49:52
Total Megabytes of XML files uploaded so far : 0.01
Number of XML files pending upload : 0
Size of XML files pending upload(MB) : 0
Available disk space on upload filesystem : 25.58%
Collection Status : Collections enabled
Heartbeat Status : Ok
Last attempted heartbeat to OMS : 2012-10-18 14:49:58
Last successful heartbeat to OMS : 2012-10-18 14:49:58
Next scheduled heartbeat to OMS : 2012-10-18 14:50:58
---------------------------------------------------------------
Agent is Running and Ready
ERROR
#################################################################################################################################################
$ opatch lsinventory
Invoking OPatch 11.2.0.1.7
Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/oracle/product/11.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.7
OUI version : 11.2.0.3.0
Log file location : /u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2012-10-18_14-07-45PM.log
org.xml.sax.SAXParseException: <Line 13, Column 12>: XML-20121: (Fatal Error) End tag does not match start tag 'HOME'.
at oracle.xml.parser.v2.XMLError.flushErrorHandler(XMLError.java:422)
at oracle.xml.parser.v2.XMLError.flushErrors1(XMLError.java:287)
at oracle.xml.parser.v2.NonValidatingParser.parseEndTag(NonValidatingParser.java:1433)
at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java:1378)
at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:399)
at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:345)
at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:226)
at oracle.sysman.oii.oiii.OiiiInstallXMLReader.readHomes(OiiiInstallXMLReader.java:153)
at oracle.sysman.oii.oiii.OiiiInstallXMLReader.readHomes(OiiiInstallXMLReader.java:91)
at oracle.sysman.oii.oiii.OiiiInstallInventory.readHomes(OiiiInstallInventory.java:726)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.loadPartialInstallInv(OiiiInstallAreaControl.java:776)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initInstallInv(OiiiInstallAreaControl.java:821)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.loadInstallInventory(OiiiInstallAreaControl.java:592)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initAreaControl(OiiiInstallAreaControl.java:1977)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initAreaControl(OiiiInstallAreaControl.java:1930)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:301)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:240)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:189)
at oracle.opatch.OUIInventorySession.initSession(OUIInventorySession.java:63)
at oracle.opatch.OUISessionManager.setupSession(OUISessionManager.java:150)
at oracle.opatch.OUISessionManager.lockCentralInventory(OUISessionManager.java:267)
at oracle.opatch.OUISessionManager.instantiate(OUISessionManager.java:87)
at oracle.opatch.OUISessionManager.updateOPatchEnvironment(OUISessionManager.java:661)
at oracle.opatch.InventorySessionManager.updateOPatchEnvironment(InventorySessionManager.java:91)
at oracle.opatch.OPatchSession.main(OPatchSession.java:1627)
at oracle.opatch.OPatch.main(OPatch.java:651)
org.xml.sax.SAXParseException: <Line 13, Column 12>: XML-20121: (Fatal Error) End tag does not match start tag 'HOME'.
at oracle.xml.parser.v2.XMLError.flushErrorHandler(XMLError.java:422)
at oracle.xml.parser.v2.XMLError.flushErrors1(XMLError.java:287)
at oracle.xml.parser.v2.NonValidatingParser.parseEndTag(NonValidatingParser.java:1433)
at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java:1378)
at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:399)
at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:345)
at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:226)
at oracle.sysman.oii.oiii.OiiiInstallXMLReader.readHomes(OiiiInstallXMLReader.java:153)
at oracle.sysman.oii.oiii.OiiiInstallXMLReader.readHomes(OiiiInstallXMLReader.java:91)
at oracle.sysman.oii.oiii.OiiiInstallInventory.readHomes(OiiiInstallInventory.java:726)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.loadPartialInstallInv(OiiiInstallAreaControl.java:776)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initInstallInv(OiiiInstallAreaControl.java:821)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.loadInstallInventory(OiiiInstallAreaControl.java:592)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initAreaControl(OiiiInstallAreaControl.java:1977)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initAreaControl(OiiiInstallAreaControl.java:1930)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:301)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:240)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:189)
at oracle.opatch.OUIInventorySession.initSession(OUIInventorySession.java:63)
at oracle.opatch.OUISessionManager.setupSession(OUISessionManager.java:150)
at oracle.opatch.OUISessionManager.lockCentralInventory(OUISessionManager.java:267)
at oracle.opatch.OUISessionManager.instantiate(OUISessionManager.java:87)
at oracle.opatch.OUISessionManager.updateOPatchEnvironment(OUISessionManager.java:661)
at oracle.opatch.InventorySessionManager.updateOPatchEnvironment(InventorySessionManager.java:91)
at oracle.opatch.OPatchSession.main(OPatchSession.java:1627)
at oracle.opatch.OPatch.main(OPatch.java:651)
List of Homes on this system:
Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
Oracle Home dir. path does not exist in Central Inventory
Oracle Home is a symbolic link
Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo
OPatch failed with error code 73
ERROR TRACE
#################################################################################################################################################
oracle@desktopserver.local:/u01/app/oracle/agent12c:dw
$ export OPATCH_DEBUG=TRUE
oracle@desktopserver.local:/u01/app/oracle/agent12c:dw
$ opatch lsinventory
ORACLE_HOME is set at OPatch invocation
Machine Info: Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
_osArch is amd64
_javaVMSpecVersion is 1.0
_javaVMSpecVendor is Sun Microsystems Inc.
_javaVMSpecName is Java Virtual Machine Specification
_javaVMVendor is Sun Microsystems Inc.
_javaJRESpecVersion is 1.5
_javaJRESpecVendor is Sun Microsystems Inc.
_javaJRESpecName is Java Platform API Specification
_javaSupportedClassVersion is 49.0
OPatch compiled with major version: 48, minor version: 0
_osArch (from OCM API) is
/u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java -mx150m -cp /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/ocm/lib/emocmclnt.jar:/u01/app/oracle/product/11.2.0/dbhome_1/oui/jlib/OraInstaller.jar:/u01/app/oracle/product/11.2.0/dbhome_1/oui/jlib/OraPrereq.jar:/u01/app/oracle/product/11.2.0/dbhome_1/oui/jlib/share.jar:/u01/app/oracle/product/11.2.0/dbhome_1/oui/jlib/srvm.jar:/u01/app/oracle/product/11.2.0/dbhome_1/oui/jlib/orai18n-mapping.jar:/u01/app/oracle/product/11.2.0/dbhome_1/oui/jlib/xmlparserv2.jar:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/jlib/opatch.jar:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/jlib/opatchutil.jar:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/jlib/opatchprereq.jar:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/jlib/opatchactions.jar:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/jlib/opatchext.jar:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/jlib/opatchfmw.jar: -DOPatch.ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 -DOPatch.DEBUG=true -DOPatch.RUNNING_DIR=/u01/app/oracle/product/11.2.0/dbhome_1/OPatch -DOPatch.MW_HOME= -DOPatch.WL_HOME= -DOPatch.COMMON_COMPONENTS_HOME= oracle/opatch/OPatch lsinventory
Invoking OPatch 11.2.0.1.7
Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation. All rights reserved.
OPatchSession::parse() on "lsinventory",
Argument is "lsinventory"
Add commands for Help
add command "apply"
add command "napply"
add command "rollback"
add command "nrollback"
add command "lsinventory"
add command "lsinv"
add command "query"
add command "util"
add command "prereq"
add command "version"
add command "-help"
add command "-help -fmw"
Add supported commands for validation
add command "apply"
add command "rollback"
add command "lsinv"
add command "lsinventory"
add command "query"
add command "util"
add command "prereq"
add command "version"
add command "napply"
add command "nrollback"
add command "-fmw"
Not a command that can be mapped to Util Session.
CmdLineParser::initRuntimeOptions()
Checking on class oracle.opatch.opatchutil.CmdLineOptions$StringArguments
Get list of fields defined in the class oracle.opatch.opatchutil.CmdLineOptions$StringArguments
There are 7 fields defined in this class.
adding option "fp"
adding option "dp"
adding option "fr"
adding option "dr"
adding option "mp"
adding option "phbasedir"
adding option "phbasefile"
Checking on class oracle.opatch.opatchutil.CmdLineOptions$BooleanArguments
Get list of fields defined in the class oracle.opatch.opatchutil.CmdLineOptions$BooleanArguments
There are 2 fields defined in this class.
adding option "delay_link"
adding option "cmd_end"
Checking on class oracle.opatch.opatchutil.CmdLineOptions$IntegerArguments
Get list of fields defined in the class oracle.opatch.opatchutil.CmdLineOptions$IntegerArguments
There are 2 fields defined in this class.
adding option "integerarg1"
adding option "integerarg2"
Checking on class oracle.opatch.opatchutil.CmdLineOptions$StringtegerArguments
Get list of fields defined in the class oracle.opatch.opatchutil.CmdLineOptions$StringtegerArguments
There are 5 fields defined in this class.
adding option "stringtegerarg1"
adding option "stringtegerarg2"
adding option "ps"
adding option "mp"
adding option "xmlinput"
Checking on class oracle.opatch.opatchutil.CmdLineOptions$DoubleArguments
Get list of fields defined in the class oracle.opatch.opatchutil.CmdLineOptions$DoubleArguments
There are 2 fields defined in this class.
adding option "doublearg1"
adding option "doublearg2"
Checking on class oracle.opatch.opatchutil.CmdLineOptions$RawStringArguments
Get list of fields defined in the class oracle.opatch.opatchutil.CmdLineOptions$RawStringArguments
There are 1 fields defined in this class.
adding option "cmd"
CmdLineHelper::loadRuntimeOption() for Class "oracle.opatch.opatchutil.OUSession"
initializing String option 0, fp
initializing String option 1, dp
initializing String option 2, fr
initializing String option 3, dr
initializing String option 4, mp
initializing String option 5, phbasedir
initializing String option 6, phbasefile
done init. String arg.
initializing Boolean option 0, delay_link
initializing Boolean option 1, cmd_end
done init. Boolean arg.
initializing Integer option 0, integerarg1
initializing Integer option 1, integerarg2
done init. Integer arg.
initializing StringTeger option 0, stringtegerarg1
initializing StringTeger option 1, stringtegerarg2
initializing StringTeger option 2, ps
initializing StringTeger option 3, mp
initializing StringTeger option 4, xmlinput
done init. SringTeger arg.
initializing Double option 0, doublearg1
initializing Double option 1, doublearg2
done init. Double arg.
initializing RawString option 0, cmd
done init. RawString arg.
CmdLineHelper::loadRuntimeOption() for Class "oracle.opatch.opatchutil.OUSession", done.
CmdLineParser::initRuntimeOptions()
Checking on class oracle.opatch.opatchprereq.CmdLineOptions$StringArguments
Get list of fields defined in the class oracle.opatch.opatchprereq.CmdLineOptions$StringArguments
There are 3 fields defined in this class.
adding option "phbasedir"
adding option "patchids"
adding option "phbasefile"
Checking on class oracle.opatch.opatchprereq.CmdLineOptions$BooleanArguments
Get list of fields defined in the class oracle.opatch.opatchprereq.CmdLineOptions$BooleanArguments
There are 2 fields defined in this class.
adding option "booleanarg1"
adding option "booleanarg2"
Checking on class oracle.opatch.opatchprereq.CmdLineOptions$IntegerArguments
Get list of fields defined in the class oracle.opatch.opatchprereq.CmdLineOptions$IntegerArguments
There are 2 fields defined in this class.
adding option "integerarg1"
adding option "integerarg2"
Checking on class oracle.opatch.opatchprereq.CmdLineOptions$StringtegerArguments
Get list of fields defined in the class oracle.opatch.opatchprereq.CmdLineOptions$StringtegerArguments
There are 2 fields defined in this class.
adding option "stringtegerarg1"
adding option "stringtegerarg2"
Checking on class oracle.opatch.opatchprereq.CmdLineOptions$DoubleArguments
Get list of fields defined in the class oracle.opatch.opatchprereq.CmdLineOptions$DoubleArguments
There are 2 fields defined in this class.
adding option "doublearg1"
adding option "doublearg2"
CmdLineHelper::loadRuntimeOption() for Class "oracle.opatch.opatchprereq.PQSession"
initializing String option 0, phbasedir
initializing String option 1, patchids
initializing String option 2, phbasefile
done init. String arg.
initializing Boolean option 0, booleanarg1
initializing Boolean option 1, booleanarg2
done init. Boolean arg.
initializing Integer option 0, integerarg1
initializing Integer option 1, integerarg2
done init. Integer arg.
initializing StringTeger option 0, stringtegerarg1
initializing StringTeger option 1, stringtegerarg2
done init. SringTeger arg.
initializing Double option 0, doublearg1
initializing Double option 1, doublearg2
done init. Double arg.
CmdLineHelper::loadRuntimeOption() for Class "oracle.opatch.opatchprereq.PQSession", done.
reqVer For using getEnv() = 10.2.0.4.0
curVer = 11.2.0.3.0
Current Ver later than required? :true
Current Ver equals required? :false
Checking EMDROOT using OUI's API...
CmdLineParser.processOPatchProperties() begins
CmdLineParser.processOPatchProperties() ends
OUIReplacer::runEnvScript() called
SystemCall:RuntimeExec(cmds, runDir): GOING to start thread to read Input Stream
SystemCall:RuntimeExec(cmds, runDir): Started thread to read Input Stream
SystemCall:RuntimeExec(cmds, runDir): GOING to start thread to read Error Stream
ReaderThread::run(): Stream InputStream about to be read
ReaderThread::run(): Stream InputStream about to be readReaderThread::run(): Stream ErrorStream about to be read
SystemCall:RuntimeExec(cmds, runDir): Started thread to read Error Stream
SystemCall:RuntimeExec(cmds, runDir): Started thread to read Error StreamSystemCall:RuntimeExec(cmds, runDir): GOING into process.waitFor()
ReaderThread::run(): Stream ErrorStream reading completed
SystemCall:RuntimeExec(cmds, runDir): process.waitFor() is OVER
ReaderThread::run(): Stream InputStream reading completed
SystemCall:RuntimeExec(cmds, runDir): Error stream thread joined successfully
SystemCall:RuntimeExec(cmds, runDir): Error stream thread joined successfullySystemCall:RuntimeExec(cmds, runDir): Input stream thread joined successfully
OUIReplacer::setKeyValue() called
OPatchSession::main()
Environment:
OPatch.ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
oracle.installer.invPtrLoc=/etc/oraInst.loc
oracle.installer.oui_loc=/u01/app/oracle/product/11.2.0/dbhome_1/oui
oracle.installer.library_loc=/u01/app/oracle/product/11.2.0/dbhome_1/oui/lib/linux64
oracle.installer.startup_location=/u01/app/oracle/product/11.2.0/dbhome_1/oui
OPatch.PLATFORM_ID=
os.name=Linux
OPatch.NO_FUSER=
OPatch.SKIP_VERIFY=null
OPatch.SKIP_VERIFY_SPACE=null
oracle.installer.clusterEnabled=false
TRACING.ENABLED=TRUE
TRACING.LEVEL=2
OPatch.DEBUG=true
OPATCH_VERSION=11.2.0.1.7
Bundled OPatch Property File=properties
Minimum OUI version: 10.2
OPatch.PATH=/home/oracle/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin:/bin::/home/oracle/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin:/u01/app/oracle/product/11.2.0/dbhome_1/bin:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch:/home/oracle/:/tmp/login.sql:/home/oracle/dba/tanel:/home/oracle/dba/scripts:/home/oracle/dba/bin:
Stand-Alone home : false
OPatch.MW_HOME=
OPatch.WL_HOME=
OPatch.COMMON_COMPONENTS_HOME=
Environment:
OPatch.ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
oracle.installer.invPtrLoc=/etc/oraInst.loc
oracle.installer.oui_loc=/u01/app/oracle/product/11.2.0/dbhome_1/oui
oracle.installer.library_loc=/u01/app/oracle/product/11.2.0/dbhome_1/oui/lib/linux64
oracle.installer.startup_location=/u01/app/oracle/product/11.2.0/dbhome_1/oui
OPatch.PLATFORM_ID=
os.name=Linux
OPatch.NO_FUSER=
OPatch.SKIP_VERIFY=null
OPatch.SKIP_VERIFY_SPACE=null
oracle.installer.clusterEnabled=false
TRACING.ENABLED=TRUE
TRACING.LEVEL=2
OPatch.DEBUG=true
OPATCH_VERSION=11.2.0.1.7
Bundled OPatch Property File=properties
Minimum OUI version: 10.2
OPatch.PATH=/home/oracle/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin:/bin::/home/oracle/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin:/u01/app/oracle/product/11.2.0/dbhome_1/bin:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch:/home/oracle/:/tmp/login.sql:/home/oracle/dba/tanel:/home/oracle/dba/scripts:/home/oracle/dba/bin:
Stand-Alone home : false
OPatch.MW_HOME=
OPatch.WL_HOME=
OPatch.COMMON_COMPONENTS_HOME=
Oracle Home : /u01/app/oracle/product/11.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.7
OUI version : 11.2.0.3.0
Log file location : /u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2012-10-18_14-11-48PM.log
OUISessionManager::instantiate()
lockCentralInventory(): OUISessionManager::lockCentralInventory() will retry 0 times with 120-second interval to get an Inventory lock.
OUISessionManager::lockCentralInventory() try round # 1
OUISessionManager::setupSession()
OUISessionManager::setupSession() instantiates a OUIInventorySession obj.
OUISessionManager::setupSession() init. the session
OUISessionManager::setupSession() sets up READ-ONLY session
org.xml.sax.SAXParseException: <Line 13, Column 12>: XML-20121: (Fatal Error) End tag does not match start tag 'HOME'.
at oracle.xml.parser.v2.XMLError.flushErrorHandler(XMLError.java:422)
at oracle.xml.parser.v2.XMLError.flushErrors1(XMLError.java:287)
at oracle.xml.parser.v2.NonValidatingParser.parseEndTag(NonValidatingParser.java:1433)
at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java:1378)
at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:399)
at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:345)
at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:226)
at oracle.sysman.oii.oiii.OiiiInstallXMLReader.readHomes(OiiiInstallXMLReader.java:153)
at oracle.sysman.oii.oiii.OiiiInstallXMLReader.readHomes(OiiiInstallXMLReader.java:91)
at oracle.sysman.oii.oiii.OiiiInstallInventory.readHomes(OiiiInstallInventory.java:726)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.loadPartialInstallInv(OiiiInstallAreaControl.java:776)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initInstallInv(OiiiInstallAreaControl.java:821)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.loadInstallInventory(OiiiInstallAreaControl.java:592)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initAreaControl(OiiiInstallAreaControl.java:1977)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initAreaControl(OiiiInstallAreaControl.java:1930)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:301)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:240)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:189)
at oracle.opatch.OUIInventorySession.initSession(OUIInventorySession.java:63)
at oracle.opatch.OUISessionManager.setupSession(OUISessionManager.java:150)
at oracle.opatch.OUISessionManager.lockCentralInventory(OUISessionManager.java:267)
at oracle.opatch.OUISessionManager.instantiate(OUISessionManager.java:87)
at oracle.opatch.OUISessionManager.updateOPatchEnvironment(OUISessionManager.java:661)
at oracle.opatch.InventorySessionManager.updateOPatchEnvironment(InventorySessionManager.java:91)
at oracle.opatch.OPatchSession.main(OPatchSession.java:1627)
at oracle.opatch.OPatch.main(OPatch.java:651)
org.xml.sax.SAXParseException: <Line 13, Column 12>: XML-20121: (Fatal Error) End tag does not match start tag 'HOME'.
at oracle.xml.parser.v2.XMLError.flushErrorHandler(XMLError.java:422)
at oracle.xml.parser.v2.XMLError.flushErrors1(XMLError.java:287)
at oracle.xml.parser.v2.NonValidatingParser.parseEndTag(NonValidatingParser.java:1433)
at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java:1378)
at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:399)
at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:345)
at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:226)
at oracle.sysman.oii.oiii.OiiiInstallXMLReader.readHomes(OiiiInstallXMLReader.java:153)
at oracle.sysman.oii.oiii.OiiiInstallXMLReader.readHomes(OiiiInstallXMLReader.java:91)
at oracle.sysman.oii.oiii.OiiiInstallInventory.readHomes(OiiiInstallInventory.java:726)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.loadPartialInstallInv(OiiiInstallAreaControl.java:776)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initInstallInv(OiiiInstallAreaControl.java:821)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.loadInstallInventory(OiiiInstallAreaControl.java:592)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initAreaControl(OiiiInstallAreaControl.java:1977)
at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initAreaControl(OiiiInstallAreaControl.java:1930)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:301)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:240)
at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:189)
at oracle.opatch.OUIInventorySession.initSession(OUIInventorySession.java:63)
at oracle.opatch.OUISessionManager.setupSession(OUISessionManager.java:150)
at oracle.opatch.OUISessionManager.lockCentralInventory(OUISessionManager.java:267)
at oracle.opatch.OUISessionManager.instantiate(OUISessionManager.java:87)
at oracle.opatch.OUISessionManager.updateOPatchEnvironment(OUISessionManager.java:661)
at oracle.opatch.InventorySessionManager.updateOPatchEnvironment(InventorySessionManager.java:91)
at oracle.opatch.OPatchSession.main(OPatchSession.java:1627)
at oracle.opatch.OPatch.main(OPatch.java:651)
OUISessionManager::setupSession() throws IOException
OUISessionManager::setupSession() done
reqVer = 10.2
curVer = 11.2.0.3.0
Current Ver later than required? :true
Current Ver equals required? :false
OracleHomeInventory::createInventoryObj()
OracleHomeInventory::createInventoryObj() gets OUIInventorySession object
Locker::lock()
calling lockCentralInventory()
OUISessionManager::getInventorySession()
Caller Details:
Caller Name : OPatch Caller Version : 11.2.0.1.7 Requested Read-only access : true Oracle Home : /u01/app/oracle/product/11.2.0/dbhome_1
OUISessionManager::instantiate()
lockCentralInventory(): OUISessionManager::lockCentralInventory() will retry 30 times with 120-second interval to get an Inventory lock.
OUISessionManager::lockCentralInventory() try round # 1
OUISessionManager::setupSession()
OUISessionManager::setupSession() instantiates a OUIInventorySession obj.
OUISessionManager::setupSession() init. the session
OUISessionManager::setupSession() sets up READ-ONLY session
OUISessionManager::setupSession() done
OUISessionManager::lockCentralInventory() set up session OK
OUISessionManager::register()
Registering the caller : OPatch
OracleHomeInventory::createInventoryObj() gets OUIInstallAreaControl object
OracleHomeInventory::createInventoryObj() gets OUIInstallInventory object
OracleHomeInventory::createInventoryObj() gets OUIOracleHomeInfo object
OracleHomeInfo::lock() fails, and there is no retry supported.
OracleHomeInventory::createInventoryObj() gets a null OUIOracleHomeInfo object
OracleHomeInventory::createInventoryObj() tries to print a list of Oracle Homes on this system
OracleHomeInventory::createInventoryObj() Your Oracle Home path: "/u01/app/oracle/product/11.2.0/dbhome_1"
List of Homes on this system:
OracleHomeInventory::createInventoryObj() construction done
LsInventory::loadAndPrintInventory()
Retrieving inventory from Oracle Home...
OracleHomeInventory::load()
Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
Oracle Home dir. path does not exist in Central Inventory
Oracle Home is a symbolic link
Oracle Home inventory is corrupted
Locker::release()
OUISessionManager::unRegister()
Un-Registering the caller : OPatch
LsInventory::getInstance() returns
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo
Cleaning up the directory : "/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/patch_unzip"...
OPatch failed with error code 73
oracle@desktopserver.local:/u01/app/oracle/agent12c:dw
}}}
http://www.novell.com/coolsolutions/appnote/19386.html
http://ubuntuforums.org/archive/index.php/t-436850.html
http://www.linuxquestions.org/questions/linux-newbie-8/unknown-device-with-lvm-292280/
https://docs.oracle.com/javadb/10.8.3.0/tuning/ctunoptimz42425.html
! good stuff
https://www.frc.ri.cmu.edu/~hpm/book97/ch3/processor.list.txt
http://perfdynamics.blogspot.kr/2009/04/modern-microprocessor-mips.html
http://stackoverflow.com/questions/12941247/how-to-convert-process-cpu-core-based-upon-mips
http://www.clipper.com/research/TCG2012019.pdf
http://www.mail-archive.com/linux-390@vm.marist.edu/msg18587.html
http://mainframe.typepad.com/blog/2011/12/migrating-off-the-mainframe.html
http://forums.theregister.co.uk/forum/1/2013/03/26/oracle_ellison_m5_t5_launch/#c_1774219
! others
http://www-03.ibm.com/systems/z/hardware/zenterprise/zec12_specs.html
http://www.osnews.com/thread?438597
http://www.cpushack.com/2012/08/30/we-are-hitting-the-limits-of-physics-in-many-cases-ibm-zec12-5-5ghz/
RPE2 whitepaper http://as.ideascp.com/cp/RPE2_Whitepaper.pdf
http://www.gartner.com/technology/research/ideas-cp-server-car.jsp
http://en.wikipedia.org/wiki/RPE2
RPE2 on sizing https://www.ibm.com/developerworks/mydeveloperworks/blogs/IndustryBPTSE/entry/how_i_go_about_sizing?lang=en
kevin on RPE2 https://twitter.com/kevinclosson/status/326812405239803904
! generate the data points
* from this link http://www.tpc.org/information/results_spreadsheet.asp save this http://www.tpc.org/downloaded_result_files/tpcc_results.xls as csv file (tpc.csv)
''-- computation using cores'' (USE THIS by default)
{{{
# keep the useful columns, drop blank rows and the header row
cat tpc.csv | awk -F',' '{ print $4",",$6",",$7",",$8",",$9",",$10",",$13",",$16",",$18",",$23"," }' | grep -v ", ," | grep -v "System, tpmC, Price/Perf," > compute.csv
# prepend tpmC/core ($2 = tpmC, $8 = Total Server Cores)
awk -F',' '{print $2/$8",", $0}' compute.csv > tpmC.csv
}}}
''-- computation using threads''
{{{
cat tpc.csv | awk -F',' '{ print $4",",$6",",$7",",$8",",$9",",$10",",$13",",$17",",$18",",$23"," }' | grep -v ", ," | grep -v "System, tpmC, Price/Perf," > compute.csv
awk -F',' '{print $2/$8",", $0}' compute.csv > tpmC.csv
}}}
{{{
below are the header variables for the cores computation
-- raw
System, tpmC, Price/Perf, Total Sys. Cost, Currency, Database Software, Server CPU Type, Total Server Cores, Cluster, Date Submitted
-- raw divide
tpmC / Total Server Cores
-- this is the final header
tpmC/core, System, tpmC, Price/Perf, Total System Cost, Currency, Database Software, Server CPU Type, Total Server Cores, Cluster, Date Submitted
}}}
! the data points header
-- this is the header
tpmC/core, System, tpmC, Price/Perf, Total System Cost, Currency, Database Software, Server CPU Type, Total Server Cores, Cluster, Date Submitted
! this is the TPC-C results data set - ''CORES''
{{{
Karl@Karl-LaptopDell ~/home
$ cat tpmC.csv | grep -i ibm | grep -i oracle | sort -rnk1
101116, IBM System p 570 , 404462, 3.5, Oracle Database 10g Enterprise Edition , IBM POWER6 - 4.7 GHz , 4, N, 8/6/2007,
101116, Bull Escala PL1660R , 404462, 3.51, Oracle Database 10g Enterprise Edition , IBM POWER6 - 4.7 GHz , 4, N, 12/17/2007,
59067.8, IBM System p5 570 4P c/s , 236271, 2.43, Oracle Database 10g R2 Enterprise Edition , IBM POWER5+ - 2.2 GHz , 4, N, 10/4/2007,
50860, IBM eServer p5 570 , 203439.87, 3.93, Oracle Database 10g Enterprise Edition , IBM POWER5 - 1.9 GHz , 4, N, 10/17/2005,
50055.8, IBM eServer p5 595 , 1601784.98, 5.05, Oracle Database 10g Enterprise Edition , IBM POWER5 - 1.9 GHz , 32, N, 4/20/2005,
48597.9, IBM eServer p5 570 , 194391.43, 5.62, Oracle 10g , IBM POWER5 - 1.9 GHz , 4, N, 7/12/2004,
46380.5, IBM eServer p5 570 , 371044.22, 5.26, Oracle 10g , IBM POWER5 - 1.9 GHz , 8, N, 7/12/2004,
24026.2, IBM eServer pSeries 690 Turbo 7040-681 , 768839.4, 8.55, Oracle Database 10g Enterprise Edition , IBM POWER4 - 1700 MHz , 32, N, 9/12/2003,
13128.1, IBM eServer pSeries 660 Model 6M1 , 105025.02, 23.45, Oracle 9i Database Enterprise Ed. 9.0.1 , IBM RS64-IV - 750 MHz , 8, N, 9/10/2001,
13128.1, Bull Escala PL800R , 105025.02, 25.41, Oracle 9i Database Enterprise Ed. 9.0.1 , IBM RS64-IV - 750 MHz , 8, N, 9/26/2001,
12601.7, Bull Escala PL3200R , 403255.46, 17.96, Oracle 9i R2 Enterprise Edition , IBM POWER4 - 1300 MHz , 32, N, 6/3/2002,
9557.82, IBM eServer pSeries 660 , 57346.93, 28.47, Oracle9i Database Ent. Edition 9.0.1 , IBM RS64-IV - 668 MHz , 6, N, 4/23/2001,
9557.82, Bull Escala PL600R C/S , 57346.93, 28.57, Oracle9i.9.0.1 Enterprise Edition , IBM RS64-IV - 668 MHz , 6, N, 7/3/2001,
9200.3, IBM eServer pSeries 680 Model 7017-S85 , 220807.27, 29.3, Oracle 8i Enterprise Edition v. 8.1.7 , IBM RS64-IV - 600 MHz , 24, N, 6/22/2001,
9200.3, Bull Escala EPC2450 c/s , 220807.27, 34.67, Oracle 8i Enterprise Edition v. 8.1.7 , IBM RS64-IV - 600 MHz , 24, N, 5/28/2001,
8343.78, IBM RS/6000 Enterprise Server M80 c/s , 66750.27, 37.8, Oracle 8i Enterprise Edition v. 8.1.7 , IBM RS64-III - 500 MHz , 8, N, 6/22/2001,
8343.78, Bull Escala Epc 810 c/s , 66750.27, 37.57, Oracle 8i Enterprise Edition v. 8.1.7 , IBM RS64-III - 500 MHz , 8, N, 5/28/2001,
Karl@Karl-LaptopDell ~/home
$ cat tpmC.csv | grep -i "e5" | grep -i oracle
100574, Cisco UCS C240 M3 Rack Server , 1609186.39, 0.47, Oracle Database 11g Standard Edition One , Intel Xeon E5-2690 - 2.90 GHz , 16, N, 9/27/2012,
}}}
! this is the TPC-C results data set - ''THREADS''
{{{
-- check the first row for IBM with submitted date 8/6/2007
Karl@Karl-LaptopDell ~/home
$ cat tpmC.csv | grep -i ibm | grep -i oracle | sort -rnk1
50557.8, IBM System p 570 , 404462, 3.5, Oracle Database 10g Enterprise Edition , IBM POWER6 - 4.7 GHz , 8, N, 8/6/2007,
50557.8, Bull Escala PL1660R , 404462, 3.51, Oracle Database 10g Enterprise Edition , IBM POWER6 - 4.7 GHz , 8, N, 12/17/2007,
29533.9, IBM System p5 570 4P c/s , 236271, 2.43, Oracle Database 10g R2 Enterprise Edition , IBM POWER5+ - 2.2 GHz , 8, N, 10/4/2007,
25430, IBM eServer p5 570 , 203439.87, 3.93, Oracle Database 10g Enterprise Edition , IBM POWER5 - 1.9 GHz , 8, N, 10/17/2005,
25027.9, IBM eServer p5 595 , 1601784.98, 5.05, Oracle Database 10g Enterprise Edition , IBM POWER5 - 1.9 GHz , 64, N, 4/20/2005,
24298.9, IBM eServer p5 570 , 194391.43, 5.62, Oracle 10g , IBM POWER5 - 1.9 GHz , 8, N, 7/12/2004,
24026.2, IBM eServer pSeries 690 Turbo 7040-681 , 768839.4, 8.55, Oracle Database 10g Enterprise Edition , IBM POWER4 - 1700 MHz , 32, N, 9/12/2003,
23190.3, IBM eServer p5 570 , 371044.22, 5.26, Oracle 10g , IBM POWER5 - 1.9 GHz , 16, N, 7/12/2004,
13128.1, Bull Escala PL800R , 105025.02, 25.41, Oracle 9i Database Enterprise Ed. 9.0.1 , IBM RS64-IV - 750 MHz , 8, N, 9/26/2001,
12601.7, Bull Escala PL3200R , 403255.46, 17.96, Oracle 9i R2 Enterprise Edition , IBM POWER4 - 1300 MHz , 32, N, 6/3/2002,
9557.82, Bull Escala PL600R C/S , 57346.93, 28.57, Oracle9i.9.0.1 Enterprise Edition , IBM RS64-IV - 668 MHz , 6, N, 7/3/2001,
9200.3, Bull Escala EPC2450 c/s , 220807.27, 34.67, Oracle 8i Enterprise Edition v. 8.1.7 , IBM RS64-IV - 600 MHz , 24, N, 5/28/2001,
8343.78, IBM RS/6000 Enterprise Server M80 c/s , 66750.27, 37.8, Oracle 8i Enterprise Edition v. 8.1.7 , IBM RS64-III - 500 MHz , 8, N, 6/22/2001,
8343.78, Bull Escala Epc 810 c/s , 66750.27, 37.57, Oracle 8i Enterprise Edition v. 8.1.7 , IBM RS64-III - 500 MHz , 8, N, 5/28/2001,
6564.06, IBM eServer pSeries 660 Model 6M1 , 105025.02, 23.45, Oracle 9i Database Enterprise Ed. 9.0.1 , IBM RS64-IV - 750 MHz , 16, N, 9/10/2001,
4778.91, IBM eServer pSeries 660 , 57346.93, 28.47, Oracle9i Database Ent. Edition 9.0.1 , IBM RS64-IV - 668 MHz , 12, N, 4/23/2001,
4600.15, IBM eServer pSeries 680 Model 7017-S85 , 220807.27, 29.3, Oracle 8i Enterprise Edition v. 8.1.7 , IBM RS64-IV - 600 MHz , 48, N, 6/22/2001,
Karl@Karl-LaptopDell ~/home
$ cat tpmC.csv | grep -i "e5" | grep -i oracle
50287.1, Cisco UCS C240 M3 Rack Server , 1609186.39, 0.47, Oracle Database 11g Standard Edition One , Intel Xeon E5-2690 - 2.90 GHz , 32, N, 9/27/2012,
}}}
! this is the SPECint_rate2006 of the two servers being compared
{{{
$ cat spec.txt | grep -i ibm | grep -i "p 570" | sort -rnk1
30.5, 4, 2, 2, 2, 108, 122, IBM Corporation, IBM System p 570 (4.7 GHz 4 core RHEL)
30.5, 4, 2, 2, 2, 106, 122, IBM Corporation, IBM System p 570 (4.7 GHz 4 core)
30.45, 2, 1, 2, 2, 53.2, 60.9, IBM Corporation, IBM System p 570 (4.7 GHz 2 core)
30.375, 8, 4, 2, 2, 210, 243, IBM Corporation, IBM System p 570 (4.7 GHz 8 core RHEL)
30.25, 16, 8, 2, 2, 420, 484, IBM Corporation, IBM System p 570 (4.7 GHz 16 core RHEL)
30, 8, 4, 2, 2, 206, 240, IBM Corporation, IBM System p 570 (4.7 GHz 8 core)
29.875, 16, 8, 2, 2, 410, 478, IBM Corporation, IBM System p 570 (4.7 GHz 16 core)
29.5, 4, 2, 2, 2, 105, 118, IBM Corporation, IBM System p 570 (4.7 GHz 4 core SLES)
29.25, 8, 4, 2, 2, 204, 234, IBM Corporation, IBM System p 570 (4.7 GHz 8 core SLES)
29.125, 16, 8, 2, 2, 407, 466, IBM Corporation, IBM System p 570 (4.7 GHz 16 core SLES)
$ cat spec.txt | grep -i cisco | grep -i ucs | grep -i c240 | sort -rnk1 | grep -i 2690
43.625, 16, 2, 8, 2, 670, 698, Cisco Systems, Cisco UCS C240 M3 (Intel Xeon E5-2690 2.90 GHz)
}}}
! this is the TPC-C executive summary of the two hardware compared
''IBM System p 570'' http://c970058.r58.cf2.rackcdn.com/individual_results/IBM/IBM_570_4_20070806_es.pdf
''Cisco UCS C240 M3 Rack Server'' http://www.tpc.org/results/individual_results/Cisco/Cisco-Oracle%20C240%20TPC-C%20ES.pdf
<<showtoc>>
! Background
<<<
I got this hack of SPECint_rate2006 data, which you can download here https://www.dropbox.com/s/egoe79z2t0z4t25/spec-20120703.txt
There you can easily grep/search for a specific server/processor model to compare CPU speed.. so let's say I found out that the SPARC is half the speed of the Xeon and the IBM is double the speed of the Xeon (at least)..
if you go to this link it will show the spec comparison of x2, x2-8, storage cells, and ODA http://goo.gl/doBI5
spec numbers
* Exadata compute nodes
x2-2
{{{
$ cat spec.txt | sort -rnk1 -t',' | grep -i x4170 | grep 5670
29.4167, 12, 2, 6, 2, 316, 353, Oracle Corporation, Sun Fire X4170 M2 (Intel Xeon X5670 2.93GHz)
}}}
x2-8
{{{
$ cat spec.txt | sort -rnk1 -t',' | grep -i x4800
21.5625, 64, 8, 8, 2, 1260, 1380, Oracle Corporation, Sun Fire X4800 (Intel Xeon X7560 2.26GHz)
}}}
* Storage Cells
{{{
$ cat spec.txt | sort -rnk1 -t',' | grep -i "x4270 m2"
32.5833, 12, 2, 6, 2, 367, 391, Oracle Corporation, Sun Fire X4270 M2 (Intel Xeon X5675 3.06 GHz)
30.3333, 12, 2, 6, 2, 336, 364, Oracle Corporation, Sun Fire X4270 M2 (Intel Xeon X5680 3.33GHz)
27.875, 8, 2, 4, 2, 210, 223, Oracle Corporation, Sun Fire X4270 M2 (Intel Xeon E5620 2.4GHz)
27, 12, 2, 6, 2, 304, 324, Oracle Corporation, Sun Fire X4270 M2 (Intel Xeon X5649 2.53 GHz)
23.1667, 12, 2, 6, 2, 259, 278, Oracle Corporation, Sun Fire X4270 M2 (Intel Xeon L5640 2.27 GHz)
}}}
* ODA
{{{
$ cat spec.txt | sort -rnk1 -t',' | grep -i "X5675" | grep -i sun
33.4167, 12, 2, 6, 2, 384, 401, Oracle Corporation, Sun Blade X6270 M2 Server Module (Intel Xeon X5675 3.06 GHz)
32.5833, 12, 2, 6, 2, 367, 391, Oracle Corporation, Sun Fire X4270 M2 (Intel Xeon X5675 3.06 GHz)
32.5833, 12, 2, 6, 2, 367, 391, Oracle Corporation, Sun Fire X4170 M2 (Intel Xeon X5675 3.06 GHz)
32.1667, 12, 2, 6, 2, 361, 386, Oracle Corporation, Sun Fire X2270 M2 (Intel Xeon X5675 3.06 GHz)
32.0833, 12, 2, 6, 2, 361, 385, Oracle Corporation, Sun Blade X6275 M2 Server Module (Intel Xeon X5675 3.06 GHz)
31.4583, 24, 4, 6, 2, 717, 755, Oracle Corporation, Sun Blade X6275 M2 Server Module (Intel Xeon X5675 3.06 GHz)
}}}
Why SPEC?
I'd like to make use of this metric as a cpu_speed multiplier for server sizing purposes.
I've been eyeing these SPECint_rate numbers for a long time but couldn't make them fit for sizing Oracle database servers, or at that time I just didn't know how to use them, until I saw Rick Greenwald's slides on the Exadata Technical Bootcamp and a couple of other URLs. I've also noticed that this metric is used in the OEM12c Consolidation Planner (http://goo.gl/MRYvK).
What I find really accurate is just running a test harness on the server itself: I run a CPU speed test that reads 1 million rows entirely from the buffer cache, and the time it takes to read them becomes the cpu_speed metric (a rough sketch of the idea follows after this quote). Of course I take multiple samples and make sure the number is consistent across those runs. With this I was able to successfully forecast the CPU utilization of a RAC downsize migration (4->2 nodes) from Itanium to x86. See here http://twitpic.com/9y84vo/full
But there are other ways of deriving the cpu_speed of a server, and SPECint is one of them.
So if Oracle uses it in their presentations and in OEM12c, then I can also make sense of it.
Right now I'm still playing around with what cool things I can do with it..
<<<
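As an illustration of that harness idea, a hypothetical sketch (not the actual test; it assumes sqlplus on the box, a placeholder scott/tiger login, and a 1M-row table named T1):
{{{
#!/bin/bash
# time repeated counts of a cached 1M-row table; rows/elapsed ~ cpu_speed
sqlplus -s scott/tiger <<'EOF'
set timing on
select /*+ full(t1) */ count(*) from t1;  -- first run warms the buffer cache
select /*+ full(t1) */ count(*) from t1;  -- repeat and take multiple samples
select /*+ full(t1) */ count(*) from t1;
EOF
}}}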
Update: Check out the [[cpu core comparison]] tiddler for a step by step on HOWTO use the SPECint_rate2006
! SPECint_rate2006 data to Tableau
you can take advantage of the time dimension in Tableau for the "Published" column to compare CPU speeds across years/months. After running the sed below on the csv, just import it into Tableau
{{{
- on linux just do this
sed -i 's/Jan-/01\/01\//g' *.csv
sed -i 's/Feb-/02\/01\//g' *.csv
sed -i 's/Mar-/03\/01\//g' *.csv
sed -i 's/Apr-/04\/01\//g' *.csv
sed -i 's/May-/05\/01\//g' *.csv
sed -i 's/Jun-/06\/01\//g' *.csv
sed -i 's/Jul-/07\/01\//g' *.csv
sed -i 's/Aug-/08\/01\//g' *.csv
sed -i 's/Sep-/09\/01\//g' *.csv
sed -i 's/Oct-/10\/01\//g' *.csv
sed -i 's/Nov-/11\/01\//g' *.csv
sed -i 's/Dec-/12\/01\//g' *.csv
- on mac, prefix with LANG=C (needed for the specpower csv) and pass '' as the -i backup extension (the '' alone is enough for the specint csv)
LANG=C sed -i '' 's/Jan-/01\/01\//g' *.csv
LANG=C sed -i '' 's/Feb-/02\/01\//g' *.csv
LANG=C sed -i '' 's/Mar-/03\/01\//g' *.csv
LANG=C sed -i '' 's/Apr-/04\/01\//g' *.csv
LANG=C sed -i '' 's/May-/05\/01\//g' *.csv
LANG=C sed -i '' 's/Jun-/06\/01\//g' *.csv
LANG=C sed -i '' 's/Jul-/07\/01\//g' *.csv
LANG=C sed -i '' 's/Aug-/08\/01\//g' *.csv
LANG=C sed -i '' 's/Sep-/09\/01\//g' *.csv
LANG=C sed -i '' 's/Oct-/10\/01\//g' *.csv
LANG=C sed -i '' 's/Nov-/11\/01\//g' *.csv
LANG=C sed -i '' 's/Dec-/12\/01\//g' *.csv
}}}
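If you'd rather not maintain twelve near-identical lines, here's a compact equivalent (GNU sed shown; on mac add LANG=C and the '' argument as above):
{{{
m=0
for mon in Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec; do
  m=$((m+1))
  sed -i "s/${mon}-/$(printf '%02d' "$m")\/01\//g" *.csv
done
}}}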
! Convert SPECint_rate2006 data to text
* download the dump here http://www.spec.org/cgi-bin/osgresults?conf=rint2006 by clicking on the "Dump All Records As CSV" here http://www.spec.org/cgi-bin/osgresults?conf=rint2006;op=dump;format=csvdump
* open the CSV in MS Excel.. the filename is something like this "rint2006-results-20130210-032032.csv".. then ''Find and Replace the comma with blank'' (see this image here http://goo.gl/OE5Wf)
* save it to filename spec.csv
* use the following command to convert it to text
{{{
cat spec.csv | grep -v "# Cores" | awk -F',' '{ print $26/$4",", $4",", $5",", $6",", $7",", $27",", $26",", $2",", $3",", $56 }' > spec.txt
-- below are the variable values (raw and final header)
Result/# Cores, # Cores, # Chips, # Cores Per Chip, # Threads Per Core, Baseline, Result, Hardware Vendor, System, Published
}}}
*** note that $26/$4 will be the first column of spec.txt which is the "SPECint_rate2006/core" and is = Result/Enabled Cores
*** on the website http://www.spec.org/cpu2006/results/rint2006.html the ''Peak'' column is equivalent to the ''Result'' column on the CSV
*** the em12c Consolidation Planner SPEC_RATE column is based on the "Baseline" column and does not normalize the speed by enabled cores
* example usage
{{{
cat spec.txt | cut -d, -f1,2,3,4,5,6,7,8,9
cat spec.txt | sort -rnk7 -t',' | less <-- sort by peak
cat spec.txt | sort -rnk1 -t',' | less <-- sort by SPECint_rate2006/core
cat spec.txt | sort -rnk2 -t',' | less <-- sort by # of cores
cat spec.txt | sort -rnk1 -t',' | grep -i intel | grep -i x5670 | grep M2 | grep -i sun <-- search for Sun hardware
cat spec.txt | sort -rnk1 -t',' | grep AMD | grep -i "dl385 g7" | grep 6176 <-- search for AMD CPUs
cat spec.txt | sort -rnk1 -t',' | grep -i itanium | grep 1.6 | grep HP | grep -v dome | grep "4, 4, 2" <-- search for Itanium 2 proc on rx2620
cat spec.txt | sort -rnk1 -t',' | grep -i 3650 | grep E5540 <-- search for IBM x3650 M2 on E5540
}}}
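Since the first column is already normalized per core, you can also compute speed ratios directly. A quick sketch (the two CPU models are just examples pulled from the listings below; the pattern match keeps the last matching row of each):
{{{
# hypothetical ratio: per-core speed of an X5670-based node vs an X7560-based node
awk -F',' '/X5670 2.93GHz/ {a=$1} /X7560 2.26GHz/ {b=$1} END {print a/b}' spec.txt
# with the X4170 M2 and X4800 rows shown below this is 29.4167/21.5625 ~ 1.36,
# i.e. about 36% faster per core
}}}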
! spec.txt output header
SPECint_rate2006/core, #Cores, #Chips, #Cores/Chip, #Threads/Core, Base, Peak, Hardware Vendor, System, Published
! spec numbers
* Exadata compute nodes
<<<
V2 cpu - no spec equivalent of the same machine, but notice the speed increase from V2 to X2 is 17 to 29, which is 70%, and the LIOs/sec perf increase is from 2.1M to 3.6M LIOs/sec, which is 71%, as shown here [[cores vs threads, v2 vs x2]]
{{{
$ cat spec.txt | sort -rnk1 -t',' | grep -i "E5430" | grep "2.66"
17.05, 4, 1, 4, 1, 57.3, 68.2, Fujitsu Siemens Computers, PRIMERGY RX200 S4 Intel Xeon E5430 2.66 GHz
16.125, 8, 2, 4, 1, 119, 129, Dell Inc., PowerEdge M600 (Intel Xeon E5430 2.66 GHz)
}}}
x2-2
{{{
$ cat spec.txt | sort -rnk1 -t',' | grep -i x4170 | grep 5670
29.4167, 12, 2, 6, 2, 316, 353, Oracle Corporation, Sun Fire X4170 M2 (Intel Xeon X5670 2.93GHz)
}}}
x2-8
{{{
$ cat spec.txt | sort -rnk1 -t',' | grep -i x4800
21.5625, 64, 8, 8, 2, 1260, 1380, Oracle Corporation, Sun Fire X4800 (Intel Xeon X7560 2.26GHz)
}}}
x3-2 vs IBM power
{{{
$ cat spec.txt | grep -i ibm | grep -i "power 7" | sort -rnk1
45.9375, 32, 8, 4, 4, 1310, 1470, IBM Corporation, IBM Power 780 (4.14 GHz 32 core SLES)
45.625, 32, 8, 4, 4, 1300, 1460, IBM Corporation, IBM Power 780 (4.14 GHz 32 core)
45, 32, 4, 8, 4, 1270, 1440, IBM Corporation, IBM Power 795 (4.0 GHz 32 core)
44.1406, 256, 32, 8, 4, 9930, 11300, IBM Corporation, IBM Power 795 (4.0 GHz 256 core RedHat)
43.75, 256, 32, 8, 4, 9880, 11200, IBM Corporation, IBM Power 795 (4.0 GHz 256 core)
$ cat spec.txt | grep -i intel | grep -i "E5-26" | grep -i sun | sort -rnk1
44.0625, 16, 2, 8, 2, 632, 705, Oracle Corporation, Sun Blade X6270 M3 (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 632, 705, Oracle Corporation, Sun Blade X3-2B (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 630, 705, Oracle Corporation, Sun Server X3-2L (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 630, 705, Oracle Corporation, Sun Fire X4270 M3 (Intel Xeon E5-2690 2.9GHz)
43.875, 16, 2, 8, 2, 628, 702, Oracle Corporation, Sun Server X3-2 (Intel Xeon E5-2690 2.9GHz)
43.875, 16, 2, 8, 2, 628, 702, Oracle Corporation, Sun Fire X4170 M3 (Intel Xeon E5-2690 2.9GHz)
}}}
x3-8
{{{
$ cat spec.txt | grep -i intel | grep 8870 | sort -rnk1
27, 40, 4, 10, 2, 1010, 1080, Unisys Corporation, Unisys ES7000 Model 7600R G3 (Intel Xeon E7-8870)
26.75, 40, 4, 10, 2, 1010, 1070, NEC Corporation, Express5800/A1080a-S (Intel Xeon E7-8870)
26.75, 40, 4, 10, 2, 1010, 1070, NEC Corporation, Express5800/A1080a-D (Intel Xeon E7-8870)
26.5, 40, 4, 10, 2, 1000, 1060, Oracle Corporation, Sun Server X2-8 (Intel Xeon E7-8870 2.40 GHz)
25.875, 80, 8, 10, 2, 1960, 2070, Supermicro, SuperServer 5086B-TRF (X8OBN-F Intel E7-8870)
24.875, 80, 8, 10, 2, 1890, 1990, Oracle Corporation, Sun Server X2-8 (Intel Xeon E7-8870 2.40 GHz)
but Kevin says X3-8 is the same CPU as x2-8
http://kevinclosson.wordpress.com/2012/08/31/more-predictions-and-some-guesses-about-oracle-openworld-2012-announcements/
I’ll make this short. Sandy Bridge Xeon processors (E5-2600, E5-4600) only have 2-QPI links and they do not support glueless 8-socket designs.
Unless Sun emerged from the grave to radically change the pre-Oracle acquisition x4800 design to implement “glue” (ala HP DL980 PRIMA or IBM eX5) the
X3-8 will not include a CPU refresh. I’m willing to admit I could be totally wrong with that and thus the word guesses appears in the title of this blog entry.
I have no guesses what warrants a moniker change (from X2-8 to X3-8) if not a CPU refresh. We’ll all know soon enough.
}}}
x4-8
{{{
38 <-- SPECint_rate2006/core, https://twitter.com/karlarao/status/435882623500423168
47%+ in performance, 50%+ in cores, as fast as X4-2, big jump for Exadata X4-8 in terms of speed+capacity (120 cores)
http://www.intel.com/content/www/us/en/benchmarks/server/xeon-e7-v2/xeon-e7-v2-8s-specint-rate-base2006.html
}}}
<<<
* Storage Cells
{{{
$ cat spec.txt | sort -rnk1 -t',' | grep -i "x4270 m2"
32.5833, 12, 2, 6, 2, 367, 391, Oracle Corporation, Sun Fire X4270 M2 (Intel Xeon X5675 3.06 GHz)
30.3333, 12, 2, 6, 2, 336, 364, Oracle Corporation, Sun Fire X4270 M2 (Intel Xeon X5680 3.33GHz)
27.875, 8, 2, 4, 2, 210, 223, Oracle Corporation, Sun Fire X4270 M2 (Intel Xeon E5620 2.4GHz)
27, 12, 2, 6, 2, 304, 324, Oracle Corporation, Sun Fire X4270 M2 (Intel Xeon X5649 2.53 GHz)
23.1667, 12, 2, 6, 2, 259, 278, Oracle Corporation, Sun Fire X4270 M2 (Intel Xeon L5640 2.27 GHz)
}}}
* ODA
{{{
$ cat spec.txt | sort -rnk1 -t',' | grep -i "X5675" | grep -i sun
33.4167, 12, 2, 6, 2, 384, 401, Oracle Corporation, Sun Blade X6270 M2 Server Module (Intel Xeon X5675 3.06 GHz)
32.5833, 12, 2, 6, 2, 367, 391, Oracle Corporation, Sun Fire X4270 M2 (Intel Xeon X5675 3.06 GHz)
32.5833, 12, 2, 6, 2, 367, 391, Oracle Corporation, Sun Fire X4170 M2 (Intel Xeon X5675 3.06 GHz)
32.1667, 12, 2, 6, 2, 361, 386, Oracle Corporation, Sun Fire X2270 M2 (Intel Xeon X5675 3.06 GHz)
32.0833, 12, 2, 6, 2, 361, 385, Oracle Corporation, Sun Blade X6275 M2 Server Module (Intel Xeon X5675 3.06 GHz)
31.4583, 24, 4, 6, 2, 717, 755, Oracle Corporation, Sun Blade X6275 M2 Server Module (Intel Xeon X5675 3.06 GHz)
}}}
! References
http://en.wikipedia.org/wiki/SPECint "A more complete system-level benchmark that allows all CPUs to be used is known as SPECint_rate2006, also called "CINT2006 Rate"."
spec explanation with dilbert stories http://www.mrob.com/pub/comp/benchmarks/spec.html
spec results http://www.spec.org/cpu2006/results/
''The Xeon E5520: Popular for VMWare'' http://h30507.www3.hp.com/t5/Eye-on-Blades-Blog-Trends-in/The-Xeon-E5520-Popular-for-VMWare/ba-p/79934
''CPU IV+ , SPARC64 VI and SPARC64 VII'' http://www.motherboardpoint.com/cpu-iv-sparc64-vi-and-sparc64-vii-t185526.html
''SPEC CPU2006: Read Me First'' http://www.spec.org/cpu2006/Docs/readme1st.html#Q14
''LiveCycle Capacity Planning and Server Sizing'' http://blogs.adobe.com/livecycle/2011/01/livecycle-capacity-planning-and-server-sizing.html, http://www.adobe.com/products/livecycle/pdfs/95011594_lcds_planguide_wp_ue.pdf
Identifying Server Candidates for Virtualization http://goo.gl/vqSph
nice debate on sparc http://forums.theregister.co.uk/forum/1/2009/04/21/oracle_sun_open_source/
Driving Down Total Cost of Ownership http://download.intel.com/design/intarch/platforms/iaclient/321259.pdf
Reducing Cost and Improving Performance - Consolidating the Smaller Data Center http://www.dell.com/downloads/global/corporate/iar/20070331_clipper_dell_consolidation.pdf
http://en.wikipedia.org/wiki/Transaction_Processing_Performance_Council
http://en.wikipedia.org/wiki/Standard_Performance_Evaluation_Corporation
http://en.wikipedia.org/wiki/Benchmark_(computing)
http://en.wikipedia.org/wiki/SPECint
http://govitwiki.com/wiki/TpmC
http://support.primatelabs.com/kb/geekbench/interpreting-geekbench-scores <-- integer, floating point perf
http://www.spec.org/cgi-bin/osgresults?conf=rint2017
see SLOB in action here http://karlarao.wordpress.com/2012/06/29/the-effect-of-asm-redundancyparity-on-readwrite-iops-slob-test-case-for-exadata-and-non-exa-environments/ where it was used to investigate the ASM redundancy effect on read/write IOPS
! Installation
There are two parts to the installation: the database creation and the loading of data
* ''SLOB database creation''
<<<
* read the README here /<stage directory>/SLOB/misc/create_database_kit/README
{{{
oracle@desktopserver.local:/home/oracle/dba/benchmark/SLOB/misc/create_database_kit:slob
$ ls -ltr
total 10960
-rw-r--r-- 1 oracle dba 658 Dec 27 18:34 cr_db.sql
-rw-r--r-- 1 oracle dba 915 Feb 6 22:23 README <-- here!
-rw-r--r-- 1 oracle dba 11193506 Jun 3 15:20 createDB.lis
-rw-r--r-- 1 oracle dba 505 Jun 19 12:42 create.ora
}}}
* create a softlink to the SLOB parameter file (so that it will be easier for you to find and edit it)
{{{
ln -s /home/oracle/dba/benchmark/SLOB/misc/create_database_kit/create.ora $ORACLE_HOME/dbs/initslob.ora
}}}
* add an /etc/oratab entry and do a .oraenv on slob
{{{
oracle@desktopserver.local:/home/oracle/dba/benchmark/SLOB/misc/create_database_kit:slob
$ cat /etc/oratab
+ASM:/u01/app/11.2.0/grid:N
*:/u01/app/oracle/agent11g:N
slob:/u01/app/oracle/product/11.2.0/dbhome_1:N
dw:/u01/app/oracle/product/11.2.0/dbhome_1:N # line added by Agent
oltp:/u01/app/oracle/product/11.2.0/dbhome_1:N # line added by Agent
}}}
* edit the create.ora file with the following parameters (for IOPS test case)
{{{
db_create_file_dest = '+DATA'
control_files=('+data/slob/controlfile/Current.258.784998889')
db_name = SLOB
compatible = 11.2.0.0.0
UNDO_MANAGEMENT=AUTO
db_block_size = 8192
db_files = 300
filesystemio_options=setall
*._db_block_prefetch_limit=0
*._db_block_prefetch_quota=0
*._db_file_noncontig_mblock_read_count=0
*.cpu_count=1
*.processes=1000
*.db_cache_size=12M
*.shared_pool_size=600M
}}}
* execute the following to create the database
NOTE: edit the cr_db.sql file to change “alter tablespace SLOB” to “alter tablespace IOPS”
{{{
$ sqlplus '/ as sysdba' <<EOF
@cr_db
exit;
EOF
}}}
<<<
* ''Load the test case data + creation of users''
<<<
* read on the README-FIRST file at the /<stage directory>/SLOB directory and follow the setup steps
{{{
SETUP STEPS
-----------
1. First, create the trigger tools. Change directory to ./wait_kit
and execute "make all"
2. Next, execute the setup.sh script, e.g., sh ./setup.sh IOPS 128
3. Next, run the kit such as sh ./runit.sh 0 8
}}}
<<<
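Putting the steps together, a minimal end-to-end sketch (the stage directory path is assumed from the listings above):
{{{
cd /home/oracle/dba/benchmark/SLOB/wait_kit
make all                  # build the semaphore trigger kit
cd ..
sh ./setup.sh IOPS 128    # load 128 SLOB schemas into tablespace IOPS
sh ./runit.sh 0 8         # smoke test: 0 writers, 8 readers
}}}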
! Run the benchmark
{{{
$ sh ./runit.sh 0 32 # zero writers 32 readers
$ sh ./runit.sh 32 0 # 32 writers zero readers
$ sh ./runit.sh 16 16 # 16 of each reader/writer
}}}
! HOWTO on LIOs test case
the initial steps above are for the IOPS test case.. the following will show you how to do it with LIOs; also check out [[cpu centric benchmark comparisons]] for more details on LIOs tests
<<<
1) Increase the SGA
*.db_cache_size=10240M
*.shared_pool_size=600M
2) Edit the runit.sh and comment this portion (the sleep and sqlplus)
{{{
#sleep 10
#sqlplus -L '/as sysdba' @awr/awr_snap > /dev/null
B=$SECONDS
./trigger > /dev/null 2>&1
wait
(( TM = $SECONDS - $B ))
echo "Tm $TM"
#sqlplus -L '/as sysdba' @awr/awr_snap > /dev/null
#sqlplus -L '/as sysdba' @awr/create_awr_report > /dev/null
}}}
3) Run the LIO benchmark
{{{
while :; do ./runit.sh 0 16; done
}}}
<<<
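Since the edited runit.sh still echoes the elapsed time of each pass ("Tm <seconds>"), you can log the loop and pull the timings out afterwards. A small sketch:
{{{
# run the sustained LIO loop and keep a log of per-pass elapsed times
while :; do ./runit.sh 0 16; done | tee lio_runs.log
# in another window (or afterwards), extract just the timings
grep '^Tm' lio_runs.log
}}}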
! Instrument SLOB - read/write load profile, ASH, and latency
whenever you run this: sh ./runit.sh 128 0
you should have multiple windows executing the following to instrument your SLOB run..
''OS side of things''
check out the tool here http://collectl.sourceforge.net/
{{{
collectl --all -o T -o D >> collectl-all.txt
collectl -sc --verbose -o T -o D >> collectl-cpuverbose.txt
collectl -sD --verbose -o T -o D >> collectl-ioverbose.txt
}}}
''Database side of things''
check out the script here [[loadprof.sql]]
{{{
while : ; do sqlplus "/ as sysdba" @loadprof ; echo "--"; sleep 2 ; done
}}}
check out the script here [[ASH - aveactn300.sql]]
{{{
ash
}}}
check out the script here http://tech.e2sn.com/oracle-scripts-and-tools/session-snapper
{{{
spool snapper.txt
@snapper ash=event+wait_class,stats,gather=tsw,tinclude=CPU,sinclude=redo|reads|writes 5 5 "select sid from v$session where username like 'USER%' or program like '%DBW%' or program like '%CKP%' or program like '%LGW%'"
spool off
cat snapper.txt | grep write | egrep "DBW|CKP|LGW" | grep WAIT | sort -rnk2
cat snapper.txt | grep "db file sequential read"
cat snapper.txt | grep "free buffer waits"
cat snapper.txt | grep "write complete waits"
}}}
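If you'd rather not juggle windows manually, here's a minimal sketch that starts the OS and database collectors in the background, runs SLOB, then stops them (same script names as above):
{{{
#!/bin/bash
# OS-level collector
collectl --all -o T -o D >> collectl-all.txt &
COLLECTL_PID=$!
# database load profile sampler
( while : ; do sqlplus -S "/ as sysdba" @loadprof ; echo "--" ; sleep 2 ; done ) >> loadprof.txt &
LOADPROF_PID=$!
# run the benchmark, then stop both collectors
sh ./runit.sh 128 0
kill $COLLECTL_PID $LOADPROF_PID
}}}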
! Issues and workarounds
* you should see ''db file sequential read'' for IO waits
* change “alter tablespace SLOB” to “alter tablespace IOPS”
* increase processes from 100 to at least 200
* Some DB File Parallel Read Against An Empty Partition [ID 1367059.1] -- fix _db_block_prefetch_limit, _db_block_prefetch_quota, _db_file_noncontig_mblock_read_count
! References
http://kevinclosson.wordpress.com/2010/11/17/reintroducing-slb-the-silly-little-benchmark/
http://structureddata.org/2009/04/10/kevin-clossons-silly-little-benchmark-is-silly-fast-on-nehalem/
http://kevinclosson.wordpress.com/2007/01/30/oracle-on-opteron-with-linux-the-numa-angle-part-iii/
http://kevinclosson.wordpress.com/2012/02/06/introducing-slob-the-silly-little-oracle-benchmark/#comments <-- new
{{{
Sizing Inputs
The sizing tool sizes CPU based on "MBValues". MBValue uses a wide variety of internal workloads and
published benchmarks that are combined in an analytical model to provide sizing estimates for a
variety of application and database sizing. MBValue does not represent any specific public benchmark
results and should not be used to estimate the results of those public benchmarks.
}}}
! some mention of the m-values
http://www.oracle.com/us/solutions/eci-for-sparc-bus-wp-1659588.pdf
http://www.informationweek.com/servers/oracle-sparc-t5-cant-make-sun-rise/d/d-id/1109269?
Oracle HW geeks (exadata/sparc), what is this M-values? I don't like this being a hidden magic number, i like transparency on sizing please
https://twitter.com/karlarao/status/568162925824876544
http://www.oracle.com/us/corporate/contracts/processor-core-factor-table-070634.pdf
http://en.m.wikipedia.org/wiki/SPECpower
http://spec.org/benchmarks.html#power
http://spec.org/sert/
''performance per watt''
http://en.wikipedia.org/wiki/Performance_per_watt
http://www.zdnet.com/blog/ou/spec-launches-standardized-energy-efficiency-benchmark/927
http://avasseur.blogspot.com/2007/09/price-per-cpu-is-foolish-and-price-per.html
http://files.shareholder.com/downloads/INTC/2955738073x0x719217/75F1A495-8936-48F9-9DC1-1B10708C3FA1/Jan_19_14_Recommended_Customer_Price_List..pdf
http://www.amd.com/us/products/pricing/Pages/server-opteron.aspx
http://www.amd.com/us/products/server/benchmarks/Pages/specint-rate-base2006-perf-proc-price-four-soc.aspx
http://www.amd.com/us/products/server/benchmarks/Pages/specint-rt2006-perf-proc-price-two-soc.aspx
''blog about it''
http://www.zdnet.com/blog/ou/spec-launches-standardized-energy-efficiency-benchmark/927
http://spec.org/power_ssj2008/results/power_ssj2008.html
http://www.spec.org/power_ssj2008/results/res2007q4/power_ssj2008-20071205-00023.html
''download''
http://www.spec.org/cgi-bin/osgresults?conf=power_ssj2008
http://www.spec.org/cgi-bin/osgresults?conf=power_ssj2008;op=dump;format=csvdump
''important columns''
{{{
result <-- this is already (sum of ssj) / (sum of watt), which is the official number
processor
published
system
cores
average watts @ 100% of target load
performance/power @ 100% of target load
power supply rating (watts)
processor MHz
ssj_ops @ 100% of target load
}}}
''hacking the data''
{{{
-- cpuprice.txt
go to http://www.cpu-world.com/Price_Compare/Server_CPU_prices_(latest).html and copy contents
vi price.txt, then replace $ with USD
cat price.txt | grep " USD" > cpuprice.txt
sed -i '' 's/- USD/,/g' cpuprice.txt
sed -i '' 's/Xeon/Intel Xeon/g' cpuprice.txt
sed -i '' 's/Opteron/AMD Opteron/g' cpuprice.txt
sed -i '' 's/ //g' cpuprice.txt
-- spec.txt
cat spec.csv | awk -F , '{print $8","$0}' > spec.txt
sed -i '' 's/ //g' spec.txt
./compare.py -f cpuprice.txt -t spec.txt > specprice.txt
vi specprice.txt, then update the header with price
Processor,Benchmark,"HardwareVendor ",System,#Cores,#Chips,#CoresPerChip,#ThreadsPerCore,Processor,ProcessorMHz,ProcessorCharacteristics,CPU(s)Orderable,AutoParallelization,BasePointerSize,PeakPointerSize,1stLevelCache,2ndLevelCache,3rdLevelCache,OtherCache,Memory,OperatingSystem,FileSystem,Compiler,HWAvail,SWAvail,BaseCopies,Result,Baseline,400Peak,400Base,401Peak,401Base,403Peak,403Base,429Peak,429Base,445Peak,445Base,456Peak,456Base,458Peak,458Base,462Peak,462Base,464Peak,464Base,471Peak,471Base,473Peak,473Base,483Peak,483Base,License,TestedBy,TestSponsor,TestDate,Published,Updated,Price
LANG=C sed -i '' 's/Jan-/01\/01\//g' specprice.txt
LANG=C sed -i '' 's/Feb-/02\/01\//g' specprice.txt
LANG=C sed -i '' 's/Mar-/03\/01\//g' specprice.txt
LANG=C sed -i '' 's/Apr-/04\/01\//g' specprice.txt
LANG=C sed -i '' 's/May-/05\/01\//g' specprice.txt
LANG=C sed -i '' 's/Jun-/06\/01\//g' specprice.txt
LANG=C sed -i '' 's/Jul-/07\/01\//g' specprice.txt
LANG=C sed -i '' 's/Aug-/08\/01\//g' specprice.txt
LANG=C sed -i '' 's/Sep-/09\/01\//g' specprice.txt
LANG=C sed -i '' 's/Oct-/10\/01\//g' specprice.txt
LANG=C sed -i '' 's/Nov-/11\/01\//g' specprice.txt
LANG=C sed -i '' 's/Dec-/12\/01\//g' specprice.txt
}}}
http://www.thelinuxdaily.com/2010/03/a-quick-cpu-intensive-script-for-testing-purposes-with-md5sum/
http://stackoverflow.com/questions/141707/how-to-set-cpu-load-on-a-red-hat-linux-box
http://www.ibm.com/developerworks/linux/library/l-stress/index.html
http://unixfoo.blogspot.com/2008/11/linux-cpu-hammer-script.html
''cpu binding and dynamic allocation test cases..'' measuring the effect on per thread performance (note here that 1 cpu = 1 thread)
check out the detailed screenshots here http://www.evernote.com/shard/s48/sh/9731ac86-59b8-4a6e-86a5-450ac0cbfbd7/d9c8d2c0fda63e0cc2c172d83e46e6a3 which includes the turbostat output
summary screenshot below
[img[ https://lh5.googleusercontent.com/-vxbpy6bhprw/UPO9aTfcQ_I/AAAAAAAAByo/BS9eF2Oy44Q/s2048/numa-all-2dw-summary.png ]]
! 1) 2sessions on 1cpu
{{{
numactl --physcpubind=5 ./saturate 2 dw
}}}
notice that even before it reaches the CPU_COUNT it starts accumulating CPU Wait, because 2 sessions are contending for 1 cpu/thread. This causes a much lower per thread LIO performance, unlike the remaining test cases below where the sessions are either dynamically allocated or dedicated 1 cpu each.
also at the workload level, since it is essentially just using 1 cpu, the Per Second throughput is just 298K, which is really just the throughput of 1 cpu.. this is then shared by 2 sessions, causing a much lower per thread LIO (see the LIOS_ELAP column)
you have to be aware of this kind of behavior and its possible effect when doing cpu binding
also, when you do this you will not see any WAIT IO (the vmstat wa column stays at 0)
{{{
$ vmstat 1 10000
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 1 11696 587544 534824 10496068 0 0 503 626 18 172 64 20 12 4 0
2 0 11696 587648 534824 10496068 0 0 64 176 5583 9561 13 0 87 0 0
2 0 11696 587836 534824 10496068 0 0 64 32 5292 8605 13 0 87 0 0
2 0 11696 587868 534824 10496068 0 0 0 4 5531 9424 13 0 87 0 0
2 0 11696 588152 534824 10496068 0 0 0 16 5287 8603 13 0 87 0 0
2 0 11696 578240 534824 10496068 0 0 64 81 5590 9431 13 0 86 0 0
2 0 11696 574528 534824 10496068 0 0 241 219 5492 8912 14 0 86 0 0
3 1 11696 563088 534824 10496068 0 0 736 170 5601 8730 13 0 86 0 0
2 0 11696 572624 534824 10496072 0 0 400 33 5954 8896 16 0 84 0 0
2 0 11696 572624 534824 10496072 0 0 0 80 5372 8451 13 0 87 0 0
2 0 11696 572624 534824 10496072 0 0 64 16 5236 8133 13 0 87 0 0
}}}
<<<
''workload level: check "Per Second" column''
{{{
TM Load Profile Per Second Per Transaction
----------------- -------------------- -------------- ---------------
01/13/13 02:42:29 Logical reads 298,794.5 690,330.2
}}}
''session level: check "LIOS_ELAP" column''
{{{
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------
01/13/13 02:37:56, 214, 62442962, 209.1, 418.52, 209.41, .98, 1.96, .98, 291789.54, 149200.93
}}}
<<<
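To confirm that the binding actually took effect, you can check the affinity mask numactl set and the CPU each process last ran on. A quick sketch (<pid> is whatever oracle shadow process you want to check):
{{{
# show the allowed-CPU list for a process
taskset -cp <pid>
# show which CPU each oracle process last ran on (psr column)
ps -eo pid,psr,comm | grep oracle
}}}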
! 2) 1session on each cpu
{{{
numactl --physcpubind=4 ./saturate 1 dw
numactl --physcpubind=5 ./saturate 1 dw
}}}
<<<
''workload level: check "Per Second" column''
{{{
TM Load Profile Per Second Per Transaction
----------------- -------------------- -------------- ---------------
01/13/13 02:50:03 Logical reads 580,815.5 1,516,938.6
}}}
''session level: check "LIOS_ELAP" column''
{{{
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------
01/13/13 02:49:45, 373, 108796562, 374.1, 374.33, .24, 1, 1, 0, 291679.79, 290641.94
}}}
<<<
! 3) 2sessions no numa binding (dynamic)
{{{
./saturate 2 dw
}}}
<<<
''workload level: check "Per Second" column''
{{{
TM Load Profile Per Second Per Transaction
----------------- -------------------- -------------- ---------------
01/13/13 02:59:20 Logical reads 578,101.4 2,670,828.5
}}}
''session level: check "LIOS_ELAP" column''
{{{
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------
01/13/13 03:04:19, 852, 248384635, 855.95, 857.25, 1.31, 1, 1.01, 0, 291531.26, 289745.2
}}}
<<<
! Swingbench on a CPU centric micro bench scenario VS cputoolkit
The thing here is that the more consistent and the fewer parameters you have in your test cases,
the more repeatable they will be and the easier it is for you to measure the response time effects
of any environment and configuration changes
* It's doing IO, so you'll not only see CPU on your AAS.. but also log file sync when you ramp up the number of users, so if you have slow disks you are burning some of your response time on IO
* As you ramp up more users, let's say from 1 to 50.. and as you do more CPU WAIT IO and increase your load average, you'll start getting ORA-03111
{{{
12:00:01 AM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15
12:10:01 AM 5 813 0.00 0.00 0.00
12:20:01 AM 5 811 0.00 0.00 0.00
02:20:01 PM 2 1872 4.01 20.40 11.73
02:30:02 PM 4 1896 327.89 126.20 50.80 <-- load avg
02:40:01 PM 6 871 0.31 18.10 27.58
02:50:01 PM 7 867 0.12 2.57 14.52
You'll start getting ORA-03111: break received on communication channel
http://royontechnology.blogspot.com/2009/06/mysterious-ora-03111-error.html
}}}
* But swingbench is still a totally awesome tool. I can use it for a bunch of test cases, but using it on a CPU centric micro benchmark like observing threads vs cores just adds a lot of non-CPU noise
* cputoolkit as a micro benchmark tool allows you to measure the effect on scalability (LIOs per unit of work) as you consume the max # of CPUs.. see the LIOS_ELAP column, which takes into account the time spent on CPU_WAIT. If you derive the unit of work from LIOS_EXEC alone it will be incorrect, because it only counts how many times work was done and excludes the time spent on the run-queue (CPU_WAIT), which is the most important thing to quantify for the effect on the users (a worked check follows after the sample output)
{{{
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------
12/05/12 01:02:47, 1002, 254336620, 1476.18, 2128.63, 652.45, 1.47, 2.12, .65, 253828.96, 119483.73
}}}
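As a sanity check on those two columns from the sample row above:
{{{
LIOS_EXEC = LIOS / EXEC     = 254336620 / 1002    ~ 253829  LIOs per execute
LIOS_ELAP = LIOS / ELAPSECS = 254336620 / 2128.63 ~ 119484  LIOs per elapsed second (run-queue time included)
}}}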
! slob VS cputoolkit
* slob is also a good tool for CPU centric benchmarks, although you have to edit the runit.sh to get a sustained load.. check the ''HOWTO on LIOs test case'' section of this tiddler [[cpu - SillyLittleBenchmark - SLOB]] for the steps
* during the ramp up it has to do some db file sequential/scattered reads (it reads and caches all the blocks) before it switches to an all-CPU workload.. the ramp up state does a bunch of CPU WAIT IO if your IO subsystem can't keep up (in short, if it's slow), so if you fire off a bunch of users like 128 your server might stall.. keep in mind to only spawn sessions equivalent to the number of your CPUs, then try to double it; better yet start with half.. then double, then double
* the slob LIOs test is a pretty random workload because the driver file runit.sh spawns a number of users and reader.sql uses dbms_random in the where clause, executes it at a very fast rate 5000 times, and also generates multiple child cursors, 1 child (SQL_ID 68vu5q46nu22s) for each session
* if you don't edit the runit.sh, the CPU utilization doesn't really get to a sustained state even though it ramps up to the max utilization; also the AAS CPU tends to be lower and not equal to the # of users, because each SQL is not doing sustained LIOs, whereas on the cputoolkit 1 session will drive 1 CPU (doing sustained and more LIOs).. check the ''HOWTO on LIOs test case'' section of [[cpu - SillyLittleBenchmark - SLOB]] to work around this behavior and behave like the cputoolkit (a more sustained CPU load)
* the fast rate of execution, ''.000478 secs'' and ''282 LIOs'' per execution looped ''5000 times'', results in a higher sustained ''1.2M LIOs/sec'' AWR load profile; on the other hand the cputoolkit does ''1sec'' per execute, which is about ''293212 LIOs'' per execution looped ''3600 times'', resulting in a lower ''293K LIOs/sec'' AWR load profile <-- these slob/cputoolkit numbers are for the 1 user test case
* the other reason why slob has higher LIOs is that it does more thread parallelism, making all the threads equally busy at the same time so more work gets done, while the cputoolkit saturates a specific (smaller) number of threads, limiting the work it can do
* the randomness of the slob workload results in better thread utilization (8 active threads) on the 4 sessions test case vs the cputoolkit, which is 100% on all 4 threads.. (a turbostat capture sketch follows after this list)
below is the slob run, which has the more spread out thread utilization
{{{
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
30.25 3.28 3.41 40.69 10.53 18.53 0.00 0.72 0.66 0.00 0.00
0 0 48.04 3.26 3.41 29.96 21.70 0.29 0.00 0.72 0.66 0.00 0.00
0 4 30.54 3.32 3.41 47.46 21.70 0.29 0.00 0.72 0.66 0.00 0.00
1 1 34.64 3.29 3.41 34.63 7.21 23.52 0.00 0.72 0.66 0.00 0.00
1 5 17.85 3.12 3.41 51.42 7.21 23.52 0.00 0.72 0.66 0.00 0.00
2 2 41.74 3.45 3.41 7.67 4.90 45.70 0.00 0.72 0.66 0.00 0.00
2 6 5.04 3.15 3.41 44.36 4.89 45.70 0.00 0.72 0.66 0.00 0.00
3 3 40.24 3.42 3.41 46.86 8.30 4.61 0.00 0.72 0.66 0.00 0.00
3 7 23.90 2.89 3.41 63.20 8.30 4.61 0.00 0.72 0.66 0.00 0.00
}}}
vs the cputoolkit, which does sustained and more LIOs and really does the job of controlling the saturation of a specific number of CPUs (AAS CPU); in this case it's 4
{{{
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
55.95 3.51 3.41 44.05 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 0 18.36 3.51 3.41 81.64 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 4 100.00 3.51 3.41 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 1 100.00 3.51 3.41 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 5 5.15 3.51 3.41 94.85 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 2 100.00 3.51 3.41 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 6 13.17 3.51 3.41 86.83 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 3 10.89 3.51 3.41 89.11 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 7 100.00 3.51 3.41 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
}}}
* check out this tiddler [[slob VS cputoolkit LIOs driver SQLs]] to compare the driver LIOs sqls being executed by both tools
* although the workloads of both tools are different (shown here [[slob VS cputoolkit LIOs driver SQLs]]) they both showed here [[cores vs threads, v2 vs x2]] that once the CPU intensive load gets past the core count (real cpu cores) you only get about 20-30% LIO/sec increase up to the max # of threads (logical cpus)
* to know the saturation point of both tools check out the ''Saturation point'' section of this tiddler [[cores vs threads, v2 vs x2]]
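The per-core residency tables above are turbostat output (the evernote link earlier has the full captures). A minimal sketch of grabbing the same data during a run, assuming the -i interval flag of your turbostat build:
{{{
# sample core utilization / C-state residency every 5 seconds (needs root)
turbostat -i 5 > turbostat.out 2>&1 &
TS_PID=$!
./saturate 4 dw        # drive 4 sessions while sampling
kill $TS_PID
}}}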
This explains the CPU core comparison using SPECint_rate2006 numbers
http://www.evernote.com/shard/s48/sh/8e65e8f9-7f91-44f0-a32f-f4bcfa200abe/474d5705fe52a50d1ed01b5fa6a52227
also see [[Exadata CPU comparative sizing]] for the particular technical bootcamp slides
* related tiddlers
[[cpu - SPECint_rate2006]]
[[cpu core comparison]]
[[Exadata CPU comparative sizing]]
http://oreilly.com/catalog/linuxkernel/chapter/ch10.html
http://en.wikipedia.org/wiki/Quantum_computer
finding time quantum of round robin cpu scheduling algorithm in general computing systems using integer programming http://arpapress.com/Volumes/Vol5Issue1/IJRRAS_5_1_09.pdf
http://www.physlink.com/education/askexperts/ae391.cfm
http://en.wikipedia.org/wiki/Spinlock
<<<
This is especially true on a single-processor system, where each waiting thread of the same priority is likely to waste its quantum (allocated time where a thread can run) spinning until the thread that holds the lock is finally finished.
<<<
! WARNING: Don't use orapub's cpu_speed test for comparing speed. It only tests a single CPU, not all CPUs. Use cputoolkit or slob.
Thread: SGA Utilization v/S CPU Utilization calculations on 10g R1, R2 and 11g https://forums.oracle.com/forums/thread.jspa?threadID=1062908
<<<
The CPU speed could be abstracted by counting how many "logical block" operations a CPU can process per millisecond... but this measurement is Oracle Centric..
There's a script from ORAPUB that does this..
CPU speed test http://resources.orapub.com/ProductDetails.asp?ProductCode=CPU_SPEED_TEST&Show=ExtInfo
Also I suggest you read this paper.. Introduction To Oracle Server Consolidation http://resources.orapub.com/product_p/server_consolidation_doc.htm
When you do the speed test you have to ensure that the "physical blocks (PIOs)" count is zero... if it's not, then you are incurring physical IOs, which affect your speed test numbers, meaning the measurements may not be reliable.
And going back to your question of whether the SGA is a factor in your CPU speed: in the context of the "CPU speed test", yes it is..
But in general, if you have a small SGA you may be incurring a lot of PIOs, which demands more work from your IO subsystem rather than from your CPU, whereas the work could otherwise be done entirely in LIOs.
<<<
<<<
Linux gurus: Is there any tool like corestat for Solaris/SPARC that shows core utilization for #Linux #Intel processors?
Karl Arao @karlarao
@GregRahn mpstat -P ALL 1 1000 or collectl -scC
@GregRahn it would also be cool & useful if linux has something like poolstat (solaris) and lparstat (aix) as a measure of core utilization
http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-055-solaris-rm-419384.pdf ''<- page 29''
<<<
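Looking at the runs below, the statistic works out to LIOs per millisecond, roughly avg lios/test divided by avg time(ms)/test:
{{{
1085568 / 1625.714 ~ 667.75   <-- matches "OraPub CPU speed statistic is 667.755"
1085568 / 1634.286 ~ 664.25   <-- close to the reported 664.309 (it likely averages the per-run ratios)
}}}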
{{{
00:41:20 SYS@dw> alter system set cpu_count=1 ;
System altered.
00:41:29 SYS@dw> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 664.309
Other statistics: stdev=6.976 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1634.286)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085568 0 161
1085568 0 162
1085568 0 164
1085568 0 163
1085568 0 166
1085568 0 163
1085568 0 165
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 1
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
..........................................................................
00:41:45 SYS@dw> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 667.755
Other statistics: stdev=2.197 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1625.714)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085568 0 163
1085568 0 163
1085568 0 162
1085568 0 162
1085568 0 163
1085568 0 163
1085568 0 162
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 1
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
..........................................................................
00:42:05 SYS@dw> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 663.679
Other statistics: stdev=3.18 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1635.714)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085568 0 164
1085568 0 163
1085568 0 163
1085568 0 163
1085568 0 163
1085568 0 164
1085568 0 165
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 1
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
..........................................................................
00:42:22 SYS@dw> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 673.077
Other statistics: stdev=2.031 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1612.857)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085568 0 162
1085568 0 161
1085568 0 162
1085568 0 161
1085568 0 161
1085568 0 161
1085568 0 161
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 1
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
..........................................................................
00:42:38 SYS@dw> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 672.482
Other statistics: stdev=2.225 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1614.286)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085568 0 162
1085568 0 162
1085568 0 161
1085568 0 161
1085568 0 162
1085568 0 161
1085568 0 161
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 1
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
..........................................................................
Karl@Karl-LaptopDell ~/home
$ cat cpu_speed-desktop-10GB-1CPU.txt | grep -i "orapub cpu speed stat"
OraPub CPU speed statistic is 664.309
OraPub CPU speed statistic is 667.755
OraPub CPU speed statistic is 663.679
OraPub CPU speed statistic is 673.077
OraPub CPU speed statistic is 672.482
Karl@Karl-LaptopDell ~/home
$ cat cpu_speed-desktop-10GB-1CPU.txt | grep -i "pios"
Other statistics: stdev=6.976 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1634.286)
Other statistics: stdev=2.197 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1625.714)
Other statistics: stdev=3.18 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1635.714)
Other statistics: stdev=2.031 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1612.857)
Other statistics: stdev=2.225 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1614.286)
}}}
{{{
00:34:23 SYS@dw> show parameter sga
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 10G
sga_target big integer 10G
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 7
initial apicid : 7
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
00:35:04 SYS@dw> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 668.349
Other statistics: stdev=3.218 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1624.286)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085568 0 164
1085568 0 162
1085568 0 162
1085568 0 162
1085568 0 162
1085568 0 162
1085568 0 163
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 8
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
..........................................................................
00:35:22 SYS@dw> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 659.639
Other statistics: stdev=2.144 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1645.714)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085568 0 164
1085568 0 164
1085568 0 165
1085568 0 165
1085568 0 165
1085568 0 165
1085568 0 164
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 8
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
..........................................................................
00:35:45 SYS@dw> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 664.832
Other statistics: stdev=1.982 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1632.857)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085568 0 163
1085568 0 164
1085568 0 164
1085568 0 163
1085568 0 163
1085568 0 163
1085568 0 163
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 8
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
..........................................................................
00:36:08 SYS@dw>
00:36:24 SYS@dw> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 662.023
Other statistics: stdev=8.405 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1640)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085568 0 167
1085568 0 164
1085568 0 164
1085568 0 164
1085568 0 162
1085568 0 161
1085568 0 166
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 8
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
..........................................................................
00:36:38 SYS@dw>
00:36:42 SYS@dw> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 659.639
Other statistics: stdev=2.144 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1645.714)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085568 0 165
1085568 0 164
1085568 0 164
1085568 0 165
1085568 0 165
1085568 0 165
1085568 0 164
7 rows selected.
Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local dw 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count integer 8
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
..........................................................................
Karl@Karl-LaptopDell ~/home
$ cat cpu_speed-desktop.txt | grep -i "orapub cpu speed stat"
OraPub CPU speed statistic is 668.349
OraPub CPU speed statistic is 659.639
OraPub CPU speed statistic is 664.832
OraPub CPU speed statistic is 662.023
OraPub CPU speed statistic is 659.639
Karl@Karl-LaptopDell ~/home
$ cat cpu_speed-desktop.txt | grep -i "pios"
Other statistics: stdev=3.218 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1624.286)
Other statistics: stdev=2.144 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1645.714)
Other statistics: stdev=1.982 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1632.857)
Other statistics: stdev=8.405 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1640)
Other statistics: stdev=2.144 PIOs=0 pio/lio=0 avg lios/test=1085568 avg time(ms)/test=1645.714)
}}}
{{{
oracle@emgc12c.local:/home/oracle/cpu_speed:emrep12c
$ ls -ltr
total 16
-rw-r--r-- 1 oracle oinstall 819 Nov 6 00:05 create.sql
-rw-r--r-- 1 oracle oinstall 9756 Nov 6 00:06 cpu_speed.sql
oracle@emgc12c.local:/home/oracle/cpu_speed:emrep12c
$ s1
SQL*Plus: Release 11.2.0.3.0 Production on Tue Nov 6 00:06:44 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
00:06:44 SYS@emrep12c>
00:06:45 SYS@emrep12c> @create
drop table op$speed_test
*
ERROR at line 1:
ORA-00942: table or view does not exist
Table created.
drop table op$stats
*
ERROR at line 1:
ORA-00942: table or view does not exist
Table created.
10000 rows created.
10000 rows created.
20000 rows created.
40000 rows created.
80000 rows created.
160000 rows created.
320000 rows created.
640000 rows created.
Commit complete.
COUNT(*)
----------
1280000
00:06:58 SYS@emrep12c>
00:07:08 SYS@emrep12c>
00:07:08 SYS@emrep12c> ! ls -ltr
total 16
-rw-r--r-- 1 oracle oinstall 819 Nov 6 00:05 create.sql
-rw-r--r-- 1 oracle oinstall 9756 Nov 6 00:06 cpu_speed.sql
00:07:10 SYS@emrep12c> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 451.656
Other statistics: stdev=12.705 PIOs=0 pio/lio=0 avg lios/test=1086435.429 avg time(ms)/test=2407.143)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1091244 0 244
1085634 0 236
1085634 0 235
1085634 0 236
1085634 0 236
1085634 0 244
1085634 0 254
7 rows selected.
Linux emgc12c.local 2.6.32-100.0.19.el5 #1 SMP Fri Sep 17 17:51:41 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
emgc12c.local emrep12c 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 1
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 1
..........................................................................
00:07:31 SYS@emrep12c> show parameter cpu_count
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 1
00:08:00 SYS@emrep12c> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 476.612
Other statistics: stdev=9.315 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=2278.571)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085634 0 236
1085634 0 232
1085634 0 227
1085634 0 226
1085634 0 223
1085634 0 225
1085634 0 226
7 rows selected.
Linux emgc12c.local 2.6.32-100.0.19.el5 #1 SMP Fri Sep 17 17:51:41 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
emgc12c.local emrep12c 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 1
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 1
..........................................................................
00:08:26 SYS@emrep12c> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 473.323
Other statistics: stdev=8.552 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=2294.286)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085634 0 235
1085634 0 233
1085634 0 228
1085634 0 225
1085634 0 225
1085634 0 233
1085634 0 227
7 rows selected.
Linux emgc12c.local 2.6.32-100.0.19.el5 #1 SMP Fri Sep 17 17:51:41 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
emgc12c.local emrep12c 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 1
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 1
..........................................................................
00:09:00 SYS@emrep12c> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 457.827
Other statistics: stdev=17.176 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=2374.286)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085634 0 233
1085634 0 236
1085634 0 236
1085634 0 240
1085634 0 229
1085634 0 257
1085634 0 231
7 rows selected.
Linux emgc12c.local 2.6.32-100.0.19.el5 #1 SMP Fri Sep 17 17:51:41 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
emgc12c.local emrep12c 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 1
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 1
..........................................................................
00:09:32 SYS@emrep12c> @cpu_speed
............................................................
.
op_cpu_speed.sql - OraPub CPU Speed Test
Version 1.0c 22-Jan-2009
(c)OraPub, Inc. - Author: Craig Shallahamer, craig@orapub.com
This script can be used to compare Oracle
database server CPU speed.
.
............................................................
. Use at your own risk. There is no warrenty.
. OraPub assumes no responsibility for the use of this script.
............................................................
.
Usage: Simply review and run the sript, it's self contained.
............................................................
Speed test rows are 1280000
Caching all rows...
Speed tests about to begin. This will take a few minutes.
Starting now...
Complete!
..........................................................................
OraPub CPU speed statistic is 475.905
Other statistics: stdev=5.122 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=2281.429)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1085634 0 228
1085634 0 229
1085634 0 233
1085634 0 225
1085634 0 227
1085634 0 227
1085634 0 228
7 rows selected.
Linux emgc12c.local 2.6.32-100.0.19.el5 #1 SMP Fri Sep 17 17:51:41 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
emgc12c.local emrep12c 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 1
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 1
..........................................................................
Karl@Karl-LaptopDell ~/home
$ cat cpu_speed-emgc.txt | grep -i "orapub cpu speed stat"
OraPub CPU speed statistic is 451.656
OraPub CPU speed statistic is 476.612
OraPub CPU speed statistic is 473.323
OraPub CPU speed statistic is 457.827
OraPub CPU speed statistic is 475.905
Karl@Karl-LaptopDell ~/home
$ cat cpu_speed-emgc.txt | grep -i "pios"
Other statistics: stdev=12.705 PIOs=0 pio/lio=0 avg lios/test=1086435.429 avg time(ms)/test=2407.143)
Other statistics: stdev=9.315 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=2278.571)
Other statistics: stdev=8.552 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=2294.286)
Other statistics: stdev=17.176 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=2374.286)
Other statistics: stdev=5.122 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=2281.429)
oracle@emgc12c.local:/home/oracle/cpu_speed:emrep12c
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 3339.204
cache size : 6144 KB
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm constant_tsc up rep_good pni monitor ssse3 lahf_lm
bogomips : 6678.40
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
00:13:30 SYS@emrep12c> show parameter sga
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 3504M
sga_target big integer 3504M
}}}
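to summarize the runs, a quick grep|awk sketch that averages the OraPub speed statistic across the capture files above (assumes the cpu_speed-desktop.txt and cpu_speed-emgc.txt files shown earlier):
{{{
# average the OraPub CPU speed statistic across runs in each capture file
for f in cpu_speed-desktop.txt cpu_speed-emgc.txt; do
  grep -i "orapub cpu speed stat" "$f" | \
    awk -v f="$f" '{sum+=$NF; n++} END {if (n) printf "%s: runs=%d avg speed=%.3f\n", f, n, sum/n}'
done
}}}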
''the cpu_topology script''
{{{
dmidecode | grep -i "product name"
cat /proc/cpuinfo | grep -i "model name" | uniq
function filter(){
sed 's/^.*://g' | xargs echo
}
echo "processors (OS CPU count) " `grep processor /proc/cpuinfo | filter`
echo "physical id (processor socket) " `grep 'physical id' /proc/cpuinfo | filter`
echo "siblings (logical CPUs/socket) " `grep siblings /proc/cpuinfo | filter`
echo "core id (# assigned to a core) " `grep 'core id' /proc/cpuinfo | filter`
echo "cpu cores (physical cores/socket)" `grep 'cpu cores' /proc/cpuinfo | filter`
}}}
run it as follows
(example output from Exadata V2)
{{{
[enkdb01:root] /root
> sh cpu_topology
Product Name: SUN FIRE X4170 SERVER
Product Name: ASSY,MOTHERBOARD,X4170
model name : Intel(R) Xeon(R) CPU E5540 @ 2.53GHz
processors (OS CPU count) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
physical id (processor socket) 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
siblings (logical CPUs/socket) 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
core id (# assigned to a core) 0 1 2 3 0 1 2 3 0 1 2 3 0 1 2 3
cpu cores (physical cores/socket) 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
}}}
(example output from Exadata X2)
{{{
[root@enkdb03 ~]# sh cpu_topology
Product Name: SUN FIRE X4170 M2 SERVER
Product Name: ASSY,MOTHERBOARD,X4170
model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
processors (OS CPU count) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
physical id (processor socket) 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1
siblings (logical CPUs/socket) 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
core id (# assigned to a core) 0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10
cpu cores (physical cores/socket) 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
}}}
! output on my desktop server
HT on
{{{
[root@desktopserver ~]# sh cpu_topology
Product Name: System Product Name
Product Name: P8Z68-V PRO
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
processors (OS CPU count) 0 1 2 3 4 5 6 7
physical id (processor socket) 0 0 0 0 0 0 0 0
siblings (logical CPUs/socket) 8 8 8 8 8 8 8 8
core id (# assigned to a core) 0 1 2 3 0 1 2 3
cpu cores (physical cores/socket) 4 4 4 4 4 4 4 4
}}}
HT off
{{{
[root@desktopserver ~]# sh cpu_topology
Product Name: System Product Name
Product Name: P8Z68-V PRO
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
processors (OS CPU count) 0 1 2 3
physical id (processor socket) 0 0 0 0
siblings (logical CPUs/socket) 4 4 4 4
core id (# assigned to a core) 0 1 2 3
cpu cores (physical cores/socket) 4 4 4 4
}}}
! 2020
{{{
cat /sys/devices/virtual/dmi/id/product_name
cat /proc/cpuinfo | grep -i "model name" | uniq
function filter(){
sed 's/^.*://g' | xargs echo
}
echo "processors (OS CPU count) " `grep processor /proc/cpuinfo | filter`
echo "physical id (processor socket) " `grep 'physical id' /proc/cpuinfo | filter`
echo "siblings (logical CPUs/socket) " `grep siblings /proc/cpuinfo | filter`
echo "core id (# assigned to a core) " `grep 'core id' /proc/cpuinfo | filter`
echo "cpu cores (physical cores/socket)" `grep 'cpu cores' /proc/cpuinfo | filter`
}}}
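on modern distros the same topology info is one lscpu call away, a quick cross-check (assumes util-linux is installed, which it is almost everywhere now):
{{{
# summary view: sockets, cores per socket, threads per core
lscpu | egrep -i 'model name|socket|core|thread'
# one row per logical CPU; the CPU, CORE, SOCKET columns map HT siblings to cores
lscpu -e
}}}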
{{{
$ cat cpumonitor/bin/cpu.xml
<?xml version = '1.0' encoding = 'UTF-8'?>
<CPUMonitor Title="Compute Nodes" xmlns="http://www.dominicgiles.com/cpumonitor">
<MonitoredNode>
<HostName>desktopserver</HostName>
<Username>oracle</Username>
<!-- the password below will be encrypted after the first access by cpumonitor -->
<Password>oracle</Password>
<!-- You can optionally specify a port-->
<!-- <Port>22</Port> -->
<Comment>desktopserver</Comment>
</MonitoredNode>
</CPUMonitor>
}}}
a simple toolkit that lets you control the saturation of a specific number of CPU cores (driving AAS on CPU)
https://www.dropbox.com/s/je6eafm1a9pnfpk/cputoolkit.tar.bz2
! the auto run
{{{
$ cat x
for i in $(seq $1 1 $2)
do
echo "run for $i CPU/s"
done
oracle@desktopserver.local:/home/oracle/dba/benchmark/cputoolkit:dw
$ sh x 1 1 dw
run for 1 CPU/s
oracle@desktopserver.local:/home/oracle/dba/benchmark/cputoolkit:dw
$ sh x 1 4 dw
run for 1 CPU/s
run for 2 CPU/s
run for 3 CPU/s
run for 4 CPU/s
}}}
! specify specific cpus then auto
{{{
$ cat x
for i in 1 40 80 120 160 180
do
echo "run for $i CPU/s"
echo "$3 this"
done
oracle@desktopserver.local:/home/oracle/dba/benchmark/cputoolkit:dw
$ sh x 1 180 dw
run for 1 CPU/s
dw this
run for 40 CPU/s
dw this
run for 80 CPU/s
dw this
run for 120 CPU/s
dw this
run for 160 CPU/s
dw this
run for 180 CPU/s
dw this
}}}
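the echo loops above are just stubs; below is a sketch of the full ramp wrapper, where ''cputoolkit.sh'' is a hypothetical driver name, point it at the toolkit's real driver script:
{{{
#!/bin/bash
# ramp the load from $1 to $2 saturated CPUs against database $3
# usage: sh ramp.sh <start_cpus> <end_cpus> <db>
start=$1; end=$2; db=$3
for i in $(seq "$start" 1 "$end"); do
  echo "run for $i CPU/s on $db"
  # sh cputoolkit.sh "$i" "$db"   # hypothetical driver name, uncomment once wired to the real script
done
}}}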
! HT-TurboOn
<<<
{{{
[root@desktopserver ~]# ./cpu
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
processor 0 1 2 3 4 5 6 7
physical id (processor socket) 0 0 0 0 0 0 0 0
siblings (logical cores/socket) 8 8 8 8 8 8 8 8
core id 0 1 2 3 0 1 2 3
cpu cores (physical cores/socket) 4 4 4 4 4 4 4 4
[root@desktopserver ~]#
[root@desktopserver ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.33
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 1
cpu cores : 4
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 2
cpu cores : 4
apicid : 4
initial apicid : 4
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 6
initial apicid : 6
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 4
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 5
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 1
cpu cores : 4
apicid : 3
initial apicid : 3
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 6
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 2900.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 2
cpu cores : 4
apicid : 5
initial apicid : 5
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 7
initial apicid : 7
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
}}}
<<<
! HToff-TurboOn
<<<
{{{
[root@desktopserver ~]# ./cpu
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
processor 0 1 2 3
physical id (processor socket) 0 0 0 0
siblings (logical cores/socket) 4 4 4 4
core id 0 1 2 3
cpu cores (physical cores/socket) 4 4 4 4
[root@desktopserver ~]#
[root@desktopserver ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.81
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 4
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 2
cpu cores : 4
apicid : 4
initial apicid : 4
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 3
cpu cores : 4
apicid : 6
initial apicid : 6
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
}}}
<<<
! HT-TurboOff
<<<
{{{
[root@desktopserver ~]# ./cpu
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
processor 0 1 2 3 4 5 6 7
physical id (processor socket) 0 0 0 0 0 0 0 0
siblings (logical cores/socket) 8 8 8 8 8 8 8 8
core id 0 1 2 3 0 1 2 3
cpu cores (physical cores/socket) 4 4 4 4 4 4 4 4
[root@desktopserver ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.34
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 1
cpu cores : 4
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 2
cpu cores : 4
apicid : 4
initial apicid : 4
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 6
initial apicid : 6
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 4
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 5
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 1
cpu cores : 4
apicid : 3
initial apicid : 3
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 6
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 2
cpu cores : 4
apicid : 5
initial apicid : 5
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 7
initial apicid : 7
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
}}}
<<<
! HToff-TurboOff
<<<
{{{
[root@desktopserver ~]# ./cpu
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
processor 0 1 2 3
physical id (processor socket) 0 0 0 0
siblings (logical cores/socket) 4 4 4 4
core id 0 1 2 3
cpu cores (physical cores/socket) 4 4 4 4
[root@desktopserver ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.95
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 4
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 2
cpu cores : 4
apicid : 4
initial apicid : 4
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 3
cpu cores : 4
apicid : 6
initial apicid : 6
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
}}}
<<<
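a quick sketch to tell which of the four states above a box is in: the HT state falls out of /proc/cpuinfo (siblings vs cpu cores), while the turbo state is only visible on newer kernels with the intel_pstate driver:
{{{
# HT is on when logical siblings exceed physical cores per socket
sib=$(grep -m1 siblings /proc/cpuinfo | awk '{print $NF}')
cores=$(grep -m1 'cpu cores' /proc/cpuinfo | awk '{print $NF}')
[ "$sib" -gt "$cores" ] && echo "HT on ($sib threads / $cores cores)" || echo "HT off"
# turbo state (newer kernels with intel_pstate only): 0 = turbo enabled, 1 = disabled
cat /sys/devices/system/cpu/intel_pstate/no_turbo 2>/dev/null
}}}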
{{{
create pluggable database testpdb admin user admin identified by welcome1;
alter pluggable database testpdb open read write;
alter session set container=testpdb;
create directory testpdb as '/nfs-shared/ADAT/awr_dumps/testpdb';
grant dba to admin;
testpdb =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = scas03adm05)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = testpdb)
)
)
-- use this if using developer vm
-- sed -i 's/testpdb/$1/g' create_pdb.sql
-- ALTER SESSION SET PDB_FILE_NAME_CONVERT='/u02/app/oracle/oradata/DEVCDB/pdbseed/','/u02/app/oracle/oradata/DEVCDB/testpdb/';
-- create pluggable database testpdb admin user admin identified by welcome1;
-- alter pluggable database testpdb open read write;
-- alter session set container=testpdb;
-- grant dba to admin;
-- create tablespace testpdbtbs datafile '/u02/app/oracle/oradata/DEVCDB/testpdb/testpdb01.dbf' size 10M;
-- alter database datafile '/u02/app/oracle/oradata/DEVCDB/testpdb/testpdb01.dbf' autoextend on next 10M maxsize 1000M;
-- run after installing SQLT
-- grant SQLT_USER_ROLE to admin;
--alter pluggable database testpdb close;
--drop pluggable database testpdb including datafiles;
}}}
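a small wrapper for the developer-VM variant, assuming the statements above are saved as create_pdb.sql (the file name the sed comment uses) with testpdb as the token to swap:
{{{
#!/bin/bash
# clone the template for a new PDB name and run it as sysdba
# usage: sh mkpdb.sh <pdb_name>
pdb=$1
cp create_pdb.sql create_${pdb}.sql
sed -i "s/testpdb/${pdb}/g" create_${pdb}.sql
sqlplus / as sysdba @create_${pdb}.sql
}}}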
{{{
createrepo -g /flash_reco/installers/oel/5.3/os/x86-64/VT/repodata/comps-rhel5-vt.xml /flash_reco/installers/oel/5.3/os/x86-64/VT
createrepo -g /flash_reco/installers/oel/5.3/os/x86-64/ClusterStorage/repodata/comps-rhel5-cluster-st.xml /flash_reco/installers/oel/5.3/os/x86-64/ClusterStorage
createrepo -g /flash_reco/installers/oel/5.3/os/x86-64/Cluster/repodata/comps-rhel5-cluster.xml /flash_reco/installers/oel/5.3/os/x86-64/Cluster
createrepo -g /flash_reco/installers/oel/5.3/os/x86-64/Server/repodata/comps-rhel5-server-core.xml /flash_reco/installers/oel/5.3/os/x86-64/Server
}}}
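the four calls follow one pattern, so the same thing as a loop (same channels, same comps files):
{{{
# rebuild repodata for each OEL 5.3 channel with its comps file
base=/flash_reco/installers/oel/5.3/os/x86-64
for ch in VT:comps-rhel5-vt.xml \
          ClusterStorage:comps-rhel5-cluster-st.xml \
          Cluster:comps-rhel5-cluster.xml \
          Server:comps-rhel5-server-core.xml; do
  dir=${ch%%:*}; comps=${ch##*:}
  createrepo -g "$base/$dir/repodata/$comps" "$base/$dir"
done
}}}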
{{{
# twiki web notification of updates every 30 mins
*/30 * * * * cd /var/lib/twiki && perl -I bin tools/mailnotify -q
}}}
http://www.thegeekstuff.com/2009/06/15-practical-crontab-examples/
cron entry for a specific time, e.g. Tue Oct 9 17:32:00 CDT 2012 (day-of-week is left as * on purpose: when both day-of-month and day-of-week are restricted, cron fires when either field matches)
{{{
32 17 09 10 * /root/rsync.sh
}}}
http://www.ibm.com/developerworks/linux/library/l-job-scheduling/index.html
http://kris.me.uk/2010/08/27/sshd-and-cron-on-windows-using-cygwin.html <-- good stuff, creates cron as a windows service
http://stackoverflow.com/questions/707184/how-do-you-run-a-crontab-in-cygwin-on-windows <-- good stuff on troubleshooting
{{{
Create a test entry of something simple, e.g.,
"* * * * * echo $HOME >> /tmp/mycron.log" and save it.
cat /tmp/mycron.log. Does it show cron environment variable HOME every minute?
}}}
https://fixerrors.wordpress.com/2009/01/28/cygrunsrv-error-starting-a-service-queryservicestatus-win32-error-1062/ <-- run the terminal "as administrator"
http://solidlinux.wordpress.com/2010/06/28/starting-crontab-in-cygwin/ <-- good stuff
! my config
{{{
karl@karl-remote ~
$ cron-config
Do you want to install the cron daemon as a service? (yes/no) yes
Enter the value of CYGWIN for the daemon: [ ] ntsec
You must decide under what account the cron daemon will run.
If you are the only user on this machine, the daemon can run as yourself.
This gives access to all network drives but only allows you as user.
To run multiple users, cron must change user context without knowing
the passwords. There are three methods to do that, as explained in
http://cygwin.com/cygwin-ug-net/ntsec.html#ntsec-nopasswd1
If all the cron users have executed "passwd -R" (see man passwd),
which provides access to network drives, or if you are using the
cyglsa package, then cron should run under the local system account.
Otherwise you need to have or to create a privileged account.
This script will help you do so.
Do you want the cron daemon to run as yourself? (yes/no) yes
Please enter the password for user 'karl':
Reenter:
Running cron_diagnose ...
... no problem found.
Do you want to start the cron daemon as a service now? (yes/no) yes
OK. The cron daemon is now running.
In case of problem, examine the log file for cron,
/var/log/cron.log, and the Windows event log (using /usr/bin/cronevents)
for information about the problem cron is having.
Examine also any cron.log file in the HOME directory
(or the file specified in MAILTO) and cron related files in /tmp.
If you cannot fix the problem, then report it to cygwin@cygwin.com.
Please run the script /usr/bin/cronbug and ATTACH its output
(the file cronbug.txt) to your e-mail.
WARNING: PATH may be set differently under cron than in interactive shells.
Names such as "find" and "date" may refer to Windows programs.
karl@karl-remote ~
$ net start cron
The requested service has already been started.
More help is available by typing NET HELPMSG 2182.
karl@karl-remote ~
$ crontab -e
crontab: no changes made to crontab
karl@karl-remote ~
$ crontab -l
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.gRMzCMIVE2 installed on Wed Dec 17 02:15:43 2014)
# (Cron version V5.0 -- $Id: crontab.c,v 1.12 2004/01/23 18:56:42 vixie Exp $)
* * * * * echo $HOME >> /tmp/mycron.log
karl@karl-remote ~
$ while : ; do ls -ltr /tmp/mycron.log ; echo "--"; sleep 30 ; done
-rw-r--r-- 1 karl None 11 Dec 17 03:20 /tmp/mycron.log
--
-rw-r--r-- 1 karl None 22 Dec 17 03:21 /tmp/mycron.log
--
-rw-r--r-- 1 karl None 22 Dec 17 03:21 /tmp/mycron.log
--
-rw-r--r-- 1 karl None 33 Dec 17 03:22 /tmp/mycron.log
--
karl@karl-remote ~
$ ^C
karl@karl-remote ~
$ cat /tmp/mycron.log
/home/karl
/home/karl
/home/karl
karl@karl-remote ~
}}}
-- crsctl 11gR2, 12c
https://docs.oracle.com/cd/E18283_01/rac.112/e16794/crsref.htm
https://docs.oracle.com/database/121/CWADD/crsref.htm#CWADD92615
http://blog.enkitec.com/2011/10/my-crsstat-script-improved-formatting-of-crs_stat-on-10g-and-11g/
''cool script here'' http://blog.enkitec.com/wp-content/uploads/2011/10/crsstat.zip
{{{
# cd /usr/local/bin
# wget http://blog.enkitec.com/wp-content/uploads/2011/10/crsstat.zip
# unzip crsstat.zip
# chmod 755 crsstat
# ./crsstat
}}}
! crs_stat has been deprecated in 11gR2
{{{
crsctl stat res -t
$ORACLE_HOME/bin/crsctl stat res -t
$GRID_HOME/bin/crsctl stat res -t
/u01/app/12.1.0.2/grid/bin/crsctl stat res -t
}}}
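handy while bouncing resources, a sketch that repaints the status table every few seconds:
{{{
# refresh the cluster resource table every 5 seconds
watch -n 5 '/u01/app/12.1.0.2/grid/bin/crsctl stat res -t'
}}}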
! crsstat 12c version
http://wiki.markgruenberg.info/doku.php?id=oracle:rac:crsstat_with_improved_formatting_from_built_in_oracle_command
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-library-cache#TOC-cursor:-pin-S
http://blog.tanelpoder.com/2010/04/21/cursor-pin-s-waits-sporadic-cpu-spikes-and-systematic-troubleshooting/
<<<
Your issue may be different than the one I described, as my issue was “cursor: pin s”, yours is “cursor: pin s wait on x”. Someone holds some frequently used cursor pin in X mode and others can’t even use that cursor. Could be many things, but usually I take a look into V$SGA_RESIZE_OPS then, to make sure that the SGA_TARGET / ASMM hasn’t shrunk the shared pool around the time the spike happened. Another “usual suspect” would be some library cache child cursor “leak” where new child cursors are constantly created under a parent – search for parent cursors with thousands of children under it from V$SQL. But it could be a number of other things / bugs.
<<<
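the two checks suggested above as a sqlplus sketch (the 100-child threshold is an arbitrary starting point, tune to taste):
{{{
sqlplus -s / as sysdba <<'EOF'
-- parent cursors with unusually many children (possible child cursor leak)
select sql_id, count(*) child_count
  from v$sql
 group by sql_id
having count(*) > 100
 order by 2 desc;

-- shared pool resize activity around the spike (ASMM shrink/grow)
select component, oper_type, initial_size, final_size, start_time
  from v$sga_resize_ops
 where component = 'shared pool'
 order by start_time desc;
EOF
}}}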
Library Cache: Mutex X " On Koka Cursors (LOBs) Non-Shared : [ID 758674.1] http://www.sql.ru/forum/actualthread.aspx?tid=845230
cursor: pin S wait on X in the Top 5 wait events http://www.pythian.com/news/35145/cursor-pin-s-wait-on-x-in-the-top-5-wait-events/
SYSTEMSTATE DUMP: Shooting DB Hang, sqlplus Hang, self deadlock latch, PMON deadlock latch http://dbakevin.blogspot.com/2012/08/db-hang-sqlplus-hang-pmon-dead-lock.html
http://www.pythian.com/news/33871/locks-latches-mutexes-and-cpu-usage/
http://halimdba.blogspot.com/2011/08/system-state-dumps-when-database-is.html
http://www.ora-solutions.net/web/2008/12/10/system-state-dump-evaluation-with-assawk/
<<<
ass109.awk
awk -f ass109.awk mddb1_diag_12345.trc
<<<
http://dba-elcaro.blogspot.com/2011/05/generate-analyze-system-state-dump.html
http://orainternals.wordpress.com/2009/03/10/systemstate-dump-analysis-nocache-on-high-intensive-sequences-in-rac/
''parameters''
SESSION_CACHED_CURSORS Vs CURSOR_SPACE_FOR_TIME - which, when and why? http://oracle-online-help.blogspot.com/2007/01/sessioncachedcursors-vs.html
http://blog.tanelpoder.com/2008/06/17/cursor_space_for_time-to-be-deprecated/
http://docs.oracle.com/cd/B19306_01/server.102/b14237/initparams036.htm
DB Optimizer: Cursor_sharing : a picture is worth a 1000 words
https://groups.google.com/forum/#!topic/oracle_db_newbies/sQpyRFM34YY
{{{
1) create the customer numbering first
https://www.youtube.com/watch?v=DZsDp7hFbBw
2) create a new style from the numbering
3) create a custom TOC from the new style
https://www.youtube.com/watch?v=LBfKKQSbyD0
}}}
v$rman_backup_job_details
dba_registry_history
ora- errors on alert log
https://oracle.github.io/python-cx_Oracle/
https://github.com/oracle/python-cx_Oracle/tree/master/samples
http://ba6.us/?q=cx_Oracle_easy_windows_install
!!! mac
https://jasonstitt.com/cx_oracle_on_os_x
https://gist.github.com/thom-nic/6011715
http://www.oracle.com/technetwork/topics/intel-macsoft-096467.html
<<showtoc>>
! python 2
{{{
# references
# https://www.udemy.com/using-python-with-oracle-db/learn/v4/t/lecture/4892394?start=0
# http://cx-oracle.readthedocs.io/en/latest/installation.html#quick-start-cx-oracle-installation
# https://stackoverflow.com/questions/245465/cx-oracle-connecting-to-oracle-db-remotely
C:\WinPython27\python-2.7.10.amd64>python -m pip install cx_Oracle --upgrade
Collecting cx-Oracle
Downloading https://files.pythonhosted.org/packages/0a/7a/a679d1c3e711b083ce8d40aa38e3010e0f27084ccee7ac9cd4147fa3da9a
/cx_Oracle-6.4.1.tar.gz (253kB)
100% |################################| 253kB 1.1MB/s
Building wheels for collected packages: cx-Oracle
Running setup.py bdist_wheel for cx-Oracle
Stored in directory: C:\Users\karl\AppData\Local\pip\Cache\wheels\e8\ef\77\9a8078479d91b6be72ad0b65fe889cd23eea3362984
4520df7
Successfully built cx-Oracle
Installing collected packages: cx-Oracle
Successfully installed cx-Oracle-6.4.1
You are using pip version 7.1.2, however version 18.0 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
C:\WinPython27\python-2.7.10.amd64>python
Python 2.7.10 (default, May 23 2015, 09:44:00) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import cx_Oracle
>>>
>>> connstr = 'system/oracle@<put_hostname_here>:1521/cdb1'
>>> conn = cx_Oracle.connect(connstr)
>>>
>>> conn.version
'12.1.0.2.0'
>>>
>>>
}}}
! python 3
{{{
# install on py3
C:\WinPython\scripts>python -m pip install cx_Oracle --upgrade
Collecting cx_Oracle
  Downloading https://files.pythonhosted.org/packages/ae/ee/5e9704b60f99540a6b6f527f355b5e7025ba3f637f31566c5021e6e337c5/cx_Oracle-6.4.1-cp36-cp36m-win_amd64.whl (164kB)
    100% |████████████████████████████████| 174kB 3.6MB/s
Installing collected packages: cx-Oracle
Successfully installed cx-Oracle-6.4.1
You are using pip version 9.0.3, however version 18.0 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.

C:\WinPython\scripts>python
Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>>
>>> import cx_Oracle
>>>
>>> connstr = 'system/oracle@<put_hostname_here>:1521/cdb1'
>>> conn = cx_Oracle.connect(connstr)
>>> conn.version
'12.1.0.2.0'
>>>
>>>
}}}
http://earthobservatory.nasa.gov/blogs/earthmatters/files/2016/09/tempanoms_gis_august2016.gif
http://forums.oracle.com/forums/thread.jspa?messageID=9273406
http://bderzhavets.blogspot.com/2007/11/cygwinx-install-oracle-10.html
http://solaris.reys.net/how-to-x11-forwarding-using-ssh-putty-and-xming/ <-- XMING
http://blogs.oracle.com/muralins/2007/09/cygwin_as_display_server.html
-- FOR A UNIX LIKE ENVIRONMENT
http://www.cs.nyu.edu/~yap/prog/cygwin/FAQs.html
http://www.cygwin.com/faq/faq.using.html
http://stackoverflow.com/questions/4334467/cygwin-using-a-path-variable-containing-a-windows-path-with-a-space-in-it <-- on windows path with space
http://stackoverflow.com/questions/2641391/how-to-format-a-dos-path-to-a-unix-path-on-cygwin-command-line <-- dos path to a unix path
http://cygwin.com/problems.html
http://cygwin.com/ml/cygwin/2005-02/msg00232.html
http://www.catb.org/~esr/faqs/smart-questions.html <-- How To Ask Questions The Smart Way
http://stackoverflow.com/questions/299504/connecting-to-remote-oracle-via-cygwin-sqlplus <-- connecting to sqlplus
!!! cygwin package path
{{{
C:\cygwin64
# list installed packages
cygcheck --check-setup
}}}
https://stackoverflow.com/questions/46626455/how-to-configure-cygwin-local-package-directory
https://www.google.com/search?q=get+packages+installed+in+cygwin&oq=get+packages+installed+in+cygwin&aqs=chrome..69i57j0.7750j1j1&sourceid=chrome&ie=UTF-8
https://codeyarns.com/2013/07/26/how-to-list-installed-packages-in-cygwin/
! pm2
http://pm2.keymetrics.io/docs/tutorials/capistrano-like-deployments
https://keymetrics.io/
https://ifelse.io/2015/09/02/running-node-js-apps-in-production/
https://www.youtube.com/results?search_query=pm2+keymetrics
! socket programming
https://www.safaribooksonline.com/library/view/linux-socket-programming/0789722410/
{{{
select min(tm),max(tm) from
(
select TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm
FROM
dba_hist_snapshot s0 )
where
to_date(tm,'MM/DD/YY HH24:MI:SS') > sysdate - 1
-- replace to cast
select min(tm),max(tm) from
(
select TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm
FROM
dba_hist_snapshot s0
where
CAST(s0.END_INTERVAL_TIME as DATE) > sysdate - 1
)
-- cast past hour
CAST(s0.END_INTERVAL_TIME as DATE) > sysdate - 1/24
-- on the AWR scripts
from this
AND tm > to_char(sysdate - :g_retention, 'MM/DD/YY HH24:MI')
to this
AND to_date(tm,'MM/DD/YY HH24:MI:SS') > sysdate - :g_retention
}}}
! data extract using SQL Developer
follow the steps on this evernote link http://goo.gl/pxuQEP
! packaging the CSV from windows
if you have multiple files coming from different instances and you want to merge them all into one file on the Windows side, do the following:
if you want to package it on the linux side (server) do the steps mentioned here [[data capture - manually package csv files]]
{{{
H:\Desk\tableau-final\raw csv\iowl>dir
Volume in drive H is Desk
Volume Serial Number is Desk
Directory of H:\Desk\tableau-final\raw csv\iowl
07/24/2013 05:51 AM <DIR> .
07/24/2013 05:50 AM <DIR> ..
07/23/2013 11:19 PM 1,852,678 export_iowl4.csv
07/23/2013 11:17 PM 1,854,849 export_iowl3.csv
07/23/2013 10:59 PM 1,750,306 export_iowl2.csv
07/23/2013 10:52 PM 1,774,552 export_iowl1.csv
4 File(s) 7,240,577 bytes
2 Dir(s) 301,508,382,720 bytes free
H:\Desk\tableau-final\raw csv\iowl>
H:\Desk\tableau-final\raw csv\iowl>
H:\Desk\tableau-final\raw csv\iowl>copy *csv iowl.csv
export_iowl4.csv
export_iowl3.csv
export_iowl2.csv
export_iowl1.csv
1 file(s) copied.
}}}
{{{
Manually packaging the csv files, execute the following on the root of csv directory
mkdir awr_topevents
mv awr_topevents-tableau-* awr_topevents
cat awr_topevents/*csv > awr_topevents.txt
mkdir awr_services
mv awr_services-tableau-* awr_services
cat awr_services/*csv > awr_services.txt
mkdir awr_cpuwl
mv awr_cpuwl-tableau-* awr_cpuwl
cat awr_cpuwl/*csv > awr_cpuwl.txt
mkdir awr_sysstat
mv awr_sysstat-tableau-* awr_sysstat
cat awr_sysstat/*csv > awr_sysstat.txt
mkdir awr_topsqlx
mv awr_topsqlx-tableau-* awr_topsqlx
cat awr_topsqlx/*csv > awr_topsqlx.txt
mkdir awr_iowl
mv awr_iowl-tableau-* awr_iowl
cat awr_iowl/*csv > awr_iowl.txt
mkdir awr_storagesize_summary
mv awr_storagesize_summary-tableau-* awr_storagesize_summary
cat awr_storagesize_summary/*csv > awr_storagesize_summary.txt
mkdir awr_storagesize_detail
mv awr_storagesize_detail-tableau-* awr_storagesize_detail
cat awr_storagesize_detail/*csv > awr_storagesize_detail.txt
tar -cjvpf awrtxt.tar.bz2 *txt
}}}
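or the same packaging steps as a loop over the report categories:
{{{
# package each AWR csv category into one txt, then compress everything
for t in topevents services cpuwl sysstat topsqlx iowl \
         storagesize_summary storagesize_detail; do
  mkdir -p awr_$t
  mv awr_${t}-tableau-* awr_$t/
  cat awr_$t/*csv > awr_${t}.txt
done
tar -cjvpf awrtxt.tar.bz2 *txt
}}}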
! hortonworks
https://hortonworks.com/about-us/founders/
<<<
ALAN GATES
ARUN C. MURTHY
DEVARAJ DAS
ERIC BALDESCHWIELER
MAHADEV KONAR
OWEN O'MALLEY
SANJAY RADIA
SURESH SRINIVAS
<<<
! cloudera
https://twitter.com/owen_omalley/status/1083430148953714688
Amr Awadallah
Christophe Bisciglia
Jeff Hammerbacher
Mike Olson
! consulting
https://datametica.com/
! chris pinkham - AWS creators
https://www.linkedin.com/in/chris-pinkham-27252091/
https://www.linkedin.com/in/willem-van-biljon-14028a/?originalSubdomain=za
https://hortonworks.com/services/training/certification/
https://www.cloudera.com/more/training/certification.html
Hadoop Certification - HDPCA - Introduction https://www.youtube.com/watch?v=R1YohML3Gpw&list=PLf0swTFhTI8rPlPxDnDrYWE8bg2kKs3Q-
https://labs.itversity.com/learn
https://www.credential.net/profile/jamesyorkwinegar/wallet
https://www.oreilly.com/ideas/data-engineers-vs-data-scientists
enq PS
http://www.orafaq.com/usenet/comp.databases.oracle.server/2007/06/13/0376.htm
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-enqueues#TOC-enq:-CF---contention-
ktsj
http://erdemcer.blogspot.com/2014/06/instant-data-file-extension-ktsj-smco.html
SEG$, uniform extent allocation, space wastage
http://blog.tanelpoder.com/2013/11/06/diagnosing-buffer-busy-waits-with-the-ash_wait_chains-sql-script-v0-2/
http://www.slideshare.net/tanelp/troubleshooting-complex-performance-issues-oracle-seg-contention
https://blogs.oracle.com/datawarehousing/entry/parallel_load_uniform_or_autoallocate
http://www.oracle.com/technetwork/issue-archive/2007/07-may/o37asktom-101781.html
enq FB, HW
https://aprakash.wordpress.com/2012/04/09/enq-fb-contention-no-its-not-facebook-contention/
https://forums.oracle.com/forums/thread.jspa?threadID=672534
{{{
Amundsen
https://eng.lyft.com/amundsen-lyfts-data-discovery-metadata-engine-62d27254fbb9
https://www.dataengineeringpodcast.com/amundsen-data-discovery-episode-92/
Amundsen: A Data Discovery Platform From Lyft | Lyft https://www.youtube.com/watch?v=EOCYw0yf63k
https://www.amundsen.io/amundsen/
Data Fusion
https://cloud.google.com/data-fusion/docs/tutorials/lineage
C360
https://en.wikipedia.org/wiki/Janrain
https://marketingplatform.google.com/about/tag-manager/
https://www.ordergroove.com/home-vb/
https://www.bigcommerce.com/
https://www.lytics.com/
https://segment.com/
DBT
https://rittmananalytics.com/blog/2020/5/28/introducing-the-ra-warehouse-dbt-framework-how-rittman-analytics-does-data-centralization
https://rittmananalytics.com/blog/tag/dbt
https://blog.getdbt.com/what--exactly--is-dbt-/
https://docs.getdbt.com/docs/building-a-dbt-project/documentation
collibra
https://marketplace.collibra.com/listings/jdbc-driver-for-oracle/
https://www.collibra.com/
apache atlas
https://medium.com/google-cloud/a-metadata-comparison-between-apache-atlas-and-google-data-catalog-7e1ad391b4c2
Apache Atlas Introduction: Need for Governance and Metadata management: Vimal Sharma https://www.youtube.com/watch?v=6vNJOHDE15g
https://community.cloudera.com/t5/Community-Articles/Customizing-Atlas-Part1-Model-governance-traceability-and/ta-p/249250
Data Discovery and Lineage: Integrating streaming data in the public cloud with on-prem, classic datastores and heterogeneous schema types
https://conferences.oreilly.com/strata/strata-ny-2018/cdn.oreillystatic.com/en/assets/1/event/278/Data%20discovery%20and%20lineage_%20Integrating%20streaming%20data%20in%20the%20public%20cloud%20with%20on-prem,%20classic%20data%20stores,%20and%20heterogeneous%20schema%20types%20Presentation.pdf
Barbara Eckman, Ph.D.Principal ArchitectComcast
https://conferences.oreilly.com/strata/strata-ny-2018/public/schedule/detail/69518.html
https://learning.oreilly.com/videos/strata-data-conference/9781491976326/9781491976326-video316438?autoplay=false
https://www.youtube.com/results?search_query=heterogeneous+big+data+environment+with+Apache+Atlas+and+Avro+-+Barbara+Eckman
navigator (precursor to atlas)
https://community.cloudera.com/t5/Support-Questions/Cloudera-Navigator-vs-Apache-Atlas/td-p/297931
gcp data lineage
https://stackoverflow.com/questions/55000865/how-can-i-perform-data-lineage-in-gcp
https://github.com/GoogleCloudPlatform/datacatalog-connectors-hive/tree/master/google-datacatalog-apache-atlas-connector
spark spline
https://medium.com/@reenugrewal/data-lineage-tracking-using-spline-on-atlas-via-event-hub-6816be0fd5c7
apache airflow marquez
https://www.dremio.com/subsurface/data-lineage-with-apache-airflow
https://www.slideshare.net/WillyLulciuc/data-lineage-with-apache-airflow-using-marquez
Data Lineage with Apache Airflow | Datakin https://www.youtube.com/watch?v=dfRetdg9444
airflow with atlas
https://www.datastackpros.com/2020/04/gcp-cloud-data-lineage-with-airflow.html
https://stackoverflow.com/questions/53539491/how-import-the-lineage-of-airflow-to-the-atlas
https://github.com/apache/airflow/blob/master/docs/apache-airflow/lineage.rst
talend to atlas
https://help.talend.com/r/TakcAFoOnWMPdNo8XXlPpg/_FqzpSa92Vga9NfumjCM~Q
https://help.talend.com/r/XSofSS9S7oLFhM0dJizFlg/hPYn15mq7Is2544uaVhvxQ
streaming avro schema registry
https://community.cloudera.com/t5/Community-Articles/Avro-Schema-Registry-with-Apache-Atlas-for-Streaming-Data/ta-p/247037
End to end Data Governance with Apache Avro and Atlas https://www.youtube.com/watch?v=b--xwHHukRA
https://community.cloudera.com/t5/Community-Articles/Apache-Atlas-as-an-Avro-Schema-Registry-Test-Drive/ta-p/247039
https://community.cloudera.com/t5/Community-Articles/Avro-Schema-Registry-with-Apache-Atlas-for-Streaming-Data/ta-p/247037
finos waltz https://www.finos.org/blog/introduction-to-finos-waltz
https://waltz.finos.org/
https://waltz.finos.org/blog/index.html
http://www.waltztechnology.com/measurable-category/4
data lineage GDPR
https://info.talend.com/rs/talend/images/WP_EN_TLD_Talend_Outlining_PracticalSteps_GDPR_Compliance.pdf
https://www.octopai.com/gdpr-experts/?creative=499030836191&keyword=data%20governance%20data%20lineage&matchtype=b&network=g&device=c&gclid=CjwKCAjwr_uCBhAFEiwAX8YJgf2QLkmBIZu7JRszD2JtmwbbGgwZ6QSGlsB5niAFmAmN_GDmTXWv3hoCS8cQAvD_BwE
https://www.talend.com/resources/gdpr-stitch-data-lineage/
https://www.talend.com/resources/16-step-data-governance-plan-gdpr-compliance/?ty=content
https://www.talend.com/resources/gdpr-govern-analytical-models/
https://www.talend.com/solutions/data-protection-gdpr-compliance/
https://www.slideshare.net/Hadoop_Summit/practical-experiences-using-atlas-and-ranger-to-implement-gdpr
https://theworldofapenguin.blogspot.com/
https://github.com/SvenskaSpel/cobra-policytool
https://github.com/SvenskaSpel/cobra-policytool
}}}
https://www.datanami.com/2016/05/04/rise-data-science-notebooks/
notebook innovation at netflix https://medium.com/netflix-techblog/notebook-innovation-591ee3221233
.
! programming
http://www.pluralsight.com/courses/learning-program-better-programmer
http://www.pluralsight.com/courses/learning-programming-javascript
http://www.pluralsight.com/courses/learning-programming-abstractions-python
! logic and algorithms
Algorithms and Data Structures - Part 1 http://www.pluralsight.com/courses/ads-part1
Algorithms and Data Structures - Part 2 http://www.pluralsight.com/courses/ads2
http://www.pluralsight.com/courses/math-for-programmers
http://www.pluralsight.com/courses/refactoring-fundamentals
http://www.pluralsight.com/courses/provable-code
http://www.pluralsight.com/courses/writing-clean-code-humans
http://www.pluralsight.com/courses/better-software-through-measurement
Functional Data Structures in R https://www.amazon.com/Functional-Data-Structures-Statistical-Programming-ebook/dp/B077KPFDXX/ref=pd_sim_351_8?_encoding=UTF8&psc=1&refRID=GQYFW66FG2H28WN6BTVW
https://www.youtube.com/user/DataVaultAcademy/videos <-- GOOD STUFF
<<showtoc>>
! automated setup
https://github.com/SlalomBuild/snowflake-on-ecs
! datavault
https://www.snowflake.com/blog/tips-for-optimizing-the-data-vault-architecture-on-snowflake-part-3/
https://twitter.com/dangalavan/status/1314567189257748480
https://galavan.com/optimizing-the-data-vault-architecture-on-snowflake-free-sql/
Data Modeling Meetup Munich: Optimizing Data Vault Architecture on Snowflake with Dan Galavan https://www.youtube.com/watch?v=_TkCRbIyOwQ&ab_channel=Obaysch
https://github.com/dangalavan/Optimizing-DataVault-on-Snowflake
<<<
SQL Scripts summary:
* 01 – DDL.sql - the DDL used to create the required Data Vault tables etc. on Snowflake
* 05 – Multi table inserts.sql - loads the Data Vault from 'Staging' (sample Snowflake database) using multi-table insert statements; uses Overwrite All for testing purposes
* 12 – Create Warehouses.sql - creates 3 Snowflake virtual warehouses to load Hubs, Links, and Satellites separately
* 15 – MultipleWarehouses – 01 – Load Hubs.sql
* 15 – MultipleWarehouses – 02 – Load Sats.sql
* 15 – MultipleWarehouses – 03 – Load Links.sql - each of the three scripts loads Hubs / Sats / Links separately using a dedicated virtual warehouse for workload separation
* 30 – VARIANT – Data Load.sql - loads JSON into the Data Vault
* 40 – BusinessVault.sql - parses JSON data and optimizes the Business Vault
<<<
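Below is a minimal sketch (not from the repo above; all table and column names are made up) of the Snowflake multi-table insert pattern the "05" script uses: one scan of staging feeds both a hub and a satellite in a single statement.
{{{
-- unconditional multi-table insert: one pass over staging loads both targets
INSERT ALL
  INTO hub_customer (customer_hk, customer_bk, load_dts, rec_src)
    VALUES (customer_hk, customer_bk, load_dts, rec_src)
  INTO sat_customer (customer_hk, hash_diff, customer_name, load_dts, rec_src)
    VALUES (customer_hk, hash_diff, customer_name, load_dts, rec_src)
SELECT md5(to_varchar(customer_id))     AS customer_hk,
       to_varchar(customer_id)          AS customer_bk,
       md5(coalesce(customer_name, '')) AS hash_diff,
       customer_name,
       current_timestamp()              AS load_dts,
       'STG.CUSTOMERS'                  AS rec_src
FROM stg.customers;
}}}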
sample code index http://www.oracle.com/technetwork/indexes/samplecode/index.html
Oracle PL/SQL Sample Code
http://www.oracle.com/technetwork/indexes/samplecode/plsql-sample-522110.html
https://github.com/oracle/db-sample-schemas
Sample Models and Scripts http://www.oracle.com/technetwork/developer-tools/datamodeler/sample-models-scripts-224531.html
! courses
https://www.datacamp.com/courses/machine-learning-toolbox
https://www.datacamp.com/courses/manipulating-time-series-data-in-r-with-xts-zoo
https://www.datacamp.com/courses/financial-trading-in-r
https://www.jetbrains.com/datagrip/buy/#edition=personal
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4843459400346642911
{{{
which method should I use (or other if you have a better one):
1)
where dateCol = to_date(:param, 'YYYY-MM-DD')
2)
where to_date(dateCol, 'YYYY-MM-DD') = to_date(:param, 'YYYY-MM-DD')
3)
where to_char(dateCol, 'YYYY-MM-DD') = :param
4)
where to_char(dateCol, 'YYYY-MM-DD') = to_char(:param, 'YYYY-MM-DD')
thank you,
Antonio
and we said...
I have a database/table with a DATE column that has no index and has the format "DD-MM-YYYY".
No, a date doesn't have a format! A date is 7 bytes of binary information - a byte each for the century, year, month, day, hour, minute, and second. It gets formatted when fetched as a string by a client or when implicitly converted in SQL or PLSQL.
Your database might have a NLS_DATE_FORMAT set to dd-mm-yyyy - but your date doesn't!
You should always explicitly convert the STRING into a DATE to compare. ALWAYS.
1) that is the correct way to do that.
2) that is horrible - NEVER do that. that is really:
where to_date( TO_CHAR(dateCol), 'YYYY-MM-DD' ) = to_date(:param,'YYYY-MM-DD')
first - there is that implicit to_char taking place! And if your default date mask is DD-MM-YYYY - it wouldn't even work!!
Never allow implicit conversions
Convert the less specific type (string) into the more specific type (date)
3) that would be a non-performant approach. do you want to convert EVERY ROW from a date to string to compare?
Or do you want to convert a string into a date ONCE and compare.
Also, using to_char() on the database date column would typically obviate the use of any indexes
4) see #2
#1 all the way
}}}
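A minimal sketch of method #1 in practice (the ORDERS table and its ORDER_DATE index are hypothetical): converting the bind once keeps the predicate sargable, while to_char() on the column does not.
{{{
-- good: convert the string bind ONCE; an index on ORDER_DATE stays usable
-- (the range form also catches rows carrying a time-of-day component)
select count(*)
from orders
where order_date >= to_date(:param, 'YYYY-MM-DD')
  and order_date <  to_date(:param, 'YYYY-MM-DD') + 1;

-- bad (method #3): to_char() runs against EVERY row and disables
-- a regular index on ORDER_DATE
select count(*)
from orders
where to_char(order_date, 'YYYY-MM-DD') = :param;
}}}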
http://www.evernote.com/shard/s48/sh/02058cbe-a7e6-40c3-b15f-1d8f781b04b0/79bf11874c975632cd93b49d7e8bb4c3
db_file_multiblock_read_count
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:499197100346264909
{{{
set echo on
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
spool dbcheck_&_instname..txt
-- WITH REDUNDANCY
set lines 400
col state format a9
col name format a10
col sector format 999990
col block format 999990
col label format a25
col path format a40
col redundancy format a25
col pct_used format 990
col pct_free format 990
col raw_gb heading "RAW|TOTAL_GB"
col usable_total_gb heading "REAL|TOTAL_GB"
col usable_used_gb heading "REAL|USED_GB"
col usable_free_gb heading "REAL|FREE_GB"
col required_mirror_free_gb heading "REQUIRED|MIRROR_FREE|GB"
col usable_file_gb heading "USABLE|FILE|GB"
col voting format a6 heading "VOTING"
BREAK ON REPORT
COMPUTE SUM OF raw_gb ON REPORT
COMPUTE SUM OF usable_total_gb ON REPORT
COMPUTE SUM OF usable_used_gb ON REPORT
COMPUTE SUM OF usable_free_gb ON REPORT
COMPUTE SUM OF required_mirror_free_gb ON REPORT
COMPUTE SUM OF usable_file_gb ON REPORT
select
state,
type,
sector_size sector,
block_size block,
allocation_unit_size au,
round(total_mb/1024,2) raw_gb,
round((DECODE(TYPE, 'HIGH', 0.3333 * total_mb, 'NORMAL', .5 * total_mb, total_mb))/1024,2) usable_total_gb,
round((DECODE(TYPE, 'HIGH', 0.3333 * (total_mb - free_mb), 'NORMAL', .5 * (total_mb - free_mb), (total_mb - free_mb)))/1024,2) usable_used_gb,
round((DECODE(TYPE, 'HIGH', 0.3333 * free_mb, 'NORMAL', .5 * free_mb, free_mb))/1024,2) usable_free_gb,
round((DECODE(TYPE, 'HIGH', 0.3333 * required_mirror_free_mb, 'NORMAL', .5 * required_mirror_free_mb, required_mirror_free_mb))/1024,2) required_mirror_free_gb,
round(usable_file_mb/1024,2) usable_file_gb,
round((total_mb - free_mb)/total_mb,2)*100 as "PCT_USED",
round(free_mb/total_mb,2)*100 as "PCT_FREE",
offline_disks,
voting_files voting,
name
from v$asm_diskgroup
where total_mb != 0
order by 1;
-- SHOW FREE
SET LINESIZE 300
SET PAGESIZE 9999
SET VERIFY OFF
COLUMN status FORMAT a9 HEADING 'Status'
COLUMN name FORMAT a25 HEADING 'Tablespace Name'
COLUMN type FORMAT a12 HEADING 'TS Type'
COLUMN extent_mgt FORMAT a10 HEADING 'Ext. Mgt.'
COLUMN segment_mgt FORMAT a9 HEADING 'Seg. Mgt.'
COLUMN pct_free FORMAT 999.99 HEADING "% Free"
COLUMN gbytes FORMAT 99,999,999 HEADING "Total GBytes"
COLUMN used FORMAT 99,999,999 HEADING "Used Gbytes"
COLUMN free FORMAT 99,999,999 HEADING "Free Gbytes"
BREAK ON REPORT
COMPUTE SUM OF gbytes ON REPORT
COMPUTE SUM OF free ON REPORT
COMPUTE SUM OF used ON REPORT
SELECT d.status status, d.bigfile, d.tablespace_name name, d.contents type, d.extent_management extent_mgt, d.segment_space_management segment_mgt, df.tssize gbytes, (df.tssize - fs.free) used, fs.free free
FROM
dba_tablespaces d,
(SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024) tssize FROM dba_data_files GROUP BY tablespace_name) df,
(SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024) free FROM dba_free_space GROUP BY tablespace_name) fs
WHERE
d.tablespace_name = df.tablespace_name(+)
AND d.tablespace_name = fs.tablespace_name(+)
AND NOT (d.extent_management like 'LOCAL' AND d.contents like 'TEMPORARY')
UNION ALL
SELECT d.status status, d.bigfile, d.tablespace_name name, d.contents type, d.extent_management extent_mgt, d.segment_space_management segment_mgt, df.tssize gbytes, (df.tssize - fs.free) used, fs.free free
FROM
dba_tablespaces d,
(select tablespace_name, sum(bytes)/1024/1024/1024 tssize from dba_temp_files group by tablespace_name) df,
(select tablespace_name, sum(bytes_cached)/1024/1024/1024 free from v$temp_extent_pool group by tablespace_name) fs
WHERE
d.tablespace_name = df.tablespace_name(+)
AND d.tablespace_name = fs.tablespace_name(+)
AND d.extent_management like 'LOCAL' AND d.contents like 'TEMPORARY'
ORDER BY 9;
CLEAR COLUMNS BREAKS COMPUTES
SELECT tablespace_name,status,contents
,logging,predicate_evaluation,compress_for
FROM dba_tablespaces;
select sum(bytes)/1024/1024/1024 db_segment_size_gb from dba_segments;
-- multiblock
col "Parameter" FOR a40
col "Session Value" FOR a20
col "Instance Value" FOR a20
SELECT a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value"
FROM x$ksppi a, x$ksppcv b, x$ksppsv c
WHERE a.indx = b.indx AND a.indx = c.indx
AND substr(ksppinm,1,1)='_'
AND a.ksppinm like '%db_file%read%'
/
-- system stats
select pname, pval1
from sys.aux_stats$;
-- Measuring Your Direct Read Threshold
select a.ksppinm name, b.ksppstvl value, b.ksppstdf isdefault
from x$ksppi a, x$ksppsv b
where a.indx = b.indx
and (a.ksppinm ='_small_table_threshold'
or a.ksppinm='__db_cache_size')
order by 1,2
/
-- dop
select count(*) , degree from dba_tables group by degree order by 1;
select count(*) , degree from dba_indexes group by degree order by 1;
select count(*) count, degree, owner from dba_tables group by degree, owner order by owner, count;
select count(*) count, degree, owner from dba_indexes group by degree, owner order by owner, count;
-- INSTANCE OVERVIEW
CLEAR COLUMNS BREAKS COMPUTES
SELECT
instance_name instance_name_print
, instance_number instance_number_print
, thread# thread_number_print
, host_name host_name_print
, version version
, TO_CHAR(startup_time,'mm/dd/yyyy HH24:MI:SS') start_time
, ROUND(SYSDATE - startup_time, 2) uptime
, parallel parallel
, status instance_status
, logins logins
, DECODE( archiver
, 'FAILED'
, archiver
, archiver ) archiver
FROM gv$instance
ORDER BY instance_number;
-- resource manager ---------------------
show parameter cpu_count
show parameter resource_manager_plan
-- cell config
COL cv_cellname HEAD CELLNAME FOR A20
COL cv_cellversion HEAD CELLSRV_VERSION FOR A20
COL cv_flashcachemode HEAD FLASH_CACHE_MODE FOR A20
SELECT
cellname cv_cellname
, CAST(extract(xmltype(confval), '/cli-output/cell/releaseVersion/text()') AS VARCHAR2(20)) cv_cellVersion
, CAST(extract(xmltype(confval), '/cli-output/cell/flashCacheMode/text()') AS VARCHAR2(20)) cv_flashcachemode
, CAST(extract(xmltype(confval), '/cli-output/cell/cpuCount/text()') AS VARCHAR2(10)) cpu_count
, CAST(extract(xmltype(confval), '/cli-output/cell/upTime/text()') AS VARCHAR2(20)) uptime
, CAST(extract(xmltype(confval), '/cli-output/cell/kernelVersion/text()') AS VARCHAR2(30)) kernel_version
, CAST(extract(xmltype(confval), '/cli-output/cell/makeModel/text()') AS VARCHAR2(50)) make_model
FROM
v$cell_config -- gv$ isn't needed, all cells should be visible in all instances
WHERE
conftype = 'CELL'
ORDER BY
cv_cellname
/
-- version
CLEAR COLUMNS BREAKS COMPUTES
COLUMN banner FORMAT a120 HEADING 'Banner'
SELECT * FROM v$version;
-- patch info
col action format a10
col namespace format a10
col action_time format a40
col version format a10
col comments format a30
select * from dba_registry_history;
-- options
CLEAR COLUMNS BREAKS COMPUTES
COLUMN parameter HEADING 'Option Name'
COLUMN value HEADING 'Installed?'
SELECT
DECODE( value
, 'FALSE'
, parameter
, parameter ) parameter
, DECODE( value
, 'FALSE'
, value
, value ) value
FROM v$option
ORDER BY parameter;
-- db registry
CLEAR COLUMNS BREAKS COMPUTES
SELECT
comp_id comp_id
, comp_name comp_name
, version
, DECODE( status
, 'VALID', status
, 'INVALID', status
, status ) status
, modified modified
, control
, schema
, procedure
FROM dba_registry
ORDER BY comp_name;
-- feature usage stats
CLEAR COLUMNS BREAKS COMPUTES
COLUMN feature_name FORMAT a115 HEADING 'Feature|Name'
COLUMN version FORMAT a75 HEADING 'Version'
COLUMN detected_usages FORMAT a75 HEADING 'Detected|Usages'
COLUMN total_samples FORMAT a75 HEADING 'Total|Samples'
COLUMN currently_used FORMAT a60 HEADING 'Currently|Used'
COLUMN first_usage_date FORMAT a30 HEADING 'First Usage|Date'
COLUMN last_usage_date FORMAT a30 HEADING 'Last Usage|Date'
COLUMN last_sample_date FORMAT a30 HEADING 'Last Sample|Date'
COLUMN next_sample_date FORMAT a30 HEADING 'Next Sample|Date'
SELECT
name feature_name
, DECODE( detected_usages
, 0
, version
, version ) version
, DECODE( detected_usages
, 0
, NVL(TO_CHAR(detected_usages), '')
, NVL(TO_CHAR(detected_usages), '') ) detected_usages
, DECODE( detected_usages
, 0
, NVL(TO_CHAR(total_samples), '')
, NVL(TO_CHAR(total_samples), '') ) total_samples
, DECODE( detected_usages
, 0
, NVL(currently_used, '')
, NVL(currently_used, '') ) currently_used
, DECODE( detected_usages
, 0
, NVL(TO_CHAR(first_usage_date, 'mm/dd/yyyy HH24:MI:SS'), '')
, NVL(TO_CHAR(first_usage_date, 'mm/dd/yyyy HH24:MI:SS'), '') ) first_usage_date
, DECODE( detected_usages
, 0
, NVL(TO_CHAR(last_usage_date, 'mm/dd/yyyy HH24:MI:SS'), '')
, NVL(TO_CHAR(last_usage_date, 'mm/dd/yyyy HH24:MI:SS'), '') ) last_usage_date
, DECODE( detected_usages
, 0
, NVL(TO_CHAR(last_sample_date, 'mm/dd/yyyy HH24:MI:SS'), '')
, NVL(TO_CHAR(last_sample_date, 'mm/dd/yyyy HH24:MI:SS'), '') ) last_sample_date
, DECODE( detected_usages
, 0
, NVL(TO_CHAR((last_sample_date+SAMPLE_INTERVAL/60/60/24), 'mm/dd/yyyy HH24:MI:SS'), '')
, NVL(TO_CHAR((last_sample_date+SAMPLE_INTERVAL/60/60/24), 'mm/dd/yyyy HH24:MI:SS'), '') ) next_sample_date
FROM dba_feature_usage_statistics
ORDER BY name;
-- high watermark stats
set lines 1000
CLEAR COLUMNS BREAKS COMPUTES
COLUMN statistic_name FORMAT a115 HEADING 'Statistic Name'
COLUMN version FORMAT a62 HEADING 'Version'
COLUMN highwater FORMAT 9,999,999,999,999,999 HEADING 'Highwater'
COLUMN last_value FORMAT 9,999,999,999,999,999 HEADING 'Last Value'
COLUMN description FORMAT a120 HEADING 'Description'
SELECT
name statistic_name
, version version
, highwater highwater
, last_value last_value
, description description
FROM dba_high_water_mark_statistics
ORDER BY name;
-- db overview
CLEAR COLUMNS BREAKS COMPUTES
COLUMN name FORMAT a75 HEADING 'Database|Name'
COLUMN dbid HEADING 'Database|ID'
COLUMN db_unique_name HEADING 'Database|Unique Name'
COLUMN creation_date HEADING 'Creation|Date'
COLUMN platform_name_print HEADING 'Platform|Name'
COLUMN current_scn HEADING 'Current|SCN'
COLUMN log_mode HEADING 'Log|Mode'
COLUMN open_mode HEADING 'Open|Mode'
COLUMN force_logging HEADING 'Force|Logging'
COLUMN flashback_on HEADING 'Flashback|On?'
COLUMN controlfile_type HEADING 'Controlfile|Type'
COLUMN last_open_incarnation_number HEADING 'Last Open|Incarnation Num'
SELECT
name name
, dbid dbid
, db_unique_name db_unique_name
, TO_CHAR(created, 'mm/dd/yyyy HH24:MI:SS') creation_date
, platform_name platform_name_print
, current_scn current_scn
, log_mode log_mode
, open_mode open_mode
, protection_mode protection_mode
, database_role database_role
, dataguard_broker dataguard_broker
, force_logging force_logging
, flashback_on flashback_on
, controlfile_type controlfile_type
, last_open_incarnation# last_open_incarnation_number
FROM v$database;
-- init parameters
CLEAR COLUMNS BREAKS COMPUTES
COLUMN spfile HEADING 'SPFILE Usage'
SELECT
'This database '||
DECODE( (1-SIGN(1-SIGN(count(*) - 0)))
, 1
, 'IS'
, 'IS NOT') ||
' using an SPFILE.' spfile
FROM v$spparameter
WHERE value IS NOT null;
COLUMN pname FORMAT a75 HEADING 'Parameter Name' ENTMAP off
COLUMN instance_name_print FORMAT a45 HEADING 'Instance Name' ENTMAP off
COLUMN value FORMAT a75 HEADING 'Value' ENTMAP off
COLUMN isdefault FORMAT a75 HEADING 'Is Default?' ENTMAP off
COLUMN issys_modifiable FORMAT a75 HEADING 'Is Dynamic?' ENTMAP off
BREAK ON report ON pname
SELECT
DECODE( p.isdefault
, 'FALSE'
, SUBSTR(p.name,0,512)
, SUBSTR(p.name,0,512) ) pname
, DECODE( p.isdefault
, 'FALSE'
, i.instance_name
, i.instance_name ) instance_name_print
, DECODE( p.isdefault
, 'FALSE'
, p.isdefault
, p.isdefault ) isdefault
, DECODE( p.isdefault
, 'FALSE'
, p.issys_modifiable
, p.issys_modifiable ) issys_modifiable
, DECODE( p.isdefault
, 'FALSE'
, SUBSTR(p.value,0,512)
, SUBSTR(p.value,0,512) ) value
FROM
gv$parameter p
, gv$instance i
WHERE
p.inst_id = i.inst_id
ORDER BY
p.name
, i.instance_name;
-- big segments
col segment_name format a50
select * from
(
select owner, segment_name, segment_type, tablespace_name, blocks, round(sum(bytes)/1024/1024/1024,2) gb
from dba_segments
group by owner, segment_name, segment_type, tablespace_name, blocks
)
where gb > 4
order by gb asc;
-- query big non-partitioned tables
col segment_name format a50
select * from
(
select owner, segment_name, segment_type, tablespace_name, sum(bytes)/1024/1024/1024 gb
from dba_segments
group by owner, segment_name, segment_type, tablespace_name
)
where segment_name not in (SELECT TABLE_NAME
FROM ALL_TAB_PARTITIONS
where table_owner not in ('SYS','SYSTEM'))
and gb > 4
and segment_type = 'TABLE'
order by gb asc;
-- distinct partitioned tables
SELECT DISTINCT table_owner, TABLE_NAME
FROM ALL_TAB_PARTITIONS
where table_owner not in ('SYS','SYSTEM')
order by 1,2;
-- query big partitioned tables
col segment_name format a50
select * from
(
select owner, segment_name, segment_type, tablespace_name, sum(bytes)/1024/1024/1024 gb
from dba_segments
group by owner, segment_name, segment_type, tablespace_name
)
where segment_name in (SELECT TABLE_NAME
FROM ALL_TAB_PARTITIONS
where table_owner not in ('SYS','SYSTEM'))
and gb > 4
order by gb asc;
-- list subpartitions
col owner format a15
select * from
(
select a.owner, b.table_name, b.PARTITION_NAME, b.TABLESPACE_NAME, b.COMPRESS_FOR, b.last_analyzed, b.blocks, round(sum(a.bytes)/1024/1024/1024,2) GB
from dba_segments a,dba_tab_subpartitions b
where a.segment_name=b.table_name and
a.PARTITION_NAME=b.SUBPARTITION_NAME
group by a.owner, b.table_name, b.PARTITION_NAME, b.TABLESPACE_NAME, b.COMPRESS_FOR, b.last_analyzed, b.blocks
)
where GB > 1
order by GB asc;
-- compression
select tablespace_name,
def_tab_compression,
nvl(compress_for,'NONE') compress_for
from dba_tablespaces
order by 2;
select count(*), COMPRESS_FOR, tablespace_name, partitioned from dba_tables group by COMPRESS_FOR, tablespace_name , partitioned order by 2;
-- table in flash cache keep
col table_name format a40
select owner, table_name, COMPRESS_FOR, CELL_FLASH_CACHE, last_analyzed, blocks
from dba_tables
where CELL_FLASH_CACHE not in ('DEFAULT')
and owner not in ('SYS','SYSTEM')
order by 6;
-- table size in flash cache keep
select owner, segment_name, segment_type, tablespace_name, sum(bytes)/1024/1024/1024 gb
from dba_segments
where segment_name in (select table_name
from dba_tables
where CELL_FLASH_CACHE not in ('DEFAULT')
and owner not in ('SYS','SYSTEM'))
group by owner, segment_name, segment_type, tablespace_name
order by 5;
-- query big non-partitioned indexes
col segment_name format a50
select * from
(
select owner, segment_name, segment_type, tablespace_name, sum(bytes)/1024/1024/1024 gb
from dba_segments
group by owner, segment_name, segment_type, tablespace_name
)
where segment_name not in (SELECT index_NAME
FROM ALL_IND_PARTITIONS
where index_owner not in ('SYS','SYSTEM'))
and gb > 4
and segment_type = 'INDEX'
order by gb asc;
-- distinct partitioned indexes
SELECT DISTINCT index_owner, index_NAME
FROM ALL_ind_PARTITIONS
where index_owner not in ('SYS','SYSTEM')
order by 1,2;
-- query big partitioned indexes
col segment_name format a50
select * from
(
select owner, segment_name, segment_type, tablespace_name, sum(bytes)/1024/1024/1024 gb
from dba_segments
group by owner, segment_name, segment_type, tablespace_name
)
where segment_name in (SELECT index_NAME
FROM ALL_IND_PARTITIONS
where index_owner not in ('SYS','SYSTEM'))
and gb > 4
order by gb asc;
-- list subpartitions indexes
col owner format a15
select * from
(
select a.owner, b.index_name, b.PARTITION_NAME, b.TABLESPACE_NAME, b.last_analyzed, round(sum(a.bytes)/1024/1024/1024,2) GB
from dba_segments a,dba_ind_subpartitions b
where a.segment_name=b.index_name and
a.PARTITION_NAME=b.SUBPARTITION_NAME
group by a.owner, b.index_name, b.PARTITION_NAME, b.TABLESPACE_NAME, b.last_analyzed
)
where GB > 1
order by GB asc;
-- indexes in flash cache keep
col index_name format a40
select owner, index_name, CELL_FLASH_CACHE, last_analyzed
from dba_indexes
where CELL_FLASH_CACHE not in ('DEFAULT')
and owner not in ('SYS','SYSTEM')
order by 4;
-- indexes size in flash cache keep
select owner, segment_name, segment_type, tablespace_name, sum(bytes)/1024/1024/1024 gb
from dba_segments
where segment_name in (select index_name
from dba_indexes
where CELL_FLASH_CACHE not in ('DEFAULT')
and owner not in ('SYS','SYSTEM'))
group by owner, segment_name, segment_type, tablespace_name
order by 5;
spool off
}}}
http://sourceforge.net/projects/dbgen/
http://www.tpc.org/tpch/
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dbseg/managing-security-for-definers-rights-and-invokers-rights.html#GUID-EF625176-A3AA-47F5-BAEE-248DBAAF22A3
<<showtoc>>
<<<
Steps to execute the MOS note
ADBS/ADBD:How To Get AWR DUMP (Doc ID 2732530.1)
<<<
! get snap IDs
{{{
select dbid,count(*), min(snap_id), max(snap_id) from DBA_HIST_SNAPSHOT group by dbid;
}}}
! export AWR
<<<
@@IMPORTANT NOTE: the dmpfile name you pass is automatically suffixed with .dmp,
so awrdump189439690133074033.dmp below is actually created as awrdump189439690133074033.dmp.dmp@@
<<<
{{{
begin
/* call PL/SQL routine to extract the data */
dbms_workload_repository.extract(dmpfile => 'awrdump189439690133074033.dmp',
dmpdir => 'DATA_PUMP_DIR',
bid => 3307,
eid => 4033,
dbid => 1894396901);
end;
/
}}}
! list contents of DATA_PUMP_DIR
{{{
SQL> SELECT object_name FROM DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR') where object_name like 'awrdump%';
OBJECT_NAME
--------------------------------------------------------------------------------
awrdump.dmp.dmp
awrdump.dmp.log
awrdump1894396901.dmp.dmp
awrdump1894396901.dmp.log
awrdump189439690133074033.dmp.dmp
awrdump189439690133074033.dmp.log
}}}
! create AUTH token
{{{
Identity -> Users -> User Details -> Auth Tokens
* Copy the contents right away, you'll use it for creating credential
}}}
! create credential
* use your OCI account email address, not the ADMIN database user, else it will error with ORA-20401: Authorization failed for URI
* the email address is the one tied to the auth token
* you can also create a named credential with a name other than DEF_CREDS
{{{
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CREDS',
username => 'karl.arao@oracle.com',
password => 'xxx PUT AUTH TOKEN HERE xxx' );
END;
/
}}}
! create BUCKET
* after creating a bucket, upload a sample file to get the URI
{{{
Object Storage -> Bucket Details
}}}
! list contents of bucket - using dbms_cloud.list_objects
* URL path below is captured from object details of a sample file uploaded
* URL Path (URI): https://objectstorage.us-phoenix-1.oraclecloud.com/n/intoraclerwp/b/kabucket/o/awrdump%2Fash_analysis.txt
<<<
@@
there are 3 types of URIs in OCI - native URIs, Swift URIs, or pre-authenticated request URIs
Cloud Object Storage URI Formats https://docs.oracle.com/en/cloud/paas/autonomous-database/adbdj/index.html#articletitle
@@
<<<
{{{
-- both the objectstorage and swiftobjectstorage URI formats work
col object_name format a50
select object_name, bytes from dbms_cloud.list_objects('DEF_CREDS','https://objectstorage.us-phoenix-1.oraclecloud.com/n/intoraclerwp/b/kabucket');
select object_name, bytes from dbms_cloud.list_objects('DEF_CREDS','https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/intoraclerwp/kabucket');
OBJECT_NAME BYTES
-------------------------------------------------- ----------
awrdump/ 0
awrdump/ash_analysis.txt 9726
awrdump189439690133074033.dmp 578949120
awrdump189439690133074033_2.dmp 578949120
awrextract/ 0
}}}
! list contents of bucket - using curl
* URL below is a pre-auth URL
{{{
host curl -X GET https://objectstorage.us-phoenix-1.oraclecloud.com/p/32FxVHVvqKGBDpzebfVF6MMOix3ZBKxEde6vHExk40weKTHF0hT3Wmm9jeWTAnZs/n/intoraclerwp/b/kabucket/o/
{"objects":[{"name":"awrdump/"},{"name":"awrdump/ash_analysis.txt"},{"name":"awrdump189439690133074033.dmp"},{"name":"awrdump189439690133074033_2.dmp"},{"name":"awrextract/"}]}
SQL>
}}}
! copy the dump file from DATA_PUMP_DIR to the bucket
{{{
BEGIN
DBMS_CLOUD.PUT_OBJECT (
credential_name => 'DEF_CREDS'
,object_uri =>'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/intoraclerwp/kabucket/awrdump189439690133074033_4.dmp'
,directory_name =>'DATA_PUMP_DIR'
,file_name =>'awrdump189439690133074033.dmp.dmp');
end;
/
}}}
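On the import side the transfer runs in reverse; a minimal sketch using DBMS_CLOUD.GET_OBJECT to pull the dump from the bucket back into DATA_PUMP_DIR (the target file name here is an assumption):
{{{
BEGIN
  DBMS_CLOUD.GET_OBJECT(
    credential_name => 'DEF_CREDS'
    ,object_uri      => 'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/intoraclerwp/kabucket/awrdump189439690133074033_4.dmp'
    ,directory_name  => 'DATA_PUMP_DIR'
    ,file_name       => 'awrdump189439690133074033_4.dmp');
END;
/
}}}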
.
Q: How Could I Format The Output From Dbms_metadata.Get_ddl Utility?
Doc ID: Note:394143.1
How to Use Package DBMS_METADATA With SCHEMA_EXPORT to Extract DDLs Owned by Another User
Doc ID: Note:556823.1
Using DBMS_METADATA To Get The DDL For Objects
Doc ID: Note:188838.1
WHY DOES DBMS_METADATA.GET_DDL NOT SHOW ALL LINES?
Doc ID: Note:332077.1
Object types for DBMS_METADATA
Doc ID: Note:207859.1
Dbms_metadata.Get_ddl Can Not Capture Ddl For Scheduler Jobs
Doc ID: Note:567504.1
''-- recreate a user''
{{{
set heading off
set echo off
set long 9999999
select dbms_metadata.get_ddl('USER', username) || ';' usercreate
from dba_users where username = 'SCOTT';
}}}
''-- get role, system, object grants''
{{{
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT','SCOTT') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','SCOTT') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','SCOTT') FROM DUAL;
}}}
-- sample create user output
<<<
{{{
CREATE USER "SCOTT" IDENTIFIED BY VALUES 'S:B214FE9CDA4F58C2114A419564313612C32CB71B92CA6585997B22FB8261;F894844C34402B67'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";
GRANT "CONNECT" TO "SCOTT";
GRANT "RESOURCE" TO "SCOTT";
GRANT UNLIMITED TABLESPACE TO "SCOTT";
}}}
<<<
''-- table''
{{{
set heading off
set echo off
set long 9999999
select dbms_metadata.get_ddl('TABLE','PARTITION1','SCOTT') from dual;
}}}
<<<
create table DDL for dbms_redefinition
{{{
-- get the create DDL for the table
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'PRETTY',true);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true);
-- don't show fk
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'REF_CONSTRAINTS',false);
-- get rid of constraints and not nulls
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'CONSTRAINTS',false);
select dbms_metadata.get_ddl('TABLE','EMPLOYEES_INT','HR') from dual;
}}}
<<<
''-- index''
{{{
set heading off
set echo off
set long 9999999
select dbms_metadata.get_ddl('INDEX',index_name,owner) from dba_indexes where owner = 'HR' and table_name = 'EMPLOYEES';
}}}
<<<
index DDL for dbms_redefinition
{{{
-- index DDL
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'PRETTY',true);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true);
select dbms_metadata.get_ddl('INDEX',index_name,owner) from dba_indexes where owner = 'HR' and table_name = 'EMPLOYEES';
}}}
-- script to register the INDEXES
{{{
select 'DBMS_REDEFINITION.REGISTER_DEPENDENT_OBJECT ('''||a.owner||''','''||a.table_name||''','''||b.table_name||''',DBMS_REDEFINITION.CONS_INDEX,'''||a.owner||''','''||a.INDEX_name||''','''||b.INDEX_name||''');' from
(select * from dba_INDEXES where table_name = 'EMPLOYEES') a,
(select * from dba_INDEXES where table_name = 'EMPLOYEES_INT') b
where a.OWNER = b.OWNER
AND A.INDEX_NAME||'_INT' = B.INDEX_NAME
and a.owner = 'HR';
}}}
<<<
-- ''constraints''
{{{
select dbms_metadata.get_ddl('CONSTRAINT',constraint_name)
from user_constraints;
}}}
<<<
constraints DDL for dbms_redefinition
{{{
-- constraints DDL
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'PRETTY',true);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true);
-- for constraints
select dbms_metadata.get_ddl('CONSTRAINT',constraint_name,owner)
from dba_constraints
where table_name = 'EMPLOYEES'
and constraint_type != 'R';
-- for foreign key
select dbms_metadata.get_ddl('REF_CONSTRAINT',constraint_name,owner)
from dba_constraints
where table_name = 'EMPLOYEES'
and constraint_type = 'R';
}}}
script to register the constraints
{{{
-- script to register the constraints
select 'DBMS_REDEFINITION.REGISTER_DEPENDENT_OBJECT ('''||a.owner||''','''||a.table_name||''','''||b.table_name||''',DBMS_REDEFINITION.CONS_CONSTRAINT,'''||a.owner||''','''||a.constraint_name||''','''||b.constraint_name||''');' from
(select * from dba_cons_columns where table_name = 'EMPLOYEES') a,
(select * from dba_cons_columns where table_name = 'EMPLOYEES_INT') b
where a.column_name = b.column_name
and a.owner = 'HR';
}}}
<<<
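For context, a minimal sketch of the DBMS_REDEFINITION flow that the generated REGISTER_DEPENDENT_OBJECT scripts above plug into (same HR EMPLOYEES / EMPLOYEES_INT example; the interim table must already exist):
{{{
-- verify the table qualifies for online redefinition by primary key
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('HR','EMPLOYEES',DBMS_REDEFINITION.CONS_USE_PK);
-- start copying EMPLOYEES into the interim table EMPLOYEES_INT
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('HR','EMPLOYEES','EMPLOYEES_INT',NULL,DBMS_REDEFINITION.CONS_USE_PK);
-- <register dependent indexes/constraints with the generated scripts above>
-- apply changes made during the copy, then swap the tables
EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('HR','EMPLOYEES','EMPLOYEES_INT');
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('HR','EMPLOYEES','EMPLOYEES_INT');
}}}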
-- ''DB Link''
<<<
{{{
set heading off
set echo off
set long 9999999
select dbms_metadata.get_ddl('DB_LINK', DB_LINK) || ';' dblinkcreate
from dba_db_links;
}}}
<<<
-- ''trigger''
{{{
SPOOL TRIGGER_NAME.trg
SELECT DBMS_METADATA.GET_DDL('TRIGGER', 'TRIGGER_NAME', 'SCHEMA_NAME' ) txt
FROM DUAL;
SPOOL OFF
}}}
http://www.optimaldba.com/papers/DBMS_METADATA_handout.pdf
https://oracle-base.com/articles/11g/dbms_parallel_execute_11gR2
http://www.oralytics.com/2013/12/running-plsql-procedures-in-parallel.html
https://stackoverflow.com/questions/14624650/exec-gather-table-stats-from-procedure
https://dba.stackexchange.com/questions/118138/how-to-call-a-stored-procedure-in-oracle-scheduler-job
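The links above cover DBMS_PARALLEL_EXECUTE; a minimal sketch of the rowid-chunking pattern they describe (HR.EMPLOYEES and the UPDATE statement are placeholders; the executing user needs the CREATE JOB privilege):
{{{
DECLARE
  l_task VARCHAR2(30) := 'UPDATE_EMP_TASK';
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(l_task);
  -- split the table into ~10000-row chunks by rowid range
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => l_task,
    table_owner => 'HR',
    table_name  => 'EMPLOYEES',
    by_row      => TRUE,
    chunk_size  => 10000);
  -- run the statement once per chunk, 4 scheduler jobs at a time
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => l_task,
    sql_stmt       => 'UPDATE hr.employees SET salary = salary * 1.1
                       WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);
  DBMS_PARALLEL_EXECUTE.DROP_TASK(l_task);
END;
/
}}}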
<<showtoc>>
! test case
{{{
set serveroutput on
VARIABLE sql1 VARCHAR2(100)
VARIABLE sql2 VARCHAR2(100)
VARIABLE sql3 VARCHAR2(100)
VARIABLE sql4 VARCHAR2(100)
VARIABLE sql5 VARCHAR2(100)
VARIABLE sql6 VARCHAR2(100)
BEGIN
:sql1 := q'[SELECT * FROM dual WHERE dummy = 'X']';
:sql2 := q'[select * from dual where dummy='X']';
:sql3 := q'[SELECT * FROM dual WHERE dummy = 'x']';
:sql4 := q'[SELECT * FROM dual WHERE dummy = 'Y']';
:sql5 := q'[SELECT * FROM dual WHERE dummy = 'X' OR dummy = :b1]';
:sql6 := q'[SELECT * FROM dual WHERE dummy = 'Y' OR dummy = :b1]';
END;
/
REM
REM get the signature of the SQL statements with FORCE_MATCHING = FALSE (0)
REM
col signature format 999999999999999999999999
SELECT :sql1 sql_text, dbms_sqltune.sqltext_to_signature(:sql1,0) signature FROM dual
UNION ALL
SELECT :sql2 sql_text, dbms_sqltune.sqltext_to_signature(:sql2,0) signature FROM dual
UNION ALL
SELECT :sql3 sql_text, dbms_sqltune.sqltext_to_signature(:sql3,0) signature FROM dual
UNION ALL
SELECT :sql4 sql_text, dbms_sqltune.sqltext_to_signature(:sql4,0) signature FROM dual
UNION ALL
SELECT :sql5 sql_text, dbms_sqltune.sqltext_to_signature(:sql5,0) signature FROM dual
UNION ALL
SELECT :sql6 sql_text, dbms_sqltune.sqltext_to_signature(:sql6,0) signature FROM dual;
REM
REM get the signature of the SQL statements with FORCE_MATCHING = TRUE (1)
REM
SELECT :sql1 sql_text, dbms_sqltune.sqltext_to_signature(:sql1,1) signature FROM dual
UNION ALL
SELECT :sql2 sql_text, dbms_sqltune.sqltext_to_signature(:sql2,1) signature FROM dual
UNION ALL
SELECT :sql3 sql_text, dbms_sqltune.sqltext_to_signature(:sql3,1) signature FROM dual
UNION ALL
SELECT :sql4 sql_text, dbms_sqltune.sqltext_to_signature(:sql4,1) signature FROM dual
UNION ALL
SELECT :sql5 sql_text, dbms_sqltune.sqltext_to_signature(:sql5,1) signature FROM dual
UNION ALL
SELECT :sql6 sql_text, dbms_sqltune.sqltext_to_signature(:sql6,1) signature FROM dual;
REM
REM what about case insensitive searches?
REM
ALTER SESSION SET nls_sort=binary_ci;
ALTER SESSION SET nls_comp=ansi;
SELECT :sql1 sql_text, dbms_sqltune.sqltext_to_signature(:sql1,0) signature FROM dual
UNION ALL
SELECT :sql2 sql_text, dbms_sqltune.sqltext_to_signature(:sql2,0) signature FROM dual
UNION ALL
SELECT :sql3 sql_text, dbms_sqltune.sqltext_to_signature(:sql3,0) signature FROM dual
UNION ALL
SELECT :sql4 sql_text, dbms_sqltune.sqltext_to_signature(:sql4,0) signature FROM dual
UNION ALL
SELECT :sql5 sql_text, dbms_sqltune.sqltext_to_signature(:sql5,0) signature FROM dual
UNION ALL
SELECT :sql6 sql_text, dbms_sqltune.sqltext_to_signature(:sql6,0) signature FROM dual;
}}}
! output
{{{
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SIGNATURE
-------------------------
SELECT * FROM dual WHERE dummy = 'X'
7181225531830258335
select * from dual where dummy='X'
7181225531830258335
SELECT * FROM dual WHERE dummy = 'x'
18443846411346672783
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SIGNATURE
-------------------------
SELECT * FROM dual WHERE dummy = 'Y'
909903071561515954
SELECT * FROM dual WHERE dummy = 'X' OR dummy = :b1
14508885911807130242
SELECT * FROM dual WHERE dummy = 'Y' OR dummy = :b1
816238779370039768
6 rows selected.
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SIGNATURE
-------------------------
SELECT * FROM dual WHERE dummy = 'X'
10668153635715970930
select * from dual where dummy='X'
10668153635715970930
SELECT * FROM dual WHERE dummy = 'x'
10668153635715970930
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SIGNATURE
-------------------------
SELECT * FROM dual WHERE dummy = 'Y'
10668153635715970930
SELECT * FROM dual WHERE dummy = 'X' OR dummy = :b1
14508885911807130242
SELECT * FROM dual WHERE dummy = 'Y' OR dummy = :b1
816238779370039768
6 rows selected.
Session altered.
Session altered.
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SIGNATURE
-------------------------
SELECT * FROM dual WHERE dummy = 'X'
7181225531830258335
select * from dual where dummy='X'
7181225531830258335
SELECT * FROM dual WHERE dummy = 'x'
18443846411346672783
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SIGNATURE
-------------------------
SELECT * FROM dual WHERE dummy = 'Y'
909903071561515954
SELECT * FROM dual WHERE dummy = 'X' OR dummy = :b1
14508885911807130242
SELECT * FROM dual WHERE dummy = 'Y' OR dummy = :b1
816238779370039768
6 rows selected.
}}}
Statistical Analysis Template
http://www.evernote.com/shard/s48/sh/47200301-7a8f-4995-86a8-dfd4e49bb0a8/d8a3b4cbbf1f0178f08a0112e53c1599
''dbms_stat_funcs''
http://psoug.org/reference/dbms_stat_funcs.html
http://numoraclerecipes.blogspot.com/2007/10/analyzing-your-data-for-distributions.html
http://www.oracle.com/technology/industries/life_sciences/presentations/ls_eseminar_19jan06_stat_fns.pdf
http://tonguc.wordpress.com/2008/10/04/understand-your-data-with-oracles-in-database-statistical-functions/
http://oraexplorer.com/2008/06/statistic-calculation-dbmsstatfunc/
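A minimal sketch of calling the package (owner, table, and column names are made up; the record field names follow the psoug reference above):
{{{
set serveroutput on
DECLARE
  s DBMS_STAT_FUNCS.SummaryType;
BEGIN
  -- args: owner, table, column, sigma threshold for extremes, OUT record
  DBMS_STAT_FUNCS.SUMMARY('KARL', 'CPU_METRICS', 'CPU_PCT', 3, s);
  DBMS_OUTPUT.PUT_LINE('Count:  ' || s.count);
  DBMS_OUTPUT.PUT_LINE('Min:    ' || s.min);
  DBMS_OUTPUT.PUT_LINE('Max:    ' || s.max);
  DBMS_OUTPUT.PUT_LINE('Mean:   ' || ROUND(s.mean, 2));
  DBMS_OUTPUT.PUT_LINE('Stddev: ' || ROUND(s.stddev, 2));
  DBMS_OUTPUT.PUT_LINE('Median: ' || s.median);
END;
/
}}}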
! CPU use case
''15 days workload profile of a database (Feb 19 - Mar 5, 2010)''
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/THs5c9W0K3I/AAAAAAAAA0A/n_c-UE6HB4o/s400/CPUpct.png]]
''dbms_stat_funcs First Output'' - the values are still decimal fractions, which makes them hard to read in text form
SUMMARY STATISTICS
---------------------------
Count: 1345
Min: 0
Max: .59
Range: .59
Mean: 0
Mode Count: 1
Mode: 0
Variance: 0
Stddev: 0
---------------------------
Quantile 5 -> 0
Quantile 25 -> 0
Median -> 0
Quantile 75 -> .01
Quantile 95 -> .04
---------------------------
Extreme Count: 1
Extremes: .59
Top 5: .59,.1,.1,.09,.09
Bottom 5: 0,0,0,0,.09
---------------------------
Sample1:
[img[picturename| http://lh5.ggpht.com/_F2x5WXOJ6Q8/THuxdQeAjqI/AAAAAAAAA1Q/JzMAQ0DGbuk/s400/samcpu1.png]]
[img[picturename| http://lh5.ggpht.com/_F2x5WXOJ6Q8/THuxdVGQPMI/AAAAAAAAA1U/CzWXUMMRqk8/s400/popcpu1.png]]
''After converting the percentages to whole numbers the results pretty much match the images below, especially the mean and SD, but there should also be a way to report the frequency of occurrence in textual form''
SUMMARY STATISTICS
---------------------------
Count: 1345
Min: .06
Max: 58.78
Range: 58.72
Mean: 1
Mode Count: 1
Mode: .11
Variance: 4
Stddev: 2
---------------------------
Quantile 5 -> .06
Quantile 25 -> .11
Median -> .11
Quantile 75 -> 1
Quantile 95 -> 3.82
---------------------------
Extreme Count: 1
Extremes: 58.78
Top 5: 58.78,10.13,9.52,9.3,9.25
Bottom 5: .06,.06,.06,.06,9.25
---------------------------
Sample2:
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/THuvvUcK1TI/AAAAAAAAA04/C0rl23NUXeI/s400/samcpu2.png]]
[img[picturename| http://lh4.ggpht.com/_F2x5WXOJ6Q8/THuvvo-qGCI/AAAAAAAAA08/wuCwX_44p0k/s400/popcpu2.png]]
! IO subsystem use case
[img[picturename| http://lh4.ggpht.com/_F2x5WXOJ6Q8/THtGx18p_WI/AAAAAAAAA0M/r6eBobmT1f8/s800/IOpct.png]]
SUMMARY STATISTICS - ''Total Disk IOPS''
---------------------------
Count: 1345
Min: .55
Max: 296.75
Range: 296.2
Mean: 8
Mode Count: 1
Mode: .97
Variance: 406
Stddev: 20
---------------------------
Quantile 5 -> .91
Quantile 25 -> .98
Median -> 1.15
Quantile 75 -> 7.5
Quantile 95 -> 36.504
---------------------------
Extreme Count: 6
Extremes: 247.89
Top 5: 296.75,287.11,269.21,247.89,169.7
Bottom 5: .55,.62,.63,.62,169.7
---------------------------
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/THuvv80QgnI/AAAAAAAAA1A/Ezmz8wQfltI/s400/samio1.png]]
[img[picturename| http://lh4.ggpht.com/_F2x5WXOJ6Q8/THuv17iVWcI/AAAAAAAAA1E/TyzUOxbLuqI/s400/popio1.png]]
! download
http://edn.embarcadero.com/
go to my account > My registered user downloads
go to my account > My registered products
! new features
http://docs.embarcadero.com/products/db_optimizer/
http://docs.embarcadero.com/products/db_optimizer/17.0/DB%20Optimizer%2017.0.0%20User%20Guide.pdf
http://docs.embarcadero.com/products/db_optimizer/17.0/DB%20Optimizer%2017.0.0%20Quick%20Start%20Guide.pdf
http://docs.embarcadero.com/products/db_optimizer/17.0/DBOptimizer_17.0.0_ReadMe.htm
http://docwiki.embarcadero.com/DBOptimizer/en/Grants_Example
{{{
CONFIGURING PROFILING
Configuring Oracle
Oracle users need access to V$ views. In order to configure Oracle to provide users with these privileges:
If you are setting up Oracle 10 or later, ensure you are logged in as sys or system with the sysdba role, or the SELECT_CATALOG_ROLE has been granted to user_name.
If you are setting up an earlier version of Oracle, ensure you are logged in as sys or system with the sysdba role.
CONFIGURING TUNING
Set Roles and Permissions on Data Sources
In order to take advantage of all tuning features, each user must have a specific set of permissions. The code below creates a role with all required permissions. To create the required role, execute the SQL against the target data source, modified according to the specific needs of your site:
/* Create the role */
CREATE ROLE SQLTUNING NOT IDENTIFIED
/
GRANT SQLTUNING TO "CONNECT"
/
GRANT SQLTUNING TO SELECT_CATALOG_ROLE
/
GRANT ANALYZE ANY TO SQLTUNING
/
GRANT CREATE ANY OUTLINE TO SQLTUNING
/
GRANT CREATE ANY PROCEDURE TO SQLTUNING
/
GRANT CREATE ANY TABLE TO SQLTUNING
/
GRANT CREATE ANY TRIGGER TO SQLTUNING
/
GRANT CREATE ANY VIEW TO SQLTUNING
/
GRANT CREATE PROCEDURE TO SQLTUNING
/
GRANT CREATE SESSION TO SQLTUNING
/
GRANT CREATE TRIGGER TO SQLTUNING
/
GRANT CREATE VIEW TO SQLTUNING
/
GRANT DROP ANY OUTLINE TO SQLTUNING
/
GRANT DROP ANY PROCEDURE TO SQLTUNING
/
GRANT DROP ANY TRIGGER TO SQLTUNING
/
GRANT DROP ANY VIEW TO SQLTUNING
/
GRANT SELECT ON SYS.V_$SESSION TO SQLTUNING
/
GRANT SELECT ON SYS.V_$SESSTAT TO SQLTUNING
/
GRANT SELECT ON SYS.V_$SQL TO SQLTUNING
/
GRANT SELECT ON SYS.V_$STATNAME TO SQLTUNING
/
Once complete, you can assign the role to users who will be running tuning jobs:
/* Create a sample user*/
CREATE USER TUNINGUSER IDENTIFIED BY VALUES '05FFD26E95CF4A4B'
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
QUOTA UNLIMITED ON USERS
PROFILE DEFAULT
ACCOUNT UNLOCK
/
GRANT SQLTUNING TO TUNINGUSER
/
ALTER USER TUNINGUSER DEFAULT ROLE SQLTUNING
/
}}}
emctl setpasswd dbconsole
{{{
$ cat dbtimemonitor/bin/databases.xml
<?xml version = '1.0' encoding = 'UTF-8'?>
<WaitMonitor Title="Monitored Databases" xmlns="http://www.dominicgiles.com/waitmonitor">
<!-- You can add as many Monitored Databases as you like. -->
<!-- They will connect to the server over the thin jdbc connection stack.-->
<!-- Passwords will be encrypted from clear text after the file is read.-->
<!-- To change/update a password simply change all the text between the <Password> tags</Password>-->
<!-- -->
<MonitoredDatabase>
<ConnectString>desktopserver.local:1521:dw</ConnectString>
<Comment>desktopserver</Comment>
<Username>system</Username>
<Password>enc(VkH5cfs24bA=)</Password>
</MonitoredDatabase>
</WaitMonitor>
}}}
Using DCLI on Oracle Big Data Appliance (Doc ID 1476008.1)
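A minimal sketch of the dcli pattern the note covers (the group file path is an assumption; flags follow the Exadata-style dcli):
{{{
# run a command as root on every node listed in the group file
dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root 'hostname; uptime'
}}}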
http://houseofbrick.com/configuring-dd-boost/
<<<
The root cause was misconfiguration on the Data Domain. They did not have the “Interface Groups” activated, so all traffic was going to the management interface at 1Gb instead of spread across the 4 10Gb interfaces meant for data traffic.
Their network was always a HUGE challenge. They actually ran a cable across the data center from the switch connected to the old machine across the floor and to one of the nics for the Exadata.
<<<
{{{
I’m running a restore test to ensure backups are ok and am getting very slow reads from DDBoost. The time is lost on the reads from the SBT device per v$backup_sync_io. I’ve asked the client’s storage team to check if there’s anything to do on the Data Domain side to speed things up. I’ve looked in the DDBoost docs and MOS for info on speeding up reads, but not finding much. Anyone have any info on speeding up reads from DDBoost?
select buffer_size, buffer_count,
round(elapsed_time/100) as et_seconds,
round(bytes/1024/1024) mbytes,
round(effective_bytes_per_second/1024/1024) as eff_mb_sec,
io_count, io_time_total/100 as io_tot_secs,
round((io_time_total/100) / io_count, 3) as secs_per_io
from v$backup_sync_io
where rman_status_recid = 1771
and status = 'FINISHED';
BUFFER_SIZE  BUFFER_COUNT  ET_SECONDS  MBYTES  EFF_MB_SEC  IO_COUNT  IO_TOT_SECS  SECS_PER_IO
-----------  ------------  ----------  ------  ----------  --------  -----------  -----------
    1048576            64       13698  204408          15    204352     13667.04        0.067
    1048576            64        9945  170411          17    170368      9919.14        0.058
Rman buffer params set:
select name, value from v$parameter where name like '_backup%'
order by 1
NAME                 VALUE
-------------------- -------
_backup_disk_bufcnt  20
_backup_disk_bufsz   2097152
_backup_file_bufcnt  20
_backup_file_bufsz   2097152
_backup_seq_bufcnt   64
_backup_seq_bufsz    2097152
The DDBoost parameters for the SBT channel include “BLKSIZE=1048576”, which is given by the DDBoost setup guide. I cannot find anywhere in the DDBoost documentation or online about changing this to another value. Out of curiosity I doubled it for a restore test and the driver flat out rejected the new value.
}}}
.
{{{
SQL> show parameter resource
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
resource_limit boolean FALSE
resource_manager_cpu_allocation integer 4
resource_manager_plan string SCHEDULER[0x3109]:DEFAULT_MAIN
TENANCE_PLAN
WINDOW_NAME RESOURCE_PLAN
----------------- -------------------------
LAST_START_DATE DURATION ENABL
-------------------------------------------------- --------------- -----
MONDAY_WINDOW DEFAULT_MAINTENANCE_PLAN
20-FEB-12 10.00.01.653437 PM CST6CDT +000 04:00:00 TRUE
TUESDAY_WINDOW DEFAULT_MAINTENANCE_PLAN
21-FEB-12 10.00.00.020035 PM CST6CDT +000 04:00:00 TRUE
WEDNESDAY_WINDOW DEFAULT_MAINTENANCE_PLAN
22-FEB-12 10.00.00.011144 PM CST6CDT +000 04:00:00 TRUE
WINDOW_NAME RESOURCE_PLAN
----------------- -------------------------
LAST_START_DATE DURATION ENABL
-------------------------------------------------- --------------- -----
THURSDAY_WINDOW DEFAULT_MAINTENANCE_PLAN
+000 04:00:00 TRUE
FRIDAY_WINDOW DEFAULT_MAINTENANCE_PLAN
+000 04:00:00 TRUE
SATURDAY_WINDOW DEFAULT_MAINTENANCE_PLAN
+000 20:00:00 TRUE
WINDOW_NAME RESOURCE_PLAN
----------------- -------------------------
LAST_START_DATE DURATION ENABL
-------------------------------------------------- --------------- -----
SUNDAY_WINDOW DEFAULT_MAINTENANCE_PLAN
+000 20:00:00 TRUE
WEEKNIGHT_WINDOW
+000 08:00:00 FALSE
WEEKEND_WINDOW
+002 00:00:00 FALSE
9 rows selected.
}}}
{{{
deinstall DV (Database Vault) and OLS (Oracle Label Security)
ORA-29504: invalid or missing schema name reported during DV Installation [ID 1509963.1]
-- you have to shut down all the databases and disable DV on the home, then follow this note
How To Uninstall Or Reinstall Database Vault in 11g [ID 803948.1]
How to Install / Deinstall Oracle Label Security [ID 171155.1]
sqlplus "/ as sysdba"
SQL> @?/rdbms/admin/catnools.sql
}}}
{{{
https://forums.oracle.com/thread/2169244
http://dba.stackexchange.com/questions/33923/privileges-needed-for-oracle-text
}}}
http://www.antognini.ch/2009/07/impact-of-direct-reads-on-delayed-block-cleanouts/
_rollback_segment_count https://blogs.oracle.com/UPGRADE/entry/parameter_rollback_segment_count_can
http://afatkulin.blogspot.com/2010/06/row-cache-objects-latch-contention.html
http://hemantoracledba.blogspot.com/2008/10/delayed-block-cleanout.html
http://chaitanya-sr-dba.blogspot.com/2012/10/delayed-block-cleanout-oracle-basics.html
http://www.oaktable.net/content/delayed-block-cleanout-ora-01555
http://www.jlcomp.demon.co.uk/cleanout.html
ORA-01555 "Snapshot too old" - Detailed Explanation [ID 40689.1]
Troubleshooting Database Startup/Shutdown Problems [ID 851057.1]
http://www.antognini.ch/2009/07/impact-of-direct-reads-on-delayed-block-cleanouts/
Reduced Buffer Cache Size can Increase Physical IO on Update Due to Block Cleanout Requirements (Doc ID 1335796.1)
http://jonathanlewis.wordpress.com/2009/06/16/clean-it-up/
https://portal.hotsos.com/company/newsletters/volume-iii-issue-10/tips/delayed-block-cleanout-and-read-only-tablespaces-by-ric-van-dyke
http://oracle-randolf.blogspot.com/2011/04/delayed-block-cleanout-ora-01555.html
http://askubuntu.com/questions/537956/sed-to-delete-blank-spaces
{{{
sed 's/ //g'
}}}
http://carlos-sierra.net/2012/08/13/should-i-delete-my-column-histograms-to-improve-plan-stability/
https://blogs.oracle.com/optimizer/entry/how_do_i_drop_an_existing_histogram_on_a_column_and_stop_the_auto_stats_gathering_job_from_creating
http://stackoverflow.com/questions/5410757/delete-lines-in-a-text-file-that-containing-a-specific-string
{{{
cat parallel.csv | parallel --pipe sed s^IO/sec^""^g | parallel --pipe sed '/Date/d'
}}}
https://www.dellemc.com/en-us/converged-infrastructure/vxrail/index.htm
https://www.slideshare.net/Denodo/how-to-achieve-fast-data-performance-in-big-data-logical-data-warehouse-and-operational-scenarios
https://www.google.com/search?q=db2+big+sql+denodo+performance&sxsrf=ACYBGNQrkSSQ2pEclsSOpXzEEkc6BQ_HDQ:1576023051838&source=lnms&tbm=isch&sa=X&ved=2ahUKEwi77aWxp6zmAhWwg-AKHcBrBAA4ChD8BSgDegQIDBAF&biw=1675&bih=937#imgrc=nCz_LQsSnjj3KM:
..
application data denormalization - facebook app example
https://hackernoon.com/data-denormalization-is-broken-7b697352f405#.m71xsooem
https://blog.jooq.org/2016/10/05/why-you-should-design-your-database-to-optimise-for-statistics/?utm_source=dbweekly&utm_medium=email
{{{
-- approach 1: second highest salary via dense_rank()
select id, salary from
(
select a.id, a.salary,
dense_rank() over(order by a.salary desc) drank
from test a)
where drank = 2;
-- approach 2: exclude the max, then take the max of what's left
SELECT MAX(salary) AS SecondHighestSalary
FROM test
WHERE salary NOT IN (
SELECT MAX(salary)
FROM test
);
}}}
https://javarevisited.blogspot.com/2015/11/2nd-highest-salary-in-oracle-using-rownumber-rank-example.html
https://javarevisited.blogspot.com/search/label/SQL%20Interview%20Questions?max-results=3
http://www.linuxjournal.com/content/tech-tip-dereference-variable-names-inside-bash-functions
{{{
# i've got a list of directories, like this:
dir_app1="/opt/config/app1"
dir_app2="/opt/config/app2"
dir_app3="/opt/config/test/app_3"
# i then create the directory structure
# the trick here is ${!dir_*} that expands to the name
# of all variables beginning with dir_
for i in ${!dir_*}; do
echo " making directory ${!i}"
mkdir -p ${!i}
done
# then i change access rights
chmod 755 $dir_app1
chmod 700 $dir_app2
...
}}}
! cloud architecture considerations
{{{
> filesystem interaction
S3
rackspace cloud network
(Cloud Delivery Network)
> library dependencies
bundler, npm, composer
> configuration management
chef, puppet, ansible
}}}
explanation here https://community.oracle.com/thread/4080860
the fix is to download new software, install it into a new home, and create a new database, then kill or delete the old database
https://github.com/karlarao/developer-roadmap
https://www.techinasia.com/talk/learn-to-learn-like-a-developer
DTrace by Example http://developers.sun.com/solaris/articles/dtrace_tutorial.html
<<showtoc>>
! Development env as a service in the cloud
* https://c9.io/pricing
! Build tools as a service
* atlassian
! provisioning and deployment tools as a service on top of chef
* AWS OpsWorks
! Platform as a service
* Heroku
* AWS Elastic Beanstalk – Deploy Web Applications
! Common Service Types
* IaaS - cloud provider
* DBaaS - managed database, you don't need to maintain the underlying OS
* PaaS - OS is managed for you, just deploy the app/software package
also see [[Cloud Service Model]]
https://en.wikipedia.org/wiki/As_a_service
https://en.wikipedia.org/wiki/Product-service_system#Servicizing
https://en.wikipedia.org/wiki/Cloud-based_integration
http://johnmathon.wordpress.com/2014/02/11/a-simple-guide-to-cloud-computing-iaas-paas-saas-baas-dbaas-ipaas-idaas-apimaas/
{{{
https://phoenixnap.com/blog/devops-metrics-kpis
DevOps Metrics and Key Performance Indicators
1. Deployment Frequency
2. Change Volume
3. Deployment Time
4. Failed Deployment Rate
5. Change Failure Rate
6. Time to Detection
7. Mean Time to Recovery
8. Lead Time
9. Defect Escape Rate
10. Defect Volume
11. Availability
12. Service Level Agreement Compliance
13. Unplanned Work
14. Customer Ticket Volume
15. Cycle Time
https://stackify.com/15-metrics-for-devops-success/
Deployment frequency
Change volume
Deployment time
Lead time
Customer tickets
Automated test pass %
Defect escape rate
Availability
Service level agreements
Failed deployments
Error rates
Application usage and traffic
Application performance
Mean time to detection (MTTD)
Mean time to recovery (MTTR)
00 cpu investigation validation
00000_sharepoint
000_documentation
000_test_workbooks
00_ME_HOURLY
00_ash
00_awr_cpuwl_wh
00_troubleshooting
00_ts_mapping
01_Clarity ETL Performance
02_File Delivery Performance Monthly Trend
03_BOXI Scheduled Report Runtime Profiles
04_BOXI Scheduled Reports Long-Running
05_Canary Query Profiles
05b_Canary Query Profiles Detail by Query Name-not implemented
06_Database CPU Hours Used by Service Name
07_Database Physical IOs Used by Service Name
08_System Utilization Database Host CPU Percent Busy
09_System Utilization Storage Cell Node CPU Percent Busy
10_System Utilization Storage
11_KPHC Clarity ETL Performance Monthly Trend
12_KPCC File Delivery Performance Monthly Trend
13_BOXI Scheduled Reports Runtime Monthly Trend
14_Canary Query Profiles Runtime Monthly Trend
15_Database CPU Hours Used by Service Name Monthly Trend
16_Database Phy IOs Used by Service Name Monthly Trend
17_Database Node CPU Utilization Monthly Trend
18_Storage Cell Node CPU Utilization Monthly Trend
19_System Utilization Storage - Region Monthly Trend
20_System Utilization Storage - TS Type Monthly Trend
21_Consolidated_Story
22_Final Scripts
23_ASH troubleshooting awr warehouse
Capacity Planning SQLs 2019-03-06_Karl.xlsx
Tableau Dashboard lead developer
Daily and monthly reporting of application KPIs including but not limited to: ETL elapsed times, nightly file delivery, canary queries performance, system failure/downtime, system metrics for capacity planning
All reports are merged into one dashboard with blended data coming from SQL server, oracle, flat files, and excel files.
Created all the SQLs, R, and python scripts to pull the data out of the flat and excel files and PL/SQL to SCD on Oracle tables.
}}}
{{{
# list archive-log sequence gaps for every running instance
db=`ps -eo comm | grep pmon | grep -v grep | grep -v perl | cut -f3 -d_`
for i in $db ; do
echo "==================="
echo "ARCHIVE GAP for $i"
# sort by sequence#, extract the sequence field, then let awk print any break in the numbering
ls -1 /oracle/fast_recovery_area/$i/*dbf | sort -nk1 | cut -d '_' -f 4 | awk '$1!=p+1{print p+1"-"$1-1}{p=$1}' | grep -v ^1-*
done
}}}
<<showtoc>>
! open read only
!! alter database open read only
{{{
---------------------------------------------------------------
STARTING UP THE PHYSICAL STANDBY FOR READ-ONLY ACCESS
---------------------------------------------------------------
1. Start the Oracle instance for the standby database without mounting it
SQL> STARTUP NOMOUNT;
2. Mount the standby database
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
3. Open the database for read-only access
SQL> ALTER DATABASE OPEN READ ONLY;
NOTE:
Since the temporary tablespace (locally managed) existed on the primary database before the standby was created, the only thing to do is to associate a temporary file with a temporary tablespace on a read-only physical standby database.
The temporary tablespace was created with the following options:
create temporary tablespace TEMP01 tempfile
'@ORACLE_DATA@/@ORACLE_SID@/@ORACLE_SID@_temp01_01.dbf' size 512064k reuse
AUTOEXTEND ON NEXT 100M MAXSIZE 2000M
extent management local uniform size 1M;
Now, you must use the ADD TEMPFILE clause to actually create the disk file on
the standby database. To do this, perform the following steps:
- Cancel managed recovery and open the physical standby database for
read-only access using the following SQL statements:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
Opening the physical standby database for read-only access allows you to add a
temporary file. Because adding a temporary file does not generate redo data, it
is allowed for a database that is open for read-only access.
- Create a temporary file for the temporary tablespace. The size and names for
the files can differ from the primary database.
SQL> ALTER TABLESPACE TEMP01 ADD TEMPFILE '/ORA/dbs03/oradata/rls1/rls1_temp01_01.dbf' SIZE 512064k reuse;
-- to resume recovery
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
}}}
!! alter database open read only + network link export
expdp from physical standby database https://asktom.oracle.com/pls/apex/f?p=100:11:::NO:RP:P11_QUESTION_ID:9530201800346481934
https://learnwithme11g.wordpress.com/2012/02/16/using-expdp-to-export-from-physical-standby-database/
<<<
network link, e.g. this from MOS note 1356592.1
<<<
!! snapshot standby
expdp from physical standby database https://asktom.oracle.com/pls/apex/f?p=100:11:::NO:RP:P11_QUESTION_ID:9530201800346481934
{{{
Snapshot standby
snapshot
========
shutdown immediate;
startup mount;
alter database convert to snapshot standby;
shutdown immediate;
startup;
back to physical
================
shutdown immediate
startup mount exclusive
alter database convert to physical standby;
shutdown immediate
startup mount
alter database recover managed standby database disconnect from session;
SELECT flashback_on FROM v$database;
}}}
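The convert works through an implicit guaranteed restore point, so it is worth sanity-checking flashback and the FRA around the conversion; a hedged sketch:
{{{
sqlplus -s "/ as sysdba" <<'EOF'
-- snapshot standby is implemented with a guaranteed restore point,
-- so the FRA must have room for the flashback logs it accumulates
SELECT name, guarantee_flashback_database FROM v$restore_point;
SELECT database_role, open_mode FROM v$database;
EOF
}}}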
!! flashback scn
<<<
Incomplete Recovery of the Physical Standby Database http://www.ibmdba.com/?p=7
<<<
!! alter database open read only + active data guard
http://www.juliandyke.com/Research/DataGuard/ActiveDataGuard.php
https://dbatricksworld.com/how-to-open-physical-standby-database-in-read-only-mode-data-guard-part-iii/
http://gavinsoorma.com/2010/09/11g-active-data-guard-enabling-real-time-query/
{{{
SELECT database_role, open_mode FROM v$database;
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
SELECT database_role, open_mode FROM v$database;
}}}
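While the standby is open read only with redo apply running, v$dataguard_stats shows whether it is keeping up; a quick sketch:
{{{
sqlplus -s "/ as sysdba" <<'EOF'
-- check the apply and transport lag on the active data guard standby
SELECT name, value, time_computed
FROM v$dataguard_stats
WHERE name IN ('apply lag','transport lag');
EOF
}}}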
! open read write
!! dg open read write when primary is lost
<<<
https://blogs.oracle.com/AlejandroVargas/resource/How-to-open-the-standby-when-the-primary-is-lost.pdf
https://blogs.oracle.com/AlejandroVargas/entry/how_to_manually_open_the_stand
http://neeraj-dba.blogspot.com/2011/10/open-standby-in-read-write-mode-when.html
https://oracleracdba1.wordpress.com/2012/10/10/how-to-open-physical-standby-database-in-read-write-mode/comment-page-1/
<<<
{{{
How To Open The Standby Database When The Primary Is Lost
sqlplus / as sysdba
STARTUP MOUNT
SELECT OPEN_MODE,PROTECTION_MODE,DATABASE_ROLE FROM V$DATABASE;
-- apply whatever archived logs are still reachable, then cancel
RECOVER STANDBY DATABASE;
cancel
-- apply any remaining standby redo and end recovery
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
-- this is a failover: the standby becomes the new primary
ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;
SELECT OPEN_MODE,PROTECTION_MODE,DATABASE_ROLE FROM V$DATABASE;
ALTER DATABASE OPEN;
SELECT OPEN_MODE,PROTECTION_MODE,DATABASE_ROLE FROM V$DATABASE;
}}}
http://uhesse.com/2012/04/23/diff_table_stats_in_history-example/
http://kerryosborne.oracle-guy.com/2009/05/ill-gladly-pay-you-tuesday-for-a-hamburger-today/
{{{
select report, maxdiffpct from
table(dbms_stats.diff_table_stats_in_history('&owner','&table_name',
systimestamp-&days_ago))
/
}}}
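If the diff shows the new stats caused a regression, the same history can roll them back; a companion sketch reusing the same substitution variables:
{{{
sqlplus -s "/ as sysdba" <<'EOF'
-- restore the stats the table had that many days ago
exec dbms_stats.restore_table_stats('&owner','&table_name', systimestamp - &days_ago);
EOF
}}}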
<<showtoc>>
https://github.com/joyent/node/wiki/Node-Hosting
! -- digitalocean
https://www.digitalocean.com/
http://en.wikipedia.org/wiki/DigitalOcean
http://www.pluralsight.com/courses/building-nosql-apps-redis
http://www.pluralsight.com/courses/meteorjs-fundamentals-single-page-apps
https://blog.digitalocean.com/introducing-go-qemu-and-go-libvirt/ , https://www.codeclouds.com/blog/digitalocean-vs-vultr-find-best-vps-provider/ <- digital ocean runs on qemu
! billing details
https://www.digitalocean.com/community/questions/where-are-invoices-to-export-in-digital-ocean
! ''getting started - after creating a droplet''
https://www.digitalocean.com/community/tutorials/how-to-use-the-mean-one-click-install-image
node.js tag https://www.digitalocean.com/community/tags/node-js
mongodb tag https://www.digitalocean.com/community/tags/mongodb
! -- install R studio
https://www.digitalocean.com/community/tutorials/how-to-set-up-rstudio-on-an-ubuntu-cloud-server
! -- node.js
https://github.com/joyent/node/wiki/installing-node.js-via-package-manager
! -- fedora install
http://blog.michel-slm.name/how-to-install-fedora-21-server-on-digitalocean/
http://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux
http://en.wikipedia.org/wiki/Fedora_(operating_system)
! install oracle fedora
https://www.digitalocean.com/community/questions/how-can-i-install-oracle-11g
http://linux.oracle.com/switch/centos/
https://gist.github.com/martndemus/7ad8209f9be9185bcf3a
https://bl.ocks.org/martndemus/7ad8209f9be9185bcf3a
https://oracle-base.com/articles/12c/oracle-db-12cr1-installation-on-fedora-23
follow this:
{{{
follow https://oracle-base.com/articles/12c/oracle-db-12cr1-installation-on-fedora-23
disable firewall
# create and secure a 1GB swap file
dd if=/dev/zero of=/opt/swapfile bs=1024k count=1024
chmod 0600 /opt/swapfile
mkswap /opt/swapfile
# add the fstab entry first so swapon -a picks it up:
#   /opt/swapfile swap swap defaults 0 0
swapon -a
install vncserver http://linoxide.com/linux-how-to/configure-tigervnc-server-fedora-22/
asmm 700sga 100pga
}}}
! install oracle ubuntu
http://www.techienote.com/install-oracle-12c-on-ubuntu/
! ubuntu server initial setup
https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-14-04
! setup domain
namecheap https://www.shivarweb.com/2336/namecheap-or-godaddy/
! setup DNS
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean
https://www.digitalocean.com/community/tutorials/how-to-point-to-digitalocean-nameservers-from-common-domain-registrars
! protect SSH with fail2ban
https://www.digitalocean.com/community/tutorials/how-to-protect-ssh-with-fail2ban-on-ubuntu-14-04
! setup private networking
https://www.digitalocean.com/community/tutorials/how-to-set-up-and-use-digitalocean-private-networking
https://www.digitalocean.com/community/tutorials/how-to-secure-traffic-between-vps-using-openvpn
https://www.digitalocean.com/community/tutorials/how-to-isolate-servers-within-a-private-network-using-iptables
! block storage
https://www.digitalocean.com/community/tutorials/how-to-use-block-storage-on-digitalocean
<<<
limit of 5 volumes per droplet
need to use a cluster file system to share between droplets
<<<
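A sketch of the attach-format-mount flow from the tutorial above (the by-id path follows DigitalOcean's volume naming convention; the volume name here is illustrative):
{{{
# format once, then mount; DO exposes attached volumes under /dev/disk/by-id
mkfs.ext4 /dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01
mkdir -p /mnt/vol01
mount -o discard,defaults /dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01 /mnt/vol01
}}}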
! load balancer
https://www.digitalocean.com/community/tutorials/an-introduction-to-digitalocean-load-balancers
! run vm inside digital ocean
https://www.digitalocean.com/community/questions/support-for-vt-x-for-virtualbox
http://stackoverflow.com/questions/30648095/virtualbox-and-google-compute-engine
https://www.whatuptime.com/installing-microsoft-windows-onto-digitalocean-droplet/
community vagrant files http://www.vagrantbox.es/
https://www.digitalocean.com/community/tutorials/how-to-install-virtualbox-on-centos-6-3-x64
https://twitter.com/digitalocean/status/314052472899526660?lang=en
https://www.digitalocean.com/community/tutorials/how-to-use-vagrant-on-your-own-vps-running-ubuntu
https://www.digitalocean.com/community/questions/are-we-allowed-to-install-a-desktop-version-of-a-os-e-g-windows-ubuntu
https://www.google.com/search?q=can+you+run+virtualbox+inside+digitalocean&oq=can+you+run+virtualbox+inside+digitalocean&aqs=chrome..69i57.5962j0j7&sourceid=chrome&ie=UTF-8
! digital ocean CLI - DOCTL
https://blog.digitalocean.com/introducing-doctl/
https://www.digitalocean.com/community/tutorials/how-to-use-doctl-the-official-digitalocean-command-line-client
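Typical first calls, sketched from the tutorial above (droplet name and slug values are illustrative):
{{{
doctl auth init                      # paste the API token once
doctl compute droplet list
doctl compute droplet create web-01 \
    --region nyc3 --size s-1vcpu-1gb --image ubuntu-20-04-x64
}}}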
! droplet configure RDP / VNC
https://www.digitalocean.com/community/questions/how-to-connect-to-droplet-via-rdp-connection
https://community.oracle.com/docs/DOC-998357
installing dinodate https://www.youtube.com/watch?v=KSkNRmQhBsA&index=2&list=PL1T-f7vLvANlnSQBQJYOtOdkmov8pqhgQ
https://www.youtube.com/watch?v=KbvBr-p4XNY&list=PL1T-f7vLvANlnSQBQJYOtOdkmov8pqhgQ
https://learncodeshare.net/
{{{
SELECT s.sid,
s.serial#,
s.server,
lower(
CASE
WHEN s.server IN ('DEDICATED','SHARED') THEN
i.instance_name || '_' ||
nvl(pp.server_name, nvl(ss.name, 'ora')) || '_' ||
p.spid || '.trc'
ELSE NULL
END
) AS trace_file_name
FROM v$instance i,
v$session s,
v$process p,
v$px_process pp,
v$shared_server ss
WHERE s.paddr = p.addr
AND s.sid = pp.sid (+)
AND s.paddr = ss.paddr(+)
AND s.type = 'USER'
ORDER BY s.sid;
select u_dump.value || '/' || instance.value || '_ora_' || v$process.spid
|| nvl2(v$process.traceid, '_' || v$process.traceid, null ) || '.trc'"Trace File"
from V$PARAMETER u_dump
cross join V$PARAMETER instance
cross join V$PROCESS
join V$SESSION on v$process.addr = V$SESSION.paddr
where u_dump.name = 'user_dump_dest'
and instance.name = 'instance_name'
and V$SESSION.audsid=sys_context('userenv','sessionid');
SID SERVER TRACE_FILE_NAME
---------- --------- --------------------------------------------------
8 DEDICATED fsprd1_ora_8210.trc
9 DEDICATED fsprd1_ora_4347.trc
10 DEDICATED fsprd1_ora_4460.trc
12 DEDICATED fsprd1_ora_4460.trc
14 DEDICATED fsprd1_ora_6006.trc
15 DEDICATED fsprd1_ora_1916.trc
19 DEDICATED fsprd1_ora_22149.trc
20 DEDICATED fsprd1_ora_21063.trc
21 DEDICATED fsprd1_ora_4347.trc
22 DEDICATED fsprd1_ora_21567.trc
23 DEDICATED fsprd1_ora_4495.trc
24 DEDICATED fsprd1_ora_4495.trc
25 DEDICATED fsprd1_ora_14443.trc
27 DEDICATED fsprd1_ora_21546.trc
29 DEDICATED fsprd1_ora_22149.trc
34 DEDICATED fsprd1_ora_32047.trc
35 DEDICATED fsprd1_ora_23954.trc
36 DEDICATED fsprd1_ora_23954.trc
178 DEDICATED fsprd1_ora_26291.trc
179 DEDICATED fsprd1_ora_8212.trc
180 DEDICATED fsprd1_ora_26416.trc
185 DEDICATED fsprd1_ora_20083.trc
186 DEDICATED fsprd1_ora_21409.trc
189 DEDICATED fsprd1_ora_12941.trc
194 DEDICATED fsprd1_ora_21389.trc
196 DEDICATED fsprd1_ora_26291.trc
200 DEDICATED fsprd1_ora_7052.trc
201 DEDICATED fsprd1_ora_1531.trc
204 DEDICATED fsprd1_ora_18717.trc
352 DEDICATED fsprd1_ora_8214.trc
353 DEDICATED fsprd1_ora_7178.trc
355 DEDICATED fsprd1_ora_16925.trc
358 DEDICATED fsprd1_ora_24943.trc
363 DEDICATED fsprd1_ora_21433.trc
364 DEDICATED fsprd1_ora_30104.trc
366 DEDICATED fsprd1_ora_9836.trc
369 DEDICATED fsprd1_ora_30104.trc
373 DEDICATED fsprd1_ora_9836.trc
374 DEDICATED fsprd1_ora_21066.trc
377 DEDICATED fsprd1_ora_4477.trc
379 DEDICATED fsprd1_ora_4477.trc
522 DEDICATED fsprd1_ora_9942.trc
526 DEDICATED fsprd1_ora_11427.trc
527 DEDICATED fsprd1_ora_23673.trc
534 DEDICATED fsprd1_ora_23673.trc
535 DEDICATED fsprd1_ora_14458.trc
537 DEDICATED fsprd1_ora_14435.trc
538 DEDICATED fsprd1_ora_22301.trc
544 DEDICATED fsprd1_ora_21559.trc
546 DEDICATED fsprd1_ora_7223.trc
548 DEDICATED fsprd1_ora_5635.trc
550 DEDICATED fsprd1_ora_6846.trc
551 DEDICATED fsprd1_ora_27491.trc
697 DEDICATED fsprd1_ora_20934.trc
699 DEDICATED fsprd1_ora_11431.trc
702 DEDICATED fsprd1_ora_19317.trc
704 DEDICATED fsprd1_ora_26606.trc
706 DEDICATED fsprd1_ora_28822.trc
708 DEDICATED fsprd1_ora_19973.trc
709 DEDICATED fsprd1_ora_23388.trc
712 DEDICATED fsprd1_ora_19905.trc
715 DEDICATED fsprd1_ora_23388.trc
716 DEDICATED fsprd1_ora_19897.trc
717 DEDICATED fsprd1_ora_19897.trc
722 DEDICATED fsprd1_ora_25682.trc
723 DEDICATED fsprd1_ora_13620.trc
725 DEDICATED fsprd1_ora_19905.trc
727 DEDICATED fsprd1_ora_19973.trc
869 DEDICATED fsprd1_ora_11813.trc
870 DEDICATED fsprd1_ora_11426.trc
871 DEDICATED fsprd1_ora_16950.trc
872 DEDICATED fsprd1_ora_23111.trc
876 DEDICATED fsprd1_ora_18948.trc
878 DEDICATED fsprd1_ora_11576.trc
879 DEDICATED fsprd1_ora_26619.trc
888 DEDICATED fsprd1_ora_5642.trc
891 DEDICATED fsprd1_ora_19901.trc
893 DEDICATED fsprd1_ora_11894.trc
894 DEDICATED fsprd1_ora_21054.trc
1041 DEDICATED fsprd1_ora_8206.trc
1044 DEDICATED fsprd1_ora_21050.trc
1045 DEDICATED fsprd1_ora_27390.trc
1050 DEDICATED fsprd1_ora_562.trc
1054 DEDICATED fsprd1_ora_881.trc
1056 DEDICATED fsprd1_ora_562.trc
1057 DEDICATED fsprd1_ora_25049.trc
1059 DEDICATED fsprd1_ora_14418.trc
1060 DEDICATED fsprd1_ora_26549.trc
1061 DEDICATED fsprd1_ora_25049.trc
1062 DEDICATED fsprd1_ora_19782.trc
1066 DEDICATED fsprd1_ora_29248.trc
1071 DEDICATED fsprd1_ora_13186.trc
1214 DEDICATED fsprd1_ora_8208.trc
1215 DEDICATED fsprd1_ora_31579.trc
1221 DEDICATED fsprd1_ora_4364.trc
1222 DEDICATED fsprd1_ora_13480.trc
1226 DEDICATED fsprd1_ora_5923.trc
1231 DEDICATED fsprd1_ora_20326.trc
1233 DEDICATED fsprd1_ora_10144.trc
1234 DEDICATED fsprd1_ora_31579.trc
1235 DEDICATED fsprd1_ora_21563.trc
1236 DEDICATED fsprd1_ora_9126.trc
1239 DEDICATED fsprd1_ora_21059.trc
1242 DEDICATED fsprd1_ora_17882.trc
1243 DEDICATED fsprd1_ora_21174.trc
1244 DEDICATED fsprd1_ora_5923.trc
106 rows selected.
Trace File
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/fsprddal/fsprd1/trace/fsprd1_ora_8206.trc
/u01/app/oracle/diag/rdbms/fsprddal/fsprd1/trace/fsprd1_ora_8208.trc
/u01/app/oracle/diag/rdbms/fsprddal/fsprd1/trace/fsprd1_ora_8210.trc
/u01/app/oracle/diag/rdbms/fsprddal/fsprd1/trace/fsprd1_ora_8212.trc
/u01/app/oracle/diag/rdbms/fsprddal/fsprd1/trace/fsprd1_ora_8214.trc
/u01/app/oracle/diag/rdbms/fsprddal/fsprd1/trace/fsprd1_ora_12941.trc
6 rows selected.
SELECT TRACE_TYPE, PRIMARY_ID, QUALIFIER_ID1, waits, binds FROM DBA_ENABLED_TRACES;
SYS@fsprd1> select sid, SQL_TRACE from v$session where SQL_TRACE != 'DISABLED'
SID SQL_TRAC
---------- --------
29 ENABLED
1062 ENABLED
select v$session.sid, u_dump.value || '/' || instance.value || '_ora_' || v$process.spid
|| nvl2(v$process.traceid, '_' || v$process.traceid, null ) || '.trc'"Trace File"
from V$PARAMETER u_dump
cross join V$PARAMETER instance
cross join V$PROCESS
join V$SESSION on v$process.addr = V$SESSION.paddr
where u_dump.name = 'user_dump_dest'
and instance.name = 'instance_name'
-- and V$SESSION.audsid=sys_context('userenv','sessionid')
and v$session.sid in ('1530');
SID
----------
Trace File
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
29
/u01/app/oracle/diag/rdbms/fsprddal/fsprd1/trace/fsprd1_ora_22149.trc
1062
/u01/app/oracle/diag/rdbms/fsprddal/fsprd1/trace/fsprd1_ora_19782.trc
exec dbms_monitor.session_trace_disable(29);
exec dbms_monitor.session_trace_disable(1062);
SYS@fsprd1>
select sid, SQL_TRACE from v$session where SQL_TRACE != 'DISABLED';
no rows selected
}}}
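For the enable side of the dbms_monitor calls shown above, a hedged companion sketch (the sid value is illustrative):
{{{
sqlplus -s "/ as sysdba" <<'EOF'
-- waits on, binds off, mirroring the session_trace_disable calls above
exec dbms_monitor.session_trace_enable(session_id => 29, serial_num => NULL, waits => TRUE, binds => FALSE);
EOF
}}}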
<<<
Do you guys have any opinions on why the X*-8 machines are configured with numa=on (on boot) by default?
@@From what I’ve been reading, the general answer is it’s mainly for greater total throughput.@@
! And on the disabling part, this is the best answer so far:
https://software.intel.com/en-us/forums/topic/392519
Patrick Fay (Intel) Mon, 05/13/2013 - 13:41
Hello Ekleel,
Here is an example which I've seen before.
Say you have a 4 socket (4 numa nodes) system. The application starts up on node 0, allocates all its memory from node 0 and then spawns worker threads to each of the 4 sockets. All the worker threads are memory intensive (read/write lots of memory) but all the threads are using memory allocated at start up on node 0. So the threads on 3 of the 4 sockets are all doing remote memory accesses and only the threads on node 0 are doing local memory accesses. This is a case where disabling numa would speed the application up (assuming there is a significant penalty for remote memory access). The QPI links can't dump memory as fast as the cpu can read memory (that is, QPI bandwidth is less than local memory bandwidth). In the 'allocate on node 0 and read/write everywhere' case, all the nodes have to get the memory from node0. @@This will saturate the node 0 QPI link. When you disable numa (in this case), the overall QPI traffic is usually lower.@@
One can say this is a poorly threaded application but I've seen it many times.
Pat
<<<
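Pat's scenario is easy to reproduce with numactl; a sketch assuming some memory-bound benchmark binary (./stream here is illustrative):
{{{
# all local: run on node 0, allocate on node 0
numactl --cpunodebind=0 --membind=0 ./stream

# all remote: run on node 1 but force allocations from node 0 (traffic crosses QPI)
numactl --cpunodebind=1 --membind=0 ./stream

# check which nodes a running process's pages actually landed on
numastat -p <pid>
}}}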
! another good read
https://www.sqlskills.com/blogs/jonathan/understanding-non-uniform-memory-accessarchitectures-numa/
http://www.intel.com/content/www/us/en/performance/performance-quickpath-architecture-demo.html
https://www.sqlskills.com/blogs/jonathan/category/numa/
<<<
@@When node interleaving is enabled, the system becomes a Sufficiently Uniform Memory Architecture (SUMA) configuration where the memory is broken into 4KB addressable regions and mapped from each of the nodes in a round robin fashion so that the memory address space is distributed across the nodes. When the system is configured for NUMA, the memory from each node is mapped into a single sequential block of memory address space for all of the memory in each node.@@ For certain workloads, node interleaving can be beneficial to performance since the latency between the hardware nodes of a system using processors with integrated memory controllers is small. However, for applications that are NUMA aware and optimized, leaving NUMA enabled is generally the better-performing configuration.
<<<
! another one on interleaving
https://ycolin.wordpress.com/2015/01/18/numa-interleave-memory/
<<<
The MPOL_INTERLEAVE mode specifies that page allocations are interleaved across the set of nodes specified in nodemask. @@This optimizes for bandwidth instead of latency by spreading out pages and memory accesses to those pages across multiple nodes. To be effective the memory area should be fairly large, at least 1MB or bigger with a fairly uniform access pattern. Accesses to a single page of the area will still be limited to the memory bandwidth of a single node.@@
@@The variable component of the SGA will always need to be interleaved for performance.@@
<<<
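The interleave policy described above can be applied per process with numactl; a minimal sketch (./app is illustrative):
{{{
# spread the process's allocations round-robin across all nodes
numactl --interleave=all ./app

# inspect the node layout and the policy currently in effect
numactl --hardware
numactl --show
}}}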
! kevin's posts
http://kevinclosson.net/2007/02/03/oracle-on-opteron-with-linux-the-numa-angle-part-v-introducing-numactl8-and-suma/
http://kevinclosson.net/2007/06/05/oracle-on-opteron-with-linux-the-numa-angle-part-vii/
http://kevinclosson.net/kevin-closson-index/oracle-on-opteron-k8l-numa-etc/
! Rik van Riel
Automatic NUMA balancing internals http://events.linuxfoundation.org/sites/events/files/slides/summit2014_riel_chegu_w_0340_automatic_numa_balancing_0.pdf
why computers are getting slower http://www.redhat.com/promo/summit/2008/downloads/pdf/Wednesday_1015am_Rik_Van_Riel_Hot_Topics.pdf
<<<
nodes are equally used, maximizing memory bandwidth
he shows benchmarks on 4 socket vs 8 socket systems
<<<
! References
http://blogs.vmware.com/vsphere/2013/10/does-corespersocket-affect-performance.html
Red Hat Linux NUMA Support for HP ProLiant Servers http://h20564.www2.hp.com/hpsc/doc/public/display?docId=emr_na-c03261871
https://ycolin.wordpress.com/2015/01/18/numa-interleave-memory/
http://www.ruoug.org/events/20111028/NUMA.pdf
server architecture http://www.oracle.com/technetwork/articles/systems-hardware-architecture/sfx4170m2-x4270m2arch-163860.pdf
http://www.specbench.org/cpu2006/results/res2014q4/cpu2006-20141120-33211.html
https://software.intel.com/en-us/articles/getting-high-performance-on-numa-based-nehalem-ex-system-with-mkl-without-controlling-numa
http://kevinclosson.net/2009/08/14/intel-xeon-5500-nehalem-ep-numa-versus-interleaved-memory-aka-suma-there-is-no-difference-a-forced-confession/
file:///Users/karl/Downloads/NUMA%20for%20Dell%20PowerEdge%2012G%20Servers%20(1).pdf
NUMA effects on multicore multi socket systems http://static.ph.ed.ac.uk/dissertations/hpc-msc/2010-2011/IakovosPanourgias.pdf <- nice thesis
! 2021
<<<
numa effects on multicore, multi socket systems https://static.epcc.ed.ac.uk/dissertations/hpc-msc/2010-2011/IakovosPanourgias.pdf
Rik van Riel https://www.redhat.com/files/summit/2014/summit2014_riel_chegu_w_0340_automatic_numa_balancing.pdf
Rik van Riel https://surriel.com/system/files/summit08-why-computers-are-getting-slower.pdf
[[supercomputers]]
https://www.sgi.com/products/servers/uv/uv_300_30ex.html <- enkitec was able to benchmark one of these
<<<
firmware
http://www.cyberciti.biz/faq/linux-find-out-what-driver-my-ethernet-card-is-using/
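Per the link above, the quickest driver/firmware check for a NIC is ethtool; a sketch (interface name illustrative):
{{{
# driver, driver version, firmware version, and PCI address for the interface
ethtool -i eth0
}}}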
{{{
[root@localhost 0000:00:1f.2]# cd /sys/block/
[root@localhost block]# ls
dm-0 loop1 loop3 loop5 loop7 ram0 ram10 ram12 ram14 ram2 ram4 ram6 ram8 sda sdc sde sdg
loop0 loop2 loop4 loop6 md0 ram1 ram11 ram13 ram15 ram3 ram5 ram7 ram9 sdb sdd sdf sdh
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sda
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sda/device -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdb
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sdb/device -> ../../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdc
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sdc/device -> ../../devices/pci0000:00/0000:00:1f.2/host2/target2:0:0/2:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdd
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sdd/device -> ../../devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sde
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sde/device -> ../../devices/pci0000:00/0000:00:1f.2/host4/target4:0:0/4:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdf
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sdf/device -> ../../devices/pci0000:00/0000:00:1f.2/host5/target5:0:0/5:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdg
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sdg/device -> ../../devices/pci0000:00/0000:00:1c.7/0000:09:00.0/host8/target8:0:0/8:0:0:0 <-- marvell
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdh
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sdh/device -> ../../devices/pci0000:00/0000:00:1c.7/0000:09:00.0/host9/target9:0:0/9:0:0:0 <-- marvell
[root@localhost ~]# lspci | grep -i sata
00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
05:00.0 SATA controller: JMicron Technology Corp. JMB362 AHCI Controller (rev 10)
09:00.0 SATA controller: Marvell Technology Group Ltd. Device 9172 (rev 11)
[root@localhost ~]#
[root@localhost ~]# lspci -n
00:00.0 0600: 8086:0100 (rev 09)
00:01.0 0604: 8086:0101 (rev 09)
00:02.0 0300: 8086:0122 (rev 09)
00:16.0 0780: 8086:1c3a (rev 04)
00:19.0 0200: 8086:1503 (rev 05)
00:1a.0 0c03: 8086:1c2d (rev 05)
00:1b.0 0403: 8086:1c20 (rev 05)
00:1c.0 0604: 8086:1c10 (rev b5)
00:1c.1 0604: 8086:1c12 (rev b5)
00:1c.2 0604: 8086:1c14 (rev b5)
00:1c.3 0604: 8086:1c16 (rev b5)
00:1c.4 0604: 8086:1c18 (rev b5)
00:1c.6 0604: 8086:244e (rev b5)
00:1c.7 0604: 8086:1c1e (rev b5)
00:1d.0 0c03: 8086:1c26 (rev 05)
00:1f.0 0601: 8086:1c44 (rev 05)
00:1f.2 0106: 8086:1c02 (rev 05)
00:1f.3 0c05: 8086:1c22 (rev 05)
03:00.0 0c03: 1b21:1042
05:00.0 0106: 197b:2362 (rev 10)
06:00.0 0c03: 1b21:1042
07:00.0 0604: 1b21:1080 (rev 01)
08:02.0 0c00: 1106:3044 (rev c0)
09:00.0 0106: 1b4b:9172 (rev 11)
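# aside (not part of the captured session): lspci can resolve these numeric IDs
# directly instead of grepping /usr/share/hwdata/pci.ids by hand:
#   lspci -nn            -> device names plus [vendor:device] IDs
#   lspci -k -s 09:00.0  -> kernel driver in use for the Marvell controller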
[root@localhost ~]#
[root@localhost ~]# ls -la /sys/bus/pci/devices
total 0
drwxr-xr-x 2 root root 0 Sep 9 12:11 .
drwxr-xr-x 5 root root 0 Sep 9 12:11 ..
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:00.0 -> ../../../devices/pci0000:00/0000:00:00.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:01.0 -> ../../../devices/pci0000:00/0000:00:01.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:02.0 -> ../../../devices/pci0000:00/0000:00:02.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:16.0 -> ../../../devices/pci0000:00/0000:00:16.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:19.0 -> ../../../devices/pci0000:00/0000:00:19.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1a.0 -> ../../../devices/pci0000:00/0000:00:1a.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1b.0 -> ../../../devices/pci0000:00/0000:00:1b.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.0 -> ../../../devices/pci0000:00/0000:00:1c.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.1 -> ../../../devices/pci0000:00/0000:00:1c.1
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.2 -> ../../../devices/pci0000:00/0000:00:1c.2
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.3 -> ../../../devices/pci0000:00/0000:00:1c.3
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.4 -> ../../../devices/pci0000:00/0000:00:1c.4
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.6 -> ../../../devices/pci0000:00/0000:00:1c.6
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.7 -> ../../../devices/pci0000:00/0000:00:1c.7
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1d.0 -> ../../../devices/pci0000:00/0000:00:1d.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1f.0 -> ../../../devices/pci0000:00/0000:00:1f.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1f.2 -> ../../../devices/pci0000:00/0000:00:1f.2
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1f.3 -> ../../../devices/pci0000:00/0000:00:1f.3
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:03:00.0 -> ../../../devices/pci0000:00/0000:00:1c.1/0000:03:00.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:05:00.0 -> ../../../devices/pci0000:00/0000:00:1c.3/0000:05:00.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:06:00.0 -> ../../../devices/pci0000:00/0000:00:1c.4/0000:06:00.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:07:00.0 -> ../../../devices/pci0000:00/0000:00:1c.6/0000:07:00.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:08:02.0 -> ../../../devices/pci0000:00/0000:00:1c.6/0000:07:00.0/0000:08:02.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:09:00.0 -> ../../../devices/pci0000:00/0000:00:1c.7/0000:09:00.0
[root@localhost ~]# cd /sys/bus/pci/devices
[root@localhost devices]# ls -ltr
total 0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:09:00.0 -> ../../../devices/pci0000:00/0000:00:1c.7/0000:09:00.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:08:02.0 -> ../../../devices/pci0000:00/0000:00:1c.6/0000:07:00.0/0000:08:02.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:07:00.0 -> ../../../devices/pci0000:00/0000:00:1c.6/0000:07:00.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:06:00.0 -> ../../../devices/pci0000:00/0000:00:1c.4/0000:06:00.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:05:00.0 -> ../../../devices/pci0000:00/0000:00:1c.3/0000:05:00.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:03:00.0 -> ../../../devices/pci0000:00/0000:00:1c.1/0000:03:00.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1f.3 -> ../../../devices/pci0000:00/0000:00:1f.3
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1f.2 -> ../../../devices/pci0000:00/0000:00:1f.2
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1f.0 -> ../../../devices/pci0000:00/0000:00:1f.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1d.0 -> ../../../devices/pci0000:00/0000:00:1d.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.7 -> ../../../devices/pci0000:00/0000:00:1c.7
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.6 -> ../../../devices/pci0000:00/0000:00:1c.6
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.4 -> ../../../devices/pci0000:00/0000:00:1c.4
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.3 -> ../../../devices/pci0000:00/0000:00:1c.3
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.2 -> ../../../devices/pci0000:00/0000:00:1c.2
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.1 -> ../../../devices/pci0000:00/0000:00:1c.1
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1c.0 -> ../../../devices/pci0000:00/0000:00:1c.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1b.0 -> ../../../devices/pci0000:00/0000:00:1b.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:1a.0 -> ../../../devices/pci0000:00/0000:00:1a.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:19.0 -> ../../../devices/pci0000:00/0000:00:19.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:16.0 -> ../../../devices/pci0000:00/0000:00:16.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:02.0 -> ../../../devices/pci0000:00/0000:00:02.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:01.0 -> ../../../devices/pci0000:00/0000:00:01.0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 0000:00:00.0 -> ../../../devices/pci0000:00/0000:00:00.0
[root@localhost devices]# cd 0000:00:1f.2
[root@localhost 0000:00:1f.2]# ls
broken_parity_status config enable host1 host4 local_cpulist msi_bus power reset resource1 resource4 subsystem_device vendor
bus device firmware_node host2 host5 local_cpus msi_irqs remove resource resource2 resource5 subsystem_vendor
class driver host0 host3 irq modalias numa_node rescan resource0 resource3 subsystem uevent
[root@localhost 0000:00:1f.2]# ls -ltr
total 0
-rw-r--r-- 1 root root 4096 Sep 9 12:11 uevent
drwxr-xr-x 4 root root 0 Sep 9 12:11 host5
drwxr-xr-x 4 root root 0 Sep 9 12:11 host4
drwxr-xr-x 4 root root 0 Sep 9 12:11 host3
drwxr-xr-x 4 root root 0 Sep 9 12:11 host2
drwxr-xr-x 4 root root 0 Sep 9 12:11 host1
drwxr-xr-x 4 root root 0 Sep 9 12:11 host0
lrwxrwxrwx 1 root root 0 Sep 9 12:11 bus -> ../../../bus/pci
lrwxrwxrwx 1 root root 0 Sep 9 12:11 driver -> ../../../bus/pci/drivers/ahci
-r--r--r-- 1 root root 4096 Sep 9 12:11 resource
-r--r--r-- 1 root root 4096 Sep 9 12:11 vendor
-r--r--r-- 1 root root 4096 Sep 9 12:11 irq
-r--r--r-- 1 root root 4096 Sep 9 12:11 device
-r--r--r-- 1 root root 4096 Sep 9 12:11 class
-rw-r--r-- 1 root root 256 Sep 9 12:11 config
-r--r--r-- 1 root root 4096 Sep 9 12:11 subsystem_vendor
-r--r--r-- 1 root root 4096 Sep 9 12:11 subsystem_device
-r--r--r-- 1 root root 4096 Sep 9 12:11 modalias
-r--r--r-- 1 root root 4096 Sep 9 12:11 local_cpus
-r--r--r-- 1 root root 4096 Sep 9 12:11 local_cpulist
lrwxrwxrwx 1 root root 0 Sep 9 12:11 firmware_node -> ../../LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:26
lrwxrwxrwx 1 root root 0 Sep 9 12:11 subsystem -> ../../../bus/pci
-rw------- 1 root root 2048 Sep 9 12:11 resource5
-rw------- 1 root root 32 Sep 9 12:11 resource4
-rw------- 1 root root 4 Sep 9 12:11 resource3
-rw------- 1 root root 8 Sep 9 12:11 resource2
-rw------- 1 root root 4 Sep 9 12:11 resource1
-rw------- 1 root root 8 Sep 9 12:11 resource0
--w------- 1 root root 4096 Sep 9 12:11 reset
--w--w---- 1 root root 4096 Sep 9 12:11 rescan
--w--w---- 1 root root 4096 Sep 9 12:11 remove
drwxr-xr-x 2 root root 0 Sep 9 12:11 power
-r--r--r-- 1 root root 4096 Sep 9 12:11 numa_node
-r--r--r-- 1 root root 4096 Sep 9 12:11 msi_irqs
-rw-r--r-- 1 root root 4096 Sep 9 12:11 msi_bus
-rw------- 1 root root 4096 Sep 9 12:11 enable
-rw-r--r-- 1 root root 4096 Sep 9 12:11 broken_parity_status
[root@localhost 0000:00:1f.2]#
[root@localhost 0000:00:1f.2]# cat device
0x1c02
[root@localhost 0000:00:1f.2]# cat class
0x010601
[root@localhost 0000:00:1f.2]# less /usr/share/hwdata/pci.ids
[root@localhost 0000:00:1f.2]#
[root@localhost 0000:00:1f.2]#
[root@localhost 0000:00:1f.2]#
[root@localhost 0000:00:1f.2]#
[root@localhost 0000:00:1f.2]# cd /sys/block/
[root@localhost block]# ls
dm-0 loop1 loop3 loop5 loop7 ram0 ram10 ram12 ram14 ram2 ram4 ram6 ram8 sda sdc sde sdg
loop0 loop2 loop4 loop6 md0 ram1 ram11 ram13 ram15 ram3 ram5 ram7 ram9 sdb sdd sdf sdh
[root@localhost block]# ls -ltr
total 0
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram8
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram7
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram6
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram5
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram4
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram3
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram2
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram1
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram0
drwxr-xr-x 10 root root 0 Sep 9 12:11 sda
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram9
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram15
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram14
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram13
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram12
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram11
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram10
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop7
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop6
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop5
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop4
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop3
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop2
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop1
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop0
drwxr-xr-x 8 root root 0 Sep 9 12:11 sdh
drwxr-xr-x 8 root root 0 Sep 9 12:11 sdg
drwxr-xr-x 8 root root 0 Sep 9 12:11 sdf
drwxr-xr-x 8 root root 0 Sep 9 12:11 sde
drwxr-xr-x 8 root root 0 Sep 9 12:11 sdd
drwxr-xr-x 8 root root 0 Sep 9 12:11 sdc
drwxr-xr-x 8 root root 0 Sep 9 12:11 sdb
drwxr-xr-x 8 root root 0 Sep 9 12:11 dm-0
drwxr-xr-x 8 root root 0 Sep 9 12:11 md0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sda
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sda/device -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0
[root@localhost block]#
[root@localhost block]#
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdb
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sdb/device -> ../../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdc
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sdc/device -> ../../devices/pci0000:00/0000:00:1f.2/host2/target2:0:0/2:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdd
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sdd/device -> ../../devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sde
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sde/device -> ../../devices/pci0000:00/0000:00:1f.2/host4/target4:0:0/4:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdf
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sdf/device -> ../../devices/pci0000:00/0000:00:1f.2/host5/target5:0:0/5:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdg
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sdg/device -> ../../devices/pci0000:00/0000:00:1c.7/0000:09:00.0/host8/target8:0:0/8:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdh
lrwxrwxrwx 1 root root 0 Sep 9 12:11 ./sdh/device -> ../../devices/pci0000:00/0000:00:1c.7/0000:09:00.0/host9/target9:0:0/9:0:0:0
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdi
[root@localhost block]# find . -name device -exec ls -l {} \; |grep sdj
[root@localhost block]# cd sdb
[root@localhost sdb]# ls
alignment_offset bdi capability dev device ext_range holders inflight power queue range removable ro sdb1 size slaves stat subsystem trace uevent
[root@localhost sdb]# ls -ltr
total 0
-rw-r--r-- 1 root root 4096 Sep 9 12:11 uevent
drwxr-xr-x 2 root root 0 Sep 9 12:11 trace
lrwxrwxrwx 1 root root 0 Sep 9 12:11 subsystem -> ../../block
-r--r--r-- 1 root root 4096 Sep 9 12:11 stat
drwxr-xr-x 2 root root 0 Sep 9 12:11 slaves
-r--r--r-- 1 root root 4096 Sep 9 12:11 size
drwxr-xr-x 5 root root 0 Sep 9 12:11 sdb1
-r--r--r-- 1 root root 4096 Sep 9 12:11 ro
-r--r--r-- 1 root root 4096 Sep 9 12:11 removable
-r--r--r-- 1 root root 4096 Sep 9 12:11 range
drwxr-xr-x 3 root root 0 Sep 9 12:11 queue
drwxr-xr-x 2 root root 0 Sep 9 12:11 power
-r--r--r-- 1 root root 4096 Sep 9 12:11 inflight
drwxr-xr-x 2 root root 0 Sep 9 12:11 holders
-r--r--r-- 1 root root 4096 Sep 9 12:11 ext_range
-r--r--r-- 1 root root 4096 Sep 9 12:11 dev
-r--r--r-- 1 root root 4096 Sep 9 12:11 capability
lrwxrwxrwx 1 root root 0 Sep 9 12:11 bdi -> ../../class/bdi/8:16
-r--r--r-- 1 root root 4096 Sep 9 12:11 alignment_offset
lrwxrwxrwx 1 root root 0 Sep 9 12:11 device -> ../../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0
[root@localhost sdb]# cd device
[root@localhost device]# cat vendor
ATA
[root@localhost device]# cat model
ST31000524AS
[root@localhost device]# cat rev
JC45
[root@localhost device]# cat scsi_level
6
[root@localhost device]# cd ..
[root@localhost sdb]# cd ..
[root@localhost block]# ls -ltr
total 0
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram8
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram7
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram6
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram5
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram4
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram3
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram2
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram1
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram0
drwxr-xr-x 10 root root 0 Sep 9 12:11 sda
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram9
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram15
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram14
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram13
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram12
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram11
drwxr-xr-x 7 root root 0 Sep 9 12:11 ram10
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop7
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop6
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop5
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop4
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop3
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop2
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop1
drwxr-xr-x 7 root root 0 Sep 9 12:11 loop0
drwxr-xr-x 8 root root 0 Sep 9 12:11 sdh
drwxr-xr-x 8 root root 0 Sep 9 12:11 sdg
drwxr-xr-x 8 root root 0 Sep 9 12:11 sdf
drwxr-xr-x 8 root root 0 Sep 9 12:11 sde
drwxr-xr-x 8 root root 0 Sep 9 12:11 sdd
drwxr-xr-x 8 root root 0 Sep 9 12:11 sdc
drwxr-xr-x 8 root root 0 Sep 9 12:11 sdb
drwxr-xr-x 8 root root 0 Sep 9 12:11 dm-0
drwxr-xr-x 8 root root 0 Sep 9 12:11 md0
[root@localhost block]# cat sdc/device/scsi_level
6
[root@localhost block]# cat sdd/device/scsi_level
6
[root@localhost block]# cat sde/device/scsi_level
6
[root@localhost block]# cat sdf/device/scsi_level
6
[root@localhost block]# cat sdg/device/scsi_level
6
[root@localhost block]# man hdparm
[root@localhost block]# hdparm -I /dev/sda
/dev/sda:
ATA device, with non-removable media
Model Number: ST31000524AS
Serial Number: 5VP8JBTD
Firmware Revision: JC45
Transport: Serial
Standards:
Supported: 8 7 6 5
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 1953525168
device size with M = 1024*1024: 953869 MBytes
device size with M = 1000*1000: 1000204 MBytes (1000 GB)
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Recommended acoustic management value: 208, current value: 208
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
SET_MAX security extension
* Automatic Acoustic Management feature set
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* 64-bit World wide name
Write-Read-Verify feature set
* WRITE_UNCORRECTABLE command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
* unknown 76[3]
* Native Command Queueing (NCQ)
* Phy event counters
Device-initiated interface power management
* Software settings preservation
Security:
Master password revision code = 65534
supported
not enabled
not locked
frozen
not expired: security count
supported: enhanced erase
164min for SECURITY ERASE UNIT. 164min for ENHANCED SECURITY ERASE UNIT.
Checksum: correct
[root@localhost block]#
[root@localhost block]# hdparm -I /dev/sdb
/dev/sdb:
ATA device, with non-removable media
Model Number: ST31000524AS
Serial Number: 5VP8JYNH
Firmware Revision: JC45
Transport: Serial
Standards:
Supported: 8 7 6 5
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 1953525168
device size with M = 1024*1024: 953869 MBytes
device size with M = 1000*1000: 1000204 MBytes (1000 GB)
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Recommended acoustic management value: 208, current value: 208
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
SET_MAX security extension
* Automatic Acoustic Management feature set
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* 64-bit World wide name
Write-Read-Verify feature set
* WRITE_UNCORRECTABLE command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
* unknown 76[3]
* Native Command Queueing (NCQ)
* Phy event counters
Device-initiated interface power management
* Software settings preservation
Security:
Master password revision code = 65534
supported
not enabled
not locked
frozen
not expired: security count
supported: enhanced erase
166min for SECURITY ERASE UNIT. 166min for ENHANCED SECURITY ERASE UNIT.
Checksum: correct
[root@localhost block]# hdparm -I /dev/sdc
/dev/sdc:
ATA device, with non-removable media
Model Number: ST31000524AS
Serial Number: 5VP8HM9V
Firmware Revision: JC45
Transport: Serial
Standards:
Supported: 8 7 6 5
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 1953525168
device size with M = 1024*1024: 953869 MBytes
device size with M = 1000*1000: 1000204 MBytes (1000 GB)
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Recommended acoustic management value: 208, current value: 208
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
SET_MAX security extension
* Automatic Acoustic Management feature set
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* 64-bit World wide name
Write-Read-Verify feature set
* WRITE_UNCORRECTABLE command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
* unknown 76[3]
* Native Command Queueing (NCQ)
* Phy event counters
Device-initiated interface power management
* Software settings preservation
Security:
Master password revision code = 65534
supported
not enabled
not locked
frozen
not expired: security count
supported: enhanced erase
162min for SECURITY ERASE UNIT. 162min for ENHANCED SECURITY ERASE UNIT.
Checksum: correct
[root@localhost block]# hdparm -I /dev/sdd
/dev/sdd:
ATA device, with non-removable media
Model Number: ST31000524AS
Serial Number: 5VP8HT6F
Firmware Revision: JC45
Transport: Serial
Standards:
Supported: 8 7 6 5
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 1953525168
device size with M = 1024*1024: 953869 MBytes
device size with M = 1000*1000: 1000204 MBytes (1000 GB)
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Recommended acoustic management value: 208, current value: 208
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
SET_MAX security extension
* Automatic Acoustic Management feature set
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* 64-bit World wide name
Write-Read-Verify feature set
* WRITE_UNCORRECTABLE command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
* unknown 76[3]
* Native Command Queueing (NCQ)
* Phy event counters
Device-initiated interface power management
* Software settings preservation
Security:
Master password revision code = 65534
supported
not enabled
not locked
frozen
not expired: security count
supported: enhanced erase
170min for SECURITY ERASE UNIT. 170min for ENHANCED SECURITY ERASE UNIT.
Checksum: correct
[root@localhost block]# hdparm -I /dev/sde
/dev/sde:
ATA device, with non-removable media
Model Number: ST31000524AS
Serial Number: 5VP8JVWV
Firmware Revision: JC45
Transport: Serial
Standards:
Supported: 8 7 6 5
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 1953525168
device size with M = 1024*1024: 953869 MBytes
device size with M = 1000*1000: 1000204 MBytes (1000 GB)
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Recommended acoustic management value: 208, current value: 208
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
SET_MAX security extension
* Automatic Acoustic Management feature set
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* 64-bit World wide name
Write-Read-Verify feature set
* WRITE_UNCORRECTABLE command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
* unknown 76[3]
* Native Command Queueing (NCQ)
* Phy event counters
Device-initiated interface power management
* Software settings preservation
Security:
Master password revision code = 65534
supported
not enabled
not locked
frozen
not expired: security count
supported: enhanced erase
162min for SECURITY ERASE UNIT. 162min for ENHANCED SECURITY ERASE UNIT.
Checksum: correct
[root@localhost block]# hdparm -I /dev/sdf
/dev/sdf:
ATA device, with non-removable media
Model Number: ST31000524AS
Serial Number: 5VP8FK4C
Firmware Revision: JC45
Transport: Serial
Standards:
Supported: 8 7 6 5
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 1953525168
device size with M = 1024*1024: 953869 MBytes
device size with M = 1000*1000: 1000204 MBytes (1000 GB)
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Recommended acoustic management value: 208, current value: 208
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
SET_MAX security extension
* Automatic Acoustic Management feature set
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* 64-bit World wide name
Write-Read-Verify feature set
* WRITE_UNCORRECTABLE command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
* unknown 76[3]
* Native Command Queueing (NCQ)
* Phy event counters
Device-initiated interface power management
* Software settings preservation
Security:
Master password revision code = 65534
supported
not enabled
not locked
frozen
not expired: security count
supported: enhanced erase
168min for SECURITY ERASE UNIT. 168min for ENHANCED SECURITY ERASE UNIT.
Checksum: correct
[root@localhost block]# hdparm -I /dev/sdg
/dev/sdg:
ATA device, with non-removable media
Model Number: ST31000524AS
Serial Number: 5VP8FNRR
Firmware Revision: JC45
Transport: Serial
Standards:
Supported: 8 7 6 5
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 1953525168
device size with M = 1024*1024: 953869 MBytes
device size with M = 1000*1000: 1000204 MBytes (1000 GB)
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = ?
Recommended acoustic management value: 208, current value: 208
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
SET_MAX security extension
* Automatic Acoustic Management feature set
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* 64-bit World wide name
Write-Read-Verify feature set
* WRITE_UNCORRECTABLE command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
* unknown 76[3]
* Native Command Queueing (NCQ)
* Phy event counters
Device-initiated interface power management
* Software settings preservation
Security:
Master password revision code = 65534
supported
not enabled
not locked
not frozen
not expired: security count
supported: enhanced erase
170min for SECURITY ERASE UNIT. 170min for ENHANCED SECURITY ERASE UNIT.
Checksum: correct
[root@localhost block]# hdparm -t /dev/sdg
/dev/sdg:
^C
[root@localhost block]#
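The hdparm -t run got interrupted (^C) before it printed anything. For reference, -t times buffered sequential reads off the disk and -T times cached reads; both want an otherwise idle box and a couple of repeats to be meaningful:
{{{
# -T: cached (memory/bus) reads, -t: buffered disk reads; repeat 2-3 times on an idle system
hdparm -tT /dev/sdg
}}}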
[root@localhost block]#
[root@localhost block]# man hdparm
[root@localhost block]# hdparm -v /dev/sdg
/dev/sdg:
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 56065/255/63, sectors = 1953525168, start = 0
[root@localhost block]#
[root@localhost block]#
[root@localhost block]#
[root@localhost block]#
[root@localhost block]# man hdparm
[root@localhost block]# hdparm -C /dev/sdg
/dev/sdg:
drive state is: active/idle
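hdparm -C just asks the drive for its power state (active/idle, standby, or sleeping), so it's a cheap check before spinning anything down or pulling cables. A sketch over the same /dev/sd[a-h] layout:
{{{
for d in /dev/sd[a-h]; do
  printf "%s:" "$d"; hdparm -C "$d" 2>/dev/null | grep "drive state"
done
}}}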
[root@localhost block]#
[root@localhost block]#
[root@localhost block]#
[root@localhost block]# man hdparm
[root@localhost block]# hdparm -v /dev/sdg
/dev/sdg:
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 56065/255/63, sectors = 1953525168, start = 0
[root@localhost block]#
[root@localhost block]#
[root@localhost block]# hdparm -v /dev/sda
/dev/sda:
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 56065/255/63, sectors = 1953525168, start = 0
[root@localhost block]# hdparm -v /dev/sdb
/dev/sdb:
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 56065/255/63, sectors = 1953525168, start = 0
[root@localhost block]# hdparm -v /dev/sdc
/dev/sdc:
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 56065/255/63, sectors = 1953525168, start = 0
[root@localhost block]# hdparm -v /dev/sdd
/dev/sdd:
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 56065/255/63, sectors = 1953525168, start = 0
[root@localhost block]# hdparm -v /dev/sdr
/dev/sdr: No such file or directory
[root@localhost block]# hdparm -v /dev/sde
/dev/sde:
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 56065/255/63, sectors = 1953525168, start = 0
[root@localhost block]# hdparm -v /dev/sdf
/dev/sdf:
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 56065/255/63, sectors = 1953525168, start = 0
[root@localhost block]# hdparm -v /dev/sdg
/dev/sdg:
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 56065/255/63, sectors = 1953525168, start = 0
[root@localhost block]#
[root@localhost block]#
[root@localhost block]# hdparm -i /dev/sdg
/dev/sdg:
Model=ST31000524AS , FwRev=JC45 , SerialNo= 5VP8FNRR
Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
BuffType=unknown, BuffSize=0kB, MaxMultSect=16, MultSect=?16?
CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=268435455
IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
PIO modes: pio0 pio1 pio2 pio3 pio4
DMA modes: mdma0 mdma1 mdma2
UDMA modes: udma0 udma1 udma2
AdvancedPM=no WriteCache=enabled
Drive conforms to: unknown: ATA/ATAPI-4 ATA/ATAPI-5 ATA/ATAPI-6 ATA/ATAPI-7
* signifies the current active mode
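Worth remembering the difference between the two flags: -i prints the identify data the kernel cached at boot time, while -I issues a fresh IDENTIFY DEVICE to the drive itself, which is why -I is the one to trust:
{{{
hdparm -i /dev/sdg   # kernel's identify info cached at boot
hdparm -I /dev/sdg   # fresh IDENTIFY DEVICE straight from the drive
}}}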
[root@localhost block]#
[root@localhost block]#
[root@localhost block]# lshw -class disk
-bash: lshw: command not found
[root@localhost block]# lshw -class /dev/sdg
-bash: lshw: command not found
[root@localhost block]#
[root@localhost block]# lshw -class sdg
-bash: lshw: command not found
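lshw isn't installed on this box (it's a separate package, not part of the base install). The model strings are exposed through sysfs anyway, so as a fallback:
{{{
# model string per block device, straight from sysfs
for d in /sys/block/sd*; do
  echo "$(basename $d): $(cat $d/device/model 2>/dev/null)"
done
}}}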
[root@localhost block]#
[root@localhost block]#
[root@localhost block]#
[root@localhost block]#
[root@localhost block]#
[root@localhost block]# hdparm -z -S /dev/sdg
-S: missing value
/dev/sdg:
[root@localhost block]# hdparm -z /dev/sdg
/dev/sdg:
[root@localhost block]#
[root@localhost block]# hdparm -z -S /dev/sdg
-S: missing value
/dev/sdg:
[root@localhost block]# hdparm -z -S / hdparm -I /dev/sda | grep .signaling speed.dev/sdg
[root@localhost block]#
[root@localhost block]#
[root@localhost block]#
[root@localhost block]# hdparm -I /dev/sdg | grep "signaling speed"
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
[root@localhost block]#
[root@localhost block]# hdparm -I /dev/sda | grep "signaling speed"
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
[root@localhost block]# hdparm -I /dev/sdb | grep "signaling speed"
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
[root@localhost block]# hdparm -I /dev/sdc | grep "signaling speed"
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
[root@localhost block]# hdparm -I /dev/sdd | grep "signaling speed"
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
[root@localhost block]# hdparm -I /dev/sde | grep "signaling speed"
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
[root@localhost block]# hdparm -I /dev/sdf | grep "signaling speed"
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
[root@localhost block]# hdparm -I /dev/sdg | grep "signaling speed"
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
[root@localhost block]# hdparm -I /dev/sdh | grep "signaling speed"
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
[root@localhost block]# hdparm -I /dev/sdi | grep "signaling speed"
/dev/sdi: No such file or directory
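The signaling-speed lines above only show what each drive advertises (SATA-I and SATA-II; the "unknown 76[3]" line is likely the 6.0Gb/s capability bit that this older hdparm doesn't decode), not what was actually negotiated on the wire. On kernels new enough to expose the libata transport class in sysfs (probably newer than this box), the negotiated speed can be read directly:
{{{
# negotiated SATA link speed per port; needs /sys/class/ata_link (newer kernels)
for l in /sys/class/ata_link/link*; do
  echo "$(basename $l): $(cat $l/sata_spd 2>/dev/null)"
done
}}}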
[root@localhost block]#
[root@localhost block]#
[root@localhost block]#
[root@localhost block]# dmesg | grep SATA
ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x3f impl SATA mode
ata1: SATA max UDMA/133 abar m2048@0xfbd25000 port 0xfbd25100 irq 31
ata2: SATA max UDMA/133 abar m2048@0xfbd25000 port 0xfbd25180 irq 31
ata3: SATA max UDMA/133 abar m2048@0xfbd25000 port 0xfbd25200 irq 31
ata4: SATA max UDMA/133 abar m2048@0xfbd25000 port 0xfbd25280 irq 31
ata5: SATA max UDMA/133 abar m2048@0xfbd25000 port 0xfbd25300 irq 31
ata6: SATA max UDMA/133 abar m2048@0xfbd25000 port 0xfbd25380 irq 31
ahci 0000:05:00.0: AHCI 0001.0100 32 slots 2 ports 3 Gbps 0x3 impl SATA mode
ata7: SATA max UDMA/133 abar m512@0xfbb10000 port 0xfbb10100 irq 19
ata8: SATA max UDMA/133 abar m512@0xfbb10000 port 0xfbb10180 irq 19
ahci 0000:09:00.0: AHCI 0001.0000 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
ata9: SATA max UDMA/133 abar m512@0xfb810000 port 0xfb810100 irq 32
ata10: SATA max UDMA/133 abar m512@0xfb810000 port 0xfb810180 irq 32
ata8: SATA link down (SStatus 0 SControl 300)
ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata7: SATA link down (SStatus 0 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: limiting SATA link speed to 3.0 Gbps
ata9: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: limiting SATA link speed to 3.0 Gbps
ata10: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
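The interesting part: ata9 and ata10 (the two ports on the Marvell 9172 card, which ldrv.sh below maps to sdg and sdh) came up at 6.0 Gbps, kept flapping, and libata eventually downshifted them to 3.0 Gbps. A quick way to count the downshifts and error bursts per port:
{{{
dmesg | egrep "limiting SATA|bus error" | sort | uniq -c | sort -rn
}}}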
[root@localhost block]#
[root@localhost block]# dmesg | less
[root@localhost block]# dmesg | grep error
ata9.00: irq_stat 0x08000000, interface fatal error
res 40/00:10:3f:40:89/00:00:1b:00:00/40 Emask 0x10 (ATA bus error)
ata9.00: irq_stat 0x08000000, interface fatal error
res 40/00:08:3f:34:13/00:00:15:00:00/40 Emask 0x10 (ATA bus error)
ata9.00: irq_stat 0x08000000, interface fatal error
res 40/00:00:3f:00:2d/00:00:15:00:00/40 Emask 0x10 (ATA bus error)
ata9.00: irq_stat 0x08000000, interface fatal error
res 40/00:f0:3f:00:2d/00:00:15:00:00/40 Emask 0x10 (ATA bus error)
ata10.00: irq_stat 0x08000000, interface fatal error
res 40/00:90:3f:b0:11/00:00:1b:00:00/40 Emask 0x10 (ATA bus error)
ata10.00: irq_stat 0x08000000, interface fatal error
res 40/00:30:3f:64:98/00:00:1b:00:00/40 Emask 0x10 (ATA bus error)
ata10.00: irq_stat 0x08000000, interface fatal error
res 40/00:40:3f:f4:3d/00:00:05:00:00/40 Emask 0x10 (ATA bus error)
ata10.00: irq_stat 0x08000000, interface fatal error
res 40/00:c0:3f:58:02/00:00:18:00:00/40 Emask 0x10 (ATA bus error)
[root@localhost block]# dmesg | less
[root@localhost block]# dmesg | less
[root@localhost block]# q
-bash: q: command not found
[root@localhost block]#
[root@localhost block]#
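Those repeated "interface fatal error" / "ATA bus error" bursts on ata9.00 and ata10.00 are link-level signal-integrity symptoms (cable, backplane, or the controller itself), not media errors, and they're exactly what triggered the "limiting SATA link speed to 3.0 Gbps" messages. If smartmontools is installed, the drives' CRC counters should confirm it, since UDMA CRC errors increment on the link rather than the platters:
{{{
# CRC errors increment on the link, not the media -- points at cabling/controller
smartctl -A /dev/sdg | grep -i crc
smartctl -A /dev/sdh | grep -i crc
}}}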
[root@localhost block]# dmesg | grep -i sata
ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x3f impl SATA mode
ata1: SATA max UDMA/133 abar m2048@0xfbd25000 port 0xfbd25100 irq 31
ata2: SATA max UDMA/133 abar m2048@0xfbd25000 port 0xfbd25180 irq 31
ata3: SATA max UDMA/133 abar m2048@0xfbd25000 port 0xfbd25200 irq 31
ata4: SATA max UDMA/133 abar m2048@0xfbd25000 port 0xfbd25280 irq 31
ata5: SATA max UDMA/133 abar m2048@0xfbd25000 port 0xfbd25300 irq 31
ata6: SATA max UDMA/133 abar m2048@0xfbd25000 port 0xfbd25380 irq 31
ahci 0000:05:00.0: AHCI 0001.0100 32 slots 2 ports 3 Gbps 0x3 impl SATA mode
ata7: SATA max UDMA/133 abar m512@0xfbb10000 port 0xfbb10100 irq 19
ata8: SATA max UDMA/133 abar m512@0xfbb10000 port 0xfbb10180 irq 19
ahci 0000:09:00.0: AHCI 0001.0000 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
ata9: SATA max UDMA/133 abar m512@0xfb810000 port 0xfb810100 irq 32
ata10: SATA max UDMA/133 abar m512@0xfb810000 port 0xfb810180 irq 32
ata8: SATA link down (SStatus 0 SControl 300)
ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata7: SATA link down (SStatus 0 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: limiting SATA link speed to 3.0 Gbps
ata9: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: limiting SATA link speed to 3.0 Gbps
ata10: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[root@localhost block]#
[root@localhost block]#
[root@localhost block]#
[root@localhost block]# dmesg | grep -i "limiting SATA"
ata9: limiting SATA link speed to 3.0 Gbps
ata10: limiting SATA link speed to 3.0 Gbps
[root@localhost block]#
[root@localhost block]#
[root@localhost block]# cd
[root@localhost ~]#
[root@localhost ~]# ls -ltr
total 88
-rw-r--r-- 1 root root 4136 Sep 6 15:39 install.log.syslog
-rw-r--r-- 1 root root 49355 Sep 6 15:39 install.log
-rw------- 1 root root 1446 Sep 6 15:39 anaconda-ks.cfg
drwxr-xr-x 2 root root 4096 Sep 6 20:48 Desktop
drwxrwxrwx 3 root root 4096 Sep 6 21:13 installers
[root@localhost ~]# vi ldrv.sh
[root@localhost ~]# chmod 755 ldrv.sh
[root@localhost ~]# ./ldrv.sh
Controller device @ pci0000:00/0000:00:1c.3/0000:05:00.0 [ahci]
SATA controller: JMicron Technology Corp. JMB362 AHCI Controller (rev 10)
host6 [Empty]
host7 [Empty]
Controller device @ pci0000:00/0000:00:1c.7/0000:09:00.0 [ahci]
SATA controller: Marvell Technology Group Ltd. Device 9172 (rev 11)
./ldrv.sh: line 42: sginfo: command not found
host8 0:0:0 sdg ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host9 0:0:0 sdh ATA ST31000524AS
Controller device @ pci0000:00/0000:00:1f.2 [ahci]
SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
./ldrv.sh: line 42: sginfo: command not found
host0 0:0:0 sda ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host1 0:0:0 sdb ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host2 0:0:0 sdc ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host3 0:0:0 sdd ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host4 0:0:0 sde ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host5 0:0:0 sdf ATA ST31000524AS
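The "sginfo: command not found" noise is just the script shelling out to sginfo, which ships in the sg3_utils package and isn't installed here; the host-to-device mapping still comes out fine. If wanted:
{{{
yum install sg3_utils    # provides sginfo (assuming a yum-based distro like this one)
}}}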
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# ./ldrv.sh | grep 10
SATA controller: JMicron Technology Corp. JMB362 AHCI Controller (rev 10)
./ldrv.sh: line 42: sginfo: command not found
host8 0:0:0 sdg ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host9 0:0:0 sdh ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host0 0:0:0 sda ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host1 0:0:0 sdb ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host2 0:0:0 sdc ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host3 0:0:0 sdd ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host4 0:0:0 sde ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host5 0:0:0 sdf ATA ST31000524AS
[root@localhost ~]#
[root@localhost ~]# ./ldrv.sh | grep host10
./ldrv.sh: line 42: sginfo: command not found
./ldrv.sh: line 42: sginfo: command not found
./ldrv.sh: line 42: sginfo: command not found
./ldrv.sh: line 42: sginfo: command not found
./ldrv.sh: line 42: sginfo: command not found
./ldrv.sh: line 42: sginfo: command not found
./ldrv.sh: line 42: sginfo: command not found
./ldrv.sh: line 42: sginfo: command not found
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# dmesg | less
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# dmesg | grep -i "limiting SATA"
ata9: limiting SATA link speed to 3.0 Gbps
ata10: limiting SATA link speed to 3.0 Gbps
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# ./ldrv.sh
Controller device @ pci0000:00/0000:00:1c.3/0000:05:00.0 [ahci]
SATA controller: JMicron Technology Corp. JMB362 AHCI Controller (rev 10)
host6 [Empty]
host7 [Empty]
Controller device @ pci0000:00/0000:00:1c.7/0000:09:00.0 [ahci]
SATA controller: Marvell Technology Group Ltd. Device 9172 (rev 11)
./ldrv.sh: line 42: sginfo: command not found
host8 0:0:0 sdg ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host9 0:0:0 sdh ATA ST31000524AS
Controller device @ pci0000:00/0000:00:1f.2 [ahci]
SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
./ldrv.sh: line 42: sginfo: command not found
host0 0:0:0 sda ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host1 0:0:0 sdb ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host2 0:0:0 sdc ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host3 0:0:0 sdd ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host4 0:0:0 sde ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host5 0:0:0 sdf ATA ST31000524AS
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# dmesg | grep -i "limiting SATA"
ata9: limiting SATA link speed to 3.0 Gbps
ata10: limiting SATA link speed to 3.0 Gbps
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# dmesg | grep -"
[root@localhost ~]# dmesg | grep -i "6.0 Gbps"
ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# dmesg | grep -i "6.0 Gbps"
ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[root@localhost ~]# dmesg | grep -i "6.0 Gbps"
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# dmesg | grep -i "6.0 Gbps"
ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[root@localhost ~]#
[root@localhost ~]# ./ldrv.sh | grep host
host6 [Empty]
host7 [Empty]
./ldrv.sh: line 42: sginfo: command not found
host8 0:0:0 sdg ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host9 0:0:0 sdh ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host0 0:0:0 sda ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host1 0:0:0 sdb ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host2 0:0:0 sdc ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host3 0:0:0 sdd ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host4 0:0:0 sde ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host5 0:0:0 sdf ATA ST31000524AS
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# dmesg | grep -i "6.0 Gbps"
ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# ./ldrv.sh | grep ATA
SATA controller: JMicron Technology Corp. JMB362 AHCI Controller (rev 10)
SATA controller: Marvell Technology Group Ltd. Device 9172 (rev 11)
./ldrv.sh: line 42: sginfo: command not found
host8 0:0:0 sdg ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host9 0:0:0 sdh ATA ST31000524AS
SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
./ldrv.sh: line 42: sginfo: command not found
host0 0:0:0 sda ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host1 0:0:0 sdb ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host2 0:0:0 sdc ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host3 0:0:0 sdd ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host4 0:0:0 sde ATA ST31000524AS
./ldrv.sh: line 42: sginfo: command not found
host5 0:0:0 sdf ATA ST31000524AS
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# dmesg | grep -i "6.0 Gbps"
ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[root@localhost ~]#
[root@localhost ~]# dmesg | grep -i "sd"
ACPI: RSDP 00000000000f0420 00024 (v02 ALASKA)
ACPI: XSDT 00000000ca7c6068 0004C (v01 ALASKA A M I 01072009 AMI 00010013)
ACPI: DSDT 00000000ca7c6140 0A1D6 (v02 ALASKA A M I 00000000 INTL 20051117)
ACPI: SSDT 00000000ca7d04a8 001D6 (v01 AMICPU PROC 00000001 MSFT 03000001)
ACPI: EC: Look up EC in DSDT
ACPI: SSDT 00000000cadd4818 0079C (v01 AMI IST 00000001 MSFT 03000001)
ACPI: SSDT 00000000caddba18 0021C (v01 AMI CST 00000001 MSFT 03000001)
sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 0:0:0:0: Attached scsi generic sg0 type 0
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sda:
sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 1:0:0:0: Attached scsi generic sg1 type 0
sd 1:0:0:0: [sdb] Write Protect is off
sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:0:0: Attached scsi generic sg2 type 0
sd 2:0:0:0: [sdc] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sdb:
sd 2:0:0:0: [sdc] Write Protect is off
sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 3:0:0:0: [sdd] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sdc:
sd 3:0:0:0: Attached scsi generic sg3 type 0
sd 3:0:0:0: [sdd] Write Protect is off
sd 3:0:0:0: [sdd] Mode Sense: 00 3a 00 00
sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 4:0:0:0: Attached scsi generic sg4 type 0
sdd:
sd 5:0:0:0: Attached scsi generic sg5 type 0
sd 4:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 5:0:0:0: [sdf] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 4:0:0:0: [sde] Write Protect is off
sd 4:0:0:0: [sde] Mode Sense: 00 3a 00 00
sd 5:0:0:0: [sdf] Write Protect is off
sd 5:0:0:0: [sdf] Mode Sense: 00 3a 00 00
sd 4:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 5:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sde:
sdf: sda1 sda2 sda3
sd 0:0:0:0: [sda] Attached SCSI disk
sdc1
sdd1
sd 2:0:0:0: [sdc] Attached SCSI disk
sd 3:0:0:0: [sdd] Attached SCSI disk
sdf1
sd 5:0:0:0: [sdf] Attached SCSI disk
sdb1
sde1
sd 1:0:0:0: [sdb] Attached SCSI disk
sd 4:0:0:0: [sde] Attached SCSI disk
sd 8:0:0:0: [sdg] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 8:0:0:0: Attached scsi generic sg6 type 0
sd 8:0:0:0: [sdg] Write Protect is off
sd 8:0:0:0: [sdg] Mode Sense: 00 3a 00 00
sd 8:0:0:0: [sdg] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdg:
sd 9:0:0:0: [sdh] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 9:0:0:0: Attached scsi generic sg7 type 0
sd 9:0:0:0: [sdh] Write Protect is off
sd 9:0:0:0: [sdh] Mode Sense: 00 3a 00 00
sd 9:0:0:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdh: sdg1
sd 8:0:0:0: [sdg] Attached SCSI disk
sdh1
sd 9:0:0:0: [sdh] Attached SCSI disk
EXT3 FS on sda1, internal journal
sdg: sdg1
sdg: sdg1
sdg: sdg1
[root@localhost ~]#
[root@localhost ~]# dmesg | grep -i "6.0 Gbps"
ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# dmesg | grep -i "SATA link"
ata8: SATA link down (SStatus 0 SControl 300)
ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata7: SATA link down (SStatus 0 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata9: limiting SATA link speed to 3.0 Gbps
ata9: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata10: limiting SATA link speed to 3.0 Gbps
ata10: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# dmesg | grep -i "SATA link down"
ata8: SATA link down (SStatus 0 SControl 300)
ata7: SATA link down (SStatus 0 SControl 300)
[root@localhost ~]#
[root@localhost ~]# dmesg | grep -i "3.0 Gbps"
ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata9: limiting SATA link speed to 3.0 Gbps
ata9: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
ata10: limiting SATA link speed to 3.0 Gbps
ata10: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[root@localhost ~]#
[root@localhost ~]# cat ldrv.sh
#! /bin/bash
#
# Examine specific system host devices to identify the drives attached
#
# sample usage: ./ldrv.sh 2> /dev/null
#
function describe_controller () {
local device driver modprefix serial slotname
driver="`readlink -f \"$1/driver\"`"
driver="`basename $driver`"
modprefix="`cut -d: -f1 <\"$1/modalias\"`"
echo "Controller device @ ${1##/sys/devices/} [$driver]"
if [[ "$modprefix" == "pci" ]] ; then
slotname="`basename \"$1\"`"
echo " `lspci -s $slotname |cut -d\ -f2-`"
return
fi
if [[ "$modprefix" == "usb" ]] ; then
if [[ -f "$1/busnum" ]] ; then
device="`cat \"$1/busnum\"`:`cat \"$1/devnum\"`"
serial="`cat \"$1/serial\"`"
else
device="`cat \"$1/../busnum\"`:`cat \"$1/../devnum\"`"
serial="`cat \"$1/../serial\"`"
fi
echo " `lsusb -s $device` {SN: $serial}"
return
fi
echo -e " `cat \"$1/modalias\"`"
}
function describe_device () {
local empty=1
while read device ; do
empty=0
if [[ "$device" =~ ^(.+/[0-9]+:)([0-9]+:[0-9]+:[0-9]+)/block[/:](.+)$ ]] ; then
base="${BASH_REMATCH[1]}"
lun="${BASH_REMATCH[2]}"
bdev="${BASH_REMATCH[3]}"
vnd="$(< ${base}${lun}/vendor)"
mdl="$(< ${base}${lun}/model)"
sn="`sginfo -s /dev/$bdev | \
sed -rn -e \"/Serial Number/{s%^.+' *(.+) *'.*\\\$%\\\\1%;p;q}\"`" &>/dev/null
if [[ -n "$sn" ]] ; then
echo -e " $1 `echo $lun $bdev $vnd $mdl {SN: $sn}`"
else
echo -e " $1 `echo $lun $bdev $vnd $mdl`"
fi
else
echo -e " $1 Unknown $device"
fi
done
[[ $empty -eq 1 ]] && echo -e " $1 [Empty]"
}
function check_host () {
local found=0
local pController=
while read shost ; do
host=`dirname "$shost"`
controller=`dirname "$host"`
bhost=`basename "$host"`
if [[ "$controller" != "$pController" ]] ; then
pController="$controller"
describe_controller "$controller"
fi
find $host -regex '.+/target[0-9:]+/[0-9:]+/block[:/][^/]+' |describe_device "$bhost"
done
}
find /sys/devices/ -name 'scsi_host*' |check_host
}}}
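The repeated "sginfo: command not found" errors above are not fatal: ldrv.sh only calls sginfo (shipped in the sg3_utils package) to decorate each drive with its serial number, and the &>/dev/null on that assignment does not silence the command substitution's stderr, which is why the sample usage comment redirects stderr for the whole script. A small sketch of two workarounds (the smartctl fallback is my substitution, not part of ldrv.sh):
{{{
# option 1: provide sginfo (sg3_utils package on RHEL/OL-style systems)
yum install -y sg3_utils

# option 2: read each drive's serial with smartctl (smartmontools) instead
for d in /dev/sd?; do
  echo "$d $(smartctl -i "$d" | awk -F': *' '/Serial Number/ {print $2}')"
done
}}}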
''ldrv output''
{{{
[root@desktopserver README_ioerrors]# ./ldrv.sh 2> /dev/null
Controller device @ pci0000:00/0000:00:1f.2 [ahci]
SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
host5 0:0:0 sdf ATA ST31000524AS
host4 0:0:0 sde ATA ST31000524AS
host3 0:0:0 sdd ATA ST31000524AS
host2 0:0:0 sdc ATA ST31000524AS
host1 0:0:0 sdb ATA ST31000524AS
host0 0:0:0 sda ATA ST31000524AS
Controller device @ pci0000:00/0000:00:1d.0/usb2/2-1/2-1.6/2-1.6:1.0 [usb-storage]
Bus 002 Device 004: ID 1058:1021 Western Digital Technologies, Inc. Elements 2TB {SN: 574D415A4133363632383435}
host63 0:0:0 sdi WD Ext HDD 1021
Controller device @ pci0000:00/0000:00:1c.7/0000:09:00.0 [ahci]
SATA controller: Marvell Technology Group Ltd. Device 9172 (rev 11)
host7 0:0:0 sdh ATA ST31000524AS
host6 0:0:0 sdg ATA ST31000524AS
}}}
{{{
[root@desktopserver README_ioerrors]# cd /sys/block/
[root@desktopserver block]# ls
dm-0 dm-2 dm-4 loop1 loop3 loop5 loop7 ram0 ram10 ram12 ram14 ram2 ram4 ram6 ram8 sda sdc sde sdg
dm-1 dm-3 loop0 loop2 loop4 loop6 md0 ram1 ram11 ram13 ram15 ram3 ram5 ram7 ram9 sdb sdd sdf sdh
[root@desktopserver block]# find . -name device -exec ls -l {} \; |grep sda
lrwxrwxrwx 1 root root 0 Nov 13 17:50 ./sda/device -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0
[root@desktopserver block]# find . -name device -exec ls -l {} \; |grep sdb
lrwxrwxrwx 1 root root 0 Nov 13 17:50 ./sdb/device -> ../../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0
[root@desktopserver block]# find . -name device -exec ls -l {} \; |grep sdc
lrwxrwxrwx 1 root root 0 Nov 13 17:50 ./sdc/device -> ../../devices/pci0000:00/0000:00:1f.2/host2/target2:0:0/2:0:0:0
[root@desktopserver block]# find . -name device -exec ls -l {} \; |grep sdd
lrwxrwxrwx 1 root root 0 Nov 13 17:50 ./sdd/device -> ../../devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0
[root@desktopserver block]# find . -name device -exec ls -l {} \; |grep sde
lrwxrwxrwx 1 root root 0 Nov 13 17:50 ./sde/device -> ../../devices/pci0000:00/0000:00:1f.2/host4/target4:0:0/4:0:0:0
[root@desktopserver block]# find . -name device -exec ls -l {} \; |grep sdf
lrwxrwxrwx 1 root root 0 Nov 13 17:50 ./sdf/device -> ../../devices/pci0000:00/0000:00:1f.2/host5/target5:0:0/5:0:0:0
[root@desktopserver block]# find . -name device -exec ls -l {} \; |grep sdg
lrwxrwxrwx 1 root root 0 Nov 13 17:50 ./sdg/device -> ../../devices/pci0000:00/0000:00:1c.7/0000:09:00.0/host6/target6:0:0/6:0:0:0
[root@desktopserver block]# find . -name device -exec ls -l {} \; |grep sdh
lrwxrwxrwx 1 root root 0 Nov 13 17:50 ./sdh/device -> ../../devices/pci0000:00/0000:00:1c.7/0000:09:00.0/host7/target7:0:0/7:0:0:0
}}}
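The per-device find | grep above can be collapsed into one pass; each /sys/block/<dev>/device symlink resolves to the PCI controller and SCSI host the disk hangs off, which is the same mapping shown line by line above:
{{{
for b in /sys/block/sd*; do
  echo "$(basename "$b") -> $(readlink -f "$b"/device)"
done
}}}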
''sde bad sectors''
{{{
[root@desktopserver README_ioerrors]# cat hd_sde_DETECTED.txt
Sep 11 18:22:13 localhost smartd[4549]: Device: /dev/sde, 2 Currently unreadable (pending) sectors
Sep 11 18:22:13 localhost smartd[4549]: Device: /dev/sde, 2 Offline uncorrectable sectors
Sep 11 18:52:14 localhost smartd[4549]: Device: /dev/sde, 2 Currently unreadable (pending) sectors
Sep 11 18:52:14 localhost smartd[4549]: Device: /dev/sde, 2 Offline uncorrectable sectors
Sep 11 19:22:13 localhost smartd[4549]: Device: /dev/sde, 2 Currently unreadable (pending) sectors
Sep 11 19:22:13 localhost smartd[4549]: Device: /dev/sde, 2 Offline uncorrectable sectors
Sep 11 19:52:14 localhost smartd[4549]: Device: /dev/sde, 2 Currently unreadable (pending) sectors
Sep 11 19:52:14 localhost smartd[4549]: Device: /dev/sde, 2 Offline uncorrectable sectors
Sep 11 20:22:13 localhost smartd[4549]: Device: /dev/sde, 2 Currently unreadable (pending) sectors
Sep 11 20:22:13 localhost smartd[4549]: Device: /dev/sde, 2 Offline uncorrectable sectors
Sep 11 20:52:14 localhost smartd[4549]: Device: /dev/sde, 2 Currently unreadable (pending) sectors
Sep 11 20:52:14 localhost smartd[4549]: Device: /dev/sde, 2 Offline uncorrectable sectors
Sep 11 21:22:13 localhost smartd[4549]: Device: /dev/sde, 2 Currently unreadable (pending) sectors
Sep 11 21:22:13 localhost smartd[4549]: Device: /dev/sde, 2 Offline uncorrectable sectors
Sep 11 21:52:14 localhost smartd[4549]: Device: /dev/sde, 2 Currently unreadable (pending) sectors
Sep 11 21:52:14 localhost smartd[4549]: Device: /dev/sde, 2 Offline uncorrectable sectors
Sep 11 22:22:13 localhost smartd[4549]: Device: /dev/sde, 2 Currently unreadable (pending) sectors
Sep 11 22:22:13 localhost smartd[4549]: Device: /dev/sde, 2 Offline uncorrectable sectors
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# hdparm -I /dev/sda | grep -i "serial number"
Serial Number: 5VP8JBTD
[root@localhost ~]# hdparm -I /dev/sdb | grep -i "serial number"
Serial Number: 5VP8JYNH
[root@localhost ~]# hdparm -I /dev/sdc | grep -i "serial number"
Serial Number: 5VP8HM9V
[root@localhost ~]# hdparm -I /dev/sdd | grep -i "serial number"
Serial Number: 5VP8HT6F
[root@localhost ~]# hdparm -I /dev/sde | grep -i "serial number"
Serial Number: 5VP8JVWV
[root@localhost ~]# hdparm -I /dev/sdf | grep -i "serial number"
Serial Number: 5VP8FK4C
[root@localhost ~]# hdparm -I /dev/sdg | grep -i "serial number"
Serial Number: 5VP8FNRR
[root@localhost ~]# hdparm -I /dev/sdh | grep -i "serial number"
Serial Number: 5VP8DYF9
[root@localhost ~]#
[root@desktopserver README_ioerrors]# cat hd_sde_AFTER_REPLACE.txt
[root@desktopserver README_ioerrors]# hdparm -I /dev/sda | grep -i "serial number"
Serial Number: 5VP8JBTD
[root@desktopserver README_ioerrors]# hdparm -I /dev/sdb | grep -i "serial number"
Serial Number: 5VP8JYNH
[root@desktopserver README_ioerrors]# hdparm -I /dev/sdc | grep -i "serial number"
Serial Number: 5VP8HM9V
[root@desktopserver README_ioerrors]# hdparm -I /dev/sdd | grep -i "serial number"
Serial Number: 5VP8HT6F
[root@desktopserver README_ioerrors]# hdparm -I /dev/sde | grep -i "serial number" <-- new SERIAL
Serial Number: 5VP8J5AL
[root@desktopserver README_ioerrors]# hdparm -I /dev/sdf | grep -i "serial number"
Serial Number: 5VP8FK4C
[root@desktopserver README_ioerrors]# hdparm -I /dev/sdg | grep -i "serial number"
Serial Number: 5VP8FNRR
[root@desktopserver README_ioerrors]# hdparm -I /dev/sdh | grep -i "serial number"
Serial Number: 5VP8DYF9
}}}
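Before swapping a drive like sde, the counters smartd keeps logging can be read straight off the disk (smartmontools assumed; SMART attribute 197 is Current_Pending_Sector and 198 is Offline_Uncorrectable):
{{{
smartctl -A /dev/sde | egrep 'Current_Pending_Sector|Offline_Uncorrectable'
}}}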
{{{
[root@desktopserver ~]# dmidecode | grep -i "product name"
Product Name: System Product Name
Product Name: P8Z68-V PRO
}}}
{{{
REM srdc_db_dnfs_layout.sql - collect dNfs layout information
define SRDCNAME='DB_DNFS_LAYOUT'
SET MARKUP HTML ON PREFORMAT ON
set TERMOUT off FEEDBACK off VERIFY off TRIMSPOOL on HEADING off
COLUMN SRDCSPOOLNAME NOPRINT NEW_VALUE SRDCSPOOLNAME
select 'SRDC_'||upper('&&SRDCNAME')||'_'||upper(instance_name)||'_'||
to_char(sysdate,'YYYYMMDD_HH24MISS') SRDCSPOOLNAME from v$instance;
set TERMOUT on MARKUP html preformat on
REM
spool &&SRDCSPOOLNAME..htm
select '+----------------------------------------------------+' from dual
union all
select '| Diagnostic-Name: '||'&&SRDCNAME' from dual
union all
select '| Timestamp: '||
to_char(systimestamp,'YYYY-MM-DD HH24:MI:SS TZH:TZM') from dual
union all
select '| Machine: '||host_name from v$instance
union all
select '| Version: '||version from v$instance
union all
select '| DBName: '||name from v$database
union all
select '| Instance: '||instance_name from v$instance
union all
select '+----------------------------------------------------+' from dual
/
set HEADING on MARKUP html preformat off
REM === -- end of standard header -- ===
set echo on
set pagesize 200
alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';
select 'THIS DB REPORT WAS GENERATED AT: ==)> ' , sysdate " " from dual;
select 'HOSTNAME ASSOCIATED WITH THIS DB INSTANCE: ==)> ' , MACHINE " " from v$session where program like '%SMON%';
SELECT 'INSTANCE NAME: ==)> ' , INSTANCE_NAME " " FROM V$INSTANCE;
SELECT 'DATABASE NAME: ==)> ' , NAME " " FROM V$DATABASE;
select * from v$dnfs_servers;
select * from v$dnfs_files;
select * from v$dnfs_channels;
select * from v$dnfs_stats;
select * from v$datafile;
select * from dba_data_files;
select * from DBA_TEMP_FILES;
select * from v$logfile;
select * from v$log;
select * from v$version;
show parameter
spool off
exit
}}}
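A minimal way to run the collector above, assuming it is saved as srdc_db_dnfs_layout.sql and run from a DBA account:
{{{
$ sqlplus / as sysdba @srdc_db_dnfs_layout.sql
}}}
The output spools to SRDC_DB_DNFS_LAYOUT_<instance>_<timestamp>.htm in the current directory, per the spool header logic in the script.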
Based on the following materials you need Ubuntu installed on the POWER machine:
https://www.ibm.com/developerworks/library/d-docker-on-power-linux-platform/
https://www.slideshare.net/cmacielatl/docker-on-power-systems
By tracking down the engineers listed in the developerWorks article I found https://github.com/linux-on-ibm-z
https://github.com/seelam?tab=repositories
https://github.com/inatatsu?tab=stars -> starred https://github.com/linux-on-ibm-z/docs
which contains example Dockerfiles here (see the build sketch below):
https://github.com/linux-on-ibm-z/dockerfile-examples
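A minimal sketch of building and running an image on the POWER box; the ppc64le/ubuntu:16.04 base image and the tag name are my assumptions, not taken from the linked repos:
{{{
# write a trivial ppc64le Dockerfile and build it
cat > Dockerfile <<'EOF'
FROM ppc64le/ubuntu:16.04
RUN apt-get update && apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*
CMD ["/bin/bash"]
EOF
docker build -t ppc64le-test .
docker run --rm -it ppc64le-test uname -m   # expect: ppc64le
}}}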
! other
https://stackoverflow.com/questions/29466954/creating-docker-container-with-hp-ux-and-ibm-aix
''does AAS of 2 really mean 2 users?'' http://www.evernote.com/shard/s48/sh/63a00349-ab70-4c34-a1fe-f2a14521f238/d9cdcdaa7072518eddd2086d8654ab29
.dot files to http://viz-js.com/ or http://graphviz.it/#/QgMbDsZH
!collectl
{{{
# collectl -sc --verbose -o T -o D
20121126 12:00:47 0 0 6 0 0 0 0 91 8 12K 13K 0 881 2 0.28 0.46 0.28
20121126 12:00:48 1 0 5 0 0 0 0 92 8 13K 12K 8 881 3 0.28 0.46 0.28
20121126 12:00:49 0 0 3 0 0 0 0 94 8 10K 11K 0 881 4 0.26 0.45 0.28
20121126 12:00:50 7 0 5 0 0 0 0 87 8 10K 16K 16 881 4 0.26 0.45 0.28
20121126 12:00:51 0 0 3 0 0 0 0 94 8 12K 11K 0 881 4 0.26 0.45 0.28
20121126 12:00:52 1 0 4 0 0 0 0 93 8 9805 12K 8 881 5 0.26 0.45 0.28
20121126 12:00:53 1 0 3 0 0 0 0 94 8 10K 11K 4 882 2 0.26 0.45 0.28
20121126 12:00:54 30 0 3 2 0 0 0 63 8 14K 11K 12 882 6 0.48 0.49 0.29
# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
#Date Time User Nice Sys Wait IRQ Soft Steal Idle CPUs Intr Ctxsw Proc RunQ Run Avg1 Avg5 Avg15
20121126 12:00:55 10 0 3 0 0 0 0 85 8 13K 12K 7 882 4 0.48 0.49 0.29
20121126 12:00:56 15 0 6 0 0 0 0 77 8 11K 14K 17 883 5 0.48 0.49 0.29
20121126 12:00:57 6 0 3 0 0 0 0 89 8 16K 13K 0 883 4 0.48 0.49 0.29
20121126 12:00:58 68 0 3 0 0 0 0 27 8 15K 12K 6 884 19 0.48 0.49 0.29
20121126 12:00:59 95 0 3 0 0 0 0 0 8 19K 13K 4 883 14 1.40 0.68 0.36
20121126 12:01:00 66 0 4 0 0 0 0 28 8 21K 15K 0 883 6 1.40 0.68 0.36
20121126 12:01:01 8 0 8 0 0 0 0 82 8 21K 15K 31 887 4 1.40 0.68 0.36
20121126 12:01:02 1 0 5 0 0 0 0 92 8 20K 16K 5 884 5 1.40 0.68 0.36
20121126 12:01:03 1 0 5 0 0 0 0 92 8 14K 12K 8 884 2 1.40 0.68 0.36
20121126 12:01:04 0 0 4 0 0 0 0 93 8 9043 11K 0 883 4 1.37 0.69 0.36
20121126 12:01:05 1 0 6 0 0 0 0 91 8 14K 13K 8 883 5 1.37 0.69 0.36
20121126 12:01:06 3 0 5 0 0 0 0 89 8 10K 11K 7 884 4 1.37 0.69 0.36
20121126 12:01:07 3 0 5 0 0 0 0 89 8 11K 18K 9 883 4 1.37 0.69 0.36
20121126 12:01:08 0 0 4 0 0 0 0 94 8 9193 12K 0 883 4 1.37 0.69 0.36
-- vmstat
0 0 710324 5110712 894716 5086624 0 0 0 0 13320 13775 2 4 94 0 0
1 0 710324 5110744 894716 5086632 0 0 84 32 12328 13159 1 7 91 1 0
1 0 710324 5104396 894716 5086632 0 0 0 110 13218 12446 2 4 94 0 0
1 0 710324 5111364 894716 5086636 0 0 0 24 12267 12162 1 5 94 0 0
3 0 710324 5102908 894720 5086648 0 0 7 325 10561 11343 7 3 90 0 0
0 0 710324 5111240 894720 5086660 0 0 0 4 12616 16690 2 6 92 0 0
0 0 710324 5111116 894720 5086680 0 0 0 12 10142 12880 1 3 95 0 0
1 0 710324 5111240 894720 5086684 0 0 0 40 9178 10923 2 5 93 0 0
3 0 710324 5103660 894720 5086684 0 0 204 721 15058 11432 18 3 76 3 0
2 0 710324 5107132 894720 5090236 0 0 97 1544 12739 11384 20 4 76 0 0
1 0 710324 5107000 894720 5090276 0 0 2 442 12595 15684 13 7 80 0 0
0 0 710324 5099512 894724 5090268 0 0 4 538 17240 14742 10 4 86 0 0
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
15 0 710324 5099544 894724 5090468 0 0 0 24 12716 11700 39 3 58 0 0
7 0 710324 5100296 894724 5090468 0 0 0 329 18990 13151 96 4 0 0 0
6 0 710324 5099916 894724 5090472 0 0 3 133 20236 13732 94 4 2 0 0
2 0 710324 5091088 894724 5090472 0 0 0 81 22992 15273 13 5 82 0 0
2 0 710324 5097552 894724 5090480 0 0 8 412 18789 17918 4 8 87 0 0
0 0 710324 5098264 894724 5090488 0 0 0 64 17661 13739 2 6 92 0 0
0 0 710324 5098172 894724 5090492 0 0 0 0 9608 11037 1 5 94 0 0
0 0 710324 5098512 894724 5090356 0 0 3 93 14169 12166 2 6 92 0 0
}}}
!gas - global active sessions http://blog.enkitec.com/enkitec_scripts/as.sql
{{{
USERNAME INST_NAME HOST_NAME SID SERIAL# VERSION STARTED SPID OPID CPID SADDR PADDR
-------------------- ------------ ------------------------- ----- -------- ---------- -------- --------------- ----- --------------- ---------------- ----------------
SYS dw desktopserver.local 1075 3041 11.2.0.3.0 20121125 16136 31 16133 00000002D92818D8 00000002DEA28A90
SQL>
TM INST SID PROG USERNAME SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME SQL_TEXT OSUSER MACHINE
----------------- ----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ----------------------------------------- ------------------------------ -------------------------
11/26/12 12:00:58 1 8 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 SELECT /*+ cputoolkit ordered oracle desktopserver.local
11/26/12 12:00:58 1 10 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 oracle desktopserver.local
11/26/12 12:00:58 1 159 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 oracle desktopserver.local
11/26/12 12:00:58 1 161 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 oracle desktopserver.local
11/26/12 12:00:58 1 313 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 oracle desktopserver.local
11/26/12 12:00:58 1 467 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 oracle desktopserver.local
11/26/12 12:00:58 1 1082 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 oracle desktopserver.local
11/26/12 12:00:58 1 773 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 oracle desktopserver.local
11/26/12 12:00:58 1 774 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 oracle desktopserver.local
11/26/12 12:00:58 1 927 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 oracle desktopserver.local
11/26/12 12:00:58 1 928 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 oracle desktopserver.local
11/26/12 12:00:58 1 468 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 oracle desktopserver.local
11/26/12 12:00:58 1 1081 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 1,008 1.075086 oracle desktopserver.local
11/26/12 12:00:58 1 1075 sqlplus@de SYS c23yqc3mfvp4a 0 2886813138 1,157 .000526 select to_char(sysdate,'MM/DD/YY HH24:MI: oracle desktopserver.local
14 rows selected.
}}}
! session wait
{{{
SYS@dw> SELECT vs.inst_id,vs.osuser,vw.event,vw.p1,vw.p2,vw.p3 ,vt.sql_text , vs.program
2 FROM gv$session_wait vw, gv$sqltext vt , gv$session vs
3 WHERE vw.event = 'PL/SQL lock timer'
4 AND vt.address=vs.sql_address
5 AND vs.inst_id = vw.inst_id
6 AND vs.sid = vw.sid;
INST_ID OSUSER EVENT P1 P2 P3 SQL_TEXT PROGRAM
---------- ------------------------------ ---------------------------------------------------------------- ---------- ---------- ---------- ---------------------------------------------------------------- ------------------------------------------------
1 oracle PL/SQL lock timer 0 0 0 end loop; end; sqlplus@desktopserver.local (TNS V1-V3)
1 oracle PL/SQL lock timer 0 0 0 end loop; end; sqlplus@desktopserver.local (TNS V1-V3)
1 oracle PL/SQL lock timer 0 0 0 and rownum <= 10000000; dbms_lock.sleep(60); sqlplus@desktopserver.local (TNS V1-V3)
1 oracle PL/SQL lock timer 0 0 0 and rownum <= 10000000; dbms_lock.sleep(60); sqlplus@desktopserver.local (TNS V1-V3)
1 oracle PL/SQL lock t
}}}
!Statspack
{{{
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- ------------------
Begin Snap: 4 26-Nov-12 11:54:22 42 1.5
End Snap: 5 26-Nov-12 12:01:20 53 1.4
Elapsed: 6.97 (mins) Av Act Sess: 0.8
DB time: 5.26 (mins) DB CPU: 2.52 (mins)
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
PL/SQL lock timer 80 4,366 54572 96.1
CPU time 138 3.0
inactive session 32 32 1001 .7
latch: cache buffers chains 1,287 6 5 .1
os thread startup 17 0 21 .0
Time Model System Stats DB/Inst: DW/dw Snaps: 4-5
-> Ordered by % of DB time desc, Statistic name
Statistic Time (s) % DB time
----------------------------------- -------------------- ---------
sql execute elapsed time 281.1 89.0
DB CPU 151.1 47.8
PL/SQL execution elapsed time 42.1 13.3
connection management call elapsed 0.9 .3
parse time elapsed 0.2 .1
hard parse elapsed time 0.0 .0
repeated bind elapsed time 0.0 .0
DB time 315.9
background elapsed time 4.7
background cpu time 2.1
^LSQL ordered by CPU DB/Inst: DW/dw Snaps: 4-5
-> Total DB CPU (s): 151
-> Captured SQL accounts for 179.8% of Total DB CPU
-> SQL reported below exceeded 1.0% of Total DB CPU
CPU CPU per Elapsd Old
Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
130.06 92 1.41 86.1 222.94 24,669,380 175009430
Module: sqlplus@desktopserver.local (TNS V1-V3)
SELECT /*+ cputoolkit ordered us
e_nl(b) use_nl(c) use_nl(d) full
(a) full(b) full(c) full(d) */ COUNT(*) FROM SYS.OBJ$ A, SYS.OBJ
$ B, SYS.OBJ$ C, SYS.OBJ$ D WHERE A.OWNER# = B.OWNER# AND B.OWNE
119.00 14 8.50 78.7 251.14 24,164,729 1927962500
Module: sqlplus@desktopserver.local (TNS V1-V3)
declare rcount number; begin -- 600/60=10 minute
s of workload for j in 1..1800 loop -- lotslios
by Tanel Poder select /*+ cputoolkit ordered
use_nl(b) use_nl(c) use_nl(d)
19.18 46 0.42 12.7 19.81 0 2248514484
Module: sqlplus@desktopserver.local (TNS V1-V3)
select to_char(start_time,'DD HH:MI:SS'), samples,
--total, --waits, --cpu, round(fpct * (tot
al/samples),2) fasl, decode(fpct,null,null,first) first,
round(spct * (total/samples),2) sasl, decode(spct,
1.60 277 0.01 1.1 1.87 0 2550496894
Module: sqlplus@desktopserver.local (TNS V1-V3)
select value ||'/'||(select instance_name from v$instance) ||'_
ora_'|| (select spid||case when traceid is not null then
'_'||traceid else null end from v$process where
addr = (select paddr from v$session
130.06 + 119.00 + 19.18 + 1.60 = 269.84 seconds
}}}
(The captured SQL sums to roughly 180% of the 151s total DB CPU because the PL/SQL block's CPU time also includes the CPU of the SQL it calls, so that work is counted twice — the same double counting the AWR report calls out for PL/SQL code.)
!collectl
{{{
20121126 14:55:13 1 0 4 0 0 0 0 93 8 11K 11K 8 894 2 0.17 0.43 0.43
20121126 14:55:14 1 0 3 0 0 0 0 93 8 14K 11K 8 894 4 0.17 0.43 0.43
# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
#Date Time User Nice Sys Wait IRQ Soft Steal Idle CPUs Intr Ctxsw Proc RunQ Run Avg1 Avg5 Avg15
20121126 14:55:15 2 0 4 0 0 0 0 92 8 16K 12K 14 895 6 0.17 0.43 0.43
20121126 14:55:16 1 0 4 0 0 0 0 93 8 15K 11K 2 894 3 0.16 0.42 0.43
20121126 14:55:17 1 0 4 0 0 0 0 93 8 22K 12K 7 895 4 0.16 0.42 0.43
20121126 14:55:18 0 0 3 0 0 0 0 94 8 10K 11K 1 894 3 0.16 0.42 0.43
20121126 14:55:19 1 0 4 0 0 0 0 93 8 10K 11K 8 894 2 0.16 0.42 0.43
20121126 14:55:20 34 0 3 0 0 0 0 61 8 14K 12K 8 894 22 0.16 0.42 0.43
20121126 14:55:21 95 0 4 0 0 0 0 0 8 22K 14K 8 894 20 1.43 0.68 0.51
20121126 14:55:22 95 0 4 0 0 0 0 0 8 27K 17K 8 894 19 1.43 0.68 0.51
20121126 14:55:23 81 0 6 0 0 0 0 11 8 27K 15K 0 894 8 1.43 0.68 0.51
20121126 14:55:24 2 0 5 0 0 0 0 91 8 17K 15K 16 894 4 1.43 0.68 0.51
20121126 14:55:25 1 0 4 0 0 0 0 93 8 16K 16K 0 894 3 1.43 0.68 0.51
20121126 14:55:26 3 0 5 0 0 0 0 91 8 15K 15K 16 894 4 1.39 0.68 0.51
20121126 14:55:27 0 0 4 0 0 0 0 94 8 19K 16K 0 894 6 1.39 0.68 0.51
20121126 14:55:28 2 0 4 0 0 0 0 91 8 10K 11K 8 894 5 1.39 0.68 0.51
20121126 14:55:29 1 0 3 0 0 0 0 94 8 11K 11K 8 894 6 1.39 0.68 0.51
20121126 14:55:30 2 0 4 0 0 0 0 93 8 10K 12K 8 894 6 1.39 0.68 0.51
20121126 14:55:31 3 0 5 0 0 0 0 90 8 13K 12K 11 894 4 1.28 0.67 0.51
20121126 14:55:32 2 0 5 0 0 0 0 91 8 12K 14K 8 894 3 1.28 0.67 0.51
20121126 14:55:33 0 0 5 0 0 0 0 93 8 16K 10K 0 894 4 1.28 0.67 0.51
20121126 14:55:34 2 0 5 0 0 0 0 92 8 11K 12K 14 895 4 1.28 0.67 0.51
20121126 14:55:35 1 0 4 0 0 0 0 93 8 12K 13K 2 894 3 1.28 0.67 0.51
# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
#Date Time User Nice Sys Wait IRQ Soft Steal Idle CPUs Intr Ctxsw Proc RunQ Run Avg1 Avg5 Avg15
20121126 14:55:36 1 0 4 0 0 0 0 92 8 12K 11K 8 894 4 1.18 0.66 0.51
20121126 14:55:37 2 0 5 0 0 0 0 91 8 15K 15K 9 895 4 1.18 0.66 0.51
20121126 14:55:38 1 0 4 0 0 0 0 93 8 11K 11K 8 897 4 1.18 0.66 0.51
20121126 14:55:39 2 0 5 0 0 0 0 91 8 11K 11K 16 897 5 1.18 0.66 0.51
20121126 14:55:40 1 0 4 0 0 0 0 93 8 12K 12K 1 897 4 1.18 0.66 0.51
20121126 14:55:41 2 0 5 0 0 0 0 91 8 11K 11K 16 897 4 1.08 0.65 0.50
20121126 14:55:42 1 0 4 0 0 0 0 94 8 16K 13K 0 897 4 1.08 0.65 0.50
20121126 14:55:43 1 0 4 0 0 0 0 93 8 11K 12K 10 898 2 1.08 0.65 0.50
20121126 14:55:44 1 0 3 0 0 0 0 94 8 13K 12K 6 899 5 1.08 0.65 0.50
20121126 14:55:45 2 0 4 0 0 0 0 92 8 19K 13K 10 898 4 1.08 0.65 0.50
20121126 14:55:46 1 0 5 0 0 0 0 92 8 12K 12K 8 896 4 1.00 0.64 0.50
20121126 14:55:47 2 0 4 0 0 0 0 92 8 20K 14K 8 896 3 1.00 0.64 0.50
20121126 14:55:48 0 0 3 0 0 0 0 94 8 12K 11K 0 896 4 1.00 0.64 0.50
20121126 14:55:49 1 0 4 0 0 0 0 92 8 11K 11K 8 896 4 1.00 0.64 0.50
20121126 14:55:50 1 0 3 0 0 0 0 93 8 9913 12K 8 896 7 1.00 0.64 0.50
20121126 14:55:51 2 0 4 0 0 0 0 92 8 11K 11K 16 896 3 0.92 0.63 0.50
20121126 14:55:52 1 0 4 0 0 0 0 94 8 10K 13K 0 896 4 0.92 0.63 0.50
20121126 14:55:53 1 0 5 5 0 0 0 86 8 12K 12K 6 897 5 0.92 0.63 0.50
20121126 14:55:54 0 0 5 0 0 0 0 92 8 12K 11K 2 896 4 0.92 0.63 0.50
}}}
!ASH
{{{
26 02:53:15 1 1.00 CPU +++++ 8
26 02:53:50 2 1.00 CPU +++++ 8
26 02:53:55 1 1.00 CPU +++++ 8
26 02:54:00 1 1.00 CPU +++++ 8
26 02:54:15 4 15.68 CPU .32 null event ++++++++++++++++++++++++++++++++++++++++8+++++++++
26 02:54:30 1 1.00 CPU +++++ 8
26 02:55:10 1 1.00 CPU +++++ 8
26 02:55:15 1 4.00 CPU ++++++++++++++++++++ 8
26 02:55:20 3 12.32 CPU 1.68 latch: cache buffers chai ++++++++++++++++++++++++++++++++++++++++8+++++++++
26 02:55:25 1 1.00 CPU +++++ 8
26 02:55:40 1 1.00 CPU +++++ 8
26 02:55:45 1 1.00 CPU +++++ 8
26 02:56:05 1 1.00 CPU +++++ 8
26 02:56:20 3 12.29 CPU .38 latch: cache buffers chai ++++++++++++++++++++++++++++++++++++++++8+++++++++
26 02:56:25 1 3.00 CPU +++++++++++++++ 8
26 02:56:35 1 1.00 db file sequential read ----- 8
26 02:57:20 1 2.00 CPU ++++++++++ 8
26 02:57:25 3 13.33 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
}}}
!snapper
{{{
# @snapper out 1 120 "select sid from v$session where status = 'ACTIVE'"
-- End of ASH snap 51, end=2012-11-26 14:55:18, seconds=1, samples_taken=1
<No active sessions captured during the sampling period>
-- End of ASH snap 52, end=2012-11-26 14:55:19, seconds=1, samples_taken=1
<No active sessions captured during the sampling period>
-- End of ASH snap 53, end=2012-11-26 14:55:20, seconds=1, samples_taken=1
---------------------------------------------------------------------------------
Active% | SQL_ID | EVENT | WAIT_CLASS
---------------------------------------------------------------------------------
1000% | 9fx889bgz15h3 | ON CPU | ON CPU
600% | 9fx889bgz15h3 | latch: cache buffers chains | Concurrency
100% | 22mxn0aggczj5 | ON CPU | ON CPU
-- End of ASH snap 54, end=2012-11-26 14:55:21, seconds=1, samples_taken=1
---------------------------------------------------------------------------------
Active% | SQL_ID | EVENT | WAIT_CLASS
---------------------------------------------------------------------------------
1600% | 9fx889bgz15h3 | ON CPU | ON CPU
100% | 0ws7ahf1d78qa | ON CPU | ON CPU
-- End of ASH snap 55, end=2012-11-26 14:55:22, seconds=1, samples_taken=1
---------------------------------------------------------------------------------
Active% | SQL_ID | EVENT | WAIT_CLASS
---------------------------------------------------------------------------------
1500% | 9fx889bgz15h3 | ON CPU | ON CPU
-- End of ASH snap 56, end=2012-11-26 14:55:23, seconds=1, samples_taken=1
<No active sessions captured during the sampling period>
-- End of ASH snap 57, end=2012-11-26 14:55:24, seconds=1, samples_taken=1
<No active sessions captured during the sampling period>
-- End of ASH snap 58, end=2012-11-26 14:55:25, seconds=1, samples_taken=1
}}}
!gas
{{{
SQL>
TM INST SID PROG USERNAME SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME SQL_TEXT OSUSER MACHINE
----------------- ----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ----------------------------------------- ------------------------------ -------------------------
11/26/12 14:55:21 1 3 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 SELECT /*+ cputoolkit ordered oracle desktopserver.local
11/26/12 14:55:21 1 1082 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 9 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 7 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 159 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 160 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 312 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 469 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 620 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 771 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 772 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 774 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 925 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 927 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 1075 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 1079 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 112 2.555075 oracle desktopserver.local
11/26/12 14:55:21 1 156 sqlplus@de SYS c23yqc3mfvp4a 0 2732833552 154 .021560 select to_char(sysdate,'MM/DD/YY HH24:MI: oracle desktopserver.local
17 rows selected.
}}}
!sql_detail
{{{
SQL>
TM ,SQL_ID ,EXECUTIONS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC
-----------------,-------------,----------,----------,----------,-------------,----------
11/26/12 14:55:23,9fx889bgz15h3, 112, 1.46, 2.8, 1.33, 274649.23
}}}
!AWR
{{{
############# AWR
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 35255 26-Nov-12 14:53:55 39 1.7
End Snap: 35257 26-Nov-12 15:01:24 38 1.8
Elapsed: 7.49 (mins)
DB Time: 5.57 (mins)
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
DB CPU 157 47.0
inactive session 37 37 1001 11.1 Other
latch: cache buffers chains 1,347 7 5 2.2 Concurrenc
db file sequential read 365 2 4 .5 User I/O
db file scattered read 253 1 4 .3 User I/O
^LTime Model Statistics DB/Inst: DW/dw Snaps: 35255-35257
-> Total time in database user-calls (DB Time): 334.5s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 294.3 88.0
DB CPU 157.1 47.0
PL/SQL execution elapsed time 48.4 14.5
parse time elapsed 1.1 .3
connection management call elapsed time 1.1 .3
hard parse elapsed time 0.8 .3
PL/SQL compilation elapsed time 0.2 .1
repeated bind elapsed time 0.0 .0
hard parse (sharing criteria) elapsed time 0.0 .0
sequence load elapsed time 0.0 .0
DB time 334.5
background elapsed time 5.7
background cpu time 2.4
Operating System Statistics DB/Inst: DW/dw Snaps: 35255-35257
-> *TIME statistic values are diffed.
All others display actual values. End Value is displayed if different
-> ordered by statistic type (CPU Use, Virtual Memory, Hardware Config), Name
Statistic Value End Value
------------------------- ---------------------- ----------------
BUSY_TIME 38,861
IDLE_TIME 328,116
IOWAIT_TIME 591
NICE_TIME 1,566
SYS_TIME 17,304
USER_TIME 19,933
LOAD 0 0
VM_IN_BYTES 8,192
VM_OUT_BYTES 0
PHYSICAL_MEMORY_BYTES 16,758,501,376
NUM_CPUS 8
NUM_CPU_CORES 4
NUM_CPU_SOCKETS 1
GLOBAL_RECEIVE_SIZE_MAX 4,194,304
GLOBAL_SEND_SIZE_MAX 1,048,576
TCP_RECEIVE_SIZE_DEFAULT 87,380
TCP_RECEIVE_SIZE_MAX 4,194,304
TCP_RECEIVE_SIZE_MIN 4,096
TCP_SEND_SIZE_DEFAULT 16,384
TCP_SEND_SIZE_MAX 4,194,304
TCP_SEND_SIZE_MIN 4,096
^LSQL ordered by CPU Time DB/Inst: DW/dw Snaps: 35255-35257
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - CPU Time as a percentage of Total DB CPU
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for 95.9% of Total CPU Time (s): 157
-> Captured PL/SQL account for 87.1% of Total CPU Time (s): 157
CPU CPU per Elapsed
Time (s) Executions Exec (s) %Total Time (s) %CPU %IO SQL Id
---------- ------------ ---------- ------ ---------- ------ ------ -------------
137.0 96 1.43 87.2 228.9 59.9 .0 9fx889bgz15h3
Module: sqlplus@desktopserver.local (TNS V1-V3)
SELECT /*+ cputoolkit ordered use_nl(b) use_nl(c
) use_nl(d) full(a) full(b) full(c) full(d) */ C
OUNT(*) FROM SYS.OBJ$ A, SYS.OBJ$ B, SYS.OBJ$ C, SYS.OBJ$ D WHERE A.OWNER# = B.O
WNER# AND B.OWNER# = C.OWNER# AND C.OWNER# = D.OWNER# AND ROWNUM <= 10000000
133.1 16 8.32 84.7 269.5 49.4 .0 4rx5ca5s6w7g7
Module: sqlplus@desktopserver.local (TNS V1-V3)
declare rcount number; begin -- 600/60=10 minutes of workload
for j in 1..1800 loop -- lotslios by Tanel Poder select /
*+ cputoolkit ordered use_nl(b) use_nl(c) use_nl
(d) full(a) full(b) full(c) full(d) */
4.4 160 0.03 2.8 4.6 95.3 .0 c23yqc3mfvp4a
Module: sqlplus@desktopserver.local (TNS V1-V3)
select to_char(sysdate,'MM/DD/YY HH24:MI:SS') tm, a.inst_id inst, sid, substr(pr
ogram,1,19) prog, a.username, b.sql_id, child_number child, plan_hash_value, exe
cutions execs, (elapsed_time/decode(nvl(executions,0),0,1,executions))/1000000 a
vg_etime, sql_text, a.osuser, a.machine from gv$session a, gv$sql b where status
3.7 309 0.01 2.4 4.2 88.2 .0 74vzwyaatnts3
Module: sqlplus@desktopserver.local (TNS V1-V3)
select value ||'/'||(select instance_name from v$instance) ||'_ora_'||
(select spid||case when traceid is not null then '_'||traceid else null end
from v$process where addr = (select paddr from v$session
where sid = (select sid from v$mystat
1.5 59 0.02 0.9 1.6 92.8 .0 22mxn0aggczj5
Module: sqlplus@desktopserver.local (TNS V1-V3)
select to_char(start_time,'DD HH:MI:SS'), samples, --total,
--waits, --cpu, round(fpct * (total/samples),2) fasl, deco
}}}
!Statspack
{{{
############# Statspack
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- ------------------
Begin Snap: 33 26-Nov-12 14:54:02 39 1.7
End Snap: 34 26-Nov-12 15:01:29 38 1.8
Elapsed: 7.45 (mins) Av Act Sess: 0.7
DB time: 5.56 (mins) DB CPU: 2.61 (mins)
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
PL/SQL lock timer 336 5,578 16601 96.5
CPU time 153 2.6
inactive session 37 37 1001 .6
latch: cache buffers chains 1,347 7 5 .1
db file sequential read 401 2 5 .0
Time Model System Stats DB/Inst: DW/dw Snaps: 33-34
-> Ordered by % of DB time desc, Statistic name
Statistic Time (s) % DB time
----------------------------------- -------------------- ---------
sql execute elapsed time 293.6 88.0
DB CPU 156.5 46.9
PL/SQL execution elapsed time 48.4 14.5
parse time elapsed 1.1 .3
connection management call elapsed 1.1 .3
hard parse elapsed time 0.8 .3
PL/SQL compilation elapsed time 0.2 .1
repeated bind elapsed time 0.0 .0
hard parse (sharing criteria) elaps 0.0 .0
sequence load elapsed time 0.0 .0
DB time 333.7
background elapsed time 5.7
background cpu time 2.4
OS Statistics DB/Inst: DW/dw Snaps: 33-34
-> ordered by statistic type (CPU use, Virtual Memory, Hardware Config), Name
Statistic Total
------------------------- ----------------------
BUSY_TIME 38,694
IDLE_TIME 325,973
IOWAIT_TIME 584
NICE_TIME 1,565
SYS_TIME 17,206
USER_TIME 19,866
VM_IN_BYTES 8,192
VM_OUT_BYTES 0
PHYSICAL_MEMORY_BYTES 16,758,501,376
NUM_CPUS 8
NUM_CPU_CORES 4
NUM_CPU_SOCKETS 1
GLOBAL_RECEIVE_SIZE_MAX 4,194,304
GLOBAL_SEND_SIZE_MAX 1,048,576
TCP_RECEIVE_SIZE_DEFAULT 87,380
TCP_RECEIVE_SIZE_MAX 4,194,304
TCP_RECEIVE_SIZE_MIN 4,096
TCP_SEND_SIZE_DEFAULT 16,384
TCP_SEND_SIZE_MAX 4,194,304
TCP_SEND_SIZE_MIN 4,096
^LSQL ordered by CPU DB/Inst: DW/dw Snaps: 33-34
-> Total DB CPU (s): 156
-> Captured SQL accounts for 181.6% of Total DB CPU
-> SQL reported below exceeded 1.0% of Total DB CPU
CPU CPU per Elapsd Old
Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
137.01 96 1.43 87.5 228.89 25,327,553 175009430
Module: sqlplus@desktopserver.local (TNS V1-V3)
SELECT /*+ cputoolkit ordered us
e_nl(b) use_nl(c) use_nl(d) full
(a) full(b) full(c) full(d) */ COUNT(*) FROM SYS.OBJ$ A, SYS.OBJ
$ B, SYS.OBJ$ C, SYS.OBJ$ D WHERE A.OWNER# = B.OWNER# AND B.OWNE
133.07 16 8.32 85.0 269.49 26,858,262 1927962500
Module: sqlplus@desktopserver.local (TNS V1-V3)
declare rcount number; begin -- 600/60=10 minute
s of workload for j in 1..1800 loop -- lotslios
by Tanel Poder select /*+ cputoolkit ordered
use_nl(b) use_nl(c) use_nl(d)
4.38 160 0.03 2.8 4.60 0 2005132824
Module: sqlplus@desktopserver.local (TNS V1-V3)
select to_char(sysdate,'MM/DD/YY HH24:MI:SS') tm, a.inst_id inst
, sid, substr(program,1,19) prog, a.username, b.sql_id, child_nu
mber child, plan_hash_value, executions execs, (elapsed_time/dec
ode(nvl(executions,0),0,1,executions))/1000000 avg_etime, sql_te
3.70 307 0.01 2.4 4.19 0 2550496894
Module: sqlplus@desktopserver.local (TNS V1-V3)
select value ||'/'||(select instance_name from v$instance) ||'_
ora_'|| (select spid||case when traceid is not null then
'_'||traceid else null end from v$process where
addr = (select paddr from v$session
2.16 107 0.02 1.4 2.34 350 2248514484
Module: sqlplus@desktopserver.local (TNS V1-V3)
select to_char(start_time,'DD HH:MI:SS'), samples,
--total, --waits, --cpu, round(fpct * (tot
al/samples),2) fasl, decode(fpct,null,null,first) first,
round(spct * (total/samples),2) sasl, decode(spct,
}}}
!collectl
{{{
# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
#Date Time User Nice Sys Wait IRQ Soft Steal Idle CPUs Intr Ctxsw Proc RunQ Run Avg1 Avg5 Avg15
20121126 15:48:06 1 0 5 0 0 0 0 92 8 13K 11K 8 874 6 0.31 0.44 0.39
20121126 15:48:07 1 0 6 0 0 0 0 92 8 13K 13K 0 874 4 0.28 0.43 0.39
20121126 15:48:08 2 0 6 0 0 0 0 90 8 12K 12K 16 874 3 0.28 0.43 0.39
20121126 15:48:09 2 0 6 0 0 0 0 90 8 12K 12K 15 881 5 0.28 0.43 0.39
20121126 15:48:10 3 0 7 5 0 0 0 82 8 16K 13K 16 876 4 0.28 0.43 0.39
20121126 15:48:11 4 0 5 7 0 0 0 82 8 13K 13K 0 876 4 0.28 0.43 0.39
20121126 15:48:12 4 0 7 2 0 0 0 85 8 15K 14K 8 876 7 0.42 0.45 0.40
20121126 15:48:13 1 0 5 0 0 0 0 92 8 15K 12K 8 876 4 0.42 0.45 0.40
20121126 15:48:14 1 0 4 0 0 0 0 93 8 12K 12K 8 876 4 0.42 0.45 0.40
20121126 15:48:15 4 0 4 1 0 0 0 88 8 10K 13K 0 876 5 0.42 0.45 0.40
20121126 15:48:16 1 0 5 0 0 0 0 92 8 10K 12K 8 876 5 0.42 0.45 0.40
20121126 15:48:17 0 0 5 0 0 0 0 92 8 10K 13K 0 876 5 0.38 0.45 0.39
20121126 15:48:18 27 0 17 0 0 0 0 55 8 15K 16K 1219 908 22 0.38 0.45 0.39
20121126 15:48:19 94 0 5 0 0 0 0 0 8 24K 12K 7 908 20 0.38 0.45 0.39
20121126 15:48:20 95 0 3 0 0 0 0 0 8 26K 15K 0 908 19 0.38 0.45 0.39
20121126 15:48:21 95 0 3 0 0 0 0 0 8 21K 14K 8 908 14 0.38 0.45 0.39
20121126 15:48:22 14 0 6 0 0 0 0 78 8 18K 15K 0 908 5 0.67 0.51 0.41
20121126 15:48:23 3 0 7 0 0 0 0 88 8 20K 14K 43 910 4 0.67 0.51 0.41
20121126 15:48:24 0 0 4 0 0 0 0 93 8 19K 14K 0 910 4 0.67 0.51 0.41
20121126 15:48:25 1 0 4 0 0 0 0 92 8 20K 16K 8 910 4 0.67 0.51 0.41
20121126 15:48:26 2 0 5 0 0 0 0 91 8 16K 15K 18 917 4 0.67 0.51 0.41
}}}
!ASH
{{{
26 03:48:15 3 13.39 CPU .27 null event ++++++++++++++++++++++++++++++++++++++++8+++++++++
26 03:48:20 1 13.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
26 03:48:45 1 1.00 CPU +++++ 8
26 03:49:20 4 11.27 CPU .98 latch: cache buffers chai ++++++++++++++++++++++++++++++++++++++++8+++++++++
26 03:49:25 4 .50 CPU +++ 8
26 03:50:20 2 10.50 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
26 03:50:25 2 7.00 CPU +++++++++++++++++++++++++++++++++++ 8
26 03:51:10 1 1.00 null event ----- 8
26 03:52:00 1 1.00 CPU +++++ 8
}}}
!snapper
{{{
-- End of ASH snap 3, end=2012-11-26 15:48:17, seconds=1, samples_taken=1
<No active sessions captured during the sampling period>
-- End of ASH snap 4, end=2012-11-26 15:48:18, seconds=1, samples_taken=1
---------------------------------------------------------------------------------
Active% | SQL_ID | EVENT | WAIT_CLASS
---------------------------------------------------------------------------------
1600% | 9fx889bgz15h3 | ON CPU | ON CPU
-- End of ASH snap 5, end=2012-11-26 15:48:19, seconds=1, samples_taken=1
---------------------------------------------------------------------------------
Active% | SQL_ID | EVENT | WAIT_CLASS
---------------------------------------------------------------------------------
1600% | 9fx889bgz15h3 | ON CPU | ON CPU
-- End of ASH snap 6, end=2012-11-26 15:48:20, seconds=1, samples_taken=1
---------------------------------------------------------------------------------
Active% | SQL_ID | EVENT | WAIT_CLASS
---------------------------------------------------------------------------------
1600% | 9fx889bgz15h3 | ON CPU | ON CPU
-- End of ASH snap 7, end=2012-11-26 15:48:21, seconds=1, samples_taken=1
<No active sessions captured during the sampling period>
-- End of ASH snap 8, end=2012-11-26 15:48:22, seconds=1, samples_taken=1
<No active sessions captured during the sampling period>
-- End of ASH snap 9, end=2012-11-26 15:48:23, seconds=1, samples_taken=1
}}}
!gas
{{{
SQL>
TM INST SID PROG USERNAME SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME SQL_TEXT OSUSER MACHINE
----------------- ----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ----------------------------------------- ------------------------------ -------------------------
11/26/12 15:48:18 1 7 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 SELECT /*+ cputoolkit ordered oracle desktopserver.local
11/26/12 15:48:18 1 1081 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 161 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 162 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 313 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 314 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 469 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 470 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 620 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 621 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 774 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 775 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 926 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 927 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 8 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 1080 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 16 .000187 oracle desktopserver.local
11/26/12 15:48:18 1 928 sqlplus@de SYS c23yqc3mfvp4a 0 2886813138 54 .000540 select to_char(sysdate,'MM/DD/YY HH24:MI: oracle desktopserver.local
17 rows selected.
}}}
!sql_detail
{{{
SQL>
TM ,SQL_ID ,EXECUTIONS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC
-----------------,-------------,----------,----------,----------,-------------,----------
11/26/12 15:48:22,9fx889bgz15h3, 16, 1.54, 3.17, 1.63, 286891.13
}}}
!AWR
{{{
############# AWR
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 35265 26-Nov-12 15:46:53 38 1.6
End Snap: 35266 26-Nov-12 15:52:54 41 1.7
Elapsed: 6.02 (mins)
DB Time: 2.66 (mins)
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
DB CPU 89 55.7
db file sequential read 1,797 5 3 3.2 User I/O
latch: cache buffers chains 868 4 5 2.7 Concurrenc
direct path read 343 3 8 1.8 User I/O
db file scattered read 204 1 3 .3 User I/O
^LTime Model Statistics DB/Inst: DW/dw Snaps: 35265-35266
-> Total time in database user-calls (DB Time): 159.3s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 155.5 97.6
DB CPU 88.8 55.7
parse time elapsed 4.8 3.0
hard parse elapsed time 4.5 2.8
connection management call elapsed time 0.7 .4
PL/SQL execution elapsed time 0.5 .3
PL/SQL compilation elapsed time 0.5 .3
hard parse (sharing criteria) elapsed time 0.1 .0
repeated bind elapsed time 0.0 .0
sequence load elapsed time 0.0 .0
hard parse (bind mismatch) elapsed time 0.0 .0
DB time 159.3
background elapsed time 3.2
background cpu time 1.5
Operating System Statistics DB/Inst: DW/dw Snaps: 35265-35266
-> *TIME statistic values are diffed.
All others display actual values. End Value is displayed if different
-> ordered by statistic type (CPU Use, Virtual Memory, Hardware Config), Name
Statistic Value End Value
------------------------- ---------------------- ----------------
BUSY_TIME 28,609
IDLE_TIME 266,549
IOWAIT_TIME 893
NICE_TIME 1,219
SYS_TIME 14,831
USER_TIME 12,511
LOAD 1 0
VM_IN_BYTES 0
VM_OUT_BYTES 0
PHYSICAL_MEMORY_BYTES 16,758,501,376
NUM_CPUS 8
NUM_CPU_CORES 4
NUM_CPU_SOCKETS 1
GLOBAL_RECEIVE_SIZE_MAX 4,194,304
GLOBAL_SEND_SIZE_MAX 1,048,576
TCP_RECEIVE_SIZE_DEFAULT 87,380
TCP_RECEIVE_SIZE_MAX 4,194,304
TCP_RECEIVE_SIZE_MIN 4,096
TCP_SEND_SIZE_DEFAULT 16,384
TCP_SEND_SIZE_MAX 4,194,304
TCP_SEND_SIZE_MIN 4,096
^LSQL ordered by CPU Time DB/Inst: DW/dw Snaps: 35265-35266
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - CPU Time as a percentage of Total DB CPU
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for 89.8% of Total CPU Time (s): 89
-> Captured PL/SQL account for 84.5% of Total CPU Time (s): 89
CPU CPU per Elapsed
Time (s) Executions Exec (s) %Total Time (s) %CPU %IO SQL Id
---------- ------------ ---------- ------ ---------- ------ ------ -------------
70.9 48 1.48 79.8 134.7 52.6 .0 9fx889bgz15h3
Module: sqlplus@desktopserver.local (TNS V1-V3)
SELECT /*+ cputoolkit ordered use_nl(b) use_nl(c
) use_nl(d) full(a) full(b) full(c) full(d) */ C
OUNT(*) FROM SYS.OBJ$ A, SYS.OBJ$ B, SYS.OBJ$ C, SYS.OBJ$ D WHERE A.OWNER# = B.O
WNER# AND B.OWNER# = C.OWNER# AND C.OWNER# = D.OWNER# AND ROWNUM <= 10000000
69.8 16 4.36 78.6 131.0 53.3 .0 16v59hy8gjh1x
Module: sqlplus@desktopserver.local (TNS V1-V3)
declare rcount number; begin -- 600/60=10 minutes of workload
for j in 1..3 loop -- lotslios by Tanel Poder select /*+
cputoolkit ordered use_nl(b) use_nl(c) use_nl(d)
full(a) full(b) full(c) full(d) */
2.3 1 2.27 2.6 5.1 44.7 55.2 gh9pd08vhptgr
Module: Oracle Enterprise Manager.Metric Engine
SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD'
) AS curr_timestamp, COUNT(username) AS failed_count FROM sys.dba_audit_session
WHERE returncode != 0 AND timestamp >= current_timestamp - TO_DSINTERVAL('0 0:3
0:00')
1.9 303 0.01 2.1 2.2 85.9 .0 74vzwyaatnts3
Module: sqlplus@desktopserver.local (TNS V1-V3)
select value ||'/'||(select instance_name from v$instance) ||'_ora_'||
(select spid||case when traceid is not null then '_'||traceid else null end
from v$process where addr = (select paddr from v$session
where sid = (select sid from v$mystat
}}}
!Statspack
{{{
############# Statspack
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- ------------------
Begin Snap: 52 26-Nov-12 15:47:09 42 1.6
End Snap: 53 26-Nov-12 15:53:00 41 1.7
Elapsed: 5.85 (mins) Av Act Sess: 0.4
DB time: 2.60 (mins) DB CPU: 1.45 (mins)
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
PL/SQL lock timer 288 3,121 10835 97.1
CPU time 83 2.6
latch: cache buffers chains 868 4 5 .1
db file sequential read 1,259 4 3 .1
direct path read 343 3 8 .1
Time Model System Stats DB/Inst: DW/dw Snaps: 52-53
-> Ordered by % of DB time desc, Statistic name
Statistic Time (s) % DB time
----------------------------------- -------------------- ---------
sql execute elapsed time 152.1 97.6
DB CPU 87.0 55.8
parse time elapsed 2.7 1.7
hard parse elapsed time 2.5 1.6
connection management call elapsed 0.7 .4
PL/SQL execution elapsed time 0.5 .3
PL/SQL compilation elapsed time 0.3 .2
hard parse (sharing criteria) elaps 0.0 .0
repeated bind elapsed time 0.0 .0
hard parse (bind mismatch) elapsed 0.0 .0
DB time 155.9
background elapsed time 3.0
background cpu time 1.5
OS Statistics DB/Inst: DW/dw Snaps: 52-53
-> ordered by statistic type (CPU use, Virtual Memory, Hardware Config), Name
Statistic Total
------------------------- ----------------------
BUSY_TIME 27,660
IDLE_TIME 259,389
IOWAIT_TIME 711
NICE_TIME 1,192
SYS_TIME 14,182
USER_TIME 12,239
VM_IN_BYTES 0
VM_OUT_BYTES 0
PHYSICAL_MEMORY_BYTES 16,758,501,376
NUM_CPUS 8
NUM_CPU_CORES 4
NUM_CPU_SOCKETS 1
GLOBAL_RECEIVE_SIZE_MAX 4,194,304
GLOBAL_SEND_SIZE_MAX 1,048,576
TCP_RECEIVE_SIZE_DEFAULT 87,380
TCP_RECEIVE_SIZE_MAX 4,194,304
TCP_RECEIVE_SIZE_MIN 4,096
TCP_SEND_SIZE_DEFAULT 16,384
TCP_SEND_SIZE_MAX 4,194,304
TCP_SEND_SIZE_MIN 4,096
^LSQL ordered by CPU DB/Inst: DW/dw Snaps: 52-53
-> Total DB CPU (s): 87
-> Captured SQL accounts for 170.8% of Total DB CPU
-> SQL reported below exceeded 1.0% of Total DB CPU
CPU CPU per Elapsd Old
Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
70.90 48 1.48 81.5 134.75 13,681,640 175009430
Module: sqlplus@desktopserver.local (TNS V1-V3)
SELECT /*+ cputoolkit ordered us
e_nl(b) use_nl(c) use_nl(d) full
(a) full(b) full(c) full(d) */ COUNT(*) FROM SYS.OBJ$ A, SYS.OBJ
$ B, SYS.OBJ$ C, SYS.OBJ$ D WHERE A.OWNER# = B.OWNER# AND B.OWNE
69.81 16 4.36 80.3 131.00 13,958,076 425003924
Module: sqlplus@desktopserver.local (TNS V1-V3)
declare rcount number; begin -- 600/60=10 minute
s of workload for j in 1..3 loop -- lotslios by
Tanel Poder select /*+ cputoolkit ordered
use_nl(b) use_nl(c) use_nl(d)
2.27 1 2.27 2.6 5.07 59,804 3029524923
Module: Oracle Enterprise Manager.Metric Engine
SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD
HH24:MI:SS TZD') AS curr_timestamp, COUNT(username) AS failed_c
ount FROM sys.dba_audit_session WHERE returncode != 0 AND times
tamp >= current_timestamp - TO_DSINTERVAL('0 0:30:00')
1.97 322 0.01 2.3 2.33 243 2550496894
Module: sqlplus@desktopserver.local (TNS V1-V3)
select value ||'/'||(select instance_name from v$instance) ||'_
ora_'|| (select spid||case when traceid is not null then
'_'||traceid else null end from v$process where
addr = (select paddr from v$session
}}}
<<showtoc>>
! previous executed SQL
{{{
set pages 0
select * from table( dbms_xplan.display_cursor(null, null, 'ADVANCED +ALLSTATS LAST +MEMSTATS LAST') );
select * from table( dbms_xplan.display_cursor(null, null, 'ADVANCED +ALLSTATS LAST +MEMSTATS LAST +PREDICATE +PEEKED_BINDS') );
SELECT * FROM table(dbms_xplan.display_awr('3755qf31bhbnm','830184137',format=>'advanced +predicate +PEEKED_BINDS'));
}}}
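One gotcha with the display_cursor(null, null) calls above: they read your session's previous cursor, so run them immediately after the statement of interest and with serveroutput off, otherwise the implicit dbms_output call becomes the previous cursor. A minimal sketch:
{{{
set serveroutput off
select /*+ gather_plan_statistics */ count(*) from dual;
select * from table( dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST') );
}}}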
! sql_id
''dplan''
{{{
select * from table( dbms_xplan.display_cursor('&sql_id', null, 'ADVANCED +ALLSTATS LAST +MEMSTATS LAST') );
}}}
''dplan_awr''
{{{
select * from table(DBMS_XPLAN.DISPLAY_AWR('&sql_id',NULL,NULL,'ADVANCED +ALLSTATS LAST +MEMSTATS LAST'));
}}}
! get the SQL_ID using
{{{
-- run the statement itself first (EXPLAIN PLAN alone does not load a cursor),
-- then display_cursor with no arguments reports the previous cursor and
-- prints its SQL_ID in the header
<<SQL statement goes here>>
select * from table(dbms_xplan.display_cursor);
}}}
! explain plan for
{{{
-- works with binds (note: EXPLAIN PLAN does not peek bind values, so the plan can differ from the runtime plan)
explain plan for
select /* two */ distinct dbid from dba_hist_snapshot where dbid = :B1;
set lines 600 pages 0
select * from table( dbms_xplan.display);
}}}
{{{
explain plan for
select /* two */ distinct dbid from dba_hist_snapshot;
set lines 600 pages 0
select * from table( dbms_xplan.display);
}}}
! references
https://www.dremio.com/accelerating-time-to-insight-with-dremio-snowflake-arp-connector/
https://www.dremio.com/tutorials/how-to-create-an-arp-connector/
https://github.com/narendrans/dremio-snowflake
https://www.dremio.com/webinars/columnar-roadmap-apache-parquet-and-arrow/
https://docs.dremio.com/advanced-administration/parquet-files.html?h=parque
Introduction to Dremio—the data lake engine https://www.youtube.com/watch?v=YMPwYnvGU4k
https://druid.apache.org/docs/latest/operations/basic-cluster-tuning.html
.
http://www.scsifaq.org/RMiller_Tools/dt.html
http://www.scsifaq.org/RMiller_Tools/scu.html
http://arstechnica.com/civis/viewtopic.php?f=16&t=1116279&start=160
https://nenadnoveljic.com/blog/wait-events-dtrace/
https://blogs.sap.com/2015/12/10/oracle-understanding-the-oracle-code-instrumentation-wait-interface-a-deep-dive-into-what-is-really-measured/
https://blogs.oracle.com/optimizer/entry/dynamic_sampling_and_its_impact_on_the_optimizer
Plan Stability - Apress Book (bind peek, ACS, dynamic sampling, cardinality feedback) - https://www.evernote.com/shard/s48/sh/013cd51e-e484-49ac-911b-e01bdd54ac06/ce780dd4ca02d3d0b72b493acf8c33fd
https://www.youtube.com/watch?v=mdfbm35DlAs&list=PLqt2rd0eew1arEMzMM_tCZzF0JwgANaFt&index=21
demo https://www.youtube.com/watch?v=lqm88SBeI8M&list=PLqt2rd0eew1arEMzMM_tCZzF0JwgANaFt
5min https://www.youtube.com/watch?v=0ihCziAJ07U
https://www.dynatrace.com/
https://github.com/carlos-sierra/eadam
dataflow-elasticsearch-indexer https://github.com/GoogleCloudPlatform/professional-services/tree/master/examples/dataflow-elasticsearch-indexer
https://github.com/bizzabo/elasticsearch_to_bigquery_data_pipeline
https://github.com/bizzabo/elasticsearch-reindexing
https://medium.com/google-cloud/using-cloud-dataflow-to-index-documents-into-elasticsearch-b3a31e999dfc
https://www.google.com/search?q=elasticsearch+cloud+dataflow+to+bigquery&sxsrf=ALeKk02TeUUcfZ2ydVuIzyw0V-GYYUgzMA:1592401046237&ei=lhzqXtD5DfSVwbkP5Py90Ao&start=10&sa=N&ved=2ahUKEwjQnM2V_IjqAhX0SjABHWR-D6oQ8tMDegQIDBAt&biw=1571&bih=886
https://stackoverflow.com/questions/49527703/elastic-search-to-google-big-query
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-google_bigquery.html
https://www.google.com/search?q=elasticsearch+to+bigquery&oq=elasticsearch+to+b&aqs=chrome.1.69i57j35i39j0l5j69i60.5662j0j7&sourceid=chrome&ie=UTF-8
.
..
...
{{{
desc emct_spec_rate_lib
SPEC_RATE_ID NOT NULL NUMBER(38)
DATA_SOURCE_ID NOT NULL NUMBER(38)
BENCHMARK NOT NULL VARCHAR2(64)
HARDWARE_VENDOR NOT NULL VARCHAR2(128)
SYSTEM_CONFIG NOT NULL VARCHAR2(256)
CORES NOT NULL NUMBER
CHIPS NOT NULL NUMBER
CORES_PER_CHIP NOT NULL NUMBER
THREADS_PER_CORE NOT NULL NUMBER
PROCESSOR NOT NULL VARCHAR2(256)
PROCESSOR_MHZ NOT NULL NUMBER
PROCESSOR_CHARACTERISTICS NOT NULL VARCHAR2(256)
FIRST_LEVEL_CACHE NOT NULL VARCHAR2(256)
SECOND_LEVEL_CACHE NOT NULL VARCHAR2(256)
THIRD_LEVEL_CACHE NOT NULL VARCHAR2(256)
MEMORY NOT NULL VARCHAR2(256)
RESULTS NOT NULL NUMBER
BASELINE NOT NULL NUMBER
PUBLISHED NOT NULL DATE
NORM_HARDWARE_VENDOR VARCHAR2(64)
NORM_SYSTEM VARCHAR2(256)
NORM_CPU_VENDOR VARCHAR2(64)
NORM_CPU_FAMILY VARCHAR2(256)
NORM_CPU_MODEL VARCHAR2(256)
NORM_CPU_THREADS NUMBER
NORM_1ST_CACHE_MB NUMBER
NORM_2ND_CACHE_MB NUMBER
NORM_3RD_CACHE_MB NUMBER
NORM_MEMORY_GB NUMBER
SPEC_RATE NUMBER
SPEC_RATE_ORIGIN_FLAG NUMBER
oracle@emgc12c.local:/home/oracle:emrep12c
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 3401.099
cache size : 6144 KB
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm constant_tsc up rep_good pni monitor ssse3 lahf_lm
bogomips : 6802.19
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
[root@desktopserver reco]# cat /proc/cpuinfo
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 7
initial apicid : 7
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 6821.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
0)
select * from dba_objects
where object_name = 'MGMT_ECM_HW';
-- the spec rate table
select * from EMCT_SPEC_RATE_LIB;
select distinct benchmark from EMCT_SPEC_RATE_LIB;
-- the list of targets to get the target GUID
select * from gc$target g where g.target_type ='host'
-- desktopserver 0C474BF51B89823AFE1040B6ADC7147C 121
-- emgc12c 0EE088EC2D56D4DF9A747BBE24DFB7D8 10.48
select * FROM MGMT$HW_CPU_DETAILS t1
select * FROM MGMT$OS_HW_SUMMARY t2
-- where you can find the system config info
select * from MGMT_ECM_HW
######################################################################################################################################################
SELECT DISTINCT t1.vendor_name,
t1.impl,
t1.freq_in_mhz,
t2.cpu_count num_cores, -- modify this to use cpu_count in t2 for now, need to confirm with host team if this is right.
t2.vendor_name system_vendor,
t2.system_config system_config,
t2.mem/1024 memory_mb,
NULL cpu_threads
FROM MGMT$HW_CPU_DETAILS t1,
MGMT$OS_HW_SUMMARY t2
WHERE t1.target_guid = t2.target_guid
AND t1.target_guid in ('0C474BF51B89823AFE1040B6ADC7147C', '0EE088EC2D56D4DF9A747BBE24DFB7D8');
1)
SELECT DISTINCT t1.vendor_name,
t1.impl,
t1.freq_in_mhz,
t2.cpu_count num_cores, -- modify this to use cpu_count in t2 for now, need to confirm with host team if this is right.
t2.vendor_name system_vendor,
t2.system_config system_config,
t2.mem/1024 memory_mb,
NULL cpu_threads
FROM MGMT$HW_CPU_DETAILS t1,
MGMT$OS_HW_SUMMARY t2
WHERE t1.target_guid = t2.target_guid
AND t1.target_guid in ('0C474BF51B89823AFE1040B6ADC7147C', '0EE088EC2D56D4DF9A747BBE24DFB7D8');
Vendor IMPL FREQ NumCores System Vendor System Config Memory CPU threads
GenuineIntel Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz 3401 1 Intel Based Hardware VirtualBox 1.2 3.87109375
GenuineIntel Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz 1600 4 Intel Based Hardware System Product Name System Version 15.607421875
Here's the reference:
SELECT spec_rate
INTO l_spec_rate
FROM emct_spec_rate_lib
WHERE UPPER(p_cpu_vendor) LIKE '%'||UPPER(norm_cpu_vendor)||'%'
AND UPPER(p_cpu_name) LIKE '%'||UPPER(norm_cpu_family)||'%'
AND UPPER(p_cpu_name) LIKE '%'||UPPER(norm_cpu_model)||'%'
AND p_cpu_mhz = processor_mhz
AND p_cpu_cores = cores
AND p_cpu_chips = chips
AND p_norm_cpu_threads = norm_cpu_threads
AND p_norm_1st_cache_mb = norm_1st_cache_mb
AND p_norm_memory_gb = norm_memory_gb
AND UPPER(p_system_vendor) LIKE '%'||UPPER(norm_hardware_vendor)||'%';
p_spec_rate := l_spec_rate;
2)
-- define the variables in SQLPlus
alter session set current_schema=SYSMAN;
variable B1 varchar2(32)
variable B2 varchar2(32)
variable B3 varchar2(32)
variable B4 varchar2(32)
variable B5 varchar2(32)
variable B6 varchar2(32)
variable B7 varchar2(32)
variable B8 varchar2(32)
variable B9 varchar2(32)
-- Set the bind values
begin
:B9 := 'GenuineIntel';
:B8 := 'Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz';
:B7 := '1600';
:B6 := '4';
:B2 := '15.607421875';
:B1 := 'Intel Based Hardware';
end;
/
SELECT SPEC_RATE FROM EMCT_SPEC_RATE_LIB
WHERE UPPER(:B9 ) LIKE '%'||UPPER(NORM_CPU_VENDOR)||'%'
AND UPPER(:B8 ) LIKE '%'||UPPER(NORM_CPU_FAMILY)||'%'
AND UPPER(:B8 ) LIKE '%'||UPPER(NORM_CPU_MODEL)||'%'
AND :B7 = PROCESSOR_MHZ
AND :B6 = CORES
AND :B5 = CHIPS
AND :B4 = NORM_CPU_THREADS
AND :B3 = NORM_1ST_CACHE_MB
AND :B2 = NORM_MEMORY_GB
AND UPPER(:B1 ) LIKE '%'||UPPER(NORM_HARDWARE_VENDOR)||'%';
3)
SELECT target_type
FROM GC$TARGET
WHERE target_guid in ('0C474BF51B89823AFE1040B6ADC7147C', '0EE088EC2D56D4DF9A747BBE24DFB7D8');
-- this SQL
SELECT count(*) FROM EMCT_SPEC_RATE_LIB
WHERE UPPER('GenuineIntel') LIKE '%'||UPPER(NORM_CPU_VENDOR)||'%'
-- is the same as this, CRAZY?
SELECT count(*) FROM EMCT_SPEC_RATE_LIB
WHERE UPPER(NORM_CPU_VENDOR) LIKE '%INTEL%'
##########################################################################################################################################
## THE DESKTOPSERVER
-- select match, result: 0
SELECT SPEC_RATE FROM EMCT_SPEC_RATE_LIB
WHERE UPPER(:B9 ) LIKE '%'||UPPER(NORM_CPU_VENDOR)||'%'
AND UPPER(:B8 ) LIKE '%'||UPPER(NORM_CPU_FAMILY)||'%'
AND UPPER(:B8 ) LIKE '%'||UPPER(NORM_CPU_MODEL)||'%'
AND :B7 = PROCESSOR_MHZ
AND :B6 = CORES
AND :B5 = CHIPS
AND :B4 = NORM_CPU_THREADS
AND :B3 = NORM_1ST_CACHE_MB
AND :B2 = NORM_MEMORY_GB
AND UPPER(:B1 ) LIKE '%'||UPPER(NORM_HARDWARE_VENDOR)||'%'
Vendor IMPL FREQ NumCores System Vendor System Config Memory CPU threads
GenuineIntel Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz 3401 1 Intel Based Hardware VirtualBox 1.2 3.87109375
GenuineIntel Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz 1600 4 Intel Based Hardware System Product Name System Version 15.607421875
-- Match with CPU Vendor
-- select 'GenuineIntel', result: 5048 rows
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%'
-- CPU Vendor matched, Now match with Cores
-- result: 1261
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 4 = CORES
-- CPU Vendor, Cores matched, Now match with CPU Family
-- result: 10
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 4 = CORES
AND UPPER('Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz' ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%'
-- CPU Vendor, Family, Cores matched, Now match with Speed
-- result: 0
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 4 = CORES
AND UPPER('Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz' ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%' AND 1600 = processor_mhz
-- Speed not found, return AVG of current match + closest Speed match
-- result: 105.66
SELECT AVG(SPEC_RATE) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 4 = CORES
AND UPPER('Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz' ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%'
-- the final data points
1130 4 CINT2006rate ASUSTeK Computer Inc. Asus P6T Deluxe (Intel Core i7-920) 4 1 4 2 Intel Core i7-920 2667 Intel Turbo Boost Technology up to 2.93 GHz 32 KB I + 32 KB D on chip per core 256 KB I+D on chip per core 8 MB I+D on chip per chip 12 GB (6 x 2GB Samsung M378B5673DZ1-CF8 DDR3-1066 CL7) 109 102 01-NOV-08 ASUSTeK Asus P6T Intel Core i7-920 8 0.0625 0.25 8 12 102 0
1131 4 CINT2006rate ASUSTeK Computer Inc. Asus P6T Deluxe (Intel Core i7-940) 4 1 4 2 Intel Core i7-940 2933 Intel Turbo Boost Technology up to 3.20 GHz 32 KB I + 32 KB D on chip per core 256 KB I+D on chip per core 8 MB I+D on chip per chip 12 GB (6 x 2GB Samsung M378B5673DZ1-CF8 DDR3-1066 CL7) 116 108 01-NOV-08 ASUSTeK Asus P6T Intel Core i7-940 8 0.0625 0.25 8 12 108 0
1132 4 CINT2006rate ASUSTeK Computer Inc. Asus P6T Deluxe (Intel Core i7-950) 4 1 4 2 Intel Core i7-950 3066 Intel Turbo Boost Technology up to 3.33 GHz 32 KB I + 32 KB D on chip per core 256 KB I+D on chip per core 8 MB I+D on chip per chip 12 GB (6 x 2GB Samsung M378B5673DZ1-CF8 DDR3-1066 CL7) 120 112 01-JUN-09 ASUSTeK Asus P6T Intel Core i7-950 8 0.0625 0.25 8 12 112 0
1134 4 CINT2006rate ASUSTeK Computer Inc. Asus P6T Deluxe (Intel Core i7-975) 4 1 4 2 Intel Core i7-975 3333 Intel Turbo Boost Technology up to 3.60 GHz 32 KB I + 32 KB D on chip per core 256 KB I+D on chip per core 8 MB I+D on chip per chip 12 GB (6 x 2GB Samsung M378B5673DZ1-CF8 DDR3-1066 CL7) 128 121 01-JUN-09 ASUSTeK Asus P6T Intel Core i7-975 8 0.0625 0.25 8 12 121 0
1811 4 CINT2006rate Clevo Clevo STYLE-NOTE 4 1 4 2 Intel Core i7-940XM 2133 Intel Turbo Boost Technology up to 3.33 GHz 32 KB I + 32 KB D on chip per core 256 KB I+D on chip per core 8 MB I+D on chip per chip 8 GB (2 x 4 GB 2Rx8 PC3-10600S-9) 88.5 84.2 01-DEC-10 Clevo Clevo STYLE-NOTE Intel Core i7-940XM 8 0.0625 0.25 8 8 84.2 0
2617 4 CINT2006rate Fujitsu CELSIUS W280, Intel Core i5-750 4 1 4 1 Intel Core i5-750 2667 Intel Turbo Boost Technology up to 3.2 GHz 32 KB I + 32 KB D on chip per core 256 KB I+D on chip per core 8 MB I+D on chip per chip 8 GB (2x4 GB PC3-10600U, 2 rank, CL9) 99.2 92.4 01-MAR-10 Fujitsu CELSIUS W280, Intel Core i5-750 4 0.0625 0.25 8 8 92.4 0
2618 4 CINT2006rate Fujitsu CELSIUS W280, Intel Core i7-860 4 1 4 2 Intel Core i7-860 2800 Intel Turbo Boost Technology up to 3.46 GHz 32 KB I + 32 KB D on chip per core 256 KB I+D on chip per core 8 MB I+D on chip per chip 8 GB (2x4 GB PC3-10600U, 2 rank, CL9) 121 113 01-MAR-10 Fujitsu CELSIUS W280, Intel Core i7-860 8 0.0625 0.25 8 8 113 0
2619 4 CINT2006rate Fujitsu CELSIUS W280, Intel Core i7-870 4 1 4 2 Intel Core i7-870 2933 Intel Turbo Boost Technology up to 3.6 GHz 32 KB I + 32 KB D on chip per core 256 KB I+D on chip per core 8 MB I+D on chip per chip 8 GB (2x4 GB PC3-10600U, 2 rank, CL9) 127 119 01-MAR-10 Fujitsu CELSIUS W280, Intel Core i7-870 8 0.0625 0.25 8 8 119 0
4580 4 CINT2006rate Intel Corporation Intel DP55KG motherboard (Intel Core i7-870) 4 1 4 2 Intel Core i7-870 2933 Intel Turbo Boost Technology up to 3.60 GHz 32 KB I + 32 KB D on chip per core 256 KB I+D on chip per core 8 MB I+D on chip per chip 4 GB (2x2GB Micron MT16JTF25664AZ-1G4 DDR3-1333 CL9) 106 101 01-MAR-10 Intel Intel DP55KG Intel Core i7-870 8 0.0625 0.25 8 4 101 0
3462 4 CINT2006rate GIGA-BYTE Technology Co. Ltd. Gigabyte GA-X58A-UD7 motherboard (Intel Core i7-920) 4 1 4 2 Intel Core i7-920 2667 Intel Turbo Boost Technology up to 2.93 GHz 32 KB I + 32 KB D on chip per core 256 KB I+D on chip per core 8 MB I+D on chip per chip 12 GB (6x2GB Micron 16JTF25664AY-1G1D1 DDR3-1066 CL7) 109 104 01-MAR-10 GIGA-BYTE Gigabyte GA-X58A-UD7 Intel Core i7-920 8 0.0625 0.25 8 12 104 0
-- this matches the result on em12c
-- result: 84.2
SELECT AVG(spec_rate)
FROM emct_spec_rate_lib
WHERE UPPER('GenuineIntel') LIKE '%'
||UPPER(norm_cpu_vendor)
||'%'
AND UPPER('Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz') LIKE '%'
||UPPER(norm_cpu_family)
||'%'
AND 4 = cores
AND
(
processor_mhz =
(
SELECT 1600 + MIN(ABS(processor_mhz - 1600))
FROM emct_spec_rate_lib
WHERE UPPER('GenuineIntel') LIKE '%'
||UPPER(norm_cpu_vendor)
||'%'
AND UPPER('Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz') LIKE '%'
||UPPER(norm_cpu_family)
||'%'
AND 4 = cores
)
OR
processor_mhz =
(
SELECT 1600 - MIN(ABS(processor_mhz - 1600))
FROM emct_spec_rate_lib
WHERE UPPER('GenuineIntel') LIKE '%'
||UPPER(norm_cpu_vendor)
||'%'
AND UPPER('Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz') LIKE '%'
||UPPER(norm_cpu_family)
||'%'
AND 4 = cores
)
);
-- final data point
1811 4 CINT2006rate Clevo Clevo STYLE-NOTE 4 1 4 2 Intel Core i7-940XM 2133 Intel Turbo Boost Technology up to 3.33 GHz 32 KB I + 32 KB D on chip per core 256 KB I+D on chip per core 8 MB I+D on chip per chip 8 GB (2 x 4 GB 2Rx8 PC3-10600S-9) 88.5 84.2 01-DEC-10 Clevo Clevo STYLE-NOTE Intel Core i7-940XM 8 0.0625 0.25 8 8 84.2 0
''the 84.2 is based on the SPEC Base number''.. whereas what I'm computing is ''Peak/Enabled Cores''
$ cat spec.txt | grep -i "Clevo"
22.125, | CSV | Text | PDF | PS | Config, 8, 4, 1, 4, 2, 84.2 , 88.5 , Intel Corporation, Clevo STYLE-NOTE, ,
spec.txt output header
here's the header, note the ''Base'', ''Peak'', and ''Enabled Cores'' fields
SPECint_rate2006/core, | CSV | Text | PDF | PS | Config, Base Copies, Enabled Cores, Enabled Chips, Cores/Chip, Threads/Core, Base, Peak, Test Sponsor, System Name
-- this is what I don't like about the consolidation planner's SPEC rate algorithm, it gets it wrong.. below is the exact match for my
processor architecture (not the "Core i7-940XM"), and it should be ''156 SPEC base rate''
$ cat spec.txt | grep -i "i7-2600K"
41.5, | CSV | Text | PDF | PS | Config,8,4,1,4,2,156 ,166 , Intel Corporation,Intel DZ68DB motherboard (Intel Core i7-2600K),
##########################################################################################################################################
## THE VIRTUAL BOX
SELECT TARGET_TYPE FROM GC$TARGET WHERE TARGET_GUID = :B1
SELECT DISTINCT T1.VENDOR_NAME, T1.IMPL, T1.FREQ_IN_MHZ, T2.CPU_COUNT NUM_CORES, T2.VENDOR_NAME SYSTEM_VENDOR,
T2.SYSTEM_CONFIG SYSTEM_CONFIG, T2.MEM/1024 MEMORY_MB, NULL CPU_THREADS FROM MGMT$HW_CPU_DETAILS T1, MGMT$OS_HW_SUMMARY T2
WHERE T1.TARGET_GUID = T2.TARGET_GUID AND T1.TARGET_GUID = :B1 AND ROWNUM = 1
-- Match with CPU Vendor
-- select 'GenuineIntel', result: 5048 rows
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%'
-- CPU Vendor matched, Now match with Cores
-- result: 5
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 1 = CORES
-- CPU Vendor, Cores matched, Now match with CPU Family
-- result: 0
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 5 = CORES
AND UPPER('Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz' ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%'
-- AVG
-- result: 10.48
SELECT AVG(SPEC_RATE) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 1 = CORES
-- the final data points
4616 4 CINT2006rate Intel Corporation Lenovo ThinkPad T43 (Intel Pentium M 780) 1 1 1 1 Intel Pentium M 780 2266 NULL 32 KB I + 32 KB D on chip per chip 2 MB I+D on chip per chip None 2 GB (2x1GB Hynix DDR2-667 CL5) 11.4 10.7 01-JAN-08 Intel Lenovo ThinkPad Intel Pentium M 780 1 0.0625 2 0 2 10.7 0
4548 4 CINT2006rate Intel Corporation Acer Aspire 3410T (Intel Celeron M 723) 1 1 1 1 Intel Celeron M 723 1200 NULL 32 KB I + 32 KB D on chip per chip 1 MB I+D on chip per chip None 2 GB (2x1GB DDR3-1066 CL7; bios sets it to DDR2-800 CL5) 0 9 01-JUN-09 Intel Acer Aspire Intel Celeron M 723 1 0.0625 1 0 2 9 0
4550 4 CINT2006rate Intel Corporation Acer Aspire 3810T (Intel Core 2 Solo ULV SU3500) 1 1 1 1 Intel Core 2 Solo ULV SU3500 1400 NULL 32 KB I + 32 KB D on chip per chip 3 MB I+D on chip per chip None 2 GB (2x1GB DDR3-1066 CL7; bios sets it to DDR2-800 CL5) 0 11.7 01-JUN-09 Intel Acer Aspire Intel Core 2 Solo ULV SU3500 1 0.0625 3 0 2 11.7 0
4551 4 CINT2006rate Intel Corporation Acer Aspire 3810T (Intel Pentium SU2700) 1 1 1 1 Intel Pentium SU2700 1300 NULL 32 KB I + 32 KB D on chip per chip 2 MB I+D on chip per chip None 2 GB (2x1GB DDR3-1066 CL7; bios sets it to DDR2-800 CL5) 0 10.1 01-JUN-09 Intel Acer Aspire Intel Pentium SU2700 1 0.0625 2 0 2 10.1 0
3905 4 CINT2006rate IBM Corporation IBM BladeCenter HS12 (Intel Celeron 445) 1 1 1 1 Intel Celeron 445 1866 1066MHz system bus 32 KB I + 32 KB D on chip per chip 512 KB I+D on chip per chip None 8 GB (4 x 2 GB DDR2-5300 ECC) 12.3 10.9 01-JUN-08 IBM IBM BladeCenter Intel Celeron 445 1 0.0625 0.5 0 8 10.9 0
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER(:B4 ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%'
AND UPPER(:B3 ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%' AND :B2 = CORES AND :B1 = PROCESSOR_MHZ
SELECT AVG(SPEC_RATE) FROM EMCT_SPEC_RATE_LIB WHERE UPPER(:B3 ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%'
AND UPPER(:B2 ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%' AND :B1 = CORES
AND ( PROCESSOR_MHZ = ( SELECT :B4 + MIN(ABS(PROCESSOR_MHZ - :B4 )) FROM EMCT_SPEC_RATE_LIB WHERE UPPER(:B3 )
LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%'
AND UPPER(:B2 ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%' AND :B1 = CORES )
OR PROCESSOR_MHZ = ( SELECT :B4 - MIN(ABS(PROCESSOR_MHZ - :B4 )) FROM EMCT_SPEC_RATE_LIB
WHERE UPPER(:B3 ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND UPPER(:B2 ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%' AND :B1 = CORES ) )
}}}
The em12c way of getting the SPECint_rate value shows a 56% CPU speed gain when migrating from V2 to X2...
the V2 at ''211.333333'' and the X2 at ''330.61165''
With my way of getting the [[cpu - SPECint_rate2006]], the speed increase from V2 to X2 is 17 to 29 (SPECint_rate2006/core), which is a 70% increase; and on the actual benchmark the LIOs/sec perf increase is from 2.1M to 3.6M LIOs/sec, which is 71%, as shown here [[cores vs threads, v2 vs x2]].. so that's pretty close ;)
Both methods can be used as a single currency system where you can compare how fast A is relative to Z.. but my issue is that the Consolidation Planner does an AVG on the filtered samples rather than taking the exact match or closest match to the hardware make and model, as I explained here [[ConsolidationPlannerViews]] and here [[em12c SPEC computation]]
SPECint_rate data points are here https://www.dropbox.com/sh/igjxt517wuceh6z/KQP4X-iMsT
BTW.. from my computations/benchmarks/research, here's the summary of equivalent CPUs between v2, x2, and x3
* 16 CPUs of V2 is equal to 9.4 CPUs of X2
* 16 CPUs of V2 is equal to 6.3 CPUs of X3
* 24 CPUs of X2 is equal to 16 CPUs of X3
<<<
''migrating from v2 to x2''
* when migrating from a v2 to an x2, each execution gets faster because of the CPU speed gains, resulting in lower CPU utilization
* you'll be able to process the same amount of TPS in less time and at lower CPU
* if you run the same transaction rate of 1205 TPS on swingbench and execute it on v2 and x2 you will see this behavior
v2, SPECint=17
--------------
40-45% CPU util
17ms latency
AAS 9
x2, SPECint=29
--------------
15-20% CPU util
10ms latency
AAS 6
* when migrating from v2 to x2 the chip efficiency factor is 58.62% (17/29)
= 16 * .45 * .5862
= 4.2, which is how many CPUs you need on the X2 if you are using 7.2 CPUs (16 * .45) on the V2.. so 16 CPUs of V2 is equal to 9.4 CPUs of X2 (see the sketch after this quote)
* so.. at the same rate of TPS, the faster CPU translates to faster per-execution times, resulting in lower CPU utilization
* the extra CPU capacity on the faster CPU can then be used for additional transactions or workload growth.. resulting in more work being done
* now if you go the other way.. faster to slower.. each SQL will execute longer and in turn cause higher CPU.. The important thing here is to provision the CPU equivalent of the source # of CPUs, taking into consideration the chip efficiency factor on the destination, so the destination has the processing power it needs and doesn't further aggravate the slow-down (faster to slower) effect. You'll still be able to achieve the same TPS, but the response times will be longer and CPU util higher.
Even going from slower to faster CPUs, the TPS may not change at all.. because it's the same number of workers doing the same amount of work as you move across platforms. The change you'll see is just lower resource utilization, and the gain is being able to put more work on the workers, resulting in a higher transaction rate.
So when observing workloads after moving from slower to faster CPUs with ''no workload'' change, do the following: check the load profile.. it should be the same.. then check per SQL whether time per exec remained the same (it should be the same).. then check CPU usage, it should be lower.
And when observing workloads after moving from slower to faster CPUs ''with a workload'' change, do the following: check the load profile.. it should be higher.. then check per SQL: time per exec should remain the same while executions go up.. then check CPU usage, it should be higher.
<<<
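Putting that arithmetic into code, here's a minimal JavaScript sketch of the equivalent-CPU computation; the function name and parameters are my own illustration, the formula (source CPUs x utilization x chip efficiency factor) is the one used in the notes above.
{{{
// sketch: how many destination CPUs you need for a given source workload
// chip efficiency factor = srcSpecPerCore / dstSpecPerCore (e.g. 17/29 = .5862)
function equivalentCpus(srcCpus, srcUtil, srcSpecPerCore, dstSpecPerCore) {
  var efficiencyFactor = srcSpecPerCore / dstSpecPerCore;
  var usedCpus = srcCpus * srcUtil;    // e.g. 16 * .45 = 7.2 CPUs used on the V2
  return usedCpus * efficiencyFactor;  // e.g. 7.2 * .5862 = 4.2 CPUs needed on the X2
}

console.log(equivalentCpus(16, .45, 17, 29)); // ~4.2
console.log(16 * (17 / 29));                  // ~9.4, the full-box equivalence of 16 V2 CPUs on X2
}}}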
! V2
{{{
V2
#####
processor : 15
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU E5540 @ 2.53GHz
stepping : 5
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 1
siblings : 8
core id : 3
cpu cores : 4
apicid : 23
initial apicid : 23
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm ida tpr_shadow vnmi flexpriority ept vpid
bogomips : 5053.17
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
$ cat em12c_spec.csv | grep -i e5540 | grep -i sun
5289,4,"CINT2006rate","Sun Microsystems","Sun Fire X2270 (Intel Xeon E5540 2.53GHz)",8,2,4,2,"Intel Xeon E5540",2534,"Intel Turbo Boost Technology up to 2.80 GHz","32 KB I + 32 KB D on chip per core","256 KB I+D on chip per core","8 MB I+D on chip per chip","24 GB (6 x 4 GB DDR3-1333 downclocked to 1066 MHz )",213,198,01-SEP-09,"Sun","Sun Fire","Intel","Xeon","E5540",16,0.0625,0.25,8,24,198,0
5304,4,"CINT2006rate","Sun Microsystems","Sun Fire X4170 (Intel Xeon E5540 2.53GHz)",8,2,4,2,"Intel Xeon E5540",2534,"Intel Turbo Boost Technology up to 2.80 GHz","32 KB I + 32 KB D on chip per core","256 KB I+D on chip per core","8 MB I+D on chip per chip","24 GB (6 x 4 GB DDR3-1333 downclocked to 1066 MHz )",213,199,01-SEP-09,"Sun","Sun Fire","Intel","Xeon","E5540",16,0.0625,0.25,8,24,199,0
5313,4,"CINT2006rate","Sun Microsystems","Sun Fire X4270 (Intel Xeon E5540 2.53GHz)",8,2,4,2,"Intel Xeon E5540",2534,"Intel Turbo Boost Technology up to 2.80 GHz","32 KB I + 32 KB D on chip per core","256 KB I+D on chip per core","8 MB I+D on chip per chip","24 GB (6x4 GB DDR3-1333 downclocked to 1066 MHz )",213,199,01-SEP-09,"Sun","Sun Fire","Intel","Xeon","E5540",16,0.0625,0.25,8,24,199,0
-- Match with CPU Vendor
-- select 'GenuineIntel', result: 5048 rows
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%'
-- CPU Vendor matched, Now match with Cores
-- result: 2065
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 8 = CORES
-- CPU Vendor, Cores matched, Now match with CPU Family
-- result: 2055
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 8 = CORES
AND UPPER('Intel Xeon E5540 2.53GHz' ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%'
-- CPU Vendor, Family, Cores matched, Now match with Speed
-- result: 3
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 8 = CORES
AND UPPER('Intel Xeon E5540 2.53GHz' ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%' AND 2530 = processor_mhz
-- Speed not found, return AVG of current match + closest Speed match
-- result: 153.460097
SELECT AVG(SPEC_RATE) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 8 = CORES
AND UPPER('Intel Xeon E5540 2.53GHz' ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%'
-- this matches the result on em12c
-- result: 211.333333
SELECT AVG(spec_rate)
FROM emct_spec_rate_lib
WHERE UPPER('GenuineIntel') LIKE '%'
||UPPER(norm_cpu_vendor)
||'%'
AND UPPER('Intel Xeon E5540 2.53GHz') LIKE '%'
||UPPER(norm_cpu_family)
||'%'
AND 8 = cores
AND
(
processor_mhz =
(
SELECT 2530 + MIN(ABS(processor_mhz - 2530))
FROM emct_spec_rate_lib
WHERE UPPER('GenuineIntel') LIKE '%'
||UPPER(norm_cpu_vendor)
||'%'
AND UPPER('Intel Xeon E5540 2.53GHz') LIKE '%'
||UPPER(norm_cpu_family)
||'%'
AND 8 = cores
)
OR
processor_mhz =
(
SELECT 2530 - MIN(ABS(processor_mhz - 2530))
FROM emct_spec_rate_lib
WHERE UPPER('GenuineIntel') LIKE '%'
||UPPER(norm_cpu_vendor)
||'%'
AND UPPER('Intel Xeon E5540 2.53GHz') LIKE '%'
||UPPER(norm_cpu_family)
||'%'
AND 8 = cores
)
);
}}}
! X2
{{{
X2
#####
processor : 23
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
stepping : 2
cpu MHz : 2926.097
cache size : 12288 KB
physical id : 1
siblings : 12
core id : 10
cpu cores : 6
apicid : 53
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx pdpe1gb rdtscp lm constant_tsc ida nonstop_tsc arat pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips : 5852.01
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]
$ cat em12c_spec.csv | grep -i "Intel Xeon X5670" | grep -i sun | grep -i fire
5206,4,"CINT2006rate","Oracle Corporation","Sun Fire X2270 M2 (Intel Xeon X5670 2.93GHz)",12,2,6,2,"Intel Xeon X5670",2933,"Intel Turbo Boost Technology up to 3.33 GHz","32 KB I + 32 KB D on chip per core","256 KB I+D on chip per core","12 MB I+D on chip per chip","48 GB (12 x 4 GB DDR3-1333 CL9, 2 Rank, ECC)",346,311,01-JUL-10,"Oracle","Sun Fire","Intel","Xeon","X5670",24,0.0625,0.25,12,48,311,0
5207,4,"CINT2006rate","Oracle Corporation","Sun Fire X2270 M2 (Intel Xeon X5670 2.93GHz)",12,2,6,2,"Intel Xeon X5670",2933,"Intel Turbo Boost Technology up to 3.33 GHz","32 KB I + 32 KB D on chip per core","256 KB I+D on chip per core","12 MB I+D on chip per chip","48 GB (12 x 4 GB DDR3-1333 CL9, 2 Rank, ECC)",342,320,01-JUL-10,"Oracle","Sun Fire","Intel","Xeon","X5670",24,0.0625,0.25,12,48,320,0
5211,4,"CINT2006rate","Oracle Corporation","Sun Fire X4170 M2 (Intel Xeon X5670 2.93GHz)",12,2,6,2,"Intel Xeon X5670",2933,"Intel Turbo Boost Technology up to 3.33 GHz","32 KB I + 32 KB D on chip per core","256 KB I+D on chip per core","12 MB I+D on chip per chip","48 GB (12 x 4 GB DDR3-1333 CL9, 2 Rank, ECC)",353,316,01-JUL-10,"Oracle","Sun Fire","Intel","Xeon","X5670",24,0.0625,0.25,12,48,316,0
-- Match with CPU Vendor
-- select 'GenuineIntel', result: 5048 rows
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%'
-- CPU Vendor matched, Now match with Cores
-- result: 689
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 12 = CORES
-- CPU Vendor, Cores matched, Now match with CPU Family
-- result: 689
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 12 = CORES
AND UPPER('Intel Xeon X5670 2.93GHz' ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%'
-- CPU Vendor, Family, Cores matched, Now match with Speed
-- result: 0
SELECT COUNT(*) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 12 = CORES
AND UPPER('Intel Xeon X5670 2.93GHz' ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%' AND 2930 = processor_mhz
-- Speed not found, return AVG of current match + closest Speed match
-- result: 318.195936
SELECT AVG(SPEC_RATE) FROM EMCT_SPEC_RATE_LIB WHERE UPPER('GenuineIntel' ) LIKE '%' ||UPPER(NORM_CPU_VENDOR) ||'%' AND 12 = CORES
AND UPPER('Intel Xeon X5670 2.93GHz' ) LIKE '%' ||UPPER(NORM_CPU_FAMILY) ||'%'
-- this matches the result on em12c
-- result: 330.61165
SELECT AVG(spec_rate)
FROM emct_spec_rate_lib
WHERE UPPER('GenuineIntel') LIKE '%'
||UPPER(norm_cpu_vendor)
||'%'
AND UPPER('Intel Xeon X5670 2.93GHz') LIKE '%'
||UPPER(norm_cpu_family)
||'%'
AND 12 = cores
AND
(
processor_mhz =
(
SELECT 2930 + MIN(ABS(processor_mhz - 2930))
FROM emct_spec_rate_lib
WHERE UPPER('GenuineIntel') LIKE '%'
||UPPER(norm_cpu_vendor)
||'%'
AND UPPER('Intel Xeon X5670 2.93GHz') LIKE '%'
||UPPER(norm_cpu_family)
||'%'
AND 12 = cores
)
OR
processor_mhz =
(
SELECT 2930 - MIN(ABS(processor_mhz - 2930))
FROM emct_spec_rate_lib
WHERE UPPER('GenuineIntel') LIKE '%'
||UPPER(norm_cpu_vendor)
||'%'
AND UPPER('Intel Xeon X5670 2.93GHz') LIKE '%'
||UPPER(norm_cpu_family)
||'%'
AND 12 = cores
)
);
}}}
http://stackoverflow.com/questions/25471713/what-are-the-core-differences-between-firebase-and-express
https://www.quora.com/Is-it-worth-developing-server-side-with-node-js-rather-than-being-helped-from-firebase
http://stackoverflow.com/questions/37378050/how-to-get-and-set-data-to-firebase-with-node-js
https://www.quora.com/Whats-the-downside-for-using-Ruby-on-Rails-as-a-back-end-for-Android-iOS-apps
https://www.quora.com/Is-right-now-Ruby-on-Rails-the-best-back-end-language-in-order-to-connect-Android-iOS-apps-to-a-server-database
http://9elements.com/io/index.php/an-ember-js-application-with-a-rails-api-backend/
http://discuss.emberjs.com/t/best-ember-back-end/7237/12
https://github.com/bahudso/ember-rails-project-base
http://stackoverflow.com/questions/30787983/ember-js-and-ruby-on-rails-strategy
[img(50%,50%)[https://i.imgur.com/mnBPUqO.png]]
https://medium.com/ember-ish/the-simplest-possible-ember-data-crud-16eacee33ae6
<<showtoc>>
! the URL of apps
https://emberjs.pagefrontapp.com/
https://cheftracker.pagefrontapp.com/
https://folders.pagefrontapp.com
! ember install and configure
!! install nodejs
<<<
https://nodejs.org/en/
<<<
!! check node and npm versions
<<<
Karl-MacBook:~ karl$ npm -v
2.15.9
<<<
note: npm -v reports the npm version; use node -v for the Node.js version itself
!! install ember-cli
<<<
Karl-MacBook:~ karl$ sudo npm install -g ember-cli
{{{
Password:
npm WARN deprecated minimatch@2.0.10: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
> spawn-sync@1.0.15 postinstall /usr/local/lib/node_modules/ember-cli/node_modules/inquirer/node_modules/external-editor/node_modules/spawn-sync
> node postinstall
npm WARN deprecated lodash.assign@4.2.0: This package is deprecated. Use Object.assign.
/usr/local/bin/ember -> /usr/local/lib/node_modules/ember-cli/bin/ember
ember-cli@2.8.0 /usr/local/lib/node_modules/ember-cli
├── ember-cli-is-package-missing@1.0.0
├── ember-cli-string-utils@1.0.0
├── clean-base-url@1.0.0
├── ember-cli-normalize-entity-name@1.0.0
├── get-caller-file@1.0.2
├── silent-error@1.0.1
├── git-repo-info@1.2.0
├── fs-monitor-stack@1.1.1
├── diff@1.4.0
├── broccoli-funnel-reducer@1.0.0
├── ember-cli-get-component-path-option@1.0.0
├── is-git-url@0.2.3
├── escape-string-regexp@1.0.5
├── isbinaryfile@3.0.1
├── promise-map-series@0.2.3
├── symlink-or-copy@1.1.6
├── core-object@2.0.6
├── broccoli-source@1.1.0
├── exists-sync@0.0.3
├── semver@5.3.0
├── ember-cli-lodash-subset@1.0.10
├── filesize@3.3.0
├── inflection@1.10.0
├── bower-endpoint-parser@0.2.2
├── broccoli-viz@2.0.1
├── node-modules-path@1.0.1
├── through@2.3.8
├── node-uuid@1.4.7
├── exit@0.1.2
├── amd-name-resolver@0.0.5 (ensure-posix-path@1.0.2)
├── walk-sync@0.2.7 (matcher-collection@1.0.4, ensure-posix-path@1.0.2)
├── broccoli-sane-watcher@1.1.5 (broccoli-slow-trees@1.1.0)
├── lodash.template@4.4.0 (lodash.templatesettings@4.1.0, lodash._reinterpolate@3.0.0)
├── nopt@3.0.6 (abbrev@1.0.9)
├── temp@0.8.3 (os-tmpdir@1.0.2, rimraf@2.2.8)
├── debug@2.2.0 (ms@0.7.1)
├── http-proxy@1.15.1 (eventemitter3@1.2.0, requires-port@1.0.0)
├── npm-package-arg@4.2.0 (hosted-git-info@2.1.5)
├── rsvp@3.3.3
├── find-up@1.1.2 (path-exists@2.1.0, pinkie-promise@2.0.1)
├── chalk@1.1.3 (ansi-styles@2.2.1, supports-color@2.0.0, strip-ansi@3.0.1, has-ansi@2.0.0)
├── minimatch@3.0.3 (brace-expansion@1.1.6)
├── morgan@1.7.0 (on-headers@1.0.1, basic-auth@1.0.4, depd@1.1.0, on-finished@2.3.0)
├── glob@7.0.5 (path-is-absolute@1.0.1, inherits@2.0.3, fs.realpath@1.0.0, once@1.4.0, inflight@1.0.5)
├── fs-extra@0.30.0 (path-is-absolute@1.0.1, rimraf@2.5.4, graceful-fs@4.1.9, klaw@1.3.0, jsonfile@2.4.0)
├── quick-temp@0.1.5 (mktemp@0.3.5, rimraf@2.2.8, underscore.string@2.3.3)
├── ora@0.2.3 (object-assign@4.1.0, cli-spinners@0.1.2, cli-cursor@1.0.2)
├── compression@1.6.2 (on-headers@1.0.1, vary@1.1.0, bytes@2.3.0, compressible@2.0.8, accepts@1.3.3)
├── fs-tree-diff@0.5.3 (fast-ordered-set@1.0.3, heimdalljs-logger@0.1.7)
├── configstore@2.1.0 (os-tmpdir@1.0.2, object-assign@4.1.0, graceful-fs@4.1.9, uuid@2.0.3, dot-prop@3.0.0, xdg-basedir@2.0.0, osenv@0.1.3, write-file-atomic@1.2.0, mkdirp@0.5.1)
├── portfinder@1.0.7 (async@1.5.2, mkdirp@0.5.1)
├── tree-sync@1.1.4 (mkdirp@0.5.1)
├── sane@1.4.1 (watch@0.10.0, minimist@1.2.0, exec-sh@0.2.0, fb-watchman@1.9.0, walker@1.0.7)
├── broccoli-funnel@1.0.7 (fast-ordered-set@1.0.3, blank-object@1.0.2, array-equal@1.0.0, broccoli-plugin@1.2.2, path-posix@1.0.0, rimraf@2.5.4, walk-sync@0.3.1, heimdalljs@0.2.1, mkdirp@0.5.1)
├── yam@0.0.21 (lodash.merge@4.6.0, findup@0.1.5)
├── resolve@1.1.7
├── express@4.14.0 (vary@1.1.0, escape-html@1.0.3, array-flatten@1.1.1, cookie-signature@1.0.6, utils-merge@1.0.0, content-type@1.0.2, encodeurl@1.0.1, merge-descriptors@1.0.1, methods@1.1.2, parseurl@1.3.1, fresh@0.3.0, range-parser@1.2.0, content-disposition@0.5.1, cookie@0.3.1, serve-static@1.11.1, etag@1.7.0, path-to-regexp@0.1.7, depd@1.1.0, qs@6.2.0, on-finished@2.3.0, finalhandler@0.5.0, proxy-addr@1.1.2, accepts@1.3.3, type-is@1.6.13, send@0.14.1)
├── broccoli-config-replace@1.1.2 (broccoli-plugin@1.2.2, fs-extra@0.24.0, broccoli-kitchen-sink-helpers@0.3.1)
├── broccoli-merge-trees@1.1.4 (broccoli-plugin@1.2.2, rimraf@2.5.4, heimdalljs-logger@0.1.7, fast-ordered-set@1.0.3, heimdalljs@0.2.1, can-symlink@1.0.0)
├── broccoli-concat@2.3.4 (lodash.uniq@4.5.0, lodash.omit@4.5.0, lodash.merge@4.6.0, mkdirp@0.5.1, broccoli-kitchen-sink-helpers@0.3.1, fast-sourcemap-concat@1.0.1, broccoli-caching-writer@2.3.1)
├── broccoli-config-loader@1.0.0 (broccoli-caching-writer@2.3.1)
├── tiny-lr@0.2.1 (parseurl@1.3.1, livereload-js@2.2.2, qs@5.1.0, body-parser@1.14.2, faye-websocket@0.10.0)
├── markdown-it@7.0.0 (linkify-it@2.0.1, mdurl@1.0.1, uc.micro@1.0.3, entities@1.1.1, argparse@1.0.9)
├── markdown-it-terminal@0.0.3 (ansi-styles@2.2.1, lodash.merge@3.3.2, cli-table@0.3.1, markdown-it@4.4.0, cardinal@0.5.0)
├── leek@0.0.22 (lodash.assign@3.2.0, request@2.75.0)
├── testem@1.12.0 (styled_string@0.0.1, lodash.assignin@4.2.0, lodash.clonedeep@4.5.0, rimraf@2.5.4, lodash.find@4.6.0, did_it_work@0.0.6, printf@0.2.5, spawn-args@0.2.0, consolidate@0.14.1, xmldom@0.1.22, mustache@2.2.1, commander@2.9.0, charm@1.0.1, backbone@1.3.3, mkdirp@0.5.1, bluebird@3.4.6, cross-spawn@4.0.2, fireworm@0.7.1, tap-parser@1.3.2, npmlog@4.0.0, js-yaml@3.6.1, socket.io@1.4.7, node-notifier@4.6.1)
├── ember-cli-broccoli@0.16.10 (broccoli-slow-trees@1.1.0, rimraf@2.5.4, copy-dereference@1.0.0, mime@1.3.4, commander@2.9.0, connect@3.5.0, broccoli-kitchen-sink-helpers@0.2.9, findup-sync@0.2.1, handlebars@4.0.5)
├── ember-cli-preprocess-registry@2.0.0 (process-relative-require@1.0.0, broccoli-clean-css@1.1.0, lodash@3.10.1)
├── inquirer@1.2.1 (ansi-escapes@1.4.0, mute-stream@0.0.6, cli-width@2.1.0, strip-ansi@3.0.1, pinkie-promise@2.0.1, figures@1.7.0, run-async@2.2.0, cli-cursor@1.0.2, string-width@1.0.2, external-editor@1.1.0, rx@4.1.0)
├── ember-try@0.2.6 (rimraf@2.5.4, extend@3.0.0, sync-exec@0.6.2, core-object@1.1.0, ember-cli-babel@5.1.10, ember-cli-version-checker@1.1.6, fs-extra@0.26.7, ember-try-config@2.1.0, cli-table2@0.2.0)
├── bower-config@1.4.0 (graceful-fs@4.1.9, untildify@2.1.0, osenv@0.1.3, optimist@0.6.1, mout@1.0.0)
├── ember-cli-legacy-blueprints@0.1.1 (ember-cli-test-info@1.0.0, ember-cli-path-utils@1.0.0, ember-cli-get-dependency-depth@1.0.0, ember-cli-valid-component-name@1.0.0, fs-extra@0.24.0, ember-cli-babel@5.1.10, ember-router-generator@1.2.2, lodash@3.10.1)
├── lodash@4.16.3
├── broccoli-babel-transpiler@5.6.1 (clone@0.2.0, json-stable-stringify@1.0.1, hash-for-dep@1.0.3, broccoli-persistent-filter@1.2.11, babel-core@5.8.38)
├── npm@2.15.5
└── bower@1.7.9
}}}
<<<
!! check ember version
<<<
Karl-MacBook:~ karl$ ember -v
Could not start watchman; falling back to NodeWatcher for file system events.
Visit http://ember-cli.com/user-guide/#watchman for more info.
ember-cli: 2.8.0
node: 4.6.0
os: darwin x64
<<<
!! create symbolic links
<<<
ln -s /usr/local/lib/node_modules/ember-cli/bin/ember /usr/local/bin/ember
ln -s /usr/local/lib/node_modules/ember-cli/node_modules/bower/bin/bower /usr/local/bin/bower
<<<
!! install watchman
<<<
brew install watchman
<<<
!! generate an ember project
{{{
ember new ember_project1
}}}
!! launch ember app
{{{
cd ember_project1
Karl-MacBook:ember_project1 karl$ ember server
Could not start watchman; falling back to NodeWatcher for file system events.
Visit http://ember-cli.com/user-guide/#watchman for more info.
Just getting started with Ember? Please visit http://localhost:4200/ember-getting-started to get going
Livereload server on http://localhost:49152
Serving on http://localhost:4200/
Build successful - 16862ms.
Slowest Trees | Total
----------------------------------------------+---------------------
Babel | 6813ms
Babel | 5550ms
Babel | 944ms
Slowest Trees (cumulative) | Total (avg)
----------------------------------------------+---------------------
Babel (12) | 15344ms (1278 ms)
}}}
!! ember versions
!!! edit the package.json
change to
<<<
"ember-data": "^2.0.1",
<<<
leave bower.json as is
!!! npm install + bower install
type in
<<<
npm install
bower install
<<<
then launch ember again
<<<
ember server
<<<
!! remove welcome page
edit package.json
remove the line
<<<
"ember-welcome-page"
<<<
! Controllers and Templates
!! generate ember application template
type in
<<<
ember generate template application
<<<
will generate app/templates/application.hbs
!! generate ember application controller
type in
<<<
Karl-MacBook:ember_project1 karl$ ember g controller application
installing controller
create app/controllers/application.js
installing controller-test
create tests/unit/controllers/application-test.js
<<<
!! edit the template and controller
edit the following files:
{{{
app\controllers\application.js
import Ember from 'ember';
export default Ember.Controller.extend({
name: 'Ember',
greeting: "HELLOOOO"
});
}}}
{{{
app\templates\application.hbs
<h1>
{{greeting}}
</h1>
<p>
hello my name is {{name}}
</p>
}}}
!! create template to controller binding 2-way
edit the following files:
{{{
app\controllers\application.js
import Ember from 'ember';
export default Ember.Controller.extend({
name: 'Ember',
greeting: "HELLOOOO"
});
}}}
use input helper with value argument
{{{
app\templates\application.hbs
<h1>
{{greeting}}
</h1>
<p>
hello my name is {{name}}
</p>
<hr>
{{input value=greeting}}
}}}
! Displaying list, object, array of objects
!! create and display list
https://github.com/karlarao/code_ninja/commit/2221b658fbef426e31bcbc5ae16121c51b4480e7
{{{
application.js
foods: ["Taco", "Pizza", "Salad", "Fruits"]

application.hbs
<ul>
{{#each foods as |food|}}
  <li>Let's eat some {{food}}!</li>
{{/each}}
</ul>
}}}
!! create and display object
https://github.com/karlarao/code_ninja/commit/e713791dafd53348a0721c1eb53e8294a9cd56b6
{{{
application.js
import Ember from 'ember';
export default Ember.Controller.extend({
foods: ["Taco", "Pizza", "Salad", "Fruits"],
restaurant: {name: "Our awesome resto", yearsOpen: 1}
});
}}}
{{{
application.hbs
<p>
Our resto name is {{restaurant.name}}. We've been open for {{restaurant.yearsOpen}} year(s).
</p>
}}}
!! create and display array of objects
https://github.com/karlarao/code_ninja/commit/310bf10584a10ba2953b0920bf0148bb8ea89ad0
{{{
controllers/application.js
restaurant: {name: "Our awesome resto", yearsOpen: 1},
foods: [
  {name: "Taco", isAvailable: true},
  {name: "Pizza", isAvailable: true},
  {name: "Salad", isAvailable: false},
  {name: "Fruits", isAvailable: true}
]
}}}
{{{
templates/application.hbs
{{#each foods as |food|}}
  <li>Let's eat some {{food.name}}! Currently available: {{food.isAvailable}}</li>
{{/each}}
}}}
! Ember Helpers
!! IF helpers
https://github.com/karlarao/code_ninja/commit/c0d7f363221160d0c17279d807cea91371246b53
{{{
<li>Let's eat some {{food.name}}!
{{if food.isAvailable "" "Not Available"}}
</li>
}}}
!! Block helpers (if,else)
https://github.com/karlarao/code_ninja/commit/4b6f0a64279be579bb4c75acb7e77456af9ced02
{{{
{{#if food.isAvailable}}
<li>Let's eat some {{food.name}}!</li>
{{else}}
<li><s>{{food.name}}</s></li>
{{/if}}
}}}
! CSS
!! primarily consist of
{{{
* selectors - define in <style><li> </style>
** elements <li>
** class .not-available (starts with a . - can be applied to many)
** id #explanation (starts with a hash # - only applied once)
** pseudo class li:first-of-type (will apply to first occurrence)
* properties (think of it as rules) - within <li style>
}}}
!! CSS order of priority
{{{
* Browsers read CSS from top to bottom
* That means that, in the event of a conflict, the browser will use whichever CSS declaration came last
* But an id attribute will always take precedence over a class
* And an in-line style takes precedence over both id and class <h1 style="color: green">
* Above all, !important takes precedence over everything .pink-text { color: pink !important; }
}}}
!! CSS examples
examples:
* css on .hbs (template) https://github.com/karlarao/code_ninja/commit/ee7fe4faadc039979c186f4ad467fbdf70f86032
* css on app.css https://github.com/karlarao/code_ninja/commit/02c9d011758f7c890edc8890d250668ae148bae5
* css on separate file https://github.com/karlarao/code_ninja/commit/9f3e2863466e4bf5d0966aa910577761f37ffb40
! Function
!! simple JS functions
{{{
alert()
alert("hey")
-- simple function
function tacoAlert() {
alert("hey");
alert("tacos are great")
}
-- function parameter
function tacoAlert(adjective) {
alert("hey");
alert("tacos are" + adjective)
}
tacoAlert("awesoooome")
-- return an alert
function tacoAlert(adjective,topping) {
alert("hey");
alert("tacos are " + adjective + " with " + topping);
}
tacoAlert("awesoooome","cheese")
-- return a string
function tacoString(adjective,topping) {
return "hey tacos are " + adjective + " with " + topping ;
}
tacoString("awesoooome","cheese")
-- stacking the function with another function
alert(tacoString("awesoooome","cheese"))
}}}
!! Functions in Controller
* An Ember controller is a big fancy JS object; everything is a key-value pair.
* With the key-value system inherited from the JS object, the key takes up the job of naming the function. So the function itself is an anonymous function, in the form below:
{{{
myFunction: function(){
<code here>
}
}}}
* An alternative to this which is most common is the following form:
{{{
myFunction(){
<code here>
}
}}}
!!! buttonClick() - Action button
buttonClick() - ember.js - controller functions - action button https://github.com/karlarao/code_ninja/commit/b5fd1533350fae6bb6f0434448ccd1b820bc6937
step by step (a minimal sketch follows this list):
* create an "action" helper in the opening tag of the button inside the template; in the tag, call the buttonClick function
* create an "actions" hash in the controller to contain all of the actions
** inside the "actions" hash, create the function "buttonClick"
!!!! functions used
!!!!! actions hash
google search: ember controller actions hash
https://guides.emberjs.com/v1.10.0/templates/actions/
http://stackoverflow.com/questions/19432125/ember-best-way-to-reuse-controller-actions-on-mixins
!!!!! Ember.set()
http://www.ember-doc.com/classes/Ember.Set.html
http://stackoverflow.com/questions/17976333/how-to-change-value-with-ember-js-array-foreach
{{{
// safely set and get properties (works on both plain and Ember objects)
Ember.set(yourObject, propertyName, value);
var current = Ember.get(yourObject, propertyName);
}}}
!!! wasClicked() - generify the action button, pass argument
ember.js - generify action button, pass argument https://github.com/karlarao/code_ninja/commit/f72752c4fa17fa1b82d4698be88ccb53783c3664
!!! makeUnavailable() , makeAvailable() - change property of food to available and unavailable
create makeUnavailable and makeAvailable functions to change the property value on the list loop https://github.com/karlarao/code_ninja/commit/cb86d41224efefcb3af170c37f575e886cb10d06
ember.js - cheftracker - Enter,Exit change status https://github.com/karlarao/code_ninja/commit/df586062686130f648fa09854415c6669648e5df
! Ember Data
ember data progress update http://emberjs.com/blog/2013/05/03/ember-data-progress-update.html
!! model hook and route
!!! generate a route file
(will also generate a template file, hit "n" - NO)
{{{
ember g route application
}}}
!!! model hook
* the model hook is called by the framework whenever the route (one or more routes) is entered
* the model hook's job is to give us data when the route is entered, usually leading up to a "return" of a value fetched from a server
!!! model hook example
https://github.com/karlarao/code_ninja/commit/2c2ddc240c51438ad5bf49a8fbce8a5040a2bd9f
* the model method on the route returns the same data as what was specified on the controller
* then use "model" instead of "foods" on the hbs template
* the name should be "model"; it's a special hook that is used in the route (a minimal sketch follows)
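a minimal sketch of the model hook, reusing the foods list from earlier (normally the return value would come from a server):
{{{
app/routes/application.js
import Ember from 'ember';

export default Ember.Route.extend({
  // the model hook runs whenever this route is entered;
  // whatever it returns becomes "model" in the template
  model() {
    return ["Taco", "Pizza", "Salad", "Fruits"];
  }
});

app/templates/application.hbs
<ul>
{{#each model as |food|}}
  <li>Let's eat some {{food}}!</li>
{{/each}}
</ul>
}}}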
!! create firebase datastore
!!! create "Menu Tracker" database
* create new firebase database "Menu Tracker" and change the rules to the following
{{{
{
"rules": {
".read": true,
".write": true
}
}
}}}
!!! install emberfire
{{{
ember install emberfire@2.0
}}}
!!! edit environment.js
* go to config/environment.js and copy paste the following taken from the firebase console
{{{
contentSecurityPolicy: {
'script-src': "'self' 'unsafe-eval' apis.google.com",
'frame-src': "'self' https://*.firebaseapp.com",
'connect-src': "'self' wss://*.firebaseio.com https://*.googleapis.com"
},
firebase: {
apiKey: "xxx",
authDomain: "menu-tracker-16f6b.firebaseapp.com",
databaseURL: "https://menu-tracker-16f6b.firebaseio.com",
storageBucket: "menu-tracker-16f6b.appspot.com",
messagingSenderId: "xxx"
},
}}}
!! enter data
* enter the data like this in firebase
[img[ http://i.imgur.com/tY6zDPU.png ]]
!! connecting ember data and firebase
!!! create data model "food"
* create a data model "food"
* the data model tells ember data what to expect when it's grabbing json (a big hash of data) from the server
* the data model tells you which things you can pull in from that json, because it may have more things than you want
* then, you need to import the attribute method from ember data
* then, you need to know which attributes we are bringing in and define them
{{{
ember g model food
}}}
!!! edit models/food.js
* define the attributes
{{{
name: DS.attr("string"),
isAvailable: DS.attr("boolean")
}}}
!!! edit the routes/application.js to grab data from firebase
* go to the route and have the ember data store reach out to firebase, returning the result from the model method (a full-file sketch follows the snippet below)
{{{
return this.store.findAll("food");
}}}
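for context, here's a sketch of the whole route file with that line in place (standard Ember 2.x layout):
{{{
app/routes/application.js
import Ember from 'ember';

export default Ember.Route.extend({
  // findAll asks the adapter (emberfire here) for every "food" record;
  // the route waits on the returned promise before rendering
  model() {
    return this.store.findAll("food");
  }
});
}}}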
!! saving data
* save the changes on the food model
* the food object on the controller is now an ember data object from the data store, as defined in food.js
* Ember.set(food, "isAvailable", true) changes what's in the fetched data store
* calling food.save() tells the data store to push the changes back to the server
* changes on the data store propagate to the other referenced places (a sketch of a full action follows the snippet below)
{{{
food.save();
}}}
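putting the set + save steps together, a minimal sketch of an action that flips the flag and persists it (makeAvailable matches the earlier exercise):
{{{
app/controllers/application.js
import Ember from 'ember';

export default Ember.Controller.extend({
  actions: {
    makeAvailable(food) {
      Ember.set(food, "isAvailable", true); // change the record in the store
      food.save();                          // push the change back to the server
    }
  }
});
}}}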
!! configuration summary
{{{
I ended up having two schemas or databases. The cooks and foods.
http://i.imgur.com/xYd215e.png
And I'm using ember 2.8, so I used the
name: DS.attr("string"),
isAvailable: DS.attr("boolean")
On the emberfire environment.js I added the firebase and contentSecurityPolicy key value. On the contentSecurityPolicy I have the following:
contentSecurityPolicy: {
'script-src': "'self' 'unsafe-eval' apis.google.com",
'frame-src': "'self' https://*.firebaseapp.com",
'connect-src': "'self' wss://*.firebaseio.com https://*.googleapis.com"
},
I got that from http://yoember.com/ which uses Ember 2.8 + firebase. Not sure if that property is really needed for firebase and if my entries are correct.
I saw a couple of references to it, and seems like an add-on https://blog.justinbull.ca/ember-cli-and-content-security-policy-csp/ https://github.com/rwjblue/ember-cli-content-security-policy https://emberigniter.com/modify-content-security-policy-on-new-ember-cli-app/
Here's the full details of my committed files for this exercise: https://github.com/karlarao/code_ninja/commit/84fb324f0e8a832edf45821ac422c57496a61a17
}}}
!! creating new record
!!! two ways of creating a new record
!!!! using newItem as argument to action
new record - using newItem as argument to action https://github.com/karlarao/code_ninja/commit/d63871908207af5965155ed8c484d56e54724c23
!!!! using this.get as property of the controller
new record - using this.get as property of the controller https://github.com/karlarao/code_ninja/commit/88d20b9bb152ff99a5d7ef07065eca9d8695337f
* newItem, by being bound to the input value, is considered a property on the controller, and since it's not explicitly defined its value defaults to "null"
* so it doesn't have to be passed to the action for the controller to have access to it; it can be accessed with this.get("newItem")
* to save the new item use .save() (a sketch of the whole action follows the snippet below)
* to clear the value of the input bar after save use either
{{{
Ember.set(this, "newItem". "")
or
this.set("newItem","")
}}}
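a minimal sketch of the second approach (the addFood action name is my own illustration; newItem is the property bound to the input helper):
{{{
app/controllers/application.js
import Ember from 'ember';

export default Ember.Controller.extend({
  actions: {
    addFood() {
      var name = this.get("newItem"); // read the property bound to the input
      var food = this.store.createRecord("food", {
        name: name,
        isAvailable: true
      });
      food.save();             // persist the new record to the backend
      this.set("newItem", ""); // clear the input bar
    }
  }
});
}}}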
!!!!! functions used
!!!!!! createRecord
http://www.ember-doc.com/classes/DS.Store.html#method_createRecord
!!!!!! this
http://stackoverflow.com/questions/39161718/what-is-this-really-represent-in-emberjs-framework/39187322#39187322
{{{
you can test by using
console.log(this);
}}}
http://stackoverflow.com/questions/24362656/ember-data-this-store-getting-undefine
https://blog.farrant.me/how-to-access-the-store-from-within-an-ember-component/
http://stackoverflow.com/questions/10267546/this-store-is-undefined
!! destroy/delete record
!!! use destroyRecord()
to delete the food row (a minimal sketch follows) https://github.com/karlarao/code_ninja/commit/db912ee506f17a2315a9e0f816b32501847e0241
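a minimal sketch (the deleteFood action name is my own illustration):
{{{
app/controllers/application.js
import Ember from 'ember';

export default Ember.Controller.extend({
  actions: {
    deleteFood(food) {
      // destroyRecord() deletes the record locally and also
      // persists the deletion to the backend
      food.destroyRecord();
    }
  }
});
}}}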
!! count records
!!! use .length (a property, not a method, as in {{availableItems.length}})
https://github.com/karlarao/code_ninja/commit/660698ba74e7e16bc38030612a0dd1867926a087
!!! use count as a computed property
http://emberjs.com/api/classes/Ember.computed.html
!!!! computed alias - Ember.computed.alias
* Creates a new property that is an alias for another property on an object and gives it a new name
{{{
menuLength: Ember.computed.alias("model.length")
}}}
!!!! filter by - Ember.computed.filterBy
* Filters the array by the property and value
{{{
availableItems: Ember.computed.filterBy("model", "isAvailable", true)
}}}
then use the two on the template
{{{
{{menuLength}}
{{availableItems.length}}
}}}
!! filter table list
http://timgthomas.com/2014/08/quick-and-easy-filterable-lists-in-ember-js/
http://blog.crowdint.com/2014/07/08/simple-collection-s-sorting-and-filtering-with-ember-js.html
!! ember data RoR
google search "ember rest adapter" http://bit.ly/2fPGqrC
https://emberigniter.com/fit-any-backend-into-ember-custom-adapters-serializers/
http://thejsguy.com/2015/12/05/which-ember-data-serializer-should-i-use.html <- GOOD STUFF
https://www.emberscreencasts.com/search?q=serializer <- GOOD STUFF
https://www.emberscreencasts.com/search?q=REST <- GOOD STUFF
!!! RoR - REST JSON API (rails) + Ember
https://code.tutsplus.com/courses/create-a-full-stack-rails-and-ember-app
https://emberigniter.com/modern-bridge-ember-and-rails-5-with-json-api/
!!! RoR - plain ORM (rails) + Ember
https://mail.google.com/mail/u/0/#inbox/1584aec3ea4fab48
!! ember data nodejs
https://www.youtube.com/results?search_query=ember+data+nodejs
google search "emberjs on nodejs ember data" http://bit.ly/2fPJDqR
http://stackoverflow.com/questions/35639530/how-to-connect-an-emberjs-front-end-with-an-nodejs-express-api
!! ember data node-oracledb
https://www.youtube.com/results?search_query=emberjs+node-oracledb
!! ember data postgresql
http://stackoverflow.com/questions/34889835/ember-js-ember-data-postgresql
!! ember data mysql
http://stackoverflow.com/questions/18137185/ember-data-routing-mapping-to-mysql-database
!! ember data resources
https://leanpub.com/emberdatainthewild
!! ember REST
https://emberigniter.com/fit-any-backend-into-ember-custom-adapters-serializers/ <-- GOOD STUFF
https://medium.freecodecamp.org/eli5-full-stack-basics-breakthrough-with-django-emberjs-402fc7af0e3
<<<
The JSONAPIAdapter is the ‘default adapter used by Ember Data’. It transforms the store’s requests into HTTP requests that follow the JSON API format. It plugs into the data management library called Ember Data.
<<<
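for reference, a minimal sketch of pointing the default adapter at your own REST backend; the host and namespace values here are hypothetical:
{{{
app/adapters/application.js
import DS from 'ember-data';

export default DS.JSONAPIAdapter.extend({
  host: 'http://localhost:3000', // hypothetical API host
  namespace: 'api/v1'            // hypothetical URL prefix
});
}}}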
https://guides.emberjs.com/v1.12.0/models/the-rest-adapter/
https://www.emberjs.com/api/ember-data/release/classes/DS.RESTAdapter
! Challenges
!! increment/decrement student chefs
https://github.com/karlarao/code_ninja/commit/a0a60833cc3429c39ea5295ff20035e5171f3796
!!! functions used
!!!! set/get/save
!!!! incrementProperty/decrementProperty
!!!! model - DS.attr("number", {defaultValue: 0})
!! no negative student chefs
https://github.com/karlarao/code_ninja/commit/e8ca6910f69bc21712e3b416df69fcc5b51befa1
!!! functions used
!!!! if (cook.get("numberStudents") > 0)
!! total enrollment of student chefs
https://github.com/karlarao/code_ninja/commit/8841219dfc512579e376d8edb00c8f090f75cf26
!!! functions used
!!!! Ember.computed.mapBy("model", "numberStudents")
* returns an array of numberStudents (a combined sketch of these challenge functions follows below)
!!!! Ember.computed.sum("chefStudents")
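a combined sketch of the three challenges (the enroll/unenroll action names are my own illustration; the computed properties match the functions listed above):
{{{
app/controllers/application.js
import Ember from 'ember';

export default Ember.Controller.extend({
  // total enrollment: collect numberStudents into an array, then sum it
  chefStudents: Ember.computed.mapBy("model", "numberStudents"),
  totalStudents: Ember.computed.sum("chefStudents"),

  actions: {
    enroll(cook) {
      cook.incrementProperty("numberStudents");
      cook.save();
    },
    unenroll(cook) {
      // guard so the count never goes negative
      if (cook.get("numberStudents") > 0) {
        cook.decrementProperty("numberStudents");
        cook.save();
      }
    }
  }
});
}}}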
! Routing
* for each URL we define a route
* ember automatically comes with the one below; you don't have to explicitly define it, else it will error with "Error: Assertion Failed: 'application' cannot be used as a route name" (that error was triggered by the command "ember g route application")
* the default "controller" and "route" are used at runtime, and we only need to create a file for them if we want to change anything about the defaults
{{{
Router.map(function() {
this.route("application", {path: "/"});
});
}}}
!! create new router - foods and about
* with the route name "foods" it's going to expect a controller, template, route to bear the name "foods"
https://github.com/karlarao/code_ninja/commit/4a9c7c81ff2b76a2684fcb8e36e5ca6871550312
[img[ http://i.imgur.com/ovo3oPh.png ]]
https://guides.emberjs.com/v2.9.0/routing/defining-your-routes/#toc_the-application-route
!! create links to other routes
https://github.com/karlarao/code_ninja/commit/981863b203a0bfb7edbd31f5d3a98e6840adfb60
* just like the IF helper there are two forms
!!! 1) block form
longer form but can easily add pretty button
args:
* name of the route that we want to link to
* the text on the open/close tags
!!! 2) inline form
shorter form but less flexibility
args:
* string
* application path
http://www.ember-doc.com/classes/Ember.Handlebars.helpers.html#method_link-to
https://guides.emberjs.com/v2.9.0/templates/links/#toc_the-code-link-to-code-helper
!! create the navbar
https://github.com/karlarao/code_ninja/commit/b50b0125462f3dee954385e11f9d300d85bbb148
<<<
application.hbs is a little bit special because it is automatically put at the top of every page
{{{
{{outlet}}
}}}
outlet - the default content of application.hbs; it means "show whatever the current page renders below", so the navbar should be above the outlet.
<<<
http://stackoverflow.com/questions/21553992/how-to-link-to-root-route-of-application
!! URL parameters / variable - http://localhost:4200/favorite-word/tacos
https://github.com/karlarao/js_projects/commit/104fdf0028de87f29d395cdd00f79e8425c5074d
On the foods route, we have to define the model on the route and return a value. It turns out that if you put a "parameter" in the URL, Ember applies a default route (default behavior), meaning you can make use of that variable right away in the model without creating a new route.
Here's what this route might look like when explicitly created:
{{{
// this no longer has to be explicitly set
ember g route favorite-word
import Ember from 'ember';
export default Ember.Route.extend({
model(params){
return params;
}
});
}}}
!! find records with URL parameters (a.k.a. show page - more detailed data about that model) - this.store.findRecord
https://github.com/karlarao/js_projects/commit/0660d57b544a090331692355161fcee97b995449
* IDs that are passed in to a route with the name of model are automatically used to find that model with that ID
* link-to requires another parameter that gives either the ID or the object
* this.store
https://guides.emberjs.com/v2.9.0/tutorial/subroutes/#toc_finding-by-id
http://www.ember-doc.com/classes/Ember.Route.html#method_store
http://emberjs.com/api/classes/Ember.Route.html#method_model
* link-to
https://guides.emberjs.com/v2.9.0/tutorial/subroutes/#toc_linking-to-a-specific-rental
http://www.ember-doc.com/classes/Ember.Handlebars.helpers.html#method_link-to
<<<
* When you pass an object as the second argument into the link-to block helper, it will by default serialize the object to the model's ID in the URL. Alternatively, you may just pass rental.id for clarity.
* The model for the post route is store.findRecord('post', params.post_id): by default, if your route has a dynamic segment ending in _id, the model hook calls findRecord with the model name and that ID for you.
<<<
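A minimal sketch of the show-page pattern described above, using the food/foods naming from these notes (the `food_id` segment and `food.name` attribute are assumptions):
{{{
// routes/food.js -- hedged sketch
import Ember from 'ember';

export default Ember.Route.extend({
  model(params) {
    // the ID from the :food_id dynamic segment finds that one record
    return this.store.findRecord('food', params.food_id);
  }
});
}}}
And the link that feeds it, passing the object (or just food.id) as the second argument:
{{{
{{#each model as |food|}}
  {{#link-to "food" food}}{{food.name}}{{/link-to}}
{{/each}}
}}}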
!! nested routes (show both summary and detail in same page)
ember.js - nested routes foods.food.calories/price https://github.com/karlarao/js_projects/commit/60f4a01ebe2752760ec49fc71e2cfdd5d7b705c1
{{{
* application outlet is the stealth parent to all the routes (for navbar)
* whatever is nested on the foods route will show up in the outlet
* the nesting is done on the:
** router file entry
** file structure (child route files are put in the folder of the parent route - applies to routes, templates, controller)
** outlet entry on the parent route
** links foods.food.eating
templates/
foods.hbs
foods
food.hbs
food
eating.hbs
}}}
{{{
Nested routes are pretty cool! I did a test case on how I can implement this with grandchild routes, and it resulted in two outlets and
three output windows/panels (numbered 1-3 on the photo below). Pretty much the same as the example foods.food.eating, but in my
case I have foods -> food, and underneath food are clickable price and calories info for each food. The route entry is below:
this.route("foods", function(){
this.route("food", {path: ":food_id"}, function(){
this.route("price", {path: "usd"});
this.route("calories");
});
});
http://i.imgur.com/9JZ8ID1.png
1st question: On the video you showed how to separate the two panels using flex-container. In my case I have two {{outlet}} on separate .hbs
files. How will I format this in a single page in such a way that I have windows/panels on left for foods and right for food and bottom for price and calories?
2nd question: On the output #3 on the image I was trying to get the name of the restaurant as an output. Does this mean {{restaurant.name}} from
the controller can't be accessed from child and grandchild .hbs files? Or am I missing a step?
Price: 2 million {{model.isAvailable}} , {{restaurant.name}}
Below are the detailed files
[ember.js - nested routes foods.food.calories/price](https://github.com/karlarao/js_projects/commit/60f4a01ebe2752760ec49fc71e2cfdd5d7b705c1)
}}}
[img[http://i.imgur.com/9JZ8ID1.png ]]
Answer
{{{
1st question: The way to do this is to have the first {{outlet}} (the one in foods.hbs) in the right pane, just like in the videos, and then
have the second {{outlet}} at the bottom of foods/food.hbs. Essentially, you're subdividing the right pane.
2nd question: Correct, controller properties are not inherited by the controllers of the nested routes.
}}}
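Putting that answer into template form, a hedged sketch of how the outlets nest (the pane markup and class names are assumptions):
{{{
{{!-- foods.hbs: left pane lists the foods, right pane is the first outlet --}}
<div class="left-pane">
  {{#each model as |food|}}
    {{#link-to "foods.food" food}}{{food.name}}{{/link-to}}
  {{/each}}
</div>
<div class="right-pane">{{outlet}}</div>

{{!-- foods/food.hbs: detail on top, second outlet at the bottom --}}
<h3>{{model.name}}</h3>
{{outlet}} {{!-- price / calories render here --}}
}}}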
!!! functions used
!!!! this.route / nested routes / multiple outlets
https://medium.com/@FiscalNote/ember-js-nested-routing-with-multiple-outlets-273f744be8bd#.5udq8pwet <- good stuff
http://ember101.com/videos/007-nested-routes/ <- good stuff
http://discuss.emberjs.com/t/multiple-outlets-with-different-models-in-application-route/8421
https://recalll.co/app/?q=javascript%20-%20Multiple%20outlets%20in%20Ember.js%20v2%20router
http://stackoverflow.com/questions/14531956/multiple-outlets-in-ember-js-v2-router
http://stackoverflow.com/questions/12150624/ember-js-multiple-named-outlet-usage
https://guides.emberjs.com/v2.9.0/routing/rendering-a-template/
http://stackoverflow.com/questions/35008592/how-do-i-render-multiple-outlets-in-one-template-file-using-ember-2-3
http://stackoverflow.com/questions/26878536/ember-js-ember-cli-outlets-not-nesting-properly
!! Challenge
!!! Folders
https://folders.pagefrontapp.com/
https://github.com/karlarao/js_projects/commit/6d1d0cc2039ef53103e74a1e7d20e7c1b31515b4
!! Index routes (implicit routes)
https://github.com/karlarao/js_projects/commit/367206e0ec9020a4253af9a21460f140efe07e8b
the three implicit routes (see the layout sketch after this list):
* index.hbs - Index route automatically gets created whenever you have a nested route. Can be created directly in the templates folder or the nested folders.
* loading.hbs
* error.hbs
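As a sketch, the resulting file layout for the foods route from earlier would be:
{{{
templates/
  foods.hbs
  foods/
    index.hbs    <- rendered at /foods when no child route is active
    loading.hbs  <- shown while a model hook is still pending
    error.hbs    <- shown when a model hook rejects
}}}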
! ember concurrency
ember concurrency for data issue http://bit.ly/2fPfdFm
http://ember-concurrency.com/#/docs
! Deploy
!! add .gitignore file
on the root folder of the app add the .gitignore entry
{{{
# See http://help.github.com/ignore-files/ for more about ignoring files.
# compiled output
/dist
/tmp
# dependencies
/node_modules
/bower_components
# misc
/.sass-cache
/connect.lock
/coverage/*
/libpeerconnection.log
npm-debug.log
testem.log
}}}
!! using pagefront
{{{
ember install ember-pagefront --app=emberjs --key=xxx
ember deploy production
}}}
https://www.pagefronthq.com/
http://discuss.emberjs.com/t/divshot-is-shutting-down-what-are-the-alternatives/9053
!! large scale
<<<
what web server do you use for a large-scale node.js app?
what web server do you use for a large-scale rails app?
<<<
!! other ways to deploy
google search "how to deploy emberjs application to multiple machines" http://bit.ly/2fvt3Zv
https://www.heroku.com/emberjs , https://github.com/heroku/ember-js-getting-started
https://www.distelli.com/docs/tutorials/build-and-deploy-emberjs-app
http://discuss.emberjs.com/t/best-practices-for-real-world-ember-deployment/3171
http://stackoverflow.com/questions/39013274/ember-js-deploy-on-multiple-servers
http://discuss.emberjs.com/t/deploying-on-s3-and-scaling/9317
http://discuss.emberjs.com/t/any-experience-with-deploying-to-ec2/9270
http://discuss.emberjs.com/t/deploying-to-amazon/7487
https://emberigniter.com/deploy-ember-cli-app-amazon-s3-linux-ssh-rsync/
google search "how to scale emberjs deploy" http://bit.ly/2fP9RK8
ember js - web hosting, deploy to amazon AWS S3, development and production https://www.youtube.com/watch?v=ZV_kyHwGuow
https://guides.emberjs.com/v2.6.0/tutorial/deploying/
https://www.youtube.com/results?search_query=ember+deploy+to+aws
! HOWTO CREATE an app from scratch
{{{
prereq:
-----
sudo npm install -g ember-cli
ln -s /usr/local/lib/node_modules/ember-cli/bin/ember /usr/local/bin/ember
ln -s /usr/local/lib/node_modules/ember-cli/node_modules/bower/bin/bower /usr/local/bin/bower
create new project:
-----
ember new ember_project1
cd ember_project1
ember data / setup database:
-----
ember install emberfire
> setup data in firebase/database (<plural_db_name>)
db_url -> db_name -> ID -> columns/attribute (key:value)
read:write rules -> true
copy/paste config to environment.js
UI/UX:
-----
> define the handlebars (templates folder - <plural_name>.hbs) (#each model as |folder|)
ember g template application (define the {{outlet}} here)
ember g template <plural_name>.hbs (the custom names of template, routes, controller should match)
> define the nested hbs ( <singular_name>.hbs )
> define the css (styles folder - app.css)
URL/Pages:
-----
> define router.js for the URLs (nested routes - folders.folder)
Fetch Data:
-----
> define the model/column attributes (models folder - <singular_name>.js)
ember g model <singular_name>.js
> define the routes (routes folder - <plural_name>.js, define .findAll("folder") )
ember g route <plural_name>.js (the custom names of template, routes, controller should match)
or the default
ember g route application
> define the nested routes (routes folder - <singular_name>.js, define .findRecord("folder", params.folder_id) )
ember g route <singular_name>.js
App Logic:
-----
> define controller (app logic)
ember g controller <plural_name>.js (the custom names of template, routes, controller should match)
or the default
ember g controller application
Deploy:
-----
ember install ember-pagefront --app=<appname> --key=<pagefrontkey>
ember deploy production
Push to Git:
-----
add .gitignore file
}}}
! emberjs sandbox , ember-twiddle , JSFiddle
https://ember-twiddle.com/
https://github.com/ember-cli/ember-twiddle
! errors
!! xcrun error after upgrade to macosx sierra
{{{
fix:
xcode-select --install
}}}
http://tips.tutorialhorizon.com/2015/10/01/xcrun-error-invalid-active-developer-path-library-developer-commandline-tools-missing-xcrun/
https://ohthehugemanatee.org/blog/2015/10/01/how-i-got-el-capitain-working-with-my-developer-tools/
http://stackoverflow.com/questions/32893412/command-line-tools-not-working-os-x-el-capitan-macos-sierra
!! node (libuv) kqueue(): Too many open files in system
I just restarted my laptop to clear this; more context here: http://stackoverflow.com/questions/36902605/libuv-kqueue-too-many-open-files-in-system-when-trying-to-write-a-file-in-n
! references
mozilla html references - https://developer.mozilla.org/en-US/docs/Web/HTML
mozilla JavaScript data types and data structures - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures
http://handlebarsjs.com/
https://en.wikipedia.org/wiki/Ember.js
https://grantnorwood.com/why-i-chose-ember-over-react/
http://www.creativebloq.com/web-design/react-goes-head-head-emberjs-31514361
http://www.slideshare.net/mraible/comparing-hot-javascript-frameworks-angularjs-emberjs-and-reactjs-springone-2gx-2015
ember inspector https://www.youtube.com/watch?v=ufNyHNK790g
!! ember documentation
ember documentation https://guides.emberjs.com
ember-doc http://www.ember-doc.com/
ember api http://emberjs.com/api/
!! ember books
emberjs cookbook https://www.safaribooksonline.com/library/view/emberjs-cookbook/9781783982202/
!! other ember tutorials
http://yoember.com/
https://emberway.io/ember-get-shit-done-36383c2ccc53#.w48qcf9yf
http://ember101.com/
!! firebase
https://firebase.google.com/docs/database/web/start
https://firebase.google.com/docs/storage/web/start
!! ember spreadsheet
https://github.com/ConorLinehan/ember-spreadsheet
https://github.com/colinhunt/ember-spreadsheet
https://github.com/foxnewsnetwork/table-grid-2d
http://jspreadsheets.com/ember-table.html , https://github.com/addepar/ember-table
http://discuss.emberjs.com/t/spreadsheet-data-model-with-ember-data/6648
https://www.foraker.com/blog/google-spreadsheets-on-ember
https://www.npmjs.com/package/ember-cli-sheetjs
https://www.freelancer.com/projects/Javascript-Excel/Ember-table-EmberJS-Excel-Spreadsheets/
A spreadsheet in fewer than 30 lines of JavaScript, no library used https://news.ycombinator.com/item?id=6725387 , http://jsfiddle.net/ondras/hYfN3/ , https://news.ycombinator.com/item?id=6725685
!! ember reactive programming
https://emberway.io/ember-js-reactive-programming-computed-properties-and-observers-cf80c2fbcfc#.kc1ei8t3n
http://frontside.io/blog/2014/09/21/reactive-modeling-with-ember.html
Concurrent FRP for Functional GUIs http://elm-lang.org/papers/concurrent-frp.pdf
https://blog.risingstack.com/functional-reactive-programming-with-the-power-of-nodejs-streams/
The introduction to Reactive Programming you've been missing https://gist.github.com/staltz/868e7e9bc2a7b8c1f754
https://spin.atomicobject.com/2014/12/15/ember-js-computed-properties/
http://www.oreilly.com/programming/free/files/why-reactive.pdf
!! ember functions used
!!! get/set
http://discuss.emberjs.com/t/definitive-guide-of-when-to-use-get/6789
!! ember debugging
https://www.youtube.com/watch?v=mXHzC0LdTuk <- How to debug an Ember.js application <- GOOD STUFFFFFF (conditional breakpoint, console log)
http://www.akshay.cc/blog/2013-02-22-debugging-ember-js-and-ember-data.html <- good stuff
http://www.annema.me/debugging-ember <- watch the talk here
https://spin.atomicobject.com/2013/09/19/ember-inspector-chrome-plugin/
http://stackoverflow.com/questions/26412628/console-logging-on-ember-js
http://stackoverflow.com/questions/18246635/how-to-inspect-ember-js-objects-in-the-console
http://blog.salsify.com/engineering/ember-debug-logger
! others
!! nodejs + oracle
https://github.com/kaven276/noradle
https://jsao.io/2015/02/real-time-data-with-node-js-socket-io-and-oracle-database/
https://technology.amis.nl/2016/02/06/rest-api-on-node-js-and-express-for-data-retrieved-from-oracle-database-with-node-oracledb-database-driver-running-on-application-container-cloud/
https://technology.amis.nl/tag/node-js/
https://technology.amis.nl/tag/node-oracledb/
An Intro to JavaScript Web Apps on Oracle Database http://nyoug.org/wp-content/uploads/2015/04/McGhan_JavaScript.pdf
http://stackoverflow.com/questions/36009085/how-to-execute-stored-procedure-query-in-node-oracledb-if-we-are-not-aware-of
!! ember + socket.io
http://stackoverflow.com/questions/22817908/ember-js-socket-io
building real-time applications with ember https://www.youtube.com/watch?v=nfGORL8ebn8
http://www.broerse.net/wordpress/2013/08/29/ember-js-node-js-clock-via-socket-io/
!! ember + websockets
http://www.programwitherik.com/getting-started-with-web-sockets-and-ember/
!! ember + stripe
http://stackoverflow.com/questions/24678980/how-do-i-add-the-stripe-js-to-an-ember-cli-app
http://hawkins.io/2013/06/ember-and-stripe-with-custom-forms/
http://hawkins.io/2013/06/ember-and-stripe-checkout/
!! ember-mega-giveaway - build apps
!!! build pacman game
http://www.jeffreybiles.com/build-pacman
!!! ember data in the wild - purchased - check https://leanpub.com/user_dashboard/library
https://leanpub.com/emberdatainthewild , https://leanpub.com/b/ember-bundle
!!! rock and roll emberjs
http://balinterdi.com/rock-and-roll-with-emberjs/
!!! ember iframe
http://stackoverflow.com/questions/20699874/comunicating-with-iframe-in-ember-js
http://stackoverflow.com/questions/38839740/how-do-i-embed-an-ember-js-app-in-other-sites-without-an-iframe
https://guides.emberjs.com/v2.7.0/configuring-ember/embedding-applications/
https://github.com/mitchlloyd/ember-islands
!!! ember-extended + Firebase Queries + APIs
https://www.learnhowtoprogram.com/javascript/ember-js
https://www.learnhowtoprogram.com/javascript/ember-extended
https://www.learnhowtoprogram.com/javascript/ember-extended/firebase-queries
https://www.learnhowtoprogram.com/rails/apis
!! Boilerplate github repos
https://github.com/JakeDluhy/node-boilerplate
https://github.com/JakeDluhy/ember-boilerplate
https://github.com/JakeDluhy/node-react-boilerplate
!! time series emberjs
https://discuss.emberjs.com/t/time-series-data-in-ember-data/6945
<<<
We just create simple model objects that extend Ember.Object and have fetch() methods on them. When you call fetch(), it looks at its state and sends an XHR to the appropriate endpoint for the specified time range.
The data structures we’re operating on are quite complex, but the data handling is quite simple—perhaps surprisingly so.
<<<
https://www.skylight.io/
https://discuss.emberjs.com/t/chart-datapoints-in-ember-data-ala-skylight/7622/2
https://github.com/wrobstory/mcflyin
https://github.com/Addepar/ember-charts/issues/44
https://opensource.addepar.com/ember-charts/#/documentation
!! emberjs d3
https://www.youtube.com/results?search_query=Sam+Selikoff+d3
http://www.samselikoff.com/blog/ember-d3-getting-new-data-within-a-route/
https://simplereach.com/content/why-we-use-d3-and-ember-for-data-visualization
https://discuss.emberjs.com/t/using-d3-to-render-data-based-charts-in-ember/14151
https://ember-twiddle.com/45ac2a5f201128044aa6a25706ea1125?openFiles=components.pie-chart.js%2Ctemplates.components.pie-chart.hbs
!! emberjs websocket
https://www.google.com/search?ei=pYm6XNqGDubB_QaS1rmwDQ&q=emberjs+websocket&oq=emberjs+websocket&gs_l=psy-ab.3..35i39j0j0i22i30l7j0i22i10i30.14417.14417..14698...0.0..0.70.70.1......0....1..gws-wiz.......0i71.QQQv4tjcAa0
https://www.programwitherik.com/getting-started-with-web-sockets-and-ember/
! video
https://code.tutsplus.com/courses/end-to-end-analytics
<<<
analytics dashboard for a fictional e-commerce company. On the web server side, we’ll use ExpressJS, MongoDB, NPM and NodeJS. For the client side of things, we’ll be using EmberJS, RequireJS, D3 and jQueryUI to build the various components of the dashboard
<<<
! ember blogs/bloggers
https://mike.works/
! emberscreencasts
!! CRUD
https://www.emberscreencasts.com/search?q=crud&hPP=20&idx=Post_production&p=0&is_v=1
!! Ember with Rails Introduction Series
https://gorails.com/series/ember-with-rails-introduction
! other emberjs community paid screencast
!! embermap
https://embermap.com/topics
! ember and cordova
http://journal.wingmen.fi/5min-mobile-app-ember-cordova/
http://embercordova.com/
https://cordova.apache.org/docs/en/latest/guide/support/index.html
! new features
!! ember octane
http://hangaroundtheweb.com/2018/08/ember-octane-everything-one-can-expect-in-the-next-ember-edition/
!! ember glimmer
https://glimmerjs.com/
https://unix.stackexchange.com/questions/147044/cat-dev-null-emptied-my-log-file-but-size-did-not-change
''it's this simple:''
{{{
> file
}}}
http://www.solvemyissue.com/2013/03/rdbms-enabling-and-disabling-database.html <-- do this on each RAC node
http://shrikantrao.wordpress.com/2011/12/29/oracle-11-2-new-feature-chopt-utility/
http://docs.oracle.com/cd/E11882_01/install.112/e17214/postinst.htm#CHDBDCGE
http://surachartopun.com/2009/06/check-rac-option-in-oracle-binary.html
http://askdba.org/weblog/2011/09/11gr2enable-and-disable-oracle-feature-with-chopt/
http://blogdaprima.com/2011/oracle-11gr2-enabling-and-disabling-database-options-after-installation/
https://blogs.oracle.com/d/entry/endianness
https://www.google.com/search?q=index+contention+1415053316&oq=index+contention+1415053316&aqs=chrome..69i57.2715j1j7&sourceid=chrome&ie=UTF-8
https://jonathanlewis.wordpress.com/2008/09/29/index-analysis-2/ <-index block split
https://www.freelists.org/post/oracle-l/TX-index-contention
table modification (TM) lock, often caused by unindexed foreign keys
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-enq-tm---contention
https://aprakash.wordpress.com/2011/01/17/enq-tx-row-lock-contention-and-enqtm-contention/
Bug 9801919 - "enq: HW - contention" against segments that add an extent frequently during high concurrency (Doc ID 9801919.8)
How to convert FND_LOBS from basicfile to securefile LOBS and overcome 'enq: HW contention' waits? (Doc ID 1451379.1)
Resolving Issues Where Sessions show "enq: HW - contention" Not Due to Inserts (Doc ID 1476196.1)
Resolving Issues Where Sessions show "enq: HW - contention" Due to Inserts with Suboptimal Segment Storage Configuration (Doc ID 1475393.1)
HW Lock "HighWater Lock" V8/8i (Doc ID 1065358.6)
High Waits On HW Enqueue due to insert into LOB column (Doc ID 1349285.1)
enq: HW - contention waits https://community.oracle.com/message/2523561
http://askdba.org/weblog/2008/03/hw-enqueue-contention-with-lob/
http://dbasolved.com/2014/03/05/combat-with-an-enqueue-wait-event-enq-hwcontention/
https://www.freelists.org/post/oracle-l/Enq-SS-Contention,3
http://lefterhs.blogspot.com/2009/02/temporary-tablespace-ss-contention-in.html
<<<
Configuration "enq: SS - contention"
• SS enqueue is used for protecting extent caching/un-caching operations. If there's a huge amount of SS enqueue, it is likely that a SQL went crazy with sorting or (hash) joining and used up all the cached extents in its (local) instance. If this happens, the other instance(s) are asked to release the soft reservation for a bunch of extents.
• Both SQLs (ga3w4k1gnpmmx and 7jh6z1cxav5gy) are also running whenever this happens. Although it only occurred on the database on 09/18. If this happens every time these two SQLs are executed, then the following could be done:
• Option 1) SQLs must be optimized to do less HASH JOIN
• Option 2) Create a separate TEMP2 tablespace and assign it to a new user that would only execute the two queries and also bind that SQL to a specific instance (through a service). This eliminates the cross-instance extent caching/un-caching
• Option 3) Create a new TEMP2 tablespace and set it as the new default temporary tablespace. Then kill all sessions blocked by SS enqueue.
<<<
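A hedged sketch of what options 2 and 3 might look like in SQL (names and sizes are placeholders; the tempfile clause assumes OMF, otherwise specify a file name):
{{{
-- create the separate temporary tablespace
create temporary tablespace temp2 tempfile size 100g;

-- option 2: assign it only to the user running the two heavy SQLs
alter user batch_user temporary tablespace temp2;

-- option 3: make it the database-wide default instead
alter database default temporary tablespace temp2;
}}}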
https://rollbar.com/features/
https://rollbar.com/why-rollbar/
! Rollbar vs Raygun vs Sentry vs Airbrake vs Bugsnag vs OverOps
http://blog.takipi.com/the-complete-guide-to-error-tracking-tools-rollbar-vs-raygun-vs-sentry-vs-airbrake-vs-bugsnag-vs-overops/
http://guyharrison.squarespace.com/blog/2010/7/12/stolen-cpu-on-xen-based-virtual-machines.html
http://www.vmware.com/pdf/esx2_using_esxtop.pdf
https://support.quest.com/SolutionDetail.aspx?id=SOL96214&pr=Spotlight%20on%20Oracle
{{{
event="10261 trace name context forever, level <MEM in KB>"
}}}
ORA-00600 [723], [67108952], [pga heap] When Event 10261 Set To Limit The PGA Leak (Doc ID 1162423.1)
How To Super-Size Work Area Memory Size Used By Sessions? (Doc ID 453540.1)
October 2013 Best Practices for Database Consolidation On Exadata Database Machine http://www.oracle.com/technetwork/database/features/availability/exadata-consolidation-522500.pdf
https://fritshoogland.wordpress.com/2014/12/16/oracle-database-operating-system-memory-allocation-management-for-pga-part-2-oracle-11-2/
Mmmh, apparently you can set event 10261 to limit PGA usage in 11g for consolidation. #didntknow #oracle #exadata https://twitter.com/fritshoogland/status/405376026059882497 , https://twitter.com/bdcbuning/status/544780920885358592
https://dioncho.wordpress.com/2010/06/14/rapid-pga-size-increase/
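A hedged sketch of setting the event; per the note above the level is the PGA cap in KB, so 1048576 (= 1 GB) here is just an example value:
{{{
-- session level
alter session set events '10261 trace name context forever, level 1048576';

-- instance wide, takes effect after restart
alter system set event='10261 trace name context forever, level 1048576' scope=spfile;
}}}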
exachk: healtcheck for Exadata https://blogs.oracle.com/XPSONHA/entry/exachk_healtcheck_for_exadata
Oracle Exadata Database Machine exachk or HealthCheck (Doc ID 1070954.1) - exachk 2.1.3
OEM12C Message :Exachk Not Running Recommendation: Exachk has not been run. Please execute Exachk and re-evaluate the metric. (Doc ID 1528068.1)
Master Note On Exalogic Instrumentation (ASR, Exachk, Exalogs, ELLC, OCM, OSWatcher, RDA, VMPinfo3) (Doc ID 1506224.1)
http://www.askmaclean.com/archives/exadata-health-check.html
Oracle Exadata’s Exachk and Oracle Enterprise Manager 12c: Keeping Up with Oracle Exadata [CON2699] https://oracleus.activeevents.com/2014/connect/sessionDetail.ww?SESSION_ID=2699
http://dbasolved.com/2014/01/23/configure-oem12c-to-perform-checkups-on-exadata-exachk/
ORAchk replaces EXAchks and RACcheck https://juliandontcheff.wordpress.com/2014/06/21/orachk-replaces-exachks-and-raccheck/
https://blogs.oracle.com/patch/entry/orachk
https://levipereira.wordpress.com/2014/03/18/orachk-health-checks-for-the-oracle-stack-orachk-2-2-4-and-above/
ORAchk - Health Checks for the Oracle Stack (Doc ID 1268927.2)
! exachk w/ OEM
http://dbasolved.com/2014/01/23/configure-oem12c-to-perform-checkups-on-exadata-exachk/
file:///C:/Users/karl/Downloads/CON2699_Curtis-Exachk%20and%20OEM12c.pdf
! AUTORUN_SCHEDULE
https://support.oracle.com/epmos/faces/CommunityDisplay?resultUrl=https%3A%2F%2Fcommunity.oracle.com%2Fthread%2F3527124&_afrLoop=226615052593283&resultTitle=Schedule+Exachk&commId=3527124&displayIndex=1&_afrWindowMode=0&_adf.ctrl-state=1dgqr04xsn_126
http://oracleabout.blogspot.com/2014/11/schedule-exachk-using-oem-and-command.html
Oracle Platinum Services: Exadata Exachk Automation Project (Doc ID 2043991.1)
! collection manager
https://blogs.oracle.com/RobertGFreeman/entry/thoughts_and_musings_on_oracle
Unable to run graphical tools (gui) (runInstaller, dbca and dbua) on Exadata 12.1.2.1.0 - 12.1.2.2.0 (Doc ID 1969308.1)
http://fontanadba.blogspot.com/2016/06/enabling-x11-on-exadata-and-oracle.html
https://easyoradba.com/2016/06/25/install-gui-x11-packages-and-vnc-server-on-exadata-compute-nodes/
also check andy's guide
<<<
related oracle support notes on antivirus:
> Can We Install An Anti Virus Software On Exadata Compute Nodes? (Doc ID 1935746.1)
> Installing Third Party Monitoring Tools in Exadata Environment (Doc ID 1157343.1)
> Security Hardening on Exadata (Doc ID 1396654.1)
External security hardening:
This is allowed only on Exadata Database servers, nothing should be done on cell servers.
Considerations:
1) Exadata Database servers:
You can apply your standard server security policy to the database nodes, provided the fundamental requirement is met:
you keep the software items needed for Oracle software to function and perform, during its lifetime, within the customer's desired goals for use of Oracle software.
2) Exadata Cell storage servers:
Nothing may be altered on the cells. We will not be able to preserve such modifications across cell patches, or we simply may not be able to upgrade a cell due to such modifications.
<<<
https://www.freelists.org/post/oracle-l/Exadata-and-antivirus,5
Anti-virus for Exadata server https://community.oracle.com/thread/2801410
! papers
https://www.evernote.com/shard/s48/sh/f4935a26-5397-42cb-b55b-cca71332d1ad/2b88adc38e8d346ada0868bb122ceabc
! 11204
{{{
19:46:00 SYS@demo1> select table_name from dictionary where table_name like '%CELL%';
TABLE_NAME
------------------------------
V$CELL
V$CELL_CONFIG
V$CELL_OFL_THREAD_HISTORY
V$CELL_REQUEST_TOTALS
V$CELL_STATE
V$CELL_THREAD_HISTORY
GV$CELL
GV$CELL_CONFIG
GV$CELL_OFL_THREAD_HISTORY
GV$CELL_REQUEST_TOTALS
GV$CELL_STATE
GV$CELL_THREAD_HISTORY
12 rows selected.
}}}
! 12c
{{{
19:42:50 SYS@dbfs1> select table_name from dictionary where table_name like '%CELL%';
TABLE_NAME
--------------------------------------------------------------------------------------------------------------------------------
DBA_HIST_CELL_CONFIG
DBA_HIST_CELL_CONFIG_DETAIL
DBA_HIST_CELL_DB
DBA_HIST_CELL_DISKTYPE
DBA_HIST_CELL_DISK_NAME
DBA_HIST_CELL_DISK_SUMMARY
DBA_HIST_CELL_GLOBAL
DBA_HIST_CELL_GLOBAL_SUMMARY
DBA_HIST_CELL_IOREASON
DBA_HIST_CELL_IOREASON_NAME
DBA_HIST_CELL_METRIC_DESC
DBA_HIST_CELL_NAME
DBA_HIST_CELL_OPEN_ALERTS
GV$CELL
GV$CELL_CONFIG
GV$CELL_CONFIG_INFO
GV$CELL_DB
GV$CELL_DB_HISTORY
GV$CELL_DISK
GV$CELL_DISK_HISTORY
GV$CELL_GLOBAL
GV$CELL_GLOBAL_HISTORY
GV$CELL_IOREASON
GV$CELL_IOREASON_NAME
GV$CELL_METRIC_DESC
GV$CELL_OFL_THREAD_HISTORY
GV$CELL_OPEN_ALERTS
GV$CELL_REQUEST_TOTALS
GV$CELL_STATE
GV$CELL_THREAD_HISTORY
V$CELL
V$CELL_CONFIG
V$CELL_CONFIG_INFO
V$CELL_DB
V$CELL_DB_HISTORY
V$CELL_DISK
V$CELL_DISK_HISTORY
V$CELL_GLOBAL
V$CELL_GLOBAL_HISTORY
V$CELL_IOREASON
V$CELL_IOREASON_NAME
V$CELL_METRIC_DESC
V$CELL_OFL_THREAD_HISTORY
V$CELL_OPEN_ALERTS
V$CELL_REQUEST_TOTALS
V$CELL_STATE
V$CELL_THREAD_HISTORY
47 rows selected.
}}}
http://guyharrison.squarespace.com/blog/2011/7/31/a-perl-utility-to-improve-exadata-cellcli-statistics.html
http://bdrouvot.wordpress.com/2012/11/27/exadata-real-time-metrics-extracted-from-cumulative-metrics/
http://bdrouvot.wordpress.com/2013/03/05/exadata-real-time-metrics-extracted-from-cumulative-metrics-part-ii/
http://www.oracle.com/technetwork/articles/oem/exadata-commands-part3-402445.html
http://www.oracle.com/technetwork/articles/oem/exadata-commands-part4-402446.html
http://www.databasejournal.com/features/oracle/monitoring-exadata-storage-servers-with-the-cellcli.html
-- cell latency
http://books.google.com/books?id=1Aqw6mrxz5AC&pg=PA221&lpg=PA221&dq=exadata+metricHistoryDays&source=bl&ots=T9vOisPOuA&sig=FlPl5sGkyyIlz8oT12Gjky8NEb0&hl=en&sa=X&ei=IsMkUsDAIcTnqwGzsYCQAQ&ved=0CEkQ6AEwAg#v=onepage&q=exadata%20metricHistoryDays&f=false
exadata cellwall
https://www.google.com/search?q=exadata+cellwall&oq=exadata+cellwall+&aqs=chrome..69i57.3225j0j0&sourceid=chrome&ie=UTF-8
Configuring Security for Oracle Exadata System Software
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/sagug/exadata-storage-server-security.html#GUID-9858E126-0D9F-4F99-BE68-391E77916EC6
http://ermanarslan.blogspot.com/2016/11/exadata-cell-ntp-problem-real-life.html
http://www.pythian.com/blog/adding-networks-to-exadata-fun-with-policy-routing/
http://www.linuxquestions.org/questions/linux-networking-3/routing-problems-w-oracle-linux-exadata-solved-4175445695/
http://sameer-samspeaks.blogspot.com/2013/10/one-or-more-storage-server-has-test.html
http://www.akadia.com/services/ora_exchange_partition.html
http://www.oracle-base.com/articles/misc/partitioning-an-existing-table-using-exchange-partition.php
http://apunhiran.blogspot.com/2009/05/oracle-partitioning-exchange-partition.html
http://www.oaktable.net/media/tim-gorman-oaktable-world-2012
http://hemantoracledba.blogspot.com/2011/10/dbmsredefinition-to-redefine-partition.html
wrong way:
prepare, bind, execute (re-parse on every execution)
right way:
prepare once, then bind, execute, bind, execute, bind, execute ...
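A hedged sketch of the "right way" in node-oracledb (table, columns, and credentials are placeholders): keep the SQL text identical so the statement is parsed once and the cached cursor is reused on every execute.
{{{
// hedged sketch, node-oracledb assumed
const oracledb = require('oracledb');

async function loadRows(rows) {
  const conn = await oracledb.getConnection(
    {user: 'scott', password: 'tiger', connectString: 'SERVER/orcl'});
  const sql = 'insert into t (id, name) values (:1, :2)';
  for (const row of rows) {
    // same SQL text every time -> cached statement, no re-prepare
    await conn.execute(sql, row, {autoCommit: false});
  }
  await conn.commit();
  await conn.close();
}
}}}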
<<showtoc>>
https://exercism.io/profiles/karlarao
! watch this
https://www.youtube.com/watch?v=f6xc3QoiC3A
! configure
{{{
brew update && brew install exercism
exercism configure --token=xxx
exercism debug
}}}
! HOWTO view solutions
{{{
https://exercism.io/tracks/python/exercises/<REPLACE THIS WITH EXERCISE NAME>/solutions
}}}
https://www.slideshare.net/HadoopSummit/how-to-understand-and-analyze-apache-hive-query-execution-plan-for-performance-debugging <-- GOOD STUFF
https://community.hortonworks.com/questions/108237/hive-explain-plan-interpretation.html
http://www.youtube.com/results?search_query=exponential+moving+average+baron&oq=exponential+moving+average+baron&gs_l=youtube.3...6120.7028.0.7630.6.6.0.0.0.0.67.325.6.6.0...0.0...1ac.1.11.youtube.YOu-n_xDKGg
https://vividcortex.com/blog/2013/06/25/quantifying-abnormal-behavior/
http://velocityconf.com/velocity2013/public/schedule/detail/28118
http://www.youtube.com/results?search_query=Baron+Schwartz&oq=Baron+Schwartz&gs_l=youtube.3...1089822.1089822.0.1090932.1.1.0.0.0.0.64.64.1.1.0...0.0...1ac.2.11.youtube.0N7mCQjr-aQ
https://github.com/VividCortex/ewma
http://golang.org/#
Online Algorithms in High-frequency Trading http://queue.acm.org/detail.cfm?id=2534976
> Online mean algorithm
> Online variance algorithm
> Online regression algorithm
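For reference, a minimal sketch of the online mean/variance idea (Welford's algorithm) in JavaScript; single pass, O(1) memory:
{{{
function makeOnlineStats() {
  let n = 0, mean = 0, m2 = 0;
  return {
    push(x) {
      n += 1;
      const delta = x - mean;
      mean += delta / n;          // running mean
      m2 += delta * (x - mean);   // running sum of squared deviations
    },
    mean: () => mean,
    variance: () => (n > 1 ? m2 / (n - 1) : 0)  // sample variance
  };
}

// usage: feed observations as they arrive, read stats any time
const s = makeOnlineStats();
[3, 5, 8, 13].forEach(x => s.push(x));
console.log(s.mean(), s.variance());
}}}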
{{{
Zone root file systems are stored under:
/zones/<zone_name>
root@er1p2app02:/# cd /zones
root@er1p2app02:/zones# ls -al
total 53
drwxr-xr-x 17 root root 17 Nov 10 2015 .
drwxr-xr-x 30 root root 34 Feb 1 20:23 ..
drwx------ 4 root root 4 Jan 30 18:46 bw-zc
drwx------ 4 root root 4 Jan 30 18:46 ecc-zc
drwx------ 4 root root 4 Jan 30 18:47 er1zboe045v
drwx------ 4 root root 4 Jan 30 18:47 er1zboe049v
drwx------ 4 root root 4 Jan 30 18:47 er1zbw026v
drwx------ 4 root root 4 Jan 30 18:47 er1zbw030v
drwx------ 4 root root 4 Jan 30 18:47 er1zecc007v
drwx------ 4 root root 4 Jan 30 18:47 er1zecc010v
drwx------ 4 root root 4 Jan 30 18:47 er1zpo039v
drwx------ 4 root root 4 Jan 30 18:47 er1zpo041v
drwx------ 4 root root 4 Jan 30 18:47 er1zrhl014v
drwx------ 4 root root 4 Jan 30 18:46 grc-zc
drwx------ 4 root root 4 Jan 30 18:46 gts-zc
drwx------ 4 root root 4 Jan 30 18:46 po-zc
drwx------ 4 root root 4 Jan 30 18:46 rhl-zc
root@er1p2app02:/zones# cd bw-zc/
root@er1p2app02:/zones/bw-zc# ls -al
total 12
drwx------ 4 root root 4 Jan 30 18:46 .
drwxr-xr-x 17 root root 17 Nov 10 2015 ..
drwxr-xr-x 2 root root 2 Jan 30 18:46 lu
drwxr-xr-x 24 root root 27 Feb 16 17:45 root
root@er1p2app02:~# zonecfg -z bw-zc export
create -b
set brand=solaris
set zonepath=/zones/bw-zc
set autoboot=false
set autoshutdown=shutdown
set limitpriv=default,proc_priocntl,proc_clock_highres
set ip-type=shared
add net
set address=er1zbw021v
set configure-allowed-address=true
set physical=sc_ipmp0
end
add net
set address=er1zbw021v-i
set configure-allowed-address=true
set physical=stor_ipmp0
end
add attr
set name=image-uuid
set type=string
set value=f844b7a5-153f-44fd-a0c2-562c89ef85f5
end
add attr
set name=cluster
set type=boolean
set value=true
end
}}}
{{{
sqlplus sys/oracle@enkdb03.enkitec.com/dw.enkitec.com as sysdba
sqlplus scott/aaa@SERVER/orcl
sqlplus scott/aaa@SERVER:1234/orcl
sqlplus scott/aaa@//SERVER/orcl
}}}
! with password in it
{{{
C:\cygwin64\home\karl\client\scripts-master\performance>C:\oracle\instantclient_12_2\sqlplus.exe C##KARAOO/"password42!"@//10.10.10.42:1521/ORCL.example.com
SQL*Plus: Release 12.2.0.1.0 Production on Tue Mar 10 13:26:15 2020
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Last Successful login time: Tue Mar 10 2020 13:23:14 -04:00
Connected to:
Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production
SQL> exit
Disconnected from Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production
}}}
! prompt password
{{{
C:\cygwin64\home\karl\client\scripts-master\performance>C:\oracle\instantclient_12_2\sqlplus.exe C##KARAOO@\"10.10.10.42:1521/ORCL.example.com\"
SQL*Plus: Release 12.2.0.1.0 Production on Tue Mar 10 13:42:44 2020
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Enter password:
Last Successful login time: Tue Mar 10 2020 13:38:32 -04:00
Connected to:
Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production
SQL>
SQL>
SQL>
SQL> exit
Disconnected from Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production
}}}
! references
https://dba.stackexchange.com/questions/140032/how-to-connect-to-oracle-12c-from-sqlplus-without-password-in-command-line
http://carlos.bueno.org/
http://www.laurenipsum.org/
project page: https://research.fb.com/prophet-forecasting-at-scale/
paper: https://facebookincubator.github.io/prophet/static/prophet_paper_20170113.pdf
github: https://github.com/facebookincubator/prophet
the engineers: http://seanjtaylor.com and http://www.lethalletham.com
https://www.r-bloggers.com/prophet-how-facebook-operationalizes-time-series-forecasting-at-scale/
https://facebookincubator.github.io/prophet/
https://facebookincubator.github.io/prophet/docs/quick_start.html#r-api
https://www.slideshare.net/seanjtaylor/automatic-forecasting-at-scale
https://twitter.com/seanjtaylor/status/834818352992251904
http://blog.revolutionanalytics.com/2014/05/facebooks-on-line-course-on-exploratory-data-analysis-with-r.html
https://www.rdocumentation.org/packages/prophet/versions/0.1/topics/prophet
https://en.wikipedia.org/wiki/Generalized_additive_model
! sample data
https://github.com/facebookincubator/prophet/tree/master/examples
! notebooks
https://github.com/facebookincubator/prophet/tree/master/notebooks
! others
https://cran.r-project.org/web/packages/wikipediatrend/vignettes/using-wikipediatrend.html
! prophet questions
Prophet forecasting questions https://github.com/facebookincubator/prophet/issues/24 <- this is me!
Time-varying seasonality in prophet https://github.com/facebookincubator/prophet/issues/20
! prophet stan code
https://github.com/facebookincubator/prophet/tree/master/R/inst/stan
! prophet HOWTO
https://www.digitalocean.com/community/tutorials/a-guide-to-time-series-forecasting-with-prophet-in-python-3
https://www.digitalocean.com/community/tutorials/a-guide-to-time-series-forecasting-with-arima-in-python-3
https://www.digitalocean.com/community/tutorials/a-guide-to-time-series-visualization-with-python-3
a nice explanation/conversation about redundancy (normal, high, and caveats) in exadata
https://www.evernote.com/shard/s48/sh/afde1bca-f41f-4cba-9703-2705c9ebd3ec/6a053d57366462ef84d9d8aec8b97c8a
High Redundancy Disk Groups in an Exadata Environment [ID 1339373.1]
http://dbatrain.wordpress.com/2011/03/28/mirror-mirror-on-the-exadata/
http://dbatrain.wordpress.com/2012/01/16/mounting-failures-with-your-failgroups/
-- PARTNER DISKS
http://asmsupportguy.blogspot.com/2011/07/how-many-partners.html
http://www.centroid.com/knowledgebase/blog/where-do-our-extents-reside-on-asm-grid-disks-in-exadata
https://twiki.cern.ch/twiki/bin/view/PDBService/ASM_Internals#X_KFFXP_metadata_file_extent_poi
http://jarneil.wordpress.com/2011/10/26/exadata-storage-cells-and-asm-mirroring-and-disk-partnering/
http://afatkulin.blogspot.com/2010/07/asm-mirroring-and-disk-partnership.html
{{{
SELECT disk "Disk", count(number_kfdpartner) "Number of partners"
FROM x$kfdpartner
WHERE grp=1
GROUP BY disk
ORDER BY 1;
SYS@+ASM1>
set lines 300
col path format a50
select disk_number, path, total_mb, failgroup from v$asm_disk
where path like '%DATA%'
order by failgroup, path;
DISK_NUMBER PATH TOTAL_MB FAILGROUP
----------- -------------------------------------------------- ---------- ------------------------------
71 o/192.168.203.201/DATA_CD_DISK00_cell1 0 CELL1
64 o/192.168.203.201/DATA_CD_DISK01_cell1 0 CELL1
50 o/192.168.203.201/DATA_CD_DISK02_cell1 0 CELL1
42 o/192.168.203.201/DATA_CD_DISK03_cell1 0 CELL1
47 o/192.168.203.201/DATA_CD_DISK04_cell1 0 CELL1
63 o/192.168.203.201/DATA_CD_DISK05_cell1 0 CELL1
39 o/192.168.203.201/DATA_CD_DISK06_cell1 0 CELL1
68 o/192.168.203.201/DATA_CD_DISK07_cell1 0 CELL1
41 o/192.168.203.201/DATA_CD_DISK08_cell1 0 CELL1
54 o/192.168.203.201/DATA_CD_DISK09_cell1 0 CELL1
58 o/192.168.203.201/DATA_CD_DISK10_cell1 0 CELL1
70 o/192.168.203.201/DATA_CD_DISK11_cell1 0 CELL1
19 o/192.168.203.202/DATA_CD_DISK00_cell2 1501184 CELL2
23 o/192.168.203.202/DATA_CD_DISK01_cell2 1501184 CELL2
16 o/192.168.203.202/DATA_CD_DISK02_cell2 1501184 CELL2
15 o/192.168.203.202/DATA_CD_DISK03_cell2 1501184 CELL2
17 o/192.168.203.202/DATA_CD_DISK04_cell2 1501184 CELL2
18 o/192.168.203.202/DATA_CD_DISK05_cell2 1501184 CELL2
13 o/192.168.203.202/DATA_CD_DISK06_cell2 1501184 CELL2
12 o/192.168.203.202/DATA_CD_DISK07_cell2 1501184 CELL2
22 o/192.168.203.202/DATA_CD_DISK08_cell2 1501184 CELL2
21 o/192.168.203.202/DATA_CD_DISK09_cell2 1501184 CELL2
14 o/192.168.203.202/DATA_CD_DISK10_cell2 1501184 CELL2
20 o/192.168.203.202/DATA_CD_DISK11_cell2 1501184 CELL2
25 o/192.168.203.203/DATA_CD_DISK00_cell3 0 CELL3
35 o/192.168.203.203/DATA_CD_DISK01_cell3 0 CELL3
26 o/192.168.203.203/DATA_CD_DISK02_cell3 0 CELL3
20 o/192.168.203.203/DATA_CD_DISK03_cell3 0 CELL3
5 o/192.168.203.203/DATA_CD_DISK04_cell3 0 CELL3
23 o/192.168.203.203/DATA_CD_DISK05_cell3 0 CELL3
31 o/192.168.203.203/DATA_CD_DISK06_cell3 0 CELL3
14 o/192.168.203.203/DATA_CD_DISK07_cell3 0 CELL3
13 o/192.168.203.203/DATA_CD_DISK08_cell3 0 CELL3
16 o/192.168.203.203/DATA_CD_DISK09_cell3 0 CELL3
1 o/192.168.203.203/DATA_CD_DISK10_cell3 0 CELL3
18 o/192.168.203.203/DATA_CD_DISK11_cell3 0 CELL3
36 rows selected.
-- search for the file
asmcmd find --type datafile +DATA "*"
select
number_kffxp "file",
xnum_kffxp "virtual extent",
pxn_kffxp "physical extent",
lxn_kffxp "extent copy",
disk_kffxp "disk",
au_kffxp "allocation unit"
from
x$kffxp
where group_kffxp=1
and number_kffxp=3
and xnum_kffxp <> 2147483648
order by 1,2,3;
SYS@+ASM1>
select
number_kffxp "file",
xnum_kffxp "virtual extent",
pxn_kffxp "physical extent",
lxn_kffxp "extent copy",
disk_kffxp "disk",
au_kffxp "allocation unit"
from
x$kffxp
where group_kffxp=1
and number_kffxp=3
and xnum_kffxp <> 2147483648
order by 1,2,3;
file virtual extent physical extent extent copy disk allocation unit
---------- -------------- --------------- ----------- ---------- ---------------
3 0 0 0 14 152
3 0 1 1 10 4294967294
3 0 2 2 65534 4294967294
3 1 3 0 21 144 <-- cell2
3 1 4 1 5 4294967294 <-- cell3
SYS@+ASM1>
SELECT d.group_number "Group#", d.disk_number "Disk#", p.number_kfdpartner "Partner disk#", d.failgroup
FROM x$kfdpartner p, v$asm_disk d
WHERE p.disk=d.disk_number
and p.grp=d.group_number
ORDER BY 1, 2, 3;
Group# Disk# Partner disk# FAILGROUP
---------- ---------- ------------- ------------------------------
1 0 16 CELL3
17 CELL3
19 CELL3
23 CELL3
1 15 CELL3
17 CELL3
21 CELL3
23 CELL3
2 14 CELL3
17 CELL3
21 CELL3
22 CELL3
3 15 CELL3
19 CELL3
22 CELL3
23 CELL3
4 15 CELL3
16 CELL3
18 CELL3
22 CELL3
5 13 CELL3
17 CELL3
20 CELL3
21 CELL3
6 12 CELL3
13 CELL3
20 CELL3
23 CELL3
7 18 CELL3
19 CELL3
20 CELL3
22 CELL3
8 12 CELL3
13 CELL3
16 CELL3
18 CELL3
9 12 CELL3
14 CELL3
16 CELL3
21 CELL3
10 14 CELL3
18 CELL3
19 CELL3
20 CELL3
11 12 CELL3
13 CELL3
14 CELL3
15 CELL3
12 6 CELL2
8 CELL2
9 CELL2
11 CELL2
13 5 CELL2
6 CELL2
8 CELL2
11 CELL2
14 2 CELL2
9 CELL2
10 CELL2
11 CELL2
15 1 CELL2
3 CELL2
4 CELL2
11 CELL2
16 0 CELL2
4 CELL2
8 CELL2
9 CELL2
17 0 CELL2
1 CELL2
2 CELL2
5 CELL2
18 4 CELL2
7 CELL2
8 CELL2
10 CELL2
19 0 CELL2
3 CELL2
7 CELL2
10 CELL2
20 5 CELL2
6 CELL2
7 CELL2
10 CELL2
21 1 CELL2
2 CELL2
5 CELL2
9 CELL2
22 2 CELL2
3 CELL2
4 CELL2
7 CELL2
23 0 CELL2
1 CELL2
3 CELL2
6 CELL2
2 12 26 CELL2
30 CELL2
31 CELL2
35 CELL2
13 30 CELL2
31 CELL2
34 CELL2
35 CELL2
14 24 CELL2
28 CELL2
29 CELL2
34 CELL2
15 28 CELL2
29 CELL2
33 CELL2
34 CELL2
16 26 CELL2
27 CELL2
33 CELL2
35 CELL2
17 25 CELL2
32 CELL2
33 CELL2
35 CELL2
18 24 CELL2
27 CELL2
32 CELL2
34 CELL2
19 25 CELL2
26 CELL2
32 CELL2
33 CELL2
20 28 CELL2
30 CELL2
31 CELL2
32 CELL2
21 24 CELL2
27 CELL2
29 CELL2
31 CELL2
22 25 CELL2
26 CELL2
27 CELL2
30 CELL2
23 24 CELL2
25 CELL2
28 CELL2
29 CELL2
24 14 CELL3
18 CELL3
21 CELL3
23 CELL3
25 17 CELL3
19 CELL3
22 CELL3
23 CELL3
26 12 CELL3
16 CELL3
19 CELL3
22 CELL3
27 16 CELL3
18 CELL3
21 CELL3
22 CELL3
28 14 CELL3
15 CELL3
20 CELL3
23 CELL3
29 14 CELL3
15 CELL3
21 CELL3
23 CELL3
30 12 CELL3
13 CELL3
20 CELL3
22 CELL3
31 12 CELL3
13 CELL3
20 CELL3
21 CELL3
32 17 CELL3
18 CELL3
19 CELL3
20 CELL3
33 15 CELL3
16 CELL3
17 CELL3
19 CELL3
34 13 CELL3
14 CELL3
15 CELL3
18 CELL3
35 12 CELL3
13 CELL3
16 CELL3
17 CELL3
3 0 16 CELL3
17 CELL3
19 CELL3
23 CELL3
1 15 CELL3
17 CELL3
21 CELL3
23 CELL3
2 14 CELL3
17 CELL3
21 CELL3
22 CELL3
3 15 CELL3
19 CELL3
22 CELL3
23 CELL3
4 15 CELL3
16 CELL3
18 CELL3
22 CELL3
5 13 CELL3
17 CELL3
20 CELL3
21 CELL3
6 12 CELL3
13 CELL3
20 CELL3
23 CELL3
7 18 CELL3
19 CELL3
20 CELL3
22 CELL3
8 12 CELL3
13 CELL3
16 CELL3
18 CELL3
9 12 CELL3
14 CELL3
16 CELL3
21 CELL3
10 14 CELL3
18 CELL3
19 CELL3
20 CELL3
11 12 CELL3
13 CELL3
14 CELL3
15 CELL3
12 6 CELL2
8 CELL2
9 CELL2
11 CELL2
13 5 CELL2
6 CELL2
8 CELL2
11 CELL2
14 2 CELL2
9 CELL2
10 CELL2
11 CELL2
15 1 CELL2
3 CELL2
4 CELL2
11 CELL2
16 0 CELL2
4 CELL2
8 CELL2
9 CELL2
17 0 CELL2
1 CELL2
2 CELL2
5 CELL2
18 4 CELL2
7 CELL2
8 CELL2
10 CELL2
19 0 CELL2
3 CELL2
7 CELL2
10 CELL2
20 5 CELL2
6 CELL2
7 CELL2
10 CELL2
21 1 CELL2
2 CELL2
5 CELL2
9 CELL2
22 2 CELL2
3 CELL2
4 CELL2
7 CELL2
23 0 CELL2
1 CELL2
3 CELL2
6 CELL2
288 rows selected.
-- to check if ASM will be OK if the grid disks go OFFLINE. The following command should return 'Yes' for the grid disks being listed
dcli -l root -g cell_group cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
}}}
http://stackoverflow.com/questions/12041953/numactl-detects-only-one-node-where-it-should-be-two
<<<
you can fake numa nodes; i do it on single-processor systems to test numa stuff.
just pass an argument to the kernel:
numa=fake=2 will create 2 fake numa nodes
more info here: http://linux-hacks.blogspot.hk/2009/07/fake-numa-nodes-in-linux.html
<<<
{{{
NO_USE_HASH_AGGREGATION hint
}}}
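A hedged example of where the hint goes; it tells the optimizer to use SORT UNIQUE / SORT GROUP BY instead of the HASH variants, which is how the sorted-looking DISTINCT results in the links below come about:
{{{
select /*+ NO_USE_HASH_AGGREGATION */ distinct owner
from dba_objects;
}}}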
https://mahtodeepak05.wordpress.com/2014/11/16/does-distinct-operation-in-oracle-really-return-sort-data/
http://faragotamas.blogspot.com/2009/06/sort-unique-hash-unique-in-case-of.html
https://stackoverflow.com/questions/6598778/solution-for-speeding-up-a-slow-select-distinct-query-in-postgres
https://stackoverflow.com/questions/14045405/make-a-query-with-distinct-clause-run-faster
https://stackoverflow.com/questions/5973850/can-i-optimize-a-select-distinct-x-from-hugetable-query-by-creating-an-index-on
https://stackoverflow.com/questions/754957/slow-distinct-query-in-sql-server-over-large-dataset/29286754#29286754
https://www.periscopedata.com/blog/use-subqueries-to-count-distinct-50x-faster.html
http://www.databasejournal.com/features/mssql/article.php/1438631/Speeding-Up-SELECT-DISTINCT-Queries.htm
http://orasql.org/2012/09/21/distinct-values-by-index-topn/
! file header contention (block class 13)
* there are two layers to this: object-level and tablespace-level INITIAL and NEXT settings
* object-level settings override the tablespace-level settings
! you can change the settings in two ways (see the sketch after this list)
* alter
* move (table) / rebuild (index)
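A hedged sketch of the move/rebuild approach (object names and sizes are placeholders):
{{{
-- table: MOVE recreates the segment with the new extent sizes
alter table t move storage (initial 8m next 8m);

-- indexes go UNUSABLE after the move, so rebuild them
-- (optionally resetting their storage too)
alter index t_pk rebuild storage (initial 8m next 8m);
}}}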
! references
https://myotragusbalearicus.wordpress.com/2011/10/04/change-the-initial-extent-of-a-table/
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-buffer-busy-wait
Database hangs: "gc buffer busy acquire/release" and "gc current block busy" pointing to the tempfile block #2 (Doc ID 2192851.1)
Resolving Intense and "Random" Buffer Busy Wait Performance Problems (Doc ID 155971.1)
http://www.slideshare.net/tanelp/troubleshooting-complex-performance-issues-oracle-seg-contention
http://blog.tanelpoder.com/2013/11/06/diagnosing-buffer-busy-waits-with-the-ash_wait_chains-sql-script-v0-2/
http://ksun-oracle.blogspot.com/2015/12/oracle-bigfile-tablespace-pre.html
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2045988935154
Extent Sizes for Sort, Direct Load and Parallel Operations (PCTAS & PDML) (Doc ID 50592.1)
http://www.arikaplan.com/oracle/ari82997.html
Tablespace storage settings http://www.orafaq.com/forum/t/20350/
http://yong321.freeshell.org/oranotes/LMTDefaultStorage.txt
resizing the INITIAL extent of an object https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:137212348065
http://dba.stackexchange.com/questions/132013/how-to-change-initial-extent-of-an-existing-partition <- GOOD STUFF
http://stackoverflow.com/questions/15867557/finding-gaps-sequential-numbers <- good stuff
http://www.unix.com/shell-programming-and-scripting/155491-need-find-gap-sequence-numbers.html
http://www.linuxforums.org/forum/programming-scripting/201016-script-check-missing-numbers-sequential-file-names.html
http://stackoverflow.com/questions/428109/extract-substring-in-bash
https://www.freebsd.org/cgi/man.cgi?query=ps&manpath=SuSE+Linux/i386+11.3
Which files are missing from a sequence https://ubuntuforums.org/showthread.php?t=1929436
http://unix.stackexchange.com/questions/11989/how-do-i-find-which-files-are-missing-from-a-list
https://www.reddit.com/r/commandline/comments/1afaju/detecting_gaps_in_timestamps_logs/
Filestack - The Super API for End User Content
https://www.filestack.com/
@kevinclosson
Kevin Closson
@MartinDBA setall affects both DIO (O_DIRECT) and Async I/O. Setall enables both but EXT3 *does not* support concurrent writes. Use XFS.
I used the following links for the CPU price list
http://stackoverflow.com/questions/301039/how-can-i-escape-white-space-in-a-bash-loop-list
http://stackoverflow.com/questions/13958903/find-unique-string-within-a-file-up-a-line-and-append
http://stackoverflow.com/questions/18877229/find-a-line-in-a-file-and-add-something-to-the-end-of-the-line-in-bash
http://stackoverflow.com/questions/14835977/bash-how-to-append-word-to-end-of-a-line
http://stackoverflow.com/questions/2869669/in-bash-how-do-i-add-a-string-after-each-line-in-a-file
http://superuser.com/questions/423496/bash-script-for-loop-that-is-breaking-lines-up-instead-of-using-the-whole-line
http://unix.stackexchange.com/questions/48605/bash-iterate-over-a-list-of-strings
http://stackoverflow.com/questions/4935278/concatenate-a-string-to-the-end-of-every-output-line-in-bash-csh
http://stackoverflow.com/questions/19058668/find-a-line-based-on-a-pattern-add-new-columns-from-another-file-to-that-line <-- closest
http://unix.stackexchange.com/questions/88466/sed-find-string-and-append
{{{
set long 32000
set verify off
set pagesize 999
set lines 132
col username format a13
col prog format a22
col sql_text format a90
col sid format 999
col child_number format 99999 heading CHILD
col ocategory format a10
col avg_etime format 9,999,999.99
col etime format 9,999,999.99
select sql_id,
dbms_lob.substr(sql_text,3999,1) sql_text
from dba_hist_sqltext
where dbms_lob.substr(sql_text,3999,1) like nvl('&sql_text',dbms_lob.substr(sql_text,3999,1))
and sql_text not like '%from dba_hist_sqltext where sql_text like nvl(%'
and sql_id like nvl('&sql_id',sql_id)
/
}}}
http://www.linux.com/archive/feature/131063
http://blogs.oracle.com/bmc/entry/fishworks_now_it_can_be
http://blogs.oracle.com/bmc/entry/good_bye_sun
http://dtrace.org/blogs/ahl/2011/10/10/oel-this-is-not-dtrace/
http://blogs.oracle.com/ahl/entry/dtrace_knockoffs
http://cld.blog-city.com/wsj_2006_innovation_award_gold_winner__dtrace_and_the_troubl.htm
• Mongodb sql comparison
https://docs.mongodb.com/manual/reference/sql-aggregation-comparison/
https://docs.mongodb.com/manual/reference/sql-comparison/
• Mongodb
https://github.com/search?utf8=%E2%9C%93&q=fitbit+mongodb
https://github.com/vtyagiAggies/Fitbit-Data-Download-ForMultipleUsers-MongoDB
• Mysql
https://github.com/04nd01/fitjunction
https://github.com/MaximeHeckel/node-selftracker-app
https://github.com/MaximeHeckel/node-selftracker-server
• Postgresql
https://github.com/cswingle/fb2psql
https://fivetran.com/docs
https://panoply.io/integrations/fivetran/matillion/
https://fivetran.com/databases
https://s3-us-west-2.amazonaws.com/utoug.documents/Fall+Symposium+2014/Exadata_flash_temp_UTOUG.pdf
http://www.hhutzler.de/blog/how-to-move-redo-logs-to-a-ssd-disk-in-a-racasm-env/
At least this blog kind of implies he is using Exadata and moved the redo logs to a flash-based grid disk.
• SSD disks may have some outliers, but overall performance is still much better than with spinning disks.
EXADATA : What Is Flash Cache Compression (Doc ID 1664257.1)
''From an Oracle internal note'' - an ACS guy passed this along to a customer who wants to evaluate the feature
<<<
Compression automatically happens if the machine is an X3 or X4 with F40/F80 controller chips. There is an option to turn on better use of the flash cache area when you install the 11.2.3.3 software.
From an Oracle internal note
The flash compression is done in the F40/F80 controller chip regardless of whether the feature is turned on or not. So in the last year, all the X3-2 systems in the field have been using this compression. Every block written on these systems was compressed and the variable-sized format was written by the flash card. All the performance numbers seen last year for X3-2 systems include this compression. So now one would ask, if we had this all along last year, what's new?
With the new software release we are able to take the compression that the chip does and convert it into additional capacity. So additional user data can be stored on the same flash card. So think about it this way -- when you wrote to the flash card from end to end last year, it would compress the data and potentially just use 50% of the space in flash and the rest was free. You couldn't write more to the card because there was no logical space free.
With 11.2.3.3, we have added the software to expose this as additional logical capacity and added the infrastructure needed to manage that space as the compressibility of the incoming data changes.
<<<
! my opinion
<<<
So essentially we are currently compressing on the current max capacity of 1490 GB for the X3.
And enabling the feature will just expose the real numbers on what the hardware is doing under the hood.
So before enabling the feature it is really 2978.75G of capacity even if it shows as 1488.75G; the software update and "enabling it" just fix the instrumentation for what the hardware has been doing under the hood. And so, enabling it will not make any difference, because we are effectively using that 2978.75G already even though it shows as 1488.75G.
In short, it’s just going to show more space on the flash. But, we are occupying that space already anyway.
<<<
This blog showed the script to enable/disable
{{{
http://progeeking.com/2014/02/03/exadata-x4-smart-flash-cache-compression/
[root@cell121 ~]# wc -l /opt/oracle/cell/cellsrv/deploy/scripts/unix/hwadapter/diskadp/flash/lsi/set_flash_compression.sh
167 /opt/oracle/cell/cellsrv/deploy/scripts/unix/hwadapter/diskadp/flash/lsi/set_flash_compression.sh
[root@cell121 ~]#
}}}
http://www.freelists.org/post/oracle-l/Exadata-Cacheflash-Compression,9
http://progeeking.com/2014/02/03/exadata-x4-smart-flash-cache-compression/
http://dbmagnus.blogspot.com/2014/02/enabling-flash-cache-compression-on.html
http://exadata-dba.blogspot.com/2014/06/how-to-enable-flash-cache-compression.html
https://library.netapp.com/ecmdocs/ECMP1368017/html/GUID-87340429-8F4A-4AA6-B081-0F5040089C78.html
{{{
* cap the KEEP pool at ~80% of the total so it doesn't max out when a lot of objects are KEEP'ed
}}}
seems like a dynamic connection pool utility
"The FlexyPool library adds metrics and flexible strategies to a given Connection Pool, allowing it to resize on demand. This is very handy since most connection pools offer a limited set of dynamic configuration strategies."
https://vladmihalcea.com/tutorials/flexypool/
<<<
Tutorials
The anatomy of Connection Pooling
Maximum number of database connections
FlexyPool, reactive connection pooling
Professional connection pool sizing
The simple scalability equation
How to monitor a Java EE DataSource
How does FlexyPool support both Connection proxies and decorators
FlexyPool 2 has been released
<<<
https://vladmihalcea.com/maximum-database-connections/
<<<
By default, the maximum number of connections is set way too high, risking resource starvation on the database side.
Therefore, only a performance load test will provide you with the maximum number of connections that can deliver the best throughput on your particular system. That value should be used then as the maximum number of connections that can be shared by all application nodes that connect to the database.
If the maximum number of connections is set too high, as it’s the case with many default settings, then you risk oversubscribing connection requests that starve DB resources, as explained in this very good video presentation.
<<<
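A hedged sketch of acting on that advice with node-oracledb (credentials are placeholders; the pool numbers should come from the load test, not from defaults):
{{{
const oracledb = require('oracledb');

async function initPool() {
  // cap the pool explicitly instead of trusting high defaults
  return oracledb.createPool({
    user: 'app',
    password: process.env.DB_PASSWORD,
    connectString: 'SERVER/orcl',
    poolMin: 4,
    poolMax: 16   // derived from load testing
  });
}
}}}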
https://fortawesome.github.io/Font-Awesome/get-started/
http://blog.tanelpoder.com/2013/03/20/alter-session-force-parallel-query-doesnt-really-force-anything/
http://www.adellera.it/blog/2013/05/17/alter-session-force-parallel-query-and-indexes/
https://twelvec.com/tag/force-matching-signature/
http://karenmorton.blogspot.com/2012/05/force-matching-signature.html
http://blog.tanelpoder.com/2012/08/02/the-limitations-of-cursor_sharing-force-and-force_matching_signature-for-sql-plan-stability/
http://www.pythian.com/blog/the-easy-way-of-finding-similar-sql-statements/
force matching test case
https://forums.oracle.com/forums/thread.jspa?threadID=2214628
http://oracleblogging.wordpress.com/2011/02/18/convert-sql_id-to-signature/
''Query force_matching''
{{{
col force_matching_signature format 999999999999999999999999999
select force_matching_signature, child_number from v$sql where sql_id = 'axany583awdtk';
select force_matching_signature from dba_hist_sqlstat where sql_id = 'axany583awdtk';
col signature format 999999999999999999999999999
select name, force_matching, signature, created from dba_sql_profiles
where signature in (select force_matching_signature from dba_hist_sqlstat where sql_id = 'axany583awdtk');
}}}
{{{
16:17:11 SYS@fsprd2> col force_matching_signature format 999999999999999999999999999
select force_matching_signature, child_number from v$sql where sql_id = '6gp26ps7axr36';
16:17:11 SYS@fsprd2>
no rows selected
16:17:11 SYS@fsprd2> select force_matching_signature from dba_hist_sqlstat where sql_id = '6gp26ps7axr36';
FORCE_MATCHING_SIGNATURE
----------------------------
9952434253236291351
9952434253236291351
9952434253236291351
9952434253236291351
9952434253236291351
9952434253236291351
6 rows selected.
16:17:11 SYS@fsprd2> 16:17:11 SYS@fsprd2> col signature format 999999999999999999999999999
16:17:11 SYS@fsprd2> select name, force_matching, signature, created from dba_sql_profiles
16:17:11 2 where signature in (select force_matching_signature from dba_hist_sqlstat where sql_id = '6gp26ps7axr36');
NAME FOR SIGNATURE CREATED
------------------------------ --- ---------------------------- ---------------------------------------------------------------------------
SP_6gp26ps7axr36_2183398877 YES 9952434253236291351 15-MAY-12 06.14.50.000000 PM
}}}
{{{
16:18:57 SYS@fsprd2> col force_matching_signature format 999999999999999999999999999
16:19:02 SYS@fsprd2> select force_matching_signature, child_number from v$sql where sql_id = 'axany583awdtk';
no rows selected
16:19:02 SYS@fsprd2> select force_matching_signature from dba_hist_sqlstat where sql_id = 'axany583awdtk';
FORCE_MATCHING_SIGNATURE
----------------------------
10520293011312039143
10520293011312039143
10520293011312039143
10520293011312039143
10520293011312039143
10520293011312039143
10520293011312039143
10520293011312039143
10520293011312039143
10520293011312039143
10520293011312039143
10520293011312039143
12 rows selected.
16:19:02 SYS@fsprd2> 16:19:02 SYS@fsprd2> col signature format 999999999999999999999999999
select name, force_matching, signature, created from dba_sql_profiles
16:19:02 SYS@fsprd2> 16:19:02 2 where signature in (select force_matching_signature from dba_hist_sqlstat where sql_id = 'axany583awdtk');
NAME FOR SIGNATURE CREATED
------------------------------ --- ---------------------------- ---------------------------------------------------------------------------
SP_axany583awdtk_2183398877 NO 10520293011312039143 15-MAY-12 05.49.33.000000 PM
}}}
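The profiles above (FOR = YES/NO) were created with and without force matching. For reference, a force-matching profile can be created with DBMS_SQLTUNE.IMPORT_SQL_PROFILE and force_match => TRUE, as the posts above discuss — a hypothetical sketch (the statement and hint are placeholders, not from this system):
{{{
sqlplus -s / as sysdba <<'EOF'
declare
  h sys.sqlprof_attr := sys.sqlprof_attr('FULL(t)');  -- placeholder hint list
begin
  dbms_sqltune.import_sql_profile(
    sql_text    => 'select * from t where x = 1',     -- placeholder statement
    profile     => h,
    name        => 'PROFILE_FORCE_DEMO',
    force_match => TRUE);  -- match on FORCE_MATCHING_SIGNATURE (literals normalized)
end;
/
EOF
}}}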
https://blog.exploratory.io/a-gentle-introduction-to-backtesting-for-evaluating-the-prophet-forecasting-models-66c132adc37c
! my test cases
https://github.com/karlarao/forecast_examples/tree/master/cross_validation
https://www.r-bloggers.com/predicting-tides-in-r/
! from this
{{{
20:02:59 SYS@dw1> select sql_id, dbms_lob.substr(sql_text,30,1) sqltext from dba_hist_sqltext where rownum < 100;
SQL_ID SQLTEXT
------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
bnznj24yxuzah insert into WRH$_IOSTAT_FILET
467ppfu9jp8ch UPDATE SYS.ILM_RESULTS$ SET JO
357cru8xpxh55 SELECT A.EXECUTION_ID, A.JOBNA
9ctt1scmwbmbg begin dbsnmp.bsln_internal.mai
bb926a5dcb8kr merge into sys.mon_mods$ m
7g29na8v736ns /* SQL Analyze(67,1) */ /* SQL
bk8jk9nzd38uc SELECT count(*) FROM sys.wri$_
7umy6juhzw766 select /*+ connect_by_filterin
60w0bzjajtgmz /* SQL Analyze(1157,1) */ /* S
dgkf7m4g44zz0 /* SQL Analyze(1157,1) */ /* S
cbdfcfcp1pgtp select intcol#, col# , type#,
0avmvd8qaj1md SELECT count(*) FROM sys.wri$_
2wsp95jbfzqts SELECT count(*) FROM sys.wri$_
gx9vnbwq7vu3t /* SQL Analyze(1157,1) */ /* S
4whka0nxtk36f /* SQL Analyze(1157,1) */ /* S
7wv5ruk5nshs0 /* SQL Analyze(1157,1) */ /* S
77wa3wxc661vz SELECT count(*) FROM sys.wri$_
cwb5hrbyu11sm SELECT count(*) FROM sys.wri$_
7qjcqf2q2tw9q SELECT count(*) FROM sys.wri$_
4fxr27su1wn38 /* SQL Analyze(1091,1) */ /* S
496zvdwr6nfnh /* SQL Analyze(1091,1) */ /* S
6y5st0zfb4f93 /* SQL Analyze(1091,1) */ DELE
6ja0rtch4744d SELECT /* DS_SVC */ /*+ dynami
c9umxngkc3byq select sql_id, sql_exec_id, db
7cyaxjygv2cpt /* SQL Analyze(1091,1) */ /* S
c4cr75ud9ujvg SELECT PAR_JOBNAME FROM SYS.IL
96mjb0z08nmp3 /* SQL Analyze(1091,1) */ /* S
adtrr100ucu11 /* SQL Analyze(1091,1) */ /* S
9wqup20cvk9yu SELECT /* DS_SVC */ /*+ dynami
bfbu3kz982s2n /* SQL Analyze(1091,1) */ /* S
6qd7w013bx5bj /* SQL Analyze(1091,1) */ /* S
fuws5bqghb2qh SELECT D.COLUMN_VALUE , NVL(A.
fhz9cq8mw2rh9 SELECT A.OBJ#, A.POLICY#, B.SC
cgtc5gb7c4g07 select dbid, status_flag from
6ajkhukk78nsr begin prvt_hdm.auto_execute( :
7mhq1834c9nza SELECT COUNT(*) FROM SYS.ILM_C
49s332uhbnsma declare
vsn va
a1xgxtssv5rrp select sum(used_blocks), ts.ts
gwg69nz9j8vvb insert into wrh$_latch (dbid
84zqd7a3ap5ud insert into WRH$_SYSSTAT (db
f26kbut9pahja SELECT LAST_EXEC_TIME FROM SYS
gdtyuqcyk8x1c SELECT B.EXECUTION_ID, NVL(A.N
6q9zvynq8f0h0 insert into sys.wri$_optstat_h
89nx1vrqtn1q7 SELECT /*+ no_monitor */ CON_
8qn2s8n8ayxvr SELECT /*+ no_monitor */ CON_I
3kywng531fcxu delete from tab_stats$ where o
38243c4tqrkxm select u.name, o.name, o.names
3cd2hy7yx0u2k UPDATE WRI$_OPTSTAT_OPR_TASKS
3wrrjm9qtr2my SELECT T.CLIENT_ID, T.
0n0000qphr1d2 select /*+ no_monitor */ CON_
6atd17x59rdus SELECT TABOWNER, TABNAME, PART
afcswub17n34t SELECT AO.ATTR1 OBJD, SUM(AR.B
13ys8ux8xvrbm insert into sys.wri$_optstat_o
381t19fqhxdgp MERGE /*+ dynamic_sampling(ST
3axxxnjp5jjwj delete from ind_stats$ where o
cnphq355f5rah DECLARE job BINARY_INTEGER :=
37g281wz56rv3 insert into sys.wri$_optstat_h
4y1y43113gv8f delete from histgrm$ where obj
6wrwqq7jkmv3w INSERT INTO WRI$_HEATMAP_TOPN_
7sx5p1ug5ag12 SELECT SPARE4 FROM SYS.OPTSTAT
f47hz4fd4yyx8 SELECT sqlset_row(sql_id, for
7hu2k3a31b6j7 insert into histgrm$(obj#,intc
3qkhfbf2kyvhk SELECT POS+1 POS, VAL, NONNULL
gsfnqdfcvy33q delete from superobj$ where su
6qg99cfg26kwb SELECT COUNT(UNQ) UNQ, COUNT(P
adzjh275fvvx4 call WWV_FLOW_WORKSHEET_API.DO
97dmg1kjd04rc UPDATE STATS_TARGET$ ST SET ST
axq73su80h4wq select sql_id, plan_hash_value
07n9yv8rac2qq UPDATE ILM_RESULTS$ SET JOB_ST
b6usrg82hwsa3 call dbms_stats.gather_databas
6q42j0018w7t8 insert into sys.wri$_optstat_i
3xjw1ncw5vh27 SELECT OWNER, SEGMENT_NAME, PA
28bgqbzpa87xf declare
policy
74cpnuu24wmx7 SELECT TABLESPACE_ID, HEADER_F
6mcpb06rctk0x call dbms_space.auto_space_adv
14qzpax8518uk SELECT /*+ ordered use_nl(o c
130dvvr5s8bgn select obj#, dataobj#, part#,
7mgr3uwydqq8j select decode(open_mode,
5t7dh3thzzrfh begin
dbms_rcvm
ba3m1hr5zy5q0 /* SQL Analyze(1) */ select /*
3y9bat7nwcjy5 SELECT ELAPSED_TIME FROM DBA_S
84cdm4tn4qk66 BEGIN dbms_stats_internal.p
7zfqkasu7cgvy select /*+ no_parallel_index(
231cqwr9u57s0 insert into WRH$_IOSTAT_DETAI
1qag8bk39f2um /* SQL Analyze(198,1) */ /* SQ
5183s2x0hj4zs /* SQL Analyze(1157,1) */ /* S
2xn8yx0uz75fm SELECT S.SNAP_ID, S.BEGIN_INTE
26wdmuqwqmmpa SELECT count(*) FROM sys.wri$_
8y75j6mh081jh delete /*+ dynamic_sampling(4)
8zc85a8249x81 update obj$ set obj#=:4, type#
a51991cqqyak6 /* SQL Analyze(1) */ select /*
dzxp4mguxfmzr SELECT count(*) FROM sys.wri$_
9sa4fhxz32vtp SELECT count(*) FROM sys.wri$_
1sjp86r761x4b SELECT COUNT(*) FROM (SELECT *
4xa0fqp84dda2 SELECT count(*) FROM sys.wri$_
8wjzcrb6p6tkq SELECT count(*) FROM sys.wri$_
a1zm097zca8qg /* SQL Analyze(1) */ select /*
45h233mys5n3s UPDATE wrh$_seg_stat_obj o
bf7tbka73w96a /* SQL Analyze(1) */ select /*
}}}
! to this
{{{
select sql_id, REPLACE(REPLACE( dbms_lob.substr(sql_text,30,1), CHR(10) ), CHR(13) ) sqltext from dba_hist_sqltext where rownum < 100
SQL_ID SQLTEXT
------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
bnznj24yxuzah insert into WRH$_IOSTAT_FILET
467ppfu9jp8ch UPDATE SYS.ILM_RESULTS$ SET JO
357cru8xpxh55 SELECT A.EXECUTION_ID, A.JOBNA
9ctt1scmwbmbg begin dbsnmp.bsln_internal.mai
bb926a5dcb8kr merge into sys.mon_mods$ m
7g29na8v736ns /* SQL Analyze(67,1) */ /* SQL
bk8jk9nzd38uc SELECT count(*) FROM sys.wri$_
7umy6juhzw766 select /*+ connect_by_filterin
60w0bzjajtgmz /* SQL Analyze(1157,1) */ /* S
dgkf7m4g44zz0 /* SQL Analyze(1157,1) */ /* S
cbdfcfcp1pgtp select intcol#, col# , type#,
0avmvd8qaj1md SELECT count(*) FROM sys.wri$_
2wsp95jbfzqts SELECT count(*) FROM sys.wri$_
gx9vnbwq7vu3t /* SQL Analyze(1157,1) */ /* S
4whka0nxtk36f /* SQL Analyze(1157,1) */ /* S
7wv5ruk5nshs0 /* SQL Analyze(1157,1) */ /* S
77wa3wxc661vz SELECT count(*) FROM sys.wri$_
cwb5hrbyu11sm SELECT count(*) FROM sys.wri$_
7qjcqf2q2tw9q SELECT count(*) FROM sys.wri$_
4fxr27su1wn38 /* SQL Analyze(1091,1) */ /* S
496zvdwr6nfnh /* SQL Analyze(1091,1) */ /* S
6y5st0zfb4f93 /* SQL Analyze(1091,1) */ DELE
6ja0rtch4744d SELECT /* DS_SVC */ /*+ dynami
c9umxngkc3byq select sql_id, sql_exec_id, db
7cyaxjygv2cpt /* SQL Analyze(1091,1) */ /* S
c4cr75ud9ujvg SELECT PAR_JOBNAME FROM SYS.IL
96mjb0z08nmp3 /* SQL Analyze(1091,1) */ /* S
adtrr100ucu11 /* SQL Analyze(1091,1) */ /* S
9wqup20cvk9yu SELECT /* DS_SVC */ /*+ dynami
bfbu3kz982s2n /* SQL Analyze(1091,1) */ /* S
6qd7w013bx5bj /* SQL Analyze(1091,1) */ /* S
fuws5bqghb2qh SELECT D.COLUMN_VALUE , NVL(A.
fhz9cq8mw2rh9 SELECT A.OBJ#, A.POLICY#, B.SC
cgtc5gb7c4g07 select dbid, status_flag from
6ajkhukk78nsr begin prvt_hdm.auto_execute( :
7mhq1834c9nza SELECT COUNT(*) FROM SYS.ILM_C
49s332uhbnsma declare vsn va
a1xgxtssv5rrp select sum(used_blocks), ts.ts
gwg69nz9j8vvb insert into wrh$_latch (dbid
84zqd7a3ap5ud insert into WRH$_SYSSTAT (db
f26kbut9pahja SELECT LAST_EXEC_TIME FROM SYS
gdtyuqcyk8x1c SELECT B.EXECUTION_ID, NVL(A.N
6q9zvynq8f0h0 insert into sys.wri$_optstat_h
89nx1vrqtn1q7 SELECT /*+ no_monitor */ CON_
8qn2s8n8ayxvr SELECT /*+ no_monitor */ CON_I
3kywng531fcxu delete from tab_stats$ where o
38243c4tqrkxm select u.name, o.name, o.names
3cd2hy7yx0u2k UPDATE WRI$_OPTSTAT_OPR_TASKS
3wrrjm9qtr2my SELECT T.CLIENT_ID, T.
0n0000qphr1d2 select /*+ no_monitor */ CON_
6atd17x59rdus SELECT TABOWNER, TABNAME, PART
afcswub17n34t SELECT AO.ATTR1 OBJD, SUM(AR.B
13ys8ux8xvrbm insert into sys.wri$_optstat_o
381t19fqhxdgp MERGE /*+ dynamic_sampling(ST
3axxxnjp5jjwj delete from ind_stats$ where o
cnphq355f5rah DECLARE job BINARY_INTEGER :=
37g281wz56rv3 insert into sys.wri$_optstat_h
4y1y43113gv8f delete from histgrm$ where obj
6wrwqq7jkmv3w INSERT INTO WRI$_HEATMAP_TOPN_
7sx5p1ug5ag12 SELECT SPARE4 FROM SYS.OPTSTAT
f47hz4fd4yyx8 SELECT sqlset_row(sql_id, for
7hu2k3a31b6j7 insert into histgrm$(obj#,intc
3qkhfbf2kyvhk SELECT POS+1 POS, VAL, NONNULL
gsfnqdfcvy33q delete from superobj$ where su
6qg99cfg26kwb SELECT COUNT(UNQ) UNQ, COUNT(P
adzjh275fvvx4 call WWV_FLOW_WORKSHEET_API.DO
97dmg1kjd04rc UPDATE STATS_TARGET$ ST SET ST
axq73su80h4wq select sql_id, plan_hash_value
07n9yv8rac2qq UPDATE ILM_RESULTS$ SET JOB_ST
b6usrg82hwsa3 call dbms_stats.gather_databas
6q42j0018w7t8 insert into sys.wri$_optstat_i
3xjw1ncw5vh27 SELECT OWNER, SEGMENT_NAME, PA
28bgqbzpa87xf declare policy
74cpnuu24wmx7 SELECT TABLESPACE_ID, HEADER_F
6mcpb06rctk0x call dbms_space.auto_space_adv
14qzpax8518uk SELECT /*+ ordered use_nl(o c
130dvvr5s8bgn select obj#, dataobj#, part#,
7mgr3uwydqq8j select decode(open_mode,
5t7dh3thzzrfh begin dbms_rcvm
ba3m1hr5zy5q0 /* SQL Analyze(1) */ select /*
3y9bat7nwcjy5 SELECT ELAPSED_TIME FROM DBA_S
84cdm4tn4qk66 BEGIN dbms_stats_internal.p
7zfqkasu7cgvy select /*+ no_parallel_index(
231cqwr9u57s0 insert into WRH$_IOSTAT_DETAI
1qag8bk39f2um /* SQL Analyze(198,1) */ /* SQ
5183s2x0hj4zs /* SQL Analyze(1157,1) */ /* S
2xn8yx0uz75fm SELECT S.SNAP_ID, S.BEGIN_INTE
26wdmuqwqmmpa SELECT count(*) FROM sys.wri$_
8y75j6mh081jh delete /*+ dynamic_sampling(4)
8zc85a8249x81 update obj$ set obj#=:4, type#
a51991cqqyak6 /* SQL Analyze(1) */ select /*
dzxp4mguxfmzr SELECT count(*) FROM sys.wri$_
9sa4fhxz32vtp SELECT count(*) FROM sys.wri$_
1sjp86r761x4b SELECT COUNT(*) FROM (SELECT *
4xa0fqp84dda2 SELECT count(*) FROM sys.wri$_
8wjzcrb6p6tkq SELECT count(*) FROM sys.wri$_
a1zm097zca8qg /* SQL Analyze(1) */ select /*
45h233mys5n3s UPDATE wrh$_seg_stat_obj o
bf7tbka73w96a /* SQL Analyze(1) */ select /*
99 rows selected.
}}}
https://stackoverflow.com/questions/407027/oracle-replace-function-isnt-handling-carriage-returns-line-feeds
http://lkml.indiana.edu/hypermail/linux/kernel/9911.0/0163.html
http://real-world-systems.com/docs/fsck.8.html
http://teaching.idallen.com/cst8129/02f/notes/links_and_inodes.html
http://mail-index.netbsd.org/tech-userlevel/2003/04/17/0000.html
http://ubuntuforums.org/showthread.php?t=326235
http://www.linux-archive.org/ext3-users/215978-corruption-happening-sometimes-when-i-do-shutdown-r-now.html
http://www.linux-archive.org/ext3-users/34496-root-inode-corrupted-tries-clear-reallocate-but-cant.html
http://www.cyberciti.biz/faq/linux-force-fsck-on-the-next-reboot-or-boot-sequence/
http://www.cyberciti.biz/faq/linux-unix-bypassing-fsck/
http://acidborg.wordpress.com/2010/12/22/how-to-force-a-file-system-check-on-the-next-reboot-on-gnulinux/
{{{
shutdown -rF now <-- to force fsck on restart
shutdown -rf now <-- to avoid fsck on restart
}}}
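On sysvinit-style distros the same thing can be done with flag files checked at boot (this is what the cyberciti links above describe):
{{{
touch /forcefsck   # force fsck on the next reboot
touch /fastboot    # skip fsck on the next reboot
}}}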
http://maketecheasier.com/8-ways-to-maintain-a-clean-lean-ubuntu-machine/2008/10/07
{{{
[enkdb03:oracle:DEMO1] /home/oracle/scripts
> sqlplus class14/class14
SQL*Plus: Release 11.2.0.2.0 Production on Mon Jul 11 15:31:48 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
CLASS14:DEMO1> @fsx3
Enter value for sql_text: class14
Enter value for sql_id:
LAST_EXEC SQL_ID CHILD EXECS AVG_ETIME AVG_PX OFFLOADABLE OFFLOADED OFFLOAD_ELIGIBLE IO_SAVED_%
------------------- ------------- ------ ---------- ------------- ------ ----------- --------------- ---------------- ----------
2011-07-11/15:27:45 fjc1a4pxygp3m 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:31 4wbkrzs7x59jj 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:13 1fcrt9638gcku 0 1 .01 0 No 0 0 .00
2011-07-11/15:26:36 56jb1gkx44nfa 0 2 5.47 0 Yes 2,218,166,144 18,907,054,080 88.27
2011-07-11/15:26:13 azcx8dbvt76tw 0 3 .01 0 No 0 0 .00
2011-07-11/15:26:09 1y8x26h94f27g 0 1 .01 0 No 0 0 .00
CLASS14:DEMO1>
CLASS14:DEMO1>
CLASS14:DEMO1> @fsx2
Enter value for sql_text: class14
Enter value for sql_id:
SQL_ID CHILD PLAN_HASH EXECS AVG_ETIME AVG_PX OFFLOAD IO_SAVED_% SQL_TEXT
------------- ------ ----------- ------ ---------- ------ ------- ---------- ----------------------------------------------------------------------
56jb1gkx44nfa 0 3145879882 2 5.47 0 Yes 88.27 select /* class14 */ count(*) from class_sales
CLASS14:DEMO1> select /* class14 */ count(*) from class_sales;
COUNT(*)
-----------
90000000
CLASS14:DEMO1> @fsx3
Enter value for sql_text: class14
Enter value for sql_id:
LAST_EXEC SQL_ID CHILD EXECS AVG_ETIME AVG_PX OFFLOADABLE OFFLOADED OFFLOAD_ELIGIBLE IO_SAVED_%
------------------- ------------- ------ ---------- ------------- ------ ----------- --------------- ---------------- ----------
2011-07-11/15:27:45 fjc1a4pxygp3m 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:31 4wbkrzs7x59jj 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:13 1fcrt9638gcku 0 2 .01 0 No 0 0 .00
2011-07-11/15:26:36 56jb1gkx44nfa 0 3 5.44 0 Yes 3,327,252,560 28,360,581,120 88.27
2011-07-11/15:26:13 azcx8dbvt76tw 0 3 .01 0 No 0 0 .00
2011-07-11/15:26:09 1y8x26h94f27g 0 1 .01 0 No 0 0 .00
CLASS14:DEMO1> select /* class14 */ count(*) from class_sales;
COUNT(*)
-----------
90000000
CLASS14:DEMO1> CLASS14:DEMO1> @fsx3
Enter value for sql_text: class14
Enter value for sql_id:
LAST_EXEC SQL_ID CHILD EXECS AVG_ETIME AVG_PX OFFLOADABLE OFFLOADED OFFLOAD_ELIGIBLE IO_SAVED_%
------------------- ------------- ------ ---------- ------------- ------ ----------- --------------- ---------------- ----------
2011-07-11/15:27:45 fjc1a4pxygp3m 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:31 4wbkrzs7x59jj 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:13 1fcrt9638gcku 0 2 .01 0 No 0 0 .00
2011-07-11/15:26:36 56jb1gkx44nfa 0 4 5.40 0 Yes 4,436,336,088 37,814,108,160 88.27
2011-07-11/15:26:13 azcx8dbvt76tw 0 3 .01 0 No 0 0 .00
2011-07-11/15:26:09 1y8x26h94f27g 0 1 .01 0 No 0 0 .00
CLASS14:DEMO1>
CLASS14:DEMO1> select /* class14 */ count(*) from class_sales;
COUNT(*)
-----------
90000000
CLASS14:DEMO1> @fsx3
Enter value for sql_text: class14
Enter value for sql_id:
LAST_EXEC SQL_ID CHILD EXECS AVG_ETIME AVG_PX OFFLOADABLE OFFLOADED OFFLOAD_ELIGIBLE IO_SAVED_%
------------------- ------------- ------ ---------- ------------- ------ ----------- --------------- ---------------- ----------
2011-07-11/15:27:45 fjc1a4pxygp3m 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:31 4wbkrzs7x59jj 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:13 1fcrt9638gcku 0 2 .01 0 No 0 0 .00
2011-07-11/15:26:36 56jb1gkx44nfa 0 5 5.39 0 Yes 5,545,419,768 47,267,635,200 88.27
2011-07-11/15:26:13 azcx8dbvt76tw 0 3 .01 0 No 0 0 .00
2011-07-11/15:26:09 1y8x26h94f27g 0 1 .01 0 No 0 0 .00
CLASS14:DEMO1> select /* class14 */ count(*) from class_sales;
COUNT(*)
-----------
90000000
CLASS14:DEMO1> @fsx3
Enter value for sql_text: class14
Enter value for sql_id:
LAST_EXEC SQL_ID CHILD EXECS AVG_ETIME AVG_PX OFFLOADABLE OFFLOADED OFFLOAD_ELIGIBLE IO_SAVED_%
------------------- ------------- ------ ---------- ------------- ------ ----------- --------------- ---------------- ----------
2011-07-11/15:35:43 5zqt9vvfgm50j 0 1 5.36 0 Yes 1,109,082,464 9,453,527,040 88.27
2011-07-11/15:27:45 fjc1a4pxygp3m 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:31 4wbkrzs7x59jj 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:13 1fcrt9638gcku 0 2 .01 0 No 0 0 .00
2011-07-11/15:26:36 56jb1gkx44nfa 0 5 5.39 0 Yes 5,545,419,768 47,267,635,200 88.27
2011-07-11/15:26:13 azcx8dbvt76tw 0 3 .01 0 No 0 0 .00
2011-07-11/15:26:09 1y8x26h94f27g 0 1 .01 0 No 0 0 .00
CLASS14:DEMO1> @fsx
Enter value for sql_text: class14
Enter value for sql_id:
CLASS14:DEMO1>
CLASS14:DEMO1> @fsx2
Enter value for sql_text: class14
Enter value for sql_id:
SQL_ID CHILD PLAN_HASH EXECS AVG_ETIME AVG_PX OFFLOAD IO_SAVED_% SQL_TEXT
------------- ------ ----------- ------ ---------- ------ ------- ---------- ----------------------------------------------------------------------
56jb1gkx44nfa 0 3145879882 5 5.39 0 Yes 88.27 select /* class14 */ count(*) from class_sales
5zqt9vvfgm50j 0 3145879882 1 5.36 0 Yes 88.27 select /* class14 */ count(*) from class_sales
CLASS14:DEMO1> alter session set cell_offload_processing=false;
alter session set "_kcfis_storageidx_disabled"=true;
show parameter cell_offload_processing
show parameter _kcfis
CLASS14:DEMO1> CLASS14:DEMO1> CLASS14:DEMO1> CLASS14:DEMO1>
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cell_offload_processing boolean FALSE
CLASS14:DEMO1> CLASS14:DEMO1>
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
_kcfis_storageidx_disabled boolean TRUE
CLASS14:DEMO1>
CLASS14:DEMO1>
CLASS14:DEMO1> select /* class14 */ count(*) from class_sales
2 ;
COUNT(*)
-----------
90000000
CLASS14:DEMO1>
CLASS14:DEMO1>
CLASS14:DEMO1> @fsx3
Enter value for sql_text: class14
Enter value for sql_id:
LAST_EXEC SQL_ID CHILD EXECS AVG_ETIME AVG_PX OFFLOADABLE OFFLOADED OFFLOAD_ELIGIBLE IO_SAVED_%
------------------- ------------- ------ ---------- ------------- ------ ----------- --------------- ---------------- ----------
2011-07-11/15:42:01 7v6n7f3p3j90h 0 1 53.01 0 No 0 0 .00
2011-07-11/15:35:43 5zqt9vvfgm50j 0 1 5.36 0 Yes 1,109,082,464 9,453,527,040 88.27
2011-07-11/15:27:45 fjc1a4pxygp3m 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:31 4wbkrzs7x59jj 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:13 1fcrt9638gcku 0 3 .01 0 No 0 0 .00
2011-07-11/15:26:36 56jb1gkx44nfa 0 5 5.39 0 Yes 5,545,419,768 47,267,635,200 88.27
2011-07-11/15:26:13 azcx8dbvt76tw 0 4 .01 0 No 0 0 .00
2011-07-11/15:26:09 1y8x26h94f27g 0 1 .01 0 No 0 0 .00
CLASS14:DEMO1>
CLASS14:DEMO1> set echo on
alter session set cell_offload_processing=true;
show parameter cell_offload_processing
CLASS14:DEMO1> CLASS14:DEMO1> CLASS14:DEMO1> CLASS14:DEMO1>
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cell_offload_processing boolean TRUE
CLASS14:DEMO1>
CLASS14:DEMO1> select /* class14 */ count(*) from class_sales;
COUNT(*)
-----------
90000000
CLASS14:DEMO1>
CLASS14:DEMO1> @fsx3
CLASS14:DEMO1> set verify off
CLASS14:DEMO1> set pagesize 999
CLASS14:DEMO1> set lines 190
CLASS14:DEMO1> col inst format 9999
CLASS14:DEMO1> col sid format 999
CLASS14:DEMO1> col sql_text format a20 trunc
CLASS14:DEMO1> col sid format 999
CLASS14:DEMO1> col child_number format 99999 heading CHILD
CLASS14:DEMO1> col execs format 9,999,999
CLASS14:DEMO1> col avg_etime format 9,999,999.99
CLASS14:DEMO1> col avg_cpu format 9,999,999.99
CLASS14:DEMO1> col avg_lio format 999,999,999
CLASS14:DEMO1> col avg_pio format 999,999,999
CLASS14:DEMO1> col "IO_SAVED_%" format 999.99
CLASS14:DEMO1> col avg_px format 999
CLASS14:DEMO1> col offloadable for a11
CLASS14:DEMO1> col offload_eligible for 99,999,999,999
CLASS14:DEMO1> col offloaded for 99,999,999,999
CLASS14:DEMO1>
CLASS14:DEMO1> accept sqltext -
> prompt 'Enter value for sql_text: '
Enter value for sql_text: class14
CLASS14:DEMO1> accept sqlid -
> prompt 'Enter value for sql_id: '
Enter value for sql_id:
CLASS14:DEMO1>
CLASS14:DEMO1> set feedback off
CLASS14:DEMO1> variable sql_id varchar2(30)
CLASS14:DEMO1> variable sql_text varchar2(280)
CLASS14:DEMO1> variable inst_id varchar2(10)
CLASS14:DEMO1> exec :sql_id := '&&sqlid';
CLASS14:DEMO1> exec :sql_text := '%&&sqltext%';
CLASS14:DEMO1>
CLASS14:DEMO1> select substr(last_load_time,1,99) last_exec, sql_id, child_number, executions execs,
2 (elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime,
3 px_servers_executions/decode(nvl(executions,0),0,1,executions) avg_px,
4 decode(IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,'No','Yes') Offloadable,
5 IO_CELL_OFFLOAD_RETURNED_BYTES OFFLOADED,
6 io_cell_offload_eligible_bytes offload_eligible,
7 decode(IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,0,100*(IO_CELL_OFFLOAD_ELIGIBLE_BYTES-IO_INTERCONNECT_BYTES)/decode(IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,1,IO_CELL_OFFLOAD_ELIGIBLE_BYTES)) "IO_SAVED_%"
8 from v$sql s
9 where upper(sql_text) like upper(nvl(:sql_text,sql_text))
10 and sql_text not like 'BEGIN :sql_text := %'
11 and sql_id like nvl(:sql_id,sql_id)
12 order by 1 desc
13 /
LAST_EXEC SQL_ID CHILD EXECS AVG_ETIME AVG_PX OFFLOADABLE OFFLOADED OFFLOAD_ELIGIBLE IO_SAVED_%
------------------- ------------- ------ ---------- ------------- ------ ----------- --------------- ---------------- ----------
2011-07-11/15:42:01 7v6n7f3p3j90h 0 1 53.01 0 No 0 0 .00
2011-07-11/15:35:43 5zqt9vvfgm50j 0 2 5.35 0 Yes 2,218,164,624 18,907,054,080 88.27
2011-07-11/15:27:45 fjc1a4pxygp3m 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:31 4wbkrzs7x59jj 0 1 .01 0 No 0 0 .00
2011-07-11/15:27:13 1fcrt9638gcku 0 3 .01 0 No 0 0 .00
2011-07-11/15:26:36 56jb1gkx44nfa 0 5 5.39 0 Yes 5,545,419,768 47,267,635,200 88.27
2011-07-11/15:26:13 azcx8dbvt76tw 0 4 .01 0 No 0 0 .00
2011-07-11/15:26:09 1y8x26h94f27g 0 1 .01 0 No 0 0 .00
CLASS14:DEMO1>
CLASS14:DEMO1> undef sqlid
CLASS14:DEMO1> undef sqltext
CLASS14:DEMO1> undef instid
}}}
<<showtoc>>
! start here
Introduction to Functional Programming https://www.safaribooksonline.com/library/view/introduction-to-functional/9781491962756/
hadley https://www.datacamp.com/courses/writing-functions-in-r
! javascript
@@functional programming in JS https://code.tutsplus.com/courses/functional-programming-in-javascript@@
https://code.tutsplus.com/courses/node-for-the-front-end-developer
functional programming in JS https://www.youtube.com/playlist?list=PL0zVEGEvSaeEd9hlmCXrk5yUyqUag-n84
https://www.safaribooksonline.com/library/view/functional-lite-javascript/9781491967508/
https://www.safaribooksonline.com/library/publisher/frontend-masters/
https://www.pluralsight.com/search?q=functional-programming&categories=course
Hardcore Functional Programming in JavaScript https://www.pluralsight.com/courses/hardcore-functional-programming-javascript
JavaScript: From Fundamentals to Functional JS https://www.pluralsight.com/courses/javascript-from-fundamentals-to-functional-js
JavaScript: The Good Parts https://www.pluralsight.com/courses/javascript-good-parts
https://www.pluralsight.com/courses/javascript-from-fundamentals-to-functional-js
JS refactoring techniques https://code.tutsplus.com/courses/javascript-refactoring-techniques
https://github.com/getify/Functional-Light-JS book
''most adequate guide to functional programming'' https://www.gitbook.com/book/drboolean/mostly-adequate-guide/details , read online https://drboolean.gitbooks.io/mostly-adequate-guide/content/
! python
video: Functional Programming in Python https://www.safaribooksonline.com/library/view/functional-programming-in/9781788292450/
video: Functional Programming & Python https://www.safaribooksonline.com/library/view/functional-programming/9781491939802/
video: LEARNING PATH: Python: Functional Programming with Python https://www.safaribooksonline.com/learning-paths/learning-path-python/9781788996396/
book: Functional Python Programming https://www.amazon.com/dp/1784396990/ref=sspa_dk_detail_2?psc=1
book: Functional Python Programming 1st ed - https://www.safaribooksonline.com/library/view/functional-python-programming/9781784396992/
book: Functional Python Programming - Second Edition https://www.safaribooksonline.com/library/view/functional-python-programming/9781788627061/
Learning Path: Python: Programming for Python Users https://www.safaribooksonline.com/library/view/learning-path-python/9781788297042/
https://www.safaribooksonline.com/library/view/functional-programming-with/9781771374651/
https://www.safaribooksonline.com/library/view/functional-programming/9781491939802/
! R
Functional Programming in R: Advanced Statistical Programming for Data Science, Analysis and Finance https://www.safaribooksonline.com/library/view/functional-programming-in/9781484227466/
! c
Understanding functional programming https://www.lynda.com/C-tutorials/Understanding-functional-programming/164457/180372-4.html
<<showtoc>>
! functional
! non-functional
!! operational
!! technical
!! and others (like SLA, etc.)
https://www.google.com/search?source=hp&ei=qxWrXorGLo2o_QbgsLOwBQ&q=functional+vs+technical+requirements&oq=functional+vs+te&gs_lcp=CgZwc3ktYWIQAxgBMgIIADICCAAyAggAMgIIADICCAAyAggAMgIIADICCAAyAggAMgIIADoFCAAQgwFQ-gRY-B5gsS1oAHAAeACAAZUCiAGuDpIBBTcuNy4ymAEAoAEBqgEHZ3dzLXdpeg&sclient=psy-ab
! explanation
https://stackoverflow.com/questions/16475979/what-is-the-difference-between-functional-and-non-functional-requirement
{{{
A functional requirement describes what a software system should do, while non-functional requirements place constraints on how the system will do so.
Let me elaborate.
An example of a functional requirement would be:
A system must send an email whenever a certain condition is met (e.g. an order is placed, a customer signs up, etc).
A related non-functional requirement for the system may be:
Emails should be sent with a latency of no greater than 12 hours from such an activity.
The functional requirement is describing the behavior of the system as it relates to the system's functionality. The non-functional requirement elaborates a performance characteristic of the system.
Typically non-functional requirements fall into areas such as:
Accessibility
Capacity, current and forecast
Compliance
Documentation
Disaster recovery
Efficiency
Effectiveness
Extensibility
Fault tolerance
Interoperability
Maintainability
Privacy
Portability
Quality
Reliability
Resilience
Response time
Robustness
Scalability
Security
Stability
Supportability
Testability
A more complete list is available at Wikipedia's entry for non-functional requirements.
Non-functional requirements are sometimes defined in terms of metrics (i.e. something that can be measured about the system) to make them more tangible. Non-functional requirements may also describe aspects of the system that don't relate to its execution, but rather to its evolution over time (e.g. maintainability, extensibility, documentation, etc.).
}}}
https://softwareengineering.stackexchange.com/questions/209234/what-are-the-differences-between-functional-operational-and-technical-requireme
{{{
functional requirements
What the system is supposed to do, process orders, send bills, regulate the temperature etc. etc.
operational requirements
These are about how to run the system. Logging, startup/shutdown controls, monitoring, resource consumption, back up, availability etc.etc.
technical requirements
These are about how the system is built. Which language, which OS, standards to be adhered to etc.
These days "operational" and "technical" requirements are usually bundled together as "non-functional requirements" -- mainly to stop silly arguments as to whether "system will respond to a user request within 1 second" is an operational or technical requirement.
}}}
! NFRs - non-functional requirements
https://en.wikipedia.org/wiki/Non-functional_requirement
{{{
-------------------------------------------------------------------------------------
fuser - get the user accessing the file
-------------------------------------------------------------------------------------
# get the user accessing the file, then do "ps -aux | grep -i 4897"
fuser -u /var/log/messages
# kill a session holding the file
fuser -k ~karao/test
# to invoke a different signal instead of the default SIGKILL (used together with -k)
fuser -k -HUP ~karao/test
fuser -k -TERM ~karao/test
In Linux, you can use the -m option to specify the filesystem by name. On Solaris and IRIX, the -c option performs the same task.
-------------------------------------------------------------------------------------
lsof - lists files opened by processes running on your system
-------------------------------------------------------------------------------------
When lsof is called without parameters, it will show all the files opened by any processes.
lsof | nl
Let us know who is using the apache executable file, /etc/passwd, what files are opened on device /dev/hda6 or who's accessing /dev/cdrom:
lsof `which apache2`
lsof /etc/passwd
lsof /dev/hda6
lsof /dev/cdrom
Now show us what process IDs are using the apache binary, and only the PID:
lsof -t `which apache2`
Show us what files are opened by processes whose names start with "k" (klogd, kswapd...) and bash. Show us what files are opened by init:
lsof -c k
lsof -c bash
lsof -c init
Show us what files are opened by processes whose names start with "courier", but exclude those whose owner is the user "zahn":
lsof -c courier -u ^zahn
Show us the processes opened by user apache and user zahn:
lsof -u apache,zahn
Show us what files are opened by the process whose PID is 30297:
lsof -p 30297
Search for all opened instances of directory /tmp and all the files and directories it contains:
lsof +D /tmp
List all opened internet sockets and sockets related to port 80:
lsof -i
lsof -i :80
List all opened Internet and UNIX domain files:
lsof -i -U
Show us what process(es) has a UDP connection opened to or from the host www.akadia.com at port 123 (ntp):
lsof -iUDP@www.akadia.com:123
lsof provides many more options and can be an invaluable forensic tool if your system gets compromised, or simply as a daily check tool.
lsof +L1 <-- list open deleted files
/usr/sbin/lsof +c0 -w +L1 -b -R
du -m . | sort -n | grep u01 <-- check for the huge file hogger
lsof /u01 | grep -i deleted
lsof -n -P -i | grep 3306
mysqld 18255 kristofferson.a.arao 18u IPv6 0x14023dead031347b 0t0 TCP *:3306 (LISTEN)
mysqld 18255 kristofferson.a.arao 77u IPv6 0x14023dea9b3accbb 0t0 TCP 127.0.0.1:3306->127.0.0.1:60413 (ESTABLISHED)
mysqld 18255 kristofferson.a.arao 78u IPv6 0x14023dea9b3ac13b 0t0 TCP 127.0.0.1:3306->127.0.0.1:60414 (ESTABLISHED)
dbeaver 80211 kristofferson.a.arao 75u IPv6 0x14023deacc8fcebb 0t0 TCP 127.0.0.1:56973->127.0.0.1:3306 (CLOSE_WAIT)
dbeaver 80211 kristofferson.a.arao 110u IPv6 0x14023dea9b3abb7b 0t0 TCP 127.0.0.1:60413->127.0.0.1:3306 (ESTABLISHED)
dbeaver 80211 kristofferson.a.arao 136u IPv6 0x14023deacc8ff6fb 0t0 TCP 127.0.0.1:56974->127.0.0.1:3306 (CLOSE_WAIT)
dbeaver 80211 kristofferson.a.arao 163u IPv6 0x14023dea9b3a9ebb 0t0 TCP 127.0.0.1:60414->127.0.0.1:3306 (ESTABLISHED)
-------------------------------------------------------------------------------------
pmap
-------------------------------------------------------------------------------------
pmap <pid>
-------------------------------------------------------------------------------------
ps
-------------------------------------------------------------------------------------
this will show processes in D state (most likely waiting on IO); use either the 1st or 2nd line
watch -n 1 "(ps aux | awk '\$8 ~ /D|R/ { print \$0 }')"
ps aux | awk '$8 ~ /D|R/ { print $0 }'
}}}
<<showtoc>>
! Week 1 - different types of goals
focuses on different types of goals that call for forecasting, and links the business goal to forecasting concepts such as forecast horizon, description/prediction, automation, delays, and forecast updating.
!! 1.1 welcome
!! 1.2 why
!! 1.3 Where is forecasting used?
!! 1.4 youbike taiwan (VIDEO)
https://github.com/kaicarver/youbike-stations
!! 1.5 campus electricity (VIDEO)
http://www.postscapes.com/smart-outlets/
!! 1.6 library demand (VIDEO)
!! 1.7 power company load forecasting (VIDEO)
http://robjhyndman.com/papers/MEFMR1.pdf
http://robjhyndman.com/working-papers/mefm/
http://robjhyndman.com/hyndsight/mefm/
https://github.com/robjhyndman/MEFM-package
!! 1.8 Find and share a forecasting application
nice useful ideas
!! 1.9 cafe manager - Working with a stakeholder to discover forecasting opportunities (VIDEO)
!! 1.10 Forecasting language and notation (VIDEO)
!! 1.11 The forecasting process, forecasting language and notation
<<<
You do realize that a predictive forecast is a guess, bound to be wrong. We can only expect to be close to the actual value (Y(t + k)) when using the forecast (F(t + k)). The question is how close is close enough?
For a descriptive forecast we may see trends but there is no guarantee that the trend will continue as noted. Again there will be error. It is the error that we must analyze.
<<<
!! 1.12 Goal definition - Part 1
!!! Issue #1: Descriptive vs. Predictive Goals
{{{
post 1/2
----------
I think the term "descriptive forecasting" is what's confusing here and I think it should just be called "descriptive goal" or "descriptive" all the time. Let me take a stab at explaining this.
When the goal is "just" Descriptive it means we still go through the entire "forecasting process" except we don't project future data.
### Descriptive:
An example of this is on the train ridership. Let's say the business problem is - "what time is the ideal construction time on the railroad or the train units so we can minimize the disruption of service?"
Let's say you already processed, eyeballed, explored, and graphed the data. And then you found out there are two seasonal patterns.
1) weekdays - ridership trend peaks every 9AM and 6PM
2) weekends - ridership trend is very low
Backed with that data/analysis you can tell the business to allocate the time slots 9pm - 5am for the construction. With this, you achieved your goal and your work is done.
(BTW this can be better visualized using a season/cycle plot in tableau https://www.youtube.com/watch?v=IjeEPBz4puc or R)
post 2/2
----------
### Predictive:
It becomes Predictive if the business problem is - "what time is the ideal construction time on the railroad or the train units so we can minimize the disruption of service, and also I'd like to know the weekday load next month for Christmas"
So here, you will go through the entire "forecasting process" and naturally you will need the "Descriptive" analysis you did above to better fit the data or increase the accuracy of your model. You may have to apply the model separately for weekdays and weekends, add external information, or tag special events in your data.
}}}
!! 1.13 Is the following forecasting goal predictive or descriptive
!!! Taipei Metro system administration plans to use the ridership data for dynamic pricing (revenue management)
This is a predictive task because dynamic pricing means that the price will be based on the forecasted demand
!!! The Bureau of Transportation Statistics uses the ridership data for producing “monthly Metro demand indexes”
This is a descriptive task because indexes are typically used to describe periods retrospectively
!! 1.14 Goal definition - Part 2
!!! Issue #2: Forecast Horizon and Forecast Updating
!!! Issue #3: Forecast Use
!!! Issue #4: Level of Automation
!! 1.15 Impact of September 11 on air travel in the United States
http://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/publications/estimated_impacts_of_9_11_on_us_travel/pdf/entire.pdf
<<<
The GOAL is descriptive, although a predictive step is used in order to make comparisons and draw conclusions about the impact.
This is both descriptive and predictive in the sense that they are interested in the impact of September 11 on transportation and also in predicting what the impact would have been had there been no terrorist attack.
BTS has a descriptive goal because they are interested in finding the impact of the Sep 11 attack on transportation. They applied prediction as a sub-goal to find the impact of the attack. They used data from before Sep 11 and did forecasting. The forecasts were compared with actual data to find the impact.
<<<
! Week 2 - exploration and visualization for time series data
is about exploration and visualization for time series data. We will look at pattern types common in time series and how to identify them. This is where we showcase the power of interactive visualization tools.
!! 2.1 R, Analytics Solver (XLMiner), and Tableau software
!! 2.2 Data collection
!! 2.3 Time series components
!! 2.4 Visualizing time series
!! 2.5 Visualization screencast (VIDEO)
!! 2.6 Impact of September 11 on air travel in the United States
!! 2.7 Data preprocessing
!! 2.8 Create and share a visualization - kaggle bike share demand
https://www.kaggle.com/c/bike-sharing-demand
https://www.kaggle.com/benhamner/bike-sharing-demand/bike-rentals-by-time/code
https://www.kaggle.com/benhamner/bike-sharing-demand/bike-rentals-by-time-and-temperature/code
https://www.kaggle.com/kaggle/hillary-clinton-emails/kernels
https://www.kaggle.com/c/titanic/forums/t/13390/introducing-kaggle-scripts
!! 2.9 Kaggle bike sharing competition problem
!! 2.10 Forecasting past & future
! Week 3 - performance evaluation
we delve into performance evaluation, a critically important component of forecasting. We’ll learn how to set up the data for proper evaluation, what to measure, and how to compute, display, benchmark, and evaluate forecasting performance.
!! 3.1 Data Partitioning
!! 3.2 Data Partitioning
!! 3.3 Partitioning wine sales
Partitioning Wine Sales Data http://rpubs.com/zentek/224355
!! 3.4 Software for partitioning
!! 3.5 naive/snaive forecast
Naive Forecasts for Wine Sales http://rpubs.com/zentek/224592
!! 3.6 naive/snaive forecast quiz
!! 3.7 naive/snaive forecast
!! 3.8 Partitioning monthly ridership on Amtrak trains quiz
!! 3.9 Naive forecasts for wine sales
!! 3.10 Performance metrics & charts
!! 3.11 Performance of naive forecasts for wine sales
!! 3.12 Prediction intervals
highly advisable to report a forecast interval (also called prediction interval) along with the point forecast
!! 3.13 Prediction intervals
!! 3.14 Prediction intervals for wine sales naive forecasts
Performance of Naive Forecasts for Wine Sales http://rpubs.com/zentek/226488
Performance of Naive Forecasts for Wine Sales & Prediction intervals for wine sales naive forecasts http://rpubs.com/zentek/226512
!! 3.15 Recall Load forecasting - Interview
! Week 4 - moving average and exponential smoothing
introduces a popular family of forecasting methods called smoothing. They include methods such as the moving average, simple exponential smoothing, and advanced exponential smoothing. We’ll see when it is appropriate to use different smoothing methods, and why.
!! 4.1 MA for visualization
!! 4.2 Using a moving average for visualization
!! 4.3 MA for forecasting
!! 4.4 Moving average window width
!! 4.5 Differencing
!! 4.6 Differencing
<<<
The I in ARIMA is the differencing operation that is required before fitting an ARMA model. You cannot tune the ARMA parameters to capture seasonality or trend, because by design it assumes both are absent.
<<<
!! 4.7 Differencing wine sales
http://rpubs.com/zentek/226720
!! 4.8 Forecasting wine sales using a moving average
! Week 5 - linear regression models
we discuss the use of linear regression models and how to capture different trend and seasonality patterns.
! Week 6 - more linear regression models and ARIMA and external data
expands the discussion of regression for capturing the important pattern of autocorrelation (the relationship between values in neighboring periods). We also look at how to integrate external data into our forecasting model. Finally, we talk about big data and the Internet-of-things in the forecasting context.
google search "negative effects of gamification"
https://www.google.com/search?q=gamification&rlz=1C5CHFA_enUS696US696&oq=gamification&aqs=chrome..69i57j0l5.1926j0j1&sourceid=chrome&ie=UTF-8#q=negative+effects+of+gamification&start=10
! papers/articles
paper "Gamification: The effect on student motivation and performance at the post-secondary level"
http://blog.bonus.ly/pitfalls-of-workplace-gamification/
http://greg2dot0.com/2012/11/06/why-gamification-is-bad-for-social-business/
http://radar.oreilly.com/2011/06/gamification-criticism-overjustification-ownership-addiction.html
http://lecturers.haifa.ac.il/en/management/draban/Documents/GamificationChapter.pdf
http://neatoday.org/2014/06/23/gamification-in-the-classroom-the-right-or-wrong-way-to-motivate-students/
http://www.academia.edu/12454798/Assessing_the_Effects_of_Gamification_in_the_Classroom_A_Longitudinal_Study_on_Intrinsic_Motivation_Social_Comparison_Satisfaction_Effort_and_Academic_Performance
http://www.kevindsmith.org/uploads/1/1/2/4/11249861/the-impact-of-gamification-on-student-performance-kevin-smith.pdf
https://byresearch.wordpress.com/2014/05/18/why-work-gamification-is-a-bad-idea/
https://webdesign.tutsplus.com/articles/the-benefits-and-pitfalls-of-gamification--webdesign-6454
http://www.gamespot.com/articles/the-pros-and-cons-of-gamification/1100-6301575/
! books
https://www.amazon.com/Reality-Broken-Games-Better-Change/dp/0143120611/ref=pd_bxgy_14_img_3?_encoding=UTF8&psc=1&refRID=D6RK61NCQN8EQXXV8P4P
https://www.amazon.com/Gamification-Learning-Instruction-Game-based-Strategies/dp/1118096347/ref=pd_sim_14_4?_encoding=UTF8&psc=1&refRID=D6RK61NCQN8EQXXV8P4P
! videos
https://www.lynda.com/Higher-Education-tutorials/Welcome/173211/197002-4.html?srchtrk=index%3a1%0alinktypeid%3a2%0aq%3agamification%0apage%3a1%0as%3arelevance%0asa%3atrue%0aproducttypeid%3a2
http://jonathanlewis.wordpress.com/2012/08/19/compression-units-5/
http://antognini.ch/2013/10/system-statistics-gathered-in-exadata-mode-when-are-they-relevant/
{{{
-- workload stats
io_cost = ceil ( blocks / mbrc * mreadtim / sreadtim ) + 1
-- noworkload stats
mreadtim = ioseektim + db_file_multiblock_read_count * db_block_size / iotfrspeed
sreadtim = ioseektim + db_block_size / iotfrspeed
io_cost = ceil ( blocks / mbrc * mreadtim / sreadtim ) + 1
}}}
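A quick worked example of the noworkload arithmetic, assuming the default noworkload values (ioseektim = 10 ms, iotfrspeed = 4096 bytes/ms), an 8 KB block size, and mbrc = db_file_multiblock_read_count = 8:
{{{
sreadtim = 10 + 8192/4096    = 12 ms
mreadtim = 10 + 8*8192/4096  = 26 ms
-- full scan of a 10,000-block segment:
io_cost  = ceil( 10000/8 * 26/12 ) + 1 = ceil(2708.33) + 1 = 2710
}}}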
<<<
I assume (the latest) Exacheck is run, and the system is configured according to common best practices. Is that verifiable? Has exacheck been run including cells and ibswitches?
So the Oracle stack is running with RDS compiled in, no extra devices in the infiniband network, no exotic configurations?
In other words, the /opt/oracle.SupportTools/ibdiagtools/verify-topology tools does not report anything extraordinary?
(it is quite common in these kinds of situations that when you start talking and you ask if everything is configured vanilla, the answer is yes…..”but”, and then there’s all kinds of weird stuff which cannot have any influence according to the client)
If so, it’s important to break it down into at least two layers: Oracle and Linux/hardware. The event gc block lost means a block is sent by one instance, but never received on the destination instance. The common reason for it is congestion on the interconnect. Most of the time, before this statistic gets increased the statistic gc cr block congestion is increasing already.
I think it’s quite unlikely to have blocks lost if the underlying infra does not show errors, but the next place to look is at the Linux level. A simple check would be ifconfig on ib0 and ib1 (both ports on the interface): do these display errors? A next check would be ibqueryerrors, which will break down errors across the entire ib infrastructure.
It would be logical to have errors on the o/s level. Shut down Oracle entirely (so there’s no other usage of the infiniband fabric). See if you can get errors with rds-stress (weidong has a blog post which goes into more detail on different tools, statistics and rds-stress). See if using tcp and rds makes a difference (I’ve seen tcp functioning fine, but rds not). you can generate tcp based network stress with iptraf.
We have seen “gc cr failure” at one large bank (you know which one I refer to, :) ) after cutover to Exadata. But I am not sure whether it is related to your friend’s issue or not. It’s 11g R2 on X4 full rack. The issue happened randomly and only on the two 120+ TB databases, not the other 40+ TB databases. For example, sometimes with 10 sessions running the same query, 9 sessions can finish within a minute, another one with “gc cr failure” could run for hours and never complete. Then it could gradually make more sessions running with the same query to run into the same issue. At that point, there was no fix for this issue, one workaround solution was to kill any sessions with “gc cr failure” before they escalate to other db sessions. I believe it was an Oracle bug and was fixed during another fix for a major critical bug. I haven’t seen this issue since the patch was implemented.
Personally I doubt it is hardware issue. You might want to check out what kind of db objects have this issue. My guess is that the lost blocks are from certain indexes (very large one), not tables.
This is the reason why you should look careful if it is an O/S/Hardware imposed error, or an Oracle error.
I agree
Check the appearance of errors on the IB interfaces on each of the machines. The verify topology will make sure that everything is plugged in correctly, but will not tell you if you are getting errors.
As Frits said, do a simple ifconfig on ib0 and ib1 and check for errors. These ports quite commonly show errors left over from the installation, so make sure the errors are actually incrementing. You can clear the errors to reset the counters and then monitor.
However, do another simple check - query gv$cluster_interconnects to make sure they have not re-routed the traffic across a different network. It happens!
<<<
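A minimal sketch of the OS-level checks mentioned in the thread (assumes the usual ib0/ib1 ports on each Exadata compute node):
{{{
# look for error counters on both infiniband ports
ifconfig ib0 | grep -i error
ifconfig ib1 | grep -i error
# per-port error counters across the entire fabric
ibqueryerrors
# sanity-check the cabling/topology
/opt/oracle.SupportTools/ibdiagtools/verify-topology
# from the database, confirm the interconnect in use:
#   SQL> select * from gv$cluster_interconnects;
}}}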
https://www.google.com/search?q=gc+current+block+congested&oq=gc+current+block+congested&aqs=chrome..69i57j0l4.220j0j7&sourceid=chrome&ie=UTF-8
http://www.centroid.com/knowledgebase/blog/measuring-oracle-11gr2-rac-waits
https://orainternals.wordpress.com/tag/gc-buffer-busy/
http://orakhoj.blogspot.com/2012/01/gc-current-block-2-way-gc-current-block.html#.VkV9iLerTwc
https://martincarstenbach.wordpress.com/2014/12/16/adventures-in-rac-gc-buffer-busy-acquire-and-release/
https://orainternals.wordpress.com/2012/04/19/gc-buffer-busy-acquire-vs-release/ <-- good stuff
http://www.hhutzler.de/blog/debugging-and-fixing-rac-wait-events-for-right-hand-side-index-growth-gc-buffer-busy-release-gc-buffer-busy-acquire/
https://martincarstenbach.wordpress.com/2014/12/16/adventures-in-rac-gc-buffer-busy-acquire-and-release/
<<showtoc>>
! doc
https://cloud.google.com/bigquery/docs/scan-with-dlp
https://github.com/googleapis/
<<showtoc>>
! Architecture flowcharts
https://grumpygrace.dev/posts/gcp-flowcharts/
<<<
Compute
Storage and Data
Security
Networking
Data Analytics
Misc
<<<
!GCP architecture references
[top]
https://cloud.google.com/docs/tutorials#architecture
!architecture - Smart analytics reference patterns
[top]
https://cloud.google.com/solutions/smart-analytics/reference-patterns/overview
!GCP solutions by industry
[top]
https://cloud.google.com/solutions/migrating-oracle-to-cloud-spanner
!migration patterns
[top]
multiple source systems to bigquery
[top]
from pythian whitepapers
https://resources.pythian.com/hubfs/Framework-For-Migrate-Your-Data-Warehouse-Google-BigQuery-WhitePaper.pdf
https://resources.pythian.com/hubfs/White-Papers/Migrate-Teradata-to-Google-BigQuery.pdf
<<showtoc>>
! error
{{{
File "/Users/kristofferson.a.arao/.pyenv/versions/py385/lib/python3.8/site-packages/google/auth/_default.py", line 338, in default
raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started
}}}
https://cloud.google.com/docs/authentication/getting-started
https://cloud.google.com/docs/authentication/getting-started#cloud-console
<<<
Before proceeding, we recommend that all Google Cloud developers first read the Authentication overview topic to understand how authentication works in Google Cloud, including common scenarios and strategies. Additionally, before deploying an application to a production environment, ensure that you've read Authenticating as a service account.
<<<
https://cloud.google.com/docs/authentication
https://cloud.google.com/docs/authentication#strategies
https://cloud.google.com/docs/authentication/production
! create service account
* go to https://console.cloud.google.com/iam-admin/serviceaccounts?folder=&project=example-dev-284123&supportedpurview=project
* create service account
* name account as <projectname>-svc
* create key here https://console.cloud.google.com/iam-admin/serviceaccounts?project=example-dev-284123&supportedpurview=project
* read up on other resources https://cloud.google.com/iam/docs/granting-changing-revoking-access
* add/edit roles here -> IAM -> ROLES -> MANAGE ROLES https://console.cloud.google.com/cloud-resource-manager?_ga=2.101500549.582125405.1595696552-548387713.1595458168
* add the bigquery admin role to the service account (a gcloud equivalent of these console steps is sketched below)
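The same steps can be scripted with gcloud — a sketch assuming the example-dev-284123 project and a placeholder account name example-dev-svc:
{{{
# create the service account
gcloud iam service-accounts create example-dev-svc --project=example-dev-284123
# grant the bigquery admin role
gcloud projects add-iam-policy-binding example-dev-284123 \
  --member="serviceAccount:example-dev-svc@example-dev-284123.iam.gserviceaccount.com" \
  --role="roles/bigquery.admin"
# create a key file to point GOOGLE_APPLICATION_CREDENTIALS at
gcloud iam service-accounts keys create ~/gcp-sdk/example-dev-svc.json \
  --iam-account=example-dev-svc@example-dev-284123.iam.gserviceaccount.com
}}}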
! bash_profile
{{{
cat ~/.bash_profile
#export JAVA_HOME=/Library/Java/Home
# Set JAVA_HOME so rJava package can find it
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)/jre
# pyenv
export PATH="~/.pyenv/versions/3.5.0/bin:${PATH}"
if command -v pyenv 1>/dev/null 2>&1; then
eval "$(pyenv init -)"
fi
# oracle client
export PATH=~/instantclient_12_2:$PATH
export ORACLE_HOME=~/instantclient_12_2
export DYLD_LIBRARY_PATH=$ORACLE_HOME
export LD_LIBRARY_PATH=$ORACLE_HOME
export FORCE_RPATH=1
# DBngin exports
export PATH=/Users/Shared/DBngin/mysql/5.7.23/bin:$PATH
# bazel
export PATH="$PATH:$HOME/bin"
# The next line updates PATH for the Google Cloud SDK.
if [ -f '/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/path.bash.inc' ]; then . '/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/path.bash.inc'; fi
# The next line enables shell command completion for gcloud.
if [ -f '/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/completion.bash.inc' ]; then . '/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/completion.bash.inc'; fi
# gcp service account - example-dev
export GOOGLE_APPLICATION_CREDENTIALS="/Users/kristofferson.a.arao/gcp-sdk/example-dev-284123-1c4cf8cf3f8c.json"
}}}
! run sql
{{{
# this script runs as the service account while logged in as kristofferson.a.arao on cloud init
(py385) AMAC02T60SJH03Y:gcp-bigquery kristofferson.a.arao$ ls -ltr /Users/kristofferson.a.arao/gcp-sdk/example-dev-284123-1c4cf8cf3f8c.json
-rw-rw-rw-@ 1 kristofferson.a.arao 562225435 2339 Jul 25 13:06 /Users/kristofferson.a.arao/gcp-sdk/example-dev-284123-1c4cf8cf3f8c.json
(py385) AMAC02T60SJH03Y:gcp-bigquery kristofferson.a.arao$
(py385) AMAC02T60SJH03Y:gcp-bigquery kristofferson.a.arao$ python test.py
The query data:
name=James, count=272793
name=John, count=235139
name=Michael, count=225320
name=Robert, count=220399
name=David, count=219028
name=Mary, count=209893
name=William, count=173092
name=Jose, count=157362
name=Christopher, count=144196
name=Maria, count=131056
name=Charles, count=126509
name=Daniel, count=117470
name=Richard, count=109888
name=Juan, count=109808
name=Jennifer, count=98696
name=Joshua, count=90679
name=Elizabeth, count=90465
name=Joseph, count=89097
name=Matthew, count=88464
name=Joe, count=87977
}}}
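The output above matches the usa_names getting-started sample; assuming that is the query inside test.py, the bq CLI equivalent would be roughly this sketch (the alias name is arbitrary):
{{{
bq query --use_legacy_sql=false '
SELECT name, SUM(number) AS total
FROM `bigquery-public-data.usa_names.usa_1910_2013`
WHERE state = "TX"
GROUP BY name
ORDER BY total DESC
LIMIT 20'
}}}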
! other urls
https://stackoverflow.com/questions/51366870/xxxxgmail-com-does-not-have-bigquery-jobs-create-permission-in-project-yyyy
https://medium.com/google-cloud-platform-by-cloud-ace/oracle-database-on-google-cloud-platform-what-do-you-need-to-know-1e331c874c24
https://cloud.google.com/bare-metal
https://cloud.google.com/solutions/migrating-bare-metal-workloads
gcp bms servers https://atos.net/en/solutions/enterprise-servers/bullsequana-s
* the hosting vendor is ATOS
* the machine is Bullsequana
<<showtoc>>
! release notes
https://cloud.google.com/bigquery/docs/release-notes
! UI shortcuts
{{{
shortcuts
Run query Ctrl + Enter
Run selected query Ctrl + e
Format query Ctrl + Shift + f after that Enter
Open Table details Ctrl + Click
SQL autosuggest Tab or Ctrl+Space
hidden shortcuts
Delete line Ctrl + d
Comment / Uncomment Ctrl + /
Multi line edit Alt + Shift + left-mouse
}}}
<<showtoc>>
! READ THIS
https://cloud.google.com/bigquery/docs/bq-command-line-tool
! show current Cloud SDK config
{{{
gcloud config list
bq show
bq ls -p
bq ls
}}}
! create dataset
{{{
bq --location=US mk ch04
bq show ch04
bq mk --location=US \
--default_table_expiration 3600 \
--description "Chapter 5 of BigQuery Book." \
ch05
}}}
! create dataset on different project
{{{
bq mk --location=US \
--default_table_expiration 3600 \
--description "Chapter 5 of BigQuery Book." \
projectname:ch05
}}}
! create table
{{{
Table Path
[[project_name.]dataset_name.]table_name
project_name is the name of the project where you are creating the table.
Defaults to the project that runs this DDL query.
If the project name contains special characters such as colons, it should be quoted in backticks ` (example: `google.com:my_project`).
dataset_name is the name of the dataset where you are creating the table.
Defaults to the defaultDataset in the request.
table_name is the name of the table you're creating.
Must be unique per dataset.
Can contain up to 1,024 characters (upper or lower case letters), numbers, and underscores
}}}
{{{
bq mk --table \
--expiration 3600 \
--description "One hour of data" \
--label persistence:volatile \
ch05.rentals_last_hour rental_id:STRING,duration:FLOAT
# using a metadata file schema.json
bq mk --table \
--expiration 3600 \
--description "One hour of data" \
--label persistence:volatile \
ch05.rentals_last_hour schema.json
# this shows the pretty output
bq show ch04.college_scorecard
# export metadata
bq show --schema --format=prettyjson ch04.college_scorecard > schema.json
}}}
! load file to table
https://cloud.google.com/bigquery/docs/loading-data-local#loading_data_from_a_local_data_source
https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-csv#loading_csv_data_into_a_new_table
{{{
Examples:
bq load ds.new_tbl ./info.csv ./info_schema.json
bq load ds.new_tbl gs://mybucket/info.csv ./info_schema.json
bq load ds.small gs://mybucket/small.csv name:integer,value:string
bq load ds.small gs://mybucket/small.csv field1,field2,field3
Arguments:
destination_table: Destination table name.
source: Name of local file to import, or a comma-separated list of
URI paths to data to import.
schema: Either a text schema or JSON file, as above.
}}}
{{{
# compressed from local laptop (66seconds)
bq --location=US \
load \
--source_format=CSV --autodetect \
ch04.college_scorecard \
./college_scorecard.csv.gz
# compressed from cloudshell (65seconds)
bq --location=US \
load \
--source_format=CSV --autodetect \
ch05.college_scorecard \
./college_scorecard.csv.gz
# uncompressed from local laptop
# (934 rows, 18MB, 15seconds)
# (1552 rows, 30MB, 23seconds)
# (2586 rows, 50MB, 34seconds)
# (5134 rows, 100MB, 66seconds)
bq --location=US \
load \
--source_format=CSV --autodetect \
ch06.college_scorecard \
./fileaa
# when the file is too big it errors with the message below; the workaround is to compress it or stage it on GCS
BigQuery error in load operation: Could not connect with BigQuery server due to: RedirectMissingLocation('Redirected but the response is missing a Location:
header.')
# the limit for a CSV file uploaded from the laptop is 100MB
split -b 103424k college_scorecard.csv abc <-- this is 101MB
}}}
! auto schema detection
https://cloud.google.com/bigquery/docs/schema-detect
! file limit on local laptop upload 100MB
{{{
(py385) AMAC02T60SJH03Y:college_scorecard kristofferson.a.arao$ split -b 103424k college_scorecard.csv abc
(py385) AMAC02T60SJH03Y:college_scorecard kristofferson.a.arao$ ls -ltr
total 557168
-rw-rw-r-- 1 kristofferson.a.arao 562225435 142634461 Jul 6 02:38 college_scorecard.csv
-rw-r--r-- 1 kristofferson.a.arao 562225435 105906176 Jul 26 19:22 abcaa
-rw-r--r-- 1 kristofferson.a.arao 562225435 36728285 Jul 26 19:22 abcab
(py385) AMAC02T60SJH03Y:college_scorecard kristofferson.a.arao$
(py385) AMAC02T60SJH03Y:college_scorecard kristofferson.a.arao$
(py385) AMAC02T60SJH03Y:college_scorecard kristofferson.a.arao$ bq --location=US load --source_format=CSV --autodetect ch06.college_scorecard ./abcaa
/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/platform/bq/bq.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
BigQuery error in load operation: Could not connect with BigQuery server due to: RedirectMissingLocation('Redirected but the response is missing a Location:
header.')
(py385) AMAC02T60SJH03Y:college_scorecard kristofferson.a.arao$ split -b 102400k college_scorecard.csv xyz
(py385) AMAC02T60SJH03Y:college_scorecard kristofferson.a.arao$
(py385) AMAC02T60SJH03Y:college_scorecard kristofferson.a.arao$ ls -ltr
total 835752
-rw-rw-r-- 1 kristofferson.a.arao 562225435 142634461 Jul 6 02:38 college_scorecard.csv
-rw-r--r-- 1 kristofferson.a.arao 562225435 105906176 Jul 26 19:22 abcaa
-rw-r--r-- 1 kristofferson.a.arao 562225435 36728285 Jul 26 19:22 abcab
-rw-r--r-- 1 kristofferson.a.arao 562225435 104857600 Jul 26 19:24 xyzaa
-rw-r--r-- 1 kristofferson.a.arao 562225435 37776861 Jul 26 19:24 xyzab
(py385) AMAC02T60SJH03Y:college_scorecard kristofferson.a.arao$
(py385) AMAC02T60SJH03Y:college_scorecard kristofferson.a.arao$ bq --location=US load --source_format=CSV --autodetect ch06.college_scorecard ./xyzaa
/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/platform/bq/bq.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Upload complete.
Waiting on bqjob_r54bdb0a90f03ea75_000001738d712ab4_1 ... (49s) Current status: DONE
}}}
! load - fixing null values
<<<
Could not parse 'NULL' as int for field HBCU (position 26) starting at location
11945910
This caused the load job to fail with the following error
CSV table encountered too many errors, giving up. Rows: 591; errors: 1.
<<<
{{{
* We could edit the data file itself if we knew what the value ought to be.
* We could specify the schema explicitly for each column, changing the column type of the HBCU column to string so that NULL is an acceptable value.
* We could ask BigQuery to ignore a few bad records by specifying, for example, --max_bad_records=20.
* We could instruct the BigQuery load program that this particular file uses the string NULL to mark nulls (the standard way in CSV is to use empty fields to represent nulls).
}}}
{{{
bq --location=US \
load --null_marker=NULL \
--source_format=CSV --autodetect \
ch04.college_scorecard \
./college_scorecard.csv.gz
}}}
! load - replace existing table
{{{
You want to replace the existing table, so add --replace:
bq --location=US \
load --null_marker=NULL --replace \
--source_format=CSV --autodetect \
ch04.college_scorecard \
./college_scorecard.csv.gz
You can also specify --replace=false to append rows to an existing table
}}}
! load - skip first lines of header
* skip_leading_rows is for load, not query
{{{
bq load --source_format=CSV --autodetect --skip_leading_rows=1 \
   ch04.college_scorecard ./college_scorecard.csv.gz
}}}
! sed - replace all occurrences of PrivacySuppressed with NULL, compress the result, and write it to a temporary folder
{{{
zless ./college_scorecard.csv.gz | \
sed 's/PrivacySuppressed/NULL/g' | \
gzip > /tmp/college_scorecard.csv.gz
}}}
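The cleaned copy can then be loaded with the same flags as before (a sketch reusing the command from the null-marker section):
{{{
bq --location=US \
   load --null_marker=NULL --replace \
   --source_format=CSV --autodetect \
   ch04.college_scorecard \
   /tmp/college_scorecard.csv.gz
}}}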
! load - specify schema metadata file
* edit the schema.json and make necessary data type changes
* Because we are supplying a schema, we need to instruct BigQuery to ignore the first row of the CSV file (which contains the header information).
{{{
bq --location=US \
load --null_marker=NULL --replace \
--source_format=CSV \
--schema=schema.json --skip_leading_rows=1 \
ch06.college_scorecard \
./college_scorecard.csv.gz
}}}
{{{
--# BEFORE QUERY - suppressing errors with SAFE_CAST
SELECT
INSTNM
, ADM_RATE_ALL
, FIRST_GEN
, MD_FAMINC
, MD_EARN_WNE_P10
, SAT_AVG
FROM
ch04.college_scorecard
WHERE
SAFE_CAST(SAT_AVG AS FLOAT64) > 1300
AND SAFE_CAST(ADM_RATE_ALL AS FLOAT64) < 0.2
AND SAFE_CAST(FIRST_GEN AS FLOAT64) > 0.1
ORDER BY
CAST(MD_FAMINC AS FLOAT64) ASC
--# AFTER QUERY - typed columns, no casts needed
SELECT
INSTNM
, ADM_RATE_ALL
, FIRST_GEN
, MD_FAMINC
, MD_EARN_WNE_P10
, SAT_AVG
FROM
ch04.college_scorecard
WHERE
SAT_AVG > 1300
AND ADM_RATE_ALL < 0.2
AND FIRST_GEN > 0.1
ORDER BY
MD_FAMINC ASC
Notice that, because SAT_AVG, ADM_RATE_ALL, and the others are no longer strings, our query is much cleaner because we no longer need to cast them to floating-point numbers. The reason they are no longer strings is that we made a decision on how to deal with the privacy-suppressed data (treat them as being unavailable) during the Extract, Transform, and Load (ETL) process.
}}}
! load - CTAS
* data type gets copied
{{{
CREATE OR REPLACE TABLE ch04.college_scorecard_etl AS
SELECT
INSTNM
, ADM_RATE_ALL
, FIRST_GEN
, MD_FAMINC
, SAT_AVG
, MD_EARN_WNE_P10
FROM ch04.college_scorecard
}}}
! load - stage first on GCS then load to bigquery
{{{
--# multi-thread copy files from local
gsutil -m cp *.csv gs://BUCKET/some/location
--# load from bucket to bigquery
bq load … gs://BUCKET/some/location/*.csv
}}}
{{{
bq load --source_format=NEWLINE_DELIMITED_JSON example-dev-284123:cpb200_flight_data.flights_2014 gs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_*.json ./schema_flight_performance.json
/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/platform/bq/bq.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Waiting on bqjob_r263b24173998e9e9_000001739049ffa5_1 ... (49s) Current status: DONE
}}}
! load - create EXTERNAL TABLE, generate METADATA file from CSV on GCS
https://codelabs.developers.google.com/codelabs/cpb200-loading-data/#8
https://stevenlevine.dev/2019/11/querying-externally-partitioned-data-with-bq/
{{{
gsutil ls gs://example_bucket-dev/flights
gs://example_bucket-dev/flights/
gs://example_bucket-dev/flights/flights_airports_000000000000
--# generate metadata file from CSV
bq mkdef --source_format=CSV --autodetect "gs://example_bucket-dev/flights/flights_airports_*" > mytable.json
cat mytable.json
{
"autodetect": true,
"csvOptions": {
"encoding": "UTF-8",
"quote": "\""
},
"sourceFormat": "CSV",
"sourceUris": [
"gs://example_bucket-dev/flights/flights_airports_*"
  ]
}
--# create external table using metadata file
bq mk --external_table_definition=mytable.json cpb200_flight_data.flights_airports
Table 'example-dev-284123:cpb200_flight_data.flights_airports' successfully created.
}}}
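Once created, the external table queries like a native one (a minimal sketch using the table made above):
{{{
bq query --use_legacy_sql=false \
  'SELECT COUNT(*) FROM cpb200_flight_data.flights_airports'
}}}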
! load - handling NULL
{{{
bq load --null_marker="NULL"
}}}
https://stackoverflow.com/questions/46001334/bigquery-load-null-is-treating-as-string-instead-of-empty
! load - jagged rows
https://stackoverflow.com/questions/27024330/how-to-detect-jagged-rows-in-bigquery
https://stackoverflow.com/questions/43495592/bigquery-csv-import-allow-jagged-rows?rq=1
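The relevant load switch is --allow_jagged_rows, which treats missing trailing optional columns as nulls instead of failing the row (a sketch against the usual file from above):
{{{
bq load --source_format=CSV --autodetect --allow_jagged_rows \
  ch04.college_scorecard ./college_scorecard.csv.gz
}}}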
! loading files BEST PRACTICE
On schema metadata
* It is therefore best practice to not autodetect the schema of files that you receive in production—you will be at the mercy of whatever data happens to have been sampled. For production workloads, insist on the data type for a column by specifying it at the time of load.
On file format
* CSV files are inefficient and not very expressive (for example, there is no way to represent arrays and structs in CSV).
* An efficient and expressive format is Avro. Avro uses self-describing binary files that are broken into blocks and can be compressed block by block. Because of this, it is possible to parallelize the loading of data from Avro files and the export of data into Avro files. Because the blocks are compressed, the file sizes will also be smaller than the data size might indicate. In terms of expressiveness, the Avro format is hierarchical and can represent nested and repeated fields, something that BigQuery supports but CSV files don’t have an easy way to store. Because Avro files are self-describing, you never need to specify a schema (see the Avro load sketch after this list).
** There are two drawbacks to Avro files. One is that they are not human readable. If readability and expressiveness are important to you, use newline-delimited JSON files to store your data
** The second drawback is that Avro files are stored row by row. This makes Avro files not as efficient for federated queries.
* JSON supports the ability to store hierarchical data but requires that binary columns be base-64 encoded. However, JSON files are larger than even the equivalent CSV files because the name of each field is repeated on every line.
* The Parquet file format was inspired by Google’s original Dremel ColumnIO format and like Avro, Parquet is binary, block oriented, compact, and capable of representing hierarchical data. However, whereas Avro files are stored row by row, Parquet files are stored column by column. Columnar files are optimized for reading a subset of the columns; loading data requires reading all columns, and so columnar formats are somewhat less efficient at the loading of data. However, the columnar format makes Parquet a better choice than Avro for federated queries, a topic that we discuss shortly. Optimized Row Columnar (ORC) files are another open source columnar file format. ORC is similar to Parquet in performance and efficiency.
* In summary: if you have a choice of file formats, we recommend Avro if you plan to load the data into BigQuery and discard the files, Parquet if you will be retaining the files for federated queries, and JSON for small files where human readability is important.
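To illustrate the self-describing point, an Avro load needs neither --autodetect nor a schema file (a sketch; the bucket path and destination table here are hypothetical):
{{{
# Avro carries its own schema, so no schema flags are needed
bq load --source_format=AVRO \
  ch04.college_scorecard_avro gs://mybucket/college_scorecard.avro
}}}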
On compression
* For CSV and JSON files, which have no internal compression, consider whether to compress them using gzip. Compressed files are faster to transmit and take up less space, but they are slower to load into BigQuery. The slower your network, the more you should lean toward compressing the data.
On staging to GCS before loading to bigquery
* Staging the file on Google Cloud Storage involves paying storage costs at least until the BigQuery load job finishes. However, storage costs are generally quite low and so, on this dataset and this network connection, the best option is to stage compressed data in Cloud Storage and load it from there. Even though it is faster to load uncompressed files into BigQuery, the network time to transfer the files dwarfs whatever benefits you’d get from a faster load.
* As of this writing, the loading of compressed CSV and JSON files is limited to files less than 4 GB in size because BigQuery has to uncompress the files on the fly on workers whose memory is finite. If you have larger datasets, split them across multiple CSV or JSON files. Splitting files yourself can allow for some degree of parallelism when doing the loads, but depending on how you size the files, this can lead to suboptimal file sizes in the table until BigQuery decides to optimize the storage.
On loading LIMITATIONS and PRICING
* BigQuery does not charge for loading data. Ingestion happens on a set of workers that is distinct from the cluster providing the slots used for querying. Hence, your queries (even on the same table into which you are ingesting data) are not slowed down by the fact that data is being ingested.
* Data loads are atomic. Queries on a table will either reflect the presence of all the data that is loaded in through the bq load operation or reflect none of it. You will not get query results on a partial slice of the data.
* The drawback of loading data using a “free” cluster is that load times can become unpredictable and bottlenecked by preexisting jobs. As of this writing, load jobs are limited to 1,000 per table and 100,000 per project per day. In the case of CSV and JSON files, cells and rows are limited to 100 MB, whereas in Avro, blocks are limited to 16 MB. Files cannot exceed 5 TB in size. If you have a larger dataset, split it across multiple files, each smaller than 5 TB. However, a single load job can submit a maximum of 15 TB of data split across a maximum of 10 million files. The load job must finish executing in less than six hours or it will be cancelled.
! running SQL file using FLAGFILE - get_metadata.sql
{{{
--format: <none|json|prettyjson|csv|sparse|pretty>: Format for command output.
Options include:
pretty: formatted table output
sparse: simpler table output
prettyjson: easy-to-read JSON format
json: maximally compact JSON
csv: csv format with header
The first three are intended to be human-readable, and the latter three are
for passing to another program. If no format is selected, one will be chosen
based on the command run.
}}}
* get_metadata
{{{
cat get_metadata.sql
SELECT
TO_JSON_STRING(
ARRAY_AGG(STRUCT(
IF(is_nullable = 'YES', 'NULLABLE', 'REQUIRED') AS
mode,
column_name AS name,
data_type AS type)
ORDER BY ordinal_position), TRUE) AS schema
FROM
ch04.INFORMATION_SCHEMA.COLUMNS
WHERE
table_name = 'college_scorecard';
bq query --format=sparse --use_legacy_sql=false --flagfile=get_metadata.sql
}}}
! running SQL file, remove headers, GENERATE METADATA FROM BQ to JSON
https://cloud.google.com/bigquery/docs/schemas#specify-schema-manual-python
{{{
bq query --format=sparse --use_legacy_sql=false --flagfile=get_metadata.sql | awk 'NR>2' > schema2.json
the difference (diff -y of the two files; lines marked < exist only in schema.json, i.e. the header rows that awk stripped)
schema <
------------------------------------------------- <
[ [
{ {
"mode": "NULLABLE", "mode": "NULLABLE",
"name": "UNITID", "name": "UNITID",
"type": "INT64" "type": "INT64"
}, },
{ {
"mode": "NULLABLE", "mode": "NULLABLE",
"name": "OPEID", "name": "OPEID",
"type": "INT64" "type": "INT64"
}, },
}}}
https://stackoverflow.com/questions/45415564/remove-header-from-query-result-in-bq-command-line
https://stackoverflow.com/questions/33426395/google-bigquery-bq-command-line-execute-query-from-a-file
! running SQL file, remove headers, GENERATE METADATA FROM BQ to JSON - BETTER SIMPLER WAY
{{{
# this shows the pretty output
bq show ch04.college_scorecard
# export metadata
bq show --schema --format=prettyjson ch04.college_scorecard > schema.json
}}}
! generate-schema - AUTO GENERATE METADATA
<<<
https://github.com/bxparks/bigquery-schema-generator
<<<
* need to remove the header row first, otherwise the INT columns will be evaluated as STRING
* only supports CSV, not pipe-delimited data; as a workaround you can sample the file and replace pipes with commas
{{{
pip install bigquery_schema_generator
--# remove 1st line header
sed 1d LOAN_STATUS_SED_OFFER-pipe.csv | generate-schema --input_format csv
--# sample line 10 to 42 of the file
sed -n '10,42p' LOAN_STATUS_SED_OFFER-pipe.csv | generate-schema --input_format csv
--# sample and replace pipes with commas as a workaround
sed -n '10,42p' LOAN_STATUS_SED_OFFER-pipefile.csv | sed 's/|/,/g' | generate-schema --input_format csv
}}}
! running SQL file, specify number of rows output
{{{
$ bq query -n=0 --use_legacy_sql=false "SELECT x FROM UNNEST([1, 2, 3]) AS x;"
Waiting on <job id> ... (0s) Current status: DONE
}}}
! running SQL file - run file from GCS bucket
{{{
gsutil cp get_metadata.sql gs://example_bucket-dev/scripts/
bq query --format=sparse --use_legacy_sql=false "$(gsutil cat gs://example_bucket-dev/scripts/get_metadata.sql)"
}}}
! running SQL - using bash
{{{
cat bq_query.sh
#!/bin/bash
read -d '' QUERY_TEXT << EOF
SELECT
start_station_name
, AVG(duration) as duration
, COUNT(duration) as num_trips
FROM \`bigquery-public-data\`.london_bicycles.cycle_hire
GROUP BY start_station_name
ORDER BY num_trips DESC
LIMIT 5
EOF
bq query --use_legacy_sql=false "$QUERY_TEXT"
}}}
{{{
sh bq_query.sh
+-------------------------------------+--------------------+-----------+
| start_station_name | duration | num_trips |
+-------------------------------------+--------------------+-----------+
| Belgrove Street , King's Cross | 1011.0766960393734 | 234458 |
| Hyde Park Corner, Hyde Park | 2782.730708763667 | 215629 |
| Waterloo Station 3, Waterloo | 866.3761345037934 | 201630 |
| Black Lion Gate, Kensington Gardens | 3588.0120035565997 | 161952 |
| Albert Gate, Hyde Park | 2359.4139302395806 | 155647 |
+-------------------------------------+--------------------+-----------+
}}}
! data dictionary - get schema of all the tables in the dataset
{{{
SELECT
table_name
, column_name
, ordinal_position
, is_nullable
, data_type
FROM
ch04.INFORMATION_SCHEMA.COLUMNS
}}}
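Run from the CLI like any other standard-SQL statement (sketch):
{{{
bq query --use_legacy_sql=false \
  'SELECT table_name, column_name, data_type
   FROM ch04.INFORMATION_SCHEMA.COLUMNS
   ORDER BY table_name, ordinal_position'
}}}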
! data dictionary - bq - list tables of dataset
{{{
bq ls cpb200_flight_data
/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/platform/bq/bq.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
tableId Type Labels Time Partitioning Clustered Fields
-------------- ------- -------- ------------------- ------------------
AIRPORTS TABLE
flights_2014 TABLE
bq ls ch04
/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/platform/bq/bq.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
tableId Type Labels Time Partitioning Clustered Fields
----------------------- ------- -------- ------------------- ------------------
college_scorecard TABLE
college_scorecard3 TABLE
college_scorecard3a TABLE
college_scorecard_etl TABLE
bq ls ch05
/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/platform/bq/bq.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
tableId Type Labels Time Partitioning Clustered Fields
------------------- ------- -------- ------------------- ------------------
college_scorecard TABLE
}}}
! data dictionary - bq - show column data type, rows, and table size
{{{
bq show cpb200_flight_data.flights_2014
/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/platform/bq/bq.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Table example-dev-284123:cpb200_flight_data.flights_2014
Last modified Schema Total Rows Total Bytes Expiration Time Partitioning Clustered Fields Labels
----------------- ------------------------------------ ------------ ------------- ------------ ------------------- ------------------ --------
27 Jul 08:41:26 |- YEAR: integer (required) 6303310 1214695872
|- QUARTER: integer (required)
|- MONTH: integer (required)
|- DAY_OF_MONTH: integer
|- DAY_OF_WEEK: integer
|- FULL_DATE: string
|- CARRIER: string
|- TAIL_NUMBER: string
|- FLIGHT_NUMBER: string
|- ORIGIN: string
|- DESTINATION: string
|- SCHEDULED_DEPART_TIME: integer
|- ACTUAL_DEPART_TIME: integer
|- DEPARTURE_DELAY: integer
|- TAKE_OFF_TIME: integer
|- LANDING_TIME: integer
|- SCHEDULED_ARRIVAL_TIME: integer
|- ACTUAL_ARRIVAL_TIME: integer
|- ARRIVAL_DELAY: integer
|- FLIGHT_CANCELLED: integer
|- CANCELLATION_CODE: string
|- SCHEDULED_ELAPSED_TIME: integer
|- ACTUAL_ELAPSED_TIME: integer
|- AIR_TIME: integer
|- DISTANCE: integer
|- CARRIER_DELAY: integer
|- WEATHER_DELAY: integer
|- NAS_DELAY: integer
|- SECURITY_DELAY: integer
|- LATE_AIRCRAFT_DELAY: integer
}}}
! data dictionary - bq - show ACL of dataset
{{{
bq show cpb200_flight_data
Dataset example-dev-284123:cpb200_flight_data
Last modified ACLs Labels
27 Jul 08:20:55 Owners:
kristofferson.a.arao@gmail.com,
projectOwners
Writers:
projectWriters
Readers:
projectReaders
}}}
! DDL - efficient CLONE of a table - bq cp
* "bq cp" preserves the REQUIRED (not null) attribute on the column while CTAS does not
{{{
You are not billed for running a query, but you will be billed for the storage of the new table. The bq cp command supports appending (specify -a or --append_table) and replacement (specify -noappend_table).
You can also use the idiomatic Standard SQL method of using either CREATE TABLE AS SELECT or INSERT VALUES, depending on whether the destination already exists.
However, bq cp is faster (because it copies only the table metadata) and doesn’t incur query costs.
bq cp ch04.college_scorecard ch04.college_scorecard3
/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/platform/bq/bq.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Waiting on bqjob_r3637909fa7bf3265_000001738fa05a55_1 ... (0s) Current status: DONE
Table 'example-dev-284123:ch04.college_scorecard' successfully copied to 'example-dev-284123:ch04.college_scorecard3'
}}}
* appending (-a) the same source again doubles the number of rows in the destination
{{{
bq cp -a ch04.college_scorecard ch04.college_scorecard3
/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/platform/bq/bq.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Waiting on bqjob_r6867cc422e6d97fe_000001738fa340cd_1 ... (0s) Current status: DONE
Table 'example-dev-284123:ch04.college_scorecard' successfully copied to 'example-dev-284123:ch04.college_scorecard3'
}}}
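For a replace-style copy instead of an append, a sketch using the flags named above (-f skips the overwrite confirmation prompt):
{{{
bq cp -f --noappend_table ch04.college_scorecard ch04.college_scorecard3
}}}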
! DDL - delete table , delete dataset
{{{
--# delete table
bq rm ch06.college_scorecard
DROP TABLE IF EXISTS ch04.college_scorecard_gcs;
--# delete dataset (database)
bq rm -r -f ch06
}}}
{{{
bq rm ch06.college_scorecard
/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/platform/bq/bq.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
rm: remove table 'example-dev-284123:ch06.college_scorecard'? (y/N) y
bq rm -r -f ch06
/Users/kristofferson.a.arao/gcp-sdk/google-cloud-sdk/platform/bq/bq.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
}}}
! DDL - specify that a table needs to be expired at a certain time in the future
* by default Table expiration "Never"
{{{
It is also possible to specify that a table needs to be expired at a certain time in the future. You can do this with the ALTER TABLE SET OPTIONS statement:
ALTER TABLE ch05.college_scorecard
SET OPTIONS (
expiration_timestamp=TIMESTAMP_ADD(CURRENT_TIMESTAMP(),
INTERVAL 7 DAY),
description="College Scorecard expires seven days from now"
)
}}}
! DML - delete , insert
{{{
DELETE FROM ch04.college_scorecard
WHERE SAT_AVG IS NULL;
INSERT ch04.college_scorecard
SELECT *
FROM ch04.college_scorecard_etl
WHERE SAT_AVG IS NULL;
INSERT ch04.college_scorecard
(INSTNM
, ADM_RATE_ALL
, FIRST_GEN
, MD_FAMINC
, SAT_AVG
, MD_EARN_WNE_P10
)
VALUES ('abc', 0.1, 0.3, 12345, 1234, 23456),
('def', 0.2, 0.2, 23451, 1232, 32456);
}}}
! SELECT - Switching SQL dialects
https://cloud.google.com/bigquery/docs/reference/standard-sql/enabling-standard-sql#sql-prefix
!! using bq parameter
{{{
bq query \
--use_legacy_sql=false \
'SELECT
word
FROM
`bigquery-public-data.samples.shakespeare`'
}}}
!! Using a query prefix in the UI
{{{
#legacySQL
SELECT
weight_pounds, state, year, gestation_weeks
FROM
[bigquery-public-data:samples.natality]
ORDER BY weight_pounds DESC
LIMIT 10;
#legacySQL Runs the query using legacy SQL
#standardSQL Runs the query using standard SQL
}}}
!! set default in bigqueryrc
{{{
set --use_legacy_sql=false in .bigqueryrc
[query]
--use_legacy_sql=false
[mk]
--use_legacy_sql=false
}}}
! SQL TUNING - get jobs running on a project
{{{
# get all projects
bq ls -p
projectId friendlyName
------------------------ ------------------
example-dev-284123 example-dev
example-prod-284123 example-prod
# show jobs on a specific project
bq ls -j example-dev-284123
bq ls -j example-prod-284123
# MUST USE THIS - list jobs for all users
bq ls -j -a
}}}
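To keep the all-users listing manageable, -n caps the number of jobs returned (sketch):
{{{
bq ls -j -a -n 20
}}}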
! SQL TUNING - show job info
{{{
bq show -j bquxjob_734bad5c_173d4190298
Job example-dev-284123:bquxjob_734bad5c_173d4190298
Job Type State Start Time Duration User Email Bytes Processed Bytes Billed Billing Tier Labels
---------- --------- ----------------- ---------------- -------------------------------- ----------------- -------------- -------------- --------
query SUCCESS 09 Aug 12:41:22 0:00:03.806000 kristofferson.a.arao@gmail.com 156703034 157286400 1
}}}
! SQL TUNING - show job details
{{{
bq --format=prettyjson show -j bquxjob_734bad5c_173d4190298
}}}
! SQL TUNING - generate json execution plan
{{{
bq --format=prettyjson show -j example-dev-284123:US.bquxjob_27d580de_17387a47aff > myjob.export.json
}}}
! SQL TUNING - compare two job_ids
{{{
bq ls -j -a
jobId Job Type State Start Time Duration
-------------------------------------------- ---------- --------- ----------------- ----------------
bqjob_r7cd933076d264387_00000173d45a2ad4_1 query SUCCESS 09 Aug 13:52:27 0:00:00.112000
bqjob_r44158463afce6211_00000173d456fa48_1 query SUCCESS 09 Aug 13:48:57 0:00:00.369000
bqjob_r186c01fe8ed94868_00000173d456456d_1 query SUCCESS 09 Aug 13:48:11 0:00:00.498000
bqjob_r596685d70d42d244_00000173d4562970_1 query SUCCESS 09 Aug 13:48:03 0:00:00.438000
bqjob_r33a3a095f66b8e5b_00000173d4555421_1 query SUCCESS 09 Aug 13:47:09 0:00:00.546000
bqjob_r59e6252138b0f835_00000173d455045b_1 query SUCCESS 09 Aug 13:46:48 0:00:00.503000
bqjob_r42ce7c4106c80940_00000173d454bbe7_1 query SUCCESS 09 Aug 13:46:30 0:00:00.630000
job_cMB49WraYXFgV1hFl36-3OUo5eVk query SUCCESS 09 Aug 13:44:02 0:00:00.277000 <-- query1
bqjob_r5cc0550b6f367c57_00000173d450769b_1 query SUCCESS 09 Aug 13:41:51 0:00:02.041000 <-- query2
bqjob_r4ac46e2feef9213a_00000173d41eec48_1 query FAILURE 09 Aug 12:47:43 0:00:00
bq show -j job_cMB49WraYXFgV1hFl36-3OUo5eVk
Job Type State Start Time Duration User Email Bytes Processed Bytes Billed Billing Tier Labels
---------- --------- ----------------- ---------------- ------------------------------------------------------------ ----------------- -------------- -------------- --------
query SUCCESS 09 Aug 13:44:02 0:00:00.277000 example-dev-svc@example-dev-284123.iam.gserviceaccount.com 0 0
bq show -j bqjob_r5cc0550b6f367c57_00000173d450769b_1
Job Type State Start Time Duration User Email Bytes Processed Bytes Billed Billing Tier Labels
---------- --------- ----------------- ---------------- -------------------------------- ----------------- -------------- -------------- --------
query SUCCESS 09 Aug 13:41:51 0:00:02.041000 kristofferson.a.arao@gmail.com 903989528 904921088 1
bq --format=prettyjson show -j job_cMB49WraYXFgV1hFl36-3OUo5eVk | grep query
"query": {
"query": "SELECT start_station_name , AVG(duration) as duration , COUNT(duration) as num_trips FROM `bigquery-public-data`.london_bicycles.cycle_hire GROUP BY start_station_name ORDER BY num_trips DESC LIMIT 5",
bq --format=prettyjson show -j bqjob_r5cc0550b6f367c57_00000173d450769b_1 | grep query
"query": {
"query": "SELECT start_station_name , AVG(duration) as duration , COUNT(duration) as num_trips FROM `bigquery-public-data`.london_bicycles.cycle_hire GROUP BY start_station_name ORDER BY num_trips DESC LIMIT 5",
bq --format=prettyjson show -j job_cMB49WraYXFgV1hFl36-3OUo5eVk > query1.sql
bq --format=prettyjson show -j bqjob_r5cc0550b6f367c57_00000173d450769b_1 > query2.sql
diff -y query1.sql query2.sql | less
}}}
! backup and restore dataset and tables
https://github.com/GoogleCloudPlatform/bigquery-oreilly-book/tree/master/blogs/bigquery_backup
{{{
# backup one table
# This saves a schema.json, a tabledef.json, and extracted data in AVRO format to GCS
./bq_backup.py --input dataset.tablename --output gs://BUCKET/backup
# backup all the tables in a data set
python bq_backup.py --input ch04 --output gs://example_bucket-dev/backup
# restore per table
gsutil ls gs://example_bucket-dev/backup/ch04
gs://example_bucket-dev/backup/ch04/LOAN_STATUS_SED_OFFER/
gs://example_bucket-dev/backup/ch04/college_scorecard/
gs://example_bucket-dev/backup/ch04/college_scorecard3/
gs://example_bucket-dev/backup/ch04/college_scorecard3a/
gs://example_bucket-dev/backup/ch04/college_scorecard_etl/
python bq_restore.py --input gs://example_bucket-dev/backup/ch04/college_scorecard/ --output ch05
}}}
! bq insert to table
{{{
bq insert ch05.rentals_last_hour data.json
}}}
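bq insert streams newline-delimited JSON rows into an existing table; a sketch of the input file (the column names here are hypothetical):
{{{
cat > data.json << EOF
{"station":"Hyde Park Corner","num_bikes":12}
{"station":"Waterloo Station 3","num_bikes":7}
EOF
bq insert ch05.rentals_last_hour data.json
}}}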
! bq extract from table
https://cloud.google.com/bigquery/docs/exporting-data#exporting_data_stored_in
{{{
bq extract --format=json ch05.bad_bikes gs://bad_bikes.json
}}}
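A fuller variant as a sketch (BUCKET is a placeholder; --destination_format and --compression are the canonical extract flags):
{{{
bq extract \
  --destination_format=NEWLINE_DELIMITED_JSON --compression=GZIP \
  ch05.bad_bikes gs://BUCKET/bad_bikes.json.gz
}}}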
! SCRIPTING - data types
https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types
{{{
BigQuery Data Types
Data Type Description
============= =======================================
INT64 range: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
NUMERIC range: -99999999999999999999999999999.999999999 to 99999999999999999999999999999.999999999
FLOAT64 Double precision (approximate) decimal values
BOOL Represented by the keywords TRUE and FALSE (case insensitive)
STRING        Variable-length character (Unicode) data; must be UTF-8 encoded; there is no way to set a per-column max length
BYTES Variable-length binary data
DATE range: 0001-01-01 to 9999-12-31 No time part
DATETIME range: 0001-01-01 00:00:00 to 9999-12-31 23:59:59.999999 Date and time parts to the microsecond
TIME          range: 00:00:00 to 23:59:59.999999 No date part
TIMESTAMP range: 0001-01-01 00:00:00 to 9999-12-31 23:59:59.999999 UTC Date and time parts to the microsecond and timezone
Data Type Size
============= =======================================
INT64 8 bytes
FLOAT64 8 bytes
NUMERIC 16 bytes
BOOL          1 byte
STRING 2 bytes + the UTF-8 encoded string size
BYTES 2 bytes + the number of bytes in the value
DATE 8 bytes
DATETIME 8 bytes
TIME 8 bytes
TIMESTAMP 8 bytes
STRUCT/RECORD 0 bytes + the size of the contained fields
Notes:
https://cloud.google.com/bigquery/pricing#data
Null values for any data type use 0 bytes
=====================================================================
}}}
! SCRIPTING - use bq CLI to execute a query with bind parameters
{{{
echo "select @bv_1 as col1, @bv_2 as col2;" > test.sql
bq query --use_legacy_sql=False --nodry_run --parameter=bv_1::USA --parameter=bv_2::PH < test.sql
Waiting on bqjob_r5bea3ffac049a315_00000173d6fabb8b_1 ... (0s) Current status: DONE
+------+------+
| col1 | col2 |
+------+------+
| USA | PH |
+------+------+
}}}
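The --parameter flag takes name:type:value; leaving the type empty, as above, defaults it to STRING. A typed variant as a sketch:
{{{
bq query --use_legacy_sql=false \
  --parameter=min_count:INT64:100 \
  'SELECT @min_count * 2 AS doubled'
}}}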
! SCRIPTING - parameterized queries using the Python Cloud Client API
..
https://console.cloud.google.com/cloudshell
<<<
Welcome to Google Cloud Shell, a tool for managing resources hosted on Google Cloud Platform!
The machine comes pre-installed with the Google Cloud SDK and other popular developer tools.
Your 5GB home directory will persist across sessions, but the VM is ephemeral and will be reset
approximately 20 minutes after your session ends. No system-wide change will persist beyond that.
Type "gcloud help" to get help on using Cloud SDK. For more examples, visit
https://cloud.google.com/shell/docs/quickstart and https://cloud.google.com/shell/docs/examples
Type "cloudshell help" to get help on using the "cloudshell" utility. Common functionality is
aliased to short commands in your shell, for example, you can type "dl <filename>" at Bash prompt to
download a file. Type "cloudshell aliases" to see these commands.
Type "help" to see this message any time. Type "builtin help" to see Bash interpreter help.
<<<
{{{
# you need to initialize first
gcloud init
}}}
https://cloud.google.com/data-catalog/docs/concepts/overview
https://cloud.google.com/data-catalog/docs/quickstart-tagging
https://github.com/ricardolsmendes/gcp-datacatalog-python
https://medium.com/google-cloud/data-catalog-hands-on-guide-a-mental-model-dae7f6dd49e
https://medium.com/google-cloud/data-catalog-hands-on-guide-templates-tags-with-python-c45eb93372ef
https://www.pluralsight.com/courses/enterprise-database-migration
.
Google Cloud Platform tutorial: Deploying an example app | lynda.com https://www.youtube.com/watch?v=M56RdQwXvEU
https://clrc.org/this-weeks-featured-lynda-com-course-up-and-running-with-google-cloud-platform-with-joseph-lowery/
https://github.com/GoogleCloudPlatform/solutions-photo-sharing-demo-java
Build your first website on the Google Cloud platform https://www.youtube.com/watch?v=KzJxwu2poIc
<<showtoc>>
! READ THIS https://cloud.google.com/storage/docs/gsutil/commands/ls
https://cloud.google.com/storage/docs/quickstart-gsutil
! list files in bucket
{{{
(py385) AMAC02T60SJH03Y:04_load kristofferson.a.arao$ gsutil ls gs://cloud-training/CPB200/BQ/lab4/
gs://cloud-training/CPB200/BQ/lab4/airports.csv
gs://cloud-training/CPB200/BQ/lab4/carrier.json
gs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_000000000000.json
gs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_000000000001.json
gs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_000000000002.json
gs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_000000000003.json
gs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_000000000004.json
gs://cloud-training/CPB200/BQ/lab4/schema_flight_performance.json
gs://cloud-training/CPB200/BQ/lab4/campaign-finance/
(py385) AMAC02T60SJH03Y:04_load kristofferson.a.arao$ gsutil ls -l gs://cloud-training/CPB200/BQ/lab4/
22687 2015-12-19T01:55:23Z gs://cloud-training/CPB200/BQ/lab4/airports.csv
6840 2015-12-19T01:55:23Z gs://cloud-training/CPB200/BQ/lab4/carrier.json
822645519 2015-12-19T01:55:24Z gs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_000000000000.json
839342140 2015-12-19T01:55:24Z gs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_000000000001.json
834139371 2015-12-19T01:55:25Z gs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_000000000002.json
786923198 2015-12-19T01:55:25Z gs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_000000000003.json
272399448 2015-12-19T01:55:26Z gs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_000000000004.json
2360 2015-12-19T01:55:26Z gs://cloud-training/CPB200/BQ/lab4/schema_flight_performance.json
gs://cloud-training/CPB200/BQ/lab4/campaign-finance/
TOTAL: 8 objects, 3555481563 bytes (3.31 GiB)
}}}
! copy files to bucket
{{{
gsutil cp *.txt gs://my-bucket
}}}
! read first few lines of a file
{{{
gsutil cat -r 0-1000 gs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_000000000000.json
{"YEAR":"2014","QUARTER":"3","MONTH":"9","DAY_OF_MONTH":"1","DAY_OF_WEEK":"1","FULL_DATE":"2014-09-01","CARRIER":"AA","TAIL_NUMBER":"N794AA","FLIGHT_NUMBER":"1","ORIGIN":"JFK","DESTINATION":"LAX","SCHEDULED_DEPART_TIME":"900","ACTUAL_DEPART_TIME":"851","DEPARTURE_DELAY":"-9","TAKE_OFF_TIME":"910","LANDING_TIME":"1135","SCHEDULED_ARRIVAL_TIME":"1210","ACTUAL_ARRIVAL_TIME":"1144","ARRIVAL_DELAY":"-26","FLIGHT_CANCELLED":"0","CANCELLATION_CODE":"","SCHEDULED_ELAPSED_TIME":"370","ACTUAL_ELAPSED_TIME":"353","AIR_TIME":"325","DISTANCE":"2475"}
{"YEAR":"2014","QUARTER":"3","MONTH":"9","DAY_OF_MONTH":"2","DAY_OF_WEEK":"2","FULL_DATE":"2014-09-02","CARRIER":"AA","TAIL_NUMBER":"N797AA","FLIGHT_NUMBER":"1","ORIGIN":"JFK","DESTINATION":"LAX","SCHEDULED_DEPART_TIME":"900","ACTUAL_DEPART_TIME":"902","DEPARTURE_DELAY":"2","TAKE_OFF_TIME":"922","LANDING_TIME":"1134","SCHEDULED_ARRIVAL_TIME":"1210","ACTUAL_ARRIVAL_TIME":"1210","ARRIVAL_DELAY":"0","FLIGHT_CANCELLED":"0","CANCELLATION_CODE":"","SCHEDULED_(py385) AMAC02T60SJH03Y:04_load
kristofferson.a.arao$ gsutil cat -r 0-1000 gs://cloud-training/CPgs://cloud-training/CPB200/BQ/lab4/domestic_2014_flights_000000000000.json | less
}}}
! get directory size
{{{
gsutil du -h gs://example_bucket-dev/backup/ch04
223.73 KiB gs://example_bucket-dev/backup/ch04/LOAN_STATUS_SED_OFFER/data_000000000000.avro
856 B gs://example_bucket-dev/backup/ch04/LOAN_STATUS_SED_OFFER/schema.json
1.65 KiB gs://example_bucket-dev/backup/ch04/LOAN_STATUS_SED_OFFER/tbldef.json
226.21 KiB gs://example_bucket-dev/backup/ch04/LOAN_STATUS_SED_OFFER/
149.02 MiB gs://example_bucket-dev/backup/ch04/college_scorecard/data_000000000000.avro
163.93 KiB gs://example_bucket-dev/backup/ch04/college_scorecard/schema.json
201.6 KiB gs://example_bucket-dev/backup/ch04/college_scorecard/tbldef.json
149.38 MiB gs://example_bucket-dev/backup/ch04/college_scorecard/
149.02 MiB gs://example_bucket-dev/backup/ch04/college_scorecard3/data_000000000000.avro
149.02 MiB gs://example_bucket-dev/backup/ch04/college_scorecard3/data_000000000001.avro
163.93 KiB gs://example_bucket-dev/backup/ch04/college_scorecard3/schema.json
201.61 KiB gs://example_bucket-dev/backup/ch04/college_scorecard3/tbldef.json
298.4 MiB gs://example_bucket-dev/backup/ch04/college_scorecard3/
149.02 MiB gs://example_bucket-dev/backup/ch04/college_scorecard3a/data_000000000000.avro
163.93 KiB gs://example_bucket-dev/backup/ch04/college_scorecard3a/schema.json
201.61 KiB gs://example_bucket-dev/backup/ch04/college_scorecard3a/tbldef.json
149.38 MiB gs://example_bucket-dev/backup/ch04/college_scorecard3a/
544.75 KiB gs://example_bucket-dev/backup/ch04/college_scorecard_etl/data_000000000000.avro
336 B gs://example_bucket-dev/backup/ch04/college_scorecard_etl/schema.json
1.02 KiB gs://example_bucket-dev/backup/ch04/college_scorecard_etl/tbldef.json
546.09 KiB gs://example_bucket-dev/backup/ch04/college_scorecard_etl/
597.92 MiB gs://example_bucket-dev/backup/ch04/
}}}
<<showtoc>>
https://github.com/googleapis/google-cloud-python
! bigquery related apis
{{{
Google BigQuery (BigQuery README, BigQuery Documentation)
Google BigQuery Data Transfer (BigQuery Data Transfer README, BigQuery Data Transfer Documentation)
Google BigQuery Storage (BigQuery Storage README, BigQuery Storage Documentation)
}}}
!! bigquery api - GENERIC
https://cloud.google.com/bigquery/docs/reference/rest
https://cloud.google.com/bigquery/docs/reference
https://cloud.google.com/bigquery/docs/reference/bq-cli-reference
https://cloud.google.com/bigquery/docs/bigquery-storage-python-pandas
! Third-party BigQuery API client libraries
https://cloud.google.com/bigquery/docs/reference/libraries#third-party_client_libraries
!! pandas-gbq (migration guide)
https://pandas-gbq.readthedocs.io/en/latest/intro.html
https://pandas-gbq.readthedocs.io/en/latest/writing.html
!! bigrquery
https://github.com/r-dbi/bigrquery
!! spark-bigquery
https://github.com/spotify/spark-bigquery
! python apis
https://github.com/googleapis/python-bigquery
https://github.com/googleapis/python-trace
https://github.com/googleapis/python-bigquery-storage
https://github.com/googleapis/python-bigquery-datatransfer
https://github.com/googleapis/python-dlp
https://github.com/googleapis/python-iam
https://github.com/googleapis/python-bigquery-reservation
https://github.com/googleapis/python-access-approval
https://github.com/googleapis/python-datalabeling
https://github.com/googleapis/python-monitoring
https://github.com/googleapis/python-monitoring-dashboards
https://github.com/googleapis/python-billing
https://github.com/googleapis/python-bigquery-connection
https://github.com/googleapis/python-recommendations-ai
! commands summary
{{{
gcloud init
gcloud auth list
gcloud config list
gcloud info
}}}
https://cloud.google.com/sdk/docs/quickstart-macos
https://cloud.google.com/sdk/auth_success
https://cloud.google.com/sdk/gcloud
https://cloud.google.com/sdk/docs/initializing
https://cloud.google.com/compute/docs/storing-retrieving-metadata
https://developers.google.com/api-client-library
gcloud command-line tool overview https://cloud.google.com/sdk/gcloud
Cloud SDK CLI reference https://cloud.google.com/sdk/gcloud/reference
{{{
gcloud init
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
core:
account: kristofferson.a.arao@gmail.com
disable_usage_reporting: 'True'
project: example-dev-284123
Pick configuration to use:
[1] Re-initialize this configuration [default] with new settings
[2] Create a new configuration
Please enter your numeric choice: 1
Your current configuration has been set to: [default]
You can skip diagnostics next time by using the following flag:
gcloud init --skip-diagnostics
Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).
Choose the account you would like to use to perform operations for
this configuration:
[1] kristofferson.a.arao@gmail.com
[2] Log in with a new account
Please enter your numeric choice: 1
You are logged in as: [kristofferson.a.arao@gmail.com].
Pick cloud project to use:
[1] example-dev-284123
[2] example-prod-284123
[3] genuine-footing-275704
[4] karlarao
[5] Create a new project
Please enter numeric choice or text value (must exactly match list
item): 1
Your current project has been set to: [example-dev-284123].
Not setting default zone/region (this feature makes it easier to use
[gcloud compute] by setting an appropriate default value for the
--zone and --region flag).
See https://cloud.google.com/compute/docs/gcloud-compute section on how to set
default compute region and zone manually. If you would like [gcloud init] to be
able to do this for you the next time you run it, make sure the
Compute Engine API is enabled for your project on the
https://console.developers.google.com/apis page.
Your Google Cloud SDK is configured and ready to use!
* Commands that require authentication will use kristofferson.a.arao@gmail.com by default
* Commands will reference project `example-dev-284123` by default
Run `gcloud help config` to learn how to change individual settings
This gcloud configuration is called [default]. You can create additional configurations if you work with multiple accounts and/or projects.
Run `gcloud topic configurations` to learn more.
Some things to try next:
* Run `gcloud --help` to see the Cloud Platform services you can interact with. And run `gcloud help COMMAND` to get help on any gcloud command.
* Run `gcloud topic --help` to learn about advanced features of the SDK like arg files and output formatting
}}}
{{{
(py385) AMAC02T60SJH03Y:google-cloud-sdk kristofferson.a.arao$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* kristofferson.a.arao@gmail.com
To set the active account, run:
$ gcloud config set account `ACCOUNT`
(py385) AMAC02T60SJH03Y:google-cloud-sdk kristofferson.a.arao$
(py385) AMAC02T60SJH03Y:google-cloud-sdk kristofferson.a.arao$
(py385) AMAC02T60SJH03Y:google-cloud-sdk kristofferson.a.arao$ gcloud config list
[core]
account = kristofferson.a.arao@gmail.com
disable_usage_reporting = True
project = example-dev-284123
Your active configuration is: [default]
(py385) AMAC02T60SJH03Y:google-cloud-sdk kristofferson.a.arao$ gcloud info
Google Cloud SDK [302.0.0]
}}}
<<showtoc>>
! Getting Started with VPC Networking
https://googlepluralsight.qwiklabs.com/focuses/8196573?parent=lti_session
http://fritshoogland.files.wordpress.com/2013/03/profiling_of_oracle_using_function_calls.pdf
http://fritshoogland.files.wordpress.com/2013/02/about-multiblock-reads-v3-as43.pdf
http://www.tenouk.com/ModuleW.html
http://fritshoogland.files.wordpress.com/2012/06/exadata-explained-v3.pdf
Profiling Oracle with gdb Hands on https://fritshoogland.files.wordpress.com/2014/04/profiling-oracle-with-gdb-hands-on.pdf
https://fritshoogland.wordpress.com/2014/02/27/investigating-the-wait-interface-via-gdb/
https://renenyffenegger.ch/notes/development/databases/Oracle/internals/functions/index
profiling the dbwr and lgwr https://www.doag.org/formes/pubfiles/6190144/2014-null-Frits_Hoogland-Oak_Table__Profiling_the_logwriter_and_databasewriter-Manuskript.pdf
Gdb debugging-what is the first wait event encountered when committing https://titanwolf.org/Network/Articles/Article?AID=f38a218c-d162-4fc8-b8bd-e4a949758866#gsc.tab=0
<<showtoc>>
check out [[SQL developer data modeler]]
[img(80%,80%)[https://i.imgur.com/gs3mNJn.png]]
! oracle
SQL Developer ER diagram https://www.youtube.com/watch?v=f80xWJYKJFQ
! data grip
data grip ER https://www.youtube.com/watch?v=OBLAT7W6lWw
! dbeaver
https://github.com/dbeaver/dbeaver/wiki/Database-Structure-Diagrams
https://github.com/dbeaver/dbeaver/wiki/ER-Diagrams
https://dbeaver.com/databases/mongo/
!! mongodb
[img(100%,100%)[ https://i.imgur.com/PCwpA9W.png]]
!! snowflake schema
Querying and Modelling in Snowflake using DBeaver https://www.youtube.com/watch?v=tX-FfVNnOSA
! others
https://severalnines.com/blog/overview-database-diagram-tools-available-postgresql
.
random data
http://viralpatel.net/blogs/oracle-xmltable-tutorial/ <-- xml random data
http://viralpatel.net/blogs/generating-random-data-in-oracle/
http://www.oracle-base.com/articles/misc/dbms_random.php
http://oramatt.wordpress.com/2013/09/27/create-random-data-bulk-insert-from-nested-table/
http://www.java2s.com/Code/Oracle/System-Packages/Insertrandomnumbersintoatable.htm
http://stackoverflow.com/questions/13737715/oracle-insert-randomly-generated-english-words-as-dummy-data
http://www.objgen.com/html
http://www.objgen.com/json
http://www.convertcsv.com/csv-to-json.htm
http://www.convertcsv.com/json-to-csv.htm
https://stackoverflow.com/questions/17093180/store-multiple-values-in-single-key-in-json
https://json-csv.com/
{{{
create these two files, and run genstacks.sh
parameters:
1 - OSPID
2 - number of times to loop
./genstacks.sh 9153 3600
After the profiling run, do this on stacks3.txt for time series visualization
sed -i -e '1itime , data\' stacks3.txt
# get SPID
select /* usercheck */ s.sid sid, s.serial# serial#, lpad(p.spid,7) unix_pid
from gv$process p, gv$session s
where p.addr=s.paddr
and s.username is not null
and (s.inst_id, s.sid) in (select inst_id, sid from gv$mystat where rownum < 2);
$ cat genstacks.sh
#!/bin/ksh
count=$2
sc=0
while [ $sc -lt $count ]
do
sc=`expr $sc + 1`
./stack.sh $1 | gawk '{ print strftime("%m/%d/%Y %H:%M:%S"),"| " $0 }' >> stacks2.txt
./stack.sh $1 | sed 's/<-/\'$'\n''/g' | sed 's/+[^+]*//g' | gawk '{ print strftime("%m/%d/%Y %H:%M:%S"),", " $0 }' >> stacks3.txt
sleep 1
done
$ cat stack.sh
#!/bin/ksh
sqlplus -s 'sys/oracle as sysdba' << EOF
--spool stacks.txt append
oradebug setospid $1
oradebug unlimit
oradebug SHORT_STACK
EOF
}}}
http://toolbar.netcraft.com/site_report?url=enkitec.com
{{{
select dbms_sqltune.report_auto_tuning_task(
(select min(execution_name) from dba_advisor_findings
where task_name like 'SYS_AUTO_SQL%'),
(select max(execution_name) from dba_advisor_findings
where task_name like 'SYS_AUTO_SQL%')
) from dual;
select rec_id, to_char(attr5)
from dba_advisor_rationale
where execution_name = 'EXEC_22037'
and object_id = 18170
and rec_id > 0
order by rec_id;
REC_ID TO_CHAR(ATTR5)
------ ---------------------------------------------------------------------------------------------------------------------------------------------------------------
3778 OPT_ESTIMATE(@"SEL$A065B7E5", NLJ_INDEX_SCAN, "AC"@"SEL$27", ("L"@"SEL$27", "C"@"SEL$27", "U"@"SEL$28", "O"@"SEL$28"), "I_ATTRCOL1", SCALE_ROWS=0.001476321563)
}}}
http://www.pythian.com/news/38467/sql-tuning-advisor-hints/
{{{
--######################################################################################################################################################################################
-- SQL TUNING ADVISOR
-- Create tuning set
DECLARE
cursor1 DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
OPEN cursor1 FOR SELECT VALUE(p)
FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('sql_id = ''0fwmyxhv6qpry''')) p;
DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => '0fwmyxhv6qpry', description => '0fwmyxhv6qpry');
DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => '0fwmyxhv6qpry', populate_cursor => cursor1);
END;
/
-- Create tuning task from tuning set
DECLARE
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sqlset_name => '0fwmyxhv6qpry',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 3600,
task_name => '0fwmyxhv6qpry',
description => 'Tuning task for an SQL tuning set.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/
-- Execute tuning task
EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => '0fwmyxhv6qpry');
-- View result
SET LONG 10000
SET PAGESIZE 1000
SET LINESIZE 200
SELECT DBMS_SQLTUNE.report_tuning_task('0fwmyxhv6qpry') AS recommendations FROM dual;
--my_tuning_set_0fwmyxhv6qpry
--my_tuning_set_0fwmyxhv6qpryb
--staName45412
--0fwmyxhv6qpryb
--0fwmyxhv6qpry
--0fwmyxhv6qpryb
--######################################################################################################################################################################################
-- SQL ACCESS ADVISOR
DECLARE
taskname varchar2(30) := 'access_0fwmyxhv6qpryb';
task_desc varchar2(256) := 'SQL Access Advisor';
task_or_template varchar2(30) := 'SQLACCESS_EMTASK';
task_id number := 0;
num_found number;
sts_name varchar2(256) := '0fwmyxhv6qpryb';
sts_owner varchar2(30) := 'SYSTEM';
BEGIN
/* Create Task */
dbms_advisor.create_task(DBMS_ADVISOR.SQLACCESS_ADVISOR,task_id,taskname,task_desc,task_or_template);
/* Reset Task */
dbms_advisor.reset_task(taskname);
/* Delete Previous STS Workload Task Link */
select count(*) into num_found from user_advisor_sqla_wk_map where task_name = taskname and workload_name = sts_name;
IF num_found > 0 THEN
dbms_advisor.delete_sts_ref(taskname, sts_owner, sts_name);
END IF;
/* Link STS Workload to Task */
dbms_advisor.add_sts_ref(taskname,sts_owner, sts_name);
/* Set STS Workload Parameters */
dbms_advisor.set_task_parameter(taskname,'VALID_ACTION_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'VALID_MODULE_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'SQL_LIMIT',DBMS_ADVISOR.ADVISOR_UNLIMITED);
dbms_advisor.set_task_parameter(taskname,'VALID_USERNAME_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'VALID_TABLE_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'INVALID_TABLE_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'INVALID_ACTION_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'INVALID_USERNAME_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'INVALID_MODULE_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'VALID_SQLSTRING_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'INVALID_SQLSTRING_LIST','"@!"');
/* Set Task Parameters */
dbms_advisor.set_task_parameter(taskname,'ANALYSIS_SCOPE','ALL');
dbms_advisor.set_task_parameter(taskname,'RANKING_MEASURE','PRIORITY,OPTIMIZER_COST');
dbms_advisor.set_task_parameter(taskname,'DEF_PARTITION_TABLESPACE',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'TIME_LIMIT','30');
dbms_advisor.set_task_parameter(taskname,'MODE','COMPREHENSIVE');
dbms_advisor.set_task_parameter(taskname,'STORAGE_CHANGE',DBMS_ADVISOR.ADVISOR_UNLIMITED);
dbms_advisor.set_task_parameter(taskname,'DML_VOLATILITY','TRUE');
dbms_advisor.set_task_parameter(taskname,'WORKLOAD_SCOPE','PARTIAL');
dbms_advisor.set_task_parameter(taskname,'DEF_INDEX_TABLESPACE',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'DEF_INDEX_OWNER',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'DEF_MVIEW_TABLESPACE',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'DEF_MVIEW_OWNER',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'DEF_MVLOG_TABLESPACE',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'CREATION_COST','TRUE');
dbms_advisor.set_task_parameter(taskname,'JOURNALING','4');
dbms_advisor.set_task_parameter(taskname,'DAYS_TO_EXPIRE','3');
/* Execute Task */
dbms_advisor.execute_task(taskname);
END;
/
-- Display the resulting script.
SET LONG 100000
SET PAGESIZE 100000
SET LINESIZE 200
SELECT DBMS_ADVISOR.get_task_script('my_tuning_set_0fwmyxhv6qpry') AS script FROM dual;
-- Drop tasks and STS
BEGIN
DBMS_SQLTUNE.drop_tuning_task (task_name => '0fwmyxhv6qpryb');
DBMS_SQLTUNE.drop_tuning_task (task_name => 'access_0fwmyxhv6qpryb');
END;
/
BEGIN
DBMS_SQLTUNE.DROP_SQLSET( sqlset_name => '0fwmyxhv6qpryb' );
END;
/
-----------------
SELECT execution_name, task_name, execution_type, TO_CHAR(execution_start,'dd-mon-yyyy hh24:mi:ss') AS execution_start,
TO_CHAR(execution_end,'dd-mon-yyyy hh24:mi:ss') AS execution_end, status
FROM dba_advisor_executions
where owner = 'SYSTEM'
order by execution_start desc
-- WHERE task_name='access_0fwmyxhv6qpryb';
select * from dba_advisor_executions where owner = 'SYSTEM';
select * from dba_advisor_findings where owner = 'SYSTEM';
select * from DBA_ADVISOR_RECOMMENDATIONS where task_name = '0fwmyxhv6qpry';
-----------------
}}}
{{{
-- pull host CPU speed info from inside Oracle across platforms even through TNS
dmidecode | grep -i "product name"
cat /proc/cpuinfo | grep -i "model name" | uniq
The server info command has to be executed as root:
$ dmidecode | grep -i "product name"
/dev/mem: Permission denied
[root@enkdb03 ~]# dmidecode | grep -i "product name"
Product Name: SUN FIRE X4170 M2 SERVER
Product Name: ASSY,MOTHERBOARD,X4170
But the CPU info is fine as Oracle user
oracle@enkdb03.enkitec.com:/home/oracle/dba/karao/csvfiles4:DEMO1
$ cat /proc/cpuinfo | grep -i "model name" | uniq
model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
But this is specific to Linux environments, so we can just include it as an additional instruction to run on each server.
------------------------------------------------------------
Because it uses Java to run the back end process, you have to create a java procedure and then a PL/SQL wrapper for it.
It's the Java Proc that actually reaches out and calls the commands on the OS.
Only way to do that (to my knowledge) is to compile a java proc into the schema.
To get the info (and leave no artifacts behind) your script would have to
1. Create & compile the Java Proc
2. Create the PL/SQL Wrapper
3. Execute the PL/SQL Wrapper and process/Save the data
4. Drop the PL/SQL Wrapper
5. Drop the Java Proc
No other choice that I know of I'm afraid.
Doug Gault
------------------------------------------------------------
By the way, Doug and I worked on a script using Java to pull info even from TNS connection. See attached.
SYS@dbm1> @get_cpu_info.sql
Process out : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
}}}
get the script here http://karlarao.wordpress.com/scripts-resources/ under run_awr-quickextract -> utl folder
! to add
awr - add DBA_CPU_USAGE_STATISTICS
make output csv with host info dimensions
for an updated version check here https://github.com/karlarao/get_run_stats
<<showtoc>>
! the script
{{{
create table get_run_stats
( test_name varchar2(100),
snap_type varchar2(5),
snap_time date,
stat_class varchar2(10),
name varchar2(100),
value number)
/
create or replace package get_snap_time is
procedure begin_snap (p_run_name varchar2);
procedure end_snap (p_run_name varchar2);
end get_snap_time;
/
create or replace package body get_snap_time is
procedure begin_snap (p_run_name varchar2) is
l_sysdate date:=sysdate;
begin
-- snap begin elapsed time
insert into get_run_stats values (p_run_name,'BEGIN',l_sysdate,'SNAP','elapsed time',null);
-- snap begin mystat
insert into get_run_stats
SELECT p_run_name record_type,
'BEGIN',
l_sysdate,
TRIM (',' FROM
TRIM (' ' FROM
DECODE(BITAND(n.class, 1), 1, 'User, ')||
DECODE(BITAND(n.class, 2), 2, 'Redo, ')||
DECODE(BITAND(n.class, 4), 4, 'Enqueue, ')||
DECODE(BITAND(n.class, 8), 8, 'Cache, ')||
DECODE(BITAND(n.class, 16), 16, 'OS, ')||
DECODE(BITAND(n.class, 32), 32, 'RAC, ')||
DECODE(BITAND(n.class, 64), 64, 'SQL, ')||
DECODE(BITAND(n.class, 128), 128, 'Debug, ')
)) class,
n.name,
s.value
FROM v$mystat s,
v$statname n
WHERE s.statistic# = n.statistic#;
commit;
end begin_snap;
procedure end_snap (p_run_name varchar2) is
l_sysdate date:=sysdate;
begin
-- snap end elapsed time
insert into get_run_stats values (p_run_name,'END',l_sysdate,'SNAP','elapsed time',null);
-- snap end mystat
insert into get_run_stats
SELECT p_run_name record_type,
'END',
l_sysdate,
TRIM (',' FROM
TRIM (' ' FROM
DECODE(BITAND(n.class, 1), 1, 'User, ')||
DECODE(BITAND(n.class, 2), 2, 'Redo, ')||
DECODE(BITAND(n.class, 4), 4, 'Enqueue, ')||
DECODE(BITAND(n.class, 8), 8, 'Cache, ')||
DECODE(BITAND(n.class, 16), 16, 'OS, ')||
DECODE(BITAND(n.class, 32), 32, 'RAC, ')||
DECODE(BITAND(n.class, 64), 64, 'SQL, ')||
DECODE(BITAND(n.class, 128), 128, 'Debug, ')
)) class,
n.name,
s.value
FROM v$mystat s,
v$statname n
WHERE s.statistic# = n.statistic#;
commit;
end end_snap;
end get_snap_time;
/
}}}
! Usage:
{{{
exec get_snap_time.begin_snap('Test 1')
run something
exec get_snap_time.end_snap('Test 1')
}}}
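A concrete run as a bash sketch, in the same style as stack.sh above (the CTAS is a stand-in workload; t_workload is a hypothetical table name):
{{{
#!/bin/bash
# snap around a stand-in workload, then query get_run_stats for the deltas
sqlplus -s / as sysdba << EOF
exec get_snap_time.begin_snap('Test 1')
create table t_workload as select * from dba_objects;
exec get_snap_time.end_snap('Test 1')
EOF
}}}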
! Instrumentation:
{{{
col test_name format a15
col name format a70
select test_name, begin_snap, end_snap, snap_type, stat_class, name, delta from
(
select
test_name,
snap_type,
stat_class,
'secs - ' || name as name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
(snap_time - (lag(snap_time) over (order by snap_time)))*86400 delta
from get_run_stats
where name = 'elapsed time'
union all
select
test_name,
snap_type,
stat_class,
name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
value-lag(value) over (order by snap_time) delta
from get_run_stats
)
where snap_type = 'END'
and delta > 0
/
}}}
! HCC Instrumentation:
{{{
col test_name format a15
col name format a70
select test_name, begin_snap, end_snap, snap_type, stat_class, name, delta from
(
select
test_name, snap_type, stat_class,
'secs - ' || name as name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
(snap_time - (lag(snap_time) over (order by snap_time)))*86400 delta
from get_run_stats
where name = 'elapsed time'
union all
select
test_name, snap_type, stat_class,
'secs - ' || name as name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
(value-lag(value) over (order by snap_time))/100 delta
from get_run_stats
where name = 'CPU used by this session'
union all
select
test_name, snap_type, stat_class,
'MB/s - ' || name as name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
(value-lag(value) over (order by snap_time))/1024/1024 delta
from get_run_stats
where name = 'cell physical IO bytes eligible for predicate offload'
union all
select
test_name, snap_type, stat_class,
'MB/s - ' || name as name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
(value-lag(value) over (order by snap_time))/1024/1024 delta
from get_run_stats
where name = 'physical read total bytes'
union all
select
test_name, snap_type, stat_class,
'MB/s - ' || name as name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
(value-lag(value) over (order by snap_time))/1024/1024 delta
from get_run_stats
where name = 'cell physical IO interconnect bytes'
union all
select
test_name, snap_type, stat_class,
'MB/s - ' || name as name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
(value-lag(value) over (order by snap_time))/1024/1024 delta
from get_run_stats
where name = 'cell IO uncompressed bytes'
union all
select
test_name, snap_type, stat_class,
name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
(value-lag(value) over (order by snap_time)) delta
from get_run_stats
where name = 'cell CUs processed for uncompressed'
union all
select
test_name, snap_type, stat_class,
name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
(value-lag(value) over (order by snap_time)) delta
from get_run_stats
where name = 'cell CUs sent uncompressed'
union all
select
test_name, snap_type, stat_class,
name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
(value-lag(value) over (order by snap_time)) delta
from get_run_stats
where name in ('EHCC CUs Decompressed',
'EHCC Query High CUs Decompressed',
'EHCC Query Low CUs Decompressed',
'EHCC Archive CUs Decompressed')
)
where snap_type = 'END'
--and delta > 0
/
}}}
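The deltas can also be pulled straight out of get_run_stats without the lag() queries above. A quick sketch (test name assumed to be 'Test 1') that nets END minus BEGIN for the two offload metrics, to eyeball how much the smart scan saved on the interconnect:
{{{
select name,
       sum(decode(snap_type, 'END', value, -value))/1024/1024 delta_mb
from get_run_stats
where test_name = 'Test 1'
  and name in ('cell physical IO bytes eligible for predicate offload',
               'cell physical IO interconnect bytes')
group by name;
}}}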
! references
https://jonathanlewis.wordpress.com/my-stats/
http://www.java2s.com/Code/Oracle/System-Tables-Views/Queryvsessionevent.htm
! unpivot and R (gather) prototype
{{{
create table get_run_stats
( test_name varchar2(100),
snap_type varchar2(5),
snap_time date,
stat_class varchar2(10),
name varchar2(100),
value number)
/
-- v$session_event: raw dump for the current session (exploratory only --
-- the column list does not line up with get_run_stats yet, see sample output further down)
select
*
from
(select /*+ no_merge */ sid from v$mystat where rownum = 1) ms,
v$session_event se
where
se.sid = ms.sid;
select
'RUNNAME',
'BEGIN',
sysdate,
wait_class || ' - ' || event as class,
measure,
value
from
(
select * from v$session_event
unpivot (value for measure in (TOTAL_WAITS as 'TOTAL_WAITS',
TOTAL_TIMEOUTS as 'TOTAL_TIMEOUTS',
TIME_WAITED as 'TIME_WAITED',
AVERAGE_WAIT as 'AVERAGE_WAIT',
MAX_WAIT as 'MAX_WAIT',
TIME_WAITED_MICRO as 'TIME_WAITED_MICRO',
EVENT_ID as 'EVENT_ID',
WAIT_CLASS_ID as 'WAIT_CLASS_ID',
WAIT_CLASS# as 'WAIT_CLASS#'
))
where sid in (select /*+ no_merge */ sid from v$mystat where rownum = 1)
);

-- sample unpivoted output:
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,TOTAL_WAITS , 50
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,TOTAL_TIMEOUTS , 0
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,TIME_WAITED , 57
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,AVERAGE_WAIT , 1.15
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,MAX_WAIT , 5
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,TIME_WAITED_MICRO, 574273
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,EVENT_ID , 443865681
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,WAIT_CLASS_ID ,1740759767
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,WAIT_CLASS# , 8
-- raw v$session_event output (from the exploratory select above):
SID, SID,EVENT                          ,TOTAL_WAITS,TOTAL_TIMEOUTS,TIME_WAITED,AVERAGE_WAIT, MAX_WAIT,TIME_WAITED_MICRO,  EVENT_ID,WAIT_CLASS_ID,WAIT_CLASS#,WAIT_CLASS
153, 153,Disk file operations I/O , 4, 0, 0, 0, 0, 76, 166678035, 1740759767, 8,User I/O
153, 153,Disk file Mirror Read , 4, 0, 0, .04, 0, 1408, 13102552, 1740759767, 8,User I/O
153, 153,control file sequential read , 14, 0, 2, .12, 1, 16944,3213517201, 4108307767, 9,System I/O
153, 153,gc cr multi block request , 115, 0, 2, .02, 0, 18459, 661121159, 3871361733, 11,Cluster
153, 153,gc cr block 2-way , 26, 0, 0, .01, 0, 2589, 737661873, 3871361733, 11,Cluster
153, 153,gc current block 2-way , 27, 0, 0, .01, 0, 1992, 111015833, 3871361733, 11,Cluster
153, 153,gc cr grant 2-way , 1, 0, 0, .01, 0, 92,3201690383, 3871361733, 11,Cluster
153, 153,gc current grant 2-way , 5, 0, 0, .01, 0, 387,2685450749, 3871361733, 11,Cluster
153, 153,gc current grant busy , 2, 0, 0, .01, 0, 270,2277737081, 3871361733, 11,Cluster
153, 153,row cache lock , 12, 0, 0, .01, 0, 1452,1714089451, 3875070507, 4,Concurrency
153, 153,library cache pin , 16, 0, 0, .01, 0, 1428,2802704141, 3875070507, 4,Concurrency
153, 153,library cache lock , 15, 0, 0, .01, 0, 2040, 916468430, 3875070507, 4,Concurrency
153, 153,SQL*Net message to client , 125, 0, 0, 0, 0, 120,2067390145, 2000153315, 7,Network
153, 153,SQL*Net message from client , 124, 0, 778694, 6279.79, 153480, 7786936539,1421975091, 2723168908, 6,Idle
153, 153,SQL*Net break/reset to client , 14, 0, 0, 0, 0, 434,1963888671, 4217450380, 1,Application
153, 153,cell single block physical read , 9, 0, 5, .5, 2, 45231,2614864156, 1740759767, 8,User I/O
153, 153,cell multiblock physical read , 50, 0, 57, 1.15, 5, 574273, 443865681, 1740759767, 8,User I/O
17 rows selected.
-- v$session_event column list (input to the R gather step below):
EVENT
TOTAL_WAITS
TOTAL_TIMEOUTS
TIME_WAITED
AVERAGE_WAIT
MAX_WAIT
TIME_WAITED_MICRO
EVENT_ID
WAIT_CLASS_ID
WAIT_CLASS#
WAIT_CLASS
> gather(x2, MEASURE, VALUE, -WAIT_CLASS, -EVENT) %>% select(WAIT_CLASS, EVENT, MEASURE, VALUE) %>% arrange(EVENT)
WAIT_CLASS EVENT MEASURE VALUE
1 User I/O Disk file operations I/O TOTAL_WAITS 1.00
2 User I/O Disk file operations I/O TOTAL_TIMEOUTS 0.00
3 User I/O Disk file operations I/O TIME_WAITED 0.00
4 User I/O Disk file operations I/O AVERAGE_WAIT 0.01
5 User I/O Disk file operations I/O MAX_WAIT 0.00
6 User I/O Disk file operations I/O TIME_WAITED_MICRO 62.00
7 Idle SQL*Net message from client TOTAL_WAITS 25.00
8 Idle SQL*Net message from client TOTAL_TIMEOUTS 0.00
9 Idle SQL*Net message from client TIME_WAITED 88579.00
10 Idle SQL*Net message from client AVERAGE_WAIT 3543.15
11 Idle SQL*Net message from client MAX_WAIT 86705.00
12 Idle SQL*Net message from client TIME_WAITED_MICRO 885788632.00
13 Network SQL*Net message to client TOTAL_WAITS 25.00
14 Network SQL*Net message to client TOTAL_TIMEOUTS 0.00
15 Network SQL*Net message to client TIME_WAITED 0.00
16 Network SQL*Net message to client AVERAGE_WAIT 0.00
17 Network SQL*Net message to client MAX_WAIT 0.00
18 Network SQL*Net message to client TIME_WAITED_MICRO 18.00
-- v$sess_time_model
STAT_ID
STAT_NAME
VALUE
}}}
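For the v$sess_time_model stub above no UNPIVOT is needed at all -- the view is already in long format (STAT_NAME, VALUE). A sketch following the same insert pattern ('TIMEMODEL' is just a made-up stat_class label):
{{{
insert into get_run_stats
select 'RUNNAME', 'BEGIN', sysdate, 'TIMEMODEL', stat_name, value
from v$sess_time_model
where sid = (select /*+ no_merge */ sid from v$mystat where rownum = 1);
}}}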
<<showtoc>>
! install git
https://github.com/git-guides/install-git
https://devhints.io/homebrew
{{{
brew install git
brew deps --tree --installed
}}}
! install diffmerge
{{{
brew install --cask diffmerge
}}}
! install tower and authenticate
https://www.git-tower.com/help/guides/manage-hosting-services/connect-accounts/windows
github private repo limit https://www.evernote.com/l/ADDermYvLFlLR70-lolzED8-YZB-MRKhWvI
https://web.archive.org/web/20161028234258/http://www.oraclebuffer.com/general-discussions/which-index-to-choose-global-or-local-index-for-partitioned-table/
https://docs.gluent.com/
.
https://gluent.com/gdp/gluent-advisor/
https://gluent.com/run-gluent-advisor/
! download
{{{
wget https://gluent.s3.amazonaws.com/gluent_advisor_data_extractor.tar.bz2
tar -xjvpf gluent_advisor_data_extractor.tar.bz2
cd advisor_extract/
}}}
! running the advisor
{{{
sqlplus <username>/<password>
SYS@orcl> @advisor_extract.sql
===================================================================================
Gluent Offload Advisor (Advisor Data Extractor) v4.0.2 (15f441f)
Copyright 2015-2020 Gluent Inc. All rights reserved.
===================================================================================
***********************************************************************************
!!! WARNING !!!
Gluent Offload Advisor accesses views and packages that are licensed separately
under the Oracle Diagnostics Pack and Oracle Tuning Pack. Please ensure you have
the correct licenses to run this utility. See the README for further details.
To continue, press Enter. To cancel, press Ctrl-C.
***********************************************************************************
Initializing Advisor Data Extractor...
Wrote file _gluent_adv_orig_env.tmp
Parameters:
* gluent_adv_schema................ NULL
* gluent_adv_table_name............ NULL
* gluent_adv_snapshot_days......... 30
* gluent_adv_awr_source............ ROOT
* gluent_adv_awr_dbid.............. 887751344
* gluent_adv_db_name............... CDB1
* gluent_adv_spool_dir............. ./
* gluent_adv_zip_binary............ zip -j
* gluent_adv_include_ash........... Y
* gluent_adv_include_space......... Y
* gluent_adv_include_object_list... Y
* gluent_adv_include_sqlmons....... Y
Step completed.
Extracting Advisor data...
15-JAN-2021 16:04:38: Started advisor_extract_db_settings.sql
15-JAN-2021 16:04:38: Finished advisor_extract_db_settings.sql
15-JAN-2021 16:04:38: Started advisor_extract_execution_metadata.sql
15-JAN-2021 16:04:38: Finished advisor_extract_execution_metadata.sql
15-JAN-2021 16:04:38: Started advisor_extract_partition_data.sql
15-JAN-2021 16:04:40: Finished advisor_extract_partition_data.sql
15-JAN-2021 16:04:43: Started advisor_extract_schema_usage.sql
15-JAN-2021 16:04:47: Finished advisor_extract_schema_usage.sql
15-JAN-2021 16:04:48: Started advisor_extract_segment_data.sql
15-JAN-2021 16:05:18: Finished advisor_extract_segment_data.sql
15-JAN-2021 16:05:22: Started advisor_extract_stats_data.sql
15-JAN-2021 16:05:25: Finished advisor_extract_stats_data.sql
15-JAN-2021 16:05:25: Started advisor_extract_table_data.sql
15-JAN-2021 16:05:39: Finished advisor_extract_table_data.sql
15-JAN-2021 16:05:41: Started advisor_extract_tablespace_usage.sql
15-JAN-2021 16:05:44: Finished advisor_extract_tablespace_usage.sql
15-JAN-2021 16:05:45: Started advisor_extract_ash_data.sql
15-JAN-2021 16:05:49: Finished advisor_extract_ash_data.sql
15-JAN-2021 16:05:50: Started advisor_extract_cpu_usage.sql
15-JAN-2021 16:05:51: Finished advisor_extract_cpu_usage.sql
15-JAN-2021 16:05:51: Started advisor_extract_sql_usage.sql
15-JAN-2021 16:06:22: Finished advisor_extract_sql_usage.sql
Step completed. Advisor data successfully extracted.
Archiving files ./advisor_extract_*_CDB1.csv ./advisor_extract_*_CDB1*.txt into ./advisor_extract_CDB1_2021_01_15-16_04_38.zip
adding: advisor_extract_ash_data_CDB1.csv (stored 0%)
adding: advisor_extract_db_settings_CDB1.csv (deflated 6%)
adding: advisor_extract_execution_metadata_CDB1.csv (deflated 6%)
adding: advisor_extract_partition_data_CDB1.csv (deflated 94%)
adding: advisor_extract_schema_usage_CDB1.csv (deflated 67%)
adding: advisor_extract_segment_data_CDB1.csv (deflated 79%)
adding: advisor_extract_stats_data_CDB1.csv (stored 0%)
adding: advisor_extract_table_data_CDB1.csv (deflated 83%)
adding: advisor_extract_tablespace_usage_CDB1.csv (deflated 67%)
adding: advisor_extract_ash_data_CDB1.sqlmon.txt (deflated 89%)
adding: advisor_extract_cpu_usage_CDB1.sqlmon.txt (deflated 2%)
adding: advisor_extract_cpu_usage_CDB1.txt (deflated 92%)
adding: advisor_extract_partition_data_CDB1.sqlmon.txt (deflated 91%)
adding: advisor_extract_schema_usage_CDB1.sqlmon.txt (deflated 91%)
adding: advisor_extract_segment_data_CDB1.sqlmon.txt (deflated 93%)
adding: advisor_extract_sql_usage_by_plan_CDB1.txt (deflated 95%)
adding: advisor_extract_sql_usage_by_sql_CDB1.txt (deflated 95%)
adding: advisor_extract_sql_usage_CDB1.sqlmon.txt (deflated 2%)
adding: advisor_extract_stats_data_CDB1.sqlmon.txt (deflated 87%)
adding: advisor_extract_table_data_CDB1.sqlmon.txt (deflated 91%)
adding: advisor_extract_tablespace_usage_CDB1.sqlmon.txt (deflated 87%)
O/S Message: No child processes
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
}}}
! advisor output
{{{
$ ls -ltr
total 880
-rw-r--r--. 1 oracle oinstall 62 Oct 22 10:46 VERSION.txt
-rw-r--r--. 1 oracle oinstall 15873 Oct 22 10:46 sql_disco.sql
-rw-r--r--. 1 oracle oinstall 3521 Oct 22 10:46 show_awr_sources.sql
-rw-r--r--. 1 oracle oinstall 3646 Oct 22 10:46 README.txt
-rw-r--r--. 1 oracle oinstall 53 Oct 22 10:46 LICENSE.txt
-rw-r--r--. 1 oracle oinstall 7291 Oct 22 10:46 cpu_usage_by_hour.sql
-rw-r--r--. 1 oracle oinstall 1236 Oct 22 10:46 advisor_extract_valid_schemas.sql
-rw-r--r--. 1 oracle oinstall 1719 Oct 22 10:46 advisor_extract_tablespace_usage.sql
-rw-r--r--. 1 oracle oinstall 8373 Oct 22 10:46 advisor_extract_table_data.sql
-rw-r--r--. 1 oracle oinstall 547 Oct 22 10:46 advisor_extract_stop_script.sql
-rw-r--r--. 1 oracle oinstall 3289 Oct 22 10:46 advisor_extract_stats_data.sql
-rw-r--r--. 1 oracle oinstall 420 Oct 22 10:46 advisor_extract_start_script.sql
-rw-r--r--. 1 oracle oinstall 633 Oct 22 10:46 advisor_extract_sql_usage.sql
-rw-r--r--. 1 oracle oinstall 2461 Oct 22 10:46 advisor_extract_sqlmon_report.sql
-rw-r--r--. 1 oracle oinstall 2503 Oct 22 10:46 advisor_extract.sql
-rw-r--r--. 1 oracle oinstall 14870 Oct 22 10:46 advisor_extract_segment_data.sql
-rw-r--r--. 1 oracle oinstall 1623 Oct 22 10:46 advisor_extract_schema_usage.sql
-rw-r--r--. 1 oracle oinstall 119 Oct 22 10:46 advisor_extract_restore.sql
-rw-r--r--. 1 oracle oinstall 1853 Oct 22 10:46 advisor_extract_partition_data.sql
-rw-r--r--. 1 oracle oinstall 16511 Oct 22 10:46 advisor_extract_init.sql
-rw-r--r--. 1 oracle oinstall 1251 Oct 22 10:46 advisor_extract_execution_metadata.sql
-rw-r--r--. 1 oracle oinstall 3787 Oct 22 10:46 advisor_extract_env.sql
-rw-r--r--. 1 oracle oinstall 1045 Oct 22 10:46 advisor_extract_define_predicate.sql
-rw-r--r--. 1 oracle oinstall 1728 Oct 22 10:46 advisor_extract_db_settings.sql
-rw-r--r--. 1 oracle oinstall 494 Oct 22 10:46 advisor_extract_cpu_usage.sql
-rw-r--r--. 1 oracle oinstall 3786 Oct 22 10:46 advisor_extract_ash_data.sql
-rw-r--r--. 1 oracle oinstall 2672 Jan 15 16:04 _gluent_adv_orig_env.tmp
-rw-r--r--. 1 oracle oinstall 49 Jan 15 16:04 advisor_extract_db_settings_CDB1.csv
-rw-r--r--. 1 oracle oinstall 79 Jan 15 16:04 advisor_extract_execution_metadata_CDB1.csv
-rw-r--r--. 1 oracle oinstall 7922 Jan 15 16:04 advisor_extract_partition_data_CDB1.csv
-rw-r--r--. 1 oracle oinstall 50895 Jan 15 16:04 advisor_extract_partition_data_CDB1.sqlmon.txt
-rw-r--r--. 1 oracle oinstall 1031 Jan 15 16:04 advisor_extract_schema_usage_CDB1.csv
-rw-r--r--. 1 oracle oinstall 61997 Jan 15 16:04 advisor_extract_schema_usage_CDB1.sqlmon.txt
-rw-r--r--. 1 oracle oinstall 4904 Jan 15 16:05 advisor_extract_segment_data_CDB1.csv
-rw-r--r--. 1 oracle oinstall 203241 Jan 15 16:05 advisor_extract_segment_data_CDB1.sqlmon.txt
-rw-r--r--. 1 oracle oinstall 0 Jan 15 16:05 advisor_extract_stats_data_CDB1.csv
-rw-r--r--. 1 oracle oinstall 29378 Jan 15 16:05 advisor_extract_stats_data_CDB1.sqlmon.txt
-rw-r--r--. 1 oracle oinstall 8993 Jan 15 16:05 advisor_extract_table_data_CDB1.csv
-rw-r--r--. 1 oracle oinstall 82119 Jan 15 16:05 advisor_extract_table_data_CDB1.sqlmon.txt
-rw-r--r--. 1 oracle oinstall 1073 Jan 15 16:05 advisor_extract_tablespace_usage_CDB1.csv
-rw-r--r--. 1 oracle oinstall 18839 Jan 15 16:05 advisor_extract_tablespace_usage_CDB1.sqlmon.txt
-rw-r--r--. 1 oracle oinstall 0 Jan 15 16:05 advisor_extract_ash_data_CDB1.csv
-rw-r--r--. 1 oracle oinstall 42655 Jan 15 16:05 advisor_extract_ash_data_CDB1.sqlmon.txt
-rw-r--r--. 1 oracle oinstall 21764 Jan 15 16:05 advisor_extract_cpu_usage_CDB1.txt
-rw-r--r--. 1 oracle oinstall 61 Jan 15 16:05 advisor_extract_cpu_usage_CDB1.sqlmon.txt
-rw-r--r--. 1 oracle oinstall 41236 Jan 15 16:06 advisor_extract_sql_usage_by_sql_CDB1.txt
-rw-r--r--. 1 oracle oinstall 41237 Jan 15 16:06 advisor_extract_sql_usage_by_plan_CDB1.txt
-rw-r--r--. 1 oracle oinstall 61 Jan 15 16:06 advisor_extract_sql_usage_CDB1.sqlmon.txt
-rw-r--r--. 1 oracle oinstall 57735 Jan 15 16:06 advisor_extract_CDB1_2021_01_15-16_04_38.zip <-- email this to us please
}}}
! advisor zip file
{{{
unzip -l advisor_extract_CDB1_2021_01_15-16_04_38.zip
Archive: advisor_extract_CDB1_2021_01_15-16_04_38.zip
Length Date Time Name
--------- ---------- ----- ----
0 01-15-2021 16:05 advisor_extract_ash_data_CDB1.csv
49 01-15-2021 16:04 advisor_extract_db_settings_CDB1.csv
79 01-15-2021 16:04 advisor_extract_execution_metadata_CDB1.csv
7922 01-15-2021 16:04 advisor_extract_partition_data_CDB1.csv
1031 01-15-2021 16:04 advisor_extract_schema_usage_CDB1.csv
4904 01-15-2021 16:05 advisor_extract_segment_data_CDB1.csv
0 01-15-2021 16:05 advisor_extract_stats_data_CDB1.csv
8993 01-15-2021 16:05 advisor_extract_table_data_CDB1.csv
1073 01-15-2021 16:05 advisor_extract_tablespace_usage_CDB1.csv
42655 01-15-2021 16:05 advisor_extract_ash_data_CDB1.sqlmon.txt
61 01-15-2021 16:05 advisor_extract_cpu_usage_CDB1.sqlmon.txt
21764 01-15-2021 16:05 advisor_extract_cpu_usage_CDB1.txt
50895 01-15-2021 16:04 advisor_extract_partition_data_CDB1.sqlmon.txt
61997 01-15-2021 16:04 advisor_extract_schema_usage_CDB1.sqlmon.txt
203241 01-15-2021 16:05 advisor_extract_segment_data_CDB1.sqlmon.txt
41237 01-15-2021 16:06 advisor_extract_sql_usage_by_plan_CDB1.txt
41236 01-15-2021 16:06 advisor_extract_sql_usage_by_sql_CDB1.txt
61 01-15-2021 16:06 advisor_extract_sql_usage_CDB1.sqlmon.txt
29378 01-15-2021 16:05 advisor_extract_stats_data_CDB1.sqlmon.txt
82119 01-15-2021 16:05 advisor_extract_table_data_CDB1.sqlmon.txt
18839 01-15-2021 16:05 advisor_extract_tablespace_usage_CDB1.sqlmon.txt
--------- -------
617534 21 files
}}}
https://docs.gluent.com/4.0.x/goe/gluent_express.html
<<showtoc>>
! product page
Real-Time Transactional Data Streaming Platform
https://www.oracle.com/middleware/data-integration/goldengate/big-data/
https://www.oracle.com/middleware/products/
! quick overview
Oracle GoldenGate for Big Data - Mike Rainey https://www.youtube.com/watch?v=4-1qBhQ_u_I
! installation
Installing Oracle GoldenGate for Big Data https://docs.oracle.com/goldengate/bd123010/gg-bd/GBDIG/GUID-2379B9F2-BDBF-47C8-8B7B-AB273773FBD3.htm#GBDIG-GUID-2379B9F2-BDBF-47C8-8B7B-AB273773FBD3
! deployment patterns
!! GG to HDFS
Oracle GoldenGate Big Data: Apply to Apache HDFS http://www.ateam-oracle.com/oracle-goldengate-big-data-apply-to-apache-hdfs
!! microservices
Microservices Architecture (MA) in Oracle Goldengate 12cR3 https://www.youtube.com/watch?v=uKXMJcxYF6U
!! CDC kafka
Create a CDC Event Stream From Oracle Database to Kafka With GoldenGate https://dzone.com/articles/creates-a-cdc-stream-from-oracle-database-to-kafka
https://www.google.com/search?q=kafka+connect+goldengate
robin https://rmoff.net/2018/02/01/howto-oracle-goldengate-apache-kafka-schema-registry-swingbench/
https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d
open in google colab https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo
https://github.com/GokuMohandas/practicalAI
<<<
pandu deep space 1 google search
https://www.google.com/search?q=pandu+deep+space+1+google+search
https://www.linkedin.com/in/pandu-nayak-4790201/
http://www.pandunayak.com/
http://pandunayak.blogspot.com/
david bez google
https://twitter.com/tweetbez?lang=en
cathy edwards search google
https://www.google.com/search?q=cathy+edwards+search+google
https://www.linkedin.com/in/edwardscathy/
urs hölzle google
https://www.google.com/search?client=firefox-b-1-d&q=urs+h%C3%B6lzle+google
eric lehman google engineer
https://www.google.com/search?client=firefox-b-1-d&q=eric+lehman+google+engineer
nick fox google search
https://twitter.com/thefox?lang=en
ben gomes svp google
https://www.google.com/search?client=firefox-b-1-d&q=ben+gomes+svp+google
fede lebron search google
https://www.linkedin.com/in/federicolebron/?originalSubdomain=ar
jeff hinton machine learning
https://www.google.com/search?client=firefox-b-1-d&q=jeff+hinton+machine+learning
jingcao hu google search
https://scholar.google.com/citations?user=u0qJnugAAAAJ&hl=en
elizabeth tucker google search
tulsee doshi google search
https://www.linkedin.com/in/tulsee-doshi/
<<<
https://cloudplatform.googleblog.com/2017/02/introducing-Cloud-Spanner-a-global-database-service-for-mission-critical-applications.html
https://cloudplatform.googleblog.com/2017/02/inside-Cloud-Spanner-and-the-CAP-Theorem.html
https://cloud.google.com/spanner/docs/best-practices
Spanner, TrueTime and the CAP Theorem https://research.google.com/pubs/pub45855.html
https://quizlet.com/blog/quizlet-cloud-spanner
Granting a Database Vault role like DV_ACCTMGR is itself realm-protected, so the grant fails from DVOWNER (ORA-47410) and has to come from an account that already holds the role with ADMIN OPTION (dvadmin in this setup):
{{{
15:40:12 SYS@ifstst_1> conn dvowner/<password>
Connected.
15:40:25 DVOWNER@ifstst_1>
15:40:27 DVOWNER@ifstst_1>
15:40:27 DVOWNER@ifstst_1> grant DV_ACCTMGR to remotedba2;
grant DV_ACCTMGR to remotedba2
*
ERROR at line 1:
ORA-47410: Realm violation for GRANT on DV_ACCTMGR
15:40:43 DVOWNER@ifstst_1> conn dvadmin/<password>
Connected.
15:40:59 DVADMIN@ifstst_1> grant DV_ACCTMGR to remotedba2;
Grant succeeded.
}}}
{{{
15:39:30 REMOTEDBA2@ifstst_1> create user karlarao identified by karlarao;
create user karlarao identified by karlarao
*
ERROR at line 1:
ORA-01031: insufficient privileges
15:39:50 REMOTEDBA2@ifstst_1> create user karlarao identified by karlarao; <-- after grant of DV_ACCTMGR
User created.
15:41:19 REMOTEDBA2@ifstst_1> drop user karlarao;
User dropped.
15:41:24 REMOTEDBA2@ifstst_1> create user karlarao identified by karlarao;
create user karlarao identified by karlarao
*
ERROR at line 1:
ORA-01031: insufficient privileges
}}}
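A quick way to check who can actually do the grant (anyone holding DV_ACCTMGR, ideally with ADMIN OPTION):
{{{
select grantee, admin_option
from dba_role_privs
where granted_role = 'DV_ACCTMGR';
}}}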
https://www.youtube.com/results?search_query=graphviz
https://iggyfernandez.wordpress.com/2010/11/26/explaining-the-explain-plan-using-pictures/
https://community.toadworld.com/platforms/oracle/w/wiki/11002.explaining-the-explain-plan-using-pictures
{{{
/*
Copyright 2010 Iggy Fernandez
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
The purpose of this SQL*Plus script is to generate a Graphviz
program that can draw a tree-structure graphical version of a query
plan. It prompts for a SQL_ID and CHILD_NUMBER. The following basic
data items are first obtained from V$SQL_PLAN_STATISTICS_ALL:
id
parent_id
object_name
operation
options
last_starts
last_elapsed_time / 1000000 AS last_elapsed_time
cardinality
last_output_rows
last_cr_buffer_gets + last_cu_buffer_gets AS last_buffer_gets
last_disk_reads
The following items are then computed from the basic data:
execution_sequence#
delta_elapsed_time
delta_buffer_gets
delta_disk_reads
delta_percentage_elapsed_time
delta_percentage_buffer_gets
delta_percentage_disk_reads
last_percentage_elapsed_time
last_percentage_buffer_gets
last_percentage_disk_reads
Graphviz commands are then spooled to plan.dot. If Graphviz has been
installed, the following command can be used to produce graphical
output.
dot -Tjpg -oplan.jpg plan.dot
As an example, the following query generates a list of employees
whose salaries are higher than their respective managers.
SELECT
emp.employee_id AS emp_id,
emp.salary AS emp_salary
FROM
employees emp
WHERE EXISTS (
SELECT
*
FROM
employees mgr
WHERE
emp.manager_id = mgr.employee_id
AND
emp.salary > mgr.salary
);
Here is an abbreviated version of the traditional tabular query
plan.
----------------------------------------
| Id | Operation | Name |
----------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | HASH JOIN SEMI | |
| 2 | TABLE ACCESS FULL| EMPLOYEES |
| 3 | TABLE ACCESS FULL| EMPLOYEES |
----------------------------------------
And here is an abbreviated version of the Graphviz program produced
by this script.
digraph EnhancedPlan {graph[ordering="out"];node[fontname=Arial fontsize=8];
"0" [label="Step 4 (Id 0)\nSELECT STATEMENT", shape=plaintext];
"1" [label="Step 3 (Id 1)\nHASH JOIN SEMI", shape=plaintext];
"2" [label="Step 1 (Id 2)\nTABLE ACCESS FULL EMPLOYEES", shape=plaintext];
"3" [label="Step 2 (Id 3)\nTABLE ACCESS FULL EMPLOYEES", shape=plaintext];
"0"->"1" [dir=back];
"1"->"2" [dir=back];
"1"->"3" [dir=back];
}
*/
--------------------------------------------------------------------------------
-- SQL*Plus settings
SET linesize 1000
SET trimspool on
SET pagesize 0
SET echo off
SET heading off
SET feedback off
SET verify off
SET time off
SET timing off
SET sqlblanklines on
SPOOL plan.dot
--------------------------------------------------------------------------------
-- First retrieve the basic data from V$SQL_PLAN_STATISTICS_ALL.
-- Modify this subquery if you want data from a different source.
WITH plan_table AS
(
SELECT
id,
parent_id,
object_name,
operation,
options,
last_starts,
last_elapsed_time / 1000000 AS last_elapsed_time,
cardinality,
last_output_rows,
last_cr_buffer_gets + last_cu_buffer_gets AS last_buffer_gets,
last_disk_reads
FROM
v$sql_plan_statistics_all
WHERE
sql_id = '&sql_id'
AND child_number = &child_number
),
--------------------------------------------------------------------------------
-- Determine the order in which steps are actually executed
execution_sequence AS
(
SELECT
id,
ROWNUM AS execution_sequence#
FROM
plan_table pt1
START WITH
-- Start with the leaf nodes
NOT EXISTS (
SELECT *
FROM plan_table pt2
WHERE pt2.parent_id = pt1.id
)
CONNECT BY
-- Connect to the parent node
pt1.id = PRIOR pt1.parent_id
-- if the prior node was the oldest sibling
AND PRIOR pt1.id >= ALL(
SELECT pt2.id
FROM plan_table pt2
WHERE pt2.parent_id = pt1.id
)
-- Process the leaf nodes from left to right
ORDER SIBLINGS BY pt1.id
),
--------------------------------------------------------------------------------
-- Calculate deltas for elapsed time, buffer gets, and disk reads
deltas AS
(
SELECT
t1.id,
t1.last_elapsed_time - NVL(SUM(t2.last_elapsed_time),0) AS delta_elapsed_time,
t1.last_buffer_gets - NVL(SUM(t2.last_buffer_gets),0) AS delta_buffer_gets,
t1.last_disk_reads - NVL(SUM(t2.last_disk_reads),0) AS delta_disk_reads
FROM
plan_table t1
LEFT OUTER JOIN plan_table t2
ON t1.id = t2.parent_id
GROUP BY
t1.id,
t1.last_elapsed_time,
t1.last_buffer_gets,
t1.last_disk_reads
),
--------------------------------------------------------------------------------
-- Join the results of the previous subqueries
enhanced_plan_table AS
(
SELECT
-- Items from the plan_table subquery
plan_table.id,
plan_table.parent_id,
plan_table.object_name,
plan_table.operation,
plan_table.options,
plan_table.last_starts,
plan_table.last_elapsed_time,
plan_table.cardinality,
plan_table.last_output_rows,
plan_table.last_buffer_gets,
plan_table.last_disk_reads,
-- Items from the execution_sequence subquery
execution_sequence.execution_sequence#,
-- Items from the deltas subquery
deltas.delta_elapsed_time,
deltas.delta_buffer_gets,
deltas.delta_disk_reads,
-- Computed percentages
CASE
WHEN (SUM(deltas.delta_elapsed_time) OVER () = 0)
THEN (100)
ELSE (100 * deltas.delta_elapsed_time / SUM(deltas.delta_elapsed_time) OVER ())
END AS delta_percentage_elapsed_time,
CASE
WHEN (SUM(deltas.delta_buffer_gets) OVER () = 0)
THEN (100)
ELSE (100 * deltas.delta_buffer_gets / SUM(deltas.delta_buffer_gets) OVER ())
END AS delta_percentage_buffer_gets,
CASE
WHEN (SUM(deltas.delta_disk_reads) OVER () = 0)
THEN (100)
ELSE (100 * deltas.delta_disk_reads / SUM(deltas.delta_disk_reads) OVER ())
END AS delta_percentage_disk_reads,
CASE
WHEN (SUM(deltas.delta_elapsed_time) OVER () = 0)
THEN (100)
ELSE (100 * plan_table.last_elapsed_time / SUM(deltas.delta_elapsed_time) OVER ())
END AS last_percentage_elapsed_time,
CASE
WHEN (SUM(deltas.delta_buffer_gets) OVER () = 0)
THEN (100)
ELSE (100 * plan_table.last_buffer_gets / SUM(deltas.delta_buffer_gets) OVER ())
END AS last_percentage_buffer_gets,
CASE
WHEN (SUM(deltas.delta_disk_reads) OVER () = 0)
THEN (100)
ELSE (100 * plan_table.last_disk_reads / SUM(deltas.delta_disk_reads) OVER ())
END AS last_percentage_disk_reads
FROM
plan_table,
execution_sequence,
deltas
WHERE
plan_table.id = execution_sequence.id
AND plan_table.id = deltas.id
-- Order the results for cosmetic purposes
ORDER BY plan_table.id
)
--------------------------------------------------------------------------------
-- Begin THE Graphviz program
SELECT
'digraph EnhancedPlan {'
|| 'graph[ordering="out"];'
|| 'node[fontname=Arial fontsize=8];' AS command
FROM DUAL
--------------------------------------------------------------------------------
-- Label the nodes
UNION ALL SELECT
'"' || id || '" [label="'
-- Line 1: Execution Sequence # and Id
|| 'Step ' || execution_sequence#
|| ' (Id ' || id || ')'
|| '\n'
-- Line 2: Operations, options, object name, and starts
|| operation
|| CASE
WHEN (options IS NULL)
THEN ('')
ELSE (' ' || options)
END
|| CASE
WHEN (object_name IS NULL)
THEN ('')
ELSE (' ' || object_name)
END
|| CASE
WHEN (last_starts > 1)
THEN (' (Starts = ' || last_starts || ')')
ELSE ('')
END
|| '\n'
-- Line 3: Delta elapsed time and cumulative elapsed time
|| 'Delta Elapsed = '
|| CASE
WHEN (delta_elapsed_time IS NULL)
THEN ('?')
ELSE (TRIM(TO_CHAR(delta_elapsed_time, '999,999,990.00')) || 's')
END
|| ' ('
|| CASE
WHEN (delta_percentage_elapsed_time IS NULL)
THEN ('?')
ELSE (TRIM(TO_CHAR(delta_percentage_elapsed_time, '990')) || '%')
END
|| ')'
|| ' Cum Elapsed = '
|| CASE
WHEN (last_elapsed_time IS NULL)
THEN ('?')
ELSE (TRIM(TO_CHAR(last_elapsed_time, '999,999,990.00')) || 's')
END
|| ' ('
|| CASE
WHEN (last_percentage_elapsed_time IS NULL)
THEN ('?')
ELSE (TRIM(TO_CHAR(last_percentage_elapsed_time, '990')) || '%')
END
|| ')'
|| '\n'
-- Line 4: Delta buffer gets and cumulative buffer gets
|| 'Delta Buffer Gets = '
|| CASE
WHEN (delta_buffer_gets IS NULL)
THEN ('?')
ELSE (TRIM(TO_CHAR(delta_buffer_gets, '999,999,999,999,990')))
END
|| ' ('
|| CASE
WHEN (delta_percentage_buffer_gets IS NULL)
THEN ('?')
ELSE (TRIM(TO_CHAR(delta_percentage_buffer_gets, '990')) || '%')
END
|| ')'
|| ' Cum Buffer Gets = '
|| CASE
WHEN (last_buffer_gets IS NULL)
THEN ('?')
ELSE (TRIM(TO_CHAR(last_buffer_gets, '999,999,999,999,990')))
END
|| ' ('
|| CASE
WHEN (last_percentage_buffer_gets IS NULL)
THEN ('?')
ELSE (TRIM(TO_CHAR(last_percentage_buffer_gets, '990')) || '%')
END
|| ')'
|| '\n'
-- Line 5: Delta disk reads and cumulative disk reads
|| 'Delta Disk Reads = '
|| CASE
WHEN (delta_disk_reads IS NULL)
THEN ('?')
ELSE (TRIM(TO_CHAR(delta_disk_reads, '999,999,999,999,990')))
END
|| ' ('
|| CASE
WHEN (delta_percentage_disk_reads IS NULL)
THEN ('?')
ELSE (TRIM(TO_CHAR(delta_percentage_disk_reads, '990')) || '%')
END
|| ')'
|| ' Cum Disk Reads = '
|| CASE
WHEN (last_disk_reads IS NULL)
THEN ('?')
ELSE (TRIM(TO_CHAR(last_disk_reads, '999,999,999,999,990')))
END
|| ' ('
|| CASE
WHEN (last_percentage_disk_reads IS NULL)
THEN ('?')
ELSE (TRIM(TO_CHAR(last_percentage_disk_reads, '990')) || '%')
END
|| ')'
|| '\n'
-- Line 6: Estimated rows and actual rows
|| 'Estimated Rows = '
|| CASE
WHEN (cardinality IS NULL)
THEN '?'
ELSE (TRIM(TO_CHAR(cardinality, '999,999,999,999,990')))
END
|| ' Actual Rows = '
|| CASE
WHEN (last_output_rows IS NULL)
THEN '?'
ELSE (TRIM(TO_CHAR(last_output_rows, '999,999,999,999,990')))
END
|| '\n'
|| '", shape=plaintext];' AS command
FROM enhanced_plan_table
--------------------------------------------------------------------------------
-- Connect the nodes
UNION ALL SELECT '"' || parent_id || '"->"' || id || '" [dir=back];' AS command
FROM plan_table
START WITH parent_id = 0
CONNECT BY parent_id = PRIOR id
--------------------------------------------------------------------------------
-- End THE Graphviz program
UNION ALL SELECT '}' AS command
FROM DUAL;
--------------------------------------------------------------------------------
SPOOL off
}}}
http://linux.byexamples.com/archives/304/grep-multiple-lines/
http://www.unix.com/unix-dummies-questions-answers/51767-grep-required-pattern-next-2-3-lines.html
http://www.unix.com/shell-programming-scripting/51395-pattern-matching-file-then-display-10-lines-above-every-time.html
''grep before and after''
{{{
grep -B1 -A2 "DMA" message.txt     <-- print 1 line before and 2 lines after each match
}}}
''grep for a search string, and list the file''
{{{
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$ find . -type f | xargs grep "LOCAL_LISTEN"
./biprd1_ora_29214_2.aud:ACTION :[173] 'ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP>)(PORT=1521))))' SCOPE=MEMORY SID='biprd1' /* db agent *//* {0:6:32} */'
./biprd1_ora_31855_2.aud:ACTION :[174] 'ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP>)(PORT=1521))))' SCOPE=MEMORY SID='biprd1' /* db agent *//* {0:13:13} */'
./biprd1_ora_1656_1.aud:ACTION :[174] 'ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP>)(PORT=1521))))' SCOPE=MEMORY SID='biprd1' /* db agent *//* {0:17:23} */'
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$ find . -exec grep -H "LOCAL_LISTEN" {} \;
./biprd1_ora_29214_2.aud:ACTION :[173] 'ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP>)(PORT=1521))))' SCOPE=MEMORY SID='biprd1' /* db agent *//* {0:6:32} */'
./biprd1_ora_31855_2.aud:ACTION :[174] 'ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP>)(PORT=1521))))' SCOPE=MEMORY SID='biprd1' /* db agent *//* {0:13:13} */'
./biprd1_ora_1656_1.aud:ACTION :[174] 'ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP>)(PORT=1521))))' SCOPE=MEMORY SID='biprd1' /* db agent *//* {0:17:23} */'
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$ grep -l "LOCAL_LISTENER" *aud
biprd1_ora_1656_1.aud
biprd1_ora_29214_2.aud
biprd1_ora_31855_2.aud
oracle@host1:/home/oracle:biprd1
$ grep -l "LOCAL_LISTENER" /u01/app/oracle/admin/biprd/adump/*aud | xargs ls -ltr
-rw-r----- 1 oracle dba 1943 Dec 13 16:21 /u01/app/oracle/admin/biprd/adump/biprd1_ora_29214_2.aud
-rw-r----- 1 oracle dba 1944 Dec 15 16:31 /u01/app/oracle/admin/biprd/adump/biprd1_ora_31855_2.aud
-rw-r----- 1 oracle dba 1942 Dec 15 20:54 /u01/app/oracle/admin/biprd/adump/biprd1_ora_1656_1.aud
oracle@host2:/home/oracle:mtaprd111
$ grep -l "LOCAL_LISTENER" /u01/app/oracle/admin/biprd/adump/*aud
oracle@host2:/home/oracle:mtaprd111
$ ls -1 | wc -l
71
}}}
''grep exclude file list''
http://dbaspot.com/shell/199876-grep-exclude-list.html
''grep between two search terms''
http://www.cyberciti.biz/faq/howto-grep-text-between-two-words-in-unix-linux/
{{{
sed -n "/~~BEGIN-OS-INFORMATION~~/,/~~END-OS-INFORMATION~~/p" awr-hist-565219483-PRODRAC-118749-120198.out | grep -v BEGIN- | grep -v END-
}}}
https://orainternals.files.wordpress.com/2017/01/rac_performance-wait-events.pdf <- GOOD STUFF
http://oracle-help.com/oracle-rac/grouping-oracle-rac-wait-events/
[img(50%,50%)[ https://i.imgur.com/wRiyvuQ.png ]]
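For a quick instance-level view grouped the same way, a simple rollup of the gc events out of v$system_event works (sketch):
{{{
select wait_class, event, total_waits,
       round(time_waited_micro/1e6, 2) secs_waited
from v$system_event
where event like 'gc%'
order by time_waited_micro desc;
}}}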
{{{
-- How to dump raw Active Session History Data into a spreadsheet (Doc ID 1630717.1)
-- gvash_to_csv.sql : modified by Karl Arao
set feedback off pages 0 term off head on und off trimspool on
set arraysize 5000
set termout off
set echo off verify off
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
column INSTNAME format a20
column DELTA_WRITE_IO_BYTES format a24
column DELTA_READ_IO_BYTES format a24
column DELTA_WRITE_IO_REQUESTS format a24
column DELTA_READ_IO_REQUESTS format a24
column DELTA_TIME format a24
column TM_DELTA_DB_TIME format a24
column TM_DELTA_CPU_TIME format a24
column TM_DELTA_TIME format a24
column DBREPLAY_CALL_COUNTER format a24
column DBREPLAY_FILE_ID format a24
column ECID format a66
column PORT format a24
column MACHINE format a66
column CLIENT_ID format a66
column ACTION format a66
column MODULE format a66
column PROGRAM format a66
column SERVICE_HASH format a24
column IS_REPLAYED format a3
column IS_CAPTURED format a3
column REPLAY_OVERHEAD format a3
column CAPTURE_OVERHEAD format a3
column IN_SEQUENCE_LOAD format a3
column IN_CURSOR_CLOSE format a3
column IN_BIND format a3
column IN_JAVA_EXECUTION format a3
column IN_PLSQL_COMPILATION format a3
column IN_PLSQL_RPC format a3
column IN_PLSQL_EXECUTION format a3
column IN_SQL_EXECUTION format a3
column IN_HARD_PARSE format a3
column IN_PARSE format a3
column IN_CONNECTION_MGMT format a3
column TIME_MODEL format a24
column REMOTE_INSTANCE# format a24
column XID format a10
column CONSUMER_GROUP_ID format a24
column TOP_LEVEL_CALL_NAME format a66
column TOP_LEVEL_CALL# format a24
column CURRENT_ROW# format a24
column CURRENT_BLOCK# format a24
column CURRENT_FILE# format a24
column CURRENT_OBJ# format a24
column BLOCKING_HANGCHAIN_INFO format a3
column BLOCKING_INST_ID format a24
column BLOCKING_SESSION_SERIAL# format a24
column BLOCKING_SESSION format a24
column BLOCKING_SESSION_STATUS format a13
column TIME_WAITED format a24
column SESSION_STATE format a9
column WAIT_TIME format a24
column WAIT_CLASS_ID format a24
column WAIT_CLASS format a66
column P3 format a24
column P3TEXT format a66
column P2 format a24
column P2TEXT format a66
column P1 format a24
column P1TEXT format a66
column SEQ# format a24
column EVENT_ID format a24
column EVENT format a66
column PX_FLAGS format a24
column QC_SESSION_SERIAL# format a24
column QC_SESSION_ID format a24
column QC_INSTANCE_ID format a24
column PLSQL_SUBPROGRAM_ID format a24
column PLSQL_OBJECT_ID format a24
column PLSQL_ENTRY_SUBPROGRAM_ID format a24
column PLSQL_ENTRY_OBJECT_ID format a24
column SQL_EXEC_START format a9
column SQL_EXEC_ID format a24
column SQL_PLAN_OPTIONS format a66
column SQL_PLAN_OPERATION format a66
column SQL_PLAN_LINE_ID format a24
column SQL_PLAN_HASH_VALUE format a24
column TOP_LEVEL_SQL_OPCODE format a24
column TOP_LEVEL_SQL_ID format a15
column FORCE_MATCHING_SIGNATURE format a24
column SQL_OPNAME format a66
column SQL_OPCODE format a24
column SQL_CHILD_NUMBER format a24
column IS_SQLID_CURRENT format a3
column SQL_ID format a15
column USER_ID format a24
column FLAGS format a24
column SESSION_TYPE format a12
column SESSION_SERIAL# format a24
column SESSION_ID format a24
column TM format a13
column SAMPLE_TIME format a13
column SAMPLE_ID format a24
column INSTANCE_NUMBER format a24
column DBID format a24
column SNAP_ID format a24
column TEMP_SPACE_ALLOCATED format a24
column PGA_ALLOCATED format a24
column DELTA_INTERCONNECT_IO_BYTES format a24
set pages 2000
set lines 2750
set heading off
set feedback off
set echo off
! echo "INSTNAME, INST_ID , SAMPLE_ID , TM, SAMPLE_TIME , SESSION_ID , SESSION_SERIAL# , SESSION_TYPE , FLAGS , USER_ID , SQL_ID ,"-
"IS_SQLID_CURRENT , SQL_CHILD_NUMBER , SQL_OPCODE , SQL_OPNAME , FORCE_MATCHING_SIGNATURE , TOP_LEVEL_SQL_ID , TOP_LEVEL_SQL_OPCODE , "-
"SQL_PLAN_HASH_VALUE , SQL_PLAN_LINE_ID , SQL_PLAN_OPERATION , SQL_PLAN_OPTIONS , SQL_EXEC_ID , SQL_EXEC_START , PLSQL_ENTRY_OBJECT_ID,"-
"PLSQL_ENTRY_SUBPROGRAM_ID , PLSQL_OBJECT_ID , PLSQL_SUBPROGRAM_ID , QC_INSTANCE_ID , QC_SESSION_ID , QC_SESSION_SERIAL# , PX_FLAGS , EVENT ,"-
" EVENT_ID , SEQ# , P1TEXT , P1 , P2TEXT , P2 , P3TEXT , P3 , WAIT_CLASS , WAIT_CLASS_ID , WAIT_TIME , SESSION_STATE , TIME_WAITED ,"-
"BLOCKING_SESSION_STATUS , BLOCKING_SESSION , BLOCKING_SESSION_SERIAL# , BLOCKING_INST_ID , BLOCKING_HANGCHAIN_INFO , CURRENT_OBJ# , "-
"CURRENT_FILE# , CURRENT_BLOCK# , CURRENT_ROW# , TOP_LEVEL_CALL# , TOP_LEVEL_CALL_NAME , CONSUMER_GROUP_ID , XID , REMOTE_INSTANCE# , TIME_MODEL ,"-
"IN_CONNECTION_MGMT , IN_PARSE , IN_HARD_PARSE , IN_SQL_EXECUTION , IN_PLSQL_EXECUTION , IN_PLSQL_RPC , IN_PLSQL_COMPILATION , IN_JAVA_EXECUTION ,"-
" IN_BIND , IN_CURSOR_CLOSE , IN_SEQUENCE_LOAD , CAPTURE_OVERHEAD , REPLAY_OVERHEAD , IS_CAPTURED , IS_REPLAYED , SERVICE_HASH , PROGRAM , MODULE ,"-
" ACTION , CLIENT_ID , MACHINE , PORT , ECID , DBREPLAY_FILE_ID , DBREPLAY_CALL_COUNTER , TM_DELTA_TIME , TM_DELTA_CPU_TIME , TM_DELTA_DB_TIME , "-
"DELTA_TIME , DELTA_READ_IO_REQUESTS , DELTA_WRITE_IO_REQUESTS , DELTA_READ_IO_BYTES , DELTA_WRITE_IO_BYTES , DELTA_INTERCONNECT_IO_BYTES , "-
"PGA_ALLOCATED , TEMP_SPACE_ALLOCATED " > myash-&_instname..csv
spool myash-&_instname..csv append
select INSTNAME ||','|| INST_ID ||','|| SAMPLE_ID ||','|| TM ||','|| SAMPLE_TIME ||','|| SESSION_ID ||','|| SESSION_SERIAL# ||','|| -
SESSION_TYPE ||','|| FLAGS ||','|| USER_ID ||','|| SQL_ID ||','|| IS_SQLID_CURRENT ||','|| SQL_CHILD_NUMBER ||','|| SQL_OPCODE ||','|| SQL_OPNAME -
||','|| FORCE_MATCHING_SIGNATURE ||','|| TOP_LEVEL_SQL_ID ||','|| TOP_LEVEL_SQL_OPCODE ||','|| SQL_PLAN_HASH_VALUE ||','|| SQL_PLAN_LINE_ID -
||','|| SQL_PLAN_OPERATION ||','|| SQL_PLAN_OPTIONS ||','|| SQL_EXEC_ID ||','|| SQL_EXEC_START ||','|| PLSQL_ENTRY_OBJECT_ID ||','|| -
PLSQL_ENTRY_SUBPROGRAM_ID ||','|| PLSQL_OBJECT_ID ||','|| PLSQL_SUBPROGRAM_ID ||','|| QC_INSTANCE_ID ||','|| QC_SESSION_ID ||','|| QC_SESSION_SERIAL#-
||','|| PX_FLAGS ||','|| EVENT ||','|| EVENT_ID ||','|| SEQ# ||','|| P1TEXT ||','|| P1 ||','|| P2TEXT ||','|| P2 ||','|| P3TEXT ||','|| P3 ||','|| -
WAIT_CLASS ||','|| WAIT_CLASS_ID ||','|| WAIT_TIME ||','|| SESSION_STATE ||','|| TIME_WAITED ||','|| BLOCKING_SESSION_STATUS ||','|| BLOCKING_SESSION-
||','|| BLOCKING_SESSION_SERIAL# ||','|| BLOCKING_INST_ID ||','|| BLOCKING_HANGCHAIN_INFO ||','|| CURRENT_OBJ# ||','|| CURRENT_FILE# ||','|| -
CURRENT_BLOCK# ||','|| CURRENT_ROW# ||','|| TOP_LEVEL_CALL# ||','|| TOP_LEVEL_CALL_NAME ||','|| CONSUMER_GROUP_ID ||','|| XID ||','|| -
REMOTE_INSTANCE# ||','|| TIME_MODEL ||','|| IN_CONNECTION_MGMT ||','|| IN_PARSE ||','|| IN_HARD_PARSE ||','|| IN_SQL_EXECUTION ||','|| -
IN_PLSQL_EXECUTION ||','|| IN_PLSQL_RPC ||','|| IN_PLSQL_COMPILATION ||','|| IN_JAVA_EXECUTION ||','|| IN_BIND ||','|| IN_CURSOR_CLOSE ||','|| -
IN_SEQUENCE_LOAD ||','|| CAPTURE_OVERHEAD ||','|| REPLAY_OVERHEAD ||','|| IS_CAPTURED ||','|| IS_REPLAYED ||','|| SERVICE_HASH ||','|| -
PROGRAM ||','|| MODULE ||','|| ACTION ||','|| CLIENT_ID ||','|| MACHINE ||','|| PORT ||','|| ECID ||','|| DBREPLAY_FILE_ID ||','|| -
DBREPLAY_CALL_COUNTER ||','|| TM_DELTA_TIME ||','|| TM_DELTA_CPU_TIME ||','|| TM_DELTA_DB_TIME ||','|| DELTA_TIME ||','|| -
DELTA_READ_IO_REQUESTS ||','|| DELTA_WRITE_IO_REQUESTS ||','|| DELTA_READ_IO_BYTES ||','|| DELTA_WRITE_IO_BYTES ||','|| -
DELTA_INTERCONNECT_IO_BYTES ||','|| PGA_ALLOCATED ||','|| TEMP_SPACE_ALLOCATED
From
(select trim('&_instname') INSTNAME, TO_CHAR(SAMPLE_TIME,'MM/DD/YY HH24:MI:SS') TM, a.*
from gv$active_session_history a)
Where SAMPLE_TIME > (select min(SAMPLE_TIME) from gv$active_session_history)
Order by SAMPLE_TIME, session_id asc;
spool off;
}}}
! run on all instances
sh run_gash.sh
{{{
$ cat run_gash.sh
for INST in $(ps axo cmd | grep ora_pmo[n] | sed 's/^ora_pmon_//' | grep -v 'sed '); do
if [ $INST = "$( cat /etc/oratab | grep -v ^# | grep -v ^$ | awk -F: '{ print $1 }' | grep $INST )" ]; then
echo "$INST: instance name = db_unique_name (single instance database)"
export ORACLE_SID=$INST; export ORAENV_ASK=NO; . oraenv
else
# remove last char (instance nr) and look for name again
LAST_REMOVED=$(echo "${INST:0:$(echo ${#INST}-1 | bc)}")
if [ $LAST_REMOVED = "$( cat /etc/oratab | grep -v ^# | grep -v ^$ | awk -F: '{ print $1 }' | grep $LAST_REMOVED )" ]; then
echo "$INST: instance name with last char removed = db_unique_name (RAC: instance number added)"
export ORACLE_SID=$LAST_REMOVED; export ORAENV_ASK=NO; . oraenv; export ORACLE_SID=$INST
elif [[ "$(echo $INST | sed 's/.*\(_[12]\)/\1/')" =~ "_[12]" ]]; then
# remove last two chars (rac one node addition) and look for name again
LAST_TWO_REMOVED=$(echo "${INST:0:$(echo ${#INST}-2 | bc)}")
if [ $LAST_TWO_REMOVED = "$( cat /etc/oratab | grep -v ^# | grep -v ^$ | awk -F: '{ print $1 }' | grep $LAST_TWO_REMOVED )" ]; then
echo "$INST: instance name with either _1 or _2 removed = db_unique_name (RAC one node)"
export ORACLE_SID=$LAST_TWO_REMOVED; export ORAENV_ASK=NO; . oraenv; export ORACLE_SID=$INST
fi
else
echo "couldn't find instance $INST in oratab"
continue
fi
fi
sqlplus -s /nolog <<EOF
connect / as sysdba
@gash_to_csv.sql
EOF
done
export DATE=$(date +%Y%m%d%H%M%S%N)
tar -cjvpf myash_$DATE.tar.bz2 myash*.csv
rm myash*csv
}}}
! extracting just the relevant pieces for graphing in tableau
{{{
! echo "INSTNAME, INST_ID , SAMPLE_ID , TM, SAMPLE_TIME , SESSION_ID , SESSION_SERIAL# , 8-SESSION_TYPE , FLAGS , 10-USER_ID , 11-SQL_ID ,"-
"12-IS_SQLID_CURRENT , SQL_CHILD_NUMBER , SQL_OPCODE , SQL_OPNAME , 16-FORCE_MATCHING_SIGNATURE , TOP_LEVEL_SQL_ID , TOP_LEVEL_SQL_OPCODE , "-
"19-SQL_PLAN_H SH_VALUE , SQL_PLAN_LINE_ID , SQL_PLAN_OPERATION , SQL_PLAN_OPTIONS , SQL_EXEC_ID , SQL_EXEC_START , 25-PLSQL_ENTRY_OBJECT_ID,"-
"26-PLSQL_ENTRY_SUBPROGRAM_ID , 27-PLSQL_OBJECT_ID , 28-PLSQL_SUBPROGRAM_ID , QC_INSTANCE_ID , QC_SESSION_ID , 31-QC_SESSION_SERIAL# , PX_FLAGS , 33-EVENT ,"-
" 34-EVENT_ID , 35-SEQ# , P1TEXT , P1 , P2TEXT , P2 , P3TEXT , 41-P3 , 42-WAIT_CLASS , WAIT_CLASS_ID , 44-WAIT_TIME , 45-SESSION_STATE , 46-TIME_WAITED ,"-
"47-BLOCKING_SESSION_STATUS , BLOCKING_SESSION , BLOCKING_SESSION_SERIAL# , BLOCKING_INST_ID , BLOCKING_HANGCHAIN_INFO , 52-CURRENT_OBJ# , "-
"53-CURRENT_FILE# , CURRENT_BLOCK# , 55-CURRENT_ROW# , TOP_LEVEL_CALL# , 57-TOP_LEVEL_CALL_NAME , 58-CONSUMER_GROUP_ID , XID , 60-REMOTE_INSTANCE# , 61-TIME_MODEL ,"-
"62-IN_CONNECTION_MGMT , IN_PARSE , IN_HARD_PARSE , IN_SQL_EXECUTION , IN_PLSQL_EXECUTION , IN_PLSQL_RPC , IN_PLSQL_COMPILATION , IN_JAVA_EXECUTION ,"-
" IN_BIND , IN_CURSOR_CLOSE , IN_SEQUENCE_LOAD , CAPTURE_OVERHEAD , REPLAY_OVERHEAD , IS_CAPTURED , IS_REPLAYED , 77-SERVICE_HASH , PROGRAM , 79-MODULE ,"-
" 80-ACTION , CLIENT_ID , 82-MACHINE , PORT , ECID , DBREPLAY_FILE_ID , DBREPLAY_CALL_COUNTER , TM_DELTA_TIME , TM_DELTA_CPU_TIME , TM_DELTA_DB_TIME , "-
"DELTA_TIME , DELTA_READ_IO_REQUESTS , DELTA_WRITE_IO_REQUESTS , DELTA_READ_IO_BYTES , DELTA_WRITE_IO_BYTES , DELTA_INTERCONNECT_IO_BYTES , "-
"PGA_ALLOCATED , TEMP_SPACE_ALLOCATED
-- what i had in tableau
TM
Inst Id
Instname
Wait Class
Session State
Sql Opname
Sql Id
Program
Machine
User Id
Current Obj#
-- columns, the 2.2GB cut down to 1.2GB
cut -d , -f1,2,3,4,6,7,8,10,11,13,15,16,19,25,26,27,28,29,30,31,32,33,35,37,39,41,42,44,45,46,47,48,49,50,51,52,53,54,55,57,58,60,61,62,77,78,79,80,81,82
-- cut down to 578MB
cut -d , -f1,2,3,4,42,45,15,11,78,82,10,52,33
}}}
<<showtoc>>
! introduction
<<<
Interpretable AI: Not just for regulators - Patrick Hall (H2O.ai | George Washington University), Sri Satish (H2O.ai) https://learning.oreilly.com/videos/strata-data-conference/9781491976326/9781491976326-video316338
<<<
! installation
https://www.h2o.ai/blog/hackathon-finalist-with-h2o-guest-post/
http://anotherhunch.blogspot.com/2015/09/how-i-used-h2o-to-crunch-through-banks.html
Installing Driverless AI on RHEL with Nvidia GPU https://www.youtube.com/watch?v=xXzKdua7js8&list=PLNtMya54qvOE9fs3ylzaR_McnoUsuMV7X
Launching an experiment in Driverless AI https://www.youtube.com/watch?v=bw6CbZu0dKk&list=PLNtMya54qvOE1HZFCWx3zmYVLw8o3qD06
-hadoop CPU monitoring
On Modelling and Prediction of Total CPU Usage for Applications in MapReduce Enviornments http://arxiv.org/pdf/1203.4054.pdf
http://stackoverflow.com/questions/9365812/how-to-find-the-cpu-time-taken-by-a-map-reduce-task-in-hadoop
monitoring the shit out of your hadoop http://goo.gl/Y5mjTU
! cloudera BDR
https://www.cloudera.com/documentation/enterprise/5-9-x/topics/cm_bdr_tutorials.html
https://www.cloudera.com/documentation/enterprise/5-9-x/topics/cm_bdr_howto_hive.html#howto_backup_restore_hive_db
https://www.cloudera.com/documentation/enterprise/5-9-x/topics/cm_bdr_howto_hdfs.html
https://github.com/prataprajr/hadoop-bdr-s3
! hortonworks
https://community.hortonworks.com/questions/142103/what-is-the-best-backup-and-recovery-solution-for.html
https://community.hortonworks.com/articles/43525/disaster-recovery-and-backup-best-practices-in-a-t.html
<<<
HDP options I know of are Falcon for 2.x and below, and Data Plane (DLM) for 3.x and above, or, WANdisco for any version
<<<
[img[ https://lh3.googleusercontent.com/-QsRM3czDMkg/Vhfg7pTmFrI/AAAAAAAACzU/4BEa8SfK_KU/s800-Ic42/IMG_8542.JPG ]]
https://twitter.com/karlarao/status/652370648820486144
@gwenshap I enjoyed the @hadooparchbook clickstream analytics end-to-end case study,video -> http://bit.ly/1Opqd4v
https://hortonworks.com/blog/easy-steps-to-create-hadoop-cluster-on-microsoft-azure/
-hadoop benchmark
https://github.com/hibench/HiBench-2.1
http://www.vmware.com/files/pdf/VMW-Hadoop-Performance-vSphere5.pdf
http://odbms.org/download/VirtualizingApacheHadoop.pdf
''hortonworks'' https://community.hortonworks.com/index.html , https://community.hortonworks.com/users/13050/karlarao.html
''cloudera'' http://community.cloudera.com/ , http://community.cloudera.com/t5/user/viewprofilepage/user-id/29094
https://analyticstraining.com/a-two-minute-guide-to-the-top-three-hadoop-distributions/
http://mabblerabble.blogspot.com/2016/09/mapr-vs-cloudera-vs-hortonworks.html
{{{
./hadoop-daemon.sh start datanode
From Mahesh to Everyone: (11:43 AM)
export HADOOP_OPTS="-Xmx4000m"
From Me to Everyone: (11:48 AM)
https://chawlasumit.wordpress.com/2016/03/14/hadoop-gc-overhead-limit-exceeded-error/
export HADOOP_CLIENT_OPTS="-XX:-UseGCOverheadLimit -Xmx4096m"
}}}
-hadoop hardware compression
http://www.exar.com/common/content/document.ashx?id=21230
-hadoop hardware sizing
https://blog.cloudera.com/blog/2013/08/how-to-select-the-right-hardware-for-your-new-hadoop-cluster/
http://hortonworks.com/blog/best-practices-for-selecting-apache-hadoop-hardware/
http://my.safaribooksonline.com/book/databases/hadoop/9781449327279/4dot-planning-a-hadoop-cluster/id2760689
https://twiki.grid.iu.edu/bin/view/Storage/HadoopUnderstanding
http://grokbase.com/t/cloudera/cdh-user/1244pee7rq/hardware-configuration-for-hadoop-cluster
https://medium.com/@acmurthy/hadoop-is-dead-long-live-hadoop-f22069b264ac
-hadoop perf tuning
http://sites.amd.com/us/Documents/HadoopPerformanceTuningGuide.pdf
http://hadoop.intel.com/pdfs/IntelDistributionTuningGuide.pdf
http://cloud-dba-journey.blogspot.com/2013/08/hadoop-reference-architectures.html
.
Interpreting HANGANALYZE trace files to diagnose hanging and performance problems for 9i and 10g. [ID 215858.1]
Steps to generate HANGANALYZE trace files (9i and below) [ID 175006.1]
{{{
connect / as sysdba
oradebug setmypid
oradebug unlimit
oradebug hanganalyze 3
oradebug dump systemstate 267
... wait 1-2 minutes
oradebug hanganalyze 3
oradebug dump systemstate 267
oradebug tracefile_name
core file
-> generates trace file (with a call stack)
* Stack trace represents the order in which calls were made by the offending process before it crashed
* Use a stack trace in conjunction with source code to understand the problem
* The rule of thumb is to ignore the top two to three functions on the stack (these are the kse
error handling routines that are called when an exception is encountered). The prefix kse
stands for kernel service error.
hang situations
-> multiple state dumps
* process state dump <-- view state objects
* system state dump <-- use for looping scenarios, or view state objects
* error stacks
-> hanganalyze event
* The views or the state dumps reveal the information leading to the hang
* Any hanganalyze level above 4 may cripple your OLTP system:
– Significant CPU overhead
– Huge trace files
* state objects are structures in SGA associated with various database entities such as:
- processes, sessions, latches & enqueues, buffer handles
* process state objects
- process -> session -> transaction
Reading System State Dump
A system state dump has three sections:
- Normal trace file header
- System global information
- Process information
* The heading in the file for this section is “System State.”
* The first process state objects listed under this heading are the Oracle background processes.
User processes (client) generally follow as do the other types of state objects (session, call,
enqueue, etc.).
}}}
! testcase
{{{
time asmcmd cp +RECO/hcm2tst/backupset/2012_12_08/nnndf0_tag20121207t233020_0.789.801446405 /dbfs/work/apac/test1
time asmcmd cp +RECO/hcm2tst/backupset/2012_12_07/nnndf0_tag20121207t233020_0.21135.801444645 /dbfs2/apac/test1
strace -fq -o strace_slow.out time asmcmd cp +RECO/hcm2tst/backupset/2013_01_08/ncnnf0_INCREMETAL1_0.32361.804204351 /dbfs/work/apac/test1
strace -fq -o strace_slow.out time asmcmd cp +RECO/hcm2tst/backupset/2013_01_08/ncnnf0_INCREMETAL1_0.32361.804204351 /dbfs2/apac/test1
Type Redund Striped Time Sys Name
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y annnf0_INCREMETAL1_0.22826.804204245
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y annnf0_INCREMETAL1_0.35539.804204245
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y annnf0_INCREMETAL1_0.7754.804204245
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y nnndn1_INCREMETAL1_0.10769.804204069
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y nnndn1_INCREMETAL1_0.14494.804204151
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y nnndn1_INCREMETAL1_0.20038.804204087
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y nnndn1_INCREMETAL1_0.21167.804204077
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y nnndn1_INCREMETAL1_0.28252.804204161
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y nnndn1_INCREMETAL1_0.31254.804204063
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y nnndn1_INCREMETAL1_0.33080.804204095
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y nnndn1_INCREMETAL1_0.5581.804204135
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y nnndn1_INCREMETAL1_0.6186.804204109
BACKUPSET MIRROR COARSE JAN 08 22:00:00 Y nnsnf0_INCREMETAL1_0.32676.804204355
BACKUPSET MIRROR COARSE JAN 11 15:00:00 Y ncnnf0_INCREMETAL1_0.32361.804204351
}}}
! include the ashdump
http://blog.dbi-services.com/oracle-is-hanging-dont-forget-hanganalyze-and-systemstate/
{{{
sqlplus / as sysdba
oradebug setmypid
oradebug unlimit
oradebug hanganalyze 3
oradebug dump ashdumpseconds 30
oradebug dump systemstate 266
oradebug tracefile_name
}}}
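On RAC the same capture is usually taken cluster-wide with the -g option (per the MOS hanganalyze notes; the dumps land in the DIAG process trace file on each instance):
{{{
sqlplus / as sysdba
oradebug setmypid
oradebug unlimit
oradebug -g all hanganalyze 3
oradebug -g all dump systemstate 266
}}}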
''hbase conference archive'' http://hbasecon.com/archive.html
http://www.meetup.com/Google-Cloud-Platform-NYC-Meetup/events/223941239/
Google Cloud Bigtable's Next Big Step - Cory O'Connor https://www.youtube.com/watch?v=YlBlcGe9fyo
Carter Page - HBaseCon 2015 - theCUBE https://www.youtube.com/watch?v=IDrPrjWZHJk
http://fortune.com/2015/05/06/google-launches-bigtable-database/
http://www.cloudwards.net/news/sungard-and-google-team-up-for-consolidated-audit-trail-contract-8123/
fintech startup https://kensho.com/#/press
compress for oltp https://asktom.oracle.com/pls/apex/asktom.search?tag=compress-for-oltp
https://oracle-base.com/articles/12c/online-move-table-12cr2
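A minimal syntax sketch on a made-up table t1 (12c renamed COMPRESS FOR OLTP to ROW STORE COMPRESS ADVANCED; the ONLINE move is the 12.2 feature in the oracle-base link above):
{{{
-- 11g syntax
alter table t1 move compress for oltp;

-- 12c onwards, same feature renamed, moved online
alter table t1 move row store compress advanced online;
}}}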
Example of Table Compression and Partitioning
https://docs.oracle.com/database/121/VLDBG/GUID-9F5B466B-A74F-4D0E-8A9C-EB7138DEBEFF.htm#VLDBG1268
{{{
-- syntax: alter table ... move partition ... compress for ...
ALTER TABLE sales
MOVE PARTITION sales_q1_1998 TABLESPACE ts_arch_q1_1998
COMPRESS FOR ARCHIVE LOW;
}}}
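To confirm what each partition ended up with after the move (SALES as in the example above):
{{{
select partition_name, compression, compress_for
from dba_tab_partitions
where table_name = 'SALES'
order by partition_position;
}}}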
Compressing Individual Partitions in the warehouse https://blogs.oracle.com/datawarehousing/compressing-individual-partitions-in-the-warehouse
https://docs.oracle.com/en/database/oracle/oracle-database/19/vldbg/partition-table-compression.html#GUID-F26AFD78-DC1D-4E6B-9B37-375C59FD1787
https://connor-mcdonald.com/2013/08/26/compressed-partitions-are-not-compressed-tables/ <-- good stuff
https://weidongzhou.wordpress.com/2013/10/25/compression-methods-on-exadata-compression-on-partition-tables-part-4-of-6/
https://oracle-base.com/articles/9i/compressed-tables-9i#partitioned-tables
https://dba.stackexchange.com/questions/96318/compressing-partition-what-about-index
All EHCC segments seem to be recorded in sys.compression$, and the TINSIZE and TOUTSIZE columns look useful.
The columns are only defined in $ORACLE_HOME/rdbms/admin/dsqlddl.bsq
{{{
20:00:55 SYS@oltp1> select TS#, FILE#, BLOCK#, OBJ#, DATAOBJ#, ULEVEL, SUBLEVEL, ILEVEL,
FLAGS,BESTSORTCOL, TINSIZE, CTINSIZE, TOUTSIZE, CMPSIZE, UNCMPSIZE,MTIME from compression$
20:00:56 2 /
TS# FILE# BLOCK# OBJ# DATAOBJ# ULEVEL SUBLEVEL ILEVEL FLAGS BESTSORTCOL TINSIZE CTINSIZE TOUTSIZE CMPSIZE UNCMPSIZE MTIME
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------- ---------- ---------- ---------- ---------- ---------- -----------------
10 1024 802 120576 120576 1 9 4901 -1 100939 32000 20160419 15:59:03
10 1024 786 120574 120574 2 9 8007 3 252540 32000 20160419 15:59:44
10 1024 810 120577 120577 3 9 8007 3 391312 42974 20160419 16:00:39
10 1024 818 120578 120578 4 9 8071 3 8371208 256000 20160419 16:01:55
10 1024 1602 120580 120580 1 9 4901 -1 101503 32000 20160419 16:54:14
10 1024 1746 120581 120581 2 9 8007 3 252426 32000 20160419 16:54:49
10 1024 1882 120582 120582 3 9 8007 3 391769 42999 20160419 16:55:35
10 1024 2018 120583 120583 4 9 8071 3 8362663 256000 20160419 16:56:53
8 rows selected.
20:01:14 SYS@oltp1> select object_name from dba_objects where data_object_id in (select dataobj# from compression$);
OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
HCCTABLE_CTAS_ARCHIVE_LOW
HCCTABLE_CTAS_ARCHIVE_HIGH
HCCTABLE_QUERY_HIGH
HCCTABLE_QUERY_LOW
HCCTABLE_ARCHIVE_LOW
HCCTABLE_ARCHIVE_HIGH
HCCTABLE_CTAS_QUERY_LOW
HCCTABLE_CTAS_QUERY_HIGH
8 rows selected.
}}}
<<showtoc>>
! manual cloud
https://www.youtube.com/results?search_query=install+hortonworks+digitalocean
Hadoop Series - Part1 Multinode Hadoop Cluster Installation using Ambari https://www.youtube.com/watch?v=wHaVBoLwzwU
Hadoop Series - Part2 How to Add Node To Existing Hadoop Cluster https://www.youtube.com/watch?v=jiCI4yIkUlc
Hadoop Series - Part3 Adding a node to existing hadoop cluster https://www.youtube.com/watch?v=kpZ8tzqzJmg
Hadoop Series - Part4 HDP upgrade https://www.youtube.com/watch?v=GRgZurCvmMY
Hadoop Multinode Cluster on AWS EC2 Ubuntu Instance https://www.youtube.com/watch?v=l4xAmqzGJRU
2020
Hadoop multi node cluster setup in Google cloud platform - Part 1 to 5 https://www.youtube.com/watch?v=OxEtUAmb2SQ&list=PLYEGL9-7r3BPfwlMFt1lARZrjmqu0OGpF&index=1
! ansible, terraform
https://github.com/search?q=hadoop+aws+ansible&type=Repositories <- examples here
AWS EC2 Provisioning using Ansible https://www.youtube.com/watch?v=avbRuHhomso
! on networking
set up a host-only adapter as a 2nd adapter on the VM, then ssh to it
! to access
{{{
-- from within the VM
docker ps -a
ssh -p 2222 root@localhost
root/xxx1txx$1
-- login as raj_ops
hive
-- then from ambari
http://localhost:8888
http://127.0.0.1:8080/#/main/dashboard/metrics
admin/hadoop
}}}
! hive directory
to get the directory
{{{
find / | grep -i hive-site.xml
less /etc/hive2/2.5.0.0-1245/0/hive-site.xml
less /etc/hive/2.5.0.0-1245/0/hive-site.xml
}}}
another way
{{{
describe formatted dbname.tablename
describe formatted dbname.tablename partition (name=value)
}}}
{{{
[root@sandbox ~]# hadoop fs -ls /apps/hive/warehouse/
Found 4 items
drwxrwxrwx - hive hdfs 0 2016-10-25 08:10 /apps/hive/warehouse/foodmart.db
drwxrwxrwx - hive hdfs 0 2016-10-25 08:11 /apps/hive/warehouse/sample_07
drwxrwxrwx - hive hdfs 0 2016-10-25 08:11 /apps/hive/warehouse/sample_08
drwxrwxrwx - hive hdfs 0 2016-10-25 08:02 /apps/hive/warehouse/xademo.db
}}}
<<showtoc>>
! 1) download the docker sandbox
https://hortonworks.com/downloads/#sandbox
! 2) Read - Deploying Hortonworks Sandbox on Docker
https://hortonworks.com/tutorial/sandbox-deployment-and-install-guide/section/3/
! 3) Create a new directory hdp26-demo-sandbox and load the docker file
{{{
:hdp26-demo-sandbox kristofferson.a.arao$ ls -l ../
total 24323232
-rw-r--r--@ 1 kristofferson.a.arao DIR\Domain Users 12453491200 Jan 18 19:21 HDP_2.6.3_docker_10_11_2017.tar
drwxr-xr-x 6 kristofferson.a.arao DIR\Domain Users 204 Jan 22 11:12 hdp26-demo-sandbox
cd hdp26-demo-sandbox
:hdp26-demo-sandbox kristofferson.a.arao$ cat 0_docker_load.sh
docker load < ../HDP_2.6.3_docker_10_11_2017.tar
}}}
! 4) Download the script and run
https://raw.githubusercontent.com/hortonworks/data-tutorials/master/tutorials/hdp/sandbox-deployment-and-install-guide/assets/start_sandbox-hdp.sh
I edited the script to mount the current directory
{{{
:hdp26-demo-sandbox kristofferson.a.arao$ cat 3_star_sandbox.sh
#!/bin/bash
echo "Waiting for docker daemon to start up:"
until docker ps 2>&1| grep STATUS>/dev/null; do sleep 1; done; >/dev/null
docker ps -a | grep sandbox-hdp
if [ $? -eq 0 ]; then
docker start sandbox-hdp
else
docker run -v `pwd`:`pwd` -v hadoop:/hadoop --name sandbox-hdp --hostname "sandbox-hdp.hortonworks.com" --privileged -d \
-p 15500:15500 \
-p 15501:15501 \
-p 15502:15502 \
-p 15503:15503 \
-p 15504:15504 \
-p 15505:15505 \
-p 1111:111 \
-p 4242:4242 \
-p 50079:50079 \
-p 6080:6080 \
-p 16000:16000 \
-p 16020:16020 \
-p 10502:10502 \
-p 33553:33553 \
-p 39419:39419 \
-p 15002:15002 \
-p 18080:18080 \
-p 10015:10015 \
-p 10016:10016 \
-p 2049:2049 \
-p 9090:9090 \
-p 3000:3000 \
-p 9000:9000 \
-p 8000:8000 \
-p 8020:8020 \
-p 2181:2181 \
-p 42111:42111 \
-p 10500:10500 \
-p 16030:16030 \
-p 8042:8042 \
-p 8040:8040 \
-p 2100:2100 \
-p 4200:4200 \
-p 4040:4040 \
-p 8032:8032 \
-p 9996:9996 \
-p 9995:9995 \
-p 8080:8080 \
-p 8088:8088 \
-p 8886:8886 \
-p 8889:8889 \
-p 8443:8443 \
-p 8744:8744 \
-p 8888:8888 \
-p 8188:8188 \
-p 8983:8983 \
-p 1000:1000 \
-p 1100:1100 \
-p 11000:11000 \
-p 10001:10001 \
-p 15000:15000 \
-p 10000:10000 \
-p 8993:8993 \
-p 1988:1988 \
-p 5007:5007 \
-p 50070:50070 \
-p 19888:19888 \
-p 16010:16010 \
-p 50111:50111 \
-p 50075:50075 \
-p 50095:50095 \
-p 18081:18081 \
-p 60000:60000 \
-p 8090:8090 \
-p 8091:8091 \
-p 8005:8005 \
-p 8086:8086 \
-p 8082:8082 \
-p 60080:60080 \
-p 8765:8765 \
-p 5011:5011 \
-p 6001:6001 \
-p 6003:6003 \
-p 6008:6008 \
-p 1220:1220 \
-p 21000:21000 \
-p 6188:6188 \
-p 2222:22 \
sandbox-hdp /usr/sbin/sshd -D
fi
docker exec -t sandbox-hdp /bin/sh -c 'echo "127.0.0.1 sandbox.hortonworks.com" >> /etc/hosts'
docker exec -t sandbox-hdp /bin/sh -c 'chown -R mysql:mysql /var/lib/mysql'
docker exec -t sandbox-hdp service mysqld start
docker exec -t sandbox-hdp service postgresql start
docker exec -t sandbox-hdp ambari-server start
docker exec -t sandbox-hdp ambari-agent start
docker exec -t sandbox-hdp /bin/sh -c 'rm -f /usr/hdp/current/oozie-server/libext/falcon-oozie-el-extension-*'
docker exec -t sandbox-hdp /bin/sh -c 'chown -R hdfs:hadoop /hadoop/hdfs'
docker exec -t sandbox-hdp /etc/init.d/shellinaboxd start
echo "Waiting for ambari agent to connect"
docker exec -t sandbox-hdp /bin/sh -c ' until curl --silent -u raj_ops:raj_ops -H "X-Requested-By:ambari" -i -X GET http://localhost:8080/api/v1/clusters/Sandbox/hosts/sandbox-hdp.hortonworks.com/host_components/ZOOKEEPER_SERVER | grep state | grep -v desired | grep INSTALLED; do sleep 5; echo -n .; done;'
echo "Waiting for ambari services to start "
docker exec -t sandbox-hdp /bin/sh -c 'until curl --silent --user raj_ops:raj_ops -X PUT -H "X-Requested-By: ambari" -d "{\"RequestInfo\":{\"context\":\"_PARSE_.START.HDFS\",\"operation_level\":{\"level\":\"SERVICE\",\"cluster_name\":\"Sandbox\",\"service_name\":\"HDFS\"}},\"Body\":{\"ServiceInfo\":{\"state\":\"STARTED\"}}}" http://localhost:8080/api/v1/clusters/Sandbox/services/HDFS | grep -i accept >/dev/null; do echo -n .; sleep 5; done;'
docker exec -t sandbox-hdp /bin/sh -c 'until curl --silent --user raj_ops:raj_ops -X PUT -H "X-Requested-By: ambari" -d "{\"RequestInfo\":{\"context\":\"_PARSE_.START.ALL_SERVICES\",\"operation_level\":{\"level\":\"CLUSTER\",\"cluster_name\":\"Sandbox\"}},\"Body\":{\"ServiceInfo\":{\"state\":\"STARTED\"}}}" http://localhost:8080/api/v1/clusters/Sandbox/services | grep -i accept > /dev/null; do sleep 5; echo -n .; done; '
docker exec -t sandbox-hdp /bin/sh -c 'until /usr/bin/curl --silent --user raj_ops:raj_ops -H "X-Requested-By: ambari" "http://localhost:8080/api/v1/clusters/Sandbox/requests?to=end&page_size=10&fields=Requests" | tail -n 27 | grep COMPLETED | grep COMPLETED > /dev/null; do echo -n .; sleep 1; done;'
docker exec -t sandbox-hdp su - hue -c '/bin/bash /usr/lib/tutorials/tutorials_app/run/run.sh &>/dev/null'
docker exec -t sandbox-hdp su - hue -c '/bin/bash /usr/lib/hue/tools/start_scripts/update-tutorials.sh &>/dev/null'
docker exec -t sandbox-hdp touch /usr/hdp/current/oozie-server/oozie-server/work/Catalina/localhost/oozie/SESSIONS.ser
docker exec -t sandbox-hdp chown oozie:hadoop /usr/hdp/current/oozie-server/oozie-server/work/Catalina/localhost/oozie/SESSIONS.ser
docker exec -t sandbox-hdp /etc/init.d/tutorials start
docker exec -t sandbox-hdp /etc/init.d/splash
echo ""
echo "Started Hortonworks HDP container"
}}}
! 5) Change root and ambari admin password
{{{
:hdp26-demo-sandbox kristofferson.a.arao$ ssh -p 2222 root@localhost
root@localhost's password:
You are required to change your password immediately (root enforced)
Changing password for root.
(current) UNIX password:
New password:
Retype new password:
[root@sandbox-hdp ~]#
[root@sandbox-hdp ~]#
[root@sandbox-hdp ~]#
[root@sandbox-hdp ~]#
[root@sandbox-hdp ~]# ambari-admin-password-reset
Please set the password for admin:
Please retype the password for admin:
The admin password has been set.
Restarting ambari-server to make the password change effective...
Using python /usr/bin/python
Restarting ambari-server
Waiting for server stop...
Ambari Server stopped
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start........................................
Server started listening on 8080
DB configs consistency check: no errors and warnings were found.
}}}
! 6) Check status of container, login, open the ambari console
{{{
docker ps -a
ssh -p 2222 root@localhost
root/welcome1
-- LOGIN
http://localhost:8888
http://127.0.0.1:8080/#/main/dashboard/metrics
-- FOLLOW HOWTO
https://hortonworks.com/tutorial/learning-the-ropes-of-the-hortonworks-sandbox/#admin-password-reset
}}}
! 7) other commands
{{{
docker start <container id>
docker stop <container id>
-- remove container
docker rm <container id>
-- remove container image
docker rmi <container name>
}}}
! AFTER INSTALLATION - HOW TO RUN
* two scripts are created after installation; you DON'T have to run 1_docker_load.sh unless you want to refresh your image.
{{{
cd /Users/kristofferson.a.arao/docker/hdp26-demo-sandbox
ls
1_docker_load.sh 2_start_sandbox.sh
}}}
!! 1_docker_load.sh
{{{
docker load < ../HDP_2.6.3_docker_10_11_2017.tar
}}}
!! 2_start_sandbox.sh
(identical, line for line, to the edited start_sandbox script listed in step 4 above)
hadoop admin: part 62 finalizing the addition of data node and edge node on HDP 2.5 https://www.youtube.com/watch?v=xMWdO0sjZnA
hadoop admin: part 61 discussion of edge node and make the nodes ready for adding in HDP 2.5 https://www.youtube.com/watch?v=yGlRuWbt9xE&t=7s
https://community.hortonworks.com/questions/39568/how-to-create-edge-node-for-kerberized-cluster.html
https://dwbi.org/etl/bigdata/187-set-up-client-node-gateway-node-in-hadoop-cluster
https://www.dummies.com/programming/big-data/hadoop/edge-nodes-in-hadoop-clusters/
https://community.hortonworks.com/questions/63949/do-we-need-to-install-hadoop-on-edge-node.html
http://www.cyberciti.biz/faq/linux-getting-scsi-ide-harddisk-information/
hdparm -I /dev/sda
hdparm -Tt /dev/sda
Capacity Planning Using Headroom Analysis
regions.cmg.org/regions/phcmg/May08PSinha.ppt
http://www.perfcap.com/
! headroom
<<<
* The reason you do this is that in order to have a better chance at navigating your way out of the woods, you need to know the point from which you are starting.
* headroom meaning - the amount of free capacity that exists within your system before you start having problems such as a degradation of performance or an outage.
* growth paths - If you do not plot out where you are in terms of capacity usage and determine what your growth path looks like, you are likely to be blindsided by a surge in capacity from any number of sources.
* variables - Not taking into account different types of growth, existing headroom capacity, and optimizations, there is no way your projections could be accurate other than by pure luck.
* variables2 - budget, hiring plan, new features on apps, prioritization of scalability projects (backed by cost benefit analysis)
* data driven - Using headroom data, you will start making much more data driven decisions and become much better at planning and predicting.
* determine growth - 1) natural and man made 2) seasonality effect
* headroom calculation - This equation states that the headroom of a particular component of your system is equal to the ideal usage percentage of the maximum capacity minus the current usage minus the sum over a time period (here it is 12 months) of the growth rate minus the optimization. We will cover the ideal usage percentage in the next section of this chapter; for now, let’s use 50% as the number.
* ideal usage percentage - the amount of capacity for a particular component that should be planned for usage. 50% starting point up to 75%
* thrashing or excessive swapping or queueing
* SD method - planned maximum = max capacity - (3 x SD). If we take three standard deviations of actual usage (3 x SD = 4.48 in the book's example) and subtract that from the maximum load capacity we have established for this server class, we get the amount of load capacity that we can plan to use up to but not exceed.
* planned maximum - We stipulated that in general we prefer to use a simple 50% as the amount of maximum capacity that should be planned on using. The reason is that this accounts for variability or mistakes in determining the maximum capacity as well as errors in the growth projections. We conceded that we could be convinced to increase this percentage if the administrators or engineers could make sound and reasonable arguments for why the system is very well understood and not very variable. An alternative method of determining this maximum usage capacity is to subtract three standard deviations of actual usage from the believed maximum and use that number as the planning maximum.
Without a sound and factually based argument for deviating, we recommend not planning on using more than 50% of the maximum capacity on any one component.
<<<
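A minimal sketch of the headroom equation above, using hypothetical numbers (1000 IOPS maximum capacity, 50% ideal usage, 300 IOPS current usage, 20 IOPS/month growth, 5 IOPS/month recovered by optimizations):
{{{
select (0.50 * 1000)            -- ideal usage % of maximum capacity
       - 300                    -- current usage
       - (12 * (20 - 5))        -- 12 months of (growth - optimization)
       as headroom_iops
from dual;
-- => 500 - 300 - 180 = 20 IOPS of headroom left for the year
}}}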
! capacity management vs capacity planning
http://en.wikipedia.org/wiki/Performance_Engineering
http://en.wikipedia.org/wiki/Capacity_management
http://en.wikipedia.org/wiki/Capacity_planning
http://blogs.technet.com/b/mike_wise/archive/2010/04/14/capacity-management-vs-capacity-planning.aspx <- sharepoint guy, ranting on capacity planning presales work
http://blogs.technet.com/b/mike_wise/archive/2010/04/15/the-abyss.aspx
http://blogs.technet.com/b/mike_wise/archive/2010/04/19/measurable-concepts.aspx
http://www.slideshare.net/AnthonyDehnashi/capacity-planning-and-modeling
! CAGR - compounded annual growth rate
http://blog.brickpicker.com/introducing-cmgr-compound-monthly-growth-rate/
http://www.investinganswers.com/calculators/return/compound-annual-growth-rate-cagr-calculator-1262 <- cagr calculator
https://www.youtube.com/results?search_query=Compounded+annual+growth+rate <- awesome youtube videos
http://community.tableausoftware.com/thread/127139
http://www.wikinvest.com/wiki/Compounded_annual_growth_rate_-_CAGR
http://office.microsoft.com/en-us/excel-help/calculate-a-compound-annual-growth-rate-cagr-HP001122506.aspx
http://articles.economictimes.indiatimes.com/2012-04-09/news/31313179_1_returns-fund-navs How to calculate returns from a mutual fund
http://www.wallst-training.com/about/resources.html
https://www.youtube.com/user/wstss/videos?live_view=500&flow=list&sort=dd&view=0
https://www.youtube.com/watch?v=MXuOixWTjsQ
https://www.khanacademy.org/math/algebra2/exponential_and_logarithmic_func/continuous_compounding/v/introduction-to-compound-interest-and-e
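A minimal sketch of the CAGR formula, CAGR = (end/start)^(1/years) - 1, with hypothetical numbers (storage grew from 10TB to 25TB over 3 years):
{{{
select power(25/10, 1/3) - 1 as cagr from dual;
-- ~ .3572, i.e. about 35.7% compounded annual growth
}}}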
! Tableau table calculations
http://tcc13.tableauconference.com/sites/default/files/materials/Table%20Calculation%20Fundamentals.pdf <- GOOD STUFF!!! TCC13 Table Calculation Fundamentals 38 page doc
http://www.tableausoftware.com/table-calculations
1. Percent change from a reference date
2. Common baseline (Toy Story)
3. Percent of total sales over time (Multi-pass aggregation)
4. Preserving ranking even while sorting
5. Running total
6. Weighted average
7. Grouping by a calculation
8. Number of incidents over a moving range
9. Moving average over variable periods
10. Difference from average by period
Z scores http://kb.tableausoftware.com/articles/knowledgebase/z-scores
table calculation functions http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/functions_functions_tablecalculation.html
Percent Difference From Calculation http://onlinehelp.tableausoftware.com/v6.1/public/online/en-us/i234022.html
Tableau Year on Year calculation http://reports4u.co.uk/tableau-year-on-year-calculation/
year over year growth http://freakalytics.com/blog/2012/10/18/yoy-growth/
Tip: Running Total Table Calculations http://mkt.tableausoftware.com/files/tips/pdf/tip0612.pdf
http://downloads.tableausoftware.com/quickstart/feature-guides/table_calcs.pdf
http://kb.tableausoftware.com/articles/knowledgebase/running-total-table-calculations
Moving Average http://3danim8.files.wordpress.com/2013/05/using-daily-data-to-calculate-a-seven-day-ma-and-ma-chart.pdf <- Ken Black Moving Average
http://kb.tableausoftware.com/articles/knowledgebase/rolling-calculation
! Models for anomaly detection
http://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1093&context=theses
Holt Winter Based Forecasting
Maximum Entropy based Anomaly Detection
Adaptive Threshold Algorithm
Cumulative Sum Algorithm
Exponentially Weighted Moving Average (see the SQL sketch below)
http://danslimmon.wordpress.com/2014/05/16/wherein-i-rant-about-fourier-analysis/ <- Fourier Analysis
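A minimal sketch of the EWMA model from the list above, s_t = a*x_t + (1-a)*s_(t-1), as an Oracle recursive CTE. The METRIC table (columns T, VAL) and the smoothing factor a = 0.3 are hypothetical:
{{{
with rec (t, val, ewma) as (
  select t, val, val from metric where t = 1         -- seed: s_1 = x_1
  union all
  select m.t, m.val, 0.3 * m.val + 0.7 * r.ewma      -- s_t = a*x_t + (1-a)*s_(t-1)
  from   metric m join rec r on m.t = r.t + 1
)
select t, val, round(ewma, 2) as ewma from rec order by t;
}}}
Points far from the EWMA (for example, beyond 3 standard deviations of the residuals) can then be flagged as anomalies.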
! References on holt winters - INZIGHT!
https://www.stat.auckland.ac.nz/~wild/iNZight/advanced.html
holt winters in python http://adorio-research.org/wordpress/?p=1230
holt winters + monte carlo http://analyticsmadeskeezy.com/2013/04/25/prediction-intervals/
! References on exponential smoothing
http://answers.oreilly.com/topic/2418-data-analysis-how-to-forecast-with-exponential-smoothing/
! headroom in R
https://github.com/karlarao/forecast_examples/tree/master/storage_forecast
https://github.com/adrianco/headroom-plot
http://perfcap.blogspot.com/2008/07/enhanced-headroom-plot-in-r.html
! SQL forecasting
http://noriegaaoracleexpert.blogspot.com/2007/07/business-intelligence-trends.html
· Exponential Smoothing
· Exponential Smoothing with Seasonal Adjustments
· Time Series Methods
· Moving Average
· Moving Average with Seasonal Adjustments
· ARMA (Auto-regressive Moving Average)
· ARIMA (Auto-regressive Integrated Moving Average)
http://www.sqlservercentral.com/articles/T-SQL/69334/
OLAP developers guide http://www.stanford.edu/dept/itss/docs/oracle/10gR2/olap.102/b14349/forecast.htm
FCQUERY http://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_functions_1075.htm#OLADM532
FORECAST http://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_commands_1052.htm#OLADM822
Data Mining Blog http://oracledmt.blogspot.com/2006/10/time-series-revisited.html
http://oracledmt.blogspot.com/2006/05/time-series-forecasting-3-multi-step.html
http://oracledmt.blogspot.com/2006/03/time-series-forecasting-2-single-step.html
http://oracledmt.blogspot.com/2006/01/time-series-forecasting-part-1_23.html
! ''some useful links''
{{{
Nice video interview of Adrian Cockroft about Architecture for the Cloud
http://www.infoq.com/interviews/Adrian-Cockcroft-Netflix
he used to be the cloud architect at Netflix
http://gigaom.com/2014/01/07/netflixs-cloud-architect-adrian-cockcroft-is-leaving-to-join-battery-ventures/
but recently joined Battery ventures to work on cloud stuff
and Battery Ventures apparently is an investor to Delphix
http://www.delphix.com/2012/06/delphix-raises-25-million-in-series-c-funding/
You can find Adrian's blog here http://perfcap.blogspot.com/
and Netflix tech blog here http://techblog.netflix.com/
-----
I can make the provisioning worksheet to behave something like this http://www.ca.com/us/repository/flash-video-items/na/right-size-your-existing-infrastructure.aspx
http://www.ca.com/us/products/detail/ca-capacity-manager.aspx
-----
headroom links
http://flylib.com/books/en/1.396.1.106/1/
http://www.cmgaus.org/cmga_web_root/proceedings/2006/schulz2006-pres.pdf
http://reg094.cct.lsu.edu/pdf//index.php?pdf=/hardware/sg247071.pdf
http://www.dice.com/job/result/10117383/PTE-NH?src=19
http://theartofscalability.com/
http://arxiv.org/pdf/cs/0012022.pdf
http://www.theregister.co.uk/2012/04/30/inside_pinterest_virtual_data_center/
http://www.academia.edu/1771947/Projecting_disk_usage_based_on_historical_trends_in_a_cloud_environment
http://www.datacenterdynamics.com/focus/archive/2013/11/how-facebook-deals-constant-change
http://www.datacenterdynamics.com/focus/archive/2013/07/facebook-building-dcim-own-data-centers
http://www.datacenterdynamics.com/video/facebook-future-web-scale-data-centers
http://iieom.org/ieom2014/pdfs/41.pdf
http://packetpushers.net/how-to-use-historical-data-to-calculate-next-years-bandwidth-requirements/
http://www.kkant.net/papers/cap_planning.pdf
http://books.google.com/books?id=eyj4S1XszvUC&pg=PA201&lpg=PA201&dq=capacity+planning+headroom&source=bl&ots=dx-RZNiuw-&sig=6lwkp52bSHm3kxV03uw9d0_kjXU&hl=en&sa=X&ei=0z0tU9TbHsTr2QWb04GoBg&ved=0CIQBEOgBMAc4Wg#v=onepage&q=capacity%20planning%20headroom&f=false
http://www.ca.com/~/media/Files/whitepapers/Capacity-Management-Resource-Scoring-WP.pdf
http://capacitas.wordpress.com/tag/capacity-planning/
http://perfdynamics.blogspot.com/2014/01/monitoring-cpu-utilization-under-hyper.html
http://www.stat.purdue.edu/~wsc/papers/bandwidth.estimation.pdf
http://www.risctec.co.za/images/TeamQuest%20Complete%20Overview%20Q409.pdf
http://info.servertech.com/blog/bid/102277/How-does-redundancy-relate-to-capacity-planning
http://www.systemdynamics.org/conferences/2004/SDS_2004/PAPERS/294WILLI.pdf
file:///Users/karl/Downloads/100818-Craft-Slides-Exadata_System_Sizing.pdf
http://wafl.co.uk/tag/usable-capacity/
http://appsdba.com/workload.htm
http://www.hpts.ws/papers/2007/Cockcroft_CMG06-utilization.pdf
http://www.serviceassurancedaily.com/2013/04/capacity-planning-say-goodbye-to-guesswork/
http://optimalinnovations.com/pdf/green_capacity_planning.pdf
http://java.coe.psu.ac.th/SunDocuments/SunBluePrints/caphi.pdf
file:///Users/karl/Downloads/capplan_mistakes.pdf
https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=46&ved=0CHAQFjAFOCg&url=https%3A%2F%2Fshare.confex.com%2Fshare%2F119%2Fwebprogram%2FHandout%2FSession11598%2Fcapplan_mistakes.pdf&ei=LyEtU-GbMuSd2QXKhIHYDg&usg=AFQjCNF0oVypFEAYIxrfXE2e0vSKlT1grQ&sig2=_FKz971ekXGx0hLTq-PXGQ&bvm=bv.62922401,d.b2I&cad=rja
http://www.is.co.za/Documents/The%20Fenix%20Graph%20Views.pdf
http://www.sumerian.com/products/product-overview/
http://books.google.com/books?id=yzUpD2YbhWwC&pg=PT739&lpg=PT739&dq=capacity+planning+headroom&source=bl&ots=L0KcPJ83sj&sig=2ZuOnSMyEaJ3a8x0fbicR1nV8Fo&hl=en&sa=X&ei=kx4tU5HSDqa62gXP-YGYBg&ved=0CJMBEOgBMAk#v=onepage&q=capacity%20planning%20headroom&f=false
http://books.google.com/books?id=Yi3trjJ1JuQC&pg=PA79&lpg=PA79&dq=capacity+planning+headroom&source=bl&ots=Wncw_9IpOl&sig=Dem2P_LV3zyvOX68oGMghYFP5fQ&hl=en&sa=X&ei=xB4tU5OmEI3g2wXX8YDoDQ&ved=0CFMQ6AEwAjgK#v=onepage&q=capacity%20planning%20headroom&f=false
http://www.marcomconsultant.com/samples/vk-wpscp.pdf
http://www.kaggle.com/forums/t/6003/capacity-planning-engineer-twitter-san-francisco-ca
https://www.google.com/search?q=adrian+cockroft&oq=adrian+cockroft&aqs=chrome..69i57j69i60j69i65j69i60l3.2711j0j7&sourceid=chrome&espv=210&es_sm=119&ie=UTF-8
http://perfcap.blogspot.com/
http://www.battery.com/our-companies/
http://www.infoq.com/interviews/Adrian-Cockcroft-Netflix
http://techblog.netflix.com/
http://www.ukauthority.com/Portals/0/Research/IT%20Capacity%20Management%20with%20SAS.pdf
http://www.sascommunity.org/seugi/SFI2006/C00905.pdf
https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=46&ved=0CHAQFjAFOCg&url=https%3A%2F%2Fshare.confex.com%2Fshare%2F119%2Fwebprogram%2FHandout%2FSession11598%2Fcapplan_mistakes.pdf&ei=LyEtU-GbMuSd2QXKhIHYDg&usg=AFQjCNF0oVypFEAYIxrfXE2e0vSKlT1grQ&sig2=_FKz971ekXGx0hLTq-PXGQ&bvm=bv.62922401,d.b2I&cad=rja
}}}
I remember sizing Cincinnati’s Exadata.. but check this out about health big data
http://gigaom.com/2013/10/18/health-data-hacktivist-turns-to-the-crowd-to-build-an-open-source-database-for-food/
<<<
He said he decided to launch this latest project after the Cincinnati Children’s Hospital approached him asking for help with an effort to build apps for kids with rare combinations of food allergies. Relatively few children may have a peanut allergy, soy allergy and be lactose intolerant, or have a tree nut allergy and irritable bowel syndrome. But they still need good resources to figure out what they can eat safely, and the databases to support targeted resources are largely lacking, Trotter said.
With this open database, he hopes organizations like the Cincinnati Children’s Hospital and other researchers and clinicians will be able to create hyper-targeted apps and other kinds of tools for patients and caregivers dealing with food-related conditions.
<<<
He co-authored a book on healthcare data
http://shop.oreilly.com/product/0636920020110.do?cmp=af-prog-books-video-product-cj_auwidget358_0636920020110_7032054
https://blogs.oracle.com/brendan/entry/heat_map_analytics
https://blogs.oracle.com/brendan/entry/visualizing_system_latency
http://spotify.github.io/heroic/?utm_source=dbweekly&utm_medium=email#!/docs/aggregations
https://www.heroku.com/postgres
https://devcenter.heroku.com/categories/heroku-postgres
PLANS
https://devcenter.heroku.com/articles/heroku-postgres-plans
https://dev.to/prisma/how-to-setup-a-free-postgresql-database-on-heroku-1dc1
EXPENSIVE QUERIES
https://devcenter.heroku.com/articles/expensive-queries
! references
* https://learning.oreilly.com/search/?query=postgres%20heroku&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&include_collections=true&include_notebooks=true&is_academic_institution_account=false&source=user&sort=relevance&facet_json=true&page=0&include_scenarios=true&include_sandboxes=true&json_facets=false
* https://www.linkedin.com/learning/search?keywords=heroku
oracle hibernate orm sql plan changes
https://www.google.com/search?q=oracle+hibernate+orm+sql+plan+changes&sxsrf=ALeKk03aF5M9d2A-SB1v6-SbsXXZ2vM8_g:1628546016779&ei=4KMRYcP0LpKy5NoPoLmjmAM&start=20&sa=N&ved=2ahUKEwjD5NHW9qTyAhUSGVkFHaDcCDM4ChDy0wN6BAgBEDg&biw=1440&bih=803
HOWTO series - How to Prevent Execution Plan Troubles when Querying Skewed Data, with jOOQ https://blog.jooq.org/tag/execution-plan-cache/ <- good stuff
https://stackify.com/find-hibernate-performance-issues/
https://vladmihalcea.com/hibernate-performance-tuning-tips/
http://stevenfeuersteinonplsql.blogspot.com/2016/09/looking-for-stories-about-using.html
https://vladmihalcea.com/tutorials/hibernate/
https://vladmihalcea.com/?s=plan+management&submit=Go
https://vladmihalcea.com/execution-plan-oracle-hibernate-query-hints/
hibernate prepared statements https://www.google.com/search?q=hibernate+prepared+statement&sxsrf=ALeKk01tSioWjgLRICL-JUO65WsAN2hvrA%3A1628553342391&ei=fsARYYujF8Lj5NoPiLiBuAw&oq=hibernate+prepared&gs_lcp=Cgdnd3Mtd2l6EAMYADIFCAAQgAQyBQgAEIAEMgUIABCABDIGCAAQFhAeMgYIABAWEB4yBggAEBYQHjIGCAAQFhAeMgYIABAWEB4yBggAEBYQHjIGCAAQFhAeOgUIABCRAkoECEEYAVDToRdYsakXYLS0F2gCcAB4AIABlAGIAZsGkgEDNy4ymAEAoAEBwAEB&sclient=gws-wiz
https://stackoverflow.com/questions/47187186/make-hibernate-discriminator-value-use-bind-variable-instead-of-literal
https://mobile.twitter.com/ChrisAntognini/status/736092892930838528 <- christian tweet hibernate plan instability
Generate identical column aliases among cluster https://hibernate.atlassian.net/browse/HHH-2448
Deterministic column aliases across cluster nodes with non-identical mappings https://hibernate.atlassian.net/browse/HHH-7903
Impact of aliases on database problem identification https://forum.hibernate.org/viewtopic.php?p=2229347
https://stackoverflow.com/questions/24757019/sql-aliases-not-working-anymore-after-migrating-from-hibernate-3-to-4
https://vladmihalcea.com/?s=column+alias&submit=Go
hibernate hints https://forum.hibernate.org/viewtopic.php?p=2439232
Hibernate generates wrong SQL query for Oracle https://forum.hibernate.org/viewtopic.php?p=2340268
Hibernate causing high version counts in Oracle sql area https://forum.hibernate.org/viewtopic.php?f=1&t=1013655&view=next
Extremly bad performance https://forum.hibernate.org/viewtopic.php?p=2434293
Hibernate causes a soft parse on every SQL statement https://forum.hibernate.org/viewtopic.php?t=944025
https://stackoverflow.com/questions/3349344/how-to-monitor-slow-sql-queries-executed-by-jpa-and-hibernate
how are column alias names generated https://forum.hibernate.org/viewtopic.php?f=1&t=982072&hilit=oracle+execution+plan
Hibernate 4.3.x solves the issue of column alias changes impacting the use of SQL profiles
https://www.google.com/search?q=Hibernate+4.3.x+solves+the+issue+of+columns+aliases+changes+impacting+the+use+of+SQL+profile&sxsrf=ALeKk02AaWt8aZ_p0IRPQctkLT5Siknt9Q%3A1628553895216&source=hp&ei=p8IRYfHzCJfQ5NoPuKKJ8A4&iflsig=AINFCbYAAAAAYRHQt-jVi9r9H_6-0t0Sneh7hX9v_M4P&oq=Hibernate+4.3.x+solves+the+issue+of+columns+aliases+changes+impacting+the+use+of+SQL+profile&gs_lcp=Cgdnd3Mtd2l6EANQyQNYyQNg_gVoAHAAeACAAT-IAT-SAQExmAEAoAECoAEB&sclient=gws-wiz&ved=0ahUKEwix46uDlKXyAhUXKFkFHThRAu4Q4dUDCAg&uact=5
forum.hibernate.org oracle sql profile
https://www.google.com/search?q=forum.hibernate.org+oracle+sql+profile&sxsrf=ALeKk02U6Ft0CPLlyj-g46bnQq4FEwC2_g:1628554384385&ei=kMQRYcPnFpOi5NoPl_KZoA0&start=30&sa=N&ved=2ahUKEwjD78_slaXyAhUTEVkFHRd5BtQ4FBDy0wN6BAgBEDo&biw=1440&bih=803
''troubleshoot high redo generation''
! 1) do it with snapper if you want to do some real time troubleshooting
{{{
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats,gather=s,sinclude=redo_size 5 1 "select sid from v$session"
}}}
output
{{{
17:50:07 SYS@hcmprd1> @snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats,gather=s,sinclude=redo_size 5 1 "select sid from v$session"
Sampling SID select sid from v$session with interval 5 seconds, taking 1 snapshots...
-- Session Snapper v3.52 by Tanel Poder @ E2SN ( http://tech.e2sn.com )
----------------------------------------------------------------------------------------------------------------------------------------------------
SID, USERNAME , TYPE, STATISTIC , DELTA, HDELTA, HDELTA/SEC, %TIME, GRAPH
----------------------------------------------------------------------------------------------------------------------------------------------------
22, SYSADM , STAT, redo size , 11820, 11.82k, 985,
37, SYSADM , STAT, redo size , 104, 104, 8.67,
37, SYSADM , STAT, redo size for lost write detection , 104, 104, 8.67,
45, SYSADM , STAT, redo size , 1032, 1.03k, 86,
45, SYSADM , STAT, redo size for lost write detection , 936, 936, 78,
417, SYSADM , STAT, redo size , 612, 612, 51,
773, SYSADM , STAT, redo size , 1564, 1.56k, 130.33,
788, SYSADM , STAT, redo size , 15268, 15.27k, 1.27k,
814, SYSADM , STAT, redo size , 14696, 14.7k, 1.22k,
818, SYSADM , STAT, redo size , 13991392, 13.99M, 1.17M,
818, SYSADM , STAT, redo size for lost write detection , 548420, 548.42k, 45.7k,
1137, (DBW0) , STAT, redo size , 3092, 3.09k, 257.67,
1190, SYSADM , STAT, redo size , 15852, 15.85k, 1.32k,
1515, (LGWR) , STAT, redo size , 960, 960, 80,
1552, SYSADM , STAT, redo size , 612, 612, 51,
1893, (CKPT) , STAT, redo size , 104, 104, 8.67,
1937, SYSADM , STAT, redo size , 52, 52, 4.33,
1937, SYSADM , STAT, redo size for lost write detection , 52, 52, 4.33,
1939, SYSADM , STAT, redo size , 1520, 1.52k, 126.67,
2270, (LMS0) , STAT, redo size , 136, 136, 11.33,
2312, SYSADM , STAT, redo size , 1728, 1.73k, 144,
2312, SYSADM , STAT, redo size for lost write detection , 104, 104, 8.67,
2324, SYSADM , STAT, redo size , 1584, 1.58k, 132,
2324, SYSADM , STAT, redo size for lost write detection , 52, 52, 4.33,
2698, SYSADM , STAT, redo size , 7828, 7.83k, 652.33,
2698, SYSADM , STAT, redo size for lost write detection , 7540, 7.54k, 628.33,
-- End of Stats snap 1, end=2012-08-14 17:50:20, seconds=12
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Active% | SQL_ID | SID | EVENT | WAIT_CLASS | MODULE | SERVICE_NAME | BLOCKING_SES | P2 | P3
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
17% | 7rnq59utz5jag | 818 | ON CPU | ON CPU | PSRUN@xxxxxxxxxx (TNS V1- | HCMPRDOL | | |
9% | 32vs1pfg8rbpk | 1947 | PX Nsq: PQ load info query | Other | sqlplus@pd01db01xxxxxxxxx | SYS$USERS | | 0 | 0
6% | 3ntqcc32pw135 | 2680 | ON CPU | ON CPU | sqlplus@pd01db01xxxxxxxxx | SYS$USERS | | |
6% | gph5yptw1xj6v | 819 | ON CPU | ON CPU | HRS_CE | HCMPRDOL | | |
6% | 7rnq59utz5jag | 818 | cell single block physical read | User I/O | PSRUN@xxxxxxxxxx (TNS V1- | HCMPRDOL | | 695169720 | 8192
6% | 7rnq59utz5jag | 818 | cell single block physical read | User I/O | PSRUN@xxxxxxxxxx (TNS V1- | HCMPRDOL | | 2071232443 | 8192
6% | 7rnq59utz5jag | 818 | cell single block physical read | User I/O | PSRUN@xxxxxxxxxx (TNS V1- | HCMPRDOL | | 1848692641 | 8192
6% | gph5yptw1xj6v | 2707 | ON CPU | ON CPU | HRS_CE | HCMPRDOL | | |
3% | | 1137 | db file parallel write | System I/O | | SYS$BACKGROUND | | 0 | 2147483647
3% | 7rnq59utz5jag | 818 | cell single block physical read | User I/O | PSRUN@xxxxxxxxxx (TNS V1- | HCMPRDOL | | 1475203102 | 8192
-- End of ASH snap 1, end=2012-08-14 17:50:20, seconds=5, samples_taken=35
PL/SQL procedure successfully completed.
}}}
! 2) if this is a change in workload, you can mine the AWR and check the redo MB/s you are generating. Just keep in mind that 10MB/s of redo generates 36GB of space per hour, so a high rate of redo generation could be filling up your flash recovery area.
* run the awr_genwl.sql available here http://karlarao.wordpress.com/scripts-resources/ then look for the redo MB/s
* then get the SNAP_ID and generate the large AWR report
* then check the "Top segments by Writes"
* check the SQLs that are hitting this top segment
output... check out snap_id 5927 with ''1000 Write IOPS and a redo rate in the range of 10 Redo (mb)/s''. Since this database runs on Exadata, the issue is hidden by the powerful storage, but the DBAs noticed huge space growth caused by unnecessary write operations from the PeopleSoft security module, which was being run every 15 minutes when it should be done daily
{{{
AWR CPU and IO Workload Report
i *** *** ***
n Total Total Total U S
Snap s Snap C CPU A Oracle OS Physical Oracle RMAN OS S Y I
Snap Start t Dur P Time DB DB Bg RMAN A CPU OS CPU Memory IOPs IOPs IOPs IO r IO w Redo Exec CPU CPU CPU R S O
ID Time # (m) U (s) Time CPU CPU CPU S (s) Load (s) (mb) r w redo (mb)/s (mb)/s (mb)/s Sess /s % % % % % %
------ --------------- --- ---------- --- ----------- ---------- --------- --------- -------- ------ ----------- ------- ----------- ---------- --------- --------- --------- --------- --------- --------- ---- --------- ------ ---- ---- ---- ---- ----
5926 12/08/15 08:00 2 59.55 24 85752.00 7292.40 4734.31 308.12 0.00 2.0 5042.42 4.33 6585.22 96531.46 347.057 346.961 25.635 54.404 6.001 6.342 327 662.487 6 0 8 7 1 0
5927 12/08/15 09:00 2 59.72 24 85996.80 11173.54 5404.59 600.12 0.00 3.1 6004.71 4.03 7637.65 96531.46 899.513 1002.798 26.245 116.534 14.153 10.729 326 539.930 7 0 9 8 1 0
5928 12/08/15 10:00 2 60.40 24 86976.00 10434.18 6492.10 429.90 0.00 2.9 6922.00 2.76 8597.47 96531.46 962.283 514.159 31.517 362.800 9.765 10.193 336 835.939 8 0 10 9 1 0
}}}
segment stats output
{{{
Segments by Physical Writes DB/Inst: HCMPRD/hcmprd2 Snaps: 5927-5928
-> Total Physical Writes: 6,491,115
-> Captured Segments account for 58.3% of Total
Tablespace Subobject Obj. Physical
Owner Name Object Name Name Type Writes %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYSADM HRLARGE PS_SJT_PERSON TABLE 901,106 13.88
SYSADM TLWORK PS_WRK_PROJ2_TAO4 TABLE 820,789 12.64
SYSADM TLWORK PS_WRK_PROJ_TAO4 TABLE 568,076 8.75
SYSADM TLWORK PS_WRK_PROJ6_TAO4 TABLE 410,510 6.32
SYSADM PSINDEX PSASJT_PERSON INDEX 250,182 3.85
-------------------------------------------------------------
Segments by Physical Write Requests DB/Inst: HCMPRD/hcmprd2 Snaps: 5927-5928
-> Total Physical Write Requestss: 3,593,227
-> Captured Segments account for 63.4% of Total
Tablespace Subobject Obj. Phys Write
Owner Name Object Name Name Type Requests %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYSADM HRLARGE PS_SJT_PERSON TABLE 591,695 16.47
SYSADM TLWORK PS_WRK_PROJ_TAO4 TABLE 454,588 12.65
SYSADM TLWORK PS_WRK_PROJ2_TAO4 TABLE 343,714 9.57
SYSADM PSINDEX PSASJT_PERSON INDEX 170,409 4.74
SYSADM TLWORK PS_WRK_PROJ6_TAO4 TABLE 143,019 3.98
-------------------------------------------------------------
Segments by Direct Physical Writes DB/Inst: HCMPRD/hcmprd2 Snaps: 5927-5928
-> Total Direct Physical Writes: 43,847
-> Captured Segments account for 3.9% of Total
Tablespace Subobject Obj. Direct
Owner Name Object Name Name Type Writes %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYSADM HRSLARGE SYS_LOB0001650271C00 LOB 1,670 3.81
SYSADM PSIMAGE SYS_LOB0001650400C00 LOB 54 .12
-------------------------------------------------------------
Segments by DB Blocks Changes DB/Inst: HCMPRD/hcmprd2 Snaps: 5927-5928
-> % of Capture shows % of DB Block Changes for each top segment compared
-> with total DB Block Changes for all segments captured by the Snapshot
Tablespace Subobject Obj. DB Block % of
Owner Name Object Name Name Type Changes Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
SYSADM HRLARGE PS_SJT_PERSON TABLE 13,872,864 18.74
SYSADM TLWORK PS_WRK_PROJ2_TAO4 TABLE 8,087,248 10.92
SYSADM TLWORK PS_WRK_PROJ_TAO4 TABLE 7,536,976 10.18
SYSADM PSINDEX PSASJT_PERSON INDEX 6,400,592 8.65
SYSADM PSINDEX PS_SJT_PERSON INDEX 5,769,600 7.79
-------------------------------------------------------------
}}}
and this is how it looks from the EMGC
[img[ https://lh5.googleusercontent.com/-ZQ3sWYookf4/UC5w0Zt5COI/AAAAAAAABts/gIX9s5-fJ5A/s800/20120817_hcmprd.png ]]
''What are the SJT tables?'' http://goo.gl/XRN5K
in our case we are hitting the SJT_PERSON table
{{{
Peoplesoft - What are security join tables? posted by Charul Mohta
What are security join tables? Why is it necessary to refresh SJT processes?
PeopleSoft system stores security data in user and transaction Security Join Tables. (SJTs).
User SJTs are:
SJT_OPR_CLS: Contains the User IDs with their data permission lists.
SJT_CLASS_ALL: Contains the data permission information for all the data permission lists that are given data access on the ‘Security by Dept Tree’ page or ‘Security by Permission List’ page.
Transaction SJTs are:
SJT_PERSON: Contains transaction data for the people (employees, contingent workers, Person of Interest). It has row level security attributes (SetID, DeptID etc) for all the employees.
SJT refresh processes have to be run to keep security data (in user and transaction SJTs) up to date so that the system enforces data permission using the most current information.
}}}
! From: Redo Internals And Tuning By Redo Reduction Doc - OraInternals from Riyaj Shamsudeen:
- Reduce the number of indexes as much as possible
- Use merge instead of delete+insert
- Only update what is updated, instead of all rows
- Try to use GTT's if possible (global temporary tables)
- Try IOTs (only works in certain cases)
- Use nologging inserts (insert /*+ append */); see the sketch after this list
- Partition drop instead of huge deletes
- Try to use unique indexes (non-unique generate slightly more redo)
- Try different numbers of rows per commit (the paper above concludes that in its specific case 100 rows generates the least amount of redo)
- If using sequences, setting a huge cache can reduce some redo
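A minimal sketch of two of the techniques above (the STAGING/TARGET tables are hypothetical): MERGE does one pass instead of a delete+insert, and a direct-path insert into a NOLOGGING segment skips most of the redo for the data blocks:
{{{
merge into target t
using staging s
on (t.id = s.id)
when matched then update set t.val = s.val
when not matched then insert (t.id, t.val) values (s.id, s.val);

insert /*+ append */ into target_nolog select * from staging;
commit;  -- direct-path loaded blocks cannot be queried until commit
}}}
Note that the append hint only reduces redo if the segment is NOLOGGING (and the database is not in FORCE LOGGING mode); otherwise it mainly reduces undo.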
http://www.freelists.org/post/oracle-l/High-shared-pool-usage
https://www.highcharts.com/docs/working-with-data/live-data
! width-balanced histograms or frequency histograms
Distinct values less than or equal to the number of buckets: when there are fewer distinct
values than buckets, the ENDPOINT_VALUE column contains the distinct values themselves
while the ENDPOINT_NUMBER column holds the CUMULATIVE number of rows with a value less
than or equal to that column value (Frequency Histograms).
{{{
create table histogram as select rownum all_distinct, 10000 skew from dual connect by level <= 10000;
update histogram set skew=all_distinct where rownum<=10;
exec dbms_stats.gather_table_stats(user,'HISTOGRAM', method_opt=>'for all columns size 1');
exec dbms_stats.gather_table_stats(user,'HISTOGRAM', method_opt=>'for all columns size auto');
select column_name, density, histogram from user_tab_col_statistics where table_name='HISTOGRAM' and column_name='SKEW';
select column_name,num_distinct,density from user_tab_col_statistics where table_name='HISTOGRAM';
COLUMN_NAME NUM_DISTINCT DENSITY
------------------------------ ------------ ----------
ALL_DISTINCT 10000 .0001
SKEW 11 .090909091
density = 1/#num_distinct
So Oracle is assuming uniform data distribution in the column skew values and estimating the
cardinality = density * 10000 = 909.09 rows.
= .090909091 * 10000
= 909.09091
select * from histogram where skew=1;
SQL_ID 4zbzuswdjz4zg, child number 0
-------------------------------------
select * from histogram where skew=1
Plan hash value: 941738150
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 6 (100)| |
|* 1 | TABLE ACCESS FULL| HISTOGRAM | 909 | 6363 | 6 (0)| 00:00:01 |
-------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("SKEW"=1)
select * from histogram where skew=10000;
SQL_ID 3xpbpk9haz7y2, child number 0
-------------------------------------
select * from histogram where skew=10000
Plan hash value: 941738150
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 6 (100)| |
|* 1 | TABLE ACCESS FULL| HISTOGRAM | 909 | 6363 | 6 (0)| 00:00:01 |
-------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("SKEW"=10000)
However, we know that we have only one row with
skew=1 and 9,990 rows with skew=10000. This assumption is bound to result in a sub-optimal
execution plan. For example, if we have an index on column skew, Oracle will use it for the predicate
skew=10000, assuming the number of rows returned is 909, or only 9.09%.
So we understand that without additional inputs, the CBO assumes a uniform distribution of data
between the low and high values of a column and chooses a sub-optimal plan.
create index skew_idx on histogram(skew);
exec dbms_stats.gather_index_stats(user,'SKEW_IDX');
select * from histogram where skew=10000;
SQL_ID 3xpbpk9haz7y2, child number 0
-------------------------------------
select * from histogram where skew=10000
Plan hash value: 2822933374
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 4 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| HISTOGRAM | 909 | 6363 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | SKEW_IDX | 909 | | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SKEW"=10000)
Two types of histograms:
1) width-balanced histograms or frequency histograms
2) height-balanced histograms
exec dbms_stats.gather_table_stats(user,'HISTOGRAM',method_opt=>'for columns skew size 11');
col COLUMN_NAME format a20
select column_name,endpoint_number,endpoint_value from user_tab_histograms where table_name='HISTOGRAM' and column_name='SKEW';
select endpoint_value as column_value,
endpoint_number as cummulative_frequency,
endpoint_number - lag(endpoint_number,1,0) over (order by endpoint_number) as frequency
from user_tab_histograms
where table_name = 'HISTOGRAM' and column_name = 'SKEW';
COLUMN_VALUE CUMMULATIVE_FREQUENCY FREQUENCY
------------ --------------------- ----------
1 1 1
2 2 1
3 3 1
4 4 1
5 5 1
6 6 1
7 7 1
8 8 1
9 9 1
10 10 1
10000 10000 9990
select column_name, density, histogram from user_tab_col_statistics where table_name='HISTOGRAM' and column_name='SKEW';
COLUMN_NAME NUM_DISTINCT DENSITY
-------------------- ------------ ----------
ALL_DISTINCT 10000 .0001
SKEW 11 .00005
SQL_ID 4zbzuswdjz4zg, child number 0
-------------------------------------
select * from histogram where skew=1
Plan hash value: 2822933374
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| HISTOGRAM | 1 | 7 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | SKEW_IDX | 1 | | 1 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SKEW"=1)
SQL_ID 3xpbpk9haz7y2, child number 0
-------------------------------------
select * from histogram where skew=10000
Plan hash value: 2822933374
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 4 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| HISTOGRAM | 909 | 6363 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | SKEW_IDX | 909 | | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("SKEW"=10000)
}}}
! height-balanced histograms
More distinct values than the number of buckets: when there are more distinct values
than buckets, the ENDPOINT_NUMBER column contains the bucket id and the
ENDPOINT_VALUE column holds the highest value in each bucket.
Bucket 0 is special in that it holds the low value for that column (Height-balanced Histograms)
{{{
exec dbms_stats.gather_table_stats(user,'HISTOGRAM',method_opt=>'for columns skew size 5');
select table_name, column_name,endpoint_number,endpoint_value from user_tab_histograms where table_name='HISTOGRAM' and column_name='SKEW';
TABLE_NAME COLUMN_NAME ENDPOINT_NUMBER ENDPOINT_VALUE
------------- ------------- --------------- --------------
HISTOGRAM SKEW 0 1
HISTOGRAM SKEW 5 10000
SELECT bucket_number, max(skew) AS endpoint_value
FROM (
SELECT skew, ntile(5) OVER (ORDER BY skew) AS bucket_number
FROM histogram)
GROUP BY bucket_number
ORDER BY bucket_number;
BUCKET_NUMBER ENDPOINT_VALUE
------------- --------------
1 10000
2 10000
3 10000
4 10000
5 10000
}}}
https://cwiki.apache.org/confluence/display/Hive/DesignDocs
https://cwiki.apache.org/confluence/display/Hive/LanguageManual
https://cwiki.apache.org/confluence/display/Hive/Home#Home-HiveDocumentation
http://hive.apache.org/
https://repo.spring.io/simple/hortonworks/org/apache/hive/hive-jdbc/ <- ''hive versions''
https://en.wikipedia.org/wiki/Apache_Hive#Comparison_with_traditional_databases
http://www.networkworld.com/article/2161959/tech-primers/hadoop-on-windows-azure--hive-vs--javascript-for-processing-big-data.html
https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions
https://community.hortonworks.com/questions/50434/hive-and-acid-table-performance-for-updates.html
https://hortonworks.com/blog/adding-acid-to-apache-hive/
http://www.remay.com.br/blog/how-to-work-with-acid-tables-in-hive/
{{{
If we have to work with Hive tables in transactional mode, the following are required:
– bucketing
– table property transactional=true
– Ambari – Hive – Configs – ACID Transactions = ON
We can test with these commands:
--Sets for update the engine and vectorized processing
set hive.execution.engine=tez;
set hive.vectorized.execution.enabled=true;
set hive.vectorized.execution.reduce.enabled=true;
--------------------------
--Target table to create
--------------------------
drop table tbl1;
create table tbl1
(
f1 int,
f2 string
)
clustered by (f2) into 1 buckets
stored as orc tblproperties ("transactional"="true");
----------------------------------------------------
--Simple load using the transactional way
----------------------------------------------------
insert into table tbl1 values (1, 'line1');
insert into table tbl1 values (2, 'line2');
insert into table tbl1 values (3, 'line3');
--------------------------
--First Result
--------------------------
Select * from tbl1;
1 line1
2 line2
3 line3
Time taken: 0.798 seconds, Fetched: 3 row(s)
--------------------------
--Simple update
--------------------------
update tbl1 set
f1 = 200
where f1 = 2;
--------------------------
--Second Result
--------------------------
select * from tbl1 ;
1 line1
200 line2
3 line3
--------------------------
--Simple delete
--------------------------
delete from tbl1 where f1 = 3;
--------------------------
--Third Result
--------------------------
select * from tbl1 ;
1 line1
200 line2
}}}
! issues
* you can't update a bucketed (clustered-by) column
* you can delete and insert such rows, just not update the bucketing column itself
{{{
hive> update tbl1 set
> f2 = 200
> where f2 = 'line2';
FAILED: SemanticException [Error 10302]: Updating values of bucketing columns is not supported. Column f2.
}}}
http://hortonworks.com/blog/hive-0-14-cost-based-optimizer-cbo-technical-overview/
! spark and hive cbo
https://cwiki.apache.org/confluence/download/attachments/73636892/%E7%8E%8B%E6%8C%AF%E5%8D%8E.pdf?version=1&modificationDate=1504755749000&api=v2 <-- good stuff
https://www.google.com/search?q=spark.sql.cbo.enabled&oq=spark.sql.cbo.enabled&aqs=chrome..69i57.463j0j1&sourceid=chrome&ie=UTF-8
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_hive-performance-tuning/content/ch_cost-based-optimizer.html
https://hortonworks.com/blog/hive-0-14-cost-based-optimizer-cbo-technical-overview/
https://www.slideshare.net/hortonworks/hive-on-spark-is-blazing-fast-or-is-it-final
https://code.facebook.com/posts/229861827208629/scaling-the-facebook-data-warehouse-to-300-pb/
Illustrating Spark SQL 2.2.0 Cost-Based Optimization in example of Hive and MySQL https://www.youtube.com/watch?v=RYIwBfqJZ34
{{{
-- ` is not allowed
-- replace string with varchar2(42)
-- replace bigint with int
-- replace double to int
-- remove PARTITIONED BY
}}}
<<showtoc>>
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Explain
! EXPLAIN command
{{{
set hive.log.explain.output=true;
set hive.explain.user=true;
set hive.tez.exec.print.summary=true;
EXPLAIN [FORMATTED|EXTENDED|DEPENDENCY|AUTHORIZATION] query
}}}
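For example, a minimal sketch of a run (web_logs is a hypothetical table):
{{{
set hive.explain.user=true;
explain extended
select status, count(*) from web_logs group by status;
}}}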
! tools to use
!! lipwig
<<<
https://github.com/t3rmin4t0r/lipwig
https://dzone.com/articles/lipwig-for-hive-is-the-greatest
http://www.linuxdevcenter.com/pub/a/linux/2004/05/06/graphviz_dot.html
<<<
!! json viewer for easy navigation/exploration
<<<
http://jsonviewer.stack.hu/
<<<
https://community.hortonworks.com/questions/24595/how-to-shut-downkill-all-queries-in-hive.html
{{{
[root@sandbox ~]# yarn application -kill application_1482410373661_0002
}}}
https://hortonworks.com/blog/3x-faster-interactive-query-hive-llap/
http://hadoop-pig-hive-thejas.blogspot.com/2017/09/new-apache-hive-with-llap-vs-impala.html
https://www.infoworld.com/article/3131058/analytics/big-data-face-off-spark-vs-impala-vs-hive-vs-presto.html
https://hortonworks.com/blog/top-5-performance-boosters-with-apache-hive-llap/
https://www.slideshare.net/HadoopSummit/llap-subsecond-analytical-queries-in-hive-74811820
https://hortonworks.com/tutorial/interactive-sql-on-hadoop-with-hive-llap/
https://hortonworks.com/tutorial/fast-analytics-in-the-cloud-with-hive-llap/
https://cwiki.apache.org/confluence/display/Hive/LLAP <-- official doc
https://www.youtube.com/results?search_query=hive+LLAP+and+tez
Hive LLAP "Inside Out" https://www.youtube.com/watch?v=n8jHOVkvNoc
https://resources.zaloni.com/blog/tez-and-llap-improvements-to-make-hive-faster
hive llap vs spark sql https://www.google.com/search?ei=2jVhWpqvIIn8zgL14YfwAQ&q=hive+llap+vs+spark+sql&oq=hive+llap+vs&gs_l=psy-ab.3.3.0i67k1j0l5.25475.25475.0.27870.1.1.0.0.0.0.244.244.2-1.1.0....0...1c.1.64.psy-ab..0.1.244....0.bm3eynqJK6w
https://hortonworks.com/blog/sparksql-ranger-llap-via-spark-thrift-server-bi-scenarios-provide-row-column-level-security-masking/
spark LLAP use cases https://github.com/hortonworks-spark/spark-llap/wiki/3.-Use-Cases
spark LLAP github https://github.com/hortonworks-spark/spark-llap/wiki
https://developer.ibm.com/hadoop/2017/02/07/experiences-comparing-big-sql-and-spark-sql-at-100tb/ <-- big SQL at 100TB
kudu vs LLAP https://www.youtube.com/results?search_query=kudu+vs+llap
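Assuming LLAP is already enabled via Ambari (HiveServer2 Interactive on HDP), routing a session through the LLAP daemons is a one-setting sketch:
{{{
-- values: auto | none | all | map | only
set hive.llap.execution.mode=all;
set hive.execution.engine=tez;
}}}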
! manual scd
https://hortonworks.com/blog/four-step-strategy-incremental-updates-hive/
https://www.softserveinc.com/en-us/tech/blogs/process-slowly-changing-dimensions-hive/ <-- good stuff
Slowly Changing Dimensions and how to process them in Hive by Oleksandr Berchenko (Eng) https://www.youtube.com/watch?v=IArhNttLdmM&t=2511s <-- video version
http://dwgeek.com/impala-hive-slowly-changing-dimension-scd-type-2.html/
https://www.youtube.com/watch?v=7smtTmjbvhg <-- Hive Bucketing,SCD Type1,Type2 (go to 58:10 seconds for full code)
https://stackoverflow.com/questions/37472146/slowly-changing-dimensions-scd1-and-scd2-implementation-in-hive
https://hadoopdatasolutions.blogspot.com/2016/01/hive-scdtype-ii-implementation-based-on.html
https://hadoopmreduce.blogspot.com/2015/11/hive-insert-for-new-records-from.html
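The four-step strategy in the first link boils down to a reconcile step: keep the newest row per key across the base and incremental tables. A sketch with hypothetical table and column names (base_table/incremental_table with id, field1, modified_date):
{{{
-- newest modified_date per id wins across base + incremental
create view reconcile_view as
select id, field1, modified_date
from (
  select id, field1, modified_date,
         row_number() over (partition by id order by modified_date desc) as rn
  from (
    select * from base_table
    union all
    select * from incremental_table
  ) merged
) ranked
where rn = 1;
}}}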
! merge
https://github.com/cartershanklin/hive-scd-examples/blob/master/hive_type2_scd.sql <-- good stuff
https://hortonworks.com/blog/apache-hive-moving-beyond-analytics-offload-with-sql-merge/
https://hortonworks.com/blog/update-hive-tables-easy-way/
https://hortonworks.com/blog/update-hive-tables-easy-way-2/
https://community.hortonworks.com/articles/97113/hive-acid-merge-by-example.html
https://dzone.com/articles/update-hive-tables-the-easy-way-hortonworks
https://dzone.com/articles/update-hive-tables-the-easy-way-part-2-hortonworks
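Distilling the examples above: with Hive 2.2+ and an ACID target table, MERGE does the upsert in one statement. A sketch with hypothetical tables:
{{{
-- target must be a bucketed ORC table with transactional=true
merge into customer_target t
using customer_staging s
on t.id = s.id
when matched then update set email = s.email
when not matched then insert values (s.id, s.email);
}}}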
! using a tool
http://amintor.com/1/post/2014/07/implement-scd-type-2-in-hadoop-using-hive-transforms.html <-- using a bash script
https://www.linkedin.com/pulse/steps-incremental-updates-apache-hive-information-server-vik-malhotra/
https://www.youtube.com/watch?v=hFInZp08tYE <-- Talend Open Studio - Implementing SCD Type I, II & III (No Voice)
! references
https://www.amazon.com/Data-Warehouse-Toolkit-Complete-Dimensional/dp/0471200247
http://datawarehouse4u.info/SCD-Slowly-Changing-Dimensions.html
https://sonra.io/2017/05/15/dimensional-modeling-and-kimball-data-marts-in-the-age-of-big-data-and-hadoop/
https://blogs.oracle.com/datawarehousing/oracle-sql-developer-data-modeler-support-for-oracle-big-data-sql
https://community.hortonworks.com/articles/1887/connect-oracle-sql-developer-to-hive.html
https://stackoverflow.com/questions/17905873/how-to-select-current-date-in-hive-sql/38734565#38734565
{{{
To fetch only the current date, excluding the timestamp:
In lower versions Hive CURRENT_DATE is not available, so you can use the following (it worked for me on Hive 0.14):
select TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP()));
In higher versions, say Hive 2.0, you can use:
select CURRENT_DATE;
}}}
! cloudera manager
https://docs.cloudera.com/documentation/enterprise/latest/topics/cloudera_manager.html
https://www.quora.com/Hortonworks-What-are-the-advantages-of-Ambari-Any-specific-aspects-where-Ambari-is-better-than-Cloudera-Manager
https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cm_cdh_compatibility.html#cm_cdh_compatibility
https://docs.cloudera.com/cdp/latest/data-migration/topics/cdp-data-migration-introduction.html
https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_product_compatibility.html
https://docs.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html
https://community.cloudera.com/t5/Support-Questions/Support-matrix-of-CDH-and-other-Cloudera-products/td-p/82676
https://docs.cloudera.com/cdp/latest/index.html
https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_6_download.html
! hortonworks ambari
https://supportmatrix.hortonworks.com/
https://blog.cloudera.com/cloudera-provides-first-look-at-cloudera-data-platform-the-industrys-first-enterprise-data-cloud/
https://www.slideshare.net/HadoopSummit/an-apache-hive-based-data-warehouse
http://biconsulting.hu/letoltes/2017budapestdata/fekete_zsolt_vallalati_adattarhaz_hadoop_kornyezetben.pdf
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-Beeline%E2%80%93NewCommandLineShell
{{{
For DEV:
$ beeline
> !connect jdbc:hive2://hostname:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
}}}
{{{
For QA:
$ beeline
> !connect jdbc:hive2://hostname:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
}}}
http://ihorbobak.com/index.php/2015/08/05/cluster-profiling/
https://github.com/cerndb/Hadoop-Profiler
https://db-blog.web.cern.ch/blog/joeri-hermans/2016-04-hadoop-performance-troubleshooting-stack-tracing-introduction
{{{
create database hr;
CREATE TABLE IF NOT EXISTS hr.employees ( eid int, name String,
salary String, destination String)
COMMENT 'Employee details'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
}}}
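To get rows into this TEXTFILE table, a minimal load sketch (the local path is hypothetical):
{{{
-- load a tab-delimited local file, replacing any existing data
load data local inpath '/tmp/employees.tsv' overwrite into table hr.employees;
select * from hr.employees limit 5;
}}}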
{{{
[root@sandbox ~]# hadoop fs -ls /apps/hive/warehouse/
Found 5 items
drwxrwxrwx - hive hdfs 0 2016-10-25 08:10 /apps/hive/warehouse/foodmart.db
drwxrwxrwx - raj_ops hdfs 0 2018-01-23 10:07 /apps/hive/warehouse/hr.db
drwxrwxrwx - hive hdfs 0 2016-10-25 08:11 /apps/hive/warehouse/sample_07
drwxrwxrwx - hive hdfs 0 2016-10-25 08:11 /apps/hive/warehouse/sample_08
drwxrwxrwx - hive hdfs 0 2016-10-25 08:02 /apps/hive/warehouse/xademo.db
[root@sandbox ~]# hadoop fs -ls /apps/hive/warehouse/hr.db
Found 1 items
drwxrwxrwx - raj_ops hdfs 0 2018-01-23 10:08 /apps/hive/warehouse/hr.db/employees
}}}
{{{
[raj_ops@sandbox ~]$ vi hrdepartments.sql
select * from hr.departments;
[raj_ops@sandbox ~]$ hive -f hrdepartments.sql > hrdepartments.csv
# create directory and put the csv in hadoop
[raj_ops@sandbox ~]$ hadoop fs -mkdir /tmp/expimp/departments2
[raj_ops@sandbox ~]$ hadoop fs -put hrdepartments.csv /tmp/expimp/departments2
[raj_ops@sandbox ~]$ hadoop fs -ls /tmp/expimp/departments2
Found 1 items
-rw-r--r-- 1 raj_ops hdfs 741 2018-02-09 01:31 /tmp/expimp/departments2/hrdepartments.csv
# get table metadata
hive> show create table hr.departments;
OK
CREATE TABLE `hr.departments`(
`department_id` int,
`department_name` varchar(30),
`manager_id` int,
`location_id` int)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments'
TBLPROPERTIES (
'COLUMN_STATS_ACCURATE'='{\"BASIC_STATS\":\"true\"}',
'numFiles'='30',
'numRows'='30',
'rawDataSize'='679',
'totalSize'='709',
'transient_lastDdlTime'='1518096058')
Time taken: 2.808 seconds, Fetched: 20 row(s)
hive>
> use hr2;
OK
Time taken: 0.248 seconds
hive> show tables;
OK
departments
# from the metadata edit the storage settings
hive> CREATE TABLE `hr2.departments2`(
> `department_id` int,
> `department_name` varchar(30),
> `manager_id` int,
> `location_id` int)
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '\t'
> STORED AS TEXTFILE
> LOCATION
> '/tmp/expimp/departments2';
OK
Time taken: 0.509 seconds
# validation
hive>
> select * from hr.departments;
OK
10 Administration 200 1700
20 Marketing 201 1800
110 Accounting 205 1700
120 Treasury NULL 1700
130 Corporate Tax NULL 1700
140 Control And Credit NULL 1700
150 Shareholder Services NULL 1700
160 Benefits NULL 1700
170 Manufacturing NULL 1700
180 Construction NULL 1700
190 Contracting NULL 1700
200 Operations NULL 1700
30 Purchasing 114 1700
210 IT Support NULL 1700
220 NOC NULL 1700
230 IT Helpdesk NULL 1700
240 Government Sales NULL 1700
250 Retail Sales NULL 1700
260 Recruiting NULL 1700
270 Payroll NULL 1700
10 Administration 200 1700
50 Shipping 121 1500
50 Shipping 121 1500
40 Human Resources 203 2400
50 Shipping 121 1500
60 IT 103 1400
70 Public Relations 204 2700
80 Sales 145 2500
90 Executive 100 1700
100 Finance 108 1700
Time taken: 0.532 seconds, Fetched: 30 row(s)
hive> select * from hr2.departments2;
OK
10 Administration 200 1700
20 Marketing 201 1800
110 Accounting 205 1700
120 Treasury NULL 1700
130 Corporate Tax NULL 1700
140 Control And Credit NULL 1700
150 Shareholder Services NULL 1700
160 Benefits NULL 1700
170 Manufacturing NULL 1700
180 Construction NULL 1700
190 Contracting NULL 1700
200 Operations NULL 1700
30 Purchasing 114 1700
210 IT Support NULL 1700
220 NOC NULL 1700
230 IT Helpdesk NULL 1700
240 Government Sales NULL 1700
250 Retail Sales NULL 1700
260 Recruiting NULL 1700
270 Payroll NULL 1700
10 Administration 200 1700
50 Shipping 121 1500
50 Shipping 121 1500
40 Human Resources 203 2400
50 Shipping 121 1500
60 IT 103 1400
70 Public Relations 204 2700
80 Sales 145 2500
90 Executive 100 1700
100 Finance 108 1700
Time taken: 0.18 seconds, Fetched: 30 row(s)
hive> desc hr2.departments2;
OK
department_id int
department_name varchar(30)
manager_id int
location_id int
Time taken: 0.539 seconds, Fetched: 4 row(s)
# nulls stay null
hive> select * from hr2.departments2 where manager_id is null;
OK
120 Treasury NULL 1700
130 Corporate Tax NULL 1700
140 Control And Credit NULL 1700
150 Shareholder Services NULL 1700
160 Benefits NULL 1700
170 Manufacturing NULL 1700
180 Construction NULL 1700
190 Contracting NULL 1700
200 Operations NULL 1700
210 IT Support NULL 1700
220 NOC NULL 1700
230 IT Helpdesk NULL 1700
240 Government Sales NULL 1700
250 Retail Sales NULL 1700
260 Recruiting NULL 1700
270 Payroll NULL 1700
Time taken: 0.311 seconds, Fetched: 16 row(s)
hive> select * from hr.departments where manager_id is null;
OK
120 Treasury NULL 1700
130 Corporate Tax NULL 1700
140 Control And Credit NULL 1700
150 Shareholder Services NULL 1700
160 Benefits NULL 1700
170 Manufacturing NULL 1700
180 Construction NULL 1700
190 Contracting NULL 1700
200 Operations NULL 1700
210 IT Support NULL 1700
220 NOC NULL 1700
230 IT Helpdesk NULL 1700
240 Government Sales NULL 1700
250 Retail Sales NULL 1700
260 Recruiting NULL 1700
270 Payroll NULL 1700
Time taken: 0.219 seconds, Fetched: 16 row(s)
}}}
{{{
-- PROCESS_DATE FILTER
-- FOR DERIVED: where CAST(process_date AS Date) <= '2018-01-11'
-- FOR RAW: where CAST(processed_date AS Date) <= '2018-01-11'
select distinct CAST(process_date AS Date) from restou_derived.dc_brc_data where CAST(process_date AS Date) <= '2018-01-11';
select distinct CAST(process_date AS Date) from restou_derived.dc_master_target_summary where CAST(process_date AS Date) <= '2018-01-11';
select distinct CAST(process_date AS Date) from restou_derived.dc_pending_default_info_from_prod where CAST(process_date AS Date) <= '2018-01-11';
select distinct CAST(process_date AS Date) from restou_derived.dc_slg1_exceptions_from_prod where CAST(process_date AS Date) <= '2018-01-11';
select distinct CAST(processed_date AS Date) from restou_raw.crm_service_contract_from_prod where CAST(processed_date AS Date) <= '2018-01-11';
select distinct CAST(processed_date AS Date) from restou_raw.manual_master_tg_from_prod where CAST(processed_date AS Date) <= '2018-01-11';
-- this works
select distinct from_unixtime(unix_timestamp(processed_date, 'yyyyMMdd'),'yyyyMMdd') from restou_raw.crm_service_contract_from_prod
where from_unixtime(unix_timestamp(processed_date, 'yyyyMMdd'),'yyyyMMdd') >= from_unixtime(unix_timestamp('20180101', 'yyyyMMdd'),'yyyyMMdd')
and from_unixtime(unix_timestamp(processed_date, 'yyyyMMdd'),'yyyyMMdd') <= from_unixtime(unix_timestamp('20180217', 'yyyyMMdd'),'yyyyMMdd');
-- this works too
select distinct from_unixtime(unix_timestamp(processed_date, 'yyyyMMdd'),'yyyy-MM-dd') from restou_raw.crm_service_contract_from_prod
where from_unixtime(unix_timestamp(processed_date, 'yyyyMMdd'),'yyyy-MM-dd') <= from_unixtime(unix_timestamp('2018-02-18', 'yyyy-MM-dd'),'yyyy-MM-dd');
}}}
{{{
hive> select distinct from_unixtime(unix_timestamp(processed_date, 'yyyyMMdd'),'yyyyMMdd') from restou_raw.crm_service_contract_from_prod
> where from_unixtime(unix_timestamp(processed_date, 'yyyyMMdd'),'yyyyMMdd') <= from_unixtime(unix_timestamp('20180218', 'yyyyMMdd'),'yyyyMMdd');
Query ID = sryerama_20180225220456_134cad18-fa25-4210-b7ee-60588d53ec13
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1518676904170_43433)
--------------------------------------------------------------------------------
VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--------------------------------------------------------------------------------
Map 1 .......... SUCCEEDED 1 1 0 0 0 0
Reducer 2 ...... SUCCEEDED 2 2 0 0 0 0
--------------------------------------------------------------------------------
VERTICES: 02/02 [==========================>>] 100% ELAPSED TIME: 6.61 s
--------------------------------------------------------------------------------
OK
20180216
20180217
20180218
Time taken: 8.227 seconds, Fetched: 3 row(s)
hive> select distinct from_unixtime(unix_timestamp(processed_date, 'yyyyMMdd'),'yyyy-MM-dd') from restou_raw.crm_service_contract_from_prod
> where from_unixtime(unix_timestamp(processed_date, 'yyyyMMdd'),'yyyy-MM-dd') <= from_unixtime(unix_timestamp('2018-02-18', 'yyyy-MM-dd'),'yyyy-MM-dd');
Query ID = sryerama_20180225220616_8534e3bf-270d-47e1-a863-34eb24f80440
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1518676904170_43433)
--------------------------------------------------------------------------------
VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--------------------------------------------------------------------------------
Map 1 .......... SUCCEEDED 1 1 0 0 0 0
Reducer 2 ...... SUCCEEDED 2 2 0 0 0 0
--------------------------------------------------------------------------------
VERTICES: 02/02 [==========================>>] 100% ELAPSED TIME: 9.09 s
--------------------------------------------------------------------------------
OK
2018-02-16
2018-02-17
2018-02-18
Time taken: 10.251 seconds, Fetched: 3 row(s)
}}}
{{{
-- create a test export first:
export table testtable to 'hdfs_exports_location/testtable';
-- then get the location (hdfs://sandbox.hortonworks.com:8020)
-- and from there explicitly set the directory name:
hive>
> export table departments to 'hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments';
Copying data from file:/tmp/raj_ops/b80ec6b0-1852-464b-b925-ab6684f3bd88/hive_2018-02-09_01-05-56_358_7720928324786932039-1/-local-10000/_metadata
Copying file: file:/tmp/raj_ops/b80ec6b0-1852-464b-b925-ab6684f3bd88/hive_2018-02-09_01-05-56_358_7720928324786932039-1/-local-10000/_metadata
Copying data from hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_1
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_10
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_11
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_12
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_13
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_14
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_15
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_16
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_17
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_18
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_19
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_2
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_20
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_21
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_22
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_23
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_24
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_25
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_26
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_27
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_28
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_29
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_3
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_4
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_5
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_6
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_7
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_8
Copying file: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/hr.db/departments/000000_0_copy_9
OK
Time taken: 3.823 seconds
[raj_ops@sandbox ~]$ hadoop fs -du -h /tmp/expimp/*
1.4 K /tmp/expimp/departments/_metadata
709 /tmp/expimp/departments/data
hive>
> import table hr2.departments from 'hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments';
Copying data from hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_1
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_10
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_11
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_12
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_13
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_14
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_15
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_16
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_17
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_18
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_19
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_2
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_20
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_21
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_22
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_23
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_24
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_25
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_26
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_27
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_28
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_29
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_3
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_4
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_5
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_6
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_7
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_8
Copying file: hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments/data/000000_0_copy_9
Loading data to table hr2.departments
OK
Time taken: 4.487 seconds
hive>
>
> use hr2;
OK
Time taken: 0.241 seconds
hive> show tables;
OK
departments
Time taken: 0.303 seconds, Fetched: 1 row(s)
hive> select * from departments;
OK
10 Administration 200 1700
20 Marketing 201 1800
110 Accounting 205 1700
120 Treasury NULL 1700
130 Corporate Tax NULL 1700
140 Control And Credit NULL 1700
150 Shareholder Services NULL 1700
160 Benefits NULL 1700
170 Manufacturing NULL 1700
180 Construction NULL 1700
190 Contracting NULL 1700
200 Operations NULL 1700
30 Purchasing 114 1700
210 IT Support NULL 1700
220 NOC NULL 1700
230 IT Helpdesk NULL 1700
240 Government Sales NULL 1700
250 Retail Sales NULL 1700
260 Recruiting NULL 1700
270 Payroll NULL 1700
10 Administration 200 1700
50 Shipping 121 1500
50 Shipping 121 1500
40 Human Resources 203 2400
50 Shipping 121 1500
60 IT 103 1400
70 Public Relations 204 2700
80 Sales 145 2500
90 Executive 100 1700
100 Finance 108 1700
Time taken: 0.382 seconds, Fetched: 30 row(s)
hive> import table hr2.departments from 'hdfs://sandbox.hortonworks.com:8020/tmp/expimp/departments';
FAILED: SemanticException [Error 10119]: Table exists and contains data files
}}}
{{{
# create the export directory
# (execute on prod)
hadoop fs -mkdir /sdge/derived/restou/dc_crm_master_tg_from_prod
# export the table
# (execute on prod)
export table restou_raw.manual_master_tg_from_prod to 'hdfs://prod/sdge/derived/restou/dc_crm_master_tg_from_prod';
# transfer the table
# (execute on prod)
# distcp <srcurl> <desturl> copy file or directories recursively
hadoop distcp hdfs://prod/sdge/derived/restou/dc_crm_master_tg_from_prod hdfs://dev/sdge/derived/restou/dc_crm_master_tg_from_prod
# import the table
# (execute on dev)
import table restou_raw.manual_master_tg_from_prod from 'hdfs://dev/sdge/derived/restou/dc_crm_master_tg_from_prod';
}}}
{{{
# create the export directory
# (execute on prod)
hadoop fs -mkdir /sdge/tmp/dc_crm_master_tg_from_prod
# create the import directory
# (execute on dev)
hadoop fs -mkdir /sdge/derived/restou/dc_crm_master_tg_from_prod
# export the table
# (execute on prod)
export table restou_raw.manual_master_tg_from_prod to 'hdfs://prod/sdge/tmp/dc_crm_master_tg_from_prod';
# transfer the table
# (execute on prod)
# distcp <srcurl> <desturl> copy file or directories recursively
hadoop distcp hdfs://prod/sdge/tmp/dc_crm_master_tg_from_prod hdfs://dev/sdge/derived/restou/dc_crm_master_tg_from_prod
# import the table
# (execute on dev)
import table restou_raw.manual_master_tg_from_prod from 'hdfs://dev/sdge/derived/restou/dc_crm_master_tg_from_prod';
}}}
{{{
select a.process_date, sum(a.countrows)
from
(select CAST(process_date AS Date) as process_date ,count(*) as countrows from restou_derived.dc_brc_data_from_prod group by process_date) a
group by a.process_date
order by a.process_date;
select a.process_date, sum(a.countrows)
from
(select CAST(process_date AS Date) as process_date ,count(*) as countrows from restou_derived.dc_master_target_summary_from_prod group by process_date) a
group by a.process_date
order by a.process_date;
select a.process_date, sum(a.countrows)
from
(select CAST(process_date AS Date) as process_date ,count(*) as countrows from restou_derived.dc_pending_default_info_from_prod group by process_date) a
group by a.process_date
order by a.process_date;
select a.process_date, sum(a.countrows)
from
(select CAST(process_date AS Date) as process_date ,count(*) as countrows from restou_derived.dc_slg1_exceptions_from_prod group by process_date) a
group by a.process_date
order by a.process_date;
select a.processed_date, sum(a.countrows)
from
(select from_unixtime(unix_timestamp(processed_date, 'yyyyMMdd'),'yyyy-MM-dd') as processed_date ,count(*) as countrows from restou_raw.crm_service_contract_from_prod group by processed_date) a
group by a.processed_date
order by a.processed_date;
select a.processed_date, sum(a.countrows)
from
(select from_unixtime(unix_timestamp(processed_date, 'yyyyMMdd'),'yyyy-MM-dd') as processed_date ,count(*) as countrows from restou_raw.manual_master_tg_from_prod group by processed_date) a
group by a.processed_date
order by a.processed_date;
}}}
{{{
set hive.exec.max.dynamic.partitions=60000;
set hive.exec.max.dynamic.partitions.pernode=3000;
set mapred.reduce.tasks=16;
set hive.auto.convert.join=true;
set hive.auto.convert.join.noconditionaltask=true;
set hive.auto.convert.join.noconditionaltask.size=20971520;
set hive.auto.convert.join.use.nonstaged=true;
set hive.mapjoin.smalltable.filesize=30000000;
}}}
{{{
set hive.auto.convert.join=false;
}}}
{{{
cat /etc/krb5.conf             # check the kerberos client config (realm, KDC)
klist                          # list cached kerberos tickets for the current user
ls -ltr /etc/security/keytabs  # list the service keytab files
}}}
! references
https://www.safaribooksonline.com/library/view/hadoop-security/9781491900970/ch04.html <- introduction/concepts
https://stackoverflow.com/questions/39362326/connect-to-kerberised-hive-using-jdbc-from-remote-windows-system <- use MIT kinit
http://appcrawler.com/wordpress/2015/06/18/examples-of-connecting-to-kerberos-hive-in-jdbc/ <- kinit keytab file
https://db-blog.web.cern.ch/blog/prasanth-kothuri/2016-02-using-sql-developer-access-apache-hive-kerberos-authentication <- CERN
https://querysurge.zendesk.com/hc/en-us/articles/115001218863-Setting-Up-a-Hive-Connection-with-Kerberos-using-Apache-JDBC-Drivers-Windows- <- keytab
! other references
https://www.safaribooksonline.com/library/view/practical-hadoop-security/9781430265450/9781430265443_FM_1_Title.xhtml
http://www.cloudera.com/documentation/manager/5-0-x/Configuring-Hadoop-Security-with-Cloudera-Manager/cm5chs_get_princ_keytab_s4.html
https://www.cloudera.com/documentation/enterprise/5-5-x/topics/cdh_sg_kadmin_kerberos_keytab.html <- ktutil/klist
https://www.ibm.com/support/knowledgecenter/en/SSZUMP_7.1.2/management_sym/sym_kerberos_creating_principal_keytab.html <- create keytab/principal file
https://www.cloudera.com/documentation/cdh/5-0-x/CDH5-Security-Guide/cdh5sg_kadmin_kerberos_keytab.html <- keytab
https://www.slideshare.net/Hadoop_Summit/accelerating-query-processing-with-materialized-views-in-apache-hive
<<showtoc>>
! enable "run as end user", restart affected services
[img(90%,90%)[https://i.imgur.com/Rjl9fR2.png]]
[img(90%,90%)[https://i.imgur.com/nfIOikx.png]]
! default hive behavior on ranger
!! to get hive user to access hive, it must be added to principal
{{{
[root@node2 ~]# kadmin.local -q "addprinc hive"
Authenticating as principal admin/admin@EXAMPLE.COM with password.
WARNING: no policy specified for hive@EXAMPLE.COM; defaulting to no policy
Enter password for principal "hive@EXAMPLE.COM":
Re-enter password for principal "hive@EXAMPLE.COM":
Principal "hive@EXAMPLE.COM" created.
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]# sudo su - hive
Last login: Wed Jan 9 17:35:14 UTC 2019 on pts/0
[hive@node2 ~]$
[hive@node2 ~]$
[hive@node2 ~]$ kinit
Password for hive@EXAMPLE.COM:
[hive@node2 ~]$
[hive@node2 ~]$
[hive@node2 ~]$
[hive@node2 ~]$ hive
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
Logging initialized using configuration in file:/etc/hive/2.6.5.1050-37/0/hive-log4j.properties
hive>
>
> show databases;
OK
default
hr
Time taken: 2.279 seconds, Fetched: 2 row(s)
hive>
>
> create table test (id int);
OK
Time taken: 2.852 seconds
hive>
> select * from test;
OK
Time taken: 0.814 seconds
hive>
[hive@node2 scripts]$ cat beeline_withkrb2.sh
#!/bin/bash
# beeline with kerberos
HADOOP_OPTS="-Djavax.security.auth.useSubjectCredsOnly=false" beeline -u 'jdbc:hive2://localhost:10000/default;principal=hive/node2.example.com@EXAMPLE.COM;auth=kerberos'
[hive@node2 scripts]$
[hive@node2 scripts]$ ./beeline_withkrb2.sh
Connecting to jdbc:hive2://localhost:10000/default;principal=hive/node2.example.com@EXAMPLE.COM;auth=kerberos
Connected to: Apache Hive (version 1.2.1000.2.6.5.1050-37)
Driver: Hive JDBC (version 1.2.1000.2.6.5.1050-37)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1000.2.6.5.1050-37 by Apache Hive
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default> show databases;
+----------------+--+
| database_name |
+----------------+--+
| default |
| hr |
+----------------+--+
2 rows selected (1.413 seconds)
0: jdbc:hive2://localhost:10000/default> ^C^C[hive@node2 scripts]$
}}}
!! root will not have hive access
{{{
[root@node2 ~]# hive
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
Logging initialized using configuration in file:/etc/hive/2.6.5.1050-37/0/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1547023808827_0003 failed 2 times due to AM Container for appattempt_1547023808827_0003_000002 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://node2.example.com:8088/cluster/app/application_1547023808827_0003 Then click on links to logs of each attempt.
Diagnostics: Application application_1547023808827_0003 initialization failed (exitCode=255) with output: main : command provided 0
main : run as user is admin
main : requested yarn user is admin
User admin not found
Failing this attempt. Failing the application.
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:582)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1547023808827_0003 failed 2 times due to AM Container for appattempt_1547023808827_0003_000002 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://node2.example.com:8088/cluster/app/application_1547023808827_0003 Then click on links to logs of each attempt.
Diagnostics: Application application_1547023808827_0003 initialization failed (exitCode=255) with output: main : command provided 0
main : run as user is admin
main : requested yarn user is admin
User admin not found
Failing this attempt. Failing the application.
at org.apache.tez.client.TezClient.waitTillReady(TezClient.java:699)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:218)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:116)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:579)
... 8 more
}}}
!! other users will not have hive resources access even if they can manage to login
{{{
[root@node2 scripts]# cat beeline_withkrb.sh
#!/bin/bash
# beeline with kerberos
HADOOP_OPTS="-Djavax.security.auth.useSubjectCredsOnly=false" beeline -u 'jdbc:hive2://localhost:10000/default;principal=hive/node2.example.com@EXAMPLE.COM;auth=kerberos vagrant dummy'
[vagrant@node2 scripts]$ ./beeline_withkrb.sh
Connecting to jdbc:hive2://localhost:10000/default;principal=hive/node2.example.com@EXAMPLE.COM;auth=kerberos
Connected to: Apache Hive (version 1.2.1000.2.6.5.1050-37)
Driver: Hive JDBC (version 1.2.1000.2.6.5.1050-37)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1000.2.6.5.1050-37 by Apache Hive
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default> show databases;
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [vagrant] does not have [USE] privilege on [Unknown resource!!] (state=42000,code=40000)
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default> create table test (id int);
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [vagrant] does not have [CREATE] privilege on [default/test] (state=42000,code=40000)
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default> [vagrant@node2 scripts]$ ^C
}}}
! login on ranger admin and create new policy for vagrant user
* the above shows that vagrant can't access any hive resources
* below creates a policy so that the vagrant user has access only to the vagrant database
[img(90%,90%)[https://i.imgur.com/ILAy3K1.png]]
[img(90%,90%)[https://i.imgur.com/0dJMRVs.png]]
[img(90%,90%)[https://i.imgur.com/o7ZBgPb.png]]
[img(90%,90%)[https://i.imgur.com/kMGyFjY.png]]
!! vagrant user can only access the vagrant database and not others
{{{
[vagrant@node2 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: vagrant@EXAMPLE.COM
Valid starting Expires Service principal
01/09/2019 01:43:51 01/10/2019 01:43:51 krbtgt/EXAMPLE.COM@EXAMPLE.COM
[vagrant@node2 ~]$
[vagrant@node2 ~]$
[vagrant@node2 ~]$
[vagrant@node2 ~]$ cd /vagrant/scripts/
[vagrant@node2 scripts]$ ./beeline_withkrb.sh
Connecting to jdbc:hive2://localhost:10000/default;principal=hive/node2.example.com@EXAMPLE.COM;auth=kerberos
Connected to: Apache Hive (version 1.2.1000.2.6.5.1050-37)
Driver: Hive JDBC (version 1.2.1000.2.6.5.1050-37)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1000.2.6.5.1050-37 by Apache Hive
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default> show databases;
+----------------+--+
| database_name |
+----------------+--+
+----------------+--+
No rows selected (0.615 seconds)
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default> use hr;
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [vagrant] does not have [USE] privilege on [hr] (state=42000,code=40000)
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default> create database vagrant;
No rows affected (1.756 seconds)
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default> show databases;
+----------------+--+
| database_name |
+----------------+--+
| vagrant |
+----------------+--+
1 row selected (1.185 seconds)
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default> use vagrant;
No rows affected (1.537 seconds)
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default> create table test (id int);
No rows affected (1.028 seconds)
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default> select * from test;
+----------+--+
| test.id |
+----------+--+
+----------+--+
No rows selected (0.804 seconds)
0: jdbc:hive2://localhost:10000/default>
0: jdbc:hive2://localhost:10000/default> create database vagrant2;
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [vagrant] does not have [CREATE] privilege on [vagrant2] (state=42000,code=40000)
0: jdbc:hive2://localhost:10000/default>
}}}
!! but the hive user will have access to the vagrant database because it has more privileges; you can lock out the hive user through policies and kerberos
{{{
[hive@node1 ~]$ kinit
Password for hive@EXAMPLE.COM:
[hive@node1 ~]$
[hive@node1 ~]$
[hive@node1 ~]$ hive
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
Logging initialized using configuration in file:/etc/hive/2.6.5.1050-37/0/hive-log4j.properties
hive>
>
>
> show databases;
OK
default
hr
vagrant
Time taken: 2.56 seconds, Fetched: 3 row(s)
hive>
>
>
> use vagrant;
OK
Time taken: 1.876 seconds
hive>
>
>
> show tables;
OK
test
Time taken: 0.492 seconds, Fetched: 1 row(s)
hive>
>
> select * from test;
OK
Time taken: 1.141 seconds
hive>
>
}}}
https://github.com/dharmeshkakadia/hive-faq
https://github.com/zhuguangbin/HiveHistoryParser
https://github.com/zhuguangbin/hadoop-dashboard
https://github.com/t3rmin4t0r/lipwig
https://github.com/dharmeshkakadia/PlanSummarizer
https://github.com/hdinsight/HivePerformanceAutomation
https://github.com/t3rmin4t0r/notes/wiki/Perf-sampling-Tez
https://github.com/t3rmin4t0r/notes/wiki/Hive-Perf-Tools
{{{
##1 – Lipwig
https://github.com/t3rmin4t0r/lipwig
This takes a hive query plan and gives something like this
http://people.apache.org/~gopalv/q27-plan.svg
##2 - Swimlanes
https://github.com/apache/tez/tree/master/tez-tools/swimlanes
That gives you something like this to look at
http://people.apache.org/~gopalv/query27.svg
##3 – Tez analyzers
https://github.com/apache/tez/tree/master/tez-tools/analyzers
This has a short-form version of the full swimlane, tracking merely the critical path of a slow job
https://issues.apache.org/jira/secure/attachment/12751186/criticalPath.jpg
And several other analyzers like the total # of unique keys per-reducers vs total # of rows (for skew detection) etc.
This would be an easy place to contribute more analyzers for the Azure scenario (more counters, automatic analysis etc).
##4 – Tez log parser for PIG
https://github.com/apache/tez/tree/master/tez-tools/tez-tfile-parser
This gives raw line-by-line access to the logs aggregated by Tez (or well, any YARN application)
We use it to debug network hiccups on very large clusters.
http://www.slideshare.net/t3rmin4t0r/tez8-ui-walkthrough/23
##5 – yarnlocaltop
https://github.com/cartershanklin/structor/blob/master/modules/yarnlocaltop_client/src/yarnlocaltop/INSTALL
Carter's toolkit for pulling JVM info out for Tez workloads. This gives per-function sampling with a very high error bar.
##6 - perf–map-agent
Multi-day explanation needed :)
Read the "Java" section for http://www.brendangregg.com/perf.html
I have got a private fork which I haven't finished contributing back
https://github.com/t3rmin4t0r/perf-map-agent/tree/jitdump
This is how I locate SIMD misses in the Hive vectorization and bugs like this one.
https://issues.apache.org/jira/browse/HADOOP-10694
JDK8u65+ has Backtrace handling for perf top, if you use -XX:+PreserveFramePointer.
That gives you a stacked area chart that neatly transitions from the Jvm -> JNI + OS functions.
http://www.brendangregg.com/FlameGraphs/cpu-mixedmode-vertx.svg
This is useful if you're working on the IO latency sensitive apps like the NameNode or HBase.
##7 – jmh + profasm
Perf top isn't a great idea for modify+test loops, because it needs a human interactive session to find hotspots & sometimes a full deploy for each change.
JMH is the thing for microbenchmarking
On linux, assuming you've installed Kenai's hsdis .so files, you can run your microbenchmarks with
java -jar target/benchmarks.jar -prof perfasm
You might get something like this (in this case, it shows the independent byte reads wasting 64 bit registers & going data-dependent on the rdx during the addition).
││ ; - org.notmysock.benchmark.generated.DoubleBench_testIntLongVersion_jmhTest::testIntLongVersion_avgt_jmhStub@1)
4.54% 4.53% ││ 0x00007fd3ad47daa8: movzbq 0x17(%r12,%r10,8),%r11
0.01% 0.01% ││ 0x00007fd3ad47daae: movzbq 0x10(%r12,%r10,8),%rdx
0.58% 0.59% ││ 0x00007fd3ad47dab4: movzbq 0x16(%r12,%r10,8),%r8
0.12% 0.13% ││ 0x00007fd3ad47daba: movzbl 0x11(%r12,%r10,8),%ecx
5.23% 4.92% ││ 0x00007fd3ad47dac0: movzbq 0x15(%r12,%r10,8),%r9
0.02% 0.03% ││ 0x00007fd3ad47dac6: movzbq 0x14(%r12,%r10,8),%rbx
1.83% 2.04% ││ 0x00007fd3ad47dacc: movzbq 0x13(%r12,%r10,8),%rdi
0.32% 0.28% ││ 0x00007fd3ad47dad2: movzbl 0x12(%r12,%r10,8),%r10d
4.34% 4.57% ││ 0x00007fd3ad47dad8: shl $0x38,%r11
0.02% 0.03% ││ 0x00007fd3ad47dadc: shl $0x10,%r10d
4.09% 4.09% ││ 0x00007fd3ad47dae0: shl $0x18,%rdi
0.54% 0.44% ││ 0x00007fd3ad47dae4: movslq %r10d,%r10
4.56% 2.81% ││ 0x00007fd3ad47dae7: shl $0x20,%rbx
0.19% 0.13% ││ 0x00007fd3ad47daeb: shl $0x28,%r9
0.53% 0.37% ││ 0x00007fd3ad47daef: shl $0x8,%ecx
0.52% 0.33% ││ 0x00007fd3ad47daf2: shl $0x30,%r8
3.93% 1.97% ││ 0x00007fd3ad47daf6: movslq %ecx,%rcx
0.48% 0.26% ││ 0x00007fd3ad47daf9: add %rcx,%rdx
1.58% 0.96% ││ 0x00007fd3ad47dafc: add %r10,%rdx
1.98% 1.12% ││ 0x00007fd3ad47daff: add %rdi,%rdx
4.24% 2.61% ││ 0x00007fd3ad47db02: add %rbx,%rdx
4.33% 2.95% ││ 0x00007fd3ad47db05: add %r9,%rdx
4.81% 4.38% ││ 0x00007fd3ad47db08: add %r8,%rdx
4.73% 5.07% ││ 0x00007fd3ad47db0b: add %r11,%rdx ;*ladd
││ ; - org.notmysock.benchmark.DoubleBench::testIntLongVersion@117 (line 71)
}}}
* START HERE https://www.slideshare.net/HadoopSummit/how-to-understand-and-analyze-apache-hive-query-execution-plan-for-performance-debugging
* see Weidong's deck on Hive and Spark Tuning
* hive official doc performance tuning https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_performance_tuning/content/ch_hive_hi_perf_best_practices.html
* itversity - Data Warehouse using Hadoop eco system - 01 Hive Performance Tuning - Split size and no of reducers https://www.youtube.com/watch?v=WrqJOcgRhIM&list=PLf0swTFhTI8q5X0ZBeUaK1yl_TP-tJInC&index=23 , Data Warehouse using Hadoop eco system - 04 Hive Performance Tuning - Strategies https://www.youtube.com/watch?v=mYaKmnO94EE , https://labs.itversity.com/learn
[img[ https://i.imgur.com/jAgnFbR.png ]]
* hadoop summit - optimizing hive queries https://www.youtube.com/watch?v=pftoccjZFOM
* @@the reference book@@ - Programming Hive: Data Warehouse and Query Language for Hadoop https://www.amazon.com/Programming-Hive-Warehouse-Language-Hadoop/dp/1449319335/ref=sr_1_fkmr1_1?ie=UTF8&qid=1516213107&sr=8-1-fkmr1&keywords=hive+sql+tuning , https://www.safaribooksonline.com/library/view/programming-hive/9781449326944/
* hive patterns and anti-patterns https://www.safaribooksonline.com/library/view/learning-apache-hadoop/9781771372374/video183109.html , https://www.safaribooksonline.com/learning-paths/learning-path-hadoop/9781491987285/9781771372374-video183109
* avoiding big data anti-patterns https://www.slideshare.net/grepalex/avoiding-big-data-antipatterns
! course
udemy - Talend For Big Data Integration Course : Beginner to Expert - https://www.udemy.com/talend-for-big-data/learn/v4/overview
udemy - An Advanced Guide for Apache Hive: A Hadoop Ecosystem https://www.udemy.com/an-advanced-guide-for-apache-hive-a-hadoop-ecosystem-tool/learn/v4/content
udemy - Hive to ADVANCE Hive (Real time usage) :Hadoop querying tool - https://www.udemy.com/hadoop-querying-tool-hive-to-advance-hivereal-time-usage/learn/v4/content
pluralsight - Writing Complex Analytical Queries with Hive https://app.pluralsight.com/library/courses/hive-complex-analytical-queries/table-of-contents
pluralsight - Getting Started with Hive for Relational Database Developers https://app.pluralsight.com/library/courses/hive-relational-database-developers-getting-started/table-of-contents
lynda - Analyzing Big Data with Hive https://www.lynda.com/Hive-tutorials/Analyzing-Big-Data-Hive/534413-2.html
lynda - Hadoop for Data Science Tips, Tricks, & Techniques (WITH AS, data structures, beeline, python, etc.) https://www.lynda.com/Hadoop-tutorials/Hadoop-Data-Science-Tips-Tricks-Techniques/585009-2.html
! whitepapers
Apache Hive Performance Tuning https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_hive-performance-tuning/bk_hive-performance-tuning.pdf
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
https://issues.apache.org/jira/browse/HIVE-13290
<<<
Constraints
Version information
As of Hive 2.1.0 (HIVE-13290).
Hive includes support for non-validated primary and foreign key constraints. Some SQL tools generate more efficient queries when constraints are present. Since these constraints are not validated, an upstream system needs to ensure data integrity before it is loaded into Hive.
Example:
create table pk(id1 integer, id2 integer,
primary key(id1, id2) disable novalidate);
create table fk(id1 integer, id2 integer,
constraint c1 foreign key(id1, id2) references pk(id2, id1) disable novalidate);
<<<
Unique Keys and Normalization https://www.safaribooksonline.com/library/view/programming-hive/9781449326944/ch09.html
https://community.hortonworks.com/questions/13368/how-can-i-set-primary-key-and-reference-foreign-ke.html
https://issues.apache.org/jira/browse/HIVE-6905 Implement Auto increment, primary-foreign Key, not null constraints and default value in Hive Table columns
https://cwiki.apache.org/confluence/display/Hive/Column+Statistics+in+Hive Column Statistics in Hive
https://community.hortonworks.com/questions/22321/can-i-create-primary-key-in-hive-table-i-saw-in-tb.html?childToView=22307#answer-22307
https://hortonworks.com/blog/adding-acid-to-apache-hive/
https://stackoverflow.com/questions/28537042/unable-to-create-hive-table-with-primary-key
https://stackoverflow.com/questions/42801102/how-different-is-this-to-creating-a-primary-key-on-a-column-in-hive
<<<
Hive supports non-validated primary and foreign key constraints.
PRIMARY KEY in TBLPROPERTIES is a metadata reference to preserve column significance. It does not apply any constraint on that column. This can be used as a reference from a design perspective.
Q: could you explain this bit: "metadata reference to preserve column significance"?
A: when the table replicates an RDBMS table, maintaining the details of the constraint in big data helps future design. Maybe a future ACID implementation of Hive will make use of this.
<<<
https://community.hortonworks.com/questions/81756/how-can-you-make-the-row-id-the-primary-key-in-a-h.html
https://community.hortonworks.com/questions/22321/can-i-create-primary-key-in-hive-table-i-saw-in-tb.html
https://stackoverflow.com/questions/46699248/hive-and-primary-key-constraint
https://stackoverflow.com/questions/9696369/hive-foreign-keys
http://grokbase.com/t/hive/user/161arb55tc/foreign-keys-in-hive
http://discuss.itversity.com/t/can-we-implement-concept-of-primary-key-and-foreign-in-hive/974
https://www.bountysource.com/issues/1731195-implement-auto-increment-primary-foreign-key-not-null-constraints-and-default-value-in-hive-table-columns
! kimball DW pk best practice
https://www.safaribooksonline.com/library/view/the-data-warehouse/9781118530801/9781118530801c02.xhtml
https://dwbi1.wordpress.com/2010/02/24/primary-key-and-clustered-index-on-the-fact-table/
https://www.kimballgroup.com/2006/07/design-tip-81-fact-table-surrogate-key/
https://stackoverflow.com/questions/337503/whats-the-best-practice-for-primary-keys-in-tables
http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-dw-best-practices-for-implem-192694.pdf
https://sqldusty.com/2016/05/16/10-sql-server-data-warehouse-design-best-practices-to-follow-part-1/
http://www.agiledata.org/essays/keys.html
http://www.martinsights.com/?p=1065
file:///Users/kristofferson.a.arao/Downloads/DB2BP_Warehouse_Design_0912.pdf
! other implementations
https://community.hortonworks.com/idea/33922/lazy-delayed-primary-key-check-in-hive.html
https://community.hortonworks.com/questions/81756/how-can-you-make-the-row-id-the-primary-key-in-a-h.html
<<<
I found another way of doing this:
1. I first loaded my data set in HDFS. The data set contained the following columns: rwid, ctrname, clndrdate and clndrmonth.
Note that column rwid had no values.
2. Then I created an external table that maps to this data set in HDFS
CREATE EXTERNAL TABLE IF NOT EXISTS calendar(rwid int, ctrname string, clndrdate DATE, clndrmonth string ) COMMENT 'Calendar for Non Business Days' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE location '<location of my file in hdfs>';
3. I created an ORC
CREATE TABLE IF NOT EXISTS calendar_nbd(rwid int, ctrname string, clndrdate DATE, clndrmonth string ) COMMENT 'Calendar for Non Business Days' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS ORC;
4. The last step is most important. I used row_number() over() in the insert overwrite query. This automatically updated the rwid column with the row number.
insert overwrite table calendar_nbd select row_number() over () as rwid, ctrname,clndrdate, clndrmonth from calendarnonbusdays;
<<<
! test case
https://www.techonthenet.com/oracle/primary_keys.php
https://livesql.oracle.com/apex/livesql/file/content_O5AEB2HE08PYEPTGCFLZU9YCV.html
https://dba.stackexchange.com/questions/19767/primary-key-foreign-key-relationship-between-the-department-and-employee-tables
http://www.java2s.com/Code/Oracle/Constraints/AddForeignPrimaryKey.htm
{{{
yarn application -list -appStates ALL
18/02/09 11:32:57 INFO impl.TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
18/02/09 11:32:58 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
18/02/09 11:32:58 INFO client.AHSProxy: Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
Total number of applications (application-types: [] and states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED]):115
Application-Id Application-Name Application-Type User Queue State Final-State Progress Tracking-URL
application_1518033396548_0104 HIVE-525a34f4-0e74-4835-9410-c6084d9f35d5 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0105 HIVE-876aaeb0-2414-427a-a302-e9fc2e0d2ff7 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0102 HIVE-9a4bcc62-8f15-472b-9359-1087f2d4bb41 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0103 HIVE-ecd63290-5ff6-4824-a799-27fd0ae81977 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0108 HIVE-262ba513-5098-4275-b392-76f0a3db03d3 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0109 HIVE-86d707f4-ad9a-45ed-aec1-0628f0b41faf TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0106 HIVE-0f1fb3be-2e2d-4afe-8d07-363b30348708 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0107 HIVE-43b106ab-facb-40b8-827e-9b6c3e7ae5a4 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0112 HIVE-f713a53b-b249-4031-9a11-10c40b995cc6 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0113 HIVE-9f4bf0c1-e4dc-4ed4-bb2c-88f9ed7cd01c TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0110 HIVE-00e727c7-fc28-4374-8c10-e8bb7ebca78a TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0111 HIVE-d899e95b-3e5f-45f3-a5fa-f00a03bc5091 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0114 HIVE-1b3f5ca3-8fdf-4046-9194-93f5f2e0c726 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0115 HIVE-ea34018a-1432-43ca-b296-622062df06b0 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0088 HIVE-83f5c9a8-1b01-4cbf-930e-68112a227d8e TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0089 HIVE-e6eb5158-e5f9-442b-a541-eddf2192f2bb TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0086 HIVE-54acaa7c-a85d-4666-abfd-291176a08a7e TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0087 HIVE-0b5e5528-f17c-410f-8150-6ff7cea618a9 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0092 HIVE-fdf829c4-6dd1-4216-8479-ff010f6c3811 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0093 HIVE-052bbc04-b1da-4201-82c0-a9144c58c05c TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0090 HIVE-6da421dc-5fdd-436f-9141-273276c4a155 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0091 HIVE-beb3bbb7-3ac1-4a18-bdb7-c5407eaae00d TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0096 HIVE-ab012f46-f2b0-459e-92fc-4b2f3ae9189a TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0097 HIVE-439d0063-c6b6-40ff-8311-ef910655eeb3 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0094 HIVE-cccf45b3-fbfb-4eac-8898-67c65a7c5b1f TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0095 HIVE-5449b5a4-51bf-405b-a2eb-57a7748de889 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0100 HIVE-6c33738a-0337-46e8-89e3-87d21a54fd7c TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0101 HIVE-23fcaf10-b67a-4f09-a9c3-6907524f5117 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0098 HIVE-fc81d276-f4cd-49e8-8950-07cdefbcda6f TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0099 HIVE-014d38b5-8d7c-4468-8e68-18abb9c19273 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0072 HIVE-c7c62260-3268-4819-8987-5fb6d55f6fc9 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0073 HIVE-c09824e9-0445-4233-8ef6-3aa47d2bde68 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0070 HIVE-2584eb0c-d725-465d-8765-7f7877a3c6ad TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0071 HIVE-dc7c67a2-6991-48ec-b1be-1e4bc4a90814 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0076 HIVE-11bfd25e-aa03-4564-ab53-a406a8c98acc TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0077 HIVE-ae0c1fc6-6e4f-4a97-b934-77d3528e0881 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0074 HIVE-8a61e18d-36bd-4fc8-802f-cef51fc100c7 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0075 HIVE-a512d254-f1dd-441a-8f79-ddffb3ceffd9 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0080 HIVE-f9554014-5665-4e29-9017-f848598d69ea TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0081 HIVE-085b1385-7182-4da2-ac70-ab9b1f635b2c TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0078 HIVE-267748c2-dcfb-4a26-8028-36f709394a9f TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0079 HIVE-88f0773a-2e36-4070-937c-31eef5335d74 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0084 HIVE-c1037e65-282a-4cd6-9937-6b552d1733c9 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0085 HIVE-665cd8d5-39d3-4c0e-8c2c-102a963b0250 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0082 HIVE-34c4184d-d3c4-4bc3-8db2-a48b2bcbf20f TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0083 HIVE-5e507c41-c1e0-4114-a5b5-bece1eae0c22 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0056 HIVE-6bd7b366-7c46-40af-b4be-8067df42aef2 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0057 HIVE-4bd4f459-2375-48af-bf49-695c7f79e883 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0054 HIVE-ebe17540-90a0-4f0c-aa8e-07eb74b1bdc0 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0055 HIVE-ca911ed7-2860-4fc3-97d0-898c06be84a7 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0060 HIVE-ce17e975-5edc-428b-b580-c28b5f87969c TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0061 HIVE-17038cc9-dcfd-4a9f-948f-1a2454bcd66a TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0058 HIVE-e1a7e639-0e09-4c88-a03f-c6d1e6023ca4 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0059 HIVE-d63a99a1-23a1-4980-9940-ffefe86fbb0f TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0064 HIVE-a3f6f6e9-ce76-4786-ba5c-ccbce298b5fe TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0065 HIVE-3fa5d0ab-30aa-4324-8c97-1b3f7b392e5d TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0062 HIVE-55eb069b-9cdd-4e16-b097-47009caaeea1 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0063 HIVE-52fdd475-34ff-4f3b-80b8-a9795578354d TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0068 HIVE-edc48ebd-d984-4628-b6a9-310762052b08 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0069 HIVE-f5502257-07c2-40cd-a4ef-cce42a304863 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0066 HIVE-f29e1042-77ea-4602-afc6-3257b7d2e4cd TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0067 HIVE-71dff5d0-8956-49a6-a787-a27abb857274 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0040 HIVE-d8ed76b4-17dd-4387-af96-0b722eac615e TEZ root default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0041 HIVE-faadd9ab-d0db-45fb-aa19-30a3e9f01ec8 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0038 HIVE-749ae0be-ad4c-4abf-9ecd-0961acf7228d TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0039 HIVE-2ceb8774-9767-45a2-b2f2-9bbcee53f6f7 TEZ root default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0044 HIVE-2263669d-00f0-4c45-9b40-1dffb698102f TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0045 HIVE-5d02b7c7-f8db-4784-aa25-c92ce6c7019f TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0042 HIVE-58139ae0-14f4-483b-b5b5-6cab9f477078 TEZ root default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0043 HIVE-d5aab4fc-b79b-45d2-9776-f76c2d0687ab TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0048 HIVE-8bd84c64-bd18-4732-8430-928ddc97c4f2 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0049 HIVE-aea5b343-46e5-4b9b-b532-a10d7b7287ea TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0046 HIVE-bccc7ecc-627f-4858-90bc-ff4442f60aa7 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0047 HIVE-dfb7ab32-2994-45f4-8300-afe4cbe3768e TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0052 HIVE-b80ec6b0-1852-464b-b925-ab6684f3bd88 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0053 HIVE-0e19b3f5-5c35-4fd5-a74c-a4f441f17e9a TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0050 HIVE-4f1ff7d9-6a9b-4550-a6a2-b9c8b46e76a5 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0051 HIVE-730ff159-d62c-49ad-b811-e7e9611953e9 TEZ raj_ops default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0024 HIVE-76964d2c-baba-4888-aa50-99960f787111 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0025 HIVE-f978e598-c517-48a6-8d0b-6519dc16e8c1 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0022 HIVE-842b2131-d8ca-4195-bb95-6f0a354389a4 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0023 HIVE-bfb1da58-6bc4-4307-be54-461657d6af03 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0028 HIVE-1e53acb0-2498-4fbe-8a4e-588cbeaf3fb9 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0029 HIVE-366eb523-2dbb-4554-b569-f7d335f57a4b TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0026 HIVE-4a7c0137-6e8f-41e9-83c3-1aee1c8c4f16 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0027 HIVE-c4cf19f4-63a4-4d28-9bc2-ee35c3eaca09 TEZ root default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0032 HIVE-c4cf19f4-63a4-4d28-9bc2-ee35c3eaca09 TEZ root default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0033 HIVE-abdd99e3-6024-41ae-adec-4dfd9d0353fd TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0030 HIVE-4618646e-ecc5-423b-bffb-a7b192bc4d5a TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0031 HIVE-8abf9635-3c86-4cab-876b-4b4de71a2358 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0036 HIVE-fa699973-2db1-44fe-85c1-4277143eab11 TEZ root default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0037 HIVE-f68a4a88-2e53-4beb-82fa-c512794fccf8 TEZ root default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0034 HIVE-c4566d3c-e136-42d1-a833-08acc29b8648 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0035 HIVE-8a08c632-e446-457d-a420-a06d1a4b827f TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0008 HIVE-cc66a9ec-d512-4829-9838-f9c4fe97823a TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0009 HIVE-694f7d31-2436-4916-8ee4-bb221b212066 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0006 HIVE-6075ade9-25a6-45af-823f-d536bf6e7d80 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0007 HIVE-999e5bb9-4d4d-4efd-8334-6ef6e003d168 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0012 HIVE-7e3e5881-b6cb-4c15-8e27-63a6956c8e2a TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0013 HIVE-90957090-78b6-478e-874e-7e38b4208ef4 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0010 HIVE-47ab6557-758b-4278-9d58-07aee860da24 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0011 HIVE-c4cf19f4-63a4-4d28-9bc2-ee35c3eaca09 TEZ root default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0016 HIVE-c4cf19f4-63a4-4d28-9bc2-ee35c3eaca09 TEZ root default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0017 HIVE-f46a1805-e8fc-4c27-b5c5-ecde65a5b99f TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0014 HIVE-46b9172f-979e-42f1-b1cc-298d895cf9a2 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0015 HIVE-08949f50-3f87-42a9-9191-93789fc9e5b2 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0020 HIVE-7bda99f2-e832-43e5-ba1a-e4bfcf815e72 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0021 HIVE-2534e004-ad41-486c-b2cb-5bcece9234a8 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0018 HIVE-fcca89f0-ec4f-4a06-90ff-c590e0f1c615 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0019 HIVE-a7ed5cef-a8d2-4180-94ab-fdf8e99e1aab TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0001 HIVE-03a84ff5-74f3-4141-a0c5-f52129fb285e TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0004 HIVE-a85039f6-0018-41d4-b361-09c6db629766 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0005 HIVE-7d1c062c-6bdc-45f4-bc04-8f546547b25a TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0002 HIVE-89735e71-8095-4ac3-a846-616b383296f1 TEZ hive default FINISHED SUCCEEDED 100% N/A
application_1518033396548_0003 HIVE-107f09e1-e575-469f-86b8-c39305742f97 TEZ hive default FINISHED SUCCEEDED 100% N/A
[raj_ops@sandbox log]$
[raj_ops@sandbox log]$
[raj_ops@sandbox log]$ yarn logs -applicationId application_1518033396548_0115
}}}
https://community.hortonworks.com/articles/47856/using-toad-for-hadoop-with-hdp-24.html
https://weidongzhou.wordpress.com/2016/04/14/use-toad-to-access-hadoop-ecosystem/
https://www.xplenty.com/blog/hive-vs-hbase/
<<<
Apache Hive is a data warehouse infrastructure built on top of Hadoop. It allows for querying data stored on HDFS for analysis via HQL, an SQL-like language that gets translated to MapReduce jobs. Despite providing SQL functionality, Hive does not provide interactive querying yet - it only runs batch processes on Hadoop.
Apache HBase is a NoSQL key/value store which runs on top of HDFS. Unlike Hive, HBase operations run in real-time on its database rather than MapReduce jobs. HBase is partitioned to tables, and tables are further split into column families. Column families, which must be declared in the schema, group together a certain set of columns (columns don’t require schema definition). For example, the "message" column family may include the columns: "to", "from", "date", "subject", and "body". Each key/value pair in HBase is defined as a cell, and each key consists of row-key, column family, column, and time-stamp. A row in HBase is a grouping of key/value mappings identified by the row-key. HBase enjoys Hadoop’s infrastructure and scales horizontally using off the shelf servers.
<<<
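To make the column-family model above concrete, here is a minimal HBase shell sketch; the table, family, and column names are made up for illustration:
{{{
# hypothetical HBase shell session illustrating row-key / column family / column / timestamp
create 'messages', 'message'                          # table with one column family
put 'messages', 'row1', 'message:to', 'alice@example.com'
put 'messages', 'row1', 'message:subject', 'hello'
get 'messages', 'row1'      # each cell comes back keyed by family:column plus a timestamp
scan 'messages'             # rows are grouped and sorted by row-key
}}}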
Hive vs Impala - Comparing Apache Hive vs Apache Impala
https://www.youtube.com/watch?v=vmiWOlcnFW8&list=PLWM1VK4fv-nfEAeDNDNaJUeOe3AkhA4Wz&index=2
SQL Differences Between Impala and Hive
https://www.cloudera.com/documentation/enterprise/5-5-x/topics/impala_langref_unsupported.html
https://data-flair.training/blogs/impala-vs-hive/
https://hortonworks.com/blog/apache-hive-vs-apache-impala-query-performance-comparison/
https://www.quora.com/What-is-the-difference-between-Apache-HIVE-and-Impala
! hive
http://bigdatariding.blogspot.com/2014/02/hive-complex-data-types-with-examples.html
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types
{{{
arrays: ARRAY<data_type> (Note: negative values and non-constant expressions are allowed as of Hive 0.14.)
maps: MAP<primitive_type, data_type> (Note: negative values and non-constant expressions are allowed as of Hive 0.14.)
structs: STRUCT<col_name : data_type [COMMENT col_comment], ...>
union: UNIONTYPE<data_type, data_type, ...> (Note: Only available starting with Hive 0.7.0.)
}}}
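A quick sketch of these types in a Hive DDL; the table and columns are made up for illustration:
{{{
-- hypothetical table exercising ARRAY, MAP, and STRUCT
CREATE TABLE emp_complex (
  name    STRING,
  phones  ARRAY<STRING>,
  props   MAP<STRING,STRING>,
  address STRUCT<street:STRING, city:STRING, zip:STRING>
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  COLLECTION ITEMS TERMINATED BY '|'
  MAP KEYS TERMINATED BY ':';

-- arrays are indexed, maps are keyed, struct fields use dot notation
SELECT name, phones[0], props['dept'], address.city FROM emp_complex;
}}}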
! impala
https://impala.apache.org/docs/build/html/topics/impala_complex_types.html
https://hortonworks.com/blog/hive-cheat-sheet-for-sql-users/
http://xn--mns-ula.dk/TW/phpFiles/
! sch_mnth_p mortgage
look for the keyword "sch_mnth_p"
https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiDmaKpme3tAhWxuVkKHbn9AEYQFjACegQIAxAC&url=http%3A%2F%2Fwww200.kdsglobal.com%2F~kdsprod%2FCoreLogic_DataDictionary.xls&usg=AOvVaw3dOWC9_dgfRcu5QfzWXiax
http://mortgagepayofftaigai.blogspot.com/2017/08/
https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiDmaKpme3tAhWxuVkKHbn9AEYQFjABegQIBBAC&url=http%3A%2F%2Fwww200.kdsglobal.com%2F~kdsprod%2FAgencySFLL_DataDictionary.xls&usg=AOvVaw3Zu8atG9swoxM9NnugcPY4
https://www.kdsglobal.com/kds/home/techfin.html
https://www.kdsglobal.com/kds/share/downloadWeback.jsp?t=xlsx&f=Fed_holding_Summary_kds_ymd&dir=POD-homedoc
http://www200.kdsglobal.com/~kdsprod/
Jiaqi Yan github
https://github.com/jgraeb/AutoValetParking
https://github.com/tungminhphan/traffic-intersection
https://github.com/Yanjiaqi0328/traffic_intersection_observer
! how to create scheduled queries
* useful for creating a temp table or a daily pre-agg table
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91069855-e933d500-e603-11ea-99a4-2a4dd0fd7989.png]]
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91069853-e802a800-e603-11ea-9dc7-d1dc9f006a5a.png]]
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91069850-e6d17b00-e603-11ea-8cec-2754ab53bf2c.png]]
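The screenshots above show the UI flow; if you only need the same effect from the command line, a low-tech sketch is cron plus beeline (the host name, table, and query below are hypothetical):
{{{
# hypothetical crontab entry: rebuild a daily pre-agg table at 02:00
0 2 * * * beeline -u jdbc:hive2://hiveserver:10000/default \
  -e "INSERT OVERWRITE TABLE sales_daily_agg SELECT sale_date, SUM(amount) FROM sales GROUP BY sale_date" \
  >> /var/log/sales_daily_agg.log 2>&1
}}}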
Solaris 10 11/06: Migrating a Non-Global Zone to a Different Machine https://docs.oracle.com/cd/E18752_01/html/817-1592/gentextid-12492.html
Migrating a Non-Global Zone to a Different Machine https://docs.oracle.com/cd/E23824_01/html/821-1460/migrat.html
Moving Oracle Solaris 11 Zones between physical servers https://blogs.oracle.com/openomics/entry/solaris_zone_migration
! For app zones
it's easy
! For database zones
<<<
The article https://blogs.oracle.com/openomics/entry/solaris_zone_migration makes it look easy, and it is for the App zones, but it is not easy to migrate database zones on the SuperCluster.
Database zones in the DB Domains are configured by the Oracle SuperCluster setup utilities, and disk groups from the Cell Servers are assigned to the zones during the Exadata installation portion of the install. All of this happens when the SuperCluster config is applied. The only way I know to move zones and remain in a supported configuration is to reset the system and build new database zones.
You could maybe move databases from one database zone to another.
<<<
https://twitter.com/mlpowered/status/1151235526407553024
<<<
How to ship ML in practice:
1/ Write a simple rule based solution to cover 80% of use cases
2/ Write a simple ML algorithm to cover 95% of cases
3/ Write a filtering algorithm to route inputs to the correct method
4/ Add monitoring
5/ Detect drift
...
24/ Deep Learning
<<<
! the ultimate guide
https://www.lynda.com/Programming-Foundations-tutorials/Foundations-Programming-Open-Source-Licensing/439414-2.html
! tools
http://choosealicense.com/
http://addalicense.com/
http://ben.balter.com/talks/ , https://github.com/benbalter/licensee
https://developer.github.com/v3/licenses/
https://github.com/blog/1964-open-source-license-usage-on-github-com
full list of licenses available http://spdx.org/licenses/
https://tldrlegal.com/license/mit-license
https://tldrlegal.com/license/apache-license-2.0-(apache-2.0)
https://tldrlegal.com/license/gnu-lesser-general-public-license-v3-(lgpl-3)
http://ossperks.com/ , https://github.com/nikmd23/ossperks
http://www.perf-tooling.today/
! books
https://www.safaribooksonline.com/library/view/understanding-open-source/0596005814/
https://www.safaribooksonline.com/library/view/intellectual-property-and/9780596517960/ch10.html
https://www.safaribooksonline.com/library/view/oscon-2015-complete/9781491927991/part332.html <-- good stuff
Understanding Copyright: A Deeper Dive
http://www.lynda.com/Business-Business-Skills-tutorials/Understanding-Copyright-Deeper-Dive/365065-2.html?srchtrk=index:1%0Alinktypeid:2%0Aq:license%0Apage:1%0As:relevance%0Asa:true%0Aproducttypeid:2
! discussions
https://news.ycombinator.com/item?id=3402450
https://www.quora.com/Whats-the-different-between-Apache-v2-0-and-MIT-license
https://www.chef.io/blog/2009/08/11/why-we-chose-the-apache-license/ <-- good stuff
https://www.smashingmagazine.com/2010/03/a-short-guide-to-open-source-and-similar-licenses/
https://developer.jboss.org/thread/147636?_sscc=t
! creative commons
https://wiki.creativecommons.org/wiki/Microsoft_Office_Addin
https://www.microsoft.com/en-us/download/details.aspx?id=13303
https://creativecommons.org/choose/
https://creativecommons.org/licenses/by-nd/4.0/
https://creativecommons.org/2014/01/07/plaintext-versions-of-creative-commons-4-0-licenses/
https://creativecommons.org/2011/04/15/plaintext-versions-of-creative-commons-licenses-and-cc0/
http://programmers.stackexchange.com/questions/205167/does-using-creative-commons-no-derivatives-license-conflict-with-github-terms
https://github.com/github/choosealicense.com/issues/33
https://tldrlegal.com/license/creative-commons-attribution-noderivatives-4.0-international-(cc-by-nd-4.0)#summary
! CLA - contributor license agreement
https://cla.developers.google.com/about/google-individual
! promise on software patent
http://www.redhat.com/legal/patent_policy.html
! how to create your own OSS project
http://osszerotosixty.com/
http://getglimpse.com/
https://en.wikipedia.org/wiki/Outercurve_Foundation
https://github.com/dotnet/coreclr
! code attribution
https://www.reddit.com/r/programming/comments/40zlu1/attribution_will_be_required_when_using_code_from/
https://www.quora.com/How-do-I-properly-credit-an-original-codes-developer-for-her-open-source-contribution
http://stackoverflow.com/questions/873823/how-do-i-attribute-a-license-to-library-used-without-making-it-look-like-my-soft
https://developers.slashdot.org/story/16/01/15/1437231/use-code-from-stack-overflow-you-must-provide-attribution
semantic versioning http://semver.org/
<<<
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
<<<
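A few worked bumps to make the rules concrete (the version numbers are made-up examples):
{{{
1.4.2 -> 2.0.0    incompatible API change (MAJOR; MINOR and PATCH reset)
1.4.2 -> 1.5.0    new backwards-compatible feature (MINOR; PATCH resets)
1.4.2 -> 1.4.3    backwards-compatible bug fix (PATCH)
2.0.0-rc.1        pre-release label
2.0.0+build.42    build metadata
}}}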
<<<
how to add the following types of NFS mounts to each server (both app and database)
The mounts are specific to each server (exports on the ZFS has unique names).
<<<
! Below are the Linux version of the mounts that we need help converting to Solaris:
{{{
# ZFS mounts (for SAP) - Environment specific (by application name/environment)
10.10.10.10:/export/sapmntCBP /sapmnt/CBP nfs rw,bg,hard,rsize=32768,wsize=32768,timeo=10,intr,proto=tcp,noac,vers=3,suid,_netdev 0 0
10.10.10.10:/export/ifaceCBP/nonprod /iface/CBP nfs rw,bg,hard,rsize=32768,wsize=32768,timeo=10,intr,proto=tcp,noac,vers=3,suid,_netdev 0 0
10.10.10.10:/export/cbpci/usrsap /usr/sap nfs rw,bg,hard,rsize=32768,wsize=32768,timeo=10,intr,proto=tcp,noac,vers=3,suid,_netdev 0 0
10.10.10.10:/export/cbpci/oracle /oracle nfs rw,bg,hard,rsize=32768,wsize=32768,timeo=10,intr,proto=tcp,noac,vers=3,suid,_netdev 0 0
10.10.10.10:/export/usrsaptrans_ce_np /usr/sap/trans nfs rw,bg,hard,rsize=32768,wsize=32768,timeo=10,intr,proto=tcp,noac,vers=3,suid,_netdev 0 0
# All SAP servers - For install only (then unmount)
10.243.204.253:/export/linux /sw nfs rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
10.10.10.10:/export/SAP_media /SAP_media nfs rw,bg,hard,rsize=32768,wsize=32768,timeo=10,intr,proto=tcp,noac,vers=3,suid,_netdev 0 0 -
10.10.10.10:/export/Install /Install nfs rw,bg,hard,rsize=32768,wsize=32768,timeo=10,intr,proto=tcp,noac,vers=3,suid,_netdev 0 0
}}}
! Solaris version
{{{
Generally the vfstab entry would be like this example:
#device           device       mount        FS     fsck   mount     mount
#to mount         to fsck      point        type   pass   at boot   options
IP:/export/file - /stage nfs - yes rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3
The "suid" option is set by default, but you can use it or "nosuid" if it needs to be off.
There is no "_netdev" option; my understanding is that in Linux the "_" options are ignored by the mount utility but kept in the mount table as a flag that apps can read with the mount command.
I don't know that I would use timeo=10, but if it is working for you now, try it.
So your first entry below would look something like:
10.10.10.10:/export/sapmntCBP - /sapmnt/CBP nfs - yes rw,bg,hard,intr,rsize=32768,wsize=32768,timeo=10,proto=tcp,noac,vers=3,suid
}}}
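Before putting the entry in /etc/vfstab it may be worth test-mounting by hand; a sketch reusing the options and paths from the example above:
{{{
# test the mount interactively first (Solaris syntax: mount -F nfs)
mkdir -p /sapmnt/CBP
mount -F nfs -o rw,bg,hard,intr,rsize=32768,wsize=32768,timeo=10,proto=tcp,noac,vers=3,suid \
  10.10.10.10:/export/sapmntCBP /sapmnt/CBP
umount /sapmnt/CBP    # then add the vfstab entry for mount-at-boot
}}}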
! other example
{{{
# Solaris DB exports
#10.10.14.10:/export/orainst - /oracleexp nfs - yes rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,suid,vers=3
# Linux DB exports (used in ER2 for BOE (PBO, SBO and CBO)
10.10.14.10:/export/orainstx86 - /oracleexp nfs - yes rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,suid,vers=3
}}}
https://docs.oracle.com/cd/E19044-01/sol.containers/817-1592/z.inst.task-13/index.html
{{{
zoneadm -z <zone name> boot
}}}
{{{
root@ssc1p1db01:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
ssc1p1vm01 164G 62.7G 101G 38% 1.00x ONLINE -
ssc1p1vm02 164G 60.9G 103G 37% 1.00x ONLINE -
ssc1p1vm03 164G 61.8G 102G 37% 1.00x ONLINE -
rpool 556G 277G 279G 49% 1.00x ONLINE -
root@ssc1p1db01:~# zpool status rpool
pool: rpool
state: ONLINE
scan: resilvered 276G in 26m30s with 0 errors on Tue Jan 31 23:15:52 2017
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c0t5000CCA02D021FFCd0s0 ONLINE 0 0 0
c0t5000CCA02D0212ACd0s0 ONLINE 0 0 0
root@ssc1p1db01:~# zpool status ssc1p1vm01
pool: ssc1p1vm01
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
ssc1p1vm01 ONLINE 0 0 0
c0t600144F0D6E5A8510000563790A30003d0 ONLINE 0 0 0
}}}
{{{
sh -x collect.ldom
uname -a
pkg info entire
#ldm ls
#ldm ls-bindings primary
#ldm ls-bindings er1t101
#ldm ls-spconfig
#ldm ls-io
df -h
zfs list
swap -l
ipadm
dladm
dladm show-link
dladm show-phys -L
dladm show-vlan
cat /etc/vfstab
echo | format
beadm list
pkg publisher
route -p show
netstat -rn
}}}
{{{
To configure a zone you have to use zonecfg -z zone_name and install it first before booting it up.
}}}
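A minimal sketch of that configure/install/boot sequence (the zone name and zonepath are made-up examples):
{{{
zonecfg -z appzone1
zonecfg:appzone1> create
zonecfg:appzone1> set zonepath=/zones/appzone1
zonecfg:appzone1> set autoboot=true
zonecfg:appzone1> verify
zonecfg:appzone1> commit
zonecfg:appzone1> exit
zoneadm -z appzone1 install
zoneadm -z appzone1 boot
zlogin -C appzone1     # console login for the first-boot configuration
}}}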
<<showtoc>>
<<<
The primary LDOM is usually called the Primary Domain, and it is also a Global Zone if you have zones in it.
You can restart it or shut it down from the console; if autoboot is set to on it will restart, otherwise it will wait at the OpenBoot OK prompt.
also see [[howto identify LDOM primary domain]]
<<<
! login to console using ILOM
{{{
You can access the console using the ILOM, you have 1 per compute node. So per rack you have two.
root@sct01-appclient0101:~# ssh sct01-node1-ilom.sky.local
Password:
Oracle(R) Integrated Lights Out Manager
Version 3.2.5.90.a r108602
Copyright (c) 2016, Oracle and/or its affiliates. All rights reserved.
Warning: HTTPS certificate is set to factory default.
Hostname: sct01-node1-ilom
-> show
/
Targets:
HOST
System
SP
Properties:
Commands:
cd
show
-> start /HOST/console
Are you sure you want to start /HOST/console (y/n)? y
Serial console started. To stop, type #.
sct01-appclient0101 console login:
sct01-appclient0101 console login: sct01-appclient0101 console login:
sct01-appclient0101 console login:
At this point, root password is required.
}}}
! login to console using telnet
{{{
From the primary domain, you can connect to the console of the other domains using telnet:
root@sct01-appclient0101:~# ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 256 511G 1.0% 1.0% 49d 5h 27m
ssccn1-dom1 active -n---- 5001 128 256G 0.1% 0.1% 35d 5h 48m
ssccn1-dom2 active -n--v- 5002 128 256G 0.0% 0.0% 5d 21h 55m
root@sct01-appclient0101:~# telnet localhost 5001
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connecting to console "ssccn1-dom1" in group "ssccn1-dom1" ....
Press ~? for control options ..
sct01-appclient0102 console login:
}}}
{{{
zlogin <zone name>
To log into zones you use the zlogin command, no need for root password.
To log into the zone console use zlogin -C <zone name>.
}}}
https://docs.oracle.com/cd/E19044-01/sol.containers/817-1592/z.inst.task-1/index.html
https://blogs.oracle.com/mandalika/entry/solaris_10_zone_creation_for
A Technical Overview of Oracle SuperCluster http://www.oracle.com/technetwork/server-storage/sun-sparc-enterprise/documentation/o13-045-sc-t5-8-arch-1982476.pdf
Oracle SuperCluster - An Example of Automating the Creating of Application Zones (Doc ID 2041460.1)
Implementing Application Zones on Oracle SuperCluster http://www.oracle.com/us/products/servers-storage/servers/sparc/supercluster/wp-app-zones-supercluster-2611960.pdf
Best Practices for Deploying Oracle Solaris Zones with Oracle Database 11g on SPARC SuperCluster http://www.oracle.com/technetwork/server-storage/engineered-systems/sparc-supercluster/deploying-zones-11gr2-supercluster-1875864.pdf
https://technicalsanctuary.wordpress.com/2016/04/21/creating-an-application-zone-on-a-supercluster/
https://technicalsanctuary.wordpress.com/2015/05/15/changing-the-number-of-cpus-in-a-zones-processor-set-pooladm-solaris-11-1/
https://technicalsanctuary.wordpress.com/2015/01/07/escaping-out-of-a-zlogin-c/
https://technicalsanctuary.wordpress.com/2014/05/02/solaris-11-repo-which-solaris-releases-are-available/
https://technicalsanctuary.wordpress.com/2014/03/24/showing-the-supercluster-software-release-version/
https://technicalsanctuary.wordpress.com/2011/11/25/check-current-value-of-shmmax-on-solaris/
! create a zone in solaris 11
http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-092-s11-zones-intro-524494.html
http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-118-s11-script-zones-524499.html
http://thegeekdiary.com/how-to-create-a-zone-in-solaris-11/
https://richardgood.wordpress.com/2011/11/14/how-to-create-a-zone-in-solaris-11/
<<<
how do you know if that zone is a primary domain?
<<<
{{{
The primary LDOM is usually called the Primary Domain, and it is also a Global Zone if you have zones in it.
So basically the first LDOM created on a Sparc system is the primary domain, below labelled control domain.
The first Solaris install created within any LDOM is the global zone.
Subsequent zones are just called non-global zones.
root@enksc1client01:~# virtinfo -a
Domain role: LDoms control I/O service root <- SHOWS "CONTROL" WORD
Domain name: primary
Domain UUID: 60e3bff3-1ff5-4c5c-df53-8ed3ce79bd09
Control domain: enksc1client01
Chassis serial#: AK00263379
Also
root@enksc1client01:~# ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 384 785152M 0.2% 0.2% 21d 22h 30m
ssccn1-dom1 active -n---- 5001 128 256G 0.0% 0.0% 21d 22h 30m
The ldm command doesn’t run on NON primary LDOMs as the Logical Domain Manager only runs on the primary LDOM.
The above naming convention is in EVERY SuperCluster.
}}}
also see [[howto connect/login to LDOM]]
{{{
for i in `zoneadm list|grep -v global`; do echo "Check Zone
$i #############################################"; zlogin $i "ps -ef | grep [p]mon"; done
Check Zone
er2s1vm01 #############################################
root 29013 27567 0 Jan 31 ? 1:12 /usr/bin/perl -w /opt/oracle.cellos/compmon/exadata_mon_hw_asr.pl -server
oracle 40582 27567 0 Jan 31 ? 1:55 asm_pmon_+ASM1
oracle 7520 27567 0 Jan 31 ? 4:14 ora_pmon_dbm011
Check Zone
er2s1vm02 #############################################
root 31551 30062 0 Jan 31 ? 1:13 /usr/bin/perl -w /opt/oracle.cellos/compmon/exadata_mon_hw_asr.pl -server
oracle 45618 30062 0 Jan 31 ? 1:56 asm_pmon_+ASM1
}}}
{{{
root@ssc1p1app01:~# for i in `zoneadm list -cv | nawk '/running/{print $2}' | grep -v global`; do echo "Check Zone $i"; zlogin $i "ps -ef"; done
Check Zone ssc1zboe047v-i
UID PID PPID C STIME TTY TIME CMD
root 62699 61237 0 20:54:59 ? 0:00 /usr/lib/inet/inetd start
daemon 62316 61237 0 20:54:51 ? 0:00 /lib/crypto/kcfd
root 62351 61237 0 20:54:52 ? 0:03 /lib/inet/in.mpathd
root 62727 62213 0 20:54:59 console 0:00 /usr/sbin/ttymon -g -d /dev/console -l console -m ldterm,ttcompat -h -p ssc1zboe
root 62659 61237 0 20:54:58 ? 0:00 /usr/sbin/cupsd -C /etc/cups/cupsd.conf
root 62322 61237 0 20:54:51 ? 0:00 /usr/lib/pfexecd
netcfg 62291 61237 0 20:54:49 ? 0:01 /lib/inet/netcfgd
root 62213 61237 0 20:54:45 ? 0:08 /lib/svc/bin/svc.startd
daemon 62679 61237 0 20:54:59 ? 0:00 /usr/sbin/rpcbind
root 61237 61237 0 20:54:37 ? 0:00 zsched
root 62028 61237 0 20:54:44 ? 0:00 /usr/sbin/init
root 62691 61237 0 20:54:59 ? 0:00 /usr/lib/autofs/automountd
root 62215 61237 0 20:54:45 ? 0:13 /lib/svc/bin/svc.configd
root 62388 61237 0 20:54:52 ? 0:00 /usr/lib/rad/rad -sp
netadm 62328 61237 0 20:54:52 ? 0:01 /lib/inet/ipmgmtd
netadm 62495 61237 0 20:54:54 ? 0:01 /lib/inet/nwamd
root 62358 61237 0 20:54:52 ? 0:00 /usr/lib/dbus-daemon --system
root 62648 61237 0 20:54:58 ? 0:00 /usr/sbin/cron
root 62697 61237 0 20:54:59 ? 0:00 /usr/lib/fm/fmd/fmd
root 62730 61237 0 20:54:59 ? 0:00 /usr/sbin/syslogd
daemon 62392 61237 0 20:54:52 ? 0:00 /usr/lib/utmpd
root 62695 62691 0 20:54:59 ? 0:00 /usr/lib/autofs/automountd
root 62589 61237 0 20:54:57 ? 0:00 /usr/lib/zones/zoneproxy-client -s localhost:1008
root 62687 61237 0 20:54:59 ? 0:00 /usr/lib/inet/in.ndpd
root 62775 61237 0 20:55:00 ? 0:00 /usr/lib/fm/notify/smtp-notify
root 62653 61237 0 20:54:58 ? 0:02 /usr/sbin/nscd
root 62719 61237 0 20:54:59 ? 0:00 /usr/lib/ssh/sshd
smmsp 62767 61237 0 20:54:59 ? 0:01 /usr/lib/sendmail -Ac -q15m
root 62768 61237 0 20:54:59 ? 0:02 /usr/lib/sendmail -bl -q15m
root 2441 61237 0 15:58:05 ? 0:00 /usr/bin/su root -c ps -ef
root 2443 2441 0 15:58:05 ? 0:00 ps -ef
}}}
{{{
root@ssc1p1app01:~# for i in `zoneadm list -cv | nawk '/running/{print $2}' | grep -v global`; do echo "Check Zone $i"; zlogin $i "uname -a"; done
Check Zone ssc1zboe047v-i
SunOS ssc1zboe047v-i 5.11 11.2 sun4v sparc sun4v
Check Zone bw-zc
SunOS ssc1zbw020v-i 5.11 11.2 sun4v sparc sun4v
Check Zone ecc-zc
SunOS ssc1zecc003v-i 5.11 11.2 sun4v sparc sun4v
Check Zone grc-zc
SunOS ssc1zgrc053v-i 5.11 11.2 sun4v sparc sun4v
}}}
{{{
create materialized view <view_name>
build [immediate | deferred]
refresh [fast | complete | force]
on [commit | demand]
with [primary key | rowid]
[[enable | disable] query rewrite]
as
select ...;
TIPS:
----------------
> base table and mview names can't be the same
> can't perform DML on mview, unless "for update" is specified on creation
> mview can be truncated
BUILD clause:
----------------
> immediate - populated immediately
> deferred - populated on the 1st requested refresh
REFRESH types:
----------------
> refresh complete
- simplest way, but expensive if there are a lot of rows
- will recreate all the rowids in the mview
> refresh fast
> refresh force
> never refresh
ENABLE QUERY REWRITE:
----------------
troubleshooting query rewrite
https://gist.github.com/karlarao/8ea0658af3a20f78247f26172069a8c1
https://github.com/karlarao/SQL-Performance-Testing-Toolkit-Gluent/blob/main/gluentwiki.md#disable-query-rewrite
https://github.com/karlarao/SQL-Performance-Testing-Toolkit-Gluent/blob/main/gluentwiki.md#troubleshoot-query-rewrite
https://github.com/karlarao/SQL-Performance-Testing-Toolkit-Gluent/blob/main/gluentwiki.md#enabling-a-rewrite-rule
disable agg rewrite rule https://github.com/karlarao/SQL-Performance-Testing-Toolkit-Gluent/blob/main/gluentwiki.md#disable-agg-rewrite-rule
this can be done per-query with the /*+ no_rewrite */ hint,
or by disabling the rewrite rule / disabling query rewrite altogether (see the gluentwiki links above)
INCREMENTAL/FAST REFRESH:
----------------
> will be fast refreshed only if there's a change in base table
> the mview log, MLOG$_<basetable> maintains the history
> on refresh fast, the mview log of each tables referenced are checked
> mview log is located in the same schema as master table
> on refresh fast, rowids are kept when updated, new rowids added (from mview log)
REFRESH FORCE
----------------
> tries a fast refresh first and falls back to "refresh complete" when fast is not possible
> useful when the mview log is corrupted (a plain fast refresh would fail)
REFRESH NEVER
----------------
> can't be refreshed unless the mview is altered
--------------------------------
example: REFRESH COMPLETE
--------------------------------
--REFRESH COMPLETE
create materialized view mv
refresh complete
as
select * from emp;
select rowid, employee_id, first_name from mv;
--refresh command
exec dbms_mview.refresh('mv','c'); -- this will DELETE,INSERT
EXEC DBMS_MVIEW.refresh('mv', 'c', atomic_refresh=>FALSE); -- this will TRUNCATE,INSERT
--REFRESH COMPLETE ON COMMIT (NEEDS ROWID OR PK)
create materialized view mv
refresh complete on commit
with rowid -- without this will error w/ ORA-12054
as
select * from emp;
--REFRESH COMPLETE, SCHEDULED REFRESH INTERVAL
create materialized view mv
refresh complete
start with (sysdate)
next (sysdate+1/1440)
as
select * from emp;
--REFRESH COMPLETE, ENABLE QUERY REWRITE
create materialized view mv
refresh complete
enable query rewrite
as
select department_id, sum(salary) from emp group by department_id;
--------------------------------
example: REFRESH FAST (need log and rowid)
--------------------------------
--create materialized view log
create materialized view log on emp with rowid;
create materialized view log on emp with primary key; --using pk
--query the log
select * from MLOG$_emp;
--create mv
create materialized view mv
refresh fast
with rowid -- need this or pk, else will error ORA-23415
as
select * from emp;
--insert rows
insert into emp select * from emp where rownum < 2;
--refresh
EXEC DBMS_MVIEW.refresh('mv', 'c', atomic_refresh=>FALSE); --complete
EXEC DBMS_MVIEW.refresh('mv', 'f', atomic_refresh=>FALSE); --fast refresh
--another way to put rowid
create materialized view mv
refresh fast
as
select e.*, rowid as "rowids" from emp e;
--create mv refresh fast on commit
create materialized view mv
refresh fast on commit
with rowid
as
select * from emp;
--refresh command
exec dbms_mview.refresh('mv','c'); --complete
exec dbms_mview.refresh('mv','f'); --fast refresh
exec dbms_mview.refresh(LIST=>'mv',METHOD=>'f');
--------------------------------
example: REFRESH FORCE
--------------------------------
--create materialized view log
create materialized view log on emp with primary key; --using pk
--create mv
create materialized view mv
refresh force
with primary key
as
select * from emp;
--use '?' (force) to refresh the mview
exec dbms_mview.refresh('mv','?');
--------------------------------
example: NEVER REFRESH
--------------------------------
--create mv
create materialized view mv
never refresh
as
select * from emp;
--this will error
exec dbms_mview.refresh('mv','?');
--you need to alter the mview to be able to refresh
alter materialized view mv refresh complete;
exec dbms_mview.refresh('mv','c');
select e.*, rowid r from mv e;
alter materialized view mv refresh force;
exec dbms_mview.refresh('mv','?');
select e.*, rowid r from mv e;
--------------------------------
example: DROP MV AND LOG
--------------------------------
drop materialized view mv;
drop materialized view log on emp;
}}}
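To check that ENABLE QUERY REWRITE actually kicks in for the mv created above, one way is to look at the execution plan; a minimal sketch using standard Oracle tooling:
{{{
-- sketch: confirm the rewrite using the mv created in the examples above
alter session set query_rewrite_enabled = true;
explain plan for
select department_id, sum(salary) from emp group by department_id;
select * from table(dbms_xplan.display);
-- look for "MAT_VIEW REWRITE ACCESS FULL" of MV in the plan; a plain
-- "TABLE ACCESS FULL" of EMP means the rewrite did not happen
}}}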
https://www.youtube.com/watch?v=eHJ6FDmnnuE&list=PLvFesSuT3GZqjvjy7UHB2HN-NOlOAnkv3&index=1
<<<
You can restart the domains using the ldm command, stopping and starting them from the primary. Since you have two LDOMs per compute node you won't need to worry, but in my case you always must shut down the middle domain before either of the border ones, because the middle domain uses virtualized resources from the border domains.
<<<
{{{
cluster status – shows OSC Status
clzc status – Cluster Zones Status
clzc halt zone_name
clzc boot zone_name
clzc reboot zone_name
To check for resource groups, you can use clrg status from the global zone or from inside the zone.
root@ssc1p1app02:~# clzonecluster status
=== Zone Clusters ===
--- Zone Cluster Status ---
Name Brand Node Name Zone Host Name Status Zone Status
---- ----- --------- -------------- ------ -----------
gts-zc solaris ssc1p1app02 ssc1zgts015v-i Online Running
ssc1p2app01 ssc1zgts016v-i Online Running
ssc1p1app01 ssc1zgts057v-i Online Running
ssc1p2app02 ssc1zgts058v-i Online Running
grc-zc solaris ssc1p1app02 ssc1zgrc031v-i Online Running
ssc1p2app01 ssc1zgrc032v-i Online Running
ssc1p1app01 ssc1zgrc053v-i Online Running
ssc1p2app02 ssc1zgrc054v-i Online Running
rhl-zc solaris ssc1p1app02 ssc1zrgt019v-i Online Running
ssc1p1app01 ssc1zrhl011v-i Online Running
ssc1p2app01 ssc1zrhl012v-i Online Running
ssc1p2app02 ssc1zrhl056v-i Online Running
ecc-zc solaris ssc1p1app02 ssc1zecc001v-i Online Running
ssc1p2app01 ssc1zecc002v-i Online Running
ssc1p1app01 ssc1zecc003v-i Online Running
ssc1p2app02 ssc1zecc055v-i Online Running
po-zc solaris ssc1p1app02 ssc1zpo035v-i Online Running
ssc1p2app01 ssc1zpo036v-i Online Running
ssc1p1app01 ssc1zpo051v-i Online Running
ssc1p2app02 ssc1zpo052v-i Online Running
bw-zc solaris ssc1p1app02 ssc1zbw027v-i Online Running
ssc1p2app01 ssc1zbw028v-i Online Running
ssc1p1app01 ssc1zbw020v-i Online Running
ssc1p2app02 ssc1zbw021v-i Online Running
}}}
{{{
The zones are simpler:
To list: zoneadm list -civ
To Stop: zoneadm -z zone_name halt
To start: zoneadm -z zone_name boot
To restart: zoneadm -z zone_name reboot
}}}
https://www.safaribooksonline.com/library/view/easy-reproducible-reports/9781491954867/
https://www.safaribooksonline.com/library/view/reproducible-research-and/9781491959534/
<<<
The command to see zone status and example results follow.
If the zone STATUS is:
"running" it is up,
"installed" it can be booted,
"configured" it can be installed,
"unavailable" there is a problem, usually the zone image (LUN) is missing.
Zones ending in -zc are part of a zone cluster.
{{{
root@ex2s2app01:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
1 dr-ex-zc running /zones/dr-ex-zc solaris shared
}}}
<<<
A couple of months ago I did a presentation and decided to put the files up on GitHub (https://github.com/karlarao/conference_eco2015), mainly because I wanted to move away from Dropbox for my scripts and resources. That turned out to be very convenient because the audience could just download the master zip file and that's it! But I don't want to put the next presentation on yet another repo, which would look pretty messy on my git account. Ultimately I'd like all the presentations in one repo, separated only by folders, but by default the "download zip" link would then download all of the conference folders.
The howto below shows how to create a branch for each presentation folder so that every folder gets its own "download zip" link. Another way to get a downloadable link is through releases, but that doesn't fit the requirement of having only the subset of files in the zip.
here we will be working on the following branches:
* master - this is the default branch where all the files are shown
* empty - an empty branch created from master. this is just used for creating the stage branch, more like a gold image copy
* stage - this is where we initially commit the presentation files and later on will be deleted once the custom branch is created and merged to master
* a custom branch (in this example: talks3) - this is the branch we create from the stage and then merged with the master
click on the link below to get started.
howto: github - talks branch https://www.evernote.com/l/ADB4_Zskw9hBEql3ZU_PhweA3UrVOO2WBgg
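In case the note link rots, here is a rough sketch of the flow described above; the branch and folder names follow the example, and the exact commands in the Evernote note may differ:
{{{
# create the empty "gold image" branch once
git checkout --orphan empty
git rm -rf .
git commit --allow-empty -m "empty gold image branch"

# stage a new talk, cut its own branch, then fold it into master
git checkout -b stage empty
cp -r ~/talks/talks3/* . && git add . && git commit -m "talks3 files"
git checkout -b talks3 stage        # talks3 gets its own "download zip" link
git push origin talks3
git checkout master && git merge talks3   # newer git may need --allow-unrelated-histories
git branch -D stage                 # stage is throwaway, per the note
}}}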
Modern HTTP benchmarking tool https://github.com/wg/wrk used for REST https://telegra.ph/Oracle-XE-184-Free-High-load-02-25-2 doing 500 requests/sec
<<<
vm.nr_hugepages = 35,840 pages
35,840 pages x 2MB per page = 71,680MB, or about 70GB of HugePages
going the other way: (70GB x 1024) / 2MB per page = 35,840 = vm.nr_hugepages
<<<
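The same arithmetic as a quick shell sketch (assumes the standard 2MB hugepage size on x86 Linux):
{{{
PAGES=35840
echo "$(( PAGES * 2 / 1024 )) GB"       # 35840 pages -> 70 GB
GB=70
echo "$(( GB * 1024 / 2 )) pages"       # 70 GB -> 35840 pages = vm.nr_hugepages
}}}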
{{{
#!/bin/bash
#
# hugepages_settings.sh
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
#
# Note: This script does calculation for all shared memory
# segments available when the script is run, no matter it
# is an Oracle RDBMS shared memory segment or not.
#
# This script is provided by Doc ID 401749.1 from My Oracle Support
# http://support.oracle.com
# Welcome text
echo "
This script is provided by Doc ID 401749.1 from My Oracle Support
(http://support.oracle.com) where it is intended to compute values for
the recommended HugePages/HugeTLB configuration for the current shared
memory segments. Before proceeding with the execution please make sure
that:
* Oracle Database instance(s) are up and running
* Oracle Database 11g Automatic Memory Management (AMM) is not setup
(See Doc ID 749851.1)
* The shared memory segments can be listed by command:
# ipcs -m
Press Enter to proceed..."
read
# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk '{print $2}'`
# Initialize the counter
NUM_PG=0
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | awk '{print $5}' | grep "[0-9][0-9]*"`
do
MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
if [ $MIN_PG -gt 0 ]; then
NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
fi
done
RES_BYTES=`echo "$NUM_PG * $HPG_SZ * 1024" | bc -q`
# An SGA less than 100MB does not make sense
# Bail out if that is the case
if [ $RES_BYTES -lt 100000000 ]; then
echo "***********"
echo "** ERROR **"
echo "***********"
echo "Sorry! There are not enough total of shared memory segments allocated for
HugePages configuration. HugePages can only be used for shared memory segments
that you can list by command:
# ipcs -m
of a size that can match an Oracle Database SGA. Please make sure that:
* Oracle Database instance is up and running
* Oracle Database 11g Automatic Memory Management (AMM) is not configured"
exit 1
fi
# Finish with results
case $KERN in
'2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
'2.6') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
*) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac
# End
}}}
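Typical usage of the script above, then applying its recommendation; the 35840 figure is just this example's output, not a universal value:
{{{
# run while all instances are up and AMM is off
chmod +x hugepages_settings.sh
./hugepages_settings.sh
# example output: Recommended setting: vm.nr_hugepages = 35840
echo "vm.nr_hugepages = 35840" >> /etc/sysctl.conf
sysctl -p
grep Huge /proc/meminfo    # verify HugePages_Total after instance restart
}}}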
as root, run the following
vi hugepages_check.sh
{{{
pgrep -lf _pmon_ |
while read pid pname y ; do
printf "%6s %-20s %-80s\n" $pid $pname `grep ^KernelPageSize /proc/$pid/smaps | grep -v '4 kB$'`
done
}}}
http://www.macrumors.com/
Jailbreak across versions http://www.iphonehacks.com/jailbreak
SHSH http://theiphonewiki.com/wiki/index.php?title=SHSH, http://www.machackpc.com/what-is-shsh-how-is-it-important-few-faqs/
Jailbreak 3.1.3
http://www.iphonehacks.com/2010/05/how-to-jailbreak-iphone-ipod-touch-using-spirit-iphone-os-3-1-3-iphone-os-3-1-2-windows.html
iOS 4.0.1 3GS.. oh yeah you can't do 5.1.1 anymore
http://www.jailbreakqa.com/questions/124071/updating-jailbroken-401-3gs-to-511-and-jailbreaking-that
http://www.jailbreakqa.com/questions/122961/iphone-3gs-update-from-313-to-511-no-shsh-saved-question
iRelax
- White Noise, Fire Camp, Flute
photostream
http://support.apple.com/kb/PH13692?viewlocale=en_US
http://ipad.about.com/od/ipad-photostream/a/What-Is-Photo-Stream.htm
delete burst photos
http://www.iphonehacks.com/2013/09/iphone-5s-camera-features.html
Resolving iTunes Error 17 When Upgrading or Restoring iOS Devices
http://osxdaily.com/2014/03/22/fix-error-17-itunes-ios/
/private/var/mobile/Library/Notes
/var/root/Media/VoiceRecordings
/private/var/mobile/Documents/Installous/Downloads/
19 find / -iname ipa
80 find / -iname "IMG_*"
81 fibs find / ls
90 find . -iname "voice"
91 find / -iname "voice"
109 find / -iname "music"
110 find / -iname "audio"
162 find / -iname "mp4"
163 find / -iname "record"
193 find . -iname "notes"
197 find / -iname "notes"
237 history | grep -i find
Krl:~ root# find / -iname "music"
/private/var/mobile/Applications/07B6F32E-844E-4DCF-90EA-222297EBE7DC/GuitarRockTour2Free.app/Tracks/Music
/private/var/mobile/Applications/70CF6D6B-7135-4C53-BFBB-F99DD0C7978F/Supercross.app/data/audio/music
/private/var/mobile/Applications/7CD36368-E2E1-476F-8EB4-54565DA4DBD6/StoneWars.app/music
/private/var/mobile/Applications/BB193EAA-7638-42FF-95DC-0B29A25E26E7/BabyScratch.app/music
/private/var/mobile/Applications/D06A7C24-5586-4A6B-97EB-36794E8BB004/Documents/Music
/private/var/mobile/Applications/E8E1537D-CB94-4267-B4DB-8709B8567617/WormsiPhone.app/Audio/Music
/private/var/mobile/Applications/FCCB28CC-C415-4CA5-802D-047CA8EEEDA9/AngryBirds.app/data_iphone/audio/music
/private/var/mobile/Media/iTunes_Control/Music
! disk usage
du -sm * | sort -rnk1
Krl:/private/var/mobile root# du -sm * | sort -rnk1
12140 Media
6697 Applications
2233 Documents
747 Library
1 response_2011-03-19-16:15:71.xml
Krl:/private/var/mobile/Media root# du -sm * | sort -rnk1
10746 DCIM
1096 Recordings
153 iTunes_Control
137 PhotoData
10 Books
1 Safari
1 Downloads
0 general_storage
0 com.apple.itunes.lock_sync
0 com.apple.itdbprep.postprocess.lock
0 com.apple.dbaccess.lock
0 Purchases
0 PublicStaging
0 Podcasts
0 Photos
0 MxTube
0 ApplicationArchives
-- Installous apps downloads
-- you must stage them here
/private/var/mobile/Documents/Installous/Downloads
-- contains all the photos and videos
/private/var/mobile/Media/DCIM
''Best reference would be the following metalink notes''
Using Tgtadm to Create an iSCSI Target on EL/RHEL 5 [ID 824095.1]
How to Configure ISCSI Client on RHEL5 / EL5 [ID 473179.1]
How to Configure iSCSI Timeouts in OEL (RHEL) 5 [ID 465257.1]
''Adding iSCSI storage without restarting the iSCSI service'' https://blogs.oracle.com/XPSONHA/entry/adding_iscsi_storage_without_r
https://blogs.oracle.com/wim/entry/using_linux_iscsi_targets_with
''Fibre Channel implementation uses WWPN (World Wide Port Names) and WWNN (World Wide Node Names) to identify devices. iSCSI uses iSCSI addresses.''
http://www.cuddletech.com/articles/iscsi/index.html <-- very nice intro
http://download.oracle.com/docs/cd/E19082-01/819-2723/6n50a1n4t/index.html
http://www.c0t0d0s0.org/archives/4222-Less-known-Solaris-Features-iSCSI-Part-4-Alternative-backing-stores.html
http://prefetch.net/presentations/SolarisiSCSI_Presentation.pdf
http://csayantan.wordpress.com/11gr2-rac-installation/tgtadm/ <-- using tgt but also shows udev
http://blogs.oracle.com/XPSONHA/2010/01/adding_iscsi_storage_without_r.html
http://linux-updates.blogspot.com/2009/06/configuring-iscsi-on-red-hat-linux-4.html <-- shows installation on RHEL4 and RHEL5
http://rajeev.name/quick-and-dirty-guide-to-iscsi-implementation/ <-- quick and dirty guide for netapp filer
{{{
===================================================
ISCSI TARGET (STORAGE SERVER)
===================================================
#REQUIRED PACKAGES
scsi-target-utils-0.0-5.20080917snap.el5.x86_64.rpm
perl-Config-General-2.40-1.el5.noarch.rpm
#NOTES ON iSCSI Qualified Name (IQN)
Format: iqn.yyyy-mm.{reversed domain name} (e.g. iqn.2001-04.com.acme:storage.tape.sys1.xyz) (Note: there is an optional colon with arbitrary text afterwards. This text is there to help better organize or label resources.)
#CREATE A TARGET
tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2010-02.com.migs:storage.disk
#ADD LUN TO TARGET
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/hdb1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 2 --backing-store /dev/hdc1
#SHOW CONFIGURATION
tgtadm --lld iscsi --op show --mode target
#ALLOW INITIATORS TO CONNECT
tgtadm --lld iscsi --op bind --mode=target --tid=1 --initiator-address=192.168.122.110,192.168.122.111
#SAMPLE targets.conf (DOES THE SAME THING AS COMMANDS ABOVE, SAFER)
<target iqn.2010-02.com.migs:storage.disk>
backing-store /dev/hdb1
backing-store /dev/hdc1
initiator-address 192.168.122.110,192.168.122.111
</target>
#NOTES
- ensure tgtd starts on reboot
- preferably, set: initiator-address ALL
===================================================
ISCSI INITIATORS (STORAGE CLIENTS)
===================================================
#REQUIRED PACKAGES
iscsi-initiator-utils-6.2.0.871-0.10.el5.x86_64.rpm
#DISCOVER TARGETS
iscsiadm -m discovery -t st -p 192.168.122.100
#LOGIN TO TARGET
iscsiadm --mode node --targetname iqn.2010-02.com.migs:storage.disk --portal 192.168.122.100 --login
#LOGOUT FROM TARGET
iscsiadm --mode node --targetname iqn.2010-02.com.migs:storage.disk --portal 192.168.122.100 --logout
#NOTES
- ensure iscsid starts on reboot
===================================================
to automatically mount a file system during startup
you must have the partition entry in /etc/fstab marked with the "_netdev"
option. For example this would mount a iscsi disk sdb:
/dev/sdb /mnt/iscsi ext3 _netdev 0 0
}}}
Oracle Database uses the iDB protocol (built by Oracle, aptly called the Intelligent Database protocol) to communicate with the Exadata cells. iDB is built on ZDP (Zero-loss Zero-copy Datagram Protocol), an implementation of the industry-standard RDSv3 (Reliable Datagram Sockets).
RDS is a low-latency protocol that bypasses kernel calls by using remote direct memory access (RDMA) to accomplish process-to-process communication across the InfiniBand network.
http://en.wikipedia.org/wiki/Zero-copy
http://en.wikipedia.org/wiki/Remote_direct_memory_access
http://wiki.illumos.org/index.php/Analyzing_Performance
http://smartos.org/2011/05/04/video-the-gregg-performance-series/
http://www.randalolson.com/2016/05/25/why-is-reddit-replacing-imgur/
https://www.sycosure.com/blog/is-imgur-private/
https://www.reddit.com/r/NoStupidQuestions/comments/3f2um0/if_i_upload_an_album_anonymously_on_imgur_can/
https://www.quora.com/Does-imgur-delete-old-images-that-have-few-image-views-if-youve-uploaded-them-under-your-account
https://www.quora.com/Imgur-How-long-are-the-images-stored-before-being-purged
<<showtoc>>
! money and time investment on code camps
http://simpleprogrammer.com/2015/06/11/is-paying-for-developer-bootcamp-worth-it/
! http://www.hackreactor.com/remote-beta/
! ''nyc'' https://generalassemb.ly/education/web-development-immersive
! http://supstat.com/training/ , http://nycdatascience.com/courses/ , http://nycdatascience.com/data-science-bootcamp/
! http://www.lewagon.org/program
{{{
1-2 ruby programming
3 object-oriented design
4 SQL database and ORM
5 Front-End (HTML/CSS/Javascript)
6-7 Rails
8-9 Projects
1-9 Best practices - All through the 9 weeks, students will be taught programming and web-development best practices (code versioning and collaboration techniques on Github, continuous deployment and scaling on heroku, management of pictures with Amazon S3, automatic mailing with external SMTP like Mandrill, and a lot more..)
}}}
! https://frontendmasters.com/
! https://www.talentbuddy.co/mentorship/full-stack
''Courses''
Become A Job-Ready Node.js Developer In One Month https://www.talentbuddy.co/workshops/learn-nodejs-rest-api#sthash.kQ5AYpIa.dpuf
Build A Complex Full Stack Web App With JavaScript At Industry Standards https://www.talentbuddy.co/mentorship/full-stack
info pack https://www.talentbuddy.co/mentorship/full-stack/info-pack
Talentbuddy's Full-Stack Web Development Program Explained https://www.talentbuddy.co/blog/talentbuddys-full-stack-web-development-program-explained/
register https://www.talentbuddy.co/mentorship/full-stack/register
What’s the Talentbuddy Mentorship Experience Like? https://www.talentbuddy.co/blog/whats-the-talentbuddy-mentorship-experience-like/
FAQ https://www.talentbuddy.co/blog/talentbuddy-mentorship-faq/
''Reviews''
http://www.quora.com/Has-anyone-completed-a-course-on-Learn-by-Doing-Talentbuddy-If-so-could-you-share-your-experiences
http://www.quora.com/How-useful-is-Talentbuddy-in-preparations-for-a-coding-interview
''Interview''
From Running A Motorcycle Business Out Of A Garage to Building Full Stack Web Apps https://www.talentbuddy.co/blog/from-running-a-motorcycle-business-to-building-full-stack-web-apps/
Escaping The "Add New Code, Everything Else Breaks" Cycle An Interview With Keith Axline, A Talentbuddy Alumnus https://www.talentbuddy.co/blog/escaping-the-add-new-code-everything-else-breaks-cycle/
How A 20-Something Learns To Build His Chess Web Application An Interview With Jack Mallers, A Talentbuddy Alumnus https://www.talentbuddy.co/blog/how-a-20-something-learns-to-build-his-chess-web-application/
Interview With Kevin Kim, A Talentbuddy Alumnus https://www.talentbuddy.co/blog/interview-with-kevin-kim-a-talentbuddy-alumnus/
Interview with Emily Lam, a Talentbuddy Alumna https://www.talentbuddy.co/blog/interview-with-emily-lam-a-talentbuddy-alumna/
Interview with Yuichi Hagio, a Talentbuddy Alumnus https://www.talentbuddy.co/blog/interview-with-yuichi-hagio-a-talentbuddy-alumnus/
! http://www.skilledup.com/articles/the-ultimate-guide-to-coding-bootcamps-the-exhaustive-list
! codeacademy
http://www.codecademy.com/schools/curriculum/resources
NATIONAL CURRICULUM MAP https://codecademy-school.s3.amazonaws.com/uk-curriculum/National%20curriculum%20mapping%20KS2,%20KS3%20and%20KS4.pdf
WHERE CAN CODING TAKE YOU? https://codecademy-school.s3.amazonaws.com/Poster.pdf
codeacademy labs http://classes.codecademy.com/
week 1,2 html and css
week 3,4 jquery and js
week 5 angular
week 6,7 ror 1
week 8,9 ror 2
week 10,11,12 zero to deploy
! for kids
http://codekingdoms.com/about/
http://www.youthdigital.com/
http://funtechsummercamps.com/course-descriptions/minecraft_secrets
! metis - d3 and data science bootcamps
http://www.thisismetis.com/data-visualization-d3-course
<<<
prereq
https://www.codecademy.com/tracks/web
https://www.codecademy.com/tracks/javascript
<<<
http://www.thisismetis.com/data-science
! insight data engineering
http://insightdataengineering.com/
http://insightdatascience.com/
http://www.insightdataengineering.com/fellows.html
http://www.insightdatascience.com/fellows.html
http://insightdatascience.com/Insight_White_Paper.pdf <- data science curriculum
http://insightdataengineering.com/Insight_Data_Engineering_White_Paper.pdf <- data engineering curriculum
http://cdn.oreillystatic.com/en/assets/1/event/119/Data%20Science%20Bootcamp%20Presentation.pdf
! FreeCodeCamp
!! challenge map
http://www.freecodecamp.com/map
!! github wiki
https://github.com/FreeCodeCamp/FreeCodeCamp/wiki/Map
https://github.com/FreeCodeCamp/FreeCodeCamp/wiki/How-Long-Free-Code-Camp-Takes
start here https://github.com/FreeCodeCamp/FreeCodeCamp/wiki/Start-Here
challenge guides https://github.com/FreeCodeCamp/FreeCodeCamp/wiki/Map
city based camp sites https://github.com/FreeCodeCamp/freecodecamp/wiki/List-of-Free-Code-Camp-city-based-Campsites
official chat rooms https://github.com/FreeCodeCamp/FreeCodeCamp/wiki/Official-Free-Code-Camp-Chat-Rooms
Creating Big Data Solutions with Impala
https://www.safaribooksonline.com/library/view/creating-big-data/9781771376136/
Storage formats see [[ORC vs Parquet]]
{{{
1) create db link and tns
CREATE DATABASE LINK "ORCL" CONNECT TO "SYSTEM" IDENTIFIED BY oracle USING 'ORCL';
2) drop/create user
drop user SCOTT cascade;
set long 9999999
select dbms_metadata.get_ddl('USER', username) || ';' usercreate
from dba_users where username = 'SCOTT';
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT','SCOTT') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','SCOTT') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','SCOTT') FROM DUAL;
CREATE USER "SCOTT" IDENTIFIED BY VALUES 'S:B214FE9CDA4F58C2114A419564313612C32CB71B92CA6585997B22FB8261;F894844C34402B67'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";
GRANT "CONNECT" TO "SCOTT";
GRANT "RESOURCE" TO "SCOTT";
GRANT UNLIMITED TABLESPACE TO "SCOTT";
3) create directory
$ mkdir -p /home/oracle/datapump
Create directory objects and grant privileges (on SQL*Plus as SYS):
create or replace directory DATA_PUMP_DIR as '/home/oracle/datapump';
create or replace directory DATA_PUMP_LOG as '/home/oracle/datapump';
grant read,write on directory DATA_PUMP_DIR to SYS;
grant read,write on directory DATA_PUMP_LOG to SYS;
SELECT grantor, grantee, table_schema, table_name, privilege
FROM all_tab_privs
WHERE table_name = 'DATA_PUMP_DIR';
4) impdp
userid = /
schemas = SCOTT
NETWORK_LINK = ORCL
DIRECTORY = DATA_PUMP_DIR
logfile = impdp_scott.log
parallel = 2
EXCLUDE=TABLE:"IN ('DEPT')"
transform=segment_attributes:n:table
LOGS:
[oracle@emgc11g ~]$
[oracle@emgc11g ~]$ impdp parfile=parfile.par
Import: Release 11.2.0.2.0 - Production on Wed Oct 19 14:16:52 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_IMPORT_SCHEMA_01": /******** AS SYSDBA parfile=parfile.par
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SCOTT" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "SCOTT"."EMP" 14 rows
. . imported "SCOTT"."SALGRADE" 5 rows
. . imported "SCOTT"."BONUS" 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
ORA-39083: Object type REF_CONSTRAINT failed to create with error:
ORA-00942: table or view does not exist
Failing sql is:
ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO") REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_SCHEMA_01" completed with 2 error(s) at 14:10:38
-- ALTER THE PARFILE
schemas = SCOTT
NETWORK_LINK = ORCL
DIRECTORY = DATA_PUMP_DIR
logfile = impdp_scott2.log
parallel = 2
#sqlfile = cr_gen.sql
INCLUDE=TABLE:"IN ('DEPT')"
CONTENT=METADATA_ONLY
transform=segment_attributes:n:table
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_IMPORT_SCHEMA_01": /******** AS SYSDBA parfile=parfile.par
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_SCHEMA_01" successfully completed at 14:12:52
-- RECREATE THE FAILED REF CONSTRAINT (NOVALIDATE, SINCE PARENT KEYS ARE MISSING)
SQL> ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO") REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE;
ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO") REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE
*
ERROR at line 1:
ORA-02298: cannot validate (SCOTT.FK_DEPTNO) - parent keys not found
SQL>
SQL> ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO") REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE novalidate;
Table altered.
-- IF EXCLUDING LOBs, THEN DO THE FOLLOWING
SUM(BYTES)/1024/1024 SEGMENT_NAME OWNER
-------------------- --------------------------------------------------------------------------------- ------------------------------
22 INCIDENTSM1 HPSM
23 APPLICATIONM1 HPSM
25 SYS_LOB0000181275C00006$$ HPSM
28 CLOCKSM1 HPSM
32 SYS_LOB0000183328C00060$$ HPSM
33 SYS_LOB0000182210C00001$$ HPSM
41 SYSLOGM1 HPSM
41.5625 SCIREXPERTM127C9BA35 HPSM
48 SCIREXPERTM12F2DDBEB HPSM
51.625 SYSATTACHMEM103DA337A HPSM
57 CM3RM1 HPSM
SUM(BYTES)/1024/1024 SEGMENT_NAME OWNER
-------------------- --------------------------------------------------------------------------------- ------------------------------
72 ACTIVITYCM3M1 HPSM
96 AUDITM1 HPSM
120 ACTIVITYM1 HPSM
160 PROBSUMMARYM1 HPSM
160 SCIREXPERTM1 HPSM
176 SYSATTACHMEM1 HPSM
184 SYS_LOB0000182045C00009$$ HPSM
232 SYS_LOB0000181850C00001$$ HPSM
272 MSGLOGM1 HPSM
368.0625 SYS_LOB0000180941C00001$$ HPSM
19422 SYS_LOB0000181244C00005$$ HPSM <---- EXCLUDE THIS BIG TABLE!!!
3839 rows selected.
-- CHECK WHAT TABLE OWNS THAT LOB
SQL>
SQL> desc dba_lobs
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------- --------------------------------------------------------------------------------------------------------------------
OWNER VARCHAR2(30)
TABLE_NAME VARCHAR2(30)
COLUMN_NAME VARCHAR2(4000)
SEGMENT_NAME VARCHAR2(30)
TABLESPACE_NAME VARCHAR2(30)
INDEX_NAME VARCHAR2(30)
CHUNK NUMBER
PCTVERSION NUMBER
RETENTION NUMBER
FREEPOOLS NUMBER
CACHE VARCHAR2(10)
LOGGING VARCHAR2(7)
ENCRYPT VARCHAR2(4)
COMPRESSION VARCHAR2(6)
DEDUPLICATION VARCHAR2(15)
IN_ROW VARCHAR2(3)
FORMAT VARCHAR2(15)
PARTITIONED VARCHAR2(3)
SECUREFILE VARCHAR2(3)
SEGMENT_CREATED VARCHAR2(3)
RETENTION_TYPE VARCHAR2(7)
RETENTION_VALUE NUMBER
SQL> select owner, table_name, index_name, chunk from dba_lobs
2 where segment_name = 'SYS_LOB0000181244C00005$$';
OWNER TABLE_NAME INDEX_NAME CHUNK
------------------------------ ------------------------------ ------------------------------ ----------
HPSM SYSATTACHMEM1 SYS_IL0000181244C00005$$ 8192
SQL> select owner, table_name, index_name, chunk from dba_lobs where segment_name = 'SYS_LOB0000180941C00001$$';
OWNER TABLE_NAME INDEX_NAME CHUNK
------------------------------ ------------------------------ ------------------------------ ----------
HPSM SCHEDULEM1 SYS_IL0000180941C00001$$ 8192
Pct
TABLESPACE Total(Mb) Used(Mb) Free(Mb) Free Largest(Mb) FRAGMENT Extend MAX_MEG
---------------------------------------------------- ------------ ---------- ---------- -------- ----------- ---------- ------ ----------
SVCADM 5,000.0 2,508.4 2,491.6 49.8 1,146.9 954
OPS$ORACLE@TXUDV1> select distinct tablespace_name from dba_segments where owner = 'HPSM';
TABLESPACE_NAME
------------------------------
SVCADM
OPS$ORACLE@TXUDV1>
OPS$ORACLE@TXUDV1>
OPS$ORACLE@TXUDV1>
OPS$ORACLE@TXUDV1> select * from dba_directories;
OWNER DIRECTORY_NAME
------------------------------ ------------------------------
DIRECTORY_PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SYS EXP2_DIR
/u90/dpdump/TXUDV1
SYS TRANS_CSV_DIR
/u01/app/taxware/trans/
SYS TWEDEV_OWNER
/u01/app/taxware/staging/
SYS ORACLE_OCM_CONFIG_DIR
/u01/app/oracle/product/11.2.0/dbhome_2/ccr/state
SYS DATA_PUMP_DIR
/u01/app/oracle/admin/TXUDV1/dpdump/
SYS EXP_DIR
/u01/app/oracle/admin/TXUDV1/dpdump
}}}
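If you want to skip that big attachment table on the network import, the EXCLUDE goes into the parfile like this (a sketch reusing the names from the session above, HPSM schema and ORCL link; adjust to your environment):
{{{
# hedged sketch: exclude the table that owns SYS_LOB0000181244C00005$$
schemas = HPSM
NETWORK_LINK = ORCL
DIRECTORY = DATA_PUMP_DIR
logfile = impdp_hpsm.log
parallel = 2
EXCLUDE = TABLE:"IN ('SYSATTACHMEM1')"
}}}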
https://www.krenger.ch/blog/datapump-with-database-link-examples/
http://osamamustafa.blogspot.com/2012/08/data-pump-impdp-expdp-networklink-option.html
https://learnwithme11g.wordpress.com/2012/06/07/copy-schema-into-same-database-with-impdp/
http://stackoverflow.com/questions/9988954/ora-02085-database-link-dblink-name-connects-to-oracle
http://www.ludovicocaldara.net/dba/tag/identified-by-values/
http://www.toadworld.com/products/toad-for-oracle/w/toad_for_oracle_wiki/245.creating-dblinks-database-links-in-another-schema
Export/Import DataPump: The Minimum Requirements to Use Export DataPump and Import DataPump (System Privileges) (Doc ID 351598.1)
Using Oracle Data Pump in an Oracle Database Vault Environment https://docs.oracle.com/cd/E11882_01/server.112/e23090/dba.htm#DVADM70315
https://fatdba.files.wordpress.com/2012/03/oracle-11g-database-vault-and-data-pump.pdf
How To Export / Import Objects In Database Vault Environment (Doc ID 822048.1)
https://forums.oracle.com/message/10206740#10206740
{{{
On Oracle 10g and above you can use "impdp":
impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log
On Oracle 9i you can use "imp":
imp scott/tiger file=emp.dmp fromuser=scott touser=scott tables=dept
}}}
https://forums.oracle.com/thread/2430042
http://www.oracle-base.com/articles/10g/oracle-data-pump-10g.php
https://medium.freecodecamp.org/how-to-get-https-working-on-your-local-development-environment-in-5-minutes-7af615770eec
http://blog.tanelpoder.com/2013/11/27/when-do-oracle-parallel-execution-slaves-issue-buffered-physical-reads-part-1/
http://blog.tanelpoder.com/2013/11/27/when-do-oracle-parallel-execution-slaves-issue-buffered-physical-reads-part-2/
https://www.slideshare.net/JorgeBarba16/oracle-database-inmemory-71514724
In-depth introduction to machine learning in 15 hours of expert videos
http://www.r-bloggers.com/in-depth-introduction-to-machine-learning-in-15-hours-of-expert-videos/
https://connormcdonald.wordpress.com/2015/03/28/in-memory-can-you-really-drop-those-indexes/
10g new feature: Hash-Partitioned Global Index http://www.dbafan.com/blog/?p=126
https://www.toadworld.com/platforms/oracle/w/wiki/4221.create-index-examples
Create Partition Index in Parallel runs in serial (Doc ID 1284461.1)
Example of Script to Create a Hash Partition Table (Doc ID 164873.1)
http://stackoverflow.com/questions/1358490/is-a-globally-partitioned-index-better-faster-than-a-non-partitioned-index
{{{
drop table test purge;
create table test as select 2 x from dual;
select * from test;
insert into test values (3); -- execute 2x
insert into test values (null); -- execute 5x
drop index x_idx;
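-- indexing (x, '1') appends a constant second column, so rows where x is NULL
-- still get index entries (entirely-NULL keys are not stored in a b*tree index)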
CREATE INDEX x_idx ON test (x, '1');
select * from test where x = 3;
select * from test where x is null; -- this will be range scan
drop index x_idx;
CREATE INDEX x_idx ON test (x);
select * from test where x = 3;
select * from test where x is null; -- this will be full scan
}}}
http://richardfoote.wordpress.com/2012/06/08/indexes-vs-full-table-scan-picture-vs-1000-words-pictures-of-lily/
https://influxdb.com/docs/v0.9/introduction/overview.html
An open-source distributed time series database with no external dependencies
http://www.slideshare.net/g33ktalk/influ-28385035
This is a community for users of InfluxDB, the open source time series database. InfluxDB is useful for DevOps, metrics, sensor data, and real-time analytics
https://plus.google.com/u/0/communities/114507511002042654305?cfem=1
http://techcrunch.com/2014/12/08/errplane-snags-8-1m-to-continue-building-open-source-influxdb-time-database/
https://www.google.com/search?q=combine+kafka+transformation+with+informatica&oq=combine+kafka+transformation+with+informatica&aqs=chrome..69i57.9382j0j1&sourceid=chrome&ie=UTF-8
https://kb.informatica.com/proddocs/Product%20Documentation/5/IN_1011_IntelligentStreamingUserGuide_en.pdf
https://www.informatica.com/content/dam/informatica-com/en/collateral/data-sheet/vds_data-sheet_2620.pdf
Insert blob in oracle database with C#
http://stackoverflow.com/questions/4902250/insert-blob-in-oracle-database-with-c-sharp/4902343#4902343
http://blog.calyptus.eu/seb/2009/03/large-object-storage-for-nhibernate-and-ddd-part-1-blobs-clobs-and-xlobs/
<<showtoc>>
! syntax
{{{
alter session force parallel query parallel 4
alter session enable parallel dml
insert /*+ parallel(32) */ into … select /*+ parallel(32) */ * from ...
this gets you direct-path inserts, so you don't need the APPEND hint, and you get parallelism on both sides of the statement
the key is to enable parallel dml if you want parallel inserts
Insert APPEND will not look in buffer cache blocks for empty space. It will add the rows above the segment high-water mark in a DIRECT PATH fashion
inside PL/SQL do this:
EXECUTE IMMEDIATE 'alter session enable parallel dml';
EXECUTE IMMEDIATE 'alter session force parallel query parallel 4';
insert /*+ parallel(32) */ into … select /*+ parallel(32) */ * from ...
}}}
see more from this conversation https://www.evernote.com/shard/s48/nl/1353859895/6f5f36bd-d5b2-4a8c-bc32-4e2ee71f89e9/
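Putting it together, a minimal end-to-end sketch (table names t_src and t_tgt are made up):
{{{
-- minimal sketch: parallel direct-path insert into a hypothetical t_tgt from t_src
alter session enable parallel dml;

insert /*+ parallel(t_tgt, 8) */ into t_tgt
select /*+ parallel(t_src, 8) */ * from t_src;

-- direct-path load: rows go above the high-water mark and the table
-- stays locked until you commit
commit;
}}}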
! parallel delete is also possible
{{{
execute immediate 'alter session enable parallel dml';
delete /*+ parallel(4) */ from big_table b where owner in ('SYS','PUBLIC') ;
}}}
! execution plan
{{{
(correct)
INSERT STATEMENT
PX COORDINATOR <-- PX is closer to INSERT
PX SEND QC (RANDOM)
LOAD AS SELECT
vs
(incorrect)
INSERT STATEMENT
LOAD AS SELECT
VIEW
PX COORDINATOR <-- PX is below LOAD AS SELECT
}}}
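To check which of the two shapes you got, pull the plan of the statement you just ran from the cursor cache (standard dbms_xplan call; look for LOAD AS SELECT sitting below PX SEND QC):
{{{
-- plan of the last statement executed in this session
select * from table(dbms_xplan.display_cursor(null, null, 'BASIC +PARALLEL'));
}}}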
! sql monitor example
https://www.evernote.com/l/ADDlFIyhDkVGw5DV5njUYx911hK2bTrOlcM
! references
http://stackoverflow.com/questions/20047610/oracle-11g-how-to-optimize-slow-parallel-insert-select
http://minimalistic-oracle.blogspot.com/2013/11/how-to-use-append-hint-to-optimize.html
http://antognini.ch/2009/10/hints-for-direct-path-insert-statements/
APPEND Hint (Direct-Path) Insert with Values Causes Excessive Space Usage on 11G (Doc ID 842374.1)
https://www.google.com/search?client=firefox-b-1-d&q=oracle+insert+to+materialized+view
https://connor-mcdonald.com/2018/09/10/modifying-tables-without-losing-materialized-views/
https://danischnider.wordpress.com/2019/02/18/materialized-view-refresh-for-dummies/
How to update a materialized view directly https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9535464000346456349
https://stackoverflow.com/questions/17021094/oracle-insert-only-materialized-view
https://docs.oracle.com/en/database/oracle/oracle-database/19/tgdba/tuning-system-global-area.html#GUID-CFADC9EA-2E2F-4EBB-BA2C-3663291DCC25
<<<
Fast ingest optimizes the processing of high-frequency, single-row data inserts into the database from applications, such as Internet of Things (IoT) applications.
Fast ingest uses the MEMOPTIMIZE_WRITE hint to insert data into tables declared MEMOPTIMIZE FOR WRITE. The database temporarily buffers these inserts in the large pool and automatically commits the changes when the buffered inserts are written to disk. The changes cannot be rolled back.
The inserts using fast ingest are also known as deferred inserts, because they are initially buffered in the large pool and later written to disk asynchronously by background processes.
<<<
<<<
This is an interesting idea but I do think you need to be careful with such techniques. I believe APPEND_VALUES could be very inefficient if someone uses a small array size by accident, and I don’t think you can check this when performing the translation. MEMOPTIMIZE_WRITE isn’t really intended for a bulk load scenario where the customer expects ACID. There is a risk of breaking whatever logic the customer has wrapped around SQL*Loader to ensure data integrity, the most significant being:
At some point, that data is flushed to the … table. Until that happens, the data is not durable.
· https://docs.oracle.com/en/database/oracle/oracle-database/19/tgdba/tuning-system-global-area.html#GUID-CFADC9EA-2E2F-4EBB-BA2C-3663291DCC25
<<<
{{{
Memoptimized Rowstore provides the following two functionalities:
Fast ingest:
– Fast ingest optimizes the processing of high-frequency, single-row data inserts into a database
– Fast ingest uses the large pool for buffering the inserts before writing them to disk, so as to improve data insert performance
Fast lookup:
– Fast lookup enables fast retrieval of data for high-frequency queries
– Fast lookup uses a separate memory area in the SGA called the memoptimize pool for buffering the data queried from tables
– For using fast lookup, you must allocate appropriate memory size to the memoptimize pool using MEMOPTIMIZE_POOL_SIZE
}}}
https://juliandontcheff.wordpress.com/2019/11/25/memoptimized-rowstore-fast-ingest-in-oracle-database-19c/
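A minimal fast ingest sketch, assuming 19c with the Memoptimized Rowstore configured (the table name is made up):
{{{
-- hypothetical IoT table declared for fast ingest
create table sensor_readings (
  sensor_id number,
  val       number,
  ts        timestamp
) memoptimize for write;

-- deferred insert: buffered in the large pool, flushed to disk asynchronously,
-- auto-committed at flush time, and not rollback-able
insert /*+ memoptimize_write */ into sensor_readings
values (42, 36.6, systimestamp);
}}}
Until the background flush happens the row is not durable, which is exactly the SQL*Loader integrity concern quoted above.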
http://www.slideshare.net/tag/insightout2011
http://abcdba.com/abcdbaserverinstallguideshowtoinstalloraclejvm
! references
http://deanattali.com/2015/05/09/setup-rstudio-shiny-server-digital-ocean/#install-shiny <-- good stuff, very detailed
http://www.r-bloggers.com/deploying-your-very-own-shiny-server/ <-- talks about configuring SSL cert, HTTPS with UFW, auto updates on server, reverse proxy
https://www.digitalocean.com/community/tutorials/how-to-set-up-shiny-server-on-ubuntu-14-04 <-- not really detailed
http://matthewlincoln.net/2015/08/31/setup-rstudio-and-shiny-servers-on-digital-ocean.html <-- nice yml script
http://www.rmining.net/2015/05/11/git-pushing-shiny-apps-with-docker-dokku/ <-- shows how to deploy using docker and dokku (a heroku like app)
http://www.r-bloggers.com/run-shiny-app-on-a-ubuntu-server-on-the-amazon-cloud/ <-- setting up shiny on amazon
https://blog.ouseful.info/2015/12/10/how-to-run-a-shiny-app-in-the-cloud-using-tutum-digital-ocean-and-docker-containers/ , https://blog.ouseful.info/2015/01/14/using-docker-to-build-course-vms/ <-- docker + tutum
! tricks
http://www.r-bloggers.com/shiny-server-open-source-edition-solution-for-cpu-bound-apps/ <-- solution for CPU bound apps
! step by step
{{{
https://help.ubuntu.com/community/SwitchingToUbuntu/FromLinux/RedHatEnterpriseLinuxAndFedora
* create user
mkdir -p /home/oracle
cp /etc/skel/.* /home/oracle/
useradd -u 54321 -g oinstall -G dba,oper oracle -d /home/oracle -s /bin/bash
chown -R oracle:oinstall /home/oracle
chmod 700 /home/oracle
ls -ld /home/oracle
passwd oracle
gpasswd -a oracle sudo
* make a 1GB swapfile
dd if=/dev/zero of=/opt/swapfile bs=1024k count=1024
mkswap /opt/swapfile
chmod 0600 /opt/swapfile
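# append the following entry to /etc/fstab so the swap file persists across reboots: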
/opt/swapfile swap swap defaults 0 0
swapon -a
* install nginx
sudo apt-get update
sudo apt-get -y install nginx
* install R
sudo sh -c 'echo "deb http://cran.rstudio.com/bin/linux/ubuntu trusty/" >> /etc/apt/sources.list'
gpg --keyserver keyserver.ubuntu.com --recv-key E084DAB9
gpg -a --export E084DAB9 | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install r-base
sudo apt-get install r-base-dev
sudo apt-get -y install libcurl4-gnutls-dev
sudo apt-get -y install libxml2-dev
sudo apt-get -y install libssl-dev
sudo su - -c "R -e \"install.packages('devtools', repos='http://cran.rstudio.com/')\""
sudo su - -c "R -e \"devtools::install_github('daattali/shinyjs')\""
sudo apt-get -y install libapparmor1 gdebi-core
wget https://download2.rstudio.org/rstudio-server-0.99.902-amd64.deb
sudo gdebi rstudio-server-0.99.902-amd64.deb
* install shiny server
sudo su - -c "R -e \"install.packages('shiny', repos='http://cran.rstudio.com/')\""
wget https://download3.rstudio.org/ubuntu-12.04/x86_64/shiny-server-1.4.2.786-amd64.deb
sudo gdebi shiny-server-1.4.2.786-amd64.deb
sudo su - -c "R -e \"install.packages('rmarkdown', repos='http://cran.rstudio.com/')\""
* fix permissions on shiny server directories
sudo groupadd shiny-apps
sudo usermod -aG shiny-apps oracle
sudo usermod -aG shiny-apps shiny
cd /srv/shiny-server
sudo chown -R oracle:shiny-apps .
sudo chmod g+w .
sudo chmod g+s .
* install git
sudo apt-get -y install git
cd /srv/shiny-server
git init
# setup shiny-server repo on github then copy the url
git config --global user.name "karlarao"
git config --global user.email "karlarao@gmail.com"
git remote add origin https://github.com/karlarao/shiny-server.git
git add .
git commit -m "Initial commit"
git push -u origin master
then git pull on the server to refresh the directory after every push from your laptop
* make URLs pretty
sudo vim /etc/nginx/sites-enabled/default
location /shiny/ {
proxy_pass http://127.0.0.1:3838/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
rewrite ^(/shiny/[^/]+)$ $1/ permanent;
}
location /rstudio/ {
proxy_pass http://127.0.0.1:8787/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
sudo service nginx restart
http://192.241.254.236/shiny/
-rw-r--r-- 1 oracle oinstall 53653388 Feb 19 11:00 shiny-server-1.4.2.786-amd64.deb
-rw-r--r-- 1 oracle oinstall 53982336 May 14 12:29 rstudio-server-0.99.902-amd64.deb
}}}
http://abcdba.com/abcdbaserverinstallguideshowtoinstalloracle11gxmldb
https://software.intel.com/en-us/articles/intelr-memory-latency-checker#whatdoesitmeasure
''Intel® 64 Architecture Processor Topology Enumeration'' http://software.intel.com/en-us/articles/intel-64-architecture-processor-topology-enumeration/
http://ark.intel.com/products/52214/Intel-Core-i7-2600K-Processor-8M-Cache-up-to-3_80-GHz
{{{
[root@desktopserver ~]# mkdir cputopology
[root@desktopserver ~]# cd cputopology/
[root@desktopserver cputopology]# ls -ltr
total 0
[root@desktopserver cputopology]# wget http://software.intel.com/sites/default/files/m/d/4/1/d/8/topo-09272010.tar
--2012-11-20 00:08:31-- http://software.intel.com/sites/default/files/m/d/4/1/d/8/topo-09272010.tar
Resolving software.intel.com... 23.67.253.27, 23.67.253.34
Connecting to software.intel.com|23.67.253.27|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 184320 (180K) [application/octet-stream]
Saving to: `topo-09272010.tar'
100%[==================================================================================================>] 184,320 372K/s in 0.5s
2012-11-20 00:08:32 (372 KB/s) - `topo-09272010.tar' saved [184320/184320]
[root@desktopserver cputopology]# tar -xvf topo-09272010.tar
cpu-topology/
cpu-topology/get_cpuid.asm
cpu-topology/mk_32.bat
cpu-topology/README.txt
cpu-topology/.directory
cpu-topology/mk_32.sh
cpu-topology/cpu_topo.c
cpu-topology/get_cpuid_lix64.s
cpu-topology/mk_64.bat
cpu-topology/cputopology.h
cpu-topology/get_cpuid_lix32.s
cpu-topology/mk_64.sh
cpu-topology/util_os.c
cpu-topology/Intel Source Code License Agreement.doc
[root@desktopserver cputopology]# cd cpu-topology/
[root@desktopserver cpu-topology]# ls -ltr
total 196
-rw-r--r-- 1 1000 users 35328 Dec 21 2009 Intel Source Code License Agreement.doc
-rw-r--r-- 1 1000 users 2457 Dec 21 2009 README.txt
-rwxr-xr-x 1 1000 users 162 Dec 21 2009 mk_64.sh
-rw-r--r-- 1 1000 users 443 Dec 21 2009 mk_64.bat
-rw-r--r-- 1 1000 users 194 Dec 21 2009 mk_32.sh
-rw-r--r-- 1 1000 users 376 Dec 21 2009 mk_32.bat
-rw-r--r-- 1 1000 users 2496 Dec 21 2009 get_cpuid_lix64.s
-rw-r--r-- 1 1000 users 1575 Dec 21 2009 get_cpuid_lix32.s
-rw-r--r-- 1 1000 users 2672 Dec 21 2009 get_cpuid.asm
-rw-r--r-- 1 1000 users 95365 Dec 21 2009 cpu_topo.c
-rw-r--r-- 1 1000 users 10076 Sep 27 2010 cputopology.h
-rw-r--r-- 1 1000 users 14960 Sep 27 2010 util_os.c
[root@desktopserver cpu-topology]# less README.txt
[root@desktopserver cpu-topology]# less mk_64.sh
[root@desktopserver cpu-topology]# less README.txt
[root@desktopserver cpu-topology]# ./mk_64.sh
[root@desktopserver cpu-topology]# ./cpu_topology64.out
Advisory to Users on system topology enumeration
This utility is for demonstration purpose only. It assumes the hardware topology
configuration within a coherent domain does not change during the life of an OS
session. If an OS support advanced features that can change hardware topology
configurations, more sophisticated adaptation may be necessary to account for
the hardware configuration change that might have added and reduced the number
of logical processors being managed by the OS.
User should also be aware that the system topology enumeration algorithm is
based on the assumption that CPUID instruction will return raw data reflecting
the native hardware configuration. When an application runs inside a virtual
machine hosted by a Virtual Machine Monitor (VMM), any CPUID instructions
issued by an app (or a guest OS) are trapped by the VMM and it is the VMM's
responsibility and decision to emulate/supply CPUID return data to the virtual
machines. When deploying topology enumeration code based on querying CPUID
inside a VM environment, the user must consult with the VMM vendor on how an VMM
will emulate CPUID instruction relating to topology enumeration.
Software visible enumeration in the system:
Number of logical processors visible to the OS: 8
Number of logical processors visible to this process: 8
Number of processor cores visible to this process: 4
Number of physical packages visible to this process: 1
Hierarchical counts by levels of processor topology:
# of cores in package 0 visible to this process: 4 .
# of logical processors in Core 0 visible to this process: 2 .
# of logical processors in Core 1 visible to this process: 2 .
# of logical processors in Core 2 visible to this process: 2 .
# of logical processors in Core 3 visible to this process: 2 .
Affinity masks per SMT thread, per core, per package:
Individual:
P:0, C:0, T:0 --> 1
P:0, C:0, T:1 --> 10
Core-aggregated:
P:0, C:0 --> 11
Individual:
P:0, C:1, T:0 --> 2
P:0, C:1, T:1 --> 20
Core-aggregated:
P:0, C:1 --> 22
Individual:
P:0, C:2, T:0 --> 4
P:0, C:2, T:1 --> 40
Core-aggregated:
P:0, C:2 --> 44
Individual:
P:0, C:3, T:0 --> 8
P:0, C:3, T:1 --> 80
Core-aggregated:
P:0, C:3 --> 88
Pkg-aggregated:
P:0 --> ff
APIC ID listings from affinity masks
OS cpu 0, Affinity mask 0001 - apic id 0
OS cpu 1, Affinity mask 0002 - apic id 2
OS cpu 2, Affinity mask 0004 - apic id 4
OS cpu 3, Affinity mask 0008 - apic id 6
OS cpu 4, Affinity mask 0010 - apic id 1
OS cpu 5, Affinity mask 0020 - apic id 3
OS cpu 6, Affinity mask 0040 - apic id 5
OS cpu 7, Affinity mask 0080 - apic id 7
Package 0 Cache and Thread details
Box Description:
Cache is cache level designator
Size is cache size
OScpu# is cpu # as seen by OS
Core is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
where # is number of zeroes (so '8z5' is '0x800000')
L1D is Level 1 Data cache, size(KBytes)= 32, Cores/cache= 2, Caches/package= 4
L1I is Level 1 Instruction cache, size(KBytes)= 32, Cores/cache= 2, Caches/package= 4
L2 is Level 2 Unified cache, size(KBytes)= 256, Cores/cache= 2, Caches/package= 4
L3 is Level 3 Unified cache, size(KBytes)= 8192, Cores/cache= 8, Caches/package= 1
+-----------+-----------+-----------+-----------+
Cache | L1D | L1D | L1D | L1D |
Size | 32K | 32K | 32K | 32K |
OScpu#| 0 4| 1 5| 2 6| 3 7|
Core |c0_t0 c0_t1|c1_t0 c1_t1|c2_t0 c2_t1|c3_t0 c3_t1|
AffMsk| 1 10| 2 20| 4 40| 8 80|
CmbMsk| 11 | 22 | 44 | 88 |
+-----------+-----------+-----------+-----------+
Cache | L1I | L1I | L1I | L1I |
Size | 32K | 32K | 32K | 32K |
+-----------+-----------+-----------+-----------+
Cache | L2 | L2 | L2 | L2 |
Size | 256K | 256K | 256K | 256K |
+-----------+-----------+-----------+-----------+
Cache | L3 |
Size | 8M |
CmbMsk| ff |
+-----------------------------------------------+
}}}
''turbostat -- show CPU frequency and C-state residency on modern Intel turbo-capable processors.''
<<<
turbostat must be run as root.
turbostat reads hardware counters, but doesn't write them. So it will
not interfere with the OS or other programs, including multiple
invocations of itself.
turbostat may work poorly on Linux-2.6.20 through 2.6.29, as acpi-
cpufreq periodically cleared the APERF and MPERF in those kernels.
The APERF, MPERF MSRs are defined to count non-halted cycles. Although
it is not guaranteed by the architecture, turbostat assumes that they
count at TSC rate, which is true on all processors tested to date.
REFERENCES
"Intel(R) Turbo Boost Technology in Intel(R) Coretm Microarchitecture
(Nehalem) Based Processors"
http://download.intel.com/design/processor/applnots/320354.pdf
"Intel(R) 64 and IA-32 Architectures Software Developer's Manual Volume
3B: System Programming Guide"
http://www.intel.com/products/processor/manuals/
<<<
http://stuff.mit.edu/afs/sipb/contrib/linux/tools/power/x86/turbostat/turbostat.c
http://manpages.ubuntu.com/manpages/precise/man8/turbostat.8.html <-- man page
http://download.intel.com/design/processor/applnots/320354.pdf <-- whitepaper Intel® Turbo Boost Technology in Intel® Core™ Microarchitecture (Nehalem) Based Processors
http://software.intel.com/en-us/articles/code-samples-for-intel-integrated-performance-primitives-intel-ipp-library-71 <-- Code Samples for Intel® Integrated Performance Primitives (Intel® IPP) Library 7.1
! my desktopserver cpu
{{{
[root@desktopserver ~]# ./cpu
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
processor 0 1 2 3 4 5 6 7
physical id (processor socket) 0 0 0 0 0 0 0 0
siblings (logical cores/socket) 8 8 8 8 8 8 8 8
core id 0 1 2 3 0 1 2 3
cpu cores (physical cores/socket) 4 4 4 4 4 4 4 4
----------------- ----------------- ----------------- -----------------
Socket0 OScpu#| 0 4| 1 5| 2 6| 3 7|
Core |S0_c0_t0 S0_c0_t1|S0_c1_t0 S0_c1_t1|S0_c2_t0 S0_c2_t1|S0_c3_t0 S0_c3_t1|
----------------- ----------------- ----------------- -----------------
}}}
! Installation
{{{
[root@desktopserver cpu-topology]# wget http://stuff.mit.edu/afs/sipb/contrib/linux/tools/power/x86/turbostat/turbostat.c
--2012-11-20 00:16:30-- http://stuff.mit.edu/afs/sipb/contrib/linux/tools/power/x86/turbostat/turbostat.c
Resolving stuff.mit.edu... 18.181.0.31
Connecting to stuff.mit.edu|18.181.0.31|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 24345 (24K) [text/x-csrc]
Saving to: `turbostat.c'
100%[==================================================================================================>] 24,345 132K/s in 0.2s
2012-11-20 00:16:38 (132 KB/s) - `turbostat.c' saved [24345/24345]
[root@desktopserver cpu-topology]# gcc -o turbostat turbostat.c
[root@desktopserver cpu-topology]# ./turbostat
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
4.64 2.00 3.41 14.56 9.27 71.53 0.00 14.06 11.24 16.36 0.00
0 0 5.80 1.66 3.41 18.18 30.86 45.17 0.00 14.06 11.24 16.36 0.00
0 4 2.93 1.65 3.41 21.04 30.86 45.17 0.00 14.06 11.24 16.36 0.00
1 1 4.42 1.74 3.41 8.45 2.79 84.34 0.00 14.06 11.24 16.36 0.00
1 5 3.61 1.67 3.41 9.26 2.79 84.34 0.00 14.06 11.24 16.36 0.00
2 2 9.63 2.78 3.41 21.57 2.84 65.96 0.00 14.06 11.24 16.36 0.00
2 6 4.86 1.74 3.41 26.34 2.84 65.96 0.00 14.06 11.24 16.36 0.00
3 3 2.68 2.00 3.41 6.08 0.58 90.66 0.00 14.06 11.24 16.36 0.00
3 7 3.20 1.69 3.41 5.57 0.58 90.66 0.00 14.06 11.24 16.36 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
4.36 1.92 3.41 14.91 12.95 67.78 0.00 14.21 15.95 10.88 0.00
0 0 8.84 1.70 3.41 25.91 29.70 35.56 0.00 14.21 15.95 10.88 0.00
0 4 10.38 2.50 3.41 24.37 29.70 35.56 0.00 14.21 15.95 10.88 0.00
1 1 4.99 1.64 3.41 22.29 21.32 51.40 0.00 14.21 15.95 10.88 0.00
1 5 3.50 1.62 3.41 23.78 21.32 51.40 0.00 14.21 15.95 10.88 0.00
2 2 3.79 1.66 3.41 7.57 0.76 87.88 0.00 14.21 15.95 10.88 0.00
2 6 2.02 1.71 3.41 9.34 0.76 87.88 0.00 14.21 15.95 10.88 0.00
3 3 0.27 1.63 3.41 3.43 0.00 96.29 0.00 14.21 15.95 10.88 0.00
3 7 1.09 1.76 3.41 2.61 0.00 96.29 0.00 14.21 15.95 10.88 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
4.19 1.72 3.41 17.32 7.21 71.27 0.00 11.93 9.44 15.91 0.00
0 0 8.01 1.63 3.41 20.35 18.08 53.56 0.00 11.93 9.44 15.91 0.00
0 4 4.29 1.66 3.41 24.07 18.08 53.56 0.00 11.93 9.44 15.91 0.00
1 1 6.84 1.80 3.41 30.81 8.11 54.24 0.00 11.93 9.44 15.91 0.00
1 5 4.52 1.68 3.41 33.13 8.11 54.24 0.00 11.93 9.44 15.91 0.00
2 2 3.56 1.63 3.41 11.25 2.33 82.86 0.00 11.93 9.44 15.91 0.00
2 6 3.11 1.61 3.41 11.70 2.33 82.86 0.00 11.93 9.44 15.91 0.00
3 3 0.90 1.61 3.41 4.35 0.32 94.43 0.00 11.93 9.44 15.91 0.00
3 7 2.33 2.32 3.41 2.92 0.32 94.43 0.00 11.93 9.44 15.91 0.00
-- also, while running turbostat it's good to have mpstat -P ALL 1 1000 or collectl -scC running in another window
}}}
! FIELD DESCRIPTIONS
<<<
''pkg'' processor package number.
''core'' processor core number.
''CPU'' Linux CPU (logical processor) number.
''%c0'' percent of the interval that the CPU retired instructions.
''GHz'' average clock rate while the CPU was in c0 state.
''TSC'' average GHz that the TSC ran during the entire interval.
''%c1, %c3, %c6'' show the percentage residency in hardware core idle states.
''%pc3, %pc6'' percentage residency in hardware package idle states.
<<<
! Verbose mode
<<<
* The ''max efficiency'' frequency, a.k.a. Low Frequency Mode, is the frequency available at the minimum package voltage.
* The ''TSC frequency'' is the nominal maximum frequency of the processor if turbo-mode were not available. This frequency should be sustainable on all CPUs indefinitely, given nominal power and cooling. The remaining rows show what maximum turbo frequency is possible depending on the number of idle cores. Note that this information is not available on all processors.
<<<
{{{
[root@desktopserver cpu-topology]# ./turbostat -v
GenuineIntel 13 CPUID levels; family:model:stepping 0x6:2a:7 (6:42:7)
16 * 100 = 1600 MHz max efficiency
34 * 100 = 3400 MHz TSC frequency
35 * 100 = 3500 MHz max turbo 4 active cores
36 * 100 = 3600 MHz max turbo 3 active cores
37 * 100 = 3700 MHz max turbo 2 active cores
38 * 100 = 3800 MHz max turbo 1 active cores
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
4.75 1.66 3.41 28.37 8.92 57.97 0.00 7.23 6.63 5.84 0.00
0 0 8.82 1.62 3.41 25.86 16.16 49.15 0.00 7.23 6.63 5.84 0.00
0 4 3.56 1.67 3.41 31.13 16.16 49.15 0.00 7.23 6.63 5.84 0.00
1 1 3.82 1.87 3.41 22.36 4.35 69.47 0.00 7.23 6.63 5.84 0.00
1 5 2.83 1.65 3.41 23.35 4.35 69.47 0.00 7.23 6.63 5.84 0.00
2 2 3.37 1.64 3.41 24.33 3.24 69.06 0.00 7.23 6.63 5.84 0.00
2 6 3.09 1.66 3.41 24.62 3.24 69.06 0.00 7.23 6.63 5.84 0.00
3 3 6.40 1.61 3.41 37.49 11.92 44.19 0.00 7.23 6.63 5.84 0.00
3 7 6.08 1.64 3.41 37.81 11.92 44.19 0.00 7.23 6.63 5.84 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
4.60 1.64 3.41 22.02 8.04 65.33 0.00 8.83 6.02 10.69 0.00
0 0 10.41 1.63 3.41 40.38 14.09 35.13 0.00 8.83 6.02 10.69 0.00
0 4 7.53 1.73 3.41 43.26 14.09 35.13 0.00 8.83 6.02 10.69 0.00
1 1 1.73 1.64 3.41 10.89 2.94 84.45 0.00 8.83 6.02 10.69 0.00
1 5 0.92 1.63 3.41 11.70 2.94 84.45 0.00 8.83 6.02 10.69 0.00
2 2 3.10 1.62 3.41 9.53 0.94 86.43 0.00 8.83 6.02 10.69 0.00
2 6 1.79 1.61 3.41 10.84 0.94 86.43 0.00 8.83 6.02 10.69 0.00
3 3 6.19 1.61 3.41 24.28 14.20 55.33 0.00 8.83 6.02 10.69 0.00
3 7 5.15 1.61 3.41 25.32 14.20 55.33 0.00 8.83 6.02 10.69 0.00
}}}
! output on a text file
./turbostat 2> turbostat.txt
https://blogs.oracle.com/networking/entry/switches_inside_oracle_s_engineered
How to Connect Oracle Exadata to 10 G Networks Using Oracle’s Ethernet Switches http://www.oracle.com/technetwork/server-storage/networking/documentation/o13-071-enet-switches-exadata-2041681.pdf
http://arup.blogspot.com/2010/11/tool-to-add-range-partitions.html
http://www.fmrib.ox.ac.uk/fslcourse/unix_intro/fstour.html
http://www.justskins.com/forums/how-to-find-max-136637.html
http://tldp.org/HOWTO/IO-Perf-HOWTO/index.html
http://dsstos.blogspot.com/2007/10/basic-settings-for-solaris-os-to-meet.html
http://www.solarisinternals.com/si/reading/fs2/fs2.html
http://www.pdl.cmu.edu/ftp/DriveChar/traxtent.pdf
http://linux.derkeiler.com/Newsgroups/comp.os.linux.misc/2003-09/2368.html <--MaxPhys
http://www.dataclinic.co.uk/disk-io-information-linux-unix.htm
http://www.princeton.edu/~unix/Solaris/troubleshoot/diskio.html
http://gurkulindia.com/main/2011/03/solaris-sds-both-submirrors-go-into-needs-maintenance-when-io-is-performed-but-no-disk-errors-seen/
http://www.inout.ch/files/pdf/download/inout_oracle_perf_on_ext_vs_asm_vs_zfs.pdf
http://www.ixora.com.au/q+a/io.htm <-- DEV_BSIZE
http://kerneltrap.org/mailarchive/netbsd-tech-kern/2002/6/20/275290/thread
Magicsketch
Incredibooth
http://greenpois0n.com/ - absinthe-win-0.3.zip
<<<
cydia.hackulo.us
cydia.xsellize.com
<<<
ifile http://www.youtube.com/watch?v=PoV9fRcTdRc&feature=related
ifile empty trash http://forums.macrumors.com/archive/index.php/t-1095614.html
oplayer http://www.youtube.com/watch?v=gSmRgCZDuJY
http://i-funbox.com/ <-- for voice memos
http://www.retrohive.com/files/SharePod.zip <-- for mp3s
http://www.oracle.com/technetwork/articles/servers-storage-admin/fault-management-linux-2005816.html
Looks like the http://www.ora600.be/
has a Chinese counterpart ;) http://www.oracleodu.com/en/
I just found out about it earlier when Hua Cui (one of the developers) added me on LinkedIn.
https://api.jquery.com/category/events/
http://javapapers.com/java/java-garbage-collection-introduction/
HPROF: A Heap/CPU Profiling Tool http://docs.oracle.com/javase/7/docs/technotes/samples/hprof.html
JavaTM Virtual Machine Tool Interface (JVM TI) http://docs.oracle.com/javase/6/docs/technotes/guides/jvmti/
http://stackoverflow.com/questions/19012460/java-garbage-collection-monitoring
http://stackoverflow.com/questions/8126868/profiling-number-of-garbage-collected-object-instances-per-class
https://dzone.com/articles/how-monitor-java-garbage
https://plumbr.eu/handbook/gc-tuning-measuring
https://blog.jooq.org/2013/08/12/10-more-common-mistakes-java-developers-make-when-writing-sql/
https://www.jooq.org/
Stored procedures vs Java Prepared statements https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:296373600346847268
http://stackoverflow.com/questions/2099425/when-should-we-use-a-preparedstatement-instead-of-a-statement
http://stackoverflow.com/questions/9548089/which-is-faster-statement-or-preparedstatement
http://stackoverflow.com/questions/10126427/preparedstatement-performance-tuning
http://stackoverflow.com/questions/687550/preparedstatements-and-performance
https://www.mkyong.com/jdbc/jdbc-preparestatement-example-select-list-of-the-records/
http://alvinalexander.com/blog/post/jdbc/jdbc-preparedstatement-select-like
https://www.codeschool.com/paths
What is the recommended learning path for PHP and Javascript? http://programmers.stackexchange.com/questions/132378/what-is-the-recommended-learning-path-for-php-and-javascript
<<<
JavaScript: The Good Parts
<<<
Why you should learn PHP after HTML, CSS, and JavaScript http://www.lynda.com/articles/why-you-should-learn-php-after-html-css-and-javascript
Web Engineering I: Fundamentals of Web Development https://iversity.org/de/courses/web-engineering-i-grundlagen-der-web-entwicklung
Web Engineering II: Developing Mobile HTML5 Apps https://iversity.org/en/courses/web-engineering-ii-developing-mobile-html5-apps
http://www.jseverywhere.org/
! license
https://sales.jetbrains.com/hc/en-gb/articles/207240845-What-is-perpetual-fallback-license-
http://www.scribd.com/doc/23648950/JMeter-Oracle-Database-Testing
http://www.scribd.com/doc/6470142/Using-Apache-JMeter-to-Perform-Load-Testing-Agains
http://www.docstoc.com/docs/25174151/Performance-load-testing-for-database-Using-Apache-JMeter
http://www2.db-tracklayer.com/blog/en/2009/12/26/testing-with-jmeter/
How Good Are Query Optimizers, Really?
http://www.vldb.org/pvldb/vol9/p204-leis.pdf
Online Javascript IDE http://en.wikipedia.org/wiki/Online_Javascript_IDE
http://jsfiddle.net/
http://jsonapi.org/
JSON for APEX Developers
http://dgielis.blogspot.com/2015/01/json-for-apex-developers-part-1.html
http://dgielis.blogspot.com/2015/01/json-for-apex-developers-part-2.html
https://livesql.oracle.com/apex/livesql/file/content_HOB7SQ6N54UBZXIND3F9VR17G.html
<<showtoc>>
! command shortcuts
{{{
For the full list of shortcuts go to Help -> Keyboard Shortcuts
shift+tab = to get help
shift+enter = run and new line
cmd+enter = run
ESC + A = insert new cell above
ESC + B = insert new cell below
Tab = autocomplete
type module and function (example: re.search) + shift + tab = tooltip help
upper right, toggle between Markdown and Code
}}}
! howto
Notebook Gallery - Links to the best IPython and Jupyter Notebooks - http://nb.bianp.net/
https://app.pluralsight.com/library/courses/jupyter-notebook-python/table-of-contents
https://www.safaribooksonline.com/videos/using-jupyter-notebooks/9780135174296/9780135174296-ujnp_00_00_00_00
! running R
https://www.safaribooksonline.com/videos/learning-path-jupyter/9781788394918
! installation
!! jupyter notebook installation
https://jupyter.org/install
{{{
python3 -m pip install --upgrade pip
python3 -m pip install jupyter
}}}
!! R kernel integration (execute installation in mac terminal not inside Rstudio)
https://github.com/IRkernel/IRkernel
https://marcocarnini.github.io/software/2016/08/01/installing-r-kernel-for-jupyter.html
https://mpacer.org/maths/r-kernel-for-ipython-notebook
https://www.chrisjmendez.com/2018/12/04/configure-jupyter-notebook-to-work-with-r/
{{{
# you may need to install xquartz to install IRkernel
brew install --cask xquartz
# run the following commands on a pyenv shell
install.packages('IRkernel')
IRkernel::installspec() # to register the kernel in the current R installation, needs to be in pyenv shell
}}}
! run jupyter notebook
set your python environment first
{{{
pyenv activate py370
}}}
run jupyter notebook
{{{
jupyter notebook
}}}
! install packages on jupyter notebooks
https://jakevdp.github.io/blog/2017/12/05/installing-python-packages-from-jupyter/
{{{
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install pandas
import numpy as np
import pandas as pd
}}}
! jupyter magic commands
<<<
https://www.google.com/search?q=jupyter+%25timeit&oq=jupyter+%25timeit&aqs=chrome..69i57j0l5.4432j0j1&sourceid=chrome&ie=UTF-8
<<<
google
https://cloud.google.com/pubsub/
! CDC
https://dzone.com/articles/creates-a-cdc-stream-from-oracle-database-to-kafka
! ORDS
https://www.confluent.io/kafka-summit-san-francisco-2019/solutions-for-bi-directional-integration-between-oracle-rdbms-and-apache-kafka
<<showtoc>>
! partitions
https://www.linkedin.com/pulse/kafka-optimization-how-many-partitions-needed-maria-hatfield-phd
! autoscaling
https://blog.softwaremill.com/autoscaling-kafka-streams-applications-with-kubernetes-9aed2e37d3a0
https://www.kaggle.com/c/titanic
https://www.datacamp.com/community/open-courses/kaggle-tutorial-on-machine-learing-the-sinking-of-the-titanic#gs.EMtaoUM
https://www.kaggle.com/wiki/Home
https://www.kaggle.com/competitions
http://www.kdnuggets.com/2015/03/10-steps-success-kaggle-data-science-competitions.html
! competitions
getting started http://blog.kaggle.com/2018/08/22/machine-learning-kaggle-competition-part-one-getting-started/
http://orainternals.wordpress.com/2008/11/24/log-file-synch-tuning-2/
http://orainternals.wordpress.com/2007/01/09/hello-world/ <-- look at the comments section
Using kexec for fast reboots on Oracle Linux with UEK
http://blogs.oracle.com/wim/entry/fast_reboots_for_oracle_linux
<<showtoc>>
Please create the “killsession” procedure as the SYSDBA user. Then, as SYSDBA, grant “alter system” and “select on sys.gv_$session” to the SYSTEM user. Finally, grant “execute on system.killsession” to alloc_app_perf.
Please create it on all ALLOC environments; we can drop the procedure later on.
! prereq grant
{{{
grant alter system to system;
grant select on sys.gv_$session to system;
}}}
! the script
<<<
{{{
create or replace procedure system.killsession
( p_sid IN number default NULL,
p_serial IN number default NULL,
p_instance IN number default NULL,
p_username IN varchar2 default NULL,
p_machine IN varchar2 default NULL,
p_osuser IN varchar2 default NULL
) as
BEGIN
FOR c IN (
SELECT username, machine, osuser, sid, serial#, inst_id
FROM sys.gv_$session
WHERE sid = nvl(p_sid,sid)
AND serial# = nvl(p_serial,serial#)
AND inst_id = nvl(p_instance,inst_id)
AND upper(username) = upper(nvl(p_username,username))
AND upper(machine) = upper(nvl(p_machine,machine))
AND upper(osuser) = upper(nvl(p_osuser,osuser))
AND USERNAME IS NOT NULL
AND TYPE <> 'BACKGROUND'
AND STATUS <> 'KILLED'
AND SID NOT IN (select upper(sys_context ('userenv','SID')) from dual)
)
LOOP
EXECUTE IMMEDIATE 'alter system kill session ''' || c.sid || ', ' || c.serial# || ', @' || c.inst_id || ''' immediate';
dbms_output.put_line('Kill session : ''' || c.username || ', ' || c.machine || ', ' || c.osuser || ', ' || c.sid || ', ' || c.serial# || ', @' || c.inst_id || ''' ');
END LOOP;
END;
/
}}}
<<<
! grant this to the performance user
{{{
grant execute on system.killsession to <user>;
}}}
! kill session usage
{{{
set serveroutput on
-- kill all
exec system.killsession();
-- kill sid
exec system.killsession(p_sid=>'36')
-- kill sid, serial, inst
exec system.killsession(p_sid=>'44', p_serial=>'54916', p_instance=>'1')
-- kill inst
exec system.killsession(p_instance=>'1')
-- kill alloc app user
exec system.killsession(p_username=>'ALLOC_APP_USER')
-- kill machine
exec system.killsession(p_machine=>'karldevfedora')
-- kill osuser
exec system.killsession(p_osuser=>'karl')
}}}
! generate kill all commands
{{{
set pages 0
select /* generate kill command */ 'exec system.killsession(p_sid=>'''||sid||''', p_serial=>'''||serial#||''', p_instance=>'''||inst_id||''');'
from gv$session
where username = 'ALLOC_APP_USER'
and machine = 'example.com';
}}}
! monitor
{{{
set lines 300
select username, machine, osuser, sid, serial#, inst_id
from gv$session
where username is not null
and type <> 'BACKGROUND'
order by 1
/
}}}
also create [[CreatePerformanceTuningUser]]
''reference''
https://jhdba.wordpress.com/2009/08/18/procedure-to-kill-a-session/
https://jhdba.wordpress.com/2016/03/23/killing-sessions-across-multiple-instances/
https://oracle-base.com/articles/misc/killing-oracle-sessions
http://blog.tanelpoder.com/2008/06/19/killing-an-oracle-process-from-inside-oracle/
https://chandlerdba.com/2014/05/29/developers-killing-sessions/
ORA-00600 [kjblocalobj_nolock:lt] [kjsmesm:svrmode] [kclcls_8] (Doc ID 1968699.1)
Bug 16875230 - Process may hang waiting on 'gc buffer busy release' wait event in RAC (Doc ID 16875230.8)
12.1.0.1: ORA-00600: [kclcls_8], [196616], [107163] in RAC (Doc ID 2102951.1)
ORA-00600: Internal Error Code, Arguments: [kclcls_8], [29], [4118676] (Doc ID 2093345.1)
ORA-600 [kclcls_8] (Doc ID 460225.1)
EXADATA ORA-00600: internal error code, arguments: [kclcls_8], [0], [193415], https://safiullahmohammad.wordpress.com/2012/11/06/exadata-ora-00600-internal-error-code-arguments-kclcls_8-0-193415/
http://developers.sun.com/solaris/articles/kstat_part2.html
HugePages on Oracle Linux 64-bit [ID 361468.1]
Slow Performance with High CPU Usage on 64-bit Linux with Large SGA [ID 361670.1]
Linux: Common System Performance Issues [ID 419345.1]
kswapd / krefilld Process Consumes All the CPU Resources [ID 272249.1]
''diag and kswapd''
{{{
top - 05:32:24 up 62 days, 8:10, 5 users, load average: 102.95, 79.17, 36.58
Tasks: 1532 total, 6 running, 1460 sleeping, 0 stopped, 66 zombie
Cpu(s): 4.3%us, 3.2%sy, 0.2%ni, 76.7%id, 15.0%wa, 0.1%hi, 0.5%si, 0.0%st
Mem: 98848968k total, 96723688k used, 2125280k free, 14116k buffers
Swap: 25165816k total, 12402688k used, 12763128k free, 14029784k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1401 root 10 -5 0 0 0 D 49.9 0.0 70:54.53 [kswapd0]
8638 root 25 10 18736 1424 884 R 16.6 0.0 928:28.88 /opt/simpana/Base/cvfwd
31819 oracle -2 0 1280m 71m 60m D 15.4 0.1 837:32.04 asm_lms0_+ASM1
28243 oracle 15 0 7480m 118m 53m S 8.5 0.1 14:04.23 ora_j000_hcmtst2
1317 oracle 15 0 8471m 31m 19m S 6.1 0.0 25:09.09 ora_lmd0_hcmsup1
30277 oracle 15 0 8292m 423m 152m S 5.3 0.4 1:01.44 oraclebiuat1 (LOCAL=NO)
29324 oracle 15 0 8461m 1.3g 1.3g S 3.8 1.4 0:29.99 ora_p005_hcmdev1
2044 oracle 15 0 8486m 47m 18m S 3.4 0.0 22:29.14 ora_lck0_hcmsup1
top - 06:10:14 up 62 days, 8:48, 5 users, load average: 48.89, 25.57, 27.13
Tasks: 1557 total, 6 running, 1486 sleeping, 0 stopped, 65 zombie
Cpu(s): 4.7%us, 1.8%sy, 0.0%ni, 12.9%id, 80.5%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 98848968k total, 97774820k used, 1074148k free, 41632k buffers
Swap: 25165816k total, 16482036k used, 8683780k free, 11457724k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1071 oracle 25 0 8463m 21m 18m D 96.5 0.0 48:46.09 ora_diag_hcmtrg1
1401 root 10 -5 0 0 0 D 4.4 0.0 72:28.68 [kswapd0]
30858 oracle 16 0 2162m 100m 16m S 2.0 0.1 0:34.90 /u01/app/11.2.0/grid/bin/oraagent.bin
4557 root 3 -20 42804 31m 2524 S 1.6 0.0 0:42.76 /usr/bin/atop -a -w /var/log/atop/atop_20120223 600
27391 oracle RT 0 688m 375m 53m S 1.3 0.4 219:35.40 /u01/app/11.2.0/grid/bin/ocssd.bin
31383 root 18 0 11464 1760 608 R 1.2 0.0 0:00.17 /usr/sbin/lsof +c0 -w +L -b -R -i
2889 oracle 15 0 8296m 664m 192m S 1.1 0.7 3:09.24 oraclebiuat1 (LOCAL=NO)
2823 oracle 19 0 847m 106m 15m S 0.8 0.1 0:38.71 /u01/app/oracle/product/grid/agent11g/bin/emagent
2887 oracle 15 0 1774m 42m 34m S 0.8 0.0 3:07.40 oracleDBFS1 (LOCAL=NO)
29188 oracle 16 0 7832m 3.3g 3.3g D 0.8 3.5 0:35.86 oraclebiuat1 (LOCAL=NO)
2118 oracle 16 0 8452m 61m 52m D 0.6 0.1 10:03.37 ora_dia0_hcmdev1
top - 05:45:09 up 62 days, 8:23, 5 users, load average: 116.85, 74.67, 46.41
Tasks: 1480 total, 3 running, 1412 sleeping, 0 stopped, 65 zombie
Cpu(s): 6.5%us, 1.6%sy, 0.0%ni, 4.5%id, 87.2%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 98848968k total, 98398692k used, 450276k free, 23416k buffers
Swap: 25165816k total, 14589556k used, 10576260k free, 12862448k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1071 oracle 25 0 8463m 21m 18m R 92.5 0.0 23:39.76 ora_diag_hcmtrg1
31819 oracle -2 0 1280m 68m 57m S 36.5 0.1 838:24.55 asm_lms0_+ASM1
16833 oracle 15 0 7819m 713m 710m S 5.3 0.7 1:09.00 oraclebiuat1 (LOCAL=NO)
27391 oracle RT 0 688m 375m 53m S 2.6 0.4 218:50.57 /u01/app/11.2.0/grid/bin/ocssd.bin
1361 oracle -2 0 8470m 31m 19m D 0.8 0.0 8:31.67 ora_lms1_hcmtrg1
1876 root 18 0 19192 2404 936 R 0.8 0.0 0:24.98 /usr/bin/top -b -c -d 5 -n 720
1175 oracle 16 0 1783m 26m 19m D 0.6 0.0 7:34.40 ora_lmd0_DBFS1
9228 oracle 16 0 5767m 1.1g 1.1g D 0.6 1.2 16:41.49 ora_lmd0_hcmmgr2
top - 05:31:38 up 62 days, 8:09, 5 users, load average: 175.01, 85.25, 36.29
Tasks: 1541 total, 4 running, 1472 sleeping, 0 stopped, 65 zombie
Cpu(s): 1.3%us, 60.9%sy, 0.0%ni, 5.1%id, 32.6%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 98848968k total, 98239628k used, 609340k free, 3104k buffers
Swap: 25165816k total, 11930888k used, 13234928k free, 14365620k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24711 oracle 15 0 2287m 103m 16m S 191.4 0.1 7:49.14 /u01/app/11.2.0/grid/bin/oraagent.bin
26982 oracle 15 0 298m 20m 11m S 45.9 0.0 13:20.97 /u01/app/11.2.0/grid/bin/diskmon.bin -d -f
1401 root 10 -5 0 0 0 D 45.2 0.0 70:44.33 [kswapd0]
31819 oracle -2 0 1280m 72m 60m S 25.8 0.1 837:20.95 asm_lms0_+ASM1
2044 oracle 15 0 8486m 47m 18m S 14.5 0.0 22:27.30 ora_lck0_hcmsup1
12252 oracle 18 0 1371m 15m 15m S 13.9 0.0 3:38.61 ora_pmon_scratch
}}}
''kswapd, vmstat, free -m''
{{{
$ top -c
top - 15:13:06 up 62 days, 17:51, 7 users, load average: 60.27, 51.66, 23.80
Tasks: 1601 total, 4 running, 1507 sleeping, 0 stopped, 90 zombie
Cpu(s): 7.2%us, 4.8%sy, 0.0%ni, 70.1%id, 17.8%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 98848968k total, 97340856k used, 1508112k free, 15188k buffers
Swap: 25165816k total, 16239484k used, 8926332k free, 11274668k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1401 root 11 -5 0 0 0 R 75.3 0.0 81:23.68 [kswapd0]
21857 oracle 15 0 1769m 20m 16m S 47.4 0.0 0:24.26 ora_w002_DBFS1
26606 oracle 16 0 1832m 93m 44m S 37.2 0.1 1:12.17 oracleDBFS1 (LOCAL=NO)
2503 oracle 15 0 8572m 933m 232m D 17.4 1.0 3:55.92 oraclebiuat1 (LOCAL=NO)
2505 oracle 16 0 8573m 929m 226m D 7.2 1.0 4:15.79 oraclebiuat1 (LOCAL=NO)
2359 oracle 16 0 8573m 856m 251m D 6.6 0.9 3:31.53 oraclebiuat1 (LOCAL=NO)
2362 oracle 16 0 8575m 844m 235m D 5.6 0.9 3:17.49 oraclebiuat1 (LOCAL=NO)
2508 oracle 16 0 8572m 935m 231m R 5.6 1.0 4:15.50 oraclebiuat1 (LOCAL=NO)
32445 oracle 18 0 17604 1520 1020 S 3.9 0.0 0:00.13 /bin/bash /home/oracle/dba/bin/get_mem_sizes_all2
32201 oracle 16 0 8552m 867m 252m D 3.6 0.9 5:25.64 oraclebiuat1 (LOCAL=NO)
2501 oracle 18 0 95320 21m 7888 S 3.3 0.0 0:00.10 /u01/app/oracle/product/grid/agent11g/perl/bin/perl /u01/app/oracle/product/
16214 oracle 16 0 2343m 110m 16m S 3.0 0.1 5:39.24 /u01/app/11.2.0/grid/bin/oraagent.bin
27391 oracle RT 0 688m 375m 53m S 2.6 0.4 234:13.34 /u01/app/11.2.0/grid/bin/ocssd.bin
29156 root 18 0 19364 2448 936 S 2.0 0.0 0:32.29 /usr/bin/top -b -c -d 5 -n 720
1638 oracle 16 0 19316 2436 936 R 1.3 0.0 0:00.16 top -c
13344 root RT 0 313m 86m 55m S 1.3 0.1 423:41.28 /u01/app/11.2.0/grid/bin/osysmond.bin
17532 root 15 0 272m 9828 5632 S 1.3 0.0 0:16.85 /usr/sbin/adclient -F -M
2577 oracle 18 0 1330m 14m 7404 D 0.7 0.0 0:00.02 /u01/app/11.2.0/grid/jdk/bin/java -classpath /u01/app/11.2.0/grid/jdk/lib/rt
2870 oracle 18 0 14072 1256 916 S 0.7 0.0 0:00.02 /tmp/CVU_11.2.0.2.0_oracle/exectask -runexe /u01/app/11.2.0/grid/bin/crsctl
3454 oracle 17 0 5787m 22m 18m S 0.7 0.0 0:00.02 ora_j000_hcmcfg1
4880 oracle 15 0 5790m 30m 26m S 0.7 0.0 2:58.89 oraclehcmcfg1 (LOCAL=NO)
8797 oracle 15 0 7902m 4.2g 4.2g S 0.7 4.5 4:04.00 ora_dbw0_biuat1
26528 oracle 18 0 582m 90m 15m S 0.7 0.1 0:33.94 /u01/app/oracle/product/grid/agent11g/bin/emagent
32260 oracle 18 0 1340m 16m 7992 D 0.7 0.0 0:00.05 /u01/app/11.2.0/grid/jdk/jre//bin/java -DORACLE_HOME=/u01/app/11.2.0/grid -c
1092 oracle 15 0 1802m 48m 23m S 0.3 0.0 34:41.11 ora_dia0_DBFS1
1251 oracle 15 0 8470m 28m 19m S 0.3 0.0 10:12.14 ora_dia0_hcmsup1
1290 oracle -2 0 6630m 30m 19m S 0.3 0.0 3:34.35 ora_lms1_fstst1
1302 oracle 15 0 12.4g 32m 18m S 0.3 0.0 22:41.97 ora_dia0_mtatst111
1833 oracle 16 0 1265m 17m 15m D 0.3 0.0 0:00.01 asm_pz99_+ASM1
1998 oracle 15 0 1769m 18m 16m S 0.3 0.0 6:29.36 ora_lck0_DBFS1
2867 root 15 0 97292 3592 2812 S 0.3 0.0 0:00.01 sshd: oracle@notty
3296 oracle 18 0 1782m 34m 31m S 0.3 0.0 5:10.31 ora_cjq0_DBFS1
4122 oracle 15 0 7820m 44m 41m S 0.3 0.0 0:27.04 ora_ctwr_biuat1
7722 root 17 0 8828 1136 888 S 0.3 0.0 0:40.64 /bin/bash /opt/oracle.oswatcher/osw/ExadataRdsInfo.sh HighFreq
8776 oracle 15 0 7834m 58m 43m S 0.3 0.1 6:15.49 ora_dia0_biuat1
8809 oracle 17 0 7819m 32m 32m S 0.3 0.0 0:55.30 ora_asmb_biuat1
vm
oracle@td01db01.tnd.us.example.net:/home/oracle:mtatst111
$
oracle@td01db01.tnd.us.example.net:/home/oracle:mtatst111
$
oracle@td01db01.tnd.us.example.net:/home/oracle:mtatst111
$
oracle@td01db01.tnd.us.example.net:/home/oracle:mtatst111
$ vmstat 1 1000
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 18 16335364 1526252 15336 11201328 1 1 8 25 0 0 3 1 96 0 0
2 22 16357472 1525976 15340 11185180 2820 5688 4232 5904 7308 36705 5 3 62 30 0
5 14 16375524 1525772 15360 11174156 2304 6028 4048 6604 7129 37126 4 2 60 33 0
3 18 16389424 1523620 15368 11163476 1588 3076 2264 5796 7725 36519 6 3 61 30 0
1 20 16404540 1522152 15376 11152288 2464 3412 3728 3672 7037 35861 5 3 63 30 0
3 19 16417768 1519132 15396 11143560 2180 4472 3212 4768 7576 35922 5 2 67 26 0
3 19 16422592 1511776 15416 11141608 1372 2256 1920 3768 9035 41635 6 2 70 23 0
1 19 16426184 1511828 15424 11139164 1352 1976 2240 1980 7488 37727 5 1 72 22 0
2 19 16429440 1511236 15440 11137108 1876 2236 2376 3916 8657 38552 6 1 67 25 0
2 19 16433512 1512520 15452 11135268 1668 3104 2380 3284 10018 38472 6 3 65 25 0
2 19 16437120 1512456 15468 11133364 1236 2348 1784 3600 9244 38254 6 4 68 22 0
1 17 16440952 1380568 15484 11131316 1000 2468 1256 2472 12360 41599 5 2 67 25 0
1 15 16444596 1341772 15492 11128048 712 2144 1036 2448 11111 45197 3 1 77 18 0
2 15 16450780 1291472 15496 11123416 896 2720 1408 2720 10197 46195 3 2 74 21 0
2 15 16458924 1258048 15496 11117520 1252 3476 1404 3476 9829 39381 3 2 71 24 0
0 14 16466808 1236152 15520 11111316 1296 2544 1768 3176 9969 40767 4 2 70 24 0
1 16 16479940 1205264 15528 11101308 2644 4180 2924 4184 10515 45117 4 3 69 24 0
0 16 16510092 1195368 15532 11077992 2952 8680 3364 8680 10889 40722 4 4 64 27 0
4 17 16523904 1162600 15540 11070896 1496 6188 1728 6188 8318 37482 3 3 69 24 0
2 18 16545524 1145108 15552 11059132 3092 10032 3752 10528 10064 39767 5 4 58 34 0
7 17 16573492 1132944 15572 11043088 3728 14504 3824 14880 11111 42954 4 4 49 43 0
1 18 16583004 1116512 15588 11037808 1064 3488 1208 3644 8131 42047 3 2 64 31 0
6 16 16598144 1117924 15596 11027804 2448 6420 2672 7016 10463 44644 6 3 66 26 0
0 17 16612612 1118220 15620 11020236 2748 7624 3248 7624 8333 41386 4 3 75 19 0
1 18 16624048 1099840 15624 11012688 2032 5264 2624 5272 7198 40159 3 2 74 21 0
3 15 16635916 1078412 15640 11004812 2084 4960 2268 5804 8350 39666 3 2 72 23 0
1 16 16651940 1055508 15640 10992316 2556 5272 2732 5272 8398 42930 3 3 74 20 0
1 15 16666952 1025532 15656 10981208 1928 3584 2048 4564 6271 38171 2 2 72 23 0
4 7 16675428 988600 15716 10978792 6036 3460 10972 4076 9867 44285 4 4 76 16 0
5 11 16673412 940024 15828 10985844 11040 0 16436 920 11765 46952 7 5 76 12 0
4 11 16670376 1506172 15860 10990560 16768 0 19852 212 13625 48629 17 6 61 16 0
4 9 16667092 1478956 15920 10992388 17848 0 19788 348 15061 50677 13 6 65 15 0
3 8 16663720 1597092 15956 10993708 17928 0 18196 1992 19818 58548 10 6 71 13 0
4 9 16661816 1588312 15964 10997056 10048 0 10748 760 15166 48630 7 6 77 11 0
4 5 16658972 1577504 15976 10999156 14040 0 14524 1092 19695 60359 8 7 74 11 0
4 8 16655484 1559028 15984 11000432 19188 0 19528 628 19578 56628 10 6 72 13 0
1 9 16653596 1540352 15996 11004068 9932 0 10364 1996 18206 60216 8 6 78 9 0
oracle@td01db01.tnd.us.example.net:/home/oracle:mtatst111
$
oracle@td01db01.tnd.us.example.net:/home/oracle:mtatst111
$ free -m
total used free shared buffers cached
Mem: 96532 95061 1470 0 15 10749
-/+ buffers/cache: 84295 12236
Swap: 24575 16256 8319
oracle@td01db01.tnd.us.example.net:/home/oracle:mtatst111
}}}
also see [[os watcher dcli collection, oswatcher, exawatcher]] for extracting data from oswatcher/exawatcher
<<<
A couple of ideas, assuming that there's swapping (majflt/pswpin/pswpout in sar) with lots of free memory (not just "available/cached" but free):
1) Swappiness and lots of cached IO (low "free", high "cached")
2) NUMA free memory imbalance (one group swaps, others are ok)
<<<
Latch free should only be triggered for sleeps, and the latch address is in one of the parameters of the wait event. Tanel has done a lot of work on latches and has a couple of very handy scripts for diagnosing them (latchprof.sql).
See this post on Tanel's blog: http://tech.e2sn.com/oracle/troubleshooting/latch-contention-troubleshooting
https://logicalread.com/oracle-latch-free-wait-dr01/#.WlOW9WTwbdQ
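Before reaching for latchprof.sql, a rough first look at which latches are sleeping (cumulative since instance startup, so sample deltas for anything serious):
{{{
-- top sleeping latches since instance startup
select name, gets, misses, sleeps
from v$latch
where sleeps > 0
order by sleeps desc;
}}}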
http://tapestryjava.blogspot.gr/2012/06/latency-numbers-every-programmer-should.html
{{{
#! /bin/bash
#
# Examine specific system host devices to identify the drives attached
#
# author: unknown
#
# sample usage: ./ldrv.sh 2> /dev/null
#
function describe_controller () {
local device driver modprefix serial slotname
driver="`readlink -f \"$1/driver\"`"
driver="`basename $driver`"
modprefix="`cut -d: -f1 <\"$1/modalias\"`"
echo "Controller device @ ${1##/sys/devices/} [$driver]"
if [[ "$modprefix" == "pci" ]] ; then
slotname="`basename \"$1\"`"
echo " `lspci -s $slotname |cut -d\ -f2-`"
return
fi
if [[ "$modprefix" == "usb" ]] ; then
if [[ -f "$1/busnum" ]] ; then
device="`cat \"$1/busnum\"`:`cat \"$1/devnum\"`"
serial="`cat \"$1/serial\"`"
else
device="`cat \"$1/../busnum\"`:`cat \"$1/../devnum\"`"
serial="`cat \"$1/../serial\"`"
fi
echo " `lsusb -s $device` {SN: $serial}"
return
fi
echo -e " `cat \"$1/modalias\"`"
}
function describe_device () {
local empty=1
while read device ; do
empty=0
if [[ "$device" =~ ^(.+/[0-9]+:)([0-9]+:[0-9]+:[0-9]+)/block[/:](.+)$ ]] ; then
base="${BASH_REMATCH[1]}"
lun="${BASH_REMATCH[2]}"
bdev="${BASH_REMATCH[3]}"
vnd="$(< ${base}${lun}/vendor)"
mdl="$(< ${base}${lun}/model)"
sn="`sginfo -s /dev/$bdev | \
sed -rn -e \"/Serial Number/{s%^.+' *(.+) *'.*\\\$%\\\\1%;p;q}\"`" &>/dev/null
if [[ -n "$sn" ]] ; then
echo -e " $1 `echo $lun $bdev $vnd $mdl {SN: $sn}`"
else
echo -e " $1 `echo $lun $bdev $vnd $mdl`"
fi
else
echo -e " $1 Unknown $device"
fi
done
[[ $empty -eq 1 ]] && echo -e " $1 [Empty]"
}
function check_host () {
local found=0
local pController=
while read shost ; do
host=`dirname "$shost"`
controller=`dirname "$host"`
bhost=`basename "$host"`
if [[ "$controller" != "$pController" ]] ; then
pController="$controller"
describe_controller "$controller"
fi
find $host -regex '.+/target[0-9:]+/[0-9:]+/block[:/][^/]+' |describe_device "$bhost"
done
}
find /sys/devices/ -name 'scsi_host*' |check_host
}}}
https://learnxinyminutes.com/
https://sites.google.com/site/embtdbo/oracle-parsing
<<showtoc>>
https://stackoverflow.com/questions/35417273/library-cache-lock-in-parallelized-statements
https://stackoverflow.com/questions/32229729/faster-way-to-load-huge-data-warehouse-table
https://orainternals.wordpress.com/2016/05/21/library-cache-lock-on-build-object/
Using Frequent Truncate in Application https://asktom.oracle.com/pls/apex/f%3Fp%3D100:11:0::::P11_QUESTION_ID:47911859692542
! reason and fix - 12c: _optimizer_gather_stats_on_load
{{{
The parameter controlling this change is not mentioned:
_optimizer_gather_stats_on_load
The default is TRUE since Oracle 12.1.0.1 – the parameter or functionality did not exist before Oracle Database 12c.
In Oracle Database 12c, online statistics gathering “piggybacks” statistics gather as part of a direct-path
data loading operation such as, create table as select (CTAS) and insert as select (IAS) operations.
Gathering statistics as part of the data loading operation, means no additional full data scan is required
to have statistics available immediately after the data is loaded.
}}}
{{{
insert /*+append NO_GATHER_OPTIMIZER_STATISTICS*/ into MYTAB select …
}}}
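To confirm whether online statistics gathering actually fired for a given load, the NOTES column of the *_TAB_STATISTICS views shows STATS_ON_LOAD; a quick check (MYTAB is just the placeholder table from the hint example above):
{{{
#!/bin/bash
sqlplus -s "/ as sysdba" <<'EOF'
-- NOTES = STATS_ON_LOAD means the stats were piggybacked on the direct-path load
select owner, table_name, num_rows, last_analyzed, notes
from dba_tab_statistics
where table_name = 'MYTAB';
EOF
}}}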
! the bug
High Parse Time Elapsed And High Cursor : Pin S Wait On X Wait Event With Parallelism (Doc ID 1675659.1)
{{{
Applies to:
Oracle Database - Enterprise Edition - Version 11.2.0.3 to 12.1.0.1 [Release 11.2 to 12.1]
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Information in this document applies to any platform.
Symptoms
PROBLEM:
----------
We have a test case with a create table as select statement with a dop of 4. The table used by the CTAS contains 500000 rows.
When we execute the statement, we note that Oracle spends almost all the time parsing (parse time elapsed 99%) and that the concurrency wait class is at 60%, with 40% on cursor: pin S wait on X.
We are also able to reproduce the same problem with an insert statement.
We can reproduce the issue in Oracle 11.2.0.3 and 11.2.0.4.
High parse time elapsed (>85%) and high cursor : pin S wait on X wait event
(>60%) seen with parallel execution.
DIAGNOSTIC ANALYSIS:
--------------------
This affects both 11.2.0.3 and 11.2.0.4. Issue does not occur when run in serial.
- Investigation 1
Setting Parallelism to 4 on the source TABLE SR_TMP_TABLE.
We see the « cursor: pin S wait on X » wait event.
- Investigation 2
Checking what is generating those « cursor: pin S wait on X » wait.
By looking at MUTEX_SLEEP view, we notice that ONE main function generates those mutexes.
Function « kkslce [KKSCHLPIN2] ».
MUTEX_TYPE                       LOCATION                                     SLEEPS  WAIT_TIME
-------------------------------- ----------------------------------------  --------- ----------
Cursor Pin                       kkslce [KKSCHLPIN2]                           383360          0
Library Cache                    kglhdgn2 106                                      74          0
When we set the trace cursor at level 612 on the cursor, we can see the
following reason for NOT sharing the cursor : PQ Slave mismatch(5).
To generate the necessary cursor tracing:
alter session set events 'immediate trace name cursortrace level 612, address &hashvalue';
Example:
select sql_id, hash_value from v$sql
where sql_text LIKE 'insert /*+ NOAPPEND */ into SR_TMP%';
SQL_ID HASH_VALUE
------------- ----------
9ykqjuv9zgb96 3556224294
alter session set events 'immediate trace name cursortrace level 612, address 3556224294';
< Run the sql statement>
To turn it off:
alter system set events 'immediate trace name cursortrace level 128, address 3556224294';
}}}
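The PQ Slave mismatch(5) reason can also be spotted without cursor tracing: V$SQL_SHARED_CURSOR has a PQ_SLAVE_MISMATCH column. A minimal sketch reusing the SQL_ID from the note above:
{{{
#!/bin/bash
# PQ_SLAVE_MISMATCH = 'Y' flags child cursors that could not be shared by PX slaves
sqlplus -s "/ as sysdba" <<'EOF'
select sql_id, child_number, pq_slave_mismatch
from v$sql_shared_cursor
where sql_id = '9ykqjuv9zgb96';
EOF
}}}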
{{{
Bug 17950283 - high parse time elapsed and high cursor : pin s wait on x wait event with parallel filed for this SR.
Development determined this to be a duplicate of bug 17805316.
Bug 17805316 - improve cursor sharing for px slaves for statements with cdt
}}}
{{{
Apply Patch 17805316: IMPROVE CURSOR SHARING FOR PX SLAVES FOR STATEMENTS WITH CDT.
If one does not exist for your version or platform, open SR and request one.
}}}
! the final fix
{{{
The fix:
The original SQL runs for 7064 seconds. The new versions below run in 892 and 620 seconds respectively.
For all of the CREATE TABLE commands in the packages, add the following to the hint:
Option 1: 892 seconds - index fast full scan on CISADM.D1_MSRMT
NO_GATHER_OPTIMIZER_STATISTICS
Option 2: 620 seconds - full table scan on CISADM.D1_MSRMT
NO_GATHER_OPTIMIZER_STATISTICS FULL(MSRMT)
Option 1 is more scalable when you expect multiple sessions hitting CTAS at the same time because you are reading from an index.
Option 2 is faster if your session is the only one running at a time but will slow down when more sessions are doing full table scan at the same time.
CREATE TABLE DWADM.IF45_STAGING_II_60_TEST NOLOGGING AS
SELECT
/*+PARALLEL(16) NO_GATHER_OPTIMIZER_STATISTICS*/
CUSTOMERS.*,
CASE
CREATE TABLE DWADM.IF45_STAGING_II_60_TEST3 NOLOGGING AS
SELECT
/*+PARALLEL(16) NO_GATHER_OPTIMIZER_STATISTICS FULL(MSRMT)*/
CUSTOMERS.*,
CASE
}}}
http://juliandontcheff.wordpress.com/2013/02/12/reducing-library-cache-mutex-x-concurrency-with-dbms_shared_pool-markhot/
{{{
select * from (
select case when (kglhdadr = kglhdpar) then 'Parent' else 'Child '||kglobt09 end cursor,
kglhdadr ADDRESS, substr(kglnaobj,1,20) NAME, kglnahsh HASH_VALUE, kglobtyd TYPE,
kglobt23 LOCKED_TOTAL, kglobt24 PINNED_TOTAL,kglhdexc EXECUTIONS, kglhdnsp NAMESPACE
from x$kglob -- where kglobtyd != 'CURSOR'
order by kglobt24 desc)
where rownum <= 20;
}}}
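From Julian's post, a hot parent cursor found by a query like the one above can be spread across copies with DBMS_SHARED_POOL.MARKHOT; a hedged sketch (the full_hash_value comes from V$DB_OBJECT_CACHE, and the filter below is just an example to adjust):
{{{
#!/bin/bash
sqlplus -s "/ as sysdba" <<'EOF'
-- step 1: find the 32-char full hash of the hot object
-- (adjust the filter to whatever the x$kglob query flagged as hot)
select name, full_hash_value, namespace
from v$db_object_cache
where executions > 100000;

-- step 2: mark it hot; namespace 0 is the cursor namespace
-- exec dbms_shared_pool.markhot(hash => '<full_hash_value from step 1>', namespace => 0);
EOF
}}}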
https://www.freelists.org/post/oracle-l/cursor-mutex-S-library-cache-lock-library-cache-mutex-X-disaster,3
http://andreynikolaev.files.wordpress.com/2012/05/exploring_mutexes_oracle_11_2_retrial_spinlocks.pdf
ash wait chains https://twitter.com/karlarao/status/477168595223326721
http://www.cockos.com/licecap
http://stackoverflow.com/questions/27863065/what-is-used-to-create-animated-screenshots-as-seen-on-atom-packages/27874156#27874156
https://discuss.atom.io/t/what-gif-creator-is-atom-team-using/1272/2
<<<
I believe that one was done using LICEcap.
I think some of the others were Quicktime and then converted, and maybe some GifBrewery as well.
Yeah, I'm pretty sure I used gifify for some and it worked well, but since this Quicktime issue is annoying to continually work around I switched to LICEcap.
<<<
* ''data warehouse insider'' http://search.oracle.com/search/search?search_p_main_operator=all&group=Blogs&q=parallel%20degree%20limit%20weblog:datawarehousing
* ''Just use parallel degree limit for each consumer group and don't set anything with the CPU yielding'' http://www.freelists.org/post/oracle-l/limit-parallel-process-per-session-in-10204,2
http://www.oaktable.net/content/things-worth-mention-and-remember-i-parallel-execution-control
http://oracle-randolf.blogspot.com/2011/02/things-worth-to-mention-and-remember-i.html
http://www.rittmanmead.com/2005/08/being-too-clever-for-your-own-good/
http://www.oraclenerd.com/2010/02/parallel-rant.html
http://www.freelists.org/post/oracle-l/parallel-hint,3
<<<
The "default" DOP is cpu_count * parallel_threads_per_cpu, however if
the execution plan has both producer/consumer slave sets, it will use
2 * DOP parallel execution servers.
For example, with cpu_count=12 and parallel_threads_per_cpu=2, the
default DOP would be 12 * 2 = 24, however the query could use 48
parallel execution servers depending on the execution plan.
<<<
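To see those inputs on your own instance, a quick check:
{{{
#!/bin/bash
# the two inputs to the "default" DOP, plus the hard cap on PX servers
sqlplus -s "/ as sysdba" <<'EOF'
select name, value from v$parameter
where name in ('cpu_count', 'parallel_threads_per_cpu', 'parallel_max_servers');
EOF
}}}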
http://www.freelists.org/post/oracle-l/best-way-to-invoke-parallel-in-DW-loads,4
<<<
The good news is that Auto DOP in 11gR2 can automatically calculate
the DOP, so no more tuning voodoo should be required. One less knob for
the DBA/developer to have to mess with.
http://download.oracle.com/docs/cd/E11882_01/server.112/e10881/chapter1.htm#FEATURENO08700
<<<
http://structureddata.org/2010/04/19/the-core-performance-fundamentals-of-oracle-data-warehousing-parallel-execution/
<<<
What do you recommend to achieve consistent load times using PQ? Can you make subsequent jobs queue up until one is completed?
Two options depending on the version:
- database resource manager using active session limit and/or max degree of parallelism
- in >=11.2 auto DOP and parallel statement queuing
We are using Auto DOP and overall it seems to be working fine; however, we do have situations where Oracle decides to parallelize operations which run much faster in serial. Do you know of any settings/parameters that we could tweak in order for Oracle to stop doing this?
You can control this via the PARALLEL_MIN_TIME_THRESHOLD parameter.
<<<
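A minimal sketch of the 11.2 knobs mentioned above (the values are placeholders, not recommendations):
{{{
#!/bin/bash
sqlplus -s "/ as sysdba" <<'EOF'
-- AUTO enables Auto DOP plus parallel statement queuing (11.2+)
alter system set parallel_degree_policy = auto;
-- statements estimated to run under this many seconds stay serial
alter system set parallel_min_time_threshold = '30';
-- queuing kicks in once this many PX servers are in use
alter system set parallel_servers_target = 64;
EOF
}}}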
''alter session force parallel query parallel N;''
http://blog.tanelpoder.com/2013/03/20/alter-session-force-parallel-query-doesnt-really-force-anything/
http://www.adellera.it/blog/2013/05/17/alter-session-force-parallel-query-and-indexes/
I’ve used clamav before on my Linux laptop on both Ubuntu and Fedora http://www.clamav.net/download.html , but then I deinstalled it because it was eating up a lot of CPU cycles. I think they need to consult with an enterprise anti-virus vendor like Sophos and do some kind of POC/demo first http://blogs.sophos.com/2013/12/09/do-you-need-antivirus-on-linux-servers/ . It also depends on their needs; I had a customer before where somebody deployed a rootkit on their 2-node RAC. We found out from a strange-looking file with the sticky bit set, so they called in security specialists and pen testers and resolved it from the network side, without touching anything on the server (aside from deleting the files).
Quick google of “antivirus on linux server rootkit” came up with
http://xmodulo.com/how-to-scan-linux-for-rootkits.html
http://serverfault.com/questions/6149/a-list-of-windows-rootkit-detection-and-removal-tools
http://en.wikipedia.org/wiki/Linux_malware
Gathering Debug Information on Oracle Linux (Doc ID 225346.1)
sysreport
sosreport
SCAP https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/sect-practical_examples
https://linuxacademycom.ideas.aha.io/
How To Redirect Cellcli list metrichistory output Into A Text File (Doc ID 1616960.1)
Interpreting and using cell metrics (Doc ID 1447887.1)
{{{
LIST METRICDEFINITION ATTRIBUTES objectType, metricType, name, unit, description
}}}
LIST METRICCURRENT
LIST METRICHISTORY
http://goo.gl/fhRRPt, http://os2ora.com/exadata-all-cell-metrics-and-how-to-monitor-them/
http://anargodjaev.files.wordpress.com/2013/12/04-monitoring-maintenance-migration-backup.pdf
http://dbmentors.blogspot.com/2013/10/exadata-monitoring-performance-using_28.html
{{{
objectType metricType name unit description
CELL Instantaneous CL_BBU_CHARGE % Disk Controller Battery Charge
CELL Instantaneous CL_BBU_TEMP C Disk Controller Battery Temperature
CELL Instantaneous CL_CPUT % Percentage of time over the previous minute that the system CPUs were not idle.
CELL Instantaneous CL_CPUT_CS % Percentage of CPU time used by CELLSRV
CELL Instantaneous CL_CPUT_MS % Percentage of CPU time used by MS
CELL Instantaneous CL_FANS Number Number of working fans on the cell
CELL Instantaneous CL_MEMUT % Percentage of total physical memory on the cell that is currently used
CELL Instantaneous CL_MEMUT_CS % Percentage of physical memory used by CELLSRV
CELL Instantaneous CL_MEMUT_MS % Percentage of physical memory used by MS
CELL Instantaneous CL_RUNQ Number Average number (over the preceding minute) of processes in the Linux run queue marked running or uninterruptible (from /proc/loadavg).
CELL Instantaneous CL_SWAP_IN_BY_SEC KB/sec Amount of swap pages read in KB per second
CELL Instantaneous CL_SWAP_OUT_BY_SEC KB/sec Amount of swap pages written in KB per second
CELL Instantaneous CL_SWAP_USAGE % Percentage of swap used
CELL Instantaneous CL_TEMP C Temperature (Celsius) of the server, provided by the BMC
CELL Instantaneous CL_VIRTMEM_CS MB Amount of virtual memory used by CELLSRV in MB
CELL Instantaneous CL_VIRTMEM_MS MB Amount of virtual memory used by MS in MB
CELL Instantaneous IORM_MODE Number I/O Resource Manager objective for the cell
CELL Instantaneous N_NIC_NW Number Number of inactive network interfaces
CELL Rate N_HCA_MB_RCV_SEC MB/sec Number of megabytes received by InfiniBand interfaces per second
CELL Rate N_HCA_MB_TRANS_SEC MB/sec Number of megabytes transmitted by InfiniBand interfaces per second
CELL Rate N_NIC_KB_RCV_SEC KB/sec Number of kilobytes received by Ethernet interfaces per second
CELL Rate N_NIC_KB_TRANS_SEC KB/sec Number of kilobytes transmitted by Ethernet interfaces per second
CELL_FILESYSTEM Instantaneous CL_FSUT % Percentage of total space on this file system that is currently used
CELLDISK Cumulative CD_IO_BY_R_LG MB Number of megabytes read in large blocks from a cell disk
CELLDISK Cumulative CD_IO_BY_R_SM MB Number of megabytes read in small blocks from a cell disk
CELLDISK Cumulative CD_IO_BY_W_LG MB Number of megabytes written in large blocks to a cell disk
CELLDISK Cumulative CD_IO_BY_W_SM MB Number of megabytes written in small blocks to a cell disk
CELLDISK Cumulative CD_IO_ERRS Number Number of IO errors on a cell disk
CELLDISK Cumulative CD_IO_RQ_R_LG IO requests Number of requests to read large blocks from a cell disk
CELLDISK Cumulative CD_IO_RQ_R_SM IO requests Number of requests to read small blocks from a cell disk
CELLDISK Cumulative CD_IO_RQ_W_LG IO requests Number of requests to write large blocks to a cell disk
CELLDISK Cumulative CD_IO_RQ_W_SM IO requests Number of requests to write small blocks to a cell disk
CELLDISK Cumulative CD_IO_TM_R_LG us Cumulative latency of reading large blocks from a cell disk
CELLDISK Cumulative CD_IO_TM_R_SM us Cumulative latency of reading small blocks from a cell disk
CELLDISK Cumulative CD_IO_TM_W_LG us Cumulative latency of writing large blocks to a cell disk
CELLDISK Cumulative CD_IO_TM_W_SM us Cumulative latency of writing small blocks to a cell disk
CELLDISK Instantaneous CD_IO_LOAD Number Average I/O load for the cell disk
CELLDISK Instantaneous CD_IO_ST_RQ us/request Average service time per request for small IO requests to a cell disk
CELLDISK Rate CD_IO_BY_R_LG_SEC MB/sec Number of megabytes read in large blocks per second from a cell disk
CELLDISK Rate CD_IO_BY_R_SM_SEC MB/sec Number of megabytes read in small blocks per second from a cell disk
CELLDISK Rate CD_IO_BY_W_LG_SEC MB/sec Number of megabytes written in large blocks per second to a cell disk
CELLDISK Rate CD_IO_BY_W_SM_SEC MB/sec Number of megabytes written in small blocks per second to a cell disk
CELLDISK Rate CD_IO_ERRS_MIN /min Number of IO errors on a cell disk per minute
CELLDISK Rate CD_IO_RQ_R_LG_SEC IO/sec Number of requests to read large blocks per second from a cell disk
CELLDISK Rate CD_IO_RQ_R_SM_SEC IO/sec Number of requests to read small blocks per second from a cell disk
CELLDISK Rate CD_IO_RQ_W_LG_SEC IO/sec Number of requests to write large blocks per second to a cell disk
CELLDISK Rate CD_IO_RQ_W_SM_SEC IO/sec Number of requests to write small blocks per second to a cell disk
CELLDISK Rate CD_IO_TM_R_LG_RQ us/request Average latency of reading large blocks per request to a cell disk
CELLDISK Rate CD_IO_TM_R_SM_RQ us/request Average latency of reading small blocks per request from a cell disk
CELLDISK Rate CD_IO_TM_W_LG_RQ us/request Average latency of writing large blocks per request to a cell disk
CELLDISK Rate CD_IO_TM_W_SM_RQ us/request Average latency of writing small blocks per request to a cell disk
FLASHCACHE Cumulative FC_BYKEEP_OVERWR MB Number of megabytes pushed out of the FlashCache because of space limit for ‘keep’ objects
FLASHCACHE Cumulative FC_IO_BY_R MB Number of megabytes read from FlashCache
FLASHCACHE Cumulative FC_IO_BY_R_MISS MB Number of megabytes read from disks because not all requested data was in FlashCache
FLASHCACHE Cumulative FC_IO_BY_R_SKIP MB Number of megabytes read from disks for IO requests with a hint to bypass FlashCache
FLASHCACHE Cumulative FC_IO_BY_W MB Number of megabytes written to FlashCache
FLASHCACHE Cumulative FC_IO_BYKEEP_R MB Number of megabytes read from FlashCache for ‘keep’ objects
FLASHCACHE Cumulative FC_IO_BYKEEP_W MB Number of megabytes written to FlashCache for ‘keep’ objects
FLASHCACHE Cumulative FC_IO_ERRS Number Number of IO errors on FlashCache
FLASHCACHE Cumulative FC_IO_RQ_R IO requests Number of read IO requests satisfied from FlashCache
FLASHCACHE Cumulative FC_IO_RQ_R_MISS IO requests Number of read IO requests which did not find all data in FlashCache
FLASHCACHE Cumulative FC_IO_RQ_R_SKIP IO requests Number of read IO requests with a hint to bypass FlashCache
FLASHCACHE Cumulative FC_IO_RQ_W IO requests Number of IO requests which resulted in FlashCache being populated with data
FLASHCACHE Cumulative FC_IO_RQKEEP_R IO requests Number of read IO requests for ‘keep’ objects satisfied from FlashCache
FLASHCACHE Cumulative FC_IO_RQKEEP_R_MISS IO requests Number of read IO requests for ‘keep’ objects which did not find all data in FlashCache
FLASHCACHE Cumulative FC_IO_RQKEEP_R_SKIP IO requests Number of read IO requests for ‘keep’ objects with a hint to bypass FlashCache
FLASHCACHE Cumulative FC_IO_RQKEEP_W IO requests Number of IO requests for ‘keep’ objects which resulted in FlashCache being populated with data
FLASHCACHE Instantaneous FC_BY_USED MB Number of megabytes used on FlashCache
FLASHCACHE Instantaneous FC_BYKEEP_USED MB Number of megabytes used for ‘keep’ objects on FlashCache
FLASHCACHE Rate FC_BYKEEP_OVERWR_SEC MB/sec Number of megabytes per second pushed out of the FlashCache because of space limit for ‘keep’ objects
FLASHCACHE Rate FC_IO_BY_R_MISS_SEC MB/sec Number of megabytes read from disks per second because not all requested data was in FlashCache
FLASHCACHE Rate FC_IO_BY_R_SEC MB/sec Number of megabytes read per second from FlashCache
FLASHCACHE Rate FC_IO_BY_R_SKIP_SEC MB/sec Number of megabytes read from disks per second for IO requests with a hint to bypass FlashCache
FLASHCACHE Rate FC_IO_BY_W_SEC MB/sec Number of megabytes per second written to FlashCache
FLASHCACHE Rate FC_IO_BYKEEP_R_SEC MB/sec Number of megabytes read per second from FlashCache for ‘keep’ objects
FLASHCACHE Rate FC_IO_BYKEEP_W_SEC MB/sec Number of megabytes per second written to FlashCache for ‘keep’ objects
FLASHCACHE Rate FC_IO_RQ_R_MISS_SEC IO/sec Number of read IO requests per second which did not find all data in FlashCache
FLASHCACHE Rate FC_IO_RQ_R_SEC IO/sec Number of read IO requests satisfied per second from FlashCache
FLASHCACHE Rate FC_IO_RQ_R_SKIP_SEC IO/sec Number of read IO requests per second with a hint to bypass FlashCache
FLASHCACHE Rate FC_IO_RQ_W_SEC IO/sec Number of IO requests per second which resulted in FlashCache being populated with data
FLASHCACHE Rate FC_IO_RQKEEP_R_MISS_SEC IO/sec Number of read IO requests per second for ‘keep’ objects which did not find all data in FlashCache
FLASHCACHE Rate FC_IO_RQKEEP_R_SEC IO/sec Number of read IO requests for ‘keep’ objects per second satisfied from FlashCache
FLASHCACHE Rate FC_IO_RQKEEP_R_SKIP_SEC IO/sec Number of read IO requests per second for ‘keep’ objects with a hint to bypass FlashCache
FLASHCACHE Rate FC_IO_RQKEEP_W_SEC IO/sec Number of IO requests per second for ‘keep’ objects which resulted in FlashCache being populated with data
FLASHLOG Cumulative FL_ACTUAL_OUTLIERS IO requests The number of times redo writes to flash and disk both exceeded the outlier threshold
FLASHLOG Cumulative FL_DISK_FIRST IO requests The number of times redo writes first completed to disk
FLASHLOG Cumulative FL_DISK_IO_ERRS IO requests The number of disk I/O errors encountered by Smart Flash Logging
FLASHLOG Cumulative FL_EFFICIENCY_PERCENTAGE % The efficiency of Smart Flash Logging expressed as a percentage
FLASHLOG Cumulative FL_FLASH_FIRST IO requests The number of times redo writes first completed to flash
FLASHLOG Cumulative FL_FLASH_IO_ERRS IO requests The number of flash I/O errors encountered by Smart Flash Logging
FLASHLOG Cumulative FL_FLASH_ONLY_OUTLIERS IO requests The number of times redo writes to flash exceeded the outlier threshold
FLASHLOG Cumulative FL_IO_DB_BY_W MB The number of MB written to hard disk by Smart Flash Logging
FLASHLOG Cumulative FL_IO_FL_BY_W MB The number of MB written to flash by Smart Flash Logging
FLASHLOG Cumulative FL_IO_W IO requests The number of writes serviced by Smart Flash Logging
FLASHLOG Cumulative FL_IO_W_SKIP_BUSY IO requests The number of redo writes that could not be serviced by Smart Flash Logging because too much data had not yet been written to disk
FLASHLOG Cumulative FL_IO_W_SKIP_LARGE IO requests The number of large redo writes that could not be serviced by Smart Flash Logging because the size of the data was larger than the amount of available space on any flash disk
FLASHLOG Cumulative FL_PREVENTED_OUTLIERS IO requests The number of times redo writes to disk exceeded the outlier threshold; these would have been outliers had it not been for Smart Flash Logging
FLASHLOG Instantaneous FL_BY_KEEP Number The amount of redo data saved on flash due to disk I/O errors
FLASHLOG Instantaneous FL_EFFICIENCY_PERCENTAGE_HOUR % The efficiency of Smart Flash Logging over the last hour expressed as a percentage
FLASHLOG Instantaneous FL_IO_DB_BY_W_SEC MB/sec The rate which is the number of MB per second written to hard disk by Smart Flash Logging
FLASHLOG Instantaneous FL_IO_FL_BY_W_SEC MB/sec The rate which is the number of MB per second written to flash by Smart Flash Logging
FLASHLOG Instantaneous FL_IO_W_SKIP_BUSY_MIN IO/sec The number of redo writes during the last minute that could not be serviced by Smart Flash Logging because too much data had not yet been written to disk
GRIDDISK Cumulative GD_IO_BY_R_LG MB Number of megabytes read in large blocks from a grid disk
GRIDDISK Cumulative GD_IO_BY_R_SM MB Number of megabytes read in small blocks from a grid disk
GRIDDISK Cumulative GD_IO_BY_W_LG MB Number of megabytes written in large blocks to a grid disk
GRIDDISK Cumulative GD_IO_BY_W_SM MB Number of megabytes written in small blocks to a grid disk
GRIDDISK Cumulative GD_IO_ERRS Number Number of IO errors on a grid disk
GRIDDISK Cumulative GD_IO_RQ_R_LG IO requests Number of requests to read large blocks from a grid disk
GRIDDISK Cumulative GD_IO_RQ_R_SM IO requests Number of requests to read small blocks from a grid disk
GRIDDISK Cumulative GD_IO_RQ_W_LG IO requests Number of requests to write large blocks to a grid disk
GRIDDISK Cumulative GD_IO_RQ_W_SM IO requests Number of requests to write small blocks to a grid disk
GRIDDISK Rate GD_IO_BY_R_LG_SEC MB/sec Number of megabytes read in large blocks per second from a grid disk
GRIDDISK Rate GD_IO_BY_R_SM_SEC MB/sec Number of megabytes read in small blocks per second from a grid disk
GRIDDISK Rate GD_IO_BY_W_LG_SEC MB/sec Number of megabytes written in large blocks per second to a grid disk
GRIDDISK Rate GD_IO_BY_W_SM_SEC MB/sec Number of megabytes written in small blocks per second to a grid disk
GRIDDISK Rate GD_IO_ERRS_MIN /min Number of IO errors on a grid disk per minute
GRIDDISK Rate GD_IO_RQ_R_LG_SEC IO/sec Number of requests to read large blocks per second from a grid disk
GRIDDISK Rate GD_IO_RQ_R_SM_SEC IO/sec Number of requests to read small blocks per second from a grid disk
GRIDDISK Rate GD_IO_RQ_W_LG_SEC IO/sec Number of requests to write large blocks per second to a grid disk
GRIDDISK Rate GD_IO_RQ_W_SM_SEC IO/sec Number of requests to write small blocks per second to a grid disk
HOST_INTERCONNECT Cumulative N_MB_DROP MB Number of megabytes dropped during transmission to a particular host
HOST_INTERCONNECT Cumulative N_MB_RDMA_DROP MB Number of megabytes dropped during RDMA transmission to a particular host
HOST_INTERCONNECT Cumulative N_MB_RECEIVED MB Number of megabytes received from a particular host
HOST_INTERCONNECT Cumulative N_MB_RESENT MB Number of megabytes resent to a particular host
HOST_INTERCONNECT Cumulative N_MB_SENT MB Number of megabytes transmitted to a particular host
HOST_INTERCONNECT Cumulative N_RDMA_RETRY_TM ms Latency of the retry actions during RDMA transmission to a particular host
HOST_INTERCONNECT Rate N_MB_DROP_SEC MB/sec Number of megabytes dropped during transmission per second to a particular host
HOST_INTERCONNECT Rate N_MB_RDMA_DROP_SEC MB/sec Number of megabytes dropped during RDMA transmission per second to a particular host
HOST_INTERCONNECT Rate N_MB_RECEIVED_SEC MB/sec Number of megabytes per second received from a particular host
HOST_INTERCONNECT Rate N_MB_RESENT_SEC MB/sec Number of megabytes resent per second to a particular host
HOST_INTERCONNECT Rate N_MB_SENT_SEC MB/sec Number of megabytes transmitted per second to a particular host
IORM_CATEGORY Cumulative CT_FC_IO_RQ IO requests Number of IO requests issued by an IORM category to flash cache
IORM_CATEGORY Cumulative CT_FD_IO_RQ_LG IO requests Number of large IO requests issued by an IORM category to flash disks
IORM_CATEGORY Cumulative CT_FD_IO_RQ_SM IO requests Number of small IO requests issued by an IORM category to flash disks
IORM_CATEGORY Cumulative CT_IO_RQ_LG IO requests Number of large IO requests issued by an IORM category to hard disks
IORM_CATEGORY Cumulative CT_IO_RQ_SM IO requests Number of small IO requests issued by an IORM category to hard disks
IORM_CATEGORY Cumulative CT_IO_WT_LG ms IORM wait time for large IO requests issued by an IORM category
IORM_CATEGORY Cumulative CT_IO_WT_SM ms IORM wait time for small IO requests issued by an IORM category
IORM_CATEGORY Instantaneous CT_FC_IO_BY_SEC MB/sec Number of megabytes of I/O per second for this category to flash cache
IORM_CATEGORY Instantaneous CT_FD_IO_BY_SEC MB/sec Number of megabytes of I/O per second for this category to flash disks
IORM_CATEGORY Instantaneous CT_FD_IO_LOAD Number Average I/O load from this category for flash disks
IORM_CATEGORY Instantaneous CT_IO_BY_SEC MB/sec Number of megabytes of I/O per second for this category to hard disks
IORM_CATEGORY Instantaneous CT_IO_LOAD Number Average I/O load from this category for hard disks
IORM_CATEGORY Instantaneous CT_IO_UTIL_LG % Percentage of disk resources utilized by large requests from this category
IORM_CATEGORY Instantaneous CT_IO_UTIL_SM % Percentage of disk resources utilized by small requests from this category
IORM_CATEGORY Rate CT_FC_IO_RQ_SEC IO/sec Number of IO requests issued by an IORM category to flash cache per second
IORM_CATEGORY Rate CT_FD_IO_RQ_LG_SEC IO/sec Number of large IO requests issued by an IORM category to flash disks per second
IORM_CATEGORY Rate CT_FD_IO_RQ_SM_SEC IO/sec Number of small IO requests issued by an IORM category to flash disks per second
IORM_CATEGORY Rate CT_IO_RQ_LG_SEC IO/sec Number of large IO requests issued by an IORM category to hard disks per second
IORM_CATEGORY Rate CT_IO_RQ_SM_SEC IO/sec Number of small IO requests issued by an IORM category to hard disks per second
IORM_CATEGORY Rate CT_IO_WT_LG_RQ ms/request Average IORM wait time per request for large IO requests issued by an IORM category
IORM_CATEGORY Rate CT_IO_WT_SM_RQ ms/request Average IORM wait time per request for small IO requests issued by an IORM category
IORM_CONSUMER_GROUP Cumulative CG_FC_IO_RQ IO requests Number of IO requests issued by a consumer group to flash cache
IORM_CONSUMER_GROUP Cumulative CG_FD_IO_RQ_LG IO requests Number of large IO requests issued by a consumer group to flash disks
IORM_CONSUMER_GROUP Cumulative CG_FD_IO_RQ_SM IO requests Number of small IO requests issued by a consumer group to flash disks
IORM_CONSUMER_GROUP Cumulative CG_IO_RQ_LG IO requests Number of large IO requests issued by a consumer group to hard disks
IORM_CONSUMER_GROUP Cumulative CG_IO_RQ_SM IO requests Number of small IO requests issued by a consumer group to hard disks
IORM_CONSUMER_GROUP Cumulative CG_IO_WT_LG ms IORM wait time for large IO requests issued by a consumer group
IORM_CONSUMER_GROUP Cumulative CG_IO_WT_SM ms IORM wait time for small IO requests issued by a consumer group
IORM_CONSUMER_GROUP Instantaneous CG_FC_IO_BY_SEC MB/sec Number of megabytes of I/O per second for this consumer group to flash cache
IORM_CONSUMER_GROUP Instantaneous CG_FD_IO_BY_SEC MB/sec Number of megabytes of I/O per second for this consumer group to flash disks
IORM_CONSUMER_GROUP Instantaneous CG_FD_IO_LOAD Number Average I/O load from this consumer group for flash disks
IORM_CONSUMER_GROUP Instantaneous CG_IO_BY_SEC MB/sec Number of megabytes of I/O per second for this consumer group to hard disks
IORM_CONSUMER_GROUP Instantaneous CG_IO_LOAD Number Average I/O load from this consumer group for hard disks
IORM_CONSUMER_GROUP Instantaneous CG_IO_UTIL_LG % Percentage of disk resources utilized by large requests from this consumer group
IORM_CONSUMER_GROUP Instantaneous CG_IO_UTIL_SM % Percentage of disk resources utilized by small requests from this consumer group
IORM_CONSUMER_GROUP Rate CG_FC_IO_RQ_SEC IO/sec Number of IO requests issued by a consumer group to flash cache per second
IORM_CONSUMER_GROUP Rate CG_FD_IO_RQ_LG_SEC IO/sec Number of large IO requests issued by a consumer group to flash disks per second
IORM_CONSUMER_GROUP Rate CG_FD_IO_RQ_SM_SEC IO/sec Number of small IO requests issued by a consumer group to flash disks per second
IORM_CONSUMER_GROUP Rate CG_IO_RQ_LG_SEC IO/sec Number of large IO requests issued by a consumer group to hard disks per second
IORM_CONSUMER_GROUP Rate CG_IO_RQ_SM_SEC IO/sec Number of small IO requests issued by a consumer group to hard disks per second
IORM_CONSUMER_GROUP Rate CG_IO_WT_LG_RQ ms/request Average IORM wait time per request for large IO requests issued by a consumer group
IORM_CONSUMER_GROUP Rate CG_IO_WT_SM_RQ ms/request Average IORM wait time per request for small IO requests issued by a consumer group
IORM_DATABASE Cumulative DB_FC_IO_RQ IO requests Number of IO requests issued by a database to flash cache
IORM_DATABASE Cumulative DB_FD_IO_RQ_LG IO requests Number of large IO requests issued by a database to flash disks
IORM_DATABASE Cumulative DB_FD_IO_RQ_SM IO requests Number of small IO requests issued by a database to flash disks
IORM_DATABASE Cumulative DB_FL_IO_BY MB The number of MB written to the Flash Log
IORM_DATABASE Cumulative DB_FL_IO_RQ IO requests The number of I/O requests issued to the Flash Log
IORM_DATABASE Cumulative DB_IO_RQ_LG IO requests Number of large IO requests issued by a database to hard disks
IORM_DATABASE Cumulative DB_IO_RQ_SM IO requests Number of small IO requests issued by a database to hard disks
IORM_DATABASE Cumulative DB_IO_WT_LG ms IORM wait time for large IO requests issued by a database
IORM_DATABASE Cumulative DB_IO_WT_SM ms IORM wait time for small IO requests issued by a database
IORM_DATABASE Instantaneous DB_FC_IO_BY_SEC MB/sec Number of megabytes of I/O per second for this database to flash cache
IORM_DATABASE Instantaneous DB_FD_IO_BY_SEC MB/sec Number of megabytes of I/O per second for this database to flash disks
IORM_DATABASE Instantaneous DB_FD_IO_LOAD Number Average I/O load from this database for flash disks
IORM_DATABASE Instantaneous DB_FL_IO_BY_SEC MB/sec The number of MB written per second to the Flash Log
IORM_DATABASE Instantaneous DB_FL_IO_RQ_SEC IO/sec The number of I/O requests per second issued to the Flash Log
IORM_DATABASE Instantaneous DB_IO_BY_SEC MB/sec Number of megabytes of I/O per second for this database to hard disks
IORM_DATABASE Instantaneous DB_IO_LOAD Number Average I/O load from this database for hard disks
IORM_DATABASE Instantaneous DB_IO_UTIL_LG % Percentage of disk resources utilized by large requests from this database
IORM_DATABASE Instantaneous DB_IO_UTIL_SM % Percentage of disk resources utilized by small requests from this database
IORM_DATABASE Rate DB_FC_IO_RQ_SEC IO/sec Number of IO requests issued by a database to flash cache per second
IORM_DATABASE Rate DB_FD_IO_RQ_LG_SEC IO/sec Number of large IO requests issued by a database to flash disks per second
IORM_DATABASE Rate DB_FD_IO_RQ_SM_SEC IO/sec Number of small IO requests issued by a database to flash disks per second
IORM_DATABASE Rate DB_IO_RQ_LG_SEC IO/sec Number of large IO requests issued by a database to hard disks per second
IORM_DATABASE Rate DB_IO_RQ_SM_SEC IO/sec Number of small IO requests issued by a database to hard disks per second
IORM_DATABASE Rate DB_IO_WT_LG_RQ ms/request Average IORM wait time per request for large IO requests issued by a database
IORM_DATABASE Rate DB_IO_WT_SM_RQ ms/request Average IORM wait time per request for small IO requests issued by a database
}}}
new metrics added in later cell software versions:
{{{
> CD_BY_FC_DIRTY
> CD_IO_BY_R_SCRUB
> CD_IO_BY_R_SCRUB_SEC
> CD_IO_ERRS_SCRUB
> CD_IO_RQ_R_SCRUB
> CD_IO_RQ_R_SCRUB_SEC
> CD_IO_UTIL
> CD_IO_UTIL_LG
> CD_IO_UTIL_SM
> FC_BY_ALLOCATED
> FC_BY_DIRTY
> FC_BY_STALE_DIRTY
> FC_IO_BY_ALLOCATED_OLTP
> FC_IO_BY_DISK_WRITE
> FC_IO_BY_DISK_WRITE_SEC
> FC_IO_BY_R_ACTIVE_SECONDARY
> FC_IO_BY_R_ACTIVE_SECONDARY_MISS
> FC_IO_BY_R_ACTIVE_SECONDARY_MISS_SEC
> FC_IO_BY_R_ACTIVE_SECONDARY_SEC
> FC_IO_BY_R_DW
> FC_IO_BY_R_MISS_DW
> FC_IO_BY_R_SKIP_FC_THROTTLE
> FC_IO_BY_R_SKIP_FC_THROTTLE_SEC
> FC_IO_BY_R_SKIP_LG
> FC_IO_BY_R_SKIP_LG_SEC
> FC_IO_BY_R_SKIP_NCMIRROR
> FC_IO_BY_W_FIRST
> FC_IO_BY_W_FIRST_SEC
> FC_IO_BY_W_OVERWRITE
> FC_IO_BY_W_OVERWRITE_SEC
> FC_IO_BY_W_POPULATE
> FC_IO_BY_W_POPULATE_SEC
> FC_IO_BY_W_SKIP
> FC_IO_BY_W_SKIP_FC_THROTTLE
> FC_IO_BY_W_SKIP_FC_THROTTLE_SEC
> FC_IO_BY_W_SKIP_LG
> FC_IO_BY_W_SKIP_LG_SEC
> FC_IO_BY_W_SKIP_NCMIRROR
> FC_IO_BY_W_SKIP_SEC
> FC_IO_RQ_DISK_WRITE
> FC_IO_RQ_DISK_WRITE_SEC
> FC_IO_RQ_REPLACEMENT_ATTEMPTED
> FC_IO_RQ_REPLACEMENT_FAILED
> FC_IO_RQ_R_ACTIVE_SECONDARY
> FC_IO_RQ_R_ACTIVE_SECONDARY_MISS
> FC_IO_RQ_R_ACTIVE_SECONDARY_MISS_SEC
> FC_IO_RQ_R_ACTIVE_SECONDARY_SEC
> FC_IO_RQ_R_DW
> FC_IO_RQ_R_MISS_DW
> FC_IO_RQ_R_SKIP_FC_THROTTLE
> FC_IO_RQ_R_SKIP_FC_THROTTLE_SEC
> FC_IO_RQ_R_SKIP_LG
> FC_IO_RQ_R_SKIP_LG_SEC
> FC_IO_RQ_R_SKIP_NCMIRROR
> FC_IO_RQ_W_FIRST
> FC_IO_RQ_W_FIRST_SEC
> FC_IO_RQ_W_OVERWRITE
> FC_IO_RQ_W_OVERWRITE_SEC
> FC_IO_RQ_W_POPULATE
> FC_IO_RQ_W_POPULATE_SEC
> FC_IO_RQ_W_SKIP
> FC_IO_RQ_W_SKIP_FC_THROTTLE
> FC_IO_RQ_W_SKIP_FC_THROTTLE_SEC
> FC_IO_RQ_W_SKIP_LG
> FC_IO_RQ_W_SKIP_LG_SEC
> FC_IO_RQ_W_SKIP_NCMIRROR
> FC_IO_RQ_W_SKIP_SEC
> FL_IO_W_SKIP_NO_BUFFER
> GD_BY_FC_DIRTY
> GD_IO_BY_R_SCRUB
> GD_IO_BY_R_SCRUB_SEC
> GD_IO_ERRS_SCRUB
> GD_IO_RQ_R_SCRUB
> GD_IO_RQ_R_SCRUB_SEC
> SIO_IO_EL_OF
> SIO_IO_EL_OF_SEC
> SIO_IO_OF_RE
> SIO_IO_OF_RE_SEC
> SIO_IO_PA_TH
> SIO_IO_PA_TH_SEC
> SIO_IO_RD_FC
> SIO_IO_RD_FC_HD
> SIO_IO_RD_FC_HD_SEC
> SIO_IO_RD_FC_SEC
> SIO_IO_RD_HD
> SIO_IO_RD_HD_SEC
> SIO_IO_RD_RQ_FC
> SIO_IO_RD_RQ_FC_HD
> SIO_IO_RD_RQ_FC_HD_SEC
> SIO_IO_RD_RQ_FC_SEC
> SIO_IO_RD_RQ_HD
> SIO_IO_RD_RQ_HD_SEC
> SIO_IO_RV_OF
> SIO_IO_RV_OF_SEC
> SIO_IO_SI_SV
> SIO_IO_SI_SV_SEC
> SIO_IO_WR_FC
> SIO_IO_WR_FC_SEC
> SIO_IO_WR_HD
> SIO_IO_WR_HD_SEC
> SIO_IO_WR_RQ_FC
> SIO_IO_WR_RQ_FC_SEC
> SIO_IO_WR_RQ_HD
> SIO_IO_WR_RQ_HD_SEC
}}}
! date range commands (that are supposed to work)
{{{
datafile=/tmp/metriccurrentall.txt
/usr/local/bin/dcli -l root -g /root/cell_group "cellcli -e list metriccurrent" > $datafile
export TM=$(date +%m/%d/%y" "%H:%M:%S)
/usr/local/bin/dcli -l root -g /root/cell_group cellcli -e LIST METRICHISTORY \"WHERE collectionTime \> \'2014-07-15T13:10:00-08:00\' AND collectionTime \< \'2014-07-19T13:10:00-08:00\'\" \> FC2.txt
/usr/local/bin/dcli -l root -g /root/cell_group cellcli -e LIST METRICHISTORY \"WHERE collectionTime \> \'2014-07-15T13:10:00-08:00\'\" \> FC2.txt
cellcli -e list metrichistory where collectionTime > '2014-07-15T13:10:00-08:00'
/usr/local/bin/dcli -l root -g /root/cell_group cellcli -e LIST METRICHISTORY \"WHERE name like \'FC_.*\' and collectionTime \> \'2014-02-27T13:10:00-08:00\'\" \> FC2.txt
list metrichistory CL_CPUT where collectionTime > '2008-04-28T13:32:13-07:00'
list metrichistory DB_IO_WT_SM_RQ where alertState != 'normal' AND -
collectionTime > '2008-04-28T13:32:13-07:00'
list metrichistory where metricObjectName like 'CD.*' AND -
collectionTime > '2008-04-28T13:32:13-07:00' AND -
collectionTime < '2008-04-28T15:32:13-07:00'
}}}
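The quoting through dcli is the fiddly part; the timestamps also have to be ISO-8601 with a UTC offset, which GNU date can generate instead of hand-typing them. A sketch (assumes GNU date on the compute node):
{{{
#!/bin/bash
# build a "last 4 days" predicate for LIST METRICHISTORY
FROM=$(date -d '4 days ago' +%Y-%m-%dT%H:%M:%S%:z)
TO=$(date +%Y-%m-%dT%H:%M:%S%:z)
/usr/local/bin/dcli -l root -g /root/cell_group \
  "cellcli -e \"list metrichistory where collectionTime > '$FROM' and collectionTime < '$TO'\"" > FC2.txt
}}}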
! spool all to file every hour
{{{
vi collect.sh
while :; do
/usr/local/bin/dcli -l root -g /root/cell_group "cellcli -e list metriccurrent" > tmp_file;
TM=$(date +%m/%d/%y" "%H:%M:%S);
ESCAPED_TM=$(date +%m/%d/%y" "%H:%M:%S | sed -e 's/\\/\\\\/g' -e 's/\//\\\//g' -e 's/&/\\\&/g')
echo "Dumped at $TM"
sed -i'.tmp' "s/^/$ESCAPED_TM /g" tmp_file;
cat tmp_file >> output_file;
sleep 3600;
done
nohup sh collect.sh &
rm tmp_file tmp_file.tmp
}}}
! just grep the historical hourly data across cells
this will finish in about 5 hours for a quarter rack, but it has very low overhead, the output file is super small, and you'll get enough history samples (7 days)
you also have to run it with --serial, because otherwise you'll hit the 100000 lines error
{{{
/usr/local/bin/dcli --serial -l root -g /root/cell_group cellcli -e LIST METRICHISTORY | grep ":00:" | bzip2 > metrichistory.bz2
}}}
https://hortonworks.com/blog/livy-a-rest-interface-for-apache-spark/
http://www.question-defense.com/2010/03/24/lm_sensors-on-cent-os-5-4-how-to-get-and-install-the-coretemp-module
run this as follows:
{{{
while : ; do sqlplus "/ as sysdba" @loadprof ; echo "--"; sleep 2 ; done
}}}
{{{
-- from http://timurakhmadeev.wordpress.com/2012/02/21/load-profile/
-- karlarao: added the spool append and tm column
--
-- usage:
-- while : ; do sqlplus "/ as sysdba" @loadprof ; echo "--"; sleep 2 ; done
--
spool loadprof.txt append
col short_name format a20 heading 'Load Profile'
col per_sec format 999,999,999.9 heading 'Per Second'
col per_tx format 999,999,999.9 heading 'Per Transaction'
set colsep ' '
select to_char(sysdate,'MM/DD/YY HH24:MI:SS') tm,
lpad(short_name, 20, ' ') short_name
, per_sec
, per_tx from
(select short_name
, max(decode(typ, 1, value)) per_sec
, max(decode(typ, 2, value)) per_tx
, max(m_rank) m_rank
from
(select /*+ use_hash(s) */
m.short_name
, s.value * coeff value
, typ
, m_rank
from v$sysmetric s,
(select 'Database Time Per Sec' metric_name, 'DB Time' short_name, .01 coeff, 1 typ, 1 m_rank from dual union all
select 'CPU Usage Per Sec' metric_name, 'DB CPU' short_name, .01 coeff, 1 typ, 2 m_rank from dual union all
select 'Redo Generated Per Sec' metric_name, 'Redo size' short_name, 1 coeff, 1 typ, 3 m_rank from dual union all
select 'Logical Reads Per Sec' metric_name, 'Logical reads' short_name, 1 coeff, 1 typ, 4 m_rank from dual union all
select 'DB Block Changes Per Sec' metric_name, 'Block changes' short_name, 1 coeff, 1 typ, 5 m_rank from dual union all
select 'Physical Reads Per Sec' metric_name, 'Physical reads' short_name, 1 coeff, 1 typ, 6 m_rank from dual union all
select 'Physical Writes Per Sec' metric_name, 'Physical writes' short_name, 1 coeff, 1 typ, 7 m_rank from dual union all
select 'User Calls Per Sec' metric_name, 'User calls' short_name, 1 coeff, 1 typ, 8 m_rank from dual union all
select 'Total Parse Count Per Sec' metric_name, 'Parses' short_name, 1 coeff, 1 typ, 9 m_rank from dual union all
select 'Hard Parse Count Per Sec' metric_name, 'Hard Parses' short_name, 1 coeff, 1 typ, 10 m_rank from dual union all
select 'Logons Per Sec' metric_name, 'Logons' short_name, 1 coeff, 1 typ, 11 m_rank from dual union all
select 'Executions Per Sec' metric_name, 'Executes' short_name, 1 coeff, 1 typ, 12 m_rank from dual union all
select 'User Rollbacks Per Sec' metric_name, 'Rollbacks' short_name, 1 coeff, 1 typ, 13 m_rank from dual union all
select 'User Transaction Per Sec' metric_name, 'Transactions' short_name, 1 coeff, 1 typ, 14 m_rank from dual union all
select 'User Rollback UndoRec Applied Per Sec' metric_name, 'Applied urec' short_name, 1 coeff, 1 typ, 15 m_rank from dual union all
select 'Redo Generated Per Txn' metric_name, 'Redo size' short_name, 1 coeff, 2 typ, 3 m_rank from dual union all
select 'Logical Reads Per Txn' metric_name, 'Logical reads' short_name, 1 coeff, 2 typ, 4 m_rank from dual union all
select 'DB Block Changes Per Txn' metric_name, 'Block changes' short_name, 1 coeff, 2 typ, 5 m_rank from dual union all
select 'Physical Reads Per Txn' metric_name, 'Physical reads' short_name, 1 coeff, 2 typ, 6 m_rank from dual union all
select 'Physical Writes Per Txn' metric_name, 'Physical writes' short_name, 1 coeff, 2 typ, 7 m_rank from dual union all
select 'User Calls Per Txn' metric_name, 'User calls' short_name, 1 coeff, 2 typ, 8 m_rank from dual union all
select 'Total Parse Count Per Txn' metric_name, 'Parses' short_name, 1 coeff, 2 typ, 9 m_rank from dual union all
select 'Hard Parse Count Per Txn' metric_name, 'Hard Parses' short_name, 1 coeff, 2 typ, 10 m_rank from dual union all
select 'Logons Per Txn' metric_name, 'Logons' short_name, 1 coeff, 2 typ, 11 m_rank from dual union all
select 'Executions Per Txn' metric_name, 'Executes' short_name, 1 coeff, 2 typ, 12 m_rank from dual union all
select 'User Rollbacks Per Txn' metric_name, 'Rollbacks' short_name, 1 coeff, 2 typ, 13 m_rank from dual union all
select 'User Transaction Per Txn' metric_name, 'Transactions' short_name, 1 coeff, 2 typ, 14 m_rank from dual union all
select 'User Rollback Undo Records Applied Per Txn' metric_name, 'Applied urec' short_name, 1 coeff, 2 typ, 15 m_rank from dual) m
where m.metric_name = s.metric_name
and s.intsize_csec > 5000
and s.intsize_csec < 7000)
group by short_name)
order by m_rank;
spool off
exit
}}}
Is there a way to find out if any DB locking/contention happened in the last month?
http://oracledoug.com/serendipity/index.php?/archives/1478-Diagnosing-Locking-Problems-using-ASH-Part-2.html
http://files.e2sn.com/slides/Tanel_Poder_log_file_sync.pdf
OPDG-slow perf Wait: log file sync MOS 390374.1
''Adaptive Log file sync'' http://www.pythian.com/news/36791/adaptive-log-file-sync-oracle-please-dont-do-that-again/
Script to Collect Log File Sync Diagnostic Information (lfsdiag.sql) [ID 1064487.1]
https://sites.google.com/site/embtdbo/examples/commit-frequency-oracle
https://sites.google.com/site/embtdbo/oracle-commits
http://files.e2sn.com/slides/Tanel_Poder_log_file_sync.pdf
http://shallahamer-orapub.blogspot.com/2012/02/speed-sometimes-means-changing-rules.html
http://shallahamer-orapub.blogspot.com/2010/10/commit-time-vs-log-file-sync-time.html
http://filezone.orapub.com/Research/20120216_CommitWrite/dml1.sql
http://filezone.orapub.com/Research/20120216_CommitWrite/dml2.sql
http://filezone.orapub.com/Research/20120216_CommitWrite/cw_dataCollection.txt
* create the directories
{{{
mkdir -p /home/oracle/dba
cd /home/oracle/dba
mkdir bin etc log scripts
}}}
* search and edit the following in the logfile_cleanup script; essentially you have to change the AUDIT_DIR* entries to match your environment settings
{{{
$ locate "rdbms/audit"
/u01/app/11.2.0/grid/rdbms/audit
/u01/app/oracle/product/11.2.0/dbhome_1/rdbms/audit
}}}
* create the script under bin directory
* create the logrotate.conf under etc directory
* search for and add the following files to the logrotate.conf
{{{
locate listener.log
locate alert_ | grep -i "log" | grep u01
}}}
* below is the logrotate.conf
{{{
# see "man logrotate" for details
# rotate log files monthly
monthly
# keep 12 months worth of backlogs
rotate 12
# create new (empty) log files after rotating old ones
create
# uncomment this if you want your log files compressed
compress
# Logrotate configuration for ASM and Database alert.logs.
# ----------------------------------------------------------------------
/u01/app/oracle/diag/tnslsnr/desktopserver/listener/trace/listener.log {
extension '.log'
nocreate
}
/u01/app/oracle/diag/rdbms/dw/dw/trace/alert_dw.log {
extension '.log'
nocreate
}
}}}
* schedule the script in cron
{{{
#-------------------------------------------------------
# Cleanup old logfiles, trace files, trm files.
# Also rotate the alert log files and listener log file.
#-------------------------------------------------------
0 0 * * * /home/oracle/dba/bin/logfile_cleanup >/home/oracle/dba/log/logfile_cleanup.log 2>&1
}}}
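To test the rotation without waiting for cron, logrotate can be dry-run and then forced against the custom conf (the state-file path below is just an example):
{{{
# dry run first (-d makes no changes), then force an actual rotation
/usr/sbin/logrotate -d /home/oracle/dba/etc/logrotate.conf
/usr/sbin/logrotate -f -s /home/oracle/dba/log/logrotate.status /home/oracle/dba/etc/logrotate.conf
}}}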
http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying
http://prefetch.net/articles/linuxpci.html
http://www.cyberciti.biz/faq/how-can-i-find-out-if-my-ethernet-card-nic-is-being-recognized-or-not/
good device info for configuring udev
udevinfo -a -p /block/sda/sda7 | less
extract detailed information on the hardware configuration of the machine including network cards
http://www.cyberciti.biz/faq/linux-list-network-cards-command/
{{{
lshw -class network
}}}
to get the eth interfaces
{{{
lspci | grep -i eth
}}}
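lspci -k additionally shows which kernel driver is bound to each NIC, which pairs well with the udev info above:
{{{
# show ethernet controllers together with the kernel driver in use
lspci -nnk | grep -iA3 ethernet
}}}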
! the script
{{{
[root@enkx3cel01 ~]# cat ./lst18-01-sfcconsumer.pl
#!/usr/bin/perl
# Name: lst18-01-sfcconsumer.pl
# Usage: ./lst18-01-sfcconsumer.pl [-g|-c] [cell group file|list of cells] -n [topN]
use Getopt::Std;
use Text::ParseWords;
sub usage {
print "Usage: /lst18-01-sfcconsumer.pl [-g|-c] [cell group file|list of cells] -n [topN]\n";
}
## Command line argument handling ##
getopts("g:c:n:",\%options);
die usage unless (defined $options{n});
die usage unless ((defined $options{c}) || (defined $options{g}));
$dclipref="-l root -g $options{g}" if defined $options{g};
$dclipref="-l root -c $options{c}" if defined $options{c};
$dclitail="attributes cachedSize,dbUniqueName,hitCount,missCount,objectNumber";
## End Command line argument handling ##
open(F,"dcli ${dclipref} cellcli -e list flashcache attributes name,size|");
while (<F>) {
@words=quotewords('\\s+', 0, $_);
# size comes back with a unit suffix; handle T as well as G, otherwise a
# "1.45T" flash cache parses as 1.45 and inflates the %Used figure
if ($words[2]=~s/T//) { $cbytes=1024*1024*1024*1024*$words[2]; }
else { $words[2]=~s/G//; $cbytes=1024*1024*1024*$words[2]; }
$cell{$words[0]}+=$cbytes;
}
close(F);
open(F,"dcli ${dclipref} cellcli -e list flashcachecontent ${dclitail}|");
while (<F>) {
@words=quotewords('\\s+', 0, $_);
$cached{$words[0]}+=$words[1]; # Array for storage by cell
$db{$words[2]}+=$words[1]; # Array for storage by DB
$mb=$words[1]/1024/1024;
$objd=sprintf "%-8.2f %8s %8.0f %8.0f %8.0f", "$mb", "$words[2]", "$words[3]", "$words[4]", "$words[5]"; # Bld string
push @DTL, $objd;
}
close(F);
$tcellused=0;
$rc=0;
printf "%-10s %8s %8s %8s\n", "Cell", "Avail", "Used", "%Used";
printf "%-10s %-8s %-8s %8s\n", "-"x10, "-"x8, "-"x8, "-"x5;
foreach my $key (sort keys %cell) {
$celltot=$cell{$key}/1024/1024/1024;
$cellused=$cached{$key}/1024/1024/1024;
$tcellused=$tcellused + $cellused;
$pctused=100 * ($cellused / $celltot);
printf "%10s %8.2f %8.2f %8.3f\n", "$key", $celltot, $cellused,$pctused;
}
printf "\n%20s %-8.2f\n\n", "Total GB used:", $tcellused;
printf "%-10s %8s %8s\n", "DB", "DBUsed", "%Used";
printf "%-10s %8s %8s\n", "-"x10, ,"-"x8, "-"x6;
foreach my $key (sort keys %db) {
$dbused=$db{$key}/1024/1024/1024;
$pctused=100 * ($dbused / $tcellused);
printf "%-10s %8.2f %8.3f\n", "$key", $dbused,$pctused;
}
printf "\n%-8s %8s %8s %8s %8s\n", "CachedMB", "DB", "HitCount", "MissCnt", "objNo";
printf "%-8s %8s %8s %8s %8s\n", "-"x8, "-"x8, "-"x8, "-"x8, "-"x8;
foreach my $line (sort { $b <=> $a } @DTL) {
last if $rc eq $options{n};
print "$line\n";
$rc++;
}
}}}
! example run
{{{
[root@enkx3cel01 ~]# ./lst18-01-sfcconsumer.pl -g cell_group -n 10
Cell Avail Used %Used
---------- -------- -------- -----
enkx3cel01: 1.45 1465.66 100812.008
enkx3cel02: 1.45 1465.50 100800.971
enkx3cel03: 1.45 1464.68 100744.424
Total GB used: 4395.85
DB DBUsed %Used
---------- -------- ------
AVLTY 1.58 0.036
BDT12 5.57 0.127
BIGDATA 233.58 5.314
DBFS 3011.69 68.512
DBM 22.30 0.507
DEMO 625.45 14.228
DEMOX3 1.10 0.025
GLUENT 466.57 10.614
OLTP 0.39 0.009
PSFIN 1.86 0.042
SRI 0.99 0.023
UNKNOWN 23.62 0.537
WZDB 1.14 0.026
CachedMB DB HitCount MissCnt objNo
-------- -------- -------- -------- --------
420190.12 DBFS 23261310 0 146144
418908.19 DBFS 23194483 0 146144
384476.56 DBFS 21269452 0 146144
363576.45 DBFS 21744 0 146167
362320.29 DBFS 22760 0 146167
333210.95 DBFS 20340 0 146167
106946.16 DBFS 410929 373653 15911
91940.15 DBFS 12729 0 147272
91273.22 DBFS 10327 0 147272
84458.11 DEMO 96439 97646 70145
}}}
{{{
[root@enkx3cel01 ~]# ./lst18-01-sfcconsumer.pl -c enkx3cel01 -n 5
Cell Avail Used %Used
---------- -------- -------- -----
enkx3cel01: 1.45 1465.66 100812.024
Total GB used: 1465.66
DB DBUsed %Used
---------- -------- ------
AVLTY 0.50 0.034
BDT12 1.90 0.130
BIGDATA 74.62 5.092
DBFS 978.88 66.787
DBM 7.11 0.485
DEMO 250.41 17.085
DEMOX3 0.33 0.022
GLUENT 144.36 9.849
OLTP 0.12 0.008
PSFIN 0.59 0.040
SRI 0.31 0.021
UNKNOWN 6.17 0.421
WZDB 0.37 0.025
CachedMB DB HitCount MissCnt objNo
-------- -------- -------- -------- --------
384476.56 DBFS 21269452 0 146144
333210.95 DBFS 20340 0 146167
106946.16 DBFS 410929 373653 15911
84103.43 DBFS 9745 0 147272
78457.59 DEMO 91434 88474 70145
}}}
!! sysmetric
https://canali.web.cern.ch/canali/docs/article_metalink_on_AWR.htm
http://dboptimizer.com/2011/07/07/mining-awr-statistics-metrics-verses-statistics/
https://sites.google.com/site/oraclemonitor/short-duration-sysmetric
https://sites.google.com/site/oraclemonitor/long-duration-sysmetric
http://dboptimizer.com/category/uncategorized/page/9/
https://blog.pythian.com/do-you-know-if-your-database-slow/
{{{
Title: Manual querying AWR for trend analysis and capacity planning
Category: Oracle Database / Database Tuning
Note: this document has been authored by an Oracle customer and has not been subject to an Oracle technical review.
Submitted by: Luca Canali
Senior DBA
CERN - European Organization for Nuclear Research
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Summary: System resource usage and trend-analysis data gathering for capacity planning can use the Oracle 10g AWR repository as the source of data, both for RAC and single instance.
1. System metric
I had the need to provide management with system resource utilization data for trend analysis and capacity planning of a production database. Production is on 10g RAC. Standard OS monitoring tools provide system metrics for each cluster node separately, which would require the development of custom scripts to aggregate the data in a cluster-wide view. Instead I have used AWR to extract system resource usage metrics for the whole cluster. This has saved me considerable time and allowed for easier analysis. In particular I ran the query reported here below against dba_hist_sysmetric_summary. The result of the query was spooled to a file and imported to a spreadsheet to produce graphs from the collected metric values.
----------------------------
set lines 250
set pages 9999
spool sysmetric_outp.log
alter session set nls_date_format='dd-mm-yyyy hh24:mi';
select min(begin_time), max(end_time),
sum(case metric_name when 'Physical Read Total Bytes Per Sec' then average end) Physical_Read_Total_Bps,
sum(case metric_name when 'Physical Write Total Bytes Per Sec' then average end) Physical_Write_Total_Bps,
sum(case metric_name when 'Redo Generated Per Sec' then average end) Redo_Bytes_per_sec,
sum(case metric_name when 'Physical Read Total IO Requests Per Sec' then average end) Physical_Read_IOPS,
sum(case metric_name when 'Physical Write Total IO Requests Per Sec' then average end) Physical_write_IOPS,
sum(case metric_name when 'Redo Writes Per Sec' then average end) Physical_redo_IOPS,
sum(case metric_name when 'Current OS Load' then average end) OS_LOad,
sum(case metric_name when 'CPU Usage Per Sec' then average end) DB_CPU_Usage_per_sec,
sum(case metric_name when 'Host CPU Utilization (%)' then average end) Host_CPU_util, --NOTE 100% = 1 loaded RAC node
sum(case metric_name when 'Network Traffic Volume Per Sec' then average end) Network_bytes_per_sec,
snap_id
from dba_hist_sysmetric_summary
group by snap_id
order by snap_id;
spool off
--------------------------
Prior to running this and the following analysis I had set the retention period for AWR to 31 days (default is 7 days, which may be too short for trend analysis): EXECUTE DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 60*24*31)
2. Service Activity
On a production 10g RAC database we run several database applications. Each application is associated with a dedicated Oracle service. For performance and capacity planning analysis I needed to find the list of most active applications/services and their activity trend. Service activity is defined as the ratio of 'DB Time' over 'Elapsed Time' (i.e. the percentage of time a given service is in a DB call, as opposed to processing application logic or waiting for input). I extracted the application activity data from the AWR repository, and in particular the DBA_HIST_SERVICE_STAT view. The query I used makes use of analytic functions to extract the deltas of the 'DB time' metric between successive AWR snapshots and is RAC aware.
--------------
set lines 250
set pages 9999
col service_name for a40
col Interval_time for a40
spool service_activity.log
alter session set nls_date_format='dd-mm-yyyy hh24:mi';
select service_name,max(END_INTERVAL_TIME) Interval_time,round(sum(DeltaValue/DeltaT_sec)*(100/1000000),0) Pct_active_time
from (
select sn.snap_id,ss.service_name,sn.END_INTERVAL_TIME,ss.instance_number,ss.stat_name,ss.value,
lag(ss.value) over (partition by ss.service_name,ss.instance_number order by sn.snap_id) prevValue,
ss.value - lag(ss.value) over (partition by ss.service_name,ss.instance_number order by sn.snap_id nulls first) BlindDelta,
nvl2(lag(ss.value) over (partition by ss.service_name,ss.instance_number order by sn.snap_id), -- if the instance restarted Delta=0, error by defect
ss.value - lag(ss.value) over (partition by ss.service_name,ss.instance_number order by sn.snap_id), 0 ) DeltaValue,
extract(hour from END_INTERVAL_TIME-begin_interval_time)*3600
+ extract(minute from END_INTERVAL_TIME-begin_interval_time)* 60
+ extract(second from END_INTERVAL_TIME-begin_interval_time) DeltaT_sec
from DBA_HIST_SERVICE_STAT ss, dba_hist_snapshot sn
where ss.snap_id=sn.snap_id and ss.instance_number=sn.instance_number
and ss.stat_name='DB time'
)
group by snap_id, service_name
having sum(DeltaValue/DeltaT_sec)*(100/1000000)>10 -- Filters out points of low activity (<10% PCT_active_time) and negative values due to instance restarts
order by snap_id, service_name;
spool off
------------------
Note: The scripts reported here have been tested against Oracle Enterprise Edition 10.2.0.3 for Linux i386 with RAC option, however they can be used against all 10g versions, both RAC and single instance.
}}}
<<showtoc>>
! code clinic
http://www.lynda.com/SharedPlaylist/3bd14e75f0014f05a34c169289d7a29a
''PHP'' http://www.lynda.com/PHP-tutorials/Code-Clinic-PHP/162137-2.html
''Python'' http://www.lynda.com/Python-tutorials/Code-Clinic-Python/163752-2.html
''Ruby'' http://www.lynda.com/Ruby-tutorials/Code-Clinic-Ruby/164143-2.html
! json
Working with Data on the Web http://www.lynda.com/CSS-tutorials/Working-Data-Web/133326-2.html
! node.js
@@http://www.lynda.com/Developer-Web-Development-tutorials/Up-Running-NPM-Node-Package-Manager/409274-2.html@@
http://www.lynda.com/Web-Web-Design-tutorials/Web-Project-Workflows-Gulpjs-Git-Browserify/154416-2.html
! ember.js
@@Up and Running with Ember.js http://www.lynda.com/Emberjs-tutorials/Up-Running-Emberjs/178116-2.html@@
! handlebars.js
http://www.lynda.com/Web-Interaction-Design-tutorials/What-you-should-already-know/156166/171020-4.html
! git
http://www.lynda.com/GitHub-tutorials/GitHub-Web-Designers/162276-2.html
! R
@@http://tryr.codeschool.com/@@
http://www.lynda.com/R-tutorials/Up-Running-R/120612-2.html
http://www.lynda.com/R-tutorials/R-Statistics-Essential-Training/142447-2.html
! tools
http://www.lynda.com/vi-tutorials/Up-Running-vi/170336-2.html
! hadoop
http://www.lynda.com/Hadoop-training-tutorials/5811-0.html
http://www.lynda.com/Barton-Poulson/984353-1.html
! forecasting
http://www.lynda.com/Excel-tutorials/Excel-Data-Analysis-Forecasting/153775-2.html
! ITIL
http://www.lynda.com/Network-Administration-tutorials/Capacity-management/184459/188328-4.html
! youtube
http://www.lynda.com/YouTube-tutorials/Marketing-Monetizing-YouTube/181240-2.html
! iOS
iOS app playlist http://www.lynda.com/SharedPlaylist/f7abcfaac7bf414f844ff6f29617d0e7
! financial statement
http://www.lynda.com/search?q=financial+statement
James Webb: How to Read a Financial Statement https://www.youtube.com/watch?v=Jkse-Wafe9U
! Understanding Copyright: A Deeper Dive
http://www.lynda.com/Business-Business-Skills-tutorials/Understanding-Copyright-Deeper-Dive/365065-2.html
https://georgecoghill.wordpress.com/2012/08/12/highlight-draw-on-your-mac-screen/
<<<
While the app is running, hit Command-H to enable or disable it. While enabled, use the modifier keys on the Mac keyboard to access the tools as follows:
• Shift toggles the rectangle drawing tool
• Alt toggles the oval drawing tool
• Ctrl toggles the line drawing tool
• Control-Option toggles the arrow tool
• Ctrl-Shift toggles the grid lines (vertical or horizontal only)
• Backspace removes the last shape
• Command-K will clear all shapes
• A custom shortcut (set in the styles panel) will toggle the application on/off (system wide)
Here’s a tip on using Highlight to capture annotated screenshots to the clipboard, for quick pasting into an email message: enable Highlight, annotate your screen as desired; press the Command-Shift-4 keyboard combo to get the screenshot crosshair tool then release the keys; click and drag over your chosen area, but don’t release the mouse yet; hold the Control key (which will send the capture to the Clipboard instead of saving to disk); paste your annotated screen capture into the body of an email.
You can also open Preview and choose File -> New from Clipboard or do the same thing from Photoshop if you want to further manipulate the image. But this method saves you from having to navigate to the file in the Finder which to me is a pain.
Highlight is probably also great for Powerpoint or Keynote presentations or any other situation where you are projecting your Mac’s screen and want to mark it up on the fly.
<<<
! HOWTO:
{{{
How to identify MacBook Pro models
http://support.apple.com/kb/HT4132
https://en.wikipedia.org/wiki/MacBook_Pro
sysctl hw.model
http://forums.macrumors.com/threads/mid-2012-macbook-pro-non-retina-the-last-of-the-easily-customizable-swappable%10.1506745/
https://en.wikipedia.org/wiki/List_of_Intel_Core_i7_microprocessors
ram upgrade
https://www.ifixit.com/Guide/MacBook+Pro+13-Inch+Unibody+Mid+2012+RAM+Replacement/10374
https://www.youtube.com/watch?v=a4xVgE06YBU
http://mac.appstorm.net/how-to/hardware-how-to/how-and-why-to-upgrade-your-macs-ram/
Upgrading the Ram & Hard Drive on my MacBook Pro (Mid-2012) https://www.youtube.com/watch?v=1I60AKLn0YI
Installing Second Hard Drive,SSD and 16gb of Ram Into Mac Book Pro 13in 2012
part1 https://www.youtube.com/watch?v=SJUtPkdzWd4
part2 https://www.youtube.com/watch?v=jy3nvjta9kk
part3 https://www.youtube.com/watch?v=jVPpTbqVBU4
}}}
!products:
{{{
Corsair Vengeance 16GB (2x8GB) DDR3 1600 MHz (PC3 12800) Laptop Memory (CMSX16GX3M2A1600C10)
http://www.amazon.com/gp/product/B0076W9Q5A/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&tag=handhotshee-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=B0076W9Q5A
Samsung Electronics 840 EVO-Series 1TB 2.5-Inch SATA III Single Unit Version Internal Solid State Drive MZ-7TE1T0BW
http://www.amazon.com/Samsung-Electronics-EVO-Series-2-5-Inch-MZ-7TE1T0BW/dp/B00E3W16OU/ref=sr_1_1?ie=UTF8&qid=1408928852&sr=8-1&keywords=samsung+840+evo+1tb
MacBook Pro (Mid-2012) caddy
http://www.amazon.com/gp/product/B00AUA2XGO/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B00AUA2XGO&linkCode=as2&tag=mediau05-20
Silverhill 20 Piece Tool Kit for Apple Products
http://www.amazon.com/Silverhill-Piece-Tool-Apple-Products/dp/B002O95BJK/ref=pd_bxgy_pc_img_y
Inateck 2.5 Inch USB 3.0 Hard Drive Disk HDD External Enclosure Case http://www.amazon.com/dp/B00FCLG65U/ref=pe_385040_30332200_TE_item
superduper cloning software
http://www.shirt-pocket.com/SuperDuper/SuperDuperDescription.html
}}}
!buy:
{{{
buy EVO 840 1TB
buy Corsair Vengeance 16GB (2x8GB) DDR3 1600 MHz
buy MacBook Pro (Mid-2012) caddy
buy Silverhill 20 Piece Tool Kit for Apple Products
}}}
!installation
understanding SSDs http://www.samsung.com/global/business/semiconductor/minisite/SSD/us/html/about/whitepaper04.html
trim enabler https://www.youtube.com/watch?v=6owLLxkbfIg
http://www.tomshardware.com/reviews/macbook-pro-ssd-trim,3538-5.html
http://www.tomshardware.com/reviews/samsung-840-evo-msata-review,3716-10.html
http://www.macobserver.com/tmo/forums/viewthread/86333/#652498
https://discussions.apple.com/thread/5311534?start=15&tstart=0
http://forums.macrumors.com/showthread.php?t=1720966
! refurbished
https://www.amazon.com/dp/B0074703CM
https://www.amazon.com/Apple-MacBook-MD101LL-13-3-Inch-Laptop/dp/B0074703CM
http://www.bestbuy.com/site/apple-macbook-pro-intel-core-i5-13-3-display-4gb-memory-500gb-hard-drive-silver/5430505.p?skuId=5430505#
http://www.sears.com/apple-macbook-pro-core-md101ll-a-core-i5-2.5ghz/p-SPM8673379108?sid=IDx20110310x00001i&gclid=CLjDxMKNgNACFRVbhgodpsYE-g&gclsrc=aw.ds
http://www.apple.com/shop/browse/home/specialdeals/mac/macbook_pro/13
https://eshop.macsales.com/shop/Apple-Systems/Used/MacBook-Pro
https://www.gainsaver.com/refurbished-used-apple-mac-laptops-sale?&ACode=2012
! upgrade retina (storage only)
http://www.computerworld.com/article/3056789/apple-mac/how-to-upgrade-your-macbook-pro-retina-to-a-1tb-pcie-ssd.html
MacBook Pro with Retina Display Upgradable? https://discussions.apple.com/thread/4076729?start=0&tstart=0
http://www.laptopmag.com/articles/upgrade-ssd-macbook-pro-retina-display
! OWC kits
https://eshop.macsales.com/shop/ssd/owc/macbook-pro/2012-macbook-pro-non-retina (non-retina 2TB max)
https://eshop.macsales.com/shop/ssd/owc/macbook-pro-retina-display/2013-2014-2015 (retina 1TB max)
! 2012 13" vs 15"
http://www.everymac.com/systems/apple/macbook_pro/macbook-pro-unibody-faq/differences-between-macbook-pro-13-15-inch-mid-2012-usb3.html
antiglare vs glossy https://www.youtube.com/watch?v=-xDX4dgD00s
! Macbook Pro 15.4-inch (Glossy) 2.7Ghz Quad Core i7 (Mid 2012)
https://www.gainsaver.com/used-macbook-pro-15-4-2-7ghz-qci7-750gb-hd-8gb-ram-2012-a1286-2556
https://eshop.macsales.com/item/Apple/D102LG7B8GC/
https://eshop.macsales.com/item/Apple/D104LG7B16C/
https://eshop.macsales.com/Search/SearchPromo.cfm?Ntk=Primary&N2=4294922272&Ns=P_ID%7c1&Ne=4294922318&N=100518+4294922294+4294922290+4294922272&Ntt=OWCUsedMac
https://eshop.macsales.com/Search/SearchPromo.cfm?Ntk=Primary&N2=4294922313&Ns=P_ID%7c1&Ne=4294922315&N=100518+4294922294+4294922313&Ntt=OWCUsedMac
http://www.apple.com/shop/browse/home/specialdeals/mac/macbook_pro/13
http://www.sears.com/apple-macbook-pro-core-md101ll-a-core-i5-2.5ghz/p-SPM8673379108?sid=IDx20110310x00001i&gclid=CLjDxMKNgNACFRVbhgodpsYE-g&gclsrc=aw.ds
http://www.bestbuy.com/site/apple-macbook-pro-intel-core-i5-13-3-display-4gb-memory-500gb-hard-drive-silver/5430505.p?skuId=5430505#
https://www.amazon.com/dp/B0074703CM
https://www.amazon.com/Apple-MacBook-MD101LL-13-3-Inch-Laptop/dp/B0074703CM
https://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=MD104LL%2FA
https://www.amazon.com/Apple-MacBook-MD104LL-15-4-Inch-VERSION/dp/B007471D2Q
! get resolution
https://www.cnet.com/products/apple-macbook-pro-15-in-summer-2012/review/
http://gizmodo.com/5538624/macbook-pro-15-is-the-higher-resolution-screen-worth-it
MacBook Pro 15: High Resolution Screen Upgrade https://www.youtube.com/watch?v=_IGZG_d8x7I
https://de.ifixit.com/Answers/View/124673/Upgrading+Low+Res+LCD+to+High+Res+compatibility
{{{
Karl-MacBook:bin karl$ system_profiler SPDisplaysDataType | grep Resolution
Resolution: 1280 x 800
}}}
! 2013 15" macbook retina (early and late)
https://en.wikipedia.org/wiki/MacBook_Pro
http://www.macofalltrades.com/Apple-MacBook-Pro-15-inch-2-8GHz-Quad-core-i7-p/mbp-15-28-e13r.htm (ivy bridge)
http://www.macofalltrades.com/MacBook-Pro-15-inch-R-2-3GHz-QCi7-Late-2013-p/mbp-15-23-l13rb.htm (haswell L4 cache)
! 2012
https://www.amazon.com/Apple-MacBook-MD101LL-2-5GHz-Graphics/dp/B008BEYEL8/ref=sr_1_3?s=pc&ie=UTF8&qid=1478241258&sr=1-3&keywords=macbook+pro
https://www.virtualbox.org/wiki/Download_Old_Builds_5_0
look for VirtualBox 5.0.30
http://download.virtualbox.org/virtualbox/5.0.30/VirtualBox-5.0.30-112061-OSX.dmg
http://download.virtualbox.org/virtualbox/5.0.30/Oracle_VM_VirtualBox_Extension_Pack-5.0.30-112061.vbox-extpack
! mojave does not support 5.0.30
! new install for mojave is VirtualBox 5.2.14
https://download.virtualbox.org/virtualbox/5.2.14/
https://forums.virtualbox.org/viewtopic.php?f=8&t=89573
! guest additions uninstall
{{{
/opt/VBoxGuestAdditions-*/uninstall.sh
}}}
How to Test an SMTP Mail Gateway From a Command Line Interface (Doc ID 74269.1)
http://www.dba-oracle.com/t_e_mail_interface_windows.htm
http://www.orafaq.com/maillist/oracle-l/2003/11/24/1709.htm
http://www.my-whiteboard.com/perl-script-oracle-database-alert-log-parser-send-email-notification-if-error-is-found/
http://www.freelists.org/post/oracle-l/Alert-emails-from-database-10g,2
http://www.freelists.org/post/oracle-l/Attachment-in-PLSQL,3
http://forums.oracle.com/forums/thread.jspa?threadID=668442
http://forums.oracle.com/forums/thread.jspa?threadID=1081911
http://yaodba.blogspot.com/2006/09/monitor-alert-log.html
Sending Oracle Alert E-mails Using Mailx with an Operating System User ID (Doc ID 290899.1)
How to setup OS Email Notification (Doc ID 742057.1)
Sending Output to Mail on UNIX does not Send Report as an Attachment (Doc ID 157365.1)
How to FTP a File From a Unix Machine Hosting Forms Server to Another Machine (Doc ID 209199.1)
How To Masquerade Sender Address In Sendmail (Doc ID 1054852.1)
Linux sendmail configuration (Doc ID 405229.1)
A Few Hints Configuring Sendmail (Doc ID 200798.1)
How to Execute an Operating System Command (Like Sendmail) when a Trigger is Fired (Doc ID 274716.1)
What Options Are Available to Monitor the Mail Queue and How to Enable a Sendmail/Postfix Style Maillog? (Doc ID 1086131.1)
Linux OS Service 'sendmail' (Doc ID 555115.1)
Setup UNIX Sendmail to Access SMTP Gateway (Doc ID 175683.1)
How To Delete Messages From sendmail Queue (Doc ID 737897.1)
How to - Using sendmail as a client with AUTH (Doc ID 848028.1)
Exadata: Replace Sendmail With Postfix (Doc ID 960727.1)
Oracle Email Basics (Doc ID 309892.1)
How To Send An E-Mail From Forms With The ORA_JAVA Package (Doc ID 131980.1)
How to Send an Email With Attachment Via Forms Using ORA_JAVA Package (Doc ID 184339.1)
Creating an Email Definition With the Number of Days to Expiration (Doc ID 1088107.1)
How to determine what email system is being utilized for Oracle Alert processing? (Doc ID 428193.1)
How to Send E-mail With Attachments from PL/SQL Using Java Stored Procedures and an 8i database. (Doc ID 120994.1)
Oracle Alert Only Sends Mail To Email Addresses In 'CC' Field And Not To Addresses In The 'To' Field (Doc ID 1010866.7)
How to Generate E-mail within PL/SQL Routines (Doc ID 66347.1)
Which electronic mail facilities does Alert use/support? (Doc ID 211840.1)
SMTP Reference Sites (Doc ID 76729.1)
How to Specify a Default Sender in the Alert Email (Doc ID 578025.1)
How To enable telnet and ftp services on SLES8 (Doc ID 231222.1)
Considerations how to start UM listener / LISTENER_ES as oracle (Doc ID 205298.1)
How To Filter Outbound Content (Doc ID 825163.1)
Linux OS Service 'spamassassin' (Doc ID 564263.1)
http://www.beyondlogic.org/solutions/cmdlinemail/cmdlinemail.htm
''send to multiple users''
http://www.unix.com/unix-dummies-questions-answers/843-how-send-mail-multiple-users.html
http://www.unix.com/shell-programming-scripting/25331-using-mailx-send-email-multiple-users.html
mainframe
zEC12 http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=SP&infotype=PM&appname=STGE_ZS_ZS_USEN&htmlfid=ZSD03029USEN&attachment=ZSD03029USEN.PDF
https://en.wikipedia.org/wiki/IBM_zEnterprise_System
https://en.wikipedia.org/wiki/IBM_zEC12_(microprocessor)
http://www-03.ibm.com/systems/power/hardware/795/
https://en.wikipedia.org/wiki/POWER7
https://en.wikipedia.org/wiki/IBM_mainframe
LPAR MSU Metrics table http://www.ibm.com/support/knowledgecenter/SSUFR9_1.2.0/com.ibm.swg.ba.cognos.zcap_sol.1.2.0.doc/c_zcap_sol_lpar_msu_metrics_table.html
Calculating LPAR Image Capacity http://www-01.ibm.com/support/docview.wss?uid=tss1td103094&aid=1
file:///C:/Users/karl/Downloads/CPU+frequency+monitoring+using+lparstat.htm
https://www.midlandinfosys.com/pdf/IBM-Power7-710-730-Technica-Overview.pdf
Planning and Sizing Virtualization https://www.e-techservices.com/public/TechU2006/v04.pdf
http://www.softpanorama.org/Commercial_unixes/AIX/aix_performance_tuning.shtml
http://www.hexaware.com/fileadd/mainframe_performance_tuning_brochure.pdf
AIX Micro-Partitioning concepts http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.88.1926&rep=rep1&type=pdf
IBM's z12 mainframe engine makes each clock count http://forums.theregister.co.uk/forum/1/2012/09/05/ibm_z12_mainframe_engine/
HA! http://forums.theregister.co.uk/forum/1/2012/09/05/ibm_z12_mainframe_engine/#c_1535291
https://en.wikipedia.org/wiki/Hercules_(emulator)#Performance
http://searchdatacenter.techtarget.com/feature/Traditional-mainframe-systems-vs-x86-software-mainframe#slideshow
IBM zEnterprise System Technical Introduction http://www.redbooks.ibm.com/Redbooks.nsf/RedpieceAbstracts/sg248050.html?Open
Getting Started with zPCR (IBM's Processor Capacity Reference) http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS1381
mainframe concepts https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmainframe/zconc_mfhwPUs.htm
{{{
SELECT * FROM TABLE(DBMS_WORKLOAD_REPOSITORY.ASH_REPORT_TEXT(1859430704, 2, TO_DATE('2013-06-12 21:25:30', 'YYYY-MM-DD HH24:MI:SS'), TO_DATE('2013-06-12 21:26:15', 'YYYY-MM-DD HH24:MI:SS'), null, null, null, null ));
}}}
{{{
VAR dbid NUMBER
PROMPT Listing latest AWR snapshots ...
SELECT snap_id, end_interval_time
FROM dba_hist_snapshot
--WHERE begin_interval_time > TO_DATE('2011-06-07 07:00:00', 'YYYY-MM-DD HH24:MI:SS')
WHERE end_interval_time > SYSDATE - 1
ORDER BY end_interval_time;
ACCEPT bid NUMBER PROMPT "Enter begin snapshot id: "
ACCEPT eid NUMBER PROMPT "Enter end snapshot id: "
BEGIN
SELECT dbid INTO :dbid FROM v$database;
END;
/
SET TERMOUT OFF PAGESIZE 0 HEADING OFF LINESIZE 1000 TRIMSPOOL ON TRIMOUT ON TAB OFF
SPOOL awr_local_inst_1.html
SELECT * FROM TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML(:dbid, 1, &bid, &eid));
-- SPOOL awr_local_inst_2.html
-- SELECT * FROM TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML(:dbid, 2, &bid, &eid));
--
-- SPOOL awr_local_inst_3.html
-- SELECT * FROM TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML(:dbid, 3, &bid, &eid));
SPOOL awr_global.html
SELECT * FROM TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_GLOBAL_REPORT_HTML(:dbid, CAST(null AS VARCHAR2(10)), &bid, &eid));
SPOOL OFF
SET TERMOUT ON PAGESIZE 5000 HEADING ON
}}}
https://www.oreilly.com/ideas/mapping-big-data
''mapreduce python'' http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
''bashreduce'' http://www.linux-mag.com/id/7407/
<<showtoc>>
https://en.wikipedia.org/wiki/Markdown
http://www.bitfalls.com/2014/05/the-state-of-markdown-editors-may-2014.html
http://www.sitepoint.com/best-markdown-editors-windows/
http://www.elegantthemes.com/blog/tips-tricks/using-markdown-in-wordpress
https://stackedit.io/editor
! markdown cheatsheet
https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet
https://enterprise.github.com/downloads/en/markdown-cheatsheet.pdf
! wordpress markdown support
https://en.wikipedia.org/wiki/Markdown
! wiki and markdown
http://stackoverflow.com/questions/4656298/which-wiki-text-syntax-or-markdown-to-use
! github markdown guide
https://guides.github.com/features/mastering-markdown/
https://help.github.com/categories/writing-on-github/
https://guides.github.com/
https://www.youtube.com/githubguides
https://help.github.com/
https://help.github.com/articles/basic-writing-and-formatting-syntax/#using-emoji , http://www.webpagefx.com/tools/emoji-cheat-sheet/
! google chrome markdown plugin
https://github.com/adam-p/markdown-here
! other markdown tools
https://github.com/adam-p/markdown-here/wiki/Other-Markdown-Tools
! word to markdown converter
https://github.com/jgm/pandoc/releases/tag/1.17.0.2
http://ronn-bundgaard.dk/blog/convert-docx-to-markdown-with-pandoc/ <-- good stuff
http://pandoc.org/README.html#reader-options <-- for extracting images on word
! markdown to word
http://superuser.com/questions/181939/how-should-i-convert-markdown-or-similar-to-a-word-document
http://stackoverflow.com/questions/14249811/markdown-to-docx-including-complex-template
http://bob.yexley.net/generate-a-word-document-from-markdown-on-os-x/
! sublime text markdown
https://packagecontrol.io/installation
https://github.com/timonwong/OmniMarkupPreviewer
Editing and previewing textile markup using Sublime Text https://www.youtube.com/watch?v=k9EXDVYhXB8
http://www.ryanthaut.com/guides/sublime-text-3-markdown-and-live-reload/
https://github.com/vkocubinsky/SublimeTableEditor
https://blog.mariusschulz.com/2014/12/16/how-to-set-up-sublime-text-for-a-vastly-better-markdown-writing-experience
! issues , bugs
strikethrough https://github.com/timonwong/OmniMarkupPreviewer/issues/85
https://github.com/timonwong/OmniMarkupPreviewer/issues/82
! readme templates
https://github.com/dbader/readme-template , https://dbader.org/blog/write-a-great-readme-for-your-github-project
https://github.com/fraction/readme-boilerplate
https://github.com/revolunet/node-readme
https://github.com/funqtion/README-template
https://github.com/FreeCodeCamp/FreeCodeCamp
https://github.com/openfarmcc/OpenFarm
https://github.com/chartercc/bidding-engine <- check uncheck
https://github.com/bj/readme
https://github.com/zackharley/readme
https://github.com/davidbgk/open-source-template
https://github.com/dawsonbotsford/meread
! software
https://en.support.wordpress.com/markdown/
http://25.io/mou/
markdownpad
! page jump that works for both github and wordpress
https://en.support.wordpress.com/splitting-content/page-jumps/
! comment in markdown
http://stackoverflow.com/questions/4823468/comments-in-markdown
http://db-engines.com/en/system/MarkLogic%3BMongoDB
https://adamfowlerml.wordpress.com/2013/11/25/marklogic-huh-what-is-it-good-for/
http://www.marklogic.com/blog/tale-two-facets-mongodb-vs-marklogic/
* commonly used with WITH AS to reuse the query block (see the sketch below)
[[COALESCE to WITH AS]]
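A minimal sketch of the combination, assuming hypothetical DEPT/EMP tables: the WITH AS block is defined once and reused, and COALESCE fills in the gaps where the outer join finds no match.
{{{
-- hypothetical DEPT/EMP tables; dept_sal is written once and reused
WITH dept_sal AS (
  SELECT deptno, SUM(sal) AS total_sal
  FROM   emp
  GROUP  BY deptno
)
SELECT d.deptno,
       COALESCE(s.total_sal, 0) AS total_sal  -- 0 for departments with no employees
FROM   dept d
LEFT   JOIN dept_sal s ON s.deptno = d.deptno
ORDER  BY d.deptno;
}}}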
Matillion vs. Apache Airflow vs. Stitch
https://www.stitchdata.com/vs/matillion/airflow/
other ETL, ELT tools
https://support.snowflake.net/s/question/0D50Z00007p41cPSAQ/what-etlelt-tools-do-you-use-with-snowflake-if-any
! 1) Run the following scripts on each node:
{{{
set arraysize 5000
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;
set pagesize 50000
set linesize 550
col begin_interval_time format a30
select trim('&_instname') instname, trim('&_dbid') db_id, trim('&_hostname') hostname, s.snap_id, TO_CHAR(s.begin_interval_time,'MM/DD/YY HH24:MI:SS') tm, s.instance_number,
rl.current_utilization, rl.max_utilization
from DBA_HIST_RESOURCE_LIMIT rl, dba_hist_snapshot s
where resource_name = 'sessions' and rl.instance_number=&_instancenumber and
s.snap_id = rl.snap_id and
s.instance_number = rl.instance_number
order by s.snap_id;
col begin_interval_time format a30
select trim('&_instname') instname, trim('&_dbid') db_id, trim('&_hostname') hostname, s.snap_id, TO_CHAR(s.begin_interval_time,'MM/DD/YY HH24:MI:SS') tm, s.instance_number,
rl.current_utilization, rl.max_utilization
from DBA_HIST_RESOURCE_LIMIT rl, dba_hist_snapshot s
where resource_name = 'processes' and rl.instance_number=&_instancenumber and
s.snap_id = rl.snap_id and
s.instance_number = rl.instance_number
order by s.snap_id;
}}}
! 2) Then drill down on the specific period
{{{
-- ASH SQL
select sample_time, INSTANCE_NUMBER, program, service_hash, count(sid) from
(
select
to_char(ash.sample_time,'MM/DD/YY HH24:MI:SS') sample_time,
INSTANCE_NUMBER,
ash.session_id sid,
ash.session_serial# serial#,
ash.user_id user_id,
ash.module,
ash.program,
ash.service_hash,
ash.sql_id,
ash.sql_plan_hash_value,
sum(decode(ash.session_state,'ON CPU',1,0)) "CPU",
sum(decode(ash.session_state,'WAITING',1,0)) -
sum(decode(ash.session_state,'WAITING',
decode(wait_class,'User I/O',1, 0 ), 0)) "WAIT" ,
sum(decode(ash.session_state,'WAITING',
decode(wait_class,'User I/O',1, 0 ), 0)) "IO" ,
sum(decode(session_state,'ON CPU',1,1)) "TOTAL"
from DBA_HIST_ACTIVE_SESS_HISTORY ash
where ash.sample_time between to_date('05/29/13 07:00', 'MM/DD/YY HH24:MI') and to_date('05/29/13 23:59', 'MM/DD/YY HH24:MI')
and program not like 'oracle%'
group by sample_time, INSTANCE_NUMBER,session_id,user_id,session_serial#,module,program,service_hash,sql_id,sql_plan_hash_value
order by sum(decode(session_state,'ON CPU',1,1)) desc
)
group by sample_time,INSTANCE_NUMBER, program, service_hash
order by 1 asc
/
}}}
! 3) Then graph and analyze
https://www.evernote.com/shard/s48/sh/9a221524-ecd1-4f2a-b32d-0c6a43322eb2/f6e2e18818c997522d144dfdcad264aa
References:
http://oracle-info.com/2012/11/26/troubleshoot-ora-00020-maximum-number-of-processes-exceeded-systematic-way/
https://www.markusdba.net/2016/10/23/switching-a-multitenant-database-to-extended-data-types/
https://smarttechways.com/2022/02/11/ora-43929-collation-cannot-be-specified-if-parameter-max_string_sizestandard/
https://community.oracle.com/tech/developers/discussion/4491313/why-do-i-get-ora-43929-when-create-table-with-collate-when-i-do-have-max-string-size-set-to-extended
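For reference, a minimal sketch of the extended data types switch these links discuss, assuming a non-CDB test instance (in a CDB each PDB has to be opened in UPGRADE mode, see the first link); note the change is one-way:
{{{
-- minimal sketch, non-CDB test instance; MAX_STRING_SIZE can only be changed in UPGRADE mode
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
ALTER SYSTEM SET max_string_size = EXTENDED;
@?/rdbms/admin/utl32k.sql   -- adjusts existing objects for 32k VARCHAR2/NVARCHAR2/RAW
SHUTDOWN IMMEDIATE
STARTUP
}}}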
-measuring processor power
http://www.intel.com/content/dam/doc/white-paper/resources-xeon-measuring-processor-power-paper.pdf
{{{
select sum(bytes)/1024/1024 from dba_segments ;
SUM(BYTES)/1024/1024
--------------------
12212.125
select sum(bytes)/1024/1024, owner
from dba_segments
where owner = 'SYSMAN'
group by owner
order by 1;
SUM(BYTES)/1024/1024 OWNER
-------------------- ------------------------------
8809.6875 SYSMAN
select sum(bytes)/1024/1024, owner, segment_name
from dba_segments
where owner = 'SYSMAN'
group by owner, segment_name
order by 1;
SUM(BYTES)/1024/1024 OWNER SEGMENT_NAME
-------------------- ------------------------------ ---------------------------------------------------------------------------------
272 SYSMAN EM_METRIC_STATS
296 SYSMAN SYS_LOB0000083096C00015$$
296 SYSMAN EM_TRACE_CRITICAL
320 SYSMAN EM_METRIC_VALUES_HOURLY
432 SYSMAN EM_CS_SCORE_HIST
select sum(bytes)/1024/1024, owner, segment_name
from dba_segments
where owner = 'SYSMAN'
and lower(segment_name) like '%hourly%'
group by owner, segment_name
order by 1;
SUM(BYTES)/1024/1024 OWNER SEGMENT_NAME
-------------------- ------------------------------ ---------------------------------------------------------------------------------
47.25 SYSMAN EM_METRIC_VALUES_HOURLY_PK
320 SYSMAN EM_METRIC_VALUES_HOURLY
select count(*), METRIC_ITEM_ID
from sysman.EM_METRIC_VALUES_HOURLY
group by METRIC_ITEM_ID
order by 1;
18:57:56 SYS@oemrep> desc sysman.EM_METRIC_VALUES_HOURLY
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
METRIC_ITEM_ID NOT NULL NUMBER(38)
COLLECTION_TIME NOT NULL DATE
COUNT_OF_COLLECTIONS NOT NULL NUMBER(38)
AVG_VALUES NOT NULL SYSMAN.EM_METRIC_VALUE_ARRAY
MIN_VALUES NOT NULL SYSMAN.EM_METRIC_VALUE_ARRAY
MAX_VALUES NOT NULL SYSMAN.EM_METRIC_VALUE_ARRAY
STDDEV_VALUES NOT NULL SYSMAN.EM_METRIC_VALUE_ARRAY
select count(*), METRIC_GROUP_NAME
from sysman.gc$metric_values_hourly
group by METRIC_GROUP_NAME
order by 1;
--where METRIC_GROUP_ID = '2009';
--where METRIC_KEY_ID = '2009';
18:55:17 SYS@oemrep> desc sysman.gc$metric_values_hourly
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
ENTITY_TYPE NOT NULL VARCHAR2(64)
ENTITY_NAME NOT NULL VARCHAR2(256)
ENTITY_GUID NOT NULL RAW(16)
PARENT_ME_TYPE VARCHAR2(64)
PARENT_ME_NAME VARCHAR2(256)
PARENT_ME_GUID NOT NULL RAW(16)
TYPE_META_VER NOT NULL VARCHAR2(8)
METRIC_GROUP_NAME NOT NULL VARCHAR2(64)
METRIC_COLUMN_NAME NOT NULL VARCHAR2(64)
COLUMN_TYPE NOT NULL NUMBER(1)
COLUMN_INDEX NOT NULL NUMBER(3)
DATA_COLUMN_TYPE NOT NULL NUMBER(2)
METRIC_GROUP_ID NOT NULL NUMBER(38)
METRIC_GROUP_LABEL VARCHAR2(64)
METRIC_GROUP_LABEL_NLSID VARCHAR2(64)
METRIC_COLUMN_ID NOT NULL NUMBER(38)
METRIC_COLUMN_LABEL VARCHAR2(64)
METRIC_COLUMN_LABEL_NLSID VARCHAR2(64)
DESCRIPTION VARCHAR2(128)
SHORT_NAME VARCHAR2(40)
UNIT VARCHAR2(32)
IS_FOR_SUMMARY NUMBER
IS_STATEFUL NUMBER
NON_THRESHOLDED_ALERTS NUMBER
METRIC_KEY_ID NOT NULL NUMBER(38)
KEY_PART_1 NOT NULL VARCHAR2(256)
KEY_PART_2 NOT NULL VARCHAR2(256)
KEY_PART_3 NOT NULL VARCHAR2(256)
KEY_PART_4 NOT NULL VARCHAR2(256)
KEY_PART_5 NOT NULL VARCHAR2(256)
KEY_PART_6 NOT NULL VARCHAR2(256)
KEY_PART_7 NOT NULL VARCHAR2(256)
COLLECTION_TIME NOT NULL DATE
COLLECTION_TIME_UTC DATE
COUNT_OF_COLLECTIONS NOT NULL NUMBER(38)
AVG_VALUE NUMBER
MIN_VALUE NUMBER
MAX_VALUE NUMBER
STDDEV_VALUE NUMBER
select table_name, partitions_retained
from sysman.em_int_partitioned_tables
where table_name in ('EM_METRIC_VALUES','EM_METRIC_VALUES_HOURLY','EM_METRIC_VALUES_DAILY');
SELECT
min(TO_CHAR(collection_time,'MM/DD/YY HH24:MI:SS')) collection_time,
max(TO_CHAR(collection_time,'MM/DD/YY HH24:MI:SS')) collection_time
FROM
sysman.gc$metric_values_hourly
where METRIC_GROUP_NAME = 'ME$GVASH_TO_CSV'
COLLECTION_TIME COLLECTION_TIME
----------------- -----------------
01/30/15 00:00:00 03/03/15 23:00:00 <== 1month and 3 days, approx 40MB
12.16% 1011680 ME$GVASH_TO_CSV
15.99% 1330560 CellDisk_Metric
12% x 320MB = 40MB per database per month
2% x 320MB = 7MB per database per month
}}}
https://panchaleswar.wordpress.com/2018/01/08/with-the-discovery-of-meltfdown-and-spectre-the-trend-and-future-of-cloud-computing/ <-- good stuff
https://www.meetup.com/nylug-meetings/events/249437922/ <-- good talk, met a few good engineers! the vulnerabilities were explained very well, both graphically and at a low level
<<showtoc>>
! courses
https://learning.oreilly.com/videos/the-full-stack/9781788470735/9781788470735-video18_1?autoplay=false
! memcached performance
Facebook and memcached - Tech Talk https://www.youtube.com/watch?v=UH7wkvcf0ys
<<<
* need to have memcached close to webservers to avoid another hop
{{{
Memcache is efficient if you are working just with strings. Redis is helpful, if your keys/values are other data structures,
like lists, sets etc. I think, as in Facebook, majority of the data (posts, comments, links, pictures) are strings
(or can be serialized/de-serialized into string), they went for Memcached.
Also, Mark says, they implemented memcache in 2005 (or at least got inspired from Live Journal). Redis, on the other
hand, was created in 2009.
}}}
* system cpu optimization - https://stackoverflow.com/questions/10520182/linux-when-to-use-scatter-gather-io-readv-writev-vs-a-large-buffer-with-frea
* user cpu optimization - get rid of strlen
<<<
!! postgres memcached
https://learning.oreilly.com/library/view/postgresql-96-high/9781784392970/72d20dc6-90f3-4ab8-b58b-a529f5726bad.xhtml
https://learning.oreilly.com/library/view/postgresql-10-high/9781788474481/51728aa7-bf10-4ebd-888f-4ec53c02acaf.xhtml
!! mysql memcached - end to end app dev
https://learning.oreilly.com/library/view/developing-web-applications/9780470414644/ch10.html
!! python memcached
https://learning.oreilly.com/library/view/foundations-of-python/9781430230038/ch08.html
https://learning.oreilly.com/library/view/foundations-of-python/9781430258551/9781430258544_Ch08.xhtml
! memcached multi node
Database management MemCacheD https://www.youtube.com/watch?v=lVGdXIqaF-8&list=PLMDfT0LnMFdPu5ldfQwg_Jd0i07JAeppK
https://www.youtube.com/results?search_query=multi+cluster+memcached
! references
* https://medium.com/@Alibaba_Cloud/redis-vs-memcached-in-memory-data-storage-systems-3395279b0941
* https://www.linkedin.com/pulse/memcached-vs-redis-which-one-pick-ranjeet-vimal/#targetText=Redis%20is%20In%2Dmemory%20data,database%2C%20cache%20and%20message%20broker.&targetText=While%20that's%20all%20that%20memcached,is%20a%20data%20structure%20server.
* https://www.infoworld.com/article/3063161/why-redis-beats-memcached-for-caching.html
* https://stackoverflow.com/questions/10558465/memcached-vs-redis
put this in a script file, then run it:
{{{
#!/bin/ksh
clear
echo "`/usr/bin/svmon -G `">xx
cat xx|grep memory|awk '{print $2"\t"$3"\t"$4}'>mem_frames
cat xx|grep pg |awk '{print $3"\t"$4}'>mem_pages
echo "Real memory Usage"
echo "-----------------------------"
cat mem_frames|while read size inuse free
do
rsize=$(printf "%s\n" 'scale =5; '$size'/1024/1024*4' | bc)
rinuse=$(printf "%s\n" 'scale =5; '$inuse'/1024/1024*4' | bc)
rfree=$(printf "%s\n" 'scale =5; '$free'/1024*4' | bc)
percent=$(printf "%s\n" 'scale =3; '$inuse'/'$size'*100' | bc)
echo "Total Installed : $rsize GB"
echo "Total Used : $rinuse GB"
echo "Total Free : $rfree MB"
echo "REAL MEMORY USAGE : % : $percent"
echo "-----------------------------"
done
echo ""
cat mem_pages|while read pgsize pginuse
do
paging_size=$(printf "%s\n" 'scale =5; '$pgsize'/1024*4' | bc)
paging_used=$(printf "%s\n" 'scale =5; '$pginuse'/1024*4' | bc)
percent2=$(printf "%s\n" 'scale =3; '$pginuse'/'$pgsize'*100' | bc)
echo "Paging Space Usage"
echo "-----------------------------"
echo "Total Pg Space : $paging_size MB"
echo "Pg Space Used : $paging_used MB"
echo "Percent Used PAGING : $percent2"
echo "-----------------------------"
done
#
#-- sample output
#Real memory Usage
#-----------------------------
#Total Installed : 1.99992 GB
#Total Used : 0.96468 GB
#Total Free : 1060.08984 MB
#REAL MEMORY USAGE : % : 48.200
#-----------------------------
#Paging Space Usage
#-----------------------------
#Total Pg Space : 512.00000 MB
#Pg Space Used : 2.55076 MB
#Percent Used PAGING : 0.400
#-----------------------------
}}}
https://forums.teradata.com/forum/database/performance-comparsion-merge-into-vs-classic-upsert
https://community.oracle.com/thread/1073421?start=0&tstart=0
http://stackoverflow.com/questions/10539627/when-doing-a-merge-in-oracle-sql-how-can-i-update-rows-that-arent-matched-in-t
http://www.kimballgroup.com/2008/11/design-tip-107-using-the-sql-merge-statement-for-slowly-changing-dimension-processing/
http://www.purplefrogsystems.com/blog/2012/01/using-t-sql-merge-to-load-data-warehouse-dimensions/
https://books.google.com/books?id=w7_sEal658UC&pg=PA98&lpg=PA98&dq=data+warehouse+merge+vs+minus&source=bl&ots=aO9R2BJGc4&sig=BtkbIvvJ2FnSm8odHfOuQ67_fTE&hl=en&sa=X&ved=0ahUKEwiXpr39jr3NAhUr64MKHTSCBA44ChDoAQgbMAA#v=onepage&q=data%20warehouse%20merge%20vs%20minus&f=false
https://www.safaribooksonline.com/library/view/data-virtualization-for/9780123944252/
https://www.safaribooksonline.com/library/view/the-kimball-group/9781119216315/c11.xhtml
https://www.safaribooksonline.com/library/view/t-sql-querying/9780133986631/
https://www.safaribooksonline.com/library/view/star-schema-the/9780071744324/
https://www.mssqltips.com/sqlservertip/1704/using-merge-in-sql-server-to-insert-update-and-delete-at-the-same-time/ <- test SQLs included
http://datawarehouse4u.info/SCD-Slowly-Changing-Dimensions.html
https://www.simple-talk.com/sql/learn-sql-server/the-merge-statement-in-sql-server-2008/
https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/merge?lang=en
https://oracle-base.com/articles/9i/merge-statement
http://www.slideshare.net/Hadoop_Summit/hadoop-and-enterprise-data-warehouse?next_slideshow=1
http://www.slideshare.net/cloudera/best-practices-for-the-hadoop-data-warehouse-edw-101-for-hadoop-professionals
https://www.quora.com/What-are-the-advantages-of-the-MERGE-statement
http://stackoverflow.com/questions/21859144/merge-and-extract-product-data-from-numerous-sql-tables-to-hadoop-key-value-stor
http://stackoverflow.com/questions/38109475/sqoop-hadoop-how-to-join-merge-old-data-and-new-data-imported-by-sqoop-in-la
http://getindata.com/blog/tutorials/tutorial-using-presto-to-combine-data-from-hive-and-mysql-in-one-sql-like-query/
http://inquidia.com/news-and-info/hadoop-how-update-without-update
http://www.oratable.com/oracle-merge-command-for-upsert/
R merge https://www.r-bloggers.com/merging-two-different-datasets-containing-a-common-column-with-r-and-r-studio/
http://www.made2mentor.com/2013/08/how-to-load-slowly-changing-dimensions-using-t-sql-merge/
Overview Oracle SQL - MERGE Statement https://www.youtube.com/watch?v=pV9BYSpqqk0
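A minimal sketch of the MERGE upsert pattern these links discuss, using hypothetical STG_CUSTOMER/DIM_CUSTOMER tables (a type 1 slowly changing dimension, i.e. overwrite in place):
{{{
-- hypothetical staging-to-dimension upsert (SCD type 1)
MERGE INTO dim_customer d
USING stg_customer s
ON (d.customer_id = s.customer_id)
WHEN MATCHED THEN
  UPDATE SET d.cust_name = s.cust_name,
             d.city      = s.city
WHEN NOT MATCHED THEN
  INSERT (customer_id, cust_name, city)
  VALUES (s.customer_id, s.cust_name, s.city);
}}}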
! other databases
https://wiki.postgresql.org/wiki/MergeTestExamples
http://blog.mclaughlinsoftware.com/2009/05/25/mysql-merge-gone-awry/
http://www.xaprb.com/blog/2006/06/17/3-ways-to-write-upsert-and-merge-queries-in-mysql/
http://markmail.org/message/cq5ttaerfed5ulon
http://readlist.com/lists/redhat.com/linux-lvm/0/2837.html
http://osdir.com/ml/linux-lvm/2010-01/msg00048.html
http://markmail.org/message/rdqkj7tt7tomi7z3?q=VG+metadata+too+large+for+circular+buffer&page=1&refer=cq5ttaerfed5ulon
http://forums.citrix.com/message.jspa?messageID=1506683
http://osdir.com/ml/linux-lvm/2009-09/msg00093.html
http://comments.gmane.org/gmane.linux.redhat.release.rhel5/7433
https://carlos-sierra.net/2013/07/27/two-common-mistakes-using-method_opt-in-dbms_stats/
two common mistakes (see the sketch after this list):
<<<
* METHOD_OPT => ‘FOR ALL INDEXED COLUMNS SIZE…’
* METHOD_OPT => NULL
<<<
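A minimal sketch that avoids both mistakes, assuming a hypothetical SCOTT.EMP table; the point is that the default 'FOR ALL COLUMNS SIZE AUTO' gathers base statistics on every column, while passing NULL does not mean "use the default":
{{{
-- minimal sketch, hypothetical SCOTT.EMP; the default METHOD_OPT in 10g+ is 'FOR ALL COLUMNS SIZE AUTO'
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SCOTT',
    tabname    => 'EMP',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO'  -- stats on all columns, histograms only where column usage warrants
  );
END;
/
}}}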
https://blogs.oracle.com/optimizer/how-does-the-methodopt-parameter-work
https://blog.dbi-services.com/a-migration-pitfall-with-all-column-size-auto/
! metric extensions (next generation UDM)
''OBE''
Oracle Enterprise Manager 12c: Metric Extensions Part I –Deploy Metric Extensions
http://apex.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5741,29#prettyPhoto
Oracle Enterprise Manager 12c: Metric Extensions Part II –Deploy Metric Extensions
http://apex.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5742,29#prettyPhoto
''howto''
http://mmikhail.blogspot.sg/2013/01/metric-extensions-in-oem-12c.html
http://oemgc.files.wordpress.com/2012/05/using-metric-extensions-in-em12c.pdf
others:
http://www.slideshare.net/Enkitec/em12c-monitoring-metric-extensions-and-performance-pages
http://nyoug.org/Presentations/2013/Minhas_SPA.pdf
http://iiotzov.wordpress.com/tag/oracle-oem-12c-metrics-extension/
! BIP (BI Publisher)
For the BIP document (version 11.1.1.6), you can refer to the following link:
Oracle® Fusion Middleware Data Modeling Guide for Oracle Business Intelligence Publisher
This is a manual to tell you how to build the data model in the BI Publisher
http://docs.oracle.com/cd/E23943_01/bi.1111/e22258/toc.htm
Oracle® Fusion Middleware Report Designer's Guide for Oracle Business Intelligence Publisher (3 Creating BI Publisher Layout Templates)
There are several ways to create the report layout (e.g. MS Word RTF, Excel, PDF, and so on). I just use the web interface in BI Publisher to create the report layout
http://docs.oracle.com/cd/E23943_01/bi.1111/e22254/create_lay_tmpl.htm#BHBJJGCF
''Prereq''
cloud control 12.1.0.1.* requires BIP 11.1.1.5.0
cloud control 12.1.0.2.0 requires BIP 11.1.1.6.0
cloud control 12.1.0.4 has BIP integrated upon install
''OBE''
Oracle Enterprise Manager 12c: Install and Integrate BI Publisher
http://apex.oracle.com/pls/apex/f?p=44785:24:0:::24:P24_CONTENT_ID,P24_PREV_PAGE:6001,1#prettyPhoto
Oracle Enterprise Manager 12c: Reporting with BI Publisher
http://apex.oracle.com/pls/apex/f?p=44785:24:3081108624154::::P24_CONTENT_ID,P24_PREV_PAGE:6004,24
Oracle Enterprise Manager 12c: Implement the BI Publisher Security Model
http://apex.oracle.com/pls/apex/f?p=44785:24:3081108624154:::24:P24_CONTENT_ID,P24_PROD_SECTION_GRP_ID,P24_PREV_PAGE:6002,,24
Oracle Enterprise Manager 12c: Create a BI Publisher Report
http://apex.oracle.com/pls/apex/f?p=44785:24:3081108624154:::24:P24_CONTENT_ID,P24_PROD_SECTION_GRP_ID,P24_PREV_PAGE:6003,,24
''youtube''
Oracle Enterprise Manager 12c: Install and Integrate BI Publisher http://www.youtube.com/watch?v=4ATQDUN0jbo
Enterprise Manager Cloud Control 12c: Implement BI Publisher Security Model http://www.youtube.com/watch?v=mxwK7SS04d8&list=PL15990AF71BFECCCD&index=2
Enterprise Manager Cloud Control 12c: Create a BI Publisher Report http://www.youtube.com/watch?v=2SoTB022JOs&list=PL15990AF71BFECCCD&index=3
Create 1st Report with BI Publisher http://www.youtube.com/watch?v=TK7KYaCEGZU
BIP scheduling http://www.youtube.com/watch?v=IZnAcg58Jqw&list=UUWQO0Ent3xaT22bE7H7rX_g&feature=c4-overview
<<<
''12cR4''
http://www.oracle.com/technetwork/middleware/bi-publisher/downloads/index.html
https://docs.oracle.com/cd/E24628_01/install.121/e24089/install_em_bip.htm
https://apex.oracle.com/pls/apex/f?p=44785:24:818312932291::NO::P24_CONTENT_ID,P24_PREV_PAGE:10239,2#prettyPhoto
https://apex.oracle.com/pls/apex/f?p=44785:24:818312932291::NO::P24_CONTENT_ID,P24_PREV_PAGE:10240,2
https://apex.oracle.com/pls/apex/f?p=44785:24:818312932291::NO::P24_CONTENT_ID,P24_PREV_PAGE:10242,2
<<<
''howto''
http://docs.oracle.com/cd/E24628_01/install.121/e24089/install_em_bip.htm
http://oemgc.files.wordpress.com/2012/10/integrating-bi-publisher-with-enterprise-manager.pdf
''BIP download software''
http://www.oracle.com/technetwork/middleware/bi-publisher/downloads/index.html
! information publisher
http://docs.oracle.com/cd/E24628_01/doc.121/e24473/information_publisher.htm
! examples
How to create a Customized Report in 12c Cloud Control for Database Alert log Errors (Doc ID 1528268.1)
{{{
EMBIPAdministrator
EMBIPScheduler
EMBIPAuthor
EMBIPViewer
two components of BIP reports:
- data model
- layout to retrieve data from the data model
-- PREREQ
data model needs:
- a parameter
- list of values to populate the selection parameter
- a data set to extract the values data
-- STEP BY STEP
1) Create a new folder "Custom Reports"
2) Create a new folder "Custom Datamodels" under "Custom Reports"
3) Create a data model
3.a create list of values (list_of_db_lov)
example: instance_name, target_guid
> you can test the results
> you can define join conditions here
3.b create parameter so it can be displayed (list_of_db_param)
> reference here the list of values
4) Create a data set (db_instance_tablespace_info) -- see the sketch after this block
> this is a SQL query from the repository
> reference here the parameter (as a condition)
5) Save the data model
6) Click get XML output to run the report
7) Click on save as sample data
8) Create a report layer to display the data
8.a Create report
8.b Use the data model
8.c Choose chart template
8.d Save the report
8.e View report
}}}
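For step 4, a minimal sketch of what such a data set query could look like, assuming the MGMT$DB_TABLESPACES repository view; :list_of_db_param is the parameter from step 3.b (table and column names here are assumptions, not from the MOS note):
{{{
-- hypothetical data set query for db_instance_tablespace_info
SELECT tablespace_name,
       tablespace_size      / 1024 / 1024 AS size_mb,
       tablespace_used_size / 1024 / 1024 AS used_mb
FROM   sysman.mgmt$db_tablespaces
WHERE  target_guid = :list_of_db_param
ORDER  BY tablespace_name;
}}}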
* for the date format on a parameter you must follow the Java SimpleDateFormat patterns http://docs.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html
https://books.google.com/books?id=-xD59tbWFTEC&pg=PA355&lpg=PA355&dq=oracle.samples.xsh1&source=bl&ots=xwlfJGiC6t&sig=ACfU3U16zoIeETwppaqbgP99BBL4jHAQug&hl=en&sa=X&ved=2ahUKEwj_q_2omvfgAhUDm-AKHZLFAckQ6AEwAXoECAkQAQ#v=onepage&q=oracle.samples.xsh1&f=false
Tool for Gathering I/O Resource Manager Metrics: metric_iorm.pl (Doc ID 1337265.1)
{{{
You can then invoke it as follows:
1. To query the storage cell's current I/O metrics:
./metric_iorm.pl
2. To view the storage cell's historical I/O metrics, using explicit start and/or end times:
./metric_iorm.pl "where collectionTime > (start_time) and collectionTime < (end_time)"
The start and end times must be provided in a CellCLI-compliant format. For example, to view I/O metrics from 9 AM to 10 AM on 2011-05-25:
./metric_iorm.pl "where collectionTime > '2011-05-25T09:00:00-07:00' and collectionTime < '2011-05-25T10:00:00-07:00'"
3. To view the storage cell's historical I/O metrics, using relative start and/or end times:
./metric_iorm.pl "where ageInMinutes > (number_minutes) and ageInMinutes < (number_minutes)"
For example, to view I/O metrics between 2 and 3 hours ago:
./metric_iorm.pl "where ageInMinutes > 120 and ageInMinutes < 180"
4. To view I/O metrics from multiple storage cells, use dcli from the compute nodes.
Before running the metric script, you need to copy the script to all the storage cells as shown below. This copies the script to the default directory for user celladmin, /home/celladmin.
dcli -c cel01,cel02 -l celladmin -f metric_iorm.pl
dcli -g cell_names.dat -l celladmin -f metric_iorm.pl
where cell_names.dat contains the names of all the cells (cel01 and cel02), one per line.
You can then run the metric script based on the examples shown below.
dcli -c cel01,cel02 -l celladmin "/home/celladmin/metric_iorm.pl > /var/log/oracle/diag/asm/cell/metric_output"
dcli -g cell_names.dat -l celladmin "/home/celladmin/metric_iorm.pl > /var/log/oracle/diag/asm/cell/metric_output"
where cell_names.dat contains the names of all the cells (cel01 and cel02), one per line.
dcli -g cell_names.dat -l celladmin /home/celladmin/metric_iorm.pl \"where collectionTime \> \'2010-12-15T17:10:51-07:00\' and collectionTime \< \'2010-12-15T17:15:51-07:00\'\" \> /var/log/oracle/diag/asm/cell/metric_output
The Output of the Script
For each minute, metric_iorm.pl will output I/O metrics in the format below. This output can be seen in the file /var/log/oracle/diag/asm/cell/metric_output.
}}}
<<<
The "utilization" metric shows you how busy the disks are, on average. This metric is broken down by database and by consumer group. In each case, it is also further broken down by small I/Os (indicative of latency-sensitive workloads) and large I/Os (indicative of throughput-intensive workloads). They correspond to these metrics: DB_IO_UTIL_SM, DB_IO_UTIL_LG, CG_IO_UTIL_SM, and CG_IO_UTIL_LG.
The "small and large IOPS" metrics are an alternative to the "utilization" metrics. They show the rate of I/Os per second to disk, broken down by database, consumer group, and I/O size. They correspond to these metrics: DB_IO_RQ_SM_SEC, DB_IO_RQ_LG_SEC, CG_IO_RQ_SM_SEC, CG_IO_RQ_LG_SEC.
The "throughput" metrics are another alternative to the "utilization" metrics. They show the rate of I/Os to disk in megabytes per second, broken down by database and consumer group. They correspond to these metrics: DB_IO_BY_SEC and CG_IO_BY_SEC.
The "Flash Cache IOPS" metric shows you the rate of I/Os to the flash cache. It is also broken down by database and by consumer group. It corresponds to these metrics: DB_FC_IO_RQ_SEC and CG_FC_IO_RQ_SEC.
The "avg qtime" metric shows you the average amount of time in milliseconds that I/Os waited to be scheduled by I/O Resource Manager. They are broken down by database, consumer group, and I/O size. It corresponds to these metrics: DB_IO_WT_SM_RQ, DB_IO_WT_LG_RQ, CG_IO_WT_SM_RQ, CG_IO_WT_LG_RQ.
The "disk latency" metrics show the average disk latency in milliseconds. They are broken down by I/O size and by reads and writes. They are not broken down by database and consumer group since the disk driver does not recognize I/Os by database and consumer group. Therefore, the disk latencies across databases and consumer groups are the same. They correspond to these metrics: CD_IO_TM_R_SM_RQ, CD_IO_TM_R_LG_RQ, CD_IO_TM_W_SM_RQ,
CD_IO_TM_W_LG_RQ.
If your storage cell hosts multiple databases, then a database may not be listed for the following reasons:
All metrics for the database are zero because the database is idle.
The database is not explicitly listed in the inter-database IORM plan and the inter-database IORM plan does not have a default "share" directive. Use "list iormplan" to view the inter-database IORM plan.
How to Interpret the Output
These metrics can be used to answer the following, common questions:
Which database or consumer group is utilizing the disk the most? Use the disk utilization metrics to answer this question. You can also use the disk IOPS metrics. However, the total number of IOPS that can be sustained by the disk is extremely dependent on the ratio of reads vs writes, the location of the I/Os within the disk, and the ratio of small vs large I/Os. Therefore, we recommend using the disk utilization metrics.
Am I getting good latencies for my OLTP database or consumer group? The I/O latency, as seen by the database, is determined by the flash cache hit rate, the disk latency, and the IORM wait time. OLTP I/Os are small, so you should focus on the disk latency for small reads and writes. You can use IORM to improve the disk latency by giving high resource allocations to the OLTP databases and consumer groups. If necessary, you can also use the "low latency" objective. You can also decrease the IORM wait time by giving high resource allocations to the critical databases and consumer groups.
What is the flash cache hit rate for my database or consumer group? For OLTP databases and consumer groups, you should expect the flash cache to be used for a significant number of I/Os. Since the latency of flash cache I/Os is very low, the I/O response time, as seen by the database, can be optimized by improving the flash cache hit rate for critical workloads.
How much is I/O Resource Manager affecting my database or consumer group? If IORM is enabled, then IORM may delay issuing an I/O when the disks are under heavy load or when a database or consumer group has reached its I/O utilization limit, if any. IORM may also delay issuing large I/Os when it is optimizing for low latencies for OLTP workloads. These delays are reflected in the "average queue time" metrics. You can decrease the delays for critical databases and consumer groups by increasing their resource allocation.
<<<
<<showtoc>>
! microservices migration guide
https://developers.redhat.com/books/migrating-microservice-databases-relational-monolith-distributed-data/
<<<
Nice talk by Edson Yanaga. He talks about different patterns for moving from a relational monolith to distributed data (which is then exposed as a microservice).
The good part starts at 27:00 of the video, where he talks about CRUD vs CQRS and then the 9 integration patterns (see the sketch after this quote)
https://youtu.be/qR27-XWnQYs?t=1620
There is a book written about the 9 patterns
https://developers.redhat.com/books/migrating-microservice-databases-relational-monolith-distributed-data/
(same book) https://learning.oreilly.com/library/view/migrating-to-microservice/9781492048824/
[img(30%,30%)[ https://i.imgur.com/8AQgRtv.png]]
Then here are some other technical talks so that you can glue the concepts together.
I like this one because it shows sample code
A demo app that does (ES – event sourcing, MV – materialized view, CQRS) - Three Microservice Patterns to Tear Down Your Monoliths
https://youtu.be/84W9iY3CwdQ
Guido from Trivadis did a good job on focusing first on different architectures possible, then how to integrate kafka in the whole data flow/architecture
Building event-driven (Micro)Services with Apache Kafka by Guido Schmutz
https://youtu.be/IR1NLfaq7PU
<<<
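To make the CRUD vs CQRS distinction concrete, here is a minimal sketch in plain SQL (all names hypothetical): the write side only appends events, and the read side derives the current state from them, in this case as a materialized view.
{{{
-- hypothetical append-only event store (write side)
CREATE TABLE order_events (
  event_id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  order_id   NUMBER       NOT NULL,
  event_type VARCHAR2(30) NOT NULL,  -- e.g. CREATED, ITEM_ADDED, CANCELLED
  payload    CLOB,                   -- event body, e.g. JSON
  created_at TIMESTAMP DEFAULT SYSTIMESTAMP
);

-- hypothetical read model (query side): latest event per order
CREATE MATERIALIZED VIEW order_status AS
SELECT order_id,
       MAX(event_type) KEEP (DENSE_RANK LAST ORDER BY event_id) AS last_event
FROM   order_events
GROUP  BY order_id;
}}}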
! microservices on integration on DW
<<<
The only two real-world implementation talks I found on youtube that apply/integrate microservices to DW (kafka as a source):
Business Intelligence in microservice architecture
https://www.youtube.com/watch?v=IHD-K4RTvXg&t=23s
Chicago RDBMS Data Migration: Modernizing Traditional ETL into microservices
https://www.youtube.com/watch?v=thnGulk1Slk
sample code: https://github.com/jcherng-pivotal/scs-etl-demo
other:
Kafka messaging system on Informatica Intelligent Streaming
https://network.informatica.com/thread/23651
<<<
! microservices patterns
[img(100%,100%)[ https://i.imgur.com/7p8kBwI.png]]
https://microservices.io/patterns/index.html
! microservices technologies
{{{
Frameworks - Spring Boot
Databases - MySQL, Postgres, and Microsoft SQL server
Message brokers - Apache Kafka, ActiveMQ, RabbitMQ, and Redis Streams
}}}
! articles
https://microservices.io/patterns/data/event-sourcing.html
https://microservices.io/adopt/index.html
https://www.slideshare.net/chris.e.richardson
http://chrisrichardson.net/learnmicroservices.html
! transactional microservices
Solving distributed data management problems in a microservice architecture https://eventuate.io/
https://eventuate.io/exampleapps.html
https://github.com/eventuate-examples/es-kanban-board
! video talks
Three Microservice Patterns to Tear Down Your Monoliths https://www.youtube.com/watch?v=84W9iY3CwdQ
RedisConf17 - Microservices and Redis - Chris Richardson https://www.youtube.com/watch?v=CykUle8UUqk
Developing Applications with the Microservices Architecture by Chris Richardson (Complete) https://www.youtube.com/watch?v=WwrCGP96-P8
Developing microservices with aggregates - Chris Richardson https://www.youtube.com/watch?v=7kX3fs0pWwc
Chris Richardson - There is No Such Thing as a Microservice https://www.youtube.com/watch?v=FXCLLsCGY0s
! microservices terms
!! domain driven design
https://www.confluent.io/blog/microservices-apache-kafka-domain-driven-design
https://www.google.com/search?q=domain+driven+design&oq=domain+drive&aqs=chrome.0.0j69i57j0l6.2013j1j1&sourceid=chrome&ie=UTF-8
Domain Driven Design: The Good Parts - Jimmy Bogard https://www.youtube.com/watch?v=U6CeaA-Phqo
!! event sourcing
martin fowler event sourcing
https://www.google.com/search?q=martin+fowler+event+sourcing&oq=martin+fowler+event+sour&aqs=chrome.0.0j69i57j0j69i64.5062j0j1&sourceid=chrome&ie=UTF-8
GOTO 2017 • The Many Meanings of Event-Driven Architecture • Martin Fowler https://www.youtube.com/watch?v=STKCRSUsyP0
!! CQRS
Best practice/Storage selection for CQRS/Event sourcing architecture https://stackoverflow.com/questions/42958640/best-practice-storage-selection-for-cqrs-event-sourcing-architecture
!! bounded vs unbounded data
https://www.google.com/search?q=bounded+vs+unbounded+data&oq=bounded+vs+unbounded+data&aqs=chrome..69i57j0l3.4443j1j4&sourceid=chrome&ie=UTF-8
https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101
https://www.thomashenson.com/bound-vs-unbound-data-in-real-time-analytics/
[img(70%,70%)[ https://i.imgur.com/Suanb25.png]]
!! Serverless Functions vs. Microservices
https://www.stratoscale.com/blog/compute/keeping-small-serverless-functions-vs-microservices/
.
A Systematic Approach for Migrating to Oracle Cloud SaaS [Tech Article] https://community.oracle.com/docs/DOC-997476
-- migration emc data domain
emc data domain http://www.emc.com/sp/oracle/h12346-backup-recover-migrate-oracle-wp.pdf
-- migration external tables
Moving data across platforms using external tables https://www.yumpu.com/en/document/view/17319324/unloading-data-using-external-tables-nyoug/1
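A minimal sketch of the unload side of that technique, assuming a hypothetical EMP table and an existing DATA_PUMP_DIR directory object; the resulting dump file can be copied to the target platform and mounted as an external table there:
{{{
-- hypothetical unload of EMP to a platform-independent Data Pump file
CREATE TABLE emp_unload
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY data_pump_dir
  LOCATION ('emp_unload.dmp')
)
AS SELECT * FROM emp;
}}}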
Upgrade and Migrate to Oracle Database 19c
https://www.oracle.com/africa/a/tech/docs/twp-upgrade-oracle-database-19c.pdf
https://mikedietrichde.com/2019/06/03/automatic-sql-plan-management-in-oracle-database-19c/
{{{
If you’d like to restore the Oracle 12.1 behavior:
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK' ,
parameter => 'ALTERNATE_PLAN_BASELINE',
value => '');
END;
/
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
parameter => 'ALTERNATE_PLAN_SOURCE',
value => '');
END;
/
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
parameter => 'ALTERNATE_PLAN_LIMIT',
value => 10);
END;
/
If you’d like to revert to the Oracle 12.2.0.1 and Oracle 18c behavior:
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK' ,
parameter => 'ALTERNATE_PLAN_BASELINE',
value => 'EXISTING');
END;
/
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
parameter => 'ALTERNATE_PLAN_SOURCE',
value => 'CURSOR_CACHE+AUTOMATIC_WORKLOAD_REPOSITORY');
END;
/
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
parameter => 'ALTERNATE_PLAN_LIMIT',
value => 10);
END;
/
Of course, switching to the Oracle 19c defaults works this way:
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK' ,
parameter => 'ALTERNATE_PLAN_BASELINE',
value => 'AUTO');
END;
/
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
parameter => 'ALTERNATE_PLAN_SOURCE',
value => 'AUTO');
END;
/
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
parameter => 'ALTERNATE_PLAN_LIMIT',
value => 'UNLIMITED');
END;
/
}}}
<<showtoc>>
! minimize cost
* storage should be deleted/terminated when not used
* elastic ip should be deleted/terminated when not used
* snapshots should be deleted/terminated
! on cleaning up resources
* terminate instance
* delete elastic ip
* delete volume
* delete security group
* delete key pairs
! references
http://aws-musings.com/7-easy-tips-to-reduce-your-amazon-ec2-cloud-costs/
https://blog.cloudability.com/aws-data-transfer-costs-what-they-are-and-how-to-minimize-them/
https://www.quora.com/How-do-I-reduce-my-AWS-EC2-Amazon-cloud-computing-bill
https://www.cloudyn.com/blog/beginners-guide-7-ways-to-optimize-cloud-costs-with-aws/
http://webapps.stackexchange.com/questions/11398/does-mint-com-have-an-api-to-download-data-if-not-are-any-scraping-tools-avail
https://mint.lc.intuit.com/
https://github.com/toddmazierski/mint-exporter
https://github.com/jchavannes/mintreport
http://stackoverflow.com/questions/7269668/is-there-an-api-to-get-bank-transaction-and-bank-balance
https://discuss.dwolla.com/t/export-all-transaction-data-in-one-file/219
http://www.reddit.com/r/personalfinance/comments/2h27fp/how_to_analyze_data_from_mintcom_in_excel/
http://www.oracle.com/technetwork/middleware/bi-foundation/10gr1-twp-bi-dw-sqlmodel-131067.pdf
{{{
12c Cloud Control Repository: How to Modify the Default Retention and Purging Policies for Metric Data? (Doc ID 1405036.1)
To modify the default retention time for the table EM_METRIC_VALUES from 7 partitions to 14 partitions:
1. Use SQL*Plus to connect to the repository database as the SYSMAN user.
2. Check the current value of the retention periods:
select table_name, partitions_retained
from em_int_partitioned_tables
where table_name in ('EM_METRIC_VALUES','EM_METRIC_VALUES_HOURLY','EM_METRIC_VALUES_DAILY');
TABLE_NAME PARTITIONS_RETAINED
------------------------- -------------------
EM_METRIC_VALUES 7
EM_METRIC_VALUES_HOURLY 32
EM_METRIC_VALUES_DAILY 12
3. To modify the default retention time for the table EM_METRIC_VALUES from 7 partitions to 14, execute:
execute gc_interval_partition_mgr.set_retention('SYSMAN', 'EM_METRIC_VALUES', 14);
4. Verify that the retention period has been modified:
select table_name, partitions_retained
from em_int_partitioned_tables
where table_name in ('EM_METRIC_VALUES','EM_METRIC_VALUES_HOURLY','EM_METRIC_VALUES_DAILY');
TABLE_NAME PARTITIONS_RETAINED
------------------------- -------------------
EM_METRIC_VALUES 14
EM_METRIC_VALUES_HOURLY 32
EM_METRIC_VALUES_DAILY 12
Verifying the Partition Maintenance
Login to repository database as the SYSMAN user and execute the below queries:
select count(*) from EM_METRIC_VALUES where collection_time < trunc(sysdate - 7);
select count(*) from EM_METRIC_VALUES_HOURLY where collection_time < trunc(sysdate - 32);
select count(*) from EM_METRIC_VALUES_DAILY where collection_time < add_months(sysdate,-13);
The queries should return 0 rows if the default retention policies are set.
Note: The queries check if there is any data in these tables beyond the default retention periods: 7 days, 32 days, 12 months, respectively. If the retention periods have been modified, then the corresponding numbers should be used.
}}}
! metric extension
{{{
-- 12-AUG-14 12-AUG-14
select min(collection_time), max(collection_time) from sysman.gc$metric_values_latest where METRIC_GROUP_NAME = 'ME$CELLSRV_IOPS_ALL' and entity_name like 'x08p%';
select * from sysman.gc$metric_values_latest where METRIC_GROUP_NAME = 'ME$CELLSRV_IOPS_ALL' and entity_name like 'x08p%';
-- 11-JUL-14 11-AUG-14
-- 07-AUG-14 08-SEP-14
select min(collection_time), max(collection_time) from sysman.gc$metric_values_hourly where METRIC_GROUP_NAME = 'ME$CELLSRV_IOPS_ALL' and entity_name like 'x08p%';
select * from sysman.gc$metric_values_hourly where METRIC_GROUP_NAME = 'ME$CELLSRV_IOPS_ALL' and entity_name like 'x08p%';
-- 01-JAN-14 11-AUG-14
-- 01-JAN-14 08-SEP-14
select min(collection_time), max(collection_time) from sysman.gc$metric_values_daily where METRIC_GROUP_NAME = 'ME$CELLSRV_IOPS_ALL' and entity_name like 'x08p%';
select * from sysman.gc$metric_values_daily where METRIC_GROUP_NAME = 'ME$CELLSRV_IOPS_ALL' and entity_name like 'x08p%';
desc sysman.gc$metric_values_latest
Name Null Type
------------------------- -------- -------------
ENTITY_TYPE NOT NULL VARCHAR2(64)
ENTITY_NAME NOT NULL VARCHAR2(256)
TYPE_META_VER NOT NULL VARCHAR2(8)
METRIC_GROUP_NAME NOT NULL VARCHAR2(64)
METRIC_COLUMN_NAME NOT NULL VARCHAR2(64)
COLUMN_TYPE NOT NULL NUMBER(1)
COLUMN_INDEX NOT NULL NUMBER(3)
DATA_COLUMN_TYPE NOT NULL NUMBER(2)
METRIC_GROUP_ID NOT NULL NUMBER(38)
METRIC_GROUP_LABEL VARCHAR2(64)
METRIC_GROUP_LABEL_NLSID VARCHAR2(64)
METRIC_COLUMN_ID NOT NULL NUMBER(38)
METRIC_COLUMN_LABEL VARCHAR2(64)
METRIC_COLUMN_LABEL_NLSID VARCHAR2(64)
DESCRIPTION VARCHAR2(128)
SHORT_NAME VARCHAR2(40)
UNIT VARCHAR2(32)
IS_FOR_SUMMARY NUMBER
IS_STATEFUL NUMBER
NON_THRESHOLDED_ALERTS NUMBER
METRIC_KEY_ID NOT NULL NUMBER(38)
KEY_PART_1 NOT NULL VARCHAR2(256)
KEY_PART_2 NOT NULL VARCHAR2(256)
KEY_PART_3 NOT NULL VARCHAR2(256)
KEY_PART_4 NOT NULL VARCHAR2(256)
KEY_PART_5 NOT NULL VARCHAR2(256)
KEY_PART_6 NOT NULL VARCHAR2(256)
KEY_PART_7 NOT NULL VARCHAR2(256)
COLLECTION_TIME NOT NULL DATE
COLLECTION_TIME_UTC DATE
VALUE NUMBER
desc sysman.gc$metric_values_hourly
Name Null Type
------------------------- -------- -------------
ENTITY_TYPE NOT NULL VARCHAR2(64)
ENTITY_NAME NOT NULL VARCHAR2(256)
ENTITY_GUID NOT NULL RAW(16 BYTE)
PARENT_ME_TYPE VARCHAR2(64)
PARENT_ME_NAME VARCHAR2(256)
PARENT_ME_GUID NOT NULL RAW(16 BYTE)
TYPE_META_VER NOT NULL VARCHAR2(8)
METRIC_GROUP_NAME NOT NULL VARCHAR2(64)
METRIC_COLUMN_NAME NOT NULL VARCHAR2(64)
COLUMN_TYPE NOT NULL NUMBER(1)
COLUMN_INDEX NOT NULL NUMBER(3)
DATA_COLUMN_TYPE NOT NULL NUMBER(2)
METRIC_GROUP_ID NOT NULL NUMBER(38)
METRIC_GROUP_LABEL VARCHAR2(64)
METRIC_GROUP_LABEL_NLSID VARCHAR2(64)
METRIC_COLUMN_ID NOT NULL NUMBER(38)
METRIC_COLUMN_LABEL VARCHAR2(64)
METRIC_COLUMN_LABEL_NLSID VARCHAR2(64)
DESCRIPTION VARCHAR2(128)
SHORT_NAME VARCHAR2(40)
UNIT VARCHAR2(32)
IS_FOR_SUMMARY NUMBER
IS_STATEFUL NUMBER
NON_THRESHOLDED_ALERTS NUMBER
METRIC_KEY_ID NOT NULL NUMBER(38)
KEY_PART_1 NOT NULL VARCHAR2(256)
KEY_PART_2 NOT NULL VARCHAR2(256)
KEY_PART_3 NOT NULL VARCHAR2(256)
KEY_PART_4 NOT NULL VARCHAR2(256)
KEY_PART_5 NOT NULL VARCHAR2(256)
KEY_PART_6 NOT NULL VARCHAR2(256)
KEY_PART_7 NOT NULL VARCHAR2(256)
COLLECTION_TIME NOT NULL DATE
COLLECTION_TIME_UTC DATE
COUNT_OF_COLLECTIONS NOT NULL NUMBER(38)
AVG_VALUE NUMBER
MIN_VALUE NUMBER
MAX_VALUE NUMBER
STDDEV_VALUE NUMBER
desc sysman.gc$metric_values_daily
Name Null Type
------------------------- -------- -------------
ENTITY_TYPE NOT NULL VARCHAR2(64)
ENTITY_NAME NOT NULL VARCHAR2(256)
ENTITY_GUID NOT NULL RAW(16 BYTE)
PARENT_ME_TYPE VARCHAR2(64)
PARENT_ME_NAME VARCHAR2(256)
PARENT_ME_GUID NOT NULL RAW(16 BYTE)
TYPE_META_VER NOT NULL VARCHAR2(8)
METRIC_GROUP_NAME NOT NULL VARCHAR2(64)
METRIC_COLUMN_NAME NOT NULL VARCHAR2(64)
COLUMN_TYPE NOT NULL NUMBER(1)
COLUMN_INDEX NOT NULL NUMBER(3)
DATA_COLUMN_TYPE NOT NULL NUMBER(2)
METRIC_GROUP_ID NOT NULL NUMBER(38)
METRIC_GROUP_LABEL VARCHAR2(64)
METRIC_GROUP_LABEL_NLSID VARCHAR2(64)
METRIC_COLUMN_ID NOT NULL NUMBER(38)
METRIC_COLUMN_LABEL VARCHAR2(64)
METRIC_COLUMN_LABEL_NLSID VARCHAR2(64)
DESCRIPTION VARCHAR2(128)
SHORT_NAME VARCHAR2(40)
UNIT VARCHAR2(32)
IS_FOR_SUMMARY NUMBER
IS_STATEFUL NUMBER
NON_THRESHOLDED_ALERTS NUMBER
METRIC_KEY_ID NOT NULL NUMBER(38)
KEY_PART_1 NOT NULL VARCHAR2(256)
KEY_PART_2 NOT NULL VARCHAR2(256)
KEY_PART_3 NOT NULL VARCHAR2(256)
KEY_PART_4 NOT NULL VARCHAR2(256)
KEY_PART_5 NOT NULL VARCHAR2(256)
KEY_PART_6 NOT NULL VARCHAR2(256)
KEY_PART_7 NOT NULL VARCHAR2(256)
COLLECTION_TIME NOT NULL DATE
COLLECTION_TIME_UTC DATE
COUNT_OF_COLLECTIONS NOT NULL NUMBER(38)
AVG_VALUE NUMBER
MIN_VALUE NUMBER
MAX_VALUE NUMBER
STDDEV_VALUE NUMBER
}}}
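Given the COLLECTION_TIME / AVG_VALUE / MAX_VALUE columns above, a sketch of a daily trend report for the same metric extension (same predicates as the queries above):
{{{
select trunc(collection_time) collection_day,
       round(avg(avg_value))  avg_iops,
       max(max_value)         peak_iops
from   sysman.gc$metric_values_daily
where  metric_group_name = 'ME$CELLSRV_IOPS_ALL'
and    entity_name like 'x08p%'
group by trunc(collection_time)
order by 1;
}}}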
! sysman.gc$metric_values_latest, sysman.gc$metric_values_hourly, sysman.gc$metric_values_daily
[img(95%,95%)[ https://lh5.googleusercontent.com/-TisqQexUnxs/VA8MQBTg_SI/AAAAAAAACWo/3GWZkMM-oGc/w2048-h2048-no/latesthourlydaily.png ]]
{{{
Rem
Rem $Header: rdbms/src/server/vos/odmlib/nfs/scripts/mondnfs.sql /main/1 2018/05/30 13:32:52 joserran Exp $
Rem
Rem mondnfs.sql
Rem
Rem Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
Rem
Rem NAME
Rem mondnfs.sql - Monitor Direct NFS statistics
Rem
Rem DESCRIPTION
Rem These scripts let you monitor the Direct NFS IOPS and MBPS
Rem while database workloads or RMAN backups are running using
Rem Direct NFS.
Rem      This script is for 11.2.0.4 and 12c databases
Rem
Rem NOTES
Rem Part of MOS Doc ID 1495739.1
Rem
Rem BEGIN SQL_FILE_METADATA
Rem SQL_SOURCE_FILE: rdbms/src/server/vos/odmlib/nfs/scripts/mondnfs.sql
Rem SQL_SHIPPED_FILE:
Rem SQL_PHASE:
Rem SQL_STARTUP_MODE: NORMAL
Rem SQL_IGNORABLE_ERRORS: NONE
Rem END SQL_FILE_METADATA
Rem
Rem MODIFIED (MM/DD/YY)
Rem joserran 04/18/18 - Created
Rem
CREATE OR REPLACE PROCEDURE dnfs_monitor
(sleepSecs IN NUMBER)
IS
startTime DATE;
startReadIOPS NUMBER;
startWriteIOPS NUMBER;
startReadBytes NUMBER;
startWriteBytes NUMBER;
endTime DATE;
endReadIOPS NUMBER;
endWriteIOPS NUMBER;
endReadBytes NUMBER;
endWriteBytes NUMBER;
readThr NUMBER;
writeThr NUMBER;
readIOPS NUMBER;
writeIOPS NUMBER;
elapsedTime NUMBER;
read_bytes_prev NUMBER;
read_bytes_next NUMBER;
write_bytes_prev NUMBER;
write_bytes_next NUMBER;
read_IOs_prev NUMBER;
read_IOs_next NUMBER;
write_IOs_prev NUMBER;
write_IOs_next NUMBER;
samples NUMBER;
sleep_sample NUMBER;
BEGIN
samples := 10.0;
sleep_sample := (sleepSecs /samples);
SELECT sysdate, SUM(stats.nfs_readbytes), SUM(stats.nfs_writebytes), SUM(stats.nfs_read), SUM(stats.nfs_write)
INTO startTime, startReadBytes, startWriteBytes, startReadIOPS, startWriteIOPS
FROM dual, v$dnfs_stats stats;
DBMS_OUTPUT.PUT_LINE('Started at ' || TO_CHAR(startTime, 'MM/DD/YYYY HH:MI:SS AM'));
read_bytes_prev := startReadBytes;
read_bytes_next := 0;
endReadBytes := startReadBytes;
write_bytes_prev := startWriteBytes;
write_bytes_next := 0;
endWriteBytes := startWriteBytes;
read_IOs_prev := startReadIOPS;
read_IOs_next := 0;
endReadIOPS := startReadIOPS;
write_IOs_prev := startWriteIOPS;
write_IOs_next := 0;
endWriteIOPS := startWriteIOPS;
FOR j IN 1..samples
LOOP
DBMS_LOCK.SLEEP(sleep_sample);
SELECT sysdate, SUM(stats.nfs_readbytes), SUM(stats.nfs_writebytes), SUM(stats.nfs_read), SUM(stats.nfs_write)
INTO endTime, read_bytes_next, write_bytes_next, read_IOs_next, write_IOs_next
FROM dual, v$dnfs_stats stats;
/* Read bytes */
IF read_bytes_next > read_bytes_prev THEN
endReadBytes := endReadBytes + (read_bytes_next - read_bytes_prev);
END IF;
read_bytes_prev := read_bytes_next;
/* Write bytes */
IF write_bytes_next > write_bytes_prev THEN
endWriteBytes := endWriteBytes + (write_bytes_next - write_bytes_prev);
END IF;
write_bytes_prev := write_bytes_next;
/* Read IOs */
IF read_IOs_next > read_IOs_prev THEN
endReadIOPS := endReadIOPS + (read_IOs_next - read_IOs_prev);
END IF;
read_IOs_prev := read_IOs_next;
/* Write IOs */
IF write_IOs_next > write_IOs_prev THEN
endWriteIOPS := endWriteIOPS + (write_IOs_next - write_IOs_prev);
END IF;
write_IOs_prev := write_IOs_next;
END LOOP;
DBMS_OUTPUT.PUT_LINE('Finished at ' || to_char(endTime, 'MM/DD/YYYY HH:MI:SS AM'));
elapsedTime := (endTime - startTime) * 86400;
readThr := ROUND((endReadBytes - startReadBytes)/(1024 * 1024 * elapsedTime));
writeThr := ROUND((endWriteBytes - startWriteBytes)/(1024 * 1024 * elapsedTime));
readIOPS := ROUND((endReadIOPS - startReadIOPS)/elapsedTime);
writeIOPS := ROUND((endWriteIOPS - startWriteIOPS)/elapsedTime);
DBMS_OUTPUT.PUT_LINE('READ IOPS: ' || LPAD(TO_CHAR(readIOPS, '999999999'), 10, ' '));
DBMS_OUTPUT.PUT_LINE('WRITE IOPS: ' || LPAD(TO_CHAR(writeIOPS, '999999999'), 10, ' '));
DBMS_OUTPUT.PUT_LINE('TOTAL IOPS: ' || LPAD(TO_CHAR(readIOPS + writeIOPS, '999999999'), 10, ' '));
DBMS_OUTPUT.PUT_LINE('READ Throughput: ' || LPAD(TO_CHAR(readThr, '999999999'), 10, ' ') || ' MB/s');
DBMS_OUTPUT.PUT_LINE('WRITE Throughput: ' || LPAD(TO_CHAR(writeThr, '999999999'), 10, ' ') || ' MB/s');
DBMS_OUTPUT.PUT_LINE('TOTAL Throughput: ' || LPAD(TO_CHAR(readThr + writeThr, '999999999'), 10, ' ') || ' MB/s');
END;
/
CREATE OR REPLACE PROCEDURE dnfs_itermonitor
(sleepSecs IN NUMBER,
iter IN NUMBER)
IS
startTime DATE;
startReadIOPS NUMBER;
startWriteIOPS NUMBER;
startReadBytes NUMBER;
startWriteBytes NUMBER;
endTime DATE;
endReadIOPS NUMBER;
endWriteIOPS NUMBER;
endReadBytes NUMBER;
endWriteBytes NUMBER;
readThr NUMBER;
writeThr NUMBER;
readIOPS NUMBER;
writeIOPS NUMBER;
i NUMBER;
elapsedTime NUMBER;
read_bytes_prev NUMBER;
read_bytes_next NUMBER;
write_bytes_prev NUMBER;
write_bytes_next NUMBER;
read_IOs_prev NUMBER;
read_IOs_next NUMBER;
write_IOs_prev NUMBER;
write_IOs_next NUMBER;
samples NUMBER;
sleep_sample NUMBER;
BEGIN
DBMS_OUTPUT.PUT_LINE('Started at ' || TO_CHAR(SYSDATE, 'MM/DD/YYYY HH:MI:SS AM'));
DBMS_OUTPUT.PUT_LINE(
LPAD('TIMESTAMP', 15, ' ')||
LPAD('READ IOPS', 33, ' ')||
LPAD('WRITE IOPS', 15, ' ')||
LPAD('TOTAL IOPS', 15, ' ')||
LPAD('READ (MB/s)', 15, ' ')||
LPAD('WRITE (MB/s)', 15, ' ')||
LPAD('TOTAL (MB/s)', 15, ' '));
samples := 10.0;
sleep_sample := (sleepSecs /samples);
FOR i IN 1..iter
LOOP
SELECT sysdate, SUM(stats.nfs_readbytes), SUM(stats.nfs_writebytes), SUM(stats.nfs_read), SUM(stats.nfs_write)
INTO startTime, startReadBytes, startWriteBytes, startReadIOPS, startWriteIOPS
FROM dual, v$dnfs_stats stats;
read_bytes_prev := startReadBytes;
read_bytes_next := 0;
endReadBytes := startReadBytes;
write_bytes_prev := startWriteBytes;
write_bytes_next := 0;
endWriteBytes := startWriteBytes;
read_IOs_prev := startReadIOPS;
read_IOs_next := 0;
endReadIOPS := startReadIOPS;
write_IOs_prev := startWriteIOPS;
write_IOs_next := 0;
endWriteIOPS := startWriteIOPS;
FOR j IN 1..samples
LOOP
DBMS_LOCK.SLEEP(sleep_sample);
SELECT sysdate, SUM(stats.nfs_readbytes), SUM(stats.nfs_writebytes), SUM(stats.nfs_read), SUM(stats.nfs_write)
INTO endTime, read_bytes_next, write_bytes_next, read_IOs_next, write_IOs_next
FROM dual, v$dnfs_stats stats;
/* Read bytes */
IF read_bytes_next > read_bytes_prev THEN
endReadBytes := endReadBytes + (read_bytes_next - read_bytes_prev);
END IF;
read_bytes_prev := read_bytes_next;
/* Write bytes */
IF write_bytes_next > write_bytes_prev THEN
endWriteBytes := endWriteBytes + (write_bytes_next - write_bytes_prev);
END IF;
write_bytes_prev := write_bytes_next;
/* Read IOs */
IF read_IOs_next > read_IOs_prev THEN
endReadIOPS := endReadIOPS + (read_IOs_next - read_IOs_prev);
END IF;
read_IOs_prev := read_IOs_next;
/* Write IOs */
IF write_IOs_next > write_IOs_prev THEN
endWriteIOPS := endWriteIOPS + (write_IOs_next - write_IOs_prev);
END IF;
write_IOs_prev := write_IOs_next;
END LOOP;
elapsedTime := (endTime - startTime) * 86400;
readThr := ROUND((endReadBytes - startReadBytes)/(1024 * 1024 * elapsedTime));
writeThr := ROUND((endWriteBytes - startWriteBytes)/(1024 * 1024 * elapsedTime));
readIOPS := ROUND((endReadIOPS - startReadIOPS)/elapsedTime);
writeIOPS := ROUND((endWriteIOPS - startWriteIOPS)/elapsedTime);
DBMS_OUTPUT.PUT_LINE(
TO_CHAR(endTime, 'MM/DD/YYYY HH:MI:SS AM') ||
LPAD(TO_CHAR(readIOPS, '999999999'), 15, ' ') ||
LPAD(TO_CHAR(writeIOPS, '999999999'), 15, ' ') ||
LPAD(TO_CHAR(readIOPS + writeIOPS, '999999999'), 15, ' ') ||
LPAD(TO_CHAR(readThr, '999999999'), 15, ' ') ||
LPAD(TO_CHAR(writeThr, '999999999'), 15, ' ') ||
LPAD(TO_CHAR(readThr + writeThr, '999999999'), 15, ' '));
END LOOP;
DBMS_OUTPUT.PUT_LINE('Finished at ' || to_char(endTime, 'MM/DD/YYYY HH:MI:SS AM'));
END;
/
}}}
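A quick usage sketch for the two procedures above (run as the schema that compiled them, with server output enabled):
{{{
set serveroutput on
-- one 60-second observation window, summary report at the end
exec dnfs_monitor(60);
-- six 10-second windows, one report line per window
exec dnfs_itermonitor(10, 6);
}}}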
{{{
Rem
Rem $Header: rdbms/src/server/vos/odmlib/nfs/scripts/mondnfs_pre11204.sql /main/1 2018/05/30 13:32:52 joserran Exp $
Rem
Rem mondnfs_pre11204.sql
Rem
Rem Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
Rem
Rem NAME
Rem mondnfs.sql - Monitor Direct NFS statistics
Rem
Rem DESCRIPTION
Rem These scripts let you monitor the Direct NFS IOPS and MBPS
Rem while database workloads or RMAN backups are running using
Rem Direct NFS.
Rem This script targets databases older than 11.2.0.4
Rem
Rem NOTES
Rem Part of MOS Doc ID 1495739.1
Rem
Rem BEGIN SQL_FILE_METADATA
Rem SQL_SOURCE_FILE: rdbms/src/server/vos/odmlib/nfs/scripts/mondnfs_pre11204.sql
Rem SQL_SHIPPED_FILE:
Rem SQL_PHASE:
Rem SQL_STARTUP_MODE: NORMAL
Rem SQL_IGNORABLE_ERRORS: NONE
Rem END SQL_FILE_METADATA
Rem
Rem MODIFIED (MM/DD/YY)
Rem joserran 04/25/18 - Created
Rem
CREATE OR REPLACE PROCEDURE dnfs_monitor
(sleepSecs IN NUMBER)
IS
startTime DATE;
startReadIOPS NUMBER;
startWriteIOPS NUMBER;
endTime DATE;
endReadIOPS NUMBER;
endWriteIOPS NUMBER;
readIOPS NUMBER;
writeIOPS NUMBER;
elapsedTime NUMBER;
read_IOs_prev NUMBER;
read_IOs_next NUMBER;
write_IOs_prev NUMBER;
write_IOs_next NUMBER;
samples NUMBER;
sleep_sample NUMBER;
BEGIN
samples := 10.0;
sleep_sample := (sleepSecs /samples);
SELECT sysdate, SUM(stats.nfs_read), SUM(stats.nfs_write)
INTO startTime, startReadIOPS, startWriteIOPS
FROM dual, v$dnfs_stats stats;
DBMS_OUTPUT.PUT_LINE('Started at ' || TO_CHAR(startTime, 'MM/DD/YYYY HH:MI:SS AM'));
read_IOs_prev := startReadIOPS;
read_IOs_next := 0;
endReadIOPS := startReadIOPS;
write_IOs_prev := startWriteIOPS;
write_IOs_next := 0;
endWriteIOPS := startWriteIOPS;
FOR j IN 1..samples
LOOP
DBMS_LOCK.SLEEP(sleep_sample);
SELECT sysdate, SUM(stats.nfs_read), SUM(stats.nfs_write)
INTO endTime, read_IOs_next, write_IOs_next
FROM dual, v$dnfs_stats stats;
/* Read IOs */
IF read_IOs_next > read_IOs_prev THEN
endReadIOPS := endReadIOPS + (read_IOs_next - read_IOs_prev);
END IF;
read_IOs_prev := read_IOs_next;
/* Write IOs */
IF write_IOs_next > write_IOs_prev THEN
endWriteIOPS := endWriteIOPS + (write_IOs_next - write_IOs_prev);
END IF;
write_IOs_prev := write_IOs_next;
END LOOP;
DBMS_OUTPUT.PUT_LINE('Finished at ' || to_char(endTime, 'MM/DD/YYYY HH:MI:SS AM'));
elapsedTime := (endTime - startTime) * 86400;
readIOPS := ROUND((endReadIOPS - startReadIOPS)/elapsedTime);
writeIOPS := ROUND((endWriteIOPS - startWriteIOPS)/elapsedTime);
DBMS_OUTPUT.PUT_LINE('READ IOPS: ' || LPAD(TO_CHAR(readIOPS, '999999999'), 10, ' '));
DBMS_OUTPUT.PUT_LINE('WRITE IOPS: ' || LPAD(TO_CHAR(writeIOPS, '999999999'), 10, ' '));
DBMS_OUTPUT.PUT_LINE('TOTAL IOPS: ' || LPAD(TO_CHAR(readIOPS + writeIOPS, '999999999'), 10, ' '));
END;
/
CREATE OR REPLACE PROCEDURE dnfs_itermonitor
(sleepSecs IN NUMBER,
iter IN NUMBER)
IS
startTime DATE;
startReadIOPS NUMBER;
startWriteIOPS NUMBER;
endTime DATE;
endReadIOPS NUMBER;
endWriteIOPS NUMBER;
readIOPS NUMBER;
writeIOPS NUMBER;
i NUMBER;
elapsedTime NUMBER;
read_IOs_prev NUMBER;
read_IOs_next NUMBER;
write_IOs_prev NUMBER;
write_IOs_next NUMBER;
samples NUMBER;
sleep_sample NUMBER;
BEGIN
DBMS_OUTPUT.PUT_LINE('Started at ' || TO_CHAR(SYSDATE, 'MM/DD/YYYY HH:MI:SS AM'));
DBMS_OUTPUT.PUT_LINE(
LPAD('TIMESTAMP', 15, ' ')||
LPAD('READ IOPS', 33, ' ')||
LPAD('WRITE IOPS', 15, ' ')||
LPAD('TOTAL IOPS', 15, ' '));
samples := 10.0;
sleep_sample := (sleepSecs /samples);
FOR i IN 1..iter
LOOP
SELECT sysdate, SUM(stats.nfs_read), SUM(stats.nfs_write)
INTO startTime, startReadIOPS, startWriteIOPS
FROM dual, v$dnfs_stats stats;
read_IOs_prev := startReadIOPS;
read_IOs_next := 0;
endReadIOPS := startReadIOPS;
write_IOs_prev := startWriteIOPS;
write_IOs_next := 0;
endWriteIOPS := startWriteIOPS;
FOR j IN 1..samples
LOOP
DBMS_LOCK.SLEEP(sleep_sample);
SELECT sysdate, SUM(stats.nfs_read), SUM(stats.nfs_write)
INTO endTime, read_IOs_next, write_IOs_next
FROM dual, v$dnfs_stats stats;
/* Read IOs */
IF read_IOs_next > read_IOs_prev THEN
endReadIOPS := endReadIOPS + (read_IOs_next - read_IOs_prev);
END IF;
read_IOs_prev := read_IOs_next;
/* Write IOs */
IF write_IOs_next > write_IOs_prev THEN
endWriteIOPS := endWriteIOPS + (write_IOs_next - write_IOs_prev);
END IF;
write_IOs_prev := write_IOs_next;
END LOOP;
elapsedTime := (endTime - startTime) * 86400;
readIOPS := ROUND((endReadIOPS - startReadIOPS)/elapsedTime);
writeIOPS := ROUND((endWriteIOPS - startWriteIOPS)/elapsedTime);
DBMS_OUTPUT.PUT_LINE(
TO_CHAR(endTime, 'MM/DD/YYYY HH:MI:SS AM') ||
LPAD(TO_CHAR(readIOPS, '999999999'), 15, ' ') ||
LPAD(TO_CHAR(writeIOPS, '999999999'), 15, ' ') ||
LPAD(TO_CHAR(readIOPS + writeIOPS, '999999999'), 15, ' '));
END LOOP;
DBMS_OUTPUT.PUT_LINE('Finished at ' || to_char(endTime, 'MM/DD/YYYY HH:MI:SS AM'));
END;
/
}}}
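Both versions sleep via DBMS_LOCK.SLEEP, and EXECUTE on DBMS_LOCK is not granted to PUBLIC by default, so the owning schema needs an explicit grant (schema name below is hypothetical):
{{{
-- run as SYS or a suitably privileged user
GRANT EXECUTE ON DBMS_LOCK TO monitor_user;
}}}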
!!! mongodb scd
https://stackoverflow.com/questions/4185105/ways-to-implement-data-versioning-in-mongodb
http://blog.shippable.com/why-we-moved-from-nosql-mongodb-to-postgressql
http://www.evernote.com/shard/s48/sh/c265b8f1-87e8-40d4-b0af-d72487ad7c4e/b4f2acca1085bb4661ed3f9e386b7b20
monitorama http://vimeo.com/monitorama/videos/page:2/sort:date/format:detail, https://speakerdeck.com/monitorama
released ''19/07/13'' http://www.dominicgiles.com/blog/files/956e1e11b3015e13d4d6c148da514384-123.html which you can download here http://www.dominicgiles.com/styled/monitordb.html
<<showtoc>>
<<<
Monte Carlo (mostly using probabilistic and permutation approaches and frameworks), Markov Chain Monte Carlo, and Hamiltonian Monte Carlo. I think they all do the same thing with subtle differences, but it's confusing which to use and where each works best. What kind of Monte Carlo does this course teach?
Facebook just released a forecasting package called "Prophet" and it uses Hamiltonian Monte Carlo https://research.fb.com/prophet-forecasting-at-scale/ using Stan http://mc-stan.org/ under the hood
<<<
monte carlo simulation forecasting in R
http://www.youtube.com/results?search_query=monte+carlo+simulation+forecasting+in+R
Prescriptive Analytics: Making Better Decisions with Simulation
http://www.b-eye-network.com/view/17224
Method for Creating Multipass Aggregations Using Tableau Server <-- doing various statistical methods in tableau
http://community.tableausoftware.com/message/181143#181143
monte carlo google search
https://www.google.com/search?q=what+is+monte+carlo+simulation&oq=what+is+monte+carlo+&aqs=chrome.1.69i57j0l5.4664j0j7&sourceid=chrome&es_sm=119&ie=UTF-8#q=monte+carlo+simulation+with+R
Introducing Monte Carlo Methods with R
http://www.stat.ufl.edu/archived/casella/ShortCourse/MCMC-UseR.pdf
Forecasting Hotel Arrivals and Occupancy Using Monte Carlo Simulation <-- good stuff, shows graph of actual vs forecast
http://alumnus.caltech.edu/~amir/hotelsim1.pdf
Why most sales forecasts suck…and how Monte Carlo simulations can make them better <-- monte carlo on sales forecast
http://www.retailshakennotstirred.com/retail-shaken-not-stirred/2010/01/why-most-sales-forecasts-suck-and-how-monte-carlo-simulations-can-make-them-better.html
Data Tables & Monte Carlo Simulations in Excel – A Comprehensive Guide <-- monte carlo in excel
http://chandoo.org/wp/2010/05/06/data-tables-monte-carlo-simulations-in-excel-a-comprehensive-guide/#montecarlo-simulations
Excel - Introduction to Monte Carlo simulation
http://office.microsoft.com/en-us/excel-help/introduction-to-monte-carlo-simulation-HA001111893.aspx
Monte Carlo in Tableau
http://drawingwithnumbers.artisart.org/basic-monte-carlo-simulations-in-tableau/
lynda.com - monte carlo simulation
http://www.lynda.com/search?q=monte+carlo+simulation
very easy explanation of monte carlo - lebron james example https://www.khanacademy.org/partner-content/lebron-asks-subject/lebron-asks/v/monte-carlo-simulation-to-answer-lebron-s-question here's the code https://www.khanacademy.org/cs/basketball-decisions/1024155511
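Since the notes above are all pointers, here's a toy Monte Carlo sketch you can run straight in Oracle SQL (my own example, not from any of the links): estimate pi by sampling 100k random points in the unit square.
{{{
-- the fraction of points inside the quarter circle approximates pi/4
select 4 * avg(case when x*x + y*y <= 1 then 1 else 0 end) pi_estimate
from (select dbms_random.value x, dbms_random.value y
      from dual
      connect by level <= 100000);
}}}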
! stock trading references
http://jbmarwood.com/monte-carlo-analysis/
http://www.amazon.com/gp/product/0979183820/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0979183820&linkCode=as2&tag=marwmedi-20&linkId=7MNWGNX7KNQNCFXR
http://www.amazon.com/gp/product/B002DHMZ7O/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B002DHMZ7O&linkCode=as2&tag=marwmedi-20&linkId=KTT75L2DOWWVMMBC
https://www.youtube.com/results?search_query=monte+carlo+simulation+trading
https://www.youtube.com/watch?v=3gcLRU24-w0
https://www.linkedin.com/pulse/evaluating-your-strategy-tad-slaff <- Evaluating Your Strategy
https://www.linkedin.com/pulse/evaluating-your-strategy-part-2-tad-slaff
! the best material so far
https://iversity.org/en/my/courses/monte-carlo-methods-in-finance/lesson_units#chapter101
http://www.quora.com/What-are-the-math-skills-you-need-for-statistical-arbitrage
http://www.quora.com/How-would-you-use-Monte-Carlo-simulation-to-predict-stock-movement
http://www.quora.com/What-specific-quantitative-trading-strategies-have-generated-the-most-aggregate-wealth-over-the-past-10-years
http://www.quora.com/What-are-some-of-the-most-important-algorithms-used-in-quantitative-trading
! Hamiltonian Monte-Carlo
Efficient Bayesian inference with Hamiltonian Monte Carlo -- Michael Betancourt (Part 1) https://www.youtube.com/watch?v=pHsuIaPbNbY&t=3670s
Hamiltonian Monte Carlo and Stan -- Michael Betancourt (Part 2) https://www.youtube.com/watch?v=xWQpEAyI5s8&t=4s
Scalable Bayesian Inference with Hamiltonian Monte Carlo https://www.youtube.com/watch?v=VnNdhsm0rJQ
https://www.quora.com/What-is-Hamiltonian-MCMC
https://en.wikipedia.org/wiki/Hybrid_Monte_Carlo
http://arogozhnikov.github.io/2016/12/19/markov_chain_monte_carlo.html
! Markov Chain Monte-Carlo
(ML 18.1) Markov chain Monte Carlo (MCMC) introduction https://www.youtube.com/watch?v=12eZWG0Z5gY
https://www.youtube.com/watch?v=vTUwEu53uzs A Beginner's Guide to Monte Carlo Markov Chain MCMC Analysis 2016
https://www.youtube.com/watch?v=6BPArbP4bZo
https://www.safaribooksonline.com/library/view/learning-probabilistic-graphical/9781784392055/ Learning Probabilistic Graphical Models in R
https://www.safaribooksonline.com/library/view/learning-probabilistic-graphical/9781784392055/ch05s05.html Markov Chain Monte-Carlo
! best explanation of monte carlo
Cloud Data Centers and Cost Modeling - Real Option Theory and Monte Carlo Simulation https://learning.oreilly.com/library/view/cloud-data-centers/9780128014134/xhtml/chp018.xhtml#chp018titl
http://www.evernote.com/shard/s48/sh/55023475-c983-4a9d-ae59-65ec6376871f/0cd964d7dd4520e5c298f2c50248f69d
{{{
Logical Partitioning (LPAR) in AIX
Doc ID: Note:458571.1
-- INSTALLATION
Minimum Software Versions and Patches Required to Support Oracle Products on IBM pSeries
Doc ID: Note:282036.1
Additional Steps Required To Upgrade IBM JDK On IBM iSeries
Doc ID: Note:457287.1
Questions regarding Oracle database upgrades on AIX
Doc ID: Note:223521.1
PAR: MATRIX IBM AIX for Oracle RDBMS Compatibility
Doc ID: Note:41984.1
http://www-933.ibm.com/eserver/support/fixes/fixcentral/pfixpacks/53
-- BUG on 10.2.0.2 and ML 05 and higher
http://unix.derkeiler.com/Mailing-Lists/AIX-L/2006-09/msg00030.html
http://www.tek-tips.com/viewthread.cfm?qid=1272346&page=7
-- IY fix
ftp://ftp.software.ibm.com/aix/efixes/iy89080/
-- one-of-patch download
https://updates.oracle.com/ARULink/PatchDetails/process_form?patch_num=5496862&release=80102020&plat_lang=212P&patch_num_id=747092&email=karao@sqlwizard.com&userid=ml-591048.992&
Bug 5496862 - AIX: Mandatory patch to use Oracle with IBM Technology Level 5 (5300-5)
Doc ID: Note:5496862.8
Introduction to "Bug Description" Articles
Doc ID: Note:245840.1
10.2.0.3 PMON CRASHES ON STARTUP ON AIX 5L 5.3 ML05 -- WORKS on ML06
Doc ID: Note:458442.1
Does A DB Running Oracle 10.2.0.2 On Aix 5.3 Tl5 Sp1 Have To Use Patch 5496862
Doc ID: Note:432998.1
Is Patch 5496862 Mandatory for 10.2.0.3?
Doc ID: Note:418105.1
How To Determine Whether an APAR has been Fixed in AIX Version/Maintenance Level
Doc ID: Note:417451.1
How To Determine Service Pack in AIX
Doc ID: Note:421513.1
How Do I Determine The AIX Technology level ?
Doc ID: Note:443343.1
Is Patch 5496862 applicable on AIX 5.3 TL 06 / TL 07 / TL 08 ?
Doc ID: Note:443944.1
How To Determine Whether an APAR has been Fixed in AIX Version/Maintenance Level
Doc ID: Note:417451.1
Is Patch 5496862 Mandatory for 10.2.0.3?
Doc ID: Note:418105.1
How Do I Determine The AIX Technology level ?
Doc ID: Note:443343.1
ORA-01115 ORA-01110 ORA-27091 ORA-27072 Error: 5: I/O error
Doc ID: Note:559697.1
IO Interoperability Issue between IBM ML05 and Oracle Databases
Doc ID: Note:390656.1
Patch 5496862 Now Available For RDBMS Server Version 10.1.0.4.2 (AIX)
Doc ID: Note:418048.1
-- DYNAMIC CPU ALLOCATION BUG
http://oracledoug.com/serendipity/index.php?/archives/1318-10.2.0.2-bug-with-Dynamic-Reconfiguration-of-CPU.html
Bug 4704890 - OERI[kslgetl:1] after adding CPU using dynamic reconfiguration
Doc ID: Note:4704890.8
Pmon Terminated With Ora-00600 [1100] After Dynamically Increase CPU_COUNT
Doc ID: Note:467695.1
ORA-600 [kslgetl:1]
Doc ID: Note:351779.1
Ora-07445 (Internal Error)
Doc ID: Note:421045.1
Ora-600 [Kslgetl:1] when dynamically changing CPU
Doc ID: Note:369400.1
-- MEMORY
Memory Consumption on AIX
Doc ID: 259983.1
}}}
http://linuxindetails.wordpress.com/2010/06/27/mtrr-type-mismatch-for-e000000010000000-old-write-back-new-write-combining/
http://www.linuxquestions.org/questions/linux-general-1/mtrr-type-mismatch-352409/
<<showtoc>>
! use case
!! data sync heroku postgres ODS to salesforce
https://docs.mulesoft.com/salesforce-connector/10.3/
!! mulesoft trigger flow to salesforce
https://www.google.com/search?client=firefox-b-1-d&q=mulesoft+trigger+flow+to+salesforce
! courses
https://www.udemy.com/courses/search/?q=mulesoft+salesforce
https://www.udemy.com/course/mulesoft-and-salesforce-integration-real-time-project/
! integration patterns
https://blogs.mulesoft.com/dev/connectivity-dev/top-salesforce-integration-patterns/
https://developer.mulesoft.com/tutorials-and-howtos/quick-start/getting-started-with-salesforce-integration-patterns-using-mulesoft <- VIDEO
<<<
The Five Most Common Salesforce Integration Patterns
The five most common Salesforce integration patterns are:
Migration
Broadcast
Aggregation
Bi-directional synchronization
Correlation
<<<
! before
{{{
UPDATE ps_recv_load_t16
SET receiver_id = (SELECT DISTINCT receiver_id
FROM ps_recv_load_t2a6 A
WHERE A.process_instance =
ps_recv_load_t16 .process_instance
AND A.business_unit =
ps_recv_load_t16 .business_unit
AND A.eip_ctl_id = ps_recv_load_t16 .eip_ctl_id
AND A.seq_nbr = ps_recv_load_t16 .seq_nbr),
po_id = (SELECT DISTINCT po_id
FROM ps_recv_load_t2a6 A
WHERE A.process_instance = ps_recv_load_t16 .process_instance
AND A.business_unit = ps_recv_load_t16 .business_unit
AND A.eip_ctl_id = ps_recv_load_t16 .eip_ctl_id
AND A.seq_nbr = ps_recv_load_t16 .seq_nbr),
line_nbr = (SELECT DISTINCT line_nbr
FROM ps_recv_load_t2a6 A
WHERE A.process_instance =
ps_recv_load_t16 .process_instance
AND A.business_unit = ps_recv_load_t16 .business_unit
AND A.eip_ctl_id = ps_recv_load_t16 .eip_ctl_id
AND A.seq_nbr = ps_recv_load_t16 .seq_nbr),
sched_nbr = (SELECT DISTINCT sched_nbr
FROM ps_recv_load_t2a6 A
WHERE A.process_instance =
ps_recv_load_t16 .process_instance
AND A.business_unit = ps_recv_load_t16 .business_unit
AND A.eip_ctl_id = ps_recv_load_t16 .eip_ctl_id
AND A.seq_nbr = ps_recv_load_t16 .seq_nbr)
WHERE EXISTS (SELECT 'X'
FROM ps_recv_load_t2a6 A
WHERE A.process_instance = ps_recv_load_t16 .process_instance
AND A.business_unit = ps_recv_load_t16 .business_unit
AND A.eip_ctl_id = ps_recv_load_t16 .eip_ctl_id
AND A.seq_nbr = ps_recv_load_t16 .seq_nbr
AND A.seq_nbr = ps_recv_load_t16 .seq_nbr
AND A.process_instance = :1
AND A.business_unit = :2
AND A.receiver_id = :3
AND A.po_id = :4
AND A.line_nbr = :5
AND A.sched_nbr = :6)
}}}
! after
{{{
UPDATE ps_recv_load_t16
SET ( receiver_id, po_id, line_nbr, sched_nbr ) =
(SELECT DISTINCT receiver_id,
po_id,
line_nbr,
sched_nbr
FROM ps_recv_load_t2a6 A
WHERE
A.process_instance = ps_recv_load_t16 .process_instance
AND A.business_unit = ps_recv_load_t16 .business_unit
AND A.eip_ctl_id = ps_recv_load_t16 .eip_ctl_id
AND A.seq_nbr = ps_recv_load_t16.seq_nbr)
WHERE ( process_instance, business_unit, eip_ctl_id, seq_nbr ) IN
(SELECT /*+PUSH_SUBQ*/ process_instance,
business_unit,
eip_ctl_id,
seq_nbr
FROM
ps_recv_load_t2a6 A
WHERE
A.process_instance = :1
AND A.business_unit = :2
AND A.receiver_id = :3
AND A.po_id = :4
AND A.line_nbr = :5
AND A.sched_nbr = :6)
}}}
''"WITH Clause vs global temporary tables"'' http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:524736300346304074
''"difference between sql with clause and inline"'' http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4423923392083
http://psoug.org/reference/with.html
http://www.nocoug.org/download/2011-05/OMG_Refactoring_SQL_Paper.pdf
''http://www.experts-exchange.com/Database/Oracle/A_2375-Subquery-Factoring-WITH-Oracle.html'' <-- good examples GOOD STUFF
{{{
WITH Clause OR Subquery Factoring WITH Oracle
by Ivo Stoykov
Digging information from a database sometimes requires creating complicated queries. Often a simple select * from . . . is nested into another one and both into another one and so forth until the result becomes a very complicated query containing scalar queries (subqueries in the select clause) and in-line views (subqueries in the from clause). Such queries are difficult to read, hard to maintain, and a nightmare to optimize.
Fortunately there is a WITH clause, defined in SQL-99 and implemented in Oracle9i Release 2 and later. In Oracle the WITH clause is used for materializing subqueries, to avoid recomputing them multiple times, without resorting to temporary tables. The WITH clause allows you to factor out a sub-query, name it, and then reference it by name multiple times within the original complex query.
Additionally this technique lets the optimizer choose how to deal with the sub-query results -- whether to create a temporary table or inline it as a view.
The Syntax
WITH
alias_name -- Alias to use in the main query
AS
(insert a query here)
SELECT... -- Beginning of the query main body

Multiple aliases can be defined in the WITH clause:

WITH
alias_name1
AS
(query1),
alias_name2
AS
(query2)
...
SELECT...
Let's look inside by taking a simple query:
SQL> col dummy format a10;
SQL> select dummy from dual;
DUMMY
----------
X
SQL>
The same query re-written with a WITH clause:
SQL> with d as (select * from dual)
2 select * from d;
DUMMY
----------
X
SQL>
Even if it seems like nonsense at first glance, using the WITH clause in complicated queries can be very worthwhile.
Think of the WITH clause as creating a temporary table called "d" and then selecting from it. (Oracle does not actually create a table here; it merges the SQL before execution.) In other words, the WITH clause lets us name a predefined SELECT statement in the context of a bigger SELECT and reference it later by that name.
Consider the next few samples:
SQL> with temp_t1 as ( select dummy c1 from dual )
2 select * from temp_t1 a,temp_t1 b
3 /
C1 C1
-- --
X X
SQL> with temp_t1 as (select dummy c1 from dual)
2 ,temp_t2 as (select dummy c1 from dual)
3 select * from temp_t1 a,temp_t2 b
4 /
C1 C1
-- --
X X
SQL> with temp_t1 as ( select dummy c1 from dual )
2 ,temp_t2 as ( select dummy c1 from dual a,temp_t1 b where b.c1 = a.dummy )
3 select * from temp_t2 a
4 /
C1
--
X
SQL> select * from temp_t2 a;
select * from temp_t2 a
ORA-00942: table or view does not exist
So although we can select from the recordset as if it were a table, it actually exists only for the duration of the execution.
What Can We Do WITH
The samples above show that with WITH:
1. A named query can be referenced any number of times.
2. Any number of named queries are allowed.
3. Named queries can reference other named queries that came before them, and can even correlate to previous named queries.
4. Named queries have a local scope, limited to the SELECT in which they are defined.
Benefits
The obvious one is the ability to create a reusable construct inside a SELECT, name it, and reuse it whenever necessary.
Referencing the Same Sub-query Multiple Times
The following query joins two tables and computes the aggregate SUM(SAL) more than once; note that the join and the aggregation are repeated.
SELECT dname, SUM(sal) AS dept_total FROM emp, dept
WHERE emp.deptno = dept.deptno
GROUP BY dname HAVING
SUM(sal) > ( SELECT SUM(sal) * 1/3 FROM emp, dept
WHERE emp.deptno = dept.deptno)
ORDER BY SUM(sal) DESC;
You can improve the query by doing the subquery once, and referencing its result at the appropriate points in the main query.
WITH summary AS
(
SELECT dname, SUM(sal) AS dept_total FROM emp, dept
WHERE emp.deptno = dept.deptno
GROUP BY dname
)
SELECT dname, dept_total FROM summary
WHERE dept_total > (SELECT SUM(dept_total) * 1/3 FROM summary)
ORDER BY dept_total DESC;
What Stats Say
When the question turns to optimization using the WITH clause, the answer is: it depends. Many queries will simply be treated by Oracle as an in-line view. Sometimes, though, the recordset will be “materialized” into an on-the-fly temporary table used only by later parts of that one SQL statement. (This means that subsequent executions of the same cursor will create a new temp table each time.)
Let’s see an example. First we’ll create a test table with more than 3 million rows:
CREATE TABLE TestTbl
NOLOGGING
AS
SELECT A1.OWNER AS usr
,A1.OBJECT_TYPE AS PROD
,A1.OBJECT_ID AS ITEMS
FROM ALL_OBJECTS A1, EMP A2, DEPT A3;
SQL> select count(*) as recs from testtbl;
RECS
----------
3140928
SQL> exec DBMS_STATS.GATHER_TABLE_STATS(USER,'TESTTBL');
PL/SQL procedure successfully completed
SQL>
Next we’ll use a standard approach to produce a report.
set serveroutput on;
set autotrace on;
set timing on;
SQL> SELECT prod
2 ,total_items
3 FROM (
4 SELECT prod
5 , NVL(SUM(items),0) AS total_items
6 FROM testtbl
7 GROUP BY
8 prod
9 ) ilv
10 WHERE total_items > (SELECT SUM(items)/3 AS one_third_items
11* FROM testtbl)
SQL> /
PROD TOTAL_ITEMS
------------------- ----------------------
SYNONYM 50068277560
JAVA CLASS 43610562520
Elapsed: 00:00:03.07
Execution Plan
----------------------------------------------------------
Plan hash value: 2449639557
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 28 | 3154 (8)| 00:00:38 |
|* 1 | FILTER | | | | | |
| 2 | HASH GROUP BY | | 2 | 28 | 3154 (8)| 00:00:38 |
| 3 | TABLE ACCESS FULL| TESTTBL | 3140K| 41M| 2967 (2)| 00:00:36 |
| 4 | SORT AGGREGATE | | 1 | 5 | | |
| 5 | TABLE ACCESS FULL| TESTTBL | 3140K| 14M| 2967 (2)| 00:00:36 |
-------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(NVL(SUM("ITEMS"),0)> (SELECT SUM("ITEMS")/3 FROM
"TESTTBL" "TESTTBL"))
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
21319 consistent gets
21213 physical reads
0 redo size
536 bytes sent via SQL*Net to client
381 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
SQL>
As you can see, resource usage is moderate and execution time is about 3 seconds. The problem is that, to get the result, the table is accessed twice. To avoid this we can refactor the query using the WITH clause; as noted above, there will then be a single access thanks to materialization of the recordset.
SQL> WITH ITEMS_REP AS (
2 SELECT PROD, NVL(SUM(ITEMS),0) AS TOTAL_ITMES
3 FROM TESTTBL
4 GROUP BY PROD)
5 SELECT PROD, TOTAL_ITMES
6 FROM ITEMS_REP
7 WHERE TOTAL_ITMES > (SELECT SUM(TOTAL_ITMES)/3 AS ONE_THIRD_ITEMS
8 FROM ITEMS_REP)
9/
SQL>
PROD TOTAL_ITMES
------------------- -----------
SYNONYM 50068277560
JAVA CLASS 43610562520
Elapsed: 00:00:02.15
Execution Plan
----------------------------------------------------------
Plan hash value: 2967744285
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 22 | 528 | 3158 (8)| 00:00:38 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | | | | | |
| 3 | HASH GROUP BY | | 22 | 308 | 3154 (8)| 00:00:38 |
| 4 | TABLE ACCESS FULL | TESTTBL | 3140K| 41M| 2967 (2)| 00:00:36 |
|* 5 | VIEW | | 22 | 528 | 2 (0)| 00:00:01 |
| 6 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6633_E00F1 | 22 | 308 | 2 (0)| 00:00:01 |
| 7 | SORT AGGREGATE | | 1 | 13 | | |
| 8 | VIEW | | 22 | 286 | 2 (0)| 00:00:01 |
| 9 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6633_E00F1 | 22 | 308 | 2 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - filter("TOTAL_ITMES"> (SELECT SUM("TOTAL_ITMES")/3 FROM (SELECT /*+ CACHE_TEMP_TABLE
("T1") */ "C0" "PROD","C1" "TOTAL_ITMES" FROM "SYS"."SYS_TEMP_0FD9D6633_E00F1" "T1")
"ITEMS_REP"))
Statistics
----------------------------------------------------------
2 recursive calls
9 db block gets
10672 consistent gets
10613 physical reads
644 redo size
536 bytes sent via SQL*Net to client
381 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
Now the query runs faster; elapsed time drops by about a second. There is a hard parse plus the cost of setting up the temp table, but the statistics show that physical reads are significantly reduced.
SQL> EXPLAIN PLAN SET STATEMENT_ID = 'SQ_FCT' for
2 WITH ITEMS_REP AS (
3 SELECT PROD, NVL(SUM(ITEMS),0) AS TOTAL_ITMES
4 FROM TESTTBL
5 GROUP BY PROD)
6 SELECT PROD, TOTAL_ITMES
7 FROM ITEMS_REP
8 WHERE TOTAL_ITMES > (SELECT SUM(TOTAL_ITMES)/3 AS ONE_THIRD_ITEMS
9 FROM ITEMS_REP);
SQL>
SQL> @?RDBMS\ADMIN\utlxplp.sql
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 395517267
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 22 | 528 | 3158 (8)| 00:00:38 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | | | | | |
| 3 | HASH GROUP BY | | 22 | 308 | 3154 (8)| 00:00:38 |
| 4 | TABLE ACCESS FULL | TESTTBL | 3140K| 41M| 2967 (2)| 00:00:36 |
|* 5 | VIEW | | 22 | 528 | 2 (0)| 00:00:01 |
| 6 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6637_E00F1 | 22 | 308 | 2 (0)| 00:00:01 |
| 7 | SORT AGGREGATE | | 1 | 13 | | |
| 8 | VIEW | | 22 | 286 | 2 (0)| 00:00:01 |
| 9 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6637_E00F1 | 22 | 308 | 2 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------
The execution plan above shows the TEMP TABLE TRANSFORMATION that Oracle has introduced (Id 1). Oracle created a global temporary table and loaded it from the first scan of TESTTBL (Id 2). This temporary dataset is then used to answer the overall question of which products hold more than one-third of the items (Id 7).
Let’s repeat that Oracle will not always materialise subqueries in this way. Sometimes it will either merge the subquery into the main query or treat it as a simple in-line view. In cases where the CBO chooses not to materialise the subquery but we want it to, Oracle supports the MATERIALIZE hint.
Without the hint we see:
SQL> set autotrace traceonly explain
SQL>
SQL> WITH INLINE_VIEW AS (
2 SELECT PROD
3 , SUM(ITEMS) AS TOTAL_ITEMS
4 FROM testtbl
5 GROUP BY PROD
6 )
7 SELECT *
8 FROM inline_view;
Execution Plan
----------------------------------------------------------
Plan hash value: 1165785812
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 22 | 308 | 3154 (8)| 00:00:38 |
| 1 | HASH GROUP BY | | 22 | 308 | 3154 (8)| 00:00:38 |
| 2 | TABLE ACCESS FULL| TESTTBL | 3140K| 41M| 2967 (2)| 00:00:36 |
------------------------------------------------------------------------------
SQL>
The plan shows that Oracle has chosen not to materialise this subquery: it is used only once, so a temporary table would be a waste of resources.
Adding the MATERIALIZE hint changes the plan as follows:
SQL> WITH INLINE_VIEW AS (
2 SELECT /*+ materialize */
3 PROD, SUM(ITEMS) AS TOTAL_ITEMS
4 FROM testtbl
5 GROUP BY PROD
6 )
7 SELECT *
8 FROM inline_view;
Execution Plan
----------------------------------------------------------
Plan hash value: 844976587
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 22 | 528 | 3156 (8)| 00:00:38 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | | | | | |
| 3 | HASH GROUP BY | | 22 | 308 | 3154 (8)| 00:00:38 |
| 4 | TABLE ACCESS FULL | TESTTBL | 3140K| 41M| 2967 (2)| 00:00:36 |
| 5 | VIEW | | 22 | 528 | 2 (0)| 00:00:01 |
| 6 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6639_E00F1 | 22 | 308 | 2 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------
SQL>
The temporary table is present in the plan.
On large datasets the savings from such optimisation can be quite significant. On tiny datasets, however, the temporary table setup can take longer than the original query itself, so it is not a particularly useful mechanism there.
Conclusion
Because the WITH clause is a refactoring of a SELECT statement, it can be used anywhere a SELECT is acceptable, including in DML operations such as INSERT, UPDATE and DELETE.
INSERT INTO table_name
WITH subquery_name AS (SELECT . . .)
SELECT ... FROM subquery_name;
UPDATE table_name
SET column_name = ( WITH subquery_name AS (SELECT ...)
SELECT ... FROM subquery_name );
DELETE FROM table_name
WHERE column_name IN ( WITH subquery_name AS (SELECT ...)
SELECT ... FROM subquery_name );
Restrictions on Subquery Factoring:
• The WITH clause cannot be nested; that is, a WITH cannot exist inside another WITH. However, a name defined in the WITH clause can be used in the subquery of any subsequent named query.
• In a query with set operators, the set operator subquery cannot contain the WITH clause, but the FROM subquery can.
Difference between 11g and previous versions
Prior to Oracle 11g, every named query must be used somewhere in the SQL statement: it must appear in another named query (as above) or in the main SELECT statement (as in the first example). Otherwise an error is raised.
If we run the following script against a version earlier than 11g:
SELECT * FROM v$version;
WITH T1 AS (SELECT * FROM DUAL),
T2 AS (SELECT * FROM DUAL)
select * from t1;
Result will be an error:
BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.7.0 - Production
PL/SQL Release 9.2.0.7.0 - Production
CORE 9.2.0.7.0 Production
TNS for 32-bit Windows: Version 9.2.0.7.0 - Production
NLSRTL Version 9.2.0.7.0 - Production
Error starting at line 2 in command:
WITH T1 AS (SELECT * FROM DUAL),
T2 AS (SELECT * FROM DUAL)
select * from t1
Error at Command Line:4 Column:14
Error report:
SQL Error: ORA-32035: unreferenced query name defined in WITH clause
32035. 00000 - "unreferenced query name defined in WITH clause"
*Cause: There is at least one WITH clause query name that is not
referenced in any place.
*Action: remove the unreferenced query name and retry
In Oracle 11g this is no longer a problem:
SELECT * FROM v$version;
WITH T1 AS (SELECT * FROM DUAL),
T2 AS (SELECT * FROM DUAL)
select * from t1;
The result is:
BANNER
---------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for 32-bit Windows: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
DUMMY
-----
X
Additional reading
A good discussion about the WITH clause at AskTom: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4423923392083
}}}
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=408252255004768&id=220970.1&displayIndex=7&_afrWindowMode=0&_adf.ctrl-state=lkkd1qx2g_77#A15959
<<<
Can I have multiple public networks accessing my Oracle RAC?
Yes, you can have multiple networks however with Oracle RAC 10g and Oracle RAC 11g, the cluster can only manage a single public network with a VIP and the database can only load balance across a single network. FAN will only work on the public network with the Oracle VIPs.
Oracle RAC 11g Release 2 supports multiple public networks. You must set the new init.ora parameter LISTENER_NETWORKS so users are load balanced across their network. Services are tied to networks so users connecting with network 1 will use a different service than network 2. Each network will have its own VIP.
<<<
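A sketch of what the LISTENER_NETWORKS setting looks like (listener names and SCAN addresses are hypothetical; check the FAQ above and the docs for the exact form):
{{{
ALTER SYSTEM SET LISTENER_NETWORKS =
  '((NAME=net1)(LOCAL_LISTENER=LISTENER_NET1)(REMOTE_LISTENER=scan-net1:1521))',
  '((NAME=net2)(LOCAL_LISTENER=LISTENER_NET2)(REMOTE_LISTENER=scan-net2:1521))'
  SCOPE=BOTH SID='*';
}}}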
2013 - Oracle RAC and Oracle RAC One Node on Extended Distance (Stretched) Clusters
http://www.oracle.com/technetwork/products/clustering/overview/extendedracversion11-435972.pdf
2012 - Oracle Real Application Clusters (RAC) and Oracle Clusterware Interconnect Virtual Local Area Networks (VLANs) Deployment Considerations
http://www.oracle.com/technetwork/products/clusterware/overview/interconnect-vlan-06072012-1657506.pdf
good stuff - https://levipereira.wordpress.com/2011/10/22/how-configure-multiples-public-network-an-grid-infrastructure-11g-r2-11-2-environment/
http://catalogo.asac.as/documents/28537/0/Oracle+Real+Application+Clusters+(RAC)%20on+Extended+Distance+Clusters.pdf/1494f1e8-8f4e-4b27-98b2-e59c0c7d777a
http://www.oracle.com/technetwork/products/clustering/overview/scan-129069.pdf
make sure to unlock the CRS memory inside the VMs - this releases over a GB of physical mem which doesn't need to be locked in a sandbox, and indeed it's the only way to have a usable environment on a lower-end laptop
this script will unlock the CRS memory in 11gR2:
https://github.com/ardentperf/racattack/blob/master/makeDVD/root/fix_cssd/fix_cssd.sh
http://carlos-sierra.net/2012/11/14/mutating-histograms-mutating-number-of-rows-mutating-blevel-mutating-number-of-distinct-levels-what-do-they-mean/
http://carlos-sierra.net/2012/06/05/table-contains-n-columns-referenced-in-predicates-with-mutating-number-of-histogram-endpoints-count/
In the .muttrc file, to have the most recent messages appear first, add:
{{{
set sort = reverse-date
}}}
http://www.mutt.org/doc/manual/manual-6.html
Understanding MVC architecture https://www.youtube.com/watch?v=eTdVkgF_Slo
MVC vs. 3-tier architecture http://stackoverflow.com/questions/4577587/mvc-vs-3-tier-architecture
MVC Vs n-tier architecture http://stackoverflow.com/questions/698220/mvc-vs-n-tier-architecture
Douglas K Barry book - Web Services, Service-Oriented Architectures, and Cloud Computing: The Savvy Manager's Guide - http://www.service-architecture.com/articles/object-oriented-databases/middle_tier_architecture.html , https://www.safaribooksonline.com/library/view/web-services-service-oriented/9780123983572/#toc
http://chandlerdba.wordpress.com/2013/12/01/oracles-locking-model-multi-version-concurrency-control/
http://stackoverflow.com/questions/27499/database-what-is-multiversion-concurrency-control-mvcc-and-who-supports-it
{{{
select * from hr.departments;
select * from hr.employees;
drop materialized view hr.emp_dept;
create materialized view hr.emp_dept
build immediate
refresh on demand
disable query rewrite
as
select dept.department_id, dept.department_name, count (*)
from hr.employees emp, hr.departments dept
where emp.department_id = dept.department_id
group by dept.department_id, dept.department_name
/
-- manual refresh
EXEC DBMS_MVIEW.refresh('hr.emp_dept');
select * from hr.emp_dept;
explain plan for select * from hr.emp_dept;
select * from table(dbms_xplan.display);
explain plan for select dept.department_id, dept.department_name, count (*)
from hr.employees emp, hr.departments dept
where emp.department_id = dept.department_id
group by dept.department_id, dept.department_name
/
select * from table(dbms_xplan.display);
}}}
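The MV above is created with query rewrite disabled; if you want the optimizer to answer the detail query from the MV instead, enable rewrite (assumes QUERY_REWRITE_ENABLED=TRUE for the session/instance):
{{{
alter materialized view hr.emp_dept enable query rewrite;
-- re-running the explain plan of the detail query may now show MAT_VIEW REWRITE ACCESS
}}}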
https://oracle-base.com/articles/misc/materialized-views
{{{
07:34:02 ORACLE@dw> CREATE TABLE t AS SELECT * FROM dba_objects WHERE object_id IS NOT NULL;
Table created.
07:34:08 ORACLE@dw> ALTER TABLE t ADD PRIMARY KEY (object_id);
Table altered.
07:34:12 ORACLE@dw> CREATE MATERIALIZED VIEW LOG ON t;
Materialized view log created.
07:34:16 ORACLE@dw> @46on 8
Session altered.
07:34:27 ORACLE@dw> CREATE MATERIALIZED VIEW mv
BUILD IMMEDIATE
USING INDEX
REFRESH FAST
AS
SELECT * FROM t
/
Materialized view created.
}}}
{{{
PARSING IN CURSOR #140307535704096 len=84 dep=2 uid=91 oct=9 lid=91 tim=1358948077604220 hv=3115979820 ad='2b378eac0' sqlid='6uw9zhawvn51c'
CREATE UNIQUE INDEX "ORACLE"."SYS_C0036888" on "ORACLE"."MV"("OBJECT_ID") NOPARALLEL
END OF STMT
}}}
https://levels.io/how-i-build-my-minimum-viable-products/
<<<
read https://gettingreal.37signals.com/ch02_Whats_Your_Problem.php
learn IDE http://www.sublimetext.com/3
install SODA theme https://github.com/buymeasoda/soda-theme
install codekit https://incident57.com/codekit/
stocksy to buy images http://www.stocksy.com/
tech stack Nginx on Ubuntu on Linode with basic PHP, Node.JS, JS, CSS and HTML.
hosting https://www.linode.com/
nginx Nginx server with the PHP-FPM gateway
register a domain with NameCheap, use their FreeDNS to add DNS entries to point to my Linode VPS IP address. I’ll clone an existing nginx.conf (Nginx config file) from another domain, change a few things and upload it.
For example Stripe’s library for payment processing and Twitter’s OAuth library.
<<<
https://www.julian.com/guide/growth/intro
https://www.dbarj.com.br/en/2019/07/deploying-a-highly-available-mysql-cluster-with-drbd-on-oci/
https://dev.to/keydunov/using-mysql-as-a-cache-layer-for-bigquery-37p2
http://gigaom.com/2013/10/23/facebook-offers-a-peek-at-how-it-keeps-mysql-online/
http://gigaom.com/2011/12/06/facebook-shares-some-secrets-on-making-mysql-scale/
MySQL Pool Scanner (MPS) https://www.facebook.com/notes/facebook-engineering/under-the-hood-mysql-pool-scanner-mps/10151750529723920
post search https://www.facebook.com/notes/facebook-engineering/under-the-hood-building-posts-search/10151755593228920
Indexing and ranking in Graph Search https://www.facebook.com/notes/facebook-engineering/under-the-hood-indexing-and-ranking-in-graph-search/10151361720763920
Wormhole pub/sub system: Moving data through space and time https://www.facebook.com/notes/facebook-engineering/wormhole-pubsub-system-moving-data-through-space-and-time/10151504075843920
https://mahmoudhatem.wordpress.com/2019/03/28/mysql-ash_sampler-a-simple-ash-builder/
! uber schemaless and postgresql
http://use-the-index-luke.com/blog/2016-07-29/on-ubers-choice-of-databases
https://www.yumpu.com/en/document/view/53683323/migrating-uber-from-mysql-to-postgresql
https://www.postgresql.org/message-id/flat/CA%2BTgmobGROzKbt0_f0OEBdoPXwHHyVwozNb9Xb1ekwvEp6qFjg%40mail.gmail.com#CA+TgmobGROzKbt0_f0OEBdoPXwHHyVwozNb9Xb1ekwvEp6qFjg@mail.gmail.com
! Carlos Sierra - mystat
{{{
----------------------------------------------------------------------------------------
--
-- File name: mystat.sql
--
-- Purpose: Reports delta of current sessions stats before and after a SQL
--
-- Author: Carlos Sierra
--
-- Version: 2013/10/04
--
-- Usage: This scripts does not have parameters. It just needs to be executed
-- twice. First execution just before the SQL that needs to be evaluated.
-- Second execution right after.
--
-- Example: @mystat.sql
-- <any sql>
-- @mystat.sql
--
-- Description:
--
-- This script takes a snapshot of v$mystat every time it is executed. Then,
-- on first execution it does nothing else. On second execution it produces
-- a report with the gap between the first and second execution, and resets
-- all snapshots.
--
-- If you want to capture session statistics for one SQL, then execute this
-- script right before and after your SQL.
--
-- Notes:
--
-- This script uses the global temporary plan_table as a repository.
--
-- Developed and tested on 11.2.0.3
--
-- For a more robust tool use Tanel Poder snaper at
-- http://blog.tanelpoder.com
--
---------------------------------------------------------------------------------------
--
-- snap of v$mystat
INSERT INTO plan_table (
statement_id /* record_type */,
timestamp,
object_node /* class */,
object_alias /* name */,
cost /* value */)
SELECT 'v$mystat' record_type,
SYSDATE,
TRIM (',' FROM
TRIM (' ' FROM
DECODE(BITAND(n.class, 1), 1, 'User, ')||
DECODE(BITAND(n.class, 2), 2, 'Redo, ')||
DECODE(BITAND(n.class, 4), 4, 'Enqueue, ')||
DECODE(BITAND(n.class, 8), 8, 'Cache, ')||
DECODE(BITAND(n.class, 16), 16, 'OS, ')||
DECODE(BITAND(n.class, 32), 32, 'RAC, ')||
DECODE(BITAND(n.class, 64), 64, 'SQL, ')||
DECODE(BITAND(n.class, 128), 128, 'Debug, ')
)) class,
n.name,
s.value
FROM v$mystat s,
v$statname n
WHERE s.statistic# = n.statistic#;
--
DEF date_mask = 'YYYY-MM-DD HH24:MI:SS';
COL snap_date_end NEW_V snap_date_end;
COL snap_date_begin NEW_V snap_date_begin;
SET VER OFF PAGES 1000;
--
-- end snap
SELECT TO_CHAR(MAX(timestamp), '&&date_mask.') snap_date_end
FROM plan_table
WHERE statement_id = 'v$mystat';
--
-- begin snap (null if there is only one snap)
SELECT TO_CHAR(MAX(timestamp), '&&date_mask.') snap_date_begin
FROM plan_table
WHERE statement_id = 'v$mystat'
AND TO_CHAR(timestamp, '&&date_mask.') < '&&snap_date_end.';
--
COL statistics_name FOR A62 HEA "Statistics Name";
COL difference FOR 999,999,999,999 HEA "Difference";
--
-- report only if there are both begin and end snaps
SELECT (e.cost - b.cost) difference,
--b.object_node||': '||b.object_alias statistics_name
b.object_alias statistics_name
FROM plan_table b,
plan_table e
WHERE '&&snap_date_begin.' IS NOT NULL
AND b.statement_id = 'v$mystat'
AND b.timestamp = TO_DATE('&&snap_date_begin.', '&&date_mask.')
AND e.statement_id = 'v$mystat'
AND e.timestamp = TO_DATE('&&snap_date_end.', '&&date_mask.')
AND e.object_alias = b.object_alias /* name */
AND e.cost > b.cost /* value */
ORDER BY
--b.object_node,
b.object_alias;
--
-- report snaps
SELECT '&&snap_date_begin.' snap_date_begin,
'&&snap_date_end.' snap_date_end
FROM DUAL
WHERE '&&snap_date_begin.' IS NOT NULL;
--
-- delete only if report is not empty
DELETE plan_table
WHERE '&&snap_date_begin.' IS NOT NULL
AND statement_id = 'v$mystat';
-- end
}}}
! Andy Klock version
{{{
CREATE TABLE RUN_STATS
( "TEST_NAME" VARCHAR2(100),
"SNAP_TYPE" VARCHAR2(5),
"SNAP_TIME" DATE,
"STAT_CLASS" VARCHAR2(10),
"NAME" VARCHAR2(100),
"VALUE" NUMBER);
create or replace package snap_time is
-- Author : I604174
-- Created : 7/21/2014 2:46:05 PM
-- Purpose : Grabs timestamp and session stats
-- Code borrowed from Carlos Sierra's mystat.sql
-- http://carlos-sierra.net/2013/10/04/carlos-sierra-shared-scripts/
/*
grants:
grant select on v_$statname to i604174;
grant select on v_$mystat to i604174;
grant execute on i604174.snap_time to public;
create public synonym snap_time for i604174.snap_time;
*/
procedure begin_snap (p_run_name varchar2);
procedure end_snap (p_run_name varchar2);
end snap_time;
/
create or replace package body snap_time is
procedure begin_snap (p_run_name varchar2) is
l_sysdate date:=sysdate;
begin
-- snap time
insert into run_stats values (p_run_name,'BEGIN',l_sysdate,'SNAP','snap time',null);
-- snap mystat
insert into run_stats
SELECT p_run_name record_type,
'BEGIN',
l_sysdate,
TRIM (',' FROM
TRIM (' ' FROM
DECODE(BITAND(n.class, 1), 1, 'User, ')||
DECODE(BITAND(n.class, 2), 2, 'Redo, ')||
DECODE(BITAND(n.class, 4), 4, 'Enqueue, ')||
DECODE(BITAND(n.class, 8), 8, 'Cache, ')||
DECODE(BITAND(n.class, 16), 16, 'OS, ')||
DECODE(BITAND(n.class, 32), 32, 'RAC, ')||
DECODE(BITAND(n.class, 64), 64, 'SQL, ')||
DECODE(BITAND(n.class, 128), 128, 'Debug, ')
)) class,
n.name,
s.value
FROM v$mystat s,
v$statname n
WHERE s.statistic# = n.statistic#;
commit;
end begin_snap;
procedure end_snap (p_run_name varchar2) is
l_sysdate date:=sysdate;
begin
-- snap time
insert into run_stats values (p_run_name,'END',l_sysdate,'SNAP','snap time',null);
-- snap mystat
insert into run_stats
SELECT p_run_name record_type,
'END',
l_sysdate,
TRIM (',' FROM
TRIM (' ' FROM
DECODE(BITAND(n.class, 1), 1, 'User, ')||
DECODE(BITAND(n.class, 2), 2, 'Redo, ')||
DECODE(BITAND(n.class, 4), 4, 'Enqueue, ')||
DECODE(BITAND(n.class, 8), 8, 'Cache, ')||
DECODE(BITAND(n.class, 16), 16, 'OS, ')||
DECODE(BITAND(n.class, 32), 32, 'RAC, ')||
DECODE(BITAND(n.class, 64), 64, 'SQL, ')||
DECODE(BITAND(n.class, 128), 128, 'Debug, ')
)) class,
n.name,
s.value
FROM v$mystat s,
v$statname n
WHERE s.statistic# = n.statistic#;
commit;
end end_snap;
end snap_time;
/
/*
Usage:
exec snap_time.begin_snap('Test 1.1')
run something
exec snap_time.end_snap('Test 1.1')
Instrumentation:
col test_name format a15
col name format a70
select test_name, begin_snap, end_snap, snap_type, stat_class, name, delta from
(
select
test_name,
snap_type,
stat_class,
name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
(snap_time - (lag(snap_time) over (order by snap_time)))*86400 delta
from run_stats
where name = 'snap time'
union all
select
test_name,
snap_type,
stat_class,
name,
lag(snap_time) over (order by snap_time) begin_snap,
snap_time end_snap,
value-lag(value) over (order by snap_time) delta
from run_stats
)
where snap_type = 'END'
and delta > 0
/
*/
}}}
https://www.namecheap.com/support/knowledgebase/article.aspx/9622/10/dns-propagation--explained
https://www.namecheap.com/support/live-chat/domains.aspx
kproxy.com
http://www.sportsmediawatch.com/nba-tv-schedule/
http://www.espn.com/mens-college-basketball/standings/_/group/4
http://www.espn.com/mens-college-basketball/rankings
http://www.espn.com/mens-college-basketball/schedule
http://www.espn.com/mens-college-basketball/conferences/schedule/_/id/52/date/20161128
https://sites.google.com/site/embtdbo/examples/nested-loops-oracle
http://www.dpriver.com/blog/list-of-demos-illustrate-how-to-use-general-sql-parser/oracle-sql-query-rewrite/oracle-sql-query-rewrite-co-related-sub-query-to-inline-view/
{{{
--select * from hr.employees;
--select * from hr.departments;
--drop table hr.salary purge;
--create table hr.salary as select employee_id, salary, hire_date from hr.employees;
-- 163550
SELECT sum(emp.salary)
FROM hr.employees emp,
hr.salary,
hr.departments dept
WHERE emp.employee_id = salary.employee_id
AND emp.department_id = dept.department_id
AND dept.location_id = 1700
AND salary.hire_date = (SELECT MAX(hire_date)
FROM hr.salary s2
WHERE s2.employee_id = salary.employee_id )
;
-- 163550
SELECT SUM(emp.salary)
FROM hr.employees emp,
hr.salary,
hr.departments dept,
(SELECT s2.employee_id,
Max(hire_date) MAX_yyyymmdd
FROM hr.salary s2
GROUP BY s2.employee_id) sal2max
WHERE emp.employee_id = salary.employee_id
AND emp.department_id = dept.department_id
AND dept.location_id = 1700
AND salary.hire_date = sal2max.MAX_yyyymmdd
AND salary.employee_id = sal2max.employee_id
;
}}}
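Another common way to express the same rewrite is with an analytic MAX, which avoids the second visit to the salary table entirely. A minimal sketch against the same hr demo tables (my own variant, not from the dpriver post); it should return the same 163550:
{{{
-- analytic rewrite: compute the per-employee max hire_date in one pass
SELECT SUM(salary)
FROM (SELECT emp.salary,
             sal.hire_date,
             MAX(sal.hire_date) OVER (PARTITION BY sal.employee_id) max_hire_date
      FROM hr.employees emp,
           hr.salary sal,
           hr.departments dept
      WHERE emp.employee_id = sal.employee_id
      AND emp.department_id = dept.department_id
      AND dept.location_id = 1700)
WHERE hire_date = max_hire_date;
}}}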
<<<
TECH: SQL*Net V2 on Unix - A Quick Guide to Setting Up Client Side Tracing (Doc ID 16564.1)
<<<
{{{
Client Tracing or App Server
~~~~~~~~~~~~~~
1) Set the environment variable TNS_ADMIN to the directory where the
tnsnames.ora and listener.ora files exist.
The default location is $ORACLE_HOME/network/admin. Set $TNS_ADMIN to this
if it is not set. This ENSURES you know which files you are using.
2) Start the listener: lsnrctl
> set password <password>
> start
Note any errors. If you do not have a password set then ignore the
set password command.
3) If the listener started, start the database.
4) Create a file in $HOME called .sqlnet.ora and add the lines:
trace_level_client= 16
trace_file_client=client
trace_directory_client= /tmp (or similar)
trace_unique_client=true
5) Try to connect from SQL*Plus thus:
sqlplus username/password@alias
or
sqlplus username/password
substituting a suitable alias.
6) If you get an error we may need to see the client trace file
/tmp/client_<PID>.trc where <PID> is the process ID of the
client process (*1).
This will be quite large so it is best to FAX or EMAIL it.
*1 Note: On earlier versions of SQL*Net the filename may NOT have
the process ID appended to it.
Listener Tracing:
~~~~~~~~~~~~~~~~~
1) Edit your $TNS_ADMIN/listener.ora file and add the lines:
TRACE_LEVEL_LISTENER = 16
TRACE_DIRECTORY_LISTENER = /tmp
TRACE_FILE_LISTENER = "listener" <-- output filename
2) Stop and restart the listener:
lsnrctl stop
lsnrctl start
or
lsnrctl reload
Output should go to /tmp/listener.trc
}}}
<<<
Was able to obtain a client-side sqlnet trace of a slow sqlldr session. This was uploaded to Oracle. The trace showed that there is no slowness from the network perspective. It also shows periodic gaps of 10 – 180 seconds waiting on responses from the database server. This reaffirms our suspicion that something database-related is the cause, although not necessarily HCC.
<<<
How to Enable Oracle SQL*Net Client , Server , Listener , Kerberos and External procedure Tracing from Net Manager (Doc ID 395525.1)
! references
http://www.oracledistilled.com/oracle-database/troubleshooting/setting-up-oracle-net-services-tracing-on-the-client-and-server/
http://www.oracledbaplusbigdata.com/2018/01/27/sqlnet-ora-logging-and-tracing/
https://technology.amis.nl/2014/08/26/sqlnet-tracing-nightly-hours/
Oracle Database 12c Performance Tuning Recipes: A Problem-Solution Approach https://learning.oreilly.com/library/view/oracle-database-12c/9781430261872/9781430261872_Ch10.xhtml
Oracle Net8 Configuration and Troubleshooting https://learning.oreilly.com/library/view/oracle-net8-configuration/1565927532/ch10s03.html
Expert Oracle RAC Performance Diagnostics and Tuning https://learning.oreilly.com/library/view/expert-oracle-rac/9781430267102/9781430267096_Ch11.xhtml
http://speedof.me <- no flash needed, works even with an ad blocker
speedtest.net
https://www.ateam-oracle.com/testing-latency-and-throughput
{{{
Quick reference
The following is a simple list of steps to collect throughput and latency data.
Run MTR to see general latency and packet loss between servers.
Execute a multi-stream iperf test to see total throughput.
Execute UDP/jitter test if your setup will be using UDP between servers.
Execute jmeter tests against application/rest endpoint(s).
MTR command
mtr --no-dns --report --report-cycles 60 <ip_address>
--no-dns This tells mtr not to resolve DNS names. Name resolution isn't needed when all we're testing is ping latency and packet loss.
--report generates a report
--report-cycles 60 tells mtr to send 60 probes to each hop along the route to your destination (at the default rate of one per second, roughly a minute of sampling).
Replace <ip_address> with the host you want to test.
The output of this command will help you see the latency and packet loss on all hops to your destination.
Note that you may need to run this command with sudo (as root).
MTR Output
Sample MTR output:
[root@localhost ~]# mtr --no-dns --report --report-cycles 60 192.168.1.128
HOST: localhost Loss% Snt Last Avg Best Wrst StDev
1. 10.0.2.2 0.00% 60 0.2 0.3 0.2 0.8 0.1
2. 192.168.1.1 0.00% 60 1.1 1.1 0.9 3.3 0.4
3. 192.168.1.128 20.00% 60 232.3 226.8 219.9 242.6 2.9
Hops 1 and 2 show zero percent packet loss, with average latencies of 0.3ms and 1.1ms.
Hop 3 shows a 20 percent packet loss, and latency jumps to an average of 226.8ms.
Hop 3 shows a potential problem. If you see packet loss and latency spikes, this is something to investigate.
Sample iperf commands
Note you must run both a client and server to use iperf.
Run the iperf server
On the machine/VM you want to act as the server, run “iperf -s”
The default port iperf will bind to is 5001. You can change this with the -p option, and specify the port you wish to use.
If you are on a host with multiple interfaces, you can use the -B option to bind to a specific address.
iperf server output
You should see output similar to the following:
[root@localhost ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
Basic bandwidth measurement
From the machine you are using as the client, execute:
iperf -c 192.168.1.22
Replace the 192.x.x.x IP address with the IP address of the machine you are running the iperf server on.
The results of this command will show you the overall bandwidth stats from the client to the server.
If you are on linux, consider using the -Z option, which is the zero copy method of sending data. It will use less CPU resources.
If you are on a host with multiple interfaces, you can use the -B option to bind to a specific address.
Note that this command will execute a single stream – meaning one big pipe from the client to the server. You may not be able to saturate your network link with a single stream.
Sample output
Output will be similar to the following:
[oracle@localhost ~]$ iperf -c 10.0.2.15
------------------------------------------------------------
Client connecting to 10.0.2.15, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[ 3] local 10.0.2.15 port 54124 connected with 10.0.2.15 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 64.7 GBytes 55.5 Gbits/sec
This shows 64.7 GBytes of data was transferred in 10 seconds with an average bandwidth of 55.5 Gbits per second. This is a single stream test.
Test with multiple streams
From the machine you are using as the client, execute:
iperf -c <server_IP> -P 4
This will generate 4 streams instead of the default of 1. Note that while increasing the number of streams may improve your overall throughput, there is a point of diminishing returns. CPU resources and other system factors will eventually be a bottleneck. If you set the number of streams too high, you will see poorer results.
If you are on linux, consider using the -Z option, which is the zero copy method of sending data. It will use less CPU resources.
If you are on a host with multiple interfaces, you can use the -B option to bind to a specific address.
Sample output
[oracle@localhost ~]$ iperf -c 10.0.2.15 -P 4
------------------------------------------------------------
Client connecting to 10.0.2.15, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[ 6] local 10.0.2.15 port 54128 connected with 10.0.2.15 port 5001
[ 3] local 10.0.2.15 port 54125 connected with 10.0.2.15 port 5001
[ 4] local 10.0.2.15 port 54126 connected with 10.0.2.15 port 5001
[ 5] local 10.0.2.15 port 54127 connected with 10.0.2.15 port 5001
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.0 sec 18.1 GBytes 15.6 Gbits/sec
[ 3] 0.0-10.0 sec 28.6 GBytes 24.5 Gbits/sec
[ 4] 0.0-10.0 sec 19.1 GBytes 16.4 Gbits/sec
[ 5] 0.0-10.0 sec 19.1 GBytes 16.4 Gbits/sec
[SUM] 0.0-10.0 sec 84.8 GBytes 72.8 Gbits/sec
This is a test with 4 streams at once.
The output shows the bandwidth for each stream as 15.6, 24.5, 16.4 and 16.4 Gbits per second. The last line, the SUM, shows the total transfer and bandwidth for all 4 streams. This is the overall bandwidth achieved for this test.
Measure bidirectional bandwidth
From the machine you are using as the client, execute:
iperf -c <server_IP> -r
Replace <server_IP> with the IP address of the machine you are running the iperf server on.
The results of this command will show you the overall bandwidth stats from the client to the server, AND from the server to the client. This test is useful if you have a lot of bidirectional traffic occurring.
This command will execute a single stream.
Sample output
[oracle@localhost ~]$ iperf -c 10.0.2.15 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 10.0.2.15 port 5001 connected with 10.0.2.15 port 54131
------------------------------------------------------------
Client connecting to 10.0.2.15, TCP port 5001
TCP window size: 4.00 MByte (default)
------------------------------------------------------------
[ 5] local 10.0.2.15 port 54131 connected with 10.0.2.15 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 67.4 GBytes 57.9 Gbits/sec
[ 4] 0.0-10.0 sec 67.4 GBytes 57.8 Gbits/sec
This shows 2 connections, one going each way between client and server.
You can match up the connection with the output based on the [ID] at the start of each line.
ID [4] bandwidth was 57.8 Gbits/sec – This is the connection from the server to the client
ID [5] bandwidth was 57.9 Gbits/sec – This is the connection from the client to the server
UDP Jitter test
Restart your iperf server with the -u option to let it accept UDP packets.
iperf -s -u
You should see output similar to the following:
[root@localhost ~]# iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 256 KByte (default)
------------------------------------------------------------
On the client, execute
iperf -c <server_IP> -u
The report produced will show you something like the following:
[oracle@localhost ~]$ iperf -c 10.0.2.15 -u
------------------------------------------------------------
Client connecting to 10.0.2.15, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 256 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.2.15 port 25171 connected with 10.0.2.15 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec
[ 3] Sent 893 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec 0.127 ms 43/ 893 (4.8%)
Bandwidth was 1.05 Mbits/sec
43 of the 893 datagrams were lost, about 4.8% of the total sent.
Jitter was .127ms. Jitter is the measure of variation in packet arrival intervals. In a perfect network, packets arrive at a constant interval, e.g. one packet every 2ms. Jitter can cause packet loss and network congestion. In audio and video transmission, a lot of jitter can interfere with the quality of the transmission.
A note on window size
TCP window size is an important factor in overall network performance. The window size controls how much data can be in flight (sent but not yet acknowledged) at any time. If the TCP window is too small, the sender will reduce the speed at which packets are sent so it does not overwhelm the receiver.
It is possible to tune the TCP window size with iperf. However, for purposes of this article, we are assuming your operating system will autotune TCP for you.
If you want to play with window sizes on your own, or are testing a link above 10Gbps, a good place to start is here: http://fasterdata.es.net/host-tuning/
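A back-of-the-envelope sanity check (example RTTs assumed, not from the article): a single TCP stream tops out around window_size / RTT, the bandwidth-delay product. The default 85.3 KByte window shown in the outputs above at a 1 ms RTT gives roughly 85 MB/s (~680 Mbit/s); at a 10 ms RTT the same window caps a stream near 68 Mbit/s, which is why long-haul links need larger windows or more streams.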
JMeter command line
You will need to first create a JMeter test for your specific application.
Once completed, place the test's corresponding .jmx file on the server you wish to execute your tests from.
Execute jmeter from the command line with the -n switch, which tells jmeter to run in non-gui mode.
jmeter -n -t <path_to_jmx_script> -l <file_to_save_results_in>
The -l option will save test run results to a jtl file. You can import this file into jmeter in GUI mode if you wish to review the results that way. There will also be text output in the console when you execute the test.
All the test parameters are contained in the jmx script. If you are comfortable doing so, you can edit this file by hand to change test parameters. For example, change the ramp up time or number of threads.
}}}
https://www.dynatrace.com/platform/comparison/dynatrace-vs-new-relic/
{{{
You could use newgrp and gpasswd as a collaboration tool. But you have to share the group password to allow access.
Notes about group passwords:
Group passwords are an inherent security problem since more than one person is permitted to know the password.
However, groups are a useful tool for permitting co-operation between different users.
/* create the folders */
[test1@rhel5 tmp]$ ls -ltr
total 32
drwx------ 2 root root 4096 Feb 24 20:55 gconfd-root
drwx------ 2 redhat redhat 4096 Feb 24 22:19 ssh-FNkqlS3954
drwx------ 2 redhat redhat 4096 Feb 24 22:19 keyring-IeOsYx
drwx------ 3 redhat redhat 4096 Feb 24 22:19 gconfd-redhat
drwx------ 2 redhat redhat 4096 Feb 24 22:19 virtual-redhat.5tgJQn
srwxrwxr-x 1 redhat redhat 0 Feb 24 22:19 mapping-redhat
drwx------ 2 redhat redhat 4096 Feb 24 22:20 orbit-redhat
drwxr-x--- 4 test1 group1 4096 Feb 25 01:00 folder <-- this is the collaboration folder.. group1 will be used for collaboration
/* these are the users */
[root@rhel5 ~]# id test1
uid=501(test1) gid=501(test1) groups=501(test1),504(group1)
[root@rhel5 ~]# id test2
uid=502(test2) gid=502(test2) groups=502(test2),504(group1)
[root@rhel5 ~]# id test3
uid=503(test3) gid=503(test3) groups=503(test3)
/* make an administrator */
[root@rhel5 ~]# gpasswd -A test1 -M test1 group1
/* show new group attribute */
[test1@rhel5 ~]$ id
uid=501(test1) gid=501(test1) groups=501(test1),504(group1)
/* check out the gshadow.. see test1 */
cat /etc/gshadow
group1:!:test1:test1
/* /etc/group.. see test1 */
group1:x:504:test1
/* as the test2 user... try to execute newgrp, it won't allow you... notice group1 is gone from its groups (the gpasswd -M above reset the member list to just test1) */
[test2@rhel5 tmp]$ id
uid=502(test2) gid=502(test2) groups=502(test2)
[test2@rhel5 tmp]$ newgrp - group1
Password:
Sorry.
/* as root, make a password for group1 */
[root@rhel5 tmp]# gpasswd group1
Changing the password for group group1
New Password:
Re-enter new password:
/* see the gshadow */
group1:$1$ovK56/1n$1BZLf.jnHM0ER0GVBcWiD.:test1:test1
/* as test2 again, execute newgrp, it now allows you to get in the folder */
[test2@rhel5 tmp]$ newgrp - group1
Password:
[test2@rhel5 ~]$
[test2@rhel5 ~]$
[test2@rhel5 ~]$ id
uid=502(test2) gid=504(group1) groups=502(test2),504(group1)
[test2@rhel5 ~]$
[test2@rhel5 ~]$ cd /tmp/folder/
[test2@rhel5 folder]$ ls
inbox outbox
/* then user logs out */
/* see the groups gone */
[test2@rhel5 ~]$ id
uid=502(test2) gid=502(test2) groups=502(test2)
/* add a new user member to the group */
[test1@rhel5 tmp]$ gpasswd -a test3 group1
Adding user test3 to group group1
/* see the entry of /etc/group */
group1:x:504:test1,test3
/* see the entry of gshadow.... notice test3 is added to the group */
group1:$1$ovK56/1n$1BZLf.jnHM0ER0GVBcWiD.:test1:test1,test3
/* as test2, newgrp prompts for a password (test2 is not a member) */
[test2@rhel5 ~]$ newgrp - group1
Password:
/* as test3, newgrp did not prompt for a password (test3 is a member) */
[test3@rhel5 ~]$ newgrp - group1
[test3@rhel5 ~]$
}}}
HAPROXY vs NGINX - 10,000 requests while killing servers https://www.youtube.com/watch?v=yQvcHy_tPjI
http://stackoverflow.com/questions/6045020/how-to-redirect-to-a-different-domain-using-nginx
https://www.digitalocean.com/community/tutorials/how-to-create-temporary-and-permanent-redirects-with-apache-and-nginx
https://wordpress.org/support/topic/nginx-and-redirects
http://wordpress.stackexchange.com/questions/210899/how-to-configure-nginx-to-redirect-requests-to-the-uploads-directory-to-the-prod
http://www.felipe1982.com/blog/2010/09/23/nmap-on-cygwin/
java.sql.SQLRecoverableException: No More Data To Read From Socket (Doc ID 1633316.1)
alter session set "_OPTIMIZER_COST_BASED_TRANSFORMATION" = OFF;
the following fixed the "no rows on AIX" issue:
{{{
-- no rows on AIX, because the INTSIZE_CSEC suddenly changed to a higher value.. so the script's intsize_csec filter no longer matched anything
spool loadprof.txt append
col METRIC_NAME format a40
col short_name format a20 heading 'Load Profile'
col per_sec format 999,999,999.9 heading 'Per Second'
col per_tx format 999,999,999.9 heading 'Per Transaction'
set colsep ' '
select to_char(sysdate,'MM/DD/YY HH24:MI:SS') tm, a.* from v$sysmetric a
where a.metric_name like '%Logical%'
/
spool off
exit
TM BEGIN_TIM END_TIME INTSIZE_CSEC GROUP_ID METRIC_ID METRIC_NAME VALUE METRIC_UNIT
----------------- --------- --------- ------------ ---------- ---------- ---------------------------------------------------------------- ---------- ----------------------------------------------------------------
03/27/13 22:38:36 27-MAR-13 27-MAR-13 9001 2 2030 Logical Reads Per Sec 4.6217087 Reads Per Second
03/27/13 22:38:36 27-MAR-13 27-MAR-13 9001 2 2031 Logical Reads Per Txn 416 Reads Per Txn
03/27/13 22:38:36 27-MAR-13 27-MAR-13 9001 2 2132 Logical Reads Per User Call 138.666667 Reads Per Call
03/27/13 22:38:36 27-MAR-13 27-MAR-13 2101 3 2030 Logical Reads Per Sec 158.638743 Reads Per Second
03/27/13 22:38:36 27-MAR-13 27-MAR-13 2101 3 2
TM BEGIN_TIM END_TIME INTSIZE_CSEC GROUP_ID METRIC_ID METRIC_NAME VALUE METRIC_UNIT
----------------- --------- --------- ------------ ---------- ---------- ---------------------------------------------------------------- ---------- ----------------------------------------------------------------
03/27/13 22:45:31 27-MAR-13 27-MAR-13 6011 2 2030 Logical Reads Per Sec 18504.9742 Reads Per Second
03/27/13 22:45:31 27-MAR-13 27-MAR-13 6011 2 2031 Logical Reads Per Txn 10905.2353 Reads Per Txn
03/27/13 22:45:31 27-MAR-13 27-MAR-13 6011 2 2132 Logical Reads Per User Call 573.663744 Reads Per Call
03/27/13 22:45:31 27-MAR-13 27-MAR-13 1503 3 2030 Logical Reads Per Sec 17295.0765 Reads Per Second
03/27/13 22:45:31 27-MAR-13 27-MAR-13 1503 3 2031 Logical Reads
commented out the upper-bound intsize_csec filter in this part of the script (a sturdier variant follows this block):
where m.metric_name = s.metric_name
and s.intsize_csec > 5000
-- and s.intsize_csec < 7000
)
group by short_name)
-- after the fix
TM BEGIN_TIM END_TIME INTSIZE_CSEC GROUP_ID METRIC_ID METRIC_NAME VALUE METRIC_UNIT
----------------- --------- --------- ------------ ---------- ---------- ---------------------------------------- ---------- ----------------------------------------------------------------
03/27/13 22:51:44 27-MAR-13 27-MAR-13 9001 2 2030 Logical Reads Per Sec 326242.195 Reads Per Second
03/27/13 22:51:44 27-MAR-13 27-MAR-13 9001 2 2031 Logical Reads Per Txn 29365060 Reads Per Txn
03/27/13 22:51:44 27-MAR-13 27-MAR-13 9001 2 2132 Logical Reads Per User Call 15406.6422 Reads Per Call
03/27/13 22:51:44 27-MAR-13 27-MAR-13 2400 3 2030 Logical Reads Per Sec 325705.583 Reads Per Second
03/27/13 22:51:44 27-MAR-13 27-MAR-13 2400 3 2031 Logical Reads Per Txn 7816934 Reads Per Txn
}}}
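A sturdier fix is to key the filter off the metric group instead of the interval size; in v$sysmetric, group_id 2 is the long-duration (60-second) group and group_id 3 the short-duration (15-second) group (verify against v$metricgroup on your version). A minimal sketch:
{{{
-- same load profile pull, but filtering on group_id instead of intsize_csec
select to_char(sysdate,'MM/DD/YY HH24:MI:SS') tm, a.*
from v$sysmetric a
where a.metric_name like '%Logical%'
and a.group_id = 2   -- long-duration (60s) metrics only
/
}}}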
* The hint tells Oracle not to do complex view merging and to execute the in-line view, C, before executing the outer query components (a generic placement sketch follows the query below)
{{{
SELECT /*+ NO_MERGE(c) */
DISTINCT
ALGO_INPUT.BATCH_ID,
S.STORE_ID ,
ALGO_INPUT.CLASS_ID,
SKU0.STYLE_DESCRIPTION,
ALGO_INPUT.EOM_DATE,
PO_WKLST.MANUAL_ALLOC_REORDER,
PO_WKLST.PO_NO,
PO_WKLST.PO_LINE_NO,
PO_WKLST.PO_VERSION,
COALESCE(ALLOCATED_UNIT_QTY,SUGGESTED_ALLOCATED_QTY) "ALLOCATED_QTY_BY_SIZE",
SKU0.VENDOR_COLOR,
SKUS.SIZE_NAME1 "SIZE",
C.MAX_ALLOCATION_SEQ_NO,
LINE_ITEM_TYPE,
BULK_SIZE_TYPE
FROM ALLOC_STORES S,
ALGO_INPUT_FOR_REVIEW ALGO_INPUT,
ALLOC_SKU0_MASTER SKU0,
ALLOC_SKU_MASTER SKUS,
ALLOC_PO_WKLST_LINE_ITEMS PO_WKLST,
ALLOC_BATCH_LINE_ITEMS BATCH_LINES,
(SELECT
MAX(ALLOCATION_SEQ_NO) MAX_ALLOCATION_SEQ_NO,
CLASS_ID,
STORE_ID,
BATCH_ID
FROM ALGO_INPUT_FOR_REVIEW
GROUP BY CLASS_ID, STORE_ID, BATCH_ID) C
WHERE S.STORE_ID =ALGO_INPUT.STORE_ID
AND BATCH_LINES.PO_NO =PO_WKLST.PO_NO
AND BATCH_LINES.PO_LINE_NO =PO_WKLST.PO_LINE_NO
AND BATCH_LINES.BATCH_ID =ALGO_INPUT.BATCH_ID
AND BATCH_LINES.BATCH_LINE_NO =ALGO_INPUT.BATCH_LINE_NO
AND ALGO_INPUT.CLASS_ID =SKU0.CLASS_ID
AND ALGO_INPUT.SKU0 =SKU0.SKU0
AND SKU0.SKU0 =SKUS.SKU0
AND SKU0.CLASS_ID =SKUS.CLASS_ID
AND C.MAX_ALLOCATION_SEQ_NO =ALGO_INPUT.ALLOCATION_SEQ_NO
AND C.CLASS_ID =ALGO_INPUT.CLASS_ID
AND C.STORE_ID =ALGO_INPUT.STORE_ID
AND C.BATCH_ID =ALGO_INPUT.BATCH_ID
ORDER BY STORE_ID;
}}}
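The hint can equivalently be written inside the inline view itself; NO_MERGE with no argument applies to the query block it appears in. A minimal generic sketch using the scott demo schema (assumed installed, not from the query above):
{{{
-- hint placed inside the view block that should not be merged
SELECT d.dname, v.cnt
FROM scott.dept d,
     (SELECT /*+ NO_MERGE */ deptno, COUNT(*) cnt
      FROM scott.emp
      GROUP BY deptno) v
WHERE d.deptno = v.deptno;
}}}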
http://blog.modulus.io/absolute-beginners-guide-to-nodejs
https://www.airpair.com/javascript/node-js-tutorial
https://ilovecoding.org/courses/learn-node-js-in-a-week/
http://nodetuts.com/
http://www.quora.com/What-are-the-best-resources-to-learn-Node-js
<<showtoc>>
! node-oracledb installation guide
http://dgielis.blogspot.com/2015/01/setting-up-node-and-oracle-database.html
! Oracle Database driver for Node.js maintained by Oracle Corp.
https://github.com/oracle/node-oracledb
https://github.com/OraOpenSource/orawrap
https://github.com/OraOpenSource
JavaScript and Oracle Database Office Hours Sessions https://asktom.oracle.com/pls/apex/f?p=100:551:::NO:551:P551_CLASS_ID:727:#sessions
https://www.npmjs.com/
history http://www.youtube.com/watch?v=SAc0vQCC6UQ&feature=g-vrec
http://blog.nodejs.org/2012/05/08/bryan-cantrill-instrumenting-the-real-time-web/
http://www.nodebeginner.org/
http://www.toptal.com/nodejs/why-the-hell-would-i-use-node-js
https://medium.com/@sagish/intro-why-i-chose-node-js-over-ruby-on-rails-905b0d7d15c3
http://nodeframework.com/
https://app.pluralsight.com/library/courses/node-intro/table-of-contents
https://www.codeschool.com/courses/real-time-web-with-node-js
https://www.udacity.com/course/javascript-promises--ud898
Building Web Applications with Node.js and Express 4.0
https://www.pluralsight.com/courses/nodejs-express-web-applications
Real-time Web Applications
https://www.pluralsight.com/courses/real-time-web-applications
RESTful Web Services with Node.js and Express
https://www.pluralsight.com/courses/node-js-express-rest-web-services
RESTful Web API Design with Node.js
https://www.lynda.com/Node-js-tutorials/RESTful-Web-API-Design-Node-js/521195-2.html
web services node.js
https://www.lynda.com/search?q=web+services+node.js
https://www.lynda.com/search?q=websocket+node.js
R + websocket
http://illposed.net/jsm2012.pdf
A nonparametric decomposition of the Mexican American average wage gap http://onlinelibrary.wiley.com/doi/10.1002/jae.1006/full
Seasonal Decomposition for Geographical Time Series using Nonparametric Regression http://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=2571&context=etd
http://www.stat.berkeley.edu/~brill/Stat153/lowessexamples.pdf
The LOESS Procedure http://www.math.wpi.edu/saspdf/stat/chap38.pdf
LOESS, or LOWESS Smoothing Curves https://sites.google.com/site/davidsstatistics/using-r/smoothing-curves
http://stats.stackexchange.com/questions/161069/difference-between-loess-and-lowess
Question: lowess vs. loess https://support.bioconductor.org/p/2323/
[BioC] lowess vs. loess https://stat.ethz.ch/pipermail/bioconductor/2003-September/002337.html
https://en.wikipedia.org/wiki/Local_regression
A NON-LINEAR REGRESSION MODEL FOR MID-TERM LOAD FORECASTING AND IMPROVEMENTS IN SEASONALITY http://js2007.free.fr/research/pscc2005-168.pdf nonparametric decomposition for long term forecasting
Long-term and Short-term Forecasting Techniques for Regional Airport Planning http://www.diva-portal.org/smash/get/diva2:954470/FULLTEXT02 <- good stuff
LOESS Curves - Intro to Data Science https://www.youtube.com/watch?v=NdJa1zinajM
R - LOWESS smoothing curve https://www.youtube.com/watch?v=zPafVva9BwE
Lecture 7: Nonparametric Regression https://www.youtube.com/watch?v=YHnC1ddWUx0
Forecasting in R: Smoothing Methods Part I https://www.youtube.com/watch?v=Rip9bKD2OA0
Forecasting in R: Smoothing Methods Part II https://www.youtube.com/watch?v=cZRMFNTreQI
* if you PIVOT (denormalize) the per-type rows into columns you only join once on the table.generatedjobid instead of joining once for every "type" or "parametertype" value. you can also take advantage of parallelism and partition pruning on "dateacquired". (the rewrite below does the pivot with conditional aggregation)
{{{
# original
```
SELECT TOP (100) j.[generatedjobid],
j.[jobid],
j.[city],
j.state,
j.zipcode,
j.country,
j.[createddate],
j.[dateacquired],
j.[description],
j.[expired],
j.[expirydate],
j.[fedcontractor],
j.[lastupdateddate],
j.[sourcestate],
j.[title],
j.[filename],
onet.value AS onet_code,
naics.value AS naics_code,
mined.value AS min_education,
expr.value AS experience,
lic.value AS license,
train.value AS training,
positions.maximum AS maxPositions,
duration.maximum AS jobDuration,
hoursperweek.maximum AS weeklyHours,
shift.maximum AS shift,
salary.minimum AS minSalary,
salary.maximum AS maxSalary,
salary.unit AS salaryUnit,
addr.city AS companyCity,
addr.state AS companyState,
addr.zipcode AS companyZipcode,
addr.country AS companyCountry
FROM [conduent].[dbo].[job] j
LEFT JOIN classification onet
ON ( j.generatedjobid = onet.generatedjobid
AND onet.type = 'ONET' )
LEFT JOIN classification naics
ON ( j.generatedjobid = naics.generatedjobid
AND naics.type = 'NAICS' )
LEFT JOIN requirement mined
ON ( j.generatedjobid = mined.generatedjobid
AND mined.type = 'mineducation' )
LEFT JOIN requirement expr
ON ( j.generatedjobid = expr.generatedjobid
AND expr.type = 'experience' )
LEFT JOIN requirement lic
ON ( j.generatedjobid = lic.generatedjobid
AND lic.type = 'license' )
LEFT JOIN requirement train
ON ( j.generatedjobid = train.generatedjobid
AND train.type = 'training' )
LEFT JOIN parameter positions
ON ( j.generatedjobid = positions.generatedjobid
AND parametertype = 'positions' )
LEFT JOIN parameter duration
ON ( j.generatedjobid = duration.generatedjobid
AND duration.parametertype = 'duration' )
LEFT JOIN parameter hoursperweek
ON ( j.generatedjobid = hoursperweek.generatedjobid
AND hoursperweek.parametertype = 'hoursperweek' )
LEFT JOIN parameter shift
ON ( j.generatedjobid = shift.generatedjobid
AND shift.parametertype = 'shift' )
LEFT JOIN parameter salary
ON ( j.generatedjobid = salary.generatedjobid
AND salary.parametertype = 'salary' )
LEFT JOIN [application] a
ON ( j.generatedjobid = a.generatedjobid )
LEFT JOIN [address] addr
ON ( a.applicationid = addr.applicationid )
WHERE [dateacquired] BETWEEN CONVERT(DATETIME, '2017-01-01') AND
CONVERT(DATETIME, '2018-12-31 23:59:59')
TableName RowCount
[dbo].[Address] 1,099,494
[dbo].[Application] 38,549,607
[dbo].[Classification] 77,099,214
[dbo].[DailySummary] 574
[dbo].[Job] 36,503,982
[dbo].[JobUploadLog] 86,366
[dbo].[Method] 1,724,666
[dbo].[Parameter] 37,235,995
[dbo].[Requirement] 29,788,796
```
# rewrite
```
-- pivot the row-per-type tables into one row per generatedjobid (conditional aggregation)
WITH cte_classification as (
select generatedjobid,
       MAX(CASE WHEN type = 'ONET'  THEN value END) AS onet,
       MAX(CASE WHEN type = 'NAICS' THEN value END) AS naics
from classification
where [dateacquired] BETWEEN CONVERT(DATETIME, '2017-01-01') AND
CONVERT(DATETIME, '2018-12-31 23:59:59')
group by generatedjobid
),
cte_requirement as (
select generatedjobid,
       MAX(CASE WHEN type = 'mineducation' THEN value END) AS mined,
       MAX(CASE WHEN type = 'experience'   THEN value END) AS expr,
       MAX(CASE WHEN type = 'license'      THEN value END) AS lic,
       MAX(CASE WHEN type = 'training'     THEN value END) AS train
from requirement
where [dateacquired] BETWEEN CONVERT(DATETIME, '2017-01-01') AND
CONVERT(DATETIME, '2018-12-31 23:59:59')
group by generatedjobid
),
cte_parameter as (
select generatedjobid,
       MAX(CASE WHEN parametertype = 'positions'    THEN maximum END) AS positions_maximum,
       MAX(CASE WHEN parametertype = 'duration'     THEN maximum END) AS duration_maximum,
       MAX(CASE WHEN parametertype = 'hoursperweek' THEN maximum END) AS hoursperweek_maximum,
       MAX(CASE WHEN parametertype = 'shift'        THEN maximum END) AS shift_maximum,
       MAX(CASE WHEN parametertype = 'salary'       THEN minimum END) AS salary_minimum,
       MAX(CASE WHEN parametertype = 'salary'       THEN maximum END) AS salary_maximum,
       MAX(CASE WHEN parametertype = 'salary'       THEN unit    END) AS salary_unit
from parameter
where [dateacquired] BETWEEN CONVERT(DATETIME, '2017-01-01') AND
CONVERT(DATETIME, '2018-12-31 23:59:59')
group by generatedjobid
)
SELECT TOP (100) j.[generatedjobid],
j.[jobid],
j.[city],
j.state,
j.zipcode,
j.country,
j.[createddate],
j.[dateacquired],
j.[description],
j.[expired],
j.[expirydate],
j.[fedcontractor],
j.[lastupdateddate],
j.[sourcestate],
j.[title],
j.[filename],
cte_class.onet AS onet_code,
cte_class.naics AS naics_code,
cte_req.mined AS min_education,
cte_req.expr AS experience,
cte_req.lic AS license,
cte_req.train AS training,
cte_param.positions_maximum AS maxPositions,
cte_param.duration_maximum AS jobDuration,
cte_param.hoursperweek_maximum AS weeklyHours,
cte_param.shift_maximum AS shift,
cte_param.salary_minimum AS minSalary,
cte_param.salary_maximum AS maxSalary,
cte_param.salary_unit AS salaryUnit,
addr.city AS companyCity,
addr.state AS companyState,
addr.zipcode AS companyZipcode,
addr.country AS companyCountry
FROM [conduent].[dbo].[job] j
LEFT JOIN cte_classification cte_class
ON (j.generatedjobid = cte_class.generatedjobid)
LEFT JOIN cte_requirement cte_req
ON (j.generatedjobid = cte_req.generatedjobid)
LEFT JOIN cte_parameter cte_param
ON (j.generatedjobid = cte_param.generatedjobid)
LEFT JOIN [application] a
ON ( j.generatedjobid = a.generatedjobid
AND j.[dateacquired] BETWEEN CONVERT(DATETIME, '2017-01-01') AND
CONVERT(DATETIME, '2018-12-31 23:59:59') )
LEFT JOIN [address] addr
ON ( a.applicationid = addr.applicationid
AND j.[dateacquired] BETWEEN CONVERT(DATETIME, '2017-01-01') AND
CONVERT(DATETIME, '2018-12-31 23:59:59') )
```
## other examples
### example1
```
select
'RUNNAME',
'BEGIN',
sysdate,
wait_class || ' - ' || event as class,
measure,
value
from
(
select * from v$session_event
unpivot (value for measure in (TOTAL_WAITS as 'TOTAL_WAITS',
TOTAL_TIMEOUTS as 'TOTAL_TIMEOUTS',
TIME_WAITED as 'TIME_WAITED',
AVERAGE_WAIT as 'AVERAGE_WAIT',
MAX_WAIT as 'MAX_WAIT',
TIME_WAITED_MICRO as 'TIME_WAITED_MICRO',
EVENT_ID as 'EVENT_ID',
WAIT_CLASS_ID as 'WAIT_CLASS_ID',
WAIT_CLASS# as 'WAIT_CLASS#'
))
where sid in (select /*+ no_merge */ sid from v$mystat where rownum = 1)
)
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,TOTAL_WAITS , 50
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,TOTAL_TIMEOUTS , 0
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,TIME_WAITED , 57
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,AVERAGE_WAIT , 1.15
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,MAX_WAIT , 5
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,TIME_WAITED_MICRO, 574273
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,EVENT_ID , 443865681
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,WAIT_CLASS_ID ,1740759767
RUNNAME,BEGIN,20160420 21:58:36,User I/O - cell multiblock physical read ,WAIT_CLASS# , 8
SID, SID,EVENT ,TOTAL_WAITS,TOTAL_TIMEOUTS,TIME_WAITED,AVERAGE_WAIT, MAX_WAIT,TIME_WAITED_MICRO, EVENT_ID,WAIT_CLASS_ID,WAIT_CLASS#,WAIT_CLASS
153, 153,Disk file operations I/O , 4, 0, 0, 0, 0, 76, 166678035, 1740759767, 8,User I/O
153, 153,Disk file Mirror Read , 4, 0, 0, .04, 0, 1408, 13102552, 1740759767, 8,User I/O
153, 153,control file sequential read , 14, 0, 2, .12, 1, 16944,3213517201, 4108307767, 9,System I/O
153, 153,gc cr multi block request , 115, 0, 2, .02, 0, 18459, 661121159, 3871361733, 11,Cluster
153, 153,gc cr block 2-way , 26, 0, 0, .01, 0, 2589, 737661873, 3871361733, 11,Cluster
153, 153,gc current block 2-way , 27, 0, 0, .01, 0, 1992, 111015833, 3871361733, 11,Cluster
153, 153,gc cr grant 2-way , 1, 0, 0, .01, 0, 92,3201690383, 3871361733, 11,Cluster
153, 153,gc current grant 2-way , 5, 0, 0, .01, 0, 387,2685450749, 3871361733, 11,Cluster
153, 153,gc current grant busy , 2, 0, 0, .01, 0, 270,2277737081, 3871361733, 11,Cluster
153, 153,row cache lock , 12, 0, 0, .01, 0, 1452,1714089451, 3875070507, 4,Concurrency
153, 153,library cache pin , 16, 0, 0, .01, 0, 1428,2802704141, 3875070507, 4,Concurrency
153, 153,library cache lock , 15, 0, 0, .01, 0, 2040, 916468430, 3875070507, 4,Concurrency
153, 153,SQL*Net message to client , 125, 0, 0, 0, 0, 120,2067390145, 2000153315, 7,Network
153, 153,SQL*Net message from client , 124, 0, 778694, 6279.79, 153480, 7786936539,1421975091, 2723168908, 6,Idle
153, 153,SQL*Net break/reset to client , 14, 0, 0, 0, 0, 434,1963888671, 4217450380, 1,Application
153, 153,cell single block physical read , 9, 0, 5, .5, 2, 45231,2614864156, 1740759767, 8,User I/O
153, 153,cell multiblock physical read , 50, 0, 57, 1.15, 5, 574273, 443865681, 1740759767, 8,User I/O
17 rows selected.
```
### example2
```
-- get min max snap_id for each dbid for the past 13 months
for rec_snap in (select new_dbid, target_name
from dbsnmp.caw_dbid_mapping
where new_dbid in ('2024176565','1890029227','1922913859','1826652812'))
loop
insert into plan_table (options, object_node, object_owner, object_name)
SELECT max(rec_snap.target_name),
max(dbid) awrwh_dbid,
TO_CHAR(MIN(snap_id)) awrwh_min_snap_id,
TO_CHAR(MAX(snap_id)) awrwh_max_snap_id
FROM dba_hist_snapshot
WHERE dbid = rec_snap.new_dbid
and to_date(to_char(END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS'),'MM/DD/YY HH24:MI:SS') >= trunc(add_months(sysdate,-13),'MM');
end loop;
-- insert record for each dbid using min max snap_id
for rec_dbid in (select options target, object_node dbid, object_owner min_snap, object_name max_snap
from plan_table)
loop
insert /*+ PARALLEL(8) */ into daily_cpu_all select /*+ PARALLEL(8) */ * from (
WITH
cpuwl AS (
SELECT /*+ MATERIALIZE NO_MERGE */
instance_number,
dbid,
snap_id,
SUM(CASE WHEN stat_name = 'RSRC_MGR_CPU_WAIT_TIME' THEN value ELSE 0 END) rsrcmgr,
SUM(CASE WHEN stat_name = 'LOAD' THEN value ELSE 0 END) loadavg,
SUM(CASE WHEN stat_name = 'NUM_CPUS' THEN value ELSE 0 END) cpu
FROM dba_hist_osstat
WHERE stat_name IN
('RSRC_MGR_CPU_WAIT_TIME','LOAD','NUM_CPUS')
and dbid = rec_dbid.dbid
and snap_id > rec_dbid.min_snap
GROUP BY
instance_number,
dbid,
snap_id
)
select dbid, instance_number, logdate, logmonth, loghour, max(cpu) cpu, avg(loadavg) loadavg, avg(rsrcmgrpct) rsrcmgrpct from (
select a.dbid, a.instance_number, TO_CHAR(a.begin_interval_time,'MM/DD/YY HH24:MI:SS') tm,
to_char(a.begin_interval_time, 'yyyy-mm-dd') as logdate,
to_char(a.begin_interval_time, 'mm') as logmonth,
to_char(a.begin_interval_time, 'hh24') as loghour,
lag(b.snap_id) over(partition by b.instance_number,b.dbid order by b.snap_id) as snap_id,
b.cpu AS cpu,
round(b.loadavg,2) AS loadavg,
round((( b.rsrcmgr-lag(b.rsrcmgr) over (partition by b.instance_number,b.dbid order by b.snap_id) ) / 100) / (((CAST(a.end_interval_time AS DATE) - CAST(a.begin_interval_time AS DATE)) * 86400)*b.cpu)*100,2) as rsrcmgrpct
from dba_hist_snapshot a, cpuwl b
where a.dbid = b.dbid
and a.instance_number = b.instance_number
and a.snap_id = b.snap_id)
group by dbid, instance_number, logdate, logmonth, loghour
)
unpivot (value for metric_name in (cpu as 'NUM_CPUS', loadavg as 'LOAD', rsrcmgrpct as 'RSRC_MGR_CPU_WAIT_TIME_PCT'))
union all
select /*+ PARALLEL(8) */ dbid, instance_number, logdate, logmonth, loghour, metric_name, avg(value) as avg_hourly from (
select a.dbid, a.instance_number, TO_CHAR(a.begin_interval_time,'MM/DD/YY HH24:MI:SS') tm,
to_char(a.begin_interval_time, 'yyyy-mm-dd') as logdate,
to_char(a.begin_interval_time, 'mm') as logmonth,
to_char(a.begin_interval_time, 'hh24') as loghour,
lag(b.snap_id) over(partition by b.metric_name,b.instance_number,b.dbid order by b.snap_id) as snap_id,
b.metric_name,
case when b.metric_name = 'Host CPU Utilization (%)' then
case when b.average < 50 then (b.average*1.7)
else (85+(b.average-50)*0.3)
end
when b.metric_name = 'Average Active Sessions' then b.average
when b.metric_name = 'Current OS Load' then b.average
end as value
from dba_hist_snapshot a, DBA_HIST_SYSMETRIC_SUMMARY b
where a.dbid = rec_dbid.dbid
and a.snap_id > rec_dbid.min_snap
and a.dbid = b.dbid
and a.instance_number = b.instance_number
and a.snap_id = b.snap_id
and b.metric_name in ('Host CPU Utilization (%)','Average Active Sessions','Current OS Load')
) group by dbid, instance_number, logdate, logmonth, loghour, metric_name
order by 3,4,5,6 asc;
```
}}}
<<<
https://asktom.oracle.com/pls/apex/asktom.search?tag=named-not-null-constraint
https://dba.stackexchange.com/questions/19484/oracle-how-to-create-a-not-null-column-in-a-view
https://stackoverflow.com/questions/11097839/how-to-create-a-not-null-column-in-a-view
It's the unfortunate side effect of UNION ALLs in Oracle... complex views lose the NOT NULL flag even if the underlying tables have it (the SQL*Plus session further below creates 2 regular tables, not even external tables, that both have a NOT NULL column).
If I create a view on a single table only, without UNION ALL, the NOT NULL constraint is propagated to the view too:
{{{
SQL> create or replace view v as select * from t1;
View created.
SQL> @desc v
Name Null? Type
------------------------------- -------- ----------------------------
1 A NOT NULL NUMBER(38)
}}}
the problem is on their "model" layer of their reports
with this behavior the cognos sees that the underlying "model" changed and so the reports are now failing
so if they can workaround this behavior on the cognos side that would be good
ah, gotcha. so Cognos sees that the "fingerprint" of the schema/table structure has changed and doesn't let existing reports run against the "changed" tables?
so would it be possible to just "recompile" the cognos model or queries?
or are they Cognos's "internal" tables ... not just some other tables that cognos happens to query?
yes, i don't know if there's a recompile or they need to regenerate the framework to workaround this
we can gather more info, screenshots on how this is affecting them
we can focus the troubleshooting on one table/view then let's see
yes, this i suspect is cognos internal metadata that points to the real oracle tables/views
i know in OBIEE you can override the model with a custom query
but i think they are looking for something with minimal change
As we can't make the view columns NOT NULL (even external tables only started supporting not null constraints starting from Oracle 12.2), we need an alternative approach.
I'm not a Cognos expert, but can we ask from a friendly Cognos guy if there's a way to "regenerate the queries" or "sync the model with the schema" in Cognos? I googled, found this, but not sure if this is what we'd want: http://www-01.ibm.com/support/docview.wss?uid=swg21340868
Not proposing them to manually modify all queries one by one, but whether there's a cognos Sync tool ... or perhaps some Cognos metadata export/import would generate correct queries against the "new" data model.
we can check with the guy we are working with about the possible options
If Cognos only checked constraints using SQL queries against `dba_tab_columns` or `all_tab_columns`, we could in theory create alternative ~dba_constraints~ views (in the cognos user schema) that would show `NOT NULL` in appropriate locations even for views ... But this would fall apart if Cognos actually describes a table/view and checks the nullability from there. Somewhat hacky too, but we wouldn't replace anything in the SYS schema of course, just a local `all_tab_columns` view that overrides the public synonym view that only the cognos user would see (assuming that Cognos doesn't use `SYS.` prefix).
Still worth checking if there's a Cognos app level "recompile" or "resync" option first! (edited)
(ignore the "dba_constraints" word above)
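A minimal sketch of that local-override idea (hypothetical: the COGNOS schema name and the APP.V1.COL1 target are made up, and the column list is trimmed; a real override would need every column Cognos actually selects):
{{{
-- local ALL_TAB_COLUMNS in the cognos user's schema; Oracle name resolution
-- picks it over the public synonym, so only this user sees the doctored flags
create view cognos.all_tab_columns as
select owner, table_name, column_name, data_type, data_length,
       data_precision, data_scale, column_id,
       case when owner = 'APP' and table_name = 'V1' and column_name = 'COL1'
            then 'N'   -- made-up example: report this view column as NOT NULL
            else nullable
       end as nullable
from sys.all_tab_columns;
}}}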
<<<
{{{
-- the trick: add a NOT NULL virtual column.. but as the session below shows, this is not going to work to trick the model once UNION ALL is involved
alter table t2 add col2nn as (CAST(col1 AS number)) NOT NULL;
create view v5 as select col2nn from t2;
create view v6 as select col1 from t2;
create view v7 as select col1 from t1 union all select col2nn from t2;
create view v8 as select col1 from t1 union select col2nn from t2;
}}}
{{{
$ s1
SQL*Plus: Release 12.1.0.2.0 Production on Mon Aug 27 17:18:43 2018
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
17:18:43 SYS@cdb1>
17:18:43 SYS@cdb1>
17:18:43 SYS@cdb1>
17:18:43 SYS@cdb1>
17:18:44 SYS@cdb1> create table t1 col1 not null;
create table t1 col1 not null
*
ERROR at line 1:
ORA-00922: missing or invalid option
17:18:58 SYS@cdb1> create table t1 (col1 not null);
create table t1 (col1 not null)
*
ERROR at line 1:
ORA-02263: need to specify the datatype for this column
17:19:12 SYS@cdb1> create table t1 (col1 number not null);
Table created.
17:19:26 SYS@cdb1>
17:19:26 SYS@cdb1> create table t2 (col1 number);
Table created.
17:19:48 SYS@cdb1> desc t1
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
COL1 NOT NULL NUMBER
17:19:50 SYS@cdb1> desc t2
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
COL1 NUMBER
17:19:53 SYS@cdb1>
17:19:56 SYS@cdb1> create view v1 as select col1 from t1 union all select col1 from t2;
View created.
17:20:18 SYS@cdb1> desc v1
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
COL1 NUMBER
17:20:21 SYS@cdb1> create view v2 as select col1 from t1 union select col1 from t2;
View created.
17:20:43 SYS@cdb1> desc v2
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
COL1 NUMBER
17:20:46 SYS@cdb1> create table t3 (col 1 number not null);
create table t3 (col 1 number not null)
*
ERROR at line 1:
ORA-00902: invalid datatype
17:21:11 SYS@cdb1> create table t3 (col1 number not null);
Table created.
17:21:26 SYS@cdb1> create view v3 as select col1 from t1 union all from select col1 from t3;
create view v3 as select col1 from t1 union all from select col1 from t3
*
ERROR at line 1:
ORA-00928: missing SELECT keyword
17:21:56 SYS@cdb1> create view v3 as select col1 from t1 union all select col1 from t3;
View created.
17:22:23 SYS@cdb1> desc v3
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
COL1 NUMBER
17:22:29 SYS@cdb1> create view v3 as select col1 from t1 union select col1 from t3;
create view v3 as select col1 from t1 union select col1 from t3
*
ERROR at line 1:
ORA-00955: name is already used by an existing object
17:22:41 SYS@cdb1> create view v4 as select col1 from t1 union select col1 from t3;
View created.
17:22:57 SYS@cdb1> desc v4
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
COL1 NUMBER
17:22:59 SYS@cdb1> alter table t2 add col2nn as (CAST("col1" AS number)) NOT NULL;
alter table t2 add col2nn as (CAST("col1" AS number)) NOT NULL
*
ERROR at line 1:
ORA-00904: "col1": invalid identifier
17:29:56 SYS@cdb1> alter table t2 add col2nn as (CAST(col1 AS number)) NOT NULL;
Table altered.
17:30:13 SYS@cdb1> create view v5 as select col2nn from t2;
View created.
17:30:20 SYS@cdb1> desc v5
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
COL2NN NOT NULL NUMBER
17:30:24 SYS@cdb1> create view v6 as select col1 from t2;
View created.
17:30:40 SYS@cdb1> desc v6
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
COL1 NUMBER
17:30:42 SYS@cdb1> create view v7 as select col1 from t1 union all select col2nn from t2;
View created.
17:31:21 SYS@cdb1> desc v7
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
COL1 NUMBER
17:31:27 SYS@cdb1>
17:31:28 SYS@cdb1> create view v8 as select col1 from t1 union all select col2nn from t2;
View created.
17:32:05 SYS@cdb1> desc v8
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
COL1 NUMBER
17:32:09 SYS@cdb1> drop view v8;
View dropped.
17:32:41 SYS@cdb1> create view v8 as select col1 from t1 union select col2nn from t2;
View created.
17:32:41 SYS@cdb1> desc v8
Name Null? Type
----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
COL1 NUMBER
}}}
https://stackoverflow.com/questions/11097839/how-to-create-a-not-null-column-in-a-view
https://dba.stackexchange.com/questions/19484/oracle-how-to-create-a-not-null-column-in-a-view
http://techtabs.blogspot.com/2010/01/mount-ntfs-partition-in-rhel-54.html
http://wiki.centos.org/TipsAndTricks/NTFS?highlight=(ntfs)
http://wiki.centos.org/AdditionalResources/Repositories/RPMForge
http://techspotting.org/how-to-mount-ntfs-drive-on-centos-5-4/
https://oracle-base.com/articles/misc/null-related-functions
<<<
NVL
DECODE
NVL2
COALESCE
NULLIF
LNNVL
NANVL
SYS_OP_MAP_NONNULL
<<<
{{{
select 1/NULL from dual; -- returns null
select 1/0 from dual; -- error "divisor is equal to zero"
select 1/nvl(null,1) from dual; -- if not null return 1st , if null return 1
select 1/nvl(0,1) from dual; -- errors if zero
select 1/nullif(nvl( nullif(21,0) ,1),0) from dual;
--usual/default behavior:
-- 0 -> null
-- 1 -> 1
-- null -> 1
--must be:
-- 0 -> 1
-- 1 -> 1
-- null -> 1
SELECT NULLIF(0,0) FROM DUAL;
drop table krltest purge;
create table krltest as select 0 as x from dual;
select x from krltest;
update krltest set x = 1;
update krltest set x = 2;
update krltest set x = .5;
update krltest set x = NULL;
select 1/nullif(nvl( nullif(x,0) ,1),0) from krltest;
EOM_VAR%_CLASS = TO_CHAR(round (( nvl(EOM_need.INITAL_NEED_UNITS,0) - nvl(f.ALLOCATED_QTY_CLASS,0) ) /
(nullif(nvl( nullif(EOM_UNIT.UNIT_TARGET,0) ,1),0))
,5))
--other cases
round (decode ((e.total_waits - nvl(b.total_waits, 0)), 0, to_number(NULL),
((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000) / (e.total_waits - nvl(b.total_waits,0))), 2) avgwt,
((round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)) /
NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt, -- THIS IS EVENT (sec) / DB TIME (sec)
}}}
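SYS_OP_MAP_NONNULL, the last one in that list, is undocumented but useful for null-safe equality; a quick sketch:
{{{
-- plain equality treats NULL = NULL as unknown, so no rows survive
select count(*) from dual where cast(null as number) = cast(null as number);  -- 0
-- sys_op_map_nonnull maps NULL to a comparable value, so NULL "equals" NULL
select count(*) from dual where sys_op_map_nonnull(null) = sys_op_map_nonnull(null);  -- 1
}}}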
<<<
http://manchev.org/2014/03/processor-group-integration-in-oracle-database-12c/
https://bdrouvot.wordpress.com/2015/01/07/measure-the-impact-of-remote-versus-local-numa-node-access-thanks-to-processor_group_name/
> measures remote vs local NUMA memory access; remote access is actually slower by 2x
https://bdrouvot.wordpress.com/2015/01/15/cpu-binding-processor_group_name-vs-instance-caging-comparison-during-lio-pressure/
> the runs with CPU binding are actually faster because memory is mapped to a local node instead of interleaved access, 2x faster
> with memory on NUMA node 0 and cpus 1 to 6 in use, the SGA allocation may not be “optimal” (if the cpus are not on the same NUMA node where the SGA is currently allocated)
> also bind only is faster than bind + cage by 10%
https://bdrouvot.wordpress.com/2015/01/19/modify-on-the-fly-the-cgroup-properties-linked-to-the-processor_group_name-parameter/
> cpuset.cpus value is dynamic and can be changed on the fly
> cpuset.mems value or changing numa node, needs instance restart
https://bdrouvot.wordpress.com/2015/02/05/processor_group_name-and-sga-distribution-across-numa-nodes/
> there may be a need to link the processor_group_name parameter to more than one NUMA node because:
> the database needs more memory than one NUMA node can offer.
> the database needs more cpus than one NUMA node can offer.
> with more than one numa node for memory, equally spread memory performs better (see the sketch after this note)
<<<
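A minimal sketch of wiring the parameter up, assuming a cgroup named oracle_pg was already created on the OS side (cpuset controller in /etc/cgconfig.conf) as the posts above describe:
{{{
-- bind the instance to the pre-created cgroup; takes effect on restart
alter system set processor_group_name='oracle_pg' scope=spfile sid='*';
shutdown immediate
startup
-- verify
show parameter processor_group_name
}}}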
! other references
http://manchev.org/2014/03/processor-group-integration-in-oracle-database-12c/
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-cpuset.html
https://ycolin.wordpress.com/2015/01/18/numa-interleave-memory/
processor_group_name_SGA_distribution https://www.evernote.com/shard/s501/sh/30c3af32-45e9-4e96-abc2-11cf777bb6a3/dc7e211a240359a7
https://academy.vertabelo.com/blog/understanding-numerical-data-types-sql/
https://docs.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql
https://stackoverflow.com/questions/16629759/how-to-store-decimal-in-mysql
https://docs.cloud.oracle.com/iaas/Content/GSG/Concepts/baremetalintro.htm
OCI whitepapers https://cloud.oracle.com/iaas/technical-resources
Blocked IP Addresses For VPNaaS Tunnelling (Doc ID 2443083.1)
! cloud models
<<<
IaaS - Oracle Database Virtual Image, Oracle Database Exadata cloud service, AWS EC2 instance, Azure Virtual machine
PaaS - Oracle Database as a Service
SaaS - Oracle Schema as a Service, Amazon RDS
<<<
! issues
https://www.linkedin.com/pulse/problem-oracle-cloud-ahmed-azmi/
https://www.linkedin.com/pulse/oracle-best-company-selling-articles-mohamed-sedik-mba-pmp/
! references architecture
Best Practices for Deploying High Availability Architecture on Oracle Cloud Infrastructure https://cloud.oracle.com/iaas/whitepapers/best-practices-deploying-ha-architecture-oci.pdf
! oracle file storage service (NFS-like; the AWS analog is EFS rather than S3)
http://storageconference.us/2018/Presentations/Beauvais.pdf
! Virtual Cloud Network Overview and Deployment Guide
https://cloud.oracle.com/iaas/whitepapers/vcn-deployment-guide.pdf
.
http://oci360.dbarj.com.br/
https://www.google.com/search?q=oracle+cloud+at+customer+licensing&oq=oracle+cloud+at+customer+li&aqs=chrome.1.69i57j0l7.6062j0j1&sourceid=chrome&ie=UTF-8
<<<
FastConnect (oci) to ExpressRoute (azure) https://www.youtube.com/watch?v=Lb_zuO_CQd8
<<<
https://medium.com/@oci_ben/latency-testing-the-microsoft-azure-oracle-cloud-infrastructure-direct-connectivity-in-london-8e1d59211973
<<<
Screen Captures of Tests
OCI to Azure: Ping || Traceroute
Azure to OCI: Ping || Traceroute || TNSping
<<<
https://cloud-blogs.com/index.php/oracle-cloud/oracle-cloud-iaas/oracle-cloud-infrastructure-oci-blogs/measuring-latency-and-traceroute-details-with-oracle-edge-services/
https://blog.maxjahn.at/2020/02/azure-oracle-cloud-oci-interconnect-network-latency-shootout/
https://www.google.com/search?q=oracle+oci+on+prem+to+cloud+bandwidth+cost&oq=oracle+oci+on+prem+to+cloud+bandwidth+cost&aqs=chrome..69i57j69i64l3.11710j1j1&sourceid=chrome&ie=UTF-8
Oracle Networking Cloud Pricing
https://www.oracle.com/cloud/networking/networking-pricing.html
{{{
- check VNIC charts during slowness
step by step:
> navigate to the compute instance home page > Attached VNICs > click on the VNIC name
> this will show the graphs
> validate the "Packets Dropped" graph
}}}
https://www.oracle.com/cloud/oci-vs-aws/
https://www.oracle.com/a/ocom/docs/cloud/oci-vs-aws.pdf
https://www.facebook.com/charlton.h.lopez/posts/10216627862754561
! azure
https://blogs.oracle.com/cloud-infrastructure/overview-of-the-interconnect-between-oracle-and-microsoft
https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/oracle/oracle-oci-overview
Overview of Oracle Applications and solutions on Azure
https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/oracle/oracle-overview?fbclid=IwAR2W-MWpNAzuTbsUi_sNFRXsGpJWsVqBojQZx8LrbRlgxIXM-sDTgtJYelg
https://www.google.com/search?q=hybrid+cloud+azure+and+oracle+database&sxsrf=ALeKk01bmg2bgOwQeNrwZHnLYMbQW1-JBg:1585551072877&source=lnms&tbm=isch&sa=X&ved=2ahUKEwidiOODzsHoAhU_hHIEHXtkCKoQ_AUoAnoECA0QBA&biw=1701&bih=978
! networking
https://docs.cloud.oracle.com/en-us/iaas/Content/Network/Concepts/fastconnectoverview.htm
https://docs.microsoft.com/en-us/azure/expressroute/expressroute-introduction
https://www.oracle.com/cloud/oci-vs-google-cloud/
https://www.oracle.com/a/ocom/docs/cloud/oci-vs-gcp.pdf
https://www.oracle.com/autonomous-database/autonomous-data-warehouse/oracle-vs-snowflake/
https://www.oracle.com/a/ocom/docs/database/oracle-adw-vs-snowflake-infographic.pdf
{{{
# oda
processor : 23
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU X5675 @ 3.07GHz
stepping : 2
cpu MHz : 3059.102
cache size : 12288 KB
physical id : 1
siblings : 12
core id : 10
cpu cores : 6
apicid : 53
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx pdpe1gb rdtscp lm constant_tsc ida nonstop_tsc arat pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips : 6118.00
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]
OraPub CPU speed statistic is 277.287
Other statistics: stdev=16.965 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=3925)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089396 0 394
1085106 0 419
1085106 0 353
1085106 0 395
1085106 0 409
1085106 0 383
1085106 0 396
7 rows selected.
Linux patty 2.6.18-194.32.1.0.1.el5 #1 SMP Tue Jan 4 16:26:54 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
patty DEMO1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 257.245
Other statistics: stdev=10.129 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=4223.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089396 0 436
1085106 0 430
1085106 0 431
1085106 0 430
1085106 0 430
1085106 0 422
1085106 0 391
7 rows selected.
Linux patty 2.6.18-194.32.1.0.1.el5 #1 SMP Tue Jan 4 16:26:54 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
patty DEMO1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 278.598
Other statistics: stdev=18.158 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=3908.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089396 0 410
1085106 0 418
1085106 0 415
1085106 0 398
1085106 0 382
1085106 0 352
1085106 0 380
7 rows selected.
Linux patty 2.6.18-194.32.1.0.1.el5 #1 SMP Tue Jan 4 16:26:54 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
patty DEMO1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 261.313
Other statistics: stdev=15.272 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=4163.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089396 0 413
1085106 0 371
1085106 0 426
1085106 0 426
1085106 0 425
1085106 0 425
1085106 0 425
7 rows selected.
Linux patty 2.6.18-194.32.1.0.1.el5 #1 SMP Tue Jan 4 16:26:54 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
patty DEMO1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 271.565
Other statistics: stdev=11.455 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=4001.667)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089396 0 428
1085106 0 384
1085106 0 403
1085106 0 378
1085106 0 403
1085106 0 409
1085106 0 424
7 rows selected.
Linux patty 2.6.18-194.32.1.0.1.el5 #1 SMP Tue Jan 4 16:26:54 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
patty DEMO1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
}}}
https://www.oraclenext.com/2017/04/data-masking-using-oem-cloud-control.html
oem alert notifications
The first thing I would check is to see if you can send emails from the OMS server itself. After that, check to see if you can send them using utl_mail. I have included a test here. If the group gets the email, then the problem may be with the OMS settings. Hope this helps.
{{{
set serveroutput on
-- note: UTL_MAIL must be installed and the smtp_out_server parameter set for this to work
BEGIN
UTL_MAIL.SEND(SENDER => 'me',
RECIPIENTS => 'somebody@me.com',
CC => '',
BCC => '',
SUBJECT => 'Testing',
MESSAGE => 'Email test');
END;
/
}}}
<<showtoc>>
! install packages
{{{
yum grouplist
yum groupinfo <group_name>
yum groupinstall -y "KDE Desktop"
yum groupinstall -y "Desktop Debugging and Performance Tools"
yum groupinstall -y "Additional Development"
yum groupinstall -y "Desktop Platform Development"
yum groupinstall -y "Development tools"
yum groupinstall -y "Server Platform Development"
yum groupinstall -y "Debugging Tools"
yum groupinstall -y "System administration tools"
yum groupinstall -y "Networking Tools"
yum groupinstall -y "Office Suite and Productivity"
yum groupinstall -y "Console internet tools"
yum groupinstall -y "Hardware monitoring utilities"
yum groupinstall -y "Large Systems Performance"
yum groupinstall -y "Ruby Support"
yum groupinstall -y "SNMP Support"
yum groupinstall -y "System Management"
yum groupinstall -y "X Window System"
yum groupinstall -y "Legacy X Window System compatibility"
yum groupinstall -y "Legacy UNIX compatibility"
yum install -y xfsprogs
yum install -y tigervnc-server
yum install -y firefox.x86_64
yum install -y sysstat
}}}
! format and mount xfs block storage
http://ask.xmodulo.com/create-mount-xfs-file-system-linux.html
2.1.10.1 What Default Ports Are Used?
http://docs.oracle.com/cd/E24628_01/install.121/e24089/getstrtd_things_to_know.htm#BACEEJEH
http://www.oracle-base.com/articles/11g/oracle-db-11gr2-installation-on-oracle-linux-6.php
http://www.oracle-base.com/articles/12c/cloud-control-12cr3-installation-on-oracle-linux-5-and-6.php
http://www.oracle-base.com/articles/12c/cloud-control-12cr2-post-installation-setup-tasks.php
http://www.oracle-base.com/articles/12c/oracle-db-12cr1-installation-on-oracle-linux-6.php
http://www.oracle.com/pls/db121/portal.portal_db?selected=11&frame=
http://docs.oracle.com/cd/E16655_01/install.121/e17718/toc.htm#BABGGEBA
EM 12c R4: Checklist for Upgrading Enterprise Manager Cloud Control from Version 12.1.0.2/3 to 12.1.0.4 (Doc ID 1682332.1)
http://docs.oracle.com/cd/E24628_01/upgrade.121/e22625/upgrading_12101_PS1_oms.htm#EMUPG165
http://oracle-base.com/articles/12c/cloud-control-12cr3-to-12cr4-upgrade.php
http://www.gokhanatil.com/2014/06/upgrading-enterprise-manager-cloud-control-12-1-0-3-to-12-1-0-4.html
''patch''
http://docs.oracle.com/cd/E24628_01/upgrade.121/e22625/upgrading_12101_PS1_gtgstrtd.htm#EMUPG161
''error''
http://www.identityincloud.com/2014/10/oracle-enterprise-manager-cloud-control.html
Personally I don't look at the CPU/mem usage of the cell servers, and I haven't come across a case where I had to care about the CPU usage of the storage cells. I look at them as intelligent JBODs.
The more you push the SAME (short stroked) architecture, the more you'll reach the capacity limit and the more the latency will increase. And the storage cells run a highly threaded Java architecture.
So any workload that drives the storage cells to 100% CPU is still ultimately caused by one of the databases running above them, and in particular by a SQL or set of SQLs.
And the storage cells behave in such a way that they squeeze out whatever they can, so cell CPU is probably not a good KPI (key performance indicator). When you start correlating, 100% utilization could be fine for one SQL and bad for another, and 10% utilization likewise.
Monitoring the CPU of the storage cells is much the same as monitoring the Storage Processors of an EMC VNX array: any performance issue will manifest as latency at the session, datafile, and storage level anyway.
For me it's just not a definitive KPI.
But then, a case for monitoring CPU would be the reverse offload
<<<
CEM cell server stats metric: create a SQL to track reverse offload from AWR, cellsrvstat, and the cell metrics
this could affect capacity decisions, because if you are doing a lot of reverse offload then your number of storage cells might not be enough
smart scanning too much, or doing too much DPR and not smart scanning... but this could also be caused by chained rows. See below:
@day3 28:55
*** these statistics say that all blocks were processed in the storage cells and not sent back to the compute nodes to be processed in block IO mode (i.e. no fallback to block IO mode)
cell blocks processed by cache layer - 10 blocks
cell blocks processed by txn layer - 9 blocks
cell blocks processed by data/index layer - 8 blocks (a lower number here can be caused by row chaining and migration)
@47:00 HOW TO READ THIS?
the "cell blocks processed by data/index layer" has to roughly match "physical reads direct"... which means you processed all the blocks you read
also, divide "cell physical IO bytes eligible for predicate offload" by the block size to get the expected data layer number
the cache and txn layer counts can be a little higher than the data layer because full send/receive buffers (network buffering issues) cause re-reads and double counting
in summary... the cache, txn, and data layer counts should be close to the offloadable number
if you have chained rows, the data layer will be lower than cache and txn (see the query sketch after this block)
<<<
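A sketch of pulling these counters so the layers can be compared (use deltas over a time window in practice; on some versions the data and index layers are separate statistics):
{{{
-- smart scan layer counters vs the offload-eligible block count
select s.name, s.value
from   v$sysstat s
where  s.name in (
  'cell physical IO bytes eligible for predicate offload',
  'physical reads direct',
  'cell blocks processed by cache layer',
  'cell blocks processed by txn layer',
  'cell blocks processed by data layer',
  'cell blocks processed by index layer');

-- expected data layer block count = eligible bytes / block size
select s.value / to_number(p.value) expected_data_layer_blocks
from   v$sysstat s, v$parameter p
where  s.name = 'cell physical IO bytes eligible for predicate offload'
and    p.name = 'db_block_size';
}}}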
https://www.codementor.io
{{{
[root@db1 onecommand]# ./deploy112.sh -i -l
The steps in order are...
Step 0 = ValidateThisNodeSetup
Step 1 = SetupSSHForRoot
Step 2 = ValidateAllNodes
Step 3 = UnzipFiles
Step 4 = UpdateEtcHosts
Step 5 = CreateCellipnitora
Step 6 = ValidateHW
Step 7 = ValidateIB
Step 8 = ValidateCell
Step 9 = PingRdsCheck
Step 10 = RunCalibrate
Step 11 = ValidateTimeDate
Step 12 = UpdateConfig
Step 13 = CreateUserAccounts
Step 14 = SetupSSHForUsers
Step 15 = CreateOraHomes
Step 16 = CreateGridDisks
Step 17 = InstallGridSoftware
Step 18 = RunGridRootScripts
Step 19 = Install112DBSoftware
Step 20 = Create112Listener
Step 21 = RunAsmCa
Step 22 = UnlockGIHome
Step 23 = ApplyBP
Step 24 = RelinkRDS
Step 25 = LockUpGI
Step 26 = SetupCellEmailAlerts
Step 27 = RunDbca
Step 28 = SetupEMDbControl
Step 29 = ApplySecurityFixes
Step 30 = ResecureMachine
}}}
detailed output http://www.evernote.com/shard/s48/sh/f09c8d94-4f5e-4a67-97cf-10bf636e7a60/9fdb83a7b5448c5b171e87e4efdc7cf6
from this URL http://programmingprogress.blogspot.com/2014/11/learn-programming-languages.html
<<<
Most of these are introductory tutorials. It would be a good challenge to be able to finish most or all of these.
NOTES:
- Codecademy covers most of the web-related languages and keeps track of progress
- The learn*.org group of sites (e.g. http://www.learn-c.org) also covers a lot of languages
- InteractivePython maps to several books and is effectively an interactive textbook
- PythonTutor and its derivatives in other languages (Java, Ruby, Javascript) provide a nice step-by-step visualization of the programs being run
Bash
http://www.learnshell.org
https://www.hackerrank.com/domains/shell/bash
C
http://www.learn-c.org
C#
http://www.learncs.org
Clojure
http://codecombat.com
CoffeeScript
http://codecombat.com
CSS
http://www.codecademy.com/tracks/web
https://www.codeschool.com
Go
http://tour.golang.org
HTML
http://www.codecademy.com/tracks/web
https://www.codeschool.com
Io
http://codecombat.com
Java
http://www.learnjavaonline.org
Javascript
http://www.codecademy.com/tracks/javascript
http://www.learn-js.org
http://codecombat.com
https://www.codeschool.com
jQuery
http://www.codecademy.com/tracks/jquery
Lua
http://codecombat.com
PHP
http://www.codecademy.com/tracks/php
http://www.learn-php.org
Python
http://www.codecademy.com/tracks/python
http://www.learnpython.org
http://www.pyschools.com
http://interactivepython.org
http://pythonmonk.com
https://www.hackerrank.com/domains/miscellaneous/python-tutorials
http://www.pythonchallenge.com
http://codecombat.com
http://www.checkio.org
http://www.trypython.org
R
http://tryr.codeschool.com
https://www.datacamp.com
Ruby
http://www.codecademy.com/tracks/ruby
https://rubymonk.com
https://www.codeschool.com
<<<
http://demo.gethue.com/
http://blog.cloudera.com/blog/2013/04/demo-analyzing-data-with-hue-and-hive/
obs screen sharing
https://www.google.com/search?q=obs+screen+sharing&oq=obs+screen+sharing&aqs=chrome..69i57.3078j1j4&sourceid=chrome&ie=UTF-8
<<showtoc>>
! openldap
! freeipa
https://www.freeipa.org/page/Releases/4.6.4
https://www.freeipa.org/page/Downloads
https://www.youtube.com/results?search_query=https%3A%2F%2Fwww.freeipa.org%2Fpage%2FDownloads
Configuring your own LDAP server using FreeIPA (RHCSA) - Recording Live Session https://www.youtube.com/watch?v=8wc4MO3LXQI
Cloudera with freeipa ldap server | manoj sharma https://www.youtube.com/watch?v=oSGu7GQlOP0
! open / close
{{{
Connect as SYS
alter pluggable database <PDB Name> close instances =all;
alter pluggable database <PDB Name> open read write instances =all;
select * from PDB_ALERTS;
select * from PDB_PLUG_IN_VIOLATIONS;
}}}
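Related: to have a PDB come back to its open mode automatically after a CDB restart (12.1.0.2 and later), save its state; a minimal sketch:
{{{
alter pluggable database <PDB Name> save state;
select con_name, instance_name, state from dba_pdb_saved_states;
}}}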
! close and drop
{{{
15:43:53 SYS@orcl> alter pluggable database pdb3 close;
Pluggable database altered.
15:44:01 SYS@orcl> drop pluggable database pdb3 including datafiles;
Pluggable database dropped.
15:44:20 SYS@orcl> select * from v$pdbs;
CON_ID DBID CON_UID GUID NAME OPEN_MODE RES OPEN_TIME CREATE_SCN TOTAL_SIZE
---------- ---------- ---------- -------------------------------- ------------------------------ ---------- --- --------------------------------------------------------------------------- ---------- ----------
2 4080030308 4080030308 F081641BB43F0F7DE045000000000001 PDB$SEED READ ONLY NO 23-DEC-14 02.43.56.324 PM 1720754 283115520
3 3345156736 3345156736 F0832BAF14721281E045000000000001 PDB1 READ WRITE NO 23-DEC-14 02.44.09.357 PM 2244252 291045376
4 3933321700 3933321700 0AE814AAEDB059CCE055000000000001 PDB2 MOUNTED 23-DEC-14 03.41.57.513 PM 6761680 0
15:44:37 SYS@orcl> select * from cdb_pdbs;
PDB_ID PDB_NAME DBID CON_UID GUID STATUS CREATION_SCN CON_ID
---------- -------------------------------------------------------------------------------------------------------------------------------- ---------- ---------- -------------------------------- ------------- ------------ ----------
4 PDB2 3933321700 3933321700 0AE814AAEDB059CCE055000000000001 UNPLUGGED 6761680 1
2 PDB$SEED 4080030308 4080030308 F081641BB43F0F7DE045000000000001 NORMAL 1720754 1
3 PDB1 3345156736 3345156736 F0832BAF14721281E045000000000001 NORMAL 2244252 1
}}}
Patching for Exadata: introducing oplan https://blogs.oracle.com/XPSONHA/entry/patching_for_exadata_introduci
you can have multiple opt_param hints in one SQL, see below:
{{{
SELECT /*+ OPT_PARAM('_unnest_subquery' 'false') OPT_PARAM('_gby_hash_aggregation_enabled' 'false') */ "T0"."C1" "C0", "T0"."C0" "C1"
FROM (SELECT DISTINCT
MIN ("PS_CBTA_DEAL_ACCTG"."DEAL_ID") OVER () "C0", 1 "C1"
FROM "SYSADM"."PS_CBTA_DEAL_ACCTG" "PS_CBTA_DEAL_ACCTG",
"SYSADM"."PS_CBTA_PROPERTY" "PS_CBTA_PROPERTY",
"SYSADM"."CBTA_DEAL_R_VW" "PS_CBTA_DEAL_VW"
WHERE "PS_CBTA_DEAL_ACCTG"."BUSINESS_UNIT" = 'MTA01'
AND "PS_CBTA_DEAL_ACCTG"."ACCOUNTING_DT" BETWEEN TIMESTAMP '2012-01-01 00:00:00.000000000'
AND TIMESTAMP '2012-03-31 00:00:00.000000000'
AND "PS_CBTA_DEAL_ACCTG"."ANALYSIS_TYPE" IN ('RVD', 'EXP')
AND "PS_CBTA_DEAL_ACCTG"."OFFICENAME" IN
('103201', '101103', '102102')
AND "PS_CBTA_DEAL_ACCTG"."IS_CONVERTED_DEAL" <> 'Y'
AND "PS_CBTA_DEAL_ACCTG"."RESOURCE_CATEGORY" NOT IN
('OB', 'RBT')
AND "PS_CBTA_DEAL_ACCTG"."RESOURCE_SUB_CAT" <> 'NM'
AND "PS_CBTA_PROPERTY"."STATE" <> 'None'
AND "PS_CBTA_PROPERTY"."SETID" = 'SHARE'
AND "PS_CBTA_DEAL_ACCTG"."TRANS_TYPE" IN
('COMM', 'FESH', 'TAX')
AND "PS_CBTA_DEAL_ACCTG"."DEAL_ID" =
"PS_CBTA_DEAL_VW"."DEAL_ID"
AND "PS_CBTA_DEAL_ACCTG"."BUSINESS_UNIT" =
"PS_CBTA_DEAL_VW"."BUSINESS_UNIT"
AND "PS_CBTA_PROPERTY"."CBTA_PROPERTY_ID" =
"PS_CBTA_DEAL_VW"."PROPERTY_ID") "T0"
ORDER BY "C0" ASC NULLS LAST, "C1" ASC NULLS LAST
/
}}}
then check it with
{{{
select * from table( dbms_xplan.display_cursor('3pzhffw74ubtu', null, 'ADVANCED +ALLSTATS LAST +MEMSTATS LAST') );
}}}
then check the outline data
{{{
Outline Data
-------------
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.2')
DB_VERSION('11.2.0.2')
OPT_PARAM('_unnest_subquery' 'false')
OPT_PARAM('_gby_hash_aggregation_enabled' 'false')
ALL_ROWS
OUTLINE_LEAF(@"SEL$32")
OUTLINE_LEAF(@"SEL$24")
OUTLINE_LEAF(@"SEL$22")
OUTLINE_LEAF(@"SEL$20")
OUTLINE_LEAF(@"SEL$12")
}}}
''other references''
http://www.askmaclean.com/opt_param-hint.html
Note:986618.1 Parameters useable by OPT_PARAM hint
http://iusoltsev.wordpress.com/profile/individual-sql-and-cbo/cbo-hints/
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1445406
! you can now do this with _fix_control
https://oracle-base.com/articles/misc/granular-control-of-optimizer-features-using-fix-control#opt_param-hint
{{{
The OPT_PARAM hint is used to set a specific initialization parameter during the execution of a single SQL statement. It can be used with the _FIX_CONTROL parameter if a session or system level setting is too extreme.
SELECT /*+ OPT_PARAM('_fix_control' '6377505:OFF') */ *
FROM ...;
Multiple entries are set using a space-separated list.
SELECT /*+ OPT_PARAM('_fix_control' '6377505:OFF 6006300:OFF') */ *
FROM ...;
}}}
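To check what a fix control is currently set to before overriding it with the hint, v$session_fix_control can be queried; a quick sketch:
{{{
select bugno, value, description
from   v$session_fix_control
where  session_id = sys_context('userenv','sid')
and    bugno in (6377505, 6006300);
}}}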
! some other opt_param options
{{{
hash_join_enabled
optimizer_dynamic_sampling
optimizer_features_enable
optimizer_index_caching
optimizer_index_cost_adj
optimizer_mode
optimizer_secure_view_merging
star_transformation_enabled
select /*+ opt_param('optimizer_mode','first_rows_10') */
select /*+ opt_param('_optimizer_cost_model','io') */
select /*+ opt_param('optimizer_index_cost_adj',20) */
select /*+ opt_param('optimizer_index_caching',20) */
select /*+ opt_param('optimizer_features_enable','11.2.0.4')*/
select /*+ OPT_PARAM('_always_semi_join' 'off')
OPT_PARAM('_b_tree_bitmap_plans' 'false')
OPT_PARAM('query_rewrite_enabled' 'false')
OPT_PARAM('_new_initial_join_orders' 'false')
OPT_PARAM('optimizer_dynamic_sampling' 1)
OPT_PARAM('optimizer_index_cost_adj' 1) */
SELECT /*+ FULL(RHNSERVER) OPT_PARAM('_optimizer_mjc_enabled' 'false') OPT_PARAM('_optimizer_cartesian_enabled' 'false')*/ COUNT (DISTINCT SC.SERVER_ID)
}}}
https://leanpub.com/how-query-engines-work
http://blogs.oracle.com/Support/entry/8_habits_of_highly_effective
-- ORA-600
153788.1 ora600/7445 lookup tool
Note 175982.1 ORA-600 Lookup Error Categories http://www.eygle.com/digest/2010/08/ora_00600_code_explain.html
390293.1 introduction to 600/7445 internal error analysis
22080.1 introduction to error message articles
18485.1 ora-600 internal error code arguments
154170.1 how to find the offending SQL from a trace file
146581.1 how to deal with ora600 errors
http://blogs.oracle.com/db/2010/06/ora-600_troubleshooting.html
Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]
-- KCRFR_UPDATE_NAB_2
ORA-00600 [KCRFR_UPDATE_NAB_2] [0X253B90498], [2] (Doc ID 7186347)
STARTUP FAILS WITH ORA-00600: [KCRFR_UPDATE_NAB_2] AFTER A CRASH (Doc ID 8754200)
ORA-600 [KCRFR_UPDATE_NAB_2] AFTER SHUTDOWN ABORT (Doc ID 7024222)
AFTER DATABASE CRASHED DOESN'T OPEN ORA-600 [KCRFR_UPDATE_NAB_2] (Doc ID 5692594)
INSTANCES CRASH WITH ORA-600 [KCRFR_UPDATE_NAB_2] AFTER DISK FAILURE (Doc ID 6655116)
DATABASE FAILS TO OPEN WITH ORA-600 [KCRFR_UPDATE_NAB_2] (Doc ID 5364658)
ORA-600 [KCRFR_UPDATE_NAB_2] ON STARTUP OPEN AFTER NODE CRASH IN A RAC (Doc ID 4213764)
<<<
Workarounds:
http://wenku.baidu.com/view/27a25fd276a20029bd642d26.html
1.5.3. A redo log problem causes ORA-600 [kcrfr_update_nab_2]
(1) Force the startup by setting _ALLOW_RESETLOGS_CORRUPTION = true
(2) If it still cannot start, rebuild the controlfile and force the startup again
(3) If it still cannot start, clear the affected redo log (ALTER DATABASE CLEAR LOGFILE GROUP x) to empty it and contain the damage, then force the startup
(4) It will generally start correctly at that point, or report an ORA-600 [4000] error (a sketch of this sequence follows after this block)
<<<
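A sketch of what that forced-open sequence looks like (last resort, under Oracle Support guidance only; _ALLOW_RESETLOGS_CORRUPTION is undocumented and can leave the database logically corrupt):
{{{
-- pfile: _allow_resetlogs_corruption = true
startup mount
alter database clear unarchived logfile group 1;   -- repeat for each affected group
alter database open resetlogs;
-- afterwards: export and rebuild the database, then remove the parameter
}}}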
-- RAC
Troubleshooting ORA - 12547 TNS LOST CONTACT
Doc ID: Note:555565.1
Libaio.So.1: Cannot Open Shared Object File and ORA-12547: TNS:Lost Contact
Doc ID: Note:394297.1
-- Event 10949
KEEP BUFFER POOL Does Not Work for Large Objects on 11g (Doc ID 1081553.1)
-- 10.2.0.4 patch set errors
Event 13740 Messages
Doc ID: Note:342505.1
"Sskgpgetexecname Failed To Get Name" Message Appears in Alert.log File
Doc ID: Note:604804.1
-- ORA-01555
When Using Export Utility, Received Ora-01555 Error
Doc ID: 726855.1
-- ORA-01033
How We Solved ORA-01033 When Standby and Primary Databases Exist on the Same Host
Doc ID: 433012.1
-- ORA-1031
Transport : Remote Archival to Standby Site Fails with ORA-01031
Doc ID: 353976.1
-- ORA-16191 -Primary log shipping client not logged on standby
Changing SYS password of PRIMARY database when STANDBY in place to avoid ORA-16191
Doc ID: 806703.1
-- 10.2.0.4 alert
WARNING:1 Oracle process running out of OS kernel I/O resources
Doc ID: 748607.1
-- STARTUP
DB startup fails, alert shows Oracle Instance Startup operation failed.
Doc ID: 467949.1
-- 3113
DIAGNOSING ORA-3113 ERRORS
Doc ID: 1020463.6
ORA-03113 on Unix - What Information to Collect
Doc ID: 17613.1
-- 3135
ORA-3135 with Recovery Catalog Creation Across the Network (Firewall included)
Doc ID: 805088.1
ALERT: Pro*C program hangs during connect (lwp_mutex_lock) on multi CPU 2.5.1
Doc ID: 70684.1
-- 12537, 12560, 00507
TNS listener Startup Fails with TNS-12537 TNS-12560 TNS-00507
Doc ID: 557855.1
-- 12537
Troubleshooting ORA-12537 / TNS-12537 TNS:Connection Closed [ID 555609.1]
UNIX: Listener Startup Fails with TNS-12537 TNS-12560 TNS-00507 [ID 557855.1]
http://forums.oracle.com/forums/thread.jspa?threadID=690379&start=0&tstart=0
http://it.toolbox.com/wiki/index.php/ORA-12537 <-- having multiple listeners
http://www.oradev.com/ORA-12537_TNS_connection_closed.jsp <-- different reasons
http://www.dba-oracle.com/t_ora_12537_tns_error.htm
http://www.dba-oracle.com/t_ora_12170_tns_connect_timeout.htm
http://www.dbmotive.com/oracle_error_codes.php?errcode=12170
ORA-12546 or ORA-12537 While Conecting To Database Via Listener (Doc ID 1050756.6)
Remote Connection Error with ORA-12537 TNS: Connection Closed (Doc ID 950975.1)
ORA-12537: TNS:connection closed When SDU Set to Value Greater Than Default (Doc ID 739792.1)
ORA-12537 "TNS: Connection Closed" using Server Manager (svrmgrl) (Doc ID 95798.1)
ORA-12537 WHEN CONNETING/STACK DUMPS WHEN CHECKING STATUS LISTENER (Doc ID 1036305.6)
ORA-12537 Could Happen if Listener and Database are Owned by Different OS User (Doc ID 1069517.1)
How to identify DB connection problems ? (Doc ID 797483.1)
SQL*Net (oracle Networking) Common Errors & Diagnostic Worksheet (Doc ID 45878.1)
Remote or SQL*Net Connections Fail With TNS-12537 Error (Doc ID 333158.1)
TNS-12537 on Connection, Athough Node Invited (Doc ID 215891.1)
TROUBLESHOOTING GUIDE TNS-12518 TNS listener could not hand off client connection (Doc ID 550859.1)
TECH: SQL*Net V2 on Unix - Example Files to get Started Quickly (Doc ID 13984.1)
http://hi.baidu.com/snoworld/blog/item/d339b3354270558aa61e12f7.html <--- hmmm interesting
-- relink
Relink All /Usr/Bin/Make: Not Found
Doc ID: 335626.1
-- 17147, 17182
ORA-00600: [17147] Or ORA-600: [17182] When Running Query With NLS_SORT=BINARY
Doc ID: 382909.1
ORA-600 [17182], ORA-600 [17114] IN ALERT.LOG
Doc ID: 1035099.6
ORA-00600 [17182] Binding Object Datatypes When Patch 6085625 is Installed
Doc ID: 468171.1
-- ORACLE 8.0.6
Top Internal Errors - Oracle Server Release 8.0.6
Doc ID: 146694.1
-- ORACLE 8.1.7
Top Internal Errors - Oracle Server Release 8.1.7
Doc ID: 146692.1
-- ORA-07279
ORA-07279: spcre: semget error, unable to get first semaphore set.
http://www.orafaq.com/maillist/oracle-l/2000/08/08/2070.htm
SUMMARY: Semaphores/Oracle related one
http://www.sunmanagers.org/pipermail/summaries/2002-March/001106.html
INFO: ORA-7279 if Startup More Than One Instance of Oracle on SCO
Doc ID: 34535.1
-- ORA-27123
SOLARIS: ORA-27123: WHEN STARTING A V8.0.4.X DATABASE
Doc ID: 1042494.6
-- ORA-1159 CREATE CONTROLFILE ERROR
ORA-1159 STARTING DATABASE
Doc ID: 1066832.6
OERR: ORA 1159 file is not from same database as previous files - wrong
Doc ID: 18733.1
How to deal with common issues on the Database Tier when cloning with Rapid clone
Doc ID: 807597.1
Create Controlfile Failure With ORA-1503, ORA-1189, ORA-1110
Doc ID: 121801.1
-- BINARY BUG
Ld: Fatal: File /Oracle/Product/10.2.0/Lib32/Libclntsh.So: Unknown File Type
Doc ID: 467527.1
-- ORA-29702
How to solve ORA-29702 when starting an OPS instance on Windows NT
Doc ID: 158653.1
ORA-29702 During Automatic Shutdown of Database using ASM
Doc ID: 429603.1
ASM Crashes When Rebooting a Server With ORA-29702 Error.
Doc ID: 467354.1
ASM Instance Crashing with ORA-29702
Doc ID: 445023.1
OERR: ORA-12170 TNS:Connect timeout occurred
Doc ID: 194295.1
-- ORA-01102: cannot mount database in EXCLUSIVE mode
https://forums.oracle.com/forums/thread.jspa?threadID=842546
https://forums.oracle.com/forums/thread.jspa?threadID=850812
Second Node Crashes During Startup With ORA-29707 [ID 271420.1] <- check that cluster_database is set to TRUE in each instance's init.ora
https://www.udemy.com/python-for-data-analysis-and-pipelines/
Oracle Partitioning Policy
Topic: Server/Hardware Partitioning
https://www.oracle.com/assets/partitioning-070609.pdf
https://www.oracle.com/a/ocom/docs/linux/ol-kvm-hard-partitioning.pdf
.
https://blogs.oracle.com/oraclemagazine/using-oracle-machine-learning-notebooks
Oracle8 Time Series Cartridge User's Guide Release 8.0.4 https://docs.oracle.com/cd/A58617_01/cartridg.804/a57501/toc.htm
8.1.5 https://docs.oracle.com/cd/A87860_01/doc/inter.817/a67294/ts_intr.htm
Full Import shows Errors when adding Referential Constraint on Cartrige Tables (Doc ID 109576.1)
Database Administrator's Guide for Oracle Essbase http://docs.oracle.com/cloud/latest/financialscs_gs/FADAG/dcatimse.html#dcatimse8645
Manage Concurrency and Priorities on Autonomous Database
https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/manage-priorities.html#GUID-19175472-D200-445F-897A-F39801B0E953
.
<<<
https://oracle.github.io/learning-library/#AutonomousDatabaseFeaturedWorkshops
https://apexapps.oracle.com/pls/apex/dbpm/r/livelabs/view-workshop?p180_id=582
https://github.com/oracle/learning-library/tree/master/workshops/adwc4dev
https://www.doag.org/formes/pubfiles/10884154/20181205-Regio_BerlinBrandenburg_Christiane_Wellnitz_Oracle_Deutschland_Step_by_Step_Guide_Autonomous.pdf
<<<
! complete review of oracle DW 2018
https://www.dropbox.com/s/ej6o58mr3yqus7s/Complete-Review-BDW-oow18.pdf?dl=0
Warehousing Data with Oracle Autonomous Data Warehouse Cloud https://app.pluralsight.com/library/courses/oracle-autonomous-data-warehouse-cloud/table-of-contents
! oracle cloud infrastructure migration
Practical Oracle Cloud Infrastructure: Infrastructure as a Service, Autonomous Database, Managed Kubernetes, and Serverless https://learning.oreilly.com/library/view/practical-oracle-cloud/9781484255063/
! oci book
https://github.com/Apress/pract-oracle-cloud-infrastructure
https://github.com/mtjakobczyk/oci-book
https://www.udemy.com/course/foundation-to-oracle-database-in-oracle-cloud-infrastructure/
https://www.udemy.com/course/oracle-database-migration-methods-on-prem-to-oracle-cloud/
https://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/automatic-operations-256569003.html
<<<
! Automatic Operations
Automatic Indexing Enhancements (see the sketch after this list)
Automatic Index Optimization
Automatic Materialized Views
Automatic SQL Tuning Set
Automatic Temporary Tablespace Shrink
Automatic Undo Tablespace Shrink
Automatic Zone Maps
Object Activity Tracking System
Sequence Dynamic Cache Resizing
<<<
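As an example of switching one of these on, Automatic Indexing is driven through DBMS_AUTO_INDEX (a minimal sketch, assuming a release where the package is available; the schema name is illustrative):
{{{
exec dbms_auto_index.configure('AUTO_INDEX_MODE','IMPLEMENT');
exec dbms_auto_index.configure('AUTO_INDEX_SCHEMA','HR', true);   -- add HR to the inclusion list
-- report what it did over the last day
select dbms_auto_index.report_activity(systimestamp - 1, systimestamp, 'TEXT') from dual;
}}}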
.
https://oracle-base.com/articles/linux/automating-database-startup-and-shutdown-on-linux
https://unknowndba.blogspot.com/2021/07/systemd-service-to-stopstart-oracle.html
https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/References/computeshapes.htm
https://www.oracle.com/cloud/compute/pricing.html
https://www.oracle.com/cloud/compute/
https://www.oracle.com/cloud/compute/virtual-machines.html
https://www.google.com/search?q=oracle+VM.Standard2.24&oq=oracle+VM.Standard2.24&aqs=chrome..69i57j69i64l3.1090j0j1&sourceid=chrome&ie=UTF-8
! performance
https://docs.cloud.oracle.com/en-us/iaas/Content/Block/Concepts/blockvolumeperformance.htm
https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/References/computeshapes.htm#vmshapes
Introducing Continuous Integration and Delivery Concepts for DevOps on Oracle Cloud https://app.pluralsight.com/library/courses/oracle-cloud-devops-integration-delivery/table-of-contents
! AWS
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html
{{{
I’ve used https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html a few times as it stores shapes storage stats in a table so you can grab it and then sort it.
Looks like you’re out of luck, though I had no idea what the x2gd were as the last time we looked the r5b’s were the largest. Worth a read I suppose in case someone asks.
}}}
{{{
-- SAMPLE(25) reads roughly 25% of the table's rows; SEED(1) makes the sample repeatable
SELECT * from emp SAMPLE(25) SEED(1)
}}}
https://stackoverflow.com/questions/733652/select-a-random-sample-of-results-from-a-query-result
https://community.oracle.com/tech/developers/discussion/3673906/how-to-randomly-select-rows-by-percentage-of-rows-by-specific-column
{{{
select
deptno
,count(*) cnt
from emp
group by deptno
DEPTNO CNT
30 6
20 5
10 3
choose 50% from each dept randomly
expected (round up to next integer)
DEPTNO CNT
30 3
20 3
10 2
select * from (
select
empno
,deptno
,count(*) over (partition by deptno) cnt
,row_number() over (partition by deptno order by dbms_random.random) rn
from emp
)
where rn <= round(cnt*0.5)
order by deptno
EMPNO DEPTNO CNT RN
7782 10 3 1
7934 10 3 2
7566 20 5 1
7876 20 5 2
7902 20 5 3
7900 30 6 1
7521 30 6 2
7654 30 6 3
}}}
https://blogs.oracle.com/oraclemagazine/on-rownum-and-limiting-results
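The article covers the classic ROWNUM top-N pattern; since 12c the row-limiting clause does it in one step (a quick sketch):
{{{
select *
from   emp
order  by sal desc
fetch  first 10 rows only;   -- or: fetch first 10 rows with ties
}}}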
.
http://www.oracle.com/technetwork/topics/intel-macsoft-096467.html
http://www.toadworld.com/products/toad-mac-edition/b/weblog/archive/2013/07/02/setting-up-a-connection-to-oracle-on-your-mac.aspx
http://ronr.blogspot.com/2013/02/oracle-client-11gr2-11203-for-apple-mac.html
http://www.kylehailey.com/oracle-io-latency-monitoring/
{{{
Simple but only once a minute
NOTE: the following is a simpler query but the data only updates once a minute
select
n.name event,
m.wait_count cnt,
10*m.time_waited ms,
nvl(round(10*m.time_waited/nullif(m.wait_count,0),3) ,0) avg_ms
from v$eventmetric m,
v$event_name n
where m.event_id=n.event_id
and (
wait_class_id= 1740759767 -- User I/O
or
wait_class_id= 4108307767 -- System I/O
)
and m.wait_count > 0 ;
}}}
How to Store, Query, and Create JSON Documents in Oracle Database
https://blogs.oracle.com/sql/how-to-store-query-and-create-json-documents-in-oracle-database
Oracle retail demand forecasting
http://docs.oracle.com/cd/E75759_01/rdf/pdf/1401/rdf-1401-02-fcug.pdf
http://manchev.org/2015/03/querying-data-in-hadoop-with-the-oracle-sql-connector-for-hdfs/
{{{
-- create the wallet directory, do a "ln -s" on the db_unique_name if it's upper case
mkdir -p /oracle/admin/db01/wallet
-- create a symbolic link on ORACLE_HOME
cd $ORACLE_HOME
ln -s /oracle/admin admin
-- auto login wallet
orapki wallet create -wallet /oracle/admin/db01/wallet -auto_login -pwd "welcome1%" <-- use this on 10gR2,11gR1
orapki wallet create -wallet /u01/app/oracle/admin/testdb/wallet -auto_login_local -pwd "welcome1%" <-- use this on 11gR2; then switch logfile + checkpoint:
alter system switch logfile;
alter system switch logfile;
alter system checkpoint;
alter system set encryption wallet open identified by "welcome1%";
alter system set encryption key identified by "welcome1%";
alter system set encryption wallet close;
select * from gv$encryption_wallet;
select CUST_EMAIL from OE.CUSTOMERS where rownum < 2;
select * from gv$encryption_wallet;
alter system set encryption wallet open identified by "welcome1%";
select CUST_EMAIL from OE.CUSTOMERS where rownum < 2;
select * from gv$encryption_wallet;
-- creating a different wallet but same password.. this should error
orapki wallet create -wallet /oracle/admin/db01/wallet -pwd "welcome1%"
* it will error "ORA-28362: master key not found" if not the same wallet
}}}
''then, BACKUP the wallet!''
http://www.oracle-base.com/articles/misc/with-clause.php
{{{
Troubleshooting Oracle Streams Performance Issues (Doc ID 730036.1)
Generate process stack information
When to gather this information :
- High latching activity observed which could be associated with Streams
processes;
- High CPU with no corresponding SQL buffer gets activity; this might
suggest some other PGA or SGA related structure references which are
inefficient;
When collecting stack information, aim to collect a sample large enough
such that a profile of the execution of the process can be gleaned.
Using this information Oracle Support Services can collate the details
to understand where a process is typically executing.
This may be useful in identifying the cause of difficult to understand
performance related issues.
Stack collection can be achieved with the scripts provided as follows :
Note: Oradebug will work on all platforms. pstack (Solaris, Linux) could
also be used to achieve the same. In which case, replace :
- ./stack.sh $1 -> pstack $1
create the following 2 scripts :
stack.sh
#!/bin/ksh
sqlplus -s '/ as sysdba' << EOF
spool stacks.out append
oradebug setospid $1
oradebug unlimit
oradebug SHORT_STACK
EOF
genstacks.sh
#!/bin/ksh
count=$2
sc=0
while [ $sc -lt $count ]
do
sc=`expr $sc + 1`
./stack.sh $1
sleep 1
done
With the OS process ids or threads consuming CPU (refer above)
or using higher than usual CPU.
For each process , with spid (OS process/thread id) X, do following :
chmod +x stack.sh
chmod +x genstacks.sh
script X.out
./genstacks.sh X 600
This will generate 600 stacks from the process : 1 stack per second.
On Unix platforms, the script command can be used to record the terminal session (the "script X.out" call above).
For Windows platforms, cygwin can be downloaded from http://www.cygwin.com
to allow Unix shell compatibility.
}}}
-- 10046
{{{
oradebug setmypid
oradebug tracefile_name
oradebug event 10046 trace name context forever, level 12
oradebug event 10046 trace name context off
oradebug flush
oradebug close_trace
}}}
-- errorstack
{{{
oradebug setmypid
oradebug tracefile_name
oradebug dump errorstack 3
oradebug flush
oradebug close_trace
}}}
-- short stack
{{{
oradebug setospid 21906    <- spid
OR
oradebug setmypid
oradebug tracefile_name
oradebug short_stack
oradebug flush
oradebug close_trace
}}}
also check
[[Perf record, fulltime.sh]]
[[pstack and os_explain]]
https://twitter.com/fritshoogland/status/1039126634656395264
https://twitter.com/fritshoogland/statuses/1039059893368573952
http://orafun.info/
http://orafun.info/stack/ <- paste the short_stack here
https://mahmoudhatem.wordpress.com/2018/10/05/write-consistency-and-dml-restart/
{{{
D:\oracle\profile\orasrp.exe --aggregate=no --binds=0 --recognize-idle-events=no --sys=no <tracefilename> <htmlname>
}}}
''per module''
{{{
exec DBMS_MONITOR.serv_mod_act_trace_enable (service_name => 'FSTSTAH', module_name => 'EX_APPROVAL');
exec DBMS_MONITOR.serv_mod_act_trace_disable (service_name => 'FSTSTAH', module_name => 'EX_APPROVAL');
trcsess output=client.trc module=EX_APPROVAL *.trc
./orasrp --aggregate=no --binds=0 --recognize-idle-events=no --sys=no client.trc fsprd.html
tkprof client.trc client.tkprof sort=exeela
}}}
http://tylermuth.wordpress.com/2012/11/02/scripted-collection-of-os-watcher-files-on-exadata/
{{{
#!/bin/bash
file_name=""
if [ $# -gt 0 ]
then
file_name="_$1"
fi
# A few find examples
# \( -name "*.*" \) \
# \( -name "*vmstat*.*" -o -name "*iostat*.*" \) \
# \( -name "*vmstat*.*" -o -name "*iostat*.*" \) -mmin -60 \
# \( -name "*vmstat*.*" -o -name "*iostat*.*" -o -name "*netstat*.*" \) -mtime -8 \
while read line; do
(ssh -n -q root@$line 'find /opt/oracle.oswatcher/osw/archive \
\( -name "*vmstat*.*" -o -name "*iostat*.*" \) \
-prune -print0 2>/dev/null | \
xargs -0 tar --no-recursion -czf - 2>/dev/null ' | cat > osw${file_name}_${line}.tar.gz
)
done < /home/oracle/tyler/osw/allnodes
}}}
! kswapd troubleshooting
{{{
I used to do this to search for kswapd
less /opt/oracle.oswatcher/osw/archive/oswtop/example.com_top_12.03.08.1600.dat.bz2 | grep -A20 "load average:" > loadspike.txt
and starting Exadata software versions 11.2.3.3 and up, the equivalent command would be
less /opt/oracle.ExaWatcher/archive/Top.ExaWatcher/2014_09_23_08_14_38_TopExaWatcher_enkdb03.enkitec.com.dat.bz2 | grep -A10 "load average:" > loadspike.txt
-- searching entirely from osw files
find . -type f | xargs bzgrep "load average:" *bz2 > loadavg.txt
bzcat /opt/oracle.ExaWatcher/archive/Top.ExaWatcher/2014_09_23_08_14_38_TopExaWatcher_enkdb03.enkitec.com.dat.bz2 | grep "load average:" | sort -rnk12 | less
bzcat /opt/oracle.ExaWatcher/archive/Top.ExaWatcher/2014_09_23_08_14_38_TopExaWatcher_enkdb03.enkitec.com.dat.bz2 | grep -A10 "load average:" | less
top - 08:29:46 up 12 days, 10:09, 6 users, load average: 170.41
}}}
https://savvinov.com/2019/04/08/finding-the-root-cause-of-cpu-waits-using-stack-profiling/
https://mahmoudhatem.wordpress.com/2019/04/10/off-cpu-analysis-using-psnapper/
Exploring Linux /proc filesystem and System Calls Hacking Session with Tanel Poder https://www.youtube.com/watch?v=2Txu6umbsKE
Tanel Poder Linux Process Snapper (pSnapper) Demo https://www.youtube.com/watch?v=wvOcQryWdzE
<<showtoc>>
! project page
https://blog.tanelpoder.com/psnapper/
! videos howto
Exploring Linux /proc filesystem and System Calls Hacking Session with Tanel Poder https://www.youtube.com/watch?v=2Txu6umbsKE
Tanel Poder Linux Process Snapper (pSnapper) Demo https://www.youtube.com/watch?v=wvOcQryWdzE
! help
{{{
$ ./psn -h
usage: psn [-h] [-d seconds] [-p [pid]] [-t [tid]] [-r] [-a]
[--sample-hz SAMPLE_HZ] [--ps-hz PS_HZ] [-o filename] [-i filename]
[-s csv-columns] [-g csv-columns] [-G csv-columns]
[--sources csv-source-names] [--all-sources]
optional arguments:
-h, --help show this help message and exit
-d seconds number of seconds to sample for
-p [pid], --pid [pid]
process id to sample (including all its threads), or
process name regex, or omit for system-wide sampling
-t [tid], --thread [tid]
thread/task id to sample (not implemented yet)
-r, --recursive also sample and report for descendant processes
-a, --all-states display threads in all states, including idle ones
--sample-hz SAMPLE_HZ
sample rate in Hz
--ps-hz PS_HZ sample rate of new processes in Hz
-o filename, --output-sample-db filename
path of sqlite3 database to persist samples to,
defaults to in-memory/transient
-i filename, --input-sample-db filename
path of sqlite3 database to read samples from instead
of actively sampling
-s csv-columns, --select csv-columns
additional columns to report
-g csv-columns, --group-by csv-columns
columns to aggregate by in reports
-G csv-columns, --append-group-by csv-columns
default + additional columns to aggregate by in
reports
--sources csv-source-names
list csv sources to be captured in full. use with -o
or --output-sample-db to capture proc data for manual
analysis
--all-sources capture all sources in full. use with -o or --output-
sample-db
}}}
! example output
!! the workload
https://github.com/karlarao/cputoolkit
{{{
$ ./runcputoolkit-single 1 cdb1
}}}
!! sample commands
!!! as root
{{{
[root@karldevfedora psnapper-master]# ./psn -g cmdline,state,syscall,state,wchan,state
Linux Process Snapper v0.11 by Tanel Poder [https://tp.dev/psnapper]
Sampling /proc/cmdline, wchan, stat, syscall for 5 seconds... finished.
=== Active Threads =====================================================================================================
samples | avg_threads | cmdline | state | syscall | state | wchan | state
------------------------------------------------------------------------------------------------------------------------
99 | 1.80 | oraclecdb1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
3 | 0.05 | sqlplus | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
2 | 0.04 | | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
2 | 0.04 | sshd: root@pts/1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.02 | /bin/bash | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
[root@karldevfedora psnapper-master]# ./psn -g cmdline,state,syscall,state,wchan,state -a
Linux Process Snapper v0.11 by Tanel Poder [https://tp.dev/psnapper]
Sampling /proc/cmdline, wchan, stat, syscall for 5 seconds...
finished.
=== Active Threads =============================================================================================================================================================================
samples | avg_threads | cmdline | state | syscall | state | wchan | state
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1508 | 26.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | rescuer_thread | Sleep (Interruptible)
406 | 7.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | worker_thread | Sleep (Interruptible)
314 | 5.41 | sleep | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
290 | 5.00 | -bash | Sleep (Interruptible) | wait4 | Sleep (Interruptible) | do_wait | Sleep (Interruptible)
290 | 5.00 | /usr/sbin/gssproxy | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
289 | 4.98 | /bin/bash | Sleep (Interruptible) | wait4 | Sleep (Interruptible) | do_wait | Sleep (Interruptible)
232 | 4.00 | /usr/lib/polkit-1/polkitd | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
174 | 3.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | scsi_error_handler | Sleep (Interruptible)
174 | 3.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | smpboot_thread_fn | Sleep (Interruptible)
174 | 3.00 | /usr/sbin/ModemManager | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
174 | 3.00 | /usr/sbin/NetworkManager | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
174 | 3.00 | /usr/sbin/abrtd | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
116 | 2.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | rcu_gp_kthread | Sleep (Interruptible)
116 | 2.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | rcu_nocb_kthread | Sleep (Interruptible)
116 | 2.00 | -bash | Sleep (Interruptible) | read | Sleep (Interruptible) | pipe_wait | Sleep (Interruptible)
116 | 2.00 | /sbin/agetty | Sleep (Interruptible) | select | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
116 | 2.00 | /usr/lib/polkit-1/polkitd | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
116 | 2.00 | /usr/lib/systemd/systemd | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
116 | 2.00 | su | Sleep (Interruptible) | wait4 | Sleep (Interruptible) | do_wait | Sleep (Interruptible)
69 | 1.19 | oraclecdb1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
68 | 1.17 | sqlplus | Sleep (Interruptible) | read | Sleep (Interruptible) | pipe_wait | Sleep (Interruptible)
58 | 1.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | devtmpfsd | Sleep (Interruptible)
58 | 1.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | fsnotify_mark_destroy | Sleep (Interruptible)
58 | 1.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | kauditd_thread | Sleep (Interruptible)
58 | 1.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | kjournald2 | Sleep (Interruptible)
58 | 1.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | ksm_scan_thread | Sleep (Interruptible)
58 | 1.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | kthreadd | Sleep (Interruptible)
58 | 1.00 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | wait_woken | Sleep (Interruptible)
58 | 1.00 | (sd-pam) | Sleep (Interruptible) | rt_sigtimedwait | Sleep (Interruptible) | do_sigtimedwait | Sleep (Interruptible)
58 | 1.00 | -bash | Sleep (Interruptible) | read | Sleep (Interruptible) | wait_woken | Sleep (Interruptible)
58 | 1.00 | /sbin/audispd | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
58 | 1.00 | /sbin/audispd | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | /sbin/auditd | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
58 | 1.00 | /sbin/auditd | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
58 | 1.00 | /sbin/rngd | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | /u01/app/oracle/product/12.1.0.2/db_1/bin/tnslsnr | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
58 | 1.00 | /u01/app/oracle/product/12.1.0.2/db_1/bin/tnslsnr | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
58 | 1.00 | /usr/bin/abrt-dump-journal-oops | Sleep (Interruptible) | ppoll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | /usr/bin/abrt-dump-journal-xorg | Sleep (Interruptible) | ppoll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | /usr/bin/dbus-daemon | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
58 | 1.00 | /usr/bin/dbus-daemon | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | /usr/bin/perl | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
58 | 1.00 | /usr/lib/systemd/systemd-journald | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
58 | 1.00 | /usr/lib/systemd/systemd-logind | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
58 | 1.00 | /usr/lib/systemd/systemd-udevd | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
58 | 1.00 | /usr/sbin/alsactl | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | /usr/sbin/atd | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
58 | 1.00 | /usr/sbin/chronyd | Sleep (Interruptible) | select | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | /usr/sbin/crond | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
58 | 1.00 | /usr/sbin/gssproxy | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
58 | 1.00 | /usr/sbin/lvmetad | Sleep (Interruptible) | select | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | /usr/sbin/rsyslogd | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
58 | 1.00 | /usr/sbin/rsyslogd | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | /usr/sbin/rsyslogd | Sleep (Interruptible) | select | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | /usr/sbin/sedispatch | Sleep (Interruptible) | read | Sleep (Interruptible) | unix_stream_read_generic | Sleep (Interruptible)
58 | 1.00 | /usr/sbin/smartd | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
58 | 1.00 | /usr/sbin/sshd | Sleep (Interruptible) | select | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | avahi-daemon: chroot helper | Sleep (Interruptible) | read | Sleep (Interruptible) | unix_stream_read_generic | Sleep (Interruptible)
58 | 1.00 | avahi-daemon: running [karldevfedora.local] | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | mpstat | Sleep (Interruptible) | pause | Sleep (Interruptible) | sys_pause | Sleep (Interruptible)
58 | 1.00 | ora_aqpc_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_cjq0_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_ckpt_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_d000_cdb1 | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
58 | 1.00 | ora_dbrm_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_dbw0_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_dia0_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_diag_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_gen0_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_lgwr_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_lreg_cdb1 | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
58 | 1.00 | ora_mman_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_mmnl_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_mmon_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_p000_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_p001_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_p002_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_p003_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_pmon_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_psp0_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_pxmn_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_q002_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_q003_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_qm02_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_reco_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_s000_cdb1 | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
58 | 1.00 | ora_smco_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_smon_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_tmon_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_tt00_cdb1 | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
58 | 1.00 | ora_vkrm_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_vktm_cdb1 | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
58 | 1.00 | ora_w000_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_w001_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_w002_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_w003_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_w004_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_w005_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_w006_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | ora_w007_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
58 | 1.00 | oraclecdb1 | Sleep (Interruptible) | read | Sleep (Interruptible) | sk_wait_data | Sleep (Interruptible)
58 | 1.00 | sshd: root@pts/0 | Sleep (Interruptible) | select | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | sshd: root@pts/1 | Sleep (Interruptible) | select | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | sshd: root@pts/2 | Sleep (Interruptible) | select | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
58 | 1.00 | vmstat | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
57 | 0.98 | | Sleep (Interruptible) | [kernel_thread] | Sleep (Interruptible) | kswapd | Sleep (Interruptible)
1 | 0.02 | | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.02 | /bin/bash | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
samples: 58 (expected: 100)
total processes: 162, threads: 184
runtime: 5.18, measure time: 5.02
[root@karldevfedora psnapper-master]# collectl
waiting for 1 second sample...
#<--------CPU--------><----------Disks-----------><----------Network---------->
#cpu sys inter ctxsw KBRead Reads KBWrit Writes KBIn PktIn KBOut PktOut
100 2 1988 2336 0 0 0 0 1 20 3 17
100 6 2007 3270 284 10 0 0 3 43 8 53
100 8 2016 3328 196 13 224 4 6 88 12 78
100 4 1997 3250 4 1 0 0 2 38 5 39
100 1 1984 2300 0 0 0 0 2 27 6 26
100 5 1999 3178 128 8 32 2 2 35 5 35
100 1 1986 2331 0 0 0 0 2 28 6 28
Ouch!
[root@karldevfedora psnapper-master]# top -c
top - 12:26:53 up 364 days, 19:15, 3 users, load average: 3.26, 2.45, 1.33
Tasks: 157 total, 4 running, 153 sleeping, 0 stopped, 0 zombie
%Cpu0 : 95.8 us, 4.2 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 501224 total, 47092 free, 137896 used, 316236 buff/cache
KiB Swap: 1023996 total, 209116 free, 814880 used. 176136 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
25472 oracle 20 0 1073868 82300 81432 R 91.3 16.4 3:10.65 oraclecdb1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
19807 oracle -2 0 1069760 476 388 S 4.1 0.1 675:13.25 ora_vktm_cdb1
20463 root 20 0 0 0 0 S 0.4 0.0 0:00.78 [kworker/0:1]
24442 root 20 0 153008 4424 3176 S 0.4 0.9 0:00.14 sshd: root@pts/2
27140 root 20 0 58624 4492 3708 R 0.4 0.9 0:00.05 top -c
27238 oracle 20 0 1074624 75220 71728 R 0.4 15.0 0:00.01 oraclecdb1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
1 root 20 0 202460 2624 1404 S 0.0 0.5 70:03.58 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
2 root 20 0 0 0 0 S 0.0 0.0 1:14.62 [kthreadd]
3 root 20 0 0 0 0 S 0.0 0.0 0:01.13 [ksoftirqd/0]
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:0H]
7 root 20 0 0 0 0 S 0.0 0.0 29:53.34 [rcu_sched]
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcu_bh]
9 root 20 0 0 0 0 S 0.0 0.0 24:38.57 [rcuos/0]
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcuob/0]
11 root rt 0 0 0 0 S 0.0 0.0 0:00.00 [migration/0]
12 root rt 0 0 0 0 S 0.0 0.0 1:44.41 [watchdog/0]
13 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [khelper]
14 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kdevtmpfs]
15 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [netns]
16 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [perf]
17 root 0 -20 0 0 0 S 0.0 0.0 0:26.66 [writeback]
18 root 25 5 0 0 0 S 0.0 0.0 0:00.00 [ksmd]
19 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [crypto]
20 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kintegrityd]
21 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset]
22 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kblockd]
23 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [ata_sff]
24 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [md]
25 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [devfreq_wq]
28 root 20 0 0 0 0 S 0.0 0.0 121:15.41 [kswapd0]
29 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [fsnotify_mark]
72 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kthrotld]
73 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [acpi_thermal_pm]
74 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [scsi_eh_0]
75 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [scsi_tmf_0]
76 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [scsi_eh_1]
77 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [scsi_tmf_1]
79 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kpsmoused]
80 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [dm_bufio_cache]
81 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [ipv6_addrconf]
121 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [deferwq]
173 root 20 0 0 0 0 S 0.0 0.0 0:57.70 [kauditd]
311 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [scsi_eh_2]
312 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [scsi_tmf_2]
313 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [ttm_swap]
314 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [qxl_gc]
324 root 0 -20 0 0 0 S 0.0 0.0 0:55.75 [kworker/0:1H]
329 root 20 0 0 0 0 S 0.0 0.0 8:32.37 [jbd2/vda1-8]
330 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [ext4-rsv-conver]
405 root 20 0 70556 1552 1420 S 0.0 0.3 52:55.07 /usr/lib/systemd/systemd-journald
424 root 20 0 123616 0 0 S 0.0 0.0 0:00.00 /usr/sbin/lvmetad -f
434 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [rpciod]
450 root 20 0 46020 0 0 S 0.0 0.0 1:32.45 /usr/lib/systemd/systemd-udevd
506 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [vballoon]
540 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kvm-irqfd-clean]
559 root 16 -4 114596 0 0 S 0.0 0.0 6:38.87 /sbin/auditd -n
567 root 12 -8 80236 8 8 S 0.0 0.0 3:56.92 /sbin/audispd
}}}
!!!! kstack
{{{
[root@karldevfedora psnapper-master]# ./psn -g pid,cmdline,state,syscall,state,wchan,state,kstack
Linux Process Snapper v0.13 by Tanel Poder [https://tp.dev/psnapper]
Sampling /proc/stat, wchan, stack, cmdline, syscall for 5 seconds... finished.
=== Active Threads =============================================================================================================================================================================================================================================
samples | avg_threads | pid | cmdline | state | syscall | state | wchan | state | kstack
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
45 | 1.00 | 2655 | oraclecdb1 | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | retint_careful()
2 | 0.04 | 6993 | oraclecdb1 | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | retint_careful()
1 | 0.02 | 6993 | | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | entry_SYSCALL_64_fastpath()->SyS_exit_group()->do_group_exit()->do_exit()->mmput()->exit_mmap()->unmap_vmas()->unmap_single_vma()
1 | 0.02 | 6997 | oraclecdb1 | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | retint_careful()
1 | 0.02 | 7002 | oraclecdb1 | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | async_page_fault()->do_async_page_fault()->trace_do_page_fault()->__do_page_fault()
1 | 0.02 | 7012 | sqlplus | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | retint_careful()
1 | 0.02 | 7014 | oraclecdb1 | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | int_careful()
1 | 0.02 | 7029 | oraclecdb1 | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | retint_careful()
1 | 0.02 | 19841 | ora_mmnl_cdb1 | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | retint_careful()
samples: 45 (expected: 100)
total processes: 165, threads: 187
runtime: 5.07, measure time: 5.01
}}}
!!!! other columns - read,write,rss
Check the file https://github.com/tanelpoder/psnapper/blob/master/proc.py (ProcSource -> available_columns) for the columns that you can use.
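A quick way to eyeball those column definitions locally instead of browsing GitHub (a grep sketch; assumes you're in the psnapper directory and that the proc.py layout hasn't changed):
{{{
# show where each ProcSource declares its columns
grep -n "available_columns" proc.py
# then dump the lines right after each hit to see the column names
grep -A 20 "available_columns" proc.py | less
}}}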
{{{
[root@karldevfedora psnapper-master]# ./psn -g cmdline,state,syscall,state,wchan,state,read_bytes,write_bytes,rss
Linux Process Snapper v0.11 by Tanel Poder [https://tp.dev/psnapper]
Sampling /proc/io, cmdline, wchan, stat, syscall for 5 seconds... finished.
=== Active Threads ========================================================================================================================================
samples | avg_threads | cmdline | state | syscall | state | wchan | state | read_bytes | write_bytes | rss
-----------------------------------------------------------------------------------------------------------------------------------------------------------
45 | 1.00 | oraclecdb1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU) | 11132928 | 4096 | 19812
1 | 0.02 | | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU) | 0 | 4096 | 0
1 | 0.02 | -bash | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU) | 102400 | 77824 | 465
1 | 0.02 | oraclecdb1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU) | 0 | 4096 | 26943
1 | 0.02 | oraclecdb1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU) | 4096 | 4096 | 22652
1 | 0.02 | oraclecdb1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU) | 4096 | 4096 | 25337
1 | 0.02 | sshd: root@pts/1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU) | 7163904 | 0 | 970
samples: 45 (expected: 100)
total processes: 160, threads: 182
runtime: 5.07, measure time: 5.02
}}}
!!!! other columns - syscall_id,arg0,arg1
As shown here https://blog.tanelpoder.com/2013/02/21/peeking-into-linux-kernel-land-using-proc-filesystem-for-quickndirty-troubleshooting/ you can use syscall_id and the arg columns to work out which file descriptor (fd) a process is reading.
{{{
[root@karldevfedora 19831]# cat /proc/19831/task/19831/syscall
220 0x318000 0x7ffe32a94998 0x1 0x7ffe32a94948 0x0 0x0 0x7ffe32a947c8 0x7fdba8dd9e7a
[root@karldevfedora 19831]#
[root@karldevfedora 19831]# cat /proc/19831/syscall
220 0x318000 0x7ffe32a94998 0x1 0x7ffe32a94948 0x0 0x0 0x7ffe32a947c8 0x7fdba8dd9e7a
[root@karldevfedora 19831]# ls -l /proc/19831/fd/
total 0
lr-x------. 1 oracle oinstall 64 Apr 15 18:00 0 -> /dev/null
l-wx------. 1 oracle oinstall 64 Apr 15 18:00 1 -> /dev/null
l-wx------. 1 oracle oinstall 64 Apr 15 18:00 2 -> /dev/null
lrwx------. 1 oracle oinstall 64 Apr 15 18:00 256 -> /u01/app/oracle/oradata/cdb1/temp03.dbf
lrwx------. 1 oracle oinstall 64 Apr 15 18:00 257 -> /u01/app/oracle/oradata/cdb1/undotbs01.dbf
lrwx------. 1 oracle oinstall 64 Apr 15 18:00 258 -> /u01/app/oracle/oradata/cdb1/system01.dbf
lrwx------. 1 oracle oinstall 64 Apr 15 18:00 259 -> /u01/app/oracle/oradata/cdb1/sysaux01.dbf
lr-x------. 1 oracle oinstall 64 Apr 15 18:00 3 -> /dev/null
lr-x------. 1 oracle oinstall 64 Apr 15 18:00 4 -> /u01/app/oracle/product/12.1.0.2/db_1/rdbms/mesg/oraus.msb
lr-x------. 1 oracle oinstall 64 Apr 15 18:00 5 -> /proc/19831/fd
lrwx------. 1 oracle oinstall 64 Apr 15 18:00 6 -> /u01/app/oracle/product/12.1.0.2/db_1/dbs/hc_cdb1.dat
lrwx------. 1 oracle oinstall 64 Apr 15 18:00 7 -> /u01/app/oracle/product/12.1.0.2/db_1/dbs/lkCDB1
lr-x------. 1 oracle oinstall 64 Apr 15 18:00 8 -> /u01/app/oracle/product/12.1.0.2/db_1/rdbms/mesg/oraus.msb
[root@karldevfedora psnapper-master]# ./psn -g pid,cmdline,state,syscall,state,wchan,state,read_bytes,write_bytes,rss,syscall_id,arg0,arg1,kstack -a -p "smon"
Linux Process Snapper v0.13 by Tanel Poder [https://tp.dev/psnapper]
Sampling /proc/stack, io, syscall, stat, cmdline, wchan for 5 seconds... finished.
=== Active Threads =================================================================================================================================================================================================================================================================================
samples | avg_threads | pid | cmdline | state | syscall | state | wchan | state | read_bytes | write_bytes | rss | syscall_id | arg0 | arg1 | kstack
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
99 | 0.99 | 19831 | ora_smon_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible) | 1419952128 | 0 | 3853 | 220 | 0x318000 | 0x7ffe32a94998 | entry_SYSCALL_64_fastpath()->SyS_semtimedop()->SYSC_semtimedop()
1 | 0.01 | 19831 | ora_smon_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible) | 1419952128 | 0 | 3852 | 220 | 0x318000 | 0x7ffe32a94998 | entry_SYSCALL_64_fastpath()->SyS_semtimedop()->SYSC_semtimedop()
samples: 100 (expected: 100)
total processes: 1, threads: 1
runtime: 5.01, measure time: 0.15
}}}
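To decode those raw numbers: the first field of /proc/PID/syscall is the syscall number, and for file-oriented calls like read() the first argument is the fd, which you can resolve under /proc/PID/fd. A sketch (assumes x86_64 with kernel headers and the audit package installed; the pid and fd values are from the listing above):
{{{
# map syscall number 220 to its name via the kernel headers
grep -w 220 /usr/include/asm/unistd_64.h     # => #define __NR_semtimedop 220
# or via the audit package helper
ausyscall x86_64 220                         # => semtimedop
# if the syscall were read() (0 on x86_64), arg0 would be the fd - resolve it like this
ls -l /proc/19831/fd/257
}}}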
!!!! filter by process name and pid
* Here I'm showing all processes (-a) and filtering to only the j00 and smon processes
{{{
[root@karldevfedora psnapper-master]# ./psn -g pid,cmdline,state,syscall,state,wchan,state,read_bytes,write_bytes,rss,kstack -a -p "j00|smon"
Linux Process Snapper v0.13 by Tanel Poder [https://tp.dev/psnapper]
Sampling /proc/stack, io, syscall, stat, cmdline, wchan for 5 seconds... finished.
=== Active Threads =========================================================================================================================================================================================================================================
samples | avg_threads | pid | cmdline | state | syscall | state | wchan | state | read_bytes | write_bytes | rss | kstack
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
100 | 1.00 | 16325 | ora_j000_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible) | 548864 | 16384 | 17657 | entry_SYSCALL_64_fastpath()->SyS_semtimedop()->SYSC_semtimedop()
100 | 1.00 | 16327 | ora_j001_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible) | 0 | 0 | 11346 | entry_SYSCALL_64_fastpath()->SyS_semtimedop()->SYSC_semtimedop()
74 | 0.74 | 19831 | ora_smon_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible) | 1417646080 | 0 | 3369 | entry_SYSCALL_64_fastpath()->SyS_semtimedop()->SYSC_semtimedop()
26 | 0.26 | 19831 | ora_smon_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible) | 1417641984 | 0 | 3369 | entry_SYSCALL_64_fastpath()->SyS_semtimedop()->SYSC_semtimedop()
samples: 100 (expected: 100)
total processes: 3, threads: 3
runtime: 5.01, measure time: 0.35
}}}
* Here I'm filtering by process ID
{{{
[root@karldevfedora psnapper-master]# ./psn -g pid,cmdline,state,syscall,state,wchan,state,read_bytes,write_bytes,rss,kstack -a -p 19831
Linux Process Snapper v0.13 by Tanel Poder [https://tp.dev/psnapper]
Sampling /proc/stack, io, syscall, stat, cmdline, wchan for 5 seconds... finished.
=== Active Threads ========================================================================================================================================================================================================================================
samples | avg_threads | pid | cmdline | state | syscall | state | wchan | state | read_bytes | write_bytes | rss | kstack
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
100 | 1.00 | 19831 | ora_smon_cdb1 | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible) | 1418780672 | 0 | 3788 | entry_SYSCALL_64_fastpath()->SyS_semtimedop()->SYSC_semtimedop()
samples: 100 (expected: 100)
total processes: 1, threads: 1
runtime: 5.01, measure time: 0.25
}}}
!!!! run in loop - with no timestamp
{{{
# the workload is
$ ./runcputoolkit-single 2 cdb1
[root@karldevfedora psnapper-master]# while : ; do ./psn -g cmdline,state,syscall,state,wchan,state ; sleep 6 ; done
Linux Process Snapper v0.11 by Tanel Poder [https://tp.dev/psnapper]
Sampling /proc/cmdline, wchan, stat, syscall for 5 seconds... finished.
=== Active Threads =======================================================================================================================================
samples | avg_threads | cmdline | state | syscall | state | wchan | state
----------------------------------------------------------------------------------------------------------------------------------------------------------
120 | 2.14 | oraclecdb1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
3 | 0.05 | | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.02 | | Disk (Uninterruptible) | [kernel_thread] | Disk (Uninterruptible) | __wait_on_buffer | Disk (Uninterruptible)
1 | 0.02 | mpstat | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.02 | sleep | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.02 | sshd: root@pts/1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
samples: 56 (expected: 100)
total processes: 170, threads: 192
runtime: 5.15, measure time: 5.07
Linux Process Snapper v0.11 by Tanel Poder [https://tp.dev/psnapper]
Sampling /proc/cmdline, wchan, stat, syscall for 5 seconds... finished.
=== Active Threads ==========================================================================================================================
samples | avg_threads | cmdline | state | syscall | state | wchan | state
---------------------------------------------------------------------------------------------------------------------------------------------
111 | 2.02 | oraclecdb1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.02 | | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.02 | oraclecdb1 | Disk (Uninterruptible) | io_destroy | Disk (Uninterruptible) | SyS_io_destroy | Disk (Uninterruptible)
1 | 0.02 | sqlplus | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.02 | sqlplus | Running (ON CPU) | read | Running (ON CPU) | 0 | Running (ON CPU)
samples: 55 (expected: 100)
total processes: 167, threads: 189
runtime: 5.09, measure time: 5.01
Linux Process Snapper v0.11 by Tanel Poder [https://tp.dev/psnapper]
Sampling /proc/cmdline, wchan, stat, syscall for 5 seconds... finished.
=== Active Threads =====================================================================================================
samples | avg_threads | cmdline | state | syscall | state | wchan | state
------------------------------------------------------------------------------------------------------------------------
128 | 2.33 | oraclecdb1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.02 | | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.02 | -bash | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.02 | ora_dia0_cdb1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.02 | sshd: root@pts/1 | Running (ON CPU) | [userland] | Running (ON CPU) | 0 | Running (ON CPU)
samples: 55 (expected: 100)
total processes: 174, threads: 196
runtime: 5.07, measure time: 4.99
[root@karldevfedora ~]# top -c
top - 13:17:38 up 364 days, 20:06, 4 users, load average: 2.10, 2.09, 1.27
Tasks: 157 total, 3 running, 154 sleeping, 0 stopped, 0 zombie
%Cpu(s): 93.9 us, 6.1 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 501224 total, 62080 free, 115052 used, 324092 buff/cache
KiB Swap: 1023996 total, 193236 free, 830760 used. 195364 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
30501 oracle 20 0 1072064 72424 71968 R 42.9 14.4 3:16.38 oraclecdb1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
30493 oracle 20 0 1073872 74164 73640 R 42.5 14.8 3:16.38 oraclecdb1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
30276 oracle 20 0 13240 2492 2348 S 0.3 0.5 0:00.04 /bin/bash ./ash_detail
1 root 20 0 202460 3976 2156 S 0.0 0.8 70:03.96 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
2 root 20 0 0 0 0 S 0.0 0.0 1:14.63 [kthreadd]
3 root 20 0 0 0 0 S 0.0 0.0 0:01.13 [ksoftirqd/0]
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:0H]
7 root 20 0 0 0 0 S 0.0 0.0 29:54.95 [rcu_sched]
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcu_bh]
9 root 20 0 0 0 0 S 0.0 0.0 24:39.64 [rcuos/0]
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcuob/0]
11 root rt 0 0 0 0 S 0.0 0.0 0:00.00 [migration/0]
12 root rt 0 0 0 0 S 0.0 0.0 1:44.42 [watchdog/0]
13 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [khelper]
14 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kdevtmpfs]
15 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [netns]
16 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [perf]
17 root 0 -20 0 0 0 S 0.0 0.0 0:26.66 [writeback]
18 root 25 5 0 0 0 S 0.0 0.0 0:00.00 [ksmd]
19 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [crypto]
20 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kintegrityd]
21 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset]
22 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kblockd]
23 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [ata_sff]
24 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [md]
25 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [devfreq_wq]
28 root 20 0 0 0 0 S 0.0 0.0 121:19.75 [kswapd0]
29 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [fsnotify_mark]
72 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kthrotld]
73 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [acpi_thermal_pm]
74 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [scsi_eh_0]
75 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [scsi_tmf_0]
76 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [scsi_eh_1]
77 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [scsi_tmf_1]
79 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kpsmoused]
80 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [dm_bufio_cache]
81 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [ipv6_addrconf]
121 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [deferwq]
173 root 20 0 0 0 0 S 0.0 0.0 0:57.70 [kauditd]
311 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [scsi_eh_2]
312 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [scsi_tmf_2]
313 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [ttm_swap]
314 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [qxl_gc]
324 root 0 -20 0 0 0 S 0.0 0.0 0:55.77 [kworker/0:1H]
329 root 20 0 0 0 0 S 0.0 0.0 8:32.43 [jbd2/vda1-8]
330 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [ext4-rsv-conver]
405 root 20 0 70556 5844 5708 S 0.0 1.2 52:55.20 /usr/lib/systemd/systemd-journald
424 root 20 0 123616 0 0 S 0.0 0.0 0:00.00 /usr/sbin/lvmetad -f
434 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [rpciod]
450 root 20 0 46020 8 0 S 0.0 0.0 1:32.48 /usr/lib/systemd/systemd-udevd
506 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [vballoon]
540 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kvm-irqfd-clean]
559 root 16 -4 114596 0 0 S 0.0 0.0 6:38.87 /sbin/auditd -n
567 root 12 -8 80236 268 248 S 0.0 0.1 3:56.93 /sbin/audispd
570 root 16 -4 52204 288 248 S 0.0 0.1 3:31.27 /usr/sbin/sedispatch
574 root 20 0 4380 12 8 S 0.0 0.0 15:07.43 /sbin/rngd -f
}}}
!!! as oracle
* As the oracle user you get limited output, but it's still useful for troubleshooting from an OS-level perspective; at the bottom it warns "Run as root or avoid restricted proc-files"
{{{
$ psn -g cmdline,state,syscall,wchan
Linux Process Snapper v0.11 by Tanel Poder [https://tp.dev/psnapper]
Sampling /proc/stat, cmdline, wchan, syscall for 5 seconds... finished.
=== Active Threads ========================================================================
samples | avg_threads | cmdline | state | syscall | wchan
-------------------------------------------------------------------------------------------
78 | 1.08 | oraclecdb1 | Running (ON CPU) | [userland] | 0
1 | 0.01 | /usr/bin/perl | Running (ON CPU) | nanosleep | hrtimer_nanosleep
$ psn -g cmdline,state,syscall,wchan -a
Linux Process Snapper v0.11 by Tanel Poder [https://tp.dev/psnapper]
Sampling /proc/stat, syscall, cmdline, wchan for 5 seconds... finished.
=== Active Threads ===================================================================================================================
samples | avg_threads | cmdline | state | syscall | wchan
--------------------------------------------------------------------------------------------------------------------------------------
537 | 5.71 | sleep | Sleep (Interruptible) | nanosleep | hrtimer_nanosleep
470 | 5.00 | /bin/bash | Sleep (Interruptible) | wait4 | do_wait
282 | 3.00 | -bash | Sleep (Interruptible) | wait4 | do_wait
188 | 2.00 | -bash | Sleep (Interruptible) | read | pipe_wait
106 | 1.13 | sqlplus | Sleep (Interruptible) | read | pipe_wait
97 | 1.03 | oraclecdb1 | Running (ON CPU) | [userland] | 0
94 | 1.00 | /u01/app/oracle/product/12.1.0.2/db_1/bin/tnslsnr | Sleep (Interruptible) | epoll_wait | ep_poll
94 | 1.00 | /u01/app/oracle/product/12.1.0.2/db_1/bin/tnslsnr | Sleep (Interruptible) | futex | futex_wait_queue_me
94 | 1.00 | /usr/bin/perl | Sleep (Interruptible) | nanosleep | hrtimer_nanosleep
94 | 1.00 | mpstat | Sleep (Interruptible) | pause | sys_pause
94 | 1.00 | ora_aqpc_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_cjq0_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_ckpt_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_d000_cdb1 | Sleep (Interruptible) | epoll_wait | ep_poll
94 | 1.00 | ora_dbrm_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_dbw0_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_dia0_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_diag_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_gen0_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_j000_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_j001_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_lgwr_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_lreg_cdb1 | Sleep (Interruptible) | epoll_wait | ep_poll
94 | 1.00 | ora_mman_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_mmnl_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_mmon_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_p000_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_p001_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_p002_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_p003_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_pmon_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_psp0_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_pxmn_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_q002_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_q003_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_qm02_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_reco_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_s000_cdb1 | Sleep (Interruptible) | epoll_wait | ep_poll
94 | 1.00 | ora_smco_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_smon_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_tmon_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_tt00_cdb1 | Sleep (Interruptible) | nanosleep | hrtimer_nanosleep
94 | 1.00 | ora_vkrm_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_vktm_cdb1 | Sleep (Interruptible) | nanosleep | hrtimer_nanosleep
94 | 1.00 | ora_w000_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_w001_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_w002_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_w003_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_w004_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_w005_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_w006_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | ora_w007_cdb1 | Sleep (Interruptible) | semtimedop | SYSC_semtimedop
94 | 1.00 | oraclecdb1 | Sleep (Interruptible) | read | sk_wait_data
94 | 1.00 | vmstat | Sleep (Interruptible) | nanosleep | hrtimer_nanosleep
1 | 0.01 | sqlplus | Running (ON CPU) | [userland] | 0
samples: 94 (expected: 100)
total processes: 75, threads: 76
runtime: 5.09, measure time: 5.00
Warning: 10227 /proc file accesses failed. Run as root or avoid restricted
proc-files like "syscall" or measure only your own processes
}}}
! psnapper time series demo on a CDB/PDB environment
* Go to this URL for the dashboard https://public.tableau.com/profile/karlarao#!/vizhome/psnapper/psnapperdashboard and click through the tabs
[img(100%,100%)[ https://i.imgur.com/0gE7kUg.png ]]
!! the testcase environment
* Oracle developer VM https://www.oracle.com/technetwork/database/enterprise-edition/databaseappdev-vm-161299.html
* configured with 2 CPUs and 6144MB of memory
* the developer VM comes with the ORCL PDB; I created another PDB named PDB2
* CON_ID 1 is the cdb$root
{{{
SYS@ORCL> select CON_ID, dbid, NAME, OPEN_MODE from v$pdbs order by 1
CON_ID DBID NAME OPEN_MODE
---------- ---------- -------------------- --------------------
2 1016842785 PDB$SEED READ ONLY
3 2846920952 ORCL READ WRITE
4 2887141527 PDB2 READ WRITE
}}}
!! the psnapper workload
* the workload is cputoolkit https://github.com/karlarao/cputoolkit plus dd on a 1-minute loop
** cputoolkit: 1 CPU workload on the root container and 1 CPU workload on the PDB named pdb2
** dd simulates the workload outside of the Oracle kernel
* the end-to-end test case is 1 hour (see the 15:00 hour)
{{{
### cputoolkit workload
oracle@vbgeneric:/home/oracle/dba/cputoolkit-master-root:orcl12c
$ ./runcputoolkit-single 1 orcl12c
oracle@vbgeneric:/home/oracle/dba/cputoolkit-master-pdb2:orcl12c
$ ./runcputoolkit-single 1 orcl12c
### dd workload
[root@vbgeneric ~]# while : ; do dd if=/dev/sda of=/dev/null bs=64k ; sleep 60 ; done
# sample output below
491520+0 records in
491520+0 records out
32212254720 bytes (32 GB) copied, 29.2567 s, 1.1 GB/s
491520+0 records in
491520+0 records out
32212254720 bytes (32 GB) copied, 32.4625 s, 992 MB/s
491520+0 records in
491520+0 records out
32212254720 bytes (32 GB) copied, 27.9659 s, 1.2 GB/s
^C395294+0 records in
395293+0 records out
25905922048 bytes (26 GB) copied, 25.4084 s, 1.0 GB/s
}}}
!! instrumentation
<<<
* system level
** 5 sec psnapper with added timestamp column
** 5 sec top command with added timestamp column
* db level
** dump of v$active_session_history executed on the CDB root so the data can be sliced and diced across CON_IDs
* other
** cputoolkit instrumentation is also running during the testcase period, but we will focus on the tools mentioned above
<<<
!! instrumentation commands
!!! time series psnapper
{{{
### this is what I used for this testcase
[root@vbgeneric psnapper-master]# while : ; do ./psn -g pid,cmdline,state,syscall,state,wchan,state,read_bytes,write_bytes,rss,syscall_id,arg0,arg1,kstack | while read line; do echo "`date +%m\/%d\/%Y\ %T`"\| "$line" ; done ; sleep 6 ; done >> psnap.txt
### if you are planning to execute psnapper on all the hosts of a RAC environment then add the hostname, so that you can combine all files into one big file and use the hostname as a dimension to slice, dice, and aggregate the data
# with timestamp and hostname
while : ; do ./psn -g pid,cmdline,state,syscall,state,wchan,state,read_bytes,write_bytes,rss,syscall_id,arg0,arg1,kstack | while read line; do echo "`date +%m\/%d\/%Y\ %T`|`hostname`"\| "$line" ; done ; sleep 6 ; done >> psnap_`hostname`.txt
}}}
!!!! preparing the data for visualization (tableau)
* do this after all the testcases
{{{
cat psnap.txt | awk '/samples \|/,/samples\:/' | grep -v "\-\-\-" | grep -v "samples\:" | grep -v "samples |" > psnapper.txt
sed -i 's/|/,/g' psnapper.txt
### then add the header for the csv data
vi psnapper.txt
time,samples,avg_threads,pid,cmdline,state,syscall,state,wchan,state,read_bytes,write_bytes,rss,syscall_id,arg0,arg1,kstack
### use this header if you add the hostname column
time,hostname,samples,avg_threads,pid,cmdline,state,syscall,state,wchan,state,read_bytes,write_bytes,rss,syscall_id,arg0,arg1,kstack
}}}
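Before pointing Tableau at the file, it's worth a quick sanity check that every row parsed into the same number of fields (a sketch):
{{{
# every line should report the same field count; outliers mean a bad parse
awk -F',' '{print NF}' psnapper.txt | sort | uniq -c
}}}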
!!! time series top
{{{
[root@vbgeneric psnapper-master]# while : ; do top -c -n 45; echo "--"; sleep 6; done | while read line; do echo "`date +%m\/%d\/%Y\ %T`"\| "$line" ; done >> top.txt
}}}
!!! ash dump
* do this after all the testcases
{{{
wget https://raw.githubusercontent.com/karlarao/pull_dump_and_explore_ash/master/ash/0_gvash_to_csv_12c.sql
sqlplus "/ as sysdba"
alter session set container=cdb$root;
@0_gvash_to_csv_12c.sql
}}}
!! closer look on the testcase - db load on cdb$root and PDB2 + dd loop
* As you can see below, there are three runs of cputoolkit:
** run1: 2 CPU on root and PDB2
** run2: 2 CPU on root and PDB2
** run3: 1 CPU on PDB2
* Right below that is the evenly spaced purple load, which is the one-minute dd loop
* And below the purple dd load are the Oracle background processes and the non-Oracle kernel resource consumers (more on this in the next section)
Some interesting insights:
* On the system level (left), psnapper was able to instrument the non-Oracle kernel activity. The high-level system activity hovers at 7 to 8 Avg Threads, which is also safe to call AAS (average active sessions) because the orange color (oracle user process) correlates pretty well with the ASH data. So it's cool that we can break down and aggregate the OS processes with dimensions (pid,cmdline,state,syscall,state,wchan,state,read_bytes,write_bytes,rss,syscall_id,arg0,arg1,kstack).
** Tableau can combine two data sources in one dashboard (here it's psnapper and the ASH dump)
* Clearly the database workload is more than the available resources. psnapper coupled with a powerful visualization tool lets you see further; the correlation and drill-down guide you to definitive answers on the why and where, and let you uncover behaviors/patterns that you would not notice in plain text
* For the data to make more sense I had to separate the Oracle fg, bg, and other processes using a calculated field ".process_type"
{{{
IF contains(lower(trim([Cmdline])),'ora_')=true THEN 'oracle bg process'
ELSEIF contains(lower(trim([Cmdline])),'oracleorcl12c')=true THEN 'oracle user process'
ELSEIF contains(lower(trim([Cmdline])),'dd')=true THEN 'dd'
ELSE 'OTHER' END
}}}
[img(100%,100%)[ https://i.imgur.com/36k89Uc.png ]]
Below is the top output for each run:
!!! run1: 2 CPU on root and PDB2
{{{
SYS@ORCL> select CON_ID, dbid, NAME, OPEN_MODE from v$pdbs order by 1
CON_ID DBID NAME OPEN_MODE
---------- ---------- -------------------- --------------------
2 1016842785 PDB$SEED READ ONLY
3 2846920952 ORCL READ WRITE
4 2887141527 PDB2 READ WRITE
15:19:01 SYS@ORCL> select s.con_id, p.spid, s.sid, s.serial#
from v$process p, v$session s
where p.addr = s.paddr
and p.spid in (16144,16382);
CON_ID SPID SID SERIAL#
---------- ------------------------ ------ -------
4 16382 60 24099
1 16144 53 32173
[oracle@vbgeneric psnapper-master]$ top -c
top - 15:10:22 up 6 days, 10 min, 9 users, load average: 5.53, 3.49, 4.72
Tasks: 321 total, 12 running, 308 sleeping, 0 stopped, 1 zombie
%Cpu0 : 44.7 us, 39.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 16.0 si, 0.0 st
%Cpu1 : 68.4 us, 24.8 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 6.8 si, 0.0 st
KiB Mem : 5846272 total, 54428 free, 1781028 used, 4010816 buff/cache
KiB Swap: 3145724 total, 3112900 free, 32824 used. 2954816 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16869 root 20 0 107948 664 592 R 39.9 0.0 0:05.33 dd if=/dev/sda of=/dev/null bs=64k
16144 oracle 20 0 1259876 128868 124824 R 39.3 2.2 0:16.64 oracleorcl12c (LOCAL=NO)
16382 oracle 20 0 1259876 108148 104108 R 37.6 1.8 0:16.33 oracleorcl12c (LOCAL=NO)
36 root 20 0 0 0 0 S 10.6 0.0 0:16.59 [kswapd0]
2929 oracle -2 0 1255940 57880 54752 S 3.0 1.0 204:43.70 ora_vktm_orcl12c
3858 oracle 20 0 562064 23712 12168 S 2.0 0.4 0:13.68 /usr/libexec/gnome-terminal-server
4269 oracle 20 0 3747584 769636 8436 S 2.0 13.2 835:02.99 /home/oracle/java/jdk1.8.0_121/bin/java -Xmx1024m -Xms256m -jar /home+
1297 root 20 0 324544 94592 8456 S 1.0 1.6 502:04.56 /usr/bin/Xorg :0 -background none -noreset -audit 4 -verbose -auth /r+
17307 oracle 20 0 0 0 0 Z 1.0 0.0 0:00.03 [oracle_17307_or] <defunct>
2175 oracle 20 0 215284 13148 8220 S 0.7 0.2 0:18.78 /u01/app/oracle/product/12.2/db_1/bin/tnslsnr LISTENER -inherit
3422 oracle 20 0 1642060 167220 27132 S 0.7 2.9 641:10.98 /usr/bin/gnome-shell
7225 oracle 20 0 157872 4528 3680 R 0.7 0.1 0:01.38 top -c
17301 oracle 20 0 117088 20264 16352 S 0.7 0.3 0:00.02 sqlplus
7 root 20 0 0 0 0 R 0.3 0.0 4:30.99 [rcu_sched]
15 root 20 0 0 0 0 S 0.3 0.0 1:34.85 [ksoftirqd/1]
18 root 20 0 0 0 0 S 0.3 0.0 2:03.88 [rcuos/1]
1223 root 20 0 489240 22584 21900 S 0.3 0.4 0:40.25 /usr/sbin/rsyslogd -n
2977 oracle 20 0 1255940 62088 58960 S 0.3 1.1 28:47.69 ora_vkrm_orcl12c
2988 oracle 20 0 1255940 58056 54936 S 0.3 1.0 1:33.20 ora_pman_orcl12c
3075 oracle 20 0 1258036 53200 50272 S 0.3 0.9 0:08.12 ora_s016_orcl12c
11477 root 20 0 0 0 0 S 0.3 0.0 0:00.14 [kworker/1:1]
15956 oracle 20 0 116292 3056 1788 S 0.3 0.1 0:00.02 bash
16207 oracle 20 0 116172 3168 1920 S 0.3 0.1 0:00.02 bash
17312 oracle 20 0 1250520 46688 44700 R 0.3 0.8 0:00.01 oracleorcl12c (LOCAL=NO)
1 root 20 0 125332 4528 3228 S 0.0 0.1 0:48.75 /usr/lib/systemd/systemd --switched-root --system --deserialize 20
2 root 20 0 0 0 0 S 0.0 0.0 0:00.14 [kthreadd]
3 root 20 0 0 0 0 S 0.0 0.0 1:14.15 [ksoftirqd/0]
[oracle@vbgeneric psnapper-master]$
}}}
!!! run2: 2 CPU on root and PDB2
{{{
15:22:47 SYS@ORCL>
select s.con_id, p.spid, s.sid, s.serial#
from v$process p, v$session s
where p.addr = s.paddr
and p.spid in (7741,7496);
CON_ID SPID SID SERIAL#
---------- ------------------------ ------ -------
4 7496 45 40016
1 7741 30 56850
[oracle@vbgeneric psnapper-master]$ top -c
top - 15:23:19 up 6 days, 23 min, 10 users, load average: 6.50, 6.03, 5.89
Tasks: 318 total, 6 running, 312 sleeping, 0 stopped, 0 zombie
%Cpu(s): 61.3 us, 27.9 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 10.9 si, 0.0 st
KiB Mem : 5846272 total, 52076 free, 1774192 used, 4020004 buff/cache
KiB Swap: 3145724 total, 3112900 free, 32824 used. 2953572 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7741 oracle 20 0 1258856 103504 99588 R 56.8 1.8 0:20.73 oracleorcl12c (LOCAL=NO)
7496 oracle 20 0 1258856 103704 99788 R 53.8 1.8 0:21.00 oracleorcl12c (LOCAL=NO)
8185 root 20 0 107948 708 636 R 44.9 0.0 0:09.62 dd if=/dev/sda of=/dev/null bs=64k
36 root 20 0 0 0 0 S 9.3 0.0 0:46.15 [kswapd0]
2929 oracle -2 0 1255940 57880 54752 S 5.0 1.0 205:01.93 ora_vktm_orcl12c
4269 oracle 20 0 3747584 769940 8408 S 2.0 13.2 836:09.97 /home/oracle/java/jdk1.8.0_121/bin/java -Xmx1024m -Xms256m -jar /home+
3858 oracle 20 0 563132 24332 12100 S 1.3 0.4 0:25.40 /usr/libexec/gnome-terminal-server
3422 oracle 20 0 1642060 168616 27108 S 1.0 2.9 641:29.60 /usr/bin/gnome-shell
4829 root 20 0 116152 3048 1916 S 0.7 0.1 0:05.94 -bash
1 root 20 0 125332 4528 3228 S 0.3 0.1 0:49.25 /usr/lib/systemd/systemd --switched-root --system --deserialize 20
488 root 20 0 46200 14584 14292 S 0.3 0.2 0:31.95 /usr/lib/systemd/systemd-journald
952 root 20 0 403996 1552 1452 S 0.3 0.0 4:08.80 /usr/sbin/VBoxService --pidfile /var/lock/subsys/vboxadd-service
1297 root 20 0 324544 94580 8444 S 0.3 1.6 502:15.04 /usr/bin/Xorg :0 -background none -noreset -audit 4 -verbose -auth /r+
2175 oracle 20 0 215284 13092 8164 S 0.3 0.2 0:21.36 /u01/app/oracle/product/12.2/db_1/bin/tnslsnr LISTENER -inherit
3339 oracle 20 0 216784 2192 1828 S 0.3 0.0 15:22.04 /usr/bin/VBoxClient --draganddrop
3374 oracle 20 0 26500 2680 2248 S 0.3 0.0 0:03.68 /bin/dbus-daemon --config-file=/etc/at-spi2/accessibility.conf --nofo+
3573 oracle 20 0 1273880 435328 428932 S 0.3 7.4 9:56.27 ora_p000_orcl12c
3911 oracle 20 0 1274960 521472 515180 S 0.3 8.9 5:38.38 ora_p003_orcl12c
4127 oracle 20 0 1267512 335404 327580 S 0.3 5.7 38:21.01 ora_cjq0_orcl12c
6697 root 20 0 157972 4584 3672 S 0.3 0.1 0:00.23 top -c -n 45
7288 oracle 20 0 116172 3168 1920 S 0.3 0.1 0:00.01 bash
17933 oracle 20 0 157872 4544 3652 R 0.3 0.1 0:02.41 top -c
27948 root 20 0 0 0 0 S 0.3 0.0 0:00.46 [kworker/0:2]
2 root 20 0 0 0 0 S 0.0 0.0 0:00.14 [kthreadd]
3 root 20 0 0 0 0 S 0.0 0.0 1:14.57 [ksoftirqd/0]
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:0H]
7 root 20 0 0 0 0 S 0.0 0.0 4:33.32 [rcu_sched]
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcu_bh]
[oracle@vbgeneric psnapper-master]$
}}}
!!! run3: 1 CPU on PDB2
{{{
select s.con_id, p.spid, s.sid, s.serial#
from v$process p, v$session s
where p.addr = s.paddr
and p.spid in (30493);
CON_ID SPID SID SERIAL#
---------- ------------------------ ---------- ----------
4 30493 268 4893
[oracle@vbgeneric psnapper-master]$ top -c
top - 15:35:48 up 6 days, 35 min, 10 users, load average: 3.28, 5.18, 5.86
Tasks: 298 total, 7 running, 291 sleeping, 0 stopped, 0 zombie
%Cpu(s): 52.5 us, 32.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 14.7 si, 0.0 st
KiB Mem : 5846272 total, 55168 free, 1745844 used, 4045260 buff/cache
KiB Swap: 3145724 total, 3112900 free, 32824 used. 2982292 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
30493 oracle 20 0 1258852 103692 99764 R 64.4 1.8 0:36.86 oracleorcl12c (LOCAL=NO)
31417 root 20 0 107948 644 576 R 46.4 0.0 0:01.69 dd if=/dev/sda of=/dev/null bs=64k
31408 root 20 0 157856 14492 5044 R 43.8 0.2 0:02.31 python ./psn -g pid,cmdline,state,syscall,state,wchan,state,read_byte+
36 root 20 0 0 0 0 S 8.8 0.0 1:14.00 [kswapd0]
2929 oracle -2 0 1255940 57880 54752 S 2.9 1.0 205:18.93 ora_vktm_orcl12c
4269 oracle 20 0 3747584 770348 8648 S 2.6 13.2 837:10.01 /home/oracle/java/jdk1.8.0_121/bin/java -Xmx1024m -Xms256m -jar /home+
3422 oracle 20 0 1642060 170320 27108 S 1.3 2.9 641:46.90 /usr/bin/gnome-shell
3858 oracle 20 0 563132 24580 12100 S 1.0 0.4 0:35.60 /usr/libexec/gnome-terminal-server
7 root 20 0 0 0 0 S 0.7 0.0 4:35.21 [rcu_sched]
1297 root 20 0 324544 94580 8444 S 0.7 1.6 502:24.78 /usr/bin/Xorg :0 -background none -noreset -audit 4 -verbose -auth /r+
1 root 20 0 125332 4528 3228 S 0.3 0.1 0:49.70 /usr/lib/systemd/systemd --switched-root --system --deserialize 20
9 root 20 0 0 0 0 R 0.3 0.0 2:24.36 [rcuos/0]
18 root 20 0 0 0 0 S 0.3 0.0 2:06.48 [rcuos/1]
952 root 20 0 403996 1552 1452 S 0.3 0.0 4:09.20 /usr/sbin/VBoxService --pidfile /var/lock/subsys/vboxadd-service
2175 oracle 20 0 215284 14624 9692 S 0.3 0.3 0:23.71 /u01/app/oracle/product/12.2/db_1/bin/tnslsnr LISTENER -inherit
2977 oracle 20 0 1255940 62088 58960 R 0.3 1.1 28:51.03 ora_vkrm_orcl12c
2982 oracle 20 0 1256452 57896 54788 S 0.3 1.0 0:31.42 ora_svcb_orcl12c
2992 oracle 20 0 1259072 74708 68620 S 0.3 1.3 7:15.63 ora_dia0_orcl12c
4829 root 20 0 116152 3048 1916 S 0.3 0.1 0:09.09 -bash
5425 root 20 0 0 0 0 S 0.3 0.0 0:00.32 [kworker/0:0]
22574 root 20 0 0 0 0 S 0.3 0.0 0:00.19 [kworker/1:0]
30305 oracle 20 0 116172 3168 1920 S 0.3 0.1 0:00.03 bash
2 root 20 0 0 0 0 S 0.0 0.0 0:00.14 [kthreadd]
3 root 20 0 0 0 0 S 0.0 0.0 1:14.99 [ksoftirqd/0]
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:0H]
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcu_bh]
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcuob/0]
11 root rt 0 0 0 0 S 0.0 0.0 0:00.86 [migration/0]
[oracle@vbgeneric psnapper-master]$
}}}
!! bubble by Kstack,Wchan,State,Cmd
* The entire 1-hour workload is mostly CPU bound
** The State is mostly "Running (ON CPU)", which is the red color at the bottom
[img(100%,100%)[https://i.imgur.com/yyWLcey.png]]
* Clicking on "Disk Uninterruptible" will highlight the respective areas across Kstack, Wchan, State, and Cmd, which you can hover over to get more data
[img(100%,100%)[https://i.imgur.com/5uyuXwU.png]]
!! topn Kstack and Cmdline
* Here's another way of presenting the same bubble chart data. The topmost entry is the Kstack or Cmdline with the highest number of Avg Threads
* When you click on either the Kstack or Cmdline legend it highlights all affected Kstack or Cmdline contents, which is useful for exploratory analysis
** In this example, dd is selected and it highlights the top Kstacks. The numbers correspond to the sum of Avg Threads spent on a particular permutation of "Cmdline, State, Arg0, Arg1, Kstack, Syscall, Syscall Id, Wchan"
[img(100%,100%)[ https://i.imgur.com/f3rPUp4.png ]]
* Hovering on the numbers will show the Kstack
[img(100%,100%)[ https://i.imgur.com/IjQWedP.png ]]
!! psnapper by OTHER breakdown
The "topn Kstack and Cmdline" view is best used with "psnapper by OTHER breakdown" to find the top resource consumers. Below are some example use cases.
!!! /sbin/rngd
* Here I'm highlighting the Cmdlines that are consuming more than half a CPU (which is a lot for a 2-CPU VM)
** You can see that I've got processes/programs like /sbin/rngd, updatedb, mandb, and even java that are suspiciously high in CPU
[img(100%,100%)[https://i.imgur.com/o4zgKAX.png]]
* Then I go back to "topn Kstack and Cmdline", highlight the /sbin/rngd, and hover on the Kstack
** So for some reason this program's CPU consumption is getting amplified by the overall high load of the VM (https://access.redhat.com/solutions/3098661), or it has something to do with the scripts I'm running; either way I'm only finding this out now thanks to psnapper
[img(100%,100%)[https://i.imgur.com/Bjo1wF5.png]]
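If you want to confirm what rngd is actually costing outside of psnapper, a quick check (a sketch; pidstat comes from the sysstat package):
{{{
systemctl status rngd                 # is it running, and with what flags
pidstat -u -p $(pgrep -x rngd) 5 3    # per-process CPU%, sampled 3 times at 5s intervals
}}}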
!!! rcu_sched kthread - CPU stalls
* The "Null" Cmdline entries are also interesting; here Null is clicked on the legend, which highlights the Kstack entries above
[img(100%,100%)[ https://i.imgur.com/HxVtqnf.png ]]
* Then when you hover on the numbers you'll see "rcu_" entries
[img(100%,100%)[ https://i.imgur.com/KdseUV9.png ]]
* On the "bubble Kstack,Wchan,State,Cmd" tab you'll also see the same when "Null" Cmdline is clicked
[img(100%,100%)[https://i.imgur.com/U4so9JD.png]]
* Apparently the "rcu_" entries are related to CPU stalls (search for "rcu_sched kthread starved for jiffies")
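The stall warnings themselves land in the kernel log, so you can confirm them there (a sketch):
{{{
dmesg | grep -Ei "rcu_sched.*(stall|starved)"
}}}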
.
http://blog.analytixware.com/2014/03/packaging-your-shiny-app-as-windows.html
http://bitreading.com/deltafeed/#home
http://www.oaktable.net/content/oracle-exadata-pam-and-temporary-user-lockout
* to clear the lockout, run ''pam_tally2 -u oracle -r''
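A slightly fuller sketch: check the tally first, then reset it (pam_tally2 needs root):
{{{
pam_tally2 -u oracle        # show the current failure count and lock status
pam_tally2 -u oracle -r     # reset the counter so the account unlocks
}}}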
use this https://addons.mozilla.org/en-US/firefox/addon/tile-tabs/ and select "All Tabs - Vertical Grid"
<<showtoc>>
! From the super high level there's a workload spike during the week of June 12th to 16th
[img(90%,90%)[ http://i.imgur.com/wafvNqi.jpg ]]
! That period is a CPU spike for the whole week
[img(90%,90%)[ http://i.imgur.com/d3zR7zG.jpg ]]
! Panel view of ASH by Wait Class by Instance
[img(90%,90%)[ http://i.imgur.com/q8TGNaT.jpg ]]
! Panel view of time series SQL_ID by Instance
[img(90%,90%)[ http://i.imgur.com/Wu4CWYo.jpg ]]
http://blog.go-faster.co.uk/2018/08/parallel-execution-of-plsql.html
{{{
cd /usr/bin/
wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
}}}
http://stackoverflow.com/questions/37212894/which-cygwin-package-to-get-parallel-command <- this works
https://gist.github.com/scienceopen/e0295105a36039aa38ce936f39b26301/
! video
GNU Parallel videos https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
https://www.gnu.org/software/parallel/parallel_tutorial.html
http://ole.tange.dk/kontakt.html
http://moo.nac.uci.edu/~hjm/parsync/
http://stackoverflow.com/questions/24058544/speed-up-rsync-with-simultaneous-concurrent-file-transfers
https://www.linkedin.com/pulse/20140731160907-45133456-tech-tip-running-rsync-over-multiple-threads
http://askubuntu.com/questions/487207/trying-to-use-gnu-parallel-with-sed
http://unix.stackexchange.com/questions/229234/join-multiple-sed-commands-in-one-script-for-processing-csv-file
! possibly wrong results
http://unix.stackexchange.com/questions/32162/using-parallel-to-process-unique-input-files-to-unique-output-files
http://stackoverflow.com/questions/30760449/gnu-parallel-redirect-output-to-a-file-with-a-specific-name
http://askubuntu.com/questions/487207/trying-to-use-gnu-parallel-with-sed
http://stackoverflow.com/questions/23188309/gnu-parallel-with-sed-wrong-arguement-as-file
http://qaoverflow.com/question/gnu-parallel-produces-different-output-compared-to-while-loop-with-this-sed-command/
http://stackoverflow.com/questions/38053503/replacement-string-not-working-in-gnu-parallel
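A minimal sketch of the unique-input-to-unique-output pattern those threads converge on ({} is the input file, {.} is the input minus its extension, and -k/--keep-order emits output in input order):
{{{
# each input csv gets its own output file, so parallel jobs never interleave writes
parallel -k "sed 's/|/,/g' {} > {.}_clean.csv" ::: *.csv
}}}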
parallel vs serial direct path on latency
https://www.evernote.com/shard/s48/sh/748eee21-38f0-4d15-9d3f-9ea492f6b797/adb9095d22ccf2b84c1e289c22cc87a7
{{{
* The parallel_max_servers value is capped at processes-15 (this is true for versions prior to 11.2.0.2 as well)
* The formula for default value is
parallel_max_servers = min( <show parameter processes> -15 , <computed parallel_max_servers> )
* The fudge for processes parameter could be different
Adjusting the default value of parameter parallel_max_servers from 160 to 135 due to the value of parameter processes (150) <-- 150-135=15
Adjusting the default value of parameter parallel_max_servers from 960 to 216 due to the value of parameter processes (300) <-- 300-216=84
* If parallel_max_servers is set to a lower value than the processes parameter
then parallel_max_servers and parallel_servers_target are the same
* If parallel_max_servers is set to a higher value than the processes parameter
then parallel_max_servers and parallel_servers_target are the same
but the formula = min( <show parameter processes> -15 , <computed parallel_max_servers> ) will be applied
}}}
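A quick way to see the cap in effect on your own instance (a sketch, run as sysdba):
{{{
sqlplus -s / as sysdba <<'EOF'
show parameter processes
show parameter parallel_max_servers
show parameter parallel_servers_target
EOF
}}}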
Working with parquet files, updates in Hive https://www.youtube.com/watch?v=n2INy9fNr1k
https://parse.com/docs/js/api/classes/Parse.Query.html#methods_find
https://www.parse.com/apps/foodhealthyornot/collections
http://hadooptutorial.info/partitioning-in-hive/
https://github.com/vaquarkhan/vk-wiki-notes/wiki/What-is-the-difference-between-partitioning-and-bucketing-a-table-in-Hive-%3F
https://stackoverflow.com/questions/19128940/what-is-the-difference-between-partitioning-and-bucketing-a-table-in-hive
https://www.linkedin.com/pulse/hive-partitioning-bucketing-examples-gaurav-singh/
https://stackoverflow.com/questions/13815179/how-to-update-drop-a-hive-partition
yahoo engineering 200+K partitions per day http://www.odbms.org/blog/2014/09/interview-mithun-radhakrishnan/
http://www.oracle.com/ocom/groups/public/@otn/documents/webcontent/1965433.pdf
<<showtoc>>
! how credit card processing works
http://hccpw.bigbinary.com/
http://gavinsoorma.com/2009/09/11g-pending-and-published-statistics/
HOWTO Gathering and Publishing Statistics Independently
https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r1/prod/perform/gathstats/gathstats.htm
<<showtoc>>
! summary
<<<
Here's the summary of the responses:
* Tuning PS Queries, particularly the ones coming from nVision
** Generally nVision is ad hoc in nature and it may affect other workloads
** This document is old but essentially still true on newer versions - www.go-faster.co.uk/pres/nVision_Performance_Tuning.pps
* Batch processing/month-end cycles
** This is a matter of tuning the workload windows and other specific things like http://www.go-faster.co.uk/BatchModel.pdf, http://www.go-faster.co.uk/local_write_wait.pdf
* Statistics
** Generally the stats management has to be integrated into PeopleTools
** Here's a reference document http://blog.psftdba.com/2012/09/maintaining-optimizer-statistics-on.html -> http://www.go-faster.co.uk/gfcpsstats11.pdf
* Partitioning
** It starts with a range partition on fiscal_year and accounting_period, then possibly a list subpartition on ledger, currency, or something else depending on the data (see the DDL sketch after this summary)
** And here are the specific tables that need partitioning http://www.go-faster.co.uk/gfc_pspart.manual.pdf
<<<
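A hedged sketch of what that range+list scheme might look like (the table name, column list, and partition boundaries here are illustrative, not the real PS_LEDGER definition; the real ranges come from your fiscal calendar):
{{{
sqlplus -s / as sysdba <<'EOF'
-- illustrative only: range on fiscal_year+accounting_period, list subpartition on ledger
CREATE TABLE ledger_part_demo (
  fiscal_year        NUMBER(4)    NOT NULL,
  accounting_period  NUMBER(3)    NOT NULL,
  ledger             VARCHAR2(10) NOT NULL,
  posted_total_amt   NUMBER
)
PARTITION BY RANGE (fiscal_year, accounting_period)
SUBPARTITION BY LIST (ledger)
SUBPARTITION TEMPLATE (
  SUBPARTITION actuals VALUES ('ACTUALS'),
  SUBPARTITION budget  VALUES ('BUDGET'),
  SUBPARTITION others  VALUES (DEFAULT)
)
(
  PARTITION fy2015_p01 VALUES LESS THAN (2015, 2),
  PARTITION fy2015_p02 VALUES LESS THAN (2015, 3),
  PARTITION fy_rest    VALUES LESS THAN (MAXVALUE, MAXVALUE)
);
EOF
}}}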
<<<
I've been involved in a couple of PeopleSoft gigs, and usually these start out with firefighting.
But what if it's a later-stage implementation? And the customer is now thinking about scale. Are there any PeopleSoft-specific best practices or checklists that need to be done to optimize it further?
Some of these could be:
* Top tables that need to be partitioned
* Indexes that need to be dropped/partitioned or added
* PeopleSoft infrastructure-specific recommendations (move to 10GbE, etc.)
* Etc.
<<<
! thread1
<<<
Check to determine if they use nVision. It is an ad-hoc report writing feature that often is bundled with Peoplesoft.
The problem lies in the fact that end users can use it to generate SQL on the fly, with all the dangers that come with that.
Also note that Peoplesoft uses application logic to control referential integrity, rather than use database features like foreign keys.
This means that if the table data is loaded from external sources, it could violate the business rules embedded in the application,
or worse yet, defined in the control tables.
Finally, since Financials is often driven by data loaded from external sources, finding the appropriate time to gather statistics
can have a big impact. Sometimes, you get to a quarterly closing period before such issues are discovered.
<<<
! thread2
<<<
See attached doc and section “gather stats.” Unfortunately, application/user mandates how SQL is generated as Michael has already pointed out.
A few areas to focus on:
* PS Queries
* nVision
* Batch processing/month-end cycles
<<<
! thread3
<<<
RED_PAPER_-_PeopleSoft_Enterprise_Performance_on_Oracle_11g_Database.pdf
<<<
! thread4
<<<
I have some thoughts on this:
* Managing Oracle Table Partitioning in PSFT: http://www.go-faster.co.uk/gfc_pspart.manual.pdf
* nVision - old but essentially still true - http://www.go-faster.co.uk/pres/nVision_Performance_Tuning.pps - range partition on fiscal_year, accounting period, and then possibly list subpartition on ledger, or currency or something else depending on data
* There are various documents on stats management in App Engine batch on www.go-faster.co.uk
<<<
! thread5
<<<
* Managing Oracle Table Partitioning in PSFT: http://www.go-faster.co.uk/gfc_pspart.manual.pdf
* nVision - old but essentially still true - www.go-faster.co.uk/pres/nVision_Performance_Tuning.pps - range partition on fiscal_year, accounting period, and then possibly list subpartition on ledger, or currency or something else depending on data
... and I should have sent this for stats http://blog.psftdba.com/2012/09/maintaining-optimizer-statistics-on.html.
There are a lot of subtleties about stats. The stats management has to be integrated into PeopleTools. I would generally disagree with some of the things in the attached Word document.
<<<
! end
<<showtoc>>
! ps360
http://blog.psftdba.com/2016/04/ps360-utility-to-extract-and-present.html
! Improve and Performance Tune your Application Engine Programs (step by step)
http://psftpp.blogspot.com/2010/04/improve-and-performance-tune-your.html
https://www.evernote.com/shard/s48/sh/cfe26e1a-3a62-40a1-be32-2edd917a6841/7e45174716c8455b9d8398f8a2b5f567
! ch10 - peoplesoft for oracle dba
https://learning.oreilly.com/library/view/peoplesoft-for-the/9781430237075/Chapter10.html#ch10
! peoplesoft instrumentation for oracle rdbms
https://slideplayer.com/slide/2548565/
! peoplesoft on oci
PeopleSoft in the Cloud: Oracle Apps on Oracle Cloud Infrastructure https://www.youtube.com/watch?v=ZjniZvIlbWw
PeopleSoft Installation On Cloud (Oracle Free Trial) - Demo (marketplace) https://www.youtube.com/watch?v=MrgEculOuOs
! peoplesoft on exadata
https://www.oracle.com/assets/maa-wp-peoplesoft-on-exadata-349837.pdf
! peoplesoft ping
http://www.go-faster.co.uk/pres/psping.20040603.pps
Downloads/PeopleSoft_Ping_Whitepaper_-_110502.pdf
E-AS: What is PeopleTools PSPing? (Doc ID 619425.1)
! peoplesoft performance monitor
http://thesmartpanda.com/wp-content/uploads/2017/04/Performance_Monitor__RedPaper.pdf
! peoplesoft tracing
http://peoplesoft-toolbox.com/resources/CS-Tracing%20for%20PeopleSoft.pdf
http://peoplesoftlearnings.blogspot.com/search?q=tracing
http://peoplesoftlearnings.blogspot.com/2008/08/application-server-trace.html
[img(80%,80%)[https://i.imgur.com/KeGtfW2.png]]
! peoplecode tracing VIDEO
How to enable Peoplecode/SQL Trace in Peoplesoft https://www.youtube.com/watch?v=weY_QfLOIRk
! peoplesoft parallelism
http://peoplesoftconcept.blogspot.com/2014/02/parallel-processing-in-application-engine-in-peoplesoft.html
https://it.toolbox.com/question/parallel-processing-in-application-engine-peoplesoft-103013
https://it.toolbox.com/question/diagnosing-ci-performance-issue-020111
https://docs.oracle.com/cd/E51433_01/fscm92pbr2/eng/fscm/fare/task_SettingUpParallelProcessingforStatements-9f6808.html
http://www.peoplesoftjournal.com/2013/02/implementing-parallel-processing-in_5539.html
http://psdips.blogspot.com/2018/03/application-engine-parallel-processing.html
! documentation
!! PeopleTools 8.53: Application Engine - Oracle Docs
https://docs.oracle.com/cd/E39332_01/psft/acrobat/pt853tape-b0213.pdf
!! PeopleTools 8.53: PeopleSoft Process Scheduler - Oracle Docs
https://docs.oracle.com/cd/E39332_01/psft/acrobat/pt853tprs-b0213.pdf
!! PeopleTools 8.55: Application Designer Developer's - Oracle Docs
https://docs.oracle.com/cd/E66687_01/psft/acrobat/pt855tapd-b112016.pdf
!! RED PAPER Performance
https://www.evernote.com/shard/s48/sh/6bb096e8-e1d4-4769-9db9-08c3104dc978/69f08503e2b760eb99b303298cb729de
http://www.evernote.com/shard/s48/sh/a17710e8-988d-46ae-b4ed-bff0a4468780/b70ae3235f885f2e045fa3e98f8035a3
http://www.evernote.com/shard/s48/sh/a2620aa5-1fed-4dcc-a1b6-c72260d61873/391f47a472e18da6ee6ac9986f1e1f78
Tuning PostgreSQL for High Write Workloads https://www.youtube.com/watch?v=xrMbzHdPLKM
https://fritshoogland.wordpress.com/2014/12/15/oracle-database-operating-system-memory-allocation-management-for-pga/
https://fritshoogland.wordpress.com/2014/12/16/oracle-database-operating-system-memory-allocation-management-for-pga-part-2-oracle-11-2/
https://fritshoogland.wordpress.com/2014/12/17/oracle-database-operating-system-memory-allocation-management-for-pga-part-3-oracle-11-2-0-4-and-amm-quiz/
https://fritshoogland.wordpress.com/2014/12/18/oracle-database-operating-system-memory-allocation-management-for-pga-part-4-oracle-11-2-0-4-and-amm/
https://fritshoogland.wordpress.com/2014/12/19/reading-oracle-memory-dumps/
http://www.ora600.be/_memory_imm_mode_without_autosga+-+no+really+!+don't+resize+my+sga+!+I+mean+it+!
SGA Re-Sizes Occurring Despite AMM/ASMM Being Disabled (MEMORY_TARGET/SGA_TARGET=0) (Doc ID 1269139.1)
http://docs.oracle.com/cd/B10501_01/appdev.920/a96624/05_colls.htm
http://oracle-base.com/articles/8i/collections-8i.php
http://oracle-base.com/articles/9i/bulk-binds-and-record-processing-9i.php
https://dioncho.wordpress.com/?s=10261 <-- also contains nice test case script similar to frits
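Tying the collections and bulk-bind links above together, a minimal sketch of BULK COLLECT with LIMIT; the LIMIT caps how much PGA the collection consumes per fetch (which is also why these links sit next to the PGA memory posts). Assumes nothing beyond access to user_tables.
{{{
DECLARE
   TYPE t_names IS TABLE OF user_tables.table_name%TYPE;
   l_names t_names;
   CURSOR c IS SELECT table_name FROM user_tables;
BEGIN
   OPEN c;
   LOOP
      -- fetch at most 100 rows at a time to bound collection memory
      FETCH c BULK COLLECT INTO l_names LIMIT 100;
      FOR i IN 1 .. l_names.COUNT LOOP
         DBMS_OUTPUT.PUT_LINE(l_names(i));
      END LOOP;
      EXIT WHEN c%NOTFOUND;
   END LOOP;
   CLOSE c;
END;
/
}}}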
http://www.oracle.com/technetwork/database/features/availability/exadata-consolidation-522500.pdf
Why use Photoshop for web design
http://www.lynda.com/Photoshop-tutorials/Why-use-Photoshop-web-design/145211/166597-4.html
http://www.quora.com/How-do-or-did-people-use-Photoshop-for-web-design
http://zlib.net/pigz/
''demo'' https://dl.dropbox.com/u/4131944/Screencasts/pigz.mp4
http://blog.labrat.info/20100429/using-xargs-to-do-parallel-processing/
''regular gzip and tar''
http://photos.gocoho.net/howto/tar-gzip/
http://www.abbeyworkshop.com/howto/unix/nix_gtar/
<<<
The problem with tar on proprietary Unix platforms is that it is limited and differs in usage (unless GNU tar is installed, which is often available from additional sources but not native to the OS). gzip seems to be portable.
<<<
''other parallel commands using GNU parallel''
http://www.rankfocus.com/use-cpu-cores-linux-commands/
{{{
cat bigfile.bin | bzip2 --best > compressedfile.bz2
Do this:
cat bigfile.bin | parallel --pipe --recend '' -k bzip2 --best > compressedfile.bz2
}}}
{{{
grep pattern bigfile.txt
do this:
cat bigfile.txt | parallel --pipe grep 'pattern'
or this:
cat bigfile.txt | parallel --block 10M --pipe grep 'pattern'
}}}
{{{
cat rands20M.txt | awk '{s+=$1} END {print s}'
do this!
cat rands20M.txt | parallel --pipe awk \'{s+=\$1} END {print s}\' | awk '{s+=$1} END {print s}'
}}}
{{{
wc -l bigfile.txt
Do this:
cat bigfile.txt | parallel --pipe wc -l | awk '{s+=$1} END {print s}'
}}}
{{{
sed s^old^new^g bigfile.txt
Do this:
cat bigfile.txt | parallel --pipe sed s^old^new^g
}}}
<<showtoc>>
! CSS, jquery
http://docsbeta.pinegrow.com/enqueuing-scripts-and-stylesheets/#The_jQuery_case
!! pinegrow
https://pinegrow.com/#buy
https://docs.pinegrow.com/contact-us/
https://pinegrow.com/docs/licensing-questions/students-teachers-ngos-and-npos/
https://www.slant.co/options/8442/alternatives/~pinegrow-alternatives
!!! pinegrow competitors
https://tidycms.com/
https://davidwalsh.name/website-builders-dont-suck
https://grapesjs.com/
https://www.coffeecup.com/bootstrap-builder/
https://wappler.io/features
https://mobirise.com/
https://www.quora.com/Is-there-a-visual-website-builder-with-code-export-option
https://colorlib.com/wp/free-drag-and-drop-website-builder/
https://www.google.com/search?ei=KXyxXNS6L-jy5gL1wragCA&q=drag+and+drop+web+design+with+export&oq=drag+and+drop+web+design+with+export&gs_l=psy-ab.3...63821.65812..65992...0.0..0.168.1292.6j6......0....1..gws-wiz.......0i71j0j0i22i30j33i22i29i30j33i160.c9zotKogkS8
https://visualstudio.microsoft.com/vs/features/node-js/
https://onextrapixel.com/10-best-alternatives-to-adobe-dreamweaver/
https://forum.bubble.is/t/import-export-bubble-code/38150
https://forum.bubble.is/t/is-it-possible-to-export-your-bubble-made-app-to-a-different-or-multiple-servers/35960/3
https://forum.bubble.is/t/is-this-possible-to-download-my-website-code/15410/4
https://forum.bubble.is/t/migration-of-code-data-out-of-bubble/1126
https://www.google.com/search?q=bubble.is+export+code&oq=export+bubble.is+&aqs=chrome.2.69i57j0l2.7267j1j1&sourceid=chrome&ie=UTF-8
!! pinegrow HOWTO
https://pinegrow.com/docs/pages/pages.html
intro to pinegrow https://www.youtube.com/watch?time_continue=41&v=ITSMOYJ6usA
!!! pinegrow and flask/django
https://www.reddit.com/r/flask/comments/5clymc/has_anyone_used_adobe_muse_and_flask_together/
https://forum.pinegrow.com/t/django-tag-jinja2/487/3
https://forum.pinegrow.com/t/help-or-advice-flask-and-pinegrow/2869
http://docsbeta.pinegrow.com/editing-php-html-templates/
!!! pinegrow and angularjs
http://docsbeta.pinegrow.com/creating-a-simple-angularjs-app/
!!! pinegrow and php
https://forum.pinegrow.com/t/an-idea-for-pinegrow-3-an-php/1138/8
https://forum.pinegrow.com/t/getting-php-outputs-in-pinegrow/1817/7
http://blogs.oracle.com/jimlaurent/entry/building_a_solaris_11_repository
https://sqlandplsql.com/2013/02/18/oracle-cursor-examples/ <- good stuff
PL/SQL multiple values select into https://www.google.com/search?q=PL%2FSQL+multiple+values+select+into&oq=pl%2Fsql+mu&aqs=chrome.0.69i59j69i58j69i57j0l3.3831j0j1&sourceid=chrome&ie=UTF-8
http://www.rebellionrider.com/pl-sql-tutorials/how-to-initialize-variable-using-select-into-statement-in-pl-sql-by-manish-sharma.htm#.WWanmMZKWkI
ORA-01422: fetch returns more than requested number of rows https://www.tekstream.com/resources/ora-01422-exact-fetch-returns-more-than-requested-number-of-rows/
PL/SQL FOR LOOP
http://stackoverflow.com/questions/6584966/optimize-the-speed-on-performing-select-query-in-a-big-loop
Working with Cursors http://www.oracle.com/technetwork/issue-archive/2013/13-mar/o23plsql-1906474.html
http://www.adp-gmbh.ch/ora/plsql/loops.html
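A minimal sketch pulling the links above together: SELECT INTO must return exactly one row (otherwise ORA-01422 / TOO_MANY_ROWS), while a cursor FOR loop handles any number of rows. Uses only user_tables; everything else is illustrative.
{{{
DECLARE
   l_name user_tables.table_name%TYPE;
BEGIN
   BEGIN
      SELECT table_name INTO l_name FROM user_tables;   -- >1 row raises ORA-01422
   EXCEPTION
      WHEN TOO_MANY_ROWS THEN
         DBMS_OUTPUT.PUT_LINE('more than one row - use a cursor loop instead');
   END;

   FOR r IN (SELECT table_name FROM user_tables) LOOP   -- no such problem here
      DBMS_OUTPUT.PUT_LINE(r.table_name);
   END LOOP;
END;
/
}}}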
<<showtoc>>
! reference
https://docs.oracle.com/cloud/latest/db112/LNPLS/controlstatements.htm#LNPLS004
! define VARIABLE to pass on SQL
{{{
-- g_retention is a SQL*Plus bind variable: set it in PL/SQL, reuse it in later SQL
VARIABLE g_retention NUMBER
DEFINE p_default = 8
DEFINE p_max = 100
SET VERIFY OFF
DECLARE
   v_default NUMBER(3) := &p_default;
   v_max     NUMBER(3) := &p_max;
BEGIN
   -- AWR retention in days; RETENTION is an INTERVAL, so the date arithmetic
   -- converts it to a number (the *86400 and /60/60/24 cancel out to days)
   select
   ((TRUNC(SYSDATE) + RETENTION - TRUNC(SYSDATE)) * 86400)/60/60/24 AS RETENTION_DAYS
   into :g_retention
   from dba_hist_wr_control
   where dbid in (select dbid from v$database);
   -- if retention is above the default, open the window up to p_max days,
   -- otherwise stick to the p_default days
   if :g_retention > v_default then
      :g_retention := v_max;
   else
      :g_retention := v_default;
   end if;
END;
/
-- ... then at the SQL level:
WHERE
to_date(tm,'MM/DD/YY HH24:MI:SS') > sysdate - :g_retention
}}}
! pass parameter, IF ELSE on execute immediate
PL/SQL 101 : Understanding Ref Cursors https://community.oracle.com/thread/888365
https://oracle-base.com/articles/misc/using-ref-cursors-to-return-recordsets
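A hedged sketch of the heading above: choose the SQL text with IF/ELSE, pass the parameter as a bind, and run it with EXECUTE IMMEDIATE ... INTO (for multi-row results, the ref cursor links above use OPEN ... FOR ... USING instead). &p_days and the queries are made up for illustration.
{{{
DECLARE
   l_days NUMBER := &p_days;   -- substitution variable, prompted by SQL*Plus
   l_sql  VARCHAR2(200);
   l_cnt  NUMBER;
BEGIN
   IF l_days > 30 THEN
      l_sql := 'SELECT COUNT(*) FROM user_tables WHERE last_analyzed < SYSDATE - :1';
   ELSE
      l_sql := 'SELECT COUNT(*) FROM user_tables WHERE last_analyzed >= SYSDATE - :1';
   END IF;
   EXECUTE IMMEDIATE l_sql INTO l_cnt USING l_days;   -- :1 bound to l_days
   DBMS_OUTPUT.PUT_LINE('tables: ' || l_cnt);
END;
/
}}}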
https://databaseline.bitbucket.io/unit-testing-plsql-code/
CodeTalk Series: Unit Testing PL SQL Code in the Real World https://www.youtube.com/watch?v=1qAZvS5rvyY
pl/sql unit test https://www.youtube.com/results?search_query=pl%2Fsql+unit+test+
http://www.thatjeffsmith.com/archive/2014/04/unit-testing-your-plsql-with-oracle-sql-developer/
https://stackoverflow.com/questions/32099621/pl-sql-and-sql-script-in-one-sqlfile-with-liquibase/36134227
https://es.slideshare.net/StevenFeuerstein/unit-testing-oracle-plsql-code-utplsql-excel-and-more
CHAPTER 17: Unit Testing With utPLSQL - Oracle and PL/SQL Recipes: A Problem-Solution Approach https://www.safaribooksonline.com/library/view/oracle-and-plsql/9781430232070/
C H A P T E R 5 - PL/SQL Unit Testing - Expert PL/SQL Practices for Oracle Developers and DBAs https://www.safaribooksonline.com/library/view/expert-plsql-practices/9781430234852/Chapter05.html#ch5
CHAPTER 8 Test, Test, Test, and Test Again - Beginning PL/SQL: From Novice to Professional https://www.safaribooksonline.com/library/view/beginning-plsql-from/9781590598825/Chapter08.html
Resources for Developing PL/SQL Expertise - Oracle PL/SQL for DBAs https://www.safaribooksonline.com/library/view/oracle-plsql-for/0596005873/pr03s05.html
Chapter 20. Managing PL/SQL Code - Oracle PL/SQL Programming, 4th Edition https://www.safaribooksonline.com/library/view/oracle-plsql-programming/0596009771/ch20.html
Testing PL/SQL Programs - Oracle PL/SQL Programming, Third Edition https://www.safaribooksonline.com/library/view/oracle-plsql-programming/0596003811/ch19s04.html
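Minimal utPLSQL v3 sketch to go with the unit testing links above (annotation syntax per utplsql.org); the package under test, demo_math.add_num, is hypothetical.
{{{
CREATE OR REPLACE PACKAGE test_demo_math AS
   --%suite(demo_math)

   --%test(adds two numbers)
   PROCEDURE add_two_numbers;
END test_demo_math;
/
CREATE OR REPLACE PACKAGE BODY test_demo_math AS
   PROCEDURE add_two_numbers IS
   BEGIN
      ut.expect(demo_math.add_num(1, 2)).to_equal(3);
   END add_two_numbers;
END test_demo_math;
/
-- run: EXEC ut.run('test_demo_math')
}}}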
<<showtoc>>
! business
@@How to Start and Run A Consulting Business http://pluralsight.com/training/courses/TableOfContents?courseName=start-run-consulting-business@@
http://beta.pluralsight.com/courses/want-to-be-entrepreneur
Writing a Business Plan http://www.pluralsight.com/courses/writing-business-plan
Crowdfunding Fundamentals http://www.pluralsight.com/courses/crowdfunding-fundamentals
Investment Crowdfunding Fundamentals http://www.pluralsight.com/courses/investment-crowdfunding-fundamentals
Best Practices for Project Estimation http://www.pluralsight.com/courses/project-estimation-best-practices
!! financial modeling
https://app.pluralsight.com/library/courses/financial-modeling-business-plan/table-of-contents
! career
Communications: How to Talk, Write, Present, and Get Ahead! http://www.pluralsight.com/courses/communication-skills
http://www.pluralsight.com/courses/starting-running-user-group
Build Your Career with Michael Lopp http://www.pluralsight.com/courses/build-your-career-michael-lopp
http://beta.pluralsight.com/courses/career-survival-strategies-4devs
Learning Technology in the Information Age http://www.pluralsight.com/courses/learning-technology-information-age
Human Behavior for Technical People http://pluralsight.com/training/courses/TableOfContents?courseName=human-behavior-for-technical-people
http://www.pluralsight.com/author/jason-alba
The Dark Side of Technology Careers http://www.pluralsight.com/courses/technology-careers-dark-side
! leadership
Introduction to Leadership and Management for Developers http://www.pluralsight.com/courses/introduction-leadership-management-developers
Leadership: Getting Started http://www.pluralsight.com/courses/leadership-getting-started
Management Strategies that will Increase Productivity Today http://www.pluralsight.com/courses/management-strategies-increase-productivity
! programming
@@http://www.pluralsight.com/courses/learning-program-better-programmer@@
http://www.pluralsight.com/courses/learning-programming-javascript
http://www.pluralsight.com/courses/learning-programming-abstractions-python
! logic and algorithms
Algorithms and Data Structures - Part 1 http://www.pluralsight.com/courses/ads-part1
Algorithms and Data Structures - Part 2 http://www.pluralsight.com/courses/ads2
http://www.pluralsight.com/courses/math-for-programmers
http://www.pluralsight.com/courses/refactoring-fundamentals
http://www.pluralsight.com/courses/provable-code
http://www.pluralsight.com/courses/writing-clean-code-humans
http://www.pluralsight.com/courses/better-software-through-measurement
! enterprise architecture
Patterns for Building Distributed Systems for The Enterprise http://www.pluralsight.com/courses/cqrs-theory-practice
The Elements of Distributed Architecture http://www.pluralsight.com/courses/eda
WCF For Architects http://www.pluralsight.com/courses/wcf-for-architects
Optimizing and Managing Distributed Systems on AWS http://www.pluralsight.com/courses/deploying-highly-available-distributed-systems-aws-part2
! hardware
Understanding Server Hardware http://pluralsight.com/training/courses/TableOfContents?courseName=server-hardware
! R
@@http://www.pluralsight.com/courses/r-programming-fundamentals@@
! tableau
@@Enterprise Business Intelligence with Tableau Server http://pluralsight.com/training/courses/TableOfContents?courseName=enterprise-business-intelligence-tableau-server@@
Big Data Analytics with Tableau pluralsight.com/training/courses/TableOfContents?courseName=big-data-analytics-tableau
http://www.pluralsight.com/courses/data-visualization-using-tableau-public
http://www.pluralsight.com/courses/data-analysis-fundamentals-tableau
http://www.pluralsight.com/courses/big-data-analytics-tableau
http://www.pluralsight.com/courses/enterprise-business-intelligence-tableau-server
http://www.pluralsight.com/courses/business-dashboard-fundamentals
http://www.pluralsight.com/courses/cloud-business-intelligence
! oracle
Oracle PL/SQL Fundamentals - Part 1 http://pluralsight.com/training/courses/TableOfContents?courseName=oracle-plsql-fundamentals
Oracle PL/SQL Fundamentals - Part 2 http://pluralsight.com/training/courses/TableOfContents?courseName=oracle-plsql-fundamentals-part2
Oracle PL/SQL: Transactions, Dynamic SQL & Debugging http://www.pluralsight.com/courses/oracle-plsql-transactions-dynamic-sql-debugging
https://app.pluralsight.com/library/courses/oracle-plsql-working-collections
http://app.pluralsight.com/courses/oracle-triggers
http://app.pluralsight.com/courses/oracle-data-developers
http://app.pluralsight.com/courses/oracle-data-dba
http://app.pluralsight.com/author/pankaj-jain <- others here
! 12c
@@http://pluralsight.com/training/courses/TableOfContents?courseName=oracle-database-12c-fundamentals@@
http://pluralsight.com/training/courses/TableOfContents?courseName=oracle-database-12c-installation-upgrade
http://pluralsight.com/training/courses/TableOfContents?courseName=oracle-database-12c-disaster-recovery
http://app.pluralsight.com/courses/oracle-database-12c-performance-tuning-optimization
! Tuning
http://www.pluralsight.com/courses/oracle-database-12c-performance-tuning-optimization
http://pluralsight.com/training/courses/TableOfContents?courseName=oracle-performance-tuning-developers
! SQL
http://pluralsight.com/training/courses/TableOfContents?courseName=optimizing-sql-queries-oracle
http://pluralsight.com/training/courses/TableOfContents?courseName=adv-sql-queries-oracle-sql-server
http://app.pluralsight.com/author/scott-hecht <- others here
http://www.pluralsight.com/courses/sql-data-wrangling-oracle-tables
http://pluralsight.com/training/courses/TableOfContents?courseName=intro-dates-times-intervals-oracle
http://pluralsight.com/training/Courses/TableOfContents/date-time-fundamentals
! database design, data modeling
http://pluralsight.com/training/Courses/TableOfContents/relational-database-design
http://pluralsight.com/training/courses/TableOfContents?courseName=logical-physical-modeling-analytical-applications
http://pluralsight.com/training/courses/TableOfContents?courseName=building-web-apps-services-aspdotnet-ef-webapi&highlight=
http://pluralsight.com/training/courses/TableOfContents?courseName=querying-entity-framework&highlight=
http://pluralsight.com/training/courses/TableOfContents?courseName=efintro-models&highlight=
http://pluralsight.com/training/courses/TableOfContents?courseName=efarchitecture
http://pluralsight.com/training/courses/TableOfContents?courseName=efintro-models
http://pluralsight.com/training/courses/TableOfContents?courseName=cqrs-theory-practice
http://pluralsight.com/training/courses/TableOfContents?courseName=entity-framework5-getting-started
! ORM (Object-Relational Mapping)
http://www.pluralsight.com/courses/querying-entity-framework
http://www.pluralsight.com/courses/efintro-models
http://www.pluralsight.com/courses/efarchitecture
http://www.pluralsight.com/courses/hibernate-introduction
http://www.pluralsight.com/courses/dotnet-micro-orms-introduction
http://www.pluralsight.com/courses/core-data-fundamentals
! database tuning
@@Play by Play: Database Tuning http://www.pluralsight.com/courses/play-by-play-rob-sullivan@@
! UX
http://www.pluralsight.com/tag/ux-design?pageSize=48&sort=new
http://www.pluralsight.com/tag/ui-design?pageSize=48&sort=new
UI Architecture http://www.pluralsight.com/courses/web-ui-architecture
UX Engineering Process http://www.pluralsight.com/courses/lean-front-end-engineering
Hacking the User Experience / UX for Developers http://www.pluralsight.com/courses/hacking-user-experience
! Mockups
Designing Prototypes for Websites in Balsamiq Mockups http://www.pluralsight.com/courses/designing-prototypes-websites-balsamiq-mockups-2115
Developing Rapid Interactive Prototypes in OmniGraffle http://www.pluralsight.com/courses/developing-rapid-interactive-prototypes-omnigraffle-1959
! Front End Web Development
@@Front End Web Development: Get Started http://www.pluralsight.com/courses/front-end-web-development-get-started@@
Introduction to Web Development http://www.pluralsight.com/courses/web-development-intro
Front End Web Development Career Kickstart http://www.pluralsight.com/courses/front-end-web-development-career-kickstart
Front-End Web Development Quick Start With HTML5, CSS, and JavaScript http://www.pluralsight.com/courses/front-end-web-app-html5-javascript-css
! jQuery
Introduction to JavaScript & jQuery http://www.pluralsight.com/courses/introduction-javascript-jquery
jQuery fundamentals
jQuery: Getting Started http://www.pluralsight.com/courses/jquery-getting-started
jQuery Plugins & jQuery UI http://www.pluralsight.com/courses/jquery-plugins-jquery-ui
! javascript
HTML fundamentals
Introduction to CSS
@@sublime text 3 from scratch@@
@@using the chrome developer tools@@
@@webstorm fundamentals http://www.pluralsight.com/courses/webstorm-fundamentals@@
Choosing a JavaScript Framework http://www.pluralsight.com/courses/choosing-javascript-framework
JavaScript From Scratch http://www.pluralsight.com/courses/javascript-from-scratch
JavaScript the Good Parts http://www.pluralsight.com/courses/javascript-good-parts
Hands-on JavaScript Project: JSON https://app.pluralsight.com/library/courses/javascript-project-json/table-of-contents
!! object oriented JS
Object-Oriented JavaScript With ES6 https://code.tutsplus.com/courses/object-oriented-javascript-with-es6
!! functional programming JS
Functional Programming in JavaScript https://code.tutsplus.com/courses/functional-programming-in-javascript
!! JS design patterns
Put JavaScript Design Patterns Into Practice https://code.tutsplus.com/courses/put-javascript-design-patterns-into-practice
!! advanced JS
Advanced JavaScript Fundamentals https://code.tutsplus.com/courses/advanced-javascript-fundamentals
!! essential JS tools
Essential Tools for JavaScript Developers https://code.tutsplus.com/courses/essential-tools-for-javascript-developers
!! Testing client side JS
Testing Clientside JavaScript http://www.pluralsight.com/courses/testing-javascript
! node.js
!! Mongoose
Introduction to Mongoose for Node.js and MongoDB http://www.pluralsight.com/courses/mongoose-for-nodejs-mongodb
!! RESTful Web Services with Node.js and Express
http://www.pluralsight.com/courses/node-js-express-rest-web-services
!! Real-Time Web with Node.js - socket.io
https://app.pluralsight.com/library/courses/real-time-web-nodejs/table-of-contents
!! CRUD app
CRUD app with AngularJs, Node js, express js, Bootstrap, EJS, MySQL PART I https://www.youtube.com/watch?v=wz-ZkLB7ozo
RESTful Crud Example With Node.js and MySql http://teknosains.com/i/restful-crud-example-with-nodejs-and-mysql
!! REST api
Five Essential Tools for Building REST APIs http://www.pluralsight.com/courses/five-essential-tools-building-rest-api
!! HTTP fundamentals
HTTP Fundamentals http://www.pluralsight.com/courses/xhttp-fund
! angular.js
Building AngularJS and Node.js Apps with the MEAN Stack http://www.pluralsight.com/courses/building-angularjs-nodejs-apps-mean
AngularJS: Get Started http://www.pluralsight.com/courses/angularjs-get-started
http://www.pluralsight.com/courses/angular-big-picture
! meteor.js
http://www.pluralsight.com/courses/meteorjs-fundamentals-single-page-apps
Angular is client-side only, whereas Meteor covers everything from the front end to the back end, so the developer no longer has to worry about the interactions. Everything stays synced to the database across all clients.
http://cordova.apache.org/#about
http://www.pluralsight.com/courses/creating-mobile-apps-hybrid-cross-platform-native
! ember.js
https://www.quora.com/How-hard-is-it-to-learn-Ember-or-Angular-after-having-learnt-Backbone
http://www.pluralsight.com/tag/ember.js?pageSize=48&sort=new
@@http://www.pluralsight.com/courses/fire-up-emberjs@@
http://www.pluralsight.com/courses/emberjs-fundamentals
Play by Play: App Development in Rails and Ember.js with Yehuda Katz https://app.pluralsight.com/library/courses/play-by-play-yehuda-katz/table-of-contents
https://app.pluralsight.com/library/courses/ember-2-getting-started/table-of-contents
! backbone.js
http://www.pluralsight.com/tag/backbone.js?pageSize=48&sort=new
Backbone.js Fundamentals http://www.pluralsight.com/courses/backbone-fundamentals
Application Building Patterns with Backbone.js http://www.pluralsight.com/courses/playing-with-backbonejs
! handlebars.js
@@http://www.pluralsight.com/courses/handlebars-javascript-templating@@
! react.js
Building a Full-Stack App with React and Express http://www.pluralsight.com/courses/react-express-full-stack-app-build
! bower
http://www.pluralsight.com/courses/bower-fundamentals
! build tools
either Grunt + Gulp + Browserify, or RequireJS + webpack .. at least from my understanding of the landscape
Webpack Fundamentals http://www.pluralsight.com/courses/webpack-fundamentals
! dependency injection
RequireJS: JavaScript Dependency Injection and Module Loading http://www.pluralsight.com/courses/requirejs-javascript-dependency-injection
https://gist.github.com/desandro/4686136
<<<
Pretend you're not a web developer that's used to writing a dozen script tags into a webpage for a moment.
If you didn't need to know that module awesome-thing required one or more other modules, and just specifying awesome-thing actually worked, would you be happy just specifying awesome-thing or would you want to be aware of and specify all of its dependencies, their dependencies, etc?
<<<
! Mobile backend as a service (MBaas)
parse http://www.pluralsight.com/courses/building-cloud-based-ios-app-parse
! IOT - internet of things
Programming the Internet of Things with Android http://www.lynda.com/Android-tutorials/Programming-Internet-Things-Android/184920-2.html
! go programming language
http://www.pluralsight.com/courses/go
! cloud computing
http://pluralsight.com/training/Courses#cloud-computing
@@http://www.pluralsight.com/courses/table-of-contents/openstack-introduction@@
@@http://www.pluralsight.com/courses/docker-fundamentals@@
http://www.pluralsight.com/courses/vagrant-versioning-environments
! no-SQL
http://www.pluralsight.com/courses/riak-introduction
mongodb http://www.pluralsight.com/courses/mongodb-introduction
http://www.pluralsight.com/courses/mongodb-big-data-reporting
! ITIL
http://pluralsight.com/training/Courses/TableOfContents/itil-foundations
! bash
http://www.pluralsight.com/courses/bash-shell-scripting
http://www.pluralsight.com/courses/introduction-bash-shell-linux-mac-os
! tools
http://www.pluralsight.com/courses/smash-into-vim
http://www.pluralsight.com/courses/sublime-text-3-from-scratch
http://www.pluralsight.com/courses/meet-command-line
http://www.pluralsight.com/courses/webstorm-fundamentals
! RHEL6
http://www.pluralsight.com/author/nigel-poulton
CompTIA Storage+ Part 1: Storage Fundamentals
CompTIA Storage+ Part 2: Network Storage & Data Replication
CompTIA Storage+ Part 3: Data Protection & Storage
Red Hat Enterprise Linux 6 Booting and Runlevels
Red Hat Enterprise Linux Shell Fundamentals
Red Hat Enterprise Linux Shell Scripting Fundamentals
Red Hat Enterprise Linux Storage Fundamentals
! http
HTTP Fundamentals http://www.pluralsight.com/courses/xhttp-fund
! test driven development
http://www.pluralsight.com/courses/test-first-development-1
http://www.pluralsight.com/courses/test-first-development-2
http://www.pluralsight.com/courses/outside-in-tdd
! policy managed db
Why and How You Should Be Using Policy-Managed Oracle RAC Databases http://bs.doag.org/formes/pubfiles/5217201/docs/Konferenz/2013/vortraege/Oracle%20Datenbank/2013-DB-Mark_Scardina-Why_and_How_You_Should_Be_Using_Policy-Managed_Oracle_RAC_Databases-Manuskript.pdf
http://martincarstenbach.wordpress.com/2013/06/17/an-introduction-to-policy-managed-databases-in-11-2-rac/
http://www.evernote.com/shard/s48/sh/61b91302-e2a1-4afb-bc92-c1961d39ea94/cd9b820f2cbaed4e64e318ad1bb6bf5b
{{{
/*-------------------------------------------------------------------------
*
* fe-exec.c
* functions related to sending a query down to the backend
*
* Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
*
* IDENTIFICATION
* src/interfaces/libpq/fe-exec.c
*
*-------------------------------------------------------------------------
*/
#include "postgres_fe.h"
#include <ctype.h>
#include <fcntl.h>
#include "libpq-fe.h"
#include "libpq-int.h"
#include "mb/pg_wchar.h"
#ifdef WIN32
#include "win32.h"
#else
#include <unistd.h>
#endif
/* keep this in same order as ExecStatusType in libpq-fe.h */
char *const pgresStatus[] = {
"PGRES_EMPTY_QUERY",
"PGRES_COMMAND_OK",
"PGRES_TUPLES_OK",
"PGRES_COPY_OUT",
"PGRES_COPY_IN",
"PGRES_BAD_RESPONSE",
"PGRES_NONFATAL_ERROR",
"PGRES_FATAL_ERROR",
"PGRES_COPY_BOTH",
"PGRES_SINGLE_TUPLE"
};
/*
* static state needed by PQescapeString and PQescapeBytea; initialize to
* values that result in backward-compatible behavior
*/
static int static_client_encoding = PG_SQL_ASCII;
static bool static_std_strings = false;
static PGEvent *dupEvents(PGEvent *events, int count);
static bool pqAddTuple(PGresult *res, PGresAttValue *tup);
static bool PQsendQueryStart(PGconn *conn);
static int PQsendQueryGuts(PGconn *conn,
const char *command,
const char *stmtName,
int nParams,
const Oid *paramTypes,
const char *const * paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat);
static void parseInput(PGconn *conn);
static PGresult *getCopyResult(PGconn *conn, ExecStatusType copytype);
static bool PQexecStart(PGconn *conn);
static PGresult *PQexecFinish(PGconn *conn);
static int PQsendDescribe(PGconn *conn, char desc_type,
const char *desc_target);
static int check_field_number(const PGresult *res, int field_num);
/* ----------------
* Space management for PGresult.
*
* Formerly, libpq did a separate malloc() for each field of each tuple
* returned by a query. This was remarkably expensive --- malloc/free
* consumed a sizable part of the application's runtime. And there is
* no real need to keep track of the fields separately, since they will
* all be freed together when the PGresult is released. So now, we grab
* large blocks of storage from malloc and allocate space for query data
* within these blocks, using a trivially simple allocator. This reduces
* the number of malloc/free calls dramatically, and it also avoids
* fragmentation of the malloc storage arena.
* The PGresult structure itself is still malloc'd separately. We could
* combine it with the first allocation block, but that would waste space
* for the common case that no extra storage is actually needed (that is,
* the SQL command did not return tuples).
*
* We also malloc the top-level array of tuple pointers separately, because
* we need to be able to enlarge it via realloc, and our trivial space
* allocator doesn't handle that effectively. (Too bad the FE/BE protocol
* doesn't tell us up front how many tuples will be returned.)
* All other subsidiary storage for a PGresult is kept in PGresult_data blocks
* of size PGRESULT_DATA_BLOCKSIZE. The overhead at the start of each block
* is just a link to the next one, if any. Free-space management info is
* kept in the owning PGresult.
* A query returning a small amount of data will thus require three malloc
* calls: one for the PGresult, one for the tuples pointer array, and one
* PGresult_data block.
*
* Only the most recently allocated PGresult_data block is a candidate to
* have more stuff added to it --- any extra space left over in older blocks
* is wasted. We could be smarter and search the whole chain, but the point
* here is to be simple and fast. Typical applications do not keep a PGresult
* around very long anyway, so some wasted space within one is not a problem.
*
* Tuning constants for the space allocator are:
* PGRESULT_DATA_BLOCKSIZE: size of a standard allocation block, in bytes
* PGRESULT_ALIGN_BOUNDARY: assumed alignment requirement for binary data
* PGRESULT_SEP_ALLOC_THRESHOLD: objects bigger than this are given separate
* blocks, instead of being crammed into a regular allocation block.
* Requirements for correct function are:
* PGRESULT_ALIGN_BOUNDARY must be a multiple of the alignment requirements
* of all machine data types. (Currently this is set from configure
* tests, so it should be OK automatically.)
* PGRESULT_SEP_ALLOC_THRESHOLD + PGRESULT_BLOCK_OVERHEAD <=
* PGRESULT_DATA_BLOCKSIZE
* pqResultAlloc assumes an object smaller than the threshold will fit
* in a new block.
* The amount of space wasted at the end of a block could be as much as
* PGRESULT_SEP_ALLOC_THRESHOLD, so it doesn't pay to make that too large.
* ----------------
*/
#define PGRESULT_DATA_BLOCKSIZE 2048
#define PGRESULT_ALIGN_BOUNDARY MAXIMUM_ALIGNOF /* from configure */
#define PGRESULT_BLOCK_OVERHEAD Max(sizeof(PGresult_data), PGRESULT_ALIGN_BOUNDARY)
#define PGRESULT_SEP_ALLOC_THRESHOLD (PGRESULT_DATA_BLOCKSIZE / 2)
/*
* PQmakeEmptyPGresult
* returns a newly allocated, initialized PGresult with given status.
* If conn is not NULL and status indicates an error, the conn's
* errorMessage is copied. Also, any PGEvents are copied from the conn.
*/
PGresult *
PQmakeEmptyPGresult(PGconn *conn, ExecStatusType status)
{
PGresult *result;
result = (PGresult *) malloc(sizeof(PGresult));
if (!result)
return NULL;
result->ntups = 0;
result->numAttributes = 0;
result->attDescs = NULL;
result->tuples = NULL;
result->tupArrSize = 0;
result->numParameters = 0;
result->paramDescs = NULL;
result->resultStatus = status;
result->cmdStatus[0] = '\0';
result->binary = 0;
result->events = NULL;
result->nEvents = 0;
result->errMsg = NULL;
result->errFields = NULL;
result->errQuery = NULL;
result->null_field[0] = '\0';
result->curBlock = NULL;
result->curOffset = 0;
result->spaceLeft = 0;
if (conn)
{
/* copy connection data we might need for operations on PGresult */
result->noticeHooks = conn->noticeHooks;
result->client_encoding = conn->client_encoding;
/* consider copying conn's errorMessage */
switch (status)
{
case PGRES_EMPTY_QUERY:
case PGRES_COMMAND_OK:
case PGRES_TUPLES_OK:
case PGRES_COPY_OUT:
case PGRES_COPY_IN:
case PGRES_COPY_BOTH:
case PGRES_SINGLE_TUPLE:
/* non-error cases */
break;
default:
pqSetResultError(result, conn->errorMessage.data);
break;
}
/* copy events last; result must be valid if we need to PQclear */
if (conn->nEvents > 0)
{
result->events = dupEvents(conn->events, conn->nEvents);
if (!result->events)
{
PQclear(result);
return NULL;
}
result->nEvents = conn->nEvents;
}
}
else
{
/* defaults... */
result->noticeHooks.noticeRec = NULL;
result->noticeHooks.noticeRecArg = NULL;
result->noticeHooks.noticeProc = NULL;
result->noticeHooks.noticeProcArg = NULL;
result->client_encoding = PG_SQL_ASCII;
}
return result;
}
/*
* PQsetResultAttrs
*
* Set the attributes for a given result. This function fails if there are
* already attributes contained in the provided result. The call is
* ignored if numAttributes is zero or attDescs is NULL. If the
* function fails, it returns zero. If the function succeeds, it
* returns a non-zero value.
*/
int
PQsetResultAttrs(PGresult *res, int numAttributes, PGresAttDesc *attDescs)
{
int i;
/* If attrs already exist, they cannot be overwritten. */
if (!res || res->numAttributes > 0)
return FALSE;
/* ignore no-op request */
if (numAttributes <= 0 || !attDescs)
return TRUE;
res->attDescs = (PGresAttDesc *)
PQresultAlloc(res, numAttributes * sizeof(PGresAttDesc));
if (!res->attDescs)
return FALSE;
res->numAttributes = numAttributes;
memcpy(res->attDescs, attDescs, numAttributes * sizeof(PGresAttDesc));
/* deep-copy the attribute names, and determine format */
res->binary = 1;
for (i = 0; i < res->numAttributes; i++)
{
if (res->attDescs[i].name)
res->attDescs[i].name = pqResultStrdup(res, res->attDescs[i].name);
else
res->attDescs[i].name = res->null_field;
if (!res->attDescs[i].name)
return FALSE;
if (res->attDescs[i].format == 0)
res->binary = 0;
}
return TRUE;
}
/*
* PQcopyResult
*
* Returns a deep copy of the provided 'src' PGresult, which cannot be NULL.
* The 'flags' argument controls which portions of the result will or will
* NOT be copied. The created result is always put into the
* PGRES_TUPLES_OK status. The source result error message is not copied,
* although cmdStatus is.
*
* To set custom attributes, use PQsetResultAttrs. That function requires
* that there are no attrs contained in the result, so to use that
* function you cannot use the PG_COPYRES_ATTRS or PG_COPYRES_TUPLES
* options with this function.
*
* Options:
* PG_COPYRES_ATTRS - Copy the source result's attributes
*
* PG_COPYRES_TUPLES - Copy the source result's tuples. This implies
* copying the attrs, seeing how the attrs are needed by the tuples.
*
* PG_COPYRES_EVENTS - Copy the source result's events.
*
* PG_COPYRES_NOTICEHOOKS - Copy the source result's notice hooks.
*/
PGresult *
PQcopyResult(const PGresult *src, int flags)
{
PGresult *dest;
int i;
if (!src)
return NULL;
dest = PQmakeEmptyPGresult(NULL, PGRES_TUPLES_OK);
if (!dest)
return NULL;
/* Always copy these over. Is cmdStatus really useful here? */
dest->client_encoding = src->client_encoding;
strcpy(dest->cmdStatus, src->cmdStatus);
/* Wants attrs? */
if (flags & (PG_COPYRES_ATTRS | PG_COPYRES_TUPLES))
{
if (!PQsetResultAttrs(dest, src->numAttributes, src->attDescs))
{
PQclear(dest);
return NULL;
}
}
/* Wants to copy tuples? */
if (flags & PG_COPYRES_TUPLES)
{
int tup,
field;
for (tup = 0; tup < src->ntups; tup++)
{
for (field = 0; field < src->numAttributes; field++)
{
if (!PQsetvalue(dest, tup, field,
src->tuples[tup][field].value,
src->tuples[tup][field].len))
{
PQclear(dest);
return NULL;
}
}
}
}
/* Wants to copy notice hooks? */
if (flags & PG_COPYRES_NOTICEHOOKS)
dest->noticeHooks = src->noticeHooks;
/* Wants to copy PGEvents? */
if ((flags & PG_COPYRES_EVENTS) && src->nEvents > 0)
{
dest->events = dupEvents(src->events, src->nEvents);
if (!dest->events)
{
PQclear(dest);
return NULL;
}
dest->nEvents = src->nEvents;
}
/* Okay, trigger PGEVT_RESULTCOPY event */
for (i = 0; i < dest->nEvents; i++)
{
if (src->events[i].resultInitialized)
{
PGEventResultCopy evt;
evt.src = src;
evt.dest = dest;
if (!dest->events[i].proc(PGEVT_RESULTCOPY, &evt,
dest->events[i].passThrough))
{
PQclear(dest);
return NULL;
}
dest->events[i].resultInitialized = TRUE;
}
}
return dest;
}
/*
* Copy an array of PGEvents (with no extra space for more).
* Does not duplicate the event instance data, sets this to NULL.
* Also, the resultInitialized flags are all cleared.
*/
static PGEvent *
dupEvents(PGEvent *events, int count)
{
PGEvent *newEvents;
int i;
if (!events || count <= 0)
return NULL;
newEvents = (PGEvent *) malloc(count * sizeof(PGEvent));
if (!newEvents)
return NULL;
for (i = 0; i < count; i++)
{
newEvents[i].proc = events[i].proc;
newEvents[i].passThrough = events[i].passThrough;
newEvents[i].data = NULL;
newEvents[i].resultInitialized = FALSE;
newEvents[i].name = strdup(events[i].name);
if (!newEvents[i].name)
{
while (--i >= 0)
free(newEvents[i].name);
free(newEvents);
return NULL;
}
}
return newEvents;
}
/*
* Sets the value for a tuple field. The tup_num must be less than or
* equal to PQntuples(res). If it is equal, a new tuple is created and
* added to the result.
* Returns a non-zero value for success and zero for failure.
*/
int
PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len)
{
PGresAttValue *attval;
if (!check_field_number(res, field_num))
return FALSE;
/* Invalid tup_num, must be <= ntups */
if (tup_num < 0 || tup_num > res->ntups)
return FALSE;
/* need to allocate a new tuple? */
if (tup_num == res->ntups)
{
PGresAttValue *tup;
int i;
tup = (PGresAttValue *)
pqResultAlloc(res, res->numAttributes * sizeof(PGresAttValue),
TRUE);
if (!tup)
return FALSE;
/* initialize each column to NULL */
for (i = 0; i < res->numAttributes; i++)
{
tup[i].len = NULL_LEN;
tup[i].value = res->null_field;
}
/* add it to the array */
if (!pqAddTuple(res, tup))
return FALSE;
}
attval = &res->tuples[tup_num][field_num];
/* treat either NULL_LEN or NULL value pointer as a NULL field */
if (len == NULL_LEN || value == NULL)
{
attval->len = NULL_LEN;
attval->value = res->null_field;
}
else if (len <= 0)
{
attval->len = 0;
attval->value = res->null_field;
}
else
{
attval->value = (char *) pqResultAlloc(res, len + 1, TRUE);
if (!attval->value)
return FALSE;
attval->len = len;
memcpy(attval->value, value, len);
attval->value[len] = '\0';
}
return TRUE;
}
/*
* pqResultAlloc - exported routine to allocate local storage in a PGresult.
*
* We force all such allocations to be maxaligned, since we don't know
* whether the value might be binary.
*/
void *
PQresultAlloc(PGresult *res, size_t nBytes)
{
return pqResultAlloc(res, nBytes, TRUE);
}
/*
* pqResultAlloc -
* Allocate subsidiary storage for a PGresult.
*
* nBytes is the amount of space needed for the object.
* If isBinary is true, we assume that we need to align the object on
* a machine allocation boundary.
* If isBinary is false, we assume the object is a char string and can
* be allocated on any byte boundary.
*/
void *
pqResultAlloc(PGresult *res, size_t nBytes, bool isBinary)
{
char *space;
PGresult_data *block;
if (!res)
return NULL;
if (nBytes <= 0)
return res->null_field;
/*
* If alignment is needed, round up the current position to an alignment
* boundary.
*/
if (isBinary)
{
int offset = res->curOffset % PGRESULT_ALIGN_BOUNDARY;
if (offset)
{
res->curOffset += PGRESULT_ALIGN_BOUNDARY - offset;
res->spaceLeft -= PGRESULT_ALIGN_BOUNDARY - offset;
}
}
/* If there's enough space in the current block, no problem. */
if (nBytes <= (size_t) res->spaceLeft)
{
space = res->curBlock->space + res->curOffset;
res->curOffset += nBytes;
res->spaceLeft -= nBytes;
return space;
}
/*
* If the requested object is very large, give it its own block; this
* avoids wasting what might be most of the current block to start a new
* block. (We'd have to special-case requests bigger than the block size
* anyway.) The object is always given binary alignment in this case.
*/
if (nBytes >= PGRESULT_SEP_ALLOC_THRESHOLD)
{
block = (PGresult_data *) malloc(nBytes + PGRESULT_BLOCK_OVERHEAD);
if (!block)
return NULL;
space = block->space + PGRESULT_BLOCK_OVERHEAD;
if (res->curBlock)
{
/*
* Tuck special block below the active block, so that we don't
* have to waste the free space in the active block.
*/
block->next = res->curBlock->next;
res->curBlock->next = block;
}
else
{
/* Must set up the new block as the first active block. */
block->next = NULL;
res->curBlock = block;
res->spaceLeft = 0; /* be sure it's marked full */
}
return space;
}
/* Otherwise, start a new block. */
block = (PGresult_data *) malloc(PGRESULT_DATA_BLOCKSIZE);
if (!block)
return NULL;
block->next = res->curBlock;
res->curBlock = block;
if (isBinary)
{
/* object needs full alignment */
res->curOffset = PGRESULT_BLOCK_OVERHEAD;
res->spaceLeft = PGRESULT_DATA_BLOCKSIZE - PGRESULT_BLOCK_OVERHEAD;
}
else
{
/* we can cram it right after the overhead pointer */
res->curOffset = sizeof(PGresult_data);
res->spaceLeft = PGRESULT_DATA_BLOCKSIZE - sizeof(PGresult_data);
}
space = block->space + res->curOffset;
res->curOffset += nBytes;
res->spaceLeft -= nBytes;
return space;
}
/*
* pqResultStrdup -
* Like strdup, but the space is subsidiary PGresult space.
*/
char *
pqResultStrdup(PGresult *res, const char *str)
{
char *space = (char *) pqResultAlloc(res, strlen(str) + 1, FALSE);
if (space)
strcpy(space, str);
return space;
}
/*
* pqSetResultError -
* assign a new error message to a PGresult
*/
void
pqSetResultError(PGresult *res, const char *msg)
{
if (!res)
return;
if (msg && *msg)
res->errMsg = pqResultStrdup(res, msg);
else
res->errMsg = NULL;
}
/*
* pqCatenateResultError -
* concatenate a new error message to the one already in a PGresult
*/
void
pqCatenateResultError(PGresult *res, const char *msg)
{
PQExpBufferData errorBuf;
if (!res || !msg)
return;
initPQExpBuffer(&errorBuf);
if (res->errMsg)
appendPQExpBufferStr(&errorBuf, res->errMsg);
appendPQExpBufferStr(&errorBuf, msg);
pqSetResultError(res, errorBuf.data);
termPQExpBuffer(&errorBuf);
}
/*
* PQclear -
* free's the memory associated with a PGresult
*/
void
PQclear(PGresult *res)
{
PGresult_data *block;
int i;
if (!res)
return;
for (i = 0; i < res->nEvents; i++)
{
/* only send DESTROY to successfully-initialized event procs */
if (res->events[i].resultInitialized)
{
PGEventResultDestroy evt;
evt.result = res;
(void) res->events[i].proc(PGEVT_RESULTDESTROY, &evt,
res->events[i].passThrough);
}
free(res->events[i].name);
}
if (res->events)
free(res->events);
/* Free all the subsidiary blocks */
while ((block = res->curBlock) != NULL)
{
res->curBlock = block->next;
free(block);
}
/* Free the top-level tuple pointer array */
if (res->tuples)
free(res->tuples);
/* zero out the pointer fields to catch programming errors */
res->attDescs = NULL;
res->tuples = NULL;
res->paramDescs = NULL;
res->errFields = NULL;
res->events = NULL;
res->nEvents = 0;
/* res->curBlock was zeroed out earlier */
/* Free the PGresult structure itself */
free(res);
}
/*
* Handy subroutine to deallocate any partially constructed async result.
*
* Any "next" result gets cleared too.
*/
void
pqClearAsyncResult(PGconn *conn)
{
if (conn->result)
PQclear(conn->result);
conn->result = NULL;
if (conn->next_result)
PQclear(conn->next_result);
conn->next_result = NULL;
}
/*
* This subroutine deletes any existing async result, sets conn->result
* to a PGresult with status PGRES_FATAL_ERROR, and stores the current
* contents of conn->errorMessage into that result. It differs from a
* plain call on PQmakeEmptyPGresult() in that if there is already an
* async result with status PGRES_FATAL_ERROR, the current error message
* is APPENDED to the old error message instead of replacing it. This
* behavior lets us report multiple error conditions properly, if necessary.
* (An example where this is needed is when the backend sends an 'E' message
* and immediately closes the connection --- we want to report both the
* backend error and the connection closure error.)
*/
void
pqSaveErrorResult(PGconn *conn)
{
/*
* If no old async result, just let PQmakeEmptyPGresult make one. Likewise
* if old result is not an error message.
*/
if (conn->result == NULL ||
conn->result->resultStatus != PGRES_FATAL_ERROR ||
conn->result->errMsg == NULL)
{
pqClearAsyncResult(conn);
conn->result = PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR);
}
else
{
/* Else, concatenate error message to existing async result. */
pqCatenateResultError(conn->result, conn->errorMessage.data);
}
}
/*
* This subroutine prepares an async result object for return to the caller.
* If there is not already an async result object, build an error object
* using whatever is in conn->errorMessage. In any case, clear the async
* result storage and make sure PQerrorMessage will agree with the result's
* error string.
*/
PGresult *
pqPrepareAsyncResult(PGconn *conn)
{
PGresult *res;
/*
* conn->result is the PGresult to return. If it is NULL (which probably
* shouldn't happen) we assume there is an appropriate error message in
* conn->errorMessage.
*/
res = conn->result;
if (!res)
res = PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR);
else
{
/*
* Make sure PQerrorMessage agrees with result; it could be different
* if we have concatenated messages.
*/
resetPQExpBuffer(&conn->errorMessage);
appendPQExpBufferStr(&conn->errorMessage,
PQresultErrorMessage(res));
}
/*
* Replace conn->result with next_result, if any. In the normal case
* there isn't a next result and we're just dropping ownership of the
* current result. In single-row mode this restores the situation to what
* it was before we created the current single-row result.
*/
conn->result = conn->next_result;
conn->next_result = NULL;
return res;
}
/*
* pqInternalNotice - produce an internally-generated notice message
*
* A format string and optional arguments can be passed. Note that we do
* libpq_gettext() here, so callers need not.
*
* The supplied text is taken as primary message (ie., it should not include
* a trailing newline, and should not be more than one line).
*/
void
pqInternalNotice(const PGNoticeHooks *hooks, const char *fmt,...)
{
char msgBuf[1024];
va_list args;
PGresult *res;
if (hooks->noticeRec == NULL)
return; /* nobody home to receive notice? */
/* Format the message */
va_start(args, fmt);
vsnprintf(msgBuf, sizeof(msgBuf), libpq_gettext(fmt), args);
va_end(args);
msgBuf[sizeof(msgBuf) - 1] = '\0'; /* make real sure it's terminated */
/* Make a PGresult to pass to the notice receiver */
res = PQmakeEmptyPGresult(NULL, PGRES_NONFATAL_ERROR);
if (!res)
return;
res->noticeHooks = *hooks;
/*
* Set up fields of notice.
*/
pqSaveMessageField(res, PG_DIAG_MESSAGE_PRIMARY, msgBuf);
pqSaveMessageField(res, PG_DIAG_SEVERITY, libpq_gettext("NOTICE"));
pqSaveMessageField(res, PG_DIAG_SEVERITY_NONLOCALIZED, "NOTICE");
/* XXX should provide a SQLSTATE too? */
/*
* Result text is always just the primary message + newline. If we can't
* allocate it, don't bother invoking the receiver.
*/
res->errMsg = (char *) pqResultAlloc(res, strlen(msgBuf) + 2, FALSE);
if (res->errMsg)
{
sprintf(res->errMsg, "%s\n", msgBuf);
/*
* Pass to receiver, then free it.
*/
(*res->noticeHooks.noticeRec) (res->noticeHooks.noticeRecArg, res);
}
PQclear(res);
}
/*
* pqAddTuple
* add a row pointer to the PGresult structure, growing it if necessary
* Returns TRUE if OK, FALSE if not enough memory to add the row
*/
static bool
pqAddTuple(PGresult *res, PGresAttValue *tup)
{
if (res->ntups >= res->tupArrSize)
{
/*
* Try to grow the array.
*
* We can use realloc because shallow copying of the structure is
* okay. Note that the first time through, res->tuples is NULL. While
* ANSI says that realloc() should act like malloc() in that case,
* some old C libraries (like SunOS 4.1.x) coredump instead. On
* failure realloc is supposed to return NULL without damaging the
* existing allocation. Note that the positions beyond res->ntups are
* garbage, not necessarily NULL.
*/
int newSize = (res->tupArrSize > 0) ? res->tupArrSize * 2 : 128;
PGresAttValue **newTuples;
if (res->tuples == NULL)
newTuples = (PGresAttValue **)
malloc(newSize * sizeof(PGresAttValue *));
else
newTuples = (PGresAttValue **)
realloc(res->tuples, newSize * sizeof(PGresAttValue *));
if (!newTuples)
return FALSE; /* malloc or realloc failed */
res->tupArrSize = newSize;
res->tuples = newTuples;
}
res->tuples[res->ntups] = tup;
res->ntups++;
return TRUE;
}
/*
* pqSaveMessageField - save one field of an error or notice message
*/
void
pqSaveMessageField(PGresult *res, char code, const char *value)
{
PGMessageField *pfield;
pfield = (PGMessageField *)
pqResultAlloc(res,
offsetof(PGMessageField, contents) +
strlen(value) + 1,
TRUE);
if (!pfield)
return; /* out of memory? */
pfield->code = code;
strcpy(pfield->contents, value);
pfield->next = res->errFields;
res->errFields = pfield;
}
/*
* pqSaveParameterStatus - remember parameter status sent by backend
*/
void
pqSaveParameterStatus(PGconn *conn, const char *name, const char *value)
{
pgParameterStatus *pstatus;
pgParameterStatus *prev;
if (conn->Pfdebug)
fprintf(conn->Pfdebug, "pqSaveParameterStatus: '%s' = '%s'\n",
name, value);
/*
* Forget any old information about the parameter
*/
for (pstatus = conn->pstatus, prev = NULL;
pstatus != NULL;
prev = pstatus, pstatus = pstatus->next)
{
if (strcmp(pstatus->name, name) == 0)
{
if (prev)
prev->next = pstatus->next;
else
conn->pstatus = pstatus->next;
free(pstatus); /* frees name and value strings too */
break;
}
}
/*
* Store new info as a single malloc block
*/
pstatus = (pgParameterStatus *) malloc(sizeof(pgParameterStatus) +
strlen(name) +strlen(value) + 2);
if (pstatus)
{
char *ptr;
ptr = ((char *) pstatus) + sizeof(pgParameterStatus);
pstatus->name = ptr;
strcpy(ptr, name);
ptr += strlen(name) + 1;
pstatus->value = ptr;
strcpy(ptr, value);
pstatus->next = conn->pstatus;
conn->pstatus = pstatus;
}
/*
* Special hacks: remember client_encoding and
* standard_conforming_strings, and convert server version to a numeric
* form. We keep the first two of these in static variables as well, so
* that PQescapeString and PQescapeBytea can behave somewhat sanely (at
* least in single-connection-using programs).
*/
if (strcmp(name, "client_encoding") == 0)
{
conn->client_encoding = pg_char_to_encoding(value);
/* if we don't recognize the encoding name, fall back to SQL_ASCII */
if (conn->client_encoding < 0)
conn->client_encoding = PG_SQL_ASCII;
static_client_encoding = conn->client_encoding;
}
else if (strcmp(name, "standard_conforming_strings") == 0)
{
conn->std_strings = (strcmp(value, "on") == 0);
static_std_strings = conn->std_strings;
}
else if (strcmp(name, "server_version") == 0)
{
int cnt;
int vmaj,
vmin,
vrev;
cnt = sscanf(value, "%d.%d.%d", &vmaj, &vmin, &vrev);
if (cnt == 3)
{
/* old style, e.g. 9.6.1 */
conn->sversion = (100 * vmaj + vmin) * 100 + vrev;
}
else if (cnt == 2)
{
if (vmaj >= 10)
{
/* new style, e.g. 10.1 */
conn->sversion = 100 * 100 * vmaj + vmin;
}
else
{
/* old style without minor version, e.g. 9.6devel */
conn->sversion = (100 * vmaj + vmin) * 100;
}
}
else if (cnt == 1)
{
/* new style without minor version, e.g. 10devel */
conn->sversion = 100 * 100 * vmaj;
}
else
conn->sversion = 0; /* unknown */
}
}
/*
* pqRowProcessor
* Add the received row to the current async result (conn->result).
* Returns 1 if OK, 0 if error occurred.
*
* On error, *errmsgp can be set to an error string to be returned.
* If it is left NULL, the error is presumed to be "out of memory".
*
* In single-row mode, we create a new result holding just the current row,
* stashing the previous result in conn->next_result so that it becomes
* active again after pqPrepareAsyncResult(). This allows the result metadata
* (column descriptions) to be carried forward to each result row.
*/
int
pqRowProcessor(PGconn *conn, const char **errmsgp)
{
PGresult *res = conn->result;
int nfields = res->numAttributes;
const PGdataValue *columns = conn->rowBuf;
PGresAttValue *tup;
int i;
/*
* In single-row mode, make a new PGresult that will hold just this one
* row; the original conn->result is left unchanged so that it can be used
* again as the template for future rows.
*/
if (conn->singleRowMode)
{
/* Copy everything that should be in the result at this point */
res = PQcopyResult(res,
PG_COPYRES_ATTRS | PG_COPYRES_EVENTS |
PG_COPYRES_NOTICEHOOKS);
if (!res)
return 0;
}
/*
* Basically we just allocate space in the PGresult for each field and
* copy the data over.
*
* Note: on malloc failure, we return 0 leaving *errmsgp still NULL, which
* caller will take to mean "out of memory". This is preferable to trying
* to set up such a message here, because evidently there's not enough
* memory for gettext() to do anything.
*/
tup = (PGresAttValue *)
pqResultAlloc(res, nfields * sizeof(PGresAttValue), TRUE);
if (tup == NULL)
goto fail;
for (i = 0; i < nfields; i++)
{
int clen = columns[i].len;
if (clen < 0)
{
/* null field */
tup[i].len = NULL_LEN;
tup[i].value = res->null_field;
}
else
{
bool isbinary = (res->attDescs[i].format != 0);
char *val;
val = (char *) pqResultAlloc(res, clen + 1, isbinary);
if (val == NULL)
goto fail;
/* copy and zero-terminate the data (even if it's binary) */
memcpy(val, columns[i].value, clen);
val[clen] = '\0';
tup[i].len = clen;
tup[i].value = val;
}
}
/* And add the tuple to the PGresult's tuple array */
if (!pqAddTuple(res, tup))
goto fail;
/*
* Success. In single-row mode, make the result available to the client
* immediately.
*/
if (conn->singleRowMode)
{
/* Change result status to special single-row value */
res->resultStatus = PGRES_SINGLE_TUPLE;
/* Stash old result for re-use later */
conn->next_result = conn->result;
conn->result = res;
/* And mark the result ready to return */
conn->asyncStatus = PGASYNC_READY;
}
return 1;
fail:
/* release locally allocated PGresult, if we made one */
if (res != conn->result)
PQclear(res);
return 0;
}
/*
* PQsendQuery
* Submit a query, but don't wait for it to finish
*
* Returns: 1 if successfully submitted
* 0 if error (conn->errorMessage is set)
*/
int
PQsendQuery(PGconn *conn, const char *query)
{
if (!PQsendQueryStart(conn))
return 0;
/* check the argument */
if (!query)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("command string is a null pointer\n"));
return 0;
}
/* construct the outgoing Query message */
if (pqPutMsgStart('Q', false, conn) < 0 ||
pqPuts(query, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
{
pqHandleSendFailure(conn);
return 0;
}
/* remember we are using simple query protocol */
conn->queryclass = PGQUERY_SIMPLE;
/* and remember the query text too, if possible */
/* if insufficient memory, last_query just winds up NULL */
if (conn->last_query)
free(conn->last_query);
conn->last_query = strdup(query);
/*
* Give the data a push. In nonblock mode, don't complain if we're unable
* to send it all; PQgetResult() will do any additional flushing needed.
*/
if (pqFlush(conn) < 0)
{
pqHandleSendFailure(conn);
return 0;
}
/* OK, it's launched! */
conn->asyncStatus = PGASYNC_BUSY;
return 1;
}
/*
* PQsendQueryParams
* Like PQsendQuery, but use protocol 3.0 so we can pass parameters
*/
int
PQsendQueryParams(PGconn *conn,
const char *command,
int nParams,
const Oid *paramTypes,
const char *const * paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat)
{
if (!PQsendQueryStart(conn))
return 0;
/* check the arguments */
if (!command)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("command string is a null pointer\n"));
return 0;
}
if (nParams < 0 || nParams > 65535)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("number of parameters must be between 0 and 65535\n"));
return 0;
}
return PQsendQueryGuts(conn,
command,
"", /* use unnamed statement */
nParams,
paramTypes,
paramValues,
paramLengths,
paramFormats,
resultFormat);
}
/*
* PQsendPrepare
* Submit a Parse message, but don't wait for it to finish
*
* Returns: 1 if successfully submitted
* 0 if error (conn->errorMessage is set)
*/
int
PQsendPrepare(PGconn *conn,
const char *stmtName, const char *query,
int nParams, const Oid *paramTypes)
{
if (!PQsendQueryStart(conn))
return 0;
/* check the arguments */
if (!stmtName)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("statement name is a null pointer\n"));
return 0;
}
if (!query)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("command string is a null pointer\n"));
return 0;
}
if (nParams < 0 || nParams > 65535)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("number of parameters must be between 0 and 65535\n"));
return 0;
}
/* This isn't gonna work on a 2.0 server */
if (PG_PROTOCOL_MAJOR(conn->pversion) < 3)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("function requires at least protocol version 3.0\n"));
return 0;
}
/* construct the Parse message */
if (pqPutMsgStart('P', false, conn) < 0 ||
pqPuts(stmtName, conn) < 0 ||
pqPuts(query, conn) < 0)
goto sendFailed;
if (nParams > 0 && paramTypes)
{
int i;
if (pqPutInt(nParams, 2, conn) < 0)
goto sendFailed;
for (i = 0; i < nParams; i++)
{
if (pqPutInt(paramTypes[i], 4, conn) < 0)
goto sendFailed;
}
}
else
{
if (pqPutInt(0, 2, conn) < 0)
goto sendFailed;
}
if (pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* construct the Sync message */
if (pqPutMsgStart('S', false, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* remember we are doing just a Parse */
conn->queryclass = PGQUERY_PREPARE;
/* and remember the query text too, if possible */
/* if insufficient memory, last_query just winds up NULL */
if (conn->last_query)
free(conn->last_query);
conn->last_query = strdup(query);
/*
* Give the data a push. In nonblock mode, don't complain if we're unable
* to send it all; PQgetResult() will do any additional flushing needed.
*/
if (pqFlush(conn) < 0)
goto sendFailed;
/* OK, it's launched! */
conn->asyncStatus = PGASYNC_BUSY;
return 1;
sendFailed:
pqHandleSendFailure(conn);
return 0;
}
/*
* PQsendQueryPrepared
* Like PQsendQuery, but execute a previously prepared statement,
* using protocol 3.0 so we can pass parameters
*/
int
PQsendQueryPrepared(PGconn *conn,
const char *stmtName,
int nParams,
const char *const * paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat)
{
if (!PQsendQueryStart(conn))
return 0;
/* check the arguments */
if (!stmtName)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("statement name is a null pointer\n"));
return 0;
}
if (nParams < 0 || nParams > 65535)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("number of parameters must be between 0 and 65535\n"));
return 0;
}
return PQsendQueryGuts(conn,
NULL, /* no command to parse */
stmtName,
nParams,
NULL, /* no param types */
paramValues,
paramLengths,
paramFormats,
resultFormat);
}
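/*
 * Usage sketch (illustrative only, not part of fe-exec.c): asynchronous
 * prepare-then-execute.  The statement name "add_one" is arbitrary.
 */
static void
example_send_prepared(PGconn *conn)
{
	const char *values[1] = {"11"};
	PGresult   *res;

	if (!PQsendPrepare(conn, "add_one", "SELECT $1::int + 1", 1, NULL))
		return;
	while ((res = PQgetResult(conn)) != NULL)	/* wait out the Parse */
		PQclear(res);
	if (!PQsendQueryPrepared(conn, "add_one", 1, values, NULL, NULL, 0))
		return;
	while ((res = PQgetResult(conn)) != NULL)
	{
		if (PQresultStatus(res) == PGRES_TUPLES_OK)
			printf("result: %s\n", PQgetvalue(res, 0, 0));
		PQclear(res);
	}
}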
/*
* Common startup code for PQsendQuery and sibling routines
*/
static bool
PQsendQueryStart(PGconn *conn)
{
if (!conn)
return false;
/* clear the error string */
resetPQExpBuffer(&conn->errorMessage);
/* Don't try to send if we know there's no live connection. */
if (conn->status != CONNECTION_OK)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("no connection to the server\n"));
return false;
}
/* Can't send while already busy, either. */
if (conn->asyncStatus != PGASYNC_IDLE)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("another command is already in progress\n"));
return false;
}
/* initialize async result-accumulation state */
pqClearAsyncResult(conn);
/* reset single-row processing mode */
conn->singleRowMode = false;
/* ready to send command message */
return true;
}
/*
* PQsendQueryGuts
* Common code for protocol-3.0 query sending
* PQsendQueryStart should be done already
*
* command may be NULL to indicate we use an already-prepared statement
*/
static int
PQsendQueryGuts(PGconn *conn,
const char *command,
const char *stmtName,
int nParams,
const Oid *paramTypes,
const char *const * paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat)
{
int i;
/* This isn't gonna work on a 2.0 server */
if (PG_PROTOCOL_MAJOR(conn->pversion) < 3)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("function requires at least protocol version 3.0\n"));
return 0;
}
/*
* We will send Parse (if needed), Bind, Describe Portal, Execute, Sync,
* using specified statement name and the unnamed portal.
*/
if (command)
{
/* construct the Parse message */
if (pqPutMsgStart('P', false, conn) < 0 ||
pqPuts(stmtName, conn) < 0 ||
pqPuts(command, conn) < 0)
goto sendFailed;
if (nParams > 0 && paramTypes)
{
if (pqPutInt(nParams, 2, conn) < 0)
goto sendFailed;
for (i = 0; i < nParams; i++)
{
if (pqPutInt(paramTypes[i], 4, conn) < 0)
goto sendFailed;
}
}
else
{
if (pqPutInt(0, 2, conn) < 0)
goto sendFailed;
}
if (pqPutMsgEnd(conn) < 0)
goto sendFailed;
}
/* Construct the Bind message */
if (pqPutMsgStart('B', false, conn) < 0 ||
pqPuts("", conn) < 0 ||
pqPuts(stmtName, conn) < 0)
goto sendFailed;
/* Send parameter formats */
if (nParams > 0 && paramFormats)
{
if (pqPutInt(nParams, 2, conn) < 0)
goto sendFailed;
for (i = 0; i < nParams; i++)
{
if (pqPutInt(paramFormats[i], 2, conn) < 0)
goto sendFailed;
}
}
else
{
if (pqPutInt(0, 2, conn) < 0)
goto sendFailed;
}
if (pqPutInt(nParams, 2, conn) < 0)
goto sendFailed;
/* Send parameters */
for (i = 0; i < nParams; i++)
{
if (paramValues && paramValues[i])
{
int nbytes;
if (paramFormats && paramFormats[i] != 0)
{
/* binary parameter */
if (paramLengths)
nbytes = paramLengths[i];
else
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("length must be given for binary parameter\n"));
goto sendFailed;
}
}
else
{
/* text parameter, do not use paramLengths */
nbytes = strlen(paramValues[i]);
}
if (pqPutInt(nbytes, 4, conn) < 0 ||
pqPutnchar(paramValues[i], nbytes, conn) < 0)
goto sendFailed;
}
else
{
/* take the param as NULL */
if (pqPutInt(-1, 4, conn) < 0)
goto sendFailed;
}
}
if (pqPutInt(1, 2, conn) < 0 ||
pqPutInt(resultFormat, 2, conn))
goto sendFailed;
if (pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* construct the Describe Portal message */
if (pqPutMsgStart('D', false, conn) < 0 ||
pqPutc('P', conn) < 0 ||
pqPuts("", conn) < 0 ||
pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* construct the Execute message */
if (pqPutMsgStart('E', false, conn) < 0 ||
pqPuts("", conn) < 0 ||
pqPutInt(0, 4, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* construct the Sync message */
if (pqPutMsgStart('S', false, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* remember we are using extended query protocol */
conn->queryclass = PGQUERY_EXTENDED;
/* and remember the query text too, if possible */
/* if insufficient memory, last_query just winds up NULL */
if (conn->last_query)
free(conn->last_query);
if (command)
conn->last_query = strdup(command);
else
conn->last_query = NULL;
/*
* Give the data a push. In nonblock mode, don't complain if we're unable
* to send it all; PQgetResult() will do any additional flushing needed.
*/
if (pqFlush(conn) < 0)
goto sendFailed;
/* OK, it's launched! */
conn->asyncStatus = PGASYNC_BUSY;
return 1;
sendFailed:
pqHandleSendFailure(conn);
return 0;
}
/*
* pqHandleSendFailure: try to clean up after failure to send command.
*
* Primarily, what we want to accomplish here is to process any ERROR or
* NOTICE messages that the backend might have sent just before it died.
* Since we're in IDLE state, all such messages will get sent to the notice
* processor.
*
* NOTE: this routine should only be called in PGASYNC_IDLE state.
*/
void
pqHandleSendFailure(PGconn *conn)
{
/*
* Accept and parse any available input data, ignoring I/O errors. Note
* that if pqReadData decides the backend has closed the channel, it will
* close our side of the socket --- that's just what we want here.
*/
while (pqReadData(conn) > 0)
parseInput(conn);
/*
* Be sure to parse available input messages even if we read no data.
* (Note: calling parseInput within the above loop isn't really necessary,
* but it prevents buffer bloat if there's a lot of data available.)
*/
parseInput(conn);
}
/*
* Select row-by-row processing mode
*/
int
PQsetSingleRowMode(PGconn *conn)
{
/*
* Only allow setting the flag when we have launched a query and not yet
* received any results.
*/
if (!conn)
return 0;
if (conn->asyncStatus != PGASYNC_BUSY)
return 0;
if (conn->queryclass != PGQUERY_SIMPLE &&
conn->queryclass != PGQUERY_EXTENDED)
return 0;
if (conn->result)
return 0;
/* OK, set flag */
conn->singleRowMode = true;
return 1;
}
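/*
 * Usage sketch (illustrative only, not part of fe-exec.c): stream a large
 * result set row by row.  PQsetSingleRowMode must be called right after the
 * send, before any result has been consumed.
 */
static void
example_single_row_mode(PGconn *conn)
{
	PGresult   *res;

	if (!PQsendQuery(conn, "SELECT generate_series(1, 1000000)") ||
		!PQsetSingleRowMode(conn))
		return;
	while ((res = PQgetResult(conn)) != NULL)
	{
		if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
		{
			/* exactly one row here; process PQgetvalue(res, 0, 0) */
		}
		else if (PQresultStatus(res) != PGRES_TUPLES_OK)
			fprintf(stderr, "%s", PQerrorMessage(conn));
		PQclear(res);
	}
}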
/*
* Consume any available input from the backend
* 0 return: some kind of trouble
* 1 return: no problem
*/
int
PQconsumeInput(PGconn *conn)
{
if (!conn)
return 0;
/*
* for non-blocking connections try to flush the send-queue, otherwise we
* may never get a response for something that may not have already been
* sent because it's in our write buffer!
*/
if (pqIsnonblocking(conn))
{
if (pqFlush(conn) < 0)
return 0;
}
/*
* Load more data, if available. We do this no matter what state we are
* in, since we are probably getting called because the application wants
* to get rid of a read-select condition. Note that we will NOT block
* waiting for more input.
*/
if (pqReadData(conn) < 0)
return 0;
/* Parsing of the data waits till later. */
return 1;
}
/*
* parseInput: if appropriate, parse input data from backend
* until input is exhausted or a stopping state is reached.
* Note that this function will NOT attempt to read more data from the backend.
*/
static void
parseInput(PGconn *conn)
{
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
pqParseInput3(conn);
else
pqParseInput2(conn);
}
/*
* PQisBusy
* Return TRUE if PQgetResult would block waiting for input.
*/
int
PQisBusy(PGconn *conn)
{
if (!conn)
return FALSE;
/* Parse any available data, if our state permits. */
parseInput(conn);
/* PQgetResult will return immediately in all states except BUSY. */
return conn->asyncStatus == PGASYNC_BUSY;
}
/*
* PQgetResult
* Get the next PGresult produced by a query. Returns NULL if no
* query work remains or an error has occurred (e.g. out of
* memory).
*/
PGresult *
PQgetResult(PGconn *conn)
{
PGresult *res;
if (!conn)
return NULL;
/* Parse any available data, if our state permits. */
parseInput(conn);
/* If not ready to return something, block until we are. */
while (conn->asyncStatus == PGASYNC_BUSY)
{
int flushResult;
/*
* If data remains unsent, send it. Else we might be waiting for the
* result of a command the backend hasn't even got yet.
*/
while ((flushResult = pqFlush(conn)) > 0)
{
if (pqWait(FALSE, TRUE, conn))
{
flushResult = -1;
break;
}
}
/* Wait for some more data, and load it. */
if (flushResult ||
pqWait(TRUE, FALSE, conn) ||
pqReadData(conn) < 0)
{
/*
* conn->errorMessage has been set by pqWait or pqReadData. We
* want to append it to any already-received error message.
*/
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_IDLE;
return pqPrepareAsyncResult(conn);
}
/* Parse it. */
parseInput(conn);
}
/* Return the appropriate thing. */
switch (conn->asyncStatus)
{
case PGASYNC_IDLE:
res = NULL; /* query is complete */
break;
case PGASYNC_READY:
res = pqPrepareAsyncResult(conn);
/* Set the state back to BUSY, allowing parsing to proceed. */
conn->asyncStatus = PGASYNC_BUSY;
break;
case PGASYNC_COPY_IN:
res = getCopyResult(conn, PGRES_COPY_IN);
break;
case PGASYNC_COPY_OUT:
res = getCopyResult(conn, PGRES_COPY_OUT);
break;
case PGASYNC_COPY_BOTH:
res = getCopyResult(conn, PGRES_COPY_BOTH);
break;
default:
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("unexpected asyncStatus: %d\n"),
(int) conn->asyncStatus);
res = PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR);
break;
}
if (res)
{
int i;
for (i = 0; i < res->nEvents; i++)
{
PGEventResultCreate evt;
evt.conn = conn;
evt.result = res;
if (!res->events[i].proc(PGEVT_RESULTCREATE, &evt,
res->events[i].passThrough))
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("PGEventProc \"%s\" failed during PGEVT_RESULTCREATE event\n"),
res->events[i].name);
pqSetResultError(res, conn->errorMessage.data);
res->resultStatus = PGRES_FATAL_ERROR;
break;
}
res->events[i].resultInitialized = TRUE;
}
}
return res;
}
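/*
 * Usage sketch (illustrative only, not part of fe-exec.c): a simple-protocol
 * query string may contain several statements, so PQgetResult must be called
 * until it returns NULL, once per statement.
 */
static void
example_multiple_results(PGconn *conn)
{
	PGresult   *res;

	if (!PQsendQuery(conn, "SELECT 1; SELECT 2"))
		return;
	while ((res = PQgetResult(conn)) != NULL)
	{
		if (PQresultStatus(res) == PGRES_TUPLES_OK)
			printf("first column: %s\n", PQgetvalue(res, 0, 0));
		PQclear(res);
	}
}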
/*
* getCopyResult
* Helper for PQgetResult: generate result for COPY-in-progress cases
*/
static PGresult *
getCopyResult(PGconn *conn, ExecStatusType copytype)
{
/*
* If the server connection has been lost, don't pretend everything is
* hunky-dory; instead return a PGRES_FATAL_ERROR result, and reset the
* asyncStatus to idle (corresponding to what we'd do if we'd detected I/O
* error in the earlier steps in PQgetResult). The text returned in the
* result is whatever is in conn->errorMessage; we hope that was filled
* with something relevant when the lost connection was detected.
*/
if (conn->status != CONNECTION_OK)
{
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_IDLE;
return pqPrepareAsyncResult(conn);
}
/* If we have an async result for the COPY, return that */
if (conn->result && conn->result->resultStatus == copytype)
return pqPrepareAsyncResult(conn);
/* Otherwise, invent a suitable PGresult */
return PQmakeEmptyPGresult(conn, copytype);
}
/*
* PQexec
* send a query to the backend and package up the result in a PGresult
*
* If the query was not even sent, return NULL; conn->errorMessage is set to
* a relevant message.
* If the query was sent, a new PGresult is returned (which could indicate
* either success or failure).
* The user is responsible for freeing the PGresult via PQclear()
* when done with it.
*/
PGresult *
PQexec(PGconn *conn, const char *query)
{
if (!PQexecStart(conn))
return NULL;
if (!PQsendQuery(conn, query))
return NULL;
return PQexecFinish(conn);
}
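/*
 * Usage sketch (illustrative only, not part of fe-exec.c): the blocking
 * convenience path.  Note that PQexec returns only the last result of a
 * multi-statement string, per PQexecFinish below.
 */
static void
example_exec(PGconn *conn)
{
	PGresult   *res = PQexec(conn, "SELECT version()");

	if (PQresultStatus(res) == PGRES_TUPLES_OK)
		printf("%s\n", PQgetvalue(res, 0, 0));
	else
		fprintf(stderr, "%s", PQerrorMessage(conn));
	PQclear(res);
}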
/*
* PQexecParams
* Like PQexec, but use protocol 3.0 so we can pass parameters
*/
PGresult *
PQexecParams(PGconn *conn,
const char *command,
int nParams,
const Oid *paramTypes,
const char *const * paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat)
{
if (!PQexecStart(conn))
return NULL;
if (!PQsendQueryParams(conn, command,
nParams, paramTypes, paramValues, paramLengths,
paramFormats, resultFormat))
return NULL;
return PQexecFinish(conn);
}
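/*
 * Usage sketch (illustrative only, not part of fe-exec.c): out-of-line
 * parameters need no quoting or escaping.  Passing NULL for paramTypes lets
 * the server infer the types.
 */
static void
example_exec_params(PGconn *conn)
{
	const char *values[2] = {"42", "hello"};
	PGresult   *res;

	res = PQexecParams(conn,
					   "SELECT $1::int + 1, upper($2::text)",
					   2,		/* nParams */
					   NULL,	/* paramTypes: infer from context */
					   values,
					   NULL,	/* paramLengths: not needed for text */
					   NULL,	/* paramFormats: all text */
					   0);		/* ask for text results */
	if (PQresultStatus(res) == PGRES_TUPLES_OK)
		printf("%s %s\n", PQgetvalue(res, 0, 0), PQgetvalue(res, 0, 1));
	PQclear(res);
}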
/*
* PQprepare
* Creates a prepared statement by issuing a v3.0 parse message.
*
* If the query was not even sent, return NULL; conn->errorMessage is set to
* a relevant message.
* If the query was sent, a new PGresult is returned (which could indicate
* either success or failure).
* The user is responsible for freeing the PGresult via PQclear()
* when done with it.
*/
PGresult *
PQprepare(PGconn *conn,
const char *stmtName, const char *query,
int nParams, const Oid *paramTypes)
{
if (!PQexecStart(conn))
return NULL;
if (!PQsendPrepare(conn, stmtName, query, nParams, paramTypes))
return NULL;
return PQexecFinish(conn);
}
/*
* PQexecPrepared
* Like PQexec, but execute a previously prepared statement,
* using protocol 3.0 so we can pass parameters
*/
PGresult *
PQexecPrepared(PGconn *conn,
const char *stmtName,
int nParams,
const char *const * paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat)
{
if (!PQexecStart(conn))
return NULL;
if (!PQsendQueryPrepared(conn, stmtName,
nParams, paramValues, paramLengths,
paramFormats, resultFormat))
return NULL;
return PQexecFinish(conn);
}
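/*
 * Usage sketch (illustrative only, not part of fe-exec.c): prepare once,
 * execute many times.  The statement name "square" is arbitrary.
 */
static void
example_prepared(PGconn *conn)
{
	const char *values[1] = {"7"};
	PGresult   *res;

	res = PQprepare(conn, "square", "SELECT $1::int * $1::int", 1, NULL);
	if (PQresultStatus(res) != PGRES_COMMAND_OK)
		fprintf(stderr, "%s", PQerrorMessage(conn));
	PQclear(res);
	res = PQexecPrepared(conn, "square", 1, values, NULL, NULL, 0);
	if (PQresultStatus(res) == PGRES_TUPLES_OK)
		printf("7 squared: %s\n", PQgetvalue(res, 0, 0));
	PQclear(res);
}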
/*
* Common code for PQexec and sibling routines: prepare to send command
*/
static bool
PQexecStart(PGconn *conn)
{
PGresult *result;
if (!conn)
return false;
/*
* Silently discard any prior query result that application didn't eat.
* This is probably poor design, but it's here for backward compatibility.
*/
while ((result = PQgetResult(conn)) != NULL)
{
ExecStatusType resultStatus = result->resultStatus;
PQclear(result); /* only need its status */
if (resultStatus == PGRES_COPY_IN)
{
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
{
/* In protocol 3, we can get out of a COPY IN state */
if (PQputCopyEnd(conn,
libpq_gettext("COPY terminated by new PQexec")) < 0)
return false;
/* keep waiting to swallow the copy's failure message */
}
else
{
/* In older protocols we have to punt */
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("COPY IN state must be terminated first\n"));
return false;
}
}
else if (resultStatus == PGRES_COPY_OUT)
{
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
{
/*
* In protocol 3, we can get out of a COPY OUT state: we just
* switch back to BUSY and allow the remaining COPY data to be
* dropped on the floor.
*/
conn->asyncStatus = PGASYNC_BUSY;
/* keep waiting to swallow the copy's completion message */
}
else
{
/* In older protocols we have to punt */
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("COPY OUT state must be terminated first\n"));
return false;
}
}
else if (resultStatus == PGRES_COPY_BOTH)
{
/* We don't allow PQexec during COPY BOTH */
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("PQexec not allowed during COPY BOTH\n"));
return false;
}
/* check for loss of connection, too */
if (conn->status == CONNECTION_BAD)
return false;
}
/* OK to send a command */
return true;
}
/*
* Common code for PQexec and sibling routines: wait for command result
*/
static PGresult *
PQexecFinish(PGconn *conn)
{
PGresult *result;
PGresult *lastResult;
/*
* For backwards compatibility, return the last result if there are more
* than one --- but merge error messages if we get more than one error
* result.
*
* We have to stop if we see copy in/out/both, however. We will resume
* parsing after application performs the data transfer.
*
* Also stop if the connection is lost (else we'll loop infinitely).
*/
lastResult = NULL;
while ((result = PQgetResult(conn)) != NULL)
{
if (lastResult)
{
if (lastResult->resultStatus == PGRES_FATAL_ERROR &&
result->resultStatus == PGRES_FATAL_ERROR)
{
pqCatenateResultError(lastResult, result->errMsg);
PQclear(result);
result = lastResult;
/*
* Make sure PQerrorMessage agrees with concatenated result
*/
resetPQExpBuffer(&conn->errorMessage);
appendPQExpBufferStr(&conn->errorMessage, result->errMsg);
}
else
PQclear(lastResult);
}
lastResult = result;
if (result->resultStatus == PGRES_COPY_IN ||
result->resultStatus == PGRES_COPY_OUT ||
result->resultStatus == PGRES_COPY_BOTH ||
conn->status == CONNECTION_BAD)
break;
}
return lastResult;
}
/*
* PQdescribePrepared
* Obtain information about a previously prepared statement
*
* If the query was not even sent, return NULL; conn->errorMessage is set to
* a relevant message.
* If the query was sent, a new PGresult is returned (which could indicate
* either success or failure). On success, the PGresult contains status
* PGRES_COMMAND_OK, and its parameter and column-heading fields describe
* the statement's inputs and outputs respectively.
* The user is responsible for freeing the PGresult via PQclear()
* when done with it.
*/
PGresult *
PQdescribePrepared(PGconn *conn, const char *stmt)
{
if (!PQexecStart(conn))
return NULL;
if (!PQsendDescribe(conn, 'S', stmt))
return NULL;
return PQexecFinish(conn);
}
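/*
 * Usage sketch (illustrative only, not part of fe-exec.c): inspect a
 * statement's inputs and outputs.  Assumes a statement named "square" was
 * prepared earlier, as in the PQprepare sketch above.
 */
static void
example_describe(PGconn *conn)
{
	PGresult   *res = PQdescribePrepared(conn, "square");

	if (PQresultStatus(res) == PGRES_COMMAND_OK)
	{
		int			i;

		for (i = 0; i < PQnparams(res); i++)
			printf("param %d: type OID %u\n", i, PQparamtype(res, i));
		for (i = 0; i < PQnfields(res); i++)
			printf("column %d: \"%s\"\n", i, PQfname(res, i));
	}
	PQclear(res);
}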
/*
* PQdescribePortal
* Obtain information about a previously created portal
*
* This is much like PQdescribePrepared, except that no parameter info is
* returned. Note that at the moment, libpq doesn't really expose portals
* to the client; but this can be used with a portal created by a SQL
* DECLARE CURSOR command.
*/
PGresult *
PQdescribePortal(PGconn *conn, const char *portal)
{
if (!PQexecStart(conn))
return NULL;
if (!PQsendDescribe(conn, 'P', portal))
return NULL;
return PQexecFinish(conn);
}
/*
* PQsendDescribePrepared
* Submit a Describe Statement command, but don't wait for it to finish
*
* Returns: 1 if successfully submitted
* 0 if error (conn->errorMessage is set)
*/
int
PQsendDescribePrepared(PGconn *conn, const char *stmt)
{
return PQsendDescribe(conn, 'S', stmt);
}
/*
* PQsendDescribePortal
* Submit a Describe Portal command, but don't wait for it to finish
*
* Returns: 1 if successfully submitted
* 0 if error (conn->errorMessage is set)
*/
int
PQsendDescribePortal(PGconn *conn, const char *portal)
{
return PQsendDescribe(conn, 'P', portal);
}
/*
* PQsendDescribe
* Common code to send a Describe command
*
* Available options for desc_type are
* 'S' to describe a prepared statement; or
* 'P' to describe a portal.
* Returns 1 on success and 0 on failure.
*/
static int
PQsendDescribe(PGconn *conn, char desc_type, const char *desc_target)
{
/* Treat null desc_target as empty string */
if (!desc_target)
desc_target = "";
if (!PQsendQueryStart(conn))
return 0;
/* This isn't gonna work on a 2.0 server */
if (PG_PROTOCOL_MAJOR(conn->pversion) < 3)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("function requires at least protocol version 3.0\n"));
return 0;
}
/* construct the Describe message */
if (pqPutMsgStart('D', false, conn) < 0 ||
pqPutc(desc_type, conn) < 0 ||
pqPuts(desc_target, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* construct the Sync message */
if (pqPutMsgStart('S', false, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* remember we are doing a Describe */
conn->queryclass = PGQUERY_DESCRIBE;
/* reset last-query string (not relevant now) */
if (conn->last_query)
{
free(conn->last_query);
conn->last_query = NULL;
}
/*
* Give the data a push. In nonblock mode, don't complain if we're unable
* to send it all; PQgetResult() will do any additional flushing needed.
*/
if (pqFlush(conn) < 0)
goto sendFailed;
/* OK, it's launched! */
conn->asyncStatus = PGASYNC_BUSY;
return 1;
sendFailed:
pqHandleSendFailure(conn);
return 0;
}
/*
* PQnotifies
* returns a PGnotify* structure of the latest async notification
* that has not yet been handled
*
* returns NULL, if there is currently
* no unhandled async notification from the backend
*
* the CALLER is responsible for FREE'ing the structure returned
*/
PGnotify *
PQnotifies(PGconn *conn)
{
PGnotify *event;
if (!conn)
return NULL;
/* Parse any available data to see if we can extract NOTIFY messages. */
parseInput(conn);
event = conn->notifyHead;
if (event)
{
conn->notifyHead = event->next;
if (!conn->notifyHead)
conn->notifyTail = NULL;
event->next = NULL; /* don't let app see the internal state */
}
return event;
}
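/*
 * Usage sketch (illustrative only, not part of fe-exec.c): poll for
 * LISTEN/NOTIFY traffic.  The channel name "my_channel" is arbitrary; a real
 * application would wait on PQsocket(conn) before calling PQconsumeInput.
 */
static void
example_notifies(PGconn *conn)
{
	PGnotify   *note;
	PGresult   *res;

	res = PQexec(conn, "LISTEN my_channel");
	PQclear(res);
	if (!PQconsumeInput(conn))
		return;
	while ((note = PQnotifies(conn)) != NULL)
	{
		printf("NOTIFY \"%s\" from backend PID %d\n",
			   note->relname, note->be_pid);
		PQfreemem(note);
	}
}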
/*
* PQputCopyData - send some data to the backend during COPY IN or COPY BOTH
*
* Returns 1 if successful, 0 if data could not be sent (only possible
* in nonblock mode), or -1 if an error occurs.
*/
int
PQputCopyData(PGconn *conn, const char *buffer, int nbytes)
{
if (!conn)
return -1;
if (conn->asyncStatus != PGASYNC_COPY_IN &&
conn->asyncStatus != PGASYNC_COPY_BOTH)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("no COPY in progress\n"));
return -1;
}
/*
* Process any NOTICE or NOTIFY messages that might be pending in the
* input buffer. Since the server might generate many notices during the
* COPY, we want to clean those out reasonably promptly to prevent
* indefinite expansion of the input buffer. (Note: the actual read of
* input data into the input buffer happens down inside pqSendSome, but
* it's not authorized to get rid of the data again.)
*/
parseInput(conn);
if (nbytes > 0)
{
/*
* Try to flush any previously sent data in preference to growing the
* output buffer. If we can't enlarge the buffer enough to hold the
* data, return 0 in the nonblock case, else hard error. (For
* simplicity, always assume 5 bytes of overhead even in protocol 2.0
* case.)
*/
if ((conn->outBufSize - conn->outCount - 5) < nbytes)
{
if (pqFlush(conn) < 0)
return -1;
if (pqCheckOutBufferSpace(conn->outCount + 5 + (size_t) nbytes,
conn))
return pqIsnonblocking(conn) ? 0 : -1;
}
/* Send the data (too simple to delegate to fe-protocol files) */
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
{
if (pqPutMsgStart('d', false, conn) < 0 ||
pqPutnchar(buffer, nbytes, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
return -1;
}
else
{
if (pqPutMsgStart(0, false, conn) < 0 ||
pqPutnchar(buffer, nbytes, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
return -1;
}
}
return 1;
}
/*
* PQputCopyEnd - send EOF indication to the backend during COPY IN
*
* After calling this, use PQgetResult() to check command completion status.
*
* Returns 1 if successful, 0 if data could not be sent (only possible
* in nonblock mode), or -1 if an error occurs.
*/
int
PQputCopyEnd(PGconn *conn, const char *errormsg)
{
if (!conn)
return -1;
if (conn->asyncStatus != PGASYNC_COPY_IN &&
conn->asyncStatus != PGASYNC_COPY_BOTH)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("no COPY in progress\n"));
return -1;
}
/*
* Send the COPY END indicator. This is simple enough that we don't
* bother delegating it to the fe-protocol files.
*/
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
{
if (errormsg)
{
/* Send COPY FAIL */
if (pqPutMsgStart('f', false, conn) < 0 ||
pqPuts(errormsg, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
return -1;
}
else
{
/* Send COPY DONE */
if (pqPutMsgStart('c', false, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
return -1;
}
/*
* If we sent the COPY command in extended-query mode, we must issue a
* Sync as well.
*/
if (conn->queryclass != PGQUERY_SIMPLE)
{
if (pqPutMsgStart('S', false, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
return -1;
}
}
else
{
if (errormsg)
{
/* Oops, no way to do this in 2.0 */
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("function requires at least protocol version 3.0\n"));
return -1;
}
else
{
/* Send old-style end-of-data marker */
if (pqPutMsgStart(0, false, conn) < 0 ||
pqPutnchar("\\.\n", 3, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
return -1;
}
}
/* Return to active duty */
if (conn->asyncStatus == PGASYNC_COPY_BOTH)
conn->asyncStatus = PGASYNC_COPY_OUT;
else
conn->asyncStatus = PGASYNC_BUSY;
resetPQExpBuffer(&conn->errorMessage);
/* Try to flush data */
if (pqFlush(conn) < 0)
return -1;
return 1;
}
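/*
 * Usage sketch (illustrative only, not part of fe-exec.c): push one row of
 * COPY data.  The table "people" is hypothetical.  Passing NULL to
 * PQputCopyEnd commits the copy; passing a message aborts it with COPY FAIL.
 */
static void
example_copy_in(PGconn *conn)
{
	const char *line = "1\talice\n";
	PGresult   *res;

	res = PQexec(conn, "COPY people (id, name) FROM STDIN");
	if (PQresultStatus(res) != PGRES_COPY_IN)
	{
		PQclear(res);
		return;
	}
	PQclear(res);
	if (PQputCopyData(conn, line, (int) strlen(line)) != 1 ||
		PQputCopyEnd(conn, NULL) != 1)
		fprintf(stderr, "%s", PQerrorMessage(conn));
	/* fetch the copy's command-completion status */
	while ((res = PQgetResult(conn)) != NULL)
	{
		if (PQresultStatus(res) != PGRES_COMMAND_OK)
			fprintf(stderr, "%s", PQerrorMessage(conn));
		PQclear(res);
	}
}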
/*
* PQgetCopyData - read a row of data from the backend during COPY OUT
* or COPY BOTH
*
* If successful, sets *buffer to point to a malloc'd row of data, and
* returns row length (always > 0) as result.
* Returns 0 if no row available yet (only possible if async is true),
* -1 if end of copy (consult PQgetResult), or -2 if error (consult
* PQerrorMessage).
*/
int
PQgetCopyData(PGconn *conn, char **buffer, int async)
{
*buffer = NULL; /* for all failure cases */
if (!conn)
return -2;
if (conn->asyncStatus != PGASYNC_COPY_OUT &&
conn->asyncStatus != PGASYNC_COPY_BOTH)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("no COPY in progress\n"));
return -2;
}
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
return pqGetCopyData3(conn, buffer, async);
else
return pqGetCopyData2(conn, buffer, async);
}
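/*
 * Usage sketch (illustrative only, not part of fe-exec.c): drain a COPY OUT
 * stream.  The table "people" is hypothetical; async = 0 means each call
 * blocks until a full row is available.
 */
static void
example_copy_out(PGconn *conn)
{
	PGresult   *res;
	char	   *buf;
	int			n;

	res = PQexec(conn, "COPY people TO STDOUT");
	if (PQresultStatus(res) != PGRES_COPY_OUT)
	{
		PQclear(res);
		return;
	}
	PQclear(res);
	while ((n = PQgetCopyData(conn, &buf, 0)) > 0)
	{
		fwrite(buf, 1, n, stdout);
		PQfreemem(buf);
	}
	if (n == -2)
		fprintf(stderr, "%s", PQerrorMessage(conn));
	/* n == -1: end of copy; drain the final command status */
	while ((res = PQgetResult(conn)) != NULL)
		PQclear(res);
}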
/*
* PQgetline - gets a newline-terminated string from the backend.
*
* Chiefly here so that applications can use "COPY <rel> to stdout"
* and read the output string. Returns a null-terminated string in s.
*
* XXX this routine is now deprecated, because it can't handle binary data.
* If called during a COPY BINARY we return EOF.
*
* PQgetline reads up to maxlen-1 characters (like fgets(3)) but strips
* the terminating \n (like gets(3)).
*
* CAUTION: the caller is responsible for detecting the end-of-copy signal
* (a line containing just "\.") when using this routine.
*
* RETURNS:
* EOF if error (eg, invalid arguments are given)
* 0 if EOL is reached (i.e., \n has been read)
* (this is required for backward-compatibility -- this
* routine used to always return EOF or 0, assuming that
* the line ended within maxlen bytes.)
* 1 in other cases (i.e., the buffer was filled before \n is reached)
*/
int
PQgetline(PGconn *conn, char *s, int maxlen)
{
if (!s || maxlen <= 0)
return EOF;
*s = '\0';
/* maxlen must be at least 3 to hold the \. terminator! */
if (maxlen < 3)
return EOF;
if (!conn)
return EOF;
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
return pqGetline3(conn, s, maxlen);
else
return pqGetline2(conn, s, maxlen);
}
/*
* PQgetlineAsync - gets a COPY data row without blocking.
*
* This routine is for applications that want to do "COPY <rel> to stdout"
* asynchronously, that is without blocking. Having issued the COPY command
* and gotten a PGRES_COPY_OUT response, the app should call PQconsumeInput
* and this routine until the end-of-data signal is detected. Unlike
* PQgetline, this routine takes responsibility for detecting end-of-data.
*
* On each call, PQgetlineAsync will return data if a complete data row
* is available in libpq's input buffer. Otherwise, no data is returned
* until the rest of the row arrives.
*
* If -1 is returned, the end-of-data signal has been recognized (and removed
* from libpq's input buffer). The caller *must* next call PQendcopy and
* then return to normal processing.
*
* RETURNS:
* -1 if the end-of-copy-data marker has been recognized
* 0 if no data is available
* >0 the number of bytes returned.
*
* The data returned will not extend beyond a data-row boundary. If possible
* a whole row will be returned at one time. But if the buffer offered by
* the caller is too small to hold a row sent by the backend, then a partial
* data row will be returned. In text mode this can be detected by testing
* whether the last returned byte is '\n' or not.
*
* The returned data is *not* null-terminated.
*/
int
PQgetlineAsync(PGconn *conn, char *buffer, int bufsize)
{
if (!conn)
return -1;
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
return pqGetlineAsync3(conn, buffer, bufsize);
else
return pqGetlineAsync2(conn, buffer, bufsize);
}
/*
* PQputline -- sends a string to the backend during COPY IN.
* Returns 0 if OK, EOF if not.
*
* This is deprecated primarily because the return convention doesn't allow
* caller to tell the difference between a hard error and a nonblock-mode
* send failure.
*/
int
PQputline(PGconn *conn, const char *s)
{
return PQputnbytes(conn, s, strlen(s));
}
/*
* PQputnbytes -- like PQputline, but buffer need not be null-terminated.
* Returns 0 if OK, EOF if not.
*/
int
PQputnbytes(PGconn *conn, const char *buffer, int nbytes)
{
if (PQputCopyData(conn, buffer, nbytes) > 0)
return 0;
else
return EOF;
}
/*
* PQendcopy
* After completing the data transfer portion of a copy in/out,
* the application must call this routine to finish the command protocol.
*
* When using protocol 3.0 this is deprecated; it's cleaner to use PQgetResult
* to get the transfer status. Note however that when using 2.0 protocol,
* recovering from a copy failure often requires a PQreset. PQendcopy will
* take care of that, PQgetResult won't.
*
* RETURNS:
* 0 on success
* 1 on failure
*/
int
PQendcopy(PGconn *conn)
{
if (!conn)
return 0;
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
return pqEndcopy3(conn);
else
return pqEndcopy2(conn);
}
/* ----------------
* PQfn - Send a function call to the POSTGRES backend.
*
* conn : backend connection
* fnid : OID of function to be called
* result_buf : pointer to result buffer
* result_len : actual length of result is returned here
* result_is_int : If the result is an integer, this must be 1,
* otherwise this should be 0
* args : pointer to an array of function arguments
* (each has length, if integer, and value/pointer)
* nargs : # of arguments in args array.
*
* RETURNS
* PGresult with status = PGRES_COMMAND_OK if successful.
* *result_len is > 0 if there is a return value, 0 if not.
* PGresult with status = PGRES_FATAL_ERROR if backend returns an error.
* NULL on communications failure. conn->errorMessage will be set.
* ----------------
*/
PGresult *
PQfn(PGconn *conn,
int fnid,
int *result_buf,
int *result_len,
int result_is_int,
const PQArgBlock *args,
int nargs)
{
*result_len = 0;
if (!conn)
return NULL;
/* clear the error string */
resetPQExpBuffer(&conn->errorMessage);
if (conn->sock == PGINVALID_SOCKET || conn->asyncStatus != PGASYNC_IDLE ||
conn->result != NULL)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("connection in wrong state\n"));
return NULL;
}
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
return pqFunctionCall3(conn, fnid,
result_buf, result_len,
result_is_int,
args, nargs);
else
return pqFunctionCall2(conn, fnid,
result_buf, result_len,
result_is_int,
args, nargs);
}
/* ====== accessor funcs for PGresult ======== */
ExecStatusType
PQresultStatus(const PGresult *res)
{
if (!res)
return PGRES_FATAL_ERROR;
return res->resultStatus;
}
char *
PQresStatus(ExecStatusType status)
{
if ((unsigned int) status >= sizeof pgresStatus / sizeof pgresStatus[0])
return libpq_gettext("invalid ExecStatusType code");
return pgresStatus[status];
}
char *
PQresultErrorMessage(const PGresult *res)
{
if (!res || !res->errMsg)
return "";
return res->errMsg;
}
char *
PQresultVerboseErrorMessage(const PGresult *res,
PGVerbosity verbosity,
PGContextVisibility show_context)
{
PQExpBufferData workBuf;
/*
* Because the caller is expected to free the result string, we must
* strdup any constant result. We use plain strdup and document that
* callers should expect NULL if out-of-memory.
*/
if (!res ||
(res->resultStatus != PGRES_FATAL_ERROR &&
res->resultStatus != PGRES_NONFATAL_ERROR))
return strdup(libpq_gettext("PGresult is not an error result\n"));
initPQExpBuffer(&workBuf);
/*
* Currently, we pass this off to fe-protocol3.c in all cases; it will
* behave reasonably sanely with an error reported by fe-protocol2.c as
* well. If necessary, we could record the protocol version in PGresults
* so as to be able to invoke a version-specific message formatter, but
* for now there's no need.
*/
pqBuildErrorMessage3(&workBuf, res, verbosity, show_context);
/* If insufficient memory to format the message, fail cleanly */
if (PQExpBufferDataBroken(workBuf))
{
termPQExpBuffer(&workBuf);
return strdup(libpq_gettext("out of memory\n"));
}
return workBuf.data;
}
char *
PQresultErrorField(const PGresult *res, int fieldcode)
{
PGMessageField *pfield;
if (!res)
return NULL;
for (pfield = res->errFields; pfield != NULL; pfield = pfield->next)
{
if (pfield->code == fieldcode)
return pfield->contents;
}
return NULL;
}
int
PQntuples(const PGresult *res)
{
if (!res)
return 0;
return res->ntups;
}
int
PQnfields(const PGresult *res)
{
if (!res)
return 0;
return res->numAttributes;
}
int
PQbinaryTuples(const PGresult *res)
{
if (!res)
return 0;
return res->binary;
}
/*
* Helper routines to range-check field numbers and tuple numbers.
* Return TRUE if OK, FALSE if not
*/
static int
check_field_number(const PGresult *res, int field_num)
{
if (!res)
return FALSE; /* no way to display error message... */
if (field_num < 0 || field_num >= res->numAttributes)
{
pqInternalNotice(&res->noticeHooks,
"column number %d is out of range 0..%d",
field_num, res->numAttributes - 1);
return FALSE;
}
return TRUE;
}
static int
check_tuple_field_number(const PGresult *res,
int tup_num, int field_num)
{
if (!res)
return FALSE; /* no way to display error message... */
if (tup_num < 0 || tup_num >= res->ntups)
{
pqInternalNotice(&res->noticeHooks,
"row number %d is out of range 0..%d",
tup_num, res->ntups - 1);
return FALSE;
}
if (field_num < 0 || field_num >= res->numAttributes)
{
pqInternalNotice(&res->noticeHooks,
"column number %d is out of range 0..%d",
field_num, res->numAttributes - 1);
return FALSE;
}
return TRUE;
}
static int
check_param_number(const PGresult *res, int param_num)
{
if (!res)
return FALSE; /* no way to display error message... */
if (param_num < 0 || param_num >= res->numParameters)
{
pqInternalNotice(&res->noticeHooks,
"parameter number %d is out of range 0..%d",
param_num, res->numParameters - 1);
return FALSE;
}
return TRUE;
}
/*
* returns NULL if the field_num is invalid
*/
char *
PQfname(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return NULL;
if (res->attDescs)
return res->attDescs[field_num].name;
else
return NULL;
}
/*
* PQfnumber: find column number given column name
*
* The column name is parsed as if it were in a SQL statement, including
* case-folding and double-quote processing. But note a possible gotcha:
* downcasing in the frontend might follow different locale rules than
* downcasing in the backend...
*
* Returns -1 if no match. In the present backend it is also possible
* to have multiple matches, in which case the first one is found.
*/
int
PQfnumber(const PGresult *res, const char *field_name)
{
char *field_case;
bool in_quotes;
bool all_lower = true;
const char *iptr;
char *optr;
int i;
if (!res)
return -1;
/*
* Note: it is correct to reject a zero-length input string; the proper
* input to match a zero-length field name would be "".
*/
if (field_name == NULL ||
field_name[0] == '\0' ||
res->attDescs == NULL)
return -1;
/*
* Check if we can avoid the strdup() and related work because the
* passed-in string wouldn't be changed before we do the check anyway.
*/
for (iptr = field_name; *iptr; iptr++)
{
char c = *iptr;
if (c == '"' || c != pg_tolower((unsigned char) c))
{
all_lower = false;
break;
}
}
if (all_lower)
for (i = 0; i < res->numAttributes; i++)
if (strcmp(field_name, res->attDescs[i].name) == 0)
return i;
/* Fall through to the normal check if that didn't work out. */
/*
* Note: this code will not reject partially quoted strings, eg
* foo"BAR"foo will become fooBARfoo when it probably ought to be an error
* condition.
*/
field_case = strdup(field_name);
if (field_case == NULL)
return -1; /* grotty */
in_quotes = false;
optr = field_case;
for (iptr = field_case; *iptr; iptr++)
{
char c = *iptr;
if (in_quotes)
{
if (c == '"')
{
if (iptr[1] == '"')
{
/* doubled quotes become a single quote */
*optr++ = '"';
iptr++;
}
else
in_quotes = false;
}
else
*optr++ = c;
}
else if (c == '"')
in_quotes = true;
else
{
c = pg_tolower((unsigned char) c);
*optr++ = c;
}
}
*optr = '\0';
for (i = 0; i < res->numAttributes; i++)
{
if (strcmp(field_case, res->attDescs[i].name) == 0)
{
free(field_case);
return i;
}
}
free(field_case);
return -1;
}
Oid
PQftable(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return InvalidOid;
if (res->attDescs)
return res->attDescs[field_num].tableid;
else
return InvalidOid;
}
int
PQftablecol(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return 0;
if (res->attDescs)
return res->attDescs[field_num].columnid;
else
return 0;
}
int
PQfformat(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return 0;
if (res->attDescs)
return res->attDescs[field_num].format;
else
return 0;
}
Oid
PQftype(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return InvalidOid;
if (res->attDescs)
return res->attDescs[field_num].typid;
else
return InvalidOid;
}
int
PQfsize(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return 0;
if (res->attDescs)
return res->attDescs[field_num].typlen;
else
return 0;
}
int
PQfmod(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return 0;
if (res->attDescs)
return res->attDescs[field_num].atttypmod;
else
return 0;
}
char *
PQcmdStatus(PGresult *res)
{
if (!res)
return NULL;
return res->cmdStatus;
}
/*
* PQoidStatus -
* if the last command was an INSERT, return the oid string
* if not, return ""
*/
char *
PQoidStatus(const PGresult *res)
{
/*
* This must be enough to hold the result. Don't laugh, this is better
* than what this function used to do.
*/
static char buf[24];
size_t len;
if (!res || strncmp(res->cmdStatus, "INSERT ", 7) != 0)
return "";
len = strspn(res->cmdStatus + 7, "0123456789");
if (len > sizeof(buf) - 1)
len = sizeof(buf) - 1;
memcpy(buf, res->cmdStatus + 7, len);
buf[len] = '\0';
return buf;
}
/*
* PQoidValue -
* a perhaps preferable form of the above which just returns
* an Oid type
*/
Oid
PQoidValue(const PGresult *res)
{
char *endptr = NULL;
unsigned long result;
if (!res ||
strncmp(res->cmdStatus, "INSERT ", 7) != 0 ||
res->cmdStatus[7] < '0' ||
res->cmdStatus[7] > '9')
return InvalidOid;
result = strtoul(res->cmdStatus + 7, &endptr, 10);
if (!endptr || (*endptr != ' ' && *endptr != '\0'))
return InvalidOid;
else
return (Oid) result;
}
/*
* PQcmdTuples -
* If the last command was INSERT/UPDATE/DELETE/MOVE/FETCH/COPY, return
* a string containing the number of inserted/affected tuples. If not,
* return "".
*
* XXX: this should probably return an int
*/
char *
PQcmdTuples(PGresult *res)
{
char *p,
*c;
if (!res)
return "";
if (strncmp(res->cmdStatus, "INSERT ", 7) == 0)
{
p = res->cmdStatus + 7;
/* INSERT: skip oid and space */
while (*p && *p != ' ')
p++;
if (*p == 0)
goto interpret_error; /* no space? */
p++;
}
else if (strncmp(res->cmdStatus, "SELECT ", 7) == 0 ||
strncmp(res->cmdStatus, "DELETE ", 7) == 0 ||
strncmp(res->cmdStatus, "UPDATE ", 7) == 0)
p = res->cmdStatus + 7;
else if (strncmp(res->cmdStatus, "FETCH ", 6) == 0)
p = res->cmdStatus + 6;
else if (strncmp(res->cmdStatus, "MOVE ", 5) == 0 ||
strncmp(res->cmdStatus, "COPY ", 5) == 0)
p = res->cmdStatus + 5;
else
return "";
/* check that we have an integer (at least one digit, nothing else) */
for (c = p; *c; c++)
{
if (!isdigit((unsigned char) *c))
goto interpret_error;
}
if (c == p)
goto interpret_error;
return p;
interpret_error:
pqInternalNotice(&res->noticeHooks,
"could not interpret result from server: %s",
res->cmdStatus);
return "";
}
/*
* PQgetvalue:
* return the value of field 'field_num' of row 'tup_num'
*/
char *
PQgetvalue(const PGresult *res, int tup_num, int field_num)
{
if (!check_tuple_field_number(res, tup_num, field_num))
return NULL;
return res->tuples[tup_num][field_num].value;
}
/* PQgetlength:
* returns the actual length of a field value in bytes.
*/
int
PQgetlength(const PGresult *res, int tup_num, int field_num)
{
if (!check_tuple_field_number(res, tup_num, field_num))
return 0;
if (res->tuples[tup_num][field_num].len != NULL_LEN)
return res->tuples[tup_num][field_num].len;
else
return 0;
}
/* PQgetisnull:
* returns the null status of a field value.
*/
int
PQgetisnull(const PGresult *res, int tup_num, int field_num)
{
if (!check_tuple_field_number(res, tup_num, field_num))
return 1; /* pretend it is null */
if (res->tuples[tup_num][field_num].len == NULL_LEN)
return 1;
else
return 0;
}
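/*
 * Usage sketch (illustrative only, not part of fe-exec.c): the accessor
 * functions in combination.  Looking the column up once with PQfnumber
 * avoids a per-row name search.
 */
static void
example_accessors(PGconn *conn)
{
	PGresult   *res = PQexec(conn, "SELECT relname, relpages FROM pg_class");

	if (PQresultStatus(res) == PGRES_TUPLES_OK)
	{
		int			col = PQfnumber(res, "relname");
		int			row;

		for (row = 0; row < PQntuples(res); row++)
		{
			if (!PQgetisnull(res, row, col))
				printf("%s\n", PQgetvalue(res, row, col));
		}
	}
	PQclear(res);
}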
/* PQnparams:
* returns the number of input parameters of a prepared statement.
*/
int
PQnparams(const PGresult *res)
{
if (!res)
return 0;
return res->numParameters;
}
/* PQparamtype:
* returns type Oid of the specified statement parameter.
*/
Oid
PQparamtype(const PGresult *res, int param_num)
{
if (!check_param_number(res, param_num))
return InvalidOid;
if (res->paramDescs)
return res->paramDescs[param_num].typid;
else
return InvalidOid;
}
/* PQsetnonblocking:
* sets the PGconn's database connection non-blocking if the arg is TRUE
* or makes it blocking if the arg is FALSE, this will not protect
* you from PQexec(), you'll only be safe when using the non-blocking API.
* Needs to be called only on a connected database connection.
*/
int
PQsetnonblocking(PGconn *conn, int arg)
{
bool barg;
if (!conn || conn->status == CONNECTION_BAD)
return -1;
barg = (arg ? TRUE : FALSE);
/* early out if the socket is already in the state requested */
if (barg == conn->nonblocking)
return 0;
/*
* to guarantee constancy for flushing/query/result-polling behavior we
* need to flush the send queue at this point in order to guarantee proper
* behavior. this is ok because either they are making a transition _from_
* or _to_ blocking mode, either way we can block them.
*/
/* if we are going from blocking to non-blocking flush here */
if (pqFlush(conn))
return -1;
conn->nonblocking = barg;
return 0;
}
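/*
 * Usage sketch (illustrative only, not part of fe-exec.c): nonblocking
 * sends.  PQflush returns 1 while data remains queued; a real application
 * would wait for write-ready on PQsocket(conn) between attempts instead of
 * spinning.
 */
static void
example_nonblocking(PGconn *conn)
{
	int			rc;

	if (PQsetnonblocking(conn, 1) != 0)
		return;
	if (!PQsendQuery(conn, "SELECT 1"))
		return;
	while ((rc = PQflush(conn)) == 1)
		 /* wait for write-ready here */ ;
	if (rc == -1)
		fprintf(stderr, "%s", PQerrorMessage(conn));
}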
/*
* return the blocking status of the database connection
* TRUE == nonblocking, FALSE == blocking
*/
int
PQisnonblocking(const PGconn *conn)
{
return pqIsnonblocking(conn);
}
/* libpq is thread-safe? */
int
PQisthreadsafe(void)
{
#ifdef ENABLE_THREAD_SAFETY
return true;
#else
return false;
#endif
}
/* try to force data out, really only useful for non-blocking users */
int
PQflush(PGconn *conn)
{
return pqFlush(conn);
}
/*
* PQfreemem - safely frees memory allocated
*
* Needed mostly by Win32, unless multithreaded DLL (/MD in VC6)
* Used for freeing memory from PQescapeBytea()/PQunescapeBytea()
*/
void
PQfreemem(void *ptr)
{
free(ptr);
}
/*
* PQfreeNotify - frees the memory associated with a PGnotify
*
* This function is here only for binary backward compatibility.
* New code should use PQfreemem(). A macro will automatically map
* calls to PQfreemem. It should be removed in the future. bjm 2003-03-24
*/
#undef PQfreeNotify
void PQfreeNotify(PGnotify *notify);
void
PQfreeNotify(PGnotify *notify)
{
PQfreemem(notify);
}
/*
* Escaping arbitrary strings to get valid SQL literal strings.
*
* Replaces "'" with "''", and if not std_strings, replaces "\" with "\\".
*
* length is the length of the source string. (Note: if a terminating NUL
* is encountered sooner, PQescapeString stops short of "length"; the behavior
* is thus rather like strncpy.)
*
* For safety the buffer at "to" must be at least 2*length + 1 bytes long.
* A terminating NUL character is added to the output string, whether the
* input is NUL-terminated or not.
*
* Returns the actual length of the output (not counting the terminating NUL).
*/
static size_t
PQescapeStringInternal(PGconn *conn,
char *to, const char *from, size_t length,
int *error,
int encoding, bool std_strings)
{
const char *source = from;
char *target = to;
size_t remaining = length;
if (error)
*error = 0;
while (remaining > 0 && *source != '\0')
{
char c = *source;
int len;
int i;
/* Fast path for plain ASCII */
if (!IS_HIGHBIT_SET(c))
{
/* Apply quoting if needed */
if (SQL_STR_DOUBLE(c, !std_strings))
*target++ = c;
/* Copy the character */
*target++ = c;
source++;
remaining--;
continue;
}
/* Slow path for possible multibyte characters */
len = pg_encoding_mblen(encoding, source);
/* Copy the character */
for (i = 0; i < len; i++)
{
if (remaining == 0 || *source == '\0')
break;
*target++ = *source++;
remaining--;
}
/*
* If we hit premature end of string (ie, incomplete multibyte
* character), try to pad out to the correct length with spaces. We
* may not be able to pad completely, but we will always be able to
* insert at least one pad space (since we'd not have quoted a
* multibyte character). This should be enough to make a string that
* the server will error out on.
*/
if (i < len)
{
if (error)
*error = 1;
if (conn)
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("incomplete multibyte character\n"));
for (; i < len; i++)
{
if (((size_t) (target - to)) / 2 >= length)
break;
*target++ = ' ';
}
break;
}
}
/* Write the terminating NUL character. */
*target = '\0';
return target - to;
}
size_t
PQescapeStringConn(PGconn *conn,
char *to, const char *from, size_t length,
int *error)
{
if (!conn)
{
/* force empty-string result */
*to = '\0';
if (error)
*error = 1;
return 0;
}
return PQescapeStringInternal(conn, to, from, length, error,
conn->client_encoding,
conn->std_strings);
}
size_t
PQescapeString(char *to, const char *from, size_t length)
{
return PQescapeStringInternal(NULL, to, from, length, NULL,
static_client_encoding,
static_std_strings);
}
/*
* Escape arbitrary strings. If as_ident is true, we escape the result
* as an identifier; if false, as a literal. The result is returned in
* a newly allocated buffer. If we fail due to an encoding violation or out
* of memory condition, we return NULL, storing an error message into conn.
*/
static char *
PQescapeInternal(PGconn *conn, const char *str, size_t len, bool as_ident)
{
const char *s;
char *result;
char *rp;
int num_quotes = 0; /* single or double, depending on as_ident */
int num_backslashes = 0;
int input_len;
int result_size;
char quote_char = as_ident ? '"' : '\'';
/* We must have a connection, else fail immediately. */
if (!conn)
return NULL;
/* Scan the string for characters that must be escaped. */
for (s = str; (s - str) < len && *s != '\0'; ++s)
{
if (*s == quote_char)
++num_quotes;
else if (*s == '\\')
++num_backslashes;
else if (IS_HIGHBIT_SET(*s))
{
int charlen;
/* Slow path for possible multibyte characters */
charlen = pg_encoding_mblen(conn->client_encoding, s);
/* Multibyte character overruns allowable length. */
if ((s - str) + charlen > len || memchr(s, 0, charlen) != NULL)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("incomplete multibyte character\n"));
return NULL;
}
/* Adjust s, bearing in mind that for loop will increment it. */
s += charlen - 1;
}
}
/* Allocate output buffer. */
input_len = s - str;
result_size = input_len + num_quotes + 3; /* two quotes, plus a NUL */
if (!as_ident && num_backslashes > 0)
result_size += num_backslashes + 2;
result = rp = (char *) malloc(result_size);
if (rp == NULL)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("out of memory\n"));
return NULL;
}
/*
* If we are escaping a literal that contains backslashes, we use the
* escape string syntax so that the result is correct under either value
* of standard_conforming_strings. We also emit a leading space in this
* case, to guard against the possibility that the result might be
* interpolated immediately following an identifier.
*/
if (!as_ident && num_backslashes > 0)
{
*rp++ = ' ';
*rp++ = 'E';
}
/* Opening quote. */
*rp++ = quote_char;
/*
* Use fast path if possible.
*
* We've already verified that the input string is well-formed in the
* current encoding. If it contains no quotes and, in the case of
* literal-escaping, no backslashes, then we can just copy it directly to
* the output buffer, adding the necessary quotes.
*
* If not, we must rescan the input and process each character
* individually.
*/
if (num_quotes == 0 && (num_backslashes == 0 || as_ident))
{
memcpy(rp, str, input_len);
rp += input_len;
}
else
{
for (s = str; s - str < input_len; ++s)
{
if (*s == quote_char || (!as_ident && *s == '\\'))
{
*rp++ = *s;
*rp++ = *s;
}
else if (!IS_HIGHBIT_SET(*s))
*rp++ = *s;
else
{
int i = pg_encoding_mblen(conn->client_encoding, s);
while (1)
{
*rp++ = *s;
if (--i == 0)
break;
++s; /* for loop will provide the final increment */
}
}
}
}
/* Closing quote and terminating NUL. */
*rp++ = quote_char;
*rp = '\0';
return result;
}
char *
PQescapeLiteral(PGconn *conn, const char *str, size_t len)
{
return PQescapeInternal(conn, str, len, false);
}
char *
PQescapeIdentifier(PGconn *conn, const char *str, size_t len)
{
return PQescapeInternal(conn, str, len, true);
}
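/*
 * Usage sketch (illustrative only, not part of fe-exec.c): building a query
 * from untrusted input.  Both functions return fully quoted strings that can
 * be spliced in directly; the table and column names here are hypothetical.
 */
static void
example_escaping(PGconn *conn, const char *table, const char *value)
{
	char	   *qtable = PQescapeIdentifier(conn, table, strlen(table));
	char	   *qvalue = PQescapeLiteral(conn, value, strlen(value));

	if (qtable && qvalue)
	{
		char		query[256];
		PGresult   *res;

		snprintf(query, sizeof(query),
				 "SELECT count(*) FROM %s WHERE name = %s", qtable, qvalue);
		res = PQexec(conn, query);
		PQclear(res);
	}
	PQfreemem(qtable);
	PQfreemem(qvalue);
}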
/* HEX encoding support for bytea */
static const char hextbl[] = "0123456789abcdef";
static const int8 hexlookup[128] = {
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -1, -1, -1, -1, -1, -1,
-1, 10, 11, 12, 13, 14, 15, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, 10, 11, 12, 13, 14, 15, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
};
static inline char
get_hex(char c)
{
int res = -1;
if (c > 0 && c < 127)
res = hexlookup[(unsigned char) c];
return (char) res;
}
/*
* PQescapeBytea - converts from binary string to the
* minimal encoding necessary to include the string in an SQL
* INSERT statement with a bytea type column as the target.
*
* We can use either hex or escape (traditional) encoding.
* In escape mode, the following transformations are applied:
* '\0' == ASCII 0 == \000
* '\'' == ASCII 39 == ''
* '\\' == ASCII 92 == \\
* anything < 0x20, or > 0x7e ---> \ooo
* (where ooo is an octal expression)
*
* If not std_strings, all backslashes sent to the output are doubled.
*/
static unsigned char *
PQescapeByteaInternal(PGconn *conn,
const unsigned char *from, size_t from_length,
size_t *to_length, bool std_strings, bool use_hex)
{
const unsigned char *vp;
unsigned char *rp;
unsigned char *result;
size_t i;
size_t len;
size_t bslash_len = (std_strings ? 1 : 2);
/*
* empty string has 1 char ('\0')
*/
len = 1;
if (use_hex)
{
len += bslash_len + 1 + 2 * from_length;
}
else
{
vp = from;
for (i = from_length; i > 0; i--, vp++)
{
if (*vp < 0x20 || *vp > 0x7e)
len += bslash_len + 3;
else if (*vp == '\'')
len += 2;
else if (*vp == '\\')
len += bslash_len + bslash_len;
else
len++;
}
}
*to_length = len;
rp = result = (unsigned char *) malloc(len);
if (rp == NULL)
{
if (conn)
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("out of memory\n"));
return NULL;
}
if (use_hex)
{
if (!std_strings)
*rp++ = '\\';
*rp++ = '\\';
*rp++ = 'x';
}
vp = from;
for (i = from_length; i > 0; i--, vp++)
{
unsigned char c = *vp;
if (use_hex)
{
*rp++ = hextbl[(c >> 4) & 0xF];
*rp++ = hextbl[c & 0xF];
}
else if (c < 0x20 || c > 0x7e)
{
if (!std_strings)
*rp++ = '\\';
*rp++ = '\\';
*rp++ = (c >> 6) + '0';
*rp++ = ((c >> 3) & 07) + '0';
*rp++ = (c & 07) + '0';
}
else if (c == '\'')
{
*rp++ = '\'';
*rp++ = '\'';
}
else if (c == '\\')
{
if (!std_strings)
{
*rp++ = '\\';
*rp++ = '\\';
}
*rp++ = '\\';
*rp++ = '\\';
}
else
*rp++ = c;
}
*rp = '\0';
return result;
}
unsigned char *
PQescapeByteaConn(PGconn *conn,
const unsigned char *from, size_t from_length,
size_t *to_length)
{
if (!conn)
return NULL;
return PQescapeByteaInternal(conn, from, from_length, to_length,
conn->std_strings,
(conn->sversion >= 90000));
}
unsigned char *
PQescapeBytea(const unsigned char *from, size_t from_length, size_t *to_length)
{
return PQescapeByteaInternal(NULL, from, from_length, to_length,
static_std_strings,
false /* can't use hex */ );
}
#define ISFIRSTOCTDIGIT(CH) ((CH) >= '0' && (CH) <= '3')
#define ISOCTDIGIT(CH) ((CH) >= '0' && (CH) <= '7')
#define OCTVAL(CH) ((CH) - '0')
/*
* PQunescapeBytea - converts the null terminated string representation
* of a bytea, strtext, into binary, filling a buffer. It returns a
* pointer to the buffer (or NULL on error), and the size of the
* buffer in retbuflen. The pointer may subsequently be used as an
* argument to the function PQfreemem.
*
* The following transformations are made:
* \\ == ASCII 92 == \
* \ooo == a byte whose value = ooo (ooo is an octal number)
* \x == x (x is any character not matched by the above transformations)
*/
unsigned char *
PQunescapeBytea(const unsigned char *strtext, size_t *retbuflen)
{
size_t strtextlen,
buflen;
unsigned char *buffer,
*tmpbuf;
size_t i,
j;
if (strtext == NULL)
return NULL;
strtextlen = strlen((const char *) strtext);
if (strtext[0] == '\\' && strtext[1] == 'x')
{
const unsigned char *s;
unsigned char *p;
buflen = (strtextlen - 2) / 2;
/* Avoid unportable malloc(0) */
buffer = (unsigned char *) malloc(buflen > 0 ? buflen : 1);
if (buffer == NULL)
return NULL;
s = strtext + 2;
p = buffer;
while (*s)
{
char v1,
v2;
/*
* Bad input is silently ignored. Note that this includes
* whitespace between hex pairs, which is allowed by byteain.
*/
v1 = get_hex(*s++);
if (!*s || v1 == (char) -1)
continue;
v2 = get_hex(*s++);
if (v2 != (char) -1)
*p++ = (v1 << 4) | v2;
}
buflen = p - buffer;
}
else
{
/*
* Length of input is max length of output, but add one to avoid
* unportable malloc(0) if input is zero-length.
*/
buffer = (unsigned char *) malloc(strtextlen + 1);
if (buffer == NULL)
return NULL;
for (i = j = 0; i < strtextlen;)
{
switch (strtext[i])
{
case '\\':
i++;
if (strtext[i] == '\\')
buffer[j++] = strtext[i++];
else
{
if ((ISFIRSTOCTDIGIT(strtext[i])) &&
(ISOCTDIGIT(strtext[i + 1])) &&
(ISOCTDIGIT(strtext[i + 2])))
{
int byte;
byte = OCTVAL(strtext[i++]);
byte = (byte << 3) + OCTVAL(strtext[i++]);
byte = (byte << 3) + OCTVAL(strtext[i++]);
buffer[j++] = byte;
}
}
/*
* Note: if we see '\' followed by something that isn't a
* recognized escape sequence, we loop around having done
* nothing except advance i. Therefore the something will
* be emitted as ordinary data on the next cycle. Corner
* case: '\' at end of string will just be discarded.
*/
break;
default:
buffer[j++] = strtext[i++];
break;
}
}
buflen = j; /* buflen is the length of the dequoted data */
}
/* Shrink the buffer to be no larger than necessary */
/* +1 avoids unportable behavior when buflen==0 */
tmpbuf = realloc(buffer, buflen + 1);
/* It would only be a very brain-dead realloc that could fail, but... */
if (!tmpbuf)
{
free(buffer);
return NULL;
}
*retbuflen = buflen;
return tmpbuf;
}
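/*
 * Usage sketch (illustrative only, not part of fe-exec.c): a bytea round
 * trip.  Note the asymmetry: PQescapeByteaConn produces text to embed in SQL
 * (still needing surrounding single quotes), while PQunescapeBytea decodes
 * the text representation the server sends back.
 */
static void
example_bytea(PGconn *conn)
{
	const unsigned char raw[] = {0x00, 0xde, 0xad, 0xbe, 0xef};
	size_t		esc_len;
	unsigned char *esc = PQescapeByteaConn(conn, raw, sizeof(raw), &esc_len);

	if (esc)
	{
		char		query[256];
		PGresult   *res;

		snprintf(query, sizeof(query), "SELECT '%s'::bytea", esc);
		PQfreemem(esc);
		res = PQexec(conn, query);
		if (PQresultStatus(res) == PGRES_TUPLES_OK)
		{
			size_t		out_len;
			unsigned char *out =
			PQunescapeBytea((const unsigned char *) PQgetvalue(res, 0, 0),
							&out_len);

			/* out now holds the original five bytes; out_len == 5 */
			PQfreemem(out);
		}
		PQclear(res);
	}
}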
}}}
http://www.postgresqltutorial.com/
* https://severalnines.com/blog/benchmarking-postgresql-performance
* https://dba.stackexchange.com/questions/42012/how-can-i-benchmark-a-postgresql-query
* artificially slow query
https://www.google.com/search?q=artificially+slow+postgresql+queries&oq=artificially+slow+postgresql+quer&aqs=chrome.1.69i57j33l5.7819j1j0&sourceid=chrome&ie=UTF-8
https://www.endpoint.com/blog/2012/11/05/how-to-make-postgresql-query-slow
https://stackoverflow.com/questions/35336262/how-to-create-a-query-that-takes-long-time-to-run-in-postgresql
<<showtoc>>
! tools
https://explain.depesz.com/
http://tatiyants.com/pev/#/plans
https://thoughtbot.com/blog/reading-an-explain-analyze-query-plan
Ways of Seeing: ORMS & SQL Views https://dev.to/jsutan/comment/749p
json exec plans can be parsed here http://jsonviewer.stack.hu/
! how to explain and run query postgresql
https://www.google.com/search?q=how+to+explain+and+run+query+postgresql&oq=how+to+explain+and+run+query+postg&aqs=chrome.1.69i57j33.6841j0j0&sourceid=chrome&ie=UTF-8
http://www.postgresqltutorial.com/postgresql-explain/
https://stackoverflow.com/questions/117262/what-is-postgresql-explain-telling-me-exactly
http://www.postgresonline.com/journal/archives/171-Explain-Plans-PostgreSQL-9.0-Text,-JSON,-XML,-YAML-Part-1-You-Choose.html
http://www.craigkerstiens.com/2013/06/13/explaining-your-data/
http://tatiyants.com/postgres-query-plan-visualization/
! output postgresql explain in json
https://www.google.com/search?q=output+postgresql+explain+in+json&oq=output+postgresql+explain+in+json&aqs=chrome..69i57j33.7143j0j0&sourceid=chrome&ie=UTF-8
! output explain to a file
https://stackoverflow.com/questions/36558455/postgresql-output-explain-analyze-to-file
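A minimal libpq sketch tying these links together (assumes an open connection and any valid query string): the JSON plan is just an ordinary result value, so it can be fetched with PQexec, printed or redirected to a file, and pasted into depesz or pev above.
{{{
/* sketch: fetch a JSON execution plan over libpq (PGconn assumed connected) */
#include <stdio.h>
#include <libpq-fe.h>

static void
explain_json(PGconn *conn, const char *query)
{
	char		buf[1024];
	PGresult   *res;

	/* EXPLAIN output is rows of text; FORMAT JSON makes it one JSON document */
	snprintf(buf, sizeof(buf), "EXPLAIN (ANALYZE, FORMAT JSON) %s", query);
	res = PQexec(conn, buf);
	if (PQresultStatus(res) == PGRES_TUPLES_OK)
		printf("%s\n", PQgetvalue(res, 0, 0));
	else
		fprintf(stderr, "%s", PQerrorMessage(conn));
	PQclear(res);
}
}}}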
<<showtoc>>
https://pgxn.org/
! extensions talk
Around the world with Postgres extensions https://www.youtube.com/watch?v=02P_09egiVk
Breakout Session Presented by AJ Welch of Chartio at PGConf Silicon Valley 2015 https://www.youtube.com/watch?time_continue=5&v=GXCH9I5v1Ic
https://landing.chartio.com/event-recording-pgconfsv-2015
Supercharge your PostgreSQL with extensions_Alexey Vasiliev https://www.youtube.com/watch?v=Ad1YIg8iGow
https://wiki.postgresql.org/wiki/Extensions
https://www.google.com/search?q=what+is+postgresql+extension&tbm=vid&ei=91EqXZ3BMMy3ggf26o3gCg&start=10&sa=N&ved=0ahUKEwjdmO2J8LLjAhXMm-AKHXZ1A6wQ8tMDCFc&biw=1465&bih=965&dpr=1.6
How PostgreSQL Extension APIs are Changing the Face of Relational Databases https://www.youtube.com/watch?v=5q_4SvLAkyI
! parallel database
https://www.citusdata.com/product/comparison
! pageinspect
https://fritshoogland.wordpress.com/2017/07/01/postgresql-block-internals/
! auto_explain - historical plan info
https://www.postgresql.org/docs/9.0/auto-explain.html
https://stackoverflow.com/questions/6977280/is-there-any-system-views-which-show-current-and-history-plan-information-about
https://www.google.com/search?q=postgresql+auto_explain&oq=postgresql+auto_explain&aqs=chrome..69i57j0j69i60l2j0l2.4387j0j0&sourceid=chrome&ie=UTF-8
https://www.cybertec-postgresql.com/en/spying-on-slow-statements-with-auto_explain/
https://pganalyze.com/docs/log-insights/setup/auto_explain
https://www.freecodecamp.org/news/fuzzy-string-matching-with-postgresql/
https://www.postgresql.org/docs/9.1/high-availability.html
https://www.endpoint.com/blog/2017/09/27/working-on-production-systems
http://pldoc.sourceforge.net/maven-site/
* https://pganalyze.com/
* https://www.beeflix.io/
* https://aws.amazon.com/rds/performance-insights/?nc=sn&loc=2&dn=2
** https://aws.amazon.com/rds/performance-insights/?nc=sn&loc=1
** Using Performance Insights to Optimize Database Performance https://www.slideshare.net/AmazonWebServices/using-performance-insights-to-optimize-database-performance-dat402-aws-reinvent-2018
<<showtoc>>
! tools
https://github.com/dbacvetkov/PASH-Viewer
! build historical performance data
https://github.com/dbacvetkov/PASH-Viewer/wiki/How-to-create-pg_stat_activity-historical-table
* https://www.cybertec-postgresql.com/en/detecting-performance-problems-easily-in-postgresql/
* https://www.cybertec-postgresql.com/en/3-ways-to-detect-slow-queries-in-postgresql/
* https://dba.stackexchange.com/questions/103597/how-to-check-know-the-highest-run-queries
! views
https://www.postgresql.org/docs/9.3/pgstatstatements.html
{{{
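# 2 clients (-c), select-only workload (-S), per-statement latencies (-r), vacuum first (-v), run ~forever (-T 99999); -d (debug chatter in this pgbench era) goes to the discarded stderr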
sudo -u postgres pgbench -c 2 -T 99999 -S -r -v -d pgbench 2> /dev/null
}}}
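With that load running, a quick top-5 by cumulative time out of pg_stat_statements (sketch; the extension must be preloaded and created, and on PostgreSQL 13+ the column is {{{total_exec_time}}} rather than {{{total_time}}}):
{{{
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("");   /* PG* environment variables */
    PGresult *res;
    int       i;

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;
    res = PQexec(conn,
                 "SELECT calls, round(total_time::numeric, 1) AS ms, query "
                 "FROM pg_stat_statements ORDER BY total_time DESC LIMIT 5");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (i = 0; i < PQntuples(res); i++)
            printf("%8s %12s %s\n",
                   PQgetvalue(res, i, 0),
                   PQgetvalue(res, i, 1),
                   PQgetvalue(res, i, 2));
    PQclear(res);
    PQfinish(conn);
    return 0;
}
}}}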
Environment Variables https://www.postgresql.org/docs/9.1/libpq-envars.html
<<showtoc>>
! 10 ways
https://www.postgis.us/presentations/PGOpen2018_data_loading.pdf
! “copy with delimiter as”
https://github.com/pudo/pgcsv
https://stackoverflow.com/questions/33353997/how-to-insert-csv-data-into-postgresql-database-remote-database
https://github.com/search?p=2&q=postgresql+csv+loader&type=Repositories
! Foreign Data Wrapper
https://www.postgresql.org/docs/current/datatype-json.html?fbclid=IwAR0_B0j45OxHQF_RIdJURdcbj2pRo6p5bs6Ojr7BW0G69VEzP2232KDBXdY
https://www.percona.com/blog/2019/07/30/parallelism-in-postgresql/
https://www.amazon.com/PostgreSQL-High-Performance-Gregory-Smith/dp/184951030X
<<showtoc>>
! libpq source: src/interfaces/libpq/fe-exec.c
{{{
/*-------------------------------------------------------------------------
*
* fe-exec.c
* functions related to sending a query down to the backend
*
* Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
*
* IDENTIFICATION
* src/interfaces/libpq/fe-exec.c
*
*-------------------------------------------------------------------------
*/
#include "postgres_fe.h"
#include <ctype.h>
#include <fcntl.h>
#include "libpq-fe.h"
#include "libpq-int.h"
#include "mb/pg_wchar.h"
#ifdef WIN32
#include "win32.h"
#else
#include <unistd.h>
#endif
/* keep this in same order as ExecStatusType in libpq-fe.h */
char *const pgresStatus[] = {
"PGRES_EMPTY_QUERY",
"PGRES_COMMAND_OK",
"PGRES_TUPLES_OK",
"PGRES_COPY_OUT",
"PGRES_COPY_IN",
"PGRES_BAD_RESPONSE",
"PGRES_NONFATAL_ERROR",
"PGRES_FATAL_ERROR",
"PGRES_COPY_BOTH",
"PGRES_SINGLE_TUPLE"
};
/*
* static state needed by PQescapeString and PQescapeBytea; initialize to
* values that result in backward-compatible behavior
*/
static int static_client_encoding = PG_SQL_ASCII;
static bool static_std_strings = false;
static PGEvent *dupEvents(PGEvent *events, int count);
static bool pqAddTuple(PGresult *res, PGresAttValue *tup);
static bool PQsendQueryStart(PGconn *conn);
static int PQsendQueryGuts(PGconn *conn,
const char *command,
const char *stmtName,
int nParams,
const Oid *paramTypes,
const char *const * paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat);
static void parseInput(PGconn *conn);
static PGresult *getCopyResult(PGconn *conn, ExecStatusType copytype);
static bool PQexecStart(PGconn *conn);
static PGresult *PQexecFinish(PGconn *conn);
static int PQsendDescribe(PGconn *conn, char desc_type,
const char *desc_target);
static int check_field_number(const PGresult *res, int field_num);
/* ----------------
* Space management for PGresult.
*
* Formerly, libpq did a separate malloc() for each field of each tuple
* returned by a query. This was remarkably expensive --- malloc/free
* consumed a sizable part of the application's runtime. And there is
* no real need to keep track of the fields separately, since they will
* all be freed together when the PGresult is released. So now, we grab
* large blocks of storage from malloc and allocate space for query data
* within these blocks, using a trivially simple allocator. This reduces
* the number of malloc/free calls dramatically, and it also avoids
* fragmentation of the malloc storage arena.
* The PGresult structure itself is still malloc'd separately. We could
* combine it with the first allocation block, but that would waste space
* for the common case that no extra storage is actually needed (that is,
* the SQL command did not return tuples).
*
* We also malloc the top-level array of tuple pointers separately, because
* we need to be able to enlarge it via realloc, and our trivial space
* allocator doesn't handle that effectively. (Too bad the FE/BE protocol
* doesn't tell us up front how many tuples will be returned.)
* All other subsidiary storage for a PGresult is kept in PGresult_data blocks
* of size PGRESULT_DATA_BLOCKSIZE. The overhead at the start of each block
* is just a link to the next one, if any. Free-space management info is
* kept in the owning PGresult.
* A query returning a small amount of data will thus require three malloc
* calls: one for the PGresult, one for the tuples pointer array, and one
* PGresult_data block.
*
* Only the most recently allocated PGresult_data block is a candidate to
* have more stuff added to it --- any extra space left over in older blocks
* is wasted. We could be smarter and search the whole chain, but the point
* here is to be simple and fast. Typical applications do not keep a PGresult
* around very long anyway, so some wasted space within one is not a problem.
*
* Tuning constants for the space allocator are:
* PGRESULT_DATA_BLOCKSIZE: size of a standard allocation block, in bytes
* PGRESULT_ALIGN_BOUNDARY: assumed alignment requirement for binary data
* PGRESULT_SEP_ALLOC_THRESHOLD: objects bigger than this are given separate
* blocks, instead of being crammed into a regular allocation block.
* Requirements for correct function are:
* PGRESULT_ALIGN_BOUNDARY must be a multiple of the alignment requirements
* of all machine data types. (Currently this is set from configure
* tests, so it should be OK automatically.)
* PGRESULT_SEP_ALLOC_THRESHOLD + PGRESULT_BLOCK_OVERHEAD <=
* PGRESULT_DATA_BLOCKSIZE
* pqResultAlloc assumes an object smaller than the threshold will fit
* in a new block.
* The amount of space wasted at the end of a block could be as much as
* PGRESULT_SEP_ALLOC_THRESHOLD, so it doesn't pay to make that too large.
* ----------------
*/
#define PGRESULT_DATA_BLOCKSIZE 2048
#define PGRESULT_ALIGN_BOUNDARY MAXIMUM_ALIGNOF /* from configure */
#define PGRESULT_BLOCK_OVERHEAD Max(sizeof(PGresult_data), PGRESULT_ALIGN_BOUNDARY)
#define PGRESULT_SEP_ALLOC_THRESHOLD (PGRESULT_DATA_BLOCKSIZE / 2)
/*
* PQmakeEmptyPGresult
* returns a newly allocated, initialized PGresult with given status.
* If conn is not NULL and status indicates an error, the conn's
* errorMessage is copied. Also, any PGEvents are copied from the conn.
*/
PGresult *
PQmakeEmptyPGresult(PGconn *conn, ExecStatusType status)
{
PGresult *result;
result = (PGresult *) malloc(sizeof(PGresult));
if (!result)
return NULL;
result->ntups = 0;
result->numAttributes = 0;
result->attDescs = NULL;
result->tuples = NULL;
result->tupArrSize = 0;
result->numParameters = 0;
result->paramDescs = NULL;
result->resultStatus = status;
result->cmdStatus[0] = '\0';
result->binary = 0;
result->events = NULL;
result->nEvents = 0;
result->errMsg = NULL;
result->errFields = NULL;
result->errQuery = NULL;
result->null_field[0] = '\0';
result->curBlock = NULL;
result->curOffset = 0;
result->spaceLeft = 0;
if (conn)
{
/* copy connection data we might need for operations on PGresult */
result->noticeHooks = conn->noticeHooks;
result->client_encoding = conn->client_encoding;
/* consider copying conn's errorMessage */
switch (status)
{
case PGRES_EMPTY_QUERY:
case PGRES_COMMAND_OK:
case PGRES_TUPLES_OK:
case PGRES_COPY_OUT:
case PGRES_COPY_IN:
case PGRES_COPY_BOTH:
case PGRES_SINGLE_TUPLE:
/* non-error cases */
break;
default:
pqSetResultError(result, conn->errorMessage.data);
break;
}
/* copy events last; result must be valid if we need to PQclear */
if (conn->nEvents > 0)
{
result->events = dupEvents(conn->events, conn->nEvents);
if (!result->events)
{
PQclear(result);
return NULL;
}
result->nEvents = conn->nEvents;
}
}
else
{
/* defaults... */
result->noticeHooks.noticeRec = NULL;
result->noticeHooks.noticeRecArg = NULL;
result->noticeHooks.noticeProc = NULL;
result->noticeHooks.noticeProcArg = NULL;
result->client_encoding = PG_SQL_ASCII;
}
return result;
}
/*
* PQsetResultAttrs
*
* Set the attributes for a given result. This function fails if there are
* already attributes contained in the provided result. The call is
* ignored if numAttributes is zero or attDescs is NULL. If the
* function fails, it returns zero. If the function succeeds, it
* returns a non-zero value.
*/
int
PQsetResultAttrs(PGresult *res, int numAttributes, PGresAttDesc *attDescs)
{
int i;
/* If attrs already exist, they cannot be overwritten. */
if (!res || res->numAttributes > 0)
return FALSE;
/* ignore no-op request */
if (numAttributes <= 0 || !attDescs)
return TRUE;
res->attDescs = (PGresAttDesc *)
PQresultAlloc(res, numAttributes * sizeof(PGresAttDesc));
if (!res->attDescs)
return FALSE;
res->numAttributes = numAttributes;
memcpy(res->attDescs, attDescs, numAttributes * sizeof(PGresAttDesc));
/* deep-copy the attribute names, and determine format */
res->binary = 1;
for (i = 0; i < res->numAttributes; i++)
{
if (res->attDescs[i].name)
res->attDescs[i].name = pqResultStrdup(res, res->attDescs[i].name);
else
res->attDescs[i].name = res->null_field;
if (!res->attDescs[i].name)
return FALSE;
if (res->attDescs[i].format == 0)
res->binary = 0;
}
return TRUE;
}
/*
* PQcopyResult
*
* Returns a deep copy of the provided 'src' PGresult, which cannot be NULL.
* The 'flags' argument controls which portions of the result will or will
* NOT be copied. The created result is always put into the
* PGRES_TUPLES_OK status. The source result error message is not copied,
* although cmdStatus is.
*
* To set custom attributes, use PQsetResultAttrs. That function requires
* that there are no attrs contained in the result, so to use that
* function you cannot use the PG_COPYRES_ATTRS or PG_COPYRES_TUPLES
* options with this function.
*
* Options:
* PG_COPYRES_ATTRS - Copy the source result's attributes
*
* PG_COPYRES_TUPLES - Copy the source result's tuples. This implies
* copying the attrs, seeing how the attrs are needed by the tuples.
*
* PG_COPYRES_EVENTS - Copy the source result's events.
*
* PG_COPYRES_NOTICEHOOKS - Copy the source result's notice hooks.
*/
PGresult *
PQcopyResult(const PGresult *src, int flags)
{
PGresult *dest;
int i;
if (!src)
return NULL;
dest = PQmakeEmptyPGresult(NULL, PGRES_TUPLES_OK);
if (!dest)
return NULL;
/* Always copy these over. Is cmdStatus really useful here? */
dest->client_encoding = src->client_encoding;
strcpy(dest->cmdStatus, src->cmdStatus);
/* Wants attrs? */
if (flags & (PG_COPYRES_ATTRS | PG_COPYRES_TUPLES))
{
if (!PQsetResultAttrs(dest, src->numAttributes, src->attDescs))
{
PQclear(dest);
return NULL;
}
}
/* Wants to copy tuples? */
if (flags & PG_COPYRES_TUPLES)
{
int tup,
field;
for (tup = 0; tup < src->ntups; tup++)
{
for (field = 0; field < src->numAttributes; field++)
{
if (!PQsetvalue(dest, tup, field,
src->tuples[tup][field].value,
src->tuples[tup][field].len))
{
PQclear(dest);
return NULL;
}
}
}
}
/* Wants to copy notice hooks? */
if (flags & PG_COPYRES_NOTICEHOOKS)
dest->noticeHooks = src->noticeHooks;
/* Wants to copy PGEvents? */
if ((flags & PG_COPYRES_EVENTS) && src->nEvents > 0)
{
dest->events = dupEvents(src->events, src->nEvents);
if (!dest->events)
{
PQclear(dest);
return NULL;
}
dest->nEvents = src->nEvents;
}
/* Okay, trigger PGEVT_RESULTCOPY event */
for (i = 0; i < dest->nEvents; i++)
{
if (src->events[i].resultInitialized)
{
PGEventResultCopy evt;
evt.src = src;
evt.dest = dest;
if (!dest->events[i].proc(PGEVT_RESULTCOPY, &evt,
dest->events[i].passThrough))
{
PQclear(dest);
return NULL;
}
dest->events[i].resultInitialized = TRUE;
}
}
return dest;
}
/*
* Copy an array of PGEvents (with no extra space for more).
* Does not duplicate the event instance data, sets this to NULL.
* Also, the resultInitialized flags are all cleared.
*/
static PGEvent *
dupEvents(PGEvent *events, int count)
{
PGEvent *newEvents;
int i;
if (!events || count <= 0)
return NULL;
newEvents = (PGEvent *) malloc(count * sizeof(PGEvent));
if (!newEvents)
return NULL;
for (i = 0; i < count; i++)
{
newEvents[i].proc = events[i].proc;
newEvents[i].passThrough = events[i].passThrough;
newEvents[i].data = NULL;
newEvents[i].resultInitialized = FALSE;
newEvents[i].name = strdup(events[i].name);
if (!newEvents[i].name)
{
while (--i >= 0)
free(newEvents[i].name);
free(newEvents);
return NULL;
}
}
return newEvents;
}
/*
* Sets the value for a tuple field. The tup_num must be less than or
* equal to PQntuples(res). If it is equal, a new tuple is created and
* added to the result.
* Returns a non-zero value for success and zero for failure.
*/
int
PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len)
{
PGresAttValue *attval;
if (!check_field_number(res, field_num))
return FALSE;
/* Invalid tup_num, must be <= ntups */
if (tup_num < 0 || tup_num > res->ntups)
return FALSE;
/* need to allocate a new tuple? */
if (tup_num == res->ntups)
{
PGresAttValue *tup;
int i;
tup = (PGresAttValue *)
pqResultAlloc(res, res->numAttributes * sizeof(PGresAttValue),
TRUE);
if (!tup)
return FALSE;
/* initialize each column to NULL */
for (i = 0; i < res->numAttributes; i++)
{
tup[i].len = NULL_LEN;
tup[i].value = res->null_field;
}
/* add it to the array */
if (!pqAddTuple(res, tup))
return FALSE;
}
attval = &res->tuples[tup_num][field_num];
/* treat either NULL_LEN or NULL value pointer as a NULL field */
if (len == NULL_LEN || value == NULL)
{
attval->len = NULL_LEN;
attval->value = res->null_field;
}
else if (len <= 0)
{
attval->len = 0;
attval->value = res->null_field;
}
else
{
attval->value = (char *) pqResultAlloc(res, len + 1, TRUE);
if (!attval->value)
return FALSE;
attval->len = len;
memcpy(attval->value, value, len);
attval->value[len] = '\0';
}
return TRUE;
}
/*
* pqResultAlloc - exported routine to allocate local storage in a PGresult.
*
* We force all such allocations to be maxaligned, since we don't know
* whether the value might be binary.
*/
void *
PQresultAlloc(PGresult *res, size_t nBytes)
{
return pqResultAlloc(res, nBytes, TRUE);
}
/*
* pqResultAlloc -
* Allocate subsidiary storage for a PGresult.
*
* nBytes is the amount of space needed for the object.
* If isBinary is true, we assume that we need to align the object on
* a machine allocation boundary.
* If isBinary is false, we assume the object is a char string and can
* be allocated on any byte boundary.
*/
void *
pqResultAlloc(PGresult *res, size_t nBytes, bool isBinary)
{
char *space;
PGresult_data *block;
if (!res)
return NULL;
if (nBytes <= 0)
return res->null_field;
/*
* If alignment is needed, round up the current position to an alignment
* boundary.
*/
if (isBinary)
{
int offset = res->curOffset % PGRESULT_ALIGN_BOUNDARY;
if (offset)
{
res->curOffset += PGRESULT_ALIGN_BOUNDARY - offset;
res->spaceLeft -= PGRESULT_ALIGN_BOUNDARY - offset;
}
}
/* If there's enough space in the current block, no problem. */
if (nBytes <= (size_t) res->spaceLeft)
{
space = res->curBlock->space + res->curOffset;
res->curOffset += nBytes;
res->spaceLeft -= nBytes;
return space;
}
/*
* If the requested object is very large, give it its own block; this
* avoids wasting what might be most of the current block to start a new
* block. (We'd have to special-case requests bigger than the block size
* anyway.) The object is always given binary alignment in this case.
*/
if (nBytes >= PGRESULT_SEP_ALLOC_THRESHOLD)
{
block = (PGresult_data *) malloc(nBytes + PGRESULT_BLOCK_OVERHEAD);
if (!block)
return NULL;
space = block->space + PGRESULT_BLOCK_OVERHEAD;
if (res->curBlock)
{
/*
* Tuck special block below the active block, so that we don't
* have to waste the free space in the active block.
*/
block->next = res->curBlock->next;
res->curBlock->next = block;
}
else
{
/* Must set up the new block as the first active block. */
block->next = NULL;
res->curBlock = block;
res->spaceLeft = 0; /* be sure it's marked full */
}
return space;
}
/* Otherwise, start a new block. */
block = (PGresult_data *) malloc(PGRESULT_DATA_BLOCKSIZE);
if (!block)
return NULL;
block->next = res->curBlock;
res->curBlock = block;
if (isBinary)
{
/* object needs full alignment */
res->curOffset = PGRESULT_BLOCK_OVERHEAD;
res->spaceLeft = PGRESULT_DATA_BLOCKSIZE - PGRESULT_BLOCK_OVERHEAD;
}
else
{
/* we can cram it right after the overhead pointer */
res->curOffset = sizeof(PGresult_data);
res->spaceLeft = PGRESULT_DATA_BLOCKSIZE - sizeof(PGresult_data);
}
space = block->space + res->curOffset;
res->curOffset += nBytes;
res->spaceLeft -= nBytes;
return space;
}
/*
* pqResultStrdup -
* Like strdup, but the space is subsidiary PGresult space.
*/
char *
pqResultStrdup(PGresult *res, const char *str)
{
char *space = (char *) pqResultAlloc(res, strlen(str) + 1, FALSE);
if (space)
strcpy(space, str);
return space;
}
/*
* pqSetResultError -
* assign a new error message to a PGresult
*/
void
pqSetResultError(PGresult *res, const char *msg)
{
if (!res)
return;
if (msg && *msg)
res->errMsg = pqResultStrdup(res, msg);
else
res->errMsg = NULL;
}
/*
* pqCatenateResultError -
* concatenate a new error message to the one already in a PGresult
*/
void
pqCatenateResultError(PGresult *res, const char *msg)
{
PQExpBufferData errorBuf;
if (!res || !msg)
return;
initPQExpBuffer(&errorBuf);
if (res->errMsg)
appendPQExpBufferStr(&errorBuf, res->errMsg);
appendPQExpBufferStr(&errorBuf, msg);
pqSetResultError(res, errorBuf.data);
termPQExpBuffer(&errorBuf);
}
/*
* PQclear -
* frees the memory associated with a PGresult
*/
void
PQclear(PGresult *res)
{
PGresult_data *block;
int i;
if (!res)
return;
for (i = 0; i < res->nEvents; i++)
{
/* only send DESTROY to successfully-initialized event procs */
if (res->events[i].resultInitialized)
{
PGEventResultDestroy evt;
evt.result = res;
(void) res->events[i].proc(PGEVT_RESULTDESTROY, &evt,
res->events[i].passThrough);
}
free(res->events[i].name);
}
if (res->events)
free(res->events);
/* Free all the subsidiary blocks */
while ((block = res->curBlock) != NULL)
{
res->curBlock = block->next;
free(block);
}
/* Free the top-level tuple pointer array */
if (res->tuples)
free(res->tuples);
/* zero out the pointer fields to catch programming errors */
res->attDescs = NULL;
res->tuples = NULL;
res->paramDescs = NULL;
res->errFields = NULL;
res->events = NULL;
res->nEvents = 0;
/* res->curBlock was zeroed out earlier */
/* Free the PGresult structure itself */
free(res);
}
/*
* Handy subroutine to deallocate any partially constructed async result.
*
* Any "next" result gets cleared too.
*/
void
pqClearAsyncResult(PGconn *conn)
{
if (conn->result)
PQclear(conn->result);
conn->result = NULL;
if (conn->next_result)
PQclear(conn->next_result);
conn->next_result = NULL;
}
/*
* This subroutine deletes any existing async result, sets conn->result
* to a PGresult with status PGRES_FATAL_ERROR, and stores the current
* contents of conn->errorMessage into that result. It differs from a
* plain call on PQmakeEmptyPGresult() in that if there is already an
* async result with status PGRES_FATAL_ERROR, the current error message
* is APPENDED to the old error message instead of replacing it. This
* behavior lets us report multiple error conditions properly, if necessary.
* (An example where this is needed is when the backend sends an 'E' message
* and immediately closes the connection --- we want to report both the
* backend error and the connection closure error.)
*/
void
pqSaveErrorResult(PGconn *conn)
{
/*
* If no old async result, just let PQmakeEmptyPGresult make one. Likewise
* if old result is not an error message.
*/
if (conn->result == NULL ||
conn->result->resultStatus != PGRES_FATAL_ERROR ||
conn->result->errMsg == NULL)
{
pqClearAsyncResult(conn);
conn->result = PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR);
}
else
{
/* Else, concatenate error message to existing async result. */
pqCatenateResultError(conn->result, conn->errorMessage.data);
}
}
/*
* This subroutine prepares an async result object for return to the caller.
* If there is not already an async result object, build an error object
* using whatever is in conn->errorMessage. In any case, clear the async
* result storage and make sure PQerrorMessage will agree with the result's
* error string.
*/
PGresult *
pqPrepareAsyncResult(PGconn *conn)
{
PGresult *res;
/*
* conn->result is the PGresult to return. If it is NULL (which probably
* shouldn't happen) we assume there is an appropriate error message in
* conn->errorMessage.
*/
res = conn->result;
if (!res)
res = PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR);
else
{
/*
* Make sure PQerrorMessage agrees with result; it could be different
* if we have concatenated messages.
*/
resetPQExpBuffer(&conn->errorMessage);
appendPQExpBufferStr(&conn->errorMessage,
PQresultErrorMessage(res));
}
/*
* Replace conn->result with next_result, if any. In the normal case
* there isn't a next result and we're just dropping ownership of the
* current result. In single-row mode this restores the situation to what
* it was before we created the current single-row result.
*/
conn->result = conn->next_result;
conn->next_result = NULL;
return res;
}
/*
* pqInternalNotice - produce an internally-generated notice message
*
* A format string and optional arguments can be passed. Note that we do
* libpq_gettext() here, so callers need not.
*
* The supplied text is taken as primary message (ie., it should not include
* a trailing newline, and should not be more than one line).
*/
void
pqInternalNotice(const PGNoticeHooks *hooks, const char *fmt,...)
{
char msgBuf[1024];
va_list args;
PGresult *res;
if (hooks->noticeRec == NULL)
return; /* nobody home to receive notice? */
/* Format the message */
va_start(args, fmt);
vsnprintf(msgBuf, sizeof(msgBuf), libpq_gettext(fmt), args);
va_end(args);
msgBuf[sizeof(msgBuf) - 1] = '\0'; /* make real sure it's terminated */
/* Make a PGresult to pass to the notice receiver */
res = PQmakeEmptyPGresult(NULL, PGRES_NONFATAL_ERROR);
if (!res)
return;
res->noticeHooks = *hooks;
/*
* Set up fields of notice.
*/
pqSaveMessageField(res, PG_DIAG_MESSAGE_PRIMARY, msgBuf);
pqSaveMessageField(res, PG_DIAG_SEVERITY, libpq_gettext("NOTICE"));
pqSaveMessageField(res, PG_DIAG_SEVERITY_NONLOCALIZED, "NOTICE");
/* XXX should provide a SQLSTATE too? */
/*
* Result text is always just the primary message + newline. If we can't
* allocate it, don't bother invoking the receiver.
*/
res->errMsg = (char *) pqResultAlloc(res, strlen(msgBuf) + 2, FALSE);
if (res->errMsg)
{
sprintf(res->errMsg, "%s\n", msgBuf);
/*
* Pass to receiver, then free it.
*/
(*res->noticeHooks.noticeRec) (res->noticeHooks.noticeRecArg, res);
}
PQclear(res);
}
/*
* pqAddTuple
* add a row pointer to the PGresult structure, growing it if necessary
* Returns TRUE if OK, FALSE if not enough memory to add the row
*/
static bool
pqAddTuple(PGresult *res, PGresAttValue *tup)
{
if (res->ntups >= res->tupArrSize)
{
/*
* Try to grow the array.
*
* We can use realloc because shallow copying of the structure is
* okay. Note that the first time through, res->tuples is NULL. While
* ANSI says that realloc() should act like malloc() in that case,
* some old C libraries (like SunOS 4.1.x) coredump instead. On
* failure realloc is supposed to return NULL without damaging the
* existing allocation. Note that the positions beyond res->ntups are
* garbage, not necessarily NULL.
*/
int newSize = (res->tupArrSize > 0) ? res->tupArrSize * 2 : 128;
PGresAttValue **newTuples;
if (res->tuples == NULL)
newTuples = (PGresAttValue **)
malloc(newSize * sizeof(PGresAttValue *));
else
newTuples = (PGresAttValue **)
realloc(res->tuples, newSize * sizeof(PGresAttValue *));
if (!newTuples)
return FALSE; /* malloc or realloc failed */
res->tupArrSize = newSize;
res->tuples = newTuples;
}
res->tuples[res->ntups] = tup;
res->ntups++;
return TRUE;
}
/*
* pqSaveMessageField - save one field of an error or notice message
*/
void
pqSaveMessageField(PGresult *res, char code, const char *value)
{
PGMessageField *pfield;
pfield = (PGMessageField *)
pqResultAlloc(res,
offsetof(PGMessageField, contents) +
strlen(value) + 1,
TRUE);
if (!pfield)
return; /* out of memory? */
pfield->code = code;
strcpy(pfield->contents, value);
pfield->next = res->errFields;
res->errFields = pfield;
}
/*
* pqSaveParameterStatus - remember parameter status sent by backend
*/
void
pqSaveParameterStatus(PGconn *conn, const char *name, const char *value)
{
pgParameterStatus *pstatus;
pgParameterStatus *prev;
if (conn->Pfdebug)
fprintf(conn->Pfdebug, "pqSaveParameterStatus: '%s' = '%s'\n",
name, value);
/*
* Forget any old information about the parameter
*/
for (pstatus = conn->pstatus, prev = NULL;
pstatus != NULL;
prev = pstatus, pstatus = pstatus->next)
{
if (strcmp(pstatus->name, name) == 0)
{
if (prev)
prev->next = pstatus->next;
else
conn->pstatus = pstatus->next;
free(pstatus); /* frees name and value strings too */
break;
}
}
/*
* Store new info as a single malloc block
*/
pstatus = (pgParameterStatus *) malloc(sizeof(pgParameterStatus) +
strlen(name) +strlen(value) + 2);
if (pstatus)
{
char *ptr;
ptr = ((char *) pstatus) + sizeof(pgParameterStatus);
pstatus->name = ptr;
strcpy(ptr, name);
ptr += strlen(name) + 1;
pstatus->value = ptr;
strcpy(ptr, value);
pstatus->next = conn->pstatus;
conn->pstatus = pstatus;
}
/*
* Special hacks: remember client_encoding and
* standard_conforming_strings, and convert server version to a numeric
* form. We keep the first two of these in static variables as well, so
* that PQescapeString and PQescapeBytea can behave somewhat sanely (at
* least in single-connection-using programs).
*/
if (strcmp(name, "client_encoding") == 0)
{
conn->client_encoding = pg_char_to_encoding(value);
/* if we don't recognize the encoding name, fall back to SQL_ASCII */
if (conn->client_encoding < 0)
conn->client_encoding = PG_SQL_ASCII;
static_client_encoding = conn->client_encoding;
}
else if (strcmp(name, "standard_conforming_strings") == 0)
{
conn->std_strings = (strcmp(value, "on") == 0);
static_std_strings = conn->std_strings;
}
else if (strcmp(name, "server_version") == 0)
{
int cnt;
int vmaj,
vmin,
vrev;
cnt = sscanf(value, "%d.%d.%d", &vmaj, &vmin, &vrev);
if (cnt == 3)
{
/* old style, e.g. 9.6.1 */
conn->sversion = (100 * vmaj + vmin) * 100 + vrev;
}
else if (cnt == 2)
{
if (vmaj >= 10)
{
/* new style, e.g. 10.1 */
conn->sversion = 100 * 100 * vmaj + vmin;
}
else
{
/* old style without minor version, e.g. 9.6devel */
conn->sversion = (100 * vmaj + vmin) * 100;
}
}
else if (cnt == 1)
{
/* new style without minor version, e.g. 10devel */
conn->sversion = 100 * 100 * vmaj;
}
else
conn->sversion = 0; /* unknown */
}
}
/*
* pqRowProcessor
* Add the received row to the current async result (conn->result).
* Returns 1 if OK, 0 if error occurred.
*
* On error, *errmsgp can be set to an error string to be returned.
* If it is left NULL, the error is presumed to be "out of memory".
*
* In single-row mode, we create a new result holding just the current row,
* stashing the previous result in conn->next_result so that it becomes
* active again after pqPrepareAsyncResult(). This allows the result metadata
* (column descriptions) to be carried forward to each result row.
*/
int
pqRowProcessor(PGconn *conn, const char **errmsgp)
{
PGresult *res = conn->result;
int nfields = res->numAttributes;
const PGdataValue *columns = conn->rowBuf;
PGresAttValue *tup;
int i;
/*
* In single-row mode, make a new PGresult that will hold just this one
* row; the original conn->result is left unchanged so that it can be used
* again as the template for future rows.
*/
if (conn->singleRowMode)
{
/* Copy everything that should be in the result at this point */
res = PQcopyResult(res,
PG_COPYRES_ATTRS | PG_COPYRES_EVENTS |
PG_COPYRES_NOTICEHOOKS);
if (!res)
return 0;
}
/*
* Basically we just allocate space in the PGresult for each field and
* copy the data over.
*
* Note: on malloc failure, we return 0 leaving *errmsgp still NULL, which
* caller will take to mean "out of memory". This is preferable to trying
* to set up such a message here, because evidently there's not enough
* memory for gettext() to do anything.
*/
tup = (PGresAttValue *)
pqResultAlloc(res, nfields * sizeof(PGresAttValue), TRUE);
if (tup == NULL)
goto fail;
for (i = 0; i < nfields; i++)
{
int clen = columns[i].len;
if (clen < 0)
{
/* null field */
tup[i].len = NULL_LEN;
tup[i].value = res->null_field;
}
else
{
bool isbinary = (res->attDescs[i].format != 0);
char *val;
val = (char *) pqResultAlloc(res, clen + 1, isbinary);
if (val == NULL)
goto fail;
/* copy and zero-terminate the data (even if it's binary) */
memcpy(val, columns[i].value, clen);
val[clen] = '\0';
tup[i].len = clen;
tup[i].value = val;
}
}
/* And add the tuple to the PGresult's tuple array */
if (!pqAddTuple(res, tup))
goto fail;
/*
* Success. In single-row mode, make the result available to the client
* immediately.
*/
if (conn->singleRowMode)
{
/* Change result status to special single-row value */
res->resultStatus = PGRES_SINGLE_TUPLE;
/* Stash old result for re-use later */
conn->next_result = conn->result;
conn->result = res;
/* And mark the result ready to return */
conn->asyncStatus = PGASYNC_READY;
}
return 1;
fail:
/* release locally allocated PGresult, if we made one */
if (res != conn->result)
PQclear(res);
return 0;
}
/*
* PQsendQuery
* Submit a query, but don't wait for it to finish
*
* Returns: 1 if successfully submitted
* 0 if error (conn->errorMessage is set)
*/
int
PQsendQuery(PGconn *conn, const char *query)
{
if (!PQsendQueryStart(conn))
return 0;
/* check the argument */
if (!query)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("command string is a null pointer\n"));
return 0;
}
/* construct the outgoing Query message */
if (pqPutMsgStart('Q', false, conn) < 0 ||
pqPuts(query, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
{
pqHandleSendFailure(conn);
return 0;
}
/* remember we are using simple query protocol */
conn->queryclass = PGQUERY_SIMPLE;
/* and remember the query text too, if possible */
/* if insufficient memory, last_query just winds up NULL */
if (conn->last_query)
free(conn->last_query);
conn->last_query = strdup(query);
/*
* Give the data a push. In nonblock mode, don't complain if we're unable
* to send it all; PQgetResult() will do any additional flushing needed.
*/
if (pqFlush(conn) < 0)
{
pqHandleSendFailure(conn);
return 0;
}
/* OK, it's launched! */
conn->asyncStatus = PGASYNC_BUSY;
return 1;
}
/*
* PQsendQueryParams
* Like PQsendQuery, but use protocol 3.0 so we can pass parameters
*/
int
PQsendQueryParams(PGconn *conn,
const char *command,
int nParams,
const Oid *paramTypes,
const char *const * paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat)
{
if (!PQsendQueryStart(conn))
return 0;
/* check the arguments */
if (!command)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("command string is a null pointer\n"));
return 0;
}
if (nParams < 0 || nParams > 65535)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("number of parameters must be between 0 and 65535\n"));
return 0;
}
return PQsendQueryGuts(conn,
command,
"", /* use unnamed statement */
nParams,
paramTypes,
paramValues,
paramLengths,
paramFormats,
resultFormat);
}
/*
* PQsendPrepare
* Submit a Parse message, but don't wait for it to finish
*
* Returns: 1 if successfully submitted
* 0 if error (conn->errorMessage is set)
*/
int
PQsendPrepare(PGconn *conn,
const char *stmtName, const char *query,
int nParams, const Oid *paramTypes)
{
if (!PQsendQueryStart(conn))
return 0;
/* check the arguments */
if (!stmtName)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("statement name is a null pointer\n"));
return 0;
}
if (!query)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("command string is a null pointer\n"));
return 0;
}
if (nParams < 0 || nParams > 65535)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("number of parameters must be between 0 and 65535\n"));
return 0;
}
/* This isn't gonna work on a 2.0 server */
if (PG_PROTOCOL_MAJOR(conn->pversion) < 3)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("function requires at least protocol version 3.0\n"));
return 0;
}
/* construct the Parse message */
if (pqPutMsgStart('P', false, conn) < 0 ||
pqPuts(stmtName, conn) < 0 ||
pqPuts(query, conn) < 0)
goto sendFailed;
if (nParams > 0 && paramTypes)
{
int i;
if (pqPutInt(nParams, 2, conn) < 0)
goto sendFailed;
for (i = 0; i < nParams; i++)
{
if (pqPutInt(paramTypes[i], 4, conn) < 0)
goto sendFailed;
}
}
else
{
if (pqPutInt(0, 2, conn) < 0)
goto sendFailed;
}
if (pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* construct the Sync message */
if (pqPutMsgStart('S', false, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* remember we are doing just a Parse */
conn->queryclass = PGQUERY_PREPARE;
/* and remember the query text too, if possible */
/* if insufficient memory, last_query just winds up NULL */
if (conn->last_query)
free(conn->last_query);
conn->last_query = strdup(query);
/*
* Give the data a push. In nonblock mode, don't complain if we're unable
* to send it all; PQgetResult() will do any additional flushing needed.
*/
if (pqFlush(conn) < 0)
goto sendFailed;
/* OK, it's launched! */
conn->asyncStatus = PGASYNC_BUSY;
return 1;
sendFailed:
pqHandleSendFailure(conn);
return 0;
}
/*
* PQsendQueryPrepared
* Like PQsendQuery, but execute a previously prepared statement,
* using protocol 3.0 so we can pass parameters
*/
int
PQsendQueryPrepared(PGconn *conn,
const char *stmtName,
int nParams,
const char *const * paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat)
{
if (!PQsendQueryStart(conn))
return 0;
/* check the arguments */
if (!stmtName)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("statement name is a null pointer\n"));
return 0;
}
if (nParams < 0 || nParams > 65535)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("number of parameters must be between 0 and 65535\n"));
return 0;
}
return PQsendQueryGuts(conn,
NULL, /* no command to parse */
stmtName,
nParams,
NULL, /* no param types */
paramValues,
paramLengths,
paramFormats,
resultFormat);
}
/*
* Common startup code for PQsendQuery and sibling routines
*/
static bool
PQsendQueryStart(PGconn *conn)
{
if (!conn)
return false;
/* clear the error string */
resetPQExpBuffer(&conn->errorMessage);
/* Don't try to send if we know there's no live connection. */
if (conn->status != CONNECTION_OK)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("no connection to the server\n"));
return false;
}
/* Can't send while already busy, either. */
if (conn->asyncStatus != PGASYNC_IDLE)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("another command is already in progress\n"));
return false;
}
/* initialize async result-accumulation state */
pqClearAsyncResult(conn);
/* reset single-row processing mode */
conn->singleRowMode = false;
/* ready to send command message */
return true;
}
/*
* PQsendQueryGuts
* Common code for protocol-3.0 query sending
* PQsendQueryStart should be done already
*
* command may be NULL to indicate we use an already-prepared statement
*/
static int
PQsendQueryGuts(PGconn *conn,
const char *command,
const char *stmtName,
int nParams,
const Oid *paramTypes,
const char *const * paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat)
{
int i;
/* This isn't gonna work on a 2.0 server */
if (PG_PROTOCOL_MAJOR(conn->pversion) < 3)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("function requires at least protocol version 3.0\n"));
return 0;
}
/*
* We will send Parse (if needed), Bind, Describe Portal, Execute, Sync,
* using specified statement name and the unnamed portal.
*/
if (command)
{
/* construct the Parse message */
if (pqPutMsgStart('P', false, conn) < 0 ||
pqPuts(stmtName, conn) < 0 ||
pqPuts(command, conn) < 0)
goto sendFailed;
if (nParams > 0 && paramTypes)
{
if (pqPutInt(nParams, 2, conn) < 0)
goto sendFailed;
for (i = 0; i < nParams; i++)
{
if (pqPutInt(paramTypes[i], 4, conn) < 0)
goto sendFailed;
}
}
else
{
if (pqPutInt(0, 2, conn) < 0)
goto sendFailed;
}
if (pqPutMsgEnd(conn) < 0)
goto sendFailed;
}
/* Construct the Bind message */
if (pqPutMsgStart('B', false, conn) < 0 ||
pqPuts("", conn) < 0 ||
pqPuts(stmtName, conn) < 0)
goto sendFailed;
/* Send parameter formats */
if (nParams > 0 && paramFormats)
{
if (pqPutInt(nParams, 2, conn) < 0)
goto sendFailed;
for (i = 0; i < nParams; i++)
{
if (pqPutInt(paramFormats[i], 2, conn) < 0)
goto sendFailed;
}
}
else
{
if (pqPutInt(0, 2, conn) < 0)
goto sendFailed;
}
if (pqPutInt(nParams, 2, conn) < 0)
goto sendFailed;
/* Send parameters */
for (i = 0; i < nParams; i++)
{
if (paramValues && paramValues[i])
{
int nbytes;
if (paramFormats && paramFormats[i] != 0)
{
/* binary parameter */
if (paramLengths)
nbytes = paramLengths[i];
else
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("length must be given for binary parameter\n"));
goto sendFailed;
}
}
else
{
/* text parameter, do not use paramLengths */
nbytes = strlen(paramValues[i]);
}
if (pqPutInt(nbytes, 4, conn) < 0 ||
pqPutnchar(paramValues[i], nbytes, conn) < 0)
goto sendFailed;
}
else
{
/* take the param as NULL */
if (pqPutInt(-1, 4, conn) < 0)
goto sendFailed;
}
}
if (pqPutInt(1, 2, conn) < 0 ||
pqPutInt(resultFormat, 2, conn))
goto sendFailed;
if (pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* construct the Describe Portal message */
if (pqPutMsgStart('D', false, conn) < 0 ||
pqPutc('P', conn) < 0 ||
pqPuts("", conn) < 0 ||
pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* construct the Execute message */
if (pqPutMsgStart('E', false, conn) < 0 ||
pqPuts("", conn) < 0 ||
pqPutInt(0, 4, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* construct the Sync message */
if (pqPutMsgStart('S', false, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* remember we are using extended query protocol */
conn->queryclass = PGQUERY_EXTENDED;
/* and remember the query text too, if possible */
/* if insufficient memory, last_query just winds up NULL */
if (conn->last_query)
free(conn->last_query);
if (command)
conn->last_query = strdup(command);
else
conn->last_query = NULL;
/*
* Give the data a push. In nonblock mode, don't complain if we're unable
* to send it all; PQgetResult() will do any additional flushing needed.
*/
if (pqFlush(conn) < 0)
goto sendFailed;
/* OK, it's launched! */
conn->asyncStatus = PGASYNC_BUSY;
return 1;
sendFailed:
pqHandleSendFailure(conn);
return 0;
}
/*
* pqHandleSendFailure: try to clean up after failure to send command.
*
* Primarily, what we want to accomplish here is to process any ERROR or
* NOTICE messages that the backend might have sent just before it died.
* Since we're in IDLE state, all such messages will get sent to the notice
* processor.
*
* NOTE: this routine should only be called in PGASYNC_IDLE state.
*/
void
pqHandleSendFailure(PGconn *conn)
{
/*
* Accept and parse any available input data, ignoring I/O errors. Note
* that if pqReadData decides the backend has closed the channel, it will
* close our side of the socket --- that's just what we want here.
*/
while (pqReadData(conn) > 0)
parseInput(conn);
/*
* Be sure to parse available input messages even if we read no data.
* (Note: calling parseInput within the above loop isn't really necessary,
* but it prevents buffer bloat if there's a lot of data available.)
*/
parseInput(conn);
}
/*
* Select row-by-row processing mode
*/
int
PQsetSingleRowMode(PGconn *conn)
{
/*
* Only allow setting the flag when we have launched a query and not yet
* received any results.
*/
if (!conn)
return 0;
if (conn->asyncStatus != PGASYNC_BUSY)
return 0;
if (conn->queryclass != PGQUERY_SIMPLE &&
conn->queryclass != PGQUERY_EXTENDED)
return 0;
if (conn->result)
return 0;
/* OK, set flag */
conn->singleRowMode = true;
return 1;
}
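/*
 * ---- Usage sketch (editor's addition, not part of fe-exec.c) ----
 * Typical single-row mode loop: the flag must be set right after the
 * send and before any result has been consumed.  "conn" is assumed to
 * be an open connection; the query is just an illustration.
 */
static void
example_single_row_fetch(PGconn *conn)
{
    PGresult   *res;

    if (!PQsendQuery(conn, "SELECT generate_series(1, 1000000)"))
        return;
    (void) PQsetSingleRowMode(conn);    /* 0 just means: ordinary mode */
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            printf("row: %s\n", PQgetvalue(res, 0, 0)); /* one row each */
        else if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);
    }
}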
/*
* Consume any available input from the backend
* 0 return: some kind of trouble
* 1 return: no problem
*/
int
PQconsumeInput(PGconn *conn)
{
if (!conn)
return 0;
/*
* for non-blocking connections try to flush the send-queue, otherwise we
* may never get a response for something that may not have already been
* sent because it's in our write buffer!
*/
if (pqIsnonblocking(conn))
{
if (pqFlush(conn) < 0)
return 0;
}
/*
* Load more data, if available. We do this no matter what state we are
* in, since we are probably getting called because the application wants
* to get rid of a read-select condition. Note that we will NOT block
* waiting for more input.
*/
if (pqReadData(conn) < 0)
return 0;
/* Parsing of the data waits till later. */
return 1;
}
/*
* parseInput: if appropriate, parse input data from backend
* until input is exhausted or a stopping state is reached.
* Note that this function will NOT attempt to read more data from the backend.
*/
static void
parseInput(PGconn *conn)
{
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
pqParseInput3(conn);
else
pqParseInput2(conn);
}
/*
* PQisBusy
* Return TRUE if PQgetResult would block waiting for input.
*/
int
PQisBusy(PGconn *conn)
{
if (!conn)
return FALSE;
/* Parse any available data, if our state permits. */
parseInput(conn);
/* PQgetResult will return immediately in all states except BUSY. */
return conn->asyncStatus == PGASYNC_BUSY;
}
/*
* PQgetResult
* Get the next PGresult produced by a query. Returns NULL if no
* query work remains or an error has occurred (e.g. out of
* memory).
*/
PGresult *
PQgetResult(PGconn *conn)
{
PGresult *res;
if (!conn)
return NULL;
/* Parse any available data, if our state permits. */
parseInput(conn);
/* If not ready to return something, block until we are. */
while (conn->asyncStatus == PGASYNC_BUSY)
{
int flushResult;
/*
* If data remains unsent, send it. Else we might be waiting for the
* result of a command the backend hasn't even got yet.
*/
while ((flushResult = pqFlush(conn)) > 0)
{
if (pqWait(FALSE, TRUE, conn))
{
flushResult = -1;
break;
}
}
/* Wait for some more data, and load it. */
if (flushResult ||
pqWait(TRUE, FALSE, conn) ||
pqReadData(conn) < 0)
{
/*
* conn->errorMessage has been set by pqWait or pqReadData. We
* want to append it to any already-received error message.
*/
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_IDLE;
return pqPrepareAsyncResult(conn);
}
/* Parse it. */
parseInput(conn);
}
/* Return the appropriate thing. */
switch (conn->asyncStatus)
{
case PGASYNC_IDLE:
res = NULL; /* query is complete */
break;
case PGASYNC_READY:
res = pqPrepareAsyncResult(conn);
/* Set the state back to BUSY, allowing parsing to proceed. */
conn->asyncStatus = PGASYNC_BUSY;
break;
case PGASYNC_COPY_IN:
res = getCopyResult(conn, PGRES_COPY_IN);
break;
case PGASYNC_COPY_OUT:
res = getCopyResult(conn, PGRES_COPY_OUT);
break;
case PGASYNC_COPY_BOTH:
res = getCopyResult(conn, PGRES_COPY_BOTH);
break;
default:
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("unexpected asyncStatus: %d\n"),
(int) conn->asyncStatus);
res = PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR);
break;
}
if (res)
{
int i;
for (i = 0; i < res->nEvents; i++)
{
PGEventResultCreate evt;
evt.conn = conn;
evt.result = res;
if (!res->events[i].proc(PGEVT_RESULTCREATE, &evt,
res->events[i].passThrough))
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("PGEventProc \"%s\" failed during PGEVT_RESULTCREATE event\n"),
res->events[i].name);
pqSetResultError(res, conn->errorMessage.data);
res->resultStatus = PGRES_FATAL_ERROR;
break;
}
res->events[i].resultInitialized = TRUE;
}
}
return res;
}
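/*
 * ---- Usage sketch (editor's addition, not part of fe-exec.c) ----
 * The canonical nonblocking consumption pattern after PQsendQuery:
 * feed incoming data to libpq with PQconsumeInput, and only call
 * PQgetResult once PQisBusy says it will not block.  A real program
 * would select()/poll() on PQsocket(conn) instead of spinning.
 */
static void
example_async_drain(PGconn *conn)
{
    PGresult   *res;

    while (PQisBusy(conn))
    {
        /* wait on PQsocket(conn) here rather than busy-looping */
        if (!PQconsumeInput(conn))
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return;
        }
    }
    while ((res = PQgetResult(conn)) != NULL)
    {
        /* inspect PQresultStatus(res) as needed ... */
        PQclear(res);
    }
}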
/*
* getCopyResult
* Helper for PQgetResult: generate result for COPY-in-progress cases
*/
static PGresult *
getCopyResult(PGconn *conn, ExecStatusType copytype)
{
/*
* If the server connection has been lost, don't pretend everything is
* hunky-dory; instead return a PGRES_FATAL_ERROR result, and reset the
* asyncStatus to idle (corresponding to what we'd do if we'd detected I/O
* error in the earlier steps in PQgetResult). The text returned in the
* result is whatever is in conn->errorMessage; we hope that was filled
* with something relevant when the lost connection was detected.
*/
if (conn->status != CONNECTION_OK)
{
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_IDLE;
return pqPrepareAsyncResult(conn);
}
/* If we have an async result for the COPY, return that */
if (conn->result && conn->result->resultStatus == copytype)
return pqPrepareAsyncResult(conn);
/* Otherwise, invent a suitable PGresult */
return PQmakeEmptyPGresult(conn, copytype);
}
/*
* PQexec
* send a query to the backend and package up the result in a PGresult
*
* If the query was not even sent, return NULL; conn->errorMessage is set to
* a relevant message.
* If the query was sent, a new PGresult is returned (which could indicate
* either success or failure).
* The user is responsible for freeing the PGresult via PQclear()
* when done with it.
*/
PGresult *
PQexec(PGconn *conn, const char *query)
{
if (!PQexecStart(conn))
return NULL;
if (!PQsendQuery(conn, query))
return NULL;
return PQexecFinish(conn);
}
/*
* PQexecParams
* Like PQexec, but use protocol 3.0 so we can pass parameters
*/
PGresult *
PQexecParams(PGconn *conn,
const char *command,
int nParams,
const Oid *paramTypes,
const char *const * paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat)
{
if (!PQexecStart(conn))
return NULL;
if (!PQsendQueryParams(conn, command,
nParams, paramTypes, paramValues, paramLengths,
paramFormats, resultFormat))
return NULL;
return PQexecFinish(conn);
}
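/*
 * ---- Usage sketch (editor's addition, not part of fe-exec.c) ----
 * Passing a value out-of-line with PQexecParams avoids quoting and
 * SQL-injection problems.  The "accounts" table is hypothetical; all
 * parameters and results are requested in text format here.
 */
static void
example_exec_params(PGconn *conn, const char *account_id)
{
    const char *values[1];
    PGresult   *res;

    values[0] = account_id;
    res = PQexecParams(conn,
                       "SELECT balance FROM accounts WHERE id = $1",
                       1,       /* one parameter */
                       NULL,    /* let the server infer its type */
                       values,
                       NULL,    /* text params don't need lengths */
                       NULL,    /* all-text parameter formats */
                       0);      /* ask for text results */
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
        printf("balance: %s\n", PQgetvalue(res, 0, 0));
    PQclear(res);
}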
/*
* PQprepare
* Creates a prepared statement by issuing a v3.0 parse message.
*
* If the query was not even sent, return NULL; conn->errorMessage is set to
* a relevant message.
* If the query was sent, a new PGresult is returned (which could indicate
* either success or failure).
* The user is responsible for freeing the PGresult via PQclear()
* when done with it.
*/
PGresult *
PQprepare(PGconn *conn,
const char *stmtName, const char *query,
int nParams, const Oid *paramTypes)
{
if (!PQexecStart(conn))
return NULL;
if (!PQsendPrepare(conn, stmtName, query, nParams, paramTypes))
return NULL;
return PQexecFinish(conn);
}
/*
* PQexecPrepared
* Like PQexec, but execute a previously prepared statement,
* using protocol 3.0 so we can pass parameters
*/
PGresult *
PQexecPrepared(PGconn *conn,
const char *stmtName,
int nParams,
const char *const * paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat)
{
if (!PQexecStart(conn))
return NULL;
if (!PQsendQueryPrepared(conn, stmtName,
nParams, paramValues, paramLengths,
paramFormats, resultFormat))
return NULL;
return PQexecFinish(conn);
}
/*
* Common code for PQexec and sibling routines: prepare to send command
*/
static bool
PQexecStart(PGconn *conn)
{
PGresult *result;
if (!conn)
return false;
/*
* Silently discard any prior query result that application didn't eat.
* This is probably poor design, but it's here for backward compatibility.
*/
while ((result = PQgetResult(conn)) != NULL)
{
ExecStatusType resultStatus = result->resultStatus;
PQclear(result); /* only need its status */
if (resultStatus == PGRES_COPY_IN)
{
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
{
/* In protocol 3, we can get out of a COPY IN state */
if (PQputCopyEnd(conn,
libpq_gettext("COPY terminated by new PQexec")) < 0)
return false;
/* keep waiting to swallow the copy's failure message */
}
else
{
/* In older protocols we have to punt */
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("COPY IN state must be terminated first\n"));
return false;
}
}
else if (resultStatus == PGRES_COPY_OUT)
{
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
{
/*
* In protocol 3, we can get out of a COPY OUT state: we just
* switch back to BUSY and allow the remaining COPY data to be
* dropped on the floor.
*/
conn->asyncStatus = PGASYNC_BUSY;
/* keep waiting to swallow the copy's completion message */
}
else
{
/* In older protocols we have to punt */
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("COPY OUT state must be terminated first\n"));
return false;
}
}
else if (resultStatus == PGRES_COPY_BOTH)
{
/* We don't allow PQexec during COPY BOTH */
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("PQexec not allowed during COPY BOTH\n"));
return false;
}
/* check for loss of connection, too */
if (conn->status == CONNECTION_BAD)
return false;
}
/* OK to send a command */
return true;
}
/*
* Common code for PQexec and sibling routines: wait for command result
*/
static PGresult *
PQexecFinish(PGconn *conn)
{
PGresult *result;
PGresult *lastResult;
/*
* For backwards compatibility, return the last result if there are more
* than one --- but merge error messages if we get more than one error
* result.
*
* We have to stop if we see copy in/out/both, however. We will resume
* parsing after application performs the data transfer.
*
* Also stop if the connection is lost (else we'll loop infinitely).
*/
lastResult = NULL;
while ((result = PQgetResult(conn)) != NULL)
{
if (lastResult)
{
if (lastResult->resultStatus == PGRES_FATAL_ERROR &&
result->resultStatus == PGRES_FATAL_ERROR)
{
pqCatenateResultError(lastResult, result->errMsg);
PQclear(result);
result = lastResult;
/*
* Make sure PQerrorMessage agrees with concatenated result
*/
resetPQExpBuffer(&conn->errorMessage);
appendPQExpBufferStr(&conn->errorMessage, result->errMsg);
}
else
PQclear(lastResult);
}
lastResult = result;
if (result->resultStatus == PGRES_COPY_IN ||
result->resultStatus == PGRES_COPY_OUT ||
result->resultStatus == PGRES_COPY_BOTH ||
conn->status == CONNECTION_BAD)
break;
}
return lastResult;
}
/*
* PQdescribePrepared
* Obtain information about a previously prepared statement
*
* If the query was not even sent, return NULL; conn->errorMessage is set to
* a relevant message.
* If the query was sent, a new PGresult is returned (which could indicate
* either success or failure). On success, the PGresult contains status
* PGRES_COMMAND_OK, and its parameter and column-heading fields describe
* the statement's inputs and outputs respectively.
* The user is responsible for freeing the PGresult via PQclear()
* when done with it.
*/
PGresult *
PQdescribePrepared(PGconn *conn, const char *stmt)
{
if (!PQexecStart(conn))
return NULL;
if (!PQsendDescribe(conn, 'S', stmt))
return NULL;
return PQexecFinish(conn);
}
/*
* PQdescribePortal
* Obtain information about a previously created portal
*
* This is much like PQdescribePrepared, except that no parameter info is
* returned. Note that at the moment, libpq doesn't really expose portals
* to the client; but this can be used with a portal created by a SQL
* DECLARE CURSOR command.
*/
PGresult *
PQdescribePortal(PGconn *conn, const char *portal)
{
if (!PQexecStart(conn))
return NULL;
if (!PQsendDescribe(conn, 'P', portal))
return NULL;
return PQexecFinish(conn);
}
/*
* PQsendDescribePrepared
* Submit a Describe Statement command, but don't wait for it to finish
*
* Returns: 1 if successfully submitted
* 0 if error (conn->errorMessage is set)
*/
int
PQsendDescribePrepared(PGconn *conn, const char *stmt)
{
return PQsendDescribe(conn, 'S', stmt);
}
/*
* PQsendDescribePortal
* Submit a Describe Portal command, but don't wait for it to finish
*
* Returns: 1 if successfully submitted
* 0 if error (conn->errorMessage is set)
*/
int
PQsendDescribePortal(PGconn *conn, const char *portal)
{
return PQsendDescribe(conn, 'P', portal);
}
/*
* PQsendDescribe
* Common code to send a Describe command
*
* Available options for desc_type are
* 'S' to describe a prepared statement; or
* 'P' to describe a portal.
* Returns 1 on success and 0 on failure.
*/
static int
PQsendDescribe(PGconn *conn, char desc_type, const char *desc_target)
{
/* Treat null desc_target as empty string */
if (!desc_target)
desc_target = "";
if (!PQsendQueryStart(conn))
return 0;
/* This isn't gonna work on a 2.0 server */
if (PG_PROTOCOL_MAJOR(conn->pversion) < 3)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("function requires at least protocol version 3.0\n"));
return 0;
}
/* construct the Describe message */
if (pqPutMsgStart('D', false, conn) < 0 ||
pqPutc(desc_type, conn) < 0 ||
pqPuts(desc_target, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* construct the Sync message */
if (pqPutMsgStart('S', false, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
goto sendFailed;
/* remember we are doing a Describe */
conn->queryclass = PGQUERY_DESCRIBE;
/* reset last-query string (not relevant now) */
if (conn->last_query)
{
free(conn->last_query);
conn->last_query = NULL;
}
/*
* Give the data a push. In nonblock mode, don't complain if we're unable
* to send it all; PQgetResult() will do any additional flushing needed.
*/
if (pqFlush(conn) < 0)
goto sendFailed;
/* OK, it's launched! */
conn->asyncStatus = PGASYNC_BUSY;
return 1;
sendFailed:
pqHandleSendFailure(conn);
return 0;
}
/*
* PQnotifies
* returns a PGnotify* structure of the latest async notification
* that has not yet been handled
*
* returns NULL, if there is currently
* no unhandled async notification from the backend
*
* the CALLER is responsible for FREE'ing the structure returned
*/
PGnotify *
PQnotifies(PGconn *conn)
{
PGnotify *event;
if (!conn)
return NULL;
/* Parse any available data to see if we can extract NOTIFY messages. */
parseInput(conn);
event = conn->notifyHead;
if (event)
{
conn->notifyHead = event->next;
if (!conn->notifyHead)
conn->notifyTail = NULL;
event->next = NULL; /* don't let app see the internal state */
}
return event;
}
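/*
 * ---- Usage sketch (editor's addition, not part of fe-exec.c) ----
 * LISTEN/NOTIFY consumption: after LISTEN, keep feeding input to libpq
 * and drain notifications with PQnotifies, freeing each via PQfreemem.
 * A real program would select()/poll() on PQsocket(conn) before each
 * PQconsumeInput call; "my_channel" is just an example name.
 */
static void
example_listen_notify(PGconn *conn)
{
    PGnotify   *n;

    PQclear(PQexec(conn, "LISTEN my_channel"));
    for (;;)
    {
        /* wait on PQsocket(conn) here rather than busy-looping */
        if (!PQconsumeInput(conn))
            return;
        while ((n = PQnotifies(conn)) != NULL)
        {
            printf("notify \"%s\" from pid %d\n", n->relname, n->be_pid);
            PQfreemem(n);
        }
    }
}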
/*
* PQputCopyData - send some data to the backend during COPY IN or COPY BOTH
*
* Returns 1 if successful, 0 if data could not be sent (only possible
* in nonblock mode), or -1 if an error occurs.
*/
int
PQputCopyData(PGconn *conn, const char *buffer, int nbytes)
{
if (!conn)
return -1;
if (conn->asyncStatus != PGASYNC_COPY_IN &&
conn->asyncStatus != PGASYNC_COPY_BOTH)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("no COPY in progress\n"));
return -1;
}
/*
* Process any NOTICE or NOTIFY messages that might be pending in the
* input buffer. Since the server might generate many notices during the
* COPY, we want to clean those out reasonably promptly to prevent
* indefinite expansion of the input buffer. (Note: the actual read of
* input data into the input buffer happens down inside pqSendSome, but
* it's not authorized to get rid of the data again.)
*/
parseInput(conn);
if (nbytes > 0)
{
/*
* Try to flush any previously sent data in preference to growing the
* output buffer. If we can't enlarge the buffer enough to hold the
* data, return 0 in the nonblock case, else hard error. (For
* simplicity, always assume 5 bytes of overhead even in protocol 2.0
* case.)
*/
if ((conn->outBufSize - conn->outCount - 5) < nbytes)
{
if (pqFlush(conn) < 0)
return -1;
if (pqCheckOutBufferSpace(conn->outCount + 5 + (size_t) nbytes,
conn))
return pqIsnonblocking(conn) ? 0 : -1;
}
/* Send the data (too simple to delegate to fe-protocol files) */
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
{
if (pqPutMsgStart('d', false, conn) < 0 ||
pqPutnchar(buffer, nbytes, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
return -1;
}
else
{
if (pqPutMsgStart(0, false, conn) < 0 ||
pqPutnchar(buffer, nbytes, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
return -1;
}
}
return 1;
}
/*
* PQputCopyEnd - send EOF indication to the backend during COPY IN
*
* After calling this, use PQgetResult() to check command completion status.
*
* Returns 1 if successful, 0 if data could not be sent (only possible
* in nonblock mode), or -1 if an error occurs.
*/
int
PQputCopyEnd(PGconn *conn, const char *errormsg)
{
if (!conn)
return -1;
if (conn->asyncStatus != PGASYNC_COPY_IN &&
conn->asyncStatus != PGASYNC_COPY_BOTH)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("no COPY in progress\n"));
return -1;
}
/*
* Send the COPY END indicator. This is simple enough that we don't
* bother delegating it to the fe-protocol files.
*/
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
{
if (errormsg)
{
/* Send COPY FAIL */
if (pqPutMsgStart('f', false, conn) < 0 ||
pqPuts(errormsg, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
return -1;
}
else
{
/* Send COPY DONE */
if (pqPutMsgStart('c', false, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
return -1;
}
/*
* If we sent the COPY command in extended-query mode, we must issue a
* Sync as well.
*/
if (conn->queryclass != PGQUERY_SIMPLE)
{
if (pqPutMsgStart('S', false, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
return -1;
}
}
else
{
if (errormsg)
{
/* Oops, no way to do this in 2.0 */
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("function requires at least protocol version 3.0\n"));
return -1;
}
else
{
/* Send old-style end-of-data marker */
if (pqPutMsgStart(0, false, conn) < 0 ||
pqPutnchar("\\.\n", 3, conn) < 0 ||
pqPutMsgEnd(conn) < 0)
return -1;
}
}
/* Return to active duty */
if (conn->asyncStatus == PGASYNC_COPY_BOTH)
conn->asyncStatus = PGASYNC_COPY_OUT;
else
conn->asyncStatus = PGASYNC_BUSY;
resetPQExpBuffer(&conn->errorMessage);
/* Try to flush data */
if (pqFlush(conn) < 0)
return -1;
return 1;
}
/*
* PQgetCopyData - read a row of data from the backend during COPY OUT
* or COPY BOTH
*
* If successful, sets *buffer to point to a malloc'd row of data, and
* returns row length (always > 0) as result.
* Returns 0 if no row available yet (only possible if async is true),
* -1 if end of copy (consult PQgetResult), or -2 if error (consult
* PQerrorMessage).
*/
int
PQgetCopyData(PGconn *conn, char **buffer, int async)
{
*buffer = NULL; /* for all failure cases */
if (!conn)
return -2;
if (conn->asyncStatus != PGASYNC_COPY_OUT &&
conn->asyncStatus != PGASYNC_COPY_BOTH)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("no COPY in progress\n"));
return -2;
}
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
return pqGetCopyData3(conn, buffer, async);
else
return pqGetCopyData2(conn, buffer, async);
}
/*
* PQgetline - gets a newline-terminated string from the backend.
*
* Chiefly here so that applications can use "COPY <rel> to stdout"
* and read the output string. Returns a null-terminated string in s.
*
* XXX this routine is now deprecated, because it can't handle binary data.
* If called during a COPY BINARY we return EOF.
*
* PQgetline reads up to maxlen-1 characters (like fgets(3)) but strips
* the terminating \n (like gets(3)).
*
* CAUTION: the caller is responsible for detecting the end-of-copy signal
* (a line containing just "\.") when using this routine.
*
* RETURNS:
* EOF if error (eg, invalid arguments are given)
* 0 if EOL is reached (i.e., \n has been read)
* (this is required for backward-compatibility -- this
* routine used to always return EOF or 0, assuming that
* the line ended within maxlen bytes.)
* 1 in other cases (i.e., the buffer was filled before \n is reached)
*/
int
PQgetline(PGconn *conn, char *s, int maxlen)
{
if (!s || maxlen <= 0)
return EOF;
*s = '\0';
/* maxlen must be at least 3 to hold the \. terminator! */
if (maxlen < 3)
return EOF;
if (!conn)
return EOF;
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
return pqGetline3(conn, s, maxlen);
else
return pqGetline2(conn, s, maxlen);
}
/*
* PQgetlineAsync - gets a COPY data row without blocking.
*
* This routine is for applications that want to do "COPY <rel> to stdout"
* asynchronously, that is without blocking. Having issued the COPY command
* and gotten a PGRES_COPY_OUT response, the app should call PQconsumeInput
* and this routine until the end-of-data signal is detected. Unlike
* PQgetline, this routine takes responsibility for detecting end-of-data.
*
* On each call, PQgetlineAsync will return data if a complete data row
* is available in libpq's input buffer. Otherwise, no data is returned
* until the rest of the row arrives.
*
* If -1 is returned, the end-of-data signal has been recognized (and removed
* from libpq's input buffer). The caller *must* next call PQendcopy and
* then return to normal processing.
*
* RETURNS:
* -1 if the end-of-copy-data marker has been recognized
* 0 if no data is available
* >0 the number of bytes returned.
*
* The data returned will not extend beyond a data-row boundary. If possible
* a whole row will be returned at one time. But if the buffer offered by
* the caller is too small to hold a row sent by the backend, then a partial
* data row will be returned. In text mode this can be detected by testing
* whether the last returned byte is '\n' or not.
*
* The returned data is *not* null-terminated.
*/
int
PQgetlineAsync(PGconn *conn, char *buffer, int bufsize)
{
if (!conn)
return -1;
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
return pqGetlineAsync3(conn, buffer, bufsize);
else
return pqGetlineAsync2(conn, buffer, bufsize);
}
/*
* PQputline -- sends a string to the backend during COPY IN.
* Returns 0 if OK, EOF if not.
*
* This is deprecated primarily because the return convention doesn't allow
* caller to tell the difference between a hard error and a nonblock-mode
* send failure.
*/
int
PQputline(PGconn *conn, const char *s)
{
return PQputnbytes(conn, s, strlen(s));
}
/*
* PQputnbytes -- like PQputline, but buffer need not be null-terminated.
* Returns 0 if OK, EOF if not.
*/
int
PQputnbytes(PGconn *conn, const char *buffer, int nbytes)
{
if (PQputCopyData(conn, buffer, nbytes) > 0)
return 0;
else
return EOF;
}
/*
* PQendcopy
* After completing the data transfer portion of a copy in/out,
* the application must call this routine to finish the command protocol.
*
* When using protocol 3.0 this is deprecated; it's cleaner to use PQgetResult
* to get the transfer status. Note however that when using 2.0 protocol,
* recovering from a copy failure often requires a PQreset. PQendcopy will
* take care of that, PQgetResult won't.
*
* RETURNS:
* 0 on success
* 1 on failure
*/
int
PQendcopy(PGconn *conn)
{
if (!conn)
return 0;
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
return pqEndcopy3(conn);
else
return pqEndcopy2(conn);
}
/* ----------------
* PQfn - Send a function call to the POSTGRES backend.
*
* conn : backend connection
* fnid : OID of function to be called
* result_buf : pointer to result buffer
* result_len : actual length of result is returned here
* result_is_int : If the result is an integer, this must be 1,
* otherwise this should be 0
* args : pointer to an array of function arguments
* (each has length, if integer, and value/pointer)
* nargs : # of arguments in args array.
*
* RETURNS
* PGresult with status = PGRES_COMMAND_OK if successful.
* *result_len is > 0 if there is a return value, 0 if not.
* PGresult with status = PGRES_FATAL_ERROR if backend returns an error.
* NULL on communications failure. conn->errorMessage will be set.
* ----------------
*/
PGresult *
PQfn(PGconn *conn,
int fnid,
int *result_buf,
int *result_len,
int result_is_int,
const PQArgBlock *args,
int nargs)
{
*result_len = 0;
if (!conn)
return NULL;
/* clear the error string */
resetPQExpBuffer(&conn->errorMessage);
if (conn->sock == PGINVALID_SOCKET || conn->asyncStatus != PGASYNC_IDLE ||
conn->result != NULL)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("connection in wrong state\n"));
return NULL;
}
if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)
return pqFunctionCall3(conn, fnid,
result_buf, result_len,
result_is_int,
args, nargs);
else
return pqFunctionCall2(conn, fnid,
result_buf, result_len,
result_is_int,
args, nargs);
}
/* ====== accessor funcs for PGresult ======== */
ExecStatusType
PQresultStatus(const PGresult *res)
{
if (!res)
return PGRES_FATAL_ERROR;
return res->resultStatus;
}
char *
PQresStatus(ExecStatusType status)
{
if ((unsigned int) status >= sizeof pgresStatus / sizeof pgresStatus[0])
return libpq_gettext("invalid ExecStatusType code");
return pgresStatus[status];
}
char *
PQresultErrorMessage(const PGresult *res)
{
if (!res || !res->errMsg)
return "";
return res->errMsg;
}
char *
PQresultVerboseErrorMessage(const PGresult *res,
PGVerbosity verbosity,
PGContextVisibility show_context)
{
PQExpBufferData workBuf;
/*
* Because the caller is expected to free the result string, we must
* strdup any constant result. We use plain strdup and document that
* callers should expect NULL if out-of-memory.
*/
if (!res ||
(res->resultStatus != PGRES_FATAL_ERROR &&
res->resultStatus != PGRES_NONFATAL_ERROR))
return strdup(libpq_gettext("PGresult is not an error result\n"));
initPQExpBuffer(&workBuf);
/*
* Currently, we pass this off to fe-protocol3.c in all cases; it will
* behave reasonably sanely with an error reported by fe-protocol2.c as
* well. If necessary, we could record the protocol version in PGresults
* so as to be able to invoke a version-specific message formatter, but
* for now there's no need.
*/
pqBuildErrorMessage3(&workBuf, res, verbosity, show_context);
/* If insufficient memory to format the message, fail cleanly */
if (PQExpBufferDataBroken(workBuf))
{
termPQExpBuffer(&workBuf);
return strdup(libpq_gettext("out of memory\n"));
}
return workBuf.data;
}
char *
PQresultErrorField(const PGresult *res, int fieldcode)
{
PGMessageField *pfield;
if (!res)
return NULL;
for (pfield = res->errFields; pfield != NULL; pfield = pfield->next)
{
if (pfield->code == fieldcode)
return pfield->contents;
}
return NULL;
}
int
PQntuples(const PGresult *res)
{
if (!res)
return 0;
return res->ntups;
}
int
PQnfields(const PGresult *res)
{
if (!res)
return 0;
return res->numAttributes;
}
int
PQbinaryTuples(const PGresult *res)
{
if (!res)
return 0;
return res->binary;
}
/*
* Helper routines to range-check field numbers and tuple numbers.
* Return TRUE if OK, FALSE if not
*/
static int
check_field_number(const PGresult *res, int field_num)
{
if (!res)
return FALSE; /* no way to display error message... */
if (field_num < 0 || field_num >= res->numAttributes)
{
pqInternalNotice(&res->noticeHooks,
"column number %d is out of range 0..%d",
field_num, res->numAttributes - 1);
return FALSE;
}
return TRUE;
}
static int
check_tuple_field_number(const PGresult *res,
int tup_num, int field_num)
{
if (!res)
return FALSE; /* no way to display error message... */
if (tup_num < 0 || tup_num >= res->ntups)
{
pqInternalNotice(&res->noticeHooks,
"row number %d is out of range 0..%d",
tup_num, res->ntups - 1);
return FALSE;
}
if (field_num < 0 || field_num >= res->numAttributes)
{
pqInternalNotice(&res->noticeHooks,
"column number %d is out of range 0..%d",
field_num, res->numAttributes - 1);
return FALSE;
}
return TRUE;
}
static int
check_param_number(const PGresult *res, int param_num)
{
if (!res)
return FALSE; /* no way to display error message... */
if (param_num < 0 || param_num >= res->numParameters)
{
pqInternalNotice(&res->noticeHooks,
"parameter number %d is out of range 0..%d",
param_num, res->numParameters - 1);
return FALSE;
}
return TRUE;
}
/*
* returns NULL if the field_num is invalid
*/
char *
PQfname(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return NULL;
if (res->attDescs)
return res->attDescs[field_num].name;
else
return NULL;
}
/*
* PQfnumber: find column number given column name
*
* The column name is parsed as if it were in a SQL statement, including
* case-folding and double-quote processing. But note a possible gotcha:
* downcasing in the frontend might follow different locale rules than
* downcasing in the backend...
*
* Returns -1 if no match. In the present backend it is also possible
* to have multiple matches, in which case the first one is found.
*/
int
PQfnumber(const PGresult *res, const char *field_name)
{
char *field_case;
bool in_quotes;
bool all_lower = true;
const char *iptr;
char *optr;
int i;
if (!res)
return -1;
/*
* Note: it is correct to reject a zero-length input string; the proper
* input to match a zero-length field name would be "".
*/
if (field_name == NULL ||
field_name[0] == '\0' ||
res->attDescs == NULL)
return -1;
/*
* Check if we can avoid the strdup() and related work because the
* passed-in string wouldn't be changed before we do the check anyway.
*/
for (iptr = field_name; *iptr; iptr++)
{
char c = *iptr;
if (c == '"' || c != pg_tolower((unsigned char) c))
{
all_lower = false;
break;
}
}
if (all_lower)
for (i = 0; i < res->numAttributes; i++)
if (strcmp(field_name, res->attDescs[i].name) == 0)
return i;
/* Fall through to the normal check if that didn't work out. */
/*
* Note: this code will not reject partially quoted strings, eg
* foo"BAR"foo will become fooBARfoo when it probably ought to be an error
* condition.
*/
field_case = strdup(field_name);
if (field_case == NULL)
return -1; /* grotty */
in_quotes = false;
optr = field_case;
for (iptr = field_case; *iptr; iptr++)
{
char c = *iptr;
if (in_quotes)
{
if (c == '"')
{
if (iptr[1] == '"')
{
/* doubled quotes become a single quote */
*optr++ = '"';
iptr++;
}
else
in_quotes = false;
}
else
*optr++ = c;
}
else if (c == '"')
in_quotes = true;
else
{
c = pg_tolower((unsigned char) c);
*optr++ = c;
}
}
*optr = '\0';
for (i = 0; i < res->numAttributes; i++)
{
if (strcmp(field_case, res->attDescs[i].name) == 0)
{
free(field_case);
return i;
}
}
free(field_case);
return -1;
}
Oid
PQftable(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return InvalidOid;
if (res->attDescs)
return res->attDescs[field_num].tableid;
else
return InvalidOid;
}
int
PQftablecol(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return 0;
if (res->attDescs)
return res->attDescs[field_num].columnid;
else
return 0;
}
int
PQfformat(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return 0;
if (res->attDescs)
return res->attDescs[field_num].format;
else
return 0;
}
Oid
PQftype(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return InvalidOid;
if (res->attDescs)
return res->attDescs[field_num].typid;
else
return InvalidOid;
}
int
PQfsize(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return 0;
if (res->attDescs)
return res->attDescs[field_num].typlen;
else
return 0;
}
int
PQfmod(const PGresult *res, int field_num)
{
if (!check_field_number(res, field_num))
return 0;
if (res->attDescs)
return res->attDescs[field_num].atttypmod;
else
return 0;
}
char *
PQcmdStatus(PGresult *res)
{
if (!res)
return NULL;
return res->cmdStatus;
}
/*
* PQoidStatus -
* if the last command was an INSERT, return the oid string
* if not, return ""
*/
char *
PQoidStatus(const PGresult *res)
{
/*
* This must be enough to hold the result. Don't laugh, this is better
* than what this function used to do.
*/
static char buf[24];
size_t len;
if (!res || strncmp(res->cmdStatus, "INSERT ", 7) != 0)
return "";
len = strspn(res->cmdStatus + 7, "0123456789");
if (len > sizeof(buf) - 1)
len = sizeof(buf) - 1;
memcpy(buf, res->cmdStatus + 7, len);
buf[len] = '\0';
return buf;
}
/*
* PQoidValue -
* a perhaps preferable form of the above which just returns
* an Oid type
*/
Oid
PQoidValue(const PGresult *res)
{
char *endptr = NULL;
unsigned long result;
if (!res ||
strncmp(res->cmdStatus, "INSERT ", 7) != 0 ||
res->cmdStatus[7] < '0' ||
res->cmdStatus[7] > '9')
return InvalidOid;
result = strtoul(res->cmdStatus + 7, &endptr, 10);
if (!endptr || (*endptr != ' ' && *endptr != '\0'))
return InvalidOid;
else
return (Oid) result;
}
/*
* PQcmdTuples -
* If the last command was INSERT/UPDATE/DELETE/MOVE/FETCH/COPY, return
* a string containing the number of inserted/affected tuples. If not,
* return "".
*
* XXX: this should probably return an int
*/
char *
PQcmdTuples(PGresult *res)
{
char *p,
*c;
if (!res)
return "";
if (strncmp(res->cmdStatus, "INSERT ", 7) == 0)
{
p = res->cmdStatus + 7;
/* INSERT: skip oid and space */
while (*p && *p != ' ')
p++;
if (*p == 0)
goto interpret_error; /* no space? */
p++;
}
else if (strncmp(res->cmdStatus, "SELECT ", 7) == 0 ||
strncmp(res->cmdStatus, "DELETE ", 7) == 0 ||
strncmp(res->cmdStatus, "UPDATE ", 7) == 0)
p = res->cmdStatus + 7;
else if (strncmp(res->cmdStatus, "FETCH ", 6) == 0)
p = res->cmdStatus + 6;
else if (strncmp(res->cmdStatus, "MOVE ", 5) == 0 ||
strncmp(res->cmdStatus, "COPY ", 5) == 0)
p = res->cmdStatus + 5;
else
return "";
/* check that we have an integer (at least one digit, nothing else) */
for (c = p; *c; c++)
{
if (!isdigit((unsigned char) *c))
goto interpret_error;
}
if (c == p)
goto interpret_error;
return p;
interpret_error:
pqInternalNotice(&res->noticeHooks,
"could not interpret result from server: %s",
res->cmdStatus);
return "";
}
/*
* PQgetvalue:
* return the value of field 'field_num' of row 'tup_num'
*/
char *
PQgetvalue(const PGresult *res, int tup_num, int field_num)
{
if (!check_tuple_field_number(res, tup_num, field_num))
return NULL;
return res->tuples[tup_num][field_num].value;
}
/* PQgetlength:
* returns the actual length of a field value in bytes.
*/
int
PQgetlength(const PGresult *res, int tup_num, int field_num)
{
if (!check_tuple_field_number(res, tup_num, field_num))
return 0;
if (res->tuples[tup_num][field_num].len != NULL_LEN)
return res->tuples[tup_num][field_num].len;
else
return 0;
}
/* PQgetisnull:
* returns the null status of a field value.
*/
int
PQgetisnull(const PGresult *res, int tup_num, int field_num)
{
if (!check_tuple_field_number(res, tup_num, field_num))
return 1; /* pretend it is null */
if (res->tuples[tup_num][field_num].len == NULL_LEN)
return 1;
else
return 0;
}
/* PQnparams:
* returns the number of input parameters of a prepared statement.
*/
int
PQnparams(const PGresult *res)
{
if (!res)
return 0;
return res->numParameters;
}
/* PQparamtype:
* returns type Oid of the specified statement parameter.
*/
Oid
PQparamtype(const PGresult *res, int param_num)
{
if (!check_param_number(res, param_num))
return InvalidOid;
if (res->paramDescs)
return res->paramDescs[param_num].typid;
else
return InvalidOid;
}
/* PQsetnonblocking:
* sets the PGconn's database connection non-blocking if the arg is TRUE
* or makes it blocking if the arg is FALSE, this will not protect
* you from PQexec(), you'll only be safe when using the non-blocking API.
* Needs to be called only on a connected database connection.
*/
int
PQsetnonblocking(PGconn *conn, int arg)
{
bool barg;
if (!conn || conn->status == CONNECTION_BAD)
return -1;
barg = (arg ? TRUE : FALSE);
/* early out if the socket is already in the state requested */
if (barg == conn->nonblocking)
return 0;
/*
* to guarantee constancy for flushing/query/result-polling behavior we
* need to flush the send queue at this point in order to guarantee proper
* behavior. this is ok because either they are making a transition _from_
* or _to_ blocking mode, either way we can block them.
*/
/* if we are going from blocking to non-blocking flush here */
if (pqFlush(conn))
return -1;
conn->nonblocking = barg;
return 0;
}
/*
* return the blocking status of the database connection
* TRUE == nonblocking, FALSE == blocking
*/
int
PQisnonblocking(const PGconn *conn)
{
return pqIsnonblocking(conn);
}
/* libpq is thread-safe? */
int
PQisthreadsafe(void)
{
#ifdef ENABLE_THREAD_SAFETY
return true;
#else
return false;
#endif
}
/* try to force data out, really only useful for non-blocking users */
int
PQflush(PGconn *conn)
{
return pqFlush(conn);
}
/*
* PQfreemem - safely frees memory allocated
*
* Needed mostly by Win32, unless multithreaded DLL (/MD in VC6)
 * Used for freeing memory from PQescapeBytea()/PQunescapeBytea()
*/
void
PQfreemem(void *ptr)
{
free(ptr);
}
/*
 * PQfreeNotify - frees the memory associated with a PGnotify
*
* This function is here only for binary backward compatibility.
* New code should use PQfreemem(). A macro will automatically map
* calls to PQfreemem. It should be removed in the future. bjm 2003-03-24
*/
#undef PQfreeNotify
void PQfreeNotify(PGnotify *notify);
void
PQfreeNotify(PGnotify *notify)
{
PQfreemem(notify);
}
/*
* Escaping arbitrary strings to get valid SQL literal strings.
*
* Replaces "'" with "''", and if not std_strings, replaces "\" with "\\".
*
* length is the length of the source string. (Note: if a terminating NUL
* is encountered sooner, PQescapeString stops short of "length"; the behavior
* is thus rather like strncpy.)
*
* For safety the buffer at "to" must be at least 2*length + 1 bytes long.
* A terminating NUL character is added to the output string, whether the
* input is NUL-terminated or not.
*
* Returns the actual length of the output (not counting the terminating NUL).
*/
static size_t
PQescapeStringInternal(PGconn *conn,
char *to, const char *from, size_t length,
int *error,
int encoding, bool std_strings)
{
const char *source = from;
char *target = to;
size_t remaining = length;
if (error)
*error = 0;
while (remaining > 0 && *source != '\0')
{
char c = *source;
int len;
int i;
/* Fast path for plain ASCII */
if (!IS_HIGHBIT_SET(c))
{
/* Apply quoting if needed */
if (SQL_STR_DOUBLE(c, !std_strings))
*target++ = c;
/* Copy the character */
*target++ = c;
source++;
remaining--;
continue;
}
/* Slow path for possible multibyte characters */
len = pg_encoding_mblen(encoding, source);
/* Copy the character */
for (i = 0; i < len; i++)
{
if (remaining == 0 || *source == '\0')
break;
*target++ = *source++;
remaining--;
}
/*
* If we hit premature end of string (ie, incomplete multibyte
* character), try to pad out to the correct length with spaces. We
* may not be able to pad completely, but we will always be able to
* insert at least one pad space (since we'd not have quoted a
* multibyte character). This should be enough to make a string that
* the server will error out on.
*/
if (i < len)
{
if (error)
*error = 1;
if (conn)
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("incomplete multibyte character\n"));
for (; i < len; i++)
{
if (((size_t) (target - to)) / 2 >= length)
break;
*target++ = ' ';
}
break;
}
}
/* Write the terminating NUL character. */
*target = '\0';
return target - to;
}
size_t
PQescapeStringConn(PGconn *conn,
char *to, const char *from, size_t length,
int *error)
{
if (!conn)
{
/* force empty-string result */
*to = '\0';
if (error)
*error = 1;
return 0;
}
return PQescapeStringInternal(conn, to, from, length, error,
conn->client_encoding,
conn->std_strings);
}
size_t
PQescapeString(char *to, const char *from, size_t length)
{
return PQescapeStringInternal(NULL, to, from, length, NULL,
static_client_encoding,
static_std_strings);
}
/*
* Escape arbitrary strings. If as_ident is true, we escape the result
* as an identifier; if false, as a literal. The result is returned in
* a newly allocated buffer. If we fail due to an encoding violation or out
* of memory condition, we return NULL, storing an error message into conn.
*/
static char *
PQescapeInternal(PGconn *conn, const char *str, size_t len, bool as_ident)
{
const char *s;
char *result;
char *rp;
int num_quotes = 0; /* single or double, depending on as_ident */
int num_backslashes = 0;
int input_len;
int result_size;
char quote_char = as_ident ? '"' : '\'';
/* We must have a connection, else fail immediately. */
if (!conn)
return NULL;
/* Scan the string for characters that must be escaped. */
for (s = str; (s - str) < len && *s != '\0'; ++s)
{
if (*s == quote_char)
++num_quotes;
else if (*s == '\\')
++num_backslashes;
else if (IS_HIGHBIT_SET(*s))
{
int charlen;
/* Slow path for possible multibyte characters */
charlen = pg_encoding_mblen(conn->client_encoding, s);
/* Multibyte character overruns allowable length. */
if ((s - str) + charlen > len || memchr(s, 0, charlen) != NULL)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("incomplete multibyte character\n"));
return NULL;
}
/* Adjust s, bearing in mind that for loop will increment it. */
s += charlen - 1;
}
}
/* Allocate output buffer. */
input_len = s - str;
result_size = input_len + num_quotes + 3; /* two quotes, plus a NUL */
if (!as_ident && num_backslashes > 0)
result_size += num_backslashes + 2;
result = rp = (char *) malloc(result_size);
if (rp == NULL)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("out of memory\n"));
return NULL;
}
/*
* If we are escaping a literal that contains backslashes, we use the
* escape string syntax so that the result is correct under either value
* of standard_conforming_strings. We also emit a leading space in this
* case, to guard against the possibility that the result might be
* interpolated immediately following an identifier.
*/
if (!as_ident && num_backslashes > 0)
{
*rp++ = ' ';
*rp++ = 'E';
}
/* Opening quote. */
*rp++ = quote_char;
/*
* Use fast path if possible.
*
* We've already verified that the input string is well-formed in the
* current encoding. If it contains no quotes and, in the case of
* literal-escaping, no backslashes, then we can just copy it directly to
* the output buffer, adding the necessary quotes.
*
* If not, we must rescan the input and process each character
* individually.
*/
if (num_quotes == 0 && (num_backslashes == 0 || as_ident))
{
memcpy(rp, str, input_len);
rp += input_len;
}
else
{
for (s = str; s - str < input_len; ++s)
{
if (*s == quote_char || (!as_ident && *s == '\\'))
{
*rp++ = *s;
*rp++ = *s;
}
else if (!IS_HIGHBIT_SET(*s))
*rp++ = *s;
else
{
int i = pg_encoding_mblen(conn->client_encoding, s);
while (1)
{
*rp++ = *s;
if (--i == 0)
break;
++s; /* for loop will provide the final increment */
}
}
}
}
/* Closing quote and terminating NUL. */
*rp++ = quote_char;
*rp = '\0';
return result;
}
char *
PQescapeLiteral(PGconn *conn, const char *str, size_t len)
{
return PQescapeInternal(conn, str, len, false);
}
char *
PQescapeIdentifier(PGconn *conn, const char *str, size_t len)
{
return PQescapeInternal(conn, str, len, true);
}
/* HEX encoding support for bytea */
static const char hextbl[] = "0123456789abcdef";
static const int8 hexlookup[128] = {
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -1, -1, -1, -1, -1, -1,
-1, 10, 11, 12, 13, 14, 15, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, 10, 11, 12, 13, 14, 15, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
};
static inline char
get_hex(char c)
{
int res = -1;
if (c > 0 && c < 127)
res = hexlookup[(unsigned char) c];
return (char) res;
}
/*
* PQescapeBytea - converts from binary string to the
* minimal encoding necessary to include the string in an SQL
* INSERT statement with a bytea type column as the target.
*
* We can use either hex or escape (traditional) encoding.
* In escape mode, the following transformations are applied:
* '\0' == ASCII 0 == \000
* '\'' == ASCII 39 == ''
* '\\' == ASCII 92 == \\
* anything < 0x20, or > 0x7e ---> \ooo
* (where ooo is an octal expression)
*
* If not std_strings, all backslashes sent to the output are doubled.
*/
static unsigned char *
PQescapeByteaInternal(PGconn *conn,
const unsigned char *from, size_t from_length,
size_t *to_length, bool std_strings, bool use_hex)
{
const unsigned char *vp;
unsigned char *rp;
unsigned char *result;
size_t i;
size_t len;
size_t bslash_len = (std_strings ? 1 : 2);
/*
* empty string has 1 char ('\0')
*/
len = 1;
if (use_hex)
{
len += bslash_len + 1 + 2 * from_length;
}
else
{
vp = from;
for (i = from_length; i > 0; i--, vp++)
{
if (*vp < 0x20 || *vp > 0x7e)
len += bslash_len + 3;
else if (*vp == '\'')
len += 2;
else if (*vp == '\\')
len += bslash_len + bslash_len;
else
len++;
}
}
*to_length = len;
rp = result = (unsigned char *) malloc(len);
if (rp == NULL)
{
if (conn)
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("out of memory\n"));
return NULL;
}
if (use_hex)
{
if (!std_strings)
*rp++ = '\\';
*rp++ = '\\';
*rp++ = 'x';
}
vp = from;
for (i = from_length; i > 0; i--, vp++)
{
unsigned char c = *vp;
if (use_hex)
{
*rp++ = hextbl[(c >> 4) & 0xF];
*rp++ = hextbl[c & 0xF];
}
else if (c < 0x20 || c > 0x7e)
{
if (!std_strings)
*rp++ = '\\';
*rp++ = '\\';
*rp++ = (c >> 6) + '0';
*rp++ = ((c >> 3) & 07) + '0';
*rp++ = (c & 07) + '0';
}
else if (c == '\'')
{
*rp++ = '\'';
*rp++ = '\'';
}
else if (c == '\\')
{
if (!std_strings)
{
*rp++ = '\\';
*rp++ = '\\';
}
*rp++ = '\\';
*rp++ = '\\';
}
else
*rp++ = c;
}
*rp = '\0';
return result;
}
unsigned char *
PQescapeByteaConn(PGconn *conn,
const unsigned char *from, size_t from_length,
size_t *to_length)
{
if (!conn)
return NULL;
return PQescapeByteaInternal(conn, from, from_length, to_length,
conn->std_strings,
(conn->sversion >= 90000));
}
unsigned char *
PQescapeBytea(const unsigned char *from, size_t from_length, size_t *to_length)
{
return PQescapeByteaInternal(NULL, from, from_length, to_length,
static_std_strings,
false /* can't use hex */ );
}
#define ISFIRSTOCTDIGIT(CH) ((CH) >= '0' && (CH) <= '3')
#define ISOCTDIGIT(CH) ((CH) >= '0' && (CH) <= '7')
#define OCTVAL(CH) ((CH) - '0')
/*
* PQunescapeBytea - converts the null terminated string representation
* of a bytea, strtext, into binary, filling a buffer. It returns a
* pointer to the buffer (or NULL on error), and the size of the
* buffer in retbuflen. The pointer may subsequently be used as an
* argument to the function PQfreemem.
*
* The following transformations are made:
* \\ == ASCII 92 == \
* \ooo == a byte whose value = ooo (ooo is an octal number)
* \x == x (x is any character not matched by the above transformations)
*/
unsigned char *
PQunescapeBytea(const unsigned char *strtext, size_t *retbuflen)
{
size_t strtextlen,
buflen;
unsigned char *buffer,
*tmpbuf;
size_t i,
j;
if (strtext == NULL)
return NULL;
strtextlen = strlen((const char *) strtext);
if (strtext[0] == '\\' && strtext[1] == 'x')
{
const unsigned char *s;
unsigned char *p;
buflen = (strtextlen - 2) / 2;
/* Avoid unportable malloc(0) */
buffer = (unsigned char *) malloc(buflen > 0 ? buflen : 1);
if (buffer == NULL)
return NULL;
s = strtext + 2;
p = buffer;
while (*s)
{
char v1,
v2;
/*
* Bad input is silently ignored. Note that this includes
* whitespace between hex pairs, which is allowed by byteain.
*/
v1 = get_hex(*s++);
if (!*s || v1 == (char) -1)
continue;
v2 = get_hex(*s++);
if (v2 != (char) -1)
*p++ = (v1 << 4) | v2;
}
buflen = p - buffer;
}
else
{
/*
* Length of input is max length of output, but add one to avoid
* unportable malloc(0) if input is zero-length.
*/
buffer = (unsigned char *) malloc(strtextlen + 1);
if (buffer == NULL)
return NULL;
for (i = j = 0; i < strtextlen;)
{
switch (strtext[i])
{
case '\\':
i++;
if (strtext[i] == '\\')
buffer[j++] = strtext[i++];
else
{
if ((ISFIRSTOCTDIGIT(strtext[i])) &&
(ISOCTDIGIT(strtext[i + 1])) &&
(ISOCTDIGIT(strtext[i + 2])))
{
int byte;
byte = OCTVAL(strtext[i++]);
byte = (byte << 3) + OCTVAL(strtext[i++]);
byte = (byte << 3) + OCTVAL(strtext[i++]);
buffer[j++] = byte;
}
}
/*
* Note: if we see '\' followed by something that isn't a
* recognized escape sequence, we loop around having done
* nothing except advance i. Therefore the something will
* be emitted as ordinary data on the next cycle. Corner
* case: '\' at end of string will just be discarded.
*/
break;
default:
buffer[j++] = strtext[i++];
break;
}
}
buflen = j; /* buflen is the length of the dequoted data */
}
/* Shrink the buffer to be no larger than necessary */
/* +1 avoids unportable behavior when buflen==0 */
tmpbuf = realloc(buffer, buflen + 1);
/* It would only be a very brain-dead realloc that could fail, but... */
if (!tmpbuf)
{
free(buffer);
return NULL;
}
*retbuflen = buflen;
return tmpbuf;
}
}}}
https://docs.informatica.com/content/dam/source/GUID-8/GUID-856CE534-42AE-4C55-B857-92E3837EC24C/1/en/PWX_102HF1_PowerExchangeForKafkaUserGuideForPowerCenter_en.pdf
pread64 is a positional read that supports 64-bit addressing of a file; it's also a synchronous syscall
{{{
@pvalid filesystemio_options
}}}
kaio <-- similar to ASM
when kaio fails it will fall back to vectored writes and pread (userland)
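To see the positional-read behavior from userland, here's a minimal Python sketch (os.pread wraps the pread/pread64 syscall on Unix; the file path is just an example):
{{{
import os

# pread reads from an absolute offset without moving the fd's seek
# position, which is what makes it safe for concurrent readers
fd = os.open("/tmp/datafile", os.O_RDONLY)   # example path, not a real datafile
buf = os.pread(fd, 8192, 1048576)            # read 8K starting at offset 1MB
print(len(buf), "bytes; fd position still:", os.lseek(fd, 0, os.SEEK_CUR))
os.close(fd)
}}}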
http://appliedpredictivemodeling.com/
http://www.amazon.com/Applied-Predictive-Modeling-Max-Kuhn-ebook/dp/B00CU4Q4QS
http://www.ocwconsortium.org/courses/search/?search=predictive
{{{
set serveroutput off
select * from AGGR.ENS_CUST_PROFILE2_DT_GLT where rownum < 2;
select prev_sql_id p_sqlid from v$session where sid=sys_context('userenv','sid');
@dplan
@planx2 Y <sql_id>
}}}
{{{
$ perl atm_cpuload_st.pl 90
It will spin up enough copies of the primes script until CPU usage reaches about 90%.
}}}
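A much cruder Python sketch of the same idea (no feedback loop like the perl driver; it just starts roughly target%-of-cores busy workers):
{{{
import multiprocessing
import os

# each busy worker burns ~100% of one core; scale the count for a target load
def burn():
    while True:
        pass

if __name__ == '__main__':
    target_pct = 90
    n = max(1, (os.cpu_count() or 1) * target_pct // 100)
    workers = [multiprocessing.Process(target=burn) for _ in range(n)]
    for w in workers:
        w.start()
}}}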
http://www.evernote.com/shard/s48/sh/72210ef3-6088-4418-8fb7-ee3108bce87f/24012ec930d0db86c0fed7dc94d745c4
grep and printf are cool :)
{{{
sed -i 1i"-- Purpose: " *.sql <-- to add "Purpose" tag on all SQL files
grep -i Purpose: *.sql *.sh | awk -F: '{ printf("%20s %20s \n", $1, $3) }' <-- to search for files with "Purpose" tag
grep -i Purpose: *.sql *.sh | awk -F: '{ printf("%20s %20s \n", $1, $3) }' > scripts_$(date +%Y%m%d%H%M).txt <-- generate baseline
grep -i Purpose: *.sql *.sh | awk -F: '{ print "<a href=http://blogtest.enkitec.com/scripts/" $1 " target=_blank>"$1 "</a>", "-----" $3 }' <-- generate html
diff -y scripts_20110529.txt scripts_201105291154.txt | egrep -i ">|<|\|" <--show differences
find -name "*.sql" -name "*.sh" -mtime +1 -exec ls -l {} \; <-- find new files
}}}
''check files that have no comments''
{{{
grep -i Purpose: *.sql *.sh | awk -F: '{ printf("%20s %20s \n", $1, $3) }' > scripts_$(date +%Y%m%d%H%M).txt
file=`ls -ltr | awk '{print $9}'`
filecomment=`cat scripts_*txt | awk '{ print $1}' | sort | uniq`
for i in $file;
do
ls -ltr $filecomment | grep $i ;
if [ $? = 0 ]; then
echo $i >> script_with_comments.txt
else
echo $i >> script_no_comments.txt
fi
done
}}}
''compare files''
{{{
rm script_with_comments.txt
rm script_no_comments.txt
file=`cat scripts_201105291412.txt | awk '{print $1}'`
filecomment=`cat with_comments.txt | awk '{ print $1}'`
for i in $file;
do
ls -ltr $filecomment 2> error.txt | grep $i ;
if [ $? = 0 ]; then
echo $i >> script_with_comments.txt
else
echo $i >> script_no_comments.txt
fi
done
}}}
''format the index''
{{{
<h1>Exadata</h1>
<blockquote>
<a href=http://blogtest.enkitec.com/scripts-resources/whats_changed.sql target=_blank>whats_changed.sql</a> ----- Find statements that have significantly different elapsed time than before.
</blockquote>
}}}
{{{
Karl@Karl-LaptopDell ~
$ echo text1 > file1.txt
Karl@Karl-LaptopDell ~
$ echo text2 > file2.txt
Karl@Karl-LaptopDell ~
$
Karl@Karl-LaptopDell ~
$ grep -i text *.txt
file1.txt:text1
file2.txt:text2
Karl@Karl-LaptopDell ~
$ grep -i text *.txt | awk -F: '{print $1}'
file1.txt
file2.txt
Karl@Karl-LaptopDell ~
$ grep -i text *.txt | awk '{print $1}'
file1.txt:text1
file2.txt:text2
Karl@Karl-LaptopDell ~
$ grep -i text *.txt | awk -F: '{print $2}'
text1
text2
Karl@Karl-LaptopDell ~
$ grep -i text *.txt | awk -F: '{print $1, $2}'
file1.txt text1
file2.txt text2
Karl@Karl-LaptopDell ~
$ grep -i text *.txt | awk -F: '{printf ("%10s %10s \n", $1, $2) }'
file1.txt text1
file2.txt text2
Karl@Karl-LaptopDell ~
$ grep -i text *.txt | awk -F: '{printf ("%10s %20s \n", $1, $2) }'
file1.txt text1
file2.txt text2
Karl@Karl-LaptopDell ~
$ grep -i text *.txt | awk -F: '{printf ("%10s %30s \n", $1, $2) }'
file1.txt text1
file2.txt text2
Karl@Karl-LaptopDell ~
$ grep -i text *.txt | awk -F: '{printf ("%10s %-30s \n", $1, $2) }'
file1.txt text1
file2.txt text2
Karl@Karl-LaptopDell ~
$ grep -i text *.txt | awk -F: '{printf ("%10s %-10s \n", $1, $2) }'
file1.txt text1
file2.txt text2
}}}
''printf on decimal''
{{{
http://stackoverflow.com/questions/7425030/how-can-i-limit-the-number-of-digits-displayed-by-printf-after-the-decimal-point
}}}
http://www.unix.com/shell-programming-scripting/43564-awk-printf-user-defined-variables.html
http://www.computing.net/answers/unix/align-fields-in-columns/5637.html
https://nixtricks.wordpress.com/tag/awk/
http://unstableme.blogspot.com/2008/11/print-individual-records-using-awk.html
https://jonathanlewis.wordpress.com/2010/12/03/ansi-argh/
<<<
The problem is this: Oracle doesn’t optimise ANSI SQL, it transforms it then optimises it. Transformation can change query blocks, and Tony’s hints apply to specific query blocks. After a little testing and checking I worked out what the SQL looked like AFTER transformation and BEFORE optimisation; and it’s this
<<<
http://www.mail-archive.com/ug-msosug@mail.opensolaris.org/msg00253.html <-- prstat -mL , lots of High LCK%
http://www.solarisinternals.com/wiki/index.php/Processes
http://andunix.net/blog/2010/memory_usage_solaris_container_zone
http://blog.tanelpoder.com/2008/06/15/advanced-oracle-troubleshooting-guide-part-6-understanding-oracle-execution-plans-with-os_explain/
{{{
for i in $(ls -ltr | grep pstack | awk '{print $9}')
do
sh os_explain $i >> sample.txt
done
}}}
{{{
## After os_explain you could process the file using below commands:
## karao@karl:~/Desktop$ cut -d " " -f4 sample.txt | sort -r | uniq -c
## karao@karl:~/Desktop$ cat sample2.txt | sort | uniq -c
}}}
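If you'd rather do the aggregation in Python instead of the cut/sort/uniq pipeline, here's a minimal sketch (assumes the sample.txt produced by the loop above):
{{{
from collections import Counter

# count how often each plan line shows up across the pstack samples,
# same idea as: cat sample.txt | sort | uniq -c | sort -rn
lines = [l.rstrip() for l in open("sample.txt") if l.strip()]
for line, n in Counter(lines).most_common(10):
    print(n, line)
}}}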
Lab128 has automated the pstack sampling, os_explain, & reporting. Good tool to know where the query was spending time http://goo.gl/fyH5x
{{{
#!/bin/bash
################################################################################
##
## File name: os_explain
## Purpose: Script to explain the currently active branch of SQL plan
## execution from Oracle server process stack trace
##
## Author: Tanel Poder
## Copyright: (c) http://www.tanelpoder.com
##
## Usage: 1) Take a stack trace of an Oracle process (either by using a
## a debugger, pstack or extract the stack from a corefile,
## save it to a file (pstack.txt for example)
## 2) run ./os_explain <stack_trace_file>
## For example: ./os_explain pstack.txt
##
##
##
## Alternatively you can pipe pstack output directly to os_explain
## STDIN:
##
## pstack <SPID> | ./os_explain
##
## After os_explain you could process the file using below commands:
## karao@karl:~/Desktop$ cut -d " " -f4 sample.txt | sort -r | uniq -c
## karao@karl:~/Desktop$ cat sample2.txt | sort | uniq -c
##
## Other: Get stack using pstack on Solaris and Linux, using procstack
## on AIX and by using pstack on recent versions of HP-UX.
## Older HP-UX versions may require a debugger for getting stack
## from OS. As an alternative (especially on Windows) you could
## use ORADEBUG DUMP ERRORSTACK for dumping stack of a process.
## Note that ORADEBUG's stack dump mechanism is not 100% safe
## on active processes
##
## Most of Oracle kernel function prefix translations are based
## on Metalink note 175982.1
##
################################################################################
# LIFO line reverser
f_lifo() {
#echo Entering f_lifo()
MAX_ARRAY=4095
i=$MAX_ARRAY
IFS=^
while read input ; do
array[i]=$input
((i=i-1))
done
i=1
for output in ${array[@]} ; do
if test "$1" = "-n" ; then
printf "%4d %s\n" $i $output
else
printf "%s\n" $output
fi
((i=i+1))
done
}
TRANSLATION_STRING='
/----------------- lwp# 2/,$d;
s/(.*//;
s/ [[0-9][a-f]]{16} /./;
s/qerae/AND-EQUAL: /g;
s/qerba/BITMAP INDEXAND : /g;
s/qerbc/BITMAP INDEX COMPACTION: /g;
s/qerbi/BITMAP INDEX CREATION: /g;
s/qerbm/MINUS: /g;
s/qerbo/BITMAP INDEX OR: /g;
s/qerbt/BITMAP CONVERT: /g;
s/qerbu/BITMAP INDEX UNLIMITED-OR: /g;
s/qerbx/BITMAP INDEX ACCESS: /g;
s/qercb/CONNECT BY: /g;
s/qercbi/SUPPORT FOR CONNECT BY: /g;
s/qerco/COUNT: /g;
s/qerdl/DELETE: /g;
s/qerep/EXPLOSION: /g;
s/qerff/FIFO BUFFER: /g;
s/qerfi/FIRST ROW: /g;
s/qerfl/FILTER DEFINITION: /g;
s/qerfu/UPDATE: /g;
s/qerfx/FIXED TABLE: /g;
s/qergi/GRANULE ITERATOR: /g;
s/qergr/GROUP BY ROLLUP: /g;
s/qergs/GROUP BY SORT: /g;
s/qerhc/HASH CLUSTERS: /g;
s/qerhj/HASH JOIN: /g;
s/qeril/IN-LIST: /g;
s/qerim/INDEX MAINTENANCE: /g;
s/qerix/INDEX: /g;
s/qerjot/NESTED LOOP JOIN: /g;
s/qerjo/NESTED LOOP OUTER: /g;
s/qerle/LINEAR EXECUTION IMPLEMENTATION: /g;
s/qerli/PARALLEL CREATE INDEX: /g;
s/qerlt/LOAD TABLE: /g;
s/qerns/GROUP BY NO SORT: /g;
s/qeroc/OBJECT COLLECTION ITERATOR: /g;
s/qeroi/EXTENSIBLE INDEXING QUERY COMPONENT: /g;
s/qerpa/PARTITION: /g;
s/qerpf/QUERY EXECUTION PREFETCH: /g;
s/qerpx/PARALLELIZER: /g;
s/qerrm/REMOTE: /g;
s/qerse/SET IMPLEMENTATION: /g;
s/qerso/SORT: /g;
s/qersq/SEQUENCE NUMBER: /g;
s/qerst/QUERY EXECUTION STATISTICS: /g;
s/qertb/TABLE ACCESS: /g;
s/qertq/TABLE QUEUE: /g;
s/qerua/UNION-ALL: /g;
s/qerup/UPDATE: /g;
s/qerus/UPSERT: /g;
s/qervw/VIEW: /g;
s/qerwn/WINDOW: /g;
s/qerxt/EXTERNAL TABLE FETCH : /g;
s/opifch2/SELECT FETCH: /g
s/qergh/HASH GROUP BY: /g
'
# main()
case `uname -s` in
Linux)
FUNCPOS=4
;;
SunOS)
FUNCPOS=2
;;
*)
FUNCPOS=2
;;
esac
if [ "$1X" == "-aX" ] ; then
FILTER="^\$|oracle|----------"
INDENT=" "
shift
else
FILTER="^\$|\?\?|oracle|----------"
TRANSLATION_STRING="/opifch /,\$d;/opiefn0/,\$d;/opiodr/,\$d;/opiexe/,\$d; $TRANSLATION_STRING"
INDENT=" "
fi
if [ $# -eq 0 ] ; then
sed -e "$TRANSLATION_STRING" \
| egrep -v "$FILTER" \
| sed 's/^ *//g;s/__PGOSF[0-9]*_//' \
| awk -F" " "{ for (f=$FUNCPOS; f <= NF; f++) { printf \"%s \", \$f }; printf \"\n\" }" \
| f_lifo \
| awk "
BEGIN{ option=\" \" } /rwsfcd/{ option=\"* \" } !/rwsfcd/{ pref=(pref \"$INDENT\") ;
print pref option \$0 ; option=\" \" }
"
else
for i in $* ; do
sed -e "$TRANSLATION_STRING" < $i \
| egrep -v "$FILTER" \
| sed 's/^ *//g;s/__PGOSF[0-9]*_//' \
| awk -F" " "{ for (f=$FUNCPOS; f <= NF; f++) { printf \"%s \", \$f }; printf \"\n\" }" \
| f_lifo \
| awk "
BEGIN{ option=\" \" } /rwsfcd/{ option=\"* \" } !/rwsfcd/{ pref=(pref \"$INDENT\") ;
print pref option \$0 ; option=\" \" }
"
done
fi
# that's all!
}}}
<<showtoc>>
! openweathermap
https://openweathermap.org/
<<<
https://openweathermap.org/guide
https://codereview.stackexchange.com/questions/131371/script-to-print-weather-report-from-openweathermap-api
https://stackoverflow.com/questions/31146021/save-app-data-in-weather-app
https://codeburst.io/build-a-simple-weather-app-with-node-js-in-just-16-lines-of-code-32261690901d
<<<
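A minimal sketch of calling the current-weather endpoint described in the guide above (stdlib only; the API key is a placeholder you get on signup):
{{{
import json
import urllib.request

API_KEY = "your-api-key"  # placeholder -- issued when you sign up
url = ("https://api.openweathermap.org/data/2.5/weather"
       "?q=London&units=metric&appid=" + API_KEY)
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read().decode("utf-8"))
print(data["name"], data["main"]["temp"], "C", data["weather"][0]["description"])
}}}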
tpt_public scripts (http://blog.tanelpoder.com/files/) is an excellent toolkit for performance and troubleshooting, and since Tanel comes up with great scripts at 500 miles per hour we'd like to keep up with the new stuff :)
This is a simple pull zip -> update -> upload -> email notification workflow.
Tanel will be putting his scripts on github soon, so this may not be applicable in the next few weeks. But if there's any other good .zip-based repo out there that you want to track, this guide may still be useful.
! how to (secretly) keep up with tpt_public scripts :)
check it here https://www.evernote.com/l/ADCv3utvm7hHZ7-RKGG_CmQgGYxdQD9Fnus
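For reference, a minimal Python sketch of the pull -> detect change -> notify idea (the zip URL is an assumption -- point it at whatever .zip repo you're tracking, and hang your unzip/upload/email steps off the changed branch):
{{{
import hashlib
import urllib.request

ZIP_URL = "http://blog.tanelpoder.com/files/scripts.zip"  # assumed URL, adjust
new = urllib.request.urlopen(ZIP_URL).read()
new_md5 = hashlib.md5(new).hexdigest()
try:
    old_md5 = open("scripts.zip.md5").read().strip()
except FileNotFoundError:
    old_md5 = ""
if new_md5 != old_md5:
    open("scripts.zip", "wb").write(new)
    open("scripts.zip.md5", "w").write(new_md5)
    print("tpt_public changed -- unzip/upload/email notification goes here")
}}}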
{{{
$ cat px.sql
/*
SELECT table_name
FROM dict
WHERE table_name LIKE 'V%PQ%'
OR table_name like 'V%PX%';
TABLE_NAME
------------------------------
V$PQ_SESSTAT
V$PQ_SYSSTAT
V$PQ_SLAVE
V$PQ_TQSTAT
V$PX_BUFFER_ADVICE
V$PX_SESSION
V$PX_SESSTAT
V$PX_PROCESS
V$PX_PROCESS_SYSSTAT
*/
col value format 9999999999
-- Script to monitor parallel queries Note 457857.1
-- shows if a slave is waiting and what event it is waiting for
-- This output shows 2 queries: one running with degree 4 and the other with degree 8. It also shows that all slaves are currently waiting.
col username for a12
col "QC SID" for A6
col "SID" for A6
col "QC/Slave" for A8
col "Req. DOP" for 9999
col "Actual DOP" for 9999
col "Group" for A6
col "Slaveset" for A8
col "Slave INST" for A9
col "QC INST" for A6
set pages 300 lines 300
col wait_event format a30
select
decode(px.qcinst_id,NULL,username,
' - '||lower(substr(pp.SERVER_NAME,
length(pp.SERVER_NAME)-4,4) ) )"Username",
decode(px.qcinst_id,NULL, 'QC', '(Slave)') "QC/Slave" ,
to_char( px.server_group) "Group",
to_char( px.server_set) "SlaveSet",
to_char(s.sid) "SID",
to_char(px.inst_id) "Slave INST",
decode(sw.state,'WAITING', 'WAIT', 'NOT WAIT' ) as STATE,
case sw.state WHEN 'WAITING' THEN substr(sw.event,1,30) ELSE NULL end as wait_event ,
decode(px.qcinst_id, NULL ,to_char(s.sid) ,px.qcsid) "QC SID",
to_char(px.qcinst_id) "QC INST",
px.req_degree "Req. DOP",
px.degree "Actual DOP",
s.sql_id "SQL_ID"
from gv$px_session px,
gv$session s ,
gv$px_process pp,
gv$session_wait sw
where px.sid=s.sid (+)
and px.serial#=s.serial#(+)
and px.inst_id = s.inst_id(+)
and px.sid = pp.sid (+)
and px.serial#=pp.serial#(+)
and sw.sid = s.sid
and sw.inst_id = s.inst_id
order by
decode(px.QCINST_ID, NULL, px.INST_ID, px.QCINST_ID),
px.QCSID,
decode(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP),
px.SERVER_SET,
px.INST_ID
/
-- shows, for long-running operations, what the slaves are doing
set pages 300 lines 300
col "Group" for 9999
col "Username" for a12
col "QC/Slave" for A8
col "Slaveset" for A8
col "Slave INST" for A9
col "QC SID" for A6
col "QC INST" for A6
col "operation_name" for A30
col "target" for A30
select
decode(px.qcinst_id,NULL,username,
' - '||lower(substr(pp.SERVER_NAME,
length(pp.SERVER_NAME)-4,4) ) )"Username",
decode(px.qcinst_id,NULL, 'QC', '(Slave)') "QC/Slave" ,
to_char( px.server_set) "SlaveSet",
to_char(px.inst_id) "Slave INST",
substr(opname,1,30) operation_name,
substr(target,1,30) target,
sofar,
totalwork,
round((sofar/NULLIF(nvl(totalwork,0),0))*100, 2) pct,
units,
start_time,
timestamp,
decode(px.qcinst_id, NULL ,to_char(s.sid) ,px.qcsid) "QC SID",
to_char(px.qcinst_id) "QC INST"
from gv$px_session px,
gv$px_process pp,
gv$session_longops s
where px.sid=s.sid
and px.serial#=s.serial#
and px.inst_id = s.inst_id
and px.sid = pp.sid (+)
and px.serial#=pp.serial#(+)
order by
decode(px.QCINST_ID, NULL, px.INST_ID, px.QCINST_ID),
px.QCSID,
decode(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP),
px.SERVER_SET,
px.INST_ID
/
/*
-- What event are the consumer slaves waiting on?
set linesize 150
col "Wait Event" format a30
select s.sql_id,
px.INST_ID "Inst",
px.SERVER_GROUP "Group",
px.SERVER_SET "Set",
px.DEGREE "Degree",
px.REQ_DEGREE "Req Degree",
w.event "Wait Event"
from GV$SESSION s, GV$PX_SESSION px, GV$PROCESS p, GV$SESSION_WAIT w
where s.sid (+) = px.sid and
s.inst_id (+) = px.inst_id and
s.sid = w.sid (+) and
s.inst_id = w.inst_id (+) and
s.paddr = p.addr (+) and
s.inst_id = p.inst_id (+)
ORDER BY decode(px.QCINST_ID, NULL, px.INST_ID, px.QCINST_ID),
px.QCSID,
decode(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP),
px.SERVER_SET,
px.INST_ID;
*/
-- shows, for the PX Deq events, the processes that are exchanging data
set pages 300 lines 300
col wait_event format a30
select
sw.SID as RCVSID,
decode(pp.server_name,
NULL, 'A QC',
pp.server_name) as RCVR,
sw.inst_id as RCVRINST,
case sw.state WHEN 'WAITING' THEN substr(sw.event,1,30) ELSE NULL end as wait_event ,
decode(bitand(p1, 65535),
65535, 'QC',
'P'||to_char(bitand(p1, 65535),'fm000')) as SNDR,
bitand(p1, 16711680) - 65535 as SNDRINST,
decode(bitand(p1, 65535),
65535, ps.qcsid,
(select
sid
from
gv$px_process
where
server_name = 'P'||to_char(bitand(sw.p1, 65535),'fm000') and
inst_id = bitand(sw.p1, 16711680) - 65535)
) as SNDRSID,
decode(sw.state,'WAITING', 'WAIT', 'NOT WAIT' ) as STATE
from
gv$session_wait sw,
gv$px_process pp,
gv$px_session ps
where
sw.sid = pp.sid (+) and
sw.inst_id = pp.inst_id (+) and
sw.sid = ps.sid (+) and
sw.inst_id = ps.inst_id (+) and
p1text = 'sleeptime/senderid' and
bitand(p1, 268435456) = 268435456
order by
decode(ps.QCINST_ID, NULL, ps.INST_ID, ps.QCINST_ID),
ps.QCSID,
decode(ps.SERVER_GROUP, NULL, 0, ps.SERVER_GROUP),
ps.SERVER_SET,
ps.INST_ID
/
SELECT dfo_number, tq_id, server_type, process, num_rows
FROM gV$PQ_TQSTAT ORDER BY dfo_number DESC, tq_id, server_type, process;
col SID format 999999
SELECT QCSID, SID, INST_ID "Inst", SERVER_GROUP "Group", SERVER_SET "Set",
DEGREE "Degree", REQ_DEGREE "Req Degree"
FROM GV$PX_SESSION ORDER BY QCSID, QCINST_ID, SERVER_GROUP, SERVER_SET;
SELECT * FROM gV$PX_PROCESS order by 1;
col statistic format a50
select * from gV$PX_PROCESS_SYSSTAT order by 1,2;
SELECT INST_ID, NAME, VALUE FROM GV$SYSSTAT
WHERE UPPER (NAME) LIKE '%PARALLEL OPERATIONS%'
OR UPPER (NAME) LIKE '%PARALLELIZED%' OR UPPER (NAME) LIKE '%PX%' order by 1,2;
SELECT px.SID "SID", p.PID, p.SPID "SPID", px.INST_ID "Inst",
px.SERVER_GROUP "Group", px.SERVER_SET "Set",
px.DEGREE "Degree", px.REQ_DEGREE "Req Degree", w.event "Wait Event"
FROM GV$SESSION s, GV$PX_SESSION px, GV$PROCESS p, GV$SESSION_WAIT w
WHERE s.sid (+) = px.sid AND s.inst_id (+) = px.inst_id AND
s.sid = w.sid (+) AND s.inst_id = w.inst_id (+) AND
s.paddr = p.addr (+) AND s.inst_id = p.inst_id (+)
ORDER BY DECODE(px.QCINST_ID, NULL, px.INST_ID, px.QCINST_ID), px.QCSID,
DECODE(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP), px.SERVER_SET, px.INST_ID;
}}}
extract wheel
https://blog.packagecloud.io/extract-python-egg-and-python-wheel/
{{{
Extract python wheel
A python wheel is a simple Zip file, so you can extract it using any program that reads Zip files:
$ unzip /path/to/file.whl
List files in python egg
$ unzip -l /path/to/file.egg
List files in python wheel
$ unzip -l /path/to/file.whl
}}}
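Same thing from Python's stdlib, since a wheel is just a zip (paths are examples):
{{{
import zipfile

# zipfile handles .whl and .egg the same way unzip does
with zipfile.ZipFile("/path/to/file.whl") as whl:
    whl.printdir()                  # list contents, like unzip -l
    whl.extractall("/tmp/whl_out")  # extract, like unzip
}}}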
https://chriswarrick.com/blog/2017/07/03/setting-up-a-python-development-environment/
http://holgerbrandl.github.io/r4intellij/
https://github.com/holgerbrandl/r4intellij
http://support.divio.com/local-development/setup/set-up-the-local-development-environment-with-pycharm-and-docker-for-macdocker-for-windows
https://developer.rackspace.com/blog/a-tutorial-on-application-development-using-vagrant-with-the-pycharm-ide/
<<showtoc>>
! virtualenv vs pyenv
https://stackoverflow.com/questions/29950300/what-is-the-relationship-between-virtualenv-and-pyenv
<<<
@@virtualenv@@ allows you to create local, independent python installations by cloning from existing ones
@@pyenv@@ allows you to install different versions of python simultaneously (either system-wide or just for the local user) and then choose which of the multitude of pythons to run at any given time (including those created by virtualenv or Anaconda)
<<<
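One quick way to see the "local, independent installation" part from inside Python itself (a sketch; sys.base_prefix exists on Python 3.3+):
{{{
import sys

# inside a virtualenv, sys.prefix points at the env while
# sys.base_prefix still points at the python it was cloned from
print("prefix:     ", sys.prefix)
print("base_prefix:", sys.base_prefix)
print("in a virtualenv?", sys.prefix != sys.base_prefix)
}}}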
! general workflow
!! list installed packages/modules
{{{
pip list
pip list | grep numpy
numpy 1.16.1
}}}
!! new install
{{{
brew update
brew install pyenv
echo 'eval "$(pyenv init -)"' >> ~/.bash_profile
source ~/.bash_profile
pyenv install 3.5.0
}}}
At least this is what I have in my bash_profile:
{{{
#export JAVA_HOME=/Library/Java/Home
# Set JAVA_HOME so rJava package can find it
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)/jre
# pyenv
export PATH="~/.pyenv/versions/3.5.0/bin:${PATH}"
if command -v pyenv 1>/dev/null 2>&1; then
eval "$(pyenv init -)"
fi
# oracle client
export PATH=~/instantclient_12_2:$PATH
export ORACLE_HOME=~/instantclient_12_2
export DYLD_LIBRARY_PATH=$ORACLE_HOME
export LD_LIBRARY_PATH=$ORACLE_HOME
export FORCE_RPATH=1
}}}
!! admin commands
{{{
# See all available versions of Python
pyenv install -l | grep -ow [0-9].[0-9].[0-9]
# See which versions of Python are installed
pyenv versions
# Set a specific version of Python as your local version
pyenv local 3.x.x
# Set Python version globally
pyenv global 3.x.x
# Double-check your version
python -V
# list existing virtualenvs
pyenv virtualenvs
}}}
!! install another version
{{{
pyenv install 3.7.0
}}}
{{{
# I had to do this to install 2.7.9
# from https://github.com/pyenv/pyenv/issues/530
CFLAGS="-I$(xcrun --show-sdk-path)/usr/include" pyenv install 2.7.9
}}}
!! ACTIVATE - pyenv-virtualenv to create/change/delete environment
{{{
# install
git clone https://github.com/yyuu/pyenv-pip-rehash.git $(pyenv root)/plugins/pyenv-pip-rehash
brew install pyenv-virtualenv
# create a virtualenv with pyenv
pyenv virtualenv 3.7.0 py370
# to change to the new environment
pyenv virtualenvs
pyenv activate py370
# to deactivate and uninstall
pyenv deactivate py370
pyenv uninstall py370
}}}
!! pyenv in pycharm
* use existing interpreter, and point to the new virtualenv (py370)
* also change the project home directory accordingly
[img(100%,100%)[ https://i.imgur.com/p22lgfd.png]]
!! pyenv rehash (do this every time pip install/uninstall is used)
{{{
pytest
======================================================== test session starts =========================================================
platform darwin -- Python 3.5.0, pytest-4.2.0, py-1.7.0, pluggy-0.8.1
collected 0 items
pyenv rehash
pytest
======================================================== test session starts =========================================================
platform darwin -- Python 3.7.0, pytest-4.3.0, py-1.8.0, pluggy-0.9.0
collected 0 items
}}}
!! pyenv migrate (migrate packages from one python version to another)
{{{
git clone git://github.com/pyenv/pyenv-pip-migrate.git $(pyenv root)/plugins/pyenv-pip-migrate
brew install pyenv-pip-migrate
pip freeze > requirements.txt.bak
To migrate installed packages from 2.7.4 to 2.7.5, use pyenv migrate.
$ pyenv migrate 2.7.4 2.7.5
$ pyenv global 2.7.5
$ pip freeze
distribute==0.6.43
nose==1.3.0
wsgiref==0.1.2
}}}
!! pip upgrade
{{{
# upgrade pip
pip install --upgrade pip
# upgrade all packages , do this on a current or migrated (pyenv migrate) environment
for i in $(pip list -o | awk 'NR > 2 {print $1}'); do sudo pip install -U $i; done
}}}
! pyenv
!! pyenv install
https://www.chrisjmendez.com/2017/08/03/installing-multiple-versions-of-python-on-your-mac-using-homebrew/
!! pyenv rehash
https://github.com/pyenv/pyenv-pip-rehash
https://stackoverflow.com/questions/29753592/pyenv-shim-not-created-when-installing-package-using-setup-py
!! pyenv virtualenv
https://github.com/pyenv/pyenv-virtualenv
!! pyenv migrate
https://github.com/pyenv/pyenv-pip-migrate
!! pip upgrade
https://stackoverflow.com/questions/47071256/how-to-update-upgrade-a-package-using-pip
{{{
import json
import subprocess as sbp

# list outdated packages as JSON (json.loads, not eval), then upgrade each one
out = sbp.run("pip3 list -o --format=json", shell=True, stdout=sbp.PIPE).stdout
pkgs = json.loads(out.decode('utf-8'))
for pkg in pkgs:
    sbp.run("pip3 install --upgrade " + pkg['name'], shell=True)
}}}
or
{{{
for i in $(pip list -o | awk 'NR > 2 {print $1}'); do sudo pip install -U $i; done
}}}
!! pip freeze - to use for install or uninstall py modules
https://pip.pypa.io/en/stable/reference/pip_freeze/
https://stackoverflow.com/questions/18966564/pip-freeze-vs-pip-list
{{{
Generate output suitable for a requirements file.
$ pip freeze
docutils==0.11
Jinja2==2.7.2
MarkupSafe==0.19
Pygments==1.6
Sphinx==1.2.2
Generate a requirements file and then install from it in another environment.
$ env1/bin/pip freeze > requirements.txt
$ env2/bin/pip install -r requirements.txt
# can also be used to uninstall
pip uninstall -y -r requirements.txt
}}}
!! pyenv uninstall / reinstall from requirements.txt and link with pycharm
{{{
# list installed python under pyenv
ls /Users/kaarao/.pyenv/versions
3.10.4
# create new env
pyenv virtualenv 3.10.4 py3104
# list env
pyenv virtualenvs
3.10.4/envs/py3104 (created from /Users/kaarao/.pyenv/versions/3.10.4)
py3104 (created from /Users/kaarao/.pyenv/versions/3.10.4)
# activate
pyenv activate py3104
# strip version info from requirements.txt
awk -F "==" '{ print $1 }' requirements.txt > requirements3.txt
# reinstall modules
pip install -r requirements3.txt
pip install --upgrade pip setuptools wheel <- you may also have to do this if the install fails
# when using pycharm make sure to point the interpreter to the exact version name /Users/kaarao/.pyenv/versions/3.10.4/bin/python3.10
# so any new modules will be recognized on both terminal and pycharm
}}}
!! check python installation directory
{{{
# cx_Oracle password file location should be on the python installation directory
python -c "import os, sys; print(os.path.dirname(sys.executable))"
# output from a pyenv activate py3104
/Users/kaarao/.pyenv/versions/py3104/bin
# without pyenv
/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS
}}}
!! pyenv examples
https://fijiaaron.wordpress.com/2015/06/18/using-pyenv-with-virtualenv-and-pip-cheat-sheet/
https://fgimian.github.io/blog/2014/04/20/better-python-version-and-environment-management-with-pyenv/
!! references
pip documentation https://pip.pypa.io/en/stable/quickstart/
https://github.com/pyenv
! pipenv
Python Tutorial: Pipenv - Easily Manage Packages and Virtual Environments https://www.youtube.com/watch?v=zDYL22QNiWk&t=6s
{{{
Versions managed w/ `pyenv`. Virtualenv managed w/ `pipenv` ala npm
}}}
https://hackernoon.com/reaching-python-development-nirvana-bb5692adf30c
https://medium.com/@ThisIsFlorianK/it-took-me-an-incredible-amount-of-time-but-eventually-i-ended-up-with-the-same-exact-setup-and-73065653e4af
https://github.com/TOOCS/python
https://www.reddit.com/r/Python/comments/70otjx/pipenv_looks_kinda_like_npm_for_python/
! make
!! make for deploying python
https://www.google.com/search?q=makefile+for+python+script&oq=makefile+for+py&aqs=chrome.2.0j69i57j0l4.4443j0j4&sourceid=chrome&ie=UTF-8
Installing the Python Connector
https://docs.snowflake.com/en/user-guide/python-connector-install.html
! current
!! mac
https://github.com/donnemartin/gitsome
see pyenv to install different versions of python https://www.chrisjmendez.com/2017/08/03/installing-multiple-versions-of-python-on-your-mac-using-homebrew/
{{{
xcode-select --install
pyenv install 3.5.0
pyenv versions
pyenv local 3.5.0
python --version
cd
cat .bash_profile
cat .bash_profile.bak
which python3
pyenv version
pyenv local 3.5.0
python
PATH="~/.pyenv/versions/2.7.10/bin:${PATH}"
which python
python
PATH="~/.pyenv/versions/3.5.0/bin:${PATH}"
python
pip3
pip3 install gitsome
gh configure
gh feed
gh feed karlarao -p
gh notifications
gh starred "oracle"
gh search-repos "created:>=2017-01-01 user:karlarao"
echo -e 'if command -v pyenv 1>/dev/null 2>&1; then\n eval "$(pyenv init -)"\nfi' >> ~/.bash_profile
cat .bash_profile
then fix the matplotlibrc
https://stackoverflow.com/questions/21784641/installation-issue-with-matplotlib-python
https://markhneedham.com/blog/2018/05/04/python-runtime-error-osx-matplotlib-not-installed-as-framework-mac/
{{{
cat ~/.matplotlib/matplotlibrc
vi ~/.matplotlib/matplotlibrc
cat ~/.matplotlib/matplotlibrc
rm ~/.matplotlib/matplotlibrc
# create this file with the TkAgg backend
cat ~/.matplotlib/matplotlibrc
backend: TkAgg
}}}
!! windows
https://winpython.github.io
! previous
http://python.org/download/releases/3.1.2/
''on linux''
http://love-python.blogspot.com/2008/03/install-idle-in-linux.html
http://www.velocityreviews.com/forums/t359686-configuring-idle-on-linux.html
''on windows''
http://www.stuartellis.eu/articles/python-development-windows/
http://www.rose-hulman.edu/Class/csse/resources/Eclipse/eclipse-python-configuration.htm
1) install setuptools
{{{
http://www.python.org/download/releases/3.3.2/#id4
https://pypi.python.org/pypi/distribute <-- Distribute is a deprecated fork of the Setuptools project.
http://pythonhosted.org/distribute/
https://bitbucket.org/pypa/setuptools
https://pypi.python.org/pypi/setuptools/1.1.6#windows
https://pythonhosted.org/setuptools/easy_install.html
}}}
2) setup venv http://docs.python.org/dev/library/venv.html
3) download http://www.eclipse.org/
''on mac''
http://docs.python-guide.org/en/latest/starting/install/osx/
http://jakevdp.github.io/blog/2013/02/02/setting-up-a-mac-for-python-development/
http://stackoverflow.com/questions/4934806/how-can-i-find-scripts-directory-with-python
http://stackoverflow.com/questions/3718657/how-to-properly-determine-current-script-directory-in-python
http://stackoverflow.com/questions/5137497/find-current-directory-and-files-directory
http://code.activestate.com/recipes/474083-get-the-path-of-the-currently-executing-python-scr/
https://www.google.com/search?q=python+name+%27__file__%27+is+not+defined&oq=python+name+%27__file__%27+is+not+defined&aqs=chrome..69i57j0l5.2395j0j1&sourceid=chrome&ie=UTF-8
http://stackoverflow.com/questions/2632199/how-do-i-get-the-path-of-the-current-executed-file-in-python
http://www.blog.pythonlibrary.org/2013/10/29/python-101-how-to-find-the-path-of-a-running-script/
http://stackoverflow.com/search?q=%5Bpython%5D+pycharm+__file__+
http://stackoverflow.com/questions/32677911/python-having-difficulty-obtaining-current-path-of-script
http://stackoverflow.com/questions/51520/how-to-get-an-absolute-file-path-in-python/51523#51523
http://stackoverflow.com/questions/7783308/os-path-dirname-file-returns-empty
http://stackoverflow.com/questions/32677911/python-having-difficulty-obtaining-current-path-of-script
https://www.google.com/search?q=python+vs+pl%2Fsql&oq=python+vs+pl%2Fsql&aqs=chrome..69i57.3588j0j1&sourceid=chrome&ie=UTF-8
http://www.performatune.com/en/plsql-vs-java-vs-c-vs-python-for-cpu-intensive-tasks-architectural-decision/
http://tahitiviews.blogspot.com/2007/04/first-thoughts-about-python-vs-plsql.html
Introduction to Python for PL/SQL Developers [Part 1] https://www.youtube.com/results?search_query=intro+to+pl%2Fsql+for+python
part 1 https://www.youtube.com/watch?v=WoAVY7LQbt4&t=3997s
@@cx_Oracle is now python-oracledb@@
! Upgrade
cx_Oracle 8.3 (latest as of 06/2022) to python-oracledb
https://python-oracledb.readthedocs.io/en/latest/user_guide/appendix_c.html#upgrading83
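A minimal sketch of the first migration step, assuming an existing cx_Oracle script: the upgrade guide describes python-oracledb as largely a drop-in upgrade, so aliasing the import is usually the starting point.
{{{
# minimal sketch: run existing cx_Oracle code against python-oracledb
import oracledb as cx_Oracle

# the new driver defaults to thin mode (no Oracle Client libraries needed);
# only enable thick mode if a thick-mode-only feature is required
# cx_Oracle.init_oracle_client()
}}}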
! Installing python-oracledb
https://python-oracledb.readthedocs.io/en/latest/user_guide/installation.html
! Deprecated functions
https://python-oracledb.readthedocs.io/en/latest/api_manual/deprecations.html
! Install cx_Oracle
https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html#quick-start-cx-oracle-installation
! cx_Oracle documentation
https://cx-oracle.readthedocs.io/en/latest/
https://oracle.github.io/python-cx_Oracle/
! cx_Oracle tutorial
https://oracle.github.io/python-cx_Oracle/samples/tutorial/Python-and-Oracle-Database-Scripting-for-the-Future.html
! python-oracledb tutorial
https://oracle.github.io/python-oracledb/samples/tutorial/Python-and-Oracle-Database-The-New-Wave-of-Scripting.html
! cx_Oracle example code
https://github.com/oracle/python-cx_Oracle/tree/main/samples
! python-oracledb example code
https://github.com/oracle/python-oracledb/tree/main/samples
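For quick reference, a minimal hedged connection sketch in python-oracledb; the credentials and DSN below are placeholders, not real values.
{{{
import oracledb

# thin mode by default: no Oracle Client install needed
with oracledb.connect(user="scott", password="tiger",
                      dsn="dbhost.example.com/orclpdb1") as connection:
    with connection.cursor() as cursor:
        for row in cursor.execute("select sysdate from dual"):
            print(row)
}}}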
{{{
# from http://www.annasyme.com/docs/python_structure.html

Create file
Create file (script.py) in Atom.
Open terminal at side to run script.
Script:

#!/usr/bin/env python
print("this is script.py")

Make it executable
In the terminal, make it executable: chmod +x script.py
Run: ./script.py

Main function
Add def main()
Add the "if __name__" bit at the end

#!/usr/bin/env python
print("this is script.py")

def main():
    print("this is the main function")

if __name__ == '__main__':
    main()

Run: ./script.py

Read in the R1.fastq read file
At the top of the script, import the modules argparse and sys
Define a function (any name, here it's called parse_args())
To read in R1.fastq use add_argument: parser.add_argument("R1_file")
Other input args can be added here.

#!/usr/bin/env python
import argparse
import sys

print("this is script.py")

def parse_args():
    parser = argparse.ArgumentParser(description="a script to do stuff")
    parser.add_argument("R1_file")
    args = parser.parse_args()
    return args

def main():
    print("this is the main function")

if __name__ == '__main__':
    main()

Run, but provide the file: ./script.py R1.fq

Read the inputs in the main function
Tell the main function to run the parse_args function
and save the output as an object called inputs (or anything)
Access the individual arg with inputs.R1_file

#!/usr/bin/env python
import argparse
import sys

print("this is script.py")

def parse_args():
    parser = argparse.ArgumentParser(description="a script to do stuff")
    parser.add_argument("R1_file")
    args = parser.parse_args()
    return args

def main():
    print("this is the main function")
    inputs = parse_args()
    print(inputs.R1_file)

if __name__ == '__main__':
    main()

Run: ./script.py R1.fq

Make a separate function to process files
Add a line to the main function telling it to run another function called process_files()
Also tell it to use the inputs (or whatever they were called) as an argument to this function call
Make the new function: process_files(things) - it doesn't have to say inputs here, it can say anything (here: things)
Access the R1_file arg in this function, but use "things" not "inputs"

#!/usr/bin/env python
import argparse
import sys

print("this is script.py")

def parse_args():
    parser = argparse.ArgumentParser(description="a script to do stuff")
    parser.add_argument("R1_file")
    args = parser.parse_args()
    return args

def process_files(things):
    print("this is the process files function")
    print(things.R1_file)

def main():
    print("this is the main function")
    inputs = parse_args()
    print(inputs.R1_file)
    process_files(inputs)

if __name__ == '__main__':
    main()

Run a tool to get stats on your input R1 file
Use a tool to get stats on this file
e.g. if the command is seqkit stats R1.fq > R1.stats
Break this command into a list of strings, save it as "cmd"
Run the tool with subprocess.run(cmd)
Add import subprocess at the top of the script

#!/usr/bin/env python
import argparse
import sys
import subprocess

print("this is script.py")

def parse_args():
    parser = argparse.ArgumentParser(description="a script to do stuff")
    parser.add_argument("R1_file")
    args = parser.parse_args()
    return args

def process_files(things):
    print("this is the process files function")
    print(things.R1_file)
    cmd = ["seqkit", "stats", things.R1_file]
    subprocess.run(cmd)

def main():
    print("this is the main function")
    inputs = parse_args()
    print(inputs.R1_file)
    process_files(inputs)

if __name__ == '__main__':
    main()

Put this stats step into its own function
Define a new function called stats that takes in a file - stats(file)
Run this from within the process_files function
Note that in here, we give stats the arg called "things.R1_file"
Then, stats considers this to be "file"

#!/usr/bin/env python
import argparse
import sys
import subprocess

print("this is script.py")

def parse_args():
    parser = argparse.ArgumentParser(description="a script to do stuff")
    parser.add_argument("R1_file")
    args = parser.parse_args()
    return args

def stats(file):
    cmd = ["seqkit", "stats", file]
    subprocess.run(cmd)

def process_files(things):
    print("this is the process files function")
    print(things.R1_file)
    # cmd = ["seqkit", "stats", things.R1_file]
    # subprocess.run(cmd)
    stats(things.R1_file)

def main():
    print("this is the main function")
    inputs = parse_args()
    print(inputs.R1_file)
    process_files(inputs)

if __name__ == '__main__':
    main()

Add in an optional input
Add number of threads as a new optional arg - note the two dashes
Without these two dashes, it would be a required, positional arg
The default type is string, so specify that type=int
Can add in a default value
Check you can print the threads arg from the main function

#!/usr/bin/env python
import argparse
import sys
import subprocess

print("this is script.py")

def parse_args():
    parser = argparse.ArgumentParser(description="a script to do stuff")
    parser.add_argument("R1_file")
    parser.add_argument("--threads", type=int, default=16)
    args = parser.parse_args()
    return args

def stats(file):
    cmd = ["seqkit", "stats", file]
    subprocess.run(cmd)

def process_files(things):
    print("this is the process files function")
    print(things.R1_file)
    # cmd = ["seqkit", "stats", things.R1_file]
    # subprocess.run(cmd)
    stats(things.R1_file)

def main():
    print("this is the main function")
    inputs = parse_args()
    print(inputs.R1_file)
    print(inputs.threads)
    process_files(inputs)

if __name__ == '__main__':
    main()

Run, and specify threads: ./script.py R1.fq --threads 32

Print all the inputs to the script
We want to print the supplied args, and any default args (if not supplied)
Put this in the parse_args function

#!/usr/bin/env python
import argparse
import sys
import subprocess

print("this is script.py")

def parse_args():
    parser = argparse.ArgumentParser(description="a script to do stuff")
    parser.add_argument("R1_file")
    parser.add_argument("--threads", type=int, default=16)
    args = parser.parse_args()
    print("the inputs are:")
    for arg in vars(args):
        print("{} is {}".format(arg, getattr(args, arg)))
    return args

def stats(file):
    cmd = ["seqkit", "stats", file]
    subprocess.run(cmd)

def process_files(things):
    print("this is the process files function")
    print(things.R1_file)
    # cmd = ["seqkit", "stats", things.R1_file]
    # subprocess.run(cmd)
    stats(things.R1_file)

def main():
    print("this is the main function")
    inputs = parse_args()
    print(inputs.R1_file)
    print(inputs.threads)
    process_files(inputs)

if __name__ == '__main__':
    main()

Run with and without specifying the --threads
}}}
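One optional refinement to the tutorial above (my own sketch, not part of the original): subprocess.run(cmd) ignores a non-zero exit status, so a failed seqkit run would go unnoticed. A hedged hardening of the stats() function:
{{{
import subprocess

def stats(file):
    # check=True raises CalledProcessError if seqkit exits non-zero;
    # capture_output/text let the script log or parse the tool's stdout
    cmd = ["seqkit", "stats", file]
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    print(result.stdout)
}}}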
http://www.oracle-developer.net/display.php?id=316
http://jonathanlewis.wordpress.com/2007/06/25/qb_name/
https://iggyfernandez.wordpress.com/2010/07/26/sql-101-which-query-is-better/
''Example1''
{{{
SELECT /*+ QB_NAME(outer) */ NVL(NVL(MAX(D3.c1) * -1 , 0) * -1 , 0) AS c1,
NVL(NVL(MAX(D3.c2) * -1 , 0) * -1 , 0) AS c2,
NVL(NVL(MAX(D3.c3) * -1 , 0) , 0) AS c3,
NVL(NVL(MAX(D3.c4) * -1 , 0) , 0) AS c4,
NVL(NVL(MAX(D3.c5) * -1 , 0) , 0) AS c5,
NVL(NVL(MAX(D3.c6) * -1 , 0) * -1 , 0) AS c6,
NVL(NVL(MAX(D3.c7) * -1 , 0) , 0) AS c7
FROM
(SELECT /*+ QB_NAME(inner) */
SUM(
CASE
WHEN T125129.HIER6_CODE = 'ALLOCATIONS6'
THEN T295183.BUDGET_LOC_AMT
END ) AS c1,
SUM(
CASE
WHEN T125129.HIER7_CODE = 'COST_OF_SERVICES'
THEN T295183.BUDGET_LOC_AMT
END ) AS c2,
SUM(
CASE
WHEN T125129.HIER4_CODE = 'EBITDA'
THEN T295183.BUDGET_LOC_AMT
END ) AS c3,
SUM(
CASE
WHEN T125129.HIER6_CODE IN ('GROSS_PROFIT6', 'JOINT_VENTURE_OPER6', 'OPEX_BEFORE_ALLOC', 'OTHER_INCOME6')
THEN T295183.BUDGET_LOC_AMT
END ) AS c4,
SUM(
CASE
WHEN T125129.HIER5_CODE = 'GROSS_PROFIT5'
THEN T295183.BUDGET_LOC_AMT
END ) AS c5,
SUM(
CASE
WHEN T125129.HIER6_CODE = 'OPEX_BEFORE_ALLOC'
THEN T295183.BUDGET_LOC_AMT
END ) AS c6,
SUM(
CASE
WHEN T125129.HIER7_CODE = 'GROSS_REVENUE'
THEN T295183.BUDGET_LOC_AMT
END ) AS c7
FROM
(SELECT /*+ QB_NAME(distinct_W_EXCH_RATE_G) */ DISTINCT FROM_CURCY_CD CURRENCY FROM W_EXCH_RATE_G
) T347319, -- USE_MERGE
W_MCAL_DAY_D T156337
/* Dim_W_MCAL_DAY_D_Fiscal_Day */
,
W_HIERARCHY_D T148616
/* Dim_W_HIERARCHY_D_Segment2 */
,
W_GL_SEGMENT_D T148908
/* Dim_W_GL_SEGMENT_D_Segment2 */
,
W_HIERARCHY_D T148543 -- USE_MERGE.. it has the 'Northeast Market Area'
/* Dim_W_HIERARCHY_D_Segment3 */
,
W_GL_SEGMENT_D T148937
/* Dim_W_GL_SEGMENT_D_Segment3 */
,
W_HIERARCHY_D T125129 -- USE_MERGE
/* Dim_W_HIERARCHY_D_Segment1 */
,
W_GL_SEGMENT_D T149255
/* Dim_W_GL_SEGMENT_D_Segment1 */
,
W_INT_ORG_D T111515
/* Dim_W_INT_ORG_D_Company */
,
W_GL_ACCOUNT_D T91397
/* Dim_W_GL_ACCOUNT_D */
,
W_BUDGET_D T146170
/* Dim_W_BUDGET_D */
,
W_LEDGER_D T176007
/* Dim_W_LEDGER_D_Budget */
,
W_ACCT_BUDGET_F T295183
/* Fact_W_ACCT_BUDGET_F_PSFTSTDBUDGET */
WHERE ( T148616.HIER_CODE = T148908.SEGMENT_LOV_ID
AND T148616.HIER20_CODE = T148908.SEGMENT_VAL_CODE
AND T91397.ACCOUNT_SEG2_CODE = T148908.SEGMENT_VAL_CODE
AND T91397.ACCOUNT_SEG2_ATTRIB = T148908.SEGMENT_LOV_ID
AND T148543.HIER_CODE = T148937.SEGMENT_LOV_ID
AND T148543.HIER20_CODE = T148937.SEGMENT_VAL_CODE
AND T91397.ACCOUNT_SEG3_CODE = T148937.SEGMENT_VAL_CODE
AND T91397.ACCOUNT_SEG3_ATTRIB = T148937.SEGMENT_LOV_ID
AND T125129.HIER_CODE = T149255.SEGMENT_LOV_ID
AND T125129.HIER20_CODE = T149255.SEGMENT_VAL_CODE
AND T91397.ACCOUNT_SEG1_CODE = T149255.SEGMENT_VAL_CODE
AND T91397.ACCOUNT_SEG1_ATTRIB = T149255.SEGMENT_LOV_ID
AND T111515.ROW_WID = T295183.COMPANY_ORG_WID
AND T91397.ROW_WID = T295183.GL_ACCOUNT_WID
AND T146170.ROW_WID = T295183.BUDGET_WID
AND T176007.ROW_WID = T295183.BUDGET_LEDGER_WID
AND T156337.ADJUSTMENT_PERIOD_FLG = 'N'
AND T156337.MCAL_CAL_WID = T295183.MCAL_CAL_WID
AND T156337.MCAL_DAY_DT_WID = T295183.PERIOD_END_DT_WID
AND T111515.COMPANY_FLG = 'Y'
AND T146170.APPLICATION_SOURCE = 'GL_PSFT_STD'
AND T148908.SEGMENT_LOV_ID = 'Department~SHARE'
AND T148543.HIER6_NAME = 'Northeast Market Area'
AND T148616.HIER3_NAME = 'Americas LOBs'
AND T148616.HIER4_NAME = 'Outsourcing'
AND T148616.HIER5_NAME = 'GCS'
AND T148937.SEGMENT_LOV_ID = 'Operating Unit~SHARE'
AND T149255.SEGMENT_LOV_ID = 'Account~SHARE'
AND T156337.MCAL_PERIOD_NAME = 'January'
AND T156337.MCAL_PER_NAME_YEAR = '2012'
AND T176007.LEDGER_NAME = 'Budget'
AND T295183.LOC_CURR_CODE = T347319.CURRENCY
AND T295183.LOC_CURR_CODE = 'USD'
AND T347319.CURRENCY = 'USD'
AND T111515.ORG_NUM IS NOT NULL
AND T148543.HIER9_CODE IS NOT NULL
AND (T148616.HIER8_NAME IN ('PAS Dedicated Clients', 'PJM Dedicated Clients', 'TM Dedicated Clients'))
AND (T148616.HIER6_NAME IN ('Facilities Management', 'GCS Operations', 'Project Management', 'Transaction Management'))
AND (T125129.HIER4_CODE IN ('EBITDA')
OR T125129.HIER5_CODE IN ('GROSS_PROFIT5')
OR T125129.HIER6_CODE IN ('ALLOCATIONS6', 'GROSS_PROFIT6', 'JOINT_VENTURE_OPER6', 'OPEX_BEFORE_ALLOC', 'OTHER_INCOME6')
OR T125129.HIER7_CODE IN ('COST_OF_SERVICES', 'GROSS_REVENUE'))
AND T148616.HIER10_CODE IS NOT NULL )
) D3
}}}
''Example2''
{{{
SELECT /* with qb */ a.business_unit,
a.cust_id,
a.item,
DECODE(a.invoice_dt, '',a.due_dt,a.invoice_dt),
DECODE(a.invoice_dt,'',TO_CHAR(a.due_dt,'MM/YY'),TO_CHAR(a.invoice_dt,'MM/YY')),
a.due_dt,
a.orig_item_amt,
SUM(b.entry_amt),
SUM(b.entry_amt_base),
a.currency_cd,
a.ar_specialist,
a.accounting_dt,
a.item_line,
MAX(a.entry_type),
MAX(a.document),
MAX(a.bill_of_lading)
FROM ps_item a,
ps_item_activity b
WHERE a.business_unit = b.business_unit
AND a.business_unit = '10ASV'
AND a.cust_id = b.cust_id
AND a.item = b.item
AND a.item_line = b.item_line
AND ((a.item_status = 'O')
OR (a.item_status = 'C'
AND a.post_dt > to_date('14-MAY-2012','DD-MON-YYYY') ))
AND b.ACCOUNTING_DT <= to_date('14-MAY-2012','DD-MON-YYYY')
AND EXISTS
(SELECT /*+ full (c) leading(ps_cb_ar_r003_pro@inline) */1
FROM ps_item_dst c,
ps_psa_orgprj_defn d
WHERE a.business_unit = c.business_unit
AND a.item = c.item
AND a.item_line = c.item_line
AND a.business_unit = d.business_unit(+)
AND c.business_unit = d.business_unit(+)
AND c.project_id = d.project_id(+)
AND c.project_id IN
(SELECT /*+ qb_name(inline) */ project_id
FROM ps_cb_ar_r003_pro
WHERE oprid ='10098845'
AND run_cntl_id = 'IAR'
)
AND d.project_id IN
(SELECT project_id
FROM ps_cb_ar_r003_pro
WHERE oprid ='10098845'
AND run_cntl_id = 'IAR'
)
)
GROUP BY a.business_unit,
a.cust_id,
a.item,
DECODE(a.invoice_dt, '',a.due_dt,a.invoice_dt),
DECODE(a.invoice_dt,'',TO_CHAR(a.due_dt,'MM/YY'),TO_CHAR(a.invoice_dt,'MM/YY')),
a.due_dt,
a.orig_item_amt,
a.currency_cd,
a.ar_specialist,
a.accounting_dt,
a.item_line
HAVING SUM(b.entry_amt) <>0
OR SUM(b.entry_amt_base)<>0
-- ORDER BY :2
}}}
https://community.hortonworks.com/articles/37765/backing-up-the-ambari-database-with-postgres.html
https://discuss.pivotal.io/hc/en-us/articles/217649658-How-to-connect-to-Ambari-s-PostgreSQL-database-
https://analyticsanvil.wordpress.com/2016/08/21/useful-queries-for-the-hive-metastore/
https://stackoverflow.com/questions/16738516/hive-how-to-see-the-table-created-in-metastore
https://community.cloudera.com/t5/Batch-SQL-Apache-Hive/Accessing-Hive-Metadata/td-p/26836
http://milek.blogspot.com/2008/10/oracle-listener-tcpip-and-performance.html
http://download.oracle.com/docs/cd/B14117_01/network.101/b10775/plan.htm#i453005
Check the scatter plot of the regression here http://www.facebook.com/photo.php?pid=5053273&l=39a2900738&id=552113028
"That is the Linear Regression of Average Active Sessions vs. CPU% utilization. Notice the strong R2 of .97, which on the photo shows a strong correlation between the two statistics. But when CPU starts to queue at >80%, the AAS also shoots up!
This came from an 8-core HS21 blade server on a DS4800 SAN, and a 2-node 11gR1 RAC."
***At Hotsos 2011 Neil Gunther told me a story about the Hubble Bubble, which is about applying linear regression to space discoveries http://perfdynamics.blogspot.com/2010/06/linear-modeling-in-r-and-hubble-bubble.html
The r2toolkit can be downloaded here http://karlarao.wordpress.com/scripts-resources/
! RAC NODE1
[img[picturename| http://lh4.ggpht.com/_F2x5WXOJ6Q8/TIdYy8uFCxI/AAAAAAAAA10/gWQHhLmtA20/r2-aas-cpupct.png]]
! RAC NODE2
[img[picturename| http://lh4.ggpht.com/_F2x5WXOJ6Q8/TQGBFdbleLI/AAAAAAAAA-A/YPfGbuzX6dk/r2-aas-cpupct-2.png]]
For the r2 scripts check out the following link http://karlarao.wordpress.com/scripts-resources/ and look for the file r2toolkit.zip
I was also able to present this idea at the OOW unconference 2010 and OCW 2010 http://karlarao.wordpress.com/2010/09/24/oracle-closed-world-and-unconference-presentations/ and will also present this at Hotsos 2011 in March http://www.hotsos.com/sym11/sym_speakers.html
''1) The Y (dependent) and X (independent) values''
{{{
SQL> select count(*) from r2_y_value;
COUNT(*)
----------
112
SQL> select count(*) from r2_x_value;
COUNT(*)
----------
54208
}}}
''2) Populate the Y data''
{{{
SQL> insert into r2_regression_data (snap_id, tm, x_axis, y_axis)
2 select snap_id, tm, null, diff
3 from r2_y_value;
112 rows created.
SQL> commit;
Commit complete.
}}}
''3) Analyze the R2 values''
{{{
SQL>
SQL>
SQL> -------------------------------------------------------------------------
SQL> -- ANALYZE r2 VALUES
SQL> -------------------------------------------------------------------------
SQL>
SQL> -- get r2 of each x axis independent value
SQL> -- truncate table r2_stat_name_top;
SQL> declare
cursor r2_stat_name is select stat_name from r2_stat_name;
r2_count number;
r2_r2 number;
stat_name varchar2(200);
begin
for table_scan in r2_stat_name loop
select
regr_count(y.y_axis,x.diff), regr_r2(y.y_axis,x.diff)
into r2_count, r2_r2
from
(
select snap_id, y_axis from r2_regression_data
) y,
(
select snap_id, diff from r2_x_value --<---- x value HERE!
where stat_name = table_scan.stat_name
) x
where x.snap_id = y.snap_id;
insert into r2_stat_name_top values (r2_count, r2_r2, table_scan.stat_name);
commit;
end loop;
end;
/
PL/SQL procedure successfully completed.
}}}
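A rough Python parallel to this step, assuming the per-snapshot deltas of each candidate statistic have been pulled into numpy arrays aligned with the y series (the names below are mine, not the toolkit's):
{{{
import numpy as np

def rank_r2(y, candidates, min_count=30):
    # candidates: dict mapping stat_name -> numpy array aligned with y
    scores = {}
    for name, x in candidates.items():
        if len(x) <= min_count:        # mimic the regr_count > 30 filter
            continue
        r = np.corrcoef(x, y)[0, 1]    # Pearson correlation coefficient
        scores[name] = r * r           # r^2; REGR_R2 equals r squared here
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
}}}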
''4) Get the stat_name with the highest R2''
{{{
SQL>
SQL>
SQL> -- get top r2, choose above .90!
SQL> set lines 300
SQL> select * from
2 (select regr_count, round(regr_r2,2) regr_r2, stat_name
3 from r2_stat_name_top
4 where regr_r2 > 0
5 and regr_count > 30
6 and stat_name != (select distinct stat_name from r2_y_value)
7 order by 2 desc)
8 where rownum < 31;
REGR_COUNT REGR_R2 STAT_NAME
---------- ---------- --------------------------------------------------------
112 .97 CPU used by this session
112 .93 consistent gets
112 .93 session logical reads
112 .92 consistent gets from cache
112 .92 no work - consistent read gets
112 .92 CPU used when call started
112 .91 sorts (rows)
112 .91 consistent gets from cache (fastpath)
112 .90 workarea executions - optimal
112 .90 sorts (memory)
112 .89 consistent gets - examination
112 .89 buffer is not pinned count
112 .89 index fetch by key
112 .88 rows fetched via callback
112 .88 DB time
112 .87 table fetch by rowid
112 .87 index scans kdiixs1
112 .79 shared hash latch upgrades - no wait
112 .76 buffer is pinned count
112 .71 DBWR fusion writes
112 .67 gc cr block receive time
112 .67 cluster wait time
112 .67 gc cr blocks received
112 .66 consistent changes
112 .65 data blocks consistent reads - undo records applied
112 .65 redo write time
112 .64 calls to kcmgas
112 .64 physical write total IO requests
112 .63 gc cr blocks served
112 .62 redo wastage
30 rows selected.
}}}
''5) Populate the X data''
{{{
declare
cursor c2 is
select snap_id, diff from r2_x_value --<---- indicate specific x value (.90 above) from the top r2 HERE!
where stat_name = 'CPU used by this session'
;
begin
for table_scan in c2 loop
update r2_regression_data set x_axis = table_scan.diff
where snap_id = table_scan.snap_id;
commit;
end loop;
end;
/
}}}
''6) Regression Analysis code!''
{{{
-- regression analysis code - populate the residual and outlier data
declare
outlier_count number;
intercept number;
slope number;
stnd_dev number;
avg_res number;
cursor c1 is select
snap_id, tm, x_axis, y_axis, proj_y, residual, residual_sqr, stnd_residual
from r2_regression_data;
begin
update r2_regression_data set stnd_residual = 4;
select count(*)
into outlier_count
from r2_regression_data
where abs(stnd_residual) > 3;
while outlier_count >0 loop
select round(regr_intercept (y_axis, x_axis),8)
into intercept
from r2_regression_data;
select round(regr_slope (y_axis, x_axis),8)
into slope
from r2_regression_data;
for table_scan in c1 loop
update r2_regression_data set proj_y = slope * table_scan.x_axis + intercept
where snap_id = table_scan.snap_id;
update r2_regression_data set residual = proj_y - y_axis
where snap_id = table_scan.snap_id;
update r2_regression_data set residual_sqr = residual * residual
where snap_id = table_scan.snap_id;
end loop;
select round(avg(residual),8)
into avg_res
from r2_regression_data;
select round(stddev(residual),8)
into stnd_dev
from r2_regression_data;
for table_scan2 in c1 loop
update r2_regression_data set stnd_residual = (residual-avg_res)/stnd_dev where snap_id = table_scan2.snap_id;
end loop;
select count(*)
into outlier_count
from r2_regression_data where abs(stnd_residual) > 3;
if outlier_count >0 then
for table_scan3 in c1 loop
if abs(table_scan3.stnd_residual) > 3 then
insert into r2_outlier_data (snap_id, tm, x_axis, y_axis, proj_y, residual, residual_sqr, stnd_residual) values
(table_scan3.snap_id, table_scan3.tm, table_scan3.x_axis, table_scan3.y_axis, table_scan3.proj_y, table_scan3.residual, table_scan3.residual_sqr, table_scan3.stnd_residual);
delete from r2_regression_data where snap_id = table_scan3.snap_id;
end if;
end loop;
end if;
end loop;
commit;
end;
/
}}}
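For reference, a minimal numpy sketch of the same iterative outlier-removal loop that the PL/SQL above implements, assuming x and y are numpy arrays holding the X_AXIS and Y_AXIS columns (points with a standardized residual beyond 3 are dropped and the line refit, exactly as in step 6):
{{{
import numpy as np

def fit_without_outliers(x, y, cutoff=3.0):
    # boolean mask of points still in the regression set
    keep = np.ones(len(x), dtype=bool)
    while True:
        slope, intercept = np.polyfit(x[keep], y[keep], 1)
        proj_y = slope * x[keep] + intercept
        residual = proj_y - y[keep]    # same sign convention as step 6
        stnd = (residual - residual.mean()) / residual.std(ddof=1)
        outliers = np.abs(stnd) > cutoff
        if not outliers.any():
            return slope, intercept, keep   # ~keep marks the outlier rows
        # move this pass's outliers out of the regression set and refit
        idx = np.flatnonzero(keep)
        keep[idx[outliers]] = False
}}}
Usage: slope, intercept, keep = fit_without_outliers(x, y); the outliers end up in x[~keep], y[~keep], matching r2_outlier_data.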
''7) Get the full r2 report & residual''
{{{
SQL> select snap_id, to_char(tm,'yy/mm/dd hh24:mi') tm, x_axis, y_axis, round(proj_y,2) proj_y, round(residual,2) residual, round(residual_sqr,2) residual_sqr, stnd_residual
2 from
3 (select * from r2_regression_data
4 union all
5 select * from r2_outlier_data)
6 order by residual_sqr desc;
SNAP_ID TM X_AXIS Y_AXIS PROJ_Y RESIDUAL RESIDUAL_SQR STND_RESIDUAL
---------- -------------- ---------- ---------- ---------- ---------- ------------ -------------
8631 10/09/03 10:00 2386527 10.1381514 8 -2.14 4.56 -5.5071925
8655 10/09/04 10:00 2386368 9.6647556 8 -1.66 2.77 -4.2911863
8727 10/09/07 10:00 1512471 6.17427596 4.98 -1.2 1.43 -4.2286624
8583 10/09/01 10:00 1533624 6.16998122 5.05 -1.12 1.26 -3.9690933
8709 10/09/06 16:00 1277555 5.05832429 4.21 -.85 .72 -3.0116921
8654 10/09/04 09:00 2433062 8.63511064 7.94 -.69 .48 -3.1382862
8662 10/09/04 17:00 2551548 8.96291063 8.29 -.67 .45 -3.1757221
8659 10/09/04 14:00 2333061 8.18530792 7.56 -.63 .39 -3.1399135
8636 10/09/03 15:00 2315812 7.95748646 7.47 -.49 .24 -2.5535469
8608 10/09/02 11:00 1783079 6.24226378 5.77 -.48 .23 -2.4990136
8582 10/09/01 09:00 551600 2.28957403 1.82 -.46 .22 -2.4367774
8609 10/09/02 12:00 2103698 7.23740511 6.79 -.45 .2 -2.3384935
8565 10/08/31 16:00 693115 2.67050598 2.28 -.39 .15 -2.062453
8566 10/08/31 17:00 633888 2.4739455 2.09 -.39 .15 -2.0258405
8664 10/09/04 19:00 1890219 5.73770489 6.11 .37 .14 1.91175537
8562 10/08/31 13:00 489129 2.00285427 1.62 -.38 .14 -1.9849164
8712 10/09/06 19:00 1525835 4.57327212 4.94 .37 .14 1.90344801
8610 10/09/02 13:00 1792940 6.17764899 5.8 -.38 .14 -1.9984469
8584 10/09/01 11:00 866023 3.17126538 2.83 -.34 .12 -1.788949
8564 10/08/31 15:00 493105 1.95770374 1.64 -.32 .1 -1.6836825
8568 10/08/31 19:00 481399 1.90847337 1.6 -.31 .1 -1.6224133
8567 10/08/31 18:00 433245 1.75857694 1.45 -.31 .1 -1.6442555
8705 10/09/06 12:00 1501657 4.55864014 4.86 .31 .09 1.57689748
8607 10/09/02 10:00 1766166 6.00471057 5.71 -.29 .09 -1.5442486
8661 10/09/04 16:00 2427824 8.10989209 7.83 -.28 .08 -1.4811425
8711 10/09/06 18:00 1429207 4.34789168 4.63 .29 .08 1.46711563
8663 10/09/04 18:00 2333496 7.80088949 7.53 -.27 .08 -1.443911
8563 10/08/31 14:00 474200 1.83990047 1.58 -.26 .07 -1.3853971
8732 10/09/07 15:00 1585686 4.87290846 5.13 .26 .07 1.34071707
8633 10/09/03 12:00 1718953 5.30180224 5.56 .26 .07 1.32802001
8665 10/09/04 20:00 1527818 4.7140803 4.95 .23 .06 1.2035677
8730 10/09/07 13:00 1375383 4.21823346 4.46 .24 .06 1.24549331
8561 10/08/31 12:00 447792 1.7262915 1.49 -.23 .05 -1.2339135
8590 10/09/01 17:00 1091974 3.74160948 3.55 -.19 .04 -.99414732
8735 10/09/07 18:00 1439298 4.47701642 4.67 .19 .04 .963095217
8713 10/09/06 20:00 1329360 4.10360513 4.31 .21 .04 1.07557408
8680 10/09/05 11:00 493909 1.44333669 1.64 .2 .04 1.0069992
8569 10/08/31 20:00 621020 2.23225169 2.05 -.19 .03 -.9821498
8706 10/09/06 13:00 1558842 4.87406163 5.05 .17 .03 .887599476
8679 10/09/05 10:00 596128 1.78912429 1.97 .18 .03 .909734037
8686 10/09/05 17:00 566790 1.70654602 1.87 .17 .03 .85089999
8737 10/09/07 20:00 1344196 4.18466536 4.36 .18 .03 .900763349
8733 10/09/07 16:00 1128147 3.52837343 3.67 .14 .02 .718250018
8682 10/09/05 13:00 646239 1.99993311 2.13 .13 .02 .647121981
8684 10/09/05 15:00 562085 1.70315119 1.86 .16 .02 .790203399
8689 10/09/05 20:00 619094 1.90113592 2.04 .14 .02 .709234492
8560 10/08/31 11:00 351699 1.32021978 1.19 -.14 .02 -.72083336
8558 10/08/31 09:00 964370 3.00795272 3.15 .14 .02 .699170048
8685 10/09/05 16:00 555031 1.70070695 1.84 .14 .02 .685433839
8707 10/09/06 14:00 1461530 4.60228452 4.74 .13 .02 .68136991
8681 10/09/05 12:00 560716 1.70368071 1.85 .15 .02 .764645087
8678 10/09/05 09:00 697000 2.13204449 2.29 .16 .02 .804958056
8593 10/09/01 20:00 1100780 3.43772925 3.58 .14 .02 .734228301
8656 10/09/04 11:00 1571407 4.96144141 5.09 .13 .02 .642069482
8641 10/09/03 20:00 1752018 5.53668318 5.67 .13 .02 .65619296
8683 10/09/05 14:00 532763 1.62465329 1.76 .14 .02 .710397441
8657 10/09/04 12:00 1961457 6.19011722 6.34 .15 .02 .743485058
8630 10/09/03 09:00 2066457 6.5532292 6.67 .12 .01 .602366716
8710 10/09/06 17:00 1451543 4.60742153 4.7 .1 .01 .48828759
8731 10/09/07 14:00 1609715 5.09466483 5.21 .12 .01 .586699449
8542 10/08/30 17:00 110265 .336416967 .41 .08 .01 .3785396
8543 10/08/30 18:00 103109 .317441401 .39 .07 .01 .358117007
8544 10/08/30 19:00 161094 .476071211 .58 .1 .01 .498247518
8546 10/08/30 21:00 107748 .333670329 .4 .07 .01 .350912598
8570 10/08/31 21:00 321105 1.19129277 1.09 -.1 .01 -.5593413
8586 10/09/01 13:00 1691056 5.55459794 5.47 -.08 .01 -.45244052
8592 10/09/01 19:00 1158329 3.85529824 3.77 -.09 .01 -.48068656
8606 10/09/02 09:00 1149399 3.83633289 3.74 -.1 .01 -.53071017
8617 10/09/02 20:00 1143081 3.60338063 3.72 .11 .01 .576577757
8642 10/09/03 21:00 1292866 4.07937972 4.2 .12 .01 .593821525
8688 10/09/05 19:00 468391 1.46101697 1.56 .1 .01 .48994366
8734 10/09/07 17:00 1304459 4.15358377 4.23 .08 .01 .400681754
8736 10/09/07 19:00 1340310 4.27241376 4.35 .08 .01 .379305809
8634 10/09/03 13:00 2053647 6.71936096 6.63 -.09 .01 -.47571641
8658 10/09/04 13:00 2388735 7.77465876 7.7 -.07 .01 -.3873154
8660 10/09/04 15:00 2226993 7.08390739 7.19 .1 .01 .514074019
8687 10/09/05 18:00 631445 1.98721549 2.08 .09 .01 .466907665
8702 10/09/06 09:00 1427626 4.55118579 4.63 .08 .01 .382632684
8726 10/09/07 09:00 1402837 4.4743164 4.55 .07 .01 .369852249
8728 10/09/07 11:00 1381194 4.40381 4.48 .08 .01 .3763523
8545 10/08/30 20:00 201627 .618431765 .7 .09 .01 .432378244
8536 10/08/30 11:00 274775 .957136911 .94 -.02 0 -.11223094
8537 10/08/30 12:00 283776 .964265268 .97 0 0 .000587034
8538 10/08/30 13:00 164698 .628332443 .59 -.04 0 -.23424664
8540 10/08/30 15:00 198878 .692103767 .7 0 0 .003126568
8541 10/08/30 16:00 165912 .527914401 .59 .06 0 .308651631
8587 10/09/01 14:00 1336977 4.35437199 4.34 -.02 0 -.10280299
8588 10/09/01 15:00 1213871 3.99096109 3.94 -.05 0 -.26170336
8589 10/09/01 16:00 1218170 3.95576848 3.96 0 0 -.00692076
8594 10/09/01 21:00 409523 1.33522544 1.37 .03 0 .164181969
8611 10/09/02 14:00 1609481 5.14215324 5.21 .07 0 .335623814
8612 10/09/02 15:00 1587677 5.09113794 5.14 .05 0 .237990719
8614 10/09/02 17:00 1591507 5.19283262 5.15 -.04 0 -.22753931
8615 10/09/02 18:00 1179791 3.80819504 3.83 .03 0 .121958826
8616 10/09/02 19:00 1328806 4.26580329 4.31 .05 0 .222102102
8618 10/09/02 21:00 740846 2.40469546 2.43 .03 0 .116106207
8635 10/09/03 14:00 2129358 6.86914675 6.87 0 0 .005692706
8637 10/09/03 16:00 2244322 7.18968273 7.24 .05 0 .252144219
8666 10/09/04 21:00 654243 2.1327977 2.15 .02 0 .08887439
8690 10/09/05 21:00 620183 2.02367035 2.04 .02 0 .089578883
8703 10/09/06 10:00 1419108 4.53652602 4.6 .06 0 .317060748
8714 10/09/06 21:00 296818 1.05433744 1.01 -.04 0 -.25101199
8729 10/09/07 12:00 1189720 3.91915435 3.87 -.05 0 -.29020841
8632 10/09/03 11:00 1985814 6.48205386 6.41 -.07 0 -.37035902
8638 10/09/03 17:00 2068615 6.70243433 6.68 -.02 0 -.13830504
8640 10/09/03 19:00 1441356 4.65129746 4.67 .02 0 .0902373
8704 10/09/06 11:00 1254032 4.02578603 4.07 .05 0 .225956234
8708 10/09/06 15:00 1405646 4.53892013 4.56 .02 0 .080375465
8539 10/08/30 14:00 163956 .52973017 .58 .05 0 .266621265
8591 10/09/01 18:00 1141194 3.74610636 3.71 -.03 0 -.19774237
8613 10/09/02 16:00 1880895 6.04760842 6.08 .03 0 .143400491
8639 10/09/03 18:00 1161313 3.84369577 3.78 -.07 0 -.37059383
112 rows selected.
}}}
''8) Get the cpu(y) centric r2 report''
{{{
SQL>
SQL>
SQL>
SQL>
SQL> set pagesize 50000
SQL> set linesize 250
SQL> select a.snap_id, to_char(a.tm,'yy/mm/dd hh24:mi') tm, a.x_axis, a.y_axis, round(b.oracpupct,2) oracpupct, round(a.proj_y,2) proj_y, round(a.residual,2) residual, round(a.residual_sqr,2) residual_sqr, a.stnd_residual
2 from
3 (select * from r2_regression_data
4 union all
5 select * from r2_outlier_data) a, r2_y_value b
6 where a.x_axis is not null
7 and a.snap_id = b.snap_id
8 -- and a.snap_id = 354
9 order by a.residual_sqr desc
10 -- order by oracpupct desc
11
SQL>
SQL> /
SNAP_ID TM X_AXIS Y_AXIS ORACPUPCT PROJ_Y RESIDUAL RESIDUAL_SQR STND_RESIDUAL
---------- -------------- ---------- ---------- ---------- ---------- ---------- ------------ -------------
8631 10/09/03 10:00 2386527 10.1381514 95.6 8 -2.14 4.56 -5.5071925
8655 10/09/04 10:00 2386368 9.6647556 86.75 8 -1.66 2.77 -4.2911863
8727 10/09/07 10:00 1512471 6.17427596 62.43 4.98 -1.2 1.43 -4.2286624
8583 10/09/01 10:00 1533624 6.16998122 60.44 5.05 -1.12 1.26 -3.9690933
8709 10/09/06 16:00 1277555 5.05832429 67.26 4.21 -.85 .72 -3.0116921
8654 10/09/04 09:00 2433062 8.63511064 87.53 7.94 -.69 .48 -3.1382862
8662 10/09/04 17:00 2551548 8.96291063 90.78 8.29 -.67 .45 -3.1757221
8659 10/09/04 14:00 2333061 8.18530792 84.79 7.56 -.63 .39 -3.1399135
8636 10/09/03 15:00 2315812 7.95748646 84.64 7.47 -.49 .24 -2.5535469
8608 10/09/02 11:00 1783079 6.24226378 65.96 5.77 -.48 .23 -2.4990136
8582 10/09/01 09:00 551600 2.28957403 22.91 1.82 -.46 .22 -2.4367774
8609 10/09/02 12:00 2103698 7.23740511 77.3 6.79 -.45 .2 -2.3384935
8565 10/08/31 16:00 693115 2.67050598 28.99 2.28 -.39 .15 -2.062453
8566 10/08/31 17:00 633888 2.4739455 26.19 2.09 -.39 .15 -2.0258405
8610 10/09/02 13:00 1792940 6.17764899 67.25 5.8 -.38 .14 -1.9984469
8562 10/08/31 13:00 489129 2.00285427 21.29 1.62 -.38 .14 -1.9849164
8664 10/09/04 19:00 1890219 5.73770489 67.81 6.11 .37 .14 1.91175537
8712 10/09/06 19:00 1525835 4.57327212 55.19 4.94 .37 .14 1.90344801
8584 10/09/01 11:00 866023 3.17126538 35.2 2.83 -.34 .12 -1.788949
8564 10/08/31 15:00 493105 1.95770374 22.05 1.64 -.32 .1 -1.6836825
8567 10/08/31 18:00 433245 1.75857694 19.38 1.45 -.31 .1 -1.6442555
8568 10/08/31 19:00 481399 1.90847337 21.04 1.6 -.31 .1 -1.6224133
8705 10/09/06 12:00 1501657 4.55864014 54.52 4.86 .31 .09 1.57689748
8607 10/09/02 10:00 1766166 6.00471057 66.77 5.71 -.29 .09 -1.5442486
8711 10/09/06 18:00 1429207 4.34789168 52.87 4.63 .29 .08 1.46711563
8661 10/09/04 16:00 2427824 8.10989209 87.38 7.83 -.28 .08 -1.4811425
8663 10/09/04 18:00 2333496 7.80088949 82.82 7.53 -.27 .08 -1.443911
8563 10/08/31 14:00 474200 1.83990047 20.48 1.58 -.26 .07 -1.3853971
8732 10/09/07 15:00 1585686 4.87290846 58.68 5.13 .26 .07 1.34071707
8633 10/09/03 12:00 1718953 5.30180224 62.9 5.56 .26 .07 1.32802001
8730 10/09/07 13:00 1375383 4.21823346 50.97 4.46 .24 .06 1.24549331
8665 10/09/04 20:00 1527818 4.7140803 55.57 4.95 .23 .06 1.2035677
8561 10/08/31 12:00 447792 1.7262915 18.56 1.49 -.23 .05 -1.2339135
8713 10/09/06 20:00 1329360 4.10360513 49.28 4.31 .21 .04 1.07557408
8680 10/09/05 11:00 493909 1.44333669 17.45 1.64 .2 .04 1.0069992
8735 10/09/07 18:00 1439298 4.47701642 53.79 4.67 .19 .04 .963095217
8590 10/09/01 17:00 1091974 3.74160948 42.53 3.55 -.19 .04 -.99414732
8569 10/08/31 20:00 621020 2.23225169 23.94 2.05 -.19 .03 -.9821498
8679 10/09/05 10:00 596128 1.78912429 21.34 1.97 .18 .03 .909734037
8737 10/09/07 20:00 1344196 4.18466536 49.83 4.36 .18 .03 .900763349
8706 10/09/06 13:00 1558842 4.87406163 58.1 5.05 .17 .03 .887599476
8686 10/09/05 17:00 566790 1.70654602 20.3 1.87 .17 .03 .85089999
8678 10/09/05 09:00 697000 2.13204449 25.1 2.29 .16 .02 .804958056
8684 10/09/05 15:00 562085 1.70315119 20.25 1.86 .16 .02 .790203399
8681 10/09/05 12:00 560716 1.70368071 20.29 1.85 .15 .02 .764645087
8657 10/09/04 12:00 1961457 6.19011722 71 6.34 .15 .02 .743485058
8593 10/09/01 20:00 1100780 3.43772925 40.34 3.58 .14 .02 .734228301
8733 10/09/07 16:00 1128147 3.52837343 42.42 3.67 .14 .02 .718250018
8683 10/09/05 14:00 532763 1.62465329 19.45 1.76 .14 .02 .710397441
8689 10/09/05 20:00 619094 1.90113592 21.92 2.04 .14 .02 .709234492
8558 10/08/31 09:00 964370 3.00795272 36.07 3.15 .14 .02 .699170048
8560 10/08/31 11:00 351699 1.32021978 14.83 1.19 -.14 .02 -.72083336
8685 10/09/05 16:00 555031 1.70070695 20.12 1.84 .14 .02 .685433839
8707 10/09/06 14:00 1461530 4.60228452 54.34 4.74 .13 .02 .68136991
8641 10/09/03 20:00 1752018 5.53668318 65.36 5.67 .13 .02 .65619296
8682 10/09/05 13:00 646239 1.99993311 23.24 2.13 .13 .02 .647121981
8656 10/09/04 11:00 1571407 4.96144141 58.56 5.09 .13 .02 .642069482
8630 10/09/03 09:00 2066457 6.5532292 75.75 6.67 .12 .01 .602366716
8642 10/09/03 21:00 1292866 4.07937972 47.76 4.2 .12 .01 .593821525
8731 10/09/07 14:00 1609715 5.09466483 59.75 5.21 .12 .01 .586699449
8617 10/09/02 20:00 1143081 3.60338063 42.06 3.72 .11 .01 .576577757
8570 10/08/31 21:00 321105 1.19129277 12.82 1.09 -.1 .01 -.5593413
8660 10/09/04 15:00 2226993 7.08390739 80.85 7.19 .1 .01 .514074019
8544 10/08/30 19:00 161094 .476071211 5.68 .58 .1 .01 .498247518
8606 10/09/02 09:00 1149399 3.83633289 42.5 3.74 -.1 .01 -.53071017
8688 10/09/05 19:00 468391 1.46101697 17.41 1.56 .1 .01 .48994366
8710 10/09/06 17:00 1451543 4.60742153 54.53 4.7 .1 .01 .48828759
8687 10/09/05 18:00 631445 1.98721549 22.68 2.08 .09 .01 .466907665
8592 10/09/01 19:00 1158329 3.85529824 42.93 3.77 -.09 .01 -.48068656
8634 10/09/03 13:00 2053647 6.71936096 75.93 6.63 -.09 .01 -.47571641
8545 10/08/30 20:00 201627 .618431765 7.24 .7 .09 .01 .432378244
8586 10/09/01 13:00 1691056 5.55459794 61.64 5.47 -.08 .01 -.45244052
8734 10/09/07 17:00 1304459 4.15358377 49.92 4.23 .08 .01 .400681754
8702 10/09/06 09:00 1427626 4.55118579 52.99 4.63 .08 .01 .382632684
8736 10/09/07 19:00 1340310 4.27241376 49.52 4.35 .08 .01 .379305809
8542 10/08/30 17:00 110265 .336416967 4.06 .41 .08 .01 .3785396
8728 10/09/07 11:00 1381194 4.40381 51.21 4.48 .08 .01 .3763523
8726 10/09/07 09:00 1402837 4.4743164 51.33 4.55 .07 .01 .369852249
8543 10/08/30 18:00 103109 .317441401 3.8 .39 .07 .01 .358117007
8658 10/09/04 13:00 2388735 7.77465876 86.93 7.7 -.07 .01 -.3873154
8546 10/08/30 21:00 107748 .333670329 3.94 .4 .07 .01 .350912598
8639 10/09/03 18:00 1161313 3.84369577 44.7 3.78 -.07 0 -.37059383
8632 10/09/03 11:00 1985814 6.48205386 73.54 6.41 -.07 0 -.37035902
8611 10/09/02 14:00 1609481 5.14215324 60.27 5.21 .07 0 .335623814
8703 10/09/06 10:00 1419108 4.53652602 53.74 4.6 .06 0 .317060748
8541 10/08/30 16:00 165912 .527914401 6.28 .59 .06 0 .308651631
8539 10/08/30 14:00 163956 .52973017 6.24 .58 .05 0 .266621265
8729 10/09/07 12:00 1189720 3.91915435 46.12 3.87 -.05 0 -.29020841
8637 10/09/03 16:00 2244322 7.18968273 82.01 7.24 .05 0 .252144219
8612 10/09/02 15:00 1587677 5.09113794 60.23 5.14 .05 0 .237990719
8588 10/09/01 15:00 1213871 3.99096109 46.02 3.94 -.05 0 -.26170336
8704 10/09/06 11:00 1254032 4.02578603 47.67 4.07 .05 0 .225956234
8616 10/09/02 19:00 1328806 4.26580329 50.22 4.31 .05 0 .222102102
8714 10/09/06 21:00 296818 1.05433744 12.07 1.01 -.04 0 -.25101199
8538 10/08/30 13:00 164698 .628332443 7.34 .59 -.04 0 -.23424664
8614 10/09/02 17:00 1591507 5.19283262 60.26 5.15 -.04 0 -.22753931
8594 10/09/01 21:00 409523 1.33522544 15.84 1.37 .03 0 .164181969
8591 10/09/01 18:00 1141194 3.74610636 42.78 3.71 -.03 0 -.19774237
8613 10/09/02 16:00 1880895 6.04760842 69.42 6.08 .03 0 .143400491
8615 10/09/02 18:00 1179791 3.80819504 44.06 3.83 .03 0 .121958826
8618 10/09/02 21:00 740846 2.40469546 27.18 2.43 .03 0 .116106207
8638 10/09/03 17:00 2068615 6.70243433 76.87 6.68 -.02 0 -.13830504
8640 10/09/03 19:00 1441356 4.65129746 54.47 4.67 .02 0 .0902373
8690 10/09/05 21:00 620183 2.02367035 40.16 2.04 .02 0 .089578883
8666 10/09/04 21:00 654243 2.1327977 24.62 2.15 .02 0 .08887439
8708 10/09/06 15:00 1405646 4.53892013 53.82 4.56 .02 0 .080375465
8536 10/08/30 11:00 274775 .957136911 11.1 .94 -.02 0 -.11223094
8587 10/09/01 14:00 1336977 4.35437199 49.3 4.34 -.02 0 -.10280299
8635 10/09/03 14:00 2129358 6.86914675 78.22 6.87 0 0 .005692706
8540 10/08/30 15:00 198878 .692103767 8.1 .7 0 0 .003126568
8537 10/08/30 12:00 283776 .964265268 11.17 .97 0 0 .000587034
8589 10/09/01 16:00 1218170 3.95576848 45.12 3.96 0 0 -.00692076
112 rows selected.
}}}
''9) Compare the R2 of y to another column value (oracle cpu pct%)''
{{{
SQL> select regr_count(a.y_axis,b.oracpupct), regr_r2(a.y_axis,b.oracpupct)
2 from
3 (select * from r2_regression_data
4 union all
5 select * from r2_outlier_data) a, r2_y_value b
6 where a.x_axis is not null
7 and a.snap_id = b.snap_id
8 order by a.residual_sqr desc
9
SQL> /
REGR_COUNT(A.Y_AXIS,B.ORACPUPCT) REGR_R2(A.Y_AXIS,B.ORACPUPCT)
-------------------------------- -----------------------------
112 .974644372
}}}
''10) Get R2 again of X and Y''
{{{
SQL> select regr_count(y_axis,x_axis), regr_r2(y_axis,x_axis) from
2 (select * from r2_regression_data
3 union all
4 select * from r2_outlier_data);
REGR_COUNT(Y_AXIS,X_AXIS) REGR_R2(Y_AXIS,X_AXIS)
------------------------- ----------------------
112 .971346164
}}}
''11) Get R2 after removing outlier data''
{{{
SQL> select regr_count(y_axis,x_axis), regr_r2(y_axis,x_axis) from r2_regression_data;
REGR_COUNT(Y_AXIS,X_AXIS) REGR_R2(Y_AXIS,X_AXIS)
------------------------- ----------------------
104 .991180797
}}}
''12) Get R2 of outlier''
{{{
SQL> select regr_count(y_axis,x_axis), regr_r2(y_axis,x_axis) from r2_outlier_data;
REGR_COUNT(Y_AXIS,X_AXIS) REGR_R2(Y_AXIS,X_AXIS)
------------------------- ----------------------
8 .881970392
}}}
''13) Get outlier data''
{{{
SQL> select snap_id, to_char(tm,'yy/mm/dd hh24:mi') tm, x_axis, y_axis, round(proj_y,2) proj_y, round(residual,2) residual, round(residual_sqr,2) residual_sqr, stnd_residual
2 from r2_outlier_data
3 where x_axis is not null
4 order by residual_sqr desc;
SNAP_ID TM X_AXIS Y_AXIS PROJ_Y RESIDUAL RESIDUAL_SQR STND_RESIDUAL
---------- -------------- ---------- ---------- ---------- ---------- ------------ -------------
8631 10/09/03 10:00 2386527 10.1381514 8 -2.14 4.56 -5.5071925
8655 10/09/04 10:00 2386368 9.6647556 8 -1.66 2.77 -4.2911863
8727 10/09/07 10:00 1512471 6.17427596 4.98 -1.2 1.43 -4.2286624
8583 10/09/01 10:00 1533624 6.16998122 5.05 -1.12 1.26 -3.9690933
8709 10/09/06 16:00 1277555 5.05832429 4.21 -.85 .72 -3.0116921
8654 10/09/04 09:00 2433062 8.63511064 7.94 -.69 .48 -3.1382862
8662 10/09/04 17:00 2551548 8.96291063 8.29 -.67 .45 -3.1757221
8659 10/09/04 14:00 2333061 8.18530792 7.56 -.63 .39 -3.1399135
8 rows selected.
}}}
''14) Get statistical summary of data''
{{{
SQL> set serveroutput on
SQL> set echo on
SQL> declare
2 s DBMS_STAT_FUNCS.SummaryType;
3 begin
4 DBMS_STAT_FUNCS.SUMMARY('R2TOOLKIT','R2_REGRESSION_DATA','X_AXIS',5,s);
5 dbms_output.put_line('SUMMARY STATISTICS');
6 dbms_output.put_line('---------------------------');
7 dbms_output.put_line('Count: '||s.count);
8 dbms_output.put_line('Min: '||s.min);
9 dbms_output.put_line('Max: '||s.max);
10 dbms_output.put_line('Range: '||s.range);
11 dbms_output.put_line('Mean: '||round(s.mean));
12 dbms_output.put_line('Mode Count: '||s.cmode.count);
13 dbms_output.put_line('Mode: '||s.cmode(1));
14 dbms_output.put_line('Variance: '||round(s.variance));
15 dbms_output.put_line('Stddev: '||round(s.stddev));
16 dbms_output.put_line('---------------------------');
17 dbms_output.put_line('Quantile 5 -> '||s.quantile_5);
18 dbms_output.put_line('Quantile 25 -> '||s.quantile_25);
19 dbms_output.put_line('Median -> '||s.median);
20 dbms_output.put_line('Quantile 75 -> '||s.quantile_75);
21 dbms_output.put_line('Quantile 95 -> '||s.quantile_95);
22 dbms_output.put_line('---------------------------');
23 dbms_output.put_line('Extreme Count: '||s.extreme_values.count);
24 dbms_output.put_line('Extremes: '||s.extreme_values(1));
 25   dbms_output.put_line('Bottom 5: '||s.bottom_5_values(5)||','||s.bottom_5_values(4)||','||s.bottom_5_values(3)||','||s.bottom_5_values(2)||','||s.bottom_5_values(1));
 26   dbms_output.put_line('Top 5: '||s.top_5_values(1)||','||s.top_5_values(2)||','||s.top_5_values(3)||','||s.top_5_values(4)||','||s.top_5_values(5));
27 dbms_output.put_line('---------------------------');
28 end;
29 /
SUMMARY STATISTICS
---------------------------
Count: 104
Min: 103109
Max: 2427824
Range: 2324715
Mean: 1131744
Mode Count: 104
Mode: 165912
Variance: 405820839869
Stddev: 637041
---------------------------
Quantile 5 -> 164880.1
Quantile 25 -> 559294.75
Median -> 1201795.5
Quantile 75 -> 1574976.75
Quantile 95 -> 2212347.75
---------------------------
Extreme Count: 0
declare
*
ERROR at line 1:
ORA-06533: Subscript beyond count
ORA-06512: at line 24
SQL> declare
2 s DBMS_STAT_FUNCS.SummaryType;
3 begin
4 DBMS_STAT_FUNCS.SUMMARY('R2TOOLKIT','R2_REGRESSION_DATA','Y_AXIS',5,s);
5 dbms_output.put_line('SUMMARY STATISTICS');
6 dbms_output.put_line('---------------------------');
7 dbms_output.put_line('Count: '||s.count);
8 dbms_output.put_line('Min: '||s.min);
9 dbms_output.put_line('Max: '||s.max);
10 dbms_output.put_line('Range: '||s.range);
11 dbms_output.put_line('Mean: '||round(s.mean));
12 dbms_output.put_line('Mode Count: '||s.cmode.count);
13 dbms_output.put_line('Mode: '||s.cmode(1));
14 dbms_output.put_line('Variance: '||round(s.variance));
15 dbms_output.put_line('Stddev: '||round(s.stddev));
16 dbms_output.put_line('---------------------------');
17 dbms_output.put_line('Quantile 5 -> '||s.quantile_5);
18 dbms_output.put_line('Quantile 25 -> '||s.quantile_25);
19 dbms_output.put_line('Median -> '||s.median);
20 dbms_output.put_line('Quantile 75 -> '||s.quantile_75);
21 dbms_output.put_line('Quantile 95 -> '||s.quantile_95);
22 dbms_output.put_line('---------------------------');
23 dbms_output.put_line('Extreme Count: '||s.extreme_values.count);
24 dbms_output.put_line('Extremes: '||s.extreme_values(1));
 25   dbms_output.put_line('Bottom 5: '||s.bottom_5_values(5)||','||s.bottom_5_values(4)||','||s.bottom_5_values(3)||','||s.bottom_5_values(2)||','||s.bottom_5_values(1));
 26   dbms_output.put_line('Top 5: '||s.top_5_values(1)||','||s.top_5_values(2)||','||s.top_5_values(3)||','||s.top_5_values(4)||','||s.top_5_values(5));
27 dbms_output.put_line('---------------------------');
28 end;
29 /
SUMMARY STATISTICS
---------------------------
Count: 104
Min: .317441401312861593235425011125945705385
Max: 8.10989208507089241034195162635529608007
Range: 7.79245068375803081710652661522935037469
Mean: 4
Mode Count: 104
Mode: .9642652676419483212824393251499355417298
Variance: 4
Stddev: 2
---------------------------
Quantile 5 -> .5430354091205594430385812645437926098878
Quantile 25 -> 1.88582705559029512237984947748443866416
Median -> 3.93746141390887432508029373874942105042
Quantile 75 -> 4.89590657700375285514313374197188642884
Quantile 95 -> 7.17381643105824124338148201530821454513
---------------------------
Extreme Count: 0
declare
*
ERROR at line 1:
ORA-06533: Subscript beyond count
ORA-06512: at line 24
}}}
Note: the ORA-06533 in both runs comes from line 24, s.extreme_values(1). The summary reports Extreme Count: 0, so element 1 of the extreme_values collection does not exist; guard that call with a check on s.extreme_values.count before printing.
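A hedged numpy equivalent of the summary above, assuming the profiled column has been fetched into a numpy array x (numpy's percentile interpolation may differ slightly from DBMS_STAT_FUNCS):
{{{
import numpy as np

def summary(x):
    # x: numpy array of the column being profiled (X_AXIS or Y_AXIS)
    q05, q25, q50, q75, q95 = np.percentile(x, [5, 25, 50, 75, 95])
    print("Count:  ", len(x))
    print("Min:    ", x.min())
    print("Max:    ", x.max())
    print("Range:  ", x.max() - x.min())
    print("Mean:   ", x.mean())
    print("Stddev: ", x.std(ddof=1))   # sample stddev, like Oracle's STDDEV
    print("Quantile 5  ->", q05)
    print("Quantile 25 ->", q25)
    print("Median      ->", q50)
    print("Quantile 75 ->", q75)
    print("Quantile 95 ->", q95)
}}}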
! Drilling down on the peak workload... with AAS of 10
''1) General Workload report''
{{{
AWR CPU and IO Workload Report
i *** *** ***
n Total Total Total U S
Snap s Snap C CPU A Oracle OS Physical Oracle RMAN OS S Y
Snap Start t Dur P Time DB DB Bg RMAN A CPU CPU Memory IOPs IOPs IOPs IO r IO w Redo Exec CPU CPU CPU R S
ID Time # (m) U (s) Time CPU CPU CPU S (s) (s) (mb) r w redo (mb)/s (mb)/s (mb)/s Sess /s % % % % %
------ --------------- --- ---------- --- ----------- ---------- --------- --------- -------- ----- ----------- ----------- ---------- --------- --------- --------- --------- --------- --------- ---- --------- ------ ---- ---- ---- ----
8631 10/09/03 10:00 1 59.98 8 28790.40 36485.18 27217.50 307.25 0.00 10.1 27524.75 26288.07 12287.06 454.684 31.020 49.214 106.008 0.392 0.091 98 2111.028 96 0 91 84 7
}}}
''2) Tablespace IO report''
{{{
AWR Tablespace IO Report
i
n
Snap s Snap IOPS IOPS IOPS
Snap Start t Dur IO Read Av Av Av Write Av Av Av Buffer Av Buf Total Total
ID Time # (m) TS Rank Time Reads Rd(ms) Reads/s Blks/Rd Time Writes Wt(ms) Writes/s Blks/Wrt Waits Wt(ms) IO R+W R+W
------ --------------- --- ---------- -------------------- ---- -------- -------- ------ -------- ------- -------- -------- ------ -------- -------- -------- ------ -------- --------
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 1 179285 981340 1.8 273 3.7 364237 78032 46.7 22 1.4 5881 5.2 1059372 294
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 2 173600 403730 4.3 112 104.7 74466 25632 29.1 7 1.0 13 15.4 429362 119
8631 10/09/03 10:00 1 59.98 BIOHUB_MAIN 3 67411 208421 3.2 58 13.6 291 72 40.4 0 1.0 0 0.0 208493 58
8631 10/09/03 10:00 1 59.98 SYSAUX 4 13830 35832 3.9 10 2.6 22611 4968 45.5 1 1.4 0 0.0 40800 11
8631 10/09/03 10:00 1 59.98 SYSTEM 5 1744 6446 2.7 2 1.8 4058 988 41.1 0 1.5 0 0.0 7434 2
8631 10/09/03 10:00 1 59.98 TEMP 6 118 299 3.9 0 17.3 17385 1047 166.0 0 26.2 0 0.0 1346 0
8631 10/09/03 10:00 1 59.98 UNDOTBS1 7 164 363 4.5 0 1.0 4344 896 48.5 0 9.8 38 0.3 1259 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 8 875 541 16.2 0 1.0 675 168 40.2 0 1.0 0 0.0 709 0
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS_TESTI 9 760 298 25.5 0 1.0 438 80 54.8 0 1.0 0 0.0 378 0
NG
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS_TES 10 714 269 26.5 0 1.0 433 80 54.1 0 1.0 0 0.0 349 0
TING
8631 10/09/03 10:00 1 59.98 UNDOTBS2 11 22 9 24.4 0 1.0 11 8 13.8 0 1.0 661 6.2 17 0
8631 10/09/03 10:00 1 59.98 USERS 12 22 8 27.5 0 1.0 9 8 11.3 0 1.0 0 0.0 16 0
}}}
''3) Datafile IO report''
{{{
AWR File IO Report
i
n
Snap s Snap IOPS IOPS IOPS
Snap Start t Dur IO Read Av Av Av Write Av Av Av Buffer Av Buf Total Total
ID Time # (m) TS File# Filename Rank Time Reads Rd(ms) Reads/s Blks/Rd Time Writes Wt(ms) Writes/s Blks/Wrt Waits Wt(ms) IO R+W R+W
------ --------------- --- ---------- -------------------- ----- ------------------------------------------------------------ ---- -------- -------- ------ -------- ------- -------- -------- ------ -------- -------- -------- ------ -------- --------
8631 10/09/03 10:00 1 59.98 BIOHUB_MAIN 13 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_01.dbf 1 28940 83874 3.5 23 15.1 13 10 13.0 0 1.0 0 0.0 83884 23
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 44 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_15.dbf 2 11077 62572 1.8 17 2.6 6112 1473 41.5 0 1.4 178 15.0 64045 18
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 43 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_14.dbf 3 10542 57851 1.8 16 2.8 6130 1439 42.6 0 1.3 170 11.8 59290 16
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 53 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_24.dbf 4 8738 51111 1.7 14 2.6 24620 5146 47.8 1 2.0 425 2.9 56257 16
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 51 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_22.dbf 5 7290 49711 1.5 14 2.8 26259 5157 50.9 1 1.5 357 1.2 54868 15
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 52 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_23.dbf 6 7613 49670 1.5 14 3.0 8324 1878 44.3 1 1.4 615 1.8 51548 14
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 56 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_03.dbf 7 8887 44159 2.0 12 4.0 25073 6139 40.8 2 1.4 239 6.9 50298 14
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 40 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_11.dbf 8 9968 46766 2.1 13 4.2 8604 1881 45.7 1 1.3 346 4.4 48647 14
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 42 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_13.dbf 9 8381 45347 1.8 13 3.7 8453 1805 46.8 1 1.3 473 9.6 47152 13
8631 10/09/03 10:00 1 59.98 BIOHUB_MAIN 157 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_02.dbf 10 13397 46774 2.9 13 12.3 109 21 51.9 0 1.0 0 0.0 46795 13
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 39 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_10.dbf 11 8571 43053 2.0 12 4.3 14653 3283 44.6 1 1.3 329 4.3 46336 13
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 57 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_04.dbf 12 7538 39006 1.9 11 4.5 27792 6759 41.1 2 1.3 324 4.2 45765 13
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 58 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_05.dbf 13 6789 40225 1.7 11 4.4 21520 5183 41.5 1 1.2 314 3.2 45408 13
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 59 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_06.dbf 14 6613 40806 1.6 11 4.0 19542 4413 44.3 1 1.1 230 2.6 45219 13
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 6 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_01.dbf 15 7240 42344 1.7 12 4.5 13303 2804 47.4 1 1.2 207 6.0 45148 13
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 41 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_12.dbf 16 7726 42326 1.8 12 3.9 8236 1712 48.1 0 1.3 256 9.1 44038 12
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 55 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_02.dbf 17 7393 40557 1.8 11 4.3 12177 3220 37.8 1 1.2 156 5.4 43777 12
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 54 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_25.dbf 18 7901 40249 2.0 11 3.9 10703 2440 43.9 1 1.2 405 2.2 42689 12
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 60 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_07.dbf 19 6077 39850 1.5 11 4.1 7057 1772 39.8 0 1.4 255 1.6 41622 12
8631 10/09/03 10:00 1 59.98 SYSAUX 2 +DATA_1/xxxxxdb/datafile/sysaux.262.695412469 20 13830 35832 3.9 10 2.6 22611 4968 45.5 1 1.4 0 0.0 40800 11
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 61 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_08.dbf 21 6200 39021 1.6 11 4.3 7446 1757 42.4 0 1.3 92 10.3 40778 11
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 62 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_09.dbf 22 6255 37974 1.6 11 4.4 6284 1495 42.0 0 1.1 82 3.3 39469 11
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 45 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_16.dbf 23 7717 30393 2.5 8 3.3 9145 1888 48.4 1 1.8 105 24.4 32281 9
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 35 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_22.dbf 24 8994 24508 3.7 7 64.7 15286 5018 30.5 1 1.0 0 0.0 29526 8
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 49 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_20.dbf 25 4294 21441 2.0 6 3.2 48517 7243 67.0 2 1.8 44 6.6 28684 8
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 50 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_21.dbf 26 4260 19872 2.1 6 3.6 26567 4888 54.4 1 1.4 58 7.1 24760 7
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 48 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_19.dbf 27 4406 20567 2.1 6 3.2 9303 2178 42.7 1 1.5 125 1.5 22745 6
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 36 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_23.dbf 28 6298 16191 3.9 4 94.4 9616 5817 16.5 2 1.0 0 0.0 22008 6
8631 10/09/03 10:00 1 59.98 BIOHUB_MAIN 161 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_06.dbf 29 7615 21071 3.6 6 14.0 29 8 36.3 0 1.0 0 0.0 21079 6
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 47 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_18.dbf 30 4252 19195 2.2 5 3.4 4645 1172 39.6 0 1.2 65 3.7 20367 6
8631 10/09/03 10:00 1 59.98 BIOHUB_MAIN 160 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_05.dbf 31 4777 18203 2.6 5 10.2 39 8 48.8 0 1.0 0 0.0 18211 5
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS 46 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_17.dbf 32 3557 17274 2.1 5 3.7 3772 907 41.6 0 1.1 31 8.4 18181 5
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 34 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_21.dbf 33 4558 11736 3.9 3 19.8 10773 4829 22.3 1 1.0 0 0.0 16565 5
8631 10/09/03 10:00 1 59.98 BIOHUB_MAIN 158 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_03.dbf 34 5319 15326 3.5 4 14.1 31 8 38.8 0 1.0 0 0.0 15334 4
8631 10/09/03 10:00 1 59.98 BIOHUB_MAIN 159 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_04.dbf 35 5238 15210 3.4 4 13.3 41 9 45.6 0 1.0 0 0.0 15219 4
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 31 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_18.dbf 36 5643 12708 4.4 4 119.3 441 104 42.4 0 1.0 9 4.4 12812 4
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 84 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_28.dbf 37 5195 12533 4.1 3 119.6 816 171 47.7 0 1.0 0 0.0 12704 4
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 118 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_63.dbf 38 5117 11646 4.4 3 89.4 3157 1055 29.9 0 1.1 0 0.0 12701 4
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 117 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_62.dbf 39 4484 11404 3.9 3 116.9 3828 753 50.8 0 1.1 0 0.0 12157 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 115 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_60.dbf 40 4299 11369 3.8 3 117.3 1867 401 46.6 0 1.1 0 0.0 11770 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 116 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_61.dbf 41 4537 11267 4.0 3 118.3 1882 389 48.4 0 1.0 1 130.0 11656 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 113 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_58.dbf 42 4629 11265 4.1 3 118.3 950 212 44.8 0 1.0 0 0.0 11477 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 111 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_56.dbf 43 4070 11271 3.6 3 118.4 745 167 44.6 0 1.0 0 0.0 11438 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 112 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_57.dbf 44 4403 11279 3.9 3 118.2 514 110 46.7 0 1.0 0 0.0 11389 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 110 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_55.dbf 45 4211 11168 3.8 3 119.4 460 109 42.2 0 1.0 0 0.0 11277 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 114 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_59.dbf 46 4121 11205 3.7 3 119.0 247 44 56.1 0 1.0 0 0.0 11249 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 109 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_54.dbf 47 4368 11127 3.9 3 119.9 356 71 50.1 0 1.0 0 0.0 11198 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 106 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_51.dbf 48 4527 11104 4.1 3 120.1 475 86 55.2 0 1.0 0 0.0 11190 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 108 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_53.dbf 49 4282 11101 3.9 3 120.2 373 73 51.1 0 1.0 0 0.0 11174 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 105 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_50.dbf 50 4431 11088 4.0 3 120.3 286 57 50.2 0 1.0 0 0.0 11145 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 103 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_48.dbf 51 4374 11066 4.0 3 120.5 388 72 53.9 0 1.0 0 0.0 11138 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 107 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_52.dbf 52 4265 11064 3.9 3 120.6 341 69 49.4 0 1.0 0 0.0 11133 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 104 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_49.dbf 53 4393 11010 4.0 3 121.1 471 94 50.1 0 1.0 0 0.0 11104 3
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 102 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_47.dbf 54 3972 10914 3.6 3 122.2 284 61 46.6 0 1.0 0 0.0 10975 3
8631 10/09/03 10:00 1 59.98 BIOHUB_MAIN 162 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_07.dbf 55 2125 7963 2.7 2 13.5 29 8 36.3 0 1.0 0 0.0 7971 2
8631 10/09/03 10:00 1 59.98 SYSTEM 1 +DATA_1/xxxxxdb/datafile/system.261.695412463 56 1744 6446 2.7 2 1.8 4058 988 41.1 0 1.5 0 0.0 7434 2
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 30 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_17.dbf 57 2140 5565 3.8 2 85.5 324 96 33.8 0 1.0 0 0.0 5661 2
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 33 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_20.dbf 58 2116 4061 5.2 1 47.4 921 478 19.3 0 1.0 0 0.0 4539 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 23 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_10.dbf 59 2224 4423 5.0 1 105.8 319 74 43.1 0 1.0 1 20.0 4497 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 27 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_14.dbf 60 2107 4385 4.8 1 106.7 265 69 38.4 0 1.0 0 0.0 4454 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 28 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_15.dbf 61 2068 4370 4.7 1 107.1 199 59 33.7 0 1.0 0 0.0 4429 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 24 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_11.dbf 62 2433 4328 5.6 1 108.1 278 58 47.9 0 1.0 0 0.0 4386 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 21 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_08.dbf 63 2002 4340 4.6 1 107.8 156 34 45.9 0 1.0 0 0.0 4374 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 22 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_09.dbf 64 2003 4289 4.7 1 109.1 99 25 39.6 0 1.0 0 0.0 4314 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 26 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_13.dbf 65 2052 4170 4.9 1 112.2 335 61 54.9 0 1.0 0 0.0 4231 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 20 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_07.dbf 66 2104 4140 5.1 1 112.9 103 28 36.8 0 1.0 0 0.0 4168 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 7 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_01.dbf 67 2159 4085 5.3 1 110.7 12 9 13.3 0 1.0 0 0.0 4094 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 16 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_03.dbf 68 2263 3951 5.7 1 118.3 52 21 24.8 0 1.0 0 0.0 3972 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 17 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_04.dbf 69 1827 3609 5.1 1 91.5 49 17 28.8 0 1.0 0 0.0 3626 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 32 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_19.dbf 70 1262 2975 4.2 1 64.4 342 124 27.6 0 1.0 0 0.0 3099 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 37 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_24.dbf 71 1559 3024 5.2 1 109.6 156 31 50.3 0 1.1 0 0.0 3055 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 38 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_25.dbf 72 1515 2891 5.2 1 114.4 72 26 27.7 0 1.0 0 0.0 2917 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 19 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_06.dbf 73 1552 2838 5.5 1 116.1 202 41 49.3 0 1.0 2 5.0 2879 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 29 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_16.dbf 74 1241 2751 4.5 1 119.8 105 35 30.0 0 1.0 0 0.0 2786 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 18 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_05.dbf 75 1311 2732 4.8 1 120.6 143 29 49.3 0 1.0 0 0.0 2761 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 15 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_02.dbf 76 1494 2695 5.5 1 122.3 30 11 27.3 0 1.0 0 0.0 2706 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 98 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_43.dbf 77 1367 2591 5.3 1 110.3 41 11 37.3 0 1.0 0 0.0 2602 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 130 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_75.dbf 78 898 1435 6.3 0 96.2 2732 1158 23.6 0 1.1 0 0.0 2593 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 99 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_44.dbf 79 1343 2568 5.2 1 111.3 35 13 26.9 0 1.0 0 0.0 2581 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 96 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_41.dbf 80 1617 2507 6.4 1 114.0 26 15 17.3 0 1.0 0 0.0 2522 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 100 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_45.dbf 81 1376 2495 5.5 1 114.5 128 21 61.0 0 1.0 0 0.0 2516 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 101 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_46.dbf 82 1474 2475 6.0 1 115.4 137 13 105.4 0 1.0 0 0.0 2488 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 97 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_42.dbf 83 1461 2407 6.1 1 118.6 91 29 31.4 0 1.0 0 0.0 2436 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 122 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_67.dbf 84 638 2047 3.1 1 67.8 1368 339 40.4 0 1.0 0 0.0 2386 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 121 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_66.dbf 85 798 1862 4.3 1 74.4 463 114 40.6 0 1.3 0 0.0 1976 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 119 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_64.dbf 86 742 1914 3.9 1 72.4 275 53 51.9 0 1.0 0 0.0 1967 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 123 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_68.dbf 87 601 1740 3.5 0 79.5 748 166 45.1 0 1.1 0 0.0 1906 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 120 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_65.dbf 88 641 1838 3.5 1 75.3 285 52 54.8 0 1.0 0 0.0 1890 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 124 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_69.dbf 89 610 1770 3.4 0 78.2 467 107 43.6 0 1.2 0 0.0 1877 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 125 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_70.dbf 90 561 1804 3.1 1 76.8 112 35 32.0 0 1.0 0 0.0 1839 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 127 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_72.dbf 91 864 1798 4.8 0 77.0 93 28 33.2 0 1.0 0 0.0 1826 1
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 129 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_74.dbf 92 946 1454 6.5 0 95.0 559 264 21.2 0 1.0 0 0.0 1718 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 85 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_29.dbf 93 727 1679 4.3 0 98.3 137 33 41.5 0 1.0 0 0.0 1712 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 128 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_73.dbf 94 747 1666 4.5 0 83.0 117 28 41.8 0 1.0 0 0.0 1694 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 126 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_71.dbf 95 699 1631 4.3 0 84.8 30 16 18.8 0 1.0 0 0.0 1647 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 25 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_12.dbf 96 719 1595 4.5 0 114.0 174 47 37.0 0 1.0 0 0.0 1642 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 83 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_27.dbf 97 720 1518 4.7 0 108.6 103 30 34.3 0 1.0 0 0.0 1548 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 89 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_33.dbf 98 893 1519 5.9 0 108.6 54 18 30.0 0 1.0 0 0.0 1537 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 86 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_30.dbf 99 666 1479 4.5 0 111.5 162 28 57.9 0 1.0 0 0.0 1507 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 90 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_34.dbf ### 807 1484 5.4 0 111.1 27 12 22.5 0 1.0 0 0.0 1496 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 91 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_35.dbf ### 876 1447 6.1 0 113.9 106 35 30.3 0 1.0 0 0.0 1482 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 82 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_26.dbf ### 707 1456 4.9 0 113.2 56 16 35.0 0 1.0 0 0.0 1472 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 88 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_32.dbf ### 836 1432 5.8 0 115.1 58 22 26.4 0 1.0 0 0.0 1454 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 87 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_31.dbf ### 710 1419 5.0 0 116.1 87 24 36.3 0 1.0 0 0.0 1443 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 92 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_37.dbf ### 778 1412 5.5 0 116.7 92 23 40.0 0 1.0 0 0.0 1435 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 93 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_38.dbf ### 827 1393 5.9 0 118.3 53 20 26.5 0 1.0 0 0.0 1413 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 95 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_40.dbf ### 732 1391 5.3 0 118.6 46 21 21.9 0 1.0 0 0.0 1412 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 94 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_39.dbf ### 673 1391 4.8 0 118.5 33 12 27.5 0 1.0 0 0.0 1403 0
8631 10/09/03 10:00 1 59.98 TEMP 1 +DATA_1/xxxxxdb/tempfile/temp.264.695412473 ### 118 299 3.9 0 17.3 17385 1047 166.0 0 26.2 0 1346 0
8631 10/09/03 10:00 1 59.98 UNDOTBS1 3 +DATA_1/xxxxxdb/datafile/undotbs1.263.695412471 ### 164 363 4.5 0 1.0 4344 896 48.5 0 9.8 38 0.3 1259 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 133 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_78.dbf ### 405 640 6.3 0 1.0 1140 259 44.0 0 1.0 0 0.0 899 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 132 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_77.dbf ### 236 432 5.5 0 1.0 595 131 45.4 0 1.0 0 0.0 563 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 134 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_79.dbf ### 203 306 6.6 0 1.0 1007 215 46.8 0 1.1 0 0.0 521 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 137 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_82.dbf ### 132 212 6.2 0 1.0 1110 248 44.8 0 1.0 0 0.0 460 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 135 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_80.dbf ### 196 298 6.6 0 1.0 704 153 46.0 0 1.3 0 0.0 451 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 138 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_83.dbf ### 153 183 8.4 0 1.0 1093 226 48.4 0 1.0 0 0.0 409 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 131 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_76.dbf ### 209 302 6.9 0 1.0 355 60 59.2 0 1.0 0 0.0 362 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 139 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_84.dbf ### 211 162 13.0 0 1.0 943 187 50.4 0 1.0 0 0.0 349 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 136 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_81.dbf ### 93 151 6.2 0 1.0 378 93 40.6 0 1.0 0 0.0 244 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 140 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_85.dbf ### 127 111 11.4 0 1.0 73 49 14.9 0 1.1 0 0.0 160 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 142 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_87.dbf ### 136 144 9.4 0 1.0 21 8 26.3 0 1.0 0 0.0 152 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 144 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_89.dbf ### 67 73 9.2 0 1.0 29 8 36.3 0 1.0 0 0.0 81 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 149 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_94.dbf ### 70 69 10.1 0 1.0 31 8 38.8 0 1.0 0 0.0 77 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 141 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_86.dbf ### 80 61 13.1 0 1.0 22 8 27.5 0 1.0 0 0.0 69 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 145 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_90.dbf ### 60 51 11.8 0 1.0 31 8 38.8 0 1.0 0 0.0 59 0
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS_TESTI 81 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_10.d ### 81 37 21.9 0 1.0 9 8 11.3 0 1.0 0 0.0 45 0
NG
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 164 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_08.dbf ### 46 34 13.5 0 1.0 36 8 45.0 0 1.0 0 0.0 42 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 163 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_07.dbf ### 46 34 13.5 0 1.0 31 8 38.8 0 1.0 0 0.0 42 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 165 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_09.dbf ### 47 34 13.8 0 1.0 29 8 36.3 0 1.0 0 0.0 42 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 151 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_96.dbf ### 65 34 19.1 0 1.0 40 8 50.0 0 1.0 0 0.0 42 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 167 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_11.dbf ### 49 31 15.8 0 1.0 30 8 37.5 0 1.0 0 0.0 39 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 172 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_16.dbf ### 51 31 16.5 0 1.0 28 8 35.0 0 1.0 0 0.0 39 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 168 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_12.dbf ### 49 31 15.8 0 1.0 53 8 66.3 0 1.0 0 0.0 39 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 169 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_13.dbf ### 45 31 14.5 0 1.0 42 8 52.5 0 1.0 0 0.0 39 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 171 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_15.dbf ### 45 31 14.5 0 1.0 50 8 62.5 0 1.0 0 0.0 39 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 175 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_19.dbf ### 49 31 15.8 0 1.0 48 8 60.0 0 1.0 0 0.0 39 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 176 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_20.dbf ### 47 31 15.2 0 1.0 51 8 63.8 0 1.0 0 0.0 39 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 173 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_17.dbf ### 49 31 15.8 0 1.0 29 8 36.3 0 1.0 0 0.0 39 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 174 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_18.dbf ### 49 31 15.8 0 1.0 47 8 58.8 0 1.0 0 0.0 39 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 177 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_21.dbf ### 45 31 14.5 0 1.0 42 8 52.5 0 1.0 0 0.0 39 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 166 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_10.dbf ### 47 31 15.2 0 1.0 29 8 36.3 0 1.0 0 0.0 39 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 170 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_14.dbf ### 47 31 15.2 0 1.0 41 8 51.3 0 1.0 0 0.0 39 0
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS_TESTI 80 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_09.d ### 69 29 23.8 0 1.0 12 8 15.0 0 1.0 0 0.0 37 0
NG
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS_TESTI 74 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_03.d ### 77 29 26.6 0 1.0 57 8 71.3 0 1.0 0 0.0 37 0
NG
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS_TESTI 78 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_07.d ### 75 29 25.9 0 1.0 45 8 56.3 0 1.0 0 0.0 37 0
NG
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS_TES 69 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_08 ### 72 29 24.8 0 1.0 57 8 71.3 0 1.0 0 0.0 37 0
TING
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS_TES 65 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_04 ### 71 29 24.5 0 1.0 52 8 65.0 0 1.0 0 0.0 37 0
TING
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS_TES 63 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_02 ### 78 29 26.9 0 1.0 52 8 65.0 0 1.0 0 0.0 37 0
TING
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS_TESTI 72 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_01.d ### 78 29 26.9 0 1.0 55 8 68.8 0 1.0 0 0.0 37 0
NG
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 143 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_88.dbf ### 59 29 20.3 0 1.0 28 8 35.0 0 1.0 0 0.0 37 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS_TES 71 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_10 ### 79 29 27.2 0 1.0 53 8 66.3 0 1.0 0 0.0 37 0
TING
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS_TES 67 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_06 ### 79 29 27.2 0 1.0 18 8 22.5 0 1.0 0 0.0 37 0
TING
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS_TES 68 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_07 ### 71 29 24.5 0 1.0 55 8 68.8 0 1.0 0 0.0 37 0
TING
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS_TES 66 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_05 ### 79 29 27.2 0 1.0 18 8 22.5 0 1.0 0 0.0 37 0
TING
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS_TES 64 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_03 ### 78 29 26.9 0 1.0 61 8 76.3 0 1.0 0 0.0 37 0
TING
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS_TESTI 76 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_05.d ### 69 29 23.8 0 1.0 54 8 67.5 0 1.0 0 0.0 37 0
NG
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS_TESTI 75 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_04.d ### 72 29 24.8 0 1.0 49 8 61.3 0 1.0 0 0.0 37 0
NG
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS_TES 70 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_09 ### 77 29 26.6 0 1.0 50 8 62.5 0 1.0 0 0.0 37 0
TING
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS_TESTI 73 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_02.d ### 86 29 29.7 0 1.0 55 8 68.8 0 1.0 0 0.0 37 0
NG
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS_TESTI 77 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_06.d ### 84 29 29.0 0 1.0 57 8 71.3 0 1.0 0 0.0 37 0
NG
8631 10/09/03 10:00 1 59.98 xxxxx_PERSO_TS_TESTI 79 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_08.d ### 69 29 23.8 0 1.0 45 8 56.3 0 1.0 0 0.0 37 0
NG
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 147 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_92.dbf ### 55 27 20.4 0 1.0 28 8 35.0 0 1.0 0 0.0 35 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 150 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_95.dbf ### 54 26 20.8 0 1.0 28 8 35.0 0 1.0 0 0.0 34 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 148 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_93.dbf ### 54 26 20.8 0 1.0 40 8 50.0 0 1.0 0 0.0 34 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 152 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_97.dbf ### 54 26 20.8 0 1.0 28 8 35.0 0 1.0 0 0.0 34 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 154 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_99.dbf ### 54 26 20.8 0 1.0 31 8 38.8 0 1.0 0 0.0 34 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 153 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_98.dbf ### 54 26 20.8 0 1.0 30 8 37.5 0 1.0 0 0.0 34 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 155 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_100.dbf ### 60 26 23.1 0 1.0 28 8 35.0 0 1.0 0 0.0 34 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 156 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_06.dbf ### 54 26 20.8 0 1.0 28 8 35.0 0 1.0 0 0.0 34 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS 146 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_91.dbf ### 56 26 21.5 0 1.0 40 8 50.0 0 1.0 0 0.0 34 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 9 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_02.dbf ### 22 9 24.4 0 1.0 11 8 13.8 0 1.0 0 0.0 17 0
8631 10/09/03 10:00 1 59.98 UNDOTBS2 4 +DATA_1/xxxxxdb/datafile/undotbs2.265.695412479 ### 22 9 24.4 0 1.0 11 8 13.8 0 1.0 661 6.2 17 0
8631 10/09/03 10:00 1 59.98 USERS 5 +DATA_1/xxxxxdb/datafile/users.266.695412481 ### 22 8 27.5 0 1.0 9 8 11.3 0 1.0 0 0.0 16 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 10 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_03.dbf ### 22 8 27.5 0 1.0 11 8 13.8 0 1.0 0 0.0 16 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 8 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_01.dbf ### 22 8 27.5 0 1.0 11 8 13.8 0 1.0 0 0.0 16 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 11 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_04.dbf ### 22 8 27.5 0 1.0 15 8 18.8 0 1.0 0 0.0 16 0
8631 10/09/03 10:00 1 59.98 BIOHUB_BIOMETRICS 12 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_05.dbf ### 22 8 27.5 0 1.0 13 8 16.3 0 1.0 0 0.0 16 0
8631 10/09/03 10:00 1 59.98 xxxxx_CENTRAL_TS_TES 14 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_01 ### 30 8 37.5 0 1.0 17 8 21.3 0 1.0 0 0.0 16 0
TING
}}}
''4) Top Timed Events''
{{{
AWR Top Events Report
i
n
Snap s Snap A
Snap Start t Dur Event Time Avgwt DB Time A
ID Time # (m) Event Rank Waits (s) (ms) % S Wait Class
------ --------------- --- ---------- ---------------------------------------- ----- -------------- -------------- -------- ------- ------ ---------------
8631 10/09/03 10:00 1 59.98 CPU time 1 0.00 27217.50 0.00 75 7.6 CPU
8631 10/09/03 10:00 1 59.98 db file scattered read 2 558250.00 1300.52 2.33 4 0.4 User I/O
8631 10/09/03 10:00 1 59.98 gcs log flush sync 3 415152.00 1073.14 2.58 3 0.3 Other
8631 10/09/03 10:00 1 59.98 db file sequential read 4 588175.00 1028.34 1.75 3 0.3 User I/O
8631 10/09/03 10:00 1 59.98 log file parallel write 5 177144.00 1015.58 5.73 3 0.3 System I/O
}}}
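Side note: this top events report is basically the delta of the cumulative wait counters between two snaps. A minimal sketch of the idea (not the exact script that produced the output above), assuming the standard DBA_HIST_SYSTEM_EVENT view and using 8630/8631 as a hypothetical prior/target snap pair.. note the ''CPU time'' line is not a wait event, it comes from the 'DB CPU' stat in DBA_HIST_SYS_TIME_MODEL:
{{{
-- top 5 timed wait events for one snap interval, by diffing the
-- cumulative counters in DBA_HIST_SYSTEM_EVENT against the prior snap
select *
from  (select event_name, wait_class, waits, time_s,
              round(time_s * 1000 / nullif(waits, 0), 2) avgwt_ms
       from  (select e.event_name,
                     e.wait_class,
                     e.total_waits
                       - lag(e.total_waits) over
                           (partition by e.event_name order by e.snap_id) waits,
                     round((e.time_waited_micro
                       - lag(e.time_waited_micro) over
                           (partition by e.event_name order by e.snap_id)) / 1e6, 2) time_s
              from   dba_hist_system_event e
              where  e.snap_id in (8630, 8631)   -- prior and target snap (placeholders)
              and    e.instance_number = 1
              and    e.wait_class <> 'Idle')
       where  waits is not null                  -- drops the prior-snap rows
       order  by time_s desc)
where  rownum <= 5;
}}}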
''5) Top 20 SQLs''
{{{
AWR Top SQL Report
i
n Elapsed
Snap s Snap Plan Elapsed Time CPU A
Snap Start t Dur SQL Hash Time per exec Time Cluster Parse PX A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) Wait LIO PIO Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------------------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ -------- ---------- -------- ------- ---- ----------------------------------------
8655 10/09/04 10:00 1 59.97 5p6a4cpc38qg3 3067813470 24516.18 27.27 17856.45 374 0 919 3458526 899 660 0 6.81 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8662 10/09/04 17:00 1 60.18 5p6a4cpc38qg3 3067813470 23449.89 23.71 20799.27 72 672994388 105 9090770 989 320 0 6.49 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8659 10/09/04 14:00 1 60.01 5p6a4cpc38qg3 3067813470 22758.77 24.26 19637.77 72 2457316389 182 7268329 938 391 0 6.32 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8653 10/09/04 08:00 1 59.95 5p6a4cpc38qg3 3067813470 22456.59 25.72 18133.92 100 48069206 167 4148269 873 535 0 6.24 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8658 10/09/04 13:00 1 60.11 5p6a4cpc38qg3 3067813470 22161.01 23.60 20235.70 78 545821468 219 9539455 939 360 0 6.14 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8654 10/09/04 09:00 1 59.78 5p6a4cpc38qg3 3067813470 21584.52 24.72 18996.42 110 239162393 112 5644950 873 432 0 6.02 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8661 10/09/04 16:00 1 59.95 5p6a4cpc38qg3 3067813470 20806.46 22.84 19223.80 47 320679428 76 7873590 911 352 0 5.78 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8660 10/09/04 15:00 1 59.67 5p6a4cpc38qg3 3067813470 19292.37 22.07 18430.75 36 175976224 260 8377377 874 355 0 5.39 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8631 10/09/03 10:00 1 59.98 5p6a4cpc38qg3 3067813470 18805.15 27.82 17089.64 110 0 7260 2815426 676 355 0 5.23 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8663 10/09/04 18:00 1 60.58 5p6a4cpc38qg3 3067813470 18769.61 23.64 16816.86 46 882910191 172 9477951 794 290 0 5.16 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8636 10/09/03 15:00 1 59.83 5p6a4cpc38qg3 3067813470 18374.14 26.90 16926.69 67 0 1096 7126880 683 280 0 5.12 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8637 10/09/03 16:00 1 60.59 5p6a4cpc38qg3 3067813470 16895.70 25.91 16298.97 78 0 2212 8090271 652 208 0 4.65 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8657 10/09/04 12:00 1 59.98 5p6a4cpc38qg3 3067813470 16762.93 22.81 15688.13 26 0 112 6969281 735 280 0 4.66 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8635 10/09/03 14:00 1 60.01 5p6a4cpc38qg3 3067813470 15888.86 26.05 15407.11 51 3670199503 1163 7606867 610 283 0 4.41 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8557 10/08/31 08:00 1 60.14 5p6a4cpc38qg3 3385254935 15609.40 29.68 11370.47 245 2669391952 5222 88900 526 522 0 4.33 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8630 10/09/03 09:00 1 60.01 5p6a4cpc38qg3 3067813470 15297.69 26.24 14828.78 63 3561619757 1107 2177597 583 377 0 4.25 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8634 10/09/03 13:00 1 59.72 5p6a4cpc38qg3 3067813470 15061.37 26.80 14238.39 69 668291545 2720 6192614 562 299 0 4.20 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8632 10/09/03 11:00 1 59.76 5p6a4cpc38qg3 3067813470 14768.43 26.56 14121.11 57 0 2861 2444366 556 269 0 4.12 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8638 10/09/03 17:00 1 59.39 5p6a4cpc38qg3 3067813470 14719.85 25.73 14262.94 68 0 807 6900496 572 204 0 4.13 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
8609 10/09/02 12:00 1 60.09 5p6a4cpc38qg3 3067813470 14116.71 29.05 12874.50 43 0 437 1193240 486 345 0 3.92 1 SELECT last_dh.id FROM demand_history la
st_dh
INNER JOIN demand_history first_
20 rows selected.
}}}
''6) Top 5 SQLs of SNAP_ID 8631.. which, by the way, got an AAS of 10''
{{{
AWR Top SQL Report
i Ela
n Time
Snap s Snap Plan Ela per CPU IO App Ccr Cluster PX A
Snap Start t Dur SQL Hash Time exec Time Wait Wait Wait Wait Direct Parse Server A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) (s) (s) (s) LIO PIO Writes Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ -------- ---------- -------- ------- ---- ------
8631 10/09/03 10:00 1 59.98 5p6a4cpc38qg3 3067813470 18805.15 27.82 17089.64 5.64 0.00 110.80 110.30 0 7260 0 2815426 676 355 0 5.23 1 SELECT
8631 10/09/03 10:00 1 59.98 chp2zxzxsc6az 0 SQL*Plus 3603.31 2235.22 977.86 59.39 83.34 67.79 245970369 2620209 0 0 0 0 0 1.00 2 DECLAR
8631 10/09/03 10:00 1 59.98 6jbrg916bjmqc 0 DBMS_SCH 3584.81 3584.81 2169.73 965.05 0.00 4.25 230.80 99791222 3203168 3634 1 1 0 0 1.00 3 DECLAR
EDULER
8631 10/09/03 10:00 1 59.98 2k7p48zwvz9cr 3268610971 SQL*Plus 3033.08 0.42 1851.16 887.68 51.29 83.13 11.25 199474250 2603911 0 7187 7187 0 0 0.84 4 DELETE
8631 10/09/03 10:00 1 59.98 fu93bkrwbdznd 1154882994 SQL*Plus 2881.11 0.40 1844.66 879.58 0.00 0.58 10.54 199264302 2602238 0 7187 7187 0 0 0.80 5 selec
}}}
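The per-SQL numbers above (Ela Time, CPU Time, LIO, PIO, Exec, etc.) map straight onto the *_DELTA columns of DBA_HIST_SQLSTAT. A minimal sketch of pulling the top 5 by elapsed time for one snap (again just the idea, not the exact report script):
{{{
-- top 5 SQLs by elapsed time within snap 8631
select *
from  (select s.sql_id,
              s.plan_hash_value,
              round(s.elapsed_time_delta / 1e6, 2)                 ela_s,
              round(s.elapsed_time_delta / 1e6
                    / nullif(s.executions_delta, 0), 2)            ela_per_exec_s,
              round(s.cpu_time_delta / 1e6, 2)                     cpu_s,
              s.buffer_gets_delta                                  lio,
              s.disk_reads_delta                                   pio,
              s.executions_delta                                   execs,
              dbms_lob.substr(t.sql_text, 40, 1)                   sql_text
       from   dba_hist_sqlstat s,
              dba_hist_sqltext t
       where  s.sql_id  = t.sql_id      -- add a DBID join if the repo holds several DBs
       and    s.snap_id = 8631
       and    s.instance_number = 1
       order  by s.elapsed_time_delta desc)
where  rownum <= 5;
}}}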
! Now on to the low workload period... with an AAS of 2.2
''1) General workload report''
{{{
AWR CPU and IO Workload Report
i *** *** ***
n Total Total Total U S
Snap s Snap C CPU A Oracle OS Physical Oracle RMAN OS S Y
Snap Start t Dur P Time DB DB Bg RMAN A CPU CPU Memory IOPs IOPs IOPs IO r IO w Redo Exec CPU CPU CPU R S
ID Time # (m) U (s) Time CPU CPU CPU S (s) (s) (mb) r w redo (mb)/s (mb)/s (mb)/s Sess /s % % % % %
------ --------------- --- ---------- --- ----------- ---------- --------- --------- -------- ----- ----------- ----------- ---------- --------- --------- --------- --------- --------- --------- ---- --------- ------ ---- ---- ---- ----
8636 10/09/03 15:00 1 59.83 8 28718.40 28565.78 24080.20 228.03 0.00 8.0 24308.23 25261.29 12287.06 124.386 34.736 49.384 70.647 0.885 0.087 104 1991.365 85 0 88 82 6
8582 10/09/01 09:00 1 60.35 8 28968.00 8290.55 6498.64 136.95 0.00 2.3 6635.59 7707.44 12287.06 265.980 8.644 32.428 211.100 0.304 0.073 98 2020.585 23 0 27 20 7
}}}
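The AAS column in this workload report is just DB time over wall-clock time for the interval. A minimal sketch of that computation, assuming the standard DBA_HIST_SYS_TIME_MODEL and DBA_HIST_SNAPSHOT views:
{{{
-- AAS per snap = delta of 'DB time' (seconds) / elapsed seconds of the interval
select t.snap_id,
       sn.end_interval_time,
       round( (t.value - lag(t.value) over (order by t.snap_id)) / 1e6
            / (extract(day    from sn.end_interval_time - sn.begin_interval_time) * 86400
             + extract(hour   from sn.end_interval_time - sn.begin_interval_time) * 3600
             + extract(minute from sn.end_interval_time - sn.begin_interval_time) * 60
             + extract(second from sn.end_interval_time - sn.begin_interval_time)), 1) aas
from   dba_hist_sys_time_model t,
       dba_hist_snapshot       sn
where  t.snap_id         = sn.snap_id
and    t.dbid            = sn.dbid
and    t.instance_number = sn.instance_number
and    t.instance_number = 1
and    t.stat_name       = 'DB time'
order  by t.snap_id;
}}}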
''2) Tablespace IO''
{{{
AWR Tablespace IO Report
i
n
Snap s Snap IOPS IOPS IOPS
Snap Start t Dur IO Read Av Av Av Write Av Av Av Buffer Av Buf Total Total
ID Time # (m) TS Rank Time Reads Rd(ms) Reads/s Blks/Rd Time Writes Wt(ms) Writes/s Blks/Wrt Waits Wt(ms) IO R+W R+W
------ --------------- --- ---------- -------------------- ---- -------- -------- ------ -------- ------- -------- -------- ------ -------- -------- -------- ------ -------- --------
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 1 169346 875793 1.9 242 109.9 24479 19626 12.5 5 1.0 10 0.0 895419 247
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 2 14020 71778 2.0 20 20.8 11806 5988 19.7 2 1.1 286 2.1 77766 21
8582 10/09/01 09:00 1 60.35 SYSAUX 3 2178 8754 2.5 2 1.8 3689 1153 32.0 0 1.1 0 0.0 9907 3
8582 10/09/01 09:00 1 60.35 TEMP 4 410 1697 2.4 0 31.2 30332 3335 91.0 1 30.9 0 0.0 5032 1
8582 10/09/01 09:00 1 60.35 SYSTEM 5 502 2055 2.4 1 2.4 161 77 20.9 0 1.2 7 11.4 2132 1
8582 10/09/01 09:00 1 60.35 BIOHUB_MAIN 6 427 1288 3.3 0 1.0 30 45 6.7 0 1.0 0 0.0 1333 0
8582 10/09/01 09:00 1 60.35 UNDOTBS1 7 6 7 8.6 0 1.0 1378 778 17.7 0 11.8 14 0.0 785 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 8 37 310 1.2 0 1.0 121 146 8.3 0 1.0 0 0.0 456 0
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS_TESTI 9 41 287 1.4 0 1.0 51 70 7.3 0 1.0 0 0.0 357 0
NG
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS_TES 10 33 259 1.3 0 1.0 54 70 7.7 0 1.0 0 0.0 329 0
TING
8582 10/09/01 09:00 1 60.35 UNDOTBS2 11 1 7 1.4 0 1.0 5 7 7.1 0 1.0 0 0.0 14 0
8582 10/09/01 09:00 1 60.35 USERS 11 2 7 2.9 0 1.0 4 7 5.7 0 1.0 0 0.0 14 0
}}}
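The tablespace rollup above is just the per-datafile counters summed by TSNAME. A minimal sketch, assuming the standard DBA_HIST_FILESTATXS view (READTIM/WRITETIM are cumulative centiseconds, hence the *10 to get ms):
{{{
-- per-tablespace read/write deltas for one snap interval
select tsname,
       sum(rds)                                          reads,
       round(10 * sum(rdtim) / nullif(sum(rds), 0), 1)   av_rd_ms,
       sum(wrts)                                         writes,
       round(10 * sum(wrtim) / nullif(sum(wrts), 0), 1)  av_wt_ms
from  (select f.tsname,
              f.phyrds   - lag(f.phyrds)   over (partition by f.filename order by f.snap_id) rds,
              f.readtim  - lag(f.readtim)  over (partition by f.filename order by f.snap_id) rdtim,
              f.phywrts  - lag(f.phywrts)  over (partition by f.filename order by f.snap_id) wrts,
              f.writetim - lag(f.writetim) over (partition by f.filename order by f.snap_id) wrtim
       from   dba_hist_filestatxs f
       where  f.snap_id in (8581, 8582)   -- prior and target snap (placeholders)
       and    f.instance_number = 1)
where  rds is not null
group  by tsname
order  by reads desc;
}}}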
''3) File IO''
{{{
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 35 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_22.dbf 1 6390 49445 1.3 14 74.8 90 67 13.4 0 1.0 0 0.0 49512 14
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 36 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_23.dbf 2 5860 32049 1.8 9 111.5 117 83 14.1 0 1.0 0 0.0 32132 9
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 31 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_18.dbf 3 5783 29395 2.0 8 120.6 100 80 12.5 0 1.0 1 0.0 29475 8
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 84 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_28.dbf 4 6057 29075 2.1 8 120.7 437 190 23.0 0 1.1 0 0.0 29265 8
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 110 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_55.dbf 5 5079 27176 1.9 8 115.0 170 93 18.3 0 1.0 0 0.0 27269 8
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 111 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_56.dbf 6 5557 26806 2.1 7 116.6 245 141 17.4 0 1.0 0 0.0 26947 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 116 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_61.dbf 7 5454 25782 2.1 7 121.2 1493 1138 13.1 0 1.0 0 0.0 26920 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 109 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_54.dbf 8 5310 25898 2.1 7 120.6 109 76 14.3 0 1.0 0 0.0 25974 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 115 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_60.dbf 9 5173 25470 2.0 7 122.6 1087 470 23.1 0 1.0 0 0.0 25940 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 117 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_62.dbf 10 5059 25273 2.0 7 123.6 932 548 17.0 0 1.0 0 0.0 25821 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 113 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_58.dbf 11 5097 25464 2.0 7 122.7 257 142 18.1 0 1.2 0 0.0 25606 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 112 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_57.dbf 12 5298 25419 2.1 7 122.9 177 103 17.2 0 1.0 0 0.0 25522 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 114 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_59.dbf 13 5277 25428 2.1 7 122.8 197 87 22.6 0 1.0 0 0.0 25515 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 103 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_48.dbf 14 5183 25353 2.0 7 123.2 178 100 17.8 0 1.0 0 0.0 25453 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 108 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_53.dbf 15 5210 25360 2.1 7 123.2 97 62 15.6 0 1.0 0 0.0 25422 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 105 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_50.dbf 16 5081 25360 2.0 7 123.2 83 52 16.0 0 1.0 0 0.0 25412 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 106 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_51.dbf 17 5517 25342 2.2 7 123.2 88 60 14.7 0 1.0 0 0.0 25402 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 104 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_49.dbf 18 5010 25280 2.0 7 123.5 224 92 24.3 0 1.0 0 0.0 25372 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 107 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_52.dbf 19 5623 25262 2.2 7 123.6 113 58 19.5 0 1.0 0 0.0 25320 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 102 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_47.dbf 20 5082 25144 2.0 7 124.2 151 98 15.4 0 1.0 0 0.0 25242 7
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 7 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_01.dbf 21 2498 15462 1.6 4 68.9 3471 4891 7.1 1 1.0 0 0.0 20353 6
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 34 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_21.dbf 22 1087 18142 0.6 5 29.5 100 64 15.6 0 1.1 0 0.0 18206 5
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 30 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_17.dbf 23 2058 12678 1.6 4 87.8 85 60 14.2 0 1.0 0 0.0 12738 4
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 27 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_14.dbf 24 1905 10050 1.9 3 109.0 68 48 14.2 0 1.0 0 0.0 10098 3
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 15 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_02.dbf 25 1521 8595 1.8 2 90.1 961 1489 6.5 0 1.0 0 0.0 10084 3
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 20 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_07.dbf 26 1967 10005 2.0 3 109.5 44 34 12.9 0 1.1 6 0.0 10039 3
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 22 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_09.dbf 27 1901 9987 1.9 3 109.7 88 48 18.3 0 1.0 1 0.0 10035 3
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 28 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_15.dbf 28 2049 9927 2.1 3 110.3 48 42 11.4 0 1.0 0 0.0 9969 3
8582 10/09/01 09:00 1 60.35 SYSAUX 2 +DATA_1/xxxxxdb/datafile/sysaux.262.695412469 29 2178 8754 2.5 2 1.8 3689 1153 32.0 0 1.1 0 0.0 9907 3
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 21 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_08.dbf 30 1894 9843 1.9 3 111.2 33 27 12.2 0 1.0 0 0.0 9870 3
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 24 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_11.dbf 31 1956 9737 2.0 3 112.5 75 54 13.9 0 1.0 0 0.0 9791 3
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 23 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_10.dbf 32 1934 9679 2.0 3 113.1 77 51 15.1 0 1.0 0 0.0 9730 3
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 26 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_13.dbf 33 1979 9420 2.1 3 116.2 82 57 14.4 0 1.0 2 0.0 9477 3
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 16 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_03.dbf 34 2038 9435 2.2 3 116.0 56 25 22.4 0 1.0 0 0.0 9460 3
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 154 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_99.dbf 35 625 4251 1.5 1 1.0 2665 3210 8.3 1 1.0 0 0.0 7461 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 37 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_24.dbf 36 1450 7096 2.0 2 109.5 42 33 12.7 0 1.0 0 0.0 7129 2
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 41 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_12.dbf 37 984 6873 1.4 2 11.3 240 141 17.0 0 1.0 7 0.0 7014 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 33 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_20.dbf 38 936 6782 1.4 2 66.1 85 63 13.5 0 1.1 0 0.0 6845 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 38 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_25.dbf 39 1593 6793 2.3 2 114.0 51 42 12.1 0 1.0 0 0.0 6835 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 18 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_05.dbf 40 1508 6586 2.3 2 117.2 50 30 16.7 0 1.0 0 0.0 6616 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 29 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_16.dbf 41 1327 6564 2.0 2 117.5 65 42 15.5 0 1.1 0 0.0 6606 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 32 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_19.dbf 42 988 6526 1.5 2 68.7 110 64 17.2 0 1.0 0 0.0 6590 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 19 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_06.dbf 43 1400 6521 2.1 2 118.4 22 16 13.8 0 1.0 0 0.0 6537 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 17 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_04.dbf 44 1397 6438 2.2 2 119.9 22 16 13.8 0 1.0 0 0.0 6454 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 98 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_43.dbf 45 1171 5914 2.0 2 113.1 26 18 14.4 0 1.0 0 0.0 5932 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 99 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_44.dbf 46 1092 5869 1.9 2 114.5 16 12 13.3 0 1.0 0 0.0 5881 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 96 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_41.dbf 47 1197 5770 2.1 2 116.4 15 13 11.5 0 1.0 0 0.0 5783 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 101 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_46.dbf 48 1193 5766 2.1 2 116.0 12 11 10.9 0 1.0 0 0.0 5777 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 100 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_45.dbf 49 1285 5662 2.3 2 118.1 31 24 12.9 0 1.0 0 0.0 5686 2
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 97 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_42.dbf 50 1134 5462 2.1 2 122.4 5 8 6.3 0 1.0 0 0.0 5470 2
8582 10/09/01 09:00 1 60.35 TEMP 1 +DATA_1/xxxxxdb/tempfile/temp.264.695412473 51 410 1697 2.4 0 31.2 30332 3335 91.0 1 30.9 0 5032 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 128 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_73.dbf 52 588 3694 1.6 1 87.6 946 1309 7.2 0 1.1 0 0.0 5003 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 51 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_22.dbf 53 766 4563 1.7 1 15.2 802 382 21.0 0 1.1 0 0.0 4945 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 52 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_23.dbf 54 762 4287 1.8 1 16.1 370 197 18.8 0 1.1 4 0.0 4484 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 82 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_26.dbf 55 1168 4418 2.6 1 87.6 7 9 7.8 0 1.0 0 0.0 4427 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 44 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_15.dbf 56 712 4080 1.7 1 18.6 258 274 9.4 0 1.0 8 1.3 4354 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 83 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_27.dbf 57 1024 4073 2.5 1 95.0 38 28 13.6 0 1.0 0 0.0 4101 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 42 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_13.dbf 58 617 3813 1.6 1 19.6 205 142 14.4 0 1.3 7 0.0 3955 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 39 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_10.dbf 59 659 2998 2.2 1 24.6 2509 794 31.6 0 1.1 20 1.0 3792 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 127 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_72.dbf 60 619 3768 1.6 1 85.9 18 15 12.0 0 1.0 0 0.0 3783 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 123 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_68.dbf 61 595 3725 1.6 1 86.9 68 30 22.7 0 1.6 0 0.0 3755 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 25 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_12.dbf 62 830 3673 2.3 1 116.0 69 45 15.3 0 1.0 0 0.0 3718 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 121 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_66.dbf 63 679 3434 2.0 1 95.0 502 252 19.9 0 1.1 0 0.0 3686 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 53 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_24.dbf 64 667 3426 1.9 1 19.9 223 173 12.9 0 1.1 13 0.8 3599 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 120 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_65.dbf 65 634 3331 1.9 1 97.9 415 228 18.2 0 1.1 0 0.0 3559 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 119 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_64.dbf 66 644 3452 1.9 1 94.5 183 103 17.8 0 1.0 0 0.0 3555 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 122 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_67.dbf 67 647 3371 1.9 1 96.8 311 151 20.6 0 1.1 0 0.0 3522 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 58 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_05.dbf 68 527 3267 1.6 1 21.1 198 157 12.6 0 1.0 17 3.5 3424 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 40 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_11.dbf 69 719 2822 2.5 1 26.7 641 587 10.9 0 1.0 0 0.0 3409 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 126 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_71.dbf 70 624 3328 1.9 1 97.1 26 16 16.3 0 1.0 0 0.0 3344 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 125 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_70.dbf 71 580 3318 1.7 1 97.4 19 15 12.7 0 1.0 0 0.0 3333 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 118 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_63.dbf 72 638 3224 2.0 1 101.1 286 106 27.0 0 1.0 0 0.0 3330 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 90 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_34.dbf 73 547 3294 1.7 1 118.1 27 19 14.2 0 1.0 0 0.0 3313 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 86 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_30.dbf 74 753 3294 2.3 1 118.1 18 16 11.3 0 1.1 0 0.0 3310 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 43 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_14.dbf 75 568 3097 1.8 1 24.0 343 211 16.3 0 1.1 2 0.0 3308 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 89 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_33.dbf 76 676 3290 2.1 1 118.2 20 13 15.4 0 1.0 0 0.0 3303 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 88 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_32.dbf 77 787 3257 2.4 1 118.5 22 21 10.5 0 1.0 0 0.0 3278 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 124 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_69.dbf 78 610 3226 1.9 1 100.1 75 21 35.7 0 2.9 0 0.0 3247 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 91 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_35.dbf 79 710 3228 2.2 1 119.5 10 10 10.0 0 1.0 0 0.0 3238 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 92 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_37.dbf 80 646 3224 2.0 1 119.7 20 12 16.7 0 1.0 0 0.0 3236 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 85 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_29.dbf 81 560 3210 1.7 1 120.2 22 19 11.6 0 1.0 0 0.0 3229 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 56 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_03.dbf 82 649 3083 2.1 1 22.0 245 137 17.9 0 1.4 0 0.0 3220 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 87 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_31.dbf 83 626 3203 2.0 1 120.5 23 14 16.4 0 1.0 0 0.0 3217 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 95 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_40.dbf 84 611 3149 1.9 1 122.6 31 24 12.9 0 1.0 0 0.0 3173 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 93 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_38.dbf 85 638 3153 2.0 1 122.4 25 15 16.7 0 1.0 0 0.0 3168 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 94 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_39.dbf 86 666 3142 2.1 1 122.8 15 15 10.0 0 1.0 0 0.0 3157 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 55 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_02.dbf 87 505 2845 1.8 1 23.8 43 38 11.3 0 1.0 1 0.0 2883 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 50 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_21.dbf 88 452 2358 1.9 1 9.7 839 415 20.2 0 1.1 45 2.2 2773 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 46 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_17.dbf 89 392 2576 1.5 1 8.9 289 165 17.5 0 1.2 0 0.0 2741 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 59 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_06.dbf 90 525 2554 2.1 1 26.7 144 63 22.9 0 1.0 132 2.7 2617 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 62 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_09.dbf 91 598 2320 2.6 1 29.9 565 127 44.5 0 1.0 0 0.0 2447 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 57 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_04.dbf 92 507 2299 2.2 1 29.2 243 136 17.9 0 1.0 0 0.0 2435 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 61 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_08.dbf 93 433 2267 1.9 1 30.7 53 47 11.3 0 1.0 0 0.0 2314 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 6 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_01.dbf 94 547 1922 2.8 1 38.5 756 369 20.5 0 1.0 0 0.0 2291 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 54 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_25.dbf 95 463 2101 2.2 1 31.8 80 61 13.1 0 1.0 1 10.0 2162 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 49 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_20.dbf 96 357 1789 2.0 0 12.4 788 369 21.4 0 1.1 16 2.5 2158 1
8582 10/09/01 09:00 1 60.35 SYSTEM 1 +DATA_1/xxxxxdb/datafile/system.261.695412463 97 502 2055 2.4 1 2.4 161 77 20.9 0 1.2 7 11.4 2132 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 48 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_19.dbf 98 345 1491 2.3 0 14.6 818 396 20.7 0 1.1 0 0.0 1887 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 60 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_07.dbf 99 435 1756 2.5 0 38.4 153 100 15.3 0 1.0 4 0.0 1856 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 45 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_16.dbf ### 508 1712 3.0 0 20.8 230 141 16.3 0 1.1 8 0.0 1853 1
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS 47 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_18.dbf ### 323 1479 2.2 0 14.8 771 366 21.1 0 1.1 1 0.0 1845 1
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 153 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_98.dbf ### 187 1091 1.7 0 1.0 414 440 9.4 0 1.1 0 0.0 1531 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 133 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_78.dbf ### 170 742 2.3 0 1.1 1554 509 30.5 0 1.1 0 0.0 1251 0
8582 10/09/01 09:00 1 60.35 UNDOTBS1 3 +DATA_1/xxxxxdb/datafile/undotbs1.263.695412471 ### 6 7 8.6 0 1.0 1378 778 17.7 0 11.8 14 0.0 785 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 152 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_97.dbf ### 115 642 1.8 0 1.0 81 78 10.4 0 1.0 0 0.0 720 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 132 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_77.dbf ### 61 390 1.6 0 1.1 851 308 27.6 0 1.1 0 0.0 698 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 137 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_82.dbf ### 90 254 3.5 0 1.0 1180 374 31.6 0 1.1 0 0.0 628 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 131 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_76.dbf ### 55 414 1.3 0 1.1 185 91 20.3 0 1.0 0 0.0 505 0
8582 10/09/01 09:00 1 60.35 BIOHUB_MAIN 13 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_01.dbf ### 160 456 3.5 0 1.0 5 7 7.1 0 1.0 0 0.0 463 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 138 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_83.dbf ### 30 272 1.1 0 1.0 408 146 27.9 0 1.5 0 0.0 418 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 136 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_81.dbf ### 72 208 3.5 0 1.0 649 205 31.7 0 1.0 0 0.0 413 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 129 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_74.dbf ### 40 266 1.5 0 1.1 177 113 15.7 0 1.0 0 0.0 379 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 130 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_75.dbf ### 47 280 1.7 0 1.1 158 93 17.0 0 1.0 0 0.0 373 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 135 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_80.dbf ### 51 141 3.6 0 1.0 291 162 18.0 0 1.0 0 0.0 303 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 134 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_79.dbf ### 44 115 3.8 0 1.0 206 140 14.7 0 1.0 0 0.0 255 0
8582 10/09/01 09:00 1 60.35 BIOHUB_MAIN 157 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_02.dbf ### 84 239 3.5 0 1.0 5 6 8.3 0 1.0 0 0.0 245 0
8582 10/09/01 09:00 1 60.35 BIOHUB_MAIN 159 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_04.dbf ### 60 195 3.1 0 1.0 4 6 6.7 0 1.0 0 0.0 201 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 143 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_88.dbf ### 16 171 0.9 0 1.0 4 6 6.7 0 1.0 0 0.0 177 0
8582 10/09/01 09:00 1 60.35 BIOHUB_MAIN 161 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_06.dbf ### 54 150 3.6 0 1.0 4 7 5.7 0 1.0 0 0.0 157 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 151 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_96.dbf ### 30 115 2.6 0 1.0 39 36 10.8 0 1.1 0 0.0 151 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 150 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_95.dbf ### 19 78 2.4 0 1.0 70 46 15.2 0 1.0 0 0.0 124 0
8582 10/09/01 09:00 1 60.35 BIOHUB_MAIN 158 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_03.dbf ### 37 116 3.2 0 1.0 4 6 6.7 0 1.0 0 0.0 122 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 139 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_84.dbf ### 7 41 1.7 0 1.0 182 62 29.4 0 1.2 0 0.0 103 0
8582 10/09/01 09:00 1 60.35 BIOHUB_MAIN 160 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_05.dbf ### 18 83 2.2 0 1.0 6 6 10.0 0 1.0 0 0.0 89 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 148 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_93.dbf ### 6 42 1.4 0 1.0 17 17 10.0 0 1.1 0 0.0 59 0
8582 10/09/01 09:00 1 60.35 BIOHUB_MAIN 162 +DATA_1/xxxxxdb/datafile/xxxxmain_ts_07.dbf ### 14 49 2.9 0 1.0 2 7 2.9 0 1.0 0 0.0 56 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 149 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_94.dbf ### 2 32 0.6 0 1.0 21 14 15.0 0 1.0 0 0.0 46 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 142 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_87.dbf ### 3 38 0.8 0 1.0 4 6 6.7 0 1.0 0 0.0 44 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 145 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_90.dbf ### 3 37 0.8 0 1.0 6 6 10.0 0 1.0 0 0.0 43 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 141 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_86.dbf ### 2 37 0.5 0 1.0 8 6 13.3 0 1.0 0 0.0 43 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 144 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_89.dbf ### 3 37 0.8 0 1.0 6 6 10.0 0 1.0 0 0.0 43 0
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS_TESTI 81 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_10.d ### 5 35 1.4 0 1.0 3 7 4.3 0 1.0 0 0.0 42 0
NG
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 147 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_92.dbf ### 2 35 0.6 0 1.0 3 6 5.0 0 1.0 0 0.0 41 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 156 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_06.dbf ### 2 32 0.6 0 1.0 8 6 13.3 0 1.0 0 0.0 38 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 146 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_91.dbf ### 3 32 0.9 0 1.0 6 6 10.0 0 1.0 0 0.0 38 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 155 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_100.dbf ### 3 32 0.9 0 1.0 7 6 11.7 0 1.0 0 0.0 38 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS 140 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_85.dbf ### 2 31 0.6 0 1.0 6 6 10.0 0 1.0 0 0.0 37 0
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS_TESTI 78 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_07.d ### 3 28 1.1 0 1.0 5 7 7.1 0 1.0 0 0.0 35 0
NG
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS_TES 69 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_08 ### 3 28 1.1 0 1.0 13 7 18.6 0 1.0 0 0.0 35 0
TING
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS_TES 65 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_04 ### 1 28 0.4 0 1.0 4 7 5.7 0 1.0 0 0.0 35 0
TING
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS_TES 63 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_02 ### 4 28 1.4 0 1.0 4 7 5.7 0 1.0 0 0.0 35 0
TING
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS_TESTI 77 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_06.d ### 6 28 2.1 0 1.0 4 7 5.7 0 1.0 0 0.0 35 0
NG
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS_TESTI 79 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_08.d ### 3 28 1.1 0 1.0 3 7 4.3 0 1.0 0 0.0 35 0
NG
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS_TESTI 80 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_09.d ### 2 28 0.7 0 1.0 3 7 4.3 0 1.0 0 0.0 35 0
NG
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS_TESTI 74 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_03.d ### 3 28 1.1 0 1.0 8 7 11.4 0 1.0 0 0.0 35 0
NG
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS_TES 68 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_07 ### 1 28 0.4 0 1.0 4 7 5.7 0 1.0 0 0.0 35 0
TING
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS_TES 64 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_03 ### 6 28 2.1 0 1.0 6 7 8.6 0 1.0 0 0.0 35 0
TING
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS_TES 66 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_05 ### 3 28 1.1 0 1.0 4 7 5.7 0 1.0 0 0.0 35 0
TING
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS_TESTI 72 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_01.d ### 4 28 1.4 0 1.0 7 7 10.0 0 1.0 0 0.0 35 0
NG
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS_TES 71 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_10 ### 5 28 1.8 0 1.0 5 7 7.1 0 1.0 0 0.0 35 0
TING
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS_TES 67 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_06 ### 1 28 0.4 0 1.0 3 7 4.3 0 1.0 0 0.0 35 0
TING
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS_TESTI 76 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_05.d ### 2 28 0.7 0 1.0 6 7 8.6 0 1.0 0 0.0 35 0
NG
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS_TESTI 75 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_04.d ### 3 28 1.1 0 1.0 3 7 4.3 0 1.0 0 0.0 35 0
NG
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS_TES 70 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_09 ### 2 28 0.7 0 1.0 6 7 8.6 0 1.0 0 0.0 35 0
TING
8582 10/09/01 09:00 1 60.35 xxxxx_PERSO_TS_TESTI 73 +DATA_1/xxxxxdb/datafile/xxxxx_perso_ts_testing_02.d ### 10 28 3.6 0 1.0 9 7 12.9 0 1.0 0 0.0 35 0
NG
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 165 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_09.dbf ### 1 17 0.6 0 1.0 4 7 5.7 0 1.0 0 0.0 24 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 164 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_08.dbf ### 1 17 0.6 0 1.0 4 7 5.7 0 1.0 0 0.0 24 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 163 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_07.dbf ### 1 17 0.6 0 1.0 3 7 4.3 0 1.0 0 0.0 24 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 168 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_12.dbf ### 2 16 1.3 0 1.0 13 7 18.6 0 1.0 0 0.0 23 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 169 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_13.dbf ### 2 16 1.3 0 1.0 5 7 7.1 0 1.0 0 0.0 23 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 175 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_19.dbf ### 2 16 1.3 0 1.0 9 7 12.9 0 1.0 0 0.0 23 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 172 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_16.dbf ### 2 16 1.3 0 1.0 4 7 5.7 0 1.0 0 0.0 23 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 176 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_20.dbf ### 2 16 1.3 0 1.0 7 7 10.0 0 1.0 0 0.0 23 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 171 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_15.dbf ### 2 16 1.3 0 1.0 11 7 15.7 0 1.0 0 0.0 23 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 173 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_17.dbf ### 2 16 1.3 0 1.0 5 7 7.1 0 1.0 0 0.0 23 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 174 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_18.dbf ### 2 16 1.3 0 1.0 5 7 7.1 0 1.0 0 0.0 23 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 177 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_21.dbf ### 2 16 1.3 0 1.0 4 7 5.7 0 1.0 0 0.0 23 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 170 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_14.dbf ### 2 16 1.3 0 1.0 6 7 8.6 0 1.0 0 0.0 23 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 166 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_10.dbf ### 2 16 1.3 0 1.0 4 7 5.7 0 1.0 0 0.0 23 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 167 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_11.dbf ### 2 16 1.3 0 1.0 5 7 7.1 0 1.0 0 0.0 23 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 9 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_02.dbf ### 4 7 5.7 0 1.0 5 7 7.1 0 1.0 0 0.0 14 0
8582 10/09/01 09:00 1 60.35 USERS 5 +DATA_1/xxxxxdb/datafile/users.266.695412481 ### 2 7 2.9 0 1.0 4 7 5.7 0 1.0 0 0.0 14 0
8582 10/09/01 09:00 1 60.35 UNDOTBS2 4 +DATA_1/xxxxxdb/datafile/undotbs2.265.695412479 ### 1 7 1.4 0 1.0 5 7 7.1 0 1.0 0 0.0 14 0
8582 10/09/01 09:00 1 60.35 xxxxx_CENTRAL_TS_TES 14 +DATA_1/xxxxxdb/datafile/xxxxx_central_ts_testing_01 ### 7 7 10.0 0 1.0 5 7 7.1 0 1.0 0 0.0 14 0
TING
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 12 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_05.dbf ### 1 7 1.4 0 1.0 5 7 7.1 0 1.0 0 0.0 14 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 11 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_04.dbf ### 1 7 1.4 0 1.0 7 7 10.0 0 1.0 0 0.0 14 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 10 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_03.dbf ### 1 7 1.4 0 1.0 3 7 4.3 0 1.0 0 0.0 14 0
8582 10/09/01 09:00 1 60.35 BIOHUB_BIOMETRICS 8 +DATA_1/xxxxxdb/datafile/xxxxblob_ts_01.dbf ### 1 7 1.4 0 1.0 4 7 5.7 0 1.0 0 0.0 14 0
}}}
''4) Top timed events''
{{{
AWR Top Events Report
i
n
Snap s Snap A
Snap Start t Dur Event Time Avgwt DB Time A
ID Time # (m) Event Rank Waits (s) (ms) % S Wait Class
------ --------------- --- ---------- ---------------------------------------- ----- -------------- -------------- -------- ------- ------ ---------------
8582 10/09/01 09:00 1 60.35 CPU time 1 0.00 6498.64 0.00 78 1.8 CPU
8582 10/09/01 09:00 1 60.35 log file sync 2 116223.00 702.01 6.04 8 0.2 Commit
8582 10/09/01 09:00 1 60.35 log file parallel write 3 117419.00 653.78 5.57 8 0.2 System I/O
8582 10/09/01 09:00 1 60.35 direct path read 4 872513.00 498.93 0.57 6 0.1 User I/O
8582 10/09/01 09:00 1 60.35 db file sequential read 5 82654.00 138.04 1.67 2 0.0 User I/O
}}}
''5) Top 20 SQLs''
{{{
no entry in the top 20
}}}
''6) Top 5 SQLs of SNAP_ID 8582''
{{{
AWR Top SQL Report
i Ela
n Time
Snap s Snap Plan Ela per CPU IO App Ccr Cluster PX A
Snap Start t Dur SQL Hash Time exec Time Wait Wait Wait Wait Direct Parse Server A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) (s) (s) (s) LIO PIO Writes Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ -------- ---------- -------- ------- ---- ------
8582 10/09/01 09:00 1 60.35 g45wmw7d3g35a 4178773515 1676.11 6.28 1556.56 67.85 5.02 0.00 4.75 25626096 15280562 0 262 267 263 0 0.46 1 SELECT
8582 10/09/01 09:00 1 60.35 g90bc735d6g63 1552981796 1632.17 0.60 1237.47 364.83 19.26 0.00 1.03 77371697 77346130 0 2718 2719 518 0 0.45 2 SELECT
8582 10/09/01 09:00 1 60.35 5p6a4cpc38qg3 3989064390 1028.13 1.44 943.56 0.28 0.00 0.00 53.67 184689064 174 0 2746253 714 445 0 0.28 3 SELECT
8582 10/09/01 09:00 1 60.35 cxkt05mq1t28g 4045880545 w3wp.exe 410.74 0.00 377.25 0.00 0.00 2.15 0.00 44329789 0 0 18999510 6333223 2697 0 0.11 4 SELECT
8582 10/09/01 09:00 1 60.35 3an2crqdjvhtk 287589933 291.90 6.08 271.16 12.77 0.78 0.00 1.38 4589509 2736817 0 2520 48 48 0 0.08 5 SELECT
}}}
''sample viz DEMO here'' https://www.evernote.com/shard/s48/sh/aecbb4bf-e186-4f98-87f3-2491e8516afb/be8200d83a83966b8993c014e10c936f
found it at http://gigaom.com/2013/10/08/new-tool-lets-you-visualize-just-about-anything-in-5-minutes-maybe-less/
but it would be cool if they could just share the viz via a URL ;)
http://www.confio.com/English/Tips/Read_By_Other_Session.php
http://sites.google.com/site/embtdbo/wait-event-documentation/oracle-buffer-busy-wait
http://forums.oracle.com/forums/thread.jspa?threadID=570920&start=0&tstart=0
http://guyharrison.squarespace.com/blog/2009/11/30/more-on-the-database-flash-cache.html
oracle-l
http://www.freelists.org/post/oracle-l/read-by-other-session-scenario
http://www.freelists.org/post/oracle-l/Understanding-AWR-buffer-waits,1
http://www.freelists.org/post/oracle-l/read-by-other-session
http://www.freelists.org/post/oracle-l/Read-by-other-session-and-db-sequential-read-waits
Also check on the book Oracle Performance Firefighting
''buffer busy waits''
http://docwiki.embarcadero.com/DBOptimizer/en/Buffer_busy_wait
http://orainternals.wordpress.com/2010/09/27/gc-buffer-busy-waits/
http://hoopercharles.wordpress.com/2010/04/16/buffer-busy-waits-reason-codes-and-block-classes/
Client would like to know the cause of intermittent failures on their transactions.. at 12 noon they experience a significant increase in database load.. see ''AAS reached 8.8'', and the transaction failures also manifested during this period
{{{
AWR CPU and IO Workload Report
i *** *** ***
n Total Total Total U S
Snap s Snap C CPU A Oracle OS Physical Oracle RMAN OS S Y I
Snap Start t Dur P Time DB DB Bg RMAN A CPU OS CPU Memory IOPs IOPs IOPs IO r IO w Redo Exec CPU CPU CPU R S O
ID Time # (m) U (s) Time CPU CPU CPU S (s) Load (s) (mb) r w redo (mb)/s (mb)/s (mb)/s Sess /s % % % % % %
------ --------------- --- ---------- --- ----------- ---------- --------- --------- -------- ----- ----------- ------- ----------- ---------- --------- --------- --------- --------- --------- --------- ---- --------- ------ ---- ---- ---- ---- ----
5372 10/07/16 08:00 1 59.76 4 14342.40 2964.15 2178.58 12.43 0.00 0.8 2191.01 0.00 3009.09 16384.00 162.770 1.938 7.745 11.320 0.028 0.013 68 109.695 15 0 21 18 3 2
5373 10/07/16 09:00 1 59.62 4 14308.80 9473.54 4343.06 13.56 0.00 2.6 4356.62 0.01 7109.19 16384.00 599.493 4.363 12.867 41.368 0.056 0.022 74 162.833 30 0 50 45 5 4
5374 10/07/16 10:00 1 60.24 4 14457.60 14428.95 4483.80 15.16 0.00 4.0 4498.96 0.01 8658.73 16384.00 586.597 3.583 14.468 45.485 0.050 0.025 80 198.896 31 0 60 52 7 7
5375 10/07/16 11:00 1 60.37 4 14488.80 17790.76 5186.81 14.53 0.00 4.9 5201.34 0.01 9353.15 16384.00 913.161 4.174 15.377 68.776 0.062 0.027 79 182.372 36 0 65 57 7 10
5376 10/07/16 12:00 1 59.23 4 14215.20 31402.42 4765.97 15.05 0.00 8.8 4781.02 0.02 8882.52 16384.00 1148.730 4.175 14.262 90.552 0.061 0.026 71 192.156 34 0 62 53 9 18
5377 10/07/16 13:00 1 60.28 4 14467.20 13989.80 4756.07 15.38 0.00 3.9 4771.46 0.01 8275.07 16384.00 799.908 3.748 14.602 60.211 0.055 0.026 85 184.143 33 0 57 51 6 9
5378 10/07/16 14:00 1 60.24 4 14457.60 19635.25 5990.46 15.53 0.00 5.4 6005.98 0.01 11365.83 16384.00 739.742 4.544 18.628 51.218 0.066 0.034 77 222.680 42 0 79 72 7 3
5379 10/07/16 15:00 1 60.13 4 14431.20 11961.27 4991.92 15.25 0.00 3.3 5007.18 0.01 8948.50 16384.00 712.127 5.277 17.092 47.656 0.079 0.034 79 223.386 35 0 62 56 6 4
5380 10/07/16 16:00 1 20.43 4 4903.20 4370.15 1780.20 6.45 0.00 3.6 1786.64 0.02 3189.64 16384.00 927.779 3.858 16.352 53.521 0.061 0.029 91 223.967 36 0 65 58 7 5
5381 10/07/16 16:21 1 38.90 4 9336.00 8598.01 2256.73 7.88 0.00 3.7 2264.61 0.01 3942.44 16384.00 1175.971 4.045 11.443 81.309 0.060 0.020 90 189.442 24 0 42 35 7 13
5382 10/07/16 17:00 1 60.14 4 14433.60 17103.44 5287.65 15.03 0.00 4.7 5302.68 0.01 10077.91 16384.00 776.447 3.784 19.335 53.303 0.059 0.034 96 232.461 37 0 70 63 7 3
}}}
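For reference, the AAS column above is just DB time accumulated per wall-clock second. A minimal sketch to compute it straight from the AWR tables (assumes contiguous snap_ids; time model values are in microseconds):
{{{
select s.snap_id,
       round( (e.value - b.value)/1e6 /
              ((cast(s.end_interval_time as date) -
                cast(s.begin_interval_time as date))*86400), 2) aas
from dba_hist_snapshot s,
     dba_hist_sys_time_model b,
     dba_hist_sys_time_model e
where b.snap_id = s.snap_id - 1           -- assumes no gaps in snap_ids
and   e.snap_id = s.snap_id
and   b.dbid = s.dbid and e.dbid = s.dbid
and   b.instance_number = s.instance_number
and   e.instance_number = s.instance_number
and   b.stat_name = 'DB time'
and   e.stat_name = 'DB time'
order by s.snap_id;
}}}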
Drilling down on the wait events... ''read by other session'' is at the top, followed by ''db file scattered read''...
{{{
AWR Top Events Report
i
n
Snap s Snap A
Snap Start t Dur Event Time Avgwt DB Time A
ID Time # (m) Event Rank Waits (s) (ms) % S Wait Class
------ --------------- --- ---------- ---------------------------------------- ----- -------------- -------------- -------- ------- ------ ---------------
5376 10/07/16 12:00 1 59.23 read by other session 1 2846859.00 12618.61 4.43 40 3.6 User I/O
5376 10/07/16 12:00 1 59.23 db file scattered read 2 3129794.00 6946.61 2.22 22 2.0 User I/O
5376 10/07/16 12:00 1 59.23 CPU time 3 0.00 4765.97 0.00 15 1.3 CPU
5376 10/07/16 12:00 1 59.23 latch: cache buffers chains 4 66769.00 681.75 10.21 2 0.2 Concurrency
5376 10/07/16 12:00 1 59.23 db file sequential read 5 951177.00 664.07 0.70 2 0.2 User I/O
}}}
Datafile IO... notice the ''high Average Write (ms) and Buffer Waits''
{{{
AWR File IO Report
i
n
Snap s Snap IOPS IOPS IOPS
Snap Start t Dur IO Read Av Av Av Write Av Av Av Buffer Av Buf Total Total
ID Time # (m) TS File# Filename Rank Time Reads Rd(ms) Reads/s Blks/Rd Time Writes Wt(ms) Writes/s Blks/Wrt Waits Wt(ms) IO R+W R+W
------ --------------- --- ---------- -------------------- ----- ------------------------------------------------------------ ---- -------- -------- ------ -------- ------- -------- -------- ------ -------- -------- -------- ------ -------- --------
5376 10/07/16 12:00 1 59.23 TSDATA 7 /u02/oradata/xxxxx/TSData03.dbf 1 84496 463315 1.8 130 10.2 923 647 14.3 0 1.1 260791 4.0 463962 131
5376 10/07/16 12:00 1 59.23 TSDATA 6 /u02/oradata/xxxxx/TSData02.dbf 2 85314 455751 1.9 128 10.4 644 480 13.4 0 1.1 258436 4.0 456231 128
5376 10/07/16 12:00 1 59.23 TSDATA 12 /u02/oradata/xxxxx/TSData06.dbf 3 81869 438705 1.9 123 10.5 693 413 16.8 0 1.1 269205 4.1 439118 124
5376 10/07/16 12:00 1 59.23 TSDATA 5 /u02/oradata/xxxxx/TSData01.dbf 4 84829 437575 1.9 123 10.3 714 433 16.5 0 1.1 218519 4.0 438008 123
5376 10/07/16 12:00 1 59.23 TSDATA 8 /u02/oradata/xxxxx/TSData04.dbf 5 82117 430164 1.9 121 10.5 11904 1995 59.7 1 1.7 233173 4.0 432159 122
5376 10/07/16 12:00 1 59.23 TSDATA 11 /u02/oradata/xxxxx/TSData05.dbf 6 78244 417644 1.9 118 10.6 437 365 12.0 0 1.1 230429 3.9 418009 118
5376 10/07/16 12:00 1 59.23 TSDATA 19 /u02/oradata/xxxxx/TSData10.dbf 7 63057 367037 1.7 103 9.2 650 271 24.0 0 1.3 425870 4.6 367308 103
5376 10/07/16 12:00 1 59.23 TSDATA 18 /u02/oradata/xxxxx/TSData09.dbf 8 62115 344686 1.8 97 10.7 417 197 21.2 0 1.2 342756 4.9 344883 97
5376 10/07/16 12:00 1 59.23 TSDATA 15 /u02/oradata/xxxxx/TSData08.dbf 9 63161 330362 1.9 93 10.6 400 274 14.6 0 1.2 341085 5.2 330636 93
5376 10/07/16 12:00 1 59.23 TSDATA 21 /u02/oradata/xxxxx/TSData11.dbf 10 26496 143453 1.8 40 10.7 358 179 20.0 0 2.2 146818 5.6 143632 40
5376 10/07/16 12:00 1 59.23 TSDATA 13 /u02/oradata/xxxxx/TSData07.dbf 11 26873 135233 2.0 38 10.6 740 568 13.0 0 1.3 116062 4.8 135801 38
5376 10/07/16 12:00 1 59.23 SYSAUX 3 /u02/oradata/xxxxx/sysaux01.dbf 12 12730 67868 1.9 19 1.0 5780 244 236.9 0 1.9 0 0.0 68112 19
5376 10/07/16 12:00 1 59.23 SYSTEM 1 /u02/oradata/xxxxx/system01.dbf 13 8738 35132 2.5 10 1.0 86 46 18.7 0 1.3 57 0.5 35178 10
5376 10/07/16 12:00 1 59.23 TSINDEX 10 /u02/oradata/xxxxx/TSIndex02.dbf 14 2087 5479 3.8 2 1.0 2855 2058 13.9 1 1.5 2 10.0 7537 2
5376 10/07/16 12:00 1 59.23 TSINDEX 14 /u02/oradata/xxxxx/TSIndex03.dbf 15 1809 2667 6.8 1 1.0 3387 2549 13.3 1 1.7 33 0.6 5216 1
5376 10/07/16 12:00 1 59.23 TSINDEX 9 /u02/oradata/xxxxx/TSIndex01.dbf 16 1177 2100 5.6 1 1.0 2274 1669 13.6 0 1.6 23 0.0 3769 1
5376 10/07/16 12:00 1 59.23 TSINDEX 20 /u02/oradata/xxxxx/TSIndex04.dbf 17 1412 1593 8.9 0 1.0 3288 1962 16.8 1 1.8 0 0.0 3555 1
5376 10/07/16 12:00 1 59.23 MBINDEX 17 /u02/oradata/xxxxx/MBIndex01.dbf 18 859 3432 2.5 1 1.0 1 2 5.0 0 1.0 0 0.0 3434 1
5376 10/07/16 12:00 1 59.23 UNDOTBS2 22 /u02/oradata/xxxxx/undotbs2.dbf 19 11 2 55.0 0 1.0 611 448 13.6 0 12.6 5 0.0 450 0
5376 10/07/16 12:00 1 59.23 USERS 4 /u02/oradata/xxxxx/users01.dbf 20 97 212 4.6 0 1.0 2 2 10.0 0 1.0 0 0.0 214 0
5376 10/07/16 12:00 1 59.23 MBDATA 16 /u02/oradata/xxxxx/MBData01.dbf 21 27 72 3.8 0 1.0 1 2 5.0 0 1.0 0 0.0 74 0
}}}
Top SQLs... notice the ''high LIO and PIO on SQLs'' and the ''AAS of 4.14 on SQL 7kugzf4d2vbbm''
{{{
AWR Top SQL Report
i
n Elapsed
Snap s Snap Plan Elapsed Time CPU A
Snap Start t Dur SQL Hash Time per exec Time Cluster Parse PX A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) Wait LIO PIO Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------------------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ -------- ---------- -------- ------- ---- ----------------------------------------
5376 10/07/16 12:00 1 59.23 7kugzf4d2vbbm 3892247241 TrxEngine@xxxxxxxx ( 14706.23 4.07 1750.06 0 291465143 10577961 3525 3616 3636 0 4.14 1 UPDATE RCN_DL_RECN_TRXN SET STATUS_CODE
TNS V1-V3) = :B2 , AUTH_CODE = :B1 , RESPONSE_CODE
5376 10/07/16 12:00 1 59.23 8z5wnj8yn5m68 3479127514 TrxEngine@xxxxxxxx ( 2746.66 2.00 348.96 0 60145030 1823493 1367 1370 1373 0 0.77 2 update ISWITCH_TRANSACTIONS trx
TNS V1-V3) set trx.status_code =:i1 ,
5376 10/07/16 12:00 1 59.23 2p867hb71ka0r 3479127514 TrxEngine@xxxxxxxx ( 1995.18 2.34 241.81 0 42723816 1153488 850 854 854 0 0.56 3 update iswitch_transactions trx
TNS V1-V3) set trx.transaction_fee =:f4 ,
5376 10/07/16 12:00 1 59.23 c4urxrzf2pfmp 642032217 TrxEngine@xxxxxxxx ( 1306.28 3.27 200.47 0 32251677 1546070 396 400 402 0 0.37 4 update ISWITCH_TRANSACTIONS trx
TNS V1-V3) set trx.to_account_number =:s2
5376 10/07/16 12:00 1 59.23 59vbvx21dsahu 3633629268 1214.12 202.35 46.41 0 3722156 1599341 50 6 6 0 0.34 5 SELECT distinct product_type FROM ISWIT
CH_Transactions ORDER BY product_type
}}}
Another view of Top SQLs... ''includes IO, APP, Concurrency, and Cluster wait...''
{{{
AWR Top SQL Report
i Ela
n Time
Snap s Snap Plan Ela per CPU IO App Ccr Cluster PX A
Snap Start t Dur SQL Hash Time exec Time Wait Wait Wait Wait Direct Parse Server A Time SQL
ID Time # (m) ID Value Module (s) (s) (s) (s) (s) (s) (s) LIO PIO Writes Rows Exec Count Exec S Rank Text
------ --------------- --- ------- --------------- ------------ -------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ -------- ---------- -------- ------- ---- ------
5376 10/07/16 12:00 1 59.23 7kugzf4d2vbbm 3892247241 TrxEngin 14706.23 4.07 1750.06 9187.63 0.00 523.63 0.00 291465143 10577961 0 3525 3616 3636 0 4.14 1 UPDATE
e@xxxxxx
xx (TNS
V1-V3)
5376 10/07/16 12:00 1 59.23 8z5wnj8yn5m68 3479127514 TrxEngin 2746.66 2.00 348.96 1470.59 0.00 120.92 0.00 60145030 1823493 0 1367 1370 1373 0 0.77 2 update
e@xxxxxx
xx (TNS
V1-V3)
5376 10/07/16 12:00 1 59.23 2p867hb71ka0r 3479127514 TrxEngin 1995.18 2.34 241.81 1042.70 0.00 107.25 0.00 42723816 1153488 0 850 854 854 0 0.56 3 update
e@xxxxxx
xx (TNS
V1-V3)
5376 10/07/16 12:00 1 59.23 c4urxrzf2pfmp 642032217 TrxEngin 1306.28 3.27 200.47 757.06 0.00 27.44 0.00 32251677 1546070 0 396 400 402 0 0.37 4 update
e@xxxxxx
xx (TNS
V1-V3)
5376 10/07/16 12:00 1 59.23 59vbvx21dsahu 3633629268 1214.12 202.35 46.41 1133.30 0.00 3.98 0.00 3722156 1599341 0 50 6 6 0 0.34 5 SELECT
}}}
The ASH report of that period...
{{{
ASH Report For xxxxx/xxxxx
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
xxxxx 3366261715 xxxxx 1 10.2.0.4.0 NO xxxxxxxx
CPUs SGA Size Buffer Cache Shared Pool ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
4 8,192M (100%) 5,440M (66.4%) 505M (6.2%) 8.0M (0.1%)
Analysis Begin Time: 16-Jul-10 12:00:00
Analysis End Time: 16-Jul-10 13:00:00
Elapsed Time: 60.0 (mins)
Sample Count: 3,196
Average Active Sessions: 8.88
Avg. Active Session per CPU: 2.22
Report Target: None specified
Top User Events DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
Avg Active
Event Event Class % Activity Sessions
----------------------------------- --------------- ---------- ----------
read by other session User I/O 41.58 3.69
CPU + Wait for CPU CPU 31.04 2.76
db file scattered read User I/O 22.18 1.97
db file sequential read User I/O 2.19 0.19
latch: cache buffers chains Concurrency 1.72 0.15
-------------------------------------------------------------
Top Background Events DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top Event P1/P2/P3 Values DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
Event % Event P1 Value, P2 Value, P3 Value % Activity
------------------------------ ------- ----------------------------- ----------
Parameter 1 Parameter 2 Parameter 3
-------------------------- -------------------------- --------------------------
read by other session 41.58 "18","603101","1" 0.88
file# block# class#
db file scattered read 22.18 "12","1041965","16" 0.06
file# block# blocks
db file sequential read 2.19 "1","1513","1" 0.03
file# block# blocks
latch: cache buffers chains 1.72 "504403167063768080","122","0 0.22
address number tries
-------------------------------------------------------------
Top Service/Module DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
Service Module % Activity Action % Action
-------------- ------------------------ ---------- ------------------ ----------
xxxxx TrxEngine@octarine (TNS 40.43 UNNAMED 40.43
TrxEnginexxxx@octarine ( 32.13 UNNAMED 32.13
SYS$USERS UNNAMED 17.12 UNNAMED 17.12
xxxxx TOAD 8.6.1.0 8.64 UNNAMED 8.64
-------------------------------------------------------------
Top Client IDs DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top SQL Command Types DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
-> 'Distinct SQLIDs' is the count of the distinct number of SQLIDs
with the given SQL Command Type found over all the ASH samples
in the analysis period
Distinct Avg Active
SQL Command Type SQLIDs % Activity Sessions
---------------------------------------- ---------- ---------- ----------
UPDATE 6 66.77 5.93
SELECT 200 31.48 2.79
-------------------------------------------------------------
Top SQL Statements DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
SQL ID Planhash % Activity Event % Event
------------- ----------- ---------- ------------------------------ ----------
8z5wnj8yn5m68 3479127514 22.68 read by other session 10.70
update ISWITCH_TRANSACTIONS trx set trx.status_code =:i1 ,
trx.response_code =:s5 , trx.response_code_backend =:s6
, trx.response_type =:s7 , trx.modified_date =:d1
where trx.aux_serial_number =:s8 and trx.tr
CPU + Wait for CPU 9.57
db file scattered read 1.41
2p867hb71ka0r 3479127514 19.21 read by other session 10.48
update iswitch_transactions trx set trx.transaction_fee =:f4 ,
trx.modified_date = sysdate where trx.aux_serial_number =:s4
and trx.transaction_number =:s5
CPU + Wait for CPU 6.91
db file scattered read 1.16
6d9bhx2pjfuc5 3479127514 13.36 read by other session 8.01
update ISWITCH_TRANSACTIONS trx set trx.acquirer_institute_id =:s2
, trx.modified_date =:d1 where trx.aux_seri
al_number =:s3 and trx.transaction_number =:s4
CPU + Wait for CPU 3.50
db file scattered read 1.60
c4urxrzf2pfmp 642032217 11.23 read by other session 5.54
update ISWITCH_TRANSACTIONS trx set trx.to_account_number =:s2
, trx.to_account_label =:s3 , trx.modified_date =
:d1 where trx.aux_serial_number =:s4
CPU + Wait for CPU 4.22
db file scattered read 1.35
59vbvx21dsahu 3633629268 8.17 read by other session 4.29
SELECT distinct product_type FROM ISWITCH_Transactions ORDER BY product_type
db file scattered read 3.41
-------------------------------------------------------------
Top SQL using literals DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
# of Sampled
Plan Hash % Activity SQL Versions
----------- ---------- ------------
Example SQL 1
--------------------
Example SQL 2
--------------------
4046012914 8.26 264
0z9atywck5gra
select EOD.FIELD_03 GL_ACCOUNT_NO, decode(EOD.FIELD_04,61,'CR',62,'DR') C
REDIT_DEBIT_FLAG, EOD.FIELD_06 BRANCH_CODE, EOD.FIELD_10 TRX_AMOUN
T, EOD.FIELD_14 DL_DESCRIPTION, EOD.TRANSACTIONS_NUMBER, E
OD.STATUS, ( SELECT CARD_NO FROM xxxxICMS.CMS_CARD WHERE PRIMARY_ACC_
gz0rvtxh9tvbv
select EOD.FIELD_03 GL_ACCOUNT_NO, decode(EOD.FIELD_04,61,'CR',62,'DR') C
REDIT_DEBIT_FLAG, EOD.FIELD_06 BRANCH_CODE, EOD.FIELD_10 TRX_AMOUN
T, EOD.FIELD_14 DL_DESCRIPTION, EOD.TRANSACTIONS_NUMBER, E
OD.STATUS, ( SELECT CARD_NO FROM xxxxICMS.CMS_CARD WHERE PRIMARY_ACC_
199672211 5.63 180
06r21sz0f4p05
SELECT distinct cA.status_code cl3 , cA.Modified_By cl1, cA.Modified_Date
cl21, cA.PIN_ERROR_COUNT cl4 , cA.ATM_PROFILE cl5, cA.ADDED_TO_AUD_ON
cl2 FROM CMS_CARD_AUDIT cA Where cA.Card_No='6019710507553675'order by ca.
added_to_aud_on
gzfh9gtqugscg
SELECT distinct cA.status_code cl3 , cA.Modified_By cl1, cA.Modified_Date
cl21, cA.PIN_ERROR_COUNT cl4 , cA.ATM_PROFILE cl5, cA.ADDED_TO_AUD_ON
cl2 FROM CMS_CARD_AUDIT cA Where cA.Card_No='6019710508570132'order by ca.
added_to_aud_on
-------------------------------------------------------------
Top PL/SQL Procedures DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
-> 'PL/SQL entry subprogram' represents the application's top-level
entry-point(procedure, function, trigger, package initialization
or RPC call) into PL/SQL.
-> 'PL/SQL current subprogram' is the pl/sql subprogram being executed
at the point of sampling . If the value is 'SQL', it represents
the percentage of time spent executing SQL for the particular
plsql entry subprogram
PLSQL Entry Subprogram % Activity
----------------------------------------------------------------- ----------
PLSQL Current Subprogram % Current
----------------------------------------------------------------- ----------
xxxxISWITCH.TRIG_RCN_DL_SWTICH_TRXN 66.49
SQL 66.49
-------------------------------------------------------------
Top Sessions DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
-> '# Samples Active' shows the number of ASH samples in which the session
was found waiting for that particular event. The percentage shown
in this column is calculated with respect to wall clock time
and not total database activity.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
when the session was waiting for that particular event
-> For sessions running Parallel Queries, this section will NOT aggregate
the PQ slave activity into the session issuing the PQ. Refer to
the 'Top Sessions running PQs' section for such statistics.
Sid, Serial# % Activity Event % Event
--------------- ---------- ------------------------------ ----------
User Program # Samples Active XIDs
-------------------- ------------------------------ ------------------ --------
484,40611 8.26 db file scattered read 7.17
EGM10804 TOAD.exe 229/360 [ 64%] 0
462,14182 2.25 db file scattered read 1.28
xxxxICMS 41/360 [ 11%] 0
511,11879 2.16 read by other session 1.06
xxxxICMS 34/360 [ 9%] 0
445,34581 2.00 read by other session 0.97
xxxxICMS 31/360 [ 9%] 0
439, 6696 1.56 read by other session 0.81
xxxxICMS 26/360 [ 7%] 0
-------------------------------------------------------------
Top Blocking Sessions DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top Sessions running PQs DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
No data exists for this section of the report.
-------------------------------------------------------------
Top DB Objects DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
-> With respect to Application, Cluster, User I/O and buffer busy waits only.
Object ID % Activity Event % Event
--------------- ---------- ------------------------------ ----------
Object Name (Type) Tablespace
----------------------------------------------------- -------------------------
61692 40.39 read by other session 34.73
xxxxISWITCH.RCN_DL_RECN_TRXN (TABLE) TSDATA
db file scattered read 5.51
61600 8.07 read by other session 4.29
xxxxISWITCH.ISWITCH_TRANSACTIONS (TABLE) TSDATA
db file scattered read 3.41
61448 7.38 db file scattered read 7.17
xxxxISWITCH.BACKEND_EOD_FILE_OUTPUT_ARCHV (TABLE) TSDATA
61826 5.10 db file scattered read 3.50
xxxxICMS.CMS_CARD_AUDIT (TABLE) TSDATA
read by other session 1.47
61615 2.41 db file scattered read 1.31
xxxxISWITCH.ISWITCH_TRX_FEE_RECORDS (TABLE) TSDATA
read by other session 1.00
-------------------------------------------------------------
Top DB Files DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
-> With respect to Cluster and User I/O events only.
File ID % Activity Event % Event
--------------- ---------- ------------------------------ ----------
File Name Tablespace
----------------------------------------------------- -------------------------
18 10.70 read by other session 8.60
/u02/oradata/xxxxx/TSData09.dbf TSDATA
db file scattered read 1.97
11 7.35 read by other session 4.69
/u02/oradata/xxxxx/TSData05.dbf TSDATA
db file scattered read 2.57
12 6.63 read by other session 4.04
/u02/oradata/xxxxx/TSData06.dbf TSDATA
db file scattered read 2.53
8 6.57 read by other session 3.66
/u02/oradata/xxxxx/TSData04.dbf TSDATA
db file scattered read 2.82
7 5.98 read by other session 3.38
/u02/oradata/xxxxx/TSData03.dbf TSDATA
db file scattered read 2.44
-------------------------------------------------------------
Top Latches DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
Max Sample
Latch % Latch Blocking Sid % Activity Wait secs
------------------------------ ---------- --------------- ---------- ----------
# Waits # Sampled Wts # Sampled Wts # Sampled Wts # Sampled Wts
Sampled < 10ms 10ms - 100ms 100ms - 1s > 1s
-------------- -------------- -------------- -------------- --------------
latch: cache buffers chains 1.72 Held Shared 1.16 0.477127
37 2 13 22 0
-------------------------------------------------------------
Activity Over Time DB/Inst: xxxxx/xxxxx (Jul 16 12:00 to 13:00)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period
Slot Event
Slot Time (Duration) Count Event Count % Event
-------------------- -------- ------------------------------ -------- -------
12:00:00 (5.0 min) 325 read by other session 163 5.10
db file scattered read 85 2.66
CPU + Wait for CPU 61 1.91
12:05:00 (5.0 min) 141 CPU + Wait for CPU 64 2.00
db file scattered read 36 1.13
read by other session 29 0.91
12:10:00 (5.0 min) 144 CPU + Wait for CPU 84 2.63
db file scattered read 27 0.84
read by other session 19 0.59
12:15:00 (5.0 min) 150 CPU + Wait for CPU 88 2.75
db file scattered read 35 1.10
read by other session 22 0.69
12:20:00 (5.0 min) 200 CPU + Wait for CPU 101 3.16
db file scattered read 44 1.38
read by other session 44 1.38
12:25:00 (5.0 min) 94 CPU + Wait for CPU 53 1.66
read by other session 21 0.66
db file scattered read 19 0.59
12:30:00 (5.0 min) 132 CPU + Wait for CPU 77 2.41
db file scattered read 30 0.94
read by other session 18 0.56
12:35:00 (5.0 min) 538 read by other session 364 11.39
db file scattered read 113 3.54
CPU + Wait for CPU 49 1.53
12:40:00 (5.0 min) 511 read by other session 284 8.89
CPU + Wait for CPU 103 3.22
db file scattered read 103 3.22
12:45:00 (5.0 min) 217 CPU + Wait for CPU 87 2.72
db file scattered read 69 2.16
read by other session 47 1.47
12:50:00 (5.0 min) 243 read by other session 102 3.19
CPU + Wait for CPU 75 2.35
db file scattered read 57 1.78
12:55:00 (5.0 min) 501 read by other session 216 6.76
CPU + Wait for CPU 152 4.76
db file scattered read 91 2.85
-------------------------------------------------------------
End of Report
}}}
Drilling down on the ASH views... ''see my comments on the queries''
{{{
-- TOP SQLs with read by other session event
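-- (the query itself wasn't captured here -- this is a hedged reconstruction
--  of what should produce the SQL_ID counts below; the MODULE breakdown that
--  follows presumably just adds module to the select and group by)
select sql_id, count(*) cnt
from dba_hist_active_sess_history
where snap_id between 5376 and 5377
and event = 'read by other session'
group by sql_id
order by 2;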
df17hpvr5mtuu 2
3kvr3hgs3majr 2
11
SQL_ID CNT
------------- ----------
47sbgm74kgjuw 11
644k3s28r6w31 46
8z68932mqd2qn 223
c4urxrzf2pfmp 1028
2p867hb71ka0r 1749
6d9bhx2pjfuc5 2031
8z5wnj8yn5m68 2105 <-- the top SQL in the ASH report, but only the 2nd top SQL in the AWR report
95 rows selected.
-- BREAK DOWN BY MODULE
MODULE SQL_ID CNT
------------------------------------------------ ------------- ----------
TrxEnginexxxx@xxxxxxxx (TNS V1-V3) 47sbgm74kgjuw 11
TrxEnginexxxx@xxxxxxxx (TNS V1-V3) 11
TrxEnginexxxx@xxxxxxxx (TNS V1-V3) 8z68932mqd2qn 32
TrxEngine@xxxxxxxx (TNS V1-V3) 644k3s28r6w31 45
TrxEngine@xxxxxxxx (TNS V1-V3) 8z68932mqd2qn 191
TrxEngine@xxxxxxxx (TNS V1-V3) c4urxrzf2pfmp 405
TrxEnginexxxx@xxxxxxxx (TNS V1-V3) 2p867hb71ka0r 430
TrxEnginexxxx@xxxxxxxx (TNS V1-V3) c4urxrzf2pfmp 623
TrxEnginexxxx@xxxxxxxx (TNS V1-V3) 8z5wnj8yn5m68 631
TrxEngine@xxxxxxxx (TNS V1-V3) 2p867hb71ka0r 1319
TrxEngine@xxxxxxxx (TNS V1-V3) 8z5wnj8yn5m68 1474
MODULE SQL_ID CNT
------------------------------------------------ ------------- ----------
TrxEngine@xxxxxxxx (TNS V1-V3) 6d9bhx2pjfuc5 2031
100 rows selected.
SQL> select current_obj#, count(*) cnt from dba_hist_active_sess_history
2 where snap_id between 5376 and 5377
3 and module = 'TrxEnginexxxx@xxxxxxxx (TNS V1-V3)'
4 or module = 'TrxEngine@xxxxxxxx (TNS V1-V3)'
5 and event_id=3056446529 and sql_id='8z5wnj8yn5m68'
6 group by current_obj#
7 order by 2;
CURRENT_OBJ# CNT
------------ ----------
61827 1
61602 1
61605 2
61604 5
61600 5
61615 33
61603 39
-1 315
61692 2887 <-- this object is the one being accessed by the #1 Top SQL of the AWR report, SQL_ID 7kugzf4d2vbbm
9 rows selected.
SQL> select object_id, owner, object_name, subobject_name, object_type
  2  from dba_objects where object_id in (61692);
OBJECT_ID OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE
---------- ------------------------------ -------------------------------------------------------------------------------------------------------------------------------- ------------------------------ -------------------
61692 xxxxISWITCH RCN_DL_RECN_TRXN TABLE
SQL> select current_file#, current_block#, count(*) cnt
  2  from dba_hist_active_sess_history
  3  where snap_id between 5376 and 5377
  4  and module = 'TrxEnginexxxx@xxxxxxxx (TNS V1-V3)'
  5  or module = 'TrxEngine@xxxxxxxx (TNS V1-V3)'
  6  and current_obj# in (61692)
  7  group by current_file#, current_block#
  8  having count(*)>50
  9  order by 3;
CURRENT_FILE# CURRENT_BLOCK# CNT
------------- -------------- ----------
0 0 315 <-- what does CURRENT_FILE# CURRENT_BLOCK# = 0 mean???
SQL> select segment_name, header_file, header_block
  2  from dba_segments where owner='xxxxISWITCH'
  3  and segment_name in ('RCN_DL_RECN_TRXN');
SEGMENT_NAME HEADER_FILE HEADER_BLOCK
--------------------------------------------------------------------------------- ----------- ------------
RCN_DL_RECN_TRXN 5 838170
Running Kyle's script http://sites.google.com/site/embtdbo/wait-event-documentation/oracle-buffer-busy-wait
shows that the SQLs on the ASH report wait mostly on this table RCN_DL_RECN_TRXN
CNT OBJ OTYPE SQL_ID BLOCK_TYPE TBS ASSM
---------- -------------------- --------------- ------------- -------------------- ---------- ------
26 RCN_DL_RECN_TRXN TABLE 6d9bhx2pjfuc5 data block TSDATA AUTO
27 RCN_DL_RECN_TRXN TABLE 6d9bhx2pjfuc5 data block TSDATA AUTO
28 RCN_DL_RECN_TRXN TABLE 8z5wnj8yn5m68 data block TSDATA AUTO
29 RCN_DL_RECN_TRXN TABLE 2p867hb71ka0r data block TSDATA AUTO
30 RCN_DL_RECN_TRXN TABLE 8z5wnj8yn5m68 data block TSDATA AUTO
30 RCN_DL_RECN_TRXN TABLE c4urxrzf2pfmp data block TSDATA AUTO
33 RCN_DL_RECN_TRXN TABLE 8z5wnj8yn5m68 data block TSDATA AUTO
35 RCN_DL_RECN_TRXN TABLE 8z5wnj8yn5m68 data block TSDATA AUTO
36 RCN_DL_RECN_TRXN TABLE 6d9bhx2pjfuc5 data block TSDATA AUTO
37 ISWITCH_TRANSACTIONS TABLE 59vbvx21dsahu data block TSDATA AUTO
38 RCN_DL_RECN_TRXN TABLE 2p867hb71ka0r data block TSDATA AUTO
CNT OBJ OTYPE SQL_ID BLOCK_TYPE TBS ASSM
---------- -------------------- --------------- ------------- -------------------- ---------- ------
46 RCN_DL_RECN_TRXN TABLE 8z5wnj8yn5m68 data block TSDATA AUTO
50 RCN_DL_RECN_TRXN TABLE 2p867hb71ka0r data block TSDATA AUTO
63 RCN_DL_RECN_TRXN TABLE 8z5wnj8yn5m68 data block TSDATA AUTO
65 RCN_DL_RECN_TRXN TABLE 6d9bhx2pjfuc5 data block TSDATA AUTO
73 RCN_DL_RECN_TRXN TABLE 2p867hb71ka0r data block TSDATA AUTO
}}}
<<<
__''There are a couple of questions here...''__
''1) The SQL_ID 7kugzf4d2vbbm''
.. hmm.. i got this bank application (oltp) with intermittent failures on transactions.. i was focusing on the time of the issue at 12pm-1pm, and as per the AWR report of that period the "update" statement with SQL_ID 7kugzf4d2vbbm has the top elapsed time with high AAS... i also have the ASH report of the same period but it does not appear there, even if I look in v$active_session_history or dba_hist_active_sess_history.. what appears in ASH as the top SQL was the 2nd top SQL on that AWR top SQL report, SQL_ID 8z5wnj8yn5m68...
What's weird for me is that 7kugzf4d2vbbm does not appear in v$active_session_history or dba_hist_active_sess_history at all
I know that the top SQL from AWR gets its data from DBA_HIST_SQLSTAT.. and if it's consuming that amount of work, and it's always at the top, then it's got to be somewhere in the ASH, but it's not..
''2) Read by other session''
How is it possible that the SQLs on the ASH report wait mostly on this table RCN_DL_RECN_TRXN? That table is the one being accessed by ''SQL_ID 7kugzf4d2vbbm''... while the SQLs on the ASH report are updating ISWITCH_TRANSACTIONS...
How does the update operation on RCN_DL_RECN_TRXN affect the table ISWITCH_TRANSACTIONS? or vice versa..
hmmm..
<<<
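A quick cross-check for question #1 (a minimal sketch, nothing authoritative). One hedged reading of the data above: the trigger TRIG_RCN_DL_SWTICH_TRXN owns ~66% of the activity and the ASH samples of the ISWITCH_TRANSACTIONS updates point at current_obj# 61692 (RCN_DL_RECN_TRXN), so ASH might be attributing the recursive trigger-fired UPDATE (7kugzf4d2vbbm) to the top-level SQL_IDs -- which would explain both questions. To verify the raw numbers:
{{{
-- is the SQL in ASH at all during the window?
select count(*) cnt
from dba_hist_active_sess_history
where sql_id = '7kugzf4d2vbbm'
and snap_id between 5376 and 5377;

-- vs. the work AWR recorded for it in the SQL stats
select snap_id, elapsed_time_delta/1e6 ela_s, executions_delta execs
from dba_hist_sqlstat
where sql_id = '7kugzf4d2vbbm'
and snap_id between 5376 and 5377;
}}}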
http://stackoverflow.com/questions/226703/how-do-i-prompt-for-input-in-a-linux-shell-script
{{{
echo "Please enter some input: "
read input_variable
echo "You entered: $input_variable"
}}}
http://www.pythian.com/news/28797/de-confusing-ssd-for-oracle-databases/#comment-658529
http://sysdba.wordpress.com/2006/04/28/how-to-adjust-the-high-watermark-in-oracle-10g-alter-table-shrink/
Reclaiming unused space in an E-Business Suite Instance tablespace (Doc ID 303709.1)
How to lower down high water mark of a data file https://community.oracle.com/thread/1058006
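the gist of the table shrink approach from the links above (a minimal sketch; my_tab is a made-up name, and shrink only works on ASSM tablespaces):
{{{
alter table my_tab enable row movement;
alter table my_tab shrink space cascade;   -- compacts rows and lowers the HWM
}}}
note this lowers the segment HWM; lowering the HWM of a data file (to allow a datafile resize) is a different exercise, per the community thread above.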
awesome document by redhat
kbase doc DOC-7715 - multi-processor, multi-core or supports hyperthreading
http://www.evernote.com/shard/s48/sh/374cdb18-97d3-421d-85b6-0be1d270cc77/fcde8c4f5ca369745cfd3d6de07379e9
<<showtoc>>
! courses
https://www.udemy.com/building-databases-with-redis/
https://www.udemy.com/learning-redis/
https://www.linkedin.com/learning/learning-redis/welcome
https://www.linkedin.com/learning/azure-redis-cache/the-course-overview
https://app.pluralsight.com/library/courses/building-nosql-apps-redis/table-of-contents
! redis use cases by data structure
http://highscalability.com/blog/2019/9/3/top-redis-use-cases-by-core-data-structure-types.html
! store images in redis
https://www.google.com/search?q=store+images+in+redis&oq=store+images+in+red&aqs=chrome.0.0j69i57j0l2.3948j0j7&sourceid=chrome&ie=UTF-8
! Keeping Instagram up with over a million new users in twelve hours
https://instagram-engineering.com/storing-hundreds-of-millions-of-simple-key-value-pairs-in-redis-1091ae80f74c (code https://news.ycombinator.com/item?id=3183276)
https://news.ycombinator.com/item?id=3804351
https://www.quora.com/What-is-the-best-thing-to-use-Redis-for-in-a-social-networking-site
! redis-vs-memcached
https://stackoverflow.com/questions/22802360/pros-and-cons-of-using-redis-vs-memcacheddb-as-djangos-session-system
! redis as cache (HOWTO GUIDE)
Why Your MySQL Needs Redis | Redis Labs https://www.youtube.com/watch?v=_4HwUVNl9Nc&t=2998s
! app development
!! redis and python
Redis 01 Using String keys https://www.youtube.com/watch?v=QqVAmSaS7fw&list=PLiBQwJML2oINLnq3eui-2NxB_74D_y77k
!! Setting up MongoDB, ElasticSearch, Redis and RabbitMQ - end to end app dev
Setting up MongoDB, ElasticSearch, Redis and RabbitMQ https://www.youtube.com/watch?v=UGC9m0Cx74w&list=PLj5fYy-TIvPfJIVFdHCTiqb4g2IKnl9KD&index=2
https://github.com/amitstefen/mongo_app
!! mastering flask web dev
https://learning.oreilly.com/library/view/mastering-flask-web/9781788995405/f04c504f-d3fd-47f4-946e-f68fca7d23e7.xhtml
<<<
Frontend: Webserver and uWSGI – stateless
Celery workers: Celery – stateless
Message queue: RabbitMQ or AWS SQS – state
Cache: Redis – state
Database: SQL or NoSQL – state
<<<
! source db to redis
!! iVoyant redis connector (oracle to redis) - cdc replication
CDC Replication to Redis from Oracle and MySQL https://www.youtube.com/watch?v=F-0gjKKv-h4
https://www.google.com/search?sxsrf=ACYBGNTj5JwoUAe3xQ4ms-bOOf9zl_OltA%3A1564695432272&ei=iFtDXZ2fELG9ggfR6Y7IDQ&q=iVoyant+redis+connector&oq=iVoyant+redis+connector&gs_l=psy-ab.3..33i160.6786.11317..11519...0.0..0.245.1599.16j1j1......0....1..gws-wiz.......0j0i30j0i20i263j33i299.W01vDkzxxzc&ved=0ahUKEwidg7Db0OLjAhWxnuAKHdG0A9kQ4dUDCAo&uact=5
https://redislabs.com/redisconf19/agenda/
!! How to store data from MySQL to Redis using Logstash (mysql to redis)
How to store data from MySQL to Redis using Logstash https://www.youtube.com/watch?v=sMQTULcSSp8
!! Pro Java Clustering and Scalability: Building Real-Time Apps with Spring, Cassandra, Redis, WebSocket and RabbitMQ
https://learning.oreilly.com/library/view/pro-java-clustering/9781484229859/
!! Building Scalable Apps with Redis and Node.js
https://learning.oreilly.com/library/view/building-scalable-apps/9781783984480/
!! Learning Path: Web Application Development using Redis, Express, and Socket.IO
https://www.oreilly.com/library/view/learning-path-web/9781787288829/?autoplay=false#toc-start
!! django redis
https://code.tutsplus.com/tutorials/how-to-cache-using-redis-in-django-applications--cms-30178?_ga=2.99468830.1820286573.1564789276-1230379328.1564789276
! redis to mysql
https://stackoverflow.com/questions/23080557/whats-the-best-strategy-to-sync-redis-data-to-mysql
<<<
When you update values in Redis, you can also put them in a "queue", such as a Redis List, then consume the values from the queue and update MySQL.
If the Redis data isn't too big, just use a scheduler to batch-flush all the data to MySQL.
<<<
https://www.youtube.com/results?search_query=redis+to+rabbitmq+to+mysql
https://github.com/michaeldegroot/mysql-cache
! redis performance
!! request per second
https://skipperkongen.dk/2013/08/27/how-many-requests-per-second-can-i-get-out-of-redis/
!! latency troubleshooting
Redis latency problems troubleshooting https://redis.io/topics/latency
!! write io
https://medium.com/@amangoeliitb/improving-database-performance-with-redis-dbd38fdf3cb
http://blog.tanelpoder.com/2012/10/31/select-statement-generating-redo-and-lost-write-detection/
http://www.pythian.com/news/37343/select-statement-generating-redo-and-other-mysteries-of-exadata/
http://blog.tanelpoder.com/2012/05/02/advanced-oracle-troubleshooting-guide-part-10-index-unique-scan-doing-multiblock-reads/
http://jonathanlewis.wordpress.com/2009/06/16/clean-it-up/
{{{
There are five terms to consider:
clean
commit cleanout
block cleanout
delayed block cleanout
delayed logging block cleanout
}}}
http://www.jlcomp.demon.co.uk/cleanout.html
http://www.jlcomp.demon.co.uk/faq/ts_readonly.html
http://www.jlcomp.demon.co.uk/bbw.html
http://www.jlcomp.demon.co.uk/buffer_handles.html
{{{
To enable compression, set the COMPRESSION attribute of the redo transport destination to ENABLE. For example:
LOG_ARCHIVE_DEST_2='SERVICE=boston COMPRESSION=ENABLE DB_UNIQUE_NAME=boston'
}}}
Redo Transport Compression in a Data Guard Environment (Doc ID 729551.1)
How to confirm if Redo Transport Compression is used In Dataguard? (Doc ID 1927057.1)
How to find out the compression rate of Redo Transport Compression ? (Doc ID 1490751.1)
https://oraclehandson.wordpress.com/2011/01/07/enabling-redo-log-transport-compression/
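to confirm whether the compression attribute actually took effect on a destination, a minimal sketch (V$ARCHIVE_DEST exposes a COMPRESSION column on 11g+):
{{{
select dest_id, dest_name, compression
from v$archive_dest
where status = 'VALID';
}}}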
{{{
$ cat SecHealthCheck_IFSTST_20130418.txt | grep -i "edm_file_storage_tab"
OWNER GRANTOR GRANTEE TABLE_NAME GO S I U D A R I E
---------- ---------- ------------------------- ------------------------------ --- - - - - - - - -
IFSAPP IFSAPP IFSINFO EDM_FILE_STORAGE_TAB YES X
IFSAPP IFSAPP IFSSYS EDM_FILE_STORAGE_TAB NO X
IFSAPP IFSAPP IFSINFO EDM_FILE_STORAGE_TAB YES X
IFSAPP IFSAPP IFSSYS EDM_FILE_STORAGE_TAB NO X
ON DBV
* create a user called remotedba
* create a realm named LimitedOracleAdmin
* create command rule/rule set on SELECT for remotedba user
conn dvadmin/<password>
create user remotedba identified by <password>;
create user remotedba2 identified by <password>;
conn / as sysdba
grant dba to remotedba;
grant dba to remotedba2;
conn remotedba/<password> <-- he is a contractor, not allowed
conn remotedba2/<password> <-- he is an internal DBA and is allowed
15:34:22 REMOTEDBA@ifstst_1> select count(*) from IFSAPP.EDM_FILE_STORAGE_TAB where rownum < 2;
COUNT(*)
----------
1
### this realm may not need to be implemented.. you can just go with the SELECT command rule, but what the realm does for you
# is that whenever new DBA accounts get created you don't have to create a SELECT command rule for each of them on this table
# since by default, if a DBA is not on the realm then he's not allowed to access the table
# it can also be used as an exception rule.. if you have a DBA that really is allowed to query this table, just make
# him a participant of this realm and he'll be able to access it..
# and take note, even if you make a DBA with SELECT command rule restrictions a participant.. the command rule still takes precedence
-- realm
[begin DVSYS.DBMS_MACADM.CREATE_REALM(realm_name => 'LimitedOracleAdmin', description => '', enabled => 'Y', audit_options => '2' ); DVSYS.DBMS_MACADM.ADD_OBJECT_TO_REALM(realm_name => 'LimitedOracleAdmin', object_owner => 'IFSAPP', object_name => 'EDM_FILE_STORAGE_TAB', object_type => 'TABLE' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSAPP', rule_set_name => '', auth_options => '0' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSINFO', rule_set_name => '', auth_options => '0' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSSYS', rule_set_name => '', auth_options => '0' ); end; ]
--edit
[begin DVSYS.DBMS_MACADM.DELETE_AUTH_FROM_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSAPP'); DVSYS.DBMS_MACADM.DELETE_AUTH_FROM_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSINFO'); DVSYS.DBMS_MACADM.DELETE_AUTH_FROM_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSSYS'); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSAPP', rule_set_name => '', auth_options => '0' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSINFO', rule_set_name => '', auth_options => '0' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSSYS', rule_set_name => '', auth_options => '0' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'REMOTEDBA', rule_set_name => '', auth_options => '0' ); end; ]
-- just the app schema
[begin DVSYS.DBMS_MACADM.CREATE_REALM(realm_name => 'LimitedOracleAdmin', description => '', enabled => 'Y', audit_options => '2' ); DVSYS.DBMS_MACADM.ADD_OBJECT_TO_REALM(realm_name => 'LimitedOracleAdmin', object_owner => 'IFSAPP', object_name => 'EDM_FILE_STORAGE_TAB', object_type => 'TABLE' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSAPP', rule_set_name => '', auth_options => '0' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSINFO', rule_set_name => '', auth_options => '0' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSSYS', rule_set_name => '', auth_options => '0' ); end; ]
-- adding remotedba and remotedba2
[begin DVSYS.DBMS_MACADM.DELETE_AUTH_FROM_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSSYS'); DVSYS.DBMS_MACADM.DELETE_AUTH_FROM_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSAPP'); DVSYS.DBMS_MACADM.DELETE_AUTH_FROM_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSINFO'); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSSYS', rule_set_name => '', auth_options => '0' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSAPP', rule_set_name => '', auth_options => '0' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'IFSINFO', rule_set_name => '', auth_options => '0' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'REMOTEDBA', rule_set_name => '', auth_options => '0' ); DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'LimitedOracleAdmin', grantee => 'REMOTEDBA2', rule_set_name => '', auth_options => '0' ); end; ]
-- rule set
remotedba_noselect
-- rule
remotedba_noselect_rule
dvf.f$session_user != 'REMOTEDBA'
[begin DVSYS.DBMS_MACADM.CREATE_RULE_SET(rule_set_name => 'remotedba_noselect', description => '', enabled => 'Y', eval_options => 1, audit_options => 1, fail_options => 1, fail_message => '', fail_code => '', handler_options => 0, handler => ''); DVSYS.DBMS_MACADM.ADD_RULE_TO_RULE_SET(rule_set_name => 'remotedba_noselect', rule_name => 'remotedba_noselect_rule', rule_order => '1', enabled => 'Y'); end; ]
-- command rule
begin DVSYS.DBMS_MACADM.CREATE_COMMAND_RULE(command=> 'SELECT', rule_set_name => 'remotedba_noselect', object_owner => 'IFSAPP', object_name => 'EDM_FILE_STORAGE_TAB',enabled => 'Y'); end;
15:35:24 REMOTEDBA@ifstst_1> select count(*) from IFSAPP.EDM_FILE_STORAGE_TAB where rownum < 2;
select count(*) from IFSAPP.EDM_FILE_STORAGE_TAB where rownum < 2
*
ERROR at line 1:
ORA-01031: insufficient privileges
}}}
{{{
sed -n -i '2,$ p' cell_mbs.txt
}}}
{{{
for i in *; do mv $i $i.txt; done
}}}
http://stackoverflow.com/questions/1224766/how-do-i-rename-the-extension-for-a-batch-of-files
{{{
for file in *.r
do
mv "$file" "${file%.r}.R"
done
}}}
https://stackoverflow.com/questions/10748453/replace-comma-with-newline-in-sed-on-macos
{{{
sed 's/{replace this with your characters}/\'$'\n''/g' tables.txt | grep CA1
}}}
http://www.unix.com/shell-programming-and-scripting/152214-replace-newline-comma.html
{{{
sed -n -e 'H;${x;s/\n/,/g;s/^,//;p;}'
}}}
http://stackoverflow.com/questions/2709458/bash-script-to-replace-spaces-in-file-names
http://stackoverflow.com/questions/15347843/remove-whitespaces-from-filenames-in-linux
{{{
for f in hotel*pdf ; do mv "$f" "${f// /_}"; done
}}}
{{{
----------------------------------------------------------------------------------------
--
-- File name: report_sql_monitor.sql
-- Purpose: Execute DBMS_SQLTUNE.REPORT_SQL_MONITOR function.
--
-- Author: Kerry Osborne
--
-- Usage:       This script prompts for three values, all of which can be left blank.
--
-- If all three parameters are left blank, the last statement monitored
-- for the current session will be reported on.
--
-- If the SID is specified and the other two parameters are left blank,
-- the last statement executed by the specified SID will be reported.
--
-- If the SQL_ID is specified and the other two parameters are left blank,
-- the last execution of the specified statement by the current session
-- will be reported.
--
--              If the SID and the SQL_ID are specified and the SQL_EXEC_ID is left
-- blank, the last execution of the specified statement by the specified
-- session will be reported.
--
-- If all three parameters are specified, the specified execution of the
-- specified statement by the specified session will by reported.
--
-- Note: If a match is not found - the header is printed with no data.
-- The most common cause for this is when you enter a SQL_ID and
-- leave the other parameters blank, but the current session has
--              not executed the specified statement.
--
-- Note 2:      The serial# is not prompted for, but is set up by the decode.
-- The serial# parameter is in here to ensure you don't get data
-- for the wrong session, but be aware that you may need to modify
-- this script to allow input of a specific serial#.
---------------------------------------------------------------------------------------
set long 999999999
set lines 250
col report for a250
accept sid prompt "Enter value for sid: "
select
DBMS_SQLTUNE.REPORT_SQL_MONITOR(
session_id=>nvl('&&sid',sys_context('userenv','sid')),
session_serial=>decode('&&sid',null,null,
sys_context('userenv','sid'),(select serial# from v$session where audsid = sys_context('userenv','sessionid')),
null),
sql_id=>'&sql_id',
sql_exec_id=>'&sql_exec_id',
inst_id=>'&inst_id',
report_level=>'ALL')
as report
from dual;
set lines 250
undef SID
}}}
{{{
set pagesize 0 echo off timing off linesize 1000 trimspool on trim on long 2000000 longchunksize 2000000 feedback off verify off
spool sqlmon_&&sql_id..html
select dbms_sqltune.report_sql_monitor(report_level=>'+histogram', type=>'EM', sql_id=>'&&sql_id') monitor_report from dual;
spool off
SPOOL sqlmon_detail_&&sql_id..html
SELECT DBMS_SQLTUNE.report_sql_detail(
sql_id => '&&sql_id',
type => 'ACTIVE',
report_level => 'ALL') AS report
FROM dual;
SPOOL OFF
}}}
http://blog.lachmann.org/?p=504
http://davidlopezrodriguez.com/2011/09/20/resize2fs-cant-resize-no-space-left-on-device-while-trying-to-add-group/
http://h30499.www3.hp.com/t5/System-Administration/Online-resize2fs-Operation-not-permitted-While-trying-to-add/td-p/4680934
shows how to disable it..
http://askdba.org/weblog/2012/01/user-sessions-stuck-on-resmgrcpu-quantum-wait-event/
by default the RESOURCE_PLAN gets set to DEFAULT_MAINTENANCE_PLAN during the scheduler maintenance windows
{{{
set lines 300
col window_name format a17
col RESOURCE_PLAN format a25
col LAST_START_DATE format a50
col duration format a15
col enabled format a5
select window_name, RESOURCE_PLAN, LAST_START_DATE, DURATION, enabled from DBA_SCHEDULER_WINDOWS;
execute dbms_scheduler.set_attribute('MONDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('TUESDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('WEDNESDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('THURSDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('FRIDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('SATURDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('SUNDAY_WINDOW','RESOURCE_PLAN','');
execute DBMS_AUTO_TASK_ADMIN.DISABLE;
}}}
* there's also an interesting bug on default_maintenance_plan killing the cells
<<<
Yes, I remember Sue Lee or Tanel mentioning that RM preemption wouldn't be done while a latch is held by a process.
On the other hand, I've experienced a scenario where buffer pins were preempted by RM (concurrency + CPU starvation), see more details here http://bit.ly/1xuqbfM
22:18:35 SYS@pib01scp4> @ash_wait_chains program2||event2||sql_id "event='buffer busy waits'" sysdate-1/24/60 sysdate
%This SECONDS
------ ----------
WAIT_CHAIN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
72% 2085
-> (JDBC Thin Client) buffer busy waits [segment header] fv4fs01rvgdju -> (JDBC Thin Client) resmgr:cpu quantum fv4fs01rvgdju
As you can see above, it looks like Oracle allows the resource manager "scheduler" to kick in while a session is holding a buffer pinned, causing the top sessions to wait on "buffer busy waits" while the pin holders are being put to sleep by the resource manager.
Reducing the concurrency should help here, or allowing more resources for the SQL_ID fv4fs01rvgdju whenever the high concurrency is needed during start/end of month, or we can ultimately run the SCPP_TS_RW on two nodes.
Also we can't rule out ASSM bugs here so some stack traces of the "buffer busy waits" pin holders would be useful (next section)
----
I knew that resource manager level preemption wouldn't be done when a latch is held by a process (makes sense). Sue Lee said that for mutexes they don't care currently (IIRC). I guess maybe they don't care about pinned buffers either, but Sue was commenting about mutexes
ok .... they should increase the resource usage allowance for this workload and control its resource consumption by reducing concurrency. and then see if there's something in resource manager to patch :)
can't you just start from reducing concurrency? how many concurrently inserting sessions do you have? concurrency and CPU shortage do not go together well. wait chains just showed that
----
I would hope that a session couldn't put itself into 'resmgr:cpu quantum' while holding a latch - but can anyone make a definite statement that that's not possible ?
Has anyone looked closely at the internals of resource manager ?
I'm trying to find out why we can see 18M gets on the "resmgr:schema config" latch every hour. At present the only way I can cause any gets is to do something silly (like changing a user/consumer group mapping); although it does look as if CJQ gets the latch once every 5 seconds when there's a resource plan active.
<<<
* you can combine user profiles (sessions per user) and DBRM to limit PX or elapsed time and then kill or throttle the session -- a sketch follows after these links
https://askdba.org/weblog/2012/09/limiting-io-and-cpu-resources-using-11g-oracle-resource-manager/
https://blog.yannickjaquier.com/oracle/oracle-database-resource-manager.html
https://learning.oreilly.com/library/view/expert-oracle-exadata/9781430262428/9781430262411_Ch07.xhtml
https://learning.oreilly.com/library/view/expert-oracle-exadata/9781430233923/Chapter07.html#ch7
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/admin/managing-resources-with-oracle-database-resource-manager.html#GUID-1D62EF37-F2E2-4595-BA47-DECDE899E62E
https://docs.oracle.com/database/121/SQLRF/statements_6012.htm#SQLRF01310
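a minimal sketch of that combo (profile + DBRM); the names are made up, and whether SWITCH_TIME counts CPU or elapsed time depends on version, so check the docs linked above:
{{{
-- profile side: cap concurrent sessions per user (needs resource_limit=true)
alter system set resource_limit = true;
create profile app_throttle_prof limit sessions_per_user 4;
alter user alloc_app_user profile app_throttle_prof;

-- DBRM side: kill (or use CANCEL_SQL to just cancel the call) any session
-- in the group once it crosses the time threshold
begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.update_plan_directive(
    plan             => 'default_plan',
    group_or_subplan => 'OTHER_GROUPS',
    new_switch_group => 'KILL_SESSION',
    new_switch_time  => 3600);
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/
}}}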
{{{
oracle@enkdb03.enkitec.com:/home/oracle/dba/karao/scripts/exabook:dbm011
$ cat rmplan.sh
#!/bin/bash
while :; do
sqlplus "/ as sysdba" <<! &
spool rmplan.txt append
@rmplan.sql
exit
!
sleep 20
echo
done
oracle@enkdb03.enkitec.com:/home/oracle/dba/karao/scripts/exabook:dbm011
$ cat rmplan.sql
set colsep ','
set lines 300
col "Parameter" FOR a40
col "Session Value" FOR a20
col "Instance Value" FOR a20
col "Description" FOR a50
SELECT TO_CHAR(sysdate,'MM/DD/YY HH24:MI:SS') tm, a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value", a.ksppdesc "Description"
FROM x$ksppi a, x$ksppcv b, x$ksppsv c
WHERE a.indx = b.indx AND a.indx = c.indx
AND substr(ksppinm,1,1) <> '_'
AND a.ksppinm = 'resource_manager_plan'
/
$ cat rmplan.txt | grep resource_manager_plan | awk '{print $4}' | sort | uniq | less
,DEFAULT_PLAN
,FORCE:pq_critical
FORCE:pq_critical
}}}
{{{
TM ,Parameter ,Session Value ,Instance Value ,Description
-----------------,----------------------------------------,--------------------,--------------------,--------------------------------------------------
09/23/14 05:53:24,resource_manager_plan ,DEFAULT_PLAN ,DEFAULT_PLAN ,resource mgr top plan
TM ,Parameter ,Session Value ,Instance Value ,Description
-----------------,----------------------------------------,--------------------,--------------------,--------------------------------------------------
10/05/14 06:00:07,resource_manager_plan ,FORCE:pq_critical ,FORCE:pq_critical ,resource mgr top plan
TM ,Parameter ,Session Value ,Instance Value ,Description
-----------------,----------------------------------------,--------------------,--------------------,--------------------------------------------------
10/06/14 06:48:53,resource_manager_plan ,FORCE:pq_critical ,FORCE:pq_critical ,resource mgr top plan
Sun Oct 05 06:00:01 2014
Setting Resource Manager plan SCHEDULER[0x4449]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Sun Oct 05 06:00:03 2014
Begin automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Sun Oct 05 06:00:07 2014
TABLE SYS.WRI$_OPTSTAT_HISTHEAD_HISTORY: ADDED INTERVAL PARTITION SYS_P597 (41916) VALUES LESS THAN (TO_DATE(' 2014-10-06 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GRE
GORIAN'))
TABLE SYS.WRI$_OPTSTAT_HISTGRM_HISTORY: ADDED INTERVAL PARTITION SYS_P600 (41916) VALUES LESS THAN (TO_DATE(' 2014-10-06 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREG
ORIAN'))
End automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Mon Oct 06 22:00:10 2014
Thread 1 advanced to log sequence 226 (LGWR switch)
Current log# 2 seq# 226 mem# 0: +DATA/DW/ONLINELOG/group_2.333.857614663
Current log# 2 seq# 226 mem# 1: +RECO/DW/ONLINELOG/group_2.311.857614663
Tue Oct 07 05:12:42 2014
Thread 1 advanced to log sequence 227 (LGWR switch)
Current log# 1 seq# 227 mem# 0: +DATA/DW/ONLINELOG/group_1.361.857614663
Current log# 1 seq# 227 mem# 1: +RECO/DW/ONLINELOG/group_1.335.857614663
Tue Oct 07 13:00:12 2014
Thread 1 advanced to log sequence 228 (LGWR switch)
Current log# 2 seq# 228 mem# 0: +DATA/DW/ONLINELOG/group_2.333.857614663
Current log# 2 seq# 228 mem# 1: +RECO/DW/ONLINELOG/group_2.311.857614663
Tue Oct 07 21:00:14 2014
Thread 1 advanced to log sequence 229 (LGWR switch)
Current log# 1 seq# 229 mem# 0: +DATA/DW/ONLINELOG/group_1.361.857614663
Current log# 1 seq# 229 mem# 1: +RECO/DW/ONLINELOG/group_1.335.857614663
Tue Oct 07 22:00:00 2014
Setting Resource Manager plan SCHEDULER[0x4444]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Tue Oct 07 22:00:01 2014
Begin automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Tue Oct 07 22:00:05 2014
TABLE SYS.WRI$_OPTSTAT_HISTHEAD_HISTORY: ADDED INTERVAL PARTITION SYS_P617 (41918) VALUES LESS THAN (TO_DATE(' 2014-10-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
TABLE SYS.WRI$_OPTSTAT_HISTGRM_HISTORY: ADDED INTERVAL PARTITION SYS_P620 (41918) VALUES LESS THAN (TO_DATE(' 2014-10-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
End automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Mon Oct 27 22:00:00 2014
Setting Resource Manager plan SCHEDULER[0x4443]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Mon Oct 27 22:00:02 2014
Begin automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Mon Oct 27 22:00:07 2014
TABLE SYS.WRI$_OPTSTAT_HISTHEAD_HISTORY: ADDED INTERVAL PARTITION SYS_P878 (41938) VALUES LESS THAN (TO_DATE(' 2014-10-28 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GRE
GORIAN'))
TABLE SYS.WRI$_OPTSTAT_HISTGRM_HISTORY: ADDED INTERVAL PARTITION SYS_P881 (41938) VALUES LESS THAN (TO_DATE(' 2014-10-28 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREG
ORIAN'))
Mon Oct 27 22:00:17 2014
Thread 1 advanced to log sequence 375 (LGWR switch)
Current log# 1 seq# 375 mem# 0: +DATA/DW/ONLINELOG/group_1.361.857614663
Current log# 1 seq# 375 mem# 1: +RECO/DW/ONLINELOG/group_1.335.857614663
End automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
11:00:22 SYS@dw1> select sysdate from dual;
SYSDATE
-----------------
20141029 11:00:30
11:00:35 SYS@dw1> show parameter resource
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
resource_limit boolean TRUE
resource_manager_cpu_allocation integer 24
resource_manager_plan string FORCE:pq_critical
}}}
! this will do the job
{{{
CREATE OR REPLACE TRIGGER px_force_serial
AFTER LOGON ON database
WHEN (USER in ('ALLOC_APP_USER','SYSTEM'))
BEGIN
execute immediate 'alter session disable parallel query';
END;
/
}}}
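quick sanity check that the trigger took effect (a minimal sketch; only sessions created after the trigger will show it):
{{{
select username, pq_status, pdml_status
from v$session
where username in ('ALLOC_APP_USER','SYSTEM');
}}}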
! this somehow limits each statement to 1 parallel slave
{{{
exec DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA;
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.DELETE_PLAN_DIRECTIVE(PLAN =>'default_plan', GROUP_OR_SUBPLAN => 'OTHER_GROUPS');  -- DELETE_PLAN_DIRECTIVE only takes PLAN and GROUP_OR_SUBPLAN
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'FORCE:default_plan';
exec DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA;
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(PLAN =>'default_plan', GROUP_OR_SUBPLAN => 'OTHER_GROUPS', NEW_PARALLEL_DEGREE_LIMIT_P1 => -1 );
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
SELECT plan,group_or_subplan,cpu_p1,cpu_p2,cpu_p3, PARALLEL_DEGREE_LIMIT_P1, status
FROM dba_rsrc_plan_directives
order by 1,3 desc,4 desc,5 desc;
select
a.SID,
a.RM_CONSUMER_GROUP RM_GROUP,
a.SQL_ID,
b.CHILD_NUMBER CH,
a.SQL_PLAN_HASH_VALUE PHV,
a.sql_exec_id SQL_EXEC,
a.INST_ID ,
a.USERNAME,
a.PX_SERVERS_REQUESTED RQ,
a.PX_SERVERS_ALLOCATED ALLOC,
to_char(a.SQL_EXEC_START,'MMDDYY HH24:MI:SS') SQL_EXEC_START
from gv$sql_monitor a, gv$sql b
where a.sql_id = b.sql_id
and a.inst_id = b.inst_id
and a.sql_child_address = b.child_address
and username is not null
order by a.STATUS, a.SQL_EXEC_START, a.SQL_EXEC_ID, a.PX_SERVERS_ALLOCATED, a.PX_SERVER_SET, a.PX_SERVER# asc
/
}}}
http://www.pythian.com/blog/oracle-limiting-query-runtime-without-killing-the-session/
http://www.emilianofusaglia.net/resourcemanager_17.html
file://localhost/Users/karl/Dropbox/oracle/OfficialDocs/oracle-database-docs-11gR2_E11882_01/server.112/e10595/dbrm004.htm <-- Assigning Sessions to Resource Consumer Groups
file:///users/karl/Dropbox/oracle/OfficialDocs/oracle-database-docs-11gR2_E11882_01/server.112/e10595/dbrm012.htm#CHDBEJJB <-- predefined resource plans
http://docs.oracle.com/cloud/latest/db121/VLDBG/parallel002.htm#BEIHFJHG
Multilevel Plan Example
http://docs.oracle.com/database/121/ADMIN/dbrm.htm#ADMIN11891
CPU resources not used by either Users_group or Mail_Maint_group at level 2 are allocated to OTHER_GROUPS, @@''because in multilevel plans, unused resources are reallocated to consumer groups or subplans at the next lower level, not to siblings at the same level.''@@ Thus, if Users_group uses only 70% instead of 80%, the remaining 10% cannot be used by Mail_Maint_group. That 10% is available only to OTHER_GROUPS at level 3.
! recommended
if you are going to use multilevel plans, let's say levels 1, 2, 3, and 4,
do not put more than one consumer group on levels 1, 2, and 3; only put more than one consumer group on the last level
see examples here http://docs.oracle.com/cd/E50790_01/doc/doc.121/e50471/iorm.htm#CACFJHAI where it shows only on the last level where it puts siblings on the same level
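Putting the two notes above together, here's a minimal sketch (plan and consumer group names are made up for illustration) of a multilevel plan that keeps a single group per level and lets unused CPU trickle down to OTHER_GROUPS on the last level:
{{{
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(plan => 'ML_PLAN', comment => 'multilevel example');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(consumer_group => 'CRITICAL_GRP', comment => 'level 1 group');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(consumer_group => 'BATCH_GRP', comment => 'level 2 group');
  -- one group per level on levels 1 and 2, siblings only on the last level
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(plan => 'ML_PLAN', group_or_subplan => 'CRITICAL_GRP', comment => 'level 1', mgmt_p1 => 100);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(plan => 'ML_PLAN', group_or_subplan => 'BATCH_GRP', comment => 'level 2', mgmt_p2 => 100);
  -- unused CPU at levels 1 and 2 trickles down here, never sideways to siblings
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(plan => 'ML_PLAN', group_or_subplan => 'OTHER_GROUPS', comment => 'level 3', mgmt_p3 => 100);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
}}}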
Example of Managing Parallel Statements Using Directive Attributes http://docs.oracle.com/cd/E11882_01/server.112/e10595/dbrm007.htm#BABCIJDG
http://www.idevelopment.info/data/Oracle/DBA_scripts/Examples/example_database_resource_manager_setup.sql
! 11g 'MGMT_MTH' switch
@@mgmt_mth was introduced starting 11.1@@
set the mgmt_mth to 'RATIO' or 'EMPHASIS'
http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_resmgr.htm#CFAEHEHJ
@@cpu_mth 10gR2 (deprecated)@@
The resource allocation method for distributing CPU among sessions in the consumer group. The default is ROUND-ROBIN, which uses a round-robin scheduler to ensure sessions are fairly executed. RUN-TO-COMPLETION specifies that sessions with the largest active time are scheduled ahead of other sessions
http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_resmgr.htm#i1002932
<<<
an intra-database IORM plan tied with proper DBRM percentage allocations, I would prefer a single level plan for this based on shares (mgmt_mth=>'RATIO')
<<<
<<<
ORA-600 [KGKPLOALLOC1] IN CELLSRV
This bug is seen when RATIO intra-database plans are set on the database and sent to the cells. The workaround is to set EMPHASIS intra-database plans instead of RATIO plans. RATIO plans are created using dbms_resource_manager.create_plan by specifying the mgmt_mth to 'RATIO'. This is not a common method of creating resource plans. Most customers create resource plans by specifying the mgmt_mth to 'EMPHASIS'.
Cellsrv issue fixed in 11.2.3.3.
Workaround recommended for all prior releases.
<<<
! 12c 'SHARES' switch
in 11.1 they introduced the MGMT_MTH
which you have to set to either 'RATIO' or 'EMPHASIS'
and with 'RATIO' you define the proportion of shares of resources only through the 'MGMT_P1', so it's a single level plan based on shares model
now on 12.1, with CDB and PDB there's a new switch on the CREATE_CDB_PLAN_DIRECTIVE and CREATE_PLAN_DIRECTIVE called 'SHARES' where you can define the distribution of resources on two levels
> on CDB level using the CREATE_CDB_PLAN_DIRECTIVE, the distribution of resources across PDBs inside the CDB
> on PDB level using the CREATE_PLAN_DIRECTIVE, the distribution of resources across the consumer groups (max of 8) inside the PDB
now what I haven't validated is if I use the CREATE_PLAN_DIRECTIVE -> MGMT_MTH -> 'RATIO' in a PDB, would it work? @@YES@@
or if I use the 'SHARES' switch in a non-CDB, would it work? or do I need to go with MGMT_MTH -> 'RATIO' ? @@YES@@
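Here's a minimal sketch of the 12.1 CDB-level SHARES switch (PDB names are hypothetical); note that no mgmt_mth is needed with the CREATE_CDB_PLAN+CREATE_CDB_PLAN_DIRECTIVE combination:
{{{
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(plan => 'CDB_SHARES_PLAN', comment => 'shares across PDBs');
  -- a 3:1 CPU ratio between the two PDBs, no percentages involved
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(plan => 'CDB_SHARES_PLAN', pluggable_database => 'PDB_PROD', shares => 3);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(plan => 'CDB_SHARES_PLAN', pluggable_database => 'PDB_DEV',  shares => 1);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
}}}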
some references here
11.2 create_plan_directive
http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_resmgr.htm#ARPLS67609
12.1 create_plan_directive
http://docs.oracle.com/database/121/ARPLS/d_resmgr.htm#ARPLS67609
12.1 create_cdb_plan_directive
http://docs.oracle.com/database/121/ARPLS/d_resmgr.htm#ARPLS73823
here's the hierarchy from IORM -> CDB/non-CDB -> PDB -> consumer groups
also see [[resource manager - multi level plans , mgmt_p1]]
! update on this research - SUMMARY
the SHARES directive with mgmt_mth => 'EMPHASIS' is no different from the MGMT_P1 behavior, just like having a percentage set with a 100% limit on level 1
ultimately mgmt_mth => 'RATIO' has to be set on CREATE_PLAN to get the pure SHARES behavior, where the sum can go above 100 without hitting the error below
ORA-29375: sum of values 300 for level 1, plan HYBRID_PLAN exceeds 100
@@In summary@@,
The mgmt_mth on CREATE_PLAN was introduced starting 11.1 which you can set to 'RATIO' or 'EMPHASIS'.
To achieve the shares behavior mgmt_mth => 'RATIO' has to be set and on CREATE_PLAN_DIRECTIVE set MGMT_P1 numbers just on the level 1
pre-11.1 there was a cpu_mth on CREATE_PLAN which also allowed 'RATIO' or 'EMPHASIS', but that one is deprecated and I think it's only focused on CPU, hence the CPU in the name
In 12.1 there’s a new switch to the CREATE_PLAN_DIRECTIVE and CREATE_CDB_PLAN_DIRECTIVE called SHARES which gives you the shares behavior
Using SHARES with the CREATE_CDB_PLAN+CREATE_CDB_PLAN_DIRECTIVE combination you don’t have to set the mgmt_mth => 'RATIO'
But on the CREATE_PLAN+CREATE_PLAN_DIRECTIVE combination you still have to set it to mgmt_mth => 'RATIO' to get the shares behavior
For the limits on the directive value with mgmt_mth => 'RATIO', MGMT_P1 and SHARES (CREATE_PLAN_DIRECTIVE and CREATE_CDB_PLAN_DIRECTIVE) have a limit of 100
Meaning the sum of the SHARES values across all directives can be above 100, but no single directive can have a value above 100.
On IORM SHARES, no single directive can have a value above 32.
http://askdba.org/weblog/2012/09/limiting-io-and-cpu-resources-using-11g-oracle-resource-manager/
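Following up on the summary, a minimal sketch of the non-CDB shares model (consumer group names are made up and assumed to already exist), where mgmt_mth => 'RATIO' turns the MGMT_P1 values into weights:
{{{
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(plan => 'SHARES_PLAN', comment => 'single level shares', mgmt_mth => 'RATIO');
  -- with RATIO these are weights (100:50:25), not percentages, so their
  -- sum (175) can exceed 100 without raising ORA-29375
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(plan => 'SHARES_PLAN', group_or_subplan => 'CRITICAL_GRP', comment => 'weight 100', mgmt_p1 => 100);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(plan => 'SHARES_PLAN', group_or_subplan => 'BATCH_GRP',    comment => 'weight 50',  mgmt_p1 => 50);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(plan => 'SHARES_PLAN', group_or_subplan => 'OTHER_GROUPS', comment => 'weight 25',  mgmt_p1 => 25);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
}}}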
How to Restart Windows Explorer Without Rebooting
http://www.wikihow.com/Restart-Windows-Explorer-Without-Rebooting-Computer <- kill "explorer", click "new task", enter "explorer"
http://www.wikihow.com/Run-Task-Manager-from-Command-Prompt <- taskmgr
{{{
rman target /
RMAN> startup nomount
RMAN> restore controlfile from '/reco/rman/slob/20120604_backup-slob/control-slob-201206050642.bkp';
RMAN> alter database mount;
RMAN> catalog start with '/reco/rman/slob/20120604_backup-slob/';
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;
oracle@desktopserver.local:/reco/rman/slob/20120604_backup-slob:slob
$ ls -ltr
total 293788
-rw-r----- 1 oracle dba 1581056 Jun 5 06:42 SLOB.20120605.12.1.bkp
-rw-r----- 1 oracle dba 17481728 Jun 5 06:42 snapcf-slob.bkp
-rw-r----- 1 oracle dba 98304 Jun 5 06:42 SLOB.20120605.16.1.bkp
-rw-r----- 1 oracle dba 14196736 Jun 5 06:42 SLOB.20120605.13.1.bkp
-rw-r----- 1 oracle dba 1114112 Jun 5 06:42 SLOB.20120605.15.1.bkp
-rw-r----- 1 oracle dba 38404096 Jun 5 06:42 SLOB.20120605.14.1.bkp
-rw-r----- 1 oracle dba 192552960 Jun 5 06:43 SLOB.20120605.11.1.bkp
-rw-r----- 1 oracle dba 98304 Jun 5 06:43 spfile.SLOB.20120605.785141018.bkp
-rw-r----- 1 oracle dba 17481728 Jun 5 06:43 control-slob-201206050642.bkp
-rw-r----- 1 oracle dba 17481728 Jun 5 06:43 control-standby-slob-201206050642.bkp
}}}
http://blog.flimatech.com/2010/10/07/how-to-restore-table-statistics-stats/
{{{
-- Check historical stats present for the table
SQL> select table_name, to_char(stats_update_time, 'DD-MON-YYYY HH24:MI:SS') from dba_tab_stats_history where owner = 'SCOTT';
TABLE_NAME                     TO_CHAR(STATS_UPDATE
------------------------------ --------------------
TEST                           06-OCT-2010 00:08:05
TEST                           06-OCT-2010 00:08:48
-- restore stats to the time it was before
SQL> exec DBMS_STATS.RESTORE_TABLE_STATS (ownname=>'SCOTT', tabname=>'TEST', as_of_timestamp=>TO_DATE('06-OCT-2010 00:08:48', 'DD-MON-YYYY HH24:MI:SS'));
PL/SQL procedure successfully completed.
-- verify stats were restored by checking date
SQL> select to_char(last_analyzed, 'DD-MON-YYYY HH24:MI:SS') from user_tables where table_name = 'TEST';
TO_CHAR(LAST_ANALYZE
--------------------
06-OCT-2010 00:08:05
}}}
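Before restoring, it's worth checking how far back the stats history actually goes; a quick sketch using the documented DBMS_STATS calls:
{{{
-- how many days of stats history are kept (default 31)
select dbms_stats.get_stats_history_retention from dual;
-- oldest timestamp you can still restore to
select dbms_stats.get_stats_history_availability from dual;
-- optionally widen the window, e.g. to 60 days
exec dbms_stats.alter_stats_history_retention(60);
}}}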
{{{
conn DVACCTMGR/welcome1%
create user DBA_SR identified by oracle1;
create user DBA_JR identified by oracle1;
conn / as sysdba
grant dba to dba_sr;
grant dba to dba_jr;
-- create a realm on the object
-- create rule set to restrict SELECT on JR DBAs
-- Create a rule set named "No JR DBAs" to restrict access for user
-- DBA_JR. Select Rule Sets, select CREATE. For the rule
-- expression use the string dvf.f$session_user != 'DBA_JR'.
-- create rule
No_JR_DBAs_rule
dvf.f$session_user != 'DBA_JR'
-- create rule set to restrict DML on SR and JR DBAs
"No SR and JR DBAs"
-- create rule
No_SR_and_JR_DBAs_rule
dvf.f$session_user not in ('DBA_SR','DBA_JR')
}}}
* I think, if it's just one table that you want to protect from the DBAs or SYSDBA then just adding the table to a realm will do the job.. but to be fine grained on the security you can add the user to the realm and restrict it with a SELECT command rule (see the sketch below)
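A rough sketch of that using DBMS_MACADM; the realm, owner, and table names are made up here, and the parameter lists are trimmed to the common ones, so verify against your Database Vault version:
{{{
BEGIN
  DBMS_MACADM.CREATE_REALM(
    realm_name    => 'Payroll Realm',
    description   => 'protect one table from DBAs/SYSDBA',
    enabled       => DBMS_MACUTL.G_YES,
    audit_options => DBMS_MACUTL.G_REALM_AUDIT_FAIL);
  -- the protected object
  DBMS_MACADM.ADD_OBJECT_TO_REALM(
    realm_name   => 'Payroll Realm',
    object_owner => 'HR',
    object_name  => 'PAYROLL',
    object_type  => 'TABLE');
  -- authorize only the app owner; everyone else (DBA role included) is kept out
  DBMS_MACADM.ADD_AUTH_TO_REALM(
    realm_name    => 'Payroll Realm',
    grantee       => 'HR',
    rule_set_name => NULL,
    auth_options  => DBMS_MACUTL.G_REALM_AUTH_OWNER);
END;
/
}}}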
Reverse Engineering a Resource Manager Plan From Within the Database (Doc ID 1388634.1)
{{{
Connect as sysdba and run ...
SQL> connect / as sysdba
SQL>@get_plan_v2(10g).sql
}}}
{{{
connect / as sysdba
set long 10000
set linesize 250
set trimspool on
set feedback off
set serveroutput on
exec get_plan_v2 ('<plan_name>');
}}}
{{{
create or replace procedure get_plan_v1 (A_PLAN_NAME IN VARCHAR2) is
TYPE list_tab IS TABLE OF VARCHAR2(30) INDEX BY BINARY_INTEGER;
plan_list list_tab;
cons_grp_list list_tab;
t BINARY_INTEGER := 1;
cursor c_cons_grp is
select group_or_subplan, type
from DBA_RSRC_PLAN_DIRECTIVES
where plan = A_PLAN_NAME;
cursor c_cons_grp_map (V_CONS_GRP IN VARCHAR2) is
select attribute, value
from DBA_RSRC_GROUP_MAPPINGS
where consumer_group = V_CONS_GRP;
cursor c_cons_grp_privs (V_CONS_GRP IN VARCHAR2) is
select grantee, grant_option
from DBA_RSRC_CONSUMER_GROUP_PRIVS
where granted_group = V_CONS_GRP;
begin
dbms_output.put_line ('execute dbms_resource_manager.create_pending_area;');
dbms_output.put_line ( dbms_metadata.GET_DDL ('RMGR_PLAN',A_PLAN_NAME) );
plan_list(1) := A_PLAN_NAME;
for i in 1..plan_list.COUNT loop
for r_cons_grp in c_cons_grp loop
if r_cons_grp.type = 'CONSUMER_GROUP'
then cons_grp_list(t) := r_cons_grp.group_or_subplan;
t:=t+1;
dbms_output.put_line ( dbms_metadata.GET_DDL ('RMGR_CONSUMER_GROUP',r_cons_grp.group_or_subplan) );
elsif r_cons_grp.type = 'PLAN' then
plan_list(i+1) := r_cons_grp.group_or_subplan;
end if;
end loop;
end loop;
for i in 1..plan_list.COUNT loop
dbms_output.put_line ( dbms_metadata.GET_DEPENDENT_DDL ('RMGR_PLAN_DIRECTIVE',plan_list(i)) );
end loop;
-- Consumer group mappings
for i in 1..cons_grp_list.COUNT loop
for r_cons_grp_map in c_cons_grp_map(cons_grp_list(i)) loop
dbms_output.put_line ('execute DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING (ATTRIBUTE=>''' || r_cons_grp_map.attribute || ''', VALUE=>''' || r_cons_grp_map.value || ''', CONSUMER_GROUP=>''' || cons_grp_list(i) || ''');');
end loop;
end loop;
-- consumer group privileges
for i in 1..cons_grp_list.COUNT loop
for r_cons_grp_privs in c_cons_grp_privs(cons_grp_list(i)) loop
dbms_output.put_line ('execute DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP (GRANTEE_NAME=>''' || r_cons_grp_privs.grantee || ''', CONSUMER_GROUP=>''' || cons_grp_list(i) || ''', GRANT_OPTION=> ''' || r_cons_grp_privs.grant_option || ''');');
end loop;
end loop;
dbms_output.put_line ('execute dbms_resource_manager.submit_pending_area;');
end;
/
}}}
<<showtoc>>
! first rewrite avoids the group by
{{{
# BEFORE
VAR BAN VARCHAR2(32)
VAR CHG_ACTV_KEY VARCHAR2(32)
VAR CHG_ACTV_KEY VARCHAR2(32)
EXEC :BAN := '302034860';
EXEC :CHG_ACTV_KEY := '';
EXEC :CHG_ACTV_KEY := '4937';
select /*+ monitor */
b.ban,
b.cycle_run_month,
b.cycle_run_year
from dim.delta_bill_pull_tbl_glt b,
( select
ban,
actv_bill_seq_no
from dim.delta_charge_pull_tbl_glt
where ban = (select to_ban
from ( select ban, max(ent_seq_no) ent_seq_no
from dim.delta_charge_pull_tbl_glt
where ban = :ban and to_ent_seq_no = :chg_actv_key
group by ban ) a,
dim.delta_charge_pull_tbl_glt d
where a.ban = d.ban
and a.ent_seq_no = d.ent_seq_no)
and ent_seq_no = :chg_actv_key ) c
where
b.ban = c.ban
and b.bill_seq_no = c.actv_bill_seq_no
and rownum=1
/
# AFTER
VAR BAN VARCHAR2(32)
VAR CHG_ACTV_KEY VARCHAR2(32)
VAR CHG_ACTV_KEY VARCHAR2(32)
EXEC :BAN := '302034860';
EXEC :CHG_ACTV_KEY := '';
EXEC :CHG_ACTV_KEY := '4937';
select /*+ monitor */
b.ban,
b.cycle_run_month,
b.cycle_run_year
from dim.delta_bill_pull_tbl_glt b,
( select
ban,
actv_bill_seq_no
from dim.delta_charge_pull_tbl_glt
where ban = (select d.to_ban
from dim.delta_charge_pull_tbl_glt d
where d.ban = :ban
and d.ent_seq_no = (select max(ent_seq_no)
from dim.delta_charge_pull_tbl_glt
where ban = :ban
and to_ent_seq_no = :chg_actv_key))
and ent_seq_no = :chg_actv_key ) c
where
b.ban = c.ban
and b.bill_seq_no = c.actv_bill_seq_no
and rownum=1
/
}}}
! another example is to split this into two SQLs
{{{
VARIABLE to_ban NUMBER
BEGIN
SELECT to_ban
INTO :to_ban
FROM dim_h.delta_charge_pull_tbl_glt d
WHERE d.ban = :ban
AND d.ent_seq_no = (SELECT MAX(ent_seq_no)
FROM dim_h.delta_charge_pull_tbl_glt
WHERE ban = :ban
AND to_ent_seq_no = :chg_actv_key);
EXCEPTION
WHEN NO_DATA_FOUND THEN
:to_ban := NULL;
END;
/
SELECT /*+ monitor */
b.ban,
b.cycle_run_month,
b.cycle_run_year
FROM dim_h.delta_bill_pull_tbl_glt b
, dim_h.delta_charge_pull_tbl_glt c
WHERE b.ban = c.ban
AND b.bill_seq_no = c.actv_bill_seq_no
AND c.ban = :to_ban
AND c.ent_seq_no = :chg_actv_key
AND ROWNUM = 1;
}}}
! using gluent present, push it down to hadoop
{{{
./present -t dim.gl_delta_bill_charge_pull_join -xf \
--present-join="table(dim.ens_bill_gl_detail_t_glt) alias(gl) \
--present-join="table(dim.delta_adjustment_pull_tbl_glt) alias(adj) outer-join(gl) join-clauses(adj.ban = gl.ban, adj.ar_actv_key = gl.ent_seq_no, adj.ar_actv_seq_no = gl.actv_seq_no) project(*)" \
--present-join="table(dim.delta_charge_pull_tbl_glt) alias(chg) outer-join(gl) join-clauses(chg.ban = gl.ban, chg.ar_actv_key = gl.ent_seq_no, chg.ar_actv_seq_no = gl.actv_seq_no) project(*)"
}}}
<<showtoc>>
! rgb from image
http://imagecolorpicker.com/
! rgb hex
http://www.rgbhex.com/
http://en.wikipedia.org/wiki/Riser_card
riser card and bus extenders http://www.orbitmicro.com/company/blog/87
roofline model
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=roofline%20model
fast fourier transform
https://www.youtube.com/results?search_query=fast+fourier+transform
LOESS and LOWESS (locally weighted scatterplot smoothing)
http://en.wikipedia.org/wiki/Local_regression
monte carlo simulation
https://www.youtube.com/results?search_query=monte+carlo+simulation
coin toss, dice toss
http://utopia.knoware.nl/~hlub/rlwrap/ <-- main distribution
http://blog.enkitec.com/2010/04/using-sqlplus-with-rlwrap-on-ms-windows/
http://www.oracle-base.com/articles/linux/rlwrap.php
http://sysdba.wordpress.com/2006/10/08/how-to-use-rlwrap-to-get-a-command-history-in-sqlplus/
http://martincarstenbach.wordpress.com/2011/09/01/compiling-rlwrap-for-oracle-linux-6-1-x86-64/
https://forums.oracle.com/forums/thread.jspa?threadID=2148571 <-- install guide
Dell EMC Oracle RMAN Agent
https://www.delltechnologies.com/en-us/collaterals/unauth/technical-guides-support-information/docu85247.pdf
<<showtoc>>
I executed the ACTIVE DUPLICATE again with 16 channels on both target and auxiliary and it's still doing the same IO rate of 250-450 MB/s.
Then, after reading the official Oracle docs, I found out that the ACTIVE DUPLICATE "optimizations" are only available starting 12cR1.
So the only way 11gR2 can do it is via RMAN IMAGE COPY, which is serial and slow; and by the way, the RMAN PARALLEL (multisection) IMAGE COPY optimization is also only available starting 12cR1.
https://docs.oracle.com/database/121/BRADV/rcmdupdb.htm#GUID-213A5741-5C93-42A8-BC2A-EE5412D8C2BA
• Compressing Backup Sets Used to Perform Active Database Duplication
• Creating Backup Sets in Parallel During Active Database Duplication
https://docs.oracle.com/database/121/BRADV/rcmbckba.htm#BRADV665
• Making Multisection Backups Using Image Copies
Computing the overall elapsed time of the ACTIVE DUPLICATE from the IO rate and the size of the database, it will finish in approximately 15 hours, and this can't be optimized any further.
* Duplicate hours = Database size (GB) / ((IO rate MB/s / 1000) * 3600 sec)
** 22000GB / ((400/1000)*3600) = ~15 hours
If we stick with the restore, it finishes in approximately 3 hours, but that backup first needs to come from the HP side and be transferred to the ZFS mount point, so we need to add that time to the 3 hours.
* Restore hours = Database size (GB) / (IO rate GB/s * 3600 sec)
** 22000GB / (3*3600) = ~2 hours
* Recover = Restore / 2
** 2 / 2 = 1 hour
* Restore + Recover = 3 hours
The rman backup and restore route gives you better bandwidth. I don’t know what they meant by 6 hours window? Is it cloning within SSC? Or cloning from HP to SSC?
If that’s within SSC then the rman backup/restore is the best choice to beat the 6 hours window.
! IO RATE - RMAN ACTIVE DUPLICATE (~250MB/s read : ~450MB/s write)
{{{
. oraenv
+ASM1
• On FHL zone (source – more on reads)
./asm_metrics.pl -show=dbinst -display=snap,avg -dg=DATAECCSTG -interval=5 -sort_field=iops
............................
Collecting 5 sec....
............................
......... SNAP TAKEN AT ...................
11:44:52 Kby Avg AvgBy/ Kby Avg AvgBy/
11:44:52 INST DBINST DG FG DSK Reads/s Read/s ms/Read Read Writes/s Write/s ms/Write Write
11:44:52 ------ ----------- ----------- ---------- ---------- ------- ------- ------- ------ ------ ------- -------- ------
11:44:52 FHL 254 243968 15.6 983556 1 19 0.2 16384
11:44:52 dbm072 3 48 0.3 16384 2 21 0.4 13440
11:44:52 dbm071 3 48 0.6 16384 1 19 0.5 16384
11:44:52 EDIS1 0 0 0.0 0 0 3 0.5 8192
11:44:52 BSIT1 0 0 0.0 0 0 0 0.0 0
11:44:52 BSIT2 0 0 0.0 0 0 0 0.0 0
11:44:52 VRTXT1 0 0 0.0 0 0 0 0.0 0
11:44:52 VRTXT2 0 0 0.0 0 0 0 0.0 0
......... AVERAGE SINCE ...................
11:42:43 Kby Avg AvgBy/ Kby Avg AvgBy/
11:42:43 INST DBINST DG FG DSK Reads/s Read/s ms/Read Read Writes/s Write/s ms/Write Write
11:42:43 ------ ----------- ----------- ---------- ---------- ------- ------- ------- ------ ------ ------- -------- ------
11:42:43 FHL 238 227586 15.6 978632 1 23 0.4 16083
11:42:43 dbm072 4 56 0.5 16302 2 27 0.4 12300
11:42:43 dbm071 4 57 0.5 16304 1 20 0.3 14717
11:42:43 VRTXT2 0 0 0.0 0 0 2 0.9 8192
11:42:43 BSIT2 0 0 0.0 0 0 2 1.3 8192
11:42:43 EDIS1 0 0 0.0 0 0 1 0.4 8192
• On RHL zone (destination – more on writes)
./asm_metrics.pl -show=dbinst -display=snap,avg -dg=DATARHLDR -interval=5 -sort_field=iops
............................
Collecting 5 sec....
............................
......... SNAP TAKEN AT ...................
11:45:18 Kby Avg AvgBy/ Kby Avg AvgBy/
11:45:18 INST DBINST DG FG DSK Reads/s Read/s ms/Read Read Writes/s Write/s ms/Write Write
11:45:18 ------ ----------- ----------- ---------- ---------- ------- ------- ------- ------ ------ ------- -------- ------
11:45:18 RHL 0 0 0.0 0 425 435005 10.7 1047120
11:45:18 dbm092 4 58 0.5 16384 4 24 0.4 5539
11:45:18 dbm091 2 32 0.5 16384 1 19 0.3 16384
......... AVERAGE SINCE ...................
11:43:12 Kby Avg AvgBy/ Kby Avg AvgBy/
11:43:12 INST DBINST DG FG DSK Reads/s Read/s ms/Read Read Writes/s Write/s ms/Write Write
11:43:12 ------ ----------- ----------- ---------- ---------- ------- ------- ------- ------ ------ ------- -------- ------
11:43:12 RHL 0 3 0.3 16384 397 405231 10.8 1045605
11:43:12 dbm091 3 55 0.4 16308 1 19 0.5 14751
11:43:12 dbm092 3 49 0.5 16298 2 19 0.4 11725
}}}
! IO RATE - RMAN RESTORE (~3-6 GB/s write)
{{{
. oraenv
+ASM1
./asm_metrics.pl -show=dbinst -display=snap,avg -dg=DATAQA -interval=5 -sort_field=iops
............................
Collecting 5 sec....
............................
00:31:38 Kby Avg AvgBy/ Kby Avg AvgBy/
00:31:38 INST DBINST DG FG DSK Reads/s Read/s ms/Read Read Writes/s Write/s ms/Write Write
00:31:38 ------ ----------- ----------- ---------- ---------- ------- ------- ------- ------ ------ ------- -------- ------
00:31:38 SHL 24 2541 0.8 108158 3375 3450354 8.5 1046997
00:31:38 SGT 1247 29604 0.8 24309 431 6889 0.5 16384
00:31:38 dbm042 4 57 0.5 16294 2 27 0.4 11150
00:31:38 dbm041 4 60 0.6 16299 2 23 0.6 15059
00:31:38 SGR 1 21 0.6 16384 2 23 0.6 14832
00:31:38 SBO 0 0 0.0 0 0 0 0.5 8192
............................
Collecting 5 sec....
............................
00:31:38 Kby Avg AvgBy/ Kby Avg AvgBy/
00:31:38 INST DBINST DG FG DSK Reads/s Read/s ms/Read Read Writes/s Write/s ms/Write Write
00:31:38 ------ ----------- ----------- ---------- ---------- ------- ------- ------- ------ ------ ------- -------- ------
00:31:38 SHL 24 2522 0.8 108158 3349 3423813 8.5 1046994
00:31:38 SGT 1249 29660 0.8 24309 431 6903 0.5 16384
00:31:38 dbm042 4 57 0.5 16295 2 27 0.4 11169
00:31:38 dbm041 4 60 0.6 16293 2 23 0.6 15038
00:31:38 SGR 1 21 0.6 16384 2 23 0.6 14841
00:31:38 SBO 0 0 0.0 0 0 0 0.5 8192
............................
Collecting 5 sec....
............................
}}}
! references
https://docs.oracle.com/database/121/BRADV/rcmdupdb.htm#BRADV298
https://oracle-base.com/articles/12c/recovery-manager-rman-database-duplication-enhancements-12cr1#active-database-duplication-using-backup-sets
Known issues with RMAN DUPLICATE FROM ACTIVE DATABASE (Doc ID 1366290.1)
https://docs.oracle.com/database/121/BRADV/rcmbckba.htm#BRADV665
https://uhesse.com/2015/06/29/multisection-backup-for-image-copies/
https://oracle-base.com/articles/12c/recovery-manager-rman-enhancements-12cr1#multisection
https://oracle-base.com/articles/11g/rman-enhancements-11gr1#multisection_backups
Use RMAN to relocate a 10TB RAC database with minimum downtime
http://www.nyoug.org/Presentations/2011/September/Zuo_RMAN_to_Relocate.pdf
http://www.technicalconferencesolutions.com/pls/caat/caat_reg_schedules_upd.login
RocksDB: A High Performance Embedded Key-Value Store for Flash Storage - Data at Scale
https://www.youtube.com/watch?v=V_C-T5S-w8g&t=843s
ORA-24795 ROLLBACK Error with Informatica BULK load http://karteekblog.blogspot.com/2009/12/ora-24795-rollback-error-with.html
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch03_:_Linux_Networking <-- good stuff
http://www.cyberciti.biz/faq/linux-setup-default-gateway-with-route-command/
http://www.cyberciti.biz/tips/configuring-static-routes-in-debian-or-red-hat-linux-systems.html
http://www.cyberciti.biz/faq/howto-linux-configuring-default-route-with-ipcommand/
https://technology.amis.nl/2006/06/06/building-a-route-planner-with-oracle-sql-finding-the-quickest-route-in-a-graph/
connection scan algorithm https://medium.com/@assertis/so-you-want-to-build-a-journey-planner-f99bfa8d069d
http://www.jlcomp.demon.co.uk/faq/shortest_distance.html
https://en.wikipedia.org/wiki/Shortest_path_problem
row by row VS batch on latency
http://highscalability.com/blog/2013/12/4/how-can-batching-requests-actually-reduce-latency.html
<<showtoc>>
! troubleshooting steps
{{{
1) check alert log for this error message
Wed Sep 21 13:39:19 2011 > WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=37
System State dumped to trace file /oracle/diag/rdbms/..../.trc
2) check for v$session
select sid,username,sql_id,event current_event,p1text,p1,p2text,p2,p3text,p3
from v$session
where event='row cache lock'
/
3) check for ASH
select *
from dba_hist_active_sess_history
where sample_time between to_date('26-MAR-14 12:49:00','DD-MON-YY HH24:MI:SS')
and to_date('26-MAR-14 12:54:00','DD-MON-YY HH24:MI:SS')
and event = 'row cache lock'
order by sample_id
/
4) check SGA resize operations
select component, oper_type, initial_size, final_size, to_char (start_time, 'dd/mm/yy hh24:mi') start_date, to_char (end_time, 'dd/mm/yy hh24:mi') end_date
from v$memory_resize_ops
where status = 'complete'
order by start_time desc, component
/
5) get enqueue type
select *
from v$rowcache
where cache# IN (select P1
from dba_hist_active_sess_history
where sample_time between to_date('26-MAR-14 12:49:00','DD-MON-YY HH24:MI:SS')
and to_date('26-MAR-14 12:54:00','DD-MON-YY HH24:MI:SS')
and event = 'row cache lock' )
/
-- get most affected latch
select latch#, child#, sleeps
from v$latch_children
where name='row cache objects'
and sleeps > 0
order by sleeps desc;
-- get detailed latch miss info
select "WHERE", sleep_count, location
from v$latch_misses
where parent_name='row cache objects'
and sleep_count > 0;
6) ash wait chains (e.g. Tanel Poder's ash_wait_chains.sql)
7) ash query dump
select *
from dba_hist_active_sess_history
where sample_time between to_date('26-MAR-14 12:49:00','DD-MON-YY HH24:MI:SS')
and to_date('26-MAR-14 12:54:00','DD-MON-YY HH24:MI:SS')
and event = 'row cache lock' order by sample_id
/
8) last resort
conn / as sysdba
alter session set max_dump_file_size=unlimited;
alter session set events 'immediate trace name SYSTEMSTATE level 266';
alter session set events 'immediate trace name SYSTEMSTATE level 266';
alter session set events 'immediate trace name SYSTEMSTATE level 266';
}}}
! modes
WAITEVENT: "row cache lock" Reference Note (Doc ID 34609.1)
<<<
Parameters:
P1 = cache - ID of the dictionary cache
P2 = mode - Mode held
P3 = request - Mode requested
cache - ID of the dictionary cache
Row cache lock we are waiting for. Note that the actual CACHE# values differ between Oracle versions. The cache can be found using this select - "PARAMETER" is the cache name:
SELECT cache#, type, parameter
FROM v$rowcache
WHERE cache# = &P1
;
In a RAC environment the row cache locks use global enqueues of type "Q[A-Z]" with the lock id being the hashed object name.
mode - Mode held
The mode the lock is currently held in:
KQRMNULL 0 null mode - not locked
KQRMS 3 share mode
KQRMX 5 exclusive mode
KQRMFAIL 10 fail to acquire instance lock
request - Mode requested
The mode the lock is requested in:
KQRMNULL 0 null mode - not locked
KQRMS 3 share mode
KQRMX 5 exclusive mode
KQRMFAIL 10 fail to acquire instance lock
<<<
! references
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-library-cache#TOC-Row-Cache <-- good stuff
http://www.dadbm.com/database-hang-row-cache-lock-concurrency-troubleshooting/ <-- good stuff
http://afatkulin.blogspot.com/2010/06/row-cache-objects-latch-contention.html
http://logicalread.solarwinds.com/oracle-row-cache-lock-wait-dr01/#.VwKivuIrLwc
http://surachartopun.com/2009/11/investigate-row-cache-lock.html
http://savvinov.com/2014/07/14/row-cache-lock/
https://aprakash.wordpress.com/2010/05/07/row-cache-lock-an-interesting-case/
how to resolve row cache lock on dc_segments , dc_tablespace_quotas https://community.oracle.com/thread/2403827?tstart=0
Finding out the row cache lock holder through V$ view https://dioncho.wordpress.com/2009/10/15/finding-out-the-row-cache-lock-holder-through-v-view/
http://www.dataintegration.ninja/loading-star-schema-dimensions-facts-in-parallel/
<<<
the MD5 hash is computed over the columns outside the primary key
<<<
! ORA_HASH
https://gerardnico.com/db/oracle/ora_hash
https://stackoverflow.com/questions/19226040/how-to-generate-hashkey-using-multiple-columns-in-oracle-could-you-please-give
https://danischnider.wordpress.com/2017/01/24/how-to-build-hash-keys-in-oracle/
https://auth0.com/blog/adding-salt-to-hashing-a-better-way-to-store-passwords/
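The gist of those links, as a quick sketch (table and column names are hypothetical); concatenating with a delimiter avoids ('AB','C') vs ('A','BC') collisions:
{{{
-- ORA_HASH: fast, returns a bucket number (0..2^32-1 by default), not collision-safe
select ora_hash(cust_id || '|' || cust_name || '|' || cust_city) hash_key
from   customers;
-- STANDARD_HASH (12c+): a real digest, better suited for hash keys / change detection
select standard_hash(cust_id || '|' || cust_name || '|' || cust_city, 'MD5') md5_key
from   customers;
}}}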
! dbms_random.string
<<<
I added dbms_random.string('X',15) on the columns of ASH to get rid of duplicate rows on metric extension https://github.com/karlarao/gvash_to_csv/blob/master/Metric.Extension.-.gvash.to.pdf
<<<
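For reference, a one-liner sketch of that trick; the idea is just to tag each sampled row with a random suffix column so otherwise-identical rows stay distinct:
{{{
select dbms_random.string('X', 15) rand_sfx, s.*
from   v$active_session_history s
where  rownum <= 5;
}}}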
.
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-enqueues
Analysing Row Lock Contention with LogMiner http://antognini.ch/2012/03/analysing-row-lock-contention-with-logminer/
TX Transaction and Enq: Tx - Row Lock Contention - Example wait scenarios [ID 62354.1]
How to Determine The Lock Type and Mode from an Enqueue Wait [ID 413934.1]
https://forums.oracle.com/forums/thread.jspa?threadID=860488
http://jonathanlewis.wordpress.com/2009/07/26/empiricism/
http://www.freelists.org/post/oracle-l/Tx-row-lock-contention-after-implementing-transaction-management-in-application-server,6
http://www.orafaq.com/maillist/oracle-l/2002/07/11/0881.htm
http://dioncho.wordpress.com/2009/05/07/tracking-the-bind-value/
http://stackoverflow.com/questions/8826726/getting-the-value-that-fired-the-oracle-trigger, https://community.oracle.com/thread/1126953 <- trigger stuff
http://oracle-base.com/articles/8i/logminer.php#QueryingLogInfo
http://www.idevelopment.info/data/Oracle/DBA_tips/LogMiner/LOGMINER_15.shtml
http://www.oracleflash.com/24/How-to-read-or-analyze-redo-log-files-using-LogMiner.html
http://dboptimizer.com/2012/11/30/enqueue-is-it-a-pk-fk-or-bitmap-index-problem/
<<<
One distinguishing factor is the lock mode. If the lock mode is exclusive (mode 6) then it's most likely a classic row lock where two sessions are trying to modify the same row. On the other hand, if the lock mode is share (mode 4) it's typically going to be one of:
inserting a unique key when someone else has already inserted that key but not committed
inserting a foreign key when the parent value has been inserted but not committed, or deleted and not committed (not to be confused with locks due to un-indexed foreign keys, which cause an "enq: TM - contention" wait, not a TX wait)
bitmap index chunk contention
<<<
Automatic Locks in DML Operations https://docs.oracle.com/database/121/SQLRF/ap_locks001.htm#SQLRF55502
Data Concurrency and Consistency https://docs.oracle.com/cd/B28359_01/server.111/b28318/consist.htm#CNCPT020
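Handy alongside the notes above: decoding the lock type and mode straight from P1 of the enqueue wait (the standard decode from Doc ID 413934.1; the two high bytes hold the enqueue name, the low bytes the requested mode):
{{{
select sid, event,
       chr(bitand(p1, -16777216) / 16777216) ||
       chr(bitand(p1,  16711680) / 65536)      lock_type,
       bitand(p1, 65535)                       lock_mode
from   v$session
where  event like 'enq:%';
}}}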
<<<
When it started:
1. I looked at our stats reports starting Mar 1st; I see this pattern first started on Apr 12 2017, the 12:47 spike issue. Attaching the grid report for reference.
You can check what new business process was enabled to send this new data for processing from that day, Apr 12th.
Why is it happening:
There are several sessions trying to access the same record/rowid to grab the same sequence number with a FOR UPDATE clause, causing enq: TX - row lock contention for each session and delaying commits.
Record/rowid: PRODDTA.F00022/ UKOBNM=F554201B
Why is it happening only at 12:47 and not at other times, even though this is normal for E1?
Based on my sample analysis with today's data, the number of sessions/new sequences generated for F554201B was:
-- between 5:28 PM and 6:28 PM: 1966 requests in 1 hr, approx. 32 requests/minute
-- but from 12:50 to 12:58 PM: 1093 requests in 8 minutes, almost 136 requests/minute, i.e. 3-4 times more volume..
I am attaching an Excel file with all the number details for review; it has 2 tabs..
Seems Oracle is not able to scale that fast the seq. number generation..
Also I see on the DB side it is enabling the GTX background process when the application is using XA transactions. Not sure if these XA transactions play any role during this high-load transaction mix processing.
<<<
{{{
Time      Seq-num   No.Changes/min
5:28 PM   9737421
5:29      9737450   29
5:30      9737550   100
5:31      9737597   47
5:32      9737650   53
In 4 min it generated 229
6:28 PM   9739387
In 1 hr it generated 1966
}}}
<<<
The analysis is sound up to the point of "seems Oracle is not able to scale that fast the seq. number generation"; this is incorrect.
We are not talking about sequence numbers in the strict sense of a sequence object ("CREATE SEQUENCE ..."); if that were the case then the previous statement would have been accurate.
Here the sequence is an application-generated sequence implemented via serialized access to a row; while this approach allows you to have strictly consecutive and never "wasted" (application-generated) sequence numbers, it does not scale by definition.
In order for the sequence to be consecutive (and not "wasted") the application introduced a serialization point (the lock of the row with the FOR UPDATE clause) that is equivalent to a door where a single person can get through at any point in time; the door works well with normal traffic but it becomes a bottleneck during Black Friday.
There are mainly two possible causes for this issue:
1. 12.47PM (or around that time) is your Black Friday
OR
2. One of the transactions requesting the sequence number is taking longer now ("to go through the door", aka while holding the row locked), causing a queue behind it at the door
In both scenarios the database is a bit of a victim (of application design / workload) and it's actually putting in place safety nets (locks) in order to make sure data integrity is preserved.
The goal at this point would be to figure out if the system falls into #1 or #2 from the previous list.
I'm assuming by your "nothing changed" that the workload didn't change either (#1), is that the case? Can you confirm the load is pretty much the same?
If yes then #2 might be the case here, where a transaction that used to take X seconds now takes Y seconds (with Y >> X), thus causing sessions to pile up behind it.
For full disclosure, there might also be some other exotic issues going on, but from the initial description of the problem it doesn't sound like it, so I suggest starting with the basics first :-)
<<<
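To illustrate the "door", here's a sketch of the next-number pattern being described, using a hypothetical app_next_numbers table (the real one in this case is PRODDTA.F00022), plus the scalable alternative when gap-free numbers are not truly required:
{{{
DECLARE
  l_next app_next_numbers.next_no%TYPE;
BEGIN
  -- the "door": every session queues on this row lock
  -- (shows up as enq: TX - row lock contention)
  SELECT next_no INTO l_next
  FROM   app_next_numbers
  WHERE  obj_name = 'F554201B'
  FOR UPDATE;
  UPDATE app_next_numbers
  SET    next_no = next_no + 1
  WHERE  obj_name = 'F554201B';
  COMMIT;  -- the row lock is held until here
END;
/
-- the scalable alternative, if gaps are acceptable:
-- CREATE SEQUENCE f554201b_seq CACHE 1000;
-- SELECT f554201b_seq.NEXTVAL FROM dual;
}}}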
! generate ash report
! instrument with ash (hist)
{{{
define _start_time='09/26/2012 15:00'
define _end_time='09/26/2012 16:00'
-- QUERY MODULE
col program format a40
col module format a20
col event format a30
select count(*), session_id, program, module, CLIENT_ID, sql_id, event, mod(p1,16) as lock_mode, BLOCKING_SESSION from
dba_hist_active_sess_history
where sample_time
between to_date('&_start_time', 'MM/DD/YY HH24:MI')
and to_date('&_end_time', 'MM/DD/YY HH24:MI')
and session_id in (sid1,sid2,sid3) -- walk the chain of blockers
and client_id = '10094003'
group by session_id, program, module, CLIENT_ID, sql_id, event, mod(p1,16), BLOCKING_SESSION
order by 1 asc;
}}}
! with v$active_session_history
{{{
define _start_time='04/20/2017 15:15'
define _end_time='04/20/2017 16:00'
-- QUERY MODULE
col program format a40
col module format a20
col event format a30
select count(*), session_id, program, module, CLIENT_ID, sql_id, event, mod(p1,16) as lock_mode, BLOCKING_SESSION from
v$active_Session_history
where sample_time
between to_date('&_start_time', 'MM/DD/YY HH24:MI')
and to_date('&_end_time', 'MM/DD/YY HH24:MI')
group by session_id, program, module, CLIENT_ID, sql_id, event, mod(p1,16), BLOCKING_SESSION
order by 1 asc;
}}}
! with v$active_session_history last 10 minutes
{{{
set lines 300
col program format a40
col module format a20
col event format a30
select count(*), session_id, program, module, CLIENT_ID, sql_id, event, mod(p1,16) as lock_mode, BLOCKING_SESSION from
v$active_Session_history
where SAMPLE_TIME > sysdate - 10/1440
group by session_id, program, module, CLIENT_ID, sql_id, event, mod(p1,16), BLOCKING_SESSION
order by 1 asc;
}}}
! at least get the bind capture
{{{
col name format a10
col position format 99
col value_string format a20
select snap_id, name, position, value_string, last_captured
from dba_hist_sqlbind
where sql_id = '&sql_id'
order by 5 asc;
}}}
http://lefterhs.blogspot.com/2010/12/10g-use-vsqlbindcapture-to-find-bind.html
https://dioncho.wordpress.com/2009/05/07/tracking-the-bind-value/
http://dba.stackexchange.com/questions/18569/how-to-troubleshoot-enq-tx-row-lock-contention
http://blog.tanelpoder.com/2010/10/18/read-currently-running-sql-statements-bind-variable-values/
http://tech.e2sn.com/oracle/troubleshooting/oracle-s-real-time-sql-monitoring-feature-v-sql_monitor
''How To Extract an RPM Package Without Installing It (rpm extract command)''
mkdir checkrpm
cd checkrpm
rpm2cpio ../php-5.1.4-1.esp1.x86_64.rpm | cpio -idmv
http://www.cyberciti.biz/tips/how-to-extract-an-rpm-package-without-installing-it.html
http://www.oracle.com/technetwork/articles/servers-storage-admin/extractingfilesrpm-444871.html
''rpm query format''
{{{
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" binutils
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" compat-db
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" compat-libstdc++-33
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" compat-libstdc++-296
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" control-center
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" gcc
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" gcc-c++
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" glibc
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" glibc-common
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" glibc-devel
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" gnome-libs
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" libaio
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" libgcc
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" libstdc++
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" libstdc++-devel
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" make
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" pdksh
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" sysstat
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" compat-gcc-32
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" compat-gcc-32-c++
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" compat-libstdc++-33
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" compat-libstdc++-296
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" openmotif
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" openmotif21
}}}
''Command options''
{{{
-avzWu --delete --dry-run
-a
Recursive mode
Preserves symbolic links
Preserves permissions
Preserves timestamp
Preserves owner and group
-v verbose
-z is to enable compression
-W transfer the whole file, will speed-up the rsync process, as it doesn’t have to perform the checksum at the source and destination
-u do not overwrite a file at the destination, if it is modified
--delete If a file is not present at the source, but present at the target, you might want to delete the file at the target during rsync.
rsync delete option deletes files that are not there in source directory.
--dry-run simulation, see what files *would* be copied or deleted without actually doing the action
}}}
''Sample commands''
{{{
-- to do a full sync from the source and delete any new file on target
rsync -avzWu --delete --dry-run /root/backup/rsynctest root@mybooklive2:/root/backup
rsync -avzWu --delete /root/backup/rsynctest root@mybooklive2:/root/backup
-- to do a full sync from the source.. this creates the rsynctest folder inside the backup directory
rsync -avzWu --dry-run /root/backup/rsynctest root@mybooklive2:/root/backup
rsync -avzWu /root/backup/rsynctest root@mybooklive2:/root/backup
-- to do a full sync from the source.. this creates the subfolders of documents inside the mybooklive2 directory
rsync -avzWu /root/documents/ root@mybooklive:/root/backup/mybooklive2
}}}
''Putting it in cron''
{{{
MyBookLive:~/backup/mybooklive2# crontab -l
# m h dom mon dow command
# */1 * * * * /root/rsync.sh
MyBookLive:~/backup/mybooklive2# cat /root/rsync.sh
rsync -avzWu /root/backup/rsynctest root@mybooklive2:/root/backup > ~/log/rsynclog-$(date +%Y%m%d%H%M).log &
}}}
Rsync Backup To My Book Live http://netcodger.wordpress.com/?cat=52804
http://www.thegeekstuff.com/2010/09/rsync-command-examples/ <-- GOOOOOD STUFF
http://www.comentum.com/rsync.html
<<<
Use of "/" at the end of path:
When using "/" at the end of source, rsync will copy the content of the last folder.
When not using "/" at the end of source, rsync will copy the last folder and the content of the folder.
When using "/" at the end of destination, rsync will paste the data inside the last folder.
When not using "/" at the end of destination, rsync will create a folder with the last destination folder name and paste the data inside that folder.
Also using "/" vs "/*"
the "/" will copy any hidden folder
<<<
http://www.mikerubel.org/computers/rsync_snapshots/
https://wiki.archlinux.org/index.php/Full_System_Backup_with_rsync
{{{
#!/bin/sh
START=$(date +%s)
rsync -aAXv /* $1 --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found,/home/*/.gvfs,/var/lib/pacman/sync/*}
FINISH=$(date +%s)
echo "total time: $(( ($FINISH-$START) / 60 )) minutes, $(( ($FINISH-$START) % 60 )) seconds"
touch $1/"Backup from $(date '+%A, %d %B %Y, %T')"
# ~/Scripts/backup.sh /some/destination
}}}
-- another sample script.. don't use this
{{{
#!/bin/bash
#
# A backup script based on Rsync that pushes backups onto a NAS.
#
# Directories are rotated n times and rsync is called to
# place a new backup in 0.days_ago/
# Net Codger http://netcodger.wordpress.com 4/17/2012
# ----- Edit these variables to suit your environment -----
n=30                             # Number of backups to retain
NAS=MyBookLive                   # IP address or resolvable hostname of NAS
SrcDir=/home/NetCodger           # Directory to be backed up
DestDir=/shares/Linux/NetCodger  # Backup destination on NAS
# ----- End of edits -----
echo
echo =========================================================================
echo "Starting backup.sh....."
date
echo
# Delete the n'th backup.
echo Removing oldest backup.
ssh root@$NAS '[ -d '$DestDir'/'$n'.days_ago ] && rm -rf '$DestDir'/'$n'.days_ago'
# Rename backup directories to free up the 0 day directory.
ssh root@$NAS 'for i in {'$n'..1}; \
do [[ -d '$DestDir'/$(($i-1)).days_ago ]] && \
/bin/mv '$DestDir'/$(($i-1)).days_ago '$DestDir'/${i}.days_ago; done'
echo
# Run the Rsync command. Nice is used to prevent Rsync from hogging the CPU.
# --link-dest creates hard links so that each backup run appears as a full
# backup even though they only copy changed blocks since 1.days_ago
nice rsync -av \
--delete \
--link-dest=../1.days_ago \
$SrcDir root@$NAS:$DestDir/0.days_ago/
echo
echo =========================================================================
echo "Completed running backup.sh"
date
# rsync -avzWu --delete --dry-run
}}}
* ''download the rsync deb file here'' http://ipod-touch-max.ru/cydia/index.php?cat=package&id=9761#
* then stage it in the root dir
* then do dpkg -i rsync_3.0.5-3_iphoneos-arm.deb
* that's it!
* when you are doing the rsync, turn off the autolock because the lock will turn off the wifi and mess up the transfer
to show the diff
{{{
diff <(ssh krl 'ls -1aR /private/var/mobile/Media/DCIM') <(ssh mybooklive 'ls -1aR /root/backup/iphone/Photos')
}}}
to show the most recent copied file
{{{
find . -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort
}}}
<<<
this is what you'll see when you lock your phone and turn off the wifi.. when you get back home, just re-execute the rsync command..
{{{
2011-04-20 02:55:57.0000000000 ./103APPLE/IMG_3524.JPG
2011-04-20 02:55:58.0000000000 ./103APPLE/IMG_3525.JPG
2011-04-20 02:57:16.0000000000 ./103APPLE/IMG_3526.JPG
2011-04-20 02:57:24.0000000000 ./103APPLE/IMG_3527.JPG
2011-04-20 02:59:38.0000000000 ./103APPLE/IMG_3528.JPG
2012-10-31 03:45:49.1762334530 ./103APPLE/.IMG_3529.JPG.xida7l <-- so there's a lock file!
2012-11-05 23:04:13.0000000000 ./.MISC/Info.plist
}}}
another thing, if it get stuck then just kill the rsync process and rerun the rsync command
{{{
2012-10-31 03:53:04.6882287580 ./103APPLE/.IMG_3721.JPG.QTWwOf
2012-11-05 23:04:13.0000000000 ./.MISC/Info.plist
MyBookLive:~/backup/iphone/Photos# ps -ef | grep -i rsync
root 4924 8547 0 04:23 pts/2 00:00:00 tail -f log/rsync-iphone-201210310301.log
root 6718 21928 0 04:30 pts/0 00:00:00 grep -i rsync
root 21834 1 0 03:01 pts/2 00:00:00 rsync -avzWu --delete root@krl:/private/var/mobile/Media/DCIM/ /root/backup/iphone/Photos
root 21839 21834 3 03:01 pts/2 00:02:54 ssh -l root krl rsync --server --sender -vulWogDtprze.iL . /private/var/mobile/Media/DCIM/
root 21875 21834 1 03:01 pts/2 00:01:06 rsync -avzWu --delete root@krl:/private/var/mobile/Media/DCIM/ /root/backup/iphone/Photos
MyBookLive:~/backup/iphone/Photos# kill -9 21834 21875
MyBookLive:~/backup/iphone/Photos# ps -ef | grep -i rsync
root 4924 8547 0 04:23 pts/2 00:00:00 tail -f log/rsync-iphone-201210310301.log
root 6923 21928 0 04:31 pts/0 00:00:00 grep -i rsync
}}}
<<<
to execute the rsync
{{{
MyBookLive:~# cat rsync-iphone.sh
rsync -avzWu root@krl:/private/var/mobile/Media/DCIM/ /root/backup/iphone/Photos > ~/log/rsync-iphone-$(date +%Y%m%d%H%M).log &
}}}
http://justinhileman.info/article/how-to-back-up-your-iphone-with-rsync/
http://www.maclife.com/article/howtos/using_rsync_keep_your_files_sync_0
http://serverfault.com/questions/59140/how-do-diff-over-ssh
http://zuhaiblog.com/2011/02/14/using-diff-to-compare-folders-over-ssh-on-two-different-servers/
https://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories-on-a-vps
https://superuser.com/questions/544436/rsync-between-two-local-directories
http://serverfault.com/questions/43014/copying-a-large-directory-tree-locally-cp-or-rsync
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
<<showtoc>>
! ruby install
https://www.chrisjmendez.com/2016/05/05/installing-ruby-on-rails-on-osx-using-rbenv/
! rbenv
https://www.chrisjmendez.com/2016/12/02/how-do-you-install-multiple-rails-versions-with-rbenv-2/
https://www.chrisjmendez.com/2016/05/01/installing-ruby-on-rails-on-mac-using-rvm/
.
<<showtoc>>
! the full details
!! create ext table script and loading of data
!! test case of merge vs minus
check out this git commit https://github.com/karlarao/code_ninja/commits?author=karlarao&since=2016-11-02T04:00:00Z&until=2016-11-03T04:00:00Z
! summary of the process/what needs to be done
!! the data needs to be named "cpuwl_all.csv"
<<<
-rw-r--r--. 1 oracle oinstall 11503554 Sep 22 18:46 20160922_awr_cpuwl-tableau-irislte4-example.com.csv
-rw-r--r--. 1 oracle oinstall 11457611 Sep 27 13:43 20160927_awr_cpuwl-tableau-irislte2-example.com.csv
-rw-r--r--. 1 oracle oinstall 11130983 Oct 25 20:54 cpuwl_all.csv
<<<
!! bash file to remove the trash character
<<<
-rw-r--r--. 1 oracle oinstall 39 Oct 25 20:47 sed.sh
<<<
!! control file to "generate" the create external table command
<<<
-rw-r--r--. 1 oracle oinstall 1014 Oct 25 19:16 cpuwl.ctl
<<<
!! create/load the external table and data
creation of the external table only needs to be done once; you don't have to drop and recreate it every time you load new data, you just have to replace the file the external table points at (see the sketch after the file listing below)
<<<
-rw-r--r--. 1 oracle oinstall 4631 Oct 25 20:41 load_cpuwl.sql
<<<
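If the external table ever needs to be recreated from scratch, here's a minimal sketch of the DDL the control file would generate; the directory name and the column datatypes are assumptions here (the authoritative DDL comes from cpuwl.ctl), while the column list matches the MERGE below:
{{{
CREATE DIRECTORY cpuwl_dir AS '/home/oracle/cpuwl';   -- staging dir, adjust to your path
CREATE TABLE awr_cpuwl_ext (
  instname   VARCHAR2(64),
  db_id      NUMBER,
  hostname   VARCHAR2(64),
  id         NUMBER,
  tm         VARCHAR2(32),
  inst       NUMBER,
  dur        NUMBER,
  cpu        NUMBER,
  loadavg    NUMBER,
  aas_cpu    NUMBER,
  rsrcmgrpct NUMBER,
  oscpupct   NUMBER,
  oscpusys   NUMBER,
  oscpuio    NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY cpuwl_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    BADFILE 'cpuwl_all.bad'
    LOGFILE 'cpuwl_all.log_xt'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('cpuwl_all.csv')
)
REJECT LIMIT UNLIMITED;
-- to load a new extract, just swap the file in place, or repoint:
-- ALTER TABLE awr_cpuwl_ext LOCATION ('cpuwl_all.csv');
}}}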
!! the log file
<<<
-rw-r--r--. 1 oracle oinstall 8784 Nov 2 16:39 cpuwl_all.log_xt
<<<
!! the bad file
<<<
-rw-r--r--. 1 oracle oinstall 97 Nov 2 16:39 cpuwl_all.bad
<<<
! merge SQL
{{{
-- merge to main
MERGE INTO awr_cpuwl s1
USING awr_cpuwl_ext s0
  ON (
      s1.INSTNAME = s0.INSTNAME
  AND s1.DB_ID    = s0.DB_ID
  AND s1.HOSTNAME = s0.HOSTNAME
  AND s1.ID       = s0.ID
  AND s1.TM       = s0.TM
  AND s1.INST     = s0.INST
  )
WHEN MATCHED THEN
  UPDATE SET
  s1.DUR        = s0.DUR ,
  s1.CPU        = s0.CPU ,
  s1.LOADAVG    = s0.LOADAVG ,
  s1.AAS_CPU    = s0.AAS_CPU ,
  s1.RSRCMGRPCT = s0.RSRCMGRPCT,
  s1.OSCPUPCT   = s0.OSCPUPCT ,
  s1.OSCPUSYS   = s0.OSCPUSYS ,
  s1.OSCPUIO    = s0.OSCPUIO
WHEN NOT MATCHED THEN
  INSERT VALUES (
  s0.INSTNAME ,
  s0.DB_ID ,
  s0.HOSTNAME ,
  s0.ID ,
  s0.TM ,
  s0.INST ,
  s0.DUR ,
  s0.CPU ,
  s0.LOADAVG ,
  s0.AAS_CPU ,
  s0.RSRCMGRPCT,
  s0.OSCPUPCT ,
  s0.OSCPUSYS ,
  s0.OSCPUIO );
}}}
https://apexapps.oracle.com/pls/apex/f?p=44785:141:0::NO::P141_PAGE_ID,P141_SECTION_ID:119,870
2 Connection Strategies for Database Applications
https://docs.oracle.com/en/database/oracle/oracle-database/19/adfns/connection_strategies.html#GUID-90D1249D-38B8-47BF-9829-BA0146BD814A
Connection Pool Sizing and SmartDB / Connection Pool Sizing Concepts - ToonKoppelaars
https://www.youtube.com/watch?v=eiydITTdDAQ
! run connping in a shell script
{{{
[opc@katest connping]$ cat connping.sh
# set Oracle environment
export ORACLE_HOME=/home/opc/instantclient_19_15
export SQLPATH=$ORACLE_HOME
export LD_LIBRARY_PATH=$ORACLE_HOME
export TNS_ADMIN=/home/opc
export PATH="$ORACLE_HOME:$PATH"
# set connping environment
CONNPINGDIR="/home/opc/connping"
LOGNAME="connping.csv"
MAXSIZE="104857600" #100MB
DATE=$(date +%Y%m%d%H%M%S%N)
# log rotate connping file when size hits 100MB
cd $CONNPINGDIR
LOGNAME_SIZE=$(ls -l $LOGNAME | awk '{print $5}')
echo $LOGNAME_SIZE
if ((LOGNAME_SIZE >= MAXSIZE));
then
echo "$LOGNAME exceeds max threshold"
tar -cjvpf $LOGNAME-$DATE.bz2 $LOGNAME
rm $LOGNAME
else
echo "$LOGNAME is below max threshold"
fi
# run connping
./connping -l admin/"<passwordhere>"@kaadb_medium --period=55 --utctime --utcformat='YYYY/MM/DD HH24:MI:SS' >> $CONNPINGDIR/$LOGNAME
}}}
! example output
{{{
[opc@katest connping]$ ls -ltr
total 492
-rwxr-xr-x. 1 opc opc 486032 Sep 2 15:06 connping
-rw-r--r--. 1 opc opc 1916 Sep 2 22:30 connping.csv-20220902223002002271881.bz2
-rw-r--r--. 1 opc opc 5215 Sep 2 22:31 connping.csv
-rwxr-xr-x. 1 opc opc 816 Sep 2 22:31 connping.sh
}}}
! deploy in crontab
{{{
[opc@katest connping]$ crontab -l
*/1 * * * * sh /home/opc/connping/connping.sh
}}}
! cleanup for viz
{{{
echo "CONN,OCI,DUAL,SID,INST,UTCTIME" > connpingviz.csv
cat connping.csv | grep "time" | sed "s/connect://g ; s/ ociping://g ; s/ dualping://g ; s/ sid=//g ; s/ inst#=//g ; s/ time=//g ; s/ ms//g" >> connpingviz.csv
}}}
! old
{{{
[opc@katest connping]$ export ORACLE_HOME=/home/opc/instantclient_19_15
[opc@katest connping]$ export SQLPATH=$ORACLE_HOME
[opc@katest connping]$ export LD_LIBRARY_PATH=$ORACLE_HOME
[opc@katest connping]$ export TNS_ADMIN=/home/opc
[opc@katest connping]$ export PATH="$ORACLE_HOME:$PATH"
[opc@katest connping]$
[opc@katest connping]$
[opc@katest connping]$ date
Tue Aug 9 14:04:25 GMT 2022
[opc@katest connping]$ ./connping -l admin/"<passwordhere>"@kaadb_medium --period=3 | while read line; do echo "`date +%m\/%d\/%Y\ %T`|`hostname`"\| "$line" ; done
08/09/2022 14:04:32|katest|
08/09/2022 14:04:32|katest| RWP*Connect/OCIPing Release 3.0.1.31 Development on Tue, 09 Aug 2022 14:04:26 UTC
08/09/2022 14:04:32|katest| Connected default database with reconnect to:
08/09/2022 14:04:32|katest| Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
08/09/2022 14:04:32|katest| connect:281.36 ms, ociping:3.124 ms, dualping:2.488 ms, sid=41805, inst#=4, time=1.3
08/09/2022 14:04:32|katest| connect:492.42 ms, ociping:2.306 ms, dualping:2.205 ms, sid=43764, inst#=4, time=2.5
08/09/2022 14:04:32|katest| connect mean=386.89, stddev=105.53
08/09/2022 14:04:32|katest| ociping mean=2.72, stddev=0.41
08/09/2022 14:04:32|katest| dualping mean=2.35, stddev=0.14
[opc@katest connping]$ date
Tue Aug 9 14:04:33 GMT 2022
}}}
! old cleanup for viz
{{{
cat networkping.out | grep "ociping:" | awk -F',' '{ print $1 "," $2 "," $3 "," $5}' | sed "s/connect: //g ; s/ ociping: //g ; s/ dualping: //g ; s/ inst#=//g ; s/ ms//g" > ociping.csv
cat -n ociping.csv | awk -F" " '{print $1","$2}' > ociping2.csv
tail -n 1000000 ociping2.csv > ociping3.csv
}}}
{{{
--Extract SQL Monitor and dbms_xplan reports of a SQL_ID
--by Karl Arao
--
--HOWTO:
--
--Execute as user with DBA role
--
--@rwp_sqlmonreport
--Enter SQL_ID (required)
--Enter value for 1: <sql_id>
PRO Enter SQL_ID (required)
DEF sqlmon_sqlid = '&&1';
DEF sqlmon_date_mask = 'YYYYMMDDHH24MISS';
DEF sqlmon_text = 'Y';
DEF sqlmon_active = 'Y';
DEF sqlmon_hist = 'Y';
DEF tuning_pack = 'Y';
-- number of SQL Monitoring reports to collect from memory and history
DEF rwp_conf_num_sqlmon_rep = '12';
SET LIN 32767 PAGES 0 LONG 32767000 LONGC 32767 TRIMS ON AUTOT OFF;
SET VER OFF;
SET FEED OFF;
SET ECHO OFF;
SET TERM OFF;
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD/HH24:MI:SS') rwp_time_stamp FROM DUAL;
SELECT TO_CHAR(SYSDATE, 'HH24:MI:SS') hh_mm_ss FROM DUAL;
SET HEA OFF;
SET TERM ON;
-- log
SPO &&sqlmon_sqlid..txt APP;
VAR myreport CLOB
-- text
SET SERVEROUT ON SIZE 1000000;
SET TERM OFF
SPO rwp_sqlmon_&&sqlmon_sqlid._driver_txt.sql
DECLARE
PROCEDURE put (p_line IN VARCHAR2)
IS
BEGIN
DBMS_OUTPUT.PUT_LINE(p_line);
END put;
BEGIN
FOR i IN (SELECT * FROM
(SELECT sid,
session_serial#,
sql_exec_start,
sql_exec_id,
inst_id
FROM gv$sql_monitor
WHERE '&&tuning_pack.' = 'Y'
AND '&&sqlmon_text.' = 'Y'
AND sql_id = '&&sqlmon_sqlid.'
AND sql_text IS NOT NULL
AND process_name = 'ora'
ORDER BY
sql_exec_start DESC)
WHERE ROWNUM <= &&rwp_conf_num_sqlmon_rep.)
LOOP
put('BEGIN');
put(':myreport :=');
put('DBMS_SQLTUNE.report_sql_monitor');
put('( sql_id => ''&&sqlmon_sqlid.''');
put(', session_id => '||i.sid);
put(', session_serial => '||i.session_serial#);
put(', sql_exec_start => TO_DATE('''||TO_CHAR(i.sql_exec_start, '&&sqlmon_date_mask.')||''', ''&&sqlmon_date_mask.'')');
put(', sql_exec_id => '||i.sql_exec_id);
put(', inst_id => '||i.inst_id);
put(', report_level => ''ALL''');
put(', type => ''TEXT'' );');
put('END;');
put('/');
put('PRINT :myreport;');
put('SPO rwp_sqlmon_&&sqlmon_sqlid._'||i.sql_exec_id||'_'||LPAD(TO_CHAR(i.sql_exec_start, 'HH24MISS'), 6, '0')||'.txt;');
put('PRINT :myreport;');
put('SPO OFF;');
END LOOP;
END;
/
SPO OFF;
SET SERVEROUT OFF;
SPO rwp_sqlmon_&&sqlmon_sqlid..txt;
SELECT DBMS_SQLTUNE.report_sql_monitor_list(sql_id => '&&sqlmon_sqlid.', type => 'TEXT')
FROM DUAL
WHERE '&&tuning_pack.' = 'Y'
AND '&&sqlmon_text.' = 'Y';
@rwp_sqlmon_&&sqlmon_sqlid._driver_txt.sql
SPO OFF;
-- active
SET SERVEROUT ON SIZE 1000000;
SPO rwp_sqlmon_&&sqlmon_sqlid._driver_active.sql
DECLARE
PROCEDURE put (p_line IN VARCHAR2)
IS
BEGIN
DBMS_OUTPUT.PUT_LINE(p_line);
END put;
BEGIN
FOR i IN (SELECT * FROM
(SELECT sid,
session_serial#,
sql_exec_start,
sql_exec_id,
inst_id
FROM gv$sql_monitor
WHERE '&&tuning_pack.' = 'Y'
AND '&&sqlmon_active.' = 'Y'
AND sql_id = '&&sqlmon_sqlid.'
AND sql_text IS NOT NULL
AND process_name = 'ora'
ORDER BY
sql_exec_start DESC)
WHERE ROWNUM <= &&rwp_conf_num_sqlmon_rep.)
LOOP
put('BEGIN');
put(':myreport :=');
put('DBMS_SQLTUNE.report_sql_monitor');
put('( sql_id => ''&&sqlmon_sqlid.''');
put(', session_id => '||i.sid);
put(', session_serial => '||i.session_serial#);
put(', sql_exec_start => TO_DATE('''||TO_CHAR(i.sql_exec_start, '&&sqlmon_date_mask.')||''', ''&&sqlmon_date_mask.'')');
put(', sql_exec_id => '||i.sql_exec_id);
put(', inst_id => '||i.inst_id);
put(', report_level => ''ALL''');
put(', type => ''ACTIVE'' );');
put('END;');
put('/');
put('SPO rwp_sqlmon_&&sqlmon_sqlid._'||i.sql_exec_id||'_'||LPAD(TO_CHAR(i.sql_exec_start, 'HH24MISS'), 6, '0')||'.html;');
put('PRINT :myreport;');
put('SPO OFF;');
END LOOP;
END;
/
SPO OFF;
SET SERVEROUT OFF;
SPO rwp_sqlmon_&&sqlmon_sqlid._list.html;
SELECT DBMS_SQLTUNE.report_sql_monitor_list(sql_id => '&&sqlmon_sqlid.', type => 'HTML')
FROM DUAL
WHERE '&&tuning_pack.' = 'Y'
AND '&&sqlmon_active.' = 'Y';
SPO OFF;
@rwp_sqlmon_&&sqlmon_sqlid._driver_active.sql
SPO rwp_sqlmon_&&sqlmon_sqlid._detail.html;
SELECT DBMS_SQLTUNE.report_sql_detail(sql_id => '&&sqlmon_sqlid.', report_level => 'ALL', type => 'ACTIVE')
FROM DUAL
WHERE '&&tuning_pack.' = 'Y'
AND '&&sqlmon_active.' = 'Y';
SPO OFF;
-- historical, based on elapsed, worst &&rwp_conf_num_sqlmon_rep.
-- it errors out in < 12c but the error is not reported to screen/main files
SET SERVEROUT ON SIZE 1000000;
SPO rwp_sqlmon_&&sqlmon_sqlid._driver_hist.sql
DECLARE
PROCEDURE put (p_line IN VARCHAR2)
IS
BEGIN
DBMS_OUTPUT.PUT_LINE(p_line);
END put;
BEGIN
FOR i IN (SELECT *
FROM (SELECT report_id,
--TO_NUMBER(EXTRACTVALUE(XMLType(report_summary),'/report_repository_summary/sql/stats/stat[@name="elapsed_time"]'))
substr(key4,instr(key4,'#')+1, instr(key4,'#',1,2)-instr(key4,'#',1)-1) elapsed,
--EXTRACTVALUE(XMLType(report_summary),'/report_repository_summary/sql/@sql_exec_id')
key2 sql_exec_id,
--EXTRACTVALUE(XMLType(report_summary),'/report_repository_summary/sql/@sql_exec_start')
key3 sql_exec_start
FROM dba_hist_reports
WHERE component_name = 'sqlmonitor'
--AND EXTRACTVALUE(XMLType(report_summary),'/report_repository_summary/sql/@sql_id') = '&&sqlmon_sqlid.'
AND key1 = '&&sqlmon_sqlid.'
AND '&&tuning_pack.' = 'Y'
AND '&&sqlmon_hist.' = 'Y'
ORDER BY 2 DESC)
WHERE ROWNUM <= &&rwp_conf_num_sqlmon_rep.)
LOOP
put('BEGIN');
put(':myreport :=');
put('DBMS_AUTO_REPORT.REPORT_REPOSITORY_DETAIL');
put('( rid => '||i.report_id);
put(', type => ''ACTIVE'' );');
put('END;');
put('/');
put('SPO rwp_sqlmon_&&sqlmon_sqlid._'||i.sql_exec_id||'_'||REPLACE(SUBSTR(i.sql_exec_start, 12, 8), ':','')||'_hist.html;');
put('PRINT :myreport;');
put('SPO OFF;');
END LOOP;
END;
/
SPO OFF;
SET SERVEROUT OFF;
@rwp_sqlmon_&&sqlmon_sqlid._driver_hist.sql
SPO rwp_sqlmon_&&sqlmon_sqlid._dplan.txt
select * from table( dbms_xplan.display_cursor('&&sqlmon_sqlid.', null, 'ADVANCED +ALLSTATS LAST +MEMSTATS LAST +PREDICATE +PEEKED_BINDS') );
SPO OFF
SPO rwp_sqlmon_&&sqlmon_sqlid._dplan_awr.txt
select * from table(dbms_xplan.display_awr('&&sqlmon_sqlid.',null,null,'ADVANCED +ALLSTATS LAST +MEMSTATS LAST +PREDICATE +PEEKED_BINDS'));
SPO OFF
SET ECHO OFF FEED OFF VER OFF SHOW OFF HEA OFF LIN 2000 NEWP NONE PAGES 0 LONG 2000000 LONGC 2000 SQLC MIX TAB ON TRIMS ON TI OFF TIMI OFF ARRAY 100 NUMF "" SQLP SQL> SUF sql BLO . RECSEP OFF APPI OFF AUTOT OFF;
COL inst_child FOR A21;
BREAK ON inst_child SKIP 2;
SPO rwp_sqlmon_&&sqlmon_sqlid._rac_xplan.txt;
PRO Current Execution Plans (last execution)
PRO
PRO Captured while still in memory. Metrics below are for the last execution of each child cursor.
PRO If STATISTICS_LEVEL was set to ALL at the time of the hard-parse then A-Rows column is populated.
PRO
SELECT RPAD('Inst: '||v.inst_id, 9)||' '||RPAD('Child: '||v.child_number, 11) inst_child, t.plan_table_output
FROM gv$sql v,
TABLE(DBMS_XPLAN.DISPLAY('gv$sql_plan_statistics_all', NULL, 'ADVANCED ALLSTATS LAST', 'inst_id = '||v.inst_id||' AND sql_id = '''||v.sql_id||''' AND child_number = '||v.child_number)) t
WHERE v.sql_id = '&&sqlmon_sqlid.'
AND v.loaded_versions > 0;
SPO OFF;
SET ECHO OFF FEED 6 VER ON SHOW OFF HEA ON LIN 80 NEWP 1 PAGES 14 LONG 80 LONGC 80 SQLC MIX TAB ON TRIMS OFF TI OFF TIMI OFF ARRAY 15 NUMF "" SQLP SQL> SUF sql BLO . RECSEP WR APPI OFF AUTOT OFF;
SET TERM ON
-- get current time
SPO &&sqlmon_sqlid..txt APP;
COL current_time NEW_V current_time FOR A15;
SELECT 'Completed: ' x, TO_CHAR(SYSDATE, 'YYYYMMDD_HH24MISS') current_time FROM DUAL;
SET TERM OFF
HOST zip -jmq rwp_&&sqlmon_sqlid._&&current_time. rwp_sqlmon_*
}}}
<<showtoc>>
! material
https://www.safaribooksonline.com/library/view/hadoop-fundamentals-for/9781491913161/
table of contents http://shop.oreilly.com/product/0636920035183.do
! Sample environment
<<<
https://districtdatalabs.silvrback.com/creating-a-hadoop-pseudo-distributed-environment
http://bit.ly/hfpd3vm , https://www.dropbox.com/s/eg80qsitun7txu1/hfpd3.vmdk.gz?dl=0
username: student
password: password
user "Hadoop Analyst" is just a display name for student.
So student == "Hadoop Analyst".
Beware that the keyboard settings are QWERTY!
<<<
! sample code
https://github.com/bbengfort/hadoop-fundamentals
<<showtoc>>
! learning path
http://shop.oreilly.com/category/learning-path.do
!! Architect and Build Big Data Applications
http://shop.oreilly.com/category/learning-path/architect-build-big-data-applications.do
!! oracle
https://www.safaribooksonline.com/topics/oracle?active=--databases--oracle
! hadoop
!! hadoop architecture
!!! Architectural Considerations for Hadoop Applications
https://www.safaribooksonline.com/library/view/architectural-considerations-for/9781491923313/
http://hadooparchitecturebook.weebly.com/
https://github.com/hadooparchitecturebook
!!! Distributed Systems in One Lesson
http://www.infiniteskills.com/training/distrubuted-systems-in-one-lesson.html
!!! SQL Big Data Convergence - The Big Picture
http://www.pluralsight.com/courses/sql-big-data-convergence-big-picture
!!! hadoop AWS
!!!! Big Data on Amazon Web Services
http://www.pluralsight.com/courses/big-data-amazon-web-services
!! data model
!!! parquet (impala)
http://www.cloudera.com/content/cloudera/en/documentation/cloudera-impala/v2-0-x/topics/impala_parquet.html <-- the data format
!!! avro (schema, data format )
http://radar.oreilly.com/2014/11/the-problem-of-managing-schemas.html?cmp=ex-data-na-na-na_architectural_considerations_for_hadoop_applications_5
!!! kite (Data API for Hadoop, partitioned data)
avro + kite
http://kitesdk.org/docs/current/
!! data ingestion
!!! flume (interceptors)
https://flume.apache.org/FlumeUserGuide.html#flume-interceptors?cmp=ex-data-na-na-na_architectural_considerations_for_hadoop_applications_6
!! data processing
!!! Processing Frameworks
http://radar.oreilly.com/2015/02/processing-frameworks-for-hadoop.html?cmp=ex-data-na-na-na_architectural_considerations_for_hadoop_applications_7
[img[ https://lh3.googleusercontent.com/-2rfvIFYS0u0/Vh2xEKTBD6I/AAAAAAAAC0E/SiZyr4UtOkw/s800-Ic42/Screen%252520Shot%2525202015-10-13%252520at%2525209.31.37%252520PM.png ]]
!!! SQL
!!!! hive
http://www.infiniteskills.com/training/introduction-to-apache-hive.html
SQL on Hadoop - Analyzing Big Data with Hive http://www.pluralsight.com/courses/sql-hadoop-analyzing-big-data-hive
!!!!! hive on spark
Hive on Spark: Getting Started https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark:+Getting+Started?cmp=ex-data-na-na-na_architectural_considerations_for_hadoop_applications_2
!!!! impala
https://www.quora.com/How-does-Cloudera-Impala-compare-to-Facebooks-Presto
!!! stream processing
http://hadoop.apache.org/docs/r1.2.1/streaming.html
streaming vs storm + trident
!!!! spark streaming
http://spark.apache.org/streaming/
!!!! storm
http://storm.apache.org/
trident https://storm.apache.org/documentation/Trident-tutorial.html?cmp=ex-data-na-na-na_architectural_considerations_for_hadoop_applications_8
!! workflow (orchestration)
!!! oozie (much better tool - hive, sqoop, MR, pig, ssh, etc)
http://oozie.apache.org/
!!! azkaban (a simple workflow tool)
http://azkaban.github.io/
!!! scalding
Data Science & Hadoop Workflows at Scale With Scalding http://www.pluralsight.com/courses/data-science-hadoop-workflows-scalding
!! monitoring
!!! hue
http://gethue.com/
!! others
!!! graph processing
https://gigaom.com/2013/08/14/facebooks-trillion-edge-hadoop-based-graph-processing-engine/
! data science
!! Just Enough Math
https://player.oreilly.com/videos/9781491904060
Math For Programmers https://app.pluralsight.com/library/courses/math-for-programmers/table-of-contents
! code examples
!! hadooparchitecturebook - sessionization
NRT Sessionization with Spark Streaming landing on HDFS and putting live stats in HBase https://github.com/hadooparchitecturebook/SparkStreaming.Sessionization
!! hadooparchitecturebook - clickstream analytics
Code for Tutorial on designing clickstream analytics application using Hadoop https://github.com/hadooparchitecturebook/clickstream-tutorial
! oracle
!! Oracle 12c New Features, Part I LiveLessons
https://www.safaribooksonline.com/library/view/oracle-12c-new/9780134264264/
!! Oracle SQL Performance Tuning for Developers LiveLessons (Video Training)
https://www.safaribooksonline.com/library/view/oracle-sql-performance/9780134117058/
! shell scripting
!! node.js shell scripting https://www.safaribooksonline.com/library/view/shell-scripting-with/9781771376075/
!! expert shell scripting https://www.safaribooksonline.com/library/view/expert-shell-scripting/9781430218418/
!! Shell Scripting: Expert Recipes for Linux, Bash, and More https://www.safaribooksonline.com/library/view/shell-scripting-expert/9781118166321/
! R
!! Garrett Grolemund
https://www.safaribooksonline.com/search/?query=Garrett%20Grolemund&field=authors
!! Jared P. Lander
https://www.safaribooksonline.com/search/?query=Jared%20P.%20Lander&field=authors
https://www.safaribooksonline.com/library/view/advanced-r-programming/9780134052700/
http://kerryosborne.oracle-guy.com/2013/06/sql-gone-bad-but-plan-not-changed/
http://kerryosborne.oracle-guy.com/2013/06/sql-gone-bad-but-plan-not-changed-part-2/
{{{
So in summary the following conclusions can be made:
- The same PLAN_HASH_VALUE is merely an indicator that the same operations on objects of the same name are performed in the same order.
- It tells nothing about the similarity of the expected runtime performance of the execution plan, due to various reasons as demonstrated. The most significant information that is not covered by the PLAN_HASH_VALUE are the filter and access predicates, but there are other attributes, too, that are not part of the hash value calculation.
}}}
http://carlos-sierra.net/2013/06/09/has-my-plan-changed-or-not/
{{{
look for PHV1 and PHV2 columns of the SQLT report
}}}
http://orastory.wordpress.com/2012/07/12/plan_hash_value-and-internal-temporary-table-names/
{{{
system-generated names like ’SYS_TEMP%’, ’index$_join$_%’ and ’VW_ST%’ may show up as different PHV but actually not when pulled from dba_hist_sql_plan
}}}
how PHV is calculated http://oracle-randolf.blogspot.com/2009/07/planhashvalue-how-equal-and-stable-are.html
http://jonathanlewis.wordpress.com/2013/06/07/same-plan/
http://iiotzov.wordpress.com/2012/11/27/same-sql_id-same-execution-plan-sql_plan_hash_value-greatly-different-elapsed-time-a-simple-way-to-troubleshoot-with-ash/
same sql different parse times - https://mail.google.com/mail/u/0/?shva=1&rld=1#search/same+sql+different+parse+times/13b7abcdad16e388
<<<
If it turns out that almost all the time really is "CPU parse time",
though, you won't see any significant differences in v$session_event, and
might not see any significant differences in v$sesstat. A quick sanity
check on the 10053 trace is simply:
grep "^Join Order" {filename} | wc
to see how many join orders were examined - it may be very different -
then
grep "join permutations" {filename}
as this may indicate that the two statements had dramatically different
headline strategies
grep "Recost for ORDER BY" | wc
as this may indicate that a lot of time was spent "doubling" the
optimisation work
grep "Now joining" {filename} | wc
which gives you a clue of how far each join order got before being aborted
General reasons why optimisation might take a long time:
a) the essential cost of the query is high and the number of tables high
b) the essential cost of the query is high and the number of options for
transformation is high
c) dynamic sampling of contained tables
d) dynamic sampling of dictionary tables
<<<
<<<
Yeah indeed this extra 5sec of CPU time seems to go on "cursor generation" - hard parse - BUT I suspect the CPU usage of the recursive SQL statements executed in the hard parse phase get accounted as the "hard parse cpu" too (don't remember at the moment and no time to verify, but it's easy to check).
Even though the slow parse case does less recursive calls and executions than the fast one, in the slow case you're doing 100 000 logical reads vs 3900 in the fast case. So the #1 data is still needed here, which SQL ID shows up in the top in the slow parse case?
Snapper should show that (the "ash" section) - but if it doesn't show the recursive SQL properly for some reason, then use SQL Trace ...
This is one of the cases (multiple recursive SQL statements executed within the parent SQL) where SQL trace's full chronology with detailed CPU / logical IO counts is needed - for finding out where the CPU time is spent (e.g. in the parse call itself or some recursive SQL under it).
If it turns out that there's some recursive SQL which uses most of the CPU time (and generates those 100 000 LIOs) you can focus on that.
If it turns out that it's the hard parse code itself which uses most of the CPU time (the sum of recursive calls CPU is a small fraction of the 5 second CPU usage) then I'd proceed with stack profiling and making more sense out of the CBO trace ...
<<<
{{{
Join order: this oneliner worked for me to check join orders: grep '^Join\ order\[' {file} | sed 's/Join order.\([0-9]*\).*/\1/' | sort -n
Join permutations: (grep -Hi join\ permutation {file}) I created a textfile of both and diffed it.
Recost for ORDER BY: (grep -Hi recost {file})
Now joining (grep Now\ joining {file})
}}}
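The individual greps above can be wrapped into a single pass over a trace file; a small hypothetical helper:
{{{
#!/bin/bash
# usage: ./scan10053.sh <10053 trace file>
f=$1
echo "join orders examined: $(grep -c '^Join order\[' "$f")"
grep -i 'join permutations' "$f"
echo "Recost for ORDER BY:  $(grep -c 'Recost for ORDER BY' "$f")"
echo "Now joining steps:    $(grep -c 'Now joining' "$f")"
}}}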
! mauro
<<<
-- investigate high level using the viewer
https://jonathanlewis.wordpress.com/2010/04/30/10053-viewer/
-- chase the query block registry pattern the ones that says "FINAL" using this grep
grep -nE '^[A-Z]{2,}:|Registe' dwops01s1_ora_69874_MY_10053.trc | less
https://www.slideshare.net/MauroPagano3/chasing-the-optimizer-71564184
<<<
# sata ssd
[img[ http://i.imgur.com/Sa5LzDk.png ]]
# t1
[img[ http://i.imgur.com/ZQ5HXuW.png ]]
You can process the binary sar files using the ''sadf'' tool, then massage the date column to fit the Tableau time dimension.
Massage the sar output:
{{{
sadf -d /var/log/sa/sa03 -- -u > sar_$HOSTNAME-u.csv
sadf -d /var/log/sa/sa03 -- -d > sar_$HOSTNAME-d.csv
[root@desktopserver sa]# date +"%m/%d/%Y %H:%M:%S"
07/20/2012 22:13:55
awk -F';' '
BEGIN {
FS=OFS=";"
}
{
$3 = substr($3,6,2)"/"substr($3,9,2)"/"substr($3,1,4)" "substr($3,12,8)
print $0
}
' sar_$HOSTNAME-u.csv > sar.csv
sed -i 's/;/,/g' sar.csv
}}}
Import the CSV in Tableau, then create a calculated field:
{{{
ALL_CPU = [%user]+[%system]+[%steal]+[%nice]+[%iowait]
}}}
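The same date massage applies to the disk-metrics CSV generated earlier; a sketch reusing the awk above:
{{{
awk 'BEGIN { FS=OFS=";" } { $3 = substr($3,6,2)"/"substr($3,9,2)"/"substr($3,1,4)" "substr($3,12,8); print $0 }' sar_$HOSTNAME-d.csv > sar_disk.csv
sed -i 's/;/,/g' sar_disk.csv
}}}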
! Check out the visualization here
sar CPU% https://www.evernote.com/shard/s48/sh/9e3676ff-c9d8-4950-82f3-5786350562e4/a9a0893695ee2456e2d0a294a9994fdb
! Headers of the output
OEL 6.4 automatically places headers on the output; on OEL 5.x this is not the case, so you need to copy the following headers onto the first line of the CSV if your OS version doesn't produce them.
{{{
[oracle@desktopserver ~]$ sadf -d /var/log/sa/sa03 -- -u
# hostname;interval;timestamp;CPU;%user;%nice;%system;%iowait;%steal;%idle
desktopserver.local;594;2014-01-03 06:10:01 UTC;-1;2.95;0.00;0.37;0.13;0.00;96.56
desktopserver.local;599;2014-01-03 06:20:01 UTC;-1;0.00;0.00;0.00;0.00;0.00;0.00
desktopserver.local;594;2014-01-03 06:30:01 UTC;-1;2.99;0.00;0.34;0.11;0.00;96.56
[oracle@desktopserver ~]$ sadf -d /var/log/sa/sa03 -- -d
# hostname;interval;timestamp;DEV;tps;rd_sec/s;wr_sec/s;avgrq-sz;avgqu-sz;await;svctm;%util
desktopserver.local;594;2014-01-03 06:10:01 UTC;dev8-16;1.17;2.32;15.49;15.23;0.01;5.16;3.23;0.38
desktopserver.local;594;2014-01-03 06:10:01 UTC;dev8-0;4.28;25.61;47.85;17.17;0.02;5.29;4.16;1.78
desktopserver.local;594;2014-01-03 06:10:01 UTC;dev8-32;3.33;32.04;35.32;20.22;0.01;3.76;3.37;1.12
}}}
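On OEL 5.x you can prepend the header line with sed instead of pasting it by hand; a sketch for the CPU file (GNU sed assumed):
{{{
sed -i '1i # hostname;interval;timestamp;CPU;%user;%nice;%system;%iowait;%steal;%idle' sar_$HOSTNAME-u.csv
}}}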
! Sample run
{{{
oracle@localhost.localdomain:/home/oracle:orcl
$ sadf -d /var/log/sa/sa03 -- -u > sar_$HOSTNAME-u.csv
oracle@localhost.localdomain:/home/oracle:orcl
$ awk -F';' '
> BEGIN {
> FS=OFS=";"
> }
> {
> $3 = substr($3,6,2)"/"substr($3,9,2)"/"substr($3,1,4)" "substr($3,12,8)
> print $0
> }
> ' sar_$HOSTNAME-u.csv > sar.csv
oracle@localhost.localdomain:/home/oracle:orcl
$ cat sar.csv
localhost.localdomain;600;01/03/2014 08:10:01;-1;0.16;0.00;1.19;16.46;0.00;82.19
localhost.localdomain;600;01/03/2014 08:20:01;-1;0.08;0.00;1.61;1.07;0.00;97.24
localhost.localdomain;599;01/03/2014 08:30:01;-1;0.08;0.00;0.83;1.71;0.00;97.38
localhost.localdomain;600;01/03/2014 08:40:01;-1;0.06;0.00;0.62;0.54;0.00;98.78
localhost.localdomain;600;01/03/2014 08:50:01;-1;0.07;0.00;0.62;0.79;0.00;98.52
localhost.localdomain;600;01/03/2014 09:00:01;-1;0.06;0.00;0.59;0.60;0.00;98.74
oracle@localhost.localdomain:/home/oracle:orcl
$
oracle@localhost.localdomain:/home/oracle:orcl
$ sed -i 's/;/,/g' sar.csv
oracle@localhost.localdomain:/home/oracle:orcl
$
oracle@localhost.localdomain:/home/oracle:orcl
$ cat sar.csv
localhost.localdomain,600,01/03/2014 08:10:01,-1,0.16,0.00,1.19,16.46,0.00,82.19
localhost.localdomain,600,01/03/2014 08:20:01,-1,0.08,0.00,1.61,1.07,0.00,97.24
localhost.localdomain,599,01/03/2014 08:30:01,-1,0.08,0.00,0.83,1.71,0.00,97.38
localhost.localdomain,600,01/03/2014 08:40:01,-1,0.06,0.00,0.62,0.54,0.00,98.78
localhost.localdomain,600,01/03/2014 08:50:01,-1,0.07,0.00,0.62,0.79,0.00,98.52
localhost.localdomain,600,01/03/2014 09:00:01,-1,0.06,0.00,0.59,0.60,0.00,98.74
oracle@localhost.localdomain:/home/oracle:orcl
}}}
! References
http://www.unix.com/shell-programming-scripting/14655-changing-yyyy-mm-dd-ddmmyy.html
http://www.cyberciti.biz/tips/processing-the-delimited-files-using-cut-and-awk.html
http://stackoverflow.com/questions/3788274/need-linux-equivalent-to-windows-echo-date-time-computername
http://www.thegeekstuff.com/2011/03/sar-examples/
http://sebastien.godard.pagesperso-orange.fr/faq.html
http://kuther.net/howtos/howto-import-sar-data-postgresql-database-and-create-graphs-openofficeorg <-- create a table in database, then graph
http://sebastien.godard.pagesperso-orange.fr/tutorial.html
http://www.unix.com/shell-programming-scripting/109931-formatting-sar-sadf-utility-output.html
http://linux.die.net/man/1/sadf
https://learnxinyminutes.com/docs/scala/
https://www.udemy.com/scala-tutorial-for-absolute-beginners/
https://www.lynda.com/Scala-tutorials/Scala-Essential-Training/574693-2.html?srchtrk=index%3a1%0alinktypeid%3a2%0aq%3ascala%0apage%3a1%0as%3arelevance%0asa%3atrue%0aproducttypeid%3a2
https://www.pluralsight.com/search?q=scala
https://app.pluralsight.com/library/courses/scala-getting-started/table-of-contents
https://app.pluralsight.com/library/courses/scala-thinking-functionally/table-of-contents
https://www.udemy.com/scalable-programming-with-scala-and-spark/
https://www.udemy.com/hdpcd-spark-using-scala/
https://oracle-base.com/articles/18c/scalable-sequences-18c
http://www.slideshare.net/ChicagoHUG/scalding-for-hadoop - a matrix API, functional programming
https://github.com/ThinkBigAnalytics/scalding-workshop
https://github.com/twitter/scalding/wiki
https://github.com/twitter/scalding
<<showtoc>>
! control-m
! autosys
Scheduling R Markdown Reports via Email
https://www.r-bloggers.com/scheduling-r-markdown-reports-via-email/
Automatic Report Generation in R, with knitr and markdown
https://thinkinator.com/2013/05/12/automatic-report-generation-in-r-with-knitr-and-markdown/
http://deanattali.com/2015/03/24/knitrs-best-hidden-gem-spin/
http://rmarkdown.rstudio.com/articles_report_from_r_script.html
{{{
screen -S karl <- to create session
screen -ls <- to list
screen -r karl <- to resume, won't let you grab one that's already attached
screen -d -r karl <- will detach/grab session and attach you to it
screen -x karl <- to attach a shared view of an already-attached session
CTRL-A d <- to safely disconnect
}}}
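For long-running work the session can be started already detached; a sketch (the job script name is a placeholder):
{{{
screen -dmS karl ./long_running_job.sh   # start detached session "karl" running the job
screen -d -r karl                        # later, grab it from any login
}}}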
http://pissedoffadmins.com/os/using-ssh-and-screen-to-create-persistent-terminal-sessions-accessible-from-almost-anywhere.html
http://wiki.zimbra.com/wiki/Using_Screen_for_Session_Mgmt_(Never_have_your_scripts_die_because_you_lost_your_connection)
! get tables count of all tables in a database
sh hive_table_count.sh <databasename>
{{{
[raj_ops@sandbox ~]$ cat hive_table_count.sh
#!/bin/bash
# list all tables in database $1, then count rows in each
rm -f tableNames.txt
rm -f HiveTableDDL.txt
hive -e "use $1; show tables;" > tableNames.txt
wait
cat tableNames.txt | while read LINE
do
   echo -e $LINE >> HiveTableDDL.txt
   hive -e "use $1; select count(*) from $LINE" >> HiveTableDDL.txt
   echo -e "\n" >> HiveTableDDL.txt
done
# drop blank lines, then pair each count line with the table name preceding it
sed '/^$/d' HiveTableDDL.txt > combinecount.txt
awk '!(NR%2){print $0 p}{p=" "$0}' combinecount.txt > finalcount.txt
echo "Table Count Done..."
cat finalcount.txt
}}}
! get table count based on tables on file tableNames.txt
sh hive_table_count.sh <databasename>
{{{
[raj_ops@sandbox ~]$ cat tableNames.txt
product
store
[raj_ops@sandbox ~]$ cat hive_table_count.sh
#!/bin/bash
rm -f HiveTableDDL.txt
wait
cat tableNames.txt |while read LINE
do
echo -e $LINE >> HiveTableDDL.txt
hive -e "use $1;select count(*) from $LINE" >>HiveTableDDL.txt
echo -e "\n" >> HiveTableDDL.txt
done
sed '/^$/d' HiveTableDDL.txt > combinecount.txt
awk '!(NR%2){print$0p}{p=" "$0}' combinecount.txt > finalcount.txt
echo "Table Count Done..."
cat finalcount.txt
}}}
! example run
{{{
[raj_ops@sandbox ~]$ sh hive_table_count.sh foodmart
Logging initialized using configuration in file:/etc/hive/2.5.0.0-1245/0/hive-log4j.properties
OK
Time taken: 2.923 seconds
Query ID = raj_ops_20180123123341_474704d8-3d04-421c-8cfa-6c7f92cfe2cb
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1516661636204_0040)
Map 1: -/- Reducer 2: 0/1
Map 1: 0/1 Reducer 2: 0/1
Map 1: 0(+1)/1 Reducer 2: 0/1
Map 1: 1/1 Reducer 2: 0(+1)/1
Map 1: 1/1 Reducer 2: 1/1
OK
Time taken: 11.666 seconds, Fetched: 1 row(s)
Logging initialized using configuration in file:/etc/hive/2.5.0.0-1245/0/hive-log4j.properties
OK
Time taken: 2.736 seconds
Query ID = raj_ops_20180123123411_0b27390f-a606-42f9-ac4d-682607af30af
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1516661636204_0041)
Map 1: -/- Reducer 2: 0/1
Map 1: 0/1 Reducer 2: 0/1
Map 1: 0/1 Reducer 2: 0/1
Map 1: 0(+1)/1 Reducer 2: 0/1
Map 1: 0(+1)/1 Reducer 2: 0/1
Map 1: 0/1 Reducer 2: 0/1
Map 1: 1/1 Reducer 2: 0(+1)/1
Map 1: 1/1 Reducer 2: 1/1
OK
Time taken: 13.599 seconds, Fetched: 1 row(s)
Table Count Done...
1560 product
25 store
}}}
! reference
https://stackoverflow.com/questions/21900886/get-row-count-from-all-tables-in-hive
<<<
You will need to do a
{{{
select count(*) from table
}}}
for all tables.
To automate this, you can make a small bash script and some bash commands. First run
{{{
$hive -e 'show tables' | tee tables.txt
}}}
This stores all tables in the database in a text file tables.txt
Create a bash file (count_tables.sh) with the following contents.
{{{
while read line
do
echo "$line "
eval "hive -e 'select count(*) from $line'"
done
}}}
Now run the following commands.
{{{
$chmod +x count_tables.sh
$./count_tables.sh < tables.txt > counts.txt
}}}
This creates a text file (counts.txt) with the counts of all the tables in the database
<<<
https://stackoverflow.com/questions/18134131/how-to-get-generate-the-create-statement-for-an-existing-hive-table
https://www.quora.com/How-can-I-list-all-Hive-tables-and-store-its-DDL-schema-CREATE-TABLE-statement-in-a-text-file
! step 1) create a .sh file with the below content, say hive_table_ddl.sh
{{{
#!/bin/bash
rm -f tableNames.txt
rm -f HiveTableDDL.txt
hive -e "use $1; show tables;" > tableNames.txt
wait
cat tableNames.txt |while read LINE
do
hive -e "use $1;show create table $LINE" >>HiveTableDDL.txt
echo -e "\n" >> HiveTableDDL.txt
done
rm -f tableNames.txt
echo "Table DDL generated"
}}}
! step 2) Run the above shell script by passing the db name as parameter
{{{
>bash hive_table_dd.sh <<databasename>>
}}}
<<<
Output:
All the create table statements of your DB will be written into HiveTableDDL.txt
<<<
https://stackoverflow.com/questions/38949699/hive-auto-increment-after-certain-number/39011768#39011768
http://www.proudcloud.net/approach
section 508 blind case https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=section%20508%20blind%20case
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=nfl%20red%20and%20green
jaw screen reader https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=jaw%20screen%20reader
firefox toolbar wave https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=firefox%20toolbar%20wave
stephen cuthins accenture https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=stephen+cuthins+accenture
https://www.healthcare.gov/#skipnav
http://www.dralegal.org/impact/cases/national-federation-of-the-blind-nfb-et-al-v-target-corporation
https://en.wikipedia.org/wiki/National_Federation_of_the_Blind_v._Target_Corp.
https://forums.vandyke.com/showthread.php?t=10826
<<<
I have problems with the ctrl-w command to delete the last word before cursor (in BASH - Emacs mode).
The other ctrl shortcuts work as expected :
ctrl-a : Move cursor to beginning of line
ctrl-e : Move cursor to end of line
ctrl-u : Cut everything before the cursor
ctrl-k : Cut everything after the cursor
ctrl-y : Paste the last thing to be cut
ctrl-_ : Undo
I'm using SecureCRT 7.0.2 on Mac OS X 10.8.2
stty -a
<<<
-- search for karl.com and replace it with example.com
sed -i 's/karl.com/example.com/g' *.trc
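Note that the unescaped dots above also match any single character; escaping them keeps the match literal:
{{{
sed -i 's/karl\.com/example\.com/g' *.trc
}}}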
{{{
cat `ls -ltr *awr_topevents-tableau*csv | awk '{print $9}'` >> top_events-all.csv
cat `ls -ltr *awr_cpuwl-tableau*csv | awk '{print $9}'` >> cpuwl-all.csv
cat `ls -ltr *awr_sysstat-tableau*csv | awk '{print $9}'` >> sysstat-all.csv
cat `ls -ltr *awr_topsqlx-tableau-exa*csv | awk '{print $9}'` >> topsqlx-all.csv
cat `ls -ltr *awr_iowl-tableau-exa*csv | awk '{print $9}'` >> iowl-all.csv
sed -i 's/fsprd2/fsprd1/g' iowl-all.csv
sed -i 's/mtaprd112/mtaprd111/g' iowl-all.csv
sed -i 's/pd01db04/pd01db03/g' iowl-all.csv
}}}
http://www.warmetal.nl/sed
http://www.chriskdesigns.com/change-your-wordpress-domain-quickly-with-linux-mysql-and-sed/
-- remove first/last characters
http://www.ivorde.ro/How_to_remove_first_last_character_from_a_string_using_SED-75.html
https://unix.stackexchange.com/questions/99350/how-to-insert-text-before-the-first-line-of-a-file
https://askubuntu.com/questions/151674/how-do-i-insert-a-line-at-the-top-of-a-text-file-using-the-command-line
{{{
function prepend(){
  # @author Abdennour TOUMI
  # double quotes are required so $1 expands inside the sed expression
  if [ -e "$2" ]; then
    sed -i -e "1i $1" "$2"
  fi
}
}}}
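Hypothetical usage, once the fixed function above is sourced into the shell:
{{{
prepend "hostname;interval;timestamp;CPU" sar.csv
}}}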
http://stackoverflow.com/questions/5398395/how-can-i-insert-a-tab-character-with-sed-on-os-x
http://www.slideshare.net/tanelp/troubleshooting-complex-performance-issues-oracle-seg-contention
https://sites.google.com/site/embtdbo/examples/tuning---select-with-outlier-oracle
Creating a serverless web app with Airtable, Hyperdev, Node.js, Surge.sh & Ember.js
https://medium.com/the-backlog-by-nimbo-x/creating-a-serverless-web-app-with-node-js-ember-js-and-paas-services-hyperdev-surge-sh-8e3ebe263a76#.rpbqt3ysm
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:865497961356
<<<
session cached cursors CANNOT, WILL NOT, DOES NOT directly decrease the number of parses - never.
It can turn a soft parse into a softer soft parse, but a parse is a parse is a parse.
Only the APPLICATION DEVELOPER can reduce the number of parses - period.
In the first case - sqlplus, the application works like this:
loop
read input
if input = exit then EXIT;
open cursor
parse input
execute input
if input was a query - display output
close cursor
end loop
sqlplus is a simple, small, dumb program to read your input and parse/execute it. It attempts no caching of statements - it would not be worthwhile. It is just a *simple dumb command line tool to run ad-hoc sql*. It is not a programming environment really.
PLSQL on the other hand, plsql is designed to scale, to perform to write database applications in. PLSQL is doing the caching for you there. PLSQL implements (in current releases - you people with 8i and 9i will see different results than we see above) implements execute immediate like this:
If you have code like:
for i in 1 .. 10
loop
execute immediate SOME_STRING using i;
end loop;
Under the covers, plsql does this:
for i in 1 .. 10
loop
if (last_statement_executed <> SOME_STRING)
then
if last_statement_executed is open, close it
else open a cursor for last_statement_executed.
parse SOME_STRING into last_statement_executed
end if
bind last_statement_executed
execute last_statement_executed
end loop;
so you see, plsql remembers the last statement it executed for that execute immediate and DOES NOT CLOSE IT - the next time through - if you use the same "some_string", we skip the parse.
session_cached cursors does control the size of the plsql cursor cache - plsql will only cache a number of cursors at a time and the number it caches is set by session_cached_cursors.
<<<
http://www.orafaq.com/node/758
https://hoopercharles.wordpress.com/2011/07/21/session_cached_cursors-possibly-interesting-details/
! set port for em express
{{{
show con_name
alter session set container=plugdb;
startup -- start the pdb
show con_name
exec dbms_xdb_config.sethttpsport(5501); -- set port for em express
select dbms_xdb_config.gethttpsport from dual; -- get the port set
}}}
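EM Express should then answer at the standard /em path; a quick sanity check from the shell (hostname is a placeholder):
{{{
curl -k https://dbhost:5501/em
}}}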
<<showtoc>>
! follow this
https://www.evernote.com/shard/s48/sh/86afa0e2-8f61-475d-bd8f-e9241430d4d3/33b41f0ca5209fc5439f42cf75a61163
also see [[.DataGuard commands]]
! others
How to Setup Active DataGuard on Exadata (Doc ID 1580796.1)
https://docs.oracle.com/database/121/SBYDB/create_ps.htm#SBYDB00200
https://easyoradba.com/2016/07/18/oracle-dataguard-primary-rac-to-standby-rac-12c-with-dataguard-broker-on-exadata/
Creating a Standby Database with Recovery Manager
https://docs.oracle.com/database/121/SBYDB/rcmbackp.htm#SBYDB01500
http://www.oracledistilled.com/oracle-database/high-availability/data-guard/creating-a-physical-standby-using-rman-duplicate-from-active-database/
http://kamranagayev.com/2016/01/02/rman-catalog-start-with-returns-no-files-found-to-be-unknown-to-the-database/
https://oracle-base.com/articles/12c/data-guard-setup-using-broker-12cr2
!! mos notes
Creating a Physical Standby using RMAN Duplicate (RAC or Non-RAC) (Doc ID 1617946.1)
How to Setup Active DataGuard on Exadata (Doc ID 1580796.1)
Creating a Physical Standby database using RMAN restore from service (Doc ID 2283978.1)
https://support.oracle.com/epmos/faces/DocumentDisplay?id=2283978.1
Migration to Exadata Cloud using Simple Data Guard Approach with Minimal Downtime (Doc ID 2386116.1)
https://support.oracle.com/epmos/faces/DocumentDisplay?id=2386116.1
!!! mos monitoring
Script to Collect Data Guard Primary Site Diagnostic Information for Version 9i(Doc ID 241374.1)
Script to Collect Data Guard Primary Site Diagnostic Information for Version 10g and Above (Including RAC). (Doc ID 1577401.1)
https://support.oracle.com/epmos/faces/CommunityDisplay?resultUrl=https%3A%2F%2Fcommunity.oracle.com%2Fthread%2F3265661&_afrLoop=384072498588273&resultTitle=Data+Guard+Monitoring+script&commId=3265661&displayIndex=2&_afrWindowMode=0&_adf.ctrl-state=176fd9l2vv_248
https://blogs.oracle.com/bcndatabase/entry/automatic_data_guard_healthcheck_using_shell_scripting
http://www.oraclemasters.in/?p=1255
Note 814417.1 Information to gather and upload for Dataguard related problems
Note.241374.1 Script to Collect Data Guard Primary Site Diagnostic Information
Note.241438.1 Script to Collect Data Guard Physical Standby Diagnostic Information
NOTE 247643.1 Data Guard Configuration Health Checks - Generic
!!! 12c DG new features
12c Data Guard new features: https://www.youtube.com/watch?v=HGoIZsC8fwY
! errors
ORA-01506: missing or illegal database name https://harvarinder.blogspot.com/2015/01/ora-01506-missing-or-illegal-database.html
!! password file
https://petesdbablog.wordpress.com/2013/07/14/12c-new-feature-changes-to-the-password-file/
http://satya-dba.blogspot.com/2009/11/password-file-in-oracle.html
https://docs.oracle.com/cd/B28359_01/server.111/b28310/dba007.htm#ADMIN10241
About a Shared Password File in a Disk Group https://docs.oracle.com/database/121/OSTMG/GUID-6165EC00-C329-4140-A007-2FE18D6DEC51.htm
move pwd file to ASM https://community.oracle.com/thread/4114192
https://www.thegeekdiary.com/how-to-create-password-file-for-database-on-12c-asm-diskgroup/
{{{
crsctl stat res ora.[database].db -f | grep PWFILE
}}}
Creating a Password File in a Disk Group https://docs.oracle.com/database/121/OSTMG/GUID-2ACBBB1E-A39D-473E-A9EF-E7BC3872C36E.htm
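A hedged sketch of moving an existing password file into a disk group on 12c (db unique name, path, and disk group are placeholders):
{{{
asmcmd pwcopy --dbuniquename orcl $ORACLE_HOME/dbs/orapworcl +DATA/orcl/orapworcl
srvctl modify database -db orcl -pwfile +DATA/orcl/orapworcl
crsctl stat res ora.orcl.db -f | grep PWFILE   # verify the resource now points at ASM
}}}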
!! pdb
https://www.google.com/search?q=12c+data+guard+pdb&oq=12c+data+guard+pdb&aqs=chrome..69i57j0.8043j0j1&sourceid=chrome&ie=UTF-8
https://oracle-base.com/articles/12c/multitenant-controlling-pdb-replication-in-data-guard-environments-12c
https://borysneselovskyi.wordpress.com/2017/09/08/steps-to-configure-oracle-data-guard-version-12-2-for-a-pluggable-database/
!! ORA-12514: TNS:listener does not currently know
https://stackoverflow.com/questions/10786782/ora-12514-tnslistener-does-not-currently-know-of-service-requested-in-connect-d
!! dgid from fal client not in data guard configuration
https://www.google.com/search?q=dgid+from+fal+client+not+in+data+guard+configuration&oq=dgid+from+fal+not+in+con&aqs=chrome.1.69i57j0.7537j0j1&sourceid=chrome&ie=UTF-8
http://oracletechdba.blogspot.com/2014/10/falserver-dgid-from-fal-client-not-in.html
http://www.nazmulhuda.info/ora-16057-dgid-from-server-not-in-data-guard-configuration
!! cryptographic checksum mismatch error 12599
https://www.google.com/search?q=cryptographic+checksum+mismatch+error+12599&oq=cyrptographic+checksu&aqs=chrome.2.69i57j0l7.7076j0j1&sourceid=chrome&ie=UTF-8
!! rman restore archivelog thread 2
https://www.google.com/search?q=rman+restore+archivelog+thread+2&oq=oracle+restore+archivelog+thre&aqs=chrome.2.69i57j0l7.6389j0j1&sourceid=chrome&ie=UTF-8
!! RMAN-05535: warning: All redo log files were not defined properly - log_file_name_convert
https://www.google.com/search?q=RMAN-05535%3A+warning%3A+All+redo+log+files+were+not+defined+properly&oq=RMAN-05535%3A+warning%3A+All+redo+log+files+were+not+defined+properly&aqs=chrome..69i57j69i58.851j0j1&sourceid=chrome&ie=UTF-8
https://learnwithme11g.wordpress.com/2011/12/30/creating-physical-standby-database-with-rman-duplicate-from-active-database/
http://sandeepbirineni.blogspot.com/2013/07/rman-05535-warning-all-redo-log-files.html
!! Bug 29965684 : GETTING WARNING MUST_RENAME_THIS_DATAFILE DURING RMAN ACTIVE DUPLICATE
!! FAL: Failed to request gap sequence Check that the CONTROL_FILE_RECORD_KEEP_TIME
https://www.google.com/search?q=FAL%3A+Failed+to+request+gap+sequence+Check+that+the+CONTROL_FILE_RECORD_KEEP_TIME&oq=FAL%3A+Failed+to+request+gap+sequence+Check+that+the+CONTROL_FILE_RECORD_KEEP_TIME&aqs=chrome..69i57j69i58.6880j0j1&sourceid=chrome&ie=UTF-8
http://bbalaban-oracle-blog.blogspot.com/2015/11/en-falclient-failed-to-request-gap.html
!! oracle GAP - SCN range
https://community.oracle.com/thread/3256314
https://community.oracle.com/thread/2372687?start=30&tstart=0
https://medium.com/@FranckPachot/where-to-check-data-guard-gap-e1ccadc8f41
http://www.oracle-ckpt.com/dataguard_troubleshoot_snapper/
!! error 1034 received logging on to the standby
https://community.oracle.com/thread/3731471?start=30&tstart=0
https://stackoverflow.com/questions/53573337/how-to-share-guest-vms-vpn-connection-with-host
TECH: Unix Semaphores and Shared Memory Explained
Doc ID: Note:15566.1
WHAT IS THE MAXIMUM NUMBER OF SEMAPHORES PER SET?
Doc ID: Note:1010332.6
TECH: Calculating Oracle's SEMAPHORE Requirements
Doc ID: Note:15654.1
Excessive Number of Semaphores Allocated
Doc ID: Note:157090.1
Relationship Between Common Init.ora Parameters and Unix Kernel Parameters
Doc ID: Note:144638.1
10g SGA is split in multiple shared memory segments
Doc ID: Note:399261.1
http://www.evernote.com/shard/s48/sh/ef60aaf2-63b5-4b8f-b220-f9c4b76c8100/0075485ae7011e3083e0f6a2ce459b24
https://www.ibm.com/developerworks/community/blogs/aixpert/entry/faq3_how_can_i_monitoring_shared_processor_pools_longer_term?lang=en
https://www.ibm.com/developerworks/community/blogs/aixpert/entry/faq2_analyzing_large_volumes_of_nmon_data?lang=en
<<showtoc>>
! pass SQL
http://stackoverflow.com/questions/32048072/how-to-pass-input-variable-to-sql-statement-in-r-shiny
http://www.slideshare.net/AlexBrown17/shiny-r-live-shared-and-explored
Question on MVC in Shiny
https://groups.google.com/forum/#!topic/shiny-discuss/4NMgmuIGM4Y
http://stackoverflow.com/questions/35794021/mvc-pattern-in-r
best practices for modular Shiny app development - directory structure
https://groups.google.com/forum/#!topic/shiny-discuss/A1LjCeFN-hc
Shiny - the future
https://groups.google.com/forum/#!searchin/shiny-discuss/mvc|sort:relevance/shiny-discuss/Tv82PBTte-c/ulq_yaJL45wJ
rApache websockets
https://groups.google.com/forum/#!searchin/shiny-discuss/mvc|sort:relevance/shiny-discuss/xCIxH-Wb3LQ/Pd_i4WIcRhgJ
shiny server is nodejs
https://blog.rstudio.org/2016/10/14/shiny-server-pro-1-4-7/
<<<
Shiny Server was originally written using Node.js 0.10, which is nearing the end of its lifespan. This release will move to Node.js 6.x.
<<<
Shiny Server - Is sky the limit?
https://groups.google.com/forum/#!topic/shiny-discuss/E6PwgQ-v13w
shiny-server, multiple users and heavy computations
https://groups.google.com/forum/#!searchin/shiny-discuss/scale/shiny-discuss/E75Yv-kKG3w/TtlLqDTngzoJ
<<<
Shiny Server Open Source
You can only run one process per application, and R processes are single-threaded, meaning they can only do one thing at a time. So if an R operation takes 50 seconds, and you have multiple users hit your application at the same time, the first one will wait 50 seconds for their analysis to complete, the second one will wait 100 seconds (50s for the original user's computation + 50s for their own). So the last user would be waiting 500 seconds.
<<<
Importing and accessing large data files in Shiny
http://stackoverflow.com/questions/25656449/importing-and-accessing-large-data-files-in-shiny
google search "R reactivepoll"
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=R+reactivepoll
example
http://shiny.rstudio.com/gallery/reactive-poll-and-file-reader.html
example: Reactive poll and file reader example for R Shiny
https://gist.github.com/wch/9652222
Use reactivePoll to accumulate data for output
http://stackoverflow.com/questions/19898113/use-reactivepoll-to-accumulate-data-for-output
Call Variable from reactive data() in R Shiny App
http://stackoverflow.com/questions/17421779/call-variable-from-reactive-data-in-r-shiny-app
How can I pass data between functions in a Shiny app
http://stackoverflow.com/questions/13592225/how-can-i-pass-data-between-functions-in-a-shiny-app
ggplot/reactive context error when subsetting data
https://groups.google.com/forum/#!topic/shiny-discuss/A4RAGyuzQTw
<<showtoc>>
! google group search "real time"
https://groups.google.com/forum/#!searchin/shiny-discuss/real$20time%7Csort:relevance
! Shiny and real-time hookup to Oracle
https://groups.google.com/forum/#!searchin/shiny-discuss/real$20time|sort:relevance/shiny-discuss/pIccP0dbh2A/TTdmgK3kyecJ
{{{
## ui.R
library(shiny)
library(ROracle)
library(ggplot2)

shinyUI(fluidPage(
  titlePanel("Prototyp Shiny"),
  sidebarLayout(
    sidebarPanel(
      numericInput(
        inputId = "anr",
        label = "Artikelnummer",
        value = ""
      )
    ),
    # Show the result table
    mainPanel(
      dataTableOutput("anzk")
    )
  )
))

## server.R
library(shiny)
library(ROracle)
library(ggplot2)

# Define server logic: query Oracle reactively and render the count as a table
shinyServer(function(input, output) {
  art_re <- reactive({
    drv <- dbDriver("Oracle")
    con <- dbConnect(drv, username = "**", password = "**", dbname = "**")
    # note the space before "and", otherwise the SQL text is mangled after paste0()
    res <- dbSendQuery(con, paste0("select count(*) from kuart_mon where firma='00' and artnr=", input$anr, " and periode='201512'"))
    art_res <- fetch(res)
    dbDisconnect(con)
    art_res
  })
  output$anzk <- renderTable({ art_re() })
})

runApp('H:\\Profile\\Projekte\\Prototype_shiny\\app_1')
}}}
Slow processing time for subsetting in Shiny
https://groups.google.com/forum/#!topic/shiny-discuss/9cHEOw7HP2U
<<<
There are a number of other things that happen in Shiny - for example, if you're sending a big table from the server to the browser, it needs to be converted to JSON and sent over the network
<<<
A Simple Shiny App for Monitoring Trading Strategies – Part II
http://www.statsblogs.com/2014/08/07/a-simple-shiny-app-for-monitoring-trading-strategies-part-ii/
''ostackprof''
http://blog.tanelpoder.com/files/scripts/ostackprof.sql - Windows-based script
http://blog.tanelpoder.com/files/scripts/aot/short_stack.sql
http://oraclue.com/2008/09/27/getting-oracle-stack-using-oradebug-short_stack/
http://blog.tanelpoder.com/2008/10/31/advanced-oracle-troubleshooting-guide-part-9-process-stack-profiling-from-sqlplus-using-ostackprof/
http://blog.tanelpoder.com/2008/06/15/advanced-oracle-troubleshooting-guide-part-6-understanding-oracle-execution-plans-with-os_explain/
''12c shortstack'' http://scn.sap.com/community/oracle/blog/2014/10/26/oracle--new-diagnostic-event-waitevent-and-others-for-troubleshooting-researching-purpose-with-oracle-12c
{{{
set echo on
exec dbms_monitor.session_trace_enable(waits=>true);
alter session set events 'wait_event["cell list of blocks read request"] shortstack()';
alter session set events 'wait_event["cell multiblock read request"] shortstack()';
alter session set events 'wait_event["cell physical read no I/O"] shortstack()';
alter session set events 'wait_event["cell single block read request"] shortstack()';
egrep 'cell list of blocks read request|cell multiblock read request|cell physical read no I/O|cell single block read request' /u01/app/oracle/diag/rdbms/dbm01/dbm011/trace/dbm011_ora_76459.trc
}}}
''dstackprof''
http://blog.tanelpoder.com/2009/04/24/tracing-oracle-sql-plan-execution-with-dtrace/
http://blog.tanelpoder.com/2008/09/02/oracle-hidden-costs-revealed-part2-using-dtrace-to-find-why-writes-in-system-tablespace-are-slower-than-in-others/
''pstack''
http://blog.tanelpoder.com/2008/06/15/advanced-oracle-troubleshooting-guide-part-6-understanding-oracle-execution-plans-with-os_explain/
Lab128 has automated the pstack sampling, os_explain, & reporting. Good tool to know where the query was spending time http://goo.gl/fyH5x
''documentation'' http://docs.oracle.com/cd/E11882_01/install.112/e24321/inst_task.htm#CIHCDHJB, http://docs.oracle.com/cd/E11882_01/install.112/e24321/app_nonint.htm#BABFEECI
10gR2 example http://www.oracle-base.com/articles/misc/OuiSilentInstallations.php
dbca.rsp http://vegdave.wordpress.com/2006/08/31/how-to-create-an-instance-of-oracle-database-using-dbca-command-line-tool/
netca.rsp http://anuj-singh.blogspot.com/2011/05/oracle-netca-in-silent-mode.html
''[SEVERE] - Invalid My Oracle Support credentials - ERROR'' http://askdba.org/weblog/2010/05/11gr2-silent-install-errors-with-severe-invalid-my-oracle-support-credentials/ <-- GOOD STUFF
full channel list
http://news.sling.com/sites/sling.newshq.businesswire.com/files/doc_library/file/Channel_Comparison_Chart.pdf
Check out the CPU_WAIT_SECS column on 8 vs 16 sessions. Editing runit.sh changes the behavior to a sustained LIOs workload; see "HOWTO on LIOs test case" on [[cpu - SillyLittleBenchmark - SLOB]] for the steps.
! before edit of runit.sh, 16 sessions
{{{
select 'alter user '||username||' identified by '||username||';' from dba_users where username like 'USER%';
select username, account_status from dba_users;
oracle@desktopserver.local:/home/oracle/dba/benchmark/SLOB:slob
$ while :; do ./runit.sh 0 16; done
Tm 40
Tm 5
Tm 6
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 6
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
^C
20130115 01:44:57 0 0 3 0 0 0 0 95 8 13K 12K 2 960 0 5.15 4.52 3.53
20130115 01:44:58 8 0 8 1 0 0 0 81 8 16K 14K 29 945 18 6.26 4.76 3.61
20130115 01:44:59 89 0 9 0 0 0 0 0 8 16K 10K 4 942 17 6.26 4.76 3.61
20130115 01:45:00 93 0 5 0 0 0 0 0 8 15K 11K 10 943 19 6.26 4.76 3.61
20130115 01:45:01 93 0 6 0 0 0 0 0 8 14K 9220 6 942 19 6.26 4.76 3.61
20130115 01:45:02 93 0 5 0 0 0 0 0 8 14K 9993 10 943 20 6.26 4.76 3.61
20130115 01:45:03 80 0 7 0 0 0 0 10 8 17K 11K 10 912 3 5.84 4.70 3.59
20130115 01:45:04 17 0 9 3 0 0 0 69 8 14K 13K 144 957 12 5.84 4.70 3.59
20130115 01:45:05 2 0 5 0 0 0 0 91 8 9716 11K 21 959 1 5.84 4.70 3.59
20130115 01:45:06 1 0 4 0 0 0 0 93 8 9880 11K 10 960 4 5.84 4.70 3.59
20130115 01:45:07 3 0 6 0 0 0 0 90 8 11K 13K 53 959 1 5.84 4.70 3.59
20130115 01:45:08 1 0 5 0 0 0 0 92 8 14K 13K 9 960 7 5.37 4.62 3.57
20130115 01:45:09 1 0 4 0 0 0 0 93 8 11K 11K 8 959 2 5.37 4.62 3.57
20130115 01:45:10 1 0 4 0 0 0 0 94 8 15K 11K 8 959 5 5.37 4.62 3.57
20130115 01:45:11 2 0 5 0 0 0 0 91 8 9973 11K 11 960 0 5.37 4.62 3.57
20130115 01:45:12 1 0 4 0 0 0 0 93 8 9980 12K 8 959 1 5.37 4.62 3.57
20130115 01:45:13 1 0 5 0 0 0 0 92 8 9664 13K 8 959 1 5.02 4.56 3.56
20130115 01:45:14 2 0 4 0 0 0 0 92 8 11K 11K 16 960 2 5.02 4.56 3.56
20130115 01:45:15 64 0 11 0 0 0 0 22 8 17K 13K 20 945 18 5.02 4.56 3.56
20130115 01:45:16 93 0 6 0 0 0 0 0 8 16K 9609 19 942 18 5.02 4.56 3.56
20130115 01:45:17 93 0 5 0 0 0 0 0 8 16K 12K 8 942 17 5.02 4.56 3.56
20130115 01:45:18 93 0 6 0 0 0 0 0 8 16K 12K 16 942 18 5.98 4.77 3.63
20130115 01:45:19 92 0 6 0 0 0 0 0 8 13K 10K 8 924 8 5.98 4.77 3.63
20130115 01:45:20 22 0 6 0 0 0 0 69 8 16K 12K 22 912 3 5.98 4.77 3.63
20130115 01:45:21 11 0 9 0 0 0 0 78 8 14K 13K 139 959 1 5.98 4.77 3.63
20130115 01:45:22 2 0 5 0 0 0 0 91 8 13K 12K 26 959 2 5.98 4.77 3.63
20130115 01:45:23 1 0 4 0 0 0 0 94 8 9693 13K 8 959 1 5.50 4.69 3.61
20130115 01:45:24 0 0 4 0 0 0 0 93 8 9668 11K 8 959 1 5.50 4.69 3.61
20130115 01:45:25 1 0 4 0 0 0 0 93 8 10K 11K 8 959 1 5.50 4.69 3.61
20130115 01:45:26 1 0 4 0 0 0 0 94 8 10K 11K 9 959 1 5.50 4.69 3.61
20130115 01:45:27 1 0 5 0 0 0 0 92 8 10K 12K 8 959 1 5.50 4.69 3.61
20130115 01:45:28 1 0 5 0 0 0 0 93 8 11K 13K 8 959 2 5.06 4.61 3.59
20130115 01:45:29 2 0 5 0 0 0 0 91 8 10K 10K 10 960 2 5.06 4.61 3.59
20130115 01:45:30 1 0 5 0 0 0 0 93 8 16K 11K 12 961 4 5.06 4.61 3.59
# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
#Date Time User Nice Sys Wait IRQ Soft Steal Idle CPUs Intr Ctxsw Proc RunQ Run Avg1 Avg5 Avg15
20130115 01:45:31 23 0 14 1 0 0 0 60 8 16K 11K 16 944 16 5.06 4.61 3.59
20130115 01:45:32 94 0 4 0 0 0 0 0 8 14K 11K 8 943 17 5.06 4.61 3.59
20130115 01:45:33 94 0 5 0 0 0 0 0 8 16K 12K 8 943 17 6.02 4.81 3.66
20130115 01:45:34 93 0 5 0 0 0 0 0 8 20K 10K 11 943 17 6.02 4.81 3.66
20130115 01:45:35 93 0 5 0 0 0 0 0 8 17K 10K 9 939 13 6.02 4.81 3.66
20130115 01:45:36 55 0 8 0 0 0 0 34 8 15K 10K 15 913 1 6.02 4.81 3.66
20130115 01:45:37 16 0 8 1 0 0 0 73 8 13K 14K 147 960 1 6.02 4.81 3.66
20130115 01:45:38 3 0 5 0 0 0 0 90 8 12K 13K 21 960 3 5.53 4.73 3.64
01/15/13 01:44:31 Logical reads 1,209,222.5 881,687.4
01/15/13 01:44:33 Logical reads 1,209,222.5 881,687.4
01/15/13 01:44:35 Logical reads 1,209,222.5 881,687.4
01/15/13 01:44:37 Logical reads 1,209,222.5 881,687.4
01/15/13 01:44:39 Logical reads 1,209,222.5 881,687.4
01/15/13 01:44:42 Logical reads 1,209,222.5 881,687.4
01/15/13 01:44:44 Logical reads 1,209,222.5 881,687.4
01/15/13 01:44:46 Logical reads 1,209,222.5 881,687.4
01/15/13 01:44:48 Logical reads 1,209,222.5 881,687.4
01/15/13 01:44:50 Logical reads 1,379,850.0 900,952.1
01/15/13 01:44:52 Logical reads 1,379,850.0 900,952.1
01/15/13 01:44:54 Logical reads 1,379,850.0 900,952.1
01/15/13 01:44:56 Logical reads 1,379,850.0 900,952.1
01/15/13 01:44:58 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:00 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:02 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:04 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:06 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:08 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:10 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:12 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:14 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:16 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:18 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:20 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:22 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:24 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:26 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:28 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:30 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:32 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:34 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:36 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:38 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:40 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:42 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:44 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:46 Logical reads 1,379,850.0 900,952.1
01/15/13 01:45:48 Logical reads 1,379,850.0 900,952.1
15 01:43:25 1 1.00 CPU +++++ 8
15 01:43:35 4 12.25 CPU .00 log file parallel write ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:43:40 3 8.31 CPU .97 cursor: pin S wait on X ++++++++++++++++++++++++++++++++++++++++8+++------
15 01:43:50 2 16.01 CPU .50 db file scattered read ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:43:55 4 10.29 CPU .21 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:44:05 1 16.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:44:10 4 16.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:44:25 5 16.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:44:30 1 1.00 CPU .00 log file parallel write +++++ 8
15 01:44:40 3 16.01 CPU .33 db file sequential read ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:44:45 2 10.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:44:55 2 16.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:45:00 4 11.75 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:45:10 1 16.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:45:15 5 13.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:45:25 1 1.00 CPU +++++ 8
15 01:45:30 5 13.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:45:35 2 2.00 CPU ++++++++++ 8
15 01:45:45 3 16.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:45:50 5 9.02 CPU .38 latch free ++++++++++++++++++++++++++++++++++++++++8++++++--
15 01:45:55 3 16.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
52 rows selected.
TM INST SID PROG USERNAME SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME AVG_LIOS SQL_TEXT
OSUSER MACHINE
----------------- ----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ---------- --------
--------------------------------- ------------------------------ -------------------------
01/15/13 01:40:48 1 965 sqlplus@de SYS c80qj9qbfrrgy 0 79376787 29 .002052 638.43 select t
o_char(sysdate,'MM/DD/YY HH24:MI: oracle desktopserver.local
01/15/13 01:40:48 1 1345 sqlplus@de USER1 68vu5q46nu22s 4 198477492 8 2.108704 108.24 SELECT C
OUNT(C2) FROM CF1 WHERE CUSTID > oracle desktopserver.local
01/15/13 01:40:48 1 772 sqlplus@de USER10 68vu5q46nu22s 7 198477492 9 1.863848 124.29
oracle desktopserver.local
01/15/13 01:40:48 1 770 sqlplus@de USER11 68vu5q46nu22s 12 198477492 8 2.082638 109.78
oracle desktopserver.local
01/15/13 01:40:48 1 391 sqlplus@de USER12 68vu5q46nu22s 8 198477492 8 2.070093 110.26
oracle desktopserver.local
01/15/13 01:40:48 1 6 sqlplus@de USER13 68vu5q46nu22s 5 198477492 9 1.792141 129.39
oracle desktopserver.local
01/15/13 01:40:48 1 1154 sqlplus@de USER14 68vu5q46nu22s 14 198477492 9 1.838884 125.92
oracle desktopserver.local
01/15/13 01:40:48 1 1344 sqlplus@de USER15 68vu5q46nu22s 11 198477492 10 1.706702 137.34
oracle desktopserver.local
01/15/13 01:40:48 1 390 sqlplus@de USER16 68vu5q46nu22s 0 198477492 9 1.805574 128.31
oracle desktopserver.local
01/15/13 01:40:48 1 962 sqlplus@de USER2 68vu5q46nu22s 2 198477492 8 2.049950 111.53
oracle desktopserver.local
01/15/13 01:40:48 1 1153 sqlplus@de USER3 68vu5q46nu22s 3 198477492 8 2.158388 110.73
oracle desktopserver.local
01/15/13 01:40:48 1 582 sqlplus@de USER4 68vu5q46nu22s 6 198477492 9 1.817453 127.65
oracle desktopserver.local
01/15/13 01:40:48 1 198 sqlplus@de USER5 68vu5q46nu22s 10 198477492 8 2.045978 113.33
oracle desktopserver.local
01/15/13 01:40:48 1 580 sqlplus@de USER6 68vu5q46nu22s 13 198477492 8 2.128022 119.18
oracle desktopserver.local
01/15/13 01:40:48 1 961 sqlplus@de USER7 68vu5q46nu22s 15 198477492 9 1.856603 124.78
oracle desktopserver.local
01/15/13 01:40:48 1 8 sqlplus@de USER8 68vu5q46nu22s 1 198477492 9 1.844254 125.68
oracle desktopserver.local
01/15/13 01:40:48 1 200 sqlplus@de USER9 68vu5q46nu22s 9 198477492 9 1.920541 120.86
oracle desktopserver.local
17 rows selected.
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------
01/15/13 01:58:00, 331847, 85436299, 143.15, 329.47, 186.32, 0, 0, 0, 257.46, 259315.08
01/15/13 01:58:00, 332600, 85627839, 144.53, 331.34, 186.81, 0, 0, 0, 257.45, 258426.57
01/15/13 01:58:00, 332015, 85475930, 143.72, 334.64, 190.92, 0, 0, 0, 257.45, 255428.2
01/15/13 01:58:00, 331986, 85466539, 144.43, 331.2, 186.77, 0, 0, 0, 257.44, 258050.16
01/15/13 01:58:00, 332079, 85495166, 144.19, 333.07, 188.88, 0, 0, 0, 257.45, 256687.34
01/15/13 01:58:00, 332093, 85494874, 144.44, 330.54, 186.1, 0, 0, 0, 257.44, 258654.96
01/15/13 01:58:00, 331952, 85458607, 144.3, 330.55, 186.26, 0, 0, 0, 257.44, 258532.23
01/15/13 01:58:00, 327504, 84314475, 142.06, 328.86, 186.8, 0, 0, 0, 257.45, 256384.37
01/15/13 01:58:00, 331451, 85333882, 143.83, 330.23, 186.4, 0, 0, 0, 257.46, 258407.1
01/15/13 01:58:00, 331598, 85366396, 143.18, 326.72, 183.54, 0, 0, 0, 257.44, 261281.17
01/15/13 01:58:00, 331583, 85365690, 143.28, 330.47, 187.19, 0, 0, 0, 257.45, 258318.18
01/15/13 01:58:00, 331709, 85399314, 143.91, 331.69, 187.78, 0, 0, 0, 257.45, 257464.81
01/15/13 01:58:00, 331827, 85428651, 144.11, 328.08, 183.97, 0, 0, 0, 257.45, 260389.66
01/15/13 01:58:00, 331686, 85394217, 143.85, 330.62, 186.77, 0, 0, 0, 257.45, 258288.14
01/15/13 01:58:00, 332039, 85483486, 143.74, 332.08, 188.34, 0, 0, 0, 257.45, 257420.12
01/15/13 01:58:00, 331420, 85325534, 143.67, 330.47, 186.79, 0, 0, 0, 257.45, 258197.42
16 rows selected.
}}}
! after edit of runit.sh, 16 sessions
{{{
# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
#Date Time User Nice Sys Wait IRQ Soft Steal Idle CPUs Intr Ctxsw Proc RunQ Run Avg1 Avg5 Avg15
20130115 01:58:15 2 0 3 0 0 0 0 92 8 12K 11K 7 969 1 10.00 7.87 5.36
20130115 01:58:16 74 0 14 0 0 0 0 10 8 14K 11K 17 951 18 10.00 7.87 5.36
20130115 01:58:17 94 0 5 0 0 0 0 0 8 15K 11K 0 951 18 10.00 7.87 5.36
20130115 01:58:18 93 0 6 0 0 0 0 0 8 19K 10K 8 951 16 10.00 7.87 5.36
20130115 01:58:19 95 0 4 0 0 0 0 0 8 16K 11K 0 951 19 10.48 8.01 5.41
20130115 01:58:20 91 0 8 0 0 0 0 0 8 15K 9585 8 933 9 10.48 8.01 5.41
20130115 01:58:21 16 0 7 1 0 0 0 73 8 14K 12K 76 957 19 10.48 8.01 5.41
20130115 01:58:22 7 0 9 0 0 0 0 82 8 12K 13K 89 969 1 10.48 8.01 5.41
20130115 01:58:23 60 0 12 0 0 0 0 25 8 16K 14K 4 951 16 10.48 8.01 5.41
20130115 01:58:24 94 0 5 0 0 0 0 0 8 13K 10K 8 951 17 10.92 8.14 5.47
20130115 01:58:25 94 0 5 0 0 0 0 0 8 15K 10K 0 951 17 10.92 8.14 5.47
20130115 01:58:26 93 0 6 0 0 0 0 0 8 14K 11K 18 952 17 10.92 8.14 5.47
20130115 01:58:27 94 0 5 0 0 0 0 0 8 16K 12K 0 947 15 10.92 8.14 5.47
20130115 01:58:28 24 0 6 0 0 0 0 67 8 16K 14K 22 921 2 10.92 8.14 5.47
20130115 01:58:29 10 0 7 0 0 0 0 80 8 17K 17K 132 968 0 10.13 8.02 5.45
20130115 01:58:30 43 0 12 0 0 0 0 43 8 13K 11K 16 951 18 10.13 8.02 5.45
20130115 01:58:31 94 0 5 0 0 0 0 0 8 15K 11K 8 951 16 10.13 8.02 5.45
20130115 01:58:32 94 0 5 0 0 0 0 0 8 15K 11K 8 951 18 10.13 8.02 5.45
20130115 01:58:33 94 0 5 0 0 0 0 0 8 19K 11K 8 951 19 10.13 8.02 5.45
20130115 01:58:34 94 0 5 0 0 0 0 0 8 15K 10K 11 949 16 10.68 8.17 5.51
20130115 01:58:35 37 0 6 0 0 0 0 54 8 16K 11K 14 921 4 10.68 8.17 5.51
20130115 01:58:36 14 0 7 0 0 0 0 77 8 12K 12K 147 968 1 10.68 8.17 5.51
20130115 01:58:37 20 0 11 0 0 0 0 67 8 14K 12K 9 951 16 10.68 8.17 5.51
20130115 01:58:38 93 0 6 0 0 0 0 0 8 14K 11K 8 951 18 10.68 8.17 5.51
20130115 01:58:39 94 0 5 0 0 0 0 0 8 15K 10K 0 951 18 11.18 8.32 5.57
20130115 01:58:40 94 0 5 0 0 0 0 0 8 13K 10K 8 951 19 11.18 8.32 5.57
20130115 01:58:41 94 0 5 0 0 0 0 0 8 15K 10K 21 956 18 11.18 8.32 5.57
20130115 01:58:42 61 0 7 0 0 0 0 29 8 15K 12K 15 922 1 11.18 8.32 5.57
20130115 01:58:43 15 0 9 0 0 0 0 73 8 14K 15K 138 969 1 11.18 8.32 5.57
20130115 01:58:44 4 0 5 0 0 0 0 88 8 12K 14K 17 969 3 10.29 8.18 5.54
20130115 01:58:45 36 0 18 0 0 0 0 44 8 14K 12K 132 968 2 10.29 8.18 5.54
20130115 01:58:46 20 0 12 0 0 0 0 65 8 13K 11K 24 951 17 10.29 8.18 5.54
20130115 01:58:47 78 0 19 0 0 0 0 1 8 16K 13K 130 948 16 10.29 8.18 5.54
20130115 01:58:48 94 0 5 0 0 0 0 0 8 14K 11K 8 948 16 10.29 8.18 5.54
$ while :; do ./runit.sh 0 16; done
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
SQL
TM Load Profile Per Second Per Transaction
----------------- -------------------- -------------- ---------------
01/15/13 01:58:49 DB Time 11.1
01/15/13 01:58:49 DB CPU 5.5
01/15/13 01:58:49 Redo size 207,565.7 66,635.4
01/15/13 01:58:49 Logical reads 2,882,015.3 925,220.9
01/15/13 01:58:49 Block changes 1,554.0 498.9
01/15/13 01:58:49 Physical reads 2.0 .6
01/15/13 01:58:49 Physical writes 34.6 11.1
01/15/13 01:58:49 User calls 105.2 33.8
01/15/13 01:58:49 Parses 122.7 39.4
01/15/13 01:58:49 Hard Parses 2.8 .9
01/15/13 01:58:49 Logons 3.6 1.2
01/15/13 01:58:49 Executes 11,322.7 3,634.9
01/15/13 01:58:49 Rollbacks .0
01/15/13 01:58:49 Transactions 3.1
01/15/13 01:58:49 Applied urec .1 .0
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------
01/15/13 01:58:50, 367051, 94499113, 158.45, 360.85, 202.4, 0, 0, 0, 257.45, 261880.37
01/15/13 01:58:50, 367151, 94522370, 159.57, 362.78, 203.21, 0, 0, 0, 257.45, 260547.22
01/15/13 01:58:50, 367054, 94496415, 159.06, 366.5, 207.44, 0, 0, 0, 257.45, 257832.24
01/15/13 01:58:50, 366274, 94293097, 159.22, 362.96, 203.73, 0, 0, 0, 257.44, 259791.81
01/15/13 01:58:50, 367424, 94594423, 159.76, 365.19, 205.43, 0, 0, 0, 257.45, 259028.9
01/15/13 01:58:50, 367435, 94592875, 159.99, 361.94, 201.95, 0, 0, 0, 257.44, 261350.95
01/15/13 01:58:50, 362462, 93313012, 157.57, 358.31, 200.74, 0, 0, 0, 257.44, 260423.87
01/15/13 01:58:50, 362272, 93264793, 157.08, 361.16, 204.08, 0, 0, 0, 257.44, 258234.69
01/15/13 01:58:50, 366556, 94371277, 159.32, 361.49, 202.17, 0, 0, 0, 257.45, 261061.4
01/15/13 01:58:50, 367043, 94490873, 158.77, 358.37, 199.6, 0, 0, 0, 257.44, 263668.36
01/15/13 01:58:50, 366264, 94293811, 158.47, 362.24, 203.77, 0, 0, 0, 257.45, 260310.29
01/15/13 01:58:50, 366316, 94308624, 159.04, 363.58, 204.54, 0, 0, 0, 257.45, 259387.63
01/15/13 01:58:50, 367052, 94496813, 159.61, 359.36, 199.75, 0, 0, 0, 257.45, 262959.8
01/15/13 01:58:50, 367030, 94493180, 159.22, 362.71, 203.49, 0, 0, 0, 257.45, 260518.71
01/15/13 01:58:50, 367182, 94530582, 159.08, 363.74, 204.66, 0, 0, 0, 257.45, 259885.03
01/15/13 01:58:50, 366068, 94245591, 158.73, 361.99, 203.27, 0, 0, 0, 257.45, 260352.66
16 rows selected.
15 01:57:10 5 16.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:57:15 3 16.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:57:20 5 6.40 CPU ++++++++++++++++++++++++++++++++ 8
15 01:57:25 5 14.26 CPU .14 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:57:30 3 16.01 CPU .33 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:57:35 5 9.00 CPU ++++++++++++++++++++++++++++++++++++++++8++++++
15 01:57:40 5 12.60 CPU .00 log file parallel write ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:57:45 4 15.52 CPU .48 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:57:50 5 8.23 CPU .59 cursor: mutex S ++++++++++++++++++++++++++++++++++++++++8++-------
15 01:57:55 4 11.76 CPU .49 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:58:00 4 15.93 CPU .33 control file sequential r ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:58:05 5 7.40 CPU +++++++++++++++++++++++++++++++++++++ 8
15 01:58:10 4 11.75 CPU .75 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:58:15 5 16.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:58:20 5 9.80 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:58:25 5 10.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:58:30 5 13.20 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:58:35 4 12.24 CPU .51 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:58:40 5 5.20 CPU ++++++++++++++++++++++++++ 8
15 01:58:45 5 12.54 CPU .26 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 01:58:50 1 14.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
137 rows selected.
while :; do ./runit.sh 0 16; done
edit runit.sh: change the sleep 10 to sleep 1
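# (runit.sh internals are not shown in this capture; assumed: a loop that
# launches the N sqlplus sessions, then sleeps between relaunches, so the
# shorter sleep shrinks the idle gaps visible in the collectl output above)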
SQL
TM CPU_TOTAL CPU_OS CPU_ORA CPU_ORA_WAIT COMMIT READIO WAIT
----------------- ---------- ---------- ---------- ------------ ---------- ---------- ----------
01/15/13 02:20:09 11.935 .555 5.322 6.058 .001 .004 .124
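# (note: CPU_ORA 5.3 vs CPU_ORA_WAIT 6.1 -- with 16 sessions on 8 CPU threads,
# roughly half of the demanded database CPU time is queued on the runqueue;
# column meanings assumed from the CPU-time breakdown script used here)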
TM SESSION_ID AAS
----------------- ---------- ----------
01/15/13 02:20:09 7 44
01/15/13 02:20:09 8 10
01/15/13 02:20:09 9 42
01/15/13 02:20:09 10 33
01/15/13 02:20:09 201 43
01/15/13 02:20:09 203 42
01/15/13 02:20:09 390 40
01/15/13 02:20:09 393 42
01/15/13 02:20:09 574 1
01/15/13 02:20:09 575 0
01/15/13 02:20:09 580 40
01/15/13 02:20:09 582 41
01/15/13 02:20:09 585 42
01/15/13 02:20:09 586 43
01/15/13 02:20:09 777 2
01/15/13 02:20:09 964 1
01/15/13 02:20:09 966 42
01/15/13 02:20:09 967 45
01/15/13 02:20:09 1152 5
01/15/13 02:20:09 1154 42
01/15/13 02:20:09 1344 42
01/15/13 02:20:09 1348 41
----------
sum 683
16 sess turbostat
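# turbostat column key (for the tables below): %c0 = busy residency, GHz =
# average clock while busy, TSC = base clock, %c1/%c3/%c6/%c7 = core
# idle-state residency, %pc2..%pc7 = package idle-state residency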
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
58.15 3.42 3.41 18.28 5.72 17.86 0.00 0.83 0.80 0.22 0.00
0 0 70.63 3.40 3.41 11.59 6.26 11.52 0.00 0.83 0.80 0.22 0.00
0 4 56.56 3.40 3.41 25.66 6.26 11.52 0.00 0.83 0.80 0.22 0.00
1 1 58.65 3.44 3.41 16.72 4.16 20.47 0.00 0.83 0.80 0.22 0.00
1 5 54.59 3.43 3.41 20.78 4.16 20.47 0.00 0.83 0.80 0.22 0.00
2 2 59.03 3.40 3.41 27.25 10.21 3.51 0.00 0.83 0.80 0.22 0.00
2 6 56.06 3.38 3.41 30.22 10.21 3.51 0.00 0.83 0.80 0.22 0.00
3 3 56.89 3.47 3.41 4.95 2.23 35.93 0.00 0.83 0.80 0.22 0.00
3 7 52.81 3.47 3.41 9.03 2.23 35.93 0.00 0.83 0.80 0.22 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
94.52 3.50 3.41 3.30 1.24 0.94 0.00 0.00 0.00 0.00 0.00
0 0 97.09 3.49 3.41 2.24 0.62 0.05 0.00 0.00 0.00 0.00 0.00
0 4 93.98 3.51 3.41 5.35 0.62 0.05 0.00 0.00 0.00 0.00 0.00
1 1 96.35 3.50 3.41 3.12 0.20 0.33 0.00 0.00 0.00 0.00 0.00
1 5 92.41 3.50 3.41 7.06 0.20 0.33 0.00 0.00 0.00 0.00 0.00
2 2 93.75 3.51 3.41 2.42 1.81 2.02 0.00 0.00 0.00 0.00 0.00
2 6 95.69 3.51 3.41 0.48 1.81 2.02 0.00 0.00 0.00 0.00 0.00
3 3 93.85 3.51 3.41 2.46 2.35 1.34 0.00 0.00 0.00 0.00 0.00
3 7 93.08 3.51 3.41 3.23 2.35 1.34 0.00 0.00 0.00 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
67.55 3.45 3.41 12.27 5.11 15.07 0.00 1.02 0.83 0.30 0.00
0 0 72.98 3.37 3.41 17.74 3.10 6.18 0.00 1.02 0.83 0.30 0.00
0 4 75.85 3.41 3.41 14.86 3.10 6.18 0.00 1.02 0.83 0.30 0.00
1 1 66.62 3.45 3.41 11.30 8.78 13.31 0.00 1.02 0.83 0.30 0.00
1 5 65.96 3.44 3.41 11.95 8.78 13.31 0.00 1.02 0.83 0.30 0.00
2 2 64.21 3.50 3.41 6.40 7.07 22.32 0.00 1.02 0.83 0.30 0.00
2 6 64.19 3.49 3.41 6.42 7.07 22.32 0.00 1.02 0.83 0.30 0.00
3 3 65.51 3.48 3.41 14.52 1.50 18.46 0.00 1.02 0.83 0.30 0.00
3 7 65.07 3.48 3.41 14.97 1.50 18.46 0.00 1.02 0.83 0.30 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
59.94 3.42 3.41 15.77 4.29 20.00 0.00 1.38 1.42 0.07 0.00
0 0 71.41 3.39 3.41 11.28 4.18 13.13 0.00 1.38 1.42 0.07 0.00
0 4 61.63 3.39 3.41 21.06 4.18 13.13 0.00 1.38 1.42 0.07 0.00
1 1 58.93 3.44 3.41 8.19 2.49 30.40 0.00 1.38 1.42 0.07 0.00
1 5 54.74 3.45 3.41 12.38 2.49 30.40 0.00 1.38 1.42 0.07 0.00
2 2 64.23 3.33 3.41 25.92 8.71 1.14 0.00 1.38 1.42 0.07 0.00
2 6 58.48 3.41 3.41 31.67 8.71 1.14 0.00 1.38 1.42 0.07 0.00
3 3 55.59 3.47 3.41 7.30 1.79 35.32 0.00 1.38 1.42 0.07 0.00
3 7 54.53 3.50 3.41 8.35 1.79 35.32 0.00 1.38 1.42 0.07 0.00
}}}
! after edit of runit.sh, 8 sessions
{{{
# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
#Date Time User Nice Sys Wait IRQ Soft Steal Idle CPUs Intr Ctxsw Proc RunQ Run Avg1 Avg5 Avg15
20130115 02:08:53 8 0 5 0 0 0 0 85 8 13K 14K 72 953 3 3.27 4.43 4.62
20130115 02:08:54 29 0 8 0 0 0 0 60 8 14K 11K 19 942 11 3.01 4.36 4.59
20130115 02:08:55 93 0 4 0 0 0 0 1 8 17K 10K 4 945 10 3.01 4.36 4.59
20130115 02:08:56 91 0 7 0 0 0 0 1 8 15K 11K 22 934 5 3.01 4.36 4.59
20130115 02:08:57 15 0 7 0 0 0 0 75 8 20K 13K 78 950 11 3.01 4.36 4.59
20130115 02:08:58 3 0 5 0 0 0 0 90 8 13K 13K 34 952 2 3.01 4.36 4.59
20130115 02:08:59 72 0 7 0 0 0 0 19 8 15K 11K 5 943 11 3.81 4.50 4.64
20130115 02:09:00 95 0 4 0 0 0 0 0 8 14K 11K 14 944 12 3.81 4.50 4.64
20130115 02:09:01 55 0 5 0 0 0 0 38 8 16K 11K 26 934 3 3.81 4.50 4.64
20130115 02:09:02 13 0 7 0 0 0 0 78 8 14K 15K 98 953 3 3.81 4.50 4.64
20130115 02:09:03 5 0 5 0 0 0 0 88 8 18K 13K 8 942 9 3.81 4.50 4.64
20130115 02:09:04 92 0 6 0 0 0 0 0 8 15K 11K 16 944 10 4.14 4.56 4.66
20130115 02:09:05 94 0 5 0 0 0 0 0 8 18K 12K 0 942 8 4.14 4.56 4.66
20130115 02:09:06 32 0 6 0 0 0 0 59 8 14K 12K 38 930 3 4.14 4.56 4.66
20130115 02:09:07 9 0 6 0 0 0 0 84 8 13K 13K 75 953 3 4.14 4.56 4.66
20130115 02:09:08 29 0 8 0 0 0 0 61 8 14K 14K 24 942 11 4.14 4.56 4.66
20130115 02:09:09 94 0 4 0 0 0 0 0 8 16K 11K 0 943 9 4.45 4.62 4.67
20130115 02:09:10 90 0 5 0 0 0 0 3 8 15K 11K 16 933 4 4.45 4.62 4.67
20130115 02:09:11 13 0 4 0 0 0 0 80 8 13K 12K 24 932 2 4.45 4.62 4.67
20130115 02:09:12 7 0 7 0 0 0 0 84 8 12K 14K 91 953 1 4.45 4.62 4.67
20130115 02:09:13 62 0 8 0 0 0 0 28 8 19K 14K 8 942 10 4.45 4.62 4.67
20130115 02:09:14 94 0 4 0 0 0 0 0 8 15K 11K 16 944 9 4.73 4.67 4.69
20130115 02:09:15 63 0 5 0 0 0 0 30 8 15K 11K 9 928 2 4.73 4.67 4.69
20130115 02:09:16 13 0 6 0 0 0 0 78 8 13K 13K 97 953 2 4.73 4.67 4.69
20130115 02:09:17 4 0 4 0 0 0 0 90 8 12K 12K 16 952 2 4.73 4.67 4.69
20130115 02:09:18 88 0 8 0 0 0 0 2 8 15K 13K 16 944 10 4.73 4.67 4.69
20130115 02:09:19 86 0 4 0 0 0 0 8 8 14K 11K 1 942 9 5.00 4.73 4.71
20130115 02:09:20 39 0 6 0 0 0 0 53 8 16K 12K 31 928 3 5.00 4.73 4.71
20130115 02:09:21 9 0 5 0 0 0 0 84 8 14K 13K 74 953 4 5.00 4.73 4.71
20130115 02:09:22 23 0 9 0 0 0 0 65 8 16K 12K 29 946 12 5.00 4.73 4.71
20130115 02:09:23 94 0 5 0 0 0 0 0 8 14K 12K 4 942 10 5.00 4.73 4.71
20130115 02:09:24 94 0 4 0 0 0 0 0 8 14K 11K 15 939 8 5.40 4.81 4.74
20130115 02:09:25 13 0 4 0 0 0 0 80 8 14K 11K 15 928 2 5.40 4.81 4.74
20130115 02:09:26 7 0 6 0 0 0 0 85 8 13K 12K 91 957 2 5.40 4.81 4.74
20130115 02:09:27 44 0 8 0 0 0 0 45 8 13K 12K 14 944 10 5.40 4.81 4.74
$ while :; do ./runit.sh 0 8; done
Tm 25
Tm 3
Tm 2
Tm 3
Tm 3
Tm 3
Tm 2
Tm 2
Tm 2
Tm 3
Tm 3
Tm 3
Tm 3
Tm 3
Tm 3
Tm 2
Tm 2
Tm 3
Tm 3
Tm 3
Tm 2
Tm 2
Tm 3
Tm 2
Tm 2
SQL>
TM Load Profile Per Second Per Transaction
----------------- -------------------- -------------- ---------------
01/15/13 02:09:54 DB Time 4.2
01/15/13 02:09:54 DB CPU 3.9
01/15/13 02:09:54 Redo size 315,958.7 66,131.1
01/15/13 02:09:54 Logical reads 2,239,843.3 468,806.2
01/15/13 02:09:54 Block changes 2,274.9 476.1
01/15/13 02:09:54 Physical reads .8 .2
01/15/13 02:09:54 Physical writes 38.6 8.1
01/15/13 02:09:54 User calls 102.3 21.4
01/15/13 02:09:54 Parses 155.8 32.6
01/15/13 02:09:54 Hard Parses 2.5 .5
01/15/13 02:09:54 Logons 3.5 .7
01/15/13 02:09:54 Executes 8,867.4 1,856.0
01/15/13 02:09:54 Rollbacks .0
01/15/13 02:09:54 Transactions 4.8
01/15/13 02:09:54 Applied urec .1 .0
15 rows selected.
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------
01/15/13 02:10:07, 506504, 130397301, 206.67, 247.42, 40.75, 0, 0, 0, 257.45, 527022.9
01/15/13 02:10:07, 506552, 130407950, 206.78, 241.11, 34.33, 0, 0, 0, 257.44, 540865.92
01/15/13 02:10:07, 506510, 130401159, 207.12, 246.15, 39.04, 0, 0, 0, 257.45, 529758.64
01/15/13 02:10:07, 506545, 130409463, 207.07, 245.4, 38.34, 0, 0, 0, 257.45, 531409.97
01/15/13 02:10:07, 506013, 130268217, 207.5, 245.66, 38.16, 0, 0, 0, 257.44, 530282.39
01/15/13 02:10:07, 506586, 130422752, 206.61, 247.22, 40.61, 0, 0, 0, 257.45, 527563.2
01/15/13 02:10:07, 506555, 130414989, 207.4, 244.94, 37.54, 0, 0, 0, 257.45, 532443.66
01/15/13 02:10:07, 506514, 130404637, 206.9, 245.76, 38.86, 0, 0, 0, 257.46, 530614.99
8 rows selected.
SQL>
TM INST SID PROG USERNAME SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME AVG_LIOS SQL_TEXT OSUSER MACHINE
----------------- ----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ---------- ----------------------------------------- ------------------------------ -------------------------
01/15/13 02:09:57 1 9 sqlplus@de SYS c80qj9qbfrrgy 0 79376787 198 .002209 54.87 select to_char(sysdate,'MM/DD/YY HH24:MI: oracle desktopserver.local
01/15/13 02:09:57 1 961 sqlplus@de USER1 68vu5q46nu22s 5 198477492 503,777 .000488 527036.67 SELECT COUNT(C2) FROM CF1 WHERE CUSTID > oracle desktopserver.local
01/15/13 02:09:57 1 390 sqlplus@de USER2 68vu5q46nu22s 0 198477492 502,975 .000489 526455.74 oracle desktopserver.local
01/15/13 02:09:57 1 965 sqlplus@de USER3 68vu5q46nu22s 4 198477492 503,062 .000486 529721.44 oracle desktopserver.local
01/15/13 02:09:57 1 7 sqlplus@de USER4 68vu5q46nu22s 1 198477492 504,016 .000476 540416.35 oracle desktopserver.local
01/15/13 02:09:57 1 580 sqlplus@de USER5 68vu5q46nu22s 3 198477492 503,991 .000485 530955.34 oracle desktopserver.local
01/15/13 02:09:57 1 584 sqlplus@de USER6 68vu5q46nu22s 7 198477492 503,906 .000486 530199.85 oracle desktopserver.local
01/15/13 02:09:57 1 1344 sqlplus@de USER7 68vu5q46nu22s 6 198477492 504,031 .000484 532160.52 oracle desktopserver.local
01/15/13 02:09:57 1 198 sqlplus@de USER8 68vu5q46nu22s 2 198477492 503,997 .000486 529324.54 oracle desktopserver.local
9 rows selected.
SQL>
TM CPU_TOTAL CPU_OS CPU_ORA CPU_ORA_WAIT COMMIT READIO WAIT
----------------- ---------- ---------- ---------- ------------ ---------- ---------- ----------
01/15/13 02:09:05 4.828 .458 4.039 .331 .001 .003 .039
TM SESSION_ID AAS
----------------- ---------- ----------
01/15/13 02:09:06 7 29
01/15/13 02:09:06 8 24
01/15/13 02:09:06 198 30
01/15/13 02:09:06 390 31
01/15/13 02:09:06 391 1
01/15/13 02:09:06 575 0
01/15/13 02:09:06 580 12
01/15/13 02:09:06 581 21
01/15/13 02:09:06 582 3
01/15/13 02:09:06 584 10
01/15/13 02:09:06 961 33
01/15/13 02:09:06 962 0
01/15/13 02:09:06 964 35
01/15/13 02:09:06 1344 31
01/15/13 02:09:06 1347 2
----------
sum 262
15 rows selected.
8 sess turbostat
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
54.09 3.42 3.41 17.99 9.50 18.41 0.00 0.83 0.84 0.00 0.00
0 0 69.70 3.42 3.41 13.72 16.26 0.32 0.00 0.83 0.84 0.00 0.00
0 4 53.02 3.40 3.41 30.40 16.26 0.32 0.00 0.83 0.84 0.00 0.00
1 1 61.69 3.38 3.41 24.51 11.24 2.56 0.00 0.83 0.84 0.00 0.00
1 5 53.02 3.31 3.41 33.18 11.24 2.56 0.00 0.83 0.84 0.00 0.00
2 2 49.12 3.47 3.41 9.77 7.44 33.68 0.00 0.83 0.84 0.00 0.00
2 6 48.57 3.45 3.41 10.32 7.44 33.68 0.00 0.83 0.84 0.00 0.00
3 3 49.48 3.46 3.41 10.37 3.06 37.08 0.00 0.83 0.84 0.00 0.00
3 7 48.17 3.45 3.41 11.68 3.06 37.08 0.00 0.83 0.84 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
53.76 3.40 3.41 22.23 6.15 17.86 0.00 0.22 0.19 0.00 0.00
0 0 70.44 3.35 3.41 13.51 6.72 9.34 0.00 0.22 0.19 0.00 0.00
0 4 54.36 3.31 3.41 29.59 6.72 9.34 0.00 0.22 0.19 0.00 0.00
1 1 50.70 3.42 3.41 29.44 11.19 8.66 0.00 0.22 0.19 0.00 0.00
1 5 52.09 3.41 3.41 28.06 11.19 8.66 0.00 0.22 0.19 0.00 0.00
2 2 47.38 3.45 3.41 18.19 2.88 31.55 0.00 0.22 0.19 0.00 0.00
2 6 52.85 3.46 3.41 12.72 2.88 31.55 0.00 0.22 0.19 0.00 0.00
3 3 49.46 3.42 3.41 24.84 3.80 21.90 0.00 0.22 0.19 0.00 0.00
3 7 52.85 3.38 3.41 21.45 3.80 21.90 0.00 0.22 0.19 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
52.99 3.43 3.41 18.07 5.31 23.63 0.00 1.09 0.83 0.17 0.00
0 0 73.69 3.38 3.41 21.08 4.10 1.13 0.00 1.09 0.83 0.17 0.00
0 4 53.01 3.30 3.41 41.76 4.10 1.13 0.00 1.09 0.83 0.17 0.00
1 1 49.81 3.45 3.41 17.56 8.88 23.75 0.00 1.09 0.83 0.17 0.00
1 5 47.88 3.45 3.41 19.48 8.88 23.75 0.00 1.09 0.83 0.17 0.00
2 2 48.66 3.46 3.41 11.15 4.24 35.95 0.00 1.09 0.83 0.17 0.00
2 6 52.63 3.50 3.41 7.17 4.24 35.95 0.00 1.09 0.83 0.17 0.00
3 3 50.39 3.46 3.41 11.91 4.00 33.70 0.00 1.09 0.83 0.17 0.00
3 7 47.84 3.46 3.41 14.46 4.00 33.70 0.00 1.09 0.83 0.17 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
53.32 3.42 3.41 23.65 6.19 16.84 0.00 0.13 0.08 0.00 0.00
0 0 68.05 3.40 3.41 12.30 5.12 14.53 0.00 0.13 0.08 0.00 0.00
0 4 52.62 3.40 3.41 27.73 5.12 14.53 0.00 0.13 0.08 0.00 0.00
1 1 50.98 3.40 3.41 20.62 11.42 16.98 0.00 0.13 0.08 0.00 0.00
1 5 51.91 3.41 3.41 19.70 11.42 16.98 0.00 0.13 0.08 0.00 0.00
2 2 55.28 3.50 3.41 8.23 3.77 32.72 0.00 0.13 0.08 0.00 0.00
2 6 49.51 3.45 3.41 14.01 3.77 32.72 0.00 0.13 0.08 0.00 0.00
3 3 46.70 3.45 3.41 45.71 4.45 3.14 0.00 0.13 0.08 0.00 0.00
3 7 51.51 3.33 3.41 40.89 4.45 3.14 0.00 0.13 0.08 0.00 0.00
}}}
! after edit of runit.sh, 4 sessions
{{{
# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
#Date Time User Nice Sys Wait IRQ Soft Steal Idle CPUs Intr Ctxsw Proc RunQ Run Avg1 Avg5 Avg15
20130115 03:05:41 43 0 6 0 0 0 0 49 8 13K 11K 19 930 4 1.74 1.07 0.93
20130115 03:05:42 34 0 4 0 0 0 0 59 8 12K 12K 15 924 1 1.74 1.07 0.93
20130115 03:05:43 9 0 6 0 0 0 0 82 8 13K 15K 58 935 0 1.74 1.07 0.93
20130115 03:05:44 5 0 5 0 0 0 0 87 8 11K 11K 9 928 4 1.68 1.07 0.93
20130115 03:05:45 49 0 3 0 0 0 0 46 8 15K 11K 8 928 5 1.68 1.07 0.93
20130115 03:05:46 26 0 6 0 0 0 0 66 8 14K 11K 14 922 1 1.68 1.07 0.93
20130115 03:05:47 6 0 7 0 0 0 0 85 8 11K 13K 51 933 2 1.68 1.07 0.93
20130115 03:05:48 21 0 5 0 0 0 0 72 8 14K 13K 9 929 4 1.68 1.07 0.93
20130115 03:05:49 49 0 6 0 0 0 0 43 8 12K 11K 8 929 4 1.87 1.11 0.95
20130115 03:05:50 15 0 6 0 0 0 0 77 8 12K 12K 32 928 7 1.87 1.11 0.95
TM Load Profile Per Second Per Transaction
----------------- -------------------- -------------- ---------------
01/15/13 03:05:48 DB Time 1.9
01/15/13 03:05:48 DB CPU 1.9
01/15/13 03:05:48 Redo size 412,815.6 65,429.6
01/15/13 03:05:48 Logical reads 1,419,919.2 225,051.6
01/15/13 03:05:48 Block changes 2,993.1 474.4
01/15/13 03:05:48 Physical reads 1.8 .3
01/15/13 03:05:48 Physical writes .0 .0
01/15/13 03:05:48 User calls 84.3 13.4
01/15/13 03:05:48 Parses 180.8 28.7
01/15/13 03:05:48 Hard Parses 2.4 .4
01/15/13 03:05:48 Logons 2.7 .4
01/15/13 03:05:48 Executes 5,716.5 906.0
01/15/13 03:05:48 Rollbacks .0
01/15/13 03:05:48 Transactions 6.3
01/15/13 03:05:48 Applied urec .0 .0
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP, C
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------,---
01/15/13 03:05:26, 284368, 73208282, 79.55, 88.94, 9.39, 0, 0, 0, 257.44, 823105.36, 0
01/15/13 03:05:26, 284349, 73202703, 79.72, 89.24, 9.52, 0, 0, 0, 257.44, 820315.49, 1
01/15/13 03:05:26, 284130, 73150319, 80.39, 91.84, 11.45, 0, 0, 0, 257.45, 796477.85, 2
01/15/13 03:05:26, 284027, 73121403, 79.99, 91.16, 11.17, 0, 0, 0, 257.45, 802127.88, 3
SQL>
TM CPU_TOTAL CPU_OS CPU_ORA CPU_ORA_WAIT COMMIT READIO WAIT
----------------- ---------- ---------- ---------- ------------ ---------- ---------- ----------
01/15/13 03:06:01 2.44 .54 1.9 0 .002 .007 .032
TM SESSION_ID AAS
----------------- ---------- ----------
01/15/13 03:06:01 6 17
01/15/13 03:06:01 390 3
01/15/13 03:06:01 575 0
01/15/13 03:06:01 576 9
01/15/13 03:06:01 581 17
01/15/13 03:06:01 958 6
01/15/13 03:06:01 1152 19
01/15/13 03:06:01 1344 19
----------
sum 90
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
27.32 3.23 3.41 37.20 7.52 27.96 0.00 1.83 1.61 0.13 0.00
0 0 22.45 2.89 3.41 61.07 15.21 1.26 0.00 1.83 1.61 0.13 0.00
0 4 52.29 3.29 3.41 31.23 15.21 1.26 0.00 1.83 1.61 0.13 0.00
1 1 16.41 3.19 3.41 39.79 4.92 38.89 0.00 1.83 1.61 0.13 0.00
1 5 31.48 3.35 3.41 24.72 4.92 38.89 0.00 1.83 1.61 0.13 0.00
2 2 14.84 3.01 3.41 51.80 8.20 25.16 0.00 1.83 1.61 0.13 0.00
2 6 37.12 3.25 3.41 29.52 8.20 25.16 0.00 1.83 1.61 0.13 0.00
3 3 32.66 3.38 3.41 19.05 1.75 46.54 0.00 1.83 1.61 0.13 0.00
3 7 11.28 3.12 3.41 40.44 1.75 46.54 0.00 1.83 1.61 0.13 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
34.97 3.36 3.41 39.25 5.72 20.06 0.00 1.48 1.67 0.38 0.00
0 0 25.62 3.10 3.41 66.06 6.89 1.43 0.00 1.48 1.67 0.38 0.00
0 4 59.90 3.34 3.41 31.78 6.89 1.43 0.00 1.48 1.67 0.38 0.00
1 1 28.01 3.28 3.41 34.40 3.23 34.36 0.00 1.48 1.67 0.38 0.00
1 5 31.75 3.42 3.41 30.66 3.23 34.36 0.00 1.48 1.67 0.38 0.00
2 2 58.42 3.44 3.41 15.34 11.40 14.83 0.00 1.48 1.67 0.38 0.00
2 6 14.24 3.29 3.41 59.52 11.40 14.83 0.00 1.48 1.67 0.38 0.00
3 3 6.73 3.02 3.41 62.29 1.37 29.60 0.00 1.48 1.67 0.38 0.00
3 7 55.08 3.46 3.41 13.94 1.37 29.60 0.00 1.48 1.67 0.38 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
25.81 3.19 3.41 40.64 10.93 22.63 0.00 1.00 0.79 0.06 0.00
0 0 20.76 2.77 3.41 63.29 14.39 1.56 0.00 1.00 0.79 0.06 0.00
0 4 54.52 3.31 3.41 29.53 14.39 1.56 0.00 1.00 0.79 0.06 0.00
1 1 36.33 3.24 3.41 18.66 10.59 34.41 0.00 1.00 0.79 0.06 0.00
1 5 5.06 2.60 3.41 49.94 10.59 34.41 0.00 1.00 0.79 0.06 0.00
2 2 11.07 3.07 3.41 47.51 11.92 29.51 0.00 1.00 0.79 0.06 0.00
2 6 35.52 3.34 3.41 23.06 11.92 29.51 0.00 1.00 0.79 0.06 0.00
3 3 34.11 3.35 3.41 34.04 6.81 25.04 0.00 1.00 0.79 0.06 0.00
3 7 9.08 2.59 3.41 59.07 6.81 25.04 0.00 1.00 0.79 0.06 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
30.25 3.28 3.41 40.69 10.53 18.53 0.00 0.72 0.66 0.00 0.00
0 0 48.04 3.26 3.41 29.96 21.70 0.29 0.00 0.72 0.66 0.00 0.00
0 4 30.54 3.32 3.41 47.46 21.70 0.29 0.00 0.72 0.66 0.00 0.00
1 1 34.64 3.29 3.41 34.63 7.21 23.52 0.00 0.72 0.66 0.00 0.00
1 5 17.85 3.12 3.41 51.42 7.21 23.52 0.00 0.72 0.66 0.00 0.00
2 2 41.74 3.45 3.41 7.67 4.90 45.70 0.00 0.72 0.66 0.00 0.00
2 6 5.04 3.15 3.41 44.36 4.89 45.70 0.00 0.72 0.66 0.00 0.00
3 3 40.24 3.42 3.41 46.86 8.30 4.61 0.00 0.72 0.66 0.00 0.00
3 7 23.90 2.89 3.41 63.20 8.30 4.61 0.00 0.72 0.66 0.00 0.00
}}}
! after edit of runit.sh, 16 sessions - bind on cpu1-4
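The binding here was done with numactl (the full command also appears at the end of this capture); a minimal sketch of the invocation:
{{{
# restrict the driver and all child sqlplus sessions to logical CPUs 1-4,
# i.e. one thread on each of the four cores (CPU 4 is core 0's HT sibling);
# the unbound sibling threads show up as mostly idle in the turbostat tables below
while :; do numactl --physcpubind=1-4 ./runit.sh 0 16; done
}}}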
{{{
# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
#Date Time User Nice Sys Wait IRQ Soft Steal Idle CPUs Intr Ctxsw Proc RunQ Run Avg1 Avg5 Avg15
20130115 21:30:40 49 0 5 0 0 0 0 44 8 13K 11K 8 942 18 11.53 7.00 2.99
20130115 21:30:41 52 0 6 0 0 0 0 41 8 13K 15K 25 942 17 11.53 7.00 2.99
20130115 21:30:42 49 0 3 0 0 0 0 46 8 11K 12K 0 940 17 11.53 7.00 2.99
20130115 21:30:43 33 0 7 0 0 0 0 57 8 15K 14K 24 912 4 11.73 7.11 3.05
20130115 21:30:44 13 0 8 0 0 0 0 77 8 15K 15K 141 960 2 11.73 7.11 3.05
20130115 21:30:45 7 0 7 2 0 0 0 82 8 15K 12K 32 943 19 11.73 7.11 3.05
20130115 21:30:46 46 0 8 0 0 0 0 44 8 11K 11K 8 942 20 11.73 7.11 3.05
20130115 21:30:47 50 0 5 0 0 0 0 43 8 11K 12K 16 942 18 11.73 7.11 3.05
20130115 21:30:48 49 0 4 0 0 0 0 46 8 12K 13K 0 942 18 12.07 7.26 3.12
20130115 21:30:49 50 0 4 0 0 0 0 44 8 13K 11K 16 942 17 12.07 7.26 3.12
20130115 21:30:50 49 0 5 0 0 0 0 44 8 12K 11K 8 942 20 12.07 7.26 3.12
20130115 21:30:51 51 0 6 0 0 0 0 41 8 16K 11K 24 936 15 12.07 7.26 3.12
20130115 21:30:52 27 0 6 1 0 0 0 64 8 14K 13K 14 912 3 12.07 7.26 3.12
20130115 21:30:53 11 0 8 0 0 0 0 79 8 15K 15K 147 959 2 11.18 7.15 3.11
20130115 21:30:54 11 0 8 0 0 0 0 78 8 10K 11K 8 942 20 11.18 7.15 3.11
20130115 21:30:55 50 0 6 0 0 0 0 42 8 15K 11K 25 943 18 11.18 7.15 3.11
20130115 21:30:56 49 0 4 0 0 0 0 44 8 12K 10K 9 943 19 11.18 7.15 3.11
20130115 21:30:57 51 0 4 0 0 0 0 44 8 12K 12K 16 943 19 11.18 7.15 3.11
20130115 21:30:58 49 0 4 0 0 0 0 45 8 12K 13K 1 942 16 11.57 7.30 3.18
20130115 21:30:59 51 0 5 0 0 0 0 43 8 14K 11K 17 943 17 11.57 7.30 3.18
20130115 21:31:00 50 0 7 0 0 0 0 42 8 11K 17K 8 931 15 11.57 7.30 3.18
20130115 21:31:01 23 0 6 0 0 0 0 67 8 14K 12K 38 912 2 11.57 7.30 3.18
20130115 21:31:02 9 0 7 0 0 0 0 83 8 10K 13K 131 959 2 11.57 7.30 3.18
20130115 21:31:03 23 0 10 0 0 0 0 65 8 15K 14K 24 942 19 10.64 7.18 3.16
20130115 21:31:04 49 0 3 0 0 0 0 47 8 11K 11K 0 942 18 10.64 7.18 3.16
20130115 21:31:05 51 0 6 0 0 0 0 41 8 13K 11K 24 941 16 10.64 7.18 3.16
20130115 21:31:06 51 0 8 0 0 0 0 40 8 12K 16K 19 945 20 10.64 7.18 3.16
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
42.16 3.37 3.41 43.12 4.58 10.13 0.00 0.25 0.16 0.00 0.00
0 0 14.94 3.00 3.41 75.43 7.18 2.45 0.00 0.25 0.16 0.00 0.00
0 4 67.55 3.41 3.41 22.81 7.18 2.45 0.00 0.25 0.16 0.00 0.00
1 1 73.88 3.40 3.41 19.33 6.20 0.60 0.00 0.25 0.16 0.00 0.00
1 5 16.43 3.19 3.41 76.78 6.20 0.60 0.00 0.25 0.16 0.00 0.00
2 2 74.86 3.41 3.41 8.49 1.62 15.02 0.00 0.25 0.16 0.00 0.00
2 6 9.74 3.05 3.41 73.62 1.62 15.02 0.00 0.25 0.16 0.00 0.00
3 3 65.86 3.46 3.41 8.34 3.33 22.46 0.00 0.25 0.16 0.00 0.00
3 7 14.05 3.24 3.41 60.16 3.33 22.46 0.00 0.25 0.16 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
54.30 3.48 3.41 44.37 0.72 0.61 0.00 0.00 0.00 0.00 0.00
0 0 12.32 3.33 3.41 86.83 0.49 0.37 0.00 0.00 0.00 0.00 0.00
0 4 93.89 3.50 3.41 5.25 0.49 0.37 0.00 0.00 0.00 0.00 0.00
1 1 94.16 3.49 3.41 3.96 1.76 0.12 0.00 0.00 0.00 0.00 0.00
1 5 21.06 3.48 3.41 77.06 1.76 0.12 0.00 0.00 0.00 0.00 0.00
2 2 97.91 3.47 3.41 0.95 0.20 0.94 0.00 0.00 0.00 0.00 0.00
2 6 7.66 3.44 3.41 91.20 0.20 0.94 0.00 0.00 0.00 0.00 0.00
3 3 93.78 3.50 3.41 4.78 0.42 1.02 0.00 0.00 0.00 0.00 0.00
3 7 13.66 3.38 3.41 84.90 0.42 1.02 0.00 0.00 0.00 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
39.33 3.39 3.41 42.69 6.92 11.06 0.00 0.49 0.42 0.15 0.00
0 0 14.82 3.23 3.41 63.88 10.70 10.59 0.00 0.49 0.42 0.15 0.00
0 4 60.61 3.46 3.41 18.10 10.70 10.59 0.00 0.49 0.42 0.15 0.00
1 1 72.97 3.39 3.41 13.38 12.23 1.41 0.00 0.49 0.42 0.15 0.00
1 5 10.42 3.20 3.41 75.94 12.23 1.41 0.00 0.49 0.42 0.15 0.00
2 2 72.15 3.37 3.41 18.86 1.76 7.22 0.00 0.49 0.42 0.15 0.00
2 6 11.19 3.24 3.41 79.82 1.76 7.22 0.00 0.49 0.42 0.15 0.00
3 3 59.21 3.47 3.41 12.79 2.97 25.03 0.00 0.49 0.42 0.15 0.00
3 7 13.22 3.33 3.41 58.78 2.97 25.03 0.00 0.49 0.42 0.15 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
58.02 3.51 3.41 41.98 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 0 21.20 3.51 3.41 78.80 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 4 100.00 3.51 3.41 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 1 100.00 3.51 3.41 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 5 16.53 3.51 3.41 83.47 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 2 100.00 3.51 3.41 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 6 13.76 3.51 3.41 86.24 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 3 100.00 3.51 3.41 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 7 12.69 3.51 3.41 87.31 0.00 0.00 0.00 0.00 0.00 0.00 0.00
^C
$ while :; do numactl --physcpubind=1-4 ./runit.sh 0 16; done
Tm 40
Tm 7
Tm 7
Tm 7
Tm 7
Tm 7
Tm 7
Tm 7
Tm 7
Tm 7
Tm 6
Tm 7
Tm 7
Tm 7
Tm 6
Tm 6
Tm 6
Tm 6
Tm 7
Tm 6
Tm 6
Tm 6
Tm 6
Tm 6
Tm 6
Tm 7
Tm 7
Tm 7
Tm 6
TM Load Profile Per Second Per Transaction
----------------- -------------------- -------------- ---------------
01/15/13 21:31:12 DB Time 11.5
01/15/13 21:31:12 DB CPU 3.1
01/15/13 21:31:12 Redo size 179,258.5 66,893.5
01/15/13 21:31:12 Logical reads 2,332,403.6 870,377.7
01/15/13 21:31:12 Block changes 1,373.0 512.3
01/15/13 21:31:12 Physical reads 2.6 1.0
01/15/13 21:31:12 Physical writes .0 .0
01/15/13 21:31:12 User calls 101.1 37.7
01/15/13 21:31:12 Parses 109.7 41.0
01/15/13 21:31:12 Hard Parses 2.3 .9
01/15/13 21:31:12 Logons 3.6 1.3
01/15/13 21:31:12 Executes 9,171.2 3,422.4
01/15/13 21:31:12 Rollbacks .0
01/15/13 21:31:12 Transactions 2.7
01/15/13 21:31:12 Applied urec .0 .0
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP, C
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------,---
01/15/13 21:31:14, 167354, 43085477, 52.62, 229.38, 176.76, 0, 0, 0, 257.45, 187837.13, 0
01/15/13 21:31:14, 167763, 43190218, 52.46, 235.86, 183.4, 0, 0, 0, 257.45, 183121.91, 1
01/15/13 21:31:14, 167972, 43242791, 52.65, 233.55, 180.9, 0, 0, 0, 257.44, 185155.16, 2
01/15/13 21:31:14, 167273, 43064985, 52.52, 235.5, 182.98, 0, 0, 0, 257.45, 182864.5, 3
01/15/13 21:31:14, 168035, 43259554, 53, 229.71, 176.71, 0, 0, 0, 257.44, 188325.73, 4
01/15/13 21:31:14, 166672, 42908921, 52.65, 236.36, 183.71, 0, 0, 0, 257.45, 181544.23, 5
01/15/13 21:31:14, 167668, 43165434, 53.51, 230.74, 177.23, 0, 0, 0, 257.45, 187075.65, 6
01/15/13 21:31:14, 167297, 43069303, 52.62, 233.14, 180.52, 0, 0, 0, 257.44, 184732.02, 7
01/15/13 21:31:14, 167467, 43115023, 52.43, 234.34, 181.91, 0, 0, 0, 257.45, 183984.47, 8
01/15/13 21:31:14, 167665, 43165635, 53.44, 239.89, 186.44, 0, 0, 0, 257.45, 179940.83, 9
01/15/13 21:31:14, 167301, 43069661, 52.72, 233.87, 181.15, 0, 0, 0, 257.44, 184164.22, 10
01/15/13 21:31:14, 167693, 43171995, 52.02, 237.08, 185.06, 0, 0, 0, 257.45, 182096.47, 11
01/15/13 21:31:14, 167855, 43214004, 52.8, 230.09, 177.29, 0, 0, 0, 257.45, 187815.77, 12
01/15/13 21:31:14, 167868, 43218293, 52.74, 233.37, 180.64, 0, 0, 0, 257.45, 185188.96, 13
01/15/13 21:31:14, 167558, 43135664, 52.33, 229.11, 176.78, 0, 0, 0, 257.44, 188275.36, 14
01/15/13 21:31:14, 167299, 43072081, 53.06, 236.49, 183.43, 0, 0, 0, 257.46, 182129.85, 15
16 rows selected.
TM INST SID PROG USERNAME SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME AVG_LIOS SQL_TEXT OSUSER MACHINE
----------------- ----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ---------- ----------------------------------------- ------------------------------ -------------------------
01/15/13 21:31:16 1 576 sqlplus@de SYS c80qj9qbfrrgy 0 79376787 101 .002925 74.46 select to_char(sysdate,'MM/DD/YY HH24:MI: oracle desktopserver.local
01/15/13 21:31:16 1 388 sqlplus@de USER1 68vu5q46nu22s 9 198477492 169,127 .001432 179829.76 SELECT COUNT(C2) FROM CF1 WHERE CUSTID > oracle desktopserver.local
01/15/13 21:31:16 1 200 sqlplus@de USER11 68vu5q46nu22s 12 198477492 169,765 .001368 188137.06 oracle desktopserver.local
01/15/13 21:31:16 1 1346 sqlplus@de USER12 68vu5q46nu22s 13 198477492 169,758 .001387 185612.85 oracle desktopserver.local
01/15/13 21:31:16 1 771 sqlplus@de USER13 68vu5q46nu22s 7 198477492 168,937 .001393 184827.86 oracle desktopserver.local
01/15/13 21:31:16 1 583 sqlplus@de USER14 68vu5q46nu22s 11 198477492 169,221 .001414 182076.42 oracle desktopserver.local
01/15/13 21:31:16 1 198 sqlplus@de USER15 68vu5q46nu22s 0 198477492 168,901 .001371 187769.57 oracle desktopserver.local
01/15/13 21:31:16 1 581 sqlplus@de USER16 68vu5q46nu22s 15 198477492 169,193 .001411 182479.29 oracle desktopserver.local
01/15/13 21:31:16 1 1154 sqlplus@de USER2 68vu5q46nu22s 5 198477492 168,407 .001416 181786.56 oracle desktopserver.local
01/15/13 21:31:16 1 1347 sqlplus@de USER3 68vu5q46nu22s 10 198477492 168,995 .001397 184299.93 oracle desktopserver.local
01/15/13 21:31:16 1 7 sqlplus@de USER6 68vu5q46nu22s 3 198477492 168,896 .001407 182938.74 oracle desktopserver.local
01/15/13 21:31:16 1 772 sqlplus@de USER7 68vu5q46nu22s 8 198477492 168,961 .001400 183951.76 oracle desktopserver.local
01/15/13 21:31:16 1 389 sqlplus@de USER8 68vu5q46nu22s 1 198477492 169,269 .001406 183044.82 oracle desktopserver.local
01/15/13 21:31:16 1 1153 sqlplus@de USER9 68vu5q46nu22s 14 198477492 169,474 .001365 188578.76 oracle desktopserver.local
14 rows selected.
SQL>
TM CPU_TOTAL CPU_OS CPU_ORA CPU_ORA_WAIT COMMIT READIO WAIT
----------------- ---------- ---------- ---------- ------------ ---------- ---------- ----------
01/15/13 21:31:15 12.406 .606 3.089 8.711 .002 .002 .229
TM SESSION_ID AAS
----------------- ---------- ----------
01/15/13 21:31:15 6 42
01/15/13 21:31:15 7 6
01/15/13 21:31:15 198 46
01/15/13 21:31:15 200 43
01/15/13 21:31:15 388 18
01/15/13 21:31:15 389 45
01/15/13 21:31:15 390 26
01/15/13 21:31:15 576 17
01/15/13 21:31:15 579 28
01/15/13 21:31:15 581 47
01/15/13 21:31:15 583 44
01/15/13 21:31:15 771 44
01/15/13 21:31:15 772 18
01/15/13 21:31:15 773 27
01/15/13 21:31:15 961 44
01/15/13 21:31:15 962 41
01/15/13 21:31:15 1153 29
01/15/13 21:31:15 1154 44
01/15/13 21:31:15 1156 14
01/15/13 21:31:15 1346 43
01/15/13 21:31:15 1347 42
----------
sum 708
21 rows selected.
}}}
! the most sustained run yet, after removing the sqlplus AWR commands.. 16 sess
{{{
# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
#Date Time User Nice Sys Wait IRQ Soft Steal Idle CPUs Intr Ctxsw Proc RunQ Run Avg1 Avg5 Avg15
20130115 21:54:22 93 0 6 0 0 0 0 0 8 15K 10K 31 943 16 15.97 12.05 6.92
20130115 21:54:23 93 0 5 0 0 0 0 0 8 16K 10K 0 937 14 16.14 12.15 6.98
20130115 21:54:24 43 0 13 0 0 0 0 41 8 13K 12K 155 960 2 16.14 12.15 6.98
20130115 21:54:25 32 0 9 0 0 0 0 56 8 12K 10K 1 943 19 16.14 12.15 6.98
20130115 21:54:26 92 0 6 0 0 0 0 0 8 17K 9351 18 943 18 16.14 12.15 6.98
20130115 21:54:27 94 0 5 0 0 0 0 0 8 15K 9341 8 943 18 16.14 12.15 6.98
20130115 21:54:28 94 0 5 0 0 0 0 0 8 15K 12K 16 943 16 16.37 12.26 7.04
20130115 21:54:29 92 0 6 0 0 0 0 0 8 16K 11K 8 941 16 16.37 12.26 7.04
20130115 21:54:30 51 0 13 0 0 0 0 34 8 14K 12K 147 960 2 16.37 12.26 7.04
20130115 21:54:31 20 0 10 0 0 0 0 68 8 11K 10K 1 943 17 16.37 12.26 7.04
20130115 21:54:32 93 0 6 0 0 0 0 0 8 18K 11K 24 943 17 16.37 12.26 7.04
20130115 21:54:33 94 0 5 0 0 0 0 0 8 16K 12K 0 943 16 16.58 12.37 7.10
20130115 21:54:34 93 0 6 0 0 0 0 0 8 15K 10K 24 943 17 16.58 12.37 7.10
20130115 21:54:35 94 0 4 0 0 0 0 0 8 14K 10K 1 942 16 16.58 12.37 7.10
20130115 21:54:36 66 0 12 0 0 0 0 20 8 19K 13K 147 961 0 16.58 12.37 7.10
20130115 21:54:37 16 0 11 0 0 0 0 70 8 15K 11K 9 944 19 16.58 12.37 7.10
20130115 21:54:38 94 0 5 0 0 0 0 0 8 14K 12K 16 943 17 16.61 12.45 7.16
20130115 21:54:39 93 0 5 0 0 0 0 0 8 15K 10K 13 945 19 16.61 12.45 7.16
20130115 21:54:40 93 0 6 0 0 0 0 0 8 16K 11K 28 945 19 16.61 12.45 7.16
20130115 21:54:41 94 0 4 0 0 0 0 0 8 14K 10K 1 943 16 16.61 12.45 7.16
20130115 21:54:42 71 0 12 0 0 0 0 15 8 16K 13K 156 961 1 16.61 12.45 7.16
20130115 21:54:43 12 0 9 0 0 0 0 78 8 13K 14K 1 943 16 15.28 12.24 7.12
20130115 21:54:44 90 0 8 0 0 0 0 0 8 22K 10K 25 946 17 15.28 12.24 7.12
20130115 21:54:45 94 0 4 0 0 0 0 0 8 15K 10K 3 943 16 15.28 12.24 7.12
20130115 21:54:46 93 0 5 0 0 0 0 0 8 15K 10K 16 943 16 15.28 12.24 7.12
20130115 21:54:47 94 0 5 0 0 0 0 0 8 15K 10K 8 943 20 15.28 12.24 7.12
20130115 21:54:48 78 0 11 0 0 0 0 9 8 16K 14K 143 958 4 15.10 12.26 7.15
20130115 21:54:49 7 0 9 0 0 0 0 81 8 14K 11K 13 943 16 15.10 12.26 7.15
20130115 21:54:50 91 0 8 0 0 0 0 0 8 14K 10K 16 945 20 15.10 12.26 7.15
20130115 21:54:51 93 0 5 0 0 0 0 0 8 14K 10K 0 943 16 15.10 12.26 7.15
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
95.26 3.50 3.41 2.14 1.69 0.91 0.00 0.08 0.07 0.00 0.00
0 0 95.41 3.50 3.41 2.25 2.12 0.22 0.00 0.08 0.07 0.00 0.00
0 4 95.75 3.50 3.41 1.91 2.12 0.22 0.00 0.08 0.07 0.00 0.00
1 1 95.37 3.50 3.41 0.77 0.61 3.25 0.00 0.08 0.07 0.00 0.00
1 5 95.17 3.51 3.41 0.97 0.61 3.25 0.00 0.08 0.07 0.00 0.00
2 2 94.87 3.51 3.41 1.32 3.81 0.00 0.00 0.08 0.07 0.00 0.00
2 6 94.21 3.51 3.41 1.98 3.81 0.00 0.00 0.08 0.07 0.00 0.00
3 3 95.21 3.51 3.41 4.41 0.23 0.15 0.00 0.08 0.07 0.00 0.00
3 7 96.13 3.49 3.41 3.49 0.23 0.15 0.00 0.08 0.07 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
81.62 3.48 3.41 8.31 3.88 6.19 0.00 0.67 0.51 0.00 0.00
0 0 82.46 3.48 3.41 9.77 7.28 0.49 0.00 0.67 0.51 0.00 0.00
0 4 80.44 3.49 3.41 11.79 7.28 0.49 0.00 0.67 0.51 0.00 0.00
1 1 81.61 3.50 3.41 4.97 2.73 10.68 0.00 0.67 0.51 0.00 0.00
1 5 80.50 3.48 3.41 6.09 2.73 10.68 0.00 0.67 0.51 0.00 0.00
2 2 82.98 3.43 3.41 12.56 3.67 0.79 0.00 0.67 0.51 0.00 0.00
2 6 82.25 3.47 3.41 13.29 3.67 0.79 0.00 0.67 0.51 0.00 0.00
3 3 80.32 3.50 3.41 5.03 1.83 12.82 0.00 0.67 0.51 0.00 0.00
3 7 82.37 3.51 3.41 2.98 1.83 12.82 0.00 0.67 0.51 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
81.40 3.48 3.41 8.24 2.17 8.18 0.00 0.10 0.06 0.05 0.00
0 0 80.91 3.46 3.41 15.55 2.93 0.62 0.00 0.10 0.06 0.05 0.00
0 4 85.74 3.44 3.41 10.71 2.93 0.62 0.00 0.10 0.06 0.05 0.00
1 1 80.58 3.48 3.41 8.30 4.12 7.00 0.00 0.10 0.06 0.05 0.00
1 5 83.82 3.48 3.41 5.06 4.12 7.00 0.00 0.10 0.06 0.05 0.00
2 2 78.13 3.50 3.41 13.50 0.64 7.73 0.00 0.10 0.06 0.05 0.00
2 6 83.36 3.49 3.41 8.27 0.64 7.73 0.00 0.10 0.06 0.05 0.00
3 3 80.08 3.50 3.41 1.55 1.02 17.36 0.00 0.10 0.06 0.05 0.00
3 7 78.61 3.50 3.41 3.02 1.02 17.36 0.00 0.10 0.06 0.05 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
81.62 3.48 3.41 6.18 3.15 9.04 0.00 0.89 0.68 0.22 0.00
0 0 82.38 3.49 3.41 3.68 5.40 8.54 0.00 0.89 0.68 0.22 0.00
0 4 79.81 3.49 3.41 6.25 5.40 8.54 0.00 0.89 0.68 0.22 0.00
1 1 82.66 3.49 3.41 3.00 4.56 9.78 0.00 0.89 0.68 0.22 0.00
1 5 80.85 3.49 3.41 4.80 4.56 9.78 0.00 0.89 0.68 0.22 0.00
2 2 85.24 3.43 3.41 11.13 2.21 1.43 0.00 0.89 0.68 0.22 0.00
2 6 80.37 3.49 3.41 15.99 2.21 1.43 0.00 0.89 0.68 0.22 0.00
3 3 79.80 3.50 3.41 3.35 0.43 16.42 0.00 0.89 0.68 0.22 0.00
3 7 81.90 3.51 3.41 1.25 0.43 16.42 0.00 0.89 0.68 0.22 0.00
^C
$ while :; do ./runit.sh 0 16; done
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
Tm 5
SQL>
TM Load Profile Per Second Per Transaction
----------------- -------------------- -------------- ---------------
01/15/13 21:54:58 DB Time 12.3
01/15/13 21:54:58 DB CPU 6.0
01/15/13 21:54:58 Redo size 386.0 23,192.0
01/15/13 21:54:58 Logical reads 3,378,988.7 203,009,640.0
01/15/13 21:54:58 Block changes 2.7 160.0
01/15/13 21:54:58 Physical reads .0 .0
01/15/13 21:54:58 Physical writes 2.7 160.0
01/15/13 21:54:58 User calls 111.9 6,720.0
01/15/13 21:54:58 Parses 56.7 3,404.0
01/15/13 21:54:58 Hard Parses 2.7 160.0
01/15/13 21:54:58 Logons 4.1 244.0
01/15/13 21:54:58 Executes 13,176.5 791,644.0
01/15/13 21:54:58 Rollbacks .0
01/15/13 21:54:58 Transactions .0
01/15/13 21:54:58 Applied urec .0 .0
15 rows selected.
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP, C
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------,---
01/15/13 21:54:53, 640033, 164778174, 255.99, 657.73, 401.74, 0, 0, 0, 257.45, 250526.51, 0
01/15/13 21:54:53, 640038, 164777971, 255.34, 655.71, 400.38, 0, 0, 0, 257.45, 251295.65, 1
01/15/13 21:54:53, 639439, 164618624, 255.13, 655.65, 400.52, 0, 0, 0, 257.44, 251078.74, 2
01/15/13 21:54:53, 640022, 164777037, 255.5, 648.56, 393.06, 0, 0, 0, 257.46, 254066.43, 3
01/15/13 21:54:53, 640032, 164773547, 255.73, 652.61, 396.88, 0, 0, 0, 257.45, 252482.21, 4
01/15/13 21:54:53, 639468, 164628357, 257.04, 661.6, 404.56, 0, 0, 0, 257.45, 248834.37, 5
01/15/13 21:54:53, 639696, 164688834, 256.01, 646.95, 390.94, 0, 0, 0, 257.45, 254560.69, 6
01/15/13 21:54:53, 640031, 164771482, 255.93, 652.45, 396.53, 0, 0, 0, 257.44, 252541.85, 7
01/15/13 21:54:53, 640025, 164777339, 255.68, 653.82, 398.14, 0, 0, 0, 257.45, 252021.45, 8
01/15/13 21:54:53, 639993, 164768876, 257.05, 661.02, 403.97, 0, 0, 0, 257.45, 249263.14, 9
01/15/13 21:54:53, 639654, 164672370, 255.97, 655.59, 399.62, 0, 0, 0, 257.44, 251182.91, 10
01/15/13 21:54:53, 639976, 164761629, 255.19, 656.79, 401.6, 0, 0, 0, 257.45, 250859.36, 11
01/15/13 21:54:53, 640033, 164776477, 255.51, 650.57, 395.06, 0, 0, 0, 257.45, 253280.56, 12
01/15/13 21:54:53, 640037, 164781219, 255.18, 645.95, 390.77, 0, 0, 0, 257.46, 255098.26, 13
01/15/13 21:54:53, 640043, 164772266, 255.86, 652.11, 396.25, 0, 0, 0, 257.44, 252675.57, 14
01/15/13 21:54:53, 639140, 164551155, 257.06, 657.35, 400.28, 0, 0, 0, 257.46, 250325.5, 15
16 rows selected.
SQL>
TM INST SID PROG USERNAME SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME AVG_LIOS SQL_TEXT OSUSER MACHINE
----------------- ----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ---------- ----------------------------------------- ------------------------------ -------------------------
01/15/13 21:54:39 1 197 sqlplus@de SYS c80qj9qbfrrgy 0 79376787 233 .002968 31.81 select to_char(sysdate,'MM/DD/YY HH24:MI: oracle desktopserver.local
01/15/13 21:54:39 1 773 sqlplus@de USER1 68vu5q46nu22s 9 198477492 627,964 .001035 248639.09 SELECT COUNT(C2) FROM CF1 WHERE CUSTID > oracle desktopserver.local
01/15/13 21:54:39 1 391 sqlplus@de USER10 68vu5q46nu22s 4 198477492 627,467 .001023 251721.71 oracle desktopserver.local
01/15/13 21:54:39 1 579 sqlplus@de USER11 68vu5q46nu22s 12 198477492 627,496 .001019 252604.55 oracle desktopserver.local
01/15/13 21:54:39 1 1152 sqlplus@de USER12 68vu5q46nu22s 13 198477492 628,034 .001012 254457.89 oracle desktopserver.local
01/15/13 21:54:39 1 1153 sqlplus@de USER13 68vu5q46nu22s 7 198477492 628,024 .001022 251923.48 oracle desktopserver.local
01/15/13 21:54:39 1 1347 sqlplus@de USER14 68vu5q46nu22s 11 198477492 628,048 .001029 250230.98 oracle desktopserver.local
01/15/13 21:54:39 1 200 sqlplus@de USER15 9xhqh6x4n4tgz 0 0 126 5.424413 235819.62 DECLARE x NUMBER := 0; v_r PLS_INTEGER; oracle desktopserver.local
01/15/13 21:54:39 1 581 sqlplus@de USER16 68vu5q46nu22s 15 198477492 627,724 .001030 249921.3 SELECT COUNT(C2) FROM CF1 WHERE CUSTID > oracle desktopserver.local
01/15/13 21:54:39 1 964 sqlplus@de USER2 68vu5q46nu22s 5 198477492 627,311 .001037 248243.53 oracle desktopserver.local
01/15/13 21:54:39 1 1345 sqlplus@de USER3 68vu5q46nu22s 10 198477492 627,693 .001027 250580.67 oracle desktopserver.local
01/15/13 21:54:39 1 580 sqlplus@de USER4 68vu5q46nu22s 2 198477492 628,030 .001027 250646.62 oracle desktopserver.local
01/15/13 21:54:39 1 389 sqlplus@de USER5 68vu5q46nu22s 6 198477492 627,448 .001014 253996.24 oracle desktopserver.local
01/15/13 21:54:39 1 199 sqlplus@de USER6 68vu5q46nu22s 3 198477492 628,493 .001017 253213.63 oracle desktopserver.local
01/15/13 21:54:39 1 962 sqlplus@de USER7 68vu5q46nu22s 8 198477492 627,995 .001024 251401.8 oracle desktopserver.local
01/15/13 21:54:39 1 1149 sqlplus@de USER8 68vu5q46nu22s 1 198477492 628,446 .001027 250725.76 oracle desktopserver.local
01/15/13 21:54:39 1 8 sqlplus@de USER9 68vu5q46nu22s 14 198477492 628,170 .001021 252100.82 oracle desktopserver.local
17 rows selected.
SQL>
TM CPU_TOTAL CPU_OS CPU_ORA CPU_ORA_WAIT COMMIT READIO WAIT
----------------- ---------- ---------- ---------- ------------ ---------- ---------- ----------
01/15/13 21:54:41 13.179 .729 5.983 6.467 0 0 .125
TM SESSION_ID AAS
----------------- ---------- ----------
01/15/13 21:54:41 8 41
01/15/13 21:54:41 197 44
01/15/13 21:54:41 199 45
01/15/13 21:54:41 200 5
01/15/13 21:54:41 389 15
01/15/13 21:54:41 390 37
01/15/13 21:54:41 391 47
01/15/13 21:54:41 579 45
01/15/13 21:54:41 580 44
01/15/13 21:54:41 581 48
01/15/13 21:54:41 773 44
01/15/13 21:54:41 962 46
01/15/13 21:54:41 964 49
01/15/13 21:54:41 1149 46
01/15/13 21:54:41 1152 47
01/15/13 21:54:41 1153 47
01/15/13 21:54:41 1345 49
01/15/13 21:54:41 1347 48
----------
sum 747
18 rows selected.
}}}
! after removing the sleep 1 command.. 16 sess
{{{
REAL 16 sess
23:27:09
ramp up
23:27:43
23:34:05
$ while :; do ./runit.sh 0 16;done
Tm 39
Tm 5
Tm 5
Tm 5
20130115 23:33:35 96 0 3 0 0 0 0 0 8 13K 7519 14 786 18 15.12 11.97 7.11
20130115 23:33:36 95 0 4 0 0 0 0 0 8 13K 8385 18 782 16 15.12 11.97 7.11
20130115 23:33:37 97 0 2 0 0 0 0 0 8 13K 7277 6 784 18 15.12 11.97 7.11
20130115 23:33:38 95 0 4 0 0 0 0 0 8 13K 7648 11 782 16 15.35 12.07 7.16
20130115 23:33:39 81 0 14 0 0 0 0 3 8 13K 9063 135 785 18 15.35 12.07 7.16
20130115 23:33:40 93 0 6 0 0 0 0 0 8 13K 7511 22 783 18 15.35 12.07 7.16
20130115 23:33:41 95 0 4 0 0 0 0 0 8 13K 8526 18 785 19 15.35 12.07 7.16
20130115 23:33:42 76 0 23 0 0 0 0 0 8 13K 11K 387 807 26 15.35 12.07 7.16
20130115 23:33:43 77 0 22 0 0 0 0 0 8 12K 10K 289 807 29 18.36 12.75 7.41
20130115 23:33:44 95 0 4 0 0 0 0 0 8 13K 7685 14 806 23 18.36 12.75 7.41
20130115 23:33:45 95 0 4 0 0 0 0 0 8 13K 7573 12 808 32 18.36 12.75 7.41
20130115 23:33:46 94 0 5 0 0 0 0 0 8 13K 8471 20 806 30 18.36 12.75 7.41
20130115 23:33:47 96 0 3 0 0 0 0 0 8 14K 7706 1 807 29 18.36 12.75 7.41
# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
#Date Time User Nice Sys Wait IRQ Soft Steal Idle CPUs Intr Ctxsw Proc RunQ Run Avg1 Avg5 Avg15
20130115 23:33:48 95 0 4 0 0 0 0 0 8 13K 7915 17 807 29 19.54 13.09 7.55
20130115 23:33:49 96 0 3 0 0 0 0 0 8 13K 7549 0 807 31 19.54 13.09 7.55
20130115 23:33:50 95 0 4 0 0 0 0 0 8 12K 7851 24 803 29 19.54 13.09 7.55
20130115 23:33:51 82 0 9 0 0 0 0 7 8 12K 9218 103 789 19 19.54 13.09 7.55
20130115 23:33:52 86 0 13 0 0 0 0 0 8 14K 8253 44 781 18 19.54 13.09 7.55
20130115 23:33:53 95 0 4 0 0 0 0 0 8 15K 7242 0 779 17 19.33 13.15 7.60
20130115 23:33:54 95 0 4 0 0 0 0 0 8 13K 7446 8 779 16 19.33 13.15 7.60
20130115 23:33:55 95 0 4 0 0 0 0 0 8 13K 7134 0 773 13 19.33 13.15 7.60
20130115 23:33:56 86 0 8 0 0 0 0 5 8 13K 8522 16 748 2 19.33 13.15 7.60
20130115 23:33:57 83 0 14 0 0 0 0 1 8 13K 9538 131 776 17 19.33 13.15 7.60
20130115 23:33:58 95 0 4 0 0 0 0 0 8 14K 7251 0 774 18 19.14 13.21 7.65
20130115 23:33:59 96 0 3 0 0 0 0 0 8 13K 7420 0 774 16 19.14 13.21 7.65
20130115 23:34:00 95 0 4 0 0 0 0 0 8 13K 7212 0 774 19 19.14 13.21 7.65
20130115 23:34:01 92 0 7 0 0 0 0 0 8 13K 8272 8 753 7 19.14 13.21 7.65
20130115 23:34:02 75 0 15 0 0 0 0 8 8 13K 9397 131 772 16 19.14 13.21 7.65
20130115 23:34:03 95 0 4 0 0 0 0 0 8 14K 7248 0 772 17 19.05 13.29 7.70
20130115 23:34:04 95 0 4 0 0 0 0 0 8 14K 7337 0 771 18 19.05 13.29 7.70
20130115 23:34:05 95 0 4 0 0 0 0 0 8 13K 7326 0 771 17 19.05 13.29 7.70
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
99.89 3.38 3.41 0.11 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 0 99.83 3.38 3.41 0.17 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 4 99.86 3.38 3.41 0.14 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 1 99.92 3.38 3.41 0.08 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 5 99.82 3.38 3.41 0.18 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 2 99.89 3.38 3.41 0.11 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 6 99.94 3.38 3.41 0.06 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 3 99.91 3.38 3.41 0.09 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 7 99.92 3.38 3.41 0.08 0.00 0.00 0.00 0.00 0.00 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
98.53 3.42 3.41 1.02 0.26 0.20 0.00 0.00 0.00 0.00 0.00
0 0 98.36 3.42 3.41 1.00 0.44 0.20 0.00 0.00 0.00 0.00 0.00
0 4 99.13 3.42 3.41 0.23 0.44 0.20 0.00 0.00 0.00 0.00 0.00
1 1 99.72 3.42 3.41 0.13 0.04 0.11 0.00 0.00 0.00 0.00 0.00
1 5 99.10 3.42 3.41 0.75 0.04 0.11 0.00 0.00 0.00 0.00 0.00
2 2 99.79 3.42 3.41 0.09 0.01 0.11 0.00 0.00 0.00 0.00 0.00
2 6 97.54 3.42 3.41 2.34 0.01 0.11 0.00 0.00 0.00 0.00 0.00
3 3 97.56 3.42 3.41 1.51 0.56 0.37 0.00 0.00 0.00 0.00 0.00
3 7 96.99 3.41 3.41 2.08 0.56 0.37 0.00 0.00 0.00 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
98.50 3.39 3.41 1.09 0.29 0.11 0.00 0.00 0.00 0.00 0.00
0 0 97.14 3.39 3.41 2.15 0.61 0.10 0.00 0.00 0.00 0.00 0.00
0 4 98.94 3.39 3.41 0.35 0.61 0.10 0.00 0.00 0.00 0.00 0.00
1 1 99.04 3.39 3.41 0.38 0.53 0.06 0.00 0.00 0.00 0.00 0.00
1 5 98.33 3.39 3.41 1.09 0.53 0.06 0.00 0.00 0.00 0.00 0.00
2 2 99.08 3.39 3.41 0.92 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 6 99.91 3.40 3.41 0.09 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 3 99.09 3.40 3.41 0.59 0.02 0.30 0.00 0.00 0.00 0.00 0.00
3 7 96.52 3.39 3.41 3.16 0.02 0.30 0.00 0.00 0.00 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
98.14 3.42 3.41 1.21 0.34 0.31 0.00 0.00 0.00 0.00 0.00
0 0 99.72 3.42 3.41 0.23 0.00 0.05 0.00 0.00 0.00 0.00 0.00
0 4 97.61 3.42 3.41 2.34 0.00 0.05 0.00 0.00 0.00 0.00 0.00
1 1 98.50 3.42 3.41 0.60 0.74 0.16 0.00 0.00 0.00 0.00 0.00
1 5 97.15 3.42 3.41 1.95 0.74 0.16 0.00 0.00 0.00 0.00 0.00
2 2 98.75 3.42 3.41 0.24 0.61 0.40 0.00 0.00 0.00 0.00 0.00
2 6 96.79 3.42 3.41 2.20 0.61 0.40 0.00 0.00 0.00 0.00 0.00
3 3 97.40 3.42 3.41 1.96 0.00 0.64 0.00 0.00 0.00 0.00 0.00
3 7 99.21 3.42 3.41 0.15 0.00 0.64 0.00 0.00 0.00 0.00 0.00
SQL>
TM Load Profile Per Second Per Transaction
----------------- -------------------- -------------- ---------------
01/15/13 23:33:49 DB Time 15.3
01/15/13 23:33:49 DB CPU 7.5
01/15/13 23:33:49 Redo size 333.1 20,016.0
01/15/13 23:33:49 Logical reads 4,025,012.8 241,863,021.0
01/15/13 23:33:49 Block changes 3.2 192.0
01/15/13 23:33:49 Physical reads .0 .0
01/15/13 23:33:49 Physical writes 1.4 83.0
01/15/13 23:33:49 User calls 130.8 7,860.0
01/15/13 23:33:49 Parses 66.3 3,983.0
01/15/13 23:33:49 Hard Parses 3.2 195.0
01/15/13 23:33:49 Logons 4.6 279.0
01/15/13 23:33:49 Executes 15,694.3 943,073.0
01/15/13 23:33:49 Rollbacks .0
01/15/13 23:33:49 Transactions .0
01/15/13 23:33:49 Applied urec .0 .0
15 rows selected.
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP, C
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------,---
01/15/13 23:33:49, 361944, 93184156, 157.15, 356.92, 199.77, 0, 0, 0, 257.45, 261079.24, 0
01/15/13 23:33:49, 356931, 91889872, 154.92, 357.88, 202.96, 0, 0, 0, 257.44, 256761.27, 1
01/15/13 23:33:49, 365833, 94179142, 158.69, 366.07, 207.37, 0, 0, 0, 257.44, 257273.79, 2
01/15/13 23:33:49, 361185, 92987723, 157.32, 362.64, 205.32, 0, 0, 0, 257.45, 256418.66, 3
01/15/13 23:33:49, 361576, 93087176, 157.01, 360.44, 203.43, 0, 0, 0, 257.45, 258257.21, 4
01/15/13 23:33:49, 356439, 91762333, 154.21, 349.84, 195.64, 0, 0, 0, 257.44, 262296.24, 5
01/15/13 23:33:49, 357338, 91992800, 155.04, 350.88, 195.83, 0, 0, 0, 257.44, 262178.54, 6
01/15/13 23:33:49, 356574, 91801127, 154.4, 353.37, 198.97, 0, 0, 0, 257.45, 259785.74, 7
01/15/13 23:33:49, 362010, 93200713, 157.65, 356.35, 198.71, 0, 0, 0, 257.45, 261539.82, 8
01/15/13 23:33:49, 375506, 96671965, 163.24, 384.45, 221.21, 0, 0, 0, 257.44, 251453.84, 9
01/15/13 23:33:49, 356696, 91829667, 155.73, 352.24, 196.51, 0, 0, 0, 257.45, 260698.87, 10
01/15/13 23:33:49, 356139, 91687764, 155.32, 354.22, 198.9, 0, 0, 0, 257.45, 258845.13, 11
01/15/13 23:33:49, 362222, 93255812, 157.41, 362.79, 205.38, 0, 0, 0, 257.45, 257054.48, 12
01/15/13 23:33:49, 362068, 93216668, 157.61, 361.6, 203.98, 0, 0, 0, 257.46, 257792.76, 13
01/15/13 23:33:49, 356448, 91767126, 154.93, 344.11, 189.18, 0, 0, 0, 257.45, 266678.39, 14
01/15/13 23:33:49, 357008, 91910824, 155.1, 352.01, 196.91, 0, 0, 0, 257.45, 261102.46, 15
16 rows selected.
SQL>
TM INST SID PROG USERNAME SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME AVG_LIOS SQL_TEXT OSUSER MACHINE
----------------- ----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ---------- ----------------------------------------- ------------------------------ -------------------------
01/15/13 23:33:55 1 391 sqlplus@de SYS c80qj9qbfrrgy 1 79376787 231 .003098 39.12 select to_char(sysdate,'MM/DD/YY HH24:MI: oracle desktopserver.local
01/15/13 23:33:55 1 6 sqlplus@de USER1 68vu5q46nu22s 7 198477492 361,478 .000991 259771.62 SELECT COUNT(C2) FROM CF1 WHERE CUSTID > oracle desktopserver.local
01/15/13 23:33:55 1 199 sqlplus@de USER10 68vu5q46nu22s 10 198477492 361,473 .000988 260609.85 oracle desktopserver.local
01/15/13 23:33:55 1 962 sqlplus@de USER11 68vu5q46nu22s 14 198477492 361,349 .000965 266810.79 oracle desktopserver.local
01/15/13 23:33:55 1 1152 sqlplus@de USER12 68vu5q46nu22s 8 198477492 367,343 .000984 261651.61 oracle desktopserver.local
01/15/13 23:33:55 1 3 sqlplus@de USER13 68vu5q46nu22s 9 198477492 382,042 .001024 251359.95 oracle desktopserver.local
01/15/13 23:33:55 1 960 sqlplus@de USER14 68vu5q46nu22s 11 198477492 361,396 .000994 259094.95 oracle desktopserver.local
01/15/13 23:33:55 1 1150 sqlplus@de USER15 68vu5q46nu22s 3 198477492 366,386 .001005 256279.08 oracle desktopserver.local
01/15/13 23:33:55 1 961 sqlplus@de USER2 68vu5q46nu22s 1 198477492 360,917 .001004 256342.75 oracle desktopserver.local
01/15/13 23:33:55 1 770 sqlplus@de USER4 68vu5q46nu22s 5 198477492 361,851 .000980 262616.14 oracle desktopserver.local
01/15/13 23:33:55 1 1343 sqlplus@de USER5 68vu5q46nu22s 15 198477492 361,591 .000985 261324.21 oracle desktopserver.local
01/15/13 23:33:55 1 771 sqlplus@de USER6 68vu5q46nu22s 0 198477492 366,239 .000988 260584.57 oracle desktopserver.local
01/15/13 23:33:55 1 1149 sqlplus@de USER8 68vu5q46nu22s 4 198477492 367,025 .000996 258379.12 oracle desktopserver.local
13 rows selected.
15 11:32:20 5 15.36 CPU .64 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:25 5 15.80 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:30 5 15.60 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:35 5 15.20 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:40 5 12.80 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:45 5 14.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:50 5 15.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:32:55 5 15.80 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:00 5 14.74 CPU .46 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:05 5 15.60 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:10 5 15.68 CPU .48 latch free ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:15 5 15.60 CPU .00 control file sequential r ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:20 5 13.58 CPU .44 kksfbc child completion ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:25 5 13.20 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:30 5 15.40 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:35 5 15.40 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
15 11:33:40 1 16.00 CPU ++++++++++++++++++++++++++++++++++++++++8+++++++++
88 rows selected.
SQL>
TM CPU_TOTAL CPU_OS CPU_ORA CPU_ORA_WAIT COMMIT READIO WAIT
----------------- ---------- ---------- ---------- ------------ ---------- ---------- ----------
01/15/13 23:34:00 16.774 .304 7.539 8.931 0 0 .147
TM SESSION_ID AAS
----------------- ---------- ----------
01/15/13 23:34:00 3 57
01/15/13 23:34:00 5 47
01/15/13 23:34:00 6 16
01/15/13 23:34:00 198 48
01/15/13 23:34:00 199 18
01/15/13 23:34:00 200 8
01/15/13 23:34:00 389 8
01/15/13 23:34:00 390 49
01/15/13 23:34:00 391 16
01/15/13 23:34:00 578 45
01/15/13 23:34:00 580 55
01/15/13 23:34:00 582 16
01/15/13 23:34:00 766 0
01/15/13 23:34:00 769 12
01/15/13 23:34:00 770 16
01/15/13 23:34:00 771 54
01/15/13 23:34:00 772 46
01/15/13 23:34:00 958 46
01/15/13 23:34:00 959 8
01/15/13 23:34:00 960 55
01/15/13 23:34:00 961 56
01/15/13 23:34:00 962 17
01/15/13 23:34:00 1149 12
01/15/13 23:34:00 1150 57
01/15/13 23:34:00 1151 51
01/15/13 23:34:00 1152 55
01/15/13 23:34:00 1153 1
01/15/13 23:34:00 1343 54
01/15/13 23:34:00 1344 49
01/15/13 23:34:00 1345 16
----------
sum 988
30 rows selected.
}}}
! LIOs driver SQLs
''SLOB''
<<<
connect as user1/user1
* this SQL runs for .000478 secs per execution
{{{
DECLARE
  x   NUMBER := 0;
  v_r PLS_INTEGER;
BEGIN
  -- seed the random generator differently for each user/session
  dbms_random.initialize(UID * 7777);
  FOR i IN 1..5000 LOOP
    -- pick a random upper bound, then index range scan ~256 rows of cf1
    v_r := dbms_random.value(257, 10000);
    SELECT COUNT(c2) INTO x FROM cf1 WHERE custid > v_r - 256 AND custid < v_r;
  END LOOP;
END;
/
}}}
<<<
''cputoolkit''
<<<
connect as / as sysdba
* this SQL runs for .984511 secs per execution
{{{
declare
rcount number;
begin
-- 3600 loops x ~1 sec per execution = 1 hour of workload
for j in 1..3600 loop
-- lotslios by Tanel Poder
select /*+ cputoolkit ordered
use_nl(b) use_nl(c) use_nl(d)
full(a) full(b) full(c) full(d) */
count(*)
into rcount
from
sys.obj$ a,
sys.obj$ b,
sys.obj$ c,
sys.obj$ d
where
a.owner# = b.owner#
and b.owner# = c.owner#
and c.owner# = d.owner#
and rownum <= 10000000;
end loop;
end;
/
}}}
<<<
! SLOB sql details
* SLOB executes SQLs at a very fast rate: ''.000478 secs'' and ''282 LIOs'' per execution, looped ''5000 times''
* at a sustained rate that's around ''1.2M LIOs/sec'', which is what we see on the AWR load profile (see the arithmetic sketch after this list)
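A minimal sketch of the arithmetic (my reading of where the ~1.2M figure comes from; the per-execution numbers are taken from the log rows below):
{{{
# ~257.46 LIOs per SELECT execution x 5000 loop iterations = LIOs per driver call
echo "257.46 5000" | awk '{printf "%.0f LIOs per 5000-loop call\n", $1 * $2}'
# => ~1287300, consistent with the LIOS deltas between the snapshots below
}}}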
<<<
{{{
-- FIRST EXECUTION
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP, C
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------,---
01/16/13 12:09:43, 1, 282, .03, .67, .64, .03, .67, .64, 282, 420.12, 0
-- 2ND EXECUTION
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP, C
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------,---
01/16/13 12:11:48, 5001, 1287555, 1.5, 4.95, 3.45, 0, 0, 0, 257.46, 260330.43, 0
-- 3RD EXECUTION
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP, C
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------,---
01/16/13 12:15:14, 10001, 2574827, 2.56, 6, 3.44, 0, 0, 0, 257.46, 429055.96, 0
-- 4TH EXECUTION.. so the counters advance in 5000-execution increments
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP, C
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------,---
01/16/13 12:17:44, 15001, 3862099, 3.63, 7.07, 3.44, 0, 0, 0, 257.46, 546336.79, 0
-- .000478 SECS ELAPSED PER EXEC
SQL>
TM INST SID PROG USERNAME SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME AVG_LIOS SQL_TEXT OSUSER MACHINE
----------------- ----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ---------- ----------------------------------------- ------------------------------ -------------------------
01/16/13 12:17:24 1 576 sqlplus@de USER1 68vu5q46nu22s 0 198477492 14,644 .000478 539051.63 SELECT COUNT(C2) FROM CF1 WHERE CUSTID > oracle desktopserver.local
-- THE EXECUTION PLAN COMPARE IT WITH CPUTOOLKIT
12:18:38 SYS@slob> @dplan
Enter value for sql_id: 68vu5q46nu22s
Enter value for child_no:
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 68vu5q46nu22s, child number 0
-------------------------------------
SELECT COUNT(C2) FROM CF1 WHERE CUSTID > :B1 - 256 AND CUSTID < :B1
Plan hash value: 198477492
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 259 (100)| |
| 1 | SORT AGGREGATE | | 1 | 132 | | |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| CF1 | 256 | 33792 | 259 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | I_CF1 | 256 | | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(:B1-256<:B1)
4 - access("CUSTID">:B1-256 AND "CUSTID"<:B1)
22 rows selected.
NAME POSITION DUP_POSITION DATATYPE DATATYPE_STRING CHARACTER_SID PRECISION SCALE MAX_LENGTH LAST_CAPTURED
------------------------------ ---------- ------------ ---------- --------------- ------------- ---------- ---------- ---------- -----------------
VALUE_STRING
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
VALUE_ANYDATA()
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 2 NUMBER 22 20130116 12:09:36
498
ANYDATA()
2 2 NUMBER 22 20130116 12:09:36
498
ANYDATA()
SELECT COUNT(C2) FROM CF1 WHERE CUSTID > 498 - 256 AND CUSTID < 498;
}}}
<<<
! cputoolkit sql details
* cputoolkit aims for ''1 sec per execution'' on my desktopserver (done by tweaking the rownum of Tanel's lotslios SQL), so that's ''.984511 secs'' and ''293212 LIOs'' per execution, looped ''3600 times'' (3600 secs = 1 hour)
* at a sustained rate that's around ''293K LIOs/sec'', which is what we see on the AWR load profile (quick math below); check out HTon-TurboOn here [[cores vs threads, v2 vs x2]]
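Same back-of-envelope check for cputoolkit (a sketch using the per-execution numbers above):
{{{
select round(293212/0.984511) lios_per_sec from dual;
-- ~297.8K LIOs/sec for a single session, consistent with the
-- ~293K LIOs/sec on the AWR load profile
}}}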
<<<
{{{
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP, C
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------,---
01/16/13 12:34:56, 1, 293212, 1.02, 1.03, 0, 1.02, 1.03, 0, 293212, 285281.74, 0
SQL>
TM , EXEC, LIOS, CPUSECS, ELAPSECS,CPU_WAIT_SECS, CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP, C
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------,---
01/16/13 12:52:31, 150, 43688588, 147.23, 147.67, .44, .98, .98, 0, 291257.25, 295847.45, 0
SQL>
TM INST SID PROG USERNAME SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME AVG_LIOS SQL_TEXT OSUSER MACHINE
----------------- ----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ---------- ----------------------------------------- ------------------------------ -------------------------
01/16/13 12:52:33 1 462 sqlplus@de SYS 9fx889bgz15h3 0 3691747574 151 .984511 295852.66 SELECT /*+ cputoolkit ordered oracle desktopserver.local
12:55:54 SYS@dw> @dplan
Enter value for sql_id: 9fx889bgz15h3
Enter value for child_no:
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 9fx889bgz15h3, child number 0
-------------------------------------
SELECT /*+ cputoolkit ordered use_nl(b)
use_nl(c) use_nl(d) full(a) full(b)
full(c) full(d) */ COUNT(*) FROM SYS.OBJ$ A, SYS.OBJ$ B, SYS.OBJ$ C,
SYS.OBJ$ D WHERE A.OWNER# = B.OWNER# AND B.OWNER# = C.OWNER# AND
C.OWNER# = D.OWNER# AND ROWNUM <= 10000000
Plan hash value: 3691747574
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 13P(100)| |
| 1 | SORT AGGREGATE | | 1 | 12 | | |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | NESTED LOOPS | | 1410P| 14E| 13P (1)|999:59:59 |
| 4 | NESTED LOOPS | | 54T| 443T| 513G (1)|999:59:59 |
| 5 | NESTED LOOPS | | 2078M| 11G| 19M (1)| 00:11:31 |
| 6 | TABLE ACCESS FULL| OBJ$ | 79774 | 233K| 249 (1)| 00:00:01 |
|* 7 | TABLE ACCESS FULL| OBJ$ | 26050 | 78150 | 247 (0)| 00:00:01 |
|* 8 | TABLE ACCESS FULL | OBJ$ | 26050 | 78150 | 247 (0)| 00:00:01 |
|* 9 | TABLE ACCESS FULL | OBJ$ | 26050 | 78150 | 247 (0)| 00:00:01 |
-------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(ROWNUM<=10000000)
7 - filter("A"."OWNER#"="B"."OWNER#")
8 - filter("B"."OWNER#"="C"."OWNER#")
9 - filter("C"."OWNER#"="D"."OWNER#")
33 rows selected.
}}}
<<<
{{{
-- DR ICASH
drauroaixp21
20:54:34 SYS@icashps> show parameter cpu_count
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 8
20:54:38 SYS@icashps>
20:54:39 SYS@icashps> show parameter resource
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
resource_limit boolean FALSE
resource_manager_plan string
20:54:52 SYS@icashps> set lines 300
col window_name format a17
col RESOURCE_PLAN format a25
col LAST_START_DATE format a50
col duration format a15
col enabled format a5
select window_name, RESOURCE_PLAN, LAST_START_DATE, DURATION, enabled from DBA_SCHEDULER_WINDOWS;
WINDOW_NAME RESOURCE_PLAN LAST_START_DATE DURATION ENABL
----------------- ------------------------- -------------------------------------------------- --------------- -----
WEEKNIGHT_WINDOW 26-FEB-13 10.00.00.599802 PM CST6CDT +000 08:00:00 TRUE
WEEKEND_WINDOW 23-FEB-13 08.43.38.102855 AM CST6CDT +002 00:00:00 TRUE
oracle@drauroaixp21:/apps/oracle/dba/benchmark:icashps
$ prtconf
System Model: IBM,8204-E8A
Machine Serial Number: 0659956
Processor Type: PowerPC_POWER6
Processor Implementation Mode: POWER 6
Processor Version: PV_6_Compat
Number Of Processors: 4
Processor Clock Speed: 4204 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 2 drauroaixp21
Memory Size: 16384 MB
Good Memory Size: 16384 MB
Platform Firmware level: Not Available
Firmware Version: IBM,EL350_071
Console Login: enable
Auto Restart: true
Full Core: false
$ lsattr -El proc0
frequency 4204000000 Processor Speed False
smt_enabled true Processor SMT enabled False
smt_threads 2 Processor SMT threads False
state enable Processor state False
type PowerPC_POWER6 Processor type False
$ uname -M
IBM,8204-E8A
$ lsdev -Cc processor
proc0 Available 00-00 Processor
proc2 Available 00-02 Processor
proc4 Available 00-04 Processor
proc6 Available 00-06 Processor
$ lscfg -vp |grep -ip proc |grep "PROC"
2 WAY PROC CUOD :
2 WAY PROC CUOD :
2 WAY PROC CUOD :
2 WAY PROC CUOD :
$ lparstat -i
Node Name : drauroaixp21
Partition Name : drauroaixp21
Partition Number : 2
Type : Shared-SMT
Mode : Capped
Entitled Capacity : 4.00
Partition Group-ID : 32770
Shared Pool ID : 0
Online Virtual CPUs : 4
Maximum Virtual CPUs : 8
Minimum Virtual CPUs : 1
Online Memory : 16384 MB
Maximum Memory : 32768 MB
Minimum Memory : 512 MB
Variable Capacity Weight : 0
Minimum Capacity : 0.10
Maximum Capacity : 8.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 8
Active Physical CPUs in system : 8
Active CPUs in Pool : 8
Shared Physical CPUs in system : 8
Maximum Capacity of Pool : 800
Entitled Capacity of Pool : 800
Unallocated Capacity : 0.00
Physical CPU Percentage : 100.00%
Unallocated Weight : 0
Desired Virtual CPUs : 4
Desired Memory : 16384 MB
Desired Variable Capacity Weight : 0
Desired Capacity : 4.00
$ lparstat
System configuration: type=Shared mode=Capped smt=On lcpu=8 mem=16384 psize=8 ent=4.00
%user %sys %wait %idle physc %entc lbusy vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
2.4 0.6 1.0 96.0 0.13 3.1 1.9 7239417888 2097575355
mpstat
vmstat
lparstat
while :; do ./runit.sh 0 17; done
go to aix dir
edit the runit
copy the reader.sql
}}}
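A quick cross-check of the LPAR numbers above: this is a capped shared LPAR with 4 online virtual CPUs and SMT2, so 4 x 2 = 8 logical CPUs, which is why the instance reports cpu_count=8. A minimal sketch to verify from inside the database:
{{{
-- compare with lparstat -i (Online Virtual CPUs) and lsattr -El proc0 (smt_threads)
select name, value from v$parameter where name = 'cpu_count';
}}}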
{{{
Description: With Oracle 9i it was required to retrieve segment information like the number of extents, blocks, or bytes from the segment header.
The segment headers were not cached, so requesting this kind of information for a high number of segments resulted in a high amount of single-block disk reads and a long runtime for DBA_SEGMENTS queries.
With later releases it is no longer required to access the segment headers: the information related to bytes, blocks, and extents is stored centrally in SEG$, but as a prerequisite SEG$ has to be updated using the Oracle procedure DBMS_SPACE_ADMIN.TABLESPACE_FIX_SEGMENT_EXTBLKS.
If this procedure is not executed, segments are only updated in SEG$ when they allocate a new extent, and all other segments still suffer from the segment header accesses.
Advantages:
Faster access on SEG$, performance improvement on the monitoring SQLs and anything that accesses DBA_SEGMENTS/DBA_EXTENTS
Here’s the risk (as per Doc ID 463101.1):
Make sure to test and time this on DEV environment first and then execute the command during a low workload or idle period
Also do not run this on the SYSTEM tablespace (Bug 12940620 - Cached block/extent counts in SEG$ not updated after ADD extent (Doc ID 12940620.8))
The procedure fixes extents, blocks and bytes in the segment headers to synchronize seg$ and
segment header entries. It holds the allocation enqueue for the tablespace until the command
completes, which may delay some operations in this tablespace (new extent allocation,
extent deallocation, etc.). So it needs to be run during an idle period.
Here’s how to find the objects with mismatch and affected tablespaces:
Find and Fix the Mismatch Between DBA_SEGMENTS and DBA_EXTENTS Views (Doc ID 463101.1)
The query
select /*+ RULE */ s.tablespace_name, s.segment_name segment, s.partition_name,
s.owner owner, s.segment_type, s.blocks sblocks, e.blocks eblocks,
s.extents sextents, e.extents eextents, s.bytes sbytes, e.bytes ebytes
from dba_segments s,
(select count(*) extents, sum(blocks) blocks, sum(bytes) bytes, segment_name,
partition_name, segment_type, owner
from dba_extents
group by segment_name,partition_name,segment_type,owner) e
where s.segment_name=e.segment_name
and s.owner = e.owner
and (s.partition_name = e.partition_name or s.partition_name is null)
and s.segment_type = e.segment_type
and s.owner not like 'SYS%'
and ((s.blocks <> e.blocks) or (s.extents <> e.extents) or (s.bytes <> e.bytes));
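-- Once mismatches are found, run the fix per affected tablespace during an idle
-- window (a sketch; 'USERS' is just an illustrative tablespace name, never SYSTEM):
exec dbms_space_admin.tablespace_fix_segment_extblks('USERS');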
Segments not pre-calculated for DBA_SEGMENTS
The tablespace_fix_segment_extblks procedure suggested by Oracle GCS is related to Bug 27038986:
. Bug.27038986: (seg$ is still not updated frequently for changes in index or lob segment)
Description
SEG$ is not updated although Patch 12963364 is applied and the hidden parameter "_bug12963364_spacebg_sync_segblocks" is set to TRUE.
Workaround
Manually execute the following procedure frequently: dbms_space_admin.tablespace_fix_segment_extblks
This bug is not fixed until 12.2.0.1.181016 (Oct 2018) Database Release Update (DB RU)
. Bug.12963364: (block/extent count caching in SEG$ stops working for some segments)
Description:
The cached block/extents counts in SEG$ can become disabled for segments in local tablespaces if the update triggered by an add extent operation is lost before the 5 minute update window expires (such as by a database shutdown).
Thereafter, the cached information stays out of sync permanently unless an update procedure is manually run by the DBA. The result is that, over time, the amount of segments with uncached sizing data can grow, resulting in the degradation of performance when querying DBA_SEGMENTS.
Rediscovery Notes
You are exposed to this bug if a select query on seg$ shows "bitand(s.spare1,131072) as 0" even after the 5-minute BG update task timeout.
Workaround
Running dbms_space_admin.tablespace_fix_segment_extblks on occasion will restore the segments that have lost synchronization.
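-- Illustrative rediscovery query (a sketch; per the note above, segments whose
-- cached counts are disabled show bitand(spare1,131072) = 0 in seg$):
select count(*) from sys.seg$ s where bitand(s.spare1, 131072) = 0;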
This bug is not fixed until 12.2.0.1 (base release)
}}}
https://blogs.sap.com/2011/02/14/poor-performance-when-accessing-oracle-dba-views-the-story/
https://docs.oracle.com/database/121/ARPLS/d_spadmn.htm#ARPLS68124
https://www.ora-solutions.net/web/2011/07/18/performance-degradation-for-query-on-dba_segments-bytes-in-11gr2/
http://www.orafaq.com/maillist/oracle-l/2007/05/10/1139.htm
http://www.dadbm.com/oracle-slow-sql-query-against-dba_segments-solved/
Find and Fix the Mismatch Between DBA_SEGMENTS and DBA_EXTENTS Views
http://blog.itpub.net/12798004/viewspace-2680783/
http://kevinclosson.wordpress.com/2011/06/16/quit-blogging-or-give-a-quick-update-with-pointers-to-good-blog-a-glimpse-of-the-future-and-a-photo-of-something-from-the-past-yes/
http://kevinclosson.wordpress.com/2011/11/01/very-cool-yet-very-dense-low-wattage-hp-technology-in-project-moonshot-by-the-way-a-large-cpu-count-requires-a-symmetrical-architecture/
http://en.wikipedia.org/wiki/Symmetric_multiprocessing
http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access
http://www.allinterview.com/showanswers/65429.html
http://www.datastagetips.com/2011/02/what-is-difference-between-smp-and-mpp.html
https://kevinclosson.wordpress.com/page/4/
http://www.answers.com/topic/symmetric-multiprocessing
http://www.blight.com/~ja-wells/311/report.html
http://en.wikipedia.org/wiki/Massively_parallel
http://en.wikipedia.org/wiki/Greenplum
http://en.wikipedia.org/wiki/Massive_parallel_processing
''numa''
http://kevinclosson.wordpress.com/2010/12/02/_enable_numa_support-the-if-then-else-oracle-database-11g-release-2-initialization-parameter/
http://kevinclosson.wordpress.com/kevin-closson-index/intel-xeon-5500-nehalem-related-topics/
''misconfigured numa''
http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/
''Andi Kleen'' - creator of numactl
http://halobates.de/ <-- white papers
http://halobates.de/scalability-tokyo2.pdf <-- Scalability of modern Linux kernels
http://halobates.de/lk09-scalability.pdf <-- Linux multi-core scalability
http://halobates.de/numaapi3.pdf <-- An NUMA API for Linux
''Oracle Sun Server X2-8 System Architecture'' http://www.oracle.com/technetwork/articles/systems-hardware-architecture/sun-server-x2-8-architecture-1689083.pdf
likwid (ala numactl) http://wiki.likwid.googlecode.com/hg/BOF-slides.pdf, http://blogs.fau.de/hager/files/2013/06/isc13_tutorial_NLPE.pdf
http://www.princeton.edu/~unix/Solaris/troubleshoot/ram.html
smtx : how many? http://blogs.oracle.com/clive/entry/smtx_how_many
http://unix.derkeiler.com/Newsgroups/comp.unix.solaris/2006-02/msg00879.html <-- brendan reply
! ''snapper4 references''
http://blog.tanelpoder.com/2013/02/18/even-more-snapper-v4-03-now-works-in-sql-developer-too/
http://blog.tanelpoder.com/2013/02/10/session-snapper-v4-the-worlds-most-advanced-oracle-troubleshooting-script/
http://blog.tanelpoder.com/2013/02/18/manual-before-and-after-snapshot-support-in-snapper-v4/
http://blog.tanelpoder.com/2013/02/18/snapper-v4-02-and-the-snapper-launch-party-video/
! quick troubleshooting: gv$sql_monitor, report_sql_monitor, and snapper v4 on remote instance
<<<
So you're on instance3, and the session you want to troubleshoot is on instance1. To open another PuTTY session you'd have to go through a multi-step process just to get the password and log in as oracle. And you are lazy.
Don't worry, you can make use of your current session on instance3 to know what's happening on all instances and then on specific SIDs of other instances.
<<<
See the SQLs below and the example output/scenario
<<<
First, run this [[sqlmon]], then this [[report_sql_monitor.sql]], and then the commands below
SNAPPER V4
{{{
-- high level ASH workload characterization across instances
@snapper ash 5 1 all@*
-- by username on all instances
@snapper "ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats" 5 1 user=APPUSER@*
-- all users on instance 1
@snapper "ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats" 5 1 all@1
-- by SID on instance 1
@snapper "ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats" 5 1 17@1
-- by inst_id,sid tuple syntax .. snapper on inst 2, SID 1234
@snapper ash 5 1 (2,1234)
-- comma separate to pass multiple inst_id,SID tuples
@snapper ash 5 1 (2,1234),(4,5678),(3,999)
-- snapper QC_ID run from any instance
@snapper ash 5 1 qc=1234@*
More here => http://blog.tanelpoder.com/2013/02/18/snapper-v4-02-and-the-snapper-launch-party-video/
}}}
<<<
! Example run:
{{{
-- I'm on instance 3 and the session I want to troubleshoot is on instance 1
SYS@db01scp3>
-- quick workload characterization across nodes
SYS@db01scp3> @snapper ash 5 1 all@*
Sampling SID all@* with interval 5 seconds, taking 1 snapshots...
-- Session Snapper v4.06 BETA - by Tanel Poder ( http://blog.tanelpoder.com ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)
----------------------------------------------------------------------------------------------------
Active% | INST | SQL_ID | SQL_CHILD | EVENT | WAIT_CLASS
----------------------------------------------------------------------------------------------------
3965% | 2 | 8gqt47kymkn6u | 0 | cell single block physical read | User I/O
2120% | 2 | 8gqt47kymkn6u | 0 | resmgr:cpu quantum | Scheduler
1975% | 2 | 5s9b69t543p4y | 0 | cell single block physical read | User I/O
1080% | 2 | aabha81j0m8as | 1 | resmgr:cpu quantum | Scheduler
1040% | 2 | 8gqt47kymkn6u | 0 | ON CPU | ON CPU
860% | 2 | 5s9b69t543p4y | 0 | resmgr:cpu quantum | Scheduler
840% | 2 | aabha81j0m8as | 1 | cell single block physical read | User I/O
580% | 4 | 5x7ayrz9s09tm | 0 | cell single block physical read | User I/O
535% | 3 | 66mccb3dffhdk | 1 | direct path read temp | User I/O
530% | 2 | | | resmgr:cpu quantum | Scheduler
-- End of ASH snap 1, end=2014-07-02 21:48:51, seconds=5, samples_taken=20
PL/SQL procedure successfully completed.
-- the gv$sql_monitor SQL
SYS@db01scp3> @sqlmon
*** SQL MONITOR + V$SQL ***
STATUS OFF INST EXEC ELA_TM CPU_TM IO_TM RMBS WMBS SID MODULE SQL_ID PHV SQL_EXEC_ID USERNAME PX1 PX2 PX3 SQL_EXEC_START SQL_EXEC_END SQL_TEXT
-------------------- --- ---- ---------- ---------- ---------- ---------- ------ ------ ----- -------------------- ------------- ---------- ----------- ---------- ---- ---- ---- --------------- --------------- ----------------------------------------------------------------------
EXECUTING No 1 8 384254.19 39960.01 280164.93 38 0 17 DBMS_SCHEDULER 5409mxs94a3d1 0 16777228 SYS 062814 10:17:30 070214 21:01:44 DECLARE job BINARY_INTEGER := :job; next_date TIMESTAMP WITH TIME ZON
EXECUTING No 2 0 14616.12 310.6 14044.35 1 0 367 SQL*Plus 2c7wr6h50km5a 0 33554434 SCPP_DW_DB 070214 17:01:19 070214 21:04:55 BEGIN RECON_REPORT_EQ_TREP(:ref_cursor); END;
O
INST SID USERNAME PROG SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME EVENT SQL_TEXT HOURS OFF IO_SAVED_%
----- ----- ------------- ---------- ------------- ------ --------------- ---------- ----------- -------------------- ------------------------------ ------ --- ----------
1 415 SCPP_SS_RW JDBC Thin 02nf0g0n2qp5g 0 730501726 0 69.87 read by other sessio delete from SCPP_SAME_DBO.futu 0 No 0
1 17 SYS oracle@x03 3z4d3ugq726n1 2 2533694825 274 549.09 cell single block ph DELETE FROM SYS.AUD$ WHERE DB 107 No 0
-- the report_sql_monitor script with INST_ID
SYS@db01scp3> @report_sql_monitor
Enter value for sid: 17
Enter value for sql_id: 5409mxs94a3d1
Enter value for sql_exec_id: 16777228
Enter value for inst_id: 1
REPORT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL Monitoring Report
SQL Text
------------------------------
DECLARE job BINARY_INTEGER := :job; next_date TIMESTAMP WITH TIME ZONE := :mydate; broken BOOLEAN := FALSE; job_name VARCHAR2(30) := :job_name; job_subname VARCHAR2(30) := :job_subname; job_owner VARCHAR2(30) := :job_owner; job_start TIMESTAMP WITH T
IME ZONE := :job_start; job_scheduled_start TIMESTAMP WITH TIME ZONE := :job_scheduled_start; window_start TIMESTAMP WITH TIME ZONE := :window_start; window_end TIMESTAMP WITH TIME ZONE := :window_end; chain_id VARCHAR2(14) := :chainid;
credential_owner varchar2(30) := :credown; credential_name varchar2(30) := :crednam; destination_owner varchar2(30) := :destown; destination_name varchar2(30) := :destnam; job_dest_id varchar2(14) := :jdestid; log_id number := :log_id; BEGIN BEGIN DB
MS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(1, TRUE); END; :mydate := next_date; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
Global Information
------------------------------
Status : EXECUTING
Instance ID : 1
Session : SYS (17:46643)
SQL ID : 5409mxs94a3d1
SQL Execution ID : 16777228
Execution Started : 06/28/2014 10:17:30
First Refresh Time : 06/28/2014 10:17:38
Last Refresh Time : 07/02/2014 21:06:48
Duration : 384577s
Module/Action : DBMS_SCHEDULER/PURGE_STD_AUDIT_TRAILS
Service : SYS$USERS
Program : oracle@x03db01.example.com (J000)
Global Stats
===========================================================================================
| Elapsed | Cpu | IO | Concurrency | Cluster | PL/SQL | Buffer | Read | Read |
| Time(s) | Time(s) | Waits(s) | Waits(s) | Waits(s) | Time(s) | Gets | Reqs | Bytes |
===========================================================================================
| 389263 | 39964 | 280464 | 5.05 | 68830 | 4.14 | 2G | 39M | 14TB |
===========================================================================================
-- the snapper output on SID 17 @ instance 1
SYS@db01scp3> @snapper "ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats" 5 1 17@1
Sampling SID 17@1 with interval 5 seconds, taking 1 snapshots...
-- Session Snapper v4.06 BETA - by Tanel Poder ( http://blog.tanelpoder.com ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SID @INST, USERNAME , TYPE, STATISTIC , DELTA, HDELTA/SEC, %TIME, GRAPH , NUM_WAITS, WAITS/SEC, AVERAGES
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
17 @1, SYS , STAT, session logical reads , 16113, 3.76k, , , , , ~ per execution
17 @1, SYS , STAT, cluster wait time , 35, 8.16, , , , , ~ per execution
17 @1, SYS , STAT, user I/O wait time , 324, 75.55, , , , , ~ per execution
17 @1, SYS , STAT, non-idle wait time , 358, 83.48, , , , , ~ per execution
17 @1, SYS , STAT, non-idle wait count , 4354, 1.02k, , , , , ~ per execution
17 @1, SYS , STAT, physical read total IO requests , 126, 29.38, , , , , ~ per execution
17 @1, SYS , STAT, physical read total multi block requests , 126, 29.38, , , , , ~ per execution
17 @1, SYS , STAT, physical read requests optimized , 29, 6.76, , , , , ~ per execution
17 @1, SYS , STAT, physical read total bytes optimized , 1794048, 418.34k, , , , , ~ per execution
17 @1, SYS , STAT, physical read total bytes , 131997696, 30.78M, , , , , ~ per execution
17 @1, SYS , STAT, cell physical IO interconnect bytes , 131997696, 30.78M, , , , , ~ per execution
17 @1, SYS , STAT, gcs messages sent , 11761, 2.74k, , , , , ~ per execution
17 @1, SYS , STAT, consistent gets , 16113, 3.76k, , , , , ~ per execution
17 @1, SYS , STAT, consistent gets from cache , 16113, 3.76k, , , , , ~ per execution
17 @1, SYS , STAT, consistent gets from cache (fastpath) , 15987, 3.73k, , , , , ~ per execution
17 @1, SYS , STAT, logical read bytes from cache , 131997696, 30.78M, , , , , ~ per execution
17 @1, SYS , STAT, physical reads , 16113, 3.76k, , , , , ~ per execution
17 @1, SYS , STAT, physical reads cache , 16113, 3.76k, , , , , ~ per execution
17 @1, SYS , STAT, physical read IO requests , 126, 29.38, , , , , ~ per execution
17 @1, SYS , STAT, physical read bytes , 131997696, 30.78M, , , , , ~ per execution
17 @1, SYS , STAT, free buffer requested , 16113, 3.76k, , , , , ~ per execution
17 @1, SYS , STAT, dirty buffers inspected , 80, 18.65, , , , , ~ per execution
17 @1, SYS , STAT, pinned buffers inspected , 16, 3.73, , , , , ~ per execution
17 @1, SYS , STAT, hot buffers moved to head of LRU , 1462, 340.91, , , , , ~ per execution
17 @1, SYS , STAT, free buffer inspected , 15241, 3.55k, , , , , ~ per execution
17 @1, SYS , STAT, physical reads cache prefetch , 15987, 3.73k, , , , , ~ per execution
17 @1, SYS , STAT, prefetched blocks aged out before use , 2, .47, , , , , ~ per execution
17 @1, SYS , STAT, redo entries , 126, 29.38, , , , , ~ per execution
17 @1, SYS , STAT, redo size , 454276, 105.93k, , , , , ~ bytes per user commit
17 @1, SYS , STAT, redo entries for lost write detection , 124, 28.91, , , , , ~ per execution
17 @1, SYS , STAT, redo size for lost write detection , 446972, 104.23k, , , , , ~ per execution
17 @1, SYS , STAT, file io wait time , 3234798, 754.3k, , , , , ~ per execution
17 @1, SYS , STAT, gc local grants , 4352, 1.01k, , , , , ~ per execution
17 @1, SYS , STAT, gc remote grants , 11761, 2.74k, , , , , ~ per execution
17 @1, SYS , STAT, no work - consistent read gets , 16113, 3.76k, , , , , ~ per execution
17 @1, SYS , STAT, table scan rows gotten , 580205, 135.29k, , , , , ~ per execution
17 @1, SYS , STAT, table scan blocks gotten , 16113, 3.76k, , , , , ~ per execution
17 @1, SYS , STAT, cell flash cache read hits , 29, 6.76, , , , , ~ per execution
17 @1, SYS , TIME, DB CPU , 516921, 120.54ms, 12.1%, [@@ ], , ,
17 @1, SYS , TIME, sql execute elapsed time , 6203855, 1.45s, 144.7%, [##########], , ,
17 @1, SYS , TIME, DB time , 6203855, 1.45s, 144.7%, [##########], , , ~ unaccounted time
17 @1, SYS , WAIT, gc cr multi block request , 343679, 80.14ms, 8.0%, [W ], 193, 45, 1.78ms average wait
17 @1, SYS , WAIT, cell multiblock physical read , 3614576, 842.85ms, 84.3%, [WWWWWWWWW ], 125, 29.15, 28.92ms average wait
-- End of Stats snap 1, end=2014-07-02 21:31:39, seconds=4.3
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Active% | SQL_ID | SID | EVENT | WAIT_CLASS | MODULE | SERVICE_NAME | BLOCKING_SES | P2 | P3
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
26% | 3z4d3ugq726n1 | 17 | cell multiblock physical read | User I/O | DBMS_SCHEDULER | SYS$USERS | | 3517816569 | 1048576
13% | 3z4d3ugq726n1 | 17 | cell multiblock physical read | User I/O | DBMS_SCHEDULER | SYS$USERS | | 4232095195 | 1048576
5% | 3z4d3ugq726n1 | 17 | cell multiblock physical read | User I/O | DBMS_SCHEDULER | SYS$USERS | | 188591192 | 1048576
5% | 3z4d3ugq726n1 | 17 | cell multiblock physical read | User I/O | DBMS_SCHEDULER | SYS$USERS | | 3684039998 | 1048576
5% | 3z4d3ugq726n1 | 17 | ON CPU | ON CPU | DBMS_SCHEDULER | SYS$USERS | | |
3% | 3z4d3ugq726n1 | 17 | cell multiblock physical read | User I/O | DBMS_SCHEDULER | SYS$USERS | | 1771281682 | 1048576
3% | 3z4d3ugq726n1 | 17 | gc cr multi block request | Cluster | DBMS_SCHEDULER | SYS$USERS | | 804607 | 1
3% | 3z4d3ugq726n1 | 17 | cell multiblock physical read | User I/O | DBMS_SCHEDULER | SYS$USERS | | 749744948 | 1048576
3% | 3z4d3ugq726n1 | 17 | gc cr multi block request | Cluster | DBMS_SCHEDULER | SYS$USERS | | 801071 | 1
3% | 3z4d3ugq726n1 | 17 | cell multiblock physical read | User I/O | DBMS_SCHEDULER | SYS$USERS | | 767922459 | 1048576
-- End of ASH snap 1, end=2014-07-02 21:31:39, seconds=5, samples_taken=39
PL/SQL procedure successfully completed.
}}}
.
{{{
-- snapper manual begin and end
select sid from v$session where sql_id = '8gqt47kymkn6u'
set serveroutput on
var snapper refcursor
@snapper4.sql all,begin 5 1 1538
@snapper4.sql all,end 5 1 1538
}}}
{{{
@snapper out 1 120 "select sid from v$session where status = 'ACTIVE'"
@snapper all 1 5 qc=276
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 5 qc=138
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 1 sid=2164
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 5 "select sid from v$session where program like 'nqsserver%'"
@snapper ash=event+wait_class,stats,gather=ts,tinclude=CPU,sinclude=redo|reads|writes 5 5 "select sid from v$session where username like 'DBFS%'" <-- get sysstat values
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session" <-- ALL PROCESSES - start with this!
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats,gather=a 5 5 "select sid from v$session" <-- ALL PROCESSES - with BUFG and LATCH
@snapper ash=event+wait_class,stats,gather=tsw,tinclude=CPU,sinclude=redo|reads|writes 5 5 "select sid from v$session where username like 'USER%' or program like '%DBW%' or program like '%CKP%' or program like '%LGW%'" <-- get ASM redundancy/parity test case
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'DBFS' or program like '%SMC%' or program like '%W00%'" <-- get DBFS and other background processes
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 5 ALL
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 1374
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'DBFS'"
-- the snapperloop, copy the snapperloop file in the same directory then do a spool then run any of the commands below
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 263
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'SYSADM' and module = 'EX_APPROVAL'"
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'SYSADM'"
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'SYSADM' and sid = 821"
@snapper "stats,gather=s,sinclude=total IO requests|physical.*total bytes" 10 1 all <-- get the total IO MB/s
}}}
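For the snapperloop runs above you can capture the output with a spool, e.g. (a sketch; the spool file name is arbitrary):
{{{
spool snapperloop_sysadm.txt
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'SYSADM'"
spool off
}}}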
This is just a temporary placeholder for snapper v4.1; you can find the original/updated snapper script here: http://blog.tanelpoder.com/files/scripts
{{{
------------------------------------------------------------------------------
--
-- File name: snapper.sql
-- Purpose: An easy to use Oracle session-level performance measurement tool
-- which does NOT require any database changes nor creation of any
-- database objects!
--
-- This is very useful for ad-hoc performance diagnosis in environments
-- with restrictive change management processes, where creating
-- even temporary tables and PL/SQL packages is not allowed or would
-- take too much time to get approved.
--
-- All processing is done by a few sqlplus commands and an anonymous
-- PL/SQL block, all that's needed is SQLPLUS access (and if you want
-- to output data to server-side tracefile then execute rights on
-- DBMS_SYSTEM). Snapper only queries some V$ views (and in advanced
-- mode some X$ fixed tables, but it does not enable any traces nor
-- use oradebug.
--
-- The output is formatted the way it could be easily post-processed
-- by either Unix string manipulation tools or loaded to spreadsheet.
--
-- As of version 3.5, Snapper works on Oracle versions starting from
-- Oracle 9.2
--
-- Note1: The "ASH" functionality in Snapper just samples GV$SESSION view,
-- so you do NOT need Diagnostics Pack licenses to use Snapper's
-- "ASH" output
--
-- Note2: Snapper just reports performance metric deltas in a snapshot
-- and does not attempt to solve any performance problems for you.
-- You still need to interpret and understand these standard Oracle
-- metrics yourself
--
-- Author: Tanel Poder (tanel@tanelpoder.com)
-- Copyright: (c) Tanel Poder - http://blog.tanelpoder.com - All rights reserved.
--
-- Disclaimer: This script is provided "as is", so no warranties or guarantees are
-- made about its correctness, reliability and safety. Use it at your
-- own risk!
--
-- License: 1) You may use this script for your (or your businesses) purposes for free
-- 2) You may modify this script as you like for your own (or your businesses) purpose,
-- but you must always leave this script header (the entire comment section), including the
-- author, copyright and license sections as the first thing in the beginning of this file
-- 3) You may NOT publish or distribute this script or any variation of it PUBLICLY
-- (including, but not limited to uploading it to your public website or ftp server),
-- instead just link to its location in blog.tanelpoder.com
-- 4) You may distribute this script INTERNALLY in your company, for internal use only,
-- for example when building a standard DBA toolset to be deployed to all
-- servers or DBA workstations
--
--
-- Thanks to: Adrian Billington, Jamey Johnston, Marcus Mönnig, Hans-Peter Sloot
-- and Ronald Rood for bugfixes, additions and improvements
--
--------------------------------------------------------------------------------
--
-- The Session Snapper v4.06 BETA ( USE AT YOUR OWN RISK !!! )
-- (c) Tanel Poder ( http://blog.tanelpoder.com )
--
--
-- +-----=====O=== Welcome to The Session Snapper! (Yes, you are looking at a cheap ASCII
-- / imitation of a fish and a fishing rod.
-- | Nevertheless the PL/SQL code below the
-- | fish itself should be helpful for quick
-- | catching of relevant Oracle performance
-- | information.
-- | So I wish you happy... um... snapping?
-- | )
-- | ......
-- | iittii,,....
-- ¿ iiffffjjjjtttt,,
-- ..;;ttffLLLLffLLLLLLffjjtt;;..
-- ..ttLLGGGGGGLLffLLLLLLLLLLLLLLffjjii,, ..ii,,
-- ffGGffLLLLLLjjttjjjjjjjjffLLLLLLLLLLjjii.. ..iijj;;....
-- ffGGLLiittjjttttttiittttttttttffLLLLLLGGffii.. ;;LLLLii;;;;..
-- ffEEGGffiittiittttttttttiiiiiiiittjjjjffLLGGLLii.. iiLLLLLLttiiii,,
-- ;;ffDDLLiiiitt,,ttttttttttttiiiiiiiijjjjjjffLLLLffttiiiiffLLGGLLjjtttt;;..
-- ..ttttjjiitt,,iiiiiittttttttjjjjttttttttjjjjttttjjttttjjjjffLLDDGGLLttii..
-- iittiitttt, ;;iittttttttjjjjjjjjjjttjjjjjjffffffjjjjjjjjjjLLDDGGLLtt;;..
-- jjjjttttii:. ..iiiiffLLGGLLLLLLLLffffffLLLLLLLLLLLLLLLLffffffLLLLLLfftt,,
-- iittttii,,;;,,ttiiiiLLLLffffffjjffffLLLLLLLLffLLffjjttttttttttjjjjffjjii..
-- ,,iiiiiiiiiittttttiiiiiiiiiijjffffLLLLLLLLffLLffttttttii;;;;iiiitttttttt;;..
-- ..iittttttffffttttiiiiiiiiiittttffjjjjffffffffttiittii:: ....,,;;iittii;;
-- ..;;iittttttttttttttttiiiiiittttttttttjjjjjjtttttt;; ..;;ii;;..
-- ..;;;;iittttttjjttiittttttttttttttjjttttttttii.. ....
-- ....;;;;ttjjttttiiiiii;;;;;;iittttiiii..
-- ..;;ttttii;;.... ..;;;;....
-- ..iiii;;..
-- ..;;,,
-- ....
--
--
-- Usage:
--
-- snapper.sql <ash[1-3]|stats|all>[,out][,trace][,pagesize=X][,gather=[s][t][w][l][e][b][a]]> <seconds_in_snap> <snapshot_count> <sid(s)_to_snap>
--
-- ash - sample session activity ASH style, waits and SQL_IDs from gv$session and
-- print a TOP SQL/wait report from these samples (this is the default from
-- Snapper 3.0). The columns chosen for TOP calculation are defined in CONFIG
-- section below.
--
-- ash=sql_id+event+wait_class
-- - the above example illustrates that you can also specify the gv$session
-- columns for TOP report yourself. The above example will show a TOP
-- activity report grouped by SQL_ID + EVENT + WAIT_CLASS
-- Note that the columns are separated by a "+" sign (as comma is a snapper
-- parameter separator, not ASH column separator)
--
-- ash1
-- ash2
-- ash3 - in addition to "ash" report you can have 3 more reported during the same
-- snapper sampling snapshot. Just include ash1=col1+col2,ash2=col3+col4,...
-- parameters if you want multiple TOP reports per Snapper snapshot
--
-- stats - sample gv$sesstat,gv$sess_time_model,gv$session_event performance counters
-- and report how much these stats increased (deltas) during Snapper run
-- all - report both ASH and stats sections
--
-- out - use dbms_output.put_line() for output. output will be seen only when
-- Snapper run completes due to dbms_output limitations. This is the default.
-- trace - write output to server process tracefile
-- (you must have execute permission on sys.dbms_system.ksdwrt() for that,
-- you can use both out and trace parameters together if you like )
--
-- pagesize - display header lines after X snapshots. if pagesize=0 don't display
-- any headers. pagesize=-1 will display a terse header only once
--
-- gather - if omitted, gathers s,t,w statistics (see below)
-- - if specified, then gather following:
--
-- Session-level stats:
-- s - Session Statistics from gv$sesstat
-- t - Session Time model info from gv$sess_time_model
-- w - Session Wait statistics from gv$session_event and gv$session_wait
--
-- Instance-level stats:
-- l - instance Latch get statistics ( gets + immediate_gets )
-- e - instance Enqueue lock get statistics
-- b - buffer get Where statistics -- useful in versions up to 10.2.x
-- a - All above
--
-- sinclude - if specified, then show only GV$SESSTAT stats which match the
-- LIKE pattern of sinclude (REGEXP_LIKE in 10g+)
-- linclude - if specified, then show only GV$LATCH latch stats which match the
-- LIKE pattern of linclude (REGEXP_LIKE in 10g+)
-- tinclude - if specified, then show only GV$SESS_TIME_MODEL stats which match the
-- LIKE pattern of tinclude (REGEXP_LIKE in 10g+)
-- winclude - if specified, then show only GV$SESSION_EVENT wait stats which match the
-- LIKE pattern of winclude (REGEXP_LIKE in 10g+)
--
-- you can combine above parameters in any order, separate them by commas
-- !!!don't use spaces as otherwise they are treated as next parameters by sqlplus !!!
-- !!!if you want to use spaces, enclose the whole sqlplus parameter in doublequotes !!!
--
-- <seconds_in_snap> - the number of seconds between taking snapshots
-- <snapshot_count> - the number of snapshots to take ( maximum value is power(2,31)-1 )
--
-- <sids_to_snap> can be either one sessionid, multiple sessionids separated by
-- commas or a SQL statement which returns a list of SIDs (if you need spaces
-- in that parameter text, enclose it in double quotes).
--
-- if you want to snap ALL sids, use "all" as value for
-- <sids_to_snap> parameter
--
-- alternatively you can used "select sid from gv$session" as value for <sids_to_snap>
-- parameter to capture all SIDs. you can write any query (with multiple and/or)
-- conditions to specify complex rules for capturing only the SIDs you want
--
-- starting from version 3.0 there are further session_id selection options available in
-- instead of sid you can write such expressions for snapper's <sids_to_snap> parameter:
--
-- sid=123 -- take sid 123 only (the same as just writing 123)
-- user=tanel -- take all sessions where username is 'tanel' (case insensitive)
-- -- this is the same as writing following subquery for the
-- -- <sids_to_snap> parameter:
-- select sid from gv$session where lower(username) like lower('tanel')
--
-- user=tanel% -- take all sessions where username begins with 'tanel%' (case insensitive)
-- -- the = means actually LIKE in SQL terms in this script
--
-- spid=1234 -- all these 3 parameters do the same thing:
-- ospid=1234 -- they look up the sessions(s) where the processes OS PID=1234
-- pid=1234 -- this is useful for quickly looking up what some OS process is doing
-- -- if it consumes too much of some resource
-- qc=123
-- qcsid=123 -- show query coordinator and all PX slave sessions
--
-- program=sqlplus% -- the following examples filter by corresponding gv$session columns
-- machine=linux01 -- machine
-- osuser=oracle -- os username
-- module=HR -- module
-- "action=Find Order" -- note the quotes because there is a space inside the parameter
-- -- value
-- client_id=tanelpoder -- show only sessions where client_identifier is set to tanelpoder
-- -- this is very useful in cases with (properly instrumented)
-- -- connection pools
--
--
-- Note that if you want to change some "advanced" snapper configuration parameters
-- or default values then search for CONFIG in this file to see configurable
-- variable section
--
--
-- Examples:
-- NB! Read the online examples, these are more detailed and list script output too!
--
-- http://tech.e2sn.com/oracle-scripts-and-tools/session-snapper
--
-- @snapper ash,stats 1 1 515
-- (Output one 1-second snapshot of session 515 using dbms_output and exit
-- Wait, gv$sesstat and gv$sess_time_model statistics are reported by default
-- Starting from V3 the ASH style session activity report is shown as well)
--
-- @snapper stats,gather=w 1 1 515
-- (Output one 1-second snapshot of session 515 using dbms_output and exit
-- only Wait event statistics are reported, no ASH)
--
-- @snapper ash,gather=st 1 1 515
-- (Output one 1-second snapshot of session 515 using dbms_output and exit
-- only gv$sesstat and gv$sess_Time_model statistics are gathered + ASH)
--
-- @snapper trace,ash,gather=stw,pagesize=0 10 90 117,210,313
-- (Write 90 10-second snapshots into tracefile for session IDs 117,210,313
-- all statistics are reported, do not print any headers)
--
-- @snapper trace,ash 900 999999999 "select sid from v$session"
-- (Take a snapshot of ALL sessions every 15 minutes and write the output to trace,
-- loop (almost) forever )
--
-- @snapper out,trace 300 12 "select sid from v$session where username='APPS'"
-- (Take 12 5-minute snapshots of all sessions belonging to APPS user, write
-- output to both dbms_output and tracefile)
--
-- Notes:
--
-- Snapper does not currently detect if a session with given SID has
-- ended and been recreated between snapshots, thus it may report bogus
-- statistics for such sessions. The check and warning for that will be
-- implemented in a future version.
--
--------------------------------------------------------------------------------
set termout off tab off verify off linesize 999 trimspool on trimout on null ""
--debug:
-- set termout on serveroutput on
-- Get parameters (future snapper v4.x extended syntax: @snapper <options> <"begin"|"end"|sleep#> <"snap_name"|snap_count> <sid>)
define snapper_options="&1"
define snapper_sleep="&2"
define snapper_count="&3"
define snapper_sid="&4"
-- The following code is required for making this script "dynamic" as due to
-- different Oracle versions, script parameters or granted privileges some
-- statements might not compile if not adjusted properly.
define _IF_ORA11_OR_HIGHER="--"
define _IF_LOWER_THAN_ORA11="--"
define _IF_DBMS_SYSTEM_ACCESSIBLE="/* dbms_system is not accessible"
-- /*dummy*/ -- this "dummy" is here just for avoiding VIM syntax highlighter going crazy due to previous line
define _IF_X_ACCESSIBLE="--"
-- plsql_object_id columns available in v$session (from 10.2.0.3)
define _YES_PLSQL_OBJ_ID="--"
define _NO_PLSQL_OBJ_ID=""
-- blocking_instance available in v$session (from 10.2)
define _YES_BLK_INST="--"
define _NO_BLK_INST=""
-- snapper v4 manual before/after snapshotting
define _MANUAL_SNAPSHOT="--"
define _USE_DBMS_LOCK=""
-- set the noprint's value to "noprint" if you don't want these temporary variables to show up in a sqlplus spool file
DEF noprint=""
col snapper_ora11higher &noprint new_value _IF_ORA11_OR_HIGHER
col snapper_ora11lower &noprint new_value _IF_LOWER_THAN_ORA11
col dbms_system_accessible &noprint new_value _IF_DBMS_SYSTEM_ACCESSIBLE
col x_accessible &noprint new_value _IF_X_ACCESSIBLE
col no_plsql_obj_id &noprint new_value _NO_PLSQL_OBJ_ID
col yes_plsql_obj_id &noprint new_value _YES_PLSQL_OBJ_ID
col no_blk_inst &noprint new_value _NO_BLK_INST
col yes_blk_inst &noprint new_value _YES_BLK_INST
col manual_snapshot &noprint new_value _MANUAL_SNAPSHOT
col use_dbms_lock &noprint new_value _USE_DBMS_LOCK
col snapper_sid &noprint new_value snapper_sid
-- sid_filter and inst_filter are the new RAC gv$ friendly way to filter sessions in Snapper v4
def sid_filter="/**/"
def inst_filter="/**/"
col sid_filter &noprint new_value sid_filter
col inst_filter &noprint new_value inst_filter
-- initialize, precompute and determine stuff
var v varchar2(100)
var x varchar2(10)
var sid_filter varchar2(4000)
var inst_filter varchar2(4000)
-- this is here for a reason
-- im extracting the first word of the snapper_sid (if its a complex expression, not just a single SID)
-- by relying on how DEF and & assignment treat spaces in strings
def ssid_begin=&snapper_sid
declare
o sys.dbms_describe.number_table;
p sys.dbms_describe.number_table;
l sys.dbms_describe.number_table;
a sys.dbms_describe.varchar2_table;
dty sys.dbms_describe.number_table;
def sys.dbms_describe.number_table;
inout sys.dbms_describe.number_table;
len sys.dbms_describe.number_table;
prec sys.dbms_describe.number_table;
scal sys.dbms_describe.number_table;
rad sys.dbms_describe.number_table;
spa sys.dbms_describe.number_table;
tmp number;
lv_sid_filter varchar2(4000);
lv_inst_filter varchar2(4000);
function get_filter(str in varchar2) return varchar2
is
ret varchar2(1000);
begin
if str like '%@%' then
--dbms_output.put_line('get_filter:1 str= '||str);
ret := lower(trim(regexp_replace(substr(str,instr(str,'=')+1), '^(.+)@([[:digit:]\*]+)(.*)', '\1')));
else
--dbms_output.put_line('get_filter:2 str= '||str);
ret := lower(trim(substr(str,instr(str,'=')+1)));
end if;
--dbms_output.put_line('get_filter = ' || ret);
return ret;
end get_filter;
begin
-- compute inst_filter
case
when regexp_instr('&ssid_begin','@') = 0 then
lv_inst_filter := '/* inst_filter */ s.inst_id=USERENV(''Instance'')';
when regexp_instr('&ssid_begin','@\*') > 0 or '&ssid_begin' like '(%' then
lv_inst_filter := '/* inst_filter */ 1=1';
when regexp_instr('&ssid_begin','@\d+') > 0 then
lv_inst_filter := 's.inst_id = ' || regexp_replace('&ssid_begin', '^(.+)@(\d+)(.*)', '\2');
else
lv_inst_filter := 's.inst_id=USERENV(''Instance'')';
--when regexp_instr('&ssid_begin','@\d+') > 0 then regexp_replace(snapper_sid, '^(.+)@\d+', '\1') || ' AND inst_id = ' || regexp_replace(snapper_sid, '^(.+)@(\d+)(.*)', '\2')
end case;
-- compute sid_filter
case
when trim(lower('&ssid_begin')) like 'sid=%' then lv_sid_filter := 's.sid in ('||get_filter('&ssid_begin')||')'; --||trim(replace('&ssid_begin','sid=',''))||')';
when trim(lower('&ssid_begin')) like 'user=%' then lv_sid_filter := 'lower(username) like ''' ||get_filter('&ssid_begin')||'''';
when trim(lower('&ssid_begin')) like 'username=%' then lv_sid_filter := 'lower(username) like ''' ||get_filter('&ssid_begin')||'''';
when trim(lower('&ssid_begin')) like 'machine=%' then lv_sid_filter := 'lower(machine) like ''' ||get_filter('&ssid_begin')||'''';
when trim(lower('&ssid_begin')) like 'program=%' then lv_sid_filter := 'lower(program) like ''' ||get_filter('&ssid_begin')||'''';
when trim(lower('&ssid_begin')) like 'service=%' then lv_sid_filter := 'lower(service_name) like ''' ||get_filter('&ssid_begin')||'''';
when trim(lower('&ssid_begin')) like 'module=%' then lv_sid_filter := 'lower(module) like ''' ||get_filter('&ssid_begin')||'''';
when trim(lower('&ssid_begin')) like 'action=%' then lv_sid_filter := 'lower(action) like ''' ||get_filter('&ssid_begin')||'''';
when trim(lower('&ssid_begin')) like 'osuser=%' then lv_sid_filter := 'lower(osuser) like ''' ||get_filter('&ssid_begin')||'''';
when trim(lower('&ssid_begin')) like 'client_id=%' then lv_sid_filter := 'lower(client_identifier) like '''||get_filter('&ssid_begin')||'''';
when trim(lower('&ssid_begin')) like 'spid=%' then lv_sid_filter := '(s.inst_id,s.paddr) in (select /*+ UNNEST */ inst_id,addr from gv$process where spid in ('||get_filter('&ssid_begin')||'))';
when trim(lower('&ssid_begin')) like 'ospid=%' then lv_sid_filter := '(s.inst_id,s.paddr) in (select /*+ UNNEST */ inst_id,addr from gv$process where spid in ('||get_filter('&ssid_begin')||'))';
when trim(lower('&ssid_begin')) like 'pid=%' then lv_sid_filter := '(s.inst_id,s.paddr) in (select /*+ NO_UNNEST */ inst_id,addr from gv$process where spid in ('||get_filter('&ssid_begin')||'))';
when trim(lower('&ssid_begin')) like 'qcsid=%' then lv_sid_filter := '(s.inst_id,s.sid) in (select /*+ NO_UNNEST */ inst_id,sid from gv$px_session where qcsid in ('||get_filter('&ssid_begin')||'))';
when trim(lower('&ssid_begin')) like 'qc=%' then lv_sid_filter := '(s.inst_id,s.sid) in (select /*+ NO_UNNEST */ inst_id,sid from gv$px_session where qcsid in ('||get_filter('&ssid_begin')||'))';
when trim(lower('&ssid_begin')) like 'all%' then lv_sid_filter := '1=1';
when trim(lower('&ssid_begin')) like 'bg%' then lv_sid_filter := 'type=''BACKGROUND''';
when trim(lower('&ssid_begin')) like 'fg%' then lv_sid_filter := 'type=''USER''';
when trim(lower('&ssid_begin')) like 'smon%' then lv_sid_filter := 'program like ''%(SMON)%''';
when trim(lower('&ssid_begin')) like 'pmon%' then lv_sid_filter := 'program like ''%(PMON)%''';
when trim(lower('&ssid_begin')) like 'ckpt%' then lv_sid_filter := 'program like ''%(CKPT)%''';
when trim(lower('&ssid_begin')) like 'lgwr%' then lv_sid_filter := 'program like ''%(LGWR)%''';
when trim(lower('&ssid_begin')) like 'dbwr%' then lv_sid_filter := 'program like ''%(DBW%)%''';
when trim(lower('&ssid_begin')) like 'select%' then /*lv_inst_filter := '/* inst_filter2 1=1'; */ lv_sid_filter := q'{(s.inst_id,s.sid) in (&snapper_sid)}';
--when trim(lower('&ssid_begin')) like 'select%' then lv_sid_filter := '(s.inst_id,s.sid) in ('||regexp_replace(replace(q'{&snapper_sid}','',''''''), '^select ', 'select /*+ unnest */ ', 1, 1, 'i')||')'; -- '1=1'; lv_inst_filter := '1=1';
--when trim(lower('&ssid_begin')) like 'with%' then lv_sid_filter := '(s.inst_id,s.sid) in (&snapper_sid)'; -- '1=1'; lv_inst_filter := '1=1';
when trim(lower('&ssid_begin')) like '(%' then lv_inst_filter := '/* inst_filter2 */ 1=1'; lv_sid_filter := q'{(s.inst_id,s.sid) in (&snapper_sid)}'; -- '1=1'; lv_inst_filter := '1=1';
else lv_sid_filter := '/* sid_filter_else_cond */ s.sid in ('||get_filter('&ssid_begin')||')'; --lv_sid_filter := '/* sid_filter_else_cond */ s.sid in (&ssid_begin)';
end case;
:inst_filter := lv_inst_filter;
:sid_filter := lv_inst_filter||' and '||lv_sid_filter;
-- this block determines whether dbms_system.ksdwrt is accessible to us
-- dbms_describe is required as all_procedures/all_objects may show this object
-- even if its not executable by us (thanks to o7_dictionary_accessibility=false)
begin
execute immediate 'select count(*) from x$kcbwh where rownum = 1' into tmp;
:x:= ' '; -- x$ tables are accessible, so dont comment any lines out
exception
when others then null;
end;
sys.dbms_describe.describe_procedure(
'DBMS_SYSTEM.KSDWRT', null, null,
o, p, l, a, dty, def, inout, len, prec, scal, rad, spa
);
-- we never get to following statement if dbms_system is not accessible
-- as sys.dbms_describe will raise an exception
:v:= '-- dbms_system is accessible';
exception
when others then null;
end;
/
-- this is here for a reason
-- im extracting the first word of the snapper_sid (if its a complex expression, not just a single SID)
-- by relying on how DEF and & assignment treat spaces in strings
-- def ssid_begin=&snapper_sid
-- select * from (
-- select
-- snapper_sid
-- --case
-- -- when regexp_instr('&ssid_begin','@') = 0 then snapper_sid || ' AND inst_id = sys_context(''userenv'', ''instance'')'
-- -- when regexp_instr('&ssid_begin','@\*') > 0 then regexp_replace(snapper_sid, '^(.+)@\*', '\1') -- all instances
-- -- when regexp_instr('&ssid_begin','@\d+') > 0 then regexp_replace(snapper_sid, '^(.+)@\d+', '\1') || ' AND inst_id = ' || regexp_replace(snapper_sid, '^(.+)@(\d+)(.*)', '\2')
-- -- else snapper_sid
-- --end snapper_sid
-- from (
-- select
-- case
-- -- when trim(lower('&ssid_begin')) like 'sid=%' then trim(replace('&ssid_begin','sid=',''))
-- when trim(lower('&ssid_begin')) like 'sid=%' then 'select inst_id,sid from gv$session where sid in ('||trim(replace('&ssid_begin','sid=',''))||')'
-- when trim(lower('&ssid_begin')) like 'user=%' then 'select inst_id,sid from gv$session where lower(username) like '''||lower(trim(replace('&ssid_begin','user=','')))||''''
-- when trim(lower('&ssid_begin')) like 'username=%' then 'select inst_id,sid from gv$session where lower(username) like '''||lower(trim(replace('&ssid_begin','username=','')))||''''
-- when trim(lower('&ssid_begin')) like 'machine=%' then 'select inst_id,sid from gv$session where lower(machine) like '''||lower(trim(replace('&ssid_begin','machine=','')))||''''
-- when trim(lower('&ssid_begin')) like 'program=%' then 'select inst_id,sid from gv$session where lower(program) like '''||lower(trim(replace('&ssid_begin','program=','')))||''''
-- when trim(lower('&ssid_begin')) like 'service=%' then 'select inst_id,sid from gv$session where lower(service_name) like '''||lower(trim(replace('&ssid_begin','service=','')))||''''
-- when trim(lower('&ssid_begin')) like 'module=%' then 'select inst_id,sid from gv$session where lower(module) like '''||lower(trim(replace('&ssid_begin','module=','')))||''''
-- when trim(lower('&ssid_begin')) like 'action=%' then 'select inst_id,sid from gv$session where lower(action) like '''||lower(trim(replace('&ssid_begin','action=','')))||''''
-- when trim(lower('&ssid_begin')) like 'osuser=%' then 'select inst_id,sid from gv$session where lower(osuser) like '''||lower(trim(replace('&ssid_begin','osuser=','')))||''''
-- when trim(lower('&ssid_begin')) like 'client_id=%' then 'select inst_id,sid from gv$session where lower(client_identifier) like '''||lower(trim(replace('&ssid_begin','client_id=','')))||''''
-- when trim(lower('&ssid_begin')) like 'spid=%' then 'select inst_id,sid from gv$session where paddr in (select addr from gv$process where spid in ('||lower(trim(replace('&ssid_begin','spid=','')))||'))'
-- when trim(lower('&ssid_begin')) like 'ospid=%' then 'select inst_id,sid from gv$session where paddr in (select addr from gv$process where spid in ('||lower(trim(replace('&ssid_begin','ospid=','')))||'))'
-- when trim(lower('&ssid_begin')) like 'pid=%' then 'select inst_id,sid from gv$session where paddr in (select addr from gv$process where spid in ('||lower(trim(replace('&ssid_begin','pid=','')))||'))'
-- when trim(lower('&ssid_begin')) like 'qcsid=%' then 'select inst_id,sid from gv$px_session where qcsid in ('||lower(trim(replace('&ssid_begin','qcsid=','')))||')' -- TODO use pxs
-- when trim(lower('&ssid_begin')) like 'qc=%' then 'select inst_id,sid from gv$px_session where qcsid in ('||lower(trim(replace('&ssid_begin','qc=','')))||')' -- TODO use pxs
-- when trim(lower('&ssid_begin')) = 'all' then 'select inst_id,sid from gv$session'
-- when trim(lower('&ssid_begin')) = 'bg' then 'select inst_id,sid from gv$session where type=''BACKGROUND'''
-- when trim(lower('&ssid_begin')) = 'fg' then 'select inst_id,sid from gv$session where type=''USER'''
-- when trim(lower('&ssid_begin')) = 'smon' then 'select inst_id,sid from gv$session where program like ''%(SMON)%'''
-- when trim(lower('&ssid_begin')) = 'pmon' then 'select inst_id,sid from gv$session where program like ''%(PMON)%'''
-- when trim(lower('&ssid_begin')) = 'ckpt' then 'select inst_id,sid from gv$session where program like ''%(CKPT)%'''
-- when trim(lower('&ssid_begin')) = 'lgwr' then 'select inst_id,sid from gv$session where program like ''%(LGWR)%'''
-- when trim(lower('&ssid_begin')) = 'dbwr' then 'select inst_id,sid from gv$session where program like ''%(DBW%)%'''
-- when trim(lower('&ssid_begin')) like 'select%' then null
-- when trim(lower('&ssid_begin')) like 'with%' then null
-- --else 'select inst_id,sid from gv$session where sid in (&ssid_begin)'
-- else null
-- end snapper_sid -- put the result back into the snapper_sid sqlplus value (if it's not null)
-- from
-- dual
-- )
-- )
-- where
-- snapper_sid is not null -- the snapper_sid sqlplus variable value will not be replaced if this query doesn't return any rows
-- /
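-- illustrative examples of what the disabled CASE above would resolve &ssid_begin into (values hypothetical):
--   'user=scott%' -> select inst_id,sid from gv$session where lower(username) like 'scott%'
--   'spid=12345'  -> select inst_id,sid from gv$session where paddr in (select addr from gv$process where spid in (12345))
--   'qcsid=123'   -> select inst_id,sid from gv$px_session where qcsid in (123)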
prompt snapper_sid = &snapper_sid
-- this query populates some sqlplus variables required for the dynamic compilation used below
with mod_banner as (
select
replace(banner,'9.','09.') banner
from
v$version
where rownum = 1
)
select
decode(substr(banner, instr(banner, 'Release ')+8,2), '09', '--', '') snapper_ora10lower,
decode(substr(banner, instr(banner, 'Release ')+8,2), '09', '', '--') snapper_ora9,
decode(substr(banner, instr(banner, 'Release ')+8,1), '1', '', '--') snapper_ora10higher,
case when substr(banner, instr(banner, 'Release ')+8,2) >= '11' then '' else '--' end snapper_ora11higher,
case when substr(banner, instr(banner, 'Release ')+8,2) < '11' then '' else '--' end snapper_ora11lower,
nvl(:v, '/* dbms_system is not accessible') dbms_system_accessible,
nvl(:x, '--') x_accessible,
case when substr( banner, instr(banner, 'Release ')+8, instr(substr(banner,instr(banner,'Release ')+8),' ') ) >= '10.2' then '' else '--' end yes_blk_inst,
case when substr( banner, instr(banner, 'Release ')+8, instr(substr(banner,instr(banner,'Release ')+8),' ') ) >= '10.2' then '--' else '' end no_blk_inst,
case when substr( banner, instr(banner, 'Release ')+8, instr(substr(banner,instr(banner,'Release ')+8),' ') ) >= '10.2.0.3' then '' else '--' end yes_plsql_obj_id,
case when substr( banner, instr(banner, 'Release ')+8, instr(substr(banner,instr(banner,'Release ')+8),' ') ) >= '10.2.0.3' then '--' else '' end no_plsql_obj_id,
case when lower('&snapper_options') like '%,begin%' or lower('&snapper_options') like 'begin%' or lower('&snapper_options') like '%,end%' or lower('&snapper_options') like 'end%' then '' else '--' end manual_snapshot,
case when lower('&snapper_options') like '%,begin%' or lower('&snapper_options') like 'begin%' or lower('&snapper_options') like '%,end%' or lower('&snapper_options') like 'end%' then '--' else '' end use_dbms_lock,
:sid_filter sid_filter,
:inst_filter inst_filter
from
mod_banner
/
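-- how the query above is used: each selected expression lands in a sqlplus substitution
-- variable holding either '' (the guarded lines stay active) or '--' (they become comments).
-- e.g. on a banner ending in "Release 11.2.0.4.0":
--   substr(banner, instr(banner,'Release ')+8, 2) = '11'
--   => snapper_ora11higher = '' and snapper_ora9 = '--'
-- the mod_banner replace('9.','09.') pads 9i version strings so string comparisons like
-- '09.2.0.8' < '10.2' still order correctly.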
-- kept on separate lines as sql developer might not like these commands combined
set termout on
set serveroutput on size unlimited format wrapped
prompt Sampling SID &4 with interval &snapper_sleep seconds, taking &snapper_count snapshots...
-- main()
-- let the Snapping start!!!
declare
-- Snapper start
-- forward declarations
procedure output(p_txt in varchar2);
procedure fout;
function tptformat( p_num in number,
p_stype in varchar2 default 'STAT',
p_precision in number default 2,
p_base in number default 10,
p_grouplen in number default 3
)
return varchar2;
function getopt( p_parvalues in varchar2,
p_extract in varchar2,
p_delim in varchar2 default ','
)
return varchar2;
-- type, constant, variable declarations
-- trick for holding 32bit UNSIGNED event and stat_ids in 32bit SIGNED PLS_INTEGER
pls_adjust constant number(10,0) := power(2,31) - 1;
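-- e.g. an unsigned 32-bit stat_id of 3000000000 exceeds the pls_integer maximum of
-- 2147483647 (= 2^31 - 1 = pls_adjust), but 3000000000 - pls_adjust = 852516353 fits;
-- the same shift is applied wherever statistic# values are stored or looked up below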
type srec is record (ts timestamp, stype varchar2(4), inst_id number, sid number, statistic# number, value number, event_count number );
type stab is table of srec index by pls_integer;
type ltab is table of srec index by varchar2(100); -- lookup tab for various average calculation
s1 stab;
s2 stab;
l1 ltab;
l2 ltab;
type snrec is record (stype varchar2(4), statistic# number, name varchar2(100));
type sntab is table of snrec index by pls_integer;
sn_tmp sntab;
sn sntab;
type sntab_reverse is table of snrec index by varchar2(100); -- used for looking up stat id from stat name
sn_reverse sntab_reverse;
tmp_varchar2 varchar2(1000); -- misc
function get_useful_average(c in srec /* curr_metric */, p in srec /* all_prev_metrics */) return varchar2;
type tmp_sestab is table of gv$session%rowtype index by pls_integer;
type sestab is table of gv$session%rowtype index by varchar2(20);
g_sessions sestab;
g_empty_sessions sestab;
type hc_tab is table of number index by pls_integer; -- index is sql hash value
type ses_hash_tab is table of hc_tab index by pls_integer; -- index is SID
g_ses_hash_tab ses_hash_tab;
g_empty_ses_hash_tab ses_hash_tab;
-- dbms_debug_vc2coll is a built-in collection present in every oracle db
g_ash sys.dbms_debug_vc2coll := new sys.dbms_debug_vc2coll();
g_empty_ash sys.dbms_debug_vc2coll := new sys.dbms_debug_vc2coll();
g_snap1 sys.dbms_debug_vc2coll;
g_snap2 sys.dbms_debug_vc2coll;
g_ash_samples_taken number := 0;
g_count_statname number;
g_count_eventname number;
g_mysid number;
i number;
a number;
b number;
c number;
delta number;
evcnt number;
changed_values number;
pagesize number:=99999999999999;
missing_values_s1 number := 0;
missing_values_s2 number := 0;
disappeared_sid number := 0;
lv_curr_sid number := 0; -- used for determining whether to print an empty line between session stats
d1 timestamp(6);
d2 timestamp(6);
ash_date1 date;
ash_date2 date;
lv_gather varchar2(1000);
gv_header_string varchar2(1000);
lv_data_string varchar2(1000);
lv_ash varchar2(1000);
lv_stats varchar2(1000);
gather_stats number := 0;
gather_ash number := 0;
g_snap_begin varchar2(1000);
g_snap_end varchar2(1000);
-- CONFIGURABLE STUFF --
-- this sets the default ash sample TOP reporting group by columns
g_ash_columns varchar2(1000) := 'inst_id + sql_id + sql_child_number + event + wait_class';
g_ash_columns1 varchar2(1000) := 'inst_id + event + p1 + wait_class';
g_ash_columns2 varchar2(1000) := 'inst_id + sid + user + machine + program';
g_ash_columns3 varchar2(1000) := 'inst_id + plsql_object_id + plsql_subprogram_id + sql_id';
-- output column configuration
output_header number := 0; -- 1=true 0=false
output_username number := 1; -- v$session.username
output_inst number := 0; -- inst
output_sid number := CASE WHEN dbms_utility.is_cluster_database = TRUE THEN 0 ELSE 1 END; -- just sid
output_inst_sid number := CASE WHEN dbms_utility.is_cluster_database = TRUE THEN 1 ELSE 0 END; -- inst_id and sid together
output_time number := 0; -- time of snapshot start
output_seconds number := 0; -- seconds in snapshot (shown in footer of each snapshot too)
output_stype number := 1; -- statistic type (WAIT,STAT,TIME,ENQG,LATG,...)
output_sname number := 1; -- statistic name
output_delta number := 1; -- raw delta
output_delta_s number := 0; -- raw delta normalized to per second
output_hdelta number := 0; -- human readable delta
output_hdelta_s number := 1; -- human readable delta normalized to per second
output_percent number := 1; -- percent of total time/samples
output_eventcnt number := 1; -- wait event count
output_eventcnt_s number := 1; -- wait event count normalized to per second
output_eventavg number := 1; -- average wait duration
output_pcthist number := 1; -- percent of total visual bar (histogram) (histograms seem to work for me on 9.2.0.7+ - JBJ2)
-- column widths in ASH report output
w_inst_id number := 4;
w_sid number := 6;
w_username number := 20;
w_machine number := 20;
w_terminal number := 20;
w_program number := 25;
w_event number := 35;
w_wait_class number := 15;
w_state number := 8;
w_p1 number := 20;
w_p2 number := 20;
w_p3 number := 20;
w_row_wait_obj# number := 10;
w_row_wait_file# number := 6;
w_row_wait_block# number := 10;
w_row_wait_row# number := 6;
w_blocking_session_status number := 15;
w_blocking_instance number := 12;
w_blocking_session number := 12;
w_sql_hash_value number := 12;
w_sql_id number := 15;
w_sql_child_number number := 9;
w_plsql_entry_object_id number := 10;
w_plsql_entry_subprogram_id number := 10;
w_plsql_object_id number := 10;
w_plsql_subprogram_id number := 10;
w_module number := 25;
w_action number := 25;
w_client_identifier number := 25;
w_service_name number := 25;
w_activity_pct number := 7;
-- END CONFIGURABLE STUFF --
-- constants for ash collection extraction from the vc2 collection
s_inst_id constant number := 1 ;
s_sid constant number := 2 ;
s_username constant number := 3 ;
s_machine constant number := 4 ;
s_terminal constant number := 5 ;
s_program constant number := 6 ;
s_event constant number := 7 ;
s_wait_class constant number := 8 ;
s_state constant number := 9 ;
s_p1 constant number := 10 ;
s_p2 constant number := 11 ;
s_p3 constant number := 12 ;
s_row_wait_obj# constant number := 13 ;
s_row_wait_file# constant number := 14 ;
s_row_wait_block# constant number := 15 ;
s_row_wait_row# constant number := 16 ;
s_blocking_session_status constant number := 17 ;
s_blocking_instance constant number := 18 ;
s_blocking_session constant number := 19 ;
s_sql_hash_value constant number := 20 ;
s_sql_id constant number := 21 ;
s_sql_child_number constant number := 22 ;
s_plsql_entry_object_id constant number := 23 ;
s_plsql_entry_subprogram_id constant number := 24 ;
s_plsql_object_id constant number := 25 ;
s_plsql_subprogram_id constant number := 26 ;
s_module constant number := 27 ;
s_action constant number := 28 ;
s_client_identifier constant number := 29 ;
s_service_name constant number := 30 ;
-- constants for ash collection reporting, which columns to show in report
c_inst_id constant number := power(2, s_inst_id );
c_sid constant number := power(2, s_sid );
c_username constant number := power(2, s_username );
c_machine constant number := power(2, s_machine );
c_terminal constant number := power(2, s_terminal );
c_program constant number := power(2, s_program );
c_event constant number := power(2, s_event );
c_wait_class constant number := power(2, s_wait_class );
c_state constant number := power(2, s_state );
c_p1 constant number := power(2, s_p1 );
c_p2 constant number := power(2, s_p2 );
c_p3 constant number := power(2, s_p3 );
c_row_wait_obj# constant number := power(2, s_row_wait_obj# );
c_row_wait_file# constant number := power(2, s_row_wait_file# );
c_row_wait_block# constant number := power(2, s_row_wait_block# );
c_row_wait_row# constant number := power(2, s_row_wait_row# );
c_blocking_session_status constant number := power(2, s_blocking_session_status );
c_blocking_instance constant number := power(2, s_blocking_instance );
c_blocking_session constant number := power(2, s_blocking_session );
c_sql_hash_value constant number := power(2, s_sql_hash_value );
c_sql_id constant number := power(2, s_sql_id );
c_sql_child_number constant number := power(2, s_sql_child_number );
c_plsql_entry_object_id constant number := power(2, s_plsql_entry_object_id );
c_plsql_entry_subprogram_id constant number := power(2, s_plsql_entry_subprogram_id);
c_plsql_object_id constant number := power(2, s_plsql_object_id );
c_plsql_subprogram_id constant number := power(2, s_plsql_subprogram_id );
c_module constant number := power(2, s_module );
c_action constant number := power(2, s_action );
c_client_identifier constant number := power(2, s_client_identifier );
c_service_name constant number := power(2, s_service_name );
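-- the s_* constants above are field positions within an ash record and the c_* constants
-- are their bit values (c_x = 2^s_x), so a whole set of report columns fits in one number.
-- e.g. an 'inst_id + sql_id' report builds the mask
--   l_ash_grouping := c_inst_id + c_sql_id   -- = 2^1 + 2^21
-- and out_ash() below keeps a column only when bitand(l_ash_grouping, c_x) is non-zero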
/*---------------------------------------------------
-- proc for outputting data to trace or dbms_output
---------------------------------------------------*/
procedure output(p_txt in varchar2) is
begin
if (getopt('&snapper_options', 'out') is not null)
or
(getopt('&snapper_options', 'out') is null and getopt('&snapper_options', 'trace') is null)
then
dbms_output.put_line(p_txt);
end if;
-- The block below is a sqlplus trick for conditionally commenting out PL/SQL code
&_IF_DBMS_SYSTEM_ACCESSIBLE
if getopt('&snapper_options', 'trace') is not null then
sys.dbms_system.ksdwrt(1, p_txt);
sys.dbms_system.ksdfls;
end if;
-- */
end; -- output
/*---------------------------------------------------
-- function for converting interval datatype to seconds
---------------------------------------------------*/
function get_seconds(i interval day to second) return number
as
begin
return to_number(extract(second from i)) +
to_number(extract(minute from i)) * 60 +
to_number(extract(hour from i)) * 60 * 60 +
to_number(extract(day from i)) * 60 * 60 * 24;
end get_seconds;
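-- e.g. get_seconds( TIMESTAMP '2024-01-01 10:01:30.25' - TIMESTAMP '2024-01-01 10:00:00' )
-- returns 90.25; fout() below divides deltas by this to get per-second rates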
/*---------------------------------------------------
-- proc for outputting data, utilizing global vars
---------------------------------------------------*/
procedure fout is
l_output_username VARCHAR2(100);
gsid varchar2(20);
begin
--if s2(b).stype='WAIT' then output( 'DEBUG WAIT ' || sn(s2(b).statistic#).name || ' ' || delta ); end if;
--output( 'DEBUG, Entering fout(), b='||to_char(b)||' sn(s2(b).statistic#='||s2(b).statistic# );
--output( 'DEBUG, In fout(), a='||to_char(a)||' b='||to_char(b)||' s1.count='||s1.count||' s2.count='||s2.count);
gsid := trim(to_char(s2(b).inst_id))||','||trim(to_char(s2(b).sid));
if output_username = 1 then
begin
l_output_username := nvl( g_sessions(gsid).username, substr(g_sessions(gsid).program, instr(g_sessions(gsid).program,'(')) );
exception
when no_data_found then l_output_username := 'error';
when others then raise;
end;
end if;
-- DEBUG
--output('before');
--output (CASE WHEN output_eventavg = 1 THEN CASE WHEN s2(b).stype IN ('WAIT') THEN lpad(tptformat(delta / CASE WHEN evcnt = 0 THEN 1 ELSE evcnt END, s2(b).stype), 10, ' ')||' average wait' ELSE get_useful_average(s2(b), s1(a)) END END);
--output('after');
output( CASE WHEN output_header = 1 THEN 'SID= ' END
|| CASE WHEN output_inst = 1 THEN to_char(s2(b).inst_id, '9999')||', ' END
|| CASE WHEN output_sid = 1 THEN to_char(s2(b).sid,'999999')||', ' END
|| CASE WHEN output_inst_sid = 1 THEN to_char(s2(b).sid,'99999')||' '||lpad('@'||trim(to_char(s2(b).inst_id, '99')),3)||', ' END
|| CASE WHEN output_username = 1 THEN rpad(CASE s2(b).sid WHEN -1 THEN ' ' ELSE NVL(l_output_username, ' ') END, 10)||', ' END
|| CASE WHEN output_time = 1 THEN to_char(d1, 'YYYYMMDD HH24:MI:SS')||', ' END
|| CASE WHEN output_seconds = 1 THEN to_char(case get_seconds(d2-d1) when 0 then &snapper_sleep else get_seconds(d2-d1) end, '9999999')||', ' END
|| CASE WHEN output_stype = 1 THEN s2(b).stype||', ' END
|| CASE WHEN output_sname = 1 THEN rpad(sn(s2(b).statistic#).name, 58, ' ')||', ' END
|| CASE WHEN output_delta = 1 THEN to_char(delta, '999999999999')||', ' END
|| CASE WHEN output_delta_s = 1 THEN to_char(delta/(case get_seconds(d2-d1) when 0 then &snapper_sleep else get_seconds(d2-d1) end),'999999999')||', ' END
|| CASE WHEN output_hdelta = 1 THEN lpad(tptformat(delta, s2(b).stype), 10, ' ')||', ' END
|| CASE WHEN output_hdelta_s = 1 THEN lpad(tptformat(delta/(case get_seconds(d2-d1) when 0 then &snapper_sleep else get_seconds(d2-d1) end ), s2(b).stype), 10, ' ')||', ' END
|| CASE WHEN output_percent = 1 THEN CASE WHEN s2(b).stype IN ('TIME','WAIT') THEN to_char(delta/CASE get_seconds(d2-d1) WHEN 0 THEN &snapper_sleep ELSE get_seconds(d2-d1) END / 10000, '9999.9')||'%' ELSE ' ' END END||', '
|| CASE WHEN output_pcthist = 1 THEN CASE WHEN s2(b).stype IN ('TIME','WAIT') THEN rpad(rpad('[', ceil(round(delta/CASE get_seconds(d2-d1) WHEN 0 THEN &snapper_sleep ELSE get_seconds(d2-d1) END / 100000,1))+1, CASE WHEN s2(b).stype IN ('WAIT') THEN 'W' WHEN sn(s2(b).statistic#).name = 'DB CPU' THEN '@' ELSE '#' END),11,' ')||']' ELSE ' ' END END||', '
|| CASE WHEN output_eventcnt = 1 THEN CASE WHEN s2(b).stype IN ('WAIT') THEN to_char(evcnt, '99999999') ELSE ' ' END END||', '
|| CASE WHEN output_eventcnt_s = 1 THEN CASE WHEN s2(b).stype IN ('WAIT') THEN lpad(tptformat((evcnt / case get_seconds(d2-d1) when 0 then &snapper_sleep else get_seconds(d2-d1) end ), 'STAT' ), 10, ' ') ELSE ' ' END END||', '
|| CASE WHEN output_eventavg = 1 THEN CASE WHEN s2(b).stype IN ('WAIT') THEN lpad(tptformat(delta / CASE WHEN evcnt = 0 THEN 1 ELSE evcnt END, s2(b).stype), 10, ' ')||' average wait' ELSE get_useful_average(s2(b), s1(a)) END END
);
end;
/*---------------------------------------------------
-- lookup stat delta helper calculator (l2.value - l1.value)
---------------------------------------------------*/
function get_delta(metric_id in varchar2) return number
is
rec1 srec;
rec2 srec;
val1 number;
val2 number;
d number;
begin
begin
val1 := l1(metric_id).value;
exception
when no_data_found then val1 := 0;
end;
begin
val2 := l2(metric_id).value;
exception
when no_data_found then val2 := 0;
end;
d := val2 - NVL(val1, 0);
return d;
end get_delta;
/*---------------------------------------------------
-- delta helper function for convenience - it allows specifying any metric delta; if none is specified, the current metric's own delta is returned
---------------------------------------------------*/
function gd(c in srec, metric_type in varchar2 DEFAULT NULL, metric_name in varchar2 DEFAULT NULL) return number
is
str varchar2(1000);
tmp_delta number;
begin
if metric_type || metric_name is null then
str := c.stype||','||trim(to_char(c.inst_id))||','||trim(to_char(c.sid))||','||trim(to_char(c.statistic#,'999999999999999999999999'));
else
begin
str := trim(metric_type)||','||trim(to_char(c.inst_id))||','||trim(to_char(c.sid))||','||trim(to_char(sn_reverse(metric_type||','||metric_name).statistic#));
exception
when no_data_found then return 0;
end;
end if;
tmp_delta := get_delta(str);
--output('tmp_delta '||c.stype||' '||tmp_delta);
return tmp_delta;
-- return get_delta(str);
end;
/*---------------------------------------------------
-- function for calculating useful averages and ratios between metrics
---------------------------------------------------*/
function get_useful_average(c in srec /* curr_metric */, p in srec /* all_prev_metrics */) return varchar2
is
ret varchar2(1000);
mt varchar2(100) := c.stype; -- metric_type
mn varchar2(100) := sn(c.statistic#).name; -- metric_name
begin
case
when mt = 'STAT' then
case
when mn = 'bytes sent via SQL*Net to client' then ret := lpad( tptformat(gd(c) / nullif(gd(c, 'STAT', 'SQL*Net roundtrips to/from client'),0), mt), 10) || ' bytes per roundtrip' ;
when mn = 'bytes received via SQL*Net from client' then ret := lpad( tptformat(gd(c) / nullif(gd(c, 'STAT', 'SQL*Net roundtrips to/from client'),0), mt), 10) || ' bytes per roundtrip' ;
when mn = 'redo size' then ret := lpad( tptformat(gd(c) / nullif(gd(c, 'STAT', 'user commits' ),0), mt), 10) || ' bytes per user commit';
when mn = 'execute count' then ret := lpad( tptformat(gd(c) / nullif(gd(c, 'STAT', 'parse count (total)' ),0), mt), 10) || ' executions per parse';
when mn = 'parse count (total)' then ret := lpad( tptformat(gd(c) / nullif(gd(c, 'STAT', 'parse count (hard)' ),0), mt), 10) || ' softparses per hardparse';
when mn = 'session cursor cache hits' then ret := lpad( tptformat(gd(c) - (gd(c, 'STAT', 'parse count (total)' ) ), mt), 10) || ' softparses avoided thanks to cursor cache';
when mn = 'buffer is pinned count' then ret := lpad( tptformat(gd(c) / nullif(gd(c) + gd(c, 'STAT', 'session logical reads'),0) * 100, mt), 10) || ' % buffer gets avoided thanks to buffer pin caching';
else ret := lpad( tptformat(gd(c) / nullif(gd(c, 'STAT', 'execute count'),0), mt), 10) || ' per execution' ;
end case; -- mt=stat, mn
when mt = 'TIME' then
-- this is ugly and wrong at the moment - will refactor some day
case
when mn = 'DB time' then ret := lpad(tptformat(get_seconds(d2 - d1)*1000000 - gd(c) - nullif(gd(c, 'TIME', 'DB CPU')
- gd(c, 'WAIT', 'pmon timer')
- gd(c, 'WAIT', 'VKTM Logical Idle Wait')
- gd(c, 'WAIT', 'VKTM Init Wait for GSGA')
- gd(c, 'WAIT', 'IORM Scheduler Slave Idle Wait')
- gd(c, 'WAIT', 'rdbms ipc message')
- gd(c, 'WAIT', 'i/o slave wait')
- gd(c, 'WAIT', 'VKRM Idle')
- gd(c, 'WAIT', 'wait for unread message on broadcast channel')
- gd(c, 'WAIT', 'wait for unread message on multiple broadcast channels')
- gd(c, 'WAIT', 'class slave wait')
- gd(c, 'WAIT', 'KSV master wait')
- gd(c, 'WAIT', 'PING')
- gd(c, 'WAIT', 'watchdog main loop')
- gd(c, 'WAIT', 'DIAG idle wait')
- gd(c, 'WAIT', 'ges remote message')
- gd(c, 'WAIT', 'gcs remote message')
- gd(c, 'WAIT', 'heartbeat monitor sleep')
- gd(c, 'WAIT', 'GCR sleep')
- gd(c, 'WAIT', 'SGA: MMAN sleep for component shrink')
- gd(c, 'WAIT', 'MRP redo arrival')
- gd(c, 'WAIT', 'LNS ASYNC archive log')
- gd(c, 'WAIT', 'LNS ASYNC dest activation')
- gd(c, 'WAIT', 'LNS ASYNC end of log')
- gd(c, 'WAIT', 'simulated log write delay')
- gd(c, 'WAIT', 'LGWR real time apply sync')
- gd(c, 'WAIT', 'parallel recovery slave idle wait')
- gd(c, 'WAIT', 'LogMiner builder: idle')
- gd(c, 'WAIT', 'LogMiner builder: branch')
- gd(c, 'WAIT', 'LogMiner preparer: idle')
- gd(c, 'WAIT', 'LogMiner reader: log (idle)')
- gd(c, 'WAIT', 'LogMiner reader: redo (idle)')
- gd(c, 'WAIT', 'LogMiner client: transaction')
- gd(c, 'WAIT', 'LogMiner: other')
- gd(c, 'WAIT', 'LogMiner: activate')
- gd(c, 'WAIT', 'LogMiner: reset')
- gd(c, 'WAIT', 'LogMiner: find session')
- gd(c, 'WAIT', 'LogMiner: internal')
- gd(c, 'WAIT', 'Logical Standby Apply Delay')
- gd(c, 'WAIT', 'parallel recovery coordinator waits for slave cleanup')
- gd(c, 'WAIT', 'parallel recovery control message reply')
- gd(c, 'WAIT', 'parallel recovery slave next change')
- gd(c, 'WAIT', 'PX Deq: Txn Recovery Start')
- gd(c, 'WAIT', 'PX Deq: Txn Recovery Reply')
- gd(c, 'WAIT', 'fbar timer')
- gd(c, 'WAIT', 'smon timer')
- gd(c, 'WAIT', 'PX Deq: Metadata Update')
- gd(c, 'WAIT', 'Space Manager: slave idle wait')
- gd(c, 'WAIT', 'PX Deq: Index Merge Reply')
- gd(c, 'WAIT', 'PX Deq: Index Merge Execute')
- gd(c, 'WAIT', 'PX Deq: Index Merge Close')
- gd(c, 'WAIT', 'PX Deq: kdcph_mai')
- gd(c, 'WAIT', 'PX Deq: kdcphc_ack')
- gd(c, 'WAIT', 'shared server idle wait')
- gd(c, 'WAIT', 'dispatcher timer')
- gd(c, 'WAIT', 'cmon timer')
- gd(c, 'WAIT', 'pool server timer')
- gd(c, 'WAIT', 'JOX Jit Process Sleep')
- gd(c, 'WAIT', 'jobq slave wait')
- gd(c, 'WAIT', 'pipe get')
- gd(c, 'WAIT', 'PX Deque wait')
- gd(c, 'WAIT', 'PX Idle Wait')
- gd(c, 'WAIT', 'PX Deq: Join ACK')
- gd(c, 'WAIT', 'PX Deq Credit: need buffer')
- gd(c, 'WAIT', 'PX Deq Credit: send blkd')
- gd(c, 'WAIT', 'PX Deq: Msg Fragment')
- gd(c, 'WAIT', 'PX Deq: Parse Reply')
- gd(c, 'WAIT', 'PX Deq: Execute Reply')
- gd(c, 'WAIT', 'PX Deq: Execution Msg')
- gd(c, 'WAIT', 'PX Deq: Table Q Normal')
- gd(c, 'WAIT', 'PX Deq: Table Q Sample')
- gd(c, 'WAIT', 'Streams fetch slave: waiting for txns')
- gd(c, 'WAIT', 'Streams: waiting for messages')
- gd(c, 'WAIT', 'Streams capture: waiting for archive log')
- gd(c, 'WAIT', 'single-task message')
- gd(c, 'WAIT', 'SQL*Net message from client')
- gd(c, 'WAIT', 'SQL*Net vector message from client')
- gd(c, 'WAIT', 'SQL*Net vector message from dblink')
- gd(c, 'WAIT', 'PL/SQL lock timer')
- gd(c, 'WAIT', 'Streams AQ: emn coordinator idle wait')
- gd(c, 'WAIT', 'EMON slave idle wait')
- gd(c, 'WAIT', 'Streams AQ: waiting for messages in the queue')
- gd(c, 'WAIT', 'Streams AQ: waiting for time management or cleanup tasks')
- gd(c, 'WAIT', 'Streams AQ: delete acknowledged messages')
- gd(c, 'WAIT', 'Streams AQ: deallocate messages from Streams Pool')
- gd(c, 'WAIT', 'Streams AQ: qmn coordinator idle wait')
- gd(c, 'WAIT', 'Streams AQ: qmn slave idle wait')
- gd(c, 'WAIT', 'Streams AQ: RAC qmn coordinator idle wait')
- gd(c, 'WAIT', 'HS message to agent')
- gd(c, 'WAIT', 'ASM background timer')
- gd(c, 'WAIT', 'auto-sqltune: wait graph update')
- gd(c, 'WAIT', 'WCR: replay client notify')
- gd(c, 'WAIT', 'WCR: replay clock')
- gd(c, 'WAIT', 'WCR: replay paused')
- gd(c, 'WAIT', 'JS external job')
- gd(c, 'WAIT', 'cell worker idle')
,0) , mt), 10) || ' unaccounted time' ;
else null;
end case; -- mt=time, mn
end case; -- mt
return ret;
end get_useful_average;
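-- worked example of the STAT branch above (hypothetical deltas): if 'redo size' grew by
-- 1048576 and 'user commits' by 512 during a snapshot, the output is
--   1048576/512 = 2048  ->  '2.05k bytes per user commit'
-- the TIME/'DB time' branch instead estimates unaccounted time from the wall-clock
-- microseconds, the DB time/DB CPU deltas and the long list of known idle wait deltas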
/*---------------------------------------------------
-- function for converting large numbers to human-readable format
---------------------------------------------------*/
function tptformat( p_num in number,
p_stype in varchar2 default 'STAT',
p_precision in number default 2,
p_base in number default 10, -- for KiB/MiB formatting use
p_grouplen in number default 3 -- p_base=2 and p_grouplen=10
)
return varchar2
is
begin
if p_num = 0 then return '0'; end if;
if p_num IS NULL then return '~'; end if;
if p_stype in ('WAIT','TIME') then
return
round(
p_num / power( p_base , trunc(log(p_base,abs(p_num)))-trunc(mod(log(p_base,abs(p_num)),p_grouplen)) ), p_precision
)
|| case trunc(log(p_base,abs(p_num)))-trunc(mod(log(p_base,abs(p_num)),p_grouplen))
when 0 then 'us'
when 1 then 'us'
when p_grouplen*1 then 'ms'
when p_grouplen*2 then 's'
when p_grouplen*3 then 'ks'
when p_grouplen*4 then 'Ms'
else '*'||p_base||'^'||to_char( trunc(log(p_base,abs(p_num)))-trunc(mod(log(p_base,abs(p_num)),p_grouplen)) )||' us'
end;
else
return
round(
p_num / power( p_base , trunc(log(p_base,abs(p_num)))-trunc(mod(log(p_base,abs(p_num)),p_grouplen)) ), p_precision
)
|| case trunc(log(p_base,abs(p_num)))-trunc(mod(log(p_base,abs(p_num)),p_grouplen))
when 0 then ''
when 1 then ''
when p_grouplen*1 then 'k'
when p_grouplen*2 then 'M'
when p_grouplen*3 then 'G'
when p_grouplen*4 then 'T'
when p_grouplen*5 then 'P'
when p_grouplen*6 then 'E'
else '*'||p_base||'^'||to_char( trunc(log(p_base,abs(p_num)))-trunc(mod(log(p_base,abs(p_num)),p_grouplen)) )
end;
end if;
end; -- tptformat
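-- e.g. tptformat(2500)             -> '2.5k'     (plain counter)
--      tptformat(1500000, 'TIME')  -> '1.5s'     (WAIT/TIME inputs are microseconds)
--      tptformat(2345, 'WAIT')     -> '2.35ms'
--      tptformat(null)             -> '~'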
/*---------------------------------------------------
-- simple function for parsing arguments from parameter string
---------------------------------------------------*/
function getopt( p_parvalues in varchar2,
p_extract in varchar2,
p_delim in varchar2 default ','
) return varchar2
is
ret varchar2(1000) := NULL;
begin
-- dbms_output.put('p_parvalues = ['||p_parvalues||'] ' );
-- dbms_output.put('p_extract = ['||p_extract||'] ' );
if lower(p_parvalues) like lower(p_extract)||'%'
or lower(p_parvalues) like '%'||p_delim||lower(p_extract)||'%' then
ret :=
nvl (
substr(p_parvalues,
instr(p_parvalues, p_extract)+length(p_extract),
case
instr(
substr(p_parvalues,
instr(p_parvalues, p_extract)+length(p_extract)
)
, p_delim
)
when 0 then length(p_parvalues)
else
instr(
substr(p_parvalues,
instr(p_parvalues, p_extract)+length(p_extract)
)
, p_delim
) - 1
end
)
, chr(0) -- in case parameter was specified but with no value
);
else
ret := null; -- no parameter found
end if;
-- dbms_output.put_line('ret = ['||replace(ret,chr(0),'\0')||']');
return ret;
end; -- getopt
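-- e.g. getopt('gather=s,sinclude=redo', 'sinclude=') -> 'redo'
--      getopt('ash,begin', 'begin')                  -> chr(0)   (option present, no value)
--      getopt('ash,begin', 'trace')                  -> null     (option not present)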
/*---------------------------------------------------
-- proc for getting session list with username, osuser, machine etc
---------------------------------------------------*/
procedure get_sessions is
tmp_sessions tmp_sestab;
begin
select /*+ unnest */
*
bulk collect into
tmp_sessions
from
gv$session s
where
1=1
and (
&sid_filter
) ;
--(inst_id,sid) in (&snapper_sid);
g_sessions := g_empty_sessions;
for i in 1..tmp_sessions.count loop
g_sessions(tmp_sessions(i).inst_id||','||tmp_sessions(i).sid) := tmp_sessions(i);
end loop;
end; -- get_sessions
/*---------------------------------------------------
-- function for getting session list with username, osuser, machine etc
-- this func does not update the g_sessions global array but returns session info as return value
---------------------------------------------------*/
function get_sessions return sestab is
tmp_sessions tmp_sestab;
l_return_sessions sestab;
begin
select /*+ unnest */
*
bulk collect into
tmp_sessions
from
gv$session s
where
1=1
and (&sid_filter) ;
--(inst_id,sid) in (&snapper_sid);
for i in 1..tmp_sessions.count loop
--output('get_sessions i='||i||' sid='||tmp_sessions(i).sid);
l_return_sessions(tmp_sessions(i).inst_id||','||tmp_sessions(i).sid) := tmp_sessions(i);
end loop;
return l_return_sessions;
end; -- get_sessions
/*---------------------------------------------------
-- functions for extracting and converting gv$session
-- records to varchar2
---------------------------------------------------*/
function sitem(p in varchar2) return varchar2 as
begin
return '<'||translate(p, '<>', '__')||'>';
end; -- sitem varchar2
function sitem(p in number) return varchar2 as
begin
return '<'||to_char(p)||'>';
end; -- sitem number
function sitem(p in date) return varchar2 as
begin
return '<'||to_char(p, 'YYYY-MM-DD HH24:MI:SS')||'>';
end; -- sitem date
function sitem_raw(p in raw) return varchar2 as
begin
return '<'||upper(rawtohex(p))||'>';
end; -- sitem_raw
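-- e.g. sitem('SYS$USERS')   -> '<SYS$USERS>'
--      sitem('a<weird>nm')  -> '<a_weird_nm>'   (translate() protects the <> delimiters)
--      sitem(sysdate)       -> a fixed-format '<YYYY-MM-DD HH24:MI:SS>' item
-- each ash sample below is therefore one varchar2 of 30 such '<field>' items, split
-- apart again by position in out_ash()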
/*---------------------------------------------------
-- proc for resetting the snapper ash array
---------------------------------------------------*/
procedure reset_ash is
begin
g_ash_samples_taken := 0;
-- clear g_ash
g_ash := new sys.dbms_debug_vc2coll();
end; -- reset_ash
/*---------------------------------------------------
-- proc for getting ash style samples from gv$session
---------------------------------------------------*/
procedure extract_ash is
ash_i varchar2(30);
s gv$session%rowtype;
begin
-- keep track how many times we sampled gv$session so we could calculate averages later on
g_ash_samples_taken := g_ash_samples_taken + 1;
--output('g_sessions.count='||g_sessions.count);
ash_i := g_sessions.first;
while ash_i is not null loop
s := g_sessions(ash_i);
if -- active, on cpu
(s.status = 'ACTIVE' and s.state != 'WAITING' and s.sid != g_mysid)
or -- active, waiting for non-idle wait
(s.status = 'ACTIVE' and s.state = 'WAITING' and s.wait_class != 'Idle' and s.sid != g_mysid)
then
--output('extract_ash: i='||i||' sid='||s.sid||' hv='||s.sql_hash_value||' sqlid='||s.sql_id);
-- if not actually waiting for anything, clear the past wait event details
if s.state != 'WAITING' then
s.state:='ON CPU';
s.event:='ON CPU';
s.wait_class:='ON CPU'; --TODO: What do we need to do for 9i here?
s.p1:=NULL;
s.p2:=NULL;
s.p3:=NULL;
end if;
g_ash.extend;
-- max length 1000 bytes (due to dbms_debug_vc2coll)
g_ash(g_ash.count) := substr(
sitem(s.inst_id) -- 1
||sitem(s.sid) -- 2
||sitem(s.username) -- 3 -- 30 bytes
||sitem(s.machine) -- 4 -- 64 bytes
||sitem(s.terminal) -- 5 -- 30 bytes
||sitem(s.program) -- 6 -- 48 bytes
||sitem(s.event) -- 7 -- 64 bytes
||sitem(s.wait_class) -- 8 -- 64 bytes, 10g+
||sitem(s.state) -- 9
||sitem(s.p1) -- 10
||sitem(s.p2) -- 11
||sitem(s.p3) -- 12
||sitem(s.row_wait_obj#) -- 13
||sitem(s.row_wait_file#) -- 14
||sitem(s.row_wait_block#) -- 15
||sitem(s.row_wait_row#) -- 16
||sitem(s.blocking_session_status) -- 17 -- 10g+
&_NO_BLK_INST ||sitem('N/A') -- 18 -- pre-10gR2
&_YES_BLK_INST ||sitem(s.blocking_instance) -- 18 -- 10gR2+
||sitem(s.blocking_session) -- 19 -- 10g+
||sitem(s.sql_hash_value) -- 20
||sitem(s.sql_id) -- 21 -- 10g+
||sitem(s.sql_child_number) -- 22 -- 10g+
&_NO_PLSQL_OBJ_ID ||sitem('N/A') -- 23
&_NO_PLSQL_OBJ_ID ||sitem('N/A') -- 24
&_NO_PLSQL_OBJ_ID ||sitem('N/A') -- 25
&_NO_PLSQL_OBJ_ID ||sitem('N/A') -- 26
&_YES_PLSQL_OBJ_ID ||sitem(s.plsql_entry_object_id) -- 23
&_YES_PLSQL_OBJ_ID ||sitem(s.plsql_entry_subprogram_id) -- 24
&_YES_PLSQL_OBJ_ID ||sitem(s.plsql_object_id) -- 25
&_YES_PLSQL_OBJ_ID ||sitem(s.plsql_subprogram_id) -- 26
||sitem(s.module) -- 27 -- 48 bytes
||sitem(s.action) -- 28 -- 32 bytes
||sitem(s.client_identifier) -- 29 -- 64 bytes
||sitem(s.service_name) -- 30 -- 64 bytes, 10g+
, 1, 1000);
end if; -- sample is of an active session
ash_i := g_sessions.next(ash_i);
end loop;
exception
when no_data_found then output('error in extract_ash(): no_data_found for item '||ash_i);
end; -- extract_ash
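-- activity math: each call above increments g_ash_samples_taken, and out_ash() reports
-- count(*)/g_ash_samples_taken per group under the Active% header. e.g. (hypothetical)
-- a sql_id captured in 37 of 100 samples averages 0.37 active sessions, i.e. 37% activity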
/*---------------------------------------------------
-- proc for querying performance data into collections
---------------------------------------------------*/
procedure snap( p_snapdate out timestamp, p_stats out stab, l_stats out ltab, p_stats_string out sys.dbms_debug_vc2coll) is
lv_include_stat varchar2(1000) := nvl( lower(getopt('&snapper_options', 'sinclude=' )), '%');
lv_include_latch varchar2(1000) := nvl( lower(getopt('&snapper_options', 'linclude=' )), '%');
lv_include_time varchar2(1000) := nvl( lower(getopt('&snapper_options', 'tinclude=' )), '%');
lv_include_wait varchar2(1000) := nvl( lower(getopt('&snapper_options', 'winclude=' )), '%');
lstr varchar2(1000);
begin
p_snapdate := systimestamp;
select /* */ p_snapdate ts, snapper_stats.*
bulk collect into p_stats
from (
select 'STAT' stype, s.inst_id, s.sid, ss.statistic# - pls_adjust statistic#, ss.value, null event_count
from gv$session s, gv$sesstat ss
where &sid_filter --(inst_id,sid) in (&snapper_sid)
and s.inst_id = ss.inst_id
and s.sid = ss.sid
and (lv_gather like '%s%' or lv_gather like '%a%')
and ss.statistic# in (select /*+ no_unnest */ statistic# from v$statname
where lower(name) like '%'||lv_include_stat||'%'
or regexp_like (name, lv_include_stat, 'i')
)
--
union all
select
'WAIT', s.inst_id, s.sid,
en.event# + (select count(*) from v$statname) + 1 - pls_adjust,
nvl(se.time_waited_micro,0) + ( decode(se.event||s.state, s.event||'WAITING', s.seconds_in_wait, 0) * 1000000 ) value, total_waits event_count
from gv$session s, gv$session_event se, v$event_name en
where &sid_filter
and s.sid = se.sid
and s.inst_id = se.inst_id
and se.event = en.name
--and (se.inst_id, se.sid) in (&snapper_sid)
and (lv_gather like '%w%' or lv_gather like '%a%')
and en.event# in (select event# from v$event_name
where lower(name) like '%'||lv_include_wait||'%'
or regexp_like (name, lv_include_wait, 'i')
)
--
union all
select 'TIME' stype, s.inst_id, s.sid, st.stat_id - pls_adjust statistic#, st.value, null event_count
from gv$session s, gv$sess_time_model st
where &sid_filter --(inst_id,sid) in (&snapper_sid)
and s.inst_id = st.inst_id
and s.sid = st.sid
and (lv_gather like '%t%' or lv_gather like '%a%')
and st.stat_id in (select stat_id from gv$sys_time_model
where lower(stat_name) like '%'||lv_include_time||'%'
or regexp_like (stat_name, lv_include_time, 'i')
)
--
union all
select 'LATG', s.inst_id, -1 sid,
s.latch# +
(select count(*) from v$statname) +
(select count(*) from v$event_name) +
1 - pls_adjust statistic#,
s.gets + s.immediate_gets value, null event_count
from gv$latch s
where &inst_filter
and (lv_gather like '%l%' or lv_gather like '%a%')
and latch# in (select latch# from v$latchname
where lower(name) like '%'||lv_include_latch||'%'
or regexp_like (name, lv_include_latch, 'i')
)
--
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 union all
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 select 'BUFG', to_number(sys_context('userenv', 'instance')), -1 sid,
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 s.indx +
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 (select count(*) from v$statname) +
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 (select count(*) from v$event_name) +
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 (select count(*) from gv$latch) +
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 1 - pls_adjust statistic#,
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 s.why0+s.why1+s.why2 value, null event_count
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 from x$kcbsw s, x$kcbwh w
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 where
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 s.indx = w.indx
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 and s.why0+s.why1+s.why2 > 0
&_IF_X_ACCESSIBLE &_IF_LOWER_THAN_ORA11 and (lv_gather like '%b%' or lv_gather like '%a%')
--
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER union all
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER select 'BUFG', to_number(sys_context('userenv', 'instance')), -1 sid,
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER sw.indx +
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER (select count(*) from v$statname) +
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER (select count(*) from v$event_name) +
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER (select count(*) from gv$latch) +
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER 1 - pls_adjust statistic#,
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER why.why0+why.why1+why.why2+sw.other_wait value, null event_count
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER from
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER x$kcbuwhy why,
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER x$kcbwh dsc,
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER x$kcbsw sw
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER where
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER why.indx = dsc.indx
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER and why.inst_id = dsc.inst_id
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER and dsc.inst_id = sw.inst_id
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER and why.inst_id = sw.inst_id
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER and why.why0 + why.why1 + why.why2 + sw.other_wait > 0
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER and dsc.indx = sw.indx
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER and why.indx = sw.indx
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER -- deliberate cartesian join
&_IF_X_ACCESSIBLE &_IF_ORA11_OR_HIGHER and (lv_gather like '%b%' or lv_gather like '%a%')
--
union all
select 'ENQG', s.inst_id, -1 sid,
ascii(substr(s.eq_type,1,1))*256 + ascii(substr(s.eq_type,2,1)) +
(select count(*) from v$statname) +
(select count(*) from v$event_name) +
(select count(*) from gv$latch) +
&_IF_X_ACCESSIBLE (select count(*) from x$kcbwh) +
1 - pls_adjust statistic#,
s.total_req# value, null event_count
from gv$enqueue_stat s
where &inst_filter
and (lv_gather like '%e%' or lv_gather like '%a%')
) snapper_stats
order by inst_id, sid, stype, statistic#;
if p_stats.COUNT > 0 then
-- l_stats is an associative array for stats lookup, used for the useful averages calculation
-- p_stats_string is a dbms_debug_vc2coll collection datatype for "persisting" stats values across snapper DB calls (for "before" and "after" snaps)
p_stats_string := sys.dbms_debug_vc2coll();
for s in p_stats.first..p_stats.last loop
-- type srec is record (ts timestamp, stype varchar2(4), inst_id number, sid number, statistic# number, value number, event_count number );
lstr := p_stats(s).stype||','||trim(to_char(p_stats(s).inst_id))||','||trim(to_char(p_stats(s).sid))||','||trim(to_char(p_stats(s).statistic#,'999999999999999999999999'));
l_stats(lstr) := p_stats(s);
if g_snap_begin is not null then
p_stats_string.extend();
p_stats_string(s) := TO_CHAR(p_stats(s).ts, 'YYYY-MM-DD HH24:MI:SS.FF') ||','||
p_stats(s).stype ||','||
TO_CHAR(p_stats(s).inst_id) ||','||
TO_CHAR(p_stats(s).sid) ||','||
TRIM(TO_CHAR(p_stats(s).statistic#, '999999999999999999999999'))||','||
TRIM(TO_CHAR(p_stats(s).value, '999999999999999999999999'))||','||
TRIM(TO_CHAR(p_stats(s).event_count,'999999999999999999999999'));
--output('p_stats.p_stats_string='||p_stats_string(s));
end if;
end loop; -- s in (p_stats)
end if; -- p_stats.COUNT > 0
end snap;
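-- when a manual begin/end snapshot is in use (g_snap_begin set), snap() also serializes
-- each metric into p_stats_string as one csv line, e.g. (hypothetical values):
--   2024-01-01 10:00:00.000000,STAT,1,123,456,789,
-- i.e. ts,stype,inst_id,sid,statistic#,value,event_count - this is how the 'before'
-- snapshot survives across separate snapper invocations until the 'end' snapshot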
/*---------------------------------------------------
-- proc for reversing the string-normalized
-- stats array into lookup tables/collections
---------------------------------------------------*/
procedure snap_from_stats_string (p_string_stats in sys.dbms_debug_vc2coll, p_snapdate out timestamp, p_stats out stab, l_stats out ltab)
is
lstr varchar2(1000);
lv_rec srec;
begin
p_snapdate := NULL;
--type srec is record (ts timestamp, stype varchar2(4), inst_id number, sid number, statistic# number, value number, event_count number );
for s in p_string_stats.first .. p_string_stats.last loop
lv_rec.ts := TO_TIMESTAMP(replace(regexp_substr(p_string_stats(s)||',', '(.*?),', 1, 1),',',''), 'YYYY-MM-DD HH24:MI:SS.FF');
lv_rec.stype := replace(regexp_substr(p_string_stats(s)||',', '(.*?),', 1, 2),',','');
lv_rec.inst_id := TO_NUMBER(replace(regexp_substr(p_string_stats(s)||',', '(.*?),', 1, 3),',',''));
lv_rec.sid := TO_NUMBER(replace(regexp_substr(p_string_stats(s)||',', '(.*?),', 1, 4),',',''));
lv_rec.statistic# := TO_NUMBER(replace(regexp_substr(p_string_stats(s)||',', '(.*?),', 1, 5),',',''));
lv_rec.value := TO_NUMBER(replace(regexp_substr(p_string_stats(s)||',', '(.*?),', 1, 6),',',''));
lv_rec.event_count := TO_NUMBER(replace(regexp_substr(p_string_stats(s)||',', '(.*?),', 1, 7),',',''));
--output('snap_from_stats_string.event_count = '||to_char(lv_rec.event_count));
p_stats(s) := lv_rec;
lstr := p_stats(s).stype||','||trim(to_char(p_stats(s).inst_id))||','||trim(to_char(p_stats(s).sid))||','||trim(to_char(p_stats(s).statistic#,'999999999999999999999999'));
l_stats(lstr) := p_stats(s);
end loop;
p_snapdate := lv_rec.ts;
end snap_from_stats_string;
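-- the parser above pulls csv fields out by regexp_substr occurrence number, e.g. field 2:
--   replace(regexp_substr('2024-01-01 10:00:00.00,STAT,1,123'||',', '(.*?),', 1, 2), ',', '')
--   -> 'STAT'
-- (the appended ',' guarantees the last field is also delimiter-terminated)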
/*---------------------------------------------------
-- proc for dumping ASH data out in grouped
-- and ordered fashion
---------------------------------------------------*/
procedure out_ash( p_ash_columns in varchar2, p_topn in number := 10 ) as
-- whether to print given column or not
p_inst_id number := 0;
p_sid number := 0;
p_username number := 0;
p_machine number := 0;
p_terminal number := 0;
p_program number := 0;
p_event number := 0;
p_wait_class number := 0;
p_state number := 0;
p_p1 number := 0;
p_p2 number := 0;
p_p3 number := 0;
p_row_wait_obj# number := 0;
p_row_wait_file# number := 0;
p_row_wait_block# number := 0;
p_row_wait_row# number := 0;
p_blocking_session_status number := 0;
p_blocking_instance number := 0;
p_blocking_session number := 0;
p_sql_hash_value number := 0;
p_sql_id number := 0;
p_sql_child_number number := 0;
p_plsql_entry_object_id number := 0;
p_plsql_entry_subprogram_id number := 0;
p_plsql_object_id number := 0;
p_plsql_subprogram_id number := 0;
p_module number := 0;
p_action number := 0;
p_client_identifier number := 0;
p_service_name number := 0;
-- temporary variables for holding session details (for later formatting)
o_inst_id varchar2(100);
o_sid varchar2(100);
o_username varchar2(100);
o_machine varchar2(100);
o_terminal varchar2(100);
o_program varchar2(100);
o_event varchar2(100);
o_wait_class varchar2(100);
o_state varchar2(100);
o_p1 varchar2(100);
o_p2 varchar2(100);
o_p3 varchar2(100);
o_row_wait_obj# varchar2(100);
o_row_wait_file# varchar2(100);
o_row_wait_block# varchar2(100);
o_row_wait_row# varchar2(100);
o_blocking_session_status varchar2(100);
o_blocking_instance varchar2(100);
o_blocking_session varchar2(100);
o_sql_hash_value varchar2(100);
o_sql_id varchar2(100);
o_sql_child_number varchar2(100);
o_plsql_entry_object_id varchar2(100);
o_plsql_entry_subprogram_id varchar2(100);
o_plsql_object_id varchar2(100);
o_plsql_subprogram_id varchar2(100);
o_module varchar2(100);
o_action varchar2(100);
o_client_identifier varchar2(100);
o_service_name varchar2(100);
-- helper local vars
l_ash_grouping number := 0;
l_output_line varchar2(4000);
l_ash_header_line varchar2(4000);
begin
-- bail out if no ASH samples recorded
if g_ash.count = 0 then
output(' <No active sessions captured during the sampling period>');
return;
end if;
l_ash_header_line := 'Active%';
-- ash,ash1,ash2,ash3 parameter column group tokenizer
for s in (
SELECT LEVEL
, SUBSTR
( TOKEN
, DECODE(LEVEL, 1, 1, INSTR(TOKEN, DELIMITER, 1, LEVEL-1)+1)
, INSTR(TOKEN, DELIMITER, 1, LEVEL) -
DECODE(LEVEL, 1, 1, INSTR(TOKEN, DELIMITER, 1, LEVEL-1)+1)
) TOKEN
FROM ( SELECT REPLACE( LOWER(p_ash_columns) ,' ','')||'+' AS TOKEN
, '+' AS DELIMITER
FROM DUAL
)
CONNECT BY
INSTR(TOKEN, DELIMITER, 1, LEVEL)>0
ORDER BY
LEVEL ASC
) loop
case s.token
-- actual column names in gv$session
when 'inst_id' then l_ash_grouping := l_ash_grouping + c_inst_id ; l_ash_header_line := l_ash_header_line || ' | ' || lpad('INST_ID' , w_inst_id , ' ');
when 'sid' then l_ash_grouping := l_ash_grouping + c_sid ; l_ash_header_line := l_ash_header_line || ' | ' || lpad('SID' , w_sid , ' ');
when 'username' then l_ash_grouping := l_ash_grouping + c_username ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('USERNAME' , w_username , ' ');
when 'machine' then l_ash_grouping := l_ash_grouping + c_machine ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('MACHINE' , w_machine , ' ');
when 'terminal' then l_ash_grouping := l_ash_grouping + c_terminal ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('TERMINAL' , w_terminal , ' ');
when 'program' then l_ash_grouping := l_ash_grouping + c_program ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('PROGRAM' , w_program , ' ');
when 'event' then l_ash_grouping := l_ash_grouping + c_event ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('EVENT' , w_event , ' ');
when 'wait_class' then l_ash_grouping := l_ash_grouping + c_wait_class ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('WAIT_CLASS' , w_wait_class , ' ');
when 'state' then l_ash_grouping := l_ash_grouping + c_state ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('STATE' , w_state , ' ');
when 'p1' then l_ash_grouping := l_ash_grouping + c_p1 ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('P1' , w_p1 , ' ');
when 'p2' then l_ash_grouping := l_ash_grouping + c_p2 ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('P2' , w_p2 , ' ');
when 'p3' then l_ash_grouping := l_ash_grouping + c_p3 ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('P3' , w_p3 , ' ');
when 'row_wait_obj#' then l_ash_grouping := l_ash_grouping + c_row_wait_obj# ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('ROW_WAIT_OBJ#' , w_row_wait_obj# , ' ');
when 'row_wait_file#' then l_ash_grouping := l_ash_grouping + c_row_wait_file# ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('ROW_WAIT_FILE#' , w_row_wait_file# , ' ');
when 'row_wait_block#' then l_ash_grouping := l_ash_grouping + c_row_wait_block# ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('ROW_WAIT_BLOCK#' , w_row_wait_block# , ' ');
when 'row_wait_row#' then l_ash_grouping := l_ash_grouping + c_row_wait_row# ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('ROW_WAIT_ROW#' , w_row_wait_row# , ' ');
when 'blocking_session_status' then l_ash_grouping := l_ash_grouping + c_blocking_session_status ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('BLOCKING_SESSION_STATUS' , w_blocking_session_status , ' ');
when 'blocking_instance' then l_ash_grouping := l_ash_grouping + c_blocking_instance ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('BLOCKING_INSTANCE' , w_blocking_instance , ' ');
when 'blocking_session' then l_ash_grouping := l_ash_grouping + c_blocking_session ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('BLOCKING_SESSION' , w_blocking_session , ' ');
when 'sql_hash_value' then l_ash_grouping := l_ash_grouping + c_sql_hash_value ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('SQL_HASH_VALUE' , w_sql_hash_value , ' ');
when 'sql_id' then l_ash_grouping := l_ash_grouping + c_sql_id ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('SQL_ID' , w_sql_id , ' ');
when 'sql_child_number' then l_ash_grouping := l_ash_grouping + c_sql_child_number ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('SQL_CHILD_NUMBER' , w_sql_child_number , ' ');
when 'plsql_entry_object_id' then l_ash_grouping := l_ash_grouping + c_plsql_entry_object_id ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('PLSQL_ENTRY_OBJECT_ID' , w_plsql_entry_object_id , ' ');
when 'plsql_entry_subprogram_id' then l_ash_grouping := l_ash_grouping + c_plsql_entry_subprogram_id; l_ash_header_line := l_ash_header_line || ' | ' || rpad('PLSQL_ENTRY_SUBPROGRAM_ID' , w_plsql_entry_subprogram_id, ' ');
when 'plsql_object_id' then l_ash_grouping := l_ash_grouping + c_plsql_object_id ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('PLSQL_OBJECT_ID' , w_plsql_object_id , ' ');
when 'plsql_subprogram_id' then l_ash_grouping := l_ash_grouping + c_plsql_subprogram_id ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('PLSQL_SUBPROGRAM_ID' , w_plsql_subprogram_id , ' ');
when 'module' then l_ash_grouping := l_ash_grouping + c_module ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('MODULE' , w_module , ' ');
when 'action' then l_ash_grouping := l_ash_grouping + c_action ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('ACTION' , w_action , ' ');
when 'client_identifier' then l_ash_grouping := l_ash_grouping + c_client_identifier ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('CLIENT_IDENTIFIER' , w_client_identifier , ' ');
when 'service_name' then l_ash_grouping := l_ash_grouping + c_service_name ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('SERVICE_NAME' , w_service_name , ' ');
-- aliases for convenience (use either the real column name or its alias, not both at the same time)
when 'user' then l_ash_grouping := l_ash_grouping + c_username ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('USERNAME' , w_username , ' ');
when 'obj' then l_ash_grouping := l_ash_grouping + c_row_wait_obj# ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('ROW_WAIT_OBJ#' , w_row_wait_obj# , ' ');
when 'file' then l_ash_grouping := l_ash_grouping + c_row_wait_file# ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('ROW_WAIT_FILE#' , w_row_wait_file# , ' ');
when 'block' then l_ash_grouping := l_ash_grouping + c_row_wait_block# ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('ROW_WAIT_BLOCK#' , w_row_wait_block# , ' ');
when 'row' then l_ash_grouping := l_ash_grouping + c_row_wait_row# ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('ROW_WAIT_ROW#' , w_row_wait_row# , ' ');
when 'bss' then l_ash_grouping := l_ash_grouping + c_blocking_session_status ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('BLOCKING_SESSION_STATUS' , w_blocking_session_status , ' ');
when 'bsi' then l_ash_grouping := l_ash_grouping + c_blocking_instance ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('BLOCKING_INSTANCE' , w_blocking_instance , ' ');
when 'bs' then l_ash_grouping := l_ash_grouping + c_blocking_session ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('BLOCKING_SESSION' , w_blocking_session , ' ');
when 'sql' then l_ash_grouping := l_ash_grouping + c_sql_hash_value ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('SQL_HASH_VALUE' , w_sql_hash_value , ' ');
when 'sqlid' then l_ash_grouping := l_ash_grouping + c_sql_id ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('SQL_ID' , w_sql_id , ' ');
when 'child' then l_ash_grouping := l_ash_grouping + c_sql_child_number ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('SQL_CHILD_NUMBER' , w_sql_child_number , ' ');
when 'plsql_eoid' then l_ash_grouping := l_ash_grouping + c_plsql_entry_object_id ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('PLSQL_ENTRY_OBJECT_ID' , w_plsql_entry_object_id , ' ');
when 'plsql_esubpid' then l_ash_grouping := l_ash_grouping + c_plsql_entry_subprogram_id; l_ash_header_line := l_ash_header_line || ' | ' || rpad('PLSQL_ENTRY_SUBPROGRAM_ID' , w_plsql_entry_subprogram_id, ' ');
when 'plsql_oid' then l_ash_grouping := l_ash_grouping + c_plsql_object_id ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('PLSQL_OBJECT_ID' , w_plsql_object_id , ' ');
when 'plsql_subpid' then l_ash_grouping := l_ash_grouping + c_plsql_subprogram_id ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('PLSQL_SUBPROGRAM_ID' , w_plsql_subprogram_id , ' ');
when 'mod' then l_ash_grouping := l_ash_grouping + c_module ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('MODULE' , w_module , ' ');
when 'act' then l_ash_grouping := l_ash_grouping + c_action ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('ACTION' , w_action , ' ');
when 'cid' then l_ash_grouping := l_ash_grouping + c_client_identifier ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('CLIENT_IDENTIFIER' , w_client_identifier , ' ');
when 'service' then l_ash_grouping := l_ash_grouping + c_service_name ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('SERVICE_NAME' , w_service_name , ' ');
when 'wait_event' then l_ash_grouping := l_ash_grouping + c_event ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('EVENT' , w_event , ' ');
when 'wait_state' then l_ash_grouping := l_ash_grouping + c_state ; l_ash_header_line := l_ash_header_line || ' | ' || rpad('STATE' , w_state , ' ');
else
null;
-- raise_application_error(-20000, 'Invalid ASH column name');
end case; -- case s.token
end loop; -- tokenizer
output(' ');
output(lpad('-',length(l_ash_header_line),'-'));
output(l_ash_header_line);
output(lpad('-',length(l_ash_header_line),'-'));
-- this is needed for "easy" sorting and group by ops (without any custom stored object types!)
for i in (
with raw_records as (
select column_value rec from table(cast(g_ash as sys.dbms_debug_vc2coll))
),
ash_records as (
select
substr(r.rec, instr(r.rec, '<', 1, 1)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 1)+1), '>')-1) inst_id
, substr(r.rec, instr(r.rec, '<', 1, 2)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 2)+1), '>')-1) sid
, substr(r.rec, instr(r.rec, '<', 1, 3)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 3)+1), '>')-1) username
, substr(r.rec, instr(r.rec, '<', 1, 4)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 4)+1), '>')-1) machine
, substr(r.rec, instr(r.rec, '<', 1, 5)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 5)+1), '>')-1) terminal
, substr(r.rec, instr(r.rec, '<', 1, 6)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 6)+1), '>')-1) program
, substr(r.rec, instr(r.rec, '<', 1, 7)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 7)+1), '>')-1) event
, substr(r.rec, instr(r.rec, '<', 1, 8)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 8)+1), '>')-1) wait_class
, substr(r.rec, instr(r.rec, '<', 1, 9)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 9)+1), '>')-1) state
, substr(r.rec, instr(r.rec, '<', 1, 10)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 10)+1), '>')-1) p1
, substr(r.rec, instr(r.rec, '<', 1, 11)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 11)+1), '>')-1) p2
, substr(r.rec, instr(r.rec, '<', 1, 12)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 12)+1), '>')-1) p3
, substr(r.rec, instr(r.rec, '<', 1, 13)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 13)+1), '>')-1) row_wait_obj#
, substr(r.rec, instr(r.rec, '<', 1, 14)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 14)+1), '>')-1) row_wait_file#
, substr(r.rec, instr(r.rec, '<', 1, 15)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 15)+1), '>')-1) row_wait_block#
, substr(r.rec, instr(r.rec, '<', 1, 16)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 16)+1), '>')-1) row_wait_row#
, substr(r.rec, instr(r.rec, '<', 1, 17)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 17)+1), '>')-1) blocking_session_status
, substr(r.rec, instr(r.rec, '<', 1, 18)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 18)+1), '>')-1) blocking_instance
, substr(r.rec, instr(r.rec, '<', 1, 19)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 19)+1), '>')-1) blocking_session
, substr(r.rec, instr(r.rec, '<', 1, 20)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 20)+1), '>')-1) sql_hash_value
, substr(r.rec, instr(r.rec, '<', 1, 21)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 21)+1), '>')-1) sql_id
, substr(r.rec, instr(r.rec, '<', 1, 22)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 22)+1), '>')-1) sql_child_number
, substr(r.rec, instr(r.rec, '<', 1, 23)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 23)+1), '>')-1) plsql_entry_object_id
, substr(r.rec, instr(r.rec, '<', 1, 24)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 24)+1), '>')-1) plsql_entry_subprogram_id
, substr(r.rec, instr(r.rec, '<', 1, 25)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 25)+1), '>')-1) plsql_object_id
, substr(r.rec, instr(r.rec, '<', 1, 26)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 26)+1), '>')-1) plsql_subprogram_id
, substr(r.rec, instr(r.rec, '<', 1, 27)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 27)+1), '>')-1) module
, substr(r.rec, instr(r.rec, '<', 1, 28)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 28)+1), '>')-1) action
, substr(r.rec, instr(r.rec, '<', 1, 29)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 29)+1), '>')-1) client_identifier
, substr(r.rec, instr(r.rec, '<', 1, 30)+1, instr (substr(r.rec, instr(r.rec, '<', 1, 30)+1), '>')-1) service_name
from
raw_records r
)
select * from (
select
decode(bitand(l_ash_grouping, power(2, s_inst_id )), 0, chr(0), inst_id ) as inst_id
, decode(bitand(l_ash_grouping, power(2, s_sid )), 0, chr(0), sid ) as sid
, decode(bitand(l_ash_grouping, power(2, s_username )), 0, chr(0), username ) as username
, decode(bitand(l_ash_grouping, power(2, s_machine )), 0, chr(0), machine ) as machine
, decode(bitand(l_ash_grouping, power(2, s_terminal )), 0, chr(0), terminal ) as terminal
, decode(bitand(l_ash_grouping, power(2, s_program )), 0, chr(0), program ) as program
, decode(bitand(l_ash_grouping, power(2, s_event )), 0, chr(0), event ) as event
, decode(bitand(l_ash_grouping, power(2, s_wait_class )), 0, chr(0), wait_class ) as wait_class
, decode(bitand(l_ash_grouping, power(2, s_state )), 0, chr(0), state ) as state
, decode(bitand(l_ash_grouping, power(2, s_p1 )), 0, chr(0), p1 ) as p1
, decode(bitand(l_ash_grouping, power(2, s_p2 )), 0, chr(0), p2 ) as p2
, decode(bitand(l_ash_grouping, power(2, s_p3 )), 0, chr(0), p3 ) as p3
, decode(bitand(l_ash_grouping, power(2, s_row_wait_obj# )), 0, chr(0), row_wait_obj# ) as row_wait_obj#
, decode(bitand(l_ash_grouping, power(2, s_row_wait_file# )), 0, chr(0), row_wait_file# ) as row_wait_file#
, decode(bitand(l_ash_grouping, power(2, s_row_wait_block# )), 0, chr(0), row_wait_block# ) as row_wait_block#
, decode(bitand(l_ash_grouping, power(2, s_row_wait_row# )), 0, chr(0), row_wait_row# ) as row_wait_row#
, decode(bitand(l_ash_grouping, power(2, s_blocking_session_status )), 0, chr(0), blocking_session_status ) as blocking_session_status
, decode(bitand(l_ash_grouping, power(2, s_blocking_instance )), 0, chr(0), blocking_instance ) as blocking_instance
, decode(bitand(l_ash_grouping, power(2, s_blocking_session )), 0, chr(0), blocking_session ) as blocking_session
, decode(bitand(l_ash_grouping, power(2, s_sql_hash_value )), 0, chr(0), sql_hash_value ) as sql_hash_value
, decode(bitand(l_ash_grouping, power(2, s_sql_id )), 0, chr(0), sql_id ) as sql_id
, decode(bitand(l_ash_grouping, power(2, s_sql_child_number )), 0, chr(0), sql_child_number ) as sql_child_number
, decode(bitand(l_ash_grouping, power(2, s_plsql_entry_object_id )), 0, chr(0), plsql_entry_object_id ) as plsql_entry_object_id
, decode(bitand(l_ash_grouping, power(2, s_plsql_entry_subprogram_id )), 0, chr(0), plsql_entry_subprogram_id ) as plsql_entry_subprogram_id
, decode(bitand(l_ash_grouping, power(2, s_plsql_object_id )), 0, chr(0), plsql_object_id ) as plsql_object_id
, decode(bitand(l_ash_grouping, power(2, s_plsql_subprogram_id )), 0, chr(0), plsql_subprogram_id ) as plsql_subprogram_id
, decode(bitand(l_ash_grouping, power(2, s_module )), 0, chr(0), module ) as module
, decode(bitand(l_ash_grouping, power(2, s_action )), 0, chr(0), action ) as action
, decode(bitand(l_ash_grouping, power(2, s_client_identifier )), 0, chr(0), client_identifier ) as client_identifier
, decode(bitand(l_ash_grouping, power(2, s_service_name )), 0, chr(0), service_name ) as service_name
, count(*)/g_ash_samples_taken average_active_samples
from
ash_records a
group by
decode(bitand(l_ash_grouping, power(2, s_inst_id )), 0, chr(0), inst_id ) -- inst_id
, decode(bitand(l_ash_grouping, power(2, s_sid )), 0, chr(0), sid ) -- sid
, decode(bitand(l_ash_grouping, power(2, s_username )), 0, chr(0), username ) -- username
, decode(bitand(l_ash_grouping, power(2, s_machine )), 0, chr(0), machine ) -- machine
, decode(bitand(l_ash_grouping, power(2, s_terminal )), 0, chr(0), terminal ) -- terminal
, decode(bitand(l_ash_grouping, power(2, s_program )), 0, chr(0), program ) -- program
, decode(bitand(l_ash_grouping, power(2, s_event )), 0, chr(0), event ) -- event
, decode(bitand(l_ash_grouping, power(2, s_wait_class )), 0, chr(0), wait_class ) -- wait_class
, decode(bitand(l_ash_grouping, power(2, s_state )), 0, chr(0), state ) -- state
, decode(bitand(l_ash_grouping, power(2, s_p1 )), 0, chr(0), p1 ) -- p1
, decode(bitand(l_ash_grouping, power(2, s_p2 )), 0, chr(0), p2 ) -- p2
, decode(bitand(l_ash_grouping, power(2, s_p3 )), 0, chr(0), p3 ) -- p3
, decode(bitand(l_ash_grouping, power(2, s_row_wait_obj# )), 0, chr(0), row_wait_obj# ) -- row_wait_obj#
, decode(bitand(l_ash_grouping, power(2, s_row_wait_file# )), 0, chr(0), row_wait_file# ) -- row_wait_file#
, decode(bitand(l_ash_grouping, power(2, s_row_wait_block# )), 0, chr(0), row_wait_block# ) -- row_wait_block#
, decode(bitand(l_ash_grouping, power(2, s_row_wait_row# )), 0, chr(0), row_wait_row# ) -- row_wait_row#
, decode(bitand(l_ash_grouping, power(2, s_blocking_session_status )), 0, chr(0), blocking_session_status ) -- blocking_session_status
, decode(bitand(l_ash_grouping, power(2, s_blocking_instance )), 0, chr(0), blocking_instance ) -- blocking_instance
, decode(bitand(l_ash_grouping, power(2, s_blocking_session )), 0, chr(0), blocking_session ) -- blocking_session
, decode(bitand(l_ash_grouping, power(2, s_sql_hash_value )), 0, chr(0), sql_hash_value ) -- sql_hash_value
, decode(bitand(l_ash_grouping, power(2, s_sql_id )), 0, chr(0), sql_id ) -- sql_id
, decode(bitand(l_ash_grouping, power(2, s_sql_child_number )), 0, chr(0), sql_child_number ) -- sql_child_number
, decode(bitand(l_ash_grouping, power(2, s_plsql_entry_object_id )), 0, chr(0), plsql_entry_object_id ) -- plsql_entry_object_id
, decode(bitand(l_ash_grouping, power(2, s_plsql_entry_subprogram_id )), 0, chr(0), plsql_entry_subprogram_id ) -- plsql_entry_subprogram_id
, decode(bitand(l_ash_grouping, power(2, s_plsql_object_id )), 0, chr(0), plsql_object_id ) -- plsql_object_id
, decode(bitand(l_ash_grouping, power(2, s_plsql_subprogram_id )), 0, chr(0), plsql_subprogram_id ) -- plsql_subprogram_id
, decode(bitand(l_ash_grouping, power(2, s_module )), 0, chr(0), module ) -- module
, decode(bitand(l_ash_grouping, power(2, s_action )), 0, chr(0), action ) -- action
, decode(bitand(l_ash_grouping, power(2, s_client_identifier )), 0, chr(0), client_identifier ) -- client_identifier
, decode(bitand(l_ash_grouping, power(2, s_service_name )), 0, chr(0), service_name ) -- service_name
order by
count(*)/g_ash_samples_taken desc
)
where rownum <= p_topn
) loop
l_output_line := '';
o_inst_id := CASE WHEN i.inst_id = chr(0) THEN null ELSE nvl(i.inst_id , ' ') END;
o_sid := CASE WHEN i.sid = chr(0) THEN null ELSE nvl(i.sid , ' ') END;
o_username := CASE WHEN i.username = chr(0) THEN null ELSE nvl(i.username , ' ') END;
o_machine := CASE WHEN i.machine = chr(0) THEN null ELSE nvl(i.machine , ' ') END;
o_terminal := CASE WHEN i.terminal = chr(0) THEN null ELSE nvl(i.terminal , ' ') END;
o_program := CASE WHEN i.program = chr(0) THEN null ELSE nvl(i.program , ' ') END;
o_event := CASE WHEN i.event = chr(0) THEN null ELSE nvl(i.event , ' ') END;
o_wait_class := CASE WHEN i.wait_class = chr(0) THEN null ELSE nvl(i.wait_class , ' ') END;
o_state := CASE WHEN i.state = chr(0) THEN null ELSE nvl(i.state , ' ') END;
o_p1 := CASE WHEN i.p1 = chr(0) THEN null ELSE nvl(i.p1 , ' ') END;
o_p2 := CASE WHEN i.p2 = chr(0) THEN null ELSE nvl(i.p2 , ' ') END;
o_p3 := CASE WHEN i.p3 = chr(0) THEN null ELSE nvl(i.p3 , ' ') END;
o_row_wait_obj# := CASE WHEN i.row_wait_obj# = chr(0) THEN null ELSE nvl(i.row_wait_obj# , ' ') END;
o_row_wait_file# := CASE WHEN i.row_wait_file# = chr(0) THEN null ELSE nvl(i.row_wait_file# , ' ') END;
o_row_wait_block# := CASE WHEN i.row_wait_block# = chr(0) THEN null ELSE nvl(i.row_wait_block# , ' ') END;
o_row_wait_row# := CASE WHEN i.row_wait_row# = chr(0) THEN null ELSE nvl(i.row_wait_row# , ' ') END;
o_blocking_session_status := CASE WHEN i.blocking_session_status = chr(0) THEN null ELSE nvl(i.blocking_session_status , ' ') END;
o_blocking_instance := CASE WHEN i.blocking_instance = chr(0) THEN null ELSE nvl(i.blocking_instance , ' ') END;
o_blocking_session := CASE WHEN i.blocking_session = chr(0) THEN null ELSE nvl(i.blocking_session , ' ') END;
o_sql_hash_value := CASE WHEN i.sql_hash_value = chr(0) THEN null ELSE nvl(i.sql_hash_value , ' ') END;
o_sql_id := CASE WHEN i.sql_id = chr(0) THEN null ELSE nvl(i.sql_id , ' ') END;
o_sql_child_number := CASE WHEN i.sql_child_number = chr(0) THEN null ELSE nvl(i.sql_child_number , ' ') END;
o_plsql_entry_object_id := CASE WHEN i.plsql_entry_object_id = chr(0) THEN null ELSE nvl(i.plsql_entry_object_id , ' ') END;
o_plsql_entry_subprogram_id := CASE WHEN i.plsql_entry_subprogram_id = chr(0) THEN null ELSE nvl(i.plsql_entry_subprogram_id , ' ') END;
o_plsql_object_id := CASE WHEN i.plsql_object_id = chr(0) THEN null ELSE nvl(i.plsql_object_id , ' ') END;
o_plsql_subprogram_id := CASE WHEN i.plsql_subprogram_id = chr(0) THEN null ELSE nvl(i.plsql_subprogram_id , ' ') END;
o_module := CASE WHEN i.module = chr(0) THEN null ELSE nvl(i.module , ' ') END;
o_action := CASE WHEN i.action = chr(0) THEN null ELSE nvl(i.action , ' ') END;
o_client_identifier := CASE WHEN i.client_identifier = chr(0) THEN null ELSE nvl(i.client_identifier , ' ') END;
o_service_name := CASE WHEN i.service_name = chr(0) THEN null ELSE nvl(i.service_name , ' ') END;
-- print the activity % as the first column
l_output_line := lpad(to_char(round(i.average_active_samples*100))||'%', w_activity_pct, ' ');
-- loop through ash columns to find what to print and in which order
for s in (
SELECT LEVEL
, SUBSTR
( TOKEN
, DECODE(LEVEL, 1, 1, INSTR(TOKEN, DELIMITER, 1, LEVEL-1)+1)
, INSTR(TOKEN, DELIMITER, 1, LEVEL) -
DECODE(LEVEL, 1, 1, INSTR(TOKEN, DELIMITER, 1, LEVEL-1)+1)
) TOKEN
FROM ( SELECT REPLACE( LOWER(p_ash_columns) ,' ','')||'+' AS TOKEN
, '+' AS DELIMITER
FROM DUAL
)
CONNECT BY
INSTR(TOKEN, DELIMITER, 1, LEVEL)>0
ORDER BY
LEVEL ASC
) loop
l_output_line := l_output_line || ' | ' ||
case s.token
-- actual column names in gv$session
when 'inst_id' then lpad(o_inst_id , w_inst_id , ' ')
when 'sid' then lpad(o_sid , w_sid , ' ')
when 'username' then rpad(o_username , w_username , ' ')
when 'machine' then rpad(o_machine , w_machine , ' ')
when 'terminal' then rpad(o_terminal , w_terminal , ' ')
when 'program' then rpad(o_program , w_program , ' ')
when 'event' then rpad(o_event , w_event , ' ')
when 'wait_class' then rpad(o_wait_class , w_wait_class , ' ')
when 'state' then rpad(o_state , w_state , ' ')
when 'p1' then rpad(o_p1 , w_p1 , ' ')
when 'p2' then rpad(o_p2 , w_p2 , ' ')
when 'p3' then rpad(o_p3 , w_p3 , ' ')
when 'row_wait_obj#' then rpad(o_row_wait_obj# , w_row_wait_obj# , ' ')
when 'row_wait_file#' then rpad(o_row_wait_file# , w_row_wait_file# , ' ')
when 'row_wait_block#' then rpad(o_row_wait_block# , w_row_wait_block# , ' ')
when 'row_wait_row#' then rpad(o_row_wait_row# , w_row_wait_row# , ' ')
when 'blocking_session_status' then rpad(o_blocking_session_status , w_blocking_session_status , ' ')
when 'blocking_instance' then rpad(o_blocking_instance , w_blocking_instance , ' ')
when 'blocking_session' then rpad(o_blocking_session , w_blocking_session , ' ')
when 'sql_hash_value' then rpad(o_sql_hash_value , w_sql_hash_value , ' ')
when 'sql_id' then rpad(o_sql_id , w_sql_id , ' ')
when 'sql_child_number' then rpad(o_sql_child_number , w_sql_child_number , ' ')
when 'plsql_entry_object_id' then rpad(o_plsql_entry_object_id , w_plsql_entry_object_id , ' ')
when 'plsql_entry_subprogram_id' then rpad(o_plsql_entry_subprogram_id , w_plsql_entry_subprogram_id, ' ')
when 'plsql_object_id' then rpad(o_plsql_object_id , w_plsql_object_id , ' ')
when 'plsql_subprogram_id' then rpad(o_plsql_subprogram_id , w_plsql_subprogram_id , ' ')
when 'module' then rpad(o_module , w_module , ' ')
when 'action' then rpad(o_action , w_action , ' ')
when 'client_identifier' then rpad(o_client_identifier , w_client_identifier , ' ')
when 'service_name' then rpad(o_service_name , w_service_name , ' ')
-- aliases for convenience (only either real name or alias should be used together at the same time)
when 'user' then rpad(o_username , w_username , ' ')
when 'obj' then rpad(o_row_wait_obj# , w_row_wait_obj# , ' ')
when 'file' then rpad(o_row_wait_file# , w_row_wait_file# , ' ')
when 'block' then rpad(o_row_wait_block# , w_row_wait_block# , ' ')
when 'row' then rpad(o_row_wait_row# , w_row_wait_row# , ' ')
when 'bss' then rpad(o_blocking_session_status , w_blocking_session_status , ' ')
when 'bsi' then rpad(o_blocking_instance , w_blocking_instance , ' ')
when 'bs' then rpad(o_blocking_session , w_blocking_session , ' ')
when 'sql' then rpad(o_sql_hash_value , w_sql_hash_value , ' ')
when 'sqlid' then rpad(o_sql_id , w_sql_id , ' ')
when 'child' then rpad(o_sql_child_number , w_sql_child_number , ' ')
when 'plsql_eoid' then rpad(o_plsql_entry_object_id , w_plsql_entry_object_id , ' ')
when 'plsql_esubpid' then rpad(o_plsql_entry_subprogram_id , w_plsql_entry_subprogram_id, ' ')
when 'plsql_oid' then rpad(o_plsql_object_id , w_plsql_object_id , ' ')
when 'plsql_subpid' then rpad(o_plsql_subprogram_id , w_plsql_subprogram_id , ' ')
when 'mod' then rpad(o_module , w_module , ' ')
when 'act' then rpad(o_action , w_action , ' ')
when 'cid' then rpad(o_client_identifier , w_client_identifier , ' ')
when 'service' then rpad(o_service_name , w_service_name , ' ')
when 'wait_event' then rpad(o_event , w_event , ' ')
when 'wait_state' then rpad(o_state , w_state , ' ')
else
''
end; -- case s.token
end loop; -- ash parameter tokenizer
output(l_output_line);
end loop; -- grouped ash samples
end out_ash;
-- and it begins!!!
begin
-- get snappers own sid into g_mysid
select sid into g_mysid from v$mystat where rownum = 1;
pagesize := nvl( getopt('&snapper_options', 'pagesize=' ), pagesize);
--output ( 'Pagesize='||pagesize );
lv_ash := getopt('&snapper_options', 'ash');
lv_stats := getopt('&snapper_options', 'stat');
if lv_ash is not null then gather_ash := 1; end if;
if lv_stats is not null then gather_stats := 1; end if;
--output('all='||case when getopt('&snapper_options', 'all') = chr(0) then 'chr(0)' when getopt('&snapper_options', 'all') is null then 'null' else (getopt('&snapper_options','all')) end);
-- some additional default value logic
if getopt('&snapper_options', 'all') is not null then
--output('setting stats to all due to option = all');
gather_stats := 1;
gather_ash := 1;
else
if (lv_ash is null and lv_stats is null) then
gather_stats := 0;
gather_ash := 1;
end if;
end if;
-- determine which performance counters and stats to collect
lv_gather := case nvl( lower(getopt ('&snapper_options', 'gather=')), 'stw')
when 'all' then 'stw'
else nvl( lower(getopt ('&snapper_options', 'gather=')), 'stw')
end;
--lv_gather:=getopt ('&snapper_options', 'gather=');
--output('lv_gather='||lv_gather);
g_snap_begin := lower(getopt('&snapper_options', 'begin' ));
g_snap_end := lower(getopt('&snapper_options', 'end' ));
--output('g_snap_begin = '||g_snap_begin);
--output('g_snap_end = '||g_snap_end);
if pagesize > 0 then
output(' ');
output('-- Session Snapper v4.06 BETA - by Tanel Poder ( http://blog.tanelpoder.com ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)');
output(' ');
end if;
-- initialize statistic and event name array
-- fetch statistic names with their adjusted IDs
select *
bulk collect into sn_tmp
from (
select 'STAT' stype, statistic# - pls_adjust statistic#, name
from v$statname
where (lv_gather like '%s%' or lv_gather like '%a%')
--
union all
select 'WAIT',
event# + (select count(*) from v$statname) + 1 - pls_adjust, name
from v$event_name
where (lv_gather like '%w%' or lv_gather like '%a%')
--
union all
select 'TIME' stype, stat_id - pls_adjust statistic#, stat_name name
from gv$sys_time_model
where (lv_gather like '%t%' or lv_gather like '%a%')
--
union all
select 'LATG',
l.latch# +
(select count(*) from v$statname) +
(select count(*) from v$event_name) +
1 - pls_adjust statistic#,
name
from gv$latch l
where (lv_gather like '%l%' or lv_gather like '%a%')
--
&_IF_X_ACCESSIBLE union all
&_IF_X_ACCESSIBLE select 'BUFG',
&_IF_X_ACCESSIBLE indx +
&_IF_X_ACCESSIBLE (select count(*) from v$statname) +
&_IF_X_ACCESSIBLE (select count(*) from v$event_name) +
&_IF_X_ACCESSIBLE (select count(*) from gv$latch) +
&_IF_X_ACCESSIBLE 1 - pls_adjust statistic#,
&_IF_X_ACCESSIBLE kcbwhdes name
&_IF_X_ACCESSIBLE from x$kcbwh
&_IF_X_ACCESSIBLE where (lv_gather like '%b%' or lv_gather like '%a%')
--
union all
select 'ENQG',
ascii(substr(e.eq_type,1,1))*256 + ascii(substr(e.eq_type,2,1)) +
(select count(*) from v$statname) +
(select count(*) from v$event_name) +
(select count(*) from gv$latch) +
&_IF_X_ACCESSIBLE (select count(*) from x$kcbwh) +
1 - pls_adjust statistic#,
eq_type
from (
select es.eq_type
||' - '||lt.name
eq_type,
total_req#
from
gv$enqueue_stat es
, gv$lock_type lt
where es.eq_type = lt.type
) e
where (lv_gather like '%e%' or lv_gather like '%a%')
) snapper_statnames
order by stype, statistic#;
-- store these into an index_by array organized by statistic# for fast lookup
for i in 1..sn_tmp.count loop
sn(sn_tmp(i).statistic#) := sn_tmp(i);
sn_reverse(sn_tmp(i).stype||','||sn_tmp(i).name) := sn_tmp(i);
end loop;
-- main sampling loop
for c in 1..&snapper_count loop
-- sesstat and other performance counter sampling
if gather_stats = 1 then
-- print header if required
gv_header_string :=
CASE WHEN output_header = 1 THEN 'HEAD,' END
|| CASE WHEN output_inst = 1 THEN ' INST,' END
|| CASE WHEN output_sid = 1 THEN ' SID,' END
|| CASE WHEN output_inst_sid = 1 THEN ' SID @INST,' END
|| CASE WHEN output_username = 1 THEN ' USERNAME ,' END
|| CASE WHEN output_time = 1 THEN ' SNAPSHOT START ,' END
|| CASE WHEN output_seconds = 1 THEN ' SECONDS,' END
|| CASE WHEN output_stype = 1 THEN ' TYPE,' END
|| CASE WHEN output_sname = 1 THEN rpad(' STATISTIC',59,' ')||',' END
|| CASE WHEN output_delta = 1 THEN ' DELTA,' END
|| CASE WHEN output_delta_s = 1 THEN ' DELTA/SEC,' END
|| CASE WHEN output_hdelta = 1 THEN ' HDELTA,' END
|| CASE WHEN output_hdelta_s = 1 THEN ' HDELTA/SEC,' END
|| CASE WHEN output_percent = 1 THEN ' %TIME,' END
|| CASE WHEN output_pcthist = 1 THEN ' GRAPH ,' END
|| CASE WHEN output_eventcnt = 1 THEN ' NUM_WAITS,' END
|| CASE WHEN output_eventcnt_s = 1 THEN ' WAITS/SEC,' END
|| CASE WHEN output_eventavg = 1 THEN ' AVERAGES ' END
;
if g_snap_begin is null then
if pagesize > 0 and mod(c-1, pagesize) = 0 then
output(rpad('-',length(gv_header_string),'-'));
output(gv_header_string);
output(rpad('-',length(gv_header_string),'-'));
else
if pagesize = -1 and c = 1 then
output(gv_header_string);
end if;
end if;
else
output('Taking BEGIN sample ...');
end if;
-- TODO raise an error if both begin and end are used together
-- TODO conditionally comment out the refcursor use unless begin and end is used
-- manual before/after snapshots (snapper v4)
if g_snap_begin is not null or g_snap_end is not null then
if g_snap_begin is not null then
get_sessions;
snap(d1,s1,l1,g_snap1);
&_MANUAL_SNAPSHOT open :snapper for select column_value rec from table(g_snap1); -- if you see this error then run: "VAR SNAPPER REFCURSOR" first!
exit;
end if;
if g_snap_end is not null then
&_MANUAL_SNAPSHOT fetch :snapper bulk collect into g_snap1; -- You should run snapper with BEGIN option first!
-- procedure snap_from_stats_string (p_string_stats in sys.dbms_debug_vc2coll, p_snapdate out date, p_stats out stab, l_stats out ltab)
snap_from_stats_string(g_snap1, d1, s1, l1);
end if;
else -- normal interval sampling
if c = 1 then
get_sessions;
snap(d1,s1,l1,g_snap1);
else
get_sessions;
d1 := d2;
s1 := s2;
g_snap1 := g_snap2;
end if; -- c = 1
end if;
end if; -- gather_stats = 1
-- ASH style sampling
&_USE_DBMS_LOCK ash_date1 := sysdate;
&_USE_DBMS_LOCK if gather_ash = 1 then
&_USE_DBMS_LOCK while sysdate < (ash_date1 + (&snapper_sleep/86400)) loop
&_USE_DBMS_LOCK -- get active session records from g_sessions
&_USE_DBMS_LOCK get_sessions;
&_USE_DBMS_LOCK extract_ash();
&_USE_DBMS_LOCK -- sleep timeout backoff depending on the duration sampled (for up to 10 seconds total sampling time will get max 100 Hz sampling)
&_USE_DBMS_LOCK -- for longer duration sampling the algorithm will back off and for long durations (over 100 sec) the sampling rate will stabilize
&_USE_DBMS_LOCK -- at 1Hz
&_USE_DBMS_LOCK dbms_lock.sleep( greatest(0.1,(least(1,&snapper_sleep*&snapper_count/100))) );
&_USE_DBMS_LOCK end loop;
&_USE_DBMS_LOCK else
&_USE_DBMS_LOCK dbms_lock.sleep( ((ash_date1+(&snapper_sleep/86400)) - sysdate)*86400 );
&_USE_DBMS_LOCK null;
&_USE_DBMS_LOCK end if;
&_USE_DBMS_LOCK ash_date2 := sysdate;
-- sesstat new sample and delta calculation
if gather_stats = 1 then
get_sessions;
snap(d2,s2,l2,g_snap2);
-- manually coded nested loop outer join for calculating deltas:
-- why not use a SQL join? this would require creation of PL/SQL
-- collection object types, but Snapper does not require any changes
-- to the database, so any custom object types are out!
changed_values := 0;
missing_values_s1 := 0;
missing_values_s2 := 0;
-- remember last disappeared SID so we wouldn't need to output a warning
-- message for each statistic row of that disappeared sid
disappeared_sid := 0;
i :=1; -- iteration counter (for debugging)
a :=1; -- s1 array index
b :=1; -- s2 array index
if s2.count > 0 then lv_curr_sid := s2(b).sid; end if;
while ( a <= s1.count and b <= s2.count ) loop
if lv_curr_sid != 0 and lv_curr_sid != s2(b).sid then
if pagesize > 0 and mod(c-1, pagesize) = 0 then
-- if filtering specific stats, assuming that it's better to not leave spaces between every session data
if getopt('&snapper_options', 'sinclude=')||getopt('&snapper_options', 'tinclude=' )||getopt('&snapper_options', 'winclude=' ) is null then
output(' ');
-- output(rpad('-',length(gv_header_string),'-'));
-- output(gv_header_string);
-- output(rpad('-',length(gv_header_string),'-'));
end if;
end if;
lv_curr_sid := s2(b).sid;
end if;
delta := 0; -- don't print
case
when s1(a).sid = s2(b).sid then
case
when s1(a).statistic# = s2(b).statistic# then
delta := s2(b).value - s1(a).value;
evcnt := s2(b).event_count - s1(a).event_count;
--output('DEBUG, s1(a).statistic# s2(b).statistic#, a='||to_char(a)||' b='||to_char(b)||' s1.count='||s1.count||' s2.count='||s2.count||' s2.count='||s2.count);
if delta != 0 then fout(); end if;
a := a + 1;
b := b + 1;
when s1(a).statistic# > s2(b).statistic# then
delta := s2(b).value;
evcnt := s2(b).event_count;
if delta != 0 then fout(); end if;
b := b + 1;
when s1(a).statistic# < s2(b).statistic# then
output('ERROR, s1(a).statistic# < s2(b).statistic#, a='||to_char(a)||' b='||to_char(b)||' s1.count='||s1.count||' s2.count='||s2.count||' s2.count='||s2.count);
a := a + 1;
b := b + 1;
else
output('ERROR, s1(a).statistic# ? s2(b).statistic#, a='||to_char(a)||' b='||to_char(b)||' s1.count='||s1.count||' s2.count='||s2.count||' s2.count='||s2.count);
a := a + 1;
b := b + 1;
end case; -- s1(a).statistic# ... s2(b).statistic#
when s1(a).sid > s2(b).sid then
delta := s2(b).value;
evcnt := s2(b).event_count;
if delta != 0 then fout(); end if;
b := b + 1;
when s1(a).sid < s2(b).sid then
if disappeared_sid != s1(a).sid then
output('WARN, Session has disappeared since previous snapshot, ignoring SID='||to_char(s1(a).sid)||' debug(a='||to_char(a)||' b='||to_char(b)||' s1.count='||s1.count||' s2.count='||s2.count||' s2.count='||s2.count||')');
end if;
disappeared_sid := s1(a).sid;
a := a + 1;
else
output('ERROR, Should not be here, SID='||to_char(s2(b).sid)||' a='||to_char(a)||' b='||to_char(b)||' s1.count='||s1.count||' s2.count='||s2.count||' s2.count='||s2.count);
end case; -- s1(a).sid ... s2(b).sid
i:=i+1;
if delta != 0 then
changed_values := changed_values + 1;
end if; -- delta != 0
end loop; -- while ( a <= s1.count and b <= s2.count )
if pagesize > 0 and changed_values > 0 then
output(' ');
--output('-- End of Stats snap '||to_char(c)||', end='||to_char(d2, 'YYYY-MM-DD HH24:MI:SS')||', seconds='||to_char(case get_seconds(d2-d1) when 0 then (&snapper_sleep) else round(get_seconds(d2-d1), 1) end));
output('-- End of Stats snap '||to_char(c)||', end='||to_char(d2, 'YYYY-MM-DD HH24:MI:SS')||', seconds='||round(get_seconds(d2-d1), 1));
end if;
output(' ');
end if; -- gather_stats = 1
if gather_ash = 1 then
-- get ASH sample grouping details
g_ash_columns := nvl( getopt('&snapper_options', 'ash=' ), g_ash_columns );
-- optional additional ASH groupings
g_ash_columns1 := case when getopt('&snapper_options', 'ash1' ) is null then null when getopt('&snapper_options', 'ash1' ) = chr(0) then g_ash_columns1 else getopt('&snapper_options', 'ash1=' ) end;
g_ash_columns2 := case when getopt('&snapper_options', 'ash2' ) is null then null when getopt('&snapper_options', 'ash2' ) = chr(0) then g_ash_columns2 else getopt('&snapper_options', 'ash2=' ) end;
g_ash_columns3 := case when getopt('&snapper_options', 'ash3' ) is null then null when getopt('&snapper_options', 'ash3' ) = chr(0) then g_ash_columns3 else getopt('&snapper_options', 'ash3=' ) end;
-- group ASH records and print report
out_ash( g_ash_columns, 10 );
-- group and print optional ASH reports
if g_ash_columns1 is not null then out_ash( g_ash_columns1, 10 ); end if;
if g_ash_columns2 is not null then out_ash( g_ash_columns2, 10 ); end if;
if g_ash_columns3 is not null then out_ash( g_ash_columns3, 10 ); end if;
if pagesize > 0 then
output(' ');
--output('-- End of ASH snap '||to_char(c)||', end='||to_char(ash_date2, 'YYYY-MM-DD HH24:MI:SS')||', seconds='||to_char(case (ash_date2-ash_date1) when 0 then (&snapper_sleep) else round((ash_date2-ash_date1) * 86400, 1) end)||', samples_taken='||g_ash_samples_taken);
output('-- End of ASH snap '||to_char(c)||', end='||to_char(ash_date2, 'YYYY-MM-DD HH24:MI:SS')||', seconds='||to_char(round((ash_date2-ash_date1) * 86400, 1))||', samples_taken='||g_ash_samples_taken);
output(' ');
end if;
reset_ash();
end if; -- gather_ash = 1
end loop; -- for c in 1..snapper_count
end;
/
undefine snapper_oraversion
undefine snapper_sleep
undefine snapper_count
undefine snapper_sid
undefine ssid_begin
undefine _IF_ORA11_OR_HIGHER
undefine _IF_LOWER_THAN_ORA11
undefine _NO_BLK_INST
undefine _YES_BLK_INST
undefine _NO_PLSQL_OBJ_ID
undefine _YES_PLSQL_OBJ_ID
undefine _IF_DBMS_SYSTEM_ACCESSIBLE
undefine _IF_X_ACCESSIBLE
undefine _MANUAL_SNAPSHOT
undefine _USE_DBMS_LOCK
col snapper_ora11higher clear
col snapper_ora11lower clear
col dbms_system_accessible clear
set serveroutput off
}}}
''the plain loop for current instance''
{{{
$ cat snapperloop
#!/bin/bash
while :; do
sqlplus "/ as sysdba" <<! &
spool snap.txt append
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 1 $1
exit
!
sleep 10
echo
done
}}}
''snapper loop showing activity across all instances (must use snapper v4)''
{{{
$ cat snapperloop
#!/bin/bash
while :; do
sqlplus "/ as sysdba" <<! &
spool snap.txt append
@snapper ash 5 1 all@*
exit
!
sleep 10
echo
done
}}}
''process the snap.txt file as csv input to tableau or excel''
in Tableau convert the "Active%" column to "Number (Decimal)" and drag it to "Measure"
create a header for the "cat -n" column and name it "Series"
{{{
grep "|" snap.txt > snap.csv
sed -i 's/|/,/g' snap.csv
cat -n snap.csv > snap.csv2
sed 's/\t/,/g' snap.csv2 > snap.csv3
}}}
https://www.snowflake.com/blog/quickstart-guide-for-sagemaker-snowflake-part-one/
https://github.com/snowflakedb/ML/tree/master/samples/chicago_taxi
https://www.snowflake.com/blog/snowflake-and-spark-part-1-why-spark/
https://github.com/snowflakedb/spark-snowflake
https://www.snowflake.com/blog/connecting-a-jupyter-notebook-to-snowflake-via-spark-part-4/
https://stackoverflow.com/questions/58306352/advice-on-best-ci-cd-tools-for-snowflake-data-warehouse
<<<
https://dzone.com/articles/learn-how-to-setup-a-cicd-pipeline-from-scratch
https://semaphoreci.com/blog/cicd-pipeline
https://community.snowflake.com/s/article/How-to-Setup-a-CI-CD-Pipeline-for-Snowflake-using-Sqitch-and-Jenkins
https://medium.com/hashmapinc/ci-cd-on-snowflake-using-sqitch-and-jenkins-245c6dc4c205
https://medium.com/hashmapinc/using-dbt-to-execute-elt-pipelines-in-snowflake-dbe76d5beed5
https://www.youtube.com/watch?v=sVv1AI8Tptg
This is a link to our partner connect tools and you can test them out to find the ones that are relevant for your use-case.
https://docs.snowflake.net/manuals/user-guide/ecosystem-partner-connect.html
<<<
https://github.com/Snowflake-Labs/snowchange
<<<
In Snowflake
You can access the past 7 days of performance data in INFORMATION_SCHEMA https://docs.snowflake.com/en/sql-reference/info-schema.html
And 365 days in SNOWFLAKE.ACCOUNT_USAGE https://docs.snowflake.com/en/sql-reference/account-usage.html#differences-between-account-usage-and-information-schema Access to this requires special privileges; see https://docs.snowflake.com/en/sql-reference/account-usage.html#enabling-account-usage-for-other-roles (a query sketch against both layers follows this note)
The available views are here:
INFORMATION_SCHEMA https://docs.snowflake.com/en/sql-reference/info-schema.html#list-of-views
SNOWFLAKE.ACCOUNT_USAGE https://docs.snowflake.com/en/sql-reference/account-usage.html#account-usage-views
Other references:
https://www.snowflake.com/blog/using-snowflake-information-schema/
https://community.snowflake.com/s/article/Understanding-Your-Snowflake-Utilization-Part-1-Warehouse-Profiling
https://community.snowflake.com/s/article/Understanding-Your-Snowflake-Utilization-Part-2-Storage-Profiling
https://community.snowflake.com/s/article/Understanding-Your-Snowflake-Utilization-Part-3--Query-Profiling
http://cloudsqale.com/?s=snowflake
<<<
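For reference, a minimal sketch of pulling the same metric from both layers (view/function names per the Snowflake docs above; the 30-day window is arbitrary):
{{{
-- last 7 days, low latency, capped at 10K rows per call (INFORMATION_SCHEMA table function)
select query_id, total_elapsed_time
from table(information_schema.query_history(result_limit => 10000));

-- up to 365 days of history, but data may lag by ~45 minutes to 3 hours (ACCOUNT_USAGE view)
select query_id, total_elapsed_time
from snowflake.account_usage.query_history
where start_time > dateadd(day, -30, current_timestamp());
}}}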
! questions
<<<
question on snowflake historical perf data. I read that there are two ways to capture historical perf metrics
The available views are in INFORMATION_SCHEMA (past 7 days) and SNOWFLAKE.ACCOUNT_USAGE (365 days history)
questions:
- how does the data flow from the 7-day to the 365-day repository? I saw this disclaimer on ACCOUNT_USAGE: "All latency times are approximate; in some instances, the actual latency may be lower"
and I'm wondering if this is similar to what Enterprise Manager does, where data gets averaged hourly and daily, so it's always better to get data from INFORMATION_SCHEMA for a more "actual" number
- it also says that pulling data could take time (latency of data from 45 minutes to 3 hours, varies by view). do you have any customers that have built something similar to an AWR Warehouse of snowflake data, for more than 1 year retention and faster native table access?
- do you guys have a standard reporting toolkit out there to mine INFORMATION_SCHEMA (past 7 days) and SNOWFLAKE.ACCOUNT_USAGE (365 days history)?
<<<
! INFORMATION_SCHEMA
!! limitations 10K row limit and < 7 days
{{{
use warehouse TEST_DW;
use database TEST_DB;
select min(start_time), max(start_time)
from table(information_schema.query_history_by_warehouse(result_limit => 10000));
ran for 4.8 minutes
2021-01-19 15:19:41.206 -0800
2021-01-25 12:03:19.773 -0800
}}}
! SNOWFLAKE.ACCOUNT_USAGE
!! materialization of the schema (for faster access)
https://github.com/randypitcherii/snowflake_security_analytics/blob/master/fivetran/transformations/snowflake_monitoring_materialization.sql
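A minimal sketch of the materialization idea asked about above (AWR Warehouse-style retention): keep an incremental local copy of the ACCOUNT_USAGE view, keyed on START_TIME. The database/table names here are made up.
{{{
-- one-time: clone the view's shape into a local table
create table if not exists perf_db.monitoring.query_history_archive as
select * from snowflake.account_usage.query_history where 1=0;

-- periodic: append only rows newer than what we already archived
insert into perf_db.monitoring.query_history_archive
select qh.*
from snowflake.account_usage.query_history qh
where qh.start_time > (select coalesce(max(start_time), '1970-01-01'::timestamp_ltz)
                       from perf_db.monitoring.query_history_archive);
}}}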
! other examples
<<<
https://github.com/fivetran/benchmark/blob/master/302-SnowflakeTiming.sql
{{{
select substr(query_text, 0, 50), start_time, compilation_time, execution_time, total_elapsed_time
from table(information_schema.query_history_by_warehouse(result_limit => 200))
order by start_time
}}}
<<<
https://github.com/daanalytics/Snowflake/tree/master/setup
<<<
Setting up a Snowflake Sandbox environment
Find scripts and snippets in this folder to set up and get started with Snowflake. This piece is inspired by the following blog post by Venkatesh Sekar from Hashmapinc:
https://medium.com/hashmapinc/heres-your-day-1-and-2-checklist-for-snowflake-adoption-e0e7ff8f105a
This folder contains the following scripts:
Setup the 2nd Account Admin User; AccountAdmin.sql
Define Resource Monitors; ResourceMonitors.sql
Creating Roles; Roles.sql
Creating RolesHierarchy; RoleHierarchy.sql
Creating Sandbox Users; Users.sql
Granting Roles to Sandbox Users; GrantingRoles.sql
Creating Warehouses; Warehouses.sql
Creating Databases; Databases.sql
Next, follow Venkatesh Sekar's tips and try the following:
Using the sample data
Creating tables
Creating internal stages
Loading data from stage
Interacting with the data and getting to know Snowflake
<<<
<<showtoc>>
! api
https://docs.snowflake.com/en/user-guide/connecting.html
https://docs.snowflake.com/en/user-guide/jdbc-configure.html
https://discourse.snowplowanalytics.com/t/snowflake-loader-setup-ssl-error/3578/1
https://snowflakecommunity.force.com/s/question/0D50Z00007BKctqSAD/connection-issues-with-python-api
* APEX_WEB_SERVICE https://www.google.com/search?sxsrf=ALeKk025kqPiieZxUsrNeHMGFuhvw-HVGg%3A1600461592531&source=hp&ei=GBtlX4CDHrCb_QaT6rKwBw&q=apex_web_service&oq=apex_web_service&gs_lcp=CgZwc3ktYWIQAzICCAAyAggAMgIIADICCAAyAggAMgIIADICCAAyAggAMgIIADIECAAQQzoECCMQJzoFCAAQkQI6DgguELEDEIMBEMcBEKMCOggIABCxAxCDAToICC4QsQMQgwE6BQgAELEDOgcILhCxAxBDOgUILhCxAzoECAAQClCgAVisGmC2G2gAcAB4AIABhAKIAf0NkgEFNi45LjGYAQCgAQGqAQdnd3Mtd2l6&sclient=psy-ab&ved=0ahUKEwiA78mFyPPrAhWwTd8KHRO1DHYQ4dUDCAg&uact=5
https://docs.snowflake.com/en/user-guide/ecosystem-partner-connect.html
! plain JDBC
https://community.snowflake.com/s/question/0D50Z00007M8RHk/how-to-connect-from-oralce-sql-developer-to-snowflake-using-jdbc
https://www.snowflake.com/blog/ability-to-connect-to-snowflake-with-jdbc/
https://github.com/SlalomBuild/snowflake-on-ecs
Deploying a Modern Cloud-Based Data Engineering Architecture w/ AWS, Airflow & Snowflake-Wes Sankey https://www.youtube.com/watch?v=924V6edYW_w&t=2634s
[img(90%,90%)[ https://user-images.githubusercontent.com/3683046/108808041-b1d2c080-7573-11eb-97cc-6f168e380228.png ]]
[img(90%,90%)[ https://user-images.githubusercontent.com/3683046/108808044-b26b5700-7573-11eb-901a-4c99612433a7.png ]]
https://flowobjects.com/auditing-monitoring
https://github.com/BigDataDave1/SnowTel
https://bigdatadave.com/2020/03/15/tableau-snowflake-snowtel-snowflake-query-telemetry-in-a-tableau-workbook/
https://bigdatadave.com/2020/02/09/tableau-snowflake-missing-warehouse-when-opening-tableau-workbook/
snow concurrency graph https://www.youtube.com/watch?v=96s7yyNyZzc
https://github.com/RobertFehrmann/concurrency_framework
{{{
-- setup the context
use database concurrency_demo;
create or replace schema concurrency_demo.roles;
-- create output table
show grants to role PUBLIC;
create or replace table roles.roles_snapshot
as select * from table(result_scan(last_query_id())) where 1=0 ;
-- the following two statements need to be executed together
show roles;
create or replace table meta_schema.roles_demo as
select
seq4() statement_id
,parse_json ('{"Role":"'||"name"||'"'
||',"sqlquery":['
||'"SHOW GRANTS TO ROLE '||"name"||'"'
||',"INSERT INTO concurrency_demo.roles.roles_snapshot SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))"'
||']'
||'}') task
from table(result_scan(last_query_id()));
-- play with the parameters to evaluate the impact on run time
-- 5 workers, one statement block per batch
call meta_schema.sp_concurrent('PROCESS_REQUEST',1,5,'ROLES_DEMO');
-- same request with the two parameters swapped, to compare run times
call meta_schema.sp_concurrent('PROCESS_REQUEST',5,1,'ROLES_DEMO');
}}}
.
https://github.com/RobertFehrmann/fehrminator
https://www.snowflake.com/blog/synthetic-data-generation-at-scale-part-1/
https://www.snowflake.com/blog/synthetic-data-generation-at-scale-part-2/
! generator function
https://docs.snowflake.com/en/sql-reference/functions/generator.html
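A minimal example of the generator pattern (same approach as the WEBLOG load further down, just tiny):
{{{
-- 10 synthetic rows: a sequence plus a random metric
select seq4() as id,
       uniform(1, 100, random()) as metric
from table(generator(rowcount => 10));
}}}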
.
! dynamic upscale is determined by the virtual warehouse used
the hierarchy is table -> database -> warehouse (engine)
{{{
[user]#[warehouse]@[database].[schema]>
Return the 25 most recent queries that ran in the specified warehouse:
!queries warehouse=mywh
}}}
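In SQL terms the upscale knob lives on the warehouse, not on the table or database; a hedged sketch, reusing the hypothetical warehouse name mywh from the snowsql example above:
{{{
-- scale the compute up for a heavy job, then back down; tables/databases are untouched
alter warehouse mywh set warehouse_size = 'LARGE';
alter warehouse mywh set warehouse_size = 'XSMALL';
}}}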
! auto scaling and concurrency
https://stackoverflow.com/questions/61346996/snowflake-auto-scaling-and-concurrency
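For concurrency (as opposed to raw speed per query), scaling is horizontal via multi-cluster warehouses; a sketch, assuming an Enterprise-edition account and the same hypothetical mywh:
{{{
-- auto-scale between 1 and 3 clusters as concurrent queries start to queue
alter warehouse mywh set
  min_cluster_count = 1
  max_cluster_count = 3
  scaling_policy = 'STANDARD';
}}}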
! row by row api ETL
https://www.snowflake.com/blog/bringing-extensibility-to-data-pipelines-whats-new-with-snowflake-external-functions/
https://thenewstack.io/reduce-aws-processing-time-5-minutes-300-milliseconds/
<<showtoc>>
! direct connection
!! connect oracle directly to snowflake - using FREEWARE
{{{
others have created a freeware connector https://datacons.co.uk/oracle-snowflake-connector/ that uses APEX_WEB_SERVICE under the hood and utilizes the snowflake rest APIs
The freeware is invoked as a stored proc call; you then use JSON functions on the SELECT part (check the video demo)
https://datacons.co.uk/wp-content/uploads/2020/06/doc.v.0.8.pdf
https://www.doag.org/de/home/news/aufgezeichnet-present-snowflake-data-using-oracle-application-express-apex/detail/ <-- VIDEO DEMO
C_URL_LOGIN VARCHAR2(32767) := 'https://#ACCOUNT#.snowflakecomputing.com/session/v1/login-request?__uiAppName=Login';
C_URL_REQ_QUERY_STATUS VARCHAR2(32767) := 'https://#ACCOUNT#.snowflakecomputing.com/monitoring/queries/#QUERY_ID#';
C_URL_REQ_QUERY VARCHAR2(32767) := 'https://#ACCOUNT#.snowflakecomputing.com/queries/v1/query-request?requestId=#REQEST_ID#';
C_URL_REQ_QUERY_RESULT_BY_ID VARCHAR2(32767) := 'https://#ACCOUNT#.snowflakecomputing.com/queries/#QUERY_ID#/result';
}}}
youtube https://www.youtube.com/watch?v=817izaPoa84
!! connect oracle directly to snowflake - using GLUENT
{{{
I know Gluent also has a snowflake connector which will probably represent the snowflake tables seamlessly without any code change like they do in hadoop environments.
}}}
!! CDATA (paid)
https://www.cdata.com/kb/tech/snowflake-odbc-oracle-hs.rst
https://www.cdata.com/drivers/snowflake/odbc/
! using 3rd party - mentioned on snowflake site
{{{
go through a 3rd party (fivetran, matillion, HVR, etc.) and then to snowflake because snowflake has this jdbc driver
https://docs.snowflake.com/en/user-guide/ecosystem-partner-connect.html
}}}
! compared to bigquery
<<<
btw, federated queries from on-prem to BigQuery are also not yet possible as far as I know.
you can do federated queries through Cloud SQL https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries#setting_up_database_connections
but I'm not sure you can connect Cloud SQL to on-prem so that you'd be able to push that data up to BigQuery
I reached out to Kent Graziano of Snowflake
He confirmed no DB link type feature. And he mentioned most work with the tools I mentioned like Fivetran, HVR, etc.
<<<
!! oracle analytics cloud
https://docs.oracle.com/en/cloud/paas/analytics-cloud/acsds/connect-data-enterprise-data-models.html#GUID-3C8367BE-8CCF-4ECF-BD71-103C37F5C59A
!! oracle analytics desktop
https://docs.oracle.com/en/middleware/bi/analytics-desktop/bidvd/connect-snowflake-data-warehouse.html
https://guides.snowflake.com/
https://github.com/Snowflake-Labs/sfguides
https://github.com/Snowflake-Labs
https://www.snowflake.com/blog/using-materialized-views-to-solve-multi-clustering-performance-problems/
{{{
CREATE OR REPLACE TABLE WEBLOG (
CREATE_MS BIGINT,
PAGE_ID BIGINT,
TIME_TO_LOAD_MS INTEGER,
METRIC2 INTEGER,
METRIC3 INTEGER,
METRIC4 INTEGER,
METRIC5 INTEGER,
METRIC6 INTEGER,
METRIC7 INTEGER,
METRIC8 INTEGER,
METRIC9 INTEGER
);
}}}
{{{
INSERT INTO WEBLOG
SELECT
(SEQ8())::BIGINT AS CREATE_MS
,UNIFORM(1,9999999,RANDOM(10002))::BIGINT PAGE_ID
,UNIFORM(1,9999999,RANDOM(10003))::INTEGER TIME_TO_LOAD_MS
,UNIFORM(1,9999999,RANDOM(10005))::INTEGER METRIC2
,UNIFORM(1,9999999,RANDOM(10006))::INTEGER METRIC3
,UNIFORM(1,9999999,RANDOM(10007))::INTEGER METRIC4
,UNIFORM(1,9999999,RANDOM(10008))::INTEGER METRIC5
,UNIFORM(1,9999999,RANDOM(10009))::INTEGER METRIC6
,UNIFORM(1,9999999,RANDOM(10010))::INTEGER METRIC7
,UNIFORM(1,9999999,RANDOM(10011))::INTEGER METRIC8
,UNIFORM(1,9999999,RANDOM(10012))::INTEGER METRIC9
FROM TABLE(GENERATOR(ROWCOUNT => 10000000000))
ORDER BY CREATE_ms;
}}}
{{{
ALTER TABLE WEBLOG CLUSTER BY (CREATE_MS);
SELECT SYSTEM$CLUSTERING_INFORMATION( 'WEBLOG' , '(CREATE_MS)' );
SELECT SYSTEM$CLUSTERING_INFORMATION( 'WEBLOG' , '(PAGE_ID)' );
}}}
{{{
SELECT COUNT(*) CNT, AVG(TIME_TO_LOAD_MS) AVG_TIME_TO_LOAD
FROM WEBLOG
WHERE CREATE_MS BETWEEN 1000000000 AND 1000001000;
SELECT COUNT(*) CNT, AVG(TIME_TO_LOAD_MS) AVG_TIME_TO_LOAD
FROM WEBLOG
WHERE PAGE_ID=100000;
}}}
{{{
ALTER WAREHOUSE <NAME> SET WAREHOUSE_SIZE=XXXLARGE;
CREATE OR REPLACE MATERIALIZED VIEW MV_TIME_TO_LOAD
(CREATE_MS, PAGE_ID, TIME_TO_LOAD_MS) CLUSTER BY (PAGE_ID)
AS
SELECT CREATE_MS, PAGE_ID, TIME_TO_LOAD_MS
FROM WEBLOG;
ALTER WAREHOUSE <NAME> SET WAREHOUSE_SIZE=MEDIUM;
}}}
{{{
SELECT SYSTEM$CLUSTERING_INFORMATION
( 'MV_TIME_TO_LOAD' , '(PAGE_ID)' );
}}}
{{{
ALTER SESSION SET USE_CACHED_RESULT=FALSE;
SELECT COUNT(*),AVG(TIME_TO_LOAD_MS)
FROM WEBLOG
WHERE PAGE_ID=100000;
}}}
{{{
ALTER SESSION SET USE_CACHED_RESULT=FALSE;
SELECT COUNT(*),AVG(TIME_TO_LOAD_MS) AVG_TIME_TO_LOAD
FROM MV_TIME_TO_LOAD
WHERE PAGE_ID=100000;
}}}
{{{
import snowflake.connector
import time
import sys
def getSnowFlakeCntxt():
    # Set SnowFlake context. (<user> etc. are placeholders -- fill in your own values)
    SnowFlakectx = snowflake.connector.connect(
        user=<user>,
        password=<password>,
        account=<account>,
        warehouse=<warehouse>,
        database=<database>,
        schema=<schema>
    )
    return SnowFlakectx

def executeSnowflakeCmd(iteration, batch, seed):
    conn = getSnowFlakeCntxt()
    cs = conn.cursor()
    try:
        cs.execute("alter session set QUERY_TAG = %s", ("MV_INSERT_TEST",))
        insSQL = """
        insert into WEBLOG
        select
        ((seq8())+""" + str(iteration*batch+seed) + """)::bigint as create_ms
        ,uniform(1,9999999,random(10002))::bigint page_id
        ,uniform(1,9999999,random(10003))::integer time_to_load_ms
        ,uniform(1,9999999,random(10005))::integer metric2
        ,uniform(1,9999999,random(10006))::integer metric3
        ,uniform(1,9999999,random(10007))::integer metric4
        ,uniform(1,9999999,random(10008))::integer metric5
        ,uniform(1,9999999,random(10009))::integer metric6
        ,uniform(1,9999999,random(10010))::integer metric7
        ,uniform(1,9999999,random(10011))::integer metric8
        ,uniform(1,9999999,random(10012))::integer metric9
        from table(generator(rowcount => """ + str(batch) + """))
        order by create_ms
        """
        cs.execute(insSQL)
    finally:
        cs.close()

if __name__ == "__main__":
    iterations = int(sys.argv[1])
    batch_size = int(sys.argv[2])
    sleep_time = int(sys.argv[3])
    seed = int(sys.argv[4])
    for i in range(0, iterations):
        executeSnowflakeCmd(i, batch_size, seed)
        time.sleep(sleep_time)
    print("***** COMPLETED ******")
python TableInsert.py 20 600000 60 100000000
}}}
{{{
SELECT COUNT(*),AVG(TIME_TO_LOAD_MS)
FROM MV_TIME_TO_LOAD
WHERE PAGE_ID=100000;
}}}
{{{
SELECT 'WEBLOG', COUNT(*),AVG(TIME_TO_LOAD_MS)
FROM WEBLOG
WHERE PAGE_ID=100000
UNION ALL
SELECT 'MV_TIME_TO_LOAD',COUNT(*),AVG(TIME_TO_LOAD_MS)
FROM MV_TIME_TO_LOAD
WHERE PAGE_ID=100000;
}}}
! Query Optimization at Snowflake (Jiaqi Yan, SnowflakeDB)
https://www.youtube.com/watch?v=CPWn1SZUZqE
! spark pushdown
https://www.snowflake.com/blog/snowflake-spark-part-2-pushing-query-processing/
! control join order
you can't control the join order directly; the workaround is to create temp tables (see the sketch after the links below)
https://community.snowflake.com/s/article/Controlling-Join-Order
Join Order Benchmark on Snowflake and Postgres
https://courses.cs.washington.edu/courses/csed516/20au/projects/p05.pdf
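A sketch of the temp-table workaround described in that community article (table and column names are hypothetical): materializing the first join forces it to complete before the second one.
{{{
-- step 1: pin the join you want done first by materializing it
create or replace temporary table small_first as
select a.id, a.col1, b.col2
from table_a a
join table_b b on a.id = b.id;

-- step 2: join the materialized result to the remaining table
select s.col1, s.col2, c.col3
from small_first s
join table_c c on s.id = c.id;
}}}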
! CTE
https://docs.snowflake.com/en/user-guide/queries-cte.html
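A small CTE example against ACCOUNT_USAGE, just to show the syntax (any query would do):
{{{
with daily_load as (
  select date_trunc('day', start_time) as day, count(*) as queries
  from snowflake.account_usage.query_history
  group by 1
)
select day, queries
from daily_load
order by day;
}}}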
<<showtoc>>
! certifications
https://community.snowflake.com/s/snowprocertifications
https://www.webassessor.com/wa.do?page=logout
https://snowforce.my.salesforce.com/sfc/p/#i0000000hZh2/a/0Z0000001tR0/Oj8qDX9NcBYYywUDwySGaXVwD3zw3UH4Hy6XEzhX6aA
<<showtoc>>
https://trial.snowflake.com/
https://community.snowflake.com/s/article/Tuning-Snowflake
https://docs.snowflake.net/manuals/index.html
https://docs.snowflake.net/manuals/release-notes/2019-06.html
! snowflake courses
Snowflake Data Warehouse and Cloud Data Analytics https://www.udemy.com/course/draft/1855330/
Snowflake cloud data warehouse fundamentals https://www.udemy.com/course/snowflake-cloud-data-warehouse/
https://www.udemy.com/course/snowflake-snowpro-core-certification-practice-exams/
https://www.udemy.com/course/snowflake-cloud-data-warehousing-basics-to-advanced-concepts/
! snowflake books
Jumpstart Snowflake: A Step-by-Step Guide to Modern Cloud Analytics
https://learning.oreilly.com/library/view/jumpstart-snowflake-a/9781484253281/
! articles
* https://www.linkedin.com/pulse/ideal-warehouse-architecture-its-cloud-john-ryan/
* https://www.linkedin.com/pulse/does-pattern-based-dw-architecture-still-hold-value-today-bill-inmon/
* https://www.analytics.today/blog/db-arch-compared
! snowflake talks
* [[snowflakedb internals by Ashish Motivala]]
! snowflake tips and tricks
!! CLI
!!! snowsql
https://docs.snowflake.net/manuals/user-guide/snowsql-install-config.html
!!! snowcd (Connectivity Diagnostic Tool)
https://docs.snowflake.com/en/user-guide/snowcd.html
!! moving data
!!! s3 to snowflake
https://medium.com/snowflakedb/how-to-upload-data-from-aws-s3-to-snowflake-in-a-simple-way-12803f7041a6
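The usual pattern from that post is an external stage plus COPY INTO; a hedged sketch with made-up bucket, credential, and table names:
{{{
-- point an external stage at the bucket (keys elided)
create or replace stage my_s3_stage
  url = 's3://mybucket/path/'
  credentials = (aws_key_id = '...' aws_secret_key = '...');

-- bulk load into a target table
copy into my_table
  from @my_s3_stage
  file_format = (type = csv skip_header = 1);
}}}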
!!! RDBMS to snowflake
To load data from any RDBMS, you need to use ELT tools like Matillion, Talend, etc., or you can extract the data as flat files from the RDBMS, stage them to AWS S3 or Azure Blob, and load from there... I always recommend using ELT tools.
Loading data from XML or JSON is similar to loading from CSV or Parquet; one limitation to watch is that each XML/JSON file cannot be more than 16MB
!!! matillion
https://www.matillion.com/etl-for-snowflake/
matillion howto https://www.youtube.com/channel/UCTcFSUXuWKp8hBFznXDR2cw
Tutorial | Oracle: Database Query Component | Matillion ETL for Snowflake https://www.youtube.com/watch?v=cKTXw0L0Vhg&list=PLCe5hPcHMfas_mtyzgF-SYmpKJThf4qSW&index=13
!!! attunity
Attunity is an option to move data into Snowflake. Attunity has a product named Attunity Replicate which can write to Snowflake but cannot do any transformation within Snowflake (Snowflake table to Snowflake table). Attunity does CDC (change data capture) based replication and is a great tool for SAP ECC data replication into Snowflake.
!!! informatica / datastage
Informatica supports Snowflake; I have not tested Datastage, but I assume we can use the Snowflake JDBC/ODBC connector with it
!! transformations
All transformations can be handled within Snowflake as an ELT process; we avoid doing any transformation outside of Snowflake.
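A tiny sketch of what "transform inside Snowflake" looks like in practice (the raw/staging names are hypothetical): land the data as-is, then clean it with plain SQL:
{{{
create or replace table stg.orders_clean as
select order_id,
       try_to_date(order_date_raw, 'YYYY-MM-DD') as order_date,  -- tolerate bad dates as NULL
       upper(trim(customer_name)) as customer_name
from raw.orders;
}}}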
!! security
!!! row level security
Snowflake data access and control is done with role-based security, and you can also control data access with views and secure views. With the combination of these three (roles, views, and secure views) you can control data access at the row level, column level, etc.
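A hedged sketch of the row-level pattern: a secure view joined to an entitlements table keyed off CURRENT_ROLE() (the sales and region_entitlements tables and the role name are hypothetical):
{{{
create or replace secure view sales_rls as
select s.*
from sales s
join region_entitlements e
  on s.region = e.region
 and e.role_name = current_role();   -- each role sees only its own rows

grant select on view sales_rls to role analyst_emea;
}}}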
!!! security key
it is completely safe when you use your own key for encryption along with a Virtual Private Snowflake instance, which keeps only that enterprise's data and effectively becomes a private SaaS
!! parallel processing
The Snowflake architecture is what differentiates its performance; it is also referred to as EPP (elastic parallel processing).
!! SCD
!!! merge
https://www.udemy.com/course/draft/1855330/learn/lecture/12401122#questions/5191704
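A minimal SCD type-1 style MERGE sketch (the dimension/staging table names are hypothetical):
{{{
merge into dim_customer d
using stg_customer s
  on d.customer_id = s.customer_id
when matched then update set
  d.name = s.name,
  d.city = s.city
when not matched then insert (customer_id, name, city)
  values (s.customer_id, s.name, s.city);
}}}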
!! staging area
!!! snowflake s3, aws s3, or azure ?
In what scenario of staging do we have to go with Snowflake internal staging, and when should we go with AWS/Azure?
From a production and real-project standpoint, always use AWS S3 or Azure Blob storage; internal staging is just for ad-hoc testing and learning
Technically there is no (and will not be any) performance difference between Snowflake internal staging and AWS S3, as Snowflake internal staging also stores its files in Snowflake's own AWS S3 bucket. As you have seen in the demo with the 15K files loaded from AWS S3 into Snowflake running on AWS (note that Snowflake is also available on Azure), I ran the same load but with the files coming from Azure Blob into Snowflake running on AWS, and there was about a 30 to 45 second delay to load all the 15K files. AWS S3 to AWS Snowflake took about 2 minutes, and Azure Blob to AWS Snowflake took about 3 minutes.
!!! how to move data from an Azure ADLS path to Snowflake?
you have to create a stage something like this:
{{{
create or replace stage AZURESAINV
url = 'azure://grstr.blob.core.windows.net/salinv'
credentials=(azure_sas_token='yourtokenhere');
}}}
Check the video on the Snowflake UI (first add-on video)
!! streaming ELT (kafka)
https://medium.com/convoy-tech/logs-offsets-near-real-time-elt-with-apache-kafka-snowflake-473da1e4d776
!! benchmark comparison
!!! Cloud Data Warehouse Benchmark Redshift vs Snowflake vs BigQuery
https://www.youtube.com/watch?v=XpaN-PqSczM
https://fivetran.com/blog/warehouse-benchmark
!! limitations
!!! column size / file size
It is widely used; there have been 1400 enterprise customers since 2016 as of Dec 2018. It supports semi-structured data (JSON & XML) in native form and can run SQL on top of it. Unstructured data like audio, video, photos, PDF, etc. is not supported, but if you convert it into a binary file of up to 16MB (the maximum column data size) then you can store it in Snowflake as a single column. Please note that the Snowflake max column size is 16MB. We have stored photos in Snowflake.
.
.
.
CMU Advanced Database Systems - 25 Ashish Motivala [Snowflake] (Spring 2018) https://www.youtube.com/watch?v=dABd7JQz0A8&feature=youtu.be
Then another talk by Ashish https://youtu.be/KkeyjFMmIf8, about the specifics of the snowflakedb metadata layer running on foundationdb (acid nosql acquired by apple, released as OSS again last nov 2018)
{{{
Some of my notes from the talks:
Engineers are mostly ex-Oracle
They wrote the whole stack from scratch (compiler and exec engine)
metadata
- foundationdb
- zone maps then to the partitions (similar to https://docs.oracle.com/database/121/DWHSG/zone_maps.htm#DWHSG8949)
storage layer
- PAX format
- 16MB micropartitions across nodes
exec engine (written in c++ aka muscle) - monetdb, c-store
pushed based vs iterative based query execution model
DAG shaped plans
no transaction mgt (no buffer pool)
at run time make adaptive decisions on the fly (per operator ms polling)
no optimizer statistics, only used zone maps
join order picked by the optimizer (not yet adaptive)
auto skew avoidance (popular values) on build side of the join (broadcast vs hash vs adaptive)
workers execute the plan
compiler (query optimizer aka brain) - written in java
pushes created plans with zone maps info to the compiler
some queries can be answered from zone maps (part of metadata) without going to exec engine. example: select max from
evaluation of expressions/operators works with zone maps. date trunc expression "evaluation" is done in compiler and exec layer
join pruning plan passed to exec engine includes zone maps to scan (already determined)
majority of the time is spent on worker nodes (c++) but dealing with zone maps (java) on large complex query plans with thousands of operators, transforming then these plans will take time
every partition ingested has a zone map (predetermined)
stopped using bloom filters, not that effective. but uses it for some joins and other things
no indexes at all. building index table is more effective for trillion rows vs maintaining a real index
cloud services
heavily multi-tenant (access control, query optimizer, transaction mgr, etc)
hard state stored in kv store
concurrency control
snapshot isolation (berenson95)
SI based on MVCC, DML produces new versions of table (no redo/undo log) with modified set of file operations tracked for rollback purposes and time travel queries
pruning
aka small materialized aggregates (moerkotte98), zone maps (part of metadata), data skipping
availability
replicated across datacenters
weekly online upgrade (dancefloor/dancers analogy)
stateless services/versions
support for semi-structured and schema-less data
variant, array, json object (mongodb)
supported by all sql operators (joins, group by, sort, etc)
flattening (pivoting) a single object into rows (lateral flatten function)
auto columnar storage for schema-less data (variant), popular semi-structured key automatically transformed as column with zone maps
security
encrypted data import/export
nist 800-57, root keys in HSM
s3 access policies integration
RBAC within SQL (privs and roles)
2 factor auth and federated auth
post sigmod16 features
data sharing
serverless ingestion of data (ingestion service)
reclustering of data (approximate sorting of large amounts of data)
spark connector with pushdown
support for azure cloud
connectors
future work
advisors (for mview, clustering of data)
mviews
stored proc (done)
data lake support
streaming
time series
multi-cloud
global snowflake
replication
}}}
https://docs.snowflake.com/en/user-guide/snowsql-use.html
https://hevodata.com/learn/snowflake-sql/
Social Media Mining with R
http://www.amazon.com/Social-Media-Mining-Nathan-Danneman/dp/1783281774 , it-ebooks.info/book/3604/
Python: Mining the Social Web, 2nd Edition Data Mining Facebook, Twitter, LinkedIn, Google+, GitHub, and More
http://shop.oreilly.com/product/0636920030195.do , http://goo.gl/xgjgVW
! course materials and VM
books https://www.dropbox.com/sh/dhy35zi1zl3ib9l/AAAggUQhAXznvu5IDZggNM2Ja?dl=0
http://miningthesocialweb.com/quick-start/
http://vimeo.com/72383764
http://nbviewer.ipython.org/github/ptwobrussell/Mining-the-Social-Web-2nd-Edition/blob/master/ipynb/_Appendix%20A%20-%20Virtual%20Machine%20Experience.ipynb
https://learning.oreilly.com/videos/socket-io-solutions/9781786464927
* check the ZFS pool on the GUI admin page for the grants/ownerships
! related articles
http://serverfault.com/questions/235197/detect-if-solaris-has-chown-restricted
https://community.hpe.com/t5/System-Administration/chown-filename-Not-owner/td-p/3278705
https://www.google.com/search?q=solaris+rstchown&oq=solaris+rstchown&aqs=chrome..69i57j69i65l2j0l3.1647j0j1&sourceid=chrome&ie=UTF-8
http://www.tech-recipes.com/rx/498/prevent-solaris-users-from-changing-file-ownership-chown/
http://docs.oracle.com/cd/E19455-01/805-7229/6j6q8svf4/index.html
https://blogs.oracle.com/mandalika/entry/solaris_show_me_the_cpu
{{{
prtdiag | head -1
}}}
showcpucount
{{{
#!/bin/bash
/usr/bin/kstat -m cpu_info | egrep "chip_id|core_id|module: cpu_info" > /var/tmp/cpu_info.log
nproc=`(grep chip_id /var/tmp/cpu_info.log | awk '{ print $2 }' | sort -u | wc -l | tr -d ' ')`
ncore=`(grep core_id /var/tmp/cpu_info.log | awk '{ print $2 }' | sort -u | wc -l | tr -d ' ')`
vproc=`(grep 'module: cpu_info' /var/tmp/cpu_info.log | awk '{ print $4 }' | sort -u | wc -l | tr -d ' ')`
nstrandspercore=$(($vproc/$ncore))
ncoresperproc=$(($ncore/$nproc))
speedinmhz=`(/usr/bin/kstat -m cpu_info | grep clock_MHz | awk '{ print $2 }' | sort -u)`
speedinghz=`echo "scale=2; $speedinmhz/1000" | bc`
echo "Total number of physical processors: $nproc"
echo "Number of virtual processors: $vproc"
echo "Total number of cores: $ncore"
echo "Number of cores per physical processor: $ncoresperproc"
echo "Number of hardware threads (strands or vCPUs) per core: $nstrandspercore"
echo "Processor speed: $speedinmhz MHz ($speedinghz GHz)"
# now derive the vcpu-to-core mapping based on above information #
echo -e "\n** Socket-Core-vCPU mapping **"
let linenum=2
for ((i = 1; i <= ${nproc}; ++i ))
do
chipid=`sed -n ${linenum}p /var/tmp/cpu_info.log | awk '{ print $2 }'`
echo -e "\nPhysical Processor $i (chip id: $chipid):"
for ((j = 1; j <= ${ncoresperproc}; ++j ))
do
let linenum=($linenum + 1)
coreid=`sed -n ${linenum}p /var/tmp/cpu_info.log | awk '{ print $2 }'`
echo -e "\tCore $j (core id: $coreid):"
let linenum=($linenum - 2)
vcpustart=`sed -n ${linenum}p /var/tmp/cpu_info.log | awk '{ print $4 }'`
let linenum=(3 * $nstrandspercore + $linenum - 3)
vcpuend=`sed -n ${linenum}p /var/tmp/cpu_info.log | awk '{ print $4 }'`
echo -e "\t\tvCPU ids: $vcpustart - $vcpuend"
let linenum=($linenum + 4)
done
done
rm /var/tmp/cpu_info.log
}}}
{{{
karl@karl:karl2$ cat mpstat_1.txt | grep -v smtx | awk '{ print $1 " " $11 }' | sort -rnk2 | head -20
13:17:53 33890
13:17:52 26179
13:27:43 25517
13:21:21 25131
13:18:31 24948
13:17:54 24248
13:26:55 23915
13:19:22 23891
13:17:53 23095
13:17:45 22604
13:24:34 22296
13:28:10 21912
13:28:03 21713
13:16:24 21691
13:28:16 21313
13:19:14 20700
13:28:16 20593
13:28:50 20474
13:30:18 20451
13:30:04 20354
}}}
{{{
[oracle@desktopserver cpu_wait_jb]$ cat *txt | sort -rnk1 | uniq | less
}}}
http://www.data4ict.com/lpicourse/lpi101/lpi101chapter32_sort.asp
{{{
Sort "/work/sortdir/sortfile" based on the third numeric field
[root@ws ~]# sort -rnk 3 -t ',' /work/sortdir/sortfile
banana,yellow,45
cherry,red,10
apple,green,5
}}}
''uniq -c'' count the uniq rows
http://www.unix.com/unix-dummies-questions-answers/71104-sorting-using-count-grep-count.html
http://www.linuxjournal.com/article/7396
http://khaliqsperl.blogspot.com/2007/06/shell-csu-cut-sort-unique.html
{{{
Karl@Karl-LaptopDell ~/home
$ cat oak.txt | cut -d, -f2 | sort | uniq -c
1
176
2 Australia
1 Belgium
1 Canada
6 Denmark
1 Finland
1 Germany
1 Japan
4 Netherlands
1 Russia
1 Sweden
3 Switzerland
8 United Kingdom
27 United States
Karl@Karl-LaptopDell ~/home
$ cat oak.txt | cut -d, -f2 | grep "United States" | wc -l
27
}}}
Luca - What’s New with Spark Performance Monitoring in Apache Spark 3.0
https://canali.web.cern.ch/docs/WhatsNew_Spark3_Performance_Monitoring_DataAI_Summit_EU_Nov2020_LC.pdf
https://github.com/LucaCanali/sparkMeasure
Apache Spark 3.0 Memory Monitoring Improvements
https://db-blog.web.cern.ch/blog/luca-canali/2020-08-spark3-memory-monitoring
SPARK-29543 Support Structured Streaming UI https://issues.apache.org/jira/browse/SPARK-29543
SPARK-27189 Add Executor metrics and memory usage instrumentation to the metrics system https://issues.apache.org/jira/browse/SPARK-27189
[SPARK-27189][CORE] Add Executor metrics and memory usage instrumentation to the metrics system #24132 https://github.com/apache/spark/pull/24132
SPARK-23206 Additional Memory Tuning Metrics https://issues.apache.org/jira/browse/SPARK-23206
SPARK-26357 Expose executors' procfs metrics to Metrics system https://issues.apache.org/jira/browse/SPARK-26357
...
https://www.slideshare.net/databricks/building-robust-etl-pipelines-with-apache-spark <- GOOD STUFF
https://aws.amazon.com/blogs/big-data/using-spark-sql-for-etl/
https://github.com/anish749/spark2-etl-examples
https://medium.com/@mrpowers/how-to-write-spark-etl-processes-df01b0c1bec9
https://www.red-gate.com/simple-talk/sql/bi/scala-apache-spark-tandem-next-generation-etl-framework/
https://github.com/holdenk/spark-testing-base
https://github.com/juanrh/sscheck
http://www.scalatest.org/
https://databricks.com/session/get-rid-of-traditional-etl-move-to-spark
Using Spark as an ETL tool https://www.safaribooksonline.com/library/view/scala-guide-for/9781787282858/ch21s03.html
Chapter 2. Data Processing Pipeline Using Scala https://www.safaribooksonline.com/library/view/building-a-recommendation/9781785282584/ch02.html
https://www.google.com/search?q=scala+spark+etl&oq=scala+spark+etl&aqs=chrome..69i57j69i60l3j0l2.5253j1j1&sourceid=chrome&ie=UTF-8
!!! spark ETL talend
https://stackoverflow.com/questions/40371279/talend-and-apache-spark
https://www.talend.com/blog/2017/09/15/talend-apache-spark-technical-primer/
https://www.talend.com/blog/2018/02/21/talend-vs-spark-submit-configuration-whats-the-difference/
https://databricks.com/
https://databricks.com/azure-databricks-runtime-dialing-up-spark-performance-10x
Spark Vs. Snowflake: The Cloud Data Engineering (ETL) Debate
https://www.prophecy.io/blogs/spark-vs-snowflake-the-cloud-data-engineering-etl-debate
Spark vs. Snowflake: The Cloud Data Engineering (ETL) Debate
https://news.ycombinator.com/item?id=23838750
<<<
A quote from the article I would object to is "for large datasets and complex transformations this architecture is far from ideal. This is far from the world of open-source code on Git & CI/CD that data engineering offers - again locking you into proprietary formats, and archaic development processes."
No one is forcing you to use those tools on top of something like Snowflake (which is just a SQL interface). These days we have great open source tools (such as https://www.getdbt.com/) which let you write plain SQL that you can then deploy to multiple environments, perform automated testing and deployment, and do fun scripting. At the same time, dealing with large datasets in a spark world is full of lower level details, whereas in a SQL database it's the exact same query you would run on a smaller dataset.
The reality is that the ETL model is fading in favour of ELT (load data then transform it in the warehouse) because maintaining complex data pipelines and spark clusters make little sense when you can spin up a cloud data warehouse. In this world we don't just need less developer time, those developers don't have to be engineers that can write and maintain spark workloads/clusters, they can be analysts who are able to do transformations and have something valuable out to the business faster than the equivalent spark data pipeline can be built.
<<<
<<<
Very valid points: 1) Agree that Snowflake is far easier to use than Spark. 2) Agree that DBT is a great tool.
ETL workflows normally processing 10s of TBs and workflows with large and complex business logic is the context. With Spark code, you can break down your code into smaller pieces, see data flow across them, write unit tests, and have the entire thing still execute as a single SQL query.
Don't large SQL scripts become really gnarly for complex stuff - nothing short of magical incantations? I can't see data flow from a subquery for debugging without changing code.
Prophecy as a company is focused on making Spark significantly easier to use!
<<<
https://stackoverflow.com/questions/37471346/automatically-and-elegantly-flatten-dataframe-in-spark-sql
<<<
I’ve dealt with “flattening” data SQL issues before on Oracle, where XQuery and JSON_TABLE are used to flatten XML and JSON respectively, and on Hive, where LATERAL VIEW EXPLODE is used on JSON. Both Oracle and Hive hit memory-related issues: in Oracle the SQL terminates abruptly when the PGA gets exhausted; in Hive, java gc issues would hit at some point and the SQL would also terminate with a vertex error. The usual fix is to reduce the number of columns to explode (marketing data traits, for example) or break the query into smaller XQuery pieces, load those into temp tables, and then do the join from there. This way you don’t overwhelm the memory in one big shot.
In my denormalization problem, the goal is to flatten the 3 tables by ID into one table. The spark job completes with no exceptions; it’s just very slow. My suspicion is there’s data skew involved. But I feel there’s code-level refactoring that can make this faster, I just don’t know enough Spark and Spark SQL to be dangerous :-)
I did some due diligence on right-sizing the executors and came up with:
{{{
--num-executors 42 --executor-memory 44g --executor-cores 5
}}}
The current spark-submit settings seem to be skinny.
Right sizing the executors (thanks to the perf tuning course):
{{{
CPU: 26 x 8 = 208 Cores
Cores/Executor: 208/5 = 41.6 executors or 42 as round even number
Memory/Executor: 252 x 8 = 2016GB -> 2016/42= 48GB memory/executor
8% for executor memory overhead = 44GB memory/executor
spark-submit --num-executors 42 --executor-memory 44g --executor-cores 5
}}}
On the spark config (some anomalous settings):
spark.sql.broadcastTimeout: 36000 - this is quite long, a broadcast timeout of 10 hours; the default is 5 minutes
spark.driver.maxResultSize: 0 - this removes the driver result size cap (0 means unlimited), which seems like it might lead to OOM
On the code side of things (this is where I need most of the help):
First thing I need to get is the explain and spark UI profiling. I’ll work on that.
Below are my questions:
Have you encountered this join and flatten problem before?
If yes, what code or object level optimizations did you do?
I would appreciate any advice on how I could better approach the spark join and flatten. Always happy to learn on complex issues like this. Thanks in advance!
<<<
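As a follow-up to the join-and-flatten question above, here is a minimal PySpark sketch of the pattern; the table names (t1/t2/t3), join key (id), and array column (traits) are hypothetical placeholders, not the actual schema.
{{{
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode_outer
from pyspark.sql.types import StructType

spark = SparkSession.builder.getOrCreate()

def flatten_structs(df):
    # repeatedly promote struct fields to top-level columns until none remain
    while True:
        struct_cols = [f.name for f in df.schema.fields
                       if isinstance(f.dataType, StructType)]
        if not struct_cols:
            return df
        cols = []
        for f in df.schema.fields:
            if f.name in struct_cols:
                cols += [col(f.name + "." + sub).alias(f.name + "_" + sub)
                         for sub in df.select(f.name + ".*").columns]
            else:
                cols.append(col(f.name))
        df = df.select(cols)

# hypothetical source tables joined on a common id
t1 = spark.table("t1")
t2 = spark.table("t2")
t3 = spark.table("t3")
joined = t1.join(t2, "id").join(t3, "id")

# explode the (hypothetical) array column; each explode multiplies rows,
# so exploding fewer columns keeps memory pressure down, as noted above
exploded = joined.withColumn("trait", explode_outer("traits")).drop("traits")

flat = flatten_structs(exploded)
flat.write.mode("overwrite").saveAsTable("flat_target")
}}}
If skew on the join key is confirmed, salting the key or enabling the AQE skew-join handling in Spark 3.x would be the next things to try.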
https://canali.web.cern.ch/docs/SparkExecutorMemory.png
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/108953637-8dd9b280-7639-11eb-81d0-281cbf88a24e.png ]]
<<showtoc>>
! my (compiled) troubleshooting workflow
Understanding the spark explain plan and spark UI
https://github.com/karlarao/spark_sql_tuning
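For the explain-plan half of the workflow, a quick starting point (assuming a PySpark 3.x session) is the mode argument of explain() added in Spark 3.0:
{{{
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()

df = spark.sql("select id % 10 as k, count(*) as c from range(1000000) group by id % 10")
df.explain(mode="formatted")  # Spark 3.x modes: simple | extended | codegen | cost | formatted
df.collect()                  # run an action, then inspect the same plan in the Spark UI SQL tab
}}}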
! detailed HOWTOs
* Understanding Query Plans and Spark UIs https://databricks.com/session/understanding-query-plans-and-spark-uis
* Apache Spark Core—Deep Dive—Proper Optimization Daniel Tomes Databricks https://www.youtube.com/watch?v=daXEp4HmS-E
! troubleshooting before spark 3.0
https://stackoverflow.com/questions/58146859/what-happened-to-the-ability-to-visualize-query-plans-in-a-databricks-notebook , https://youtu.be/GQSNJAzxOr8?t=1561
[img(80%,80%)[ https://i.stack.imgur.com/JH9Xx.png ]]
mac hadoop development https://www.google.com/search?q=mac+hadoop+development&oq=mac+hadoop+development&aqs=chrome..69i57.3868j0j1&sourceid=chrome&ie=UTF-8
https://medium.com/@jeremytarling/apache-spark-and-hadoop-on-a-macbook-air-running-osx-sierra-66bfbdb0b6f7
https://community.hortonworks.com/articles/36832/setting-hadoop-development-environment-on-mac-os-x.html
http://hadoopsie.com/mac-setup-for-hadoop-dev/
Hadoop Map Reduce Programming 101 - 04 Setup Eclipse For Map Reduce Development https://www.youtube.com/watch?v=mRIsLSrHOj4
Setup Spark Development Environment on Windows - Introduction https://www.youtube.com/watch?v=Z7JXDg5jltQ&list=PLf0swTFhTI8oy5U0C5GwWR88zb8WKgqK2&index=1
https://www.youtube.com/channel/UCakdSIPsJqiOLqylgoYmwQg/search?query=development
Introduction - Setup Python, Pycharm and Spark on Windows https://www.youtube.com/watch?v=aUN5xnkmcmY&list=PLf0swTFhTI8pYbd8mr36LiYIOOY2xw5Iu&index=1
https://www.supergloo.com/fieldnotes/intellij-scala-spark/
https://hortonworks.com/tutorial/setting-up-a-spark-development-environment-with-scala/
https://medium.com/@mrpowers/creating-a-spark-project-with-sbt-intellij-sbt-spark-package-and-friends-cc9108751c28
* How to Read Spark Query Plans https://www.youtube.com/watch?v=UZt_tqx4sII
* How to Read Spark DAGs https://www.youtube.com/watch?v=LoFN_Q224fQ&list=PLmtsMNDRU0Bw6VnJ2iixEwxmOZNT7GDoC&index=2
! wiki guides
* best practices and tuning - https://umbertogriffo.gitbook.io/apache-spark-best-practices-and-tuning/
* spark internals - https://jaceklaskowski.gitbooks.io/mastering-spark-sql/content/spark-sql-LogicalPlan-ExplainCommand.html
! courses
* https://rockthejvm.com/p/spark-optimization
** https://github.com/rockthejvm/spark-optimization/compare/3.6-skewed-joins...master
* https://rockthejvm.com/p/spark-performance-tuning
** https://github.com/rockthejvm/spark-optimization-2/compare/5.1-data-skews...master
https://community.hortonworks.com/questions/33715/why-do-we-need-to-setup-spark-thrift-server.html
<<showtoc>>
! web ui
https://spark.apache.org/docs/latest/web-ui.html
<<<
Table of Contents
Jobs Tab
Jobs detail
Stages Tab
Stage detail
Storage Tab
Environment Tab
Executors Tab
SQL Tab
SQL metrics
Structured Streaming Tab
Streaming (DStreams) Tab
JDBC/ODBC Server Tab
<<<
! web ui SQL metrics
https://spark.apache.org/docs/latest/web-ui.html#sql-metrics
<<<
Here is the list of SQL metrics:
|!SQL metric|!Meaning|!Operators|
|number of output rows|the number of output rows of the operator|Aggregate operators, Join operators, Sample, Range, Scan operators, Filter, etc.|
|data size|the size of broadcast/shuffled/collected data of the operator|BroadcastExchange, ShuffleExchange, Subquery|
|time to collect|the time spent on collecting data|BroadcastExchange, Subquery|
|scan time|the time spent on scanning data|ColumnarBatchScan, FileSourceScan|
|metadata time|the time spent on getting metadata like number of partitions, number of files|FileSourceScan|
|shuffle bytes written|the number of bytes written|CollectLimit, TakeOrderedAndProject, ShuffleExchange|
|shuffle records written|the number of records written|CollectLimit, TakeOrderedAndProject, ShuffleExchange|
|shuffle write time|the time spent on shuffle writing|CollectLimit, TakeOrderedAndProject, ShuffleExchange|
|remote blocks read|the number of blocks read remotely|CollectLimit, TakeOrderedAndProject, ShuffleExchange|
|remote bytes read|the number of bytes read remotely|CollectLimit, TakeOrderedAndProject, ShuffleExchange|
|remote bytes read to disk|the number of bytes read from remote to local disk|CollectLimit, TakeOrderedAndProject, ShuffleExchange|
|local blocks read|the number of blocks read locally|CollectLimit, TakeOrderedAndProject, ShuffleExchange|
|local bytes read|the number of bytes read locally|CollectLimit, TakeOrderedAndProject, ShuffleExchange|
|fetch wait time|the time spent on fetching data (local and remote)|CollectLimit, TakeOrderedAndProject, ShuffleExchange|
|records read|the number of read records|CollectLimit, TakeOrderedAndProject, ShuffleExchange|
|sort time|the time spent on sorting|Sort|
|peak memory|the peak memory usage in the operator|Sort, HashAggregate|
|spill size|number of bytes spilled to disk from memory in the operator|Sort, HashAggregate|
|time in aggregation build|the time spent on aggregation|HashAggregate, ObjectHashAggregate|
|avg hash probe bucket list iters|the average bucket list iterations per lookup during aggregation|HashAggregate|
|data size of build side|the size of built hash map|ShuffledHashJoin|
|time to build hash map|the time spent on building hash map|ShuffledHashJoin|
<<<
! GENERAL monitoring and instrumentation
Monitoring and Instrumentation https://spark.apache.org/docs/latest/monitoring.html
http://spark.rstudio.com/deployment.html
https://github.com/rstudio/sparklyr
<<<
Yes, it is interesting that ARM CPUs are catching up to Intel. But more cores doesn't mean more performance (see image attached); there are always diminishing returns. ARM platforms could compete on the other stuff (caching, web, hadoop nodes) but not on the CPU (latency) intensive/sensitive database server and VM server use cases, where Intel is still king. Also check this http://www.anandtech.com/show/8776/arm-challinging-intel-in-the-server-market-an-overview/9
[img[ http://i.imgur.com/p7kZgqF.png ]]
<<<
ARM Challenging Intel in the Server Market: An Overview http://www.anandtech.com/show/8776/arm-challinging-intel-in-the-server-market-an-overview/9
http://ark.intel.com/products/75053/Intel-Xeon-Processor-E3-1230L-v3-8M-Cache-1_80-GHz
https://www.apm.com/products/data-center/x-gene-family/x-gene/
https://www.linleygroup.com/uploads/x-gene-3-white-paper-final.pdf
https://ark.intel.com/products/93790/Intel-Xeon-Processor-E7-8890-v4-60M-Cache-2_20-GHz
http://www.fujitsu.com/fts/products/computing/servers/mission-critical/primequest-2800e3/
[img[ http://i.imgur.com/WHXJqTy.png ]]
[img[ http://i.imgur.com/0SMfbQz.png ]]
! other articles
https://www.nextplatform.com/2019/04/24/why-single-socket-servers-could-rule-the-future/
https://www.gartner.com/doc/reprints?id=1-680TWAA&ct=190212&st=sb
intel data center strategy https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/data-center-strategy-paper.pdf
http://www.simplehelp.net/2009/05/25/how-to-create-a-multi-part-tar-file-with-linux/
{{{
-- split: create the archive, then cut it into 1MB chunks (db_backup.taraa, db_backup.tarab, ...)
tar -cjvpf exachk_consolidated.tar.bz2 exachk
tar -cf - exachk_consolidated.tar.bz2 | split -b 1m - db_backup.tar
-- put back: reassemble the chunks and extract from stdin
cat db_backup.tara* | tar xf -
}}}
Proactive Oracle Database Monitoring And Capacity Planning With Splunk
https://conf.splunk.com/files/2016/slides/proactive-oracle-database-monitoring-and-capacity-planning-with-splunk-software.pdf
.conf2016 - Proactive Oracle Database Monitoring and Capacity Planning With Splunk
https://www.evernote.com/shard/s18/client/snv?noteGuid=515722b4-467d-49d2-ac87-f7b2d93dbb1c&noteKey=9b0349aa2f86e07c&sn=https%3A%2F%2Fwww.evernote.com%2Fshard%2Fs18%2Fsh%2F515722b4-467d-49d2-ac87-f7b2d93dbb1c%2F9b0349aa2f86e07c&title=.conf2016%2B-%2BProactive%2BOracle%2BDatabase%2BMonitoring%2Band%2BCapacity%2BPlanning%2BWith%2BSplunk
https://github.com/tmuth/splunk-conf2016
https://medium.com/@odedia/a-tale-of-3-cloud-native-abstractions-for-event-processing-e7f3de484aa0
https://dataflow.spring.io/docs/batch-developer-guides/getting-started/task/
https://pivotal.io/platform/services-marketplace/microservices-management/spring-cloud-services
https://www.google.com/search?q=oracle+11.2.0.3+group+by+not+working+12.1.0.2&oq=oracle+11.2.0.3+group+by+not+working+12.1.0.2&aqs=chrome..69i57j69i64l3.16039j0j1&sourceid=chrome&ie=UTF-8
Oracle 12c - query behavior change (11g not thrown error while "group by" not mention in sub-query/in-line)
https://asktom.oracle.com/pls/apex/f?p=100:11:::NO::P11_QUESTION_ID:9536968200346861383
https://stackoverflow.com/questions/40546638/possible-oracle-bug-with-subqueries-and-group-functions
https://livesql.oracle.com/apex/livesql/file/content_FP499O87G9NJ3CA3JKIJA5XM8.html
https://oracle-base.com/articles/12c/adaptive-query-optimization-12cr1
<<showtoc>>
! blog post by Chris
https://blogs.oracle.com/sql/entry/the_problem_with_sql_calling
! other test case
{{{
I was thinking about the function concept in general (regardless of the caching) and I forgot the most important thing:
using a function in this way is *logically incorrect* (even though ETL usually doesn’t run in parallel with OLTP).
Each recursive SQL execution gets its own “starting SCN”, so read consistency is only guaranteed for the life of that
recursive statement, not for the life of the top-level SQL (this is intended behavior; the recursive SQL doesn’t inherit the top caller’s SCN).
It means that if somebody is changing the underlying tables, then the same function call with the same parameters *within
the same execution of the top-level SQL* can return different results, and this can cause logical data corruption.
I created a procedure “bump_rows” to just evenly increase the number of rows in the underlying table queried by the function;
here is the result for the top-level SQL before, during, and after the execution of bump_rows.
SQL> select /* BEFORE */ dept_id, find_num_trans(dept_id) from k_departments;
DEPT_ID FIND_NUM_TRANS(DEPT_ID)
---------- -----------------------
1 100700
2 100700
3 100700
4 100700
5 100700
6 100700
7 100700
8 100700
9 100700
10 100700
SQL> select /* DURING */ dept_id, find_num_trans(dept_id) from k_departments;
DEPT_ID FIND_NUM_TRANS(DEPT_ID)
---------- -----------------------
1 100700
2 100700
3 100800
4 100900
5 101000
6 101100
7 101200
8 101200
9 101200
10 101200
SQL> select /* AFTER */ dept_id, find_num_trans(dept_id) from k_departments;
DEPT_ID FIND_NUM_TRANS(DEPT_ID)
---------- -----------------------
1 101200
2 101200
3 101200
4 101200
5 101200
6 101200
7 101200
8 101200
9 101200
10 101200
}}}
''bump rows''
{{{
create or replace procedure bump_rows as
begin
for i in 1..5 loop
insert into k_transactions
select mod(rownum,10)+1, rownum
from dual
connect by rownum <= 1000;
commit;
dbms_lock.sleep(1);
end loop;
end;
/
the dbms_lock sleep is just there to make the behavior easier to reproduce;
you can also move the commit outside the FOR loop, it doesn't change the underlying concept
}}}
! references
https://blogs.oracle.com/sql/entry/optimizing_the_pl_sql_challenge6
https://blogs.oracle.com/sql/entry/optimizing_the_pl_sql_challenge
http://www.sqlines.com/online
https://roboquery.com/
https://www.compilerworks.com/
https://www.dbsolo.com/help/compare.html <- haven't used this, but it's both a schema compare and a DDL generator
* when using the schema/database diff tool of SQL Developer it's more intuitive to interpret the results if the diff is done both ways
* so if you are comparing PROD and TEST, open two SQL Developer instances, then do a PROD-to-TEST compare in one session and a TEST-to-PROD compare in the other
* in the output the left side is the source and the right side is the destination. see example below:
[img(50%,50%)[http://i.imgur.com/EI5JxYn.png]]
[img(80%,80%)[http://i.imgur.com/lAEpmKo.png]]
! oem schema compare
http://oemadmin.blogspot.com/2017/03/schema-comparison-for-2-different.html
https://dbakevlar.com/2016/06/em13c-configuration-management-and-comparing-targets/
! multi-platform database diff
https://www.dbsolo.com/schema_comparison.html
https://www.dbsolo.com/index.html
https://www.dbsolo.com/help/compare.html
https://www.dbsolo.com/help/DBSolo.html
! other references
https://community.oracle.com/blogs/dearDBA/2016/05/26/how-to-compare-2-schemas-and-implement-the-ddl-differences-with-sqldeveloper
https://www.thatjeffsmith.com/archive/2012/09/sql-developer-database-diff-compare-objects-from-multiple-schemas/
https://www.avioconsulting.com/blog/using-sql-developer-diff-feature
.
http://www.thatjeffsmith.com/archive/2015/06/your-most-requested-sql-developer-demos/
[img[http://i.imgur.com/AMXFywb.png]]
[img[http://i.imgur.com/tBsvODR.png]]
[img[http://i.imgur.com/r3bTody.png]]
http://kerryosborne.oracle-guy.com/2013/07/13/sql-translation-framework/
https://plsqldba.wordpress.com/2018/09/22/rewrite-a-sql-statement-using-dbms_sql_translator/
<<<
create_translation_profile.sql – creates a translation profile
create_translation.sql – adds a translation to a profile
drop_translation.sql – drops a translation from a profile
export_translation_profile.sql – shows the contents of a profile
enable_translation.sql – turns on translation for the current session
disable_translation.sql – turns off translation for the current session
trace_translation_on.sql – turns on tracing
trace_translation_off.sql – turns off tracing
translation_logon_trigger.sql – creates a logon trigger to enable translation
<<<
<<showtoc>>
! install instant client
!! watch this first
https://www.youtube.com/watch?v=Sz69kdItkPw
<<<
download
* Basic Package
* SQL*Plus Package
* Tools Package
* SDK Package
* JDBC Supplement Package
* ODBC Package
<<<
!! download
https://www.oracle.com/technetwork/topics/intel-macsoft-096467.html
!! follow the readme
https://www.oracle.com/technetwork/topics/intel-macsoft-096467.html#ic_osx_inst
!!! unzip and create directories
{{{
cd ~/
unzip *zip
mkdir ~/lib
ln -s ~/instantclient_12_2/libclntsh.dylib ~/lib/
ln -s ~/instantclient_12_2/libclntsh.dylib.12.1 ~/lib/
mkdir -p ~/instantclient_12_2/network/admin
}}}
!!! add entries in bash_profile
{{{
export PATH=~/instantclient_12_2:$PATH
export ORACLE_HOME=~/instantclient_12_2
export DYLD_LIBRARY_PATH=$ORACLE_HOME
export LD_LIBRARY_PATH=$ORACLE_HOME
export FORCE_RPATH=1
}}}
! install cx_oracle
{{{
pip3 install cx_Oracle
}}}
!! test cx_oracle
{{{
import cx_Oracle
con = cx_Oracle.connect(user='karlarao', password='karlarao', dsn='<put_hostname_here>:1521/cdb1')
cur = con.cursor()                                 # cursor() opens a cursor for the query
cur.execute('select table_name from user_tables')  # execute() parses and executes the statement
for row in cur.fetchall():                         # fetchall() returns the result rows
    print(row)
cur.close()
con.close()
}}}
{{{
>>> >>> >>> >>> ... ... ('DEPARTMENTS',)
('EMPLOYEES',)
('JOBS',)
('JOB_HISTORY',)
('LOCATIONS',)
('REGIONS',)
('TEMP1',)
('TEMP2',)
}}}
!! test cx_Oracle (2022)
* need to define the instant client path
{{{
(py3104) kaarao@kaarao-mac oci % cat test.py
#!/usr/bin/env python
import cx_Oracle
cx_Oracle.init_oracle_client(lib_dir=r"/Users/kaarao/Documents/oci/instantclient_19_8")
con = cx_Oracle.connect(user='admin', password='welcome1', dsn='192.168.1.236:1521/devpdb')
cur = con.cursor()
cur.execute('select table_name from user_tables')
for row in cur.fetchall():
    print(row)
cur.close()
con.close()
}}}
! other references
https://oracle.github.io/python-cx_Oracle/
http://molecularsciences.org/web/content/python3-oracle-database
https://jasonstitt.com/cx_oracle_on_os_x
https://medium.com/@ottercoder/3-problems-with-installing-cx-oracle-on-your-mac-8ac48da1f87e
http://www.cs.utexas.edu/~mitra/csSpring2012/cs327/cx_mac.html
https://thelaziestprogrammer.com/sharrington/databases/oracle/install-cx_oracle-mac
https://gist.github.com/thom-nic/6011715
.
! installation
{{{
-- first download the sql developer
-- then set the JAVA_HOME to the sql developer jre
cmd.exe
set JAVA_HOME=C:\Users\araoka\Documents\k\tools\sqldeveloper-4.1.3.20.78-x64\sqldeveloper\jdk\jre
}}}
! quick usage
{{{
sql.bat karlarao/<password>@//<host>:1521/<dbname>
sql.bat karlarao/<password>@//<host>:1521/<dbname> @test.sql
help
history usage
history time
set sqlformat csv
set head off
set pagesize 50
alias
repeat 10 1
info <table_name>
}}}
! usage
{{{
I use oracle sqlcl to access that warehouse from my desktop. It feels like command line too.
Install and setup
Download sqlcl here http://www.oracle.com/technetwork/developer-tools/sqlcl/overview/index.html
Unzip at C:\Users\araoka\Documents\k\tools\
Connect using
cd "C:\Users\araoka\Documents\k\tools\sqlcl-4.2.0.16.260.1205-no-jre\sqlcl\bin"
c:
set JAVA_HOME=C:\Users\araoka\Documents\k\tools\sqldeveloper-4.1.3.20.78-x64\sqldeveloper\jdk\jre
sql.bat karlarao/<password>@//<host>:1521/dbwh
}}}
! references
https://github.com/oradoc/sqlcl
https://github.com/mpkincai/sqlcl
Working with SQLite Databases using Python and Pandas
https://www.dataquest.io/blog/python-pandas-databases/?utm_source=dbweekly&utm_medium=email
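A minimal sketch of the pattern from that article (the database file and table names are hypothetical placeholders):
{{{
import sqlite3
import pandas as pd

con = sqlite3.connect("flights.db")                            # hypothetical database file
df = pd.read_sql_query("select * from airlines limit 5", con)  # query straight into a DataFrame
print(df)
df.to_sql("airlines_copy", con, if_exists="replace", index=False)  # write a DataFrame back as a table
con.close()
}}}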
{{{
#!/bin/bash
while :; do
sqlplus "/ as sysdba" <<-EOF
@sqlmon.sql
EOF
sleep 20
echo
done
}}}
{{{
ttitle left '*** GV$SQL_MONITOR ***' skip 1
-- with child address join
set pagesize 999
set lines 300
col status format a12
col inst format 99
col px1 format 999
col px2 format 999
col px3 format 999
col module format a20
col RMBs format 99999
col WMBs format 99999
col sql_exec_id format 9999999999
col username format a15
col sql_text format a70
col sid format 9999
col rm_group format a10
select
a.status,
decode(b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,'N','Y') Offload,
decode(b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,'Y','N') InMemPX,
b.EXECUTIONS exec,
round(a.ELAPSED_TIME/1000000,2) ela_tm,
round(a.CPU_TIME/1000000,2) cpu_tm,
round(a.USER_IO_WAIT_TIME/1000000,2) io_tm,
round((a.PHYSICAL_READ_BYTES/1024/1024)/NULLIF(nvl((a.ELAPSED_TIME/1000000),0),0),2) RMBs,
round((a.PHYSICAL_WRITE_BYTES/1024/1024)/NULLIF(nvl((a.ELAPSED_TIME/1000000),0),0),2) WMBs,
a.SID,
substr (a.MODULE, 1,16) module,
-- a.RM_CONSUMER_GROUP rm_group, -- new in 11204
a.SQL_ID,
a.SQL_PLAN_HASH_VALUE PHV,
a.sql_exec_id,
a.INST_ID inst,
a.USERNAME,
CASE WHEN a.PX_SERVERS_ALLOCATED IS NULL THEN NULL WHEN a.PX_SERVERS_ALLOCATED = 0 THEN 1 ELSE a.PX_SERVERS_ALLOCATED END PX1,
CASE WHEN a.PX_SERVER_SET IS NULL THEN NULL WHEN a.PX_SERVER_SET = 0 THEN 1 ELSE a.PX_SERVER_SET END PX2,
CASE WHEN a.PX_SERVER# IS NULL THEN NULL WHEN a.PX_SERVER# = 0 THEN 1 ELSE a.PX_SERVER# END PX3,
to_char(a.SQL_EXEC_START,'MMDDYY HH24:MI:SS') SQL_EXEC_START,
-- to_char((a.SQL_EXEC_START + round(a.ELAPSED_TIME/1000000,2)/86400),'MMDDYY HH24:MI:SS') SQL_EXEC_END,
substr(a.SQL_TEXT, 1,70) sql_text
from gv$sql_monitor a, gv$sql b
where a.sql_id = b.sql_id
and a.inst_id = b.inst_id
and a.sql_child_address = b.child_address
and a.status in ('QUEUED','EXECUTING')
-- and lower(a.module) like '%dynrole%'
-- a.SQL_ID in ('fjnfu5qn3krhn')
-- or a.status like '%ALL ROWS%'
-- or a.status like '%ERROR%'
order by a.status, a.SQL_EXEC_START, a.SQL_EXEC_ID, a.PX_SERVERS_ALLOCATED, a.PX_SERVER_SET, a.PX_SERVER# asc
/
ttitle left '*** GV$SESSION ***' skip 1
set pagesize 999
set lines 180
col inst for 9999
col username format a13
col prog format a10 trunc
col sql_text format a60 trunc
col sid format 9999
col child for 99999
col avg_etime for 999,999.99
break on sql_text
col sql_text format a30
col event format a20
col hours format 99999
select a.inst_id inst, sid, username, substr(program,1,19) prog, b.sql_id, child_number child, plan_hash_value, executions execs,
(elapsed_time/decode(nvl(executions,0),0,1,executions))/1000000 avg_etime,
substr(event,1,20) event,
substr(sql_text,1,30) sql_text,
LAST_CALL_ET/60/60 hours,
decode(b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,'No','Yes') Offload,
decode(b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,0,100*(b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES-b.IO_INTERCONNECT_BYTES)
/decode(b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,1,b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES)) "IO_SAVED_%"
from gv$session a, gv$sql b
where status = 'ACTIVE'
and username is not null
and a.sql_id = b.sql_id
and a.inst_id = b.inst_id
and a.sql_child_number = b.child_number
and sql_text not like 'select a.inst_id inst, sid, substr(program,1,19) prog, b.sql_id, child_number child,%' -- don't show this query
and sql_text not like 'declare%' -- skip PL/SQL blocks
order by hours desc, sql_id, child
/
ttitle left '*** GV$SESSION sort by INST_ID ***' skip 1
set lines 32767
col terminal format a4
col machine format a4
col os_login format a15
col oracle_login format a15
col osuser format a4
col module format a5
col program format a20
col schemaname format a5
-- col state format a8
col client_info format a5
col status format a4
col sid format 99999
col serial# format 99999
col unix_pid format a8
col sql_text format a30
col action format a8
col event format a40
col hours format 99999
select /* usercheck */ s.INST_ID, s.sid sid, lpad(p.spid,7) unix_pid, s.username oracle_login, substr(s.program,1,20) program,
s.sql_id, -- remove in 817, 9i
sa.plan_hash_value, -- remove in 817, 9i
substr(s.event,1,40) event,
substr(sa.sql_text,1,30) sql_text,
s.LAST_CALL_ET/60/60 hours
from gv$process p, gv$session s, gv$sqlarea sa
where p.addr=s.paddr
and s.username is not null
and s.sql_address=sa.address(+)
and s.sql_hash_value=sa.hash_value(+)
and sa.sql_text NOT LIKE '%usercheck%'
-- and lower(sa.sql_text) LIKE '%grant%'
-- and s.username = 'APAC'
-- and s.schemaname = 'SYSADM'
-- and lower(s.program) like '%uscdcmta21%'
-- and s.sid=12
-- and p.spid = 14967
-- and s.sql_hash_value = 3963449097
-- and s.sql_id = '5p6a4cpc38qg3'
-- and lower(s.client_info) like '%10036368%'
-- and s.module like 'PSNVS%'
-- and s.program like 'PSNVS%'
order by inst_id, sql_id
/
}}}
! sample output
{{{
STATUS O I EXEC ELA_TM CPU_TM IO_TM RMBS WMBS SID MODULE RM_GROUP SQL_ID PHV SQL_EXEC_ID INST USERNAME PX1 PX2 PX3 SQL_EXEC_START SQL_TEXT
------------ - - ---------- ---------- ---------- ---------- ------ ------ ----- -------------------- ---------- ------------- ---------- ----------- ---- --------------- ---- ---- ---- --------------- ----------------------------------------------------------------------
EXECUTING Y N 78 .05 .02 0 0 0 138 SQL*Plus REPORTS 56v09mkbstyaa 2117817910 16777356 1 KSO 32 092314 00:09:13 select count(*) from kso.skew2
EXECUTING Y N 0 0 0 0 167 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 1 092314 00:09:13
EXECUTING Y N 0 0 0 0 204 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 2 092314 00:09:13
EXECUTING Y N 0 0 0 0 225 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 3 092314 00:09:13
EXECUTING Y N 0 0 0 0 287 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 4 092314 00:09:13
EXECUTING Y N 0 0 0 0 302 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 5 092314 00:09:13
EXECUTING Y N 0 0 0 0 247 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 6 092314 00:09:13
EXECUTING Y N 0 0 0 0 326 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 7 092314 00:09:13
EXECUTING Y N 0 0 0 0 346 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 8 092314 00:09:13
EXECUTING Y N 0 0 0 0 367 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 9 092314 00:09:13
EXECUTING Y N 0 0 0 0 389 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 10 092314 00:09:13
EXECUTING Y N 0 0 0 0 409 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 11 092314 00:09:13
EXECUTING Y N 0 0 0 0 424 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 12 092314 00:09:13
EXECUTING Y N 0 0 0 0 449 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 13 092314 00:09:13
EXECUTING Y N 0 0 0 0 462 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 14 092314 00:09:13
EXECUTING Y N 0 0 0 0 3 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 15 092314 00:09:13
EXECUTING Y N 0 0 0 0 27 REPORTS 56v09mkbstyaa 2117817910 16777356 2 1 16 092314 00:09:13
EXECUTING Y N 78 0 0 0 646 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 17 092314 00:09:13
EXECUTING Y N 78 0 0 0 716 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 18 092314 00:09:13
EXECUTING Y N 78 0 0 0 777 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 19 092314 00:09:13
EXECUTING Y N 78 0 0 0 838 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 20 092314 00:09:13
EXECUTING Y N 78 0 0 0 898 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 21 092314 00:09:13
EXECUTING Y N 78 0 0 0 969 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 22 092314 00:09:13
EXECUTING Y N 78 0 0 0 1031 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 23 092314 00:09:13
EXECUTING Y N 78 0 0 0 1095 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 24 092314 00:09:13
EXECUTING Y N 78 0 0 0 1159 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 25 092314 00:09:13
EXECUTING Y N 78 0 0 0 1225 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 26 092314 00:09:13
EXECUTING Y N 78 0 0 0 1284 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 27 092314 00:09:13
EXECUTING Y N 78 0 0 0 390 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 28 092314 00:09:13
EXECUTING Y N 78 0 0 0 202 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 29 092314 00:09:13
EXECUTING Y N 78 0 0 0 66 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 30 092314 00:09:13
EXECUTING Y N 78 0 0 0 8 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 31 092314 00:09:13
EXECUTING Y N 78 0 0 0 71 REPORTS 56v09mkbstyaa 2117817910 16777356 1 1 32 092314 00:09:13
...
QUEUED Y N 78 0 0 0 1098 SQL*Plus REPORTS 56v09mkbstyaa 2117817910 16777360 1 KSO 092314 00:09:13 select count(*) from kso.skew2
QUEUED Y N 78 0 0 0 4 SQL*Plus REPORTS 56v09mkbstyaa 2117817910 16777362 1 KSO 092314 00:09:13 select count(*) from kso.skew2
266 rows selected.
}}}
backup of previous version
{{{
ttitle left '*** SQL MONITOR + V$SQL ***' skip 1
-- with child address join
set lines 300
col status format a20
col inst format 99
col px1 format 999
col px2 format 999
col px3 format 999
col module format a20
col RMBs format 99999
col WMBs format 99999
col sql_exec_id format 9999999999
col username format a10
col sql_text format a70
col sid format 9999
select
a.status,
decode(b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,'No','Yes') Offload,
a.INST_ID inst,
b.EXECUTIONS exec,
round(a.ELAPSED_TIME/1000000,2) ela_tm,
round(a.CPU_TIME/1000000,2) cpu_tm,
round(a.USER_IO_WAIT_TIME/1000000,2) io_tm,
round((a.PHYSICAL_READ_BYTES/1024/1024)/NULLIF(nvl((a.ELAPSED_TIME/1000000),0),0),2) RMBs,
round((a.PHYSICAL_WRITE_BYTES/1024/1024)/NULLIF(nvl((a.ELAPSED_TIME/1000000),0),0),2) WMBs,
a.SID,
substr (a.MODULE, 1,16) module,
a.SQL_ID,
a.SQL_PLAN_HASH_VALUE PHV,
a.sql_exec_id,
a.USERNAME,
CASE WHEN a.PX_SERVERS_ALLOCATED IS NULL THEN NULL WHEN a.PX_SERVERS_ALLOCATED = 0 THEN 1 ELSE a.PX_SERVERS_ALLOCATED END PX1,
CASE WHEN a.PX_SERVER_SET IS NULL THEN NULL WHEN a.PX_SERVER_SET = 0 THEN 1 ELSE a.PX_SERVER_SET END PX2,
CASE WHEN a.PX_SERVER# IS NULL THEN NULL WHEN a.PX_SERVER# = 0 THEN 1 ELSE a.PX_SERVER# END PX3,
to_char(a.SQL_EXEC_START,'MMDDYY HH24:MI:SS') SQL_EXEC_START,
to_char((a.SQL_EXEC_START + round(a.ELAPSED_TIME/1000000,2)/86400),'MMDDYY HH24:MI:SS') SQL_EXEC_END,
substr(a.SQL_TEXT, 1,70) sql_text
from gv$sql_monitor a, gv$sql b
where a.sql_id = b.sql_id
and a.inst_id = b.inst_id
and a.sql_child_address = b.child_address
-- and lower(a.module) like '%dynrole%'
-- a.SQL_ID in ('fjnfu5qn3krhn')
and a.status like '%EXECUTING%'
-- or a.status like '%ALL ROWS%'
-- or a.status like '%ERROR%'
order by a.SQL_EXEC_START, a.PX_SERVERS_ALLOCATED, a.PX_SERVER_SET, a.PX_SERVER# asc
/
/*
-- with no child address join
set lines 300
col status format a20
col inst format 99
col px1 format 999
col px2 format 999
col px3 format 999
col module format a20
col RMBs format 99999
col WMBs format 99999
col sql_exec_id format 9999999999
col username format a10
col sql_text format a50
col sid format 9999
select
status,
INST_ID inst,
round(ELAPSED_TIME/1000000,2) ela_tm,
round(CPU_TIME/1000000,2) cpu_tm,
round(USER_IO_WAIT_TIME/1000000,2) io_tm,
round((PHYSICAL_READ_BYTES/1024/1024)/NULLIF(nvl((ELAPSED_TIME/1000000),0),0),2) RMBs,
round((PHYSICAL_WRITE_BYTES/1024/1024)/NULLIF(nvl((ELAPSED_TIME/1000000),0),0),2) WMBs,
SID,
MODULE,
SQL_ID,
SQL_PLAN_HASH_VALUE PHV,
sql_exec_id,
USERNAME,
CASE WHEN PX_SERVERS_ALLOCATED IS NULL THEN NULL WHEN PX_SERVERS_ALLOCATED = 0 THEN 1 ELSE PX_SERVERS_ALLOCATED END PX1,
CASE WHEN PX_SERVER_SET IS NULL THEN NULL WHEN PX_SERVER_SET = 0 THEN 1 ELSE PX_SERVER_SET END PX2,
CASE WHEN PX_SERVER# IS NULL THEN NULL WHEN PX_SERVER# = 0 THEN 1 ELSE PX_SERVER# END PX3,
to_char(SQL_EXEC_START,'MMDDYY HH24:MI:SS') SQL_EXEC_START,
to_char((SQL_EXEC_START + round(ELAPSED_TIME/1000000,2)/86400),'MMDDYY HH24:MI:SS') SQL_EXEC_END,
substr(SQL_TEXT, 1,50) sql_text
from gv$sql_monitor
where
-- lower(module) like '%dynrole%'
-- SQL_ID in ('fjnfu5qn3krhn')
status like '%EXECUTING%'
-- or status like '%ALL ROWS%'
-- or status like '%ERROR%'
order by SQL_EXEC_START, PX_SERVERS_ALLOCATED, PX_SERVER_SET, PX_SERVER# asc
/
*/
ttitle left '*** V$SESSION ***' skip 1
set pagesize 999
set lines 180
col inst for 9999
col username format a13
col prog format a10 trunc
col sql_text format a60 trunc
col sid format 9999
col child for 99999
col avg_etime for 999,999.99
break on sql_text
col sql_text format a30
col event format a20
col hours format 99999
select a.inst_id inst, sid, username, substr(program,1,19) prog, b.sql_id, child_number child, plan_hash_value, executions execs,
(elapsed_time/decode(nvl(executions,0),0,1,executions))/1000000 avg_etime,
substr(event,1,20) event,
substr(sql_text,1,30) sql_text,
LAST_CALL_ET/60/60 hours,
decode(b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,'No','Yes') Offload,
decode(b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,0,100*(b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES-b.IO_INTERCONNECT_BYTES)
/decode(b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,1,b.IO_CELL_OFFLOAD_ELIGIBLE_BYTES)) "IO_SAVED_%"
from gv$session a, gv$sql b
where status = 'ACTIVE'
and username is not null
and a.sql_id = b.sql_id
and a.inst_id = b.inst_id
and a.sql_child_number = b.child_number
and sql_text not like 'select a.inst_id inst, sid, substr(program,1,19) prog, b.sql_id, child_number child,%' -- don't show this query
and sql_text not like 'declare%' -- skip PL/SQL blocks
order by sql_id, child
/
/*
set lines 32767
col terminal format a4
col machine format a4
col os_login format a15
col oracle_login format a15
col osuser format a4
col module format a5
col program format a20
col schemaname format a5
-- col state format a8
col client_info format a5
col status format a4
col sid format 99999
col serial# format 99999
col unix_pid format a8
col sql_text format a30
col action format a8
col event format a20
col hours format 99999
select /* usercheck */ s.INST_ID, s.sid sid, lpad(p.spid,7) unix_pid, s.username oracle_login, substr(s.program,1,20) program,
s.sql_id, -- remove in 817, 9i
sa.plan_hash_value, -- remove in 817, 9i
substr(s.event,1,20) event,
substr(sa.sql_text,1,30) sql_text,
s.LAST_CALL_ET/60/60 hours
from gv$process p, gv$session s, gv$sqlarea sa
where p.addr=s.paddr
and s.username is not null
and s.sql_address=sa.address(+)
and s.sql_hash_value=sa.hash_value(+)
and sa.sql_text NOT LIKE '%usercheck%'
-- and lower(sa.sql_text) LIKE '%grant%'
-- and s.username = 'APAC'
-- and s.schemaname = 'SYSADM'
-- and lower(s.program) like '%uscdcmta21%'
-- and s.sid=12
-- and p.spid = 14967
-- and s.sql_hash_value = 3963449097
-- and s.sql_id = '5p6a4cpc38qg3'
-- and lower(s.client_info) like '%10036368%'
-- and s.module like 'PSNVS%'
-- and s.program like 'PSNVS%'
order by inst_id, sql_id
/
*/
}}}
{{{
The "Execs" of SQL Monitor Report Is Larger Than The Execution Count of DML Statement Sometimes (Doc ID 2639748.1)
Applies to:
Oracle Database - Enterprise Edition - Version 12.2.0.1 and later
Information in this document applies to any platform.
Goal
The purpose of the note is to explain why sometimes the "Execs" of SQL Monitor Report is larger than the execution count of DML statement.
Solution
The Executions column "Execs" of SQL Monitor Report displays the number of times each line of the plan has been executed.
Due to write consistency, a DML operation needs to restart if another session changes its target key value during execution.
For example:
Test preparation:
grant dba to osp identified by osp;
conn osp/osp
Connected.
SQL> create table testa(col1 number,col2 number);
begin
for i in 1..10000 loop
insert into testa values(i,i);
end loop;
commit;
end;
/
exec DBMS_STATS.GATHER_TABLE_STATS('OSP', 'TESTA', method_opt => 'FOR ALL COLUMNS',NO_INVALIDATE=>false);
Session 1:
SQL> update testa set col2=col2+1 where col1=5038;
1 row updated.
Session 2:
-- Update 18 rows data using COL1 as condition
SQL> update /*+ monitor */ testa set col2=col2+1 where col1 in (1684,2243,2802,566,1125,3920,6715,7274,4479,5038,5597,6156,3361,7833,8392,8951,9510,10000);
->>>>>>>>>> Blocking by Session and hung.
Session 3:
-- Change the COL1 value of one of target rows of Session 2 and commit.
SQL> update testa set col1=col1+1 where col1 in (10000);
1 row updated.
SQL> commit;
Commit complete.
Session 4:
-- Check the "Execs" of SQL Monitor Report of the SQL statement of Session 2, currently the "Execs" is 1.
SQL> set pagesize 1000 linesize 20000 long 100000 numwidth 10 echo on
SQL> SELECT DBMS_SQLTUNE.report_sql_monitor(sql_id => '36yyhquwybkk3', type => 'TEXT') AS report FROM dual;
REPORT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL Monitoring Report
SQL Text
------------------------------
update /*+ monitor */ testa set col2=col2+1 where col1 in (1684,2243,2802,566,1125,3920,6715,7274,4479,5038,5597,6156,3361,7833,8392,8951,9510,10000)
Global Information
------------------------------
Status : EXECUTING
Instance ID : 1
Session : OSP (55:26075)
SQL ID : 36yyhquwybkk3
SQL Execution ID : 16777217
Execution Started : 02/17/2020 11:32:11
First Refresh Time : 02/17/2020 11:32:11
Last Refresh Time : 02/17/2020 11:32:43
Duration : 35s
Module/Action : SQL*Plus/-
Service : SYS$USERS
Program : sqlplus@celvpvm13887.us.oracle.com (TNS V1-V3)
Global Stats
============================================
| Elapsed | Cpu | Application | Buffer |
| Time(s) | Time(s) | Waits(s) | Gets |
============================================
| 32 | 0.00 | 32 | 27 |
============================================
SQL Plan Monitoring Details (Plan Hash Value=1298344373)
================================================================================================================================================
| Id | Operation | Name | Rows | Cost | Time | Start | Execs | Rows | Activity | Activity Detail |
| | | | (Estim) | | Active(s) | Active | | (Actual) | (%) | (# samples) |
================================================================================================================================================
| 0 | UPDATE STATEMENT | | | | 1 | +3 | 1 | 0 | | |
| -> 1 | UPDATE | TESTA | | | 35 | +1 | 1 | 0 | 100.00 | enq: TX - row lock contention (34) |
| -> 2 | TABLE ACCESS FULL | TESTA | 18 | 7 | 30 | +3 | 1 | 10 | | |
================================================================================================================================================
Session 1:
SQL> commit;
Commit complete.
Session 2:
-- Because Session 1 commited and released the lock, UPDATE proceed to execute and update 17 rows data according to the latest status of target status.
SQL> update /*+ monitor */ testa set col2=col2+1 where col1 in (1684,2243,2802,566,1125,3920,6715,7274,4479,5038,5597,6156,3361,7833,8392,8951,9510,10000);
17 rows updated.
SQL>
Session 4:
-- Check the "Execs" of SQL Monitor Report of the SQL statement of Session 2, the "Execs" become larger than 1.
SQL> SELECT DBMS_SQLTUNE.report_sql_monitor(sql_id => '36yyhquwybkk3', type => 'TEXT') AS report FROM dual;
REPORT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL Monitoring Report
SQL Text
------------------------------
update /*+ monitor */ testa set col2=col2+1 where col1 in (1684,2243,2802,566,1125,3920,6715,7274,4479,5038,5597,6156,3361,7833,8392,8951,9510,10000)
Global Information
------------------------------
Status : DONE
Instance ID : 1
Session : OSP (55:26075)
SQL ID : 36yyhquwybkk3
SQL Execution ID : 16777217
Execution Started : 02/17/2020 11:32:11
First Refresh Time : 02/17/2020 11:32:11
Last Refresh Time : 02/17/2020 11:33:02
Duration : 51s
Module/Action : SQL*Plus/-
Service : SYS$USERS
Program : sqlplus@celvpvm13887.us.oracle.com (TNS V1-V3)
Global Stats
=======================================================
| Elapsed | Cpu | Application | Other | Buffer |
| Time(s) | Time(s) | Waits(s) | Waits(s) | Gets |
=======================================================
| 51 | 0.01 | 51 | 0.01 | 162 |
=======================================================
SQL Plan Monitoring Details (Plan Hash Value=1298344373)
==============================================================================================================================================
| Id | Operation | Name | Rows | Cost | Time | Start | Execs | Rows | Activity | Activity Detail |
| | | | (Estim) | | Active(s) | Active | | (Actual) | (%) | (# samples) |
==============================================================================================================================================
| 0 | UPDATE STATEMENT | | | | 49 | +3 | 3 | 1 | | |
| 1 | UPDATE | TESTA | | | 52 | +1 | 3 | 1 | 100.00 | enq: TX - row lock contention (51) |
| 2 | TABLE ACCESS FULL | TESTA | 18 | 7 | 49 | +3 | 3 | 52 | | |
==============================================================================================================================================
If set 10046 events trace to above Session 2, we can find the restart is showing on the row source operation as "starts=x".
SQL ID: 36yyhquwybkk3 Plan Hash: 1298344373
update /*+ monitor */ testa set col2=col2+1
where
col1 in (1684,2243,2802,566,1125,3920,6715,7274,4479,5038,5597,6156,3361,
7833,8392,8951,9510,10000)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 45.88 0 71 92 17
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.01 45.88 0 71 92 17
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 107
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 UPDATE TESTA (cr=71 pr=0 pw=0 time=45880583 us starts=3) <<<<<<<<<<-------starts=3 indicates this operation restarted
52 52 52 TABLE ACCESS FULL TESTA (cr=70 pr=0 pw=0 time=3386 us starts=3 cost=7 size=144 card=18) <<<<<<<<<<-------starts=3 indicates this operation restarted
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
enq: TX - row lock contention 1 45.87 45.87
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 15.88 15.88
********************************************************************************
}}}
<<<
https://www.red-gate.com/simple-talk/sql/oracle/execution-plans-part-14-sql-monitoring/
the “Start Active” (how many seconds through the execution were we before this operation started) and
“Time Active (s)” (for how long was this operation active) give you a little better precision in terms of sequencing.
<<<
.
{{{
Creating a history of sql monitor reports
$Header: 215187.1 0_mon_readme.txt 11.4.5.8 2013/05/10 carlos.sierra $
Requires Oracle Tuning Pack license.
1. Open a new session and execute 1_mon_repository.sql followed by 2_mon_capture.sql.
The latter will loop indefinitely.
2. On a second session execute 3_mon_reports.sql every so often.
3. When ready to review unzip mon_reports.zip into a directory and browse mon_main.html.
215187.1 SQLTXPLAIN (SQLT) Tool that helps to diagnose SQL statements performing poorly
Download SQLTXPLAIN (SQLT)
10.2, 11.1, 11.2 ,12.1, and 12.2 download
}}}
! on ADG
{{{
You can modify the code to poll the ADG and insert into a real table on the primary. That gets you both the summary and the SQL Monitor report in a table.
Just an idea, I haven't tried this.
}}}
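A rough (untested) sketch of that idea with cx_Oracle; the DSNs, credentials, and the SQLMON_HISTORY table are hypothetical placeholders:
{{{
import time
import cx_Oracle

# hypothetical connections: read-only service on the ADG, read-write on the primary
adg = cx_Oracle.connect(user="monitor", password="secret",
                        dsn="adg-scan:1521/ro_service")
primary = cx_Oracle.connect(user="monitor", password="secret",
                            dsn="prim-scan:1521/rw_service")

while True:
    # poll the standby for currently executing monitored SQL
    rows = adg.cursor().execute("""
        select sql_id, sql_exec_id, status, round(elapsed_time/1e6,2)
        from v$sql_monitor
        where status = 'EXECUTING'""").fetchall()
    if rows:
        cur = primary.cursor()
        cur.executemany(
            "insert into sqlmon_history "
            "(sql_id, sql_exec_id, status, ela_sec) values (:1, :2, :3, :4)",
            rows)
        primary.commit()
    time.sleep(20)
}}}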
! Text version of the SQL Monitor list on OEM
{{{
set lines 300
col status format a20
col inst format 99
col px1 format 999
col px2 format 999
col px3 format 999
col RMBs format 99999
col WMBs format 99999
col sql_exec_id format 9999999999
col username format a10
col sql_text format a20
col sid format 9999
select
status,
INST_ID inst,
round(ELAPSED_TIME/1000000,2) ela_tm,
round(CPU_TIME/1000000,2) cpu_tm,
round(USER_IO_WAIT_TIME/1000000,2) io_tm,
round((PHYSICAL_READ_BYTES/1024/1024)/(ELAPSED_TIME/1000000),2) RMBs,
round((PHYSICAL_WRITE_BYTES/1024/1024)/(ELAPSED_TIME/1000000),2) WMBs,
SID,
SQL_ID,
SQL_PLAN_HASH_VALUE PHV,
sql_exec_id,
USERNAME,
CASE WHEN PX_SERVERS_ALLOCATED IS NULL THEN NULL WHEN PX_SERVERS_ALLOCATED = 0 THEN 1 ELSE PX_SERVERS_ALLOCATED END PX1,
CASE WHEN PX_SERVER_SET IS NULL THEN NULL WHEN PX_SERVER_SET = 0 THEN 1 ELSE PX_SERVER_SET END PX2,
CASE WHEN PX_SERVER# IS NULL THEN NULL WHEN PX_SERVER# = 0 THEN 1 ELSE PX_SERVER# END PX3,
to_char(SQL_EXEC_START,'MMDDYY HH24:MI:SS') SQL_EXEC_START,
to_char((SQL_EXEC_START + round(ELAPSED_TIME/1000000,2)/86400),'MMDDYY HH24:MI:SS') SQL_EXEC_END,
substr(SQL_TEXT, 1,16) sql_text
from gv$sql_monitor
-- where
-- SQL_ID in ('fjnfu5qn3krhn')
-- status like '%EXECUTING%'
-- or status like '%ALL ROWS%'
-- or status like '%ERROR%'
-- sid = 424
-- and username = 'SYSADM'
order by SQL_EXEC_START, PX_SERVERS_ALLOCATED, PX_SERVER_SET, PX_SERVER# asc;
STATUS INST ELA_TM CPU_TM IO_TM RMBS WMBS SID SQL_ID PHV SQL_EXEC_ID USERNAME PX1 PX2 PX3 SQL_EXEC_START SQL_EXEC_END SQL_TEXT
-------------------- ---- ---------- ---------- ---------- ------ ------ ----- ------------- ---------- ----------- ---------- ---- ---- ---- --------------- --------------- --------------------
EXECUTING 2 13408.22 11695.04 1549.89 1438 0 2135 fjnfu5qn3krhn 374179282 33554432 SYSADM 8 061612 12:43:49 061612 16:27:17 SELECT PT.SEQ_NB
EXECUTING 2 13.25 1.51 11.84 6 5 1553 fjnfu5qn3krhn 374179282 33554432 1 1 061612 12:43:49 061612 12:44:02
EXECUTING 2 10.93 1.55 9.46 7 6 1829 fjnfu5qn3krhn 374179282 33554432 1 2 061612 12:43:49 061612 12:44:00
EXECUTING 2 9.05 1.51 7.65 8 8 2137 fjnfu5qn3krhn 374179282 33554432 1 3 061612 12:43:49 061612 12:43:58
EXECUTING 2 12.74 1.62 11.25 6 6 2454 fjnfu5qn3krhn 374179282 33554432 1 4 061612 12:43:49 061612 12:44:02
Name Null? Type
---------------------------------------------------------------------------- -------- ----------------------------------------------------------
1 INST_ID NUMBER
2 KEY NUMBER
3 STATUS VARCHAR2(76)
4 USER# NUMBER
5 USERNAME VARCHAR2(30)
6 MODULE VARCHAR2(64)
7 ACTION VARCHAR2(64)
8 SERVICE_NAME VARCHAR2(64)
9 CLIENT_IDENTIFIER VARCHAR2(64)
10 CLIENT_INFO VARCHAR2(64)
11 PROGRAM VARCHAR2(48)
12 PLSQL_ENTRY_OBJECT_ID NUMBER
13 PLSQL_ENTRY_SUBPROGRAM_ID NUMBER
14 PLSQL_OBJECT_ID NUMBER
15 PLSQL_SUBPROGRAM_ID NUMBER
16 FIRST_REFRESH_TIME DATE
17 LAST_REFRESH_TIME DATE
18 REFRESH_COUNT NUMBER
19 SID NUMBER
20 PROCESS_NAME VARCHAR2(5)
21 SQL_ID VARCHAR2(13)
22 SQL_TEXT VARCHAR2(2000)
23 IS_FULL_SQLTEXT VARCHAR2(4)
24 SQL_EXEC_START DATE
25 SQL_EXEC_ID NUMBER
26 SQL_PLAN_HASH_VALUE NUMBER
27 EXACT_MATCHING_SIGNATURE NUMBER
28 FORCE_MATCHING_SIGNATURE NUMBER
29 SQL_CHILD_ADDRESS RAW(8)
30 SESSION_SERIAL# NUMBER
31 PX_IS_CROSS_INSTANCE VARCHAR2(4)
32 PX_MAXDOP NUMBER
33 PX_MAXDOP_INSTANCES NUMBER
34 PX_SERVERS_REQUESTED NUMBER
35 PX_SERVERS_ALLOCATED NUMBER
36 PX_SERVER# NUMBER
37 PX_SERVER_GROUP NUMBER
38 PX_SERVER_SET NUMBER
39 PX_QCINST_ID NUMBER
40 PX_QCSID NUMBER
41 ERROR_NUMBER VARCHAR2(160)
42 ERROR_FACILITY VARCHAR2(4)
43 ERROR_MESSAGE VARCHAR2(256)
44 BINDS_XML CLOB
45 OTHER_XML CLOB
46 ELAPSED_TIME NUMBER
47 QUEUING_TIME NUMBER
48 CPU_TIME NUMBER
49 FETCHES NUMBER
50 BUFFER_GETS NUMBER
51 DISK_READS NUMBER
52 DIRECT_WRITES NUMBER
53 IO_INTERCONNECT_BYTES NUMBER
54 PHYSICAL_READ_REQUESTS NUMBER
55 PHYSICAL_READ_BYTES NUMBER
56 PHYSICAL_WRITE_REQUESTS NUMBER
57 PHYSICAL_WRITE_BYTES NUMBER
58 APPLICATION_WAIT_TIME NUMBER
59 CONCURRENCY_WAIT_TIME NUMBER
60 CLUSTER_WAIT_TIME NUMBER
61 USER_IO_WAIT_TIME NUMBER
62 PLSQL_EXEC_TIME NUMBER
63 JAVA_EXEC_TIME NUMBER
}}}
! my notes on sqoop benchmarking
https://gist.github.com/karlarao/856815e1cae7f5496dcb50359b7c8485
<<<
test cases
40 threads 16GB chunk 1000 fetch size no hcc
40 threads 16GB chunk 1000 fetch size hcc
20 threads 16GB chunk 1000 fetch size no hcc
20 threads 16GB chunk 1000 fetch size hcc
20 threads 64GB chunk 1000 fetch size no hcc
20 threads 64GB chunk 1000 fetch size hcc
20 threads 64GB chunk 5000 fetch size no hcc
20 threads 64GB chunk 5000 fetch size hcc
100 threads 64GB chunk 5000 fetch size no hcc
100 threads 64GB chunk 5000 fetch size hcc
<<<
<<<
What we've noticed on performance is as follows:
* each sqoop thread is one sql_id in the oracle database, so 40 threads mean 40 sql_ids; but, say, only 3 of those would really be fetching data while 37 sit inactive - this can be seen from images[1-4] below
* increasing threads has the most impact from what we can see - from 40 threads (doing 116-117MB/sec) to 100 (doing 184-217MB/sec)
* more threads doesn't mean linear performance increase (at some point we will hit the elbow :) of the curve), but it drives more active sessions in the database to do the work. see [ashtop on 40 threads](#ashtop-on-40-threads) and [ashtop on 100 threads](#ashtop-on-100-threads) below
* the "mapreduce" part of sqoop is still a mystery because we can't see the breakdown of elapsed time on the mapreduce side. for the 40-thread run that spent 93 seconds on mapreduce, we'd like to know whether oracle or hadoop is the bottleneck
* at one point i saw the "library cache lock" event when the thread sessions were spawned in the database; because each session is a new sql_id i suppose that time is parsing, but it's just seconds and not really substantial relative to the total elapsed time
* i was also thinking that if we push, say, 200 threads that would probably translate to 16 AAS CPU on the db side, and probably more bandwidth. the database has to parse all those 200 sql_ids (hard parse), which should be fine i think. the worst case is all 200 sessions being active, which would translate 1:1 to AAS CPU and leave the whole database cluster out of CPU
Some thoughts by gluent peeps:
* Sqoop has a fairly long bootstrap time (30s perhaps) so if your Sqoops are over in a minute or two then the startup time is hurting you and bigger chunks might help.
* In all of our testing it is the write to Avro that is the bottleneck. We typically see lots of Oracle sessions doing a bit of work and a *lot* of CPU being burned in the Hadoop cluster
* You should get a good view of the Hadoop side CPU from Cloudera Manager's charts
* Also worth looking in resource manager UI to see if the mappers are queuing or all getting on CPU. Having said that, if your Sqoops are finishing in 1 or 2 minutes then, my guess would be they are not being queued
<<<
https://aws.amazon.com/blogs/big-data/use-sqoop-to-transfer-data-from-amazon-emr-to-amazon-rds/
https://weidongzhou.wordpress.com/2015/10/01/export-data-from-hive-table-to-oracle-database/#comments
http://www.insightcrunch.com/2016/10/sqoop-import-and-export-tables-from.html
{{{
sqoop export \
--connect jdbc:oracle:thin:@enkx3-scan:1521:dbm1 \
--username wzhou \
--password wzhou \
--direct \
--export-dir '/user/hive/warehouse/test_oracle.db/my_all_objects_sqoop' \
--table WZHOU.TEST_IMPORT_FROM_SCOOP \
--fields-terminated-by '\001'

Note: the --export-dir argument specifies the location of the hive source directory, not an individual filename. --fields-terminated-by '\001' must be included in the command. Otherwise, you will see the following error:

Caused by: java.lang.RuntimeException: Can't parse input data: 'SYSTEMLOGMNR_REFCON$7604TABLE2013-03-12'
}}}
..
https://community.hortonworks.com/articles/70258/sqoop-performance-tuning.html
https://hadoopjournal.wordpress.com/2017/08/09/sqoop-job-performance-tuning-techniques/
https://stackoverflow.com/questions/39868438/how-to-optimize-sqoop-import
https://community.hortonworks.com/questions/24640/sqoop-with-condition-is-required-if-we-use-query-o.html
sqoop 1.4.5 with oraoop integrated: http://blog.cloudera.com/blog/2014/11/how-apache-sqoop-1-4-5-improves-oracle-databaseapache-hadoop-integration/
As per Tanel (CCd)
The LinkedIn project name is DataBus
Uses ORA_ROWSCN for incremental extraction
Among other stuff that it does
https://github.com/linkedin/databus/wiki/Connecting-Databus-to-an-Oracle-Database
-- os command dbms_scheduler
http://www.red-database-security.com/tutorial/run_os_commands_via_dbms_scheduler.html
https://tinky2jed.wordpress.com/technical-stuff/oracle-stuff/creating-an-oracle-job-that-runs-an-os-executable/
http://msutic.blogspot.com/2015/05/how-to-pass-arguments-to-os-shell.html
http://msutic.blogspot.com/2013/10/oracle-scheduler-external-jobs-and.html
http://berxblog.blogspot.com/2012/02/restore-dbmsschedulercreatecredential.html
http://www.nocoug.org/download/2011-11/Maria_Colgan_Optimizer_Statistics.pdf
https://blogs.oracle.com/optimizer/entry/gathering_optimizer_statistics_is_one
https://blogs.oracle.com/optimizer/entry/how_do_i_use_concurrent
FAQ: Gathering Concurrent Statistics Using DBMS_STATS Frequently Asked Questions [ID 1555451.1]
concurrent + auto degree https://blogs.oracle.com/optimizer/how-to-gather-optimizer-statistics-fast
sample code: https://github.com/oracle-samples/oracle-db-examples/tree/main/optimizer/faster_stats
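A minimal sketch of combining CONCURRENT with AUTO_DEGREE as described in the post above, driven through cx_Oracle; the connection details are placeholders, and the preference values should be checked against your version:
{{{
import cx_Oracle

# hypothetical connection details; needs a privileged account,
# job_queue_processes > 0, and Resource Manager active for CONCURRENT
con = cx_Oracle.connect(user="system", password="secret", dsn="db-host:1521/pdb1")
cur = con.cursor()

# 12c-style preference values ('ALL'/'MANUAL'/'AUTOMATIC'/'OFF'); 11.2 uses TRUE/FALSE
cur.execute("begin dbms_stats.set_global_prefs('CONCURRENT','ALL'); end;")
cur.execute("""
    begin
      dbms_stats.gather_schema_stats(
        ownname => 'HR',
        degree  => dbms_stats.auto_degree);  -- let Oracle pick the PX degree per object
    end;""")
con.close()
}}}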
{{{
-- gathers statistics for all fixed objects (dynamic performance tables)
BEGIN
DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/
-- gathers statistics for dictionary schemas 'SYS', 'SYSTEM' and schemas of RDBMS components
BEGIN
DBMS_STATS.GATHER_DICTIONARY_STATS;
END;
/
}}}
http://juliandontcheff.wordpress.com/2011/06/04/on-gathering-dictionary-statistics-do-i-analyze-the-sys-schema/
https://blogs.oracle.com/optimizer/entry/fixed_objects_statistics_and_why
http://oracleappstechnology.blogspot.com/2008/09/difference-between-gatherfixedobjectsta.html
http://www.oracle-base.com/articles/11g/StatisticsCollectionEnhancements_11gR1.php
Karen Morton wrote an excellent paper on the
very complicated subject of gathering appropriate object statistics in 2009 called Managing Statistics for
Optimal Query Performance. It can be found at http://method-r.com/downloads/doc_download/11-managing-statistics-for-optimal-query-performance-karen-morton
see [[Histogram]]
http://www.oaktable.net/content/investigation-collecting-cost-base-optimizer-statistic
http://www.rittmanmead.com/2007/09/fixing-up-performance/
http://www.rittmanmead.com/2010/08/more-on-interval-partitioning/
http://www.rittmanmead.com/2009/12/transcend-and-index-maintenance/
http://www.rittmanmead.com/2006/07/partition-exchange-loading-using-owb10gr2/
https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics
https://goldparrot.files.wordpress.com/2011/05/best_practices_for_dbms_stats.pdf
https://www.evernote.com/shard/s48/sh/20bcaa45-b56f-4b78-9f6a-7957d8b89eb2/c34e0a2b222eb43ef19358f0dd083634
{{{
exec dbms_stats.set_global_prefs(pname=>'ESTIMATE_PERCENT', pvalue=>'30');
exec dbms_stats.set_global_prefs(pname=>'ESTIMATE_PERCENT', pvalue=>'DBMS_STATS.AUTO_SAMPLE_SIZE');
select dbms_stats.get_prefs('ESTIMATE_PERCENT') prefs from dual;
exec dbms_stats.gather_table_stats('hr','partition1');
select table_name, sample_size from dba_tables where owner = 'HR' and table_name = 'PARTITION1';
select * from sys.optstat_user_prefs$ ;
}}}
http://www.datasoftech.com/library/DSI_11g_Stat2.pdf
https://mdinh.wordpress.com/2012/07/10/dbms_stats-preference/
{{{
TABLE Level:
exec DBMS_STATS.SET_TABLE_PREFS (USER,'FCT_TABLE','INCREMENTAL','FALSE');
SELECT
owner, table_name,
DBMS_STATS.get_prefs(ownname=>USER,tabname=>table_name,pname=>'INCREMENTAL') incremental,
DBMS_STATS.get_prefs(ownname=>USER,tabname=>table_name,pname=>'GRANULARITY') granularity,
DBMS_STATS.get_prefs(ownname=>USER,tabname=>table_name,pname=>'STALE_PERCENT') stale_percent,
DBMS_STATS.get_prefs(ownname=>USER,tabname=>table_name,pname=>'ESTIMATE_PERCENT') estimate_percent,
DBMS_STATS.get_prefs(ownname=>USER,tabname=>table_name,pname=>'CASCADE') cascade,
DBMS_STATS.get_prefs(pname=>'METHOD_OPT') method_opt
FROM dba_tables
WHERE table_name like 'FCT%'
ORDER BY owner, table_name;
Note: Use the DBA view versus the above for better performance
select * from DBA_TAB_STAT_PREFS;
SCHEMA Level:
exec DBMS_STATS.SET_SCHEMA_PREFS (USER,'STALE_PERCENT','8');
SELECT
username,
DBMS_STATS.get_prefs(ownname=>USER,pname=>'INCREMENTAL') incremental,
DBMS_STATS.get_prefs(ownname=>USER,pname=>'GRANULARITY') granularity,
DBMS_STATS.get_prefs(ownname=>USER,pname=>'STALE_PERCENT') stale_percent,
DBMS_STATS.get_prefs(ownname=>USER,pname=>'ESTIMATE_PERCENT') estimate_percent,
DBMS_STATS.get_prefs(ownname=>USER,pname=>'CASCADE') cascade,
DBMS_STATS.get_prefs(pname=>'METHOD_OPT') method_opt
FROM dba_users
WHERE username like '%WH'
ORDER BY username;
DATABASE Level:
exec DBMS_STATS.SET_GLOBAL_PREFS('METHOD_OPT','FOR ALL COLUMNS SIZE REPEAT');
SELECT
DBMS_STATS.get_prefs(pname=>'INCREMENTAL') incremental,
DBMS_STATS.get_prefs(pname=>'GRANULARITY') granularity,
DBMS_STATS.get_prefs(pname=>'STALE_PERCENT') stale_percent,
DBMS_STATS.get_prefs(pname=>'ESTIMATE_PERCENT') estimate_percent,
DBMS_STATS.get_prefs(pname=>'CASCADE') cascade,
DBMS_STATS.get_prefs(pname=>'METHOD_OPT') method_opt
FROM dual;
}}}
http://www.allguru.net/database/oracle-sql-profile-tuning-command/
http://xploreapps.blogspot.com/2009/09/213021.html
http://tonguc.wordpress.com/2007/01/10/oracle-best-practices-part-1/
http://oradbatips.blogspot.com/2008/04/tip-72-restore-old-statistics.html
https://community.oracle.com/thread/609610
{{{
6. Re: How and when the table will go STALE?
(reply by user 601585, Jan 18, 2008, in response to rahulras)
You need to read that manual with more caution. It has all the info you need.
1. Table modification info stays in the shared pool and is flushed into the dictionary by Oracle automatically. You can explicitly do it by calling dbms_stats.flush_database_monitoring_info.
2. dba_tab_modifications view = How many DML were applied to the target table?
dba_tab_statistics.stale_stats = Are the statistics stale?
3. When you call the dbms_stats.gather... family, Oracle flushes the stale info to disk. You generally don't need to care about that.
4. Statistics are considered stale when the change is over 10% of the current rows.
(As of 11g, this value can be customized per object. Cool feature)
create table t_stat(id int);
insert into t_stat select rownum from all_objects where rownum <= 100;
commit;
exec dbms_stats.gather_table_stats(user, 'T_STAT');
select * from sys.dba_tab_modifications where table_name = 'T_STAT';
No row selected
select stale_stats from sys.dba_tab_statistics where table_name = 'T_STAT';
NO
insert into t_stat select rownum from all_objects where rownum <= 20;
select * from sys.dba_tab_modifications where table_name = 'T_STAT';
No rows selected <-- Oops
select stale_stats from sys.dba_tab_statistics where table_name = 'T_STAT';
NO <-- Oops
exec dbms_stats.flush_database_monitoring_info;
select * from sys.dba_tab_modifications where table_name = 'T_STAT';
TABLE_OWNER TABLE_NAME PARTITION_NAME SUBPARTITION_NAME INSERTS UPDATES DELETES TIMESTAMP TRUNCATED DROP_SEGMENTS
UKJA T_STAT 20 0 0 2008-01-18 PM 11:30:19 NO 0
select stale_stats from sys.dba_tab_statistics where table_name = 'T_STAT';
YES
}}}
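The per-object customization mentioned in that reply is the STALE_PERCENT preference; a quick sketch:
{{{
-- mark T_STAT stale after a 5% change instead of the default 10%
exec DBMS_STATS.SET_TABLE_PREFS(USER, 'T_STAT', 'STALE_PERCENT', '5');
select DBMS_STATS.GET_PREFS('STALE_PERCENT', USER, 'T_STAT') stale_pct from dual;
}}}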
http://www.freelists.org/post/oracle-l/Need-advice-on-dbms-statsFLUSH-DATABASE-MONITORING-INFO,1
<<<
DML stats are flushed periodically by Oracle (@@MOS doc 762738.1 says every 3 hrs@@),
<<<
http://stackoverflow.com/questions/21250093/identify-partitions-having-stale-statistics-in-a-list-of-schema
oracle 11g stale statistics https://community.oracle.com/message/10772019
{{{
SELECT DT.OWNER,
DT.TABLE_NAME,
ROUND ( (DELETES + UPDATES + INSERTS) / NUM_ROWS * 100) PERCENTAGE
FROM DBA_TABLES DT, DBA_TAB_MODIFICATIONS DTM
WHERE DT.OWNER = DTM.TABLE_OWNER
AND DT.TABLE_NAME = DTM.TABLE_NAME
AND NUM_ROWS > 0
AND ROUND ( (DELETES + UPDATES + INSERTS) / NUM_ROWS * 100) >= 10
AND OWNER IN ('OWNER_NAME')
ORDER BY 3 desc;
}}}
! mos
How to List the Objects with Stale Statistics Using dbms_stats.gather_schema_stats options=>'LIST STALE' (Doc ID 457666.1)
FAQ: Statistics Gathering Frequently Asked Questions (Doc ID 1501712.1)
FAQ: Automatic Statistics Collection (Doc ID 1233203.1)
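A minimal sketch of the LIST STALE option from 457666.1 — it reports the stale objects without gathering anything:
{{{
set serveroutput on
DECLARE
  l_objs DBMS_STATS.ObjectTab;
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER,
                                 options => 'LIST STALE',
                                 objlist => l_objs);
  FOR i IN 1 .. l_objs.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(l_objs(i).objtype || ' ' || l_objs(i).objname ||
                         ' ' || NVL(l_objs(i).partname, '-'));
  END LOOP;
END;
/
}}}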
! Install rlwrap and set alias
{{{
-- if you are subscribed to the EPEL repo
yum install rlwrap
-- if you want to build from source
# wget http://utopia.knoware.nl/~hlub/uck/rlwrap/rlwrap-0.37.tar.gz
# tar zxf rlwrap-0.37.tar.gz
# rm rlwrap-0.37.tar.gz
The configure utility will show an error saying you need the GNU readline library;
it just needs the readline-devel package:
# yum install readline-devel*
# cd rlwrap-0.37
# ./configure
# make
# make install
# which rlwrap
/usr/local/bin/rlwrap
alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'
}}}
! Install environment framework - karlenv
{{{
# name: environment framework - karlenv
# source URL: http://karlarao.tiddlyspot.com/#%5B%5Bstep%20by%20step%20environment%5D%5D
# notes:
# - I've edited/added some lines on the setsid and showsid from
# Coskan's code making it suitable for most unix(solaris,aix,hp-ux)/linux environments http://goo.gl/cqRPK
# - added lines of code before and after the setsid and showsid to get the following info:
# - software homes installed
# - get DBA scripts location
# - set alias
#
# SCRIPTS LOCATION
export TANEL=~/dba/tanel
export KERRY=~/dba/scripts
export KARL=~/dba/karao/scripts/
export SQLPATH=~/:$TANEL:$KERRY:$KARL
# ALIAS
alias s='rlwrap -D2 -irc -b'\''"@(){}[],+=&^%#;|\'\'' -f $TANEL/setup/wordfile_11gR2.txt sqlplus / as sysdba @/tmp/login.sql'
alias s1='sqlplus / as sysdba @/tmp/login.sql'
alias oradcli='dcli -l oracle -g ~/dbs_group'
# alias celldcli='dcli -l root -g /root/cell_group'
# MAIN
cat `cat /etc/oraInst.loc | grep -i inventory | sed 's/..............\(.*\)/\1/'`/ContentsXML/inventory.xml | grep "HOME NAME" 2> /dev/null
export PATH=""
export PATH=$HOME/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$SQLPATH:~/dba/bin:$PATH
export myid="`whoami`@`hostname`"
export PS1='${myid}':'$PWD':'$ORACLE_SID
$ '
export EDITOR=vi
export GLOGIN=`ls /tmp/login.sql 2> /dev/null | wc -l`
if [ "$GLOGIN" -eq 1 ] ; then
echo ""
else
echo "SET SQLPROMPT \"_USER'@'_CONNECT_IDENTIFIER'>' \"
SET LINES 260 TIME ON" > /tmp/login.sql
fi
setsid ()
{
unset ORATAB
unset ORACLE_BASE
unset ORACLE_HOME
unset ORACLE_SID
export ORATAB_OS=`ls /var/opt/oracle/oratab 2> /dev/null | wc -l`
if [ "$ORATAB_OS" -eq 1 ] ; then
export ORATAB=/var/opt/oracle/oratab
else
export ORATAB=/etc/oratab
fi
export ORAENVFILE=`ls /usr/local/bin/oraenv 2> /dev/null | wc -l`
if [ "$ORAENVFILE" -eq 1 ] ; then
echo ""
else
cat $ORATAB | grep -v "^#" | grep -v "*"
echo ""
echo "Please enter the ORACLE_HOME: "
read RDBMS_HOME
export ORACLE_HOME=$RDBMS_HOME
fi
if tty -s
then
if [ -f $ORATAB ]
then
line_count=`cat $ORATAB | grep -v "^#" | grep -v "*" | sed 's/:.*//' | wc -l`
# check that the oratab file has some contents
if [ $line_count -ge 1 ]
then
sid_selected=0
while [ $sid_selected -eq 0 ]
do
sid_available=0
for i in `cat $ORATAB | grep -v "^#" | grep -v "*" | sed 's/:.*//'`
do
sid_available=`expr $sid_available + 1`
sid[$sid_available]=$i
done
# get the required SID
case ${SETSID_AUTO:-""} in
YES) # Auto set use 1st entry
sid_selected=1 ;;
*)
i=1
while [ $i -le $sid_available ]
do
printf "%2d- %10s\n" $i ${sid[$i]}
i=`expr $i + 1`
done
echo ""
echo "Select the Oracle SID with given number [1]:"
read entry
if [ -n "$entry" ]
then
entry=`echo "$entry" | sed "s/[a-z,A-Z]//g"`
if [ -n "$entry" ]
then
entry=`expr $entry`
if [ $entry -ge 1 ] && [ $entry -le $sid_available ]
then
sid_selected=$entry
fi
fi
else
sid_selected=1
fi
esac
done
#
# SET ORACLE_SID
#
export PATH=$HOME/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin:$ORACLE_HOME/bin:$ORACLE_PATH:$PATH
export ORACLE_SID=${sid[$sid_selected]}
echo "Your profile configured for $ORACLE_SID with information below:"
unset LD_LIBRARY_PATH
ORAENV_ASK=NO
. oraenv
unset ORAENV_ASK
#
#GIVE MESSAGE
#
else
echo "No entries in $ORATAB. no environment set"
fi
fi
fi
}
showsid()
{
echo ""
echo "ORACLE_SID=$ORACLE_SID"
echo "ORACLE_BASE=$ORACLE_BASE"
echo "ORACLE_HOME=$ORACLE_HOME"
echo ""
}
# Find oracle_home of running instance
export ORATAB_OS=`ls /var/opt/oracle/oratab 2> /dev/null | wc -l`
if [ "$ORATAB_OS" -eq 1 ] ; then
ps -ef | grep _pmon | grep -v grep
else
ps -ef | grep pmon | grep -v grep | grep -v bash | grep -v perl |\
while read PMON; do
INST=`echo $PMON | awk {' print $2, $8 '}`
INST_PID=`echo $PMON | awk {' print $2'}`
INST_HOME=`ls -l /proc/$INST_PID/exe 2> /dev/null | awk -F'>' '{ print $2 }' | sed 's/bin\/oracle$//' | sort | uniq`
echo "$INST $INST_HOME"
done
fi
# Set Oracle environment
setsid
showsid
}}}
! Usage
{{{
[root@desktopserver ~]# su - oracle
[oracle@desktopserver ~]$
[oracle@desktopserver ~]$ vi .karlenv <-- copy the script from the "Install environment framework - karlenv" section of the wiki link above
[oracle@desktopserver ~]$
[oracle@desktopserver ~]$ ls -la | grep karl
-rw-r--r-- 1 oracle dba 6071 Dec 14 15:58 .karlenv
[oracle@desktopserver ~]$
[oracle@desktopserver ~]$ . ~oracle/.karlenv <-- set the environment
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true"/>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
1- +ASM
2- dw
Select the Oracle SID with given number [1]:
2 <-- choose an instance
Your profile configured for dw with information below:
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=dw
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
oracle@desktopserver.local:/home/oracle:dw
$ s <-- rlwrap'd sqlplus alias, also you can use the "s1" alias if you don't have rlwrap installed
SQL*Plus: Release 11.2.0.3.0 Production on Thu Jan 5 15:41:15 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP and Real Application Testing options
USERNAME INST_NAME HOST_NAME SID SERIAL# VERSION STARTED SPID OPID CPID SADDR PADDR
-------------------- ------------ ------------------------- ----- -------- ---------- -------- --------------- ----- --------------- ---------------- ----------------
SYS dw desktopserver.local 5 8993 11.2.0.3.0 20111219 27483 24 27480 00000000DFB78138 00000000DF8F9FA0
SQL> @gas <-- calling one of Kerry's scripts from the /home/oracle/dba/scripts directory
INST SID PROG USERNAME SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME SQL_TEXT OSUSER MACHINE
----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ----------------------------------------- ------------------------------ -------------------------
1 5 sqlplus@de SYS bmyd05jjgkyz1 0 79376787 3 .003536 select a.inst_id inst, sid, substr(progra oracle desktopserver.local
1 922 OMS SYSMAN 2b064ybzkwf1y 0 0 50,515 .004947 BEGIN EMD_NOTIFICATION.QUEUE_READY(:1, :2 oracle desktopserver.local
SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP and Real Application Testing options
oracle@desktopserver.local:/home/oracle:dw
}}}
! making a generic environment script.. called as "dbaenv"
1) mkdir -p $HOME/dba/bin, then add $HOME/dba/bin to the PATH in .bash_profile
{{{
$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin:$HOME/dba/bin
export PATH
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
export PATH=$ORACLE_HOME/bin:.:$PATH
}}}
2) copy the code of .karlenv above then create it as dbaenv file on the $HOME/dba/bin directory
3) call it as follows on any directory
{{{
. dbaenv
}}}
4) for rac one node this pmoncheck is also helpful to have on the $HOME/dba/bin directory
{{{
$ cat pmoncheck
dcli -l oracle -g /home/oracle/dbs_group ps -ef | grep pmon | grep -v grep | grep -v ASM
}}}
https://www.stitchdata.com/integrations/sources/
https://www.stitchdata.com/docs/
https://www.stitchdata.com/docs/integrations/databases/amazon-oracle-rds
https://medium.com/@FranckPachot/strace-the-current-oracle-session-process-38594cf6a11a
{{{
connect / as sysdba
column spid new_value pid
select spid from v$process join v$session on v$session.paddr=v$process.addr where sid=sys_context('userenv','sid');
column spid clear
set escape on
host strace -fy -p &pid -o strace.log \& :
select * from v$osstat;
disconnect
}}}
{{{
# show relative time
strace -r -o out_r.txt tnsping cdb1
# show timestamp
strace -t -o out_t.txt tnsping cdb1
# show calls summary
strace -c -o out_c.txt tnsping cdb1
}}}
-- FAQ, INFO
Streams Complete Reference FAQ [ID 752871.1]
Streams Configuration Report and Health Check Script
Doc ID: Note:273674.1
9i Best Practices For Streams RAC Setup
Doc ID: Note:304268.1
9i Streams Recommended Configuration
Doc ID: Note:297273.1
10gR1 Streams Recommended Configuration
Doc ID: Note:298877.1
10.2 Streams Recommendations
Doc ID: Note:418755.1
Streams Performance Recommendations
Doc ID: Note:335516.1
How to replicate Data between Oracle and a Foreign Datasource [ID 283700.1]
-- PERFORMANCE
Minimize Performance Impact of Batch Processing in Streams [ID 550593.1]
-- VIDEO
Master Note for Streams Downstream Capture - 10g and 11g [Video] [ID 1264598.1]
-- STEP BY STEP
120331.1 Example: How to create a trigger to generate a primary key
10gR2 Streams Recommended Configuration [ID 418755.1]
How to Create STRMADMIN User and Grant Privileges [ID 786528.1]
Streams Replication Supplemental Logging Requirements [ID 782541.1]
How To Setup Schema Level Streams Replication
Doc ID: Note:301431.1
Guideline to setup database level Streams replication
Doc ID: Note:459922.1
Steps To Setup Replication Using Oracle Streams
Doc ID: Note:224255.1
Configuring Streams 9.2 Using Oracle Enterprise Manager 9.2
Doc ID: Note:206018.1
Initial Steps Required to a Create Multi Master Replication Environment
Doc ID: Note:117434.1
NOTE:471845.1 - Streams Bi-Directional Setup
Oracle Transparent Replication using Advanced Replication and Streams [ID 162835.1]
How To Configure Streams Real-Time Downstream Environment [ID 753158.1]
-- COMPARATIVE STUDY
Comparative Study between Oracle Streams and Oracle Data Guard
Doc ID: Note:300223.1
Comparison Between Oracle Streams and Change Data Capture [ID 727445.1]
NOTE:238455.1 - Streams DML Types Supported and Supported Datatypes
Extended Datatype Support (EDS) for Streams [ID 556742.1]
NOTE:238457.1 - What DML and DDL is Captured by Streams
-- CHANGE DATA CAPTURE
Change Data Capture(CDC) FAQ [ID 867602.1]
Synchronous Change Data Capture (CDC) - Example SQL to Set-up in version 9.2 [ID 246551.1]
Change Data Capture CDC Healthcheck. [ID 983443.1]
Implementing Streams Heterogeneous (non-Oracle ) Change Capture [ID 315836.1]
-- MONITORING
Oracle Streams STRMMON Monitoring Utility
Doc ID: Note:290605.1
How to Resynchronize your Replicated Environment
Doc ID: 1015725.6
-- AUDITING
DML and DDL Auditing Using Streams [ID 316893.1]
-- ORACLE STREAMS SE STANDARD EDITION
Streams between Standard and Enterprise edition is not working [ID 567872.1]
sqrt stress
http://people.seas.harvard.edu/~apw/stress/
<<showtoc>>
! plugins
sidebar enhancements
sublime.wbond.net
emmet.io - http://docs.emmet.io/cheat-sheet/
! end
! types of subqueries
<<<
* inline view - A subquery in the FROM clause of a SELECT statement is also called an inline view
* nested subquery - A subquery in the WHERE clause of a SELECT statement is also called a nested subquery
* correlated subquery - a subquery in the SELECT list; this is row-by-row processing
A correlated subquery occurs when a nested subquery references a column from a table referred to in a parent statement any number of levels above the subquery. The parent statement can be a SELECT, UPDATE, or DELETE statement in which the subquery is nested. A correlated subquery is evaluated once for each row processed by the parent statement. Oracle resolves unqualified columns in the subquery by looking in the tables named in the subquery and then in the tables named in the parent statement.
<<<
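A quick sketch of the three flavors against the HR sample schema:
{{{
-- inline view: subquery in the FROM clause
select d.department_id, d.avg_sal
  from (select department_id, avg(salary) avg_sal
          from hr.employees
         group by department_id) d;

-- nested subquery: subquery in the WHERE clause
select last_name
  from hr.employees
 where department_id in (select department_id
                           from hr.departments
                          where location_id = 1700);

-- correlated subquery: references the parent's row, evaluated row by row
select e.last_name, e.salary
  from hr.employees e
 where e.salary > (select avg(e2.salary)
                     from hr.employees e2
                    where e2.department_id = e.department_id);
}}}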
<<showtoc>>
! rewrite
{{{
--select * from hr.employees;
--select * from hr.departments;
--drop table hr.salary purge;
--create table hr.salary as select employee_id, salary, hire_date from hr.employees;
-- 163550
SELECT sum(emp.salary)
FROM hr.employees emp,
hr.salary,
hr.departments dept
WHERE emp.employee_id = salary.employee_id
AND emp.department_id = dept.department_id
AND dept.location_id = 1700
AND salary.hire_date = (SELECT MAX(hire_date)
FROM hr.salary s2
WHERE s2.employee_id = salary.employee_id )
;
-- 163550
SELECT SUM(emp.salary)
FROM hr.employees emp,
hr.salary,
hr.departments dept,
(SELECT s2.employee_id,
Max(hire_date) MAX_yyyymmdd
FROM hr.salary s2
GROUP BY s2.employee_id) sal2max
WHERE emp.employee_id = salary.employee_id
AND emp.department_id = dept.department_id
AND dept.location_id = 1700
AND salary.hire_date = sal2max.MAX_yyyymmdd
AND salary.employee_id = sal2max.employee_id
;
select e.employee_id, e.department_id, e.salary,
(select min(e2.salary)
from HR.EMPLOYEES E2
where e2.department_id = e.department_id
) min_salary
from HR.EMPLOYEES e
where e.employee_id in (116,113) order by 1;
--113 100 6900 6900
--116 30 3500 3100
select
employee_id,
department_id,
salary,
min(salary) over(partition by department_id) min_department_salary
from hr.employees e
where e.employee_id in (116,113) order by 1;
--113 100 6900 6900
--116 30 3500 3500 <- wrong result
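-- the analytic version above disagrees because the WHERE clause filters the
-- rows *before* the window function runs, so MIN() only sees employees
-- 113 and 116; a sketch of the fix: compute the window in an inline view,
-- then filter
select *
  from (select employee_id, department_id, salary,
               min(salary) over (partition by department_id) min_department_salary
          from hr.employees)
 where employee_id in (116,113) order by 1;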
CREATE TABLE test_sub
(step varchar(1), cost_time int, rank_no int)
;
INSERT INTO test_sub (step, cost_time, rank_no) VALUES ('a', 10, 1);
INSERT INTO test_sub (step, cost_time, rank_no) VALUES ('b', 20, 2);
INSERT INTO test_sub (step, cost_time, rank_no) VALUES ('c', 30, 3);
commit;
select * from test_sub;
select
main.step,
main.cost_time,
main.rank_no,
(select sum(sub.cost_time)
from test_sub sub
where sub.rank_no <= main.rank_no) as total_time
from
test_sub main;
select main.step, main.cost_time, main.rank_no,
sum(cost_time) over (order by rank_no) as total_time
from test_sub main;
select main.step, main.cost_time, main.rank_no,
sum(sub.cost_time) as total_time
from test_sub main
join
test_sub sub
on sub.rank_no <= main.rank_no
group by main.step, main.cost_time, main.rank_no
order by main.rank_no asc;
--a 10 1 10
--b 20 2 30
--c 30 3 60
SELECT TABLE_4.INCIDENT_TYPE,
TABLE_4.POC_CONTACT,
(SELECT TABLE_2.NOTE_DATE
|| ' '
|| TABLE_1.USER_FIRST_NAME
|| ' '
|| TABLE_1.USER_LAST_NAME
|| ' : '
|| TABLE_2.OTHER_HELP_NOTES
FROM TABLE_1,
TABLE_2
WHERE TABLE_2.USER_ID = TABLE_1.USER_ID
AND TABLE_2.REC_ID = TABLE_4.REC_ID
AND TABLE_2.NOTE_DATE = (SELECT MAX(TABLE_2.NOTE_DATE)
FROM TABLE_2
WHERE TABLE_2.REC_ID = TABLE_4.REC_ID
AND TABLE_2.NOTE_DATE <=
TABLE_4.REPORT_DATE))
AS SUM_OF_SHORTAGE,
(SELECT TABLE_3.NOTE_DATE
|| ' '
|| TABLE_1.USER_FIRST_NAME
|| ' '
|| TABLE_1.USER_LAST_NAME
|| ' : '
|| TABLE_3.HELP_NOTES
FROM TABLE_1,
TABLE_3
WHERE TABLE_3.USER_ID = TABLE_1.USER_ID
AND TABLE_3.REC_ID = TABLE_4.REC_ID
AND TABLE_3.NOTE_DATE = (SELECT MAX(TABLE_3.NOTE_DATE)
FROM TABLE_3
WHERE TABLE_3.REC_ID = TABLE_4.REC_ID
AND TABLE_3.NOTE_DATE <=
TABLE_4.REPORT_DATE)) AS HELP_NOTES,
TABLE_4.REPORT_NUM
FROM TABLE_4
WHERE TABLE_4.SITE_ID = '1';
select REC_ID, TO_CHAR(REPORT_DATE,'DD-MON-YY HH:MI:SS') REPORT_DATE,
(SELECT MAX(TABLE_2.note_date) as MAX_DATE
FROM TABLE_2
where TABLE_2.REC_ID = TABLE_1.REC_ID
and TABLE_2.NOTE_DATE <= TABLE_1.REPORT_DATE
) NOTES_MAX_DATE
from TABLE_1 where REC_ID = 121 order by TO_DATE(REPORT_DATE,'DD-MON-YY HH:MI:SS');
--new
SELECT table_4.incident_type,
table_4.poc_contact,
t2.sum_of_shortage,
t3.help_notes,
table_4.report_num
FROM table_4
LEFT JOIN (SELECT table_2.rec_id,
table_2.note_date
|| ' '
|| table_1.user_first_name
|| ' '
|| table_1.user_last_name
|| ' : '
|| table_2.other_help_notes
AS sum_of_shortage
FROM table_1
JOIN table_2
ON table_2.user_id = table_1.user_id
WHERE table_2.note_date =
(SELECT MAX(table_2.note_date) AS max_date
FROM table_2
WHERE table_2.rec_id = table_4.rec_id
AND table_2.note_date <= table_4.report_date)) t2
ON t2.rec_id = table_4.rec_id
LEFT JOIN (SELECT table_3.rec_id,
table_3.note_date
|| ' '
|| table_1.user_first_name
|| ' '
|| table_1.user_last_name
|| ' : '
|| table_3.help_notes
AS help_notes
FROM table_1
JOIN table_3
ON table_3.user_id = table_1.user_id
WHERE table_3.note_date =
(SELECT MAX(table_3.note_date) AS max_date
FROM table_3
WHERE table_3.rec_id = table_4.rec_id
AND table_3.note_date <= table_4.report_date)) t3
ON t3.rec_id = table_4.rec_id
WHERE table_4.site_id = '1';
}}}
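Note the rewritten query above still references table_4 columns inside plain derived tables, which Oracle won't allow; on 12c+ a hedged sketch would use LATERAL/OUTER APPLY instead (t2 shown, t3 is analogous):
{{{
SELECT table_4.incident_type, table_4.poc_contact,
       t2.sum_of_shortage, table_4.report_num
FROM   table_4
       OUTER APPLY (
         SELECT table_2.note_date || ' ' || table_1.user_first_name || ' '
                || table_1.user_last_name || ' : ' || table_2.other_help_notes
                AS sum_of_shortage
         FROM   table_1 JOIN table_2 ON table_2.user_id = table_1.user_id
         WHERE  table_2.rec_id = table_4.rec_id
         AND    table_2.note_date = (SELECT MAX(n.note_date)
                                     FROM   table_2 n
                                     WHERE  n.rec_id = table_4.rec_id
                                     AND    n.note_date <= table_4.report_date)
       ) t2
WHERE  table_4.site_id = '1';
}}}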
! references
https://stackoverflow.com/questions/51765722/rewrite-correlated-subquery-in-select-into-join-statement
https://stackoverflow.com/questions/8218411/rewrite-query-without-using-correlated-subqueries
https://stackoverflow.com/questions/5262579/oracle-tune-correlated-subqueries-in-select-clause
https://stackoverflow.com/questions/38645993/how-to-limit-results-of-a-correlated-subquery-that-uses-outer-query-aliases
{{{
# sudoers file.
#
# This file MUST be edited with the 'visudo' command as root.
#
# See the sudoers man page for the details on how to write a sudoers file.
#
# Host alias specification
# User alias specification
# Cmnd alias specification
Cmnd_Alias EXPORT = /oracle/product/10.2/bin/exp, /oracle/product/10.2/bin/expdp
Cmnd_Alias NO_ORA_BIN = !/oracle/product/10.2/bin/*
# Defaults specification
# User privilege specification
root ALL=(ALL) ALL
# Uncomment to allow people in group wheel to run all commands
# %wheel ALL=(ALL) ALL
# Same thing without a password
# %wheel ALL=(ALL) NOPASSWD: ALL
# Samples
# %users ALL=/sbin/mount /cdrom,/sbin/umount /cdrom
# %users localhost=/sbin/shutdown -h now
%ora_exp_users oel11g=NO_ORA_BIN, EXPORT
}}}
{{{
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
oracle ALL=(ALL) NOPASSWD:ALL
}}}
allowing turbostat for oracle user (passwordless)
{{{
-- on visudo
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
oracle ALL=(ALL) NOPASSWD:ALL
oracle ALL=NOPASSWD:/home/oracle/dba/benchmark/cputoolkit/turbostat
-- then run as
sudo ./turbostat
}}}
or just do this
{{{
echo "welcome1" | sudo -S ./turbostat 2> turbostat.txt &
echo "welcome1" | sudo -S kill -9 `ps -ef | grep -i "./turbostat" | grep -v grep | grep -v sudo | awk '{print $2}'`
}}}
<<showtoc>>
! cray xc30 - this is used by the NSA (snowden)
http://www.cray.com/products/computing/xc-series
https://en.wikipedia.org/wiki/Cray_XC30
! sgi uv 300
https://www.sgi.com/products/servers/uv/uv_300_30ex.html <- enkitec was able to benchmark one of these
! end
* downside is this tool doesn't have macros
https://github.com/jimradford/superputty
https://github.com/phendryx/superputty/issues/34
.
https://www.badgerandblade.com/forum/threads/syms-and-filenes-are-over.260645/
https://www.blue-yonder.com/en/solutions/price-optimization?gclid=CJS_yNWT5NQCFV2Hswod8R4MDQ
https://www.blue-yonder.com/en/solutions/replenishment-optimization
http://www.mysqlperformanceblog.com/2012/03/15/ext4-vs-xfs-on-ssd/
{{{
select to_char(begin_time,'MM/DD/YY HH24:MI:SS') btm, to_char(end_time,'MM/DD/YY HH24:MI:SS') etm, m.*
from v$sysmetric m
where m.metric_name = 'Database Time Per Sec';
select count(*), intsize_csec from v$sysmetric group by intsize_csec;
* there are two groups of data on this view... the one that Timur has on his SQL uses the 60-sec time slice, which is equivalent to doing a per-minute AWR..
47 1501 <-- 15sec time slice
158 6009 <-- 60sec time slice
select metric_name from v$sysmetric where intsize_csec = 6009
minus
select metric_name from v$sysmetric where intsize_csec = 1501
-- all in 47
Buffer Cache Hit Ratio
Memory Sorts Ratio
User Transaction Per Sec
Physical Reads Per Sec
Physical Reads Per Txn
Physical Writes Per Sec
Physical Writes Per Txn
Physical Reads Direct Per Sec
Physical Reads Direct Per Txn
Redo Generated Per Sec
Redo Generated Per Txn
Logons Per Sec
Logons Per Txn
User Calls Per Sec
User Calls Per Txn
Logical Reads Per Sec
Logical Reads Per Txn
Redo Writes Per Sec
Redo Writes Per Txn
Total Table Scans Per Sec
Total Table Scans Per Txn
Full Index Scans Per Sec
Full Index Scans Per Txn
Execute Without Parse Ratio
Soft Parse Ratio
Host CPU Utilization (%)
DB Block Gets Per Sec
DB Block Gets Per Txn
Consistent Read Gets Per Sec
Consistent Read Gets Per Txn
DB Block Changes Per Sec
DB Block Changes Per Txn
Consistent Read Changes Per Sec
Consistent Read Changes Per Txn
Database CPU Time Ratio
Library Cache Hit Ratio
Shared Pool Free %
Executions Per Txn
Executions Per Sec
Txns Per Logon
Database Time Per Sec
Average Active Sessions
Host CPU Usage Per Sec
Cell Physical IO Interconnect Bytes
Temp Space Used
Total PGA Allocated
Total PGA Used by SQL Workareas
-- all in 6009 -- should be 111 + 47
Active Parallel Sessions
Active Serial Sessions
Average Synchronous Single-Block Read Latency
Background CPU Usage Per Sec
Background Checkpoints Per Sec
Background Time Per Sec
Branch Node Splits Per Sec
Branch Node Splits Per Txn
CPU Usage Per Sec
CPU Usage Per Txn
CR Blocks Created Per Sec
CR Blocks Created Per Txn
CR Undo Records Applied Per Sec
CR Undo Records Applied Per Txn
Captured user calls
Current Logons Count
Current OS Load
Current Open Cursors Count
Cursor Cache Hit Ratio
DB Block Changes Per User Call
DB Block Gets Per User Call
DBWR Checkpoints Per Sec
DDL statements parallelized Per Sec
DML statements parallelized Per Sec
Database Wait Time Ratio
Disk Sort Per Sec
Disk Sort Per Txn
Enqueue Deadlocks Per Sec
Enqueue Deadlocks Per Txn
Enqueue Requests Per Sec
Enqueue Requests Per Txn
Enqueue Timeouts Per Sec
Enqueue Timeouts Per Txn
Enqueue Waits Per Sec
Enqueue Waits Per Txn
Executions Per User Call
GC CR Block Received Per Second
GC CR Block Received Per Txn
GC Current Block Received Per Second
GC Current Block Received Per Txn
Global Cache Average CR Get Time
Global Cache Average Current Get Time
Global Cache Blocks Corrupted
Global Cache Blocks Lost
Hard Parse Count Per Sec
Hard Parse Count Per Txn
I/O Megabytes per Second
I/O Requests per Second
Leaf Node Splits Per Sec
Leaf Node Splits Per Txn
Library Cache Miss Ratio
Logical Reads Per User Call
Long Table Scans Per Sec
Long Table Scans Per Txn
Network Traffic Volume Per Sec
Open Cursors Per Sec
Open Cursors Per Txn
PGA Cache Hit %
PQ QC Session Count
PQ Slave Session Count
PX downgraded 1 to 25% Per Sec
PX downgraded 25 to 50% Per Sec
PX downgraded 50 to 75% Per Sec
PX downgraded 75 to 99% Per Sec
PX downgraded to serial Per Sec
PX operations not downgraded Per Sec
Parse Failure Count Per Sec
Parse Failure Count Per Txn
Physical Read Bytes Per Sec
Physical Read IO Requests Per Sec
Physical Read Total Bytes Per Sec
Physical Read Total IO Requests Per Sec
Physical Reads Direct Lobs Per Sec
Physical Reads Direct Lobs Per Txn
Physical Write Bytes Per Sec
Physical Write IO Requests Per Sec
Physical Write Total Bytes Per Sec
Physical Write Total IO Requests Per Sec
Physical Writes Direct Lobs Per Txn
Physical Writes Direct Lobs Per Sec
Physical Writes Direct Per Sec
Physical Writes Direct Per Txn
Process Limit %
Queries parallelized Per Sec
Recursive Calls Per Sec
Recursive Calls Per Txn
Redo Allocation Hit Ratio
Replayed user calls
Response Time Per Txn
Row Cache Hit Ratio
Row Cache Miss Ratio
Rows Per Sort
SQL Service Response Time
Session Count
Session Limit %
Streams Pool Usage Percentage
Total Index Scans Per Sec
Total Index Scans Per Txn
Total Parse Count Per Sec
Total Parse Count Per Txn
Total Sorts Per User Call
Total Table Scans Per User Call
User Calls Ratio
User Commits Per Sec
User Commits Percentage
User Limit %
User Rollback Undo Records Applied Per Txn
User Rollback UndoRec Applied Per Sec
User Rollbacks Per Sec
User Rollbacks Percentage
Workload Capture and Replay status
##################################
BTM ETM BEGIN_TIME END_TIME INTSIZE_CSEC GROUP_ID METRIC_ID METRIC_NAME VALUE METRIC_UNIT
----------------- ----------------- ---------- --------- ------------ -------- --------- ---------------------------------------------------------------- ----- ----------------------------------------------------------------
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2000 Buffer Cache Hit Ratio 58.6206896551724 % (LogRead - PhyRead)/LogRead
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2001 Memory Sorts Ratio 100 % MemSort/(MemSort + DiskSort)
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2002 Redo Allocation Hit Ratio 100 % (#Redo - RedoSpaceReq)/#Redo
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2003 User Transaction Per Sec 0 Transactions Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2004 Physical Reads Per Sec 1.42011834319527 Reads Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2005 Physical Reads Per Txn 84 Reads Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2006 Physical Writes Per Sec 1.42011834319527 Writes Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2007 Physical Writes Per Txn 84 Writes Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2008 Physical Reads Direct Per Sec 0 Reads Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2009 Physical Reads Direct Per Txn 0 Reads Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2010 Physical Writes Direct Per Sec 0 Writes Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2011 Physical Writes Direct Per Txn 0 Writes Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2012 Physical Reads Direct Lobs Per Sec 0 Reads Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2013 Physical Reads Direct Lobs Per Txn 0 Reads Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2014 Physical Writes Direct Lobs Per Sec 0 Writes Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2015 Physical Writes Direct Lobs Per Txn 0 Writes Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2016 Redo Generated Per Sec 105.697379543533 Bytes Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2017 Redo Generated Per Txn 6252 Bytes Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2018 Logons Per Sec 0.185967878275571 Logons Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2019 Logons Per Txn 11 Logons Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2020 Open Cursors Per Sec 2.09636517328825 Cursors Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2021 Open Cursors Per Txn 124 Cursors Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2022 User Commits Per Sec 0 Commits Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2023 User Commits Percentage 0 % (UserCommit/TotalUserTxn)
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2024 User Rollbacks Per Sec 0 Rollbacks Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2025 User Rollbacks Percentage 0 % (UserRollback/TotalUserTxn)
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2026 User Calls Per Sec 5.2916314454776 Calls Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2027 User Calls Per Txn 313 Calls Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2028 Recursive Calls Per Sec 6.50887573964497 Calls Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2029 Recursive Calls Per Txn 385 Calls Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2030 Logical Reads Per Sec 3.43195266272189 Reads Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2031 Logical Reads Per Txn 203 Reads Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2032 DBWR Checkpoints Per Sec 0 Check Points Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2033 Background Checkpoints Per Sec 0 Check Points Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2034 Redo Writes Per Sec 0.169061707523246 Writes Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2035 Redo Writes Per Txn 10 Writes Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2036 Long Table Scans Per Sec 0 Scans Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2037 Long Table Scans Per Txn 0 Scans Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2038 Total Table Scans Per Sec 0.202874049027895 Scans Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2039 Total Table Scans Per Txn 12 Scans Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2040 Full Index Scans Per Sec 0 Scans Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2041 Full Index Scans Per Txn 0 Scans Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2042 Total Index Scans Per Sec 0.43956043956044 Scans Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2043 Total Index Scans Per Txn 26 Scans Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2044 Total Parse Count Per Sec 2.04564666103128 Parses Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2045 Total Parse Count Per Txn 121 Parses Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2046 Hard Parse Count Per Sec 0 Parses Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2047 Hard Parse Count Per Txn 0 Parses Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2048 Parse Failure Count Per Sec 0 Parses Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2049 Parse Failure Count Per Txn 0 Parses Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2050 Cursor Cache Hit Ratio 2.47933884297521 % CursorCacheHit/SoftParse
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2051 Disk Sort Per Sec 0 Sorts Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2052 Disk Sort Per Txn 0 Sorts Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2053 Rows Per Sort 846.792553191489 Rows Per Sort
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2054 Execute Without Parse Ratio 0 % (ExecWOParse/TotalExec)
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2055 Soft Parse Ratio 100 % SoftParses/TotalParses
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2056 User Calls Ratio 44.8424068767908 % UserCalls/AllCalls
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2057 Host CPU Utilization (%) 5.59171781042713 % Busy/(Idle+Busy)
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2058 Network Traffic Volume Per Sec 3316.93998309383 Bytes Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2059 Enqueue Timeouts Per Sec 0 Timeouts Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2060 Enqueue Timeouts Per Txn 0 Timeouts Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2061 Enqueue Waits Per Sec 0 Waits Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2062 Enqueue Waits Per Txn 0 Waits Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2063 Enqueue Deadlocks Per Sec 0 Deadlocks Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2064 Enqueue Deadlocks Per Txn 0 Deadlocks Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2065 Enqueue Requests Per Sec 7.67540152155537 Requests Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2066 Enqueue Requests Per Txn 454 Requests Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2067 DB Block Gets Per Sec 0.101437024513948 Blocks Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2068 DB Block Gets Per Txn 6 Blocks Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2069 Consistent Read Gets Per Sec 3.33051563820795 Blocks Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2070 Consistent Read Gets Per Txn 197 Blocks Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2071 DB Block Changes Per Sec 0.101437024513948 Blocks Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2072 DB Block Changes Per Txn 6 Blocks Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2073 Consistent Read Changes Per Sec 0 Blocks Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2074 Consistent Read Changes Per Txn 0 Blocks Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2075 CPU Usage Per Sec 1.46214201183432 CentiSeconds Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2076 CPU Usage Per Txn 86.4857 CentiSeconds Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2077 CR Blocks Created Per Sec 0 Blocks Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2078 CR Blocks Created Per Txn 0 Blocks Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2079 CR Undo Records Applied Per Sec 0 Undo Records Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2080 CR Undo Records Applied Per Txn 0 Records Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2081 User Rollback UndoRec Applied Per Sec 0 Records Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2082 User Rollback Undo Records Applied Per Txn 0 Records Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2083 Leaf Node Splits Per Sec 0 Splits Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2084 Leaf Node Splits Per Txn 0 Splits Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2085 Branch Node Splits Per Sec 0 Splits Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2086 Branch Node Splits Per Txn 0 Splits Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2087 PX downgraded 1 to 25% Per Sec 0 PX Operations Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2088 PX downgraded 25 to 50% Per Sec 0 PX Operations Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2089 PX downgraded 50 to 75% Per Sec 0 PX Operations Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2090 PX downgraded 75 to 99% Per Sec 0 PX Operations Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2091 PX downgraded to serial Per Sec 0 PX Operations Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2092 Physical Read Total IO Requests Per Sec 3.19526627218935 Requests Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2093 Physical Read Total Bytes Per Sec 40717.6331360947 Bytes Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2094 GC CR Block Received Per Second 0 Blocks Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2095 GC CR Block Received Per Txn 0 Blocks Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2096 GC Current Block Received Per Second 0 Blocks Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2097 GC Current Block Received Per Txn 0 Blocks Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2098 Global Cache Average CR Get Time 0 CentiSeconds Per Get
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2099 Global Cache Average Current Get Time 0 CentiSeconds Per Get
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2100 Physical Write Total IO Requests Per Sec 1.97802197802198 Requests Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2101 Global Cache Blocks Corrupted 0 Blocks
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2102 Global Cache Blocks Lost 0 Blocks
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2103 Current Logons Count 30 Logons
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2104 Current Open Cursors Count 33 Cursors
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2105 User Limit % 0.000000698491931124239 % Sessions/License_Limit
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2106 SQL Service Response Time 0.118068194842407 CentiSeconds Per Call
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2107 Database Wait Time Ratio 0 % Wait/DB_Time
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2108 Database CPU Time Ratio 104.943600172791 % Cpu/DB_Time
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2109 Response Time Per Txn 82.4116 CentiSeconds Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2110 Row Cache Hit Ratio 100 % Hits/Gets
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2111 Row Cache Miss Ratio 0 % Misses/Gets
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2112 Library Cache Hit Ratio 100 % Hits/Pins
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2113 Library Cache Miss Ratio 0 % Misses/Gets
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2114 Shared Pool Free % 8.01832424966913 % Free/Total
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2115 PGA Cache Hit % 99.9966081841154 % Bytes/TotalBytes
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2118 Process Limit % 3.1 % Processes/Limit
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2119 Session Limit % 2.6281208935611 % Sessions/Limit
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2120 Executions Per Txn 115 Executes Per Txn
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2121 Executions Per Sec 1.94420963651733 Executes Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2122 Txns Per Logon 0 Txns Per Logon
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2123 Database Time Per Sec 1.39326458157227 CentiSeconds Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2124 Physical Write Total Bytes Per Sec 18160.202874049 Bytes Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2125 Physical Read IO Requests Per Sec 1.42011834319527 Requests Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2126 Physical Read Bytes Per Sec 11633.6094674556 Bytes Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2127 Physical Write IO Requests Per Sec 1.42011834319527 Requests Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2128 Physical Write Bytes Per Sec 11633.6094674556 Bytes Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2129 DB Block Changes Per User Call 0.0191693290734824 Blocks Per Call
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2130 DB Block Gets Per User Call 0.0191693290734824 Blocks Per Call
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2131 Executions Per User Call 0.36741214057508 Executes Per Call
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2132 Logical Reads Per User Call 0.648562300319489 Reads Per Call
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2133 Total Sorts Per User Call 0.600638977635783 Sorts Per Call
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2134 Total Table Scans Per User Call 0.0383386581469649 Tables Per Call
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2135 Current OS Load 1.4794921875 Number Of Processes
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2136 Streams Pool Usage Percentage 0 % Memory allocated / Size of Streams pool
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2137 PQ QC Session Count 0 Sessions
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2138 PQ Slave Session Count 0 Sessions
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2139 Queries parallelized Per Sec 0 Queries Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2140 DML statements parallelized Per Sec 0 Statements Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2141 DDL statements parallelized Per Sec 0 Statements Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2142 PX operations not downgraded Per Sec 0 PX Operations Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2143 Session Count 40 Sessions
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2144 Average Synchronous Single-Block Read Latency 2.62433862433862 Milliseconds
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2145 I/O Megabytes per Second 0.0676246830092984 Megabtyes per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2146 I/O Requests per Second 5.17328825021133 Requests per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2147 Average Active Sessions 0.0139326458157227 Active Sessions
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2148 Active Serial Sessions 1 Sessions
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2149 Active Parallel Sessions 0 Sessions
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2150 Captured user calls 0 calls
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2151 Replayed user calls 0 calls
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2152 Workload Capture and Replay status 0 status
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2153 Background CPU Usage Per Sec 0.388786136939983 CentiSeconds Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2154 Background Time Per Sec 0.0178232121724429 Active Sessions
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2155 Host CPU Usage Per Sec 45.7480980557904 CentiSeconds Per Second
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2156 Cell Physical IO Interconnect Bytes 3482624 bytes
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2157 Temp Space Used 0 bytes
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2158 Total PGA Allocated 93776896 bytes
06/18/12 02:16:40 06/18/12 02:17:39 18-JUN-12 18-JUN-12 5915 2 2159 Total PGA Used by SQL Workareas 0 bytes
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2000 Buffer Cache Hit Ratio 100 % (LogRead - PhyRead)/LogRead
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2001 Memory Sorts Ratio 100 % MemSort/(MemSort + DiskSort)
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2003 User Transaction Per Sec 0 Transactions Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2004 Physical Reads Per Sec 0 Reads Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2005 Physical Reads Per Txn 0 Reads Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2006 Physical Writes Per Sec 0.213523131672598 Writes Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2007 Physical Writes Per Txn 3 Writes Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2008 Physical Reads Direct Per Sec 0 Reads Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2009 Physical Reads Direct Per Txn 0 Reads Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2016 Redo Generated Per Sec 17.0818505338078 Bytes Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2017 Redo Generated Per Txn 240 Bytes Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2018 Logons Per Sec 0.142348754448399 Logons Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2019 Logons Per Txn 2 Logons Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2026 User Calls Per Sec 3.98576512455516 Calls Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2027 User Calls Per Txn 56 Calls Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2030 Logical Reads Per Sec 0.854092526690391 Reads Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2031 Logical Reads Per Txn 12 Reads Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2034 Redo Writes Per Sec 0.142348754448399 Writes Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2035 Redo Writes Per Txn 2 Writes Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2038 Total Table Scans Per Sec 0.142348754448399 Scans Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2039 Total Table Scans Per Txn 2 Scans Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2040 Full Index Scans Per Sec 0 Scans Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2041 Full Index Scans Per Txn 0 Scans Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2054 Execute Without Parse Ratio 0 % (ExecWOParse/TotalExec)
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2055 Soft Parse Ratio 100 % SoftParses/TotalParses
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2057 Host CPU Utilization (%) 4.96219692361171 % Busy/(Idle+Busy)
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2067 DB Block Gets Per Sec 0 Blocks Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2068 DB Block Gets Per Txn 0 Blocks Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2069 Consistent Read Gets Per Sec 0.854092526690391 Blocks Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2070 Consistent Read Gets Per Txn 12 Blocks Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2071 DB Block Changes Per Sec 0 Blocks Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2072 DB Block Changes Per Txn 0 Blocks Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2073 Consistent Read Changes Per Sec 0 Blocks Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2074 Consistent Read Changes Per Txn 0 Blocks Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2108 Database CPU Time Ratio 113.330146807784 % Cpu/DB_Time
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2112 Library Cache Hit Ratio 100 % Hits/Pins
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2114 Shared Pool Free % 8.01832424966913 % Free/Total
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2120 Executions Per Txn 21 Executes Per Txn
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2121 Executions Per Sec 1.49466192170819 Executes Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2122 Txns Per Logon 0 Txns Per Logon
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2123 Database Time Per Sec 1.0423487544484 CentiSeconds Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2147 Average Active Sessions 0.010423487544484 Active Sessions
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2155 Host CPU Usage Per Sec 40.6405693950178 CentiSeconds Per Second
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2156 Cell Physical IO Interconnect Bytes 566272 bytes
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2157 Temp Space Used 0 bytes
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2158 Total PGA Allocated 93776896 bytes
06/18/12 02:17:25 06/18/12 02:17:39 18-JUN-12 18-JUN-12 1405 3 2159 Total PGA Used by SQL Workareas 0 bytes
205 rows selected
}}}
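If you just want the most recent 60-sec values without wading through both groups, a sketch (group_id 2 is the long-duration group in the output above):
{{{
select to_char(begin_time,'HH24:MI:SS') btm,
       metric_name, round(value,2) value, metric_unit
  from v$sysmetric
 where group_id = 2   -- 60-sec interval group
 order by metric_name;
}}}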
{{{
-- to configure
nmtui
-- to show
ip addr
nmcli d show
-- to get ifconfig working
yum install net-tools
}}}
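And for a static IP with nmcli, a hypothetical sketch (connection name, addresses, and DNS are made up):
{{{
# set a static IPv4 address on connection eth0
nmcli con mod eth0 ipv4.addresses 192.168.1.50/24 \
      ipv4.gateway 192.168.1.1 ipv4.dns 8.8.8.8 ipv4.method manual
nmcli con up eth0
}}}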
https://unix.stackexchange.com/questions/188750/unable-to-configure-the-network-settings-in-centos-7
https://oracle-base.com/articles/linux/linux-network-configuration
https://www.thegeekdiary.com/centos-rhel-7-troubleshooting-ifconfig-command-not-found/
.
to get the firmware version of the HBA card
{{{
systool -c fc_host -v
}}}
to get the eth interfaces
{{{
lspci | grep -i eth
}}}
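The port WWNs are also directly readable from sysfs, e.g.:
{{{
# port WWNs of all FC HBA ports
cat /sys/class/fc_host/host*/port_name
}}}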
http://www.thegeekstuff.com/2010/08/hba-wwn-address/
http://nguyenantoine.blogspot.com/2010/05/red-hat-rhel-usefull-san-command.html
http://davidlopezrodriguez.com/2008/01/24/playing-around-with-scsi-host-bus-adapters-hba-in-linux/
http://www.cyberciti.biz/faq/linux-find-fibre-channel-device-information/
http://www.linuxforums.org/forum/red-hat-fedora-linux/178836-what-does-systool-c-fc_remote_ports-v-mean.html
http://alinux.web.id/2011/04/16/how-to-knowing-sas-hba-on-ibm-blade-hs21-or-hs22.html
http://unixsa.wordpress.com/category/storage/
https://talend-training.blogspot.com/2011/12/how-to-optimize-tuniquerow-component-in.html
https://help.talend.com/reader/iYcvdknuprDzYycT3WRU8w/sh7RZbAtg4KTo7vc06k4Kw <- talend doc
https://community.talend.com/t5/Design-and-Development/different-results-for-tUniqueRow-and-tAggregateRow/td-p/27328
https://www.talendforge.org/forum/viewtopic.php?id=27325 <- 420,000 rows x 55 columns
http://souravinayak.blogspot.com/2017/02/data-deduplication-with-tuniquerow.html
https://stackoverflow.com/questions/34734723/how-to-retrive-duplicate-records-in-talend-integration
https://community.talend.com/t5/Design-and-Development/howa-to-optimise-tUniqRow-and-tSortRow/td-p/49309
http://www.tablesgenerator.com/html_tables/load#
http://divtable.com/converter/
<<<
Tableau generates inefficient SQL for date predicates using functions like DATE_PART(), EXTRACT() and CAST()
https://community.tableau.com/s/question/0D54T00000C5mVXSAZ/tableau-generates-ineffcienct-sql-for-date-predicates-using-functins-like-datepart-extract-and-cast
Performance Improvement: Use date range instead of casting #4043
https://github.com/metabase/metabase/issues/4043
<<<
If performance is an issue I recommend installing these tools and watching the videos below
<<<
* perf tool - tableau log viewer
https://github.com/tableau/tableau-log-viewer https://github.com/tableau/tableau-log-viewer/releases
* tableau performance tuning
Enhancing Tableau Data Queries https://www.youtube.com/watch?v=STfTQ55QE9s&index=19&list=LLmp7QJNLQvBQcvdltLTkiYQ&t=0s
* tableau dashboard actions
Tableau Actions Give Your Dashboards Superpowers https://www.youtube.com/watch?v=r8SNKmzsW6c
<<<
* For general server admin, watch this
<<<
tableau server
https://www.udemy.com/administering-tableau-server-10-with-real-time-scenarios/
<<<
* For automation you need to install tabcmd and the TabPy/R integration
<<<
tabcmd
Tabcmd and Batch Scripts to automate PDF generation https://www.youtube.com/watch?v=ajB7CDcoyDU
tabpy/R
Data science applications with TabPy/R https://www.youtube.com/watch?v=nRtOMTnBz_Y&feature=youtu.be
<<<
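A hypothetical tabcmd sketch (server URL, credentials, and view path are made up):
{{{
tabcmd login -s https://tableau.example.com -u admin -p password
tabcmd export "MyWorkbook/MyView" --pdf -f /tmp/myview.pdf
tabcmd logout
}}}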
Tableau Server Client & the REST API to automate your workflow https://www.youtube.com/watch?v=NHomg6M9RcA&list=PLklSCDzsQHdkjiTHqqCaU8tdA70AlSnPs&index=9&t=0s
Understanding Level of Detail (LOD) expressions - like analytics functions inside your workbooks https://www.youtube.com/watch?v=kmApWaE3Os4&list=PLklSCDzsQHdkjiTHqqCaU8tdA70AlSnPs&index=19&t=0s
Tableau Bridge - for local updates of data https://www.youtube.com/watch?v=CjhR20j1K64&list=PLklSCDzsQHdkjiTHqqCaU8tdA70AlSnPs&index=25&t=0s
this one: IT @Tableau | Cloudy with 100% chance of data - how Tableau does Tableau internally; interesting how they used JSON as their data source https://www.youtube.com/watch?list=PLklSCDzsQHdkjiTHqqCaU8tdA70AlSnPs&time_continue=2820&v=BQch2QTLvtU
.
<<showtoc>>
! performance talks
Enhancing Tableau Data Queries https://www.youtube.com/watch?v=STfTQ55QE9s&index=19&list=LLmp7QJNLQvBQcvdltLTkiYQ&t=0s
Tableau Actions Give Your Dashboards Superpowers https://www.youtube.com/watch?v=r8SNKmzsW6c
Data science applications with TabPy/R https://www.youtube.com/watch?v=nRtOMTnBz_Y&feature=youtu.be
! workbook performance
https://github.com/tableau/tableau-log-viewer
https://github.com/tableau/tableau-log-viewer/wiki
https://github.com/tableau/tableau-log-viewer/releases
https://www.google.com/search?q=tableau+log+viewer&oq=tableau+log+view&aqs=chrome.0.0j69i57j0l2.2385j0j1&sourceid=chrome&ie=UTF-8
https://tableau.github.io/extensions-api/docs/trex_logging.html
! server performance
https://github.com/tableau/Logshark/releases
log shark https://www.tableau.com/about/blog/2016/11/introducing-logshark-analyze-your-tableau-server-log-files-tableau-62025
https://tableau.github.io/Logshark/
https://github.com/tableau/Logshark
! other open source tools
https://tableau.github.io/
https://github.com/tableau/TabPy
.
<<<
I would categorize the performance in two parts:
1) The workbooks/dashboards
Those “dashboards” can be the desktop workbook files or the published workbooks on the tableau server.
For the desktop workbooks there’s this tabjolt
http://www.tableau.com/about/blog/2015/4/introducing-tabjolt-point-and-run-load-testing-solution-tableau-server-38604
https://github.com/tableau/tabjolt
http://tableaulove.tumblr.com/post/117121569355/load-testing-tableau-server-9-with-tabjolt-on-ec2
https://www.youtube.com/watch?v=Jzq9I-BrEfk
For the web published workbooks you can use something like LoadRunner
https://www.youtube.com/watch?v=ty5a0RiSWDM
The two above can be categorized as your app layer, where once all the data is pulled the workbook does all the magic and the visualizations. Here the more dynamic your viz and the more it pulls from different data sources, the slower it gets.
2) The data extracts
Imagine this as your materialized view or the cache seeding process in OBIEE. There are two stages where your data can be in tableau:
For desktop
Live data -> TDE file (tableau data extract)
For web published dashboard
Live data -> published data extract
So as much as possible you want the data to be coming from the extract files. The general workflow here is that the desktop users initially upload their workbooks and data extract file to the Tableau server, and it becomes a published dashboard accessible from a web browser. The workbook is out there, but then how about the new data? You'll configure the data extract to refresh periodically, connect it to an Oracle database (TNS), then point it to the tables you'd like to have, but usually they point it to a view (SQL w/ all the joins in it; see the sketch after this note).
From the workload tuning perspective, you can investigate whether the queries are making efficient use of the hardware (bandwidth and PX), because once all of the pull from the database is done, that's it; the next level of performance burden is on the workbooks. And when you have to tune the app layer there's a performance dashboard in Tableau Server that you can investigate, but this is only helpful for the web published ones.
BTW this kind of architecture is the same on http://spotfire.tibco.com/ which is a tableau competitor
<<<
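As a concrete sketch of the "view (SQL w/ all the joins in it)" that the extract refresh typically points to; table and column names below are made up for illustration:
{{{
-- hypothetical extract source: do the joins once in the database,
-- then point the Tableau data extract at sales_extract_v
create or replace view sales_extract_v as
select s.sale_id,
       s.sale_date,
       c.customer_name,
       p.product_name,
       s.amount
  from sales s, customers c, products p
 where s.customer_id = c.customer_id
   and s.product_id = p.product_id;
}}}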
an example use case of tableau and awr_topevents.sql for dynamic visualization and data points drill down
http://www.evernote.com/shard/s48/sh/f0b2191b-e88c-4b3a-8b56-15ffd454aea4/44431b0f327e04d826d2f9d84a565023
awr_topevents script available here http://karlarao.wordpress.com/scripts-resources
* To get the time dimension working do this
{{{
from this
TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm,
to this
TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI') tm,
}}}
* set the following on SQL*Plus
{{{
set feedback off pages 0 term off head on und off trimspool on echo off lines 4000 colsep ','
COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;
col instname format a8
col hostname format a20
col snap_id format 99999 heading "snapid"
col tm format a15 heading "tm"
col inst format 90 heading "inst"
col dur format 999990.00 heading "dur"
col event format a55 heading "Event"
col event_rank format 90 heading "EventRank"
col waits format 9999999990.00 heading "Waits"
col time format 9999999990.00 heading "Timesec"
col avgwt format 99990.00 heading "Avgwtms"
col pctdbt format 9990.0 heading "DBTimepct"
col aas format 990.0 heading "Aas"
col wait_class format a15 heading "WaitClass"
spool awr_topevents_v2-tableau-&_instname-&_hostname..txt
select '&_instname' instname, '&_dbid' db_id, '&_hostname' hostname, snap_id
... output snipped ...
ORDER BY snap_id;
spool off
host sed -n -i '2,$ p' awr_topevents_v2-tableau-&_instname-&_hostname..txt
}}}
* clean up the headers of AWR data points
{{{
sed -n -i '5,$ p' awr_topevents_v2.txt
cat -n awr_topevents_v2.txt | less
}}}
<<showtoc>>
<<<
Talentbuddy Problems
Updated: June 15, 2015
Total: 136
Normal: 50
Medium: 56
Hard: 30
! Normal
Average grade
Binary
Binary float
Bottle
Bounce rate
Caesar shift
Common courses
Copy-Paste
@@Count digits@@
Count occurences
Count ones
Count substrings
Count tokens
Count words
Countries
Find character
Find substring
@@FizzBuzz@@
Float division
Growth
Highest grade
Integer division
Invert sum
Linear equation
Max
Mean
Merge Sort
@@Missing number@@
Odd square sum
Pair product
Pair Sum
Prediction
Prime numbers
Request counting
Remove stop words
Remove substring
Scheduling
Select substring
Simple sum
Sorting Students
@@Sort names@@
Sort words
Sorting students
Standard deviation
Student progress
Successful students
Time
Top locations
Vowel count
Z-score
! Medium
2^n
Arithmetic evaluation
AST Part One
Bacon number
Balanced brackets
Basic search query
Book store
Brands
Compute average
Copy async
Currency exchange
Depth first traversal
Dispatcher
Divide by 2
Even number
Find String
Fraction
Heads and tails
Indexes
Intersecting street segments
Linked List Cycle
Longest improvement
@@Longest palindrome@@
Longest street segment
Majority number
Max sum
Median
Medical app
Multiply by 2
Neighbourhood
Nth number
Nth permutation
PACO
Parallel async
Plane tickets
Power of 2
Precision
Priority
Purchase tracking
Query tokens stemming
Rain
Read async
Relative sort
Selection
Semantic analysis
Set bit
Shopping cart
Skyscrapers
Sorted merge
Speed
Swap values
Tokenize query
Topological sort
Unset bit
User administration
User table
! Hard
AST Part Two
Check
Chocolate bars
Coins
Contact management
Context extraction
Context pruning
Extract book titles
Failure detection
Fast power
Hash String
Intermediary code
LLVM parser
Map matcher
Mapper
Palindromes count
Pouring
Price experiment
Pub crawl
Reducer
Selection
Simple expression
Social network
Sqrt
Streets nearby
Trigger words
Tuple sum
Tweets per second
Typeahead
Unique sequence
<<<
<<showtoc>>
<<<
Updated: March 8, 2015
! Languages - 34
Getting Started - 2
Web Analytics - 5
Classroom Analysis - 7
Text Editor - 5
Data Conversion - 6
Simple Loops - 6
Expressions - 3
! Tech Interviews - 52
Elementary Data Structures - 7
Sorting and Order Statistics - 6
Search - 5
Elementary Graph Problems - 9
Advanced Techniques - 3
Math - 7
General Interview Practice - 5
HubSpot Challenges - 2
Redbeacon Challenges - 2
Twitter Challenges - 2
Uber Challenges - 4
! Databases - 11
MongoDB Basics - 7
Redis Basics - 4
! Projects - 24
Search Engine - 4
Books - 4
Map Reduce - 4
GPS Positioning - 5
Symbolic Execution - 7
! Fun - 141
Tokenize Query - Lessons - 4
Bounce Rate - Lessons - 3
Computer Vision - Lessons - 3
Programming Basics - 6
Expressions - 3
Simple Loops - 6
Data Conversion - 6
Search - 5
Advanced Techniques - 3
Twitter Challenges - 2
Text Editor - 5
Search Engine - 4
MongoDB Basics - 7
Redis Basics - 4
Elementary Graph Problems - 9
Google Interview - 4
Simple Interview - 2
HubSpot Challenges - 2
Uber Challenges - 4
Web Analytics - 5
Classroom Analysis - 7
Books - 4
GPS Positioning - 5
Map Reduce - 4
Symbolic Execution - 7
Getting Started - 2
Elementary Data Structures - 7
Sorting and Order Statistics - 6
Math - 7
General Interview Practice - 5
Redbeacon Challenges - 2
Async JavaScript - 3
<<<
http://highscalability.com/blog/2013/10/23/strategy-use-linux-taskset-to-pin-processes-or-let-the-os-sc.html
http://mailinator.blogspot.com.au/2010/02/how-i-sped-up-my-server-by-factor-of-6.html
http://mrotaru.wordpress.com/2013/10/10/scaling-to-12-million-concurrent-connections-how-migratorydata-did-it/
https://builtwith.com/
http://stackoverflow.com/questions/396739/how-do-you-determine-what-technology-a-website-is-built-on
<<<
been researching time series databases. so far there are so many of them! the reason is different use cases/niches. I found this tempoDB (IoT data store), they pivoted to tempoIQ then got acqui-hired by Avant. check out https://app.tempoiq.com/docs/html/index.html
<<<
https://app.tempoiq.com/docs/html/index.html
https://twitter.com/tempo_iq
https://twitter.com/NpappaG/status/686692241444831232
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=tempoiq%20acquired
https://www.crunchbase.com/organization/tempo#/entity
http://debanked.com/2016/04/hungry-for-data-avant-acqui-hires-two-startups-to-expand-tech-team/
https://fijiaaron.wordpress.com/2019/01/19/tensorflow-python-virtualenv-setup/
{{{
### install pyenv and virtualenv
brew install pyenv
brew install pyenv-virtualenv
### initialize pyenv
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
### install python 3.6 with pyenv
pyenv install 3.6.5
### create a virtualenv for using tensorflow
pyenv virtualenv 3.6.5 tf
pyenv activate tf
### install dependencies in virtualenv using pip
pip install --upgrade pip
pip install tensorflow
### optionally use jupyter web interface
pip install jupyter
jupyter notebook
### use tensorflow in your app
import tensorflow as tf
}}}
check out this gist https://gist.github.com/fijiaaron/0755a3b28b9e8aaf1cd9d4181c3df346
putty connection manager
mobaxterm
<<showtoc>>
! references
<<<
DBMS_MVIEW.refresh delete and insert
https://www.google.com/search?source=hp&ei=kRiZXrTgNJCwytMP8v2f0Aw&q=DBMS_MVIEW.refresh+delete+and+insert&oq=DBMS_MVIEW.refresh+delete+and+insert&gs_lcp=CgZwc3ktYWIQAzIFCCEQqwIyBQghEKsCMgUIIRCrAjoCCAA6BggAEBYQHjoICAAQFhAKEB46BQgAEM0COgUIIRCgAToICCEQFhAdEB46BwghEAoQoAFKDQgXEgkxMS04MmcxMDFKCwgYEgcxMS0yZzE2UPYLWJAhYIYsaABwAHgCgAGBAogBrRCSAQYxMS44LjGYAQCgAQKgAQGqAQdnd3Mtd2l6sAEA&sclient=psy-ab&ved=0ahUKEwj07J75uO7oAhUQmHIEHfL-B8oQ4dUDCAg&uact=5
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:1857127200346321681
https://hourim.wordpress.com/2015/04/18/parallel-refreshing-a-materialized-view/
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/arpls/DBMS_MVIEW.html#GUID-36379E2B-3F21-4F92-8019-F902BFB2896A
<<<
! testcase
{{{
drop materialized view hr.emp_dept;
create materialized view hr.emp_dept
build immediate
refresh on demand
disable query rewrite
as
select a.owner, a.object_id, count (*)
from sys.dba_objects a, sys.dba_objects b, sys.dba_objects c, sys.dba_objects d
where a.object_id = b.object_id
and a.object_id = c.object_id
and a.object_id = d.object_id
group by a.owner, a.object_id
/
-- manual refresh
EXEC DBMS_MVIEW.refresh('hr.emp_dept'); -- DELETE,INSERT
$ date
Fri Apr 17 23:25:30 UTC 2020
EXEC DBMS_MVIEW.refresh('hr.emp_dept', 'C', atomic_refresh=>FALSE); -- TRUNCATE,INSERT
!date
Fri Apr 17 23:27:16 UTC 2020
-- ASH DUMP DATA
-- DELETE,INSERT
0,cdb1,1,111973319,04/17/20 23:25:50,04/17/20 23:25:48,17-APR-20 11.25.50.425 PM,47,59023,FOREGROUND,16,0,cyjw19ka8g48g,Y,0,7,DELETE,8865866553336969485,0hfqcvbybguzv,47,2544800256,1,DELETE,,16777219,17-APR-20,13428,1,,,,,,,,,34,file#,4,block#,42899,blocks,17,,,216,ON CPU,0,NOT IN WAIT,,,,,244262,4,42899,0,59,VERSION2,17326,020008009DB60000,,1024,N,N,N,Y,N,N,N,N,N,N,N,N,N,N,N,3427055676,sqlplus@karldevfedora (TNS V1-V3),sqlplus@karldevfedora (TNS V1-V3),,,karldevfedora,0,,0,0,,,,1000008,18,0,1245184,0,1245184,2281472,0
0,cdb1,1,111973320,04/17/20 23:25:51,,17-APR-20 11.25.51.425 PM,47,59023,FOREGROUND,16,0,afjq05c0jtcc0,Y,0,2,INSERT,14637874184148315399,0hfqcvbybguzv,47,2968490067,,,,,,13428,1,,,,,,,,,46,file#,0,block#,284,blocks,1,,,67,ON CPU,0,NOT IN WAIT,,,,,244262,4,42950,0,59,VERSION2,17326,020008009DB60000,,1168,N,Y,Y,Y,N,N,N,N,N,N,N,N,N,N,N,3427055676,sqlplus@karldevfedora (TNS V1-V3),sqlplus@karldevfedora (TNS V1-V3),,,karldevfedora,0,,0,0,18323942,701002,1156747,1000027,10,0,466944,0,466944,18403328,0
0,cdb1,1,111973321,04/17/20 23:25:52,04/17/20 23:25:51,17-APR-20 11.25.52.425 PM,47,59023,FOREGROUND,16,0,afjq05c0jtcc0,Y,0,2,INSERT,14637874184148315399,0hfqcvbybguzv,47,2968490067,2,HASH,GROUP BY,16777217,17-APR-20,13428,1,,,,,,,,,46,file#,0,block#,284,blocks,1,,,67,ON CPU,0,NOT IN WAIT,,,,,244262,4,42950,0,59,VERSION2,17326,020008009DB60000,,1024,N,N,N,Y,N,N,N,N,N,N,N,N,N,N,N,3427055676,sqlplus@karldevfedora (TNS V1-V3),sqlplus@karldevfedora (TNS V1-V3),,,karldevfedora,0,,0,0,,,,1002073,0,0,0,0,0,176869376,0
0,cdb1,1,111973322,04/17/20 23:25:53,04/17/20 23:25:51,17-APR-20 11.25.53.435 PM,47,59023,FOREGROUND,16,0,afjq05c0jtcc0,Y,0,2,INSERT,14637874184148315399,0hfqcvbybguzv,47,2968490067,2,HASH,GROUP BY,16777217,17-APR-20,13428,1,,,,,,,,,47,location,3,consumer group id,17326, ,0,,,12190,ON CPU,0,NOT IN WAIT,,,,,244262,4,42950,0,59,VERSION2,17326,020008009DB60000,,1024,N,N,N,Y,N,N,N,N,N,N,N,N,N,N,N,3427055676,sqlplus@karldevfedora (TNS V1-V3),sqlplus@karldevfedora (TNS V1-V3),,,karldevfedora,0,,0,0,2381336,1586946,2381336,1015816,0,0,0,0,0,370397184,0
-- TRUNCATE,INSERT
0,cdb1,1,111973418,04/17/20 23:27:32,,17-APR-20 11.27.32.561 PM,71,10482,FOREGROUND,16,0,4ag4jqw92cpu6,Y,0,85,TRUNCATE TABLE,0,b1r8hszxag0xf,47,491277193,,,,,,13428,1,,,,,,,,,24,address,1640720912,number,268,tries,0,,,115,ON CPU,0,NOT IN WAIT,,,,,-1,0,0,0,59,VERSION2,17326,,,1168,N,Y,Y,Y,N,N,N,N,N,N,N,N,N,N,N,3427055676,sqlplus@karldevfedora (TNS V1-V3),sqlplus@karldevfedora (TNS V1-V3),,,karldevfedora,0,,0,0,47513293,22496,1879394,1000012,0,0,0,0,0,2215936,0
0,cdb1,1,111973419,04/17/20 23:27:33,,17-APR-20 11.27.33.581 PM,15,64544,BACKGROUND,16,0,,Y,-1,0,,0,,0,0,,,,,,,,,,1,15,64544,,wait for stopper event to be increased,3065535233,11445,,0,,0,,0,Other,1893977003,0,WAITING,58365,UNKNOWN,,,,,0,3,192,0,59,VERSION2,,,,0,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,165959219,oracle@karldevfedora (SMON),,,,karldevfedora,0,,0,0,,,,1017025,0,0,0,0,0,1159168,0
0,cdb1,1,111973419,04/17/20 23:27:33,,17-APR-20 11.27.33.581 PM,42,48256,FOREGROUND,16,0,,Y,-1,0,,0,,0,0,,,,,,,,,,1,15,64544,262148,,,18,sleeptime/senderid,0,passes,0,,0,,,7,ON CPU,0,NOT IN WAIT,,,,,-1,0,0,0,59,VERSION2,17324,,,0,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,165959219,oracle@karldevfedora (P000),,,,karldevfedora,0,,0,0,995134,0,988989,1017025,0,0,0,0,0,962560,0
0,cdb1,1,111973419,04/17/20 23:27:33,,17-APR-20 11.27.33.581 PM,71,10482,FOREGROUND,16,0,dcyftv8w2pxr3,Y,0,2,INSERT,8672105135424904022,b1r8hszxag0xf,47,3952231885,,,,,,13428,1,,,,,,,,,92,file#,0,block#,284,blocks,1,,,75,ON CPU,0,NOT IN WAIT,,,,,-1,0,0,0,59,VERSION2,17326,0600060002AC0000,,1168,N,Y,Y,Y,N,N,N,N,N,N,N,N,N,N,N,3427055676,sqlplus@karldevfedora (TNS V1-V3),sqlplus@karldevfedora (TNS V1-V3),,,karldevfedora,0,,0,0,,,,1017025,11,0,147456,0,147456,8048640,0
0,cdb1,1,111973420,04/17/20 23:27:34,04/17/20 23:27:33,17-APR-20 11.27.34.581 PM,71,10482,FOREGROUND,16,0,dcyftv8w2pxr3,Y,0,2,INSERT,8672105135424904022,b1r8hszxag0xf,47,3952231885,4,HASH JOIN,,16777218,17-APR-20,13428,1,,,,,,,,,92,file#,0,block#,284,blocks,1,,,75,ON CPU,0,NOT IN WAIT,,,,,-1,0,0,0,59,VERSION2,17326,0600060002AC0000,,1024,N,N,N,Y,N,N,N,N,N,N,N,N,N,N,N,3427055676,sqlplus@karldevfedora (TNS V1-V3),sqlplus@karldevfedora (TNS V1-V3),,,karldevfedora,0,,0,0,1367093,593171,1367093,1000991,0,0,0,0,0,76533760,0
}}}
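As a quick cross-check on the refresh behavior above, the standard DBA_MVIEWS dictionary view records when and how the MV was last refreshed. Note that LAST_REFRESH_TYPE only distinguishes COMPLETE vs FAST; the DELETE vs TRUNCATE difference is what the ASH dump exposes.
{{{
-- sanity check the last refresh of the testcase MV;
-- LAST_REFRESH_TYPE shows COMPLETE/FAST only
select owner, mview_name, last_refresh_type, last_refresh_date
from dba_mviews
where owner = 'HR'
and mview_name = 'EMP_DEPT';
}}}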
{{{
sqlmon output
22:31:24 SYS@cdb1> @sqlmon
no rows selected
*** GV$SESSION ***
INST SID SERIAL USERNAME PROG SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME EVENT SQL_TEXT HOURS OFF IO_SAVED_% machine OSUSER
----- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- -------------------- ------------------------------ ------ --- ---------- ------------------------------ ----------
1 42 57922 SYS sqlplus@ka fsccynmk4ga7t 0 2886813138 2 .04 SQL*Net message to c select a.inst_id inst, sid, a. 0 No 0 karldevfedora oracle
no rows selected
22:31:29 SYS@cdb1>
22:31:44 SYS@cdb1>
22:31:44 SYS@cdb1> @sqlmon
no rows selected
*** GV$SESSION ***
INST SID SERIAL USERNAME PROG SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME EVENT SQL_TEXT HOURS OFF IO_SAVED_% machine OSUSER
----- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- -------------------- ------------------------------ ------ --- ---------- ------------------------------ ----------
1 71 47274 SYS sqlplus@ka fudftpsst93ny 0 0 0 .00 db file sequential r /* MV_REFRESH (INS) */INSERT / 0 No 0 karldevfedora oracle
1 42 57922 SYS sqlplus@ka fsccynmk4ga7t 0 2886813138 3 .03 SQL*Net message to c select a.inst_id inst, sid, a. 0 No 0 karldevfedora oracle
*** GV$SESSION sort by INST_ID ***
INST_ID SID UNIX_PID ORACLE_LOGIN PROGRAM SQL_ID PLAN_HASH_VALUE EVENT SQL_TEXT HOURS machine OSUSER
---------- ------ -------- --------------- -------------------- ------------- --------------- ---------------------------------------- ------------------------------ ------ ------------------------------ ----------
1 71 8919 SYS sqlplus@karldevfedor 0hfqcvbybguzv 0 db file sequential read BEGIN DBMS_MVIEW.refresh('hr.e 0 karldevfedora oracle
22:32:32 SYS@cdb1>
22:32:38 SYS@cdb1>
22:32:38 SYS@cdb1> @sqlmon
no rows selected
*** GV$SESSION ***
INST SID SERIAL USERNAME PROG SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME EVENT SQL_TEXT HOURS OFF IO_SAVED_% machine OSUSER
----- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- -------------------- ------------------------------ ------ --- ---------- ------------------------------ ----------
1 42 57922 SYS sqlplus@ka fsccynmk4ga7t 0 2886813138 4 .02 SQL*Net message to c select a.inst_id inst, sid, a. 0 No 0 karldevfedora oracle
*** GV$SESSION sort by INST_ID ***
INST_ID SID UNIX_PID ORACLE_LOGIN PROGRAM SQL_ID PLAN_HASH_VALUE EVENT SQL_TEXT HOURS machine OSUSER
---------- ------ -------- --------------- -------------------- ------------- --------------- ---------------------------------------- ------------------------------ ------ ------------------------------ ----------
1 71 8919 SYS sqlplus@karldevfedor 0hfqcvbybguzv 0 SQL*Net message from client BEGIN DBMS_MVIEW.refresh('hr.e 0 karldevfedora oracle
22:32:42 SYS@cdb1> 22:32:42 SYS@cdb1> 22:32:42 SYS@cdb1>
22:32:43 SYS@cdb1>
22:32:43 SYS@cdb1>
22:32:43 SYS@cdb1> @sqlmon
no rows selected
*** GV$SESSION ***
INST SID SERIAL USERNAME PROG SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME EVENT SQL_TEXT HOURS OFF IO_SAVED_% machine OSUSER
----- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- -------------------- ------------------------------ ------ --- ---------- ------------------------------ ----------
1 42 57922 SYS sqlplus@ka fsccynmk4ga7t 0 2886813138 5 .02 SQL*Net message to c select a.inst_id inst, sid, a. 0 No 0 karldevfedora oracle
*** GV$SESSION sort by INST_ID ***
INST_ID SID UNIX_PID ORACLE_LOGIN PROGRAM SQL_ID PLAN_HASH_VALUE EVENT SQL_TEXT HOURS machine OSUSER
---------- ------ -------- --------------- -------------------- ------------- --------------- ---------------------------------------- ------------------------------ ------ ------------------------------ ----------
1 71 8919 SYS sqlplus@karldevfedor 0hfqcvbybguzv 0 SQL*Net message from client BEGIN DBMS_MVIEW.refresh('hr.e 0 karldevfedora oracle
22:32:50 SYS@cdb1>
22:35:59 SYS@cdb1>
22:36:00 SYS@cdb1> @sqlmon
no rows selected
*** GV$SESSION ***
INST SID SERIAL USERNAME PROG SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME EVENT SQL_TEXT HOURS OFF IO_SAVED_% machine OSUSER
----- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- -------------------- ------------------------------ ------ --- ---------- ------------------------------ ----------
1 71 47274 SYS sqlplus@ka 1a12b10m7rvfs 0 84725288 0 .12 db file sequential r /* MV_REFRESH (INS) */INSERT / 0 No 0 karldevfedora oracle
1 42 57922 SYS sqlplus@ka fsccynmk4ga7t 0 2886813138 6 .01 SQL*Net message to c select a.inst_id inst, sid, a. 0 No 0 karldevfedora oracle
*** GV$SESSION sort by INST_ID ***
INST_ID SID UNIX_PID ORACLE_LOGIN PROGRAM SQL_ID PLAN_HASH_VALUE EVENT SQL_TEXT HOURS machine OSUSER
---------- ------ -------- --------------- -------------------- ------------- --------------- ---------------------------------------- ------------------------------ ------ ------------------------------ ----------
1 71 8919 SYS sqlplus@karldevfedor 9j2vmn758hs1t 0 db file sequential read SELECT JOB_NAME NAME FROM DBA_ 0 karldevfedora oracle
1 71 8919 SYS sqlplus@karldevfedor b1r8hszxag0xf 0 db file sequential read BEGIN DBMS_MVIEW.refresh('hr.e 0 karldevfedora oracle
22:36:13 SYS@cdb1> 22:36:13 SYS@cdb1> 22:36:13 SYS@cdb1> 22:36:13 SYS@cdb1>
22:36:14 SYS@cdb1>
22:36:14 SYS@cdb1> @sqlmon
no rows selected
*** GV$SESSION ***
INST SID SERIAL USERNAME PROG SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME EVENT SQL_TEXT HOURS OFF IO_SAVED_% machine OSUSER
----- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- -------------------- ------------------------------ ------ --- ---------- ------------------------------ ----------
1 42 57922 SYS sqlplus@ka fsccynmk4ga7t 0 2886813138 7 .01 SQL*Net message to c select a.inst_id inst, sid, a. 0 No 0 karldevfedora oracle
*** GV$SESSION sort by INST_ID ***
INST_ID SID UNIX_PID ORACLE_LOGIN PROGRAM SQL_ID PLAN_HASH_VALUE EVENT SQL_TEXT HOURS machine OSUSER
---------- ------ -------- --------------- -------------------- ------------- --------------- ---------------------------------------- ------------------------------ ------ ------------------------------ ----------
1 71 8919 SYS sqlplus@karldevfedor b1r8hszxag0xf 0 SQL*Net message from client BEGIN DBMS_MVIEW.refresh('hr.e 0 karldevfedora oracle
22:36:18 SYS@cdb1>
22:36:20 SYS@cdb1>
22:36:21 SYS@cdb1> @sqlmon
no rows selected
*** GV$SESSION ***
INST SID SERIAL USERNAME PROG SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME EVENT SQL_TEXT HOURS OFF IO_SAVED_% machine OSUSER
----- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- -------------------- ------------------------------ ------ --- ---------- ------------------------------ ----------
1 42 57922 SYS sqlplus@ka fsccynmk4ga7t 0 2886813138 8 .01 SQL*Net message to c select a.inst_id inst, sid, a. 0 No 0 karldevfedora oracle
*** GV$SESSION sort by INST_ID ***
INST_ID SID UNIX_PID ORACLE_LOGIN PROGRAM SQL_ID PLAN_HASH_VALUE EVENT SQL_TEXT HOURS machine OSUSER
---------- ------ -------- --------------- -------------------- ------------- --------------- ---------------------------------------- ------------------------------ ------ ------------------------------ ----------
1 71 8919 SYS sqlplus@karldevfedor b1r8hszxag0xf 0 SQL*Net message from client BEGIN DBMS_MVIEW.refresh('hr.e 0 karldevfedora oracle
22:36:25 SYS@cdb1>
22:36:27 SYS@cdb1>
22:36:27 SYS@cdb1> @sqlmon
no rows selected
*** GV$SESSION ***
INST SID SERIAL USERNAME PROG SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME EVENT SQL_TEXT HOURS OFF IO_SAVED_% machine OSUSER
----- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- -------------------- ------------------------------ ------ --- ---------- ------------------------------ ----------
1 42 57922 SYS sqlplus@ka fsccynmk4ga7t 0 2886813138 9 .01 SQL*Net message to c select a.inst_id inst, sid, a. 0 No 0 karldevfedora oracle
*** GV$SESSION sort by INST_ID ***
INST_ID SID UNIX_PID ORACLE_LOGIN PROGRAM SQL_ID PLAN_HASH_VALUE EVENT SQL_TEXT HOURS machine OSUSER
---------- ------ -------- --------------- -------------------- ------------- --------------- ---------------------------------------- ------------------------------ ------ ------------------------------ ----------
1 71 8919 SYS sqlplus@karldevfedor b1r8hszxag0xf 0 SQL*Net message from client BEGIN DBMS_MVIEW.refresh('hr.e 0 karldevfedora oracle
22:36:32 SYS@cdb1>
22:36:34 SYS@cdb1>
22:36:35 SYS@cdb1> @sqlmon
no rows selected
*** GV$SESSION ***
INST SID SERIAL USERNAME PROG SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME EVENT SQL_TEXT HOURS OFF IO_SAVED_% machine OSUSER
----- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- -------------------- ------------------------------ ------ --- ---------- ------------------------------ ----------
1 42 57922 SYS sqlplus@ka fsccynmk4ga7t 0 2886813138 10 .01 SQL*Net message to c select a.inst_id inst, sid, a. 0 No 0 karldevfedora oracle
*** GV$SESSION sort by INST_ID ***
INST_ID SID UNIX_PID ORACLE_LOGIN PROGRAM SQL_ID PLAN_HASH_VALUE EVENT SQL_TEXT HOURS machine OSUSER
---------- ------ -------- --------------- -------------------- ------------- --------------- ---------------------------------------- ------------------------------ ------ ------------------------------ ----------
1 71 8919 SYS sqlplus@karldevfedor b1r8hszxag0xf 0 SQL*Net message from client BEGIN DBMS_MVIEW.refresh('hr.e 0 karldevfedora oracle
22:36:39 SYS@cdb1>
22:36:49 SYS@cdb1> @sqlmon
no rows selected
*** GV$SESSION ***
INST SID SERIAL USERNAME PROG SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME EVENT SQL_TEXT HOURS OFF IO_SAVED_% machine OSUSER
----- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- -------------------- ------------------------------ ------ --- ---------- ------------------------------ ----------
1 42 57922 SYS sqlplus@ka fsccynmk4ga7t 0 2886813138 11 .01 SQL*Net message to c select a.inst_id inst, sid, a. 0 No 0 karldevfedora oracle
no rows selected
22:36:53 SYS@cdb1>
}}}
http://www.evernote.com/shard/s48/sh/dc1123d9-eac9-4848-a873-533d84895240/5d9ca01b6a63f11b72708fc159f55cfc
{{{
# SQLHC with TCB
Download the SQLHC ADW version SQL Tuning Health-Check Script (SQLHC) - ADB Specific (Doc ID)
# run with TCB
@sqlhc T 8m04jyb9nwjv9 TCB
# verify if you have credentials created
# you need to have an "enabled" credential to create/download files on the object storage
set lines 500
col owner format a30
col credential_name format a30
col username format a30
select owner, credential_name, username, enabled from dba_credentials;
# TCB will be created under SQL_TCB_DIR
set line 200 pages 200
col object_name format a50
select object_name from DBMS_CLOUD.list_files('SQL_TCB_DIR');
# copy created files to your bucket
set serveroutput on
begin
FOR file_list IN (select object_name from DBMS_CLOUD.list_files('SQL_TCB_DIR'))
LOOP
dbms_output.put_line('File=>' || file_list.object_name);
DBMS_CLOUD.PUT_OBJECT(
CREDENTIAL_NAME => 'DEF_CREDS',
OBJECT_URI => 'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/intoraclerwp/kabucket/sqlhc/8m04jyb9nwjv9/' || file_list.object_name,
DIRECTORY_NAME => 'SQL_TCB_DIR',
FILE_NAME => file_list.object_name
);
END LOOP;
end;
/
# download files from the bucket, cleanup the download.sh for unnecessary text
spool download.sh
set pages 0 lines 800
select 'curl -u "karl.arao@oracle.com:<AUTH TOKEN HERE>" -X GET https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/intoraclerwp/kabucket/sqlhc/8m04jyb9nwjv9/' || object_name || ' -O' from DBMS_CLOUD.list_files('SQL_TCB_DIR');
spool off
# cleanup the bucket
set serveroutput on
begin
FOR file_list IN (select object_name from DBMS_CLOUD.list_files('SQL_TCB_DIR'))
LOOP
dbms_output.put_line('File=>' || file_list.object_name);
dbms_cloud.delete_file('SQL_TCB_DIR',file_list.object_name);
END LOOP;
end;
/
# import the TCB to a VM
Steps to import Testcase:
SQL Testcase generated files cannot be imported into an ADW or ATP environment, so this has to be tried in an on-prem or a regular cloud environment.
Execute the following command after copying it to a directory and extracting the zip file.
Create user and grant dba role to user
connect tcb/tcb
create directory TCB_IMP_DIR as '<import_tcbdir>';
begin
dbms_sqldiag.import_sql_testcase(directory => 'TCB_IMP_DIR', filename =>'oratcb_8m04jyb9nwjv9main.xml');
end;
/
Check TCB_IMP_DIR for the import log to validate whether the import was successful.
}}}
The most important thing for me about this ''test case'' feature is the ability to get a Visual SQL Tuning (VST) diagram with DB Optimizer... usually I would need to be connected to the instance itself to generate a VST... but with this test case feature
...I just need to have an SQLT run on the problem query and pull the SQLT zip file
...then put it on my environment
...do the compare with the bad plan
...then run the test case of either the good or bad plan
...and while the query is running I can generate a VST!
A little background...
Each SQLTXPLAIN run (//START sqltxtract.sql SQL_ID//) packages everything in one zip file
{{{
[oracle@karl sqlt_s54491-good]$ du -sm sqlt_s54491.zip
47 sqlt_s54491.zip
}}}
and inside that zip file is the ''test case zip file'', which has a *_tc.zip extension (sqlt_s54491_tc.zip)
Note that this is different from the *_tcb.zip extension (sqlt_s54491_tcb.zip), which uses the ''DBMS_SQLDIAG TCB'' (test case builder) package
{{{
[oracle@karl sqlt_s54491-good]$ unzip -l sqlt_s54491.zip
Archive: sqlt_s54491.zip
Length Date Time Name
--------- ---------- ----- ----
104557695 11-27-2011 02:36 sqlt_s54491_10053_explain.trc
2587 11-27-2011 09:18 sqlt_s54491_driver.zip
660035 11-27-2011 02:35 sqlt_s54491_lite.html
28711 11-27-2011 09:18 sqlt_s54491_log.zip
34589692 11-27-2011 02:35 sqlt_s54491_main.html
316568 11-27-2011 09:18 sqlt_s54491_opatch.zip
40487 11-27-2011 02:35 sqlt_s54491_p73080644_sqlprof.sql
21526 11-27-2011 02:35 sqlt_s54491_readme.html
461504 11-27-2011 02:35 sqlt_s54491_sql_detail_active.html
385239 11-27-2011 02:35 sqlt_s54491_sql_monitor_active.html
424680 11-27-2011 02:35 sqlt_s54491_sql_monitor.html
57213 11-27-2011 02:35 sqlt_s54491_sql_monitor.txt
333117 11-27-2011 02:35 sqlt_s54491_sta_report_mem.txt
1488 11-27-2011 02:35 sqlt_s54491_sta_script_mem.sql
616556 11-27-2011 02:36 sqlt_s54491_tcb.zip <-- ''DBMS_SQLDIAG TCB'' (test case builder) package zip file
8659 11-27-2011 09:18 sqlt_s54491_tc_script.sql
7793 11-27-2011 09:18 sqlt_s54491_tc_sql.sql
4254816 11-27-2011 09:18 sqlt_s54491_tc.zip <-- ''test case'' zip file
33308174 11-27-2011 09:18 sqlt_s54491_trc.zip
--------- -------
180076540 19 files
}}}
So... I usually do the following steps to drill down on a query... but for this tiddler I will only discuss the steps that are highlighted
- install SQLTXPLAIN
- pull bad and good SQLT runs
- do compare
''- generate local test case
- execute query
- db optimizer''
! Create local test case using SQLT files
* Express and Custom are the two modes of doing this
* SOURCE and TARGET systems should be similar.
''Express mode'' @@''<-- preferred and much easier!''@@
<<<
Unzip sqlt_s54491_tc.zip in server.
''unzip sqlt_s54491_tc.zip -d TC54491''
''cd TC54491''
Review and execute tc.sh from OS or sqltc.sql from sqlplus.
''Option 1: ./tc.sh''
Option 2: sqlplus / as sysdba @sqltc.sql
<<<
''Custom mode''
<<<
Unzip sqlt_s54491_tc.zip in server.
unzip sqlt_s54491_tc.zip -d TC54491
cd TC54491
Create test case user and schema objects connecting as SYSDBA:
sqlplus / as sysdba
START sqlt_s54491_metadata.sql
Purge pre-existing s54491 from local SQLT repository connected as SYSDBA:
START sqlt_s54491_purge.sql
Import SQLT repository for s54491 (provide SQLTXPLAIN password):
HOS imp sqltxplain FILE=sqlt_s54491_exp.dmp TABLES=sqlt% IGNORE=Y
Restore CBO schema object statistics for test case user connected as SYSDBA:
START sqlt_s54491_restore.sql
Restore CBO system statistics connected as SYSDBA:
START sqlt_s54491_system_stats.sql
Set the CBO environment connecting as test case user TC54491 (include optional test case user suffix):
CONN TC54491/TC54491
START sqlt_s54491_set_cbo_env.sql
Execute test case:
START tc.sql
<<<
! Edit the q.sql and execute the query
* running ''tc.sh'' will error on the last part because the q.sql that contains the query still has the application schema name... so edit/replace it with the TC user, then log in as the TC user, see the steps below:
{{{
sed -i 's/SYSADM/TC54491/g' q.sql
sqlplus TC54491/TC54491
@sqlt_s54491_set_cbo_env.sql
@q.sql
@plan.sql
}}}
! Generate VST with DB Optimizer
''Example usage - DB Optimizer example - 3mins to 10secs''
https://www.evernote.com/shard/s48/sh/070796b4-673e-418f-9ff9-d362ae9941dd/9636928fbcf370e0dcf9fb940cc5a9c8
How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors [ID 232963.1]
How to Run SQL Testcase Builder from ADRCI [Video] [ID 1174105.1]
''NOTE: if the incident is not an actionable incident, no testcase can be created''
{{{
adrci
show home
set homepath diag/rdbms/orcl11g/ORCL11G
show problem
show incident
dde show actions -- status 'Ready' which means there is a recommended action SQLTCB ( SQL Testcase Builder ) ready to be executed.
DDE EXECUTE ACTION INCIDENT <incident_id> ACTIONNAME <action_name> INVOCATION <invocation_id> -- Action requires login system/<passwd>
dde show actions -- status 'SUCCESS' which means the recommended action is successfully run on the incident
* All the SQL testcase files will be stored in the incident directory of the respective incident under the ADR HOME.
]$ cd u01/Oracle/diag/rdbms/orcl11g/ORCL11G
]$ cd incident
]$ cd incdir_10979
* Along with incident dumps, there will be a directory created by name SQLTCB. Under this directory all the testcase files will be stored.
]$ cd SQLTCB_1
]$ ls -lrt
README.txt
oratcb1_009A00A40001ol.xml
oratcb1_009A00A40001sql.xml
oratcb1_009A00A40001dpexp.sql
oratcb1_009A00A40001dpexp.log
oratcb1_009A00A40001dpexp.dmp
oratcb1_009A00A40001xpls.sql
oratcb1_009A00A40001ssimp.sql
oratcb1_009A00A40001dpimp.sql
oratcb1_009A00A40001xpl.txt
oratcb1_009A00A40001xplo.sql
oratcb1_009A00A40001xplf.sql
oratcb1_009A00A40001main.xml
* These files will be included in the package created for this incident, along with incident dumps, trace files and alert.log.
Example:
adrci> ips pack incident 10979 in /tmp
Generated package ORA600qks_20100809120430_COM_1.zip
}}}
kdiff3 - my standard tool http://kdiff3.sourceforge.net/
diffuse - diff 4 or more files at a time http://diffuse.sourceforge.net/
beyond compare - output diff reports to html https://www.scootersoftware.com/download.php
diffmerge - diff tool I use for git tower https://sourcegear.com/diffmerge/
! editors
http://notepad-plus-plus.org/
http://csved.sjfrancke.nl/#csvuni
! Book notes and resources
Forecasting Oracle Performance - formulas and samples https://www.evernote.com/shard/s48/sh/106315ca-ccc8-483e-b4da-04539f427e3a/b3f25784814460e09d11d7c14910d144
Basic Forecasting Statistics - https://www.evernote.com/shard/s48/sh/cb434aee-f000-4991-832f-ab4505c338c5/d931b189a7ead88a53210b820c96d73e
Statistical Analysis Template - https://www.evernote.com/shard/s48/sh/47200301-7a8f-4995-86a8-dfd4e49bb0a8/d8a3b4cbbf1f0178f08a0112e53c1599
! Statistics without tears notes
The Mind Map of Statistics without Tears https://www.evernote.com/shard/s48/sh/c2321be9-26fd-4bd4-94f2-7a05cdf0e693/770ef7f6eced4d40579557bde14e1cfb
Chapter 3 - Summarizing our data https://www.evernote.com/shard/s48/sh/3d6fa83d-ab2e-4bc1-a571-173543ee7fef/fe81f3e0b0a44a9ee2c271d7af465fae
Chapter 4 - Shape of a distribution https://www.evernote.com/shard/s48/sh/8a02ddb2-8b81-49fb-a1f7-7e9491d7aa45/3d592caafdb2bd30fd12b093ed9eeacc
Chapter 5 - Estimates and Inferences https://www.evernote.com/shard/s48/sh/22a32adf-aa36-41d7-a7d6-9fb5a52b45ea/5b3cb0935231be3303e7f19b36d5103b
Chapter 6 - Comparing samples https://www.evernote.com/shard/s48/sh/97a2722a-d7a6-4afa-9c1a-09d5cfbf3a64/1f0efcf6196323cf7e29542c86d15734
Chapter 8 - Correlation & Regression https://www.evernote.com/shard/s48/sh/b9dd9732-c909-4599-bac1-9f0b778f8bab/d0b588f517f568cf87a67633d6402387
! The end product
This is the FOP book concepts applied in real world -> [[r2project]]
the r2toolkit is here http://karlarao.wordpress.com/scripts-resources/
<<showtoc>>
! tricks
!! aegcfg
<<<
all the available conf flags are in sql/edb360_00_config.sql
<<<
!!! change span of dates to be collected
{{{
In the sqldb360/sql directory you will find aegcfg.sql file.
In it you can set the exact dates to collect.
-- range of dates below supersede history days when values are other than YYYY-MM-DD
-- default values YYYY-MM-DD mean: use edb360_conf_days
-- actual values sample: 2016-04-26
DEF edb360_conf_date_from = 'YYYY-MM-DD';
DEF edb360_conf_date_to = 'YYYY-MM-DD';
}}}
!!! skip ESP
{{{
-- use only if you have to skip esp and escp (value --skip--) else null
--DEF skip_esp_and_escp = '--skip--';
DEF skip_esp_and_escp = '';
}}}
!!! filter DBID from AWR dump or AWR warehouse
{{{
Put this in the custom config file.
-- use if you need tool to act on a dbid stored on AWR, but that is not the current v$database.dbid
DEF edb360_config_dbid = '';
The edb360 tool chooses the DBID from v$database when none is specified.
After that, all queries use that DBID if the column is present, so my guess is that you will have a mix of results where some will belong to your host db and others will be from the imported db.
My suggestion is to look at the queries in the resulting htmls to verify they are using the correct dbid.
}}}
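To see which DBIDs are actually available in the AWR repository before setting edb360_config_dbid, you can query the standard AWR dictionary view:
{{{
-- list the DBIDs present in this AWR repository
select distinct dbid, db_name, host_name
from dba_hist_database_instance
order by dbid;
}}}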
!! special DLP selector that correlates to AWR reports
{{{
after the edb360 is generated
you place the .js in the directory.
you just need to rename the file to edb360_dlp.js
}}}
.
http://www.performatune.com/en/plsql-vs-java-vs-c-vs-python-for-cpu-intensive-tasks-architectural-decision/
lessen think time (chatter) in between database calls
check hotsos training days 2017
Artificial General Intelligence (AGI)
Artificial Superintelligence (ASI)
Artificial Narrow Intelligence (ANI)
https://www.aware.co.th/three-tiers-ai-automating-tomorrow-agi-asi-ani/
https://www.coresystems.net/blog/the-difference-between-artifical-intelligence-artifical-general-intelligence-and-artifical-super-intelligence
<<showtoc>>
[img[ http://i.imgur.com/MCyZO0W.png ]]
! Jeffrey Ryan
jeff.a.ryan@gmail.com
https://cran.r-project.org/web/packages/xts/xts.pdf
https://www.r-bloggers.com/author/jryan/
http://blog.revolutionanalytics.com/2011/07/the-r-files-jeff-ryan.html
https://www.rdocumentation.org/search?q=Jeffrey+Ryan
http://stackoverflow.com/users/594478/jeff-r
<<showtoc>>
! list of time series databases
https://www.misfra.me/2016/04/09/tsdb-list/
http://db-engines.com/en/ranking/time+series+dbms
https://en.wikipedia.org/wiki/Time_series_database
http://db-engines.com/en/system/Axibase%3BInfluxDB
! discussions
https://www.misfra.me/2015/07/20/time-series-databases-discussion-notes/
http://jmoiron.net/blog/thoughts-on-timeseries-databases/
https://www.misfra.me/2015/07/03/solving-time-series-storage-with-brute-force/
! videos
Creating a Time-Series Database in MySQL https://www.youtube.com/watch?v=ldyKJL_IVl8&t=643s
Optimizing a Relational Database for Time Series Data - TempoDB Whiteboard Session https://www.youtube.com/watch?v=X4TfveHzBwM&t=223s
other tempoiq videos https://www.youtube.com/channel/UCE-3TAXm54rbIQCQZ2aBN3g
Time Series Databases in the Upside-down Internet #WhiteboardWalkthrough https://www.youtube.com/watch?v=SgD3RD2Shg4&t=168s
Time series analysis with Spark and Cassandra https://www.youtube.com/watch?v=uERFXD1Nj6E&t=17s
https://speakerdeck.com/preetamjinka
Catena: A High-Performance Time-Series Storage Engine https://www.youtube.com/watch?v=wZ1y6cu1-h8
! papers
https://en.wikipedia.org/wiki/Temporal_database
management of time series data http://www.canberra.edu.au/researchrepository/file/82315cf7-7446-fcf2-6115-b94fbd7599c6/1/full_text.pdf
Temporal Databases and Data Warehousing http://www.dcs.ed.ac.uk/home/tz/phd/thesis/node11.htm , http://www.dcs.ed.ac.uk/home/tz/phd/thesis/node2.htm
The data warehouse - and object oriented temporal database http://www.essi.upc.edu/~aabello/publications/03.temporal.pdf
! articles
* SCD
https://en.wikipedia.org/wiki/Slowly_changing_dimension
Tips for an incremental load with Slowly Changing Dimensions? https://www.reddit.com/r/tableau/comments/2v0jmm/tips_for_an_incremental_load_with_slowly_changing/
Practical Incremental View Maintenance https://www.cs.duke.edu/courses/spring02/cps296.1/lectures/08-view.pdf
* incremental refresh data warehouse
incremental refresh data warehouse siss https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=incremental%20refresh%20data%20warehouse%20siss
example code incremental ETL https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=example+code+incremental+ETL&start=20
http://stackoverflow.com/questions/14261152/how-to-test-incremental-data-in-etl
https://developer.gooddata.com/article/incremental-data-loading
real time incremental data load https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=real+time+incremental+data+load&start=20
Best Practices for Real-time Data Warehousing http://www.oracle.com/technetwork/middleware/data-integrator/overview/best-practices-for-realtime-data-wa-132882.pdf
http://sqlmag.com/sql-server-integration-services/combining-cdc-and-ssis-incremental-data-loads
http://www.jamesserra.com/archive/2011/08/methods-for-populating-a-data-warehouse/
http://sqlmag.com/blog/shortening-load-times-sql-server-dws
ETL Evolution for Real-Time Data Warehousing http://proc.conisar.org/2012/pdf/2214.pdf
Near Real-Time with Traditional Data Warehouse Architectures: Factors and How-to https://eden.dei.uc.pt/~pnf/publications/NearRTDW.pdf
Design consideration for incremental data refresh in real time data warehouses https://www.google.com/search?q=Design+consideration+for+incremental+data+refresh+in+real+time+data+warehouses&oq=Design+consideration+for+incremental+data+refresh+in+real+time+data+warehouses&aqs=chrome..69i57.308j0j4&sourceid=chrome&ie=UTF-8#q=Design+consideration+for+incremental+data+refresh+in+real+time+data+warehouses&start=10
* hadoop SCD
Using Apache NiFi for Slowly Changing Dimensions on Hadoop Part 1 https://community.hortonworks.com/articles/48843/slowly-changing-dimensions-on-hadoop-part-1.html
http://hortonworks.com/blog/four-step-strategy-incremental-updates-hive/
* R merge
http://stackoverflow.com/questions/2232699/how-to-do-a-data-table-merge-operation
http://stackoverflow.com/questions/1299871/how-to-join-merge-data-frames-inner-outer-left-right/9652931#9652931
Using R for Iterative and Incremental Processing http://shivaram.org/publications/presto-hotcloud12.pdf
http://stackoverflow.com/questions/33491073/subsetting-data-from-r-data-frame-using-incremental-variable-names
http://stackoverflow.com/questions/35370165/incremental-ids-in-a-r-data-frame
* mysql SCD
http://stackoverflow.com/questions/33455310/mysql-merge-incremental-data-into-full-set
* outlier detection of temporal data
outlier detection of temporal data https://www.siam.org/meetings/sdm13/gupta.pdf
Pattern Recognition in Time Series http://cs.gmu.edu/~jessica/publications/astronomy11.pdf
* storing time series data with data model
Storing time-series data, relational or non http://stackoverflow.com/questions/4814167/storing-time-series-data-relational-or-non
http://www.softwaregems.com.au/Documents/Documentary%20Examples/sysmon%20Public.pdf
http://www.softwaregems.com.au/Documents/Documentary%20Examples/sequoia%20091019%20Server%20Public.pdf
http://www.softwaregems.com.au/Documents/Documentary%20Examples/IDEF1X%20Notation.pdf
! discussions
http://stackoverflow.com/questions/800331/why-do-we-need-a-temporal-database
http://stackoverflow.com/questions/1517387/whats-difference-between-a-temporal-database-and-a-historical-archive-database
InfluxDB, ElasticSearch? Real question is what do you want to store and what do you plan on doing with it after? https://twitter.com/andydavies/status/452132123764617216
Thoughts on Time-series Databases (jmoiron.net) https://news.ycombinator.com/item?id=9805742
http://www.xaprb.com/blog/2014/03/02/time-series-databases-influxdb/
http://www.xaprb.com/blog/2014/06/08/time-series-database-requirements/
http://www.xaprb.com/blog/2015/10/16/time-series-tagging/
Making Cassandra Perform as a TSDB https://signalfx.com/blog/making-cassandra-perform-as-a-tsdb/
http://www.slideshare.net/jericevans/time-series-data-with-apache-cassandra
https://berlinbuzzwords.de/sites/berlinbuzzwords.de/files/media/documents/eric_evans_-time_series_data_with_apache_cassandra.pdf
PhilDB the time series database with built-in change logging https://peerj.com/articles/cs-52.pdf
http://www.ryandaigle.com/a/time-series-db-design-with-influx
How InfluxDB Stores Data http://grisha.org/blog/2015/03/20/influxdb-data/
http://www.dbta.com/Editorial/Trends-and-Applications/The-Unique-Database-Requirements-of-Time-Series-Data-52035.aspx
https://blog.tempoiq.com/blog/2013/04/22/optimizing-relational-databases-for-time-series-data-time-series-database-overview-part-3
! books
Developing Time-Oriented Database Applications in SQL (The Morgan Kaufmann Series in Data Management Systems) https://www.amazon.com/dp/1558604367/?tag=stackoverfl08-20
! videos
Temporal in SQL Server 2016 https://channel9.msdn.com/Shows/Data-Exposed/Temporal-in-SQL-Server-2016
! time series mongodb
https://www.mongodb.com/blog/post/time-series-data-and-mongodb-part-2-schema-design-best-practices
https://medium.com/oracledevs/build-a-go-lang-based-rest-api-on-top-of-cassandra-3ac5d9316852
https://www3.nd.edu/~dial/publications/xian2018restful.pdf
! time series emberjs
https://discuss.emberjs.com/t/time-series-data-in-ember-data/6945
<<<
We just create simple model objects that extend Ember.Object and have fetch() methods on them. When you call fetch(), it looks at its state and sends an XHR to the appropriate endpoint for the specified time range.
The data structures we’re operating on are quite complex, but the data handling is quite simple—perhaps surprisingly so.
<<<
https://www.skylight.io/
https://discuss.emberjs.com/t/chart-datapoints-in-ember-data-ala-skylight/7622/2
https://github.com/wrobstory/mcflyin
https://github.com/Addepar/ember-charts/issues/44
https://opensource.addepar.com/ember-charts/#/documentation
! time series application django
https://www.google.com/search?q=time+series+application+django&ei=3n26XLqANK-zggfIx5P4Dw&start=10&sa=N&ved=0ahUKEwi654aZyt3hAhWvmeAKHcjjBP8Q8tMDCIoB&biw=1339&bih=798
https://github.com/anthonyalmarza/django-timeseries
https://stackoverflow.com/questions/25212009/django-postgres-large-time-series
https://www.reddit.com/r/django/comments/9m2hnv/storing_user_specific_timeseries_data_in_django/
https://www.quora.com/What-is-the-best-timeseries-database-to-use-with-a-Django-project
https://medium.com/python-data/time-series-aggregation-techniques-with-python-a-look-at-major-cryptocurrencies-a9eb1dd49c1b
! time series flask
https://smirnov-am.github.io/2018/11/26/flask-streaming.html
https://geoyi.org/2017/06/18/stock-api-development-with-python-bokeh-and-flask-to-heroku/
https://thingsmatic.com/2016/07/10/a-web-app-for-iot-data-visualization/
!! flask influxdb
https://www.influxdata.com/blog/getting-started-python-influxdb/
https://github.com/btashton/flask-influxdb
https://github.com/LarsBergqvist/IoT_Charts
https://www.datadoghq.com/pricing/
timed out waiting for input: auto-logout
{{{
cat /etc/profile.d/autologout.sh
export TMOUT=36000
}}}
{{{
If you want to disable timeout, then reset variable TMOUT=0
$ export TMOUT=0
}}}
https://dbalifeeasy.com/2021/10/28/timed-out-waiting-for-input-auto-logout/
<<showtoc>>
https://www.timescale.com/
Embrace the PostgreSQL ecosystem
! documentation
https://docs.timescale.com/latest/main
https://docs.timescale.com/latest/development
! learn (check resources section)
https://www.timescale.com/learn/
! comparison with other time series database
https://db-engines.com/en/system/InfluxDB%3BOpenTSDB%3BTimescaleDB
! installation
!! Error installing timescaledb from source with Postgres.app (need manual compile)
https://github.com/timescale/timescaledb/issues/156
.
http://forums.t-mobile.com/t5/Prepaid/Wanting-to-Switch-to-Tmobile-prepaid-but-not-sure-how-iphone/td-p/603901
http://forums.t-mobile.com/t5/Prepaid/Questions-Regarding-Pay-As-You-Go-and-iPhone-3g/m-p/517547/highlight/true#M3771
http://www.intomobile.com/2008/07/22/tether-your-iphone-3g-to-your-laptop-use-your-iphone-3g-as-a-wireless-modem/
http://www.intomobile.com/2008/10/21/iphone-wifi-router-pda-net-makes-iphone-3g-tethering-easier-than-ever/
http://www.ehow.com/how_7651607_tether-tmobile.html
http://www.namshik.com/how-to-unlimited-browsing-on-t-mobile-prepaid/
http://forums.macrumors.com/archive/index.php/t-741370.html
http://www.ehow.co.uk/how_8462084_use-card-jailbroken-iphone-3g.html
''to check voicemail from another phone''
- dial the number
- press *
- enter password
- press #
- # to skip messages
''to configure MMS''
http://forums.macrumors.com/showthread.php?t=821026
http://m.youtube.com/#/watch?v=8ga-aZQ_H1s&desktop_uri=%2Fwatch%3Fv%3D8ga-aZQ_H1s
''support''
http://www.t-mobile.com/Contact.aspx
<<<
sinfuliphonerepo.com
tmofile ios4 mms fix package
<<<
http://www.mrexcel.com/forum/excel-questions/38085-add-one-month.html
{{{
=DATE(YEAR(A1),MONTH(A1)+1,DAY(A1)) adds 1 month
}}}
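Note (standard Excel behavior): DATE(YEAR(A1),MONTH(A1)+1,DAY(A1)) rolls over past month end, so Jan 31 + 1 month becomes Mar 3; the built-in =EDATE(A1,1) also adds 1 month but clamps to the last day of the month instead (Jan 31 becomes Feb 28).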
https://blog.dataloop.io/top10-open-source-time-series-databases
http://www.xaprb.com/blog/2014/03/02/time-series-databases-influxdb/
https://github.com/influxdata/influxdb-comparisons
https://github.com/influxdata/influxdb-r
<<showtoc>>
! summary of the issue:
This client has primary (XD10) and standby (XD11) Exadata environments; GoldenGate is used to replicate data to the standby site. Every 6 months they alternate switching between the two environments. The problem here was that when they switched to XD11, the batch run of CA1 ran longer, from 22mins to ~2.5hours
! summary of the fix:
Added the NL_SJ hint inside the correlated subquery to make it operate as a nested loop semi-join.
This forces the plan to always use nested loops, and we avoided creating a SQL profile/baseline.
After the fix was integrated into the code, the new elapsed time was 19mins, down from ~2.5hours.
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/116030850-7fece180-a62a-11eb-89e9-832dd628d37c.png ]]
! XD10 cluster: 22mins (fast performance)
This shows a consistent elapsed time of 22mins (Nov 16, 17, 18 at 9am)
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/115981471-143f4180-a562-11eb-9e98-dc7c72ac57be.png]]
Drilling down on the top level SQL_ID 1nhgrtxz5fjx5 (the procedure of the batch job) and the driver SQL_ID 18qdxm4pkyguk
15mins spent by driver SQL_ID 18qdxm4pkyguk
ONLY 1min spent on INDEX RANGE SCAN by SQL_ID 18qdxm4pkyguk
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/115976322-d11da800-a53a-11eb-8234-a6569d9dc5a7.png]]
This shows the detailed data of that bar chart associated with the INDEX RANGE SCAN; it just confirms that SQL_ID 18qdxm4pkyguk did that operation
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/115976433-10002d80-a53c-11eb-87a4-4ca202bfffe6.png]]
! XD11 cluster: ~2.5hours (bad performance)
This shows a consistent elapsed time of ~2.5hours (March 15, 16, 17)
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/116026097-f9330700-a61f-11eb-84ec-e5bf6a6d8bda.png]]
Drilling down on the ~2.5hours, the driver SQL_ID 18qdxm4pkyguk is dragging the total elapsed time of the batch. This is because the plan changed from an index range scan to a fast full scan.
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/116027231-4c0dbe00-a622-11eb-8d5e-bf22813c5b5b.png]]
On this graph just focus on the read_gb row; the sustained red bar is the INDEX FAST FULL SCAN IO by SQL_ID 18qdxm4pkyguk.
The PL/SQL package/procedure performance is getting dragged by 18qdxm4pkyguk because it's the slowest SQL.
That's why it's 22mins (index range scan - selective index blocks, PHV 2662027970) vs ~2.5hours (index fast full scan, PHV 1973837680)
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/116028549-62694900-a625-11eb-9cc5-bc71d8c56915.png]]
! the fix
The correlated subquery is where the SQL is having issues. The two tables (tracked_offender_profile and tracked_offender_device) are getting accessed through the correlated subquery more or less row by row, and then you have this date filter with a function in it, which is another issue.
{{{
-- this date_utilities_pkg.Convert_to_gmt (tracked_offender_device.rec_end_dt, :B5) > :B4)
FROM tracked_offender_profile OFFENDER,
AND EXISTS (SELECT 'ACTIVE'
FROM tracked_offender_device
WHERE tracked_offender_device.ct_tracked_offender_id =
OFFENDER.ct_tracked_offender_id
AND date_utilities_pkg.Convert_to_gmt (
tracked_offender_device.rec_end_dt, :B5) >
:B4)
}}}
The bottleneck, plan_line_id 69
{{{
| 65 | HASH JOIN | | 276 | 51336 | 129 (1)| 00:00:01 |
| 66 | NESTED LOOPS | | 37 | 5735 | 125 (1)| 00:00:01 |
| 67 | NESTED LOOPS | | 37 | 5735 | 125 (1)| 00:00:01 |
| 68 | HASH JOIN RIGHT SEMI | | 37 | 4884 | 88 (2)| 00:00:01 |
| 69 | INDEX STORAGE FAST FULL SCAN | AK2_TRACKED_OFFENDER_DEVICE | 521 | 6773 | 11 (10)| 00:00:01 |
| 70 | HASH JOIN | | 498 | 59262 | 77 (0)| 00:00:01 |
}}}
What's happening here, I think, is that the date filter is not really doing anything for the optimizer because of the function; it then uses that AK2 index because of ct_tracked_offender_id, and since CA1 has a lot of rows the optimizer chooses a fast full scan, whereas it chooses a range scan for the smaller schemas. The AK2 index contains the ct_tracked_offender_id and rec_end_dt columns (sketched below).
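For reference, based only on the column list above, the AK2 index would look something like this (hypothetical DDL reconstructed from the columns mentioned; the real definition may differ):
{{{
-- hypothetical reconstruction of AK2_TRACKED_OFFENDER_DEVICE
create index ak2_tracked_offender_device
on tracked_offender_device (ct_tracked_offender_id, rec_end_dt);
}}}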
!! the complexity and decision behind the fix
Given that we have 50+ parsing schemas and only CA1 is having this issue, we decided to fix the code of the package being used by CA1. At first I suggested creating a SQL Profile and then issuing ALTER SESSION SET SQLTUNE_CATEGORY = 'CA1_PLAN'; when CA1 runs its batch, so the other parsing schemas would not be affected. But that's a more complicated solution. We went the easier route and just put the hint in the subquery to make it operate as a nested loop semi-join.
!! Here's the final SQL
{{{
SELECT /*+ QB_NAME(first_union) */ DISTINCT OFFENDER.ct_tracked_offender_id,
OFFENDER.offender_first_name,
OFFENDER.offender_last_name,
OFFENDER.spvsn_sch_end_dt,
vt_user_profile.ct_user_id OFFICER_ID,
app_user.user_first_name OFFICER_FIRST_NAME,
app_user.user_last_name OFFICER_LAST_NAME,
ori.ori_txt,
OFFENDER.to_risk_level_id,
tracked_offender_risk_level.to_risk_level_sort_id,
tracked_offender_risk_level.to_risk_level_name,
vt_user_profile.user_login
FROM vt_user_profile,
user_notification,
ori,
vt_common.t_notification_type,
tracked_offender_profile OFFENDER,
vt_user.app_user,
tracked_offender_risk_level
WHERE OFFENDER.ct_user_id = vt_user_profile.ct_user_id
AND ori.ct_ori_id = OFFENDER.ct_ori_id
AND t_notification_type.by_offender_flag = :B3
AND vt_user_profile.ct_user_id = user_notification.ct_user_id
AND user_notification.notification_type_id =
t_notification_type.notification_type_id
AND t_notification_type.immediate_alert_flag = :B2
AND app_user.user_id = vt_user_profile.app_user_id
AND vt_user_profile.ct_user_id = :B1
AND OFFENDER.to_risk_level_id =
tracked_offender_risk_level.to_risk_level_id
AND EXISTS (SELECT /*+ QB_NAME(first_inner) NL_SJ */ 'ACTIVE'
FROM tracked_offender_device
WHERE tracked_offender_device.ct_tracked_offender_id =
OFFENDER.ct_tracked_offender_id
AND date_utilities_pkg.Convert_to_gmt (
tracked_offender_device.rec_end_dt, :B5) >
:B4)
UNION
SELECT /*+ QB_NAME(second_union) */ DISTINCT OFFENDER.ct_tracked_offender_id,
OFFENDER.offender_first_name,
OFFENDER.offender_last_name,
OFFENDER.spvsn_sch_end_dt,
OFFICER.ct_user_id OFFICER_ID,
app_user.user_first_name OFFICER_FIRST_NAME,
Nvl (app_user.user_last_name, 'UNASSIGNED') OFFICER_LAST_NAME,
ori.ori_txt,
OFFENDER.to_risk_level_id,
tracked_offender_risk_level.to_risk_level_sort_id,
tracked_offender_risk_level.to_risk_level_name,
OFFICER.user_login
FROM vt_user_profile,
user_notification,
ori,
vt_common.t_notification_type,
tracked_offender_profile OFFENDER,
user_additional_offender,
vt_user_profile OFFICER,
vt_user.app_user,
tracked_offender_risk_level
WHERE OFFENDER.ct_tracked_offender_id =
user_additional_offender.ct_tracked_offender_id
AND user_additional_offender.ct_user_id = vt_user_profile.ct_user_id
AND ori.ct_ori_id = OFFENDER.ct_ori_id
AND t_notification_type.by_offender_flag = :B3
AND OFFENDER.ct_user_id = OFFICER.ct_user_id(+)
AND vt_user_profile.ct_user_id = user_notification.ct_user_id
AND user_notification.notification_type_id =
t_notification_type.notification_type_id
AND t_notification_type.immediate_alert_flag = :B2
AND app_user.user_id = OFFICER.app_user_id
AND vt_user_profile.ct_user_id = :B1
AND OFFENDER.to_risk_level_id =
tracked_offender_risk_level.to_risk_level_id
AND EXISTS (SELECT /*+ QB_NAME(second_inner) NL_SJ */ 'ACTIVE'
FROM tracked_offender_device
WHERE tracked_offender_device.ct_tracked_offender_id =
OFFENDER.ct_tracked_offender_id
AND date_utilities_pkg.Convert_to_gmt (
tracked_offender_device.rec_end_dt, :B5) >
:B4)
UNION
SELECT /*+ QB_NAME(third_union) */ DISTINCT OFFENDER.ct_tracked_offender_id,
OFFENDER.offender_first_name,
OFFENDER.offender_last_name,
OFFENDER.spvsn_sch_end_dt,
OFFICER.ct_user_id OFFICER_ID,
app_user.user_first_name OFFICER_FIRST_NAME,
Nvl (app_user.user_last_name, 'UNASSIGNED') OFFICER_LAST_NAME,
ori.ori_txt,
OFFENDER.to_risk_level_id,
tracked_offender_risk_level.to_risk_level_sort_id,
tracked_offender_risk_level.to_risk_level_name,
OFFICER.user_login
FROM vt_user_profile,
user_notification,
vt_common.t_notification_type,
tracked_offender_profile OFFENDER,
ori,
ori_parent,
vt_user_profile OFFICER,
vt_user.app_user,
tracked_offender_risk_level
WHERE OFFENDER.ct_ori_id = ori.ct_ori_id
AND OFFENDER.ct_ori_id = ori_parent.child_id
AND ori_parent.parent_id = vt_user_profile.ct_ori_id
AND t_notification_type.by_ori_flag = :B3
AND OFFENDER.ct_user_id = OFFICER.ct_user_id(+)
AND vt_user_profile.ct_user_id = user_notification.ct_user_id
AND user_notification.notification_type_id =
t_notification_type.notification_type_id
AND t_notification_type.immediate_alert_flag = :B2
AND app_user.user_id = OFFICER.app_user_id
AND vt_user_profile.ct_user_id = :B1
AND OFFENDER.to_risk_level_id =
tracked_offender_risk_level.to_risk_level_id
AND EXISTS (SELECT /*+ QB_NAME(third_inner) NL_SJ */ 'ACTIVE'
FROM tracked_offender_device
WHERE tracked_offender_device.ct_tracked_offender_id =
OFFENDER.ct_tracked_offender_id
AND date_utilities_pkg.Convert_to_gmt (
tracked_offender_device.rec_end_dt, :B5) >
:B4)
ORDER BY ori_txt,
officer_last_name,
officer_first_name,
user_login,
to_risk_level_sort_id,
offender_last_name,
offender_first_name
/
}}}
<<showtoc>>
! howto
official doc http://git-tower.com/help/mac
learning videos http://www.git-tower.com/learn/git/videos
! install
http://www.git-tower.com/blog/diff-tools-mac/
http://www.sourcegear.com/diffmerge/downloaded.php
<<showtoc>>
! disclaimer
Benchmarking SQL-on-Hadoop Systems: TPC or not TPC?
https://pdfs.semanticscholar.org/3c92/54ee62c8cab6a341c3ca19f277df245c194a.pdf
! tpc-H
https://husnusensoy.wordpress.com/tag/tpc-h/
http://people.apache.org/~gopalv/tpch-plans/
https://github.com/rxin/TPC-H-Hive
https://github.com/hortonworks/hive-testbench-carter-fork
https://github.com/hortonworks/hive-testbench
https://community.hortonworks.com/questions/82918/how-to-perform-tpch-on-hive.html
https://github.com/t3rmin4t0r?utf8=%E2%9C%93&tab=repositories&q=tp&type=&language=
! tpc-DS
! tpc-H vs tpc-DS
https://www.theregister.co.uk/2012/05/04/is_tpc_ds_worthy_of_weaponising/
<<<
Behold the TPC-DS, a new weapon in the global Big Data war
Seeking the 'game-proof' benchmark
There isn’t anything inherently evil about industry standard benchmarks, just as there isn’t anything inherently evil about guns. You know the saying: “Guns don’t kill people – people kill people.” (What about bullets? No, they’re not inherently evil either.)
But in the hands of motivated vendors, benchmarks are weapons to be wielded against competitors with great gusto. So why am I writing about benchmarks? It’s because the Transaction Processing Council (TPC) has released a new major benchmark, TPC-DS, which aims to provide a level playing field for vendors warring over their Big Data prowess.
El Reg's Timothy Prickett Morgan wrote about this new benchmark at length here. In his inimitable way, Tim captured the meat of the story, plus the marrow, gristle, hooves and hide of it too.
I talked with TPC reps last week about TPC-DS and benchmarks in general – learning quite a bit along the way about how TPC-DS is different (and better) than what we’ve seen before. In my years on the vendor side of the industry, there was nothing better than using a shiny new TPC-C or TPC-D score to smite the living hell out the server competition. But my efforts to learn enough to explain the results in customer presentations taught me a bit about how vendors perform benchmarks. It’s like learning how sausage is made, but with more acronyms.
There were lots and lots of ways to put together stellar benchmark results on system configurations that a real customer would never buy, with software optimised in ways that won’t work in production environments, all offered at prices they’ll never see.
But benchmarks are a necessary evil, like the Irish and the Dutch. There are differences between systems and the ways vendors implement similar or even identical technology. Two Xeon processors might perform exactly the same when they roll out of Intel’s fab. But after they’ve been incorporated into various ‘industry standard’ systems, get their software loads and start running apps, there will be differences in performance and certainly price/performance.
In a perfect world, customers would be able to do ‘bake-offs’ or ‘try and buy’ exercises before issuing a PO. But these are time-consuming and expensive for both the vendor and customer. It’s worth the effort for a very large or mission-critical installation, but not so much when the deal size and importance is lesser. This is where standard benchmarks come in: they give buyers a few more data points to use in their decision making process.
Benchmark organisations like TPC and SPEC (Standard Performance Evaluation Council) arose from industry need for standardized and valid comparisons between systems and solutions. These consortiums are primarily vendor-funded, and vendors certainly play a role in helping shape the benchmarks.
If a benchmark's results are printed in a forest and no one reads them, does it exist?
Building standard benchmarks isn’t easy. Tests need to stress the right parts of the system/solution – mimicking real-world processing as much as possible. They need to be complex enough to stress even the largest system, but at the same time scalable and reasonably easy and inexpensive to run. Safeguards have to be built in so that vendors can’t rig the results or game the tests. And, finally, the rules need to be documented and enforced.
But for a benchmark to be truly successful, it has to be used by vendors and customers alike.
What jumped out during my briefing with TPC was the size and complexity of TPC-DS. Lots of tables, lots of data, and lots of operations. It’s much more complete and complex than TPC-H, with 99 queries compared to only 22 for the venerable H.
One of the newest wrinkles with DS is that the ‘ad hoc’ queries are truly ad hoc. In TPC-D and, to a lesser extent, TPC-H, ad hoc queries could be anticipated and planned for by canny benchmark engineers. With DS, the ad hoc queries are randomized, and there are too many potential permutations to allow for pre-positioning data accurately.
The TPC-DS ‘number’ is a bit complicated to understand – it’s a combination of the single user time to run through 99 queries, multiple users running through 99 queries, and database load times. A valid TPC-DS run has to have the single user run results plus a multi-user run with at least four streams.
There isn’t a maximum number of streams (which is the same as users) that need to be tested. There also is no minimum response time goal that needs to be satisfied in the multi-user runs, like in other benchmarks. The metric is heavily influenced by the multi-user results, rewarding scalability and optimised use of shared resources.
Given that there aren’t maximum response time rules, vendors will end up doing lots of TPC-DS multi-user runs in order to find the sweet spot where they’re supporting a large number of users and achieving their best overall score. The executive summary of the mandatory benchmark disclosure document shows the numbers that went into the overall metric, including number of users supported, response times, and the like.
To me, TPC-DS is a big leap forward. I’m no benchmarking expert, and nothing is totally foolproof, but this looks to be a highly ‘game-proof’ benchmark. It also does a much better job of capturing the complexity of modern business processing to a much greater extent than its predecessors.
From a customer perspective, it’s definitely worth consulting TPC-DS before making that next Big Data solution buy. Now we wait and see if the vendors are going to run it through some gear – and thus weaponise it. ®
<<<
! tpcx-hs
https://www.slideshare.net/insideHPC/introducing-the-tpcxhs-benchmark-for-big-data
! others
http://blog.atscale.com/how-different-sql-on-hadoop-engines-satisfy-bi-workloads
https://databricks.com/blog/2017/07/12/benchmarking-big-data-sql-platforms-in-the-cloud.html
https://zeepabyte.com/blog/index.php/2017/08/02/using-tpc-h-metrics-to-explain-the-three-fold-benefit-of-cascade-analytics-system-i/
! tpmC_20120927.csv
''-- computation using cores'' (USE THIS by default); the first awk projects the columns of interest, the second prepends tpmC per core ($2/$8) to each row
{{{
cat tpc.csv | awk -F',' '{ print $2",",$4",",$5",",$6",",$7",",$8",",$11",",$14",",$16",",$21"," }' | grep -v ", ," | grep -v "System, tpmC, Price/Perf," > compute.csv
awk -F',' '{print $2/$8",", $0}' compute.csv > tpmC.csv
}}}
''-- computation using threads''
{{{
cat tpc.csv | awk -F',' '{ print $2",",$4",",$5",",$6",",$7",",$8",",$11",",$15",",$16",",$21"," }' | grep -v ", ," | grep -v "System, tpmC, Price/Perf," > compute.csv
awk -F',' '{print $2/$8",", $0}' compute.csv > tpmC.csv
}}}
! tpmC_20130407.csv
TPC added two new columns to the csv file, shifting the column positions used in the awk filters
{{{
-- new (cores)
cat tpc.csv | awk -F',' '{ print $4",",$6",",$7",",$8",",$9",",$10",",$13",",$16",",$18",",$23"," }' | grep -v ", ," | grep -v "System, tpmC, Price/Perf," > compute.csv
awk -F',' '{print $2/$8",", $0}' compute.csv > tpmC.csv
-- new (threads)
cat tpc.csv | awk -F',' '{ print $4",",$6",",$7",",$8",",$9",",$10",",$13",",$17",",$18",",$23"," }' | grep -v ", ," | grep -v "System, tpmC, Price/Perf," > compute.csv
awk -F',' '{print $2/$8",", $0}' compute.csv > tpmC.csv
}}}
trace events can be found in this file
/u01/app/oracle/product/11.2.0/dbhome_1/rdbms/mesg/oraus.msg
http://oraclue.files.wordpress.com/2011/03/oracle_diagnostic_events_in_11g1.pdf
''event reference''
http://www.juliandyke.com/Diagnostics/Events/EventReference.html
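For example, the standard on/off event syntax (10046 shown here; any event number listed in oraus.msg works the same way):
{{{
alter session set events '10046 trace name context forever, level 12';
-- ... run the statements to trace ...
alter session set events '10046 trace name context off';
}}}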
Options for Connecting to Foreign Data Stores and Non-Oracle Databases
Doc ID: Note:233876.1
Tuning Generic Connectivity And Gateways
Doc ID: Note:230543.1
ORA-2062, ORA-28500 USING HS TO ACCESS DB2/400 FROM ODBC
https://metalink.oracle.com/metalink/plsql/f?p=130:15:5414202594548419714::::p15_database_id,p15_docid,p15_show_header,p15_show_help,p15_black_frame,p15_font:Bug,1325253,1,0,1,helvetica
How to Resolve Common Errors Encountered while using Transparent Gateways or Generic Connectivity
Doc ID: Note:234517.1
Fast Connection Failover (FCF) Test Client Using 11g JDBC Driver and 11g RAC Cluster
Doc ID: Note:566573.1
How to Configure Generic Connectivity (HSODBC) on Linux 32 bit using DB2Connect
Doc ID: Note:375624.1
Which JDBC Drivers To Use In AS/400 V5R2 And V5R3 ?
Doc ID: Note:423964.1
Using Oracle Transparent Gateway for DB2/400 with Independent ASPs
Doc ID: Note:356446.1
How to Connect to Informix Dynamic Server using Generic Connectivity
Doc ID: Note:363716.1
Errors Using Generic Connectivity to Connect to DB2 on AS400 - ORA-28500, SQL State:S1000, SQL Code: -7008
Doc ID: Note:295041.1
Capturing from a IBM DB2 Database on AS/400 via ODBC Fails without any Error Message
Doc ID: Note:233702.1
ALL Oracle Transparent Gateway Products 3.x
Doc ID: Note:162617.1
Updating AS400 Files Using Generic Connectivity HSODBC Gives Error ORA-28500 And SQLCODE:-7008
Doc ID: Note:398417.1
Where is Transparent Gateway for DB/2 on OWB2.1 and Oracle 8.1.6?
Doc ID: Note:131323.1
Configuring and Using Oracle Linked Servers
Doc ID: Note:445810.1
JDBC drivers for DB2 UDB
Doc ID: Note:423916.1
How to setup generic connectivity (HSODBC) for 32 bit Windows (Windows NT, Windows 2000, Windows XP, Windows 2003)
Doc ID: Note:109730.1
How to access ODBC datasource inside OWB
Doc ID: Note:102537.1
QUICK START GUIDE: WIN NT - Generic Connectivity using ODBC
Doc ID: Note:114820.1
QUICK START GUIDE: UNIX - Generic Connectivity using ODBC
Doc ID: Note:115098.1
Is Generic Connectivity Available On The LINUX Or Windows 64-bit Platforms?
Doc ID: Note:361676.1
! background
some background narrative about the "real CPU" accounting in AIX https://www.evernote.com/shard/s48/sh/4d399b0b-d853-43b2-a674-a54e475165ae/55ede8cd0a38544697236dd93c86de5e
! details
{{{
So the AIX box has 8 physical CPUs… now it’s a bit tricky to get the real CPU% in AIX..
If the database shows you this cpu_count...
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
First you have to determine the CPUs of the machine
$ prtconf
System Model: IBM,8204-E8A
Machine Serial Number: 10F2441
Processor Type: PowerPC_POWER6
Processor Implementation Mode: POWER 6
Processor Version: PV_6_Compat
Number Of Processors: 8
Processor Clock Speed: 4204 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 2 nad0019aixp21
Memory Size: 21248 MB
Good Memory Size: 21248 MB
Platform Firmware level: Not Available
Firmware Version: IBM,EL350_132
Console Login: enable
Auto Restart: true
Full Core: false
$ lsattr -El proc0
frequency 4204000000 Processor Speed False
smt_enabled true Processor SMT enabled False
smt_threads 2 Processor SMT threads False
state enable Processor state False
type PowerPC_POWER6 Processor type False
$ uname -M
IBM,8204-E8A
$ lsdev -Cc processor
proc0 Available 00-00 Processor
proc2 Available 00-02 Processor
proc4 Available 00-04 Processor
proc6 Available 00-06 Processor
proc8 Available 00-08 Processor
proc10 Available 00-10 Processor
proc12 Available 00-12 Processor
proc14 Available 00-14 Processor
$ lscfg -vp |grep -ip proc |grep "PROC"
2 WAY PROC CUOD :
2 WAY PROC CUOD :
2 WAY PROC CUOD :
2 WAY PROC CUOD :
$ lparstat -i
Node Name : hostexample
Partition Name : hostexample
Partition Number : 2
Type : Shared-SMT
Mode : Uncapped
Entitled Capacity : 2.30
Partition Group-ID : 32770
Shared Pool ID : 0
Online Virtual CPUs : 8
Maximum Virtual CPUs : 8
Minimum Virtual CPUs : 1
Online Memory : 21247 MB
Maximum Memory : 40960 MB
Minimum Memory : 256 MB
Variable Capacity Weight : 128
Minimum Capacity : 0.10
Maximum Capacity : 8.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 8
Active Physical CPUs in system : 8
Active CPUs in Pool : 8
Shared Physical CPUs in system : 8
Maximum Capacity of Pool : 800
Entitled Capacity of Pool : 740
Unallocated Capacity : 0.00
Physical CPU Percentage : 28.75%
Unallocated Weight : 0
Then, execute lparstat…
•	The ent 2.30 is the entitled CPU capacity
•	The psize is the # of physical CPUs in the shared pool
•	physc 4.42 means the CPU usage went above the entitled capacity because the LPAR is “Uncapped”.. so to get the real CPU% just do 4.42/8 = 55% utilization
•	That 55% utilization applies whether you count the 8 physical CPUs or the 16 logical CPUs… it’s just the percentage used, so I put 60% on the prov worksheet
•	physc will show the same number as the AAS CPU (from ASH), which I think is easier to get for large multi-LPAR environments and just as definitive (a sketch follows after this listing)
$ lparstat 1 10000
System configuration: type=Shared mode=Uncapped smt=On lcpu=16 mem=21247 psize=8 ent=2.30
%user %sys %wait %idle physc %entc lbusy vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
91.4 7.6 0.8 0.3 3.94 171.1 29.9 4968 1352
92.0 6.9 0.7 0.4 3.76 163.4 26.2 4548 1054
93.1 6.0 0.5 0.3 4.42 192.3 33.2 4606 1316
91.3 7.5 0.7 0.5 3.74 162.6 25.6 5220 1191
93.4 5.7 0.6 0.3 4.07 176.9 28.7 4423 1239
93.1 6.0 0.6 0.4 4.05 176.0 29.4 4709 1164
92.3 6.7 0.6 0.5 3.46 150.2 24.8 4299 718
92.2 6.9 0.6 0.4 3.69 160.6 27.9 4169 973
91.9 7.3 0.5 0.3 4.06 176.5 33.2 4248 1233
}}}
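On the AAS CPU point above, a minimal sketch (assuming the standard 1-second ASH sampling) that approximates physc from ASH over the last 10 minutes:
{{{
-- AAS on CPU ~= ON CPU samples / elapsed seconds
select count(*) / (10*60) aas_cpu
  from gv$active_session_history
 where session_state = 'ON CPU'
   and sample_time > sysdate - 10/(60*24);
}}}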
! other references
talks about different POWER versions, dynamic SMT config and physc, the PURR-SPURR ratio via a new column named "nsp", and energy (the equivalent of turbo boost in Intel)
Understanding Processor Utilization on Power Systems - AIX https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Not+AIX/page/Understanding+Processor+Utilization+on+Power+Systems+-+AIX
rusage, physc, and PURR scale down factor
https://ardentperf.com/2016/07/01/understanding-cpu-on-aix-power-smt-systems/
{{{
-- SQL Test Case
-------------------
select to_char(sysdate, 'YY/MM/DD HH24:MI:SS') AS "START" from dual;
col time1 new_value time1
col time2 new_value time2
select to_char(sysdate, 'SSSSS') time1 from dual;
set arraysize 5000
set echo off verify off
-- define the variables in SQLPlus
variable PQ2 varchar2(32)
variable PQ1 varchar2(32)
-- set the bind values
begin :PQ2 := '06/01/2011'; :PQ1 := '06/30/2011';
end;
/
-- Set statistics level to high
alter session set statistics_level = all;
-- alter session set current_schema = sysadm;
-- alter session set "_serial_direct_read"=ALWAYS;
-- ensure direct path read is done
-- alter session force parallel query;
-- alter session force parallel ddl;
-- alter session force parallel dml;
-- alter session set optimizer_index_cost_adj = 150 ;
-- alter session set "_b_tree_bitmap_plans"=false;
-- alter session set "_optimizer_cost_based_transformation" = on;
-- alter session set "_gby_hash_aggregation_enabled" = true;
-- alter session set "_unnest_subquery" = false;
-- alter session set "_optimizer_max_permutations"=80000;
-- alter session set plsql_optimize_level=3;
-- alter session set "_optim_peek_user_binds"=false;
-- alter session set "_optimizer_use_feedback" = false;
@mystat.sql
set serveroutput off termout off
-- THE SQL STATEMENT
-------------------------- PUT YOUR SQL STATEMENT HERE: START --------------------------
SELECT /*+ MONITOR GATHER_PLAN_STATISTICS */
...
FROM ...
WHERE ...
AND "PS_CBTA_PROD_TD_R_V1"."ACCOUNTING_DT" BETWEEN TO_DATE(:PQ2, 'MM/DD/YYYY') AND TO_DATE(:PQ1, 'MM/DD/YYYY')
/
-------------------------- PUT YOUR SQL STATEMENT HERE: END --------------------------
set termout on
select prev_sql_id p_sqlid from v$session where sid=sys_context('userenv','sid');
@mystat.sql
set termout on
select to_char(sysdate, 'YY/MM/DD HH24:MI:SS') AS "END" from dual;
select to_char(sysdate, 'SSSSS') time2 from dual;
select &&time2 - &&time1 total_time from dual;
select '''END''' as "END" from dual;
-------------------
}}}
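Since the harness runs with GATHER_PLAN_STATISTICS and prints the sql_id via prev_sql_id, a natural follow-up (not part of the script above) is to pull the row-source statistics:
{{{
-- actual vs estimated rows per plan line; plug in the sql_id printed above
select * from table(dbms_xplan.display_cursor('&sql_id', null, 'ALLSTATS LAST'));
}}}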
{{{
To get the definitive number of slaves requested, search for the section with kxfrAllocSlaves(begin); for the actual number of slaves allocated, search for the section with kxfpqcthr, and that's the final number
alter session set "_px_trace"=high,all;
alter session set TRACEFILE_IDENTIFIER = 'big_table_trace';
select count(*) from big_table;
alter session set "_px_trace"="none";
Also starting 11.2.0.4, v$sql_monitor exposes some resource manager related columns, which makes it easier to spot whether the right consumer group (correlate with the plan directives) or resource plan is in place.
select
a.SID,
a.RM_CONSUMER_GROUP RM_GROUP,
a.SQL_ID,
b.CHILD_NUMBER CH,
a.SQL_PLAN_HASH_VALUE PHV,
a.sql_exec_id SQL_EXEC,
a.INST_ID inst,
a.USERNAME,
a.PX_SERVERS_REQUESTED RQ,
a.PX_SERVERS_ALLOCATED ALLOC,
to_char(a.SQL_EXEC_START,'MMDDYY HH24:MI:SS') SQL_EXEC_START
from gv$sql_monitor a, gv$sql b
where a.sql_id = b.sql_id
and a.inst_id = b.inst_id
and a.sql_child_address = b.child_address
and username is not null
order by a.STATUS, a.SQL_EXEC_START, a.SQL_EXEC_ID, a.PX_SERVERS_ALLOCATED, a.PX_SERVER_SET, a.PX_SERVER# asc
/
}}}
! review PX execution environment setup
! sql monitor
How to Monitor the Parallel Statement Queue (Doc ID 1684985.1)
How to Collect SQL Monitor Output For Parallel Query (Doc ID 1604469.1)
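To go with those notes, the usual call for pulling the report once you have the sql_id (TYPE can also be HTML or ACTIVE):
{{{
select dbms_sqltune.report_sql_monitor(
         sql_id       => '&sql_id',
         type         => 'TEXT',
         report_level => 'ALL') report
  from dual;
}}}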
! px trace
{{{
alter session set "_px_trace"=high,all;
}}}
{{{
alter session set parallel_degree_policy = auto;
alter session set "_px_trace" = high , granule, high , execution;
OR
alter session set "_px_trace" = compilation, execution, messaging;
alter session set TRACEFILE_IDENTIFIER = 'in_memory';
select /*+ parallel(2) */ count(1) from bigemp;
-- switch off
alter session set "_px_trace"="none";
}}}
in-mem px kicked in if ''"useBufferCache:1" and "direct-read:disabled"''
resource mgr related will show ''Resource Manager reduced'' -> ''Granted 0''
How to Use _PX_TRACE to Check Whether Parallelism is Used (Doc ID 400886.1)
How to find if a table can be used for In Memory Parallel Execution (Doc ID 1273062.1)
Tracing Parallel Execution with _px_trace. Part I (Doc ID 444164.1)
<<<
some examples
Example#1 select statement with one slave set, degree of parallelism from the dictionary.
Example#2 select statement with two slave sets, degree of parallelism from the dictionary.
Example#3 select statement, too few slaves available to run query in parallel.
Example#4 requested Degree Of Parallelism (DOP) rounded down dynamically due to CPU load
Example#5 Degree Of Parallelism taken from a hint.
Example#6 Join of two tables each with different DOP setting, which DOP do we choose ?
Example#7 Parallel execution in RAC.
<<<
! wolfgang it
{{{
alter session set events '10053 trace name context forever, level 1';
}}}
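10053 only fires on a hard parse, so after setting the event run the statement with a changed comment to force one, then switch off and locate the trace file (v$diag_info is 11g+):
{{{
select /* wolfgang_1 */ count(*) from dual;   -- new comment text forces a hard parse
alter session set events '10053 trace name context off';
select value from v$diag_info where name = 'Default Trace File';
}}}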
! other xml
{{{
select t.*
from v$sql_plan v,
xmltable(
'/other_xml/info'
passing xmltype(v.other_xml)
columns
info_type varchar2(30) path '@type',
info_value varchar2(30) path '/info'
) t
where v.sql_id = '&sql_id'
and v.child_number = &child_number
and other_xml is not null;
}}}
! dplan
https://blogs.oracle.com/In-Memory/entry/oracle_database_in_memory_on
parallel scans affinitized for inmemory -> TABLE ACCESS INMEMORY FULL
parallel scans affinitized for buffer cache -> TABLE ACCESS STORAGE FULL
! references
https://martincarstenbach.wordpress.com/2014/10/09/little-things-worth-knowing-troubleshooting-parallel-statement-queueing/
Unix Processes, Pipes and Signals
Doc ID: 43545.1
Program Illustrating Signal Handling in SQL*Net & Interaction w/ User Handlers
Doc ID: 66510.1
SOLARIS: svrmgrl gives ORA-3113 on database startup with 8.1.7
Doc ID: 121773.1
Diagnosing and Troubleshooting Signal Errors
Doc ID: 215601.1
QREF: Unix Signals
Doc ID: 28779.1
SIGCLD Error Using Forked Processes in Pro*C
Doc ID: 145881.1
EXECUTABLES ABORTED/CORE DUMPED WITH SIGNAL
Doc ID: 1020404.102
How To Track Dead Connection Detection(DCD) Mechanism Without Enabling Any Client/Server Network Tracing
Doc ID: 438923.1
How to Check if Dead Connection Detection (DCD) is Enabled in 9i and 10g
Doc ID: 395505.1
ALERT: Hang During Startup/Shutdown on Unix When System Uptime > 248 Days
Doc ID: 118228.1
Troubleshooting 11i Known Forms Tech Stack Problems Which Result in Crashes
Doc ID: 121427.1
Startup Mount Hangs and Alert Log Shows no Error Messages
Doc ID: 130423.1
Using IOMeter with Virtual Iron <-- IO METER
Doc ID: 851562.1
UNIX: Connect Internal asks for password when TWO_TASK is set
Doc ID: 1066589.6
How I Resolved ORA-03135: connection lost contact
Doc ID: 465572.1
The use of Client signal handlers and Oracle BEQUEATH Connections
Doc ID: 452122.1
WAIT system call returns -1 in Pro*C, when connected to default database
Doc ID: 97060.1
Signal Handling With Oracle (Pro*C and bequeath_detach)
Doc ID: 74839.1
Troubleshooting FRM-92XXX Errors in Oracle Applications
Doc ID: 365529.1
ALERT: Pro*C program hangs during connect (lwp_mutex_lock) on multi CPU 2.5.1
Doc ID: 70684.1
ALERT:HP-UX: DBWR crashing with ORA-7404 on HP-UX
Doc ID: 62589.1
HANDLING UNIX SIGNALS WITH ORACLE PROGRAMS
Doc ID: 1877.1
-- TRUSS
How to Run Truss
Doc ID: 146428.1
-- LINUX DEBUGGING
http://www.cngr.cn/article/63/390/2006/2006071915630.shtml Linux Unicode
How to Process an Express Core File Using dbx, dbg, dde, gdb or ladebug
Doc ID: 118252.1
TECH: Getting a Stack Trace from a CORE file
Doc ID: 1812.1
-- MICROSOFT DEBUGGING
http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/2003_Server/Q_21584102.html <-- good stuff to get started on a dump file
http://www.mcse.ms/message1447663.html
http://forums.techarena.in/windows-server-help/57827.htm
http://support.microsoft.com/default.aspx?scid=kb;en-us;905539
http://www.microsoft.com/whdc/devtools/debugging/debugstart.mspx
http://www.microsoft.com/whdc/devtools/debugging/default.mspx
http://www.microsoft.com/whdc/devtools/debugging/install64bit.mspx
http://www.microsoft.com/whdc/devtools/debugging/symbolpkg.mspx#f
How to create dmp file and stack trace when agent coredumps in Windows
Doc ID: 602521.1
http://msdn.microsoft.com/en-us/library/cc266361.aspx
http://msdn.microsoft.com/en-us/library/cc266499.aspx
http://msdn.microsoft.com/en-us/library/cc266482.aspx
http://msdn.microsoft.com/en-us/library/cc266509.aspx
http://support.microsoft.com/kb/315263
http://msdn.microsoft.com/en-us/library/ms680369(VS.85).aspx
http://www.vistaheads.com/forums/microsoft-public-windows-vista-hardware-devices/311874-help-reading-minidump.html
http://www.ningzhang.org/ <--------- VERY USEFUL
http://www.ningzhang.org/2008/12/19/silverlight-debugging-with-windbg-and-sos/
http://www.ningzhang.org/2008/12/18/silverlight-debugging-with-visual-studio/
http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Windows/2000/Q_21543772.html
WINNT/WIN2000: Recreating Oracle Services and Instances from the Command Line
Doc ID: Note:61621.1
https://freekdhooge.wordpress.com/2007/12/25/can-a-select-block-a-truncate/
if a DML transaction holds the TM lock first, the truncate fails with
<<<
ORA-00054: resource busy and acquire with NOWAIT specified
<<<
if it's only a SELECT and the truncate happens first, the fetching session gets
<<<
ORA-08103: object no longer exists
<<<
otherwise the truncate will succeed, since a plain SELECT takes no blocking TM enqueue; a two-session sketch follows
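A minimal two-session sketch of the SELECT case (table name illustrative, per the blog above):
{{{
-- session 1: start a long-running query against a big table
select count(*) from big_t;        -- still running...
-- session 2: truncate while session 1 is mid-scan
truncate table big_t;              -- succeeds, a plain SELECT holds no TM enqueue
-- session 1 then fails with: ORA-08103: object no longer exists
}}}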
{{{
Begin Snap: 338 17-Jan-10 06:50:58 31 2.9
End Snap: 339 17-Jan-10 07:01:01 30 2.2
Elapsed: 10.05 (mins)
DB Time: 22.08 (mins)
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 25,946.47 6,162.81
Logical reads: 10,033.03 2,383.05
Block changes: 147.02 34.92
Physical reads: 9,390.59 2,230.46
Physical writes: 41.20 9.79
User calls: 19.14 4.55
Parses: 9.87 2.34
Hard parses: 0.69 0.16
Sorts: 3.05 0.72
Logons: 0.52 0.12
Executes: 95.91 22.78
Transactions: 4.21
user commits 2,526 4.2 1.0
user rollbacks 13 0.0 0.0
execute count 57,841 95.9 22.8
user calls 11,544 19.1 4.6
trx/sec = (user commits + user rollbacks) / elapsed
trx/sec = (2526 + 13) / 603 seconds = ~4.21
}}}
<<<
''Transactions per second'' are defined as the number of insert, update or delete statements committed and/or rolled back per second.
''Executes per second'' are defined as the number of SQL commands (insert, update, delete or select statements) executed per second.
''User calls per second'' are defined as the number of logins, parses, or execute calls per second (live query below).
<<<
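A quick way to sanity-check these counters live (v$sysstat values are cumulative since startup, so delta two samples to get per-second rates):
{{{
select name, value
  from v$sysstat
 where name in ('user commits','user rollbacks','execute count','user calls');
}}}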
https://blogs.warwick.ac.uk/java/entry/oracle_metric
{{{
Observation
based on Java 3 tier app using JDBC :
Execute Count = No. of select/insert/delete/update. For easier understanding, do not assume it includes recursive calls even though the doc indicates it does.
Recursive Calls = No. of update/insert/delete + Ceiling(Hard parse)
User Commit = min(No. of update/insert/delete)
User Calls = No. of select/insert/delete/update + N * Hard Parse (where N =2 when select and N =1 otherwise.)
}}}
! tableau awr_sysstat
{{{
[UCOMS]+[URS] // transactions = user commits + user rollbacks
}}}
http://www.centrexcc.com/Tuning%20by%20Cardinality%20Feedback.pdf
{{{
## desktopserver same socket test case, diff cores
numactl --physcpubind=3 dd if=/dev/zero bs=32M count=1024 2> /dev/null | numactl --physcpubind=1 dd of=/dev/null bs=32M
## desktopserver same socket test case, same cores
numactl --physcpubind=3 dd if=/dev/zero bs=32M count=1024 2> /dev/null | numactl --physcpubind=7 dd of=/dev/null bs=32M
## desktopserver pin on cpu3
numactl --physcpubind=3 ./saturate 1 dw
## desktopserver pin on cpu3 and cpu7 (same core)
numactl --physcpubind=3 ./saturate 1 dw ; numactl --physcpubind=7 ./saturate 1 dw
One Per Socket: No Frequency Boost
numactl --physcpubind=3 dd if=/dev/zero bs=32M count=1024 2> /dev/null | numactl --physcpubind=11 dd of=/dev/null bs=32M
Both Same Socket: Frequency Boost
numactl --physcpubind=3 dd if=/dev/zero bs=32M count=1024 2> /dev/null | numactl --physcpubind=0 dd of=/dev/null bs=32M
}}}
''tyler muth test case pigz'' https://twitter.com/kevinclosson/status/267028348130193408
{{{
oracle@desktopserver.local:/home/oracle/dba/benchmark/cputoolkit:dw
$ numactl --hardware
available: 1 nodes (0)
node 0 size: 16290 MB
node 0 free: 5146 MB
node distances:
node 0
0: 10
oracle@desktopserver.local:/home/oracle/dba/benchmark/cputoolkit:dw
$ numastat
node0
numa_hit 518542427
numa_miss 0
numa_foreign 0
interleave_hit 11819
local_node 518542427
other_node 0
}}}
<<<
{{{
select pin.ksppinm called, pcv.ksppstvl itsvalue
from sys.x$ksppi pin, sys.x$ksppcv pcv
where pin.inst_id=userenv('Instance')
and pcv.inst_id=pin.inst_id
and pin.indx=pcv.indx
and pin.ksppinm like '\_%' escape '\'
and pin.ksppinm like '%NUMA%'
/
}}}
by default oracle (rdbms side of things) doesn't have NUMA support enabled, and I think you have to apply a bug patch and set this parameter to true
_enable_NUMA_support FALSE
but apparently there's an ''if then else'' algorithm on this parameter to set it to TRUE http://kevinclosson.wordpress.com/2010/12/02/_enable_numa_support-the-if-then-else-oracle-database-11g-release-2-initialization-parameter/
Oracle RDBMS is not detecting NUMA Nodes [ID 868642.1]
Oracle NUMA Usage Recommendation [ID 759565.1]
http://www.linuxquestions.org/questions/linux-newbie-8/how-to-enable-numa-693264/
but since on the OS side NUMA is enabled by default (if you have a NUMA-capable CPU), when you bind a script to run on a specific CPU it actually runs on that CPU.. and as you can see below it also makes use of turbo boost
{{{
oracle@desktopserver.local:/home/oracle/dba/benchmark/cputoolkit/aas60:dw
$ numactl --physcpubind=5 ./saturate 1 dw
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
15.35 3.70 3.41 25.93 13.56 45.16 0.00 0.00 0.00 0.00 0.00
0 0 5.12 3.67 3.41 23.75 48.30 22.83 0.00 0.00 0.00 0.00 0.00
0 4 2.99 3.67 3.41 25.88 48.30 22.83 0.00 0.00 0.00 0.00 0.00
1 1 2.41 3.69 3.41 97.59 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 5 98.33 3.71 3.41 1.67 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 2 2.71 3.65 3.41 4.78 0.89 91.62 0.00 0.00 0.00 0.00 0.00
2 6 3.00 3.64 3.41 4.48 0.89 91.62 0.00 0.00 0.00 0.00 0.00
3 3 3.90 3.66 3.41 24.86 5.05 66.19 0.00 0.00 0.00 0.00 0.00
3 7 4.30 3.67 3.41 24.46 5.05 66.19 0.00 0.00 0.00 0.00 0.00
so this is single-node NUMA, which is effectively not NUMA... and I'm just using numactl to do things I could do with taskset or cgroups
1 node = 1 physical CPU? x2-8 has 8 physical CPUs... so that's 8-node NUMA
dash8 is an 8-node NUMA system
Yes dash 2 is 2-node... a cpu is a node... but with AMD MCM (multi-chip module) like the 6100 & 6200 there are two nodes per socket... read my blog posts on magny cours or 6100
}}}
<<<
! C-state definitions
http://download.intel.com/design/processor/applnots/320354.pdf
<<<
the sum of the C states is equal to 100%
''c0'' - core active, executing instructions
''c1'' - core halted but still powered (light sleep)
''c3'' - core inactive, deeper sleep
''c6'' - core inactive, deep power down
<<<
! cputoolkit 1 and 8 sessions (no numactl)
<<<
!! 1 session
{{{
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
15.60 3.67 3.41 31.91 14.83 37.66 0.00 0.00 0.00 0.00 0.00
0 0 8.26 3.66 3.41 32.58 54.99 4.18 0.00 0.00 0.00 0.00 0.00
0 4 5.43 3.66 3.41 35.41 54.98 4.18 0.00 0.00 0.00 0.00 0.00
1 1 3.59 3.62 3.41 10.25 1.96 84.20 0.00 0.00 0.00 0.00 0.00
1 5 2.57 3.62 3.41 11.27 1.96 84.20 0.00 0.00 0.00 0.00 0.00
2 2 98.58 3.67 3.41 1.42 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 6 0.14 3.61 3.41 99.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 3 3.19 3.63 3.41 32.17 2.37 62.27 0.00 0.00 0.00 0.00 0.00
3 7 3.06 3.64 3.41 32.30 2.37 62.27 0.00 0.00 0.00 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
17.37 3.66 3.41 34.02 8.97 39.64 0.00 0.00 0.00 0.00 0.00
0 0 28.65 3.69 3.41 34.51 25.38 11.46 0.00 0.00 0.00 0.00 0.00
0 4 7.80 3.66 3.41 55.36 25.38 11.46 0.00 0.00 0.00 0.00 0.00
1 1 12.95 3.60 3.41 15.43 3.93 67.69 0.00 0.00 0.00 0.00 0.00
1 5 5.32 3.63 3.41 23.06 3.93 67.69 0.00 0.00 0.00 0.00 0.00
2 2 75.81 3.66 3.41 2.63 0.69 20.87 0.00 0.00 0.00 0.00 0.00
2 6 0.37 3.60 3.41 78.07 0.69 20.87 0.00 0.00 0.00 0.00 0.00
3 3 5.68 3.61 3.41 29.89 5.87 58.56 0.00 0.00 0.00 0.00 0.00
3 7 2.36 3.63 3.41 33.22 5.87 58.56 0.00 0.00 0.00 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
15.23 3.71 3.41 25.31 4.08 55.37 0.00 0.00 0.00 0.00 0.00
0 0 98.25 3.71 3.41 1.75 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 4 3.14 3.70 3.41 96.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 1 3.88 3.65 3.41 8.63 2.50 84.99 0.00 0.00 0.00 0.00 0.00
1 5 1.42 3.64 3.41 11.09 2.50 84.99 0.00 0.00 0.00 0.00 0.00
2 2 3.58 3.64 3.41 5.59 1.14 89.69 0.00 0.00 0.00 0.00 0.00
2 6 2.19 3.66 3.41 6.99 1.14 89.69 0.00 0.00 0.00 0.00 0.00
3 3 3.91 3.68 3.41 36.57 12.71 46.81 0.00 0.00 0.00 0.00 0.00
3 7 5.48 3.67 3.41 35.00 12.71 46.81 0.00 0.00 0.00 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
16.01 3.69 3.41 25.51 9.21 49.27 0.00 0.00 0.00 0.00 0.00
0 0 64.19 3.71 3.41 8.64 15.99 11.18 0.00 0.00 0.00 0.00 0.00
0 4 2.87 3.65 3.41 69.96 15.99 11.18 0.00 0.00 0.00 0.00 0.00
1 1 5.07 3.65 3.41 14.02 5.02 75.90 0.00 0.00 0.00 0.00 0.00
1 5 3.27 3.64 3.41 15.82 5.02 75.90 0.00 0.00 0.00 0.00 0.00
2 2 3.51 3.65 3.41 39.74 1.19 55.56 0.00 0.00 0.00 0.00 0.00
2 6 38.03 3.68 3.41 5.22 1.19 55.56 0.00 0.00 0.00 0.00 0.00
3 3 6.76 3.64 3.41 24.13 14.67 54.44 0.00 0.00 0.00 0.00 0.00
3 7 4.34 3.64 3.41 26.56 14.67 54.44 0.00 0.00 0.00 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
15.66 3.68 3.41 26.45 13.14 44.75 0.00 0.00 0.00 0.00 0.00
0 0 6.15 3.66 3.41 20.92 35.95 36.99 0.00 0.00 0.00 0.00 0.00
0 4 3.68 3.65 3.41 23.39 35.95 36.99 0.00 0.00 0.00 0.00 0.00
1 1 2.60 3.64 3.41 11.48 3.21 82.71 0.00 0.00 0.00 0.00 0.00
1 5 2.54 3.64 3.41 11.54 3.21 82.71 0.00 0.00 0.00 0.00 0.00
2 2 4.69 3.68 3.41 95.31 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 6 98.45 3.68 3.41 1.55 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 3 3.14 3.64 3.41 24.14 13.40 59.32 0.00 0.00 0.00 0.00 0.00
3 7 4.02 3.66 3.41 23.27 13.40 59.32 0.00 0.00 0.00 0.00 0.00
}}}
<<<
<<<
!! 8 sessions
{{{
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
98.50 3.51 3.41 1.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 0 99.38 3.51 3.41 0.62 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0 4 99.94 3.51 3.41 0.06 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 1 93.22 3.51 3.41 6.78 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 5 99.88 3.51 3.41 0.12 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 2 99.88 3.51 3.41 0.12 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 6 97.18 3.51 3.41 2.82 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 3 98.61 3.51 3.41 1.39 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 7 99.93 3.51 3.41 0.07 0.00 0.00 0.00 0.00 0.00 0.00 0.00
}}}
<<<
! cputoolkit numactl test case
<<<
!! bind on 3
{{{
numactl --physcpubind=3 ./saturate 1 dw
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
15.41 3.66 3.41 30.67 23.83 30.09 0.00 0.00 0.00 0.00 0.00
0 0 5.45 3.65 3.41 36.30 44.71 13.54 0.00 0.00 0.00 0.00 0.00
0 4 4.36 3.64 3.41 37.39 44.71 13.54 0.00 0.00 0.00 0.00 0.00
1 1 4.46 3.64 3.41 23.11 47.38 25.04 0.00 0.00 0.00 0.00 0.00
1 5 5.22 3.64 3.41 22.36 47.38 25.04 0.00 0.00 0.00 0.00 0.00
2 2 3.11 3.63 3.41 11.88 3.24 81.77 0.00 0.00 0.00 0.00 0.00
2 6 2.17 3.61 3.41 12.82 3.24 81.77 0.00 0.00 0.00 0.00 0.00
3 3 98.37 3.67 3.41 1.63 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 7 0.14 3.62 3.41 99.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00
}}}
!! bind on 5
{{{
numactl --physcpubind=5 ./saturate 1 dw
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
15.32 3.70 3.41 25.98 9.30 49.40 0.00 0.00 0.00 0.00 0.00
0 0 4.67 3.67 3.41 25.89 31.69 37.74 0.00 0.00 0.00 0.00 0.00
0 4 4.07 3.67 3.41 26.49 31.69 37.74 0.00 0.00 0.00 0.00 0.00
1 1 2.33 3.70 3.41 97.67 0.00 0.00 0.00 0.00 0.00 0.00 0.00
1 5 98.36 3.71 3.41 1.64 0.00 0.00 0.00 0.00 0.00 0.00 0.00
2 2 2.75 3.65 3.41 3.97 0.33 92.94 0.00 0.00 0.00 0.00 0.00
2 6 2.08 3.65 3.41 4.65 0.33 92.94 0.00 0.00 0.00 0.00 0.00
3 3 3.56 3.67 3.41 24.35 5.16 66.93 0.00 0.00 0.00 0.00 0.00
3 7 4.70 3.67 3.41 23.21 5.16 66.93 0.00 0.00 0.00 0.00 0.00
}}}
!! bind on 3 and 7
{{{
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
28.33 3.68 3.41 17.26 20.02 34.39 0.00 0.00 0.00 0.00 0.00
0 0 8.39 3.65 3.41 23.23 31.73 36.65 0.00 0.00 0.00 0.00 0.00
0 4 5.36 3.65 3.41 26.25 31.73 36.65 0.00 0.00 0.00 0.00 0.00
1 1 7.81 3.66 3.41 35.15 47.42 9.62 0.00 0.00 0.00 0.00 0.00
1 5 4.01 3.67 3.41 38.94 47.42 9.62 0.00 0.00 0.00 0.00 0.00
2 2 3.13 3.63 3.41 4.66 0.94 91.27 0.00 0.00 0.00 0.00 0.00
2 6 1.41 3.63 3.41 6.39 0.94 91.27 0.00 0.00 0.00 0.00 0.00
3 3 98.26 3.68 3.41 1.74 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 7 98.26 3.68 3.41 1.74 0.00 0.00 0.00 0.00 0.00 0.00 0.00
}}}
<<<
! kevin's numactl test case
<<<
!! bind on 3 and 1
{{{
numactl --physcpubind=3 dd if=/dev/zero bs=32M count=1024 2> /dev/null | numactl --physcpubind=1 dd of=/dev/null bs=32M
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
19.82 1.74 3.41 60.78 8.83 10.57 0.00 0.00 0.00 0.00 0.00
0 0 11.69 1.70 3.41 46.76 28.72 12.83 0.00 0.00 0.00 0.00 0.00
0 4 8.94 1.73 3.41 49.51 28.72 12.83 0.00 0.00 0.00 0.00 0.00
1 1 46.93 1.70 3.41 50.49 0.08 2.50 0.00 0.00 0.00 0.00 0.00
1 5 4.88 2.95 3.41 92.54 0.08 2.50 0.00 0.00 0.00 0.00 0.00
2 2 9.10 1.69 3.41 57.47 6.47 26.96 0.00 0.00 0.00 0.00 0.00
2 6 10.01 1.78 3.41 56.57 6.47 26.96 0.00 0.00 0.00 0.00 0.00
3 3 63.38 1.69 3.41 36.58 0.03 0.01 0.00 0.00 0.00 0.00 0.00
3 7 3.64 1.75 3.41 96.32 0.03 0.01 0.00 0.00 0.00 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
19.61 1.66 3.41 58.00 9.47 12.92 0.00 0.00 0.00 0.00 0.00
0 0 11.33 1.72 3.41 44.88 33.65 10.14 0.00 0.00 0.00 0.00 0.00
0 4 8.63 1.66 3.41 47.57 33.65 10.14 0.00 0.00 0.00 0.00 0.00
1 1 46.70 1.64 3.41 51.00 0.05 2.26 0.00 0.00 0.00 0.00 0.00
1 5 3.86 1.64 3.41 93.83 0.05 2.26 0.00 0.00 0.00 0.00 0.00
2 2 10.29 1.67 3.41 46.28 4.18 39.25 0.00 0.00 0.00 0.00 0.00
2 6 8.70 1.76 3.41 47.87 4.18 39.25 0.00 0.00 0.00 0.00 0.00
3 3 63.44 1.65 3.41 36.52 0.02 0.02 0.00 0.00 0.00 0.00 0.00
3 7 3.91 1.86 3.41 96.06 0.02 0.02 0.00 0.00 0.00 0.00 0.00
}}}
<<<
<<<
!! bind on 3 and 7
{{{
numactl --physcpubind=3 dd if=/dev/zero bs=32M count=1024 2> /dev/null | numactl --physcpubind=7 dd of=/dev/null bs=32M
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
20.77 1.66 3.41 34.82 11.72 32.70 0.00 0.00 0.00 0.00 0.00
0 0 9.87 1.65 3.41 32.40 33.70 24.02 0.00 0.00 0.00 0.00 0.00
0 4 6.49 1.67 3.41 35.79 33.70 24.02 0.00 0.00 0.00 0.00 0.00
1 1 8.06 1.76 3.41 44.52 10.64 36.77 0.00 0.00 0.00 0.00 0.00
1 5 9.28 1.80 3.41 43.31 10.64 36.77 0.00 0.00 0.00 0.00 0.00
2 2 5.41 1.67 3.41 22.07 2.52 70.00 0.00 0.00 0.00 0.00 0.00
2 6 5.05 1.78 3.41 22.43 2.52 70.00 0.00 0.00 0.00 0.00 0.00
3 3 70.49 1.64 3.41 29.51 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 7 51.51 1.64 3.41 48.49 0.00 0.00 0.00 0.00 0.00 0.00 0.00
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
21.00 1.66 3.41 41.19 7.65 30.17 0.00 0.00 0.00 0.00 0.00
0 0 12.55 1.68 3.41 56.89 20.53 10.03 0.00 0.00 0.00 0.00 0.00
0 4 12.20 1.86 3.41 57.24 20.53 10.03 0.00 0.00 0.00 0.00 0.00
1 1 5.61 1.64 3.41 39.65 6.28 48.45 0.00 0.00 0.00 0.00 0.00
1 5 6.09 1.64 3.41 39.18 6.28 48.45 0.00 0.00 0.00 0.00 0.00
2 2 5.43 1.65 3.41 28.59 3.79 62.18 0.00 0.00 0.00 0.00 0.00
2 6 4.19 1.65 3.41 29.84 3.79 62.18 0.00 0.00 0.00 0.00 0.00
3 3 70.51 1.65 3.41 29.49 0.00 0.00 0.00 0.00 0.00 0.00 0.00
3 7 51.39 1.64 3.41 48.61 0.00 0.00 0.00 0.00 0.00 0.00 0.00
}}}
<<<
! slob numactl
<<<
{{{
$ while :; do numactl --physcpubind=3 ./runit.sh 0 1; done
TM Load Profile Per Second Per Transaction
----------------- -------------------- -------------- ---------------
02/10/13 14:06:31 DB Time 1.0
02/10/13 14:06:31 DB CPU 1.0
02/10/13 14:06:31 Redo size 71.0 4,264.0
02/10/13 14:06:31 Logical reads 1,004,749.7 60,345,265.0
02/10/13 14:06:31 Block changes .8 47.0
02/10/13 14:06:31 Physical reads .0 .0
02/10/13 14:06:31 Physical writes .0 .0
02/10/13 14:06:31 User calls 34.5 2,074.0
02/10/13 14:06:31 Parses 17.3 1,039.0
02/10/13 14:06:31 Hard Parses .8 47.0
02/10/13 14:06:31 Logons 1.3 77.0
02/10/13 14:06:31 Executes 3,918.3 235,331.0
02/10/13 14:06:31 Rollbacks .0
02/10/13 14:06:31 Transactions .0
02/10/13 14:06:31 Applied urec .0 .0
15 rows selected.
core CPU %c0 GHz TSC %c1 %c3 %c6 %c7 %pc2 %pc3 %pc6 %pc7
24.02 3.46 3.41 66.60 5.56 3.82 0.00 0.00 0.00 0.00 0.00
0 0 15.50 3.41 3.41 69.35 11.20 3.94 0.00 0.00 0.00 0.00 0.00
0 4 14.63 3.41 3.41 70.23 11.20 3.94 0.00 0.00 0.00 0.00 0.00
1 1 17.14 3.49 3.41 70.45 4.35 8.06 0.00 0.00 0.00 0.00 0.00
1 5 17.94 3.44 3.41 69.65 4.35 8.06 0.00 0.00 0.00 0.00 0.00
2 2 14.56 3.41 3.41 76.47 6.69 2.28 0.00 0.00 0.00 0.00 0.00
2 6 13.85 3.36 3.41 77.18 6.69 2.28 0.00 0.00 0.00 0.00 0.00
3 3 97.82 3.51 3.41 1.20 0.00 0.98 0.00 0.00 0.00 0.00 0.00
3 7 0.73 2.94 3.41 98.29 0.00 0.98 0.00 0.00 0.00 0.00 0.00
}}}
<<<
http://www.howtogeek.com/121650/how-to-secure-ssh-with-google-authenticators-two-factor-authentication/
{{{
-- create a simple test case
drop table t purge;
-- session 1
CREATE TABLE t (n NUMBER PRIMARY KEY);
VARIABLE n NUMBER
execute :n := 1
INSERT INTO t VALUES (:n);
-- session 2
VARIABLE n NUMBER
execute :n := 1
INSERT INTO t VALUES (:n);
}}}
{{{
-- create a simple test case
drop table t purge;
-- session 1
CREATE TABLE t (n NUMBER PRIMARY KEY);
VARIABLE n NUMBER
execute :n := 1
INSERT INTO t VALUES (:n);
commit;
}}}
{{{
session 1
15:47:55 KARLARAO@orcl> select * from t;
N
----------
1
15:48:00 KARLARAO@orcl> update t set n=1 where n=1;
1 row updated.
session 2
15:46:14 KARLARAO@orcl> select * from t;
N
----------
1
15:46:20 KARLARAO@orcl>
15:46:21 KARLARAO@orcl> commit;
Commit complete.
15:47:49 KARLARAO@orcl> update t set n=1 where n=1; <-- this hangs!
-- waits for row lock
SQL> SQL> Sampling SID all@* with interval 2 seconds, taking 1 snapshots...
-- Session Snapper v4.06 BETA - by Tanel Poder ( http://blog.tanelpoder.com ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)
----------------------------------------------------------------------------------------------------
Active% | INST | SQL_ID | SQL_CHILD | EVENT | WAIT_CLASS
----------------------------------------------------------------------------------------------------
100% | 1 | adbw30mjk2dn4 | 0 | enq: TX - row lock contention | Application
-- End of ASH snap 1, end=2015-02-05 15:50:20, seconds=2, samples_taken=11
PL/SQL procedure successfully completed.
-- waits for mode 6
15:48:44 SYS@orcl> col program format a40
15:49:33 SYS@orcl> col module format a20
15:49:33 SYS@orcl> col event format a30
15:49:33 SYS@orcl> select session_id, program, module, sql_id, event, mod(p1,16) as lock_mode, BLOCKING_SESSION from
15:49:33 2 v$active_session_history
15:49:33 3 where sample_time > sysdate - &minutes /( 60*24)
15:49:33 4 -- and lower(module) like 'ex_%'
15:49:33 5 and event like 'enq: T%';
Enter value for minutes: 1
old 3: where sample_time > sysdate - &minutes /( 60*24)
new 3: where sample_time > sysdate - 1 /( 60*24)
SESSION_ID PROGRAM                                  MODULE               SQL_ID        EVENT                           LOCK_MODE BLOCKING_SESSION
---------- ---------------------------------------- -------------------- ------------- ------------------------------ ---------- ----------------
        45 sqlplus@localhost.localdomain (TNS V1-V3 SQL*Plus             adbw30mjk2dn4 enq: TX - row lock contention            6               37
           )
        45 sqlplus@localhost.localdomain (TNS V1-V3 SQL*Plus             adbw30mjk2dn4 enq: TX - row lock contention            6               37
           )
... the same row repeats for every ASH sample while session 45 waits on blocker 37 (output trimmed) ...
58 rows selected.
}}}
! scripts
{{{
-- QUERY TX MODE from ASH
col program format a40
col module format a20
col event format a30
select TO_CHAR(sample_time,'MM/DD/YY HH24:MI:SS') TM, inst_id, session_id, BLOCKING_SESSION, machine, substr(module,1,20) module, sql_id, event, mod(p1,16) as lock_mode from
gv$active_session_history
where sample_time > sysdate - 1 /( 60*24)
and event like 'enq: T%';
-- RAC user locks tree
set lines 500
set pagesize 66
col inst format 99
col object_name format a20
col spid format a6
col type format a4
col machine format a30
SELECT TO_CHAR(SYSDATE,'MM/DD/YY HH24:MI:SS') TM, o.name object_name, u.name owner, lid.*
FROM (SELECT
s.inst_id inst, s.SID, s.serial#, p.spid, s.blocking_session blocking_sid, s.machine, NVL (s.sql_id, 0) sqlid, s.sql_hash_value sqlhv,
DECODE (l.TYPE,
'TM', l.id1,
'TX', DECODE (l.request,
0, NVL (lo.object_id, -1),
s.row_wait_obj#
),
-1
) AS object_id,
l.TYPE type,
DECODE (l.lmode,
0, 'NONE',
1, 'NULL',
2, 'ROW SHARE',
3, 'ROW EXCLUSIVE',
4, 'SHARE',
5, 'SHARE ROW EXCLUSIVE',
6, 'EXCLUSIVE',
'?'
) mode_held,
DECODE (l.request,
0, 'NONE',
1, 'NULL',
2, 'ROW SHARE',
3, 'ROW EXCLUSIVE',
4, 'SHARE',
5, 'SHARE ROW EXCLUSIVE',
6, 'EXCLUSIVE',
'?'
) mode_requested,
l.id1, l.id2, l.ctime time_in_mode,s.row_wait_obj#, s.row_wait_block#,
s.row_wait_row#, s.row_wait_file#
FROM gv$lock l,
gv$session s,
gv$process p,
(SELECT object_id, session_id, xidsqn
FROM gv$locked_object
WHERE xidsqn > 0) lo
WHERE l.inst_id = s.inst_id
AND s.inst_id = p.inst_id
AND s.SID = l.SID
AND p.addr = s.paddr
AND l.SID = lo.session_id(+)
AND l.id2 = lo.xidsqn(+)) lid,
SYS.obj$ o,
SYS.user$ u
WHERE o.obj#(+) = lid.object_id
AND o.owner# = u.user#(+)
AND object_id <> -1
/
}}}
new version (same lock tree, but using dba_objects/dba_users instead of sys.obj$/user$)
{{{
set lines 500
set pagesize 66
col inst format 99
col object_name format a20
col spid format a6
col type format a4
col machine format a30
col owner format a20
SELECT TO_CHAR(SYSDATE,'MM/DD/YY HH24:MI:SS') TM, u.username owner, o.object_name object_name, lid.*
FROM (SELECT
s.inst_id inst, s.SID, s.serial#, p.spid, s.blocking_session blocking_sid, s.machine, NVL (s.sql_id, 0) sqlid, s.sql_hash_value sqlhv,
DECODE (l.TYPE,
'TM', l.id1,
'TX', DECODE (l.request,
0, NVL (lo.object_id, -1),
s.row_wait_obj#
),
-1
) AS object_id,
l.TYPE type,
DECODE (l.lmode,
0, 'NONE',
1, 'NULL',
2, 'ROW SHARE',
3, 'ROW EXCLUSIVE',
4, 'SHARE',
5, 'SHARE ROW EXCLUSIVE',
6, 'EXCLUSIVE',
'?'
) mode_held,
DECODE (l.request,
0, 'NONE',
1, 'NULL',
2, 'ROW SHARE',
3, 'ROW EXCLUSIVE',
4, 'SHARE',
5, 'SHARE ROW EXCLUSIVE',
6, 'EXCLUSIVE',
'?'
) mode_requested,
l.id1, l.id2, l.ctime time_in_mode,s.row_wait_obj#, s.row_wait_block#,
s.row_wait_row#, s.row_wait_file#
FROM gv$lock l,
gv$session s,
gv$process p,
(SELECT object_id, session_id, xidsqn
FROM gv$locked_object
WHERE xidsqn > 0) lo
WHERE l.inst_id = s.inst_id
AND s.inst_id = p.inst_id
AND s.SID = l.SID
AND p.addr = s.paddr
AND l.SID = lo.session_id(+)
AND l.id2 = lo.xidsqn(+)) lid,
dba_objects o,
dba_users u
WHERE o.object_id(+) = lid.object_id
AND o.owner = u.username(+)
AND lid.object_id <> -1
/
}}}
! references
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-enqueues
p1,p2,p3 https://community.oracle.com/thread/924272
! some other stuff that can be done
get the distribution of bind variables when this locking mode 6 issue happens
the reason is there could be an outlier user who frequently keys in the problematic value, and that could be causing the issue
http://www.evernote.com/shard/s48/sh/a6abe10f-e5c3-4fb8-b538-bcd790da7a4b/e05df0b2ac17af54f99f374f9dfd9676
{{{
create table work (dummy number);
begin
-- 600/60=10 minutes of workload
for j in 1..600 loop
-- rate of 40 (20*2) tx per sec
for i in 1..20 loop
insert into work values(1);
commit;
delete work where rownum <2;
commit;
end loop;
dbms_lock.sleep(1);
end loop;
end;
/
}}}
{{{
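-- assumes a seed table exists, e.g.: create table t_commit (a number); insert into t_commit values (1); commit;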
ALTER SESSION SET COMMIT_WRITE = WAIT;
exec while true loop update t_commit set a=a+1 ; commit ; insert into t_commit values(1); commit; delete t_commit where rownum <2; commit; end loop;
}}}
! types of subqueries
https://docs.oracle.com/cd/B19306_01/server.102/b14200/queries007.htm
<<<
* scalar subquery - a subquery in the @@SELECT@@ list; when correlated, this is row-by-row processing
* inline view - a subquery in the @@FROM@@ clause of a SELECT statement is also called an inline view
* nested subquery - a subquery in the @@WHERE@@ clause of a SELECT statement is also called a nested subquery
A correlated subquery is one where a nested subquery references a column from a table referred to in a parent statement any number of levels above the subquery. The parent statement can be a SELECT, UPDATE, or DELETE statement in which the subquery is nested. A correlated subquery is evaluated once for each row processed by the parent statement. Oracle resolves unqualified columns in the subquery by looking in the tables named in the subquery and then in the tables named in the parent statement. See the sketch below.
<<<
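A minimal sketch of the three shapes, using the classic SCOTT @@EMP@@/@@DEPT@@ sample tables (table names are just for illustration):
{{{
-- scalar (correlated) subquery in the SELECT list: evaluated per row
select e.ename,
       (select d.dname from dept d where d.deptno = e.deptno) dname
from emp e;

-- inline view: subquery in the FROM clause
select d.dname, x.avg_sal
from (select deptno, avg(sal) avg_sal from emp group by deptno) x,
     dept d
where d.deptno = x.deptno;

-- nested (correlated) subquery in the WHERE clause
select e.ename, e.sal
from emp e
where e.sal > (select avg(e2.sal) from emp e2 where e2.deptno = e.deptno);
}}}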
! correlated subquery won't scale
<<<
• This is a pagination query that runs for 13-20 seconds and is executed about 100 times (or more) during the daytime
• PARSING_SCHEMA: SIGNON
• The bottleneck is the correlated subquery on the APP_CONNECTIONS table, which is accessed with a Full Table Scan. The MAX function on the LOGON_TS column results in a row-by-row operation, so the FTS is executed 604 times.
• This type of query won't scale: the more rows fetched, the longer it runs.
• The fix here is to move the correlated subquery into a WITH clause and execute it once, so the Full Table Scan is done only once instead of row by row, then join that result set back to the main query (see the sketch after this quote).
Correlated subqueries are generally a bad idea, since in many cases they result in one query per row, which won't scale. If I'm reading your query correctly though, you could simply join d twice with different join conditions to get x and y.
<<<
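A minimal sketch of that rewrite. APP_CONNECTIONS and LOGON_TS come from the case above; APP_SESSIONS and the join columns are hypothetical stand-ins for the real main query:
{{{
-- before: the MAX() correlated subquery drives a full scan per row
select s.user_id, s.session_id
from app_sessions s
where s.logon_ts = (select max(c.logon_ts)
                    from app_connections c
                    where c.user_id = s.user_id);

-- after: do the full scan once in a WITH clause, then join
with last_logon as (
  select user_id, max(logon_ts) logon_ts
  from app_connections
  group by user_id
)
select s.user_id, s.session_id
from app_sessions s, last_logon l
where s.user_id = l.user_id
and s.logon_ts = l.logon_ts;
}}}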
<<showtoc>>
! ubuntu vs rhel commands
https://help.ubuntu.com/community/SwitchingToUbuntu/FromLinux/RedHatEnterpriseLinuxAndFedora
! check version and kernel
{{{
ubuntu@ubuntu-talentbuddy:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04 LTS
Release: 14.04
Codename: trusty
ubuntu@ubuntu-talentbuddy:~$ uname -a
Linux ubuntu-talentbuddy 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
}}}
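For comparison, the RHEL/Oracle Linux equivalents (lsb_release often isn't installed there):
{{{
# RHEL / Oracle Linux
cat /etc/redhat-release     # or /etc/oracle-release
uname -a
}}}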
! references
<<showtoc>>
! hadoop
https://www.udemy.com/learn-hadoop-mapreduce-and-bigdata-from-scratch/#/
https://www.udemy.com/learn-hadoop-mapreduce-and-bigdata-from-scratch/?couponCode=KIRAN39#/
! travel
https://www.udemy.com/travel-hacking-like-a-wizard-fly-for-free/?dtcode=zJu0iBF1rgC4
! web scraping
https://www.udemy.com/learn-web-scraping-in-minutes/?dtcode=fUJ0s2X1rguh
! financial modeling
https://www.udemy.com/financial-and-valuation-modeling/?dtcode=bJEFpbM1rguh
https://www.udemy.com/2-hour-financial-model/?dtcode=jlD2mE81rgyI
https://www.udemy.com/u/corporatebridge/?active=tp&tp=2
! next
https://www.udemy.com/learn-nodejs-apis-fast-and-simple/
https://www.udemy.com/oracle-private-database-cloud/
https://www.udemy.com/rest-services-testing-in-60-minutes/
https://www.udemy.com/learn-api-technical-writing-2-rest-for-writers/
http://fritshoogland.wordpress.com/2012/07/23/using-udev-on-rhel-6-ol-6-to-change-disk-permissions-for-asm/
udev ASM - single path - https://www.evernote.com/shard/s48/sh/485425bc-a16f-4446-aebd-988342e3c30e/edc860d713dd4a66ff57cbc920b4a69c
http://www.oracle-base.com/blog/2012/03/16/oracle-linux-5-8-and-udev-issues/
Making permissions on EMC PowerPath devices persistent for CRS on Enterprise Linux
http://oracle11ggotchas.com/articles/Making%20permissions%20on%20EMC%20PowerPath%20devices%20persistent%20for%20CRS%20on%20Enterprise%20Linux.pdf
Comparing ASM to Filesystem in benchmarks [ID 1153664.1]
http://kevinclosson.wordpress.com/2007/02/11/what-performs-better-direct-io-or-direct-io-there-is-no-such-thing-as-a-stupid-question/
<<showtoc>>
! background
calibrate_io was used to compare the IOPS and bandwidth of an on-prem environment vs a DBCS (LVM) environment
<<<
The calibrate_io result is a measure of your block volume filesystem IOPS and I/O bandwidth capabilities. The CTAS test also measures IOPS/bandwidth, but it is a completely different workload.
The calibrate_io session below (unix_pid 3872) spawns the ORA_CS0x processes. These processes read the datafiles of the database to execute their asynchronous workload. This is different from the CTAS workload, where you read a table that sits on a tablespace that could be compressed or encrypted. That's why we need the output of mystat and session_event to get more details on the session running the CTAS.
The calibrate_io numbers you have on DBCS VM.Standard2.16 are close to what is published here https://docs.oracle.com/en-us/iaas/Content/Block/Concepts/blockvolumeperformance.htm
You can also specify a higher num_physical_disks value and see the effect; it will spawn more concurrent ORA_CS0x processes and push your disks harder. Check what different numbers give you, e.g.:
num_physical_disks => 5
num_physical_disks => 10
num_physical_disks => 20
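To get the mystat/session_event style detail mentioned above, a minimal sketch against @@v$session_event@@ (assumption: you know the SID of the CTAS session):
{{{
-- where the CTAS session spends its wait time
select event, total_waits,
       round(time_waited_micro/1e6, 2) seconds_waited
from v$session_event
where sid = &sid
order by time_waited_micro desc;
}}}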
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/156599268-5dcced84-96a0-4f91-bee8-17ef550917e1.png ]]
{{{
14:47:16 SYS@devcdb> 14:47:16 2 14:47:16 3 14:47:16 4 14:47:16 5
SID SERIAL# UNIX_PID
---------- ---------- ----------------------------
252 55455 3872
15:01:03 SYS@devcdb> SET SERVEROUTPUT ON
DECLARE
l_max_iops PLS_INTEGER;
l_max_mbps PLS_INTEGER;
l_actual_latency PLS_INTEGER;
BEGIN
DBMS_RESOURCE_MANAGER.calibrate_io (
num_physical_disks => 1,
max_latency => 20,
max_iops => l_max_iops,
max_mbps => l_max_mbps,
actual_latency => l_actual_latency);
DBMS_OUTPUT.put_line ('l_max_iops = ' || l_max_iops);
DBMS_OUTPUT.put_line ('l_max_mbps = ' || l_max_mbps);
DBMS_OUTPUT.put_line ('l_actual_latency = ' || l_actual_latency);
END;
/
15:01:18 SYS@devcdb> 15:01:18 2 15:01:18 3 15:01:18 4 15:01:18 5 15:01:18 6 15:01:18 7 15:01:18 8 15:01:18 9 15:01:18 10 15:01:18 11 15:01:18 12 15:01:18 13 15:01:18 14 15:01:18 15 15:01:18 16 15:01:18 17
pidstat -dl 1
03:01:27 PM UID PID kB_rd/s kB_wr/s kB_ccwr/s Command
03:01:28 PM 1000 4962 7021.36 0.00 0.00 ora_cs00_devcdb
03:01:28 PM 1000 4964 6757.28 0.00 0.00 ora_cs01_devcdb
}}}
<<<
! ORA_CS0x process perf flamegraph
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/156600113-3199efd0-0bb4-4d66-b1ef-4579eeaa238f.png ]]
! calibrate_io setup
{{{
15:00:12 SYS@devcdb>
15:00:43 SYS@devcdb>
SELECT d.name,
i.asynch_io
FROM v$datafile d,
v$iostat_file i
WHERE d.file# = i.file_no
AND i.filetype_name = 'Data File';
15:00:51 SYS@devcdb> 15:00:51 2 15:00:51 3 15:00:51 4 15:00:51 5 15:00:51 6
NAME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ASYNCH_IO
---------
/u02/app/oracle/oradata/DEVCDB/system01.dbf
ASYNC_OFF
/u02/app/oracle/oradata/DEVCDB/sysaux01.dbf
ASYNC_OFF
/u02/app/oracle/oradata/DEVCDB/undotbs01.dbf
ASYNC_OFF
...output snipped...
15:00:51 SYS@devcdb>
15:00:51 SYS@devcdb>
SELECT * FROM v$io_calibration_status;
15:00:55 SYS@devcdb>
STATUS CALIBRATION_TIME CON_ID
------------- --------------------------------------------------------------------------- ----------
READY 02-MAR-22 03.00.12.086 PM 0
15:00:55 SYS@devcdb>
15:00:58 SYS@devcdb>
SET LINESIZE 150
COLUMN start_time FORMAT A30
COLUMN end_time FORMAT A30
SELECT * FROM dba_rsrc_io_calibrate;15:01:03 SYS@devcdb> 15:01:03 SYS@devcdb> 15:01:03 SYS@devcdb> 15:01:03 SYS@devcdb>
START_TIME END_TIME MAX_IOPS MAX_MBPS MAX_PMBPS LATENCY NUM_PHYSICAL_DISKS
------------------------------ ------------------------------ ---------- ---------- ---------- ---------- ------------------
ADDITIONAL_INFO
------------------------------------------------------------------------------------------------------------------------------------------------------
02-MAR-22 02.47.21.657616 PM 02-MAR-22 03.00.12.086180 PM 2739 623 329 0 1
}}}
! iostat -xmd
{{{
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc 3.00 0.00 4892.00 0.00 149.78 0.00 62.70 81.82 16.23 16.23 0.00 0.20 100.10
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 26.00 0.00 2.00 0.00 0.11 112.00 0.01 6.00 0.00 6.00 6.00 1.20
sdc 6.00 0.00 4900.00 4.00 153.03 0.03 63.92 75.08 15.58 15.59 3.75 0.20 100.10
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.99 8.91 0.99 0.42 0.01 88.80 0.19 19.60 21.56 2.00 19.40 19.21
sdc 5.94 0.00 4797.03 0.00 156.76 0.00 66.93 57.22 12.25 12.25 0.00 0.21 99.01
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 8.00 0.00 2.00 0.00 0.04 40.00 0.00 1.50 0.00 1.50 1.50 0.30
sdc 2.00 0.00 5291.00 0.00 185.13 0.00 71.66 55.16 10.41 10.41 0.00 0.19 99.30
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 0.00 0.00 1.00 0.00 0.00 7.00 0.05 52.00 0.00 52.00 52.00 5.20
sda 0.00 7.00 0.00 1.00 0.00 0.03 64.00 0.00 1.00 0.00 1.00 1.00 0.10
sdc 2.00 1.00 4488.00 13.00 170.90 0.08 77.80 52.97 11.81 11.83 4.69 0.22 100.60
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 13.00 0.00 1.00 0.00 0.05 112.00 0.03 32.00 0.00 32.00 32.00 3.20
sdc 7.00 0.00 4532.00 0.00 182.51 0.00 82.48 46.99 10.38 10.38 0.00 0.22 99.90
}}}
! top -c
{{{
top - 15:12:39 up 6 days, 7:09, 16 users, load average: 2.65, 2.84, 2.14
Tasks: 417 total, 4 running, 413 sleeping, 0 stopped, 0 zombie
%Cpu(s): 8.2 us, 18.4 sy, 0.0 ni, 41.2 id, 30.5 wa, 0.0 hi, 1.8 si, 0.0 st
KiB Mem : 7910116 total, 63928 free, 2312776 used, 5533412 buff/cache
KiB Swap: 4194300 total, 3921404 free, 272896 used. 3370596 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4962 oracle 20 0 2810968 25180 18712 R 33.5 0.3 1:43.57 ora_cs00_devcdb
36 root 20 0 0 0 0 S 7.7 0.0 1:58.54 [kswapd0:0]
3064 oracle 20 0 2823828 224660 198308 S 5.8 2.8 0:04.15 ora_m003_devcdb
3285 oracle 20 0 162452 4084 2976 R 1.9 0.1 0:28.31 top -c
9600 oracle -2 0 2793064 6468 6056 R 1.9 0.1 167:42.52 ora_vktm_devcdb
9827 oracle 20 0 2926456 571624 459388 S 1.3 7.2 4:41.97 ora_mmon_devcdb
2921 root 20 0 90512 2268 2208 S 0.6 0.0 8:22.42 /sbin/rngd -f
6228 oracle 20 0 116924 3900 3580 S 0.6 0.0 0:00.32 /bin/ksh ./genstacks.sh 4962 9999999
15 root 20 0 0 0 0 S 0.3 0.0 1:30.19 [ksoftirqd/1]
1977 oracle 20 0 2815892 467992 442568 S 0.3 5.9 0:09.84 ora_m000_devcdb
1979 oracle 20 0 2840484 370992 326400 S 0.3 4.7 0:06.82 ora_m001_devcdb
3133 oracle 20 0 2795156 63396 57536 S 0.3 0.8 0:04.22 ora_m004_devcdb
3837 oracle 20 0 108008 1540 1408 S 0.3 0.0 0:02.31 iostat -xmd 1
4480 root 20 0 0 0 0 S 0.3 0.0 0:00.43 [kworker/u4:3]
4831 root 20 0 0 0 0 S 0.3 0.0 0:00.81 [kworker/1:0]
5135 oracle 20 0 206908 10840 9284 S 0.3 0.1 0:15.19 /u02/app/oracle/product/version/db_1/bin/tnslsnr LISTENER -inherit
6353 root 20 0 0 0 0 S 0.3 0.0 0:00.07 [kworker/0:0]
9590 oracle 20 0 2793336 8580 7676 S 0.3 0.1 2:53.74 ora_psp0_devcdb
9778 oracle 20 0 2808232 11440 10440 S 0.3 0.1 0:59.80 ora_smco_devcdb
9796 oracle 20 0 2800100 46636 43692 S 0.3 0.6 0:49.27 ora_lreg_devcdb
11743 oracle 20 0 2814956 455340 447080 S 0.3 5.8 18:01.76 ora_cjq0_devcdb
1 root 20 0 193888 5744 3772 S 0.0 0.1 3:32.68 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
2 root 20 0 0 0 0 S 0.0 0.0 0:00.38 [kthreadd]
3 root 20 0 0 0 0 S 0.0 0.0 0:53.45 [ksoftirqd/0]
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:0H]
7 root 20 0 0 0 0 S 0.0 0.0 4:49.50 [rcu_sched]
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcu_bh]
9 root 20 0 0 0 0 R 0.0 0.0 4:08.02 [rcuos/0]
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcuob/0]
11 root rt 0 0 0 0 S 0.0 0.0 0:01.18 [migration/0]
12 root rt 0 0 0 0 S 0.0 0.0 0:07.69 [watchdog/0]
13 root rt 0 0 0 0 S 0.0 0.0 0:04.90 [watchdog/1]
14 root rt 0 0 0 0 S 0.0 0.0 0:01.15 [migration/1]
17 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/1:0H]
18 root 20 0 0 0 0 S 0.0 0.0 3:52.57 [rcuos/1]
19 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcuob/1]
20 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [khelper]
21 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kdevtmpfs]
22 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [netns]
23 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [perf]
24 root 20 0 0 0 0 S 0.0 0.0 0:00.71 [khungtaskd]
25 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [writeback]
26 root 25 5 0 0 0 S 0.0 0.0 0:00.00 [ksmd]
27 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [crypto]
28 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kintegrityd]
29 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset]
30 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kblockd]
31 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [md]
32 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [devfreq_wq]
37 root 20 0 0 0 0 S 0.0 0.0 0:30.95 [fsnotify_mark]
49 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kthrotld]
50 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [acpi_thermal_pm]
52 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kmpath_rdacd]
53 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kpsmoused]
54 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bcache]
55 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [ipv6_addrconf]
75 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [deferwq]
76 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [charger_manager]
147 root 20 0 0 0 0 S 0.0 0.0 0:00.11 [kauditd]
610 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [ata_sff]
620 root 20 0 0 0 0 S 0.0 0.0 0:00.01 [scsi_eh_0]
629 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [scsi_tmf_0]
641 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [scsi_eh_1]
653 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [scsi_tmf_1]
1152 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [scsi_eh_2]
1154 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [scsi_tmf_2]
1155 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [scsi_eh_3]
1156 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [scsi_tmf_3]
1159 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [scsi_eh_4]
1160 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [scsi_tmf_4]
1226 root 0 -20 0 0 0 S 0.0 0.0 0:05.41 [kworker/0:1H]
}}}
! sqlmon - active sessions
{{{
14:47:35 SYS@devcdb> @sqlmon
##############################
'*** GV$SQL_MONITOR ***'
##############################
no rows selected
##############################
'*** GV$SESSION ***'
##############################
INST MINS SID USERNAME PROG SQL_ID CHILD PLAN_HASH_VALUE EXECS AVG_ETIME EVENT P1TEXT P1 P2TEXT P2 P3TEXT P3 SQL_TEXT OFF IO_SAVED_% machine OSUSER
----- ------- ----- ------------- ---------- ------------- ------ --------------- ---------- ----------- -------------------- -------------------- ---------- -------------------- ---------- -------------------- ---------- ------------------------------ --- ---------- ------------------------------ ----------
1 11 252 SYS sqlplus@lo 9hy1wxdyd5bu4 0 0 3 115.22 class slave wait slave id 0 0 0 DECLARE l_max_iops PL No 0 localhost.localdomain oracle
}}}
! strace -fy -p 4962
* this shows the reads across the datafiles using pread64
{{{
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\211$@\2\22Z$\0\0\0\17\4\3\202\0\0\5\0\23\0x\2\0\0\345\0kk"..., 8192, 76619776) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\373'@\2\33\314$\0\0\0\3\4\373{\0\0\7\0\4\0\210\2\0\0\374\0AA"..., 8192, 83845120) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0G)@\2G\261$\0\0\0\1\4\307\33\0\0\4\0\20\0\203\2\0\0\314\0))"..., 8192, 86564864) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\275)@\2\220\221$\0\0\0F\4\275A\0\0\6\0\2\0\221\2\0\0\304\0EE"..., 8192, 87531520) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0:,@\2^\332$\0\0\0\3\4B\304\0\0\2\0\3\0\241\2\0\0\300\0@@"..., 8192, 92749824) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\300/@\2\2572%\0\0\0\26\4\204\311\0\0\2\0\22\0\263\2\0\0\301\0@@"..., 8192, 100139008) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0s0@\2\177o%\0\0\0\26\4\200%\0\0\5\0\26\0\257\2\0\0\347\0CC"..., 8192, 101605376) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0w1@\2\225\231%\0\0\0\20\4\347 \0\0\t\0\31\0\336\2\0\0\364\0HH"..., 8192, 103735296) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\3271@\2\322\225%\0\0\0\16\4\220\366\0\0\10\0\24\0\360\2\0\0\21\1HH"..., 8192, 104521728) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\3013@\2\315\216%\0\0\0\37\4\32]\0\0\6\0\33\0\301\2\0\0\306\0\36\36"..., 8192, 108535808) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\3147@\2\230\247%\0\0\0N\4\32A\0\0\2\0!\0\310\2\0\0\303\0MM"..., 8192, 117014528) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\353:@\2\245\233%\0\0\0j\4\6\362\0\0\t\0\r\0\337\2\0\0\365\0ll"..., 8192, 123559936) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\2E@\2,\22&\0\0\0\7\4\212\4\0\0\n\0\2\0\260\2\0\0\275\0SS"..., 8192, 144719872) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0KE@\2\366d&\0\0\0\t\4V\337\0\0\n\0\1\0\274\2\0\0\275\0BB"..., 8192, 145317888) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\217H@\2\255a&\0\0\0\2\4\\\210\0\0\1\0\24\0\326\2\0\0\344\0\1\1"..., 8192, 152166400) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0gJ@\2\316\225&\0\0\0#\4;}\0\0\7\0\1\0\312\2\0\0\3\1II"..., 8192, 156033024) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0PL@\2\237}&\0\0\0F\4\362\0\0\0\2\0\22\0\364\2\0\0\307\0EE"..., 8192, 160038912) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\247O@\2\200\201&\0\0\0a\4H\n\0\0\2\0\25\0\365\2\0\0\310\0``"..., 8192, 167043072) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0 P@\2\207\233&\0\0\0b\4u\177\0\0\10\0\25\0$\3\0\0\30\1aa"..., 8192, 168034304) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0UP@\2\304\305&\0\0\0\26\4.\232\0\0\10\0\2\0.\3\0\0\30\1FF"..., 8192, 168468480) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\270S@\2\342\234&\0\0\0\2\4~o\0\0\6\0\f\0\367\2\0\0\314\0\1\1"..., 8192, 175570944) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0nU@\2\304\340&\0\0\0\v\4\315x\0\0\6\0\20\0\10\3\0\0\315\00055"..., 8192, 179159040) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0?X@\2 \343&\0\0\0 \4#\2\0\0\10\0\20\0005\3\0\0\31\1@@"..., 8192, 185065472) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\5\\@\2\27\357&\0\0\0a\4f\326\0\0\1\0\4\0\355\2\0\0\347\0``"..., 8192, 192978944) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0g\\@\2\320#'\0\0\0\3\4\347|\0\0\1\0 \0\361\2\0\0\347\0FF"..., 8192, 193781760) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\356]@\2\240;'\0\0\0\2\4\375\240\0\0\2\0\22\0\32\3\0\0\313\0DD"..., 8192, 196984832) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0Z^@\2#\21'\0\0\0U\4\331\211\0\0\4\0\17\0\340\2\0\0\325\0TT"..., 8192, 197869568) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\274^@\2}\"'\0\0\0\"\4^\217\0\0\5\0\34\0\3\3\0\0\363\0HH"..., 8192, 198672384) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0Ka@\2\375\n4\0\0\0\2\4_\325\0\0\5\0\20\0\21\3\0\0\364\0WW"..., 8192, 204038144) = 8192
pread64(269</u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf>, "\2\242\0\0\26b@\2X=?\0\0\0-\4\271y\0\0\7\0\1\0M\3\0\0\21\1LL"..., 8192, 205701120) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\341\1@\0~\0177\0\0\0\1\4\237\37\0\0\2\0\0\0006\0\0\0~\0177\0"..., 8192, 3940352) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0u\2@\0\235\333\r\0\0\0\1\6:\365\0\0\1\0\35\0D\0\0\0\234\333\r\0"..., 8192, 5152768) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\231\2@\0\355\313&\0\0\0\1\6\t=\0\0\1\0\0\0@\0\0\0\344\313&\0"..., 8192, 5447680) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\266\6@\0\322\316\n\0\0\0\2\6\264G\0\0\2\0\r\0\352\2\0\0\321\316\n\0"..., 8192, 14073856) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\0\242\0\0S\7@\0\0\0\0\0\0\0\1\5\23\240\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 15360000) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\20\242\0\0\330\7@\0z`\0\0\0\0\1\4\266\344\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 16449536) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\0\242\0\0\343\10@\0\0\0\0\0\0\0\1\5\243\257\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 18636800) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\0\242\0\0:\t@\0\0\0\0\0\0\0\1\5z\256\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 19349504) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\0\242\0\0n\t@\0\0\0\0\0\0\0\1\5.\256\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 19775488) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\0\242\0\0r\t@\0\0\0\0\0\0\0\1\0052\256\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 19808256) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\356\r@\0j;\"\0\0\0\1\6\36)\0\0\1\0 \0\2\0\0\0V%\"\0"..., 8192, 29212672) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\343\22@\0\313\361\t\0\0\0\2\4\216\365\0\0\1\0\f\0\342\2\0\0\313\361\t\0"..., 8192, 39608320) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\365\24@\0\237V\n\0\0\0\1\6\312\335\0\0\2\0\6\0\t\3\0\0\227V\n\0"..., 8192, 43950080) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\20\31@\0\273`\n\0\0\0\1\6\257.\0\0\1\0\24\0s\1\0\0\314\\\n\0"..., 8192, 52559872) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\"\34@\0YL\v\0\0\0\1\6V%\0\0\2\0\17\0002\0\0\0WL\v\0"..., 8192, 58998784) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\25#@\0\366+\v\0\0\0\2\6~_\0\0\2\0\0\0000\0\0\0\365+\v\0"..., 8192, 73572352) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0q#@\0\5O\f\0\0\0\2\6E\266\0\0\2\0 \0000\0\0\0\4O\f\0"..., 8192, 74326016) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\235(@\0V9#\0\0\0\1\0069`\0\0\2\0\n\0(\0\0\0U9#\0"..., 8192, 85172224) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\343(@\0\277\262\25\0\0\0\1\4c\1\0\0\2\0\23\0(\0\0\0\277\262\25\0"..., 8192, 85745664) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\212*@\0\312\252#\0\0\0\1\6Xy\0\0\1\0\0\0s\1\0\0\311\252#\0"..., 8192, 89210880) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0],@\0\222\332\v\0\0\0\1\6\220\276\0\0\1\0\26\0\342\2\0\0l\332\v\0"..., 8192, 93036544) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\0\242\0\0\225-@\0\0\0\0\0\0\0\1\5\325\212\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 95592448) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\2312@\0\241\323\16\0\0\0\1\6\346\216\0\0\1\0\0\0S\0\0\0\240\323\16\0"..., 8192, 106110976) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\3464@\0\333h\r\0\0\0\2\6\255F\0\0\2\0\0\0000\0\0\0\331h\r\0"..., 8192, 110936064) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\3737@\0\341B\r\0\0\0\2\4\341\235\0\0\1\0\0\0\342\2\0\0\341B\r\0"..., 8192, 117399552) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\0038@\0E\0\16\0\0\0\1\4\341\203\0\0\2\0\1\0001\0\0\0E\0\16\0"..., 8192, 117465088) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\00029@\0\351P\r\0\0\0\2\6\275\253\0\0\2\0\0\0\363\2\0\0\350P\r\0"..., 8192, 119947264) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0R=@\0\307]&\0\0\0\1\6\227\35\0\0\1\0\1\0q\1\0\0\306]&\0"..., 8192, 128598016) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\301>@\0\300[\24\0\0\0\2\4g*\0\0\1\3@\0r\1\0\0\300[\24\0"..., 8192, 131604480) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0DP@\0o6>\0\0\0\1\6\226\304\0\0\2\0!\0\t\0\0\0n6>\0"..., 8192, 168329216) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0IP@\0\326A\33\0\0\0\2\4C{\0\0\1\0\0\0\10\0\0\0\326A\33\0"..., 8192, 168370176) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\203P@\0\257E\"\0\0\0\2\4\353\37\0\0\1\0 \0\342\2\0\0\257E\"\0"..., 8192, 168845312) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\350Q@\0\t\200\21\0\0\0\1\4\310\306\0\0\1\0\r\0S\0\0\0\t\200\21\0"..., 8192, 171769856) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\211W@\0\340P\21\0\0\0\1\4\225)\0\0\1\0\0\0\22\0\0\0\340P\21\0"..., 8192, 183574528) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\262W@\0\340P\21\0\0\0\1\0046\220\0\0\1\0\6\0\22\0\0\0\340P\21\0"..., 8192, 183910400) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\301W@\0\340P\21\0\0\0\1\4|\21\0\0\1\0\0\0\22\0\0\0\340P\21\0"..., 8192, 184033280) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\303[@\0VP\21\0\0\0\1\4\255\307\0\0\2\0\5\0%\0\0\0VP\21\0"..., 8192, 192438272) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\244_@\0\256\223\21\0\0\0\1\4O\265\0\0\1\0\0\0=\0\0\0\256\223\21\0"..., 8192, 200572928) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0f`@\0p`$\0\0\0\1\6+\253\0\0\2\0\0\0?\0\0\0k`$\0"..., 8192, 202162176) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\212`@\0\347P\21\0\0\0\1\4 \345\0\0\2\0\f\0>\0\0\0\347P\21\0"..., 8192, 202457088) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0kb@\0V2'\0\0\0\1\6w<\0\0\1\0\0\0S\0\0\0g\254\33\0"..., 8192, 206397440) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\202b@\0\37H\22\0\0\0\2\6\34\250\0\0\2\0\0\0'\0\0\0\36H\22\0"..., 8192, 206585856) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\24d@\0\370K$\0\0\0\2\4\10\35\0\0\1\0\6\0s\1\0\0\370K$\0"..., 8192, 209879040) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\26f@\0\364,\24\0\0\0\2\4+s\0\0\1\0\0\0\342\2\0\0\364,\24\0"..., 8192, 214089728) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\16h@\0\210\253%\0\0\0\1\6\200H\0\0\2\0\6\0\363\2\0\0\203\253%\0"..., 8192, 218218496) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\0\242\0\0\236j@\0\0\0\0\0\0\0\1\5\336\315\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 223592448) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0gl@\0F\203$\0\0\0\1\6B\304\0\0\2\0\1\0\372\2\0\0A\203$\0"..., 8192, 227336192) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0,p@\0\357\334\30\0\0\0\2\4\206\360\0\0\1\0\v\0\342\2\0\0\357\334\30\0"..., 8192, 235241472) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\213q@\0Pf&\0\0\0\1\6\16h\0\0\1\0\34\0\206\1\0\0>f&\0"..., 8192, 238116864) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0kx@\0\255E\"\0\0\0\2\4\10\36\0\0\1\0\22\0\342\2\0\0\255E\"\0"..., 8192, 252534784) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\216y@\0\21x&\0\0\0\1\6\23\326\0\0\1\0\0\0q\1\0\0\20x&\0"..., 8192, 254918656) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\221z@\0Z\6'\0\0\0\1\6\16-\0\0\1\0\37\0S\0\0\0Y\6'\0"..., 8192, 257040384) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\323{@\0\205\342&\0\0\0\1\6\305\366\0\0\1\0\20\0q\1\0\0\204\342&\0"..., 8192, 259678208) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\0\242\0\0\363}@\0\0\0\0\0\0\0\1\5\263\332\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 264134656) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\20\242\0\0008\177@\0005\201\"\0\0\0\1\4\324\202\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 266797056) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\31\242\0\0\21\200@\0R\207\"\0\0\0\1\4k?\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 268574720) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0\210\200@\0\251\206\"\0\0\0\2\4\270y\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 269549568) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0\n\201@\0\252\206\"\0\0\0\2\4\275~\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 270614528) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0x\205@\0\277\206\"\0\0\0\2\4.<\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 279904256) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0M\210@\0\321\206\"\0\0\0\2\4\207\v\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 285843456) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0w\216@\0\352\206\"\0\0\0\2\4.\27\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 298770432) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0E\221@\0\375\206\"\0\0\0\2\4\355'\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 304652288) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0t\221@\0\375\206\"\0\0\0\2\4\17E\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 305037312) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0\223\225@\0\23\207\"\0\0\0\2\4\352\263\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 313679872) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0001\226@\0\24\207\"\0\0\0\2\4\364\315\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 314974208) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0\206\232@\0*\207\"\0\0\0\2\4\302\331\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 324059136) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0\344\234@\0.\207\"\0\0\0\2\4\257S\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 329023488) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0(\242@\0E\207\"\0\0\0\2\4\347\240\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 340066304) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\33\242\0\0Y\245@\0R\207\"\0\0\0\2\4\3\372\0\0AW\0\0\0\0\0\1\0\0\0\7"..., 8192, 346759168) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0\223\255@\0<+'\0\0\0\2\4\302\210\0\0\1\0\0\0\342\2\0\0<+'\0"..., 8192, 364011520) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\20\242\0\0\360\257@\0\36\0057\0\0\0\2\4\326i\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 368967680) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\6\242\0\0T\265@\0#\3607\0\0\0\2\4\r\177\0\0\1\0!\0@\0\0\0#\3607\0"..., 8192, 380272640) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\0\242\0\0$\266@\0\0\0\0\0\0\0\1\5d\21\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 381976576) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\0\242\0\0\244\266@\0\0\0\0\0\0\0\1\5\344\21\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 383025152) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\0\242\0\0\255\267@\0\0\0\0\0\0\0\1\5\355\20\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 385196032) = 8192
pread64(270</u02/app/oracle/oradata/DEVCDB/local2/system01.dbf>, "\0\242\0\0v\270@\0\0\0\0\0\0\0\1\0056\37\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 386842624) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "=\242\0\0F\1\0\1\r4\0\0\0\0\2\4<\231\0\0\2\0\0\0\1\0\0\0\0\0\0\0"..., 8192, 2670592) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "\6\242\0\0\374\1\0\0010a\n\0\0\0\2\4Q\230\0\0\1\0\0\0{#\0\0000a\n\0"..., 8192, 4161536) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "\0\242\0\0\26\2\0\1\0\0\0\0\0\0\1\5\26\244\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 4374528) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "\6\242\0\0\205\5\0\1\226\212\r\0\0\0\2\4|\30\0\0\5\0\0\0\1\35\0\0\226\212\r\0"..., 8192, 11575296) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "\6\242\0\0\365\7\0\1b)\"\0\0\0\2\4\224\357\0\0\5\0\0\0\1\35\0\0b)\"\0"..., 8192, 16687104) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "\6\242\0\0\30\t\0\1h)\"\0\0\0\2\4y\340\0\0\5\0\0\0\1\35\0\0h)\"\0"..., 8192, 19070976) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "\6\242\0\0\210\n\0\1\227)\"\0\0\0\2\4\351\225\0\0\5\0\0\0\1\35\0\0\227)\"\0"..., 8192, 22085632) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "\0\242\0\0\364\16\0\1\0\0\0\0\0\0\1\5\364\250\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 31358976) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "!\242\0\0\221\20\0\1\321\300\r\0\0\0\2\4\323\307\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 34742272) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "!\242\0\0Y\21\0\1/\311\r\0\0\0\1\4\233\256\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 36380672) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "\6\242\0\0\271\23\0\1(\t8\0\0\0\1\6H\227\0\0\2\0\23\0\25:\0\0%\t8\0"..., 8192, 41361408) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, " \242\0\0x\24\0\1+\243\16\0\0\0\1\4_s\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 42926080) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "=\242\0\0\242\27\0\1\251\243\16\0\0\0\2\4S\337\0\0\6\0\0\0\1\0\0\0\0\0\0\0"..., 8192, 49561600) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "\0\242\0\0\365\32\0\1\0\0\0\0\0\0\1\5\365\274\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 56532992) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "=\242\0\0u\33\0\1\240\244\16\0\0\0\4\4\2\311\0\0\1\0\0\0\1\0\0\0\0\0\0\0"..., 8192, 57581568) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "=\242\0\0\250\33\0\1\254\244\16\0\0\0\2\4\215\322\0\0\4\0\0\0\1\0\0\0\0\0\0\0"..., 8192, 57999360) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "E\242\0\0\250 \0\1Z\246\16\0\0\0\2\4\321\220\0\0\263 \0\1\5\0\0\0\0\0\0\0"..., 8192, 68485120) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, "=\242\0\0\2%\0\1o\247\16\0\0\0\2\4\275\355\0\0\6\0\0\0\1\0\0\0\0\0\0\0"..., 8192, 77611008) = 8192
pread64(271</u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf>, ^C"\6\242\0\0K(\0\1\302\250\16\0\0\0\1\4V\216\0\0\2\0\0\0\252S\0\0\300\250\16\0"..., 8192, 84500480) = 8192
strace: Process 4962 detached
[oracle@localhost DEVCDB]$ ^C
[oracle@localhost DEVCDB]$ ^C
[oracle@localhost DEVCDB]$ ^C
[oracle@localhost DEVCDB]$
[oracle@localhost DEVCDB]$ strace -fy -p 4962
}}}
! short_stack and perf
{{{
# terminal 1
./genstacks 4962 99999999
# terminal 2
perf record -g -p 4962
# package everything from terminal 3
tar -cjvpf perf_perf_data.tar.bz2 perf* stacks*txt
perf_flamegraph.svg
perf_perf_graph.data
perf_perf_graph.data-folded
stacks2.txt
stacks3.txt
}}}
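The folded stacks and SVG above come out of a flamegraph pipeline; a minimal sketch, assuming Brendan Gregg's FlameGraph scripts (stackcollapse-perf.pl, flamegraph.pl) are on the PATH and that the file names mirror the ones packaged above:
{{{
# turn the perf.data samples into a flamegraph
perf script > perf_perf_graph.data
stackcollapse-perf.pl perf_perf_graph.data > perf_perf_graph.data-folded
flamegraph.pl perf_perf_graph.data-folded > perf_flamegraph.svg
}}}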
! lsof
{{{
[oracle@localhost ~]$ lsof +p 4962
lsof: WARNING: can't stat() tracefs file system /sys/kernel/debug/tracing
Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ora_cs00_ 4962 oracle cwd DIR 8,33 102 270184810 /u02/app/oracle/product/version/db_1/dbs
ora_cs00_ 4962 oracle rtd DIR 0,36 144 256 /
ora_cs00_ 4962 oracle txt REG 8,33 448003000 137362676 /u02/app/oracle/product/version/db_1/bin/oracle
ora_cs00_ 4962 oracle DEL REG 0,5 0 /SYSV00000000
ora_cs00_ 4962 oracle DEL REG 0,5 32769 /SYSV00000000
ora_cs00_ 4962 oracle DEL REG 0,5 65538 /SYSV00000000
ora_cs00_ 4962 oracle DEL REG 0,5 98307 /SYSV2ba3e5c8
ora_cs00_ 4962 oracle mem REG 8,33 3486080 407196994 /u02/app/oracle/product/version/db_1/lib/libshpkavx219.so
ora_cs00_ 4962 oracle mem REG 0,34 44184 /usr/lib64/libnss_files-2.17.so (path dev=0,36)
ora_cs00_ 4962 oracle mem REG 0,34 40978 /usr/lib64/libgcc_s-4.8.5-20150702.so.1 (path dev=0,36)
ora_cs00_ 4962 oracle mem REG 0,34 53746 /usr/lib64/libnuma.so.1 (path dev=0,36)
ora_cs00_ 4962 oracle mem REG 0,34 44166 /usr/lib64/libc-2.17.so (path dev=0,36)
ora_cs00_ 4962 oracle mem REG 0,34 44194 /usr/lib64/libresolv-2.17.so (path dev=0,36)
ora_cs00_ 4962 oracle mem REG 0,34 44176 /usr/lib64/libnsl-2.17.so (path dev=0,36)
ora_cs00_ 4962 oracle mem REG 0,34 44192 /usr/lib64/libpthread-2.17.so (path dev=0,36)
ora_cs00_ 4962 oracle mem REG 0,34 44174 /usr/lib64/libm-2.17.so (path dev=0,36)
ora_cs00_ 4962 oracle mem REG 0,34 44172 /usr/lib64/libdl-2.17.so (path dev=0,36)
ora_cs00_ 4962 oracle mem REG 8,33 3642520 409545489 /u02/app/oracle/product/version/db_1/lib/libipc1.so
ora_cs00_ 4962 oracle mem REG 8,33 478728 403964590 /u02/app/oracle/product/version/db_1/lib/libmql1.so
ora_cs00_ 4962 oracle mem REG 8,33 410512 409545487 /u02/app/oracle/product/version/db_1/lib/libons.so
ora_cs00_ 4962 oracle mem REG 0,34 88004 /usr/lib64/libaio.so.1.0.1 (path dev=0,36)
ora_cs00_ 4962 oracle mem REG 8,33 774352 407196946 /u02/app/oracle/product/version/db_1/lib/libocrutl19.so
ora_cs00_ 4962 oracle mem REG 8,33 8581040 407196966 /u02/app/oracle/product/version/db_1/lib/libocrb19.so
ora_cs00_ 4962 oracle mem REG 8,33 3783528 409545581 /u02/app/oracle/product/version/db_1/lib/libocr19.so
ora_cs00_ 4962 oracle mem REG 8,33 17336 415020824 /u02/app/oracle/product/version/db_1/lib/libskgxn2.so
ora_cs00_ 4962 oracle mem REG 8,33 13921936 407196939 /u02/app/oracle/product/version/db_1/lib/libhasgen19.so
ora_cs00_ 4962 oracle mem REG 8,33 197440 403642180 /u02/app/oracle/product/version/db_1/lib/libdbcfg19.so
ora_cs00_ 4962 oracle mem REG 8,33 267072 407197049 /u02/app/oracle/product/version/db_1/lib/libclsra19.so
ora_cs00_ 4962 oracle mem REG 0,34 44196 /usr/lib64/librt-2.17.so (path dev=0,36)
ora_cs00_ 4962 oracle mem REG 8,33 411704 407196928 /u02/app/oracle/product/version/db_1/lib/libskjcx19.so
ora_cs00_ 4962 oracle mem REG 8,33 1273720 403964579 /u02/app/oracle/product/version/db_1/lib/libskgxp19.so
ora_cs00_ 4962 oracle mem REG 8,33 2018328 407196995 /u02/app/oracle/product/version/db_1/lib/libcell19.so
ora_cs00_ 4962 oracle mem REG 8,33 12536 403642195 /u02/app/oracle/product/version/db_1/lib/libofs.so
ora_cs00_ 4962 oracle mem REG 8,33 17848 409545509 /u02/app/oracle/product/version/db_1/lib/libodmd19.so
ora_cs00_ 4962 oracle mem REG 0,34 44159 /usr/lib64/ld-2.17.so (path dev=0,36)
ora_cs00_ 4962 oracle DEL REG 0,12 59649549 /[aio]
ora_cs00_ 4962 oracle 0r CHR 1,3 0t0 1028 /dev/null
ora_cs00_ 4962 oracle 1w CHR 1,3 0t0 1028 /dev/null
ora_cs00_ 4962 oracle 2w CHR 1,3 0t0 1028 /dev/null
ora_cs00_ 4962 oracle 3r CHR 1,3 0t0 1028 /dev/null
ora_cs00_ 4962 oracle 4r REG 8,33 1433088 17995482 /u02/app/oracle/product/version/db_1/rdbms/mesg/oraus.msb
ora_cs00_ 4962 oracle 5r DIR 0,4 0 59649541 /proc/4962/fd
ora_cs00_ 4962 oracle 6w REG 8,33 39621 404866345 /u02/app/oracle/diag/rdbms/devcdb/devcdb/trace/devcdb_cs00_4962.trc
ora_cs00_ 4962 oracle 7w REG 8,33 4633 404866346 /u02/app/oracle/diag/rdbms/devcdb/devcdb/trace/devcdb_cs00_4962.trm
ora_cs00_ 4962 oracle 8r REG 8,33 448003000 137362676 /u02/app/oracle/product/version/db_1/bin/oracle
ora_cs00_ 4962 oracle 256u REG 8,33 1101012992 30228072 /u02/app/oracle/oradata/DEVCDB/system01.dbf
ora_cs00_ 4962 oracle 257u REG 8,33 1121984512 30228073 /u02/app/oracle/oradata/DEVCDB/sysaux01.dbf
ora_cs00_ 4962 oracle 258u REG 8,33 629153792 19594481 /u02/app/oracle/oradata/DEVCDB/undotbs01.dbf
ora_cs00_ 4962 oracle 259u REG 8,33 367009792 270871088 /u02/app/oracle/oradata/DEVCDB/pdbseed/system01.dbf
ora_cs00_ 4962 oracle 260u REG 8,33 461381632 270871089 /u02/app/oracle/oradata/DEVCDB/pdbseed/sysaux01.dbf
ora_cs00_ 4962 oracle 261u REG 8,33 5251072 25843488 /u02/app/oracle/oradata/DEVCDB/users01.dbf
ora_cs00_ 4962 oracle 262u REG 8,33 209723392 270871090 /u02/app/oracle/oradata/DEVCDB/pdbseed/undotbs01.dbf
ora_cs00_ 4962 oracle 263u REG 8,33 408952832 139074223 /u02/app/oracle/oradata/DEVCDB/devpdb/system01.dbf
ora_cs00_ 4962 oracle 264u REG 8,33 587210752 139074224 /u02/app/oracle/oradata/DEVCDB/devpdb/sysaux01.dbf
ora_cs00_ 4962 oracle 265u REG 8,33 209723392 139074222 /u02/app/oracle/oradata/DEVCDB/devpdb/undotbs01.dbf
ora_cs00_ 4962 oracle 266u REG 8,33 53747712 139074232 /u02/app/oracle/oradata/DEVCDB/devpdb/users01.dbf
ora_cs00_ 4962 oracle 267u REG 8,33 492838912 138801273 /u02/app/oracle/oradata/DEVCDB/localcopy/system01.dbf
ora_cs00_ 4962 oracle 268u REG 8,33 534781952 138801274 /u02/app/oracle/oradata/DEVCDB/localcopy/sysaux01.dbf
ora_cs00_ 4962 oracle 269u REG 8,33 209723392 138801272 /u02/app/oracle/oradata/DEVCDB/localcopy/undotbs01.dbf
ora_cs00_ 4962 oracle 270u REG 8,33 387981312 268691025 /u02/app/oracle/oradata/DEVCDB/local2/system01.dbf
ora_cs00_ 4962 oracle 271u REG 8,33 524296192 268691026 /u02/app/oracle/oradata/DEVCDB/local2/sysaux01.dbf
ora_cs00_ 4962 oracle 272u REG 8,33 209723392 268691024 /u02/app/oracle/oradata/DEVCDB/local2/undotbs01.dbf
ora_cs00_ 4962 oracle 273u REG 8,33 408952832 140089178 /u02/app/oracle/oradata/DEVCDB/localtc/system01.dbf
ora_cs00_ 4962 oracle 274u REG 8,33 555753472 140089179 /u02/app/oracle/oradata/DEVCDB/localtc/sysaux01.dbf
ora_cs00_ 4962 oracle 275u REG 8,33 209723392 140089177 /u02/app/oracle/oradata/DEVCDB/localtc/undotbs01.dbf
ora_cs00_ 4962 oracle 276u REG 8,33 94380032 140089214 /u02/app/oracle/oradata/DEVCDB/localtc/users01.dbf
[oracle@localhost ~]$
}}}
! pidstat -dl 1
{{{
[oracle@localhost bin]$ pidstat -dl 1
Linux 4.1.12-124.27.1.el7uek.x86_64 (localhost.localdomain) 03/02/2022 _x86_64_ (2 CPU)
03:01:24 PM UID PID kB_rd/s kB_wr/s kB_ccwr/s Command
03:01:25 PM 1000 4962 9966.36 0.00 0.00 ora_cs00_devcdb
03:01:25 PM 1000 4964 10100.93 0.00 0.00 ora_cs01_devcdb
03:01:25 PM UID PID kB_rd/s kB_wr/s kB_ccwr/s Command
03:01:26 PM 1000 4962 12936.00 0.00 0.00 ora_cs00_devcdb
03:01:26 PM 1000 4964 7120.00 0.00 0.00 ora_cs01_devcdb
03:01:26 PM 1000 9734 0.00 32.00 0.00 ora_ckpt_devcdb
03:01:26 PM UID PID kB_rd/s kB_wr/s kB_ccwr/s Command
03:01:27 PM 1000 4962 8304.00 0.00 0.00 ora_cs00_devcdb
03:01:27 PM 1000 4964 8328.00 0.00 0.00 ora_cs01_devcdb
03:01:27 PM UID PID kB_rd/s kB_wr/s kB_ccwr/s Command
03:01:28 PM 1000 4962 7021.36 0.00 0.00 ora_cs00_devcdb
03:01:28 PM 1000 4964 6757.28 0.00 0.00 ora_cs01_devcdb
03:01:28 PM UID PID kB_rd/s kB_wr/s kB_ccwr/s Command
03:01:29 PM 1000 4962 8696.00 0.00 0.00 ora_cs00_devcdb
03:01:29 PM 1000 4964 10808.00 0.00 0.00 ora_cs01_devcdb
03:01:29 PM 1000 9734 0.00 32.00 0.00 ora_ckpt_devcdb
^C
Average: UID PID kB_rd/s kB_wr/s kB_ccwr/s Command
Average: 1000 4962 9378.82 0.00 0.00 ora_cs00_devcdb
Average: 1000 4964 8632.16 0.00 0.00 ora_cs01_devcdb
Average: 1000 9734 0.00 12.55 0.00 ora_ckpt_devcdb
}}}
! psnapper - psn
{{{
[oracle@localhost bin]$ ./psn -g pid,cmdline,state,syscall,state,wchan,state,read_bytes,write_bytes,rss,kstack -a
Linux Process Snapper v1.1.0 by Tanel Poder [https://0x.tools]
Sampling /proc/cmdline, wchan, stat, io, syscall, stack for 5 seconds... finished.
=== Active Threads
query returned no rows
samples: 6 (expected: 100)
total processes: 0, threads: 0
runtime: 5.41, measure time: 5.40
Warning: 5124 /proc file accesses failed. Run as root or avoid restricted
proc-files like "syscall" or measure only your own processes
}}}
{{{
[oracle@localhost bin]$ sudo ./psn -g pid,cmdline,state,syscall,state,wchan,state,kstack
Linux Process Snapper v1.1.0 by Tanel Poder [https://0x.tools]
Sampling /proc/cmdline, syscall, wchan, stat, stack for 5 seconds... finished.
=== Active Threads ===================================================================================================================================================================================================================================================================================================================================================================================
samples | avg_threads | pid | cmdline | state | syscall | state | wchan | state | kstack
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
3 | 0.60 | 3883 | ora_cs00_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | -
2 | 0.40 | 7 | | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | ret_from_fork()->kthread()->rcu_gp_kthread()
2 | 0.40 | 36 | | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | ret_from_fork()->kthread()->kswapd()
1 | 0.20 | 3883 | ora_cs00_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | system_call_fastpath()->SyS_pread64()->vfs_read()->__vfs_read()->xfs_file_read_iter()->generic_file_read_iter()->page_cache_sync_readahead()->ondemand_readahead()->__do_page_cache_readahead()->__page_cache_alloc()->alloc_pages_current()->__alloc_pages_nodemask()
1 | 0.20 | 3883 | ora_cs00_devcdb | Running (ON CPU) | pread64 | Running (ON CPU) | 0 | Running (ON CPU) | -
1 | 0.20 | 9600 | ora_vktm_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | system_call_fastpath()->SyS_nanosleep()->hrtimer_nanosleep()
samples: 5 (expected: 100)
total processes: 405, threads: 855
runtime: 5.64, measure time: 5.60
Warning: less than 10 samples captured. Reduce the amount of processes monitored or run pSnapper for longer.
[oracle@localhost bin]$
}}}
{{{
[oracle@localhost bin]$ ./psn -g cmdline,state,syscall,state,wchan,state,read_bytes,write_bytes,rss
Linux Process Snapper v1.1.0 by Tanel Poder [https://0x.tools]
Sampling /proc/syscall, cmdline, wchan, stat, io for 5 seconds... finished.
=== Active Threads =======================================================================================================================================================================
samples | avg_threads | cmdline | state | syscall | state | wchan | state | read_bytes | write_bytes | rss
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 | 0.11 | ora_cs00_devcdb | Disk (Uninterruptible) | [running] | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 37879988224 | 110592 | 4180
1 | 0.11 | ora_cs00_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 37190029312 | 110592 | 4192
1 | 0.11 | ora_cs00_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 37312000000 | 110592 | 4192
1 | 0.11 | ora_cs00_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 37394038784 | 110592 | 4192
1 | 0.11 | ora_cs00_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 37763850240 | 110592 | 4180
1 | 0.11 | ora_cs00_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 38013870080 | 110592 | 4180
1 | 0.11 | ora_cs00_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 38115463168 | 110592 | 4180
1 | 0.11 | ora_cs00_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | 37484240896 | 110592 | 4181
1 | 0.11 | ora_cs00_devcdb | Running (ON CPU) | pread64 | Running (ON CPU) | __lock_page_killable | Running (ON CPU) | 37624033280 | 110592 | 4181
1 | 0.11 | ora_cs01_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 36392763392 | 110592 | 4069
1 | 0.11 | ora_cs01_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 36518834176 | 110592 | 4069
1 | 0.11 | ora_cs01_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 36615671808 | 110592 | 4069
1 | 0.11 | ora_cs01_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 36809740288 | 110592 | 4058
1 | 0.11 | ora_cs01_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 36940431360 | 110592 | 4057
1 | 0.11 | ora_cs01_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | 37284421632 | 110592 | 4057
1 | 0.11 | ora_cs01_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | 36697444352 | 110592 | 4058
1 | 0.11 | ora_cs01_devcdb | Running (ON CPU) | pread64 | Running (ON CPU) | 0 | Running (ON CPU) | 37182349312 | 110592 | 4057
1 | 0.11 | ora_cs01_devcdb | Running (ON CPU) | pread64 | Running (ON CPU) | __lock_page_killable | Running (ON CPU) | 37051711488 | 110592 | 4057
samples: 9 (expected: 100)
total processes: 147, threads: 348
runtime: 5.16, measure time: 5.15
Warning: 4554 /proc file accesses failed. Run as root or avoid restricted
proc-files like "syscall" or measure only your own processes
}}}
{{{
[oracle@localhost bin]$ sudo ./psn -g pid,cmdline,state,syscall,state,wchan,state,kstack
Linux Process Snapper v1.1.0 by Tanel Poder [https://0x.tools]
Sampling /proc/cmdline, syscall, stat, wchan, stack for 5 seconds... finished.
=== Active Threads ===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
samples | avg_threads | pid | cmdline | state | syscall | state | wchan | state | kstack
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
4 | 0.80 | 3885 | ora_cs01_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | -
2 | 0.40 | 2921 | /sbin/rngd | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | system_call_fastpath()->SyS_poll()->do_sys_poll()->poll_schedule_timeout()
2 | 0.40 | 3883 | ora_cs00_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | -
1 | 0.20 | 7 | | Running (ON CPU) | [kernel_thread] | Running (ON CPU) | 0 | Running (ON CPU) | ret_from_fork()->kthread()->rcu_gp_kthread()
1 | 0.20 | 36 | | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | ret_from_fork()->kthread()->kswapd()->shrink_zone()->shrink_lruvec()->shrink_inactive_list()->shrink_page_list()
1 | 0.20 | 3285 | top | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | system_call_fastpath()->SyS_read()->vfs_read()->__vfs_read()->proc_pid_cmdline_read()->__get_free_pages()->alloc_pages_current()->__alloc_pages_nodemask()
1 | 0.20 | 3883 | ora_cs00_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | system_call_fastpath()->SyS_pread64()->vfs_read()->__vfs_read()->xfs_file_read_iter()->generic_file_read_iter()->__lock_page_killable()
1 | 0.20 | 3883 | ora_cs00_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | system_call_fastpath()->SyS_pread64()->vfs_read()->__vfs_read()->xfs_file_read_iter()->generic_file_read_iter()
1 | 0.20 | 3883 | ora_cs00_devcdb | Running (ON CPU) | pread64 | Running (ON CPU) | 0 | Running (ON CPU) | system_call_fastpath()->SyS_pread64()->vfs_read()->__vfs_read()->xfs_file_read_iter()->generic_file_read_iter()->__lock_page_killable()
1 | 0.20 | 3885 | ora_cs01_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | system_call_fastpath()->SyS_pread64()->vfs_read()->__vfs_read()->xfs_file_read_iter()->generic_file_read_iter()->__lock_page_killable()
1 | 0.20 | 5314 | /usr/bin/gnome-shell | Disk (Uninterruptible) | [kernel_thread] | Disk (Uninterruptible) | wait_on_page_bit | Disk (Uninterruptible) | page_fault()->do_page_fault()->__do_page_fault()->handle_mm_fault()->handle_pte_fault()->__do_fault()->filemap_fault()->__do_page_cache_readahead()->btrfs_readpages()->extent_readpages()->__extent_readpages.constprop.40()->__do_readpage()->btrfs_get_extent()->btrfs_lookup_file_extent()->btrfs_search_slot()->read_block_for_search.isra.34()->read_tree_block()->btree_read_extent_buffer_pages.constprop.57()->read_extent_buffer_pages()->wait_on_page_bit()
1 | 0.20 | 9600 | ora_vktm_devcdb | Running (ON CPU) | nanosleep | Running (ON CPU) | hrtimer_nanosleep | Running (ON CPU) | system_call_fastpath()->SyS_nanosleep()->hrtimer_nanosleep()
samples: 5 (expected: 100)
total processes: 403, threads: 853
runtime: 5.81, measure time: 5.79
Warning: less than 10 samples captured. Reduce the amount of processes monitored or run pSnapper for longer.
}}}
{{{
[oracle@localhost bin]$ sudo ./psn -g pid,cmdline,state,syscall,state,wchan,state,kstack
[sudo] password for oracle:
Linux Process Snapper v1.1.0 by Tanel Poder [https://0x.tools]
Sampling /proc/cmdline, syscall, wchan, stat, stack for 5 seconds... finished.
=== Active Threads ======================================================================================================================================================================================================================================================================================
samples | avg_threads | pid | cmdline | state | syscall | state | wchan | state | kstack
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
3 | 0.75 | 3883 | ora_cs00_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | -
3 | 0.75 | 3885 | ora_cs01_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible) | system_call_fastpath()->SyS_pread64()->vfs_read()->__vfs_read()->xfs_file_read_iter()->generic_file_read_iter()->__lock_page_killable()
1 | 0.25 | 7 | | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | ret_from_fork()->kthread()->rcu_gp_kthread()
1 | 0.25 | 36 | | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | -
1 | 0.25 | 1979 | ora_m001_devcdb | Running (ON CPU) | semtimedop | Running (ON CPU) | 0 | Running (ON CPU) | system_call_fastpath()->SyS_semtimedop()->SYSC_semtimedop()
1 | 0.25 | 3883 | ora_cs00_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | system_call_fastpath()->SyS_pread64()->vfs_read()->__vfs_read()->xfs_file_read_iter()->generic_file_read_iter()->__lock_page_killable()
1 | 0.25 | 3885 | ora_cs01_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU) | -
samples: 4 (expected: 100)
total processes: 403, threads: 853
runtime: 6.65, measure time: 6.60
Warning: less than 10 samples captured. Reduce the amount of processes monitored or run pSnapper for longer.
}}}
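To avoid the "less than 10 samples captured" warning above, sample for longer and narrow the scope. A sketch using the -d and -p flags from the psn help output further below (the process name regex is illustrative):
{{{
# sample for 30 seconds instead of the default 5, and only look at
# processes whose name matches the "devcdb" regex
sudo ./psn -d 30 -p devcdb -g pid,cmdline,state,syscall,wchan
}}}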
{{{
[oracle@localhost bin]$ ./psn -g cmdline,state,syscall,state,wchan,state -a
Linux Process Snapper v1.1.0 by Tanel Poder [https://0x.tools]
Sampling /proc/syscall, cmdline, wchan, stat for 5 seconds... finished.
=== Active Threads ======================================================================================================================================================================================
samples | avg_threads | cmdline | state | syscall | state | wchan | state
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
441 | 25.94 | /home/oracle/java/jdk1.8.0_201/bin/java | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
170 | 10.00 | /usr/bin/gnome-shell | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
119 | 7.00 | -bash | Sleep (Interruptible) | wait4 | Sleep (Interruptible) | do_wait | Sleep (Interruptible)
102 | 6.00 | /usr/libexec/evolution-calendar-factory-subprocess | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
102 | 6.00 | /usr/libexec/gnome-shell-calendar-server | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
85 | 5.00 | /usr/bin/gnome-shell | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
85 | 5.00 | /usr/libexec/evolution-addressbook-factory | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
85 | 5.00 | /usr/libexec/evolution-addressbook-factory-subprocess | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
85 | 5.00 | /usr/libexec/evolution-calendar-factory | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
85 | 5.00 | /usr/libexec/gsd-smartcard | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/bin/VBoxClient | Sleep (Interruptible) | wait4 | Sleep (Interruptible) | do_wait | Sleep (Interruptible)
68 | 4.00 | /usr/bin/gnome-software | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/at-spi-bus-launcher | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/evolution-source-registry | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gnome-session-binary | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gnome-terminal-server | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/goa-daemon | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gsd-a11y-settings | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gsd-account | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gsd-color | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gsd-datetime | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gsd-housekeeping | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gsd-keyboard | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gsd-media-keys | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gsd-mouse | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gsd-power | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gsd-sharing | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gsd-sound | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/gsd-xsettings | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/ibus-dconf | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | /usr/libexec/mission-control-5 | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | nautilus-desktop | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
68 | 4.00 | sqlplus | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
51 | 3.00 | /usr/bin/VBoxClient | Sleep (Interruptible) | ioctl | Sleep (Interruptible) | rtR0SemEventMultiLnxWait.isra.2 | Sleep (Interruptible)
51 | 3.00 | /usr/bin/VBoxClient | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/bin/seapplet | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/at-spi2-registryd | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/dconf-service | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/evolution-calendar-factory-subprocess | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/goa-identity-service | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gsd-clipboard | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gsd-disk-utility-notify | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gsd-print-notifications | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gsd-printer | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gsd-rfkill | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gsd-screensaver-proxy | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gsd-wacom | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gvfs-afc-volume-monitor | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gvfs-goa-volume-monitor | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gvfs-gphoto2-volume-monitor | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gvfs-mtp-volume-monitor | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gvfs-udisks2-volume-monitor | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gvfsd | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gvfsd-metadata | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/gvfsd-trash | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/ibus-engine-simple | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/ibus-portal | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/ibus-x11 | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | /usr/libexec/xdg-permission-store | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | abrt-applet | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | bash | Sleep (Interruptible) | read | Sleep (Interruptible) | wait_woken | Sleep (Interruptible)
51 | 3.00 | ibus-daemon | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
51 | 3.00 | oracledevcdb | Sleep (Interruptible) | read | Sleep (Interruptible) | sk_wait_data | Sleep (Interruptible)
51 | 3.00 | sqlplus | Sleep (Interruptible) | read | Sleep (Interruptible) | wait_woken | Sleep (Interruptible)
34 | 2.00 | /usr/bin/dbus-daemon | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
34 | 2.00 | /usr/bin/dbus-daemon | Sleep (Interruptible) | poll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
34 | 2.00 | /usr/bin/pulseaudio | Sleep (Interruptible) | ppoll | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
34 | 2.00 | ora_gen1_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
34 | 2.00 | ora_ofsd_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | -bash | Sleep (Interruptible) | read | Sleep (Interruptible) | wait_woken | Sleep (Interruptible)
17 | 1.00 | /bin/bash | Sleep (Interruptible) | wait4 | Sleep (Interruptible) | do_wait | Sleep (Interruptible)
17 | 1.00 | /home/oracle/java/jdk1.8.0_201/bin/java | Sleep (Interruptible) | accept | Sleep (Interruptible) | inet_csk_accept | Sleep (Interruptible)
17 | 1.00 | /home/oracle/java/jdk1.8.0_201/bin/java | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
17 | 1.00 | /u02/app/oracle/product/version/db_1/bin/tnslsnr | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
17 | 1.00 | /u02/app/oracle/product/version/db_1/bin/tnslsnr | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
17 | 1.00 | /usr/bin/VBoxClient | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
17 | 1.00 | /usr/libexec/evolution-addressbook-factory-subprocess | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
17 | 1.00 | /usr/libexec/goa-daemon | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
17 | 1.00 | /usr/libexec/goa-identity-service | Sleep (Interruptible) | futex | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
17 | 1.00 | /usr/libexec/gvfs-afc-volume-monitor | Sleep (Interruptible) | read | Sleep (Interruptible) | wait_woken | Sleep (Interruptible)
17 | 1.00 | bash | Sleep (Interruptible) | wait4 | Sleep (Interruptible) | do_wait | Sleep (Interruptible)
17 | 1.00 | dbus-launch | Sleep (Interruptible) | select | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
17 | 1.00 | iostat | Sleep (Interruptible) | pause | Sleep (Interruptible) | sys_pause | Sleep (Interruptible)
17 | 1.00 | ora_aqpc_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_cjq0_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_ckpt_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_cl00_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_clmn_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_d000_devcdb | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
17 | 1.00 | ora_dbrm_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_dbw0_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_dia0_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_diag_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_gen0_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_lg00_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_lg01_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_lgwr_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_lreg_devcdb | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
17 | 1.00 | ora_m000_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_m001_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_m003_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_m004_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_mman_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_mmnl_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_mmon_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_p001_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_p002_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_p003_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_pman_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_pmon_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_psp0_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_pxmn_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_q008_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_q00j_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_qm02_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_reco_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_s000_devcdb | Sleep (Interruptible) | epoll_wait | Sleep (Interruptible) | ep_poll | Sleep (Interruptible)
17 | 1.00 | ora_smco_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_smon_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_svcb_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_tmon_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_tt00_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_tt01_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_tt02_devcdb | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
17 | 1.00 | ora_vkrm_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_w000_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_w001_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_w002_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_w003_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_w004_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_w005_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_w006_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | ora_w007_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | oracledevcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
17 | 1.00 | sqlplus | Sleep (Interruptible) | read | Sleep (Interruptible) | sk_wait_data | Sleep (Interruptible)
17 | 1.00 | ssh | Sleep (Interruptible) | select | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
16 | 0.94 | ora_p000_devcdb | Sleep (Interruptible) | semtimedop | Sleep (Interruptible) | SYSC_semtimedop | Sleep (Interruptible)
16 | 0.94 | ora_vktm_devcdb | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
16 | 0.94 | top | Sleep (Interruptible) | pselect6 | Sleep (Interruptible) | poll_schedule_timeout | Sleep (Interruptible)
15 | 0.88 | /usr/bin/VBoxClient | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
14 | 0.82 | ora_cs01_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible)
11 | 0.65 | ora_cs00_devcdb | Disk (Uninterruptible) | pread64 | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible)
3 | 0.18 | ora_cs00_devcdb | Running (ON CPU) | pread64 | Running (ON CPU) | 0 | Running (ON CPU)
2 | 0.12 | ora_cs01_devcdb | Disk (Uninterruptible) | [running] | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible)
1 | 0.06 | /home/oracle/java/jdk1.8.0_201/bin/java | Sleep (Interruptible) | [running] | Sleep (Interruptible) | futex_wait_queue_me | Sleep (Interruptible)
1 | 0.06 | /usr/bin/VBoxClient | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.06 | /usr/bin/VBoxClient | Sleep (Interruptible) | nanosleep | Sleep (Interruptible) | 0 | Sleep (Interruptible)
1 | 0.06 | ora_cs00_devcdb | Disk (Uninterruptible) | [running] | Disk (Uninterruptible) | __lock_page_killable | Disk (Uninterruptible)
1 | 0.06 | ora_cs00_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.06 | ora_cs00_devcdb | Running (ON CPU) | pread64 | Running (ON CPU) | __lock_page_killable | Running (ON CPU)
1 | 0.06 | ora_cs01_devcdb | Running (ON CPU) | pread64 | Running (ON CPU) | __lock_page_killable | Running (ON CPU)
1 | 0.06 | ora_p000_devcdb | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU)
1 | 0.06 | ora_vktm_devcdb | Sleep (Interruptible) | [running] | Sleep (Interruptible) | hrtimer_nanosleep | Sleep (Interruptible)
1 | 0.06 | top | Running (ON CPU) | [running] | Running (ON CPU) | 0 | Running (ON CPU)
samples: 17 (expected: 100)
total processes: 147, threads: 348
runtime: 5.20, measure time: 5.10
Warning: 8568 /proc file accesses failed. Run as root or avoid restricted
proc-files like "syscall" or measure only your own processes
}}}
{{{
[oracle@localhost bin]$ ./psn
Linux Process Snapper v1.1.0 by Tanel Poder [https://0x.tools]
Sampling /proc/stat for 5 seconds... finished.
=== Active Threads =================================================
samples | avg_threads | comm | state
--------------------------------------------------------------------
28 | 1.56 | (ora_cs*_devcdb) | Disk (Uninterruptible)
8 | 0.44 | (ora_cs*_devcdb) | Running (ON CPU)
3 | 0.17 | (rcu_sched) | Running (ON CPU)
1 | 0.06 | (ora_vktm_devcdb) | Running (ON CPU)
1 | 0.06 | (rcuos/*) | Running (ON CPU)
1 | 0.06 | (rngd) | Running (ON CPU)
samples: 18 (expected: 100)
total processes: 402, threads: 852
runtime: 5.11, measure time: 5.05
[oracle@localhost bin]$ ./psn -h
usage: psn [-h] [-d seconds] [-p [pid]] [-t [tid]] [-r] [-a]
[--sample-hz SAMPLE_HZ] [--ps-hz PS_HZ] [-o filename] [-i filename]
[-s csv-columns] [-g csv-columns] [-G csv-columns] [--list]
optional arguments:
-h, --help show this help message and exit
-d seconds number of seconds to sample for
-p [pid], --pid [pid]
process id to sample (including all its threads), or
process name regex, or omit for system-wide sampling
-t [tid], --thread [tid]
thread/task id to sample (not implemented yet)
-r, --recursive also sample and report for descendant processes
-a, --all-states display threads in all states, including idle ones
--sample-hz SAMPLE_HZ
sample rate in Hz (default: 20)
--ps-hz PS_HZ sample rate of new processes in Hz (default: 2)
-o filename, --output-sample-db filename
path of sqlite3 database to persist samples to,
defaults to in-memory/transient
-i filename, --input-sample-db filename
path of sqlite3 database to read samples from instead
of actively sampling
-s csv-columns, --select csv-columns
additional columns to report
-g csv-columns, --group-by csv-columns
columns to aggregate by in reports
-G csv-columns, --append-group-by csv-columns
default + additional columns to aggregate by in
reports
--list list all available columns
}}}
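Based on the help text above, a couple of handy invocations (sketches):
{{{
# list every column available for -s/-g/-G
./psn --list

# keep the default comm,state grouping but append syscall and wchan to it
sudo ./psn -d 30 -G syscall,wchan
}}}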
! short_stack
{{{
[oracle@localhost performance]$ tail -f stacks2.txt
03/02/2022 15:11:40 | ksedsts()+426<-ksdxfstk()+58<-ksdxcb()+872<-sspuser()+200<-__sighandler()<-pread64()+19<-ksfd_skgfqio()+314<-ksfdgo()+489<-ksfdaio()+1656<-ksfdcsdoio()+6676<-ksfdcse()+11275<-kcfcse()+640<-ksvrdp_int()+1953<-opirip()+583<-opidrv()+581<-sou2o()+165<-opimai_real()+173<-ssthrdmain()+417<-main()+256<-__libc_start_main()+245
03/02/2022 15:11:42 | Oracle pid: 52, Unix process pid: 4962, image: oracle@localhost.localdomain (CS00)
03/02/2022 15:11:42 | Statement processed.
03/02/2022 15:11:42 | ksedsts()+426<-ksdxfstk()+58<-ksdxcb()+872<-sspuser()+200<-__sighandler()<-pread64()+19<-ksfd_skgfqio()+314<-ksfdgo()+489<-ksfdaio()+1656<-ksfdcsdoio()+6676<-ksfdcse()+11275<-kcfcse()+640<-ksvrdp_int()+1953<-opirip()+583<-opidrv()+581<-sou2o()+165<-opimai_real()+173<-ssthrdmain()+417<-main()+256<-__libc_start_main()+245
03/02/2022 15:11:43 | Oracle pid: 52, Unix process pid: 4962, image: oracle@localhost.localdomain (CS00)
03/02/2022 15:11:43 | Statement processed.
03/02/2022 15:11:43 | ksedsts()+426<-ksdxfstk()+58<-ksdxcb()+872<-sspuser()+200<-__sighandler()<-pread64()+19<-ksfd_skgfqio()+314<-ksfdgo()+489<-ksfdaio()+1656<-ksfdcsdoio()+6676<-ksfdcse()+11275<-kcfcse()+640<-ksvrdp_int()+1953<-opirip()+583<-opidrv()+581<-sou2o()+165<-opimai_real()+173<-ssthrdmain()+417<-main()+256<-__libc_start_main()+245
03/02/2022 15:11:45 | Oracle pid: 52, Unix process pid: 4962, image: oracle@localhost.localdomain (CS00)
03/02/2022 15:11:45 | Statement processed.
03/02/2022 15:11:45 | ksedsts()+426<-ksdxfstk()+58<-ksdxcb()+872<-sspuser()+200<-__sighandler()<-pread64()+19<-ksfd_skgfqio()+314<-ksfdgo()+489<-ksfdaio()+1656<-ksfdcsdoio()+6676<-ksfdcse()+11275<-kcfcse()+640<-ksvrdp_int()+1953<-opirip()+583<-opidrv()+581<-sou2o()+165<-opimai_real()+173<-ssthrdmain()+417<-main()+256<-__libc_start_main()+245
03/02/2022 15:11:46 | Oracle pid: 52, Unix process pid: 4962, image: oracle@localhost.localdomain (CS00)
03/02/2022 15:11:46 | Statement processed.
03/02/2022 15:11:46 | ksedsts()+426<-ksdxfstk()+58<-ksdxcb()+872<-sspuser()+200<-__sighandler()<-pread64()+19<-ksfd_skgfqio()+314<-ksfdgo()+489<-ksfdaio()+1656<-ksfdcsdoio()+6676<-ksfdcse()+11275<-kcfcse()+640<-ksvrdp_int()+1953<-opirip()+583<-opidrv()+581<-sou2o()+165<-opimai_real()+173<-ssthrdmain()+417<-main()+256<-__libc_start_main()+245
03/02/2022 15:11:48 | Oracle pid: 52, Unix process pid: 4962, image: oracle@localhost.localdomain (CS00)
03/02/2022 15:11:48 | Statement processed.
03/02/2022 15:11:48 | ksedsts()+426<-ksdxfstk()+58<-ksdxcb()+872<-sspuser()+200<-__sighandler()<-pread64()+19<-ksfd_skgfqio()+314<-ksfdgo()+489<-ksfdaio()+1656<-ksfdcsdoio()+6676<-ksfdcse()+11275<-kcfcse()+640<-ksvrdp_int()+1953<-opirip()+583<-opidrv()+581<-sou2o()+165<-opimai_real()+173<-ssthrdmain()+417<-main()+256<-__libc_start_main()+245
03/02/2022 15:11:50 | Oracle pid: 52, Unix process pid: 4962, image: oracle@localhost.localdomain (CS00)
03/02/2022 15:11:50 | Statement processed.
03/02/2022 15:11:50 | ksedsts()+426<-ksdxfstk()+58<-ksdxcb()+872<-sspuser()+200<-__sighandler()<-pread64()+19<-ksfd_skgfqio()+314<-ksfdgo()+489<-ksfdaio()+1656<-ksfdcsdoio()+6676<-ksfdcse()+11275<-kcfcse()+640<-ksvrdp_int()+1953<-opirip()+583<-opidrv()+581<-sou2o()+165<-opimai_real()+173<-ssthrdmain()+417<-main()+256<-__libc_start_main()+245
03/02/2022 15:11:51 | Oracle pid: 52, Unix process pid: 4962, image: oracle@localhost.localdomain (CS00)
03/02/2022 15:11:51 | Statement processed.
03/02/2022 15:11:51 | ksedsts()+426<-ksdxfstk()+58<-ksdxcb()+872<-sspuser()+200<-__sighandler()<-pread64()+19<-ksfd_skgfqio()+314<-ksfdgo()+489<-ksfdaio()+1656<-ksfdcsdoio()+6676<-ksfdcse()+11275<-kcfcse()+640<-ksvrdp_int()+1953<-opirip()+583<-opidrv()+581<-sou2o()+165<-opimai_real()+173<-ssthrdmain()+417<-main()+256<-__libc_start_main()+245
03/02/2022 15:11:53 | Oracle pid: 52, Unix process pid: 4962, image: oracle@localhost.localdomain (CS00)
03/02/2022 15:11:53 | Statement processed.
}}}
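The trace above looks like periodic oradebug short_stack dumps of the CS00 process (Oracle pid 52) prefixed with a timestamp; the "Oracle pid: ..." and "Statement processed." lines are oradebug's own confirmation output. A minimal sketch of one way to produce something similar (interval, pid, and file name are illustrative):
{{{
while true; do
  sqlplus -s "/ as sysdba" <<EOF | awk -v ts="$(date '+%m/%d/%Y %H:%M:%S')" 'NF { print ts " | " $0 }' >> stacks2.txt
oradebug setorapid 52
oradebug short_stack
exit
EOF
  sleep 2
done
}}}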
! end
Unindexed Foreign Keys https://hoopercharles.wordpress.com/2011/09/05/unindexed-foreign-keys-what-is-wrong-with-this-quote/
No Index on the Foreign Key http://docs.oracle.com/cd/B28359_01/server.111/b28318/data_int.htm#BABCAHDJ
Yes, you can make a unique index invisible and uniqueness is still enforced: invisibility only hides the index from the optimizer, DML still maintains the index structure.
{{{
SYS@emrep> connect scott/tiger
Connected.
SCOTT@emrep> select table_name from user_tables;
TABLE_NAME
------------------------------
DEPT
EMP
BONUS
SALGRADE
SCOTT@emrep> create dept2 as select * from dept;
create dept2 as select * from dept
*
ERROR at line 1:
ORA-00901: invalid CREATE command
SCOTT@emrep> create table dept2 as select * from dept;
Table created.
SCOTT@emrep>
SCOTT@emrep> select * from dept;
DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
SCOTT@emrep> select * from dept2;
DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
SCOTT@emrep>
SCOTT@emrep>
SCOTT@emrep> insert dept2 value (50,'it','dallas');
insert dept2 value (50,'it','dallas')
*
ERROR at line 1:
ORA-00925: missing INTO keyword
SCOTT@emrep> insert into dept2 values (50,'it','dallas');
1 row created.
SCOTT@emrep> select * from dept2;
DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
50 it dallas
SCOTT@emrep> rollback;
Rollback complete.
SCOTT@emrep> insert into dept2 values (40,'OPERATIONS','BOSTON');
1 row created.
SCOTT@emrep> select * from dept2;
DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
40 OPERATIONS BOSTON
SCOTT@emrep> rollback;
Rollback complete.
SCOTT@emrep> create unique index dept2_unq on dept2(deptno);
Index created.
SCOTT@emrep> insert into dept2 values (40,'OPERATIONS','BOSTON');
insert into dept2 values (40,'OPERATIONS','BOSTON')
*
ERROR at line 1:
ORA-00001: unique constraint (SCOTT.DEPT2_UNQ) violated
SCOTT@emrep> alter index dept2_unq invisible ;
Index altered.
SCOTT@emrep> insert into dept2 values (40,'OPERATIONS','BOSTON');
insert into dept2 values (40,'OPERATIONS','BOSTON')
*
ERROR at line 1:
ORA-00001: unique constraint (SCOTT.DEPT2_UNQ) violated
SCOTT@emrep> alter index dept2_unq visible online;
alter index dept2_unq visible online
*
ERROR at line 1:
ORA-14141: ALTER INDEX VISIBLE|INVISIBLE may not be combined with other operations
SCOTT@emrep> alter index dept2_unq visible'
2
SCOTT@emrep> alter index dept2_unq visible;
Index altered.
SCOTT@emrep> alter index dept2_unq invisible online;
alter index dept2_unq invisible online
*
ERROR at line 1:
ORA-14141: ALTER INDEX VISIBLE|INVISIBLE may not be combined with other operations
SCOTT@emrep> drop index dept2_unq;
Index dropped.
}}}
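You can confirm the index state from the dictionary at any point in the test above:
{{{
-- the VISIBILITY column of USER_INDEXES shows VISIBLE/INVISIBLE
select index_name, uniqueness, visibility
  from user_indexes
 where table_name = 'DEPT2';
}}}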
https://www.extendoffice.com/documents/excel/1139-excel-unmerge-cells-and-fill.html
{{{
-- unplug to XML
15:25:33 SYS@orcl> select * from v$pdbs;
CON_ID DBID CON_UID GUID NAME OPEN_MODE RES OPEN_TIME CREATE_SCN TOTAL_SIZE
---------- ---------- ---------- -------------------------------- ------------------------------ ---------- --- --------------------------------------------------------------------------- ---------- ----------
2 4080030308 4080030308 F081641BB43F0F7DE045000000000001 PDB$SEED READ ONLY NO 23-DEC-14 02.43.56.324 PM 1720754 283115520
3 3345156736 3345156736 F0832BAF14721281E045000000000001 PDB1 READ WRITE NO 23-DEC-14 02.44.09.357 PM 2244252 291045376
4 3933321700 3933321700 0AE814AAEDB059CCE055000000000001 PDB2 READ ONLY NO 23-DEC-14 03.09.38.545 PM 6761680 288358400
5 3983095293 3983095293 0AE8BDDF05120D9EE055000000000001 PDB3 READ WRITE NO 23-DEC-14 03.12.39.376 PM 6787435 288358400
15:41:39 SYS@orcl> ALTER PLUGGABLE DATABASE pdb2 close;
Pluggable database altered.
15:41:51 SYS@orcl> ALTER PLUGGABLE DATABASE pdb2 unplug into '/tmp/pdb2.xml';
Pluggable database altered.
15:41:57 SYS@orcl>
15:41:58 SYS@orcl> select * from v$pdbs;
CON_ID DBID CON_UID GUID NAME OPEN_MODE RES OPEN_TIME CREATE_SCN TOTAL_SIZE
---------- ---------- ---------- -------------------------------- ------------------------------ ---------- --- --------------------------------------------------------------------------- ---------- ----------
2 4080030308 4080030308 F081641BB43F0F7DE045000000000001 PDB$SEED READ ONLY NO 23-DEC-14 02.43.56.324 PM 1720754 283115520
3 3345156736 3345156736 F0832BAF14721281E045000000000001 PDB1 READ WRITE NO 23-DEC-14 02.44.09.357 PM 2244252 291045376
4 3933321700 3933321700 0AE814AAEDB059CCE055000000000001 PDB2 MOUNTED 23-DEC-14 03.41.57.513 PM 6761680 0
5 3983095293 3983095293 0AE8BDDF05120D9EE055000000000001 PDB3 READ WRITE NO 23-DEC-14 03.12.39.376 PM 6787435 288358400
15:42:01 SYS@orcl> select pdb_name , status from CDB_PDBS;
PDB_NAME STATUS
-------------------------------------------------------------------------------------------------------------------------------- -------------
PDB2 UNPLUGGED
PDB$SEED NORMAL
PDB1 NORMAL
PDB3 NORMAL
}}}
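The reverse direction, plugging the manifest back in, would look something like this (a sketch; NOCOPY assumes the datafiles are still in place on the same host):
{{{
-- drop the unplugged PDB but keep its datafiles on disk
DROP PLUGGABLE DATABASE pdb2 KEEP DATAFILES;

-- optionally validate the XML manifest first
SET SERVEROUTPUT ON
BEGIN
  IF DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/tmp/pdb2.xml') THEN
    DBMS_OUTPUT.PUT_LINE('compatible');
  ELSE
    DBMS_OUTPUT.PUT_LINE('check PDB_PLUG_IN_VIOLATIONS');
  END IF;
END;
/

-- plug it back in reusing the existing files, then open it
CREATE PLUGGABLE DATABASE pdb2 USING '/tmp/pdb2.xml' NOCOPY TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE pdb2 OPEN;
}}}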
Why do indexes become unusable?
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:1859798300346695894
{{{
bunzip2 *bz2
}}}
{{{
gunzip -vf *.gz
}}}
{{{
for i in *.zip; do unzip $i ; done
}}}
{{{
for i in *.tar; do tar xf $i; echo $i; done
}}}
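Same idea for compressed tarballs:
{{{
for i in *.tar.gz; do tar xzf "$i"; echo "$i"; done
}}}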
http://yoember.com/emberjs/how-to-keep-your-ember-js-project-up-to-date/
http://www.thatjeffsmith.com/archive/2017/02/real-time-sql-monitoring-in-oracle-sql-developer-v4-2/
http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/sqldev-relnotes-42-3215764.html
<<<
Updated Real Time SQL Monitoring viewer
<<<
{{{
$ cat usercheck.sql
set lines 32767
col terminal format a4
col machine format a4
col os_login format a4
col oracle_login format a4
col osuser format a4
col module format a5
col program format a8
col schemaname format a5
-- col state format a8
col client_info format a5
col status format a4
col sid format 99999
col serial# format 99999
col unix_pid format a8
col txt format a50
col action format a8
select /* usercheck */ s.INST_ID, s.terminal terminal, s.machine machine, p.username os_login, s.username oracle_login, s.osuser osuser, s.module, s.action, s.program, s.schemaname,
s.state,
s.client_info, s.status status, s.sid sid, s.serial# serial#, lpad(p.spid,7) unix_pid, -- s.sql_hash_value,
sa.plan_hash_value, -- remove in 817, 9i
s.sql_id, -- remove in 817, 9i
substr(sa.sql_text,1,1000) txt
from gv$process p, gv$session s, gv$sqlarea sa
where p.addr=s.paddr
and s.username is not null
and s.sql_address=sa.address(+)
and s.sql_hash_value=sa.hash_value(+)
and sa.sql_text NOT LIKE '%usercheck%'
-- and lower(sa.sql_text) LIKE '%grant%'
-- and s.username = 'APAC'
-- and s.schemaname = 'SYSADM'
-- and lower(s.program) like '%uscdcmta21%'
and s.sid=&SID
-- and p.spid = 14967
-- and s.sql_hash_value = 3963449097
-- and s.sql_id = '5p6a4cpc38qg3'
-- and lower(s.client_info) like '%10036368%'
-- and s.module like 'PSNVS%'
-- and s.program like 'PSNVS%'
order by status desc;
}}}
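Usage sketch: grab the SID you are chasing (or your own) and feed it to the &SID prompt:
{{{
-- your own session's SID
select sid from v$mystat where rownum = 1;

@usercheck
}}}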
https://docs.oracle.com/cd/E87041_01/PDF/MWM_BI_DataMap_2.7.0.pdf
https://docs.oracle.com/cd/E87041_01/PDF/MDM_BI_DataMap_2.7.0.pdf
https://docs.oracle.com/cd/E87041_01/PDF/CCB_BI_DataMap_2.7.0.0.12.pdf
...
https://www.quora.com/Which-programming-languages-should-UI-designers-learn
https://www.quora.com/What-are-the-essential-skills-for-a-UX-designer
http://programmers.stackexchange.com/questions/40590/why-do-some-programmers-hate-the-ui-part-of-the-development/40597
http://blog.careerfoundry.com/ui-design/the-difference-between-ux-and-ui-design-a-laymans-guide/
http://www.webcoursesbangkok.com/blog/what-is-the-most-popular-ui-and-ux-design-program-of-today/
http://www.itcareerfinder.com/it-careers/user-interface-developer.html
http://www.uxmatters.com/mt/archives/2010/03/ux-design-versus-ui-development.php
http://www.webdesignerdepot.com/2012/06/ui-vs-ux-whats-the-difference/
! v2 with smart scan
{{{
# exadata v2 with smart scan
processor : 15
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU E5540 @ 2.53GHz
stepping : 5
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 1
siblings : 8
core id : 3
cpu cores : 4
apicid : 23
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx rdtscp lm constant_tsc ida nonstop_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips : 5054.01
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]
OraPub CPU speed statistic is 300.186
Other statistics: stdev=7.388 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=3616.667)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090782 0 378
1085106 0 359
1085106 0 358
1085106 0 354
1085106 0 380
1085106 0 359
1085106 0 360
7 rows selected.
Linux enkdb01.enkitec.com 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb01.enkitec.com dbm1 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 16
OraPub CPU speed statistic is 297.126
Other statistics: stdev=6.165 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=3653.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090782 0 368
1085106 0 372
1085106 0 377
1085106 0 361
1085106 0 359
1085106 0 358
1085106 0 365
7 rows selected.
Linux enkdb01.enkitec.com 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb01.enkitec.com dbm1 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 16
OraPub CPU speed statistic is 290.974
Other statistics: stdev=4.617 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=3730)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090782 0 378
1085106 0 371
1085106 0 364
1085106 0 371
1085106 0 373
1085106 0 379
1085106 0 380
7 rows selected.
Linux enkdb01.enkitec.com 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb01.enkitec.com dbm1 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 16
OraPub CPU speed statistic is 282.72
Other statistics: stdev=2.458 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=3838.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090782 0 394
1085106 0 382
1085106 0 387
1085106 0 385
1085106 0 378
1085106 0 385
1085106 0 386
7 rows selected.
Linux enkdb01.enkitec.com 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb01.enkitec.com dbm1 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 16
OraPub CPU speed statistic is 268.07
Other statistics: stdev=14.73 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=4058.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090782 0 393
1085106 0 402
1085106 0 386
1085106 0 436
1085106 0 390
1085106 0 388
1085106 0 433
7 rows selected.
Linux enkdb01.enkitec.com 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb01.enkitec.com dbm1 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 16
OraPub CPU speed statistic is 272.303
Other statistics: stdev=18.025 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=4000)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090782 0 401
1085106 0 382
1085106 0 438
1085106 0 430
1085106 0 377
1085106 0 377
1085106 0 396
7 rows selected.
Linux enkdb01.enkitec.com 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb01.enkitec.com dbm1 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 16
OraPub CPU speed statistic is 271.843
Other statistics: stdev=.278 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=3991.667)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090782 0 403
1085106 0 399
1085106 0 399
1085106 0 399
1085106 0 400
1085106 0 399
1085106 0 399
7 rows selected.
Linux enkdb01.enkitec.com 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb01.enkitec.com dbm1 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 16
}}}
! v2 without smart scan
{{{
# exadata v2 without smart scan
OraPub CPU speed statistic is 270.166
Other statistics: stdev=2.217 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=4016.667)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090782 0 408
1085106 0 401
1085106 0 401
1085106 0 401
1085106 0 401
1085106 0 398
1085106 0 408
7 rows selected.
Linux enkdb01.enkitec.com 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb01.enkitec.com dbm1 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 16
OraPub CPU speed statistic is 264.905
Other statistics: stdev=6.486 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=4098.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090782 0 422
1085106 0 407
1085106 0 405
1085106 0 407
1085106 0 404
1085106 0 405
1085106 0 431
7 rows selected.
Linux enkdb01.enkitec.com 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb01.enkitec.com dbm1 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 16
OraPub CPU speed statistic is 263.92
Other statistics: stdev=1.846 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=4111.667)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090782 0 417
1085106 0 414
1085106 0 413
1085106 0 412
1085106 0 412
1085106 0 406
1085106 0 410
7 rows selected.
Linux enkdb01.enkitec.com 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb01.enkitec.com dbm1 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 16
OraPub CPU speed statistic is 258.453
Other statistics: stdev=9.357 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=4203.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090782 0 450
1085106 0 414
1085106 0 416
1085106 0 453
1085106 0 410
1085106 0 414
1085106 0 415
7 rows selected.
Linux enkdb01.enkitec.com 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb01.enkitec.com dbm1 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 16
OraPub CPU speed statistic is 269.708
Other statistics: stdev=1.251 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=4023.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1090782 0 411
1085106 0 400
1085106 0 403
1085106 0 400
1085106 0 404
1085106 0 403
1085106 0 404
7 rows selected.
Linux enkdb01.enkitec.com 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb01.enkitec.com dbm1 11.2.0.3.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 16
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 16
}}}
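As a sanity check on how the numbers above relate (my reading, not necessarily OraPub's exact formula): the CPU speed statistic is roughly LIOs per millisecond, i.e. avg lios/test divided by avg time(ms)/test:
{{{
# first run above: comes out close to the reported 300.186
echo 'scale=3; 1085106 / 3616.667' | bc
}}}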
I added the X3 numbers on the charts below as a continuation of the research [[cores vs threads, v2 vs x2]]. It still shows the same behavior and trend.
[img[ http://i.imgur.com/cxy2czW.png ]]
[img[ http://i.imgur.com/2mTXUYe.png ]]
[img[ http://i.imgur.com/4FzkiNU.png ]]
[img[ http://i.imgur.com/JL1eOdY.png ]]
It seems like they turned off the turbo boost on the 1/8th rack.. check out the GHz vs TSC columns on the image..
If the GHz column reads higher than TSC, the CPU is clocking above its base frequency, meaning turbo boost is kicking in.
https://www.evernote.com/shard/s48/sh/d72b073f-1018-49ea-a54d-fe71d6ad925a/9da87c97f9e67ef843085eeb1496830d
benchmark used is the cputoolkit http://karlarao.wordpress.com/scripts-resources/
All instrumentation done using turbostat.c ... check out the tiddler here for the installation [[intel turbostat.c, turbo boost]]
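A quick way to eyeball the GHz vs TSC comparison (a sketch; column names vary a bit between turbostat versions):
{{{
# sample all CPUs for 5 seconds; a GHz (busy clock) reading higher than
# TSC (base clock) means turbo boost is active
./turbostat sleep 5
}}}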
the out-of-the-box 1/8th rack:
{{{
-- compute nodes
[root@enkx3db01 oracle]# sh cpu_topology
Product Name: SUN FIRE X4170 M3
Product Name: ASSY,MOTHERBOARD,1U
model name : Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
processors (OS CPU count) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
physical id (processor socket) 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
siblings (logical CPUs/socket) 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
core id (# assigned to a core) 0 1 6 7 0 1 6 7 0 1 6 7 0 1 6 7
cpu cores (physical cores/socket) 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
-- cells
[root@enkx3cel01 ~]# sh cpu_topology
Product Name: SUN FIRE X4270 M3
Product Name: ASSY,MOTHERBOARD,2U
model name : Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz
processors (OS CPU count) 0 4 5 6 10 11 12 16 17 18 22 23
physical id (processor socket) 0 0 0 1 1 1 0 0 0 1 1 1
siblings (logical CPUs/socket) 6 6 6 6 6 6 6 6 6 6 6 6
core id (# assigned to a core) 0 4 5 0 4 5 0 4 5 0 4 5
cpu cores (physical cores/socket) 3 3 3 3 3 3 3 3 3 3 3 3
}}}
the cells also have turbo boost disabled
{{{
[root@enkx3cel01 ~]# less biosconfig.xml| grep -i turbo
<!-- Turbo Mode -->
<!-- Description: Turbo Mode. -->
<Turbo_Mode>Disabled</Turbo_Mode>
}}}
! resourcecontrol, turning on ALL
{{{
-- compute nodes
[root@enkx3db01 ~]# sh cpu_topology
Product Name: SUN FIRE X4170 M3
Product Name: ASSY,MOTHERBOARD,1U
model name : Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
processors (OS CPU count) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
physical id (processor socket) 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
siblings (logical CPUs/socket) 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16
core id (# assigned to a core) 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7
cpu cores (physical cores/socket) 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
-- cells
[root@enkx3cel01 ~]# sh cpu_topology
Product Name: SUN FIRE X4270 M3
Product Name: ASSY,MOTHERBOARD,2U
model name : Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz
processors (OS CPU count) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
physical id (processor socket) 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1
siblings (logical CPUs/socket) 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
core id (# assigned to a core) 0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5
cpu cores (physical cores/socket) 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
}}}
! References
http://www.anandtech.com/show/3922/intels-sandy-bridge-architecture-exposed/4
http://ark.intel.com/products/64596/Intel-Xeon-Processor-E5-2690-20M-Cache-2_90-GHz-8_00-GTs-Intel-QPI
on die images http://goo.gl/xQR06
http://afatkulin.blogspot.ca/2010/06/11gr2-result-cache-scalability.html
Alex's test
{{{
[root@enkx3db02 tmp]# nohup time ./test.sh
[root@enkx3db02 tmp]# cat test.sh
for i in {1..1000}
do
dd if=/dev/zero bs=1024k of=/tmp/test.rnd count=15 > /dev/null
done
}}}
* IE 11 will be used by Junos VPN
* disable protected mode
** https://support.edmentum.com/4_General_and_Technical_Solutions/How_to_disable_Protected_Mode_in_Internet_Explorer
[img[ http://i.imgur.com/Eko8QIG.png ]]
http://sourceforge.net/projects/vdbench/
http://www.cuddletech.com/blog/pivot/entry.php?id=1111
https://www.informatica.com/products/data-integration.html#fbid=MrFAot2R6w0
https://www.slideshare.net/InformaticaMarketplace/streaming-real-time-data-with-vibe-data-stream
https://www.rtinsights.com/real-time-data-integration-informatica/
''How To Edit GoPro Hero HD Video On An iPad 2''
http://www.youtube.com/watch?v=B1Bx_x9K9SA
<<<
*Thanks for posting this. As a GoPro + iPad video noob, I found it helpful. It's worth noting to make sure that Location Services are turned on for iMovie, or the app won't read the videos in your library. To enable Location Services, go to the home screen / Settings (gear icon) / Location Services / make sure iMovie is set to "ON"
<<<
http://nofilmschool.com/2011/03/apple-brings-video-editing-ipad-2-touch-based/
http://filmmakeriq.com/2010/07/22-filmmaking-apps-for-the-ipad-iphone/
http://www.frys.com/product/6403832?site=sr:SEARCH:MAIN_RSLT_PG
http://www.viemu.com/a_vi_vim_graphical_cheat_sheet_tutorial.html
http://www.viemu.com/vi-vim-cheat-sheet.gif
https://www.cyberciti.biz/faq/vim-keyboard-short-cuts/
{{{
Go to the first line          gg  or  1G  or  :0 (zero) + Enter
Go to the last line           G   or  :$ + Enter
Scroll down one screen        Ctrl-f
Scroll up one screen          Ctrl-b
First position on line        0 (zero)
Last position on line         $
}}}
http://learnwithme11g.wordpress.com/tag/virtual-column-based-partitioning/
http://dba.stackexchange.com/questions/16209/making-the-oracle-optimizer-use-a-virtual-column-to-find-out-about-a-partition
http://goo.gl/HUPQ0
the book https://vladmihalcea.teachable.com/courses/enrolled/348278
https://vladmihalcea.com/tutorials/sql/
https://vladmihalcea.com/tutorials/hibernate/
https://vladmihalcea.com/tutorials/spring/
Oracle Databases on VMware Best Practices Guide http://www.vmware.com/files/pdf/partners/oracle/Oracle_Databases_on_VMware_-_Best_Practices_Guide.pdf
Virtualizing Oracle Database 10g/11g on VMware Infrastructure http://www.vmware.com/files/pdf/partners/oracle/vmw-oracle-virtualizing-oracle-db10g11g-vmware-on-infrastructure.pdf
DBA Guide to Databases on VMware http://www.vmware.com/files/pdf/solutions/DBA_Guide_to_Databases_on_VMware-WP.pdf
Oracle database consolidation and perf on cisco UCS http://www.slideshare.net/Ciscodatacenter/oracle-database-consolidation-and-performance-on-cisco-ucs
Oracle Database Licensing in a VMware Virtual Environment http://blogs.flexerasoftware.com/elo/2012/10/oracle-database-licensing-in-a-vmware-virtual-environment-part-3-of-3.html
Virtualized Oracle Database Study Using Cisco Unified Computing System and EMC Unified Storage http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns955/ns967/ucs_tr100026_emc_wp.pdf
Virtualizing Oracle: Oracle RAC on VMware vSphere 4 http://www.cognizant.com/InsightsWhitepapers/Virtualizing-Oracle-Oracle-RAC-on-Vmware-vSphere-4.pdf
VMware for server consolidation and virtualization - Solutions components list http://h71019.www7.hp.com/ActiveAnswers/cache/71088-0-0-225-121.html
NetApp for Oracle Database - WhitePaper http://goo.gl/s7KFR7
https://blogs.vmware.com/virtualblocks/2019/01/25/introducing-elastic-vsan/
vSAN Ready Nodes are pre-validated server configurations, i.e. the hardware is included
https://www.vmware.com/resources/compatibility/pdf/vi_vsan_guide.pdf
https://storagehub.vmware.com/t/vmware-vsan/oracle-database-12c-on-vmware-virtual-san-6-2-all-flash/
https://blogs.vmware.com/apps/2016/08/oracle-12c-oltp-dss-workloads-flash-virtual-san-6-2.html
https://kevinclosson.net/2017/02/10/slob-use-cases-by-industry-vendors-learn-slob-speak-the-experts-language/
https://microage.com/blog/hci-comparison-vmware-vsan-cisco-hyperflex/
Dell EMC vSAN Ready Nodes Overview https://www.youtube.com/watch?v=4BjHkTtonBM
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/solutions/oracle/understanding_oracle_certification_support_licensing_vmware_environments-white-paper.pdf
Oracle on HCI Steroids: A Complete End-to-End Solution for BusinessCritical Oracle Workloads on VMware vSAN VMworld 2017
https://cms.vmworldonline.com/event_data/5/session_notes/STO1167BU.pdf
Oracle Real Application Clusters on VMware vSAN https://www.youtube.com/watch?v=aWNbn2radiQ
https://code.visualstudio.com/docs?start=true
https://code.visualstudio.com/docs/getstarted/introvideos
! commands
<<<
ctrl + alt + n <- run the script (or only the highlighted selection)
<<<
! vscode plsql
https://github.com/OraOpenSource/OXAR/blob/master/docs/connect_oracle.md
https://code.visualstudio.com/docs/editor/tasks
https://ora-00001.blogspot.com/2017/03/using-vs-code-for-plsql-development.html
https://ruepprich.wordpress.com/2017/03/27/compile-plsql-with-vs-code-using-ssh/
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/dotnet/debugging/Debugging.htm
! vscode unsaved backup location
https://superuser.com/questions/1225368/visual-studio-code-unsaved-files-location
you need to convert the P1/address number to hex
{{{
WAIT #3: nam='latch free' ela= 3749 address=15709454624 number=202 tries=1
obj#=-1 tim=5650189627
SQL> select to_char(15709454624, 'XXXXXXXXXXXXXXXX') addr_hex from dual;
ADDR_HEX
-----------------
        3A85B4120
SQL> @la 3A85B4120
ADDR                 LATCH#   CHLD NAME                  GETS
---------------- ---------- ------ --------------- ----------
00000003A85B4120        202      2 kks stats           494562
}}}
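The @la script itself is not shown here, but a hypothetical equivalent lookup straight against v$latch_children (matching on the trailing hex digits avoids having to left-pad the address):
{{{
select rawtohex(addr) addr, latch#, child# chld, name, gets
  from v$latch_children
 where rawtohex(addr) like '%3A85B4120';
}}}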
http://blog.tanelpoder.com/2008/06/21/another-use-case-for-waitprof-diagnosing-events-in-waitclass-other/
! wash
http://datavirtualizer.com/web-ash-w-ash/
wash video demo http://www.screencast.com/t/sZrFxZkTrmn
! visash
https://github.com/pioro/visash
http://oracleprof.blogspot.com/2013/10/orasash-visualization.html
https://oracle-base.com/articles/10g/dbms_epg_10gR2
! backend/data - orasash
https://github.com/pioro/orasash
https://blog.pythian.com/trying-out-s-ash/
https://www.doag.org/formes/pubfiles/6435229/2014-null-Ernst_Leber-OraSASH_everybodies_ASH___-Praesentation.pdf
! topaas
http://oracleprof.blogspot.com/search?q=topaas.sql
https://github.com/pioro/topaas
https://www.doag.org/formes/pubfiles/6436045/2014-null-Marcin_Przepiorowski-Performance_monitoring_in_SQL_Plus_using_AWR_and_analytic_functions-Praesentation.pdf
! d3 on ZFS
https://blogs.oracle.com/pascal/data-visualization-with-d3
https://frontendcharts.com/highcharts-or-d3/
https://www.slant.co/versus/10577/11579/~d3-js_vs_highcharts
https://www.quora.com/Is-D3-js-or-Highchart-js-better
! objectlab
https://www.meetup.com/NYC-D3-JS/events/215158142/
https://github.com/objectlab/TimeseriesVisualization
shown on slide #26
http://www.slideshare.net/karlarao/o-2013-ultimate-exadata-io-monitoring-flash-harddisk-write-back-cache-overhead
OEM12c Exadata IO now has Max Cell Disk Limit, which I presented @OakTableWorld13 http://goo.gl/pIUnlj using BIP :)
https://twitter.com/karlarao/status/448524553661059072
<<<
Kevin @kevinclosson Mar 26
@karlarao kudos Karl... you're the first Exadata type that seems to care about destage. Too many faithful of the ExaMagic(tm)
Karl Arao @karlarao Mar 26
@kevinclosson part1: yeah man, I have some investigations about it, it's 30% IOPS overhead from the total writes side of things.
Karl Arao @karlarao Mar 26
@kevinclosson part2: meaning if you have 0:100 read:write ratio that does 100K write IOPS you'll see 130K IOPS on Flash. I say total writes
Karl Arao @karlarao Mar 26
@kevinclosson part3: because if you have database A doing 10K write IOPS and database B doing 90K write IOPS, you'll see that 30K IOPS
Karl Arao @karlarao Mar 26
@kevinclosson part4: overhead as "OTHER_DATABASE" which is bigger than the IOPS of database A.. but still you'll get that more IOPS capacity
Karl Arao @karlarao Mar 26
@kevinclosson part5: from the Write Back Flash Cache you'll just have to be aware of the behavior and overhead.. whew,that was a long tweet!
Karl Arao @karlarao Mar 26
@kevinclosson part6: oh btw, it's the destage (hard disk) and flash metadata IOs (flash) overhead, here's WBFC patent http://goo.gl/2WCmw
Kevin @kevinclosson Mar 26
@karlarao Oracle sales are trying to tell customers that destage doesn't happen. Lying without being called a liar is a useful skill
Kevin @kevinclosson Mar 26
@karlarao btw, it's highly unlikely everything in that patent is in the actual product. Patents cover a lot more than implementation usually
Karl Arao @karlarao Mar 26
@kevinclosson that's why when we do sizings we size w/ ASM high redundancy & factor in +30% on writes & 50:50 R:W ratio to be conservative
Kevin @kevinclosson Mar 26
@karlarao but I'm not saying they are lying. I could have ended that tweet with kicking the dog and starving the children #nonsequiter
Kevin @kevinclosson Mar 26
.@karlarao you folks ~ONLY honest people in the #Exabeta ecosystem. All others ignore triple mirror and repeat lies about online patching
Kevin @kevinclosson Mar 26
@karlarao so does the 30% cut into the datasheet flash IOPS number of 1,680,000 ?
Karl Arao @karlarao Mar 26
@kevinclosson part1: so let's say on X4 full rack, on normal redun. it says 1960000 write IOPS, so with 30% 588000 write IOPS overhead, you
Karl Arao @karlarao Mar 26
@kevinclosson part2: can really only do 1372000 workload write IOPS on the database
Karl Arao @karlarao Mar 26
@kevinclosson argh correction, I mean 1372000 hardware write IOPS, because I was measuring it from cells
Kevin @kevinclosson Mar 26
@karlarao and the more sane HIGH REDUNDANCY shaves off of that further I take it?
Karl Arao @karlarao Mar 26
@kevinclosson that is correct ;)
<<<
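A minimal sketch of the arithmetic in that thread (numbers straight from the tweets; the 30% destage/flash-metadata overhead is measured at the cells):
{{{
# workload side: 100K write IOPS from the database shows up as ~130K on flash
workload_writes = 100000
print(workload_writes * 1.30)        # 130000.0

# sizing side: X4 full rack datasheet, normal redundancy
datasheet_writes = 1960000
overhead = datasheet_writes * 0.30   # 588000 IOPS of destage/metadata overhead
print(datasheet_writes - overhead)   # 1372000 hardware write IOPS left for the workload
}}}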
Fusion Middleware Administering JDBC Data Sources for Oracle WebLogic Server 12.1.3
https://docs.oracle.com/middleware/1213/wls/JDBCA/ds_tuning.htm#JDBCA490
{{{
Difference Between <SQL ISVALID> And <SQL SELECT 1 FROM DUAL> When Datasource Is Testing DB Connection (Doc ID 2236149.1)
Applies to:
Oracle WebLogic Server - Version 12.1.3.0.0 and later
Information in this document applies to any platform.
Goal
To test database connection, [SQL SELECT 1 FROM DUAL] is used before WLS 12.1.2 and [SQL ISVALID] is used from WLS 12.1.3.
What is the difference between them?
Solution
The [SELECT 1 FROM DUAL] test checks the connection with a full query: connecting, parsing the SQL, executing it, and returning the result.
The [ISVALID] test just checks that the remote Database process is still connected via the network socket, without sending anything that has to go through Database functionality such as SQL parsing.
[ISVALID] can improve performance; however, it cannot detect problems correctly in a few cases, for example:
When there is a problem in the parser, execution, or another function (other than the connection) on the Database side.
When the Database is in shutdown mode but has not yet disconnected.
}}}
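For comparison, here is what the two test styles look like from a client; a minimal sketch using python-oracledb (user/password/DSN are placeholders, not from the note above):
{{{
import oracledb   # assumption: python-oracledb is installed

conn = oracledb.connect(user="scott", password="tiger",
                        dsn="dbhost/orclpdb")   # hypothetical connect string

# lightweight liveness check, analogous to JDBC isValid(): a protocol-level
# round trip that never reaches the SQL layer
conn.ping()

# full check, analogous to SELECT 1 FROM DUAL: parse + execute + fetch
cur = conn.cursor()
cur.execute("select 1 from dual")
print(cur.fetchone())   # (1,)
}}}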
https://stackoverflow.com/questions/3668506/efficient-sql-test-query-or-validation-query-that-will-work-across-all-or-most
<<<
17 Tuning Data Source Connection Pools
This chapter provides information on how to properly tune the connection pool attributes in JDBC data sources in your WebLogic Server 12.1.3 domain to improve application and system performance.
This chapter includes the following sections:
Increasing Performance with the Statement Cache
Connection Testing Options for a Data Source
Enabling Connection Creation Retries
Enabling Connection Requests to Wait for a Connection
Automatically Recovering Leaked Connections
Avoiding Server Lockup with the Correct Number of Connections
Limiting Statement Processing Time with Statement Timeout
Using Pinned-To-Thread Property to Increase Performance
Using Unwrapped Data Type Objects
Tuning Maintenance Timers
<<<
<<showtoc>>
dracula theme in webstorm and sublime
http://stackoverflow.com/questions/13504594/how-to-make-phpstorm-intellij-idea-dark-whole-ide-not-just-color-scheme
http://www.reddit.com/r/webdev/comments/1i9bhd/jetbrains_webstorm_or_sublimetext/
gitignore.io https://www.gitignore.io/ <- Create useful .gitignore files for your project, there's also a webstorm plugin for this
! commands
<<<
ctrl + shift + e <- to run the JS
alt + shift + e <- run only the selection http://stackoverflow.com/questions/23441657/pycharm-run-only-part-of-my-python-file
shift + shift <- to switch to the last window
<<<
! split window
https://www.jetbrains.com/ruby/help/splitting-and-unsplitting-editor-window.html
{{{
SELECT *
FROM
(SELECT *
FROM HR.SQL_LOADER_DEMO_SIMPLE) a,
(SELECT *
FROM HR.SQL_LOADER_DEMO_SIMPLE) b,
(SELECT *
FROM HR.SQL_LOADER_DEMO_SIMPLE) c
}}}
! user ids
[img(80%,80%)[https://i.imgur.com/0uzhxCg.png]]
! started with user id 79
[img(80%,80%)[https://i.imgur.com/cjAfSVr.png]]
[img(80%,80%)[https://i.imgur.com/FyGsIwK.png]]
! then another user executes the SQL, the parsing schema did not change
[img(80%,80%)[https://i.imgur.com/M0pANHk.png]]
[img(80%,80%)[https://i.imgur.com/zuZY7Ke.png]]
! even if SYS executes the SQL, the parsing schema stays the same. It only changes when the cursor (or the whole shared pool) is flushed and another user then executes the same SQL
[img(80%,80%)[https://i.imgur.com/xeSX1KA.png]]
[img(80%,80%)[https://i.imgur.com/cgT5N6u.png]]
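You can verify the same thing without screenshots by querying v$sql directly; a minimal sketch (connect details and sql_id are placeholders):
{{{
import oracledb   # assumption: python-oracledb, connecting as a privileged user

conn = oracledb.connect(user="system", password="oracle", dsn="dbhost/orclpdb")
cur = conn.cursor()
cur.execute("""
    select sql_id, parsing_schema_name, parsing_user_id, executions
    from v$sql
    where sql_id = :1""", ["7x2fdzr9kqhtu"])    # hypothetical sql_id
for row in cur:
    # parsing_schema_name stays at the first parsing user until the
    # cursor is flushed and re-parsed
    print(row)
}}}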
I experienced this, and the bottleneck is what's described below. Moving away from a map join improved my query from 2 hours to 3 minutes.
https://github.com/t3rmin4t0r/notes/wiki/Why-would-a-shuffle-join-be-faster-than-a-map-join%3F
{{{
Why would a shuffle join be faster than a map join in Hive?
Scale is not an easy thing to measure. The process of scaling "up" a measurement is like measuring the tides in a swimming pool - significant things at scale show up below the baselines.
Let's take a simple 100:1 join condition, with no compression in the mix, on a single rack of 40 nodes with full bandwidth available & perfect scheduling for all the tasks.
That's a 100Gb x 1Gb join - according to common sense, this is a trivial scenario, because you can broadcast the 1Gb to every node and get only 39Gb moving over the wire for this to execute, while shuffling 101Gb would be too much.
But the regular broadcast fails entirely if not every listener is active at the time of the broadcast - a pure broadcast model can't work unless all the consumers are already up. That does not work for Hive, since holding up all tasks till all of them are up is wasted CPU across the cluster - the answer obviously is for each node to download the file on its own time-frame.
This means that the node generating the 1Gb chunk has to transmit this data outbound to all 40 nodes in the rack. The minimum time for this operation is 39 seconds, at the perfect ~1.2 GBytes/s bandwidth that one node has outbound.
However, the 101Gb shuffle looks slightly different, because there are 40 GBytes/s of outbound bandwidth available at that time.
Even given that it might halve the bandwidth in some cases, a 101Gb cross-shuffle only takes 5 seconds at minimum.
When the costs of compressing/serializing the 100Gb are far lower than the shared network cost, it becomes clear that for a 100:1 join ratio, on a big enough network the shuffle join beats the broadcast join by a significant margin.
Therefore, the shuffle-join is paradoxically better performing in a weaker network, than a broadcast join, in this scenario.
There are ways around the broadcast-join bottleneck, but nearly all of them involve additional overheads for the simpler cases.
}}}
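A minimal sketch of the bandwidth math in that write-up, using its own numbers (1GB small side, 100GB big side, 40 nodes, ~1.2 GBytes/s outbound per node):
{{{
nodes = 40
node_bw = 1.2                  # GBytes/s outbound per node
small, big = 1.0, 100.0        # GB

# broadcast: one node pushes the 1GB copy to each of the other 39 nodes
print((nodes - 1) * small / node_bw)   # ~32.5s; the post quotes 39s (~1 GByte/s effective)

# shuffle: all 101GB move, but every node's outbound link works in parallel;
# at halved bandwidth that's the ~5s minimum quoted in the post
print((small + big) / (nodes * node_bw / 2))   # ~4.2s
}}}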
.
! 2021
Tuning SQLs to Optimize Smart Scan Performance (Doc ID 2608210.1)
Exadata Smart Scan FAQ (Doc ID 1927934.1)
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/sagug/exadata-storage-server-monitoring.html#GUID-2DA44230-B3B2-41D4-B392-35E0717B9039
https://fritshoogland.files.wordpress.com/2015/12/oracle-exadata-and-database-memory.pdf
https://fritshoogland.wordpress.com/2015/06/29/investigating-the-full-table-direct-path-buffered-decision/
! previous
Predicate evaluation is not offloaded to Exadata Cell in the following cases:
http://docs.oracle.com/html/E13861_14/monitoring.htm#BABHHIHH
{{{
The CELL_OFFLOAD_PROCESSING parameter is set to FALSE.
The table or partition being scanned is small.
The optimizer does not use direct path read.
A scan is performed on a clustered table.
A scan is performed on an index-organized table.
A fast full scan is performed on compressed indexes.
A fast full scan is performed on reverse key indexes.
The table has row dependencies enabled or the rowscn is being fetched.
The optimizer wants the scan to return rows in ROWID order.
The command is CREATE INDEX using nosort.
A LOB or LONG column is being selected or queried.
A SELECT ... VERSIONS query is done on a table.
A query that references more than 255 columns where the heap table is uncompressed, or Basic or OLTP compressed.
However, such queries on Exadata Hybrid Columnar Compression-compressed tables are offloaded.
The tablespace is encrypted, and the CELL_OFFLOAD_DECRYPTION parameter is set to FALSE. In order for Exadata
Cell to perform decryption, Oracle Database needs to send the decryption key to Exadata Cell. If there are security concerns
about keys being shipped across the network to Exadata Cell, then disable the decryption feature.
The tablespace is not completely stored on Exadata Cell.
The predicate evaluation is on a virtual column.
}}}
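To check whether a particular statement actually qualified, v$sql carries the offload counters; a minimal sketch (connect details and sql_id are placeholders):
{{{
import oracledb   # assumption: python-oracledb on a client that reaches the DB

conn = oracledb.connect(user="system", password="oracle", dsn="dbhost/orclpdb")
cur = conn.cursor()
cur.execute("""
    select sql_id,
           io_cell_offload_eligible_bytes,  -- > 0 means the scan was offloadable
           io_interconnect_bytes
    from v$sql
    where sql_id = :1""", ["7x2fdzr9kqhtu"])   # hypothetical sql_id
print(cur.fetchone())
}}}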
..
<<showtoc>>
! sample captures
https://wiki.wireshark.org/SampleCaptures#Oracle_TNS_.2F_SQLnet_.2F_OCI_.2F_OPI
! Wireshark Tip 3: Graph HTTP Response Times
https://www.youtube.com/watch?v=FMRI6ua2MjE
! Wireshark Recipes - Sunil Gupta
https://learning.oreilly.com/videos/wireshark-recipes/9781838554408/9781838554408-video6_6
! network usage per process linux
https://www.slashroot.in/find-network-traffic-and-bandwidth-usage-process-linux
! references
https://hoopercharles.wordpress.com/2009/12/15/network-monitoring-experimentations-1/
https://www.youtube.com/results?search_query=oracle+TNS+wireshark
https://www.youtube.com/watch?v=Qp9SGdlYZWE
https://osqa-ask.wireshark.org/questions/60745/tns-dissecting
https://osqa-ask.wireshark.org/questions/17894/oracle-sqlnet-tracing
jonah - Listening In Passive Capture and Analysis of Oracle Network Traffic http://www.nyoug.org/Presentations/2008/Sep/Harris_Listening%20In.pdf
How to Use Wireshark to Measure Network Traffic - change bits to bytes to packets
https://support.pelco.com/s/article/How-to-Use-Wireshark-to-Measure-Network-Traffic-1538586679539?language=en_US
bandwidth filtering
https://www.golinuxcloud.com/measure-bandwidth-wireshark/
TLSv1.2 behind a load balancer
decoding it using wireshark: https://support.spirent.com/SC_KnowledgeView?Id=FAQ19346
<<<
You can do this from the Wireshark application itself:
Make sure you have saved the file to disk already (File>Save) (if you have just done a capture)
Go to File>Export Packet Dissections>as "CSV" [etc]
Then enter a filename (make sure you add .csv on the end as WS does not do this!)
<<<
using tshark
https://shantoroy.com/networking/convert-pcap-to-csv-using-tshark/
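Same idea driven from Python instead of the shell; a minimal sketch (capture.pcap/capture.csv are placeholder file names):
{{{
import subprocess

# tshark -T fields + -e picks the columns, -E controls the CSV formatting
with open("capture.csv", "w") as out:
    subprocess.run([
        "tshark", "-r", "capture.pcap",
        "-T", "fields",
        "-e", "frame.number", "-e", "frame.time_epoch",
        "-e", "ip.src", "-e", "ip.dst", "-e", "frame.len",
        "-E", "header=y", "-E", "separator=,",
    ], stdout=out, check=True)
}}}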
Understanding write-cache mode for logical drives
http://publib.boulder.ibm.com/infocenter/eserver/v1r2/index.jsp?topic=%2Fdiricinfo%2Ffqy0_awriteld.html
http://blog.revolutionanalytics.com/2012/09/guest-post-from-flowingdata.html
http://flowingdata.com/category/tutorials/ <- @@good stuff!@@
! Here's the index of tutorials on flowingdata
check out this link https://www.evernote.com/l/ADAdOdUwHQ5AYr6lJvhQVaI2xj4qfb2Uq3U
! with smart scan
{{{
# exadata x2 with smart scan
processor : 23
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
stepping : 2
cpu MHz : 2926.090
cache size : 12288 KB
physical id : 1
siblings : 12
core id : 10
cpu cores : 6
apicid : 53
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx pdpe1gb rdtscp lm constant_tsc ida nonstop_tsc arat pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips : 5852.00
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]
OraPub CPU speed statistic is 291
Other statistics: stdev=58.868 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=3850)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089660 0 343
1085106 0 446
1085106 0 383
1085106 0 459
1085106 0 426
1085106 0 298
1085106 0 298
7 rows selected.
Linux enkdb03.enkitec.com 2.6.18-194.3.1.0.3.el5 #1 SMP Tue Aug 31 22:41:13 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb03.enkitec.com dbm1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 345.561
Other statistics: stdev=24.447 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=3155)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089660 0 429
1085106 0 367
1085106 0 305
1085106 0 306
1085106 0 305
1085106 0 305
1085106 0 305
7 rows selected.
Linux enkdb03.enkitec.com 2.6.18-194.3.1.0.3.el5 #1 SMP Tue Aug 31 22:41:13 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb03.enkitec.com dbm1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 235.82
Other statistics: stdev=5.303 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=4603.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089660 0 465
1085106 0 464
1085106 0 458
1085106 0 472
1085106 0 469
1085106 0 444
1085106 0 455
7 rows selected.
Linux enkdb03.enkitec.com 2.6.18-194.3.1.0.3.el5 #1 SMP Tue Aug 31 22:41:13 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb03.enkitec.com dbm1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 285.58
Other statistics: stdev=53.201 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=3911.667)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089660 0 480
1085106 0 464
1085106 0 480
1085106 0 411
1085106 0 314
1085106 0 365
1085106 0 313
7 rows selected.
Linux enkdb03.enkitec.com 2.6.18-194.3.1.0.3.el5 #1 SMP Tue Aug 31 22:41:13 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb03.enkitec.com dbm1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 281.674
Other statistics: stdev=43.971 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=3931.667)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089660 0 486
1085106 0 439
1085106 0 480
1085106 0 405
1085106 0 331
1085106 0 322
1085106 0 382
7 rows selected.
Linux enkdb03.enkitec.com 2.6.18-194.3.1.0.3.el5 #1 SMP Tue Aug 31 22:41:13 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb03.enkitec.com dbm1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
}}}
! without smart scan
{{{
# exadata x2 without smart scan
OraPub CPU speed statistic is 322.211
Other statistics: stdev=4.858 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=3368.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089660 0 428
1085106 0 346
1085106 0 333
1085106 0 340
1085106 0 334
1085106 0 334
1085106 0 334
7 rows selected.
Linux enkdb03.enkitec.com 2.6.18-194.3.1.0.3.el5 #1 SMP Tue Aug 31 22:41:13 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb03.enkitec.com dbm1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 246.724
Other statistics: stdev=19.617 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=4421.667)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089660 0 358
1085106 0 425
1085106 0 448
1085106 0 495
1085106 0 418
1085106 0 398
1085106 0 469
7 rows selected.
Linux enkdb03.enkitec.com 2.6.18-194.3.1.0.3.el5 #1 SMP Tue Aug 31 22:41:13 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb03.enkitec.com dbm1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 213.93
Other statistics: stdev=11.925 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=5085)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089660 0 481
1085106 0 510
1085106 0 497
1085106 0 463
1085106 0 536
1085106 0 537
1085106 0 508
7 rows selected.
Linux enkdb03.enkitec.com 2.6.18-194.3.1.0.3.el5 #1 SMP Tue Aug 31 22:41:13 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb03.enkitec.com dbm1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 200.924
Other statistics: stdev=3.162 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=5401.667)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089660 0 546
1085106 0 546
1085106 0 543
1085106 0 543
1085106 0 546
1085106 0 539
1085106 0 524
7 rows selected.
Linux enkdb03.enkitec.com 2.6.18-194.3.1.0.3.el5 #1 SMP Tue Aug 31 22:41:13 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb03.enkitec.com dbm1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 192.456
Other statistics: stdev=.978 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=5638.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089660 0 570
1085106 0 563
1085106 0 565
1085106 0 563
1085106 0 567
1085106 0 566
1085106 0 559
7 rows selected.
Linux enkdb03.enkitec.com 2.6.18-194.3.1.0.3.el5 #1 SMP Tue Aug 31 22:41:13 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb03.enkitec.com dbm1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
OraPub CPU speed statistic is 195.875
Other statistics: stdev=13.039 PIOs=0 pio/lio=0 avg lios/test=1085106 avg
time(ms)/test=5558.333)
PL/SQL procedure successfully completed.
LIO PIO DURATION
---------- ---------- ----------
1089660 0 545
1085106 0 570
1085106 0 573
1085106 0 570
1085106 0 572
1085106 0 562
1085106 0 488
7 rows selected.
Linux enkdb03.enkitec.com 2.6.18-194.3.1.0.3.el5 #1 SMP Tue Aug 31 22:41:13 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
HOST_NAME INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
enkdb03.enkitec.com dbm1 11.2.0.2.0
1 row selected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 24
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 24
}}}
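Side note on reading these runs: the OraPub CPU speed statistic appears to be just avg lios/test divided by avg time(ms)/test; a quick sanity check against the stdev=.978 run above:
{{{
avg_lios = 1085106
avg_ms = 5638.333
print(avg_lios / avg_ms)   # ~192.45, vs the reported 192.456
}}}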
<<showtoc>>
All in all it's not bad (X5-8): more cores with pretty much the same per-core performance
[img[ https://lh3.googleusercontent.com/-g1-0t0GkYJw/ViEm87w-6MI/AAAAAAAAC0g/k_OilBsEr84/s2048-Ic42/x3x4x5-8.png ]]
! References
https://twitter.com/karlarao/status/435882623500423168 - 47%+ in performance, 50%+ in cores, as fast as X4-2, big jump for Exadata X4-8 in terms of speed+capacity (120 cores)
http://www.intel.com/content/www/us/en/benchmarks/server/xeon-e7-v3/xeon-e7-v3-general-compute.html#rate
http://www.intel.com/content/www/us/en/benchmarks/server/xeon-e7-v3/xeon-e7-v3-world-record.html
http://sp.ts.fujitsu.com/dmsp/Publications/public/CINT2006R.2800E2-E7-8890v3-8P.ref.pdf
!! data sheet
http://www.oracle.com/us/products/servers/x5-8-datasheet-2565551.pdf
http://www.oracle.com/technetwork/server-storage/oracle-x86/documentation/x5-8-system-architecture-2584605.pdf
https://blogs.oracle.com/hardware/entry/oracle_server_x5_8_s
http://exadata-dba.blogspot.com/2014/08/exadata-x5.html
http://www.slideshare.net/MGralike/miracle-open-world-2011-xml-index-strategies
http://it.toolbox.com/blogs/oracle-guide/oow-xml-index-and-binary-xml-in-oracle-11g-12714
http://grow-n-shine.blogspot.com/2011/11/one-of-biggest-change-that-oracle-has.html
http://www.vldb.org/conf/2004/IND5P1.PDF
http://technology.amis.nl/2013/06/28/oracle-database-12c-xmlindex-support-for-hash-partitioning/
http://www.liberidu.com/blog/2009/10/16/howto-partition-binary-xml-xmltype-storage/
https://community.oracle.com/message/10656858
XMLTable performance vs. TABLE(XMLSequence()) https://community.oracle.com/thread/2526907?start=0&tstart=0
Oracle XML DB: Best Practices to Get Optimal Performance out of XML Queries http://www.oracle.com/technetwork/database-features/xmldb/xmlqueryoptimize11gr2-168036.pdf
What is the best way to insert a XML input to an oracle table in stored procedure? http://dba.stackexchange.com/questions/39610/what-is-the-best-way-to-insert-a-xml-input-to-an-oracle-table-in-stored-procedur
insert .xml file into xmltype table? https://community.oracle.com/thread/2203503?start=0&tstart=0
Insert into table with xmltype column from a xml file http://stackoverflow.com/questions/23563898/insert-into-table-with-xmltype-column-from-a-xml-file
http://www.liberidu.com/blog/page/16/?s=xml+insert
http://www.liberidu.com/blog/2007/09/05/xml-data-to-be-stored-or-not-to-be-stored-and-beyond%E2%80%A6/
http://www.liberidu.com/blog/2007/08/17/howto-create-xmltype-table-for-binary-xml-usage/
http://www.liberidu.com/blog/2014/07/18/loading-xml-documents-into-an-oracle-database-2/
http://www.liberidu.com/blog/2014/07/18/loading-xml-documents-into-an-oracle-database/
https://odieweblog.wordpress.com/2013/09/16/xmltable-vs-external-xslt-preprocessor/
http://www.liberidu.com/blog/2008/09/05/xmldb-performance-environment-set-up-procedure/
https://www.google.com/search?q=Marco+Gralike+insert+performance&oq=Marco+Gralike+insert+performance&aqs=chrome..69i57.3180j0j1&sourceid=chrome&es_sm=122&ie=UTF-8#q=Marco+Gralike+xml+insert+performance&start=10
XMLTYPE insert performance https://community.oracle.com/thread/852162?start=0&tstart=0
http://www.slideshare.net/MGralike/xml-conferentie-amsterdam-creating-structure-in-unstructured-data-amis-marco-gralike
http://www.slideshare.net/MGralike/real-world-experience-with-oracle-xml-database-11g-an-oracle-aces-perspective-oracle-open-world-2008-marco-gralike-presentation
http://www.slideshare.net/MGralike/hotsos-2010-the-ultimate-performance-challenge-how-to-make-xml-perform
http://www.liberidu.com/blog/2008/07/11/howto-load-really-big-xml-files/
http://database.developer-works.com/article/14621364/Making+insert+performance+better
http://oracle.developer-works.com/article/4477407/XMLTYPE+insert+performance
http://stackoverflow.com/questions/12690868/how-to-use-xmltable-in-oracle
{{{
CREATE TABLE EMPLOYEES
(
id NUMBER,
data XMLTYPE
);
desc employees
INSERT INTO EMPLOYEES
VALUES (1, xmltype ('<Employees>
<Employee emplid="1111" type="admin">
<firstname>John</firstname>
<lastname>Watson</lastname>
<age>30</age>
<email>johnwatson@sh.com</email>
</Employee>
<Employee emplid="2222" type="admin">
<firstname>Sherlock</firstname>
<lastname>Homes</lastname>
<age>32</age>
<email>sherlock@sh.com</email>
</Employee>
<Employee emplid="3333" type="user">
<firstname>Jim</firstname>
<lastname>Moriarty</lastname>
<age>52</age>
<email>jim@sh.com</email>
</Employee>
<Employee emplid="4444" type="user">
<firstname>Mycroft</firstname>
<lastname>Holmes</lastname>
<age>41</age>
<email>mycroft@sh.com</email>
</Employee>
</Employees>'));
select * from employees;
--syntax
XMLTable('<XQuery>'
PASSING <xml column>
COLUMNS <new column name> <column type> PATH <XQuery path>)
--print firstname and lastname of all employees
SELECT t.id, x.*
FROM employees t,
XMLTABLE ('/Employees/Employee'
PASSING t.data
COLUMNS firstname VARCHAR2(30) PATH 'firstname',
lastname VARCHAR2(30) PATH 'lastname') x
WHERE t.id = 1;
--print firstname of all employees
-- you can use the following expressions too, item(), node(), attribute(), element(), document-node(), namespace(), text(), xs:integer, xs:string
SELECT t.id, x.*
FROM employees t,
XMLTABLE ('/Employees/Employee/firstname'
PASSING t.data
COLUMNS firstname VARCHAR2 (30) PATH 'text()') x
WHERE t.id = 1;
--print employee type of all employees
SELECT emp.id, x.*
FROM employees emp,
XMLTABLE ('/Employees/Employee'
PASSING emp.data
COLUMNS firstname VARCHAR2(30) PATH 'firstname',
type1 VARCHAR2(30) PATH '@emplid',
type2 VARCHAR2(30) PATH '@type') x;
--print firstname and lastname of employee with id 2222
SELECT t.id, x.*
FROM employees t,
XMLTABLE ('/Employees/Employee[@emplid=2222]'
PASSING t.data
COLUMNS firstname VARCHAR2(30) PATH 'firstname',
lastname VARCHAR2(30) PATH 'lastname') x
WHERE t.id = 1;
--print firstname and lastname of employees who are admins
SELECT t.id, x.*
FROM employees t,
XMLTABLE ('/Employees/Employee[@type="admin"]'
PASSING t.data
COLUMNS firstname VARCHAR2(30) PATH 'firstname',
lastname VARCHAR2(30) PATH 'lastname') x
WHERE t.id = 1;
--print firstname and lastname of employees having age > 40
SELECT t.id, x.*
FROM employees t,
XMLTABLE ('/Employees/Employee[age>40]'
PASSING t.data
COLUMNS firstname VARCHAR2(30) PATH 'firstname',
lastname VARCHAR2(30) PATH 'lastname',
age VARCHAR2(30) PATH 'age') x
WHERE t.id = 1;
}}}
http://www.adellera.it/blog/2012/08/27/xplan-now-with-self-measures-for-row-source-operations/
{{{
[root@enkx4cel01 ~]# dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
enkx4cel01: name: enkx4cel01_IORMPLAN
enkx4cel01: catPlan:
enkx4cel01: dbPlan:
enkx4cel01: objective: auto
enkx4cel01: status: active
enkx4cel02: name: enkx4cel02_IORMPLAN
enkx4cel02: catPlan:
enkx4cel02: dbPlan:
enkx4cel02: objective: auto
enkx4cel02: status: active
enkx4cel03: name: enkx4cel03_IORMPLAN
enkx4cel03: catPlan:
enkx4cel03: dbPlan:
enkx4cel03: objective: auto
enkx4cel03: status: active
enkx4cel04: name: enkx4cel04_IORMPLAN
enkx4cel04: catPlan:
enkx4cel04: dbPlan:
enkx4cel04: objective: auto
enkx4cel04: status: active
enkx4cel05: name: enkx4cel05_IORMPLAN
enkx4cel05: catPlan:
enkx4cel05: dbPlan:
enkx4cel05: objective: auto
enkx4cel05: status: active
enkx4cel06: name: enkx4cel06_IORMPLAN
enkx4cel06: catPlan:
enkx4cel06: dbPlan:
enkx4cel06: objective: auto
enkx4cel06: status: active
enkx4cel07: name: enkx4cel07_IORMPLAN
enkx4cel07: catPlan:
enkx4cel07: dbPlan:
enkx4cel07: objective: auto
enkx4cel07: status: active
[root@enkx4cel01 ~]#
[root@enkx4cel01 ~]#
[root@enkx4cel01 ~]#
[root@enkx4cel01 ~]# dcli -g ~/cell_group -l root 'cellcli -e alter iormplan inactive'
enkx4cel01: IORMPLAN status cannot be set to inactive. Set the objective to 'basic' instead.
enkx4cel01: See 'help alter iormplan' for more details.
enkx4cel01: IORMPLAN successfully altered
enkx4cel02: IORMPLAN status cannot be set to inactive. Set the objective to 'basic' instead.
enkx4cel02: See 'help alter iormplan' for more details.
enkx4cel02: IORMPLAN successfully altered
enkx4cel03: IORMPLAN status cannot be set to inactive. Set the objective to 'basic' instead.
enkx4cel03: See 'help alter iormplan' for more details.
enkx4cel03: IORMPLAN successfully altered
enkx4cel04: IORMPLAN status cannot be set to inactive. Set the objective to 'basic' instead.
enkx4cel04: See 'help alter iormplan' for more details.
enkx4cel04: IORMPLAN successfully altered
enkx4cel05: IORMPLAN status cannot be set to inactive. Set the objective to 'basic' instead.
enkx4cel05: See 'help alter iormplan' for more details.
enkx4cel05: IORMPLAN successfully altered
enkx4cel06: IORMPLAN status cannot be set to inactive. Set the objective to 'basic' instead.
enkx4cel06: See 'help alter iormplan' for more details.
enkx4cel06: IORMPLAN successfully altered
enkx4cel07: IORMPLAN status cannot be set to inactive. Set the objective to 'basic' instead.
enkx4cel07: See 'help alter iormplan' for more details.
enkx4cel07: IORMPLAN successfully altered
[root@enkx4cel01 ~]#
[root@enkx4cel01 ~]#
[root@enkx4cel01 ~]#
[root@enkx4cel01 ~]# dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
enkx4cel01: name: enkx4cel01_IORMPLAN
enkx4cel01: catPlan:
enkx4cel01: dbPlan:
enkx4cel01: objective: basic
enkx4cel01: status: active
enkx4cel02: name: enkx4cel02_IORMPLAN
enkx4cel02: catPlan:
enkx4cel02: dbPlan:
enkx4cel02: objective: basic
enkx4cel02: status: active
enkx4cel03: name: enkx4cel03_IORMPLAN
enkx4cel03: catPlan:
enkx4cel03: dbPlan:
enkx4cel03: objective: basic
enkx4cel03: status: active
enkx4cel04: name: enkx4cel04_IORMPLAN
enkx4cel04: catPlan:
enkx4cel04: dbPlan:
enkx4cel04: objective: basic
enkx4cel04: status: active
enkx4cel05: name: enkx4cel05_IORMPLAN
enkx4cel05: catPlan:
enkx4cel05: dbPlan:
enkx4cel05: objective: basic
enkx4cel05: status: active
enkx4cel06: name: enkx4cel06_IORMPLAN
enkx4cel06: catPlan:
enkx4cel06: dbPlan:
enkx4cel06: objective: basic
enkx4cel06: status: active
enkx4cel07: name: enkx4cel07_IORMPLAN
enkx4cel07: catPlan:
enkx4cel07: dbPlan:
enkx4cel07: objective: basic
enkx4cel07: status: active
[root@enkx4cel01 ~]#
[root@enkx4cel01 ~]# dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective = auto'
enkx4cel01: IORMPLAN successfully altered
enkx4cel02: IORMPLAN successfully altered
enkx4cel03: IORMPLAN successfully altered
enkx4cel04: IORMPLAN successfully altered
enkx4cel05: IORMPLAN successfully altered
enkx4cel06: IORMPLAN successfully altered
enkx4cel07: IORMPLAN successfully altered
[root@enkx4cel01 ~]# dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
enkx4cel01: name: enkx4cel01_IORMPLAN
enkx4cel01: catPlan:
enkx4cel01: dbPlan:
enkx4cel01: objective: auto
enkx4cel01: status: active
enkx4cel02: name: enkx4cel02_IORMPLAN
enkx4cel02: catPlan:
enkx4cel02: dbPlan:
enkx4cel02: objective: auto
enkx4cel02: status: active
enkx4cel03: name: enkx4cel03_IORMPLAN
enkx4cel03: catPlan:
enkx4cel03: dbPlan:
enkx4cel03: objective: auto
enkx4cel03: status: active
enkx4cel04: name: enkx4cel04_IORMPLAN
enkx4cel04: catPlan:
enkx4cel04: dbPlan:
enkx4cel04: objective: auto
enkx4cel04: status: active
enkx4cel05: name: enkx4cel05_IORMPLAN
enkx4cel05: catPlan:
enkx4cel05: dbPlan:
enkx4cel05: objective: auto
enkx4cel05: status: active
enkx4cel06: name: enkx4cel06_IORMPLAN
enkx4cel06: catPlan:
enkx4cel06: dbPlan:
enkx4cel06: objective: auto
enkx4cel06: status: active
enkx4cel07: name: enkx4cel07_IORMPLAN
enkx4cel07: catPlan:
enkx4cel07: dbPlan:
enkx4cel07: objective: auto
enkx4cel07: status: active
}}}
I found this yum.py and enhanced it to cover most of the frequently executed yum operations…
This is very useful if you have a machine or desktop server with lots of yum repos and you just want to install an RPM from a particular repo source.
With this script you no longer have to manually rename repo files to repo.bak to disable a repo source, or manually type the enable/disable yum switches, saving you lots of time!
the idea came from the following links
http://www.fedoraforum.org/forum/showthread.php?t=26925
http://forums.fedoraforum.org/archive/index.php/t-27212.html
! ''The script''
{{{
#! /usr/bin/env python
################################################################
# Author: Jim Fitzpatrick
# Description: This program is meant to be run with the linux yum utility,
# to easily enable and disable repositories. Be sure not to mix
# incompatible repos, as the program will not stop you from
# doing so.
#
# Added by Karl Arao
# - options for list,search,install,remove for specific RPM or Group
# - formatting enhancements
################################################################
import os
#print instructions
print "*****************************************************"
print "Below is a list of repos. Please enter"
print "*****************************************************"
print "e to enable,"
print "d to disable,"
print "o to omit"
print "If unsure, choose to omit."
print "*****************************************************"
#create array of repos and prompt user to enable/disable them
repolist = os.listdir('/etc/yum.repos.d/')
repoenable = ""
i = 0
for x in repolist:
    if repolist[i][-5:] == '.repo':
        repolist[i] = repolist[i][0:-5]
        repo = repolist[i]
        print "" + repo + "? (e/d/o)"
        boolx = raw_input()
        if boolx == 'e':
            repoenable = repoenable + " --enablerepo=" + repo
        elif boolx == 'd':
            repoenable = repoenable + " --disablerepo=" + repo
        else:
            repoenable = repoenable
    i = i+1
#prompt user for desired package
print "*****************************************************"
print "Enter either of the following:"
print "Package Name(s) ==> ,example --> rsync or rsync ftp httpd"
print "Group Name (only valid with gl/gi/gr) ==> ,example --> DNS Name Server"
print "or * to output all (only valid with l/gl) ==> ,example --> *"
print "*****************************************************"
package = raw_input()
#print install instructions
print "*****************************************************"
print "Below is a list of install options. Please enter"
print "*****************************************************"
print "l to list,"
print "s to search,"
print "i to install,"
print "r to remove,"
print "gl to grouplist,"
print "gi to groupinstall,"
print "gr to groupremove,"
print "o to exit"
print "*****************************************************"
print "" + package + "? (l/s/i/r/gl/gi/gr)"
installmethod = raw_input()
#execute install commands
if installmethod == 'l':
    string = "\nyum" + repoenable + " list " + "'" + package + "'"
elif installmethod == 's':
    string = "\nyum" + repoenable + " search " + "'" + package + "'"
elif installmethod == 'i':
    string = "\nyum" + repoenable + " install " + "'" + package + "'"
elif installmethod == 'r':
    string = "\nyum" + repoenable + " remove " + "'" + package + "'"
elif installmethod == 'gl':
    string = "\nyum" + repoenable + " grouplist " + "'" + package + "'"
elif installmethod == 'gi':
    string = "\nyum" + repoenable + " groupinstall " + "'" + package + "'"
elif installmethod == 'gr':
    string = "\nyum" + repoenable + " groupremove " + "'" + package + "'"
else:
    string = "yum -h"
print string
os.system(string)
}}}
! ''How to use the script''
-- make the repo file names match the repo names
root@desktopserver.localdomain:/etc/yum.repos.d:
$ ls -l *repo
-rw-r--r-- 1 root root 164 Nov 6 12:59 oel55http.repo
-rw-r--r-- 1 root root 164 Nov 6 12:56 oel57http.repo
-rw-r--r-- 1 root root 1113 Nov 12 2010 rpmforge.repo
-rw-r--r-- 1 root root 236 Nov 6 12:58 vbox.repo
-- enable the repo
root@desktopserver.localdomain:/etc/yum.repos.d:
$ cat oel57http.repo
[oel57http]
name=Enterprise-$releasever - Media
baseurl=http://192.168.203.1/oel57/Server
enabled=1
gpgcheck=1
gpgkey=http://192.168.203.1/oel57/RPM-GPG-KEY-oracle
-- as the user root execute the script
$ ./yum.py
''Example 1) List the rsync rpm''
<<<
{{{
$ ./yum.py
*****************************************************
Below is a list of repos. Please enter
*****************************************************
e to enable,
d to disable,
o to omit
If unsure, choose to omit.
*****************************************************
oel57http? (e/d/o)
e
oel55http? (e/d/o)
d
rpmforge? (e/d/o)
d
vbox? (e/d/o)
d
*****************************************************
Enter either of the following:
Package Name(s) ==> ,example --> rsync or rsync ftp httpd
Group Name (only valid with gl/gi/gr) ==> ,example --> DNS Name Server
or * to output all (only valid with l/gl) ==> ,example --> *
*****************************************************
rsync
*****************************************************
Below is a list of install options. Please enter
*****************************************************
l to list,
s to search,
i to install,
r to remove,
gl to grouplist,
gi to groupinstall,
gr to groupremove,
o to exit
*****************************************************
rsync? (l/s/i/r/gl/gi/gr)
l
yum --enablerepo=oel57http --disablerepo=oel55http --disablerepo=rpmforge --disablerepo=vbox list rsync
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
Installed Packages
rsync.x86_64 3.0.6-4.el5 installed
}}}
To do a ''LIST ALL RPMs'' :
{{{
$ ./yum.py
*****************************************************
Below is a list of repos. Please enter
*****************************************************
e to enable,
d to disable,
o to omit
If unsure, choose to omit.
*****************************************************
oel57http? (e/d/o)
d
oel55http? (e/d/o)
e
rpmforge? (e/d/o)
d
vbox? (e/d/o)
d
*****************************************************
Enter either of the following:
Package Name(s) ==> ,example --> rsync or rsync ftp httpd
Group Name (only valid with gl/gi/gr) ==> ,example --> DNS Name Server
or * to output all (only valid with l/gl) ==> ,example --> *
*****************************************************
* <-- THIS COMMAND WILL GENERATE THE list '*' OPTION
*****************************************************
Below is a list of install options. Please enter
*****************************************************
l to list,
s to search,
i to install,
r to remove,
gl to grouplist,
gi to groupinstall,
gr to groupremove,
o to exit
*****************************************************
*? (l/s/i/r/gl/gi/gr)
l
yum --disablerepo=oel57http --enablerepo=oel55http --disablerepo=rpmforge --disablerepo=vbox list '*' <-- THIS COMMAND WILL BE GENERATED
}}}
<<<
''Example 2) Search the rsync rpm''
<<<
{{{
$ ./yum.py
*****************************************************
Below is a list of repos. Please enter
*****************************************************
e to enable,
d to disable,
o to omit
If unsure, choose to omit.
*****************************************************
oel57http? (e/d/o)
e
oel55http? (e/d/o)
d
rpmforge? (e/d/o)
d
vbox? (e/d/o)
d
*****************************************************
Enter either of the following:
Package Name(s) ==> ,example --> rsync or rsync ftp httpd
Group Name (only valid with gl/gi/gr) ==> ,example --> DNS Name Server
or * to output all (only valid with l/gl) ==> ,example --> *
*****************************************************
sync
*****************************************************
Below is a list of install options. Please enter
*****************************************************
l to list,
s to search,
i to install,
r to remove,
gl to grouplist,
gi to groupinstall,
gr to groupremove,
o to exit
*****************************************************
sync? (l/s/i/r/gl/gi/gr)
s
yum --enablerepo=oel57http --disablerepo=oel55http --disablerepo=rpmforge --disablerepo=vbox search 'sync'
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
================================================================================================ Matched: sync =================================================================================================
lam.i386 : The LAM (Local Area Multicomputer) programming environment.
lam.x86_64 : The LAM (Local Area Multicomputer) programming environment.
c-ares.i386 : A library that performs asynchronous DNS operations
c-ares.x86_64 : A library that performs asynchronous DNS operations
gnome-vfs2.i386 : The GNOME virtual file-system libraries
gnome-vfs2.x86_64 : The GNOME virtual file-system libraries
gtk-vnc.i386 : A GTK widget for VNC clients
gtk-vnc.x86_64 : A GTK widget for VNC clients
gtk-vnc-devel.i386 : Libraries, includes, etc. to compile with the gtk-vnc library
gtk-vnc-devel.x86_64 : Libraries, includes, etc. to compile with the gtk-vnc library
gtk-vnc-python.x86_64 : Python bindings for the gtk-vnc library
hesiod-devel.i386 : Development libraries and headers for Hesiod
hesiod-devel.x86_64 : Development libraries and headers for Hesiod
libaio.i386 : Linux-native asynchronous I/O access library
libaio.x86_64 : Linux-native asynchronous I/O access library
libaio-devel.i386 : Development files for Linux-native asynchronous I/O access
libaio-devel.x86_64 : Development files for Linux-native asynchronous I/O access
libevent.i386 : Abstract asynchronous event notification library
libevent.x86_64 : Abstract asynchronous event notification library
libsoup.i386 : Soup, an HTTP library implementation
libsoup.x86_64 : Soup, an HTTP library implementation
libtevent.i386 : Talloc-based, event-driven mainloop
libtevent.x86_64 : Talloc-based, event-driven mainloop
nspr.i386 : Netscape Portable Runtime
nspr.x86_64 : Netscape Portable Runtime
ntp.x86_64 : Synchronizes system time using the Network Time Protocol (NTP).
rsync.x86_64 : A program for synchronizing files over a network
system-config-date.noarch : A graphical interface for modifying system date and time
yum-utils.noarch : Utilities based around the yum package manager
}}}
<<<
''Example 3) Remove the rsync rpm''
<<<
{{{
$ ./yum.py
*****************************************************
Below is a list of repos. Please enter
*****************************************************
e to enable,
d to disable,
o to omit
If unsure, choose to omit.
*****************************************************
oel57http? (e/d/o)
e
oel55http? (e/d/o)
d
rpmforge? (e/d/o)
d
vbox? (e/d/o)
d
*****************************************************
Enter either of the following:
Package Name(s) ==> ,example --> rsync or rsync ftp httpd
Group Name (only valid with gl/gi/gr) ==> ,example --> DNS Name Server
or * to output all (only valid with l/gl) ==> ,example --> *
*****************************************************
rsync
*****************************************************
Below is a list of install options. Please enter
*****************************************************
l to list,
s to search,
i to install,
r to remove,
gl to grouplist,
gi to groupinstall,
gr to groupremove,
o to exit
*****************************************************
rsync? (l/s/i/r/gl/gi/gr)
r
yum --enablerepo=oel57http --disablerepo=oel55http --disablerepo=rpmforge --disablerepo=vbox remove 'rsync'
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package rsync.x86_64 0:3.0.6-4.el5 set to be erased
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================================================================================================================================================
Package Arch Version Repository Size
================================================================================================================================================================================================================
Removing:
rsync x86_64 3.0.6-4.el5 installed 665 k
Transaction Summary
================================================================================================================================================================================================================
Remove 1 Package(s)
Reinstall 0 Package(s)
Downgrade 0 Package(s)
Is this ok [y/N]:
}}}
<<<
''Example 4) Install the rsync rpm''
<<<
{{{
$ ./yum.py
*****************************************************
Below is a list of repos. Please enter
*****************************************************
e to enable,
d to disable,
o to omit
If unsure, choose to omit.
*****************************************************
oel57http? (e/d/o)
e
oel55http? (e/d/o)
d
rpmforge? (e/d/o)
d
vbox? (e/d/o)
d
*****************************************************
Enter either of the following:
Package Name(s) ==> ,example --> rsync or rsync ftp httpd
Group Name (only valid with gl/gi/gr) ==> ,example --> DNS Name Server
or * to output all (only valid with l/gl) ==> ,example --> *
*****************************************************
rsync
*****************************************************
Below is a list of install options. Please enter
*****************************************************
l to list,
s to search,
i to install,
r to remove,
gl to grouplist,
gi to groupinstall,
gr to groupremove,
o to exit
*****************************************************
rsync? (l/s/i/r/gl/gi/gr)
i
yum --enablerepo=oel57http --disablerepo=oel55http --disablerepo=rpmforge --disablerepo=vbox install 'rsync'
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package rsync.x86_64 0:3.0.6-4.el5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================================================================================================================================================
Package Arch Version Repository Size
================================================================================================================================================================================================================
Installing:
rsync x86_64 3.0.6-4.el5 oel57http 347 k
Transaction Summary
================================================================================================================================================================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size: 347 k
Is this ok [y/N]: y
Downloading Packages:
rsync-3.0.6-4.el5.x86_64.rpm | 347 kB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : rsync 1/1
Installed:
rsync.x86_64 0:3.0.6-4.el5
Complete!
}}}
<<<
''Example 5) Grouplist the "DNS Name Server"''
<<<
{{{
$ ./yum.py
*****************************************************
Below is a list of repos. Please enter
*****************************************************
e to enable,
d to disable,
o to omit
If unsure, choose to omit.
*****************************************************
oel57http? (e/d/o)
e
oel55http? (e/d/o)
d
rpmforge? (e/d/o)
d
vbox? (e/d/o)
d
*****************************************************
Enter either of the following:
Package Name(s) ==> ,example --> rsync or rsync ftp httpd
Group Name (only valid with gl/gi/gr) ==> ,example --> DNS Name Server
or * to output all (only valid with l/gl) ==> ,example --> *
*****************************************************
DNS Name Server
*****************************************************
Below is a list of install options. Please enter
*****************************************************
l to list,
s to search,
i to install,
r to remove,
gl to grouplist,
gi to groupinstall,
gr to groupremove,
o to exit
*****************************************************
DNS Name Server? (l/s/i/r/gl/gi/gr)
gl
yum --enablerepo=oel57http --disablerepo=oel55http --disablerepo=rpmforge --disablerepo=vbox grouplist 'DNS Name Server'
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
Setting up Group Process
Available Groups:
DNS Name Server
Done
}}}
To do a ''LIST ALL GROUPs'':
{{{
$ ./yum.py
*****************************************************
Below is a list of repos. Please enter
*****************************************************
e to enable,
d to disable,
o to omit
If unsure, choose to omit.
*****************************************************
oel57http? (e/d/o)
e
oel55http? (e/d/o)
d
rpmforge? (e/d/o)
d
vbox? (e/d/o)
d
*****************************************************
Enter either of the following:
Package Name(s) ==> ,example --> rsync or rsync ftp httpd
Group Name (only valid with gl/gi/gr) ==> ,example --> DNS Name Server
or * to output all (only valid with l/gl) ==> ,example --> *
*****************************************************
*
*****************************************************
Below is a list of install options. Please enter
*****************************************************
l to list,
s to search,
i to install,
r to remove,
gl to grouplist,
gi to groupinstall,
gr to groupremove,
o to exit
*****************************************************
*? (l/s/i/r/gl/gi/gr)
gl
yum --enablerepo=oel57http --disablerepo=oel55http --disablerepo=rpmforge --disablerepo=vbox grouplist '*'
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
Setting up Group Process
Installed Groups:
Administration Tools
Development Libraries
Development Tools
Editors
GNOME Desktop Environment
GNOME Software Development
Games and Entertainment
Graphical Internet
Graphics
Legacy Network Server
Legacy Software Development
Legacy Software Support
Mail Server
Network Servers
Office/Productivity
Printing Support
Server Configuration Tools
Sound and Video
System Tools
Text-based Internet
Web Server
Windows File Server
X Window System
Available Groups:
Authoring and Publishing
DNS Name Server
Engineering and Scientific
FTP Server
Java Development
KDE (K Desktop Environment)
KDE Software Development
MySQL Database
News Server
OpenFabrics Enterprise Distribution
PostgreSQL Database
X Software Development
Done
}}}
<<<
''Example 6) Groupinstall the "DNS Name Server"''
<<<
{{{
$ ./yum.py
*****************************************************
Below is a list of repos. Please enter
*****************************************************
e to enable,
d to disable,
o to omit
If unsure, choose to omit.
*****************************************************
oel57http? (e/d/o)
e
oel55http? (e/d/o)
d
rpmforge? (e/d/o)
d
vbox? (e/d/o)
d
*****************************************************
Enter either of the following:
Package Name(s) ==> ,example --> rsync or rsync ftp httpd
Group Name (only valid with gl/gi/gr) ==> ,example --> DNS Name Server
or * to output all (only valid with l/gl) ==> ,example --> *
*****************************************************
DNS Name Server
*****************************************************
Below is a list of install options. Please enter
*****************************************************
l to list,
s to search,
i to install,
r to remove,
gl to grouplist,
gi to groupinstall,
gr to groupremove,
o to exit
*****************************************************
DNS Name Server? (l/s/i/r/gl/gi/gr)
gi
yum --enablerepo=oel57http --disablerepo=oel55http --disablerepo=rpmforge --disablerepo=vbox groupinstall 'DNS Name Server'
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
Setting up Group Process
Resolving Dependencies
--> Running transaction check
---> Package bind.x86_64 30:9.3.6-16.P1.el5 set to be updated
---> Package bind-chroot.x86_64 30:9.3.6-16.P1.el5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================================================================================================================================================
Package Arch Version Repository Size
================================================================================================================================================================================================================
Installing:
bind x86_64 30:9.3.6-16.P1.el5 oel57http 988 k
bind-chroot x86_64 30:9.3.6-16.P1.el5 oel57http 46 k
Transaction Summary
================================================================================================================================================================================================================
Install 2 Package(s)
Upgrade 0 Package(s)
Total download size: 1.0 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): bind-chroot-9.3.6-16.P1.el5.x86_64.rpm | 46 kB 00:00
(2/2): bind-9.3.6-16.P1.el5.x86_64.rpm | 988 kB 00:00
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 15 MB/s | 1.0 MB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : bind 1/2
Installing : bind-chroot 2/2
Installed:
bind.x86_64 30:9.3.6-16.P1.el5 bind-chroot.x86_64 30:9.3.6-16.P1.el5
Complete!
}}}
<<<
''Example 7) Groupremove the "DNS Name Server"''
<<<
{{{
$ ./yum.py
*****************************************************
Below is a list of repos. Please enter
*****************************************************
e to enable,
d to disable,
o to omit
If unsure, choose to omit.
*****************************************************
oel57http? (e/d/o)
e
oel55http? (e/d/o)
d
rpmforge? (e/d/o)
d
vbox? (e/d/o)
d
*****************************************************
Enter either of the following:
Package Name(s) ==> ,example --> rsync or rsync ftp httpd
Group Name (only valid with gl/gi/gr) ==> ,example --> DNS Name Server
or * to output all (only valid with l/gl) ==> ,example --> *
*****************************************************
DNS Name Server
*****************************************************
Below is a list of install options. Please enter
*****************************************************
l to list,
s to search,
i to install,
r to remove,
gl to grouplist,
gi to groupinstall,
gr to groupremove,
o to exit
*****************************************************
DNS Name Server? (l/s/i/r/gl/gi/gr)
gr
yum --enablerepo=oel57http --disablerepo=oel55http --disablerepo=rpmforge --disablerepo=vbox groupremove 'DNS Name Server'
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
Setting up Group Process
Resolving Dependencies
--> Running transaction check
---> Package bind.x86_64 30:9.3.6-16.P1.el5 set to be erased
---> Package bind-chroot.x86_64 30:9.3.6-16.P1.el5 set to be erased
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================================================================================================================================================
Package Arch Version Repository Size
================================================================================================================================================================================================================
Removing:
bind x86_64 30:9.3.6-16.P1.el5 installed 2.2 M
bind-chroot x86_64 30:9.3.6-16.P1.el5 installed 0.0
Transaction Summary
================================================================================================================================================================================================================
Remove 2 Package(s)
Reinstall 0 Package(s)
Downgrade 0 Package(s)
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Erasing : bind-chroot 1/2
Erasing : bind 2/2
warning: /etc/sysconfig/named saved as /etc/sysconfig/named.rpmsave
Removed:
bind.x86_64 30:9.3.6-16.P1.el5 bind-chroot.x86_64 30:9.3.6-16.P1.el5
Complete!
}}}
<<<
https://parallelthoughts.xyz/2019/05/a-tale-of-query-optimization/
https://www.lastweekinaws.com/blog/why-zoom-chose-oracle-cloud-over-aws-and-maybe-you-should-too/
https://support.zoom.us/hc/en-us/articles/201362673-Request-or-Give-Remote-Control