Tuesday, April 22, 2014

Enterprise Manager not affected by Heartbleed

The Oracle Security team has indicated that Enterprise Manager Cloud Control, Grid Control, and Ops Center are not affected by the Heartbleed vulnerability. This is stated in a document published on the Oracle Technology Network (OTN):

http://www.oracle.com/technetwork/topics/security/opensslheartbleedcve-2014-0160-2188454.html

An excerpt from this document:

1.0 Oracle products that, while using OpenSSL, were not subject to CVE-2014-0160

Global Product Security has determined that the following products are using OpenSSL cryptographic libraries whose versions have been externally reported as not vulnerable to CVE-2014-0160 or did not use OpenSSL libraries to implement the vulnerable TLS protocol. No further action is therefore expected for these products:

  • Advanced Lights Out Manager (ALOM) [Product ID 9843/ALOM/ALOM]
  • ALOM-CMT [Product ID 9846/SYSFW-ALL/ALOM-CMT]
  • Audit Vault [Product ID 1977,9749]
  • Brocade(McData) Fiber Channel Switches and Management Software [Product ID 9864]
  • Cisco MDS Fiber Channel Switches and Management Software [Product ID 9865]
  • Corente Services Gateway
  • E-Business Suite 11i
  • eGate Integrator 5.0.5 SRE
  • Enterprise Manager Cloud Control
  • Enterprise Manager Cloud Control Plug-ins and Connectors
  • Enterprise Manager Grid Control [Product ID 1370]
  • Enterprise Manager Grid Control Plug-ins and Connectors
  • Enterprise Manager Ops Center
  • Exadata [Product ID 2546]
  • ....
  • ....

Please read the document for the full list.

Regards,

Porus.

Friday, March 14, 2014

Steps to Fast Track your Database Cloud implementation on Exadata

Oracle Exadata Database Machine is the ideal consolidation platform for the Enterprise Database Cloud, and Oracle Enterprise Manager provides the most optimized and comprehensive solution to rapidly set up, manage, and deliver Enterprise Clouds. Very significant innovations have been delivered in the Cloud Computing space via Exadata X4, Enterprise Manager 12c, and Database 12c, and customers can start realizing benefits from this combination, the most powerful and unique enterprise database cloud solution in the industry.
As per the OracleVoice blog on Forbes.com, "Why Database As A Service (DBaaS) Will Be The Breakaway Technology of 2014":
"Database as a Service (DBaaS) is arguably the next big thing in IT. Indeed, the market analysis firm 451 Research projects an astounding 86% cumulative annual growth rate, with annual revenues from DBaaS providers rising from $150 million in 2012 to $1.8 billion by 2016."
In this blog post, I will walk through the steps to simplify DBaaS setup on Exadata and describe the automation kits available to achieve the following rapidly -
  • Setup Monitoring and Management of Exadata Database Machine platform in EM 12c
  • Setup and Deliver DBaaS on Exadata using EM 12c
  • Manage and Optimize Exadata and EM 12c powered DBaaS cloud platform on an ongoing basis



There are two separate automation kits provided with EM 12c: the first kit enables rapid monitoring and management setup of the Exadata stack in EM 12c, and the second kit enables rapid setup of DBaaS -
1) Deploy an EM 12c site or use an existing site - If you do not have an existing EM 12c R3 setup, you can use the EM Automation Kit for Exadata to install EM 12c R3 Plug-in Update 1. This kit is available via patch 17036016 on My Oracle Support (MOS) and can be used to deploy the latest EM 12c release. Refer to the patch Readme and MOS note "Obtaining the Oracle Enterprise Manager Setup Automation kit for Exadata (Doc ID 1440951.1)" for additional details. Please note that this sets up the EM 12c Oracle Management Service (OMS) along with the Management Repository (OMR); they can be deployed on a single machine, or the OMS and OMR can be set up on different machines.
2) Deploy EM 12c R3 agents and required plug-ins on the Exadata Machine - The Agent kit is also part of the same EM Automation Kit for Exadata and can be used for deploying agents and plug-ins on the Exadata stack. Refer to MOS note "Obtaining the Oracle Enterprise Manager Setup Automation kit for Exadata (Doc ID 1440951.1)" for additional details. Best practice is to use the most recent version of the Agent kit and to deploy the latest plug-ins. Patch details for each platform are described in the MOS note.
The Agent kit script requires Java 1.6.0_43 or later on the database node where it is run. The script must be run as the root OS user on the Exadata DB node; however, JAVA_HOME, and PATH including JAVA_HOME/bin, must be set for the agent OS owner, so these environment variables need to be set up in the profile of the agent OS owner.
The Agent Automation kit helps achieve the following -
  • EMCLI setup on Exadata Server
  • EM 12c R3 site compatibility checks
  • Setup and remove SSH between Exadata nodes to test SSH setup
  • Deploy EM 12c Agent and required Plugins on all DB Nodes of Exadata Machine
  • Confirm Exachk tool availability and run Exachk tool
  • Run Exadata Discovery Prerequisites
  • Discover cluster, Grid Infrastructure, RAC database, and listener targets
Note - In case of Exadata X4, ensure you have the latest EM 12c R3 Bundle Patch (released in January 2014). Refer to the following MOS notes -
Enterprise Manager 12.1.0.3 Bundle Patch Master Note (Doc ID 1572022.1)
Enterprise Manager for Exadata Plug-in 12cR3 Bundle Patch Bug List (Doc ID 1613177.1)
3) Discover Grid Infrastructure and RAC targets - The above setup script will discover cluster, Grid Infrastructure, RAC database, and listener targets. Discover Grid Infrastructure, ASM, and RAC targets manually if required.
4) Please note that this setup script will not discover the Oracle Exadata Database Machine target in EM 12c. You need to discover the machine using the following steps:
    • From the Setup menu, select Add Targets, then select “Add Targets Manually”.
    • In the “Add Targets Manually” page, select 'Add Targets Using Guided Process (Also Adds Related Targets)' and Target Type as Oracle Exadata Database Machine.
    • Click Add Using Guided Discovery and follow the wizard.

      5) Setup Database Cloud Using Rapid Start Kit - Once you have set up Exadata management in EM 12c, the next step is to set up the database cloud. Refer to the Rapid Start Kit for setting up the cloud for both DBaaS and Pluggable DBaaS (PDBaaS). This kit will help achieve the following -
      • Create Cloud Admin, SSA Admin and SSA User custom roles
      • Create Cloud Admin, SSA Admin and SSA Users
      • Grant Quota to SSA User custom roles
      • Setup Zones with Placement Policy Constraints
      • Setup Pools with Placement Constraints
      • Setup Service Template/Catalog and grant it to SSA User custom roles.
      Here are brief steps for setting up the Database Cloud using the Rapid Start Kit, available in EM Agent Kit 12.1.0.3.0, after logging in to the first DB node of the Exadata machine as the EM 12c agent owner:
        • Change to the /cloudsetup directory.
        • Review the input files under the config directory: customize dbaas_cloud_input.xml for configuring the DBaaS cloud and pdbaas_cloud_input.xml for configuring Pluggable Database as a Service.
        • Run the following commands to set up DBaaS on the Exadata Machine:
        ./emcli login -username=sysman
        ./emcli @exadata_cloud_setup.py -dbaas
        The above command uses dbaas_cloud_input.xml (under cloudsetup/config) as the input file for configuring DBaaS.
        • To set up PDBaaS on Exadata, use the following command:
        ./emcli @exadata_cloud_setup.py -pdbaas
        The above command uses pdbaas_cloud_input.xml (under cloudsetup/config) as the input file for configuring PDBaaS.
        Note: Currently the Rapid Start kit for DBaaS makes use of the out-of-box 11.2.0.3.0 Database "Exadata Data Warehouse" profile. However, you can create your own DBCA-based profiles and customize dbaas_cloud_input.xml accordingly. Also, if you need to use an RMAN backup based or Snap Clone based profile, you can log in to the EM 12c SSA Portal as the SSA Administrator to create the profile and set up the service template.
        At this stage, you will be able to manage and deliver your Exadata-powered enterprise database cloud using EM 12c.

        Wednesday, January 15, 2014

        New Technical article: Back Up a Thousand Databases Using Enterprise Manager Cloud Control 12c

        Friends,

        I am pleased to announce that a new technical article of mine was published (January 2014) on the Oracle Technology Network.


        Back Up a Thousand Databases Using Enterprise Manager Cloud Control 12c

        This detailed technical article explains the setup and scheduling of full and incremental RMAN database backups for thousands of databases using Enterprise Manager Cloud Control (Enterprise Manager) 12c, and how this is done more easily and efficiently than the older, more time-consuming manual method of writing Unix shell scripts, RMAN scripts, and cron jobs for each database to be backed up.

        And with the Database Group Backup feature, new to Enterprise Manager Cloud Control 12c, it is even faster to set up RMAN backups for multiple databases - even thousands - that are part of an Enterprise Manager Database Group.

        The article also highlights the advantages of using PDBs in Oracle Database 12c and backing them up using RMAN. RMAN cannot back up individual schemas, and it has always been difficult to perform point-in-time recovery (PITR) at an individual schema level, since schemas can easily be distributed across multiple tablespaces. The advantage of using PDBs in a Container Database is that you can easily set up RMAN backups at the Container Database level, and yet perform PITR at the PDB level. This is a clear technical advantage of the Multitenant architecture of Oracle Database 12c.

        The setup and scheduling of RMAN database backups forms part of the base database management features of Enterprise Manager, which draw numerous customers to use Enterprise Manager 12c more and more. In fact, I personally introduced Enterprise Manager to HDFC Bank in India in 2007 for the purpose of their RMAN backups; they started using it for the first time then, and today they are a DBaaS-on-Exadata reference customer that has presented at OOW for the last two years.

        Regards,


        Porus.

        Thursday, January 9, 2014

        What is EM 12c DBaaS Snap Clone?


        Happy New Year to all! For the first blog post of the new year, let's look at a relatively new feature in EM that has gained significant popularity over the last year - EM 12c DBaaS Snap Clone.
        The ‘Oracle Cloud Management Pack for Oracle Database’, a.k.a. the Database as a Service (DBaaS) feature in EM 12c, has grown tremendously since its release two years ago. It started with basic single instance and RAC database provisioning, a technical service catalog, an out-of-box self service portal, metering and chargeback, etc. Since then we have added provisioning of schemas and pluggable databases, full clones using RMAN backups, and Snap Clone. This video showcases the various EM 12c DBaaS features.
        This post covers one of the most exciting and popular features - Snap Clone. In one line, Snap Clone is a self service way of creating rapid and space efficient clones of large (~TB) databases.
        Self Service - empowers the end users (developers, testers, data analysts, etc.) to get access to database clones whenever they need them.
        Rapid - refers to the time it takes to clone the database: minutes, not hours, days, or weeks.
        Space Efficient - represents the significant reduction in storage (>90%) required for cloning databases.
        Customer Scenario 
        To best explain the benefits of Snap Clone, let’s look at a Banking customer scenario:
        • 5 production databases total 30 TB of storage
        • All 5 production databases have a standby
        • Clones of the production database are required for data analysis and reporting
        • 6 total clones across different teams every quarter
        • For security reasons, sensitive data has to be masked prior to cloning
        Based on the above scenario, the storage required, if using traditional cloning techniques, can be calculated as follows:
        5 Prod DB                  = 30 TB
        5 Standby DB            = 30 TB
        5 Masked DB             = 30 TB (These will be used for creating clones)
        6 Clones (6 * 30 TB) = 180 TB
                                       ------------------
        Total                           = 270 TB
        Time = days to weeks
        As the numbers indicate, this is quite horrible. Not only does 30 TB turn into 270 TB, but creating 6 clones of all production databases would take forever. In addition, there are other issues with data cloning:
        • Lack of automation. Scripts are good but often not a long-term solution.
        • Traditional cloning techniques are slow, while existing storage vendor solutions are DBA-unfriendly
        • Data explosion often outpaces storage capacity and hurts IT's ability to provide clones for dev and testing
        • Archaic processes that require multiple users to share a single clone, or that only support fixed refresh cycles
        • Different priorities between DBAs and Storage admins
        Snap Clone to the Rescue 
        All of the above issues lead to slow turnaround times; users have to wait days or weeks to get access to their databases. Basically, we end up with competing priorities and requirements: the user demands self service access, rapid cloning, and the ability to revert data changes, while IT demands standardization, better control, reduction in storage and administrative overhead, better visibility into the database stack, etc.
        EM 12c DBaaS Snap Clone tries to address all these issues. It provides:
        • Rapid and space efficient cloning of databases by leveraging storage copy-on-write (or similar) technology
        • Support for all database versions from 10g to 12c
        • Support for various storage vendors and configurations (NAS and SAN)
        • Lineage and association tracking between clone master and its various clones and snapshots
        • 'Time Travel' capability to restore and access past data
        • Deep visibility into storage, OS, and database layer for easy triage of performance and configuration issues
        • Simplified access for end user via out-of-the-box self service portal
        • RESTful APIs to integrate with custom portals and third party products
        • Ability to meter and charge back on the clone databases
        So how does Snap Clone work?
        The secret sauce lies in the Storage Management Framework (SMF) plug-in. This plug-in sits between the storage system and the DBA, and provides the much needed layer of abstraction required to shield DBAs and users from the nuances of the different storage systems. At the storage level, Snap Clone makes use of storage copy-on-write (or similar) technology. There are two options in terms of using and interacting with storage:
        1. Direct connection to storage: Here, storage admins can register NetApp and ZFS storage appliances with EM, and EM then connects directly to the storage appliance and performs all required snapshot and clone operations. This approach requires you to license the relevant options on the storage appliance, but it is the easiest, most efficient, and most fault tolerant approach.
        2. Connection to storage via the ZFS file system: This is a storage vendor agnostic solution that can be used by any customer. Here, instead of EM connecting to the storage directly, the storage admin mounts the volumes on a Solaris server and formats them with the ZFS file system. All snapshot and clone operations required on the storage are then conducted via the ZFS file system. The good thing about this approach is that it does not require thin cloning options to be licensed on the storage, since the ZFS file system provides these capabilities.
        For more details on how to set up and use Snap Clone, refer to a previous blog post.
        Now, let's go back to our Banking customer scenario and see how Snap Clone helped them reduce their storage cost and time to clone.
        5 Prod DB                  = 30 TB
        5 Standby DB               = 30 TB
        5 Masked DB                = 30 TB
        6 Clones (6 * 5 * 2 GB)    = 60 GB (instead of 6 * 30 TB = 180 TB)
                                   ------------------
        Total                      = 90 TB (down from 270 TB)
        Time                       = minutes (down from days to weeks)
        Assuming the clone databases will see minimal writes, we allocate about 2 GB of write space per clone. For 6 clones of each of the 5 production databases, this totals just 60 GB of required storage - a whopping 99.97% savings over the 180 TB that full clones would need. Plus, these clones are created in a matter of minutes, not the usual days or weeks. The product has out-of-the-box charts that show the storage savings across all storage devices and cloned databases. See the screenshot below.
        [Screenshot: Snap Clone Savings]
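        If you want to sanity-check that 99.97% figure, here is a quick back-of-the-envelope calculation in plain Python (illustrative only; the numbers come straight from the scenario above):

        # Back-of-the-envelope check of the clone storage savings quoted above
        GB_PER_TB = 1024.0
        traditional = 6 * 30 * GB_PER_TB     # 6 full clones of 30 TB each, in GB
        snap = 6 * 5 * 2.0                   # 6 clones x 5 DBs x ~2 GB write space
        savings = 100 * (1 - snap / traditional)
        print('%.2f%% savings on clone storage' % savings)   # prints ~99.97%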
        Where can you use Snap Clone databases?
        As I said earlier, Snap Clone is most effective when cloning large (~TB) databases. Common scenarios where we see customers make the best use of Snap Clone are:
        • Application upgrade testing. For example, an E-Business Suite upgrade to R12.
        • Functional testing. For example, testing using production datasets.
        • Agile development. For example, running parallel development sprints by giving each sprint its own cloned database.
        • Data analysis and reporting. For example, stock market analysis at the close of market every day.
        It's obvious that Snap Clone has a strong affinity with applications, since it is application data that you want to clone and use. Hence it is important to add that the Snap Clone feature, when combined with EM 12c Middleware as a Service (MWaaS), can provide a complete end-to-end self service application deployment experience. If you have existing portals or need to integrate Snap Clone with existing processes, use our RESTful APIs for easy integration with third party systems.
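        To give a flavor of such an integration, here is a minimal Python sketch using the popular requests package. Note that the OMS URL, resource paths, and payload fields below are placeholders invented for illustration; consult the EM 12c Cloud REST API documentation for the actual resource model:

        # Illustrative only: the URL, paths, and payload fields are hypothetical.
        import json
        import requests

        OMS = 'https://myoms.example.com:7802'   # hypothetical OMS host and port
        AUTH = ('SSA_USER', 'password')          # a self service user's credentials

        # Browse the self service catalog (hypothetical entry point)
        catalog = requests.get(OMS + '/em/cloud', auth=AUTH, verify=False)
        print(catalog.status_code)

        # Request a new clone in a chosen zone (hypothetical path and payload)
        payload = {'name': 'dev_clone_01', 'description': 'sprint 12 test database'}
        r = requests.post(OMS + '/em/cloud/dbplatform/dbzone/EXAMPLE_ZONE',
                          auth=AUTH, verify=False,
                          data=json.dumps(payload),
                          headers={'Content-Type': 'application/json'})
        print(r.status_code, r.text)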
        In summary, Snap Clone is a new and exciting way of dealing with data cloning challenges. It shields DBAs from the nuances of different storage systems, while allowing end users to request and use clones in a rapid and self service fashion. All of this while saving storage costs. So try this feature out today, and your development and test teams will thank you forever.
        In subsequent blog posts, we will look at some popular deployment models used with Snap Clone.
        -- Adeesh Fulay (@adeeshf)

        Database Lifecycle Management for Cloud Service Providers


        Adopting the Cloud Computing paradigm enables service providers to maximize revenues while driving capital costs down through greater efficiencies in working capital and OPEX. In the case of an enterprise private cloud, corporate IT, which plays the role of the provider, may not be interested in revenues, but it still cares about providing differentiated service at lower cost. Efficiency and cost are what eventually make the service profitable and sustainable. This basic tenet has to be satisfied irrespective of the type of service: infrastructure (IaaS), platform (PaaS), or software application (SaaS). In this blog, we specifically focus on the database layer and how its lifecycle gets managed by service providers.

        Any service provider needs to ensure that:
        • The hardware and software population is under control. As new consumers come in and some consumers retire, there is a constant flux of resources in the data center. This flux has to be managed and controlled
        • The platform for providing the service is standardized, so that operations can be conducted predictably and at scale across a pool of resources
        • Mundane and repeatable tasks like backup, patching, etc. are automated
        • Customer attrition does not happen owing to heightened compliance risk
        While the Database Lifecycle Management features of Enterprise Manager have been widely adopted, I feel that the applicability of these features to service providers is not yet well understood, and hence not fully appreciated. In this blog, let me try to address how the lifecycle management features can be effective in addressing each of the above requirements.
        1. Controlling hardware and software population:
        Enterprise Manager 12c provides a near real-time view of the assets in a data center. It comes with out-of-box inventory reports that show the current population and the growth trend within the data center. The inventory can be further sliced and diced based on cost center, owner, etc. In a cloud, whether private or public, the target properties of each asset can be appropriately populated so that the provider can easily figure out the distribution of assets; for example, how many databases are owned by the Marketing LOB can be easily answered. The flux within the data center is usually higher when virtualization techniques such as server virtualization and the Oracle 12c multitenant option are used. These technologies make the provisioning process extremely nimble, potentially leading to a higher number of virtual machines (VMs) or pluggable databases (PDBs) within the data center, and hence accentuating the need for such ongoing reporting. The inventory reports can also be created using BI Publisher and delivered to non-EM users, such as a CIO.
        Now, not all reports can always be readily available. There can be situations where a data center manager seeks ad hoc information, such as how many databases owned by a particular customer are running on Exadata. This involves an ad hoc query based upon an association (a database running on Exadata) and a target property (the owner being that customer). Enterprise Manager 12c provides a sophisticated Configuration Search feature that lets administrators define such ad hoc queries and save them for reuse.
        2. Standardization of platform:
        The massive standardization of platform components is not merely a nice-to-have for a cloud service provider; it is a must-have. A provider may choose to offer various levels of service, tagged with labels such as gold, silver, and bronze. However, for each such level, the platform components need to be standardized, not only for ease of manageability but also for ensuring consistency of QoS across all the tenants. So how can the platform be standardized? We can highlight two major Enterprise Manager 12c features here:
        The ability to roll out gold images that can be version controlled within Enterprise Manager's Software Library. The inputs of the provisioning process can be "locked down" by the designer of the provisioning process, thereby ensuring that each deployment is a replica of the others.
        The ability to compare the configuration of deployments (often referred to as the "points of delivery" of the services). This is a very powerful feature that supports 1-n comparisons across multiple tiers of the stack. For example, one can compare an entire database machine, from storage cells and compute nodes to databases, with one or more other machines.
        3. Automation of repeatable tasks:
        A large portion of OPEX for a service provider is expended in executing mundane and repeatable tasks like backup, log file cleanup, or patching. Enterprise Manager 12c comes with an automation framework, comprising Jobs and Deployment Procedures, that lets administrators define these repetitive actions and schedule them as needed. EM's task automation framework is scalable and carries functions such as the ability to schedule, resume, and retry, which are of paramount importance in conducting mass operations in an enterprise scale cloud. The task automation verbs are also exposed through the EM CLI interface, and Oracle Cloud administrators make extensive use of EM CLI for large scale operations on thousands of tenant services.
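        As a small taste of what that looks like, here is a minimal EM CLI script-mode sketch; get_targets() is a standard verb, though the JSON field names in its output ('Target Name', 'Status') should be checked against your release:

        # Run with: ./emcli @report_down_databases.py
        # Reports every database target that is not currently up.
        login(username='sysman')                       # prompts for the password

        resp = get_targets(targets='oracle_database')  # all database instance targets
        for t in resp.out()['data']:                   # script mode output is JSON
            if t['Status'] != 'Up':
                print t['Target Name'] + ' is ' + t['Status']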
        One of the most popular features of Enterprise Manager 12c is the set of out-of-box procedures for patch automation. The patching procedures can patch the Linux operating system, clusterware, and the database. To minimize the downtime involved in the patching process, Enterprise Manager 12c also supports out-of-place patching, which prepares the patched software ahead of time and migrates the instances one by one as needed. This technique is widely adopted by service providers to make sure the tenants' downtime-related SLAs are respected and adhered to. The coordination of such downtime can be instrumented with Enterprise Manager 12c's blackout functionality.
        4. Managing Compliance risks:
        In a service driven model, the provider is liable in case of security breaches. The consumer, and in turn the customers of the consumer's apps, need to be assured that their data is not breached owing to platform level vulnerabilities. Security breaches often happen owing to faulty configuration, such as default passwords, relaxed file permissions, or an open network port. The hardening of the platform therefore has to be done at all levels: OS, network, database, etc. To manage compliance, administrators can create baselines, referred to as Compliance Standards. Any deviation from the baselines triggers compliance violation notifications, alerting administrators to resolve the issue before it creates risk in the environment.
        We can therefore see how four major asks from a service provider can be satisfied with the Lifecycle Management features of Enterprise Manager 12c. As substantiated through several third party studies and customer testimonials, these result in higher efficiency with lower OPEX.

        Using EM CLI for mass update of Lifecycle Status Property Value


        I co-presented a session at Oracle Open World in September, "Manage Beyond Limits: Enterprise Manager CLI and Other Extensibility Features". I focused on the enhancements to the Enterprise Manager Command Line Interface, EM CLI. I enthused about the two new modes, Interactive and Script, and how they compare to the standard mode of previous releases - from the SQL*Plus-like environment of Interactive mode to the scalable, JSON formatted output of Script mode. I highlighted the ease of use and the scalable power of EM CLI.
        After my session a number of you asked me for a copy of the scripts that I demoed. This is one of them.
        Why do we take on the extra task involved in learning something new? …because we know it will lead to personal growth, ultimately solve a problem or two, and maybe even look good on our resume. Learning Jython scripting will tick all of those boxes. Plus, it’s fun!
        This script tries to solve the problem of mass updates to the Lifecycle Status property value. This property, new in Oracle Enterprise Manager 12c, can be used to indicate the importance of a target, e.g. “Mission Critical”, or to record where a target is in its life cycle, e.g. “Stage”, “Test”, or “Production”. Consider a new deployment of several hundred Oracle Databases, half of which are “Mission Critical” and the other half in “Test”, but about to go “Production”.
        What is the best way to transition from “Test” to “Production”?
        EM CLI in script mode!
        EM CLI in script mode takes advantage of the Jython scripting language to use Enterprise Manager in a programmatic way, allowing task automation. The EM CLI Jython script below automates the setting of the Lifecycle Status property value, and uses standard programming constructs to make iterating through several targets simpler, more robust, and less error prone.
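        For a single target, the same property can be set with one procedure call in interactive or script mode (the target name here is illustrative):

        # One-off update of a single target's Lifecycle Status property
        set_target_property_value(
            property_records='findb:oracle_database:LifeCycle Status:Production')

        The script discussed below simply wraps this call in a query-and-loop, so that hundreds of targets can be transitioned in one run.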
        At a high level, every EM CLI Jython script can effectively be broken down into two parts:
        Step 1: Setting and defining the necessary variables, such as which OMS URL to connect to, how secure you want your communication channel to be, and which Administrator account to use to log in to the OMS.
        Step 2: Calling or manipulating EM CLI 12c procedures. Procedures were called verbs in previous releases; verb options are now procedure arguments in script and interactive mode. You can explore the online verb reference for more information.
        Let’s break the script down further in to the major functional blocks of code.
        Line 19: Sets the variable EMCLI_OMS_URL, which determines which OMS URL we shall connect to.
        Line 21: Sets the variable EMCLI_TRUSTALL, which determines the level of security associated with the communication channel between the EM CLI and the OMS. We are choosing the lowest level of security.
        Both of these variables could also have been set as environment variables.
        Lines 26-40: In the if-else block, we check for arguments passed to the script; we are passing two arguments into this script. This is what calling an EM CLI Jython script with arguments looks like on the command line:
        $>./emcli @oow_demo2.py OWUSER Production
        Where:
        @oow_demo2.py - is the name of our Jython Script.
        OWUSER - is the username used to log in to the OMS; the script will prompt for a password to authenticate this user. The mode of authentication is the same as that configured for the Console. The authentication modes supported are Repository, SSO, and LDAP.
        Production - is the Lifecycle Status property value we shall set.
        Line 27: We log in to the OMS.
        Line 29: We search for all targets where the version, “DBVersion”, is greater than or equal to 12.1. This is passed to an internal procedure defined on Line 10.
        Line 11: We construct the SQL command based on the arguments passed in, then use the EM CLI list() procedure to convert the returned output to easily parseable, JSON formatted syntax (line 15). We then return the Response object, obj (line 16). The information returned comprises all the targets of the appropriate version.
        Line 37: We then take that information and parse it, filtering further on oracle_database target types. Finally we parse and print TARGET_NAME, TARGET_TYPE, PROPERTY_NAME, and PROPERTY_VALUE for all databases that fit our criteria.
        Line 39: We call the set_target_property_value() procedure, which accepts a colon separated list of property records in the form TARGET_NAME:TARGET_TYPE:PROPERTY_NAME:PROPERTY_VALUE.
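        Here is a minimal sketch of the script, reconstructed from the walkthrough above. The script-mode procedures it uses (set_client_property(), login(), list(), set_target_property_value()) are standard EM CLI, but the repository view queried, the placeholder OMS URL, and the line numbering are my assumptions and will differ from the demoed original:

        # oow_demo2.py - minimal sketch reconstructed from the walkthrough above;
        # line numbers will not match the original demo script.
        # Run with: ./emcli @oow_demo2.py OWUSER Production
        import sys

        # Internal procedure (the walkthrough's "Line 10"): build the SQL, run the
        # list() procedure for JSON output, and return the Response object.
        # Note: the string comparison on version is approximate, for illustration.
        def get_db_targets(min_version):
            sql = ("select target_name, target_type, property_name, property_value "
                   "from mgmt$target_properties "
                   "where property_name = 'DBVersion' "
                   "and property_value >= '" + min_version + "'")
            obj = list(sql=sql)   # EM CLI list() procedure, JSON formatted output
            return obj

        # Step 1: connection variables (could also be set as OS environment variables)
        set_client_property('EMCLI_OMS_URL', 'https://myoms.example.com:7802/em')  # placeholder
        set_client_property('EMCLI_TRUSTALL', 'true')   # lowest level of security

        # Step 2: check arguments, log in, and call EM CLI procedures
        if len(sys.argv) == 2:
            username = sys.argv[0]     # e.g. OWUSER
            lifecycle = sys.argv[1]    # e.g. Production
            login(username=username)   # prompts for this user's password
            for row in get_db_targets('12.1').out()['data']:
                if row['TARGET_TYPE'] == 'oracle_database':
                    print row['TARGET_NAME'], row['TARGET_TYPE'], \
                          row['PROPERTY_NAME'], row['PROPERTY_VALUE']
                    # colon separated property record: NAME:TYPE:PROPERTY:VALUE
                    rec = (row['TARGET_NAME'] + ':' + row['TARGET_TYPE'] +
                           ':LifeCycle Status:' + lifecycle)
                    set_target_property_value(property_records=rec)
        else:
            print 'Usage: ./emcli @oow_demo2.py <username> <lifecycle_status>'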
        Copy the code, save it with a *.py extension, and change the EMCLI_OMS_URL value to the valid OMS URL for your environment.
        Play around with it, and take your Jython scripting knowledge from Test to Production.

        Implementing Service Level Agreements in Enterprise Manager 12c for Oracle Packaged Applications

        Contributed by Eunjoo Lee, Product Manager, Oracle Enterprise Manager.
        Service Level Management, or SLM, is a key tool in the proactive management of any Oracle Packaged Application (e.g., E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, Fusion Apps, etc.). The benefits of SLM are that administrators can utilize representative Application transactions, which are constantly and automatically running behind the scenes, to verify that all of the key application and technology components of an Application are available and performing to expectations.
        A single transaction can verify the availability and performance of the underlying Application Tech Stack in a much more efficient manner than by monitoring the same underlying targets individually.
        In this article, we’ll be demonstrating SLM using Siebel Applications, but the same tools and processes apply to any of the Packaged Applications mentioned above. In this demonstration, we will log in to the Siebel Application, navigate to the Contacts View, update a contact phone record, and then log out.
        This transaction exposes availability and performance metrics of multiple Siebel Servers, multiple Components and Component Groups, and the Siebel Database, all in a single unified manner. We can then monitor and manage these transactions like any other target in EM 12c, including placing proactive alerts on them if a transaction is either unavailable or not performing to required levels.
        The first step in the SLM process is recording the Siebel transaction. The following screenwatch demonstrates how to record a Siebel transaction using an EM tool called “OpenScript”. A completed recording is called a “Synthetic Transaction”.
        The second step in the SLM process is uploading the Synthetic Transaction into EM 12c, and creating Generic Service Tests. We can create a Generic Service Test to execute our synthetic transactions at regular intervals to evaluate the performance of various business flows. As these transactions are running periodically, it is possible to monitor the performance of the Siebel Application by evaluating the performance of the synthetic transactions. The process of creating a Generic Service Test is detailed in the next screenwatch. EM 12c provides a guided workflow for all of the key creation steps, including configuring the Service Test, uploading of the Synthetic Test, determining the frequency of the Service Test, establishing beacons, and selecting performance and usage metrics, just to name a few.
        The third and final step in the SLM process is the creation of Service Level Agreements (SLA). Service Level Agreements allow Administrators to utilize the previously created Service Tests to specify expected service levels for Application availability, performance, and usage. SLAs can be created for different time periods and for different Service Tests. This last screenwatch demonstrates the process of creating an SLA, as well as highlights the Dashboards and Reports that Administrators can use to monitor Service Test results.
        Hopefully, this article provides you with a good starting point for creating Service Level Agreements for your E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, or Fusion Applications. Enterprise Manager Cloud Control 12c, with the Application Management Suites, represents a quick and easy way to implement Service Level Management capabilities at customer sites.

        Disclaimer

        Opinions expressed in this blog are entirely the opinions of the writers of this blog, and do not reflect the position of Oracle Corporation. No responsibility will be taken for any resulting effects if any of the instructions or notes in the blog are followed; readers follow them at their own risk and liability.