To get national healthcare right requires Adaptive Intelligence

http://blogs.forrester.com/james_staten/13-07-30-to_get_national_healthcare_right_requires_adaptive_intelligence


With the employer mandate delays being the latest setback to U.S. President Obama's push for national healthcare, it's worth looking at how other countries are successfully tackling the same problem. The United Kingdom has had nationalized healthcare for years, and one of the things that makes this effort so successful is its approach to data collaboration, something Forrester calls Adaptive Intelligence.

While the UK hasn't yet moved fully to electronic health records, it has in place today a health records sharing system that lets its more than 27,000 member organizations string together patient care information across providers, hospitals, and ministries, creating a fuller and more accurate picture of each patient, which results in better care. At the heart of this exchange is a central data sharing system called Spine. It's through Spine that all the National Health Service (NHS) member organizations connect their data sets for integration and analysis. The data-sharing model Spine creates has been integral to the creation of summary care records across providers, an electronic prescription service, and highly detailed patient care quality analysis. As we discussed in the Forrester report "Introducing Adaptive Intelligence," no one company can, on its own, create an accurate picture of its customers or its business; it must collaborate on the data and analysis with other organizations whose complementary views flesh out the picture.

Forrester defines Adaptive Intelligence as:
Real-time, multi-directional sharing of data to derive contextually appropriate, authoritative knowledge that helps maximize business value.

And this value is exactly what Spine provides for the NHS.

Spine, however, is coming to the end of its life. It's based on a traditional scale-up, three-tier architecture with an Oracle database at the back end. Because Spine is accessed by so many organizations, and is part of so many technology-supported business processes, its availability, change costs, and response times are critical to the NHS. As such, the NHS is deep into the development and testing of Spine 2, which moves the core data sharing system to a modern application architecture that is much more flexible and agile. The front and middle tiers are built mostly on open source technologies, including RabbitMQ, Redis, Tornado, and Splunk. The back-end relational database is being replaced by a NoSQL key-value store using Riak from Basho Technologies, while the front tiers will maintain existing XML-based service connections and integrations. The NHS chose this new architecture primarily to meet its cost and performance requirements.

“We’re looking for a 90 percent reduction in response times for key messages and a radical improvement in the cost of change,” said Martin Sumner, the technical lead on the Spine 2 project.

Sumner added that the architectural shift, and the broader change in approach to using in-house expertise supported by subject matter experts (SMEs) and Agile techniques, had reduced the core technical team to fewer than 30 people. This is a tremendous improvement over Spine 1, which the incumbent supplier considered one of the largest IT programs in the world, “consuming over 15,000 man-years of effort to date.”

Core to any effective Adaptive Intelligence sharing project is good data quality management, which, in a healthcare context, requires consistent patient identifiers across the system. In this case, the NHS, as the data authority, maintains a single NHS number for each UK citizen, which serves as the personal identifier within the system. One of the keys to performance improvement in Spine 2 is the separation of the system into three services: core real-time data integration services, identity management, and secondary uses. This will let the NHS scale and manage the availability of each service separately, another key best practice in modern application design.
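The payoff of a single authoritative identifier is easy to sketch: partial records held by different providers can be merged on the NHS number alone. A minimal illustration in Python (the data, field names, and identifier below are invented for the example):

```python
# Sketch: merging per-provider patient records keyed on a single
# national identifier. All names and data here are invented.

def merge_records(*sources):
    """Fold partial records from several providers into one
    combined record per patient identifier."""
    merged = {}
    for source in sources:
        for nhs_number, fields in source.items():
            merged.setdefault(nhs_number, {}).update(fields)
    return merged

# Two providers each hold a fragment of the same patient's history.
gp_records       = {"943 476 5919": {"allergies": ["penicillin"]}}
hospital_records = {"943 476 5919": {"last_admission": "2013-05-02"}}

patient_view = merge_records(gp_records, hospital_records)
# patient_view["943 476 5919"] now carries both fragments.
```

Without a shared identifier, the same merge would need fuzzy matching on names and addresses, which is exactly the data quality problem a single national number avoids.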

Unlike the current Spine, the next version will give the NHS performance efficiency through horizontal scaling, a more synchronous messaging interface, innovative records management, and a modern, efficient storage system. The data set isn’t huge — Sumner said Spine is in the low TBs in size today — but is widely used and at the heart of an integration culture that has grown around the model of Adaptive Intelligence.

So while the U.S. Congress is kept busy debating the timing for its move to national healthcare, the IT side of the government might want to take a look at the data sharing model it will need to put in place once a date is finally agreed upon.


AT&T turns its data into an adaptive intelligence business

http://www.zdnet.com/at-and-t-turns-its-data-into-an-adaptive-intelligence-business-7000021348/

There's no doubt that, to consumer marketing professionals, data about the users of a mobile network is highly valuable. But AT&T is finding that enterprise application designers, corporate security and risk professionals, corporate trainers, and CFOs are very interested in this data as well, so much so that the US-based network operator is turning access to and collaboration on its data into a new business service.

Under the guidance of Laura Merling, VP of Ecosystem Development & Platform Services (and formerly of Mashery), AT&T Business Solutions is embarking on an ambitious plan for sharing its data in a secure, programmatic fashion leveraging RESTful APIs. It had previously shared its data more informally with selected partners and customers but found this approach difficult to standardize and repeat at larger scale. It has also participated in data collaboration efforts such as the well-known hackathon with American Airlines at South by Southwest earlier this year.

Now it is formalizing this effort and taking some key Adaptive Intelligence best-practice steps to ensure secure data sharing while easing enterprise developer access and use. The approach starts with good data management, which involves aggregating data from multiple systems and anonymizing it to protect the identities and personal information of AT&T mobile clients. From here, AT&T has created programmable interfaces to each of its data sets that ensure read-only access to the data, governed by enforceable business policies.
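The article doesn't describe AT&T's anonymization mechanics, but a common technique for this step is replacing each direct identifier with a salted one-way hash, so records stay linkable without exposing the subscriber. A hedged sketch (the salt, record fields, and values are all invented):

```python
import hashlib

# Invented secret salt; in practice it would be managed and rotated.
SALT = b"example-secret-salt"

def pseudonymize(subscriber_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash:
    the same input always maps to the same token, but the token
    cannot be reversed to recover the original identifier."""
    return hashlib.sha256(SALT + subscriber_id.encode()).hexdigest()[:16]

record = {"subscriber": "+1-555-0100", "cell_tower": "TX-0042"}
shared = {"subscriber": pseudonymize(record["subscriber"]),
          "cell_tower": record["cell_tower"]}
# 'shared' can be aggregated and exposed through a read-only API
# without carrying the real subscriber identifier.
```

Because the hash is deterministic, downstream consumers can still group and join records by subscriber token, which is what makes the aggregated analytics possible.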

What data is it sharing? It starts with location-based data and analytics (GPS ping data aggregated to show patterns of mobile movement), followed by anonymized and aggregated customer data and insights (matching ping data to customer group data, such as income, age, and type of mobile device used). But the data sharing strategy isn't just about learning about mobile device use. It also covers network performance and use patterns (including AT&T's wifi hot spot network), analyzing the AT&T bill to identify patterns and optimize your own use, and eventually viewing AT&T as a data authority on user identity and access management. How would this work? Well, as Merling put it, AT&T already verifies the identity of its customers. It therefore has the ability to help mobile app developers, enterprise security and risk professionals, and other network and network service providers validate that the mobile device and the person using it are who they say they are when they ask for access. No, your mobile phone isn't yet your new ID; logins, passwords, and MFA must still accompany that use. But wouldn't it be nice to federate your enterprise identity server to AT&T and get validation that the mobile connection coming over your VPN really is coming from the same user's phone as the user credentials received?

There are eCommerce implications here as well that could make it easier for startups and other companies to take payments. AT&T already takes credit cards from all its mobile users, so why not authorize AT&T to add charges when you buy things through or with your phone? No more sharing your credit card number with various retailers, online or physical.

And AT&T isn't just throwing its data out there along with a bunch of ideas about how you might leverage it. It is crafting each data sharing service in direct partnership with existing enterprise customers that have immediate plans to use this data programmatically. Undoubtedly Verizon, Vodafone, BT, and the other major mobile and wifi network operators are thinking along similar lines. But the Adaptive Intelligence approach AT&T is taking will give it a distinct advantage, as it unifies all its data sharing requests and empowers its customers and prospects to use this data to create the next innovative insights and services. This could not have been achieved, certainly not at the scale now possible, with the closed-data, negotiated-access, one-off sharing arrangements that are common in enterprise data sharing today.

This is the Adaptive Intelligence approach and AT&T is a clear leader in the new approach to big data business intelligence.


About James Staten

James Staten is a Vice President and Principal Analyst at Forrester Research, serving Infrastructure and Operations professionals.



On Connecting, Pivoting, and Learning New Things

https://mkjay.wordpress.com/

Our technologist bypasses TNSNAMES.ORA, turns rows to columns, and meets the NO_DATA_NEEDED exception.

I am trying to use SQL*Plus to connect to an Oracle Database instance directly, using just the connect string (that is, without referencing tnsnames.ora).

I have the following in my tnsnames.ora:

POD=(DESCRIPTION=(ADDRESS_LIST=….))

and I can connect to the database with

sqlplus USER/PASSWORD@POD

However, if I try to connect with

sqlplus USER/PASSWORD@//HOST.NAME:1521/SVC

I see the following error message:

ORA-12154: TNS: could not resolve the connect identifier specified

Is there any way I can connect to the database without referring to tnsnames.ora?

There are multiple answers to this—I’ll present two. First, you seem to be trying to use the easy connect naming method introduced with Oracle Database 10g. This enables you to connect to a database—without using a TNS connect string—by specifying the host and database service to connect to.

So you are receiving this error:

 

$ sqlplus scott/tiger@//host/ora11g

SQL*Plus: Release 11.2.0.1.0 Production on Fri Apr 2 10:25:12
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified

That means that the Oracle client software could not translate that connect string into something that enabled you to find the listener. The problem lies in your sqlnet.ora file: you need to let the client software know that you want to use EZCONNECT as one of your naming methods. If you add

 

NAMES.DIRECTORY_PATH= 
    (TNSNAMES, EZCONNECT)

to your sqlnet.ora configuration file on the client (see download.oracle.com/docs/cd/E11882_01/network.112/e10836/naming.htm#sthref473 for all the details), you'll find that this method now works:

 

$ sqlplus scott/tiger@//host/ora11g

SQL*Plus: Release 11.2.0.1.0 Production on Fri Apr 2 10:30:33
Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Alternatively, if you want to connect without either a sqlnet.ora file or a tnsnames.ora file, you can put the details you would normally place in your tnsnames.ora file right on the command line:

 

$ sqlplus scott/tiger@'(DESCRIPTION=
(ADDRESS=(PROTOCOL=TCP)
(HOST=localhost)(PORT=1521))
(CONNECT_DATA=(SERVER=DEDICATED)
(service_name=ora11gr2)))'
 

This works nicely when you have no control over the configuration files at all and want to connect to a known host/instance. Note that in real life there would be no line breaks and no spaces in the SQL*Plus command line, and you would have to escape special characters based on the shell (or command prompt) you were using.

Pivot Query Examples

Please show an example of a pivot query—to turn rows into columns. Please also illustrate a pivot query using varrays or nested tables.

I’ll show you a couple of techniques—an Oracle Database 11g Release 1 and above ANSI implementation and a technique that works in Oracle Database 10g Release 2 using DECODE (or CASE)—and how to use the COLLECT aggregate built-in function in Oracle Database 10g and above to return a collection of data.

First, here's the latest technique, using the new PIVOT clause. (Note that the converse, UNPIVOT, turns columns into rows.) In Listing 1, I use the SCOTT.EMP table with the goal of turning the JOB data, which usually goes down the page in rows, into columns.

Code Listing 1: Using PIVOT to turn rows into columns

 

SQL> select deptno, clerk, salesman,
  2         manager, analyst, president
  3    from (select deptno, job, sal
  4            from emp )
  5   pivot( sum(sal) for job in
  6   ( 'CLERK' as clerk,
  7     'SALESMAN' as salesman,
  8     'MANAGER' as manager,
  9     'ANALYST' as analyst,
 10     'PRESIDENT' as president ) )
 11   order by deptno
 12  /

    DEPTNO      CLERK   SALESMAN    MANAGER    ANALYST  PRESIDENT
----------    -------   --------    -------    -------  ---------
        10       1300                  2450                  5000
        20       1900                  2975       6000
        30        950       5600       2850

The key in this query is on line 5: the new PIVOT keyword. It works a little like a WHERE clause and GROUP BY all in one. The query starts on lines 3 and 4, where I select the columns I want to work with—DEPTNO, JOB, and SAL. The PIVOT clause includes SUM(SAL) to compute the aggregate SUM(SAL) grouping implicitly by the remaining columns (DEPTNO and JOB). Normally, that result would look like this:

 

 SQL> select deptno, job, sum(sal)
  2    from emp
  3   group by deptno, job
  4   order by deptno, job
  5  /

    DEPTNO JOB         SUM(SAL)
---------- --------    --------
        10 CLERK           1300
        10 MANAGER         2450
        10 PRESIDENT       5000
        20 ANALYST         6000
        20 CLERK           1900
        20 MANAGER         2975
        30 CLERK            950
        30 MANAGER         2850
        30 SALESMAN        5600

9 rows selected.

Another way to pivot rows into columns, one that works in releases preceding Oracle Database 11g, is to use DECODE (or CASE) with aggregation. Using the last query, I'll add the necessary DECODE function calls to achieve the pivot. I'll group only by DEPTNO this time, because the goal is to turn the three rows for DEPTNO 10 into one row, as shown in Listing 2.

Code Listing 2: Using DECODE and aggregation to turn rows into columns

 

SQL> select deptno,
  2    sum( decode( job, 'CLERK', sal ) ) clerk,
  3    sum( decode( job, 'SALESMAN', sal ) ) salesman,
  4    sum( decode( job, 'MANAGER', sal ) ) manager,
  5    sum( decode( job, 'ANALYST', sal ) ) analyst,
  6    sum( decode( job, 'PRESIDENT', sal ) ) president
  7    from scott.emp
  8   group by deptno
  9   order by deptno
 10  /

    DEPTNO      CLERK   SALESMAN    MANAGER    ANALYST  PRESIDENT
----------    -------   --------    -------    -------  ---------
        10       1300                  2450                  5000
        20       1900                  2975       6000
        30        950       5600       2850

As you can see, the code in Listing 2 achieved the same result as the new PIVOT clause (in Listing 1). The trick is to create a sparse matrix and then use aggregation to collapse the redundant rows, aggregating the SAL column as you go along. Without the aggregation, the data would look like it does in Listing 3.

Code Listing 3: Using DECODE without aggregation to create the sparse matrix

 

SQL> select deptno,
  2    ( decode( job, 'CLERK', sal ) ) clerk,
  3    ( decode( job, 'SALESMAN', sal ) ) salesman,
  4    ( decode( job, 'MANAGER', sal ) ) manager,
  5    ( decode( job, 'ANALYST', sal ) ) analyst,
  6    ( decode( job, 'PRESIDENT', sal ) ) president
  7    from scott.emp
  8   order by deptno
  9  /

    DEPTNO      CLERK   SALESMAN    MANAGER    ANALYST  PRESIDENT
----------    -------   --------    -------    -------  ---------
        10                             2450
        10                                                   5000
        10       1300
        20                             2975
        20                                        3000
        20       1100
        20        800
        20                                        3000
        30                  1250
        30                  1500
        30                  1600
        30        950
        30                             2850
        30                  1250

14 rows selected.

All you need to do after that is aggregate by DEPTNO (achieving the one row per department) and sum up the salary.
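The DECODE-and-aggregate trick is not Oracle-specific; any engine with CASE expressions can do the same pivot. As a cross-check of the logic, here is the equivalent query run through Python's built-in sqlite3 against a few rows modeled on SCOTT.EMP:

```python
import sqlite3

# Reproduce the DECODE-style pivot with portable CASE expressions.
conn = sqlite3.connect(":memory:")
conn.execute("create table emp (deptno int, job text, sal int)")
conn.executemany("insert into emp values (?, ?, ?)", [
    (10, "CLERK", 1300), (10, "MANAGER", 2450), (10, "PRESIDENT", 5000),
    (20, "CLERK", 1900), (20, "MANAGER", 2975), (20, "ANALYST", 6000),
    (30, "CLERK", 950),  (30, "MANAGER", 2850), (30, "SALESMAN", 5600),
])

# CASE plays the role of DECODE: each job gets its own column, and
# SUM collapses the sparse matrix down to one row per department.
rows = conn.execute("""
    select deptno,
           sum(case job when 'CLERK'    then sal end) as clerk,
           sum(case job when 'SALESMAN' then sal end) as salesman,
           sum(case job when 'MANAGER'  then sal end) as manager
      from emp
     group by deptno
     order by deptno
""").fetchall()
# rows -> [(10, 1300, None, 2450), (20, 1900, None, 2975), (30, 950, 5600, 2850)]
```

The None values correspond to the blanks in the SQL*Plus output above: a department with no employee in that job simply has nothing to sum.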

The last request is to pivot with a collection. Fortunately, the COLLECT aggregate built-in function has been available since Oracle Database 10g. You might think you could use it as shown in Listing 4, but you'll notice a problem right away: you don't know what jobs those salaries represent. They are ordered by job, but you don't know which job each salary is associated with.

Code Listing 4: First attempt to pivot with the COLLECT aggregate built-in function

SQL> select deptno,
  2         cast(collect(sum_sal order by job) as sys.odciNumberList) sals
  3    from (select deptno, job, sum(sal) sum_sal
  4            from emp
  5           group by deptno, job )
  6  group by deptno
  7  /

    DEPTNO   SALS
----------   ----------------------------------
        10   ODCINUMBERLIST(1300, 2450, 5000)
        20   ODCINUMBERLIST(6000, 1900, 2975)
        30   ODCINUMBERLIST(950, 2850, 5600)

To make that association, you need to go a step further and use your own types that preserve the needed information:

SQL> create or replace type
  2  myScalarType as object
  3  ( job varchar2(9), sal number )
  4  /
Type created.

SQL> create or replace type
  2  myTableType
  3  as table of myScalarType
  4  /
Type created.

Now you have a collection that can represent the job and the salary associated with that job. You can modify your COLLECT query now to use it, as shown in Listing 5. And you get the information you need.

Code Listing 5: Using the COLLECT aggregate built-in function—with types—to pivot

 

SQL> select deptno,
  2         cast(
  3          collect( myScalarType(job,sum_sal) order by job )
  4              as myTableType) sals
  5    from (select deptno, job, sum(sal) sum_sal
  6            from emp group by deptno, job)
  7  group by deptno
  8  /

    DEPTNO   SALS(JOB, SAL)
----------   ---------------------------------
        10   MYTABLETYPE(MYSCALARTYPE('CLERK'
             , 1300), MYSCALARTYPE('MANAGER',
              2450), MYSCALARTYPE('PRESIDENT'
             , 5000))

        20   MYTABLETYPE(MYSCALARTYPE('ANALYST',
              6000), MYSCALARTYPE('CLERK',
              1900), MYSCALARTYPE('MANAGER',
              2975))

        30   MYTABLETYPE(MYSCALARTYPE('CLERK'
             , 950), MYSCALARTYPE('MANAGER',
             2850), MYSCALARTYPE('SALESMAN',
             5600))

So, that shows three methods of "pivoting" a result set, a very common topic on asktom.oracle.com.

I Learn Something New

I am known for saying, “I learn something new about Oracle Database pretty much every day.” Recently I learned something new about PL/SQL pipelined functions. I’ve been using pipelined functions since they were first introduced in Oracle9i Database, almost 10 years ago. I thought I knew everything I needed to know about them, but I was wrong.

Recently on asktom.oracle.com, I was asked a question about the predefined exception NO_DATA_NEEDED. At first I thought it was a typo and that the person really meant NO_DATA_FOUND, because I hadn't heard of or read about that exception. But after investigating a little deeper, I discovered what it was.

NO_DATA_NEEDED is a very important exception to be aware of if you ever write a pipelined function. In fact, if you haven’t heard of it and you have written a pipelined function, there is a very good chance you have a bug lurking in your code! This exception is raised only in the context of a pipelined PL/SQL function and only when the pipelined function is willing to return more data but the invoking SQL statement doesn’t need any more data (hence the name NO_DATA_NEEDED). In general, a pipelined function would look like this:

 

create or replace function
foo( inputs ... )
return some_type
PIPELINED
as
    /* declaration */
begin
    /* initialization */

    loop  /* process */
        pipe row(i);
    end loop;

    /* clean up */
    return;
end;
/

The /* clean up */ code would be used to release any resources allocated in the initialization code. For example, it would close any open cursors, close any open files, and close down any open resources cleanly.

So what would happen if the /* clean up */ code were not invoked, not because of an error but because the calling SQL statement never exhausted the output of the function? For example:

 

SQL> create or replace function
  2  generate_data( n in number )
  3  return sys.odciNumberList
  4  PIPELINED
  5  as
  6  begin
  7      dbms_output.put_line
  8      ( '===>>> INITIALIZE' );
  9      for i in 1..generate_data.n
 10      loop
 11          dbms_output.put_line
 12          ( '===>>> PROCESS' );
 13          pipe row(i);
 14      end loop;
 15      dbms_output.put_line
 16      ( '===>>> CLEAN UP' );
 17      return;
 18  end;
 19  /

Function created.

That is a pretty straightforward PL/SQL pipelined function. If you run it to completion, you will see this output:

 

SQL> select *
  2    from table(generate_data(2));
COLUMN_VALUE
----------------
           1
           2

===>>> INITIALIZE
===>>> PROCESS
===>>> PROCESS
===>>> CLEAN UP

This is expected. But what if you don’t fetch two rows from that function; what if you fetch only one?

 

SQL> select *
  2    from table(generate_data(2))
  3   where rownum = 1;

COLUMN_VALUE
----------------
           1

===>>> INITIALIZE
===>>> PROCESS

Next Steps

ASK Tom: Tom Kyte answers your most difficult technology questions. Highlights from that forum appear in this column.
As you can see, the generate_data function was initialized and executed the process section of the code once, but the rest was simply skipped, because the invoking SQL statement did not need it. And I didn't see any error message (I would have expected an unhandled exception to raise an error!), so the code appeared to have worked.

An exception was raised, however. It was an exception that does not have to be caught, and it will be ignored entirely if it is not caught. It differs from every other exception in this regard: we would all expect an unhandled exception to propagate to the client and appear as an error. Let’s see what happens with the code if I implement this error handler:

 

SQL> create or replace function
  2  generate_data( n in number )
  3  return sys.odciNumberList
  4  PIPELINED
  5  as
  6  begin
  7      dbms_output.put_line
  8      ( '===>>> INITIALIZE' );
  9      for i in 1..generate_data.n
 10      loop
 11          dbms_output.put_line
 12          ( '===>>> PROCESS' );
 13          pipe row(i);
 14      end loop;
 15      dbms_output.put_line
 16      ( '===>>> CLEAN UP' );
 17      return;
 18  exception
 19      when no_data_needed
 20      then
 21          dbms_output.put_line
 22          ( '***>>> CLEAN UP' );
 23          return;
 24  end;
 25  /
Function created.

On line 19, I catch the predefined NO_DATA_NEEDED exception, and on line 21, I announce that I am cleaning up (releasing any resources that need to be released). Now when I run this pipelined function without exhausting it, I see

 

SQL> select *
  2    from table(generate_data(2))
  3   where rownum = 1;

COLUMN_VALUE
----------------
           1

===>>> INITIALIZE
===>>> PROCESS
***>>> CLEAN UP

As you can see, my special cleanup code (I used ***>>> to announce it) was executed and I could clean up any resources I allocated.

Now this question might arise: what if I used a WHEN OTHERS exception block instead? The answer is that, in this case, the output would be the same as when NO_DATA_NEEDED was caught, but it would be the wrong approach, the wrong way to code. WHEN OTHERS is far too powerful here and far too broad-ranging in general. Here I want to clean up when the invoker does not need all the results, and the NO_DATA_NEEDED exception exists for precisely this purpose.

So in the future, when you’re looking at pipelined functions, remember NO_DATA_NEEDED and use it when appropriate.
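As an aside, PL/SQL is not alone here. Python generators face the same early-abandonment problem and solve it analogously: abandoning a generator raises GeneratorExit inside it, and a try/finally block plays the role of the NO_DATA_NEEDED handler. A sketch of the parallel (the function below mirrors generate_data but is otherwise invented):

```python
events = []

def generate_data(n):
    """Pipelined-function analog: yields rows one at a time and
    cleans up even if the consumer stops early (Python raises
    GeneratorExit inside the generator when it is closed)."""
    events.append("INITIALIZE")
    try:
        for i in range(1, n + 1):
            events.append("PROCESS")
            yield i
    finally:
        events.append("CLEAN UP")  # runs on exhaustion or early close

gen = generate_data(2)
first = next(gen)   # fetch only one row...
gen.close()         # ...then abandon the cursor
# events -> ["INITIALIZE", "PROCESS", "CLEAN UP"]
```

Just as with NO_DATA_NEEDED, the cleanup runs whether the consumer reads every row or walks away after the first one.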


https://forums.oracle.com/thread/2309909

String Deaggregation


http://www.oracle-base.com/articles/misc/string-aggregation-techniques.php#collect

String Aggregation Techniques


AskTom: join example, new vs old syntax

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6585774577187

You Asked

Hi Tom,
I know in Oracle9i we have the cross join and full outer join. Do they exist in 8i? If so, can you point me to the documentation. If they don't, can you give an example/breakdown of how to rewrite them in 8i.

Thanks in advance,
 

and we said…

to join in 8i, just use a where clause:

select * 
  from emp, dept 
 where emp.deptno = dept.deptno;

in 9i and up, you could

select ename, dname, emp.deptno, dept.deptno
  from SCOTT.EMP inner join SCOTT.DEPT
    on emp.deptno = dept.deptno


to outer join in 8i, use a (+):

select * from emp, dept where emp.deptno = dept.deptno(+)

is the same as the 9i syntax of:

select ename, dname, emp.deptno, dept.deptno
  from SCOTT.EMP LEFT outer join SCOTT.DEPT
    on emp.deptno = dept.deptno

whereas 

select * from emp, dept where emp.deptno(+) = dept.deptno

is the same as:

select ename, dname, emp.deptno, dept.deptno
  from SCOTT.EMP RIGHT outer join SCOTT.DEPT
    on emp.deptno = dept.deptno


In 8i, a full outer join would be:

select * from emp, dept where emp.deptno = dept.deptno(+)
UNION ALL
select * from emp, dept where emp.deptno(+) = dept.deptno AND emp.deptno is null;

versus the 9i syntax (which does the same amount of work -- no magic there)

select *
  from SCOTT.EMP FULL outer join SCOTT.DEPT
    on emp.deptno = dept.deptno
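The same UNION ALL workaround applies to any engine that lacks FULL OUTER JOIN (older SQLite and MySQL, for example). To see the shape of the trick outside Oracle, here is a sketch using Python's built-in sqlite3 with a couple of invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table emp  (ename text, deptno int);
    create table dept (deptno int, dname text);
    insert into emp  values ('SMITH', 10), ('ALLEN', 30);
    insert into dept values (10, 'ACCOUNTING'), (20, 'RESEARCH');
""")

# Full outer join emulated as: (left outer join) UNION ALL
# (rows from the right table that found no match on the left),
# the same shape as the (+) trick above.
rows = conn.execute("""
    select ename, dname
      from emp left join dept using (deptno)
    union all
    select ename, dname
      from dept left join emp using (deptno)
     where ename is null
""").fetchall()
# rows holds ('SMITH', 'ACCOUNTING'), ('ALLEN', None), (None, 'RESEARCH')
```

The second branch keeps only the null-extended rows, which is exactly what the `emp.deptno is null` filter does in the 8i version.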


http://www.oracle-developer.net/display.php?id=430

Working with long columns  (Adrian Billington)
