Cedar Capital Consulting Group

Financial Services Blog


 

Recent topics:

  • In the clouds!
  • Walk away......
  • Privacy Vs. Security.
  • Do traditional DR plans work in more modern disaster scenarios?
  • Disaster Recovery: the risk of interconnection in a world of internet piracy.
  • Re-freezing toxic assets.
  • Red flags in the morning, take warning!
  • Reining in derivatives.


Wednesday, April 7, 2010

In the Clouds!

From Client-Server to the Cloud



As more and more organizations implement or contemplate cloud-computing solutions, management needs to consider the risk implications that the technology adds to the overall business environment.  Cloud computing solutions can mitigate some organization-specific risks while expanding existing risks or adding new risks.



Let us look at cloud computing based on several of the core COBIT control components, or information criteria, namely confidentiality, integrity, reliability, and availability.  All organizations already need to consider these components whether they are using cloud computing or not, but cloud computing adds a third party and new technology to the achievement of these objectives.



Confidentiality, or the protection of information from unauthorized disclosure, in a cloud computing environment requires that the third party host organization ensure that confidential data is recorded, stored, moved, and processed in such a way that other cloud users (perhaps even competitors) cannot access or view that information.  Strong contractual language is only part of the answer; strong technology safeguards need to be implemented within the platform.
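
To make that concrete, here is a minimal illustrative sketch (in Python, using the open-source cryptography package) of one such safeguard: encrypting sensitive records on our own side before they ever reach the shared platform, so another tenant sees only ciphertext. The record below is hypothetical, and this is a sketch of the idea rather than a prescription for any particular platform.

```python
# Minimal sketch: client-side encryption before data leaves the organization,
# so a co-tenant on the shared cloud platform only ever sees ciphertext.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# The key stays with the data owner, never with the cloud host.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"client_id=1001; ssn=XXX-XX-XXXX; balance=250000"  # hypothetical record
ciphertext = cipher.encrypt(record)     # this is what gets stored in the cloud
recovered = cipher.decrypt(ciphertext)  # only the key holder can do this

assert recovered == record
```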



Integrity refers to the accuracy, completeness, and validity of the information.  Cloud computing does not repeal the law of garbage in, garbage out; but it does add the complexity that your organizational information is spread throughout the cloud and needs to be accurately and completely reassembled when you call for it and only when you call for it.
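
A common integrity safeguard, sketched below with hypothetical data, is to compute a digest of the information before it is dispersed to the cloud and compare it against a digest of whatever the provider hands back.

```python
# Minimal sketch: verify that data reassembled from the cloud matches what was sent.
# A digest computed before dispersal is compared against one computed on retrieval.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"general ledger batch 2010-04-07"   # hypothetical record
stored_digest = digest(original)                # kept by the data owner

# ... later, the cloud provider returns the reassembled record ...
retrieved = b"general ledger batch 2010-04-07"
if digest(retrieved) != stored_digest:
    raise ValueError("Integrity check failed: retrieved data does not match what was stored")
```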



Reliability is related to integrity and ensures that information is appropriate for management use in operating the entity.



Availability of information is the key control component in a cloud-computing environment.  In fact, availability of information may be a key driver for adopting a cloud computing solution.  The use of highly scalable and available computing resources creates backup and recovery challenges for data residing in all of the far corners of the cloud.  Is cloud computing still cost efficient with the new controls and assurances that must be put in place to satisfy regulatory requirements?



While cloud computing does not change the fundamental nature of organizational objectives and risks, the cloud solution adds new dimensions to the old risks that need to be addressed through strong contractual language, new operational processes, and secure technology.
12:44 pm est

Thursday, March 11, 2010

Walk Away!

No Help in Sight, More Homeowners Walk Away

By DAVID STREITFELD

Published: February 2, 2010, The New York Times

About 5.1 million homeowners will own a home valued below 75 percent of what is owed on it.  New research suggests that when a home value falls below 75 percent of the amount owed on the mortgage, the owner starts to think hard about walking away, even if he or she has the money to keep paying.

They are stretched, aggrieved and restless. With figures released last week showing that the real estate market was stalling again, their numbers are now projected to climb to a peak of 5.1 million by June 2010, about 10 percent of all Americans with mortgages.
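
To make the 75 percent figure concrete, here is a quick back-of-the-envelope check with hypothetical numbers:

```python
# Back-of-the-envelope check of the 75 percent threshold with hypothetical numbers.
home_value = 180_000      # current market value
amount_owed = 250_000     # outstanding mortgage balance

value_to_loan = home_value / amount_owed   # 0.72, i.e. 72 percent
deeply_underwater = value_to_loan < 0.75   # point where owners reportedly start weighing a walk-away

print(f"value/owed = {value_to_loan:.0%}, below 75% threshold: {deeply_underwater}")
```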

Additional statistics show 25% of all mortgages are projected to be underwater in the near future.  These statistics and tactics raise new questions for the mortgage industry including:

Short-Term

  • What is the historical context of Walk Aways and 75% mortgage home values?  Is there any?
  • What impact will Walk Aways have on home values?
  • What impact will Walk Aways have on lender portfolios?
  • What impact will Walk Aways have on the secondary markets and mortgage investors?
  • Can lenders retroactively manage walk away risk?
  • What is the impact of walk away risk on non-walk aways?
  • What is the impact of a walk away on a borrower's credit? Will the impact be permanent or relatively short-term?
  • How should ability-to-pay versus willingness-to-pay impact consumer credit ratings?

Medium-Term

  • Have consumer sentiments towards mortgage payments changed permanently?  (i.e., pay the mortgage at all costs)  Will consumers tend to pay unsecured debt first to fund daily living expenses?  If so, how will this impact lender behavior?
  • How will lenders manage future walk away risk? Will all borrowers pay for the risk?  Will a new layer of non-prime borrowers emerge?
  • How long will it take credit models to adjust for the new behaviors?
  • Will regulatory reform occur as a result of walk away risk?
  • What impact will new policies and regulations have on credit availability?
  • What impact will credit availability have on home and real-estate values?
9:15 am est

Friday, January 22, 2010

Scrambling Privacy and Security

The chicken-or-egg question applies to security and privacy: does security come before privacy, or does privacy come before security?  Said another way, business (non-IT) management has often said that operations would have world-class privacy if only IT would lock down the sensitive data with robust security schemes.  To which IT replies that it would create the requisite security schemes if only business management would tell it which applications and data elements are sensitive.

As always, the answer lies somewhere in the middle.  Privacy has an operational dimension as well as a technical one.  Business management, probably with the assistance of legal counsel, needs to understand privacy laws and regulations, such as Gramm-Leach-Bliley, PCI-DSS, or FERPA, that apply to its operations and translate those requirements into specifications for IT to implement the proper security schemes.  Business management should also develop good business processes to ensure effective implementation of other privacy aspects, e.g. clean desk policy, proper physical identification and labeling of documents containing protected personal information (PPI), secure destruction of confidential materials, etc.

IT management should implement business requirements across platforms, applications, and devices.  This presupposes that IT management knows where all instances of a sensitive data element (or combinations of elements) reside in the infrastructure.  Different platforms may require different security techniques to address the privacy concerns of business management.  Laptops may require whole disk encryption, biometric security, and / or location verification capabilities.  Internet-facing devices may require other techniques, such as firewalls, application isolation environments, or DMZs.  A further consideration in providing security in the IT environment is the need to cross-associate particular users with the data elements (or combinations of data elements) they are authorized to access.

One technique for pulling these security requirements together is a permissions matrix.  A permissions matrix arrays the users (or the roles of multiple users) on one axis and the available data elements on the other axis.  At each intersection of a user (or role) and a data element are that user's (or role's) permissions for that data element.  The permissions could be read only, write, edit, delete, or no access.  Implementation of the permissions matrix in the IT environment is a complex endeavor, but one that must be undertaken in order to comply with the myriad of federal, state, and industry-specific privacy requirements.
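
As a rough illustration (not a production access-control design), a permissions matrix can be expressed as a simple lookup keyed by role and data element; the roles, data elements, and permissions below are hypothetical.

```python
# Rough sketch of a permissions matrix: roles on one axis, data elements on the other,
# with the allowed action set at each intersection. Roles and fields are hypothetical.
PERMISSIONS = {
    ("teller",        "account_balance"): {"read"},
    ("teller",        "ssn"):             set(),            # no access
    ("loan_officer",  "ssn"):             {"read"},
    ("loan_officer",  "credit_score"):    {"read", "write"},
    ("administrator", "credit_score"):    {"read", "write", "delete"},
}

def is_allowed(role: str, data_element: str, action: str) -> bool:
    """Return True only if the matrix explicitly grants the action (default deny)."""
    return action in PERMISSIONS.get((role, data_element), set())

print(is_allowed("teller", "ssn", "read"))                  # False
print(is_allowed("loan_officer", "credit_score", "write"))  # True
```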

3:20 pm est

Wednesday, July 1, 2009

Reining in Derivatives - the Need for a Thoughtful Approach
 

A word that has been on the tip of everyone's tongue these past few months is DERIVATIVES, although the definition and understanding of their use escapes most.  What are these complex instruments derived by computer-wielding rocket scientists?

Derivatives are financial contracts whose price and terms are based on underlying financial instruments.  The more standardized of these contracts trade on various derivative exchanges (CME, CBOE, NYSE) and are known as listed derivatives, while the more complex contracts with unique terms are called over-the-counter (OTC) derivatives and trade directly between a writer of the contract and a buyer.  Derivative contracts typically call for periodic margin/collateral calls, or payments, by the counterparty that is out of the money on the trade.
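
As a simplified illustration of that margin mechanic (the contract terms and numbers below are hypothetical, and real collateral calls involve valuation models, thresholds, and netting terms):

```python
# Simplified sketch of a periodic collateral call on an OTC derivative.
# The out-of-the-money counterparty posts collateral to cover the mark-to-market move.
# Contract terms and numbers are hypothetical.
notional = 10_000_000          # contract notional in dollars
contract_rate = 0.045          # rate locked in at trade date
market_rate = 0.050            # rate at the margin-call date
collateral_already_posted = 20_000

# Mark-to-market from the fixed payer's perspective (very rough: one period, no discounting).
mark_to_market = (market_rate - contract_rate) * notional   # +50,000: fixed payer is in the money

exposure = max(mark_to_market, 0)                   # uncollateralized exposure to the other side
margin_call = max(exposure - collateral_already_posted, 0)
print(f"Collateral call to the out-of-the-money counterparty: ${margin_call:,.0f}")
```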

There are two polar-opposite views on the use of derivative contracts: the view that a derivative gives a benefit to a company or individual by allowing them to hedge and manage specific risk, versus the view that derivatives allow a company to gamble on the market and gain off-balance-sheet leverage.

Which of these views is correct?  There probably is some truth  in each view, hence the drive to better control the process and enhance regulation.  Some concerns are: 

1) Most OTC derivatives do not trade or clear on any centralized platform, and it is difficult to ascertain total market exposure for a counterparty when things start to go bad.  Secondary trading of outstanding derivative contracts without the benefit of a centralized clearer can leave payment and performance responsibilities murky.  In the past few years, there have been industry efforts to better control confirmation and recording of derivative transactions.

2) Each counterparty sets its own standards for risk collateral requirements, so a counterparty could find itself under-collateralized if the other party in the transaction is not able to make payments.  

3) Many derivative counterparties are not regulated financial institutions, but rather hedge funds or other financial companies, like AIG.  These entities do not have to abide by the same capital requirements that banks and securities firms do, and may find themselves unable to make good on payments required under derivative contracts if they have not required adequate collateral or kept capital against future exposures.  If an unregulated entity is a large enough counterparty in the OTC derivatives market (as was AIG), its inability to make payments could have a systemic effect on the overall financial system.

4) Some derivative players may have fewer automated operational processes, which could make management of transactions and risk less than optimal.  This could have an impact on systemic risk.

5) OTC derivative payments due cannot be offset against receivables from clients on listed exchanges (OTC agreements can provide for payment netting on OTC trades with the same counterparty).  The ability to net offsetting exposures with clients in the global markets, similar to the foreign exchange markets, may minimize risk.
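
As a simplified illustration of how bilateral payment netting with a single counterparty reduces what actually changes hands (all trades and amounts below are hypothetical):

```python
# Sketch of bilateral payment netting with a single counterparty.
# Positive amounts are owed to us; negative amounts are owed by us. All figures hypothetical.
payments_with_counterparty_A = [
    +1_200_000,   # receivable on an interest-rate swap
    -900_000,     # payable on a credit default swap
    +150_000,     # receivable on an FX forward
]

gross_receivable = sum(p for p in payments_with_counterparty_A if p > 0)   # 1,350,000
gross_payable = -sum(p for p in payments_with_counterparty_A if p < 0)     #   900,000
net_settlement = sum(payments_with_counterparty_A)                         # + 450,000

print(f"Gross flows: receive {gross_receivable:,}, pay {gross_payable:,}")
print(f"Net settlement under a netting agreement: {net_settlement:,}")
```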

Derivatives impacted the recent financial downturn.  How could their impact have been avoided?

Enhancements could include a centralized clearer for standardized OTC contracts.  More thought has to go into clearing and settlement issues for non-standard or tailored derivative contracts, which cannot easily be standardized for clearing and margin purposes.   If several clearing agents enter the market with different clearing and settlement standards, the market could become even more fragmented.  Centralized clearing of standardized OTC contracts may allow for offsetting of counterparty risk across the OTC and listed markets. 

Regulators can also define regulatory requirements and capital standards for ALL companies entering into OTC derivative contracts, not just the companies that are already regulated.    

It is too easy for derivatives players to cause distress in the financial markets again, so some changes to the control environment are necessary.  However, these are complicated issues that impact capital and liquidity in the markets, and care should be taken so that change agents are thoughtful in their approach and do not just react to generalized outrage.

7:07 am est

Tuesday, June 16, 2009

Red Flags in the Morning, Take Warning!

 

We have been witness to every major disaster in the last 40 years: from the pervasive East Coast power failure in the sixties, to the Blackout in New York City in the seventies and the violence and vandalism that ensued, to the many hurricanes, tornados, floods, the Trade Center bombing in 1993, 9/11, Katrina, the Washington Sniper, another major NYC power outage and the beat goes on. It is astonishing to see that every time one of these events occurs, we pick ourselves up, dust ourselves off, and BURY OUR HEADS IN THE SAND. There is a human tendency to return to the status quo. After 9/11, we were all hyper-alert and as the months and years have gone by, we are no longer white-knuckling the commuter flight from DC to New York.

 

When engaging in a disaster preparedness or business continuity exercise, typical planning covers predictable and probable events (which also keeps planning costs lower). Catastrophic events often are left out because they are judged statistically improbable.

 

How improbable are the catastrophic events that we have experienced over the last 20 years? If we look closely, in almost every instance of a catastrophic event, there were red flags that might have strengthened the disaster preparedness or security efforts to address potential threats. But it seems that, in many instances, those red flags were overlooked or discounted. 

 

After the 1993 bombing of the Trade Center, although stronger security measures were implemented to make it more difficult to get into the Trade Center, the nature of the measures in no way addressed the scope of a 9/11 event. (Although, I am not sure how that would have been possible.)  How unlikely would 9/11 actually have been? Post facto analysis showed failures in our preparation for and response to the event.  Homeland Security has implemented robust detection and forecasting systems to prevent a recurrence of a similar event on our soil. 

 

Recently, our government revealed that the Russians and others have penetrated our electric grid and planted grid-disabling code. Almost every week, one reads of significant incursions by hackers into government agencies. One wonders how this can happen.  Is our most vital infrastructure so vulnerable? Any number of movie-of-the-week plots comes to mind. In recent weeks, the government has addressed the serious red flag raised by the Russian incursion into the U.S. electric grid by announcing the formation of an elite cyber security corps.

 

From rising tides and tornadoes to terrorist events, and from pandemic flu to rogue viruses, we have seen repeated red flags.  It is easy to fall into complacency, to put our heads in the sand, and ignore the red flags. When we say these events are just too overwhelming and costly to plan for, do we count the cost of these events actually occurring? What is the cost to brand and revenue of an event? If we do not address the red flags identified by risk systems, and if we do not plan for the worst-case impact of terrorist activities and of malicious hacking, the cost to our entire nation could be considerable.

 

Risk identification systems produce red flags to identify situations requiring action. We see red flags that have been ignored, resulting in failures of various kinds. In the securities industry, we see firms with elaborate, regulator-mandated supervisory procedures that ignore the alerts raised by their risk systems. Our current economic problems and the failures of large firms were portended by risk systems at least 18 months before the major bank failures. If risk identification systems identify red-flag situations, where is the accountability for ignored flags?

 

Similarly, the kinds of events that have been occurring are not just in the nature of the 500-year flood: an event that is rare but predictable. What we have been experiencing lately are events that go beyond our imagining. How do you plan for the metaphorical 500-year event, or for the unthinkable?  The red flags have been raised. Perhaps you do not build the best levees possible, but you do something to get you part of the way there, and you have good, sound disaster planning for the improbable that gets you the rest of the way.  Planning for every predictable event is just not practical. Therefore an approach built around a generic infrastructure response is the best way to plan for both likely and unlikely events: redundancy in communications and electric power, evacuation and relocation capability, basic but flexible preparedness not specific to any one type of event.

 

One thing we have learned from Katrina and 9/11 is the broad economic impact of such different kinds of 500-year events. This should motivate us to consider generic capabilities that soften the blow when the next event occurs. We read daily warnings of cyber incursions, likely climate change, rising tides: the red flags have been raised.

5:58 pm est

Monday, June 8, 2009

Re-freezing Toxic Assets

Many are asking, "What happened to the Public/Private Investment Program (PPIP) that will buy up toxic assets, whole loans and securities, from banks?"   The program announcement created hoopla and positive reaction in the markets, and then things went quiet, until recently.

We have long known that the key to program success is agreement on a value and a price for these instruments.  Firms seem to be staying on the sidelines, waiting to understand more about the details of the program, and to see what role future earnings might play in shoring up their capital base.  The recent run-up in the markets has given banks a chance to raise capital and put off the need to participate in the programs.

The launch of the Legacy Loan Program (LLP), which will purchase legacy whole loans and will be managed by the FDIC, was recently delayed, giving regulators time to reassess the details of the program.

The Fed is still moving ahead with its part of the PPIP, which involves the purchase of the more complex securities and bonds backed by loans.  These assets are much more difficult and complex to value.  Sellers worry that bid prices will come in well below their perceived value.  Potential buyers also worry that the only assets offered will be the most toxic of the toxic, which some could argue no one would want to buy.  Hence the continued standoff and delay.

While the financial environment has improved somewhat, a lack of available capital prevents the securitization market and the credit markets from getting back to normal. Another market downturn could re-freeze capital.  While the Government programs may not lead to a significant number of immediate transactions, it will still be a good idea to continue to fine-tune the details of the programs as a backstop.  If there is a capital squeezing financial market downturn, this year or next, the Government will be able to step in with "shovel ready" programs that can help move assets and raise capital.

5:47 pm est

Wednesday, May 13, 2009

Disaster Recovery: the risk of interconnection in a world of internet piracy.

 

Hackers Break Into Virginia Health Professions Database, Demand Ransom

 

Hackers last week broke into a Virginia state Web site used by pharmacists to track prescription drug abuse. They deleted records on more than 8 million patients and replaced the site's homepage with a ransom note demanding $10 million for the return of the records.

 

Washington Post, May 4, 2009

 

 

The financial services industry has undergone increasing consolidation in the last 30 years. An overwhelming percentage of all business, in all products, is run by a handful of mega firms. At the same time, every aspect of business in the financial services industry is now conducted electronically. No longer do you see elderly runners tottering down Wall Street with confirmations and certified checks to be exchanged at the cage for corresponding settlement documents. From the submission of customer orders online, to trading platforms, to the clearance and settlement of securities transactions, to payments made by debit and credit cards and automated clearinghouse (ACH) transactions such as the direct deposit of paychecks, every component of a transaction in the financial services industry is linked through vast data and communication networks.

 

Electronic commerce has facilitated the burgeoning economy that we have experienced over the last 30 years, periodic crashes notwithstanding, but the complete reliance on electronic record keeping and communications capability poses significant risks to the individual commercial entity and to our national and global economic viability. Those risks stem from an economy that, although composed of many and varied individual participants, is a system. While this has always, in a sense, been true, the electronic and telecommunications connections between the participants have solidified the system, much as links in a chain. And today the number of links in the chain is diminishing because of consolidation among the participants and increasing automation efficiency. The interconnectedness and the small number of key players in the financial services industry increase the risk of events that would cripple key nodes in the network. This could, as a consequence, impact the entire system, causing economic blackouts with economic and political consequences. Fortunately, the industry has spent a great deal of time and effort on disaster recovery to avoid such events.

  

Our economic system is dominated by a small number of very big banks and investment firms. The disaster recovery rules in the financial services industry (the FINRA Rule 3500 Series, etc.) focus on the preparedness of the individual financial entity to respond to a variety of challenges: technology outage, communications outage, an inaccessible building, pandemic preparedness, etc. The next step is to elevate financial services industry disaster recovery preparedness to a more robust and systemic level.

 

Disaster Recovery for the financial community belongs next to national security in rigor and importance. The financial community should continue to collectively leverage their individual disaster response capabilities to develop highly secure message protocols, alternate data stores, and communications capabilities. Similarly, systemic end-to-end disaster recovery testing, which exercises scenarios where key links in the chain are not functioning, should occur on a regular basis.  If we have learned anything from 9/11 from a disaster recovery perspective, it is that the piles of paper that we have generated to respond to business continuity mandates do not mean a whole lot if you have not thoroughly practiced the disaster response.  During 9/11, the firms that efficiently and effectively executed their plans did so as a result of planning and practice.

 

Recently, we have seen how economic events caused a cascading impact on every financial institution, not only nationally, but also globally and in every obscure economic nook and cranny of the planet.  Similarly, thorough emergency management planning and disaster recovery planning soften the impact of continent-wide or global events.

 

We have only to recall the recent announcement by the Federal Government that foreign governments have planted code moles in our electric grid to realize that piracy is taking on a new form in the 21st century. Although the grid mole is a particularly clever tactic, since it sits at the top of the automated food chain (no power, no data, no commerce), it is indicative of the kinds of incursions we need to plan for broadly in the future.

7:49 am est

Wednesday, April 8, 2009

Do traditional DR plans work in more modern disaster scenarios?

 

Most firms, financial and otherwise, focus disaster recovery planning on the assumption of an isolated building or data center problem, or a more focused regional power or property outage.  The SARS outbreak, 9/11 and the 2003 Northeast power grid outage highlighted some new issues: how does a firm respond when a majority of its employees (and of the employees of its providers and outsourced firms) do not come to work? How do firms communicate with global employees, and with other firms and providers that are also in disaster recovery mode?  A worldwide health crisis, or a broader regional power outage, will have a far deeper and different impact than a focused regional disaster. This problem requires different levels of interrelated planning and communication.  In the instance of a pandemic health crisis, a firm's power does not go out and there is no disruption to the company's data center, so typical DR plans will not help.

In the instance of a broad health crisis, a large number of employees, globally, either cannot or will not come to work due to illness or the threat of illness.  In addition, every provider, counterparty, market participant and client potentially has the same problem, and the impact to a company's day-to-day operations could be staggering.   Any operation or system that is outsourced could potentially be unavailable, depending on the ability for a provider to keep systems operational with reduced staff levels.

Aside from a broad health pandemic, many disaster recovery plans assume that outages will be isolated and contained, and will not impact the entire US or other countries.  Similar thinking occurred in the mortgage community in the last few years: regional market downturns were anticipated, but an across-the-board decline in housing prices was not. The financial services industry has sponsored broad exercises where staff outage scenarios were simulated.  The exercises have highlighted how reliant and interconnected our service economy is, and how difficult it can be to plan for every eventuality.

Have we done enough vetting of the disaster plans of the companies we rely on every day...service providers, web providers, software providers? Do not just review how they back up their systems and data: do they have plans to operate if large numbers of employees are not at work?  Do providers have good communication plans with their staff and clients in case employees are widely scattered or cell phone towers and standard communication modes are inoperable (as on 9/11)?  Should companies look to get more remote computing capability?  Do companies have robust plans for day-to-day credit monitoring of clients during times of crisis?  And do they have the ability to distinguish, and communicate about, clients with credit problems versus those failing due to operational problems?

4:48 pm est




CCCGroup@verizon.net
Cedar Capital Consulting Group 
1600 Tysons Boulevard, 8th Floor
McLean, VA 22102
(703) 622-1490
 
Cedar Capital Consulting Group is a Management Consulting Group and not a CPA firm, and does not provide attest services, audits, or other engagements in accordance with the AICPA's Statements on Auditing Standards.
