A FICO Score for 401k Plans

How much alpha can advisors really add? How do you truly rate and benchmark a 401(k) plan’s performance against peers?

Al Otto thinks he’s figured it out, and it involves analytics (of course). After successful RIA and retirement plan careers, Otto and partner Mark McCoy noticed dangerous misconceptions about 401(k)s taking hold with employers—and plan advisors.

In particular, they saw that price dominated skill in plan evaluations, so they developed a list of questions about what else should be included in the mix. That led to algorithms that measure the value investment decisions add to a plan over time.

The result was Veriphy Analytics, a firm that provides “objective insight into the health of your retirement plan.”

Otto sat down with 401(k) Specialist for a wide-ranging interview about advisor accountability for plan performance, and how it can be accurately measured.

More about Veriphy can be found at www.veriphyanalytics.com.

Q: You mentioned the falsehoods you saw floating around the industry involving retirement plans. What are they?

A: I’ll give you four specific ones.

  1. “Low fees produce better outcomes.” While it’s possible, it’s not always true.
  2. “Passive funds always outperform active funds.” That is unequivocally not true, depending on the asset class.
  3. “Target date funds produce better outcomes.” Well, that’s not always true.
  4. “Investments in 401(k) plans are all the same.” I’ve seen this claim circulating recently, and it’s quite concerning to me. Our data make it very clear that it’s not true.

But what happens in this industry, now approaching $8 trillion in assets, is that investment and advisor decisions are being made on inputs and assumptions. The biggest input is, “What is your fee?”

There’s no question that fees are measurable and important, as they need to be reasonable. However, while there are all kinds of measurements around investment managers, I believe we’re the first company in the country to measure investment advisors, or decisions made at the committee level. So, we’re measuring a plan’s investment outcomes, and there are large disparities among plans. It’s fun to see this evolve.

Q: How do algorithms and methodologies like yours work?

A: We begin with common sense. If you’re going to measure a 401(k) plan against a 403(b) plan, you’ve got to have a way to normalize them. Common sense will tell you that if you have more return and less risk than a given benchmark, that’s good. And if you have more risk and less return than the benchmark, that’s not so good.

We actually started with that, so our output could be seen by the benefits manager and easily understood. Yet we have drill-downs, so that the chief financial officer, who might be the investment person on the committee, can dig fairly deep into the information. We have a patent-pending methodology called the Veriphy Ratio that is, in essence, similar to a FICO score for a retirement plan.
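
Veriphy’s actual formula is patent-pending and has not been published, but the general idea of a single, FICO-style risk-adjusted plan score can be sketched in a few lines: reward return above the plan’s benchmark, give additional credit for taking less risk than the benchmark, and map the result onto a familiar numeric band. The weights, the 300–850 scaling, and the function below are illustrative assumptions, not the Veriphy Ratio itself.

```python
# Toy illustration of a single risk-adjusted "plan score."
# This is NOT Veriphy's patent-pending Veriphy Ratio; the weights and
# the FICO-like 300-850 scaling below are invented for illustration.

def plan_score(plan_return, bench_return, plan_vol, bench_vol,
               return_weight=0.7, risk_weight=0.3):
    """Score = weighted blend of annualized excess return and risk reduction."""
    excess_return = plan_return - bench_return      # positive = value added
    risk_reduction = bench_vol - plan_vol           # positive = less risk taken
    raw = return_weight * excess_return + risk_weight * risk_reduction

    # Map the raw figure (roughly -5% .. +5% per year) onto a 300-850 band
    # so a committee can read it the way they read a FICO score.
    clipped = max(-0.05, min(0.05, raw))
    return round(300 + (clipped + 0.05) / 0.10 * 550)

# Plan beat its benchmark by 1.2% a year while taking slightly less risk.
print(plan_score(plan_return=0.072, bench_return=0.060,
                 plan_vol=0.095, bench_vol=0.100))   # ~629
```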

Q: How is it presented to the client or committee? Is it intuitive?

A: If it’s positive, then value has been added on a risk-adjusted basis. We give credit when taking less risk adds value. People don’t necessarily think about this, but participants leave 401(k) plans all the time. Quite often, if the market goes down a little bit around the time they leave, they want out—immediately.

From a plan management perspective, you want to minimize the volatility in the plan as much as possible while still taking advantage of the tide moving up in the marketplace. We give a positive weight to taking less risk than the benchmark, and we look at all of the asset classes in a plan.

We set benchmarks for, I believe, up to 189 asset classes. We have that many because we want to be [as specific] as possible when we’re evaluating whether value’s been added. You can’t just say, “How did you do versus the S&P?”
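
To make the asset-class idea concrete, here is a minimal sketch of how a plan-level benchmark could be blended from per-asset-class benchmark returns, weighted by the plan’s allocation to each class. The class names, weights, and returns below are hypothetical; Veriphy’s 189-class mapping and its benchmark assignments are its own.

```python
# Hedged sketch: a plan-level custom benchmark built from per-asset-class
# benchmarks, weighted by the plan's allocation to each class. The class
# names, weights, and returns are hypothetical placeholders.

allocations = {"US Large Cap Blend": 0.40,        # share of plan assets
               "US Small Cap Value": 0.10,
               "International Developed": 0.20,
               "Core Bond": 0.25,
               "Stable Value": 0.05}

benchmark_returns = {"US Large Cap Blend": 0.081,  # one-year benchmark return
                     "US Small Cap Value": 0.064,
                     "International Developed": 0.052,
                     "Core Bond": 0.021,
                     "Stable Value": 0.018}

plan_benchmark = sum(w * benchmark_returns[c] for c, w in allocations.items())
print(f"Blended plan benchmark: {plan_benchmark:.2%}")  # about 5.5% with these inputs
```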

Q: You’re talking about holding retirement advisors accountable. What’s the response been so far? Are they happy with something like this, or wary?

A: One of the first advisors we spoke to was Matt Gnabasik of Blue Prairie Group. He’s a thought leader, and they’re a whole team of thought leaders. Matt just stopped us and said, “Wait, I get it and thank you. Finally, somebody is going to be able to show the value that we are bringing to the table.”

We have advisor groups that are, I would say, elite subscribers. They’re happy because we’re able to demonstrate the value they’ve added. One of the things we measure is the actual excess return delivered from the investments over time, and the risk that was taken to deliver it. We have plans where literally, over the last seven years, the plan has had over 20 percent absolute excess return above the benchmark with slightly less risk than the benchmark. That is phenomenal. And we have seen the exact opposite: as much as 20 percent less return than the benchmark with more risk. That’s awful.
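
The “over 20 percent absolute excess return” figure is a cumulative, compounded difference between the plan and its benchmark over the seven-year window. A minimal sketch of that arithmetic, using hypothetical annual returns:

```python
# Minimal sketch of the cumulative ("absolute") excess-return figure Otto
# describes: compound seven years of plan and benchmark returns and take
# the difference. The annual returns below are hypothetical.
from statistics import stdev

plan_years  = [0.10, 0.09, -0.01, 0.12, 0.09, 0.07, 0.11]
bench_years = [0.09, 0.06, -0.04, 0.11, 0.07, 0.04, 0.10]

def compound(returns):
    total = 1.0
    for r in returns:
        total *= 1 + r
    return total - 1

excess = compound(plan_years) - compound(bench_years)
less_risk = stdev(plan_years) < stdev(bench_years)
print(f"Cumulative excess return: {excess:.1%}, lower volatility: {less_risk}")
# -> Cumulative excess return: 21.4%, lower volatility: True
```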

Q: So, if you’re doing it right, you’re going to love it. If you’re doing it wrong, you’re going to hate it. It’s not really rocket science, right?

A: I think the other thing that’s really important is that it is not our intention to make this information public. We’ll make general information public as we gather it, but we’re not going to post the lowest scores, because those firms must be able to utilize the data to improve themselves.

Our long-term intent is to build a community of fiduciaries, generally plan committee members and service providers, who are excellent and want to continue to improve and excel.

Q: “Participant outcomes” is a current buzzword. Does this help with outcomes, and if so, how?

A: We only measure things that are factual, and we want to measure actual outcomes. So yes, we do participate in that, and we agree that retirement dignity and retirement readiness are incredibly important. Rather than “What’s the increase in my deferral rate?” the question becomes, “What is the ongoing increase in my contributions and my average account value, and how does that compare to others in my industry?” I say “in my industry” because you can’t compare an airline to Joe’s Welding Shop. But we have the data.
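
As a rough illustration of that kind of peer comparison, the sketch below compares one plan’s growth in average account value with an industry peer group. The employers, balances, and the idea of grouping peers by industry are hypothetical placeholders, not Veriphy’s benchmarking universe.

```python
# Hedged sketch of an industry peer comparison for one outcome metric:
# year-over-year growth in average participant account value. The data
# and the peer grouping (e.g., by industry code) are hypothetical.

peers = {  # industry peers: (avg account value last year, this year)
    "Airline A": (61_000, 66_500),
    "Airline B": (58_000, 61_500),
    "Airline C": (72_000, 75_000),
}
my_plan = (64_000, 70_400)

def growth(pair):
    prior, current = pair
    return current / prior - 1

peer_growth = sorted(growth(p) for p in peers.values())
median_peer = peer_growth[len(peer_growth) // 2]
print(f"My plan: {growth(my_plan):.1%} vs. industry median: {median_peer:.1%}")
# -> My plan: 10.0% vs. industry median: 6.0%
```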

Q: So, are you taking advantage of big data then, in terms of the recommendations that you’ve made?

A: We are. There’s a component of our data engineering that uses artificial intelligence. We gather our data from Form 5500 filings, unless it’s a client, in which case we get the information directly from the custodian.

Our broad analytics use annual Form 5500 filings. Anybody who’s worked with Form 5500 will tell you that trying to get hold of that information is a dirty job. It takes a fair amount of domain expertise to understand how to pull that information out, especially when you have complex plan structures.

I think we’ve made some really wonderful headway in that regard. We’re still working on getting everything into the database, but we’ve got a large majority of the plans that actually file an audit with a statement of accounts.
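
For readers curious about the raw material, the Department of Labor publishes annual Form 5500 filing extracts. The sketch below shows, in broad strokes, how one might load such an extract and keep the large, audited defined-contribution plans Otto describes; the file name and column names are illustrative placeholders, and Veriphy’s actual AI-assisted pipeline is not public.

```python
# Hedged sketch of working with published Form 5500 datasets.
# The file name and column names below are illustrative placeholders;
# the real extracts use the DOL's own field names.
import pandas as pd

filings = pd.read_csv("f_5500_2023_latest.csv", low_memory=False)

# Keep defined-contribution plans large enough to require an audited
# Schedule H, which carries the plan's financial statements.
large_dc = filings[
    (filings["PLAN_TYPE"] == "DC")             # placeholder column name
    & (filings["PARTICIPANT_COUNT"] >= 100)    # rough audit threshold
    & (filings["SCHEDULE_H_ATTACHED"] == "Y")  # placeholder column name
]

print(f"{len(large_dc):,} audited DC plans loaded for benchmarking")
```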

Q: Is this the first of its type? Do you have many competitors?

A: I don’t know of anybody that’s doing this yet. Most of the analytics out there are really focused on fund measurements, and we’re focused solely on plan measurements.

It’s our hope that this truly improves outcomes throughout the industry, and that it increases the profitability of advisors who are good, and of the recordkeepers and fund management firms that serve alongside them. I think that bringing more transparency to the table will lift the level of competency in the industry, and that’s what we’re trying to do.

John Sullivan

With more than 20 years serving financial markets, John Sullivan is the former editor-in-chief of Investment Advisor magazine and retirement editor of ThinkAdvisor.com. Sullivan is also the former editor of Boomer Market Advisor and Bank Advisor magazines, and has a background in the insurance and investment industries in addition to his journalism roots.

4 comments
  1. I don’t want to come off as an attacker…. but this Veriphy approach is another prime example of everything that is wrong with the DC industry. These are DC plans where Advisors can influence the investment menu (not the explicit behavior of participants) to a degree but don’t actually manage any money directly. Why would an advisor want to tie their “value” to past performance and/or risk measures of the funds they suggested on an investment menu where they don’t even control where the money goes? And what about Advisors that use passive investment options? Are they “bad” advisors adding “no value” because the returns will never be above benchmark? #totalwhiff

    1. Brett, thank you for your opinion. I certainly understand your view. Let me address each item individually.

      First, because you cared enough to address this article, my guess is that you are a thoughtful and passionate investment advisor. Our experience shows that thoughtful advisors are adding value to their plans, so there is no need to fear. We don’t believe that Investment Advisor fees should be the only standard for selection. One of the main things we are trying to do is to eliminate assumptions from the DC Plan marketplace and replace those assumptions with facts.

      It is true that advisors cannot totally influence which funds are utilized by the Plan’s participants. However, it is also a fact that an advisor is being paid to provide investment advice on each and every fund option. So, it becomes important to review the composite performance of each fund in an asset class over time. (That is, are fund changes making a positive difference for the participants who use the asset class?) When there is a fund change, the participant will receive the combined investment returns and risk taken from all the funds used in the asset class over time. So we are working to raise the standard, so that a current fund is not represented as having been held for the last five years when in fact it wasn’t.

      It is also important to measure the performance of the plan as a whole vs. the Plan’s benchmark, given the risk taken and the return received. Every plan has its own risk/return, demographics, plan health and retirement readiness profile. In order to be fair, we measure both the asset-weighted and the equally weighted version of each asset class (the asset-weighted analysis gives a proper view from a plan perspective, and the equally weighted gives a view from the advisor perspective).

      We measure plan risk taken and returns received against a Plan’s unique profile. We do this by measuring the risk taken and return received in a normalized fashion, based on the composite performance of the plan’s assets over time. Today’s top advisors are focused on building retirement readiness for the participants in the plans they manage. We believe that the risk taken and return received should be a vital part of a participant’s retirement readiness, and sometimes this is overlooked. Our data show us that there can be a very large disparity between the cream of the crop and the bottom of the barrel.

      Plan fiduciaries have a legal duty to monitor service providers and to make sure that any fees paid from a plan are reasonable. Veriphy has the ability to measure risk taken and total “excess return” provided over time, as well as the plan’s annualized composite returns vs. the plan’s benchmark returns. This gives the plan committee an independent third-party analysis of value added that they can then compare to the fees paid. I believe this is a strong indicator of reasonableness. It was Ben Graham (Warren Buffett’s mentor) who said, “Price is what you pay, value is what you receive.”

      I am so glad you brought up “passive investments.” Veriphy asset classes are benchmarked in a fashion that puts passive funds and active funds on a level playing field, because they are the funds that represent a particular asset class. In fact, there are numerous passive funds that add significant value on a risk-adjusted basis when compared to their benchmark. There are also some asset classes where the index is actually not adding value to the plan. Obviously, this is true for the actively managed funds as well.

      I would enjoy a conversation with you if you have the time. You can schedule a meeting with me from our website. (www.veriphyanalytics.com)
      #increaseretirementprosperity

  2. Brett, one area I would disagree on is “These are DC plans where Advisors can influence the investment menu (not the explicit behavior of participants) to a degree, but don’t actually manage any money directly”. I think the example of what is wrong with the DC industry is that so many advisors are hands-off when it comes to 401(k) plans and don’t treat them like they are managing them. On the 401(k) plans we advise on, we do have a lot of influence on the behavior of the participants, as we are very aggressive about meeting with all of them, typically on an individual and regular basis. In addition to that, we create models that we manage, and the vast majority of plan participants choose a model. Add in our 3(38) service on many plans, and we truly are managing the investments. But I would argue that even with our 3(21) service, while the plan sponsor is making the ultimate decision as to the investments, I’ve never had one go against our recommendation. If more advisors treated these plans like managed accounts and serviced them more thoroughly, we would have better participant outcomes. Having said all of that, I would be very skeptical of the outcomes of the ‘analytics’. It’s still garbage in, garbage out. It reminds me of some of the ‘plan performance ratings’ that some providers sell based on Form 5500 data. The data going in is limited, which always makes me question the outcome.

  3. Tim makes solid points. We need more advisors like Tim who realize that engaging with participants is the key to driving results, along with solid plan design elements. However, 3(38) and 3(21) services still leave the decisions at the participant level for personal investment selection. Even using investment models, a plan won’t have a 100% adoption rate from the participants. Point taken, though, and I applaud Tim for engaging with participants and attempting to drive better outcomes for them.

    As for Al’s comments, sorry, but advisors who try to define their value based on their ability to select investments for a DC menu are missing the boat. Some recordkeepers have limited investment menus based on different product sets; advisors don’t always have the ability to choose any investment they want. Not to mention, some committees might have a reason for using a money market fund over a stable value fund to maintain liquidity for their employees. I would suspect a money market fund may not look good if returns are a big part of the analytics Veriphy uses. Also, without the data, how can one even track accurately how long a participant held a certain fund? One cannot accurately score results if you can’t determine the actual investment results of the participants. Target date portfolios, according to the DOL, should be selected based on design and suitability, not explicit fees and investment results. If Veriphy were able to use actual plan data, perhaps there would be some value there, but it doesn’t sound like they have that capability. Lastly, this concept of tying advisor value to investment results sets up good advisors like Tim, who are doing right by the client, to suddenly look bad if a fund or two they installed in the plan underperformed. Where is the wisdom in tying your value to future performance, which none of us can predict? Where does ERISA point toward investment results as being the standard for anything?
