Understanding the field usage of any object in Salesforce

Overview


One of the biggest challenges I have faced while working with Salesforce is understanding and evaluating the field usage of a custom object. This application does the work for you, generating a CSV/Excel file with the date of the last record that used each field, and the percentage of use across all of them.

To make this app work, you will need System Administrator credentials to log into Salesforce.
This app currently runs in the Spyder IDE, which is part of Anaconda.


Let's understand how it works!

Dependencies

First, we need our dependencies. We will use pandas, datetime, and Simple Salesforce (installable with pip install simple-salesforce pandas).

from simple_salesforce import Salesforce
import pandas as pd
import datetime

Credentials

Next, we are going to connect to Salesforce with Simple Salesforce

  sf = Salesforce(password='password',
                  username='username',
                  organizationId='organizationId')

Your organizationId should look like this: 00JH0000000tYml.
To find it, just follow these steps (Lightning Experience):

  • Log into Salesforce with your System Administrator credentials
  • Press the gear button
  • Press Setup (Setup for current app)
  • In the quick search bar (the one on the left), type Company Information
  • Click Company Information
  • Finally, look for Salesforce.com Organization ID. The ID will look like 00JH0000000tYml
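
If you cannot log in with the organizationId (for example, because your IP address is not on the org's trusted list), Simple Salesforce also accepts the classic username + password + security token combination. A minimal sketch, assuming you already have your token (Salesforce emails it to you when you reset it from your personal settings):

  sf = Salesforce(password='password',
                  username='username',
                  security_token='security_token')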

The Object

Now you will need to plug in the object name. The object name is the API Name of the object. Normally, if it is a custom object, it will end with __c.
To find the API Name, just follow these instructions:

  • Log into Salesforce with your System Administrator credentials
  • Press the gear button
  • Press Setup (Setup for current app)
  • Click on Object Manager in the header of the page
  • Find your object by its name and copy the API Name, which appears next to the name of the object

This part of the code is going to use the name of the object to bring back all of its fields:
  object_to_evaluate = "object"
  object_fields = getattr(sf, object_to_evaluate).describe()
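
If you want to sanity-check what the describe call returned before running the full evaluation, you can print every field's API name and label. This is just an optional peek at the same 'fields' list the main loop below relies on:

  for field in object_fields['fields']:
      print(field['name'], '-', field['label'])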

The Date

This part is important and will make you think. The default code is going to bring the data from the last year, so it is important to understand what happened during that period. If you released a new field a week ago, it will show that it was used a couple of days ago, but the usage will be really low, around 2% (7/365). You can change the period to evaluate by simply replacing the 365 with the number of days that you want.

last_year = (datetime.datetime.now() - datetime.timedelta(days=365)).strftime("%Y-%m-%dT%H:%M:%SZ")
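
As an example, here is the same line adjusted to a 90-day window (a hypothetical variation). The result is an unquoted SOQL datetime literal, something like 2021-03-03T10:15:00Z:

last_90_days = (datetime.datetime.now() - datetime.timedelta(days=90)).strftime("%Y-%m-%dT%H:%M:%SZ")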

The Result

Now we are going to iterate over all the fields and get the created date of the last record that used each field, plus the number of records that used that field during the period (one year).

# Define an empty list to collect the results
data = []

for field in object_fields['fields']:
    print(field['name'])
    try:
        # Last record (by Id) in the period where this field has a value
        field_detail = pd.DataFrame(
            sf.query("SELECT Id, CreatedDate, SystemModstamp \
                      FROM {} \
                      WHERE CreatedDate > {} \
                        AND {} != null \
                      ORDER BY Id DESC \
                      LIMIT 1".format(object_to_evaluate, last_year, field['name'])
                      )['records'])

        field_detail['Field Name'] = field['name']
        field_detail['Field Label'] = field['label']
        field_detail['Found?'] = 'Yes'

        # Number of records in the period where this field has a value
        field_quantity = sf.query("SELECT count(Id) \
                                   FROM {} \
                                   WHERE CreatedDate > {} \
                                     AND {} != null".format(object_to_evaluate, last_year, field['name'])
                                   )['records'][0]['expr0']

        field_detail['Quantity'] = field_quantity
        data.append(field_detail)

        # The query ran but returned no rows: flag the field as unused in the period
        if field_detail.empty:
            error_data = {'Field Name': [field['name']],
                          'Field Label': [field['label']],
                          'Found?': ['Yes, no data']}
            data.append(pd.DataFrame(error_data))
    except Exception:
        # The field is not filterable in SOQL (or the query failed): flag it as not found
        error_data = {'Field Name': [field['name']],
                      'Field Label': [field['label']],
                      'Found?': ['No']}
        data.append(pd.DataFrame(error_data))

# Concatenate the list of results into one dataframe
data_to_csv = pd.concat(data, ignore_index=True)
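
If you want the least-used fields to stand out, you can optionally sort the dataframe before formatting it. This step is not part of the original script, just a convenience:

# Optional: least-used fields first; fields that errored out (no Quantity) go on top
data_to_csv = data_to_csv.sort_values('Quantity', na_position='first')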

Some Formatting

Formatting is a nice-to-have for understanding the result, especially if you are going to share the insights. We are going to rename some columns, format the date columns in a way that CSV/Excel can understand, and add a % of use column.

# Rename the Salesforce audit fields to friendlier column names
data_to_csv.rename(columns={'CreatedDate': 'Created Date', 'SystemModstamp': 'Modified Date'}, inplace=True)
# Keep only the date part so CSV/Excel display it cleanly
data_to_csv['Created Date'] = pd.to_datetime(data_to_csv['Created Date']).dt.date
data_to_csv['Modified Date'] = pd.to_datetime(data_to_csv['Modified Date']).dt.date
# Drop the metadata column that the Salesforce API returns with every record
data_to_csv = data_to_csv.drop('attributes', axis=1)
# Express each field's usage relative to the most-used field
max_value = data_to_csv['Quantity'].max()
data_to_csv['% of use'] = data_to_csv['Quantity'] / max_value
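
Note that the % of use column is stored as a fraction between 0 and 1, relative to the most-used field. If you prefer whole percentages in the report, an optional tweak (not in the original script) would be:

# Optional: show whole percentages instead of fractions
data_to_csv['% of use'] = (data_to_csv['% of use'] * 100).round(1)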

The Files

Finally, we are going to export the files to CSV and Excel, so you can choose whichever you prefer. The files will be stored in the same folder as the app: if you are running this app from your Desktop folder, the CSV and Excel files will be stored there too. (Note that to_excel needs an Excel engine such as openpyxl installed.)

data_to_csv.to_csv('last Field Usage Date.csv')
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f")
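
By default, pandas also writes its numeric index as the first column of both files. If you don't want that extra column, both writers accept index=False (again, an optional tweak):

data_to_csv.to_csv('last Field Usage Date.csv', index=False)
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f", index=False)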


The final code will look like this:

from simple_salesforce import Salesforce
import pandas as pd
import datetime

# Connection to Salesforce
sf = Salesforce(password='password',
                username='username',
                organizationId='organizationId')


# Change the name to the object that you want to evaluate. If it is a custom object, remember that it ends with __c
object_to_evaluate = "object"

# Get all the fields from the Object
object_fields = getattr(sf, object_to_evaluate).describe()

# Define an empty list to append the information
data = []

# Create a date variable to define from when we want to get the data
last_year = (datetime.datetime.now() - datetime.timedelta(days=365)).strftime("%Y-%m-%dT%H:%M:%SZ")

# Iterate over the fields and bring the created date of the last record where the field wasn't empty
# If no record is found, store the field in the CSV/Excel file as not found
for field in object_fields['fields']:
    print(field['name'])
    try:
        # Last record (by Id) in the period where this field has a value
        field_detail = pd.DataFrame(
            sf.query("SELECT Id, CreatedDate, SystemModstamp \
                      FROM {} \
                      WHERE CreatedDate > {} \
                        AND {} != null \
                      ORDER BY Id DESC \
                      LIMIT 1".format(object_to_evaluate, last_year, field['name'])
                      )['records'])

        field_detail['Field Name'] = field['name']
        field_detail['Field Label'] = field['label']
        field_detail['Found?'] = 'Yes'

        # Number of records in the period where this field has a value
        field_quantity = sf.query("SELECT count(Id) \
                                   FROM {} \
                                   WHERE CreatedDate > {} \
                                     AND {} != null".format(object_to_evaluate, last_year, field['name'])
                                   )['records'][0]['expr0']

        field_detail['Quantity'] = field_quantity
        data.append(field_detail)

        # The query ran but returned no rows: flag the field as unused in the period
        if field_detail.empty:
            error_data = {'Field Name': [field['name']],
                          'Field Label': [field['label']],
                          'Found?': ['Yes, no data']}
            data.append(pd.DataFrame(error_data))
    except Exception:
        # The field is not filterable in SOQL (or the query failed): flag it as not found
        error_data = {'Field Name': [field['name']],
                      'Field Label': [field['label']],
                      'Found?': ['No']}
        data.append(pd.DataFrame(error_data))

# Concatenate the list of results into one dataframe
data_to_csv = pd.concat(data, ignore_index=True)

# Format the CSV/Excel report
data_to_csv.rename(columns={'CreatedDate': 'Created Date', 'SystemModstamp': 'Modified Date'}, inplace=True)
data_to_csv['Created Date'] = pd.to_datetime(data_to_csv['Created Date']).dt.date
data_to_csv['Modified Date'] = pd.to_datetime(data_to_csv['Modified Date']).dt.date
data_to_csv = data_to_csv.drop('attributes', axis=1)
max_value = data_to_csv['Quantity'].max()
data_to_csv['% of use'] = data_to_csv['Quantity'] / max_value

# Export the data to a CSV/Excel file
data_to_csv.to_csv('last Field Usage Date.csv')
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f")

HOPE IT HELPS!

If you like it, remember to
Buy Me A Coffee

Owner
Sebastian Undurraga