Almost beaten by ChatGPT (not yet) – on an admin/dev task to bulk assign a Salesforce permission set license to all users in a permission set group

With Salesforce's recent announcement of the end of life of permissions on profiles, permission sets and permission set groups are the future of user management. Permission set groups allow a bundle of permission sets to be assigned to a user as one unit, filling the gap between monolithic profiles and atomistic permission sets, which is nice.

Now imagine the task is to bulk assign a permission set license to all users in a permission set group. Since a permission set license can only be associated with individual users, admins can only go to the specific permission set license under "Setup | Company Information" and assign users there. However, there is no easy way to create a list view showing all users within a particular permission set group, so one has to find each individual user and link them to the license.

For example, I was recently looking into the Salesforce "Data Mask" product, which requires a permission set license (called "Data Mask User") to be granted to any user who needs to access it. While in production probably only a couple of users need it, in sandboxes we might want to grant it to all admins so they can test things. In my current project setup, all admin users share the same permission sets, which are grouped into a permission set group called "All Assigned to Admin". The simple requirement here is therefore to grant the "Data Mask User" permission set license to all users in this group.

To avoid such a tedious routine, which would need repeating in every sandbox for every admin user, I decided to write a script to automate the job. Here is the code I came up with after two iterations.

// Avoid duplicates
Set<Id> assignedUserIds = new Set<Id>();
PermissionSetLicense psl = [select Id, (select AssigneeId from PermissionSetLicenseAssignments) from PermissionSetLicense where MasterLabel = 'Data Mask User' limit 1];
for (PermissionSetLicenseAssign psla : psl.PermissionSetLicenseAssignments) {
    assignedUserIds.add(psla.AssigneeId);
}

PermissionSetLicenseAssign[] saveAssignments = new PermissionSetLicenseAssign[] {};
PermissionSetGroup psg = [select Id, (select AssigneeId from Assignments) from PermissionSetGroup where DeveloperName = 'CXC_All_Assign_to_Admin' limit 1];

// Only active users can be assigned
Id[] userIds = new Id[] {};
for (PermissionSetAssignment psa : psg.Assignments) {
    userIds.add(psa.AssigneeId);
}
Map<Id, User> activeUsers = new Map<Id, User>([select Id, IsActive from User where Id in :userIds and IsActive = true]);
for (PermissionSetAssignment psa : psg.Assignments) {
    if (!assignedUserIds.contains(psa.AssigneeId) && activeUsers.containsKey(psa.AssigneeId)) {
        saveAssignments.add(new PermissionSetLicenseAssign(AssigneeId = psa.AssigneeId, PermissionSetLicenseId = psl.Id));
    }
}
insert saveAssignments;

The initial version did not check whether users are active, which resulted in an error saying the permission set license cannot be assigned to an inactive user. The code also skips users who already have the license, so it can be defensively executed many times in the same org.

I was quite happy with the result.

A couple of days later when I was playing around with ChatGPT, I decided to assign the bot such a task. Here was my question and the bot’s answer:

I was shocked after reading the first couple of sentences. The tone was so confident: the bot gave concrete point-and-click steps for the job, specifically telling you where to find certain settings and which buttons to click. How did I miss such a configuration setting, I wondered? Then I followed the steps all the way to Step 5, where there was no permission set license I could select. I realised the bot didn't know what it was suggesting from that point on.

Then I requested the bot to write some code to do this and here is the result:

I have to say the code is nicely structured with properly named variables, and it even follows typical Apex style: initialise list/set variables first, then query the relevant objects, then perform a DML operation. The comments help you understand what it is doing, although in practice comments that describe the "what" instead of the "why" are generally a bad sign. In the context of the bot, though, they are totally fine, as they show its thought process in plain English.

It is not the right code though. It seems to have mixed up UserPermissionSetAssignment with PermissionSetLicenseAssign, which is the key object linking a permission set license to a user. Note that when a permission set group is assigned to a user, it is done via the PermissionSetAssignment object (there is no "PermissionSetGroupAssignment" object).
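
To make the distinction concrete, here is a hedged illustration (the group developer name is a placeholder, and the snippet mirrors the script above rather than the bot's output): PermissionSetAssignment links users to permission sets and to permission set groups, while PermissionSetLicenseAssign links a permission set license to a user.

// Hedged illustration only.
PermissionSetGroup adminGroup = [
    select Id from PermissionSetGroup where DeveloperName = 'All_Assigned_to_Admin' limit 1
];
// PermissionSetAssignment covers both permission sets and permission set groups
// (note the PermissionSetGroupId lookup; there is no PermissionSetGroupAssignment object).
PermissionSetAssignment[] groupMembers = [
    select AssigneeId from PermissionSetAssignment where PermissionSetGroupId = :adminGroup.Id
];
// PermissionSetLicenseAssign is what actually links a permission set license to a user.
PermissionSetLicense dataMaskLicense = [
    select Id from PermissionSetLicense where MasterLabel = 'Data Mask User' limit 1
];
insert new PermissionSetLicenseAssign(
    AssigneeId = groupMembers[0].AssigneeId,
    PermissionSetLicenseId = dataMaskLicense.Id
);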

I can probably help enhance its learning by clicking the "Thumbs down" icon. It's a powerful tool. Although it cannot replace us doing our work, it can certainly help point us to where to check things. It's a language tool, which means it can't actually log in to a trial org and run experiments (but who knows what the future brings). What I need to do, as a human, is draft some code (possibly starting from the bot's sample), execute it, analyse the errors, and tweak the code over a few more iterations to achieve the goal.

Here is a quick reference ERD for Permission related objects:

Certification – Identity and Access Management Architect

Summary

  • The Salesforce Help documentation is the single source that has enough information to get you to pass the exam.
  • Don’t dismiss the Exam Guide. You won’t find more accurate hints about the questions anywhere else, e.g. The Community (Customer/Partner) section is now (Jan 2023) 18%.
  • Focus on the key concepts: OAuth, SAML, OpenID Connect, Connected App, Authentication vs Authorisation, OAuth 2.0 flows, Single Sign-On flows, etc.
  • Use identity related settings in Salesforce org’s Setup to enhance the learning.

The Single Source

Many people claim this certification is one of the most difficult Salesforce multiple-choice exams, but I don't think it is as difficult as it sounds, even though I did not have much hands-on experience in commercial projects. It is a relatively self-contained topic, with all the details available in a single place: the Salesforce Help documentation.

I spent most of my preparation time (maybe 80% of the time) reading through the help. I usually read it in a layer-by-layer style rather than page by page. The help documentation is organised in a nice tree-like structure. The root node has the first layer of child nodes which show high level sections of the topic, with each section (child node) having a short paragraph of 3-4 sentences to explain it. If I need to know more details of a particular section, I can drill down into that section and it presents the next layer of child nodes which represent the sub-sections.

Exam Guide

Do not dismiss the Exam Guide, as it is always kept up to date. Many blog posts are a few years old, so the weight of each topic could be very different now. For example, the Community (Customer/Partner) section is now (Jan 2023) 18%, which means there are around 11 questions on it, and it certainly felt like more than I expected in the actual exam. I read the exam guide many times. If I wasn't sure what certain concepts in the exam outline meant, I would google the terms and learn that specific concept, but most of the time the topic can be found in the aforementioned help documentation.

Key Concepts

First of all, OAuth is an authorization protocol while SAML is an authentication protocol. My advice is to google the difference between authorization and authentication and read the top 5 pages in the search result.

I often come to these pages for protocol references:

The design of any OAuth 2.0 flow looks more complicated than I initially expected, but most of the back-and-forth communication between the different entities is there to work around various security issues. Every parameter introduced, whether in the redirect URL carrying the authorisation code or in the HTTP POST request for the access token, is there for a reason (see the sketch after the questions below). Ask these questions:

  • Why was the Implicit Grant (User-Agent) flow less secure than, and replaced by, the Authorisation Code flow with PKCE?
  • What’s the difference between Authorisation Code and Access Token?
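
As an illustration of those parameters, here is a minimal anonymous Apex sketch of the two key requests in the web server (authorisation code) flow against a Salesforce org acting as the authorisation server. The consumer key/secret, redirect URI, and code value are placeholders, not real configuration:

// 1. Redirect the user to the authorisation server. The state parameter protects against
//    CSRF; only a short-lived authorisation code comes back on the redirect URL.
String authorizeUrl = 'https://login.salesforce.com/services/oauth2/authorize'
    + '?response_type=code'
    + '&client_id=' + EncodingUtil.urlEncode('YOUR_CONSUMER_KEY', 'UTF-8')
    + '&redirect_uri=' + EncodingUtil.urlEncode('https://example.com/callback', 'UTF-8')
    + '&state=' + EncodingUtil.urlEncode('opaque-anti-csrf-value', 'UTF-8');
System.debug('>>> send the user to: ' + authorizeUrl);

// 2. Exchange the code for an access token over a back channel, where the client can
//    prove its identity with the consumer secret (or a PKCE code_verifier).
HttpRequest tokenRequest = new HttpRequest();
tokenRequest.setEndpoint('https://login.salesforce.com/services/oauth2/token');
tokenRequest.setMethod('POST');
tokenRequest.setBody('grant_type=authorization_code'
    + '&code=' + EncodingUtil.urlEncode('CODE_FROM_REDIRECT', 'UTF-8')
    + '&client_id=' + EncodingUtil.urlEncode('YOUR_CONSUMER_KEY', 'UTF-8')
    + '&client_secret=' + EncodingUtil.urlEncode('YOUR_CONSUMER_SECRET', 'UTF-8')
    + '&redirect_uri=' + EncodingUtil.urlEncode('https://example.com/callback', 'UTF-8'));
HttpResponse tokenResponse = new Http().send(tokenRequest);
System.debug('>>> token response: ' + tokenResponse.getBody());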

For lots of the Single Sign-On related questions, the key thing is to figure out who is the service provider and who is the identity provider. If you can quickly get that clear, you are halfway to an answer.

A big part of the SSO flow is user provisioning.

Pay attention to external identity in communities and its license details, Experience Cloud’s authentication options and branding options, and embedded login.

Identity Related Settings in Setup

There are various settings all over the place in the org's Setup, which can be confusing. My observation is that Salesforce introduced three different SSO mechanisms at different stages. I list them and their key settings in the following table (OpenID Connect seems to be trending these days):

| SSO Settings | Salesforce as SP | Salesforce as IdP |
| --- | --- | --- |
| Delegated Authentication | Single Sign-On Settings; Profile / Permission Set -> check "Is Single Sign-On" | API -> Delegated Authentication WSDL |
| SAML | Single Sign-On Settings | Connected App |
| OpenID Connect | Auth. Providers | Connected App |

Connected App is a key concept to understand. A connected app is a framework that enables an external application to integrate with Salesforce using APIs and standard protocols such as SAML, OAuth, and OpenID Connect. It represents an app that lives outside of Salesforce, so when a Salesforce org has a connected app, the org acts as either the identity provider for SSO or the authorisation server for OAuth. Learn all the settings on the "New Connected App" page. The two main categories of these settings are:

  • Enable OAuth Settings
  • Enable SAML

Auth. Providers is an SSO setting. An auth provider represents an external authentication provider that uses OpenID Connect (Facebook, Google, LinkedIn, etc.). Salesforce acts as the service provider and the external social site acts as the identity provider.

Single Sign-On Settings is for Delegated Authentication and SAML, with Salesforce acting as the service provider. SAML Single Sign-On Settings is a list of settings for SAML configuration. Delegated Authentication is probably the first version of SSO Salesforce introduced, which is reflected by its setting "Disable login with Salesforce credentials" and the "Is Single Sign-On" field on users' Profile or Permission Set.

Other exam relevant but less important settings:

  • Identity Provider
  • Identity Verification
  • OAuth and OpenID Connect Settings: Two less secure OAuth flows (Username-Password and User-Agent)
  • Certificate and Key Management
  • My Domain
  • User Settings
  • Session Settings

The household relationships in Financial Services Cloud

I was recently looking into the Household feature in Financial Services Cloud (FSC). A tree-like structure, nicely presented in a Lightning component, shows all the household members and their related entities.

I couldn't find any documentation on exactly which entities are selected in each tile of the component, so I decided to experiment a bit by querying various objects and comparing the query results with the UI. The following query mapping is purely based on my observation, so the actual code is surely different. The purpose is not to figure out the exact queries used in the implementation; rather, I was intrigued by which entities have been considered in the design when a household is modelled in the FSC context.

Group Members

The top left box shows all group members, which are also presented in the "Group Members" table. They are basically the entities derived from the answers to the following questions in the "Edit Household" dialogue:

  • Who Are the Members of This Group?
  • Do the Members Have Relationships with Other Accounts? 

Some key mappings: 

  • FinServ__PrimaryGroup__c stores whether the household is the person account's primary group.
  • FinServ__Primary__c stores whether the person account is this household's primary member (only one primary member for every household).
  • FinServ__IncludeInGroup__c stores whether the account entity should be added to the group (the checkbox in the "Add to Group" column of the household edit dialogue).

Related Groups

The bottom left box shows all related household groups. They are modelled by the "Account-Account Relation" object.

Note that in the where clause the related account's record type needs to be "Household".

Related Accounts

The middle bottom box shows all related business accounts, e.g. a Family Trust. They are modelled by the "Account-Account Relation" object.

The difference here, compared to "Related Groups" above, is that in the where clause the related account's record type needs to be anything other than "Household".

Related Contacts

The bottom right box shows all related contacts. These are the group members' related contacts, e.g. a Lawyer.

Another perspective

From the person account perspective, the "Relationship" tab of a person account record page shows these categories of data:

  • Households – query from AccountContactRelation 
  • Related Accounts – query from AccountContactRelation 
  • Related Contacts – query from FinServ__ContactContactRelation__c 

An anti-pattern causing the "CPU time limit exceeded" error in batch Apex

I recently came across a "CPU time limit exceeded" error thrown from an Apex class that implements the Batchable interface. Based on the following observations, it was not the typical case caused by excessive Apex logic:

  • There is no stack trace or error details.
  • The error is consistently hit on the first batch of the job.
  • Debug logs in the start method suggest that it runs successfully consuming little CPU time.
  • Debug logs are not even printed at the beginning of the execute method.

This made me think something was going wrong between the start and the execute methods, but that is exactly where I have no visibility or control.

Typically the start method of a Batchable class just queries all records that need to be processed and does not spend much CPU time on business logic, whilst the execute method runs the actual business logic on a single batch of records. The mechanism that chunks all queried records into batches is not exposed at all. But what excessive logic could it possibly be running just to chunk the data?

I examined the SOQL query in the start method as that is the only thing I can think of. It queries an object with IDs that exist in a subset of child objects. Something like this:

public Database.QueryLocator start(Database.BatchableContext BC) {
    String query = 'select Id from Parent__c ' +
        'where Id in (select Parent__c from Child__c)';
    return Database.getQueryLocator(query);
}

For illustrative purposes, I used Parent__c to denote the parent object and Child__c to denote the child object. In the actual org, there are about 40,000 parent records and 40,000 child records. The inner query in the where clause looks alarming: it queries 40,000 records just to evaluate the where clause of the outer query, and the "Id in a set of Ids" condition effectively needs to concatenate 40,000 record IDs into a set. This is where the 10 seconds of CPU time could be spent. Running the following snippet shows that concatenating 40,000 IDs consumes roughly 10 seconds of CPU time (sometimes less):

Id anId = '0032000001Dqw31';
String s = '';
for (Integer i = 0; i < 40000; i++) {
    s += anId + ',';
}
Integer cpu = Limits.getCpuTime();
System.debug('>>> cpu: ' + cpu);

What appears to happen (although it is not documented) is that the SOQL query executes in the start method without the where clause actually being evaluated, so that a query locator can be returned promptly. Then the implicit Apex logic between the start and execute methods evaluates the where clause and retrieves the actual records in chunks.

In summary, the anti-pattern to avoid is an inner query on another large-data-volume object in the where clause of the SOQL in the start method. The simple correction is to query only the single large object in the start method and use filtering logic in the execute method to narrow down to the records that actually need to be processed, as sketched below.
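
Here is a minimal sketch of the corrected shape, reusing the illustrative Parent__c/Child__c names from above (the actual business logic is omitted):

public class ParentBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Query only the single large object; no inner query on another large object.
        return Database.getQueryLocator('select Id from Parent__c');
    }

    public void execute(Database.BatchableContext bc, SObject[] scope) {
        // Filter within the batch: keep only the parents that have at least one child.
        Set<Id> parentIds = new Map<Id, SObject>(scope).keySet();
        Set<Id> parentsWithChildren = new Set<Id>();
        for (Child__c child : [select Parent__c from Child__c where Parent__c in :parentIds]) {
            parentsWithChildren.add(child.Parent__c);
        }
        for (SObject parent : scope) {
            if (parentsWithChildren.contains(parent.Id)) {
                // ... the actual business logic for this parent record
            }
        }
    }

    public void finish(Database.BatchableContext bc) {}
}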

A Salesforce developer’s pair programming experience during the pandemic

Pair programming has varying levels of acceptance in the software industry, and whether developers should do more of it is usually a matter of opinion. The recent experience I had with my colleague Simon produced some promising results. We had a few productive pair sessions over two weeks developing a payments feature in our Claims product. After the first session we could certainly feel the benefits, so we decided to do some more. The COVID-19 pandemic sounds like it would hurt pair programming, as the two developers can no longer sit at one machine, but there is nothing to stop remote pairing. For some people it could even work the other way around, by improving social connections.

Broadly speaking, we started with a "pair development" mindset, as developing a user story requires not only coding but many other tasks. In the first couple of sessions we analysed the data model changes, discussed the system behaviour related to each new object/field, drew UML diagrams, mocked up UI prototypes, and drafted documentation. What we were doing was clarifying ideas, discussing approaches, and coming up with a concrete design both of us would feel comfortable starting the coding with.

Before each session, we agreed on how long this session was going to take and what problems we should focus on solving. When there was remaining time, we just went on to solve the next issue. Here are the tools we used:

  • VS Code and its SFDX plugins – the IDE
  • Live Share – a VS Code plugin to share code (we stopped using it because it auto-launches a diff view)
  • Slack for screen sharing

We mostly used the "Driver and Navigator" style. The driver is the person typing the code at the keyboard. The navigator is the observer who gives directions, shares thoughts, and keeps an eye on the larger issues while the driver is typing. A unique benefit of remote pairing is that the navigator also has a machine and a keyboard, so they can go find quick answers to the driver's questions. How many times do we, as individual programmers, have to pause typing to find the answer to a specific coding issue or search the code base for what a class/method is doing? With such a remote pair, the driver can keep driving smoothly, leaving the blocking questions to the navigator and coming back to them asynchronously.

Pair programming requires intensive focus.

The advantage of pair programming is its gripping immediacy: it is impossible to ignore the reviewer when he or she is sitting right next to you.

Jeff Atwood

One direct benefit of that focus is that the code turned out simpler and the algorithms were implemented faster. Simon and I are both familiar with the payments module code base. We needed to implement an algorithm that does some complex date range partitioning logic. We tried to follow the "driver and navigator" style for a while, but we were still stuck. As we discussed ideas, we found two different strategies for traversing a tree data structure (top-down or bottom-up) using recursive method invocation. We spent 15 minutes writing pseudo-code independently, each of us handling one strategy, and then regrouped to discuss both. In the end we surprisingly even simplified the implementation by removing the need for recursion altogether, and the algorithm looked much simpler. It took us two hours, but it could have easily taken me the whole day if I had worked on it alone.

Another benefit of the intensive focus is removing errors effectively on the go. The errors range from compilation errors to typical copy-and-paste errors, all the way through to business-level edge cases. In another session where I was the driver, after two hours of non-stop coding, I pushed all the code changes to the Salesforce server from VS Code in one go, without a single compilation error. This saved time, as saving code to the server is a non-trivial, time-consuming factor when coding on the Salesforce platform.

Apart from the many known benefits of pair programming, such as knowledge sharing, keeping focus, and code review on the go, I believe teams should encourage more pair development on the Salesforce platform, considering some of the platform-specific challenges like:

  • More declarative approaches (process builder, formulas, flow, etc.) to solve the problem directly or simplify some of the coding tasks.
  • Various limits and considerations – not just governor limits, but also restrictions and design considerations (like the choice between master-detail and lookup relationships, the choice between before and after triggers, the order of execution, etc.).
  • Bulkification, although almost embedded in every Salesforce developer's blood, is sometimes still overlooked or applied incorrectly, even by experienced developers.
  • Platform-specific security issues.
  • Configuration / administration. Things like profile permissions, layout changes, field-level security, and custom settings, if missed in a configuration or deployment, can make it look as if nothing was developed at all.

In a retrospective session, Simon and I reviewed the approach and its costs and benefits. The conclusion is that pair development is vital for collaborative teamwork, it produces high quality deliverables, and it is effective when tackling a large user story.

Melbourne, Australia.

A simple Apex trigger framework

Whether one should use an Apex trigger framework is probably worth a separate discussion, since any abstraction comes at a cost and that cost can outweigh the benefits of a prematurely introduced framework. When little business logic needs to be managed in triggers, the general practice is to keep it simple (KISS). However, Apex trigger frameworks are still discussed in many developer forums and Salesforce programming books, focusing on organising the code to deal with more complex domain problems. Many of these frameworks/patterns focus too much on putting an abstraction over the combinations of the trigger stages (before and after) and the operation types (insert, update, delete, undelete), which normally results in lots of boilerplate code to maintain. Some, introducing several interfaces to implement, are not that inviting to even get started with. This post presents an Apex trigger framework (aka a trigger handler pattern) that aims to separate trigger concerns, reduce programmer errors, and improve modularity while maintaining a simple style.

(The complete, compilable code can be found in this GitHub repo.)

There are these main concerns in Apex triggers:

  • Multiple triggers can be defined for the same object and their execution order is not guaranteed.
  • The before and after stages.
  • Trigger operations: isInsert, isUpdate, isDelete, isUndelete.
  • Individual trigger processes are often change-based, i.e. only executed on certain records that have some change.
  • Individual trigger processes may need to be switched on/off.
  • Trigger logic mostly deals with a domain problem, so the core logic could also be executed elsewhere – such as from an Apex REST API or a batch job.

When multiple triggers are defined for the same object, the code gets harder to debug because developers need to be aware of all of the object's triggers, and since the execution order of those triggers is not guaranteed, multiple before (or after) triggers are in contention with each other, which makes things worse. It's a widely accepted pattern to have one trigger per object. Further to this, keeping triggers thin has the benefit of leveraging Apex classes to organise the trigger logic. The following code shows how an AccountTrigger is written in such a style. It simply delegates its work to the common TriggerHandler class.

trigger AccountTrigger on Account (before insert, before update, 
before delete, after insert, after update, after delete, after undelete) {
    TriggerHandler.handle(TriggerConfig.ACCOUNT_CONFIG);
}

By looking at the class names, one can tell that only ACCOUNT_CONFIG is specific to the AccountTrigger; everything else is common to all triggers. One line per object trigger looks neat. Note that the typical trigger stages, operations, and their corresponding context variables like Trigger.isBefore, Trigger.isInsert, and Trigger.newMap are not a concern here at all.

It's tempting to put an abstraction over the permutations of the stage factor (before and after) and the operation factor (insert, update, etc.), but that results in lots of boilerplate code (how often do you need to handle a before-undelete event?). Quite often the same logic needs to be invoked for both the isInsert and isUpdate operations, e.g. do something when a Status field is changed to "Approved", no matter whether it is a new record created with the "Approved" status or an existing record whose status changed to "Approved". The before and after stages, on the other hand, have their own distinctive purposes. Developers need to think carefully about whether new trigger logic should go into the before or the after trigger; normally the logic belongs in one stage or the other, very rarely in both. Therefore, separating the before and after concerns is more useful for removing design errors. The TriggerHandler class is common to every trigger. It focuses on these two stages and leaves the handling of the operation type to each specific trigger operation. The code is as follows:

/**
 * The common trigger handler that is called by every Apex trigger.
 * Simply delegates the work to config's before and after operations.
 */
public with sharing class TriggerHandler {
    public static void handle(TriggerConfig config) {
        if (!config.isEnabled) return;
        
        if (Trigger.isBefore) {
            for (TriggerOp operation : config.beforeOps) {
                run(operation);
            }
        }
        
        if (Trigger.isAfter) {
            for (TriggerOp operation : config.afterOps) {
                run(operation);
            }
        }
    }
    
    private static void run(TriggerOp operation) {
        if (operation.isEnabled()) {
            SObject[] sobs = operation.filter();
            if (sobs.size() > 0) {
                operation.execute(sobs);
            }
        }
    }
}

Let’s have a look at the TriggerOp interface (“TriggerOperation” is already used by Salesforce). It represents an individual trigger operation that encapsulates some relatively independent business logic.

public interface TriggerOp {
    Boolean isEnabled();
    SObject[] filter();
    void execute(SObject[] sobs);
}

It is important to guard the execution of the logic with a condition – often the operation type, such as Trigger.isInsert or Trigger.isUpdate. The isEnabled() method can also, if needed, merge in other flags to provide in-memory switches that turn the operation on/off, or link to a custom setting or a static resource (see the hedged sketch below). Another concern is that the logic usually should not be applied to every record; normally there should be a check so that only the records with a relevant change are processed. The filter() method forces developers to think about this aspect, because if it is missed it can result in complex recursive trigger calls. If all records need to be processed, the implementation class can simply return the Trigger.new list.
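
As a hedged illustration of that point, an isEnabled() implementation could merge the operation type check with an org-wide switch. The Trigger_Switch__c hierarchy custom setting and its checkbox field below are hypothetical, not part of the framework:

public Boolean isEnabled() {
    // Hypothetical hierarchy custom setting acting as a switch for this operation.
    Trigger_Switch__c switches = Trigger_Switch__c.getInstance();
    Boolean operationOn = switches == null || switches.Account_Operation_B_On__c;
    return operationOn && (Trigger.isInsert || Trigger.isUpdate);
}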

In terms of how the common TriggerHandler handles various different trigger operations, it is the TriggerConfig that addresses these common concerns:

  • The setting to enable/disable the trigger
  • The operations in relation to the before and after stages

The following is the TriggerConfig class, which holds the configurations for the different object triggers. It statically instantiates many TriggerConfig objects, each of which is ready to be used in its own trigger.

/**
 * A singleton class that presents the configuration properties of the individual triggers.
 */
public inherited sharing class TriggerConfig {
    public Boolean isEnabled {get; set;}
    public TriggerOp[] beforeOps {get; private set;}
    public TriggerOp[] afterOps {get; private set;}
    
    public static final TriggerConfig ACCOUNT_CONFIG = new TriggerConfig(
        new TriggerOp[] {new AccountTriggerOps.OperationA()},
        new TriggerOp[] {new AccountTriggerOps.OperationB()});
    // Other object trigger config
    
    private TriggerConfig(TriggerOp[] beforeOps, TriggerOp[] afterOps) {
        this.isEnabled = true;
        this.beforeOps = beforeOps;
        this.afterOps = afterOps;
    }
}

The above code can be further tweaked to dynamically instantiate TriggerConfig records from a JSON static resource so as to further decouple from the individual TriggerOp implementations. See this GitHub repo for more details.
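
For illustration only, a rough sketch of that idea could look like the following. The TriggerConfigs static resource name and its JSON shape are my own assumptions, not necessarily what the linked repo does:

// Assumed JSON in a static resource named 'TriggerConfigs':
// {"Account": {"beforeOps": ["AccountTriggerOps.OperationA"], "afterOps": ["AccountTriggerOps.OperationB"]}}
StaticResource resource = [select Body from StaticResource where Name = 'TriggerConfigs' limit 1];
Map<String, Object> allConfigs = (Map<String, Object>) JSON.deserializeUntyped(resource.Body.toString());
Map<String, Object> accountConfig = (Map<String, Object>) allConfigs.get('Account');

TriggerOp[] beforeOps = new TriggerOp[] {};
for (Object className : (List<Object>) accountConfig.get('beforeOps')) {
    // Instantiate each implementation by name; the class needs a public no-arg constructor.
    beforeOps.add((TriggerOp) Type.forName((String) className).newInstance());
}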

The AccountTriggerOps class is simply a collection of all the TriggerOp implementations related to Account, organised in one top-level class:

public with sharing class AccountTriggerOps {
    public class OperationA implements TriggerOp {
        public Boolean isEnabled() {
            return Trigger.isInsert || Trigger.isUpdate;
        }
        
        public SObject[] filter() {
            return Trigger.new;
        }
        
        public void execute(SObject[] accounts) {
            // validation logic
        }
    }

    public class OperationB implements TriggerOp {
        public Boolean isEnabled() {
            return Trigger.isUpdate;
        }
        
        public SObject[] filter() {
            Account[] result = new Account[] {};
            for (Account newAccount : (Account[]) Trigger.new) {
                Account oldAccount = (Account) Trigger.oldMap.get(newAccount.Id);
                if (oldAccount.Status__c != 'Active' && newAccount.Status__c == 'Active')  {
                    result.add(newAccount);
                }
            }
            return result;
        }

        public void execute(Account[] changedAccounts) {
            Set<Id> statusChangedIds = new Set<Id>();
            for (Account acc : changedAccounts) {
                statusChangedIds.add(acc.Id);
            }
            new AccountChangeStatusBatchable(statusChangedIds).run();
        }
    }

    public class OperationC implements TriggerOp {
        ......
    }

    public class OperationD implements TriggerOp {
        ......
    }
    
}

The context variables (such as Trigger.old and Trigger.newMap) are only referenced directly inside each TriggerOp, because only the individual trigger operation knows under which condition (Trigger.isInsert, Trigger.isUpdate, etc.) its logic should be executed, and that determines which trigger context variables to use.

This framework, if adopted in a managed package, has the potential to be open for extension, i.e. with TriggerOp defined as a global interface. Individual TriggerOp implementation classes can then be specified in a static resource for each TriggerConfig. In theory, custom code in an org that installs the managed package can hook its own trigger operations into the managed package's trigger execution order by specifying the individual TriggerOp(s) to run in a static resource.

In summary, this Apex trigger framework provides these benefits:

  • Allowing each trigger to be individually switched on/off.
  • Allowing each trigger operation to be individually switched on/off.
  • Promoting consideration of which of the before and after stages the logic should belong to.
  • Promoting consideration of which changed records need to be processed.
  • Increased modularity in managing the code.
  • Simple to use (well, subject to the definition of “simple”).

Namespace prefix issues with SObject fields map within managed packages

In many cases, we need to find all fields of an SObject (such as Contact), so we do this:

Map<String, SObjectField> CONTACT_FIELDS_MAP = Schema.SObjectType.Contact.fields.getMap();

This returns a map with each key being a field name and each value being the corresponding SObjectField. Keys are all lower case. Confusion arises over the keys when this code is executed from within a managed package installed in a customer's org. An often asked question is whether the keys contain the namespace prefix (of the managed package) or not. My latest testing shows that from API v34.0 onward, the map contains both managed package custom fields (keyed with the namespace prefix) and local custom fields (keyed without a prefix). Prior to this API version, the map only contains keys without a namespace prefix, so if a local field happens to have the same API name as a managed package field, it is overridden by the managed package field. I did the following test to confirm the differences between API versions.

In a managed package with namespace prefix, create a simple Apex class with API version set to v33.0:

public class NsTest {
    private static final Map<String, SObjectField> CONTACT_FIELDS_MAP = Schema.SObjectType.Contact.fields.getMap();
    
    public static void test() {
        Map<String, SObjectField> m = CONTACT_FIELDS_MAP;
        System.debug('>>> m: ' + m);
    }
}

Execute NsTest.test() in the developer console and the debug log shows the map contains keys without namespace prefix. Change the class’s API version to v34.0 and re-run the script. The debug log shows the map contains keys with namespace prefix (for custom fields).
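
One practical consequence: when the same code might run at different API versions or in different packaging contexts, it can be worth looking up a field defensively under both key forms. A minimal sketch, where the myns namespace and the field name are made up for illustration:

Map<String, SObjectField> contactFields = Schema.SObjectType.Contact.fields.getMap();
// Try the namespaced key first (API v34.0+ inside the package), then fall back to the
// un-prefixed key. Keys in the map are all lower case.
SObjectField statusField = contactFields.get('myns__status__c');
if (statusField == null) {
    statusField = contactFields.get('status__c');
}
System.debug('>>> resolved field: ' + statusField);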

Salesforce Certified Platform Developer I

After more than 5 years of fiddling around with Apex classes and Visualforce components and focusing on general coding principles, I thought it might be good to learn some of the broader Salesforce features that are easily overlooked by developers. So I took this exam over the weekend: Salesforce Certified Platform Developer I. It was a happy result: PASS. It did not tell me what score I achieved though. Here I just want to list some "new" features/points I discovered during the prep week before the exam. Some of these features have existed for years; I had just never paid attention to them.

  • Schema Builder. I cannot remember how many times I have opened different browser tabs for different SObject definition pages to find fields' API names, types, picklist values and lookup relationships to other objects. Schema Builder, just at Setup | App Setup | Schema Builder, is a powerful tool to do all of that in one place. Moreover, you can add, edit and delete fields and objects with simple drag-and-drop, and there is a "quick find" box to search for things. When customers want the schema of your product's data model, just point them to this. The Trailhead module is here.
  • Contacts to Multiple Accounts. The Account lookup on Contact usually means the company the contact is most closely associated with, but contacts might work with more than one company: a business owner might own more than one company, or a consultant might work on behalf of multiple organizations. Any other accounts associated with the contact represent indirect relationships. The Related Contacts list lets you view current and past relationships. The Salesforce object behind this is AccountContactRelation.
  • Quick Deployment. This is a deployment mechanism that rolls out your customizations to production faster by running tests as part of a validation and skipping them in the actual deployment. It is also useful for rehearsing a production deployment before the real one happens. Both change sets and the Ant metadata migration tool support quick deployments. The Trailhead module is here.
  • Change Set. I have known the concept for quite a while but had not used it until I recently got the chance to manage a customer deployment. I have to say it is tedious work – clicking buttons hundreds of times to add the relevant components to the deployment. It is only agile when used together with quick deployment and when deploying a relatively small amount of work. It does track all deployment history.
  • Enforce CRUD and FLS. I knew that when rendering Visualforce pages, the platform automatically enforces CRUD and FLS when the developer references SObjects and SObject fields directly in the page. However, I had always forgotten to enforce CRUD and field-level security when Visualforce pages reference simple string properties that indirectly relate to SObject fields. Expressions like these should be used more often in that case (a short sketch follows after this list):
    Schema.sObjectType.Contact.fields.Phone.isAccessible()
    Schema.sObjectType.Contact.fields.Name.isUpdateable()
    
  • Process Builder. A few of the exam's favourite questions were about problem-solving options, i.e. whether we should use Salesforce declarative process automation features or Apex/trigger code. Apart from formula fields and workflow rules, Salesforce has a strong declarative process automation feature – Process Builder. Its wizard-style interface makes automations easy to build.
  • Triggers and order of execution. The developer guide is here. The exam had more than a couple of questions in this area. It is important to remember when before and after triggers get executed, and when a workflow rule is involved that can update the same record and so recursively fire the triggers, this guide explains the detailed steps of the process.
  • Test Suite. If multiple test classes are selected in a "New Run" from the Developer Console, they run concurrently and can sometimes hit row-locking ("UNABLE_TO_LOCK_ROW") errors. The better way is to create a suite of tests and run the suite; the test classes in the suite are executed sequentially, one by one. A suite is also useful for preparing regression testing.
  • Lightning Components. It is nice to follow the Trailhead to get some hands-on experience when learning Lightning. Even though it is still at an early stage and seems slow at preview start-up, it is the modern way of coding applications – single page and JavaScript MVC.
  • Got to learn more standard Salesforce objects such as Opportunity and Lead. Surprisingly, Account to Opportunity is a master-detail relationship, yet the Account field on Opportunity is not mandatory. There were three questions in the exam about Salesforce standard objects and their relationships.
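
As a quick sketch of the CRUD/FLS point above (the record and field values are purely illustrative), the same describe checks can guard indirect field access in a controller before reading or writing the data:

// Illustrative only: check FLS before exposing or updating Contact.Phone.
Contact contact = [select Id, Phone from Contact limit 1];
if (Schema.sObjectType.Contact.fields.Phone.isAccessible()) {
    System.debug('>>> phone: ' + contact.Phone); // safe to display the value
}
if (Schema.sObjectType.Contact.fields.Phone.isUpdateable()) {
    contact.Phone = '+61 400 000 000'; // illustrative value only
    update contact;
}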

Object alias in SOQL

The object alias used in SOQL can reduce the number of characters in the query string and improve readability. Suppose there are these objects in a parent-child relationship:

  • Parent: ObjectA
  • Child: ObjectB
  • Grand Child: ObjectC

All of these objects have three fields: Field1, Field2 and Field3. A normal SOQL statement that joins these objects from the lowest level of the object graph looks like this:

List<ObjectC__c> cList = [
        select
                Field1__c,
                Field2__c,
                Field3__c,
                ObjectB__r.Field1__c,
                ObjectB__r.Field2__c,
                ObjectB__r.Field3__c,
                ObjectB__r.ObjectA__r.Field1__c,
                ObjectB__r.ObjectA__r.Field2__c,
                ObjectB__r.ObjectA__r.Field3__c
        from ObjectC__c
];

The version with object alias looks like this:

List<ObjectC__c> cList = [
        select
                objC.Field1__c,
                objC.Field2__c,
                objC.Field3__c,
                objB.Field1__c,
                objB.Field2__c,
                objB.Field3__c,
                objA.Field1__c,
                objA.Field2__c,
                objA.Field3__c
        from ObjectC__c objC, objC.ObjectB__r objB, objB.ObjectA__r objA
];

Notice that all the involved objects are specified in the "from" clause. This is a sort of DRYness that makes the SOQL less verbose. It is particularly useful when the number of characters is limited, such as SOQL queries embedded in HTTP GET URLs as part of REST web service calls.